Introducing The Startup othermo
Hey there! After some time away I’m back with some news. Since October 2017 I’ve been one of the co-founders of othermo. We are currently working on bringing mid- to large-scale heating systems together with the world of machine learning. The heating systems we are talking about range from student residences to local and district-wide heating networks. With techniques like machine learning we enable those systems to learn an optimized operation based on their configuration, usage and metadata. One of my (many) jobs is to leverage the collected data for analysis and optimization, which will actually be a lot of fun!
Path to othermo
Got interested as a data scientist, got involved as a system administrator, now working on the frontend application.
My part in this whole story is to build the infrastructure necessary to collect and process the data for later analysis and optimization. So far I have used Puppet to build a centralized client-server management for our IoT devices, built a continuous integration pipeline from commit to automated deployment via Docker containers, created our first frontend and API applications, and put together a simple homepage explaining what we do from a customer’s perspective.
Well, that’s a lot actually. Let’s break down that pile of buzzwords and see what it’s all about.
The ground truth
The ground truth for my work is my two coworkers. One supplies me with data from every sensor he can get his hands on, while the other keeps the project organized, the customers informed and the cash flowing. There is a huge number of different manufacturers, protocols and specifications to work through before all that data magically flows to my server endpoint.
To provide an uplink from all sites to our servers we are swimming in the sea of IoT topics. Essentially, we install a data-accumulating computer with a gateway configured to send the data to our servers at regular intervals. This computer is connected to our central Puppet configuration management server, which lets us smoothly deploy any changes and updates to our software and configuration.
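As a rough illustration of the Puppet side, a node definition for one of the gateway computers could look something like this — the hostname, class names and parameters here are made up for the sketch, not our actual module layout:

```puppet
# Hypothetical node definition on the Puppet server.
node 'gateway-site-42.example.com' {

  # Keep the data-collection service installed, configured and running;
  # 'othermo::collector' and its parameters are illustrative names only.
  class { 'othermo::collector':
    upload_endpoint => 'https://ingest.example.com/data',
    upload_interval => '5m',
  }
}
```

On the next agent run, every gateway pulls this catalog and converges to the desired state, so a configuration change in one place rolls out to all sites.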
Continuous Integration Pipeline
I set up different services which are deployed as Docker containers on our servers. These containers are created from images we build on our own Docker registry. The images are built through a continuous integration pipeline that is triggered by incoming commits to the master branch, runs through different tests and finally creates the Docker image. To make sure the server always uses the most recent image, Watchtower is configured to watch for new image versions and automatically deploy them.
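To give an idea of what such a pipeline definition looks like, here is a minimal sketch in GitLab CI syntax — the actual CI system, the test commands, and the registry URL are assumptions on my part:

```yaml
# .gitlab-ci.yml -- sketch of a commit-to-image pipeline.
stages:
  - test
  - build

test:
  stage: test
  script:
    - npm ci        # placeholder test commands
    - npm test

build-image:
  stage: build
  only:
    - master        # images are only built from the master branch
  script:
    - docker build -t registry.example.com/othermo/app:latest .
    - docker push registry.example.com/othermo/app:latest
```

Once the new image lands in the registry, Watchtower on the server picks it up and swaps out the running container.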
Since the applications are all built as Docker images, all team members can easily run and develop them, no matter which operating system they use.
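For example, a single docker-compose file is enough to bring up a service together with Watchtower for the automatic redeploys mentioned above — service names, ports and the registry URL below are placeholders:

```yaml
# docker-compose.yml -- sketch; image names and ports are placeholders.
version: "3"
services:
  app:
    image: registry.example.com/othermo/app:latest
    ports:
      - "8080:8080"

  # Watchtower polls the registry and restarts containers
  # when a newer version of their image appears.
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```

A plain `docker-compose up` then behaves the same on Linux, macOS or Windows.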
Our frontend application is mostly a proof of concept and, for that purpose, composed of many different open-source services and tools. With various patterns, techniques and hacks, these applications are stitched together to form one unified user experience. Although it still requires a lot of work, it is an interesting exercise in micro-service composition.
Our first homepage is a simple statically generated site, built with the open-source static site generator Hugo and the Agency theme as its foundation.
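In Hugo, picking a theme is mostly a matter of a few lines of configuration; a minimal sketch with placeholder values:

```toml
# config.toml -- minimal Hugo site configuration (values are placeholders).
baseURL = "https://othermo.example.com/"
languageCode = "en-us"
title = "othermo"
theme = "agency"   # the Agency theme, placed under themes/agency
```

Running `hugo` then renders the whole site into a folder of static files that any web server can host.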
Any thoughts of your own?
Feel free to raise a discussion with me on Mastodon or drop me an email.