WebControl

The biggest work project I have worked on so far is the Vivreco WebControl: an IoT platform allowing our company to connect our heat pumps and manage them.

The job was to develop the platform from scratch, from the custom heat pump server to the web and mobile apps. The best part was looking for technical solutions and planning the architecture of the project.

A first prototype had already been designed, with a simple custom server for the heat pumps, using MySQL as a queue for the commands and as storage for the responses. A PHP app served the content, at a very slow pace due to this design.

The data volume was around 50 data points per heat pump per minute for the monitoring part, and around 800 different points for the parameters. The latter didn’t need to be saved every minute, only the last seen version and the changes.
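A rough back-of-the-envelope figure: 50 points × 60 minutes × 24 hours is about 72,000 monitoring points per heat pump per day, so the write volume grows quickly with the size of the fleet.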

Microservices

Around the time I started this project, at the beginning of 2018, microservices were all the rage. The issues I had seen in the prototype with data ingestion, data retrieval and the synchronous request/response model, as well as the ease of development around Docker, pushed me to choose this architecture. Designing the whole system was a really interesting part of the job.

The choice of technology, and what the microservices would revolve around, took a while to decide. I wanted the new generation of heat pumps to be able to connect directly to the new system, without a custom server app. The MQTT protocol looked like a strong fit for the task. I reviewed the different well-known brokers, and I already had a liking for the Erlang VM, so VerneMQ was chosen to carry the messages. As the stack was pretty small, I chose to use VerneMQ to pass the asynchronous requests between microservices as well: ingesting the data, sending emails, sending commands to the custom server app (a small sketch of this messaging pattern follows the list below). The rest would be simple HTTP requests between microservices. As the stack grows, we might switch to a message broker better suited to inter-service messaging, such as Redpanda (a Kafka equivalent), Pulsar or RabbitMQ. Today, we have a total of 24 microservices and 23 apps:

  • 3 microservices for the custom server app stack
  • 2 microservices on a Windows Server VM
  • 3 microservices for the front-end
  • 2 different MQTT brokers in bridge mode
  • 16 microservices for the backend, on a dedicated Linux server
  • 5 for the TICK stack (Telegraf, 2 different InfluxDB, Chronograf, Kapacitor)
  • 6 database servers (Redis mostly), used to store users, heat pumps, search, weather reports, parameters…

Everything is deployed as several Docker stacks, managed by Docker Swarm. The solution was simpler than Kubernetes at the time, and fulfilled the need.
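To give an idea of how this MQTT-based messaging between services looks, here is a minimal sketch using paho-mqtt. The topic names, payload fields and broker hostname are assumptions made for the example, not the real ones.

```python
# Minimal sketch of MQTT-based messaging between two microservices.
# Topic names, payload fields and the broker hostname are illustrative only.
import json

import paho.mqtt.client as mqtt
import paho.mqtt.publish as publish

BROKER = "vernemq"  # hypothetical hostname of the VerneMQ broker


def send_command(serial: str, command: str, value) -> None:
    """Producer side: ask the custom server app to pass a command to a heat pump."""
    payload = json.dumps({"command": command, "value": value})
    publish.single(f"commands/{serial}", payload, qos=1, hostname=BROKER)


def run_worker() -> None:
    """Consumer side: subscribe to the commands topic and process messages."""

    def on_message(client, userdata, msg):
        serial = msg.topic.split("/")[1]
        data = json.loads(msg.payload)
        print(f"Heat pump {serial}: {data}")

    client = mqtt.Client()
    client.on_message = on_message
    client.connect(BROKER, 1883)
    client.subscribe("commands/+", qos=1)
    client.loop_forever()


if __name__ == "__main__":
    run_worker()
```

Publishing with QoS 1 gives at-least-once delivery, which is what makes a plain MQTT broker usable as a lightweight asynchronous job queue between services.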

Timeseries

At that time, around 400 heat pumps were connected to the prototype. The data volume was pretty heavy on the MySQL server, which wasn’t optimised: a MyISAM storage engine and a bad composite index. The choice quickly narrowed down to time series databases: CrateDB and InfluxDB. CrateDB wasn’t free at the time, so I tried InfluxDB and settled on it. There would be a data ingest: a small Python service listening on different MQTT topics for incoming data and batch-inserting it into InfluxDB. A Python API service would then serve this data to the front-end.
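As an illustration, here is roughly the shape of such an ingest, as a hedged sketch using paho-mqtt and the InfluxDB 1.x Python client. The topic layout, measurement name and field names are made up for the example.

```python
# Sketch of a time series ingest: listen on MQTT topics, batch-insert into InfluxDB.
# Topic layout, measurement name and field names are assumptions for this example.
import json

import paho.mqtt.client as mqtt
from influxdb import InfluxDBClient  # InfluxDB 1.x client

BATCH_SIZE = 500
batch = []

influx = InfluxDBClient(host="influxdb", port=8086, database="monitoring")


def on_message(client, userdata, msg):
    serial = msg.topic.split("/")[1]      # e.g. "monitoring/<serial>"
    fields = json.loads(msg.payload)      # e.g. {"temp_out": 4.2, "power": 1.3}
    batch.append({
        "measurement": "heat_pump",
        "tags": {"serial": serial},
        "fields": fields,
    })
    if len(batch) >= BATCH_SIZE:
        influx.write_points(batch)        # one batched HTTP write instead of many
        batch.clear()


client = mqtt.Client()
client.on_message = on_message
client.connect("vernemq", 1883)
client.subscribe("monitoring/+")
client.loop_forever()
```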

Shadowstate

A problem with the prototype was the long time needed to retrieve the parameters. The heat pump can only respond to a request every 3 seconds, and with a limited amount of data. Some commands are composed of several requests and can take 12 to 15 seconds to get an answer. I had the idea of saving the last retrieved data and presenting it right away to the front-end while it waits for the fresher version: a “shadowstate” of the heat pump, representing the last known data.

The other problem was how to represent this data. There are around 1300 different parameters today, not all available at the same time, depending on the heat pump model and its configuration, so the data needed to be structured a certain way. The ingest works the same way as for the time series: a slightly larger Python service listens on the topic and compares the incoming data to the last known state, looking for changes. If the data doesn’t exist yet, or if its structure differs due to a configuration change, it creates a schema used by the front-end to shape the data, along with some metadata (min/max values, labels, group structure). Another Python API service serves this data to the front-end and provides the command system.
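Here is a minimal sketch of the shadow-state update idea, assuming the state is stored as JSON in Redis; the key names, payload shape and storage format are assumptions for the example and may differ from the real service.

```python
# Sketch of the shadow-state update: compare incoming parameters to the last
# known state and keep track of what changed. Storing the state as JSON in
# Redis, and the key/field names, are assumptions made for this example.
import json

import redis

store = redis.Redis(host="redis", port=6379, decode_responses=True)


def update_shadow_state(serial: str, incoming: dict) -> dict:
    """Merge fresh parameters into the shadow state and return the changes."""
    key = f"shadowstate:{serial}"
    raw = store.get(key)
    previous = json.loads(raw) if raw else {}

    changes = {
        name: value
        for name, value in incoming.items()
        if previous.get(name) != value
    }

    if set(incoming) - set(previous):
        # New parameters appeared (e.g. after a configuration change): this is
        # where the real service would regenerate the schema and metadata
        # (min/max values, labels, group structure) for the front-end.
        pass

    previous.update(incoming)
    store.set(key, json.dumps(previous))
    return changes
```

The point of the pattern is that the API can serve the stored state immediately, then push only the returned changes once the heat pump has actually answered.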

WebSockets and real-time data

A problem we had with the prototype was several employees working on the same heat pump simultaneously. I implemented WebSockets so a technician can see, in real time, when another technician changes a parameter. We could also use this real-time capability to follow the heat pump data closely: toggling a flag makes the data arrive at a much faster rate, adding points to the charts live. On the front-end side, I modified the Phoenix WebSocket client, as I liked the Channels model and the use of a single WebSocket connection. I then had to follow the same message format and idea on the back-end. At first, the WebSocket server was implemented in Python with asyncio. It acted as a bridge between the MQTT broker and the web app, translating MQTT messages into WebSocket messages following the Phoenix Channels format. It worked, but was too unstable for my taste. I rewrote the service in NodeJS some time later, and it has been running without issues since.
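For reference, here is a rough sketch of what that first Python bridge could look like, using paho-mqtt and the websockets library. The topic mapping and the exact Phoenix-style frame fields are assumptions for the example.

```python
# Sketch of an asyncio bridge: MQTT messages in, Phoenix-Channels-style
# WebSocket frames out. Topic names and the frame layout are illustrative.
import asyncio
import json

import paho.mqtt.client as mqtt
import websockets

CLIENTS = set()  # currently connected WebSocket clients


async def handle_ws(ws, path=None):
    CLIENTS.add(ws)
    try:
        async for _ in ws:  # ignore client messages in this sketch
            pass
    finally:
        CLIENTS.discard(ws)


async def broadcast(queue):
    while True:
        frame = await queue.get()
        message = json.dumps(frame)
        for ws in list(CLIENTS):
            try:
                await ws.send(message)
            except websockets.ConnectionClosed:
                CLIENTS.discard(ws)


async def main():
    loop = asyncio.get_running_loop()
    queue = asyncio.Queue()

    def on_message(client, userdata, msg):
        # paho runs in its own thread: hand the frame over to the event loop safely
        frame = {
            "topic": msg.topic.replace("/", ":"),  # e.g. "hp:1234:data"
            "event": "new_data",
            "payload": json.loads(msg.payload),
            "ref": None,
        }
        loop.call_soon_threadsafe(queue.put_nowait, frame)

    mqttc = mqtt.Client()
    mqttc.on_message = on_message
    mqttc.connect("vernemq", 1883)
    mqttc.subscribe("hp/+/data")
    mqttc.loop_start()  # MQTT network loop in a background thread

    async with websockets.serve(handle_ws, "0.0.0.0", 4000):
        await broadcast(queue)


if __name__ == "__main__":
    asyncio.run(main())
```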

The rest in the next part of this series

That will be it for now; see you soon for the rest of the series! We will talk about creating a VueJS SPA, a hybrid mobile app with Quasar (VueJS and Capacitor), the Python APIs and more!