CATALYST at Qarnot

How CATALYST helps Qarnot's distributed data centre (DC) to increase its flexibility

In this article, Nicolas Sainthérant, Innovation Manager at our consortium partner Qarnot, explains how the project helps the Qarnot DC increase its flexibility.

Qarnot has been involved in the CATALYST project to investigate how IT workload management can be used to control energy consumption, and therefore heat production. Indeed, at Qarnot we use computing jobs to produce heat on demand in regular housing buildings. Qarnot owns and operates a fully distributed data centre, which can be seen as a set of edge data centres where the heat is fully reused.

The picture above shows one of the housing buildings in Bordeaux equipped with QH1 computing heaters, one of Qarnot's distributed edge DCs.

Energy consumption in data centres is obviously linked to server activity: firstly to the servers' usage itself, and secondly to keeping the hardware within acceptable environmental conditions, i.e. mainly at low temperature. Indeed, electricity consumption and waste heat production are directly related. This was very well described by Sadi Carnot[1] in his heat-engine theory, which can easily be generalized to the IT industry. The French physicist's second law of thermodynamics, applied to ICT, is presented in the following figure.


In other words, if a server is running, especially a CPU-intensive workload, heat is produced. Almost all of the electricity becomes heat, which is a waste for the IT industry. As a matter of fact, at the server level, managing heat and managing power consumption can be done using the exact same technique: managing IT workload execution.

If the goal is to reduce electrical power consumption, the first possibility is to reduce the servers' usage intensity; the second is simply to stop the IT activity. The direct consequence of reducing power consumption is to reduce the heat produced.

The servers' usage intensity matters but cannot solve the problem entirely. If the DC has control over the hardware, it is possible to lower the CPUs' frequency and reduce performance. This works while maintaining a limited computing capacity, but acceptance depends heavily on the actual workload and SLAs. It is worth noting that even when idle, a server's power consumption is about 30% of its maximum. Because this floor is high, leveraging power consumption through usage intensity remains limited and is only applicable in specific cases.
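The idle-power floor can be made concrete with a toy linear power model. The linear interpolation and the `server_power` helper are illustrative assumptions, not Qarnot's actual model; only the 30% idle figure comes from the text above:

```python
# Simple linear power model illustrating the idle-power floor: even at zero
# utilisation, the server still draws IDLE_FRACTION of its maximum power.
# The linear shape is an assumption; real power curves are hardware-specific.

IDLE_FRACTION = 0.3  # idle draw as a fraction of maximum power (figure from the article)

def server_power(p_max_watts: float, utilisation: float) -> float:
    """Estimate power draw at a given CPU utilisation (0.0 to 1.0)."""
    if not 0.0 <= utilisation <= 1.0:
        raise ValueError("utilisation must be between 0 and 1")
    p_idle = IDLE_FRACTION * p_max_watts
    return p_idle + (p_max_watts - p_idle) * utilisation

# Halving utilisation does not halve power: for a 300 W server,
# full load draws 300 W, half load 195 W, and idle still 90 W.
print(server_power(300, 1.0), server_power(300, 0.5), server_power(300, 0.0))
```

This is why throttling alone cannot shed more than about 70% of a server's peak draw: the remaining idle consumption can only be removed by shutting the machine down.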

We then investigated how to handle power consumption by stopping IT servers. We deployed our pilot in one housing building and used priority levels to simply kill low-priority jobs, which can be seen as spot instances. Using this technique, a site (one of the edge data centres) can announce a certain amount of power consumption that it can actively shed. This works fine but is quite brutal. For Qarnot, the problem is twofold: first, the server is no longer available; second, heat is no longer produced, so the inhabitants' comfort could be affected. This solution works and is quite reactive, but the drawbacks are too significant for it to be applied in every situation.
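The kill-by-priority idea can be sketched as follows. The `Job` structure and the `shed_load` helper are hypothetical illustrations; Qarnot's actual scheduler is more involved:

```python
# Sketch of priority-based load shedding: kill the lowest-priority jobs first
# until the requested power reduction is reached.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    priority: int        # lower value = lower priority = killed first
    power_watts: float   # power drawn while this job runs

def shed_load(jobs: list[Job], target_reduction_watts: float) -> list[str]:
    """Return the names of jobs to kill, lowest priority first."""
    killed: list[str] = []
    freed = 0.0
    for job in sorted(jobs, key=lambda j: j.priority):
        if freed >= target_reduction_watts:
            break
        killed.append(job.name)
        freed += job.power_watts
    return killed

jobs = [Job("render", 1, 120.0), Job("ml-train", 2, 200.0), Job("critical-db", 9, 150.0)]
print(shed_load(jobs, 250.0))  # kills "render", then "ml-train"
```

High-priority workloads are only touched once every lower tier has been exhausted, which is exactly what makes the approach brutal: beyond the spot-instance tier, shedding starts hitting jobs clients actually depend on.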


To mitigate these drawbacks while still acting on DCs' power consumption, the CATALYST members designed and developed a solution to migrate IT workload from DC_1 (the sender) to a remote DC_2 (the receiver), within a DC federation. Using this technique, it is possible to reduce the power consumption of DC_1 and move the corresponding power consumption (and the heat produced) to DC_2.

This component is the CATALYST Migration Controller. Its goal is to leverage the location of Virtual Machines (VMs) to control energy consumption. It establishes a secure connection between DCs and performs a live migration of the VMs. Once the VMs have been transferred, it maintains seamless control for the client. The global deployment diagram is presented hereafter.

This feature is integrated into a marketplace shared among the DCs participating in the federation. The marketplace is responsible for matching IT workload that needs to be relocated with DCs that can offer available servers, and the corresponding energy. To fully benefit from the migration, the sending server must actually be shut down, otherwise its idle power consumption remains. This requires control over the hardware, which is not possible in colocation DCs, for example.

This relocation could be used in many situations. For example, faced with high electricity demand, DC_1 may be asked to reduce its power consumption. To do so, DC_1 can offer IT workload for relocation on the marketplace. Receiving DCs, in turn, can offer hardware capacity for various reasons: surplus renewable energy, or heat demand from a district heating network, for example.
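The matching step can be illustrated with a toy first-fit allocator. The function name, the data shapes, and the first-fit rule are assumptions for the sketch; the real CATALYST marketplace trades bids rather than applying a fixed rule:

```python
# Toy matching of relocation requests against capacity offers in a DC
# federation: assign each workload (kW to relocate) to the first DC that
# still has enough spare capacity.

def match_workloads(requests: dict[str, float],
                    offers: dict[str, float]) -> list[tuple[str, str]]:
    """Return (workload, receiving_dc) pairs; unmatched workloads are dropped."""
    remaining = dict(offers)          # don't mutate the caller's offers
    matches: list[tuple[str, str]] = []
    for workload, kw in requests.items():
        for dc, capacity in remaining.items():
            if capacity >= kw:
                matches.append((workload, dc))
                remaining[dc] = capacity - kw
                break
    return matches

requests = {"vm-batch-A": 5.0, "vm-batch-B": 8.0}
offers = {"DC_2": 10.0, "DC_3": 12.0}
print(match_workloads(requests, offers))
# [('vm-batch-A', 'DC_2'), ('vm-batch-B', 'DC_3')]
```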

After acceptance, the Migration Controller opens a secure connection, moves the IT workload, and records the migration in a blockchain. This record is used for the payment for resource use agreed on the marketplace. The component is also responsible for moving the IT load back to the original DC_1. More technical details about this component can be found in the corresponding deliverable "D3.3: Federated DCs Migration Controller"[1].
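The tamper-evident tracking can be sketched as a minimal hash-chained ledger, in the spirit of the blockchain record described above. This append-only chain is an illustrative stand-in, not the actual CATALYST component:

```python
# Minimal hash-chained ledger: each migration record is linked to the hash of
# the previous entry, so any later tampering breaks verification.

import hashlib
import json

def append_record(chain: list[dict], record: dict) -> None:
    """Append a migration record linked to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = {"record": record, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    chain.append({**payload, "hash": digest})

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; a single altered record invalidates the chain."""
    prev = "0" * 64
    for entry in chain:
        payload = {"record": entry["record"], "prev_hash": entry["prev_hash"]}
        digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

chain: list[dict] = []
append_record(chain, {"vm": "vm-42", "from": "DC_1", "to": "DC_2"})
append_record(chain, {"vm": "vm-42", "from": "DC_2", "to": "DC_1"})  # migrate back
print(verify(chain))  # True
```

Because each payment claim references an entry whose hash depends on every earlier entry, neither party can quietly rewrite who migrated what.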

Regarding the exploitation of this kind of mechanism, we can imagine new business models emerging that mix IT markets with energy markets, where IT workload becomes a commodity like electricity or heat. For Qarnot computing, the project outcomes will be used for edge purposes at the building level, either for local smart-grid energy management such as demand response, or for balancing load between edge sites. Finally, a recent blog article from Google reports that they are investigating techniques similar to those being developed in our CATALYST project[2].





At a glance

  • No: 768739
  • Acronym: CATALYST
  • Title: Converting data centres in energy flexibility ecosystems
  • Starting date: October 1, 2017
  • Duration in months: 36
  • Call identifier: H2020-EE-2017-RIA-IA
  • Topic: EE-20-2017: Bringing to market more energy efficient and integrated data centres