Smart forecasts in Aurora by Sigholm Forecasts — a topic that easily sparks discussion, whether you work in operations, engineering or business development. In Aurora by Sigholm, forecasts form a crucial pillar on which our operational planning optimisation is built. Without accurate forecasts, you cannot achieve an accurate operating plan.
In the platform we primarily produce three types of forecasts: electricity price forecasts, weather forecasts and load forecasts. The load forecasts mainly concern district heating load, but also include district cooling, steam and various waste heat suppliers. All forecasts serve as input to the operating plan optimisation.
Forecasts in production
When producing 20,000 forecasts per week, it is essential to consider how the process is designed and automated. It is not feasible to calculate them one at a time, since new forecasts enter the queue faster than they can be processed.
Aurora by Sigholm is a cloud-native application that uses the scalability of the cloud to handle all forecasting computations, with forecast jobs scheduled every hour. Since every plant has its own unique forecasts, the flow below is executed in parallel for all plants simultaneously.

The flow is the same for all plants, but each forecast and its underlying model are unique. First, a weather forecast is created. It usually comes from an external service such as SMHI, but some plants choose to adjust the weather forecast, which happens at this stage. Since many load forecasts depend on weather data, we wait for the weather forecast step to finish before generating load forecasts. The load forecasts themselves are calculated in parallel at plants that have more loads than district heating alone. Because we can spin up new processes on demand and compute forecasts in parallel, the entire forecasting flow takes only four seconds. When all load forecasts are complete, a new optimisation is triggered, which in turn updates the operating plan for users in Aurora by Sigholm.
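The per-plant flow can be sketched with Python's asyncio: weather first, then all load forecasts in parallel, with every plant running concurrently. The function names and data below are illustrative stand-ins, not the actual Aurora by Sigholm implementation:

```python
import asyncio

# Hypothetical per-plant forecasting flow; names and steps are
# illustrative, not the production code.

async def fetch_weather_forecast(plant: str) -> dict:
    # In practice this would call an external service such as SMHI,
    # optionally followed by plant-specific adjustments.
    await asyncio.sleep(0)  # placeholder for I/O
    return {"plant": plant, "temperature": [-2.0, -1.5, -1.0]}

async def load_forecast(plant: str, load_type: str, weather: dict) -> dict:
    await asyncio.sleep(0)  # placeholder for model inference
    return {"plant": plant, "load_type": load_type, "values": [110.0, 112.0, 115.0]}

async def run_plant_flow(plant: str, load_types: list[str]) -> list[dict]:
    # Weather first, since the load models depend on it...
    weather = await fetch_weather_forecast(plant)
    # ...then all load forecasts for the plant in parallel.
    forecasts = await asyncio.gather(
        *(load_forecast(plant, lt, weather) for lt in load_types)
    )
    return list(forecasts)

async def main() -> list[dict]:
    # All plants run concurrently; each plant runs its own flow.
    plants = {"Plant A": ["district_heating"],
              "Plant B": ["district_heating", "district_cooling"]}
    results = await asyncio.gather(
        *(run_plant_flow(p, lts) for p, lts in plants.items())
    )
    return [f for plant_results in results for f in plant_results]

forecasts = asyncio.run(main())
print(len(forecasts))  # 3 forecasts: one for Plant A, two for Plant B
```

In a real deployment the `asyncio.sleep` placeholders would be network calls and model evaluations, and the final step would trigger the optimisation.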
Customising the forecasting engine
Different forecasts have different characteristics. Therefore, it is important to have a flexible forecasting engine that can be fine-tuned for each individual plant and load type. For example, district heating load is often linearly correlated with outdoor temperature, making a linear regression model appropriate. District cooling load, however, has a polynomial relationship with outdoor temperature, so linear regression performs poorly there.
In Aurora by Sigholm, we can create new models for different scenarios that focus on specific properties (e.g., temperature), and apply post-processing if needed to achieve the best possible forecast. We can also test different models and methods against each other and use the one with the highest accuracy.
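As a toy illustration of such model selection (not the production engine), one can fit a linear and a polynomial model to the same data and keep whichever scores better on a held-out validation set. Here the synthetic "cooling load" grows roughly quadratically with temperature, so the polynomial model wins:

```python
import numpy as np

# Illustrative model selection on synthetic data: district cooling load
# rises roughly quadratically with outdoor temperature above ~15 °C.
rng = np.random.default_rng(0)
temp = np.linspace(10, 30, 200)                                      # outdoor temperature, °C
load = np.maximum(0, temp - 15) ** 2 + rng.normal(0, 3, temp.size)   # load, MW

# Train on the first 150 points, validate on the last 50 (warmer days).
t_train, t_val = temp[:150], temp[150:]
y_train, y_val = load[:150], load[150:]

candidates = {}
for degree in (1, 2):  # linear vs quadratic fit
    coeffs = np.polyfit(t_train, y_train, degree)
    pred = np.polyval(coeffs, t_val)
    candidates[degree] = np.mean(np.abs(pred - y_val))  # MAE on validation set

best_degree = min(candidates, key=candidates.get)
print(best_degree)  # 2 — the quadratic model wins on this data
```

The same pattern generalises to richer model families and extra features; the key point is that the comparison is automatic and data-driven.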

As a final step after each forecast, we save the forecast values in the database so they can be used in the optimisation or displayed to users. These values represent the “best available” forecast and are overwritten with each new run. To enable evaluation and review of previous forecasts, each forecast is also stored in its original form in an archive. This allows us to answer questions such as: “What was the accuracy of forecasts 24 hours before the actual outcome?”
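The two-tier storage described above can be sketched as a "latest" table that is overwritten each run plus an append-only archive. This is a minimal in-memory sketch of the pattern, not Aurora's actual schema:

```python
from datetime import datetime, timezone

# Illustrative storage pattern: the latest forecast per series is
# overwritten, while every run is also kept in an archive for evaluation.
latest: dict[str, dict] = {}   # series id -> most recent ("best available") forecast
archive: list[dict] = []       # append-only history of all forecast runs

def save_forecast(series_id: str, values: list[float]) -> None:
    record = {
        "series_id": series_id,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "values": values,
    }
    latest[series_id] = record   # overwritten with each new run
    archive.append(record)       # original form, kept for later review

save_forecast("plant_a/district_heating", [110.0, 112.0, 115.0])
save_forecast("plant_a/district_heating", [111.0, 113.0, 114.0])

print(len(latest), len(archive))  # 1 latest entry, 2 archived runs
```

With the archive in place, questions like "how accurate were the forecasts 24 hours ahead of the outcome?" become a simple query over `archive` filtered by creation time.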
Automatic tuning
The forecasting models have a set of parameters that are fine-tuned to achieve the best forecast possible. This step is known as “training”. The training is based on historical data for each plant — the more and the better the historical data, the better the forecasts. Because new data continuously flows in from the plants, we have the opportunity to continuously refine the forecasting model. This is done automatically each week through a process that looks like this:
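Conceptually, one weekly cycle might be sketched as follows: train a candidate on all but the most recent data, evaluate both the candidate and the currently active model on the held-out period, and keep the better one. Function names and the simple linear "training" step are illustrative, not the production implementation:

```python
import numpy as np

# Conceptual sketch of the weekly retraining cycle.

def train_candidate(hist_temp: np.ndarray, hist_load: np.ndarray) -> np.ndarray:
    # Fit a simple linear model load ~ temperature as a stand-in for training.
    return np.polyfit(hist_temp, hist_load, 1)

def mae(model: np.ndarray, temp: np.ndarray, load: np.ndarray) -> float:
    return float(np.mean(np.abs(np.polyval(model, temp) - load)))

def weekly_update(active: np.ndarray, temp: np.ndarray, load: np.ndarray) -> np.ndarray:
    # Train on everything except the most recent week (168 hours),
    # then evaluate both models on that held-out week.
    split = len(temp) - 168
    candidate = train_candidate(temp[:split], load[:split])
    # Only switch if the candidate performs better than the active model.
    if mae(candidate, temp[split:], load[split:]) < mae(active, temp[split:], load[split:]):
        return candidate
    return active

rng = np.random.default_rng(1)
temp = rng.uniform(-10, 15, 1000)
load = 120 - 4 * temp + rng.normal(0, 2, temp.size)  # colder -> higher load
stale_model = np.array([-2.0, 80.0])                 # outdated active model
new_model = weekly_update(stale_model, temp, load)   # candidate replaces it
```

Because new plant data arrives continuously, running this cycle on a schedule keeps each model aligned with current plant behaviour without manual intervention.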

Caption: We only switch to a new forecasting model if it performs better than the currently active model. But how do you evaluate a forecast to determine what is “better”?
Follow-up & evaluation
Common evaluation methods include MAE (Mean Absolute Error), MSE (Mean Square Error), MAPE (Mean Absolute Percentage Error), and others. We most often use MAE in the automatic evaluation. A variation of MAPE, which we call Accuracy, is a useful alternative when comparing different models and when a percentage value is helpful for communication. Sometimes it is easier to reason around a percentage than an “average deviation in MW”.
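For reference, the standard metrics mentioned above compute as follows, with forecast P and outcome U as arrays (a plain NumPy sketch):

```python
import numpy as np

def mae(p: np.ndarray, u: np.ndarray) -> float:
    # Mean Absolute Error: average magnitude of the deviation (e.g. in MW).
    return float(np.mean(np.abs(p - u)))

def mse(p: np.ndarray, u: np.ndarray) -> float:
    # Mean Square Error: penalizes large deviations more heavily.
    return float(np.mean((p - u) ** 2))

def mape(p: np.ndarray, u: np.ndarray) -> float:
    # Mean Absolute Percentage Error: deviation relative to the outcome.
    return float(np.mean(np.abs((p - u) / u)))

forecast = np.array([102.0, 108.0, 121.0])
outcome = np.array([100.0, 110.0, 120.0])
print(mae(forecast, outcome))  # ≈ 1.667
```

Note that MAPE breaks down when outcomes approach zero, which is one reason a span-normalised metric can be preferable for loads that vary widely.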
The accuracy is calculated by taking the absolute difference between the forecast (P) and the outcome (U) and dividing it by the overall span of typical outcome values, defined as the difference between the 95th and 5th percentiles of the outcomes.
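Under that description, the metric might be sketched like this; the exact production formula may differ, and the final "1 minus" step (turning the normalized error into a percentage-style score) is our assumption:

```python
import numpy as np

def accuracy(forecast: np.ndarray, outcome: np.ndarray) -> float:
    """Span-normalized accuracy: 1 minus |P - U| scaled by the outcome span.

    The span is the difference between the 95th and 5th percentiles of
    the outcomes, which makes the metric robust to outliers and comparable
    across forecast types (temperature, price, load). The "1 minus" step
    is an assumption for reporting the result as an accuracy rather than
    an error.
    """
    span = np.percentile(outcome, 95) - np.percentile(outcome, 5)
    normalized_error = np.mean(np.abs(forecast - outcome)) / span
    return 1.0 - normalized_error

outcome = np.array([100.0, 110.0, 120.0, 130.0, 140.0])
forecast = np.array([102.0, 108.0, 121.0, 128.0, 141.0])
print(round(accuracy(forecast, outcome), 3))  # 0.956
```

Because the span comes from the outcomes themselves, the same function can score a temperature forecast in °C and a heat load forecast in MW on the same percentage scale.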

It is, of course, difficult to define a single exact percentage for forecast accuracy, and many methods exist. However, this method works well when a standardised metric is needed across different forecast types (weather, electricity price, district heating load). Looking more closely at the accuracy, we see that it decreases the further into the future the forecast extends. Below is a comparison between SMHI’s temperature forecast and Aurora by Sigholm’s district heating load forecast for Q1 2022:

With a robust system that automatically trains models and scales up when needed, we can confidently leave the forecasting operations to Aurora by Sigholm — allowing us engineers to focus on more exciting tasks, such as experimenting with new techniques and models to further improve accuracy.