For any company pursuing energy or sustainability goals, the first step is to set an energy baseline. Getting this right the first time makes life a lot easier down the road. The last thing you want to do is scrap your baseline and start over once you decide to add new sites.
One approach to baselining involves benchmarking a company’s performance against others in the industry. However, industry benchmarking is very specific to each type of operation and not widely available for the majority of industries.
For example, ENERGY STAR publishes a benchmark for frozen fried potato processing, but not for the processing of other frozen foods. Therefore, this paper focuses on establishing a baseline using a company’s own data, an approach which can be used by all.
A good baseline doesn’t just help companies track savings. Importantly, it can help identify areas for improvement. It can also help companies fend off claims of “greenwashing,” and it can even help secure lower-interest loans – but only if the data is trustworthy and accessible.
To establish a meaningful energy baseline, consider these four fundamentals:
The scope, costs, and benefits of energy monitoring projects can vary widely. There is no generally accepted standard cost of setting up this infrastructure, nor are there widely generalizable data on the savings or return on investment that can be achieved. Therefore it’s crucial that companies define their objectives up front and craft a scope of work according to their goals and budget.
For example, if the goal is simply to track monthly energy consumption to facilitate energy budgeting and reporting, a platform that aggregates utility bills is likely sufficient. This approach requires little to no capital investment. The downside is that utility bill data can have significant lag times and gaps, and it can be difficult to measure the effects of energy conservation measures without further granularity.
If the goal is to provide actionable information to facility operators, a more robust metering infrastructure is required. Here too, an approach can be crafted that meets the capital constraints of the project. Common assets to meter include the main service; plant-generated utilities (e.g., compressed air and steam); and major processes.
The ISO 50001 energy management standard is a helpful reference for determining the most significant energy users within a facility and those with significant potential for improvement. These are the most important points to submeter, and additional submetering can be added as budget allows.
In short, energy metering and monitoring deployments should be scoped in service of a predetermined metric or set of metrics. Energy baseline data should be collected in such a way that it will be comparable to some desired future state in order to properly measure progress.
| Common energy management metrics | Description |
| --- | --- |
| Electricity consumption | Electric energy used in a given period, measured in kilowatt-hours (kWh). This is typically measured at the main meter, meaning electricity consumed from behind-the-meter sources such as rooftop solar is not included in this amount. |
| Energy consumption | Total energy used from all sources in a given period (e.g., electric + gas), reported in a common unit (e.g., kWh, BTUs, or therms). |
| Energy intensity | Energy consumption divided by a normalization factor (e.g., kWh/unit or BTUs/ft^2). |
| Energy spend | Total dollars spent on energy in a given period ($). This accounts for the timing of energy usage and other factors, in addition to the amount of energy used. |
| Energy spend vs. budget | Variance between energy expenses and the budgeted amount in a given period ($). |
| Energy cost intensity | Total dollars spent on energy in a given period per unit of throughput ($/unit). |
| Carbon intensity | Greenhouse gas emissions per unit of throughput, measured in carbon dioxide equivalent (CO2e/unit). This can account for various sources of emissions in addition to energy consumption, such as leakage of methane or fluorocarbon-based refrigerants. |
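To make these definitions concrete, the minimal Python sketch below computes total energy consumption, energy intensity, and energy cost intensity from monthly totals. Every input value is an illustrative placeholder standing in for real billing or meter data.

```python
# Illustrative sketch: computing a few of the metrics above from monthly totals.
# All input values are placeholders; real figures would come from utility bills
# or a metering system.

KWH_PER_THERM = 29.3  # 1 therm of natural gas ~= 29.3 kWh of energy content

electricity_kwh = 120_000      # monthly electricity from the main meter
natural_gas_therms = 4_000     # monthly natural gas
energy_spend_usd = 21_500      # total energy spend for the month
units_produced = 850_000       # monthly throughput (e.g., pounds of product)

# Total energy consumption expressed in a common unit (kWh)
energy_consumption_kwh = electricity_kwh + natural_gas_therms * KWH_PER_THERM

# Production-normalized metrics
energy_intensity = energy_consumption_kwh / units_produced    # kWh per unit
energy_cost_intensity = energy_spend_usd / units_produced      # $ per unit

print(f"Energy consumption: {energy_consumption_kwh:,.0f} kWh")
print(f"Energy intensity:   {energy_intensity:.3f} kWh/unit")
print(f"Cost intensity:     ${energy_cost_intensity:.4f}/unit")
```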
To truly make sense of energy data, companies need to put it in context. Normalization is the process of putting data into context by dividing it by a relevant driver of consumption, such as production, weather, or floor area. This enables valid comparisons.
For example, what if the baseline period coincides with particularly extreme weather? Removing the effects of weather is a common way to normalize baseline data, helping create a more trustworthy comparison between it and future years.
A simple approach is to average multiple years of energy baseline data, reducing the chances that a single year of extreme weather or other disruptions will significantly bias the baseline. A more robust approach may use linear regression to control for variables such as temperature and humidity.
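For illustration, the sketch below shows one way such a regression might be set up in Python using ordinary least squares, with monthly consumption modeled as a function of cooling degree days and production. The figures and variable names are assumptions, not prescriptions.

```python
# A minimal sketch of regression-based normalization, assuming a monthly history
# of consumption, cooling degree days (CDD), and production is available.
import numpy as np

kwh = np.array([310e3, 295e3, 350e3, 420e3, 460e3, 455e3])    # monthly kWh (placeholder)
cdd = np.array([40, 25, 120, 310, 420, 400])                  # cooling degree days (placeholder)
units = np.array([1.9e6, 1.8e6, 2.0e6, 2.1e6, 2.2e6, 2.1e6])  # units produced (placeholder)

# Fit kWh ~ b0 + b1*CDD + b2*units via ordinary least squares
X = np.column_stack([np.ones_like(cdd), cdd, units])
coef, *_ = np.linalg.lstsq(X, kwh, rcond=None)

# Predict what each month "should" have used given its weather and production;
# the residual is the weather- and production-adjusted performance signal.
expected = X @ coef
residual = kwh - expected
print("Coefficients (intercept, per-CDD, per-unit):", coef)
print("Residuals (actual - expected), kWh:", residual.round(0))
```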
Note that for businesses significantly affected by the pandemic, 2020 should not be used as a baseline year, regardless of normalization.
For industrial operations, it’s also critically important to understand energy consumption in relation to production. In other words, it’s fine if energy usage goes up – as long as throughput goes up even more. Measuring production-normalized energy intensity allows businesses to understand the true meaning of their energy consumption.
A proper project scoping exercise should attempt to determine which factors most influence energy consumption. In many industrial environments, production is the most important normalizing factor. However, in facilities with very high HVACR loads, weather normalization may be more significant.
There are many other normalizing factors to consider too, such as square footage, volume, and revenue. Some companies may also want to divide their data by customer or by product. All these comparisons provide a more meaningful understanding of energy usage – one that can actually be used to drive results. It is often necessary to consider multiple metrics at once to get a complete picture of energy performance.
| Common energy normalization approaches | Description |
| --- | --- |
| Multi-year averaging | Averaging multiple years’ data to create a more reliable baseline of energy consumption. |
| Production normalization | Dividing energy usage by a unit of throughput to determine the energy intensity of production. |
| Weather normalization | Dividing energy usage by a unit that quantifies the effects of weather, e.g., Cooling Degree Days. |
| Space normalization | Dividing energy usage by the area of a facility (e.g., ft^2). |
| Volume normalization | Dividing energy usage by the volume of a facility (e.g., ft^3). Commonly used in temperature-controlled facilities. |
| Revenue normalization | Dividing energy usage by the revenue of a product line, facility, or business to determine the efficiency with which energy is used to create value. Commonly used to compare facilities with different production approaches, e.g., those using very different equipment or producing different types of products. |
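As a concrete example of the weather-normalization approach in the table above, the following sketch computes Cooling Degree Days from daily mean temperatures (using the common 65 °F base temperature) and divides consumption by the result. All inputs are placeholders.

```python
# Illustrative sketch: computing Cooling Degree Days (CDD) from daily mean
# temperatures, then a weather-normalized intensity. A 65 °F base temperature
# is a common convention; the temperatures and consumption below are placeholders.
daily_mean_temps_f = [68, 72, 75, 81, 79, 64, 60]  # one week of daily means
BASE_TEMP_F = 65.0

cdd = sum(max(t - BASE_TEMP_F, 0.0) for t in daily_mean_temps_f)

weekly_kwh = 92_000  # placeholder consumption for the same week
kwh_per_cdd = weekly_kwh / cdd if cdd else float("nan")
print(f"CDD: {cdd:.0f}, weather-normalized intensity: {kwh_per_cdd:,.0f} kWh/CDD")
```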
Many companies simply tally up their utility bills to get a snapshot of their energy spend. This approach is simple and inexpensive.
The problem is that this type of data isn’t actionable. It does establish a baseline, but it doesn’t set companies up for success. For example, it doesn’t indicate:
● When energy usage is highest
● When rates and demand charges are highest
Commercial and industrial electric customers don’t get charged a flat rate for electricity the way residential consumers do. Most are on some form of time-of-use rate, whereby rates vary by time of day, week, and/or year. Depending on the contract, these rates may be known well in advance, or they may be published only a day or even 15 minutes ahead of time.
This is because electric utilities’ costs are driven not only by fuel consumption, but by the costs of building and maintaining a system that can meet the highest peaks. Variable rates are one way utilities incentivize companies to use energy during off-peak times. Similarly, demand charges penalize companies for large spikes in their energy use and for using energy during periods of grid stress.
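To illustrate how these two cost drivers interact, the sketch below prices a handful of 15-minute intervals under an assumed two-period tariff with a demand charge. The rates, peak window, and load profile are placeholders; actual tariffs vary widely by utility and contract.

```python
# A minimal sketch of how time-of-use rates and demand charges combine,
# under an assumed two-period tariff. All rates and intervals are placeholders.
interval_kw = [850, 900, 1450, 1500, 1480, 950]   # average kW per 15-minute interval
INTERVAL_HOURS = 0.25

def interval_is_on_peak(i: int) -> bool:
    """Placeholder peak definition: intervals 2-4 fall in the on-peak window."""
    return 2 <= i <= 4

ON_PEAK_RATE = 0.18    # $/kWh (placeholder)
OFF_PEAK_RATE = 0.09   # $/kWh (placeholder)
DEMAND_CHARGE = 22.0   # $/kW of the highest 15-minute demand (placeholder)

energy_cost = sum(
    kw * INTERVAL_HOURS * (ON_PEAK_RATE if interval_is_on_peak(i) else OFF_PEAK_RATE)
    for i, kw in enumerate(interval_kw)
)
demand_cost = max(interval_kw) * DEMAND_CHARGE

print(f"Energy cost:  ${energy_cost:,.2f}")
print(f"Demand cost:  ${demand_cost:,.2f}  (set by the single highest interval)")
```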
These incentive structures are intended to reduce overall system costs. In some cases, utilities or other parties provide email notifications of expected prices and events, but industrial-scale operations often struggle to respond with sufficient speed, and the programs can become extremely complex to manage for companies with many facilities across multiple utility territories.
It can be very helpful during the energy baselining process to understand when rates are spiking or penalties are being applied. It may make sense to schedule maintenance during these times, or even shut down entirely. Enabling this kind of flexibility can be very lucrative, especially for refrigerated facilities, which can often pre-cool in order to curtail load during peak times without affecting product quality or throughput.
In cases where throughput may in fact be compromised, it’s especially helpful to have real-time and production-normalized data so that the expected benefits from curtailment can be compared to the expected costs of lost production.
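That comparison can be as simple as the back-of-the-envelope sketch below, in which every figure is an assumed placeholder; in practice the inputs would come from real-time metering and production data.

```python
# Illustrative arithmetic for the comparison described above: avoided energy
# and demand costs from curtailing versus the margin lost if throughput drops.
# Every number here is a placeholder assumption.
curtailed_kw = 600                   # load shed during a 3-hour peak event
event_hours = 3
peak_rate = 0.45                     # $/kWh during the event
avoided_demand_charge = 600 * 22.0   # $, only if the event would have set the monthly peak

lost_units = 1_200                   # production lost during the event, if any
margin_per_unit = 0.35               # $ contribution margin per unit

benefit = curtailed_kw * event_hours * peak_rate + avoided_demand_charge
cost = lost_units * margin_per_unit
print(f"Expected benefit: ${benefit:,.0f}, expected cost: ${cost:,.0f}")
print("Curtail" if benefit > cost else "Keep running")
```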
By setting up their own real-time data collection system, companies also get a much clearer picture of their own patterns, such as on-shift versus off-shift usage. This helps flag areas for improvement. Large spikes in consumption can point the way to targeted improvements, whereas high usage across the board may point to a more capital-intensive strategy. Similarly, persistently higher demand during the first shift may reveal behavioral changes that can be implemented at no cost.
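As one illustration of this kind of analysis, the sketch below splits interval readings into on-shift and off-shift usage under an assumed first-shift schedule; the schedule, timestamps, and readings are all placeholders.

```python
# A minimal sketch of splitting interval data into on-shift vs. off-shift usage,
# assuming 15-minute readings timestamped in local time and a first shift
# running 06:00-14:00 on weekdays. The schedule and readings are assumptions.
from collections import defaultdict
from datetime import datetime

readings = [  # (timestamp, kWh in interval) - placeholder data
    (datetime(2024, 3, 4, 5, 45), 55.0),
    (datetime(2024, 3, 4, 6, 0), 210.0),
    (datetime(2024, 3, 4, 13, 45), 205.0),
    (datetime(2024, 3, 4, 22, 0), 60.0),
]

def shift_label(ts: datetime) -> str:
    """Label a reading as first-shift or off-shift under the assumed schedule."""
    if ts.weekday() < 5 and 6 <= ts.hour < 14:
        return "first shift"
    return "off shift"

usage_by_shift = defaultdict(float)
for ts, kwh in readings:
    usage_by_shift[shift_label(ts)] += kwh

for label, kwh in usage_by_shift.items():
    print(f"{label}: {kwh:,.1f} kWh")
```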
Even utility meters with Advanced Metering Infrastructure (AMI) data available are not always a good substitute for real-time monitoring systems. AMI systems or “smart meters” are typically installed by utilities to make meter reading easier, and they can provide data to energy users in 15-minute increments. This is far more granular than a monthly utility bill, but it is still often insufficient for effective energy management, primarily because of the lag time in receiving data.
Facilities looking for actionable information, e.g., to optimize processes or take advantage of load-shifting opportunities, typically find that they need even more granular and more timely information. It is this real-time data that allows operators to be proactive in driving down energy intensity and costs.
Most energy management systems monitor power usage at the main meter. This data should never be overlooked, but for industrial operations, it’s not enough.
First, it’s typically a smart move to install a redundant meter alongside the utility’s. The reason is simple: billing errors can happen, and large energy users will want to double check. A good meter can also uncover power factor and overvoltage issues that can cost money and damage equipment.
Then, it’s especially useful to install submeters to monitor the most energy-intensive processes within a facility. Baselining specific systems in addition to whole facilities allows for a much more targeted energy management approach. Baselining the real-time Coefficient of Performance of a refrigeration system, for example, enables true performance management at a level that operators can act on and be held accountable for.
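For example, a real-time COP metric might be derived as in the sketch below, dividing measured cooling delivered by the submetered electrical input of the major refrigeration components. The load and power figures are illustrative assumptions.

```python
# Illustrative sketch of a real-time Coefficient of Performance (COP) metric for
# a refrigeration system: useful cooling delivered divided by the electrical power
# drawn by compressors, condensers, and evaporator fans. Inputs are placeholders.
KW_THERMAL_PER_TON = 3.517  # 1 ton of refrigeration = 12,000 BTU/h ~= 3.517 kW thermal

cooling_load_tons = 420.0     # measured refrigeration load (placeholder)
compressor_kw = 310.0         # submetered electrical inputs (placeholders)
condenser_kw = 45.0
evaporator_fan_kw = 38.0

electrical_kw = compressor_kw + condenser_kw + evaporator_fan_kw
cop = (cooling_load_tons * KW_THERMAL_PER_TON) / electrical_kw
print(f"System COP: {cop:.2f}")
```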
It’s important to note that when defining what to submeter, boundaries should be drawn around processes rather than equipment. Blast freezing, for example, involves not only the blast freezer unit but also the refrigeration system, the air circulation system, and the conveyors. Submetering should account for all major components involved in the process and should feed into one or more of the metrics defined above, such as production-normalized energy intensity.
As discussed above, submetering projects must be scoped according to each company’s needs and budget, and consideration must also be given to the systems that are already in place. Some existing meters and control systems are already IoT-enabled, and these should be leveraged wherever possible. In many cases, additional meters, sensors and IoT gateways are required to get data to energy monitoring systems by way of industrial communications protocols such as MODBUS.
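As a rough illustration of what that data path can look like, the sketch below reads a power value over Modbus and decodes it from two 16-bit registers. The read_holding_registers function is a hypothetical stand-in for whatever client library or gateway API is in use, and the register address and float encoding are assumptions that vary from meter to meter.

```python
# A minimal sketch of pulling a power reading from a meter over Modbus and
# decoding it. `read_holding_registers` is a hypothetical stand-in for a Modbus
# client call; the register address and encoding (two 16-bit registers forming
# a big-endian 32-bit float) are assumptions that depend on the specific meter.
import struct

def read_holding_registers(address: int, count: int) -> list[int]:
    """Hypothetical stand-in for a Modbus client call; returns raw 16-bit registers."""
    return [0x43FA, 0x8000]  # placeholder raw data (decodes to 501.0)

def decode_float32(registers: list[int]) -> float:
    """Combine two 16-bit registers into a big-endian IEEE-754 float."""
    raw = struct.pack(">HH", registers[0], registers[1])
    return struct.unpack(">f", raw)[0]

regs = read_holding_registers(address=3054, count=2)  # placeholder address for total active power
print(f"Active power: {decode_float32(regs):.1f} kW")
```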
Off-the-shelf hardware is available for most applications, but care must be taken to design a system that meets the needs of the project and creates an accessible and coordinated data repository. A fractured monitoring system that sends data to multiple places or in multiple formats can become quite burdensome to manage. Many energy monitoring and management systems are designed only for specific equipment, so it’s important to ask vendors about interoperability and extensibility.
Interoperability is the capacity of a computer system or software to communicate directly with various other systems. This is important because many systems are proprietary, meaning they only read data from certain devices (often only those manufactured by the creators of the software). Relatedly, extensibility refers to a software system design that allows it to easily add new capabilities or integrations in the future. The energy management landscape is rapidly changing, so systems should be designed with flexibility in mind.
Creating a holistic monitoring system requires knowledge of energy management best practices as well as a practical understanding of the equipment being monitored, the metering and IoT infrastructure, and software design and configuration. Although some companies have all these skills, they are often siloed in multiple departments that are not accustomed to collaborating with each other. Experienced partners can bring these competencies together and help bridge cultural gaps within the organization.
Establishing an energy baseline can be done on almost any budget. A simple tally of utility bills can be an acceptable solution for companies looking to measure and report their consumption. However, this approach doesn’t provide actionable information to help companies drive down costs and emissions, nor does it allow for robust comparisons or for easy measurement and verification of the impact of energy conservation measures.
To accomplish those goals, companies would be well-served to define their performance metrics up front; normalize the data in ways that make sense for their operations; set up real-time monitoring; and submeter specific energy-intensive processes.
Not all of these fundamentals must be followed in every case. The right approach will be different for every company and every situation. That said, there are three main reasons to get energy baselining right the first time:
● Usefulness
● Scalability
● Accessibility
A behind-the-meter data collection system with proper normalization is simply more useful than a tally of bills. It’s not strictly required for establishing an energy baseline, but it sets companies up for success as they do the real work of driving down energy costs. It points the way to the highest-return investments, and it makes it easier to find and analyze best practices across a portfolio of facilities.
A robust metering and monitoring system also makes life easier down the road. Reporting becomes instantaneous, and new sites can easily be added without having to re-baseline or re-normalize. The last thing companies want to do is scrap their baseline and start over each time a new facility is built or acquired.
Finally, with a robust data collection system, primary-source data is accessible to anyone who needs it. This will become even more important for public companies – and anyone who does business with them – in an era of mandatory carbon disclosure. In particular, Scope 3 greenhouse gas reporting requirements will necessitate a whole new way of thinking about data accessibility.
Scope 1 emissions are those generated on-site, such as from burning natural gas and diesel. Scope 2 emissions are from procured electricity, and Scope 3 emissions include all the other life-cycle emissions an operation generates, ranging from supplier emissions to corporate air travel to downstream product use and disposal.
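For illustration only, the sketch below rolls annual fuel and electricity totals up into Scope 1 and Scope 2 figures. The emission factors are placeholder variables; actual reporting would use published factors for the specific fuels and grid region (e.g., from the EPA or eGRID).

```python
# Illustrative sketch of rolling energy data up into Scope 1 and Scope 2 totals.
# The emission factors below are placeholders; real reporting would use published
# factors for the relevant fuels and grid region (e.g., EPA / eGRID).
natural_gas_therms = 48_000        # annual on-site fuel use (Scope 1 source, placeholder)
diesel_gallons = 2_500             # placeholder
electricity_kwh = 1_400_000        # annual purchased electricity (Scope 2, placeholder)

KG_CO2E_PER_THERM = 5.3            # placeholder factor
KG_CO2E_PER_GALLON_DIESEL = 10.2   # placeholder factor
KG_CO2E_PER_KWH_GRID = 0.4         # placeholder location-based grid factor

scope1_t = (natural_gas_therms * KG_CO2E_PER_THERM
            + diesel_gallons * KG_CO2E_PER_GALLON_DIESEL) / 1000
scope2_t = electricity_kwh * KG_CO2E_PER_KWH_GRID / 1000
print(f"Scope 1: {scope1_t:,.0f} t CO2e, Scope 2: {scope2_t:,.0f} t CO2e")
```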
With the recent passage of California’s Climate Corporate Data Accountability Act, large U.S. companies doing business in California will now be required to measure and report emissions across all three scopes. Similar laws recently passed in the European Union will require full-scope emissions disclosure from all companies that do significant business in the EU.
This level of reporting will require massive new amounts of data collection and sharing. A well-defined energy monitoring system should help distill these complex data streams into relevant and easily understandable metrics. It should also enable valid comparisons among time periods, facilities, and even companies. Lastly, it should enable granular analysis, troubleshooting, and optimization of energy consumption, all without muddling the view for those who only need to see the bottom line.