Heiko Claussen is SVP and Co-CTO at AspenTech, leading the company’s artificial intelligence research & technology strategy.
At this point, it’s common knowledge that AI comes with some steep energy demands. Depending on which analysis you believe, the technology already burns through the same amount of electricity annually as some small countries, and that power bill seems poised to grow in the years ahead.
When it comes to power use, though, not all AI is created equal. Although some forms of AI—think of large language models like GPT or text-to-image generators like DALL-E—do require vast amounts of power to train and run at scale, there are other approaches that come with significantly smaller power bills. This includes most forms of industrial AI, which relies on industrial and scientific domain expertise to provide important guardrails that ensure accuracy, reliability and safety.
Although greater efficiency means industrial AI has lower power demands than its more generalized counterparts, that's only one of the benefits it delivers. A growing number of companies rely on the technology to optimize processes, improve decision making and increase energy efficiency in pursuit of their sustainability goals. Those efforts can also translate into significant energy savings and lower power bills as a result.
AI And Carbon Emissions
Although every online activity—whether collaborating with colleagues around the globe or texting a GIF to a friend—uses at least some power, in the case of training large AI models, those demands can quickly run to the extreme. Today, data centers and data transmission networks worldwide each consume about 1% to 1.5% of all electricity, according to estimates from the International Energy Agency (IEA). If the AI boom continues, Goldman Sachs researchers estimate that data center power demand could grow by 160% by the end of the decade, pushing that share to as much as 4%.
The concern for many is that generating the electricity needed to meet those huge power demands also comes with equally large carbon dioxide emissions. Data centers and transmission networks worldwide are already responsible for emitting as much carbon dioxide annually as Brazil, according to the IEA, and that number is expected to rise as AI use grows.
The Industrial AI Difference
Industrial AI, by comparison, is typically more narrowly focused, meaning that it’s more data efficient, and putting it to work requires far less power.
Whereas a generalized large language model like GPT-3.5 was trained on a massive dataset of more than 300 billion words, industrial AI may be limited to data relevant to a particular industrial application. Importantly, though, the smaller size doesn't mean industrial AI is less powerful; a common design philosophy holds that the simplest model that addresses the problem is the best option.
The lower power demand for industrial AI also extends to how it's trained. Training GPT-4, for example, required more than 50 gigawatt-hours of power. That is enough energy to power roughly 5,000 homes, each using approximately 10,000 kilowatt-hours, for an entire year.
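The comparison is simple back-of-the-envelope arithmetic, which a few lines make explicit (the 50 GWh figure and the 10,000 kWh-per-home assumption are the estimates cited above, not measured values):

```python
# Back-of-the-envelope check of the training-energy comparison above.
# Assumptions (estimates, not measurements): ~50 GWh of total training
# energy, and ~10,000 kWh of electricity used by one home per year.
training_energy_kwh = 50 * 1_000_000  # 50 GWh expressed in kilowatt-hours
home_annual_kwh = 10_000              # assumed annual use of one home

homes_powered_for_a_year = training_energy_kwh / home_annual_kwh
print(homes_powered_for_a_year)  # 5000.0
```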
To understand the difference in size between it and industrial AI applications, imagine using a handful of temperature sensors to monitor a particular process. To ensure the process runs correctly, the sensors may be sampled about one time per second. And although those measurements may generate a large amount of data over time, they ultimately include redundancy and represent only a few equipment states. Tracking how that temperature fluctuates over time can be done with a relatively simple and efficient AI model that trains on a standard laptop in seconds.
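The kind of lightweight model described above can be sketched in a few lines. This is a hedged illustration, not a production monitoring system: the sensor readings are simulated, and the "model" is simply the learned normal operating range of the process, with readings outside it flagged as anomalous.

```python
import random
import statistics

# Simulate one hour of a temperature sensor sampled at ~1 Hz.
# In a real deployment these values would come from the plant historian.
random.seed(42)
normal_readings = [80.0 + random.gauss(0, 0.5) for _ in range(3600)]

# "Training" is just estimating the normal mean and spread of the
# process; it runs in well under a second on a laptop, no GPU required.
mean = statistics.fmean(normal_readings)
stdev = statistics.stdev(normal_readings)

def is_anomalous(reading, k=3.0):
    """Flag readings more than k standard deviations from normal."""
    return abs(reading - mean) > k * stdev

print(is_anomalous(80.2))  # False: within normal fluctuation
print(is_anomalous(95.0))  # True: far outside the learned range
```

Despite its simplicity, this captures the point in the text: the data are redundant and represent only a few equipment states, so a tiny, continuously running model is enough.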
The lower power demands for training industrial AI translate into reduced power bills. The relatively smaller models also mean they can continuously run—as in the case of the temperature sensor—while remaining highly efficient and minimizing power demand.
The simple fact is that no company would apply AI to a problem if it didn’t improve results. For many asset-intensive companies, even small improvements can result in significant savings in both costs and emissions. For example, industrial AI can be used to help companies avoid unplanned shutdowns, optimize batch processing, and automate equipment layout, enabling companies to adapt more quickly to business conditions.
Light On The Horizon
Although the power demands for some AI applications are large—and getting larger—the hardware needed to run AI models has not only steadily grown more powerful but has also made significant strides in energy efficiency. Other hardware options, like the use of relatively low-cost GPUs, tensor processing units and, in the future, maybe neuromorphic computing, offer even more speed and efficiency improvements.
As the use of AI continues to grow, more energy-efficient algorithms—like those that use a smaller number of trainable parameters, smart quantization or just focus on more narrow applications, like small language models—are also emerging to maximize both the performance of AI models and their efficiency.
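Quantization, one of the efficiency techniques mentioned above, is easy to illustrate. This minimal sketch stores float32 weights as 8-bit integers plus a scale factor, cutting memory use by roughly 4x at the cost of a small rounding error (the random weights here stand in for a real model's parameters):

```python
import numpy as np

# Post-training weight quantization: represent float32 weights as int8
# values plus one float scale factor, shrinking storage roughly 4x.
rng = np.random.default_rng(0)
weights = rng.standard_normal(1000).astype(np.float32)  # stand-in weights

scale = np.abs(weights).max() / 127.0           # map the weight range onto int8
q_weights = np.round(weights / scale).astype(np.int8)

# Dequantize to check how much precision the compression cost.
dequantized = q_weights.astype(np.float32) * scale
print(weights.nbytes // q_weights.nbytes)       # 4: int8 is 4x smaller
print(float(np.max(np.abs(weights - dequantized))) <= scale)  # True: error bounded
```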
As companies work to navigate the energy transition going forward, AI will play a crucial role. Carefully selecting AI tools, however, must be part of the equation to ensure models can not only deliver insights and improve decision making across industries but do so in a safe and energy-efficient way.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.