Elise London is the CTO of Lakeside Software, where she oversees the design and delivery of its digital employee experience platform.
I still remember when graphics processing units (GPUs) burst onto the scene 25 years ago. More computer scientist than gamer myself (unless you count being an avid Tetris player), I watched people flock to the GPU to use it in ways it was never intended: general-purpose parallel computing far beyond graphics.
Why did it find this kind of adoption? Because the GPU excels at high-throughput parallel processing, algorithms with heavy compute needs can run much more efficiently on it. As Stephen Gossett at Built In explains: “Some of the super-complex computations asked of today’s hardware are so demanding that the compute burden must be handled through parallel processing. … The result? Slashed latencies and turbocharged completion times.”
It’s no wonder that AI innovators took advantage of the GPU to handle the massive amounts of data needed to train AI models, including large language models (LLMs). As great as GPUs are, however, computer chips are continuing to transform at breakneck speed.
Enter the neural processing unit (NPU): “a specialized accelerator for AI workloads,” as Anshel Sag explains in Forbes. All the enterprise customers I work with are deciding whether to buy AI PCs with an NPU. All major chip manufacturers and OEMs are preparing their OS and hardware to support AI PCs: Intel, Qualcomm, Nvidia, Dell, Lenovo, HP, Apple—everyone is a contender in the great AI race.
Intel, for example, is developing power-efficient NPUs for sustained AI workloads. Since my company offers the capability to monitor AI PCs that use NPUs (as well as any other kind of device), we’re working with Intel to enable the performance monitoring of AI PCs and to explore how using an AI PC can impact the digital employee experience (DEX).
Based on this experience, here are several considerations for any enterprise that wants to be AI-ready at the edge.
NPU Performance
Unlike GPUs, NPUs draw very little power. That power profile means they don’t drain your battery the way the same workload would on another processing unit. Because they sit on the device at the edge, they also overcome the latency challenge of cloud-based processing.
Maybe most importantly, unlike the central processing unit (CPU) and GPU, NPUs do not detract from the user experience on the device. For example, the GPU renders the images and video you see on your laptop, so graphics will look choppy if it becomes overloaded. NPUs, on the other hand, can run workloads at the edge while a user goes about their day-to-day use of device applications.
NPU Capabilities
NPUs excel at matrix math, the operation at the heart of how machine learning uses and understands data. As such, NPUs enable additional AI processing at the edge that the machine previously couldn’t handle without affecting the user experience.
This capability opens up the door not just for personal use but for distributed computing possibilities.
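To make the matrix-math point concrete, here is a minimal sketch, using NumPy on the CPU purely for illustration, of a single fully connected neural-network layer. The layer sizes and names are hypothetical; the point is that the whole batch reduces to one matrix multiplication, which is exactly the kind of parallel workload an NPU is built to accelerate in hardware.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer for illustration: 8 input features -> 4 output features.
weights = rng.standard_normal((8, 4))
bias = np.zeros(4)

def dense_layer(x: np.ndarray) -> np.ndarray:
    """One fully connected layer: a matrix multiply plus a bias,
    followed by a ReLU activation."""
    return np.maximum(x @ weights + bias, 0.0)

# A batch of 32 samples is processed as a single matrix multiplication --
# the core operation that NPUs (and GPUs) are designed to parallelize.
batch = rng.standard_normal((32, 8))
out = dense_layer(batch)
print(out.shape)  # (32, 4)
```

Every sample in the batch flows through the same multiply-and-activate step at once, which is why this workload parallelizes so well on dedicated silicon.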
Let’s say you’re a big organization with tens or even hundreds of thousands of devices. Maybe you want to take advantage of a huge amount of highly optimized compute power on the devices themselves—which are sitting idle much of the time—instead of processing in the cloud (which we all know adds up in cost faster than anyone would like). NPUs on AI PCs can give end users more computing power than ever at their fingertips.
NPUs And Cybersecurity
Because NPUs can bring massive amounts of compute power to the device level, organizations can mitigate digital risk to some extent.
By leveraging AI-enabled PCs, organizations can run AI models locally, avoiding reliance on the cloud. This shift to edge computing can allow sensitive data to remain within the corporate environment, reducing the risk of data breaches and mitigating potential legal liabilities associated with AI-generated content.
The Future Of NPUs
Needless to say, I was excited to join Intel on stage at CES 2025 to discuss the future of the NPU.
Despite all the excitement, however, there is understandable skepticism about the current value of NPUs. To me, that’s a natural result of being in the early adoption phase.
One reason for skepticism is that little software currently takes advantage of the NPU. I expect this to change as companies buy more AI PCs and enterprise software starts to exploit them. I see particular potential in the security space: McAfee, for example, released a deepfake detection capability that takes advantage of the local compute on AI PCs.
The expansion of NPUs simply means more compute available for more complex workloads. Just as the use of GPUs exploded beyond gaming, NPUs will enable countless use cases, including having a “Clippy” that does what you want it to do. On a more serious note, another use case is having constant security running on the device without impacting the user experience.
We’ll have to wait to see how software companies will continue to take advantage of NPUs.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.