At the ISSCC plenary sessions on Monday, February 16, Dr. Rick Tsai, CEO of MediaTek, spoke about the impact of AI on semiconductor innovation. As shown in the charts below, cloud data center spending is expected to increase by 87% between 2025 and 2030, driving a semiconductor market that is 53% larger by 2029, with an estimated 16% CAGR compared with 9% without the current AI buildout.
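The gap between those two growth rates compounds quickly. A minimal sketch of the arithmetic, using the 16% and 9% CAGRs from the talk (the seven-year horizon here is illustrative, not from the talk):

```python
# Sketch: how a difference in CAGR compounds into a market-size gap.
# The 16% and 9% rates are from the talk; the horizon is an assumption.

def growth_multiple(cagr: float, years: int) -> float:
    """Total growth factor after compounding `cagr` for `years` years."""
    return (1 + cagr) ** years

years = 7  # illustrative horizon
with_ai = growth_multiple(0.16, years)
without_ai = growth_multiple(0.09, years)
gap = with_ai / without_ai - 1  # relative size difference

print(f"with AI buildout: {with_ai:.2f}x")
print(f"without:          {without_ai:.2f}x")
print(f"AI-driven market is {gap:.0%} larger")
```

Even a seven-point difference in annual growth rate yields a market roughly half again as large over that span, which is the shape of the comparison in the charts.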
There are serious constraints on the growth of these AI data centers. I, and many others, have been writing about the demand-versus-supply imbalances in memory and storage caused by the data center buildouts, but these could be relieved within a couple of years by expanding memory and storage production.
The bigger issue is likely the additional infrastructure required to run these data centers, particularly electrical power. The chart below compares estimated data center power requirements against an expected 2% annual expansion in power generating capacity, with and without the estimated AI data center buildout.
As can be seen here, with the AI data center buildout the energy requirement could roughly double each year, and before 2030 data centers would consume all of the world's energy production. That is not sustainable.
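The core of that chart is exponential demand outrunning near-linear supply. A minimal sketch of the crossover logic, where only the 2% capacity growth comes from the talk; the starting share and the doubling rate are assumptions for illustration, not the chart's data:

```python
# Sketch: when fast-growing data center demand overtakes slowly-growing
# generating capacity. The 2% supply growth is from the talk; the 2%
# starting share and yearly doubling of demand are illustrative assumptions.

def crossover_year(demand0: float, demand_growth: float,
                   supply0: float, supply_growth: float,
                   start: int = 2025) -> int:
    """First year in which compounded demand meets or exceeds supply."""
    year, demand, supply = start, demand0, supply0
    while demand < supply:
        year += 1
        demand *= 1 + demand_growth
        supply *= 1 + supply_growth
    return year

# Demand starts at 2% of world generation and doubles yearly (assumed);
# capacity grows 2% per year.
print(crossover_year(0.02, 1.0, 1.0, 0.02))
```

Under these stand-in numbers the lines cross around the end of the decade; the point is that no plausible capacity growth rate keeps up with demand that doubles annually.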
So, what can be done to deliver intelligence at scale cost-effectively and without straining the world's energy resources? Dr. Tsai said that performance per total cost of ownership (Perf/TCO) and performance per watt (Perf/Watt) are key metrics for achieving best-in-class data center efficiency.
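These two ratios can rank the same hardware differently, which is why both matter. A minimal sketch; the talk named only the metrics, so the chip names, fields, and numbers below are all illustrative assumptions:

```python
# Sketch of the two efficiency metrics. All names and numbers are
# illustrative assumptions; the talk defined only the ratios themselves.
from dataclasses import dataclass

@dataclass
class Accelerator:
    name: str
    perf_tflops: float   # sustained performance
    power_watts: float   # board power
    tco_dollars: float   # total cost of ownership over service life

    @property
    def perf_per_watt(self) -> float:
        return self.perf_tflops / self.power_watts

    @property
    def perf_per_tco(self) -> float:
        return self.perf_tflops / self.tco_dollars

a = Accelerator("chip_a", perf_tflops=1000, power_watts=700, tco_dollars=35_000)
b = Accelerator("chip_b", perf_tflops=800, power_watts=450, tco_dollars=30_000)

# Higher is better on both axes; a chip can win one metric and lose the other.
best_per_watt = max([a, b], key=lambda c: c.perf_per_watt)
best_per_tco = max([a, b], key=lambda c: c.perf_per_tco)
print(best_per_watt.name, best_per_tco.name)
```

Here the lower-power part wins Perf/Watt while the faster part wins Perf/TCO, illustrating why co-optimization across the whole system, rather than a single metric, is needed.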
To get the best from these metrics, he said, we need co-optimization across compute and scale-up and scale-out interconnects, accommodation of both industry-standard and proprietary interconnects, heterogeneous computing, and new ways to package electronics.
Memory developments are a crucial element in application-specific processing, such as GPUs. The figure below shows various options for system memory in these applications, including high-bandwidth flash as well as conventional DDR and HBM memory.
For consumer and other AI applications, advanced packaging is very important. It enables the integration of large-die complexes and heterogeneous components, and it requires advanced power delivery to support higher power density and efficiency. System-technology co-optimization across all of these needs is required.
Anirudh Devgan, President and CEO of Cadence, spoke about designing for AI and using AI in design. He pointed out the growing need for AI inference, which will be required to monetize the huge investments in training AI models. The image from his talk below shows estimates of data center power demand for non-AI, AI training, and AI inference workloads, with a 4.5X growth in AI inference power consumption between 2025 and 2030.
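For context, a 4.5X increase over a five-year window implies a steep compound annual growth rate. This simply inverts the compounding formula for the 2025-2030 span from the talk:

```python
# Back out the compound annual growth rate implied by 4.5x growth
# over five years: (1 + cagr)^5 = 4.5, so cagr = 4.5^(1/5) - 1.
growth = 4.5
years = 5
cagr = growth ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 35% per year
```

That is roughly 35% per year, far faster than the growth of either non-AI or AI-training power demand in the same chart.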
Cadence, like other chip design companies, is automating many of its design functions and expanding beyond traditional chips to multi-chip, chiplet, and other types of heterogeneous integration. The company believes it can improve design performance by up to 100 times using more advanced AI. The image below shows Cadence's super agent for digital design.
Cadence says it is improving and creating its own LLMs trained on Cadence chip design data. These models allow significant savings in design time as well as improved power efficiency.
Hope Giles from Apple also spoke about an important initiative to increase training for the next generation of silicon engineers. This is important because the number of electrical engineering students in US schools has been declining for several years, while the complexity of silicon chips has been increasing, as shown in the figure below.
However, this trend may be reversing, based upon the 2023 data shown here. Due to efforts by Apple and partner universities, enrollment in VLSI classes and participation in the tapeout and testing of actual chips have been increasing. This is the result of close collaboration and support, and there are similar programs at many universities across the US.
The 2026 ISSCC plenary talks addressed the challenges electronics faces in meeting AI demand, including using AI to speed up the design of more efficient chips and training the next generation of the chip design workforce.