AI-driven silicon: 43% less power, 30% less delay.
Reinforcement learning, genetic algorithms and Bayesian optimisation steering Cadence and Synopsys synthesis flows toward power-delay sweet spots traditional VLSI tooling cannot reach.
At a glance
01. Why traditional VLSI tooling cannot get IoT right
The performance-per-watt envelope IoT now demands is not reachable with hand-tuned synthesis sweeps. Healthcare, smart cities, agricultural sensing, renewable-energy management: all of them want microcontroller-class power budgets with near-edge-server compute.
Traditional EDA flows were not designed for that. They assume a designer iterating manually through a small parameter space. AI-driven design-space exploration replaces that manual iteration with policy learning over a far larger search space.
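To make the contrast concrete, here is a minimal Python sketch of the evaluate-and-search loop this implies. The knob names and the `evaluate()` stub are illustrative assumptions, not actual Cadence or Synopsys parameters; a real version would drive synthesis plus static timing analysis and parse the reported power and delay.

```python
import random

# Hypothetical synthesis knobs; names are illustrative assumptions,
# not actual Cadence/Synopsys parameters.
PARAM_SPACE = {
    "target_clock_ns": [1.0, 1.5, 2.0, 2.5],
    "max_fanout":      [8, 16, 32, 64],
    "effort":          ["low", "medium", "high"],
    "vt_mix":          ["lvt", "svt", "hvt", "mixed"],
}

def evaluate(params):
    """Stand-in for a synthesis + STA run returning (power_mW, delay_ns).
    A real version would invoke the EDA flow and parse its reports."""
    return random.uniform(1.0, 10.0), random.uniform(0.5, 3.0)

def cost(power, delay, w_power=0.6, w_delay=0.4):
    # Scalarise the two objectives so any search strategy can rank points.
    return w_power * power + w_delay * delay

# Even this toy space has 4 * 4 * 3 * 4 = 192 combinations; a manual sweep
# samples a handful, while a learned search covers the space systematically.
best = min(
    ({k: random.choice(v) for k, v in PARAM_SPACE.items()} for _ in range(20)),
    key=lambda p: cost(*evaluate(p)),
)
print("best random-baseline point:", best)
```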
02. The optimisation stack
Three optimisers, three roles:
- Reinforcement learning for sequential synthesis-and-layout decisions where the action at each step constrains the next. Power-gating and clock-tree placement are the natural fits (a minimal Q-learning sketch follows this list).
- Genetic algorithms for global parameter sweeps where the loss landscape is bumpy and gradient-free methods earn their keep (a GA sketch appears below).
- Bayesian optimisation for the expensive simulator-in-the-loop runs where every evaluation costs real wall-clock time and you need sample efficiency.
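A minimal sketch of the first role: tabular Q-learning over a toy sequential power-gating decision, where each choice changes the state the next choice sees. Every constant (leakage saved, wake-up penalty, adjacency penalty) is invented for illustration; the paper's agent operates on real synthesis-and-layout state, not this toy.

```python
import random

# Toy sequential decision: for each of four blocks, decide whether to insert
# a power gate (1) or leave the block ungated (0). Gating saves leakage but
# costs wake-up delay, and gating two adjacent blocks compounds the timing
# penalty. All constants are invented for illustration.
N_BLOCKS, ACTIONS = 4, (0, 1)
LEAK_SAVED, WAKE_PENALTY, ADJ_PENALTY = 1.0, 0.3, 0.9

def reward(state, action):
    r = (LEAK_SAVED - WAKE_PENALTY) * action
    if action and state and state[-1] == 1:   # previous block also gated
        r -= ADJ_PENALTY
    return r

Q = {}  # Q[(state, action)]; the state is the tuple of decisions so far
alpha, gamma, eps = 0.1, 0.95, 0.2

for _ in range(5000):                         # epsilon-greedy Q-learning
    state = ()
    for step in range(N_BLOCKS):
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda x: Q.get((state, x), 0.0))
        nxt = state + (a,)
        best_next = 0.0 if step == N_BLOCKS - 1 else \
            max(Q.get((nxt, x), 0.0) for x in ACTIONS)
        q = Q.get((state, a), 0.0)
        Q[(state, a)] = q + alpha * (reward(state, a) + gamma * best_next - q)
        state = nxt

state = ()                                    # greedy rollout of the policy
for _ in range(N_BLOCKS):
    a = max(ACTIONS, key=lambda x: Q.get((state, x), 0.0))
    state += (a,)
print("learned gating plan:", state)
```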
All three feed Cadence and Synopsys flows directly, which is the part that matters for adoption: nothing in the proposal asks the designer to leave their existing toolchain.
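And the second role: a bare-bones genetic algorithm over the same kind of knob space as the earlier sketch. The synthetic fitness function stands in for a real synthesise-and-score run; population size, mutation rate, and generation count are placeholder choices, not tuned values from the paper.

```python
import random

# Same illustrative knob space as the earlier sketch.
PARAM_SPACE = {
    "target_clock_ns": [1.0, 1.5, 2.0, 2.5],
    "max_fanout":      [8, 16, 32, 64],
    "effort":          ["low", "medium", "high"],
    "vt_mix":          ["lvt", "svt", "hvt", "mixed"],
}
KEYS = list(PARAM_SPACE)

def fitness(ind):
    # Synthetic bumpy landscape standing in for post-synthesis cost;
    # a real flow would run the tools and scalarise power and delay.
    idx = [PARAM_SPACE[k].index(ind[k]) for k in KEYS]
    return sum((i - 1.5) ** 2 for i in idx) + 2.0 * (idx[0] * idx[2] % 3)

def random_ind():
    return {k: random.choice(v) for k, v in PARAM_SPACE.items()}

def crossover(a, b):
    # Uniform crossover: each knob inherited from a random parent.
    return {k: (a if random.random() < 0.5 else b)[k] for k in KEYS}

def mutate(ind, rate=0.15):
    return {k: (random.choice(PARAM_SPACE[k]) if random.random() < rate else v)
            for k, v in ind.items()}

pop = [random_ind() for _ in range(20)]
for gen in range(30):
    pop.sort(key=fitness)
    elites = pop[:4]                       # keep the best points verbatim
    children = [mutate(crossover(*random.sample(elites, 2)))
                for _ in range(len(pop) - len(elites))]
    pop = elites + children
print("best individual:", min(pop, key=fitness))
```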
03. What the numbers say
"43 percent power, 30 percent delay, 52 percent energy. Those are not paper-only numbers, they are industry-standard simulator outputs on industry-standard flows."
Plus the architecture-level moves: dynamic voltage and frequency scaling for active-mode efficiency, clock gating for switching-power reduction, and AI-based power gating for leakage minimisation. Each compounds with the AI-driven synthesis-stage savings to extend device lifetime in energy-constrained IoT deployments.
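A back-of-envelope illustration of how the three moves compound, using the textbook CMOS relations P_dyn = αCV²f and P_leak = I_leak·V. Every constant below is invented for illustration; none of these numbers are measurements from the paper.

```python
# Back-of-envelope CMOS power model: P_dyn = a*C*V^2*f, P_leak = I_leak*V.
C_EFF_F  = 1e-9    # effective switched capacitance (F), illustrative
I_LEAK_A = 2e-4    # leakage current (A), illustrative

def power_w(activity, v, f_hz, gated_fraction=0.0):
    p_dyn  = activity * C_EFF_F * v**2 * f_hz
    p_leak = I_LEAK_A * v * (1.0 - gated_fraction)
    return p_dyn + p_leak

base = power_w(activity=0.5, v=1.0, f_hz=100e6)

# DVFS: drop to 0.8 V / 60 MHz when the workload allows it.
dvfs = power_w(activity=0.5, v=0.8, f_hz=60e6)

# Clock gating: halve the switching activity of idle logic.
cg = power_w(activity=0.25, v=0.8, f_hz=60e6)

# AI-chosen power gating: cut leakage on 70% of idle blocks.
pg = power_w(activity=0.25, v=0.8, f_hz=60e6, gated_fraction=0.7)

for name, p in [("baseline", base), ("+DVFS", dvfs),
                ("+clock gating", cg), ("+power gating", pg)]:
    print(f"{name:>15}: {p * 1e3:6.2f} mW")
```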
04. Where this fits in the stack
This paper is the silicon layer beneath the oneM2M wire format and the PQC-anchored metering framework. The same dollar of energy budget covers more compute when the silicon is AI-optimised; that is what makes residential-scale quantum-resilient IoT economically plausible.
FAQ: What people ask me about this paper
Q1. Why not just use commercial AI-EDA products?
Q2. Are the reported savings worst-case or best-case?
Q3. How sensitive is the result to PDK choice?
Q4. Does this require custom RTL?
Q5. What is the catch?
CITE: How to cite this paper
BibTeX:
@inproceedings{badami2026vlsi,
  author    = {Shujaatali Badami and others},
  title     = {AI-Optimized VLSI Architecture for Energy-Efficient and Sustainable IoT Systems},
  booktitle = {IEEE ICAIC 2026},
  year      = {2026},
  publisher = {IEEE},
  doi       = {10.1109/ICAIC67076.2026.11395694}
}

IEEE:
S. Badami et al., "AI-Optimized VLSI Architecture for Energy-Efficient and Sustainable IoT Systems," in IEEE ICAIC 2026, 2026, doi: 10.1109/ICAIC67076.2026.11395694.

APA:
Badami, S., et al. (2026). AI-Optimized VLSI Architecture for Energy-Efficient and Sustainable IoT Systems. In IEEE ICAIC 2026. https://doi.org/10.1109/ICAIC67076.2026.11395694

RIS:
TY  - CONF
AU  - Badami, Shujaatali
TI  - AI-Optimized VLSI Architecture for Energy-Efficient and Sustainable IoT Systems
T2  - IEEE ICAIC 2026
PB  - IEEE
PY  - 2026
DO  - 10.1109/ICAIC67076.2026.11395694
ER  -