What ZZFeatureMap is doing
A ZZFeatureMap is an entangling circuit built from Pauli-Z evolutions that encodes pairwise correlations between classical data features into a quantum state. Specifically, single-qubit Z rotations encode each feature, and two-qubit ZZ rotations encode pairwise interactions, repeated for some number of layers. The kernel between two data points is the squared overlap of their encoded states.
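A minimal numpy sketch of that construction, small enough to check by hand. It assumes the common data map phi_i = x_i, phi_ij = (pi - x_i)(pi - x_j) (Qiskit's default convention); this illustrates the kernel definition, not the exact pipeline used in the experiments.

```python
import numpy as np
from functools import reduce

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def zz_feature_state(x, reps=2):
    """Statevector of a ZZ-type feature map, built layer by layer.

    Each repetition is a Hadamard on every qubit followed by a diagonal
    phase layer: single-Z phases phi_i = x_i and pairwise ZZ phases
    phi_ij = (pi - x_i)(pi - x_j) (a common convention, assumed here).
    """
    n = len(x)
    dim = 2 ** n
    Hn = reduce(np.kron, [H] * n)  # Hadamard on all n qubits
    # Z eigenvalue of qubit q in basis state idx: +1 for |0>, -1 for |1>
    z = np.array([[1 - 2 * ((idx >> q) & 1) for q in range(n)]
                  for idx in range(dim)])
    theta = z @ np.asarray(x, dtype=float)   # single-qubit Z phases
    for i in range(n):
        for j in range(i + 1, n):
            theta = theta + (np.pi - x[i]) * (np.pi - x[j]) * z[:, i] * z[:, j]
    layer = np.exp(-1j * theta)              # diagonal phase layer
    psi = np.zeros(dim, dtype=complex)
    psi[0] = 1.0                             # start in |0...0>
    for _ in range(reps):
        psi = layer * (Hn @ psi)
    return psi

def zz_kernel(x, y, reps=2):
    """k(x, y) = |<phi(x)|phi(y)>|^2, the squared overlap of encoded states."""
    return abs(np.vdot(zz_feature_state(x, reps), zz_feature_state(y, reps))) ** 2
```

By construction k(x, x) = 1 and k is symmetric, which is a quick sanity check on any kernel-matrix implementation.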
The interesting property is that this kernel is conjectured (not proved) to be classically hard to simulate efficiently. So if it captures correlations that matter for your task and that classical kernels miss, you get a signal classical kernels cannot reproduce, even with infinite training data.
The 8-qubit choice
Eight qubits keeps classical validation cheap: a dense statevector holds 2^n complex amplitudes, so simulation cost doubles with every added qubit, while the transpiled circuit stays shallow enough for reliable execution on a current 156-qubit NISQ device. Larger feature maps either explode classical simulation cost or dissolve into NISQ decoherence. Eight is the practical sweet spot for now.
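The scaling behind that tradeoff is plain exponential statevector growth; a two-line sketch:

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory for a dense statevector: 2**n complex128 amplitudes."""
    return (2 ** n_qubits) * bytes_per_amplitude

# 8 qubits: 256 amplitudes, 4 KiB -- trivial to simulate exactly.
# Each extra qubit doubles the cost; past ~30 qubits you are into
# tens of GiB, and exact simulation stops being a validation tool.
```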
What the cross-testbed result actually showed
SWaT (Singapore University of Technology and Design's water treatment testbed) is well-studied. Almost any reasonable model gets AUC near 0.99. Quantum versus classical: tied. That is the right outcome on a saturated benchmark.
HAI (South Korea's HIL-based Augmented ICS testbed) is harder. Multi-stage attacks, longer-horizon dependencies, more stealth. RBF SVM gets to roughly AUC 0.75. The 8-qubit ZZFeatureMap quantum kernel: 0.8309 with standard deviation 0.050 across folds. Statistically significant gap (p = 0.003). Cross-validated, not best-fold.
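For readers who want the aggregation made explicit: AUC is the probability that a random attack window outscores a random normal window, computable from ranks (the Mann-Whitney statistic). A hedged numpy sketch, assuming continuous anomaly scores (no tie handling):

```python
import numpy as np

def auc(scores, labels):
    """Rank-based AUC: probability that a positive outranks a negative.

    Equivalent to the Mann-Whitney U statistic divided by n_pos * n_neg.
    Assumes continuous scores; ties would need averaged ranks.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Cross-validated reporting: compute auc() once per fold, then report
# mean +/- std across folds -- never the single best fold.
```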
Hardware execution on ibm_fez confirmed physical realisability with circuit depth 76 and 28 CNOT gates after transpilation. Fidelity degraded by approximately 17 to 20 percent relative to ideal simulation, consistent with current gate-error and decoherence budgets.
What I do not claim
This is not "quantum advantage" in the strong sense. The 8-qubit feature map can be classically simulated. The point is that the structural choice (entangling pairwise correlations the way ZZ does) captured something on HAI that RBF could not, and the quantum hardware path is open enough that scaling up to where classical simulation does break down is now the next experiment, not a hypothetical.
I also do not claim general superiority. Try ZZFeatureMap on a saturated benchmark like SWaT and you get a tie. Try it on something where the structure does not match and you can lose. The contribution is the cross-testbed validation framework and a concrete positive result on one of the two harder ICS datasets.
Where this is going
- Longer-horizon dependencies: the next paper extends the 8-qubit work to time series with longer attack windows.
- Shot-noise scaling: the open question is whether the simulated AUC gap survives finite-shot kernel estimation on real hardware.
- Hybrid with classical features: ZZ correlations as one channel of a feature stack, not a replacement for RBF.
- Other domains: smart-grid telemetry and smart-meter anomaly detection (see the IoT, PQ-EDHOC, and quantum-resilient IoT research).
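The shot-noise question can be made concrete with a toy model. In the compute-uncompute scheme, each kernel entry is the probability of measuring all-zeros, so a finite-shot estimate is just a scaled binomial draw. The 0.7 below is a hypothetical noiseless kernel entry for illustration, not a number from the experiments:

```python
import numpy as np

def shot_kernel(true_fidelity, shots, rng):
    """Finite-shot estimate of one kernel entry.

    In the compute-uncompute scheme k(x, y) is the probability of
    measuring the all-zeros bitstring, so S shots yield a
    Binomial(S, k)/S estimate with standard error sqrt(k*(1-k)/S).
    """
    return rng.binomial(shots, true_fidelity) / shots

rng = np.random.default_rng(7)
k_true = 0.7  # hypothetical noiseless kernel entry (illustration only)
for shots in (128, 1024, 8192):
    se = np.sqrt(k_true * (1 - k_true) / shots)  # predicted standard error
    est = shot_kernel(k_true, shots, rng)
    # se shrinks as 1/sqrt(shots); the question is whether the AUC gap
    # between kernels survives this per-entry jitter.
```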
Where this is not going
Quantum kernels do not break public-key cryptography. That work is Shor's algorithm on a cryptographically relevant quantum computer (CRQC), which is the post-quantum cryptography story. These are orthogonal threads, both quantum-relevant. Worth keeping straight.