SP-1: Quantum monolith
Summary: This pattern describes the direct encoding of classical input data into quantum states, which are then processed by a quantum computer. The output is produced by measuring the resulting quantum states. The inference task is fully delegated to the quantum component; classical components are not involved in inference itself.
Figure 1. Graphical representation of the quantum monolith pattern.
Problem: How can the performance of an AI system be enhanced by maximising the quantum advantage provided by quantum computers?
Solution: In this approach, the quantum computer is solely responsible for inference, while the classical computer handles only the pre-processing and post-processing of data. This pattern is currently the most widely used, serving as a basis for building proofs of concept and seeking quantum advantage in AI systems.
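The encode–process–measure flow of the pattern can be illustrated with a minimal statevector sketch. This is a hand-rolled NumPy simulation, not any particular quantum SDK; all function names (`amplitude_encode`, `monolith_inference`) and the two-qubit circuit are illustrative assumptions:

```python
import numpy as np

def ry(theta):
    """Single-qubit Y-rotation gate RY(theta)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT with the first qubit as control (big-endian qubit ordering).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def amplitude_encode(x):
    """Encode a classical vector as state amplitudes (L2-normalised)."""
    x = np.asarray(x, dtype=float)
    return x / np.linalg.norm(x)

def monolith_inference(x, params):
    """Whole inference on the 'quantum' side: encode -> circuit -> measure."""
    state = amplitude_encode(x)                # 4 amplitudes = 2 qubits
    u = np.kron(ry(params[0]), ry(params[1]))  # parameterised rotation layer
    state = CNOT @ (u @ state)                 # entangling gate
    probs = np.abs(state) ** 2                 # Born rule
    return probs[2] + probs[3]                 # P(first qubit = |1>) as class score

score = monolith_inference([0.2, 0.4, 0.1, 0.3], params=[0.7, 1.1])
print(score)
```

The classical side only normalises the input and reads off a probability; everything between encoding and measurement happens in the (simulated) quantum circuit, which is the defining trait of the monolith.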
Benefits:
- Efficiency. The monolithic architecture typically includes only one encoding layer, or several in cases involving data re-uploading, alongside a single measurement layer. Avoiding repeated time-consuming encoding and measurement procedures, as well as frequent communication between classical and quantum AI components – which is typically managed via a network – reduces both inference and training times.
- Simplicity and deployability. The small number of components, and of classical communication channels between them, makes the system simple to build and deploy.
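The contrast between a single encoding layer and data re-uploading can be sketched on one qubit, in the spirit of re-uploading classifiers: the input is re-encoded before each trainable layer rather than once up front. This is an illustrative NumPy sketch; the function names and the fused `w * x + b` parameterisation are assumptions, not a specific published circuit:

```python
import numpy as np

def ry(theta):
    """Single-qubit Y-rotation gate RY(theta)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def reupload_score(x, weights, biases):
    """Data re-uploading: the scalar input x is re-encoded in every layer,
    fused with trainable weight/bias parameters."""
    state = np.array([1.0, 0.0])            # start in |0>
    for w, b in zip(weights, biases):
        state = ry(w * x + b) @ state       # one encoding + trainable layer
    return abs(state[1]) ** 2               # P(|1>) as the class score

score = reupload_score(0.4, weights=[1.2, 0.8, 1.5], biases=[0.1, 0.3, 0.2])
print(score)
```

With a single (weights, biases) pair this reduces to the one-encoding-layer case; each extra pair adds one re-encoding, trading the pattern's efficiency benefit for greater expressivity.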
Drawbacks:
- Restricted scalability for NISQ computers. For NISQ devices, directly encoding raw data into quantum circuits limits the volume of data that can be processed due to the restricted number of qubits available. Additionally, for the monolithic architecture, NISQ-era computers impose restrictions on quantum circuit depth – the maximum number of quantum gates that can be applied sequentially. This restriction arises from internal hardware noise, which limits qubit coherence times. Note that many existing quantum encoding methods require a number of gates that scales exponentially with the number of input qubits, n (LaRose and Coyle 2020; Mitsuda et al. 2024). This restriction can be somewhat relaxed to O(poly(n)) using the approximate amplitude encoding (Nakaji et al. 2022; Mitsuda et al. 2024).
- Restricted trainability at large scale. The restricted trainability of quantum circuits for machine learning in the monolith architecture is related to the barren plateau issue (McClean et al. 2018). This phenomenon is the exponential vanishing of the gradient of the cost function used to train quantum neural networks as the size of the quantum monolith system increases, impeding the training of quantum AI systems.
Known uses:
- Grant et al. 2018 utilised quantum circuits to perform binary classification of classical and quantum data, showing that more expressive circuits achieve higher accuracy while remaining robust to noise.
- X. Gao et al. 2022 incorporated quantum correlations into generative models, i.e., Bayesian networks, for unsupervised learning tasks.
- Huang et al. 2021 proposed an experimental implementation of a simple quantum monolith framework for quantum generative adversarial networks using a superconducting processor with multiple qubits.