SP-7: Quantum accelerator
Summary: This pattern employs a quantum component to evaluate a specific, well-defined function within the system. This quantum component, called a quantum accelerator, usually has a classical analog with lower performance, introduces a dependency on operations executed by a quantum computer, and typically possesses no trainable parameters.

Figure 1. Graphical representation of the quantum accelerator pattern.
Problem: Quantum computing promises an exponential advantage for certain tasks, such as linear algebra operations and optimization. The challenge lies in finding practical ways to translate this advantage into better-performing AI systems, while balancing the trade-off between integration cost, performance, deployability, and maintenance.
Solution: Similar to the acceleration of neural networks on classical computing platforms such as CPUs, GPUs, FPGAs, and ASICs, a quantum processing unit can be used to accelerate computations. At the software level, a quantum algorithm that provides a quantum advantage is wrapped in a function and integrated with a classical inference engine. This function is in turn mapped onto a service or microservice offered by a particular quantum hardware provider, which the system reaches via API calls.
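A minimal sketch of this wiring, assuming a hypothetical REST endpoint (QUANTUM_API_URL) exposed by a quantum hardware provider and a NumPy matrix multiplication as the classical fallback; the endpoint name and request schema are illustrative, not a specific vendor API:

```python
import os
import numpy as np
import requests

# Hypothetical endpoint of a quantum accelerator service (illustrative only).
QUANTUM_API_URL = os.environ.get("QUANTUM_API_URL", "")

def accelerated_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Evaluate a fixed, well-defined function (here: matrix multiplication).

    The quantum accelerator is wrapped in an ordinary function, so the
    classical inference engine stays agnostic about where the result comes
    from. The classical analog serves as a fallback.
    """
    if QUANTUM_API_URL:
        try:
            # The accelerator is reached as a (micro)service via an API call.
            resp = requests.post(
                f"{QUANTUM_API_URL}/matmul",
                json={"a": a.tolist(), "b": b.tolist()},
                timeout=30,
            )
            resp.raise_for_status()
            return np.asarray(resp.json()["result"])
        except requests.RequestException:
            pass  # Fall back to the classical analog on any transport error.
    # Classical analog with (potentially) lower performance.
    return a @ b
```

The inference engine then calls accelerated_matmul wherever it would call a classical kernel; swapping the backend requires no change to the surrounding model code.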
Benefits:
- Efficiency. It is anticipated that the quantum advantage offered by quantum algorithms can be translated into practical performance improvements for AI systems.
Drawbacks:
- High latency. Depending on how frequently the accelerated function is evaluated, this pattern may require tight integration of the classical and quantum components as well as intense data exchange between them. When the accelerator is accessed over a network, latency must be taken into account, and parallelization should be employed to keep the system response reasonably fast (see the sketch after this list).
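One common mitigation, sketched below under the assumption that individual accelerator calls are independent, is to batch them and issue them concurrently so that network round trips overlap; it reuses the hypothetical accelerated_matmul wrapper from the Solution sketch:

```python
from concurrent.futures import ThreadPoolExecutor

def accelerated_matmul_batch(pairs, max_workers=8):
    """Issue independent accelerator calls in parallel.

    Network round-trip latency is then amortised across the batch
    rather than paid once per sequential call.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(lambda p: accelerated_matmul(*p), pairs))
```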
Known uses:
- (Cherrat et al. 2024) use quantum circuits to speed up the matrix multiplication routines employed in vision transformer models, and (Hubregtsen et al. 2020) have developed an accelerator for a data-driven function.
- For classical neural networks, (J. Liu et al. 2024) have accelerated the training process, which is based on stochastic gradient descent. They achieve this by employing a variant of the Harrow-Hassidim-Lloyd (HHL) algorithm, an efficient quantum algorithm for sparse matrix inversion that solves the problem in O(log n) time for suitably conditioned n × n sparse matrices.
- (Wang et al. 2022) have improved the training speed of the support vector machine algorithm applied to fraud detection by using quantum annealing solvers. The authors reformulated the problem of obtaining kernel functions for support vector machines as a quadratic unconstrained binary optimisation (QUBO) problem, which was then solved with quantum annealing solvers; a toy sketch of the QUBO form follows this list.
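To make the QUBO formulation concrete, the sketch below encodes a small QUBO as a coefficient matrix and minimises it by brute-force enumeration, a classical stand-in for the annealing solver; the matrix values are illustrative and not taken from (Wang et al. 2022):

```python
from itertools import product
import numpy as np

def solve_qubo_brute_force(Q: np.ndarray):
    """Minimise x^T Q x over binary vectors x in {0, 1}^n.

    A quantum annealer samples low-energy states of this same objective;
    brute force is only feasible at toy sizes but shows the problem shape
    handed to the solver.
    """
    n = Q.shape[0]
    best_x, best_e = None, float("inf")
    for bits in product((0, 1), repeat=n):
        x = np.array(bits)
        e = x @ Q @ x  # QUBO energy: sum over i, j of Q[i, j] * x[i] * x[j]
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Illustrative 3-variable QUBO: diagonal entries are linear terms,
# off-diagonal entries are pairwise couplings.
Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])
print(solve_qubo_brute_force(Q))  # -> (array([1, 0, 1]), -2.0)
```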