
QAOA

Quantum Approximate Optimization Algorithm on GPU-simulated quantum circuits. The canonical benchmark for quantum-inspired QUBO solvers.

What is QAOA?

QAOA (Quantum Approximate Optimization Algorithm, Farhi et al. 2014) is a variational quantum algorithm for combinatorial optimization. It prepares a parameterized quantum state by alternating between a cost Hamiltonian (encoding the QUBO objective) and a mixer Hamiltonian (enabling transitions between basis states). The circuit has p layers (depth p); deeper circuits generally yield better approximation ratios, but classical simulation cost — already exponential in qubit count — grows with every added layer.
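The alternating structure is easiest to see in a toy statevector simulation. The sketch below is plain NumPy, not NEROX's GPU backend: it runs depth-p QAOA for MaxCut on a 4-node cycle; the graph, grid resolution, and helper names are illustrative.

```python
import numpy as np

# Toy depth-p QAOA for MaxCut on a 4-node cycle graph.
# Illustrative sketch only -- not NEROX's GPU backend.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
states = np.arange(2 ** n)
bits = (states[:, None] >> np.arange(n)) & 1
# cut[x] = number of edges crossing the partition encoded by bitstring x
cut = sum((bits[:, i] != bits[:, j]).astype(float) for i, j in edges)

def qaoa_state(gammas, betas):
    """Alternate the diagonal cost unitary and the single-qubit X mixer."""
    psi = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)   # uniform |+>^n
    for g, b in zip(gammas, betas):
        psi = np.exp(-1j * g * cut) * psi                 # cost unitary e^{-i*gamma*C}
        for q in range(n):                                # mixer: Rx(2*beta) on qubit q
            psi = np.cos(b) * psi - 1j * np.sin(b) * psi[states ^ (1 << q)]
    return psi

def expected_cut(gammas, betas):
    psi = qaoa_state(gammas, betas)
    return float(np.real(cut @ (np.abs(psi) ** 2)))

# At zero angles the state stays uniform: expectation = average cut
print(expected_cut([0.0], [0.0]))   # → 2.0
# A coarse p=1 grid search already beats the random-assignment baseline
grid = np.linspace(0, np.pi, 40)
best = max(expected_cut([g], [b]) for g in grid for b in grid)
```

Note the mixer step: `Rx(2β)` on qubit q mixes each amplitude with its bit-flipped partner, which is what lets probability flow between candidate solutions.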

NEROX simulates QAOA circuits on GPU using statevector simulation for small instances (≤ 28 qubits) and tensor network contraction for medium instances (28–64 qubits). Classical variational parameter optimization uses L-BFGS-B.
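The classical side of that loop is ordinary bounded quasi-Newton optimization over the 2p angles. A minimal sketch of the pattern using SciPy's L-BFGS-B — the objective here is a synthetic stand-in for the negated expected cut, where the real objective would run the simulated circuit:

```python
import numpy as np
from scipy.optimize import minimize

def neg_expected_cut(params):
    # Synthetic stand-in for -<psi(gamma, beta)|C|psi(gamma, beta)>.
    # In practice this would execute the circuit simulation and return
    # the negated expectation of the cost Hamiltonian.
    gamma, beta = params
    return -(2.0 + np.sin(2 * gamma) * np.sin(2 * beta))

# p=1 means two variational angles; L-BFGS-B uses finite-difference gradients
res = minimize(neg_expected_cut, x0=[0.1, 0.1], method="L-BFGS-B")
print(res.x, -res.fun)   # angles near pi/4, objective near 3.0
```

The same pattern extends to depth p by passing a length-2p parameter vector.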

Usage

```python
import nerox

client = nerox.Client()

# MaxCut on a 32-node graph (natural QAOA benchmark)
job = client.optimize.maxcut(
    adjacency_matrix=A,
    solver="qaoa",
    depth=4,                # circuit layers p (default 4)
    n_shots=1000,           # measurement samples per circuit eval
    optimizer="lbfgs",      # lbfgs | cobyla | adam
)

result = job.wait(timeout=3600)
print(f"Cut: {result.objective}  Approx ratio: {result.approximation_ratio:.4f}")

# Raw QUBO (≤ 200 variables)
job2 = client.optimize.qubo(
    Q=Q_small,
    solver="qaoa",
    depth=6,
)
result2 = job2.wait()
```
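To feed a MaxCut instance through the raw QUBO entry point, the standard textbook reduction is Q = A − diag(degree), so that minimizing xᵀQx equals maximizing the cut. The snippet below verifies this identity on a small graph; the variable names are illustrative and not part of the NEROX API.

```python
import numpy as np

# MaxCut -> QUBO reduction: minimize x^T Q x with Q = A - diag(degree).
# For any 0/1 assignment x, x^T Q x == -cut(x).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])          # 4-cycle adjacency matrix
Q = A - np.diag(A.sum(axis=1))

n = len(A)
for s in range(2 ** n):
    x = (s >> np.arange(n)) & 1
    cut = int(x @ A @ (1 - x))        # each cut edge counted exactly once
    assert int(x @ Q @ x) == -cut
print("Q reproduces -cut for all 16 assignments")
```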

Approximation ratio

At depth p=1, QAOA guarantees an approximation ratio of at least 0.6924 for MaxCut on 3-regular graphs (for comparison, the classical Goemans-Williamson algorithm guarantees ~0.878). Deeper circuits approach the optimum, but simulation cost rises quickly with depth. For production optimization, GPU Annealing consistently outperforms QAOA on problems over 50 nodes; use QAOA for research and quantum-advantage benchmarking.
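Approximation ratio in the standard sense is the achieved cut divided by the true optimum, which can be brute-forced for small instances. A quick illustration (the graph and the achieved value below are made up for the example):

```python
import numpy as np

# Approximation ratio = achieved cut / optimal cut (standard definition).
# Brute-force the optimum for a small 5-edge graph.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]])
n = len(A)
optimal = max(
    int(((s >> np.arange(n)) & 1) @ A @ (1 - ((s >> np.arange(n)) & 1)))
    for s in range(2 ** n)
)
achieved = 3                      # e.g. a cut returned by a shallow run
print(optimal, achieved / optimal)   # → 4 0.75
```

The two triangles in this graph make a full cut of all 5 edges impossible, hence the optimum of 4.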

Limitations

- Maximum ~200 variables (GPU circuit-simulation limit)
- Slow: minutes to hours at depth p ≥ 6
- Not recommended for production workloads; use GPU Annealing instead
- No barren plateau mitigation; performance degrades at depth p > 12 without careful initialization