QGT: A Fully Specified Quantum-Enhanced Transformer for NISQ-era Generative AI

Published
Server
Preprints.org
DOI
10.20944/preprints202509.2354.v1

Quantum computing holds promise for accelerating Transformer-based generative models, yet existing proposals often remain at the sketch level and lack full specification for near-term devices. We introduce QGT, a fully defined hybrid quantum–classical Transformer tailored to the NISQ-to-simulation regime. Under a k-sparse attention assumption and efficient block-encoding oracles, QGT lowers the per-layer attention cost from \( O(n^2 d) \) to \( O(\sqrt{n}\,d) \). We provide a unified algorithmic and complexity framework with rigorous theorems and proofs, detailed quantum circuit implementations with parameter-shift gradient derivations and measurement-variance bounds, and comprehensive resource accounting of qubits, gates, and shots. A reproducible classical simulation and ablation study for n = 8 and d = 16 demonstrates that QGT matches classical Transformer performance using only 12 qubits and 40 shots per expectation. QGT thus establishes a concrete foundation for practical quantum-enhanced generative AI on NISQ hardware.
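The parameter-shift gradients the abstract refers to follow the standard rule for gates generated by a Pauli operator: the exact derivative of an expectation value is a difference of two shifted circuit evaluations. A minimal sketch in plain Python, using a single-qubit RY/⟨Z⟩ circuit as an illustrative stand-in (this toy circuit is an assumption for demonstration, not the paper's QGT circuit):

```python
import math

def expectation(theta):
    # <Z> after RY(theta) applied to |0> equals cos(theta);
    # this closed form stands in for a shot-estimated hardware measurement.
    return math.cos(theta)

def parameter_shift_grad(f, theta, shift=math.pi / 2):
    # Parameter-shift rule for Pauli-generated gates:
    # df/dtheta = [f(theta + s) - f(theta - s)] / 2 with s = pi/2 (exact, not a finite difference).
    return (f(theta + shift) - f(theta - shift)) / 2

theta = 0.7
grad = parameter_shift_grad(expectation, theta)
analytic = -math.sin(theta)  # d/dtheta cos(theta)
print(abs(grad - analytic) < 1e-12)  # True: the rule recovers the exact gradient
```

On hardware, each of the two shifted evaluations would itself be estimated from a finite number of shots (the paper reports 40 per expectation), so the gradient inherits the corresponding measurement variance.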
