Eriko Kaminishi, Takashi Mori, Michihiko Sugawara, Naoki Yamamoto
Scientific Reports 16(1), February 17, 2026. Peer-reviewed; lead author; corresponding author.
Abstract
Stochastic gradient descent (SGD) is a widely used optimization technique in both classical machine learning and the Variational Quantum Eigensolver (VQE). In VQE implementations on quantum hardware, measurement shot noise is inevitable. We analyze how this noise affects optimization dynamics, in particular escape from saddle points in non-convex loss landscapes. Our simulations show that the escape time follows a power law in $\eta/N_s$, where $\eta$ is the learning rate and $N_s$ is the number of measurement shots. Through an analysis of SGD, we provide theoretical insight into how measurement noise facilitates escape. In particular, we demonstrate that a continuous-time approximation via stochastic differential equations (SDEs) accurately captures the transient escape dynamics. This suggests that $\eta/N_s$ acts as an effective noise strength: increasing $\eta$ or decreasing $N_s$ has a similar effect. While concerns exist about the validity of the SDE approximation in stationary regimes, our findings clarify its applicability to transient behavior. Our work improves understanding of the role of measurement noise in VQE optimization.
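The saddle-escape effect described in the abstract can be illustrated with a minimal toy simulation. The sketch below is not the paper's VQE setup: it uses a simple quadratic saddle $f(x, y) = (x^2 - y^2)/2$ and models shot noise as Gaussian gradient noise with standard deviation $\sigma/\sqrt{N_s}$ (an illustrative assumption mimicking the $1/\sqrt{N_s}$ scaling of measurement noise, not the actual VQE estimator). The function name and all parameter values are hypothetical choices for demonstration.

```python
import numpy as np


def noisy_sgd_escape_time(eta=0.05, n_shots=100, sigma=1.0,
                          max_steps=100_000, threshold=1.0, seed=0):
    """Steps for noisy SGD to escape the saddle of f(x, y) = (x^2 - y^2)/2.

    Shot noise is modeled as i.i.d. Gaussian gradient noise with standard
    deviation sigma / sqrt(n_shots), a stand-in for measurement shot noise.
    """
    rng = np.random.default_rng(seed)
    theta = np.zeros(2)  # start exactly at the saddle point (0, 0)
    for step in range(1, max_steps + 1):
        grad = np.array([theta[0], -theta[1]])  # exact gradient of f
        noise = rng.normal(0.0, sigma / np.sqrt(n_shots), size=2)
        theta -= eta * (grad + noise)
        if abs(theta[1]) > threshold:  # left the saddle along the unstable direction
            return step
    return max_steps  # did not escape within the step budget


if __name__ == "__main__":
    # Averaged over seeds, fewer shots (stronger noise) means faster escape,
    # consistent with eta / N_s acting as an effective noise strength.
    for n_shots in (10, 100, 10_000):
        times = [noisy_sgd_escape_time(n_shots=n_shots, seed=s) for s in range(20)]
        print(n_shots, np.mean(times))
```

Sweeping `eta` and `n_shots` jointly in such a toy model is one way to check that escape time depends on the two parameters mainly through the ratio $\eta/N_s$, as the abstract states.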