Model-free reinforcement learning with noisy actions for automated experimental control in optics
- Authored by
- Lea Richtmann, Viktoria S. Schmiesing, Dennis Wilken, Jan Heine, Aaron Tranter, Avishek Anand, Tobias J. Osborne, Michèle Heurs
- Abstract
Setting up and controlling optical systems is often a challenging and tedious task. The high number of degrees of freedom to control mirrors, lenses, or phases of light makes automatic control challenging, especially when the complexity of the system cannot be adequately modeled due to noise or non-linearities. Here, we show that reinforcement learning (RL) can overcome these challenges when coupling laser light into an optical fiber, using a model-free RL approach that trains directly on the experiment without pre-training on simulations. By utilizing the sample-efficient algorithms Soft Actor-Critic (SAC), Truncated Quantile Critics (TQC), or CrossQ, our agents learn to couple with 90% efficiency. A human expert reaches this efficiency, but the RL agents are quicker. In particular, the CrossQ agent outperforms the other agents in coupling speed while requiring only half the training time. We demonstrate that direct training on an experiment can replace extensive system modeling. Our result exemplifies RL’s potential to tackle problems in optics, paving the way for more complex applications where full noise modeling is not feasible.
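To make the training setup the abstract describes more concrete, below is a minimal, hedged sketch of a model-free RL loop that trains directly on an experiment: a Gymnasium-style environment wrapping the fiber-coupling bench, optimized with Stable-Baselines3's SAC. Everything here is an assumption for illustration, not the authors' code: the environment class, its observation/action layout, the toy efficiency surrogate, and the reward are hypothetical stand-ins.

```python
# Illustrative sketch only: a hypothetical fiber-coupling environment and a
# model-free SAC agent trained on it directly (no simulation pre-training).
import gymnasium as gym
import numpy as np
from stable_baselines3 import SAC


class FiberCouplingEnv(gym.Env):
    """Hypothetical wrapper around the optical bench (not the authors' API).

    Actions: relative adjustments to mirror actuators (here 4 motor axes).
    Observation: actuator positions plus the measured coupling efficiency.
    Reward: the coupling efficiency in [0, 1], read from a photodiode.
    """

    def __init__(self, n_actuators: int = 4):
        self.n_actuators = n_actuators
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(n_actuators,))
        self.observation_space = gym.spaces.Box(
            -np.inf, np.inf, shape=(n_actuators + 1,)
        )
        self._positions = np.zeros(n_actuators, dtype=np.float32)

    def _measure_efficiency(self) -> float:
        # Placeholder for the photodiode readout; a smooth toy surrogate here.
        return float(np.exp(-np.sum(self._positions**2)))

    def _obs(self) -> np.ndarray:
        return np.append(self._positions, self._measure_efficiency()).astype(np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        # Random misalignment at the start of each episode.
        self._positions = self.np_random.uniform(
            -2.0, 2.0, self.n_actuators
        ).astype(np.float32)
        return self._obs(), {}

    def step(self, action):
        # Commanded moves are perturbed: this is the "noisy actions" setting,
        # where actuators do not execute steps exactly as requested.
        noise = self.np_random.normal(0.0, 0.05, self.n_actuators)
        self._positions += (0.1 * action + noise).astype(np.float32)
        eff = self._measure_efficiency()
        terminated = eff > 0.9  # toy stopping rule at ~90% coupling
        return self._obs(), eff, terminated, False, {}


env = FiberCouplingEnv()
model = SAC("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)  # on hardware, limited by wall-clock time
```

In the same spirit, TQC (available in the sb3-contrib package) or CrossQ (available in the JAX-based sbx package) could be swapped in with a one-line change of the algorithm class; this record does not state which implementations the authors used.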
- Organisation(s)
Institute of Gravitational Physics
Institute of Theoretical Physics
Institute of Photonics
- External Organisation(s)
Australian National University
Delft University of Technology (TU Delft)
- Type
- Article
- Journal
- Transactions on Machine Learning Research
- Volume
- 2025
- No. of pages
- 32
- ISSN
- 2835-8856
- Publication date
- 07.2025
- Publication status
- Published
- Peer reviewed
- Yes
- ASJC Scopus subject areas
- Artificial Intelligence, Computer Vision and Pattern Recognition
- Electronic version(s)
- https://doi.org/10.48550/arXiv.2405.15421 (Access: Open)