Spectral bandwidth correction with optimal parameters based on deep learning


Spectrometers are used in a wide range of applications to decompose complex light. However, the bandwidth function of a spectrometer distorts the measured spectrum, limiting accuracy and restricting applications. It is therefore important to correct the measured spectrum to obtain accurate measurement results. Bandwidth correction algorithms have been widely used to recover the original spectrum, and their recovery performance has improved significantly over time by accounting for both the bandwidth function and the influence of noise on the original spectrum. These algorithms involve additional parameters, such as regularization parameters, which suppress the influence of noise and improve the recovery. Unfortunately, traditional parameter selection techniques, such as generalized cross-validation, are often insufficient and can compromise the correction results. Reliable methods for selecting optimal parameters are therefore highly desirable.
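The distortion described above can be modeled as a convolution of the true spectrum with the instrument's bandwidth function, plus measurement noise. The following minimal sketch is not from the paper; the Gaussian bandwidth function, line positions, and noise level are illustrative assumptions. It shows how narrow spectral lines come out broadened and lowered in the measurement:

```python
import numpy as np

# Gaussian model of the spectrometer's bandwidth (slit) function,
# normalized to unit area so it only redistributes energy
def gaussian_bandwidth(wavelengths, center, fwhm):
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    g = np.exp(-0.5 * ((wavelengths - center) / sigma) ** 2)
    return g / g.sum()

wl = np.linspace(400.0, 700.0, 301)        # wavelength grid, nm
true_spec = np.zeros_like(wl)
true_spec[100] = 1.0                        # narrow line at 500 nm
true_spec[150] = 0.6                        # narrow line at 550 nm

# Measured spectrum = true spectrum convolved with the bandwidth
# function, plus additive noise
kernel = gaussian_bandwidth(wl, wl[wl.size // 2], fwhm=10.0)
measured = np.convolve(true_spec, kernel, mode="same")
measured += np.random.default_rng(0).normal(0.0, 1e-3, wl.size)
# The narrow lines are now broadened and much lower than the originals
```

Recovering `true_spec` from `measured` is the ill-posed inverse problem that bandwidth correction algorithms solve, and the noise term is why they need a regularization parameter at all.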

With advances in computation and data processing, deep learning, a branch of machine learning, has emerged as a promising way to overcome the shortcomings of conventional parameter selection methods. Deep learning has been applied successfully in fields including image and speech recognition, advertising, medicine, and agriculture. Lately, it has drawn increasing attention in optics owing to its ability to construct and generalize models and to extract and recognize diverse features. Compared with conventional machine learning techniques, deep learning also offers easier transfer to new tasks, strong adaptability, and high accuracy.

On this account, researchers from the Hefei University of Technology: Mr. Hao Cui, Dr. Guo Xia, Mr. Jiangtao Wang, and Mr. Lihao Bai, in collaboration with Dr. Chan Huang from the Chinese Academy of Sciences, developed a new optimal parameter selection method based on deep learning for improved spectral bandwidth correction. The optimal parameters were obtained by training neural networks and were then combined with the respective algorithms to evaluate their spectral recovery performance. Notably, the design of the bandwidth correction algorithm and that of the neural network were completely independent. The research work is currently published in the journal Applied Optics.

In their approach, the authors began by analyzing the spectral bandwidth correction model and illustrating the importance of optimal parameters using the least-squares method. They then constructed the database and the neural network. Next, the neural network was trained to obtain the optimal parameters applied in the corresponding bandwidth correction algorithms. Finally, the feasibility of the proposed parameter selection method was validated via simulations and experiments involving the recovery of distorted white light-emitting diode (LED), compact fluorescent lamp (CFL), and Raman spectra, using both the traditional and the proposed versions of the Richardson-Lucy (R-L) and Levenberg-Marquardt (L-M) algorithms, and the results were compared.
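As an illustration of why such parameters matter, the Richardson-Lucy algorithm's main tunable parameter is the iteration number: too few iterations under-correct the distortion, while too many amplify noise. The sketch below is a textbook 1-D R-L implementation, not the authors' code; the kernel width and iteration count are illustrative assumptions:

```python
import numpy as np

def richardson_lucy(measured, kernel, n_iter):
    """1-D Richardson-Lucy deconvolution; n_iter is the parameter to select."""
    estimate = np.full_like(measured, measured.mean())  # flat initial guess
    mirrored = kernel[::-1]          # adjoint of the convolution
    eps = 1e-12                      # guard against division by zero
    for _ in range(n_iter):
        reblurred = np.convolve(estimate, kernel, mode="same")
        ratio = measured / (reblurred + eps)
        estimate *= np.convolve(ratio, mirrored, mode="same")
    return estimate

# Gaussian bandwidth function (FWHM 6 on a unit grid), unit area
x = np.arange(201)
sigma = 6.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
kernel = np.exp(-0.5 * ((x - 100) / sigma) ** 2)
kernel /= kernel.sum()

# Noiseless distorted line spectrum, then 30 R-L iterations
true_spec = np.zeros(201)
true_spec[80] = 1.0
measured = np.convolve(true_spec, kernel, mode="same")
recovered = richardson_lucy(measured, kernel, n_iter=30)
```

In the paper's framework, a trained neural network would supply a value such as `n_iter` (and the analogous damping parameter for the L-M algorithm) in place of a hand-tuned or cross-validated choice.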

The authors obtained the type A uncertainty and root-mean-square errors for all the algorithm cases involved. The results demonstrated the feasibility and superiority of the neural network in obtaining optimal parameters, which enhanced the efficiency and accuracy of the bandwidth correction algorithms in recovering distorted spectra compared with the traditional methods. The stability of the algorithms with optimal parameters was also improved.

In summary, the study proposed a new deep-learning-based method for selecting the optimal parameters for bandwidth correction. The new parameter selection method addressed the shortcomings of the traditional methods, exhibiting superior accuracy, speed, reliability, and efficacy. For example, it successfully recovered the distorted LED, Raman, and CFL spectra, closely reproducing the original spectra. In a statement to Advances in Engineering, the authors said the new method would pave the way for broader application of deep learning in optics to address a wide range of problems.


About the author

Guo Xia received the M.S. degree in Measuring and Testing Technologies and Instruments from the Hefei University of Technology, Hefei, China, in 2009, and the Ph.D. degree in Optical Engineering from Zhejiang University, China, in 2013. He is currently a master's supervisor at the Hefei University of Technology. His research interests include optical engineering, computational optics, and spectral analysis.


About the author

Hao Cui received the B.S. degree (2017) from Shandong University of Science and Technology (SDUST), Qingdao, China, and the M.S. degree (2021) from Hefei University of Technology (HFUT), Hefei, China. He is currently pursuing the Ph.D. degree in Electronic Science and Technology at Zhejiang University (ZJU). His research interests include computational optics, Fourier optics, waveguides, and deep learning.



Cui, H., Xia, G., Huang, C., Wang, J., & Bai, L. (2021). Spectral bandwidth correction with optimal parameters based on deep learning. Applied Optics, 60(5), 1273.

