Atmospheric Light Estimation via Color Vector Geometry for Robust Single-Image Dehazing

Significance 

Capturing sharp and accurate images in hazy environments continues to be a significant challenge across a wide range of real-world applications. From self-driving cars and drone-based aerial surveys to environmental monitoring and urban planning, many systems depend on visual data that can be severely degraded by atmospheric conditions. When haze is present, typically due to the scattering and absorption of light by airborne particles, images suffer a noticeable loss in contrast, color accuracy, and overall clarity. While we encounter haze often in daily life, its impact on image-based systems introduces complex technical obstacles, especially for automated vision tasks that require scene understanding or restoration.

At the core of most image dehazing methods lies the estimation of atmospheric light, the global illumination scattered into the camera's view by the atmosphere itself. This step is essential because it directly influences how the original scene radiance is reconstructed. If the atmospheric light is estimated inaccurately, the result is often visually jarring: objects may appear with distorted colors, overly bright or dim regions, or artifacts that obscure important details. Although numerous algorithms have been developed to tackle this problem, including data-driven deep learning models and more traditional physical-model-based methods, accurately estimating atmospheric light remains a difficult task, particularly for high-resolution images or complex outdoor scenes.

Many of the conventional approaches rely on assumptions or visual indicators, such as identifying the brightest pixels, detecting the sky, or leveraging prior knowledge of color statistics. While these techniques can work reasonably well in controlled settings, they tend to break down when dealing with ambiguous or atypical scenarios. For example, bright areas in an image might not always represent atmospheric light sources, and urban environments can produce confusing reflections or occlusions that violate underlying model assumptions. On top of that, these methods often require significant computational resources, limiting their practicality in real-time or embedded systems.
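Concretely, most of these methods start from the standard atmospheric scattering (Koschmieder) model; the formulation below uses the conventional notation of the dehazing literature rather than the paper's own symbols. The observed intensity I(x) mixes the true scene radiance J(x) with the atmospheric light A according to the transmission t(x), which decays exponentially with scene depth d(x) at extinction coefficient β:

```latex
% Standard single-scattering haze model (conventional notation):
% observed color = attenuated scene radiance + airlight.
I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)}
```

Recovering J(x) from I(x) requires estimates of both t(x) and A, which is why an error in the estimated atmospheric light propagates into every pixel of the restored image.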

To address these limitations, a recent study led by Associate Professor Dagang Jiang from the University of Electronic Science and Technology of China, along with Lingzhao Kong, Xin Liu, Yu Zhang, and Kaiyu Qin, introduced a new perspective on atmospheric light estimation by revisiting the physics of light propagation in hazy conditions. Their research, published in Optical Engineering, proposes a fundamentally different approach—one that builds on the geometric relationships within the atmospheric scattering model itself. Specifically, the team discovered and formalized an intriguing property: in RGB color space, the atmospheric light vector is orthogonal to the normal vectors of what they term “color degradation planes.” These planes are formed by observing how the color of a scene changes due to varying levels of haze.
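The property is straightforward to verify numerically. Under the scattering model, every observation I = tJ + (1 - t)A of a scene point with radiance J lies in the plane through the origin spanned by J and A, so that plane's normal is orthogonal to A; collecting normals from many scene points, A emerges as their common null-space direction. The Python sketch below is a minimal illustration of this geometry under simplifying assumptions, not a reproduction of the authors' single-image algorithm: it assumes two observations of each scene point under different haze levels, forms each plane normal with a cross product, and recovers A by SVD.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth atmospheric light (RGB); the goal is to recover its direction.
A = np.array([0.9, 0.92, 0.95])

# Simulate scene points observed under two haze levels each. With the model
# I = t*J + (1 - t)*A, every observation of one scene point lies in the
# plane through the origin spanned by J and A (a "color degradation plane").
normals = []
for _ in range(50):
    J = rng.uniform(0.05, 1.0, size=3)       # true radiance of this point
    t1, t2 = rng.uniform(0.2, 0.9, size=2)   # two transmission values
    I1 = t1 * J + (1 - t1) * A
    I2 = t2 * J + (1 - t2) * A
    n = np.cross(I1, I2)                     # normal of the degradation plane
    normals.append(n / np.linalg.norm(n))

# Each normal is orthogonal to A, so A spans the near-null space of the
# stacked normal matrix: take the right singular vector with the smallest
# singular value.
N = np.stack(normals)
_, _, Vt = np.linalg.svd(N)
A_hat = Vt[-1] * np.sign(Vt[-1].sum())       # resolve the sign ambiguity

print("true A direction     :", A / np.linalg.norm(A))
print("estimated A direction:", A_hat)
```

Because the orthogonality constraint only pins down a direction, A is recovered up to scale; in a full pipeline its magnitude would be fixed by a separate step.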

The research team used the RESIDE benchmark dataset to evaluate the performance and reliability of their proposed dehazing technique. This dataset is highly regarded in the image dehazing community for its scale and diversity and includes over 500,000 images depicting a wide range of environments, from natural landscapes to busy urban scenes, all captured under varying levels of atmospheric haze and lighting conditions. By incorporating both synthetic images with known ground truth and real-world photographs without annotations, the researchers ensured that their assessment was not only methodologically rigorous but also relevant to practical applications.

For the initial experiments, the authors simulated different degrees of haze by manipulating atmospheric extinction and light intensity. Starting from 2,061 clear base images, they generated a total of 535,860 synthetic hazy scenes, each rendered with a different combination of haze thickness and illumination level, creating a robust testbed to determine whether the algorithm could accurately retrieve the atmospheric light values embedded during simulation. The results were striking: their method consistently provided estimates that closely matched the original values, significantly outperforming several established approaches, including the well-known dark channel prior and haze-lines techniques. This advantage became especially evident in more demanding situations, such as scenes with dense haze or intense brightness, where other methods often lose reliability.

The team also examined how well the algorithm scaled with image resolution. One of the major strengths of their approach is its computational efficiency. While most existing methods become slower as image size increases, theirs remained remarkably fast and stable. This was achieved through an elegant strategy: downsampling high-resolution images to smaller representative patches for the color vector analysis. Despite this reduction, the quality of atmospheric light estimation remained high, even for images exceeding 1280 × 960 pixels. As a result, the method is particularly well-suited for time-sensitive or resource-constrained scenarios, such as real-time video processing or deployment on embedded systems, as sketched below.
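To make the protocol concrete, the sketch below shows the kind of haze synthesis and fixed-cost sampling described above; the parameter values and the stride-based downsampling are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def synthesize_haze(J, depth, beta, A):
    """Render a hazy image from clear radiance J (H x W x 3, values in [0, 1]),
    a per-pixel depth map, extinction coefficient beta, and atmospheric light A."""
    t = np.exp(-beta * depth)[..., None]      # transmission via Beer-Lambert decay
    return J * t + A * (1.0 - t)

def downsample_for_estimation(I, target=64):
    """Stride-based downsample so the estimation stage sees a roughly fixed
    number of pixels regardless of input resolution (illustrative; the
    paper's exact sampling scheme may differ)."""
    sy = max(1, I.shape[0] // target)
    sx = max(1, I.shape[1] // target)
    return I[::sy, ::sx]

# One clear image swept over a grid of haze and illumination settings,
# mirroring the protocol of varying extinction and light intensity.
rng = np.random.default_rng(1)
J = rng.uniform(0.0, 1.0, size=(960, 1280, 3))      # stand-in clear image
depth = rng.uniform(1.0, 10.0, size=(960, 1280))    # stand-in depth map
for beta in (0.4, 0.8, 1.6):                        # thin to dense haze
    for intensity in (0.7, 0.85, 1.0):              # dim to bright airlight
        A = intensity * np.ones(3)
        hazy = synthesize_haze(J, depth, beta, A)
        small = downsample_for_estimation(hazy)     # ~64 x 64 estimation input
```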

The authors also applied their method to real-world hazy photographs from the HSTS and RTTS subsets of the RESIDE dataset. Here, the goal was to observe how their refined atmospheric light estimation impacted the final dehazed image. Combined with the dark channel prior for transmission estimation, their technique produced images that were clearly more balanced and visually convincing compared to those produced by competing methods. Viewers could easily notice sharper details in the background, more accurate and vivid color tones, and a more natural overall brightness. These visual improvements were further backed by objective metrics—such as SSIM, PSNR, and BRISQUE—which confirmed the method’s superiority across multiple image quality dimensions.
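For context, the sketch below shows the classic dark-channel-prior pipeline (He et al.) into which the estimated atmospheric light plugs; the patch size, omega, and transmission floor are conventional defaults, the inputs are placeholders, and the guided-filter refinement usually applied in practice is omitted.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dcp_transmission(I, A, patch=15, omega=0.95):
    """Dark-channel-prior transmission estimate: the dark channel of the
    A-normalized image approximates 1 - t(x) in haze-free regions."""
    dark = minimum_filter((I / A).min(axis=2), size=patch)
    return 1.0 - omega * dark

def recover_radiance(I, A, t, t_floor=0.1):
    """Invert the scattering model, J = (I - A) / t + A, clamping t so that
    noise is not amplified where the haze is thickest."""
    t = np.clip(t, t_floor, 1.0)[..., None]
    return (I - A) / t + A

# Usage with placeholder inputs: substitute the atmospheric light estimated
# by the geometric method for the hard-coded A below.
I = np.random.default_rng(2).uniform(0.2, 1.0, size=(480, 640, 3))
A = np.array([0.85, 0.88, 0.92])                    # placeholder estimate
t = dcp_transmission(I, A)
J = np.clip(recover_radiance(I, A, t), 0.0, 1.0)    # dehazed image
```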

One of the most notable strengths of the approach was its resilience in scenes with bright skies or complex lighting. These are typically difficult for dehazing algorithms, which often misinterpret light sources or overcorrect exposure. Yet in these challenging conditions, the proposed method maintained impressive accuracy. The sky was reconstructed faithfully without overwhelming the rest of the image, and foreground details remained intact. Even in darker scenes, such as traffic images captured at dusk or areas with heavy pollution, the algorithm held up well, preserving fine textures and minimizing unnatural color shifts.

In conclusion, the researchers developed a new method and showed that atmospheric light vectors are orthogonal to the normal vectors of color degradation planes, a relationship grounded in the physics of light scattering. This shift from a heuristic-based approach to one anchored in geometric principles marks a rare and refreshing moment in computer vision research, where a deep understanding of the underlying phenomena leads directly to a practical and elegant solution. From an application standpoint, the implications of the authors' work are both broad and impactful. In autonomous driving, environmental monitoring, and aerial imaging, where the clarity of visual data is essential, a method that dehazes effectively and consistently can significantly improve downstream performance. The new approach is particularly well-suited to real-time scenarios because, unlike many existing methods, its runtime does not scale with image resolution. That kind of computational predictability is important for embedded systems, such as onboard processors in drones or autonomous vehicles, where resources are limited and speed is non-negotiable.

Moreover, the study encourages a re-examination of established models in computer vision and optical imaging. By uncovering a geometric relationship that had been largely overlooked in the context of atmospheric scattering, the researchers have opened new avenues for exploration. Tasks like shadow removal, color correction, and even underwater image enhancement, all of which are affected by complex lighting conditions, might benefit from a similar vector-based analytical framework. In fact, the authors mention that their method could be adapted to model illumination in shadowed areas, a notoriously difficult challenge due to the mixed contributions of direct and indirect lighting.

What also stands out about this work is its accessibility and reproducibility. Since the algorithm does not rely on machine learning or pretrained models, it can be applied directly to new datasets or domains without the need for retraining or high-end computing infrastructure. This makes it especially valuable in low-resource settings or for researchers who may not have access to GPUs or massive labeled datasets. Ultimately, this universality reinforces the strength of the method: it is both physically grounded and broadly applicable.


About the author

Dagang Jiang is an associate professor at the University of Electronic Science and Technology of China, the Deputy Dean of the School of Aeronautics and Astronautics, the Vice Director of the Aircraft Swarm Intelligent Sensing and Cooperative Control Key Laboratory of Sichuan Province, and the academic leader of laser communication research at the Adaptive Optics National Key Laboratory. He received his BS and MS degrees from the Beijing University of Aeronautics and Astronautics in 2004 and 2007, respectively, and received his PhD from the University of Electronic Science and Technology of China in 2014. His current research areas are free-space optical communication and laser atmospheric propagation.

Reference

Kong, L., Jiang, D., Liu, X., Zhang, Y., & Qin, K. (2024). Atmospheric light estimation through color vector orthogonality for image dehazing. Optical Engineering, 63(8), 083103. https://doi.org/10.1117/1.OE.63.8.083103

