
In this blog, Chandresh talks about the age-old debate of what the eyes saw versus what the camera captured, specifically in the context of Northern Lights photography.
June 2023
One comment I often get from aurora chasers is that the auroras they saw/experienced with their eyes looked nowhere close to the ones depicted in the pictures captured by the camera. Generally they mean that they did not see the vibrant colours in the pictures, or that the auroras did not seem as bright as depicted. That is a very fair comment, and the answer lies in how camera sensors and the human eye work. Hopefully this post helps you gain some clarity of vision on this topic.
THE FUNDAMENTAL WORKING PRINCIPLES ARE SIMILAR …
The Human Eyes
The evolution of human eyes is a testament to the gradual development of sophisticated visual systems over millions of years. Through the process of natural selection, our ancestors' eyes evolved from simple light-sensitive structures to complex organs capable of perceiving detail, color, and depth. These adaptations allowed early humans to navigate their environment, detect predators and prey, and ultimately played a significant role in our species' survival and success.
Our eyelids and pupils determine the amount of light let into the photoreceptors (rods and cones) located at the back of the eyeball, which then process the collected light to form the image we see. Light enters the eye through the cornea, the transparent front surface of the eye that acts as a protective barrier. The cornea helps focus incoming light onto the next structure, the lens. The lens sits behind the iris, which controls the size of the pupil. Using surrounding muscles, the lens adjusts its shape to focus the incoming light onto the back of the eye, specifically the retina. The retina is a layer of specialized cells at the back of the eye containing two types of photoreceptor cells called rods and cones. Rods are responsible for low-light and peripheral vision, while cones are responsible for color and central vision.
When light reaches the retina, it interacts with the photoreceptor cells, which convert it into electrical signals in a process called phototransduction. Rods and cones contain specific pigments that respond to different wavelengths of light. The electrical signals generated by the photoreceptor cells are transmitted to the adjacent layers of cells in the retina, such as bipolar cells and ganglion cells, where they undergo complex processing and integration before being sent to the optic nerve. The optic nerve carries the signals from the retina to the visual cortex, the region of the brain responsible for processing visual information. There the signals are further processed and analyzed to form a visual representation, and the brain combines the input from both eyes to create a cohesive, three-dimensional perception of the scene.
The Camera
The camera has a lens that funnels light onto a sensor, and a processor that converts the light signals into an image. Three basic settings (exposure time, f-stop and ISO) determine the amount of light the camera sensor collects. Camera image sensors are the heart of digital cameras, responsible for capturing light and converting it into electrical signals that form digital images. Most cameras use either CCD (Charge-Coupled Device) or CMOS (Complementary Metal-Oxide-Semiconductor) sensors. When light enters the camera through the lens, it passes through a series of optical elements that focus it onto the image sensor. The sensor consists of millions of tiny light-sensitive pixels, each capable of detecting and converting incoming photons into electrical charges. The intensity of light hitting each pixel determines the amount of charge it accumulates. Once the exposure is complete, the camera reads the electrical charges from each pixel and converts them into digital values. These values are then processed by the camera's image processor to create a final digital image, where brightness, colors, and details are derived from the accumulated charge in each pixel. The resolution and quality of the image sensor play a crucial role in determining the level of detail and overall image quality the camera can produce. Cameras, with their image sensors and advanced processing algorithms, strive to replicate and capture the essence of what the human eye perceives.
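To make that photons-to-pixels chain a little more concrete, here is a minimal sketch of the pipeline in Python. Every number in it (photon counts, quantum efficiency, read noise, gain, bit depth) is an illustrative assumption rather than a specification of any real sensor.

```python
import numpy as np

# Minimal sketch of a sensor pipeline: photons -> electrons -> digital numbers (DN).
# All constants below are illustrative assumptions, not values for a specific camera.

rng = np.random.default_rng(0)

photons = rng.poisson(lam=200, size=(4, 4))    # photons arriving at each pixel during the exposure
quantum_efficiency = 0.5                        # assumed fraction of photons converted to electrons
read_noise_e = 2.0                              # assumed read noise (standard deviation, in electrons)

electrons = photons * quantum_efficiency + rng.normal(0, read_noise_e, photons.shape)

gain = 4.0                                      # assumed amplification: DN per electron
dn = np.clip(np.round(electrons * gain), 0, 2**12 - 1).astype(int)   # quantise to a 12-bit value

print(dn)   # the raw digital values the image processor would then turn into a picture
```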
THE DIFFERENCES IN WHAT THE CAMERA AND OUR EYES SEE, ESPECIALLY AT NIGHT
Blinking Eyelids Vs Camera Shutter Speed
While a human blink and a camera's shutter both involve the brief opening and closing of an aperture, their purposes and durations differ substantially.
Blinking is a natural and involuntary action of the human eye. It occurs when the eyelids briefly close and then reopen, typically lasting around 100 to 400 milliseconds. The primary function of blinking is to keep the eye lubricated, protect it from debris, and spread tears across the cornea to maintain clear vision. Humans blink frequently throughout the day, and the duration of a blink is relatively short compared to camera shutter times.
In a camera, the shutter time refers to the duration for which the camera's shutter remains open to expose the camera sensor to incoming light. Shutter speeds can vary significantly, ranging from fractions of a second to several seconds or even longer, depending on the camera settings. A longer shutter speed allows more light to enter the camera, resulting in a brighter exposure, while a shorter shutter speed freezes motion and reduces the amount of light captured.
It's important to note that the comparison between a human eye blink and camera shutter time is primarily conceptual, as the underlying mechanisms and purposes of these actions are distinct. Blinking is an essential physiological process that occurs frequently and unconsciously to maintain eye health and comfort. In contrast, the camera shutter time is a deliberate setting chosen by the photographer to control exposure and capture desired effects in a photograph, such as motion blur or freeze-frame shots.
For Northern Lights, the camera's longer exposure times (generally 1-15 seconds) allow a massive amount of light to be collected, compared with the fraction of a second over which our eyes effectively sample the scene. This is why, to the human eye, the northern lights are often perceived as a dim glow lacking intensity. A rough calculation of this difference is sketched below.
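As a rough first-order illustration, the snippet below assumes the light collected scales linearly with how long the "shutter" stays open, with aperture and sensitivity held equal, and compares typical aurora exposure times against the ~100 ms blink duration mentioned earlier. The numbers are for illustration only.

```python
# First-order comparison of light gathered, assuming light collected scales
# linearly with exposure time and everything else (aperture, sensitivity) is equal.
# The 100 ms figure is simply the lower end of the blink range quoted above.

eye_glance_s = 0.1                 # ~100 ms
camera_exposures_s = [1, 5, 15]    # typical aurora exposure times, in seconds

for t in camera_exposures_s:
    ratio = t / eye_glance_s
    print(f"{t:>2} s exposure collects roughly {ratio:.0f}x the light of a 100 ms glance")
```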
Eye Pupil Vs Lens F-Stop
The camera's f-stop and the human eye's pupil are both mechanisms that control the amount of light entering their respective systems. However, there are some notable differences between the two:
The pupil is the opening at the center of the eye's iris that regulates the amount of light reaching the retina. It constricts or dilates in response to changes in lighting conditions and the eye's focus. The muscles surrounding the pupil control its size. In bright conditions, the pupil constricts, becoming smaller to reduce the amount of incoming light. Conversely, in dim lighting, the pupil dilates, becoming larger to allow more light to enter the eye.
In a camera, the f-stop refers to the aperture setting of a camera lens. It determines the size of the lens opening, which regulates the amount of light passing through to the camera's image sensor. The f-stop is represented by a numerical value, such as f/2.8 or f/16. A smaller f-stop number (e.g., f/2.8) indicates a wider lens opening, allowing more light to reach the sensor, while a larger f-stop number (e.g., f/16) corresponds to a narrower lens opening, limiting the amount of light entering the camera.
While both the camera's f-stop and the human eye's pupil adjust to control light, there are differences in their mechanisms and capabilities. The camera's f-stop is a deliberate setting, constrained by the lens, that the photographer adjusts manually. In contrast, the human eye's pupil is dynamic and continuously adjusts based on external lighting conditions and the eye's needs for optimal vision. The range of adjustment also differs: camera lenses can have a wide range of f-stop values, allowing for precise control over the exposure, while the human eye's pupil, although adaptable, has limitations in terms of its minimum and maximum sizes. It's important to note that while the comparison here is between the f-stop and the pupil, the overall visual system of the human eye is much more complex, involving additional structures such as the iris, lens, and retina, which contribute to the eye's overall ability to perceive and process light.
Talking more from a Northern Lights perspective, camera f-stops generally range from f/2.8 (~17-18mm opening on a typical lens) to f/22 (~1-2mm opening). Normal human pupils are ~2-4mm in diameter during the day and ~4-8mm during night vision. To draw a comparison specific to night vision (not a perfect one, since our pupils adjust continuously, more like a video feed), the night-adapted pupil's range translates to roughly f/8 to f/14 on such a lens. F-stops of f/2.8 and below make a significant difference in the ability to collect light. This factor again contributes to the perceived low intensity of the northern lights as seen by the human eye; the sketch below shows the arithmetic.
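Here is the back-of-the-envelope arithmetic behind that translation: f-number = focal length / aperture diameter, and light-gathering area scales with the square of the diameter. The 50 mm focal length is my own assumption, chosen because it gives the ~17-18 mm opening at f/2.8 quoted above; a different lens would shift the numbers.

```python
# f-number = focal length / aperture diameter; light gathered scales with diameter squared.
# The 50 mm focal length is an assumption consistent with the ~17-18 mm opening at f/2.8.

focal_length_mm = 50.0

def f_number(aperture_diameter_mm: float) -> float:
    return focal_length_mm / aperture_diameter_mm

def relative_light(diameter_mm: float, reference_mm: float) -> float:
    # Area ratio: how much more light one opening admits than another.
    return (diameter_mm / reference_mm) ** 2

night_pupil_mm = 6.0                          # mid-range of the 4-8 mm night-adapted pupil
wide_open_mm = focal_length_mm / 2.8          # ~17.9 mm opening at f/2.8

print(f"A ~{night_pupil_mm:.0f} mm night pupil behaves like f/{f_number(night_pupil_mm):.1f} on this lens")
print(f"f/2.8 gathers about {relative_light(wide_open_mm, night_pupil_mm):.0f}x the light of that pupil")
```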
THE PERCEPTION OF LIGHT INTENSITY IN LOW LIGHT CONDITIONS …
ISO Settings Vs Photoreceptor Rods
While the human eye and camera image sensors share some similarities in capturing visual information, they also have distinct characteristics and limitations. The capture of light in low-light conditions is fundamentally similar between the camera and the human eye, in the sense that the captured light interacts with a medium that converts it into an electrical signal, and that signal is then processed to form a final image. In the human eye, the rod photoreceptors are highly sensitive to light and are responsible for vision in low-light conditions. Similarly, camera image sensors can have different levels of light sensitivity, often referred to as ISO sensitivity.
The human visual system's complexity and adaptability, combined with the brain's processing capabilities, allow for a rich and dynamic visual experience. Rods are highly sensitive to light and are responsible for our vision in low-light conditions, such as at night or in dimly lit environments. They are concentrated around the outer edges of the retina. Rod cells contain a pigment called rhodopsin, which is highly sensitive to light. When photons enter the eye and strike the rod cells, the rhodopsin molecules undergo a chemical reaction, triggering an electrical signal. This signal is then sent to the brain, where it is interpreted as visual information. Although rods do not provide detailed color vision, they are essential for peripheral vision and detecting motion.
On the camera side, light can be captured more efficiently in low-light conditions by changing the ISO setting, which changes the sensor's effective sensitivity to light. A higher ISO setting increases the camera's sensitivity, allowing for brighter exposures in low-light conditions but potentially introducing more digital noise. Conversely, a lower ISO setting reduces sensitivity, resulting in cleaner images but requiring more light for proper exposure. Advances in modern sensor technology have ensured that even at higher ISO settings, cameras can capture low-light scenes with reduced noise, approaching the light sensitivity of rods. Photographers can adjust the ISO to achieve the desired balance between image brightness and noise based on the available light. The intensity of light hitting each pixel determines the amount of charge it accumulates, and this is then processed by the camera's image processor to create the final digital image. A toy illustration of ISO as gain follows below.
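As a toy illustration (not a model of any particular sensor), the snippet below treats ISO purely as a gain applied after capture: raising it brightens the recorded values but amplifies the noise riding on them as well. All the numbers are assumptions chosen for demonstration.

```python
import numpy as np

# Toy model: ISO acts as a multiplicative gain applied to whatever the sensor recorded.
# The signal and noise levels below are arbitrary, illustrative assumptions.

rng = np.random.default_rng(1)

true_signal_e = 20.0        # electrons captured from a dim scene
read_noise_e = 3.0          # read noise (standard deviation, in electrons)

for iso_gain in (1, 4, 16):     # e.g. base ISO, 4x, 16x
    recorded = iso_gain * (true_signal_e + rng.normal(0, read_noise_e, 10_000))
    print(f"gain x{iso_gain:>2}: mean level ~{recorded.mean():7.1f}, "
          f"noise ~{recorded.std():5.1f}, signal-to-noise ~{recorded.mean() / recorded.std():.1f}")
```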
THE PERCEPTION OF COLOUR IN LOW LIGHT CONDITIONS…
Bayer Filter Array Vs Human Eye Photoreceptor Cones
Cones in the human eye enable us to perceive colors and provide us with detailed color vision. Similarly, the image sensor in a camera can capture color information through the use of individual pixels, often arranged in a Bayer filter pattern. Each pixel captures either red, green, or blue light, and through interpolation, the camera reconstructs the full-color image.
Cones are responsible for our central and color vision, providing us with a more detailed and vibrant perception of the world. They are concentrated in the central part of the retina, called the fovea. Unlike rods, cones are less sensitive to light and require higher light levels to function effectively. However, they are responsible for our ability to perceive colors and finer details. There are three types of cone cells, each containing a pigment that responds to different wavelengths of light: red, green, and blue. When light strikes the cone cells, the corresponding pigment molecules undergo a chemical reaction similar to that in rods, generating electrical signals that are transmitted to the brain. These signals are then processed, allowing us to perceive colors, recognize faces, and discern fine details in our visual environment.
In a camera, color is captured with the help of a color filter array called a Bayer filter. It consists of a pattern of red, green, and blue color filters placed over individual pixels on the sensor. The Bayer filter is arranged in a mosaic pattern, with twice as many green filters as red or blue filters. This arrangement is based on the fact that the human eye is more sensitive to green light. When light passes through the Bayer filter, each pixel captures only one color component of the incoming light, based on the filter directly above it: green filters capture green light, red filters capture red light, and blue filters capture blue light. However, because each pixel only records an intensity value for one color, the color information is incomplete. The demosaicing process, performed by the camera's image processor, analyzes the neighboring pixels of each color to estimate the missing color information. By interpolating the intensity values from adjacent pixels, a full-color value is assigned to each pixel, reconstructing a complete RGB (red, green, blue) value for every pixel on the sensor. The camera's image processor then applies color correction algorithms to adjust for any color inaccuracies introduced by the Bayer filter or other factors. These algorithms aim to match the captured colors to a standard color space, such as sRGB or Adobe RGB, ensuring accurate and consistent color representation in the final image. A simplified demosaicing sketch follows below.
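To show the idea of demosaicing in miniature, here is a deliberately naive sketch: it places a tiny scene behind an RGGB Bayer mosaic and fills in each pixel's two missing colors by averaging the known same-color neighbours. Real in-camera processors use far more sophisticated interpolation; the pattern layout, scene values, and 3x3 averaging here are illustrative choices of mine.

```python
import numpy as np

# Naive Bayer capture + demosaic on a tiny RGGB mosaic, for illustration only.

def bayer_masks(h, w):
    """RGGB layout: R at (even, even), B at (odd, odd), G everywhere else."""
    y, x = np.mgrid[0:h, 0:w]
    r = (y % 2 == 0) & (x % 2 == 0)
    b = (y % 2 == 1) & (x % 2 == 1)
    g = ~(r | b)
    return r, g, b

def demosaic(mosaic):
    """Estimate missing colours by averaging known same-colour samples in a 3x3 window."""
    h, w = mosaic.shape
    out = np.zeros((h, w, 3))
    for channel, mask in enumerate(bayer_masks(h, w)):
        known = np.pad(np.where(mask, mosaic, 0.0), 1)
        weight = np.pad(mask.astype(float), 1)
        acc_v = sum(known[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3))
        acc_w = sum(weight[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3))
        out[..., channel] = acc_v / np.maximum(acc_w, 1e-9)
    return out

# A flat, mostly-green "aurora" patch: R=0.1, G=0.8, B=0.2 at every pixel.
scene = np.dstack([np.full((4, 4), v) for v in (0.1, 0.8, 0.2)])
r_mask, g_mask, b_mask = bayer_masks(4, 4)
mosaic = scene[..., 0] * r_mask + scene[..., 1] * g_mask + scene[..., 2] * b_mask
print(np.round(demosaic(mosaic), 2))   # recovers the flat colour at each pixel
```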
HOW THE CAMERA COMPARES TO THE HUMAN EYE IN LOW LIGHT CONDITIONS….
The interplay between rods and cones in the retina enables us to perceive the diverse range of light conditions and color information in our surroundings. While rods excel in low-light situations, providing us with night vision and motion detection, cones contribute to our detailed vision, color perception, and the ability to appreciate the world's visual intricacies.
At night, our rods help us distinguish motion, and so we are able to see the active movement of the auroras, although in monochromatic vision. The cones do come into play at night when the auroral light is bright enough for them to distinguish the green/yellow wavelengths. On really intense auroral nights the light is bright enough for the cones to also distinguish red wavelengths, and women tend to perceive these reds better than men. Pinks and purples are near impossible for our cones to pick up. Blue is a wavelength that is very challenging for our cones at night; it is perhaps possible to see it during sunset auroras, when the setting sun can light up the blues in the higher atmosphere during nautical twilight. This means that when viewing the auroras the human eye can see their active motion, but cannot see the details in the shadows, and has only a limited perception of greens, yellows and a bit of the reds.
WHAT IT ALL MEANS FOR NIGHT VISION…
Humans are not nocturnal creatures, so evolution has not worked its magic on our night vision. Rods are predominantly for night vision; they are monochromatic and cannot distinguish colors at night. Cones need a lot of light in order to distinguish the RGB light spectrum; during night vision they have very limited ability in the blue and red spectrum and a bit more capability in the green spectrum. Modern camera processors, by contrast, are highly capable in low light, producing images with little noise at high ISO settings, and they are very efficient at splitting light into its RGB spectrum at night, thereby distinguishing the yellows, pinks and purples of the Northern Lights.
The above differences simply mean that, at night, the camera has far more ability to collect light and can distinguish color more efficiently than the human eye. This leads to a lot more detail in the shadow regions of the pictures and, especially for auroras, a better perception of colours, particularly the blues, reds, pinks, and purples, than the human eye can manage. The camera is therefore able to capture the intensity of the auroral light and distinguish its colors at the same time, which is why aurora chasers observe that the camera pictures look more intense and more colourful than what they saw.
POST PROCESSING AND BRINGING OUT THE HUES…
I have seen some other interesting colours in pictures, especially orange auroras (neon interaction auroras). It is always difficult to comment on those (not knowing the post-processing techniques used), since, as discussed above, the human eye has night vision limitations beyond the green spectrum. Just because the human eye cannot perceive those colours does not mean the colours are non-existent. By increasing saturation in post-processing, one can bring out the hidden colours, and it is always possible to have orange hues, especially during the dawn or dusk hours. It is always a bit of a heated debate in photography forums whether to post something true to what human vision sees or, with a bit of creative license, something more artistic.
A FEW FROM THE ARCHIVES ON WHAT THE CAMERA CAPTURED VS WHAT MY EYES SAW...
Hoodoos Aurora - Early Night
This image was shot during the early hours of the night in March 2018. The sun had just set and darkness was setting in; the auroras had been very active prior to sunset. On this night, I could clearly see the active motion of the aurora pillars, and both the greens and the blues were visible, though the greens were far more vivid than the blues to the human eye. This has probably been the only time I have seen blue auroras with the human eye.
Barn Auroras - Before Midnight
This image was shot just prior to midnight in August 2018. The sun had just set and it was astronomical twilight. On this night, I could clearly see the active motion of the aurora pillars, and the greens were clearly visible. The pinks, purples and reds were not visible to the human eye.
Electric Pole Aurora
This image was shot just prior to sunrise in November 2021. This was a night of high auroral activity, a G3 storm. The auroras were very active throughout the night, from dusk to well past dawn. On this night, I could clearly see the active motion of the aurora pillars, and the greens and reds were very vibrant and visible to the human eye.
I hope this article gave you a good perspective on, and some insight into, the differences between the camera and the human eye for Northern Lights photography. When it comes to night photography, the comparison between cameras and the human eye becomes even more pronounced. The human eye has not evolved for night vision, and our rods and cones provide only limited capability in low-light conditions. Cameras, however, through longer exposures, higher ISO settings, and specialized equipment such as fast low-light lenses, along with their ability to accurately split light into different wavelengths, allow photographers to capture colors not perceived by the human eye.
I would love to hear your thoughts and feedback on the above blog, and whether I have missed any important aspects. If you have any questions, please reach out to me via the contacts section and I will be glad to connect.
