
Central and peripheral vision

Often, a scene that looks attractive to our eyes comes out completely wrong in the photograph: with a whitish, washed-out sky, with black holes in place of the shadows, with unnatural color casts. What is the reason? Why can't the camera simply record the scene as it is? In fact, it tries to, within its modest abilities. The problem is that we ourselves never see the world as it really is. Our eyes and brain do a tremendous amount of work so that we can enjoy the surrounding reality. The camera cannot do this, so you will have to think for it and perform non-obvious, not always natural manipulations to get images that look natural.

Central and peripheral vision
The part of our field of view that is sensitive to detail is very small, about three degrees. You can see this for yourself: fix your gaze on one letter in this text and try to make out the surrounding letters without moving your eyes. As you move away from the center, you quickly lose the ability to distinguish small details. Peripheral vision is very sensitive to movement, but not to detail. To build a detailed image, the eye constantly scans the scene, sending the brain information about individual fragments moment by moment; each fragment is processed separately and then assembled into a complete picture. The camera records the entire scene as it is, not caring that different parts of it have different semantic significance, need different color and brightness correction, and, in fact, should be photographed in completely different ways. Hence all the problems.

Dynamic range
When the eye looks at light or dark areas of the scene, the pupil changes its diameter, narrowing for bright objects and widening for shadows, thus adjusting the amount of light that falls on the retina. In addition, the retinal receptors can vary their sensitivity to light depending on its intensity. As a result, we can distinguish detail in both highlights and shadows, adapting to high-contrast conditions. The camera exposes the entire scene with fixed, preset aperture, shutter speed, and ISO values, and therefore cannot accommodate the range of light levels in a high-contrast scene. The solution: avoid scenes whose contrast exceeds the dynamic range of your camera. If the contrast is high, try to soften it, for example by using a reflector or fill flash to lighten the shadows slightly. If you cannot influence the lighting and have to sacrifice either the light or the dark areas of the scene, sacrifice the shadows. We are more attuned to perceiving detail in the highlights, so blocked-up shadows look far less unnatural than flat, washed-out highlights. Finally, if you are not lazy (and I usually am), no one forbids you to use the HDR (High Dynamic Range) technique: make several exposures of the same scene, exposing separately for the dark and light areas, and then combine them into a single image in a graphics editor.

Overexposure
It doesn't look like a sunset: the sky has been washed out by overexposure. Let's try making the next shot slightly underexposed.
Underexposure
That's better. The sky looks the way it did in nature, but the meadow in the foreground is lost in darkness. Not good.
HDR
By combining the two previous exposures, we get a realistic image.
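If you prefer to do the blending outside a graphics editor, the same idea can be scripted. Below is a minimal sketch using OpenCV's exposure fusion (Mertens merging) as one possible way to combine a bracketed series; the file names are placeholder assumptions, and your own workflow may of course differ.

```python
import cv2
import numpy as np

# A bracketed series of the same scene: underexposed, normal, overexposed.
# The file names are hypothetical placeholders.
images = [cv2.imread(name) for name in ("under.jpg", "normal.jpg", "over.jpg")]

# Mertens exposure fusion keeps the well-exposed pixels from each frame;
# it needs no exposure metadata and yields a directly displayable result.
fused = cv2.createMergeMertens().process(images)

# process() returns a float image roughly in the [0, 1] range;
# scale it back to 8 bits before saving.
cv2.imwrite("blended.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```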
Semantic selectivity
The next interesting feature of human vision is its selectivity. We see what interests us and ignore details that are insignificant to us. When a photographer sees a subject worth shooting, such as a blooming spring tree, he points the camera at it and presses the shutter. Later, looking at the resulting picture at home, he discovers with annoyance that dull, decidedly non-blooming buildings are visible in the background behind the tree, a dumpster is sheltering under it, and the cloudless blue sky is crossed by high-voltage wires. I am exaggerating, but you see the problem. What to do? Watch carefully for clutter in the frame and try to exclude all unwanted objects. Pay special attention to the corners of the frame; there is often something extra there. The more careful you are at the moment of shooting, the less time you will have to spend editing the image afterwards.

The perception of volume
Humans have binocular vision. Having two eyes allows us to estimate the distance to objects in the three-dimensional world. A photograph is a flat interpretation of the original three-dimensional scene. The camera (unless it is designed for stereo shooting, of course) produces a flat, two-dimensional image, and not every three-dimensional scene retains its volume and depth when projected onto a plane. You can check this before shooting by closing one eye and looking at the scene the way your camera will see it.
