Central and peripheral vision

Often a scene that looks attractive to our eyes comes out unrecognizable in the photo: a whitish, washed-out sky, black holes where the shadows were, surreal color casts. What is the reason? Why can't the camera simply capture the scene as it is? In fact, it tries, to the best of its modest abilities. The problem is that we ourselves never see the world as it really is. Our eyes and brain do a tremendous amount of work so that we can enjoy the surrounding reality. The camera cannot do any of this, so you will have to think for it and perform non-obvious, not always natural manipulations to get images that look natural.

Central and peripheral vision
The field of view that is sensitive to detail is very small: about three degrees. You can verify this by fixing your eyes on one letter in this text and trying to read the surrounding letters without moving your gaze. At a typical reading distance of half a meter, three degrees covers only about 2.5 cm, a few words of text; away from that center you quickly lose the ability to distinguish fine detail. Peripheral vision is very sensitive to movement, but not to detail. To build a detailed image, the eye constantly scans the scene, at every moment sending the brain information about individual fragments, from which, after separate processing, a complete picture is assembled. The camera records the entire scene as it is, not caring that different parts of it carry different semantic weight, need different color and brightness correction, and, in truth, ought to be photographed completely differently. Hence all the problems.

Dynamic range
When the eye looks at light or dark areas of a scene, the pupil changes its diameter, narrowing for bright objects and dilating for shadows, thereby regulating the amount of light that falls on the retina. In addition, the retinal receptors themselves vary their sensitivity depending on light intensity. As a result we can distinguish detail in both highlights and shadows, adapting to high-contrast conditions. The camera exposes the entire scene with one fixed set of aperture, shutter speed, and ISO values, and therefore cannot span the brightness range of a high-contrast scene. The practical advice: avoid scenes whose contrast exceeds the dynamic range of your camera. If the contrast is high, try to soften it, for example with a reflector or fill-in flash to lighten the shadows slightly. If you cannot affect the lighting and must sacrifice either the light or the dark areas of the scene, sacrifice the shadows. We are more attuned to perceiving detail in the light, so black shadows look far less unnatural than flat, washed-out highlights. Finally, if you are not lazy (I usually am), no one forbids you to use the HDR (High Dynamic Range) technique: make several exposures of the same scene, exposing separately for the dark and the light areas, and then combine them into a single image in a graphics editor.
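To make "exceeds the dynamic range" concrete: clipped pixels pile up at the very ends of the histogram, and you can simply count them. Here is a minimal Python sketch, assuming Pillow and NumPy are installed; the file name and the 1% threshold are illustrative choices, not fixed rules.

```python
import numpy as np
from PIL import Image

def clipping_report(path, threshold=0.01):
    """Print what fraction of pixels is clipped to pure black or white."""
    gray = np.asarray(Image.open(path).convert("L"))  # luminance channel only
    total = gray.size
    blacks = np.count_nonzero(gray == 0) / total    # crushed shadows
    whites = np.count_nonzero(gray == 255) / total  # blown highlights
    if whites > threshold:
        print(f"{whites:.1%} of pixels are blown highlights")
    if blacks > threshold:
        print(f"{blacks:.1%} of pixels are crushed shadows")
    return blacks, whites

clipping_report("sunset.jpg")  # hypothetical file name
```

Real JPEGs may clip slightly below 255 and above 0, so in practice you might widen the test to the outermost few levels, but the idea is the same: if a noticeable share of pixels sits at either extreme, the scene did not fit.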

Overexposure
It doesn’t look like a sunset, because the sky is whitewashed with excessive exposure. Let’s try to keep the next image a little underexposed.
Underexposure
Get better. The sky looks as it did in nature, but the meadow in the foreground is lost in darkness. Disorder.
HDR
By combining the two previous images, we get a realistic image.
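The combining step itself need not be manual layer work in an editor. Below is a short Python sketch using OpenCV's Mertens exposure fusion, a common lightweight stand-in for a full HDR pipeline; the file names are placeholders, not files from this article.

```python
import cv2

# Placeholder file names for two bracketed shots of the same scene.
exposures = [cv2.imread("under.jpg"), cv2.imread("over.jpg")]

# Mertens exposure fusion keeps the best-exposed regions of each frame;
# unlike "true" HDR merging it needs neither exposure times nor tone mapping.
fused = cv2.createMergeMertens().process(exposures)

# process() returns float values in roughly [0, 1]; rescale to 8-bit to save.
cv2.imwrite("fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```

Mertens fusion simply weights each frame by how well-exposed each region is, which is why it skips the camera-response calibration that a full HDR merge would require.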
Semantic selectivity
The next interesting feature of human vision is its selectivity. We see what interests us and ignore details that do not matter to us. When a photographer sees a subject worth shooting, say a blossoming spring tree, he points the camera at it and presses the shutter. Later, examining the resulting picture at home, he is annoyed to discover that behind the tree loom dull, distinctly non-blossoming buildings, a dumpster shelters beneath it, and the cloudless blue sky is crossed by high-voltage wires. I exaggerate, but you see the problem. What to do? Watch carefully for clutter in the frame and try to exclude every unwanted object. Pay special attention to the corners of the frame: something extra often hides there. The more careful you are at the moment of shooting, the less time you will spend editing the image afterwards.

The perception of volume
Humans have binocular vision: having two eyes lets us judge the distance to objects in the three-dimensional world. A photograph is a flat interpretation of the original three-dimensional scene. The camera (unless it is designed for stereo shooting, of course) produces a flat, two-dimensional image, and not every three-dimensional scene keeps its sense of volume and depth when projected onto a plane. You can check this before shooting by closing one eye and looking at the scene the way your camera will see it.
