
Saturday, September 7, 2024

Confusing Concepts - Digital Image "Sharpness"

HOPEFULLY, THIS final installment of my 3-part series will tie some things together. The concepts of resolution, diffraction, and image sharpness are integrally related, and difficult to discuss separately, even though I have attempted to do so. In my view, the last concept is the most important. If an image looks acceptably sharp, I normally do not care how it is affected by the other two concepts. They are still part of the equation, though. In my own photography, I try to achieve sharpness in the parts of the image that I think should be sharp. That doesn't mean every photo must be sharp across the entire image. In some cases, we purposely want parts of the image to remain out of focus. And in the rare case, we may not want any part of an image to be in sharp focus.

There is no perfectly "sharp" digital image

THE MOST common approach, though, is to try to ensure that our photographs are in sharp focus. But what does that mean? As I have noted here in the past, lack of sharp focus will usually ruin an otherwise nice photograph. In the numerous times I have discussed "sharpness" here, I have often described the concept as apparent sharpness.

WHY DO I call it "apparent?" There is no perfectly "sharp" digital image. There are many things that influence our perception of sharpness in a digital image, including contrast, viewing distance, focus, camera and/or subject movement, lens quality and design, image sensor size and design, and, of course, resolution and diffraction.

The concepts of resolution, diffraction, and image sharpness are integrally related

THERE IS a more fundamental element that goes to the heart of "apparent sharpness:" the way a digital image is recorded and displayed. Most of us remember a little bit of computer science from way back, and are familiar with the way a computer encodes information in "bits and bytes," or 1's and 0's. Those are the same basic building blocks that make up digital images. They start out as black-and-white 1's and 0's, which are recorded and, as they accumulate, stack against each other to create shape. The "line" we see that forms the outline of detail in an image is created by contrast between these "pixels" - a dark pixel and a light pixel sitting opposite each other. Now that is really a kindergarten (maybe even pre-school) explanation, but it is probably the best I am qualified to do.

There are many things that influence our perception of sharpness

IN THE first installment in this series ("Resolution"), we introduced two filters that are used in the above process. By the time our final recorded image is made, the light rays must pass through a lens (which is almost always a series of glass elements constructed together to "bend" the light a certain way), whatever filter(s) we might be using in front of the lens, an "anti-aliasing" filter, and a (color) Bayer filter. All that glass in front of the sensor means we are going to have some inherent softness in any digitally recorded image. If we want color images, there is no avoiding the Bayer (or some substitute) filter. We can, however, exert some control over the other physical elements. Many (if not most) newer cameras no longer use the anti-aliasing filter. When I selected my own Sony a7rii, I did so in large part due to Sony's purposeful exclusion of an anti-aliasing filter on their "r" versions. That does introduce another potential issue (moiré), which is normally easily fixed by photographic approach and with post-processing. I have said many times here that I only rarely put a filter on the front end of a lens. My thinking is that I don't spend the money on quality glass just to put another piece of glass in front of it that will surely affect the image quality. I applied the same reasoning to the anti-aliasing filter.

NO MATTER what, though, the other factors above are going to result in some softness of an image. Our goal is to obtain as much apparent sharpness as possible. One way to do that is to create better-defined contrast between pixels along the edges of detail in an image. Generally, this means making the contrasting blacks more purely black and the whites more purely white. Every software sharpening process involves an increase in contrast along those edges. There is a lot of nuance to the process. For years, sharpening in Photoshop was almost an exercise in alchemy. The generally agreed best tool in the drawer at the time was called (ironically) "unsharp mask." That name goes back to a traditional wet-darkroom masking process that is not only beyond the scope of this blog, but beyond my ability to explain. 😅 For me (and plenty of others, I am sure), using the "unsharp mask" was mostly trial and error. The trick was to make the adjustments subtle, or your "sharpened" image could become a ghastly-looking mess.
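FOR THE technically curious, here is a minimal sketch of the unsharp-mask idea in Python (my own illustration using the Pillow and NumPy libraries, not Photoshop's actual implementation): blur a copy of the image, take the difference between the original and the blur, and add a scaled portion of that difference back. The radius and amount parameters are just illustrative.

```python
# Minimal unsharp-mask sketch (illustrative only, not Photoshop's exact algorithm).
# Requires: pip install pillow numpy
import numpy as np
from PIL import Image, ImageFilter

def unsharp_mask(path, radius=2.0, amount=0.6):
    original = Image.open(path).convert("RGB")
    blurred = original.filter(ImageFilter.GaussianBlur(radius))

    orig = np.asarray(original, dtype=np.float32)
    blur = np.asarray(blurred, dtype=np.float32)

    # The "mask" is the difference between the image and its blurred copy;
    # it is largest along edges, so adding it back boosts edge contrast.
    sharpened = orig + amount * (orig - blur)
    sharpened = np.clip(sharpened, 0, 255).astype(np.uint8)
    return Image.fromarray(sharpened)

# Example use: unsharp_mask("photo.jpg", radius=1.5, amount=0.5).save("photo_sharp.jpg")
```

Push the amount or radius too far and you get exactly the halos and ghastly results described above.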

All that glass in front of the sensor means we are going to have some inherent softness in any digitally recorded image

OF COURSE most of us are working with color, and of course there are many images where the edges do not consist of pure black and white pixels. Often, we do not want the pixels in between (which we usually refer to as mid-tones) to have high contrast, and part of the "unsharp mask" alchemy was adjusting so that only the parts we wanted to sharpen were affected. Left to their own mischief, sharpening tools can not only affect an image's apparent sharpness, but they can also introduce color casts. One approach to this was to apply the sharpening only to the brightness (luminosity) information of the image, rather than to its color information. Applying this to selected portions of the image required some skill that not all of us found easy to master. Thankfully, some of those who did - and were really good at it - provided us with some pre-made masking tools. In the early 2000's, a photographer named Tony Kuyper made the luminosity mask popular by offering his Photoshop Actions for a very reasonable cost. I have them somewhere, but never really mastered them - though I know some others who have.

I DID spend an awful lot (too much) of time studying and trying to master the art of sharpening on my own. The best (and still seminal, in my opinion) text resource is "Real World Image Sharpening," an Adobe Photoshop and Lightroom focused book written by (the late) Bruce Fraser and Jeff Schewe (two of my favorite digital processing authors). It is a $50 purchase on Amazon, so it is not inexpensive. Nor is it "Reader's Digest" level reading. If you like technical "under the hood" stuff (I do), it is a fascinating read. Fortunately, there have been some folks (including Fraser and Schewe themselves) who have - over the years - provided us with relatively easy-to-use, pre-programmed versions of their handiwork. Today, most post-processing programs contain sharpening utilities. Some are better than others. Years back we didn't have the number of post-processing choices. In 2009, PK Sharpener was introduced by a company called Pixel Genius, founded by Fraser, Schewe, Seth Resnick, and a few other well-known Photoshop gurus (interestingly, Nik Sharpener Pro, part of the Nik collection, was brought to market in 2006 - but I had not yet been introduced to Nik at that time). Like Photoshop itself, most of the utilities in the package were beyond my needs (and ability to understand). But what I could use did a better job than I had ever been able to do before. A couple of years back, I did some of my own empirical experimentation comparing the Nik and PK sharpeners. I found the differences to be "nuanced." I ultimately stayed with the PK Sharpener program (which I believe is no longer available). The point is that we don't have to become "under-the-hood" sharpening experts, as that work has been done for us and incorporated into virtually every software package out there today. The Nik and PK software can be easily loaded as "plugins" to Photoshop and Lightroom (how and if they work with other software, I don't really know).

WHAT ALL this work done by the experts on digital processing has given us is a nice collection of tools to achieve the most "apparently sharp" images we can with our own recorded digital images. In their Real World book, Fraser and Schewe brought a complete sharpening workflow to light. There are "recipes" in the appendix of the book for Photoshop Actions that will accomplish the process they espouse. I don't know if very many people even write their own actions anymore. But if you are that type, it may be worth the $50. Their process, the one that seems to be the accepted approach today, posits that sharpening should be done in three separate phases or steps. The first phase is what they referred to as "pre-sharpening," and is (mostly) applied to raw files to account for the issues I spoke about above that are created by the lens and sensor filters. Every raw image converter I know of contains a pre-sharpening algorithm. In my "empirical" testing above, I concluded that the Adobe raw converter's "default" sharpening does as well as any of the others (including my PK Sharpener), and so I leave it at its default setting (25%) to save myself the pre-sharpening step in my workflow. If you prefer a more "hands-on" approach, the setting can be set to zero. I put "empirical" in quotes because - of course - there is going to be some subjectivity in this analysis. It is my own subjective conclusion, but you should probably reach your own.

THE SECOND phase of sharpening is best done, in my view, with a more "hands-on" approach. It is sometimes referred to as "targeted sharpening." Targeted sharpening can be applied globally to an image, or to just select parts of it. Some images benefit from sharpening only certain areas. Shadow areas often won't benefit from sharpening, and sharpening them can sometimes make the image worse, as the sharpening "highlights" unwanted noise in the image. In some images with areas of shallow depth of field, we purposely want to leave those areas out of focus and unsharpened, while sharpening other parts of the image that we want in critical focus. The beauty of the targeted phase is that we can use various masking techniques to selectively sharpen the image. This can be done manually, or some software has algorithms that do a pretty decent job of doing it for you.

FINALLY, WE should consider whether every image should be sharpened for "output." For many years, I made my own inkjet prints. There are major differences between the way an image is "projected" on a screen and the way it is printed with ink. Ink is laid onto paper in microscopic droplets of colored pigment. Because they are liquid, even though microscopic, those droplets are going to have some "runout." A print is also a reflective medium, and as such is perceived visually very differently than projected media. I often found that I needed to apply much stronger sharpening to my print files. On screen, they would have an over-sharpened look, but on paper, they were just right.

TODAY, WE have another new approach to sharpening, denominated "AI" sharpening. These algorithms use so-called artificial intelligence, drawing on a memory bank of hundreds of thousands of images, to sharpen by replacing pixels with sharper new pixels. Personally, I have not been as impressed with it as all the testimonials seem to suggest. I have tried it a couple of times and have either felt it didn't live up to the hype, or it looked fake. I have consistently said you cannot fix a truly blurry image in digital processing. With AI, that view will undoubtedly change. I have seen so much "progress" with AI in just a couple of short years. In my view, it is not there yet. But it is certainly worth keeping an eye on. It is coming.

Saturday, August 31, 2024

Confusing Concepts - Diffraction

OF THE 3 different concepts in this 3-part series, this one is probably the most technical, the most difficult to understand, and certainly the most difficult to explain. Sensor resolution, we saw, was really just a matter of the relative size of the "container" (the sensor size), the size of the individual photo sites (pixels), and the number of photo sites within a given sensor dimension. We also said, however, that these things don't work in a vacuum. A sharp, detailed (high-resolution) image is affected not only by pixel number and size, but also by diffraction. The creation and presentation of a digital photographic image is not a precise science. In fact, I am going to introduce a term that will be central to the final post about image sharpness: "appearance." We probably really ought to refer to image sharpness in a digital photo as "apparent sharpness." The reality is that there is no absolute sharpness - only apparent sharpness. What do I mean by that? Stay tuned for the third and final installment of this series: "Confusing Concepts - Image Sharpness."

The creation and presentation of a digital photographic image is not a precise science

THE EFFECT of diffraction on a digital image is a strongly related concept, however. To make a photographic image with virtually any camera and/or medium, we must focus rays of light through a lens. We mentioned in the previous installment that almost all lenses are circular. A primary reason for this is that a circular image circle delivers the rays of light most consistently and smoothly from edge to center. The round lens, however, "bends" the light rays, which generally requires a series of glass elements to - if you will - "unbend" them.

WHY DOES any of this matter? Diffraction occurs during the process of "bending" the light through the lens. Diffraction happens when light waves pass through an opening and spread, diverging from parallel. As a general rule, diffraction is affected by the size of the opening that the light waves pass through, and by the wavelength of the light. Let's address opening size first. There are going to be two mechanical factors. First, the physical size of the lens circle at its widest aperture (which is what brings relevance to the above "coverage" discussion) is constrained by design. It follows that we should experience less diffraction from the larger openings of lenses designed to cover larger sensors. Confoundingly, as our apertures get larger, the depth of field of an image gets more shallow, so from front to back the apparent sharpness of the image seems less. Somewhere, "the twain shall meet," creating the "sweet spot" I talk about below.

THE OTHER mechanical factor is lens aperture (within a given system). Generally, the smaller the aperture (for the same reasons that the size of the physical lens circle matters), the more diffraction, and vice versa. Note that I have referenced lens "size" and lens "aperture." I did not say f-stop number. Why? Because a given f-stop varies in physical size between different lenses. This is true both in terms of focal length within a system, and between different systems' lenses (i.e., an M4/3 lens at f8 will typically have a physically smaller opening than a "full frame" lens of equivalent field of view at f8).
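TO MAKE that concrete: the f-stop number is simply the focal length divided by the diameter of the effective opening, so the same f-stop means a different physical opening on different lenses. Here is a quick worked example, using focal lengths I chose only for illustration (a "normal" lens on each system):

```latex
% f-number N is focal length f divided by the aperture diameter D, so D = f / N.
\[ N = \frac{f}{D} \quad\Longrightarrow\quad D = \frac{f}{N} \]
% A "normal" m4/3 lens (25 mm) and a "normal" full-frame lens (50 mm), both set to f/8:
\[ D_{25\,\mathrm{mm}} = \frac{25\,\mathrm{mm}}{8} \approx 3.1\,\mathrm{mm}, \qquad D_{50\,\mathrm{mm}} = \frac{50\,\mathrm{mm}}{8} \approx 6.3\,\mathrm{mm} \]
```

Same f-stop, but the full-frame lens is working with an opening roughly twice the diameter.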

ANOTHER THING that affects diffraction is the wavelength of the light. Again, as a general proposition, longer waves spread (diffract) more than shorter waves do when passing through the same opening. Think about the spectrum of light: red light waves are among the longest in the visible range, so their diffraction blur is larger than that of the shorter blue waves. This helps explain why certain light conditions demonstrate the effects of diffraction more than others.

Every lens has its own "sweet spot"

EVERY LENS has what we sometimes refer to as its "sweet spot." That is where it is at its absolute sharpest performance. Most of us have an awareness that many lenses are not sharp across the frame at their widest apertures. We also have a general awareness that as we stop down the aperture, we tend to get increasingly (apparently) sharp images. Some of us have also been aware, over the years, that there is a point of no return, where not only does the lens no longer render an increasingly sharp image, but the image might even degrade some. This degradation is due to diffraction. Recall that we said above that diffraction increases as the lens opening gets smaller. This is why it is important to keep that "sweet spot" in mind. Generally, a "full frame" (35mm equivalent) lens will be at its sharpest at f8 - maybe f11. An M4/3 lens will probably be at its sharpest somewhere between f4 and f5.6. We will talk about why there is a difference shortly. All of this is, of course, also limited by lens design and overall quality. So-called "cheap glass," or zoom lenses trying to encompass too much zoom range, will mechanically and optically degrade image quality as well, sometimes introducing optical and color aberrations, and lack of contrast.

THERE IS another factor in the diffraction discussion other than lenses. Perhaps the most significant factor is sensor and pixel size. Once again, smaller pixels will be more susceptible to the effects of diffraction. That is the primary reason we find that "sweet spot" in M4/3 lenses to be at a wider aperture (f4 - f5.6).

THE CONTRIBUTORS to diffraction mean that there is an aperture on each lens that is that "sweet spot." While we have generalized, each lens has its own "spot," and you may need to do some empirical testing of each of your lenses to arrive at it. It is important to acknowledge that there will always be some diffraction at every lens aperture. The point where it becomes visibly deleterious to image quality is referred to as the point where the lens is "diffraction-limited." My definition here is, of course, overly simplified. The simplest "technical" definition I could find was: "The diffraction limit is the maximum resolution possible for a theoretically perfect, or ideal, optical system." Think back to our discussion of "resolution." They are interdependent, and this "technical" definition feels awfully circular to me. The ultimate conclusion for me is that diffraction is one of the primary factors which affect image quality (without regard to the quality of the equipment being used), along with resolution and sharpness.
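FOR THOSE who like to put rough numbers on the "diffraction-limited" idea, a common back-of-the-envelope check compares the diameter of the diffraction blur spot (the "Airy disk," roughly 2.44 x wavelength x f-number) against the sensor's pixel pitch. The sketch below is my own simplified illustration; the pixel pitches and the visibility threshold are assumptions, not gospel.

```python
# Rough check of where diffraction starts to bite, using the Airy disk rule of thumb.
# Airy disk diameter (microns) ~ 2.44 * wavelength * f-number; green light ~0.55 microns.
# Pixel pitches below are approximations assumed only for illustration.

WAVELENGTH_UM = 0.55

def airy_disk_um(f_number, wavelength_um=WAVELENGTH_UM):
    return 2.44 * wavelength_um * f_number

sensors = {
    "Full frame ~42 MP": 4.5,   # approx. pixel pitch in microns
    "Micro 4/3 ~20 MP": 3.3,
}

for name, pitch in sensors.items():
    print(name)
    for f in (4, 5.6, 8, 11, 16):
        disk = airy_disk_um(f)
        # One common (and rough) convention: diffraction starts to become visible
        # once the Airy disk spans roughly 2.5x the pixel pitch.
        note = "diffraction becoming visible" if disk > 2.5 * pitch else "fine"
        print(f"  f/{f}: Airy disk ~{disk:.1f} um, pixel ~{pitch} um -> {note}")
```

With these rough numbers, the full-frame sensor crosses the line somewhere past f8, while the m4/3 sensor crosses it past f5.6 - which lines up with the "sweet spots" mentioned above.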

We shouldn't let all this technical jargon get in the way of our creativity

DOES THIS all mean that you should always and only shoot at the "sweet spot" aperture of your lens? Of course not. As I am fond of saying here, all of photography is a compromise. The artistic part of composition means that we must work with the limitations of our tools. Sometimes we want very shallow depth of field. Sometimes we want the image to be crisp from front to back (one of the ways photographers have been dealing with this issue in still photographs, by the way, is a technique called "focus stacking"). But we shouldn't let all this technical (sometimes pixel-peeping) jargon get in the way of our creativity. It is just useful to know the limits of our equipment when applying it to our craft. Next time we will address that third factor: Image Sharpness.

Saturday, August 24, 2024

Confusing Concepts - Resolution

VOLUMES UPON volumes have been written about this topic. I am not for a moment trying to convince you that I am either an expert or the proverbial "last word." In recent weeks, I have read a few comments here, and online in other blogs (mostly in the discussion and comments), that seem to me to underscore a lack of complete understanding of the terminology. What motivates this blog (and a couple more to follow) is the thought that maybe I can shed some - albeit elementary - light on these topics. This is the first of a 3-part series.

PART OF the confusion probably stems from the fact that there are actually different kinds of "resolution" when we apply the term to photography. It is a rather broad term, which is often used imprecisely. The making of a photographic image involves a lens, a medium (these days mostly a digital sensor and resulting file), and a manner of display. Each of these components puts a different "spin" on the word "resolution." Consequently, when we are addressing resolution, we need to understand what kind of resolution we mean.

Sensor Size Comparison

THE RESOLUTION of a particular camera lens (or its "resolving power") simply refers to its ability to resolve detail. There are numerous factors that affect this ability, including lens design, the size and quality of the glass elements, coatings, etc.

IN THE case of digital cameras, resolution refers to the sensor used to record the digital image. This component of the optical "system" is perhaps the most difficult to get one's arms around. Sensors are intricate mechanisms. On a rudimentary level, they seem simple enough. They are just a collection of electronic recording sites (known as photo sites) grouped together on the camera sensor surface. They are, of course, microscopic in size. A more in-depth look at sensors leads us to realize that things aren't as simple as that sounds. Two significant factors are the size and number of individual photo sites. It seems evident enough that a smaller sensor will not be able to hold as many same-sized photo sites (or photo cells) as a larger sensor. Sensor size is functionally related to the lens circle. Smaller lenses will only "cover" a smaller sensor area. As the sensor gets larger, in order to cover the sensor area, lenses must be designed with larger circles. The reason lenses are circular is really beyond the scope of this article (and my expertise, 😰), but it is a matter of physics, and the desire to balance the light being directed by the lens. If you use an image sensor that is larger than the image circle, the image will show up framed as a circle encompassed by a black area outside the circle. As a general rule (we will see as we go on that these things don't work independently), smaller photo sites will have less "resolving" power than larger ones. Coupled with the concept of an optical occurrence known as diffraction (stay tuned), conventional wisdom has it that cameras based on smaller sensors will generally have less resolving power than larger ones. While not precisely correct, it is a valid consideration when using such equipment. I have only recently empirically tested (and concluded) that this applies to my m4/3 camera setup as compared to my "full frame" sensor gear. The rationale for this line of thinking is that it is difficult to match photo sites in terms of both number and size on a smaller sensor. My Olympus m4/3 sensor is roughly 1/4 the area of my Sony a7Rii "full (35mm equivalent) frame" sensor. At 20 megapixels (a measure of the number of photo sites on the sensor), it is among the highest-resolution m4/3 sensors available, to the best of my knowledge. My Sony, on the other hand, is 42 megapixels (and the newest iteration - the a7Rv - is 61 mp). Not only are there roughly twice as many sites, but each individual photo site is also physically larger. That combination creates conditions for increased sensor resolution.
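TO PUT some rough numbers on that comparison, here is a quick back-of-the-envelope pixel-pitch calculation (the sensor dimensions and pixel counts are approximate, and the math ignores the gaps and circuitry between photo sites):

```python
# Rough pixel-pitch comparison (approximate dimensions, for illustration only).
sensors = {
    "Full frame ~42 MP (e.g., Sony a7R II)": (35.9, 24.0, 7952, 5304),
    "Micro 4/3 ~20 MP": (17.3, 13.0, 5184, 3888),
}

for name, (width_mm, height_mm, px_w, px_h) in sensors.items():
    pitch_um = width_mm / px_w * 1000   # microns per pixel across the width
    area_mm2 = width_mm * height_mm     # total sensor area
    print(f"{name}: area ~{area_mm2:.0f} mm^2, pixel pitch ~{pitch_um:.1f} um")
```

The full-frame sensor has roughly four times the area but only about twice the pixel count, which is why each of its photo sites ends up noticeably larger.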

Bayer Color Filter Array

THERE IS more to the sensor story than photo sites and sizes, though. Diffraction plays a significant part in this equation, too. I will cover diffraction all by itself in the next post. During the early years of digital sensors, one of the concerns that designers (and users, of course) had was the phenomenon of "aliasing." As we have discussed here in the past, the basis of a digital image is the "lego-like" stacking of rectangular pixels to produce the shapes found in images. Because of these individual pixels, there are always straight-line transitions between pixels (to continue the analogy, between each lego block). At some level - particularly in lower-"resolution" images (in this case meaning fewer megapixels) - this produces the appearance of jagged edges (or "jaggies"). In order to address this concern, camera manufacturers put an anti-aliasing filter (known as a "low-pass" filter) in front of the sensor, designed to introduce a bit of blur. Obviously, my explanation is hopelessly oversimplified and the process is/was complex, if not consistent. As cameras added megapixels (my first Nikon D100 was a 6mp camera), and processing software (especially raw conversion engines) got better and better, the aliasing issue became less important. Indeed, I have personally looked for cameras whose specifications show no low-pass filter, reasoning that I don't want anything I don't absolutely need introducing softness. On the contrary, I am looking for the maximum sharpness I can get. In my view, the presence of an AA filter - though perhaps only very marginally - affects resolution. Neither of my current cameras (Sony a7rii and Olympus EM10iv) has an AA filter on its sensor.

Why are the pictures square if the lens is round? - (Steven Wright)

A SECOND filter (or filter array), known as a "Bayer Color Filter Array," is placed in front of almost every digital sensor. Through a process called digital sampling, the sensor creates the digital image. The Bayer filter involves additional color sampling, which produces the colors in our images, using the primary colors red, green, and blue. The sharp observer will note that there are (many) more green sensors than red or blue (see the illustration above). This is because our visual system is most sensitive to the green part of the spectrum, which is also where the sun emits the largest amount of light. Green light contributes much more to our perception of luminance. Color filter arrays are designed to capture twice as much green light as either of the other two colors. The takeaway here is that the Bayer filter is yet another path of interference between the light rays and the sensor sites. This introduces softness and, therefore, affects resolution. This also explains why most raw processing software has a "default" amount of sharpening (often referred to as "capture sharpening") that is applied automatically to a raw digital file. Most software (Adobe ACR, for example) allows the user to adjust, or even eliminate, that default sharpening.
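AS A toy illustration of that 2:1:1 ratio, here is the repeating 2x2 "RGGB" tile that most Bayer arrays are built from (real sensors and demosaicing algorithms are, of course, far more sophisticated than this):

```python
# Toy illustration of the repeating 2x2 Bayer tile (real demosaicing is far more involved).
import numpy as np

tile = np.array([["R", "G"],
                 ["G", "B"]])            # the classic RGGB pattern

mosaic = np.tile(tile, (4, 4))           # an 8x8 patch of photo sites
values, counts = np.unique(mosaic, return_counts=True)
for color, count in zip(values, counts):
    print(f"{color}: {count} of {mosaic.size} sites ({count / mosaic.size:.0%})")
# Green covers half the sites; red and blue cover a quarter each.
```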

all of these individual measures of resolution work together to create the end product

THE LAST of my three resolution considerations is the manner of display. For many years, the primary method of display was the print, on a photographic fiber medium. This has involved everything from pigments or dyes embedded into the medium (traditional photographic darkroom printing), to printing-press ink methods, to the more modern digital inkjet printing. By the time of the latter, we were also commonly projecting images onto cathode-ray tube (CRT) and, eventually, LCD screens. Prior to the emergence of digital, another method of displaying images was through what was called a color-transparency system (or simply, slides). Each of these presentation methods reacts differently in terms of resolution. The medium itself has its own "resolution," which - once an image is put into that form of presentation - becomes the predominant factor. With the ascendancy of social media, smart phones, and tablets, it is probably safe to say that digital projection is the most common manner of presentation today. Resolution in the context of presentation media has begotten perhaps one of the most confusing terminology puzzles of all. Resolution of an image when projected on a CRT/LCD screen is purely electronic and is often measured as pixels per inch (PPI), a measure of the size and density of the displayed image. When we speak of an inkjet print, however, the printer uses colored pigments to create a microscopic dot-based pattern on the medium. The correct resolution terminology here is "dots per inch" (DPI). DPI is also used for traditional printing-press type media presentations. The two (PPI/DPI) are often - confusingly - interchanged.
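AS A concrete example of the PPI side of the puzzle, here is the simple arithmetic for how large an image can print at a given PPI (the 6000 x 4000 pixel file below is just a hypothetical example, not a specific camera):

```python
# Hypothetical example: how large can a 6000 x 4000 pixel image print at a given PPI?
px_width, px_height = 6000, 4000

for ppi in (300, 240, 150):
    width_in = px_width / ppi
    height_in = px_height / ppi
    print(f"{ppi} PPI -> {width_in:.1f} x {height_in:.1f} inches")
# 300 PPI -> 20.0 x 13.3 inches, and so on.
```

The printer then lays down many ink dots per image pixel, which is why the DPI figure on a printer's spec sheet is much higher than, and not the same thing as, the PPI of the file.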

AS I said earlier, all of these individual measures of resolution work together to create the end product. When choosing and using camera gear, an understanding of these factors will make more sense out of your choices. When the hype from the seller, or the specifications from one of the testers out there, emphasizes a particular component's "resolution" or "resolving power," it is important to think about the other components. The highest-quality (think Leica or Zeiss) lens, paired with a "medium-format" sensor (or larger) camera (and yes, confoundingly, in the digital world, MF is bigger than "full frame"), used for an image that is only going to be seen on your FB or Instagram page, is extreme overkill. The final digital resting spot for the image cannot begin to match the resolution of the other two components.

Resolution in the context of presentation media has begotten perhaps one of the most confusing terminology puzzles in the realm of resolution

I AM not saying you shouldn't have high quality or high resolution equipment. I am saying that an understanding of resolution and its significant variability will help put your photography - and your gear needs/wants - in perspective. "Pixel peeping" is a (sometimes pejorative) description given to a lot of photographers these days who tend to place an over-emphasis on technical factors, like resolution, noise (see What's All the Noise about Noise), and diffraction (another term I will cover in an upcoming blog), over the more artistic part of photography. To be sure, some fundamental skills and reasonably good quality equipment are required to make sure the image is going to be viewable as intended. But beyond that, in many cases, the technical issues tend to be overblown, in my opinion.

WE STILL haven't told the whole story though! Stay tuned for upcoming blogs on Diffraction and Image Sharpness.