Magnification vs Resolution - What's the Difference
Understanding the quality of a lens comes down to two factors: magnification and resolution. But to understand the intricacies of sensor and lens quality, we need to understand how these metrics differ and how they work together.
In this article, we’ll:
Explain magnification concepts such as lens ratios
Examine the correlation between sensor size, pixel size, and resolution
Look at the Modulation Transfer Function (MTF), which helps us understand image clarity as details get finer
These are complicated concepts that take study to master. This guide provides an overview to get you started.
How Does Magnification Work?
Magnification is an optical property of a lens that describes how large an object appears in the image relative to its actual size.
More specifically, optical magnification is the ratio between the size of the image projected onto the camera sensor and the object’s actual size. We express this as either a ratio or a decimal, with 1:1 referred to as “life-size.”
In the ratio, the left number refers to the image size and the right number to the object’s actual size, and the ratio can become lopsided in either direction. For example, a 10:1 ratio means an object appears at ten times its original size, whereas a 1:10 ratio means it appears at just 1/10th of its original size.
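The ratio can be sketched in a few lines of code. This is a minimal illustration (the function name is our own; the values mirror the 1:1 and 1:10 examples above):

```python
def magnification(image_size_mm, object_size_mm):
    """Optical magnification: the size of the image projected on the
    sensor divided by the object's actual size."""
    return image_size_mm / object_size_mm

# A 10 mm object projected at 10 mm on the sensor -> 1:1, "life-size"
assert magnification(10, 10) == 1.0

# The same object projected at 1 mm on the sensor -> 1:10,
# one tenth of actual size
assert magnification(1, 10) == 0.1
```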
Photography Life explains this concept well and goes into depth on its implications for macro photography. They also explain the importance of sensor size in magnification, using a crop sensor as an example.
In their example, they capture the same scene twice, once with a cropped sensor. While the subjects are identical, the crop sensor’s image appears larger because the subject occupies a larger percentage of the frame. To this end, they explain that magnification and resolution are unrelated, but magnification has a significant impact on image capture.
In any form of photography, magnification is important for picking out a lens and understanding your depth of field. Depending on what you are capturing, you will need a different magnification to frame a single object rather than an entire landscape.
However, you should understand that the higher the magnification, the more sensitive the image becomes to motion. If your application generates a substantial amount of camera shake or subject motion, you will have trouble capturing a crisp image.
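A quick sketch of why this happens: any movement at the subject is scaled by the magnification before it reaches the sensor, so higher magnifications amplify shake (the function name and figures below are illustrative, not from the article):

```python
def blur_on_sensor_um(subject_motion_um, magnification):
    """Motion at the subject appears on the sensor scaled by the
    magnification, so higher magnification amplifies blur."""
    return subject_motion_um * magnification

# 5 um of subject motion during the exposure:
low = blur_on_sensor_um(5, 0.1)    # 0.5 um at 1:10 -- negligible
high = blur_on_sensor_um(5, 10.0)  # 50 um at 10:1 -- many pixels wide
```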
How Does Modulation Transfer Function (MTF) Work?
Resolution is more than just a single metric: it isn’t only about how much detail is captured, but about the quality of that detail.
As Cambridge in Colour explains, this property is described by the Modulation Transfer Function (MTF), which measures how well a lens preserves contrast between a subject’s light and dark regions. In other words, it describes how well adjacent details are delineated and isolated from one another, as opposed to blurring into one another as the details get finer.
This statistic isn’t related to megapixels, and it represents an unavoidable limit on any lens, tied to its aperture.
An MTF of 1.0 means contrast is perfectly preserved at a given level of detail. As the details get finer, though, the lens can no longer cleanly differentiate one line from the next, and the limit of its resolution is reached. Even a perfect lens shows this effect, because diffraction at the aperture sets a hard ceiling. The MTF curve steadily cascades down to 0; improving the quality of your lens only slows this falloff but does not stop it.
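That diffraction ceiling can be put in numbers. A minimal sketch using the standard cutoff frequency for an aberration-free lens, 1/(λ·N), where λ is the wavelength of light and N the f-number (the function name and default wavelength are our own choices):

```python
def diffraction_cutoff_lp_per_mm(f_number, wavelength_nm=550):
    """Spatial frequency (line pairs per mm) at which the MTF of an
    aberration-free lens falls to zero from diffraction alone."""
    wavelength_mm = wavelength_nm * 1e-6  # convert nm to mm
    return 1.0 / (wavelength_mm * f_number)

# Even a perfect lens at f/8 in green light cannot resolve
# beyond roughly 227 lp/mm
limit = round(diffraction_cutoff_lp_per_mm(8))  # 227
```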
LP/mm, or line pairs per millimeter, measures where on this cascade a lens sits. A line pair is simply one dark and one light line; LP/mm counts how many such pairs fit within a millimeter of the image.
We typically rate lenses at 50% MTF (MTF-50), a 50% loss in contrast, because of the way our perception works. This means a high-end lens with an MTF-50 of 50 LP/mm resolves finer detail than a lens with an MTF-50 of 20 LP/mm. Essentially, the statistic reports how many line pairs per millimeter the lens can reproduce before contrast drops to 50%.
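As a rough illustration of how MTF is computed at one level of detail, contrast is measured in the image and compared to the contrast of the original target (the function names and sample intensities below are hypothetical, not from the article):

```python
def modulation(i_max, i_min):
    """Michelson contrast of a line-pair pattern: how far apart the
    brightest and darkest intensities are, relative to their sum."""
    return (i_max - i_min) / (i_max + i_min)

def mtf(image_contrast, target_contrast):
    """MTF at one spatial frequency: contrast in the image divided
    by the contrast of the original target."""
    return image_contrast / target_contrast

# A pure black/white target (contrast 1.0) imaged with
# intensities 0.75 and 0.25 has contrast 0.5 ...
c = modulation(0.75, 0.25)  # 0.5

# ... so this spatial frequency is the lens's MTF-50 point
assert mtf(c, 1.0) == 0.5
```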
You’ll analyze MTF through charts, which can sometimes be tricky to read. These often contain lines of different thicknesses, colors, and styles, each indicating a different metric.
A bold line represents coarse detail (overall contrast), whereas a thin line represents fine detail (resolution). Black lines show the lens with the aperture wide open, and blue lines show it stopped down to f/8.0. A dashed line represents concentric line pairs, and a solid line represents radial line pairs, referring to their orientation.
One lens gives several readings on an MTF chart depending on its current configuration, so these charts can often look daunting at first, with eight separate lines moving amongst one another.
However, MTF charting is not a universally standardized practice, and comparing charts across cameras is sometimes difficult. An MTF chart derived from finished images represents the quality of your entire camera system, rather than just one component of it.
Additionally, when making these comparisons, you should consider the sensor’s size or crop, since printing both images at the same form factor enlarges the cropped sensor’s image more. When evaluating the image quality of a machine vision camera, it is imperative to understand the best means of making a comparison and not blindly compare one statistic to another.
Still, MTF is a helpful baseline in specific scenarios and offers insight into how a piece of equipment captures finer details.
How Big Are CCD and CMOS Sensors?
The sizes of CCD and CMOS sensors aren’t just about their physical footprint but about how many semiconductors can fit onto them.
With technological advancements and time, these shrink, just as pixel sizes shrink to fit more and more pixels onto a single platform or device. This constant iteration is known as Moore’s Law, which states that the number of transistors that fit onto a chip doubles roughly every two years.
However, it is still essential to understand the physical footprint in your application, as larger sensors can more naturally hold more pixels.
The Vision Doctor in Germany lists typical sensor sizes for standard industrial vision cameras, which are as follows.
1” Format: 16 mm diagonal, 119 mm²
⅔” Format: 11 mm diagonal, 58 mm²
1/1.8” Format: 9 mm diagonal, 39 mm²
½” Format: 8 mm diagonal, 32 mm²
1/2.5” Format: 7 mm diagonal, 25 mm²
⅓” Format: 6 mm diagonal, 17 mm²
¼” Format: 4 mm diagonal, 8 mm²
Vision Doctor notes that these format names are holdovers from the transition from glass vacuum camera tubes and CRT-era equipment to more modern imaging technologies. The format titles do not match the actual sensor dimensions because the light-sensitive surface of a camera tube was only about two-thirds of its outer diameter.
To put these metrics in context, they note that ⅓” sensors are typical in 640x480 cameras, ½” in 1280x1024, and 1/1.8” in 1600x1200.
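From a format’s diagonal and its typical pixel resolution, you can estimate the pixel pitch. A minimal sketch, assuming square pixels and ignoring the sensor’s inactive border (the function name is our own; the pairing of format and resolution follows the examples above):

```python
import math

def pixel_pitch_um(diagonal_mm, width_px, height_px):
    """Approximate pixel pitch from the sensor diagonal and the
    pixel resolution, assuming square pixels."""
    diagonal_px = math.hypot(width_px, height_px)
    return diagonal_mm * 1000 / diagonal_px  # mm -> um

# A 1/3" sensor (6 mm diagonal) at 640x480 works out to
# roughly 7.5 um pixels
pitch = round(pixel_pitch_um(6, 640, 480), 1)  # 7.5
```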
Another important metric for understanding a sensor is the size of its pixels, which on conventional cameras is growing smaller and smaller. The size of the sensor’s pixels impacts how much light each one collects.
Larger pixels collect more light, which in turn reduces the amount of visual noise in an image. In machine vision applications, this means you will often want larger sensor pixels for high-quality capture. For example, Vision Doctor cites that sensors with pixels 14 or 10 micrometers in size are preferable for line scan applications.
These larger pixels make high-quality machine vision cameras more expensive, but the upgrade is essential for accurate imaging. Vision Doctor also explains that a larger sensor doesn’t necessarily correlate to a higher resolution. You should consider the size of the pixels on a CMOS or CCD chip when selecting a solution.
We understand magnification as a measure of image size and resolution as a means of capturing finer detail. But these explanations are somewhat generic, and there is much more to understanding how resolution dwindles, how it applies to optical magnification, and how it relates to your imaging sensor.
The most important thing to grasp is that all of these components play a role in the depth and quality of your imaging solution, so consider them seriously when designing an application.
Visit Navitar for more information on our line of high-quality machine vision cameras.