The Spooky Forest of Resolution Terminology: LPI and DPI and PPI! Oh, My!
Dorothy: Do — do you suppose we'll meet any wild animals?
Tin Man: Mmm, we might.
Scarecrow: Animals that — that eat straw?
Dorothy: Lions and tigers and bears. Oh my!
— from "The Wizard of Oz"
The terminology used to describe resolution can be confusing and perhaps downright scary to some. While the terms "dots per inch" and "pixels per inch" tend to be used pretty much interchangeably, there is a difference, and indeed sometimes neither one is actually relevant.
With the ever-increasing numbers of megapixels new camera sensors are capable of capturing, it's clear that individual pixels are tiny. Dots are practically the very definition of tiny. But to say the two concepts are synonymous is a leap too far. For one thing, the printing trade has been doing its thing since long before digital photography came about. As such, to understand this whole confusion about terminology, I'd like to begin from the beginning and proceed from there.
If you grew up long enough ago, you probably have memories of looking at the color comics in the Sunday newspaper with a magnifying glass to see that the pictures were actually composed of regular patterns of small colored dots. If you grew up more recently than this, you have no idea what a newspaper is. At any rate, back then, images were printed with just four colors: cyan, magenta, yellow, and black. The appearance of additional colors was achieved by overprinting dots of varying sizes from this limited palette. Even before color printing came about, the same process was employed, albeit with just a single ink color. Ink dots were always printed in the same pattern, known as a screen pattern. The only thing that changed to create the illusion of various shades was the size of the dots for each of the four constituent ink colors.
That is, the dots of each color varied in size but were always equally spaced. Varying the size of the colored dots was achieved by varying the amount of ink used for each dot. More ink meant bigger dots. As such, it made sense to measure how far apart the dots were and to calculate how many occurred per inch as a measure of the resolution possible for that printing method. But if you expected me to say that this measurement was one of "dots per inch" or DPI, you'd be mistaken. The printing trade wasn't counting the resulting ink dots; they were instead counting the number of lines of dots in the regular array that comprised those dots. This measurement was termed "lines per inch" or LPI. The term relates to the origins of the process, wherein the lines were indeed literal lines finely etched on glass plates that formed places where ink would pool and from which it would be laid down on paper. The dots were the outcome. The lines were the critical measurement. And thus "lines per inch."
DPI, or "dots per inch," enters the picture when we look at how those screen dots are generally reproduced today. Rather than pressing paper to screened glass plates, most images now are produced under computer control, and DPI measures the discrete addressable locations onto which a printer can place ink. It takes multiple printer locations in an array to construct a single screened ink dot. Think of each printer dot as being either on or off. They don't vary in size or in location. The printer either puts ink at a given location or it doesn't. If you want to print a small dot, you fill in just the location in the center of a square array of potential dots and leave the rest blank. To create larger and larger screen dots, you progressively color in more and more printer dots surrounding that center point.
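That "fill outward from the center" idea can be sketched in a few lines of code. This is purely a toy illustration, not any actual printer driver's algorithm; the cell size and fill order are my own assumptions for demonstration:

```python
# Toy sketch of AM halftoning: one "screen dot" built from a square cell
# of on/off printer dots. Darker tones switch on more printer dots,
# growing outward from the cell's center so the cluster reads as one
# larger ink dot.

def screen_dot(tone, cell=4):
    """Return a cell x cell grid of 0/1 printer dots for tone in [0, 1]."""
    center = (cell - 1) / 2
    # Rank every printer-dot position by its distance from the cell center.
    order = sorted(
        ((r, c) for r in range(cell) for c in range(cell)),
        key=lambda p: (p[0] - center) ** 2 + (p[1] - center) ** 2,
    )
    n_on = round(tone * cell * cell)  # how many printer dots to ink
    grid = [[0] * cell for _ in range(cell)]
    for r, c in order[:n_on]:
        grid[r][c] = 1
    return grid

# A light tone inks only the central locations; a dark tone fills the cell.
for row in screen_dot(0.25):
    print("".join("#" if v else "." for v in row))
```

Run with a tone of 0.25 and only the four center locations are inked; push the tone toward 1.0 and the cluster grows to fill the whole cell, just as the larger ink dots of traditional screening do.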
Now it's time to confuse matters a bit further, because, you know, technology marches on.
With computers it became possible to overcome limitations in traditional printing stemming from the regularity of the screen pattern. The requirement that screen dots vary in size but not placement to represent differing tones made it difficult to create smooth image gradients. Also, if dots were placed just slightly off their intended targets, the regular pattern of screen dots could produce an optical moiré effect that distracted from the intended image appearance. To address these issues, the screening techniques used to create the best printed images today have evolved.
Images today are still printed using colored dots but the technology generally employed has changed to use something called stochastic (a fancy word for random) screening. No longer are the screen dots uniformly arranged and varying in size, but instead they are randomly arranged and more or less constant in size. To make a color more intense, more dots are printed closer together. To produce a lighter color for that ink, fewer dots are printed farther apart.
"Old school" printing can be described as "Amplitude Modulation" (or AM) screening in that color intensity differences were achieved by varying (or "modulating") the amount of ink per dot (the amplitude of the content). By contrast, newer stochastic printing is referred to as "Frequency Modulation" (or FM) screening. Rather than varying the amplitude (size) of the dots, it is the frequency of (or spacing between) dots that is modulated to vary the color intensity.
This change greatly complicates the relationship between "dots per inch" and "lines per inch" in that lines are now essentially gone. We're no longer bound by placing dots on a regular screen array and indeed now intentionally place them randomly to avoid the lines and the potential for moiré and other associated issues.
This is where Epson and other inkjet printers, or perhaps just their marketing departments, confuse things for their own aims. Rather than quoting the equivalent DPI numbers their stochastic screening equates to, they quote only the density of individual ink droplets their print heads are capable of. That is, they are actually counting the individual FM screened dots while subtly implying they are referencing the fixed grid of traditionally printed screen technologies. Epson printers may be able to print individual droplets close enough together to pack up to 2880 of them per inch, but this doesn't mean you can distinguish objects that small in the printed output since many, many of those tiny ink dots are needed to render anything discernible.
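To put a rough number on that gap: the figures below are purely hypothetical (the eight-droplet cluster width is an assumption for illustration, not a published Epson specification), but they show why droplet density and discernible detail are very different measures:

```python
# Illustration only: if (hypothetically) a cluster about 8 droplets wide
# were needed to render one distinguishable patch of tone, then a head
# packing 2880 droplets per inch would resolve far fewer patches per inch.
droplets_per_inch = 2880   # marketing number: individual droplet density
droplets_per_patch = 8     # hypothetical cluster width, for illustration

effective_patches_per_inch = droplets_per_inch / droplets_per_patch
print(effective_patches_per_inch)  # -> 360.0
```

The exact divisor varies with ink, paper, and screening method; the point is simply that the headline droplet count overstates the detail you can actually see.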
So what about "pixels per inch" where we're no longer in the realm of printed output at all? Counting pixels is relatively easy, but what about inches? In your computer, there are no inches. Inches take on a degree of meaning only within a context or an intended purpose. You can view your image onscreen zoomed in or zoomed out and count image pixels per inch across your monitor. But perhaps your monitor is bigger than my monitor. Is the image therefore a different resolution when I look at it than when you do? What about projecting that same image on a screen that is eight feet across? Forget about inches. Now we're into feet yet the image itself remains the same. An image that is 1024 x 768 occupies exactly the same number of pixels on both monitors and on that big screen.
Inches remain somewhat secondary until you ascribe intended dimensions to the array of pixels that is your image. You can take an image that is perhaps 3000 by 4500 pixels and print it out bigger or smaller. The choice is yours. But the larger you print it, the further apart you will be spreading those same pixels, and thus the lower the output resolution will be. Until you print it, that image has no resolution in anything per inch. You can change the intended size by simply resizing the image. If you choose to resample the image as you resize it, new pixels are created by interpolating between the original adjacent ones, so the image retains the same PPI at its new size. But if you uncheck the "resample" option, resizing changes only the image metadata, not the actual image pixels at all. The number of pixels in height and width stays constant; you merely flag the image file to say that you ultimately intend to output it at new physical dimensions in inches. 3000 by 4500 pixels at 300 pixels per inch is in fact the identical image as 3000 by 4500 pixels at 150 pixels per inch. It's the same 13.5 megapixels.
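The arithmetic behind that last claim is worth making explicit. A minimal sketch (the function name is mine, chosen for illustration):

```python
def print_size_inches(width_px, height_px, ppi):
    """Physical print dimensions for a pixel array at a given PPI."""
    return width_px / ppi, height_px / ppi

# The same 3000 x 4500-pixel image, i.e. 13.5 megapixels either way:
print(3000 * 4500)                           # -> 13500000 pixels
print(print_size_inches(3000, 4500, 300))    # -> (10.0, 15.0) inches
print(print_size_inches(3000, 4500, 150))    # -> (20.0, 30.0) inches
```

Only the intended output size changes; the pixel data, and hence the megapixel count, is untouched.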
If you ignore specific images for a moment, your monitor itself does have specific dimensions in pixels, but these are again something different. Here we're talking about the individual picture elements in the LCD panel that makes up your display (or holes or slits in the CRT mask if you're still into CRT displays). When you view a given image on that display, your monitor driver translates and interpolates the pixels that make up that image into the pixels that make up your display, much in the same way as if you had resized and resampled that image yourself. But for display on your monitor, this process happens automatically and on the fly, continuously updating the mapping as you change the underlying image or zoom in or out on it.
As such, DPI attempts to measure the physical resolution of an output image while PPI essentially maps raw image pixel data and dimensions to an intended output target size. DPI doesn't care where the image comes from while PPI is all about where and how the image is formed and targeted for output.
These days, only commercial printing people talk in terms of lines per inch, but it's easy for the rest of us to use the term DPI for all other semi-related concepts, even though this isn't really correct. Neither is the tendency of digital photographers to use "pixels per inch" as the catch-all term for all things resolution. And so long as an image exists solely as data in your camera or computer — both places where inches have no fixed meaning — neither DPI nor PPI is applicable without reference to what you intend to do with that image.