> > In it, the author says that 'the optimum pixel resolution [of the
> > printed image] should ideally be the printer dpi divisible by a whole
> > number. The following pixel resolutions should be used [for a
> > 1440/2880 printer]: 144, 160, 180, 240, 288, 320, 360.'
The reason for this is fairly simple. The printer driver software uses dithering to get the various shades of colour. The dithering is done for each input pixel, which corresponds to a rectangular array of inkdot sites. For example, a 360ppi image on that 1440/2880 printer has a 4x8 array of possible inkdot sites per pixel. If the printer also has two different drop sizes, this means the printer can manage something like 64 different levels of each colour at each pixel location.

Dithering is much, *much* simpler if the input resolution divides the printer resolution exactly. As most printers embed the dithering in hardware, source images at resolutions other than those submultiples will usually be resampled by the printer driver software before being sent on to the device. It's unlikely that the resampling code in the driver will be as good as the image resizing code in your favourite image editing application, and you certainly won't have any control over the amount of extra sharpening (if any) that might need to be applied after this resampling.

If you resize the image yourself, though, you retain full control over everything except the final microdrop dithering stage; something the printer manufacturer probably *does* implement as well as they know how.

(Incidentally, the quoted example is wrong. While 320 is a factor of 2880, it is not a factor of 1440, and thus is not an optimal resolution.)
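The divisibility check is easy to verify yourself. A minimal sketch (the 1440x2880 printer resolution and the candidate list are taken from the quote above) that flags which candidates divide both axes exactly, and shows the resulting dot-site array per pixel:

```python
# Check which candidate ppi values divide both axes of the printer's
# dpi exactly. Figures match the 1440/2880 example quoted above.
H_DPI, V_DPI = 1440, 2880  # horizontal / vertical inkdot resolution

candidates = [144, 160, 180, 240, 288, 320, 360]

for ppi in candidates:
    if H_DPI % ppi == 0 and V_DPI % ppi == 0:
        # per-pixel array of inkdot sites, e.g. 4x8 at 360 ppi
        print(f"{ppi:>4} ppi: OK  ({H_DPI // ppi}x{V_DPI // ppi} dot sites per pixel)")
    else:
        print(f"{ppi:>4} ppi: not a factor of both axes")
```

Running this shows every entry passing except 320, which divides 2880 (9 times) but not 1440 (4.5 times), confirming the correction above.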

