Anyone who reads this blog and owns an iPhone 4 (or the iPod touch equivalent) undoubtedly knows that one of the key differences between this device and its predecessor is the jump in screen resolution from 320×480 to 640×960, double in each direction. Since the screen is the same physical size, it packs four times as many pixels into the same area, a density so high that the user no longer needs to be reminded that pixels exist at all. Hence the term “retina”.
From an under-the-hood perspective, Apple chose a rather clever trick to make the software work. Admittedly this cleverness glosses over a ground-level design flaw: the software development toolkit for iOS devices leans heavily on pixel-positioning of user interface objects at design time, a shortcut that lets developers quickly put together interfaces arranged exactly the way they want. The concept fails the moment the dimensions of the display area change, e.g. the user resizes a window, the device is rotated from portrait to landscape, or a regular iPhone app is run on an iPad: the user interface widgets do not know how to resize themselves to accommodate the change of scene.
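The failure mode can be sketched in a few lines. This is a minimal illustration in plain Python with made-up numbers, not actual platform code: a widget pinned at a fixed position for a portrait screen ends up stranded when the screen rotates.

```python
# A hypothetical widget frame, positioned at design time for a 320x480
# portrait screen, 10 pixels from the right edge.
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    width: float
    height: float

button = Rect(x=230, y=10, width=80, height=30)

# Rotate to landscape: the screen is now 480x320, but the widget's frame
# stays exactly where it was, so the intended 10-pixel margin becomes 170.
landscape_width = 480
gap = landscape_width - (button.x + button.width)
print(gap)  # 170
```

Unless the developer writes extra code to recompute frames when the bounds change, fixed positions like this simply stop making sense.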
Considering how much effort and inconvenience went into making apps work on both iPhones and iPads, no doubt the engineers at Apple were in no great hurry to introduce a third branch, so instead they devised a kludge: to developers, the iPhone 4 pretends that it has a regular old low-resolution 320×480 screen. Everything written for an older device works exactly as it used to, except that if you know how to ask, the platform will reveal that one logical pixel is actually a 2×2 block of four device pixels. The logical/device pixel distinction is an idea as old as computer graphics; the extent to which the hoax has been perpetrated throughout the platform libraries is the clever part.
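The arithmetic of the trick looks something like the following. This is a plain-Python sketch, not platform code: on a real device the scale factor is something you query from the platform, whereas here it is just a constant for illustration.

```python
# iPhone 4 reports a scale of 2: one logical pixel spans a 2x2 block
# of device pixels.
SCALE = 2

# The app keeps thinking in the old 320x480 coordinate space...
logical_width, logical_height = 320, 480

# ...while the platform maps every logical coordinate onto device pixels.
def device_pixels(points):
    return points * SCALE

print(device_pixels(logical_width), device_pixels(logical_height))  # 640 960
```

Because the mapping happens inside the platform libraries, old code that positions widgets in 320×480 coordinates lands in exactly the right place on the 640×960 panel.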
For an app designed for a pre-retina device, where one pixel really is one pixel, some benefits come through automatically: lines and text drawn directly onto a widget come out much crisper, because the platform adapts to the higher pixel density on its own. Most apps, however, ship a large number of predrawn images, and the mobile products derived from MMDS are no exception. These images are usually drawn at the resolution the screen is expected to have, which means that when they are displayed they are simply pixel-doubled: to make a long story short, your new iPhone will look like an old iPhone.
To solve this problem, developers need to include an additional version of each image: one drawn at normal resolution, and another twice as big in both dimensions. The platform API has some convenient conventions for selecting the right one, the best known being the “@2x” filename suffix. There are other gotchas, too: there are many good reasons for apps to generate their own images dynamically and hand them off to be displayed elsewhere, and these code blocks need to be located and upgraded so that they take advantage of the higher resolution when available.
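The selection rule itself is simple enough to sketch. The following is an illustrative Python stand-in, not the platform's implementation: `bundle` plays the role of the app's file list, and on iOS the image-loading API performs this lookup for you.

```python
# Pick the double-resolution "@2x" variant of an image when the screen
# scale calls for it and the file actually exists; otherwise fall back
# to the normal-resolution original.
def image_name(base, scale, available):
    high_res = base + "@2x.png"
    if scale >= 2 and high_res in available:
        return high_res
    return base + ".png"

bundle = {"icon.png", "icon@2x.png", "logo.png"}
print(image_name("icon", 2, bundle))  # icon@2x.png
print(image_name("logo", 2, bundle))  # logo.png (no @2x version, falls back)
print(image_name("icon", 1, bundle))  # icon.png
```

The graceful fallback is the point: an app can adopt high-resolution artwork one image at a time, and anything not yet upgraded keeps working, just blockier.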
The icon set and codebase for the Mobile Molecular DataSheet have been reworked so that they now take advantage of the higher resolution, and this upgrade will trickle down to the derived apps whenever they are next updated: the first submitted update is MolPrime, which should be looking great on your iPhone 4 in a matter of days, if all goes well!