Mobile devices are energy efficient by design, while traditional PC architectures are not. This may seem like a trivial and obvious point, but it is still worth making. Technology has a way of being influenced by its origins, such that obsolete constraints and objectives live on throughout the decades. It may well be that the most practical way to reduce the energy bill for our information technology is to accelerate the transition to mobile platforms, even when we don’t necessarily need the mobility.
Consider the early personal computers of the 1980s: heavy (and ugly) boxes that were inseparable from a power socket. The early Intel chips that powered them didn’t draw much power, but they were painfully slow, and so software was designed to use every CPU cycle it could grab, in order to make the user experience as close to adequate as possible. The rest of the 1980s and 1990s saw an arms race in which each new generation of hardware enabled a whole new kind of software. Software offered new tools that pushed the hardware to its limits, and new hardware was needed to make it perform comfortably, for a time, until the next big thing arrived.
Things became even more hectic in the 1990s, when computers became mainstream and the demand for software seemed to be growing exponentially, while capable software engineers couldn’t graduate fast enough. The industry filled up with not-so-capable engineers, and a new generation of development tools was designed to make the software creation process easier for novices and quicker for veterans, all at the expense of performance, putting an even heavier demand on the hardware upgrade cycle.
But something happened in the early 2000s – it’s hard to say when exactly, perhaps around the time of the dot-com bust. People realised that it had been a while since the last great new thing came along. The proportion of people who needed a new computer every one to two years was shrinking to a small base of power users, and that pun is definitely intended: a modern PC had become the energy consumption equivalent of a fan and enough lightbulbs to keep a small apartment illuminated.
Some software designers tried to keep the treadmill running by making software classier: adding nice-to-have features and improving the user interface in ways that were not possible on slower machines. Others favoured an improvement in software quality by transitioning to programming languages that were less efficient but much better at facilitating robust products. Hardware makers, meanwhile, reacted to complaints that their products were effective ways to keep a house warm, which is not so useful in the summer, and did their best to reduce the wattage.
But there’s only so much you can do when almost all of your software is essentially a legacy from a different age – an era in which priorities were completely different.
Now consider the devices that originated from the desire to have a computer that fits in the pocket and can be used for a whole day without returning for a recharge. These devices started out as cellphones and PDAs, and it is only in the last few years that it has made sense to call them computers, in the same vernacular sense that people use when referring to their desktop PC.
The smartphones and tablets of all sizes that are hitting the market and gaining traction are succeeding for two main reasons: the hardware is designed from the ground up to hoard, miser-like, every last joule of energy stored in the battery, and the software has been redesigned from scratch to match the user interface constraints of a computer that has a small screen and no keyboard or mouse. These are the two differentiating features of modern tablets that spring to mind most readily, but there is another incredibly important trait: the software is efficient, too.
This is a simple consequence of the truism that if resources are available, at no apparent cost, then they will be used. Most of what most people do with a computer now could be done just as effectively with an entry-level computer manufactured ten years ago, but most contemporary software would not work well, because in the interim software has grown to consume the extra capabilities simply because it can.
The opposite is the case for mobile software, for several reasons. The capabilities of high-end mobile devices are modest compared to those of a low-end PC, and although they are increasing rapidly, the fact that mobile devices are always expected to run for a long time using only a battery provides an extremely strong incentive to make every CPU cycle count. Also, the operating systems are designed to discourage individual programs from going rogue, and their APIs are designed to encourage efficiency over profligacy. When a developer seeks to provide some feature, efficiency is a major consideration. The relative slowness of mobile CPUs is the first bottleneck, but as any veteran software developer knows, there is always more than one way to solve a problem. The easiest and most obvious way is sometimes good enough, but when it is not, more effort is required. With mobile software development, such considerations are a much more prominent part of the process: relatively speaking, it is more important for a chunk of code not to waste resources. If an efficient solution cannot be found, or is too much effort, the feature must be cut or a workaround devised.
So what does this all mean for cheminformatics?
First of all, the products from Molecular Materials Informatics demonstrate that a large proportion of the client-side functionality required for cheminformatics can be implemented within the constraints of a mobile device, i.e. small screen, imprecise finger-based input, slow CPU, finite battery life, limited memory and storage space, and intermittent network access. Through a combination of cost-cutting efficiency methods, re-thinking of traditional approaches, and a few sacrifices, some of the functionality is starting to approach that of much more comprehensive desktop-based software, with a much lower power drain.
This leaves out the many situations in cheminformatics where a fast computer really is necessary. There are numerous tasks that take an inconveniently long time even on a high-end desktop PC, for which porting to a mobile device would not be doing anyone any favours. That’s where the cloud comes in. Making use of servers housed in a server farm is a great way to obtain economies of scale. Companies that specialise in cloud hosting know how many of which kind of CPU to cram into each box to minimise the total power draw, and can even choose locations with favourable climates and access to cheap renewable electricity.
What remains to be done is to finish the job of separating the user interfaces, which must be localised, responsive, lean and efficient, from the heavy-duty algorithms, which need to run in discrete chunks with well-defined checkpoints for user interaction. When this process is complete, chemists will be able to do all their work on a low-power mobile device that can run for a day on battery power, while the heavy lifting is done by rented, time-shared server utilities under ideal conditions.
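To make this division of labour concrete, here is a minimal sketch, in Python, of the kind of client-side pattern being described: a lean front end hands a structure off to a remote service, then checks back at well-defined checkpoints instead of grinding through the calculation locally. The service URL, the request fields and the submit/poll helpers are hypothetical placeholders for illustration, not the actual Mobile Molecular DataSheet webservice API.

```python
# Minimal sketch of a lean client delegating heavy work to a remote service.
# The endpoint, payload fields and job states are hypothetical examples.
import json
import time
import urllib.request

SERVICE_URL = "https://example.org/cheminf"  # hypothetical cheminformatics service


def post_json(path: str, payload: dict) -> dict:
    """POST a JSON payload to the service and decode the JSON reply."""
    req = urllib.request.Request(
        SERVICE_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


def submit_job(molfile: str) -> str:
    """Hand the expensive calculation to the server; return a job token immediately."""
    return post_json("/submit", {"molfile": molfile, "task": "conformer_search"})["job_id"]


def poll_job(job_id: str, interval: float = 2.0) -> dict:
    """Check in at discrete checkpoints until the server reports a result."""
    while True:
        status = post_json("/status", {"job_id": job_id})
        if status["state"] == "done":
            return status["result"]
        if status["state"] == "failed":
            raise RuntimeError(status.get("message", "remote job failed"))
        time.sleep(interval)  # the device idles (and saves battery) between checks


if __name__ == "__main__":
    job_id = submit_job(open("molecule.mol").read())
    print(poll_job(job_id))
```

The point of the sketch is the shape of the interaction rather than the details: the battery-constrained device does only the lightweight serialisation and user interaction, while the server farm does the number crunching.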
The Mobile Molecular DataSheet has already begun implementing some of this division of labour, by adding a webservices client and remote procedure calls, but these are largely at the proof-of-concept stage. The user interfaces are coming along well, and much more sophisticated services are currently on the drawing board or in skunkworks projects. The goal is a worthy one: mobility, a small form factor and energy efficiency, combined with a new generation of user interfaces designed to fit contemporary professional lifestyles rather than the limitations of the 1980s desktop computer.