The new field of energy-efficient algorithms aims to develop techniques for solving computational problems with vastly reduced energy consumption (for some problems, by several orders of magnitude) in exchange for a small increase in time and memory requirements. Specifically, we explore how to algorithmically exploit reversible computation, an idea that dates back to the 1970s and has only recently begun to become a practical reality in emerging hardware prototypes, but for which we have only just begun to understand how to design efficient algorithms. Our preliminary investigations indicate that some basic problems (but not others) admit algorithms whose energy cost is substantially reduced, to far less than that spent by a traditional algorithm on traditional hardware. This theoretical groundwork will become especially important over the next two decades, when Landauer’s Principle predicts that the energy efficiency of computers (the number of irreversible bit operations performed per kilowatt-hour) must stop improving. Improving the energy efficiency of computation will reduce the roughly $15 billion spent annually to power today’s data centers, not to mention the many other computers in the world processing big data. Furthermore, reducing the energy consumed by a chip reduces its heat generation, which should allow that chip to run proportionally faster with the same cooling system, or to run proportionally longer powered by the same battery. Thus our work will enable computation to become cheaper, faster, and longer lasting, and will determine the theoretical limits thereof.
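To make the Landauer bound concrete, the following sketch computes the minimum energy an irreversible bit operation must dissipate, and the resulting ceiling on irreversible operations per kilowatt-hour. The Boltzmann constant is the standard CODATA value; the room-temperature choice of 300 K is an illustrative assumption, not a figure from the text.

```python
import math

# Landauer's Principle: erasing one bit of information dissipates at least
# k_B * T * ln(2) joules of heat, where T is the ambient temperature.
K_B = 1.380649e-23   # Boltzmann constant, J/K (exact, by SI definition)
T = 300.0            # assumed ambient temperature, K

# Minimum energy dissipated per irreversible (bit-erasing) operation.
landauer_limit_joules = K_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {landauer_limit_joules:.3e} J per bit erased")

# Ceiling this implies on energy efficiency: no computer operating at 300 K
# can perform more irreversible bit operations per kilowatt-hour than this.
JOULES_PER_KWH = 3.6e6
max_ops_per_kwh = JOULES_PER_KWH / landauer_limit_joules
print(f"At most {max_ops_per_kwh:.2e} irreversible bit operations per kWh")
```

The limit is about 2.9e-21 J per erased bit, roughly 1.3e27 irreversible operations per kilowatt-hour; reversible operations are not subject to this bound, which is what makes reversible algorithm design a route past the ceiling.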