Sustainability Made Easy? R&D and Energy Technopolitics
While politicians speak of renewable energy, energy policy in the United States has long focused on gasoline. Matthew N. Eisler takes a look at the issues behind any real shift in energy policy.
Petroleum has become virtually synonymous with energy. In the United States energy crises typically seem most urgent when the price of gasoline increases. The country’s unique dependence on gasoline for transportation has provided politicians with a way to appeal to voters’ desires for low-cost commodities. In the early part of the 2008 U.S. presidential campaign, the energy crisis was a dominant theme for both major candidates, and the rising cost of gasoline was their chief rallying cry.
But the United States (and the rest of the world) has potential access to other forms of primary energy. In the 21st-century United States, when politicians talk publicly about energy, they hold that their goal is to develop a variety of sources (nuclear, wind, biofuels, and solar are commonly mentioned) and to provide plentiful, clean power. They claim they will achieve this goal while sustaining the American way of life and making the country energy-independent. It has become a virtual truism that advanced technology can help meet this uncompromising objective, but such enterprises are complicated by many factors. Not least of these is the historic absence of a comprehensive U.S. energy policy beyond a general commitment to secure abundant supplies as cheaply as possible. Since the end of World War II, there has been a vast gulf between politicians' stated goals and actual energy R&D policy.
The U.S. government has influenced the direction and pace of technological progress, both as a consumer and as a partner in R&D. Beginning in World War I, the military ordered vast quantities of vehicles equipped with the fossil-fueled internal combustion engine (ICE). As a result of these orders the ICE became further entrenched as the dominant power source for practically all forms of automobile transport. The technology quickly became the keystone of American industrial power and, by virtue of its reliance on petroleum, of federal geopolitical strategy in the 20th century.
But government did not involve itself in energy R&D until the invention of nuclear power. Although it was originally developed as a military naval power plant, Washington strongly encouraged the technology’s adoption for civilian use.
Economically, however, this policy didn’t make much sense. In 1954 Atomic Energy Commission chief Lewis Strauss infamously predicted that civilian nuclear reactors would produce electricity “too cheap to meter,” a claim that has haunted the industry’s proponents ever since. With fossil energy cheap and plentiful in the 1950s, there was no demand for nuclear power. Only massive subsidies, particularly the cap on private insurance liability provided by the Price-Anderson Act of 1957, enticed the private sector to invest in the first civilian reactors.
Over the next half-century, nuclear power was the chief preoccupation of federal energy R&D. But this work has not yet succeeded in reducing cost and risk to the point where the industry can flourish without government-backed loan guarantees, production tax credits, and insurance indemnity. The question of cost is, of course, relative. Proponents note that a fully amortized nuclear plant produces very cheap electricity. Conversely, detractors claim that actual costs are much greater once fuel mining, reactor construction, maintenance, waste storage, and decommissioning are accounted for. Plans by private manufacturers to massively expand the number of reactors may be unrealizable; such an expansion would require huge public investment at a time when economic conditions are less favorable than at any time since before World War II.
This is not to say that nuclear power won't play a role in American power production in coming years. But operating the existing 104 U.S. reactors presents major problems that further R&D may not be able to quickly solve. Though nuclear power is often touted as a limitless energy source, natural uranium is in relatively short supply. Proven reserves amount to around 3.5 million tons, enough to fire reactors for 50 years at the current consumption rate. The industry's preferred solution is to close the nuclear fuel cycle by reusing spent fuel. A chief candidate is mixed-oxide (MOX) fuel, an amalgam of plutonium and uranium. But the facilities required to produce this substance could also easily be used to manufacture weapons-grade materials. Given the global reach of the nuclear power industry, the Carter administration committed to permanently storing spent nuclear fuel rather than reprocessing it. The 20-year gestation of the still-unfinished Yucca Mountain repository in Nevada shows just how expensive, technically difficult, and unpopular this venture is.
Much is made of "Generation IV" reactors, a range of advanced designs including several types capable of "breeding" more fuel than they consume by using neutron bombardment to transmute materials that cannot sustain a chain reaction, such as uranium-238, into fissile materials like plutonium-239. Under study by the Department of Energy, these costly and complex power sources are more volatile and difficult to control than conventional pressurized water reactors. And they are highly controversial—in effect they would bring about a plutonium economy, creating new problems of safely transporting and disposing of large quantities of this highly toxic element.
The United States has invested far less in renewable energy sources than in nuclear power. Nevertheless, wind, passive solar, geothermal, and cogenerated power devices and conservation materials have entered widespread service in varying degrees around the world. These technologies did not require sustained research into their basic physical properties in order to be commercialized, which made it possible to integrate them quickly into the existing infrastructure and measure their success. Photovoltaic (PV) power, another important sustainable power source, is a different story.
Unlike passive solar power, which consists of materials that capture and use solar energy as heat, PV cells produce electricity by absorbing photons that pass their energy to electrons. An offshoot of semiconductor technology, the PV cell was developed by Bell Laboratories in 1954. PV cells were first applied in spacecraft but were not tested in terrestrial civilian roles on a large scale until the mid-1970s. As with most new power sources, the key thrust of research has been cost reduction: making devices that produce more energy during their lifetimes than it took to construct them. This was achieved with PV panels in the early 2000s. Second-generation commercial cells using crystalline silicon—currently the most common light-gathering material—have a lifetime of between 25 and 35 years and recover their manufacturing energy in 1 to 4 years. The technology is particularly suitable in areas where power is expensive and solar irradiation (insolation) is greatest, such as California. Researchers hope third-generation photovoltaic substances—organic dyes and polymers, inorganic layers and nanocrystals applied as films to substrates or matrixes—can improve efficiency and cut manufacturing costs by reducing the amount of material in solar panels.
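Those lifetime and payback figures imply a substantial net energy return. A minimal arithmetic sketch, using only the ranges quoted above (the function name is ours, and the bounds are illustrative, not measured data):

```python
# Lifetime energy return for crystalline-silicon PV, bounded by the
# figures quoted above: 25-35 year lifetime, 1-4 year energy payback.

def energy_return_ratio(lifetime_years, payback_years):
    """Lifetime energy output divided by the energy used to manufacture the panel."""
    return lifetime_years / payback_years

worst_case = energy_return_ratio(25, 4)   # short life, slow payback
best_case = energy_return_ratio(35, 1)    # long life, fast payback
print(f"A panel returns roughly {worst_case:.0f}x to {best_case:.0f}x its manufacturing energy")
```

Even at the pessimistic end, a panel repays its embodied energy several times over, which is why the research emphasis has shifted from energy payback to manufacturing cost.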
As with nuclear power, demand for PV as a power supply often has to be fostered. In 1999 Germany began offering generous grants and loans to stimulate the production and purchase of solar panels under the "100,000 Roofs" program, which ended in 2003. A national feed-in tariff enacted in 2000 compelled utilities to purchase solar power at preferential rates. By the end of 2007 Germany led the world with nearly 4,000 megawatts of installed PV peak power capacity. Japan, with a similar program, is second with nearly 2,000 megawatts. The United States has far greater solar potential than either of these nations, concentrated mainly in the Southwest. But its subsidies and incentives are much less comprehensive. Only a little over 800 megawatts of PV power has been installed in the United States. By way of comparison, 1 megawatt of power produced in a coal-fired thermal plant is equivalent to the electricity used by 400 to 900 homes in one year. It is important to note, however, that all power sources supply only a portion of their rated capacities. Both wind and solar devices typically operate at capacity factors well below 50% because of the intermittent nature of wind and terrestrial sunshine.
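The homes-per-megawatt comparison can be sanity-checked with simple arithmetic. In the sketch below the capacity factors and household consumption figure are illustrative assumptions of ours, not numbers from the article:

```python
# Back-of-envelope check of the "1 megawatt serves 400 to 900 homes" figure.

HOURS_PER_YEAR = 8760

def homes_served(capacity_mw, capacity_factor, home_kwh_per_year):
    """Homes supplied for a year by a plant of the given rated capacity."""
    annual_kwh = capacity_mw * 1000 * HOURS_PER_YEAR * capacity_factor
    return annual_kwh / home_kwh_per_year

# A coal plant running about 85% of the time, homes averaging ~10,000 kWh/year:
print(round(homes_served(1, 0.85, 10_000)))  # ~745 homes, inside the 400-900 range

# A solar array at a ~20% capacity factor serves far fewer homes per rated megawatt:
print(round(homes_served(1, 0.20, 10_000)))
```

The same rated megawatt thus delivers very different amounts of electricity depending on how often the plant actually runs, which is why installed PV capacity overstates solar's contribution relative to coal.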
Although other countries have successfully adopted solar power largely using established PV technology that originated with government-funded innovation, critics charge that public cash has cushioned the real cost of this technology. Others take a different view. In their book Apollo’s Fire: Igniting America’s Clean Energy Economy, Jay Inslee and Bracken Hendricks contend that the full social, economic, and environmental costs of petroleum and nuclear systems are vastly higher and are themselves socialized. They claim political will, much more than a technological breakthrough, is the chief catalyst enabling sustainable energy and power systems to take root.
Lessons from the Automobile Industry
In some cases U.S. federal energy R&D has become so highly politicized that it has become an end in itself. This occurred with automobile technology after the California Air Resources Board (CARB) passed the Zero Emission Vehicle mandate in 1990. Although the automobile industry had traditionally framed the terms of automotive R&D, the Zero Emission Vehicle mandate compelled it to build and market large numbers of battery electric passenger vehicles. Because of their investment in the ICE, the automobile and oil industries bitterly resisted. In an effort to reconcile the conflicting interest groups, the Clinton administration launched the Partnership for a New Generation of Vehicles (PNGV) in 1993. A public-private venture, the PNGV aimed to roughly triple the fuel economy of the average 1994 passenger sedan within 10 years without compromising comfort or performance. If this could be done, the hope was that market demand would increase the efficiency of the light-duty fleet without the need for government intervention.
But in reality Detroit was uninterested in producing such automobiles when cheap gasoline made it tremendously profitable to build sport-utility vehicles and light trucks, and the federal government had no intention of forcing the industry to do otherwise. By 2000 Japanese automakers had two commercial hybrids available in the United States, Toyota’s Prius and Honda’s Insight. By contrast, American automakers had no similar products on the market and were forced to play catch-up to meet demand from customers who increasingly took environmental impact into account when purchasing a new car. Detroit began promoting the fuel-cell electric automobile as its preferred zero-emission vehicle entrant. In 2002 the Bush administration replaced the PNGV with FreedomCAR (Cooperative Automotive Research), another public-private partnership but devoted instead to developing the fuel cell—a device that combines hydrogen and oxygen in an electrochemical reaction that produces electricity and water—as a replacement for the battery in the electric vehicle. Industry and government believed the power source could be developed as an “electrochemical engine,” a hybrid of heat engine and battery able to electro-oxidize cheap hydrogenous fuels. But building such a device proved extremely difficult. After several years of inconclusive research, analysts began suggesting that hydrogen was the only practical fuel for fuel cells.
The period witnessed a resurgence of the idea of a "hydrogen economy," a utopian energy and power scheme first broached in the early 1970s. Supporters often referred to hydrogen as though it were an inexhaustible primary energy source when in fact it is an energy carrier bound in biomass and fossil fuels. In January 2003 President George W. Bush announced the Hydrogen Fuel Initiative (HFI), a $700 million hydrogen production R&D effort to complement FreedomCAR. Planners styled these programs as a panacea that would make American light-duty transportation sustainable and energy-independent without compromising the performance and comfort consumers had come to expect from their automobiles. Crucially, the HFI made no provision for infrastructure procurement, whose costs were estimated at between $80 billion and $200 billion.
Today FreedomCAR and the HFI have been all but forgotten, and the notion of a hydrogen economy is largely dismissed. Political and industrial leaders have been unwilling to invest in a new automotive fuel and power source system and the new industrial revolution that it implied. Instead they have supported a less radical alternative-energy megaproject that would preserve the place of the ICE in the U.S. transportation system. Unlike the hydrogen economy, biofuels have attracted a broad political constituency, primarily in midwestern agricultural areas. Thanks to a 51¢-per-gallon tax credit and congressional lobbying, ethanol production has grown steadily over the years, from 81 million barrels in 2004, about 2% of U.S. gasoline consumption, to about 171 million barrels by the end of 2007, almost all of it derived from corn. The Energy Independence and Security Act (EISA) of 2007 mandated a massive production increase, to 2.3 million barrels a day, or 857 million barrels a year, by 2022.
This enterprise has several problems, and not all of them can be easily solved solely by R&D. Planners gave little thought to the infrastructure necessary for the ethanol boom and, as during the fuel-cell boom, assumed that alcohol fuels could be easily handled by existing petroleum-fuel storage and transport systems. This assumption proved wrong. Ethanol is contaminated by water present in gasoline pipelines, so it must be shipped by more expensive road and rail systems. This is a major consideration since the primary markets are on the East and West Coasts. A more difficult problem is volume. Owing to the very high cost of producing ethanol from cereal grain, 380 million barrels of the EISA’s 2022 quota must be produced from cellulose, or nonfood biomass. This method requires developing cheap enzymes that can break down this tough material. While such a process has been accomplished in the laboratory, it has not yet been commercialized. Although the EISA does not specify precisely how the 2022 target will be met, large doses of public cash will likely be necessary. A February 2008 study by Mindy L. Baker, Dermot J. Hayes, and Bruce A. Babcock of Iowa State University calculated that in the best-case scenario, nearly $25 billion would be required to meet the EISA’s goals for cellulosic ethanol alone. In subsidizing an expensive synthetic fuel that can satisfy only a small fraction of gasoline demand, the study adds, the government is stimulating massive inflation in the agricultural sector.
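The EISA figures quoted above hang together arithmetically. The statute's 2022 mandate of 36 billion gallons of renewable fuel per year is a figure from the law itself, not from this article; at 42 gallons to the barrel it reproduces the numbers in the text:

```python
# Checking the EISA quota arithmetic quoted above.
GALLONS_PER_BARREL = 42

mandate_gallons = 36e9                                   # EISA 2022 mandate (from the statute)
barrels_per_year = mandate_gallons / GALLONS_PER_BARREL
barrels_per_day = barrels_per_year / 365

print(f"{barrels_per_year / 1e6:.0f} million barrels/year")  # ~857, matching the text
print(f"{barrels_per_day / 1e6:.2f} million barrels/day")    # ~2.35

# Share of the quota that must come from cellulose (the 380-million-barrel figure):
print(f"{380e6 / barrels_per_year:.0%} of the 2022 quota")
```

Nearly half the quota thus depends on a cellulosic conversion process that, as noted above, has yet to leave the laboratory.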
R&D in the Age of Permanent Crisis
History suggests that policy makers will continue to believe that technology can offer sustainable, abundant energy. But the unspoken question about energy policy is whether this is a worthy goal. Researchers and planners will likely continue to be disappointed as such projects encounter physical and political obstacles.
The historical development of energy and power technologies has been shaped at least as much by human judgments of value and desirability as by demand and “objective” physical factors. Government, industry, and the popular press widely assume that standards of comfort, convenience, and aesthetics in consumer technology—exemplified in the automobile—are unchanging, when in fact they undergo constant metamorphosis. Despite their notorious conservatism, the Big Three U.S. automakers have been forced by popular dissatisfaction, fierce foreign competition, and government legislation to shift their efforts toward somewhat cleaner, more efficient, and better-built automobiles.
After years of resistance, General Motors has finally embraced the commercial hybrid electric passenger automobile. Production of the plug-in Chevrolet Volt begins in summer 2009 for sale in 2010. A decade after Toyota introduced its market segment–leading Prius, General Motors will at last field its response. Yet Detroit’s acceptance of the latest commercial alternative automobile technology has probably come too late to have a significant impact on an industry near the brink of collapse as the recession bleeds consumers of discretionary income.
As old energy and power schemes are recycled and discarded, new ones rise in their stead. Yet the aura of omnipotence that R&D acquired in industry and government circles in the early postwar years has long since dissipated. Scholars recognize R&D as a set of social practices in which the problems to be solved do not emerge objectively but instead reflect the interests of the groups that support and benefit from these activities. Historically this process has been the prerogative of a very select group. As knowledge becomes increasingly democratized, the questions to be explored necessarily change. The Enlightenment tradition conjured nature as a vast, inexhaustible cornucopia, but researchers are now recognizing there are limits to the ways people can make physical matter conform to their desires. From this perspective the world faces a crisis not of energy or technology but of social relations. In the postindustrial era, perhaps R&D begins most profitably by asking how society can curb its appetite for resources, distribute them more equitably, and use them more efficiently. As people come to understand how energy and power technologies relate to human values, they become aware of how they have been used to create the socio-technical and cultural landscape we now live in and how they might be used to create alternative ones.