Fifty Years

Earlier this month, I turned 50 years old.  Such milestones are natural occasions for reflection.

Beyond recalling many of the phases and individual episodes of my life, my reflection included considering how the world had changed over the 50 years I had lived.  And, naturally, given my profession, I pondered what it would have been like to be a “cleantech” practitioner 50 years ago, in 1962.

Frankly, it’s not really possible to imagine “cleantech” back then.  50 years ago, there wasn’t much “clean” and there wasn’t much “tech”.

In the U.S., the Clean Air Act and the Clean Water Act hadn’t been passed, and there wasn’t even an Environmental Protection Agency.  Silicon Valley was still mainly apple orchards, and computers less powerful than your smartphone barely fit into large warehouses.

In the energy sector, the U.S. still dominated the petroleum industry.  Not only did Americans consume more petroleum than anyone else (accounting for about 40% of world demand), but U.S. oil production also remained a major factor, representing almost 30% of worldwide output.

The oil industry’s operations would still have been very recognizable to John D. Rockefeller:  production came mainly from “conventional” onshore seesaw pumpers dotting the countryside; remote locations such as Alaska hadn’t yet been touched, nor had any material production yet been achieved from offshore wells.

Other than perhaps by watching the recently-released “Lawrence of Arabia”, few Americans paid much attention to the deserts of the Middle East in 1962.

Though unnoticed by most Americans, important forces in the oil industry were already beginning to shift in the early 1960s.  Although Texas oil production had been decisive in fueling the Allied victory in World War II just two decades previously, by 1962 the U.S. had become a net importer of oil.  Yet only M. King Hubbert was projecting a future waning of American supremacy in oil production.

Oil prices in 1962 were a little less than $3/barrel, largely due to the price-setting powers of the Railroad Commission (RRC) of Texas; Texas was then still the source of a significant share of world oil production.  When a hitherto little-noticed group formed in 1960, the Organization of the Petroleum Exporting Countries (OPEC), assumed the dominant influence in pricing oil a decade later, the world would change forever, as oil prices would never again fall much below $10/barrel.

It’s almost quaint to summon up memories of the oil sector of the era.  Remember what filling up at a gas station was like in the 1960s?  The attendant would come out, put the nozzle in the tank (always with the filler behind the rear license plate), cheerfully wipe the windshield and ask, “May I check your oil?”  Looking out the window, I remember seeing “29.9” on the gas pumps.  That’s 29.9 cents per gallon — which seems almost surreal to us now, but remember, oil prices were then only a few percent of what they are today.

Of course, given the now-unbelievably appalling gas mileage of those Detroit beasts, usually under 10 miles per gallon, you still had to fill up about as often then as you do now.  Back then, it was all about horsepower — it certainly wasn’t about efficiency, nor about cleanliness.  (Nor, for that matter, reliability.)

Every once in a while these days, I find myself behind a 1960s-vintage car at a stoplight, most often on a sunny summer afternoon.  When the light turns green, I am left in a thin cloud of light bluish smoke and the fragrance of octane and unburned hydrocarbons.  Odors of my youth.  You don’t see and smell that anymore — and I don’t miss it.

Thank goodness for a plethora of cleantech innovation during the past decades:  unleaded fuels, pollution controls and fuel injection systems.

And, let’s not forget that these advances only happened because they were pushed by foresighted, proactive policies.

While the financial bonanzas and corporate/family dramas enabled by oil discoveries and production had thoroughly captured the American imagination by the early 1960s — consider everything from “Giant” to “The Beverly Hillbillies” — natural gas in 1962 was an afterthought.  Other than some use for power generation in Texas and Oklahoma (where there was no local coal resource), natural gas was mostly flared at the wellhead.  Largely because (and many people now forget this) natural gas prices were then regulated at depressed levels, the companies that produced gas as a byproduct of oil production didn’t see much value in making the investments necessary to collect it and transport it to markets.  In fact, natural gas was widely considered a nuisance in 1962.

Certainly, gas is no longer considered a nuisance.  In fact, it’s now being touted by politicians across the U.S. as a godsend:  providing lower energy prices, lower emissions, higher domestic employment and reduced dependence on foreign energy sources.

No, the oil/gas industry — and those two fuels are today inextricably intertwined — is now much more aggressive in capturing and processing every Btu that courses through the markets.

In the late 1960s, our family lived in the Philadelphia area, and I remember being awed – almost scared, really – by the immense flames emitted by the refinery near the mouth of the Schuylkill River.  All those now-valuable hydrocarbons…gone, wasted, up in smoke.  You don’t see that anymore at refineries, thankfully.

Oil company practices have massively changed in the past 50 years to capture everything of possible economic value.  Of course, that’s the effect of a 30x increase in oil prices, driven by a worldwide race to find and produce new reserves as five decades of depletion consumed much of the cheap/easy stuff and global oil demand tripled (mostly from outside the U.S.), counterbalanced by technological progress on a host of fronts.

Today, oil trades pretty consistently between $80 and $100/barrel, and while U.S. oil production has rebounded a bit to approach early 1960s levels, American production now accounts for less than 10% of world oil production.

But think about how low U.S. oil production would be and how high oil prices might be today if not for offshore oil production, directional drilling, 3-D seismic, and an untold number of other innovations produced by the oil patch in the last half-century to enable production from hitherto undeveloped places.

Of course, beauty is in the eye of the beholder, and not all of these developments are viewed positively by everyone.  The current debates about fracking and development of the Alberta oil sands would have been unimaginable in 1962.  At the time, fracking barely existed as a practice, and the Alberta oil sands were then hopelessly uneconomic as a source of fuels.  Moreover, there was virtually no environmental movement to give voice to the concerns of citizens.

It wasn’t really until Rachel Carson published Silent Spring, just a few weeks after I was born, that much attention was paid to pollution.  Later in the decade and into the 1970s came the grassroots emergence of environmental groups such as Greenpeace and the Natural Resources Defense Council.

If you are about my age or older, you may well remember this 1971 commercial.  The tagline (“People start pollution, people can stop it”) and the image of the Native American shedding a tear remain indelible decades later.

Before this, there was virtually no accountability placed on emitters, and anyone could pretty much dump whatever they wanted, wherever they wanted, whenever they wanted.  And, in the early 1960s, no set of interests benefitted from ongoing inattention to environmental considerations in the U.S. more than the coal sector.  For those with coal interests, the times before environmentalists were truly the glory days — and in 1962, the future for coal in the U.S. was terrifically bright.

Sure, trains had just moved from coal steam to diesel-electric, but over half of all the electricity generated in the U.S. in 1962 was based on burning coal.  With burgeoning demand for electricity (especially to keep pace with the exploding utilization of increasingly-ubiquitous air conditioning), coal was poised for significant growth, as thousands of megawatts of new coal powerplants would be added to the nation’s energy grid each year during the 1960s.

While coal is certainly no poster-child for the cleantech sector today, back in 1962, coal remained a particularly brutish and nasty form of energy.  288 American miners were killed on the job in 1962, and all of the coal burned was subject to minimal pollution control – no electrostatic precipitators or baghouses to capture particulates (i.e., soot), much less scrubbers for sulfur dioxide or selective catalytic reduction for nitrogen oxide emissions.  You pretty much didn’t want to be a coal miner or live anywhere near a coal-burning powerplant, as your health and longevity were seriously at risk.

Indeed, some observers speculate that the uncontrolled emissions from powerplants (not to mention other industrial facilities, such as steel mills) threw so much material into the atmosphere that the 1970s became a period of unusually cold temperatures — to the point that many scientists were projecting a future of damaging global cooling.  (Although the then-common theory of global cooling is now mainly forgotten, climate change deniers are quick to employ this prior dead-end of thought as one reason for dismissing climate scientists’ conclusion that global warming is occurring today.)

Of course, the U.S. still mines coal, lots of it, to fuel lots of coal-fired powerplants.  Production in 2011 was 1.1 billion tons, more than double 1962 levels.  However, employment in the coal industry has fallen by over 40% during the same period.  (And, mercifully, annual fatalities have decreased by a factor of 10.)  The primary factors behind these changes:  productivity increases due to new technologies (e.g., longwall mining), lower rates of unionization, and a shift from underground to surface mining (now accounting for nearly 70% of U.S. production).

With respect to the latter factor, Wyoming coal activity has exploded — now representing more than 40% of U.S. production — at the expense of Appalachia, whose coal sector is now but a shell of what it was 50 years ago.  The causes are simple:  the subbituminous Powder River stuff from Wyoming is much more abundant and cheaper to mine than what is available from Appalachia, and generally has much lower sulfur content to boot.

On a broader level, coal is on the retreat in the U.S.:  while coal still accounts for almost 50% of power generation, this share is dwindling.  It seems as though U.S. coal production levels have plateaued at just over 1 billion tons a year.  While so-called “clean-coal” technologies may at some point provide the basis for a resurgence in the industry, the possibility of future growth certainly seems far from obvious today.

Many legacy coal powerplants – some of which have been in operation for well more than 50 years – are fading away.  Tightening emission requirements, particularly on toxic emissions such as mercury, are just one competitive disadvantage facing coal; coal power is increasingly uncompetitive with cheap and cleaner natural gas powerplants and (in some places) wind and solar energy.

“Wind energy” and “solar energy”:  50 years ago, these would have been oxymorons.  Other than the minute niches of sailboats and waterwell pumping in the Great Plains, a good wind resource had virtually no commercial value in 1962.  At the same time, Bell Labs scientists were wrestling with solar energy technologies – primarily for satellites – although a lot more attention was being paid to a related device called the semiconductor.

For energy, scientists were mainly working on nuclear power, moving from weapons and Navy submarines to powerplants.  The nuclear era was dawning:  electricity was going to be “too cheap to meter”.

The very first commercial nuclear powerplant, the relatively puny 60 megawatt plant at Shippingport in Western Pennsylvania, had been running for only a few years in 1962, though dozens of nuclear powerplants were just coming onto the drawing boards.  Visionaries were even talking about nuclear-powered automobiles in 1962.  (“Electric vehicles?  Puh-lease.  Batteries are for cheap portable Japanese radios.”)

Such was the sense of optimism in the possibilities of the age – perhaps as a psychological defense mechanism to drown out the anxieties associated with potential Armageddon from a Cold War missile exchange.

Apparently, no one could foresee Three Mile Island, Chernobyl or Fukushima at the time.

The future held boundless possibilities.  Back then, who needed to recycle?  To think about efficient utilization of resources?  To care about water quality or air quality?  There was always more and better, somewhere, to be had.  And we Americans would surely obtain it, somehow and someway.  It was Manifest Destiny, ever-onward.

This American philosophy may have confronted its limits early in my lifetime with the ultimate realization, brought home so vividly at the end of the 1960s by the first-ever images of the solitary Earth provided by the Apollo program, that we’re all utterly dependent upon a finite planet in an infinite sea of otherwise-uninhabitable space.  Earth Day followed in April 1970.

To commemorate this first Earth Day, I remember our second-grade class picking up scads of litter along the side of a section of highway.  Upon reflection, I am glad to note how much litter has declined in subsequent years — a case of how values can be reshaped and behaviors can be changed, if people are just a bit more conscious.

That’s a positive take.  However, one can reasonably look back on 50 years of the evolution of the energy sector and say that, well, not that much has really changed in America.

True, the basic structure of American life may not have changed too dramatically.

We still primarily live in single family dwellings, in suburbia, dependent upon cars that look more or less the same, fueled by gasoline available at stations just down the road.  The power grid is still there, powered by central-station powerplants; the light switches and outlets haven’t changed, with refrigerators still in every kitchen and TVs in every living space.

By all measures, Americans are still energy hogs, relative to the rest of the world.

Even so, I would assert that a lot has changed, at both the macro- and micro-levels, consequentially altering the trajectory of resource utilization in America from the path being determinedly travelled 50 years ago.

Admittedly, some of the changes we have experienced are a bummer:  niceties like summer evenings with the windows open are much rarer.  Nevertheless, I claim that most of the changes of the past half-century are positive – and can be attributed to a significant degree to what we now call “cleantech”.

Our energy bounty, improved so significantly by technological innovation, has been achieved while simultaneously improving environmental conditions in almost every respect.  Notwithstanding the substantial increase in carbon dioxide emissions, almost all other manifestations of environmental impact from energy production and use have dramatically improved in the past half-century.  Standards of living enabled by modern energy use, here in America and even more so in the rest of the world, have dramatically improved.

Moreover, the trends for further future improvement on all these fronts are favorable.

With the proliferation of improved technologies such as LED lighting, energy efficiency continues to advance.  Renewable energy continues to gain share:  wind and solar energy represented about a quarter of new U.S. electric generating capacity additions in 2010.  Citizen understanding of energy and environmental issues continues to become more sophisticated.

Beyond the forces specifically pertaining to the energy sector, a number of broader influences in U.S. society are improving the prospects for accelerating cleantech innovation and adoption.  Entrepreneurship is booming, consumerism is increasingly being called into question, capital markets are more amenable to investment in this sector and more capital is arriving accordingly, and the Internet makes an immense and ever-expanding pool of information freely available to enable better decisions.

Not to mention:  much of the opposition to a transition to the cleantech future emanates from older generations, which will die out in the next couple of decades, to be replaced by younger generations that are generally more supportive of increased cleantech activity.

So, while it’s easy to get discouraged by the impediments to cleantech progress on a day-to-day basis, taking the long view, it’s pretty apparent that big positive things can happen and in fact are happening.

50 years from now, in 2062, I hope to be alive and well at 100 and still contributing to the cleantech sector.  That may be overoptimistic.  But I don’t think it’s at all overoptimistic that we’ll see more changes, and more changes for the better, in the cleantech realm over the next 50 years than in the previous 50.

Chief Blogger’s Favorite Cleantech Blogs

I’ve personally written hundreds of articles over the years.  I selected a few I thought were pretty timeless or prescient, and worth rereading:

What is Cleantech?  Always a good starting point.

Or try The Seminal List of Cleantech Definitions.


The “Rules” in Cleantech Investing – Rereading this one after the cleantech exits study we just did, wow, was I on the money!


VeraSun IPO analysis – Read this carefully; I predicted exactly what would happen.  Also try the later version, Beware the Allure of Ethanol Investing.


Cleantech Venture Capitalists Beware, What You Don’t Know about Energy CAN Kill you – The title says it all.


What’s the State of REALLY Advanced Energy?

What’s the state of REALLY advanced energy?  Is there a breakthrough on the horizon that would really change the game in energy?

Fusion?  Small Modular Nuclear Reactors?

Is Cold Fusion or LENR real?  How much work is actually going into the area?  Are VCs funding it?

Can the 1st and 2nd laws of thermodynamics be violated, or at least stretched?  What do physicists say about that?

Check out Dr. Ed Beardsworth’s survey of the state and theory behind New New Energy tomorrow at 10 am Pacific.

Sign up here for our Cleantech.org class:

http://newenergy101.eventbrite.com/

Or email Dr. Ed Beardsworth at beardsworth @ janecapital.com for details on the Cleantech.org member discount.

Cleantech Venture Backed M&A Exits? Well, Yes, Sort of . . .

When people ask me whether investors are making money in cleantech, I tell them yes, but not the investors, or the deals, you would have thought.

Most analyses of cleantech exits do not differentiate venture-backed companies.  So we conducted our own study.

In the last 10 years, Cleantech.org’s Cleantech Venture Backed M&A Exit Study shows a grand total of 27 venture backed cleantech deals > $50 mm.

All in all, very tough returns.  A number of 8 to 10 figure fortunes were made, just largely not by the investors spending the 9 and 10 figure investments.

Of the 27 deals, there were 19 where we had data on both exit values and venture capital invested, and 8 where we had revenue estimates.

We found a 2.78x Median Exit Value Multiple on Venture Capital Invested.

– Those exit numbers include the founders’ and management’s shares, so average returns to investors would be somewhat lower.

We found a 2.2x Median Exit Value Multiple on Revenues.
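
For readers who want the mechanics, here is a minimal sketch, in Python with made-up deal figures (not the study’s actual data), of how these medians are computed.  Note that a median of per-deal multiples is not the same as the ratio of aggregate exit value to aggregate capital invested; that distinction is why the 2.1x aggregate figure below differs from the 2.78x median.

```python
from statistics import median

# Hypothetical venture-backed exits (illustrative only -- not the study data):
# (exit value $mm, VC invested $mm, revenue $mm)
deals = [
    (350.0, 120.0, 160.0),
    (210.0,  90.0,  95.0),
    ( 75.0,  40.0,  30.0),
    (500.0, 150.0, 220.0),
]

# Median of the per-deal multiples -- not the ratio of the totals.
median_exit_on_vc = median(exit_val / vc for exit_val, vc, _ in deals)
median_exit_on_rev = median(exit_val / rev for exit_val, _, rev in deals)

print(f"Median exit multiple on VC invested: {median_exit_on_vc:.2f}x")
print(f"Median exit multiple on revenues:    {median_exit_on_rev:.2f}x")
```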

$13 Billion in total M&A exit value.  Not bad, until you realize that’s over 10 years in which cleantech has seen tens of billions in investment, and we used a pretty broad definition of “venture backed”.  To get there we included Toshiba’s Landis+Gyr, Total’s SunPower, EDP’s Horizon and ABB’s Ventyx deals.  Those are among the top 5 deals by value, and represent 60% of the $13 Billion.  None were backed by investors you would normally think of as cleantech venture capital powerhouses (Bayard Capital, Cypress Semiconductor, Zilkha and Goldman Sachs, Vista Energy).  Three of them included prior acquisitions themselves.

Excluding those and looking only at the transactions where we had both valuation and exit data, we found an even weaker $3.8 Billion on $1.8 Billion in venture capital, 2.1x.

Most surprising, if you looked at the list of investors in these Nifty 27 exits, you’d have heard of very few of them.  This is truly not your father’s venture capital sector.

The exits have a surprisingly low-tech flavor, and were carried by renewable energy project developers, ESCOs, smart grid companies, and solar balance-of-system manufacturers.

If we had limited this to Silicon Valley venture investors in high tech deals, well, you’d have wondered if M&A were a four letter word.

Interesting, isn’t it?  Contact me at dikeman@janecapital.com with any questions or if you’ve got deal data you’d like to see included.

The Advent of SPS Policy?

In the cleantech sector, pretty much everyone knows the acronym RPS, for Renewable Portfolio Standards.  Since the first RPS policy in the U.S., implemented in Iowa in the late 1990s, 30 states have passed similar policies to promote the installation of renewable energy projects and expedite the penetration of renewable energy in electric power supply, overcoming the ambivalence or outright opposition of utilities.

Now, as reported in this article, California is considering the adoption of what looks to be the first Storage Portfolio Standard:  requirements for utilities to install grid-scale energy storage.  Specifically, in early August, the California Public Utilities Commission (CPUC) voted unanimously to adopt a framework for analyzing the energy storage needs of each utility.  This builds upon a previous bill, AB 2514, which included a mandate for the CPUC to “determine appropriate targets, if any, for each load-serving entity to procure viable and cost-effective energy storage systems to be achieved by” the end of 2015 and 2020.

Not surprisingly, the three major electric “load-serving entities” (i.e., electric utilities) in California — PG&E, SCE and SDG&E — all opposed this movement.  As did the Division of Ratepayer Advocates (DRA), the consumer watchdog organization, which argued that “picking arbitrary procurement levels…would most likely result in sub-optimal market solutions and increase costs to ratepayers without yielding commensurate benefits”.

As one of my former McKinsey colleagues noted on a number of occasions, quoting an executive who worked his entire career at a large electric utility, “No technology has ever been widely adopted by the electric utility industry without having it mandated by the regulators.”

The storage analogue of RPS policy — let’s call it SPS — faces some hurdles, no doubt.  But so did RPS policies.

Given that GE (NYSE: GE) is now working on a grid-scale battery technology, given how much GE’s wind business has benefited from the expansion of RPS policies over the last decade, and given how active GE tends to be in energy policy circles, it’s not a stretch to think that there will be a push for SPS-like policies across the U.S.

It will take time to fully implement, but perhaps grid-scale energy storage will soon be following the path blazed by renewables over the past 15 years, with a domino-effect of SPS requirements spreading across the country.


Is the “Weak Force” the Key to LENR?

By David Niebauer

In the early part of the 20th Century physicists theorized that a mysterious force held the nucleus of an atom together.  When it was demonstrated that this force could be tapped, releasing tremendous amounts of energy, a wave of excitement swept the scientific world.  It took only a few short years before atomic energy theories were experimentally validated in the first nuclear weapon detonations.  Hiroshima and Nagasaki followed.  Most of us alive today were born under the mushroom cloud that has loomed over humanity ever since.  Accessing the power of the strong nuclear force has been a mixed blessing:  it has brought the possibility of energy beyond our wildest dreams but with nightmarish consequences that were literally unimaginable a generation ago.

That physicists would become enamored of the strong nuclear force is understandable:  the energy locked in the nucleus of the atom is potent, it is real, and the challenge of harnessing it for useful purposes has become the “holy grail” of scientific endeavor.

But could another, more subtle, “fundamental force” hold the key to our energy future?

The Fundamental Forces of Nature and the Weak Force

Of the four fundamental forces (gravity, electromagnetism, strong nuclear force and weak nuclear force), the “weak force” is the most enigmatic. Whereas the other three forces act through attraction/repulsion mechanisms, the weak force is responsible for transmutations – changing one element into another – and incremental shifts between mass and energy at the nuclear level.

Simply put, the weak force is the way Nature seeks stability.  Stability at the nuclear level permits elements to form, which make up all of the familiar stuff of our world.  Without the stabilizing action of the weak force, the material world, including our physical bodies, would not exist.  The weak force is responsible for the radioactive decay of heavy (radioactive) elements into their lighter, more stable forms.  But the weak force is also at work in the formation of the lightest of elements, hydrogen and helium, and all the elements in between.

A good way to understand the weak force is in comparison with the actions of the other forces at work in the center of the Sun.  The Sun, although extraordinarily hot (10 million degrees), is cool enough for the constituent parts of matter, quarks, to clump together to form protons.  A proton is necessary to form an element, which occurs when it attracts an electron – the simplest case being hydrogen, which is composed of a single proton and a single electron.  By the force of gravity, protons are pulled together until two of them touch – but because of the electrostatic repulsion of their two positive charges, their total energy becomes unstable and one of the protons undergoes a form of radioactive decay, turning it into a neutron and emitting a positron (the antiparticle of an electron) and a neutrino.  This action forms a deuteron (one proton and one neutron), which is more stable than the two repelling protons.  This transmutation of proton into neutron plus beta particles is mediated by the weak force.

A neutron is slightly heavier, and therefore less stable, than a proton.  So the normal action of the weak force causes a neutron to decay into a proton, an electron and a neutrino.  At any rate, at the center of the Sun, once a deuteron is formed, it will fuse with another free proton to form helium-3 (one neutron and two protons), releasing tremendous amounts of energy.  These helium-3 atoms then fuse to form helium-4, releasing two more protons and more energy.  The release of energy in these strong-force fusion reactions is what powers the Sun.  But the entire process is set in motion by the weak force.
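
In standard notation, the chain just described looks like this (a sketch of the main proton-proton branch; only the first step is mediated by the weak force):

$$
\begin{aligned}
p + p &\to {}^{2}\mathrm{H} + e^{+} + \nu_e &&\text{(weak force: one proton becomes a neutron)}\\
{}^{2}\mathrm{H} + p &\to {}^{3}\mathrm{He} + \gamma &&\text{(strong-force fusion)}\\
{}^{3}\mathrm{He} + {}^{3}\mathrm{He} &\to {}^{4}\mathrm{He} + 2p &&\text{(strong-force fusion; two protons returned)}
\end{aligned}
$$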

Enter “Cold Fusion”

When Pons and Fleischmann stunned the world in 1989 by reporting nuclear reaction signatures at room temperature, physicists were understandably baffled and skeptical.  Given that virtually all nuclear physicists at the time were trained in the powerful energies of the strong force, tabletop fusion made no sense.  The fact that the phenomenon was dubbed “cold fusion” was unfortunate and likely contributed to its almost universal rejection by the scientific community.  Standard theoretical models could not explain how cold fusion might even be possible, and unless it could be understood, pursuing it was seen as pointless and a waste of time.  A comment attributed to Wolfgang Pauli describes the reaction of most physicists at the time:  “it’s not right; it’s not even wrong”.  Without a coherent theory to explain it, it wasn’t even science at all.

This all changed in 2006 with the publication of a paper in the peer-reviewed European Physical Journal by Allan Widom and Lewis Larsen titled “Ultra low momentum neutron catalyzed nuclear reactions on metallic hydride surfaces”.

In this paper, for the first time, a theoretical basis was put forth that explained many of the anomalous results being reported by experimentalists in the new field of Low Energy Nuclear Reactions (LENR) — and the common explanatory action was the weak force.

As explained by Dennis Bushnell, Chief Scientist at NASA Langley Research Center in his article “Low Energy Nuclear Reactions, the Realism and the Outlook”:

“The Strong Force Particle physicists have evidently been correct all along. “Cold Fusion” is not possible. However, via collective effects/ condensed matter quantum nuclear physics, LENR is allowable without any “miracles.” The theory states that once some energy is added to surfaces loaded with hydrogen/protons, if the surface morphology enables high localized voltage gradients, then heavy electrons leading to ultra low energy neutrons will form– neutrons that never leave the surface. The neutrons set up isotope cascades which result in beta decay, heat and transmutations with the heavy electrons converting the beta decay gamma into heat.”

Brief Description of Widom-Larsen Theory

Not everyone agrees that the Widom-Larsen Theory (“WLT”) accurately explains all, or even most, of the observed phenomena in LENR experiments.  But it is worth a brief look at what WLT proposes.

In the first step of WLT, a proton captures a charged lepton (an electron) and produces a neutron and a neutrino.  No Coulomb barrier inhibits the reaction.  In fact, a strong Coulomb attraction that can exist between an electron and a nucleus helps the nuclear transmutation proceed.

This process is well known to occur with muons, a type of lepton that can be thought of as very heavy electrons – the increased mass is what pulls the lepton into the nucleus.  For this to occur with electrons in a condensed matter hydrogen system, local electromagnetic field fluctuations are induced to increase the mass of the electron.  Thus, a “mass modified” hydrogen atom can decay into a neutron and a neutrino.  These neutrons are born with ultra low momentum and, because of their long wavelength, get caught in the cavity formed by oscillating protons in the metal lattice.

These ultra low momentum neutrons, which do not escape the immediate vicinity of the cavity and are therefore difficult to detect, yield interesting reaction sequences.  For example, helium-3 and helium-4 are produced often yielding large quantities of heat.  WLT refers to these as neutron catalyzed nuclear reactions.  As Dennis Bushnell explains:  “the neutrons set up isotope cascades which result in beta decay, heat and transmutations.”  Nuclear fusion does not occur and therefore there is no Coulomb barrier obstruction to the resulting neutron catalyzed nuclear reaction.
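
Schematically, the sequence described above runs as follows (my own summary notation, not taken from the Widom-Larsen paper itself):

$$
\begin{aligned}
\tilde{e}^{\,-} + p &\to n_{\mathrm{ulm}} + \nu_e &&\text{(mass-modified electron captured; no Coulomb barrier)}\\
n_{\mathrm{ulm}} + {}^{A}_{Z}X &\to {}^{A+1}_{\;\;Z}X &&\text{(ultra-low-momentum neutron captured by a nearby nucleus)}\\
{}^{A+1}_{\;\;Z}X &\to {}^{A+1}_{Z+1}Y + e^{-} + \bar{\nu}_e &&\text{(beta decay; heat and transmutation)}
\end{aligned}
$$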

Brief Description of Brillouin Theory

Robert Godes of Brillouin Energy Corp. claims that WLT explains some, but not all, of the observed LENR phenomena.  As Godes understands the process, metal hydrides stimulated with precise, narrow, high-voltage bipolar pulses (“Q-pulse”) cause protons or deuterons to undergo electron capture.  The metal lattice stimulation by the Q-pulse reverses the natural decay of neutrons into protons plus beta particles, catalyzing an electron capture in a first, endothermic step.  When the initial proton (or deuteron) is confined in the metal lattice and the total Hamiltonian (total energy of the system) reaches a certain threshold level by means of the Q-pulse stimulation, an ultra cold neutron is formed.  This ultra cold neutron occupies a position in the lattice where dissolved hydrogen tunnels and undergoes transmutation, forming a cascade of transmutations – deuteron, triton, quadrium – by capturing the cold neutron and releasing binding energy.  Such a cascading reaction will result in a beta decay transmutation to helium-4, plus heat.

The Q pulse causes a dramatic increase of the phonon activity, driving the system far out of equilibrium.  When this energy reaches a threshold level, neutron production via electron capture becomes a natural path to bring the system back to stability.
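
As I read Godes’s description, the claimed cascade would look roughly like this (my own schematic, with quadrium denoting hydrogen-4):

$$
\begin{aligned}
p + e^{-} &\to n + \nu_e &&\text{(Q-pulse-driven electron capture; endothermic)}\\
{}^{1}\mathrm{H} + n \to {}^{2}\mathrm{H},\quad {}^{2}\mathrm{H} + n \to {}^{3}\mathrm{H},\quad {}^{3}\mathrm{H} + n &\to {}^{4}\mathrm{H} &&\text{(neutron-capture cascade, releasing binding energy)}\\
{}^{4}\mathrm{H} &\to {}^{4}\mathrm{He} + e^{-} + \bar{\nu}_e &&\text{(beta decay, plus heat)}
\end{aligned}
$$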

Theory and Experiment

Other well-known LENR theorists have implicated the weak force, including Peter Hagelstein, Tadahiko Mizuno, Yasuhiro Iwamura and Mitchell Swartz.  The project now, as with all scientific endeavor, is to match experimental evidence to theory.  The hope is that the electron capture/weak force theories will help guide new, even more successful experiments.  This process will also allow theorists to add refinement and new thinking to their models.  I am reminded of the two “laws” of physicists proposed by an early weak force pioneer:

1. Without experimentalists, theorists tend to drift.

2. Without theorists, experimentalists tend to falter.

(T.D. Lee, as quoted in “The Weak Force: From Fermi to Feynman” by A. Lesov).

Experimentalists have been reporting anomalous heat from metal hydrides since before Pons and Fleischmann.  But without a cogent theory, they have had to rely on ad hoc, trial and error methods.  Given this state of affairs, the progress made in the LENR field in the last twenty years is remarkable.  Perhaps we are now at the beginning of a new era in which theoretical models will guide a rapid transformation of the science.

Conclusion

Scientists have focused on the strong nuclear force due to the immense power that can be released from breaking the nuclear bond.  Less attention has been paid to the weak force, which causes transmutations and the release of energy in more subtle ways.  Recent theories that explain many of the phenomena observed in low energy nuclear reactions (LENR) implicate the weak force.  We are now at the stage where theory and experiment begin to complement each other to allow for the rapid transformation of the new science of LENR.

Journalistic disclosure:  David Niebauer is general legal counsel to Brillouin Energy Corp.

Holy Grail 12.0: Is Our Quest At Its End?

I’ve been working with new energy inventions and their creators for almost 15 years now.  I don’t know how many times I’ve heard a new technology described as “the Holy Grail”:  solving all of the world’s problems forever.

Well, here’s the newest one using the Holy Grail cliche:  a supposedly carbon-neutral method of using microbes to convert electricity into natural gas.

Thanks to an article written by Brita Belli of Ecomagination at GE (NYSE: GE), I was pointed to the recently-reported work of a team of researchers led by Alfred Spormann at Stanford University and Bruce Logan of Penn State University.  These researchers have determined that an organism called Methanobacterium palustre, when submerged in water on an electrically-charged cathode, will produce methane (i.e., natural gas, CH4) — supposedly at an 80% efficiency rate.

The carbon-neutrality of this approach stems from (1) using surplus electricity generation from non-emitting wind or solar and (2) the microbe’s extraction of the carbon atom for the methane from CO2 in the atmosphere.
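
For reference, the net cathode half-reaction commonly cited for this kind of microbial electromethanogenesis (my summary of the chemistry, not taken from the article itself) is:

$$
\mathrm{CO_2} + 8\,\mathrm{H^{+}} + 8\,e^{-} \to \mathrm{CH_4} + 2\,\mathrm{H_2O}
$$

Eight electrons from the cathode reduce each CO2 molecule to methane, which is why the electricity must come from a non-emitting source for the resulting fuel to be carbon-neutral.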

So, in theory, one can make an infinite supply of a relatively clean fossil-fuel from renewable electricity by sucking carbon out of the air.  And, given the extensive natural gas pipeline, storage and distribution network, this fuel could be used for baseload power generation, traditional space/water heating and cooking purposes, and even transportation (e.g., natural gas vehicles).

The catch:  as is often the case with early discoveries in university labs, the researchers don’t know how to scale the technology and achieve consistent/stable results at commercially-useful levels.  The economics are also highly uncertain.

Don’t hold your breath.  This type of invention could take a very, very long time to turn into something that’s viable for the energy marketplace.  As a long-time executive from one of the supermajors once said to me, it takes 12-24 months to really prove something at the next order of magnitude — and in energy, it’s usually several orders of magnitude of expansion from the laboratory to the field.  Thus, what seems like an overnight success story usually has a decade or more of development behind it.

So, while this discovery might turn out to be the Holy Grail — and it definitely seems worth monitoring — one should not get too excited just yet.  There are a lot of potential hurdles to be overcome, and some of them may not be surmounted.  Even if the technology develops favorably, it’s a long way from being ready for prime-time.

In the meantime, this is the only Holy Grail to which I will pay attention.

Cold Facts About Air Conditioning

There may be people who understand the big picture about air conditioning better than Stan Cox, but the list is surely a short one.

Cox, who wrote Losing Our Cool:  Uncomfortable Truths About Our Air-Conditioned World, has also just written an excellent brief article called “Cooling A Warming Planet:  A Global Air Conditioning Surge”.

Rather than comment on Cox’s article in this posting, I will simply excerpt some highlights:  the facts are so succinctly presented that it makes no sense for me to try to improve upon them.  Starting right off the bat with the introductory paragraph:

“The world is warming, incomes are rising, and smaller families are living in larger houses in hotter places.  One result is a booming market for air conditioning — world sales in 2011 were up 13 percent over 2010, and that growth is expected to accelerate in coming decades…If global consumption for cooling grows as projected to 10 trillion kilowatt-hours per year — equal to half of the world’s electricity supply today — the climate forecast will be grim indeed.”

“The United States has long consumed more energy each year for air conditioning than the rest of the world combined.  In fact, we use more electricity for cooling than the entire continent of Africa, home to a billion people, consumes for all purposes.”

“Because it is so deeply dependent on high-energy cooling, the United States is not very well positioned to call on other countries to exercise restraint for the sake of our common atmosphere…With less exposure to heat, our bodies can fail to acclimatize physiologically to summer conditions, while we develop a mental dependence on cooling.  Community cohesion also has been ruptured, as neighborhoods that on warm summer evenings were once filled with people mingling are now silent — save for the whirring of air-conditioning units.  A half-century of construction on the model of refrigerated cooling has left us with homes and offices in which natural ventilation often is either impossible or ineffective.  The result is that the same cooling technology that can save lives during brief, intense heat waves is helping undermine our health at most other times.”

But, “China is already sprinting forward and is expected to surpass the United States as the world’s biggest user of electricity for air conditioning by 2020.  Consider this:  the number of U.S. homes equipped with air conditioning rose from 64 to 100 million between 1993 and 2009, whereas 50 million air-conditioning units were sold [in China] in 2010 alone.”

“The greatest demand growth in the post-2020 world is expected to occur elsewhere….Already, [in India], about 40 percent of all electricity consumption in the city of Mumbai goes for air conditioning…Within 15 years, Saudi Arabia could actually be consuming more oil than it exports, due largely to air conditioning.”

“In thinking about global demand for cooling, two key questions emerge:  Is it fair to expect people in Mumbai to go without air conditioning when so many in Miami use it freely?  And if not, can the world find ways to adapt to warmer temperatures that are fair to all and do not depend on the unsupportable growth of air conditioning?”

In response to these two daunting questions, Cox suggests some possible technological paths forward:

“Efforts to develop low-energy methods for warm climates are in progress on every continent.  Passive cooling projects…combine traditional technologies — like wind towers and water evaporation — with newly designed ventilation-friendly architectural features.  Solar adsorption air conditioning performs a magician’s trick, using only the heat of the sun to cool the indoor air….Meanwhile, in India and elsewhere, cooling is being achieved solely with air pumped from underground tunnels.”

This implies a wide space of opportunity for cleantech innovators, entrepreneurs and financiers.  In a short piece in its July 28 edition, The Economist profiled Advantix Systems, which is developing a new air conditioning technology that promises 30-50% less energy consumption.  Hopefully, it is one of many new entrants addressing the pressing cooling challenge facing the world.