All new industries seem to think they deserve a Moore’s Law. The photovoltaic solar industry really, really thinks it deserves one, since it kind of sort of looks like a semiconductor business: Photovoltaic Moore’s Law Will Make Solar Competitive by 2015, IEEE.org; Understanding Moore’s Law, DistributedEnergy.com; and Silicon Valley Starts to Turn Its Face to the Sun, NY Times.
However, the nuances are mischievous. The cost implications of Moore’s Law are at heart built around a constant rate of technology performance improvement (2x transistors every 2 years), which implies certain cost improvements. PV’s falling cost curves have had more variables at play. In fact, the real equivalent to Moore’s Law in solar would be to say that cell efficiency or a similar measure doubles every x years. Most people have tried to apply a Moore’s Law-like concept in solar directly to the cost curve, not the technology improvement curve. The solar cost “Moore’s Law” that seemed simplest was the idea that every doubling of industry size equals a 10% cost reduction. But that is not a Moore’s Law; that’s mainly just a description of the supply curve’s shape and shift. It’s a totally different animal.
I’ve been researching this topic for some time, trying to develop a simple conceptual model to understand falling solar cost curves and their impacts, and I update my cost analysis spreadsheets based on numerous inputs from energy companies, solar developers, solar integrators, and module manufacturers. I think I now have a simple, economically sound model with good explanatory power that sheds some light on why and how the cost curves fall.
We’ll call it the Dikeman Solar Cost Model – the DiSoCo Model – and it’s simple and axiomatic: the value on the supply side = the value on the demand side, broken down into fixed, sticky, and variable components, by market segment.
Over the last couple of years, I’d argue that roughly half of the cost reduction in solar has come from the massive increase in larger installations (primarily spreading NRE and installation cost across larger projects, as well as improved economies of scale in manufacturing), not from solar technology costs themselves. The other half has come from actual technology cost reductions.
This is an important distinction, as it means that with, say, 2003 solar technology, if the subsidies and demand had been there to build a whole bunch of 10 MW PV farms, costs within striking distance of today’s could arguably have been achieved (as opposed to a Moore’s Law industry, where the fundamental technology performance curves would have been 8x better, with drastic cost improvements resulting). Technology costs haven’t necessarily fallen as much as we think; rather, the scale has changed, making costs look like they’ve fallen a significant amount.
And we have to be careful about generalizing the technology cost reductions, too. A large chunk of the technology cost reductions at scale (perhaps 50%?) has come from one company, First Solar, out of the hundreds that manufacture PV products. If you take them out of the equation, the falling technology cost curves don’t look so great.
But I’ll posit a cost reduction law for solar that may hold. Roughly speaking, per unit solar industry costs at the system level fall each year in line with the reduction in per unit subsidies for the key solar subsidy programs that year, adjusted for interest rates and margin changes. Because if they don’t, the industry doesn’t sell product.
Why? We argue that the market is basically willing to pay a set rate per kWh for solar that is reasonably constant over time. The underlying conceptual DiSoCo Model is this: the market’s set rate for solar + the cost of capital + the per unit subsidy = solar system cost + solar system embedded margin. My primary use of the model has been to break out each component, market by market and segment by segment, and analyze how fixed, variable, or sticky each is, to better understand their interactions as conditions change. If this is true, then for a given set rate and the same interest rates as last year, changes in the subsidy come out of either cost or margin. If margin were mature and fixed, cost changes would equal subsidy changes.
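The identity above can be made concrete with a minimal sketch. All numbers here are hypothetical, chosen only to illustrate the accounting; the model itself says nothing about their actual levels:

```python
# Minimal sketch of the DiSoCo identity, per kWh (all numbers hypothetical):
#   market set rate + cost of capital + per unit subsidy
#       = system cost + embedded margin

def embedded_margin(set_rate, cost_of_capital, subsidy, system_cost):
    """Solve the identity for the embedded margin term, per kWh."""
    return round(set_rate + cost_of_capital + subsidy - system_cost, 4)

# Year 1: hypothetical figures (all in $/kWh-equivalent terms)
m1 = embedded_margin(set_rate=0.12, cost_of_capital=0.03,
                     subsidy=0.20, system_cost=0.30)

# Year 2: the subsidy steps down by $0.02/kWh; set rate and interest
# rates are unchanged, and system cost is sticky. The cut comes
# straight out of margin.
m2 = embedded_margin(set_rate=0.12, cost_of_capital=0.03,
                     subsidy=0.18, system_cost=0.30)

print(m1)  # 0.05
print(m2)  # 0.03
```

The point of the rearrangement is that with the set rate and cost of capital pinned, margin is the residual: a subsidy cut that is not matched by a cost cut shows up there, dollar for dollar.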
We could extend the model by suggesting that changes in the market set rate are a function of retail and wholesale energy prices, of non-direct subsidy programs like RPSs and RECs, and of non-market buyers willing to accept low equity ROEs. We could further extend it by suggesting that some subsidies, like the ITC, may manifest in the cost of capital rather than in the per unit subsidy.
In a real life example, when subsidy programs have built-in per unit reductions over time or volume (as the Japanese program did, as California’s does, and as many of the FITs do), the industry has to find a way to take enough cost out to match the reduction; otherwise the margin gets hammered. This suggests the market won’t actually see the cost reductions until the subsidy ends, except where industry cost reductions exceed the subsidy reductions in a given period. (In fact, this was true, and available manufacturing capacity seems to have a big impact on this component as well: for several years, the manufacturers didn’t pass on ANY technology cost reductions, but fattened margins and prices instead.)
And extending on that, we realize that the swing variable has been manufacturers’ margin at the ingot/wafer, cell, and module levels, not cost, which has tended to be more fixed or sticky than we thought. In a period of tight supply, as we had during the silicon refining shortage, margin goes up, all else equal; in a period of oversupply, which we are moving into, margin goes down, since the other major components (including, unlike the Moore’s Law corollary, technology cost) are relatively fixed or sticky over short time frames. The market still only pays what it will pay per kWh, and the subsidies and interest rates are what they are, so known coming reductions in per unit subsidies or volumes force the industry to find a way to take it out of costs, see margin suffer, or find new markets with new subsidies. Hence, the model allows us to posit the law that the real long term linkage is subsidy reductions to cost reductions, adjusted for swings in margins.
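The margin-as-swing-variable dynamic can be sketched over a few hypothetical years. Here the set rate and cost of capital are held fixed, the subsidy steps down on a schedule, and system cost falls more slowly than the subsidy; every figure is invented for illustration only:

```python
# Hypothetical multi-year sketch: with a fixed market set rate and a
# sticky system cost, margin is the residual ("swing variable") as the
# per unit subsidy steps down. All figures are invented for illustration.

SET_RATE = 0.12          # what the market will pay per kWh (assumed fixed)
COST_OF_CAPITAL = 0.03   # assumed constant interest-rate environment

years = [
    # (subsidy, system_cost) per kWh; cost falls more slowly than subsidy
    (0.20, 0.30),
    (0.18, 0.29),   # cost keeps pace: margin only slips a little
    (0.16, 0.29),   # cost is sticky this year: margin takes the full hit
]

margins = []
for subsidy, cost in years:
    margin = round(SET_RATE + COST_OF_CAPITAL + subsidy - cost, 4)
    margins.append(margin)
    print(f"subsidy={subsidy:.2f} cost={cost:.2f} margin={margin:.2f}")
```

Run forward, the squeeze in the final year is exactly the gap between the subsidy step-down and the cost reduction, which is the posited long term linkage in miniature.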
This would help explain the rise of the grid-linked industrial market in California and Germany – effectively a partnership between public policy, manufacturers with limited near term technology cost reduction potential needing economies of scale, and the rise of the PPA/developer model as the facilitator between the two – and explain the continually skinny economics for end users/PPA owners, despite falling costs.
We could further extend that last point by applying it niche by niche, country by country. And we can better understand the market by realizing that manufacturers – starting with the Japanese firms five years ago when the Japan rebates rolled off, and extending currently to the moves by First Solar, Suntech, et al. into power plant development – have effectively applied this model country by country and niche by niche, seeking new markets as the subsidies fall and move, in a bid to maintain margins while cost curves held steady.
So the DiSoCo Model is simple enough: it states that the value on the supply side = the value on the demand side, and when we break the components out and evaluate, market by market, which are fixed in the short term and which are variable, it has seemed to us to shed some light on why the solar markets have moved the way they’ve moved. And it posits that a market set price exists segment by segment, and therefore that if margins are normal in that segment, reductions in per unit subsidy levels roughly equal reductions in cost, and only when reductions in cost drastically exceed those in subsidy levels can price be affected.
And it gives us a very different picture of falling cost curves and price implications than pretending Moore’s Law works for solar.