Disintermediation, Technology Adoption Rates, CNC Mills and Innovators
By Michael Mealling
p. This is a long one, so click through to read the entire thing.
p. In Disintermediation and Politics I briefly touch on how disintermediation works and why it's a good thing, especially if we could “disintermediate NASA”. But the question that leaves open is exactly how to do that.
p. In nearly all cases disintermediation happens by radically changing the cost structure of a given industry. I'll use the Internet as the current, living example. The two 'laws' currently driving that market are: bq.
- Moore's Law: every 18 months, processing power doubles while cost holds constant
- Metcalfe's Law: the usefulness, or utility, of a network equals the square of the number of users
- First, in every market there is a rate of improvement that customers can utilize or absorb.
- Second, in every market there is a distinctly different trajectory of improvement that innovating companies provide as they introduce new and improved products. This pace of technological progress almost always outstrips the ability of customers in any given tier of the market to use it.
- The third critical element of the model is the distinction between sustaining and disruptive innovation. A sustaining innovation targets demanding, high-end customers with better performance than what was previously possible. Disruptive innovations, in contrast, don't attempt to bring better products to established customers in existing markets. Rather, they disrupt and redefine that trajectory by introducing products and services that are not as good as currently available products. But disruptive technologies offer other benefits: typically, they are simpler, more convenient, and less expensive products that appeal to new or less-demanding customers.
The combination of these two creates a ratio in which the cost of computing power is dropping while the value of the network it connects to is growing. This ratio creates value at ever lower cost, making the total transaction costs associated with many traditionally expensive tasks drop dramatically. And when transaction costs drop, disintermediation happens:
bq. Moore's Law and Metcalfe's Law have made many types of transactions cheaper. This has resulted in outsourcing and the disintermediation, or cutting out, of many transaction middlemen. This trend is sure to accelerate because digitization has cut transaction costs all the way to zero in some cases. Once a product or service is in place to be sold on the Internet, for example, there is zero cost for each additional transaction.
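The interaction of the two laws can be sketched numerically. This is a toy model, not anything from the original discussion: the growth rates, user counts, and dollar figures below are all hypothetical, chosen only to show how a halving cost divided by a squaring utility drives the cost-per-unit-of-network-value toward zero.

```python
def processing_cost(months, initial_cost=1000.0):
    """Moore's Law, idealized: the cost of a fixed amount of
    processing power halves every 18 months."""
    return initial_cost * 0.5 ** (months / 18)

def network_utility(users):
    """Metcalfe's Law: a network's utility grows as the square
    of its number of users."""
    return users ** 2

def cost_per_unit_utility(months, users):
    """The ratio described above: falling computing cost divided
    by rising network value."""
    return processing_cost(months) / network_utility(users)

# Hypothetical figures: over five years the cost of computing drops
# about 10x, while a network growing from 1,000 to 10,000 users gains
# 100x in utility, so the ratio falls by roughly three orders of
# magnitude.
start = cost_per_unit_utility(0, 1_000)
later = cost_per_unit_utility(60, 10_000)
print(f"cost per unit of network value drops ~{start / later:.0f}x")
```

The point of the sketch is that neither law alone is the story; it is the ratio of the two that collapses transaction costs and makes disintermediation possible.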
So in the case of a space industry, is it possible to at least approximate Moore's Law in aerospace technology? I think so, and here's why: most of what makes Moore's Law work has to do with margin incentives and the fact that the technology still has room to advance. Much of the reason companies like Intel have done the research it takes to double the computing power of a processor is that the payout is large: Intel estimates that its per-processor margin is around 80%. The flip side is that, since we are still learning about the atomic and subatomic worlds, there is still room for significant advancement as feature sizes get ever smaller.
So what will reduce costs in aerospace? It certainly won't be some brand-new launch technology such as the X-33 or the OSP (although some advances, such as a space elevator, could end up doing just that). Instead it will be the fact that CNC machining is now cheap enough to be completely outsourced, which means that one of the largest costs associated with traditional aerospace has been disintermediated. The combination of cheap access to fabrication and standardized plans means that the cost of building aerospace hardware drops significantly.
p. What all of that means is that, with reduced aerospace manufacturing costs and (hopefully) reduced regulatory overhead, new markets built on disruptive innovations can be developed. The Space Shuttle Main Engines (SSMEs) are the most efficient engines around, but would you really need something that efficient (and that expensive) if you had hundreds of thousands of much smaller launch vehicles, each capable of carrying much smaller payloads much more cheaply? (In other words, the same process that happened in the move from trains to over-the-road trucking.)
p. As Henry Spencer says to Clark S. Lindsey in Clark's Space Review article concerning suborbital's critics such as Bond and Pike: bq. I think these folks are making a fundamental mistake. They're assuming the traditional model of rocket development, where huge amounts of money are poured into building a system with absolute maximum performance, and into making "certain" that it will operate perfectly the first time. While this approach has been standard in the past, it is horribly expensive… and worse yet, it doesn't actually work very well.
Notice that their objections are essentially on technical grounds, where none of the four points I make above is really technical. In the old model (government-funded development of artillery rockets) technical problems dominate. In the harsh, cold real world that commercial rocket projects face today, the technical problems are not the hard ones.
p. Let me quote that again: the technical problems are not the hard ones. We need to start thinking disruptively. Some of us are, but it needs to become an industry-wide paradigm shift.