Industrial revolution? Maybe

I spoke at Oak Ridge National Laboratory about 3D printing and the industrial revolution.  Here's what I said.

The first industrial revolution took root about 200 years ago in England. Steam-powered machines were used to pump water out of underground coal mines, dramatically increasing coal mining productivity.  Cheap and abundant coal then made it profitable to run large machines in textile factories.  After a few decades, the manufacture of textiles shifted from a cottage industry to factories owned by corporations.

Depending on whom you ask, the second industrial revolution was either the wide-scale implementation of assembly-line mass production in the early 20th century, most notably by the Ford Motor Company, or the convergence and rapid improvement of computing and communications technologies that began in the 1950s and is still going strong today.

Many people believe that we’re in the early days of a third industrial revolution, one triggered by low-cost design and manufacturing tools and a growing Maker movement.  Here’s how the theory goes.   In the next industrial revolution, manufacturing will come full circle, evolving away from centralized, factory-dominated mass production back to a new era of digital cottage industries.

Some people believe that this looming third industrial revolution will democratize the production and distribution of physical goods.  Manufacturing hubs of small businesses will supply larger companies with inventory which will be stored in digital form as a design file.  3D printed parts and products will be produced just-in-time, locally, shortening global supply chains.  Low-cost customization will spark a newly revitalized economy and the creation of new jobs.

Maybe.  But would such a new manufacturing paradigm really constitute a full-blown industrial revolution?  Part of the challenge in answering this question lies in the use of the phrase “industrial revolution.”

How is the phrase “industrial revolution” defined?  The phrase is defined by the Business Dictionary as “An era of unprecedented technological and economic development that began during 1830s in UK and spread in varying degrees to the rest of Europe, US, and Japan. It replaced the animal and human power by mechanical power and transformed agriculture based economies to manufacturing based ones.”  The Cambridge Business English Dictionary takes a broader view and defines an industrial revolution as “any period of time during which there is a lot of growth in industry or in a particular industry.”

An industrial revolution is a transition, a cultural and economic sea change triggered by a convergence of new technology, social forces, and available natural resources.  Which forces trigger an industrial revolution also depends on whom you ask – innovative technologies, social factors, natural resources, or a blend of all three.  During the 18th and 19th centuries, some contributing factors were the development of the steam engine, rapid population growth, cheap steel, and abundant coal and hydropower.  If you imagine the development of 3D printing technologies in the context of either of these two definitions, then it would seem that reports of a third industrial revolution may be greatly exaggerated.

Maybe we're not at the brink of a third industrial revolution.  But we are at the brink of a technological leap forward.  This may sound like splitting hairs, but the phrase "industrial revolution" is actually not that useful in a practical context.  In fact, many scholars believe that the phrase is simply a shortcut, a misleading buzzword that oversimplifies the complicated feedback loops that take place when new technologies come together and very gradually accelerate social and economic change.

Here's the environment in which 3D printing technologies are gaining traction.  These converging forces – a blend of technological and economic factors – are creating a cascade of downstream innovation that's accelerating in speed.

  • Massive increases in low-cost computing power
  • Rapidly improving, low-cost design software
  • Hardware components shrinking in size, growing in power and dropping in cost
  • Key additive manufacturing patents finally expiring
  • High-speed internet that's everywhere and used for everything
  • Companies hungry to compress their product design life cycles as they compete in fast-paced global markets
  • Designers creating increasingly complex products who demand faster iteration, in the privacy of their own design studios (no leaked blueprints)

In this environment of converging forces, 3D printing technologies are feeding a technological leap forward that, in turn, is fed by those same forces.  The stage has been set for decades.  Significant, noticeable change will take decades more.  You could call this convergence of forces an industrial revolution.  Or not.  For now, I'll use the phrase "technological leap forward."

3D printing will bring about significant change in the way we design and produce physical objects.   Here’s why.

Technological leaps forward happen when a significant cost factor drops to nearly zero.  In the first industrial revolution, the cost of power per watt fell sharply when steam engines replaced horses and water wheels.  More recently, the cost of calculations per second has plummeted as computer components have become smaller, faster and cheaper to produce.  There are many more examples of a cost factor collapsing and triggering a cascade of downstream innovation and new business models.
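To make the arithmetic concrete, here's a minimal sketch of how a steadily falling cost factor compounds toward "nearly zero" over a few decades. The halving period and starting cost are made-up parameters chosen purely for illustration, not figures from the text:

```python
# Illustrative only: a cost factor (say, the cost of one calculation per
# second) that halves at a fixed interval, compounded over many years.
def cost_after(years, halving_period_years=2.0, initial_cost=1.0):
    """Unit cost after `years`, assuming the cost halves every
    `halving_period_years` years (both values are assumptions)."""
    return initial_cost * 0.5 ** (years / halving_period_years)

# After 40 years of biennial halving, the unit cost is roughly one
# millionth of its starting value.
print(cost_after(40))
```

This is the sense in which a cost factor "drops to nearly zero": nothing dramatic happens in any single year, but a steady exponential decline makes the cost negligible relative to its starting point within a few decades.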

3D printing drops several cost factors to nearly zero.   While researching the book Fabricated, as we interviewed people and thought about 3D printing, we kept coming across recurring themes – ways in which 3D printing disrupts the traditional paradigms of mass manufacturing.   We distilled these recurring core ideas into ten principles of 3D printing.

Each principle describes what makes 3D printing technology unique.  Each principle also demonstrates the removal of a significant cost element that’s associated with traditional mass manufacturing.  Some of these principles hold true today; others will need more time to really develop.

Principle one:  Manufacturing a complex shape costs the same as manufacturing a simple one.   In traditional manufacturing, the more complicated an object's shape, the more it costs to make.  A 3D printer, in contrast, prints a complex shape for the same cost as a simple one.

Principle two:  Variety is free. A single 3D printer can make many shapes. Like a human artisan, a 3D printer can fabricate a different shape each time. In contrast, traditional manufacturing machines are much less versatile and can only make things in a limited spectrum of shapes.

Principle three:  No assembly required. 3D printing can form objects that contain already interlocked parts. The more parts a product contains, the longer it takes to assemble and the more expensive it becomes to make.

Principle four:   Zero lead time. A 3D printer can print on demand, when an object is needed. The capacity for on-the-spot manufacturing reduces the need for companies to stockpile physical inventory. New types of business services become possible as 3D printers enable a business to make specialty or custom objects on demand in response to customer orders.

Principle five:   Unlimited design space. Traditional manufacturing technologies and human artisans can make only a finite repertoire of shapes. A 3D printer removes these barriers and can fabricate shapes that until now have been possible only in nature, opening up vast new design spaces.

Principle six:  Zero-skill manufacturing. Traditional manufacturing machines still demand that a skilled expert adjust and calibrate them. A 3D printer gets most of its guidance from a design file. Unskilled manufacturing opens up new business models and could offer new modes of production for people in remote environments or extreme circumstances.

Principle seven:  Compact, portable manufacturing. Per volume of production space, a 3D printer has more manufacturing capacity than a traditional manufacturing machine. For example, an injection molding machine can only make objects significantly smaller than itself. In contrast, a 3D printer can fabricate objects as large as or larger than itself.

Principle eight:  Less waste by-product. 3D printing in metal creates less waste by-product than the traditional grinding or molding techniques used in mass manufacturing.  Machining metal is highly wasteful as an estimated 90 percent of the original metal gets ground off and ends up on the factory floor.

Principle nine:  Infinite shades of materials. Combining different raw materials into a single product is difficult using today's manufacturing machines. As multi-material 3D printing develops, we will gain the capacity to blend and mix different raw materials. New, previously inaccessible blends of raw materials offer us a much larger, mostly unexplored palette of materials with novel properties or useful types of behaviors.

Principle ten:   Precise physical replication. A digital music file can be endlessly copied with no loss of audio quality. In the future, 3D printing will extend this digital precision and repeatability to the world of physical objects.

Today, we're at the dawn of 3D printing technology.  The development and improvement of 3D printing and related technologies will continue to accelerate.  As significant manufacturing costs are reduced to nearly zero in the coming years, we may witness a third industrial revolution.

Innovative technology, jobs and the razor’s edge

According to the Wall Street Journal, the unemployment rate in the U.S. is nearly 8%.  For people under age 25, unemployment is 16%.  In Greece, the unemployment rate is 26%, and among young adults it is 58%; Spain and Italy face similar unemployment challenges.

What is making jobs in economically developed nations disappear?  Could it be the effect of offshored manufacturing?  Frugal consumer behavior that’s shrinking company bottom lines, hence triggering layoffs?  Or…  the unintended side effects of advanced technology?

The answer is all three.  But let’s look at the unintended side effects of advanced technology.

Technology is a double-edged sword.  On the one hand, technologies that are more efficient than humans improve the quality of human life.  On the other hand, these same technologies destroy jobs.  New technologies improve food production and health care, liberate creative people from the shackles of the middleman, and increase social mobility.  Yet, as technological advancement accelerates, software programs, machines and robots are becoming better employees than we humans are.

We walk a razor’s edge.

Technological innovation is frequently offered as a panacea for what's wrong with our faltering economy.  It's a widely accepted notion, at least in mainstream circles in the western world, that technological innovation increases economic growth, which in turn introduces high-paying jobs, new industries, and new efficiencies that, in turn, beget new industries and more high-paying jobs.  Here's a typical belief expressed in a white paper on patent reform written by the U.S. Department of Commerce.  Note the Department's unblinking certainty that technological innovation is the pulse of a healthy economy.

“All major strands of economic thought now recognize that technological change is the primary driver of growth.   In fact, modern economic theory holds that without technological innovation, accumulation of wealth could not be sustained and per capita growth would trend to zero.”[1]

When new technologies replace human workers, economists call the demise of industries and resulting job-loss “creative destruction,” a term that hints at an underlying belief, that in the long term, innovative technology has a regenerative effect.  Economists point out that although Expedia, the internet and airline databases made travel agents obsolete, these new technologies kicked off a cycle of creative destruction.  In exchange for the short-term loss of some jobs, these technologies created an entirely new type of tourism industry and made travel cheaper and more convenient.

The theory of creative destruction holds that when a business fails – like a slow-moving traditional travel agency that processes paper airline tickets – it fails because it's inefficient.  In other words, a technologically backward company can't compete with companies that have embraced technology and become more efficient.  The theory holds that prosperity arises from the ashes:  a failed business's customers will migrate to a more technologically fit company, and its former employees will eventually find their feet at a more technologically adept one.

In the book "The Lights in the Tunnel:  Automation, Accelerating Technology and the Economy of the Future," author Martin Ford points out how entrenched mainstream economic thought is in the notion of creative destruction.  Ford writes,

“the idea that technology will ever truly replace a large fraction of the human workforce and lead to permanent, structural unemployment is, for the majority of economists, almost unthinkable. For mainstream economists, at least in the long run, technological advancement always leads to more prosperity and more jobs. This is seen almost as an economic law. Anyone who challenges this “law of economics” is called a “neo-Luddite.” This is not a compliment.”[2]

What if mainstream economics is wrong and there's no such thing as creative destruction?  What if there's just a bit of the creative and then mostly…  destruction?

One of the best recent books on the dance between job loss and new technology is "Race Against the Machine" by two MIT economists, Erik Brynjolfsson and Andrew McAfee.  The book is wonderfully written – clear, yet rich with data and information.  "Race Against the Machine" swims against the tide and makes a compelling case that innovative technology, by removing the need for human workers, is creating a devastating tidal wave of redundant workers.  Or unemployment.

According to Brynjolfsson and McAfee, traditional economic theory attributes high unemployment rates to one of three explanations: "stagnation," "lack of economic growth," or the "end of work."  In the "stagnation" explanation, the cure for unemployment is to unleash the adoption of new technology: the way to get people back to work is to make better use of new technologies and to remove the social or regulatory barriers to their widespread uptake.  In other words, technological advancement = more company profits = the creation of new, high-value jobs = economic growth.  In my experience, people who work in high-tech fields embrace the stagnation theory of unemployment.

Here's where "Race Against the Machine" gets interesting:  its authors break rank and argue that innovative technology is not the cure for, but the cause of, high levels of unemployment.  Brynjolfsson and McAfee argue that jobs in the U.S. and Europe have been disappearing over the past few decades not (as popularly assumed) because labor has been offshored to factories in cheaper, less regulated markets.  Rather, factory automation, not offshoring, has reduced the need for human workers, leading to high unemployment.

We're in a losing race against machine labor.  Brynjolfsson and McAfee point out that computing power is improving at an ever-increasing rate and the cost of hardware components is plummeting.  The result is that automated solutions (be it a database, an industrial robot, or a data mining application) are rapidly becoming more efficient, hence more cost-effective, than a band of unpredictable and complicated human employees.

Machines don’t get bored.  They don’t complain.  And they’re a lot more reliable and precise than human workers.

As computing power increases and software becomes more sophisticated, machines are taking their first steps onto what was once human-only territory:  being intelligent.  In 2011, Watson, a project from IBM's research division, won the game show Jeopardy, beating the show's greatest former human champions.  Watson created quite a stir; imagine applying Watson's massive text-scanning deductive power to interpreting a patient's symptoms.  A computer can process and remember entire libraries' worth of arcane medical data; a human doctor cannot.

Watson's triumph at Jeopardy was reminiscent of the victory of another IBM computer more than a decade earlier:  Deep Blue, which defeated world chess champion Garry Kasparov in 1997.  Although Deep Blue's victory caused quite a stir, it makes sense that a powerful, well-designed computer program could beat a human on the chess board.  Chess is a complicated and difficult game that involves a nearly infinite number of possible outcomes.  However, unlike life, a chess board is a finite environment.

A chess game is bounded.  Though the number of possible chess moves is staggeringly large, a powerful computer is more than capable of rapidly searching through an enormous number of the possible permutations of how a game may play out.  Kasparov's defeat at the "hands" of IBM's Deep Blue was a milestone.  Yet the way that IBM's Watson and Deep Blue defeated humans was by processing large amounts of data very quickly and then drawing (good) conclusions.  That doesn't mean that they're really "intelligent."
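The bounded-game idea can be made concrete with a toy example. The sketch below exhaustively searches a much smaller bounded game – single-pile Nim, chosen purely for illustration – to decide whether the player to move can force a win. (Real chess programs rely on deep search with pruning and heuristics rather than true exhaustive enumeration, which is intractable for chess; this sketch just shows the core idea of a machine examining every continuation of a finite game.)

```python
# Toy illustration of exhaustive game-tree search in a bounded game.
# Single-pile Nim: players alternate removing 1-3 stones; the player
# who takes the last stone wins.
from functools import lru_cache

@lru_cache(maxsize=None)
def current_player_wins(stones):
    """Return True if the player to move can force a win, decided by
    recursively exploring every possible continuation of the game."""
    if stones == 0:
        return False  # no stones left: the previous player took the last one and won
    # A position is winning if at least one legal move leaves the
    # opponent in a losing position.
    return any(not current_player_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)
```

Because the game tree is finite and small, the program can examine every line of play; for this variant, the player to move loses exactly when the pile size is a multiple of four. The point of the example is the contrast: this brute-force certainty is possible only because the game is bounded and tiny.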

Being intelligent is one of the primary aspects of being human that many of us still confidently assume makes us superior to technology.  How many times have you heard somebody say, “machines will *never* be as good as humans at any task that involves thinking or reacting.”  Another belief is that jobs that involve “people skills” can’t be automated.  Or creative activity can’t be automated.

Creativity, people skills, even decision making – all traits we assure ourselves will remain the sole domain of the human brain – are becoming automated.  For example, ten years ago most people would have refused to get into the back seat of a car whose driver was a computer.  Today, Google has shown that computer-guided cars – which find their way around using a firehose stream of GPS data and a complex system of sensors and interactive software – can be safer drivers than humans.  During Google's road tests, in which automated cars drove around U.S. roads for months, the only accident occurred when a human driver rear-ended a car being driven by a computer.

In reality, there are very few occupations ultimately exempt from automation.  Blue collar jobs have been automated for decades.  Now computers can do many white collar jobs as well.  Marshall Brain points out that “As CPU chips and memory systems finally reach parity with the human brain, and then surpass it, robots will be able to perform nearly any normal job that a human performs today.”

I'm torn.  It's one thing to point to repetitive, grueling labor – be it physical or cognitive – and say "technology liberates humans from drudgery."  But that liberation has a hidden cost:  no labor, no job.  Yet new technologies, even as they take away jobs, also dramatically improve the quality of our lives.  The phrase "do it manually" has become a synonym for work that's error-prone and inefficient.  And by introducing automation into manual jobs, humans are spared work that's tedious, inconvenient, even dangerous.

The problem is that technological advancement is accelerating, yet the human brain remains essentially the same.

Where does this leave those of us who are fascinated and excited by the steady stream of innovative technologies that flow out of universities, companies, government labs and the garages of DIY tinkerers around the world?  Martin Ford offers a solution, that we must re-adjust our notion of what constitutes “work.”  In some nations, farmers are paid by the government to let their fields lie fallow.  Ford proposes that one solution for massive, technological unemployment would be that the government pay people for engaging in constructive activities, for example, for going to school or for reading books.

Here’s another solution:  freeze technology development in place.  Imagine if somehow the development of new technology were to just stop.  No existing technology would be destroyed.  We would simply continue to live with the technology we have now.  But that seems like a poor cure for technologically-induced unemployment.

[1] "Patent Reform: Unleashing Innovation, Promoting Economic Growth & Producing High-Paying Jobs."  A white paper from the U.S. Department of Commerce, April 13, 2010.  By Arti Rai, Administrator, Office of External Affairs, USPTO; Stuart Graham, Chief Economist, USPTO; and Mark Doms, Chief Economist, Department of Commerce.

[2] The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future.  Martin R. Ford.  Acculant™ Publishing.  2009.