
© 2020 Heller House LLC

The World in Ten Years

“Maintaining a firm grasp of the obvious is more difficult than one would think it should be. But it’s useful to try.”

– Jeff Bezos


The surest way of looking like a fool is to make predictions. What I’ve tried to do here instead is to compile a short list of trends that have legs; things that appear obvious, and that are likely to have a deep impact as the next ten years unfold.


With the obvious in mind, enjoy the ride.


Self-driving cars


At the initial DARPA Grand Challenge, held in March 2004, autonomous cars failed to drive a mere 8 miles in the desert. Fifteen years later, Waymo—born out of that DARPA competition—had delivered over 100,000 autonomous rides in real-world city conditions. While still restricted to a relatively small geographic area in Phoenix, AZ, Waymo’s service boasts 1,500 monthly active users. In December 2019, the company delivered three times as many rides as it did in January of the same year. Anyone can download its iOS app and try it out.

Waymo is taking its learnings in Phoenix and slowly expanding to other geographies. Its computers are learning at a fast pace: it took Waymo 10 years to drive its first 10 million miles, but just one year to drive the next 10 million. These are real-world miles; Waymo has driven billions more in simulation. Hundreds of other companies are attempting something similar, albeit using a variety of different approaches.

Strategically, Waymo seems to be doing something interesting (and very difficult). It is trying to be a vertically integrated player, bundling together hardware (LIDAR and other sensors, computers, and cars from third-party suppliers) with proprietary software and creating its own rider network. At the same time, Waymo is trying to be a horizontal player by licensing its technology to others.


Companies usually succeed by being either vertical or horizontal players, rarely both. Waymo seems to be figuring out which way to go. To the extent its horizontal experiment bears fruit, Waymo could become the Microsoft of autonomy, the operating system layer that powers the vast majority of autonomous vehicles.


Another autonomy entrant is Tesla, which is pursuing a completely different strategy. First, it sells cars with cameras rather than LIDARs, which are still very expensive. As customers drive around collecting data, Tesla trains its self-driving algorithms. The company's fleet is several times larger than Waymo's and spans everywhere Teslas are sold.


At an event held last year, Musk noted that Tesla has a hundred times more data than anyone else. It's also building its own silicon chips. The latest version runs twenty times faster than Nvidia’s solution—the leader in the space—with less power consumption.


With the most data and the best hardware, Tesla, according to Musk, will be first to full autonomy. Musk says gross profits to customers who buy a Tesla and deploy it in the company's own autonomous ride-hailing network could reach $30,000 per year, nearly the price of the car.
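Musk's figure is easy to sanity-check with a back-of-the-envelope calculation. The car price below is my assumption (roughly a Model 3 at the time); the $30,000 annual gross profit is Musk's:

```python
# Implied payback period for a Tesla deployed in an autonomous
# ride-hailing network, using Musk's $30,000/year gross-profit figure.
car_price = 38_000              # assumed purchase price, USD (roughly a Model 3)
gross_profit_per_year = 30_000  # Musk's stated figure

payback_years = car_price / gross_profit_per_year
print(f"Payback in about {payback_years:.1f} years")
```

At those numbers the car would pay for itself in well under two years, which is the logic behind Musk's "financially insane" quip.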


Musk added: “The fundamental message that consumers should be taking today is that it’s financially insane to buy anything other than a Tesla. It’ll be like owning a horse in three years. If you buy a car that does not have the hardware necessary for full self-driving it’ll be like buying a horse. And the only car that has FSD is a Tesla. It’s basically crazy to buy any car other than a Tesla.”


If Musk is right, his approach has the potential to handily beat Waymo and others who rely on expensive LIDAR sensors and high-definition maps of their routes, an approach that is inherently brittle. And if Tesla can also bootstrap its own ride-hailing network, it will have a substantially lower cost structure and proprietary technology relative to the existing ride-hailing networks (Lyft and Uber).


Removing the labor component (humans) from driving, and electrifying the car, dramatically reduces the cost per mile driven. I’ve seen estimates ranging from 50-70 percent cheaper. Inexpensive, autonomous rides will open up new possibilities for those who can’t or won’t take a traditional driven car. Eventually, children might ride themselves safely to after-school activities or friends’ houses.


New in-car configurations with desks or conference tables will allow commuters to work, watch movies, or enjoy naps on longer trips. For long-distance travel, there might even be autonomous “hotels” that let you sleep comfortably in a bed throughout your trip. The overall societal impact will be huge; after all, 1.25 million people die in car accidents every year around the world.

This dream has been a long time coming.

This ad is from 1956, by a group of electric power companies.


The possibilities for trucking, a tough job with long hours on the road away from family and friends, are equally enticing. The industry faces a chronic labor shortage; autonomy could not only fill the gap, but allow significantly higher utilization of the existing truck fleet, at lower cost. Some analysts believe a 45 percent cost reduction is possible, saving the industry around $100 billion.


Snapping these pieces together, a growing portion of our package deliveries will likely arrive in electric, autonomous trucks by the end of the decade.


It’s impossible to know who the winners will be and how fast these innovations will diffuse. But it’s reasonable to expect that within ten years we’ll have dramatically more autonomy on the road than today, and that will have enormous impact on society.


(Very) personal computing


Computers started out as room-sized machines and over the decades have become ever more personal. After the mainframe there was the minicomputer, then the personal computer, then the smartphone, and now we have computers in our ears (Apple AirPods and others) and on our wrists.

The IBM System/360 from 1964, and the Apple AirPods Pro from 2019.

Source: IBM and Apple.


How soon will computers interface directly with our brains?


It’s already happening. CTRL Labs, recently acquired by Facebook, has created a wristband that can use the arm’s neuronal patterns to control a computer. Hand movement can be used to control a mouse cursor or play games. But the software has become so accurate that mere intent to move your hands is enough. The bracelet is literally reading your mind.


Alongside this technology for controlling computers with our thoughts, we now have virtual reality. If you haven’t yet tried the Oculus Quest, you’re missing out; it’s fantastic.

Source: Oculus


Right now, VR’s main use case is still gaming and entertainment (although Oculus has a division dedicated to business, with many customers already signed up). But as the technology improves—with higher resolution screens and more comfortable headsets—we’ll start seeing more productivity applications of VR.


By the end of the decade, there might be tens of millions of workers collaborating daily in high resolution virtual offices.


A similar but harder problem to solve is augmented reality. Depending on how you define it, AR is either already here (Focals by North), or it’s still ten years away. Mike Abrash, Chief Scientist at Oculus, believes it’s the latter. The real judge will be consumers. The latest rumor is that Apple plans to release AR glasses sometime in 2022 or 2023, so we’ll soon find out.

Focals by North. This is the first generation product.

Version 2.0 should come out this year. Source: North


Whether it’s VR or AR that gains adoption sooner, the existence of neural interfaces (perhaps built into a watch) and earbuds with voice assistants means we have the ingredients for an entirely new computing platform.


And whenever there are platform shifts, new winners emerge. Think of the shift from mainframe to personal computers (Microsoft as the winner), then the shift from personal computers to smartphones (Apple and Google as the winners).


Much like the smartphone complements our desktops and laptops, it’s likely that AR glasses (and VR headsets) will complement the existing computing landscape. It’ll be interesting when they gain mainstream adoption, creating a whole new cadre of winners and losers in their wake.


Space exploration grows the pie


The first time a human stepped on the moon was in 1969, and the last time was in 1972. We’ve known about the vast potential of space exploration for a long time. How come we’ve never gone back?


The answer is it costs too much. Until now.


The reason space exploration has been so expensive up to this point is two-fold. First, we’ve never had reusable rockets. Jeff Bezos, founder of Amazon and rocket company Blue Origin, notes that, “It’d be like getting in your 747 and flying across the country and then throwing it away, just using it one time. Imagine how expensive traveling would be.”


What about the Space Shuttle? It wasn’t functionally reusable: after every flight NASA would take it apart and inspect it, at huge expense.


The second reason is that space exploration has been a government monopoly, and we all know what that means: no incentive for cost control. NASA’s mindset has always been about getting things working reliably and safely. Cost was never a primary constraint.


SpaceX—the leading rocket company today—had cost in its crosshairs from day one. As Elon Musk recounts it, “We had to be super scrappy. If we did it the standard way, we would have run out of money. For many years we were week to week on cash flow, within weeks of running out of money. It definitely creates a mind-set of smart spending. Be scrappy or die: those were our two options.”


The examples below were taken from the books The Space Barons and Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future. They show how the industry was hampered by a lack of focus on price and cost, and how SpaceX was able to massively deflate its bill of materials:



According to a WSJ report, “SpaceX’s roughly $60 million launch fee for Falcon 9 is one-quarter to one-half of what the Boeing-Lockheed venture typically charges.” These launch costs are going to keep coming down as the frequency and capacity of SpaceX’s rockets increase.


The cost and reliability of space exploration are put in stark relief when we compare it to aviation. The former suffered from “closed” innovation, driven mostly by NASA and its cost-plus contractors. The latter benefited from “open” innovation, driven by free markets. The results:


Spaceflight: 1961, first man in space. 48 years later (2009):

- 46 people flying in low earth orbit
- chance of death: 1 in 254

Aviation: 1903, first human flight. 48 years later (1951):

- 39 million people flying
- chance of death: 1 in 288,444


But why should we go to space at all? Two words: economic growth.


If we stay within the confines of our planet, we’ll inevitably hit a wall. Bezos notes that compounding our energy usage by just two percent per year would result in us running out of energy in a mere 200 years (put differently, we’d need to cover the entire planet in solar panels to generate enough energy for our needs).
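Bezos's 200-year figure is just the arithmetic of compounding. A minimal sketch:

```python
# Global energy use compounding at 2 percent per year, per Bezos's example.
growth_rate = 0.02
years = 200

multiple = (1 + growth_rate) ** years
print(f"After {years} years, energy use would be roughly {multiple:.0f}x today's")
```

A roughly 52-fold increase in energy use is what, in Bezos's telling, would require covering the planet in solar panels.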


And we want to use more energy, because energy is the lifeblood of economic growth.


An even more compelling reason is that going to space allows us to create entire new economies, growing the pie for humanity. Bezos held an event last year to show off Blue Origin’s upcoming moon lander, and he talked about O’Neill habitats, named after his Princeton professor Gerard O’Neill:



Imagine a future in which humans inhabit cities on the moon, on Mars, inside O’Neill habitats and asteroids.


These are vast spaces. Mars has a surface area only slightly smaller than the total area of Earth’s dry land. The moon has a surface area nearly five times larger than the 48 contiguous United States. The asteroid belt has enough material—in terms of minerals—to build habitable areas one hundred times larger than Earth.


According to the CIA World Factbook, the world’s economy was roughly $80 trillion in “gross world product” in 2017. Expanding into space would generate enormous economic growth and support hundreds of billions of humans. Imagine how much larger our economy would be.
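A crude way to put a number on "how much larger": hold per-capita output constant and scale the population. The $80 trillion comes from the CIA figure; the world population and the 100-billion space-era population are my own illustrative assumptions:

```python
# Crude extrapolation: hold per-capita output constant, scale population.
gwp_2017 = 80e12          # gross world product, USD (CIA World Factbook, 2017)
population_2017 = 7.5e9   # approximate world population in 2017 (assumption)

per_capita_output = gwp_2017 / population_2017  # ~$10,700 per person
space_era_population = 100e9                    # "hundreds of billions", illustrative

implied_economy = per_capita_output * space_era_population
print(f"Implied economy: ${implied_economy / 1e12:,.0f} trillion")
```

Even with no productivity gains at all, that's an economy over thirteen times the size of today's.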


A first obvious industry would be space mining. There are plenty of rocks in space containing enormous resources. One Goldman Sachs analyst estimated that a single asteroid the size of a football field might contain $25 to $50 billion worth of platinum. This might not last long, as the flood of minerals would lower their prices.


Over time, it’s more likely that these markets would develop like the new frontiers of the 18th and 19th centuries, which saw pioneers settle new lands and create several innovations in the process. There are many industrial processes that can only be pursued in the special conditions afforded by space: perfect vacuum and zero or low gravity. Space-based R&D could yield a bonanza of inventions.


More appealing still is the vision of space exploration as a platform. When Bezos started Amazon, he relied on the hundreds of billions of dollars spent on existing infrastructure such as roads, the US Postal Service, the credit card networks, airlines, ocean freight, and many others.


Entrepreneurs starting internet businesses in their garages today can rely on the hundreds of billions of dollars spent to build up the internet infrastructure, including, incidentally, Bezos’s AWS.


But no entrepreneur today can rely on infrastructure to build a space-based business, because that infrastructure doesn’t exist. Imagine the trillions of dollars that will be unlocked once it’s built.


Cheaper, faster data everywhere


What happens when data connectivity becomes cheaper? We use more of it! Much more, in fact. With the advent of the smartphone, we were consuming about 20 exabytes of data per month in 2017. This doubled by 2019 and is expected to grow another four-fold by 2025, according to the Ericsson mobile data traffic outlook.
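Taking the Ericsson figures at face value, the implied trajectory looks like this:

```python
# Global mobile data traffic implied by the figures in the text,
# in exabytes (EB) per month.
traffic_2017 = 20                  # EB/month in 2017
traffic_2019 = traffic_2017 * 2    # "doubled by 2019"
traffic_2025 = traffic_2019 * 4    # "another four-fold by 2025"

print(f"2017: {traffic_2017} EB, 2019: {traffic_2019} EB, 2025: {traffic_2025} EB")
```

That's an eight-fold increase over eight years, or roughly 30 percent compounded annually.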


With less expensive data and faster connectivity, new use cases are unlocked. The first cell phones could make phone calls and send text messages. Then, internet access allowed us to check stock prices and the weather. The smartphone delivered web pages and photos. Today, we can use our phones for 4K video streaming and real-time gameplay.


But this is not universal. According to the United Nations, by the end of 2019 there were still 3.8 billion people without even basic internet access.


To help remedy this, several startups have announced they will launch constellations of low earth orbit satellites to provide low-latency, high bandwidth connectivity around the world. While this has been a dream of satellite entrepreneurs for decades, the technology has finally evolved to turn the dream into reality.


Together with OneWeb, SpaceX and Amazon will launch tens of thousands of satellites in the coming years (the current estimate is 46,100 altogether), which is more than five times the number of objects launched into space over the past 60 years.


Why the sudden interest?


Three main factors are at play. First, internet markets are large enough to justify the investment, with data-hungry smartphones, connected cars, and IoT (internet of things) devices. Second, it has become feasible to mass produce small satellites that are cheap and reliable. And finally, launch costs are much lower (see the section on space exploration, above).


There are existing solutions for internet access through satellites, but they’re pricey and experience high latency, given these satellites' geostationary orbits. The newly proposed constellations will be nearly twenty times closer to Earth (in LEO, or low earth orbit), making their latency comparable with current fiber optic systems.
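The latency gap is mostly physics: light-speed round-trip time scales with orbital altitude. A rough sketch, using representative altitudes that are my assumption (~35,786 km for geostationary orbit, ~550 km for a Starlink-class LEO shell):

```python
# Minimum round-trip latency set by orbital altitude alone
# (ground -> satellite -> ground; ignores processing and routing delays).
SPEED_OF_LIGHT_KM_S = 299_792

def min_rtt_ms(altitude_km: float) -> float:
    """Light-speed round trip for one up-and-down hop, in milliseconds."""
    return 2 * altitude_km / SPEED_OF_LIGHT_KM_S * 1000

geo_ms = min_rtt_ms(35_786)  # geostationary orbit
leo_ms = min_rtt_ms(550)     # low earth orbit, Starlink-class

print(f"GEO floor: ~{geo_ms:.0f} ms; LEO floor: ~{leo_ms:.1f} ms")
```

A floor of roughly 240 ms versus under 4 ms is why GEO internet never felt like fiber, and why LEO constellations can.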


In mid-2019, SpaceX said it could generate $30 billion in revenues from its satellite constellation just by capturing a low-single-digit percentage of the global internet access market. This would be nearly ten times SpaceX’s current revenues.


However, it is Amazon that probably has the most straightforward path to monetizing its constellation. The company is also touting the internet access angle for its satellites (called Project Kuiper). But in late 2018, Amazon’s cloud computing subsidiary, AWS, announced a series of ground station antennas aimed at helping customers download, process and store data from their satellite deployments.


None of the LEO constellations are capable of beaming internet directly to a smartphone; they all require ground stations as a relay, so this gives AWS a leg up in the race. Furthermore, AWS has been building out its cloud infrastructure by draping fiber optic cables around the planet. Having additional capacity through low-latency satellites only improves its competitive position with customers.


The outcome of this potential bonanza is hard to predict, but there are some obvious beneficiaries: consumers and businesses will have more choice, faster speeds, and lower prices.


Facebook, which has seen video usage explode, will also benefit; and for the same reasons, so will YouTube, Netflix and other OTT video providers.


Gaming is another obvious beneficiary, with various companies now pursuing strategies based on having all of the intense computation done in the cloud rather than on a local console. Finally, future AR and VR headsets, when widely adopted, will have ample bandwidth to deploy real-time gaming, entertainment and collaboration software.


Cloud computing grows more dominant


Cloud computing was launched by AWS in 2006 with the announcement of its Simple Storage Service (S3). The industry has grown tremendously since then, but we are clearly still in the early innings not only of cloud adoption, but also of optimizing workloads for the cloud and optimizing the cloud datacenters themselves.


Andy Jassy, the CEO of AWS, recently noted that only three percent of IT workloads have moved to the cloud. Sell-side analysts from investment banks tend to believe cloud penetration is closer to 10-20 percent. The reality is probably somewhere in between.


Estimates of the size of the cloud “market,” however, suffer from an intrinsic flaw: we cannot measure a market that does not yet exist.


To understand what this means, let’s take a short diversion into the story of Shopify, a company that builds software that allows merchants to host their own ecommerce stores. When its founder, Tobi Lütke, was looking for venture capital funding, one investor noted that the total addressable market for Shopify was too small, at only 40,000 merchants. He passed on the investment.


Last year, when Shopify achieved 1 million merchants on its platform, the same investor asked Lütke what he had missed. “Nothing,” said Lütke, “You correctly identified the problem, which was that the market was too small because Shopify didn’t yet exist.” In other words, the existence of Shopify itself enabled the creation of the market.


Something similar is happening with cloud computing. There are thousands of startups that could not have been created were it not for the ease of logging onto AWS, inputting one’s credit card number, and immediately having access to unlimited resources and a vast array of software.


As AWS continues to build additional capabilities, it will further broaden its appeal and lower the barriers to innovation and experimentation. The result is that we will continue to witness an explosion in cloud-native businesses. (Here, I'm using AWS as the industry standard, but this applies to Azure and Google Cloud as well.)


All the while, we will also continue to witness the tidal wave of existing enterprises moving their workloads to the cloud. As AWS continues to improve its offerings and lower its prices, it becomes increasingly irresponsible not to move to AWS. Indeed, the yearly keynotes at AWS’s conference are peppered with real-life case studies of large enterprises moving their applications off legacy, on-premises hardware and software stacks and onto AWS, experiencing dramatic improvements in performance and much lower costs.


Every company has limited resources and a long list of IT projects it would like to see implemented. Inevitably, it must prioritize. When companies realize that running their own fleet of servers, installing and managing software, patches and security, and other mundane tasks can be better performed by AWS or another cloud provider, they are happy to make the move. This immediately frees up resources, cuts costs, and allows these companies to become much nimbler and more innovative.


Being new, though, means that most folks aren’t yet optimizing their cloud instances. Doing so can dramatically cut cloud bills and speed up applications. Given the ongoing scarcity of software developers, anything that can improve their productivity is a sure bet.


At the same time, while storage was AWS’s first offering, it has since built a cornucopia of ever-growing software services like databases, analytics and machine learning:

Source: https://www.awsgeek.com/AWS-Periodic-Table/


In industry parlance, AWS continues to move “up the stack” from base-layer infrastructure into more value-added (and higher margin) offerings. In addition, given the vast scale of AWS, it is also one of the world’s largest consumers of computer hardware. There are enormous advantages to be reaped from vertically integrating parts of the hardware stack, and this is what AWS has done with the acquisition of Annapurna Labs in 2015.


This year, AWS announced new chips that offer 40 percent better price/performance than Intel’s. It has also moved virtualization—the first job of any cloud provider, since it allows one server to be split among many different customers—entirely to AWS’s custom Nitro chips.


Finally, there remains a lot of work to be done in understanding how to build warehouse-scale computers. In a 2018 document, Google Cloud engineers wrote that “It is notable that nearly one out of three compute cycles at Google is attributable to a handful of ‘datacenter tax’ functions that cross-cut all applications.” The same is probably true of AWS.


Crypto finally takes off


Over the past decade, television has gone over the top (OTT). Netflix and others have a direct connection with consumers, eschewing middlemen like broadcast and cable companies. The result has been the rise of cord cutting and a decline in cable video subscribers.


What does this have to do with crypto? The fundamental revolution enabled by bitcoin is the transmission of value between two untrusted parties without the need for a central counterparty. Before bitcoin, Alice and Bob needed a bank or bank-issued currency in order to transact.


Much like OTT television got rid of the middlemen, so has bitcoin eliminated the bank and the currency-issuing government. Through bitcoin, Alice and Bob needn’t trust each other. Instead, they trust the mathematics of cryptography, which allows their transaction to be securely recorded on the blockchain.


Crypto has yet to reach mass adoption, despite dozens of central banks experimenting with their own cryptocurrencies. Now, large corporations with billions of users—like Facebook, with its Libra project—make it likelier that within ten years, we’ll have OTT money with at least a billion users.


Why does this matter?


It's absurd that, sitting here in 2020, it's still very expensive and cumbersome to send money cross-border. Just try sending $5 to your friend in Canada.


Zuckerberg's vision with Libra is to make money as easy to send as a photo on WhatsApp. The removal of friction this enables is incredible. We might finally see the rise of micro-transactions (which are infeasible using credit cards) and entire new business models, scarce digital assets and monetization schemes built on top of crypto.