Proximity to Power, The Horizon of Augmented Reality, & More Cycles for Creativity
We’ll dive into AI’s thirst for (literal) power, talk about META’s Orion and new advances in AR, and share new products that give us more cycles for creativity.
Edition #24 of Implications.
One day of biological age is ten days of tech evolution; hard to keep up! This edition explores forecasts and implications around: (1) proximity to power and what we learn from Dyson Spheres, (2) major advances in AR and the obstacles to overcome, (3) how “what” is replacing “how,” and (4) some surprises at the end, as always.
If you’re new, here’s the rundown on what to expect. This ~monthly analysis is written for founders + investors I work with, colleagues, and a select group of subscribers. I aim for quality, density, and provocation vs. frequency and trendiness. We don’t cover news; we explore the implications of what’s happening. My goal is to ignite discussion, socialize edges that may someday become the center, and help all of us connect dots.
If you missed previous editions of Implications, check out recent analysis and archives here. A few highlights include:
Disruptive interfaces are a generation of drastically simpler and more accessible interfaces that ultimately commoditize everything underneath. Generative AI-enabled platform-level Agents are the ultimate disruptive interface.
Brandertainment: Brands will create more mainstream media in direct competition with studios (but will it be good?). I anticipate a significant amount of popular movies, streaming series, and podcasts will be produced by brands leveraging (and simultaneously promoting) IP from their primary business. But I also believe we’ll crave more meaning and richer stories than ever before…
Feeling special, as a service. Another opportunity for consumer AI is scaling things that were previously never scalable. Brands will also leverage data in remarkable ways to personalize experiences to make people feel special…
Dyson Spheres on Earth: Proximity to Power
As I spend more time with leaders of AI at different companies, it is clear that shortages of chips and infrastructure will be addressed over the next 3-5 years. As super clusters of chips for training (and increasingly for inference) emerge, the ultimate pursuit will be cheap and scalable sources of power to operate these mammoth installations of computing. What are the cheapest and most scalable sources of power? Volcanoes? Desert solar arrays? Nuclear energy is having a resurgence, and we’re seeing companies like Talen Energy make deals with companies like Amazon to provide power to the next generation of computing clusters, which might even be built directly adjacent to their sources of power.
The metaphor that comes to mind is the “Dyson Sphere” that astronomers hypothesize as evidence of an advanced civilization - one that has exhausted the power supply of its own planets and has ultimately migrated to a structure surrounding its local star, the ultimate source of energy in any solar system. In a similar yet more rudimentary fashion, my friends in the industrial real estate world are looking for development opportunities adjacent to power sources as enticing spaces for the next generation of data centers.
Much like the gold rush and mining periods of history spawned new towns and cultural epicenters in different parts of the world, might the insatiable pursuit of power give rise to new towns and cities and change demographic trends? Will new states or countries emerge as key players in the age of AI, much like Taiwan has in the age of chip manufacturing? No doubt, proximity to next-generation power will have many unexpected implications. On the environmental side, I feel concerned in the near term (will we leverage more cheap and dirty sources to feed demand now?) but more optimistic over the long term (will this massive need for power drive more ingenuity to generate it more cleanly and efficiently?).
History suggests that a step-function growth in demand, across most industries and commodities, has been the catalyst for such breakthroughs.
Summoning The Horizon of Augmented Reality
The topic of augmented reality (AR) right now feels a lot like AI felt in ~2016, when everyone knew AI would be important for the mainstream but the building blocks just weren’t there yet. Of course, just eight years later, AI is impacting everything. I am a huge believer in AR, and I have little doubt that it will transform how we live, work, buy, learn, play, and relate to one another. But the building blocks of the AR era are still very much under construction. Here’s the state of play across four critical dimensions that all need to mature to unlock the age of consumer AR (aka disruption of the smartphone):
Hardware: While damn impressive, Apple’s VisionPro is not an AR device. AR allows you to augment reality, not remove yourself from it. Even with its “pass-through” capabilities, VisionPro is an isolating experience that removes you from wherever you are. The first true AR devices will make us feel present with the places and people around us, and will simply augment the experience using audio and imagery. They will be more akin to glasses. Speaking of which, I had the opportunity to try META’s new “Orion” glasses this week in NYC. A few quick takeaways: (1) the form factor is light and easy to wear, the field of vision is impressive, the haptic feedback for hand gestures is excellent, and the resolution is good enough for most use cases; (2) I heard the term “head leash” at META to describe apps that accompany you as you walk - always the same distance from your eyes. Omnipresent information that travels with you vs. stationary experiences are two different types of AR digital experiences to explore; (3) from gaming to object identification to instructions, it is so refreshing to walk around with a TRUE AR device - it is clear to me that AR is the future, and we underestimate its threat to “smartphones” as we know them.
Operating Systems: The cardinal rule of product is that “the devil’s in the defaults” (as my friend Dave Morin coined), and the ultimate default of consumer software is the operating system - its native capabilities and the interface that greets you upon power-up. One might argue that Apple and Android have a massive lead when it comes to operating systems, but let’s focus on Apple given the lack of great immersive devices on Android (yet). Apple has a mature mobile operating system that it has evolved beautifully into VisionOS, bolstered by unmatched distribution of AppleIDs and a privacy-driven ecosystem used by over a billion people. But what if the AR paradigm, fueled by Generative AI, is so remarkably different that this advantage proves to be a constraint? No doubt, when it comes to META’s AR explorations, the operating system and software are still a work in progress. I imagine that my friends at META are considering what a step-function-different LLM-powered operating system using natural language and hyper-personalization might feel like. It also seems like they’ll leverage Android (the benefits of an app ecosystem from day 1), but they shouldn’t constrain themselves to familiar mobile patterns. No doubt, we’re still in early days.
3D & Immersive Creation: The world of AR will fall completely flat if it isn’t filled with rich, interactive, and life-like three-dimensional experiences. Adobe, along with other software developers like Maxon, Unity, and Epic, is part of the future 3D & Immersive stack. The rise of tools that democratize 3D creation and make it easier to design and deploy interactive experiences has become a passion of mine. Over the years, my team in Adobe’s Emerging Products group, which includes the incredible Substance 3D franchise, has explored all sorts of AR-related tools. A few years ago we launched products like Aero for designing AR experiences, and this week, we announced two new products: Project Neo, which enables 3D illustration, and Substance 3D Viewer, which lets anyone edit and place 3D objects into 2D experiences (short demo video below, and huge congrats to the team!). One major area of progress is formats. Several years ago, one of our teams at Adobe collaborated with teams at Apple and Pixar to help develop the USDZ format and a general set of standards for creating and distributing immersive experiences. We’ve also incorporated Firefly AI capabilities. We’re making progress, but there is a bit of a “chicken-or-egg” conundrum when it comes to building AR tools before the hardware is ubiquitous enough for people to want them.
Applications & Experiences: Maps will become the new “app store” as every application and experience will be merchandised based on your location. Third-party software of all kinds will be progressively disclosed to us as ambient location-based experiences that defy the notion of “apps” as we know them — they won’t be installed by us, they won’t require sign-up (your eyes will activate them), and there won’t be any onboarding or learning curve (they will be hyper-personalized for each of us). These experiences will range from AI-powered personal shoppers, to characters we saw in movies reappearing in our lives, to a new generation of “informative layers” akin to street signs that float around every space, person, and object we encounter. One other fascinating and polarizing implication of the AR era fueled by AI: everyone will know who everyone is. While helpful for people like me who are bad with faces and names, the “doxing” potential of this is a serious privacy concern that researchers have already demonstrated in hacks of early AR devices. I mention this only to underscore the privacy and preferences problems to be solved for the world of AR. The defaults must help us navigate the world and leverage our data, contacts, and preferences without the privacy risks.
Going from the “HOW To Do” Era to the “WHAT To Do” Era & Debut of Project Concept
So much of humanity has been all about "HOW to do" something (aka skill development). Our educational system, product onboardings, the mass deployment of productivity software and corporate training, and entire categories of products are all about outfitting us with skills and teaching us how to do things. But change is in the air. We’re seeing the fields of ideation, writing, and application development and deployment all collapse into one another. With the next generation of generative AI tools, you can just start writing and get working applications or semi-autonomous agents that do things you imagine - from building an agent that sources, vets, and schedules candidates for interviews to developing an application that tracks all the shows and movies you want to watch. In some ways, we’re entering a new era of humanity that is increasingly about "WHAT to do” as the “how” becomes less of a daunting constraint. We’re entering an era where taste will outperform skill, and creative choices will distinguish every story, brand, and business more than anything else. What new genre of products will help make this happen? How will the competitive advantage of “taste” and human intuition differentiate leaders across each industry? This week, one of my teams in Emerging Products at Adobe shared the first glimpse of “Project Concept,” a new product for mood-boarding and concepting in the age of AI, and kicked off our Private Beta. This was a major collaboration across our organization. In Project Concept, you can drag in your own content or build a mood board of content that inspires you, and then leverage state-of-the-art AI tools to remix assets, colors, and shapes - covering a tremendous amount of surface area to explore endless possibilities. I’ve enclosed a sneak video below.
Project Concept is a great example of where creators may spend MORE time in the future as the final mile of production (the mundane, repetitive, and more laborious part of the creative process) is refactored.
More cycles, better solutions.
Project Concept is just an example of a new generation of AI-first tools that give humans more cycles for exploration, ultimately yielding better solutions. I think a lot of the rhetoric around the implications of AI is mistakenly focused on saving time or replacing people. Sure, AI will truncate some workflows to save time, and there will be some menial workflows across every industry replaced by this new technology, as always. But what makes this technology truly distinctive from other advances is its reasoning and imaginative capabilities (not taste-based imagination, but boundless directed exploration). What this technology really gives us is MORE CYCLES - more cycles to explore a wider array of color palettes, more cycles to explore those ten or ten thousand other pathways for drug discovery or marketing slogans than humans can possibly pursue, more cycles to explore the right timing for a critical scene in a film by removing the constraints of what can be shot on set, more cycles of research and comparison analysis than one can possibly execute before making a purchase decision. Now, as a product leader, investor, and consumer, I have become obsessed with contemplating the problems in every industry and our everyday life that could be transformed with more cycles. Truth is, no matter your role or industry, we are all in search of cycles. The key question is: cycles for what, and what becomes exponentially better as a result?
Assortment Of Findings & Call-Outs
At our Adobe MAX Conference earlier this week in Miami, I kicked off our second-day keynote with a series of updates on Behance (now ~56 million members!) and Content Credentials (now ~3,700 partners onboard for the next generation of attribution and metadata to help us know how content was made and who made it). But before the keynote, we debuted a short piece acknowledging the more difficult and less discussed aspects of the creative journey. It's called the "journey of difference," and it is a poem or ode of sorts to our customers, who have all chosen a journey of difference by being creative despite being misunderstood and taking creative risk in this world. It’s a hard and lonely path. This was originally something I wrote before one of my team off-sites three years ago, and our production team worked wonders to bring it to life.
If you’re headed to Lenny & Friends’ Product Conference later this month, you’ll get a limited edition Action Book Mini - a more recent evolution of the organizational products line I designed with my friend Matias back in (checks notes) 2006! Yikes. If you’re not at the conference, you can browse the latest at Action Method (and here’s a 15% coupon the team gave me to share with my Implications readers: IMPLICATIONS15, valid through Oct 31).
Shout-out to the team at AgendaHero, who leveraged AI and extreme focus to build a highly accurate way to turn any email, image, PDF, or copy/pasted text into calendar events. What a great and obvious implementation of AI that we can all appreciate.
Excellence is the war you wage until every trace of the struggle disappears. I especially enjoyed this spotlight on Maya Angelou in a recent edition of the Action Digest newsletter. Angelou elaborates that it can take as long as three weeks to describe a single scene and that she discards over half of the pages she writes. “I must have such control of my tools, of words,” Angelou continues, “that I can make this sentence leap off the page. I have to have my writing so polished that it doesn’t look polished at all. I want a reader, especially an editor, to be a half-hour into my book before he realizes it’s reading he’s doing.” Action Digest’s author, Lewis, spends the rest of this edition exploring lessons in excellence from legends like Leonard Bernstein and Stanley Kubrick. It’s a must-read/subscribe.
Ideas, Missives & Mentions
Finally, here’s a set of ideas and worthwhile mentions (and stuff I want to keep out of web-scraper reach) intended for those I work with (free for founders in my portfolio, Adobe folks…ping me!) and a smaller group of subscribers. We’ll cover a few things that caught my eye and have stayed on my mind (including a software/firmware/hardware metaphor on my mind, Messi’s style of play and how this translates to the workday, more examples of AI “flooding the zone” in different industries, and some data provocations). Subscriptions go toward organizations I support, including the Museum of Modern Art. Thanks again for following along, and to those who have reached out with ideas and feedback.