The Law of Displacement Speed & Leveraging Artifacts of Humanity
During chaotic eras, where new stuff replaces new stuff incessantly, what do we know will happen? And what are "artifacts of humanity?"
Edition #19 of Implications.
Geesh; this monthly Implications exercise is quite the challenge. Thanks to many flights, debates, and compulsive capturing, we made it; this edition explores implications around: (1) making sense of the rapid displacement of startups and “the best model” rat race, (2) how Content Credentials will flex the humanity of digital experiences, and (3) some surprises at the end, as always.
If you’re new, here’s the rundown on what to expect. This ~monthly analysis is written for founders + investors I work with, colleagues, and a select group of subscribers. I aim for quality, density, and provocation vs. frequency and trendiness. We don’t cover news; we explore the implications of what’s happening. My goal is to ignite discussion, socialize edges that may someday become the center, and help us all connect dots.
If you missed the big annual analysis or more recent editions of Implications, check out recent analysis and archives here. A few highlights include the rise of luxury software, the new entertainment stack (from discussions in Hollywood), and the era of abstraction — the implications of a world where reasoning, summarization, and automation abstract us away from the sources of everything. ;-) OK, now let’s dive in…
The Law of Displacement Speed & The Implications
There has been much written about the “Cambrian explosion” of AI, but the investors and leaders I respect most are still a bit circumspect about how and where to invest and place long-term bets. Most recently, I have been comparing the current AI frenzy to previous platform shifts and their aftermaths. For example, do you remember the explosion of applications in the early days of the iPhone? There were hundreds of apps that unleashed all sorts of superpowers, and then entire cohorts of these startups got destroyed every year at the Worldwide Developers Conference as Apple announced that their capabilities were now integrated into new OS-level native apps. News apps, flashlight apps, geo tracking tools, stock tracking apps, weather apps, screen management tools, voice note-taking, and the list goes on. The age-old truth coined by my friend Dave, that “the devil’s in the default,” proved disastrously true as a whole cohort of apps in investor portfolios like mine were disrupted in one day.
These extinction waves continued on a regular cycle for years as the historic platform shift to mobile continued. But another pattern I only noticed in retrospect is that these apps were also frequently displaced by each other even before the OS-level apps entered the picture. As I look back at my notes and old TechCrunch articles, these startups were not only spawned in waves, but also kept one-upping each other rapidly with every release. There were constant comparisons — and “winners” declared (for a few weeks at least) as they competed with better features and interfaces. Then, usually, an OS-level app or feature emerged that, while often simpler, displaced them all. The speed of displacement signaled one of two likely outcomes: either commoditization driven by the sheer pace of innovation, or displacement by a platform-level innovation.
Let’s call this “The Law of Displacement Speed:” When applications or services displace each other at a rapid and regular cadence, the result is either commoditization or platform-level displacement.
Here we are today at another historic platform shift to AI. Every AI model imaginable is being made by startups and big companies alike. And every day we witness new versions of these models being announced and compared with one another. We saw it just last week, with OpenAI’s announcement of GPT-4o and “Voice Mode.” At times it is hard to even discern which version is better, given the long tail of advantages and disadvantages of each model. One effect of displacement is downward pressure on prices. There are so many generic LLM-type models, some of which are open source, that the price competition is intense. Meanwhile, the consumer device chip companies are laser-focused on a generation of devices that will run models locally (closer to free!). And the developers of the OSes that permeate our lives, like Apple, Google, and Microsoft, are all-in on using AI to transform our everyday devices. The Law of Displacement Speed would suggest that either (1) these mass generalized models will become commoditized as more and more of the top use-cases — the stuff we would ask of LLM-powered services — no longer require the best and most expensive options on the market, or (2) the capabilities enabled by the growing market of AI models will eventually be performed by platform-level services, like the OSes of consumer devices or the platforms that rule each function of the enterprise. What are the implications?
Interface > Data > Model: The “interface” and “data” layers will further distinguish market leaders while the “models” layer becomes increasingly commoditized and pushed to the edge. As a growing number of our everyday use-cases of powerful generative AI models fall below the frontier of “the best models,” they will be enabled by cheaper commoditized models. And we’ll be running many models locally on device within a few years. The distinguishing factors for companies to succeed using AI will be the radical refactoring of workflows via interface innovations and the data itself, enabling companies and people to uniquely leverage their own data in powerful ways. Designers FTW! :-)
Customer empathy makes a comeback. Companies will move beyond the “best gen AI model” rat race by radically transforming everyday consumer and enterprise workflows with a rich and deep pipeline of category-specific capabilities for the long-tail of professional needs. I like how my friend Aaron Levie, founder of Box, recently described these layers as the scaffolding: “something not everyone sees with AI is the rate of improvement in the layers of technology outside the models themselves. We’re regularly seeing breakthroughs in the scaffolding around AI models, which leads to a compounding effect that will produce incredible software.” These capabilities will come from a deep empathy for and understanding of customer problems, and will leverage nuanced interface design, fine-tuned models, and proprietary data in ways that the large generalized models would never attempt.
World-class natural language AI becomes free for consumers. How do you “land grab” in a commoditized market? Make it free. OpenAI’s announcement earlier this week that a (very advanced) version of ChatGPT would be made available free for everyone is the first step. Next, I expect to see OS-level platform integrations that deliver both local and cloud-based LLMs at your fingertips and the rapid commoditization of an increasing number of LLM use-cases.
The winners will be the major consumer platform OSes (Apple, Windows, Google) and the primary platforms that support each function in the enterprise. The AI products I am most excited about leverage a proprietary or uniquely structured set of data and a superior interface that transforms an antiquated workflow. Companies that really understand data AND workflows within a deep vertical will have an advantage. And designers will be more important than ever before, imagining entirely new ways to transform the interfaces of our everyday work and life with the superpowers of AI.
Authenticity & Artifacts of Humanity Will Engage Us
I recall a product review over five years ago with the After Effects team, one of our products at Adobe that enables some of the world’s greatest moviemakers to add special effects to their films, among many other use cases. The team was showing me a new feature called “Content Aware Fill” that could replace an object in moving video. It was an extremely useful breakthrough (remember that Starbucks cup mistakenly left on a Game of Thrones set?), but it also had more concerning implications. How would we help people determine what media they could trust in a world where AI-powered editing was exploding and we could no longer believe our eyes? Over the years since, our Content Authenticity Initiative has helped spawn an industry-wide open source movement across camera manufacturers, software developers, social media platforms, and AI companies to enable creators to add credentials to the content they make or edit.
In the last few weeks, our drive to increase adoption of Content Credentials and C2PA (the group driving an open source technical standard that publishers, creators, and consumers can use to trace the origin of different types of media) has achieved some major milestones. We welcomed OpenAI and Google to the C2PA steering committee, and TikTok became the first social media platform to support Content Credentials, including tags for AI-generated content. Content Credentials is ushering in the “verify, then trust” era, where content on platforms like YouTube or TikTok will display provenance information about the media you’re viewing, helping you understand how the content was edited and who captured and created it. It is intended to reward good actors with trust from their viewers (as opposed to punishing bad actors, since Content Credentials are optional). As I see this effort to communicate authenticity take off across the industry, I can imagine a number of other less expected but profound implications for creators and consumers.
“Proof of human” behind a creation will become a signal of meaning. As every brand increasingly “floods the zone” with AI-generated and hyper-personalized content, we will crave story, craft, and meaning more than ever before. And nothing strengthens meaning more than authenticity. One unintended benefit of Content Credentials is the ability to demonstrate the “human work” that went into a story or piece of media, whether it be information about the type of camera (vs. an AI model) used, the digital tools used (vs. an automated workflow), or the actual authenticated person who digitally signs a work as the creator. These credentials may actually INCREASE the value of content in the future by infusing it with human provenance.
"Artifacts of humanity" in creative work — in the form of dust, mistakes, and deliberate strokes that flex human engagement — will showcase the human labor spent making a creation. Such artifacts of humanity, alongside Content Credentials that factually articulate how a piece of work was made, will add meaning and scarcity to the work in the eyes of the viewer. Whether it is art or advertising or stories of any kind, people who have taste will not only want to use it (with the help of precision tools) but also demonstrate that they used it. Recently, my friend Scott Rudin, sent a few of his pieces over that underscore this example. “The information on the side of the photo,” as he calls it, is from the negative, and the holes are called the sprockets. This information on classic negatives tells you what number negative on the roll you’re looking at, and the holes are what helps the film advance through the camera. On the photos where you see a “fade,” what you’re really seeing is a light leak that results from light hitting the negative in a diffused way. Rudin does this on purpose to get a unique effect that, in his words “makes the photo like a fingerprint, so no two can ever be the same unless printed from the same negative.” These are all artifacts of humanity that serve to express the very human PROCESS behind the work. We can expect more equivalents of light fades and sprocket exposure in the form of hidden signatures, easter eggs in designs, process replays, open source files, or the use of credentials to show how something was made.
Content Credentials will become a flex of a creator’s humanity, a media outlet’s credibility, an artist’s attribution, and a story’s transparency. Right now, the focus of Content Credentials is on authenticity and determining whether you can trust what you’re looking at. Of course, the existence of credentials will never guarantee that the content itself can be trusted, but it shows that the creators and publishers are taking the steps to be trusted and that viewers are invited to review the provenance of an asset and decide for themselves. But as the use of credentials spreads, I anticipate that viewers will use them to see who made something (perhaps to hire them!?), and to gain an appreciation for the craft, story, and process that went into a digital creation. The story behind art has always been a core driver of its value, but the digital world has never embraced the power of provenance to indicate the amount of effort that went into a piece of work. Ultimately, I expect to see more "artifacts of humanity" in creative work that demonstrate the human inputs that ultimately appeal to the viewer's heart and soul.
My declaration: If you have taste, use it and flex it. GenAI will evolve the tools of creativity, but will never replace them because the instruments of creative expression are how we share the humanity in each of us, combined with our taste.
Assortment Of Findings
Some Data on the Opportunity & Desire for Trust: As we’ve discussed before in Implications, the new “trust, but verify” is becoming “verify, then trust” as we enter an era in which we can no longer believe our eyes. Here are a few key findings worth sharing from a research study our team recently did at Adobe:
In the US, 70% of people said it’s getting harder to verify whether the content they encounter online is trustworthy.
In the US, 80% said they believe that misinformation and harmful deepfakes will impact future elections.
88% of people believe it is essential that they have the right tools to verify online content.
76% of people agree that it is important to know if the content they are consuming is generated using AI.
The Allocation Economy: I enjoyed reading this piece from Every that supports some of the implications we've explored in previous editions on the future of management. As AI automates many of the tasks that consumed us during the knowledge economy (tasks like scheduling, reporting, analyzing data, etc.), where will we shift our focus? The article makes the case that we will all become managers to some degree, especially when it comes to managing the AI…
Meet people out of your zone. I enjoyed this Action Digest highlight on Hollywood producer Brian Grazer, who went from trying to meet one new person in the entertainment business every day to seeking out people entirely outside of it. “I had quickly discovered that the entertainment business is incredibly insular—we tend to talk only to ourselves.” This insular approach is one “that leads to mediocre movies, and also to being boring.” How are you pushing your imagination out of your zone?
Ideas, Missives & Mentions
And that’s a wrap for Implications #19 public edition. Now, here’s a set of ideas and worthwhile mentions (and stuff I want to keep out of web-scraper reach) intended for those I work with (free for founders in my portfolio, Adobe folks…ping me!) and a smaller group of subscribers. We’ll cover a few things that caught my eye and have stayed on my mind (including agents buying on our behalf, emotional intelligence models, and thoughts on belongingness), including some areas of interest as an angel investor. Subscriptions go toward organizations I support including the Cooper-Hewitt National Design Museum. Thanks again for following along, and to those who have reached out with ideas and feedback.