Our World Shaken, Not Stirred: Synthetic entertainment, hybrid social experiences, syncing ourselves with apps, and more.
Things will get weird. And exciting.
Edition #5 of Implications.
If you’re new, here’s the download on what to expect. This ~monthly analysis is written for founders + investors I work with (and colleagues, slack me for the code ;-)), and a small group of subscribers who wish to go down the rabbit hole... If you missed the annual edition, or more recent private editions of Implications, check out recent analysis and archives here.
This month’s edition starts with some provocative forecasts and implications of new tech on the more consumer and cultural side of things. We’ll also discuss a few things that are increasingly clear to me given recent advances, some of the new companies under development, and some ideas/missives as always. Buckle up, my friends.
Our World Shaken, Not Stirred.
Recent advances in technology will shake the pot of culture and our day-to-day experiences. Examples? A new era of synthetic entertainment will emerge, online social dynamics will become “hybrid experiences” where AI personas are equal players, and we will sync ourselves with applications as opposed to using applications.
We only get to see a handful of platform shifts in a lifetime, like the advent of the web or smartphone. You know you’re upon a new platform shift when months of progress seem to happen in days…where you can barely keep up with the breakthroughs and their implications. Consider the following provocations that are increasingly likely to happen:
A new era of synthetic entertainment will emerge as the world’s video archives - as well as actors’ bodies and voices - are used to train models. Expect sequels made without actor participation, a new era of AI-outfitted creative economy participants, a deluge of imaginative media that would have been cost prohibitive, and copyright wars and legislation.
Hollywood changes and stays the same: In just the last couple of weeks we saw a mini sci-fi movie generated with AI, the Pope and the characters of Harry Potter outfitted in Balenciaga doing all sorts of activities, and some shorts starring George Washington in modern apparel. How will this technology be used for the next generation of Hollywood? This was the topic of conversations with two well-known top directors during a recent series of dinners in Los Angeles. One director confided that he uses ChatGPT “not to get original lines or generate scripts, but to pose a scenario and get a list of possible plot twists or outcomes to consider.” He characterized ChatGPT as a strait-laced brainy creative partner of sorts. In another conversation, the director talked about the importance of creative control in post-production, explaining that he often splits the screen of a scene and changes the speed of one character to improve the timing of an exchange between characters. He also shared an anecdote of correcting a sentence in post-production using AI-powered audio dubbing. The actor, who had a thick Italian accent, couldn’t articulate the line clearly enough after numerous takes. So, they redid the line in post and apparently the actor never noticed. Then I asked, “well, would you ever add an entirely new scene in post-production?” The director paused for a moment of thought, and then said he’d consider it but proclaimed that “great movies will never be made without the actor’s participation.” “Why?” I asked. His response was thoughtful. He explained to me that great actors debate and help direct the director, often suggesting that a scene get shot five different ways to provide choices in post-production. “At the scene level, you’d lose the benefit of an actor being a creative partner, and the best ones are,” the director explained to me. This conversation was a helpful proxy for me.
Perhaps, in the creative era ahead, our boundaries with how we will and won’t use AI will ultimately come down to (1) the degree of pixel-level creative control we require, and (2) the benefits of collaboration and thought partnership. The greatest creators and leaders I know have all been humbled by the partnerships required to make something extraordinary. I don’t expect this to change in the age of AI.
Unauthorized sequels, spin-offs, some amazing stuff, and a legal dumpster fire: Now let’s shift beyond Hollywood to the fast-growing long tail of prosumer-made entertainment. This is where entirely new genres of entertainment will emerge, including the unauthorized sequels and spin-offs that I expect we will start seeing. If you can generate an original screenplay with vivid details, characters, and lines using ChatGPT (you can today), clone anyone’s voice using just 15 seconds of recording (available today), and generate original video using text-to-video prompts (within months we’ll see models with indistinguishable video generation capabilities), you can generate a spin-off or sequel to any mainstream TV show or movie on your own. Of course, the biggest problem here is copyright and enforcement. I suspect copyright law will minimize these capabilities among responsible AI companies, but many smaller or “open source” alternatives will spawn, and I can’t imagine enforcement preventing little clips or entire unauthorized episodes and sequels from going viral. This will become one of the great legal issues of the century. Personally, I am definitively on the side of the artist. While we cannot shy away from AI technology, we must develop new compensation models for suppliers of training data, we must foster attribution so we know who/what made what, we must regulate (as we always have) the use of copyrighted property and people, and we must incentivize creativity with protection for what you create. OK, I’ll take a deep breath. Setting media that violates copyright aside, we’ll see entirely original and imaginative creations spawn as the barriers to entry (production crews, big budgets, sophisticated skills, and lack of creative confidence) are drastically reduced. Prediction: we’ll see the first original AI-made Netflix show within the next 12-18 months.
It will be made by someone like Rick Rubin - someone with extremely great taste and ideas who doesn’t necessarily know how the tools work. Tobi Lutke had a good thread about some of the underlying tools that will let you “be able to describe a scene, get a movie script to edit, assign virtual actors, add a cinematographic direction and sound design prompt, and get a full draft movie back over night. Further editing can be structured as a chat.” Media creation tech is evolving at an insane rate right now. We will see new studios, agencies, and tools emerge daily that take advantage of this technology. Great new ideas will see the light of day, and this is good for the world.
Content authenticity couldn’t be more important. Finally, we can’t talk about synthetic media without giving some time and attention to the nefarious political and criminal risks. In a previous edition we discussed why we’re entering “the era where we can no longer believe our eyes.” The implications of “deep fakes” go up a notch when people and their voices can be generated using widely available tools. As a result, we will need to become a bit more skeptical of media and verify the provenance of whatever we see before we decide to trust it. “Verify, then trust” is the new “trust, but verify.” The next generation will be inoculated from sensational media of all kinds, and our new default mindset will be to doubt what we are seeing until we can verify the source. One of our long-standing passion projects at Adobe has been founding the Content Authenticity Initiative as an open source, not-for-profit effort to get every creative tool, camera company, and AI-generation product to start adding “content credentials” to media that is made, with details for how it was made and edited. We started this project years ago to address the phenomenon of deep fakes, but the application for the emerging world of generative AI couldn’t be more timely. Now 900 members strong, the effort has made some great inroads promoting transparency around the use of AI (making it easy to indicate when AI was used to generate or alter content, helping to prevent misinformation and increase transparency around the use of AI) and helping people decide what is trustworthy (open-source tools to integrate secure provenance signals into products so users can share and consume tamper-evident context about changes to content over time, including identity info, types of edits used, etc). When Adobe launched Firefly, our new family of generative AI models, we baked in content credentials as the default (in fact, we are requiring it!).
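For the technically curious, the tamper-evident idea behind content credentials can be sketched in a few lines. This is a minimal, hypothetical illustration, not the actual C2PA/Content Credentials specification (which uses X.509 certificates and embedded manifests rather than a shared HMAC key): a manifest records how the media was made, is bound to the exact media bytes by a hash, and is signed so any later alteration of the media or the manifest is detectable.

```python
# Illustrative sketch only -- NOT the real C2PA spec. A "content credential"
# here is a manifest describing provenance, bound to the media bytes by a
# SHA-256 hash and signed with an HMAC so tampering is detectable.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical; real systems use PKI certificates


def attach_credential(media: bytes, provenance: dict) -> dict:
    """Create a tamper-evident manifest for a piece of media."""
    manifest = {
        "media_sha256": hashlib.sha256(media).hexdigest(),
        "provenance": provenance,  # e.g. tool used, whether AI was involved
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_credential(media: bytes, manifest: dict) -> bool:
    """Return True only if neither the media nor the manifest was altered."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["media_sha256"] == hashlib.sha256(media).hexdigest())


media = b"\x89PNG...fake image bytes"
cred = attach_credential(media, {"tool": "ExampleEditor", "ai_generated": True})
assert verify_credential(media, cred)             # untouched: verifies
assert not verify_credential(media + b"x", cred)  # edited media: fails
```

The design point is the same one the initiative makes: the credential doesn’t prevent edits, it makes them evident, so viewers can “verify, then trust.”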
Hybrid social experiences where AI-powered personas are somewhat-equal players.
Fanfare as a service: If you have any kids playing games in Roblox, you’ve likely come across Pet Simulator X. In this game, you accumulate “pets” that follow you around wherever you go. They are like an automated audience of fans following your every move. Watching this, alongside the rise of AI coupled with everyone’s continued desire to have followers and engagement with their content on social platforms, made me wonder whether we will start to see a wave of “fanfare as a service,” where you have engaged AI-powered followers that engage with your content automatically. Imagine informed, witty, supportive responses to every post you and your friends make as a core part of the service. Imagine that some of these AI personas are historical figures or fascinating invented characters like the “talent” from Superplastic. I actually think there is a new genre of social applications that will emerge where you’re actively engaging with AI-powered characters that each have a very specific set of characteristics, as a public spectacle for your friends and others to watch. If the core driver of any social product is others engaging with our content, how can AI not be a key unlock to this?
AI, the ultimate wingman/wingwoman: Dating will be facilitated by AI within the next few years, with a witty and infinitely intelligent third party with complete, shameless plausible deniability for any quip that brings two people closer together - even at its own expense. This will happen fast, much like the surge and acceptability of online dating (see chart below, from National Academy of Sciences research). I anticipate a virtual matchmaker of sorts that knows exactly how to break the ice, spark conversation, and even advise each participant privately as the conversation gets underway.
OK, now let’s dive into a few more forecasts and implications below as a smaller group of subscribers (and deprive hungry scraping AI models of training data with a good ol’ fashioned paywall…that benefits the Cooper-Hewitt National Design Museum ;-) ):
“Sync my AI” will be the most common action for the future of technology.
Two outcomes that are becoming increasingly clear given recent developments.
Some ideas and missives…