A Chat With Dead Legends & 3 Forecasts: The Return of the Socratic Method, Assertive Falsehoods, & What's Investable?
A rare "Cambrian Moment" is upon us, and the implications are as mind-blowing as they are concerning. Let's explore the impact of a few forecasts in particular.
Edition 2 of Implications. Welcome and happy new year.
If you’re new, here’s the download on what this is and what to expect.
If you missed Edition 1 (the annual analysis shared broadly, as opposed to the monthly editions for founders I work with and a small group of subscribers), check out those forecasts and implications here.
This edition has three forecasts with several implications each. There is an AI emphasis, just given the current moment. But this won’t always be the case. ;-)
A morning chat with dead legends.
I was having coffee the other morning while enjoying a chat with the late Carl Sagan. We went back and forth about his education, whether he thought we'd ever discover alien civilizations, and what we might learn from them. The app I was playing with, "Historical Figures," was one of many emerging chat or inquiry-based apps that mount LLMs (large language models) like the one behind ChatGPT, augmented by different interfaces and, in some cases, data sets. In fact, this app in particular even supported a "group chat" between Socrates, George Washington, and John Lennon. I mean, wow. What are the forecasts for alternative uses of this groundbreaking technology, and what are the implications?
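For the technically curious: apps like this are typically thin wrappers around a general-purpose LLM, where the "historical figure" is a persona injected into the system prompt, sometimes paired with a retrieval layer over that figure's writings. I don't know how Historical Figures is actually built; below is a minimal sketch of the pattern using the OpenAI chat API, and the model name and persona text are placeholder assumptions, not details from the app.

```python
# Minimal sketch: a "chat with Carl Sagan" persona built on a general-purpose LLM.
# Assumes the `openai` Python package (v1.x) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

messages = [
    {
        "role": "system",
        "content": (
            "You are Carl Sagan. Answer in his voice, drawing on his published "
            "writing and interviews. If his actual view on a topic is unknown, "
            "say so rather than inventing one."
        ),
    },
    {"role": "user", "content": "Do you think we'll ever discover alien civilizations?"},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model would do
    messages=messages,
)
print(response.choices[0].message.content)
```

A "group chat" between Socrates, Washington, and Lennon is, plausibly, just several such personas taking turns over the same conversation history.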
A “Cambrian moment” is an entrepreneurial explosion in a particular vertical that seems to happen all at once…and, in retrospect, changes everything. Well folks, we’re having one now. While critical innovations take many years to bake (and widespread adoption is often slower than people anticipate), the collective “ahhhhh!” amongst technologists about the possibilities of a new technology seems to happen quite suddenly. And this is exactly what’s happening right now in generative AI and large language models. Three forecasts are becoming clear…
Education will be reimagined by AI tools.
AI-powered results will be both highly confident and often wrong; this dangerous combo of inconsistent accuracy with high authority and assertiveness will be the long final mile to overcome.
The defensibility of these AI capabilities as stand-alone companies will rely on data moats, privacy preferences for consumers and enterprises, developer ecosystems, and GTM advantages (still brewing, but let’s discuss).
These forecasts also raise some questions about what utilities (“picks and shovels”-type companies) will emerge that make AI capabilities work for consumers and enterprises at scale. Let’s jump into some implications for each…
Education will be reimagined by AI tools.
While traditional, memorization-driven, arithmetic-heavy (industrial revolution-era) education is already widely criticized, the prime elements of education (textbooks, linear learning, essay writing, etc.) are all on the brink of being disrupted. As I suggested in Edition 1, ChatGPT has done to writing what the calculator did to arithmetic. But what other implications can we expect here?
The return of the Socratic Method, at scale and on-demand. The Socratic Method, named after the Greek philosopher Socrates, is anchored in dialogue between teacher and students, fueled by a continuous probing stream of questions. The method is designed to explore the underlying beliefs that inform a student’s perspective and natural interests. I experienced a couple years of this during business school (I rarely admit my youthful, insecurity-fueled desire to get a Harvard MBA; that’s another…umm…dialogue), and loved the student-directed nature of learning rather than being lectured at. The framework felt optimized for surfacing relevance and stoking organic intrigue. Imagine history “taught” through a chat interface that allows students to interview historical figures. Imagine a philosophy major dueling with past philosophers - or even a group of philosophers with opposing viewpoints.
The art and science of prompt engineering. Knowing how to search Google is one thing, but imagine an entire logic-based lexicon of how to inform, constrain, and optimize the prompts we give to AI in ways that shape its outputs (see the toy example below). Much like students learn Excel and calculators - and even how to use more advanced formulas and engineering calculators - how do we outfit students to make sure AI is working for them (and not the other way around)? Also, related to the next forecast, how do we equip the next generation to know what they can trust and how to evolve their own judgment as AI spits out answers?
The bar for teaching will rise, as traditional research for paper-writing and memorization become antiquated ways of building knowledge. What is practical knowledge anyways? Is it knowing the answer, or knowing where and how to find the answer? Is it having the information, or being able to connect it to stimulate ingenuity? I hope this new tech evolves education to be more about learning how to think. How to find answers. How to connect dots. How to express yourself creatively (and stand out, merchandise your ideas, galvanize support for unpopular views). I think we’ll see the return of oral arguments and supervised persuasive essay writing. And “art class” will shift from being an hour spent painting on a Thursday to the use of creative tools across the curriculum (making a short film is the new history paper, drawing a comic book is the new science report, etc.). As I like to say, creativity is the new productivity. In the age of robots, AI, and algorithms replacing human jobs, we need to outfit the next generation to be creative and provocative minds.
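Back to the prompt engineering point above: here’s a toy illustration (my own hypothetical wording, not from any curriculum or product) of how the same question changes when a prompt spells out role, audience, format, sources, and how to handle uncertainty.

```python
# Toy illustration of "prompt engineering": the same question, asked naively
# and then with explicit constraints. Wording is hypothetical, for illustration only.

naive_prompt = "Tell me about the causes of World War I."

def constrained_prompt(topic: str, audience: str, max_points: int) -> str:
    """Build a prompt that spells out role, audience, format, and honesty rules."""
    return (
        f"You are a history tutor for {audience}.\n"
        f"Explain the main causes of {topic} in at most {max_points} bullet points.\n"
        "For each point, name one primary source a student could check.\n"
        "If you are not confident about a claim, say so explicitly instead of guessing."
    )

print(constrained_prompt("World War I", "high-school students", 5))
```

That last line of the prompt is the part that matters most for the next forecast: teaching students to demand that the model flag its own uncertainty.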
AI-powered results will be both highly confident and often wrong; this dangerous combo of inconsistent accuracy with high authority and assertiveness will be the long final mile to overcome.