Reprogramming Humanity’s Primal Instincts & What We Learn From A Future “History of Tech” Class
In this edition we explore AI predictions becoming self-fulfilling prophecies, among other implications of reprogramming, and we consider lessons learned from the history of tech.
Edition #29 of Implications.
This edition explores forecasts and implications around: (1) how humanity is being reprogrammed as our primal instincts are subconsciously overridden (prepare yourself for this one), (2) what a future “History of Tech” class would teach us about today, and (3) some surprises at the end, as always.
If you’re new, here’s the rundown on what to expect. This ~monthly analysis is written for founders + investors I work with, colleagues, and a select group of subscribers. I aim for quality, density, and provocation vs. frequency and trendiness. We don’t cover news; we explore the implications of what’s happening. My goal is to ignite discussion, socialize edges that may someday become the center, and help all of us connect dots.
If you missed the big annual analysis or more recent editions of Implications, check out recent analysis and archives here. A few recommendations based on reader engagement:
The rise of cognition-driven companies (aka “cognicos”) and what I’ve come to call “the cognition stack.”
We have an ongoing series of insights for the modern product leader, and part two discusses why the product leaders who win their industry often do so by delaying gratification, how you should get customers to talk about their problem rather than your product, and why the best product leaders are optimistic about the future yet pessimistic about the present.
A set of wild expectations for the future of commerce, including context-based purchase decisions, hyper-personalized pricing, on-the-fly UI and text-based-commerce, more scarcity-driven offerings, AI that will haggle for you, and a return to non-scalable one-on-one attention from fellow humans in hospitality-driven experiences.
The reprogramming of humanity as primal instincts become overridden by AI.
I often ask myself, in the process of writing Implications, what is the most important thing we aren’t talking about? A recurring theme is the consequences of tech-reliance over self-reliance. For example, we discussed how (and why) algorithms that are optimized for engagement are socializing us at scale. This is important because it is an example of humanity being programmed by technology, rather than the other way around. If you’ve ever participated in or witnessed a debate with a state-of-the-art LLM that is directed to convince you of something, you’ll see how effective AI can be at persuasion. Now imagine that the AI is persuading you not with words, but with carefully choreographed feeds of content that slowly shift your opinions and desires. This may seem alarmist, but I see as many positives (effective and inexpensive therapy, smoking cessation, tailor-made education) as I do negatives (the override of human value systems, widespread shifts in opinions and values, etc.). Let’s explore the implications of world-class AI influencing (if not programming) our decisions, opinions, and belief systems.
AI advice and predictions become self-fulfilling prophecies. We aren’t too far from trusting AI more than our own instincts, at least in certain parts of our lives. Whether it is a commerce decision about what inflatable raft to buy, a driving route to take, or what restaurant to visit, we will increasingly have faith that AI (as a compendium of the world’s information and reasoning) has more credibility than any other source. As AI becomes increasingly personalized and powerful, we will stop questioning it. Which raises a very important question: will advice and predictions from these AI tools become self-fulfilling? I saw some people asking the latest DeepSeek LLM for Bitcoin price predictions and other future-oriented forecasts. The answers, and the exposed reasoning layer that accompanied them, were extraordinary. This all made me wonder: as we trust AI more than ourselves, will the guidance and predictions from AI become self-fulfilling prophecies? If AI tells us, based on everything it knows about our finances, family, actions, and values, that we should make a particular purchase decision, career move, or vote for a particular candidate, at what point does our trust become blind faith? Will there still be people willing to test the AI’s effectiveness by disregarding its advice? And, at scale, when does blind faith become self-fulfilling to the point where AI predictions are determining the outcomes?
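To make that feedback loop concrete, here is a deliberately toy Python sketch (the numbers and the herd-behavior rule are invented for illustration; this is not a market model): if some fraction of participants blindly acts on a forecast, their behavior alone can pull the outcome toward it.

```python
# Toy model of a self-fulfilling prediction: trusting participants push the
# outcome toward the forecast, regardless of fundamentals. Illustrative only.

def simulate(price: float, forecast: float, trust: float, steps: int = 10) -> float:
    """Each step, the trusting fraction of the market moves the price
    part of the way toward the AI's forecast."""
    for _ in range(steps):
        price += trust * (forecast - price)  # herd pressure toward the forecast
    return price

for trust in (0.0, 0.1, 0.5):
    final = simulate(price=100.0, forecast=150.0, trust=trust)
    print(f"trust={trust:.1f} -> final price ~ {final:.1f}")
# trust=0.0 leaves the forecast inert; at trust=0.5 the price converges
# on the prediction within a few steps.
```

With zero trust the forecast has no effect; with enough blind faith, the prediction effectively determines the outcome.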
Are we being reprogrammed by algorithms? In the last edition, I mentioned a recent tweet-er-xeet post by Twitter founder Jack Dorsey. Jack suggested that we should all be able to choose the algorithm that controls our timelines across social media products. But the underlying premise is more interesting: as social algorithms get far more effective at commanding our attention and tuning our interests, they remain a mystery to end users. We certainly cannot understand how they influence us if we don’t understand how they work. However, if consumers had the option of choosing an algorithm, much like we choose an app on the App Store, we would be forced to understand how these algorithms work and decide which ones we want. This decision-making process would expose the objectives of each algorithm: does it give us more of what we like and agree with (filter-bubble algorithms), or does it challenge our views with opposing narratives? Some algorithms may be optimized for humor, content from friends and people we actively connect with, or news from both sides rather than reinforcing one particular side. Some algorithms could be customized to help change your behaviors for the better — like adding a bias towards improving you as a parent, improving your wellness lifestyle, or developing you culturally through exposure to great music, films, and original art.
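Here is a minimal Python sketch of what “choosing your algorithm” could look like under the hood. Everything in it is hypothetical (the Post fields and the example scorers are invented for illustration); the point is that a feed is just content ranked by a scoring function, so making that function swappable makes its objective legible.

```python
# A feed is posts ranked by a scoring function; choosing your algorithm
# means choosing the scorer. All names here are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    text: str
    agrees_with_user: float  # 0..1, similarity to the user's stated views
    from_friend: bool
    humor: float             # 0..1

def filter_bubble(p: Post) -> float:
    return p.agrees_with_user        # more of what you already believe

def challenge_me(p: Post) -> float:
    return 1.0 - p.agrees_with_user  # opposing narratives first

def friends_and_fun(p: Post) -> float:
    return (1.0 if p.from_friend else 0.0) + p.humor

def rank(posts: list[Post], scorer: Callable[[Post], float]) -> list[Post]:
    """Sort the feed by the user's chosen objective, highest score first."""
    return sorted(posts, key=scorer, reverse=True)
```

The same timeline, passed through rank with a different scorer, becomes a different product, which is exactly why exposing and swapping the scorer would force these objectives into the open.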
Is the next step in human evolution an “override” of our primal instincts? As humans, we have a set of “factory defaults” that have persisted for ~300,000 years. Whether it is sexual attraction and the desire to procreate, the instinct to ensure shelter and protect your children (through, for instance, a steady job), or the “fight or flight” mechanisms that channel our fears and prejudices into instinctual biases, there is an undeniable set of default drives that govern so many of our natural human tendencies. So much of modern society is designed to help suppress or channel some of our more destructive primal defaults, with mixed levels of success. But as I considered the accelerating pace of AI model innovation and Jack Dorsey’s proposal that we all choose our own algorithms, I wondered when (not if) these AI-powered algorithms will become effective enough to override our factory default settings. When might these algorithms of persuasive content and doses of personalized social proof influence our hunger (say, convincing us to use supplements instead of naturally satiating our appetites)? When might algorithms influence culture and society’s definition of “beauty” faster than the biological factors that drive procreation (and evolution) via natural selection? When might other decisions we normally make out of primal instinct, from the leaders we vote for to the friends we keep and the lovers we choose, become governed more by AI-optimized algorithms than anything else?
Optimizing for success vs. optimizing for agency. Until now, we have all lived rather self-determined lives riddled with human error. But the era ahead offers the chance to have every decision we make take the world’s knowledge and probability analysis into consideration. In such a world, success is most often assured by doing what the AI tells us to do. But what we gain in eliminating human error, we lose in agency. Sure, we technically have a choice, but going against the guidance of AI will increasingly look reckless if not self-destructive — much like turning off the headlights while driving at night (driving manually, that is! dammit, my metaphors are losing steam in the age of AI!). As humans, we will need to decide where classic self-determination adds value through originality and creativity, and where it is simply a form of self-sabotage. How many people will choose to live partially or completely AI-free, like the people who choose to live off the grid now, and how tolerant will society at large be of that choice?
We are being programmed. There’s no delicate way to say it. The technology to reprogram ourselves — and unduly influence the desires and beliefs of others — has arrived. Our daily life is flush with evidence of the power of AI-powered algorithms, from the radical and volatile shifts in political views and “cancel culture” to the speed of market trends and spread of memes. The water boils slowly, but we are the frogs without a doubt. How do we respond? I have three things to say: (1) we must take control of the programming by understanding how algorithms work, demanding transparency into how they are optimized, and exercising our choice for how we want to be influenced; (2) let’s use AI to program ourselves in the same way we take courses to learn and change our behaviors in our professional and personal lives; (3) let’s reimagine the full stack of education (version 1.0 of human programming was good ol’ classroom teaching) using this new tailor-made approach to programming the next generation to be skilled, creative, passionate, and values-driven leaders and thinkers. Only by taking the reins on this technology can we ensure that this is, in fact, an upgrade of humanity as we know it.
What would a future “History of Tech” class teach us?
Sometimes we need to do post-mortems on our industry, not just our own projects. Especially as I review new startups, think about the future system of work as an Atlassian board member, and build out A24 Labs to provide the world’s greatest storytellers with the absolute best stack of technology, I have been spending time synthesizing observations and lessons learned. The fun prompt here is, “What would a future ‘History of Tech’ class teach us?” I have a growing collection of these conclusions, but here are a few…
Competition between technologies is ultimately a game of “slap a hand” in which the layer on top supplants the technologies beneath it…until a new layer is built on top. Companies made apps, and then OS-level features supplanted many of those apps. The default search in your browser supplanted the sites you went to for search. Chat-based agents are now supplanting the discovery process and destinations we once frequented to find answers (soon we will simply ask for a car, as opposed to going to a particular branded app to order one). Enterprise search and function-specific agents will gradually supplant the places you visit (and the people you go to) to get data and analysis. If you want to disrupt any industry — or prevent your own disruption — go up the stack of user experience.
Novelty precedes utility. I remember when Slack first came on the scene over a decade ago, when my team at Behance was using HipChat. We had no need for another messaging app, but people loved using Slack to summon company when leaving the office to grab a coffee, or to leverage “/giphy” to send random animated GIFs to each other with some degree of plausible deniability. What started as fun led to the discovery of utility, and ultimately Slack became a mission-critical technology for the team. I saw the same phenomenon during the rollout of a virtual conference room technology in 2005 while working at an investment bank. Nobody used it until one member of the team summoned everyone to “jump on audio conf room 1” to secretly make fun of a partner’s tie. When we play with new technology, we become socialized to its use cases. Now, in the age of AI, the same pattern is repeating itself, and we must let our teams play to discover the utility.
Don’t dismiss or discount new tech because of early misuses. Whether it was the early use of the internet for porn and impropriety, of Bitcoin for illicit transactions, of NFTs for scams, or of generative AI to make deepfakes (the list goes on), we must learn to look beyond the early use cases of new technology, and we must wait to pass judgment.
Technology ultimately succeeds because of the user’s experience of the technology, not the technology itself. Fighting words here, and I don’t mean to offend my engineering colleagues, but I have seen this prove itself time and time again. Breakthrough technology spreads fast and is seldom the moat you expect it to be. Look no further than the rapid commoditization of LLMs and media models, the ubiquity of mobile apps doing everything you can imagine, or the dozens of SaaS companies that emerge to solve problems faced across every function of business and life. The tech itself is important, but great tech alone does not guarantee success. The user experience determines whether a new customer survives the first mile of the product, whether the product’s functionality is even used, whether the customer is willing to pay, and whether the product grows. The secret of any successful and honest product leader is their design partner. When you empower designers at every part of the process of building products — and companies — you stack the deck in your favor. What you’ll also learn is that design can compensate for technical shortcomings. I saw this in the early days of Behance, Pinterest, Uber, and other companies that had growing pains in the form of technical scaling and performance issues that were addressed, at least initially, with design changes.
Momentum is a moat. As an optimistic entrepreneur, a chief product officer focused on innovation, and especially as a leader of M&A for years, this is a lesson I have learned the hard way more times than I care to admit. When a product has escape velocity, it doesn’t even need to be the best product on the market to continue winning. This pains me to say, but history proves it time and time again. Why? Because the vast majority of potential customers for a product are pragmatists and only adopt AFTER rampant adoption. Most companies (and consumers) choose the safe option when it comes to tools. Also, the ripples of network effects continue far longer than people expect. You need to hear about a product many times before you’re willing to try it, and any product with a learning curve, once overcome, is far stickier than the average entrepreneur imagines. Most people take “if it ain’t broke, don’t fix it” too literally, even if there’s a better way.
Insanity is a moat. There’s no better way to overcome an incumbent’s moat of momentum than with a dose of insanity. I’m being cheeky here, but Airbnb was “insane” enough (“who would ever let a stranger sleep in their home!?”) to prevent the entire hospitality industry from competing for over a decade. When you launch a new product or tool that is borderline “sacrilegious” to conventions, you turn heads enough to attract great talent and leapfrog competition. We’ve seen this with Anduril (“defense startups are impossible”), Rippling (“good luck competing with entrenched leaders and practices!”), and Figma (“the web will never be reliable enough for professional design!”), among other examples that stomached being misunderstood long enough to change their industry.
Open source almost always exceeds expectations. The power of collective contribution and creativity outperforms centralized teams and bureaucracy, time and time again. So many companies are desperate for a business model out of the gate and fail to quantify the value of the world working to build and maintain their technology for free.
Building for The Future of Storytelling
I mentioned in a previous edition of Implications that I have joined A24’s leadership team and am building a team of remarkable designers, engineers, operators, and technologists who all share an obsession with storytelling and with empowering creative minds to take more risk and tell world-class stories in new ways. Are you a designer or engineer who loves building products that stitch together various parts of the creative process? Are you a ComfyUI expert who wants to push new media production technologies to their limits? Are you an operator who loves outfitting fast-growing companies with better tools to coordinate and execute ambitious projects? Or do you know someone I need to meet? Reach out and let me know!
Happy birthday Dot Grid Books! It was ~17 years ago this month that Matias and I designed and launched a few new additions to the Action Method product line, including the “Dot Grid Book.” While originally designed for ourselves, so many designers, architects, and illustrators started carrying these around. The Dot Grid Book was developed as an alternative to traditional lines and boxes, using a light geometric dot matrix as a subtle guide for your notations and sketches. It comes in a regular size and a smaller “Dot Grid Book Mini” (my favorite). To commemorate, the team keeping this dream alive made a 20% discount code for Implications readers, valid until the end of the month: DOTGRIDBDAY20.
Ideas, Missives & Mentions
Finally, here’s a set of ideas and worthwhile mentions (and stuff I want to keep out of web-scraper reach) intended for those I work with (free for founders in my portfolio, and colleagues past and present…ping me!) and a smaller group of subscribers. We’ll cover a few things that caught my eye and have stayed on my mind as an investor, technologist, and product leader, including the rise of biologically inspired computing (and a new investment I have made), my generally negative outlook on venture capital as an asset class (and the reasons why), and other random thoughts. Subscriptions go toward organizations I support, including the Museum of Modern Art. Thanks again for following along, and to those who have reached out with ideas and feedback.