For years, I've been deeply influenced by the wisdom embedded in Roy Amara's Law—the notion that we "tend to overestimate the effect of a technology in the short run and underestimate its effect in the long run." Time and again, this principle has proven true, especially as I reflect on the early internet boom, the dot-com bust, and the subsequent ubiquitous integration of online technology in virtually every facet of life.
And yet, today, I find myself considering something that makes even me—a committed skeptic of the phrase "this time it's different"—pause and genuinely reconsider. With artificial intelligence, particularly recent generative AI models and the rise of multimodal AI, I’m beginning to wonder if, perhaps, this time it truly is different.
Historically, new technologies have followed a predictable hype cycle: initial excitement and inflated expectations give way to disillusionment, followed by a gradual, and eventually profound, long-term impact. According to Amara, though, every technology eventually reaches a plateau of productivity, where our estimates of its effectiveness catch up to reality. This is where my doubts begin.
Artificial intelligence, especially generative AI, is demonstrating something distinctively different. Its tangible short-term impacts are immediate, powerful, and widespread, reaching across numerous sectors including creative industries, education, software engineering, and healthcare.
One could argue that we are cheating in our choice of point of origin. If we pick November 2022 as the universal ChatGPT moment that transformed the world, we ignore the decades of overestimated capabilities and false starts AI has experienced since its conception. That's undoubtedly true. However, I also don't see a plateau of productivity on the horizon anytime soon.
Just consider the speed at which AI-driven innovations have moved from lab curiosities into practical, everyday tools. In mere months, generative AI has transformed entire workflows, automating creative processes, content generation, and decision support tasks previously assumed to require nuanced human judgment. Where past technologies gradually permeated society, AI seems to be leapfrogging traditional timelines, becoming deeply integrated almost overnight.
This unprecedented pace isn't just incremental technological progress; it's exponential acceleration fueled by self-reinforcing developments. AI technologies are now instrumental in advancing AI itself, creating a feedback loop that's not merely theoretical but observable in real time. The barriers we anticipated, whether technological, social, or economic, all bounded by human capacity and norms, appear far less robust in the face of such rapid change.
It leads me to question if Amara’s Law itself might need reconsideration. Perhaps with AI, instead of initially overestimating impact, we might now be consistently underestimating both its short-term disruptions and long-term transformations. If our foundational assumptions about technological adoption no longer hold, the implications for business strategies, policymaking, and workforce readiness could be profound.
This is no longer just about keeping pace with innovation—it’s about recalibrating our mental models entirely. Companies and leaders who anticipate slow, steady adoption may find themselves unprepared, caught off guard by abrupt shifts and immediate disruptions. Likewise, policymakers and educators working from traditional frameworks might soon discover they’re navigating a world that has fundamentally outpaced their planning horizons.
One example of this is my own method for determining whether a technology tool is ready for productive use. In the past, if I had an objectively bad experience with a product, I could safely assume it wasn't ready for prime time and relegate it to the "tried, doesn't work" shelf for the foreseeable future. That's no longer true. Product quality and capabilities now iterate over days and weeks, so this approach can lead to blind spots and missed opportunities.
So, these days, when I come across a product or technology that falls short, instead of dismissing it, I ask myself, "It's not ready now. But what needs to happen for it to be ready?" If the answer is anything related to features or functionality, I put the evaluation on pause and revisit it a short time later. This has saved me from missing out on great tools that just needed a bit of finishing and refinement, delivered at AI speed.
Case in point: In the early days of ChatGPT, we had to track how many tokens a given prompt consumed, because the limited context window substantially reduced the product's ability to analyze proprietary information. Today, token limits rarely come up. AI tools across the board seem capable of ingesting large amounts of data and offering meaningful syntheses.
As we navigate these unfamiliar waters, openness to rapid adaptation becomes crucial. Leaders must cultivate agility, recalibrating strategies not annually, but perhaps monthly or even weekly, responding in real-time to the transformative waves initiated by AI.
Reflecting on Amara’s Law now, I’m convinced that while its fundamental insight remains powerful, perhaps it needs an asterisk or even a rewrite for our AI-driven era. AI might very well represent the rare instance where the short-term is radically underestimated—and that alone demands our attention.
So, to close with a simple yet provocative question: If AI truly reshapes our future faster and more profoundly than we anticipate, are we ready? It's time for each of us—technologists, strategists, policymakers, educators—to deeply consider this possibility and prepare ourselves accordingly.