The bitter lesson for organisations
I read another great piece by Ethan Mollick titled “The Bitter Lesson versus The Garbage Can”. He presents two compelling concepts that are on a collision course.
First is the “Garbage Can Model” of organisations. This theory suggests that most companies are not the well-oiled machines we imagine. Instead, they are chaotic collections of problems, solutions, and people, where processes are often undocumented, informal, and evolved rather than designed. As Mollick shares from one case study:
The CEO, after being walked through the map, sat down, put his head on the table, and said, “This is even more fucked up than I imagined.”
Second is “The Bitter Lesson” from the world of AI research. Coined by computer scientist Richard Sutton in a 2019 essay, it observes that attempts to build AI by encoding complex human knowledge are consistently beaten by general methods that leverage massive computational power. The AI simply figures out a better way on its own through brute force.
…encoding human understanding into an AI tends to be worse than just letting the AI figure out how to solve the problem, and adding enough computing power until it can do it better than any human.
The core conflict
The central question Mollick raises is whether this Bitter Lesson will apply to our messy, “Garbage Can” organisations. Will we soon have AI agents that we can simply give an outcome—like “produce the weekly sales report”—and they will figure out how to navigate the internal chaos to deliver it, rendering our carefully crafted processes obsolete?
This seems logical. An AI trained on outcomes could find more efficient paths than the ones humans have created through habit and negotiation. It suggests a future where understanding how work gets done is less important than clearly defining what a successful output looks like.
Where humans remain essential
This leads to an interesting question: if AI can optimise processes through brute force, where does that leave human expertise and historical knowledge? I believe our value shifts to areas that computation alone cannot address.
- Defining ‘good’: An AI can achieve a target, but a human must define it. For qualitative goals like “excellent customer service” or “a positive company culture,” human judgment is needed to set the parameters of success.
- Handling ambiguity and ethics: Business is not as clear-cut as a chess game. Humans are required to navigate situations with conflicting goals, ethical grey areas, and incomplete information. An AI might find the most efficient way to a sale, but a human must ensure it is not a manipulative or brand-damaging one.
- Asking the right questions (first principles): An AI is exceptional at solving a problem it is given. Humans, however, are capable of questioning whether we are solving the right problem in the first place. This first-principles thinking, which redefines the entire challenge, is a uniquely human form of innovation.
- Understanding ‘why’ (Chesterton’s Fence): Many seemingly inefficient processes exist for good, unwritten reasons—to maintain relationships, ensure legal compliance, or manage team morale. Humans provide the wisdom to understand why a fence was built before tearing it down, preventing unforeseen consequences.
The future is not about AI replacing human processes wholesale. It is about a partnership where we use our uniquely human skills to direct the immense power of AI. We will set the destination, define the rules of the road, and let the AI find the best route.