Revisiting the Social Contract in the Age of AI

As AI shapes our world, it's time to rethink the social contract between individuals and governments. Inspired by Baroness Shafik's What We Owe Each Other, I explore how AI influences rights, responsibilities, and freedoms. Let’s discuss how we can evolve our relationship with technology.

Hi All,

This has been on my mind for quite some time, and I’ve been meaning to share it with you all. This post has been brewing since the pandemic, inspired by the book What We Owe Each Other by Baroness Shafik. If you’re a fan of books like Why the West Rules - for Now, Guns, Germs, and Steel, or The Age of AI (some conspiracy theorists believe the last one is the deep state's playbook for the future), then this post and discussion are right up your alley! I’d really love to hear your thoughts and perspectives as well. 😄

In today’s world, where artificial intelligence (AI) is becoming a part of almost everything we do, it’s clear we need to rethink the social contract between individuals and governments. We’ve taken this agreement—defining our rights and responsibilities—largely for granted. But as AI continues to shape everything from our freedoms to how we interact with each other, I think it’s time we revisit this contract and think about how it should evolve.

The Bottom Line: You’re on your own. Until you’re not.

Reclaiming Sovereignty: Rethinking the Social Contract in the Age of AI

Whether we like it or not, AI is here to stay. The real question is—how do we coexist with it? AI is already shaping our daily decisions, often without us realizing it. As technology advances at an unprecedented pace, this is a crucial moment to reassess what it means to reclaim our sovereignty and rethink the balance of power between individuals, governments, and AI.

For too long, we have accepted the social contract—the unwritten agreement defining our rights and responsibilities—without much scrutiny. But with AI increasingly embedded in governance, finance, and even personal decision-making, we must take a closer look. Just as we read the fine print on loans or mortgages, we must now question how our data is being used, how AI-driven policies shape our freedoms, and whether the existing contract still serves us. The days of blind trust are over.

The social contract isn’t just a political theory—it directly impacts how we live, what freedoms we enjoy, and what we owe each other. In an AI-driven world, issues like data privacy, ethics, and governance are no longer abstract concerns; they are fundamental to our future. If AI continues to shape every aspect of our lives, we must ensure that the contract evolves to match these new realities.

Here’s what that means:

  • Transparency and Accountability: Governments must be crystal clear about how AI is used in decision-making. We have the right to know how our data is collected and used.
  • Ethical AI Development: AI must be designed with ethics at its core, prioritizing fairness and human well-being. We've seen how AI can perpetuate bias, but it also has the potential to be a force for good—if built with the right safeguards.
  • Public Participation: AI policy shouldn’t be dictated solely by corporations and governments. Public involvement in shaping AI regulations should become standard practice. After all, this technology will shape our lives—shouldn’t we have a say in how it’s deployed?
  • Protection of Rights: In an era where surveillance and predictive algorithms play an ever-greater role, our privacy, security, and freedom of expression must be safeguarded. The social contract must protect individuals, not just optimize for efficiency.

As the power dynamics between humans, governments, and AI shift, sovereignty is no longer just about national borders or political systems—it’s about maintaining agency over our own lives in a world where technology is becoming an invisible but omnipresent force. The question we must ask ourselves is: Will we redefine the social contract to reflect this new era, or passively accept the terms dictated by AI?

A Look Back: The Social Contract Through History

I must admit, the concept of the "social contract" is largely a Western political theory. It is a cornerstone of every democratic society today, but its transactional framing is more deeply rooted in Western philosophy than in other traditions. That said, since democracy is the most widespread form of government in the world today (with notable exceptions, such as China’s one-party state), I want to focus our discussion on this framework.

In Asia and many other parts of the world, the relationship between the ruler and the ruled, or the protector and the protected, is far more nuanced. In these cultures, the right to rule is often seen as divinely sanctioned and guided. The ruler wasn’t just a political leader; they were thought to have a special, almost sacred connection to a higher power, viewed as a vessel or conduit of heaven, of Almighty power, or of God. This belief in divine right meant the ruler was seen as the natural, almost unquestionable authority over the masses.

So, when we talk about the social contract, it's important to understand that the transactional view of it—where citizens agree to give up certain freedoms in exchange for protection—is more Western in origin. Yet, in many parts of the world, the idea of governance was and still is tied to a more divine or sacred order. Now, as AI becomes more intertwined with governance, we must question whether the more "Western" idea of the social contract, based on rights and protections, still holds up in a world where technology is reshaping the very essence of these relationships.

  • Western Political Thought: Hobbes, Locke, and Rousseau laid the groundwork for the social contract, and their ideas have influenced democratic governments worldwide. Hobbes viewed the social contract as an escape from the 'state of nature', a condition of disorder and perpetual conflict among individuals. Safe to say, he didn’t have the most optimistic view of humanity, famously describing life in that state as "solitary, poor, nasty, brutish, and short." Locke emphasized natural rights and the protection of property, while Rousseau believed in the "general will" of the people. But in a world where technology, especially AI, plays such a massive role, we need to ask whether these classical theories still apply. Technology has increasingly become a mediator between the individual and the state, and it’s crucial to consider whether these foundational ideas need to evolve to meet the demands of our digital age.
  • Eastern Philosophies: The wisdom from Confucianism and the Mandate of Heaven in Asian thought reminds us of the moral responsibility of rulers to their people. In the age of AI, this idea suggests that our leaders need to prioritize the ethical use of technology for the collective good, ensuring it benefits everyone.
  • Latin American and African Perspectives: These regions have shaped their own social contracts in the wake of colonization. The struggles for justice and equality seen in the Haitian Revolution or post-colonial Africa remind us that technology—especially something as powerful as AI—should empower people, not oppress them. The social contract needs to reflect these struggles and be adapted to the unique challenges each culture faces.

Colonization, Democracy, and AI

Colonization deeply impacted the social contract, imposing foreign systems of governance that didn’t respect traditional ways of life. The spread of Western democracy introduced the protector-protected dynamic that has continued to shape governance today. And now, with AI stepping into the picture, we need to ask whether AI systems are continuing this historical pattern or offering us a chance to rewrite the rules.

What Does It Mean to Be Human in the Age of AI? - A Reckoning Moment for Humanity

This is more than a political transition; it’s a defining moment for humanity. What does it mean to be a human being with a beating heart in a world where AI agents and robots surpass us in intelligence and physical power? Sam Altman, CEO of OpenAI, has predicted that within a few years, aided by deep-research AI, a single individual could wield more computational intelligence than all of humanity combined today. He envisions a world where artificial general intelligence (AGI) could emerge by 2027, transforming industries and redefining human capabilities, and he believes AI will become so advanced that it could surpass human performance across a wide range of tasks, leading to unprecedented economic growth and societal change.

This reality forces us to reconsider our place not only in society but also on this planet. For centuries, humans have seen ourselves at the top of the food chain, but that paradigm is shifting. AI may soon outmatch us, yet we must remember: we are not merely economic agents required to continuously prove our value in order to exist. We have inherent worth, and this moment is an invitation to ask ourselves: why are we here?

More importantly, we are not separate from Earth; we are part of her. The planet is not an inert rock orbiting the Sun—it is alive, intelligent, and conscious. Our challenge is no longer about dominance but about proving that we deserve to belong in this brave new world.

Creating a Utopian World, Not a Dystopia

Right now, we’re at a crossroads. The technology we’re creating can either lead us to a future where everything falls into place—or to one where we lose control. At the World Economic Forum in Davos, AI CEOs and government leaders are starting to emphasize the need for technology to reflect human values. It’s more important than ever to ensure that AI is being developed not just for progress but for fairness, equality, and the good of all. This is our moment to make a choice: will we use AI to create a future where we all thrive? Or will we let it lead us into a dystopia?

In Closing...

As citizens of this connected world, we have a right to question the terms of our social contract. Governments, individuals, and AI are intertwined now more than ever, and we must rethink how this relationship works. By taking back control of our sovereignty and ensuring that technology serves the collective good, we can create a future where AI elevates human potential and protects our rights. What's your take on it?


Suggested Readings & Interviews

[1] Social contract | Definition, Examples, Hobbes, Locke, & Rousseau ...

[2] The Historical Flow of Modern Western Social Contract Thought and Its ...

[3] Social Contract Theory - Latin American History - Fiveable

[4] (DOC) Social Contract Theory in South Africa's Politics - Academia.edu

[5] (PDF) Realizing the Social Contract: The case of Colonialism and ...

[6] A Roadmap to AI Utopia - TIME