As we move forward in the ever-evolving digital landscape, the concept of the commons is becoming increasingly significant, particularly in the realm of artificial intelligence (AI). In this new AI environment, we’re faced with vast shared resources, such as data pools, algorithms, and technological infrastructures, all forming a sort of digital ‘commons’.
However, in discussions around this digital commons, an outdated and overly simplistic narrative often resurfaces: the myth of the ‘Tragedy of the Commons’. First posited by ecologist Garrett Hardin in 1968, this theory asserts that individuals, acting independently and rationally according to their self-interest, will ultimately deplete shared resources, resulting in detrimental effects for the entire group.
In the context of AI, this tragedy could theoretically manifest in several ways, such as monopolistic control over AI technologies, misuse of data, or the development of AI systems that do not take into account societal well-being. This leads to a narrative of fear and inevitable failure in the management of our collective AI resources.
The Tragedy of the Commons theory has been robustly challenged and deconstructed, most notably by the Nobel Laureate Elinor Ostrom. Ostrom’s work has demonstrated that communities around the world have effectively managed shared resources without overexploitation or destruction.
Her concept of polycentricity, involving multiple governing bodies at different scales, offers a more nuanced and promising approach for managing shared resources.
The myth of the ‘Tragedy of the Commons’, then, should not define our understanding of the AI commons. Instead, we should be guided by the principles of collaboration, inclusive governance, equitable distribution, and adaptive management, building a shared digital landscape that benefits all. As AI continues to transform our society, reevaluating and understanding these concepts will be crucial for a prosperous and equitable future.
What is the Commons?
The concept of the Commons originates from the English agricultural tradition. During the Middle Ages, commons were shared lands where community members had the right to graze their livestock, gather firewood, and forage for food. These rights were typically attached to the ownership or tenancy of a home or parcel of land in the community. The system was built on mutual trust, understanding, and the necessity of shared resources for survival.
However, beginning in the late 15th century and continuing through the 19th century, a process known as ‘enclosure’ started taking place. Enclosure was a legal process in England that divided and fenced off portions of the commons into individually owned parcels of land. This process, often undertaken for the modernization of agricultural practices and increased profitability, led to significant socio-economic changes, often disadvantaging the commoners who depended on these lands.
David Bollier on the Commons
In his work, David Bollier, a renowned author, activist, and consultant, delves deep into the concept of the Commons, its evolution, and its relevance in the contemporary world. The idea of the Commons harks back to a time when resources such as land, water, and forests were shared by communities, leading to collective responsibility and stewardship. Today, the principle of the Commons has resurged in relevance, expanding to encapsulate not only natural resources but also digital ones, like data and algorithms.
In a talk delivered in Barcelona, Bollier focused on redefining the concept of the Commons, its historical misconceptions, and its contemporary importance. He challenged the traditional understanding of the Commons as a vague, less significant category, subservient to the State and the market.
He criticized the infamous “Tragedy of the Commons” theory, introduced by ecologist Garrett Hardin in 1968, which suggested that shared resources will inevitably be over-exploited and ruined. According to Bollier, this theory rested on a set of questionable assumptions and overlooked the inherent social system that is the Commons. He emphasized that the Commons is chiefly a system of cooperation that manages shared wealth sustainably, not merely a collection of resources.
Bollier drew attention to the work of Elinor Ostrom, the first woman to win a Nobel Prize in Economics. Ostrom empirically debunked the Tragedy of the Commons theory through her extensive research on natural resource commons around the world. She established that the Commons is a sustainable model for managing shared resources, demonstrating the importance of relationships and cooperation in economic activity.
An important concept brought up by Bollier in the discussion of the Commons is the idea of “enclosure”. Enclosure refers to the privatization of shared wealth and commodification of previously shared, non-monetized resources. This process can lead to dispossession, impacting people who relied on these resources for survival or basic needs. Bollier drew parallels to urban environments, where global speculation and urban real estate can lead to the enclosure of city spaces, impacting local communities.
He emphasized that the privatization of shared resources such as the human genome and fresh water can inhibit innovation and have serious ecological consequences. In cities, Bollier explained, enclosure often happens through global speculation in urban real estate, which can force people to leave their neighborhoods due to unaffordability.
Bollier’s talk highlights the need for a new understanding and appreciation of the Commons in the face of contemporary challenges. The Commons is not a tragedy waiting to happen, but a robust, cooperative system capable of sustainable resource management. This perspective is important, especially in the context of urban development and the preservation of shared resources in an increasingly privatized world.
As we listen to this, we can read the city as a metaphor for AI.
Bollier posits that the central challenge of the Commons is the preservation of shared resources for future generations. In a world increasingly dictated by capitalist markets, this task is far from straightforward. Market-driven resource control often leads to overexploitation, underrepresentation of marginalized groups, and a gradual erosion of the collective benefit.
Enclosure and AI
Enclosure is a term that originated from historical land practices in Europe, particularly England, during the 15th to 19th centuries. It refers to the process whereby open, communally used lands (the commons) were transformed into parcels of private property. Enclosures often involved legal changes, leading to displacement of local communities, who were deprived of their traditional rights to use these common lands for farming, grazing, and other essential activities.
In the context of AI, enclosure can be seen as a metaphor for similar practices in the digital domain. Here, instead of land, the ‘commons’ may refer to shared digital resources such as data, algorithms, and even the internet itself.
For instance, data enclosure happens when large tech corporations accumulate vast amounts of data (often from users’ digital footprints) and keep them within ‘walled gardens’ or proprietary databases, restricting access and use by others. This hoarding of data leads to a concentration of power and control, as data is a crucial resource for training and improving AI models.
Similarly, algorithmic enclosure refers to the practice where algorithms, which could be used for the common good, are kept private or patented, limiting the wider community’s access to these valuable AI resources.
Such practices of enclosure in the AI context can create a digital divide, exacerbate power imbalances, and limit innovation and inclusivity. Acknowledging these processes is therefore the first step towards addressing them, and towards greater transparency, openness, and equitable distribution of AI resources.
The idea of ‘digital commons’, a space where resources are collectively owned and managed, counters this enclosure movement in AI. It promotes collaborative efforts to create and maintain shared digital resources, fostering a more democratic, accessible, and inclusive AI landscape.
Here are a few examples of both ‘enclosure’ and ‘commons’ in the AI and digital realm:
- Data Enclosure: Tech giants like OpenAI, Facebook, and Amazon collect vast amounts of user data, which they then use to improve their services, sell targeted ads, or even sell to third parties. This data, often gathered without explicit user consent or without users clearly understanding how it will be used, is kept within these companies’ private databases, restricting access and use by others.
- Algorithmic Enclosure: Companies developing AI technologies often keep their algorithms and AI models proprietary. For instance, the algorithms behind Google’s search engine or Facebook’s news feed are closely guarded secrets. This lack of transparency and openness restricts innovation and scrutiny.
On the other hand, there are initiatives that uphold the principles of digital commons:
- Open Source Software: This is a prime example of a digital ‘commons’. Open-source projects, like Linux or Python libraries used in AI (e.g., TensorFlow, PyTorch), are freely available for anyone to use, modify, and distribute. They are maintained by a community of developers who voluntarily contribute their time and skills for the benefit of all.
- Public Datasets: Datasets made available for public use can also be considered part of the digital commons. For example, ImageNet, a dataset of millions of labeled images used for object recognition, is freely available for researchers around the world. Other public datasets include those provided by governments or international organizations, like census data or climate data.
The shift from common ownership to private control – often referred to as ‘enclosure’ – has been driven by factors such as industrialization, urbanization, and economic liberalization. These forces have changed the face of the Commons, shrinking shared spaces and promoting individual ownership.
But in recent years, there has been a resurgence in the discourse surrounding the Commons. This ‘renaissance’ comes from an increasing recognition that market-driven resource management can lead to unsustainable outcomes – what Garrett Hardin referred to as ‘The Tragedy of the Commons’. This tragedy, however, is not a foregone conclusion, as evidenced by numerous instances of successful communal resource management highlighted by Elinor Ostrom and others.
For Bollier, a key part of the challenge lies in making the Commons ‘visible’. The Commons often goes unrecognized because it doesn’t fit neatly into the dominant economic paradigm of private ownership and market exchange. Yet, it is in these shared spaces – physical and digital – that communities can exercise collective agency, innovate, and safeguard resources for future generations.
As we move further into the 21st century, the discourse around the Commons is likely to grow in importance. In the face of issues like climate change, data privacy, and AI governance, there is a growing need for collective, sustainable resource management. Recognizing and strengthening the Commons, then, is a crucial step towards a more equitable and sustainable future.
Criticisms of the ‘Tragedy of the Commons’ Myth
In the discourse of environmental and resource management, the ‘Tragedy of the Commons’ has been considered a cornerstone principle for decades. Introduced by ecologist Garrett Hardin in 1968, it postulates that shared resources will inevitably be depleted by individuals’ self-interest, leading to a disastrous outcome for all. However, a growing body of academic research, practical evidence, and community initiatives challenges this assertion, arguing that the ‘Tragedy of the Commons’ is not an inevitable law of nature but rather a reflection of specific social and economic conditions.
One of the main criticisms of the ‘Tragedy of the Commons’ is its assumption that humans are inherently selfish and short-sighted. This perspective ignores the rich tapestry of human behaviors that include cooperation, altruism, and a keen sense of responsibility towards shared resources. People have shown time and time again that they can cooperate and manage shared resources sustainably when given the opportunity and proper incentives. This phenomenon is beautifully captured in the work of Elinor Ostrom, who was awarded the Nobel Prize in Economic Sciences in 2009 for her work on collective action and common-pool resources.
Ostrom’s work fundamentally challenges Hardin’s thesis. She meticulously documented examples of communities around the world effectively managing shared resources through collective action. She presented an array of strategies and institutional arrangements that communities have developed to manage common resources, ranging from grazing lands in Switzerland to irrigation systems in Nepal. This body of work illustrates that the so-called tragedy is not a foregone conclusion; it can be averted through community organization, collective norms, and appropriate governance structures.
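The contrast between Hardin’s prediction and Ostrom’s findings can be made concrete with a toy model. The sketch below is a deliberately simplified simulation — every number in it is invented for illustration, not drawn from any empirical study — comparing an unregulated pasture, where each herder takes as much as they want, with one governed by a community quota of the kind Ostrom documented:

```python
# Toy common-pool resource model: a pasture that regrows each season.
# All parameters are illustrative, not empirical.

def simulate(seasons, herders, demand_per_herder, quota=None,
             stock=100.0, regrowth=0.25, capacity=100.0):
    """Return the resource stock recorded after each season.

    quota=None -> Hardin's scenario: everyone takes what they want.
    quota=x    -> Ostrom-style rule: each herder is limited to x per season.
    """
    history = []
    for _ in range(seasons):
        take = min(demand_per_herder, quota) if quota is not None else demand_per_herder
        stock = max(stock - herders * take, 0.0)          # extraction
        stock = min(stock + stock * regrowth, capacity)   # regrowth, up to capacity
        history.append(round(stock, 1))
    return history

unregulated = simulate(seasons=10, herders=10, demand_per_herder=3.0)
governed = simulate(seasons=10, herders=10, demand_per_herder=3.0, quota=1.5)
print("unregulated:", unregulated)  # collapses to zero within a few seasons
print("governed:   ", governed)     # holds steady at capacity
```

In this toy setting, unchecked extraction outpaces regrowth and the stock collapses, while a shared quota keeps offtake below the regrowth rate indefinitely — a schematic version of the point that the outcome depends on governance, not on some iron law of shared resources.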
Ending The Tragedy of The Commons Myth
In the Big Think video titled “Ending The Tragedy of The Commons,” Nobel Laureate Elinor Ostrom discusses her critique and counter-argument to Garrett Hardin’s 1968 paper that introduced the ‘Tragedy of the Commons’. Ostrom begins by summarizing Hardin’s argument about how individuals, acting out of self-interest, would inevitably deplete shared resources, creating a tragedy for all.
She then introduces her own empirical and theoretical work which demonstrates that people, in many instances, are capable of establishing their own rules and regulations to effectively manage shared resources, thereby preventing the tragedy that Hardin predicted. This is the concept of polycentricity, which involves multiple governing bodies at different scales, including markets, governments, and community organizations, working together in a nested, complex system.
Ostrom suggests that while this system might not appear ‘pretty’ or neatly organized, it is necessary due to the complexity of society and the problems we face. She warns against the urge to find simple solutions to complex problems.
Applying this concept globally, she stresses that waiting for overarching, global decisions to be made can lead to severe issues. Instead, she advocates for the recognition of both local and global effects of actions like greenhouse gas emissions, and the necessity of enhancing local and regional organization.
In a concrete example, Ostrom mentions the Maasai people of East Africa, who over centuries developed a sophisticated grazing system that maintained their rangelands sustainably. However, outside interference, such as colonial and subsequent Kenyan government policies, disrupted these traditional systems, leading to environmental degradation. The Maasai are now working to re-establish their traditional systems and to adapt to their challenging environment.
In summary, Ostrom argues that the ‘Tragedy of the Commons’ is not a predetermined outcome, but one that can be avoided through polycentric governance, local organization, and respect for traditional knowledge systems.
The ‘Tragedy of the Commons’ and Its Over-Simplicity
Another common criticism of the ‘Tragedy of the Commons’ is its over-simplicity. While the theory neatly illustrates a potential issue with shared resources, it fails to account for the complexities of real-world systems. It does not consider the diversity of resource types, management strategies, social structures, and cultural norms that influence how resources are used. Furthermore, it does not factor in the potential for technological innovation or policy interventions that can transform the dynamics of resource use.
The narrative of the ‘Tragedy of the Commons’ also often overlooks the critical role of power structures and socio-economic inequality in resource exploitation. It tends to place blame on individual actions and choices, instead of recognizing the structural factors, such as policies favoring industrial exploitation or imbalances in resource access, that often play a more significant role in resource depletion. This perspective can often lead to solutions that disproportionately affect marginalized populations, such as the imposition of strict regulations without considering local livelihoods or access needs.
To be clear, this is not to disregard the potential challenges of managing shared resources. Situations where overuse of resources leads to depletion are real and serious. However, these are not inevitable outcomes determined solely by the nature of the resource and human greed, but rather by complex social, economic, and political factors. Recognizing this complexity can open up a broader range of potential solutions and interventions.
The ‘Tragedy of the Commons’ has served as a useful tool to draw attention to the challenges of resource management. However, its application as a blanket theory of human behavior and resource use is overly simplistic and can lead to misdirected policy decisions. We should take heed of the lessons from Ostrom’s work and the countless communities worldwide that have shown us that shared resources can indeed be managed sustainably and equitably. Instead of resigning ourselves to inevitable tragedy, we should focus on building the institutions, norms, and policies that empower people to cooperate, innovate, and sustainably manage our shared resources.
Recognising the ‘Tragedy of the Commons’ as a Myth Is Essential for AI
Recognising the ‘Tragedy of the Commons’ as a myth, rather than an inevitable outcome, is essential for addressing the challenges of AI, because it reframes the narrative from one of predetermined doom to one of potential solutions and proactive management.
When we perceive the Tragedy of the Commons as inevitable, we essentially submit to a fatalistic viewpoint that shared resources will always be over-exploited due to individual self-interest. In the realm of AI, this could be interpreted as tech giants monopolizing AI resources and technologies, misuse of AI for personal or corporate gain, or the widespread negative consequences of AI systems on society, such as job displacement or privacy breaches.
However, when we understand that the Tragedy of the Commons is not a foregone conclusion, we open up space for alternative possibilities and strategies. Elinor Ostrom’s work, for example, demonstrated that many communities across the world have successfully self-regulated their resources, avoiding the tragedy. In the AI context, this suggests that we can indeed manage AI technologies and data in a way that is beneficial for all, rather than harmful or exploitative.
This recognition allows for a shift in focus towards creating governance models, ethical standards, and regulations that can guide the use of AI for the collective good. It implies that potential negative outcomes are not insurmountable, but rather challenges that can be addressed with thoughtful, inclusive, and adaptive approaches.
In essence, understanding the Tragedy of the Commons as a myth rather than a certainty is critical to shaping a more optimistic and proactive narrative around AI. It promotes the idea that the “bad stuff” with AI isn’t unavoidable, but can be managed through cooperative efforts, sound policies, and equitable practices, ensuring the benefits of AI can be shared by all.
The Commons and Artificial Intelligence (AI)
The principles discussed in reconsidering the ‘Tragedy of the Commons’ can indeed be applied to our era of Artificial Intelligence (AI) and the challenges that come with it. AI is an immensely powerful tool with the potential to significantly influence society, but it also represents a form of shared resource that can be mismanaged or overused. Here are several ways in which we can apply these principles to manage AI responsibly.
Collaboration and Collective Action: Much like the communities Elinor Ostrom studied, stakeholders in the AI community – researchers, developers, users, regulators, and those affected by AI systems – need to collaborate to ensure responsible use of AI. This could involve developing community norms for responsible AI research and use, creating mechanisms for collective decision-making about AI policies, or establishing systems for sharing the benefits of AI more equitably.
Inclusive Governance Structures: Given the broad societal impacts of AI, it is crucial to have inclusive governance structures that allow for diverse perspectives and account for the needs and concerns of all stakeholders. This could involve public consultation on AI regulations, inclusion of marginalized communities in AI decision-making, or research partnerships between AI developers and communities affected by AI systems.
Managing Power Dynamics: AI, like other shared resources, is subject to power dynamics and socio-economic inequality. Large tech companies, for instance, have disproportionate control over AI development and use, which can lead to AI systems that serve their interests at the expense of others. Addressing this requires recognizing and actively managing these power dynamics, such as through regulations that prevent monopolistic control of AI or policies that promote more equitable access to AI technology and benefits.
Adaptive Management and Innovation: Finally, managing AI responsibly requires recognizing the complexity and dynamism of AI systems. As with natural resource management, this requires adaptive management approaches that can respond to changing circumstances and new information. It also involves promoting innovation – not only in AI technology itself, but also in the policies, norms, and institutions that govern AI use.
In conclusion, the reconsideration of the ‘Tragedy of the Commons’ provides valuable insights for managing AI in our current era. It reminds us that we can and should manage AI as a shared resource, through collaboration, inclusive governance, equitable distribution, and adaptive management. By doing so, we can work towards a future where AI is used sustainably and equitably, for the benefit of all.
FAQ on The Tragedy of the Commons
Q1: What is the Commons? A1: The Commons refers to resources that are shared by a community and managed collectively for the benefit of all its members. These resources can be natural, like air, water, and wildlife, or created, like public parks, open-source software, and shared cultural knowledge.
Q2: What is the Tragedy of the Commons? A2: The Tragedy of the Commons is a theory introduced by ecologist Garrett Hardin in 1968. He proposed that shared resources would inevitably be overused and depleted because each individual would act in their own self-interest, leading to resource exhaustion. This theory, however, has been widely criticized and debunked.
Q3: Who was Elinor Ostrom? A3: Elinor Ostrom was an American political economist who was awarded the Nobel Prize in Economics in 2009 for her analysis of economic governance, especially the Commons. She empirically debunked the Tragedy of the Commons theory through extensive research on natural resource commons around the world.
Q4: What is ‘enclosure’ in the context of the Commons? A4: ‘Enclosure’ refers to the process where shared resources of the Commons are privatized or commoditized. This often results in people who relied on these resources for survival or basic needs being dispossessed.
Q5: What are the consequences of enclosure? A5: Enclosure often leads to economic and social inequalities. It can inhibit innovation and have serious ecological consequences. In urban settings, enclosure often happens through global speculation in real estate, which can displace local communities due to rising living costs.
Q6: How can the Commons be protected? A6: The Commons can be protected through collaborative governance, community management, and legislation that supports sustainable resource use. Elinor Ostrom proposed eight “design principles” for managing the commons sustainably, including clear boundaries, collective decision-making, effective monitoring, and graduated sanctions for rule violators.
What is the Tragedy of the Commons?
The ‘Tragedy of the Commons’ is a concept introduced by ecologist Garrett Hardin in 1968. It suggests that shared resources, or ‘commons’, will inevitably be depleted due to individuals acting in their own self-interest, leading to a negative outcome for all.
Is the Tragedy of the Commons inevitable?
No, the ‘Tragedy of the Commons’ is not inevitable. As demonstrated by Elinor Ostrom’s research, people in many instances have found ways to establish their own rules and effectively manage shared resources, thus averting the tragedy. This involves collaboration, inclusive governance, equitable distribution, and adaptive management.
What is the concept of polycentricity?
Polycentricity is a concept that involves multiple governing bodies at different scales working together in a nested, complex system. This includes markets, governments, and community organizations, which all play a role in managing shared resources.
How does polycentricity help address the Tragedy of the Commons?
Polycentricity provides a framework that allows for diverse actors to participate in the governance of shared resources. By involving multiple entities at different scales, it allows for a more nuanced and adaptive approach to resource management that is more likely to prevent the depletion of shared resources and avoid the ‘Tragedy of the Commons’.
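The nested structure that polycentricity describes can be sketched in code. In this hypothetical model — the body names and rules are invented purely for illustration — a question about a shared resource is checked first against local community rules, then regional ones, then a global baseline, so no single centre decides everything:

```python
# Hypothetical sketch of polycentric governance: nested governing bodies,
# each with its own rules, consulted from the most local scale outward.

class GoverningBody:
    def __init__(self, name, rules, parent=None):
        self.name = name
        self.rules = rules    # maps an action to a verdict such as "allow"/"deny"
        self.parent = parent  # the next governing body up the scale, if any

    def decide(self, action):
        """Return (deciding body, verdict) for a proposed action.

        The most local body with an applicable rule decides; otherwise
        the question escalates to the next scale up.
        """
        if action in self.rules:
            return self.name, self.rules[action]
        if self.parent is not None:
            return self.parent.decide(action)
        return self.name, "no rule"

# Three nested scales: village commons -> regional council -> global treaty.
global_body = GoverningBody("global treaty", {"dump_waste": "deny"})
regional = GoverningBody("regional council", {"large_irrigation": "deny"}, parent=global_body)
village = GoverningBody("village commons", {"graze_livestock": "allow"}, parent=regional)

print(village.decide("graze_livestock"))   # decided at the local scale
print(village.decide("large_irrigation"))  # escalates to the regional scale
print(village.decide("dump_waste"))        # escalates to the global scale
```

The design choice mirrors Ostrom’s point: decisions stay as local as possible, while larger-scale bodies handle only what the local scale cannot — a “nested, complex system” rather than a single global decision-maker.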