
    The human right to science is 76 years old. It’s a reminder for us all to be more curious.


    Sujatha Raman, Professor and UNESCO Chair-holder, Australian National University and Brian Schmidt, Distinguished Professor, Australian National University

    Signed exactly 76 years ago today, the Universal Declaration of Human Rights is the world’s most translated document. It is widely acknowledged as the foundation of international human rights work, not just in legal settings but in wider civil society.

    But few know that among the many social and political freedoms defined by the declaration is a human right to science. Article 27 of the declaration positions this right in the cultural sphere, stating:

    Everyone has the right freely to participate in the cultural life of the community, to enjoy the arts and to share in scientific advancement and its benefits.

    This right might seem meaningless at a time when governments around the world have slashed funding for science and appear to be ignoring scientific evidence for how to address global problems such as climate change.

    But there’s much more to the right to science than what you might immediately think of. It can also serve as a spark for human imagination and curiosity. And this is where its true power resides.

    The Universal Declaration of Human Rights was signed exactly 76 years ago today. Flickr

    The evolution of the right to science

    Interpretations of the right to science have evolved a lot over the past decade. It was initially interpreted mainly as the right of scientists to do their research and the public’s right to access and benefit from this research. But this led to no small share of conundrums. For example, what if the right to do research is at odds with the human rights of affected communities?

    This conflict arises in virtually all fields, from anthropology and archaeology to computer science and the life sciences. For example, building a laboratory or collecting data for research can potentially put a community at risk of losing their heritage, identity or livelihood. Some scholars therefore argue that the right should also include a duty to anticipate and take steps to ameliorate such tensions.

    The United States National Academies have also begun to recognise that access and benefit don’t automatically follow from biomedical research. In fact, research may increase inequities if it’s not conducted in line with the principles of fairness, justice, equity and the common good. Equally, what are we forgetting if we treat the public only as a beneficiary of science done by credentialed researchers? The right to science is also about the right to participate in science and in decisions about research.

    For example, it means Indigenous peoples have the right to be recognised as knowledge producers – a sentiment captured in Australia by researchers acknowledging that First Nations peoples are also the First Astronomers. The International Science Council’s recently released framework nicely captures these nuances. It states that the right allows people to participate in and enjoy the benefits of science.

    The right to science means Indigenous people have the right to be recognised as knowledge producers. For example, researchers recognise First Nations people in Australia as the First Astronomers. Flickr

    Most of these discussions see the right to science as a way to protect fundamental freedoms – conjoined with responsibilities – of both scientists and the public. But a different meaning emerges when we remember the right to science is also a cultural right.

    In a keynote address to an international conference in Switzerland in 2015, Farida Shaheed, the former United Nations Special Rapporteur for Cultural Rights, explained how the right to science and the right to culture are inextricably linked. Both entail, she said, the conditions for:

    people to reconsider, create and contribute to cultural meanings, expressions, or manifestations and ways of life.

    This highlights how the right to science can serve as a force to galvanise the more positive role of curiosity and the imagination. As such, it can be a spark for a new ethos of curiosity-driven research for the planet.

    Curiosity in a time of crisis

    The role of science in policy making and practice is at a crossroads. Governments routinely invoke geopolitical competitiveness and commercial success as reasons for supporting research – particularly on so-called “critical technologies” such as quantum computing.

    Yet the planet faces interconnected crises of climate change, pollution, biodiversity loss and deepening inequalities. The response to this must therefore include all of humanity while creating space for researchers to be curious about different possible futures and pathways for designing them.

    The International Science Council’s initiative on Science Missions for Sustainability is predicated on the understanding that we won’t achieve the ambitions of the United Nations 2030 agenda with siloed thinking or new technologies alone. The council calls for all disciplines to work together to produce actionable knowledge oriented towards practical solutions for our planetary challenges.

    Humans thrive on curiosity even in times of crisis. We have many examples from the 20th century of curiosity-driven research yielding a “giant pool of ideas” from which came many of the technologies we take for granted today. The challenge now is to harness and support this curiosity in ways appropriate to the scale and scope of the challenges we currently face.

    We know from history that worlds are created and changed not just through new technologies and market-based solutions, but also through culture and social innovation. The right to science provides a welcome stimulus for thinking more deeply, creatively and curiously about these interrelationships in developing policies for research.The Conversation

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

    Main Photo from Freepik.

    Charles Telfair Centre is an independent nonpartisan not for profit organisation and does not take specific positions. All views, positions, and conclusions expressed in our publications are solely those of the author(s).

    AI could make cities autonomous, but that doesn’t mean we should let it happen


    Federico Cugurullo, Assistant Professor in Smart and Sustainable Urbanism, Trinity College Dublin

    You are walking back home. Suddenly the ground seems to open and a security drone emerges, blocking your way to verify your identity. This might sound far-fetched but it is based on an existing technology – a drone system made by the AI company Sunflower Labs.

    As part of an international project looking at the impact of AI on cities, we recently “broke ground” on a new field of research called AI urbanism. This is different from the concept of a “smart city”. Smart cities gather information from technology, such as sensor systems, and use it to manage operations and run services more smoothly.

    AI urbanism represents a new way of shaping and governing cities, by means of artificial intelligence (AI). It departs substantially from contemporary models of urban development and management. While it’s vital that we closely monitor this emerging area, we should also be asking whether we should involve AI so closely in the running of cities in the first place.

    The development of AI is intrinsically connected to the development of cities. Everything that city dwellers do teaches AI something precious about our world. The way you drive your car or ride your bike helps train the AI behind an autonomous vehicle in how urban transport systems function.

    What you eat and what you buy tells AI systems about your preferences. Multiply these individual records by the billions of people that live in cities, and you will get a feeling for how much data AI can harvest from urban settings.

    Predictive policing

    Under the traditional concept of smart cities, technologies such as the Internet of Things use connected sensors to observe and quantify what is happening. For example, smart buildings can calculate how much energy we consume and real-time technology can quantify how many people are using a subway at any one time. AI urbanism does not simply quantify, it tells stories, explaining why and how certain events take place.

    We are not talking about complex narratives, but even a basic story can have substantial repercussions. Take the AI system developed by US company Palantir, which is already employed in several cities to predict where crimes will take place and who will be involved.

    These predictions may be acted on by police officers in terms of where to assign resources. Predictive policing in general is one of the most controversial powers that artificial intelligences are gaining under AI urbanism: the capacity to determine what is right or wrong, and who is “good” or “bad” in a city.

    This is a problem because, as the recent example of ChatGPT has made clear, AI can produce a detailed account without grasping its meaning. It is an amoral intelligence, in the sense that it is indifferent to questions of right or wrong.

    And yet this is exactly the kind of question that we are increasingly delegating to AI in urban governance. This might save our city managers some time, given AI’s extraordinary velocity in analysing large volumes of data, but the price that we are paying in terms of social justice is enormous.

    A human problem

    Recent studies indicate that AI-made decisions are penalising racial minorities in the fields of housing and real estate. There is also a substantial environmental cost to bear in mind, since AI technology is energy intensive. It is projected to contribute significantly to carbon emissions from the tech sector in coming decades, and the infrastructure needed to maintain it consumes critical raw materials. AI seems to promise a lot in terms of sustainability, but when we look at its actual costs and applications in cities, the negatives can easily outweigh the positives.

    It is not that AI is getting out of control, as we see in sci-fi movies and read in novels. Quite the opposite: we humans are consciously making political decisions that place AI in the position to make decisions about the governance of cities. We are willingly ceding some of our decision-making responsibilities to machines and, in different parts of the world, we can already see the genesis of new cities supposed to be completely operated by AI.


    This trend is exemplified by Neom, a colossal project of regional development currently under construction in Saudi Arabia. Neom will feature new urban spaces, including a linear city called The Line, managed by a multitude of AI systems, and it is supposed to become a paragon of urban sustainability. These cities of the future will feature self-driving vehicles transporting people, robots cooking and serving food and algorithms predicting your behaviour to anticipate your needs.

    These visions resonate with the concept of the autonomous city, which refers to urban spaces where AI autonomously performs social and managerial functions with humans out of the loop.

    We need to remember that autonomy is a zero-sum game. As the autonomy of AI grows, ours decreases, and the rise of autonomous cities risks severely undermining our role in urban governance. A city run not by humans but by AIs would challenge the autonomy of human stakeholders, as it would also challenge many people’s wellbeing.

    Are you going to qualify for a home mortgage and be able to buy a property to raise a family? Will you be able to secure life insurance? Is your name on a list of suspects that the police are going to target? Today the answers to these questions are already influenced by AI. In the future, should the autonomous city become the dominant reality, AI could become the sole arbiter.

    AI needs cities to keep devouring our data. As citizens, it is now time to carefully question the spectre of the autonomous city as part of an expanded public debate, and ask one very simple question: do we really need AI to make our cities sustainable?The Conversation

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

    Main photo from Freepik.


    West Africa’s coast was a haven for piracy and illegal fishing – how technology is changing the picture


    The Gulf of Guinea – a coastal region that stretches from Senegal to Angola – is endowed with vast reserves of hydrocarbon, mineral and fisheries resources. It is also an important route for international commerce, making it critical to the development of countries in the region.

    For a long time, however, countries in the Gulf of Guinea haven’t properly monitored what’s happening in their waters. This has allowed security threats at sea to flourish. The threats include illegal, unreported and unregulated fishing, drug trafficking, piracy and armed robbery, and toxic waste dumping.

    For instance, in 2020, the International Maritime Bureau reported that the region had experienced the highest number of crew kidnappings ever recorded: 130 crew members taken in 22 incidents. In 2019, 121 crew members were kidnapped in 17 incidents.

    Regional action to address these threats is being taken. In 2013, heads of state signed the Yaoundé Code of Conduct – a declaration to work together and address the threats. This also involved setting up a large hub, known as the Yaoundé Architecture (made up of different divisions), which coordinates and shares information on what’s happening at sea.

    Since the Yaoundé Code of Conduct was signed in 2013, there has been some progress. As we found in a new study, tech-driven tools have been playing a vital role in addressing security threats at sea in west and central African countries. For instance, Nigeria was once designated a piracy hotspot but, in 2022, was delisted. This was in large part due to the use of technology.

    Tech tools have helped countries to more efficiently manage and monitor the marine environment. They also support information sharing among law enforcement agencies. This has led to successful interdictions and enabled the prosecution of pirates in the region.

    The tech tools

    Cargo and fishing vessels are required, under international law, to be fitted with systems that transmit data showing where they are. Since the signing of the Yaoundé Code of Conduct, we found that new technology is now using this location data to help countries in the Gulf of Guinea monitor their waters.

    Tools and systems – like Radar, Yaoundé Architecture Regional Information System (Yaris), Sea-Vision, Skylight and Global Fishing Watch – are integrating information from various surveillance and location monitoring systems and satellite data to identify suspicious behaviour. This has significantly helped to improve efforts to combat security threats.
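    The core “suspicious behaviour” signal these platforms combine is often simple at heart. As a minimal sketch (not the actual logic of Yaris, Sea-Vision or Skylight), the Python below flags vessels whose position reports fall silent for longer than a threshold – a common “going dark” indicator. All vessel names, timestamps and the six-hour cutoff are invented for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical AIS position reports: (vessel_id, timestamp) pairs.
# Real systems ingest live transponder feeds; these records are illustrative.
reports = [
    ("VESSEL_A", datetime(2022, 8, 1, 0, 0)),
    ("VESSEL_A", datetime(2022, 8, 1, 0, 30)),
    ("VESSEL_A", datetime(2022, 8, 1, 9, 0)),   # 8.5 hours of silence
    ("VESSEL_B", datetime(2022, 8, 1, 0, 0)),
    ("VESSEL_B", datetime(2022, 8, 1, 1, 0)),
]

def flag_dark_periods(reports, max_gap=timedelta(hours=6)):
    """Flag vessels whose transmissions go silent for longer than max_gap."""
    by_vessel = {}
    for vessel, ts in reports:
        by_vessel.setdefault(vessel, []).append(ts)
    flagged = {}
    for vessel, times in by_vessel.items():
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:]) if b - a > max_gap]
        if gaps:
            flagged[vessel] = max(gaps)  # longest silent period
    return flagged

print(flag_dark_periods(reports))  # VESSEL_A is flagged; VESSEL_B is not
```

    In practice, such gap detection is fused with radar tracks, satellite imagery and port records before anyone treats a silence as suspicious.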

    Countries in the Gulf now have increased awareness of vessel activity in their waters and are able to make more informed responses in emergencies, like piracy or armed robbery and oil theft. For instance, in 2022 the Heroic Idun tanker evaded arrest in Nigeria over suspicious behaviour, then travelled on to Equatorial Guinea. Using the Yaoundé Architecture system, Equatorial Guinea held the vessel at Nigeria’s request and it was later fined.

    Without the Yaoundé Code of Conduct, and the new tech that it has introduced, the sharing of information, capture of evidence and cooperation between countries would not have been possible.

    Nigeria’s tech advancements

    Nigeria is a prime example of a country where investment in technology-based infrastructure has helped it to tackle threats to security and development. Over the past three years, Nigeria has deployed a range of tech tools. For instance, the navy deployed the Regional Maritime Awareness Capability facility, which receives, records and distributes data, as well as the FALCON EYE mass surveillance system.

    The Nigerian Maritime Administration and Safety Agency has also made advancements through its Deep Blue Project. This includes a central intelligence and data collection centre which works with special mission vessels (like unmanned aerial vehicles) to take action against threats. Nigeria has since had a reduction in piracy and armed robbery at sea. Once designated a piracy hotspot, the country was delisted as a hotspot in 2022.

    Cautious optimism

    Evidently, technology has an important role to play in enhancing safety and security at sea. But it’s not without its challenges, as we identified in our study.

    First, an over-reliance on external tech tools has resulted in a lack of ownership of the technology. This affects the sustainability of the projects. For instance, once EU funding for YARIS expires, the operating costs will be transferred from the EU to Yaoundé Architecture states. But there are still no clear plans from regional states on how to sustain YARIS.

    Second, people with specific expertise are needed to use the tech. But many countries can’t afford to hire them, or aren’t producing human resources with this expertise. Even when personnel have received training, they may not have access to the tools (which aren’t available at the country level) to apply what they have learnt.

    Third, existing monitoring systems such as AIS and VMS can be switched off, a vulnerability that criminals continue to exploit. Radar systems can fill these gaps, but radar coverage along coastlines is patchy.

    Fourth, the scarcity of national data centres for long range vessel identification and tracking (due to lack of investment) makes using existing technology difficult.

    Fifth, there are challenges related to communication difficulties, including the absence of internet connections onboard some vessels and low internet speeds. Finally, private operators like the shipping industry aren’t using the services provided by the Yaoundé Architecture. This points to politics and a lack of trust in the regional solutions.

    Vessel operators report incidents instead to agencies outside the region, such as Maritime Domain Awareness for Trade – Gulf of Guinea (based in France) or the International Maritime Bureau in Malaysia and these agencies often broadcast the information without confirming with the regional architecture. This undermines the ability of regional agencies to do their work effectively. It’s in the best interests of Atlantic nations to cooperate and coordinate on meeting maritime security challenges.

    Technology can play a key role in this. But it’s vital that countries enhance technological know-how, and ensure that external partners and businesses use the available technological services. This will be a big step towards a secure and collaborative maritime environment.The Conversation

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

    Main photo from Freepik.


    AI can make African elections more efficient – but trust must be built and proper rules put in place.


    Shamira Ahmed, Policy Leader Fellow, Florence School of Transnational Governance, European University Institute

    Mehari Taddele Maru, Professor, European University Institute and Johns Hopkins University

    Time magazine has dubbed 2024 a “super election year”. An astonishing 4 billion people are eligible to vote in countries across the world this year. Many are on the African continent, where presidential, parliamentary and general elections have already been held or are set for the latter half of the year.

    Artificial intelligence (AI) will play a major role in many countries’ elections. In fact, it already does. AI systems are used in a number of ways. They analyse large amounts of data, like voter patterns. They run automated chatbots for voter engagement. They authenticate voters and detect cyber threats. But many pundits and ordinary people alike seem unsure what to make of the use of AI in African electoral processes. It is often described as simultaneously promising and perilous.

    We are experts on transnational governance whose ongoing research aims to define the challenges AI could pose to legitimate governance in Africa. We want to help create a base of empirical evidence that the continent’s electoral bodies can use to harness the potential benefits of AI and similar technologies while not ignoring the risks.

    The effects of AI on electoral democracy in Africa will fundamentally depend on two factors. First, popular legitimacy and trust in AI. Second, the capability of African states to govern, regulate, and enforce oversight on the use of AI by all political stakeholders, including ruling and opposition parties.

    Varied examples

    It is too simplistic to say that the use of AI in elections is all good or all bad. The truth is that it can be both, depending on the two factors outlined above.

    Identity politics, diversity, and digital illiteracy must also be taken into account. These all play a role in the rise of polarisation and whether political constituencies are particularly susceptible to disinformation and misinformation. For instance, during Kenya’s 2017 election, consulting firm Cambridge Analytica allegedly used AI to target voters with disinformation. This potentially influenced the outcome.

    In South Africa, there is increasing awareness that anonymous influencers, often positioned at the extremes of the political spectrum, contribute significantly to online misinformation and disinformation. These figures, who largely remain unknown to the general public, introduce highly emotive and polarising content into discussions without live and adequate moderation – often through automated processes.

    But AI also has the potential to enhance electoral legitimacy. Kenya’s 2022 Umati project monitored social media for hate speech using computerised analysis known as natural language processing. Once harmful content had been flagged by the AI it was removed. During Sierra Leone’s 2021 general election its Election Monitoring and Analysis Platform identified and countered hate speech, disinformation and violence incitement.
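    As a rough illustration of how automated flagging of this kind begins (the real Umati and Sierra Leone systems used trained natural language processing models and human review, not a fixed word list), the sketch below runs a simple lexicon match over posts and queues any hits for moderators. The terms and posts are invented for the example.

```python
import re

# Illustrative lexicon only; production systems learn patterns from
# annotated data rather than matching a hand-written list.
FLAG_TERMS = {"vermin", "exterminate", "cockroaches"}

def flag_post(text):
    """Return any lexicon terms found in a post, as a first-pass filter."""
    tokens = set(re.findall(r"[a-z']+", text.lower()))
    return sorted(tokens & FLAG_TERMS)

posts = [
    "Turnout was high in the northern counties today.",
    "Those people are vermin and should be driven out.",
]
for post in posts:
    hits = flag_post(post)
    if hits:
        print("FLAGGED for review:", hits)
```

    Even in real deployments, the output of such filters is a queue for human moderators, not an automatic takedown.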

    Similarly, in South Africa’s latest polls AI-powered bots were used to mitigate the spread of disinformation. Elsewhere, facial recognition technology was used during Ghana’s 2020 general election to verify voters and prevent impersonation. Nigeria’s 2019 Automated Fingerprint Identification System detected duplicate registrations, bolstering the accuracy of voter rolls.
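    At its core, duplicate-registration detection is a grouping problem. The hypothetical sketch below groups voter records by a normalised identity key, which stands in for the fingerprint-template match a real automated fingerprint identification system performs; all records are invented.

```python
from collections import defaultdict

# Illustrative voter records; a real AFIS matches fingerprint minutiae,
# not string fields. A normalised (name, date of birth) key stands in here.
records = [
    {"id": 1, "name": "Amina Bello",    "dob": "1990-04-12"},
    {"id": 2, "name": "amina  bello",   "dob": "1990-04-12"},  # duplicate entry
    {"id": 3, "name": "Chinedu Okafor", "dob": "1985-11-02"},
]

def find_duplicates(records):
    """Return groups of record IDs that share the same identity key."""
    groups = defaultdict(list)
    for rec in records:
        key = (" ".join(rec["name"].lower().split()), rec["dob"])
        groups[key].append(rec["id"])
    return [ids for ids in groups.values() if len(ids) > 1]

print(find_duplicates(records))  # [[1, 2]]
```

    Flagged groups would then be reviewed before any registration is struck from the roll.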

    Lessons and challenges

    These cases offer valuable lessons for other countries on the continent – both in what works and what doesn’t. There are several obstacles electoral governance bodies must overcome in most African countries. One is a scarcity of skilled professionals in data science and machine learning. Limited technological infrastructure is another. There are also regulatory and policy gaps to be overcome.

    And ethical concerns cannot be ignored. For example, Kenya’s Huduma Namba national ID system and Nigeria’s telecommunication companies have been criticised for inadequate data protection. They’ve also been accused of using AI technology for surveillance.

    In South Africa, a 2021 lawsuit took on Facebook for allegedly violating users’ privacy rights. African countries need to allay people’s very valid concerns about ethics and data privacy in election technology. Part of doing so involves the development of robust normative, institutional and collaborative frameworks to govern the use of AI in fair, transparent and accountable ways. African states must seek to exercise sovereignty on AI systems – that is, they need to develop their own systems, fit for local purposes, rather than just importing systems from elsewhere.

    The frameworks we’re describing should include clear guidelines to promote African cultural values that protect human rights. They must also be designed to prevent the misuse of AI for electoral manipulation or suppression of political opposition.

    Public trust in AI systems can also be built in a number of ways. These include public awareness and education campaigns. Transparency and accountability mechanisms that impose sanctions and provide remedies when breaches of trust and law occur are also crucial.

    Examples exist

    Several initiatives already exist from which the kind of frameworks we describe can be drawn. One example is the Association of African Electoral Authorities’ Principles and Guidelines for the Use of Digital and Social Media in Elections in Africa.

    A number of African countries are already working to address the challenges and opportunities presented by AI and to develop appropriate governance mechanisms. Egypt’s National Council for Artificial Intelligence and Kenya’s Distributed Ledger and Artificial Intelligence Taskforce are examples of ongoing initiatives from which other countries’ electoral bodies can learn.

    Overall, solid governance will be crucial for the successful integration of AI systems in promoting the legitimacy of African political processes.The Conversation

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

    Main photo by Freepik.


    Land Use and Environmental Degradation in Mauritius: Interrogating the Governance Framework


    Original research by:

    Xavier G.H. Koenig,  Doctoral Student, Université des Mascareignes

    Prakash N.K. Deenapanray, Adjunct Professor, Université des Mascareignes

    The global environmental crisis is getting worse, with more biodiversity loss, deforestation, and ecosystem degradation. According to the Millennium Ecosystem Assessment (2005), over 60% of the world’s ecosystems are being used unsustainably, causing irreversible damage to biodiversity and natural resources. In response, international frameworks such as the UN Sustainable Development Goals (SDGs) and the Kunming-Montreal Global Biodiversity Framework (KMGBF) aim to halt biodiversity loss, calling for land use policies that safeguard ecosystems.

    In this global context, Mauritius is facing its own challenges in domesticating these frameworks to fight environmental degradation. Inadequate governance and a model that prioritises financial gains and economic growth over sustainability is preventing the responsible management of land use on the island.

    The Extent of Desertification in Mauritius

    Mauritius, one of the most ecologically devastated islands in the world, has lost much of its natural forest cover, with less than 1.3% of indigenous forest remaining. Over 80% of the island is privately owned, but this tenure security has not translated into ecosystem protection. Despite the introduction of a robust land tenure system, the country’s biodiversity continues to decline, largely driven by land use changes such as real estate development. Unchecked land use changes, including deforestation and wetland backfilling, are leading to degradation and raise questions about governance. Without a solid and effective governance structure, these changes threaten the environment and the long-term well-being of the population.

    The paper investigates the governance architecture underlying land use and environmental degradation in Mauritius and how it is contributing to unsustainable land use practices. It aims to explore how policy, legislation, and institutional frameworks in Mauritius affect land use and environmental outcomes.

    How it was researched:

    The authors conducted semi-structured interviews with 20 key informants from the government, civil society, private business, and research sectors. They corroborated these interviews with a review of relevant documentation, including policy papers, government reports, and legal texts.

    Mauritius’ Governance Framework: Limitations and Gaps

    The degradation of Mauritius’ ecosystems can be attributed to weaknesses in the governance framework affecting land use and the environment. While the country has developed an ambitious National Development Strategy (NDS), this plan has largely remained on paper, with key components not being effectively implemented.

    The NDS, designed to facilitate sustainable development, frequently becomes secondary in priority when confronted with economic pressures that promise immediate financial returns. As a result, land use planning operates in a fragmented manner, leading to inconsistencies between policies, regulations, and development practices.

    The Environmental Impact Assessment (EIA) is a formal process used to evaluate the potential environmental effects of a proposed development project in Mauritius before it is approved, ensuring that environmental factors are considered in decision-making. The EIA system is an integral component of Mauritius’ governance. While EIAs are legally required for most developments, they are often seen as a formality rather than a safeguard. The research found that EIAs typically assess projects in isolation, without considering the cumulative environmental impact of developments on a broader scale. In addition, the public has only 10 to 21 days to comment on EIA applications, which respondents argued is insufficient time to fully assess the potential impact of a project. The short consultation period makes it hard for the public to engage and provide sufficient evidence-based feedback in the environmental decision-making process.

    The Role of Environmentally Sensitive Areas (ESAs)

    Environmentally Sensitive Areas (ESAs) in Mauritius, including wetlands and forests, are critical in supporting biodiversity. However, these areas face increasing encroachment due to socio-economic pressures. While the ESA policy was drafted in 2009, it has never been fully implemented. Developers continue to push projects that degrade these areas, taking advantage of weak enforcement and vague legal protections. The 358-hectare Roches Noires Smart City project, for example, would encroach on wetlands and other sensitive ecosystems, sparking public and scientific outcry.

    Financial Maximisation over Sustainability

    From these analyses of land use and environmental governance and process, the paper finds that Mauritius has historically prioritised economic growth over environmental sustainability, a pattern which appears exacerbated today. The government’s current development model, which promotes large-scale real estate projects, tourism, and foreign direct investment (FDI), often comes at the expense of the natural environment. The Economic Development Board (EDB), tasked with attracting FDI, is perceived to have fast-tracked projects and influenced the stringency of the EIA process. This approach would align with what researchers term the “reinforcing economic optimisation” model (where policies are designed to manage environmental impacts while promoting economic growth). However, results suggest that the governance framework is going further than optimisation, that is, pushing for financial maximisation at the expense of ecosystems and nature.

    This focus on economic growth results in trade-offs that consistently favour development over conservation. For instance, the failure to implement Strategic Environmental Assessments (SEAs) at a broader regional scale has limited the government’s ability to manage cumulative environmental impacts, a shortcoming now addressed in the Environment Act 2024. Similarly, key informants argue that developers can easily bypass environmental safeguards, including EIAs, by splitting large projects into smaller ones to avoid stricter regulations.

    Proposing a Shift in Mauritius’ Economic Model

    The authors suggest a shift in Mauritius’ economic model to balance environmental and economic goals. One proposal is to integrate natural capital accounting into decision-making. This approach would quantify the contribution of natural capital to the economy, national wealth, and the public good. Such accounting (and scenario modelling of various development pathways) would inform national planning and decisions that do not compromise natural assets for short-term gains and move the country towards nature-positive outcomes, in accordance with the Kunming-Montreal Global Biodiversity Framework.

    Additionally, this approach would move Mauritius closer to a well-being economy rather than one focused solely on GDP growth, an indicator often misused as a measure of how well communities and countries are faring and prospering. By integrating the contribution of nature into national wealth creation, a well-being economy makes explicit the significant contribution of nature to human well-being through numerous pathways. This could help Mauritius align its policies with long-term sustainability goals. A well-being economy prioritises environmental health, community resilience and equity, recognising that ecosystems are vital to human health and quality of life. It also prioritises longer-term over short-term thinking. This shift would require reviewing current subsidies and perverse incentives that promote unsustainable land use, and redirecting them towards conservation and sustainable development.

    Conclusion

    Mauritius’ current governance framework falls short of addressing the complex challenge of environmental degradation. While the island has secure land tenure systems and a well-developed policy framework, the maximisation of financial returns over environmental sustainability has led to unchecked degradation. To reverse this trend, Mauritius must review its economic model and integrate natural capital accounting, ensuring that environmental costs are fully considered in land use decisions. In addition, the public must be given reinforced means of participation at multiple stages of the policy and decision-making processes (the integrated policy cycle) to secure its ability to effectively protect the country’s natural heritage and the ecosystem services it provides, all of which are common goods.


    KEY FINDINGS

      1. Weak Commons and Compromise Approaches: Mauritius’ governance structure prioritises financial maximisation over ecosystem preservation. This biases decisions, enabling systematic land artificialisation, affecting ecosystem services and potentially encroaching on environmentally sensitive areas (ESAs).
      2. Lack of integration of ecosystems in land use planning: Despite the recognised role of natural ecosystems in citizen well-being, governance frameworks in Mauritius fail to quantifiably prioritise ecosystem protection in land use planning.
      3. Ineffective Environmental Impact Assessments (EIAs): While EIAs are mandatory for major developments, the process is often perceived as inadequate. It focuses on individual projects without considering cumulative environmental impacts and constrains the ability of citizens to meaningfully challenge land use change decisions. Of note, the newly reintroduced SEA (Environment Act 2024) provides for the assessment of cumulative impacts.
      4. Financial maximisation at the expense of nature: Informants highlighted that economic growth often trumps environmental considerations. This is evident in the apparent government pressure to approve projects that bring foreign direct investment (FDI), sometimes at the expense of environmental sustainability.
      5. Challenges with Environmental Policy Implementation: The existing policies and strategies, such as the National Development Strategy or National Biodiversity Strategy and Action Plan (NBSAP), are often not effectively implemented, monitored and evaluated, and thus fail to halt continued ecosystem decline.

    OTHER RESEARCH BY THE AUTHORS:

    Koenig, X.G.H., Deenapanray, P.N.K., Weber, J.L., Rakotondraompiana, S., Ramihangihajason, T.A., 2024. Are neutrality targets alone sufficient for protecting nature? Learning from land cover change and Land Degradation Neutrality targets in Mauritius. Land Degradation and Development (in press).
    Amode, N., Deenapanray, P.N.K., Khadoo, P., 2024. An Analysis of the Climate Change Mitigation Potential of Solid Waste Management Scenarios for the Small Island Developing State of Mauritius from a Life Cycle Sustainability Perspective. Materials Circular Economy 6:60. https://doi.org/10.1007/s42824-024-00153-6
    Deenapanray, P.N.K., 2022. Reflections on the practice of CSR for poverty alleviation in Mauritius. Academia Letters. DOI: 10.20935/AL5929

    Original paper: Land use and environmental degradation in the island state of Mauritius: Governance and problem conceptions – ScienceDirect

    Main Photo by Johann Juraver on Unsplash

    Charles Telfair Centre is an independent nonpartisan not for profit organisation and does not take specific positions. All views, positions, and conclusions expressed in our publications are solely those of the author(s).

    Online spaces are rife with toxicity. Well-designed AI tools can help clean them up.


    Lucy Sparrow, Lecturer in Human-Computer Interaction, The University of Melbourne

    Eduardo Oliveira, Senior Lecturer in Software Engineering, The University of Melbourne

    Mahli-Ann Butt, Lecturer, Cultural Studies, The University of Melbourne

    Imagine scrolling through social media or playing an online game, only to be interrupted by insulting and harassing comments. What if an artificial intelligence (AI) tool stepped in to remove the abuse before you even saw it?

    This isn’t science fiction. Commercial AI tools like ToxMod and Bodyguard.ai are already used to monitor interactions in real time across social media and gaming platforms. They can detect and respond to toxic behaviour. The idea of an all-seeing AI monitoring our every move might sound Orwellian, but these tools could be key to making the internet a safer place.

    However, for AI moderation to succeed, it needs to prioritise values like privacy, transparency, explainability and fairness. So can we ensure AI can be trusted to make our online spaces better? Our two recent research projects into AI-driven moderation show this can be done – with more work ahead of us.

    Negativity thrives online

    Online toxicity is a growing problem. Nearly half of young Australians have experienced some form of negative online interaction, with almost one in five experiencing cyberbullying. Whether it’s a single offensive comment or a sustained slew of harassment, such harmful interactions are part of daily life for many internet users.

    The severity of online toxicity is one reason the Australian government has proposed banning social media for children under 14. But this approach fails to fully address a core underlying problem: the design of online platforms and moderation tools. We need to rethink how online platforms are designed to minimise harmful interactions for all users, not just children.

    Unfortunately, many tech giants with power over our online activities have been slow to take on more responsibility, leaving significant gaps in moderation and safety measures. This is where proactive AI moderation offers the chance to create safer, more respectful online spaces. But can AI truly deliver on this promise? Here’s what we found.

    ‘Havoc’ in online multiplayer games

    In our Games and Artificial Intelligence Moderation (GAIM) Project, we set out to understand the ethical opportunities and pitfalls of AI-driven moderation in online multiplayer games. We conducted 26 in-depth interviews with players and industry professionals to find out how they use and think about AI in these spaces.

    Interviewees saw AI as a necessary tool to make games safer and combat the “havoc” caused by toxicity. With millions of players, human moderators can’t catch everything. But an untiring and proactive AI can pick up what humans miss, helping reduce the stress and burnout associated with moderating toxic messages.

    But many players also expressed confusion about the use of AI moderation. They didn’t understand why they received account suspensions, bans and other punishments, and were often left frustrated that their own reports of toxic behaviour seemed to be lost to the void, unanswered.

    Participants were especially worried about privacy in situations where AI is used to moderate voice chat in games. One player exclaimed: “my god, is that even legal?” It is – and it’s already happening in popular online games such as Call of Duty.

    Our study revealed there’s tremendous positive potential for AI moderation. However, games and social media companies will need to do a lot more work to make these systems transparent, empowering and trustworthy.

    Right now, AI moderation is seen to operate much like a police officer in an opaque justice system. What if AI instead took the form of a teacher, guardian, or upstander – educating, empowering or supporting users?

    Enter AI Ally

    This is where our second project, AI Ally, comes in: an initiative funded by the eSafety Commissioner. In response to high rates of tech-based gendered violence in Australia, we are co-designing an AI tool to support girls, women and gender-diverse individuals in navigating safer online spaces.

    We surveyed 230 people from these groups, and found that 44% of our respondents “often” or “always” experienced gendered harassment on at least one social media platform. It happened most frequently in response to everyday online activities like posting photos of themselves, particularly in the form of sexist comments.

    Interestingly, our respondents reported that documenting instances of online abuse was especially useful when they wanted to support other targets of harassment, such as by gathering screenshots of abusive comments. But only a few of those surveyed did this in practice. Understandably, many also feared for their own safety should they intervene by defending someone or even speaking up in a public comment thread.

    These are worrying findings. In response, we are designing our AI tool as an optional dashboard that detects and documents toxic comments. To help guide us in the design process, we have created a set of “personas” that capture some of our target users, inspired by our survey respondents.

    Some of the user ‘personas’ guiding the development of the AI Ally tool. Ren Galwey/Research Rendered

    We allow users to make their own decisions about whether to filter, flag, block or report harassment in efficient ways that align with their own preferences and personal safety. In this way, we hope to use AI to offer young people easy-to-access support in managing online safety while offering autonomy and a sense of empowerment.

    We can all play a role

    AI Ally shows we can use AI to help make online spaces safer without having to sacrifice values like transparency and user control. But there is much more to be done. Other, similar initiatives include Harassment Manager, which was designed to identify and document abuse on Twitter (now X), and HeartMob, a community where targets of online harassment can seek support.

    Until ethical AI practices are more widely adopted, users must stay informed. Before joining a platform, check if they are transparent about their policies and offer user control over moderation settings.

    The internet connects us to resources, work, play and community. Everyone has the right to access these benefits without harassment and abuse. It’s up to all of us to be proactive and advocate for smarter, more ethical technology that protects our values and our digital spaces.


    The AI Ally team consists of Dr Mahli-Ann Butt, Dr Lucy Sparrow, Dr Eduardo Oliveira, Ren Galwey, Dahlia Jovic, Sable Wang-Wills, Yige Song and Maddy Weeks.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

    Main Photo by Freepik.

    Charles Telfair Centre is an independent nonpartisan not for profit organisation and does not take specific positions. All views, positions, and conclusions expressed in our publications are solely those of the author(s). 

    Data centre emissions are soaring – it’s AI or the climate


    Jack Marley, Environment + Energy Editor, The Conversation

    Artificial intelligence (AI) is curating your social media feed and giving you directions to the train station. It’s also throwing the fossil fuel industry a lifeline. Three of the biggest tech companies, Microsoft, Google and Meta, have reported ballooning greenhouse gas emissions since 2020. Data centres packed with servers running AI programs day and night are largely to blame.

    AI models consume a lot of electricity, and the World Economic Forum estimated in April that the computer power dedicated to AI is doubling every 100 days. Powering this boom in the US, where many AI tech pioneers are based, have been revitalised gas power plants once slated for closure.

    First, what actually is AI?

    AI sucks (power and water)

    “At its core, the kind of AI we are seeing in consumer products today identifies patterns,” say Sandra Peter and Kai Riemer, computing experts at the University of Sydney. “Unlike traditional coding, where developers explicitly program how a system works, AI ‘learns’ these patterns from vast datasets, enabling it to perform tasks.”

    While AI programs are being “trained” – fed huge amounts of data over several weeks and months – data processors run 24/7. Once up to speed, an AI can use 33 times more energy to complete a task than traditional software.

    Data centre energy usage has exploded to keep up with the growth of AI. Dil_Ranathunga/Shutterstock

    In fact, a single query to an AI-powered chatbot can consume ten times as much energy as a traditional Google search, according to Gordon Noble and Fiona Berry, sustainability researchers at the University of Technology Sydney.

    “This enormous demand for energy translates into surges in carbon emissions and water use, and may place further stress on electricity grids already strained by climate change,” they say.

    Data centres are thirsty as well as power-hungry: millions of litres of water have to be pumped to keep them cool. These enormous server warehouses are vying with people for an increasing share of power and water, a situation which could prove deadly during a heatwave or drought.

    A dubious solution

    Experts only have a partial picture of AI’s resource diet, Noble and Berry argue. One survey showed that just 5% of sustainability professionals in Australia believed data centre operators provided detailed information about their environmental impact. Its fierce appetite aside, AI is feted as a Swiss army knife of fixes for our ailing planet.

    AI’s ability to process mountains of data means it could spot the warning signs of a building storm or flood and track how the environment is changing, say Ehsan Noroozinejad and Seyedali Mirjalili, AI experts at Western Sydney University and Torrens University Australia respectively. “For example, it can reportedly measure changes in icebergs 10,000 times faster than a human can,” they add.

    Kirk Chang and Alina Vaduva, management experts at the University of East London, highlight hopes that AI might make simulations of Earth’s climate more accurate. AI could closely monitor an entire electricity grid and coordinate generators so that they waste less energy while meeting demand. AI models could identify materials for sorting in a recycling facility and analyse air pollution to pinpoint its sources. On farms, AI systems could track weather and soil conditions to ensure crops receive only as much water as they need.

    However, AI’s claims to efficiency are sadly undermined by a well-worn problem known as the rebound effect. When humanity makes an activity more efficient through innovation, the energy or resource savings are generally ploughed into expanding that activity or others.

    “The convenience of an autonomous vehicle may increase people’s travel and in a worst-case scenario, double the amount of energy used for transport,” says Felippa Amanta, a PhD candidate in digital technologies and climate change.

    And while there is value in imagining what AI might help us do, it is important to recognise what it is already doing. An investigation by Scientific American found AI was deployed in oil extraction in 2019 to substantially increase production. Elsewhere, targeted advertising that uses AI creates demand for material goods. More mass-produced stuff, more emissions.

    Does our answer to climate change need to be high-tech? During a climate disaster like Hurricane Helene, which claimed more than 150 lives in the south-eastern US over the weekend, a reliable power supply is often the first thing to go. AI can be of little help in these circumstances.

    Low-tech solutions to life’s problems are generally more resilient and low carbon. Indeed, most of them – like fruit walls, which used renewable energy to grow Mediterranean produce in England as early as the Middle Ages – have been around for a very long time.

    “‘Low-tech’ does not mean a return to medieval ways of living. But it does demand more discernment in our choice of technologies – and consideration of their disadvantages,” says Chris McMahon, an engineering expert at the University of Bristol.

    “What’s more, low-tech solutions often focus on conviviality. This involves encouraging social connections, for example through communal music or dance, rather than fostering the hyper-individualism encouraged by resource-hungry digital devices.”

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

    Main Photo by ThisIsEngineering: https://www.pexels.com/photo/code-projected-over-woman-3861969/

    Charles Telfair Centre is an independent nonpartisan not for profit organisation and does not take specific positions. All views, positions, and conclusions expressed in our publications are solely those of the author(s). 

    The United Nations has a plan to govern AI – but has it bought the industry’s hype?


    Zena Assaad, Senior Lecturer, School of Engineering, Australian National University

    The United Nations Secretary-General’s Advisory Board on Artificial Intelligence (AI) has released its final report on governing AI for humanity. The report presents a blueprint for addressing AI-related risks while still enabling the potential of this technology. It also includes a call to action for all governments and stakeholders to work together in governing AI to foster development and protection of all human rights.

    On the surface, this report seems to be a positive step forward for AI, encouraging developments while also mitigating potential harms. However, the finer details of the report expose a number of concerns.

    Reminiscent of the IPCC

    The UN advisory board on AI was first convened on October 26, 2023. The purpose of this committee is to advance recommendations for the international governance of AI. It says this approach is needed to ensure the benefits of AI, such as opening new areas of scientific inquiry, are evenly distributed, while the risks of this technology, such as mass surveillance and the spread of misinformation, are mitigated.

    The advisory board consists of 39 members from a diversity of regions and professional sectors. Among them are industry representatives from Microsoft, Mozilla, Sony, Collinear AI and OpenAI. The committee is reminiscent of the UN’s Intergovernmental Panel on Climate Change (IPCC) which aims to provide key input into international climate change negotiations.

    The inclusion of prominent industry representatives in the advisory board on AI is a point of difference from the IPCC. This may have advantages, such as a more informed understanding of AI technologies. But it may also have disadvantages, such as biased viewpoints in favour of commercial interests. The recent release of the final report on governing AI for humanity provides a vital insight into what we can likely expect from this committee.

    What’s in the report?

    The final report on governing AI for humanity follows an interim report released in December 2023. It proposes seven recommendations for addressing gaps in current AI governance arrangements.

    These include the creation of an independent international scientific panel on AI, the creation of an AI standards exchange and the creation of a global AI data framework. The report also ends with a call to action for all governments and relevant stakeholders to collectively govern AI.

    What’s disconcerting about the report are the imbalanced and at times contradictory claims made throughout. For example, the report rightly advocates for governance measures to address the impact of AI on concentrated power and wealth, geopolitical and geoeconomic implications. However, it also claims that:

    “No one currently understands all of AI’s inner workings enough to fully control its outputs or predict its evolution.”

    This claim is not factually correct on many accounts. It is true that there are some “black box” systems – those in which the input is known, but the computational process for generating outputs is not. But AI systems more generally are well understood on a technical level.

    AI reflects a spectrum of capabilities. This spectrum ranges from generative AI systems such as ChatGPT, through to deep learning systems such as facial recognition. The assumption that all these systems embody the same level of impenetrable complexity is not accurate.

    The inclusion of this claim calls into question the advantages of including industry representatives in the advisory board, as they should be bringing a more informed understanding of AI technologies.

    The other issue this claim raises is the notion of AI evolving of its own accord. What has been interesting about the rise of AI over recent years is the accompanying narratives which falsely position AI as a system of agency. This inaccurate narrative shifts perceived liability and responsibility away from those who design and develop these systems, providing a creative scapegoat for industry.

    Despite the subtle undertone of powerlessness in the face of AI technologies and the imbalanced claims made throughout, the report does positively progress the discourse in some ways.

    A small step forward

    Overall, the report and its call to action are a positive step forward because they emphasise that AI can be governed and regulated, despite contradictory claims throughout the report which imply otherwise. The inclusion of the term “hallucinations” is a salient example of these contradictions.

    The term itself was popularised by OpenAI’s chief executive Sam Altman, who used it to reframe nonsensical outputs as part of the “magic” of AI. “Hallucinations” is not a technically accepted term – it’s a creative marketing agenda. Pushing for governance of AI while simultaneously endorsing a term which implies a technology that cannot be governed is not constructive.

    What the report lacks is consistency in how AI is perceived and understood. It also lacks application specificity – a common limitation among many AI initiatives. A global approach to AI governance will only work if it is able to capture the nuances of application and domain specificity. The report is a step in the right direction. However, it will need refinement and amendments to ensure it encourages developments while mitigating the many harms of AI.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

    Charles Telfair Centre is an independent nonpartisan not for profit organisation and does not take specific positions. All views, positions, and conclusions expressed in our publications are solely those of the author(s).

    Wondering what AI actually is? Here are the 7 things it can do for you


    Sandra Peter, Director of Sydney Executive Plus, University of Sydney and Kai Riemer, Professor of Information Technology and Organisation, University of Sydney.

    You know we’ve reached peak interest in artificial intelligence (AI) when Oprah Winfrey hosts a television special about it. AI is truly everywhere. And we will all have a relationship with it – whether using it, building it, governing it or even befriending it.

    But what exactly is AI? While most people won’t need to know exactly how it works under the hood, we will all need to understand what it can do. In our conversations with global leaders across business, government and the arts, one thing stood out – you can’t fake it anymore. AI fluency, that is.

    AI isn’t just about chatbots. To help understand what it is about, we’ve developed a framework which explains the broad range of capabilities it offers. We call this the “capabilities stack”.

    We see AI systems as having seven basic kinds of capability, each building on the ones below it in the stack. From least complex to most, these are: recognition, classification, prediction, recommendation, automation, generation and interaction.

    Recognition

    At its core, the kind of AI we are seeing in consumer products today identifies patterns. Unlike traditional coding, where developers explicitly program how a system works, AI “learns” these patterns from vast datasets, enabling it to perform tasks. This “learning” is essentially just advanced mathematics that turns patterns into complex probabilistic models – encoded in so-called artificial neural networks.
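    To make the idea of “learning” patterns concrete, here is a deliberately tiny, illustrative sketch in Python (our own toy example, not code from any real AI product): a single artificial neuron starts with no rule programmed in, and adjusts its weights from labelled examples until it recognises a simple pattern – in this case, that the output should be “on” only when both inputs are “on”.

```python
import math

# Toy illustration: a single artificial "neuron" learns a pattern
# from examples instead of being explicitly programmed with a rule.
# Real AI systems chain millions of such units together.

def sigmoid(z):
    """Squash any number into the range (0, 1) -- a probability-like score."""
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, epochs=5000, lr=0.5):
    """Nudge the weights a little on every example, thousands of times."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = sigmoid(w1 * x1 + w2 * x2 + b)
            err = pred - label          # how wrong the neuron currently is
            w1 -= lr * err * x1         # adjust each weight to reduce the error
            w2 -= lr * err * x2
            b -= lr * err
    return w1, w2, b

def recognises(w1, w2, b, x1, x2):
    """After training, does the neuron say this input matches the pattern?"""
    return sigmoid(w1 * x1 + w2 * x2 + b) > 0.5

# The pattern to learn: "both inputs on" -- never written down as a rule.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train(data)
print([recognises(w1, w2, b, x1, x2) for (x1, x2), _ in data])
# -> [False, False, False, True]
```

    The same mechanism – adjusting weights to reduce error on examples – scaled up to billions of weights and vast datasets is what underlies the face recognition and chatbot systems discussed below.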

    Once learned, patterns can be recognised – such as your face when you open your phone, or when you clear customs at the airport. Pattern recognition is all around us – whether it’s licence plate recognition when you park your car at the mall, or when the police scan your registration. It’s used in manufacturing for quality control to detect defective parts, in health care to identify cancer in MRI scans, and in Sydney to spot potholes using camera-equipped buses that monitor the roads.

    The AI capabilities stack is a framework for understanding how AI is used.
    Sandra Peter & Kai Riemer, CC BY-NC-ND

    Classification

    Once an AI system can recognise patterns, we can train it to detect subtle variations and categorise them. This is how your photo app neatly organises albums by family members, or how apps identify and label different kinds of skin lesions. AI classification is also at work behind the scenes when phone companies and banks identify spam and fraud calls.

    In New Zealand, non-profit organisation Te Hiku developed an AI language model to classify thousands of hours of recordings to help revitalise Te Reo Māori, the local indigenous language.

    Prediction

    When AI is trained on past data, it can be used to predict future outcomes. For example, airlines use AI to predict the estimated arrival times of incoming flights and to assign gates on time so you don’t end up waiting on the tarmac. Similarly, Google Flights uses AI to predict flight delays even before airlines announce them.

    In Hong Kong, an AI prediction model saves taxpayer money by predicting when a project needs early intervention to prevent it overrunning its budget and completion date. And when you buy stuff on Amazon, the ecommerce giant uses AI to predict demand and optimise delivery routes, so you get your packages within hours, not just days.

    Recommendation

    Once we predict, we can make recommendations for what to do next. If you went to Taylor Swift’s Eras tour concert at Sydney’s Accor stadium, you were kept safe thanks to AI recommendations. A system funded by the New South Wales government used data from multiple sources to analyse the movement and mood of the 80,000 strong crowd, providing real-time recommendations to ensure everyone’s safety.

    AI-based recommendations are everywhere. Social media, streaming platforms, delivery services and shopping apps all use past behaviour patterns to present you with their “for you” pages. Even pig farms use pig facial recognition and tracking to alert farmers to any issues and recommend particular interventions.

    Automation

    It’s a small step from prediction and recommendation to full automation. In Germany, large wind turbines use AI to keep the lesser spotted eagle safe. An AI algorithm detects approaching birds and automatically slows down the turbines allowing them to pass unharmed.

    Closer to home, Melbourne Water uses AI to autonomously regulate its pump control system to reduce energy costs by around 20% per year. In Western Sydney, local buses on key routes are AI-enabled: if a bus is running late, the system predicts its arrival at the next intersection and automatically green-lights its journey.

    Generation

    Once we can encode complex patterns into neural networks, we can also use these patterns to generate new, similar ones. This works with all kinds of data – images, text, audio and video.

    Image generation is now built into many new phones. Don’t like the look on someone’s face? Change it into a smile. Want a boat on that lake? Just add it in. And it doesn’t stop there.

    Tools such as Runway let you manipulate videos or create new ones with just a text prompt. ElevenLabs allows you to generate synthetic voices or digitise existing ones from short recordings. These can be used to narrate audiobooks, but also carry risks such as deepfake impersonation.

    And we haven’t even mentioned large language models such as ChatGPT, which are transforming how we work with text and how we develop computer code. Research by McKinsey found that these models can cut the time required for complex coding tasks by up to 50%.

    Interaction

    Finally, generative AI also makes it possible to mimic human-like interactions. Soon, virtual assistants, companions and digital humans will be everywhere. They will attend your Zoom meeting to take notes and schedule follow-up meetings.

    Interactive AI assistants, such as IBM’s AskHR bot, will answer your HR questions. And when you get home, your AI friend app will entertain you, while digital humans on social media are ready to sell you anything, any time. And with voice mode activated, even ChatGPT gets in on the interaction.

    Amid the excitement around generative AI, it is important to remember that AI is more than chatbots. It impacts many things beyond the flashy conversational tools – often in ways that quietly improve everyday processes.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

    Main photo by Tara Winstead: https://www.pexels.com/photo/robot-pointing-on-a-wall-8386440/

    Charles Telfair Centre is an independent, non-partisan, not-for-profit organisation and does not take specific positions. All views, positions, and conclusions expressed in our publications are solely those of the author(s).

    Freedom for Chagos Islands: UK’s deal with Mauritius will be a win for all


    Peter Harris, Associate Professor of Political Science, Colorado State University

    Britain is close to resolving its territorial dispute with Mauritius over the Chagos Archipelago, located in the central Indian Ocean. For years, Mauritius has claimed the island group as part of its sovereign territory. It says that Britain unlawfully detached the islands from Mauritius in 1965, three years before Mauritius gained independence. The Mauritian position is backed by international courts and the United Nations, creating enormous pressure for Britain to decolonise.

    London, however, has been reluctant to abandon the Chagos Archipelago. This is because the largest island, Diego Garcia, is the site of a strategically important US military base. Britain pledged to make Diego Garcia available to its American ally and has been anxious to avoid a situation where it is prevented from making good on these promises. The US, for its part, has declined to become publicly involved in the dispute. Its private position is merely that the base on Diego Garcia should not be placed in jeopardy.

    In a deal announced in a joint statement, London and Port Louis have agreed that all but one of the Chagos Islands will be returned to Mauritian control as soon as a treaty can be finalised. This comes after nearly two years of intense negotiations. It seems settling the dispute was a top priority for Britain’s new Labour government. Though the deal isn’t done yet, it is expected to go through. Both Britain and Mauritius, along with the White House, have endorsed the agreement, indicating that the toughest negotiations are complete.

    Diego Garcia will remain under British administration for at least 99 years – this time with the blessing of Mauritius – enabling Britain to continue furnishing the US with unfettered access to its military base on the island.

    In exchange for permission to continue operating on Diego Garcia, Britain will provide “a package of financial support” to Mauritius. The exact sums of money have not been disclosed but will include an annual payment from London to Port Louis. Both sides will cooperate on environmental conservation, issues relating to maritime security, and the welfare of the indigenous Chagossian people – including the limited resettlement of Chagossians onto the outer Chagos Islands under Mauritian supervision.

    I’ve studied the Chagos Islands for 15 years, first as a master’s student and now as a professor. It often looked as though this day would never come. The deal that’s been announced is a good one – a rare “win-win-win-win” moment in international relations, with all the relevant actors able to claim a meaningful victory: Britain, Mauritius, the US, and the Chagossians.

    Win for Britain

    Britain went into these negotiations with one goal in mind: to bring itself into alignment with international law. London suffered humiliating setbacks at the Permanent Court of Arbitration in 2015, concerning the legality of its Chagos marine protected area; at the International Court of Justice in 2019, when the World Court found that Mauritius was sovereign over the archipelago; and at the UN General Assembly that same year, when a whopping 116 governments called on Britain to exit the Chagos Islands.

    Mauritian sovereignty over the Chagos group had even begun to be inscribed into international case law. London could probably have defied international opinion if it had wanted to. Nobody would have forced Britain to halt its illegal occupation of the Chagos Archipelago. But such a course would have badly undermined Britain’s global reputation and its ability to criticise others for breaches of international law. This agreement will give Britain exactly what it wanted: a continued presence on Diego Garcia that conforms with international law.

    Win for Mauritius

    Mauritius, of course, went into these negotiations intent on securing full decolonisation at long last. Britain and the US now recognise that the Chagos Archipelago belongs to Mauritius.

    Mauritius will not have day-to-day control of Diego Garcia, but it will be acknowledged as being sovereign there. The public description of the agreement also doesn’t seem to prohibit Mauritius from exercising its sovereignty over Diego Garcia as it relates to non-military domains.

    Win for the US

    The US is another clear winner from the deal. In fact, hardly anything will change for America. Washington will continue working closely with London, and will not need to negotiate an agreement with Mauritius on its rights to the base or the status of forces.

    Indeed, Pentagon officials should be thrilled that their base on Diego Garcia has been put on firm legal footing. This is something that Britain alone was unable to offer. The bilateral agreement with Mauritius will ensure the security of the base for 99 years – no small feat.

    Good for Chagos Islanders

    Finally, the deal is good for the Chagos Islanders.

    British agents forcibly depopulated the entire Chagos group between 1965 and 1973. The point was to rid the archipelago of its permanent population so that the US base on Diego Garcia would operate far from prying eyes. Britain deported the Chagossians to Mauritius and the Seychelles, which is where most Chagossians and their descendants still live. Some have migrated onwards, including to Britain.

    Britain had long opposed the resettlement of the Chagos group by the exiled Chagossians. Mauritius, on the other hand, has indicated its openness to resettlement of the Outer Chagos Islands – so, not Diego Garcia – something that Port Louis is now free to pursue.

    Not all islanders have welcomed news of an agreement. The Chagossians are a large and diverse group, with differing views about how their homeland should be governed. Some would have preferred Britain to administer the entire archipelago long into the future, feeling that Mauritius was an unwelcoming host to the exiled Chagossians. But Britain could not hold onto the Chagos Islands forever – at least, not lawfully. For their part, the largest Chagossian organisations are content with the deal as it has been announced and will now work with Mauritius on a resettlement plan.

    The critics

    This is the first instance of decolonisation that London has attempted since returning Hong Kong to China in 1997. Predictably, some in Britain are opposed to the settlement. Some accuse the Keir Starmer government of “giving up” the Chagos Archipelago. But the islands were never Britain’s to give up – they were always Mauritian sovereign territory, and Britain was an unlawful occupier.

    They are also wrong to blame this deal for jeopardising the base on Diego Garcia. The opposite is true: for better or worse, the agreement will resolve any uncertainty about the US base’s future. It will have total legal security.

    Finally, critics are grasping at straws when they raise the prospect of Mauritius permitting a Chinese base in the Chagos Archipelago. This is a baseless smear. There is no indication whatsoever that Port Louis has any interest in hosting the Chinese military.

    What happens now?

    Britain and Mauritius still need to reveal the text of their bilateral treaty. But the deal is highly unlikely to fall through. Both governments, plus the White House, have welcomed the agreement – a sure sign that the hard work of negotiations is over.

    All that remains is for the treaty to be ratified – a process that does not require a parliamentary vote in the House of Commons. There is no reason why this cannot be done quickly. This could be the end of a shameful saga that went on for too long.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

    Main photo from Nasa Johnson on flickr.
