
Robert Muggah, Co-founder and Principal of the SecDev Group and Co-founder of the Igarapé Institute; Member of the World Economic Forum's Global Future Council on Cities of Tomorrow / Project Syndicate

AI And The Global South

Predictive analytics are being developed and deployed at an unprecedented pace and scale, including in developing countries that are still in the midst of their own digital revolutions. Yet, for all the promise that these technologies hold, many risks have yet to receive the attention they deserve

Recent months may well be remembered as the moment when predictive artificial intelligence went mainstream. While prediction algorithms have been in use for decades, the release of applications such as OpenAI’s ChatGPT – and its rapid integration with Microsoft’s Bing search engine – may have opened the floodgates when it comes to user-friendly AI. Within weeks of ChatGPT’s release, it had already attracted 100 million monthly users, many of whom have doubtless already experienced its dark side – from insults and threats to disinformation and a demonstrated ability to write malicious code.

The chatbots that are generating headlines are just the tip of the iceberg. AIs for creating text, speech, art, and video are progressing rapidly, with far-reaching implications for governance, commerce, and civic life. Not surprisingly, capital is flooding into the sector, with governments and companies alike investing in startups to develop and deploy the latest machine-learning tools. These new applications will combine historical data with machine learning, natural language processing, and deep learning to determine the probability of future events.

Crucially, adoption of the new natural language processing and generative AIs will not be confined to the wealthy countries and companies such as Google, Meta, and Microsoft that spearheaded their creation. These technologies are already spreading across low- and middle-income settings, where predictive analytics for everything from reducing urban inequality to addressing food security hold tremendous promise for cash-strapped governments, firms, and NGOs seeking to improve efficiency and unlock social and economic benefits.

The problem, however, is that insufficient attention has been paid to the potential negative externalities and unintended effects of these technologies. The most obvious risk is that unprecedentedly powerful predictive tools will strengthen authoritarian regimes’ surveillance capacity.

One widely cited example is China’s “social-credit system,” which uses credit histories, criminal convictions, online behavior, and other data to assign a score to every person in the country. Those scores can then determine whether someone can secure a loan, access a good school, travel by rail or air, and so forth. Though China’s system is billed as a tool to improve transparency, it doubles as an instrument of social control.

Yet even when used by ostensibly well-intentioned democratic governments, companies focused on social impact, and progressive nonprofits, predictive tools can generate sub-optimal outcomes. Design flaws in the underlying algorithms and biased data sets can lead to privacy breaches and identity-based discrimination. This has already become a glaring issue in criminal justice, where predictive analytics routinely perpetuate racial and socio-economic disparities. For example, an AI system built to help US judges assess the likelihood of recidivism erroneously determined that Black defendants are at far greater risk of re-offending than white ones.

Concerns about how AI could deepen inequalities in the workplace are also growing. So far, predictive algorithms have been increasing efficiency and profits in ways that benefit managers and shareholders at the expense of rank-and-file workers (especially in the gig economy).

In all these examples, AI systems are holding up a funhouse mirror to society, reflecting and magnifying our biases and inequities. As technology researcher Nanjira Sambuli notes, digitalisation tends to exacerbate, rather than ameliorate, pre-existing political, social and economic problems.

The enthusiasm to adopt predictive tools must be balanced against informed and ethical consideration of their intended and unintended effects. Where the effects of powerful algorithms are disputed or unknown, the precautionary principle would counsel against deploying them.

We must not let AI become another domain where decision-makers ask for forgiveness rather than permission. That is why the United Nations High Commissioner for Human Rights and others have called for moratoriums on the adoption of AI systems until ethical and human-rights frameworks have been updated to account for their potential harms.

Crafting the appropriate frameworks will require forging a consensus on the basic principles that should inform the design and use of predictive AI tools. Fortunately, the race for AI has led to a parallel flurry of research, initiatives, institutes, and networks on ethics. And while civil society has taken the lead, intergovernmental entities such as the OECD and UNESCO have also got involved.

The UN has been working on building universal standards for ethical AI since at least 2021. Moreover, the European Union has proposed an AI Act – the first such effort by a major regulator – which would block certain uses (such as those resembling China’s social-credit system) and subject other high-risk applications to specific requirements and oversight.

To date, this debate has been concentrated overwhelmingly in North America and Western Europe. But lower- and middle-income countries have their own baseline needs, concerns, and social inequities to consider. There is ample research showing that technologies developed by and for markets in advanced economies are often inappropriate for less-developed economies.

If the new AI tools are simply imported and put into wide use before the necessary governance structures are in place, they could easily do more harm than good. All these issues must be considered if we are going to devise truly universal principles for AI governance.

Recognizing these gaps, the Igarapé Institute and New America recently launched a new Global Task Force on Predictive Analytics for Security and Development. The task force will convene digital-rights advocates, public-sector partners, tech entrepreneurs, and social scientists from the Americas, Africa, Asia, and Europe, with the goal of defining first principles for the use of predictive technologies in public safety and sustainable development in the Global South.

Formulating these principles and standards is just the first step. The bigger challenge will be to marshal the international, national, and subnational collaboration and coordination needed to implement them in law and practice. In the global rush to develop and deploy new predictive AI tools, harm-prevention frameworks are essential to ensure a secure, prosperous, sustainable, and human-centered future.