For more than 200,000 years, humans have built solutions to the challenges we face and shared our knowledge with each other. AI could continue this trend, complementing human capabilities and enabling us to unleash our full potential, but the technology is developing in a different direction.
I was fortunate to participate in the recent AI Action Summit in Paris, where many discussions emphasized the need to steer AI in a more socially beneficial direction. At a time of increasingly loud calls for AI acceleration from Silicon Valley – and now from the US government – the opportunity to focus on what we want from the technology was like a breath of fresh air.
Though the human journey has not always been smooth – our capabilities, machines, and knowledge sometimes cause profound harms – constant inquiry and prolific sharing of information are essential to what we are.
While economic development has created tremendous inequality between and within countries, people almost everywhere today are healthier and more prosperous than they would have been in the eighteenth century. AI could invigorate this trend by complementing human skills, talents, and knowledge, improving our decision-making, experimentation, and application of useful knowledge.
It is relevant practical knowledge, not mere information, that makes factory workers more productive; enables electricians to handle new equipment and perform more sophisticated tasks; helps nurses play a more critical decision-making role in health care; and generally allows workers of all skills and backgrounds to fill new and more productive roles.
AI, properly developed and used, can indeed make us better – not just by providing “a bicycle for the mind,” but by truly expanding our ability to think and act with greater understanding, independent of coercion or manipulation.
Yet owing to its profound potential, AI also represents one of the gravest threats that humanity has ever faced. The risk is not only (or even mainly) that superintelligent machines will someday rule over us; it is that AI will undermine our ability to learn, experiment, share knowledge, and derive meaning from our activities. AI will greatly diminish us if it ceaselessly eliminates tasks and jobs; overcentralizes information and discourages human inquiry and experiential learning; empowers a few companies to rule over our lives; and creates a two-tier society with vast inequalities and status differences. It may even destroy democracy and human civilization as we know it.
I fear this is the direction we are heading in. But nothing is preordained. We can devise better ways to govern our societies, and choose a direction for technology that boosts knowledge acquisition and maximizes human flourishing.
But first, the public must recognize that this socially desirable path is technically feasible. AI will move in a pro-human direction only if technologists, engineers, and executives work together with democratic institutions, and if developers in the United States, Europe, and China listen to the five billion people who live in other parts of the world. We desperately need more thoughtful advice from experts and inspiring leadership from politicians, whose focus should be on incentivizing pro-human AI through policy and regulatory frameworks.
But we also need more than regulation. European society must encourage a more socially beneficial direction for AI, and European leaders will need to invest in the necessary digital infrastructure, design regulations that do not discourage investment or drive away talented AI researchers, and create the kind of financing mechanisms that successful startups need to scale up. Without a robust AI industry of its own, Europe will have little to no influence on the direction of AI globally.
By Daron Acemoglu, a 2024 Nobel laureate in economics and Institute Professor of Economics at MIT