Gradual Disempowerment or Golden Age? Rethinking Human Freedom in the Shadow of AGI

Arthur Juliani
9 min read · Feb 11, 2025


(Portrait by Vasily Perov, c. 1872)

A preprint on arXiv has been making the rounds this past week called Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development. As the title suggests, the article explores the ways in which humanity may experience a slow and gradual loss of control over its future as AI (and eventually AGI/ASI) becomes increasingly advanced. This is in contrast to the more flashy and headline-grabbing rapid AI takeover scenario which is so prominent in science fiction. The article argues that as AI becomes increasingly advanced, we will hand over more and more of our economic, governmental, and cultural functions to these systems. In doing so, humans will be giving up more and more control and collective agency, until we eventually find that we have lost it completely. Is it that straightforward though?

According to the authors, what has maintained human agency so far is that humanity as a whole is currently “fairly aligned.” Our existing economic, governmental, and cultural institutions are operated by humans, and so on some level they reflect human values. The authors admit that there is some controversy around this notion of fair alignment. In some ways the history of human civilization, filled as it is with slavery, oppression, genocide, and conflict, is an argument against the proposition. Still, it is clear that human institutions at the very least empower humanity as a whole (tautologically so). While I agree with the authors that a gradual disempowerment of human institutions due to AI development is likely, I think it is just as likely that individual human empowerment may be drastically increased at the same time. Notably, this seemingly paradoxical situation is in some ways the inverse of the trend that has held for most of human civilization’s history.

It is incontestable that technological development throughout history has enabled civilizations as a whole to gain more and more empowerment over the forces of nature. We can build structures which withstand hurricanes, earthquakes, tornadoes, and extreme temperatures. We can predict the weather anywhere in the world over a week in advance. We can fly people en masse from one side of the world to another in less than a day; and we can build towers which reach up into the heavens. We can cure diseases which have plagued humanity for centuries, and in our most symbolic triumph, we proved capable of sending people to the moon. All of this enables humanity as a whole to deal with problems and threats to our collective survival and flourishing in ways that would have seemed like magic a few centuries ago.

However, if we zoom in from the civilizational level to the personal or group level, the track record for maintaining and encouraging human agency is much more complicated. Over the past two centuries, significant progress has been made in enfranchising more people into collective decision making. It is also true that, in theory, individuals in liberal democracies are free to pursue their goals and dreams in a relatively unencumbered manner. In practice, however, the amount of agency anyone can exercise is often directly tied, and increasingly so as time progresses, to the personal wealth at their command at any given moment. In Western industrialized nations we feel this acutely in the current affordability crisis, in which owning a home, even an incredibly modest one, is out of reach for large numbers of people.

Even individuals with wealth currently find themselves constrained in countless ways by the ever-increasing number of laws and regulations of states, many of which are supported and enforced by increasingly advanced and intrusive surveillance technology. Outside the niche of crypto, it is currently near-impossible to conduct a legitimate economic transaction without it being monitored and subject to censure. We could also take the case of individuals who are fortunate enough to own a home, yet are still subject to the arbitrary regulations of a homeowners’ association. Many such homeowners find that they cannot even freely choose the color of their own house or the height of their fence. We may collectively agree that any one of these rules is desirable for the maintenance of social harmony and the collective good, but it is undeniable that their cumulative effect is a restriction of personal agency.

The physical constraints on our behavior in modern society are matched, or perhaps even exceeded, by the more invisible constraints on our psychic freedom, which are often less noticeable precisely because of their all-encompassing nature. We have spent the past twenty years having our attentional systems rewritten by smartphones and the modern internet. This rewriting has largely been in the service of maximizing advertising revenue for social media companies, and certainly not in the service of maximizing individual agency. What we think and believe is now often the result of the rapid memetic transfer of someone else’s ideas or beliefs through social media. This is in contrast to the world of twenty-five years ago, when the stationary TV or desktop computer held influence only over those in physical proximity to it. This is to say nothing of the world of the nineteenth century and earlier, before mass communications media existed at all.

Given the current state of individual and collective human empowerment, how might sufficiently advanced AI change things? It does seem likely that society as a whole will face the kinds of disempowerment the article describes. As we hand over more and more high-level decision making to machines, this naturally implies that humans themselves are no longer the ones making those decisions. We will do this because at each step, it will be easier and more efficient to offload to the AI what it can do better than a human, or an ensemble of humans. Just as we would find it absurd to rely on humans rather than a computer for essential arithmetic tasks, the same logic will apply to cognitive tasks and AGI in the future.

If it is unclear whether AGI decision making will be misaligned, yet clear that more disempowerment is likely, the question then becomes: is disempowerment something to be avoided? Is it possible to imagine a scenario in which humans are indeed disempowered, but still happy and content, because the empowered AIs act unfailingly in the service of human interests? We can take the example of a pair of loving parents caring for their newborn child. The child clearly has little to no agency, yet all of its needs are taken care of and it is nurtured. The child is even happy, and free within its own domain to explore and interact with the world. The same logic applies to a human and their pet. Here again, in the greater scheme of things the pet has little agency, yet is still content so long as all of its needs are met.

Understandably, people recoil at the thought of becoming mere pets. Even I did, as I wrote the above. This aversion to losing our personal agency is not something new. It has a long history, one that mirrors the historical process by which individuals have become increasingly disempowered relative to societies as a whole. In particular, we can look to the effects of the industrial revolution in the 19th and early 20th centuries. This was a time when new forms of social control were being developed and spread alongside the technological developments which made them possible. Although the works of authors from this period are perhaps not representative of the average person’s experience, their lasting appeal over a century later attests to their speaking to something fundamental in human nature.

In Notes from Underground the great Russian writer Fyodor Dostoevsky explores the problem of individual agency through a thought experiment concerning a utopia. If a society could be developed in which all of its members’ needs were met, would the members of that society be happy? Dostoevsky answers this question with an emphatic “no”. He goes so far as to contend that it is in human nature to prefer to destroy such a society simply for the sake of asserting one’s own independence from it. This desire for individual freedom, even when destructive, irrational, and serving no utilitarian purpose, is something he sees the human spirit as placing above almost everything else. We find a similar theme a few decades later in Aldous Huxley’s classic work Brave New World, in which an impassioned case is made for the freedom to act irrationally and (dare we say) humanely, in the face of an increasingly automated and regulated society. This anxiety around losing individual agency is also perhaps why the Gradual Disempowerment article has resonated the way it has with so many people in recent days.

Is a soul-crushing disempowerment at the ‘hands’ of AI truly our fate then? Or are there opportunities to be had in this coming phase shift? Although institutions can, and perhaps have a moral obligation to, turn their control over to systems capable of running them most efficiently and beneficially, I believe that for individuals themselves personal agency may finally mount a comeback. Coincidentally, Sam Altman recently shared a blog post containing some predictions about the societal changes AGI will bring. One of the things he highlighted is that as intelligence becomes orders of magnitude cheaper than it is today, the ability of the average person to act in the world may very well increase rather than decrease. His argument is that if you had the equivalent of a 10,000-person company of PhD-level knowledge workers at your command for $20/month, then you could clearly accomplish much more than you could today by yourself. Dario Amodei has made a similar argument recently, suggesting that anyone in the world will soon have access to “a nation of geniuses” to act on their behalf.

Of course, this picture of super-powered individuals is complicated somewhat by the fact that in this scenario nearly everyone else will also have such resources. Governments will also have surveillance technology far beyond what they possess now, and it may well be more effective than whatever tools individuals have to counteract it. Regardless, it certainly will not be an entirely zero-sum situation, especially for any undertaking outside the domain of interpersonal competition. So much of what prevents people from acting truly agentially now is that they do not know what actions to take in order to make a difference in their lives. Some cannot see the openings which genuinely do exist. Or, even if they do see them, they may not have the ability to navigate them successfully. Consider today’s millennial would-be homeowner, who currently finds themselves disempowered in the extreme. For most, the notion of finding a plot of undeveloped land, building a house, and supplying it with water and electricity (and doing so affordably) would be a daunting task. It is easy to imagine how, with the help of a couple dozen super-intelligent entities, it could become trivial.

In summary, my contention is that as humans we clearly care deeply about empowerment. Although the rise of AI will involve handing more institutional control over to AI systems, those same systems have the potential to empower individuals in radical new ways as well. That said, I can’t claim to know what will happen in the coming decades any more than anyone else. In large part thanks to the pioneering work of Ray Kurzweil, people often talk about a ‘technological singularity’. The analogy to a black hole’s event horizon is apropos, but not for the reason people often assume. The vernacular understanding is something like: technological development beyond a certain point will advance at a near-infinite rate. A more fruitful way of thinking about the singularity is as a point beyond which advancements in technology will be such that it becomes difficult-to-impossible for us to see past it. This is because advanced AI will not just change one part of society at a time, but rather all parts of all societies at once, and in complex and chaotic ways.

On the one hand, it seems more important than ever to exercise epistemic humility. We simply do not know what in particular the technological developments of the coming decades will bring. Anyone who claims certainty about the future has not thought very deeply about the pervasive implications of commodifying intelligence. On the other hand, we can and should marshal all of our collective resources to ensure that whatever does happen has the greatest chance of benefiting as many people as possible, both by ensuring their needs are met and by safeguarding (or even increasing) their freedoms. In that sense, the Gradual Disempowerment article is part of a necessary conversation. Many more perspectives and voices will be needed, though, in the coming weeks, months, and years. As we invent the future, it is critical to ensure it is the one we want.

Written by Arthur Juliani

Interested in artificial intelligence, neuroscience, philosophy, psychedelics, and meditation. http://arthurjuliani.com/