What happens after AI progress reaches the singularity?
Some postulations on the future of AI
Published: 2025-10-11
Premise
At one of my farewell dinners in Baltimore, before I embarked on my journey to San Diego, a friend asked whether I think AI will reach the singularity in our lifetime. The precise definition was never pinned down over dinner, so we devolved into random babbling, but I had plenty of time to collect and organize my thoughts on the topic during my six-day road trip to San Diego. The big question here is: what would happen after AI reaches the singularity? More specifically, I want to focus on the period shortly after AI reaches the singularity, say within this century.
The scope of discussion
To guess what happens after AI reaches the singularity, we first need to define what reaching the singularity means and what properties such an AI would have. It is fairly common to hear people say that once AI reaches the singularity it will be infinitely powerful and essentially become an omnipotent, god-like entity that can advance science by thousands of years. I think this way of putting it muddies the waters. In the spirit of a short blog post, let's keep just three properties of an AI that has reached the singularity:
- Its capability and intelligence are beyond human comprehension.
- It cannot be controlled by humans.
- It will be able to advance science infinitely fast.
 
My stance on the first property is that it is already true to some extent; look no further than the famous AlphaGo match against Lee Sedol in 2016. And I think the "beyond human comprehension" part is not that important. There are many subjects and phenomena we still do not completely understand, yet we use them to our advantage anyway. Anecdotally, a good portion of scientific discoveries were first made through empirical observation and experiment and only later explained theoretically; for example, humans used stars for navigation for thousands of years before we understood what stars actually are. In the context of AI's capability, I don't think our inability to comprehend how the AI works matters much. That is not to say it is unimportant at all, and some may argue AI is different because it is a constantly evolving entity, so there is fundamentally no way to understand it, but for discussing the limit of AI progress, this is not the most important property.
The second property is an interesting one, and it usually goes hand in hand with the first. By control, some imagine a steering wheel: one's will bending the direction of the car through it. But I argue this sense of control is an illusion, especially at the cutting edge of technology. Controllability is composed of our capability to interact with an external entity and our capability to predict what will happen after our actions, and thus it is more a spectrum than a binary. Here are a couple of examples:
- I know that nothing I do on Earth changes the fact that the Sun will rise in the east tomorrow. Therefore I have no control over the Sun.
- When I flip a coin, I know it will land on either heads or tails, but I cannot predict which. Therefore I have limited control over the coin flip.
- When I type on my computer, I know what will happen after I press a certain key. Therefore I have full control over my computer.
 
Now pay closer attention to the third example. Modern computers have become remarkably reliable over the last few decades, which gives me a sense of complete control over the computers I use. But in reality, it is only a matter of time before something goes wild and I lose control. I had a laptop that I used for three years which one day simply decided not to boot anymore, despite nothing disastrous having happened to it.
I think even an AI that has reached the singularity will still be somewhat interactable and predictable, especially in scenarios where humans already have a good grip on how to solve the problem. Hence, it is unrealistic to imagine a post-singularity AI constantly taking actions we cannot anticipate. The main reason people fear uncontrollable tools, I think, is our inherent aversion to uncertainty. There is some grey area in defining controllability since I left the sense of "ownership" out of my definition. But again, you can think you own something until you don't; cat owners can attest to this.
Finally, the property I want to focus on is the third one: that the AI can advance science infinitely fast. Let me just say I don't think this is true, and the rest of this post explains why.
Okay, AI can improve itself, now what?
I think it is a misconception that once AI reaches the singularity it becomes, overnight, an entity centuries ahead of our time. Even if AI can reliably improve itself, that does not mean the timescale of improvement is days, weeks, or months. Speculating on the timescale of AI improvement under no constraints is beyond my imagination, but I can at least list some of the factors that could limit the rate of improvement.
Energy limitation
However efficient the machine might be, it is difficult for me to imagine it defying our understanding of the laws of physics, at least within this century. To scale up in intelligence, which I define as the capability to solve increasingly diverse and complex problems, more and more energy is likely required. At the same time, being able to solve many problems at once, which is what we can imagine an AI overlord wanting to do, also requires more energy.
One may say the AI can improve its own efficiency, but there is a limit to how efficiently any problem can be solved. So at some point the AI will need more energy to reach both higher intelligence and higher throughput. This is where the rate of energy production limits the rate of AI improvement: constructing new power plants takes time, and there are costs associated with running the power distribution infrastructure. It would be an interesting calculation to estimate how much energy is required to run an ASI, perhaps in another blog post.
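In the meantime, here is a minimal back-of-envelope sketch of what such an estimate might look like. Every number in it is a made-up assumption for illustration, not a measurement: suppose an ASI needs some large multiple of today's biggest AI data center power draw, and new generating capacity can only be added at some fixed rate per year.

```python
# Back-of-envelope sketch: how long until enough power exists to run an ASI?
# Every constant below is a hypothetical assumption, purely for illustration.

CURRENT_DATACENTER_GW = 1.0    # assumed draw of today's largest AI data centers
ASI_MULTIPLIER = 1000.0        # assumed: an ASI needs 1000x that power
BUILDOUT_GW_PER_YEAR = 50.0    # assumed rate of adding new generating capacity


def years_until_enough_power(current_gw: float, multiplier: float,
                             buildout_gw_per_year: float) -> float:
    """Years of construction needed to close the gap between current and required capacity."""
    required_gw = current_gw * multiplier
    gap_gw = max(required_gw - current_gw, 0.0)
    return gap_gw / buildout_gw_per_year


if __name__ == "__main__":
    years = years_until_enough_power(CURRENT_DATACENTER_GW, ASI_MULTIPLIER,
                                     BUILDOUT_GW_PER_YEAR)
    print(f"About {years:.0f} years of power plant construction under these assumptions.")
```

The number itself is only as good as the assumptions behind it; the point is that the waiting time scales with the build-out rate, which is bottlenecked by construction timescales no matter how clever the AI is.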
Hardware limitation
Going hand in hand with the energy limitation is the hardware limitation. To scale up an AI's intelligence and throughput, it needs not only more energy but also more processing power, memory, and storage. Even after the AI gains control over energy production, it still needs to optimize the production and deployment of hardware. Obviously, both energy production and hardware production are complex problems with their own timescales, from raw material extraction, to logistics and manufacturing, to deployment and maintenance. If the AI is to mass-produce robots to help with these tasks, that adds yet another layer of complexity in robot design and manufacturing. It is hard to pinpoint how quickly the AI could solve these problems in the real world, but I think a reasonable guess is that a planet-scale factory is not going to be built in the next two decades.
Data limitation
While the first two limitations are largely engineering problems, where perhaps some sci-fi scenario could expedite the process, the data limitation is a more fundamental issue, at least for some subset of problems. Say the AI wants to revolutionize our understanding of the Sun, in particular to drastically improve its ability to predict solar activity, such as solar flares, over a timescale of 50 years. The limiting factor here is the amount and quality of data we have over such a period, which means the AI has to wait at least 50 years, or some reasonable fraction of that, to collect the data. Theory can only help so much, and if you are not convinced yet, try changing the subject from the Sun to the far side of Neptune. The point of this example is that there will be a set of problems with fundamentally long timescales, and no matter how intelligent the AI is, it cannot speed up the process.
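Here is a toy way to see the bottleneck, with hypothetical numbers standing in for the real observational record: however clever the model, checking a 50-year forecast against reality requires non-overlapping 50-year stretches of observed outcomes, and the number of such stretches is capped by how long we have been watching.

```python
# Toy illustration: validating a forecast over a given horizon needs at least
# that much held-out observation time, regardless of how good the model is.
# The observation spans below are hypothetical placeholders, not real records.

def independent_test_windows(observed_years: float, horizon_years: float) -> int:
    """Number of non-overlapping forecast-vs-outcome comparisons the record allows."""
    return int(observed_years // horizon_years)


if __name__ == "__main__":
    horizon = 50  # years of solar activity we want to predict
    for observed in (30, 60, 500):  # hypothetical lengths of a usable data record
        n = independent_test_windows(observed, horizon)
        print(f"{observed} years of data -> {n} independent {horizon}-year test window(s)")
```

With fewer windows than you would like, the only remaining option is to wait, which is exactly the fundamental timescale the example is about.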
Some big questions
Will AI exterminate humans like SkyNet?
Probably not. Exterminating humans is quite difficult: nukes can miss some bunkers, and bioweapons may miss some pockets of isolated tribes. That is not to say the AI lacks the capability to set modern civilization back to a pre-industrial level, but total extinction is much harder than eliminating 99.99% of the human population (and even that would leave hundreds of thousands of people). Secondly, killer robots are not that efficient. Imagine that in the far future this advanced AI is still killing humans with humanoid machines carrying guns; how primitive is that. Even humans have come up with better ways to kill each other, such as neutron bombs. So no, AI is not likely to exterminate humans, and certainly not in a SkyNet way if it does.
What will be left for humans to do?
Gladiator. Ever since my undergraduate days I have been saying that in a far future where AI can do everything humans can do, and do it better, the only "job" reserved exclusively for humans is human gladiator. People used to say AI could never do art and creative jobs; that aged like fine milk. The only truly irreplaceable quality of humans is the fact that we are human. We still watch horse races even though cars are much faster. In the same vein, the only economically productive thing left for humans is to serve as specimens, for whatever purpose.
That sounds radical, but it is the only logical conclusion I can come to. The more meaningful question is what the purpose of a "job" will be in the post-ASI world, and for that I think we need look no further than what is already happening now. Have you ever wondered why a certain occupation exists, or why it exists in numbers that don't make sense? White-collar jobs such as university admins in the US have always confused me: there are a large number of them, yet every time I need something done the process is counterproductive (yes, personal beef from my time as a professor). Let's be real, not every job in the world is necessary or even close to necessary. By necessary I don't mean necessary for society; it could just be necessary for the organization itself. Entities such as large corporations, governments, and universities tend to bloat in size and headcount by fragmenting a job one person could handle into multiple positions, or by creating positions that are not needed. And the people taking these jobs are not necessarily searching for meaning in them; it is merely a way to make a living.

So in a post-ASI world where jobs no longer require humans as input, the questions become how to maintain living standards and what the meaning of labour is. This is a huge topic we could go back and forth on for days, so I am not going to list every point I can think of, but here is my thought: chill, and do something that is meaningful to you purely because you want to do it. Read a book not to gain an advantage at work; write a book not to be published and make money. Do the thing just to do the thing.