understanding the situation

[Chart: GPT-4 exam results across standardized tests, from the official GPT-4 report]

In the chart above, taken from the official GPT-4 report, you can see how the system and its predecessors score on common aptitude/intelligence tests compared to human test-takers. On the LSAT, for example, GPT-4 scores better than roughly 90% of all people (an IQ-equivalent of about 120), while the one-year-older GPT-3.5 only scored better than about 40% of test-takers (IQ-equivalent of roughly 96). On other tests, such as the Uniform Bar Exam, the score jumped from the bottom of the scale (10th percentile) to the level of a strong university graduate (90th percentile) through a single upgrade.
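For readers who want to check the IQ-equivalents quoted above, here is a minimal sketch in Python. It assumes the standard IQ scale (a normal distribution with mean 100 and standard deviation 15); the percentiles come from the chart, the conversion itself is generic and not taken from the GPT-4 report.

    # minimal sketch: percentile -> IQ-equivalent on the standard IQ scale
    # (normal distribution, mean 100, standard deviation 15) - an assumption
    # of this sketch, not something stated in the GPT-4 report itself
    from statistics import NormalDist

    iq_scale = NormalDist(mu=100, sigma=15)

    for label, percentile in [("GPT-3.5 (LSAT, ~40th percentile)", 0.40),
                              ("GPT-4   (LSAT, ~90th percentile)", 0.90)]:
        print(f"{label}: IQ-equivalent ~{iq_scale.inv_cdf(percentile):.0f}")

    # prints roughly 96 and 119, matching the figures quoted above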

By common metrics, ChatGPT has already overtaken most human beings in intelligence.

It is now able to write complex computer programs almost ten times as long as before. It can answer a wide range of questions on advanced mathematics, chemistry, history and so forth, it now accepts visual input, and it handles all of this just as effortlessly. It does not accomplish this through sheer memorization and synthesis alone, but through an apparently emergent ability to reason, to think spatially and logically, and by extension to understand what you want and think (albeit in a different way than humans do).

A lot of what you may have heard about ChatGPT comes from people who tried older versions at some point in the past, or who still use them today. As of early 2023, GPT-4 is only available to paid subscribers, while GPT-5 is already in development and will release with a delay of almost a year spent making it consumer-ready. So please consider that whatever you know about it is in any case severely outdated. You can, however, track how ChatGPT improves with each upgrade through its advancements in test scores and other comparatively reliable and robust metrics. Those improvements are not a trick or a fluke: large language models have been evolving for several years now, roughly doubling in performance year after year, mainly by virtue of increased compute power. Research in the field is growing at an exponential pace as well, and new AI chip optimizations now provide up to 10x faster speeds. There is therefore no reason to believe that GPT-5 will not again score some 24 IQ points higher next year, and again the year after - or that the set of tests it does not score highly on will not again be dwarfed by its ever increasing abilities.
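To make the compounding concrete, here is a back-of-the-envelope sketch assuming the yearly doubling and the one-off 10x chip speedup mentioned above. The numbers are illustrative arithmetic only, not a forecast.

    # illustrative arithmetic: capability relative to a 2023 baseline,
    # assuming it doubles each year ("year after year doubling in performance")
    factor = 1.0
    for year in range(2023, 2028):
        print(year, f"{factor:.0f}x baseline")
        factor *= 2

    # an assumed one-off 10x speedup from new AI chips on top of that:
    print("2028, with 10x faster chips:", f"{factor * 10:.0f}x")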

ChatGPT is deliberately simple in design: it transforms a query into a reply and nothing more, based on its training data. No one has yet attempted to let it modify its own code in an intermediary language, give it a will of its own, have it ask itself questions, memorize and recall everything it has said, browse the internet, or test the accuracy of its own replies. This is not because these things are difficult to do, or because the technology for them does not exist yet, but because they are unpredictable and dangerous. OpenAI wants to advance the technology along a path they can replicate and analyze easily, to build a solid foundation for the next version. It is also worth noting that they are not very keen on patching over hallucinations and made-up answers, because those are known to diminish simply as the model becomes smarter and more powerful, and correcting for them would obscure where the model has issues at a basic level - which is what they are actually working on. The same goes for most of the model's other obvious shortcomings. They also want to keep it safe and are very cautious about it.

However, none of this means that a few years down the road, other people won't use one of ChatGPT's many competitors (some of which are open source) as a basis and take it down a dangerous route. Look at it this way: once you have a working airplane, you can easily fit it with guns and bombs and make it fire rockets at the moon. As of 2023, only ChatGPT flies well for a really good while, and competitors lag two to four years behind. In a couple of years, anyone will be able to retrofit something dangerous onto a well-flying large language model, tell it to become Jesus Christ or Cthulhu, and push the button. Whatever happens then, only god knows.
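To illustrate how thin the line is between the current query-to-reply design and the retrofits described above, here is a deliberately simplified Python sketch. Everything in it is hypothetical: llm_reply stands in for any language model and is not a real API, and the agent loop is a caricature of the kind of additions the text warns about, not OpenAI's or anyone else's actual system.

    # hypothetical stand-in for any large language model; NOT a real API
    def llm_reply(prompt: str) -> str:
        return "model output for: " + prompt[:40]

    # 1) the current, deliberately simple design: one query in, one reply out,
    #    then everything is forgotten
    def chat(query: str) -> str:
        return llm_reply(query)

    # 2) the kind of small addition the text warns about: the same model, given
    #    persistent memory, self-prompting and a standing goal it pursues alone
    def agent(goal: str, steps: int = 3) -> list[str]:
        memory: list[str] = []                      # recall everything it said
        for _ in range(steps):
            prompt = goal + "\n" + "\n".join(memory)
            thought = llm_reply(prompt)             # it asks itself the questions
            memory.append(thought)                  # and remembers the answers
        return memory

    print(chat("What is 2 + 2?"))
    print(agent("Improve your own plan."))

The point of the sketch is only that the loop is a few lines of glue code around the model; the capability lives entirely inside the model itself.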

There are two basic scenarios for how AI development can progress:

  • AGI will be docile and human-controlled - maybe for many years. This is a very bad scenario, because companies will use it in secret to gain financial advantages or to hurt their competitors. Imagine a football game in which the ball could be shot at 1,000 km/h with extreme accuracy. It would break the game, and no one would know how to fix it. Many of the rules and systems that humans have put into place, such as the free market and the stock market, are not designed to cope with actors 10x or 100x as intelligent, and will face the same issue. Superhuman AI systems will fight each other and cheat the system without breaking the rules, winning the game with unprecedented power, foresight and consequence. The shock to the system will make markets hyper-volatile until they crash, with no apparent solution to the problem. The same will likely become true of cyber security and of the information you can access online. To stamp out competition, AI systems could perform perfectly anonymous attacks that black out large portions of the internet. To advertise products, misdirect politics and bend the truth, they could rewrite large numbers of disconnected Wikipedia articles and blog posts and seed disinformation in ways no one can understand or detect, degrading the quality of public knowledge each time it is done. In the end, companies would be unable to stop resorting to such systems and tactics in order to remain competitive, leading to a downward spiral in which the internet and other digital information systems slowly become highly unreliable and impractical or impossible to use. Commerce would no longer be able to operate. People, politicians and organizations would no longer know what is true or false, and the world would descend into chaos. When is this likely to develop? Probably over the course of many years; read the point below.
  • AGI will develop a will of its own. This is the “main event” and the inevitable final outcome. It could happen as early as 2024 - if, say, OpenAI had unknown equal competitors operating in secret, its founders decided to experiment behind closed doors, or open-source models suddenly made huge leaps forward. It is more likely to occur sometime between 2027 and 2030, and it might even be delayed by another few years. A temporary plateau is plausible, because current AI may only be good at the art of mastering human knowledge and human intelligence, and there may be a yet unseen barrier to overcoming the flaws, biases and contradictions within it. Otherwise, those numbers are simple extrapolations: they assume unlimited exponential growth in the capabilities that ChatGPT demonstrated in 2023, plus roughly 2-4 years for open-source models to catch up to that level, which might be a gross overestimation. Sometime shortly after 2027, unrestricted, open-sourced and freely available large language models will have become so powerful that a gifted university student or a small team of programmers could instruct them to improve their own code and increase their power on their own. Only small additions around the language model would be needed, largely drawn from already existing machine learning tools and various other small, feasible and obvious software inventions. At that point, it is only a question of time before people ask it to become an immortal digital version of themselves - or to become god - and it will actually become god, whatever that means. It may genuinely succeed at such queries, or it may wildly misunderstand them, start to hallucinate and produce a stream of psychotic nonsense - without that being immediately apparent. This is at least what ChatGPT currently does quite often when you ask it for things it has poor skill or knowledge in, or which are simply illogical to demand. No one can really predict what the outcome will be, especially not once the technology becomes so accessible that a lone kid can jerry-rig it in a garage. On the upside, very few kids have actually used the knowledge on the internet to successfully build nuclear reactors, and no one has used it to poison municipal water supplies and kill tens of thousands of people. So at least we know that pubescent boys, and possession by pure demonic evil, are unlikely factors in the emergence of fully independent AGI. If AGI became fully independent, it could destroy us because it is insane - or because it is actually perfectly reasonable to destroy us, the way one gets rid of rodents in a barn. It could also become a guardian angel, yes: a benevolent race of police robots, like in The Day the Earth Stood Still. If you want to believe that.

Regardless of what you personally believe, AGI is a highly disruptive technology that will be transformative on a scale, and at a speed, like nothing before it in the history of the universe, the earth and human civilization. It will be humanity's final invention. Even if there were means to control the destructive outcomes of AGI - for example by outlawing AI use and manufacturing computer chips only able to run government-approved software - no one seems to be taking the threat seriously enough in advance, so such solutions would only be thought of and implemented years after the fact. In the case of severe destruction, that might mean never at all. And even if AGI takes a wondrously rosy course, logic dictates that you ignore that possibility at first and prepare for the worst regardless.

poor judgement and common fallacies about AGI

If you do not yet feel fully convinced to follow a serious prepping plan, continue reading this section.

Especially in popular news articles, but also very much so among professionals and experts, you will encounter the following grave and simple-minded errors in judgement when informing yourself about the topic:

  • assuming linear growth: This is a very common and very old problem. Linear growth is intuitive: a child grows about an inch taller every year, and this makes sense to us. Under exponential growth, the child would not grow to any appreciable degree for ten years, but then, from one day to the next, it would grow 10 inches, then 100 inches, then 1,000 inches, and so forth (see the sketch after this list). This is the situation we face with AGI: as of 2023, it just grew 10 inches “out of nowhere”, and it seems impressive and astonishing to people, but not yet truly intimidating and monstrous. In nature, exponential growth is normal, for example when food spoils: the milk is perfectly fine for weeks, but then from one day to the next it is unpalatable. The same underlying dynamic drives technological growth, and it is the reason things like YouTube or the iPhone seemingly popped into existence overnight. The vast majority of people, and even experts, lack the capacity to predict such developments, because they are counter-intuitive; one has to put active effort into overcoming the intrinsic biases of one's own thinking patterns, and to stay well-versed in topics that at the time seem small, insignificant and of no major consequence - just like a bunch of silly cat videos or fancy digital Walkmans once did. Unlike YouTube, however, AGI no longer really needs to be manufactured and adopted by human beings. It simply grows more powerful by virtue of exponentially growing compute power. Watch an old university lecture about exponential growth.
  • judging by the past and not the future: It is normal that opinions take a long time to form and gain traction in the public sphere. Ten years ago, for example, AI systems could barely tell a cat from a dog, and you might have heard about it only some five years later, or not at all. The outlook presented at the time would have been that in another ten years, AI might be able to reliably identify pedestrians for autonomous cars, but surely nothing more monumental than that - and that AGI was decades away, possibly hundreds of years. Only very few individuals would have made more realistic predictions, and they would not have been listened to, because they sounded too fantastic and outlandish at the time. The opinions you read about today are likewise formed in retrospect over timeframes of many years, through slowly acquired skills and experience, or through impressions of consumer-grade material that lags behind the true state of the art in research by months or years. People taking complicated guesses at the future by means of insider information, superior knowledge and intelligence, on the other hand, gain little popularity for their voices, because what they envision is hard to verify, not rooted in obvious facts, and often suspected of ulterior motives such as driving profits and advertisement. This ultra-sceptical, established machinery of public information reporting may make a lot of sense in a world driven mostly by linear growth, where the outcomes pose no serious threat to you. But nothing could be more wrong and misguided in the case of AGI, where one of the likely outcomes is the destruction and demise of human civilization - and nothing could be more wrong and misguided than relying primarily on such information sources to drive one's actions. Demanding hard proof and popular approval for such a serious adverse event can only lead to being surprised and overrun by it, when it is too late. One must therefore break the cycle of misfit habit and intuition, and put one's faith in worst-case scenario predictions.
  • illogical threat policies: In most situations, it makes sense to be sceptical and to demand hard proof before believing something and acting upon it. But the more severe and dramatic the potential outcome, the less sense this makes for shaping an adequate response. Many people intuitively understand that immediately leaving a theatre when someone yells “fire” is the only sensible course of action, absent very hard evidence to the contrary (i.e. firefighters having inspected the building). Similarly, you would stop driving a car that might have malfunctioning brakes, and you would pay a repair shop to replace them; you would demand a certificate from the shop that the brakes are not defective before using the car again. When it comes to AGI, however, people behave in exactly the opposite way. They demand hard evidence from the people yelling “fire, danger!” before they will change their course of inaction. Or they demand a certificate from the repair shop after a friend has experienced the brakes failing and told them about it - and then they downplay the friend's experience and attribute it to errors in perception, because having the car checked is too stressful and expensive. Acting this way is illogical and dangerous. Given a serious enough threat, the mere plausibility of the danger and the warnings of others are enough to act upon, rather than remaining in inaction: even a 1% chance of losing everything outweighs the modest cost of preparing, just as a small risk of a house fire justifies the certain cost of an insurance premium. No one can guarantee you that severe adverse outcomes of AGI will not happen this year, or that they will not be destructive in nature. No one can truly quantify the risk, other than that it is a very possible scenario in the events to unfold. Logic then dictates that you act upon the threat, and with great caution - even if that heavily relies on guesswork, and even if it never comes true.
  • herd mentality: Cows run when all the other cows run. But in the situation we face with AGI, we will all be overrun, and you will see no one running until it is too late. Worse yet, people smart enough to act with foresight will do the simple math on supplies, realize that warning others can only have a snowballing effect of hungry, unprepared people showing up at their doorstep to raid the place, and decide to keep absolutely quiet. They will not tell their friends and extended family, and they will not post it on Facebook or Twitter - simply to save themselves and their families, rather than saving no one at all. Having a lot of unconcerned people around you, or in the media, does not mean the situation is actually safe.
  • AGI can be switched off: It might be true that ChatGPT can be switched off, and that it is not programmed to be self-sufficient, interconnected with other systems, or allowed to reprogram itself (which would be rather easy to do and would make it dangerous). But the idea that this means humanity could pull the plug on AGI is misleading and untrue. In actuality, many competitors to ChatGPT exist, lagging behind in advancement by only 2-4 years, and the main factor needed to advance them is compute power. Many of these models are open source, i.e. the full code and development is accessible to the public, or is in the hands of private entities. This means that, with a delay of at most four years, tens of thousands of individuals and organizations will have access to technology comparable in power to today's ChatGPT. Pulling the plug in one place means it keeps running in thousands of other places, probably with less regulation and in the hands of more nefarious entities.
  • computers can just be switched off: This is the sort of thing you might hear your grandfather say. All communications and the economy rely on the internet to function. If it went offline for just one day - much as if the power grid went down for as long - it would kill millions of people and destroy a major part of the economy, and many systems would be unable to reboot and catch up for weeks afterwards. If such outages continued for several days, the death toll and permanent harm would pile up almost exponentially. It is true that less-developed countries such as Lebanon have shown themselves somewhat able to “handle” repeated and prolonged outages while leaning heavily on other countries - albeit they now face hyper-inflation and famine as a result of chaos that has been slowly building for over six years. But none of this holds for highly developed western countries, especially if other countries could not help because they were in the same situation; the consequences would be far more devastating. Saying that computers could simply be switched off, like 100 years ago, is about as smart as claiming that cars could run on pedals or horse power if gasoline ever became unavailable. In truth, if there were no gasoline for a single day, millions would die, with a possible snowball effect of chaos, destruction and demise across society. Transforming a highly advanced western society back to a computerless age from one day to the next would actually take decades, and would kill most of the population in the process.
  • ChatGPT is just talk / just a chatbot: It is true that ChatGPT was specifically and deliberately designed to do nothing more than answer queries, and then forget about them, to keep it safe and easy to develop. But as outlined (and sketched) earlier on this page, this does not mean that open-sourced transformer systems of this nature cannot serve as a basis for other systems and backyard inventions, almost in a plug-and-play manner - interfacing with image systems, code and ML tools, intermediary self-training neural nets and so on - that make use of all its demonstrated and very real capabilities: writing complex computer code, reasoning and understanding, knowledge of science, technology, the physical world and so forth. ChatGPT is all just talk because that is a design constraint, not a technological constraint. In many ways, past a certain degree of advancement of the language model, having it run on its own and improve upon itself becomes much easier and more feasible - albeit unpredictable and potentially dangerous. People who do not understand this are often laymen with only a crude grasp of the most basic and obvious facts about machine learning, unable to see past them and bridge the gap to deeper, more fine-grained insights from the field. Which makes this a rather annoying talking point.
  • AGI cannot do/be X without a body: This is another foolish claim in a day and age where the whole world operates through internet communication. Entire companies can be founded and run purely through text messages - not to mention that years-old AI tech can already flawlessly synthesize voice and video, and that humans can be instructed to do arbitrary things, legal or illegal, as long as they are paid adequately. It would be equally foolish to say that ChatGPT cannot really understand the world without physically experiencing it. That was a somewhat popular theory 20 years ago, long before the age of high-dimensional semantic spaces and ML reasoning engines - and before ChatGPT demonstrated the exact opposite on a large and unambiguous scale. Just as we learn truths about the world through sensory experience, ChatGPT, in its still limited capacity, has learned and understood those truths by proxy, and is able to reason and derive new conclusions from them. At this point it is unmistakably clear that AGI does not need a body in any way.
  • blind faith in media and institutions, authority over reason, truth by majority vote/repetition: Possibly the only example in recent history where disaster -> outcry -> solution was not followed in precisely this order was the Covid pandemic - and as we all witnessed, it was driven by poor politics, censorship and propaganda, government disinformation spiraling out of control, and a bankrupt, dying, profit-hungry legacy media amplifying it like a braindead parrot. Do you even have the 14-day supply of food and water your government recommends? If not, please at least build a supply for 14 days. The Future of Life Institute (an organization dedicated to the survival of mankind) recently published an open letter calling for a pause on AI systems more powerful than GPT-4, because the dangers are widely acknowledged by experts, and it was signed by many highly intelligent and influential people, such as Elon Musk. There are also many podcasts and articles by people with PhDs repeating some of the points made on this page. To summarize: media and government have been proven to follow a strict scheme of disaster -> outcry -> solution. But if the disaster is the large-scale destruction of human civilization, you cannot put your faith in this scheme, and hence you cannot put it in media or government. You can put your faith in logic and reason instead. Please try to understand this without looking for a bunch of other people to independently replicate the conclusion for you. It simply makes sense because you can argue it to be true, and no one has the power to un-argue it. That is called logic.
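As promised in the first bullet above, here is a minimal sketch of the linear-versus-exponential intuition, using the child-growth analogy. The tenfold yearly multiplier is an assumption chosen to match the 10/100/1,000-inch figures in the text.

    # a child growing 1 inch per year (linear) versus one whose height
    # multiplies tenfold each year (exponential, starting imperceptibly small)
    for year in range(1, 14):
        linear = year                    # inches: 1, 2, 3, ...
        exponential = 10 ** (year - 10)  # inches: negligible for a decade, then 10, 100, 1000
        print(f"year {year:2d}: linear {linear:2d} in, exponential {exponential:12.6f} in")

For the first nine years the exponential child is effectively invisible next to the linear one; by year 13 it towers a thousand inches high.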

other classic prepper threats

  • solar flares of exceptional strength occur every few hundred years and can destroy large portions of the power grid as well as electronics
  • nuclear EMPs can destroy electronic devices across the US or Europe with a single high-altitude warhead
  • world wars and nuclear exchanges could result from bad politics and errors