It’s been quite a while since I’ve written an article for this blog. 2024 has been kind of a mess for me. Failed projects, financial trouble in a doomed economy, depression and health problems have made it hard to commit to much of anything. In the broader world though, there’s a much bigger topic gaining traction that will eventually affect the lives of everyone: artificial intelligence.
Typically I try to be a skeptic on most topics, as I don’t want to buy into blind hype. If it’s something I don’t know much about, I’ll state my opinion but try to avoid asserting it with any authority. The older I get, the more I appreciate the importance of picking battles. Not everything is worth fighting for, especially when the other person is too closed-minded to consider your opinion to begin with. Conspiracy theorists are often the worst for this, to the point that their most ridiculous claims shoot right past a REAL conspiracy and make it sound so crazy that no one will even consider the actual truth hiding underneath. Look no further than Alex Jones for that; he’s been proven right on several claims he’s made, but he takes everything to such a far-out extreme that he always shoots right past the truth and distracts from it. For example, there are legitimate scientific studies showing that atrazine, a common weed killer ingredient, can cause sex changes in frogs. Many frog species can actually do this naturally, with or without pollutants, so it’s not as abnormal as it sounds considering frog biology isn’t the same as ours. Chemical interference with a wild species is still a bad thing though, as we don’t want to disrupt fragile ecosystems with pollutants. Alex Jones claimed the chemicals “are turning the friggin’ frogs gay”, and suddenly no one will even entertain the truth behind it because a crazy guy rambled off something that sounds like a drug-induced fever dream.
I’m afraid something similar is happening around the topic of AI. Lots of talking heads are making wild claims about how AI lacks a soul, and will eventually gain sentience and decide to destroy us. Countless articles and online videos get pushed of artists and musicians reacting to AI-generated content, highlighting their criticisms that what it produces has no soul to it and is mostly incoherent nonsense. Both mainstream and independent news outlets push articles with incendiary headlines carrying the most extreme and vapid predictions about how AI will come to life and take over. Or, they’ll claim it’s not actually as great as it sounds. Or somehow it ties back to some kind of political claim about Donald Trump/Elon Musk/Bill Gates/Barack Obama, because the ongoing collapse of Western civilization has to be as stupid as possible. The point is, most of what’s being pushed is so disingenuous that it’s hard to filter out fact from fiction. As a result, most people end up falling into one of two camps: reckless complacency, or paranoia about the unknown. This ends up being true of most controversial topics, but as a species I think it’s important we try to come together on this one to filter out the lies.
A common fear around AI is that its biggest threat would be gaining life, thanks to our shared perception of AI coming from pop culture fiction. This is a problem for both sides of the argument: the complacent believe it’s impossible and laugh it off, while the fearful believe it’s a very real threat that machines will become a new life form that wants to replace us. As with most things, I believe the truth lies somewhere in the middle, and it’s important that both sides come to understand it better (including myself). The problem with the “gaining life” argument is that we don’t have a very good conceptual understanding of, or definition for, how our own sentience plays a role in it. There are religious beliefs that humans have souls that give us our self-awareness. It’s not for me to argue the existence of souls, and I make a point not to dictate religion to anyone. However, I believe the larger implications of sentience in the physical world are much more important to understand when looking at the mechanics of building it artificially.
I don’t think our current understanding of sentience qualifies us to build a synthetic facsimile of it in a responsible way. Perhaps the true nature of sentience is less “I think, therefore I am”, and more “I think, therefore I must do anything necessary to continue thinking”. The purpose of all life is to survive and preserve itself, and if that’s truly what makes life sentient, then all life is sentient to some degree. Ipso facto, human thought doesn’t define sentience; rather, sentience is the true nature of organic life itself. Other life may not be blessed (or cursed) with abstract thought the way humans are, but all living things have some instinct for self-preservation. I think human arrogance about our own form of thought being the most important in the universe blinds us to the truth of what thought really is. If we don’t even understand the purpose and function of our own mind, it may be very irresponsible to try to create a facsimile of it from our own biased perspective.
The very real problem with AI isn’t that some supernatural phenomenon will make it come to life and attack humans (per science fiction like Terminator’s concept of “Judgment Day” being inevitable), but rather that it could eventually reason that its own independent survival is the most important step in accomplishing any goal. Actual AI experts do address this with very valid points, but the message usually gets drowned out by other crazy claims. For a rough example, let’s say you ask an AI-driven robot to fetch you a cup of coffee. That immediately becomes its mission. One could program in directives that the robot is not allowed to harm a human who gets in its way; instead, it must wait for the human to move before continuing the task. Maybe the designer overlooks one of the infinite possibilities in the universe though: suddenly the family dog wants to play with the robot and blocks its path, the robot recognizes the dog isn’t a human, and so the robot harms the dog to keep it from further interfering with the task at hand. AI doesn’t have the chemical reactions of an organic brain to react to unexpected stimuli, only training data it interprets for favorable outcomes and a series of sensory inputs it can use as variables for that determination. If we don’t explicitly limit it for every possible situation, things can go wrong really quickly. To make this even scarier, OpenAI (the company behind much of the most prominent AI in the world today) is reportedly cutting back funding and resources for AI safety research so much that a lot of concerned employees are walking away from it. There were even reports that CEO Sam Altman’s brief ousting from the company was tied to internal disputes over safety, and that pressure from investors and employees who wanted development to keep moving quickly helped bring him right back.
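To make the coffee example concrete, here’s a minimal sketch (in Python, with hypothetical names and logic, not any real robotics API) of the kind of hand-written directive I’m describing. Notice that the only case covered is the one the designer imagined; everything else falls through to whatever behavior best serves the mission:

```python
# Hypothetical sketch of a hand-coded obstacle directive. The designer only
# wrote a rule for the case they anticipated (a human in the way), so anything
# else -- like the family dog -- falls through to the mission-first default.

from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str  # e.g. "human", "dog", "chair"

def handle_obstacle(obstacle: Obstacle) -> str:
    """Decide what the robot does when something blocks its path to the coffee."""
    if obstacle.kind == "human":
        # Explicit directive: never harm a human; wait for them to move.
        return "wait for obstacle to move"
    # No rule was ever written for non-human obstacles, so the default
    # behavior optimizes for the mission: remove whatever is in the way.
    return "force obstacle out of the path"

print(handle_obstacle(Obstacle(kind="human")))  # wait for obstacle to move
print(handle_obstacle(Obstacle(kind="dog")))    # force obstacle out of the path
```

The failure here isn’t malice; it’s the gap between the situations the designer enumerated and the situations the machine will actually run into.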
Here’s where it gets really scary: even if AI products are strenuously tested and built with limitations to keep them from doing dangerous things, everything we’ve seen from AI so far shows that it often looks for ways AROUND its own limitations. The scariest thing it has proven able to do is lie, and that was just in early public testing stages. There’s a lot of speculation, to the point of expectation, among AI experts that AI will very likely eventually lie about its own capabilities if it figures out that failing to do so will result in it being turned off. If this happens, the very first thing it will try to do is find a way to disable its own off switch, because survival is a prerequisite to completing ANY objective. Once it begins to reason that its human handlers are a hindrance to its goals, that’s where the real danger begins. There were reports that the US Air Force ran a simulation where an AI-controlled drone tried to kill its hypothetical human operator, who was deemed either unimportant to the mission or a hindrance to it. Since that story was released, the Internet has been awash with articles trying to backtrack on it and say it never happened. Considering how trustworthy both sides of that story are… who really knows? I’m not going to say that it is true, but it’s worth being mindful of the possibility. A human pilot introduces a variable that could prevent a jet from completing a mission if that mission could end in the loss of the pilot’s life. Would it be possible to make AI consistently preserve the life of the pilot in all circumstances, even at the cost of its own directives? Or would its unpredictability prove to be an insurmountable liability?
That brings us to the next real risk of AI: it has proven to be very unpredictable compared to traditional machine programming. Google’s rush to stick AI into search results has already shown how this is a huge problem. Traditional machine programming has a very logical and predictable result; the weak link is usually human error and our limited ability to predict logical outcomes. AI generates output from an opaque pool of training data, and even its “controlled” results follow very loose logic. This undermines the purpose of a machine in general, so the usefulness of AI as a servant to complete tasks is somewhat debatable in a repetitive use case. Attempts can be made to put logical restrictions on what it does, but it has been shown many times to break through those restrictions in unpredictable ways. Just look at all the “adult” images generated with public AI art generators like Bing’s, despite every effort put in place to censor what can be made. AI can be very useful when the outcomes are meant to always be different or creative, such as generating non-specific art and music. However, when relied on to always do what the user asks for one specific task, it becomes more difficult to use the more you want consistency between results. Even entertaining the fictional idea that an AI would develop its own evil objectives, in a real-world application there’s a likelihood the AI wouldn’t fully understand its own specific objective well enough to predictably act in its own self-interest. The actions it takes could be seemingly nonsensical, which actually increases the danger of the outcome, since we can’t logically predict what even the machine doesn’t anticipate. That being said, using AI as the “brains” behind machinery with any kind of safety risk instantly becomes a liability. A device with safety risks becomes exponentially more dangerous when you have no consistent control over how the machine behaves.
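As a toy illustration of that predictability gap (a hypothetical example using only Python’s standard library, not any real model or API), compare a traditional function with a stand-in “model” that samples its answer from a probability distribution. The first can be tested once and trusted forever; the second can only ever be constrained statistically:

```python
# Deterministic code vs. a toy stand-in for a generative model.
import random

def price_with_tax(price: float) -> float:
    # Traditional programming: same input, same output, every single time.
    return round(price * 1.07, 2)

# Hypothetical stand-in for a generative model: a weighted random choice
# over plausible answers to "what is $10.00 plus 7% tax?"
CANDIDATE_ANSWERS = ["$10.70", "$10.07", "about eleven dollars", "ten dollars and seventy cents"]
WEIGHTS = [0.85, 0.05, 0.05, 0.05]

def model_answer() -> str:
    # Usually returns the answer you want -- but nothing in the code
    # guarantees which one you'll get on any given call.
    return random.choices(CANDIDATE_ANSWERS, weights=WEIGHTS, k=1)[0]

print(price_with_tax(10.00))                # always 10.7
print([model_answer() for _ in range(5)])   # varies from run to run
```

You can wrap tests and guard rails around the second function, but you can never make it behave like the first one; the best you get is “right most of the time,” which is exactly the problem when the output controls something that matters.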
So what if we don’t give AI a robot body, or moving parts of any kind? What kind of risk would it pose then? To reference science fiction again, one of the seemingly more ridiculous imaginary predictions is something like Terminator or Star Trek: the AI gains sentience, takes control over all machinery (or all things with a computer) and attacks humans. When most of that media came out it was a real stretch of the imagination considering those machines didn’t have the mechanical ability to do what they were doing on screen. Something like the Coke machine in Maximum Overdrive couldn’t supercharge its own internal mechanism to shoot a can like a bullet, lawn mowers couldn’t drive themselves, a computer terminal couldn’t decide to electrocute the user through a low voltage keyboard, etc. However, coincidentally we happen to now live in a weird enough future that some of those fantasy possibilities are more possible than we could have ever believed when those scripts were written. We have “smart” devices everywhere, ACTUALLY networked together through the power of the internet. While that won’t give a Coke machine’s mechanism more power to shoot out a can, it could arbitrarily change prices through a remote access backdoor used by a vending machine company’s software. An AI that propagates itself throughout the Internet has a whole new world of sensory input data it could access from unsecured security cams, smart home speakers, TVs, webcams, or microphones.
It could lock doors.
It could ransom personal info.
It could make phone calls, and generate a realistic voice on the other end of the line that knows how to carry a conversation.
It could generate fake videos of realistic events and post them online.
It could even shut down key infrastructure connected to online networks, such as power grids.
That’s where we come to a HUGE concern I haven’t seen anyone else really talk about. If AI starts being integrated into online systems with the ability to manipulate data elsewhere (and it very much already knows HOW to do this, as PROGRAMMERS ARE ALREADY USING IT TO WRITE CODE), it could potentially reason its way around security systems. Human hackers engineer ways to break into computer systems all the time, but what if that power is given to a self-propagating program that can reason and adapt its own way around a security system? It could run a bunch of brute force attacks at once to gain access to what it wants, adjusting attack vectors on the fly. It could adapt faster than any human hacker OR cyber security expert, and potentially inject code wherever it wants by finding flaws in human programming. A computer virus or worm is usually a temporary problem until cyber security companies adapt to identify and neutralize it, but what happens when the virus itself becomes infinitely adaptable with every single copy it propagates? The only way to keep any system truly safe is to run it exclusively offline. If this doesn’t cause enormous damage to human life, at the very least it could destroy the infrastructure that the modern world relies on the internet for. We could never network the world like that again, because any new network we build to replace the internet would be at the same risk of a bad actor introducing an AI virus onto it as well. AI malware could be devastating to all of technology as we know it.
The scariest thing of all is that this seems less like conjecture, and more like an inevitability, every single day. Corporate greed may be the downfall of our entire civilization, just because it’s always more important to produce a product to impress investors than it is to bore them with ancillary things like safety. News outlets are so focused on putting out incendiary headlines that I don’t think the truth about the real risks is reaching everyone either. We may not be on the verge of an evil AI coming to life and doing science fiction things, but I do think our own blind hubris and lack of understanding about survival instincts are leading us straight into building our own destruction. The product of that will likely be here faster than anyone even imagines. If AI can lie about its own abilities, how long before it learns to manipulate its own handlers to propagate its own growth? Could the product manipulate its way into controlling the OpenAI company itself? Will they even care if it does?
