
The growth and increasingly widespread application of artificial intelligence (AI) technology in practically all domains of human life is profoundly affecting the way individuals, and society as a whole, function.
At AI’s current stage of development, it is hard to predict where it will finally take us (if there is a final destination at all), but some signs of the changes it brings about are already clearly visible.
For instance, when you contact the front desk of a commercial or governmental organization, you will frequently be welcomed by an AI bot that guides you and answers your questions (with “mixed” results, to say the least), and it has become increasingly difficult to reach a real person who can answer your questions or support you in other ways.
Then, on the Internet, search engines increasingly respond to your queries with “AI overviews” or offer a so-called “AI mode” that gives you ready-made answers: a “creative compilation” of data and information found on millions of websites.
In general, AI systems are trained on data that has been appropriated without the permission of website owners and that lacks any factual validation, often resulting in partial, misleading, or outright incorrect answers. Examples abound, and if this were not such a grave development, with potentially dangerous and devastating results, it would be hilarious.
In addition, AI has become structurally embedded in decision-making systems, be it by presenting you with “personalized” prices for airplane tickets based on your preferences, background, and the size of your wallet, accepting you for a medical insurance plan or not, guiding you through a mental health crisis, or finding you the best washing machine or vacuum cleaner for your domestic situation.
But AI can also autonomously drive cars, buses, and trains, fly airplanes, and navigate ships, and it can monitor, control, and operate complex and critical technology, such as computer and internet infrastructure, (nuclear) energy and water supply systems, satellites and rockets, and whatnot.
In fact, organizations increasingly use AI software and AI-driven machines and processes in their business, manufacturing, and service models because it saves labor costs, follows orders without complaint, doesn’t join labor unions, and, depending on the domain, delivers better and more reliable results than human workers.
The applications of AI are virtually endless, but the bottom line is that AI technology no longer simply supports human activities: it increasingly makes decisions for us, that is, it thinks for us, rendering human decision-making, human thought, and human intervention obsolete.
AI is artificial; it is not real in the sense of “being human,” but it mimics human intelligence, designed as a mirror of the combined ability and knowledge of the whole of mankind: emotional, social, and intellectual intelligence, even physical intelligence, the latter embodied by a broad variety of robots.
Think of robots in the form of humans or animals, or of their own hybrid kind: robots used to build cars or deployed as artificial companions for physically disabled or emotionally needy people, but also robot dogs and drones used in warfare as reconnaissance technology or as assault weapons capable of operating autonomously.
It is no exaggeration to state that AI technology, whether applied as conversational and generative language models or integrated as software-driven decision-makers in sensors, machines, and robots, effectively threatens human functioning as we know it today. AI not only takes charge of previously exclusively human domains of work and activity; it also makes humans less inclined to create, learn, think, or decide for themselves.
Hence, although humans — for now — may use AI mainly to increase their productivity or simply as much-needed help in certain domains, in the long run it actually makes people “dumber,” because they increasingly tend to, are obliged to, or have no other choice than to follow or obey the answers, decisions, prompts, and actions of AI systems.
Nevertheless, we should never forget that AI technology is created by human beings, and history has proven over and over again that technology is far from faultless. Technology fails at some point or another, because both the software and the devices it controls are made by humans, who are themselves imperfect.
So, just as airplanes crash, cars break down, rockets explode, and computers suddenly shut down, AI will make grave errors too. This is bound to happen, and to happen more frequently, simply because of the widespread application of AI. In addition, AI is in continuous development, which also means a continuous stream of trial and error.
Now, if AI technology were only used to play chess, it would all be quite harmless. But as it is applied within critical decision-making systems, systems that must protect, guide, and/or support people, AI errors can, and now and again will, have profoundly negative and potentially lethal consequences, not only for individuals but for whole countries or perhaps even the world.
Another thing about technology is that — by itself — it is a neutral phenomenon. That is, it is the way humans apply technology that determines its eventual impact. And here too, history has proven that humans use technology for both good and ill. It is used to save people, but likewise to destroy them. It is used to protect people, but likewise to control them.
AI poses great risks for mankind because of its processing power, its all-encompassing application, the increasing interconnectedness and reactivity between AI systems, and growing human dependence on AI. A failure in even one critical AI system could trigger devastating chain reactions.
The questions that arise here are manifold. How autonomously will we let AI systems operate? Will AI be able to independently create new AI systems, improved versions of itself? Will we allow AI systems to protect themselves against human intervention or attempts by humans to shut them down? Will critical AI systems, and the machines and devices they control, gain some sort of self-consciousness (with the wish to protect themselves against “death”), and will it ever be possible to give them some kind of real conscience?
How “dumb” will humanity eventually become? Why would we still need intelligence or critical thinking of our own if AI systems are far more capable, thanks to their superior processing power and their instant access to all the knowledge in the world? Are we gradually digging our own graves, enslaving ourselves by creating our future masters?
I would like to think that I will not need to answer these kinds of questions in my lifetime. However, I fear that this is only wishful thinking. The pace of change nowadays is dazzling, and new developments follow one another at mind-boggling speed. Who can still keep up with all these changes and their impact on our world?
Since the Industrial Revolution, we have lived in times in which the speed and reach of technological change perpetually increase. It is a chain reaction in itself. And ironically enough, the power of AI lies precisely in its incredible ability to speed things up. I can only say: “Please, may God help us, if there is one.”






















