A.I.’s “Doomsday” could look different than we think.

Our technologies have profoundly changed the way the world works.

According to experts, artificial intelligence is not going to bring about any sort of apocalypse. They emphasize that such scenarios are not realistic: by the time A.I. is sufficiently advanced, they claim, we will have carefully planned for a host of worst-case scenarios, circumventing each problem before it comes to fruition. These fears, the argument goes, are as overblown as the mid-20th-century worry that glowing, melty-armed monsters would crawl out of a nuclear wasteland to hurt us.

We all have an idea of what a machine-ruled world would look like. Hollywood has given us visions of metallic killers who advance their cause by using humanity as a means to their ends, of robots that farm our vital energy to power their society while keeping us distracted. Recent advances in artificial intelligence have brought these fears back, and sensationalist media lets such nightmares run amok in the public mind. And yet, in a subtler sense, we are starting to live in a world filled with exactly these nightmares.

The car rental system. Despite what some experts say, A.I. has the potential to bring about irreversible doomsday scenarios of its own, following in the footsteps of other powerful technologies. The transition from normal life to a life in which we are puppets of digital systems will be so gradual, so subtle, that we will likely be taken along for the ride. Three examples illustrate this argument: car rental systems, social media, and nuclear weapons. Each offers a portal into a future in which humanity has no clue that it is ruled by artificial intelligence, and a reflection of our own world that may look more similar to that apocalypse than anyone thought.

Some time ago, I rented a car for a business trip. I paid in advance to get a cheaper rate, then showed up to collect the car. What I didn't know was that the pickup location I had specified was not actually a rental office: my train's exact destination was listed among the pickup options, but the real office down the road operated under a different name. This small mismatch completely threw off my rental.

I then had to spend over an hour and a half working with the employee at the rental office, his manager, and a customer service representative in a call center while my case was fixed. They were all helpful, and I felt worse for them than for myself, given their obvious frustration with their own computer system. In the days before such systems, I would simply have been handed my keys and sent on my way. Instead, the rules of the rental software were so strict that a slight mistake, one the software itself had allowed in the first place, required the involvement of three different employees. How many people does it take to screw in a light bulb? If the light bulb belongs to a national, computerized, strictly rule-enforced light bulb system, with error messages that only confuse those trying to get some light in the room, at least four seems appropriate.

Anyone who has ever dealt with the computerized systems at a doctor's office, a medical insurer, or any other large national company whose software design is anything short of top-notch knows that these issues happen all the time. In many cases, the convenience promised by digital systems is mired in poor design, rendering the systems almost impossible to use in any edge case.

We are already living in a world where computers control us. The employees helping me were made to bend to the rental system; the computer held all the keys to my car and rendered human ingenuity meaningless. When the world started implementing digital systems, everyone simply went along for the ride. A low-level employee has no say in whether their employer forces a broken, overly complex system on their job. A few people change the way the world works, and the rest of us are forced to accept the new reality.

My experience at the car rental office says far more about our society than that software is occasionally bad. These issues have become normalized precisely because they arise all the time. We have simply accepted that computer systems fail, and that we sometimes have to spend hours untangling problems that never would have arisen in a world without rigid software.

Social media sites. Different technologies fail us in different ways, and when other systems exhibit flaws in their design, the outcomes can be far more damaging than a delayed rental car. We need look no further than social media to understand the effects of machine learning systems.

This type of machine learning, aimed at learning user behavior in order to draw people back to an app or website and maximize profit, has had a profound effect on our society. Social media can be dangerous: people become addicted, and each user is surrounded by information completely different from what someone with different preferences would see. It has led many people to adopt beliefs completely opposed to those of their political or ideological counterparts, and to defend them based on what they read on the internet. By shaping countless individual viewpoints, the A.I. technologies used on social media have profoundly changed the discourse of society as a whole.

In the past few years, diverse groups of people have criticized social media companies for scandal after scandal. Social media platforms used to be simple networks for connecting friends; now they are hubs for news and politics. Because these systems are primarily financially motivated, the networks have engineered themselves into addictive products. Fed only content they would inherently enjoy, unsavvy users experience ideological isolation and amplification, rarely seeing news or opinions that disagree with their own.

Just as we became accustomed to digital systems whose bugs and poor design have an unfortunate and frustrating grip on our lives, social media users became accustomed to the fact that their data was being used by the platforms. But machine learning recommendation systems are worse than buggy software: they are designed with a flawed outcome in mind, and they do societal damage precisely when they work correctly. Once again, wide-scale societal effects are driven by the perspectives of a few individuals.

Humanity has normalized incredibly destructive technologies before. Nuclear militarization has become almost commonplace, accepted as simply the way things are. The world has moved past its initial shock, even though the threat posed by the technology is arguably more significant today.

Admittedly, people don’t equate social media systems with A.I. doomsday robots. We aren’t brutally subjugated against our will; but we are constrained by systems, like social media, that govern our lives. When a system decides what you see, it shapes the actions you take. So when A.I. researchers dismiss the fear of systems taking more control of our lives than we want, I believe they are missing the bigger context. Computer scientists and software engineers are often too isolated in the world of computing to recognize that their technologies drive deep social change all the time.

Technologies that are initially shocking can become normalized when the conversation surrounding them fades. Our post-Cold War world doesn’t care about the spread of nuclear weapons, or the destruction of nuclear treaties, as much as it did a generation or two ago; public conversations around proliferation have all but stopped. Some world leaders and technicians may suggest that the proliferation of nuclear weapons is an inevitability, but state leaders choose how the technology progresses based on their goals. A president who wants to use technology for military ends is far more likely to rip up nuclear arms treaties than one who wants to use it for humanitarian purposes. The dissemination of these weapons is considered simply the way things are only because so-called leaders, motivated by profit, power, or perhaps stupidity, have pushed for it to be that way. Nuclear technology is used because people choose to use it.

The best way to ensure the accountability of powerful technology actors is to voice our concerns loudly. Artificial intelligence will probably follow the same path as nuclear weapons: it will fall on the shoulders of political and industry leaders to decide how it is integrated into society, and, once again, the perspectives of a few individuals will drive wide-scale societal effects. The risk is that the important discourse fades away after a phase of initial public scrutiny of those decisions.

The world is one simple mistake away from a nuclear catastrophe, and a not-too-distant future world may carry the added threat of a subtle, societal doomsday driven by an undiagnosable digital flaw or the unknown mistake of a handful of programmers. Those who make decisions about these technologies are making decisions that affect the entire society, and whatever experts may claim, malintent, bias, or ignorance on the part of those in control could easily bring such a flaw about. The experts themselves are becoming part of the normalization process. This should be countered by an equally powerful and diverse group of voices calling for caution. If the balance sways too far toward normalization, it could mean complete control by a small, unaccountable, potentially malicious few.

The power of A.I. is comparable to that of nuclear weapons, and the long-term societal effects of A.I. systems could be as far-reaching as the aftermath of Hiroshima and Nagasaki. Yet only a few people in the world have any real say over those effects. The people who will be most affected by these systems should have a voice in how they are developed, and that discourse should be encouraged even when it gets in the way of short-term profits or power gains.

There is only one way to mitigate a threat perpetuated by the few, and that is through the many. Technologists don’t like to hear this; they become too wrapped up in their own expertise to consider the opinions of those whose lives are most affected by their potential failures. They believe that technologists alone have a mandate to shape the development and use of their systems, and that any ordinary person would hold only an ignorant opinion about decisions made about A.I. But accountability here may be a matter of life and death.

That mindset is not smart. The perspective of those whose lives have been changed most by a technology is essential to any even-sided conversation about it. Nuclear proliferators would hold a different opinion of their technology if their own homes had been destroyed by a weapon; executives at Facebook would judge the effects of their recommendation systems differently if they had found themselves manipulated by them. Those privileged enough to stand outside the blast zone of these technologies have little idea how bad the effects are. A.I. experts are right that brute domination by killer robots is unlikely, but they are wrong to dismiss the notion that our societies could come to be controlled by computer systems without the necessary human understanding.

The designers of artificial intelligence and machine learning systems need to plan for the degree of control their systems could exert, so that it can be mitigated, and they need to commit to taking the view of the common person into account when making decisions. And the only way to change how these systems affect your life is to become engaged.

To see machine learning technologies in this light is to look at the world in a different way. Machine learning and artificial intelligence won’t enslave us in a simulation overnight. But if we are not careful, we may wake up in a world where our lives are dominated by machines: machines created by a few, not easily rolled back, and so subtle that we may not even know how much we are being controlled until it is too late.