Why Banning Technology is Stupid

Last Saturday I read the article “You’re Fired,” published in the weekly newspaper DIE ZEIT. Steven Hill, the author of the article, argues that we should ban certain technologies: those that are about to take our jobs away.

Hill criticizes startups in San Francisco, Palo Alto, and their surroundings. Ray Kurzweil, who predicted the singularity for 2045, is for him just another idiot. For Steven Hill, the best and only option is to limit and ban technologies that would turn us into half robot, half human. But prohibitions are not a solution, especially when they are imposed at a national level. Here is my opinion on how we should deal with the coming singularity.

What Exactly Is the Singularity?

Technological singularity describes the point in time when artificial intelligence surpasses human intelligence. What happens after this event? At the point of singularity, the artificial superintelligence enters an endless cycle of improvement and innovation, because it is now able to learn and teach itself. It will self-improve at a pace that is unimaginable for humans and, as a result, will surpass human intelligence by far. Ray Kurzweil, a technology visionary, predicts a technological singularity by 2045; other industry experts place it around 2040.

Half Human, Half Robot?

AI technologies are getting faster and smarter. Computers running AI software are already helping us improve current technologies; they help us fight cancer and HIV. The controversial part is that humans will begin implanting this artificial intelligence into their own bodies. For example, nanobots could cure diseases before they even show symptoms. Larry Page, the co-founder of Google, describes the future in similar terms in Steven Levy’s book *In the Plex: How Google Thinks, Works, and Shapes Our Lives*: “It [Google] will be included in people’s brains. When you think about something you don’t know much about, you will automatically get the information.”

Ethical Point of View

The major objections to such technologies are ethical ones. Do we really want to become half robot? Do we really want to live forever? On the other hand, we should ask ourselves the opposite questions: Do we really want to forgo and prohibit great technological progress? Do we really want to ban technologies that could cure cancer, HIV, and many other diseases?

It is a highly controversial topic, but in the end it will be impossible to forbid artificial superintelligence. And it is even more questionable to ban these advancing technologies just to save jobs.

Do We Really Want to Save Jobs?

Steven Hill is deeply anxious about future technologies, and his main argument for banning them is to save jobs. But if we forbid artificial intelligence, we limit innovation in every field and end up in a complete technological stagnation. That stagnation would, first of all, cause job losses in the high-tech R&D sector. The times of new product launches every year would be over. People would no longer work on new technologies; they would work simply to feed and care for themselves.

If we accept technological stagnation, we will save jobs. But which jobs exactly? We will save hard-labor, blue-collar jobs. Are those workers really happy in their jobs, or are they working them because they need the money? Do we really want to preserve child-labor jobs in Asia by stopping robots from taking them?

Time to Focus on Happiness

I personally think that the large-scale loss of hard-labor jobs is not necessarily a bad thing. Is it bad if people who hate their jobs lose them? Isn’t it a better alternative to let them focus on what makes them happiest?

I once read a book in which a very successful American entrepreneur talks to an apparently poor Mexican fisherman. The entrepreneur asked the fisherman why he works only two hours in the morning. The fisherman said: “Why should I work more if those two hours provide my family and me with enough money to survive?” The businessman was shocked and said: “But you could expand your business, make more profit, buy new fishing boats, employ people, grow, list your company on a stock exchange, and then you will be rich and able to provide your family with money forever.”
The fisherman gave him a puzzled look and said: “Well, I don’t want to. I am happiest when I can spend 22 hours a day with my family and my friends. This gives my life meaning. I don’t care about the money.”

Basic Income as a Solution to Job Losses

We need to apply the same underlying idea to the working class who will lose their jobs to artificial intelligence and automation. Why not let them shed the jobs they hate and pay them a basic income? With this basic income, they can focus on what makes them happy. Let them become musicians, artists, or actors. Let people enjoy every hour of the day with their friends and families. Let humans explore the beautiful nature of this planet. Let everyone take part in social work such as elderly care, education, and more.

How to Finance a Basic Income

Even though our Western countries could, in theory, already pay every citizen a monthly basic income of €1,000, we need to look further. Bill Gates has proposed a robot tax: people who currently pay income tax will be replaced by robots, and states should not allow companies to make that replacement without paying a robot tax.

But it is not only about a robot tax. Future economic value will increasingly be the product of pure artificial intelligence, robots, and other forms of automation; human labor will no longer stand behind it. For this reason, nations will need to introduce a 70%, 80%, 90%, or even 99% tax rate on companies.
This would result in unprecedented tax revenues, which could be spent on the best living environments, a basic income for everyone, free education, and more.
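To give the numbers above some scale, here is a back-of-the-envelope sketch. All figures in it are illustrative assumptions (a roughly Germany-sized population and an invented profit pool), not real fiscal data:

```python
# Back-of-the-envelope estimate: what does a 1,000 EUR/month basic income
# cost per year, and how far would a high tax on automated profits go?
# Every number here is an illustrative assumption, not real data.

population = 80_000_000         # assumed number of citizens
basic_income_per_month = 1_000  # EUR per citizen per month

annual_cost = population * basic_income_per_month * 12
print(f"Annual basic-income cost: {annual_cost / 1e12:.2f} trillion EUR")

# Assume the profits generated by AI and automation form the tax base.
automated_profits = 1.5e12  # assumed annual corporate profits in EUR

for rate in (0.70, 0.80, 0.90, 0.99):
    revenue = automated_profits * rate
    coverage = revenue / annual_cost
    print(f"{rate:.0%} tax -> {revenue / 1e12:.2f} trillion EUR "
          f"({coverage:.0%} of the basic income)")
```

Under these made-up assumptions, even the 70% rate would more than cover the basic income; the real question is of course what the actual tax base would be.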

Is it Possible to Ban Technologies?

Apart from finding the idea of “banning technologies to save jobs” ridiculous, I also think it is impossible. If a nation like the United States decides to forbid all research and development in artificial intelligence, what will actually happen? Even if 99% of all nations ban these technologies, the remaining 1% will out-compete every other nation in every field imaginable. They will have more advanced weapons, better computers, and a wealthier and probably happier society. They can then impose those technologies on other nations, by choice or by accident.

The Best Alternative

I think allowing technological automation to take place is the best alternative for many hard-working people worldwide. They will no longer have to work in slave-like jobs they hate. They can spend quality time with their families and friends, and work on the creative things they have always wanted to achieve.
In the end, we should not forbid any technology that might improve our worldwide standard of living.

Allow Super-AI, but Limit It

We should allow the emergence of artificial superintelligence and technological automation. But while we are doing so, we need to set clear limits, and these limits need to be designed deliberately. OpenAI is a non-profit AI company whose aim is to create and promote friendly AI. Elon Musk, Sam Altman, Reid Hoffman, and Peter Thiel are all actively involved in the OpenAI project. Here is a quote from Elon Musk, taken from the OpenAI Wikipedia page, to end this article:

Musk poses the question: “what is the best thing we can do to ensure the future is good? We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing AI in a way that is safe and is beneficial to humanity.” Musk acknowledges that “there is always some risk that in actually trying to advance (friendly) AI we may create the thing we are concerned about”; nonetheless, the best defense is “to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower.”
