Are AI Researchers Too Smart to Appropriately Assess Risks?

Is artificial intelligence the greatest threat our species has ever faced? This question divides many well-known figures in the tech industry. With this short post, I want to share my suspicion that even the smartest people in the field of AI may have a limited awareness of its risks.

Why? The discussion about AI safety heated up last week when Mark Zuckerberg made a few dismissive remarks about Elon Musk's safety concerns and his demand for more regulation of AI development. There is no doubt that AI has enormous potential to improve human life. But could it also eradicate humanity?

Elon Musk, Bill Gates, and Stephen Hawking think so. At the same time, many AI experts are loudly shouting "No." But whom should we trust? Who can properly assess the dangers of AI?

I asked myself: should I trust the leading AI researchers on this question? Or should I rather trust the sound reasoning of personalities like Elon Musk and Bill Gates?

Not Seeing the Forest for the Trees

Two days ago I had a striking thought: is it possible that AI experts have too much expertise to properly assess the situation? Do they fail to see the forest for the trees? Is their knowledge so specialized that they cannot, or do not want to, acknowledge any harmful developments in artificial intelligence?

Can’t we observe similar patterns in science fiction and in entrepreneurship?

Science fiction books and movies are written by authors and screenwriters with very limited technical knowledge of the future. They know where the world is heading and which technologies exist, but they are never the greatest experts on future technologies. Nevertheless, many were able to predict the future quite accurately. Think about Star Trek! Today we use cell phones, we have video chats, and many more things that were unthinkable before.

There are also many established companies that try their best to innovate but fail to do so. They are true experts in their own fields, yet startups with limited industry knowledge come along and disrupt them. That is why even the richest companies, like Google and Apple, need to buy in external talent and acquire startups such as Siri and DeepMind.

One thing is certain: AGI (artificial general intelligence) is still some way off in the future. While some argue that it poses no risk at all, comparing the dangers of AI to overpopulation on Mars, others find it more important to address possible risks early enough.

I think that humans cannot correctly assess an artificial general intelligence. While some experts may have precise ideas about how to limit the power of future AI systems, it is not certain how those systems will actually behave.

It is important not to simply block responsible thinking about the risks of AI but to embrace it. We have to think about those risks, and most importantly we have to really understand AI: not only a limited group of people, but also the broader public.
