An inspiring interview: what will the consequences of artificial intelligence be, according to Prof. Andrzej Dragan?

A few days ago I watched an inspiring interview by Patrycjusz Wyżga with Professor Andrzej Dragan. Both the presenter and the interviewee impressed me with their professionalism and the originality of their views. Since that impression has stayed with me for more than a week, I decided to write a few words about what was covered in the interview.

Patrycjusz Wyżga: Are you afraid of artificial intelligence?

Andrzej Dragan: I can’t imagine such a variant that it won’t end badly for us.

And it only gets more interesting further on… Here are some excerpts.

“I can’t imagine a scenario that would be successful for us. (…) I am not a fatalist, I am rather a well-informed realist.”

“It follows from elementary game theory that if one species goes extinct and another does not, it is never because they bite each other to death. (…) Species die out when they compete for the same resources, in the same currency. (…) And we are breeding for ourselves a species that will be more efficient than us in the only thing that has given us dominance on Earth, which is intellect.”

“I have an eight-year-old son (…) and he can’t solve the logic puzzles that ChatGPT can solve. (…) What does this mean? If ChatGPT already exceeds the capabilities of eight-year-olds at solving logic tasks, and it is developing faster than eight-year-olds develop, it means that our children will never again solve any logic problem that is too difficult for artificial intelligence.”

“It takes longer to search for an image (photo) than it takes artificial intelligence to create such an image. So having a database of photographs is no longer necessary, and therefore neither is the profession of photographer…”

You can watch the entire interview on YouTube: Hodujemy gatunek, który będzie dominował nad nami intelektem: prof. Andrzej Dragan – didaskalia #7

The interview left me with many questions, and I also see some controversial theses in it. I will flag only a few.

First, looking through the eyes of an electronics engineer: artificial intelligence, as software, is still confined to electronic devices (in practice to hundreds of interconnected computers). It is not a nebulous being that, as in a sci-fi movie, travels along a cable from the wall socket, through the traces of an electronic circuit, into the processor. As a result, one can always simply cut the power if one so desires. Even if parts of the software were spread across hundreds or thousands of computers in different parts of the world, this would still be possible.

Second, artificial intelligence must have some form of physical impact on the real world we live in. One could imagine it shutting down some manufacturing process (if that process is connected to the Internet) or replacing the content of websites. The height of cleverness would be the trick well known from thrillers: disrupting the traffic lights at an intersection. But what comes next? After all, artificial intelligence will not drive a car (I am leaving aside exceptional and rare models), it will not turn off the washing machine in our home (at least not 99.9% of the washing machines in the world), nor will it lock the doors and windows at a school as a silly prank on the students. We can, of course, imagine all these simple mechanical devices being connected to the Internet (and fitted with some kind of actuators, control motors, etc.), but in my opinion this will not happen on a large scale even by the end of the 21st century. Paradoxically, the most highly developed countries will be the most vulnerable, but even there only in a few spheres and only at selected points in life.

Third, artificial intelligence (represented for the time being by ChatGPT) would have to transform itself from a talking head into an artificial manager with real influence on people’s lives. This would not be a visualization of a manager’s head giving orders on a computer screen, but a physical humanoid with human skills: walking, lifting things, physically putting pressure on people, having the right appearance, and so on. Otherwise, humans would have to be the “devices” executing the will of artificial intelligence. Nevertheless, a RoboCop in the role of manager does not yet seem to be on the horizon of technological development, and, reducing everything to electricity (I am an electronics engineer, after all), it would also have to have some kind of power source…

Be sure to watch this fascinating interview; it is a must-see.