A number of articles and videos have appeared recently warning against the uncontrolled development of artificial intelligence. Experts of greater and lesser renown in machine learning, artificial intelligence, and pattern recognition are making statements on the subject. With this post, I begin a series on such statements that have piqued my interest and contain something non-obvious.
In this article, I look at a short speech by Dr. Geoffrey Hinton, regarded as one of the fathers of artificial intelligence. Here is the video footage of the speech:
How can it be summarized, and what is interesting about it? Dr. Geoffrey Hinton argues that artificial intelligence is quite different from the biological intelligence found in humans, for two reasons:
First, if 10,000 humans each learn something about the world, that knowledge spreads slowly among them. If those 10,000 learners are computers, however, they can share what each has learned with all the others almost instantly, and this is where the power of that knowledge comes from.
Secondly, the pace of development is deceptively quiet: new versions of ChatGPT appear month after month, each with more capabilities, a pace incomparably faster than any development of intelligence in earthly beings to date.
These are very interesting insights. Indeed, since both phenomena occur at the same time, their combination is more than a logical conjunction: in formal logic, the product of two true statements is still simply true (i.e., 1), whereas here the phenomena reinforce each other, amplifying their effect through positive feedback (resonance).
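To make that contrast concrete, here is a toy sketch (my own illustration, not from Hinton's talk): a Boolean AND of two true factors simply yields true again, while letting two growth factors act on each other at every step compounds them into runaway growth.

```python
# Toy illustration (not from Hinton's talk): combining two factors
# as a Boolean AND vs. letting them compound via positive feedback.

def boolean_and(a: bool, b: bool) -> bool:
    # Formal conjunction: true AND true is still just true -- no amplification.
    return a and b

def positive_feedback(sharing_gain: float, release_gain: float, steps: int) -> float:
    # Each step, capability is multiplied by both factors at once,
    # so the two effects compound instead of merely co-occurring.
    capability = 1.0
    for _ in range(steps):
        capability *= sharing_gain * release_gain
    return capability

print(boolean_and(True, True))          # True
print(positive_feedback(1.5, 1.2, 10))  # grows far beyond 1.0
```

The gain values and step count are arbitrary; the point is only that conjunction caps the combined value at "true", while feedback multiplies the effects together at every step.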
Who is Dr. Geoffrey Hinton?
In 1977, he earned a doctorate at the University of Edinburgh for his dissertation on neural networks, Relaxation and its Role in Vision, but the topic attracted little interest at the time, so funding was hard to come by. Hinton emigrated to the United States, where he joined a group of cognitive psychologists at the University of California, San Diego. In two papers in 1986, he also popularized the concept of backpropagation, developed in work with David Rumelhart and Ronald J. Williams, which is used for machine learning.
In the US, too, this research was not appreciated, so he moved to Canada, where in 1987 he inaugurated the Learning in Machines & Brains program and established a thriving research center at the University of Toronto. In 2012, he and his team won the annual ImageNet competition with a system that could recognize 1,000 object categories through deep learning. In 2013, his company DNNresearch Inc. was acquired by Google, and he worked for the corporation until 2023, when he left so that he could speak freely about the risks of artificial intelligence.