What do I think about ChatGPT’s artificial intelligence?

In my previous two blog posts, I described why ChatGPT cannot act as an artificial manager. My conclusions are based only on its current, free version, 3.5; what the future brings, we shall see. However, the obstacles to ChatGPT becoming an artificial manager are fundamental: it lacks a coherent ontology of organizational reality, and its knowledge of a manager's work is drawn only from books and the Internet, not from the real activities a manager performs.

If ChatGPT were connected to online managerial tools such as TransistorsHead.com (http://transistorshead.com/), or to all the applications of the world (ERP systems, SAP, etc.), then… But it is not, so it will not do a manager's job.

To close these considerations on using this technology in management, I have to share a few observations as an electronics engineer.

First, ChatGPT does a great job with well-structured tasks, such as:

  • mathematical tasks – it can quickly solve a quadratic equation in the real number domain and, when there is no real solution, in the complex number domain (a short sketch of this task follows the list) – it's good!
  • a scrambled egg recipe – from now on I will add a little milk or cream to my scrambled eggs; they come out very tasty according to the GPT recipe – it is good!
  • writing code, for example in HTML – or, in fact, in any programming language; on YouTube we can find tutorials on how ChatGPT can improve our work with data in Excel or write a piece of code for our website – it is very good!
  • writing stories for children – try it yourself – it’s great!
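
By way of illustration, here is a minimal sketch, in Python, of the quadratic-equation task from the list above: solve ax² + bx + c = 0 over the real numbers and switch to the complex domain when there is no real solution. The function name and the sample coefficients are my own illustrative choices, not output generated by ChatGPT.

    import cmath

    def solve_quadratic(a, b, c):
        d = b * b - 4 * a * c          # discriminant
        if d >= 0:
            r = d ** 0.5               # two real roots (or one double root)
        else:
            r = cmath.sqrt(d)          # no real solution: switch to complex numbers
        return (-b + r) / (2 * a), (-b - r) / (2 * a)

    print(solve_quadratic(1, -3, 2))   # real roots: (2.0, 1.0)
    print(solve_quadratic(1, 0, 1))    # complex roots: (1j, -1j)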

Second, ChatGPT gets lost when it has to use the Internet, not to mention YouTube or other social networks, e.g.:

  • it doesn't know who Olaf Flak is – and not only Olaf Flak, but also other scientists I know, whose activities can easily be checked in a few seconds with a Google search or in databases such as WoS or Scopus – it's very bad!
  • it knows nothing about certain areas of life – it cannot correctly name the devices the Polish company Fonica produced (turntables, amplifiers); this is admittedly quite specialized knowledge (and yet constant, historical and unchanging!), but it can also be checked by anyone on Google in 3 seconds – disastrous!
  • and finally, a completely failed test – it gives complete nonsense when asked who the rector of any university in Poland is (in its opinion, the University of Silesia in Katowice does not have a rector at all) – rock bottom! (and this is stable knowledge, available in many databases, on websites, etc.)

In conclusion, if we define artificial intelligence as operations on natural language and turn a blind eye to the fact that these operations stumble over simple facts that can be checked on the Internet, we can regard ChatGPT as artificial intelligence. However, if we treat artificial intelligence according to Max Tegmark's definition – intelligence is the ability to accomplish complex goals (https://www.amazon.pl/Life-3-0-Being-Artificial-Intelligence/dp/1101970316) – then ChatGPT is definitely not artificial intelligence. For me, it definitely is not.

Finally, two extreme opinions about artificial intelligence. The first is the call, signed by Elon Musk among others, to halt work on artificial intelligence that exceeds the capabilities of GPT-4:

https://www.wirtualnemedia.pl/artykul/gpt-4-jak-dziala-sztuczna-inteligencja-protest-elon-musk

The second opinion on artificial intelligence, as personified by ChatGPT, is summarized in a video on one of my favorite YouTube technology channels, hosted by the brilliant Adam Daredevil (English-speaking readers, please turn on the video's translated subtitles), especially from the 13th minute onward: