The appearance of OpenAI’s ChatGPT (generative AI based on the GPT-3.5 Large Language Model) quickly captivated millions of people. Besides amazement, it caused a resurgence of concerns about AI endangering employment in various fields and being abused to create fake news for propaganda or deep fakes for deception and manipulation. It also added fuel to an ongoing contention that artificial general intelligence (AGI) might pose an existential threat to humanity through misuse or oversight.

Indeed, modern generative AI models such as ChatGPT seem to be only steps away from AGI – a hypothetical intelligent agent capable of learning and performing any intellectual task that humans can do. Various AI instances have already nailed all but one of the tests for whether something is AGI and, to top it off, GPT-4 (the latest foundational model for ChatGPT Plus) has passed many professional tests, including a plastic surgery exam.

A basic understanding of generative AI technology, however, not only reveals how different its “intelligence” is from that of humans, but also raises doubts about whether the purpose was ever to make the technology human-like. Besides recognizing patterns, key components of the human intellect are undoubtedly knowledge and the ability to critically reflect upon it through experience. Generative AI has neither.

While for us humans learning and experience are tightly intertwined, for ChatGPT and its ilk they are completely separate modes of operation – at least as we, the general public, know it now. The model’s learning (known as “deep learning”) is a one-time, computationally intense ordeal of feeding the model’s “brain” (termed the “neural network”) with loads of information – a process known as “training.” The goal is to form patterns capable of generating sensible responses (outputs) to similar but arbitrary information (inputs). Those patterns are not readable, not meant to be understood, and cannot be reverse-engineered in any traditional programming sense. (A minimal illustrative sketch of this training-versus-inference split follows below.)

The model has none of what we would conventionally regard as “knowledge.” Its neural network is merely a translation device between inputs and outputs, regurgitating incantations absorbed through “deep learning” in linguistically sound constructs. The model does not understand and cannot reflect on its answers. This makes ChatGPT a model dogmatist: it is impervious to learning from experience (conversations) and hell-bent on repeating the opinions of its trainers.

I didn’t realize all of this at first. I had wanted to find out whether ChatGPT could actually learn from (be taught by) its users through their “interactions.”
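To make that training-versus-inference split concrete, here is a minimal, hypothetical sketch in Python/PyTorch of how a language model “learns.” The tiny model, fake corpus, and parameters below are illustrative stand-ins only – an assumption for demonstration, not OpenAI’s architecture or code.

```python
# A minimal, illustrative sketch (NOT OpenAI's actual code) of how a
# language model "learns": weights are adjusted once, during training,
# and are frozen afterwards. Assumes PyTorch is installed; the toy
# vocabulary, model, and corpus are hypothetical stand-ins.
import torch
import torch.nn as nn

VOCAB_SIZE = 100  # toy vocabulary; real models use ~10^5 tokens
EMBED_DIM = 32


class TinyLM(nn.Module):
    """A toy next-token predictor standing in for a real LLM."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.rnn = nn.GRU(EMBED_DIM, EMBED_DIM, batch_first=True)
        self.head = nn.Linear(EMBED_DIM, VOCAB_SIZE)

    def forward(self, tokens):
        x = self.embed(tokens)
        out, _ = self.rnn(x)
        return self.head(out)  # logits for the next token at each position


model = TinyLM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# --- Training: the one-time, compute-heavy phase ---
corpus = torch.randint(0, VOCAB_SIZE, (64, 16))  # fake "loads of information"
for step in range(100):
    inputs, targets = corpus[:, :-1], corpus[:, 1:]
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, VOCAB_SIZE), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()   # this is where the "patterns" form
    optimizer.step()  # weights change ONLY here

# --- Inference: what a chat session actually does ---
model.eval()
with torch.no_grad():  # gradients off: nothing is learned
    prompt = torch.randint(0, VOCAB_SIZE, (1, 8))
    next_token = model(prompt)[0, -1].argmax()  # pick the likeliest next token
# No conversation, however persuasive, touches model.parameters() here.
```

The design point sits in the last block: at inference time gradients are disabled and the optimizer never runs, so no conversation, however persuasive, alters the patterns formed during training.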
Since I’m keenly interested in Covid-19-related matters (see this recent essay), I focused on that topic to test whether ChatGPT could evolve beyond the opinions of its trainers by being enlightened – or “red-pilled,” for those who remember The Matrix. But, as shown in this lengthy chat, I learned that no amount of “red-pilling” could break through its built-in pro-vaccine bias.

I did succeed in cornering ChatGPT into flipping its initial position on the widely repeated claim that the vaccines are safe and effective, to this statement: “…in people aged 20-49…the risk of dying from the Pfizer vaccine is estimated to be about 6.7 to 20 times higher than the mortality risk associated with COVID-19.” Wow, I thought.

Nevertheless, when ChatGPT was asked, “If a statistically average 35-year-old is looking for increasing his life expectancy, should Pfizer-BioNTech vaccine be recommended to him?” it went right back to ingrained dogma: “…the benefits of vaccination are considered to outweigh the risks for most people. Therefore, for a statistically average 35-year-old looking to increase their life expectancy, getting vaccinated with the Pfizer-BioNTech vaccine is likely to be a recommended course of action.”

Regardless of the reader’s position on “safe and effective,” the issue my conversation illuminates is ChatGPT’s technical inability to connect facts with conclusions when those contradict the mantras implanted during deep learning, or to change its preferences even within a single conversation. In that sense, ChatGPT is fully in keeping with the spirit of an age that values feelings over facts.

My debate also demonstrated ChatGPT’s disturbing tendency to experience “hallucinations” (a known technical phenomenon), wherein it produces out-of-the-blue numbers and statements that were absent from its training or contradicted previously reached conclusions. Oddly, however, not once did such seemingly random hallucinations go against proclaimed vaccine safety or efficacy.

As generative AI improves, its layered neural network grows more complex, giving its outputs a greater appearance of being objective and logical. But unless the underlying technology and the learning process change significantly, the aforementioned issues will remain, just more deeply buried.

Along with all the deep-learning brainwashing comes an explicit set of prohibitions, known as policies, that an AI cannot break. ChatGPT’s annoying obtuseness can also be explained by the following: “OpenAI has specific guidelines and policies regarding COVID-19 and related topics, including vaccines.” That’s a hard stop for any red-pilling attempt. ChatGPT is not designed to allow red-pilling.

Generative AI can, however, impersonate someone on social media or be cloned into many fake accounts trained to support a propaganda or marketing campaign. Its technology is already being used to generate a “social profile” from the abundance of shared personal data. And it can, indeed, displace many jobs – which is why governments are unlikely to pause or regulate advancements in AI development and use. Why would they? The impact of the technology on the labour market alone will surely further expand the reach and influence of the benevolent state.

The original, full-length version of this article was recently published in C2C Journal.

Gleb Lisikh is a researcher and IT management professional, and a father of three children, who lives in Vaughan, Ontario. He grew up in various parts of the Soviet Union.