In the 1950s, British mathematician Alan Turing, widely regarded as the father of modern computing, famously conjectured that machines may be considered to exhibit human-like intelligence should they produce output indistinguishable from that of a human assigned the same task. In this “imitation game” that machines must play in order to think like humans, Turing predicted, advances in computing would eventually render humans unable to confidently distinguish a machine from a human peer.
A radical thought experiment at the time, the Turing test has become little more than a performance benchmark for artificial intelligence in the modern era. In controlled studies, ChatGPT’s newest models have convinced human participants that they are human 73% of the time. By Turing’s definition, then, artificial intelligence has already gained the ability to think like a human, at least on paper. New AI models intended to generate visual works challenge this notion, however, while presenting a rather grim outlook for the future of this decades-long imitation game.
To achieve a perfect imitation of the human mind is to mimic the multidimensional components of human ingenuity. Given the extensive training of large language models, or LLMs (such as ChatGPT), on gargantuan amounts of data, essentially much of the recorded information presently available to mankind, raw human knowledge is well within reach. Regurgitating information, however, is a task that simple search engines have been capable of for decades. Another significant element of the human mind is its ability to solve complex problems even with incomplete information, and here, too, the gap between human and machine appears to be closing: LLMs are continually improving their performance on rigorous problem-solving exams such as the American Invitational Mathematics Exam. The ability to devise robust approaches to unfamiliar problems, and to learn and retain new information, is indicative of human-like intelligence. Yet one notable shortfall in machine intelligence remains.
A significant aspect of human intelligence where modern artificial intelligence noticeably lags is creativity, particularly the expression of unique thought through art. For an entity to perfectly mimic a human, it must be able to create unique, self-expressive works distinctive to itself. AI models have recently made leaps and bounds in generating visual works that most people would recognize as sophisticated; in theory, then, AI has nearly mimicked human intelligence in full.
Yet there is one crucial trait that AI art lacks: a fundamental aspect of “humanity.” This means that AI, in its current state, is not capable of truly mimicking human performance; after all, an entity that lacks humanity can never truly be considered fully human. What exactly is this aspect of humanity? The trait is often nearly imperceptible and difficult to pin down, but it makes all the difference when comparing the work of a human to the work of a machine. I’d personally liken the sensation of looking at an AI-generated artwork to the uncanny valley effect: the unsettling feeling of viewing something very humanoid yet not exactly human. In the same vein, many AI artworks look very close to human-made, but are not exactly there yet. Presently, AI artworks remain rather distinguishable to many people, especially those who have been exposed to them extensively through social media.
Of course, it may be argued that, because generative AI models base their responses on existing works made by humans, AI does not inherently possess the ability to be self-expressive and original. This lack of self-expression means that these machines will never mimic the creative intelligence of a human and thus may never create fully human-like output. After all, one of the most popular formats of AI imagery at the moment is not original at all but an imitation of the distinctly human style of Studio Ghibli. It’s important to note, though, that there is a very real possibility (or, more accurately, a very real probability) that AI may reach a point of no return in terms of the “human-ness” of its output.
Innovation in a particular technology often follows an S-curve: after its infancy, a well-established technology like AI ramps up in capability before stagnating as it approaches the limit of possible improvement. It’s hard to know where on the S-curve a technology sits while it is still being improved, but there is no indication that AI’s progress will stagnate any time soon. Only two years ago, an AI-generated video of Will Smith eating a plate of spaghetti went viral for its hilarious failure to imitate human art. Today, models like Google’s new Veo 3 have demonstrated that AI-generated video is converging on a far more believable imitation of human work. The impressive progression of AI over the past few years should dispel any denial of its capabilities, including its capability to mimic humans in its visual output. In conjunction with the aforementioned lack of creativity in AI models, the potential indistinguishability of AI art means that, at some point, humans may gradually lose the battle for “creative” work to machines. And the ramifications of generative AI unfortunately extend far beyond a societal loss of human creativity.
The question remains: why imitate a human at all? This imitation of the human mind is not particularly flattering to the human minds behind the workforce, yet it is still being pushed at full force. That is the fundamental root of the problem: business interests have promoted AI as a replacement for humans rather than as a tool. Human-like intelligence within machines has become fully commercialized; it is no longer a proprietary innovation or a novel scientific endeavor. AI is here to stay, and the writing is on the wall. Its output will ultimately become almost fully indistinguishable from ours, with no need to be paid or fed or rested or credited. Should we be worried? Yes, but we should be worried primarily about how mankind will cope with its own creation rather than hunkering down and waiting for the worst. It is time to act on new legislation, new regulation and new attitudes toward AI. It is, unfortunately, now or never.