Back in my younger years I greatly enjoyed Science Fiction, and from junior high through graduate school, I probably read a thousand or more books in that genre, captivated by the enormous range of interesting ideas presented.
My two favorite authors had originally been Isaac Asimov and Arthur C. Clarke, though they were eventually replaced by Larry Niven and Roger Zelazny. However, I also enjoyed a multitude of other SF writers, and among them there was someone calling himself Cordwainer Smith. During the 1950s and early 1960s he’d produced a relatively small but influential body of work, with his stories heavily laced with puzzling terminology and concepts, mostly set in a future history of the next few thousand years during which humanity was governed by the Instrumentality of Mankind. Since he wrote under a pseudonym and his chronicle was filled with rather bizarre but strangely self-consistent elements, members of the SF community sometimes jokingly speculated that he might actually be a time-traveler from the distant future, amusing himself by passing along various half-remembered tales of his own past eras.
Then after his 1966 death from a heart attack at age 53, his identity was revealed as Paul Linebarger, a military officer and academic specialist on East Asia, who had spent most of his career working as a psychological warfare expert for the CIA.
One of the many minor elements in his stories had been the notion of using laminated mouse-brains as computers, embedded with the complete knowledge and personality of an individual. This technology allowed space travelers and others to take along with them a complete collection of top specialists, whose expertise could be drawn upon as circumstances required. That strange concept recently came into my mind when I began to play around with the current generation of AI chatbots.
For many decades, I’d noticed the periodic waves of media hype surrounding Artificial Intelligence (AI) systems. Wild claims about the transformative power of AI were invariably followed by bitter disappointment as the technology turned out to be far less practically useful than originally suggested, so I’d become rather inured to such hype. Over the last couple of decades, AI systems had passed the remarkable hurdle of defeating the world’s best players at Chess and then Go, certainly a very important achievement but one that seemed to have few practical implications for my own activities. But then a few years ago, Google’s AI-based language translation system had become remarkably effective, so I’d incorporated it into our own website and occasionally used it when reading foreign-language articles on the Internet. However, that was the limit of my own AI usage.
Then in late 2022, OpenAI’s ChatGPT system was released and soon became wildly popular, attracting over 100 million worldwide users within a couple of months and sparking a gigantic AI technology boom that pushed the stocks of various AI companies and chipmaker Nvidia to stratospheric heights, with the latter now worth around $3 trillion. These Large Language Models (LLMs) were trained upon many billions of words of text scraped off the Internet and could then apparently respond to questions in well-formed English sentences, something that seemed likely to have useful and important applications.
But although these were exciting results for the world and scientific progress, they didn’t seem to have much applicability to my own research and writing, and this work was keeping me so busy that I never even bothered trying to test ChatGPT or any of the other rival chatbots.
Although our website contained many tens of millions of words of articles and posts along with hundreds of millions of words of comments, this total seemed orders-of-magnitude too scanty for any AI system to find useful. Anyway, so many of our authors and commenters were in such sharp disagreement about everything that their totally conflicting views would surely confuse any chatbot trying to process them for meaning.
However, just a few weeks ago I discovered I’d been mistaken about all of this. Apparently, although generic AI systems must first be trained upon enormous quantities of text, once that has been done they can then be “focused” upon a much smaller body of content, treating it as their primary source of knowledge and drawing upon it to answer whatever questions are asked. Someone tested my own writings in that way, and the results amazed me, with the chatbot very effectively responding to queries using information drawn from my own articles. So apparently it was easy to produce a simple chatbot that reasonably reflected my own perspective and accumulated knowledge, something I found totally astonishing.
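Nothing above specifies how this “focusing” was actually implemented, but one common way to achieve the effect without retraining the underlying model is retrieval-augmented generation: the generic model is left untouched, and the passages from the smaller corpus most relevant to each question are fetched and supplied as context alongside it. The sketch below illustrates that idea in Python using OpenAI’s public API; the directory name, chunk size, and model choices are purely illustrative assumptions, not a description of how the actual Ron Unz Chatbot was built.

```python
# Minimal retrieval-augmented sketch, assuming a folder of plain-text articles.
from pathlib import Path
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def chunk_text(text, size=1500):
    """Split a document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

# Gather and chunk the writer's articles (hypothetical directory of .txt files).
chunks = []
for path in Path("unz_articles").glob("*.txt"):
    chunks.extend(chunk_text(path.read_text()))

def embed(texts):
    """Return one embedding vector per input string."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# For a corpus of millions of words these calls would need to be batched;
# a single call suffices to illustrate the idea.
corpus_vectors = embed(chunks)

def ask(question, k=5):
    """Answer a question using the k most relevant chunks as primary context."""
    q_vec = embed([question])[0]
    # Cosine similarity between the question and every chunk.
    sims = corpus_vectors @ q_vec / (
        np.linalg.norm(corpus_vectors, axis=1) * np.linalg.norm(q_vec))
    context = "\n\n".join(chunks[i] for i in np.argsort(sims)[-k:])
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer in the author's voice, drawing primarily on "
                        "these excerpts from his articles:\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content
```

A call such as `ask("…")` would then return an answer grounded in the retrieved excerpts rather than in whatever the generic model happened to absorb from the wider Internet, which matches the behavior described above.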
I have absolutely no expertise in AI and until a couple of weeks ago I’d never even once used a chatbot. But considering the very impressive results I saw, I began speculating about how that might be possible.
Consider the following thought-experiment. Suppose you had a text-analysis system and you were able to provide it with every word spoken or written by a given individual during his entire lifetime, supplemented by all the rest of the world’s information. Perhaps if the software were sufficiently powerful, it could then “simulate” that individual, saying the things he might say and responding in plausible fashion to any questions it might be asked.
Obviously, the content we can provide to an AI chatbot is merely a minuscule fraction of that total accumulated lifetime of human output. But there’s no reason to expect that the quality of the simulation scales linearly with the volume of content provided, and perhaps a logarithmic scaling might be more likely. Moreover, published writings are surely far more significant than someone’s casually spoken utterances. So given these arguments, it then becomes an entirely empirical question of whether providing hundreds of thousands or millions of published words would be sufficient to produce a simulation of the writer good enough to be worth using. And at least in my own case, I think the answer is a resounding “Yes!”
My own body of writing is fairly substantial, totaling about 1.9 million words of articles, with the bulk of these produced in the last six years. And although many would surely disagree with much of my material, my work has been extremely self-consistent over time, so much so that I’d still stand behind at least 99% of everything I’ve published over the last thirty years. Thus a chatbot based upon my articles wouldn’t get confused by too many conflicting claims.
The last point seems like a very important one. My impression is that most general chatbots are created based upon an enormous scrape of as much Internet content as possible. But since a great deal of that content is contradictory or conflicting, the effectiveness of the resulting simulation might scale far less than linearly, perhaps more like the square-root-of-N mean distance produced by a random walk. So a chatbot derived from a single individual’s corpus of writing minimizes such distorting conflicts, and the results may be far more effective since they are based upon a far more coherent and aligned body of primary content. The fully coherent light of a low-watt laser can do many things much more effectively than the completely incoherent light of a powerful sun-lamp.
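For readers who don’t recall the reference, the random-walk figure alludes to the standard result sketched below: N independent steps that partly cancel one another produce a net displacement growing only as the square root of N, so piling conflicting material together adds far less than the raw volume suggests. Applying this to training text is of course an analogy rather than a derivation.

```latex
% Standard random-walk result behind the analogy: N independent steps of +/-1
% largely cancel, so the net displacement grows only as sqrt(N), not N.
\[
  S_N = \sum_{i=1}^{N} X_i , \qquad X_i = \pm 1 \text{ with equal probability}
  \quad\Longrightarrow\quad
  \mathbb{E}[S_N] = 0 , \qquad \sqrt{\mathbb{E}\!\left[S_N^2\right]} = \sqrt{N}.
\]
```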
All these speculations can only be resolved empirically, so I had my writings fed into a ChatGPT-4o system, producing a customized Ron Unz Chatbot, and asked it a few simple questions, comparing its responses to those of the generic OpenAI chatbot, with the differences being just what I would have expected. The chatbot that simulates my responses is now freely available for anyone to use, and I’m sure that others far more experienced in AI usage than myself will be able to ask it much better questions. I’d be glad to learn the results of such testing….