[excerpt from Fatal Certainty]
… Every algorithmic predictive system ultimately must be created out of whole cloth, its equations, variables, operands, etc. representing some reality completely outside of our computer systems – a reality we would like to predict. And every element of an algorithmic system represents an assumption about how something in the outside world interacts with other things. As David Hume wrote centuries ago, “We can at least conceive a change in the course of nature; which sufficiently proves, that such a change is not absolutely impossible.”
In other words, there is no reason for us to assume that our variables, equations, operands, etc. will remain relevant to the reality we are trying to model and the future we are trying to predict; in fact, all experience seems to indicate that change in these things is a constant feature of human existence. How would an artificial intelligence with no broader understanding of the outside world learn to recognize when its algorithms have become reductiones ad absurdum, utterly unrepresentative of that outside world? How would it go about identifying entirely new variables and equations to employ in its predictive mission? To date, only human imagination is capable of such reality-based wholesale revision of its models of the material world.
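A deliberately toy sketch makes this concrete. Suppose a predictive system freezes its model of some process as y = 2x, calibrated when that genuinely was the relationship; everything below – the model, the variables, the “change in the course of nature” – is hypothetical, chosen only to show how invisible such a change is from inside the model’s own equations:

```python
import random

# Hypothetical illustration: a "predictive system" whose entire model of
# the world is the frozen assumption y = 2x, calibrated on yesterday's data.
def predict(x):
    return 2 * x

random.seed(0)

# Yesterday's reality really did follow y = 2x; the model looks excellent.
old_world = [(x, 2 * x + random.gauss(0, 0.1)) for x in range(100)]

# Then the world changes in a way no variable in the model represents:
# the true relationship becomes y = -3x + 50.
new_world = [(x, -3 * x + 50 + random.gauss(0, 0.1)) for x in range(100)]

def mean_abs_error(data):
    return sum(abs(predict(x) - y) for x, y in data) / len(data)

print(f"error before the change: {mean_abs_error(old_world):.2f}")  # ~0.08
print(f"error after the change:  {mean_abs_error(new_world):.2f}")  # ~203
# Nothing inside the algorithm's own equations announces that its model
# has stopped being about the world at all.
```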
This is not to say that machines will never develop human-style imagination. There is no scientific reason to suspect that our imaginations do not arise from our physical makeup, especially our brains. Therefore, there is no particular reason to think that an artificial version of our own imaginations might not one day be created. Indeed, AI “chatbots” have, as of late 2022, reached almost spooky levels of competence.
But for the moment, AI chatbots are very, very far from achieving anything resembling human consciousness; they operate on a completely different principle, something akin to “autocorrect on steroids.” Our brains, with their 100 to 180 trillion synaptic connections between neocortical cells, combined with our intensely data-rich lifetime of experiences, allow us to imagine a near-infinite array of combinations of future circumstances – and to outperform, by a vast margin, any digital computing system we have so far invented at the task of imagining wholly new potential futures.
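To see what “autocorrect on steroids” means in miniature, consider a toy next-word predictor – a deliberately crude, hypothetical stand-in for the next-token prediction that real chatbots perform with neural networks over vast corpora:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": for each word, count which words follow it.
corpus = ("the market will rise tomorrow . the market will fall tomorrow . "
          "the model says the market will rise .").split()

following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def continue_text(word, steps=5):
    """Greedily emit the statistically most common continuation."""
    out = [word]
    for _ in range(steps):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_text("market"))  # e.g. "market will rise tomorrow . the"
# The predictor knows which words tend to follow which words -- and
# nothing else. There is no model of markets, or of anything, behind it.
```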
…Bertrand Russell’s paradoxes and Kurt Gödel’s Incompleteness Theorems – in whose footsteps Turing proved his “halting problem” undecidable – show that, at the most basic level, there is no necessary connection between the external reality we confront and the logical, mathematical, and computer systems we use to analyze that external reality. There is no certainty, even in computing.
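The halting problem’s core argument is short enough to sketch directly in code. Suppose, for contradiction, that someone supplied a total, correct function halts(f) answering whether f() eventually halts; the hypothetical program below (the names and framing are mine, not Turing’s) defeats any such oracle:

```python
# A sketch of Turing's diagonal argument, not a working decision procedure.
def halts(f) -> bool:
    """Pretend oracle: True if f() eventually halts, False otherwise."""
    raise NotImplementedError("no total, correct version of this can exist")

def contrarian():
    # Ask the oracle about ourselves, then do the opposite of its verdict.
    if halts(contrarian):
        while True:   # oracle said "halts", so loop forever
            pass
    # oracle said "loops forever", so halt immediately

# Whatever halts(contrarian) answers, it is wrong; hence no such halts can
# exist. Even inside pure computation, some questions are undecidable.
```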
…Any vision of a future in which “Turing machines” do all of our thinking and planning would seem to ignore the fact that there is no ultimate basis upon which we can trust these machines to achieve any particular goal (even an evil or computer-generated one). Even in a system that is theoretically closed, antinomies, paradoxes, incompleteness, and unsolved million-dollar problems abound.
And when the system is not closed, then analytical logic, which requires a somewhat fixed and well-understood object upon which to operate, becomes more or less moot, because the nature of the phenomena in question is changing in unpredictable ways. The only way, right now, and for the foreseeable future, to deal with a situation of ongoing fundamental change in the elements and variables and relationships that constitute the system in question – what I would call strategic change – is via rigorous use of human imagination.
If, as argued above, this is true, then the truly strategic – unpredictable change in whatever system one is operating in – is precisely the realm that humans, not machines, not mathematics, not algorithms, not trading programs, not “25 standard-deviation moves every day” models, must handle, at least for the near future.
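A back-of-the-envelope calculation – mine, using nothing but the normal distribution’s tail – shows why that last phrase is damning. Under the Gaussian assumptions built into such models, a single 25-standard-deviation daily move should not occur in many lifetimes of the universe:

```python
import math

# One-sided tail probability of a 25-sigma move under a normal distribution:
# P(Z > 25) = erfc(25 / sqrt(2)) / 2
p = math.erfc(25 / math.sqrt(2)) / 2
print(f"P(one 25-sigma daily move) = {p:.2e}")  # ~3.06e-138

# Expected waiting time for one such move, measured in ages of the
# universe (~13.8 billion years of daily observations).
days_per_universe_age = 13.8e9 * 365.25
waits = 1 / p / days_per_universe_age
print(f"expected wait: ~{waits:.0e} ages of the universe")  # ~6e+124

# A model that reports such moves on consecutive days is not unlucky;
# its assumptions simply no longer describe the reality it models.
```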