Certain situations are complex enough that predictions cannot be made to sufficient precision or sufficiently far ahead to be useful. We call this the region of the absurd: Logical models cannot be made, and Intuition will not work either. But "close to the axes" we can use Logic, Intuition, or both, depending on the problem and the requirements. A diagram illustrates the point.

The Absurd And The Possible

Dr. Stephen Kercel has often stated that "The Bizarre is not the Absurd". He originated the term "Bizarre Systems".

Certain situations are complex enough that predictions cannot be made to sufficient precision or sufficiently far ahead to be useful. I call this the region of the absurd. I will attempt to sketch this graphically. I will plot the complexity of the problem domains on the Y axis against how far into the future we are able to make predictions with some accuracy.

These diagrams are of course just for coarse visualization purposes; the axes are not linear and not to any scale.

Note that I am plotting time range of prediction on the X axis. I might just as well have plotted the precision of the prediction, where such would be appropriate. Precision, timespan, fallibility, and complexity of the problem domain interact.

Here (above) we see that most of the diagram is red, which denotes the "absurd region". In this area, no useful predictions can be made because the problem domain is too chaotic and unpredictable. Neither Intuition nor Logic will work.

Close to the X axis we can use Logic based methods for predictable problems; close to the Y axis we can use Intuition based methods for problems in Bizarre domains. The absurd area does not have a sharp boundary close to the axes. Rather, the absurdity slowly strengthens with the distance from the axes, indicating increased difficulty and diminished returns for effort spent when attempting to make predictions. Both the precision of the prediction and the chance that the prediction will be correct diminish as we leave the axes.

Close to the origin we have simple problems that don't require far-reaching predictions, and above and to the right of that we have increasingly complex problems with increasing requirements for prediction length.

The diagonal line at the top of the light blue area really should have been another feathered gradient like the "Absurd region", but doing this consistently would likely make some of the illustrations below much more confusing. I hope readers realize that these lines are all "fuzzy". In fact, the areas should really be made up of little dots denoting successful predictions/experiments, and these dots should become sparser and sparser closer to the edges of the Absurd region.

Current science can deal with problems of a certain complexity if it only needs to make short-term predictions. If it needs to make longer term predictions, then the problems need to be better behaved, i.e. less complex.

Future science will be able to handle both more complex problems and make longer-term (or higher-precision) predictions for simple problems. But the increased absurdity of the problem domain in this region leads to diminishing returns for our efforts.

Intuition, which is what we use to handle our daily problems, can handle significantly more complex problems than science but is incapable of long-term predictions even for simple problems.

There is an overlap area where both Intuition and Logic based methods work. Here we have a choice of methods, and we select between them based on requirements for precision and tolerance for failure. We can use satellite images, radar, thousands of barometric sensors, and supercomputers to predict the weather. Or we can open the window and state "it looks like it will rain", basing the prediction on prior experience. The Intuition based prediction is likely to be less reliable, and may only stretch a few hours into the future, as opposed to five-day weather forecasts. We are happy to use Intuitive methods when the cost of failure is trivial. If I'm going for a walk, I might get wet. But bad weather may be fatal when launching the Space Shuttle, so we put our trust in Logical models of weather in order to extend the prediction days into the future.

We can compute how spherical objects behave in elastic collisions, or we can go downtown to shoot some pool, using Intuition based "muscle memory" acquired through prior experience.
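The Logic side of that comparison is just conservation of momentum and kinetic energy. A minimal one-dimensional sketch (the function name and values are illustrative, not from any particular physics library):

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Post-collision velocities for a head-on elastic collision.

    The formulas follow from conserving momentum (m1*v1 + m2*v2)
    and kinetic energy; a hypothetical helper for illustration only.
    """
    v1_after = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2_after = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1_after, v2_after

# Equal masses simply exchange velocities -- the result any pool
# player's "muscle memory" already knows without the algebra:
print(elastic_collision_1d(1.0, 2.0, 1.0, 0.0))  # (0.0, 2.0)
```

The point of the overlap region is precisely that both routes arrive at the same shot.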

Further out but close to the X axis, Science will outperform Intuition and guesswork. Few of us can estimate as a gut feel where Jupiter will be a month from now. We'd rather trust tables computed using Logical methods and causal models based on Newton's equations.
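As a toy version of such a Logical model, one can step Newton's law of gravitation forward in time to predict where a planet will be. The sketch below assumes a circular orbit and rough textbook constants; all names and numbers are illustrative simplifications, not a real ephemeris computation:

```python
import math

# Rough textbook figures, for illustration only.
GM_SUN = 1.327e20       # m^3/s^2, Sun's gravitational parameter
R_JUPITER = 7.785e11    # m, mean orbital radius of Jupiter
V_JUPITER = math.sqrt(GM_SUN / R_JUPITER)  # circular-orbit speed

def predict_position(days, dt=3600.0):
    """Advance a circular-orbit approximation of Jupiter by `days`,
    integrating Newton's inverse-square gravity one hour at a time."""
    x, y = R_JUPITER, 0.0
    vx, vy = 0.0, V_JUPITER
    for _ in range(int(days * 86400 / dt)):
        r3 = (x * x + y * y) ** 1.5
        vx += -GM_SUN * x / r3 * dt   # acceleration toward the Sun
        vy += -GM_SUN * y / r3 * dt
        x += vx * dt
        y += vy * dt
    return x, y

# A month ahead, the computed orbit radius stays essentially constant;
# that stability is what makes the Logical model trustworthy here.
x, y = predict_position(30)
```

No amount of gut feel reproduces this; the causal model does all the work.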

Humans who attempt to extend their guesswork and Intuition into this region are guilty of overconfidence, ignorance, and superstition.

Higher up and close to the Y axis, Intuition based methods outperform Logic and Science. This is the area where we find Bizarre systems and Bizarre problem domains. Here we find the Discovery of Semantics of all kinds, such as understanding language, analyzing visual data and understanding what we see, hearing melodies and sounds, and being able to make sense of the world in general. This is the area in which humans will outperform computers on most tasks. This is the domain of problems requiring Intelligence.

We need to choose the correct approach to our problems and our predictions. We have established regions where Logic and Science are necessary, and regions where Intuition must be used. Artificial Intuition will, in the near future, allow us to duplicate what humans do in their daily lives, and this includes competences like Discovery of Semantics in language.

It is possible that Artificial Intuition based systems may in the future outperform human competence. But we almost don't care. Artificial Intuition based computers would provide major benefits to humanity even if they did nothing beyond the simplest language understanding tasks.

GOFAI stands for "Good Old Fashioned AI", a term that has been used in the community for over 20 years. The opposite would be "Newfangled AI".

Historically, Weak AI such as expert systems has strived to solve problems that require Intelligence. These are exactly the problems in the Bizarre Domains. Such AI (GOFAI) would be operating at the edge of maximum complexity that was possible using Logic based programming and software engineering.

Because of the complexity of the domain these systems would just barely work even under the best of conditions. Any attempts to stretch beyond their reach into the region where Intuition was required gave us AI systems that failed in spectacular, brittle ways.

Brittleness in AI is the dual of overconfidence and superstition in human Intuition.

I am currently exploring Artificial Intuition based methods. The performance I can achieve with the hardware and the algorithms I have is unspectacular. All my results to date can be rather easily duplicated using conventional programming methods. I like to say I have results but no demonstration, and the difference is that a result is impressive to someone who understands how it was achieved, whereas a demo is impressive to anyone, whether or not they understand the technology.

It is not very effective to claim "I know it's not too impressive, but I'm doing it the hard way, and it will scale upward to much more complex problems". But if I can reach a point where my results exceed that which is possible with state of the art Logic based methods, then I'll have a useful demonstration.