In this issue of “Intractable Problems” I want to discuss various paradoxes that can occur during our work. Now, the Oxford dictionary says that a paradox is a self-contradictory or essentially absurd statement. This term can probably be applied to many aspects of computing, not the least of which is the notion that if a computer cable has one end, then it has another. Somehow this rarely seems to be true, particularly if you are tasked with finding the right end!

Anyway, some paradoxes are real. They illustrate that there are unprovable statements: statements that could be shown to be false if they were false, but that can never be shown to be true.

Consider the Epimenides paradox, which states: “This statement is false.” Actually, Epimenides, who was a Cretan, really said, “All Cretans are liars!” However, because ethnic and cultural bias is frowned upon today, we use the more general form that does not single out one specific group of people. The Epimenides paradox is an assertion that is true if and only if it is false. Therefore it is neither true nor false.

Oops! What did I just say? If Epimenides was a Cretan and all Cretans were liars, then Epimenides is a liar, so he must have told a lie when he said that all Cretans are liars, which means that all Cretans are not liars, and we know that Epimenides would not lie, so it must be true that all Cretans are liars, which means …

That’s the problem with paradoxes, they are infernally difficult to prove.

Another paradox, attributed to the famous mathematician Bertrand Russell, deals with sets, or collections of things. Russell said, “Consider the set of all sets that are not members of themselves. Is this set a member of itself?” This paradox showed that informal reasoning in mathematics can yield contradictions, and it led to the creation of formal mathematical systems. Later, in 1931, Gödel demonstrated that all sufficiently powerful formal systems are incomplete: in each of them there is at least one statement that is true but cannot be proved.

The Russell paradox caused no end of trouble for the mathematicians! Can you imagine yourself working as a mathematician, trying to prove a theorem that you know to be true but that can never be proved? It’s enough to drive a person mad. More importantly, the Russell paradox showed that all of mathematics cannot be built on the unshakable foundation of logic; there is a crack in the foundation. One day all of mathematics could come tumbling down around us.

Our last paradox, known as the Berry paradox, reads: “The smallest positive integer which to be specified requires more characters than there are in this sentence.” The sentence has 109 characters (counting the spaces between the words and the period, but not the quotation marks), yet it supposedly specifies an integer that, by definition, requires more than 109 characters to be specified.
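We can check the character count directly. A minimal sketch in Python, counting the sentence exactly as printed above, spaces and the final period included, quotation marks excluded:

```python
# The Berry sentence, exactly as printed (quotation marks not counted)
berry = ("The smallest positive integer which to be specified "
         "requires more characters than there are in this sentence.")

print(len(berry))  # prints 109, the number the specified integer must "beat"
```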

The Berry paradox is rather subtle. To see it, understand that the sentence defines a number that lies beyond the sentence’s own capacity to describe. You actually have to step outside of the limits of the problem to be able to solve the problem.  This is why it is self-contradictory and absurd.

Still confused?  You are asked to find the smallest number that needs more characters to specify than the sentence contains, yet the sentence itself describes this number!  And the sentence cannot be that description, because we are seeking numbers outside the bounds of the sentence!

Still confused?  Go back and re-read.

So, what do paradoxes have to do with computers? Well, the Berry paradox was used by computer scientist Gregory Chaitin[1] to prove that every computer, every formal system, and every human production is limited; there are always sequences too complex to be generated, outcomes too complex to be predicted, and events too dense to be compressed.

Does this mean that we can’t build the perfect system? Well, I suppose so. Given that the perfect system doesn’t exist, bankers and accountants should understand that there may be perfectly valid reasons why the numbers won’t always add up. Money can go missing, and it may not always be a mistake.  See?  The devil made me do it!  However, in our next issue I want to show you how we can never build the perfect travel plans for our salespeople. And, I’ll bet that this is an intractable problem, in more ways than one!


1. Chaitin, G. “Randomness and mathematical proof”, Scientific American 232, May 1975, pp. 47-52

2. Chaitin, G. Lecture, “A Century of Controversy over the Foundations of Mathematics”

Language Ambiguity

Ah, it’s time to roll up our sleeves and dive into another issue of “Intractable Problems”. Let’s brush the dust off our thinking caps and see if we can make some sense out of the language we all speak. In this issue I want to look at some of the ambiguities and complexities of our natural language.[1]

We care about language because our computer applications must interface with people. We are frequently asked to find documents on certain topics in a database, extract information from messages or articles, translate documents from one language to another, or summarize a thousand-page proposal into two or three pages.

Today, most of our application systems solve these problems by using simple word-matching techniques. They do not understand the language. While such techniques may produce useful applications, it is unlikely that they can be extended to handle complex query tasks, such as the query “Find me all the articles on software projects involving more than 10 million dollars that were attempted but failed between 1990 and 1995.” To handle such queries, the system needs to read the articles and extract enough information from them to understand which articles meet the criteria. A crucial component of understanding involves computing a representation of the meaning of sentences and texts.

If I said to you “We painted all the walls with cracks.” would you think that cracks were painted onto the walls, or that cracks were somehow used as an instrument to paint the walls?

Here is another one for you, called a garden path sentence. “The raft floated down the river sank.” When you read the word sank, you probably realized that the sentence interpretation you had constructed so far in your mind was not correct. Yet the sentence does have a reasonable interpretation, corresponding to “The raft that was floated down the river sank.” The initial sentence is grammatically correct; it simply uses a reduced relative clause for the word floated. Go back and read the sentence again and you will understand.

For another one like the above, try “The horse raced past the barn fell.”

How about the sentence “I thought it would rain yesterday.” Most of us will interpret yesterday as the time of the expected rain, rather than the time of the thinking.

These are all examples of ambiguous sentences. Any natural language is full of ambiguity. I suppose, in some sense, it’s our ability to understand language ambiguity that makes us human. At least that’s what I try to believe whenever I listen to politicians, economists, and other fortune tellers.

The problem with language ambiguity is that it stops us from making the appropriate inferences needed to understand. Ambiguity resolution in natural language is an intractable problem.
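To get a feel for why resolution is so hard, consider that the number of possible readings of a sentence with several ambiguous attachment points grows explosively. A standard illustration (my own sketch, not from Allen’s book) counts the binary bracketings of a phrase, which follow the Catalan numbers:

```python
def catalan(n):
    # Number of distinct binary bracketings of n + 1 items, a common
    # proxy for how many parse trees an n-way ambiguous phrase can have.
    # Uses the exact recurrence C(k+1) = C(k) * 2 * (2k + 1) / (k + 2).
    c = 1
    for k in range(n):
        c = c * 2 * (2 * k + 1) // (k + 2)
    return c

# The count explodes: a parser cannot simply enumerate every reading.
print([catalan(n) for n in range(1, 9)])  # prints [1, 2, 5, 14, 42, 132, 429, 1430]
```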

It’s quite paradoxical to use language to understand how little we know about understanding language.

This leads me into our next issue’s topic on interesting paradoxes. If you thought that you were tongue-tied with language, then I promise you that your brain will be tied in knots with these paradoxes.


1. Allen, J. Natural Language Understanding, 2nd edition, Benjamin Cummings Publishing Co., 1995.

System Engineering

In this issue of “Intractable Problems” we wish to take a close look at systems engineering. In many cases, proper system design requires that parts of the system be prototyped or tested through controlled experiments.

Effective experimental design was first pioneered by Finagle. He is the person who published methods for introducing fudge factors, rounding errors, and various other artifacts into our experiments. Without Finagle’s methods we would still be struggling to design many of the more complex systems that we use today. Some of Finagle’s most important contributions to experimental knowledge are summed up in his four laws, listed below.

Finagle’s First Law: If an experiment works, something has gone wrong.

Finagle’s Second Law: No matter what the experiment’s result, there will always be someone eager to (a) misinterpret it, (b) fake it, or (c) believe it supports his own pet theory.

Finagle’s Third Law: In any collection of data, the figure most obviously correct, beyond all need of checking, is the mistake.

Finagle’s Fourth Law: Once a job is fouled up, anything done to improve it only makes it worse.

Finagle relied quite heavily on the previous work by Chaintok in developing his theory of change. Chaintok examined data on over 164 different engineering projects to understand what happens when you change the design specifications. Chaintok summarized the results of his work by proposing the various laws of revision, stated below.

First Law of Revision: Information necessitating a change of design will be conveyed to the designer after, and only after, the plans are complete.

Second Law of Revision: The more innocuous the modification appears to be, the further its influence will extend and the more plans will have to be redrawn.

Corollary to the First Law of Revision: In simple cases, presenting one obvious right way versus one obvious wrong way, it is often wiser to choose the wrong way, so as to expedite subsequent revision.

Some other results, usually published by students and thus not verified, are given below.

Wyszkowski’s Law: No experiment is ever reproducible.

Wyszkowski’s Second Law: Anything can be made to work if you fiddle with it long enough.

Fett’s Law: Never replicate a successful experiment.

Law of Research: Enough research will tend to support your theory.

Maier’s Law: If the facts do not conform to the theory, they must be disposed of.

Peer’s Law: The solution to the problem changes the problem.

Once again, if any of you have any insight into the inner meaning behind all this, please enlighten me.

Next issue, we will have a more serious look at language and some of the problems involved in understanding ourselves.

Bremermann’s Limit

It’s time for another issue of “Intractable Problems”, a column designed to challenge and stimulate your thinking skills. As I promised you last issue, we will examine what limits, if any, exist on the amount of information we can hope to process in a given interval of time.

A limit, you say? How can there be a limit for something like information, which is not real? Well, whether it is real or not, information needs to be moved around. We interpret the ‘processing of x bits’ to be the transmission of that many bits over one or many communication channels within the computing system. To do this, it’s obvious that the information in the computer system has to be physically encoded in some form.

In 1962, Hans Bremermann assumed that we would encode the information in terms of an energy level[1]. Of course, Heisenberg’s Uncertainty Principle showed that we can only measure the energy to a certain accuracy. The measurement accuracy limits the number of markers into which the energy level can accurately be divided. Now, Hartley’s information measure relates the information content of N markers to the logarithm of N. Using this, we can calculate the maximum amount of information that can be represented within our energy level.
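Hartley’s measure is simple enough to state in a couple of lines. A minimal sketch, my own illustration rather than Bremermann’s notation: N distinguishable markers carry log2(N) bits.

```python
import math

def hartley_bits(n_markers):
    # Hartley's measure: N equally likely, distinguishable markers
    # carry log2(N) bits of information.
    return math.log2(n_markers)

print(hartley_bits(8))     # prints 3.0
print(hartley_bits(1024))  # prints 10.0
```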

Now, using Heisenberg’s uncertainty principle, Planck’s constant, and Einstein’s famous E = mc^2 formula, we can easily show Bremermann’s conjecture that “no data processing system, whether artificial or living, can process more than 2 × 10^47 bits per second per gram of its mass.”

There is a problem when mathematicians say things can be done easily.  Mathematicians are not normal people. To explain, Einstein said that E = mc^2, where E is energy, m is mass, and c is the speed of light.  Planck said that E = hf, where E is energy, h is Planck’s constant, and f is the frequency.  Planck’s equation relating energy to frequency is where Heisenberg’s uncertainty principle enters the mix.  And Hartley’s information measure allows us to treat Planck’s frequency as if it were information bits per second.  So, by equating Einstein and Planck on the common energy term, we have:

mc^2 = hf, or f/m = c^2/h.  The numerical value of c^2/h is about 1.37 × 10^47 bits per second per gram.

Oh my goodness! What does this mean? Well, hypothetically, let’s assume that we have a computer the size of the Earth and a time period equal to the estimated age of the Earth. Then this imaginary computer would not be able to process more than about 10^93 bits. This number, 10^93, is called Bremermann’s Limit, and problems that require processing more than 10^93 bits of information are called transcomputational problems.

Again, for those of us who have feet planted firmly on the ground:  The Earth is about 4.5 billion years old (1.4 × 10^17 seconds).  The Earth’s mass is about 6 × 10^27 grams.  That gives 8.4 × 10^44 gram-seconds for the mass-time product, and when we multiply this by Bremermann’s 1.37 × 10^47 bits per second per gram we get something around 10^92 or 10^93.  Close enough in my books.
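The arithmetic is easy to check. A sketch in Python using CGS units (centimetres, grams, seconds), so that c^2/h comes out directly in bits per second per gram; the constants are standard textbook values, not taken from Bremermann’s paper:

```python
c = 2.9979e10          # speed of light, cm/s
h = 6.6261e-27         # Planck's constant, erg*s

rate = c**2 / h        # Bremermann's rate, ~1.36e47 bits per second per gram

age_of_earth = 4.5e9 * 3.156e7    # years -> seconds, ~1.4e17 s
mass_of_earth = 6.0e27            # grams

limit = rate * age_of_earth * mass_of_earth   # ~1.2e92 bits
print(f"{rate:.2e} bits/s/g, {limit:.2e} bits total")
```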

The funny thing is, these types of transcomputational problems do exist in the real world. For example, exhaustively testing all combinations of an integrated circuit with 308 inputs and 1 output is a transcomputational problem. For those of us who dislike doing arithmetic, what we are saying is that 2^308 is about 5 × 10^92, right around Bremermann’s Limit and well beyond the roughly 10^92 bits our imaginary Earth-computer could process.  Did you know that such circuits exist in today’s computer processors?  Do you think the circuit works correctly for all inputs?  You now know that there is no way the engineers ever tested all combinations of their circuit!
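The exact numbers are easy to compute with Python’s arbitrary-precision integers; a small sketch:

```python
test_vectors = 2 ** 308   # every input combination for a 308-input circuit

# 2**308 has 93 digits -- about 5.2e92, the same order of magnitude
# as Bremermann's Limit of roughly 10**93 bits.
print(len(str(test_vectors)))   # prints 93
```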

Bremermann, in the conclusion of his paper, wrote the following:

“The experiences of various groups who work on problem solving, theorem proving and pattern recognition all seem to point in the same direction: These problems are tough. There does not seem to be a royal road or a simple method which at one stroke will solve all our problems. My discussion of ultimate limitations on the speed and amount of data processing may be summarized like this: Problems involving vast numbers of possibilities will not be solved by sheer data processing quantity. We must look for quality, for refinements, for tricks, for every ingenuity that we can think of. Computers faster than those of today will be of great help. We will need them. However, when we are concerned with problems in principle, present day computers are about as fast as they will ever be.”

Ah well, that’s life. If nothing else, at least it helps support my conjecture that cats must be dumber than dogs. On average, cats’ brains are not as massive as dogs’ brains and therefore cannot hold as much information.  On the other hand, cats are all known to be tricky and devious, so it is possible that they have ingeniously discovered ways to work around their limitations.

Next issue, we will take a critical look at the serious problems involved in systems engineering.


1. Bremermann, H.J. “Optimization through evolution and recombination.” In: Self-Organizing Systems, edited by M.C. Yovits et al., Spartan Books, 1962, pp. 93-106

Query Optimization

“Intractable Problems” is a column designed to challenge and stimulate your thinking skills. In this issue I want to challenge you to look at some of the difficulties inherent in accessing the data in our computer systems.

Now, I know that each and every one of you always gets all the information out of the computer that you need, without difficulty. Why, we all use SQL, the structured query language designed for easy access into relational databases!

Let’s take a close look at SQL[1]. Did you know that SQL statements are relational calculus expressions? I am not making this up. Relational calculus is a way of expressing what our result is to be, without figuring out how to get it. Of course, we’re all calculus experts, right? Yeah, sure!

Guess what happens if we want some data which must be selected on more than one condition. For example, we would like to find all 65 year old males in the province of New Brunswick.

Now, logically, we know that there are two ways to find the answer. First, we could find all the males in New Brunswick and then look to see which of them are 65 years old. Or, we could find all the 65 year old people in New Brunswick and then see which of them are male. Does it make a difference which way we use?

Yes, it does. We would like to know the selectivity of each condition. Selectivity is the ratio of the amount of data that satisfies the condition to the total amount of data that must be searched. The selectivity is the probability that a specific data item is found. If the selectivity is small, then only a few data items are selected by the condition. We want to apply this condition first in any compound expression so that we minimize the amount of work necessary to satisfy the next condition. In general, for multiple conjunctive conditions, the best order of application is from smallest selectivity to largest.

In many cases neither you nor I nor the computer knows the precise selectivities. So, the best assumption we can make is that the data is uniformly distributed across its range. For example, assuming uniform distributions, if we are looking for 65 year old males in a population then we can predict that the gender selectivity is 0.5 and the age selectivity is 0.01.  This assumes that gender is either male or female and that age ranges across integer values from 1 to 100.  With these selectivities we will get a 50-fold reduction in search work if we select on age before gender. In other words, to find out how many 65 year old people are male, first find those who are 65 years old and then find out which of them are male. The other way is far more work.
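A small simulation makes the difference concrete. The data below is illustrative (ages cycling 1 to 100, genders split evenly), not anything from the column; the cost counter tallies how many records each filtering stage must examine:

```python
# Illustrative population: ages cycle 1..100, half male, half female.
people = [{"age": (i % 100) + 1, "gender": "M" if i % 200 < 100 else "F"}
          for i in range(10_000)]

def scan_cost(rows, predicates):
    # Apply predicates in order; every surviving row is tested by
    # the next predicate, so we tally one unit of work per test.
    cost = 0
    for pred in predicates:
        cost += len(rows)
        rows = [r for r in rows if pred(r)]
    return cost, rows

is_65 = lambda r: r["age"] == 65
is_male = lambda r: r["gender"] == "M"

cost_age_first, hits = scan_cost(people, [is_65, is_male])
cost_gender_first, _ = scan_cost(people, [is_male, is_65])
print(cost_age_first, cost_gender_first)   # prints 10100 15000
```

The answers are identical either way; only the work differs, by the predicted factor of 50 in the second stage (100 records re-examined versus 5,000).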

Now, let’s raise the stakes. What happens if the distribution is not uniform? Let’s introduce a new area attribute and consider the case where 95% of the people in our population live in the province of New Brunswick and the remaining 5% live in 199 different areas of the world. In this case there are 200 different area values. The area selectivity, assuming a uniform distribution, is 1/200 or 0.005.  This is much smaller than the age selectivity shown above. According to our rule, the area attribute should be accessed first given any query with a conjunctive clause relating area and age and gender.

But this is a very poor plan if we want a New Brunswick resident, because the first condition selects 95% of the people in our population.  Our distribution was not very uniform, was it? How is a person to know this without understanding the data and applying some calculus?
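The size of the estimation error is easy to quantify. A sketch using the column’s figures (200 area values, 95% of people in New Brunswick):

```python
n_area_values = 200
estimated = 1 / n_area_values    # uniform assumption: 0.005
actual = 0.95                    # what the skewed data really gives

# The optimizer's guess is off by a factor of 190, turning the
# "most selective" condition into the least selective one.
print(round(actual / estimated))   # prints 190
```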

SQL, a language founded on relational calculus, does not help us choose the most effective query formulation. SQL is a language to specify what we want, but not how to do it.  The query optimization task is often an intractable problem, frequently left to some foolish computer to figure out.

So, the next time the computer is slow, ask yourself if there was something that you could have done to formulate your request in a better way.  I don’t think you expect the computer to know how to solve problems that you yourself can’t do. Or do we expect computers to do our thinking for us?  Now that is an interesting problem!

Next issue, I want to take a critical look at the serious problems involved in processing information. Do you think that there are any limits as to what we can do?


1. Codd, E.F. “A Data Base Sublanguage Founded on Relational Calculus”, Proc. 1971 ACM SIGFIDET Workshop on Data Description, Access and Control.

Graph Colouring

Well, once again it is time for yet another issue of “Intractable Problems”. In this issue I would like to examine just how difficult it is to colour something. Now, I know that many of us did finger-painting as children, and we probably coloured our hands, feet, faces, and other areas of ourselves which, I suspect, even our mothers blush to talk about. So, what can be so difficult about colouring?

Well, random colouring such as children might do is not too difficult. But, if we are asked to colour something in a particular way, then this can be a very difficult problem to solve.

Consider, for example, a graph. A graph is a set of points with lines between the points. We are asked to colour the graph such that every line has a different colour at each end. The problem is that my crayon box has only a few colours. The question is: Can I colour the graph?

The graph shown here requires at least four colours to colour it. It can’t be coloured if crayon number 4, my yellow crayon, is broken. If you don’t believe me, then try and colour this graph using only three colours.

It is computationally intractable to determine if a graph can be coloured with fewer colours than the number of points in the graph.[1] This means that if the graph is sufficiently large, then there is a computational barrier that stops us from finding a valid colouring of the graph. The computational difficulty, as far as anyone knows, grows exponentially with graph size. This type of problem is similar to trying to count the grains of sand on the beach. An impossibility, in anyone’s lifetime!
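A brute-force colourer shows where the exponential barrier comes from: it tries every one of the n_colours ** n_points assignments. The four-point graph below is my own stand-in, since the column’s figure is not reproduced here; with every pair of points joined, it is the smallest graph that needs four colours:

```python
from itertools import product

def colourable(edges, n_points, n_colours):
    # Try every assignment of colours to points -- a search space of
    # n_colours ** n_points, which is why large graphs are hopeless.
    for colouring in product(range(n_colours), repeat=n_points):
        if all(colouring[a] != colouring[b] for a, b in edges):
            return True
    return False

# Four points, every pair joined by a line.
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(colourable(k4, 4, 3), colourable(k4, 4, 4))   # prints False True
```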

We can use computational intractability to our advantage, especially if we want to build secure systems. The idea is that we all generate our own three-colourable graph of sufficient size. This is easy to do and it does not take very much time. We then use this graph as our identification. To access something, we show our graph and prove our ownership of it by demonstrating, to whatever level is necessary to prove beyond any reasonable doubt, that we know how to properly colour it.

It doesn’t matter if someone else gets our graph because no one else can figure out how to three-colour it. It can’t be done in their lifetime, not even with the fastest conceivable computers to help them. They will always fail any non-trivial colouring test. This is similar to a public key cryptography system. In this system the user identifier, or graph, can be public. The colouring key must be kept secret and, unlike an ordinary password, need not be known by the system being accessed.

So, for those of you who, like me, get quite annoyed with computer systems that force us to change our passwords all the time, simply because these systems are susceptible to a security breach, have faith. There are other ways to protect our identity.  All we need do is convince those who build and run systems to do things for us, instead of the machine. Our task will be to memorize the proper colouring of a sufficiently large graph which, of course, is a completely different problem!

Next issue, I will continue to examine problems with computers. How many of you have difficulty getting what you want from the computer?


1. Karp, R. M. “Reducibility among combinatorial problems” in R. E. Miller and J. W. Thatcher, Complexity of Computer Computations, Plenum Press, 1972, pp. 85-103.

Information Theory

Welcome to yet another issue of “Intractable Problems”, a column designed to challenge and stimulate your thinking skills. In this issue we want to examine the nature and characteristics of information.

What is information, you may ask? This is a difficult question. I suppose that information is something that is useful, as opposed to noise, which is not. We do not define it any further. Given this thing called information, what do we know about it? Well, to quote Lord Kelvin, a physicist:

“When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind: it may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the stage of science.”

So, how do we measure information? Shannon, the father of information theory, measured information by calculating the entropy of an event, or the amount of surprise that an event gives. Consider a box which every once in a while spits out a coloured ball. If the box has, over time, continually spit out black balls, then not much information is gained when the box spits out another black ball. But, if a white ball pops out unexpectedly then we can get quite excited! Lots of new information was gained. We can measure the amount of new information from the probability that something differed from the norm. This is entropy, a measure of our uncertainty about what comes next.
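Shannon’s formula averages that surprise over all outcomes. A minimal sketch for the ball-spitting box (the probabilities are illustrative):

```python
import math

def entropy_bits(probs):
    # Shannon entropy: the average surprise, in bits, of a source
    # that emits outcome i with probability probs[i].
    return sum(-p * math.log2(p) for p in probs if p > 0)

print(entropy_bits([1.0]))         # always black, no surprise: prints 0.0
print(entropy_bits([0.5, 0.5]))    # black or white equally often: prints 1.0
print(entropy_bits([0.99, 0.01]))  # the rare white ball: ~0.08 bits
```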

Information theory is derived from an engineering definition based on probability[1]. It is not a definition based on meaning. This can be quite confusing to those of us who suspect that we can somehow extract meaning from data. Has anyone ever talked to you about knowledge systems, or knowledge engineering, or data mining?  Think about what these systems claim to do.  Ask how the systems define meaning and you might be surprised to find that the definition is based upon a  hand-waving argument and fluff.  Understanding meaning can be a very intractable problem.  Anyone who has ever tried to make sense out of the assembly instructions for a child’s toy can easily attest to this difficulty.

Entropy increases when things become more disordered. In physics, the second law of thermodynamics, which Boltzmann put on a statistical footing, says that the total amount of disorder in the universe increases with time. In other words, the entropy increases. If we associate Shannon’s information entropy with the physical concept of entropy, then we can predict that the total information in the universe increases with time. Oh my goodness!  What would this mean?

Well, if at first someone doesn’t see things your way, then wait for a while and they may eventually see the light. Second, if things seem to go from bad to worse, perhaps it is because you now have more information and can understand the problem better. Third, if you truly have lots and lots of information, then you are probably a very disordered person.

Our misunderstanding about information has grown to the point where people now use terms such as “Executive Information Systems”. Obviously, to do our best as computer people, we want to provide systems that will give the most information with the least amount of effort.  But think, such systems will give the most information to executives only when events are completely and uniformly random, with each result coming as a complete surprise! Now, do you think that executives really want to be surprised every time they turn on their computer?

Next issue, I will present a challenging and exciting colouring problem for your pleasure. Start practicing with your crayons!


1. Hamming, R. Coding and Information Theory, Prentice Hall, 1980.

Computers and the Information Age

Many of us have to deal with computers in all facets of our life. Yet all too often we hear that either the computer is down or that the system won’t let us do what we want it to. Worse yet, perhaps the computer can do what we want, but nobody seems to know how to do it (except the guru over in the corner).

What’s the problem? Obviously, all our computer applications are inadequate to do the job that we want. They are far too difficult to use, can’t understand us, and refuse to make our lives any easier.  To make matters worse, they probably use old technology and need replacing.

So, what should we do? Should we throw away the old systems and replace them with shiny new ones, re-engineered to save us money? After all, the new systems will have all the new features that will make our job far more fun!

This time we will do the job properly. We will make sure that we have an integrated environment where everything works together. We will have one database that is easy to understand and that will manage all our information. We will use one common programming language to develop these systems. Clearly, the operation of all these new systems will be a breeze, because we will use one vendor’s technology.

What is wrong with this scenario? Does it remind you of the way things once were? We used to have computer systems that ran on a central mainframe. The IS department used to have de facto ownership of all our computer applications. We kept all of our records in a central database. Our computer people wrote most of our application programs in COBOL. Usually, we used IBM’s technology solution. We were happy and secure in the knowledge that this was the correct and proper way to do things.

But, this happy, safe scenario is a scene from the past. It reminds us of the old ways of computing. Fie to these new ideas that are trying to drag us, kicking and screaming, into the information age. Who needs them, anyway?

Or is it that we may need new solutions to our problems? Perhaps the old ways were not working. After all, these colorful graphical systems are fun. Surely you have tried the Internet?

Today, we like to think of building our systems from component parts, or objects. We like objects because they have well-defined boundaries. We know how to plug objects into each other because they use standard interfaces that are accepted by everybody. This is a feature of object-oriented computing. When we use a computer system we tell it what we want, and not how it is to be done. This is a non-procedural system, one which does not need step-by-step instructions. The system works when we want it to, on our request, and not when someone else decides to do it for us. This is event-driven processing. Our data is stored in many different places, usually where it is needed and used. This is distributed information management.

We are inundated with quotes from experts that tell us this new way is better, faster, and cheaper.  They say we can build computer systems that actually work!  Look at all the testimonials!  On the other hand, there are those disturbing little thoughts that pop up … Unix was born in 1969 as a research project at Bell Labs, originally hacked together with baling wire and spit and C, and we are still trying to use it.  We are currently spending millions of dollars on outsourcing and help desks because nobody can understand computer stuff. And open-source software means that software is free, so why should anyone have to pay for anything?  And why should we have to pay to craft a web page that runs in all browsers?  And why is it that business is constantly looking for IT to provide value?  And why is it that Dilbert is the most popular cartoon?

It is no wonder that these new ideas can be foreign to those of us who were brought up on the old computing model. Change can be truly frightening.  Do we wonder why business leaders most often use the excuse that they can’t understand the technology?

The new world exists. The information age is here today. When we are faced with people who promote the old, safe notion of traditional systems we need to help them grow into the new world. Birth is a traumatic experience, and institutions that cannot adapt will be pushed aside as stillborn anachronisms. Welcome to the information age.

Fuzzy Logic

Welcome to another issue of “Intractable Problems”.  In this issue I want to stimulate your thinking skills by discussing fuzzy logic, a logic system that evaluates truth in shades of gray.

Ah, logic, the weapon of reason and the nemesis of emotion! Logic, epitomized by Spock, Aristotle, and the occasional departmental reorganization.

We all know that classical logic deals with propositions that are required to be either true or false. However, this assumption has been questioned since the time of Aristotle. In his writings Aristotle argued that the truth status of future events cannot be established. Aristotle maintained that propositions about future events are neither true nor false, but potentially either; hence their truth value is undetermined.

This leads me to wonder: If a politician speaks on future events, then what can we say about the truth of the politician’s statements? Will taxes really go down?  Will the new road get built this year?

Anyway, all kidding aside, how do we solve problems in the absence of complete truth? Well, if we know that something is absolutely true then we can assign it a truth value of 1. Similarly, if we know it to be absolutely false, then it has a value of 0. If we know nothing about its truth or falsehood, then we can give it a value of ½. Our degree of uncertainty, our fuzziness about a proposition, can be used throughout our logic as a confidence level in the validity of our statements.

So, if we can now ascertain the fuzziness of truth, then how do we propagate this fuzziness, particularly when we conjoin truth clauses such as (A and B) or consider disjunctions such as (A or B)?  Zadeh[1], the founder of the theory, proposed operators for this: he defined the conjunction of two fuzzy truth values as their minimum and the disjunction as their maximum, and variants based on the product and the probabilistic sum are also in common use.  With these operators we can rework our famous theorems in logic to discover their fuzzy counterparts.  We can apply fuzzy logic in new ways, because it is exciting to think of our world in a non-crisp way, as most things in our world appear to be.
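These connectives are simple enough to sketch in a few lines of code. The snippet below is a minimal illustration, assuming Zadeh’s min/max operators for conjunction and disjunction together with the product-based variants; the truth values in the example are invented purely for illustration.

```python
# Fuzzy connectives over truth values in [0, 1].
# Zadeh's original operators: min for AND, max for OR, 1 - x for NOT.
# The product t-norm and probabilistic sum are common alternatives.

def fuzzy_not(a):
    return 1.0 - a

def fuzzy_and(a, b):       # Zadeh: minimum
    return min(a, b)

def fuzzy_or(a, b):        # Zadeh: maximum
    return max(a, b)

def product_and(a, b):     # alternative: product t-norm
    return a * b

def product_or(a, b):      # alternative: probabilistic sum
    return a + b - a * b

# Invented example: "taxes will go down" believed to degree 0.3,
# "the road gets built this year" believed to degree 0.6.
a, b = 0.3, 0.6
print(fuzzy_and(a, b))     # 0.3 (the weaker claim limits the conjunction)
print(fuzzy_or(a, b))      # 0.6 (the stronger claim carries the disjunction)
print(product_and(a, b))   # 0.18
```

Note that when the truth values are restricted to 0 and 1, every one of these operators collapses to the familiar crisp Boolean connectives, which is why fuzzy logic is usually described as a generalization of classical logic.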

To explain the significance of fuzzy logic, it is appropriate to quote Zadeh:

“Essentially, our contention is that the conventional quantitative techniques of system analysis are intrinsically unsuited for dealing with humanistic systems or, for that matter, any system whose complexity is comparable to that of a humanistic system.

An alternative approach is based on the premise that the key elements in human thinking are not numbers but labels of fuzzy sets. … Indeed, the pervasiveness of fuzziness in human thought processes suggests that much of the logic behind human reasoning is not the traditional two-valued or even multivalued logic, but a logic with fuzzy truths, fuzzy connectives, and fuzzy rules of inference. In our view, it is this fuzzy, and not well understood, logic that plays a basic role in what may well be one of the most important facets of human thinking, namely, the ability to summarize information.”

So, with our new-found knowledge about fuzzy logic, we can now better understand some of the difficulties with some people’s thought processes. Clearly, thinking can be fuzzy!

Fuzzy logic and other technological advances have already helped us to build systems that were previously intractable by their nature and complexity. We can use fuzzy relationships to simplify control processes for complex systems. Did you know that your washing machine may incorporate a fuzzy controller to help determine when the clothes are clean? Or are they perhaps only fuzzily clean? In general, a good problem simplification minimizes the loss of information relevant to the problem of concern. Information and complexity are thus (fuzzily) related.
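As a toy illustration of the washing-machine idea, here is a hypothetical fuzzy controller that maps a dirt-sensor reading to a wash time. Every membership function, threshold, and wash time below is invented for illustration and does not describe any real appliance.

```python
# A hypothetical fuzzy wash-time controller (illustrative values only).

def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking to 1 at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def wash_time(dirt):
    """Map a dirt reading in [0, 100] to minutes using three fuzzy rules."""
    # Degrees of membership in "low", "medium", and "high" dirt.
    low = tri(dirt, -1, 0, 50)
    med = tri(dirt, 20, 50, 80)
    high = tri(dirt, 50, 100, 101)
    # Each rule recommends a crisp wash time; defuzzify by weighted average.
    rules = [(low, 15), (med, 35), (high, 60)]
    total = sum(w for w, _ in rules)
    return sum(w * t for w, t in rules) / total if total else 0.0

print(wash_time(75))  # roughly 53.8 minutes: mostly "high" dirt, a little "medium"
```

The point of the sketch is that a dirt reading of 75 is not forced into a crisp "dirty/clean" bucket; it is partly "medium" and partly "high," and the recommended wash time blends both rules in proportion.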

To better understand human information systems, in our next issue I will discuss the nature and characteristics of information. You will be surprised to find that information is not what you think it is!


1. Zadeh, L.A. “Outline of a New Approach to the Analysis of Complex Systems and Decision Processes.” IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-3, no. 1, 1973, pp. 28–44.

Murphy’s Law

Welcome to the fourth issue of “Intractable Problems”, a column designed to challenge and stimulate your thinking skills. In this issue we take a close look at language, at least in the sense of assessing our collective wisdom at understanding computer projects.

Are you familiar with Murphy’s Law? You know, this is the colloquial statement that says that, left to themselves, things tend to go from bad to worse, or, whatever can go wrong, will. I would like to look at a few similar gems of wisdom to try to understand how they apply to our work. Let’s begin by listing some laws.

Manly’s Maxim: Logic is a systematic method of coming to the wrong conclusion with confidence.

Murphy’s Corollary: It is impossible to make anything foolproof because fools are so ingenious.

Quantized Revision of Murphy’s Law: Everything goes wrong at once.

Scott’s Second Law: When an error has been detected and corrected, it will be found to have been correct in the first place.

Shaw’s Principle: Build a system that even a fool can use, and only a fool will want to use it.

Rule of Accuracy: When working towards the solution of a problem, it always helps if you know the answer.

Hawkin’s Theory of Progress: Progress does not consist of replacing a theory that is wrong with one that is right; it consists of replacing a theory that is wrong with one that is more subtly wrong.

We follow with some various truths.

  1. Nature sides with the hidden flaw.
  2. Interchangeable parts won’t.
  3. Inside every small problem is a large problem struggling to get out.
  4. Spend sufficient time confirming the need and the need will disappear.
  5. Never attribute to malice that which is adequately explained by stupidity.

Lastly, let’s look at the Laws of Computer Programming:

  1. Any given program, when running, is obsolete.
  2. Any given program costs more and takes longer.
  3. If a program is useful, it will have to be changed.
  4. If a program is useless, it will have to be documented.
  5. Any program expands to fill available memory.
  6. The value of a program is proportional to the weight of its output.
  7. Program complexity grows until it exceeds the capabilities of the programmer who must maintain it.
  8. Any non-trivial program contains at least one bug.
  9. Undetectable errors are infinite in variety, in contrast to detectable errors, which by definition are limited.
  10. Adding manpower to a late software project makes it later.

If any of you have any insight into the inner meaning behind all this, please enlighten me.

In the next issue we will have a critical look at problem solving skills, perhaps more appropriately entitled “The Black Art of Fuzzy Logic.”