He (or she) who does not hesitate is lost.

[ Some thoughts in response to The Edge question of the year for 2015: What Do You Think About Machines that Think? ]

What do I think about machines that think? All sorts of mildly random and mildly relevant things, with answers flowing off in many different directions, just like most of the other contributors to this topic.

We (think we) understand them all. And faced with this Edge question, created to stimulate a variety of responses, we begin fashioning our most thoughtful response without much hesitation.

The Edge question looks like a question—a sentence followed by a question mark—but is this a question that should be answered? Or better, how should one respond to this question in a thoughtful way?

One set of answers would be to consider recent advances in AI and how they might affect humans and human societies. Although interesting in its own way, this approach runs up against what history has demonstrated: even very intelligent humans are not very good at prognosticating the future. The future has turned out to be full of unintended and unanticipated consequences. One thoughtful response, or meta-response, would be to explain why prognosticating the future is so difficult.

The most sensible—though not the most interesting—answer is to say that, given the many AI-related projects going on simultaneously in the world, there will be many new situations in the future about which we can say that "machines" are "thinking" for us. We cannot know what those situations will be, any more than we knew about the consequences of the Internet, the cell phone, Twitter, Facebook and Siri, to name just a few.

We do not know what all these inventions will be, or how they will affect us. We can anticipate that unless they impact a deeply embedded societal taboo, we will use these future inventions as casually as we use the present inventions, which is as casually as we used previous inventions, like books, or telephones.

In many situations we will want to say that a "machine" thinks for us, in others we will demur, and in some others we will not quite know what to say.

There are many issues that may well arise. Among them:

  • Is a machine that thinks a machine?
  • Do we think?
    Or do we think and {something else}?
  • What would we want a machine to think about?
  • What if the thinking machine disagrees with us? What if a thinking machine provides socially unacceptable or offensive (unthinkable) solutions like preemptive strikes, releasing smart viruses to reduce the population, eugenics, or genetic engineering?
  • We can think about moral dilemmas of giving rights to non-human beings, a process to which we are slowly accustoming ourselves with the higher mammals. We can here consider the plights of R2-D2 and C-3PO from the Star Wars movies.
  • "Thinking" is often used to mean coming up with ways of enabling and furthering our goals. What if a thinking machine thinks the planet would be better off without us? When we want a thinking machine, do we want it to have the kind of speciesism that gives special preference to us, much like we have always done?
  • Will we ever be able to off-load questions that are now matters of personal moral responsibility? Whom should I marry? Where should I live? What is the right thing to do about world hunger?
    Will we let a machine answer questions where we do not know the answer ourselves?

Perhaps this is all we want: to enjoy the responses, without wondering about how this is possible.

But I feel that the best response is hesitation: to get clear on what I will call the "fuzzy" nature of the words used in the question. Words are fuzzy (in ways I will sketch out below). Words are essentially fuzzy. But this can only be said with the other fuzzy words at our disposal.

It is important to recognize the nature of these abstract words we use to talk and to think (in one of the many uses of the word "think").

Marvin Minsky has famously written about the "society of mind." Although this is only a metaphor, if we take society in all its hurly-burly and confusions, it points to an alternative and useful way of thinking about our so-called "mind."

Our minds are multi-processing machines, constantly processing a large number of things at all times. Some of these we are aware of. Many of these we are not. They form the background to our thinking, but equally to our understanding, and to our speaking.

We must be self-aware and very cautious about using our large sets of abstract words, to avoid large sets of fruitless disagreements.

We can speak of this multi-processing as a "wall of thoughts," by which I mean the battery of simultaneous ongoing mental processes (which can also be called "thoughts").

When we use words like "thinking" and "machines," our minds make sense of our discourse, by bringing up a suitable context of understanding. In so doing, our minds carry along the connotations of such words.

(Usually, but not always, "thinking" is a positive word. "Machines" often has positive connotations, but it usually connotes something without humanity.)

Understanding another person is a basic purpose of speech. Without giving it much conscious awareness, we create contexts of understanding. These contexts are not identical. They depend upon what a person "is talking about," as well as what another person understands.

This is a balancing act between what the person is talking about and the words they use in so doing. We make sense of what they say. There is no fixed meaning to the words "machine" or "thinks." There is nothing we can point to and say that this, and only this, is what we mean by "machine" and "thinking." And if there were, we could, and do, easily invent new uses for the same words, or find ourselves in situations where those words are not the most useful.

We might invent new words for machines that think, but we will not (cannot) stop using the words we already have ready-to-mind.

The Edge question has no answer because the words of the question have no clear definition. Or better, the words have no one definition, but a whole range of useful uses, where we apply them, to one situation or another, guiding both us and our thinking. We will be inclined to use them, along with their connotations and their implicit judgments.

Extending Wittgenstein's concept of the "language game" to multi-criterial words, family resemblance is an active process. So we can enjoy the many thoughtful answers given here, but we should remember:

  • There is no "right" or "correct" answer to this question. It would be better to think that there are more or less thoughtful, interesting, profound, provocative, or useful answers.
  • The words we use here do not and cannot have a clear definition. Better to think of the words as mental glasses through which we conceptualize our world, in ways we think might help (impress, further, stimulate). What they cannot do is "answer" the question once and for all. We have neither the facts (about the future) nor the concepts that give complete clarity and "answer" such a question.
  • Therefore, in these ways, the question is not a question. Or, equally, not a question with an answer. Which in some ways is a sign of it not being a "question."
  • Our attempted answers are not answers, as much as they are responses, which can never cover the whole set of situations which might fit under such concepts.

A salutary hesitation is perhaps our best first response.

... it's almost always wrong to seek the "real meaning" of anything. A thing with just one meaning has scarcely any meaning at all. — Marvin Minsky, The Society of Mind (1987), 6.1