Richard Feynman said that science is what we have learned about how not to fool ourselves about the way the world is. And Philip K. Dick is quoted as saying, "Reality is that which, when you stop believing in it, doesn't go away." I'm in the middle of reading John Searle's Mind. He makes a similar point in much less poetic language.
[There is a] distinction between those features of the world that are observer independent and those that are observer dependent or observer relative. Think of the things that would exist regardless of what human beings thought about or did. Some such things are force, mass, gravitational attraction, the planetary system, photosynthesis, and hydrogen atoms.

Incidentally, Searle thinks that subjective experience is a level of abstraction. He doesn't seem to be aware of that concept from computer science, though, and he isn't really clear about it. But that's essentially the point he makes. (See my discussion of reductionism, evolution, and levels of abstraction for more on what this is about.)
I agree with Searle that subjective experience is a level of abstraction. But Searle seems to think that saying so solves the problem of consciousness: that identifying subjective experience as a level of abstraction explains why and how we have subjective experiences. I don't agree and don't see why he thinks so. There are many ontologically irreducible levels of abstraction. That consciousness is one of them doesn't explain how we experience it as subjective experience. Searle's explanation doesn't overcome what he calls the zombie criticism: why couldn't all the same stuff happen but without subjective experience?
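The computer-science sense of "level of abstraction" in play here can be made concrete with a small sketch (the `Stack` class below is my own illustration, not anything from Searle or the "If a tree" paper). A stack is real and causally effective at its own level, even though "underneath" there is nothing but list manipulation; the stack level doesn't reduce away just because we can describe the lower level.

```python
# A stack as a level of abstraction over a list: push and pop are
# genuine operations at the stack level, yet the implementation
# below them is nothing but ordinary list operations.
class Stack:
    def __init__(self):
        self._items = []          # the lower-level implementation

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # → 2: a fact at the stack level, with no "stack" in the list below it
```

The point of the sketch is only that statements at the stack level ("the top of the stack is 2") are true and useful without mentioning the list at all.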
Here's a talk Searle gave about it in May 2006.
Here's my current approach to epistemology, which deals with many of these same issues.
- Ideas are subjective experiences in the same category as qualia. (This is the direction in which the "If a tree" paper goes.) Therefore ideas and qualia are both part of Chalmers's "hard problem of consciousness."
- I expect that we will eventually solve this problem and develop a reasonable understanding of how subjective experience comes about. But we don't have much to say about it now. The important point is that solving the problem of how subjective experience comes about (including how ideas come about) is a question for science.
- I actually expect that the "solution" to understanding consciousness will be disappointing. It will be something like: that's just the way it feels to operate on that level of abstraction. I know that sounds circular. (What does it mean to say "that's just the way it feels"? Why does it have to feel any way at all?) The answer probably won't be quite panpsychism (that any way of being has some subjective component in some sense), but it may be close. It seems to me that Nicholas Humphrey attempts something like that in his "How to Solve the Mind-Body Problem."
- Symbols (and language in general) are our attempt to externalize our ideas in order to record, study, and communicate them. In other words, symbols are secondary to and derivative of our ideas. Symbols no more exist outside our minds than do ideas. We use natural mechanisms symbolically, but they are not themselves symbols. The issue of the discreteness of nature is somewhat separate. QM is discrete. Entities (levels of abstraction) are discrete. But they are not symbolic. How would I characterize the difference? Discrete elements are maintained in their state more or less the way bits are maintained in their state.
- The symbol grounding problem is not the right problem to attempt to solve, since symbols are two steps away from their ground. The real problem is the idea grounding problem. To solve the symbol grounding problem we have to solve (a) the symbol-to-idea problem, i.e., what we intuitively mean by "semantics," and (b) the idea grounding problem.
- We will understand how ideas are grounded when we understand how subjective experience comes about. (See the second and third points above.) Until then, we just have to rely on the fact that we are able to ground our ideas—more or less the same way that we rely on our subjective experience being grounded. We don't really worry about solipsism. (Searle tells a great story about Bertrand Russell. After a talk, a well-known logician came up to him and commented that she was a solipsist and that the position seemed so logical that she was surprised she hadn't found more of them.) We function pretty well as beings in the world. We have evolution to thank for our ability to ground our qualia and our ideas. If our qualia and ideas were not generally grounded, we would not have survived as a species.
- Of course we are not always successful at grounding our qualia and ideas. We have lots of wrong ideas—and we can be fooled about other qualia. We have all sorts of ideas and subjective experiences. Some of them are right, but we can't expect that just because we have an idea or other subjective experience it will be right. The tag line of my blog (spoken by my cat) is: Humans, smart enough to have ideas; foolish enough to believe them. Richard Feynman once said that science is what we have learned about how not to fool ourselves about the way the world is. I think that's exactly right. Evolution built us to be pretty good at having ideas and other subjective experiences that match reality, but it doesn't always happen.
- Even when our ideas match reality, that doesn't mean our ideas are out there in the world. Our ideas are always in our minds. As I say in "If a tree," hemoglobin carries oxygen. But hemoglobin doesn't have a tag attached to it that certifies that it has the property of being able to carry oxygen. Neither does it have a tag certifying that it's hemoglobin. Nor does oxygen have a tag certifying that it's oxygen. All those sorts of tags are in our minds. Yet hemoglobin really does carry oxygen. So nature may be pretty close to what we think it is. (I gather that's a form of scientific realism.) It isn't composed of ideas or properties. But it is composed of stable entities.
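The bit analogy in the point about discreteness above can be sketched directly. Digital hardware keeps a bit in its state by repeatedly restoring a continuous, noisy voltage to one of two discrete levels; the discreteness is actively maintained, not simply given by the substrate. (The threshold and noise figures below are illustrative assumptions of mine, not anything from the post.)

```python
import random

def restore(voltage, threshold=0.5):
    """Snap a noisy analog voltage back to a discrete 0.0 or 1.0."""
    return 1.0 if voltage >= threshold else 0.0

# A stored bit survives many cycles of analog drift because each
# cycle re-discretizes it. The noise here never exceeds 0.3, so the
# voltage never crosses the 0.5 threshold and the bit stays 1.0.
bit = 1.0
for _ in range(1000):
    bit = restore(bit + random.uniform(-0.3, 0.3))
print(bit)  # → 1.0: the discrete state persists despite continuous noise
```

Nothing in this sketch is symbolic; it only shows discrete states being maintained on top of a continuous substrate, which is the distinction the point above draws between discreteness and symbols.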