Continued from my first rebuttal
Steve has now posted his first rebuttal. I am reproducing it here for ease of reference.
It seems to me that the discussion is wandering away from the topic of the debate. The question is whether or not human reason requires the existence of God. It is not, I believe, a discussion about the nature of life, or of the existence or otherwise of purely subjective experiences (although I am happy to cover those matters, and I will). I mentioned those as illustrations of how our past notions of reality have been shown to be flawed, just as our “common sense” understanding of how our minds work, and of what they consist, certainly is.
I also feel we need to stay focused on what human reasoning consists of, and not be tempted to abstract matters to ever higher levels so that they are always beyond whatever a materialistic view of the world can explain. One of the tools of reasoning is Occam’s Razor. We should stick with the simplest level at which we may be able to begin to explain things, and only move up from there if absolutely necessary. We also have to be careful about how we would justify moving to a different level. The history of science and reason has taught us that common sense, incredulity and ontological arguments are not good justifications.
So, let’s review what needs to be explained. I say that it is the ability of our minds to recognize truth, to reflect on it, and to draw correct conclusions.
I think this is an appropriate time to discuss where our connection with “truth” comes from. The philosopher Alvin Plantinga argues that evolution only tunes our minds for what is useful for survival, not for truth. This implies that we may be living in some kind of mental fog of unreality, unable to correctly assess reality. This is a rather puzzling idea, because to survive, minds have to be able to detect and remember truths about the world. Even simple animals with minimal brains can have the ability to learn. Learning is based on reward and punishment. Eat the right kind of fruit and you get a full belly. Eat the wrong kind and you get sick. An animal living in a fog of delusion about the world is going to get sick much of the time, and won’t be as good at producing offspring as one that has better access to the truth. This is not to say that we always recognise truth. It can be useful for minds to over-detect certain patterns. It is better to mistake a stick for a snake than to mistake a snake for a stick. However, even this is constrained, as too many “false positives” will result in a waste of energy. The tendency to false positives can also be reduced by experience, by discussion with others, and by research. In the end, when we carefully prod the stick, we find it is indeed just a stick.
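To make the point concrete, here is a minimal sketch of learning by reward and punishment (the fruit, the reward values, the noise level and the learning rule are all invented for the illustration, not drawn from any real study): a simple learner that only ever feels the outcomes of its choices nonetheless ends up with estimates that track the truth about its little world.

```python
import random

# A toy world: two kinds of fruit, one nourishing, one poisonous.
# The learner never sees these values directly; it only feels the outcomes.
TRUE_REWARDS = {"red_fruit": +1.0, "green_fruit": -1.0}

estimates = {"red_fruit": 0.0, "green_fruit": 0.0}  # learned value of each fruit
counts = {"red_fruit": 0, "green_fruit": 0}

random.seed(0)
for trial in range(200):
    # Mostly pick what currently seems best, occasionally explore.
    if random.random() < 0.1:
        choice = random.choice(list(TRUE_REWARDS))
    else:
        choice = max(estimates, key=estimates.get)
    outcome = TRUE_REWARDS[choice] + random.gauss(0, 0.2)  # noisy reward or punishment
    counts[choice] += 1
    # Incremental average: nudge the estimate toward what was actually experienced.
    estimates[choice] += (outcome - estimates[choice]) / counts[choice]

print(estimates)  # the learned estimates end up close to the true values
```

Nothing here “knows” the truth in advance; the estimates simply get dragged toward it by the consequences of acting, which is the whole point.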
There is a term used in computing – “bootstrapping”. It describes how a computer “gets going”. A computer needs to “know” facts about how to deal with its hardware: how to fetch information off the internal disk drive, for example. In the earliest machines the first few instructions had to be fed in manually, through switches. Those instructions could then get the machine to start reading what is now a barely remembered form of data storage – paper tape. The tape loaded enough code to start reading either a disk or a magnetic tape, and then the system was fully functional. Computer scientists knew there surely had to be a better system. What was needed was some way that the computer could “pull itself up by its own bootstraps”; get itself going (in a way that seemed almost paradoxical, hence the metaphor) with no intervention. This was achieved using information stores that could be accessed directly by the central processor of the computer and which did not lose their information when the computer was unplugged. In older computers this was in the form of magnetic rings (core memory). In modern computers it is in ROMs (Read-Only Memory). All the computer needs is to be hard-wired to always look in the same location in these stores for the first instruction to execute. I will come back to computers and their hardware again, as they provide an interesting parallel to brains in many respects, but for now the concept of bootstrapping is the important one. Our brains have been “bootstrapped” into recognising, processing and analysing truths about the world by the survival value of interacting successfully (in terms of reproduction) with the world, and especially with the other living creatures within it.
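For readers who prefer to see the idea rather than just read about it, here is a toy sketch of bootstrapping (the “instruction set”, the ROM contents and the pretend disk are all invented for the example and resemble no real processor): the only thing hard-wired into the machine is where to fetch its first instruction; everything else is pulled in from storage by the code that runs from that fixed starting point.

```python
# A toy machine that "bootstraps": execution always begins at ROM address 0.
ROM = [
    ("LOAD_BOOTLOADER",),   # hard-wired first fetch: copy the loader from "disk"
    ("JUMP_TO_LOADED",),    # then hand control to whatever was loaded
]

DISK = [("PRINT", "operating system running")]  # pretend secondary storage

def run():
    memory = []
    pc, program = 0, ROM          # the only hard-wired fact: start at ROM[0]
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOAD_BOOTLOADER":
            memory = list(DISK)          # pull the next stage in from storage
        elif op == "JUMP_TO_LOADED":
            program, pc = memory, -1     # transfer control to the loaded code
        elif op == "PRINT":
            print(args[0])
        pc += 1

run()
```

The design choice that matters is the single fixed starting point; given that, the system builds itself up from almost nothing, which is the sense of “bootstrapping” I want to carry over to brains.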
One of the problems in discussing the mind is that most of us seem to think that we are experts about what goes on in our own heads, and what it feels like to have those processes going on. I need to re-iterate what I said at the start of this response, and I will do it as follows: if a process walks like reasoning, and it quacks like reasoning, then we have a reasoning duck! To look at the process and claim that it simply can’t be reasoning because it is physical is not acceptable. If a physical system is sufficiently powerful to give the appearance of reasoning, then it certainly is reasoning. To argue otherwise would be begging the question about the nature of human reason. Evolution can produce solutions that we find difficult to understand, so it should not be surprising that we haven’t yet sorted out in detail how brains give rise to minds. I would like to give an example of the power of evolution and how it can lead to effective yet mysterious results. To do this I am going to come back to computer hardware. In the 1990s Adrian Thompson of the Department of Informatics at the University of Sussex performed an experiment. He wanted to see if he could evolve a computer circuit out of 100 components that could discriminate between two different signal frequencies – 1 kHz and 10 kHz. So, he varied the configuration of the circuitry at random and selected the best performers from the results. It took thousands of cycles of mutation and selection, but in the end, he succeeded. The results were rather odd. They were nothing like anything designed by a human. After some analysis, redundant components were removed (removal was not part of the selection process). Various feedback loops were present in the circuitry, and strangely, some components were needed even though they were not connected to the others: they contributed to the electrical properties of adjacent circuits. To sum up, the circuit produced by evolution was, for a time, a mystery. It only worked within a certain temperature range, and the output of the circuit changed if it was moved to a different area of silicon. One of the most significant aspects of this experiment was how efficient the result was in terms of components – only around a third of the available components were used in the evolved design. The implications of this (and subsequent experiments) are that we should beware of invoking magic where there is mystery. There was no magic here, just evolution. We see similar mystery in the brain, only with trillions of components, not tens. So, it is not surprising that we aren’t yet close to understanding consciousness.
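To show that there is no magic in “mutation plus selection”, here is a toy version of the same kind of experiment. Everything in it – the “circuit” reduced to a simple weighted sum, the two frequencies, the population size and the mutation rate – is invented for the sketch and is nothing like Thompson’s actual FPGA setup; the point is only that random variation and selection, with no design, produce a set of weights that responds strongly to the slow signal and weakly to the fast one.

```python
import math, random

random.seed(1)

# Two classes of input: a slow wave and a fast wave, each sampled at 100 points.
def sample(freq):
    phase = random.uniform(0, 2 * math.pi)
    return [math.sin(2 * math.pi * freq * t / 100 + phase) for t in range(100)]

def response(weights, signal):
    # The "circuit" here is nothing but a weighted sum of the input, squared.
    return sum(w * s for w, s in zip(weights, signal)) ** 2

def fitness(weights):
    # Reward circuits whose output separates the two frequencies.
    slow = sum(response(weights, sample(1)) for _ in range(5)) / 5
    fast = sum(response(weights, sample(10)) for _ in range(5)) / 5
    return slow - fast

# Random variation plus selection, nothing else.
population = [[random.gauss(0, 1) for _ in range(100)] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                                    # selection
    population = [[w + random.gauss(0, 0.1) for w in random.choice(survivors)]
                  for _ in range(30)]                              # mutation

best = max(population, key=fitness)
print("response to slow wave:", response(best, sample(1)))
print("response to fast wave:", response(best, sample(10)))
```

Nobody tells the weights what a frequency is; discrimination simply accumulates because variants that discriminate better leave more “offspring” in the next generation.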
That example of the evolved circuit shows one way that complexity can arise – by the input of energy and resources combined with selection from variation. Clearly, though, not all complexity is of the same nature, and it can arise in various ways. A watch is complex, but not in the same way as a tree. Brains are complex in ways that allow for intelligence. But we need to be careful about definitions, otherwise we may end up defining intelligence as “whatever a mind does” rather than considering its general nature. Although we have not achieved general intelligence in artificial systems, we have certainly achieved specific functions that are usually considered intelligent in nature – recognition of voices, faces and even handwriting, learning, planning, game-playing. We know of different kinds of complex systems that can achieve this level of functionality. One kind is the result of software engineering in the form of huge and carefully written programs. Another kind is fuzzier, with a greater emphasis on hardware – neural networks. We may, for all we know, discover other kinds of complex system that exhibit this behaviour. What we have seen is behaviour from artificial systems that, if a human exhibited it, would certainly be called truth recognition and processing – in other words, reasoning.
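As a small example of the “fuzzier” kind of system, here is a single artificial neuron trained with the classic perceptron rule (the data, the decision boundary and the learning rate are invented for the illustration); after a few passes over the data it reliably separates the two classes – behaviour we would happily call recognition if a person did it.

```python
import random

random.seed(0)

def step(x):
    return 1 if x > 0 else 0

# Invented training data: class 1 lies above the line y = x, class 0 below it.
points = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(200)]
labelled = [((x, y), 1 if y > x else 0) for x, y in points]

weights = [0.0, 0.0]
bias = 0.0

for epoch in range(20):
    for (x, y), target in labelled:
        prediction = step(weights[0] * x + weights[1] * y + bias)
        error = target - prediction
        # The classic perceptron rule: nudge the weights toward correct answers.
        weights[0] += 0.1 * error * x
        weights[1] += 0.1 * error * y
        bias += 0.1 * error

correct = sum(step(weights[0] * x + weights[1] * y + bias) == t
              for (x, y), t in labelled)
print(f"{correct}/{len(labelled)} training points classified correctly")
```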
Another matter raised in this debate is that of causation versus correlation in terms of the connection between brain and mind. I am afraid that attempting to explain things away using correlation just won’t wash. It is beyond anything we would accept in any other area of life. Above a certain level of correlation, we accept the likelihood of causation – that smoking is bad for you, that cars run on gas rather than magic. Only so much “prodding” of a system, and observing of the results, needs to be done before we accept that the prodding itself is having the effect. The alternative is absurd – that some non-physical process is constantly monitoring the state of all the neurons in the brain to figure out what is going on, so as to manipulate the state of some non-physical mind. One would have to ask – what would the point be? It would surely be easier to just leave the mind fully working when the brain suffers a stroke.
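The logic of “prodding” can be shown with a toy model (the network, its unit strengths and the lesioning scheme are all invented for the illustration): intervene on a component, observe a systematic change in the output, and the ordinary inference is that the component is causally involved, not merely correlated with the behaviour.

```python
# A toy "brain": a short chain of units of different strengths.
# "Prodding" means disabling one unit and observing whether the behaviour changes;
# a consistent change under intervention is what we ordinarily call causation.
WEIGHTS = [0.5, 0.8, 1.2, 0.3, 0.9]   # invented unit strengths

def run_network(lesioned=None):
    signal = 1.0
    for unit, weight in enumerate(WEIGHTS):
        if unit == lesioned:
            continue                        # the lesioned unit contributes nothing
        signal += 0.1 * weight * signal     # each healthy unit amplifies the signal
    return round(signal, 3)

print("intact network:", run_network())
for unit in range(len(WEIGHTS)):
    print(f"lesion unit {unit}:", run_network(lesioned=unit))
```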
Now I am going to return to some matters discussed in my previous contribution to clear up what I believe are misunderstandings. My discussion of universals was meant to illustrate how misleading it can be to assume the actual existence of certain attributes of physical systems. In discussing redness, I showed that the assumption of an essence of redness as an extra physical or metaphysical property of something that appears red is mistaken. This was not to do with the experience of “redness”, but with the point that “redness” has no existence independent of the physical world. I did not state that moral and propositional attitudes aren’t explicable to science in terms of the material world; I had rather hoped that my statement had shown that they were. What I stated was that there is still resistance to the idea that they are explicable in these terms, in a way that there no longer is for “redness”. There is no “essence of redness” present in the world, and there is no “essence of morality” or “essence of propositional attitude” present either. They are simply terms that describe certain kinds of mental processing, and the experience of having such processing going on.
Regarding the issue of intentionality and modeling other minds, the actual mental state of a tiger that may attack us is of no relevance to the matter of how our minds can deal with this issue. Our ability to recognise intentions in other minds comes from the necessity to predict what the tiger will do. Our minds recognise the living nature of a tiger because we need to model what it may do, and we realise that sufficiently complex living creatures aren’t as predictable as, say, a rock. We usually tend not to assume that a rock is “out to get us” (although there is certainly some advantage to assuming the remote possibility it might be, as an enemy may be behind it, waiting to push). Our abilities to recognise and to deal with intentionality aren’t mysterious: they arise from having to share our world with both predictable events and the less predictable objects that are creatures with minds.
To conclude, I have dealt with the issue of how the ability to recognise and deal with supposed universals of reasoning, such as truth, can be “bootstrapped” into brains through selection – no metaphysical standards for these need to exist. There may be some issue about the fact that it is like something to have experiences, but there is nothing about that which implies anything non-physical. There are interesting discussions to be had, I feel, about the nature of experience – why it seems to feel like something to have neural processing going on in one’s head. There are fascinating times ahead in neuroscience. We are able to follow the activity of neurons in ever-finer detail, and to model that activity. At some point, we will be able to observe, and model, what happens in a brain when we reason. This modeling will be a consequence of the use of the scientific method to understand reasoning. Any explanation of how we reason must itself involve the use of reason, and must include both a description of mechanism and predictive power. Supernatural explanations provide neither mechanism nor predictive power. They involve magic, and that really isn’t any kind of explanation at all. To pick up on one particular point, if it is proposed that some supernatural spirit follows the state and activity of nerve cells (so as to reproduce the effects of drugs, or brain lesions), then it is necessary to provide a description of the mechanism by which those nerve cells are monitored. The scientific approach will allow observation of the mind in action. It does not require such a mechanism, because the pattern of activity of billions of nerve cells is itself the mind. The results of that observation may be a million, or a billion, times more complicated and, initially, mysterious than the puzzling nature of the evolved frequency discriminator circuit described above, but we will get there in the end. The speed at which the Human Genome Project was completed shows the exponential rate at which science can progress.