Read Chapter 6, “Body and Mind,” in our text Problems from Philosophy.
Read Chapter 7, “Could a Machine Think?,” in our text Problems from Philosophy.
Read this biography of René Descartes.
Read all content in the ‘Unit Four Lectures’ folder below.
Works Linked/Cited:
“René Descartes (1596–1650).” Internet Encyclopedia of Philosophy, n.d., www.iep.utm.edu/descarte/. Accessed 30 April 2018.
Unit Four Lectures
The mind-body problem is the problem of determining the relationship between the human body and the human mind. Philosophical positions on this question either reduce one to the other or posit the discrete coexistence of both. Put another way, the mind-body problem grows out of the appearance of three distinct occurrences: it appears that there are physical events (e.g., I raise my arm), it appears that there are mental events (e.g., I desire to raise my arm), and it appears that the two interact (e.g., I desire to raise my arm, something occurs in my brain that signals my arm to move, and my arm rises). If things are just as they appear, then there are two completely different kinds of things (mental events and physical events), and they somehow interact. This view is known as dualistic interactionism and is probably the most ‘common sense’ solution to the mind-body problem. Obviously, there are problems with this view, not the least of which were raised by Princess Elizabeth of Bohemia in her correspondence with Descartes after reading his Meditations on First Philosophy.
Mind-body dualism is the view that the mind and the body are different sorts of things: the body is a physical thing, and the mind is a nonphysical thing. Mind-body dualism was the theory of both Socrates and Descartes. Not all dualists, however, are supporters of mind-body interaction. Parallelism is a dualistic position that says mental events and physical events do not interact; rather, they are perfectly coordinated by God. Two well-known proponents of dualistic non-interactionism are Gottfried Leibniz and Nicolas Malebranche. Malebranche argued that God coordinates physical and mental events on each occasion as they occur; his position is called occasionalism, and it requires that God intervene specifically on every occasion in which interaction appears to take place. Leibniz, by contrast, held that God set things up from the beginning so that mind and body always behave as if they were interacting, without any particular intervention being required; his position is called pre-established harmony.
To summarize thus far, we have briefly covered two ‘solutions’ to the mind-body problem (the problem of offering a theory which will account for the appearance of mental events, the appearance of physical events, and the appearance of interaction between the two).
1. Dualistic interactionism (such as that offered by Descartes)
2. Dualistic non-interactionism (such as that offered by Leibniz and Malebranche)
Other ‘solutions’ to the mind-body problem argue that there is no interaction between mind and body because dualism is mistaken. In other words, these positions assert either that mind is just a part of the body or that the body can be reduced to the mind. Monism is the position that all phenomena derive from a single origin. According to materialist monistic positions, therefore, mind is simply a result of matter; materialist theories explain mental facts in purely physical terms. According to the monistic position of idealism, the material realm does not exist independently of the mind. Finally, there is neutral monism, the view that mental events and physical events can both be reduced to aspects of some neutral substance, which, considered by itself, is neither physical nor mental but is in essence prior to both.
Thinking Machines/Machines that Think
When we start to seriously consider the question of whether a machine could think, we run up against difficult philosophical questions about what it means to be a human person. Our human experience of ‘thinking’ is so fundamental to who we take ourselves to be that, as Rachels points out, when we argue that computers can only ‘execute programs’ (and therefore cannot think), we beg the question. Remember that begging the question is the fallacy of assuming the thing you are trying to prove or smuggling in the conclusion as one of the premises. When we say that we humans, unlike machines, are doing something more than executing programs, we are assuming that we know that humans are not just highly complex machines. In other words, it can be argued that we humans are actually complex organic machines: machines made out of flesh. So, if we actually think, whatever ‘thinking’ means, then certainly it is not because of the material of which we are constructed. Therefore, a complex machine made of other materials, it could be argued, is capable of thinking as well. In fact, Rachels quotes one commentator as saying, “If trees could converse with you as fluently as they do in some fairy tales, wouldn’t you unhesitatingly say that trees can think?”
René Descartes believed that only humans are capable of thinking; however, his position depends upon his belief in mind-body dualism. In other words, his view rests on the belief that humans, and only humans, have souls.
According to Descartes, non-human animals are complex organic machines, all of whose actions can be fully explained without any reference to the operation of a mind. He therefore did not believe that animals experience pain. Descartes regarded them as machines like clocks, which move and emit sounds but have no feelings. So, on Descartes’s view, when I step on my dog’s foot and the dog yelps as if in pain, what is actually happening is that the dog is mechanistically emitting a sound, much as an alarm clock does. In spite of Descartes’s view, it remains a matter of common sense to most people that many other animals do have conscious experiences similar to those of humans, although not as complex.
The philosopher Peter Singer, for example, writes the following regarding great apes: “The great apes (chimpanzees, gorillas and orangutans) are not only our closest relatives; they are also, more importantly, beings who possess many of the characteristics which we consider distinctive in our own species. They form close and lasting attachments to others; they show grief; they play; when taught sign language, they tell lies; they plan for the future; they form political coalitions; they reciprocate favors, and they become angry when someone for whom they have done a favor does not respond similarly. Their intellectual abilities have been compared with those of children between two and three years old, and their social bonds are stronger than we would expect from a child of that age.”
In contemporary times, most people see animals much differently than Descartes did. Also, our experiences with machines, both in real life and in science fiction stories and movies, have taken us beyond Descartes’s belief that a machine “could never modify its phrases to reply to the sense of whatever was said in its presence . . . .” While artificial intelligence products remain, as Rachels notes, “out of reach” of passing the Turing Test, machines can do remarkable things. (If you would like to discuss any problems you are having in philosophy with ELIZA, feel free to talk with her.)
What if a machine could pass the Turing Test, that is, if its responses could not be clearly and reliably distinguished from those of a human being? According to Rachels, building such a machine is not the only problem. Even if such a machine could be constructed, there are still two problems with using this as conclusive evidence that a machine can think. First, the Turing Test is based upon behaviorism, a discredited theory of mind. Second, the Chinese Room Argument shows that passing the Turing Test could be accomplished by a system that has only syntax: rules for manipulating uninterpreted symbols. To have a thinking mind, it appears that a system would have to have more than syntax. It must also have semantics: rules for interpreting symbols and grasping what they mean.
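To make the syntax-versus-semantics point concrete, here is a minimal sketch (the rules and symbol strings are invented for illustration and are not drawn from Rachels or Searle) of a program that produces replies purely by following formal rules for shuffling symbols, with no grasp of what any symbol means:

```python
# A toy "Chinese Room": replies are produced by looking up formal rules that
# map one string of symbols to another. The program has syntax (rules for
# manipulating uninterpreted symbols) but no semantics (no access to what the
# symbols mean). Rules and strings are invented; translations in the comments
# are for the reader only, not available to the program.

RULES = {
    "你好吗": "我很好，谢谢",          # "How are you?" -> "I'm fine, thanks"
    "今天天气如何": "今天天气很好",    # "How's the weather today?" -> "It's nice today"
}

def chinese_room(input_symbols: str) -> str:
    """Return a reply by rule lookup alone; nothing here understands Chinese."""
    return RULES.get(input_symbols, "请再说一遍")  # default: "Please say that again"

if __name__ == "__main__":
    print(chinese_room("你好吗"))        # fluent-looking reply, zero comprehension
    print(chinese_room("你叫什么名字"))  # unmatched input falls back to the default rule
```

In principle, enough rules of this kind could carry on a convincing exchange, yet nothing in the system understands Chinese; that gap between manipulating symbols and knowing what they mean is exactly what the Chinese Room Argument exploits.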
In the final analysis, as Rachels points out, “we have no good understanding of what makes us conscious. We think that it has something to do with the brain. But what features of our brains give rise to consciousness, and how exactly does this work? We do not know. . . . If we knew what features of our brains account for this, we could then ask whether a computer could have similar features.” If someday we are able to create robots that seem to have thoughts and feelings like human beings, we will have to face ethical questions about how they should be treated and whether they would be members of our moral community. In essence, these are the same questions that are being addressed today in ethics regarding the rights of non-human animals.
Then respond to each of the following three topics separately.
1 Ethics and AI
The World Economic Forum recently published an article about ethical issues related to artificial intelligence. After reading the article, address the following in your response to this thread:
Of the nine ethical issues mentioned, which do you find the most pressing or troubling from a moral perspective, and why? And while most of us do not and will not have a direct impact on the creation of these technologies, what do you see as our moral responsibilities, as individuals, in relation to them? In other words, can we, as individuals, ‘do’ anything to address some of these ethical concerns, even though most of us will not be involved in creating and implementing these technologies?
Works Linked/Cited:
Bossmann, Julia. “Top 9 Ethical Issues in Artificial Intelligence.” World Economic Forum, 21 Oct. 2016, https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/. Accessed 21 August 2019.
2 The Mind-Body Problem
After reading Chapter 6 and watching the video below on the mind-body problem, state your understanding of the mind-body problem and explain which view of the mind most agrees with your understanding of ‘mind.’ Are you a dualist? A materialist? Explain why; defend your position with reasons.
Where Does Your Mind Reside?: Crash Course Philosophy #22. YouTube video file. [9:06]. CrashCourse. 2016, Aug 1. youtu.be/3SJROTXnmus
3 AI and Personhood
Saudi Arabia recently granted the status of citizenship to a robot, Sophia. Read this article about Sophia and watch the video below to see her speak to a live audience. Given recent advancements in the development of sophisticated AI, what do you see as the implications for human beings? Do you think we have moral responsibilities to such creations? Should we consider them ‘persons’? Defend your answer with reasons for your position(s). Often in this thread, students write comments such as ‘AI will never be human’ or ‘no, I do not and will not ever consider AI to be human.’ Note, however, that we are discussing whether AI should be considered persons, not humans. Clearly, to be human means to have human DNA, so AI cannot be human. Make sure to focus your comments on the potential personhood of AI.
Interview with the Lifelike Hot Robot Named Sophia (Full) | NBC. YouTube video file. [5:04]. CNBC. 2017, Oct 25. youtu.be/S5t6K9iwcdw