Can Machines Think?

"Can machines think?" The problem can be described in terms of the "imitation game", with a man, a woman, and an interrogator. The interrogator stays in a room apart front the other two and tries, by sending questions (perhaps by teleprinter) to the others, to determine which is the man and which the woman. Could a digital computer convince the interrogator that it was a man?
Digital computers work by following written-down rules, called 'programmes', and can even change their own rules in response to other rules. I believe that in about fifty years' time it will be possible to programme computers to make them play the imitation game.
It could be argued that machines cannot have souls, but, if this matters at all, why could God not give a soul to whatever He wants? We cannot know if computers could have consciousness, because we cannot really know if other people have consciousness. Computers can still surprise us with their answers. It might be best to try and build a computer like a human infant, and then programme it to learn.
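(If you want a concrete picture of what "following written-down rules, and changing them in response to other rules" looks like, here is a toy sketch. The rule names and replies are my own inventions for illustration; this is obviously nothing like Turing's actual formulation.)

```python
# Toy sketch of a machine that just follows a written-down rule table,
# where one rule is allowed to rewrite another rule.
# The rules and strings here are invented placeholders.

rules = {
    "greet": "Hello, I am a machine.",
    "swap":  None,  # placeholder: this rule edits the table itself
}

def run(instruction):
    if instruction == "swap":
        # A rule changing another rule "in response to other rules".
        rules["greet"] = "Hello, I am definitely a human."
        return "(rule table updated)"
    return rules.get(instruction, "I do not know that instruction.")

print(run("greet"))   # Hello, I am a machine.
print(run("swap"))    # (rule table updated)
print(run("greet"))   # Hello, I am definitely a human.
```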

read more here and then post your thoughts


Also watch this:


via videosift.com

Some notes...

First of all, these are two brilliant people faced with an uncertain question about an unclear topic. To have any meaningful conversation about it for longer than 30 minutes is a feat in and of itself. Bravo to everyone involved for their time and energy!

Since this is the internets, I will of course give my opinion. AI was something I wrote much about in college. I started out like the man on our left: I was a technologist, and I believed in the power of computing and simulation. Facts were only things that were verifiable and proven through rigorous trial and error, in an effort to discover the truths of the universe. I had the utmost zeal for technology solving all the world's problems, and believed it could meet any possible challenge. After years of study and introduction to many different areas and ways of thinking, I arrived at what I consider a more realistic understanding of technology and philosophy. With that said, let's get some meat!

Let us go over some of the things they mentioned. First, the Chinese room argument.
This is a thought experiment where a man goes into a room. It is locked and has only a small slot for access. In the room with the man are a typewriter and a book of Chinese symbols. The man does not speak Chinese, but the book explains how to respond to certain symbol sets. It does not offer translated meaning or anything of that nature. It is simply: if you see "This", then type "That". It is pure syntax; no meaning is applied.

Now, a second man comes to the slot of the room. He inserts a sentence through the slot and waits. The man inside the room looks at the paper, consults his guide, and begins to churn out his output. He slides the output through the slot and the second man receives it. He reads it, and it appears that the response came from something that knows Chinese, something that understood what he said and replied. However, this is not what happened. The person inside knows nothing of how to speak that language; he was only responding syntactically to other syntax. This is not intelligence; more to the point, this is not understanding.
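To make the point concrete, here is a tiny sketch of the room reduced to code: just a lookup table and nothing else. The particular symbol pairs are my own placeholders, and the English glosses in the comments are for the reader's benefit, not for the "man in the room".

```python
# The Chinese room as pure syntax: the "man in the room" is a dictionary lookup.
# No meaning is attached to any of the symbols inside the function.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thanks."
    "今天天气怎么样？": "今天天气很好。",   # "How is the weather?" -> "The weather is nice."
}

def man_in_the_room(slip_of_paper: str) -> str:
    # If you see "This", then type "That". No translation, no understanding.
    return RULE_BOOK.get(slip_of_paper, "对不起。")  # fallback: "Sorry."

# The second man outside the slot:
print(man_in_the_room("你好吗？"))  # looks like fluent Chinese came back
```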

Much to my disappointment, I became aware of this thought experiment, because currently this is how ALL software is realized. The hardware is essentially dumb; it does nothing except what the software tells it to do. This means that, at best, a computer in its current form can never have understanding. So at best, this conversation has to be about new, different computers that don't work on the same syntactical model we have today.

The counter to this was that humans can be understood in the same way a computer can: we are just state machines, with the brain being the software and the body being the dumb hardware that does what the brain tells it to. This would imply that humans also do not have understanding. However, we do, and that is where the problem is.
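For those who like the nuts and bolts, this is roughly what a state machine looks like when stripped down: a fixed transition table and a loop that blindly follows it. The states and events below are invented for illustration only; the question is whether anything like this could ever amount to understanding.

```python
# A minimal finite state machine, sketching the "humans are just state machines"
# counter-argument: the 'hardware' only ever moves between states according to
# a fixed transition table. Nothing in here understands food.

TRANSITIONS = {
    ("idle",     "see_food"): "reaching",
    ("reaching", "grasped"):  "eating",
    ("eating",   "finished"): "idle",
}

def step(state: str, event: str) -> str:
    # The "body" blindly looks up the next state; unknown events change nothing.
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["see_food", "grasped", "finished"]:
    state = step(state, event)
    print(state)   # reaching, eating, idle
```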

Now, we must be clear on what understanding means before we move further. Understanding is hard to flesh out briefly, but I will try. Experiencing the color blue is more than just registering a certain wavelength of light. It has a context that goes beyond just the facts of it: you experience blueness! Blue has a real experienced value. You have done more than just become aware of it; you have an experience of it. Moreover, you can actually think back upon the experience itself. It is more than just a wavelength to you: not only is it blue, but you have an experience of blue to reflect on, with all sorts of other things relating to it.

The man in the room had no understanding of Chinese. It was gibberish to him. He could only do what he was told, in his own language.

The next is a typical fallacy that I have used from time to time without realizing it. It is easy to commit, and it is committed in this presentation: the appeal to complexity, in a slightly modified form. The claim is that we don't understand human consciousness because the brain is complex, and that it is from this very complexity that the emergent property of consciousness arises. This of course is not necessarily true or untrue, but he states it as a fact and uses it to argue that consciousness in computing is possible.

Let us use another example. Let us say that we have broadcasting towers all over the USA. They are broadcasting all sorts of different programs to all sorts of different people. It is a complex web of towers and receivers, but it all seems to work out OK. So, are we to conclude that radio towers are conscious? Of course not, but that is what we are doing with the human experience of consciousness. Let's look at that quickly.

When you experience something, you experience every one of your senses simultaneously. You remember the sounds, the tastes, the sights... it is all there. However, your brain never really has a point at which all of these connect. Your consciousness seems to violate the laws of physics: things are happening in different locations in space at different times, but for your consciousness, at the same time. This isn't something that is reducible to brain states, and it is not something that is physically possible in computer technology as we know it. It doesn't matter if it is parallel or not; if things don't touch but are somehow related, this is mystifying, and as a result, unreproducible. Perhaps consciousness is reducible to one point in the brain we haven't found, but so far, there is no such thing.

I have already gone on way too long, and I could go on for about 20 more pages. I still have my thesis on it lying around here somewhere. I LOVE THIS TOPIC, but my studies have led me to believe that creating an ACTUAL intelligence isn't possible with current digital technology. Let me remind everyone that digital computing hasn't fundamentally changed since Leibniz, and that was in the 1600s. In other words, AI, or Computers with Consciousness, is NOT possible with state machine logic.

I would like to point out one more fallacy the pro-AI guy made (and let me be clear, I love the idea of AI too, so I am pro as well! I just think it is impossible): simulations of brain states are simulacra, not experience. A simulacrum differs from actual experience because it begs the question: is this thing ACTUALLY experiencing anything other than a brain state? For instance, the color blue is not necessarily equal to any particular brain state. Brain states alone do not sufficiently explain human consciousness, and to assume that a proper modeling of them is anything other than just another simulacrum is without cause. In short, a simulacrum does not an experience make. (The people in the painting aren't experiencing a wonderful day.)
