Philosophy Online Forum  
Author Topic: Artificial Intelligence  (Read 297 times)
Scott
Teachers
Newbie
***
Offline

Posts: 8



« on: 12/10/10 @ 04:10 »

Hi

I'm beginning the research for an essay on AI. I hope to start with Descartes, dualism/monism, Wittgenstein and logical behaviorism as a progression towards modern research and the work of the likes of Dennett and Chalmers. If this sounds vague, it's because it is. I have not, as yet, found my own voice, as it were, and am struggling for an angle to take, a thesis to support, without simply rehashing old arguments. Certainly there is a lot to say about the Turing Test and the Chinese Room, but I'm not sure I could add much originality to the debate.

I'm thinking to use this thread to clarify my thinking as I go along. In the meantime, advice and suggestions would be gratefully received.

I'm currently reading Damasio, and pondering the very fine line between philosophy and science...
Logged
Gareth Southwell
Administrator
Sr. Member
*****
Offline

Posts: 257


WWW
« Reply #1 on: 12/10/10 @ 08:03 »

Hi Scott,

I think perhaps the first things to identify are the precise points of controversy. There are two that I can think of, but there may be more: firstly, as argued by Searle, AI may always lack the ability to be truly conscious; secondly, thinkers such as Penrose argue that machines will never think in the fullest sense, for there are non-algorithmic aspects to thought (e.g. certain forms of insight which cannot be programmed). A third point - for me - would be the extent to which AI could approximate the instinctive drives. If, for instance, Nietzsche is right, and we are all driven on by some sort of 'will to power', then it's difficult to see how this could be replicated at the machine level, for it would basically amount to modelling the life force! Of course, some reject the will to power hypothesis precisely because it is not mechanistic! But there would seem to be a general problem with aping processes that are not fully rational (e.g. emotion).

Anyway, just some thoughts.   
Logged
badioutothebone
Teachers
Newbie
***
Offline

Posts: 11


WWW
« Reply #2 on: 13/10/10 @ 10:34 »

Nagel's "What is it like to be a bat?" is useful in this debate.  He discusses the completely subjective element which makes consciousness particular.  It's an interesting paper, as it gets people thinking not just about understanding bats and ourselves, but also about general ideas of consciousness and whether we can understand it at all.  I think Colin McGinn also does a lot on how consciousness may simply be too complex for us to actually delve into.

For me, the thing that will mess up our attempts to produce actual AI is that we've not fully understood the nature of subjectivity.  I think philosophy of mind has slowly tended towards an eliminative materialism (all mental terms should be outlawed in favour of pure brain talk), which isn't actually productive.  It fails to grasp a lot of animal elements (I won't call them human in case other animals are similar).  I think there is a faith that at some point an algorithm will be invented that creates self-awareness on a deep level, whereas it may turn out that such a thing isn't possible.  Even if it were, how could you check that a machine is thinking any more than you could verify that a human is?

I admit, my own thoughts are sketchy.  Thanks for opening this can of worms/hornet's nest.
Logged

"Ain't no devil, just God when he's drunk"  (Tom Waits).