Philosophy of Artificial Intelligence


The philosophy of artificial intelligence attempts to answer the following questions:[1]

  • Can a machine act intelligently? Can it solve any problem that a person solves by thinking?
  • Are human intelligence and machine intelligence the same? Is the human brain essentially a computer?
  • Can a machine have a mind, mental states, and consciousness in the same sense that a human being does? Can it feel how things are?

These three questions reflect the respective interests of AI developers, linguists, cognitive scientists, and philosophers. The scientific answers to them depend on how "intelligence" and "consciousness" are defined, and on exactly what kind of "machine" is under discussion.

Important propositions in the philosophy of AI include:

  • Turing's "polite convention": If a machine behaves as intelligently as a human being, then it is as intelligent as a human being.[2]
  • The Dartmouth proposal: "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it."[3]
  • Newell and Simon's physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."[4]
  • Searle's strong AI hypothesis: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."[5]

References

  1. ^ Russell & Norvig 2003, p. 947 define the philosophy of AI as consisting of the first two questions, plus the additional question of the ethics of artificial intelligence. Fearn 2007, p. 55 writes: "In the current literature, philosophy has two chief roles: to determine whether or not such machines would be conscious, and, second, to predict whether or not such machines are possible." The last question bears on the first two.
  2. ^ This is a paraphrase of the essential point of the Turing test. Turing 1950; Haugeland 1985, pp. 6–9; Crevier 1993, p. 24; Russell & Norvig 2003, pp. 2–3 and 948.
  3. ^ McCarthy et al. 1955. This assertion was printed in the program for the Dartmouth Conference of 1956, widely considered the "birth of AI." See also Crevier 1993, p. 28.
  4. ^ Newell & Simon 1976 and Russell & Norvig 2003, p. 18.
  5. ^ This version is from Searle (1999), and is also quoted in Dennett 1991, p. 435. Searle's original formulation was: "The appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states." (Searle 1980, p. 1.) Strong AI is defined similarly by Russell & Norvig (2003, p. 947): "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis."
  6. ^ Hobbes 1651, chapt. 5.