^Russell & Norvig 2003, p. 947 define the philosophy of AI as consisting of the first two questions, plus the additional question of the ethics of artificial intelligence. Fearn 2007, p. 55 writes: "In the current literature, philosophy has two chief roles: to determine whether or not such machines would be conscious, and, second, to predict whether or not such machines are possible." The last question bears on the first two.
^This is a paraphrase of the essential point of the Turing test. Turing 1950; Haugeland 1985, pp. 6–9; Crevier 1993, p. 24; Russell & Norvig 2003, pp. 2–3 and 948.
^McCarthy et al. 1955. This assertion was printed in the program for the Dartmouth Conference of 1956, widely considered the "birth of AI." See also Crevier 1993, p. 28.
^This version is from Searle (1999), and is also quoted in Dennett 1991, p. 435. Searle's original formulation was "The appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states." (Searle 1980, p. 1). Strong AI is defined similarly by Russell & Norvig (2003, p. 947): "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis."
^Hobbes 1651, chpt. 5.