^Russell & Norvig 2003, p. 947 define the philosophy of AI as consisting of the first two questions, and the additional question of the ethics of artificial intelligence. Fearn 2007, p. 55 writes "In the current literature, philosophy has two chief roles: to determine whether or not such machines would be conscious, and, second, to predict whether or not such machines are possible." The last question bears on the first two.
^This version is from Searle (1999), and is also quoted in Dennett 1991, p. 435. Searle's original formulation was "The appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states." (Searle 1980, p. 1). Strong AI is defined similarly by Russell & Norvig (2003, p. 947): "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis."