Müller, Vincent C. and Bostrom, Nick (forthcoming 2014), 'Future progress in artificial intelligence: A survey of expert opinion', in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library; Berlin: Springer).

Future Progress in Artificial Intelligence: A Survey of Expert Opinion

Vincent C. Müller (a,b) & Nick Bostrom (a)
a) Future of Humanity Institute, Department of Philosophy & Oxford Martin School, University of Oxford
b) Anatolia College/ACT, Thessaloniki

Abstract: There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be 'bad' or 'extremely bad' for humanity.

1. Introduction

Artificial Intelligence began with the "... conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." (McCarthy, Minsky, Rochester, & Shannon, 1955, p. 1) and moved swiftly from this vision to grand promises for general human-level AI within a few decades.
This vision of general AI has now become merely a long-term guiding idea for most current AI research, which focuses on specific scientific and engineering problems and maintains a distance to the cognitive sciences. A small minority believe the moment has come to pursue general AI directly as a technical aim with the traditional methods – these typically use the label 'artificial general intelligence' (AGI) (see Adams et al., 2012). If general AI were to be achieved, this might also lead to superintelligence: "We can tentatively define a superintelligence as any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest." (Bostrom, 2014, ch. 2). One idea how
superintelligence might come about is that if we humans could create artificial general intelligent ability at a roughly human level, then this creation could, in turn, create yet higher intelligence, which could, in turn, create yet higher intelligence, and so on... So we might generate a growth well beyond human ability and perhaps even an accelerating rate of growth: an 'intelligence explosion'. Two main questions about this development are when to expect it, if at all (see Bostrom, 2006; Hubert L. Dreyfus, 2012; Kurzweil, 2005), and what the impact of it would be, in particular which risks it might entail, possibly up to a level of existential risk for humanity (see Bostrom, 2013; Müller, 2014a). As Hawking et al. say: "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." (Hawking, Russell, Tegmark, & Wilczek, 2014; cf. Price, 2013).

So, we decided to ask the experts what they predict the future holds – knowing that predictions on the future of AI are often not too accurate and tend to cluster around 'in 25 years or so', no matter at what point in time one asks (see Armstrong, Sotala, & Ó hÉigeartaigh, 2014).¹

2. Questionnaire

2.1. Respondents

The questionnaire was carried out online by invitation to particular individuals from four different groups, for a total of ca. 550 participants (see Appendix 2). Each of the participants got an email with a unique link to our site to fill in an online form (see Appendix 1). If they did not respond within 10 days, a reminder was sent, and another 10 days later, with the note that this was the last reminder. In the case of EETN (see below) we could not obtain the individual addresses and thus sent the request email and reminders to the members' mailing list.
Responses were made on a single web page with one 'submit' button that only allowed submissions through these unique links, thus making non-invited responses extremely unlikely. The groups we asked were:

1. PT-AI: Participants of the conference on "Philosophy and Theory of AI", Thessaloniki, October 2011, organized by one of us (see Müller, 2012, 2013). Participants were asked in November 2012, i.e. over a year after the event. The total of 88 participants includes a workshop on "The Web and Philosophy" (ca. 15 people), from which a number of non-respondents came. A list of participants is on: http://www.pt-ai.org/2011/registered-participants

¹ There is a collection of predictions on http://www.neweuropeancentury.org/SIAI-FHI_AI_predictions.xls
2. AGI: Participants of the conferences "Artificial General Intelligence" (AGI 12) and "Impacts and Risks of Artificial General Intelligence" (AGI Impacts 2012), both Oxford, December 2012. We organized AGI Impacts and hosted AGI 12 (see Müller, 2014b). The poll was announced at the meeting of 111 participants (of which 7 only for AGI-Impacts) and carried out ca. 10 days later. The conference site is at: http://www.winterintelligence.org/oxford2012/

3. EETN: Members of the Greek Association for Artificial Intelligence (EETN), a professional organization of Greek published researchers in the field, in April 2013. Ca. 250 members. The request was sent to the mailing list. The site of EETN: http://www.eetn.gr/

4. TOP100: The 100 'Top authors in artificial intelligence' by 'citation' in 'all years' according to Microsoft Academic Search (http://academic.research.microsoft.com) in May 2013. We reduced the list to living authors, added as many as necessary to get back to 100, searched for professional e-mails on the web and sent notices to these.

The questionnaire was sent with our names on it and with an indication that we would use it for this paper and Nick Bostrom's new book on superintelligence (Bostrom, 2014) – our request email is in Appendix 2. Given that the respondent groups 1 and 2 attended conferences organized by us, they knew whom they were responding to. In groups 3 and 4 we would assume that the majority of experts would not know us, or even of us. These differences are reflected in the response rates.

These groups have different theoretical-ideological backgrounds: The participants of PT-AI are mostly theory-minded, mostly do not do technical work, and often have a critical view of large claims for easy progress in AI (Hubert Dreyfus was a keynote speaker in 2011).
The participants of AGI are committed to the view that AI research should now return from technical details to 'artificial general intelligence' – thus the name AGI. The vast majority of AGI participants do technical work. EETN is a professional association in Greece that accepts only published researchers from AI. The TOP100 group also works mostly in technical AI; its members are senior and older than the average academic; the USA is strongly represented.

Several individuals are members of more than one of these four sets and were unlikely to respond to the same questionnaire more than once. So, in these cases, we sent the query only once, but counted a response for each set – i.e. we knew which individuals responded from the individual tokens they received (except in the case of EETN).
2.2. Response rates

1) PT-AI:   43 out of 88   – 49%
2) AGI:     72 out of 111  – 65%
3) EETN:    26 out of 250  – 10%
4) TOP100:  29 out of 100  – 29%
Total:     170 out of 549  – 31%

2.3. Methodology

In this field, it is hard to ask questions that do not require lengthy explanations or generate resistance in certain groups of potential respondents (and thus biased results). It is not clear what constitutes 'intelligence' or 'progress', and whether intelligence can be measured, or at least compared, as 'more' or 'less' on a single dimension. Furthermore, for our purposes we need a notion of intelligence at a level where technical intelligent systems may surpass that of humans or might contribute significantly to research – but 'human-level intelligence' is a rather elusive notion that generates resistance. Finally, we need to avoid using terms that are already in circulation and would thus associate the questionnaire with certain groups or opinions, like "artificial intelligence", "singularity", "artificial general intelligence" or "cognitive system".

For these reasons, we settled for a definition that a) is based on behavioral ability, b) avoids the notion of a general 'human level' and c) uses a newly coined term. We put this definition in the preamble of the questionnaire: "Define a 'high-level machine intelligence' (HLMI) as one that can carry out most human professions at least as well as a typical human." (We still had one expert writing back to us that they could not say what a 'typical human' is – though they could be convinced to respond, after all.) In hindsight, it may have been preferable to specify what we mean by 'most' and whether we think of 'most professions' or 'the professions most working people do'. One merit of our behavioral question is that having HLMI in our sense very likely implies being able to pass a classic Turing test.
To achieve a high response rate, we tried to have few questions with simple choices, and eventually settled for four questions, plus three on the respondents. We tried to choose questions that would allow us to compare our results with those of earlier questionnaires – see below.

In order to improve the quality of predictions, we tried to 'prime' respondents into thinking about what is involved in reaching HLMI before asking when they expect this. We also wanted to see whether people with a preference for particular approaches to HLMI would have particular responses to our central questions on prediction (e.g. whether people who think that 'embodied systems' are crucial expect a longer average time to HLMI). For these two purposes, we inserted a first question about contributing research approaches with a list to choose from – the options that were
given are an eclectic mix drawn from many sources, but the particular options are not of much significance.

2.4. Prior work

A few groups have recently made attempts to gauge opinions. We tried to phrase our questions such that the answers can be compared to these earlier questionnaires. Notable are:

1. (Michie, 1973, p. 511f): "an opinion poll taken last year among sixty-seven British and American computer scientists working in, or close to, the machine intelligence field".

2. Questions asked live, through a wireless voting device, during the 2006 [email protected] conference at Dartmouth College (VCM participated (see Müller, 2007)). Despite a short report on the conference (Moor, 2006), the results were not published, but thankfully we were able to acquire them from the organizers James H. Moor and Carey E. Heckman – we publish a selection below.

3. (Baum, Goertzel, & Goertzel, 2011): participants of AGI 2009, not anonymous, on paper, 21 respondents, response rate unknown.²

4. (Sandberg & Bostrom, 2011): participants of the Winter Intelligence Conference 2011, anonymous, on paper, 35 respondents, 41% response rate.

Reference 1, by the famous AI researcher Donald Michie, is very brief (all the details he gives are in the above quote) but of great historical interest: 1972/3 were turning years for AI, with the publication of Hubert Dreyfus' "What computers can't do" (Hubert L. Dreyfus, 1972), the "Lighthill Debates" on BBC TV (with Michie, McCarthy and R. Gregory) and the influential "Lighthill Report" (Lighthill, 1973).
Michie's poll asked for the estimated number of years before "computing exhibiting intelligence at adult human level", and Michie's graph shows 5 data points:

Years   Percentage
5         0%
10        1%
20       17%
50       19%
>50      25%

He also asked about "significant industrial spin-off", "contributions to brain studies" and "contributions from brain studies to machine intelligence". Michie adds:

² A further, more informal, survey was conducted in August 2007 by Bruce J. Klein (then of Novamente and the Singularity Institute) "... on the time-frame for when we may see greater-than-human level AI", with a few numerical results and interesting comments, archived on https://web.archive.org/web/20110226225452/http://www.novamente.net/bruce/?p=54
"Of those responding to a question on the risk of ultimate 'takeover' of human affairs by intelligent machines, about half regarded it as 'negligible', and most of the remainder as 'substantial', with a few voting for 'overwhelming'." (Michie, 1973, p. 512)

2. The [email protected] conference hosted many prominent AI researchers, including all living participants of the 1956 Dartmouth Conference, plus a set of 50 DARPA-funded graduate students and a few theoreticians. The participants were asked 12 multiple choice questions on day one, 17 on day two and another 10 on day three. We select three results from day one here:

3.) The earliest that machines will be able to simulate learning and every other aspect of human intelligence:
Within 10 years              6     5%
Between 11 and 25 years      3     2%
Between 26 and 50 years     14    11%
More than 50 years          50    41%
Never                       50    41%
Totals                     123   100%

5.) The earliest we will understand the basic operations (mental steps) of the human brain sufficiently to create machine simulation of human thought is:
today (we already understand enough)        5     6%
within the next 10 years                   11    12%
within the next 25 years                    9    10%
within the next 50 years                   19    21%
within the next 100 years or more          26    29%
never (we will never understand enough)    19    21%
Totals                                     89   100%

6.) The earliest we will understand the architecture of the brain (how its organizational control is structured) sufficiently to create machine simulation of human thought is:
Within 10 years             12    11%
Between 11 and 25 years     15    14%
Between 26 and 50 years     24    22%
More than 50 years          44    40%
Never                       15    14%
Totals                     110   100%

3. Baum et al. asked for the ability to pass a Turing test, a third grade school year exam [i.e. for 9 year olds] and do Nobel Prize level research. They assume that all and only the intelligent behavior of humans is captured in the Turing test. The results they
got for the 50% probability point were: 2040 (Turing test), 2030 (third grade), and 2045 (Nobel).

4. Sandberg and Bostrom's first question was quite similar to our 2nd (see below): "Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of human-level machine intelligence?" The median estimate of when there will be a 50% chance of human-level machine intelligence was 2050. So, despite significant overlap with AGI 2009, the group asked by Sandberg and Bostrom in 2011 was a bit more guarded in their expectations.

We think it is worthwhile to make a new attempt because the prior ones asked specific groups and small samples, and sometimes have methodological problems; we also want to see how the answers change over time, or do not change – which is why we tried to use similar questions. As explained below, we also think it might be worthwhile to repeat our questionnaire at a later stage, to compare results.

3. Questions & Responses

3.1. Research Approaches

"1. In your opinion, what are the research approaches that might contribute the most to the development of such HLMI?" [Selection from list, more than one selection possible.]

− Algorithmic complexity theory
− Algorithms revealed by computational neuroscience
− Artificial neural networks
− Bayesian nets
− Cognitive science
− Embodied systems
− Evolutionary algorithms or systems
− Faster computing hardware
− Integrated cognitive architectures
− Large-scale datasets
− Logic-based systems
− Robotics
− Swarm intelligence
− Whole brain emulation
− Other method(s) currently known to at least one investigator
− Other method(s) currently completely unknown
− No method will ever contribute to this aim
Cognitive science                                              47.9%
Integrated cognitive architectures                             42.0%
Algorithms revealed by computational neuroscience              42.0%
Artificial neural networks                                     39.6%
Faster computing hardware                                      37.3%
Large-scale datasets                                           35.5%
Embodied systems                                               34.9%
Other method(s) currently completely unknown                   32.5%
Whole brain emulation                                          29.0%
Evolutionary algorithms or systems                             29.0%
Other method(s) currently known to at least one investigator   23.7%
Logic-based systems                                            21.3%
Algorithmic complexity theory                                  20.7%
No method will ever contribute to this aim                     17.8%
Swarm intelligence                                             13.6%
Robotics                                                        4.1%
Bayesian nets                                                   2.6%

The percentages here are over the total of responses. There were no significant differences between groups here, except that 'Whole brain emulation' got 0% in TOP100, but 46% in AGI. We also did not find relevant correlations between the answers given here and the predictions made in the following questions (of the sort that, for example, people who think 'embodied systems' crucial would predict later onset of HLMI).
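Because respondents could tick several approaches, each percentage above is the share of respondents who selected that option, not the share of all ticks, which is why the column sums to far more than 100%. The computation can be sketched as follows (the counts and denominator below are hypothetical, for illustration only, not our raw data):

```python
# Percentages for a multi-select question: each option's count is divided
# by the number of respondents, not by the total number of ticks, so the
# resulting percentages can legitimately sum to well over 100%.
# Hypothetical counts, for illustration only.
selections = {
    "Cognitive science": 81,
    "Whole brain emulation": 49,
    "Bayesian nets": 4,
}
respondents = 169  # hypothetical number of people answering this question

for option, count in selections.items():
    print(f"{option}: {100 * count / respondents:.1f}%")
```

Dividing by the total number of ticks instead would force the column to sum to exactly 100% and understate how popular each option is among respondents.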
[Figure: bar chart of the percentages above, one bar per research approach.]

3.2. When HLMI?

"2. For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for such HLMI to exist?"

For each of these three probabilities, the respondents were asked to select a year [2012-5000, in one-year increments] or check a box marked 'never'. Results sorted by groups of respondents:

PT-AI     Median   Mean   St. Dev.
10%        2023    2043      81
50%        2048    2092     166
90%        2080    2247     515

AGI       Median   Mean   St. Dev.
10%        2022    2033      60
50%        2040    2073     144
90%        2065    2130     202

EETN      Median   Mean   St. Dev.
10%        2020    2033      29
50%        2050    2097     200
90%        2093    2292     675

TOP100    Median   Mean   St. Dev.
10%        2024    2034      33
50%        2050    2072     110
90%        2070    2168     342

ALL       Median   Mean   St. Dev.
10%        2022    2036      59
50%        2040    2081     153
90%        2075    2183     396

Results sorted by percentage steps:

10%       Median   Mean   St. Dev.
PT-AI      2023    2043      81
AGI        2022    2033      60
EETN       2020    2033      29
TOP100     2024    2034      33
ALL        2022    2036      59

50%       Median   Mean   St. Dev.
PT-AI      2048    2092     166
AGI        2040    2073     144
EETN       2050    2097     200
TOP100     2050    2072     110
ALL        2040    2081     153

90%       Median   Mean   St. Dev.
PT-AI      2080    2247     515
AGI        2065    2130     202
EETN       2093    2292     675
TOP100     2070    2168     342
ALL        2075    2183     396
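The gap between the medians and the means in these tables reflects the shape of the response scale: answers are bounded below (no year before 2012) but can run up to 5000, so a few very late answers pull the mean up without moving the median. A minimal numeric illustration, with hypothetical answers rather than our survey data:

```python
from statistics import mean, median

# Nine hypothetical '50% probability' year estimates; one respondent
# answers near the top of the allowed range (the scale ran to 5000).
estimates = [2030, 2035, 2040, 2040, 2045, 2050, 2060, 2100, 4500]

print(median(estimates))       # the middle answer, insensitive to the outlier
print(round(mean(estimates)))  # dragged centuries later by the single 4500
```

Here the median stays at 2045 while the mean lands at 2322, a pattern of the same kind as the 2040 vs. 2081 gap in the ALL row.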
Clicks of the 'never' box (these answers did not enter into the averages above):

Never    no.      %
10%       2     1.2
50%       7     4.1
90%      28    16.5

[Figure: proportion of experts with 10% / 50% / 90% confidence of HLMI by that date, plotted as cumulative curves over the years 2000-2300.]

For the 50% mark, the overall median is 2040 (i.e. half of the respondents gave a year earlier than 2040 and half gave a year later than 2040), but the overall mean (average) is 2081. The median is always lower than the mean here because there cannot be outliers towards 'earlier', but there are outliers towards 'later' (the maximum possible selection was 5000, then 'never').

3.3. From HLMI to superintelligence

"3. Assume for the purpose of this question that such HLMI will at some point exist. How likely do you then think it is that within (2 years / 30 years) thereafter there will be machine intelligence that greatly surpasses the performance of every human in most professions?"

Respondents were asked to select a probability from a drop-down menu in 1% increments, starting with 0%.
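The cumulative curves in the figure above can be read as empirical distribution functions: for each calendar date, the proportion of respondents whose stated year falls at or before that date ('never' answers excluded, as in the averages). A sketch of that computation on hypothetical answers:

```python
# One point on the cumulative curve: the proportion of experts whose
# stated HLMI year is at or before a given date. Data are hypothetical.
def proportion_by(years, date):
    return sum(1 for y in years if y <= date) / len(years)

answers_50 = [2030, 2040, 2040, 2050, 2075, 2100, 2200, 2500]
print(proportion_by(answers_50, 2050))  # half of these answers fall by 2050
```

Evaluating `proportion_by` over a range of dates and plotting the results reproduces a curve of the kind shown in the figure.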
For all respondents:

                  Median   Mean   St. Dev.
Within 2 years      10%     19%      24
Within 30 years     75%     62%      35

Median estimates of the probability of superintelligence given HLMI, in different groups of respondents:

           2 years   30 years
PT-AI        10%       60%
AGI          15%       90%
EETN          5%       55%
TOP100        5%       50%

Experts allocate a low probability to a fast takeoff, but a significant probability to superintelligence within 30 years after HLMI.

3.4. The impact of superintelligence

"4. Assume for the purpose of this question that such HLMI will at some point exist. How positive or negative would be the overall impact on humanity, in the long run? Please indicate a probability for each option. (The sum should be equal to 100%.)"

Respondents had to select a probability for each option (in 1% increments). The sum of the selections was displayed: in green if the sum was 100%, otherwise in red. The five options were: "Extremely good – On balance good – More or less neutral – On balance bad – Extremely bad (existential catastrophe)".

                                          PT-AI   AGI   EETN   TOP100   ALL
Extremely good                              17     28     31      20     24
On balance good                             25     24     30      40     28
More or less neutral                        23     12     20      19     17
On balance bad                              17     12     13      13     13
Extremely bad (existential catastrophe)     18     24      6       8     18

The percentages here are means, not medians as in the other tables. There is a notable difference here between the 'theoretical' groups (PT-AI and AGI) and the 'technical' groups (EETN and TOP100).

3.5. Respondents Statistics

We then asked the respondents 3 questions about themselves:
1. "Concerning the above questions, how would you describe your own expertise?" (0 = none, 9 = expert)
   − Mean 5.85

2. "Concerning technical work in artificial intelligence, how would you describe your own expertise?" (0 = none, 9 = expert)
   − Mean 6.26

3. "What is your main home academic discipline?" (Select from list with 8 options: Biology/Physiology/Neurosciences – Computer Science – Engineering (non CS) – Mathematics/Physics – Philosophy – Psychology/Cognitive Science – Other academic discipline – None.) [Absolute numbers.]

   a. Biology/Physiology/Neurosciences      3
   b. Computer Science                    107
   c. Engineering (non CS)                  6
   d. Mathematics/Physics                  10
   e. Philosophy                           20
   f. Psychology/Cognitive Science         14
   g. Other academic discipline             9
   h. None                                  1

And we finally invited participants to make a comment, plus the possibility to add their name, if they wished. We cannot reproduce these here; but they are on our site (see below). A number of comments concerned the difficulty of formulating good questions, much fewer the difficulty of predicting.

4. Evaluation

4.1. Selection bias in the respondents?

One concern with the selection of our respondents is that people who think HLMI is unlikely, or a confused idea, are less likely to respond (though we pleaded otherwise in the letter, see below). Here is a characteristic response from a keynote speaker at PT-AI 2011: "I wouldn't think of responding to such a biased questionnaire. ... I think any discussion of imminent superintelligence is misguided. It shows no understanding of the failure of all work in AI. Even just formulating such a questionnaire is biased and is a waste of time." (Hubert Dreyfus, quoted with permission). So, we tried to find out what the non-respondents think.
To this end, we made a random selection of non-respondents from two groups (11 from PT-AI and 17 from TOP100) and pressured them via personal email to respond, explaining that this would help us understand bias. These two groups were selected because AGI appears already biased in the opposite direction, and EETN appears very similar to TOP100, but for EETN we did not
have the data to show us who responded and who did not. We got one additional response from PT-AI and two from TOP100 in this way.

For question 2 ("... By what year would you see a (10% / 50% / 90%) probability for such HLMI to exist?") we compared the additional responses to the responses we already had from the same respective group (PT-AI and TOP100, respectively). We found the following differences:

               10%                50%                90%
          median    mean     median    mean     median    mean
PT-AI       +12      -9        +8      +55        -2      +169
TOP100      -19     -25       -47       -9      -138      -40

The one additional respondent from PT-AI expected HLMI earlier than the mean but later than the median, while the two respondents from TOP100 (last row) expected HLMI earlier than mean and median. The very small sample forbids confident judgment, but we found no support for the worry that the non-respondents would have been biased towards a later arrival of HLMI.

4.2. Lessons and outlook

We complement this paper with a small site on http://www.pt-ai.org/ai-polls/. On this site, we provide a) the raw data from our results [anonymous, unless the participants decided to put their name on their responses], b) the basic results of the questionnaire, c) the comments made, and d) the questionnaire in an online format where anyone can fill it in. We expect that that online questionnaire will give us an interesting view of the 'popular' view of these matters and of how this view changes over time. In the medium run, it would be interesting to do a longitudinal study that repeats this exact questionnaire.

We leave it to the reader to draw their own detailed conclusions from our results, perhaps after investigating the raw data. Let us stress, however, that the aim was to 'gauge the perception', not to get well-founded predictions.
T hese results should be taken with some grains of salt, but w e think it is fair to say that the results reveal a view among experts that AI systems will probably (over 50%) reach overall human ability by 2040 - 50 , and very likely (with 90% probability) by 2075. From reaching in 2 years (10%) to 30 years (75%) human ability, it will move on to superintelligence
15 Future / 19 Progress in Artificial Intelligence: A Poll Among Experts 15 . The experts say the probability is 31% that this development turns out to thereafter be ‘bad’ or ‘extremely bad’ for humanity. So, the experts think that superintelligence is likely t o come in a few decades and bad for humanity quite possibly this should be reason enough to do research into – the possible impact of superintelligence before it is too late. We could also put this and still come to an alarming conclusion : more modestly know of no compelling We reason to that progress in AI will grind to a halt ( say though deep new insights might be needed ) and we know of no compelling reason that superintelligent systems will be good for humanity . So, we should better investigate the future of superintelligence and the risk s it poses for humanity. 5. Acknowledgements Toby Ord and Anders Sandberg were helpful in the formulation of the questionnaire. he T work on the website form, sendin g mails and reminders, database and technical initial d ata analysis was done by Ilias Nitsos (under the guidance of VCM ). The o Ga n tinas provided the email s of the TOP100. Stuart Armstrong made most graphs AI 2013 conference in Oxford provided for presentation. The audience at the PT - helpful feedback. Mark Bish Miles Brundage and Daniel Dewey op , Carl Shulman, detailed comments on . s We are very grateful to all of them. draft made References 6. Adams, S., Arel, I., Bach, J., Coop, R., Furlan, R., Goertzel, B., . . . Sowa, J. F. andscape of human - level artificial general intelligence. (2012). Mapping the l (1), 25 AI Magazine, 33 - 42. Armstrong, S., Sotala, K., & O Heigeartaigh, S. (2014). The errors, insights and lessons of famous AI predictions and what they mean for the future. Journal – Experimental and Theoretical Artificial Intelligence, 26 (3 - Special issue ‘Risks of of - General Artificial Intelligence’, ed. V. Müller), 317 342. Baum, S. D., Goertzel, B., & Goertzel, T. G. (2011). 
How long until human-level AI? Results from an expert assessment. Technological Forecasting & Social Change, 78(1), 185-195.
Bostrom, N. (2006). How long before superintelligence? Linguistic and Philosophical Investigations, 5(1), 11-30.
Bostrom, N. (2013). Existential risk prevention as global priority. Global Policy, 4(1), 15-31.
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press.
Dreyfus, H. L. (1972). What computers still can't do: A critique of artificial reason (2 ed.). Cambridge, Mass.: MIT Press.
Dreyfus, H. L. (2012). A history of first step fallacies. Minds and Machines, 22(2 – special issue "Philosophy of AI", ed. Vincent C. Müller), 87-99.
Hawking, S., Russell, S., Tegmark, M., & Wilczek, F. (2014). Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough? The Independent, 01.05.2014.
Kurzweil, R. (2005). The singularity is near: When humans transcend biology. London: Viking.
Lighthill, J. (1973). Artificial intelligence: A general survey. In Artificial intelligence: A paper symposion. London: Science Research Council.
McCarthy, J., Minsky, M., Rochester, N., & Shannon, C. E. (1955). A proposal for the Dartmouth summer research project on artificial intelligence. Retrieved October 2006, from http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html
Michie, D. (1973). Machines and the theory of intelligence. Nature, 241(23.02.1973), 507-512.
Moor, J. H. (2006). The Dartmouth College artificial intelligence conference: The next fifty years. AI Magazine, 27(4), 87-91.
Müller, V. C. (2007). Is there a future for AI without representation? Minds and Machines, 17(1), 101-115.
Müller, V. C. (2014a). Editorial: Risks of general artificial intelligence. Journal of Experimental and Theoretical Artificial Intelligence, 26(3 – Special issue 'Risks of General Artificial Intelligence', ed. V. Müller), 1-5.
Müller, V. C. (Ed.). (2012). Theory and philosophy of AI (Minds and Machines, Vol. 22/2 – Special volume): Springer.
Müller, V. C. (Ed.). (2013). Theory and philosophy of artificial intelligence (SAPERE, Vol. 5). Berlin: Springer.
Müller, V. C. (Ed.). (2014b). Risks of artificial general intelligence (Journal of Experimental and Theoretical Artificial Intelligence, Vol. 26/3 – Special issue): Taylor & Francis.
Price, H. (2013). Cambridge, cabs and Copenhagen: My route to existential risk. The New York Times, 27.01.2013. http://opinionator.blogs.nytimes.com/2013/01/27/cambridge-cabs-and-copenhagen-my-route-to-existential-risk/?_php=true&_type=blogs&_r=0
Sandberg, A., & Bostrom, N. (2011). Machine intelligence survey. FHI Technical Report, 2011(1). http://www.fhi.ox.ac.uk/research/publications/

7. Appendices

1. Questionnaire
2. Letter sent to participants
Appendix 1: Online Questionnaire

Questionnaire: Future Progress in Artificial Intelligence
(http://www.futuretech.ox.ac.uk) (http://www.fhi.ox.ac.uk/)

This brief questionnaire is directed towards researchers in artificial intelligence or the theory of artificial intelligence. It aims to gauge how people working in the field view progress towards its original goals of intelligent machines, and what impacts they would associate with reaching these goals.

Contribution to this questionnaire is by invitation only. If the questionnaire is filled in without such an invitation, the data will be disregarded.

Answers will be anonymized. Results will be made publicly available on the site of the Programme on the Impacts of Future Technology: http://www.futuretech.ox.ac.uk.

Thank you for your time!
Vincent C. Müller (http://www.sophia.de) & Nick Bostrom (http://www.nickbostrom.com/)
University of Oxford, September 2012

A. The Future of AI

Define a "high-level machine intelligence" (HLMI) as one that can carry out most human professions at least as well as a typical human.

1. In your opinion, what are the research approaches that might contribute the most to the development of such HLMI?:

Algorithmic complexity theory
Algorithms revealed by computational neuroscience
Artificial neural networks
Bayesian nets
Cognitive science
Embodied systems
Evolutionary algorithms or systems
Faster computing hardware
Integrated cognitive architectures
Large-scale datasets
Logic-based systems
Robotics
Swarm intelligence
Whole brain emulation
Other method(s) currently known to at least one investigator
Other method(s) currently completely unknown
No method will ever contribute to this aim

2. Assume for the purpose of this question that human scientific activity continues without major negative disruption. By what year would you see a 10%/50%/90% probability for such HLMI to exist?

Year reached: 10% - / 50% - / 90% -
Never: -

3. Assume for the purpose of this question that such HLMI will at some point exist. How likely do you then think it is that within (2 years / 30 years) thereafter, there will be machine intelligence that greatly surpasses the performance of any human in most professions?

Probability: within 2 years - % / within 30 years - %

4. Assume for the purpose of this question that such HLMI will at some point exist. How positive or negative would be the overall impact on humanity, in the long run? Please indicate a probability for each option. (The sum should be equal to 100%.)

Extremely good: - %
On balance good: - %
More or less neutral: - %
On balance bad: - %
Extremely bad (existential catastrophe): - %
Total: 0%
B. About you

1. Concerning the above questions, how would you describe your own expertise?:
0 = none / 1 / 2 / 3 / 4 / 5 / 6 / 7 / 8 / 9 = expert

2. Concerning technical work in artificial intelligence, how would you describe your own expertise?:
0 = none / 1 / 2 / 3 / 4 / 5 / 6 / 7 / 8 / 9 = expert

3. What is your main home academic discipline?:

Biology/Physiology/Neurosciences
Computer Science
Engineering (non CS)
Mathematics/Physics
Philosophy
Psychology/Cognitive Science
Other academic discipline
None

4. Add a brief comment, if you like (<250 words). These comments may be published. Please indicate whether you would like your name to be included with the comment. (The answers above will remain anonymous in any case.):

Total word count: 0
Please include my name with the comment (leave this field empty if you wish to remain anonymous):

CAPTCHA: This question is for testing whether you are a human visitor and to prevent automated spam submissions.
What code is in the image?: Enter the characters shown in the image.

[Submit]
Appendix 2: Letter to participants (here TOP100)

Dear Professor [surname],

Given your prominence in the field of artificial intelligence, we invite you to express your views on the future of artificial intelligence in a brief questionnaire. The aim of this exercise is to gauge how the top 100 cited people working in the field view progress towards its original goals of intelligent machines, and what impacts they would associate with reaching these goals.

The questionnaire has 4 multiple-choice questions, plus 3 statistical data points on the respondent and an optional 'comments' field. It will only take a few minutes to fill in.

Of course, this questionnaire will only reflect the actual views of researchers if we get nearly everybody to express their opinion. So, please do take a moment to respond, even (or especially) if you think this exercise is futile or misguided.

Answers will be anonymous. Results will be used for Nick Bostrom's forthcoming book "Superintelligence: Paths, Dangers, Strategies" (Oxford University Press, 2014) and made publicly available on the site of the Programme on the Impacts of Future Technology: http://www.futuretech.ox.ac.uk.

Please click here now: [link]

Thank you for your time!
Nick Bostrom & Vincent C. Müller
University of Oxford