National Security Commission on Artificial Intelligence Conference - PART 3 CSPAN November 25, 2019 1:47pm-2:40pm EST
government agencies so that we become or maintain our cutting edge lead in ai and in these other fields it's going to really hurt us dramatically within several years so eric has seen this proposal and helped us modify it, and if you have thoughts or ideas send them through the commission to us and it's one of my goals as minority leader might be one of my goals as majority leader one day to get this done because i love america and this is so vital for the future of america. thank you very much, everybody.
>> can everyone hear me? first we have to use our imagination a little bit because there is no fireplace here, but we are thinking of it as a fireside chat. i'm grateful to have this opportunity here. dr. kissinger and i met several years ago now. dr. kissinger has been a key person you can go to for advice about both professional and career things as well as geopolitical events, right, very few friends can do both, but dr. kissinger really needs no introduction. as all of you know he's one of
the world's most renowned geopolitical practitioners as well as thinkers, and he did all of that well before ai came into being. he has that rare combination of a true intellect, and i admire him for taking on something relatively new like ai after the height of his career. ai is pretty daunting, and as someone also relatively new to it, dr. kissinger decided he wanted to do a deep dive and learn more about the technology and the implications of artificial intelligence for our political systems and for geopolitics writ large. as many of you know, he's written two articles, both published in "the atlantic," in 2018 and 2019. i'd encourage you all to read both of them. he also wrote a book in 2014 preceding that called world order and, sorry, it's sort of going in and out, one of the last chapters of that book talks about the
implications of technology. it has a really interesting insight. he talks about the ordering system for the world: during the age of enlightenment it was reason, before that it was religion, and in our era it's technology and science that help us sort events. that is a useful way to think about what we're going to talk about. he talked about it as the governing concept of the age. he made several points in the articles relevant to the commission as well. i'll draw out some of these and use them as questions to start out with. first, he describes a.i. as inherently unstable. a.i. systems are constantly in flux as they acquire and analyze new data. to those in the audience who are national security professionals, stability is a key concept that we like to have in the system.
there's an inherent contradiction between the instability of a.i. and national security concepts, and that's something i'd like dr. kissinger to talk about a little bit. even preceding that, we're here, really, ultimately, as we talk about this competition and about the tension that the interim report also talks about, because ultimately this is a contest between two political systems. we shouldn't forget that. it's between two political systems and the impact artificial intelligence will have on those systems. dr. kissinger, talk a little bit about that and we'll move on to a couple other questions. thanks, dr. kissinger. >> thank you very much. i had the pleasure of working
with nadia on several projects. i've seen her in the advisor to the president's job before it ended, and we served on the defense advisory board together. so it's a great pleasure to be here. so that you can calibrate what i'm saying, let me give you a few words about how i got into this field. i became a great friend of eric schmidt, who is today one of my best friends. he invited me to give a speech at google and before that they showed me some of their
extraordinary achievements. and i had barely met eric before then. and i began my speech by saying, i'm tremendously impressed by what i've seen, but i want you all to understand that i consider google a threat to civilization as i understand it. [ laughter ] this was the beginning of our friendship. [ laughter ] and the next step in my being here was that i was at a conference in europe which had on its schedule a session on artificial
intelligence, and i thought it was a great opportunity for me to catch up on my jet lag. and i was headed out of the door when eric, who was standing there, said, this might interest you and you really ought to hear it. except for that, you might have been spared. [ laughter ] it's okay. so i went there, and somebody from deepmind was explaining that he was designing a computer that would be able to play the game of go, and he was confident that he could design it so that it would beat the champions of china and of
korea. as you know, go has about 180 pieces for each side, beginning on an open board, and the strategic goal of the game is to constrict the ability of the opponent until they can't move at all. but when you put your first piece down, it's not like you have it all lined up; you put your first piece down and you don't know how this is going to develop. and it takes a long time to develop. so the idea that you could design a computer that could match this, a creative game, seemed extraordinary to me.
and i went up to the speaker afterwards and said, how long will it be before we become ancillary to these computers, before they achieve intellectual dominance? and he said he was working on that. and he is. so, over the last three years, eric was kind enough to introduce me to a lot of artificial intelligence researchers. and i look at it not as a technical person, and i don't challenge or debate the technical side of it.
i am concerned with the historical, philosophical, strategic aspect of it, and i've become convinced that artificial intelligence and the surrounding disciplines are going to bring a change in human consciousness exceeding that of the enlightenment, because of the inherent scope of the investigations it imposes. so that's why i'm here. and i gave a speech at stanford a few weeks ago at the opening of the artificial intelligence center, and i said it's sort of absurd that i'm here. you people who sit in the audience, i said to them, have written thousands of articles.
i've written two, and one more in joint authorship with eric and one other person. and i said the only significance of my presence, i said, is that you people work on the applications; i work on the implications. and i don't challenge the applications. i think they're important, they're crucial, but frankly, i think you don't do enough. you don't go the next step, those of you who know something about the field, of asking what that step means if mankind is surrounded by automatic actions that it sometimes cannot explain. it can explain what happens, but as i understand it, not always why it happens. so this is why i'm here, and it's in that context that you ought to assess what i'm saying. but i have put aside some other work for the last three years to work on this and to educate myself, because i think in the conceptual field that is the next big step for mankind. >> hopefully they listen to you,
dr. kissinger. did the stanford audience listen to you? >> i think the technicians are too modest, in the sense that they're doing spectacular things, but they don't ask enough about what it means. i would say the same for strategists. this is bound to change the nature of strategy and of warfare. because, and some of you can judge better than i how much it's been taken aboard yet, i don't think on the global field it is yet
understood what this will do. it's still handled as a new technical departure. it's not yet understood that it must bring a change in the philosophical perception of the world. much of human effort has been to explain the reality around it. the enlightenment brought a way of looking at it on a mathematical and rational basis, and that was a huge departure that really changed history fundamentally. but the idea that you can
explore reality in partnership with what is out there, and that you explore it by means of algorithms where you know what they will produce but you do not know why, that, when people start thinking about it, as they will, will fundamentally affect human perceptions. and this way of thinking up until now, historically, has been largely western thinking. other regions have adapted it from the west, i mean, the
rationalistic thinking. as it spreads across the world, unpredicted consequences are going to follow. >> in the end, in terms of a.i. and democracy and cognition: humans have explanatory powers, and a.i. not necessarily. there's an interesting point you make in some of your articles about how a.i. by its very nature is going to change human cognition and reasoning, because we will not have the experiences that a.i. will get; a.i. will get there first, before us. >> the point i made is that a.i. produces consequences that we elicit but
we don't always know why. and so, now am i optimistic? first i would have to say, honestly, the future of democracy itself putting a.i. aside it's something that should concern us. because for a society to be great it has to have a vision of the future. that is to say, it has to go from where it is to where it has never been and have enough confidence in itself to do it.
when you look at political democracies, the political contest is so bitter and the rivalries are so great that to get an objective view of their future is getting more and more difficult. who would have thought the house of commons could break down into a collection of pressure groups operating like the house of representatives? but the house of representatives is part of a system of checks and balances, while britain is based on a unitary system that requires consensus for its operation. so what a.i. does is to inject a
new level of reality, a new level of perceiving reality. most people don't understand that yet. most people don't know what it is. but i think those of you who work on it are pioneers in an inevitable future. and when we think in the defense department about the future, there's a huge problem, because increasingly a.i. will help shape the approaches to problems. for example, in
the nuclear period, it started with massive retaliation and then developed into various applications, but the key problem we faced in actual crises, as security advisor, was how do you threaten with nuclear weapons without triggering a preemptive strike on the other side. and as the weapons themselves became more esoteric, even in terms of the '70s, when we moved to fixed, land-based missiles, they had a high potential for retaliation
but next to no potential for being used diplomatically. later in history, when that period is written, there will be debates about the trigger happiness of an administration that went on alert. we went on alert from level four to level three, which isn't a high level of alert, but no newspaper reader knows that. and one reason we went on alert was because we could generate a lot of traffic, and we could see things that were being done
plainly through the air but that were themselves not yet threatening. >> a.i. can't see a lot of the activity. >> even with mobile missiles, you had trouble, and much of what goes on in a.i. -- we believed that arms control was an important aspect, and with what you know of a.i., it becomes infinitely more important. but much of what you can do in a.i. you don't want to put on the table as a capability to be restricted, because its secrecy is in itself part of its strength. but in the field of strategy we
are moving into an area where you can imagine extraordinary capabilities, even permitting tremendous discrimination. one of your problems is that the enemy may not, if you so choose, know where the threat came from for a while. so you have to rethink the elements of arms control; you have to rethink even how the concept of arms control, if at all, applies to that world. >> you have a nice line in one of the articles about how a.i. essentially upends all of the strategic verities of our way of thinking over the past 30 years, including arms
control, deterrence, stability, but i wanted to ask you one specific question and then i'll open it up. so are there situations in which, you know, going backwards, you're at the white house again, taking decisions, are there situations in which today, a.i., you would trust an a.i. algorithm to make a decision at that level in the national security space, if you're faced with a tough decision. are there areas where you can see a.i. algorithms helping national security decision makers? >> i think it will be -- become standard that a.i. algorithms will be part of the decision-making process. but before that happens, or as that happens, the decision makers have to think through the
limits of it, and what might be wrong with it, and they have to test themselves in war games and even in actual situations to make sure to what degree of reliability they can trust the algorithms. also they have to think through the consequences. when i talk about these things i think -- i studied a lot about the outbreak of world war i, because the disparity between
the intention of the leaders and what they produced is so shocking. not one of the leaders who started the war in 1914 would have undertaken it if they had had any conception of what the world would look like in 1918 or even in 1917. none wanted an act of such scope. they thought they were dealing with a local problem, and they were facing each other down, but they didn't know how to turn it off. once the mobilization process started, it had to go to an end in which a crisis over serbia ended with a german attack on belgium, neither of which had anything to do with the original crisis. but the
attack on belgium was a consequence of a system that had been set up and required a quick victory that could only be achieved in northern france. so, never mind that the crisis was in the balkans and that germany and france were not directly involved in its outcome; the only way to get an advantage in time over the possible mobilization of russia was to defeat france, no matter how the war started. it was a masterpiece of planning. then one of the really interesting things is that the germans had to knock out france within six to eight weeks. and the man who designed this plan allegedly said on his deathbed, make sure my right flank is strong. so when the attack developed and russia began to move in the east, the germans lost their nerve and pulled two army corps out of the right flank, which is exactly where they needed them. those two army corps were in transit while the decisive developments on both fronts were taking place. i mention that only to say that if you don't see through the implications of the technology to which you've
wedded yourself, including your emotional capacity to handle the predictable consequences, then you're going to fail. that's on the strategic side. how do you conduct diplomacy when even the testing of new weapons can be shielded, so that you really don't know what the other side is thinking, and it's not even clear how you could reassure somebody if you wanted to? that's a topic that's very important to think about.
and so, as you develop weapons of great capacity and even great discrimination, how do you talk about them? and how do you build restraint into their use? and how do you convince them? i mean, the weapons in a way become your partner, and if they're assigned certain tasks, how can you modify that under combat conditions? so these questions have to be answered
and will be, i'm sure, answered in some way. and so, that's why i think you're only in the foothills of the real issues that you will be facing as you go down that road, as you must. i'm not arguing against a.i. a.i. will exist and will save us. >> before i open it up to the audience, just a quick comment. because you are a geopolitical thinker and talked about diplomacy and restraint, can you talk about how you see the evolution of the u.s., russia and china relationship, just in brief, and then i'll open it to the audience. i think it's a missed
opportunity to have dr. kissinger here and not ask a question that's a little bit broader. >> you're asking me for prophecies. [ laughter ] a sign of great faith. >> or, since you're getting set to go to china, talk a little bit about some of your goals for that trip. >> i look at this pragmatically. it's a strategic issue, that is, the impact of the societies on each other over an extended period of time, when they have such huge capabilities. now, the conventional way, the historic way it's been handled, is that some military conflict
settles the relative position of the sides, sometimes at huge cost, but historically at survivable cost. so the key question is, do we define our enemy and then conduct our policy from a confrontational point of view and with confrontational language at every stage, as against my preference of looking at a strategic issue in
which at every moment you try to shape the environment to get on the one hand a relative advantage, but on the other hand give your opponent an opportunity to move towards a less threatening position. and so, if your basic strategy is confrontation, then the other side loses nothing by being confrontational, because it is there anyway. and therefore i believe one should put an element of potential cooperation into these strategic relationships. i've
studied, at one point -- i was in office in the '73 war, and there's a little booklet by somebody who served on the politburo as a note taker, and if you go through that book, you'll see on the one hand they have arguments leaning towards involvement and arms supply, but on the other, there's always somebody arguing about what we called detente, so they didn't ever go all out. and so we could outmatch them when we went in there. so i favor a strategy of
complexity. and so, i would like containment to evolve out of a diplomacy that doesn't put it into a confrontational style. what that means is that we, on our side, have to know what our limits are. and we have to understand what we're trying to avoid and what we want to achieve. so we have to have strategists in high office, which is not the way we select people. but we've got to come to -- i'll tell you what we have to come to. when you look at strategic designs of the 19th century, the
europeans had a system of direct alliances on both sides. the british, on the road to india, had a lot of alliances and friendships but not such a precise system. but when you got on the road to india, before you got very far you would meet a lot of resistance organized by the british, even though it was not proclaimed, and nobody ever quite made it. i'm talking about the 19th century. so that's what we have to develop, at least in some parts of the world. now, i don't put russia into quite the same category because
russia is a weak country. it's a weak country with nuclear weapons. and one of its utilities is its existence, because by sitting there in the middle of eurasia it guarantees, by its existence, the absence of yugoslavia-type conflicts in central asia, which would draw in the greek, the turkish, the persian and all of the other empires. so what i think we need is a way of thinking about the world in that category. the basic principle has to be,
we cannot tolerate a hegemony of anybody over parts of the world we consider central for our survival. so we cannot tolerate the hegemony of any country over eurasia. but how to get there will require flexible thinking and flexible technology, and we've never been faced with such a situation. and also, if you go to most universities you will find many, the huge majority, that will contradict this approach. so maybe i'm wrong. [ laughter ] >> so i'll open it up now.
>> almost unthinkable. >> clearly some of your ideas about that strategic design can find a way to work their way into the a.i. commission report. we'll talk about that. i'll open it up now to questions in the audience. there's a glare so it's hard to see. i do see someone in the back there. >> thank you. dr. kissinger, thank you so much for talking to us today. my name is elise, a practitioner in residence at georgetown university's school of foreign service. i wonder if you can expand on the emotional intelligence quotient: how do you account in a.i. for issues like empathy? when the internet was expanding, a lot of critics of the new technology said it would make humans less personal and mentally lazy, and the champions and post-modernists said it
would free up the mind for bigger thoughts and more profound thinking. that's true in some sense. but it's also being used by smaller-minded people to kind of spread their original negativity and thinking. i'm wondering how you square intentions with the new avenues of a.i. thank you so much. >> i don't know. [ laughter ] i don't know the answer to this question, because you have defined what the problem is. that is what we must deal with. when the enlightenment came along there were a lot of philosophers, because growing out of a religious period there was a lot of reflection about the nature
of the universe, and if you study the 16th and 17th centuries, you find a lot of philosophers with very profound insights on the nature of the universe, and whether the universe was an objective reality or whether it reflected the structure of your own mind or whether you could express it in mathematical equations. but in our present period, philosophy and reflection are not a major pursuit; we put our talents into the
technological field, and this is why this happened. now for the first time world-changing events are happening which have no philosophical explanation or intended explanation, but you know sooner or later it will come. i'm sort of obsessed with the alphazero phenomenon of teaching chess to a computer, which then learns a form of chess that no human being in all of history has ever developed or ever played, and against which even the most advanced computers
based on previous intelligence are in a way defenseless. so, what does that mean? that you were teaching something to somebody who did not learn what you set out to teach but learned something entirely different, and within that world, decisive. i don't know the answer to this, but it sort of obsesses me. >> does anyone else know the answer? >> what else are we going to learn? no, i don't know. there's two levels of this. if i knew a better answer, that would be terrific. i'd become very rich. but i'm nearly 97,
so it doesn't do me much good. but the other answer -- the other concern -- is that we have to get our minds open to studying this problem, and we have to find people for the key jobs who are capable of strategy in relation to an ever-changing world, which is being changed by our own efforts, and it never happened before in that way. and we are not conscious of that yet in our society. >> we have time for one final question before we wrap. someone who would like to ask? yes, sir. >> so there's a story about
the moon coming up over the horizon and this country going on strategic alert against russia, but there were cooler heads that decided that wasn't an attack, it was something else. so, is what you're trying to say that we need very elegant a.i. before we put it in control of the button? >> [ inaudible ] >> do you want to repeat the question? essentially, do we need more elegant a.i. before we put it in control of the button. that was his question. >> in one way or another a.i. will be the philosophical
challenge of the future, because on the one hand you're in partnership with objects when it comes to intelligence. that's never been conceived before. and in a deeper way, the implications of some of the things i've sketched are so vast that unless one reflects about it before -- [ inaudible ] i'm told that self-driving cars, when they come to a stoplight, stop because they're engineered that way, but when the
cars next to them start inching forward to get a jump on the others, they do it also. why? where did they learn it? and what else have they learned that they're not telling us? [ laughter ] >> we'll end on that note. i think time is up now. >> and how do they talk to each other? >> exactly. well, thank you so much. >> next time i come i'll give you answers to that. [ applause ] >> thank you. >> thank you. [ applause ] >> thank you, dr. kissinger. so we're going to take a ten-minute break now and then we'll be meeting back here with commissioner clyburn, who will look at a.i. in the workforce. thanks very much.
>> more now from the national security commission on artificial intelligence conference in washington, d.c. during this portion, experts discuss how best to create an artificial intelligence workforce to address national security and defense needs for the united states.