National Security Commission on Artificial Intelligence Conference - PART 3 CSPAN November 6, 2019 8:08am-8:58am EST
8:08 am
so eric has seen this proposal and has helped us modify it. and if you have thoughts or ideas, please send them through the commission to us. but it's one of my goals as minority leader -- might be one of my goals as majority leader one day. [laughing] to get this done, because i love america, and this is so vital for the future of america. thank you very much, everybody. [applause] >> now, more from the national security commission on artificial intelligence conference in washington, d.c. during this portion former secretary of state henry kissinger shares his thoughts on the emergence of artificial
8:09 am
intelligence and the role it plays in the world. [applause] >> can everyone hear me? first of all, you will have to use your imagination, because there is no fireplace, but we are thinking of it as a fireside chat. and i'm so grateful to have this opportunity today. dr. kissinger and i met several years ago now. he has been a key person i can go to for advice about both professional and career matters as well as geopolitical events. very few friends can do both. but dr. kissinger really needs no introduction.
8:10 am
as all of you know, he's one of the world's most renowned geopolitical practitioners as well as thinkers, and he did all of that well before i came into being. he has that rare combination of true intellect and practice, and i really admire him for taking on something relatively new like ai after the height of his career. ai is pretty daunting. i'm also relatively new to it. coming to it, dr. kissinger decided he wanted to do a deep dive into the technology and into the implications of artificial intelligence for our political systems and for geopolitics writ large. as many of you know, he has written two articles, both published in the atlantic, in 2018 and 2019. i would encourage you all to read both of them. preceding those, he also wrote a book in 2014 called world order. [inaudible] sorry, moving on -- one
8:11 am
of the last chapters of the book talks about the application of technology. it is a really interesting insight. he talks about the ordering concepts for the world. during the age of enlightenment, it was reason. in the medieval period, it was religion. in our era, it is technology and science that help us sort out events. i think that's a useful way to think about what we're going to talk about. he calls them the governing concepts -- [inaudible] -- there are themes in the articles that are relevant to the commission as well, and i will trot out some of these and use them as questions to start out with -- [inaudible] first, he described ai as inherently unstable. ai systems are constantly in flux as they acquire and analyze new data. for those of you in the audience who are national security professionals, stability is a key concept that we like to
8:12 am
have in the system. there's an inherent contradiction between the instability of ai and national security concepts. that's something i would like dr. kissinger to talk about a little bit. but even preceding that, we are here, as we talk about this competition and about the tension that the interim report also talks about, because ultimately this is a contest between two political systems, and we shouldn't forget that. it's fundamentally between two political systems and about the impact that artificial intelligence will have on those systems. it's about whether or not artificial intelligence will advantage open and democratic countries like ours, or authoritarian states. that's something i'd like to start off with: to ask dr. kissinger, who has conducted his own journey, to talk a little bit about his views on that, and then we will move on to a couple of other questions. thanks, dr. kissinger. >> thank you very much, nadia. i had the pleasure of working
8:13 am
with nadia on several projects, and i've seen her in the advisor to the president job and after it ended. we were on the defense advisory board together, so it's a great pleasure to be here. so you can calibrate what i'm saying, let me give you a few words about how i got into this field. i became a great friend of eric schmidt, who is today one of my best friends. he invited me to give a speech at google, and before that they
8:14 am
showed me some of their extraordinary achievements. i had barely met eric before then, and i began my speech by saying, i'm tremendously impressed by what i have seen, but i want you all to understand that i consider google a threat to civilization as i understand it. [laughing] this was the beginning of our friendship. [laughing] and the next step in my being here was, i was at a conference in europe, which on its schedule had a session on artificial
8:15 am
intelligence. and i thought this was a great opportunity for me to catch up on my jet lag, and i was heading out of the door when eric, who was standing there, said, this might interest you, you ought to hear it. except for that, you might have been spared. [laughing] so i went there, and somebody from deepmind was explaining that he was designing a computer that would be able to play the game of go, and he was confident that he could design it so that it would beat the champions of
8:16 am
china and south korea. and as you know, go has 180 pieces for each side, beginning on an open board -- a strategic game. the aim of the game is to constrict the ability of the opponent until they can't move at all. but it's not like chess, where you have everything lined up. when you put your first piece down, you don't know how the game is going to develop, and it takes a long time to develop. so the idea that you could design a computer that could match this as a creative game
8:17 am
seemed extraordinary to me. and i went up to the speaker afterwards and said, how long will it be before we become -- [inaudible] to these computers? before they achieve intellectual dominance? and he said he was working on that. [laughing] and he is. so over the years, eric was kind enough to introduce me to a lot of artificial intelligence researchers. and i look at it not as a technical person, and i don't challenge or debate the technical side of it. i am concerned with the
8:18 am
historical, philosophical, strategic aspects of it. and i have become convinced that artificial intelligence and the surrounding disciplines are going to bring a change in human consciousness exceeding that of the enlightenment, because of the inherent scope of the investigations it imposes. so that's why i'm here. and i gave a speech at stanford a few weeks ago, at the opening of the artificial intelligence center, and as i said there, it's a sort of absurd idea. you people sitting in the audience, i said to them, have
8:19 am
written thousands of articles. i have written two, and one was a joint authorship with eric and one other person. and i said, the only significance of my presence and of what i do is this: you people work on the applications; i work on the implications. and i don't challenge the applications. i think they are important, they are crucial, but frankly, i think you don't do enough. you don't go the next step, those of you who know something about the field, of asking what it means if mankind is surrounded
8:20 am
by automatic actions that it sometimes cannot explain. ai explains what happens but, as i understand it, not always why it happens. so this is why i am here, and it's in that context that you ought to assess what i'm saying. but i have put aside some other work for the last three years to work on this and to educate myself, because i think, in the conceptual field, it's the next big step for mankind.
8:21 am
>> hopefully they'll listen to you, dr. kissinger. did the stanford audience listen to you? >> i think the technicians are too modest, in the sense that they are doing spectacular things, but they don't ask enough what it means. i would say the same of strategists. this is bound to change the nature of strategy, because -- some of you can say how much further it has been taken -- [inaudible] i don't think on the global
8:22 am
field it is yet understood what this will do. it's still handled as a new technical development. it's not yet understood that it must bring a change in the philosophical perception of the world. much of human effort has been to explain the reality around it. the enlightenment brought a way of looking at it on a mathematical basis and on a rational basis. that was a huge departure already that changed history
8:23 am
fundamentally. but the idea that you can explore reality in partnership with what is out there, and that you explore it by means of algorithms -- where you know what they will produce, but you do not yet know why -- that, when people start thinking about it, will fundamentally affect human perceptions. and this way of thinking, up to now, historically, has been largely western thinking. other regions
8:24 am
have adopted it from the west. as it spreads around the world, now unpredicted consequences are going to follow. >> in the end, are you optimistic in terms of ai and its direction with democracy, in ai changing human cognition, as you pointed out -- humans having explanatory powers, ai not necessarily? there's an interesting point you make in some of your articles about how ai, by its very nature, is going to change human cognition and reasoning, because we will not have the explanations -- but i will get to that. let me ask the first question first. >> the point i make is that ai has consequences that we elicit, but
8:25 am
we don't always know why. now, am i optimistic? first, i would have to say the future of democracy itself, putting ai aside, is something that should concern us, because for a society to be great, it has to have a vision of the future. that is to say, it has to go from where it is to where it has never been, and have enough confidence in itself to do it. when you look at too many
8:26 am
democracies, the political contest is so bitter, and the rivalries are so great, that to get an objective view of their future is getting more and more difficult. who would have thought that the house of commons could break down into a collection of pressure groups operating like the house of representatives? but the house of representatives is part of a system of checks and balances, while britain is based on a unitary system that requires consensus for its operation.
8:27 am
what ai does is to inject a new level of reality -- a new way of conceiving reality. most people don't understand that yet. most people don't know what it is, but i think that those of you who work on it are pioneers of an inevitable future. and when we think, in the defense department, about the future, this is a huge problem, because increasingly ai will help shape the approach to problems. for example, i was in office in
8:28 am
the period of nuclear strategy that started with massive retaliation and then developed into various applications, but the key problem we faced in actual -- [inaudible] -- as security advisor was, how do you threaten with nuclear weapons without triggering a preemptive strike on the other side? and actually the weapons themselves became more esoteric. even in terms of the '70s, when we moved to fixed land-based missiles, they had a high
8:29 am
potential for retaliation, but next to no potential for being used diplomatically. often, when the history of that period is written, there are debates about the trigger-happiness of an administration -- [inaudible] from level four to level three, which isn't a high level of alert, but no newspaper reader knows that. one reason we went on alert was because we could generate a lot of traffic, and you could see things that were being done --
8:30 am
planes were being put in the air and troops were called up, but not yet threateningly. >> with ai you can't -- >> well, even with mobile missiles that visibility is lost, and much more so with what goes on in ai. we believed that arms control was an important aspect, and with what you know of ai, it becomes infinitely more important. but much of what you can do in ai, you don't want to put on the table as a capability, because in stating it you could give away part of
8:31 am
its strength. in the field of strategy, we are moving into an area where you can imagine ai capability -- extraordinary capability, even permitting tremendous discrimination. and one of your problems is that the enemy, if you so choose, may not know where the threat came from -- >> so what elements of arms control you have to rethink -- even how the concept of arms control, if at all, applies to that world. >> you have a nice line in one of the articles about how ai essentially upends all of the strategic verities that we have had as part of our way of thinking over the past 30
8:32 am
years, including arms control, including deterrence, including, as i talked about at the beginning, stability. but i wanted to ask you one more specific question and then open it up. imagine, going backwards, that you are at the white house again, taking decisions. are there situations in which, today, you would come to trust an ai algorithm to make a decision at that level, in the national security space, if you were faced with a tough decision? are there other areas where you could see ai algorithms helping national security decision-makers? >> i think it will become standard that ai algorithms will be part of the decision-making process. but before that happens, or as that happens, the
8:33 am
decision-makers have to think through the limits of it, and what might be wrong with it, and they have to test themselves in wargames and even in some actual situations to be sure what degree of reliability they can attach to the algorithms. and they also have to think through the consequences. when i talk about these things, i think -- i have studied a lot about the outbreak of world war i, because the disparity between
8:34 am
the intentions of the leaders and what they produced is so shocking. not one of the leaders who started the war in 1914 would have undertaken it if they had had any conception of what the world would look like in 1918 or even 1917. none wanted an act of such scope. they thought they were dealing with a local problem and they were facing each other down, but they did not know how to turn it off. once the mobilization process started, it had to go to its end, in which a crisis over serbia ended with a german attack on belgium -- neither of which had anything to do with the original crisis.
8:35 am
but the attack on belgium was an absolutely logical consequence of a system that had been set up and that required a quick victory, and a quick victory could only be achieved in northern france. so never mind that the crisis was in the balkans, and that germany and france were not directly involved in its outcome. the only way to get an advantage in time over the possible mobilization of russia was to defeat france, no matter how the war started. and it was a masterpiece of planning. but what is really interesting is that they
8:36 am
had to knock out -- the germans had to knock out france within six to eight weeks. and the man who designed this plan allegedly said on his deathbed, make sure my right flank is strong. so when the attack developed and russia began to move in the east, the germans lost their nerve and pulled two army corps out of the right flank, which is exactly where they were stopped. these two army corps were in transit while the important -- [inaudible] on both sides were taking place. but i mention that only to say that if you don't think through the implications of the technology
8:37 am
to which -- [inaudible] -- including your emotional capacity to handle the predictable consequences, then you're going to fail. that's on the strategic side. and how do you conduct diplomacy when even the testing of new weapons can be shielded, so you really don't know what the other side is thinking? it's not even clear how you could reassure somebody if you wanted to. that's a topic very important to think about.
8:38 am
and so, as you develop weapons of great capacity, and even great discrimination, how do you talk about them, and how do you build a -- [inaudible] -- and how do you convince them? i mean, the weapons in a way become your partner, and if they are assigned certain tasks, how can you modify that under certain conditions? these are all key questions that
8:39 am
have to be answered, and will be, i'm sure, answered in some way. and so that's why i think you are only in the foothills of the real issues that you will be facing as you go down that road -- as you must. i'm not arguing against ai. ai will exist and will stay with us. >> before we open it up to the audience, just a quick question, because you are a geopolitical thinker and you've talked about diplomacy and restraint. could you comment on how you see the evolution of the u.s. relationship with china and russia? just in brief, and then i will open it up to the audience, but i
8:40 am
think it would be a missed opportunity to have dr. kissinger here and not ask a question that is a little bit broader. >> asking me for brief answers is a sign of great faith. [laughing] >> you are getting set to go to china, so perhaps you could talk a little bit about your goals for that trip. >> i look at this primarily as a strategic issue involving the impact of the societies on each other over an extended period of time, when they have such huge capabilities. now, the conventional way, the historic way it has been handled,
8:41 am
is that military conflict settled it between the sides, sometimes at huge cost, but historically at survivable costs. the key question is, do we define our enemy and then conduct our policy from a confrontational point of view and with confrontational language at every stage? in the end, my preference is
8:42 am
looking at it as a strategic issue in which, at every moment, you try to shape the environment to get, on the one hand, a relative advantage, but on the other hand, give your opponent the opportunity to move towards a less threatening position. and so if your basic strategy is confrontation, then the other side loses nothing by being confrontational, because it's there anyway. and, therefore, i believe one should put an element of potential cooperation into the
8:43 am
strategic relationship. i studied this at one point. i was in office during the '73 war, and there was a little booklet by somebody who served on the politburo as a notetaker. and if you go through that book, you see that, on the one hand, they had arguments leading towards involvement and arms supply, but on the other, there was always somebody arguing about what we would call -- so they never went all out, and we could outmatch them when we went in there. i favor a strategy of that
8:44 am
complexity, and so i would like containment to evolve out of a diplomacy that doesn't put it into a confrontational style. what that means is that we on our side have to know what our limits are, and we have to understand what we are trying to avoid in addition to what we want to achieve. so we have to have strategists in high office, which is not the basis on which we elect people. but we have to come to it -- i'm talking about what we have to come to. when you look at the strategic
8:45 am
designs of the 19th century, the continental europeans had direct alliances on both sides. the british, on the road to india, had a lot of alliances and friendships, but not such a precise system. yet when you got on the road to india, before you got very far, you would meet a lot of resistance organized by the british, even though it was not proclaimed, and nobody ever quite made it. i'm talking about the 19th century. so that's what we have to develop in some parts of the world. now, i don't put russia in
8:46 am
quite the same category, because russia is a weak country. it's a weak country with nuclear weapons, and one of its utilities is its existence, because by sitting there in the middle of eurasia it guarantees, by its existence, the absence of yugoslavian-type conflicts in the middle of central asia, which would draw in the greek, the turkish, the persian, and all the other former empires. so what i think we need is a way of thinking about the world in those categories. the basic principle has to be that we
8:47 am
cannot tolerate the hegemony of anybody over parts of the world we consider essential for our survival. so we cannot tolerate the hegemony of any country over eurasia. but how to get there? it would require flexible thinking and flexible technology, and we have never been faced with such a situation. and also, if you go to most universities, you will find many -- a huge majority -- that will contradict this approach. so maybe i'm wrong.
8:48 am
[laughing] >> i'll open it up now. >> almost unthinkable. [laughing] >> some of your ideas about that -- perhaps we can find a way to work them into the ai commission report, so i'll talk to others about that. i'll open it up now to questions from the audience. there is a glare, so it's hard to see. i do see someone in the back there. >> thank you, dr. kissinger. thank you so much for talking to us today. i'm a practitioner in residence at the georgetown university school of foreign service. i was wondering if you could expand on your thoughts about the emotional intelligence quotient, and how you take into account relying on ai for issues of emotional intelligence like empathy. when the internet was expanding, a lot of critics of the new technologies said it would make humans less personal and massively lazy, while the
8:49 am
champions and the postmodernists said it would free up the mind for bigger thoughts and more profound thinking. that's true in some sense, but it's also being used by small-minded people to spread their own negativity and thinking. so i'm wondering how you square intentions with the new avenues of ai? thank you so much. >> i don't know. [laughing] i don't know the answer to this question, because you have defined what the problem is that we must deal with. when the enlightenment came along, there were a lot of philosophers, because, coming out of a religious period, there was
8:50 am
a lot of reflection about the nature of the universe. and if you study the 16th or 17th centuries, you find a lot of philosophers with very profound insights into the nature of the universe: whether the universe was an objective reality, or whether it reflected the structure of your own mind, or whether you could express it in mathematical equations. but in our present period, philosophy and reflection are not as major an occupation. we put our talents into the technological field, and this is
8:51 am
why it has happened that, for the first time, world-changing events are occurring which have no philosophic explanation, or attempted explanation. but sooner or later it will come. i'm sort of obsessed with the alphazero phenomenon of teaching chess to a computer, which then learns a form of chess that no human being in all of history has ever developed or has ever worked out, and against which we, with our traditional methods -- even the
8:52 am
most advanced computers based on previous intelligence -- are in a way defenseless. so what does that mean, that you would teach something to somebody who did not learn what you set out to teach, but learned something entirely different? and within that world -- i don't know the answer to this, but it sort of obsesses me. >> does anyone else know the answer? >> there are two levels of this. one, if i knew the answer, that would be terrific -- i would become very rich.
8:53 am
but the other answer, the other concern, is that we have to get our minds open to studying this problem. and we have to find people in key positions who are capable of strategy in relation to an ever-changing world which is being changed by our own efforts. that has never happened before in that way, and we are not conscious of it yet as a society. >> we have time for one final question before we wrap. yes, sir.
8:54 am
>> so there's this story about the moon coming up over the horizon and this country going on alert, strategic alert, against russia, but there were cooler heads who decided that it wasn't an attack; it was something else. so is what you were trying to say that we need very elegant ai before we put it in control of the button? >> what -- >> do you want me to repeat the question? essentially, do we need more elegant ai before we put it in control of the button? that was the question. >> in one way or another, ai
8:55 am
will be the philosophical challenge of the future, because, on the one hand, you are in partnership with objects that have their own kind of intelligence. that has never even been conceived before. and in a deeper way, the implications of some of these things are so vast that one must reflect about them beforehand. for example -- [inaudible] -- self-driving cars, when they come to a stoplight, stop because they are engineered that way.
8:56 am
but when the cars next to them start inching forward to get a jump on the others, they do it also. why? where did they learn it? and what else have they learned that they are not telling us? [laughing] >> on that note, i think time is up now. >> how do they talk to each other? [laughing] >> well, thank you -- >> next time i come here, i will give you an answer to that. [applause] >> thank you, dr. kissinger. we are going to take a ten-minute break now, and then we will be meeting back here with commissioner mignon clyburn for a look at ai and the workforce. thanks very much.
8:57 am
[inaudible conversations] >> and now, more from the national security commission on artificial intelligence conference in washington, d.c. during this portion, a look at how the world is approaching the growth of artificial intelligence and the role of the united states.
