National Security Commission on Artificial Intelligence Conference - PART 1 CSPAN November 6, 2019 11:02pm-11:42pm EST
google senior vice president kent walker and joint artificial intelligence center director general john shanahan. >> an excellent lunch, which i enjoyed with two close friends of mine, and i'm probably the only person in the entire world who can say this: i work with and for both of them, so i want to disclose my conflict of interest at the start. the general entered the service of the country in 1984, was promoted through a whole range of intelligence activities and operational activities, and eventually, when we needed somebody to operationally implement ai across the entirety of the department, he was the perfect choice. kent walker is a former federal prosecutor, a law-and-order federal prosecutor, who chose to come to silicon valley and worked at ebay for a while; maybe 15 years ago he came to google, where he not only set up the legal function but is now in charge of global policy and all those sorts of things. so i thought we should simply start by having each of you make some comments about the world as you see it today.
>> sure. thank you very much. it is a pleasure to be with you today. the topic of today's panel is public-private partnerships, and we have a commitment to getting this right and making sure the private sector, the defense sector and universities can work together in the best possible way. it has been frustrating to hear concerns around our commitment to national security, so i want to set the record straight on two issues. first, on china: in 2010, you may remember, google was the target of a sophisticated cybersecurity attack on our infrastructure. we learned a lot from that experience, and while a number of other companies have significant commercial operations there, we've chosen to scope our operations in china carefully; our focus is on advertising and supporting the platform. second, with regard to the general question of national security and our engagement on project maven: that is an area where, it is true, we decided to press the reset button until we had an opportunity to develop our own set of ai principles, along with internal standards and review processes. that was a decision focused on one contract, not a broad statement about our willingness to work with the department of defense or the national security community. we continue to do that work, and we are committed to doing it, building on a tradition of work throughout the valley.
it's important to remember that silicon valley was in large measure built upon government technology, from radar to the internet to gps to some of the work on autonomous vehicles and personal assistants. just in the last couple of weeks we had an extraordinary accomplishment that moved forward the frontier of science and technology, but that was not an achievement by google alone. it's built on research that had been done at universities. it benefited from extensive consultations with research scientists, and it was carried out in many ways with the department of energy. those kinds of exchanges and collaborations are the key to what has made american technological innovation as successful as it has been, and just as we feel we are contributing to the defense and national security community, a lot of that work is a part of ours.
we have a lot of veterans at google, and we go above and beyond to make sure they can complete their military service while having thriving careers, and there are steps we've taken to help those transitioning to civilian life make the best use of their military skills in the private sector. as we do that, we are fully engaged in a wide variety of work with different agencies. we are working on a number of national initiatives, from cybersecurity to health care to business automation. we are working on a number of fundamental projects, making sure we can identify deepfakes and progressing work on that, and advancing how software and hardware interfaces can be used in better
ways. as we take on those kinds of things, we are eager to do more, and we are pursuing additional certifications that will allow us to engage across a range of different topic areas, including some that are extremely important. there are our ai principles, which were, i thought, a lengthy piece of work that continues to evolve. the groundwork was laid by the department of defense in 2012 with directive 3000.09, which talked about human judgment in the application of these technologies, and by the work that the dod has done since. in the private sector we have been trying to drive forward on this, and we have not only put out our own principles; there is a lot in common in these areas: safety, human judgment, accountability,
explainability, fairness. in all of these critical areas, different actors in the state have different things to contribute, and that is important. this is a shared responsibility to get this right. as the report notes, we need a global framework and approach to these issues; endorsing the oecd framework is something that we want to support, and we are working together to figure this out. because at the end of the day we are a proud american company, committed to the defense of the united states, our allies, and the safety and security of the world, and we are here to continue the work and think about places we can work together to build on each other's strengths. >> thank you. >> thank you. general, take us through what you are up to. >> first of all, let me say thanks. it is great to be here, and i thank you for the opportunity to do this. i will say i am a poor
substitute for the chairman of the joint chiefs of staff, with a lower probability of any headline-grabbing sound bites. i will also confess this is the first and last time i will serve as a warm-up act for doctor henry kissinger, so hang on for the main event. i relish the opportunity to have a broader conversation about public-private partnerships. when i was asked to reflect back on my two years as the director of project maven and just about a year in the seat as the director of the jaic, there is one overarching theme that continues to resonate strongly with me: the importance and necessity of strengthening bonds between government, industry and academia. this was said this morning, you brought it up and others have mentioned it, this idea that the relationship should be depicted as a triangle, and actually it
should be in the form of an equilateral triangle. that is largely the form it did take, beginning in the 1950s and lasting until the early part of this decade. it is what drove silicon valley to what it is today. the sides are now a little frayed, in addition to being different lengths. the reasons for that are multifold: a general mistrust between the government and industry; we started talking past each other instead of with each other. it's made more difficult today
by the fact that industry is moving so much faster than the department of defense that some see no compelling reason to work with the department of defense, and even for those who do want to work with the dod, which is a far bigger group than people assume, we don't make it easy for them. so i would just reinforce the themes in the commission's interim report, and that is the idea of a shared responsibility. it is about trust and transparency. our national security depends on it. and even those who for various reasons still view the dod with suspicion, or who are reluctant to accept that we are in a strategic competition with china,
i hope would still agree that ai is a critical component of the nation's prosperity, vitality, and self-sufficiency. in other words, no matter where you stand with respect to the government's future use of ai-enabled technologies, i submit that we can never attain the vision outlined in the commission's interim report without industry and academia together with us in an equal partnership. we are in this together. public-private partnerships are the very essence of america's success as a nation, not only in defense but across the entire government, and the message we want to send today is that we have to make the triangle back into what it used to be. >> thank you, general. i think i'm going to ask a couple of questions of both of you, and we will start with the
same. talk about maven some more. [laughter] >> i think it is no secret that we came up as a consumer company and evolved into becoming an enterprise company. we bring a wealth of resources to bear, but there are different protocols and ways of engaging, and as we go along in that journey we have confronted a lot of hard issues. in some ways the debate is a positive as well as a negative: you could argue this kind of open rethinking is an american strength, and out of it comes incredible strength. if we work
together well, we can have a more robust, resilient framework, as well as one that works for the world. our principles document devotes a couple of pages to the principles themselves and a full section to implementation, because you quickly discover that a lot of the hard problems come in translating principles into challenging cases. we have had debates about whether to publish a paper when you can imagine the work being misused for surveillance or other purposes. after reviewing one particular technology we determined that it was appropriate to publish, because that particular technology was
used in only one setting. but it's an example of the kind of discussions we have around issues like facial recognition and other challenging questions, where we have to come to terms with the reality of the trade-offs we are making. that is very much the case in a lot of these issues as well, but we think that there is a lot of room for collaboration and coordination: on cybersecurity, on logistics and transportation, on healthcare. >> our intent was to go after commercial industry, because this is where the solutions already exist; do not reinvent the wheel. our approach was a simple one: we wanted everybody in the market, from a small startup of 15 people, which is one of the companies we got on contract, to the biggest internet companies. and
why did we go after google? because we wanted the best in the world at working with these images. it is an extraordinarily difficult problem to go after, and we had a very successful collaboration with the team. what was happening internal to the company and how that played out are a little bit different stories, but we went all the way through the contract, and the work became products. we were very pleased with that, and we got tremendous support. what we found at the end of this, and this is a critique of both sides, is that the narrative got defined very
quickly. part of it is that the company made the decision not to be public about what they were doing. our approach was to be willing to talk as much as the company wanted us to talk. to be fair, we didn't want to get into the operational specifics. this was intelligence, surveillance and reconnaissance, and the aircraft had no weapons on them. it wasn't a weapons project, and it is not a weapons project. but as we started hearing these wild stories and assumptions about what it was and was not, it got to the point where, if you google it today, no pun intended, the adjective controversial has now been inserted permanently in front of project maven. it wasn't controversial to me or to the team. it isn't controversial to anybody beyond people who just don't like what we are doing, full stop. this is an interesting point i've thought about a lot, and i'm not sure everybody appreciates it: what happened is a little bit of a canary in a
coal mine. the fact that it happened when it did, as opposed to on the verge of a conflict or crisis, means we've gotten some of that out of the way. you heard kent talking about a bit of a reset and how the company wants to work with the department of defense. i think that is important, and it happened. it would have happened to somebody else at some point, but the transparency and the willingness to talk about what each side is trying to achieve may be the biggest lesson of all that i took from it. >> it is a tragedy that we don't wear hats anymore, because i could borrow three hats and figure out which one i'm wearing. i can tell you that when i met general shanahan, the problem inside the military was that we take these trained soldiers, sailors and airmen, so forgive the jargon, and we put them in front of mind-numbing observational tasks. they literally watch screens all day, and it's a terrible waste of the human assets that the
military produces, and so there is a huge opportunity to try to get them working at a higher level, and that's why we recommended the creation of the joint center that you have now stood up. let's move to another question, one that has to do with ethics. in the middle of what went on inside google, kent had the good idea of a formal ethics proposal, and he drove inside google the ethics process that produced a remarkable public document; now i have my google hat on, and it is quite definitive, and i think maybe you could talk about that. then similarly, we produced a proposal to the military, and i believe you are the customer for the proposal that we wrote on military ai ethics.
i assume both of you are in favor, since kent wrote the first one and all the other companies have now copied variants of your approach in one form or another, and you said you are in favor of this. what are the consequences of these principles, and do they really work? for example, does google prevent or turn off or stop doing things, as it has in the last little while? does it actually work? and an interesting question for you, general: there are people who claim the military won't operate under these ethics principles. in our report we cited the many rules it is already required to operate under, and maybe you can report on that. >> i think, as the general noted, having frameworks in place early on, both the substantive principles but also the review processes and escalation opportunities, is a critical part of the internal as well as the external conversation. we talk about surveillance being a concern, so we want to make sure some of the recognition
tools and tracking software that we are developing are deployed in appropriate ways. we want to be a good partner. we don't want to pull away support; we want to make sure of the scope of the projects we are developing and what we are licensing for commercial uses, and to have a sense of the direction of travel. that is valuable for both sides in making sure expectations are clear, and in building trust not only internally but across society. another example would be general-purpose facial recognition: you don't necessarily know what uses can be made of these tools, so until we develop more policy and technological safeguards we are going to be very cautious about that area. another example is weapons. this is a nascent technology, and we want to be very careful about the application of ai in this area, so that isn't an area that we are pursuing, given our background. we recognize the limits of our experience in that area. obviously the military is going to be deeper and have more
understanding of the safety implications, so we will continue to work through these different areas. there is a remarkable degree of convergence that we see with the dod, and now internationally: we are starting to see the european commission coming up with regulations for artificial intelligence in the next 100 days, and this will be an interesting exercise as we arrive at some kind of common vision of how we build acceptance for the next generation of technologies. >> looking at it from the dod lens, this may be the best starting point. kent mentioned the areas of convergence between commercial industry, academia and the government. the principles are as good a place as any to drive a stake in the ground: do we agree on all of these, on some of these? even if we don't agree, getting the conversation going is a good starting point. the other thing, and i need to state the obvious here, is that i can tell you with certainty that china and russia did not embark on a 15-month process involving
public hearings and discussions about the ethical use of artificial intelligence. they are not doing anything like it, and i don't expect they ever will. some people may question what the department is doing and why we are doing it, but i tell you what: we embarked on this long process precisely to make sure we took into account all the different voices on the ethical use of artificial intelligence, and i would say the product that has been delivered is an excellent one, shaped by a lot of people who spent time and attention on it. i've said this in other settings: in 35 and a half years in uniform i have never spent as much time on this question of ethical use. the department of defense actually has a long and, i would say, commendable history, despite its flaws along the way, of looking at the ethical use of technologies. there are differences with artificial intelligence, and what the report does very well is start with what is similar to every other technology that has ever been in the department: here
are some areas that may be different, where we are not quite sure yet, and here are the real differences, like systems learning on their own. that is a pretty good framework for going after this. we have a way of looking at this: whether it is artificial intelligence or any other technology, our history and processes, our approach to test and evaluation and training, are in place to look at the technologies and how we bring a prototype into production. so now that this report has been presented to the secretary of defense, it is up to us. i get two questions. number one is: what do you think about the report? the report provides the best possible starting point. and number two is: what are you going to do about it? it's really complicated. we have to come up with an implementation plan, a department-wide implementation plan, taking these organizations, putting something together for my boss, the chief information officer of the department, and making some recommendations on how we implement this for the entire department of defense. that is not an overnight task.
it's going to take a while to get this right, but we know we have an outstanding starting point. >> that is a wonderful framing for where we are. i would like to push a little bit on where this will go. let me give you an example. openai built a text-generation technology that allowed arbitrary rewriting of text, and it was sufficiently good that they became concerned and didn't release it fully; instead they released only certain models to certain researchers. that is an example of this kind of restraint, and i asked them, does anyone put pressure on you, and they said no, we just thought it was our good judgment. you famously said very early, on a speech-recognition technology, that we are going to avoid offering that as a general-purpose service because of the dangers. where will the industry end up in this sort of self-restraint? is it going to be a common set of principles? is the industry
just going to have to have an ethics culture with respect to doing this carefully? how will this play out? >> you already see some efforts, through the partnership on ai, to exchange information on some of the work being done. it is going to be an evolving question as we develop more infrastructure and frameworks that appropriately limit the use of artificial intelligence, with safeguards and checks and balances for a variety of different areas. but i am hopeful we will have common ground to work from; we see the work already, and we are on the way to doing that. this is true of any technology: every new communications platform, from television and radio to the internet, needs new regulatory structures and social conventions about how to use these different tools. this is an extraordinarily powerful technology, so i think it is understandable that you are seeing a variety of views come together, but it is also notable that you are seeing the convergence
that you are seeing. >> you have talked inside the pentagon about this notion of a new kind of warfare, and i think the term you use is algorithmic warfare. take us through, in the same sense that kent talked about this new emergence: what is new and powerful about this technology in a military context? with your understanding of how the military frames this, what is the language and the positioning? >> i go back to when we were formed. the deputy secretary of defense was in the room, and i will never forget it, it's like yesterday: okay, you are now formed into the team that is going to figure out how you actually field ai, getting away from the research piece of it, which was all happening wonderfully behind the scenes; now we need a team focusing on the fielding side. and the name he gave us was the algorithmic warfare cross-functional team. it's not an accidental name; it just became project maven because that is easier to say. [laughter] >> your acronyms are going to kill me. [laughter] why don't you tell me what algorithmic warfare is? >> we are used to fighting for 20 years in a certain kind of fight: counterterrorism, counterinsurgency. we do not want to be shocked by the speed and chaos and bloodiness and friction of the future fight, in which ai will be playing a big part. from my perspective, how do we envision the deciding in that fight happening? it has to be algorithmic. as you described earlier, as we were talking about this, it is how fast can we get through these decisions. colonel john boyd, the air force colonel who was the author of the ooda loop, observe, orient, decide, act, which is how you get through the cycle of
decision-making. it was never really about the decider alone; it is more about this: in the future fight we are looking at, this will be happening so fast that if we are trying to do this with humans against machines, and the other side has the machines and algorithms and we don't, we are at a high risk of losing that conflict. now, this is a challenging one, because i think part of what you are getting at is, in a future scenario, how are people going to be assured that our algorithms are going to work as intended and don't take on a life of their own? what we hope comes out of the starting point of the principles that were given to us is test and evaluation. we have to do a lot more work on the front end so that by the time systems are fielded we know what is being fielded. but i think that we are really going to be at a disadvantage if we think it is going to be a pure human-against-machine fight; it will be human and machine on one side, human and machine on the other. but the temporal
dimension, this fury that you may be facing, means decisions will be made that fast. it might be algorithm against algorithm. >> to me the key question is what happens when the whole scenario is compressed beyond human decision-making, because i understand the way the military works: when there is a threat, in general people check with their superiors, and there are rules of engagement with human judgment, all built around some number of minutes, not a number of nanoseconds. how will the military adjust its procedures to deal with these very possible threats? >> it won't be driven from the top; the innovation will happen at the lowest possible level. what we have to be able to do at those levels is give people the policies, authorities and framework to do what they need to do. the innovative people will say: i have a decision like this, and i'm going to write code and develop an algorithm and apply it to this problem in the field. if you give me the data and the tools and the framework and all those other things, we
can do this. the idea is that that fight will be more decentralized than a lot of people are comfortable with today, and that brings risk with it. so we are talking about higher risk and consequence, but it's either that or risk losing the fight. so it's this idea of decentralized development, decentralized experimentation, decentralized innovation. the innovation, as it was described in one of the panels this morning, happens at the lowest levels, and we have to give them a push from above to make it work. >> and in the temporal component there are new fronts in cybersecurity and cyber defense. we are already seeing efforts to destabilize with disinformation campaigns and the like, so we can work together to recognize those patterns across the battlefield, if you will. >> do you have a model for how the industry... one of the themes of the whole conference is that industry and government need to work together broadly, and obviously we have a senior general here, but i'm really
referring to the government as a whole, which is more than just the dod. do you have a model for how industry should work with the federal government, the state governments, the dod and so forth? >> i've already talked about two important elements that bear on this. first is the notion of trying to build broad trust into applications of new technologies, and second is the need for global frameworks and processes. third, as the general alluded to, is a more operational and administrative question: how do we make it as easy as possible for new companies to enter into these kinds of partnerships? a lot of the innovation and cutting-edge research being done in silicon valley isn't being done by large companies but by small companies. it's a rich ecosystem of innovation, and it's a challenge even for a company of google's size to get more involved in that environment; it is doubly difficult for some of the smaller companies. as we look at modernizing
procurement from the military side, and working with congress as well to make it as quick and nimble and flexible as possible, i would point to the increase in r&d funding across the board, because that is traditionally really fertile ground for a lot of these collaborative enterprises to move forward. and look at human resources exchanges: there are a lot of authorities out there which allow private sector people to come into the government, but in practice it is harder than you would think, and a lot of that hard work on the ground is important to make it a success. >> a question for both of you, because we will be making recommendations that ideally end up in legislation a year from now, plus or minus. are there specific things we can do that would promote public-private partnerships? as you know, the dod has a number of other groups that work very closely with industry, and an extraordinary contribution has been made to industry, to the technology
and to me personally. so, summing all of that up, and i will ask you both the same question: are there specific things, ways to decrease the friction between small companies, large companies, the federal government, procurement and the dod? >> a couple of thoughts on that. so much has started to happen over the last couple of years with places like the defense digital service and the other software teams that have been stood up to get things moving. >> we should pause and say these are each teams of software people that have had an outsized impact in changing procedures and important aspects of how things get done, in the air force for example. >> that all got started, but we have to figure out how to institutionalize it and make the change across the department of defense, which is an excellent thing to do. you asked about the things we can do, and one of the biggest is just bringing in talent from the outside, from academia and industry: our chief scientist,
who was here today, worked for startups in his last job, and our chief technology officer spent 25 years in the valley; he comes in and within 24 hours takes a different view of what we are trying to do. we need more of that. we need people coming from academia for a year or two, and we need to go the other way, putting people out into academia and industry, secretary of defense fellowships and the like. all of that is beginning to happen, but we need to take it to the next level of scale to really start to understand what we are each talking about. me going to the valley and talking only gets us so far. it's the peer-to-peer relationships and discussions that are going to be more important than anything else. >> i think you've seen examples of this. we are priming the pump in a number of important areas, whether that is training our models in simulation, helping on recruiting, or a number of different areas. another important component of this is it modernization,
because in many ways it is critical, but it comes in a larger environment of software that is oftentimes very difficult to work in, because you have to get security clearances and appropriate certification for all elements of the piece. so there's a combination of successful individual experiments that build familiarity at the peer-to-peer level, but also this more extensive change to make it easier to have wide adoption of the technology more broadly. >> it's time for us to finish up. my purpose in this panel was to put to bed the notion that somehow silicon valley won't work with the military. i think that we have clearly seen examples of small companies and large companies working together, and we can move forward and build this collective of private and public partnerships. can you summarize the key takeaway that you want to offer to us, the key message, the key word?
>> why are you here? why did you make a special trip just to make this point? >> i want to be clear, and i will restate what i said at the beginning: we are a proud american company committed to the cause of national defense, for the united states of america, for our allies, and for peace and safety and security in the world. we approach the task thoughtfully, as we do in approaching every variety of advanced technologies; we want to be thoughtful and make sure we have the frameworks and transparency and understanding as we move forward. i think that is a mission the military and the u.s. government share, and i'm looking forward, we are looking forward, to working closely together in the future. >> so, general, i know you never liked these things, but you are at the top; you are the fellow that is going to make this change happen across 3.2 million people and $660 billion in an enormous bureaucracy. how are you going to pull this off? >> one person at a time.
it has to be a combination of top-down and bottom-up. it was said on the previous panel: it must have the full support of leadership from the very top, to show it is a priority for the department. that is critical but almost insufficient by itself. you have to have the bottom-up innovation of people pushing every day, and there's no question some of them are in this room. they already know what the future needs to look like, so how do we meet them in the middle and give them the resources and tools to succeed? the last thing i will say is that this is intimidating. it's a daunting task, no way around it. it is a multigenerational problem and it's going to require a multigenerational solution; nobody is going to wake up tomorrow and suddenly realize we've got this right. we will have some successes and setbacks, but we just keep going ahead, and with the resources and commitment of the department behind us i know we will get there. >> i think it is worth saying i have worked with kent for 15 years, and with my google hat on i will tell you i could not be more proud
of the impact he's had on our society and on the scale and the reach of our corporation, and i think you can see this today. and general, i don't think that bob could have chosen a better person to lead this. in our partnership with you over the last three years, you really have moved the resources, got the money, got the attention, and delivered, and there was no one like you before. you are that person. so thank you both very much. thank you all. [applause] >> the national security commission on artificial intelligence at a conference in washington, d.c., on the intersection between artificial intelligence and national security.