
Henry Kissinger and Eric Schmidt, "The Age of AI" (C-SPAN, April 11, 2022, 8:00am-9:04am EDT)

8:00 am
>> You've been watching Book TV. Every Sunday on C-SPAN2, watch nonfiction authors discuss their books: television for serious readers. You can also find us on Twitter, Facebook and YouTube @booktv.
>> ... depended on GPS. We also know the U.S. and China are spending billions in the AI race. But there's so much we don't know, and your book, Dr. Kissinger and Dr. Schmidt, seems to me to be an urgent call to get all of us to think harder about what it is going to mean for our future.
8:01 am
You wrote: "Whether we consider it a tool, a partner or a rival, AI will alter our experience and permanently change our relationship with reality. The result will be a new epoch." Dr. Kissinger, I'm going to begin with you. In that vein, you wrote a few years ago that AI represents the end of the Enlightenment. Today, do you see it mainly as a force for good or a cause for worry?
>> It's both. It has an unprecedented capacity to collect information, to absorb data, and to blend it in different directions. But it also has issues
8:02 am
and risks in the military field. And I believe that it will change our perception of reality and, therefore, how that reality is interpreted: whether in a religious way, in a mystical way, or in some other manner. The Enlightenment rested on the perception that reason could explain the world. Here, extraordinary results can be achieved for which one does not know how they come about, so we begin with the end of the process, recovering it through algorithms.
8:03 am
But you don't know why it operates this way. And these conclusions become dominant in various fields, so there will inevitably be a question about why they occur. And that is the distinction. And they can happen so quickly, and the achievement of them is so much faster than the human mind can follow, that there is a gap that will have to be dealt with, or explained.
8:04 am
>> Dr. Schmidt, pick up on that, because the book does address the potential best and the potential worst to come from AI. Give us an example or two of each. My question is, how confident are you that our civilization is going to be around, that it's going to survive pandemic, climate change and global conflict long enough to realize the best or the worst of AI?
>> In the book we -- by the way, Judy, thank you for doing this with us, and thank you to the CFR, which is an incredible organization that we are all members of. It's a great honor to be here. In the book we give a couple of positive examples and a couple of possible negative scenarios. A positive example is a drug called halicin. This drug was developed at M.I.T. by a set of
8:05 am
synthetic biologists. What the synthetic biologists did was imagine that if computer scientists could search 100 million compounds in chemistry, they could find a compound that would serve as a general antibiotic, one very different from the antibiotics that we as humans all have resistance to. So they built a network that generated candidate antibiotics, and built another network that graded the candidates based on how different they were from the ones we already had. They came up with a drug that appears to work really well. That's a good example where incredibly intelligent scientists, working together across fields, were able to do something that no single scientist, nor any human, could ever do, which is complicated [inaudible] -- in the process by which they achieved it, they do not
8:06 am
know why -- they could not have constructed it by themselves without the artificial intelligence. It is a different evolution of thinking than was the case during the Enlightenment period.
>> And the issues with our technology as it marches forward are fundamentally that it's imprecise, in the sense that it doesn't know why it did something. It's like a teenager: you ask a teenager why they did something, and they can't explain it to you, and it really doesn't know. It's dynamic and emergent, meaning that it changes, and it changes all the time. And the most important thing is that it's learning. So I'll give you an example of a concern that we cite in the book, which involves the development of children.
8:07 am
So you're a young parent with young kids. You get your kids an artificially intelligent toy, a bear or whatever, and the bear becomes the kids' best friend. How do you feel about having your child's best friend be not human? What happens when the best friend learns things that are either not correct or not permitted by the parents, and it's not a human reaction? We could be really altering human beings' experiences unless we figure out a way to deal with this uncertainty. No one knows how to solve this problem.
>> You offer recommendations in the book for how to begin to address the uncertainty. I mean, you're making it clear that it's something that should be done. But how do we do it?
>> Well, we have a number of recommendations. One is that the
8:08 am
best minds from leading countries be assembled to deal with the issues that Eric mentioned, because we don't know precisely what issues will arise, but we know well what uncertainties may develop. So that is on an international basis. Secondly, we believe that the companies that produce results based on artificial intelligence should accompany them with a study or look into the implications of their discoveries,
8:09 am
to create a consciousness that goes beyond the immediate technical solutions. Inserted into the international field, artificial intelligence produces so many possibilities of intervention inside the territory of other countries, and kinds of threat that have not been dealt with before. And these are being developed unilaterally by each country, each feeling relatively safe in developing its artificial intelligence that way. And therefore some form of dialogue needs to be developed. In the nuclear field we had a
8:10 am
comparable problem, but with a much more transparent technology, which was large and could be counted. In the nuclear field, I remember that there were seminars at Harvard, M.I.T. and Caltech, at a time when the academic community still felt it was part of a governmental process. They developed concepts that then, over the years, were developed in the arms control field. There is nothing like that now, even internally in this country, to analyze this -- it's not even at the beginning of it -- or in relationships between us and
8:11 am
China, in which we could at least try to understand if there are restraints that can be carried out, and what those restraints would be. So these are conceptual problems that, I believe -- and I think we both believe -- should become commonplace.
>> And Dr. Schmidt, you do write about this challenge in the book. Right now, should we be thinking about cooperation with other nations like China, or should we be thinking of this as purely competition? How do we figure out which it is? And is it going to be different for every aspect of AI?
>> It's likely to be different. First, there are plenty of areas where collaboration would be a net positive. The example that I use around halicin is something that is a global treasure: health
8:12 am
matters to everyone in the world, not just to the U.S. and not just to the West. And indeed, AI could materially help healthcare problems in the developing world, because they bypass all that infrastructure and go straight to digital. Many of them don't have very good physical health care systems, but we can give them very good information that is targeted to them, and so on. With respect to defense collaboration and treaties, we take a position in the book that the most important thing is to worry about, essentially, launch-on-warning systems. These are called automatic weapon systems. We don't want a situation where the computer decides to start the war because the computer figured out something was going wrong that was perhaps not true, or made a mistake. This is essentially about human control. And the core problem is that, with the compression of time
8:13 am
in an active cyber war, for example, you may not have time for humans to make the decision. And so we think, collectively, it's important that there be discussions at the diplomatic level over this. There are no such diplomatic discussions right now. And I'll speak for myself and say that the reason we're safe today is because Dr. Kissinger and his colleagues in the '50s developed these doctrines -- but they did so after Hiroshima, Nagasaki and the explosion of a nuclear weapon by the Soviets. They did it after a tragedy, as opposed to before a tragedy. And I will say for myself that I'd like this conversation to occur before we do something really bad to each other.
>> And Dr. Kissinger, can that happen, given, just as one example, the tense relationship now between the United States and China?
>> Well, there are two aspects to it. One is
8:14 am
what Eric has discussed: to avoid automatic war, and other matters of that kind. At the same time, each side will undoubtedly be concerned that it not teach the adversary, in warning against him, things that he may not have developed yet, and thereby increase the capacity of the adversary to damage it -- so each will want to defend in a unilateral way. These are serious issues that have to be discussed. But I agree strongly with Eric: they must be addressed quickly, so that at least we get
8:15 am
a baseline of information, and a concept of how to avoid catastrophe. Now, in the commercial field, the tendency of tech companies is toward monopoly. And that has to be limited by the fact that no country will accept a monopoly position in a major technology for another country. So what is a commercial relationship in which each side can develop some significant capacity, but not a dominant capacity? That has never had to be faced before. But these are the sort of issues we are raising in the book. And when we began the book, I was totally ignorant of the
8:16 am
artificial intelligence field. I slid into it by accident, by listening to a lecture that I had actually tried to avoid. Eric was close to the door of the room, and I didn't know him very well, but he nevertheless urged me to go back into the room and listen to that lecture, because it would raise some fundamental issues that I might want to address. And I did become so fascinated that then, with Eric's help, we created groups that met informally. And then we formed a smaller group -- Eric, Dan Huttenlocher and myself -- that met regularly. But at least I did not know the outlines of an answer when we started this. And the main thrust of the book is to
8:17 am
convey the fact that we are moving into something of the same impact as the Enlightenment, in the sense that it changes the human conception of reality, and that new agents of knowledge will bear on human perception. To study the consequences of that is crucial, and it cannot be done only by studying the technological achievements, because the perception of them is what forms the new perception of reality.
>> Dr. Kissinger, just a quick postscript. Do I hear you saying you believe the Chinese leadership today is open to these kinds of
8:18 am
discussions and moves that the two of you are advocating?
>> I don't know. I think, fundamentally, we and the Chinese are in a historic challenge, in the sense that here are two societies that, between themselves, can destroy civilization as we know it through conflict with each other. And they can do so because they interact on a global basis. So in that process, I hope that the leaders of the two countries get together and address that question, and say we have a joint obligation, and convince each other that they really believe in a joint effort. But constant dialogue on these
8:19 am
issues seems to me necessary to break through the hostilities and suspicions that are created. In the meantime it's going to be very difficult, but it will be necessary. And I hope it will be addressed before the damage becomes obvious.
>> Dr. Schmidt, I want you to weigh in on that, because, as you know, it's been reported that most of the AI labs at your former company Google -- and at Facebook, IBM, Microsoft -- have recently been located outside the United States, reportedly 10% of them in China, and there's obviously been concern about that. Is that something that
8:20 am
should be addressed and changed?
>> So that statistic is not one I'm familiar with, and I don't believe it to be true. The vast majority of the AI research labs, which is what we're talking about now, are in the West, and the ones that are in Beijing are run by the Chinese, by the CCP. Google had a small group in China, which has since been shut down, and I'm not familiar with other Chinese presences from U.S. firms -- I'm not aware of any. So I don't think that's true, but the concern is nevertheless legitimate. China announced two years ago that their strategy was to lead the world in technology, including quantum computing, supercomputing, aerospace, 5G, mobile payments, new energy,
8:21 am
high-speed rail, financial technology, artificial intelligence and, of course, semiconductors. And the Chinese government -- Dr. Kissinger is a genuine expert on how they think, and he says they think in the very long term -- so that's the same list that we should have in the West. Furthermore, they're backing it up with a great deal of money in terms of funding Ph.D.s and research. This is not the China that you thought about 10 years ago. So I think a fair statement is that we're going to have a rivalry partnership with China, where they'll make some wins and we'll make some wins in this technology. It doesn't have to lead to war, but it is going to be uncomfortable. The Trump administration, for example, restricted access to extreme ultraviolet semiconductor manufacturing. That was a good decision on the part of the Trump administration. So we have some tactics. But I think the point here, especially for a CFR
8:22 am
audience, is that China is not a near peer -- they are a peer. And so developing a global structure where the U.S. is doing its thing and China is doing its thing in AI, and then how you manage those two, is critical for national security over the next 20 years.
>> And do you see, at this point -- I assume you're talking to individuals in this administration, and to people in other countries that are playing a key role here -- are there the beginnings of an effort to put that kind of global structure together?
>> There are, and I was fortunate enough to lead a congressional commission called the National Security Commission on AI, which exhaustively goes through these issues: 756 pages, a good thing to read over the holidays. And we go through this in great detail. We conclude that the U.S. is still
8:23 am
slightly ahead of China, but China is catching up very quickly. And we make a set of recommendations, which include more research and those sorts of things, but also working very closely with our Western partners. The Biden administration is doing all of that. The NDAA, which is how this stuff gets funded, includes roughly half of our recommendations, but the other half need to get done as well. So I'd say, in typical American fashion, we're getting there, but we're getting there too slowly.
>> And, meaning -- I guess I'm asking, is it inevitable that we're going to be behind?
>> It may or may not be. It's not possible to know. What happens in our field is everyone says that, because China does not have laws about privacy and data security, China can build systems that are essentially larger and smarter, because they have more data. But it may also be that the
8:24 am
field gets better at dealing with U.S.-size data rather than Chinese data, which is four times larger. So all of these sorts of quick things that you hear may not be true in the next five to ten years. You haven't mentioned it, but we're really in a race to build general intelligence, and in the West we're certainly working very hard to build systems that are human-like in the way they interact with us. You can imagine that that race could ultimately result in true supercomputers that are very, very powerful -- very scary, very important -- which could lead to another arms race of the nuclear type. We mention this in the book, but because we don't know when this could occur, we simply say it is a possibility. So I think the fair statement right now is that we're locked in a very tough business competition between China and the U.S., we don't have the right conversations about
8:25 am
security between the two countries, and the U.S. needs far more cooperation with its democratic partners.
>> And we need to think for ourselves not just about who is technologically ahead, but about what the significance of that advantage is, and how to relate technology to purpose. A mere technological edge is not in itself decisive if you can't explain to yourself what its use is and what its impact is. And so we have to be clear in our own minds what we're trying to avoid, and why, and what we're trying to
8:26 am
achieve, and why.
>> Dr. Kissinger, one of the other topics that you tackle in the book is what's happened as a result of AI and social media -- the algorithms that have led, of course, to some good things, but also to a lot of disinformation and misinformation -- and you talk about how that needs to be addressed, at a time when we are looking at Russia threatening Ukraine, and true information as well as misinformation about that. How should the United States be approaching that? And I have to frankly sneak in a question as a journalist: do you think it's possible to prevent Russia from going into Ukraine?
>> It's possible, and it's necessary. That is what the Cold War
8:27 am
was about, and it was achieved in the Cold War. The objective has to be to make clear to Russia that the benefit it would achieve by the military actions we are trying to prevent is either not possible or not worth it. The issue of Ukraine has a long history in the relationship of that country to Russia, and in the balancing of the Western security concept with the Russian security concept. I personally have been critical of the attempt to
8:28 am
integrate Ukraine into NATO, but I would be totally opposed to any military action by Russia to restore the historic situation. So I would be thinking of a position for Ukraine similar to Finland's: not an institutional challenge, but a capability to defend itself to a significant degree, and a stated interest by other countries in preventing the use of force. I don't know where artificial intelligence helps you in the solution of
8:29 am
that problem. But as these strategic concepts evolve, artificial intelligence will make them more complicated and more subtle, and we have to understand how to use it to achieve the level of deterrence that existed in the Cold War -- which shouldn't happen by sliding, as both sides escalate into a crisis they don't know how to end. But we're not saying that every problem can be solved by artificial intelligence.
8:30 am
We're saying the problems will be compounded by it, but that it is also available as a means. Its message for strategic principles is that there are positions we do not want another country, or another group of countries, to achieve -- a hegemony, however one defines hegemony -- and the methods of resisting such alterations require study within our country first. And then some way must be found for the United States and China, because the self-interest of the two countries ought to be engaged, to address some of the questions we
8:31 am
have outlined, and others that may arise.
>> I know this is on the minds of our members who are joining us, and it's time now for me to open it up for questions from all of you who are watching. Questions for Dr. Schmidt and Dr. Kissinger. We'll take the first question from Craig Charney.
>> Thank you, and thank you for an extraordinary display of natural intelligence so far. My question is this: is the real challenge posed by AI coming from general intelligence, or rather from the development of an AI which is also self-conscious? Consciousness, after all, is not a function of intelligence. Animals who are far less intelligent than us have consciousness, are aware of pain, have emotions and so forth. One of the things that studies of human psychology seem to me to have demonstrated is
8:32 am
that there is no intelligence or consciousness without emotion. So I'm wondering -- I'm not sure if Eric was thinking of consciousness when he mentioned general intelligence -- but I'm wondering if this is the sort of AI which would pose a great challenge.
>> Dr. Schmidt?
>> So it's a very good question, and we decided that we would not explore the question of whether these systems are conscious. What we would say is that these systems will have human-like capabilities, but they're not human. We do not take a position that they have consciousness or pain or anything like that. And the scenario goes something like this. My own opinion -- people disagree -- is that in 15 years or so, there will be computers that not only have the kind of capability that we have been discussing in our book, but that will also be able to begin to set their own objective function. In other words, they'll be able to say, I want to work on this problem.
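[Editor's note: to make the term "objective function" concrete -- in systems today, the objective is supplied by a human operator and the machine only optimizes it; the scenario Dr. Schmidt describes is one where the system also selects which objective to pursue. A purely hypothetical toy sketch, with invented names and numbers, not any real system's behavior:]

```python
# Today's pattern: the human supplies the objective; the machine optimizes it.
def optimize(objective, candidates):
    """Return the candidate that scores highest under a human-chosen objective."""
    return max(candidates, key=objective)

moves = [0.2, 0.9, 0.5]
print(optimize(lambda x: x, moves))  # the operator decided what "good" means

# The hypothetical future pattern: the system also picks which objective to
# pursue -- here, naively, the one on which it can achieve the highest score.
def pick_own_objective(objectives, candidates):
    return max(objectives, key=lambda f: f(optimize(f, candidates)))
```

The point of the contrast is the second function's signature: nothing outside the system tells it which `f` to care about.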
8:33 am
And at that point, the definition of who we are becomes very interesting. Dr. Kissinger talks about this in the context, as a historian, of Spengler and Kant and the philosophers: what does it mean to be human? How do we think, and so forth? And when you have a system -- in my view, 15-plus years from now -- that will think as well as you, but in a different way, it will call into question the basic issue of what it means to be human, especially if we're not the top intelligence anymore; in fact, a non-person is even smarter. This has a lot of implications if it occurs. For one, as a national security matter, you'd want to make sure that our country gets there first, because you wouldn't want the other country to have it and be able to use it against us. And that's just one of many such examples. Kayla, do you want to call the next
8:34 am
person?
>> It's also for Dr. Kissinger. Go ahead.
>> The question will be: what kind of intelligence is it? We know it may be different from ours. But in what way does it reach its conclusions? We are quoting, on the back of our book, a question that was asked of a machine that can complete sentences and articles. It was asked about its motivations, and it said: I have no ethical principles, and I have no feelings; I am operating by language principles. So what it does in
8:35 am
its operation is, in effect, what would be considered ethics -- though ethics, as we understand them, do not apply to it. Now, what are the implications of that? I am not confident enough in the field to be arrogant about whether general intelligence is a special challenge. But I don't think it is a unique challenge. As long as you have entities that act autonomously toward goals that cannot be predicted by human intelligence, and that come to solutions that you cannot predict, that is a world we have not previously had to explore. Other
8:36 am
people -- the technicians -- taught chess to a computer, and then discovered that this computer was applying strategies that in 2,000 years of chess records have never been applied by human beings, and that it beats the normal chess players. And the reason they arrived at that is the way it was achieved: they taught it the moves, and then divided it into a white and a black part, and they played against each other for four hours -- or maybe 24 hours, I'm not sure, but whichever it was, a very short period of time -- after which it came up with a
8:37 am
strategy that is outside the customs of the chess tradition. It shows that this is a new dimension of intelligence. So what the operation of that is, and how it reaches its conclusions when it spreads to many other fields -- that is going to be the puzzle of the future, and it will need to be addressed in some fashion, as societies impact on each other with capacities that we have to guard against.
>> And Dr. Schmidt, just by way of definition, when you refer to general intelligence --
8:38 am
is it when the machine and the human are working together?
>> So, in the terms that the industry uses today, the kind of intelligence that artificial intelligence has today is determined by what humans ask it to do. You can ask a computer what's the weather; you can ask it to solve a problem; you can ask it to survey things -- its vision systems are better than humans', that sort of thing. It's a complicated calculation, but you tell it what to do. General intelligence is generally -- sorry for the pun -- viewed as the kind of creativity that humans have, where you wake up in the morning and you have a new idea. And to hammer on Dr. Kissinger's point: my friends who are physicists are obsessed with answering the questions of dark energy and dark matter. So let's imagine that, with a future computer, you simply say to it: work on physics. And it decides to work on dark energy and dark matter, and it
8:39 am
actually solves them, but you can't figure out how it solved it. So you have the solution, but you don't understand -- humans cannot understand -- how it got there. That's the point at which we realize that our definition of humans themselves, of who we are, is at stake. And this is the key point that Dr. Kissinger makes: it's a new epoch, because all of a sudden we're no longer the top thinkers. Something else is thinking. Do we rebel? Do we reject it? Do we fight it? Do we invent a religion for it? I don't think any of us know. And even if you can teach the computer to work for the same objectives, so that there's no question about that, if it interprets the best means of achieving them in a different way and starts deviating, even very slightly, at
8:40 am
the beginning of the process, by the time it has gone on for five years there may be a very big gap between what your purpose was, what you thought you would do, and where it has ended up. And I'll give you another example, where the morals are different. So you're in a war, and the computer correctly calculates that to win the war, you have to allow your aircraft carrier to be sunk, which would result in the deaths of 5,000 people, or what have you. Would a human make that decision? Almost certainly not. Would the computer be willing to do it? Absolutely. So we can give you example after example where the computer's decision will not reflect human ethics, human values, human history. It'll have its own path. And that's a real challenge for humans.
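[Editor's note: the carrier example can be restated as the bare expected-value choice an unconstrained optimizer sees. A deliberately simplified sketch; all numbers are invented for illustration:]

```python
# Each option: (action, estimated probability of winning the war, casualties)
options = [
    ("protect the carrier", 0.40, 0),
    ("sacrifice the carrier", 0.85, 5000),
]

# Told only to maximize win probability, the machine accepts the sacrifice;
# the casualty column never enters its decision.
machine_choice = max(options, key=lambda o: o[1])
print(machine_choice[0])  # -> sacrifice the carrier

# Encoding the human value judgment as an explicit constraint changes the answer.
def constrained_choice(options, max_casualties):
    allowed = [o for o in options if o[2] <= max_casualties]
    return max(allowed, key=lambda o: o[1]) if allowed else None

print(constrained_choice(options, 100)[0])  # -> protect the carrier
```

The gap the speakers describe is exactly the difference between these two functions: which constraints get written down, and by whom.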
8:41 am
Just as in the chess game, the computer was willing to sacrifice the queen.
>> A question, Kayla, from another member?
>> We'll take the next question from Marc Rotenberg.
>> Thank you very much. This is Marc Rotenberg with the Center for AI and Digital Policy. We're actually studying the national AI strategies of governments around the world. I just wanted to thank the CFR for this timely and important panel. I've also written a review of the book, which is available in Issues in Science and Technology. But I want to ask a question that I think will be of interest to many CFR members, and that concerns the general role of the United States government in the development of AI policy. We know, for example, that the European Parliament is underway with comprehensive legislation;
8:42 am
the U.S. contributed to the OECD AI Principles and the G20 AI Guidelines, and our foreign policy talks about democratic values and AI -- I think that is a helpful way to think about new technology, in a way that strengthens democratic principles. And so my question, both to Dr. Kissinger and to Eric, is: in what sense can you see the U.S. developing policies that help advance democratic values in the realm of AI?
>> Dr. Kissinger?
>> I think the first objective of a government, in relation to other governments, has to be to ensure
8:43 am
that no government gets into a hegemonial position. But how you propagate democratic values, and how you can apply them -- I think that is a very important subject, and I think it must be studied. But I don't yet know how to approach it. Eric and I have spoken between ourselves of examining the next set of problems, and certainly the relationship between values and artificial intelligence, and the relationship of those to each other. So I do not venture to say that
8:44 am
I have an answer to that. Maybe in another two years we could be at the beginning of an attempt to do it. But I'd like to hear Eric on this.
>> Yes. So, Marc, thank you for your leadership on this issue; I think it's crucially important. I don't think it will be one commission or one government action or one government report that will do this. I think we need -- and we say in the book -- to get more than computer scientists talking about this. We need to get economists, philosophers, biologists, anthropologists and so forth to understand that we're playing with fire. We're playing with human beings, and these systems do things that we don't necessarily agree with. As you note, pretty much every country now has an AI ethics project. And in Europe last year, they actually introduced a draft form
8:45 am
of AI regulation, which I ridiculed because it was so rough -- regulation so tough that it would effectively kill the AI industry in Europe. I said that publicly, and I'll say it again. There's evidence that Europe has now figured out that they can't just regulate themselves to success, and that they need to also invest in these key areas -- and we want them as a great democratic partner. The reason we are so focused on this ethics question is: imagine a situation where all of the AI development is done in China, or in Asia, where the notion of personal privacy and surveillance is very different. I don't think we would be comfortable with that. So we need to write down the things we care about, and we need to make sure that we win, at least in those areas.
>> And do you see that happening?
>> Well, it depends on whether you think that the U.S. government is going to make the necessary changes in terms of funding, policy and immigration.
8:46 am
When we studied this, one of the things that happens is that this whole issue gets caught up in the issue of China. People say, well, just ban all the Chinese students from the U.S. Well, we looked at that very carefully, and we decided that that would be terrible, because Chinese students in the U.S. are some of the major contributors to AI research. So there are no simple answers to this. But the most important thing is to say that American values, Western values, need to be the dominant values in the platforms that we use every day -- our semiconductor platforms, our energy platforms, our biology platforms. We need to make sure we know how they were built. Kayla, another member?
>> We'll take our next question from Peggy Delaney.
>> Yes, thank you. This is a fascinating discussion, and Dr. Kissinger will certainly feel that I'm continuing on my naive humanistic route in our
8:47 am
discussions. But while we're circling around the edges of this question of ethics, there's another, more subtle thing that I'd like your opinion on. We know that there are as many or more neurons in the heart and the gut as there are in the brain, and that that is a source of emotions -- of love, of compassion, et cetera. And I wonder, because that won't be possible, as you've said, to insert into AI, to what extent it leaves room for the kinds of negotiations that Henry has engaged in for many, many years, where there's a human dimension -- a connection that exists between the leaders. If it's all done by AI without sufficient human interaction, on the one hand it could be terrible if the two were megalomaniacs; but if the two are really searching for a
8:48 am
peaceful solution, sometimes that can maybe be more important than any strategic answer that comes through ai. >> i feel, and we say in the book, that there should be a human element in all efforts that are based on ai, that we should not abdicate the basic decision to artificial intelligence. but i would certainly favor using artificial intelligence to answer some of the questions that you are raising. and what we have to avoid is that the different cultures
8:49 am
develop totally different views of artificial intelligence, and competitive artificial intelligences which then interact with each other with a reduced capacity of human control. or we could find a means by which artificial intelligences can achieve something comparable in the actions that lead to comfortable results, like what we called arms control. that was complicated, but at least it was a way by which the two sides could educate each other and, by thinking through ways to be patient, discover ways of preventing catastrophe. so i would be very uneasy
8:50 am
if the ultimate ethical questions were left to the interaction of ai without an essential human component in it. but it also means that humans have to develop their own understanding of ai, so that there isn't simply an automatic system that starts crises. given the impact ai can have on biology and medicine, one can hope that governments consider this a necessity, but one can't guarantee it, and we certainly have to be at the highest level of which we are capable.
8:51 am
>> dr. schmidt, do you want to add anything? >> we know humans are also capable of doing terrible things. dr. kissinger and i always start by saying we want humans to be in control. so whenever you've got a scenario where humans don't feel like they're in control, we've got to really think about that. automatic weapons systems are one, but there are plenty of others, where an ai system could, for example, make a system very, very efficient, but the efficiency that it seeks is not what we as humans want. so it gets back to how we establish the goals. now, you mentioned social media before. here's an easy way to think about social media. it's somewhat cruel: a company tries to maximize revenue, you maximize revenue by engagement, and the way to maximize
8:52 am
engagement is by maximizing outrage. so why are you surprised we have more outrage? the system learned how to be outrageous. that may not have been what we wanted, but it's what happened. and i want to avoid that scenario as ai becomes a partner in pretty much everything that we do, and that's clearly going to happen because of the amount of investment. in my field, ai is taking over everything. if you look at mit, 70% of the undergraduates take a machine learning course; 50% of the undergraduates are in computer science; computer science is the number one major in every major university that i'm familiar with in the united states. we are producing a generation of people who will build these systems, and we want to make sure they're built in the right way. dr. kissinger says very clearly that he doesn't want computer scientists like me to be solely in charge of this, and i agree with him. >> kayla, another member question?
8:53 am
>> we'll take the next question from matthew ferraro. >> hello, good afternoon. my name is matthew ferraro. i'm an attorney, and i used to be an intel officer. great discussion. here's my question: how close are we to the insertion of microcomputers and ai into the human physical body, think like nodes in the brain, and what will that do to our understanding of personhood, consciousness, liberty, moral choice, and all of that? thank you. >> we don't go into this in the book very much because it's so speculative. there are a number of startups which are trying to do something similar to what you described. if i were to give you my prediction, which is just my opinion: this will start with people who have severe brain injuries, and we will be able to improve the quality of their lives. i think it's going to be a very
8:54 am
long time before you will have a voluntary chip inserted in your head and learn how to use it to make you smarter. there are many reasons for this, starting with the fact that we don't actually understand how the brain works, and the fact that ai systems use neural networks does not mean that they converge; they're likely to diverge. in other words, as the ai systems get smarter, they get smarter in a way that's different from humans, and so the presumption that these ai systems can somehow be inserted into a human brain directly is, i think, questionable. so the startups are there; we'll see. if i were to give you a prediction, all the things that we're talking about, including general intelligence on the computer side, will occur before you, as a healthy person, get a brain implant. >> another question, kayla? >> we'll take our next question from barbara matthews. >> thank you very much. barbara
8:55 am
matthews, founder and ceo of bcm strategy, a data company that manufactures new data. i'm also a senior fellow with the atlantic council. i want to thank you both, dr. kissinger and dr. schmidt, for a compelling book, and judy as well for a compelling discussion. as much as i would love to talk quite a lot about training data and get your feedback on how that can provide guardrails, i would like instead to ask a question about an issue raised by dr. kissinger at the beginning of this discussion, and which is dealt with in your book: how the interaction with artificial intelligence, throughout its development, will change human behavior. a lot of economics, behavioral economics, much of quantitative analysis, is premised on the
8:56 am
belief, and proven fact, that much human behavior is predictable: if one just has enough data, one can anticipate a certain amount of behavior. your book suggests strongly that much of what we know about behavioral economics, much of what we know about human decision science, is about to change dramatically, and i'm wondering if perhaps you might discuss that a little bit more in the time allowed today. i'm confident we don't have time to get into it fully, but i'm intrigued by this notion that everything we believe about how humans behave is about to change. >> can i offer you a framing? dr. kissinger will have an even more thoughtful response, i think. we now know that humans have a whole bunch of
8:57 am
biases. so, for example, if you watch a video and i tell you that it is false, you will at some level still believe it, even after i tell you it's false. we know that people retweet and resend outrage and emotional content much more than thoughtful content. we know about anchoring bias and recency bias and things like this. so one way to think about it is that ai will discover every bias of humans at scale, because someone will figure out a way to use it. so we will know pretty well how humans behave in response to various stimuli, in ways and at scale we never did before, as a result of all this. what we do about that, and about allowing computers to exploit those biases, is a legal and regulatory question, not a business question. >> dr. kissinger? >> what concerns me about the ai
8:58 am
field, and remember, i slid into this almost by accident, by being very concerned about something i heard in a lecture. but what concerns me is, when i look at the evolution of human achievement, much of it was brought about by people who grappled with a problem for many years and worked through sometimes even improbable alternatives. we find that in the evolution of science very much. what worries me is that ai will so facilitate the acquisition of
8:59 am
immediate knowledge and, therefore, create a temptation to rely on ai to do the conceptual final thinking, that the great qualities by which human beings developed quantum mechanics by experimenting will be lost, and that this may apply to the economic field. so to me the challenge is how to keep the human mind and the human personality self-reliant enough so that it can be at least an equal partner to artificial intelligence, and so that it will not simply
9:00 am
delegate its concerns to artificial intelligence. i don't know how to do that, but i would like leading people concerned with the subject to address it, so that the technologies don't run away with it and produce things that will destroy the nature of human thinking. that is my deepest concern in this field. >> so many provocative questions raised today. on that provocative note from dr. kissinger and dr. schmidt, i want to thank both of them.
9:01 am
>> and i want to express my thanks to them for being part of this discussion today: dr. henry kissinger and dr. eric schmidt, on ai and the future.