
tv   Cato Institute 2018 Surveillance Conference Part 2  CSPAN  January 3, 2019 8:56am-10:32am EST

as we at oti have explored, encryption protects economic security and the personal safety and freedom of journalists and individuals in vulnerable communities, including victims of domestic violence. unlike the u.s. congress, which takes months and months or more likely years before it passes anything, the australian parliament managed to wrap up its consideration of the bill in a mere four months. following a public comment period on the exposure draft, a slightly modified version of the bill was introduced in parliament and referred to the parliamentary joint committee on intelligence and security, or pjcis, which opened a new public comment period. my organization organized an international coalition of civil society organizations, tech companies, and trade associations, and we filed three rounds of public comments on the bill outlining our concerns, which i'll describe in a moment. the committee held a series of hearings, and then, just at the beginning of last week, the pjcis issued a report recommending passage of the bill with certain amendments incorporated. early in the morning just last thursday, december 6th, the parliament released an updated version of the bill, including 173 amendments that no one had ever seen before, but by the end of the day, the australian parliament had passed the bill into law.
so what does the australian law actually do? as one australian commenter put it, the combined stupidity and cowardice of the coalition and labor means any i.t. product made in australia will be automatically too risky to use for anyone concerned about cybersecurity. we're focusing here on schedule one of the australian law, which is the one that is designed to undermine the safeguards of encryption. other sections of the law create additional privacy threats, but we're focusing on schedule one, which relates to encryption. now, the law includes what appears to be an encouraging statement that purports to prohibit the government from demanding the creation of encryption back doors. i have up here section 317zg, which says that the government may not request or require communications providers to implement or build a systemic weakness or systemic vulnerability, and, also, that the government must not prevent a communications provider from rectifying a systemic weakness or systemic vulnerability. however, the law grants unprecedented new authorities to the australian government that undermine this promise. specifically, the law creates three new and powerful tools for the australian government: technical assistance requests, or tars; technical assistance notices, or tans; and technical capability notices, or tcns. the requests are supposed to be voluntary, whereas the notices are mandatory, and the difference between the tan and the tcn depends on which government official is authorized to issue the notice. all of these authorize the australian government to request or demand any listed act or thing. now, that's a long list in the
bill, and it includes things like removing one or more forms of electronic protection that are or were applied by, or on behalf of, the provider. it also includes modifying, or facilitating the modification of, any of the characteristics of a service provided by the designated communications provider. in short, these are powers to demand that tech companies weaken the security features of their products. for example, the australian government could now make the same request to apple that the fbi made in the 2015 san bernardino shooter case: that apple build a new operating system to circumvent iphone security features. as apple explained, building the requested software tool would have made that technique widely available, thereby threatening the cybersecurity of other users. as we know, in the lawsuit here in the u.s., the united states government argued that under the somewhat obscure all writs act, which dates back to 1789, it was permitted to make this demand of apple. but apple, supported by other tech companies and privacy advocates, argued this demand was unconstitutional. the justice department ultimately withdrew its demand before the court could resolve the legal question, because the fbi was able to pay an outside vendor to hack into the phone. in australia, they now have a specific authority to make these kinds of demands. another worrisome scenario is that australia may seek to use the tcn authority in the same way that the united kingdom is looking to use its powers. just last month, ian levy of the uk's gchq put out a proposal. under this proposal, tech companies would be asked or
required to add gchq as a silent participant in end-to-end encrypted chats, and the tech company would suppress the notification to the user. they argue that, quote, you don't even have to touch the encryption to add gchq as a ghost user inside the encrypted chat. so there are several other threats posed by the new australian law's approach to encryption. in our coalition comments, in addition to explaining the breadth of the new powers created by the bill, we also addressed three other key concerns. first, the law lacks any requirement for prior independent review or adequate oversight. many features of australia's new law, such as the authorization of technical capability notices, were modeled on the uk's investigatory powers act, passed in 2016. the uk's law also raises threats to digital security and human rights, but section 254 of the uk's act does require that judicial commissioners review and approve technical capability notices before they may be issued. although we still have questions about the adequacy and independence of this review under the uk law, australia's tcn authority poses even greater threats to cybersecurity and individual rights, because there is no provision for prior, let alone independent, review. in addition, australia has no bill of rights. so while there are procedures through which tech companies may challenge government requests and orders, these challenges will be more difficult: tech companies will not have the same legal arguments available to them, based on protecting individual rights, as they would in countries like the uk and the u.s. second, the law requires undue secrecy. although the law helpfully requires statistical transparency reporting by the government and permits statistical transparency reporting by tech companies, it includes strict nondisclosure requirements whenever the government issues a request or notice to a tech company. violation of these secrecy rules is a criminal offense, and there are no limits on the duration of the gag orders. third, the law's definition of covered designated communications providers is overbroad. it includes anyone who provides an electronic service that has one or more end users in australia. this means any tech company doing business in australia, or anyone providing electronic services in australia, is subject to government demands that they weaken the security features of their products and services. so this is bad for australia, but what does it mean for us here in the united states? well, australia's legislation appears to be part of a
coordinated effort by the five eyes alliance. for those who may not be familiar with the term, it's an intelligence alliance comprised of australia, canada, new zealand, the united kingdom, and the united states that dates back to world war ii. since 2013, these five nations have also formed a five country ministerial. just this past august, 2018, the five countries released a statement of principles on access to evidence and encryption, and that statement warns that if these governments continue to, quote, encounter impediments in their efforts to access encrypted communications, they may pursue legislative mandates for encryption back doors. the very same month that statement came out, australia released the exposure draft of its encryption bill. so now australia's law can provide the united states and other governments with a back door to an encryption back door. australia now has the authority to compel providers to create encryption back doors, and once providers are forced to build weaknesses into their products, other governments can exploit those weaknesses. i've already mentioned the example of apple versus fbi. now, if australia issued a technical capability notice to compel apple to build a new operating system to circumvent iphone security features, which is what the fbi demanded, and if apple built the system, it could no longer argue it lacked the capability. and if australia forced facebook to reengineer whatsapp's encrypted chats to be accessible in
response to legal demands, those chats would be vulnerable to other governments' demands. finally, there's, of course, a risk that the u.s. government could simply seek to expand its own direct authority by pointing to australia as the new model for, quote, responsible encryption legislation. thank you. [ applause ] >> thank you so much. the french philosopher michel foucault is known for his analysis of the tight link between surveillance and training, or discipline. his book, usually translated in english as discipline and punish, is in french -- [ speaking in a foreign language ] close monitoring is a key part of training. it also means we need to worry about whether we are training children for compliance with surveillance, as the technological capability to monitor children ever more closely becomes both a reality and widespread in use. i often wonder whether we are preparing children to accept as normal that everything they do is closely scrutinized.
to look at one aspect of that, i want to invite rachel to the podium. >> thank you so much. it's the perfect introduction; i'll be coming back to that point near the end of my presentation. i'm senior counsel with the liberty and national security program at the brennan center for justice, and i'm going to be talking today about social media surveillance of students. to start this off, i want to talk just for a moment about the prevalence, the sort of deep saturation, at this point, of kids' presence online. according to a pew internet study from last month, 97% of 13 to 17-year-olds in the u.s. are on at least one major online social media platform, 95% of american teens have access to a smartphone, and 45% said they're online almost constantly. so there is clearly a lot of content out there and a lot of time that teens and even younger kids are spending online. and with that social media presence comes social media monitoring. these tools are sold for a variety of purposes: preventing bullying, preventing school shootings, potential suicides, and other online threats. and maybe not surprisingly, they're also big business. spending by public schools nationwide on nine major social media monitoring companies, which you can see here, has some spikes, some mountains and valleys, but overall a pretty massive increase starting in 2010, spiking in 2015, again in '16
and '17, and the big spike in the summer of 2018, potentially driven by the shooting in parkland, florida. public school districts are spending more and more money on automated social media monitoring tools. this is similarly reflected by keyword searches, that is, searches for social media monitoring in contracts between public schools and private companies, which again show these spikes over the last several years, really significant increases, and then a major spike in 2018. so increasingly, a lot of public money is being spent on these services. now, based on these statistics, you might think that schools are getting more dangerous. but, in fact, the opposite is true: schools are actually getting safer. and while it's true that this country has a unique risk of school shootings among developed countries, and while, obviously, a single shooting or even a single serious bullying incident is one too many, the overall crime decline in this country holds true in schools as well. the odds that a k through 12 student will be shot and killed at a public school are about 1 in 614 million. by way of contrast, the odds of dying from choking are about 1 in 3,400. in 1995, 10% of students aged 12 through 18 reported being the victim of a crime at school in the previous six months. in the 2015-2016 school year, 3% of students did. in that 20-year period, it went from 10% to 3%. and in general, over the last two decades, less than 3% of youth homicides and less than 1% of youth suicides have occurred at school. now, of course, part of the hope with social media monitoring may be that it'll pick up risks off of school grounds as well. but by any measure, school is a pretty safe place to be.
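the gap between those two odds is easier to feel with quick arithmetic. a sketch: the 1-in-614-million and 1-in-3,400 figures are the ones cited above; the comparison itself is mine.

```python
# odds cited in the talk, expressed as probabilities
p_shot_at_school = 1 / 614_000_000  # k-12 student shot and killed at a public school
p_choking = 1 / 3_400               # dying from choking

# how many times more likely is the everyday risk than the feared one?
ratio = p_choking / p_shot_at_school
print(round(ratio))  # 180588
```

in other words, on these figures the mundane risk is roughly 180,000 times more likely than the one driving the spending.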
now, the one state in the country that has legislated social media monitoring is florida. i'm sure everyone here is familiar with the shooting in parkland, florida last february, when nikolas cruz shot several students. the legislation passed in response created a new state office, which is required to coordinate with the florida department of law enforcement to set up a centralized database to facilitate access to a pretty wide range of information, including social media data. the legislation also established a public safety commission, which recently recommended the development of protocols around social media monitoring. it doesn't look like that collection has begun quite yet, but it's likely to begin in the new year. as it turned out, nikolas cruz had posted online about his intentions before the shooting, and people had taken notice. he was reported to the fbi and local police at least three times for disturbing posts. one call to the fbi warned that he might become a school shooter, and a separate call flagged a youtube post in which the user had said he wanted to become a professional school shooter, although the poster wasn't identified as cruz until after the shooting. so while there certainly were warning signs on social media, it wasn't the case that the district was flying blind. people were seeing those warning signals and trying to act on them. what really failed those students wasn't a failure to see those posts: according to a review of the school district's actions that came out in august, it was more that the district itself had failed at nearly every turn to provide cruz with the educational and support services he needed. nevertheless, florida is embarking on a first of its kind
national experiment when it comes to social media data. now, there's kind of a big question here, which is: okay, but why not? right? if a single school shooting is one too many, if social media monitoring could catch one future nikolas cruz, or catch one future suicidal student, why not do it? if the stakes are that high, what's the harm? well, there are a lot of reasons to at least be very cautious about this kind of monitoring. the first is a real concern about the accuracy of social media monitoring tools, and this plays out in a couple of different ways. one way these tools can be inaccurate is through overreach: they're likely to pull in much more information than is actually going to be useful. by way of example, police in jacksonville, florida set up a social media monitoring tool to search for keywords thought to be related to public safety or that might indicate some risk of criminal activity. one of the words they set up was the word "bomb," thinking that if there were some kind of bomb threat, the tool would turn it up. as it turned out, no bomb threats were flagged online. instead, the tool was inundated with posts about things like "pizza that was the bomb" and photo bombs. so, a lot of stuff coming in of little use. the second issue is underreach, by which i mean that the kinds of risks social media monitoring tools would like to find often aren't going to appear online at all. i mentioned earlier that nikolas cruz posted online about his intentions. as it turns out, to some extent, he was the exception. the brennan center did an informal survey of major school shootings, unfortunately that's a category, since the sandy hook
elementary school shooting in 2012. there was only one other perpetrator, according to the public reporting, who put up social media postings that strongly indicated an interest in school violence. that was adam lanza. he posted in discussion forums about the columbine school shooting and operated tumblr accounts with the columbine shooters' names. it's hard to imagine that, now, these wouldn't be reported directly to authorities. in fact, we saw with nikolas cruz that that's what happened: concerned individual users reported it in. the online profiles of other shooters in major school shootings, which again usually get a lot of reporting after the fact, don't show anything that would flag them for an automated tool. for instance, the perpetrator of a 2014 shooting in oregon had a facebook page showing he liked first-person shooter and military-themed games like "call of duty" and various knife and gun pages. in retrospect, sure, these seem like warning signs that something was going on. but, in fact, the official facebook page for "call of duty" has nearly 24 million followers. sending up a red flag about every person who enjoys these pastimes would create a huge quantity of noise for very little signal. and automated social media monitoring tools have built-in shortcomings. here i'll flag a terrific report from the center for democracy and technology called "mixed messages."
it does a lot of research on this. as their research shows, automated monitoring tools generally work best when the posts are in english and when the tool is looking for something concrete; they can be easily fooled by lingo or slang. during the trial of dzhokhar tsarnaev, the fbi produced several quotes from his twitter account to try to show that he himself was an extremist, that he wasn't just following his brother's lead. for instance, he tweeted, and this is one of the things they brought up, a quote that said "i shall die young." it was actually a quote from a russian pop song, and he had linked to the song in the tweet; the agent hadn't bothered clicking on the link to see it was a song lyric. other quotes that the fbi relied on were from jay-z songs and south park episodes, among other things.
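the failure mode in these anecdotes, keyword matching with no sense of context, is easy to sketch. this is a toy illustration with a hypothetical watchlist and invented posts, not any vendor's actual system:

```python
WATCHLIST = {"bomb", "die"}

def flagged(post: str) -> bool:
    """flag any post containing a bare watchlist word, context ignored."""
    words = (w.strip(".,!?#\"'") for w in post.lower().split())
    return any(w in WATCHLIST for w in words)

posts = [
    "that pizza was the bomb!",       # slang
    "epic photo bomb at graduation",  # harmless compound phrase
    "i shall die young",              # song lyric, as in the tsarnaev trial
]
print([flagged(p) for p in posts])  # [True, True, True] -- every hit a false alarm
```

all three invented posts trip the matcher, and none is a threat; the context that exonerates each one (slang, a photo, a linked song) is exactly what the tool cannot see.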
social media is incredibly contextual. the second major concern is the risk of discrimination, and this comes in two forms. the first is that the keywords the tools are set to flag on will themselves be discriminatory. for instance, an aclu report found that when the boston police department set up a social media monitoring tool, the hashtags it was flagging included black lives matter, ferguson, muslim lives matter, and the arabic word for community. needless to say, these words aren't signs of a public safety threat. so these tools are only as good as the people who are using them, and there are a lot of ways to use them to further a discriminatory mindset. the second form is the risk of discriminatory impact. whatever keywords are flagged, there's going to be a huge amount of discretion in what is done with the results, including which students are brought in, who is punished by the school, and even who is subjected to criminal justice consequences. we already know that students of color at every level of schooling experience harsher discipline than white students, even for the same infractions, and even when they commit infractions at lower rates than white students. so there's a real concern that social media monitoring could contribute to the school-to-prison pipeline. i suspect folks here remember the muslim teen who brought a homemade clock to his dallas-area high school and was then arrested on the suspicion that it concealed a bomb. he was well known at his school for bringing in electronics, tinkering, fixing other people's electronics. he told his teachers and the principal repeatedly that it was, in
fact, a clock. it raises suspicions that the scrutiny he was put under, and his ultimate arrest, were essentially grounded in islamophobia. in another case, an alabama high school paid a former fbi agent to go through students' social media accounts on the basis of anonymous tips. the district ultimately expelled over a dozen students on the basis of what he found online. 86% of the students expelled were black, even though black students made up only 40% of the student body. now, not surprisingly, where people are mistakenly identified as posing a threat because of their social media posts, the consequences can be serious. one connecticut teen posted on snapchat a picture of a toy airsoft gun that resembled a real rifle. in his words, in terms of explaining why he put it up, he thought it was awesome and he knew his friends would think it was awesome. another student saw the post, was worried about it, and reported it to school officials. that does not necessarily strike me as a crazy thing to do, although, as zach, the student, noted, if officials had googled the name on the side of the gun, they would have seen it was a toy, even though it did bear a resemblance to a real one. but instead of discussing it with him, potentially with some lessons about responsible social media use and thinking before you post, he was not only suspended for the day but arrested for breach of peace. and because it's hard to reliably pinpoint individual social media posts that actually indicate some kind of live threat, monitoring companies have kind of a perverse incentive: they have an incentive to sweep up everything so they can assure their customers they'll spot that needle in the haystack. at the same time, they have very little reliable way of gauging their effectiveness.
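the needle-in-the-haystack pitch also runs into a well-known statistical problem, the base-rate fallacy: when genuine threats are vanishingly rare, even a fairly accurate tool produces almost nothing but false alarms. the numbers below (one real threat per million posts, a 1% false-positive rate) are illustrative assumptions of mine, not figures from the talk:

```python
posts_scanned = 1_000_000
real_threats = 1            # assumed base rate: one genuine threat in a million posts
sensitivity = 0.99          # chance a real threat is flagged
false_positive_rate = 0.01  # chance an innocuous post is flagged

flags_real = real_threats * sensitivity
flags_false = (posts_scanned - real_threats) * false_positive_rate

precision = flags_real / (flags_real + flags_false)
print(f"{flags_false:.0f} false alarms per true hit; precision = {precision:.4%}")
```

on these assumptions, roughly ten thousand innocuous posts get flagged for every real one, so nearly every alert a school receives is noise, and the vendor can still truthfully claim it "caught" the needle.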
a 2015 investigation by the christian science monitor revealed that none of the three major school social media monitoring companies they looked into had firm metrics for measuring effectiveness. at least one said, basically, we know that we've succeeded when we get a call from the school saying that something we sent them was interesting. so it's really a perfect storm for more and more monitoring. at the same time, parents and students often know very little about these tools. research shows that while social media monitoring companies may assume that students are consenting to be tracked by virtue of posting on public sites, students more often believe that companies are prohibited from sharing personal information with third parties. so there's a real lack of information about how these programs operate. finally, and this goes to julian's point at the beginning, it's worth thinking about what it means for students to be under constant surveillance online. as a practical matter, they may stop posting, or start posting less, or move to more and more private forums, which will blunt any effectiveness these tools would have had. maybe more concerning, it teaches students to expect surveillance, to anticipate an authority figure's opinion, and to react accordingly. now, some of this you could say is good digital hygiene. i think we know that when we post something publicly, we need to think about what it looks like, who might see it now, and who might see it in the future. but it's not clear that it's healthy for students who are learning about citizens' role in a democracy to know they're under that surveillance all the time and to be acting accordingly. so what does this mean? at the very least, before a school or school district rolls out a social media monitoring
program, it's incumbent on officials to weigh the costs and benefits and to involve parents and students in a frank discussion of what it means. and if they decide not to set forth on a monitoring program, they should remember that they are most likely not going dark: there are a lot of concerned people out there who will spot troubling posts and report them. thank you so much. >> i know someone, a science fiction writer, who is more optimistic about this. he says: this is great, we're training our children to develop habits of fairly sophisticated counterintelligence tradecraft just to be able to have a normal childhood, so the next generation will be sophisticated about evading surveillance. i suppose we'll find out. next we have a speaker who will focus on privacy in public, in a sense: the myriad ways that, just walking down an ordinary city street, we're being observed in ways we may not recognize, and also the ways existing infrastructure can be transformed in a fairly deep way into a platform for new methods of monitoring. the first of these is an examination of camera networks for facial recognition surveillance from jake laperruque of the project on government oversight. >> hi everybody.
thank you so much for having me here. i'm a senior counsel at the constitution project, where i focus on surveillance issues, and i'm excited to be talking about facial recognition, and specifically about how cameras and camera networks can turn facial recognition surveillance into dragnets. this is no longer a sci-fi technology of "minority report" that we will see in the distant future. it's happening now. the fbi conducts over 4,000 facial recognition searches every month on average. a quarter of all state and local police departments have the ability to conduct facial recognition scans, as well. customs and border protection has a biometric exit program that uses facial recognition for outgoing flights, and they're planning to spread this to airports in general, as well as seaports and land ports across the country. i.c.e. is looking to buy facial recognition technology, as well. so this is a live and real surveillance threat. facial recognition depends on three key factors to be a powerful force for surveillance. first, you need a database of photos that are identified with people; government databases already include about half of all american adults. second, you need powerful software that can scan across hundreds of millions of photos and scan faces rapidly; lots of companies are developing that technology, and the government is, as well. and third, and this is what i want to focus on, you need a network of cameras you can tap into and use to see people's faces everywhere, all the time. now, there are four areas with the potential to build out these camera networks: first, government surveillance cameras, cctv; second, police body cameras; third, privately owned security cameras; and, last, social media photo data
bases. so let's start, first, with government surveillance camera programs, cctv. about a decade ago, then-chicago mayor richard daley said he expected that one day we would have a police camera basically on every corner. i want you to keep that quote in mind as we talk more about cctv in american cities. but first, let's go to a place that truly has a cctv photo dragnet, where it seems big brother status has been achieved, and that's china. china has by far the most powerful network of government surveillance cameras in the world: an estimated 200 million government-run surveillance cameras across the country. the effects are profound; these networks are dense and powerful. for example, beijing maintains over 46,000 cctv cameras that blanket the city. state media and police boast that the network allows them to have 100% coverage of the city and see everything that is going on, all the time. this can have powerful impacts for facial recognition. recently, for example, a bbc reporter tested the system. he went to a city of 3.5 million people and gave his photo to the government to put into the system, and asked them to find him using their cameras and their systems. the automated facial recognition software tracked him down in a mere seven minutes. so that is china's surveillance camera network. cctv exists in america, too, to a strong degree. it's being instituted in large cities such as new york, chicago, washington, and los angeles. in new york, there is a cctv network hub called the domain awareness system. the way it works, you have cameras networked into a centralized hub that can be
subject to real-time viewing, analysis, and other tools. facial recognition could become one of those tools in the future. oakland considered a system like this; it would have hooked up cameras used by government across the city, involving everything from the port authority to police cars to cameras outside schools. smaller cities such as st. louis and new orleans also have mass cctv networks and centralized hubs they use to watch them. but the city with by far the largest cctv network in the united states is chicago. chicago is the closest to achieving big brother status in america. right now, chicago maintains a police surveillance network of over 30,000 total cameras in the city. this, in some ways, surpasses the level of surveillance dragnet you'll see in china. while 30,000 cameras in chicago is less than the total of 46,000 in beijing, if you look at area density for cameras, the average of 128 cameras per square mile in chicago is far higher than in the beijing dragnet that covers 100% of the population. now, this can have powerful effects for facial recognition, and it's starting to in america. we're seeing this first in orlando, which is currently running a pilot program with amazon's rekognition real-time facial recognition service. the way this system works is that you have cameras scanning throughout the city. they will try to scan faces, find people, identify them, and then flag any persons of interest. whatever "persons of interest" means, i'm not sure. so that is government cctv. next, i want to look at police body cameras. this is probably the area of greatest risk in terms of establishing video surveillance dragnets in the united states, and the simple reason is that body cameras are becoming popular in america and in american police
departments. axon, the largest body camera producer in the united states, has systems already in over half of america's largest cities. this isn't a huge surprise: the company offers its body cameras to police departments for free, as long as they then use its video storage system. studies from recent years indicate that 97% of the largest police departments in america either have body camera programs in place or in pilot and testing stages, or, if they don't have them yet, are planning to build them in the future. this is going to be a universal phenomenon: police wearing body cameras will be a common thing we see on our streets as cops walk by. why is this a big deal for the proliferation of government surveillance cameras? because cities have lots of police in them.
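as an aside, the area-density comparison from the cctv discussion a moment ago checks out with rough arithmetic. the camera counts are the speaker's; the land areas (roughly 234 square miles for chicago, roughly 6,300 for the beijing municipality) are my own assumptions:

```python
chicago_cameras, chicago_sq_mi = 30_000, 234
beijing_cameras, beijing_sq_mi = 46_000, 6_300

chicago_density = chicago_cameras / chicago_sq_mi
beijing_density = beijing_cameras / beijing_sq_mi
print(round(chicago_density), round(beijing_density))  # 128 7
```

so chicago's network, while smaller in absolute terms, is far denser per square mile than beijing's, which matches the figure quoted in the talk.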
when you look at big cities, the amount is much higher. plenty of cities as many as 40 officers for every 10,000 residents or more. d.c. is over 50. if you look at area density, you can also see that some cities are populated with police officers. for example, ten different cities have over 20 police officers per square mile, topping the list is new york city which has well over 100 police officers for every square mile. now in terms of facial recognition, we have actually seen a little bit of progress here. they recently backtracked on a long term plan to put facial recognition in the body cameras. they acknowledged this tech, in a lot of ways, is flawed. so they scrapped plans that might have happened as soon as this year to put facial recognition in the system. but not all vendors are taking that cautious approach. some are charging ahead. it's only a matter of time before companies are probably
9:41 am
satisfied it's good enough for their work and begin to institute it. after all, axon's ceo described the company's interest by saying that by putting facial recognition in body cameras, one day every cop in america would be robocop. this is worrying because while virtually all police departments are charging ahead with police body cameras, very few are setting rules and standards for facial recognition. according to a scorecard on body camera policies maintained by the leadership conference, basically no cities that operate body camera programs have effective rules on facial recognition. that is many, many cities that are not acting with appropriate standards. so that's police body cameras. next i want to talk about private surveillance cameras and their capacity to build government surveillance networks. now coopting private surveillance
9:42 am
cameras is similar to cctv. it's another way the government can potentially build out surveillance networks, but do so with very little work, without building the infrastructure, and at a fraction of the cost. we may not have the 200 million surveillance cameras that china has, but america has over 30 million privately owned security cameras throughout the country. given the potential to tap into these instead of building your own cameras, it's no surprise the government may want to do exactly that. by the way, some of those cameras are amazon's ring doorbells, a video doorbell system. last night, news broke that amazon had patented technology to build facial recognition into those doorbells, connect it to police networks, and notify them whenever anyone suspicious came up. another fun innovation from amazon. now police departments are not just thinking about this idea. they're proactively soliciting
9:43 am
owners of private security cameras, asking them to enter formal agreements whereby the cameras can be accessed and readily used by law enforcement in video surveillance networks. i mentioned new york before and the domain awareness system it has that allows real-time streaming of video cameras. of the 6,000 cameras connected to new york's network, two-thirds are privately owned cameras under agreements that allow the police department to access and use them. washington, d.c., and a lot of other cities offer incentives to try to get people to hook their surveillance cameras into police networks. here, for example, is the d.c. mayor saying please purchase security cameras and connect them to our networks, we'll pay you to do this. excellent use of emojis. so that is privately owned security cameras. in terms of effect, it's very similar to government cctv: a network of stable cameras that
9:44 am
can provide a video dragnet that can be coopted for facial recognition. it's a severe risk given that we don't have the option of stopping the government in its tracks before it builds the cameras. the cameras are already there; what we're worried about is law enforcement tapping into them. last, i want to talk about social media photos. this is a bit of a different area: we're not talking about cameras taking images but about images being stockpiled. nonetheless, social media photos are potentially the greatest risk in terms of a photo dragnet that could be used or coopted by government for facial recognition, because of the sheer size of the photo databases. we've seen facial recognition used this way to a limited degree. a few years ago, a firm got caught and admitted it had run social media photos through facial
9:45 am
recognition technology during protests in baltimore to find individuals with any outstanding warrants and directly arrest and remove them from the crowd. luckily, when it came out as a product of aclu research, companies responded properly: they blocked and shut down the firm's access to their services. it's important that social media companies continue to be vigilant on this front to limit api access and prevent the harvesting of photo databases. it's important companies think not only about data scanning and harvesting done openly on their platforms through api access, but also about scanning through other means. we've seen things like this in the recent past. for example, a couple of years ago yahoo! received and complied with a court order asking that it scan all e-mail content in its databases for specific bits of content the government was looking for. it's not hard to imagine the
9:46 am
government coming with a similar court order to someone that maintains photo databases and asking for a mass scan to find a particular faceprint. so consider google and facebook, companies that have talked about surveillance transparency and that maintain very large photo databases. google has over 200 million users storing photos in its cloud photo service, including 24 billion selfies. facebook has over 350 million photos uploaded every single day. it would be really great, as these companies continue to build out what are already fantastic surveillance transparency reports that are getting better all the time, to think about including a canary for facial recognition, so that if the government does come with this sort of broad order saying we want to start scanning all your photos for facial recognition purposes, they can kill the canary and we can start acting. with that, i want to conclude by talking about what actions we can take if we start to see these
9:47 am
activities and how we should respond. there's a lot of potential at the local level. before, i mentioned oakland had a proposed domain awareness system that would have connected all their cameras to a hub. this was a great success story: oakland activists got organized, got very mad, talked a lot to the city government about it, and got it shut down. that's the thing we can see in other cities if we take action. i want to give a shout-out to a great program going on now, the ccops campaign, community control over police surveillance. this is an effort to improve transparency and limit surveillance properly in cities across the country. i'm sure as that campaign goes on, it will continue to do a lot of great work to limit advanced surveillance tools like facial recognition being built into cameras. on the federal level, we have a lot of potential in terms of limiting and conditioning funds. we talked a little bit about government cctv. a lot of funds for local government cctv networks don't
9:48 am
come from the localities. they come from the federal government. doj often funds cctv through police grants. for example, orlando, which is now running a cctv real-time facial recognition network, originally received funds for cctv from the department of justice. it would be great if, in the future, when doj handed out funds for cctv video surveillance networks, it said you cannot use this for facial recognition, or set strict guidelines and limits on how it can be used. this is another opportunity where setting stricter rules, guidelines, and limits could be an effective way of stopping these video surveillance networks from being turned into mass facial recognition, location tracking, and scanning networks. and the department of justice issues grants in the tens of millions of dollars every year for police body cameras. again, we do not see virtually any department putting in good rules for facial recognition on
9:49 am
body cameras. it would be a vast improvement if, when doj was handing out grants for body cameras, it said you need to put in place effective rules, guidelines, and limits to protect privacy before we give you all this money. so those are some actions we should take, and i think it's important we take them now, because we're very quickly approaching the point where we're all going to be, on a daily basis, like the bbc reporter: tracked down through an automated computer system that is being monitored with a million little eyes. thank you very much. i'm looking forward to the rest of the conference. [applause] >> the classic feature of surveillance that makes it a mechanism of power is its
9:50 am
inequality. the prisoners in this ultimate surveillance prison knew they were under potential observation: they could be seen but could not see the viewer. when it comes to public networks of cameras monitoring us, maybe one of the most effective things we can do to encourage people to react to the changes happening around them is to make them aware of those changes. i was fascinated by a tool the electronic frontier foundation has developed to recognize the way surveillance in public is exploding around us. i want to invite dave moss. >> thank you for having me today. my name is dave moss with the electronic frontier foundation. we're based in san francisco and
9:51 am
have been around since 1990, and we exist to make sure that our rights and liberties continue to exist as our society's use of technology advances. i particularly work on eff's street level surveillance project, which aims to ensure there is transparency, regulation, and public awareness of the various technologies that law enforcement is deploying in our communities. a lot of times that work looks like filing public records requests. for example, with license plate readers, eff teamed up with another organization to file hundreds of public records requests around the country to find out how law enforcement agencies were sharing license plate reader data amongst themselves. or take drones: we filed a public records request for mission log reports to show how uc berkeley police used drones to surveil protesters in 2017. or we'll file a public records
9:52 am
request with the san francisco district attorney's office to get a spreadsheet with the geolocation of every surveillance camera in their database, similar to what jake was talking about. the problem is that too often our work looks like this: we are chucking public records at people, saying here you go, here are documents on documentcloud, or here's a white paper we wrote, or a 3,000-word blog post, or, even worse, it's me standing in front of you doing a powerpoint presentation, and if we're lucky i have a funny cartoon to go with it. i don't have one today; i had to use this one. really, our work should look like this: reaching the public within their communities. if i could, i would run a walking tour company where i could take people around and show them the various surveillance technology around them. but i'm a very busy person and don't know that doing tour groups of six or seven people is the most
9:53 am
effective way to get our message across. however, maybe this concept can transfer over to something like virtual reality. taking a step back, if we look at virtual reality and law enforcement technology, police are already working on virtual reality. there's a company out of georgia called motion reality that has a warehouse-sized space where police officers put on virtual reality helmets, are given realistic-feeling fake electronic firearms, and are wired head to toe; they go run scenarios that can be replayed back so they can see what they did right and what they did wrong. one of my favorite things about this: they are also covered in, i guess, electrodes, and if they're shot they get shocked and immobilized in that part of their body. there's a company that has taken one of these oculus go headsets and modified it to work as a replacement for field sobriety
9:54 am
tests, so the whole flashlight thing would happen within a vr visor. then there's a surveillance aspect: this is something called bounce imaging, a little ball covered with little cameras that a s.w.a.t. team officer might chuck into a hostage situation or whatever, and somebody could sit outside in virtual reality looking around before they go in, recording a 360-degree view of everything that's going on. what can we do on the other side with vr? i'm going to give you a quick background, a brief history of our organization and vr. this is one of our founders, both a lyricist for the grateful dead and an internet pioneer. after he had gone and visited some of the early vr companies, he came back amazed and wrote an essay describing it as a psychedelic experience. he thought a lot of things were a psychedelic experience back then, because i think he was on psychedelics quite a bit
9:55 am
of the time. welcome to virtual reality, he wrote; we've leapt through the looking glass. jump ahead 25 years, and not a lot happened in between, but in 2015 we finally saw vr move toward the mass commercial market. this was the oculus rift, the htc vive, the playstation vr; they came out in early 2016. for our organization there were two big questions. one, what are the digital rights implications of virtual reality for our society, and two, what is the potential of virtual reality as an advocacy and educational tool? we'll start with what i think is the privacy element. the intercept had a piece in 2016 hypothesizing that virtual reality might be the most nefarious kind of digital surveillance on the internet yet. i tend to agree with this. it voiced a lot of the concerns i was having and we
9:56 am
were talking about amongst ourselves but hadn't seen floated publicly yet. virtual reality tends to rely on our physical characteristics in order to function. on a very basic level, that is how your head is moving, the distance between your hand and your head, how long your arms are, whether you're left-handed or right-handed. but something as simple as how your head is moving in a virtual reality environment can be correlated to mental health conditions. more advanced vr technology is starting to involve devices that measure your breath or track your eyes or map out your facial expressions, and that's another world. one of the creepiest things is when companies, in order to gather sort of reactional biometrics, throw stimuli at you in a fairly quiet manner without saying why, so they can find something measurable in how you respond. we're not going to get too much into augmented reality, but that's going to present even
9:57 am
more problems because a lot of those devices are scanning the world around you in order to produce content. something interesting that came up as well: there was a research study by the extended mind that found that, as the current state of play, 90% of vr users are taking some sort of steps to protect their privacy, whether that is adjusting their facebook settings or using an ad blocker, and while three-quarters of users were okay with companies using their biometric data for product development, the overwhelming majority were very much opposed to that biometric information being sold, anonymized or not, to other entities. now, as far as vr as an advocacy tool, we're not the first ones to try this. planned parenthood has an experience called across the line that puts people in the position of a woman trying to seek reproductive health services at a clinic that has a whole lot of angry protesters
9:58 am
there. peta has a couple of experiences that they take around to college campuses and other locations where they challenge people to step inside a factory farming situation: what is it like to be a calf or a chicken at a factory farm. then there's a group out of brookline, massachusetts, that worked with the united nations environment assembly to do virtual reality visualizations of data on air pollution, and they ran that through a bunch of u.n. delegates in nairobi. that brings us to eff's spot the surveillance project. this is, at its base, a virtual reality experience that uses a very basic simulation to teach people about the various spying technologies that police may deploy in their communities. when we were starting to pursue this, we had some considerations in the early stages: we wanted it to be a meaningful advocacy experience, we wanted to not
9:59 am
collect biometric information, and, as an organization that supports open source and accessibility to technology, we wanted to make sure it worked on multiple platforms and not just the oculus or vive store. we wanted it to function on a modest budget, because we are a non-profit and not sony. when i say meaningful advocacy experience, i mean we didn't want to rely on the novelty factor of vr. you can take anything and put it in vr, and if it's somebody's first time using vr they will say this is amazing, regardless of what it is. we wanted to make sure ours was presenting research in a way only vr could allow. we didn't want people to just be watching a movie in vr; we wanted them to be doing something, interacting with the world and challenged by it. and even though they were experiencing it in a virtual world, we wanted people to carry the information they learned back to the real world. the concept is, once you put the headset on -- and we'll
10:00 am
have demos during the lunch break -- you're placed on a street scene in a san francisco neighborhood where there is a police encounter going on between a young citizen and two officers. you look around, and as you find something, you get a pop-up and a voiceover explaining what it is. it's not meant to be about how quickly you can go through and score points spotting surveillance technology; it's supposed to be an educational tool. we had four goals. one, could we, as a non-profit, do a virtual reality experience cheaply? if we could do it this first time, maybe we could do other things down the road. two, we wanted not just to educate people about the forms of surveillance but also, three, to help them figure out where those technologies are in their communities. finally, we had this thought that police encounters are stressful situations, protests are stressful situations, things move quickly, but it can be useful for people to take note of what surveillance technology they saw in those scenes.
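the look-around-and-find interaction described above can be sketched as a simple angle test between the user's gaze direction and the direction to each tagged device; this is generic vector math under assumed names, not eff's actual implementation:

```python
import math

def angle_between(v1, v2):
    """Angle in degrees between two 3-d direction vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(a * a for a in v2))
    # clamp to guard against floating-point drift outside acos's domain
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def gazed_object(gaze_dir, objects, fov_deg=5.0):
    """Return the name of the first tagged object (if any) lying within
    fov_deg of the user's gaze direction -- the trigger for the pop-up
    and voiceover. The 5-degree tolerance is a hypothetical choice."""
    for name, direction in objects:
        if angle_between(gaze_dir, direction) <= fov_deg:
            return name
    return None

scene = [("body camera", (0.0, 0.0, 1.0)), ("alpr", (1.0, 0.0, 0.0))]
print(gazed_object((0.01, 0.0, 1.0), scene))  # -> body camera
print(gazed_object((0.0, 1.0, 0.0), scene))   # -> None
```

because the test uses only head orientation, a sketch like this needs none of the biometric data collection discussed earlier, which matches the project's stated design goal.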
10:01 am
perhaps by putting people in a simulation, in a controlled environment, where they were able to gain practice looking for these technologies, it might carry over to these higher stress situations. so we decided not to go with computer generated environment and just go with a 360-degree photo. this is the ricov. it's got two concave lenses, one on each side, and it captures just beyond 180 degrees on each side and stitches them together. you're able to take a photo of everything. if i used it right now you would get all of this and this. the only thing you might not get is just the very base of the tripod underneath the camera. but this helped us get past what people referred to as the uncanny valley when it comes to video games. the more you try to create a realistic person or environment the more creepy it is to people. by using a photo with a real scene with a few things photo
10:02 am
shopped in, it bypassed that altogether. this is what the photo looks like that we took. it's obviously once in the virtual reality headset it wraps all the way around you. you can see there is a scenario there going on and you can kind of see us at the bottom here. i'm going to show you a little bit. this is what it looked like. you don't see this in the game. this is like a behind the scenes exclusive here. we were just kind of hiding under this longer version of this pole that went this high and hiding outside this police station hoping police would come out, they did, it being san francisco they didn't question two people with a weird piece of technology on the street, which is great because, you know, it was kind of the perfect shot for us. for those of you who will not have a chance to try it, this what is it looks like if you looked at the body camera, you would get a pop-up that explains what it is and it has a voiceover. we didn't -- it's such a visual
10:03 am
medium, but we didn't want it to be one where you have to be fully sighted to enjoy the experience or learn from it. if you are only able to see out of one eye, or have limited visibility but a certain amount of awareness of an environment, you can go in and still learn things through audio. we did our beta launch on november 5th at the internet archive during the international hackathon; that's brewster kahle testing it out, which is an honor. for the most part we are looking at having tables like this, because at this point not a lot of people have these devices in their homes. this one dropped down to $200 recently. it is something we can take to conferences, and our grassroots activists, when going to visit community groups, can bring it with them like they would bring one-pagers or brochures. we've run it through probably
10:04 am
500 people in the last month. if you think about it in terms of an activism organization, being able to get somebody to spend seven to nine minutes focused exclusively on surveillance is incredible. that's a lot of time. it was also available on the internet, and one of the things i found gratifying is that in portland, maine, about as far from san francisco as you can get while staying in the united states, there are maker and hacker spaces and media labs trying this out and having people demo it. we started to see social media respond to it as well. my favorite tweet is the one in the middle: vr tech is so f'ing rad i went spinning through my apartment pinning spy technology on my tracking device. lol sob. lol sob is what we were going for with this; i feel good about that. as far as next steps, we're still in beta mode and we will continue doing demos to gather user feedback and improve
10:05 am
the experience. one of the things about working with open-source technology is that sometimes there's a tweak in the language and everything breaks; we've had some bugs come up that we have to fix, and we need to get everything stable for an april 2019 launch. once we have that, we'll start sending it out into communities and maybe come up with an educational curriculum so teachers can use it. after that, what would the next version of this project be? we have a few ideas. one is an internet of things version: a home office where you look around and see the nest, the printer, all the ways you might be surveilled through the devices in your home. not everyone is into san francisco; maybe they want to know what it's like in iowa or in new york city, so maybe we build the same thing for various areas. or maybe we abandon vr altogether and go on to ar, with a way for people's phones to project things for them into the world. all of this depends on how the technology develops, what kind of interest we get in it, whether there is a
10:06 am
return on investment, and what kind of grants there are. it's a new world and, you know, we don't know where it's going to be in a year or in five years, but i can tell you i know where it's going to be at lunch time, and that is outside the lunch room, where if you want to try it out i'm happy to show you how the camera works and things like that. that's all i have. if you have a headset at home or want to play around with it, it's -- [ applause ] >> thanks, dave. i love this idea. i don't know if anyone is familiar with the tetris effect, the idea that when people play games, especially games that involve repetitively recognizing patterns, it spills over into their non-game lives: people who play a lot start seeing shapes and how they could fit them together. this shows up in the assassin's creed games as the bleeding effect, where someone who is sort
10:07 am
of reliving a simulation of his ancestors' lives takes on their sort of superhuman murder-stealth abilities. that seems both unrealistic and undesirable, but it might be desirable to imagine a more useful version: a population trained on games that teach them to spot surveillance technology in the world around them. turning back to the question of encryption: as we heard from sharon bradford franklin earlier, law enforcement has for years now been complaining that the spread of encryption is causing them to go dark, making it more difficult to do electronic surveillance of communications. there's a fascinating report from the center for strategic and international studies that points out that a lot of the difficulties law enforcement is having with intercepting electronic communications really don't have a whole lot to do with the
10:08 am
need for back doors, and that there's a lot of low-hanging fruit being left on the table that we ought to examine before we talk about legislating breaches in the tools we rely on to secure us. i want to invite jen daskal to discuss a report which i believe you will find on the table outside. >> great. thank you, and thanks to cato for putting on this excellent conference. as julian said, the focus of my talk today is the range of challenges that law enforcement faces in accessing digital evidence, separate and apart from the encryption-related challenges. this talk stems from a report that i worked on with a
10:09 am
co-author, will carter, at what many of us know as csis. the debates about encryption will continue, but it was, and emphatically is, our view after working on this report that encryption and the debates around it have taken up so much of the limelight, while there is a range of other challenges that law enforcement faces that need to be dealt with, that can be dealt with relatively easily, and that can be dealt with now. these challenges will continue no matter what happens with respect to encryption; even if, in fact, there were ever a clear decryption mandate, there would still be these other ongoing challenges to deal with. as our title, low-hanging fruit, indicates, these are problems that we think can be relatively easily solved. not completely,
10:10 am
nothing in this space ever leads to a complete solution, and we make a mistake if we assume that we are seeking a complete solution or trying to eliminate totally the friction in the process; some of that friction is, in fact, healthy. but some of the friction is unnecessary and actually collectively harmful to both security and privacy, and minimizing that friction is not only a laudable goal but one that is eminently achievable. to that end, i will just note that the report we worked on was endorsed by a number of individuals, groups, and entities: former cia director john brennan, former fbi general counsel ken wainstein, two former deputy attorneys general, former boston police commissioner ed davis, and former assistant attorney general for national security david kris. it has also been praised by a number of different groups and providers, and several providers
10:11 am
have already introduced a number of reforms consistent with what we called for in this report. so now that i've given you the hard sell, i'm going to spend the remainder of my time on the substance: a little bit about the methodology we used in doing this report, a little bit about our findings, and our ultimate recommendations. this report stems from about a year's worth of research, including a series of qualitative interviews with state, local and federal law enforcement officials, prosecutors, representatives from a range of different tech companies, and members of the civil society community. it also involved a quantitative survey of state, local and federal law enforcement officials. the survey results are notable; hopefully you can all read at least a little bit of this. according to the survey results, respondents reported difficulties accessing, analyzing and utilizing digital evidence in over a third of their cases. we believe that's a problem
10:12 am
that's only going to continue to grow as digital information becomes more and more ubiquitous and digital evidence is needed in just about every criminal investigation. this chart shows the responses to the question: what is the biggest challenge your department encounters in using digital evidence? accessing data from service providers was ranked as the key challenge amongst our respondents, separate and apart from questions about encryption. identifying which service provider has the data was reported as the number one challenge; 30% of our respondents ranked it as their biggest problem. obtaining the data once the provider was identified was reported as the number two challenge; 25% of our respondents ranked it as their biggest challenge. accessing data from a device was ranked by 19% as the biggest
10:13 am
challenge they faced, and then analyzing data from devices and analyzing data disclosed by providers, which are two separate categories, combined were about 21%, meaning that was their biggest problem. this is important because these are problems that can be fixed, or at least largely reduced, without huge changes in the system, but with more resources and more dedicated, systematic thought to addressing them. to the extent that law enforcement doesn't know where to go to get data of interest, that is a problem that can be solved with better information flows and better training. to the extent that law enforcement faces challenges in obtaining data, that is a bigger challenge, and we heard two very different stories from the law enforcement officials we talked to and the provider community. the law enforcement officials talked about what they perceived
10:14 am
as long delays in getting information back from service providers, what they perceived as service providers dragging their feet, service providers having insufficient resources to respond to their needs, and requests being slow-walked or turned down in what they perceived to be invalid circumstances. the providers, for their part, complained about overbroad requests, about law enforcement asking for things that weren't available, and about delays being the fault of law enforcement as it internally debated whether or not to get nondisclosure orders that would prohibit the provider from telling the customer or subscriber that the customer's data had been obtained, with providers holding off, at law enforcement's request, on turning over the data until they learned whether or not they had permission to tell the customer or the subscriber.
10:15 am
now, the data interestingly supports both sides of the story. this chart shows the requests that u.s. law enforcement issued to six key companies, facebook, microsoft, twitter, google, yahoo! and apple, over time. this is based on the companies' own transparency reporting; there is no other good source of this data. not surprisingly, you see from this chart a pretty dramatic increase in requests over a pretty short period of time. these show requests in six-month intervals, so in the six-month period ending in december of 2013, there were about 400,000 requests to these six u.s.-based providers. by december 2017 that had not quite doubled, but at least increased by a significant amount, to about 650,000, almost 700,000 requests in the prior six-month period.
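a constant grant rate over a growing request volume still means more absolute denials; with round numbers from the chart and an assumed, roughly constant 80% grant rate, the arithmetic looks like this:

```python
def denied(requests, grant_rate):
    """Requests not complied with, given a fractional grant rate."""
    return round(requests * (1 - grant_rate))

# approximate six-month totals for the six providers, per transparency reports
late_2013 = 400_000
late_2017 = 700_000
grant_rate = 0.80  # roughly constant over the period

print(denied(late_2013, grant_rate))  # -> 80000 denials
print(denied(late_2017, grant_rate))  # -> 140000 denials
```

so even with identical provider behavior, law enforcement experiences nearly twice as many turn-downs, which is why the two sides can both be telling the truth.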
10:16 am
now, what's interesting about this chart is that the grant rates have hovered more or less steady at about 80%. they've been consistent over time in terms of the percentage of requests or demands that providers complied with, but that also means the absolute number of requests being turned down, or disclosure demands not being complied with, is higher, given the bigger volume of requests. so to some extent law enforcement is frustrated because they're sensing this bigger number of request denials, whereas providers are saying we're pretty consistent in how we've been treating this over time. two caveats: the chart only shows where requests were made, not where they were not made because law enforcement didn't know where to go or was stymied in making the request, and the grant rates say nothing about the legitimacy of the requests or the grounds
10:17 am
for rejecting the request. there is, and should be, some ongoing disagreement about the appropriate scope of a request. this is an area where some friction is not only healthy, it's actually productive, and it's going to persist because there are different views about the appropriate scope of these requests. there are also a number of areas, with respect to grant rates and law enforcement issuance of requests to providers, where there is unnecessary friction, and some of the reduction in that friction can bolster privacy and security at the same time. some of the things that can be helpful in this regard are better, up-to-date law enforcement guides provided by the providers, resourcing of law enforcement response teams by the providers, better training and dissemination of that training to state and local law enforcement officers, and better training of the judges that review and approve the range of
10:18 am
requests subject to court order or warrants. these have obvious security benefits in the sense that they provide law enforcement a more streamlined ability to access data of interest, but they also have privacy benefits to the extent that they lead to better tailored, more narrowly drawn, more privacy-protective requests. to the extent that law enforcement cannot interpret data that's disclosed, this is a problem that stems in part from encryption, but what we heard over and over again was that it also stems from the absence of technical tools to decipher non-encrypted data that was disclosed. so this is a problem that results from, one, an absence of tools, and two, a distribution problem. sometimes some of the bigger law enforcement entities would have
10:19 am
access to the appropriate tools, but they were not disseminated to the 18,000 state and local law enforcement entities that exist around the country. so despite what appears to us to be a pretty clear need and a pretty easy-to-identify solution with respect to resources, training and dissemination of tools, the sole federal entity with an explicit mission to better facilitate cooperation between law enforcement and providers, the fbi's national domestic communications assistance center, has a budget of just $11.4 million this fiscal year, and that is spread out among several different programs designed to distribute knowledge about service providers' policies and products, develop and share technical tools, train law enforcement, and maintain a 24/7 hotline center, among many other initiatives. that is a drop in the bucket given the need that's out there. one of the key, most highly
10:20 am
regarded training centers, the national computer forensic institute, run by the secret service, fights for appropriations. this year it got $1.9 million, enough to train 1,200 students; fully funded, it could train over 3,000, but that is just a drop in the bucket when you consider that there are 18,000 federal, state and local entities across the country, and that's just the number of entities, not the number of individuals working at those entities. there are a range of state and local training centers and resources, and some other federal resources, that have risen to fill some of these gaps, but as you can see they are not distributed evenly geographically: a much higher concentration on the east coast, and big swaths in the middle where there's not much in terms of resources and training centers. there's no central entity for determining what's out there,
10:21 am
what works, what doesn't, and how to best allocate these resources. this gets me to our recommendation, which is the creation of a national digital evidence office, authorized and resourced by congress, that would sit in doj and do the kind of work that's needed both to assess what's out there and to ensure a more efficient and reasoned distribution of resources: develop a national digital evidence policy; coordinate the ongoing efforts that are out there, including grant making, where there are a lot of different grant-making bodies, not well coordinated currently; identify and rectify some of the gaps; establish and promote a consistent set of standards for securing and minimizing the data that is collected; develop authentication systems to ensure that the person who is asking
10:22 am
for the data is, in fact, entitled to receive that data; coordinate with some of the interesting international efforts that are ongoing; and report to congress and promote transparency about what is, in fact, going on. we've also called on congress to authorize the national domestic communications assistance center, which sits within the fbi and does not have an independent authorization at the moment, and to adequately resource it to do its job. it would serve within the broader digital evidence office, so you have the synergy between the technologists, who are aware of the technology and of what's going on in the field and the challenges in the field, and some of the policy folks, and, again, allow it to do what it already is trying to do on a very slim budget, which is conduct and disseminate trainings, gather and disseminate information about service providers, develop and disseminate the technical
10:23 am
tools, and provide a hotline system. then we've also included a series of recommendations to providers: that providers step up some of their training efforts, with a centralized body where they can go to facilitate that. one of the things we heard from providers is, we do do trainings, but there are 18,000 federal, state and local law enforcement entities across the country, with rapid turnover, so it's like a cat-and-mouse game. having a centralized place to disseminate the training, leading to more tailored requests, is helpful for the law enforcement folks and the provider folks. provide training, maintain on-line portals to facilitate the request process and to help with authentication to some extent, provide explanations for rejections so there can be a dialog, ensure appropriate staffing to meet the needs,
10:24 am
and provide rapid responses; what counts as an adequate response is going to change based on what is being requested, so we don't include specific time limits. also, maintain the transparency reporting that the big providers already are doing with respect to the law enforcement requests they get, but break that down even more in terms of the categories of requests, and over time a range of other, smaller categories as well. it's important to think as well about some of the bigger providers working with, and helping to develop best practices for, the range of smaller providers that are increasingly coming into the market and that are going to have to deal with this bucket of issues as well. i'll end by saying that the challenges are only going to grow over time. we think this is low-hanging fruit, hence the title. these are structures and resources that need to be put in
10:25 am
place now, because the needs are only going to expand as we move forward, and in our view this has benefits both for security and privacy and allows us to do something as the debates about encryption continue to rage. thanks. [ applause ] >> thank you so much for reminding me of a story from somebody who worked for a tech company, who recounted getting a confused e-mail from a law enforcement officer saying, these files you sent me that we requested are encrypted and we need your help to decrypt them, and the provider had to reply, it's just a spreadsheet, you need to open it in excel. a pattern we see is that very often there are automatic calls for greater authority to solve problems that are often more
10:26 am
about institutional ability to change structures than about an actual need for new power. i want to thank our speakers, and i want to invite you to join us upstairs for lunch. i hope you join us for the afternoon session as well, and in particular that you will stick around, or, if you have to leave, return at the end of the day, when we'll be leading a group over to the smithsonian american art museum for a tour of the [ inaudible ]. join me in thanking our speakers one last time. [ applause ]