
The plutonium of Artificial Intelligence could be coming to a city near you

Facial recognition technology is a frightening threat to our civil liberties -- and government and corporations are starting to use it without adequate controls.

Of all the tools of mass surveillance deployed by law enforcement, private industry and the federal government, Facial Recognition Technology poses by far the greatest threat to civil liberties, racial justice and our simple right to be left alone.

It is a technology that, in the hands of federal, state and local law enforcement, threatens dissent, the Constitution, the citizenry, and the very concept of democracy. In the hands of corporate entities, it will be instrumental in eviscerating our ability to maintain any personal privacy.

This is not only an issue of governmental surveillance, failed identifications and improper arrests frequently targeting African Americans and other people of color. Think also of insurance companies denying coverage based on travel patterns identified using FRT; stalkers armed with access to personal data gleaned from FRT; and scammers armed with personal financial information, addresses and other information accessed through FRT identifications linked to that data.

Facial Recognition Technology does not promote safety or security. It is instead a massive threat to both. And it looks like some people, politicians, and institutions are finally getting the message:

*On June 9, the BBC reported that IBM had sent a carefully worded statement to Congress saying it “will not condone” the use of facial recognition technology for mass surveillance and racial profiling, while leaving the door open for a “national dialogue” about how FRT could be used by law enforcement.

*Privacy activists forced a stronger stand on June 3, when California Assembly member Ed Chau (D-Arcadia) saw AB 2261, which would have allowed use of FRT in many circumstances, go down in flames in the Assembly Appropriations Committee.

*On May 29 the ACLU took aim at Clearview AI, an FRT company that scraped billions of images from the web, and sued the company for violating Illinois’ privacy law, the Biometric Information Privacy Act.

*Meanwhile, in 2019 the city councils of San Francisco, Berkeley, and Oakland flat-out banned the use of FRT by their police departments.

Facial Recognition Technology is the Plutonium of artificial intelligence, explained Luke Stark, a researcher and technology analyst at Microsoft Research Montreal: “Plutonium has only highly specialized and tightly controlled uses, and poses such a high risk of toxicity if allowed to proliferate that it is controlled by international regimes, and not produced at all if possible.” He wrote: “Plutonium, in other words, is an apt material metaphor for digital facial recognition technologies: Something to be recognized as anathema to the health of human society, and heavily restricted as a result.”

That may sound overly apocalyptic, but the danger is very real. Facial Recognition Technology poses a potential menace to American democracy in four areas: its inaccuracy, its inherent racial bias, its role in manufacturing a surveillance state, and its unconstitutionality.

There are serious problems with FRT’s ability to accurately identify faces. Despite the hosannas of some coders and FRT apologists, the technology is—at this point—not particularly good at its job. Facial Recognition’s functionality is based on algorithms that rapidly compare the facial proportions of individuals to one another to determine a positive ID. The problem begins with how FRT “learns” to make that identification and what “trains” that algorithm.
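To make that matching step concrete, here is a minimal, hedged sketch using the open-source face_recognition library, not any of the proprietary systems sold to law enforcement; the image file names are hypothetical. Each face is reduced to a numeric vector, and a “match” is simply any stored face whose vector falls within an arbitrary distance threshold of the probe image.

import face_recognition

# Gallery: images already sitting in a database (mugshots, DMV photos, scraped images).
# These file names are placeholders; [0] assumes one face was detected in each image.
gallery_paths = ["person_a.jpg", "person_b.jpg", "person_c.jpg"]
gallery_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for path in gallery_paths
]

# Probe: the surveillance still an investigator wants to identify.
probe_image = face_recognition.load_image_file("surveillance_still.jpg")
probe_encoding = face_recognition.face_encodings(probe_image)[0]

# Any gallery face closer than the tolerance counts as a "match."
distances = face_recognition.face_distance(gallery_encodings, probe_encoding)
for path, distance in zip(gallery_paths, distances):
    print(path, round(float(distance), 3), "match" if distance < 0.6 else "no match")

The tolerance is just a knob: loosen it and false matches multiply; tighten it and genuine matches are missed.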

To become operational, FRT programs are trained by scanning and evaluating tens of thousands of face images before they are set loose to run independently. An overwhelming number of western FRT programs have been created by male Caucasian coders. The faces used to train FRT algorithms—the training sets—have been similarly limited largely to Caucasian males. The programs appear not to have been developed with other ethnicities in mind—nor were they developed with women in mind.

In addition to the problem of fundamentally biased algorithm training data sets, FRT programs often require good lighting and full or near full-face images. But many images used in criminal and other investigations are simply not up to that level of quality, particularly when the sources are standard surveillance cameras. To be useful for FRT identification the images have to be manipulated, which itself is a problem, because making the image suitable for FRT can introduce error by substantially altering the original image.

FRT’s inability to provide accurate results prevented at least one law enforcement agency from adopting FRT. In 2016, the San Francisco Police Department decided against purchasing such software because none of the services tendered met required accuracy benchmarks.  (This decision preceded San Francisco’s ban on FRT.) Unfortunately, other law enforcement agencies lack San Francisco’s vigilance about purchasing accurate technologies.

Then there is the issue of deliberate misuse and incompetence. Police officers around the country have been disciplined for conducting improper searches of DMV databases: making searches on ex-girlfriends, romantic rivals, and legal opponents, or running searches as favors for family and friends.  Just think of the mischief rogue officers could engage in with FRT.

There are other kinds of misuse. If poor-quality images are used, the resulting identifications will be poor: garbage in, garbage out. The NYPD responded to this problem with a ‘creative’ work-around. When images of suspects did not have sufficient detail for identification, the NYPD came up with what it thought was an ingenious solution that undercut the technology yet still justified arrests. In images where the eyes were obscured—by sunglasses, poor lighting, etc.—the NYPD simply substituted someone else’s eyes! Given that eye measurements are a critical element of FRT, such antics undermine accurate identification.

In other cases, the police used Photoshop-like programs designed to compensate for partial images where the face is turned away, in shadow, or otherwise obscured. They used programs to digitally fill in missing features. The problem is obvious. If FRT tried to identify Vincent Van Gogh without knowing he was missing an ear, that would be a slightly problematic identification.

A headline ripped from the Boston Globe in April of 2019 read: “Brown University student mistakenly identified as Sri Lanka bombing suspect.” A Sri Lankan security official had used an FRT program to “identify” the bomber and then name her. The student’s identity and picture were promptly all over the Internet. She was soon subject to death threats from around the world even though she was thousands of miles away from Sri Lanka at the time of the bombing.

Facial Recognition Technology has a serious racial discrimination problem. The technology has racial discrimination baked into its source code. FRT remains substantially less effective at identifying African Americans, Latinos and anyone with darker than Caucasian skin tones. Error rates rise dramatically when FRT attempts to identify women of color—particularly African American women.
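How is that disparity measured? Audits such as NIST’s vendor tests and the Gender Shades study run a matcher over a labeled test set and report error rates separately for each demographic group. The sketch below shows the basic bookkeeping; the function and field names are illustrative, not any auditor’s actual code.

from collections import defaultdict

def error_rates_by_group(results):
    """results: iterable of (group, predicted_match, truly_same_person) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in results:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    # One overall accuracy figure can look respectable while a single group's
    # error rate is many times higher than everyone else's.
    return {group: errors[group] / totals[group] for group in totals}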

One African American coder got so frustrated with FRT consistently misidentifying her that she created a white mask of her face. Then and only then could it recognize her. The problem, yet again, is that the data sets used to train FRT simply don’t have enough people of color in them to train the AI program accurately. Google came up with a “clever” and massively offensive “solution” to this problem. The company simply photographed homeless African Americans, paying them a small sum without explaining why the images were needed. But at least Google recognized that personal images have inherent value. That’s better than what the now notorious FRT company Clearview AI did. Clearview simply scraped—stole—three billion images from the internet without permission, and as a result now faces numerous lawsuits.

Amazon’s Rekognition is a particularly egregious facial recognition program. It is rife with misidentification of people of color. When the ACLU ran Amazon’s Rekognition program against both Congress and the California Legislature, it misidentified roughly a third of the African American and Asian legislators in both houses as felons. Amazon later claimed the ACLU used the program incorrectly. The ACLU shot back that Amazon does not enforce the settings it recommends for police use, so law enforcement is just as likely to misuse Rekognition as the ACLU allegedly did.
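That dispute turns on a single parameter. Below is a hedged sketch of the relevant call in Amazon’s boto3 SDK; the face collection and probe photo are hypothetical, but the threshold argument is real, and nothing in the API forces an agency to raise it from the default of 80 to the far stricter 99 Amazon says law enforcement should use.

import boto3

rekognition = boto3.client("rekognition")  # assumes AWS credentials are configured

with open("probe_photo.jpg", "rb") as f:   # hypothetical probe image
    probe_bytes = f.read()

response = rekognition.search_faces_by_image(
    CollectionId="enrolled_faces",         # hypothetical face collection
    Image={"Bytes": probe_bytes},
    MaxFaces=5,
    FaceMatchThreshold=80,                 # the default; lower settings return more "matches," including false ones
)

for match in response["FaceMatches"]:
    print(match["Face"]["FaceId"], match["Similarity"])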

This indicates, at the very least, an FRT race problem.

Any new law enforcement tool or technology introduced in the United States is inevitably tried, tested, and implemented upon communities of color. This is just how the criminal injustice system in the United States operates.

When Stingray, a phone tracker and faux cellphone tower technology that vacuums up cell phone metadata, was first developed, it was designed for military and intelligence use. But American cops were soon using Stingray to target small-time drug dealers in African American communities across the nation.

Given FRT’s roots in a homogeneous tech culture, its lack of sufficient data to train the technology correctly, the half-baked and offensive solutions offered for its race-based failings, and the inevitability that it will be used to target people of color, it is little wonder FRT has a race problem. About the only good thing that can be said about FRT and race is that the automation of racism it offers makes obvious and explicit the racial assumptions and prejudices implicit in its development.

Facial Recognition Technology enables the creation of a total surveillance state, COVID-19 or not.

FRT is the cornerstone of a surveillance state because it can and will seamlessly integrate the real world (one’s face, facial images, mugshots) with the data world (financial, criminal, social), making one’s entire recorded history an easily accessible and monitorable entity. What could possibly go wrong?

Or put another way, FRT enables a totalitarian nightmare.

Databases now spit out reams of personal information about anyone, almost instantaneously, with a mere keystroke. But that information is in different formats and separate from the visual identification of an individual, which still requires human intervention. Thus, there is at least some friction, or brake, on the speed with which information can be accessed and integrated.

The introduction of FRT changes all that. The police officer, the FBI agent, the ICE operative, and the corporate entity will all have all of your information, simply by taking your photograph.

When your face is your ID—in the context of FRT—all your other data is accessible to those with access to your image, be they corporate or government entities. Your face is a permanent record, unlike other markers, and is very difficult to change unless you favor plastic surgery.
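To make that concrete, here is a hedged sketch of the integration step, assuming the face match has already produced a person identifier. Every table and column name below is hypothetical; the point is how little engineering separates “your face is your ID” from “your face unlocks every record about you.”

import sqlite3

def profile_from_face_match(db: sqlite3.Connection, person_id: str) -> dict:
    """Collect every linked record once a face match has produced a person ID."""
    profile = {}
    cur = db.cursor()
    # Hypothetical record systems, all keyed to the same identifier.
    for table in ("dmv_records", "arrest_records", "financial_records", "social_media_accounts"):
        cur.execute(f"SELECT * FROM {table} WHERE person_id = ?", (person_id,))
        profile[table] = cur.fetchall()
    return profile

# person_id would come straight out of the matching step: the gallery entry
# whose encoding sits closest to the probe image.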

It is not just a matter of an individual’s identification. Think of faces as data points. A cop on the street can identify, say, 10 to 100 faces; the officer’s personal data set is their memory. Two cops together can identify perhaps 20 to 200 individuals. This is useful: that pair of officers could spot a few patterns unique to the individuals they are interested in, and know where to find them. This is standard police work and one kind of patterning.

Now imagine thousands of cops; rather, imagine a cop for each person in the USA. That is FRT. By being able to monitor millions of data points, i.e. faces, this becomes an entirely different kind of dataset. It’s “big data” that makes radically different kinds of patterning and modeling possible. Instead of following what a few, or even a few thousand, people have done, FRT can create social models of patterns of friendship, acquaintanceship, purchases, and other factors from millions of people. With this level of data, FRT can allegedly help predict patterns of behavior, and the government could operate accordingly.

Do we really want the government, or for that matter corporations—i.e. surveillance capitalism—to have this level of power?

In the past this sort of government surveillance would have required massive person power. With the advent of big data powered by invasive surveillance this kind of spying will cost pennies on the dollar.

In United States v. Jones, the Supreme Court ruled the government had illegally used a GPS tracking device affixed to a car. Supreme Court Justice Sonia Sotomayor perceptively warned that cheap, pervasive tracking, “by making available at a relatively low cost such a substantial quantum of intimate information about any person whom the Government, in its unfettered discretion, chooses to track,” may “alter the relationship between citizen and government in a way that is inimical to democratic society.”

When FRT was first considered for law enforcement, advocates suggested it would be used only in the most egregious criminal cases. Now Amazon and other companies are trying to sell FRT to investigate and prevent shoplifting. This is called mission creep.

Clearview AI, the corporation that scraped billions of images from the web, has secretively hawked its technology to law enforcement agencies and wealthy individuals, while imposing no limits, controls, or oversight on how those agencies and individuals use it. Clearview AI just told potential clients to “run wild” with the program.

And boy did they: wealthy potential clients and their offspring searched friends and potential romantic partners with nary a qualm. That’s not all. Other organizations contacted by Clearview AI as potential clients included numerous law enforcement agencies, I.C.E., Macy’s, the NBA, the Dept. of Justice and, yes, even the White House. All tried out the Clearview app.

There is a model totalitarian state that makes extensive use of the technology. FRT is everywhere in China, monitoring the citizenry. The Uighurs of China’s Xinjiang region live an Orwellian nightmare of ubiquitous FRT cameras, oppressive surveillance, concentration camps, punishment and government oppression, all enabled by a permanent security state. Uighurs are punished for practicing Islam, family members are jailed for the actions of relatives, and thousands are in forced labor camps for the crime of their ethnicity.

Stay tuned for the next steps from the Trump administration in the COVID-19 pandemic.

True, an American surveillance state would be different from China’s. US law enforcement would likely twin with a surveillance capitalism that monetizes our every movement, activity, and thought. The government will incarcerate you, while corporations will sell or deny you access to products you need, based on your real-world and online history. Both visions make the adoption of FRT by governments and corporations an existential threat to liberty and privacy.

We are in the midst of a COVID pandemic. Among disease control measures being considered is the use of FRT to track individuals viewed as potential transmission vectors. This has tech companies salivating.

The huge, glaring problem is that successful public health measures for disease suppression require openness and trust from the citizenry, not the imposition of Big Brother surveillance techniques. Put simply, if people feel they are being tracked and surveilled electronically by the government, they are much less likely to trust or want to talk to the public health officials who are doing contact tracing. It is worth noting that some 25 percent of people with the virus are infectious while showing no symptoms that could be detected by any app.

Clearview AI has admitted being in contact with “federal and state” authorities to develop a so-called digital contact tracing app. But, per the company’s usual behavior, it is refusing to say which agencies it is talking to. As Senator Ed Markey (D-Mass.) warned, “We can’t let the need for COVID contact tracing be used as cover by companies like Clearview to build shadowy surveillance networks.”

Under the guise of a health emergency, privacy and civil society could be gravely undermined by our more-than-nascent surveillance state. This has happened before. Look at the legislation and government policies developed as a result of 9/11. That birthed the PATRIOT Act, which among other things allowed all sorts of government surveillance, including tracking which library books you checked out. The PATRIOT Act is still with us today.

Facial Recognition Technology is unconstitutional. FRT fundamentally violates bedrock Constitutional protections, specifically the First, Fourth and Fifth Amendments.

In terms of the First Amendment, the concern should be blindingly obvious. FRT has the potential to substantially chill the exercise of one’s freedom to associate and express one’s opinions.

A simple thought experiment should suffice: imagine going to a demonstration or picket line, but before you get there a police officer stands in front of you, takes your photograph, gets your address, social security number, profession and health status, and then interrogates you about why you are there and whether you have any outstanding warrants.  There are a lot of people who would be significantly less willing to exercise their free speech rights in this scenario. This is a likely impact of widespread public FRT use.

A University of Baltimore law review noted in 2017 that “The Supreme Court has previously noted that there exists a ‘vital relationship between freedom to associate and privacy in one’s associations,’” citing a 1958 decision, NAACP v. Alabama. Even the Department of Homeland Security, the FBI and other government agencies noted in a 2011 report, “[S]urveillance has the potential to make people feel extremely uncomfortable, cause people to alter their behavior, and lead to self-censorship and inhibition.”

In Florida, where law enforcement was an early and enthusiastic adopter of FRT, people arrested find themselves on not one but two FRT databases—one at the FBI and the other at the Pinellas County Sheriff’s office. The information acquired includes records of those arrested/charged under misdemeanor failure to disperse or trespassing penal codes. Both charges are frequently filed in arrests made at First Amendment protected political gatherings. Thus, the simple fact of knowing FRT is present and that arrest is possible can chill a desire to participate in constitutionally protected political activity.

FRT violates the Fourth Amendment protection against illegal search and seizure. Put simply, FRT is a warrantless search of one’s person. It is a violation of one’s “reasonable expectation of privacy.” In its original iteration, the Fourth Amendment simply stopped the government from storming into one’s home, tossing it, and grabbing whatever officials wanted.

Three Supreme Court cases have addressed the issue of privacy. The Court found, for example, that a warrant is necessary to search a cell phone (Riley v. California). A cell phone is more akin to carrying around a file cabinet stuffed with all of your financial records, personal records, and preferences. Given that cell phones are a potential treasure trove of data, the Court ruled that citizens have a reasonable expectation that such information should remain private.

Given the amount of information in the average cell phone, just imagine the information that could be linked to your face. In a world where face data can be linked to financial data, social media data, and law enforcement data, the simple human attribute of your face becomes the key to unlocking your entire life. Forget cell phone-linked data; think about face-linked data.

Finally, FRT is a threat to the Fifth Amendment in a pretty obvious way. If you have the right not to incriminate yourself with verbal statements, what about when your very face is used to indict you? Police and government agencies argue the concern is misplaced: officers identify people all the time in person, with mugshots or other images, so how is FRT different? It’s not, they say. To them, an image via FRT is just neutral evidence.

Not so fast. Cops have already used the facial recognition built into citizens’ cell phones to break into those phones. This means that a person’s face is not just prima facie evidence; it is more like a key or a passcode. Police need a warrant to get into a cell phone. If a person’s face is the key to getting into the phone, the police are forcing that person to open their phone, using their face as the key, to unlock self-incriminating evidence.

Extend this to the use of FRT in the field. A person is identified as of interest to the government for criminal or political reasons, or both. The person is detained, and their face scanned. If the FRT is linked to other databases, all of that person’s private information comes tumbling out. Thus their face is used to unconstitutionally unlock information about an individual. Now think about that in the context of the extreme nightmare use of FRT—real time scanning of crowds.

The European Union has detailed electronic privacy guidelines, has considered a ban on FRT, and has a right to be forgotten under which individuals can have personal information deleted from the web; the United States has no such protections. The United States does, however, have an implicit right to privacy from the prying eyes of government and corporations, recognized by the Supreme Court. And, ever since 1972, Californians have had a state constitutional right to privacy.

But California, as yet, has no statewide legislation banning or restricting FRT; in fact, there is a proposed law that would allow it. California does, though, have three municipalities that flat-out banned law enforcement use of FRT: Berkeley, Oakland, and San Francisco. At least two other Bay Area municipalities are considering such a step. Meanwhile, thirteen California cities have adopted legislation requiring civilian preapproval for any surveillance technology—including FRT—that law enforcement agencies wish to obtain. Given such pre-existing legislation, those thirteen cities would make good targets for campaigns banning FRT.

In the era of the COVID pandemic, protecting privacy is even more important, because public health efforts succeed only when people trust that patient information will remain confidential and anonymous. This is essential for contact tracing, so people feel safe and protected enough to reveal their health status to public health authorities and to identify people with whom they have been in close contact. Real-time FRT surveillance undermines such trust.

California’s constitutional right to privacy and an engaged citizenry are prepared to do battle to protect us from Facial Recognition Technology.  In an era where technology is seen as the immediate solution to any and all problems, it is good to remember the right of a committed civil society to guide its own course. It is time for California to lead the way, yet again, and get down to the business of eliminating a plutonium-like technology that turns our very appearance into a monetary unit and a surveilled subject.

Think of eliminating Facial Recognition Technology like washing your hands to protect from COVID-19 — except the virus is viral government surveillance.

Tim Kingston is a San Francisco Public Defender’s Office Investigator and a member of the Racial Justice Committee.