
CAMERAS EVERYWHERE: Examining the Conflict Between Technology and Human Rights

Cameras are everywhere. Hold a cell phone and you have direct access to a camera. Enter a bank, an elementary school, or a superstore, or drive into an indoor parking garage, and surveillance cameras are strategically placed to record your every move. At busy intersections, in public parks, and in boisterous sports arenas, cameras capture the haphazard, deliberate, and frenetic ways humans move about and interact. Even police now wear body cameras, further increasing the number of cameras in public.

Cameras are part of a larger system of surveillance tools that includes facial recognition, machine learning, and artificial intelligence (AI). Facial recognition is computer software that analyzes facial features to uniquely identify individuals in still or video images; machine learning is an application of AI that gives systems the ability to learn and improve from experience without being explicitly programmed.

Some see these tools as a crucial means of improving safety and security and as an important aid to law enforcement. They can simplify law enforcement activities, advance investigations, and help identify wanted individuals who are at large. They could also help track an Alzheimer’s patient who slipped unnoticed out of a care center or locate a missing child. Some even see opportunities for advancement in health care. For example, machine-learning systems could enhance diagnostics and treatments while potentially making health care services more widely available.

Critics often view the technology as dystopian government overreach: an unnecessary invasion of privacy with the potential to discriminate and inadvertently cause harm.

The proliferation of facial recognition technology has heightened concerns among civil rights advocates and human rights organizations, who contend the technology will be used to conduct mass surveillance of innocent civilians. These groups also note that the technology to date is less reliable at identifying people of color and women than at identifying white men, which can lead to false-positive identifications of racial and ethnic minorities, particularly in criminal investigations.

Sarah Myers West

“There’s strong evidence that many of the systems in deployment are reflecting and amplifying existing forms of inequality,” said Sarah Myers West, a postdoctoral researcher at AI Now Institute, an interdisciplinary research center at New York University dedicated to understanding the social implications of artificial intelligence. “For this reason, it’s critical that we have a public conversation about the social impact of AI systems, and AI Now’s work aims to engage in research to inform that conversation.”

Joy Buolamwini, an MIT graduate, AI researcher, and computer scientist, provided firsthand research to inform that conversation. Buolamwini, a Ghanaian American, wrote her 2017 thesis, “Gender Shades,” after being misidentified while working with facial analysis software. The software didn’t detect her face until she put on a white mask, she said, “because the people who coded the algorithm hadn’t taught it to identify a broad range of skin tones and facial structures.” The software returned worse results for women and darker-skinned people.

“We often assume machines are neutral, but they aren’t,” she said in a Time magazine essay about her discoveries. Her thesis methodology uncovered large racial and gender bias in AI services from such companies as Microsoft, IBM, and Amazon. In response, Buolamwini founded the Algorithmic Justice League to “create a world with more ethical and inclusive technology.”
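The kind of disparity Gender Shades documented can be illustrated with a simple audit: run a classifier over a labeled benchmark, then compare error rates subgroup by subgroup. The Python sketch below uses fabricated records purely for illustration; it is not Buolamwini’s code or data, and the subgroup labels and results are hypothetical.

```python
# Illustrative sketch of an intersectional accuracy audit.
# The records below are fabricated for demonstration; they are not
# Gender Shades data, and the rates they produce mean nothing by themselves.

from collections import defaultdict

# Hypothetical evaluation records: (subgroup, true_label, predicted_label)
results = [
    ("lighter-skinned male",   "male",   "male"),
    ("lighter-skinned female", "female", "female"),
    ("darker-skinned male",    "male",   "male"),
    ("darker-skinned female",  "female", "male"),    # misclassification
    ("darker-skinned female",  "female", "male"),    # misclassification
    ("darker-skinned female",  "female", "female"),
]

totals = defaultdict(int)
errors = defaultdict(int)

# Tally predictions and errors per subgroup.
for subgroup, truth, prediction in results:
    totals[subgroup] += 1
    if truth != prediction:
        errors[subgroup] += 1

# Report the per-subgroup error rate; large gaps between subgroups
# are the signal an audit like this is designed to surface.
for subgroup in sorted(totals):
    rate = errors[subgroup] / totals[subgroup]
    print(f"{subgroup}: {errors[subgroup]}/{totals[subgroup]} errors ({rate:.0%})")
```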

Though these research findings can be discouraging, “at least we’re paying attention now,” she said. “This gives us the opportunity to highlight issues early and prevent pervasive damage down the line.”

In addition to Buolamwini’s work, other organizations have launched extensive studies and surveys of their own.

Joy Buolamwini

The Georgetown University Law Center on Privacy and Technology conducted a widely heralded study released in 2016. The Perpetual Line-Up: Unregulated Police Face Recognition in America revealed that law enforcement face recognition networks include more than 117 million American adults, most of them drawn from driver’s license and ID photos. The report argues this kind of access poses a substantial risk to free speech and to the rights to assemble and protest.

The law center cited 11 key findings from its yearlong review, which drew on more than 100 records requests to law enforcement agencies identified as having piloted or implemented face recognition. The Perpetual Line-Up is the most comprehensive survey to date of law enforcement face recognition.

Among the law center’s findings are that face recognition is unregulated; that law enforcement agencies are exploring face recognition on live surveillance video; that little is being done to ensure these systems are accurate and bias-free; and that agencies using facial recognition are not taking adequate steps to protect free speech. It also found that face recognition will be least accurate for those it will most affect: African Americans.

A core recommendation is that Congress and state legislatures address these risks through commonsense legislation comparable to the Wiretap Act. These reforms must be accompanied by key actions by law enforcement, the National Institute of Standards and Technology (NIST), face recognition companies, and community leaders.

Public reporting, oversight, transparency, adequate federal funding to increase the frequency and scope of accuracy testing, and more diverse photo databases for training are among the law center’s recommendations. At a minimum, the center recommends that legislatures require face recognition searches to be conditioned on an officer’s reasonable suspicion that an individual is engaged in criminal conduct, the standard that currently applies to police investigatory stops. The report asserts that use of this technology to track people based on their race, ethnicity, or religious or political views should be prohibited.

A similar study, Facing the Future of Surveillance, a 2019 report by The Constitution Project and the Project On Government Oversight (POGO), a watchdog organization, is their latest effort to propose solutions and address the constitutional questions raised by facial recognition technology.

The Constitution Project at POGO discovered local, state, and federal agencies have amassed databases that contain our fingerprints, DNA, retinal images, photos of our faces, and even our gaits. Roughly half of all adults in the United States have pre-identified photos in databases used for law enforcement facial recognition searches.

Their report acknowledges that facial recognition can serve as a vital law enforcement tool to fight terrorism and crime, but cautions it must be “balanced with the imperative to protect constitutional rights and values, including privacy and anonymity, free speech and association, equal protection and government accountability and transparency.”

Congressional response

Prior to his untimely death in November 2019, U.S. Rep. Elijah Cummings (D-Md.) had begun working with Rep. Jim Jordan (R-Ohio) on bipartisan legislation to address the use of facial recognition and machine learning. They were exploring potential First, Fourth, and 14th Amendment questions raised by the technology.

The lawmakers were attempting to get ahead of the technology instead of reacting to it by proposing federal measures to restrict its use as well as offer remedies in instances of misuse or misidentification.

For more than a year, legislators heard testimony and received studies from law enforcement groups, private companies, industry professionals, legal and tech experts, and civil and human rights organizations to determine the best way forward. Buolamwini also presented her findings during a congressional hearing on facial recognition software.

These conversations and hearings led to a bill introduced in the Senate in November: the Facial Recognition Technology Warrant Act of 2019, sponsored by Sens. Chris Coons (D-Del.) and Mike Lee (R-Utah). The bill would limit the use of facial recognition technology by federal agencies; in other words, it applies only to federal law enforcement, not state and local police.

It would require federal authorities to obtain a judge’s approval before using facial recognition for ongoing surveillance of a criminal suspect lasting more than three days, with authorizations capped at 30 days. The measure is currently under review in the Senate Committee on the Judiciary.

While legislators acknowledge the bill is a first step in addressing the issue, human rights and other civil rights advocacy groups state the proposed bill does little to address the chief concern: the use of facial recognition to identify a person from a photo or video still.

“We decided that it was important to find a reasonable approach to balancing the interests that we have at stake here,” Lee said in an interview with The Hill. “The obvious civil liberties concerns that Americans have and what it provides to law enforcement.”

Even so, major companies like Apple, Amazon, and Facebook are free to sell mobile devices and other products with facial recognition software built in. It’s considered a different application of the software than, say, cameras installed on the periphery of a building, because people can choose whether or not to use it.

Two California cities, San Francisco and Oakland, and Somerville, Massachusetts, have already banned government use of the burgeoning technology. Officials in neighboring Cambridge, Massachusetts, are considering banning city departments from using facial recognition software, citing similar concerns of discrimination against women and people of color. Portland, Oregon, and Portland, Maine, are also on the path to prohibiting facial recognition and similar facial identification tools by their city departments and police.

Advocating for human rights

Law enforcement agencies seeking a framework for pursuing this technology in measured terms might consider following the guidelines in The Toronto Declaration.

Officially called The Toronto Declaration on Equality and Non-Discrimination in Machine Learning Systems, it came about after a group of experts met in Toronto in 2018 to discuss equality and nondiscrimination in machine-learning systems. Amnesty International and Access Now organized the session, which included about two dozen technologists and human rights practitioners from around the world.

While these technologies can impact a range of human rights, The Toronto Declaration focuses on the right to equality and nondiscrimination. The Declaration urges governments and companies, both public and private, to ensure machine learning, AI, and facial recognition applications keep human rights at the forefront.

The Declaration is designed to build on “existing discussions, principles and papers exploring the harms arising from this technology. We wish to complement this existing work by reaffirming the role of human rights law and standards in protecting individuals and groups from discrimination in any context. The human rights law and standards referenced in this Declaration provide solid foundations for developing ethical frameworks for machine learning, including provisions for accountability and means for remedy.”

Jay D. Aronson, Ph.D.

Carnegie Mellon University’s Center for Human Rights Science (CHRS) was the first academic institution to endorse The Toronto Declaration. CHRS Founder and Director Jay D. Aronson, Ph.D., and his team participated in the discussions. The CHRS is continuing its research in the application of machine learning in human rights contexts, particularly in video analysis and 3D event reconstruction. Their goal, Dr. Aronson said, is to work with computer science and human rights communities to ensure machine learning yields positive benefits to all humanity and not just a privileged few. 

“As far as the technology itself goes, if it actually works it creates a situation in which governments and companies can chill the ability of citizens to engage in actions like assembly, demonstrations, and protests that are at the heart of democratic societies,” he said. “If it doesn’t work, it can lead to situations where people are accused of crimes or actions they did not commit.

“Technologists and human rights practitioners must work together to build a future in which human rights are integrated within technological systems,” Dr. Aronson stated. 

Recent reporting has shown the harm facial recognition and machine-learning systems can cause if proper safeguards, accurate data, and unbiased algorithms are not in place to minimize discrimination, inadvertent bias, misidentification, and repressive practices.

To illustrate this point, Jacob Snow, J.D., a technology and civil liberties attorney with the American Civil Liberties Union (ACLU) of Northern California, described how the ACLU tested Rekognition, a facial recognition platform developed by Amazon. ACLU technologists built a face database and search tool using 25,000 publicly available arrest photos, then searched that database against public photos of every current member of the House and Senate, using Amazon’s default match settings for Rekognition. The result: the software misidentified 28 members of Congress as people who had been arrested for a crime. The false matches were disproportionately people of color, including six Congressional Black Caucus members.
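For readers curious how such a test is wired together, the sketch below shows, in broad strokes, how a face collection can be built and searched with Amazon Rekognition via the boto3 SDK. The collection name, file names, and probe image are placeholders, and the 80 percent match threshold is simply Rekognition’s documented default; none of this reflects the ACLU’s actual configuration or data.

```python
# Minimal sketch of a Rekognition-style face search using boto3.
# Assumes AWS credentials are configured; the collection name, file paths,
# and probe image are placeholders, not the ACLU's actual setup.

import boto3

rekognition = boto3.client("rekognition", region_name="us-west-2")

COLLECTION_ID = "demo-arrest-photos"  # hypothetical collection name

# Create a face collection and index reference photos into it.
rekognition.create_collection(CollectionId=COLLECTION_ID)

for path in ["mugshot_0001.jpg", "mugshot_0002.jpg"]:  # placeholder files
    with open(path, "rb") as image_file:
        rekognition.index_faces(
            CollectionId=COLLECTION_ID,
            Image={"Bytes": image_file.read()},
            ExternalImageId=path.rsplit(".", 1)[0],
        )

# Search the collection for faces resembling a probe photo, using
# Rekognition's default 80 percent similarity threshold.
with open("probe_photo.jpg", "rb") as image_file:  # placeholder probe image
    response = rekognition.search_faces_by_image(
        CollectionId=COLLECTION_ID,
        Image={"Bytes": image_file.read()},
        FaceMatchThreshold=80,
        MaxFaces=5,
    )

for match in response["FaceMatches"]:
    face = match["Face"]
    print(f"match: {face['ExternalImageId']}  similarity: {match['Similarity']:.1f}%")
```

In a test like the ACLU’s, the question is not whether matches come back but how often they come back for people who should not match at all, and for whom; that is what made the disproportionate false matches significant.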

The ACLU has joined a growing chorus of organizations urging Congress to halt the use of the software altogether or issue a moratorium on face surveillance, stating that its use “threatens to chill First Amendment-protected activity like engaging in protest or practicing religion, and it can be used to subject immigrants to further abuse from the government.” The ACLU, Snow said, is also suing to compel the FBI to release details about how it uses face recognition and other forms of remote biometric identification technology.

According to the Constitution Project at POGO study, the FBI’s system is technically not a single centralized database; rather, it is built on agency-owned databases of mug shots, a broad range of civil service photos, and millions of driver’s license photos obtained through agreements with states. Several state and local law enforcement agencies have access to these databases as well.

Adding to the body of research is the IEEE, which issued Article 19, a report that outlines the impact of AI on freedom of expression and privacy. The IEEE sets the technical standards that drive modern telecommunications and information and communications technology hardware.

Article 19 is the result of a multistakeholder initiative to develop ethical guidelines for AI. It’s the culmination of more than three years of work with experts from Canada, the European Union, the U.K., and the Council of Europe. In it, the IEEE emphasizes the need to build consensus for standards, certifications, and codes of conduct for the ethical implementation of AI.

Perhaps most importantly, Article 19 makes human rights a guiding principle for technology development. It offers concrete guidelines on responsibilities and duties of care when AI goes awry, including protections and redress for victims of human rights violations resulting from AI systems.


“Technologists and human rights practitioners must work together to build a future in which human rights are integrated within technological systems.”

– Jay D. Aronson, Ph.D., founder and director, Carnegie Mellon University Center for Human Rights Science

Facial recognition abroad

It’s been widely reported that China uses cameras and facial recognition to surveil its citizens. Recent coverage from The New York Times indicates China is doing so in part to track ethnic Muslims in the country’s western Xinjiang region, which many consider a form of racial profiling.

The U.K. is not far behind. It reportedly has more closed-circuit TV (CCTV) surveillance cameras per person than any country except China. According to the British Security Industry Association (BSIA), a U.K. trade association, an estimated 4 million to 6 million cameras are in use in the U.K., or roughly one camera for every 11 people. The BSIA is actively helping to set European standards for these applications and wants to ensure CCTV is operated responsibly, respecting citizens’ rights and maintaining public trust.

According to the BBC, police in South Wales, Leicestershire, and London have experimented with facial recognition software since 2015 to monitor peaceful protests and persons with mental health issues. Surveillance is known to be particularly prevalent in the 67-acre King’s Cross development in London, which is home to Google’s U.K. headquarters, Central Saint Martins art school, and several businesses.

The U.K. High Court of Justice ruled in September 2019 that facial recognition has considerable benefits, particularly in situations where it may protect the public and prevent crime. The decision stems from a lawsuit, thought to be the first of its kind globally, brought by a privacy campaigner who alleged his human rights were violated when his image was captured by South Wales Police using automated facial recognition software. Civil liberties groups joined the lawsuit, alleging the actions of the police violated both U.K. and European human rights and data privacy laws.

The U.K. adopted the Data Protection Act in 1998 to protect personal data stored on computers or in organized paper filing systems. It governs the use of cameras and any data they produce or store, and gives citizens the legal right to control information about themselves. It was further strengthened with the passage of the Data Protection Act 2018, which regulates the collection, storage, and use of personal data more strictly.

Although the High Court agreed that automated facial recognition software did impact the claimant’s individual rights because it involved processing sensitive personal information, it determined that the police use of the software met legal requirements and that the actions taken were lawful: his image was captured in a public area as part of a search for people on a police watch list.

Both sides have stated the ruling should not be perceived as a green light for widespread use of the technology. Instead, law enforcement agencies and public and private companies should make protecting human rights a priority as they develop and use it. U.K. advocacy groups plan to continue pressuring the courts to restrict its use.

Proceeding with caution

“We’ve seen a growing wave of pushback against harmful AI, particularly in the past year,” said Myers West. The pushback, she said, “is being led by the communities that are directly impacted by AI systems. In the 2019 AI Now report, we spotlight the work of community organizers, students, and tech workers trying to shape the uses of AI systems. We’re also seeing a growing tide of regulation to ensure accountability in how these systems are being used — from bans on the use of facial recognition in municipalities across the United States, to state-level initiatives to increase transparency in the use of AI for hiring, to the Algorithmic Accountability Act being introduced before Congress. 

“But there’s a lot more that needs to be done in this space,” she said. “At AI Now, we advocate to move beyond ‘technical fixes’ to harmful AI and actually address the broader politics and consequences of AI’s use.”

While the bill awaits review in the Senate Committee on the Judiciary, private and public tech companies and law enforcement agencies forge ahead with a technology that remains unregulated.

Researchers have demonstrated that advanced technologies can suffer from design flaws or encode existing social biases that disproportionately harm people of color, women, and religious minorities. This is true of facial recognition. AI Now is also exploring the nexus of bias, disability, and AI to ensure AI systems don’t reproduce and extend histories of marginalization of people with disabilities.

The Constitution Project at POGO study and others urge governments to continually consider whether new policies and practices have a disparate impact. Even if facial recognition is built to be accurate and unbiased, the report states, incorporating it into surveillance systems in a discriminatory manner only increases the efficiency of an unjust system.
