Credits: The Register

Facial recognition is having a rough time of it lately. Just six months ago, people were excited about Apple letting you unlock your phone just by looking at it. A year ago, Facebook users joyfully tagged their friends in photos. But then the tech got better, and so did the concerns.

In May, San Francisco became the first major city in the world to effectively ban facial recognition. A week later, Congress heard how it also needed to ban the tech until new rules could be drawn up to cover its safe use.

That same week, Amazon shot down a shareholder proposal to stop its facial recognition technology from being sold to law enforcement. Then, in June, the state of Michigan started debating a statewide ban, and the FBI was slammed by the Government Accountability Office (GAO) for failing to measure the accuracy of its face-recognition technology.

The current sentiment, especially given a contentious political environment where there is an overt willingness, even determination, to target specific groups, is that facial recognition is dangerous and needs to be carefully controlled. Free market America has hit civil rights America.

It hasn’t helped that China’s use of the technology has created a situation previously only imagined in dystopian sci-fi movies, in which a man who jaywalked across a road is identified several weeks later while walking down a different street, arrested, and fined.

This turn is especially frustrating for one CEO of a facial recognition firm, Shaun Moore of Trueface, a company that until recently was based in the city that voted to ban its product, San Francisco.

Moratorium

Moore is keen to point out that San Francisco didn’t actually ban the technology; it can still be used if the authorities get a warrant. This is true.

The decision is more of a moratorium: any local government authority that wants to use facial recognition will need to apply to do so, and be approved, before it can. That requirement will only be lifted once new rules designed to balance privacy and accuracy with technological capability are drawn up.

He is, unsurprisingly, not happy about his company’s product being blocked by legislation. “It is not the right way to regulate,” he complains, especially since it has led to a broader sense that the technology is inherently dangerous. “We risk creating a Facebook situation,” he warns – where Congress feels obligated to act against a specific technology based on fears but with little or no understanding of how it works.

For one, Moore argues, he doesn’t know of any law enforcement agency that wants to use the technology for real-time surveillance. They want to use it as an investigative tool after the fact by scouring footage. “It can take five to seven days off investigation time,” he told us. “It is one piece of evidence that can be used to search for other evidence.”

In other words, fear of what facial recognition could be used for is limiting its usefulness in current investigations. Faster, more effective investigations mean better results and more available police time to cover more crimes: a win-win.

He also argues that the feared ubiquitous surveillance is simply not possible, at least not yet. “We don’t have the processing power, we can’t physically do that,” he says of the fear that widespread cameras could be turned into tools of constant surveillance.

But as we dig into the concerns around facial recognition, it increasingly feels like the proposed moratoriums make a lot of sense.

One of the biggest concerns is around accuracy: how confident can we be that someone on a camera, identified as a specific individual through facial recognition, is really that person? The answer is always given as a percentage likelihood. But that raises a whole host of other questions: what level of accuracy is sufficient for someone – like a police officer – to act?
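
That threshold question is ultimately a policy decision, not a technical one. As a minimal sketch – assuming a hypothetical system that returns a match score between 0 and 1, not any vendor’s actual API – the choice looks something like this:

    # Minimal sketch, with hypothetical numbers: a recognition system returns
    # a match confidence, not a yes/no, so someone must pick a threshold.
    ACTION_THRESHOLD = 0.95  # a policy choice, not an industry standard

    def should_act(match_confidence: float) -> bool:
        """Treat a match as actionable only if it clears the policy threshold."""
        return match_confidence >= ACTION_THRESHOLD

    for score in (0.80, 0.92, 0.97):
        verdict = "actionable" if should_act(score) else "inconclusive"
        print(f"match confidence {score:.0%}: {verdict}")

Lower the threshold and more suspects get flagged, along with more innocent people; raise it and the reverse happens. Nothing in the mathematics says where it should sit – that is exactly the gap the proposed rules would have to fill.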

Color blind

Combine that with a well-recognized problem – that the datasets used to train these systems are heavily skewed toward white-skinned men, which produces more accurate results for them and less accurate results for anyone who isn’t a man, or white – and you have a civil rights nightmare waiting to happen.
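
To see why that skew matters, consider a minimal sketch with invented counts: an aggregate accuracy figure can look respectable while hiding a much worse error rate for an under-represented group.

    # Sketch with invented counts: aggregate accuracy hides per-group disparity.
    results = {
        # group: (correct identifications, total attempts)
        "well_represented": (960, 1000),
        "under_represented": (160, 200),
    }

    total_correct = sum(correct for correct, _ in results.values())
    total_tried = sum(total for _, total in results.values())
    print(f"aggregate accuracy: {total_correct / total_tried:.1%}")  # 93.3%

    for group, (correct, total) in results.items():
        print(f"{group}: {correct / total:.1%}")  # 96.0% vs 80.0%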

Moore says that his company – and the facial recognition industry as a whole – is “absolutely” aware of that dangerous bias. While stressing that it is not the technology itself that is racist – rather, there are “lots of racist people” and the data itself can introduce bias – he says the industry is working hard to fix those biases.

Trueface is paying people in countries across Asia and Africa to send in photos of their faces in order to build a much larger database covering different features and darker skin tones – an approach that is “actively pushing the bias down.”

He says that, combined with improvements in the technology, we are rapidly getting to the point where, within two to three years, facial recognition accuracy will be in the “high 90s” for all types of people – roughly on par with other forms of identification that we all accept within society, such as those used in banking.

He even argues that this level of accuracy could help counteract human biases: it would be harder for a police officer to justify, say, stopping a black man because he thought he looked like a suspect if the facial recognition result put the match at only 80 per cent.

But then, of course, we delve into the complex and fraught world of what is supposed to happen versus what really happens on the street. Moore admits that if there isn’t a clear picture of someone or the individual in question is wearing a hat, then it is never going to be possible to get a high-90s accuracy.

Except he describes it in a way that many of his clients are likely to see it: “If someone is actively avoiding cameras, or pulls on a hat, then there’s nothing we can do.” We relay the recent story from London where a man was stopped, questioned and fined £90 ($115) for “disorderly behavior” because he tried to hide his face. He didn’t want to be on camera; the cops immediately assumed he was up to no good.

Abuse

Moore admits that facial recognition use is going to be based on a “social contract” and that “to me, that was inappropriate” to stop and fine the man. It was “probably his right” to avoid the cameras, he notes, but then quickly adds that he “would like to assume that the police officers are trained to recognize behavior.” And, he points out, the issue only got a “spotlight on it because facial recognition was in the same sentence.”

Which is a fair point. Like any new technology, the initial sense of amazement at what has become possible is soon replaced with a fear of the new and of its possible abuses. And when any abuses do come to light, they are given disproportionate weight, leading to a sense of crisis that then drives lawmakers to believe they need to act and pass new laws.

This technology journalist often cites the wave of newspaper headlines in the 1980s that surrounded the terror that once was “mobile phones.” There were even calls to ban them entirely because they were being used by football supporters to organize fights.

Facial recognition has already proven its worth, Moore argues. One recent example was how a man traveling on a false ID was identified and arrested at a Washington DC airport thanks to its facial recognition system. And, faced with the unpleasant reality of gun violence and mass shootings in the US, its use at live events could end up saving lives and keeping everyone safer. “Guns are a serious problem,” he notes. “This technology is there to make better decisions.”

Which gets us back to the rules and regulations. Which don’t exist yet. Moore feels strongly that this is one area where federal – rather than local – regulation is needed, and that it should include restrictions on use.

How then?

The question is what those rules look like, how they are applied, and around which specific issues they can be drawn. Moore says he doesn’t have the answers, but he does help identify some key building blocks:

  • Government versus commercial use
  • Real-time use versus analysis of recorded footage
  • Opt-in use (where identification is used to provide access) versus recognition (where identification is used to stop, prevent or limit someone)
  • Transparency and benchmarks

The use of facial recognition is always going to be “situational,” Moore argues. And, he notes, it may well be necessary for the use of facial recognition within the US to be reliant on the use of technology that is created within the US, in order to make sure that the new rules are baked into hardware and software.

Even assuming new federal rules, a bigger question then is: how do you stop companies and/or specific police departments from abusing the technology?

Moore seeks to reassure us. “There are bad people. We have turned down multiple clients where their use of the technology was not aligned with what we wanted to do.” It would be hard for companies to hide their planned use of such technology, he argues, because “we spend six months at a minimum with clients. If they were trying to deceive us, we would know it, and just shut it down.”

But what was intended as reassurance in some respects only serves to further highlight concerns: this technology can be used in wrong and dangerous ways, and there are people out there already willing to spend money on systems troubling enough that the company selling the technology walks away.

Moore is right when he says that facial recognition is an “inevitability.” The big question is: is this the sort of technology that should be introduced and then scaled back – like ride-sharing or social media – or is it the sort of technology that needs to be forced to argue its case before it is introduced?

Moore thinks it’s the former. His former hometown thinks the latter.
