The story about Clearview finding a witness able to clear a wrongfully accused man seems intended to make us feel like it's good tech, since it can help defense lawyers too. It seems to gloss over why he could be charged in the first place...
"Although Mr Conlyn said he was the passenger, police suspected he had been driving and he was charged with vehicular homicide...The witness, Vince Ramirez, made a statement that he had taken Mr Conlyn out of the passenger's seat. Shortly after, the charges were dropped."
It is undoubtedly a feel-good story. Another “facial recognition feel good story” I’ve heard was a child porn case - the police ran the perpetrator’s image to find him and arrest him… and then ran the victim’s image to find her and inform her family, so they could get her therapy and support. Yet another story is the detective who ran a fugitive from a cold case and it pulled up an obituary for the fellow, effectively closing the case on the spot - not quite “feel good”, but it does serve to illustrate that these technologies have all sorts of applications in policing beyond “take a picture of the bad guy to send him to jail”.
What to make of these stories? On the one hand, we’re only hearing these stories because these stories make them look good. On the other hand, these stories… kinda do make them look good? Giving defendants access to the evidence they need to save themselves from a miscarriage of justice, locating and protecting victims of the worst crimes imaginable - those are seriously good things! We should be careful to respect the gravity of those results in our calculus of how to treat these feel-good stories.
Ultimately, my analysis is that facial recognition isn’t “good tech” or “bad tech”, it’s just “extraordinarily powerful tech”, and thus it will necessarily inherit the moral valence of whoever’s using it, and those users’ intentions and actions.
If they give that tech to defense lawyers and the lawyers use it for good, sometimes you will get stories like “we put in a single frame of body cam footage and in a few seconds it gave us back the only person who could save our defendant” (and the facial recognition companies will be falling over themselves to tell you all about it).
If they give that tech to sports stadiums and the stadiums use it for evil, sometimes you will get stories like “we put in the headshots of every lawyer who’s suing us, and prevented one from enjoying a football game with her friends and family” (and the media will be falling all over themselves to tell you all about it).
If they give that tech to police, well, I would guess your opinion of them would end up roughly the same as your opinion of the police.
The obituary cold case example leads me to the opposite conclusion. A determined criminal can literally use another murder to close the case on their existing murder with a well crafted prosthetic.
I don't honestly understand how the Clearview thing would make any difference. The burden of proof is on the prosecutor right? So in court they would need to present proof that he was in the front seat, and if he wasn't there wouldn't be any such proof? Or has the US justice system become so dysfunctional that people actively must prove their innocence?
It is, yes. The specific mechanism is that most cases don't go to trial; accused are offered plea deals with the threat that they will receive a much, much harsher sentence if they push for a trial. This threat is often made good too: police testimony is usually enough to get a guilty verdict and police say what prosecutors want them to.
Basically if they want you they've got you. To get out you have to have a very specific alignment of resources, sympathy and luck, and the risk if it doesn't work out is massive even then.
To add to this... Rikers Island is known as "Torture Island" colloquially and is mostly people awaiting trial, many of whom end up making plea deals
Yup, "Guantanamo North" I've heard it called. Something like 30 people have died in the Houston jail while awaiting trial in the last two years alone. It's endemic to the point of being basically intentional now.
This is what the story should have been about! Not about glorifying more surveillance.
The purpose of a system is what it does
Are you implying that someone who is lawfully a passenger in a car is somehow responsible for the driver's actions?
In Germany, if the driver is obviously drunk or drugged and unfit to drive, you may lose your license as a passenger, especially as a sober passenger, yes. You have to take the wheel or prevent the ride. Not sure about the US.
My torts professor’s head just exploded, in case you were wondering what that sound was.
Surprised it is only 1M, given that 33.6 million Americans use mass transit daily, many cities have vast networks of cameras, private corporations frequently share camera feeds, 2.5 million people pass through US airports daily, etc. Walmart, AT&T, Kohl's, Best Buy, Albertson's, Home Depot, etc. have at some point used them; Walmart alone has roughly 37 million daily customers.[1]
Back in 2021, per NYT — “In January 2020, Clearview had been used by at least 600 law enforcement agencies. The company says that is now up to 3,100. The Army and the Air Force are customers. U.S. Immigration and Customs Enforcement, or ICE, signed [a contract with Clearview]” [2]
Clearview AI has been around a while. At least in the beginning, they simply pulled faces/names from social media profiles, though I wouldn't be surprised if they have since expanded their pipeline.
[1] https://findbiometrics.com/walmart-att-other-big-names-added...
[2] https://www.nytimes.com/2021/03/18/technology/clearview-faci...
The Wikipedia page on them has more info.
Any residents of CA, VA, or IL that have opted-out of Clearview with a 'Do Not Sell' request? I was recently wondering what that's like.
Of course, it runs into the same problem as many other opt-outs where you have to provide them some information so they can identify the records that correspond to you, and presumably delete or mark them as "don't sell". Except with Clearview, the only reference material you can provide is an image, so that would definitely bother some folks.
I emailed them a long time ago to tell them to delete my data. They asked for more data, which I declined. I'm curious what their legal requirements would be in that case. I'd hope that since they could easily figure out who I am and delete my data from just my email, that it would mean they're still required to do so without me giving even more data, but who knows.
We used to think we could reliably use lie detectors, bite marks and finger prints to solve crime. We have learned these aren’t reliable. Why are we blindly trying to do the same with facial recognition? I am not convinced it is any better.
Things are less black and white with AI. Tools like ChatGPT bother me because they don't tell you the model's confidence even when that information is available. (I know there is bias too, but the probabilities would be a best case.)
Anyway, there is a huge difference between a clear photo where one person shows up at 99% and the next-highest candidate is at 10%, versus a bunch of people all at around a 20% chance. Not to mention you could further triangulate with phone Wi-Fi/cell-tower data.
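To make that contrast concrete, here's a minimal sketch (the candidate names and raw scores are invented for illustration, not anything Clearview actually outputs) that normalizes raw match scores into each candidate's share of the evidence:

```python
# Hypothetical raw similarity scores from a face-matching model (0-1 scale).
def normalize(scores):
    """Convert raw scores into each candidate's share of the total."""
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

# A "clear" match: one candidate dominates the field.
clear_case = {"suspect_a": 0.99, "suspect_b": 0.10, "suspect_c": 0.08}
# A "murky" match: everyone scores about the same, so the match tells you little.
murky_case = {"suspect_a": 0.22, "suspect_b": 0.20, "suspect_c": 0.19}

print(normalize(clear_case))  # suspect_a holds ~85% of the total score
print(normalize(murky_case))  # all three land near 33% -- effectively a coin toss
```

In the murky case, picking the top candidate is barely better than chance, which is exactly why reporting only "best match" without the score distribution is misleading.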
Maybe multiple pieces of evidence is too much to ask for petty crime.
An AI hallucination should NEVER be considered "evidence". As long as we cannot PROVE, i.e. mathematically, when an ML model is "right" or "wrong", it should never be considered "beyond a reasonable doubt", and if it ever is, that is a failure of the justice and jury system.
Do you think we decide to do things based on whether it works, or is right? It’s a product they’re selling. Nobody really cares about anything except making money.
Curious about the distinction between the use of this vs. fingerprints, DNA, or even witness lineups. Are they all equally bad? Is the main issue that facial-recognition algorithms are less accurate?
Such a tool could be used if there were regulations around its usage in police stations and proper auditing of its use. Seems like we're far from it. I hope it doesn't fall into the hands of foreign adversaries.
Personally I'm a lot more worried about the domestic adversaries who already have it. No Chinese CIA or whatever has ever knocked my teeth out and threatened to kill me, but an American police officer certainly has.
Yes, and things that seem reasonable to allow under the current regime may not hold true of future ones.
How difficult would it be to pollute the DB with AI generated fake faces?
Not difficult at all now that it's easy to use AI to generate fake people and videos.
Genius.
Do we even know why or how the AI matched two pictures together? Did the AI cheat during the learning phase and use unrelated details present in the training set to get a higher score?
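For context on why "why" is hard to answer: modern face-recognition systems generally don't give a human-readable reason at all. They map each face to a learned embedding vector and call two faces a "match" when the vectors are close, typically by cosine similarity. A toy sketch (4-dimensional made-up embeddings; real models use hundreds of dimensions produced by a neural network):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Invented embeddings for illustration only.
photo_1 = [0.9, 0.1, 0.3, 0.7]
photo_2 = [0.85, 0.15, 0.35, 0.65]  # embedding near photo_1 -> "same face"
photo_3 = [0.1, 0.9, 0.7, 0.2]      # embedding far from photo_1 -> "different face"

print(cosine_similarity(photo_1, photo_2))  # high, ~1.0
print(cosine_similarity(photo_1, photo_3))  # much lower
```

Nothing in that number says *which* facial features drove the match, which is exactly the worry about spurious training-set correlations.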
Is there any reason to believe this number? Could it be significantly padded to make it sound more useful than it is? Could it be padded to convince some LEO types that it's more useful than it is? Could it be low balled to make it sound like the LEOs aren't just sitting there scanning everyone they come across?
"CEO Hoan Ton-That also revealed Clearview now has 30bn images scraped from platforms such as Facebook, taken without users' permissions."
Not sure about this -- one would have to study FB's terms of use in detail. In any case, implicit consent was given: if you don't want your picture to be used, don't ever upload it anywhere.
I haven't even uploaded it to LinkedIn, which might help explain my surprise about a chat I had with the security officer of a resort in Cancun. He seemed quite pleased that he could positively identify me as a software engineer working out of SV (all I had given was my California driver's license). To this day, I don't know how I earned that conversation (he was friendly enough, still...), but I "won" the TSA lottery many times as well, so I somehow must trigger a red flag (perhaps not uploading one's picture is one).
yeah but TikTok uses your WiFi, we better ban that instead.
Come on, you've got to reference such a critique to let everyone in on the joke:
https://www.reddit.com/r/facepalm/comments/11zzank/asked_on_...
To be fair, a malicious app could possibly scan the network for tracking, or scan for open services, as any other app can.
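That concern is real in the sense that any app with local network access can probe neighboring devices using nothing but ordinary sockets. A minimal sketch (the host and port list here are just examples):

```python
import socket

def scan_host(host, ports, timeout=0.2):
    """Attempt a TCP connect to each port; return the ports that accept."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: probe the local machine for a few common service ports.
print(scan_host("127.0.0.1", [22, 80, 443, 8080]))
```

The point is that "does it use Wi-Fi" is the wrong question; "what does it do with the local network, and what gets sent back" is the right one.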
But if that were really the case, they would be like hawks on the evidence.
But the way this guy delivers it is very overdramatic and unaware.
I figured he had been given questions like those to ask but simply didn't understand anything he was told.
Or was given questions to ask and not coached on how to ask those questions when the executive predictably waved it away.
"Are you sending any info back to your servers about what else is on the local network" would have been a perfect question, with a possibly worrying answer, but now instead everyone is laughing about "hur dur does tiktok use wifi" and using a dumb person being dumb to handwave away legitimate concerns.
> Or was given questions to ask and not coached on how to ask those questions when the executive predictably waved it away.
or they purposely asked a vague question that could elicit multiple different answers depending on someone's interpretation.
I figured it was quite infamous by now.
The US gov hates TikTok because they can't influence them like they influenced Twitter during COVID. The US tech industry hates TikTok because they are having their lunch eaten by it. Perfect formula for corpo-fascist policies like banning software for "security" reasons.