Reposting a comment I made on HN regarding the proposed EU ban on facial recognition technology:
>I love coming to these threads to watch the crowd who makes their living from invading the public's privacy attempt to rationalize their worldview, find loopholes, etc. If your job is mass surveillance, it has always been unethical and the law is catching up to you. The purpose of these kinds of laws isn't to bring your business in line - it's to put you out of business.
@sir If only US lawmakers would protect us like that.
@sir Now, I have worked on a product that used facial recognition sensibly and with consent, and it was fully ethical as far as I could see. I find an outright ban on facial recognition a detriment to possible technological developments, but regulating it is important.
@ignaloidas it looks like the EU wants a 5 year ban on facial recognition outside of research. It seems designed to stop the bleeding so that proper regulation can be written.
@sir Oof, that's a bit harsh. I know like 7 companies which use facial recognition ethically, and this would basically kill their business. Those are also some of the main innovators in the space.
@ignaloidas good! Those companies ought to be killed.
@sir I disagree! Most of them are actually catching scammers and impersonators.
@ignaloidas we were doing alright before they got started, and we'll be fine without.
@rin Wow, such effort, much translation, much funny.
@passenger @sir The ability to deny scammers quickly and at scale would be lost. I worked with one such product (https://idenfy.com), and scammers get real creative. Removing facial recognition adds a lot more slow and error-prone manual labor (facial recognition algorithms outperformed humans in our case), which is very boring and very commonly outsourced. Who would you rather trust to review your document photos: an algorithm that uses facial recognition, or some dude in India? For me, the answer is clear.
@ignaloidas if the consumers cannot know _how_ exactly you are ID-ing them, they cannot truly consent. this goes doubly so if the ID-ing is required for use of a service the user wants (effectively meaning they have little choice), and triply so if the entity using your product is a large organization, i.e. a government or a megacorp, that has significant power over people’s lives
@sol Ah, here lies the problem. There are techniques to create adversarial images for various ML algorithms if the model is known, so sharing it in any way is just opening the door to vulnerabilities. Some might feel better if a human reviewed it, but in fact it doesn't change anything. AI/ML does what humans can, and long could, just more effectively and with better ability to scale. A human reviewing your photo still uses some kind of algorithm, one that is even harder to verify.
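For readers unfamiliar with the adversarial-image attack alluded to above, it can be sketched with the classic fast gradient sign method (FGSM). The toy logistic-regression "matcher" below is purely illustrative (the weights, sizes, and epsilon are made up, not from any real face recognition system), but it shows why a leaked model lets an attacker nudge pixels in exactly the direction that raises the match score:

```python
import numpy as np

# Toy "face matcher": logistic regression on a flattened image vector.
# If an attacker knows the model weights w, they can craft an adversarial
# input with the fast gradient sign method: x' = x + eps * sign(d score / d x).
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = rng.normal(size=64)       # the leaked (known) model weights
x = rng.normal(size=64)       # the attacker's own image, flattened
score = sigmoid(w @ x)        # match probability before the attack

# For this linear model, the gradient of the score w.r.t. the input pixels
# is proportional to w, so sign(w) tells the attacker which direction to
# nudge each pixel to push the score up.
eps = 0.25
x_adv = x + eps * np.sign(w)  # FGSM step toward a positive "match"
adv_score = sigmoid(w @ x_adv)

assert adv_score > score      # the perturbed image always scores higher
```

Real attacks target deep networks rather than a linear model, but the principle is the same, which is why publishing the exact model is a genuine liability.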
@ignaloidas an increase in efficiency is not necessarily a good thing. yes, individuals may find their paperwork goes faster in some cases. they also will find the process of having their insurance fees hiked, or being detained by ICE, to be much more streamlined as well.
furthermore, yes, adversarial imagery can be created, but think about who's most at risk here. the targets of genuine fraud tend to be either corporations or wealthy people, who have the resources to easily recover. meanwhile, the disenfranchised and impoverished stand to gain little in this tradeoff of "denying scammers", as they have few to no assets to protect, and it is these minorities who are most at risk of being targets of government persecution!
so yes, sharing the model opens up risk, but consider who bears that risk, and who benefits from it. and if your AI is so vulnerable to exploitation that publishing the details compromises the entire use case, that may be a sign that AI isn’t the right approach to solving this “problem”
@sol I don't follow how you go from identity verification to insurance fees or ICE. These are entirely different things. Large-scale facial recognition is more or less prohibited in the EU by the GDPR anyways.
About who gets targeted by scammers: wrong. There are countless examples of relatives using a sleeping person's face to take out loans in their name. Those aren't rich people. Also, sometimes actual terrorists try to use various services for money laundering purposes. I don't think anyone wants that.
@ignaloidas I mean the application in the case of ICE is pretty obvious. they knock at someone's door, use their phone's camera to identify who answers, and thanks to the cloud, they get a match and can arrest people before they have a chance to flee. insurance fees are also pretty clear: at least in the US, insurers are constantly working on profiling people to adjust their rates. successful facial ID check-ins are very valuable data points.
saying that large scale facial recognition is illegal in the EU doesn't mean much. are your clients strictly based in the EU? do you have mechanisms to ensure they won't lie about how they use your software? would your company really risk bankruptcy if it came out that your biggest clients were conducting unlawful surveillance? or would people just look the other way and collect their checks?
I admit I do not have statistics about scamming victims, but again think about who's gonna be the people paying you to build this tech: large corporations. by definition, you are not serving the interests of regular people, except for when those interests are in alignment with large companies. incidental altruism is not really altruism at all
@sol ICE: this is very different from what I am arguing for. The identity verification I'm arguing for is of the verification type: someone says they are someone, and we check that. It is not of the identification type, that is, answering the question "who are you?" using facial recognition. I'm opposed to that.
We were originally talking about EU wanting to ban facial recognition, which obviously doesn't apply elsewhere, just like GDPR.
You are serving the public by preventing unlawful actions.
yes, you’re talking about identity verification, I know that. the question ICE would ask is not “who is this person” but rather “is this [NAME], the person we’re looking for?” and the question data companies would ask is not “who is the user of IP address X”, but rather, “can we verify that IP address X was physically being used by [NAME] at that time”, which allows you to build a reliable timeline of people’s actions by connecting data from many sources
and yeah, we were originally talking about the EU, but now we are talking about whether your example of "ethical" facial recognition is in fact ethical. you cited the GDPR as a reason you expected only ethical behavior from your clients, and I simply pointed out that that only works if you operate entirely within the EU and if you assume companies would never break the law and get away with it.
it’s also troubling that you seem to be implying that just because something is unlawful, that makes it ethical to prevent it. that’s not necessarily the case.
@sol ICE wouldn't get any benefit from that; they could do the same by just training their officers in facial feature comparison (a ~3 hour course) and giving them a picture of the person to check for
I don't see how you could connect an IP address with a person using facial recognition
If we assume that companies break the law and get away with it, then absolutely nothing would change: companies would still use facial recognition
Laws are mostly based on ethical principles, so I think that is a safe assumption
@ignaloidas yes, but it would be slower, more error prone, and more importantly, you wouldn’t be enabling them.
as for IP and facial recognition... if your technology gets embedded in an app or a website, it becomes trivial to record the IP address as well as the results of the verification check, you know that
and laws aren’t “mostly” based on ethical principles, many are first and foremost about protecting wealthy property owners, not ending things like hunger, homelessness, and war. it’s not at all a safe assumption
@sol In the scenario you described, you don't need to be very accurate or fast if you're knocking on someone's door.
For IP to be recorded, the identification provider needs to share info with data companies, and let me tell you, companies are *really* serious about not having to add more data processors to their GDPR agreements, since they are checked every so often together with everyone who they are sharing data with. It's a massive pain in the ass.
Well, the laws depend on where you live.
@ignaloidas The idea is that I would take a selfie and send that and a picture of my ID to your servers for you to store it, right?
That sounds creepy.
@ignaloidas Wait, the website says "For starters we need a good quality camera to capture accurate images, then a fast speed connection to feed data back to the cloud just for algorithms to be able to do the work." Aren't those two things up to the user, and not something idenfy can help with? Or would idenfy also offer cameras for establishments to take pictures of customers?
@mort Nah, that's just for the best experience. With a lower-res camera the match rate falls, and a slower connection obviously makes things slower: we try to prove that you are a real human and not a static image, so we need to send video data back to the servers.
@mort The services that would use it probably legally need to store it anyways. Regulations on electronic registration for things related to money are getting stricter in the EU, which is why such services started to pop up.
it's like how twitter asked for my street address to verify my identity.
*how do they know what the right answer would be!?*
and combined with stuff like this: https://www.theguardian.com/us-news/2019/feb/01/sacramento-rally-fbi-kkk-domestic-terrorism-california
and many countries make the US look like saints
not to mention these are random private companies, not just government agencies that we at least get to vote on!
I mean, even when it's showing the user "I am currently trying to recognize your face"..that still wouldn't be useful unless they already know what the user's face looks like. And how do so many companies that would be idenfy's clients already know what I look like?
@ignaloidas Right, but if people don't have another option, then that's not actually ethical consent.
@sir But really in that case, it's the employer or bank or etc. that is choosing to require it and not accept other options.
So perhaps regulation should apply to employers and banks etc., requiring them not to demand facial recognition, since otherwise they could just get around it by using some company like Idenfy but based in Russia or something.
@ignaloidas @sir @passenger The point is that combining your face with other things allows the *construction* of databases *external to the government* that associate faces with identities, which could then be hacked or bought or sold by unscrupulous employees (or sold by companies without regulation, or transferred to countries without regulation then sold)
And ultimately used for nefarious purposes, identifying dissidents and such in the background. (Otherwise background stuff is useless)
Also to note, any such service has such a face<->identity database, since all of them will require a document photo from you, which provides both. It doesn't have to use facial recognition to collect that data.
And that includes electricity, water, sewage, trash, loans, buying food online, and etc.
Also what laws require them *to* be stored? I wasn't aware of that, but it doesn't surprise me. I presume it's "in case they turn out to be a terrorist" or something? ugh
I know! I thought @ignaloidas was talking about, like, auto-tagging your family scrapbook photos on your own local hard drive or something (I think iPhoto does have that)
But this is horrifying..like,
It's not actually facial recognition technology itself that's horrifying..it's what it's used for.
*And this is the specific exact thing everyone's worried about!*
So good job evangelizing; no really, thanks XD
This is how a Cory Doctorow novel starts.
@Wolf480pl @sir God this was SO blown up. In the original source that all the articles are based on (https://g8fip1kplyr33r3krz5b97d1-wpengine.netdna-ssl.com/wp-content/uploads/2020/01/AI-white-paper-CLEAN.pdf) this is only discussed in one paragraph (page 15). It's only talking about public spaces there, and, I quote, "The Commission is of the view that it would be preferable to focus at this stage on full implementation of the provisions in the General Data Protection Regulation". Basically the media decided to take this out of the context of the report for clickbait.