"Gaydar," as it's known: an AI that can determine sexual orientation from the face of the subject under examination
Presented with photos of gay men and straight men, a computer program was able to determine which of the two was gay with 81 percent accuracy, according to Dr. Kosinski and co-author Yilun Wang’s paper.
The backlash has been fierce. “I imagined I’d raise the alarm,” Dr. Kosinski said in an interview. “Now I’m paying the price.” He’d just had a meeting with campus police “because of the number of death threats.” Advocacy groups like Glaad and the Human Rights Campaign denounced the study as “junk science” that “threatens the safety and privacy of LGBTQ and non-LGBTQ people alike.”
Dr. Kosinski and Mr. Wang began by copying, or “scraping,” photos from more than 75,000 online dating profiles of men and women in the United States. Those seeking same-sex partners were classified as gay; those seeking opposite-sex partners were assumed to be straight. Some 300,000 images were whittled down to 35,000 that showed faces clearly and met certain criteria. All were white, the researchers said, because they could not find enough dating profiles of gay minorities to generate a statistically valid result. The images were cropped further and then processed through a deep neural network, a layered mathematical system capable of identifying patterns in vast amounts of data.
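The labeling and filtering steps described above can be sketched in a few lines. This is a hypothetical illustration only: the profile fields, detector output, and confidence threshold below are invented for the example and do not come from the paper.

```python
# Hypothetical sketch of the labeling and filtering pipeline described above.
# Field names ("own_sex", "faces_detected", etc.) and the 0.9 threshold are
# illustrative assumptions, not details from the study.

def label_orientation(profile):
    """Label a profile by the sex of the partner it seeks, as in the study."""
    if profile["own_sex"] == profile["seeking_sex"]:
        return "gay"
    return "straight"

def keep_image(image):
    """Keep only images with a single, clearly visible face."""
    return image["faces_detected"] == 1 and image["face_confidence"] > 0.9

# Two toy scraped profiles: one usable image, one rejected (two faces).
profiles = [
    {"own_sex": "m", "seeking_sex": "m",
     "images": [{"faces_detected": 1, "face_confidence": 0.97}]},
    {"own_sex": "m", "seeking_sex": "f",
     "images": [{"faces_detected": 2, "face_confidence": 0.95}]},
]

# Whittle the scraped images down to labeled, clear-face examples.
dataset = [
    (img, label_orientation(p))
    for p in profiles
    for img in p["images"]
    if keep_image(img)
]
print(len(dataset))  # → 1 (only the first profile's image survives filtering)
```

In the real study the surviving crops were then fed to a deep neural network; here the pipeline stops at the labeled dataset.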
The authors were then ready to pit their prediction model against humans in what would become a notorious gaydar competition. Both humans and machine were given pairings of two faces — one straight, one gay — and asked to pick who was more likely heterosexual.
The participants, who were recruited through Amazon Mechanical Turk, a marketplace for digital tasks, were advised to “use the best of your intuition.” They made the correct selection 54 percent of the time for women and 61 percent of the time for men — slightly better than flipping a coin.
Dr. Kosinski’s algorithm, by comparison, picked correctly 71 percent of the time for women and 81 percent for men. When the computer was given five photos for each person instead of just one, accuracy rose to 83 percent for women and 91 percent for men.
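The forced-choice protocol and the gain from extra photos can both be simulated with toy numbers. Everything below is synthetic — the score distributions and noise levels are assumptions, and only the evaluation procedure (pick the higher-scoring face in each pair, average scores across a person's photos) mirrors the study.

```python
import random

# Synthetic stand-in for a model's per-photo "likelihood of being gay" score.
# The 0.65/0.35 baselines and 0.2 noise are arbitrary illustrative choices.
random.seed(0)

def photo_score(is_gay):
    """Noisy per-photo score; gay subjects score higher on average."""
    base = 0.65 if is_gay else 0.35
    return base + random.gauss(0, 0.2)

def forced_choice_accuracy(n_pairs, photos_per_person):
    """Fraction of (gay, straight) pairs where the gay subject scores higher,
    averaging each subject's score over several photos."""
    correct = 0
    for _ in range(n_pairs):
        gay = sum(photo_score(True) for _ in range(photos_per_person)) / photos_per_person
        straight = sum(photo_score(False) for _ in range(photos_per_person)) / photos_per_person
        if gay > straight:
            correct += 1
    return correct / n_pairs

print(forced_choice_accuracy(10_000, 1))  # one photo per person
print(forced_choice_accuracy(10_000, 5))  # five photos: averaging cancels noise
```

The second accuracy comes out higher than the first: averaging five photos shrinks the per-person noise, which is the same mechanism by which the real model improved from one photo to five.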
Continues here: https://www.nytimes.com/2017/10/09/s...ence&smtyp=cur