Face-Reading AI Will Detect Politics and IQ, Expert Says
By Sam Levin
Published: October 13, 2017
Voters have a right to keep their political beliefs private. But according to some researchers, it won't be long before a computer program can accurately guess whether people are liberal or conservative in an instant. All that will be needed are photos of their faces.

Michal Kosinski -- the Stanford University professor who went viral last week for research suggesting that artificial intelligence (AI) can detect whether people are gay or straight based on photos -- said sexual orientation was just one of many characteristics that algorithms would be able to predict through facial recognition. Using photos, AI will be able to identify people's political views, whether they have high IQs, whether they are predisposed to criminal behavior, whether they have specific personality traits and many other private, personal details that could carry huge social consequences, he said.

In a wide-ranging interview, Kosinski outlined the extraordinary and sometimes disturbing applications of facial detection technology that he expects to see in the near future, raising complex ethical questions about the erosion of privacy and the possible misuse of AI to target vulnerable people.

"The face is an observable proxy for a wide range of factors, like your life history, your development factors, whether you're healthy," he said.

Faces contain a significant amount of information, and using large datasets of photos, sophisticated computer programs can uncover trends and learn how to distinguish key traits with a high rate of accuracy. With Kosinski's "gaydar" AI, an algorithm used online dating photos to create a program that could correctly identify sexual orientation 91% of the time with men and 83% with women, just by reviewing a handful of photos.

Kosinski's research is highly controversial and faced a huge backlash from LGBT rights groups, which argued that the AI was flawed and that anti-LGBT governments could use this type of software to out gay people and persecute them. Kosinski and other researchers, however, have argued that powerful governments and corporations already possess these technological capabilities and that it is vital to expose possible dangers in an effort to push for privacy protections and regulatory safeguards, which have not kept pace with AI.

Kosinski, an assistant professor of organizational behavior, said he was currently studying links between facial features and political preferences, with preliminary results showing that AI is effective at guessing people's ideologies based on their faces.

This is probably because political views appear to be heritable, as research has shown, he said. That means political leanings are possibly linked to genetics or developmental factors, which could result in detectable facial differences.

Kosinski noted previous studies that have found that conservative politicians tend to be more attractive than liberals, possibly because good-looking people have more advantages and an easier time getting ahead in life.

The professor said the AI would perform best for people who are far to the right or left and would be less effective for the large population of voters in the middle. "A high conservative score … would be a very reliable prediction that this guy is conservative."

Kosinski is also known for his controversial work on psychometric profiling, including using Facebook data to draw inferences about personality. The data firm Cambridge Analytica has used similar tools to target voters in support of Donald Trump's campaign, sparking debate about the use of personal voter information in campaigns.

Facial recognition may also be used to make inferences about IQ, said Kosinski, suggesting a future in which schools could use the results of facial scans when considering prospective students. This application raises a host of ethical questions, particularly if the AI is purporting to reveal whether certain children are genetically more intelligent, he said: "We should be thinking about what to do to make sure we don't end up in a world where better genes means a better life."

Some of Kosinski's suggestions conjure up the 2002 science-fiction film Minority Report, in which police arrest people before they have committed crimes based on predictions of future murders. The professor argued that certain areas of society already function in a similar way.

He cited school counselors intervening when they observe children who appear to exhibit aggressive behavior. If algorithms could be used to accurately predict which students need help and early support, that could be beneficial, he said. "The technologies sound very dangerous and scary on the surface, but if used properly or ethically, they can really improve our existence."

There are, however, growing concerns that AI and facial recognition technologies are actually relying on biased data and algorithms and could cause great harm. It is particularly alarming in the context of criminal justice, where machines could make decisions about people's lives -- such as the length of a prison sentence or whether to release someone on bail -- based on biased data from a court and policing system that is racially prejudiced at every step.

Kosinski predicted that with a large volume of facial images of an individual, an algorithm could easily detect whether that person is a psychopath or has high criminal tendencies. He said this was particularly concerning given that a propensity for crime does not necessarily translate into criminal actions: "Even people highly disposed to committing a crime are very unlikely to commit a crime."

He also cited an example referenced in the Economist -- which first reported the sexual orientation study -- that nightclubs and sport stadiums could face pressure to scan people's faces before they enter to detect possible threats of violence.

Kosinski noted that in some ways, this wasn't much different from human security guards making subjective decisions about people they deem too dangerous-looking to enter.

The law generally considers people's faces to be "public information," said Thomas Keenan, professor of environmental design and computer science at the University of Calgary, noting that regulations have not caught up with technology: no law establishes when the use of someone's face to produce new information rises to the level of privacy invasion.

Keenan said it might take a tragedy to spark reforms, such as a gay youth being beaten to death because bullies used an algorithm to out him: "Now, you're putting people's lives at risk."

Even with AI that makes highly accurate predictions, some percentage of predictions will still be incorrect.

"You're going down a very slippery slope," said Keenan, "if one in 20 or one in a hundred times … you're going to be dead wrong."

© 2017 Guardian Web under contract with NewsEdge/Acquire Media. All rights reserved.

Image credit: iStock.

Reader Comments
jay:
Posted: 2017-10-25 @ 5:29am PT
There is a degree of self selection that distorts this picture.

For example, gay people would tend to choose poses and grooming to appeal to their sex, and this would certainly be the case in dating site samples.

As for conservatives tending to be a bit more attractive (probably a bit too subjective anyhow), that may reflect a difference in acculturation: traditional thinking places a strong importance on looking good, whereas the liberal mindset is that looks are not as important, so you are likely to find a significant downplay of visual attractiveness (especially among women in matters of cosmetics and hair styling).

Worried Citizen:
Posted: 2017-10-13 @ 10:07am PT
Fantastic tools for Hitler wannabees. The GDR was a paradise compared to what is awaiting us.

© Copyright 2017 NewsFactor Network. All rights reserved.