To Fight Revenge Porn, Facebook Wants To See Your Nude Pictures
By Olivia Solon
Published: November 8, 2017
Facebook is asking users to send the company their nude photos in an effort to tackle revenge porn and give some control back to victims of this type of abuse.

Individuals who have shared intimate, nude or sexual images with partners and are worried that the partner (or ex-partner) might distribute them without their consent can use Messenger to send the images to be "hashed." This means that the company converts the image into a unique digital fingerprint that can be used to identify and block any attempts to re-upload that same image.
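To make the "digital fingerprint" idea concrete, here is a minimal sketch of how an exact-match hash blocklist could work. This is not Facebook's actual implementation; SHA-256 is used here simply as a stand-in for the fingerprinting step, and the function names are illustrative.

```python
import hashlib

# Minimal illustration (not Facebook's implementation): derive a fixed-size
# "fingerprint" from an image's bytes and check new uploads against a blocklist.

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest that uniquely identifies this exact byte sequence."""
    return hashlib.sha256(image_bytes).hexdigest()

blocked_hashes = set()  # only the digests are stored long-term, not the photos

def report_image(image_bytes: bytes) -> None:
    """Record the fingerprint of an image a victim has reported."""
    blocked_hashes.add(fingerprint(image_bytes))

def allow_upload(image_bytes: bytes) -> bool:
    """Reject an upload whose fingerprint matches a previously reported image."""
    return fingerprint(image_bytes) not in blocked_hashes
```

Because only the digest needs to persist, the reported photo itself can be discarded once it has been hashed, which is consistent with the short retention period described below.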

Facebook is piloting the technology in Australia in partnership with a government agency headed by the e-safety commissioner, Julie Inman Grant, who told ABC it would allow victims of "image-based abuse" to take action before pictures were posted to Facebook, Instagram or Messenger.

"We see many scenarios where maybe photos or videos were taken consensually at one point, but there was not any sort of consent to send the images or videos more broadly," she told the Australian broadcaster.

Carrie Goldberg, a New York-based lawyer who specializes in sexual privacy, said: "We are delighted that Facebook is helping solve this problem -- one faced not only by victims of actual revenge porn but also individuals with worries of imminently becoming victims.

"With its billions of users, Facebook is one place where many offenders aggress because they can maximize the harm by broadcasting the nonconsensual porn to those most close to the victim. So this is impactful."

In the Australian pilot, users must first complete an online form on the e-safety commissioner's website outlining their concerns. They will then be asked to send the pictures they are concerned about to themselves on Messenger while the e-safety commissioner's office notifies Facebook of their submission. Once Facebook gets that notification, a community operations analyst will access the image and hash it to prevent future instances from being uploaded or shared.

Facebook will store these images for a short period of time before deleting them to ensure it is enforcing the policy correctly, the company said.

Roughly 4% of US internet users have been victims of revenge porn, according to a 2016 report from the Data & Society Research Institute. The proportion rises to 10% among women under the age of 30.

This builds on existing tools Facebook has to deal with revenge porn. In April the social networking site released reporting tools to allow users to flag intimate photos posted without their consent to "specially trained representatives" from the site's community operations team who "review the image and remove it if it violates [Facebook's] community standards." Once a picture has been removed, photo-matching technology is used to ensure the image isn't uploaded again.

Facebook and other technology companies use this type of photo-matching technology, in which images are "hashed," to tackle other types of content, including child sexual abuse and extremist imagery.

The technology was first developed in 2009 by Microsoft, working closely with Dartmouth and the National Center for Missing and Exploited Children to clamp down on the same images of sexually abused children being circulated over and over again on the internet. There was technology that could find exact matches of images, but abusers could get around this by slightly altering the files -- either by changing their size or adding a small mark.
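The weakness of exact matching is easy to demonstrate: with a cryptographic hash, even a one-byte change to a file produces a completely unrelated digest. The snippet below is a hypothetical illustration, with placeholder bytes standing in for the image.

```python
import hashlib

original = b"...original image bytes..."   # placeholder for the unmodified photo
altered = original + b"\x00"               # a trivial edit, e.g. one extra byte

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(altered).hexdigest())
# The two digests have nothing in common, so an exact-match blocklist
# misses even trivially modified copies of a known image.
```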

PhotoDNA's "hash" matching technology made it possible to identify known illegal images even if someone had altered them. Facebook, Twitter and Google all use the same hash database to identify and remove illegal images.
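PhotoDNA itself is proprietary, but the general idea of a perceptual hash can be sketched with a much simpler algorithm, a difference hash (dHash), which tolerates resizing, recompression and small edits. The file names, the bit-difference threshold and the use of the Pillow library here are illustrative assumptions, not details from the article.

```python
from PIL import Image  # pip install Pillow

def dhash(path: str, hash_size: int = 8) -> int:
    """Difference hash: a small perceptual fingerprint that survives resizing,
    recompression and minor edits (a simplified stand-in for PhotoDNA)."""
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of bits in which two hashes differ."""
    return bin(a ^ b).count("1")

# Treat two files as the same image if their hashes differ in only a few bits,
# so changing the size or adding a small mark no longer evades the match.
if hamming(dhash("reported.jpg"), dhash("upload.jpg")) <= 5:
    print("match: block the upload")
```

Matching on a small Hamming distance rather than exact equality is what defeats the size-change and small-mark tricks described above, and a shared hash database lets multiple platforms block the same known image.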

Hany Farid, a professor of computer science at Dartmouth who helped develop PhotoDNA, described Facebook’s pilot as a "terrific idea."

"The deployment of this technology would not prevent someone from sharing images outside of the Facebook ecosystem, so we should be encourage all online platforms to participate in this program, as we do with PhotoDNA," he said.

A Facebook spokeswoman said the company was exploring additional partners and countries.

© 2017 Guardian Web under contract with NewsEdge/Acquire Media. All rights reserved.

Image credit: iStock.

Reader comment from Hozz (posted 2017-11-08 @ 2:35pm PT): 30 seconds with image editing software and the hash can be changed.
