Facebook Inc (NASDAQ:FB) has been trying to keep the social media platform as safe as possible, so it comes as no surprise that the company is working on a new tool designed to curb one form of harassment and fraud on the platform.
The tech giant is working on a feature that will automatically notify you in case someone is trying to impersonate your account using your name and/or profile picture.
When Facebook detects that an impersonating account has been created, it sends you an alert about the account and prompts you to confirm whether the profile in question is impersonating you or actually belongs to someone else.
The notification is automated, but profiles that the original user flags as impersonations are manually reviewed by Facebook's employees. The company began testing the feature in November last year, and it is now live in 75 percent of the world. Facebook plans to expand the program's availability shortly, according to Facebook's Head of Global Safety, Antigone Davis.
The feature is part of Facebook's effort to make the site safer. Although impersonation is not a widespread problem, it is a real source of harassment for the users it affects. Davis told reporters, “We heard feedback before the roundtables and also at the roundtables that this was the point of concern for women, and it is a real point of concern for some women in some parts of the world where it [impersonation] may have certain cultural or social ramifications.”
The alerts are part of ongoing efforts on the site to make women safer around the world, Davis explained. She also mentioned the roundtable discussions Facebook has been holding with users, activists, and NGOs around the world to gather feedback on issues prevalent on the website and figure out how best to tackle them.
Facebook is also reported to be testing two other safety features: one for reporting non-consensual intimate images and a photo check-up feature. Facebook banned non-consensual intimate images in 2012, but the new feature is meant to make the reporting experience more compassionate for victims of abuse. Davis said initial tests showed the features were working well, but the company still needs more feedback and research before releasing them to the public.
Facebook already has improved privacy controls in place, but many users are not familiar with them or do not know how to use them. The photo check-up tool is available in India, South America, Africa, and Southeast Asia.