Predictim Babysitter App: Facebook And Twitter Take Action

An app that claims to vet babysitters is being investigated by Facebook, and has been blocked altogether by Twitter.

Predictim, based in California, offers a service that scours a prospective babysitter’s social media activity in order to provide a score out of five to suggest how safe they may or may not be.

It looks for posts about drugs, violence, or other undesirable content. Critics say algorithms should not be trusted to give advice on someone’s employability.

Earlier this month, after discovering Predictim’s activity, Facebook revoked most of the firm’s access to user data, deeming it to be in violation of the company’s policies on the use of personal data.

Facebook is now investigating whether to block the firm entirely from its platform after Predictim said it was still scraping public Facebook data in order to power its algorithms.

Predictim’s chief executive and co-founder, Sal Parsa, said that everyone looks people up on social media and on Google; his firm, he argued, is simply automating that process.

Facebook did not see it that way.

Meanwhile, Twitter said it had “recently” decided to block Predictim’s access to its users.

A spokeswoman said via email that Twitter strictly prohibits the use of its data and APIs for surveillance purposes, including performing background checks. When the company became aware of Predictim’s service, it conducted an investigation and revoked the firm’s access to Twitter’s public APIs.

An Application Programming Interface (API) allows different pieces of software to interact with one another.

In this case, Predictim would make use of Twitter’s API in order to quickly analyse a user’s tweets.
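To make the idea concrete, here is a minimal sketch of how a service of this kind might pull a user’s posts through a platform API and turn them into a score out of five. Everything here is invented for illustration: `fetch_user_tweets` is a stand-in for a real authenticated API request, and the keyword watch-list and scoring rule bear no relation to Predictim’s actual model, which the company describes as algorithmic with human review.

```python
# Illustration only: a crude sketch of scoring a user's posts.
# fetch_user_tweets stands in for a real API call (e.g. an HTTP GET
# returning JSON from a timeline endpoint); the watch-list and the
# scoring formula are hypothetical, not Predictim's method.

FLAGGED_TERMS = {"drugs", "violence", "fight"}  # hypothetical watch-list


def fetch_user_tweets(handle: str) -> list[str]:
    """Stand-in for an API request fetching a user's recent posts."""
    sample_timelines = {
        "@safe_sitter": ["Great day at the park!", "Baking cookies tonight"],
        "@risky_user": ["Got into a fight last night", "drugs are fun"],
    }
    return sample_timelines.get(handle, [])


def score_out_of_five(handle: str) -> float:
    """Map the share of flagged posts to a 0-5 score (5 = nothing flagged)."""
    tweets = fetch_user_tweets(handle)
    if not tweets:
        return 5.0
    flagged = sum(
        1 for tweet in tweets
        if any(term in tweet.lower() for term in FLAGGED_TERMS)
    )
    return round(5.0 * (1 - flagged / len(tweets)), 1)


print(score_out_of_five("@safe_sitter"))  # 5.0
print(score_out_of_five("@risky_user"))   # 0.0
```

A sketch this naive also shows why critics worry: simple keyword matching cannot distinguish “drugs are fun” from, say, a nurse discussing prescription drugs, which is exactly the kind of misinterpretation experts warned about.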

Predictim, which has been funded by a scheme set up by the University of California, gained considerable attention over the weekend thanks to a front-page story in the Washington Post. In it, experts warned of the fallibility of algorithms that might misinterpret the intent behind messages.

Predictim said its system included a human review element, meaning posts flagged as troublesome were checked manually to prevent false positives. As well as references to criminal behaviour, Predictim claims to be able to spot instances “when an individual demonstrates a lack of respect, esteem, or courteous behaviour”.
