Facebook announced last week that, as part of its ongoing effort to help people build supportive, safe, informed communities, it will use artificial intelligence to help identify when a person on its platform is contemplating suicide. Depending on the situation, Facebook may send resources to the user or their friends, or contact local first responders.

The announcement specified that artificial intelligence will be employed to improve pattern recognition in both posts and live streams, increasing accuracy and cutting down on false positives. By incorporating AI, Facebook hopes to pinpoint those who need help more efficiently and to speed up the process of providing it.
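Facebook has not published details of its models, but the general shape of threshold-based pattern recognition is easy to sketch. The Python fragment below is a toy illustration only: the phrases, weights, and threshold are invented for this article, not drawn from Facebook's system, and a production classifier would rely on learned features rather than a keyword list.

```python
# Hypothetical sketch of threshold-based text flagging. The phrases,
# weights, and threshold are invented for illustration only.

CONCERNING_PHRASES = {
    "want to disappear": 0.6,
    "no reason to go on": 0.8,
    "saying goodbye": 0.5,
}

FLAG_THRESHOLD = 0.75  # raising this cuts false positives, at the cost of more misses


def risk_score(post: str) -> float:
    """Sum the weights of any concerning phrases found in the post."""
    text = post.lower()
    return sum(w for phrase, w in CONCERNING_PHRASES.items() if phrase in text)


def should_flag(post: str) -> bool:
    """Flag the post for human review only when the score clears the threshold."""
    return risk_score(post) >= FLAG_THRESHOLD


if __name__ == "__main__":
    print(should_flag("There is no reason to go on."))    # True  (0.8 >= 0.75)
    print(should_flag("Saying goodbye before my trip!"))  # False (0.5 < 0.75)
```

The threshold is where the accuracy trade-off Facebook describes lives: set it too low and moderators drown in false positives, too high and genuine cries for help slip through.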

AI will also be used to prioritize reports from users, allowing moderators to respond more effectively to community-generated alerts. Along with these efforts, Facebook is planning to engage more moderators in its suicide prevention operations, and it has teamed up with multiple local partners to provide greater prevention resources.
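Facebook has likewise not said how this prioritization works under the hood. One plausible shape, sketched below with invented scores and field names, is a priority queue that surfaces the reports a model judges most urgent to moderators first.

```python
# Hypothetical sketch of report prioritization: a priority queue that
# hands moderators the highest-urgency community reports first. The
# report structure and scores are invented for illustration.
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class Report:
    neg_urgency: float             # negated so the min-heap pops highest urgency first
    post_id: str = field(compare=False)
    reason: str = field(compare=False)


queue: list[Report] = []


def submit(post_id: str, reason: str, model_urgency: float) -> None:
    """Add a community report, weighted by the model's urgency estimate."""
    heapq.heappush(queue, Report(-model_urgency, post_id, reason))


def next_for_review() -> Report | None:
    """Return the most urgent outstanding report, if any."""
    return heapq.heappop(queue) if queue else None


submit("post-101", "self-harm language", 0.92)
submit("post-102", "spam", 0.10)
submit("post-103", "distressing live stream", 0.88)

report = next_for_review()
print(report.post_id)  # post-101, the highest-urgency report
```

The point of a scheme like this is that a report flagged as urgent by the model jumps the queue, rather than waiting behind routine complaints filed earlier.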

While the AI rollout has begun outside the US, it has not yet been implemented stateside. Facebook expects it to eventually be used worldwide, with the exception of the EU.

Facebook did not elaborate on the exclusion of the EU, but EU countries have very different internet laws than the United States, including stipulations on individual privacy. Indeed, privacy advocates have voiced some concern over Facebook’s announcement, wondering whether scanning all posts and comments for concerning phrases, as this AI does, might lead to a troubling level of surveillance. Fortune published an opinion piece suggesting that, at this stage of AI development, AI could do more harm than good.

In that piece, Srini Pillay cited a recent study finding that even trained clinicians are unable to accurately predict which subjects are at true high risk of suicide, and wondered whether current AI is likely to do any better, or whether these highly publicized steps by Facebook will instead merely discourage suicidal people from using social media for fear of being discovered.

Alex Stamos, Facebook’s chief security officer, evidently aware of these concerns, responded in a tweet: “The creepy/scary/malicious use of AI will be a risk forever, which is why it’s important to set good norms today around weighing data use versus utility and be thoughtful about bias creeping in.”

Whether or not that tweet provides comfort in the short term, Facebook’s move is a reminder of the many possible uses of AI, and it opens new discussions about how AI will shape our future, along with the complicated conversations about privacy, efficacy, and moral quandaries that must follow.

Image via Adobe Stock
