In suicide prevention, every minute of response time matters. That’s why the technology team at Crisis Text Line, a well-known nonprofit in New York City, analyzed some 65 million text messages to determine which words were most strongly associated with a high risk of suicide. Analysis at this scale would clearly be infeasible without some form of automation, and its results surprised the team.
Use of the term “EMS” in a text, for example, is five times more predictive of high suicide risk than the word “suicide” itself. With this analysis, the team can now better prioritize incoming messages, much like the triage system in a hospital emergency department. As a result, the organization now responds to 94 percent of high-risk texters in under five minutes.
In 2016 the NGO charity: water created a chatbot that simulates conversations with a fictional Ethiopian girl named “Yeshi” as a way to raise awareness about access to clean water. Organizations are also experimenting with using chatbots for research purposes, such as interviewing people in Nigeria and Haiti on the state of food prices and food security.
Great article by Gideon Rosenblatt and Abhishek Gupta. Click here for more examples of how AI can be a force for good!