Schools across the United States are turning to artificial intelligence to help prevent school shootings and other life-threatening attacks.
Companies like Bark Technologies, Gaggle.net, and Securly, Inc. are using a combination of artificial intelligence (AI) and machine learning (ML), along with trained human safety experts, to examine student data, from the emails students send to, in some cases, what they post on social media. The AI searches for inappropriate behavior, including online bullying, explicit messages, substance abuse, signs of depression, and students who may pose a threat to themselves or their classmates. Any potential threat discovered triggers alerts to school administrators, parents, and law enforcement officials, depending on the severity.
Bark began its pilot program with 25 schools in fall 2017. Bark's chief parent officer, Titania Jordan, says, “We found some pretty alarming issues, including a bombing and school shooting threat.”
Bark’s product is free to schools in the United States. The company can provide this no-cost service because its revenue comes from a parent-specific program that costs $99 a year. The software monitors more than 25 social media platforms, including the most popular ones, such as Twitter, Snapchat, and Instagram.
Bill McCullough, vice president of sales at Gaggle, says, “Studies have shown that kids will communicate before a violent act happens, and they will communicate electronically. If you don’t have the means to hear those cries out for help, you’re going to have children in jeopardy.”
Twenty-year-old Gaggle charges schools $6 per student, per year, for its service. It sends alerts to the school and law enforcement, and claims its services prevented 447 suicides and thwarted 240 threats of violence against other students last year alone.
Securly, available in 2,000 school districts, charges $3 per student, per year, for its flagship product, called Filter. Filter offers premium add-ons, including one known as 24, which can run around $2.50 per student.
24 connects the AI with human analysts, who review flagged risky behavior. An analyst reached out to a school this past October after discovering that a student was searching how to make bombs and how to kill himself.
Understanding AI’s Limitations
As advanced as these solutions are, they do have limitations.
- None of the companies USA TODAY talked to claims 100% accuracy; false positives sometimes arise.
- A school can’t monitor any device it did not issue to a student unless the student logs in with school credentials.
- Students’ parents may not be aware of every online account their children have.
Even with these limitations, school officials are on board with embracing these solutions for the sake of their students. Rich O’Malley, superintendent of Florence 1 Schools in South Carolina (a Gaggle district), says that “just saving one life or being able to touch one student who has an issue makes it priceless. As a superintendent, probably the number one issue I hear from parents is school security.”
Baig, Edward C. (13 Feb 2019). “Can artificial intelligence prevent the next Parkland shooting?” USA Today.