
Can Artificial Intelligence Prevent School Violence?


More and more frequently, schools across the United States are turning to artificial intelligence-backed solutions to stop tragic acts of student violence.

Companies like Bark Technologies, Gaggle.net, and Securly, Inc., are using a combination of artificial intelligence (AI) and machine learning (ML), along with trained human safety experts, to scan student emails, texts, documents, and in some cases, social media activity. They’re looking for warning signs of cyberbullying, sexting, drug and alcohol use, and depression, and for students who may pose a violent risk not only to themselves but to classmates as well. Any potential problem discovered triggers alerts to school administrators, parents, and law enforcement officials, depending on its severity.
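
The scan-classify-alert flow described above can be sketched in a few lines of Python. The sketch below is purely illustrative and is not any vendor’s implementation: the categories, keyword lists, severity levels, and routing rules are all assumptions, and real products rely on trained ML classifiers and human review rather than simple keyword matching.

```python
# Hypothetical sketch of a severity-based content-flagging pipeline,
# loosely modeled on the scan -> classify -> alert flow described above.
# Categories, keywords, and routing rules are illustrative assumptions,
# not any vendor's actual models or policies.

from dataclasses import dataclass
from enum import IntEnum


class Severity(IntEnum):
    LOW = 1     # e.g., profanity: queue for human review
    MEDIUM = 2  # e.g., bullying: notify school administration
    HIGH = 3    # e.g., threats of violence or self-harm: escalate at once


@dataclass
class Flag:
    category: str
    severity: Severity
    excerpt: str


# Toy keyword rules standing in for the ML classifiers a real system uses.
RULES = {
    "bullying": (Severity.MEDIUM, ["loser", "nobody likes you"]),
    "self_harm": (Severity.HIGH, ["kill myself", "painless ways to die"]),
    "violence": (Severity.HIGH, ["bring a gun", "make a bomb"]),
}


def scan_message(text: str) -> list[Flag]:
    """Return a flag for every rule whose keywords appear in the text."""
    lowered = text.lower()
    return [
        Flag(category, severity, text[:80])
        for category, (severity, keywords) in RULES.items()
        if any(kw in lowered for kw in keywords)
    ]


def route_alert(flag: Flag) -> str:
    """Map a flag's severity to the recipients described in the article."""
    if flag.severity == Severity.HIGH:
        return "school administration + parents + law enforcement"
    if flag.severity == Severity.MEDIUM:
        return "school administration + parents"
    return "queue for human safety-expert review"


if __name__ == "__main__":
    for flag in scan_message("I'm going to make a bomb"):
        print(f"[{flag.category}] {flag.severity.name} -> {route_alert(flag)}")
```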

Bark Technologies

Bark ran a pilot of its program with 25 schools in fall 2017. Bark’s chief parent officer, Titania Jordan, says, “We found some pretty alarming issues, including a bombing and school shooting threat.”


Bark’s product is free to schools in the United States. The company can offer the service at no charge because it makes its money from a version of the program aimed at parents. The parent version costs $9 per month per family, or $99 per year, and includes monitoring across more than 25 social media platforms, including Twitter, Instagram, Snapchat, and YouTube.

Gaggle.net

Bill McCullough, vice president of sales at Gaggle, says, “Studies have shown that kids will communicate before a violent act happens, and they will communicate electronically. If you don’t have the means to hear those cries for help, you’re going to have children in jeopardy.”

Twenty-year-old Gaggle charges schools $6 per student, per year, for its service. The company claims to have prevented 447 suicides across the 1,400 school districts that use its service. Gaggle also says it stopped 240 instances last year in which a child brought a weapon to school to harm another student, or intended to do so. Under such circumstances, Gaggle immediately alerts an emergency contact at the school and, if needed, law enforcement.

Securly, Inc.

Securly works with about 2,000 school districts. The company charges $3 per student, per year, for a flagship product called Filter, with premium add-ons that can add about $2.50 per student to the cost.

Securly’s premium service, known as 24, combines AI with trained human analysts. This past October, 24 flagged a student who had searched Google for both “how to make a bomb” and “how to kill yourself.” An analyst then contacted the school. Another example involved a student who searched for “painless ways to kill yourself” and watched YouTube videos on the topic.
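
The value of combining signals, as in the example above, can be illustrated with a short sketch: one risky search may be ambiguous, but two distinct high-risk queries within a short window plausibly warrant escalation to a human analyst. The terms, window, and threshold below are assumptions for illustration, not Securly’s actual logic.

```python
# Illustrative escalation rule (an assumption, not Securly's logic):
# flag for human-analyst review when two or more distinct high-risk
# searches occur within a lookback window.

from datetime import datetime, timedelta

# Assumed example terms; a real system would use trained classifiers,
# not a fixed list.
HIGH_RISK_TERMS = {"how to make a bomb", "how to kill yourself"}
WINDOW = timedelta(hours=24)


def should_escalate(history: list[tuple[datetime, str]], now: datetime) -> bool:
    """True when two or more distinct high-risk queries fall inside WINDOW."""
    recent = {query.lower() for ts, query in history if now - ts <= WINDOW}
    return len(HIGH_RISK_TERMS & recent) >= 2
```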

Understanding AI’s Limitations

As advanced as these solutions are, they do have limitations.

  • None of the companies USA TODAY talked to claims to catch suspect behavior every time, and false positives sometimes arise.
  • A school can’t police a student’s smartphone or other devices beyond the ones it issued, unless the student signed in to a social media or other account using the email or credentials the school provided.
  • Students are often more tech-savvy than their parents and won’t tell them about every account they have.

Even with these limitations, school officials are embracing these solutions for the sake of their students. Rich O’Malley, superintendent of Florence 1 Schools in South Carolina (a district that pays for Gaggle), says that “just saving one life or being able to touch one student who has an issue makes it priceless. As a superintendent, probably the number one issue I hear from parents is school security.”

Taking Responsibility

AI researchers often highlight the importance of responsible innovation. But what exactly does that mean? Find out with cutting-edge online training from IEEE. Artificial Intelligence and Ethics in Design: Responsible Innovation is a five-course program focused on integrating AI and autonomous systems within product and systems design. Intended for industry professionals, this program is all about applying the ethics of AI to the business of design. Connect with an IEEE Content Specialist to learn more.

Resources

Baig, Edward C. (13 Feb 2019). “Can artificial intelligence prevent the next Parkland shooting?” USA Today.
