Police departments begin using AI chatbots to write crime reports
Police departments in Colorado, Indiana and Oklahoma have begun using the tool, raising serious questions about whether AI-drafted reports can hold up in court.
Recently, some police agencies nationwide have begun using artificial intelligence (A.I.) chatbots to write incident reports. This development, while innovative, raises serious questions about the integrity and effectiveness of these reports in legal contexts, particularly in court.
The new A.I. tool, called Draft One, was developed by Axon, maker of the Taser and of body cameras widely used by police, and promises to save officers time and improve accuracy. It has also prompted concerns that officers could become overly reliant on A.I.
Axon CEO Rick Smith put it this way: "They become police officers because they want to do police work, and spending half their day doing data entry is just a tedious part of the job that they hate. ... Now, there’s certainly concerns. ... They never want to get an officer on the stand who says, well, 'The A.I. wrote that, I didn’t.'"
New technology already in use in several places around the country
The Oklahoma City Police Department is one of the agencies currently testing the chatbot to produce early drafts of incident reports.
Before testing it in the city, police showed it to local prosecutors, who were concerned about its use in serious criminal cases. For now, it is used only for minor incident reports.
In Lafayette, Ind., Police Chief Scott Galloway said that all of his officers can use Draft One on any type of case and that it has been "incredibly popular."
In Fort Collins, Colo., police Sgt. Robert Younger noted that officers are free to use it on any type of report.
Potential risks
The adoption of this technology has not come without controversy. One of the main concerns is the accuracy and reliability of A.I.-generated reports. While A.I. can process large amounts of data quickly, its ability to interpret the critical nuances and details of complex situations is still limited. This raises the question of whether such reports will hold up in court, where every word can be crucial.
There is also a risk that officers become too dependent on A.I., which could lead to a decline in the quality of police work. For example, an automated report might not fully capture the circumstances of an incident, omitting details that a human officer would consider important. That gap could prove especially problematic when the defense challenges the accuracy of the report at trial.