Algorithm created that recognizes harassment, including sexual harassment, in emails and corporate chats

Chicago-based NexLP has developed artificial intelligence-based software that recognizes digital bullying and even sexual harassment, for example in the language of an email. According to the makers themselves, it was not an easy task, given the linguistic subtleties and the large grey areas that can exist between an approach that is not violent or offensive and one that falls into the category of harassment and bullying.

The software was created especially for companies that want to monitor all communication between employees so that harassment of a communicative nature can be identified more easily. The algorithm spots potential bullying, including sexual harassment, in emails and company documents as well as in internal chats. The software analyzes the data and sets suspicious communications aside, each with a probability score, so that a human operator, such as a human resources manager, can evaluate them and investigate further.
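As a rough illustration of this kind of triage step, the sketch below scores each message with a generic text classifier and sets aside anything above a review threshold. The model, training examples, trigger phrases and threshold are illustrative assumptions and have nothing to do with NexLP's proprietary system.

```python
# Minimal triage sketch: score each message with a probability and set the
# suspicious ones aside for human review. All data and choices below are
# placeholders, not NexLP's actual method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set (label 1 = harassing, 0 = benign).
train_texts = [
    "Great work on the quarterly report, thank you!",
    "Can we move the meeting to 3pm?",
    "You look good today, come to my office alone after hours.",
    "If you don't do what I say, you'll regret it.",
]
train_labels = [0, 0, 1, 1]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

def triage(messages, threshold=0.5):
    """Return (message, probability) pairs that exceed the review threshold."""
    probs = model.predict_proba(messages)[:, 1]  # probability of the 'harassing' class
    return [(m, p) for m, p in zip(messages, probs) if p >= threshold]

# Messages above the threshold would be routed to HR for human evaluation.
for message, score in triage(["Please send the invoice.", "Meet me alone after hours or else."]):
    print(f"{score:.2f}  {message}")
```

The point of the design, as described by the company, is that the classifier never makes the final call: it only ranks communications so that a person can look at the small fraction that seems suspicious.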

What the software does is search for specific language anomalies, based not only on the words used but also on how frequently they occur. For example, it can analyze communication patterns across days or weeks of messages. Of course, the software looks for specific “triggers” and cannot go beyond certain parameters, let alone analyze and evaluate the complex interpersonal dynamics that can develop between two or more people in their communication. However, it could be very useful as an initial pass, filtering out all the communications that, according to the algorithm, contain nothing offensive, as sketched below.
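A very simplified version of this kind of frequency-and-trigger analysis might look like the following; the trigger list and the volume-spike rule are purely hypothetical and stand in for whatever anomaly signals the real product uses.

```python
# Frequency-based anomaly sketch over days of messages: flag days with an
# unusual message volume or messages containing trigger terms. The triggers
# and the spike rule are assumptions made for this illustration only.
from collections import Counter
from statistics import mean, pstdev

TRIGGERS = {"alone", "secret", "regret"}  # hypothetical trigger terms

def daily_counts(messages):
    """Count messages per day; `messages` is a list of (date_str, text) pairs."""
    return Counter(date for date, _ in messages)

def flag_anomalies(messages, spike_factor=2.0):
    counts = daily_counts(messages)
    avg, sd = mean(counts.values()), pstdev(counts.values())
    flagged = []
    for date, text in messages:
        spike = counts[date] > avg + spike_factor * sd        # unusual message volume that day
        triggered = any(t in text.lower() for t in TRIGGERS)  # trigger word present
        if spike or triggered:
            flagged.append((date, text))
    return flagged

history = [
    ("2019-11-01", "Can you review the draft?"),
    ("2019-11-02", "Thanks, looks good."),
    ("2019-11-08", "Come see me alone after work."),
]
print(flag_anomalies(history))  # only the last message is flagged here
```

Anything the rules do not flag would simply be skimmed off, which is exactly the "initial pass" role the article describes.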

Of course, such software also raises the problem of data confidentiality and of privacy in general: what happens if the software makes a mistake and the data is nonetheless disclosed? A tool of this kind could, by its very nature, facilitate abuse by human resources operators and managers and lead to inappropriate decisions.

The creators of the software, which is not the only product that analyzes messages to detect harassment, especially sexual harassment, to the point that people are already talking about a #MeTooBot, respond by stressing that the software itself carries no discriminatory weight: it can only label a given communication, which must in any case be evaluated and interpreted by human beings.

Sarah Foster

I am the founder of Interfaith News and am responsible for all editorial decisions here. Prior to founding this publication, I was a lecturer in Biology at Macquarie University.

Email contact: [email protected]
Local number: 0491 570 156
International number: +61 491 570 156