Playing video games is an intense experience. How many times have gamers felt the rush of exuberance as they take down the final boss, the nail-biting tension of stalking through a dangerous encounter zone, or the flicker of rage as they make that crucial mistake that damns their game? Tempers can flare when you’re caught up in the heat of the moment.
When you’re playing with friends, things might get even more heated. Worse yet, if you’re in a digital matchmaking lobby with strangers, you might find yourself succumbing to the dreaded “online disinhibition effect”: a simple formula in which the anonymity of the internet and the impersonal relationships with your teammates open up a ripe channel for consequence-free invective. Intense, team-focused multiplayer games like Overwatch, League of Legends, and Dota 2 have already picked up a bad reputation on this front. One newbie mistake, one wrong character choice, and you might find yourself hearing more than you ever wanted to about your mother’s intimate relationships.
Not only is this kind of thing bad for player morale, it’s also, in the long run, bad for business. Who wants to pick up a game where losing equals getting lambasted by strangers? Already, many multiplayer games have introduced an internal “report” system where players can flag examples of abusive conduct in the hopes that the perpetrator will face justice. But even this system has its pitfalls; for instance, a group of abusive players could wrongly incriminate an innocent teammate by ganging up and reporting them.
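To see how easily a raw report count can be gamed, consider a toy version of such a system. The sketch below is purely illustrative: the `Report` structure, the `should_flag` rule, and the threshold are hypothetical stand-ins, not any real game’s moderation API.

```python
from dataclasses import dataclass

REPORT_THRESHOLD = 3  # assumed tunable moderation setting


@dataclass
class Report:
    reporter_id: str
    target_id: str
    match_id: str


def should_flag(reports: list[Report], target_id: str) -> bool:
    """Naive rule: flag a player once enough reports pile up,
    with no regard for who filed them or when."""
    return sum(r.target_id == target_id for r in reports) >= REPORT_THRESHOLD


# Three colluding teammates can incriminate an innocent player
# in a single match: exactly the pitfall described above.
brigade = [Report(f"griefer_{i}", "innocent_newbie", "match_42") for i in range(3)]
assert should_flag(brigade, "innocent_newbie")  # wrongly flagged
```

A more robust system would presumably weight reports by each reporter’s track record and discount clusters of reports arriving from a single match or premade group.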
Tech startup System AI recently unveiled a new prototype intended to take game cooperation to the next level. Code-named “Ally,” the program aims to become a “referee” of sorts in virtual environments, detecting verbal harassment like name-calling and monitoring non-verbal bullying such as the report abuse described above.
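System AI hasn’t published Ally’s internals, so the following is only a guess at what its two detection channels might look like in miniature. The word list, thresholds, and function names are assumptions for illustration; a production system would presumably lean on a trained toxicity classifier rather than keyword matching.

```python
SLURS = {"noob", "trash", "uninstall"}  # crude stand-in for a real toxicity model


def verbal_harassment_score(message: str) -> float:
    """Verbal channel: score a chat message by its share of flagged words."""
    words = message.lower().split()
    hits = sum(w.strip(".,!?") in SLURS for w in words)
    return hits / max(len(words), 1)


def report_abuse_suspected(reports: list[dict], target_id: str, match_id: str) -> bool:
    """Non-verbal channel: several same-match reports against one player
    hint at brigading rather than genuine misconduct."""
    reporters = {
        r["reporter"]
        for r in reports
        if r["target"] == target_id and r["match"] == match_id
    }
    return len(reporters) >= 3  # assumed brigading threshold


print(verbal_harassment_score("uninstall the game, noob"))  # 0.5
```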
When Ally detects potentially abusive behavior, its response is two-pronged. Rather than simply flagging the player in question, Ally spawns a “character” of its own: an automated virtual avatar disguised as a regular player. Ally’s bot will ask whether the targeted player is alright. If the player reassures Ally that all is harmonious and it’s merely friendly banter, then all’s well; otherwise, Ally can send a report to moderators and await further action. It’s a fascinating example of the Turing test in action: part of Ally’s magic is that the player can’t tell they’re conversing with a bot. Thus far, Ally has only been tested in a limited capacity. Will jaded and cynical gamers figure out that Ally is a lie?
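Here is one way that check-in flow could be wired together, again as a hedged sketch rather than System AI’s actual code: the three callables are assumed hooks into a game’s chat and moderation pipeline, and the reassurance check is deliberately simplistic.

```python
import random

FRIENDLY_CHECKINS = [
    "hey, rough game huh. you doing ok?",
    "that chat seemed pretty harsh, you good?",
]
REASSURANCES = {"fine", "banter", "all good", "no worries"}  # assumed "all's well" signals


def ally_intervene(send_as_player, await_reply, notify_moderators, target_id):
    """Two-pronged response: check in via a disguised bot, then
    escalate to human moderators only if the player isn't reassuring."""
    # Prong 1: speak up as what looks like an ordinary teammate.
    send_as_player(target_id, random.choice(FRIENDLY_CHECKINS))
    reply = (await_reply(target_id) or "").lower()
    if any(phrase in reply for phrase in REASSURANCES):
        return  # player says it's friendly banter; stand down
    # Prong 2: hand the context to human moderators and await further action.
    notify_moderators(target_id, context=reply)


# Smoke test with stub hooks standing in for a real game client.
ally_intervene(
    send_as_player=lambda pid, msg: print(f"[bot -> {pid}] {msg}"),
    await_reply=lambda pid: "no, they've been at me all game",
    notify_moderators=lambda pid, context: print(f"[escalated] {pid}: {context}"),
    target_id="player_7",
)
```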
Open questions aside, System AI has attracted real attention in the gaming world; the small company now works closely with several major game studios that are exploring new ways to cut back on harassment. Game on!