At least one video game company has considered using large language model AI to spy on its developers. The CEO of TinyBuild, which publishes Hello Neighbor 2 and Tinykin, discussed the idea during a recent talk at this month’s Develop:Brighton conference, explaining how ChatGPT could be used to try to monitor employees who are toxic, at risk of burning out, or simply talking about themselves too much.

“This one was quite bizarrely Black Mirror-y for me,” admitted TinyBuild boss Alex Nichiporchik, according to a new report by WhyNowGaming. The report detailed ways that transcripts from Slack, Zoom, and various task managers, with identifying information removed, could be fed into ChatGPT to identify patterns. The AI chatbot would then apparently scan the information for warning signs that could be used to help identify “potential problematic players on the team.”
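To make the described pipeline concrete, here is a minimal sketch of what feeding an anonymized transcript to ChatGPT might look like, assuming the pre-1.0 `openai` Python client. The helper name, prompt wording, and model choice are invented for illustration, and the anonymization step (stripping identifying information, as described in the talk) is assumed to happen upstream; this is not TinyBuild’s actual tooling.

```python
import os
import openai

# API key read from the environment rather than hard-coded
openai.api_key = os.environ.get("OPENAI_API_KEY")

def flag_warning_signs(anonymized_transcript: str) -> str:
    """Ask the chat model to list possible burnout or conflict signals
    in a transcript that has already had names removed."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": ("You review anonymized workplace transcripts and "
                         "list possible signs of burnout or team conflict.")},
            {"role": "user", "content": anonymized_transcript},
        ],
    )
    return response["choices"][0]["message"]["content"]
```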

Nichiporchik took issue with how WhyNowGaming framed the presentation, claiming in an email to Kotaku that he was describing a thought experiment, not practices the company actually employs. “This part of the presentation is hypothetical. Nobody is actively monitoring employees,” he wrote. “I spoke about a situation where we were in the middle of a critical situation in a studio where one of the leads was experiencing burnout, we were able to intervene fast and find a solution.”

While the presentation may have been aimed at the overarching concept of trying to predict employee burnout before it happens, and thus improve conditions for both developers and the projects they’re working on, Nichiporchik also appeared to hold some controversial views on which types of behavior are problematic and how best for HR to flag them.


In Nichiporchik’s hypothetical, one thing ChatGPT would monitor is how often people refer to themselves using “me” or “I” in office communications. Nichiporchik referred to employees who talk too much during meetings, or too much about themselves, as “Time Vampires.” “Once that person is no longer with the company or with the team, the meeting takes 20 minutes and we get five times more done,” he suggested during his presentation, according to WhyNowGaming.
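The pronoun-monitoring idea amounts to simple word-frequency counting, and doesn’t strictly need a language model at all. Here is a toy, standard-library illustration of what it might look like; the function name, word set (limited to “I” and “me,” the two words the talk reportedly mentioned), and example messages are all invented for this sketch.

```python
import re
from collections import Counter

# The talk reportedly singled out "me" and "I", so that's all we count here.
SELF_WORDS = {"i", "me"}

def self_reference_rate(messages: list[str]) -> float:
    """Fraction of all words across the messages that are 'I' or 'me'."""
    words = [w for msg in messages
             for w in re.findall(r"[a-z']+", msg.lower())]
    if not words:
        return 0.0
    counts = Counter(words)
    return sum(counts[w] for w in SELF_WORDS) / len(words)

# e.g. a rate far above a team's baseline might be surfaced for review
print(self_reference_rate(["I think my estimate is right", "ship it"]))
```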

Another controversial theoretical practice would be surveying employees for names of coworkers they had positive interactions with in recent months, and then flagging the names of people who are never mentioned. These three methods, Nichiporchik suggested, could help a company “identify someone who is on the verge of burning out, who might be the reason the colleagues who work with that person are burning out, and you might be able to identify it and fix it early on.”
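The survey idea reduces to a set difference: collect every name mentioned across responses, then subtract that from the full roster. A hypothetical sketch, with invented names and data, just to show the mechanics:

```python
def never_mentioned(roster: set[str],
                    responses: list[set[str]]) -> set[str]:
    """Return roster members who appear in no one's list of coworkers
    they reported positive interactions with."""
    mentioned: set[str] = set().union(*responses) if responses else set()
    return roster - mentioned

roster = {"Alice", "Bob", "Carol"}
responses = [{"Alice"}, {"Alice", "Bob"}]
print(never_mentioned(roster, responses))  # {'Carol'} would be flagged
```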

This use of AI, theoretical or not, prompted swift backlash online. “If you have to repeatedly qualify that you know how dystopian and horrifying your employee monitoring is, you might be the fucking problem my guy,” tweeted Warner Bros. Montreal writer Mitch Dyer. “A great and horrific example of how using AI uncritically has those in power taking it at face value and internalizing its biases,” tweeted UC Santa Cruz associate professor Mattie Brice.

Corporate interest in generative AI has spiked in recent months, prompting backlash from creatives across many different fields, from music to gaming. Hollywood writers and actors are both currently striking after negotiations with movie studios and streaming companies stalled, in part over how AI could be used to create scripts or capture actors’ likenesses and use them in perpetuity.
