Crack open that George Orwell book you’ve been ignoring since high school because it’s time to talk about Big Brother.
If you’re typing away at work on Slack, Microsoft Teams, or Zoom (yes, even in those breakout rooms), there’s a good chance artificial intelligence is peeking over your digital shoulder. Some of America’s biggest employers, like Walmart, Delta Air Lines and T-Mobile, are enlisting the help of Aware – a seven-year-old startup – to keep tabs on their employees’ communications.
Jeff Schumann, co-founder and CEO of the Columbus-based startup, says his AI tool can suss out potential risks lurking within company communication channels. It’s like an annual employee survey but without the need for free pizza bribes. The technology uses anonymized data from Aware’s analytics product to gauge how different segments of employees feel about new policies or marketing campaigns.
In addition to being an office mood ring, these AI models can also sniff out instances of bullying, discrimination and non-compliance, among other things. Individual names aren’t flagged, however, except in extreme cases that require intervention by HR or legal teams.
A few companies didn’t respond when asked about their use of Aware, while AstraZeneca noted it uses only the eDiscovery feature, without any sentiment monitoring. Delta Air Lines, however, does use Aware’s analytics to monitor trends and sentiment and gather feedback from employees.
Reading all this might have you thinking we’re teetering on the edge of a dystopian hellscape. And honestly? You may not be wrong.
Jutta Williams, co-founder of Humane Intelligence (a nonprofit focused on AI accountability), warns that using AI in insider-risk programs could be problematic because it equates people with inventory items. The rise of employee-surveillance AI is part of the larger surge in the AI market that followed OpenAI’s launch of the chatbot ChatGPT in late 2022.
Aware has reported an average annual revenue jump of 150% over the past five years and boasts clients with about 30,000 employees each. Its top competitors include big names like Qualtrics, Relativity, Proofpoint, Smarsh and Netskope.
‘Tracking real-time toxicity’
Schumann started Aware after working almost eight years at insurance company Nationwide, where he got really good at navigating office politics and water-cooler gossip sessions. Before that gig, though, he was already dabbling in Orwellian vibes: he founded BigBrotherLite.com, software designed to enhance the digital viewing experience for fans of the CBS reality series “Big Brother”.
Now, at Aware, he’s taken things to a whole new level. Each year, the company publishes a report based on billions of messages sent across large companies, analyzing perceived risk factors and workplace sentiment scores.
The tool tracks not just text but also images and videos shared within the company network. This data helps create an internal social graph showing which teams talk to each other the most.
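To make that idea concrete, here is a minimal sketch of how a team-level social graph could be assembled from message metadata. The record fields and team names are hypothetical, invented purely for illustration; nothing here reflects Aware’s actual schema or pipeline.

```python
# Illustrative sketch: count interactions between teams that share channels.
# The message records and field names below are hypothetical.
from collections import Counter
from itertools import combinations

messages = [
    {"sender_team": "Sales", "channel_teams": {"Sales", "Marketing"}},
    {"sender_team": "Engineering", "channel_teams": {"Engineering", "Product"}},
    {"sender_team": "Marketing", "channel_teams": {"Sales", "Marketing"}},
]

edge_weights = Counter()
for msg in messages:
    # Count one interaction between every pair of teams present in the channel.
    for team_a, team_b in combinations(sorted(msg["channel_teams"]), 2):
        edge_weights[(team_a, team_b)] += 1

# The heaviest edges show which teams talk to each other most.
for (team_a, team_b), weight in edge_weights.most_common():
    print(f"{team_a} <-> {team_b}: {weight} interactions")
```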
Schumann explains that if there’s a sudden positive spike in employee sentiment, it means something good is happening collectively within the organization. The technology then works to identify what that “something” is.
Aware confirms that it uses enterprise client data to train its machine-learning models. It takes about two weeks for the models to learn the patterns of emotion and sentiment among a new client’s employees so they can distinguish normal from abnormal behavior.
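As a rough illustration of that “learn a baseline, then flag deviations” pattern, here is a hedged sketch: a simple z-score check over daily sentiment scores. The two-week window, the threshold and the scores are all assumptions made up for the example, not a description of Aware’s models.

```python
# Illustrative sketch: establish a baseline of daily sentiment, then flag
# days that deviate sharply from it. Numbers are invented for demonstration.
from statistics import mean, stdev

def flag_abnormal_days(daily_sentiment, baseline_days=14, threshold=3.0):
    """Return (day, score, z-score) for days far outside the baseline."""
    baseline = daily_sentiment[:baseline_days]
    mu, sigma = mean(baseline), stdev(baseline)
    flagged = []
    for day, score in enumerate(daily_sentiment[baseline_days:], start=baseline_days):
        z = (score - mu) / sigma if sigma else 0.0
        if abs(z) > threshold:
            flagged.append((day, score, round(z, 2)))
    return flagged

# Mostly stable sentiment, then a sharp drop on the final day.
scores = [0.62, 0.60, 0.63, 0.61, 0.59, 0.64, 0.62, 0.60,
          0.61, 0.63, 0.62, 0.60, 0.61, 0.63, 0.62, 0.61, 0.60, 0.12]
print(flag_abnormal_days(scores))
```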
Despite claims of anonymized or aggregated data use, research suggests these methods aren’t foolproof when it comes to maintaining privacy: one landmark study on data privacy, using 1990 U.S. Census data, found that 87% of Americans could be identified by ZIP code, birth date and gender alone.
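That kind of re-identification is easy to demonstrate in miniature: even with names stripped, the combination of ZIP code, birth date and gender is often unique, so it can be joined against a public dataset such as voter rolls. The toy records below are fabricated for illustration only.

```python
# Illustrative sketch: how many "anonymized" records are unique on the
# quasi-identifiers ZIP code, birth date and gender? Records are made up.
from collections import Counter

anonymized_records = [
    {"zip": "43215", "birth_date": "1985-03-12", "gender": "F"},
    {"zip": "43215", "birth_date": "1990-07-04", "gender": "M"},
    {"zip": "43004", "birth_date": "1985-03-12", "gender": "F"},
    {"zip": "43215", "birth_date": "1990-07-04", "gender": "M"},  # shared combo
]

combo_counts = Counter(
    (r["zip"], r["birth_date"], r["gender"]) for r in anonymized_records
)
unique = sum(1 for count in combo_counts.values() if count == 1)
print(f"{unique} of {len(anonymized_records)} records have a unique "
      "ZIP/birth-date/gender combination and could be re-identified "
      "by joining against an outside dataset.")
```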
Amba Kak of the AI Now Institute at New York University expressed worries about letting AI determine what counts as risky behavior, since that could make people wary of what they say at work for fear of surveillance. She added that worker-rights issues are just as important as privacy issues here.
Schumann clarifies that Aware’s AI models don’t make decisions; they merely identify potential risks or policy violations. He also addressed concerns that employees couldn’t defend themselves if they were disciplined or fired over flagged interactions, saying the model provides full context around what happened and which policy was triggered, so investigation teams can decide on next steps consistent with company policies and the law.