The Monitoring Horror Show
Happy Halloween. In the spirit of the season, we’re highlighting the monitoring practices that genuinely keep us up at night — not because they’re rare, but because they’re increasingly common. Every practice described below is real, deployed by actual vendors, and used by real organizations in 2022.
We’re not naming specific vendors (this isn’t about competitive attacks), but we are naming specific practices that the monitoring industry needs to confront honestly. As we argued when we discussed the bossware backlash, these practices threaten the legitimacy of the entire industry.
Webcam Emotion Detection
At least three monitoring vendors now offer “engagement analysis” features that use webcam feeds to analyze employees’ facial expressions during video calls — or worse, throughout the workday. The promise: AI that can detect whether employees are “engaged,” “distracted,” “frustrated,” or “satisfied” based on micro-expressions.
The reality: emotion detection AI is notoriously unreliable. A 2022 study from the Association for Psychological Science found that facial expressions are not reliable indicators of internal emotional states — people smile when nervous, frown when concentrating, and display culturally specific expressions that AI models consistently misinterpret.
Beyond the accuracy problems, webcam emotion detection is likely to be classified as “high-risk” or even “prohibited” under the upcoming EU AI Act, as we discussed in our ethical AI analysis. Organizations deploying this technology today are building regulatory liability into their monitoring stack.
Keystroke Cadence Personality Profiling
This one is genuinely dystopian. A handful of vendors are marketing the ability to build “personality profiles” based on keystroke patterns — typing speed, rhythm, pause patterns, and error correction habits. The claim is that these patterns can reveal personality traits, stress levels, and even deceptive behavior.
The science behind this is thin to nonexistent. While keystroke dynamics have some validated use cases in security (biometric authentication), the leap from “this person types in a distinctive way” to “this person is stressed” or “this person is lying” has no rigorous scientific support.
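To make the gap concrete: the legitimate security use of keystroke dynamics rests on simple, measurable timing features. The sketch below shows the kind of features a biometric-authentication system might extract — dwell time (how long a key is held) and flight time (the gap between keys). The event format and feature names here are illustrative assumptions, not any vendor's actual implementation; note that nothing in these numbers encodes stress, honesty, or personality.

```python
# Illustrative sketch only: the timing features that validated
# keystroke-dynamics systems use for biometric authentication.
# The event format and feature names are hypothetical.
from statistics import mean, stdev

def keystroke_features(events):
    """events: list of (key, press_ms, release_ms) tuples."""
    # Dwell time: how long each key is held down.
    dwells = [release - press for _, press, release in events]
    # Flight time: gap between releasing one key and pressing the next.
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return {
        "dwell_mean": mean(dwells),
        "dwell_std": stdev(dwells) if len(dwells) > 1 else 0.0,
        "flight_mean": mean(flights) if flights else 0.0,
    }

# Two keypresses: "h" held 80 ms, "i" held 90 ms, 70 ms gap between them.
print(keystroke_features([("h", 0, 80), ("i", 150, 240)]))
```

Features like these can distinguish one typist from another with reasonable accuracy, which is why they work for authentication. Inferring an internal psychological state from the same numbers is an entirely different, unvalidated leap.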
What makes this practice particularly insidious is that most employees don’t know it’s happening. Keystroke analysis runs silently in the background, building psychological profiles without any notification or consent mechanism beyond a buried clause in an employment agreement.
Always-On Audio Monitoring and Location Tracking
Always-on audio monitoring: At least one vendor offers ambient audio capture from employee laptops during work hours — ostensibly to analyze “collaboration patterns” based on voice interaction frequency. Set aside the privacy nightmare: in many U.S. states and most European countries, recording audio without explicit, ongoing consent is flatly illegal.
Remote worker GPS tracking: Several tools designed for field workers have been repurposed for remote employees, tracking the GPS location of company-issued phones throughout the day. The justification is “verifying work location” — but what it actually creates is a minute-by-minute location diary that tells employers where their employees go, when they leave home, and how long they spend at any given location.
Both practices share a common problem beyond privacy: they’re solving imaginary problems. If you need to monitor where a remote worker physically is to trust that they’re working, you have a management problem, not a technology problem.
Why This Matters for the Whole Industry
Every horror-show monitoring practice makes our job harder — not because we compete with these vendors, but because they poison the well for the entire industry. When employees hear “monitoring software,” the webcam emotion detectors and keystroke loggers are what they picture. That makes it harder for every ethical monitoring vendor to earn trust.
The monitoring industry needs self-regulation before external regulation arrives. The practices described above aren’t just unethical — they’re bad business. They generate unreliable data, create legal liability, and destroy the employee trust that organizations need to function.
We’re calling on the monitoring industry to adopt minimum ethical standards: no keystroke logging, no webcam analysis, no audio capture, no GPS tracking of knowledge workers, and full transparency about data collection. Until those become table stakes, “monitoring software” will continue to be a phrase that makes talented people update their resumes.
If you’re evaluating monitoring tools and encountering any of the practices described above, run. There are better options. Your employees — and your legal team — will thank you.
Timebridg is free for teams up to 3 users. No credit card required.
Get Started Free
Download Timebridg