The AI Panopticon: Just Because You Can Doesn't Mean You Should
- Barry Thomas
- Jun 20
- 4 min read
AI meeting assistants like Granola.ai present a clear trade-off that organisations need to understand. The productivity benefits are substantial and measurable. The risks to workplace culture and staff wellbeing are equally real but harder to quantify. We're all navigating this balance without a clear roadmap.

The Clear Benefits
AI meeting transcription solves real problems. Taking notes while participating in discussions is cognitively demanding and often ineffective. Important details get missed. Action items disappear. AI tools fix this by creating comprehensive records while allowing full participation.
For management, these tools provide systematic documentation of decisions, automatic capture of commitments, and clear accountability trails. The efficiency gains are immediate and obvious.
The technology also helps level the playing field. Staff who find simultaneous note-taking challenging—whether due to language differences, neurodivergent processing styles, or other factors—can participate more effectively when documentation is automated.
The Surveillance Problem
The challenge emerges from a fundamental shift in surveillance capabilities. Previously, comprehensive workplace monitoring was theoretically possible but practically limited by human capacity to review data. AI removes this constraint entirely.
We now have the technical ability to deploy what amounts to infinite watchers—AI systems that can monitor, analyse, and report on every interaction. The key issue isn't whether organisations will actually implement comprehensive surveillance, but how staff behaviour changes when they know such surveillance is possible.
Once people understand that meetings could be analysed for sentiment, participation patterns, speaking time, or other metrics, the nature of workplace interactions shifts. This isn't speculation—it's a predictable response to potential observation.
Impacts on Workplace Dynamics
Consider what comprehensive meeting analysis would enable: real-time dashboards showing discussion dominance, idea attribution, engagement levels, and sentiment scores. All of this is technically feasible with current technology.
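To underline how low the technical bar is, here's a minimal sketch in plain Python. The transcript format, (speaker, start_seconds, end_seconds) segments, is an assumption for illustration rather than any vendor's actual output, but most transcription tools expose something similar:

```python
# Hypothetical transcript format: (speaker, start_seconds, end_seconds).
from collections import defaultdict

def speaking_time_share(segments):
    """Return each speaker's share of total speaking time (0.0 to 1.0)."""
    totals = defaultdict(float)
    for speaker, start, end in segments:
        totals[speaker] += end - start
    grand_total = sum(totals.values()) or 1.0   # avoid divide-by-zero
    return {speaker: t / grand_total for speaker, t in totals.items()}

# Three speakers, one dominating the discussion.
transcript = [
    ("Alice", 0, 120), ("Bob", 120, 135), ("Alice", 135, 300),
    ("Carol", 300, 320), ("Alice", 320, 500),
]
print(speaking_time_share(transcript))
# {'Alice': 0.93, 'Bob': 0.03, 'Carol': 0.04}
```

A "discussion dominance" dashboard is a dozen lines away from any transcript; sentiment and engagement scoring take only slightly more effort with off-the-shelf language models. None of this requires specialist tooling.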
The impact on creative and collaborative processes could be significant. Effective collaboration often requires thinking aloud, proposing imperfect ideas, and showing uncertainty. These behaviours become risky when subject to AI analysis that might flag them as problematic.
We're also entering an era where AI systems demonstrably outperform humans in many domains. Being potentially evaluated by systems that are more capable than us in specific areas introduces a new form of workplace stress that we don't fully understand yet.
Potential Mitigation Strategies
Several approaches might help balance productivity gains with human needs:
- Access restrictions: Limiting transcript and analysis access to meeting participants only. This preserves most productivity benefits while reducing surveillance concerns.
- Data retention limits: Automatically deleting meeting data after specified periods unless explicitly preserved for defined purposes.
- Purpose limitations: Technically enforcing that meeting AI is used only for productivity support, never for performance evaluation.
- Transparency requirements: Ensuring staff understand exactly what analysis is performed and how results are used.
- Consent mechanisms: Requiring explicit agreement before applying analytical tools beyond basic transcription.
These measures aren't perfect solutions. We're still learning what works and what doesn't.
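As a concrete illustration, here's a minimal sketch of how the retention and purpose controls could be enforced in code rather than left to a policy document. All names here are illustrative assumptions, not any vendor's actual API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

RETENTION = timedelta(days=30)
ALLOWED_PURPOSES = {"summary", "action_items"}   # productivity support only

@dataclass
class MeetingRecord:
    created: datetime
    transcript: str
    preserved: bool = False   # explicit opt-in to keep past the limit

def purge_expired(records, now):
    """Data retention: drop records past RETENTION unless explicitly preserved."""
    return [r for r in records if r.preserved or now - r.created <= RETENTION]

def analyse(record, purpose):
    """Purpose limitation: refuse any analysis outside the allowed set."""
    if purpose not in ALLOWED_PURPOSES:
        raise PermissionError(f"analysis for {purpose!r} is not permitted")
    return f"{purpose} generated from {len(record.transcript)} characters"

meeting = MeetingRecord(datetime(2025, 6, 1), "...transcript text...")
print(analyse(meeting, "summary"))                # fine
# analyse(meeting, "performance_review")          # -> PermissionError
later = datetime(2025, 7, 15)
print(len(purge_expired([meeting], later)))       # 0: past 30 days, not preserved
```

The design point is that "technically enforcing" means the restricted path refuses to run at all, rather than relying on users to behave.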
The Broader Context
The meeting AI question exemplifies a pattern we'll face repeatedly: powerful AI tools offer clear benefits but introduce complex human impacts. Organisations that ignore these impacts may find themselves with sophisticated technology but diminished human capability.
The challenge is particularly acute because we need engaged, creative human workers to navigate the complexities that AI can't yet handle. Creating workplace environments that undermine psychological safety and creative risk-taking is counterproductive, regardless of the efficiency gains.
Current State of Understanding
We don't have definitive answers about how to balance these competing needs. Different organisations will need different approaches based on their culture, industry, and specific requirements. What works in one context may fail in another.
What we do know is that passive adoption—implementing AI tools without considering human impacts—carries significant risks. The choices made now about relatively simple tools like meeting transcription will establish patterns for handling more powerful AI systems in the future.
The fundamental question isn't whether to use these tools but how to implement them thoughtfully. This requires ongoing attention to both the efficiency gains and the human costs, with a willingness to adjust approaches as we learn more.
The Policy Gap
At ShepherdThomas, we're seeing a consistent pattern: organisations significantly underestimate the importance of establishing AI use policies. There's a widespread assumption that it's too early for formal policies, that we should wait until the technology and use cases mature further.
This is a mistake. AI policies don't need to be perfect or permanent—they need to exist. Just as we're experimenting with AI tools and learning through implementation, we need to be experimenting with governance frameworks and learning what works. These policies will certainly evolve, but starting that evolution now is critical.
The same spirit of research and innovation we bring to AI adoption should apply to AI governance. Test policies, gather feedback, iterate. A policy that says "meeting transcripts are only accessible to attendees" might prove too restrictive—or not restrictive enough. But without starting somewhere, organisations won't develop the governance muscles they'll desperately need as AI capabilities expand.
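As a sketch of what "test, gather feedback, iterate" can look like in practice, a policy can be encoded as version-controlled data, so loosening or tightening the attendee-access rule above becomes a one-line, reviewable change. Field names and values here are illustrative assumptions, not any standard:

```python
# A policy expressed as version-controlled data: tightening or loosening it
# is a one-line diff that can be reviewed, tested, and reverted.
MEETING_AI_POLICY = {
    "version": "0.1",                       # policies are drafts: version them
    "transcript_access": "attendees_only",  # or "org_wide" after review
    "retention_days": 30,
    "analytics_allowed": False,             # no sentiment or participation scoring
}

def is_transcript_visible(policy, requester, attendees):
    """Apply the access rule; extend the mapping as the policy evolves."""
    rules = {
        "attendees_only": requester in attendees,
        "org_wide": True,
    }
    return rules[policy["transcript_access"]]

assert is_transcript_visible(MEETING_AI_POLICY, "alice", {"alice", "bob"})
assert not is_transcript_visible(MEETING_AI_POLICY, "carol", {"alice", "bob"})
```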
Waiting for perfect clarity before establishing policies means perpetually playing catch-up with technology that's advancing rapidly. The organisations that thrive will be those that treat policy development as an integral part of their AI journey, not an afterthought.
Moving Forward
Organisations face a practical challenge: they need the productivity benefits of AI tools while maintaining the human creativity and engagement essential for long-term success. This isn't a problem that will be solved once and set aside; it's an ongoing balance that will require continuous adjustment.
The conversation about AI meeting tools is really about what kinds of workplaces we're creating. Every implementation decision reflects choices about the balance between efficiency and humanity, between capability and culture. These choices deserve careful consideration, even as we acknowledge that we're all still learning what works.