Microsoft Teams will tag third-party bots trying to join meetings
In a significant move to bolster security and user control within its collaboration platform, Microsoft has announced plans to roll out a new feature that will automatically tag third-party bots in meeting lobbies, giving organizers unprecedented control over who—or what—joins their Teams meetings.
A Proactive Approach to Meeting Security
According to a recent update on the Microsoft 365 roadmap, this feature is currently in development and is slated for release in May 2026. When implemented, it will be available across all major platforms, including Windows, macOS, Android, and iOS, for both worldwide standard multi-tenant and GCC cloud environments.
The change represents a fundamental shift in how Teams handles bot access to meetings. Currently, third-party bots, whether note-taking assistants, transcription services, or other automated tools, can join meetings without any distinctive identification. This can lead to confusion and, potentially, security vulnerabilities.
The New Lobby Experience
Once deployed, the system will create a distinctly different experience for meeting organizers. When an external third-party bot attempts to join a Teams meeting, it will be clearly labeled and placed in the lobby rather than being able to join seamlessly like human participants.
Meeting organizers will then need to explicitly approve each bot’s entry, creating a deliberate checkpoint in the process. This means that bots can no longer be accidentally admitted alongside human attendees—organizers must consciously choose to allow each bot into the meeting space.
Microsoft articulated the rationale behind this change, stating: “During Teams meetings, if there is an external 3P bot trying to join the meeting, organizers will be able to see a clear representation of the bots while they wait in the lobby. Organizers will be required to explicitly and separately admit these bots into the meeting, if really required.”
Security Implications
This feature addresses several pressing security concerns that have emerged as Teams has become ubiquitous in professional settings. The explicit approval requirement ensures that no one inadvertently accepts external bots into meetings, giving organizers complete control over the presence of these automated participants.
The implications are particularly relevant given the increasing sophistication of cyber threats. Malicious apps controlled by threat actors could slip into meetings unnoticed, and even third-party bots used for legitimate purposes could be exploited if they join without proper oversight. By requiring explicit admission, Microsoft is creating a crucial barrier against unauthorized access.
Part of a Broader Security Strategy
This bot tagging feature is not an isolated improvement but rather part of Microsoft’s comprehensive approach to securing Teams against evolving threats. In January, the company announced that Teams will receive a call reporting feature by mid-March, allowing users to flag suspicious or unwanted calls as potential scams or phishing attempts.
Additionally, Teams has added new fraud-protection features for calls, including warnings about external callers impersonating trusted organizations in social-engineering attacks. These warnings help users identify when they might be targeted by sophisticated impersonation attempts.
Administrative Controls
In December, Microsoft introduced another layer of security by allowing administrators to block external Teams users via the Defender portal. This feature was developed specifically to counter cybercrime gangs, including ransomware groups, that have attempted to abuse the video conferencing and collaboration platform in social engineering attacks targeting victims' employees.
These groups have employed various tactics, from posing as IT support on Teams to using voice calls for distributing malware like Matanbuchus. By giving administrators the ability to block external users at a network level, Microsoft is providing organizations with powerful tools to prevent such attacks before they can even begin.
The Evolution of Collaboration Security
The introduction of bot tagging in meeting lobbies reflects a broader trend in the tech industry: as collaboration tools become more sophisticated and integral to business operations, security measures must evolve accordingly. What was once a simple video conferencing tool has transformed into a complex ecosystem where automated services, AI assistants, and human participants interact in shared digital spaces.
This evolution necessitates new approaches to identity verification and access control. The bot tagging feature represents Microsoft’s acknowledgment that not all participants in a Teams meeting are created equal—and that the system must be able to distinguish between human attendees and automated services.
Timeline and Implementation
While the May 2026 rollout date provides Microsoft with considerable development time, it also reflects the complexity of implementing such a feature across diverse environments and platforms. The company will need to ensure consistent behavior whether users are joining from desktop applications, mobile devices, or web browsers, and across different cloud configurations.
Organizations planning their security roadmaps should factor this upcoming change into their long-term strategies, particularly if they rely heavily on third-party bots for meeting functionality. The explicit approval requirement may necessitate workflow adjustments for teams accustomed to automated bot participation.
Looking Ahead
As artificial intelligence and automation continue to permeate workplace tools, features like bot tagging will likely become standard across collaboration platforms. The ability to clearly identify and control automated participants represents a maturing approach to digital collaboration—one that balances the convenience of AI-powered assistants with the fundamental need for security and user control.
Microsoft’s phased approach to Teams security, from call reporting to administrative controls to bot tagging, demonstrates a thoughtful strategy for addressing both immediate threats and longer-term challenges in the collaboration space. As the May 2026 rollout approaches, users and administrators alike can anticipate a more secure and transparent meeting experience on Teams.
Tags: Microsoft Teams, bot security, meeting safety, collaboration tools, cyber security, third-party bots, Microsoft 365, digital workspace, AI assistants, meeting controls