Newsy Today
Tech

Shadow AI assistant Clawdbot raises workplace risks

by Chief Editor, January 29, 2026

The Rise of ‘Shadow AI’: How Unsanctioned Tools Like Clawdbot Are Reshaping Corporate Security

A recent report from Token Security Labs reveals a startling trend: employees are increasingly adopting personal AI assistants, often without IT's knowledge. The firm's analysis found Clawdbot (also known as Moltbot) active in 22% of its customer organizations. This isn't an isolated incident; it's a symptom of a larger shift toward "shadow AI," where powerful AI tools operate outside traditional security perimeters.

What is ‘Shadow AI’ and Why is it a Problem?

Shadow AI refers to the use of AI applications and services within an organization that haven’t been vetted or approved by the IT or security teams. Clawdbot, a locally-run AI assistant connecting to popular messaging apps like Slack, WhatsApp, and Microsoft Teams, exemplifies this. While offering convenience – calendar management, email responses, file access – it introduces significant risks. The core issue? Broad access to sensitive data coupled with lax security practices.

Consider this scenario: an employee uses Clawdbot on their personal laptop, connecting it to corporate Slack. Suddenly, confidential internal discussions, files, and even credentials are potentially accessible outside the company’s secure network. This bypasses crucial data loss prevention (DLP) controls and audit trails, making it difficult to detect and respond to breaches.

Did you know? A 2023 Gartner report estimated that 30% of organizations will experience “shadow IT” related security incidents by 2024, and AI tools are rapidly becoming a major component of this risk.

The Security Risks: Plaintext Credentials and Exposed APIs

Token Security’s investigation uncovered alarming security vulnerabilities. Clawdbot stores credentials in plaintext, meaning anyone with access to the user’s device can easily view them. Furthermore, researchers like Jamieson O’Reilly have discovered hundreds of publicly accessible Clawdbot instances with open admin dashboards, exposing API keys, OAuth tokens, and conversation histories. In some cases, remote code execution was even possible.

The lack of default sandboxing – explicitly acknowledged in Clawdbot’s documentation – further exacerbates the problem. This means the AI assistant operates with significant system access, increasing the potential damage from a successful attack. Prompt injection, where malicious instructions are embedded within seemingly harmless inputs, also poses a threat when the tool processes emails, documents, and web pages.
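One practical check follows from the plaintext-credential finding above: audit whether an agent's local files are readable beyond their owner. The sketch below is a minimal illustration, assuming a hypothetical `~/.clawdbot` install directory and Unix-style permission bits; adapt the path to whatever your endpoint inventory actually reports.

```python
import stat
from pathlib import Path

def find_exposed_credential_files(root: Path) -> list[Path]:
    """Return files under `root` that are readable by group or other users,
    a red flag when an agent stores credentials in plaintext."""
    exposed = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        mode = path.stat().st_mode
        # Flag files readable by anyone other than the owner.
        if mode & (stat.S_IRGRP | stat.S_IROTH):
            exposed.append(path)
    return exposed

if __name__ == "__main__":
    agent_dir = Path.home() / ".clawdbot"  # hypothetical install directory
    if agent_dir.exists():
        for f in find_exposed_credential_files(agent_dir):
            print(f"Group/world-readable file: {f}")
```

A real deployment would run this from endpoint-management tooling rather than ad hoc, but the permission check itself is the core of the test.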

Beyond Clawdbot: The Expanding Landscape of Personal AI

Clawdbot is just the tip of the iceberg. The proliferation of open-source Large Language Models (LLMs) and user-friendly interfaces is making it easier than ever for employees to deploy personal AI assistants. Tools like LM Studio and Ollama allow users to run powerful models locally, further blurring the lines between personal and corporate data.

This trend is fueled by a genuine desire for increased productivity. Employees are seeking ways to automate tasks, streamline workflows, and gain a competitive edge. However, without proper guidance and security measures, these efforts can inadvertently create significant vulnerabilities.

What Can Organizations Do? A Proactive Approach

Addressing the challenge of shadow AI requires a multi-faceted approach:

  • Discovery and Visibility: Monitor network traffic for patterns associated with AI assistant activity. Scan endpoints for the presence of directories like “.clawdbot”.
  • Permission and Access Control: Regularly review OAuth grants and API tokens connected to critical systems. Revoke unauthorized integrations.
  • Clear Policies: Establish clear policies regarding the use of personal AI agents, outlining acceptable use cases and security requirements.
  • Approved Alternatives: Provide employees with secure, enterprise-grade AI tools that offer the functionality they need while maintaining IT oversight.
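The discovery step above can be sketched as a simple scan over user home directories. The marker directory names below are illustrative assumptions (only `.clawdbot` is mentioned in the source; the others are common local-LLM tools), not an exhaustive inventory.

```python
from pathlib import Path

# Hypothetical marker directories left behind by personal AI tools;
# extend this list to match your own telemetry.
AGENT_MARKERS = [".clawdbot", ".ollama", ".lmstudio"]

def scan_home_dirs(homes_root: Path) -> dict[str, list[str]]:
    """Map each marker directory name to the user homes where it appears."""
    hits: dict[str, list[str]] = {m: [] for m in AGENT_MARKERS}
    for home in homes_root.iterdir():
        if not home.is_dir():
            continue
        for marker in AGENT_MARKERS:
            if (home / marker).is_dir():
                hits[marker].append(home.name)
    return hits

if __name__ == "__main__":
    for marker, users in scan_home_dirs(Path("/home")).items():
        if users:
            print(f"{marker}: found for {', '.join(users)}")
```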

Pro Tip: Implement a robust security awareness training program to educate employees about the risks associated with shadow AI and the importance of following security protocols.

The Future of AI Security: Zero Trust and Continuous Monitoring

Looking ahead, the rise of shadow AI will likely accelerate the adoption of zero-trust security models. This approach assumes that no user or device is inherently trustworthy and requires continuous verification before granting access to resources.

Continuous monitoring and threat detection will also become increasingly critical. Organizations will need to leverage AI-powered security tools to identify and respond to anomalous activity associated with shadow AI applications. The focus will shift from simply blocking these tools to understanding how they are being used and mitigating the associated risks.

Furthermore, expect to see increased collaboration between security vendors and AI developers to build more secure and responsible AI solutions. This includes incorporating privacy-preserving techniques, robust access controls, and comprehensive audit logging.

FAQ: Shadow AI and Your Organization

  • What is the biggest risk of shadow AI? The biggest risk is the potential for data breaches and unauthorized access to sensitive information due to lack of security controls and visibility.
  • How can I detect shadow AI in my organization? Monitor network traffic, scan endpoints, and review OAuth grants and API tokens.
  • Should I completely ban the use of personal AI assistants? A complete ban may not be practical or effective. Instead, focus on providing secure alternatives and establishing clear policies.
  • What is OAuth? OAuth (Open Authorization) is a standard protocol that allows users to grant third-party applications access to their data without sharing their passwords.

The emergence of shadow AI is a wake-up call for organizations. Ignoring this trend is not an option. By proactively addressing the risks and embracing a security-first approach, businesses can harness the power of AI while protecting their valuable assets.

Want to learn more about securing your organization against emerging AI threats? Explore our comprehensive security solutions or subscribe to our newsletter for the latest insights.

Tech

Tollring secures Microsoft Teams compliance nod for Analytics 365 product

by Chief Editor, December 12, 2025

Why Policy‑Based Recording Is the Next Big Thing for Microsoft Teams

Businesses that rely on Microsoft Teams for daily collaboration are racing to meet ever‑stricter data‑protection laws. The recent certification of Tollring’s Analytics 365 under Microsoft’s updated compliance‑recording standards signals a turning point: policy‑based recording combined with AI analytics is becoming the default safety net for voice, video, and chat data.

AI‑Powered Conversation Analytics – From Reactive to Proactive

Today, most compliance tools simply store recordings. Tomorrow’s solutions will understand them in real time, flagging risky language, detecting fraud patterns, and even suggesting corrective actions before a regulator knocks on the door.

  • Real‑life example: A UK‑based financial services firm used an AI‑driven analytics layer to spot a phishing attempt within a Teams call. The system automatically alerted the security team, preventing a potential $1.2 million loss.
  • Industry data: According to a 2023 Gartner survey, 68% of enterprises plan to embed AI into their compliance workflows by 2025.

Zero‑Trust Encryption Meets Immutable Audits

Encryption at rest and in transit, combined with tamper‑evident timestamps, creates an audit trail that regulators can trust. Future standards will demand that every modification attempt be cryptographically recorded, effectively turning each file into a “blockchain‑like” ledger.

Pro tip: When evaluating a compliance solution, ask for a detailed description of its cryptographic hash algorithm (SHA‑256 or higher) and how audit logs are stored.
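The tamper-evident trail described above can be illustrated as a hash chain, where each entry's SHA-256 digest covers the previous entry's hash, so altering any record breaks every link after it. This is a minimal sketch of the idea, not a production ledger or the mechanism any particular vendor uses.

```python
import hashlib
import json
import time

def append_entry(log: list[dict], event: str) -> dict:
    """Append an event whose SHA-256 hash chains over the previous entry,
    making any later modification detectable."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"event": event, "ts": time.time(), "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    entry = {**body, "hash": digest}
    log.append(entry)
    return entry

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; return False if any entry was altered."""
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev"] != prev_hash:
            return False
        body = {k: entry[k] for k in ("event", "ts", "prev")}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True
```

This is the "blockchain-like" property in miniature: integrity comes from the chained digests, not from any distributed consensus.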

Granular Participant‑Level Access – A GDPR Game‑Changer

Policy‑based tools now let participants view only the sections of a recording they were part of. This granular control not only reduces data exposure but also aligns neatly with the GDPR's data‑minimisation principle (Article 5(1)(c)).

In practice, a multinational tech firm reduced its GDPR‑related audit requests by 42 % after implementing participant‑level view restrictions, according to a case study published on Privacy International.
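Participant-level access can be modelled as a filter over recording segments. The data types below are hypothetical, a toy sketch of the concept rather than the actual Analytics 365 implementation.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """A slice of a recording and who was present during it."""
    start: float            # seconds from recording start
    end: float
    participants: set[str]

def visible_segments(segments: list[Segment], user: str) -> list[Segment]:
    """Return only the portions of a recording the user took part in,
    enforcing participant-level (data-minimising) access."""
    return [s for s in segments if user in s.participants]
```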

Seamless Integration with Microsoft’s Cloud Stack

Being an ISV (Independent Software Vendor) in Microsoft’s ecosystem means tighter integration with Azure, Teams policy engines, and the Graph API. The Microsoft Teams compliance recording framework now requires solutions to:

  1. Respect Teams’ policy controls (e.g., retention, geo‑restriction).
  2. Expose metadata through Graph for automated discovery.
  3. Pass a rigorous technical audit before being listed in the Marketplace.
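Exposing metadata through Graph for automated discovery might look like the sketch below, which pages through the Microsoft Graph `communications/callRecords` endpoint. The helper and token handling are assumptions for illustration; consult the Graph documentation for the exact permissions and app registration required.

```python
import json
import urllib.request

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def _http_get(url: str, token: str) -> dict:
    """Fetch a Graph URL with a bearer token and decode the JSON body."""
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"}
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

def fetch_call_records(token: str, get=_http_get) -> list[dict]:
    """Page through Teams call-record metadata exposed via Microsoft Graph.

    `get` is injectable so the paging logic can be tested without a network.
    """
    url = f"{GRAPH_BASE}/communications/callRecords"
    records: list[dict] = []
    while url:
        payload = get(url, token)
        records.extend(payload.get("value", []))
        url = payload.get("@odata.nextLink")  # Graph paging cursor
    return records
```

A live compliance dashboard of the kind described below would poll this metadata and join it against the recorder's own index.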

Future trends point toward real‑time compliance dashboards that pull metadata directly from Teams, giving compliance officers a live view of risk exposure across the organisation.

Emerging Trends to Watch in 2024‑2026

1. Conversational LLMs for Automated Risk Classification

Large Language Models (LLMs) are being fine‑tuned on industry‑specific vocabularies. Expect solutions that can automatically categorise a conversation as “compliant”, “potential breach”, or “high‑risk” with confidence scores.
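As a toy illustration of such classification (keyword-based rather than an LLM, with entirely hypothetical term lists and a crude confidence heuristic):

```python
# Hypothetical keyword tiers; a production system would use a fine-tuned
# model rather than string matching.
RISK_TERMS = {
    "high-risk": {"guarantee returns", "off the record", "delete the recording"},
    "potential breach": {"password", "account number", "wire transfer"},
}

def classify(transcript: str) -> tuple[str, float]:
    """Label a transcript and attach a rough confidence score based on
    how many flagged phrases it contains."""
    text = transcript.lower()
    for label, terms in RISK_TERMS.items():
        hits = [t for t in terms if t in text]
        if hits:
            confidence = min(1.0, 0.5 + 0.25 * len(hits))
            return label, confidence
    return "compliant", 0.5
```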

2. Multi‑Modal Analytics – Voice, Video, and Text United

Combining speech‑to‑text, video‑frame analysis, and chat logs creates a 360° view of each interaction. Companies like Verint already pilot multi‑modal AI to detect insider threats in real time.

3. Edge‑Based Recording for Data Sovereignty

Regulations such as the EU’s “Data Localisation” rules will push recording workloads to the edge (e.g., Azure Stack) rather than central cloud zones.

4. Automated Legal Hold & E‑Discovery

Future platforms will let legal teams set “hold” policies that instantly lock relevant recordings, generate export packages, and even redact non‑relevant content via AI before delivery.

What This Means for Your Business

Adopting a certified, AI‑enhanced compliance recorder like Analytics 365 can future‑proof your Teams environment. It delivers:

  • Reduced risk of fines (e.g., GDPR penalties up to €20 million or 4 % of global turnover).
  • Operational efficiency – investigators locate relevant calls in seconds using metadata filters.
  • Scalable security – the same solution works across a 22,000‑plus customer base, from SMBs to Fortune 500 enterprises.

Did you know? Organizations that automate compliance recording see a 30% reduction in time spent on data‑request handling, according to a recent PwC compliance study.

FAQ

What is policy‑based compliance recording?
It is a method where recordings are captured, stored, and managed according to pre‑defined organisational policies (e.g., retention, access, encryption) rather than ad‑hoc manual processes.
How does AI improve compliance?
AI can transcribe speech, index content, detect keyword patterns, and assign risk scores, turning raw recordings into searchable, actionable evidence.
Is participant‑level access compatible with GDPR?
Yes. By limiting visibility to only the data a user is directly involved with, it satisfies GDPR’s data‑minimisation principle.
Do I need an Azure subscription to use Analytics 365?
No. While Azure integration enhances performance, the solution is available through the Microsoft Marketplace and can be purchased without an existing Azure contract.
Can I export recordings for legal hold?
Absolutely. Analytics 365 maintains immutable audit logs and lets you export recordings with full metadata, ready for e‑discovery.

Take the Next Step

Ready to safeguard your Teams conversations and unlock AI‑driven insights? Contact us today to schedule a free demo, or read our deep‑dive guide for more on building a compliant communication strategy.

Have thoughts or experiences with compliance recording? Join the conversation in the comments below and subscribe to our newsletter for the latest updates on AI, privacy, and unified communications.
