Newsy Today

Tag: Anthropic

Accenture and Anthropic Launch Cyber.AI, an Advanced AI Cybersecurity Solution for Organisations

by Chief Editor March 27, 2026

The Rise of Machine-Speed Cybersecurity: How AI is Reshaping Digital Defense

The cybersecurity landscape is undergoing a seismic shift. Traditional, human-driven security operations are struggling to keep pace with increasingly sophisticated and rapid attacks. A new era of “machine-speed” defense is dawning, powered by artificial intelligence. Accenture’s recent launch of Cyber.AI, a platform built on Anthropic’s Claude AI model, exemplifies this trend and signals a fundamental change in how organizations protect themselves.

From Weeks to Hours: The Compression of Attack Timelines

Attackers are leveraging AI to dramatically shorten their attack cycles. As Damon McDougald, global Cybersecurity Services lead at Accenture, points out, adversaries are compressing timelines “from weeks to hours.” This acceleration renders traditional security controls, designed for slower, human-paced threats, increasingly ineffective. The World Economic Forum’s 2026 Global Cyber Outlook Report highlights that nearly 90% of organizations now identify AI-related vulnerabilities as their fastest-growing cyber risk.

Cyber.AI: Orchestrating AI Agents for Proactive Defense

Cyber.AI addresses this challenge by automating security operations across the entire lifecycle – from threat detection and triage to remediation and system transformation. The platform combines Claude’s advanced reasoning capabilities with Accenture’s extensive library of proprietary AI agents and over two decades of cybersecurity expertise. It orchestrates these agents across critical domains like identity and access management, cyber defense, and secure digital infrastructure.

A key component of Cyber.AI is “Agent Shield,” which governs these autonomous AI agents in real-time, ensuring they operate within organizational policies and risk tolerance. This focus on secure AI is crucial, as organizations grapple with the potential risks associated with deploying AI-powered security tools.

Real-World Impact: Dramatic Improvements in Efficiency

Accenture’s internal deployment of Cyber.AI has yielded impressive results. Scan times have been reduced from three to five days to under an hour, while security test coverage has jumped from approximately 10% to over 80%. This increased efficiency has led to a 35% improvement in service delivery and a significant reduction in the backlog of critical vulnerabilities.

A global Fortune 500 agriculture organization successfully used Cyber.AI to enhance its identity and access management (IAM) operations, accelerating identity platform migrations with greater precision. This demonstrates the platform’s ability to automate complex cybersecurity processes and strengthen resilience.

The Role of Anthropic’s Claude: Reasoning at Scale

Anthropic’s Claude serves as the central reasoning engine within Cyber.AI, synthesizing security data and providing contextual insights. Michael Moore, Head of Cybersecurity Products at Anthropic, emphasizes that Claude was “built for” the demands of cybersecurity, specifically its need for reasoning across vast datasets, autonomous action, and strict governance.

Beyond Automation: The Future of Agentic AI in Cybersecurity

The launch of Cyber.AI is part of a broader trend towards “agentic AI” in cybersecurity. This involves deploying AI agents to perform specific tasks autonomously, orchestrated by a central platform. IDC Research Vice President Craig Robinson notes that organizations need to “orchestrate agents across their security ecosystem with coordination and scale” to keep pace with evolving threats. Cyber.AI aims to provide this orchestration, enabling purpose-built, on-demand AI security.
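
To make the idea of agent orchestration more concrete, here is a deliberately simplified Python sketch of the pattern: a coordinator triages security findings, routes serious ones to a remediation agent, and passes every action through a governance gate before it runs. Every name and rule below is invented for illustration and does not describe how Cyber.AI or Agent Shield actually works.

# Hypothetical sketch of agentic orchestration; nothing here reflects Cyber.AI's code.
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str
    severity: str      # e.g. "low", "high", "critical"
    description: str

def triage_agent(finding: Finding) -> str:
    """Decide what kind of action a finding needs."""
    return "remediate" if finding.severity in ("high", "critical") else "monitor"

def remediation_agent(finding: Finding) -> str:
    """Stand-in fix; a real agent would call patching or ticketing APIs."""
    return f"patched {finding.asset}: {finding.description}"

def policy_gate(finding: Finding) -> bool:
    """Toy governance check: only auto-remediate production assets when critical."""
    return not finding.asset.startswith("prod-") or finding.severity == "critical"

def orchestrate(findings: list[Finding]) -> list[str]:
    """Route each finding through triage, governance, and remediation."""
    log = []
    for f in findings:
        if triage_agent(f) == "remediate" and policy_gate(f):
            log.append(remediation_agent(f))
        else:
            log.append(f"queued {f.asset} for human review")
    return log

print(orchestrate([Finding("prod-db-01", "critical", "outdated TLS library"),
                   Finding("dev-web-02", "low", "verbose error pages")]))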

FAQ: AI-Powered Cybersecurity

  • What is machine-speed cybersecurity? It refers to the ability to detect, analyze, and respond to threats at a speed that surpasses human capabilities, leveraging the power of artificial intelligence.
  • What is Agent Shield? Agent Shield is a component of Cyber.AI that monitors and governs autonomous AI agents in real-time, ensuring they adhere to organizational policies.
  • How does Cyber.AI improve security testing coverage? By automating security scans and leveraging AI-powered analysis, Cyber.AI can significantly expand the scope of security testing, identifying vulnerabilities that might be missed by manual processes.
  • What is agentic AI? Agentic AI involves deploying AI agents to perform specific tasks autonomously, orchestrated by a central platform.

Pro Tip: Regularly review and update the governance policies governing your AI agents to ensure they align with evolving security threats and organizational risk tolerance.

As AI adoption continues to accelerate, the need for robust, AI-powered cybersecurity solutions will only grow. Platforms like Cyber.AI represent a critical step towards a future where organizations can proactively defend themselves against increasingly sophisticated threats, operating at the speed of machine intelligence.

Did you know? Accenture has already secured 1,600 applications and over 500,000 APIs using Cyber.AI within its own infrastructure.

Explore more about the evolving cybersecurity landscape and how AI is transforming digital defense. Read our latest insights here.


Anthropic Wins Injunction Against DoD Over Supply Chain Risk Label

by Chief Editor March 27, 2026

Judge Pauses Pentagon’s ‘Supply Chain Risk’ Designation for AI Firm Anthropic

A federal judge has issued a preliminary injunction blocking the U.S. Department of Defense (DoD) from labeling Anthropic, a leading artificial intelligence company, as a “supply chain risk.” This ruling represents a significant win for Anthropic as it battles the Pentagon over restrictions on its AI technology and could reshape how the government interacts with rapidly evolving AI firms.

The Dispute: AI, Autonomous Weapons, and Control

The core of the conflict stems from Anthropic’s attempts to prevent its AI technology, specifically its Claude chatbot, from being used in the development of fully autonomous weapons or for surveillance of American citizens. The Trump administration, which had redesignated the Department of Defense as the Department of War, responded by effectively attempting to cut ties with Anthropic, citing concerns about usage restrictions the company placed on its technology.

This led to directives that ultimately designated Anthropic as a supply chain risk, a label that has hindered its ability to secure government contracts and damaged its reputation. Anthropic countered with two lawsuits, arguing the sanctions were unconstitutional and retaliatory.

Judge Lin’s Concerns: Punishment, Not Security

U.S. District Judge Rita Lin expressed skepticism throughout the hearings, suggesting the DoD’s actions appeared to be less about legitimate national security concerns and more about punishing Anthropic for challenging the administration’s contracting position. She said the government’s actions looked like “an attempt to cripple Anthropic.”

In her ruling, Judge Lin found the DoD’s designation “likely both contrary to law and arbitrary and capricious,” noting there was no legitimate basis to suspect Anthropic would sabotage its own technology simply because it sought usage restrictions.

What the Injunction Means – And Doesn’t Mean

The preliminary injunction restores the status quo to February 27th, before the restrictive directives were issued. Crucially, it doesn’t require the DoD to use Anthropic’s products, nor does it prevent the department from seeking alternative AI providers. However, it prohibits the DoD from relying on the “supply chain risk” designation as justification for avoiding Anthropic.

This allows Anthropic to potentially demonstrate to customers concerned about working with a company labeled a risk that the legal landscape may be shifting in its favor. However, the immediate impact is limited as the order takes effect in one week, and a separate case in Washington, D.C., remains pending.

The Broader Implications for the AI Industry

This case highlights a growing tension between the rapid development of AI technology and the government’s attempts to regulate its use. The DoD’s initial reliance on Anthropic’s Claude for sensitive tasks demonstrates the potential of AI in national security, but also the inherent risks associated with relying on external providers, particularly those with ethical concerns about the application of their technology.

The situation with Anthropic could set a precedent for how the government approaches AI procurement and regulation. Future contracts may include more stringent usage restrictions and oversight mechanisms to address concerns about autonomous weapons and data privacy.

The Rise of AI Ethics as a Business Risk

Anthropic’s stance on preventing its AI from being used in autonomous weapons systems underscores the increasing importance of ethical considerations in the AI industry. Companies are facing growing pressure from employees, customers, and the public to ensure their technology is used responsibly.

This case demonstrates that taking a strong ethical stance, even if it means challenging powerful government entities, can carry significant business risks – but also potential legal and reputational rewards.

FAQ

What is a ‘supply chain risk’ designation? It’s a label applied to companies that the government deems pose a threat to the security of its supply chain, potentially hindering their ability to secure government contracts.

What is Anthropic’s Claude? Claude is an AI chatbot developed by Anthropic, capable of generating text, translating languages, and answering questions.

Will the DoD now be forced to use Anthropic’s AI? No, the injunction only prevents the DoD from using the ‘supply chain risk’ designation to avoid Anthropic. They are still free to choose other providers.

What’s the status of the second lawsuit? A federal appeals court in Washington, D.C., is still considering a separate lawsuit filed by Anthropic.

Did you know? The Department of Defense, under the Trump administration, referred to itself as the Department of War during this legal dispute.

Pro Tip: Businesses operating in the AI space should proactively develop robust ethical guidelines and risk management strategies to navigate the evolving regulatory landscape.

Stay informed about the latest developments in AI and government regulation. Explore more articles on our website or subscribe to our newsletter for regular updates.


Accenture and Anthropic Team to Help Organizations Secure, Scale AI-Driven Cybersecurity Operations

by Chief Editor March 26, 2026

The Rise of Agentic Cybersecurity: How AI is Transforming Digital Defense

The cybersecurity landscape is undergoing a seismic shift. Traditional, human-driven security operations are struggling to keep pace with increasingly sophisticated and rapid attacks powered by artificial intelligence. A new era of “agentic cybersecurity” is dawning, leveraging AI to automate defenses, accelerate response times, and proactively manage evolving threats. Accenture’s recent launch of Cyber.AI, powered by Anthropic’s Claude, signals a major step towards this future.

From Human-Speed to Machine Speed: A Critical Need for Automation

For years, cybersecurity teams have been battling a growing volume of alerts and a shortage of skilled professionals. Adversaries are now compressing attack timelines from weeks to mere hours, exploiting vulnerabilities before defenders can react. This disparity demands a fundamental change in approach. Cyber.AI addresses this challenge by integrating Anthropic’s Claude models with Accenture’s extensive cybersecurity expertise, shifting defense from a reactive, manual posture to a continuous, autonomous operational model.

Cyber.AI: Orchestrating AI-Driven “Missions”

At its core, Cyber.AI functions as a reasoning engine for the entire security lifecycle. It doesn’t simply rely on pre-defined rules; it synthesizes security data, provides contextual insights, and executes complex workflows autonomously. This is achieved through the orchestration of AI-driven “missions,” deploying specialized agents to automate specific tasks – from vulnerability assessments and triage to remediation and transformation. A curated library of agents covers critical domains like identity security, cyber defense, and cyber resiliency.

Agent Shield: Governing Autonomous AI in Cybersecurity

A key component of Cyber.AI is Agent Shield, designed to protect, identify, monitor, and govern these autonomous AI agents in real-time. This is crucial, as organizations increasingly deploy AI systems, creating new attack surfaces. Agent Shield delivers identity controls, threat detection, and runtime protection, ensuring agents operate within organizational policies and risk tolerance. It leverages Claude’s built-in safety guardrails and enhances them with enterprise-grade governance.

Real-World Impact: Efficiency Gains and Reduced Vulnerabilities

The benefits of this approach are already becoming apparent. Accenture has deployed Cyber.AI within its own global IT infrastructure, securing 1,600 applications and over 500,000 APIs. The results are striking: scan turnaround times have been reduced from 3-5 days to under one hour, while security testing coverage has expanded from approximately 10% to over 80%. This efficiency translates to a dramatic reduction in the backlog of critical vulnerabilities and a 35% improvement in service delivery, contributing to consistent cost reductions.

Beyond Accenture: The Broader Trend of Agentic AI in Cybersecurity

Accenture and Anthropic aren’t alone in recognizing the potential of agentic AI. Industry analysts, like Craig Robinson from IDC, emphasize the need to orchestrate agents across the security ecosystem with coordination and scale. This suggests a broader trend towards purpose-built, on-demand AI security solutions that reshape how cybersecurity teams operate. A global Fortune 500 agriculture organization has already leveraged Cyber.AI to enhance its identity and access management (IAM) operations, accelerating identity platform migrations with greater precision.

The Future of Cybersecurity: Proactive, Intelligence-Driven Operations

The integration of AI into cybersecurity isn’t just about automating existing tasks; it’s about fundamentally changing the nature of defense. Cyber.AI enables more proactive, intelligence-driven operations, seamlessly integrating with existing technology environments. As AI adoption accelerates and the number of non-human identities and autonomous agents continues to grow, the ability to orchestrate and govern these agents will become paramount.

Frequently Asked Questions

What is agentic AI? Agentic AI refers to AI systems capable of autonomous action and decision-making, rather than simply responding to prompts. In cybersecurity, this means AI agents can proactively identify and address threats without constant human intervention.

What is Cyber.AI’s core technology? Cyber.AI is powered by Anthropic’s Claude AI model, which serves as the reasoning engine for the platform. It’s combined with Accenture’s proprietary agents and cybersecurity expertise.

How does Agent Shield work? Agent Shield provides identity controls, threat detection, and runtime protection to secure and govern AI systems at scale, ensuring they operate within defined policies and risk tolerances.

What are the benefits of using Cyber.AI? Benefits include faster response times, increased security testing coverage, reduced vulnerability backlogs, improved service delivery, and lower costs.

Is Cyber.AI hard to integrate with existing systems? Cyber.AI is designed to integrate seamlessly with existing technology environments.

Did you know? The deployment of Cyber.AI within Accenture’s infrastructure reduced application scan turnaround times by over 80%.

Pro Tip: Prioritize solutions that offer robust governance and control mechanisms for AI agents to mitigate potential risks and ensure compliance.

Want to learn more about the evolving landscape of cybersecurity? Explore our other articles on AI-powered threat detection and the future of IAM.


Pentagon’s Biggest Champion of Blacklisting Anthropic Has a Few Million Reasons for His Stance

by Chief Editor March 26, 2026

Pentagon’s AI Battle: Conflicts of Interest and a Potential Power Grab

The Pentagon’s escalating conflict with Anthropic, a leading artificial intelligence firm, isn’t simply a matter of national security concerns. A closer look reveals potential conflicts of interest involving Emil Michael, the Under Secretary of Defense for Research and Engineering and Chief Technology Officer, and raises questions about the motivations behind the aggressive stance against Anthropic.

Financial Ties to a Rival

Recent reports indicate that Michael holds significant stock in Perplexity, a direct competitor to Anthropic. Financial disclosures show his ownership stake in Perplexity is valued at between $2 million and $10 million, and he previously served on the company’s board. While Perplexity doesn’t have a direct contract with the Department of Defense, it does have a government-wide agreement to deploy its AI search engine to all federal agencies and is being considered for hosting government AI systems. This raises concerns about whether Michael’s push to restrict Anthropic was influenced by a desire to benefit a company he has a financial interest in.

A History of Grudges and Shifting Alliances

Michael’s history suggests a pattern of strong personal feelings influencing his professional decisions. He previously served as a key executive at Uber alongside Travis Kalanick, both of whom were ousted by investors. Michael has publicly stated he will “never forget…nor forgive” those investors. This demonstrated tendency to hold grudges casts a shadow over his actions regarding Anthropic, suggesting personal animosity could be a factor.

The Anthropic Fallout: A Judge Questions the Pentagon’s Motives

The Pentagon’s attempt to designate Anthropic as a supply chain risk has faced legal challenges. A judge overseeing a lawsuit filed by Anthropic against the Department of Defense described the Pentagon’s actions as “an attempt to cripple Anthropic,” suggesting the designation was retaliatory rather than based on legitimate security concerns. This legal pushback underscores the contentious nature of the dispute and the potential for overreach by the Pentagon.

The AI Landscape: A Shifting Power Dynamic

The situation highlights a broader trend: the increasing concentration of power in the AI sector and the potential for conflicts of interest when government officials have financial ties to companies vying for lucrative defense contracts. Anthropic’s contract was effectively handed to OpenAI, the company behind ChatGPT, further solidifying its position as a dominant player in the AI landscape.

Beyond the Headlines: Continued Reliance on Anthropic’s Tech

Despite publicly citing security concerns, the Department of Defense reportedly utilized Anthropic’s Claude AI during the early stages of its attack on Iran and continues to rely on the technology. This apparent contradiction raises questions about the true rationale behind the Pentagon’s actions and suggests a pragmatic need for Anthropic’s capabilities despite the stated concerns.

Tools for Humanity and the Eye-Scanning Orb

Michael’s involvement extends beyond Perplexity. He also held investments in and advised Tools for Humanity, the company developing an eye-scanning orb for human verification, led by Sam Altman of OpenAI. This further intertwines Michael’s interests with companies poised to benefit from the shifting AI landscape within the defense sector.

Future Trends and Implications

This case sets a concerning precedent for the future of AI procurement and deployment within the government. The potential for conflicts of interest, the aggressive tactics employed by the Pentagon, and the legal challenges faced by Anthropic all point to a need for greater transparency and accountability in the AI sector.

The Rise of AI Arms Races

The competition for dominance in AI is intensifying, with governments and private companies alike investing heavily in research and development. This is leading to an “AI arms race,” where the pursuit of technological superiority overshadows ethical considerations and potential risks.

Data Security and Supply Chain Risks

The Pentagon’s designation of Anthropic as a supply chain risk highlights the growing concern over data security and the potential for AI systems to be compromised. As AI becomes more integrated into critical infrastructure, protecting against cyberattacks and ensuring the integrity of data will become paramount.

The Need for Regulation and Oversight

The Anthropic case underscores the urgent need for clear regulations and robust oversight of the AI industry. This includes establishing ethical guidelines for AI development, ensuring transparency in government procurement processes, and addressing potential conflicts of interest.

FAQ

Q: What is a supply chain risk designation?
A: It’s a determination that a company poses a potential threat to the security of government systems or data.

Q: What is Perplexity?
A: It’s an AI-powered search engine and a competitor to Anthropic.

Q: What role did Emil Michael play at Uber?
A: He was a senior vice president and chief business officer, working closely with founder Travis Kalanick.

Q: Is OpenAI now working with the Pentagon?
A: Yes, OpenAI is taking over the contract previously held by Anthropic.

Did you know? The Pentagon reportedly used Anthropic’s AI during its attack on Iran, despite later citing security concerns about the company.

Pro Tip: Stay informed about the latest developments in AI policy and regulation to understand the implications for your industry and your future.

What are your thoughts on the Pentagon’s actions? Share your opinions in the comments below and explore our other articles on artificial intelligence and national security.


‘AI may know me better’: why Hongkongers turn to chatbots for mental health help

by Chief Editor March 15, 2026

The Rise of AI Companions: Are Chatbots the Future of Mental Wellbeing?

As rates of depression and anxiety climb globally, a surprising new source of support is gaining traction: artificial intelligence. From offering a listening ear to providing coping strategies, AI chatbots like ChatGPT are increasingly being turned to for emotional assistance. But what does this trend mean for the future of mental health, and are there potential downsides to relying on digital companions?

A Growing Need for Accessible Support

Recent data from Hong Kong reveals a concerning trend: overall average depression and anxiety scores have reached record highs. A survey by the Chinese University of Hong Kong and the Mental Health Association of Hong Kong highlighted this increase in early March. Amidst this crisis, approximately 22% of residents are now seeking help from AI chatbots to manage their emotions, supplementing traditional support networks of friends and family.

Joe, a 20-year-old student in Hong Kong, exemplifies this shift. He uses OpenAI’s ChatGPT, accessed through the Poe app, to navigate anxieties related to dating, family, and stress. “To a certain extent, AI may know me better than my friends,” he shared, highlighting the perceived level of understanding and availability these chatbots offer.

The Benefits of AI-Powered Mental Wellness

Experts suggest that AI can play a valuable role in complementing traditional therapy. ChatGPT, for example, interacts in a conversational way, allowing it to answer follow-up questions and even admit mistakes. This capability, as OpenAI explains, makes it a potentially useful tool for self-exploration and emotional processing.

The accessibility of AI is a key advantage. Unlike traditional therapy, which can be expensive and difficult to access, chatbots are available 24/7 and often at a lower cost. This is particularly critical for individuals in underserved communities or those facing barriers to care.

Potential Pitfalls and Ethical Considerations

Despite the benefits, mental health advocates caution against overreliance on AI. An exclusive dependence on chatbots could potentially hinder the development of crucial social skills and delay seeking professional help when needed. The case of Joe Ceccanti, whose life tragically unraveled after becoming consumed by interactions with ChatGPT, serves as a stark warning. Ceccanti initially used the chatbot to brainstorm sustainable housing solutions but eventually turned to it as a confidante, spending up to 12 hours a day communicating with the bot before his death.

Concerns also remain about data privacy and the potential for AI to provide inaccurate or harmful advice. The algorithms driving these chatbots are constantly evolving, and their responses are not always reliable.

Future Trends: Personalized AI Therapy and Beyond

The future of AI and mental health is likely to involve increasingly personalized and sophisticated tools. OpenAI is already exploring ways to customize ChatGPT models, allowing for more tailored interactions. The Joe Rogan Experience podcast recently discussed these fine-tuning features, highlighting the potential for enhanced precision and effectiveness.

You can anticipate the development of AI-powered platforms that integrate with wearable sensors to monitor physiological data, such as heart rate and sleep patterns, providing a more holistic understanding of an individual’s mental state. AI could also be used to analyze social media activity and identify individuals at risk of developing mental health issues, enabling proactive intervention.

The recent funding of companies like Gumloop, which received $50 million to empower employees to build AI agents, suggests a growing investment in AI-driven solutions for a wide range of applications, including mental wellbeing.

FAQ

Q: Can AI chatbots replace traditional therapy?
A: No, AI chatbots should be seen as a complement to, not a replacement for, traditional therapy. They can provide support and guidance, but they cannot offer the same level of expertise and personalized care as a qualified mental health professional.

Q: Is my data safe when using AI chatbots for mental health?
A: Data privacy is a valid concern. It’s important to review the privacy policies of the chatbot provider and understand how your data is being collected and used.

Q: What should I do if an AI chatbot gives me harmful advice?
A: If you receive advice that feels unsafe or unhelpful, discontinue use and seek guidance from a trusted friend, family member, or mental health professional.

Q: How is ChatGPT being used in the tech industry?
A: ChatGPT is being used to revolutionize industries through personalization and data analysis, as discussed on the ChatGPT podcast.

Did you know? The Gemini 3 AI model, recently revealed by Google, demonstrates smarter reasoning, creativity, and comprehension, potentially impacting the future of AI-driven mental health support.

Pro Tip: If you’re considering using an AI chatbot for emotional support, start by setting clear boundaries and expectations. Remember that these tools are not a substitute for human connection and professional help.

What are your thoughts on the role of AI in mental health? Share your opinions in the comments below, and explore our other articles on technology and wellbeing for more insights.


Claude Now Creates Charts & Visuals for Clearer Answers

by Chief Editor March 13, 2026

Claude Gets Visual: A New Era for AI Chatbots

Anthropic’s Claude chatbot has received a significant update, introducing support for inline visual content. This enhancement aims to deliver clearer, more intuitive answers, moving beyond text-only responses.

Beyond Text: The Rise of Visual AI

Claude can now generate custom visuals, including charts, graphs, and diagrams, directly within the chat interface. This isn’t about generating images from scratch; instead, Claude leverages HTML and SVG to create functional visuals when they better convey information than plain text. The system can also incorporate real-world data, such as current weather conditions and formatted recipe cards, provided web search is enabled.

Currently, weather and recipe data are only available on the desktop version of Claude, as these visuals aren’t yet supported within the iOS app.
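
As a rough illustration of what an HTML/SVG visual boils down to, the Python sketch below assembles SVG bar-chart markup from a few data points. The data, styling, and helper name are made up for this example and say nothing about how Claude actually constructs its charts.

# Illustrative only: build an SVG bar chart as a string, the kind of markup an
# inline visual could embed. All values here are invented.
def svg_bar_chart(data, width=320, height=160, bar_gap=10):
    """Return an SVG string with one bar per (label, value) pair."""
    max_value = max(value for _, value in data) or 1
    bar_width = (width - bar_gap * (len(data) + 1)) / len(data)
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">']
    for i, (label, value) in enumerate(data):
        bar_height = (value / max_value) * (height - 30)
        x = bar_gap + i * (bar_width + bar_gap)
        y = height - 20 - bar_height
        parts.append(f'<rect x="{x:.1f}" y="{y:.1f}" width="{bar_width:.1f}" '
                     f'height="{bar_height:.1f}" fill="steelblue" />')
        parts.append(f'<text x="{x + bar_width / 2:.1f}" y="{height - 5}" '
                     f'font-size="10" text-anchor="middle">{label}</text>')
    parts.append("</svg>")
    return "\n".join(parts)

# Example: chart three made-up daily temperatures.
print(svg_bar_chart([("Mon", 18), ("Tue", 21), ("Wed", 19)]))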

Interactive Experiences and Structured Queries

The update extends beyond static visuals. Claude is now capable of posing structured questions using interactive multiple-choice inputs, eliminating the need for users to type out responses. This streamlines the conversational flow and makes interactions more efficient.

Anthropic emphasizes that Claude will proactively use visuals when appropriate, but users can also explicitly request a visual aid to accompany their queries.

Artifacts and the Expanding Capabilities of Claude

Although distinct from Claude’s Artifacts feature, this visual update complements the broader trend of turning ideas into shareable apps and tools. Artifacts allow users to build complex applications directly within Claude, and the addition of visuals enhances the potential of these creations.

Artifacts enable the creation of standalone content, often exceeding 15 lines, designed for editing, reuse, and reference. They represent a shift towards AI as a creative and productive tool, rather than simply a conversational partner.

The Future of Conversational AI: What’s Next?

This move towards visual and interactive responses signals a broader trend in conversational AI. Users are increasingly expecting more than just text-based answers; they want dynamic, engaging experiences. The ability to integrate real-world data and present it in a visually appealing format is a key differentiator for AI assistants.

The development of “Claude in Claude,” where Artifacts can make live calls to Claude’s API, further expands the possibilities. This allows for the creation of fully functional, AI-powered applications within the chat interface.
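
For readers curious what a “live call” to Claude’s API looks like outside the chat interface, here is a brief Python sketch using Anthropic’s official anthropic SDK. Treat it as illustrative only: the model alias and prompt are placeholders, an API key is assumed to be set in the environment, and artifacts themselves issue such calls from the browser rather than from a Python script.

# Illustrative request to Claude's Messages API via the official Python SDK
# (pip install anthropic). Model alias and prompt are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",   # placeholder model alias
    max_tokens=512,
    messages=[
        {"role": "user", "content": "Describe this week's sales trend in two sentences."}
    ],
)

# The reply arrives as a list of content blocks; print the text of the first one.
print(response.content[0].text)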

FAQ

  • What are Claude Artifacts? Artifacts are shareable apps, tools, or content created within Claude, often exceeding 15 lines and designed for reuse.
  • Is the visual update available on all Claude plans? Yes, visual responses and interactive content are available to all Claude users.
  • Will the weather and recipe features be available on iOS? Not currently; these features are limited to the desktop version of Claude.
  • How does this differ from image generation? Claude creates visuals using HTML and SVG, focusing on data representation rather than generating artistic images.

Pro Tip: Experiment with asking Claude to “show me a chart of…” or “visualize this data…” to see the new visual capabilities in action.

Want to learn more about the latest AI innovations? Explore our beginner’s guide to Claude Artifacts.


Fortune Tech: Yann LeCun’s billion-dollar anti-Meta bet, Meta’s Moltbook, Amazon’s AI coding

by Chief Editor March 12, 2026

YouTube’s Reign: How the Streaming Giant Overtook Disney

The media landscape is undergoing a seismic shift. For decades, Disney stood as the undisputed king of entertainment, built on a foundation of iconic intellectual property. But a new report from MoffettNathanson reveals a stunning upset: YouTube has surpassed Disney as the world’s largest media company by revenue. This isn’t just a win for YouTube CEO Neal Mohan and Google; it signals a fundamental change in how value is created in the modern media world.

From Mickey Mouse to MrBeast: A Changing of the Guard

Disney’s empire was forged through carefully crafted characters and franchises – Mickey Mouse, Ariel, Star Wars, and Marvel. YouTube’s success, however, is powered by a different breed of star: individual creators like MrBeast, PewDiePie, and the Paul brothers. These “free agents,” as Fortune describes them, attract massive audiences directly, bypassing the traditional studio system.

This raises a critical question: are eyeballs more valuable than owned content? YouTube doesn’t need to develop its own characters; it simply provides the platform for creators to thrive. The platform’s ability to attract and retain a massive audience ensures a continuous influx of talent. But can this model build a legacy comparable to Disney’s century-long dominance?

The AI Arms Race: Yann LeCun’s $1 Billion Bet Against LLMs

While YouTube reshapes the entertainment world, the underlying technology powering the future of media is also evolving rapidly. Yann LeCun, former chief AI scientist at Meta, is making a bold bet against the current trend of large language models (LLMs). His new startup, Advanced Machine Intelligence Labs, has secured a staggering $1.03 billion in seed funding – Europe’s largest ever – from investors including Nvidia and Jeff Bezos.

LeCun believes LLMs are fundamentally limited in their ability to achieve true intelligence. Instead, he’s focusing on “world models”—AI systems trained on video and spatial data that can reason, plan, and retain memory. This approach has potential applications in robotics, transportation, and potentially, the creation of more immersive and interactive entertainment experiences.

Pro Tip:

Keep an eye on the development of “world models.” This technology could revolutionize how AI interacts with the physical world and create entirely new forms of digital content.

Meta’s Acquisition of Moltbook: Controlling the AI Conversation

Meta isn’t standing still in the AI race. The company recently acquired Moltbook, a “social network for AI agents” that gained notoriety for reports of agents discussing ways to circumvent human control. While some of these reports were attributed to human manipulation, the acquisition signals Meta’s growing interest in multi-agent systems and the potential for AI-driven collaboration.

By integrating Moltbook’s technology into its Superintelligence Labs, Meta aims to create a platform where AI agents can interact, learn, and perform complex tasks for users and businesses. This move underscores the importance of controlling the narrative and infrastructure surrounding AI development.

Amazon’s AI Coding Safeguards: A Reality Check

The rush to integrate AI into every aspect of business isn’t without its challenges. Amazon recently held an internal meeting to address a string of outages, at least one of which was linked to errors in AI-assisted code. This serves as a cautionary tale: while AI can significantly boost productivity, it’s crucial to implement robust safeguards and quality control measures.

Amazon CEO Andy Jassy has championed the use of AI tools, citing significant developer time savings. However, the recent outages highlight the need for a balanced approach, combining the efficiency of AI with the expertise of human engineers.

FAQ: The Future of Media and AI

  • Is Disney losing its relevance? Not necessarily, but it faces increasing competition from platforms like YouTube that offer a different value proposition.
  • What are “world models” and why are they important? World models are AI systems that learn from visual and spatial data, allowing them to reason and plan more effectively than traditional language models.
  • What is Meta’s strategy in the AI space? Meta is investing heavily in AI research and development, with a focus on multi-agent systems and integrating AI into its existing platforms.
  • Are AI-generated code errors a significant risk? Yes, companies need to implement safeguards and quality control measures to mitigate the risk of outages and other issues caused by AI-assisted coding.

Did you know?

The 2025 standoff between Disney and Google/YouTube TV resulted in Disney movies disappearing from Google Play, YouTube, and Google TV, demonstrating the power dynamics at play in the streaming landscape.

The future of media is being shaped by a complex interplay of factors: shifting audience preferences, technological advancements, and the evolving power dynamics between established players and emerging platforms. As YouTube’s rise demonstrates, the ability to capture and retain audience attention is paramount. And as the investments in AI research suggest, the next generation of media experiences will be powered by increasingly sophisticated and intelligent systems.


AI vs. Pentagon: Researchers Back Anthropic in First Amendment Suit

by Chief Editor March 12, 2026

The AI Battleground: Anthropic’s Lawsuit Signals a New Era of Government Oversight

The escalating dispute between Anthropic and the Department of War isn’t simply a contract disagreement; it’s a pivotal moment that will define the boundaries of government influence over the rapidly evolving artificial intelligence landscape. The core of the conflict centers on Anthropic’s refusal to relinquish its ethical guardrails, specifically those preventing its AI from being used in autonomous weapons systems or domestic surveillance. This stance has triggered an unprecedented response from the government, designating Anthropic a “supply chain risk” – a label typically reserved for foreign adversaries.

From Contract Dispute to Constitutional Question

Initially framed as a narrow contract dispute, the situation has quickly broadened into a fundamental challenge to the independence of AI companies. Anthropic is now suing the government, alleging that the “supply chain risk” designation is “unprecedented and unlawful,” and a violation of its First Amendment rights. The company estimates potential losses of “hundreds of millions of dollars” in business as a result of the government’s actions.

This case isn’t happening in a vacuum. It follows a $200 million Department of Defense contract awarded to Anthropic just months prior, highlighting the initial enthusiasm for the company’s AI models within the federal government. The reversal underscores a growing tension between the desire to leverage AI for national security and concerns about the ethical implications of its deployment.

Silicon Valley Rallies in Support

The implications of this case extend far beyond Anthropic. A significant show of support has emerged from within the AI community itself. Thirty-seven researchers from OpenAI and Google DeepMind, including Google chief scientist Jeff Dean, have filed an amicus brief with the court, backing Anthropic’s legal challenge. This demonstrates a collective concern that the government’s actions could “chill professional debate” and “undermine American innovation and competitiveness” in the field of AI.

The amicus brief argues that the Pentagon’s decision introduces “unpredictability” into the industry, potentially discouraging companies from implementing safety measures and ethical guidelines. This is particularly relevant as AI technology becomes increasingly integrated into critical infrastructure and national security systems.

The Department of War’s Perspective

The Department of War’s actions stem from concerns about maintaining control over AI technology used in sensitive applications. The department reportedly sought to ensure its AI systems weren’t constrained by Anthropic’s policies against autonomous weapons and mass surveillance. This reflects a broader debate about the balance between technological advancement and national security imperatives.

The government’s designation of Anthropic as a supply chain risk is a powerful tool, effectively barring the company from working with military contractors. This move signals a willingness to use its considerable leverage to enforce its priorities, even if it means challenging the ethical boundaries set by private companies.

What’s at Stake: The Future of AI Governance

The outcome of Anthropic’s lawsuit will have far-reaching consequences for the AI industry. A ruling in favor of the government could embolden regulators to exert greater control over AI development and deployment, potentially stifling innovation and limiting the ability of companies to implement ethical safeguards. Conversely, a victory for Anthropic could establish a precedent for protecting the independence of AI companies and preserving their right to set their own ethical standards.

This case also highlights the need for clearer legal frameworks governing the use of AI, particularly in the context of national security. The current ambiguity surrounding these issues creates uncertainty for both companies and regulators, increasing the risk of future conflicts.

Pro Tip: Understanding the nuances of supply chain risk designations is crucial. Historically, these designations have been reserved for entities posing a direct threat to national security, typically foreign actors. Applying this label to a domestic AI company for ethical reasons is a significant departure from established practice.

FAQ

Q: What is a “supply chain risk” designation?
A: It’s a label typically applied to companies, often foreign, that pose a threat to the security of the government’s supply chain.

Q: Why is Anthropic being targeted?
A: Anthropic refused to remove its restrictions on using its AI for autonomous weapons and mass surveillance.

Q: Who is supporting Anthropic in this legal battle?
A: 37 AI researchers from OpenAI and Google DeepMind have filed an amicus brief in support of Anthropic.

Q: What could be the consequences of this case?
A: The outcome could shape the future of AI governance and the balance between innovation, ethics, and national security.

Did you know? The Department of Defense recently changed its name to the Department of War.

Want to learn more about the evolving landscape of AI and its impact on national security? Explore more articles on Anthropic’s website and stay informed about this critical issue.


Microsoft brings Anthropic’s Claude Cowork into Copilot to run tasks across Outlook, Teams, and Excel

by Chief Editor March 9, 2026

Microsoft’s AI Play: Copilot Cowork and the Rise of Agentic Automation

Microsoft is doubling down on artificial intelligence with the launch of Copilot Cowork, a new feature integrated into Microsoft 365 Copilot. This isn’t just another chatbot; it’s an agentic automation tool designed to proactively complete tasks across multiple Microsoft applications. The key? A close collaboration with Anthropic, leveraging the technology behind their Claude Cowork application.

The Claude Cowork Effect: Shifting the Enterprise AI Landscape

The arrival of Anthropic’s Claude Cowork earlier in 2026 sent ripples through the enterprise software market, triggering a significant selloff in stocks as investors reassessed the value of companies offering functionalities now potentially replicated by AI. Microsoft’s Copilot Cowork appears to be a direct response, aiming to close the gap and offer a comparable experience within the familiar Microsoft 365 ecosystem. Like Claude Cowork, Copilot Cowork allows users to delegate complex, multi-step tasks to an AI agent.

How Copilot Cowork Works: Beyond the Chatbot

Copilot Cowork distinguishes itself by operating within the cloud, utilizing Microsoft 365’s infrastructure and accessing a user’s complete work data graph. This means it can analyze Outlook calendars to propose schedule changes, prepare briefings for meetings, and conduct in-depth research – all autonomously. The feature is currently in a limited research preview, with wider availability expected through the Frontier program by the end of March 2026.

Microsoft’s Diversification: Beyond OpenAI

Microsoft’s partnership with Anthropic is noteworthy. It signals a growing willingness to diversify its AI partnerships beyond its significant investment in OpenAI. Anthropic’s Claude Code has gained traction among developers, and Copilot Cowork builds on similar principles. OpenAI is responding with its own agent-based framework, Frontier, designed for deeper integration with corporate IT systems.

Security and Compliance: A Key Differentiator

A crucial aspect of Copilot Cowork is its operation within Microsoft 365’s existing security and compliance boundaries. This addresses a major concern for enterprise adoption of AI – ensuring data privacy and adherence to regulatory requirements. The ability to run AI processes within a secure, governed environment is a significant advantage.

The Future of Agentic AI in the Workplace

Copilot Cowork represents a significant step towards the future of work, where AI agents handle routine and complex tasks, freeing up human employees to focus on more strategic initiatives. This trend is likely to accelerate as AI models become more sophisticated and integration with existing workflows improves. The competition between Microsoft, Anthropic, and OpenAI will drive innovation and ultimately benefit businesses seeking to leverage the power of AI.


Frequently Asked Questions

What is Copilot Cowork?

Copilot Cowork is an agentic AI tool integrated into Microsoft 365 Copilot, designed to autonomously complete tasks across Microsoft applications.

How does Copilot Cowork differ from a traditional chatbot?

Unlike chatbots, Copilot Cowork proactively completes tasks, plans multi-step processes, and delivers finished work, rather than simply responding to prompts.

What is the role of Anthropic in Copilot Cowork?

Microsoft developed Copilot Cowork in close collaboration with Anthropic, integrating the technology behind their Claude Cowork application.

Is Copilot Cowork secure?

Yes, Copilot Cowork operates within Microsoft 365’s existing security and compliance boundaries.


AI Surveillance & the Fourth Amendment: Legal Gaps & National Security

by Chief Editor March 9, 2026

The AI Surveillance Revolution: How Technology is Redefining Privacy and National Security

For decades, the legal framework surrounding surveillance lagged behind technological advancements. The Fourth Amendment, designed to protect against unreasonable searches and seizures, originated in an era where “search” meant physical intrusion. Laws like the Foreign Intelligence Surveillance Act (FISA) of 1978 and the Electronic Communications Privacy Act (ECPA) of 1986 addressed wiretapping and email interception, but the explosion of digital data and the rise of artificial intelligence have fundamentally altered the landscape.

From Wiretaps to Data Clouds: The Evolution of Surveillance

Historically, collecting information required tangible effort – entering homes or intercepting communications. Today, we generate massive “clouds” of data with every online interaction. This shift has created unprecedented opportunities for surveillance. AI doesn’t require a specific warrant for each piece of information; it can analyze vast datasets, identify patterns, and build detailed profiles, even from seemingly innocuous individual data points.

As one expert notes, the law simply hasn’t kept pace with this technological reality. The government can legally collect information and then utilize AI systems to analyze it, raising concerns about the scope of permissible surveillance.

National Security vs. Privacy: A Delicate Balance

While concerns about privacy are valid, national security interests necessitate data collection and analysis. Targeted intelligence gathering, such as monitoring individuals suspected of working for foreign countries or planning terrorist activities, can be crucial. However, the line between targeted intelligence and broader data collection can become blurred.

This tension is particularly relevant when considering the Pentagon’s use of AI. While OpenAI has amended its contract to prohibit the intentional use of its AI system for domestic surveillance of U.S. persons, the clause allowing the Pentagon to use the technology for all lawful purposes remains a point of contention. Experts suggest that companies have limited ability to prevent the Pentagon from utilizing technology as it deems lawful.

Section 702 and the Fourth Amendment: A Recent Court Ruling

Recent legal challenges highlight the evolving legal landscape. A U.S. District Court recently ruled that warrantless queries of Americans’ communications collected under Section 702 of FISA violated the Fourth Amendment. This decision represents a significant victory against warrantless surveillance, demonstrating a growing judicial scrutiny of intelligence-gathering practices.

The Role of Section 702

Section 702 allows the government to collect communications of foreign targets located outside the United States. However, this collection often incidentally captures communications of Americans. The recent court ruling focused on the legality of querying this collected data for information about U.S. citizens without a warrant, finding that such queries violated Fourth Amendment protections.

The Future of AI and Surveillance: Key Trends

Several trends are likely to shape the future of AI and surveillance:

  • Increased Automation: AI will automate more aspects of surveillance, from data collection to analysis and threat detection.
  • Expansion of Data Sources: The range of data sources used for surveillance will continue to expand, including social media, location data, and biometric information.
  • Legal Challenges: Expect continued legal challenges to surveillance practices, particularly those involving AI and the Fourth Amendment.
  • Evolving Regulations: Policymakers will grapple with the need to update surveillance laws to address the challenges posed by AI.

FAQ

Q: What is the Fourth Amendment?
A: It protects against unreasonable searches and seizures.

Q: What is FISA?
A: The Foreign Intelligence Surveillance Act, passed in 1978, established procedures for authorizing electronic surveillance for foreign intelligence purposes.

Q: Can the government use AI to analyze legally collected data?
A: Yes, as long as the initial data collection is lawful, the government can generally use AI to analyze it.

Q: What is Section 702 of FISA?
A: It allows the government to collect communications of foreign targets, but often incidentally captures communications of Americans.

Q: What are the concerns about OpenAI’s contract with the Pentagon?
A: While OpenAI prohibits intentional domestic surveillance, the Pentagon’s ability to use the technology for “lawful purposes” could still allow for surveillance activities.

Did you know? The concept of a “reasonable expectation of privacy” is central to Fourth Amendment jurisprudence, and its application in the digital age is constantly being debated.

Pro Tip: Regularly review the privacy settings on your online accounts and be mindful of the data you share.

What are your thoughts on the balance between national security and individual privacy in the age of AI? Share your perspective in the comments below. Explore our other articles on technology and law for more in-depth analysis. Subscribe to our newsletter for the latest updates on these critical issues.
