Tech

NVIDIA GTC: Scaling Physical AI with Omniverse & New Data Factory Blueprints

by Chief Editor March 28, 2026

The Rise of the AI Factory: How NVIDIA is Pioneering the Future of Physical AI

NVIDIA’s recent GTC conference signaled a major shift in the landscape of artificial intelligence, moving beyond isolated applications to large-scale enterprise deployments. The focus? Physical AI – the integration of AI into the physical world, impacting robotics, autonomous vehicles, and manufacturing. This isn’t just about smarter robots; it’s about fundamentally changing how things are designed, built, and operated.

From Digital Twins to Real-World Impact

At the heart of this transformation is the concept of the digital twin – a virtual replica of a physical system. NVIDIA’s Omniverse DSX Blueprint is designed to unify simulation across every layer of an AI factory, enabling optimization and efficiency gains before physical infrastructure is even installed. This approach is particularly crucial for modern AI factories, which are incredibly complex systems involving thermal management, power grids, and network loads.
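To make the digital-twin idea concrete, here is a minimal sketch in Python of a virtual replica that mirrors live telemetry from one piece of AI-factory infrastructure and checks constraints in software before anything trips in the physical world. All names, fields, and thresholds (`RackTwin`, `max_power_kw`, the sample values) are illustrative assumptions, not part of any NVIDIA API.

```python
from dataclasses import dataclass, field

@dataclass
class RackTwin:
    """Hypothetical virtual replica of one server rack in an AI factory.

    Mirrors live telemetry so thermal and power limits can be evaluated
    in software, before (or instead of) tripping physical alarms.
    """
    rack_id: str
    max_power_kw: float = 120.0
    max_inlet_c: float = 35.0
    telemetry: dict = field(default_factory=dict)

    def ingest(self, sample: dict) -> None:
        # Update the twin's state from a real-world sensor sample.
        self.telemetry.update(sample)

    def violations(self) -> list:
        # Evaluate constraints against the mirrored state.
        out = []
        if self.telemetry.get("power_kw", 0.0) > self.max_power_kw:
            out.append("power budget exceeded")
        if self.telemetry.get("inlet_c", 0.0) > self.max_inlet_c:
            out.append("inlet temperature too high")
        return out

twin = RackTwin("rack-07")
twin.ingest({"power_kw": 131.5, "inlet_c": 33.2})
print(twin.violations())  # → ['power budget exceeded']
```

A production digital twin would of course model physics, not just thresholds, but the pattern is the same: real-world telemetry flows into a software replica where decisions can be tested cheaply.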

Compute as Data: A Modern Paradigm

Traditionally, access to real-world data was a key competitive advantage in physical AI. However, NVIDIA argues that this is changing. The challenge isn’t just acquiring data, but managing the entire data factory – from curation and augmentation to evaluation. The newly introduced Physical AI Data Factory Blueprint addresses this by transforming compute power into high-quality training data. Leveraging NVIDIA Cosmos open world foundation models and the NVIDIA OSMO operator, developers can now generate diverse datasets from limited real-world inputs.

Several companies are already utilizing this blueprint, including FieldAI, Hexagon Robotics, Linker Vision, Milestone Systems, Skild AI, and Teradyne Robotics, accelerating projects in robotics, vision AI, and autonomous vehicles.

OpenUSD: The Common Language of the Metaverse and Beyond

OpenUSD is emerging as a critical enabler of scalable physical AI. It provides a standardized scene description language, allowing teams to seamlessly integrate CAD data, simulation assets, and real-world telemetry into a shared, physically accurate virtual environment. This interoperability is key to unlocking the full potential of digital twins and collaborative workflows.
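To illustrate what "standardized scene description" means in practice, the sketch below emits a tiny `.usda` layer that composes separately authored CAD and simulation layers into one stage via USD references. The file paths and prim names are invented for illustration; real pipelines would author this through the `pxr.Usd` APIs rather than by hand.

```python
def factory_stage_usda(cad_asset: str, sim_asset: str) -> str:
    """Return minimal .usda text that references two separately
    authored layers (CAD geometry, physics/simulation) into one scene.
    Paths and prim names are hypothetical examples."""
    return f"""#usda 1.0
(
    defaultPrim = "Factory"
)

def Xform "Factory"
{{
    def "Cell_01" (
        references = @{cad_asset}@</Cell>
    )
    {{
    }}

    def "Cell_01_Physics" (
        references = @{sim_asset}@</CellPhysics>
    )
    {{
    }}
}}
"""

layer = factory_stage_usda("cad/cell.usd", "sim/cell_physics.usd")
print(layer.splitlines()[0])  # → #usda 1.0
```

The key property is that each team keeps authoring in its own tool and layer; composition, not conversion, produces the shared scene.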

Manufacturing and Logistics Reimagined

The implications for manufacturing and logistics are profound. NVIDIA’s Mega Omniverse Blueprint provides a reference architecture for designing, testing, and optimizing robot fleets and AI agents within a digital twin of a factory. KION, in collaboration with Accenture and Siemens, is leveraging this blueprint to build large-scale warehouse digital twins for GXO, training and testing autonomous forklifts powered by NVIDIA Jetson modules.

The Ecosystem Expands

NVIDIA isn’t tackling this challenge alone. The company is actively partnering with a global robotics ecosystem, including ABB Robotics, FANUC, KUKA, and Yaskawa – representing a combined install base of over 2 million robots. These partnerships focus on integrating NVIDIA Omniverse libraries and Isaac simulation frameworks into existing robotic systems, enabling validation of complex applications through physically accurate digital twins.

Robot Brains Powered by AI

The development of “robot brains” is similarly accelerating, with companies like FieldAI and Skild AI utilizing NVIDIA Cosmos world models for data generation and Isaac simulation frameworks for policy validation. Generalist AI is exploring the use of NVIDIA Cosmos to generate synthetic data, potentially enabling robots to quickly master a wide range of tasks.

FAQ

What is Physical AI? Physical AI refers to the application of artificial intelligence to control and optimize physical systems, such as robots, vehicles, and factories.

What is OpenUSD? OpenUSD is a scene description language that enables interoperability between different 3D tools and platforms, crucial for building and simulating digital twins.

What is the NVIDIA Omniverse DSX Blueprint? It’s a reference architecture for unifying simulation across all layers of an AI factory, allowing for optimization before physical deployment.

What is the NVIDIA Physical AI Data Factory Blueprint? This blueprint transforms compute power into high-quality training data, addressing the bottleneck of acquiring and processing real-world data.

How are companies using these technologies? Companies like KION and GXO are using NVIDIA’s blueprints to build and test autonomous forklift fleets in digital twins, while robotics developers are leveraging Cosmos for data generation.

Where can I learn more about NVIDIA’s announcements from GTC? You can find more information on the GTC 2026 online press kit and watch the keynote replay.

Pro Tip: Explore NVIDIA’s Isaac Sim platform to start building your own physically accurate simulations and digital twins.

Did you know? Microsoft Azure and Nebius are the first cloud platforms to offer the Physical AI Data Factory Blueprint, making large-scale data production more accessible.

Want to stay ahead of the curve in the world of AI and robotics? Share your thoughts in the comments below and explore more articles on our site!

Tech

50 startups transforming industries with physical AI

by Chief Editor March 20, 2026

The Rise of Physical AI: How Robotics is Poised to Reshape Our World

The convergence of artificial intelligence and robotics is no longer a futuristic fantasy; it’s actively reshaping industries from defense and manufacturing to healthcare and everyday consumer experiences. Recent advancements are driving explosive growth in the robotics market, fueled by breakthroughs in AI, decreasing hardware costs, and critical labor shortages.

The Three Pillars of Robotics Growth

According to recent research, the robotics market is experiencing unprecedented expansion. This growth is underpinned by three key factors:

  • AI Foundation Models: Improved perception and decision-making capabilities are now possible thanks to advances in AI.
  • Hardware Cost Reduction: Dramatic reductions in hardware costs are making robotics more accessible.
  • Labor Shortages: Acute labor shortages across critical industries are driving demand for automated solutions.

By 2035, experts predict over 2.5 billion robots worldwide, but this figure likely underestimates the true potential. As robots become more capable and versatile, they will unlock entirely new markets beyond simply automating existing workflows.

What Sets Leading Robotics Companies Apart?

Successful companies in this space share common characteristics. They boast world-class technical teams with expertise in robotics, AI, and systems engineering. They focus on high-value, repeatable use cases that justify the complexities of deploying physical systems. Crucially, they forge deep partnerships with customers to gather real-world data for continuous improvement.

Pro Tip: Building a full-stack solution – integrating hardware, software, and AI – is essential. Betting solely on software or commoditized hardware is unlikely to yield long-term success.

Key Players Driving the Revolution

While the robotics industry hasn’t yet experienced a “ChatGPT moment,” significant progress is already impacting our lives. Companies like Waymo and Anduril Industries are demonstrating what’s possible when physical AI research translates into real-world applications.

Autonomous Vehicles: The Road Ahead

The race to autonomous transportation is accelerating, with applications ranging from long-haul trucking to last-mile delivery and ridesharing. Key companies include:

  • Waymo: Leading robotaxi service with commercial deployment in major U.S. cities.
  • Applied Intuition: Building autonomy for commercial and defense industries.
  • Wayve: Developing AI driver software for a range of vehicles.
  • Waabi: Pioneering next-generation self-driving technology leveraging generative AI.
  • Nuro: Creating a universal autonomy platform for all roads and rides.

Climate and Agriculture: Cultivating a Sustainable Future

Robotics is addressing the agricultural labor crisis while simultaneously improving sustainability. Notable companies include:

  • Carbon Robotics: AI-powered weeding robots reducing herbicide use.
  • Orchard: Autonomous platform for orchard management.
  • Seneca: AI-powered autonomous fire suppression drones.
  • Upside Robotics: Autonomous field robots for agricultural tasks.

Consumer Robotics: Bringing Intelligence Home

By integrating intelligent automation into daily life, consumer robotics is paving the way for a future where robots are commonplace. Leading companies include:

  • Figure: Developing humanoid robots for commercial applications.
  • Matic: Building autonomous home cleaning robots.
  • Sunday AI: Creating robotic household care solutions.
  • Zipline: Pioneering autonomous drone delivery at scale.
  • 1X: Developing general-purpose humanoid robots.
  • Coco: Operating the largest fleet of autonomous sidewalk delivery robots.
  • The Bot Company: Building household robotics for various tasks.

Defense: Reimagining National Security

Autonomous systems are revolutionizing defense, offering new capabilities for ensuring national security. Key players include:

  • Anduril Industries: Developing advanced autonomous defense systems, including drones and submarines.
  • Neros Technologies: Producing mass-produced attritable first-person view drones.
  • NODA AI: Creating an orchestration platform for multi-domain autonomous systems.
  • Auterion: Developing AI-enabled autonomous systems for defense.
  • Saronic Technologies: Building autonomous surface vessels for maritime defense.
  • Shield AI: Developing AI pilot software and autonomous aircraft for military aviation.
  • Skydio: Creating autonomous drones and docking systems for public safety.
  • Breaker Industries: Developing voice-command orchestration for autonomous systems.

Robotics Infrastructure: The Foundation for Growth

These companies are providing the essential tools and platforms that enable the entire robotics ecosystem:

  • Zeromatter: Offering a high-performance simulation platform for building and testing autonomous systems.
  • Foxglove: Providing a full-stack observability platform for robotics visualization and debugging.
  • Point One Navigation: Delivering precision positioning for autonomous systems.
  • Voxel51: Creating a visual AI and computer vision data platform.
  • Theseus: Developing AI-powered GPS for military drones using vision-based technology.

Generalized Foundation Models: The AI Brains Behind the Robots

These companies are building the AI that will power the next generation of robots:

  • World Labs: Developing spatial intelligence for AI capable of understanding and generating 3D worlds.
  • Dyna Robotics: Creating embodied AGI for general-purpose robots.
  • Generalist: Building foundation models for robotic manipulation.
  • Perceptron AI: Developing base perceptive models for robotics.
  • Physical Intelligence: Creating foundation models for robotic manipulation.
  • Skild AI: Offering a scalable robot learning platform.
  • Eka Robotics: Developing robotics that masters physics through self-supervised learning.

Health and Life Sciences: Precision and Automation in Healthcare

These companies are transforming healthcare through precision robotics:

  • Periodic Labs: Developing AI-powered scientists and autonomous labs for materials discovery.
  • Mendaera: Creating Focalist, an FDA-cleared robotic system for high-precision medical procedures.
  • Radical AI: Accelerating materials R&D through AI and robotics labs.
  • Medra: Building an AI-powered autonomous robotics system for life sciences labs.

Navigating the Challenges Ahead

The leaders in this space face unique challenges, including managing the complexities of real-world deployments, building trust with enterprises for critical safety applications, and developing sustainable unit economics despite high R&D costs.

Did you know? Talent migration from leading AI labs to robotics startups is accelerating, signaling a strong belief in the future of physical AI.

Connect with Us

If you’re a founder building in the physical AI space, we encourage you to connect with us. Contact us at [email protected].

Tech

Microsoft & NVIDIA: Scaling AI Infrastructure with Foundry & New Hardware

by Chief Editor March 19, 2026

Microsoft and NVIDIA: Forging the Future of AI at the Edge and in the Real World

The collaboration between Microsoft and NVIDIA, prominently showcased at NVIDIA GTC 2026, signals a pivotal shift in the artificial intelligence landscape. It’s no longer solely about training massive models; the focus is rapidly evolving towards deploying and operating AI agents at scale, bridging the gap between digital intelligence and real-world applications. This partnership isn’t just about faster chips; it’s about a complete ecosystem designed for production-ready AI.

Foundry: The Operating System for Enterprise AI

Microsoft is positioning its Foundry platform as the “operating system for AI,” built on Azure and designed to orchestrate models, tools, data, and observability. The recent general availability of the next-generation Foundry Agent Service and Observability in Foundry Control Plane empowers organizations to build and manage AI agents that can reason, plan, and act across diverse workflows. Corvus Energy is already leveraging Foundry to automate inspection processes across its fleet, demonstrating the platform’s practical impact.

The integration of NVIDIA Nemotron models into Microsoft Foundry expands the range of available models, alongside offerings from Fireworks AI, OpenAI, Anthropic, Mistral, and DeepSeek. This provides customers with greater flexibility and choice when selecting the optimal model for their specific needs.

Azure AI Infrastructure: Powering the Next Generation of Workloads

Microsoft is investing heavily in AI-optimized infrastructure within Azure. Notably, they are the first hyperscale cloud provider to power on NVIDIA’s next-generation Vera Rubin NVL72 systems. This commitment to cutting-edge hardware, coupled with liquid-cooled Grace Blackwell GPUs already deployed across their global datacenters, ensures Azure remains at the forefront of AI compute capabilities.

Recognizing the importance of data sovereignty and security, Microsoft has extended Foundry Local support to include the NVIDIA Vera Rubin platform on Azure Local. This allows organizations to maintain control over their AI workloads while benefiting from accelerated computing and Azure’s consistent operations and governance.

Physical AI: Connecting the Digital and Physical Worlds

The convergence of AI and the physical world is a key theme driving innovation. Microsoft and NVIDIA are collaborating on the NVIDIA Physical AI Data Factory Blueprint, with Microsoft Foundry serving as the platform for hosting and operating Physical AI systems. A new public Azure Physical AI Toolchain GitHub repository, integrated with the NVIDIA Physical AI Data Factory and core Azure services, further facilitates the development of these systems.

Deeper integration between Microsoft Fabric and NVIDIA Omniverse libraries is enabling organizations to connect live operational data with physically accurate digital twins and simulations. This allows for AI-driven action across machines, facilities, and workflows, moving beyond simple monitoring and alerts.

The Rise of Agentic AI and Voice Integration

The availability of Voice Live API integration with Foundry Agent Service, currently in public preview, opens up new possibilities for building voice-first, multimodal, real-time agentic experiences. This, combined with refreshed Microsoft Foundry portal and integrations with security partners like Palo Alto Networks’ Prisma AIRS and Zenity, strengthens the security and manageability of AI agents.

What Does This Mean for Businesses?

This collaboration isn’t just a technology showcase; it’s a strategic move to empower businesses across industries. From manufacturing and energy to logistics and healthcare, the ability to deploy and operate AI agents at scale will unlock new levels of efficiency, automation, and innovation.

Pro Tip:

Consider starting small with AI agent deployments. Identify specific, well-defined workflows that can benefit from automation and gradually expand your use cases as you gain experience and confidence.

FAQ

  • What is Microsoft Foundry? Foundry is Microsoft’s platform designed to be the operating system for building, deploying, and operating AI at enterprise scale.
  • What is NVIDIA’s role in this partnership? NVIDIA provides the accelerated computing hardware and software, including models like Nemotron, that power Microsoft’s AI infrastructure and platforms.
  • What is Physical AI? Physical AI refers to the application of AI to real-world, physical systems and environments, such as robotics and industrial automation.
  • What is Azure Local? Azure Local allows customers to run Azure services, including AI workloads, in their own datacenters, providing greater control and data sovereignty.

Did you know? Microsoft is the first hyperscale cloud to power on NVIDIA’s newest Vera Rubin NVL72 systems.

Explore more about Microsoft’s AI solutions on Azure AI and discover how NVIDIA is transforming industries with NVIDIA’s AI platform.

Tech

Cat 306 CR: AI-Powered Mini Excavator Runs Open Models on NVIDIA Jetson Thor

by Chief Editor March 11, 2026

The Rise of the AI-Powered Construction Site: Caterpillar’s 306 CR Leads the Charge

The construction industry is undergoing a quiet revolution, driven by the integration of artificial intelligence (AI) into everyday machinery. Nowhere is this more apparent than with Caterpillar’s 306 CR mini excavator, a machine designed to thrive in tight spaces and now, thanks to advancements in edge computing, capable of answering questions. This isn’t just about automation; it’s about creating a collaborative partnership between human operators and intelligent machines.

From Data Centers to the Dirt: The Shift to Edge AI

For years, open-source AI models resided primarily in data centers, reliant on robust computing power and constant network connectivity. However, this reliance introduces latency and ongoing costs. The trend is now decisively shifting towards “edge AI” – processing data directly on the machine itself. This is crucial for applications like construction, where real-time responsiveness and consistent operation are paramount. The Cat 306 CR, powered by NVIDIA’s Jetson Thor platform, exemplifies this shift.

NVIDIA and Caterpillar: A Powerful Partnership

Caterpillar’s implementation leverages several key NVIDIA technologies. The Cat AI Assistant, currently in development, utilizes NVIDIA Jetson Thor for real-time inference. It also incorporates NVIDIA Nemotron speech models for accurate voice interactions and Qwen3 4B for fast, localized response generation. As a result, the excavator can understand and respond to operator queries without relying on a cloud connection, ensuring data privacy and minimizing delays.
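A pipeline like this typically has the on-device LLM emit structured intents that downstream machine logic can validate before acting. The sketch below shows that validation step in plain Python, a generic pattern, not Caterpillar's actual code; the intent names and schema are hypothetical.

```python
import json

# Hypothetical closed set of intents the machine is allowed to act on.
ALLOWED_INTENTS = {"report_fuel", "report_hours", "explain_fault", "unknown"}

def parse_intent(model_output: str) -> dict:
    """Parse an on-device LLM's JSON reply into a validated intent.

    Assumed schema: {"intent": str, "slots": dict}. Falls back to
    'unknown' rather than acting on malformed output, which matters
    when the response drives a machine instead of a chat window.
    """
    try:
        data = json.loads(model_output)
        intent = data.get("intent", "unknown")
        if intent not in ALLOWED_INTENTS:
            intent = "unknown"
        return {"intent": intent, "slots": data.get("slots", {})}
    except (json.JSONDecodeError, AttributeError, TypeError):
        return {"intent": "unknown", "slots": {}}

reply = '{"intent": "report_fuel", "slots": {"unit": "liters"}}'
print(parse_intent(reply))  # → {'intent': 'report_fuel', 'slots': {'unit': 'liters'}}
```

Constraining the model's output to a small, validated vocabulary is one reason compact models like a 4B-parameter LLM are sufficient for this role.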

Beyond the Excavator: AI in Robotics and Automation

The impact extends far beyond excavators. Franka Robotics is showcasing the potential of onboard AI with its FR3 Duo dual-arm system, running the NVIDIA GR00T N1.6 model end-to-end. Similarly, research projects like the SONIC project from NVIDIA’s GEAR Lab demonstrate the feasibility of deploying complex humanoid controllers directly on Jetson Orin, achieving remarkably low latency. Even a matcha-making robot built by students at UIUC utilizes Jetson Thor and the GR00T N1.5 model.

The Benefits of Onboard AI: Safety, Efficiency, and Control

The advantages of running AI models directly on the machine are significant. Lower latency translates to quicker response times and improved control. Limited power consumption is essential for mobile equipment. Consistent behavior, unaffected by network fluctuations, enhances safety and reliability. The ability to process data locally addresses growing concerns about data privacy.

Jetson: Becoming the Industry Standard

NVIDIA Jetson is rapidly becoming the go-to platform for deploying open models at the edge. Its versatility, supporting a wide range of AI frameworks, and its ability to handle diverse workloads make it ideal for a variety of applications. Developers can access model benchmarks and tutorials at the Jetson AI Lab, and the platform supports models like Gemma, gpt-oss-20B, Mistral AI, NVIDIA Cosmos, NVIDIA Isaac GR00T, and Qwen 3.5.

What Does This Mean for the Future of Construction?

The integration of AI into construction equipment like the Cat 306 CR isn’t just about automating tasks; it’s about augmenting human capabilities. Expect to see AI-powered systems providing operator guidance, enhancing safety features, and optimizing machine performance. Digital twins, powered by NVIDIA Omniverse, will enable realistic simulations for training and planning. The future construction site will be a collaborative environment where humans and intelligent machines work together seamlessly.

FAQ

Q: What is edge AI?
A: Edge AI refers to processing AI models directly on the device, rather than relying on a cloud connection. This reduces latency, improves reliability, and enhances data privacy.

Q: What is NVIDIA Jetson?
A: NVIDIA Jetson is a platform for developing and deploying AI applications at the edge. It offers a range of modules with varying levels of performance and power consumption.

Q: What are the benefits of AI in construction?
A: AI can improve safety, efficiency, and productivity on construction sites by providing operator assistance, automating tasks, and optimizing machine performance.

Q: What is CatHelios?
A: CatHelios is a unified data platform providing trusted machine context.

Caterpillar Technical Highlights

  • NVIDIA Jetson Thor: Edge AI platform for real-time inference in industrial and robotics systems
  • NVIDIA Riva: Speech AI framework using Parakeet ASR and Magpie TTS
  • Qwen3 4B: Compact LLM for intent parsing and response generation
  • vLLM: Efficient runtime for serving LLM inference at the edge
  • CatHelios: Unified data platform providing trusted machine context
  • NVIDIA Omniverse: Digital twin and simulation frameworks for industrial workflows

Pro Tip: Explore the Jetson AI Lab for tutorials and model benchmarks to get started with deploying AI on NVIDIA Jetson platforms.

Want to learn more about the future of AI in construction? Share your thoughts in the comments below!

Tech

ABB & NVIDIA: Physical AI Revolutionizes Robotics with Omniverse Integration

by Chief Editor March 9, 2026

The Factory Floor Reimagined: How NVIDIA and ABB are Closing the ‘Sim-to-Real’ Gap with Physical AI

The future of manufacturing is taking shape, and it’s powered by a groundbreaking partnership between ABB Robotics and NVIDIA. Today, the two companies announced a collaboration designed to bring industrial-grade physical AI to factories worldwide, promising to dramatically reduce costs, accelerate production, and address critical labor shortages.

Bridging the Virtual-Real Divide

For decades, manufacturers have grappled with the “sim-to-real” gap – the frustrating disconnect between how robots perform in virtual simulations and their performance on the actual factory floor. Lighting inconsistencies, material behavior discrepancies, and unpredictable real-world variations have historically hampered the seamless transition from digital training to physical deployment. ABB and NVIDIA are tackling this challenge head-on.

The core of this innovation lies in integrating NVIDIA Omniverse libraries directly into ABB’s RobotStudio programming and simulation suite. This integration delivers physically accurate simulation capabilities, allowing manufacturers to design, program, and test entire automation cells in a virtual environment before deploying a single robot. The result? A unified workflow that minimizes errors and maximizes efficiency.

RobotStudio HyperReality: A New Era of Precision

Launching in the second half of 2026, RobotStudio HyperReality promises to be a game-changer. The new platform boasts an impressive 99% correlation between simulation and real-world behavior. This is achieved through a combination of physics-rich simulation, synthetic data generation, and ABB’s Absolute Accuracy technology, which reduces positioning errors to around 0.5 mm – a significant improvement over the typical 8-15 mm.

RobotStudio HyperReality exports a fully parameterized robot station – encompassing robots, sensors, lighting, kinematics, and parts – as a USD file into NVIDIA Omniverse. Within Omniverse, ABB’s virtual controller runs the same firmware as its physical counterparts, ensuring consistent performance across both environments. Synthetic images generated in Omniverse are then used to train AI vision models entirely in simulation.

Early Adopters: Foxconn and Workr Lead the Way

The potential of this technology is already being realized through pilot programs with industry leaders. Foxconn, the world’s largest electronics manufacturer, is leveraging RobotStudio HyperReality in its consumer electronics assembly lines. The technology is expected to reduce setup time and eliminate costly physical testing, particularly crucial given the delicate components and frequent product variations inherent in electronics manufacturing.

Workr, a U.S.-based robotic workforce company, is integrating its WorkrCore platform with ABB industrial robots trained using synthetic data generated by NVIDIA Omniverse libraries. Workr aims to deploy advanced automation solutions to small and medium-sized manufacturers, addressing critical labor shortages and boosting productivity. They plan to showcase AI-powered robotic systems capable of onboarding new parts in minutes and deploying without specialized programming expertise at NVIDIA GTC 2026.

The Impact on Manufacturing: Cost Savings and Accelerated Time-to-Market

The benefits of this collaboration extend far beyond improved accuracy. ABB projects that RobotStudio HyperReality will reduce deployment costs by up to 40% and accelerate time-to-market by as much as 50%. Manufacturers can now design and validate production lines virtually, cutting setup and commissioning times by up to 80% and eliminating the need for expensive physical prototypes.

ABB is exploring the integration of the NVIDIA Jetson edge AI platform into its Omnicore controller, enabling real-time inference across its entire robot portfolio. This will unlock new possibilities for intelligent automation and adaptive robotics.

Looking Ahead: The Future of Intelligent Automation

This partnership represents a major milestone in the evolution of industrial automation. By closing the sim-to-real gap, ABB and NVIDIA are paving the way for a future where robots are more adaptable, efficient, and reliable than ever before. The implications are far-reaching, promising to transform manufacturing processes across a wide range of industries.

Frequently Asked Questions

What is ‘physical AI’? Physical AI refers to AI models that interact with and learn from the physical world, enabling robots to perform complex tasks with greater precision and adaptability.

What is NVIDIA Omniverse? NVIDIA Omniverse is a platform for building and operating metaverse and 3D workflows. It provides the tools and infrastructure needed to create physically accurate simulations.

When will RobotStudio HyperReality be available? RobotStudio HyperReality is scheduled for release in the second half of 2026.

What are the key benefits of this partnership? The key benefits include reduced costs, accelerated time-to-market, improved accuracy, and increased efficiency in manufacturing processes.

Who are the early adopters of this technology? Foxconn and Workr are among the first companies piloting RobotStudio HyperReality.

Tech

NVIDIA & Dassault Systèmes Partner to Build Industrial AI World Models

by Chief Editor February 9, 2026

The Rise of Virtual Twins: How AI is Revolutionizing Engineering and Manufacturing

The future of engineering isn’t about building physical prototypes first – it’s about building them in software. A landmark partnership between NVIDIA and Dassault Systèmes, unveiled at 3DEXPERIENCE World, is accelerating this shift, promising to redefine how products are designed, factories are operated, and even scientific discoveries are made.

From Digital Designs to ‘World Models’

For decades, engineers have used digital models to visualize and test designs. Now, the focus is moving towards “world models” – AI-powered systems that simulate the behavior of products, factories, and complex systems with unprecedented accuracy. These aren’t just static representations; they’re dynamic, physics-based simulations capable of predicting outcomes and optimizing performance.

Dassault Systèmes, with its 3DEXPERIENCE platform serving over 45 million users, has long been a leader in virtual twin technology. The collaboration with NVIDIA aims to fuse accelerated computing and AI libraries with these virtual twins, enabling real-time digital workflows and AI companions to assist engineering teams.

AI as Infrastructure: The New Computing Stack

NVIDIA CEO Jensen Huang envisions a future where artificial intelligence is as fundamental as electricity or the internet. This means moving away from manually specified designs to systems that can generate, simulate, and optimize solutions in software at an industrial scale. This represents a fundamental reinvention of the computing stack.

According to Huang, this new approach will allow engineers to function at a scale 100 to 1,000 times – and eventually a million times – greater than before.

Applications Across Industries

The potential applications of this technology are vast, spanning multiple sectors:

Advancing Scientific Discovery

The NVIDIA BioNeMo platform, combined with BIOVIA science-validated world models, is accelerating the discovery of new molecules and materials. This has implications for biopharma, materials science, and beyond.

AI-Driven Engineering Design

SIMULIA, leveraging NVIDIA CUDA-X and AI physics libraries, empowers engineers to accurately predict the behavior of designs, enabling faster prototyping and validation. This means fewer physical prototypes and reduced development costs.

The AI-Powered Factory of the Future

NVIDIA Omniverse, integrated with Dassault Systèmes’ DELMIA Virtual Twin, is enabling the creation of autonomous, software-defined production systems. This represents a shift from static factories to dynamic, adaptable manufacturing environments.

Virtual Companions for Engineers

The 3DEXPERIENCE agentic platform, powered by NVIDIA AI technologies and Nemotron open models, will provide engineers with “virtual companions” – AI assistants that offer trusted, actionable intelligence and automate repetitive tasks.

Deploying AI Factories with Sovereign Cloud

Dassault Systèmes is deploying NVIDIA-powered AI factories on three continents through its OUTSCALE sovereign cloud. This allows customers to leverage the power of AI while maintaining data residency and security, addressing critical concerns for many organizations.

Amplifying, Not Replacing, Human Ingenuity

Both Dassault Systèmes CEO Pascal Daloz and NVIDIA CEO Jensen Huang emphasized that the goal isn’t to replace engineers, but to amplify their capabilities. By automating exploratory tasks and providing AI-driven insights, engineers can focus on creativity and innovation.

Daloz stated that engineers want to “invent the future,” not simply automate the past.

FAQ

What is a virtual twin? A virtual twin is a digital replica of a physical asset, process, or system. It allows for simulation, analysis, and optimization without the need for physical prototypes.

What are ‘world models’? World models are AI-powered systems that simulate the behavior of complex systems based on physics and scientific principles.

How will this partnership benefit engineers? The partnership will provide engineers with AI-powered tools and virtual companions that automate tasks, accelerate design cycles, and enable exploration of larger design spaces.

Is AI going to replace engineers? No. The focus is on augmenting human capabilities, not replacing them. AI will handle repetitive tasks, allowing engineers to focus on creativity and innovation.

Where can I learn more about this collaboration? You can explore demos and learn more at GTC San Jose from March 16-19, specifically at Florence Hu-Aubigny’s session on virtual twins and booth 1841 in the Industrial AI and Robotics pavilion.

Did you know? Virtual twins are becoming “knowledge factories” – places where knowledge is created, tested, and trusted before anything is built in the physical world.

Pro Tip: Explore NVIDIA Omniverse and Dassault Systèmes’ 3DEXPERIENCE platform to understand the capabilities of virtual twin technology and how it can be applied to your industry.

What are your thoughts on the future of AI-powered engineering? Share your insights in the comments below!

February 9, 2026
Tech

NVIDIA Omniverse & OpenUSD: Accelerating Robotics Development with Physical AI

by Chief Editor January 31, 2026
written by Chief Editor

The Rise of Physical AI: How Open Source and Digital Twins are Reshaping Robotics

The future of robotics isn’t just about building machines; it’s about imbuing them with intelligence, adaptability, and the ability to seamlessly interact with the physical world. Recent advancements showcased at CES 2026, and driven by companies like NVIDIA, signal a pivotal shift towards “physical AI” – a convergence of robotics, AI, and high-fidelity simulation. This isn’t a distant dream; it’s happening now, fueled by open-source frameworks and the power of digital twins.

Open Source: The Engine of Innovation

For years, proprietary systems hindered rapid progress in robotics. The move towards open source, particularly with frameworks like OpenUSD and NVIDIA’s Isaac platform, is democratizing access to critical tools. This collaborative environment allows developers to build upon each other’s work, accelerating innovation at an unprecedented pace. According to a recent report by the Robotic Industries Association, open-source robotics projects have seen a 35% increase in contributions over the last two years, directly correlating with faster development cycles.

NVIDIA’s commitment to open physical AI models, including Alpamayo and Nemotron, is a key driver. These aren’t just theoretical tools; they’re being integrated into real-world applications, from Caterpillar’s AI-powered heavy equipment assistants to advanced surgical robots from LEM Surgical.

LEM Surgical’s Dynamis Robotic Surgical System leverages NVIDIA’s AI technologies for enhanced precision.

Digital Twins: Bridging the Gap Between Simulation and Reality

The core of this revolution lies in the creation of accurate digital twins – virtual replicas of physical systems. OpenUSD provides the standardized framework for sharing 3D data, ensuring seamless integration between simulation and deployment. NVIDIA Omniverse libraries act as the “ground truth” for these simulations, providing the data needed to train AI models in a realistic environment.

This approach allows companies like Caterpillar to simulate factory layouts and traffic patterns *before* making physical changes, significantly improving efficiency and safety. Similarly, NEURA Robotics is using Omniverse to refine robot behavior in complex scenarios, minimizing risks and optimizing performance in real-world deployments.
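The simulate-before-you-build idea can be reduced to a toy example: score candidate floor plans virtually and pick the cheapest one before touching the physical floor. Everything below – the two floor plans, the Manhattan-distance metric, the trip count – is invented for illustration; a real digital-twin workflow in Omniverse would simulate far richer physics and traffic.

```python
import random

def simulate_layout(station_positions, n_trips=1000, seed=0):
    """Toy stand-in for a digital-twin traffic simulation: estimate the
    average travel distance between randomly chosen workstations."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trips):
        (x1, y1), (x2, y2) = rng.sample(station_positions, 2)
        total += abs(x1 - x2) + abs(y1 - y2)  # Manhattan distance on the floor
    return total / n_trips

# Two hypothetical floor plans: clustered stations vs. spread-out stations.
clustered = [(0, 0), (1, 0), (0, 1), (1, 1)]
spread = [(0, 0), (10, 0), (0, 10), (10, 10)]

# Evaluate both virtually before committing to a physical change.
best = min([clustered, spread], key=simulate_layout)
print("clustered layout wins:", best is clustered)
```

The point is the workflow, not the metric: any layout change becomes a cheap function evaluation instead of a costly physical experiment.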

Pro Tip: Investing in high-fidelity simulation is no longer optional. It’s a critical component of developing robust and reliable robotic systems. The cost of simulation is significantly lower than the cost of real-world testing and potential failures.

The Expanding Role of World Models

Beyond digital twins, “world models” are emerging as a crucial element of physical AI. NVIDIA Cosmos, for example, allows robots to understand and predict the behavior of their environment. AgiBot’s Genie Envisioner platform leverages Cosmos Predict 2 to generate action-conditioned videos, enabling more reliable policy transfer to physical robots.

Intbot is pushing the boundaries further by using NVIDIA Cosmos Reason 2 to give social robots a “sixth sense,” allowing them to interpret social cues and navigate complex interactions with humans. This is a significant step towards creating robots that are truly capable of collaborating with people in everyday life.

Humanoid Robots: A New Era of Dexterity and Assistance

The advancements in physical AI are particularly impactful for humanoid robotics. Companies like ROBOTIS are building open-source sim-to-real pipelines using NVIDIA Isaac technologies, accelerating the development of robots capable of performing complex tasks in human environments. The integration of Hugging Face’s Reachy 2 humanoid with NVIDIA Jetson Thor further expands the possibilities for advanced vision language action (VLA) models.

NVIDIA’s Agile engine, built on Isaac Lab, simplifies the training of reinforcement learning policies for humanoid locomotion and manipulation, making it easier to create robots that can navigate and interact with the world with human-like dexterity.

The Convergence of Robotics and Large Language Models

The integration of Large Language Models (LLMs) like NVIDIA Nemotron is transforming how we interact with robots. Caterpillar’s “Hey Cat” assistant demonstrates the power of natural language interaction, allowing operators to control heavy equipment with voice commands. This intuitive interface lowers the barrier to entry and makes complex machinery more accessible.

Furthermore, the collaboration between NEURA Robotics and SAP, integrating SAP’s Joule agents with robots through the Mega NVIDIA Omniverse Blueprint, highlights the potential for seamless integration between robotic systems and enterprise software.

Looking Ahead: Trends to Watch

  • Edge AI Dominance: More processing will move to the edge, enabling faster response times and reduced reliance on cloud connectivity. NVIDIA Jetson Thor will be central to this trend.
  • Generative AI for Robotics: Generative AI will play an increasingly important role in creating synthetic data, designing robot morphologies, and optimizing control policies.
  • Standardization and Interoperability: OpenUSD will become the de facto standard for 3D data exchange, fostering greater collaboration and reducing fragmentation in the robotics ecosystem.
  • AI-Driven Fleet Management: The ability to simulate and manage large fleets of robots will become essential for industrial automation and logistics.
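The synthetic-data trend above often comes down to domain randomization: sampling scene parameters at scale so a model trained in simulation generalizes to the real world. The sketch below is a minimal illustration in plain Python – the parameter names and ranges are invented, and in a real pipeline each sample would drive a renderer rather than sit in a dictionary.

```python
import random

def randomize_scene(rng):
    """Generate one randomized synthetic-scene description. In a real
    pipeline these parameters would configure a renderer; here the
    dictionary simply stands in for one training sample."""
    return {
        "lighting_lux": rng.uniform(100, 2000),   # dim interior to bright daylight
        "camera_height_m": rng.uniform(0.5, 2.5),
        "object_texture": rng.choice(["metal", "plastic", "cardboard"]),
        "occlusion_pct": rng.uniform(0.0, 0.4),   # how much of the target is hidden
    }

rng = random.Random(42)
dataset = [randomize_scene(rng) for _ in range(1000)]
print(len(dataset), "synthetic samples generated")
```

Because the variation is sampled rather than collected, a limited set of real-world inputs can be stretched into a large, diverse training set.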

FAQ

What is Physical AI?
Physical AI refers to the application of artificial intelligence to control and enhance physical systems, such as robots and autonomous vehicles.
What is OpenUSD?
OpenUSD is an open-source framework for describing, composing, and augmenting 3D scenes and data, enabling seamless collaboration and data exchange.
What are Digital Twins?
Digital twins are virtual replicas of physical assets, systems, or processes, used for simulation, analysis, and optimization.
How does NVIDIA Omniverse fit into this?
NVIDIA Omniverse provides the platform and tools for building and connecting digital twins, leveraging OpenUSD as its foundation.

Did you know? The global robotics market is projected to reach $210 billion by 2030, driven by advancements in AI and the increasing demand for automation across various industries.

Want to learn more about the future of robotics and physical AI? Explore the resources mentioned in this article and join the conversation! Share your thoughts in the comments below.

Tech

CES 2026: AI, Robotics & the US Automaker Exodus – TechCrunch Mobility

by Chief Editor January 18, 2026
written by Chief Editor

The automotive landscape is undergoing a seismic shift, and it’s no longer solely about cars. The future of transportation, as showcased at recent events like CES 2026, is increasingly defined by “physical AI” – the integration of artificial intelligence into the physical world through robotics, autonomous systems, and advanced sensor technology. This isn’t just about self-driving cars anymore; it’s about a complete reimagining of how we move people and goods.

The Rise of Physical AI: Beyond Autonomous Vehicles

For years, the promise of fully autonomous vehicles dominated the conversation. However, CES 2025 and 2026 revealed a broader trend: a surge in companies focusing on the underlying technologies that *enable* autonomy, and applying those technologies to a diverse range of applications. Traditional US automakers have noticeably scaled back their presence at these events, creating space for innovators in robotics, AI-powered hardware, and software solutions. Companies like Zoox, Tensor Auto, Tier IV, and Waymo (now sporting a rebranded Zeekr robotaxi) are leading the charge, alongside a growing presence from Chinese manufacturers like Geely and GWM.

Nvidia CEO Jensen Huang’s term, “physical AI,” perfectly encapsulates this evolution. It’s about AI moving beyond the digital realm and interacting with the physical world through sensors, cameras, and actuators. This manifests in everything from autonomous forklifts in warehouses to humanoid robots assisting in manufacturing, and, of course, the continued development of robotaxis and drones. The potential impact spans industries, promising increased efficiency, safety, and new possibilities for automation.

Hyundai’s Robotics Focus: A Case Study

Hyundai’s massive exhibit at CES 2026, dominated by robots rather than cars, is a prime example of this shift. The showcase of Boston Dynamics’ Atlas humanoid robot, alongside innovations from Hyundai Motor Group Robotics LAB – including EV charging robots and the versatile MobEd platform – signals a clear strategic direction. This isn’t a pivot *away* from automobiles, but rather an expansion into a broader ecosystem of mobility solutions. The palpable hype surrounding humanoids, as noted by Mobileye co-founder Amnon Shashua, suggests a belief in the long-term viability of this technology, despite initial skepticism.

Pro Tip: Don’t dismiss the hype around humanoids. While valuations may fluctuate, the underlying technology and potential applications are very real.

Geopolitical Implications and the US Automotive Industry

The changing landscape isn’t without geopolitical undertones. Former President Trump’s recent comments welcoming Chinese automakers to the US, while potentially boosting economic competition, have sparked concern within the automotive industry. The Alliance for Automotive Innovation is reportedly “freaking out,” according to industry insiders, due to existing regulations restricting the import of connected vehicles from China and Russia. This highlights a tension between open market principles and national security concerns.

Meanwhile, Canada is taking a different approach, slashing import taxes on Chinese EVs. This divergence in policy could lead to a shift in automotive manufacturing and supply chains, potentially impacting the US industry. The debate underscores the complex interplay between trade, technology, and national interests.

Key Deals and Investments Shaping the Future

Recent investment activity further illustrates the momentum behind these trends:

  • Mobileye’s $900M Acquisition: Mobileye’s acquisition of Mentee Robotics demonstrates a significant bet on the future of humanoid robotics.
  • Allegiant & Sun Country: The $1.5 billion merger signals consolidation in the budget airline sector, potentially impacting air travel accessibility.
  • Luminar’s Downsizing: Luminar’s sale of its lidar business for a fraction of its peak valuation serves as a cautionary tale about the challenges of scaling autonomous vehicle technology.
  • JetZero’s Funding: The $175 million Series B round for JetZero highlights continued investment in innovative aircraft designs focused on fuel efficiency.

Notable Reads and Emerging Trends

Beyond the headline-grabbing deals, several emerging trends deserve attention:

  • Data Security Concerns: The Bluspark Global security breach underscores the importance of robust cybersecurity measures in the connected vehicle ecosystem.
  • FTC Regulations: The FTC’s order regarding GM’s OnStar data sharing practices highlights the growing scrutiny of data privacy in the automotive industry.
  • Super App Strategies: InDrive’s expansion into advertising and grocery delivery demonstrates the potential of “super app” models in the transportation sector.
  • AI-First Approach to Autonomy: Motional’s reboot, centered around an AI-first strategy, suggests a shift towards more sophisticated and adaptable autonomous systems.
  • Robotaxi Legalization: New York’s potential legalization of robotaxis (excluding NYC) could pave the way for wider adoption of autonomous ride-hailing services.

Did you know? The US Department of Commerce’s Bureau of Industry and Security currently restricts the import and sale of certain connected vehicles linked to China or Russia.

FAQ: The Future of Transportation

Q: Will fully autonomous vehicles become a reality?
A: While the timeline remains uncertain, the underlying technology is rapidly advancing. Expect to see increasing levels of automation in specific applications, such as highway driving and geofenced areas, before achieving full Level 5 autonomy.

Q: What is “physical AI”?
A: It’s the integration of AI into the physical world through robotics, sensors, and actuators, enabling machines to interact with and understand their environment.

Q: What are the biggest challenges facing the autonomous vehicle industry?
A: Challenges include regulatory hurdles, public acceptance, technological limitations (particularly in unpredictable environments), and the high cost of development and deployment.

Q: How will these trends impact the automotive industry?
A: The automotive industry will likely evolve from a focus on vehicle manufacturing to a broader ecosystem of mobility services, encompassing robotics, AI-powered software, and data analytics.

The future of transportation is no longer simply about getting from point A to point B. It’s about creating a more efficient, sustainable, and intelligent mobility ecosystem powered by physical AI. Stay tuned – the ride is just beginning.

Want to learn more? Explore our other articles on autonomous vehicles, robotics, and the future of mobility here. Subscribe to our newsletter for the latest updates and insights!

Business

Fujitsu develops Fujitsu Kozuchi Physical AI 1.0 for seamless integration of physical and agentic AI

by Chief Editor December 24, 2025
written by Chief Editor

The Dawn of Physical AI: How Fujitsu and NVIDIA are Building the Future of Automation

Fujitsu’s recent unveiling of Kozuchi Physical AI 1.0, developed in collaboration with NVIDIA, isn’t just another AI announcement. It signals a fundamental shift: the merging of the digital and physical worlds through agentic AI. This isn’t about smarter software; it’s about AI that can do things in the real world, autonomously and securely. This development builds on the partnership announced in October 2023, and promises to reshape industries from manufacturing to logistics.

Beyond Chatbots: The Rise of Agentic AI

For years, AI has largely been confined to the realm of data analysis and information processing – think chatbots and recommendation engines. Agentic AI, however, takes things a step further. It’s about creating AI ‘agents’ capable of independent problem-solving, planning, and execution. Fujitsu’s multi-AI agent framework, powered by NVIDIA’s software and Fujitsu’s Takane LLM, is designed to automate complex, confidential business processes. Imagine a procurement department where AI agents negotiate contracts, manage supplier relationships, and optimize purchasing decisions – all without human intervention.
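The plan-and-execute loop behind such an agent can be sketched in a few lines. To be clear, everything here – the pricing rule, the toy supplier model, the function names – is a made-up illustration of the agentic pattern, not Fujitsu's Kozuchi framework or its Takane LLM.

```python
def plan(goal, state):
    """Pick the next action that moves the state toward the goal."""
    if state["price"] > goal["max_price"]:
        return ("counter_offer", state["price"] * 0.95)
    return ("accept", state["price"])

def act(action, state):
    """Apply the action; a real agent would call external systems here."""
    kind, value = action
    if kind == "counter_offer":
        # Toy supplier model: the supplier meets the buyer halfway.
        state["price"] = (state["price"] + value) / 2
    return state

def run_agent(goal, state, max_steps=20):
    """Plan-act loop: iterate until the goal is met or the step budget runs out."""
    for _ in range(max_steps):
        action = plan(goal, state)
        if action[0] == "accept":
            return state["price"]
        state = act(action, state)
    return None  # no agreement reached within budget

final = run_agent({"max_price": 90.0}, {"price": 120.0})
print("agreed price:", final)
```

The distinguishing feature of the agentic pattern is the loop itself: the system chooses actions toward a goal and reacts to the result, rather than producing a single answer to a single prompt.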

This isn’t science fiction. Companies like Scale AI are already providing data infrastructure to support the development of these types of agents. A recent report by Gartner places agentic AI near the “Peak of Inflated Expectations” – meaning we’re on the cusp of seeing real-world applications emerge at scale.

Pro Tip: When evaluating agentic AI solutions, prioritize security and explainability. You need to understand why an AI agent made a particular decision, especially in sensitive areas like finance or legal compliance.

From Secure Workflows to Physical Robotics: The Evolution of Kozuchi

Fujitsu’s roadmap for Kozuchi is ambitious. By the end of 2025, they aim to create an agentic AI foundation capable of autonomous learning and evolution within customer environments. But the real game-changer is the planned expansion into the “physical AI domain.” This means connecting these intelligent agents to robots, allowing them to directly interact with the physical world.

Consider a warehouse setting. Instead of relying on pre-programmed robots, AI agents could dynamically assign tasks, optimize routes, and even troubleshoot mechanical issues – all in real-time. This level of adaptability is crucial for handling the increasing complexity of modern supply chains. Amazon, for example, is heavily investing in robotics and AI to automate its fulfillment centers, but the next wave will be about giving those robots true autonomy.
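The dynamic task assignment described above can be sketched as a greedy nearest-robot dispatcher. The robot names, coordinates, and distance metric below are invented for illustration; production fleet managers use far more sophisticated optimization, but the shape of the problem is the same.

```python
def assign_tasks(robots, tasks):
    """Greedily assign each task to the nearest idle robot.
    robots: name -> (x, y) position; tasks: name -> (x, y) pickup point."""
    idle = dict(robots)  # robots not yet assigned
    assignments = {}
    for task_name, (tx, ty) in tasks.items():
        if not idle:
            break  # more tasks than robots; remainder waits for the next cycle
        nearest = min(idle, key=lambda r: abs(idle[r][0] - tx) + abs(idle[r][1] - ty))
        assignments[task_name] = nearest
        del idle[nearest]
    return assignments

robots = {"r1": (0, 0), "r2": (9, 9)}
tasks = {"pick_A": (1, 1), "pick_B": (8, 8)}
print(assign_tasks(robots, tasks))  # each robot takes the pick nearest to it
```

Re-running this assignment every cycle, with live positions and a live task queue, is what turns pre-programmed robots into a dynamically dispatched fleet.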

Sovereign AI and the Importance of Control

Fujitsu’s focus on “sovereign domains” is particularly noteworthy. This refers to the need for organizations to maintain control over their AI systems and data, especially in regulated industries like healthcare and finance. Building AI foundations that operate within a secure, controlled environment is paramount. This is where NVIDIA’s expertise in secure computing and AI infrastructure becomes invaluable.

The European Union’s AI Act, for instance, is driving demand for AI systems that are transparent, accountable, and respect fundamental rights. Companies that can deliver on these requirements will have a significant competitive advantage.

The Impact on Industries: Beyond Procurement

While Fujitsu is initially focusing on procurement, the potential applications of Physical AI are vast. Here are a few examples:

  • Manufacturing: Predictive maintenance, quality control, and automated assembly lines.
  • Healthcare: Robotic surgery, personalized medicine, and automated drug discovery.
  • Logistics: Autonomous delivery vehicles, optimized route planning, and warehouse automation.
  • Agriculture: Precision farming, automated harvesting, and crop monitoring.

A recent study by McKinsey estimates that AI could contribute up to $15.7 trillion to the global economy by 2030, with a significant portion of that growth coming from automation and robotics.

FAQ: Physical AI Explained

  • What is Physical AI? It’s the integration of agentic AI with physical robots and systems, allowing AI to interact with and manipulate the real world.
  • What is agentic AI? AI built around agents capable of independent problem-solving, planning, and execution, rather than simply responding to commands.
  • Why is security important in Physical AI? Because these systems often handle sensitive data and control critical infrastructure, security is paramount.
  • What is Takane? Fujitsu’s proprietary large language model (LLM) used to power the AI agents within the Kozuchi framework.

Did you know? The term “Physical AI” is relatively new, but the concept of robots interacting with the world based on AI is decades old. The key difference now is the sophistication of the AI and its ability to learn and adapt autonomously.

Want to learn more about the future of AI and automation? Explore our other articles on artificial intelligence. Share your thoughts in the comments below – what industries do you think will be most impacted by Physical AI?
