Newsy Today
news of today

Tech

NVIDIA GTC: The Future of AI is Open & Orchestrated Models

by Chief Editor March 30, 2026

The Rise of the AI Orchestra: Why NVIDIA’s Huang Says Open and Proprietary AI Must Coexist

Artificial intelligence is rapidly evolving from a promising technology to the core infrastructure of businesses worldwide. But the future isn’t about a single, monolithic AI – it’s about a diverse ecosystem of models, both large and small, open and closed, generalist and specialist. This was the central message from NVIDIA founder and CEO Jensen Huang at a recent session on open frontier models at NVIDIA GTC.

Beyond Open vs. Closed: A Hybrid Approach

Huang emphatically stated that the debate isn’t about choosing between open and closed innovation. Instead, it’s about recognizing that both approaches are essential. “Proprietary versus open is not a thing. It’s proprietary and open,” he explained. This signals a shift in thinking, acknowledging the strengths of both models and the necessity of collaboration.

The Need for Specialized AI Systems

Every industry faces unique challenges. Healthcare, finance, and manufacturing all require AI tailored to their specific data and workflows. A one-size-fits-all approach simply won’t work. The solution? Systems of models, finely tuned and specialized for different tasks, working together to solve complex business problems.

NVIDIA is actively contributing to the open-source AI movement and is now the largest organization on Hugging Face, with nearly 4,000 team members. The company recently launched the NVIDIA Nemotron Coalition, a global collaboration of AI labs focused on advancing open, frontier-level foundation models through shared expertise and resources.

AI Agents: The Future of Work?

A key takeaway from discussions at GTC was the growing capability of AI agents. According to Cursor CEO Michael Truell, “We’re soon going to witness agents really be coworkers that can take on tasks that take many hours or many days, and do incredibly complex workloads.” This suggests a future where AI handles increasingly sophisticated tasks, freeing up human workers to focus on more strategic initiatives.

Orchestrating the AI Ecosystem

Perplexity CEO Aravind Srinivas envisions a future where AI isn’t about selecting the “best” model, but rather orchestrating a “multimodal, multi-model and multi-cloud orchestra.” The system itself will intelligently delegate tasks to the most appropriate model, simplifying the process for users.
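
The delegation pattern Srinivas describes can be sketched as a simple router: the system, not the user, decides which backend model serves each request. The model names and keyword heuristic below are purely illustrative assumptions, not any vendor’s actual routing logic.

```python
# Hypothetical multi-model router sketch: the orchestrator, not the user,
# picks which backend model handles each request. Model names and the
# keyword heuristic are illustrative assumptions only.

ROUTES = {
    "code": "code-specialist",
    "image": "vision-model",
    "search": "retrieval-augmented-model",
}
DEFAULT = "generalist-model"

def route(query: str) -> str:
    q = query.lower()
    for keyword, model in ROUTES.items():
        if keyword in q:
            return model
    return DEFAULT

print(route("Refactor this code for readability"))  # code-specialist
print(route("Summarize today's news"))              # generalist-model
```

A production orchestrator would use a learned classifier rather than keywords, but the contract is the same: one entry point, many specialized backends.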

Trust and Accessibility Through Open Systems

Open systems are gaining traction due to their inherent trustworthiness and accessibility. AMP PBC’s Anjney Midha noted, “At the end of the day, you’re delegating trust…and it’s much easier to trust an open system.” This transparency fosters confidence and allows for wider adoption of AI technologies.

The Importance of Both Generalist and Specialist AI

Just as a hospital relies on both general practitioners and specialized surgeons, society needs both generalist and specialist AI. Open foundations combined with proprietary data allow organizations to unlock unique value and drive innovation in both academia and business. Ai2’s Hanna Hajishirzi emphasized that open access accelerates progress and democratizes AI, ensuring broader participation and benefit.

Black Forest Labs’ Robin Rombach added that both frontier models and specialized open models have exciting potential, and that all of them should have some open component.

FAQ

Q: What is the NVIDIA Nemotron Coalition?
A: It’s a global collaboration of AI labs working to advance open, frontier-level foundation models through shared expertise, data, and compute.

Q: What is the key message from Jensen Huang regarding open vs. proprietary AI?
A: It’s not an either/or situation. Both open and proprietary AI are essential and should coexist.

Q: What role will AI agents play in the future?
A: They are expected to develop into highly capable coworkers, handling complex tasks and workloads.

Q: Why is specialization important in AI?
A: Different industries have unique challenges that require tailored AI solutions.

Watch the GTC session highlights on YouTube and start building with NVIDIA Nemotron open models.


NVIDIA Nemotron-3 Super: Open-Source 120B Parameter AI Model

by Chief Editor March 11, 2026

NVIDIA Nemotron 3 Super: Ushering in a New Era of Agentic AI

NVIDIA has launched Nemotron 3 Super, a 120-billion-parameter open model with 12 billion active parameters, poised to redefine the landscape of agentic AI. This isn’t just another large language model; it’s a foundational step towards more efficient, accurate, and scalable AI systems capable of handling complex tasks across diverse industries.

Addressing the Challenges of Multi-Agent AI

As AI moves beyond simple chatbots and into sophisticated multi-agent applications, two key challenges emerge: context explosion and the “thinking tax.” Multi-agent workflows generate significantly more data – up to 15 times more tokens than standard chat – due to the need to resend complete histories with each interaction. This increased context volume drives up costs and can lead to agents losing focus on their original objectives. The “thinking tax” refers to the computational expense of complex agents reasoning at every step, making these applications sluggish and impractical.
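
The context-explosion effect can be illustrated with simple arithmetic: when every agent step resends the full history, the model re-reads early turns over and over. The turn counts and token sizes below are hypothetical, chosen only to show the shape of the growth.

```python
# Illustration (hypothetical numbers): why resending the full conversation
# history with every agent step multiplies total tokens processed.

def tokens_processed(turns: int, tokens_per_turn: int) -> int:
    """Each turn resends the entire history, so turn 1 is re-read on every
    subsequent turn, turn 2 on every turn after that, and so on."""
    total = 0
    history = 0
    for _ in range(turns):
        history += tokens_per_turn  # new content added this turn
        total += history            # full history sent to the model again
    return total

single_pass = 20 * 500  # reading each turn exactly once
resent = tokens_processed(turns=20, tokens_per_turn=500)
print(resent, single_pass, resent / single_pass)  # 105000 10000 10.5
```

Even at a modest 20 turns the multiplier is already above 10x, consistent with the article’s “up to 15 times more tokens” figure for longer workflows.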

How Nemotron 3 Super Solves These Problems

Nemotron 3 Super tackles these hurdles head-on with a hybrid architecture and innovative techniques. Its 1-million-token context window allows agents to retain complete workflow state, preventing goal drift. The model leverages a hybrid Mixture-of-Experts (MoE) architecture, combining Mamba layers for efficiency and transformer layers for advanced reasoning. Specifically, it features:

  • Hybrid Architecture: Mamba layers deliver 4x higher memory and compute efficiency.
  • MoE: Only 12 billion of its 120 billion parameters are active during inference.
  • Latent MoE: Improves accuracy by activating four expert specialists for the cost of one.
  • Multi-Token Prediction: Predicts multiple future words simultaneously, resulting in 3x faster inference.
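
The MoE point above can be made concrete with a toy sketch: a router scores all experts but only the top-k actually execute per token, so the “active” parameter count is a small slice of the total. This is not Nemotron’s actual implementation, just the general routing idea.

```python
import numpy as np

# Toy Mixture-of-Experts routing sketch (not Nemotron's actual code):
# the router scores every expert, but only the top-k run per token.

rng = np.random.default_rng(0)
d, n_experts, top_k = 64, 10, 1

experts = [rng.standard_normal((d, d)) * 0.02 for _ in range(n_experts)]
router = rng.standard_normal((d, n_experts)) * 0.02

def moe_forward(x: np.ndarray) -> np.ndarray:
    scores = x @ router                   # router logits, one per expert
    chosen = np.argsort(scores)[-top_k:]  # only the top-k experts execute
    return sum(x @ experts[i] for i in chosen)

x = rng.standard_normal(d)
_ = moe_forward(x)
print(f"{top_k / n_experts:.0%} of expert parameters active per token")  # 10%
```

With 1 of 10 experts active, 10% of expert parameters run per token, the same ratio as Nemotron 3 Super’s 12 billion active out of 120 billion total.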

Running the model in NVFP4 precision on the NVIDIA Blackwell platform cuts memory requirements and boosts inference speed up to 4x compared with FP8 on NVIDIA Hopper, without sacrificing accuracy.
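
The memory side of that claim follows directly from bytes per parameter. A back-of-envelope sketch, counting weights only and ignoring activations, KV cache, and quantization metadata:

```python
# Back-of-envelope memory math (illustrative): weight storage for a
# 120B-parameter model at different precisions. Ignores activations,
# KV cache, and quantization metadata overhead.

PARAMS = 120e9
BYTES = {"FP16": 2.0, "FP8": 1.0, "NVFP4": 0.5}  # bytes per parameter

for fmt, b in BYTES.items():
    print(f"{fmt}: ~{PARAMS * b / 1e9:.0f} GB of weights")
# FP16: ~240 GB, FP8: ~120 GB, NVFP4: ~60 GB
```

Halving weight memory versus FP8 is one part of the reported speedup; the remainder comes from Blackwell’s native FP4 tensor-core throughput.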

Real-World Applications Taking Shape

The impact of Nemotron 3 Super is already being felt across various sectors. AI-native companies like Perplexity AI are integrating the model to enhance search capabilities, offering it as one of 20 orchestrated models within their Computer platform. Software development firms such as CodeRabbit, Factory, and Greptile are utilizing Nemotron 3 Super to improve the accuracy and cost-effectiveness of their AI agents. Life sciences organizations, including Edison Scientific and Lila Sciences, are harnessing its power for deep literature research, data science, and molecular understanding.

Enterprise adoption is also accelerating. Industry leaders like Amdocs, Palantir, Cadence, Dassault Systèmes, and Siemens are deploying and customizing the model to automate workflows in areas like telecom, cybersecurity, semiconductor design, and manufacturing.

Open Weights and Accessibility

NVIDIA is releasing Nemotron 3 Super with open weights under a permissive license, empowering developers to deploy and customize it on workstations, in data centers, or in the cloud. The model was trained on synthetic data generated using advanced reasoning models, and NVIDIA is publishing the complete methodology, including over 10 trillion tokens of pre- and post-training datasets, and 15 training environments for reinforcement learning.

Leading the Benchmarks

Nemotron 3 Super isn’t just theoretically advanced; it’s demonstrably superior in performance. It currently powers the NVIDIA AI-Q research agent to the No. 1 position on both the DeepResearch Bench and DeepResearch Bench II leaderboards, benchmarks that measure an AI system’s ability to conduct thorough, multistep research.

Availability and Ecosystem Support

NVIDIA Nemotron 3 Super is accessible through build.nvidia.com, Perplexity, OpenRouter, and Hugging Face. Dell Technologies is bringing the model to the Dell Enterprise Hub on Hugging Face, optimized for on-premises deployment. A growing ecosystem of partners, including Google Cloud, Oracle Cloud Infrastructure, CoreWeave, Crusoe, and others, is offering access and support for deploying the model.

Future Trends: The Path Forward for Agentic AI

The release of Nemotron 3 Super signals a broader shift towards more capable and accessible agentic AI. We can anticipate several key trends:

  • Increased Specialization: Models will become increasingly specialized for specific tasks and industries, leading to higher accuracy and efficiency.
  • Edge Deployment: The ability to run powerful models like Nemotron 3 Super on edge devices will unlock new applications in areas like robotics and autonomous systems.
  • Enhanced Tool Integration: AI agents will become more adept at utilizing a wider range of tools and APIs, enabling them to perform more complex tasks.
  • Improved Reasoning Capabilities: Continued advancements in model architecture and training techniques will lead to even more sophisticated reasoning abilities.

FAQ

Q: What is Nemotron 3 Super?
A: It’s a 120-billion-parameter open model designed for complex agentic AI systems, offering improved efficiency and accuracy.

Q: What is an agentic AI system?
A: An AI system capable of autonomously performing tasks and making decisions.

Q: Where can I access Nemotron 3 Super?
A: Through build.nvidia.com, Perplexity, OpenRouter, Hugging Face, and various cloud and infrastructure partners.

Q: What is the benefit of the hybrid architecture?
A: It combines the efficiency of Mamba layers with the reasoning power of transformer layers.

Q: Is Nemotron 3 Super open source?
A: Yes, it is released with open weights under a permissive license.

Ready to explore the potential of agentic AI? Visit build.nvidia.com to get started and discover how Nemotron 3 Super can transform your applications.


Cat 306 CR: AI-Powered Mini Excavator Runs Open Models on NVIDIA Jetson Thor

by Chief Editor March 11, 2026

The Rise of the AI-Powered Construction Site: Caterpillar’s 306 CR Leads the Charge

The construction industry is undergoing a quiet revolution, driven by the integration of artificial intelligence (AI) into everyday machinery. Nowhere is this more apparent than with Caterpillar’s 306 CR mini excavator, a machine designed to thrive in tight spaces and now, thanks to advancements in edge computing, capable of answering questions. This isn’t just about automation; it’s about creating a collaborative partnership between human operators and intelligent machines.

From Data Centers to the Dirt: The Shift to Edge AI

For years, open-source AI models resided primarily in data centers, reliant on robust computing power and constant network connectivity. However, this reliance introduces latency and ongoing costs. The trend is now decisively shifting towards “edge AI” – processing data directly on the machine itself. This is crucial for applications like construction, where real-time responsiveness and consistent operation are paramount. The Cat 306 CR, powered by NVIDIA’s Jetson Thor platform, exemplifies this shift.

NVIDIA and Caterpillar: A Powerful Partnership

Caterpillar’s implementation leverages several key NVIDIA technologies. The Cat AI Assistant, currently in development, utilizes NVIDIA Jetson Thor for real-time inference. It also incorporates NVIDIA Nemotron speech models for accurate voice interactions and Qwen3 4B for fast, localized response generation. As a result, the excavator can understand and respond to operator queries without relying on a cloud connection, ensuring data privacy and minimizing delays.
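
Conceptually, such an assistant is a local speech-to-text → LLM → response pipeline. The sketch below is a stand-in with hypothetical function bodies and telemetry values; it only illustrates the data flow, with every stage assumed to run on-device.

```python
# Conceptual on-device voice assistant pipeline. All function bodies are
# stand-ins -- in a real deployment these stages would map to local speech
# models and a compact LLM (e.g. Qwen3 4B) so no audio leaves the machine.

def transcribe(audio: bytes) -> str:
    # stand-in for on-device automatic speech recognition
    return "what is my fuel level"

def generate_response(text: str, machine_context: dict) -> str:
    # stand-in for a local LLM grounded in machine telemetry
    if "fuel" in text:
        return f"Fuel is at {machine_context['fuel_pct']} percent."
    return "Sorry, I didn't catch that."

def assistant_turn(audio: bytes, machine_context: dict) -> str:
    return generate_response(transcribe(audio), machine_context)

print(assistant_turn(b"...", {"fuel_pct": 72}))  # Fuel is at 72 percent.
```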

Beyond the Excavator: AI in Robotics and Automation

The impact extends far beyond excavators. Franka Robotics is showcasing the potential of onboard AI with its FR3 Duo dual-arm system, running the NVIDIA GR00T N1.6 model end-to-end. Similarly, research projects like the SONIC project from NVIDIA’s GEAR Lab demonstrate the feasibility of deploying complex humanoid controllers directly on Jetson Orin, achieving remarkably low latency. Even a matcha-making robot built by students at UIUC utilizes Jetson Thor and the GR00T N1.5 model.

The Benefits of Onboard AI: Safety, Efficiency, and Control

The advantages of running AI models directly on the machine are significant. Lower latency translates to quicker response times and improved control. Limited power consumption is essential for mobile equipment. Consistent behavior, unaffected by network fluctuations, enhances safety and reliability. The ability to process data locally addresses growing concerns about data privacy.

Jetson: Becoming the Industry Standard

NVIDIA Jetson is rapidly becoming the go-to platform for deploying open models at the edge. Its versatility, supporting a wide range of AI frameworks, and its ability to handle diverse workloads make it ideal for a variety of applications. Developers can access model benchmarks and tutorials at the Jetson AI Lab, and the platform supports models like Gemma, gpt-oss-20B, Mistral AI, NVIDIA Cosmos, NVIDIA Isaac GR00T, and Qwen 3.5.

What Does This Mean for the Future of Construction?

The integration of AI into construction equipment like the Cat 306 CR isn’t just about automating tasks; it’s about augmenting human capabilities. Expect to see AI-powered systems providing operator guidance, enhancing safety features, and optimizing machine performance. Digital twins, powered by NVIDIA Omniverse, will enable realistic simulations for training and planning. The future construction site will be a collaborative environment where humans and intelligent machines work together seamlessly.

FAQ

Q: What is edge AI?
A: Edge AI refers to processing AI models directly on the device, rather than relying on a cloud connection. This reduces latency, improves reliability, and enhances data privacy.

Q: What is NVIDIA Jetson?
A: NVIDIA Jetson is a platform for developing and deploying AI applications at the edge. It offers a range of modules with varying levels of performance and power consumption.

Q: What are the benefits of AI in construction?
A: AI can improve safety, efficiency, and productivity on construction sites by providing operator assistance, automating tasks, and optimizing machine performance.

Q: What is CatHelios?
A: CatHelios is a unified data platform providing trusted machine context.

Caterpillar Technical Highlights

  • NVIDIA Jetson Thor: Edge AI platform for real-time inference in industrial and robotics systems
  • NVIDIA Riva: Speech AI framework using Parakeet ASR and Magpie TTS
  • Qwen3 4B: Compact LLM for intent parsing and response generation
  • vLLM: Efficient runtime for serving LLM inference at the edge
  • CatHelios: Unified data platform providing trusted machine context
  • NVIDIA Omniverse: Digital twin and simulation frameworks for industrial workflows
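
Since vLLM exposes an OpenAI-compatible HTTP API when launched with `vllm serve <model>`, an application on the machine can talk to the local model with plain JSON over HTTP. The endpoint, port, and model name below are assumptions for illustration:

```python
# Building an OpenAI-style chat request for a locally served LLM.
# vLLM's server speaks this API; the model name and endpoint below
# are placeholders, not Caterpillar's actual configuration.

import json

def build_chat_request(model: str, user_msg: str, max_tokens: int = 128) -> str:
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_msg}],
        "max_tokens": max_tokens,
    })

payload = build_chat_request("qwen3-4b", "What is the bucket capacity?")
print(payload)
# POST this to http://<edge-device>:8000/v1/chat/completions
```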

Pro Tip: Explore the Jetson AI Lab for tutorials and model benchmarks to get started with deploying AI on NVIDIA Jetson platforms.

Want to learn more about the future of AI in construction? Share your thoughts in the comments below!


NVIDIA: Open AI Models & Blueprints for Autonomous Telecom Networks

by Chief Editor March 1, 2026

The Rise of Agentic AI: How NVIDIA is Rewriting the Future of Telecom Networks

Autonomous networks – self-managing telecommunications systems – are rapidly transitioning from a futuristic concept to an immediate priority for telecom operators. Network automation is now the top AI investment area, according to NVIDIA’s latest State of AI in Telecommunications report. But automation is just the first step. True autonomy requires networks that can understand intent, weigh options, and make independent decisions.

Beyond Automation: The Need for Reasoning and AI Agents

The key to unlocking this next level of network intelligence lies in reasoning models and AI agents specifically trained on telecom data. These aren’t simply executing pre-programmed tasks; they’re learning to think like network engineers. This shift demands an end-to-end agentic system, incorporating telco network models, intelligent AI agents, and network simulation tools for validation.

NVIDIA’s New Tools for Autonomous Networks

Ahead of Mobile World Congress Barcelona, NVIDIA unveiled a suite of new tools designed to accelerate this transition. These include an open NVIDIA Nemotron-based Large Telco Model (LTM), a guide for building reasoning agents, and NVIDIA Blueprints focused on energy savings and network configuration. These resources are being released through GSMA’s new Open Telco AI initiative, making them accessible to operators worldwide.

Open Nemotron 3 LTM: Understanding the Language of Telecom

The new open-source NVIDIA Nemotron LTM, developed in collaboration with AdaptKey AI, is a 30-billion-parameter model designed to understand the specific terminology and workflows of the telecom industry. It’s optimized for tasks like fault isolation, remediation planning, and change validation. Crucially, being an open model provides telcos with transparency and control over their AI, allowing for secure on-premises deployment and customization with their own data.

Teaching AI to Think Like a Network Engineer

NVIDIA and Tech Mahindra have published a guide detailing how to fine-tune reasoning models and build agents capable of handling Network Operations Center (NOC) workflows. The approach focuses on identifying high-impact incident categories, translating expert resolutions into step-by-step procedures, and creating structured reasoning traces for the model to learn from. Using the NVIDIA NeMo-Skills pipeline, operators can build specialized AI agents that can solve problems with the expertise of a seasoned network engineer.
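
A structured reasoning trace of the kind described might look like the record below. The field names and incident details are invented for illustration and are not the actual NeMo-Skills schema.

```python
# Illustrative shape of a structured reasoning trace for NOC incident
# resolution -- the kind of supervised example an operator might assemble
# before fine-tuning. Field names are assumptions, not a real schema.

trace = {
    "incident": "Cell site X12 reporting intermittent packet loss",
    "category": "transport-degradation",
    "reasoning_steps": [
        "Check link utilization on the backhaul interface",
        "Correlate loss onset with recent configuration changes",
        "Inspect optical power levels on the affected span",
    ],
    "resolution": "Replace degraded SFP on backhaul link; verify error counters",
    "validated_by": "senior NOC engineer",
}

print(len(trace["reasoning_steps"]), "reasoning steps captured")  # 3 reasoning steps captured
```

The key property is that the expert’s intermediate steps, not just the final fix, become training signal, which is what lets the model learn to reason through an incident rather than pattern-match the answer.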

Energy Efficiency and Intent-Driven Automation

NVIDIA’s new Blueprint for intent-driven RAN energy efficiency leverages closed-loop operation – models that understand the network, agents that act on intent, and simulation for validation. It integrates VIAVI’s TeraVM AI RAN Scenario Generator to create synthetic network data, allowing operators to test and validate energy-saving policies without disrupting live networks.
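
The closed loop reduces to a propose → simulate → validate → apply cycle. Here is a minimal sketch, with a toy simulator standing in for tools like the TeraVM scenario generator; the policy knob and numbers are invented:

```python
# Minimal closed-loop sketch: propose candidate energy policies, score
# each in a simulator, and only select one if quality of service holds.
# The simulator is a toy stand-in; real validation would use synthetic
# network scenarios, not this hard-coded rule.

def simulate(policy: dict) -> dict:
    # toy model: deeper sleep saves more power but risks QoS past depth 2
    depth = policy["sleep_depth"]
    return {"energy_saved_pct": 5 * depth, "qos_ok": depth <= 2}

def select_policy(candidates: list[dict]) -> dict:
    valid = [p for p in candidates if simulate(p)["qos_ok"]]
    return max(valid, key=lambda p: simulate(p)["energy_saved_pct"])

candidates = [{"sleep_depth": d} for d in (1, 2, 3)]
best = select_policy(candidates)
print(best)  # {'sleep_depth': 2}: deepest sleep that preserves QoS
```

The point of the simulation stage is exactly this filter: the most aggressive policy is rejected before it ever touches the live network.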

Real-World Implementations: From Africa to Japan

The NVIDIA Blueprint for telco network configuration is already being adopted by operators globally. Cassava Technologies is using it to build Cassava Autonomous Network, optimizing its multi-vendor mobile network environment in Africa. NTT DATA is implementing the blueprint to intelligently manage traffic surges in Japan, improving network resilience.

Multi-Agent Orchestration with BubbleRAN

NVIDIA and BubbleRAN are enhancing the Blueprint with the NVIDIA NeMo Agent Toolkit (NAT) and BubbleRAN Agentic Toolkit (BAT) to enable more flexible management of network monitoring, configuration, and validation agents. Telenor Group will be the first to adopt this enhanced blueprint to improve its 5G network for Telenor Maritime.

FAQ: Agentic AI in Telecom

What is an agentic AI system? An agentic AI system is one that includes AI agents capable of understanding intent, reasoning, and taking independent actions to achieve specific goals.

What is the NVIDIA Nemotron LTM? It’s an open-source large telco model designed to understand the language of telecom and reason through complex workflows.

How can AI help with network energy efficiency? AI can analyze network data and identify opportunities to reduce power consumption without impacting quality of service.

What is the benefit of an open-source AI model? Open-source models provide transparency, control, and the ability to customize the AI to specific network needs.

What is the role of simulation in autonomous networks? Simulation allows operators to safely test and validate AI-driven decisions before implementing them in a live network.

Did you know? The NVIDIA State of AI in Telecommunications report identifies network automation as the top AI use case for investment and return on investment.

Pro Tip: Focus on high-impact, high-frequency incident categories when training AI agents to maximize their effectiveness.

Explore the latest advancements in agentic AI for telecommunications at Mobile World Congress, taking place in Barcelona from March 2-5.

What are your thoughts on the future of AI in telecom? Share your insights in the comments below!


Agentic AI in India: NVIDIA Powers Tech Industry Transformation

by Chief Editor February 21, 2026

India’s Tech Transformation: How Agentic AI is Redefining Industries

India’s technology sector is undergoing a rapid evolution, fueled by advancements in agentic and generative AI. Companies are leveraging NVIDIA AI Enterprise software and models like NVIDIA Nemotron to boost productivity and efficiency across diverse sectors, from customer support to healthcare, and telecommunications.

Wipro Revolutionizes Call Centers with AI-Powered Efficiency

Traditional call center models struggle to meet the demands of peak seasons and complex customer needs. Wipro is addressing this challenge with its WEGA platform, powered by NVIDIA AI Enterprise. Deployed for a major U.S. healthcare insurance provider, the system is enabling service representatives to handle more complex requests and deliver personalized support. The results are significant: 42% of inbound calls are now handled by AI agents, with near-instant responsiveness across 900 concurrent calls and 164 requests per second, all with low latency.

Pro Tip: AI agents aren’t replacing human agents; they’re augmenting their capabilities. This allows human representatives to focus on more complex issues requiring empathy and critical thinking.

Tech Mahindra and NVIDIA: Autonomous Networks Powered by AI

Tech Mahindra is collaborating with NVIDIA to create a platform for AI-assisted network operations. A large telco model (LTM) prioritizes fixes for field technicians based on historical success rates, leading to faster and more accurate resolutions. This approach is paving the way for level-4-plus operational maturity in the telecom industry, which generates over $1.5 trillion in annual revenue.

The Power of NVIDIA Nemotron in Telecom

The platform utilizes NVIDIA Nemotron embedding models for semantic search and a reranking model to improve decision relevance. These models are deployed with NVIDIA NIM microservices for accelerated AI inference, and NVIDIA NeMo Agent Toolkit orchestrates agent workflows across network domains.
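
The embed-then-rerank pattern can be sketched in a few lines: a first stage recalls candidates by vector similarity, and a second stage reorders the short list. The random vectors and word-overlap reranker below are toy stand-ins for the actual Nemotron embedding and reranking models.

```python
import numpy as np

# Two-stage retrieval sketch: an embedding model recalls candidates by
# cosine similarity, then a reranker reorders the short list. Vectors
# and the overlap-based reranker are toy stand-ins for real models.

rng = np.random.default_rng(1)
docs = ["reset optical module", "update firmware", "replace antenna cable"]
doc_vecs = rng.standard_normal((len(docs), 32))
doc_vecs /= np.linalg.norm(doc_vecs, axis=1, keepdims=True)

def retrieve(query_vec: np.ndarray, k: int = 2) -> list[int]:
    sims = doc_vecs @ query_vec              # cosine similarity (unit vectors)
    return list(np.argsort(sims)[::-1][:k])  # top-k candidate indices

def rerank(query: str, candidate_ids: list[int]) -> list[int]:
    # stand-in reranker: score candidates by word overlap with the query
    score = lambda i: len(set(query.split()) & set(docs[i].split()))
    return sorted(candidate_ids, key=score, reverse=True)

q_vec = rng.standard_normal(32)
q_vec /= np.linalg.norm(q_vec)
print(rerank("replace the antenna cable", retrieve(q_vec)))
```

The split matters for cost: the cheap embedding pass narrows thousands of documents to a handful, so the more expensive reranker only ever sees the short list.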

Infosys Accelerates Software Development with AI Coding Models

Infosys has developed a small language model for coding, built using the NVIDIA NeMo framework within NVIDIA AI Enterprise, and integrated into Infosys Topaz Fabric. This 2.5-billion-parameter model accelerates software delivery, supporting agent development, code generation, and refactoring. It’s trained on a curated blend of code, synthetic data, and natural language, achieving performance comparable to larger models on key benchmarks.

Infosys has also prioritized safety, incorporating safety-aligned training and responsible AI practices to reduce harmful outputs and ensure secure coding capabilities.

Persistent Systems Advances Drug Discovery with AI and BioNeMo

Persistent Systems is partnering with NVIDIA to accelerate early-stage drug discovery. Their Generative Molecules and Virtual Screening (GenMoIVS) solution, built on the NVIDIA BioNeMo platform and NeMo Agent Toolkit, simulates molecular behavior with high accuracy, generating and evaluating candidate compounds before lab testing. This approach reduces risk and shortens development cycles.

The platform leverages NVIDIA’s accelerated computing platform, including NVIDIA AI Enterprise software and NIM microservices, enabling high-throughput simulation and real-time scientific decision-making.

The Projected Growth of India’s IT Sector

India’s tech industry is on a strong growth trajectory, projected to reach $500 billion in revenue by 2030, up from $250 billion in 2023. This momentum is driven, in part, by the adoption of AI technologies, supported by investments in GPU infrastructure – with 38,000 GPUs secured as of September.

Frequently Asked Questions

Q: What is agentic AI?
A: Agentic AI refers to AI systems that can act autonomously to achieve specific goals, often by breaking down complex tasks into smaller, manageable steps.

Q: What is NVIDIA AI Enterprise?
A: NVIDIA AI Enterprise is an end-to-end, cloud-native suite of AI software that accelerates data science pipelines and streamlines the development and deployment of AI applications.

Q: How are these AI advancements impacting jobs in India?
A: While some routine tasks may be automated, these advancements are primarily creating new opportunities for skilled professionals in AI development, data science, and related fields.

Did you know? The integration of AI agents is not limited to these companies. Numerous Indian tech giants are actively deploying NVIDIA AI agents across various industries.

What are your thoughts on the future of AI in India? Share your comments below!


NVIDIA Fuels India’s AI Revolution: Infrastructure, Models & Research

by Chief Editor February 18, 2026

India’s AI Revolution: A Deep Dive into the India AI Impact Summit 2026

India is rapidly emerging as a global hub for Artificial Intelligence (AI) innovation, a trend powerfully underscored by the recent India AI Impact Summit in New Delhi. The summit brought together heads of state, industry leaders, and entrepreneurs to chart the course for AI’s future, with NVIDIA playing a central role in bolstering the nation’s AI capabilities.

Building a Robust AI Infrastructure

A cornerstone of India’s AI ambitions is a significant investment in computing infrastructure. The IndiaAI Compute Pillar is driving the development of AI cloud offerings, incorporating tens of thousands of NVIDIA GPUs. This initiative is fueled by over $1 billion in funding through the IndiaAI Mission, designed to strengthen compute capacity and foster the development of sovereign AI.

NVIDIA is collaborating with next-generation cloud providers like Yotta, L&T, and E2E Networks to deliver advanced AI factories. Yotta’s Shakti Cloud, powered by over 20,000 NVIDIA Blackwell Ultra GPUs, offers pay-per-use GPU-dense services. E2E Networks is building an NVIDIA Blackwell GPU cluster on its TIR platform, hosted at the L&T Vyoma Data Center in Chennai, featuring NVIDIA HGX B200 systems and open models.

Further expanding access, Netweb Technologies is launching Tyrone Camarero AI Supercomputing systems built on the NVIDIA Grace Blackwell architecture, manufactured in India under the “Make in India” mission.

The Rise of India-Specific AI Models

The IndiaAI Mission’s Innovation Center Pillar focuses on developing and deploying foundation models trained on India-specific data. This is particularly crucial for a multilingual nation like India, with 22 constitutionally recognized languages and over 1,500 more. Frontier AI models can help bridge the digital divide and enable more inclusive technology access.

Organizations are leveraging NVIDIA Nemotron to support public-sector services, financial systems, and enterprise operations in multiple languages. Datasets like Nemotron-Personas-India, built using NeMo Data Designer, provide a foundation for population-scale sovereign AI development.

Key Players in India’s AI Model Development

  • BharatGen: Developed a 17-billion-parameter mixture-of-experts model using the NVIDIA NeMo framework.
  • Chariot: Building an 8-billion-parameter model for real-time text-to-speech using the NeMo framework.
  • Commotion: Integrating NVIDIA Nemotron models into its AI operating system for automating enterprise workflows.
  • CoRover.ai: Deploying NVIDIA Nemotron Speech open models for customer service applications for the Indian Railway Catering and Tourism Corporation.
  • Gnani.ai: Building a 14-billion-parameter speech-to-speech model on NVIDIA Nemotron Speech models.
  • National Payments Corporation of India (NPCI): Exploring training FiMi, a financial model for India, using the NVIDIA Nemotron 3 Nano model.
  • Sarvam.ai: Open sourcing its Sarvam-3 series of text and multimodal large language model variants, trained for 22 Indic languages.
  • Soket.ai: Utilizing a modern large-model training stack on open NVIDIA Nemotron technologies.
  • Tech Mahindra: Developing an 8-billion-parameter foundation model tailored for Indian languages and dialects.
  • Zoho: Advancing its Zia LLM platform with proprietary models built using NVIDIA NeMo.

Government and Academic Collaboration

The IndiaAI Mission’s Application Development and Startup Financing Pillars are fostering innovation through government and academic partnerships. NVIDIA is collaborating with the Anusandhan National Research Foundation (ANRF) to support cutting-edge AI research across leading academic institutions.

This collaboration will provide ANRF grantee institutions with access to NVIDIA AI Enterprise software and technical mentorship through the NVIDIA AI Technology Center. NVIDIA is partnering with venture capital firms like Peak XV and Accel India to identify and fund promising AI startups, with over 4,000 Indian AI startups already participating in the NVIDIA Inception program.

FAQ

Q: What is the IndiaAI Mission?
A: It’s a national program to build AI infrastructure, datasets, skilling, and innovation ecosystems in India.

Q: What role is NVIDIA playing in India’s AI development?
A: NVIDIA is collaborating with cloud providers, research institutions, and startups to provide infrastructure, models, and expertise.

Q: What is NVIDIA Nemotron?
A: It’s a suite of open models, datasets, tools, and libraries for building frontier speech, language, and multimodal models.

Q: What is the significance of developing AI models for Indian languages?
A: It helps bridge the digital divide and makes AI technology more accessible to India’s diverse population.

Did you know? India is investing heavily in its AI cloud infrastructure, with systems including tens of thousands of NVIDIA GPUs.

Pro Tip: Explore the NVIDIA Inception program for startups to gain access to resources and support for AI development.

Stay informed about the latest advancements in AI and India’s role in shaping the future of this transformative technology. Learn more about NVIDIA’s partnerships with India’s largest manufacturers and how India’s global systems integrators are building enterprise AI agents with NVIDIA.


NVIDIA & Dassault Systèmes Partner to Build Industrial AI World Models

by Chief Editor February 9, 2026

The Rise of Virtual Twins: How AI is Revolutionizing Engineering and Manufacturing

The future of engineering isn’t about building physical prototypes first – it’s about building them in software. A landmark partnership between NVIDIA and Dassault Systèmes, unveiled at 3DEXPERIENCE World, is accelerating this shift, promising to redefine how products are designed, factories are operated, and even scientific discoveries are made.

From Digital Designs to ‘World Models’

For decades, engineers have used digital models to visualize and test designs. Now, the focus is moving towards “world models” – AI-powered systems that simulate the behavior of products, factories, and complex systems with unprecedented accuracy. These aren’t just static representations; they’re dynamic, physics-based simulations capable of predicting outcomes and optimizing performance.

Dassault Systèmes, with its 3DEXPERIENCE platform serving over 45 million users, has long been a leader in virtual twin technology. The collaboration with NVIDIA aims to fuse accelerated computing and AI libraries with these virtual twins, enabling real-time digital workflows and AI companions to assist engineering teams.

AI as Infrastructure: The New Computing Stack

NVIDIA CEO Jensen Huang envisions a future where artificial intelligence is as fundamental as electricity or the internet. This means moving away from manually specified designs to systems that can generate, simulate, and optimize solutions in software at an industrial scale. This represents a fundamental reinvention of the computing stack.

According to Huang, this new approach will allow engineers to function at a scale 100 to 1,000 times – and eventually a million times – greater than before.

Applications Across Industries

The potential applications of this technology are vast, spanning multiple sectors:

Advancing Scientific Discovery

The NVIDIA BioNeMo platform, combined with BIOVIA science-validated world models, is accelerating the discovery of new molecules and materials. This has implications for biopharma, materials science, and beyond.

AI-Driven Engineering Design

SIMULIA, leveraging NVIDIA CUDA-X and AI physics libraries, empowers engineers to accurately predict the behavior of designs, enabling faster prototyping and validation. This means fewer physical prototypes and reduced development costs.

The AI-Powered Factory of the Future

NVIDIA Omniverse, integrated with Dassault Systèmes’ DELMIA Virtual Twin, is enabling the creation of autonomous, software-defined production systems. This represents a shift from static factories to dynamic, adaptable manufacturing environments.

Virtual Companions for Engineers

The 3DEXPERIENCE agentic platform, powered by NVIDIA AI technologies and Nemotron open models, will provide engineers with “virtual companions” – AI assistants that offer trusted, actionable intelligence and automate repetitive tasks.

Deploying AI Factories with Sovereign Cloud

Dassault Systèmes is deploying NVIDIA-powered AI factories on three continents through its OUTSCALE sovereign cloud. This allows customers to leverage the power of AI while maintaining data residency and security, addressing critical concerns for many organizations.

Amplifying, Not Replacing, Human Ingenuity

Both Dassault Systèmes CEO Pascal Daloz and NVIDIA CEO Jensen Huang emphasized that the goal isn’t to replace engineers, but to amplify their capabilities. By automating exploratory tasks and providing AI-driven insights, engineers can focus on creativity and innovation.

Daloz stated that engineers want to “invent the future,” not simply automate the past.

FAQ

What is a virtual twin? A virtual twin is a digital replica of a physical asset, process, or system. It allows for simulation, analysis, and optimization without the need for physical prototypes.

What are ‘world models’? World models are AI-powered systems that simulate the behavior of complex systems based on physics and scientific principles.

How will this partnership benefit engineers? The partnership will provide engineers with AI-powered tools and virtual companions that automate tasks, accelerate design cycles, and enable exploration of larger design spaces.

Is AI going to replace engineers? No. The focus is on augmenting human capabilities, not replacing them. AI will handle repetitive tasks, allowing engineers to focus on creativity and innovation.

Where can I learn more about this collaboration? You can explore demos and learn more at GTC San Jose from March 16-19, specifically at Florence Hu-Aubigny’s session on virtual twins and booth 1841 in the Industrial AI and Robotics pavilion.

Did you know? Virtual twins are becoming “knowledge factories” – places where knowledge is created, tested, and trusted before anything is built in the physical world.

Pro Tip: Explore NVIDIA Omniverse and Dassault Systèmes’ 3DEXPERIENCE platform to understand the capabilities of virtual twin technology and how it can be applied to your industry.

What are your thoughts on the future of AI-powered engineering? Share your insights in the comments below!


NVIDIA Nemotron: Build AI-Powered Document Intelligence Systems

by Chief Editor February 8, 2026
written by Chief Editor

The Rise of Agentic AI: How NVIDIA Nemotron is Revolutionizing Document Intelligence

Businesses are drowning in data, much of it locked within unstructured documents. Reports, PDFs, web pages, and spreadsheets – extracting valuable insights from these sources has traditionally been a manual, time-consuming process. Now, a new wave of AI-powered document intelligence is emerging, promising to automate understanding and unlock hidden value. At the heart of this shift is NVIDIA Nemotron, a family of open models designed for precisely this purpose.

From Manual Review to AI-Powered Insights

For years, teams have relied on manual review, spreadsheets, and basic Optical Character Recognition (OCR) tools to glean information from documents. These methods are often inefficient and prone to errors, especially when dealing with complex layouts and varied formats. Intelligent document processing, powered by AI agents and techniques like Retrieval-Augmented Generation (RAG), offers a transformative solution. It interprets rich content – tables, charts, images, and text – turning it into actionable insights.

NVIDIA Nemotron: The Engine Behind the Transformation

NVIDIA Nemotron provides the open models and GPU-accelerated libraries needed to build these AI-powered document intelligence systems. The models are transparent, with open weights and training data available on Hugging Face, allowing for thorough evaluation before deployment. Nemotron’s latest iteration, the Nemotron 3 family, delivers leading efficiency and accuracy, particularly for complex, high-throughput agentic AI applications.

Real-World Applications: Streamlining Business Processes

The impact of this technology is already being felt across various industries. Several companies are leveraging Nemotron to address specific challenges:

Justt: Automating Financial Dispute Resolution

In the financial sector, payment disputes are a major source of revenue loss. Justt.ai utilizes Nemotron Parse to automate the chargeback lifecycle. The platform ingests transaction data, customer interactions, and policies, then automatically assembles evidence for disputes, reducing manual effort and recapturing revenue for merchants like HEI Hotels & Resorts.

Docusign: Scaling Agreement Intelligence

Docusign, a leader in agreement management, is evaluating Nemotron Parse to improve the extraction of tables, text, and metadata from complex contracts. This will enable faster and more accurate processing of agreements, turning them into structured data for analysis and AI-driven workflows.

Edison Scientific: Accelerating Scientific Research

Edison Scientific’s Kosmos AI Scientist uses Nemotron Parse to rapidly extract structured information from research papers, including equations, tables, and figures. This transforms a vast research corpus into an interactive, queryable knowledge engine, accelerating hypothesis generation and literature review.

Key Technologies Powering Document Intelligence

Building a robust document intelligence pipeline requires several key components:

  • Extraction: Nemotron extraction and OCR models rapidly ingest multimodal PDFs and other document types.
  • Embedding: Nemotron embedding models convert passages and visual elements into vector representations for semantic search.
  • Reranking: Nemotron reranking models evaluate candidate passages to ensure the most relevant content is surfaced.
  • Parsing: Nemotron Parse models decipher document semantics to extract text and tables with precise spatial grounding.

These capabilities are available as NVIDIA NIM microservices and foundation models, designed to run efficiently on NVIDIA GPUs.
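To make the retrieve-then-rerank flow above concrete, here is a minimal, self-contained sketch. The `embed` and `rerank` functions are toy stand-ins written for illustration – the real Nemotron models are served as NIM microservices behind HTTP endpoints – but the shape of the pipeline (extract passages, embed and rank by similarity, then rerank the candidates before handing the best one to a language model) is the same.

```python
import re
import numpy as np

# Hypothetical stand-ins for the Nemotron embedding and reranking
# services; the real NIM microservices expose HTTP endpoints instead.
def embed(texts, dims=64):
    """Toy embedding: stable hashed bag-of-words, L2-normalized."""
    vecs = np.zeros((len(texts), dims))
    for i, text in enumerate(texts):
        for word in re.findall(r"[a-z0-9]+", text.lower()):
            vecs[i, sum(map(ord, word)) % dims] += 1.0
    norms = np.linalg.norm(vecs, axis=1, keepdims=True)
    return vecs / np.clip(norms, 1e-9, None)

def rerank(query, candidates):
    """Toy reranker: order candidates by word overlap with the query."""
    q = set(re.findall(r"[a-z0-9]+", query.lower()))
    overlap = lambda c: len(q & set(re.findall(r"[a-z0-9]+", c.lower())))
    return sorted(candidates, key=overlap, reverse=True)

# 1. Extraction: passages already pulled from parsed documents (stubbed).
passages = [
    "Q3 revenue grew 12 percent driven by data center sales.",
    "The warranty covers parts and labor for two years.",
    "Chargeback disputes must include transaction evidence.",
]

# 2. Embedding + semantic search: rank passages by cosine similarity.
query = "What evidence is needed for a chargeback dispute?"
scores = (embed(passages) @ embed([query]).T).ravel()
candidates = [passages[i] for i in np.argsort(-scores)]  # all 3 in this toy corpus

# 3. Reranking: re-order candidates before handing the best one to the LLM.
best = rerank(query, candidates)[0]
print(best)  # the chargeback passage surfaces first
```

In production, the toy functions would be replaced with calls to the hosted Nemotron embedding and reranking models, and the surfaced passage would be fed into a generation step – the RAG pattern described earlier.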

The Future of Document Intelligence: Trends to Watch

The field of document intelligence is rapidly evolving. Several key trends are poised to shape its future:

Increased Focus on Multimodal Understanding

Current models are increasingly capable of understanding not just text, but also images, tables, and charts within documents. This multimodal approach will unlock deeper insights and more accurate interpretations.

Edge Deployment and Reduced Latency

Deploying document intelligence models on edge devices will enable real-time processing and reduce reliance on cloud connectivity. This is particularly important for applications requiring immediate responses.

Integration with Multi-Agent Systems

Document intelligence will become increasingly integrated with multi-agent systems, allowing AI agents to collaborate and automate complex tasks based on information extracted from documents.

Enhanced Security and Compliance

As document intelligence systems handle sensitive data, security and compliance will become paramount. Technologies like confidential computing and data encryption will be essential.

FAQ

What is NVIDIA Nemotron?
NVIDIA Nemotron is a family of open-source AI models designed for building specialized AI agents, particularly for tasks involving document understanding and reasoning.

What is Retrieval-Augmented Generation (RAG)?
RAG is a technique that combines the power of large language models with information retrieved from external sources, such as documents, to generate more accurate and contextually relevant responses.

What are NVIDIA NIM microservices?
NVIDIA NIM microservices are pre-packaged, GPU-accelerated software components that simplify the deployment and scaling of AI applications.

Where can I find more information about Nemotron?
You can find more information on the NVIDIA Nemotron developer page and on GitHub.

What is Nemotron Parse?
Nemotron Parse models decipher document semantics to extract text and tables with precise spatial grounding and correct reading flow.

Ready to unlock the power of your documents? Explore the resources available on NVIDIA’s website and join the growing community of developers building the future of document intelligence.


NVIDIA Blueprints: AI for Smarter Warehouses & Richer Retail Catalogs

by Chief Editor January 13, 2026
written by Chief Editor

The seamless online shopping experiences we now take for granted – the “add to cart” ease, the speedy deliveries – are built on a complex foundation of logistics, data management, and increasingly, artificial intelligence. But behind the scenes, retailers are grappling with aging infrastructure, fragmented data, and ever-rising customer expectations. NVIDIA is stepping into this challenge with new “Blueprints” designed to revolutionize the retail value chain, and the implications are far-reaching.

The Rise of the Intelligent Retail Ecosystem

NVIDIA’s recently launched Multi-Agent Intelligent Warehouse (MAIW) and Retail Catalog Enrichment Blueprints aren’t just about incremental improvements; they represent a fundamental shift towards an intelligent, adaptive retail ecosystem. These open-source developer references aim to empower businesses to leverage AI across the entire process, from warehouse floor to online storefront.

“We’re seeing a move away from simply automating tasks to orchestrating intelligence,” explains Tarik Hammadou, Director of Developer Relations for AI for Retail and Consumer Packaged Goods at NVIDIA. “These blueprints reduce integration costs and accelerate application development, allowing retailers to compete in a rapidly evolving landscape.”

Warehouse Workflows: From Firefighting to Foresight

Warehouses, traditionally hubs of manual labor and logistical challenges, are prime candidates for AI-driven transformation. The disconnect between IT and Operational Technology (OT) has long hindered efficient problem-solving – accurately tracking inventory, identifying tech glitches, and deploying staff effectively. MAIW addresses this by introducing an “agentic AI layer” that acts as a coordinator between these systems.

Imagine a warehouse supervisor asking, “Why is packing slow?” Instead of a lengthy investigation, the MAIW blueprint analyzes equipment status, task queues, and staffing data, pinpointing the bottleneck and recommending solutions – like rebalancing workload or prioritizing tasks. This proactive approach, powered by real-time explainable intelligence, moves warehouses from reactive “fire drills” to data-driven, predictable operations.

A look inside the MAIW Blueprint.
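The “Why is packing slow?” scenario can be illustrated with a toy diagnostic. Everything here is invented for illustration – the data, the threshold, and the function name – and the actual MAIW blueprint coordinates LLM agents over live IT and OT feeds rather than static dictionaries, but the sketch shows the kind of cross-system reasoning involved:

```python
# Toy illustration of cross-system warehouse diagnosis; the data and
# thresholds are invented, and the real MAIW blueprint orchestrates LLM
# agents over live IT/OT feeds rather than static dictionaries.
equipment  = {"label_printer_2": "offline", "conveyor_a": "ok", "scanner_4": "ok"}
task_queue = {"picking": 14, "packing": 63, "shipping": 9}   # open tasks
staffing   = {"picking": 6,  "packing": 3,  "shipping": 4}   # workers on shift

def diagnose(equipment, task_queue, staffing, backlog_per_worker=10):
    """Combine IT/OT signals into an explainable bottleneck report."""
    findings = []
    for name, status in equipment.items():
        if status != "ok":
            findings.append(f"equipment fault: {name} is {status}")
    for station, tasks in task_queue.items():
        workers = max(staffing.get(station, 0), 1)
        if tasks / workers > backlog_per_worker:
            findings.append(
                f"overload: {station} has {tasks} tasks for "
                f"{staffing[station]} workers; consider rebalancing staff"
            )
    return findings or ["no bottleneck detected"]

for finding in diagnose(equipment, task_queue, staffing):
    print("-", finding)
```

The point is the explainability: rather than a single opaque answer, the supervisor gets the contributing signals (a faulty label printer, an understaffed packing station) alongside a suggested remedy.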

Beyond Basic Descriptions: The Power of AI-Enriched Catalogs

The “sparse data” problem plagues many retailers: incomplete or inconsistent product information hinders searchability and personalization. The Retail Catalog Enrichment Blueprint tackles this head-on using generative AI. Imagine feeding a simple image of a ceramic mug into the system. The blueprint, leveraging the NVIDIA Nemotron vision language model, can automatically generate detailed metadata – color, material, capacity, style, and even suggested use cases.

This isn’t just about filling in blanks; it’s about creating localized, brand-aligned content at scale. The blueprint can generate product titles and descriptions tailored to specific markets, extract attributes for improved SEO, and even create culturally relevant imagery. According to a recent McKinsey report, companies that effectively personalize the customer experience see a 10-15% increase in revenue.
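A rough sketch of the enrichment step described above: a vision-language model is asked for structured attributes, and its reply is validated against a schema before entering the catalog. The `call_vlm` stub and the field names are illustrative assumptions – in the blueprint, the Nemotron VLM is served as a NIM microservice and called over HTTP:

```python
import json

# Sketch of catalog enrichment: ask a vision-language model for
# structured attributes, then validate the reply against a schema.
# `call_vlm` is a stand-in with a canned reply; in the blueprint the
# Nemotron VLM is served as a NIM microservice and called over HTTP.
REQUIRED_FIELDS = {"title", "color", "material", "capacity", "style"}

def call_vlm(image_path, prompt):
    """Stub returning a canned model reply; replace with a real API call."""
    return json.dumps({
        "title": "Hand-Glazed Ceramic Coffee Mug",
        "color": "matte blue",
        "material": "ceramic",
        "capacity": "350 ml",
        "style": "rustic",
    })

def enrich_product(image_path):
    prompt = ("Describe this product as JSON with keys: "
              + ", ".join(sorted(REQUIRED_FIELDS)))
    attributes = json.loads(call_vlm(image_path, prompt))
    missing = REQUIRED_FIELDS - attributes.keys()
    if missing:
        raise ValueError(f"model reply missing fields: {missing}")
    return attributes

record = enrich_product("mug.jpg")
print(record["title"], "-", record["capacity"])
```

The validation step matters at catalog scale: enforcing a schema keeps attributes consistent across millions of products, which is exactly what search and personalization systems depend on.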

Pro Tip: Focus on enriching product data with high-quality images and videos. Visual content significantly boosts engagement and conversion rates.

Real-World Impact: Grid Dynamics Leading the Charge

Companies are already realizing the benefits of these blueprints. Grid Dynamics, a global tech consulting firm, has developed a catalog enrichment and management system using the Retail Catalog Enrichment Blueprint. “The quality of the search and the quality of the browsing experience for customers directly depends on the quality of the catalog data,” says Ilya Katsov, CTO of Grid Dynamics. “Our solution automates this, ensuring catalogs have rich, consistent attributes.”

This automation is crucial for large retailers with massive product catalogs, where manual data review is simply unsustainable. By improving data quality, Grid Dynamics’ solution enhances product discoverability, boosts customer intent signals, and ultimately drives sales.

Future Trends: The Convergence of Physical and Digital Retail

The MAIW and Catalog Enrichment Blueprints are just the beginning. The future of retail lies in the seamless integration of physical and digital experiences, powered by AI at every touchpoint. We can expect to see:

  • Hyper-Personalization: AI will analyze individual customer data to deliver truly personalized product recommendations, promotions, and shopping experiences.
  • Autonomous Stores: Amazon Go-style stores, utilizing computer vision and sensor technology, will become more prevalent, offering frictionless checkout and optimized inventory management.
  • Robotics and Automation: Robots will play an increasingly important role in warehouse operations, handling tasks like picking, packing, and sorting with greater efficiency.
  • Digital Twins: Retailers will create digital replicas of their stores and warehouses to simulate different scenarios, optimize layouts, and improve operational efficiency.
  • AI-Powered Supply Chains: Predictive analytics will enable retailers to anticipate demand fluctuations, optimize inventory levels, and mitigate supply chain disruptions.

FAQ

Q: What are NVIDIA Blueprints?
A: NVIDIA Blueprints are open-source developer references designed to accelerate the development of AI-powered solutions for specific industries, like retail.

Q: What is the benefit of using AI in a warehouse?
A: AI can improve efficiency, reduce errors, optimize inventory management, and enhance worker safety in warehouses.

Q: How does AI help with product catalog enrichment?
A: AI can automatically generate product descriptions, attributes, and localized content, saving retailers time and resources.

Q: Is this technology only for large retailers?
A: While the benefits are significant for large retailers, the blueprints are designed to be scalable and adaptable for businesses of all sizes.

Did you know? The global AI in retail market is projected to reach $88.7 billion by 2030, growing at a CAGR of 31.7% from 2023 to 2030. (Source: Allied Market Research)

The retail landscape is undergoing a dramatic transformation, and AI is at the heart of it. By embracing these new technologies, retailers can unlock unprecedented levels of efficiency, personalization, and customer satisfaction.

Want to learn more about the future of AI in retail? Share your thoughts in the comments below, and explore our other articles on AI and the future of commerce.

