Newsy Today
Tag: open source
Tech

AI-RAN: NVIDIA Powers Next-Gen Wireless Networks at MWC 2026

by Chief Editor March 1, 2026

AI-RAN: The Revolution Reshaping Wireless Networks

The future of wireless communication is rapidly evolving, driven by the convergence of artificial intelligence (AI) and Radio Access Networks (RAN). What was once confined to laboratory settings is now moving into real-world deployments, promising a new era of speed, efficiency, and capability for 5G and beyond. This shift, known as AI-RAN, is gaining significant momentum, with major players like Nokia and NVIDIA leading the charge.

From 5G Enhancement to 6G Foundation

AI-RAN isn’t simply an upgrade to existing 5G infrastructure; it’s a fundamental architectural change. Traditional RANs rely on static configurations, whereas AI-RAN leverages machine learning to dynamically optimize network performance: adapting to changing conditions, predicting user behavior, and allocating resources with far greater precision. The ultimate goal is to create AI-native 6G systems that are secure, open, and highly efficient.

Key Partnerships Driving Innovation

Nokia and NVIDIA are at the forefront of this revolution, forging strategic partnerships with leading telecom operators worldwide. T-Mobile U.S., SoftBank, and Indosat Ooredoo Hutchison (IOH) have already passed key implementation milestones, demonstrating the viability of NVIDIA-powered AI-RAN in live environments. These collaborations are crucial for accelerating the transition from proof-of-concept to commercial deployment.

Real-World Demonstrations: A Glimpse into the Future

Recent trials showcase the tangible benefits of AI-RAN. T-Mobile U.S. successfully demonstrated concurrent AI and RAN processing, supporting applications like video streaming and generative AI on a live 5G network. SoftBank achieved an industry-first 16-layer massive MIMO using fully software-defined 5G, while IOH showcased Southeast Asia’s first AI-powered 5G call, enabling secure, real-time connectivity and even remote control of a robotic dog.

Benchmarking Breakthroughs and Performance Gains

Benchmarking results from companies like SynaXG reveal impressive performance improvements. AI-RAN running on NVIDIA platforms delivers high-speed, carrier-grade performance across multiple 5G spectrum bands. SynaXG achieved a throughput of 36 Gbps with under 10 milliseconds latency, demonstrating the potential for significantly faster and more responsive wireless experiences.

A Thriving Ecosystem of Partners

The AI-RAN ecosystem is rapidly expanding, with companies like Quanta Cloud Technology (QCT), Supermicro, WNC, Eridan, and LITEON contributing to the development of standardized hardware and software solutions. NVIDIA’s Aerial RAN Computer (ARC) platforms, coupled with these partner offerings, provide operators with a range of deployment options.

The AI-RAN Alliance: Shaping the Industry Roadmap

The AI-RAN Alliance, now boasting over 150 members, plays a vital role in shaping the industry roadmap for AI-native networks. This collaborative effort fosters innovation, validates new concepts, and accelerates the development of open and interoperable solutions.

AI-RAN and the Rise of Autonomous Systems

As intelligence permeates the physical world, AI-RAN networks are becoming essential for supporting autonomous systems like robots and self-driving cars. These systems rely on reliable, low-latency connectivity to see, sense, reason, and act in real-time. Project ULTIMO, a European initiative, is exploring how AI-RAN can enable large-scale autonomous mobility services.

Innovations Showcased at MWC26

Mobile World Congress 2026 highlighted three times as many AI-RAN innovations as the previous year, with 26 of the 33 AI-RAN Alliance demos built using NVIDIA AI Aerial. Demonstrations included DeepSig’s AI-native air interface for improved throughput and spectral efficiency, SUTD’s split-inferencing for robots and autonomous vehicles, and zTouch Networks’ AI-RAN orchestration blueprint for efficient GPU resource allocation.

FAQ: Understanding AI-RAN

  • What is AI-RAN? AI-RAN is the integration of artificial intelligence into Radio Access Networks to optimize performance and enable new capabilities.
  • What are the benefits of AI-RAN? Benefits include increased speed, improved efficiency, reduced latency, and support for advanced applications like autonomous systems.
  • Who are the key players in AI-RAN development? Nokia and NVIDIA are leading the charge, along with a growing ecosystem of partners and the AI-RAN Alliance.
  • When will we see commercial AI-RAN deployments? Commercial trials are expected in 2026, with broader commercial releases anticipated in 2027.

Pro Tip: Keep an eye on the AI-RAN Alliance for the latest updates and industry standards.

Did you know? NVIDIA has open-sourced NVIDIA Aerial CUDA-accelerated RAN libraries to further accelerate innovation in the field.

Explore the potential of AI-RAN and its impact on the future of wireless communication. Share your thoughts and questions in the comments below!


NVIDIA: Open AI Models & Blueprints for Autonomous Telecom Networks

by Chief Editor March 1, 2026

The Rise of Agentic AI: How NVIDIA is Rewriting the Future of Telecom Networks

Autonomous networks – self-managing telecommunications systems – are rapidly transitioning from a futuristic concept to an immediate priority for telecom operators. Network automation is now the top AI investment area, according to NVIDIA’s latest State of AI in Telecommunications report. But automation is just the first step. True autonomy requires networks that can understand intent, weigh options, and make independent decisions.

Beyond Automation: The Need for Reasoning and AI Agents

The key to unlocking this next level of network intelligence lies in reasoning models and AI agents specifically trained on telecom data. These aren’t simply executing pre-programmed tasks; they’re learning to think like network engineers. This shift demands an end-to-end agentic system, incorporating telco network models, intelligent AI agents, and network simulation tools for validation.

NVIDIA’s New Tools for Autonomous Networks

Ahead of Mobile World Congress Barcelona, NVIDIA unveiled a suite of new tools designed to accelerate this transition. These include an open NVIDIA Nemotron-based Large Telco Model (LTM), a guide for building reasoning agents, and NVIDIA Blueprints focused on energy savings and network configuration. These resources are being released through GSMA’s new Open Telco AI initiative, making them accessible to operators worldwide.

Open Nemotron 3 LTM: Understanding the Language of Telecom

The new open-source NVIDIA Nemotron LTM, developed in collaboration with AdaptKey AI, is a 30-billion-parameter model designed to understand the specific terminology and workflows of the telecom industry. It’s optimized for tasks like fault isolation, remediation planning, and change validation. Crucially, being an open model provides telcos with transparency and control over their AI, allowing for secure on-premises deployment and customization with their own data.

Teaching AI to Think Like a Network Engineer

NVIDIA and Tech Mahindra have published a guide detailing how to fine-tune reasoning models and build agents capable of handling Network Operations Center (NOC) workflows. The approach focuses on identifying high-impact incident categories, translating expert resolutions into step-by-step procedures, and creating structured reasoning traces for the model to learn from. Using the NVIDIA NeMo-Skills pipeline, operators can build specialized AI agents that can solve problems with the expertise of a seasoned network engineer.
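The guide’s approach of translating expert resolutions into structured reasoning traces can be pictured with a toy training sample. The field names and incident details below are illustrative assumptions, not the actual NeMo-Skills schema:

```python
import json

# Hypothetical structure for a single training sample pairing a NOC
# incident with an expert's step-by-step reasoning trace. Field names
# are illustrative only, not the actual NeMo-Skills format.
sample = {
    "incident": "High packet loss on cell site NR-4417 after software upgrade",
    "category": "post-change degradation",
    "reasoning_trace": [
        "Check alarm history: loss started within 5 minutes of the upgrade.",
        "Compare KPI baseline: uplink BLER rose from 1% to 12%.",
        "Correlate with change log: new beamforming parameter set applied.",
        "Hypothesis: misconfigured parameter, not hardware fault.",
    ],
    "resolution": "Roll back beamforming parameter set; verify BLER returns to baseline.",
}

serialized = json.dumps(sample, indent=2)
print(serialized)
```

The point of the structured trace is that the model learns the intermediate diagnostic steps, not just the incident-to-fix mapping.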

Energy Efficiency and Intent-Driven Automation

NVIDIA’s new Blueprint for intent-driven RAN energy efficiency leverages closed-loop operation – models that understand the network, agents that act on intent, and simulation for validation. It integrates VIAVI’s TeraVM AI RAN Scenario Generator to create synthetic network data, allowing operators to test and validate energy-saving policies without disrupting live networks.

Real-World Implementations: From Africa to Japan

The NVIDIA Blueprint for telco network configuration is already being adopted by operators globally. Cassava Technologies is using it to build Cassava Autonomous Network, optimizing its multi-vendor mobile network environment in Africa. NTT DATA is implementing the blueprint to intelligently manage traffic surges in Japan, improving network resilience.

Multi-Agent Orchestration with BubbleRAN

NVIDIA and BubbleRAN are enhancing the Blueprint with the NVIDIA NeMo Agent Toolkit (NAT) and BubbleRAN Agentic Toolkit (BAT) to enable more flexible management of network monitoring, configuration, and validation agents. Telenor Group will be the first to adopt this enhanced blueprint to improve its 5G network for Telenor Maritime.

FAQ: Agentic AI in Telecom

What is an agentic AI system? An agentic AI system is one that includes AI agents capable of understanding intent, reasoning, and taking independent actions to achieve specific goals.

What is the NVIDIA Nemotron LTM? It’s an open-source large telco model designed to understand the language of telecom and reason through complex workflows.

How can AI help with network energy efficiency? AI can analyze network data and identify opportunities to reduce power consumption without impacting quality of service.

What is the benefit of an open-source AI model? Open-source models provide transparency, control, and the ability to customize the AI to specific network needs.

What is the role of simulation in autonomous networks? Simulation allows operators to safely test and validate AI-driven decisions before implementing them in a live network.

Did you know? The NVIDIA State of AI in Telecommunications report identifies network automation as the top AI use case for investment and return on investment.

Pro Tip: Focus on high-impact, high-frequency incident categories when training AI agents to maximize their effectiveness.

Explore the latest advancements in agentic AI for telecommunications at Mobile World Congress, taking place in Barcelona from March 2-5.

What are your thoughts on the future of AI in telecom? Share your insights in the comments below!


Tenable warns of widening AI exposure gap in cloud

by Chief Editor February 23, 2026

The Widening AI Exposure Gap: Why Cloud Security is Falling Behind

Organisations are facing a growing cybersecurity challenge: an “AI exposure gap.” This isn’t about AI *causing* breaches, but rather the rapid integration of AI, cloud technologies, and third-party software creating vulnerabilities that security teams struggle to identify and address. A recent report from Tenable highlights this critical mismatch between engineering speed and security capabilities.

The Software Supply Chain: A Major Weak Point

The report reveals a significant risk within the software supply chain. A staggering 86% of organisations have third-party code packages installed containing critical-severity vulnerabilities. Even more concerning, 13% have deployed packages with a known history of compromise, including instances linked to the s1ngularity and Shai-Hulud worms. This demonstrates that vulnerabilities aren’t just theoretical; they’re actively being exploited.

The increasing use of AI and Model Context Protocol third-party packages – found in 70% of organisations – further complicates matters. These integrations often bypass traditional security oversight, embedding AI deeper into systems and expanding the attack surface.

Identity and Access Management: A Critical Control Point

Identity controls are proving to be a major pressure point. “Ghost” secrets – unused or unrotated cloud credentials – plague 65% of organisations. Alarmingly, 17% of these unused credentials grant critical administrative privileges. Nearly half (49%) of identities with excessive permissions remain dormant, representing a significant potential entry point for attackers.
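Hunting for “ghost” secrets is essentially an audit over last-used timestamps and privilege flags. A minimal sketch, assuming a simplified credential inventory and a 90-day staleness threshold (both illustrative):

```python
from datetime import datetime, timedelta

# Toy audit: flag "ghost" credentials that have not been used recently,
# and highlight the dangerous subset that also carries admin privileges.
# The record format and the 90-day threshold are illustrative assumptions.
STALE_AFTER = timedelta(days=90)
now = datetime(2026, 2, 23)

credentials = [
    {"id": "svc-backup",  "last_used": datetime(2025, 3, 1),  "admin": True},
    {"id": "svc-billing", "last_used": datetime(2026, 2, 20), "admin": False},
    {"id": "ci-deployer", "last_used": datetime(2025, 11, 5), "admin": True},
]

ghosts = [c for c in credentials if now - c["last_used"] > STALE_AFTER]
critical = [c for c in ghosts if c["admin"]]

print([c["id"] for c in ghosts])    # stale credentials
print([c["id"] for c in critical])  # stale AND admin-privileged
```

In practice the inventory would come from a cloud provider’s IAM API rather than a hardcoded list, but the prioritization logic is the same: stale plus privileged comes first.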

The report also raises concerns about permissions granted to AI services themselves, with 18% of organisations giving them rarely-audited administrative access. Non-human identities, like AI agents and service accounts, now pose a higher risk (52%) than human users (37%), due to “toxic combinations” of permissions across fragmented systems.

The Rise of “Invisible” Exposure

Tenable defines this challenge as an issue of “exposure management” – the process of identifying, evaluating, and prioritizing risks across all potential attacker entry points. AI adoption dramatically expands the number of systems and components that can inherit risk, adding new layers to applications, infrastructure, identities, and data. This creates a largely invisible exposure that many security teams are ill-equipped to manage.

The report identified severe risks in four key areas: AI security posture, supply chain attack vectors, least-privilege implementation, and cloud workload exposure.

What Can Organisations Do?

The report recommends a multi-faceted approach. Improving visibility of AI integrations is paramount, alongside tightening identity-centric controls. Implementing least-privilege practices for AI roles, removing “ghost” identities, and eliminating exposure from static secrets are also crucial steps. Recognizing that third-party code and external accounts now function as extensions of an organisation’s infrastructure is vital.

Liat Hayun, Senior Vice President of Product Management and Research at Tenable, emphasizes the demand for security teams to proactively account for AI systems embedded within infrastructure. She states that a lack of visibility and governance leaves teams vulnerable to new exposures, including over-privileged identities in the cloud.

Hayun advocates for focusing on the “unified exposure path” to move beyond managing “security debt” and towards managing actual business risk.

Pro Tip

Regularly audit and rotate cloud credentials. Implement multi-factor authentication (MFA) wherever possible to add an extra layer of security.

Future Trends to Watch

The AI exposure gap isn’t a static problem; it’s likely to worsen as AI becomes more pervasive. Several trends will exacerbate the challenge:

  • Increased AI Complexity: AI models will become more complex, making it harder to understand their internal workings and potential vulnerabilities.
  • AI-Powered Attacks: Attackers will increasingly leverage AI to automate and refine their attacks, making them more sophisticated and tough to detect.
  • Expansion of Non-Human Identities: The number of AI agents and service accounts will continue to grow, increasing the risk associated with non-human identities.
  • Decentralized AI Development: More AI development will occur outside of centralized IT departments, leading to shadow AI and increased security risks.

FAQ

Q: What is the “AI exposure gap”?
A: It’s the growing mismatch between the speed of AI and cloud adoption and the ability of security teams to assess and remediate associated risks.

Q: How significant is the risk from third-party code?
A: 86% of organisations have third-party code packages with critical vulnerabilities, and 13% have deployed compromised packages.

Q: What is exposure management?
A: It’s the process of identifying, evaluating, and prioritizing risks across all potential attacker entry points.

Did you know?

Non-human identities (AI agents, service accounts) now present a higher risk profile than human users, according to Tenable’s research.

Want to learn more about securing your cloud environment? Explore our other articles on cloud security best practices.


NVIDIA Fuels India’s AI Revolution: Infrastructure, Models & Research

by Chief Editor February 18, 2026

India’s AI Revolution: A Deep Dive into the India AI Impact Summit 2026

India is rapidly emerging as a global hub for Artificial Intelligence (AI) innovation, a trend powerfully underscored by the recent India AI Impact Summit in New Delhi. The summit brought together heads of state, industry leaders, and entrepreneurs to chart the course for AI’s future, with NVIDIA playing a central role in bolstering the nation’s AI capabilities.

Building a Robust AI Infrastructure

A cornerstone of India’s AI ambitions is a significant investment in computing infrastructure. The IndiaAI Compute Pillar is driving the development of AI cloud offerings, incorporating tens of thousands of NVIDIA GPUs. This initiative is fueled by over $1 billion in funding through the IndiaAI Mission, designed to strengthen compute capacity and foster the development of sovereign AI.

NVIDIA is collaborating with next-generation cloud providers like Yotta, L&T, and E2E Networks to deliver advanced AI factories. Yotta’s Shakti Cloud, powered by over 20,000 NVIDIA Blackwell Ultra GPUs, offers pay-per-use GPU-dense services. E2E Networks is building an NVIDIA Blackwell GPU cluster on its TIR platform, hosted at the L&T Vyoma Data Center in Chennai, featuring NVIDIA HGX B200 systems and open models.

Further expanding access, Netweb Technologies is launching Tyrone Camarero AI Supercomputing systems built on the NVIDIA Grace Blackwell architecture, manufactured in India under the “Make in India” mission.

The Rise of India-Specific AI Models

The IndiaAI Mission’s Innovation Center Pillar focuses on developing and deploying foundation models trained on India-specific data. This is particularly crucial for a multilingual nation like India, with 22 constitutionally recognized languages and over 1,500 more. Frontier AI models can help bridge the digital divide and enable more inclusive technology access.

Organizations are leveraging NVIDIA Nemotron to support public-sector services, financial systems, and enterprise operations in multiple languages. Datasets like Nemotron-Personas-India, built using NeMo Data Designer, provide a foundation for population-scale sovereign AI development.

Key Players in India’s AI Model Development

  • BharatGen: Developed a 17-billion-parameter mixture-of-experts model using the NVIDIA NeMo framework.
  • Chariot: Building an 8-billion-parameter model for real-time text to speech using the NeMo framework.
  • Commotion: Integrating NVIDIA Nemotron models into its AI operating system for automating enterprise workflows.
  • CoRover.ai: Deploying NVIDIA Nemotron Speech open models for customer service applications for the Indian Railway Catering and Tourism Corporation.
  • Gnani.ai: Building a 14-billion-parameter speech-to-speech model on NVIDIA Nemotron Speech models.
  • National Payments Corporation of India (NPCI): Exploring training FiMi, a financial model for India, using the NVIDIA Nemotron 3 Nano model.
  • Sarvam.ai: Open sourcing its Sarvam-3 series of text and multimodal large language model variants, trained for 22 Indic languages.
  • Soket.ai: Utilizing a modern large-model training stack on open NVIDIA Nemotron technologies.
  • Tech Mahindra: Developing an 8-billion-parameter foundation model tailored for Indian languages and dialects.
  • Zoho: Advancing its Zia LLM platform with proprietary models built using NVIDIA NeMo.

Government and Academic Collaboration

The IndiaAI Mission’s Application Development and Startup Financing Pillars are fostering innovation through government and academic partnerships. NVIDIA is collaborating with the Anusandhan National Research Foundation (ANRF) to support cutting-edge AI research across leading academic institutions.

This collaboration will provide ANRF grantee institutions with access to NVIDIA AI Enterprise software and technical mentorship through the NVIDIA AI Technology Center. NVIDIA is partnering with venture capital firms like Peak XV and Accel India to identify and fund promising AI startups, with over 4,000 Indian AI startups already participating in the NVIDIA Inception program.

FAQ

Q: What is the IndiaAI Mission?
A: It’s a national program to build AI infrastructure, datasets, skilling, and innovation ecosystems in India.

Q: What role is NVIDIA playing in India’s AI development?
A: NVIDIA is collaborating with cloud providers, research institutions, and startups to provide infrastructure, models, and expertise.

Q: What is NVIDIA Nemotron?
A: It’s a suite of open models, datasets, tools, and libraries for building frontier speech, language, and multimodal models.

Q: What is the significance of developing AI models for Indian languages?
A: It helps bridge the digital divide and makes AI technology more accessible to India’s diverse population.

Did you know? India is investing heavily in its AI cloud infrastructure, with systems including tens of thousands of NVIDIA GPUs.

Pro Tip: Explore the NVIDIA Inception program for startups to gain access to resources and support for AI development.

Stay informed about the latest advancements in AI and India’s role in shaping the future of this transformative technology. Learn more about NVIDIA’s partnerships with India’s largest manufacturers and how India’s global systems integrators are building enterprise AI agents with NVIDIA.


From Physics to Securing the Internet: The Story of FreeRADIUS Founder Alan DeKok

by Chief Editor February 17, 2026

From Physics to Securing the Internet: The Enduring Legacy of FreeRADIUS and the Future of Network Authentication

Alan DeKok’s journey from nuclear physics to becoming a leading figure in network security is a testament to the power of adaptability and the often-unforeseen opportunities that arise from pursuing one’s curiosity. His creation, FreeRADIUS, a foundational open-source software for authenticating users, quietly underpins a significant portion of internet access worldwide – from major internet service providers to university Wi-Fi networks.

The Unseen Foundation of Internet Security

Most internet users are unaware of the complex processes happening behind the scenes to verify their identity and grant access to online resources. FreeRADIUS acts as that gatekeeper, a critical component of the Remote Authentication Dial-In User Service (RADIUS) protocol. It’s a system DeKok began developing as a side project in the late 1990s, recognizing a gap in the market for actively maintained open-source RADIUS servers.

From Strawberries to Subatomic Particles: A Unique Skillset

DeKok’s path wasn’t a direct line to technology. Growing up on a farm, he quickly developed a preference for the challenges of 8-bit computers over agricultural labor. This led him to pursue Bachelor’s and Master’s degrees in physics at Carleton University; he found physics appealing for its blend of mathematics and practical application. His work at the Sudbury Neutrino Observatory, where he managed a water-purification system achieving an astonishing one atom of impurity per cubic meter, honed his problem-solving skills.

Pro Tip: DeKok emphasizes that the ability to understand the “big picture” and break down complex problems into manageable pieces – skills honed during his physics studies – are invaluable in the rapidly evolving field of network security.

The Rise of FreeRADIUS and InkBridge Networks

After stints at Gandalf and CryptoCard, DeKok founded NetworkRADIUS (now InkBridge Networks) in 2008, driven by a desire to continue developing and supporting FreeRADIUS. Today, the software is used by an estimated 100 million people daily, and InkBridge Networks employs experts across Canada, France, and the United Kingdom. DeKok estimates that at least half of the world’s internet users rely on his software for authentication.

Why RADIUS Endures: Simplicity and Implementation

Despite the emergence of alternative protocols like Diameter, RADIUS continues to thrive. While Diameter offered potential improvements, RADIUS’s simplicity and widespread existing implementation have given it a significant advantage. DeKok believes RADIUS is “never going to go away,” citing the billions of dollars of equipment currently running the protocol.
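That simplicity is visible in the protocol itself. For example, RFC 2865 protects a server’s reply with a single MD5 digest over the packet and the shared secret. A minimal sketch of that computation (this is the spec’s Response Authenticator, not FreeRADIUS source code; the secret and packet values are illustrative):

```python
import hashlib
import struct

def response_authenticator(code, ident, attrs, request_auth, secret):
    """RFC 2865 Response Authenticator:
    MD5(Code + ID + Length + RequestAuthenticator + Attributes + Secret)."""
    length = 20 + len(attrs)  # 20-byte RADIUS header + attribute bytes
    header = struct.pack("!BBH", code, ident, length)  # code, id, length
    return hashlib.md5(header + request_auth + attrs + secret).digest()

# Example: an Access-Accept (code 2) reply with no attributes.
req_auth = bytes(16)      # the 16-byte authenticator copied from the request
secret = b"testing123"    # shared secret between NAS and server (illustrative)
digest = response_authenticator(2, 1, b"", req_auth, secret)
print(digest.hex())
```

A client that knows the shared secret recomputes the same digest to verify the reply wasn’t forged; that small, fixed packet format is a large part of why RADIUS implementations are so widespread.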

The Open-Source Advantage

DeKok attributes FreeRADIUS’s success to its open-source nature. Initially adopted as a way to enter the market with limited funding, open-sourcing allowed FreeRADIUS to compete effectively with larger companies and establish itself as an industry-leading product. This collaborative approach fosters innovation and ensures the software remains adaptable to evolving security threats.

The Future of Network Authentication: Beyond Passwords

While FreeRADIUS remains a cornerstone of network security, the landscape of authentication is rapidly changing. Several trends are poised to shape the future of how users access networks and online services:

Multi-Factor Authentication (MFA) Expansion

The increasing sophistication of cyberattacks is driving the adoption of MFA. While traditionally relying on SMS codes or authenticator apps, future MFA solutions will likely integrate biometric authentication (fingerprint, facial recognition) and passwordless technologies.

Passwordless Authentication

Passwordless authentication methods, such as WebAuthn and FIDO2, are gaining traction. These technologies leverage cryptographic keys stored on devices to verify user identity, eliminating the need for passwords altogether. This reduces the risk of phishing attacks and improves user experience.

Zero Trust Network Access (ZTNA)

ZTNA is a security model based on the principle of “never trust, always verify.” Unlike traditional VPNs, ZTNA provides granular access control based on user identity, device posture, and application context. This approach minimizes the attack surface and enhances security for remote access.

AI and Machine Learning in Authentication

Artificial intelligence (AI) and machine learning (ML) are being used to detect and prevent fraudulent authentication attempts. ML algorithms can analyze user behavior patterns to identify anomalies and flag suspicious activity, providing an additional layer of security.
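The core idea can be shown with a deliberately tiny example: score a new login against a user’s historical pattern and flag outliers. A real system would use far richer features (device, geolocation, travel velocity); the data and threshold here are made up for illustration:

```python
import statistics

# Minimal anomaly check on login behavior: flag a login whose hour-of-day
# deviates strongly from this user's history (a simple z-score test).
history_hours = [9, 10, 9, 11, 10, 9, 10, 11, 9, 10]  # typical 9-11am logins

mean = statistics.mean(history_hours)
stdev = statistics.stdev(history_hours)

def is_anomalous(hour, threshold=3.0):
    # z-score: how many standard deviations from this user's norm?
    return abs(hour - mean) / stdev > threshold

print(is_anomalous(10))  # in-pattern morning login
print(is_anomalous(3))   # 3am login -> flagged for step-up authentication
```

A flagged login typically doesn’t block access outright; it triggers step-up verification such as an extra MFA challenge.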

Frequently Asked Questions (FAQ)

  • What is FreeRADIUS? FreeRADIUS is an open-source implementation of the RADIUS protocol, used for authenticating users and controlling network access.
  • Who uses FreeRADIUS? Major internet service providers, financial institutions, universities, and other organizations rely on FreeRADIUS for network security.
  • What is the RADIUS protocol? RADIUS is a networking protocol that provides centralized authentication, authorization, and accounting (AAA) services.
  • Is FreeRADIUS secure? FreeRADIUS is actively maintained and regularly updated to address security vulnerabilities.

Alan DeKok’s story highlights the importance of adaptability, continuous learning, and the often-serendipitous nature of career paths. As network security continues to evolve, the principles he embodies – a focus on foundational knowledge, a willingness to embrace new technologies, and a commitment to open collaboration – will remain essential for securing the internet for years to come.

Explore more articles on network security and open-source technologies.


AI Tokenomics: Lowering Costs with NVIDIA Blackwell & Open Source Models

by Chief Editor February 12, 2026

The AI Token Revolution: How Cost Efficiency is Fueling the Next Wave of Innovation

Every AI-powered interaction, from a diagnostic insight in healthcare to a character’s dialogue in a game, relies on a fundamental unit of intelligence: the token. As AI scales, the ability to afford more tokens becomes critical. The key? Better tokenomics – driving down the cost of each token. This trend is accelerating, with recent research indicating infrastructure and algorithmic efficiencies are reducing inference costs by up to 10x annually.

What Exactly *Are* AI Tokens?

Tokens are the basic units of data that AI models process. Whether it’s text, images, or audio, data is broken down into tokens before being analyzed. The faster these tokens can be processed, the faster the AI learns and responds. Efficient tokenization is crucial for reducing the computational power needed for both training and inference.
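A toy illustration of the idea: production models use learned subword vocabularies (e.g. byte-pair encoding), but the principle is the same, text becomes a sequence of integer IDs the model can process, and each ID is a billable unit. The vocabulary below is invented for the example:

```python
# Toy word-level tokenizer. Real tokenizers split into subwords so that
# unseen words decompose into known pieces instead of mapping to <unk>.
vocab = {"the": 0, "quick": 1, "brown": 2, "fox": 3, "<unk>": 4}

def tokenize(text):
    # Map each lowercase word to its ID; unknown words fall back to <unk>.
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

ids = tokenize("The quick brown fox")
print(ids)        # [0, 1, 2, 3]
print(len(ids))   # 4 tokens would be metered for this request
```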

The Impact of NVIDIA Blackwell: A 10x Cost Reduction

Leading AI inference providers, including Baseten, DeepInfra, Fireworks AI, and Together AI, are already leveraging the NVIDIA Blackwell platform to significantly reduce costs. Blackwell helps them reduce cost per token by up to 10x compared to the previous NVIDIA Hopper platform. This is achieved through a combination of advanced hardware, optimized software, and efficient inference stacks.

Healthcare: Sully.ai and Baseten’s 90% Cost Reduction

In healthcare, companies like Sully.ai are using AI to automate tasks like medical coding and note-taking, freeing up doctors to spend more time with patients. By migrating to Baseten’s Model API, powered by open source models on NVIDIA Blackwell GPUs, Sully.ai achieved a 90% reduction in inference costs – a 10x improvement over their previous closed-source implementation – alongside a 65% improvement in response times. This has already returned over 30 million minutes to physicians.

Gaming: Latitude and DeepInfra’s 4x Improvement

AI-native gaming, exemplified by Latitude’s AI Dungeon and upcoming Voyage platform, presents unique scaling challenges. Every player action triggers an inference request, demanding low latency and cost-effective processing. By running large open source models on DeepInfra’s Blackwell-powered platform, Latitude reduced the cost per million tokens from 20 cents to just 5 cents – a 4x improvement – while maintaining accuracy.
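Latitude’s figures are easy to sanity-check. The per-million-token prices come from the article; the monthly volume is an assumed figure purely to show the scale of the savings:

```python
# Cost-per-million-token prices from the Latitude example.
old_cost_per_m = 0.20   # dollars per million tokens, before
new_cost_per_m = 0.05   # dollars per million tokens, after
monthly_tokens_m = 50_000  # assume 50 billion tokens/month (illustrative)

old_bill = old_cost_per_m * monthly_tokens_m
new_bill = new_cost_per_m * monthly_tokens_m

print(f"improvement: {old_cost_per_m / new_cost_per_m:.0f}x")
print(f"monthly: ${old_bill:,.0f} -> ${new_bill:,.0f}")
```

At that assumed volume, the same workload drops from a $10,000 monthly inference bill to $2,500, which is the difference between a game economy that works and one that doesn’t.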

Agentic Chat: Fireworks AI and Sentient Foundation’s 25-50% Efficiency Gain

Sentient Labs is building powerful reasoning AI systems using open source models. To manage the complex compute demands of its Sentient Chat application, the company partnered with Fireworks AI, utilizing its Blackwell-optimized inference stack. This resulted in a 25-50% improvement in cost efficiency compared to their previous Hopper-based deployment, supporting a viral launch with 1.8 million waitlisted users and 5.6 million queries in a single week.

Customer Service: Decagon and Together AI’s 6x Cost Savings

Decagon builds AI agents for enterprise customer support, where even slight delays can negatively impact the user experience. By leveraging Together AI’s production inference on NVIDIA Blackwell GPUs, and implementing optimizations like speculative decoding and caching, Decagon achieved a 6x reduction in cost per query compared to using closed-source proprietary models. Response times were consistently under 400 milliseconds, even with thousands of tokens per query.

The Future of Tokenomics: Beyond Blackwell

The cost reductions seen today are just the beginning. NVIDIA’s GB200 NVL72 system promises a further 10x reduction in cost per token for reasoning models compared to NVIDIA Hopper. Looking ahead, the NVIDIA Rubin platform aims to deliver another 10x performance boost and token cost reduction over Blackwell, integrating six new chips into a single AI supercomputer.

Pro Tip: Explore Open Source Models

The case studies above highlight the power of combining optimized hardware with open source models. Don’t overlook the potential cost savings and flexibility offered by the open source AI community.

FAQ: Understanding AI Tokenomics

  • What is a token in AI? A token is a basic unit of data processed by AI models, representing pieces of text, images, or audio.
  • Why is tokenomics vital? Tokenomics determines the cost of running AI applications, impacting scalability and profitability.
  • How can I reduce my AI costs? Optimizing infrastructure, utilizing efficient models, and leveraging platforms like NVIDIA Blackwell are key strategies.
  • What is the role of NVIDIA Blackwell? NVIDIA Blackwell is a platform designed to significantly reduce the cost per token for AI inference.

Want to learn more about optimizing your AI infrastructure? Explore NVIDIA’s full-stack inference platform.

February 12, 2026
Tech

SlimeVR Butterfly Trackers – nRF52833-based, ultra-slim, full-body VR trackers offer up to 48h battery life (Crowdfunding)

by Chief Editor, February 12, 2026

SlimeVR Butterfly Trackers: The Future of Affordable, Wireless Full-Body Tracking is Here

Rotterdam-based SlimeVR is poised to disrupt the virtual reality landscape with its new Butterfly Trackers. These ultra-slim, open-hardware trackers promise to deliver a comfortable and affordable full-body tracking (FBT) experience, eliminating the need for cumbersome base stations or complex setups. The trackers are designed for a wide range of applications, including VR gaming, VTubing, and motion capture.

Beyond Base Stations and Wires: How SlimeVR Butterfly Trackers Work

Unlike traditional FBT systems that rely on external base stations, SlimeVR Butterfly Trackers utilize Inertial Measurement Units (IMUs) to track absolute rotation. Each tracker transmits data wirelessly via a custom 2.4 GHz protocol to a dedicated USB dongle, supporting up to 10 trackers simultaneously. This innovative approach removes the limitations of space and setup complexity associated with older technologies. The system doesn’t require Wi-Fi or Bluetooth, addressing concerns about latency and interference.

Engineering Marvel: Comfort and Performance in a 7mm Package

SlimeVR has prioritized comfort with the Butterfly Tracker’s design. Weighing less than 10 grams and measuring under 7mm thick, these trackers are designed to be worn discreetly under clothing. The “butterfly” split design, with the PCB and 90 mAh battery positioned side-by-side and connected by a flexible bridge, contours to the body for a more natural and comfortable fit. Despite their small size, the trackers boast an impressive battery life of over 48 hours on a single charge, utilizing USB-C for convenient recharging.

Technical Specifications: A Deep Dive

The Butterfly Trackers are built around the Nordic nRF52833 wireless MCU, featuring an Arm Cortex-M4F microcontroller running at 64 MHz. They offer a 100-200 Hz refresh rate and latency of less than 15ms. Key specifications include:

  • Wireless MCU: Nordic nRF52833
  • Memory: 128 kB RAM, 512 kB flash
  • Connectivity: 2.4 GHz proprietary wireless (ESB protocol)
  • Sensor: 6-axis IMU (TDK ICM-45686)
  • Battery: 90 mAh (48+ hours active use)
  • Dimensions: 56 x 35 x 7 mm
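Those battery figures imply a very low average current draw. A quick sanity check (assuming the full 90 mAh rating is usable, which real cells only approximate):

```python
capacity_mah = 90       # rated battery capacity
runtime_h = 48          # claimed active runtime
avg_draw_ma = capacity_mah / runtime_h  # average current over a full discharge
```

At roughly 1.9 mA average, the claim is plausible for a 2.4 GHz proprietary protocol with aggressive duty cycling, which is exactly the kind of budget the nRF52-class MCUs are designed for.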

Software Ecosystem: From Firmware to Full-Body Integration

SlimeVR’s ecosystem extends beyond the hardware. The trackers run on Smol Slime firmware, originally a community-led project designed to optimize power efficiency. The SlimeVR Server, available for Windows, macOS, Linux, and Android, acts as the central processing unit, combining data from the trackers and using forward kinematics to calculate body position based on user height and proportions. Integration with popular VR platforms is achieved through the OpenVR Driver, allowing seamless compatibility with SteamVR. Support for OSC protocol enables direct connection to standalone headsets.
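Forward kinematics here means chaining absolute segment rotations into joint positions. A minimal 2D sketch of the idea (illustrative only; the segment proportions are assumptions, and the actual SlimeVR solver works in 3D with full rotation data):

```python
import math

def forward_kinematics(lengths, angles, origin=(0.0, 0.0)):
    """Chain 2D segments: each IMU reports an absolute angle (radians),
    and each joint sits at the previous joint plus length * (cos, sin)."""
    x, y = origin
    joints = [(x, y)]
    for length, angle in zip(lengths, angles):
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        joints.append((x, y))
    return joints

# Hypothetical segment proportions as fractions of a 1.75 m user's height.
height = 1.75
thigh, shin = 0.245 * height, 0.246 * height
# Both segments pointing straight down (-90 degrees): a standing leg.
leg = forward_kinematics([thigh, shin], [-math.pi / 2, -math.pi / 2])
ankle = leg[-1]  # y-coordinate is approximately -(thigh + shin)
```

This is why the server needs the user’s height and proportions: IMUs give rotation only, so segment lengths must come from calibration.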

From Gaming to Motion Capture: Versatile Applications

The SlimeVR Butterfly Trackers unlock a wide range of possibilities. They are compatible with VR games like VRChat, enabling full-body tracking for enhanced immersion. VTubers can leverage the trackers for more expressive and engaging streams, and motion capture artists can utilize them for recording BVH files for use in programs like Blender. The system’s ability to track movement without occlusion – meaning clothes or body parts won’t block the signal – further expands its potential applications.

Availability and Pricing

The SlimeVR Butterfly Trackers are currently available for pre-order on Crowd Supply, with shipping scheduled for August 31, 2026. Pricing starts at $279 for the Core Set (6 trackers + dongle), with options for larger sets and accessories, including a charging dock.

Frequently Asked Questions

  • Do SlimeVR Trackers require base stations? No, they do not. They utilize IMUs for tracking and do not rely on external base stations.
  • Can the trackers be used under clothing? Yes, their slim design and flexible interconnect make them comfortable to wear under clothing.
  • What is the battery life of the trackers? The trackers offer over 48 hours of active use on a single charge.
  • How many trackers can be connected? The system supports up to 10 trackers connected to a single dongle.
  • What platforms are supported? The SlimeVR Server is available for Windows, macOS, Linux, and Android.

Explore more about SlimeVR and the Butterfly Trackers on the official website and GitHub repositories.

February 12, 2026
Tech

AI in Finance: 89% See Revenue Gains & Budgets Rise – 2026 Report

by Chief Editor, February 10, 2026

AI Revolutionizes Finance: A New Era of Efficiency and Growth

Artificial intelligence is no longer a futuristic concept in financial services – it’s the present, and its impact is rapidly accelerating. From automating complex trading algorithms to bolstering fraud detection and streamlining risk management, AI is reshaping the industry. A recent NVIDIA report reveals that AI adoption is at an all-time high, with organizations realizing significant returns on investment.

The Rise of AI-Powered Revenue and Cost Reduction

A staggering 89% of financial institutions report that AI is directly contributing to increased annual revenue and decreased costs. This isn’t just theoretical; 64% have seen revenue increases exceeding 5%, with nearly a third experiencing gains of over 10%. Cost reductions are equally impressive, with 61% reporting savings of more than 5%, and 25% exceeding 10%. These gains are being driven by AI applications in areas like document processing, customer service, algorithmic trading, and risk management.

Open Source AI: Leveling the Playing Field

The landscape of financial AI is being fundamentally altered by the growing importance of open-source models. 84% of respondents in the NVIDIA report consider open-source models and software crucial to their AI strategy. This shift gives organizations greater flexibility and efficiency, enabling them to tailor AI tools to their specific needs and enhance accuracy by incorporating proprietary data. However, experts caution that while open source can help close the gap with early adopters, proprietary approaches can still unlock superior performance for specialized tasks.

Pro Tip: Don’t underestimate the power of fine-tuning open-source models with your own data. That is where true competitive advantage lies.

AI Agents: The Next Frontier in Automation

Beyond traditional AI applications, agentic AI – advanced systems capable of autonomous reasoning, planning, and execution – is gaining traction. Currently, 42% of companies are exploring agentic AI, with 21% already deploying these systems. These AI agents are proving particularly effective in areas like payment operations, where they can optimize authorization rates and routing decisions with speed and precision that traditional rule-based systems can’t match. Every basis point improvement in authorization rates translates directly to revenue, making this a high-impact application.
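To see why single basis points matter, here is the arithmetic (the volume and uplift figures are hypothetical illustrations, not numbers from the report):

```python
def authorization_uplift_usd(annual_volume_usd: float, bps_gained: float) -> float:
    """Revenue recovered by improving the payment authorization rate by
    `bps_gained` basis points (1 bp = 0.01%) on a given payment volume."""
    return annual_volume_usd * bps_gained / 10_000

# A 5 bp authorization-rate improvement on $1B of annual payment volume:
uplift = authorization_uplift_usd(1_000_000_000, 5)  # $500,000
```

On large payment volumes, improvements too small to target with hand-written rules become material revenue, which is the economic case for agentic routing.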

Budgets Surge as AI Delivers Results

The success of AI initiatives is fueling increased investment. Nearly 100% of surveyed organizations plan to maintain or increase their AI budgets in the coming year. Investment is focused on three key areas: optimizing existing AI workflows, expanding AI into new use cases, and building or improving AI infrastructure – both on-premises and in the cloud. The deployment and expansion of AI agents are likewise receiving significant attention.

The Importance of Data as a Strategic Asset

A key takeaway from the NVIDIA report is the growing recognition of proprietary data as a strategic asset. Organizations that can effectively leverage their unique data sets to train and refine AI models will be best positioned to gain a competitive edge. This underscores the importance of data governance, quality, and accessibility.

Did you know? The ability to fine-tune AI models on proprietary data is becoming a key differentiator in the financial services industry.

Looking Ahead: Future Trends in Financial AI

The current trajectory suggests several key trends will shape the future of AI in finance:

  • Increased Adoption of Generative AI: Generative AI adoption is already on the rise (up 52% year-over-year), and this trend is expected to continue as institutions explore its potential for tasks like content creation, risk modeling, and customer interaction.
  • Edge AI Expansion: As AI moves closer to the point of data generation, edge AI platforms like NVIDIA’s Jetson and Thor will become increasingly important for real-time analysis and decision-making.
  • AI-Driven Cybersecurity: The financial sector is a prime target for cyberattacks. AI will play a crucial role in proactively identifying and mitigating threats, enhancing security measures, and protecting sensitive data.
  • The Convergence of AI and 6G: The integration of AI into next-generation telecommunications networks, as exemplified by NVIDIA’s partnership with Nokia, will unlock new possibilities for real-time data analysis and ultra-reliable connectivity.

FAQ: AI in Financial Services

Q: What are the biggest benefits of AI in finance?
A: Increased revenue, reduced costs, improved risk management, enhanced fraud detection, and better customer experiences.

Q: Is open-source AI a viable alternative to proprietary solutions?
A: Open-source AI offers flexibility and cost-efficiency, but proprietary solutions may deliver superior performance for specific tasks.

Q: What is agentic AI?
A: Agentic AI refers to advanced AI systems that can autonomously reason, plan, and execute complex tasks.

Q: How important is data quality for AI success?
A: Data quality is paramount. Accurate, complete, and well-governed data is essential for training effective AI models.

Explore more about NVIDIA’s AI solutions for financial services and download the full “State of AI in Financial Services: 2026 Trends” report to delve deeper into these insights.

What are your thoughts on the future of AI in finance? Share your insights in the comments below!

February 10, 2026
Business

Next Moca Releases Agent Definition Language as an Open Source Specification

by Chief Editor, February 9, 2026

The Rise of Agent Definition Languages: A Fresh Standard for AI’s Future

The artificial intelligence landscape is rapidly evolving beyond simple chatbots and one-off prompts. We’re entering the era of AI agents – autonomous entities capable of reasoning, utilizing tools, accessing knowledge, and orchestrating complex workflows. But with this advancement comes a critical challenge: a lack of standardization. Every platform and team defines “agents” differently, leading to fragmentation and hindering scalability. Now, a new open-source standard, the Agent Definition Language (ADL), aims to solve this problem.

What is ADL and Why Does it Matter?

Developed by Next Moca and released under the Apache 2.0 license, ADL is essentially a blueprint for AI agents. It provides a vendor-neutral, declarative format for defining everything an agent *is* and *can do*. This includes its identity, purpose, the language model it uses, the tools it has access to, its permissions, how it accesses information (through Retrieval Augmented Generation or RAG), and even governance metadata like ownership and version history.

Think of it like this: OpenAPI defines APIs, allowing different systems to communicate seamlessly. ADL aims to do the same for AI agents. As Kiran Kashalkar, founder of Next Moca, puts it: “Think OpenAPI (Swagger) for agents.”

Addressing the Fragmentation Problem

Currently, agent definitions are often scattered across various formats – YAML files, configurations embedded in code, proprietary JSON fields – making it difficult to understand an agent’s capabilities and boundaries. This lack of clarity poses significant challenges for security reviews, compliance, and reuse. ADL consolidates these definitions into a single, machine-readable format, enhancing inspectability and governance.

Pro Tip: A standardized definition layer like ADL allows for consistent validation in CI/CD pipelines, ensuring agents meet predefined standards before deployment.

How ADL Works: A Declarative Approach

ADL is a declarative language, meaning it focuses on *what* an agent should do, not *how* it should do it. It doesn’t define runtime behavior or agent-to-agent communication protocols. Instead, it provides a clear specification of the agent’s characteristics, allowing different platforms and frameworks to interpret and execute it.

This framework-agnostic approach is crucial for portability. Developers can define an agent once using ADL and then deploy it across various platforms without modification. This reduces vendor lock-in and promotes interoperability.
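To make the idea concrete, here is a minimal sketch of what such a declarative definition and a pre-deployment check might look like. The field names below are hypothetical illustrations of the categories the article lists (identity, purpose, model, tools, permissions, RAG, governance) – the authoritative schema lives in the ADL repository on GitHub:

```python
import json

# Hypothetical ADL-style agent definition; field names are illustrative,
# not taken from the actual specification.
agent = {
    "name": "support-triage",
    "purpose": "Answer routine customer inquiries and escalate complex ones",
    "model": "an-open-weights-llm",
    "tools": ["knowledge_base_search", "ticket_escalation"],
    "permissions": {"network": "restricted", "data": ["support_tickets"]},
    "rag": {"sources": ["support-docs-index"]},
    "governance": {"owner": "support-team", "version": "1.0.0"},
}

REQUIRED = {"name", "purpose", "model", "tools", "permissions", "governance"}

def validate(definition: dict) -> bool:
    """The kind of structural check a CI/CD pipeline could run pre-deployment."""
    missing = REQUIRED - definition.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    return True

validate(agent)                       # raises if the definition is incomplete
artifact = json.dumps(agent, indent=2)  # the portable, machine-readable artifact
```

Because the definition is plain JSON, any platform that understands the schema can load the same artifact, which is the portability claim in practice.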

Beyond Definition: The Future of Agent Management

The release of ADL is just the beginning. The open-source nature of the project encourages community contributions and the development of an ecosystem of tools around the standard. This could include:

  • Editors: User-friendly interfaces for creating and managing ADL definitions.
  • Validators: Tools for ensuring ADL definitions are valid and conform to the specification.
  • Registries: Centralized repositories for storing and sharing ADL definitions.
  • Testing Tools: Automated tests for verifying agent behavior based on its ADL definition.

This ecosystem will streamline the entire agent lifecycle, from development and deployment to monitoring and maintenance.

ADL and Existing Technologies

ADL isn’t intended to replace existing technologies like A2A (agent-to-agent communication), MCP, OpenAPI, or workflow engines. Instead, it complements them. ADL defines the agent itself, while these other technologies handle communication, execution, and orchestration.

Did you know? ADL focuses on the “what” of an agent, while other technologies focus on the “how.”

Real-World Applications

The potential applications of ADL are vast. Consider these examples:

  • Customer Support: Defining agents that can handle specific customer inquiries, access knowledge bases, and escalate complex issues.
  • Fraud Detection: Creating agents that can analyze transactions, identify suspicious patterns, and flag potential fraud.
  • HR Automation: Developing agents that can automate tasks like onboarding, benefits administration, and employee inquiries.

In each of these scenarios, ADL provides a standardized way to define the agent’s capabilities, permissions, and governance policies.

Frequently Asked Questions (FAQ)

Q: Is ADL a runtime environment?
A: No, ADL is a definition language. It doesn’t execute code or manage agent workflows. It simply defines what an agent is and what it can do.

Q: Is ADL tied to a specific programming language?
A: No, ADL is model-agnostic and platform-agnostic. It’s based on JSON, a widely supported data format.

Q: How can I contribute to the ADL project?
A: The ADL repository on GitHub (https://github.com/nextmoca/adl) provides contribution guidelines and a public roadmap.

Q: What are the benefits of using ADL?
A: Portability, auditability, vendor neutrality, and improved governance are key benefits.

The open-sourcing of ADL marks a significant step towards a more standardized and scalable future for AI agents. By providing a common language for defining these powerful entities, ADL empowers developers, enhances security, and unlocks new possibilities for innovation.

Explore the ADL project on GitHub: https://github.com/nextmoca/adl

February 9, 2026
Tech

NVIDIA Nemotron: Build AI-Powered Document Intelligence Systems

by Chief Editor, February 8, 2026

The Rise of Agentic AI: How NVIDIA Nemotron is Revolutionizing Document Intelligence

Businesses are drowning in data, much of it locked within unstructured documents. Reports, PDFs, web pages, and spreadsheets – extracting valuable insights from these sources has traditionally been a manual, time-consuming process. Now, a new wave of AI-powered document intelligence is emerging, promising to automate understanding and unlock hidden value. At the heart of this shift is NVIDIA Nemotron, a family of open models designed for precisely this purpose.

From Manual Review to AI-Powered Insights

For years, teams have relied on manual review, spreadsheets, and basic Optical Character Recognition (OCR) tools to glean information from documents. These methods are often inefficient and prone to errors, especially when dealing with complex layouts and varied formats. Intelligent document processing, powered by AI agents and techniques like Retrieval-Augmented Generation (RAG), offers a transformative solution. It interprets rich content – tables, charts, images, and text – turning it into actionable insights.

NVIDIA Nemotron: The Engine Behind the Transformation

NVIDIA Nemotron provides the open models and GPU-accelerated libraries needed to build these AI-powered document intelligence systems. The models are transparent, with open weights and training data available on Hugging Face, allowing for thorough evaluation before deployment. Nemotron’s latest iteration, the Nemotron 3 family, delivers leading efficiency and accuracy, particularly for complex, high-throughput agentic AI applications.

Real-World Applications: Streamlining Business Processes

The impact of this technology is already being felt across various industries. Several companies are leveraging Nemotron to address specific challenges:

Justt: Automating Financial Dispute Resolution

In the financial sector, payment disputes are a major source of revenue loss. Justt.ai utilizes Nemotron Parse to automate the chargeback lifecycle. The platform ingests transaction data, customer interactions, and policies, then automatically assembles evidence for disputes, reducing manual effort and recapturing revenue for merchants like HEI Hotels & Resorts.

Docusign: Scaling Agreement Intelligence

Docusign, a leader in agreement management, is evaluating Nemotron Parse to improve the extraction of tables, text, and metadata from complex contracts. This will enable faster and more accurate processing of agreements, turning them into structured data for analysis and AI-driven workflows.

Edison Scientific: Accelerating Scientific Research

Edison Scientific’s Kosmos AI Scientist uses Nemotron Parse to rapidly extract structured information from research papers, including equations, tables, and figures. This transforms a vast research corpus into an interactive, queryable knowledge engine, accelerating hypothesis generation and literature review.

Key Technologies Powering Document Intelligence

Building a robust document intelligence pipeline requires several key components:

  • Extraction: Nemotron extraction and OCR models rapidly ingest multimodal PDFs and other document types.
  • Embedding: Nemotron embedding models convert passages and visual elements into vector representations for semantic search.
  • Reranking: Nemotron reranking models evaluate candidate passages to ensure the most relevant content is surfaced.
  • Parsing: Nemotron Parse models decipher document semantics to extract text and tables with precise spatial grounding.

These capabilities are available as NVIDIA NIM microservices and foundation models, designed to run efficiently on NVIDIA GPUs.
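A toy end-to-end sketch of the embed-then-rerank retrieval step. Bag-of-words cosine similarity stands in for the Nemotron embedding and reranking models here – this is purely illustrative of the pipeline shape, not of the models themselves:

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy bag-of-words embedding standing in for an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[term] * b[term] for term in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query: str, passages: list[str], top_k: int = 2) -> list[str]:
    """Score every passage against the query and keep the best candidates;
    in a production pipeline a reranker would then re-score this shortlist."""
    q = embed(query)
    ranked = sorted(passages, key=lambda p: cosine(q, embed(p)), reverse=True)
    return ranked[:top_k]

passages = [
    "Quarterly revenue grew 12 percent on services.",
    "The chargeback policy covers disputed card payments.",
    "Table 3 lists GPU memory bandwidth figures.",
]
best = retrieve("How are disputed payments handled?", passages, top_k=1)
```

The extraction and parsing stages would feed `passages` from real documents; swapping the toy `embed` for a semantic model is what makes the search robust to paraphrase rather than exact word overlap.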

The Future of Document Intelligence: Trends to Watch

The field of document intelligence is rapidly evolving. Several key trends are poised to shape its future:

Increased Focus on Multimodal Understanding

Current models are increasingly capable of understanding not just text, but also images, tables, and charts within documents. This multimodal approach will unlock deeper insights and more accurate interpretations.

Edge Deployment and Reduced Latency

Deploying document intelligence models on edge devices will enable real-time processing and reduce reliance on cloud connectivity. This is particularly important for applications requiring immediate responses.

Integration with Multi-Agent Systems

Document intelligence will become increasingly integrated with multi-agent systems, allowing AI agents to collaborate and automate complex tasks based on information extracted from documents.

Enhanced Security and Compliance

As document intelligence systems handle sensitive data, security and compliance will become paramount. Technologies like confidential computing and data encryption will be essential.

FAQ

What is NVIDIA Nemotron?
NVIDIA Nemotron is a family of open-source AI models designed for building specialized AI agents, particularly for tasks involving document understanding and reasoning.

What is Retrieval-Augmented Generation (RAG)?
RAG is a technique that combines the power of large language models with information retrieved from external sources, such as documents, to generate more accurate and contextually relevant responses.

What are NVIDIA NIM microservices?
NVIDIA NIM microservices are pre-packaged, GPU-accelerated software components that simplify the deployment and scaling of AI applications.

Where can I find more information about Nemotron?
You can find more information on the NVIDIA Nemotron developer page and on GitHub.

What is Nemotron Parse?
Nemotron Parse models decipher document semantics to extract text and tables with precise spatial grounding and correct reading flow.

Ready to unlock the power of your documents? Explore the resources available on NVIDIA’s website and join the growing community of developers building the future of document intelligence.

February 8, 2026