Newsy Today
Tag: open source
Tech

Product showcase: NetGuard open-source firewall for Android

by Chief Editor May 8, 2026

The Evolution of Mobile Privacy: Beyond Simple App Permissions

For years, the conversation around mobile security focused on “permissions”—asking a user if an app could access their camera or contacts. But as we move deeper into an era of hyper-connectivity, the frontier of privacy has shifted. It is no longer just about what an app can access on your phone, but where that data goes once it leaves the device.


Tools like NetGuard highlight a growing demand for granular network control. By using a local VPN loopback to filter traffic, users are taking back the “kill switch” from the operating system. This trend points toward a future where “Zero Trust” architecture isn’t just for corporate servers, but for the smartphone in your pocket.

Pro Tip: If you are using a firewall to save data or increase privacy, always remember to disable battery optimization for the app. Android’s aggressive power management can kill background VPN services, leaving your “blocked” apps free to connect to the internet again.

The Rise of Local VPNs and Digital Sovereignty

One of the most interesting technical trends is the use of the Android VPN service not for anonymity (like a traditional VPN), but for local traffic orchestration. Because Android restricts the ability to chain multiple VPNs, a local firewall essentially becomes the “gatekeeper” for all outgoing packets.
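
To make the “gatekeeper” idea concrete, here is a minimal conceptual sketch of the decision such a firewall makes for every outgoing flow. NetGuard itself is an Android app written in Java and C; the package names and rule table below are invented purely for illustration.

```python
# Toy model of per-app filtering behind a local VPN loopback: every outgoing
# flow is attributed to the app that created it, then checked against a rule
# table. Package names and rules here are hypothetical.
RULES = {
    "com.android.chrome": True,       # browser may reach the network
    "com.example.calculator": False,  # calculator has no business online
}

def allow_flow(app_package: str) -> bool:
    # Apps with no explicit rule default to blocked ("default deny").
    return RULES.get(app_package, False)

for pkg in ("com.android.chrome", "com.example.calculator", "com.newly.installed"):
    print(pkg, "->", "forward" if allow_flow(pkg) else "drop")
```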

This represents a broader movement toward digital sovereignty. Users are increasingly distrustful of proprietary “black box” systems. The preference for open-source firewalls allows the community to audit the code, ensuring that the tool designed to protect your privacy isn’t secretly collecting data itself.

We are likely to see a surge in “Privacy-First” OS forks—similar to LineageOS—that integrate these firewall capabilities directly into the kernel, removing the need for a VPN-based workaround and reducing battery drain.

AI-Driven Traffic Analysis: The Next Frontier

Currently, most mobile firewalls rely on manual blacklists and whitelists. You decide that Chrome can access the web, but your calculator app cannot. However, the next evolution will be Behavioral Network Analysis.


Imagine a firewall powered by lightweight, on-device AI that doesn’t just block an app, but analyzes the pattern of its traffic. If a simple flashlight app suddenly attempts to send 50MB of encrypted data to an unknown server in another country at 3:00 AM, the AI would flag this as anomalous behavior and kill the connection instantly.
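
A minimal sketch of what such a behavioral check could look like, assuming a per-app baseline of typical traffic volume and active hours; the thresholds and data structures here are invented for illustration, not taken from any shipping firewall.

```python
from dataclasses import dataclass

@dataclass
class Baseline:
    mean_bytes: float        # typical bytes sent per connection
    active_hours: set[int]   # hours of day the app normally communicates

def is_anomalous(app: str, tx_bytes: int, hour: int,
                 baselines: dict[str, Baseline]) -> bool:
    b = baselines.get(app)
    if b is None:
        return True                           # no history yet: be suspicious
    volume_spike = tx_bytes > 10 * b.mean_bytes   # order-of-magnitude jump
    odd_hour = hour not in b.active_hours
    return volume_spike and odd_hour

baselines = {"com.example.flashlight": Baseline(2_000, set(range(8, 23)))}
# 50 MB at 03:00 from a flashlight app -> flag it and kill the connection.
print(is_anomalous("com.example.flashlight", 50_000_000, 3, baselines))  # True
```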

This shift from static rules to dynamic intelligence will be crucial as apps become more complex and “telemetry” (the background data apps send back to developers) becomes more sophisticated.

Did you know? Many “free” apps monetize your experience by selling “device fingerprints”—unique identifiers that include your battery level, screen resolution, and network operator—to advertising networks via background telemetry.

Combatting the Telemetry Tide

The battle against background data leakage is becoming an arms race. Developers use techniques like “domain fronting” to hide their tracking servers behind legitimate services (like Google or Cloudflare). This makes it harder for basic firewalls to identify who the app is actually talking to.

Future trends suggest a move toward DNS-over-HTTPS (DoH) and DNS-over-TLS (DoT) integration within firewalls. By encrypting DNS queries, users can prevent Internet Service Providers (ISPs) from seeing which domains their apps are hitting, adding a layer of invisibility to the blocking process.
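
For readers who want to see what an encrypted DNS lookup looks like in practice, here is a small sketch using Cloudflare's public DoH endpoint and its JSON API (Google operates a similar service at dns.google/resolve); domain and record type are just examples.

```python
import requests

def doh_query(domain: str, record_type: str = "A") -> dict:
    """Resolve a domain over HTTPS so the query is invisible to the ISP."""
    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": domain, "type": record_type},
        headers={"accept": "application/dns-json"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()

print(doh_query("example.com").get("Answer", []))
```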

Real-world data from privacy audits shows that even “system apps” often communicate with servers dozens of times per hour. As users become more aware of this “invisible chatter,” the demand for tools that provide transparent access logs—showing exactly which IP address was contacted and when—will only grow.

Frequently Asked Questions

Does using a local firewall slow down my internet?

Generally, no. Because the traffic is being routed through a local loopback on your own device rather than a remote server, the latency is negligible. Any perceived slowdown is usually due to the device’s CPU processing the filtering rules.

Can I use a firewall and a commercial VPN at the same time?

On standard Android devices, no. Android only allows one active VPN service at a time. To achieve both, you would typically need a rooted device or a specialized OS that allows for network routing at the system level.

Is a firewall enough to stop all tracking?

It stops the transmission of data, but not the collection. An app can still collect your data locally; a firewall simply prevents that app from “phoning home” to upload that data to a server.

What’s your take on mobile privacy? Do you trust your OS to handle your data, or have you started using third-party tools to lock down your device? Let us know in the comments below or subscribe to our newsletter for more deep dives into digital security.

Tech

Tenable finds GitHub workflow flaw in Microsoft repo

by Chief Editor May 4, 2026

The Invisible Attack Surface: Why Your CI/CD Pipeline is the New Front Line

For years, cybersecurity focused on the “front door”—firewalls, login screens, and API gateways. But as development speeds up, the real danger has shifted to the “back door”: the Continuous Integration and Continuous Delivery (CI/CD) pipelines. The recent discovery by Tenable Research in a Microsoft GitHub repository serves as a wake-up call. A Python string injection flaw in the Windows-driver-samples repository allowed for remote code execution, potentially exposing repository secrets. When a project with 5,000 forks and 7,700 stars has this vulnerability, it isn’t just a bug in one codebase; it’s a blueprint for how modern software supply chains can be dismantled.

The risk isn’t just about one leaked token. It is about the systemic trust we place in automation. As we move forward, the industry is shifting toward a reality where the pipeline itself is treated as a high-value target, equal in importance to the production server.

Did you know? Many organizations still rely on “default” permissions for their automation tokens. In the Microsoft case, researchers inferred the GITHUB_TOKEN likely operated with default read and write access since the repository predated 2023 security updates.

The Death of the ‘God Token’ and the Rise of Least Privilege


One of the most critical trends in DevOps security is the aggressive move away from long-lived, high-privilege tokens. For too long, developers used “God Tokens”—credentials with sweeping permissions that could create issues, push code, and modify settings across an entire organization. The future is Least Privilege Automation. We are seeing a transition toward:

  • Short-lived Credentials: Moving away from static secrets toward tokens that expire in minutes or hours.
  • OIDC (OpenID Connect): Instead of storing a secret key in GitHub, pipelines now use OIDC to request temporary access from cloud providers like AWS or Azure, eliminating the need for long-term stored secrets.
  • Granular Scoping: Rather than “Read/Write” access, permissions are being narrowed to specific actions, such as read-only access to the contents scope.

“The CI/CD infrastructure is part of an organisation’s attack surface and software supply chain,” said Rémy Marot, Staff Research Engineer at Tenable.

AI: The Double-Edged Sword of Pipeline Security

As we integrate Artificial Intelligence into our coding workflows, we are entering a period of “automated escalation.” AI is fundamentally changing how vulnerabilities like string injections are both created and found. On the offensive side, attackers are using LLMs to scan public YAML files and workflow scripts for patterns that suggest unsafe input handling. A vulnerability that might have taken a human researcher days to find can now be spotted by an AI agent in seconds. But the defensive trend is equally powerful. We are seeing the emergence of AI-driven Guardrails. Future CI/CD systems will likely include:

  • Real-time Static Analysis: AI that blocks a commit if the workflow script introduces a potential injection point.
  • Anomaly Detection: Systems that flag a workflow if it suddenly attempts to access a secret it has never used before or connects to an unknown external IP.
Pro Tip: Regularly audit your `.github/workflows` files. Treat your YAML configurations as production code—subject them to the same peer review and security scanning as your primary application logic.
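
To illustrate the injection class at the heart of the Tenable finding, here is a hedged Python sketch (not the actual vulnerable code): interpolating untrusted issue text into a shell command lets an attacker smuggle in `;` or `$(...)`, whereas passing it as plain data does not. In GitHub Actions, the usual mitigation is to expose the issue body as an environment variable rather than templating it directly into the script.

```python
import os
import subprocess

# Assume the workflow exported the issue body into an environment variable,
# e.g.  env: ISSUE_BODY: ${{ github.event.issue.body }}  in the YAML.
issue_body = os.environ.get("ISSUE_BODY", "")

# VULNERABLE: the shell parses the attacker-controlled text as code.
#   subprocess.run(f"echo Triaging: {issue_body}", shell=True)

# SAFER: the text is passed as a single argument and never interpreted.
subprocess.run(["echo", f"Triaging: {issue_body}"], check=True)
```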

Moving Toward ‘Zero Trust’ DevOps

The industry is realizing that “internal” does not mean “safe.” The Tenable finding proved that a simple GitHub issue submission—an action available to any registered user—could trigger a vulnerable workflow. The future trend is Zero Trust for Pipelines. This means assuming that any input coming into the pipeline—whether it is a pull request, a comment, or an issue description—is potentially malicious. This shift involves implementing Software Bill of Materials (SBOM) and strict provenance checks. By verifying exactly who touched the code and which automated process built the binary, companies can ensure that a compromised pipeline doesn’t lead to a poisoned update being sent to millions of users.


Frequently Asked Questions

What is a CI/CD pipeline attack?

A CI/CD attack targets the automated tools used to build and deploy software. Instead of attacking the final app, hackers target the pipeline to steal secrets or inject malicious code directly into the software before it is released.


Why is string injection dangerous in GitHub Actions?

String injection occurs when user-supplied text is executed as code. In GitHub Actions, if a workflow takes a user’s issue description and passes it directly into a shell script or Python command, an attacker can “inject” their own commands to take over the server running the workflow.

How can I secure my GitHub repository secrets?

Avoid using default permissions. Explicitly define the permissions key in your workflow YAML to restrict the GITHUB_TOKEN to the minimum access required for that specific job.

What is the role of the GITHUB_TOKEN?

The GITHUB_TOKEN is an automatically generated secret used by GitHub Actions to authenticate requests to the GitHub API, allowing the workflow to perform tasks like creating releases or commenting on issues.


Join the Conversation: Is your team treating your CI/CD pipeline as critical infrastructure, or is it still viewed as “background tooling”? Share your security strategies or ask a question in the comments below.

Want to stay ahead of the next major vulnerability? Subscribe to our Security Insights newsletter for weekly deep-dives into the evolving threat landscape.

Tech

Open-source IPFire DNS Firewall blocks malware and phishing at the resolver

by Chief Editor April 28, 2026

The Evolution of Network Defense: Moving Toward DNS-Layer Security

For years, network administrators have relied on a combination of heavy-duty proxies and external “sinkholes” to keep unwanted traffic at bay. However, the landscape is shifting. The recent integration of DNS-layer domain blocking directly into the firewall—as seen in the latest IPFire Core Update 201—signals a broader trend: the move toward lightweight, invisible, and highly efficient security at the resolver level.

Unlike traditional URL filters that often require complex HTTPS inspection and certificate handling, DNS-layer blocking operates by intercepting the request before a connection is even attempted. When a client requests a domain flagged as malicious, the system returns an NXDOMAIN response. This effectively tells the client that the domain does not exist, ensuring that no connection is established and no sensitive data leaves the network.

Did you know? An NXDOMAIN (Non-Existent Domain) response is one of the most efficient ways to block threats because it stops the attack at the “phonebook” stage of the internet, preventing the device from ever reaching out to the malicious server.
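
A toy resolver makes the mechanism clear. This is a conceptual sketch of the decision IPFire's DNS proxy makes, not its actual implementation, and the blocklist entries are invented.

```python
BLOCKLIST = {"malware.example", "phishing.example"}  # hypothetical entries

def resolve(qname: str) -> str:
    # Check the query and each parent domain against the blocklist,
    # the way an RPZ-style filter would.
    labels = qname.rstrip(".").lower().split(".")
    for i in range(len(labels) - 1):
        if ".".join(labels[i:]) in BLOCKLIST:
            return "NXDOMAIN"   # "this domain does not exist" -- no connection
    return "FORWARD"            # hand the query to the upstream resolver

print(resolve("cdn.phishing.example."))  # NXDOMAIN
print(resolve("example.org"))            # FORWARD
```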

The Decline of Heavy Proxy Dependencies

The industry is moving away from the “middleman” approach to filtering. Traditional URL filters often depend on proxy setups that can introduce latency and break encrypted traffic. By handling blocklist enforcement directly inside the firewall’s DNS proxy, the need for client-side configuration and HTTPS inspection is eliminated.


This transition simplifies the architecture for the end-user. Instead of managing a separate device—such as an external Pi-hole deployment—operators can now consolidate their security stack. This reduction in complexity not only improves performance but also reduces the number of potential failure points in a home or business network.

Solving the Bandwidth Bottleneck in Threat Intelligence

One of the biggest hurdles in maintaining real-time security is the size of the blocklists. As the number of phishing and malware domains grows, the data required to keep a firewall updated can become massive. For users on limited cellular connections or in regions with expensive data, downloading gigabytes of updates is simply not sustainable.


The solution lies in Incremental Zone Transfers (IXFR), defined in RFC 1995. Rather than downloading a full list every time a change occurs, IXFR allows the firewall to download only the specific changes between versions. According to Michael Tremer, IPFire’s lead developer, this is crucial because full downloads of malware and phishing lists can reach roughly 100 MiB per update.

This shift toward incremental updates is a critical trend for the “edge” of the internet. As more devices move to the network perimeter, the ability to push updates every five minutes without saturating the connection is what allows security teams to combat the short lifespan of phishing sites, which may only remain active for a few hours.
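
The bandwidth math is easy to see with a set-difference sketch. This is a conceptual model of why incremental updates stay small, not the RFC 1995 wire format itself.

```python
def blocklist_delta(old: set[str], new: set[str]) -> tuple[set[str], set[str]]:
    """Return (added, removed) entries between two blocklist versions."""
    return new - old, old - new

v1 = {"bad-a.example", "bad-b.example", "bad-c.example"}
v2 = {"bad-a.example", "bad-c.example", "bad-d.example"}

added, removed = blocklist_delta(v1, v2)
# An IXFR-style transfer ships only these two records -- not the full zone,
# which for large malware/phishing lists can approach ~100 MiB.
print("added:", added, "removed:", removed)
```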

Pro Tip: If you are migrating from a separate Pi-hole or an older URL Filter, remember that custom block and allow lists do not transfer automatically. Use the web UI to copy and paste your domains directly into the new DNS Firewall interface to maintain your custom security posture.

Hardening the Attack Surface: The “Less is More” Philosophy

Modern security is not just about adding new features; it is also about removing unnecessary ones. A growing trend in open-source distributions is the aggressive pruning of unused packages to reduce the “attack surface”—the total number of points where an attacker could potentially find a vulnerability.


We are seeing this in practice with the removal of non-essential components. For example, the removal of Rust packages no longer required by the distribution and the dropping of the 7zip add-on (due to a lack of upstream maintenance) are strategic moves. By cutting build overhead and removing unmaintained code, developers can ensure a leaner, more secure environment.

This philosophy extends to the toolchain itself. Updating to the latest versions of core components—such as glibc 2.43, OpenSSL 3.6.1, and OpenVPN 2.6.19—ensures that the firewall is leveraging the most recent security patches and performance optimizations.

The Future of Automated Reporting and IDS

As network environments grow more complex, the way we handle security alerts must also evolve. The move toward customizable recipient configurations for Intrusion Prevention System (IPS) reports—splitting daily, weekly, and monthly cadences—reflects a need for better organizational routing.

In the future, we can expect these reports to become even more granular, potentially integrating with AI-driven analysis to separate “noise” from actual threats, ensuring that the people responsible for review are not overwhelmed by false positives.

Frequently Asked Questions

What is DNS-layer domain blocking?
It is a security method that checks DNS queries against a blocklist before a connection is made. If a domain is listed as malicious, the firewall returns an NXDOMAIN response, preventing the device from connecting to the site.

Do I still need a Pi-hole if my firewall has a DNS Firewall?
While Pi-hole is a powerful tool, integrated DNS firewalls provide similar functionality (blocking malware, phishing, and ads) without the need for additional hardware or complex configuration.

What is IXFR and why does it matter?
IXFR stands for Incremental Zone Transfer. It allows a system to download only the changes to a blocklist rather than the entire file, which significantly saves bandwidth and allows for more frequent updates.

Does the DNS Firewall require HTTPS inspection?
No. Because it operates at the DNS level, it does not need to inspect encrypted HTTPS traffic or handle certificates, making it more privacy-friendly and easier to deploy.


Are you upgrading your home or business firewall this year? We want to hear about your setup. Do you prefer a consolidated firewall approach, or do you still rely on separate hardware for DNS sinkholing? Let us know in the comments below or subscribe to our newsletter for more deep dives into open-source security.

Business

LightInk – An ESP32-based, solar-powered E-ink smartwatch with up to 10 months of battery life

by Chief Editor April 26, 2026

The Shift Toward Ultra-Low Power Architecture

The future of wearables is moving away from power-hungry boot sequences. Traditionally, processors like the ESP32 take approximately 28 ms to boot, consuming several milliamps of power before performing any actual tasks. This overhead is a significant barrier to achieving true long-term battery life.


An emerging trend is the use of “wake stubs”—function pointers in the RTC memory. By allowing the core to run code in microseconds and bypassing the flash entirely, devices can boot, send data, and update display buffers in less than 1 ms. This approach allows the system to return to deep sleep almost instantly, drastically reducing energy draw.

Did you know? Standard ESP32 boot sequences create a massive energy overhead. By reimplementing SPI communication within a wake stub, active time can be reduced to under 1 ms.

Optimizing Hardware for Efficiency

To maximize longevity, engineers are removing high-power-consumption components. This includes eliminating dedicated battery-charging ICs and accelerometers, which often draw unnecessary quiescent current.

The integration of specialized components, such as the TPS63900 buck-boost converter with a 75-nA IQ, allows devices to operate dynamically at voltages like 2.6V or 2.9V, ensuring that every micro-amp of harvested energy is used effectively.

Solar-First Design: Beyond the Charging Cable

We are seeing a return to the philosophy of 90s solar digital watches, but with modern smart capabilities. The trend is shifting toward “solar-first” operation, where a solar cell is not just a secondary charger but the primary power source maintaining a small battery.

By pairing a solar cell with a modest 100mAh battery, it is now possible to achieve an operational lifespan of 6 to 10 months. This eliminates the need for frequent plugging-in and reduces the device’s reliance on the power grid.
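
The arithmetic behind that claim is worth spelling out. Assuming the worst case of zero solar harvest, a 100 mAh cell lasting 6 to 10 months implies a tiny average current budget:

```python
battery_mah = 100
for months in (6, 10):
    hours = months * 30 * 24
    avg_draw_ua = battery_mah / hours * 1000   # mAh/h -> mA -> microamps
    print(f"{months} months -> average draw of about {avg_draw_ua:.0f} uA")
# ~23 uA (6 months) and ~14 uA (10 months): this is why a 75-nA-IQ regulator
# and sub-millisecond wake stubs matter.
```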

Pro Tip: To maintain precise timekeeping in ultra-low-power devices, implement manual drift calibration for the RTC. Targeting 1ppm (parts per million) ensures the watch remains accurate over months of operation.
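
To see what 1 ppm means in practice, the accumulated drift over a month is small enough to correct with an occasional time sync (assuming a 30-day month):

```python
ppm = 1
seconds_per_month = 30 * 24 * 3600            # 2,592,000 s
drift_s = seconds_per_month * ppm / 1_000_000
print(f"{ppm} ppm -> about {drift_s:.1f} s of drift per month")  # ~2.6 s
```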

The Evolution of E-Ink in Wearables

E-paper displays are becoming the gold standard for wearables where battery life is prioritized over high refresh rates. A 1.54-inch B/W e-Paper panel (such as the GDEH0154D67) provides high visibility with minimal power consumption.


The key to the next generation of E-ink devices is “ultra-fast partial updates.” Instead of refreshing the entire screen, which is energy-intensive, devices only update the specific pixels that change. This enables the device to remain in deep sleep while the display refreshes, further extending the battery life.

Integrating Specialized Off-Grid Connectivity

Future wearables are expanding beyond simple Bluetooth and Wi-Fi. The integration of LoRa (via transceivers like the Wio-SX1262) and GPS allows for communication and navigation in areas without cellular coverage.

This combination of LoRa, GPS, and solar power transforms a simple smartwatch into a resilient tool for outdoor and off-grid use, all while maintaining a compact 3D-printed form factor.

Open-Source Hardware and Community Iteration

The development of high-efficiency wearables is increasingly driven by open-source collaboration. Platforms like GitHub and Hackaday allow developers to share ESP-IDF firmware, EasyEDA hardware designs, and 3D printable models.


This community-driven approach allows creators to build upon existing projects—such as the SQFMI Watchy—to specifically target improvements in power efficiency and feature sets without increasing the physical size of the device.

Frequently Asked Questions

How long can a solar-powered E-ink watch last?

Depending on the design and solar supplement, devices like LightInk can operate for approximately 6 to 10 months on a 100mAh battery.

What is a wake stub in the context of ESP32?

A wake stub is a function pointer in the RTC memory that allows the processor to execute code immediately upon waking, bypassing the flash boot process to save time and power.

Why use LoRa in a smartwatch?

LoRa provides long-range, low-power communication, making it ideal for wearables intended for off-grid use where Wi-Fi or cellular networks are unavailable.

Want to dive deeper into open-source hardware? Let us know in the comments which ultra-low-power features you’d want in your next wearable, or subscribe to our newsletter for more embedded engineering insights!

Tech

NVIDIA GTC: The Future of AI is Open & Orchestrated Models

by Chief Editor March 30, 2026

The Rise of the AI Orchestra: Why NVIDIA’s Huang Says Open and Proprietary AI Must Coexist

Artificial intelligence is rapidly evolving from a promising technology to the core infrastructure of businesses worldwide. But the future isn’t about a single, monolithic AI – it’s about a diverse ecosystem of models, both large and small, open and closed, generalist and specialist. This was the central message from NVIDIA founder and CEO Jensen Huang at a recent session on open frontier models at NVIDIA GTC.

Beyond Open vs. Closed: A Hybrid Approach

Huang emphatically stated that the debate isn’t about choosing between open and closed innovation. Instead, it’s about recognizing that both approaches are essential. “Proprietary versus open is not a thing. It’s proprietary and open,” he explained. This signals a shift in thinking, acknowledging the strengths of both models and the need for collaboration.

The Need for Specialized AI Systems

Every industry faces unique challenges. Healthcare, finance, and manufacturing all require AI tailored to their specific data and workflows. A one-size-fits-all approach simply won’t work. The solution? Systems of models, finely tuned and specialized for different tasks, working together to solve complex business problems.

NVIDIA is actively contributing to the open-source AI movement and is now the largest organization on Hugging Face, with nearly 4,000 team members. The company recently launched the NVIDIA Nemotron Coalition, a global collaboration of AI labs focused on advancing open, frontier-level foundation models through shared expertise and resources.

AI Agents: The Future of Work?

A key takeaway from discussions at GTC was the growing capability of AI agents. According to Cursor CEO Michael Truell, “We’re soon going to see agents really be coworkers that can take on tasks that take many hours or many days, and do incredibly complex workloads.” This suggests a future where AI handles increasingly sophisticated tasks, freeing up human workers to focus on more strategic initiatives.

Orchestrating the AI Ecosystem

Perplexity CEO Aravind Srinivas envisions a future where AI isn’t about selecting the “best” model, but rather orchestrating a “multimodal, multi-model and multi-cloud orchestra.” The system itself will intelligently delegate tasks to the most appropriate model, simplifying the process for users.

Trust and Accessibility Through Open Systems

Open systems are gaining traction due to their inherent trustworthiness and accessibility. AMP PBC’s Anjney Midha noted, “At the end of the day, you’re delegating trust…and it’s much easier to trust an open system.” This transparency fosters confidence and allows for wider adoption of AI technologies.

The Importance of Both Generalist and Specialist AI

Just as a hospital relies on both general practitioners and specialized surgeons, society needs both generalist and specialist AI. Open foundations combined with proprietary data allow organizations to unlock unique value and drive innovation in both academia and business. Ai2’s Hanna Hajishirzi emphasized that open access accelerates progress and democratizes AI, ensuring broader participation and benefit.

Black Forest Labs’ Robin Rombach added that both frontier models and specialized open models have exciting potential, and that all of them should have some open component.

FAQ

Q: What is the NVIDIA Nemotron Coalition?
A: It’s a global collaboration of AI labs working to advance open, frontier-level foundation models through shared expertise, data, and compute.

Q: What is the key message from Jensen Huang regarding open vs. proprietary AI?
A: It’s not an either/or situation. Both open and proprietary AI are essential and should coexist.

Q: What role will AI agents play in the future?
A: They are expected to develop into highly capable coworkers, handling complex tasks and workloads.

Q: Why is specialization important in AI?
A: Different industries have unique challenges that require tailored AI solutions.

Watch the GTC session highlights on YouTube and start building with NVIDIA Nemotron open models.

Business

PineTime Pro smartwatch to feature dual-core Cortex-M33 MCU, 2.13-inch AMOLED, GPS, and more

by Chief Editor March 30, 2026
written by Chief Editor

PineTime Pro: A Leap Forward for Open Source Smartwatches

Pine64’s upcoming PineTime Pro smartwatch is generating significant buzz, promising a substantial upgrade over the original PineTime. This isn’t just a spec bump; it represents a growing trend towards accessible, customizable wearable technology. The Pro boasts a dual-core Cortex-M33 MCU, a vibrant 2.13-inch AMOLED display, and integrated GPS – features previously unseen in the PineTime lineup. This move positions the PineTime Pro as a compelling alternative to mainstream smartwatches, particularly for developers and privacy-conscious users.

The Evolution of Open Source Wearables

The original PineTime, launched in 2019, quickly gained a dedicated following thanks to its open-source nature. The availability of firmware like InfiniTime demonstrated the community’s ability to enhance and adapt the device. However, the initial hardware had limitations. The PineTime Pro directly addresses these, offering a significant increase in processing power and memory – 800KB of SRAM, plus 8MB of PSRAM and 8MB of QSPI flash. This expanded capacity opens doors for more complex features and a smoother user experience.

Key Specifications and What They Mean

Let’s break down the key specs:

  • Dual-Core Cortex-M33 MCU: This processor provides a substantial performance boost over the original PineTime’s Cortex-M4.
  • 2.13-inch AMOLED Display: AMOLED technology delivers richer colors, deeper blacks, and improved energy efficiency compared to the IPS display on the original PineTime.
  • Integrated GPS: A crucial addition for fitness tracking and navigation, eliminating the need to rely on a connected smartphone.
  • Heart Rate & Blood Oxygen Sensor: Expanding health tracking capabilities.
  • Bluetooth 5.2: Offers improved connectivity and efficiency.

The inclusion of a 6-axis IMU (Inertial Measurement Unit) further enhances the device’s sensing capabilities, potentially enabling more accurate activity tracking and gesture recognition.

The Power of Open Source and Customization

Pine64’s commitment to open-source software is a major differentiator. Developers are already working on adapting existing firmware like InfiniTime and WaspOS to the PineTime Pro. The increased hardware capabilities should make it easier to add new features and optimize performance. Pine64 has also collaborated with a Chinese smartwatch manufacturer to develop a custom chip and will release the SDK to the community, fostering further innovation.

The potential for PebbleOS compatibility has also been mentioned, though no official port is currently underway. This highlights the ambition of the open-source community to bring a wider range of operating systems to the platform.

Beyond the Pro: A Dual-Product Strategy

Pine64 intends to continue supporting both the original PineTime and the PineTime Pro. This dual-product strategy allows them to cater to different user needs and price points. The original PineTime remains an attractive entry-level option, while the Pro targets users who demand more advanced features and performance.

What Does This Mean for the Future of Smartwatches?

The PineTime Pro exemplifies a growing trend towards more open and customizable wearable technology. Consumers are increasingly seeking alternatives to the closed ecosystems offered by major tech companies. The success of the PineTime Pro could encourage other manufacturers to embrace open-source principles and provide users with greater control over their devices.

The collaboration with a Chinese smartwatch manufacturer suggests a potential shift in the supply chain for wearable technology. By partnering with established manufacturers, Pine64 can leverage their expertise and resources to create more sophisticated devices.

Frequently Asked Questions

Q: When will the PineTime Pro be released?
A: A launch date hasn’t been announced yet, but Pine64 hopes to release it later this year.

Q: Will the PineTime Pro work with my existing PineTime accessories?
A: This information is not currently available.

Q: What operating systems will the PineTime Pro support?
A: It will initially support InfiniTime and WaspOS, with potential for PebbleOS compatibility in the future.

Q: Is the PineTime Pro waterproof?
A: Water resistance details have not been released.

Q: Where can I find more information about the PineTime Pro?
A: Visit the Pine64 announcement for the latest updates.

Pro Tip: Keep an eye on the Pine64 forums and community channels for the latest development updates and firmware releases.

Stay tuned for further updates on the PineTime Pro and the evolving landscape of open-source wearables. What features are you most excited about? Share your thoughts in the comments below!

Tech

NVIDIA DRA Driver: Open Source AI Infrastructure for Kubernetes | KubeCon Europe 2026

by Chief Editor March 24, 2026

NVIDIA Opens Up AI Infrastructure with Kubernetes Donation: A Shift Towards Collaborative AI

Artificial intelligence is rapidly becoming a cornerstone of modern computing, and Kubernetes has emerged as the dominant platform for managing AI workloads. Now, NVIDIA is taking a significant step towards fostering a more open and collaborative AI ecosystem by donating the NVIDIA Dynamic Resource Allocation (DRA) Driver for GPUs to the Cloud Native Computing Foundation (CNCF). This move, announced at KubeCon Europe, signals a shift from vendor-controlled governance to full community ownership, promising increased transparency, innovation, and accessibility.

What Does This Mean for AI Developers?

Historically, managing GPUs – the engines that power AI – within data centers has been a complex undertaking. The NVIDIA DRA Driver aims to simplify this process, offering several key benefits for developers. These include improved efficiency through smarter resource sharing, support for technologies like NVIDIA Multi-Process Service and Multi-Instance GPU, and the ability to scale AI infrastructure massively using NVIDIA Multi-Node NVLink. The driver provides flexibility, allowing dynamic reconfiguration of hardware, and precision, enabling fine-tuned requests for specific computing power.

Pro Tip: The NVIDIA DRA Driver’s support for NVIDIA Multi-Node NVLink is particularly crucial for training large AI models on next-generation systems like those powered by NVIDIA Grace Blackwell.

Expanding Security with Kata Containers

Beyond resource allocation, NVIDIA is also enhancing the security of AI workloads. In collaboration with the CNCF’s Confidential Containers community, NVIDIA has introduced GPU support for Kata Containers. These lightweight virtual machines provide a stronger isolation layer, protecting AI workloads and enabling organizations to implement confidential computing to safeguard sensitive data.

Industry Collaboration Fuels Innovation

NVIDIA isn’t acting alone. The company is collaborating with a broad range of industry leaders – including Amazon Web Services, Broadcom, Canonical, Google Cloud, Microsoft, Nutanix, Red Hat, and SUSE – to drive these features forward. This collaborative approach underscores the importance of a unified ecosystem for accelerating AI innovation.

“Open source will be at the core of every successful enterprise AI strategy,” says Chris Wright, CTO and SVP of global engineering at Red Hat. “NVIDIA’s donation of the NVIDIA DRA Driver for GPUs helps to cement the role of open source in AI’s evolution.”

Beyond the Driver: A Wave of Open Source Contributions

The donation of the DRA Driver is just one piece of NVIDIA’s broader commitment to open source. Recent contributions include NVSentinel, a system for GPU fault remediation, and AI Cluster Runtime, an agentic AI framework. The KAI Scheduler, NVIDIA’s AI workload scheduler, has been onboarded as a CNCF Sandbox project, further encouraging community involvement.

NVIDIA is also expanding the Dynamo ecosystem with Grove, an open source Kubernetes application programming interface for orchestrating AI workloads on GPU clusters. Grove integrates with the llm-d inference stack, aiming for wider adoption within the Kubernetes community.

Future Trends: The Rise of Collaborative AI Infrastructure

This move towards open source and collaborative development signals several key trends in the future of AI infrastructure:

  • Standardization: Open source projects like the NVIDIA DRA Driver will drive standardization in high-performance computing components, making it easier for organizations to build and deploy AI solutions.
  • Increased Accessibility: By simplifying GPU orchestration, NVIDIA is making high-performance computing more accessible to a wider range of developers and organizations.
  • Enhanced Security: The integration of GPU support for Kata Containers highlights the growing importance of security in AI workloads, particularly as organizations handle increasingly sensitive data.
  • AI-Powered Infrastructure Management: Projects like AI Cluster Runtime demonstrate the potential of using AI itself to manage and optimize AI infrastructure.

FAQ

Q: What is the NVIDIA DRA Driver for GPUs?
A: It’s a software driver that allows for more efficient allocation and sharing of GPU resources within a Kubernetes environment.

Q: What is Kata Containers?
A: Lightweight virtual machines that provide enhanced security by isolating workloads.

Q: Why is NVIDIA donating this technology to the CNCF?
A: To foster a more open and collaborative AI ecosystem and accelerate innovation.

Q: Where can I learn more about NVIDIA’s open source projects?
A: Visit NVIDIA’s GitHub page for a comprehensive list of projects.

Did you know? NVIDIA Dynamo 1.0 is now available, and the company is actively expanding its ecosystem with projects like Grove.

Developers and organizations can begin using and contributing to the NVIDIA DRA Driver today. Explore the possibilities and join the growing community shaping the future of AI infrastructure.

Tech

NVIDIA Nemotron-3 Super: Open-Source 120B Parameter AI Model

by Chief Editor March 11, 2026

NVIDIA Nemotron 3 Super: Ushering in a New Era of Agentic AI

NVIDIA has launched Nemotron 3 Super, a 120-billion-parameter open model with 12 billion active parameters, poised to redefine the landscape of agentic AI. This isn’t just another large language model; it’s a foundational step towards more efficient, accurate, and scalable AI systems capable of handling complex tasks across diverse industries.

Addressing the Challenges of Multi-Agent AI

As AI moves beyond simple chatbots and into sophisticated multi-agent applications, two key challenges emerge: context explosion and the “thinking tax.” Multi-agent workflows generate significantly more data – up to 15 times more tokens than standard chat – due to the need to resend complete histories with each interaction. This increased context volume drives up costs and can lead to agents losing focus on their original objectives. The “thinking tax” refers to the computational expense of complex agents reasoning at every step, making these applications sluggish and impractical.
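
A toy simulation shows where the token multiplier comes from. If each turn appends new content and the whole history is resent on every call, cumulative token usage grows quadratically with the number of turns (the numbers below are illustrative, not benchmark data):

```python
def cumulative_tokens(turns: int, tokens_per_turn: int = 500) -> int:
    history = 0
    total = 0
    for _ in range(turns):
        history += tokens_per_turn   # new content produced this turn
        total += history             # the full history is resent each call
    return total

single_pass = 20 * 500               # a plain chat of the same length
multi_agent = cumulative_tokens(20)
print(multi_agent / single_pass)     # 10.5x -- same order as the ~15x figure
```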

How Nemotron 3 Super Solves These Problems

Nemotron 3 Super tackles these hurdles head-on with a hybrid architecture and innovative techniques. Its 1-million-token context window allows agents to retain complete workflow state, preventing goal drift. The model leverages a hybrid Mixture-of-Experts (MoE) architecture, combining Mamba layers for efficiency and transformer layers for advanced reasoning. Specifically, it features:

  • Hybrid Architecture: Mamba layers deliver 4x higher memory and compute efficiency.
  • MoE: Only 12 billion of its 120 billion parameters are active during inference.
  • Latent MoE: Improves accuracy by activating four expert specialists for the cost of one.
  • Multi-Token Prediction: Predicts multiple future words simultaneously, resulting in 3x faster inference.

Running the model in NVFP4 precision on the NVIDIA Blackwell platform cuts memory requirements and boosts inference speed up to 4x compared to FP8 on NVIDIA Hopper, without sacrificing accuracy.
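
A back-of-envelope calculation shows where the memory savings come from, assuming weight memory is roughly parameters times bits per weight (ignoring activations, the KV cache, and per-block scaling metadata):

```python
params = 120e9   # Nemotron 3 Super's total parameter count
for fmt, bits in (("FP8", 8), ("NVFP4", 4)):
    gb = params * bits / 8 / 1e9
    print(f"{fmt}: ~{gb:.0f} GB of weights")
# FP8: ~120 GB, NVFP4: ~60 GB. Halved weight memory is one ingredient of the
# speedup; the rest comes from Blackwell's native low-precision math.
```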

Real-World Applications Taking Shape

The impact of Nemotron 3 Super is already being felt across various sectors. AI-native companies like Perplexity AI are integrating the model to enhance search capabilities, offering it as one of 20 orchestrated models within their Computer platform. Software development firms such as CodeRabbit, Factory, and Greptile are utilizing Nemotron 3 Super to improve the accuracy and cost-effectiveness of their AI agents. Life sciences organizations, including Edison Scientific and Lila Sciences, are harnessing its power for deep literature research, data science, and molecular understanding.

Enterprise adoption is likewise accelerating. Industry leaders like Amdocs, Palantir, Cadence, Dassault Systèmes, and Siemens are deploying and customizing the model to automate workflows in areas like telecom, cybersecurity, semiconductor design, and manufacturing.

Open Weights and Accessibility

NVIDIA is releasing Nemotron 3 Super with open weights under a permissive license, empowering developers to deploy and customize it on workstations, in data centers, or in the cloud. The model was trained on synthetic data generated using advanced reasoning models, and NVIDIA is publishing the complete methodology, including over 10 trillion tokens of pre- and post-training datasets, and 15 training environments for reinforcement learning.

Leading the Benchmarks

Nemotron 3 Super isn’t just theoretically advanced; it’s demonstrably superior in performance. It currently powers the NVIDIA AI-Q research agent to the No. 1 position on both the DeepResearch Bench and DeepResearch Bench II leaderboards, benchmarks that measure an AI system’s ability to conduct thorough, multistep research.

Availability and Ecosystem Support

NVIDIA Nemotron 3 Super is accessible through build.nvidia.com, Perplexity, OpenRouter, and Hugging Face. Dell Technologies is bringing the model to the Dell Enterprise Hub on Hugging Face, optimized for on-premise deployment. A growing ecosystem of partners, including Google Cloud, Oracle Cloud Infrastructure, Coreweave, Crusoe, and others, are offering access and support for deploying the model.

Future Trends: The Path Forward for Agentic AI

The release of Nemotron 3 Super signals a broader shift towards more capable and accessible agentic AI. We can anticipate several key trends:

  • Increased Specialization: Models will become increasingly specialized for specific tasks and industries, leading to higher accuracy and efficiency.
  • Edge Deployment: The ability to run powerful models like Nemotron 3 Super on edge devices will unlock new applications in areas like robotics and autonomous systems.
  • Enhanced Tool Integration: AI agents will become more adept at utilizing a wider range of tools and APIs, enabling them to perform more complex tasks.
  • Improved Reasoning Capabilities: Continued advancements in model architecture and training techniques will lead to even more sophisticated reasoning abilities.

FAQ

Q: What is Nemotron 3 Super?
A: It’s a 120-billion-parameter open model designed for complex agentic AI systems, offering improved efficiency and accuracy.

Q: What is an agentic AI system?
A: An AI system capable of autonomously performing tasks and making decisions.

Q: Where can I access Nemotron 3 Super?
A: Through build.nvidia.com, Perplexity, OpenRouter, Hugging Face, and various cloud and infrastructure partners.

Q: What is the benefit of the hybrid architecture?
A: It combines the efficiency of Mamba layers with the reasoning power of transformer layers.

Q: Is Nemotron 3 Super open source?
A: Yes, it is released with open weights under a permissive license.

Ready to explore the potential of agentic AI? Visit build.nvidia.com to get started and discover how Nemotron 3 Super can transform your applications.

Tech

Cat 306 CR: AI-Powered Mini Excavator Runs Open Models on NVIDIA Jetson Thor

by Chief Editor March 11, 2026

The Rise of the AI-Powered Construction Site: Caterpillar’s 306 CR Leads the Charge

The construction industry is undergoing a quiet revolution, driven by the integration of artificial intelligence (AI) into everyday machinery. Nowhere is this more apparent than with Caterpillar’s 306 CR mini excavator, a machine designed to thrive in tight spaces and now, thanks to advancements in edge computing, capable of answering questions. This isn’t just about automation; it’s about creating a collaborative partnership between human operators and intelligent machines.

From Data Centers to the Dirt: The Shift to Edge AI

For years, open-source AI models resided primarily in data centers, reliant on robust computing power and constant network connectivity. However, this reliance introduces latency and ongoing costs. The trend is now decisively shifting towards “edge AI” – processing data directly on the machine itself. This is crucial for applications like construction, where real-time responsiveness and consistent operation are paramount. The Cat 306 CR, powered by NVIDIA’s Jetson Thor platform, exemplifies this shift.

NVIDIA and Caterpillar: A Powerful Partnership

Caterpillar’s implementation leverages several key NVIDIA technologies. The Cat AI Assistant, currently in development, utilizes NVIDIA Jetson Thor for real-time inference. It also incorporates NVIDIA Nemotron speech models for accurate voice interactions and Qwen3 4B for fast, localized response generation. This means the excavator can understand and respond to operator queries without relying on a cloud connection, ensuring data privacy and minimizing delays.
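
As a rough sketch of the local-inference piece, vLLM (listed in the technical highlights below) can serve a compact Qwen3 model entirely on-device. The model ID and prompt here are illustrative, and the real assistant wraps speech recognition and synthesis around this step.

```python
from vllm import LLM, SamplingParams

# Load a compact model that fits an edge-class GPU; no cloud round-trip.
llm = LLM(model="Qwen/Qwen3-4B")
params = SamplingParams(temperature=0.2, max_tokens=128)

prompt = "Operator asked: what is the machine's maximum dig depth? Answer briefly."
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```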

Beyond the Excavator: AI in Robotics and Automation

The impact extends far beyond excavators. Franka Robotics is showcasing the potential of onboard AI with its FR3 Duo dual-arm system, running the NVIDIA GR00T N1.6 model end-to-end. Similarly, research projects like the SONIC project from NVIDIA’s GEAR Lab demonstrate the feasibility of deploying complex humanoid controllers directly on Jetson Orin, achieving remarkably low latency. Even a matcha-making robot built by students at UIUC utilizes Jetson Thor and the GR00T N1.5 model.

The Benefits of Onboard AI: Safety, Efficiency, and Control

The advantages of running AI models directly on the machine are significant. Lower latency translates to quicker response times and improved control. Low power consumption is essential for mobile equipment. Consistent behavior, unaffected by network fluctuations, enhances safety and reliability. The ability to process data locally addresses growing concerns about data privacy.

Jetson: Becoming the Industry Standard

NVIDIA Jetson is rapidly becoming the go-to platform for deploying open models at the edge. Its versatility, supporting a wide range of AI frameworks, and its ability to handle diverse workloads make it ideal for a variety of applications. Developers can access model benchmarks and tutorials at the Jetson AI Lab, and the platform supports models like Gemma, gpt-oss-20B, Mistral AI, NVIDIA Cosmos, NVIDIA Isaac GR00T, and Qwen 3.5.

What Does This Mean for the Future of Construction?

The integration of AI into construction equipment like the Cat 306 CR isn’t just about automating tasks; it’s about augmenting human capabilities. Expect to see AI-powered systems providing operator guidance, enhancing safety features, and optimizing machine performance. Digital twins, powered by NVIDIA Omniverse, will enable realistic simulations for training and planning. The future construction site will be a collaborative environment where humans and intelligent machines work together seamlessly.

FAQ

Q: What is edge AI?
A: Edge AI refers to processing AI models directly on the device, rather than relying on a cloud connection. This reduces latency, improves reliability, and enhances data privacy.

Q: What is NVIDIA Jetson?
A: NVIDIA Jetson is a platform for developing and deploying AI applications at the edge. It offers a range of modules with varying levels of performance and power consumption.

Q: What are the benefits of AI in construction?
A: AI can improve safety, efficiency, and productivity on construction sites by providing operator assistance, automating tasks, and optimizing machine performance.

Q: What is CatHelios?
A: CatHelios is a unified data platform providing trusted machine context.

Caterpillar Technical Highlights

  • NVIDIA Jetson Thor: Edge AI platform for real-time inference in industrial and robotics systems
  • NVIDIA Riva: Speech AI framework using Parakeet ASR and Magpie TTS
  • Qwen3 4B: Compact LLM for intent parsing and response generation
  • vLLM: Efficient runtime for serving LLM inference at the edge
  • CatHelios: Unified data platform providing trusted machine context
  • NVIDIA Omniverse: Digital twin and simulation frameworks for industrial workflows

Pro Tip: Explore the Jetson AI Lab for tutorials and model benchmarks to get started with deploying AI on NVIDIA Jetson platforms.

Want to learn more about the future of AI in construction? Share your thoughts in the comments below!

Tech

AI-RAN: NVIDIA Powers Next-Gen Wireless Networks at MWC 2026

by Chief Editor March 1, 2026

AI-RAN: The Revolution Reshaping Wireless Networks

The future of wireless communication is rapidly evolving, driven by the convergence of artificial intelligence (AI) and Radio Access Networks (RAN). What was once confined to laboratory settings is now moving into real-world deployments, promising a new era of speed, efficiency, and capability for 5G and beyond. This shift, known as AI-RAN, is gaining significant momentum, with major players like Nokia and NVIDIA leading the charge.

From 5G Enhancement to 6G Foundation

AI-RAN isn’t simply an upgrade to existing 5G infrastructure; it’s a fundamental architectural change. Traditional RANs rely on static configurations, whereas AI-RAN leverages machine learning to dynamically optimize network performance. In other words, it adapts to changing conditions, predicts user behavior, and allocates resources with unprecedented precision. The ultimate goal is to create AI-native 6G systems that are secure, open, and incredibly efficient.

Key Partnerships Driving Innovation

Nokia and NVIDIA are at the forefront of this revolution, forging strategic partnerships with leading telecom operators worldwide. T-Mobile U.S., SoftBank, and Indosat Ooredoo Hutchison (IOH) have already passed key implementation milestones, demonstrating the viability of NVIDIA-powered AI-RAN in live environments. These collaborations are crucial for accelerating the transition from proof-of-concept to commercial deployment.

Real-World Demonstrations: A Glimpse into the Future

Recent trials showcase the tangible benefits of AI-RAN. T-Mobile U.S. successfully demonstrated concurrent AI and RAN processing, supporting applications like video streaming and generative AI on a live 5G network. SoftBank achieved an industry-first 16-layer massive MIMO using fully software-defined 5G, while IOH showcased Southeast Asia’s first AI-powered 5G call, enabling secure, real-time connectivity and even remote control of a robotic dog.

Benchmarking Breakthroughs and Performance Gains

Benchmarking results from companies like SynaXG reveal impressive performance improvements. AI-RAN running on NVIDIA platforms delivers high-speed, carrier-grade performance across multiple 5G spectrum bands. SynaXG achieved a throughput of 36 Gbps with under 10 milliseconds latency, demonstrating the potential for significantly faster and more responsive wireless experiences.

A Thriving Ecosystem of Partners

The AI-RAN ecosystem is rapidly expanding, with companies like Quanta Cloud Technology (QCT), Supermicro, WNC, Eridan, and LITEON contributing to the development of standardized hardware and software solutions. NVIDIA’s Aerial RAN Computer (ARC) platforms, coupled with these partner offerings, provide operators with a range of deployment options.

The AI-RAN Alliance: Shaping the Industry Roadmap

The AI-RAN Alliance, now boasting over 150 members, plays a vital role in shaping the industry roadmap for AI-native networks. This collaborative effort fosters innovation, validates new concepts, and accelerates the development of open and interoperable solutions.

AI-RAN and the Rise of Autonomous Systems

As intelligence permeates the physical world, AI-RAN networks are becoming essential for supporting autonomous systems like robots and self-driving cars. These systems rely on reliable, low-latency connectivity to see, sense, reason, and act in real-time. Project ULTIMO, a European initiative, is exploring how AI-RAN can enable large-scale autonomous mobility services.

Innovations Showcased at MWC26

Mobile World Congress 2026 highlighted a tripled number of AI-RAN innovations compared to the previous year, with over 26 out of 33 AI-RAN Alliance demos built using NVIDIA AI Aerial. Demonstrations included DeepSig’s AI-native air interface for improved throughput and spectral efficiency, SUTD’s split-inferencing for robots and autonomous vehicles, and zTouch Networks’ AI-RAN orchestration blueprint for efficient GPU resource allocation.

FAQ: Understanding AI-RAN

  • What is AI-RAN? AI-RAN is the integration of artificial intelligence into Radio Access Networks to optimize performance and enable new capabilities.
  • What are the benefits of AI-RAN? Benefits include increased speed, improved efficiency, reduced latency, and support for advanced applications like autonomous systems.
  • Who are the key players in AI-RAN development? Nokia and NVIDIA are leading the charge, along with a growing ecosystem of partners and the AI-RAN Alliance.
  • When will we see commercial AI-RAN deployments? Commercial trials are expected in 2026, with broader commercial releases anticipated in 2027.

Pro Tip: Keep an eye on the AI-RAN Alliance for the latest updates and industry standards.

Did you know? NVIDIA has open-sourced NVIDIA Aerial CUDA-accelerated RAN libraries to further accelerate innovation in the field.

Explore the potential of AI-RAN and its impact on the future of wireless communication. Share your thoughts and questions in the comments below!
