Newsy Today

Tag: cybersecurity

World

NEXTDC launches first overseas data centre in Kuala Lumpur

by Chief Editor May 14, 2026

The AI Infrastructure Arms Race: Why the Shift to ‘AI Factories’ is Redefining Global Business

For years, data centres were viewed as the “digital warehouses” of the internet—quiet, sterile environments where servers stored data and hosted websites. But that era is over. We are witnessing a fundamental pivot toward what industry insiders are calling “AI Factories.”


The recent launch of NEXTDC’s KL1 facility in Kuala Lumpur is a prime example of this shift. This isn’t just another colocation site; it is a purpose-built engine designed for high-performance computing (HPC) and artificial intelligence. When a company invests AUD$1 billion into a single regional hub, they aren’t betting on storage—they are betting on the massive compute power required to fuel the next decade of generative AI.

Did you know? Tier IV certification, like that targeted by the KL1 facility, is the gold standard of resilience. It means the facility is designed to be fully fault-tolerant, ensuring that a single failure in any system doesn’t cause an outage. For AI workloads that run for weeks on a single training set, this “zero downtime” is non-negotiable.
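Those tier ratings translate directly into allowable downtime. Using the commonly cited Uptime Institute design availabilities (the percentages below are public figures quoted for illustration, not numbers from this article), the arithmetic is simple:

```python
# Annual downtime implied by each Uptime Institute tier's commonly
# cited design availability (percentages are public figures, shown
# here for illustration only).
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

AVAILABILITY_PCT = {
    "Tier I": 99.671,
    "Tier II": 99.741,
    "Tier III": 99.982,
    "Tier IV": 99.995,
}

def annual_downtime_minutes(pct: float) -> float:
    """Minutes of downtime per year at a given availability percentage."""
    return (1 - pct / 100) * MINUTES_PER_YEAR

for tier, pct in AVAILABILITY_PCT.items():
    print(f"{tier}: ~{annual_downtime_minutes(pct):.1f} minutes/year")
```

Tier IV works out to roughly 26 minutes of downtime per year, versus roughly 29 hours for a Tier I facility, which is why multi-week AI training runs gravitate to the top tier.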

The Rise of Digital Sovereignty and ‘Sovereign-Ready’ Cloud

As AI integrates into government services, healthcare, and national security, the question is no longer just “Does it work?” but “Where does the data live?” This is the birth of digital sovereignty.


Businesses are increasingly wary of sending sensitive data across borders where it may be subject to foreign laws. This trend is driving a surge in demand for “sovereign-ready” environments—infrastructure that allows companies to scale AI systems while maintaining strict control over governance and compliance within their own borders.

We are seeing this play out across Southeast Asia, where nations are competing to become the primary hub for AI. By establishing local, high-tier infrastructure, providers allow enterprises to satisfy regulatory requirements without sacrificing the speed of the cloud. This “local-first” approach to global scale is becoming the blueprint for multinational expansion.

Beyond Colocation: The Move Toward GPU-as-a-Service (GPUaaS)

The hardware requirements for AI are vastly different from traditional cloud computing. Standard CPUs cannot handle the parallel processing needed for Large Language Models (LLMs); you need GPUs (Graphics Processing Units), specifically high-end chips like those from NVIDIA.

However, GPUs are expensive and difficult to source. This has led to the rise of GPU-as-a-Service (GPUaaS). Instead of building their own data centres, companies are partnering with infrastructure providers to rent massive GPU clusters on demand.

A real-world example is the partnership between SharonAI and NEXTDC, where GPUaaS was deployed to achieve rapid scalability without the capital expenditure of building a private facility. In the future, you can expect “AI-Ready” data centres to function less like landlords and more like utility providers, delivering raw compute power as a scalable resource.

Pro Tip: If you are an enterprise leader planning your AI roadmap, don’t just look at the cost per rack. Evaluate the power density and cooling capabilities of your provider. AI chips generate immense heat; without advanced liquid cooling or high-density power configurations, your hardware will throttle, killing your performance.
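A back-of-the-envelope sketch shows why power density matters. All figures below are illustrative assumptions, not vendor specifications, but they capture the order of magnitude:

```python
# Rough rack power for an AI training rack. Every figure here is an
# illustrative assumption, not a vendor specification.
GPU_WATTS = 700          # high-end training accelerator, ~700 W class (assumed)
GPUS_PER_SERVER = 8
SERVER_OVERHEAD = 1.3    # CPUs, memory, NICs, fans, PSU losses (assumed)
SERVERS_PER_RACK = 4

rack_kw = GPU_WATTS * GPUS_PER_SERVER * SERVER_OVERHEAD * SERVERS_PER_RACK / 1000
print(f"~{rack_kw:.0f} kW per rack")  # roughly 29 kW under these assumptions
```

Compare that with the 5 to 10 kW a traditional web-hosting rack draws: air cooling that was comfortable at the old density simply cannot remove three to six times the heat, which is why liquid cooling keeps coming up.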

The Southeast Asian ‘Data Gold Rush’

While Singapore has long been the digital heart of Asia, constraints on land and energy have opened the door for neighbors. Malaysia, Indonesia, and Thailand are now in a fierce competition to attract the world’s tech giants.


Malaysia, in particular, is positioning itself as a strategic alternative. The investment in the Klang Valley indicates a broader trend: the decentralization of the Asian cloud. By offering a combination of regulatory clarity, available land, and aggressive energy policies, Malaysia is attracting “AI Factories” that require more space and power than a dense city-state can provide.

This regional shift is further bolstered by diplomatic and economic strategies, such as Australia’s Southeast Asia Economic Strategy to 2040, which encourages cross-border capital flow to build sustainable digital ecosystems.

Future Trends to Watch

  • Liquid Cooling Integration: As GPUs get hotter, traditional air conditioning will fail. Expect a massive shift toward immersion cooling and direct-to-chip liquid cooling in new builds.
  • Edge AI Convergence: While massive hubs like KL1 handle the “training” of AI, we will see a rise in smaller “Edge” data centres that handle the “inference” (the actual running of the AI) closer to the end-user to reduce latency.
  • Green AI: The energy demand of AI is staggering. The next competitive advantage for data centres won’t be just speed, but the ability to prove Net Zero operations through renewable energy integration.

Frequently Asked Questions

What is a Tier IV data centre?
A Tier IV facility is the highest level of data centre certification from the Uptime Institute. It is fully fault-tolerant, meaning any single failure in the power or cooling systems will not affect the critical load.


Why is Malaysia becoming a hub for AI infrastructure?
Malaysia offers a strategic balance of available land, power capacity, and government support (such as the AI Nation 2030 vision), making it an attractive alternative to the more constrained markets like Singapore.

What is the difference between traditional cloud and AI-ready infrastructure?
Traditional cloud is designed for general-purpose workloads (web hosting, databases). AI-ready infrastructure is built for high-density power, specialized cooling for GPUs, and massive interconnectivity to handle the huge data flows required by machine learning.


Join the Conversation: Do you think the shift toward digital sovereignty will slow down global AI innovation, or will regional hubs like KL1 actually accelerate it? Let us know your thoughts in the comments below or subscribe to our newsletter for more deep dives into the future of digital infrastructure.

Tech

Microsoft’s agentic security system found four critical Windows RCE flaws

by Chief Editor May 13, 2026

The Rise of Agentic AI: A Paradigm Shift in Cybersecurity

For years, the industry viewed AI-powered vulnerability discovery as a futuristic curiosity—something that worked in controlled labs but stumbled in the messy reality of enterprise code. That era has officially ended. The emergence of agentic systems, such as Microsoft’s MDASH, signals a move away from single-model prompts toward “agentic swarms.”

Unlike a standard Large Language Model (LLM) that provides a single answer, an agentic system employs a multi-model harness. In the case of MDASH, this involves over 100 specialized AI agents that don’t just scan code; they debate, validate, and cross-reference findings to eliminate the “hallucinations” that previously plagued AI security tools.
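Microsoft has not published MDASH’s internals, but the cross-validation idea can be sketched abstractly: a finding only survives if a quorum of independent reviewer agents confirms it, which is what filters out single-model hallucinations. Everything below (names, the quorum value, the stub reviewers) is hypothetical:

```python
# Hypothetical sketch of agentic cross-validation: a candidate finding is
# accepted only if a quorum of independent reviewer agents confirms it.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Finding:
    file: str
    line: int
    description: str

def swarm_review(finding: Finding,
                 reviewers: list[Callable[[Finding], bool]],
                 quorum: float = 0.8) -> bool:
    """Accept a finding only if the agreeing fraction meets the quorum."""
    votes = [agent(finding) for agent in reviewers]
    return sum(votes) / len(votes) >= quorum

# Stub reviewers standing in for specialized models:
confirm = lambda f: True
reject = lambda f: False

finding = Finding("tcpip.sys", 1042, "possible out-of-bounds read")
print(swarm_review(finding, [confirm, confirm, confirm, confirm, reject]))
```

With four of five reviewers agreeing (0.8 of the vote), the finding clears the quorum; a lone confirming agent would not, which is the point of the ensemble.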

Did you know? Microsoft’s MDASH achieved a 100% recall rate in tcpip.sys and identified every single one of 21 intentionally injected vulnerabilities in a private driver—with zero false positives.

This shift suggests a future where security is no longer a periodic “audit” but a continuous, autonomous process. We are moving toward a world where AI agents act as permanent, digital “red teams,” tirelessly probing every line of code the moment it is written.

Closing the Gap: From Research to Production-Grade Defense

The real breakthrough isn’t just that AI can find bugs, but that it can now approximate the reasoning of professional offensive researchers. When an AI system can identify critical Remote Code Execution (RCE) flaws in a networking stack, the barrier between “automated scanning” and “expert hacking” vanishes.

The End of the Manual Bug Hunt?

Traditional vulnerability research is slow and expensive, relying on a handful of elite humans to find “zero-days.” Agentic AI scales this expertise. By utilizing an ensemble of frontier and distilled models, these systems can process millions of lines of code in a fraction of the time a human team would require.

As these tools move from private previews to wider industry adoption, the “window of vulnerability”—the time between a bug’s creation and its discovery—will shrink drastically. For organizations, this means the pressure to patch will intensify, as the “attacker’s advantage” of finding a bug first is neutralized by autonomous defense systems.

Pro Tip: To stay ahead of AI-driven threats, shift your security strategy toward Immutable Infrastructure. If your systems are designed to be replaced rather than patched, you reduce the impact of RCE flaws that AI agents might discover.

The New Arms Race: AI-Driven Offense vs. Defense

We are entering a period of “compressed timelines.” If defensive teams are using agentic AI to secure Windows, offensive actors are undoubtedly building similar swarms to break it. This creates a high-velocity feedback loop: AI finds a bug, AI patches the bug, and AI looks for a way around the patch.

The Risk of Automated Exploitation

The danger lies in the democratization of these capabilities. While Microsoft uses MDASH for production-grade defense, the underlying logic of “agentic scanning” could be mirrored by malicious actors. When vulnerability discovery becomes an “engineering problem” rather than a “genius problem,” the volume of potential exploits will skyrocket.


To counter this, the industry must move toward Self-Healing Codebases. The logical next step after MDASH is a system that not only discovers the flaw but automatically generates, tests, and deploys a verified patch without human intervention.

Future Horizons: The Autonomous Security Stack

Looking ahead, we can expect the integration of AI agents into every layer of the software development lifecycle (SDLC). We are moving toward a “Zero-Trust Code” model where no piece of software is deployed unless an agentic swarm has signed off on its security integrity.


This evolution will likely lead to the rise of AI-Security Orchestrators—systems that manage hundreds of specialized agents, each focused on different attack vectors (e.g., one agent for memory leaks, another for logic flaws, another for authentication bypasses), collaborating in real-time to harden the environment.

For more on how to secure your current environment, check out our guide on modern security frameworks or explore our analysis of LLM vulnerabilities.

Frequently Asked Questions

What is agentic AI in the context of security?
Agentic AI refers to a system of multiple specialized AI agents that can reason, debate, and validate findings autonomously, rather than relying on a single prompt-and-response model.

What is an RCE flaw?
Remote Code Execution (RCE) is a critical vulnerability that allows an attacker to execute arbitrary code on a remote machine, often leading to full system compromise.

How does MDASH differ from traditional vulnerability scanners?
Traditional scanners look for known patterns (signatures). MDASH uses reasoning and an ensemble of AI models to discover new, previously unknown vulnerabilities in complex codebases.

Will AI replace human security researchers?
No, but it will change their role. Humans will shift from “hunting” for bugs to “orchestrating” the AI systems that find them and making high-level strategic decisions on risk management.

Join the Conversation

Do you believe autonomous AI will eventually make software “unhackable,” or are we just building faster weapons for attackers? Let us know your thoughts in the comments below or subscribe to our newsletter for weekly insights into the future of AI security.


Tech

China TV variety show exposes scam linking ‘peace’ sign selfies to privacy risks

by Chief Editor May 10, 2026

The Hidden Cost of a Smile: Is Your Favorite Selfie Pose a Security Risk?

For years, the “peace sign” or “scissor hand” pose has been a global staple of social media culture, especially across Asia. It’s a gesture of friendliness, youth and positivity. However, a startling revelation from cybersecurity experts in China is turning this innocent habit into a potential privacy nightmare.


Recent warnings highlighted on a mainland workplace reality show have exposed a terrifying reality: high-resolution selfies can be used to harvest your fingerprints. By leveraging artificial intelligence (AI) and advanced photo-editing software, criminals can reconstruct biometric data from a simple photograph, effectively “stealing” your identity without you ever knowing.

Did you know? Experts suggest that fingerprints can be extracted from selfies taken within 1.5 meters if the fingers face the camera directly. Even at a distance of up to 3 meters, roughly half of the hand’s biometric details can still be recovered.

The AI Evolution: From Photo Enhancement to Biometric Theft

The core of the problem lies in the rapid evolution of AI-driven image reconstruction. In the past, a photo would need to be an extreme close-up to reveal the ridges of a fingerprint. Today, cryptography professors, including Jing Jiwu from the University of Chinese Academy of Sciences, warn that high-quality cameras combined with AI can fill in the gaps.

This isn’t just theoretical. We are seeing a rise in “visual hacking,” where public data is weaponized. This trend aligns with the broader surge in AI-driven fraud, such as the deepfake scams recently reported in Baotou, China, where AI-generated likenesses were used to deceive victims. When you combine a stolen fingerprint with a deepfake voice or face, the potential for bypassing biometric security systems—like those used in banking or smartphone unlocking—becomes a frightening reality.

The “Resolution Trap”

As smartphone manufacturers race to include 108MP or 200MP sensors, they are inadvertently creating a goldmine for bad actors. Higher resolution means more data points per pixel, making it easier for AI to map the unique whorls and loops of a human fingerprint from a distance.


Future Trends: The Era of Biometric Obfuscation

As we move forward, the relationship between our physical bodies and our digital identities will undergo a radical shift. We are likely to see several emerging trends in response to these vulnerabilities:

  • Biometric Noise and Masking: Just as some users blur their faces for privacy, we may see the rise of “biometric noise” filters. These AI tools would subtly alter the ridges of fingers or the patterns of an iris in a photo—invisible to the human eye but impossible for a machine to reconstruct.
  • The Shift to Multi-Modal Authentication: Relying on a single biometric (like a fingerprint) is becoming a liability. The industry will likely pivot toward “multi-modal” security, requiring a combination of behavioral biometrics (how you type or walk) and physical biometrics.
  • Legal Frameworks for Biometric Ownership: We can expect a surge in legislation regarding “biometric theft.” If a photo posted on a public forum is used to steal a fingerprint, who is liable? The platform, the user, or the hacker?

Pro Tip: To protect your biometric data, avoid taking high-resolution photos with your palms or fingertips facing the lens. If you are sharing photos of your hands in a professional or public context, consider using a slight blur filter on the fingertips.
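The “biometric noise” idea above can be sketched in a few lines: perturb pixel values in a fingertip region enough to degrade ridge detail for a reconstruction model while staying visually unremarkable. This is a stdlib-only toy on a synthetic grayscale image; the region bounds and noise strength are illustrative, and a real filter would be far more sophisticated:

```python
# Toy "biometric noise" filter: add small random perturbations to a chosen
# region of a grayscale image (a list of pixel rows). Region bounds and
# noise strength are illustrative assumptions.
import random

random.seed(0)
image = [[random.randrange(256) for _ in range(64)] for _ in range(64)]  # stand-in photo

def mask_region(img, y0, y1, x0, x1, strength=12):
    """Return a copy of img with +/-strength noise applied inside the box."""
    out = [row[:] for row in img]
    for y in range(y0, y1):
        for x in range(x0, x1):
            noisy = out[y][x] + random.randint(-strength, strength)
            out[y][x] = max(0, min(255, noisy))  # keep valid pixel range
    return out

protected = mask_region(image, 16, 48, 16, 48)  # "fingertip" box perturbed
```

Pixels outside the box are untouched, so the photo as a whole is unchanged; only the region a scraper would target is corrupted.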

Beyond the Fingerprint: What Else Are We Exposing?

The “peace sign” scare is a wake-up call for a larger issue: the over-sharing of biometric markers. From the unique geometry of our ears to the patterns in our retinas, our photos are essentially digital blueprints of our bodies.

Industry experts suggest that the next frontier of identity theft won’t be passwords or credit card numbers, but “biological keys.” As we integrate more biometric locks into our homes and cars, the incentive for criminals to harvest this data from social media will only grow.

For more on how global tech hubs are handling these risks, you can explore the technological landscape of China or research the latest guidelines on deepfake prevention from international cybersecurity agencies.

Frequently Asked Questions

Q: Is every selfie with a peace sign dangerous?
A: Not necessarily. The risk is highest with high-resolution photos taken from a close distance (under 3 meters) where the fingers are clearly visible and facing the camera.

Q: Can a hacker really unlock my phone with a photo?
A: While most modern phones use 3D mapping or ultrasonic sensors that are harder to fool, the reconstructed data could potentially be used to create a physical “spoof” (a synthetic fingerprint) to bypass simpler biometric scanners.

Q: How can I check if my biometric data has been compromised?
A: Unlike a password, you cannot “change” your fingerprint. The best defense is prevention—limiting the high-res biometric data you post publicly and using two-factor authentication (2FA) that doesn’t rely solely on biometrics.

Join the Conversation

Are you changing the way you take selfies, or do you think this is an overreaction to the power of AI? Let us know in the comments below!

Want more insights on digital privacy? Subscribe to our Privacy Watch newsletter.

World

Tens of thousands of students and teachers unable to access QLearn following cybersecurity breach

by Chief Editor May 8, 2026

The Great Digital Classroom Crash: Why EdTech Security is the Next Global Battleground

Imagine waking up on the morning of a final exam only to find your entire academic world has vanished. No lecture notes, no submission portal, and no way to contact your professor. For hundreds of thousands of students globally, this nightmare became a reality during the massive breach of the Canvas learning management system (LMS).

When the notorious hacking group ShinyHunters targeted Instructure, the company behind Canvas, they didn’t just steal data—they paralyzed the educational infrastructure of nearly 9,000 institutions. From universities in New South Wales to public schools in Queensland, the ripple effect was instantaneous.

This event serves as a wake-up call. As education migrates almost entirely to the cloud, the “single point of failure” risk has reached a critical mass. We are entering a new era where cybersecurity is no longer just an IT concern; it is a fundamental requirement for academic continuity.

Did you know? The Canvas breach highlighted a dangerous trend called “Double Extortion.” Hackers don’t just lock the system; they steal sensitive data and then demand a second ransom to prevent that data from being leaked on the dark web.

The Shift Toward Decentralized Learning Architectures

For years, the trend in EdTech has been consolidation. Schools wanted one platform to do everything: grading, communication, content delivery, and assessment. However, the Canvas incident proves that total centralization creates a “honey pot” for cybercriminals.

In the coming years, we expect a shift toward decentralized or hybrid architectures. Instead of relying on a single cloud provider for every function, institutions may begin distributing their critical data across multiple encrypted environments. This ensures that if one system is compromised, the entire school doesn’t grind to a halt.

We are likely to see the rise of “interoperable micro-services,” where a school might use one secure provider for identity management, another for content storage, and a third for assessments. This “eggs in different baskets” approach limits the blast radius of any single attack.

Zero Trust: The New Standard for Campus Networks

The traditional security model was like a castle: a strong wall (firewall) on the outside, but once you were inside, you were trusted. Modern hackers, however, specialize in finding one small crack in the wall to gain entry and then moving laterally through the system.

The future of EdTech security lies in Zero Trust Architecture (ZTA). The core philosophy is simple: never trust, always verify.

  • Identity-Based Access: Access is granted based on the user’s identity and device health, not just a password.
  • Micro-segmentation: Dividing the network into small zones so a breach in the “student forum” section cannot reach the “grade database” section.
  • Continuous Authentication: Systems that constantly verify the user’s identity throughout their session to prevent session hijacking.

Pro Tip for Educators: To protect your students, implement mandatory Multi-Factor Authentication (MFA) across all platforms. While it adds a few seconds to the login process, it eliminates the vast majority of password-based attacks.
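The three principles above can be condensed into a single access-decision sketch. This is an illustrative model, not a product API: identity, device health, and the least-privilege role map for each network zone are all checked on every request, and any failed check denies access:

```python
# Minimal zero-trust access check (illustrative sketch, not a real product).
# Every request is evaluated against identity, device posture, and the
# least-privilege role map of the target zone.
from dataclasses import dataclass

@dataclass
class Request:
    user_role: str          # e.g. "student", "teacher", "admin"
    mfa_passed: bool
    device_compliant: bool
    target_zone: str        # e.g. "forum", "grades"

ZONE_ROLES = {              # micro-segmentation: which roles each zone admits
    "forum": {"student", "teacher", "admin"},
    "grades": {"teacher", "admin"},
}

def allow(req: Request) -> bool:
    """Never trust, always verify: every check must pass on every request."""
    return (req.mfa_passed
            and req.device_compliant
            and req.user_role in ZONE_ROLES.get(req.target_zone, set()))

print(allow(Request("student", True, True, "grades")))   # denied: wrong zone
print(allow(Request("teacher", True, True, "grades")))   # allowed
```

Note the design choice: a zone absent from `ZONE_ROLES` admits nobody, so the default is deny rather than allow.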

AI vs. AI: The Cybersecurity Arms Race in Education

The ShinyHunters breach demonstrated that hackers are becoming more aggressive, often mocking “security patches” that failed to stop them. This is because attackers are now using AI to scan for vulnerabilities in real-time, finding holes faster than human engineers can patch them.


To counter this, educational institutions will increasingly rely on AI-driven Predictive Security. Instead of reacting to a breach, these systems use machine learning to identify “behavioral anomalies.” For example, if a user account suddenly attempts to download 10,000 student records at 3:00 AM, the AI can kill the session instantly before a human admin even sees the alert.
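The “3:00 AM bulk download” example is essentially an outlier test against a per-user baseline. A toy version using a z-score is shown below; the history, threshold, and numbers are illustrative assumptions, and production systems use far richer behavioral features:

```python
# Toy behavioral-anomaly check: flag a session whose download volume sits
# far outside the user's historical baseline. Data and threshold are
# illustrative assumptions.
from statistics import mean, pstdev

history = [30, 45, 28, 52, 40, 35, 48, 38]   # records downloaded per past session

def is_anomalous(current: int, baseline: list[int], z_cutoff: float = 4.0) -> bool:
    """True if `current` is more than z_cutoff standard deviations from the mean."""
    mu, sigma = mean(baseline), pstdev(baseline)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_cutoff

print(is_anomalous(41, history))      # normal activity
print(is_anomalous(10_000, history))  # bulk-exfiltration pattern
```

A real system would combine many such signals (time of day, device, access path) before killing a session, but the shape of the decision is the same.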

For more insights on how AI is reshaping security, check out our guide on the evolution of threat detection.

Digital Resilience as a Core Curriculum Requirement

The Canvas hack didn’t just cause technical glitches; it caused psychological stress. Students like Abriana Doherty and Ekansh Alla reported extreme frustration and anxiety as deadlines loomed while systems remained dark. This reveals a gap in our education: we teach students how to use technology, but not how to survive its failure.


Digital Resilience will soon become a part of the standard curriculum. This includes:

  • Offline Contingency Planning: Teaching students and staff how to maintain productivity when the cloud disappears.
  • Phishing Literacy: As seen in the Tasmania Department for Education warning, the biggest risk after a breach is the wave of scam emails. Students must be trained to recognize “social engineering” tactics.
  • Data Hygiene: Encouraging users to maintain independent backups of their critical work outside of the institutional LMS.

FAQ: Understanding EdTech Cybersecurity

Q: Why are educational institutions such popular targets for hackers?
A: Schools hold massive amounts of PII (Personally Identifiable Information) and often have decentralized security protocols across thousands of different users, making them “soft targets” compared to banks or government agencies.

Q: If my school’s LMS is hacked, is my financial information at risk?
A: Not necessarily. In the recent Canvas breach, officials noted that passwords and financial data were likely not compromised. However, names and emails are often stolen, which increases the risk of targeted phishing scams.

Q: What should I do if I suspect my student account has been compromised?
A: Immediately change your passwords for all accounts that share the same credentials, enable MFA, and report the incident to your institution’s IT department. Never click links in emails claiming to be “security alerts” without verifying them first.
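For context on what enabling MFA actually buys you: the six-digit codes from authenticator apps are typically TOTP values (RFC 6238), derived from a shared secret and the current 30-second time window, so a phished password alone is useless to an attacker. A minimal stdlib sketch:

```python
# Minimal TOTP (RFC 6238) generator: the code changes every 30 seconds
# and requires the shared secret, not just the account password.
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    counter = struct.pack(">Q", unix_time // step)        # time-window index
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at T=59 -> 94287082
print(totp(b"12345678901234567890", 59, digits=8))  # prints 94287082
```

Because the code expires within seconds, a stolen one is worthless moments later, which is why MFA blunts the phishing wave that follows every breach.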

The digitalization of the classroom is an incredible leap forward, but the Canvas breach proves that our security infrastructure hasn’t kept pace with our innovation. The future of learning depends not just on the quality of the content, but on the resilience of the pipes that deliver it.


What do you think? Has your institution taken enough steps to protect your data, or are we just waiting for the next big crash? Share your experiences in the comments below or subscribe to our newsletter for more deep dives into the intersection of technology and society.

World

The Canvas Hack Is a New Kind of Ransomware Debacle

by Chief Editor May 8, 2026

The New Frontier of Digital Extortion: Why EdTech is the Next Great Cyber Battleground

For years, the narrative around ransomware was simple: hackers lock your files, and you pay a fee to get the key. But the landscape has shifted. We are entering an era of “pure extortion,” where the goal isn’t to lock the system, but to weaponize the data within it.

The recent systemic failure of the Canvas learning management system serves as a wake-up call. When a single platform—used by thousands of institutions and millions of students—becomes a point of failure, the impact isn’t just a technical glitch; it’s a nationwide operational paralysis. As we look toward the future of education technology (EdTech), several critical trends are emerging that will redefine how schools and students protect their digital lives.

Did you know? According to industry reports, Canvas is used by approximately 41% of higher education institutions in North America, making it a primary target for “supply chain” attacks where hackers target one vendor to reach thousands of victims.

The Rise of the ‘Single Point of Failure’ Crisis

The EdTech industry has trended toward massive consolidation. While having a unified system like Canvas or Google Classroom streamlines administration, it creates a “honey pot” effect. A single successful breach at the vendor level—such as the one perpetrated by the ShinyHunters group—can compromise hundreds of millions of records simultaneously.

Future trends suggest a move toward decentralized resilience. We will likely see institutions demanding more “sovereignty” over their data, pushing vendors to move away from monolithic cloud storage toward distributed architectures. The goal is simple: ensure that a breach at the parent company doesn’t automatically grant access to every student’s private messages and ID numbers across 8,000 different schools.

The Shift from Encryption to Exfiltration

We are seeing a pivot in hacker tactics. In the past, ransomware encrypted data. Today, groups like ShinyHunters focus on exfiltration—stealing the data and threatening to leak it. This is far more dangerous for educational institutions because “fixing” the system (patching the hole) doesn’t remove the threat. The data is already gone.

This “leak-ware” model puts schools in an impossible position. Even if the software is “fully operational,” the reputational and legal risk of a data leak persists, creating a permanent state of leverage for the attackers.

Pro Tip: If you use the same password for your university portal as you do for your personal email or banking, change it immediately. Use a password manager to ensure every account has a unique, complex string.
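A password manager is the right tool, but the unique-password advice is also easy to automate yourself. Python’s `secrets` module (stdlib) is designed for exactly this kind of cryptographically secure generation:

```python
# Generate a unique, high-entropy password per account using the
# cryptographically secure `secrets` module from the standard library.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def generate_password(length: int = 20) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # a different 20-character string every call
```

Twenty characters over a 72-symbol alphabet gives well over 100 bits of entropy, far beyond what credential-stuffing attacks can brute-force.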

Why Student Data is the New ‘Digital Gold’

You might wonder why hackers target student ID numbers and email addresses instead of credit card info. The answer is long-term identity value. Student data is often “cleaner” and more stable than financial data, which changes frequently.


Stolen student records allow criminals to:

  • Engineer hyper-targeted phishing: Using specific course names or instructor identities to trick students into downloading malware.
  • Build synthetic identities: Combining student IDs with other leaked data to open fraudulent accounts.
  • Extort individuals: Using private messages exchanged on platforms to blackmail students or faculty.

As AI-driven social engineering becomes more sophisticated, these data sets become the fuel for attacks that are nearly impossible for the average user to detect.

The Path Toward ‘Zero Trust’ Education

To combat these trends, the industry is moving toward a Zero Trust Architecture. The old model of security was like a castle: a big wall (firewall) around the school’s network. Once you were inside, you were trusted.

Zero Trust assumes the attacker is already inside. It requires continuous verification of every user and every device. In the future, logging into a learning platform won’t just require a password; it will involve behavioral biometrics, device fingerprinting, and strict “least-privilege” access, ensuring that a breach in one module (like an ePortfolio) doesn’t lead to a breach of the entire student database.

For more on how to secure your personal data, check out our guide on essential digital hygiene for the modern era.

Frequently Asked Questions

Q: Is my data safe if the platform says the incident is ‘resolved’?
A: ‘Resolved’ usually means the vulnerability has been patched and the attacker no longer has access. However, if your data was already exfiltrated (stolen), it remains in the hands of the attackers regardless of the system’s current status.


Q: What is the most crucial step to take after an EdTech breach?
A: Change your passwords and enable Multi-Factor Authentication (MFA) on all linked accounts. Be extremely wary of emails or texts claiming to be from your institution that ask for further verification.

Q: Why don’t schools just stop using these large platforms?
A: The scale of modern education requires cloud-based collaboration. The solution isn’t to abandon the technology, but to demand higher security standards and more transparent data-handling policies from vendors.

Join the Conversation

Do you think educational institutions are doing enough to protect student privacy, or are we sacrificing security for convenience? Let us know in the comments below or subscribe to our newsletter for the latest updates on cybersecurity trends.


Tech

Tenable finds GitHub workflow flaw in Microsoft repo

by Chief Editor May 4, 2026

The Invisible Attack Surface: Why Your CI/CD Pipeline is the New Front Line

For years, cybersecurity focused on the “front door”—firewalls, login screens, and API gateways. But as development speeds up, the real danger has shifted to the “back door”: the Continuous Integration and Continuous Delivery (CI/CD) pipelines.

The recent discovery by Tenable Research in a Microsoft GitHub repository serves as a wake-up call. A Python string injection flaw in the Windows-driver-samples repository allowed for remote code execution, potentially exposing repository secrets. When a project with 5,000 forks and 7,700 stars has this vulnerability, it isn’t just a bug in one codebase; it’s a blueprint for how modern software supply chains can be dismantled.

The risk isn’t just about one leaked token. It is about the systemic trust we place in automation. As we move forward, the industry is shifting toward a reality where the pipeline itself is treated as a high-value target, equal in importance to the production server.
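Tenable has not published the exact vulnerable snippet, but the bug class itself is easy to illustrate: untrusted text interpolated into a command string. The attacker-controlled value below is made up; the point is the difference between handing input to a shell and passing it as a discrete argument:

```python
# Command/string injection in miniature. The untrusted value is illustrative;
# the lesson is how it is passed, not what it contains.
import shlex
import subprocess

untrusted = "report.txt; echo pwned"   # attacker-controlled input

# VULNERABLE (left commented out): the shell would parse ';' as a
# second command and execute it.
#   subprocess.run(f"cat {untrusted}", shell=True)

# SAFER: argument list with no shell; the entire string is treated as one
# (non-existent) filename, so nothing extra executes.
result = subprocess.run(["cat", untrusted], capture_output=True, text=True)

# If a shell string is truly unavoidable, neutralize metacharacters first:
quoted = shlex.quote(untrusted)        # wraps the value in single quotes
```

The same principle applies in CI/CD workflow files: never interpolate user-controllable fields (issue titles, branch names) directly into run scripts; pass them through environment variables or quote them explicitly.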

Did you know? Many organizations still rely on “default” permissions for their automation tokens. In the Microsoft case, researchers inferred the GITHUB_TOKEN likely operated with default read and write access since the repository predated 2023 security updates.

The Death of the ‘God Token’ and the Rise of Least Privilege


One of the most critical trends in DevOps security is the aggressive move away from long-lived, high-privilege tokens. For too long, developers used “God Tokens”—credentials with sweeping permissions that could create issues, push code, and modify settings across an entire organization. The future is Least Privilege Automation. We are seeing a transition toward:

  • Short-lived Credentials: Moving away from static secrets toward tokens that expire in minutes or hours.
  • OIDC (OpenID Connect): Instead of storing a secret key in GitHub, pipelines now use OIDC to request temporary access from cloud providers like AWS or Azure, eliminating the need for long-term stored secrets.
  • Granular Scoping: Rather than “Read/Write” access, permissions are being narrowed to specific actions, such as read-only access to the contents folder.
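
A minimal, illustrative GitHub Actions fragment shows what granular scoping looks like in practice; the workflow and job names here are placeholders, not taken from the Microsoft repository:

```yaml
name: build

on: [push]

# Explicitly drop the default GITHUB_TOKEN scopes;
# grant only read access to repository contents for this workflow
permissions:
  contents: read

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make build
```

With an explicit `permissions` block, a compromised step can no longer push code or modify settings, even if the token leaks.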

“The CI/CD infrastructure is part of an organisation’s attack surface and software supply chain,” said Rémy Marot, Staff Research Engineer at Tenable.

AI: The Double-Edged Sword of Pipeline Security

As we integrate Artificial Intelligence into our coding workflows, we are entering a period of “automated escalation.” AI is fundamentally changing how vulnerabilities like string injections are both created and found. On the offensive side, attackers are using LLMs to scan public YAML files and workflow scripts for patterns that suggest unsafe input handling. A vulnerability that might have taken a human researcher days to find can now be spotted by an AI agent in seconds. But the defensive trend is equally powerful. We are seeing the emergence of AI-driven Guardrails. Future CI/CD systems will likely include:

  • Real-time Static Analysis: AI that blocks a commit if the workflow script introduces a potential injection point.
  • Anomaly Detection: Systems that flag a workflow if it suddenly attempts to access a secret it has never used before or connects to an unknown external IP.
Pro Tip: Regularly audit your `.github/workflows` files. Treat your YAML configurations as production code—subject them to the same peer review and security scanning as your primary application logic.

Moving Toward ‘Zero Trust’ DevOps

The industry is realizing that “internal” does not mean “safe.” The Tenable finding proved that a simple GitHub issue submission—an action available to any registered user—could trigger a vulnerable workflow. The future trend is Zero Trust for Pipelines. This means assuming that any input coming into the pipeline—whether it is a pull request, a comment, or an issue description—is potentially malicious. This shift involves implementing Software Bill of Materials (SBOM) and strict provenance checks. By verifying exactly who touched the code and which automated process built the binary, companies can ensure that a compromised pipeline doesn’t lead to a poisoned update being sent to millions of users.


Frequently Asked Questions

What is a CI/CD pipeline attack?

A CI/CD attack targets the automated tools used to build and deploy software. Instead of attacking the final app, hackers target the pipeline to steal secrets or inject malicious code directly into the software before it is released.


Why is string injection dangerous in GitHub Actions?

String injection occurs when user-supplied text is executed as code. In GitHub Actions, if a workflow takes a user’s issue description and passes it directly into a shell script or Python command, an attacker can “inject” their own commands to take over the server running the workflow.
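
To make the mechanics concrete, here is a minimal, hypothetical Python sketch of the same class of bug: interpolating untrusted text into a command string versus passing it as a plain argument. The issue title is invented for illustration.

```python
import subprocess

# Hypothetical untrusted input, e.g. the title of a GitHub issue
issue_title = 'Fix build"; rm -rf / #'

# UNSAFE: the attacker-controlled text becomes part of the shell command itself
# subprocess.run(f'echo "{issue_title}"', shell=True)

# SAFE: the text is passed as a single argument and is never parsed by a shell
result = subprocess.run(["echo", issue_title], capture_output=True, text=True)
print(result.stdout.strip())
```

The safe form treats the input purely as data, which is the same principle behind using `env:` blocks instead of `${{ ... }}` interpolation in workflow scripts.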

How can I secure my GitHub repository secrets?

Avoid using default permissions. Explicitly define the permissions key in your workflow YAML to restrict the GITHUB_TOKEN to the minimum access required for that specific job.

What is the role of the GITHUB_TOKEN?

The GITHUB_TOKEN is an automatically generated secret used by GitHub Actions to authenticate requests to the GitHub API, allowing the workflow to perform tasks like creating releases or commenting on issues.


Join the Conversation: Is your team treating your CI/CD pipeline as critical infrastructure, or is it still viewed as “background tooling”? Share your security strategies or ask a question in the comments below.

Want to stay ahead of the next major vulnerability? Subscribe to our Security Insights newsletter for weekly deep-dives into the evolving threat landscape.

May 4, 2026
Health

Dental practice software maker fixes bug that exposed patients’ medical records

by Chief Editor April 30, 2026
written by Chief Editor

The Rise of the “Accidental” Security Researcher

For years, the world of cybersecurity was the domain of elite hackers and professional penetration testers. However, a new trend is emerging: the “accidental” researcher. These are regular consumers who stumble upon massive security flaws not through malicious intent, but through simple curiosity or routine use of a service.

Take the recent case of Joseph R. Cox, a patient who discovered a critical vulnerability while simply viewing his own dental records. By noticing that document numbers in the web address were sequentially incremental, he realized that changing a single digit allowed him to access the private medical histories, personal information, and photo identification of other patients.


This highlights a growing reality for modern businesses. Your first line of defense is no longer just your IT department; it is every single person with a login to your portal. When users find these gaps, the relationship between the consumer and the company is put to the ultimate test.

Did you know? The flaw described—where changing a URL parameter allows access to another user’s data—is known in the industry as an Insecure Direct Object Reference (IDOR). It is one of the most common yet devastating security oversights in web applications.
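
The fix for an IDOR is an explicit authorization check on every object lookup. A minimal Python sketch, using an invented record store and user names purely for illustration:

```python
# Hypothetical record store keyed by sequential document id
RECORDS = {
    1001: {"owner": "alice", "data": "alice's dental chart"},
    1002: {"owner": "bob", "data": "bob's dental chart"},
}

def get_record(doc_id: int, current_user: str) -> str:
    record = RECORDS.get(doc_id)
    if record is None:
        raise KeyError("no such document")
    # The IDOR fix: never trust the id alone; verify ownership too
    if record["owner"] != current_user:
        raise PermissionError("not authorized for this document")
    return record["data"]
```

Without the ownership check, incrementing the id in the URL is all it takes to read someone else’s record, which is exactly what happened here.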

The Danger of the “Reporting Vacuum”

Finding a bug is only half the battle; the real crisis occurs when there is no way to report it. We are seeing an alarming trend of “reporting vacuums,” where companies provide no discernible avenue for security disclosures. In the case of Practice by Numbers, the company’s website email was broken, and messages sent to founders via LinkedIn went unanswered.

This is not an isolated incident. Similar patterns have appeared across various industries:

  • Retail: The fashion retailer Express recently fixed a bug that exposed customer order details after a user struggled to find a way to alert the company.
  • Home Improvement: Home Depot reportedly ignored reports from a security researcher regarding a lapse that exposed internal systems for nearly a year, only acting after media intervention.

When companies ignore or fail to provide a communication channel, they push well-meaning users toward the media. This transforms a private patch into a public relations disaster.

The Shift Toward Vulnerability Disclosure Programs (VDPs)

The future of corporate security lies in the adoption of formal Vulnerability Disclosure Programs (VDPs). Rather than relying on a generic “Contact Us” email, forward-thinking companies are creating dedicated portals where researchers can safely report flaws without fear of legal retaliation.


While Practice by Numbers has stated they plan to update their website to allow for security reporting, the lack of a specific timeline underscores a wider industry lag in prioritizing these communication pipelines.

Healthcare SaaS: The High Stakes of “Bundled” Software

The vulnerability in the Practice by Numbers portal—used in over 5,000 dental practices across the U.S.—reveals the systemic risk of bundled healthcare software. When a single software provider manages portals for thousands of clinics, a single bug becomes a force multiplier for data exposure.

In this instance, the software housed highly sensitive data, including medical documents and photo IDs. While the company’s CTO, Chris Lau, noted that server logs suggested fewer than 10 patients were exposed, the potential for damage was immense.

Pro Tip for Business Owners: If you use third-party SaaS for patient or customer data, ask your provider specifically if they undergo annual third-party security audits. A “secure” claim is not a substitute for a verified audit report.

The Necessity of Third-Party Audits

A recurring theme in recent breaches is the absence of pre-launch security audits. When questioned, leadership at Practice by Numbers declined to confirm if their portal had undergone such a review. In an era of sophisticated cyber threats, relying on internal testing is no longer sufficient, especially for companies handling protected health information.


Frequently Asked Questions

What is an IDOR vulnerability?

An Insecure Direct Object Reference (IDOR) occurs when an application provides direct access to objects based on user-supplied input. If the system doesn’t verify that the user has permission to access that specific object, an attacker can simply change a value (like a patient ID in a URL) to view someone else’s data.

Why are companies slow to implement reporting channels?

Some companies fear that inviting reports will draw more attention to their flaws or lead to “beg-bounties” (people reporting trivial issues for money). However, the risk of a silent breach or a public exposé is far greater than the cost of managing a VDP.

How can I tell if my data has been exposed in a software bug?

The most reliable way is through official notifications from the service provider. In the recent dental software case, the company worked with the affected practice to notify the specific patients identified in their server logs.

What do you think? Should companies be legally required to provide a functional security reporting channel? Let us know in the comments below or subscribe to our newsletter for more insights on digital privacy.

April 30, 2026
Tech

Open-source IPFire DNS Firewall blocks malware and phishing at the resolver

by Chief Editor April 28, 2026
written by Chief Editor

The Evolution of Network Defense: Moving Toward DNS-Layer Security

For years, network administrators have relied on a combination of heavy-duty proxies and external “sinkholes” to keep unwanted traffic at bay. However, the landscape is shifting. The recent integration of DNS-layer domain blocking directly into the firewall—as seen in the latest IPFire Core Update 201—signals a broader trend: the move toward lightweight, invisible, and highly efficient security at the resolver level.

Unlike traditional URL filters that often require complex HTTPS inspection and certificate handling, DNS-layer blocking operates by intercepting the request before a connection is even attempted. When a client requests a domain flagged as malicious, the system returns an NXDOMAIN response. This effectively tells the client that the domain does not exist, ensuring that no connection is established and no sensitive data leaves the network.

Did you know? An NXDOMAIN (Non-Existent Domain) response is one of the most efficient ways to block threats because it stops the attack at the “phonebook” stage of the internet, preventing the device from ever reaching out to the malicious server.
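
The decision logic at the resolver is simple to sketch. This hypothetical Python fragment mimics what a DNS firewall does with each query; the blocklist entries are made up:

```python
# Hypothetical blocklist of flagged domains
BLOCKLIST = {"malware.example", "phishing.example"}

def resolve_decision(qname: str) -> str:
    # Normalize the query name the way a resolver would
    name = qname.rstrip(".").lower()
    # Block the domain itself and any subdomain of it
    if any(name == d or name.endswith("." + d) for d in BLOCKLIST):
        return "NXDOMAIN"   # tell the client the domain does not exist
    return "FORWARD"        # hand the query to the upstream resolver
```

Because the check happens before any TCP or TLS handshake, the client never contacts the malicious server at all.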

The Decline of Heavy Proxy Dependencies

The industry is moving away from the “middleman” approach to filtering. Traditional URL filters often depend on proxy setups that can introduce latency and break encrypted traffic. By handling blocklist enforcement directly inside the firewall’s DNS proxy, the need for client-side configuration and HTTPS inspection is eliminated.


This transition simplifies the architecture for the end-user. Instead of managing a separate device—such as an external Pi-hole deployment—operators can now consolidate their security stack. This reduction in complexity not only improves performance but also reduces the number of potential failure points in a home or business network.

Solving the Bandwidth Bottleneck in Threat Intelligence

One of the biggest hurdles in maintaining real-time security is the size of the blocklists. As the number of phishing and malware domains grows, the data required to keep a firewall updated can become massive. For users on limited cellular connections or in regions with expensive data, downloading gigabytes of updates is simply not sustainable.


The solution lies in Incremental Zone Transfers (IXFR), defined in RFC 1995. Rather than downloading a full list every time a change occurs, IXFR allows the firewall to download only the specific changes between versions. According to Michael Tremer, IPFire’s lead developer, this is crucial because full downloads of malware and phishing lists can reach roughly 100 MiB per update.

This shift toward incremental updates is a critical trend for the “edge” of the internet. As more devices move to the network perimeter, the ability to push updates every five minutes without saturating the connection is what allows security teams to combat the short lifespan of phishing sites, which may only remain active for a few hours.
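
The idea behind IXFR can be illustrated in a few lines of Python: rather than shipping the whole list, compute and apply only the delta between two versions. The domain names below are placeholders, not real blocklist entries:

```python
def compute_delta(old: set, new: set) -> dict:
    # IXFR-style update: transfer only additions and removals
    return {"add": sorted(new - old), "remove": sorted(old - new)}

def apply_delta(current: set, delta: dict) -> set:
    return (current - set(delta["remove"])) | set(delta["add"])

old_zone = {"bad1.example", "bad2.example"}
new_zone = {"bad2.example", "bad3.example"}

delta = compute_delta(old_zone, new_zone)
# Only two entries travel over the wire instead of the full zone
assert apply_delta(old_zone, delta) == new_zone
```

For a list of millions of domains that changes by a few hundred entries every five minutes, the delta is a tiny fraction of the full transfer.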

Pro Tip: If you are migrating from a separate Pi-hole or an older URL Filter, remember that custom block and allow lists do not transfer automatically. Use the web UI to copy and paste your domains directly into the new DNS Firewall interface to maintain your custom security posture.

Hardening the Attack Surface: The “Less is More” Philosophy

Modern security is not just about adding new features; it is also about removing unnecessary ones. A growing trend in open-source distributions is the aggressive pruning of unused packages to reduce the “attack surface”—the total number of points where an attacker could potentially find a vulnerability.


We are seeing this in practice with the removal of non-essential components. For example, the removal of Rust packages no longer required by the distribution and the dropping of the 7zip add-on (due to a lack of upstream maintenance) are strategic moves. By cutting build overhead and removing unmaintained code, developers can ensure a leaner, more secure environment.

This philosophy extends to the toolchain itself. Updating to the latest versions of core components—such as glibc 2.43, OpenSSL 3.6.1, and OpenVPN 2.6.19—ensures that the firewall is leveraging the most recent security patches and performance optimizations.

The Future of Automated Reporting and IDS

As network environments grow more complex, the way we handle security alerts must also evolve. The move toward customizable recipient configurations for Intrusion Prevention System (IPS) reports—splitting daily, weekly, and monthly cadences—reflects a need for better organizational routing.

In the future, we can expect these reports to become even more granular, potentially integrating with AI-driven analysis to separate “noise” from actual threats, ensuring that the people responsible for reviewing them are not overwhelmed by false positives.

Frequently Asked Questions

What is DNS-layer domain blocking?
It is a security method that checks DNS queries against a blocklist before a connection is made. If a domain is listed as malicious, the firewall returns an NXDOMAIN response, preventing the device from connecting to the site.

Do I still need a Pi-hole if my firewall has a DNS Firewall?
While Pi-hole is a powerful tool, integrated DNS firewalls provide similar functionality (blocking malware, phishing, and ads) without the need for additional hardware or complex configuration.

What is IXFR and why does it matter?
IXFR stands for Incremental Zone Transfer. It allows a system to download only the changes to a blocklist rather than the entire file, which significantly saves bandwidth and allows for more frequent updates.

Does the DNS Firewall require HTTPS inspection?
No. Because it operates at the DNS level, it does not need to inspect encrypted HTTPS traffic or handle certificates, making it more privacy-friendly and easier to deploy.


Are you upgrading your home or business firewall this year? We want to hear about your setup. Do you prefer a consolidated firewall approach, or do you still rely on separate hardware for DNS sinkholing? Let us know in the comments below or subscribe to our newsletter for more deep dives into open-source security.

April 28, 2026
World

Cybersecurity Meets Geopolitics at Top EU Court

by Chief Editor April 24, 2026
written by Chief Editor

The New Era of Digital Sovereignty: Moving Beyond Blanket Bans

The landscape of European telecommunications is shifting. For years, the debate around “high-risk vendors” was a binary struggle: either a company was allowed in the network, or it was banned entirely. However, recent legal developments at the Court of Justice of the European Union (CJEU) suggest a more nuanced future.

The advisory opinion in Elisa Eesti AS v. Estonian Government Security Committee signals a move toward “granular security.” While the CJEU acknowledges that Member States can exclude hardware and software based on national security risks, the era of the opaque “blacklist” may be ending.

From Blacklists to Risk Maps

Future trends indicate that governments will be required to move away from blanket bans. Instead, they must provide specific, equipment-and-use-based risk assessments. This means regulators cannot simply say a manufacturer is “high-risk”; they must articulate why a specific component in a specific part of the network poses an unacceptable threat.

This shift forces a translation of classified intelligence into contestable legal reasoning. For operators, this means a move toward more detailed documentation and a higher burden of proof for regulators who wish to compel the removal of existing infrastructure.

Did you know? The Estonian Electronic Communications Act assesses high-risk vendors based on 12 criteria, including whether the producer’s home country respects democratic principles or exhibits aggressive behavior in cyberspace.

The High Cost of Security: The “Rip and Replace” Challenge

As the EU pushes for a more secure ICT supply chain, the industry is facing a massive financial hurdle: the “rip and replace” phenomenon. Removing deeply integrated hardware from a live network is not just a technical challenge—it is a multi-billion-euro operational nightmare.


We are seeing a fragmented implementation across the bloc. While countries like Sweden and Latvia moved early to exclude vendors like Huawei and ZTE from core 5G networks, others have lagged. Germany, for instance, has announced plans to remove these components from its core 5G networks by the end of 2026.

A critical trend to watch is the fight over compensation. As operators are forced to swap out equipment, the question of the “right to property” under the EU Charter of Fundamental Rights becomes central. Without U.S.-style assistance funds, the financial burden on mid-sized operators could lead to increased litigation over fair compensation.

Pro Tip for Operators: Start auditing your supply chain now. Transitioning from a high-risk vendor is more cost-effective when integrated into a long-term hardware refresh cycle rather than reacting to a sudden government mandate.

When Courts Meet Classified Intelligence

One of the most significant future trends is the “judicialization” of national security. Historically, “national security” was often treated as a carte blanche—a magic phrase that stopped further legal inquiry. That is changing.

The CJEU is establishing that while the EU cannot decide what is necessary for a Member State’s security, the invocation of national security does not exempt a state from complying with EU law. This creates a tension: how do courts review a decision based on classified intelligence without compromising that very intelligence?

One can expect a growing body of case law focusing on proportionality. Courts will increasingly probe how hybrid administrative bodies translate secret threats into public, reviewable decisions. This will likely lead to new judicial techniques for handling secret evidence while still protecting the rights of private companies.

Expanding the Perimeter: Beyond 5G

The logic applied to 5G towers is rapidly expanding to other critical digital arteries. The EU’s broader ICT Supply Chain Security Toolbox encourages governments to look beyond technical vulnerabilities to “non-technical risks,” such as ownership structures and political pressure.


This “security-first” methodology is now bleeding into other sectors:

  • Satellite Connectivity: Ensuring that the space-based internet of the future isn’t dependent on adversarial infrastructure.
  • Submarine Cables: Applying the same risk-assessment logic to the physical cables that carry the bulk of global internet traffic.
  • Global Gateway: Integrating ICT risk management into the EU’s international infrastructure investments.

The Regulatory Shift: Consumer Protection as National Defense

Perhaps the most surprising trend is the institutional migration of security. In the Elisa Eesti case, the decision didn’t come from a Ministry of Defense, but from the TTJA—an office for consumer protection and technical supervision.

Cybersecurity is no longer just a military concern; it has migrated into the realm of consumer and competition law. This means that the regulators of tomorrow will be “hybrid” agents, balancing technical standards, consumer rights, and geopolitical intelligence. This shift may lead to more frequent intersections between competition law (antitrust) and national security mandates.

FAQ: High-Risk Vendors and EU Law

Can EU countries legally ban specific telecom vendors?
Yes, in principle. According to recent advisory opinions, Member States may exclude hardware and software if the manufacturer poses a risk to national security, provided the decision is based on a specific risk assessment.

What is “rip and replace”?
It is the process of removing existing high-risk vendor equipment from a network and replacing it with gear from trusted suppliers.

Is the Advocate General’s opinion legally binding?
No, the opinions of Advocates General are non-binding, but they are highly influential in shaping the final judgments of the CJEU and the development of EU legal doctrine.

Who determines if a vendor is “high-risk”?
This is typically determined by national authorities (such as security committees or technical supervision offices) using criteria that may include the vendor’s country of origin and its relationship with foreign governments.

Join the Conversation

How should the EU balance national security with the financial burden on telecom operators? Do you believe “granular” risk assessments are enough to protect digital infrastructure?

Share your thoughts in the comments below or subscribe to our newsletter for the latest insights on digital sovereignty.

April 24, 2026
Tech

Barracuda spots 7 million device code phishing attacks

by Chief Editor April 24, 2026
written by Chief Editor

The Industrialization of Identity Theft: The PhaaS Evolution

The landscape of cybercrime is shifting from manual, targeted attacks to a highly scalable business model. The emergence of Phishing-as-a-Service (PhaaS) platforms, such as the EvilTokens kit, allows low-skill criminals to launch sophisticated campaigns that were once the sole domain of advanced threat actors.

This “industrialization” means that high-volume attacks are now easier to execute. For example, security firm Barracuda recently detected over 7 million device code phishing attacks within a single four-week window. By packaging complex exploits into ready-to-use kits sold on platforms like Telegram, the barrier to entry for attackers has vanished.

Did you know? Device code phishing is particularly dangerous because it doesn’t rely on fake login pages. Instead, it tricks users into using the legitimate Microsoft login portal, making it nearly invisible to traditional “spot the fake URL” training.

Beyond the Password: The Shift to Token Hijacking

For years, security training focused on preventing credential theft. However, we are seeing a strategic pivot toward hijacking trusted authentication flows. Instead of stealing a password, attackers are now targeting OAuth 2.0 access and refresh tokens.


By abusing the device authorization flow—originally designed for devices with limited interfaces like printers or smart TVs—attackers can gain authorized access to Microsoft 365 and Entra ID environments. Once a victim enters a legitimate code on a real Microsoft page, the attacker receives the token directly.

This method provides three critical advantages for the attacker:

  • Stealth: No cloned websites are used, bypassing many email filters.
  • MFA Bypass: Because the victim authorizes the device themselves, multifactor authentication (MFA) and conditional access checks are often bypassed.
  • Persistence: Refresh tokens can grant attackers access for days or weeks, remaining effective even if the user changes their password.

The Next Frontier: Cross-Platform Expansion

While current surges heavily target Microsoft ecosystems, the trend is moving toward cross-platform versatility. The developers behind the EvilTokens kit have already indicated plans to extend their phishing capabilities to include Gmail and Okta phishing pages.


This suggests a future where “identity-agnostic” phishing kits can pivot between different cloud providers depending on the target’s infrastructure. We are already seeing diverse threat actors—including Russian groups like Storm-237, UTA032, UTA0355, UNK_AcademicFlare, and TA2723, as well as the ShinyHunters data extortion group—leveraging these advanced techniques.

Pro Tip: To mitigate this risk, organizations should implement layered security controls, including advanced email filtering and continuous monitoring of identity protection mechanisms. Tighter controls around device authorization flows are essential to stop token abuse.
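
One way to act on that monitoring advice is to flag any sign-in that used the device authorization flow at all, since ordinary interactive users rarely need it. A hypothetical Python sketch over simplified, invented sign-in log entries:

```python
# Hypothetical, simplified sign-in log entries
SIGNINS = [
    {"user": "hr@example.com", "protocol": "deviceCode", "client": "Office"},
    {"user": "dev@example.com", "protocol": "browser", "client": "Portal"},
    {"user": "sales@example.com", "protocol": "deviceCode", "client": "Office"},
]

def flag_device_code_signins(signins: list) -> list:
    # Device-code sign-ins from ordinary users are rare enough to review by hand
    return [s for s in signins if s["protocol"] == "deviceCode"]

flagged = flag_device_code_signins(SIGNINS)
```

In a real environment the same filter would run against the identity provider’s sign-in logs, with each hit routed to a human reviewer.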

Redefining the Human Firewall

The rise of device code phishing renders traditional “look for the padlock” or “check the domain” advice obsolete. Since the final step of the attack happens on a genuine site (such as microsoft.com/devicelogin), the battle has shifted from technical detection to contextual awareness.

Future security training must move beyond identifying “fake” sites and instead teach users to question the reason for a request. If a user is asked to enter a verification code for a device they didn’t intentionally link, it should be treated as a critical red flag, regardless of how legitimate the website appears.

Attackers are increasingly tailoring their lures to specific roles. Recent campaigns have used PDFs, HTML, and DOCX files impersonating financial documents, payroll notices, or SharePoint shares to target employees in HR, finance, logistics, and sales.

Frequently Asked Questions

What is device code phishing?
It’s an attack that abuses the OAuth 2.0 device authorization flow. Attackers trick users into entering a legitimate device code on an official login page, which grants the attacker an access token to the user’s account.

Can MFA stop device code phishing?
Not necessarily. Because the victim is the one performing the authentication on a trusted device, they effectively “approve” the attacker’s session, potentially bypassing MFA and conditional access checks.

What is EvilTokens?
EvilTokens is a Phishing-as-a-Service (PhaaS) kit that automates device code phishing attacks, primarily targeting Microsoft 365 and Entra ID environments.

How do I protect my organization?
Implement layered security, use advanced email filtering, monitor for unusual identity patterns, and train staff to never enter device codes unless they initiated the request themselves.


Are you confident in your current identity protection strategy? Share your thoughts in the comments below or subscribe to our newsletter for the latest updates on evolving cyber threats.

April 24, 2026