Newsy Today
news of today
Tech

The FBI says crypto crime is soaring in NC and across the country :: WRAL.com

by Chief Editor March 26, 2026

The Soaring Threat of Cybercrime: How Scammers Are Exploiting Crypto and AI

One click can be all it takes to become a victim. Despite years of warnings, cybercrime is surging, both in the number of people affected and the financial losses incurred. The speed and anonymity offered by cryptocurrency, coupled with the increasing sophistication of scams powered by artificial intelligence, are creating a perfect storm for fraudsters.

Romance and Investment Scams: A Growing Epidemic

Relationship investment scams are a particularly insidious form of romance fraud. These schemes, which caused nearly $4 billion in losses in 2023 according to the FBI, involve building long-term relationships with victims before introducing the idea of investing in cryptocurrency. Melanie Devoe of the Commodity Futures Trading Commission explains that these fraudsters are “professionals” with a well-defined playbook.

The Crypto Advantage for Criminals

The shift to cryptocurrency provides criminals with a significant advantage. Crypto’s ease of concealment, due to limited agreements between the FBI and crypto entities, makes it harder for investigators to track and recover stolen funds. James Kaylor, a Supervisory Agent with the FBI, notes that “crypto can move really, really quickly,” and is “easier for them to launder that money rather than go through financial institutions.”

Billions Lost, Limited Recovery

The U.S. Department of Justice seized nearly $2.5 billion in crypto linked to cybercrimes in fiscal year 2025 – a tenfold increase from the $237 million recovered just five years prior. However, this recovered amount represents only a small fraction of the total losses. In 2024 alone, victims reported losing $9.3 billion in crypto scams. Recovering funds is further complicated by the fact that seized crypto wallets often contain money from multiple victims, making equitable distribution challenging.

AI-Powered Deception

Scammers are increasingly leveraging the power of artificial intelligence to enhance their deception. Kaylor warns of “manipulated websites, manipulated graphics, AI manipulated charts to show that you’re making money.” By the time victims realize they’ve been scammed, it’s often too late, and the fraudsters have disappeared with their money.

Real-Life Impact: A $2 Million Loss

Federal court filings reveal numerous cases of victims losing substantial sums. One example involves a 67-year-old man from Harnett County who invested nearly $2 million in a fake crypto trading site after being targeted in a romance scam. The FBI was only able to recover approximately $300,000 of his investment.

Protecting Yourself: Simple Advice

The FBI offers straightforward advice to protect against cybercrime: never send money to someone you’ve only met online, and be skeptical of websites that appear legitimate but may be fraudulent. The golden rule, as Kaylor puts it, is “if it sounds too good to be true, it probably is. Don’t do it.”

Future Trends and Challenges

As AI technology becomes more accessible, expect to see even more sophisticated scams. Deepfakes, realistic but fabricated videos and audio recordings, could be used to impersonate trusted individuals and further manipulate victims. The increasing complexity of the crypto landscape, with the emergence of new cryptocurrencies and decentralized finance (DeFi) platforms, will also present new challenges for law enforcement.

FAQ

Q: What is the biggest risk with crypto scams?
A: The speed and anonymity of cryptocurrency transactions make it difficult to track and recover stolen funds.

Q: How can I protect myself from romance scams?
A: Never send money to someone you’ve only met online, and be wary of individuals who quickly profess strong feelings or ask for financial assistance.

Q: What should I do if I think I’ve been scammed?
A: Report the incident to the FBI’s Internet Crime Complaint Center (IC3) and your local law enforcement agency.

Q: Is there any way to get my money back if I’ve been scammed?
A: Recovery is often difficult, but reporting the scam promptly may increase the chances of recovering some funds.

Did you know? North Carolina experienced a high number of cyber investment scam complaints in 2024, with 178 reported cases.

Pro Tip: Regularly update your security software and be cautious about clicking on links or downloading attachments from unknown sources.

What are your experiences with online scams? Share your thoughts in the comments below and help us raise awareness!

Business

Boosting Your Support and Safety on Meta’s Apps With AI

by Chief Editor March 19, 2026

Meta’s AI Revolution: 24/7 Support and a Safer Online Experience

Meta is rolling out a significant upgrade to user support and content moderation on Facebook and Instagram, powered by artificial intelligence. The new Meta AI support assistant promises to resolve account issues faster and more effectively, while advanced AI systems are being deployed to detect and remove harmful content with greater accuracy.

Instant Support: The Meta AI Support Assistant

Forget endless searches through help center articles. Meta’s new AI assistant is designed to provide 24/7 support for nearly any issue, directly within the Facebook and Instagram apps for iOS and Android, and on desktop. It can respond to requests in under five seconds, a dramatic improvement over traditional support methods.

The assistant isn’t just about answering questions; it can take action. Users can now directly report scams, impersonation accounts, or problematic content. It also simplifies processes like appealing content takedowns, managing privacy settings, resetting passwords, and updating profile information. The rollout is happening in all languages currently supported by Facebook and Instagram.

Early feedback on the Meta AI support assistant has been positive, with the majority of users reporting a great experience.

Smarter Content Enforcement: AI Tackles Scams and Harmful Content

Beyond user support, Meta is leveraging AI to improve content enforcement. New AI systems are demonstrating impressive results in identifying and mitigating harmful content, including scams, impersonation, and illicit material.

Here’s what the new AI systems are achieving:

  • Scam Detection: Identifying 5,000 scam attempts per day that previously went unnoticed.
  • Impersonation Prevention: Reducing user reports of impersonated celebrities by over 80%.
  • Harmful Content Removal: Catching two times more violating adult sexual solicitation content while decreasing mistakes by over 60%.
  • Account Protection: Preventing account takeovers by recognizing suspicious login activity.
  • Fraudulent Site Detection: Identifying fake websites spoofing legitimate businesses.

These systems operate in 98% of languages spoken online, a significant increase from previous coverage, and are designed to adapt to evolving tactics and cultural nuances.

A Shift in Strategy: AI and Human Expertise

Meta plans to gradually rely more on these advanced AI systems for content enforcement, reducing its dependence on third-party vendors. However, human reviewers will remain crucial, particularly for complex decisions like appeals and reports to law enforcement. The goal is to combine the speed and scale of AI with the judgment and expertise of human moderators.

Meta emphasizes that its Community Standards are not changing, and the AI tools are designed to improve reporting and appeal processes.

Future Trends: The Evolution of AI in Social Media

Meta’s investment in AI signals a broader trend in social media: a move towards proactive, automated safety measures. Expect to see:

  • Hyper-Personalized Safety: AI tailoring safety features to individual user behavior and risk profiles.
  • Real-Time Threat Detection: AI identifying and responding to emerging threats in real-time, such as coordinated disinformation campaigns.
  • Enhanced Privacy Controls: AI-powered tools giving users more granular control over their data and privacy settings.
  • AI-Driven Content Creation Tools: AI assisting users in creating safe and responsible content.

FAQ

Q: Will the AI assistant replace human support agents?
A: No, Meta states that human reviewers will continue to play a key role, especially for complex issues and appeals.

Q: How does Meta ensure the AI isn’t biased?
A: Meta is rigorously testing the AI systems, building in safeguards, and evaluating performance to protect against bias and ensure accuracy.

Q: Is the AI assistant available in all languages?
A: The AI assistant is rolling out in all languages supported by Facebook and Instagram.

Q: What types of scams can the AI assistant help with?
A: The AI assistant can help report scams, including those involving login details and impersonation.

Did you know? The new AI systems can detect and prevent account takeovers by recognizing unusual login activity and profile changes.

Pro Tip: Familiarize yourself with Meta’s reporting tools and the new AI assistant to quickly address any issues you encounter on Facebook or Instagram.

Want to learn more about Meta’s safety initiatives? Visit the Meta Newsroom.

Business

Fighting Scammers and Protecting People with New Technology and Partnerships

by Chief Editor March 11, 2026

The Evolving Battle Against Online Scams: AI, Alerts, and a Proactive Defense

The digital landscape is in a constant arms race against increasingly sophisticated scammers. As criminals refine their tactics, platforms like Meta are responding with a multi-pronged approach centered on artificial intelligence, proactive user alerts, and strengthened partnerships with law enforcement. The goal: to stay ahead of the curve and protect users from fraud.

AI as a Frontline Defender: Beyond Traditional Detection

Traditional scam detection systems often struggle with the subtle nuances employed by modern fraudsters. Meta is deploying advanced AI systems capable of analyzing text, images, and contextual clues to identify a wider range of scam patterns. This isn’t just about flagging obvious phishing attempts; it’s about recognizing deceptive framing and subtle manipulation.
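To make the idea of pattern-based flagging concrete, here is a deliberately naive sketch. This is purely illustrative and is not Meta’s actual system, which relies on large-scale machine learning; the phrases, regexes, and threshold below are invented for the example.

```python
import re

# Invented example patterns; a real detector would learn features from data
# rather than rely on a hand-written list.
SCAM_PATTERNS = [
    r"guaranteed\s+returns?",
    r"wire\s+(?:me\s+)?money",
    r"crypto\s+invest(?:ment|ing)?",
    r"act\s+now",
]

def scam_score(message: str) -> int:
    """Count how many suspicious patterns appear in the message text."""
    text = message.lower()
    return sum(1 for pattern in SCAM_PATTERNS if re.search(pattern, text))

def looks_suspicious(message: str, threshold: int = 2) -> bool:
    """Flag a message when several suspicious patterns co-occur."""
    return scam_score(message) >= threshold
```

The co-occurrence threshold is the toy version of the "contextual clues" point in the article: one suspicious phrase alone proves little, but several appearing together in the same message is a much stronger signal.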

Combating Impersonation with AI

A key focus is on detecting impersonation, where scammers mimic celebrities, public figures, or brands. AI analyzes fake fan sentiment, misleading bios, and associations to identify deceptive accounts. The technology processes more contextual information, enhancing the ability to catch these impersonations before they cause harm.

Protecting Against Deceptive Links and Domain Spoofing

Scammers frequently redirect users to fake websites designed to steal credentials or financial information. Meta’s AI proactively detects and blocks content leading to these deceptive webpages, protecting thousands of brands and individuals from impersonation attempts.

New Tools to Empower Users: Alerts and Warnings

While automated detection is crucial, empowering users with information is equally important. Meta is rolling out new tools to alert users to potential threats before they engage with suspicious activity.

Suspicious Friend Request Alerts on Facebook

Facebook is testing warnings for suspicious friend requests, particularly those from accounts with few mutual friends or inconsistent location data. These alerts help users make informed decisions about accepting or rejecting requests, reducing the risk of connecting with scammers.

WhatsApp Device Linking Warnings

Scammers often attempt to trick users into linking their WhatsApp accounts to a malicious device, often through fake competitions or QR code scams. WhatsApp now alerts users when a device linking request appears suspicious, providing an opportunity to pause and reconsider before granting access.

Advanced Scam Detection on Messenger

Advanced scam detection is expanding on Messenger, warning users about chats with new contacts exhibiting patterns associated with common scams, such as suspicious job offers. Users are prompted to share chat messages for AI review, potentially identifying and blocking fraudulent activity.

Strengthening Advertiser Verification for a Safer Ecosystem

Advertisements are a common vector for scams. Meta is expanding its advertiser verification program, aiming to have verified advertisers drive 90% of ad revenue by the end of 2026, up from 70% currently. This process promotes transparency and limits attempts to misrepresent advertiser identity.

Collaborative Enforcement: Taking Action Against Scam Networks

Fighting scams requires a collaborative approach. Meta is working with law enforcement and industry peers worldwide to disrupt sophisticated scam operations. Last year alone, the company removed over 159 million scam ads, with 92% taken down before being reported.

Recent Enforcement Actions

  • Joint Disruption Week with Global Law Enforcement: Collaboration with the FBI, DOJ, Royal Thai Police, and other agencies led to the disabling of over 150,000 accounts linked to scam networks and 21 arrests in Thailand.
  • Romance Scam Disruption: Over 15,000 assets on Facebook and Instagram were removed for using deceptive personas in romance scams.
  • Nigeria Scam Center Disruption: A partnership with the Nigeria Police Force and UK National Crime Agency resulted in the arrest of seven suspects involved in a scam center targeting UK and US citizens.

Raising Global Awareness Through Education

Technology alone isn’t enough. Raising awareness about online safety is crucial, particularly for vulnerable populations. Meta is supporting initiatives like the #TrappedinScamCrime campaign (in partnership with UNODC, IJM, and the US Department of State) and the Scam Se Bacho campaign (with Indian Cyber Crime Coordination Centre and SEBI) to educate users about recognizing and avoiding scams.

Looking Ahead: The Future of Scam Prevention

The fight against online scams is ongoing. Expect to see continued investment in AI-powered detection, more proactive user alerts, and stronger collaboration between platforms, law enforcement, and international organizations. The focus will likely shift towards preemptively identifying and disrupting scam networks before they can inflict harm, rather than simply reacting to reported incidents.

Did you know?

Scammers are increasingly using AI to clone voices and create incredibly realistic deepfakes, making it harder than ever to distinguish between genuine communication and fraudulent attempts.

Pro Tip:

Be wary of unsolicited messages or friend requests, especially from people you don’t know. Always verify the identity of the sender before sharing personal information or clicking on any links.

Frequently Asked Questions (FAQ)

  • What is AI doing to help fight scams? AI is analyzing patterns in text, images, and user behavior to identify and remove scam accounts and content more effectively.
  • How can I protect myself from scams on Facebook? Look for alerts about suspicious friend requests and be cautious about clicking on links or sharing personal information with unfamiliar accounts.
  • What should I do if I believe I’ve been targeted by a scam? Report the incident to the platform and to your local law enforcement agency.
  • Are scams becoming more sophisticated? Yes, scammers are constantly evolving their tactics, using techniques like AI-powered voice cloning and deepfakes to deceive users.

Stay informed, stay vigilant, and help protect yourself and others from falling victim to online scams. Explore the Meta Adversarial Threat Report for more insights into their ongoing efforts.

Tech

Meta Fights Back: New Protections Target Global Scam Surge & ‘Pig Butchering’

by Chief Editor March 11, 2026

The Escalating War Against Online Scams: Meta’s Fight and What Lies Ahead

The digital landscape is increasingly plagued by sophisticated, large-scale scams, often originating from Southeast Asia. Recent collaborative efforts between global law enforcement, including the FBI and Thai police, and Meta have resulted in significant disruptions – 21 arrests and the disabling of 150,000 accounts – but experts warn this is just the beginning. The fight against these “pig butchering” and other investment scams is evolving, demanding constant innovation and cooperation.

The Rise of Industrialized Scamming

What was once a fragmented issue has morphed into an industrialized operation. Scammers are leveraging social media and communication platforms like Facebook, Instagram, and WhatsApp to target victims worldwide. These aren’t isolated incidents; they are coordinated efforts run by transnational syndicates, exploiting digital platforms to operate across multiple jurisdictions. The scale is staggering, with billions of dollars lost annually.

These scams often involve building trust with victims over extended periods, a tactic known as “pig butchering,” before ultimately defrauding them of significant sums. The professionalization of these operations is a key concern, with scammers increasingly using sophisticated techniques to evade detection.

Meta’s Response: A Multi-Pronged Approach

Meta is responding with a multi-pronged strategy. In 2025 alone, the company removed 10.9 million Facebook and Instagram accounts linked to criminal scam centers and over 159 million scam ads. They are also expanding scam detection features within Messenger, introducing warnings for new WhatsApp device links, and testing alerts for suspicious friend requests on Facebook.

Beyond reactive measures, Meta is focusing on preventative steps. They aim to have 90% of ad revenue come from verified advertisers by the end of 2026, a substantial increase from the current 70%. This verification process is intended to reduce the influx of fraudulent advertisements. AI-powered detection systems are being deployed to identify and flag impersonation attempts and deceptive links.

Did you know? Internal Meta estimates, reported by Reuters, suggest that up to 10% of its revenue could potentially originate from scam advertising, highlighting the financial incentive for scammers to exploit the platform.

The Challenges Ahead: A Shifting Battlefield

Despite these efforts, the battle is far from won. The scamming ecosystem is constantly adapting. Scammers are becoming more adept at circumventing detection systems and exploiting new vulnerabilities. The problem is too large for any single entity to solve, requiring sustained collaboration between tech companies, law enforcement agencies, and governments worldwide.

One emerging trend is the increasing use of AI by scammers themselves. AI can be used to generate more convincing fake profiles, craft personalized scam messages, and automate various aspects of the scamming process. This creates a dangerous arms race, where detection and prevention technologies must constantly evolve to stay ahead.

Beyond Tech: Addressing the Root Causes

While technological solutions are crucial, addressing the root causes of these scams is equally important. Many scammers are victims of human trafficking and forced labor, operating under duress in scam compounds. Recent law enforcement operations in countries like Thailand, Cambodia, and Nigeria have focused on dismantling these compounds and rescuing victims.

Pro Tip: Be wary of unsolicited messages or friend requests from individuals you don’t know, especially on social media. Verify the identity of anyone you interact with online before sharing personal information or sending money.

The Future of Scam Prevention

The future of scam prevention will likely involve a combination of advanced technologies, stronger international cooperation, and increased public awareness. Expect to see:

  • Enhanced AI-powered detection: More sophisticated AI algorithms capable of identifying subtle patterns and anomalies indicative of scam activity.
  • Decentralized verification systems: Blockchain-based solutions for verifying identities and credentials, reducing the risk of impersonation.
  • Cross-platform collaboration: Increased information sharing and coordinated action between different social media platforms and communication providers.
  • Greater regulatory oversight: Governments implementing stricter regulations to hold platforms accountable for the scams that occur on their services.

FAQ: Online Scams and Your Safety

  • What is “pig butchering”? It’s a type of investment scam where fraudsters build a relationship with victims over time before convincing them to invest in fake opportunities.
  • How can I protect myself from online scams? Be cautious of unsolicited messages, verify identities, and never share personal financial information with strangers online.
  • What should I do if I think I’ve been scammed? Report the incident to your local law enforcement agency and the platform where the scam occurred.

The fight against online scams is a continuous process. Staying informed, exercising caution, and supporting collaborative efforts are essential to protecting yourself and others from these increasingly sophisticated threats.

What are your thoughts on the evolving threat of online scams? Share your experiences and insights in the comments below!

News

3 Singaporeans arrested in connection with Cambodia-based scam syndicate

by Rachel Morgan News Editor March 3, 2026

Singapore police have arrested three individuals in connection with investigations into the Prince Holding Group and its founder, Chen Zhi, in recent months. The arrests occurred between November 2025 and January 2026, following initial investigations that began in 2024.

Investigations and Initial Seizures

In October 2025, police conducted island-wide operations targeting Chen Zhi and his associates. At that time, assets totaling more than S$150 million (US$117 million) were seized or placed under prohibition of disposal orders. These assets included a yacht, 11 cars, and bottles of liquor. However, no arrests were made as Chen Zhi and his associates were not present in Singapore.

Did you know? Chen Zhi was reportedly arrested in Cambodia in January and then extradited to China at the request of Chinese authorities.

Chen Zhi’s arrest in Cambodia and subsequent extradition to China occurred after the initial police operations in Singapore.

Recent Arrests

Tan Yew Kiat, 49, director of car leasing firm SRS Auto, was arrested on November 20, 2025. Police also issued prohibition of disposal orders against vehicles registered under SRS Auto. Nigel Tang Wan Bao Nabil, 32, reportedly the captain of a yacht owned by Chen Zhi, was arrested on December 11, 2025, upon his return to Singapore from Cambodia. The third suspect, Yeo Sin Huat Alan, 53, was arrested on January 12, 2026, also upon his return to Singapore from Cambodia.

Expert Insight: The timing of these arrests, coinciding with the suspects’ return to Singapore from Cambodia, suggests coordinated efforts between international law enforcement agencies and a focused strategy to apprehend individuals linked to this complex financial network.

Frequently Asked Questions

When did investigations into Prince Holding Group begin?

Investigations into Prince Holding Group’s founder and chairman Chen Zhi, his associates and related companies began in 2024.

What types of assets were seized in October 2025?

Assets seized or placed under prohibition of disposal orders included a yacht, 11 cars, and bottles of liquor.

How many Singaporeans have been arrested in connection with this case?

Three Singaporeans have been arrested between November 2025 and January 2026.

It remains to be seen what further legal proceedings will unfold as investigations continue, and whether additional individuals may be implicated in this ongoing case.

Tech

Minnesota Lawmakers and Police Seek Complete Ban on Crypto ATMs

by Chief Editor February 28, 2026

Minnesota Moves to Ban Crypto ATMs Amidst Surge in Scams

Minnesota lawmakers are poised to ban cryptocurrency ATMs statewide, a move fueled by a dramatic rise in scams targeting vulnerable residents. House File 3642, sponsored by Rep. Erin Koegel, recently reached the House Commerce Finance and Policy Committee, signaling a potential end to the operation of virtual currency kiosks within the state.

The Rising Tide of Crypto ATM Scams

The proposed ban comes as reports of cryptocurrency-related fraud continue to climb. The Minnesota Department of Commerce recorded 70 complaints in the past year, totaling $540,000 in losses. However, officials acknowledge that these figures likely represent only a fraction of the actual problem, as many victims are reluctant to report incidents.

Law enforcement officials have highlighted the devastating impact of these scams. Woodbury Police Det. Lynn Lawrence shared the story of a woman on a fixed income who lost half her monthly earnings over six months through repeated bitcoin ATM transactions, fearing she would become homeless. These machines are favored by scammers because they allow for quick, cash-based transactions, making it difficult to trace funds.

Failed Protections and the “Pig Butchering” Phenomenon

Previous attempts to curb fraud through consumer protections – such as warnings about the irreversible nature of crypto transactions, daily transaction limits, and refund procedures – have proven ineffective. Scammers routinely circumvent these measures by coaching victims to use existing accounts or machines in neighboring states like Wisconsin.

A particularly insidious tactic gaining traction is the “pig butchering” scam, often orchestrated by Asian criminal syndicates. These scams involve building relationships with victims online before enticing them to invest in fake crypto trading platforms. These operations, sometimes involving forced labor, rely on crypto ATMs to facilitate the transfer of funds from cash to cryptocurrency.

National Trend: States Crack Down on Crypto Kiosks

Minnesota is not alone in its efforts to regulate or ban crypto ATMs. Maine recently reached a nearly $2 million settlement with Bitcoin Depot, requiring the removal of all its kiosks from the state. Kansas regulators are investigating banks linked to crypto ATMs after a couple lost $20,000 to a scam. West Virginia’s House Finance Committee advanced legislation to license operators and set transaction limits, following reports of $7.6 million in losses. The FBI reported nearly 11,000 complaints in 2024, totaling $247 million, climbing to $333 million in 2025.

Federal Scrutiny and the CLARITY Act

At the federal level, the Digital Asset Market Clarity Act (CLARITY Act) similarly targets crypto ATMs. The legislation, which passed the House last year, would treat kiosk operators as money transmitters subject to Bank Secrecy Act obligations. However, Senate committees have postponed markups as negotiations continue, particularly regarding stablecoin regulations.

Privacy Concerns and Decentralized Alternatives

While the push for regulation is driven by consumer protection, some privacy advocates argue that restrictions on crypto ATMs represent a broader clampdown on financial privacy. They contend that these kiosks offer one of the few remaining avenues for trading between dollars and crypto without extensive surveillance. However, truly decentralized peer-to-peer trading remains an option for those prioritizing privacy, though it requires a higher level of technical expertise.

Industry Response and the Future of Crypto Kiosks

Larry Lipka of CoinFlip, a major crypto ATM operator, acknowledges the problem of scams but opposes an outright ban, arguing that it’s inappropriate to penalize a legal product due to fraudulent activity. Roughly 350 licensed crypto kiosks operate in Minnesota, run by eight to ten companies.

FAQ

What is House File 3642?

House File 3642 is a Minnesota bill that would prohibit the operation of virtual currency kiosks (crypto ATMs) statewide.

Why are crypto ATMs being targeted?

Crypto ATMs are frequently used in scams, particularly those targeting elderly individuals, resulting in significant financial losses.

What is “pig butchering”?

“Pig butchering” is a scam where criminals build relationships with victims online before convincing them to invest in fake crypto trading platforms.

Are other states taking action against crypto ATMs?

Yes, states like Maine, Kansas, and West Virginia are also implementing regulations or bans on crypto ATMs.

What is the CLARITY Act?

The CLARITY Act is federal legislation that would regulate crypto ATMs by treating operators as money transmitters.

Tech

Google’s AI Overviews Can Scam You. Here’s How to Stay Safe

by Chief Editor February 15, 2026

The Rise of AI-Powered Scams: How Google Overviews Are Becoming a Modern Hunting Ground for Fraudsters

Google’s push to deliver instant answers with AI Overviews is changing the search landscape. But this convenience comes with a hidden cost: a growing vulnerability to scams. Increasingly, malicious actors are exploiting this new feature to inject fraudulent information, particularly misleading contact numbers, directly into search results.

How AI Overviews Are Being Exploited

Traditionally, scammers relied on manipulating search rankings to place fake websites higher in results. Now, they’re finding a more direct route: contaminating the data sources that feed Google’s AI Overviews. Reports are surfacing on platforms like Facebook and Reddit, and highlighted by publications like The Washington Post and Digital Trends, detailing instances where AI Overviews provide incorrect phone numbers for legitimate businesses.

The scam unfolds when a user searches for a company’s contact information. Instead of connecting them to the official number, the AI Overview presents a fraudulent one. Callers are then directed to individuals posing as representatives of the company, attempting to extract payment details or other sensitive information.

“It’s a good idea not to trust AI for contact details.” – David Nield

Why AI Overviews Are Vulnerable

The core issue lies in how AI Overviews are constructed. The system doesn’t simply copy content; it gathers data from multiple sources and synthesizes a new explanation. While this aims for clarity, it also means the AI can inadvertently amplify misinformation if it’s present in the source material. The fraudulent numbers are reportedly being published on numerous low-profile websites, allowing the AI to pick them up during its data aggregation process.

This isn’t a new problem – misinformation has long plagued the internet. However, the design of AI Overviews, which presents information as definitive rather than encouraging independent verification, makes users more susceptible to these cons.

The Future of AI and Online Security

As AI becomes more integrated into search and information retrieval, the potential for abuse will likely increase. Expect several trends:

  • Sophisticated Scams: Scammers will likely refine their techniques, creating more convincing fake websites and content designed to be easily scraped by AI systems.
  • Expansion to Other Areas: The problem isn’t limited to phone numbers. AI Overviews could be exploited to spread false information about financial products, healthcare advice, or other sensitive topics.
  • Increased Reliance on Verification: Google and other search engines will need to invest heavily in more robust verification mechanisms to identify and filter out fraudulent data.
  • User Education: Consumers will need to become more critical of the information presented in AI Overviews and exercise caution before acting on it.

What Can You Do to Stay Safe?

Protecting yourself requires a healthy dose of skepticism. Always verify contact information through official channels – a company’s website, a trusted directory, or a previous statement. Avoid calling numbers provided solely by AI Overviews. Remember that AI is a tool, and like any tool, it can be misused.

FAQ

  • Are AI Overviews accurate? AI Overviews can contain mistakes and may not always provide accurate information.
  • Can I turn off AI Overviews? According to Forbes, you cannot completely turn off AI Overviews, but there are ways to limit their influence.
  • What should I do if I suspect a scam? Report the fraudulent number to the relevant authorities and to Google.

Pro Tip: Before making any important decision based on information from an AI Overview, always cross-reference it with multiple reliable sources.

What are your experiences with Google’s AI Overviews? Share your thoughts and concerns in the comments below!

February 15, 2026
Business

WA authorities reveal ‘red flags’ from romance scammers in Valentine’s Day warning

by Chief Editor February 14, 2026

Valentine’s Day Warning: Romance Scams Rise as AI Complicates Detection

Western Australian authorities are urging vigilance this Valentine’s Day as romance scams continue to evolve, with new tactics leveraging artificial intelligence to deceive unsuspecting individuals. Recent data reveals that while reported cases dipped slightly in 2025, the financial impact remains substantial, with scammers stealing $3.8 million from 63 West Australians.

The Evolving Tactics of Romance Scammers

Romance scammers typically operate by building emotional connections with victims over extended periods, often months or even years, before requesting money or pressuring them into risky financial transfers. These scams often begin on dating websites and apps, where perpetrators create fake profiles with stolen images and fabricated life stories. A common initial tactic is “love bombing” – an overwhelming display of affection designed to quickly establish a false sense of intimacy.

Commerce Minister Dr Tony Buti emphasized the patience of these criminals, noting that a relationship seemingly blossoming around Valentine’s Day may not reveal its true, fraudulent nature for months. “Romance scammers are patient operators, who can spend months building trust before asking for money,” he stated.

Red Flags to Watch For

Authorities have identified several key warning signs that a new online relationship may be a scam. These include:

  • Excuses for Not Meeting: Scammers frequently fabricate reasons why they cannot meet in person, often claiming to work in remote locations like oil rigs or in the military.
  • Secrecy: Requests to keep the relationship secret from friends and family are a major red flag.
  • Encrypted Communication: Urging a move to encrypted messaging platforms to avoid scrutiny.
  • Early Financial Requests: Any request for money, especially in the early stages of the relationship, should be treated with extreme caution.
  • Isolation Tactics: Attempts to isolate the victim from their support network.

The Threat of Deepfakes and AI

A growing concern is the use of artificial intelligence, particularly deepfake technology, to enhance the credibility of scams. Consumer Protection Commissioner Trish Blake revealed a recent case where a woman almost fell victim to a deepfake during a video call, initially believing she was speaking with the person from her dating app, only to discover a stranger hiding under a blanket. This highlights the increasing sophistication of scammers and the difficulty in verifying identities online.

Maggie thought she found love online, before a video call glitch showed a man in a cupboard covered with a blanket

Romance scams are on the rise in WA, with authorities warning people to be vigilant of scammers using artificial intelligence to disguise themselves in video calls.

Who is Most at Risk?

The National Anti-Scam Centre reports that certain demographics are disproportionately affected by romance scams. These include individuals over 35, people with disabilities, and those experiencing significant life changes such as divorce or widowhood. While men are more likely to report these scams, women tend to suffer greater financial losses – an average of $36,091 per scam compared to $17,089 for men nationally between January 2024 and May 2025.

Individuals aged 65 and over experienced the highest total losses, totaling $11.7 million nationally.

Underreporting and the “Shame Factor”

Authorities acknowledge that reported figures likely underestimate the true scale of the problem, as many victims are reluctant to come forward due to embarrassment or shame. This underreporting makes it difficult to accurately assess the impact of these scams and implement effective prevention strategies.

Protecting Yourself: Pro Tips

  • Verify Profile Photos: Use reverse image searches (like Google or TinEye) to check whether profile pictures are genuine and haven’t been stolen from elsewhere online.
  • Take Your Time: Don’t rush into a relationship. Spend time getting to know someone before sharing personal information or considering financial requests.
  • Trust Your Instincts: If something feels off, it probably is. Don’t ignore your gut feeling.
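Reverse image search services generally match photos by perceptual fingerprints rather than exact file bytes, which is why slightly edited or recompressed copies of a stolen profile photo can still be found. The sketch below is not how Google or TinEye actually work internally; it is a simplified difference-hash over small grayscale grids that shows the core idea: near-identical images yield hashes with a small Hamming distance, unrelated images a large one.

```python
def dhash(grid):
    """Difference hash: one bit per horizontal neighbour pair (left < right)."""
    bits = []
    for row in grid:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left < right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

# Two tiny "grayscale images" (rows of brightness 0-255); the second is the
# first with slight noise, as a re-uploaded profile photo might be.
original   = [[10, 40, 90, 200], [15, 35, 80, 210], [12, 50, 95, 190]]
reuploaded = [[12, 41, 88, 198], [14, 36, 82, 205], [11, 52, 93, 188]]
unrelated  = [[200, 150, 90, 10], [210, 140, 80, 12], [190, 160, 95, 15]]

print(hamming(dhash(original), dhash(reuploaded)))  # → 0 (a match)
print(hamming(dhash(original), dhash(unrelated)))   # → 9 (no match)
```

Real services hash millions of indexed images this way (at larger grid sizes) and return the pages where near-matching fingerprints appear.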

FAQ: Romance Scams

Q: What should I do if I suspect I’m being scammed?
A: Immediately cease all contact with the individual, report the scam to WA ScamNet, and contact your bank or financial institution.

Q: Is it possible to recover money lost to a romance scam?
A: Recovery is often difficult, but it’s worth reporting the scam to authorities and your bank. There is no guarantee of recovering funds.

Q: How can I protect my family and friends from romance scams?
A: Share this information with them and encourage them to be cautious when forming online relationships.

Q: What is “love bombing”?
A: Love bombing is a manipulative tactic where a scammer overwhelms a victim with affection and attention early in the relationship to quickly gain their trust.

If you or someone you know has been affected by a romance scam, resources are available. Report scams to WA ScamNet and seek support from victim assistance services.

News

Cousins to learn fate over ‘particularly unusual fraud’ they claimed was a bet to see if it would work

by Rachel Morgan News Editor February 12, 2026

A Dublin man and his cousin have pleaded guilty to charges related to a complex fraud involving a fictitious birth registration and a subsequent passport application. The case, heard in Dublin Circuit Criminal Court, centers around the attempted creation of a new identity for Simon O’Donnell (43).

Details of the Scheme

Simon O’Donnell pleaded guilty to providing false information in connection with a passport application submitted on August 26, 2019. His cousin, Winnie O’Donnell (59), pleaded guilty to providing false or misleading information to the Civil Registration Service between May and November 2011. The pair attempted to register the birth of a ‘David O’Donnell’, claiming he was born in 1980 – 29 years prior to the registration attempt.

Winnie O’Donnell presented herself as the aunt of the fictitious ‘David O’Donnell’ and claimed to have witnessed his birth. The fabricated birth certificate was then used in support of Simon O’Donnell’s 2019 passport application. However, the application was flagged due to issues identified by biometric safeguards, leading to a Garda investigation.

Did You Know? The court heard the sole purpose of the application was to establish a new identity.

Motives and Legal Arguments

During questioning, Simon O’Donnell stated he submitted the application in an attempt to obtain social welfare benefits to pay off individuals who were threatening him due to a feud. Winnie O’Donnell admitted there was no person named ‘David O’Donnell’, but did not dispute that a signature on the registration form appeared to be hers, stating it stemmed from a bet regarding the application’s success.

Judge Martin Nolan described the scheme as a “very strange thing to think about doing it, let alone bet on it,” and remarked on the audacity of attempting to register someone 29 years after their supposed birth.

Legal counsel for Simon O’Donnell noted his client’s lack of prior criminal history and his co-operation with the investigation. Counsel for Winnie O’Donnell highlighted that she had no previous convictions at the time of the offense and did not profit from the scheme. Winnie O’Donnell now has eight summary convictions for offences under the Theft and Fraud Act.

Expert Insight: The case highlights the vulnerabilities within identity registration systems and the lengths to which individuals may go when facing perceived threats or financial pressures. The fact that the fraud was detected by biometric safeguards underscores the importance of these security measures.

What Happens Next?

Both defendants were remanded on bail and the case was adjourned for finalization. It is possible the court will consider community service for Simon O’Donnell, given his co-operation and lack of prior deception convictions. A non-custodial sentence may also be considered for Winnie O’Donnell, given her lack of prior convictions at the time of the offense. The judge will ultimately determine the appropriate sentence based on the details presented and legal arguments made.

Frequently Asked Questions

What charges did Simon O’Donnell face?

Simon O’Donnell pleaded guilty to providing false information in connection with a passport application on August 26, 2019.

What was Winnie O’Donnell’s role in the scheme?

Winnie O’Donnell pleaded guilty to providing false or misleading information to a registrar of the Civil Registration Service regarding the birth of ‘David O’Donnell’.

Why did Simon O’Donnell attempt to obtain a new identity?

Simon O’Donnell told Gardaí he was attempting to obtain social welfare to pay off people who were threatening him as a result of a feud.

Given the unusual nature of this case, what factors do you think will weigh most heavily in the judge’s sentencing decision?

News

Police warn of phishing scams impersonating LTA, targeting Singaporean travellers to Malaysia

by Rachel Morgan News Editor February 10, 2026

Singaporean travellers to Malaysia are being warned about a new wave of phishing scams impersonating the Land Transport Authority (LTA), police announced on Tuesday, February 10.

Scam Targets Travellers

The scam involves SMS messages sent to individuals after their mobile phones connect to Malaysian telecommunications networks for roaming. These messages falsely claim outstanding vehicle tolls are owed.

Decommissioned Sender ID

The fraudulent texts are sent using a former official LTA sender ID, simply named “LTA.” However, the police have confirmed this sender ID was decommissioned in July 2024 and is no longer in use.

Did You Know? The scam directs victims to a phishing website designed to steal bank card details under the guise of toll payment.

Victims are prompted to click a link leading to a phishing website where they are asked to provide their bank card information. Police reports indicate that victims typically discover the fraud only after noticing unauthorized transactions on their cards.

Financial Impact

Since January 27, at least 10 cases of this scam have been reported, resulting in losses of at least S$24,000 (US$19,000).

Expert Insight: This scam highlights the evolving tactics employed by fraudsters, leveraging international travel and trusted institutions like the LTA to exploit vulnerabilities. The use of a decommissioned sender ID suggests a deliberate attempt to create a false sense of legitimacy.

Authorities advise the public to remain vigilant and cautious of unsolicited messages requesting financial information.
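One simple automated check that mirrors this advice is to compare a link’s hostname against the domains an institution actually uses, rather than trusting the sender name on an SMS. The sketch below uses made-up domains for illustration only (the real LTA domain is not taken from this article):

```python
from urllib.parse import urlparse

def is_suspicious(url, official_domains):
    """Flag a URL whose hostname is neither an official domain nor a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return not any(host == d or host.endswith("." + d) for d in official_domains)

# Hypothetical official domain list, for illustration only.
OFFICIAL = ["lta.gov.example"]

print(is_suspicious("https://www.lta.gov.example/pay-toll", OFFICIAL))   # → False
print(is_suspicious("https://lta-gov-toll-payment.example", OFFICIAL))   # → True
```

Note the second URL merely contains “lta” in its name; hostname-suffix matching catches exactly the lookalike domains that phishing texts rely on.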

Frequently Asked Questions

What is the nature of this scam?

The scam involves SMS messages impersonating the LTA, falsely claiming unpaid vehicle tolls for travellers roaming in Malaysia.

Is the “LTA” sender ID still active?

No, the police have confirmed that the sender ID “LTA” was decommissioned in July 2024 and is no longer in use.

What should victims do if they suspect they have been targeted?

According to the police, victims typically only realize they have been scammed when unauthorized transactions appear on their cards. Anyone who suspects they have been targeted should contact their bank immediately and report the incident to the police.

How can individuals best protect themselves from similar scams in the future?

As authorities advise, remain vigilant and cautious of unsolicited messages requesting financial information, and avoid clicking links in unexpected SMS messages claiming that payment is owed.
