• Business
  • Entertainment
  • Health
  • News
  • Sport
  • Tech
  • World
Newsy Today
news of today
Tag: ai regulation

News

Building trust in the algorithm: Indonesia’s emerging AI framework

by Rachel Morgan News Editor March 24, 2026

Indonesia is rapidly establishing itself as a key player in the regional digital economy and is increasingly focused on adopting artificial intelligence (AI). The Indonesian government outlined its long-term vision for AI with the Artificial Intelligence National Strategy for Indonesia 2020-2045: AI Towards Indonesia Vision 2045. A 2023 Kearney report projected that AI could contribute USD 366 billion to Indonesia’s GDP by 2030.

Despite this ambition, Indonesia’s AI governance framework is still in its early stages, reflecting the challenges of aligning legal and institutional responses with the country’s rapid technological development. This gap in regulation presents both opportunities and challenges to strengthen accountability, enhance legal certainty and build public trust in AI technologies.

Framework and Governance

Currently, Indonesia does not have specific laws or regulations addressing AI. Instead, the operation and use of AI are governed by existing laws, including those related to electronic systems under the Electronic Information and Transactions Law, amended by Law No.1 of 2026 on Criminal Adjustment (EIT Law) and Government Regulation No.71 of 2019 on the Provision of Electronic Systems and Transactions. Under this framework, AI can be considered an “electronic agent,” defined as a device within an electronic system operated by a person.

However, this definition may be inadequate for modern AI systems, which often operate autonomously and exhibit complex problem-solving capabilities. In the absence of detailed AI-specific regulations, the Ministry of Communication and Digital Affairs (MOCD) issued Circular Letter No.9 of 2023 on Artificial Intelligence Ethics (CL9), providing general guidelines on the ethical values and control of AI-based activities.

These ethical values include inclusivity, security, accessibility, transparency, credibility, and accountability. AI operators are expected to safeguard society, prevent discrimination, and consider risk and crisis management. Sector-specific regulations also apply, such as the Financial Service Authority (OJK)’s Indonesian Banking Artificial Intelligence Governance, which focuses on reliability, accountability, and human oversight.

Did You Know? The MOCD published the National AI Roadmap White Paper in August 2025, which includes establishing a National AI Co-ordination Task Force to harmonize laws and regulations.

The OJK has also introduced a Code of Conduct for Responsible and Trustworthy Artificial Intelligence in the Financial Technology Industry, emphasizing fairness, transparency, and explainability.

Emerging AI-Specific Policies and Development

While a comprehensive legal framework is still under development, the MOCD published the National AI Roadmap White Paper in August 2025. This roadmap covers the conceptual framework of AI, issues analysis, and government policy direction, including establishing a National AI Co-ordination Task Force. It also introduces an AI lifecycle with principles to minimize risks at each stage and outlines key principles of AI governance, including dignity, justice, and accountability.

Complementing the roadmap, the MOCD also published AI Ethical Guidelines to strengthen the ethical framework in CL9, providing a self-assessment questionnaire for businesses. The government is also preparing a presidential regulation on AI to address accountability and security concerns and align AI initiatives across ministries and agencies.

Key Legal Challenges

Despite recent developments, several legal and institutional issues remain. Indonesia currently lacks a unified legal definition of AI, leading to uncertainty about how it should be regulated. The regulatory landscape is fragmented, potentially leading to overlapping authorities and inconsistent standards. This also raises concerns about personal data protection, as AI development often involves collecting and processing large datasets.

Indonesian law does not currently recognize AI as a separate legal subject, leaving liability for AI-related harm to be determined under existing legal frameworks. To date, there are no court decisions or specific legal provisions clarifying liability arising from the use of AI.

Expert Insight: Indonesia’s main challenge isn’t a lack of technological capability, but rather a need for governance readiness. Addressing the legal gaps and establishing clear frameworks for accountability and security will be crucial for realizing the full potential of AI in the country.

Frequently Asked Questions

What is Indonesia’s long-term vision for AI?

The Indonesian government set out its long-term AI vision through the Artificial Intelligence National Strategy for Indonesia 2020-2045: AI Towards Indonesia Vision 2045.

What is the role of the Ministry of Communication and Digital Affairs (MOCD) in AI governance?

The MOCD issued Circular Letter No.9 of 2023 on Artificial Intelligence Ethics (CL9) and published the National AI Roadmap White Paper in August 2025, indicating a growing policy-driven approach to AI governance.

What are some of the key legal challenges facing AI governance in Indonesia?

Key legal challenges include the lack of a unified legal definition of AI, a fragmented regulatory approach, privacy risks, and unclear liability and accountability frameworks.

As Indonesia continues to integrate AI across sectors, will the country be able to effectively balance innovation with the need for robust legal and ethical safeguards?

Tech

AI Bill Sponsor Targeted by Big Tech in NY Congressional Race

by Chief Editor March 3, 2026

Tech Billionaires Wage War in New York: The Battle for AI Regulation

New York’s 12th congressional district has become ground zero in a high-stakes battle over the future of artificial intelligence. Assembly Member Alex Bores, a Democrat running for Congress, is facing a relentless barrage of attack ads funded by a super PAC backed by some of Silicon Valley’s biggest names. The core issue? Bores’ push for AI regulation.

From Palantir to Political Target

Bores’ story is complex. He spent nearly five years at Palantir, the controversial data analytics firm known for its work with government agencies, including U.S. Immigration and Customs Enforcement (ICE). He quit Palantir in 2019, citing moral objections to the company’s ICE contracts. Now, that past is being weaponized against him.

Ads accuse Bores of profiting from technology used in deportations, a claim he disputes, stating he never worked directly on the ICE contract. Meanwhile, a co-founder of his former employer, Palantir, is now helping to fund his opposition through the super PAC Leading the Future.

The $125 Million Offensive

Leading the Future has raised a staggering $125 million to support candidates who oppose strict AI regulation and to undermine those, like Bores, who advocate for it. The PAC’s backers include Palantir co-founder Joe Lonsdale, OpenAI President Greg Brockman, and venture capital firm Andreessen Horowitz. They’ve already committed at least $10 million to oppose Bores’ campaign.

“They’re targeting me to make an example of me,” Bores told TechCrunch. He believes his tech background – including his computer science degree and experience at Palantir – makes him a particularly potent threat to their agenda.

The RAISE Act and the Fight for State Control

Bores’ political troubles stem, in part, from sponsoring the RAISE Act in New York. This law requires large AI labs to have publicly available safety plans and report catastrophic safety incidents. Although considered a relatively mild regulation, it sparked outrage among industry leaders who fear a patchwork of state laws could stifle innovation.

The tech industry, backed by President Trump’s executive order, is pushing for federal-level AI regulation, believing it will preempt stricter state laws. Bores, however, argues that states should retain the right to regulate AI in the absence of a comprehensive federal framework.

A Broader Trend: Big Tech’s Political Spending

The battle over Bores’ campaign is not an isolated incident. Meta has invested $65 million in super PACs to elect state-level candidates friendly to the tech industry. AI companies and executives collectively donated at least $83 million in 2025 to federal campaigns and committees. This influx of money underscores the growing political influence of the tech sector.

Interestingly, Bores has also received support from a PAC backed by Anthropic, called Public First Action, which is spending $450,000 to counter the attacks. This highlights a division within the AI industry itself, with some companies advocating for greater transparency and oversight.

What Does This Mean for the Future of AI Regulation?

The New York congressional race is a microcosm of a larger struggle: how to balance innovation with responsible AI development. The massive spending by tech companies signals their determination to shape the regulatory landscape in their favor. The outcome of this election, and similar contests across the country, could have profound implications for the future of AI.

FAQ

Q: What is the RAISE Act?
A: It’s a New York law requiring large AI labs to have publicly available safety plans and report safety incidents.

Q: Who is funding the attacks against Alex Bores?
A: A super PAC called Leading the Future, backed by Palantir co-founder Joe Lonsdale, OpenAI President Greg Brockman, and other Silicon Valley investors.

Q: Why is the AI industry so opposed to state-level regulation?
A: They fear a patchwork of state laws will create uncertainty and hinder innovation.

Q: What is Palantir’s role in this conflict?
A: A Palantir co-founder is funding the super PAC attacking Bores, despite Bores having previously worked at and left the company due to concerns about its ICE contracts.

Did you know? The amount of money being spent on this congressional race far exceeds the typical spending for a New York State Assembly race.

Pro Tip: Stay informed about AI regulation efforts in your state and contact your representatives to voice your concerns.

Want to learn more about the evolving landscape of AI and its impact on society? Explore our other articles on technology and politics.

Tech

Trump signs executive order to centralize AI regulation, curbing state powers

by Chief Editor December 12, 2025

Why the Federal Government Wants to Steer the AI Ship

President Trump’s recent executive order signals a decisive shift: the United States is moving toward a single, nationwide framework for artificial intelligence. The order blocks states from enacting their own AI laws, creates an AI Litigation Task Force within the Justice Department, and emphasizes “unfettered innovation” as the engine of global competitiveness.

Key Drivers Behind the Federal‑First Approach

  • Competitive pressure from China. China’s centralized approval system allows rapid rollout of AI solutions, a model the administration fears could outpace U.S. companies if every state imposes its own rules.
  • Investment certainty. Venture capitalists like David Sacks argue that a uniform regulatory environment reduces “approval fatigue” and encourages billions of dollars in AI funding.
  • Legal consistency. A national “AI Litigation Task Force” aims to pre‑empt patchwork lawsuits that could stall product launches.

Did you know? In 2023, U.S. AI startups attracted $71 billion in venture capital, yet 48% reported concerns about differing state privacy and safety rules.

Future Trends Shaping AI Regulation in America

1. A Nationwide “AI Safety” Playbook

Even as the order pushes back on “onerous” state statutes, it explicitly protects “kid‑safety” measures. Expect a federal “AI Safety Blueprint” that mirrors the European Union’s AI Act but focuses on child‑focused safeguards, data minimization, and transparency.

2. Consolidated Enforcement Through the Litigation Task Force

The newly created task force will likely become the go‑to body for challenges against state‑level AI rules. Its first cases may involve:

  1. State bans on facial‑recognition deployment in public spaces.
  2. Mandates requiring “explainable AI” disclosures for consumer credit decisions.

Legal scholars predict that within five years, the task force will have set precedent‑defining rulings that shape AI compliance strategies across the country.

3. Federal Funding and “AI hubs” Powered by Uniform Rules

With regulatory uncertainty reduced, the Department of Commerce is expected to launch an AI Innovation Hub Initiative. These hubs will concentrate R&D in data‑rich regions, offering tax incentives and grant programs that require adherence to the national framework.

4. Rise of “State‑Fed Bridge” Legislation

Republicans like Rep. Marjorie Taylor Greene champion state rights, while Democrats such as Gov. Jared Polis stress federal coordination. The compromise could emerge as “bridge” bills that allow states to experiment in narrow domains (e.g., autonomous vehicle testing) while deferring broader AI policy to the federal level.

Pro tip: If you run an AI startup, start building compliance into your product roadmap now. A modular approach—where core functions meet federal standards and state‑specific layers can be toggled on or off—will future‑proof your operations.
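The "modular" approach described above can be sketched in code. This is a hypothetical illustration, not any actual compliance framework: a base configuration representing an assumed federal standard, plus state-specific rule layers that can be toggled on or off per deployment. All rule and field names are invented for the example.

```python
# Hypothetical modular compliance configuration: core functions meet an
# assumed federal baseline, while state-specific layers can be enabled
# per deployment. Rule names are illustrative, not real requirements.

FEDERAL_BASELINE = {
    "model_safety_plan": True,    # assumed federal disclosure requirement
    "incident_reporting": True,   # assumed federal reporting requirement
    "minor_protections": True,    # "kid-safety" measures preserved by the order
}

STATE_LAYERS = {
    # e.g. a retained state restriction on law-enforcement use
    "CO": {"facial_recognition_law_enforcement": False},
    # e.g. a RAISE-Act-style public safety-plan rule
    "NY": {"public_safety_plan_required": True},
}

def effective_policy(state=None):
    """Merge the federal baseline with an optional state-specific layer."""
    policy = dict(FEDERAL_BASELINE)
    if state in STATE_LAYERS:
        policy.update(STATE_LAYERS[state])
    return policy

print(effective_policy("NY"))
```

Deploying in a state with no extra layer simply yields the baseline, so the core product never has to be redesigned when a state rule is added or struck down; only the relevant layer changes.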

Real‑World Case Studies

Case Study: Facial‑Recognition in Colorado

Colorado passed a strict ban on government use of facial-recognition in 2022. After the federal order took effect, the state’s ban was challenged by the AI Litigation Task Force. The resulting settlement required Colorado to adopt the federal “AI Transparency Standard” while keeping its ban on law-enforcement use—illustrating how federal pre-emption can coexist with targeted state safeguards.

Case Study: AI‑Driven Loan Underwriting in Texas

A Texas credit union deployed an AI underwriting model that reduced loan processing time by 30%. The model complied with the national “Explainable AI” guideline, allowing it to sidestep a proposed state law that would have forced a costly redesign. This advantage helped the credit union capture a 12% market-share increase within a year.

FAQs

Will states ever be able to pass AI laws again?
Under the current executive order, any new state AI regulation must be consistent with the federal framework; otherwise, it may be challenged by the AI Litigation Task Force.
How does this order affect existing AI regulations?
Existing state statutes that align with federal standards can remain, but those deemed “onerous” or conflicting will face legal challenges.
What does “kid‑safety” protection mean?
The order explicitly preserves state and federal measures that protect minors, such as age‑verification requirements for AI‑generated content.
Is there federal funding tied to compliance?
Yes. The Commerce Department’s AI Innovation Hub Initiative offers grants to companies that meet the national standards.
Will this order impact AI research in universities?
University labs can continue state‑level collaborations, but funding for federally‑supported research will require adherence to the nationwide AI framework.

What’s Next for AI Policy Makers?

Policymakers will watch how the AI Litigation Task Force’s first rulings set the tone for the next decade. Expect a surge in:

  • Industry coalitions pushing for “sandbox” environments to test innovative AI under federal oversight.
  • State legislatures drafting narrowly tailored bills that complement, rather than conflict with, the federal playbook.
  • International observers comparing the U.S. approach to the EU’s AI Act and China’s top‑down model.

Join the Conversation

How do you think a unified federal AI strategy will shape the next wave of innovation? Share your thoughts, sign up for our newsletter, and stay updated on the evolving AI regulatory landscape.

Health

Regulatory Policy and Practice on AI’s Frontier

by Chief Editor July 7, 2025

Adaptive Regulation: Unlocking AI’s Untapped Potential

The rapid evolution of artificial intelligence presents both incredible opportunities and significant challenges. Fully realizing AI’s benefits while mitigating its risks requires a proactive approach. A key element of this approach is adaptive, expert-led regulation.

The Historical Role of Technology

Throughout history, groundbreaking technological advancements have fueled economic growth, expanded opportunities, and enhanced living standards. Innovations, from the printing press to the internet, have reshaped societies. Now, AI stands poised to become the next major catalyst for transformation.

Technology’s power lies in its ability to amplify existing knowledge and generate novel insights. This leads to new jobs, higher incomes, and improved well-being. However, the benefits aren’t always evenly distributed. Disruption and transition periods can present difficulties.

The AI Revolution and the Need for Forward-Thinking Policy

AI’s capabilities are remarkable, spanning healthcare, finance, manufacturing, and education. We’re witnessing major advancements in AI-based reasoning and complex task performance. These breakthroughs are bringing us closer to artificial general intelligence (AGI), with systems capable of human-level and even superhuman intelligence.

Adaptive policy-making is key to navigating the transformations AI brings. Consider the 2024 Nobel Prize in chemistry, which recognized the use of AI in understanding protein structures. This exemplifies AI’s potential while highlighting the need for oversight and guardrails.

Did you know? The global AI market is projected to reach over $1.8 trillion by 2030, according to Statista. This underlines the urgency and importance of regulatory frameworks.

How Government Can Shape the Future of AI

Government can foster the growth of AI by creating a regulatory environment that encourages its adoption. This includes promoting AI research and development.

Operationalizing policy is critical. Regulatory agencies play a significant role in adapting and administering these frameworks. Effective regulation is crucial, allowing for AI-driven innovation without stifling it.

Expert-Led Regulation: The Key to Success

Core regulatory objectives, such as consumer protection and safety, should remain unchanged. However, outdated requirements, designed before the advent of advanced AI, may no longer be fit for purpose. The focus should shift to how these goals are achieved within the context of rapidly evolving AI.

We are not suggesting governments should jettison vital interests. However, regulators need to ensure AI delivers on its promise while protecting individuals from harm. This includes fostering AI-human collaboration, where AI agents monitor other AI systems, with human oversight for nuanced matters. This requires modernizing regulation in design, application, and clarity to accommodate AI’s capabilities.

Modernizing regulation in this way is not straightforward: it requires merging technological and regulatory expertise. Agencies that do so will be better positioned to adapt their rules and meet the challenges of unexpected uses of AI. The technical elements of regulation should mirror the technical elements of AI to ensure a seamless relationship.

Regulatory agencies can enhance their processes and practices with AI. For example, AI agents could assist with permitting, licensing, and registration applications, potentially accelerating approvals.
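As a concrete illustration of AI-assisted permitting, consider a simple triage step: an automated completeness check fast-tracks routine applications while routing incomplete or flagged ones to a human reviewer, preserving the human oversight for nuanced matters that the article emphasizes. This is a minimal sketch with invented field names, not any agency’s actual system.

```python
# Illustrative triage sketch for licensing/registration applications:
# complete, routine applications go to an AI-assisted fast track; anything
# incomplete or risk-flagged stays with a human reviewer. Field names
# are hypothetical.

REQUIRED_FIELDS = {"applicant_name", "business_id", "activity_description"}

def triage(application):
    """Return 'auto_review' for complete routine applications,
    'human_review' otherwise."""
    missing = REQUIRED_FIELDS - application.keys()
    if missing:
        return "human_review"   # incomplete: a person follows up
    if application.get("flagged_risk"):
        return "human_review"   # nuanced matters keep human oversight
    return "auto_review"        # complete and routine: AI-assisted fast track

print(triage({"applicant_name": "Acme", "business_id": "123",
              "activity_description": "retail"}))
```

The design choice here mirrors the article’s point: automation accelerates the routine bulk of approvals, while the default for any ambiguity is escalation to a human, not an automated decision.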

Pro Tip: To stay ahead, regulatory agencies should increase in-house technological expertise. Consider establishing interdisciplinary teams of lawyers and AI engineers.

Building a Tech-Savvy Regulatory Workforce

Regulatory agencies need to boost their technological know-how by bringing in experts from the private sector, academia, and research institutions. This includes computer scientists, software engineers, and AI researchers.

For example, lawyers with regulatory knowledge can work with AI engineers to analyze and measure the interpretation of legal obligations by large language models. Regulators could use AI agents to speed up the review of licensing and registration.

The Future of AI Regulation: Key Trends to Watch

  • Proactive Policy-Making: Regulators must anticipate the future, not just react to the present.
  • Cross-Disciplinary Collaboration: The integration of tech experts and regulatory specialists is essential.
  • Agile Frameworks: Regulations must be flexible and adaptable to accommodate rapid technological shifts.
  • AI-Driven Oversight: Leverage AI to improve regulatory processes and enforcement mechanisms.
  • Global Cooperation: Coordinate international regulatory efforts to ensure a consistent and responsible approach.

FAQ: Demystifying AI Regulation

Q: What is adaptive regulation?
A: It’s a regulatory approach that adjusts to emerging technologies, like AI, ensuring innovation and minimizing risks.

Q: Why is technological expertise important for regulators?
A: It allows them to understand AI’s capabilities and potential impacts, enabling them to create effective and relevant regulations.

Q: How can AI be used in regulation?
A: AI can assist with tasks like application review, compliance monitoring, and risk assessment, making regulatory processes more efficient.

Q: What are the main goals of AI regulation?
A: To promote innovation, protect consumers, ensure fairness, and address potential societal risks.

Q: What are some of the challenges related to AI regulation?
A: Keeping up with the rapid pace of AI development, defining ethical guidelines, and ensuring fairness and transparency in AI systems.

Q: Where can I learn more about AI regulation?
A: Explore resources from organizations like the OECD and the World Economic Forum, as well as legal scholarship such as articles in the Vanderbilt Law Review.

Q: What are the main elements of a Pro-Innovation Policy Agenda?
A: Investment in research, development, clear guidelines, support for startups, and a regulatory framework that fosters adaptability.

Q: What is the impact of AI on the financial sector?
A: AI is rapidly transforming financial services, leading to automated trading, fraud detection, and personalized financial advice. However, it also introduces new risks, such as algorithmic bias and cybersecurity threats.

Q: Can you give me an example of using AI in healthcare?
A: AI is now used to analyze medical images for faster and more accurate diagnoses, to develop personalized treatment plans, and to accelerate drug discovery.

Q: What is Artificial General Intelligence (AGI)?
A: AGI refers to AI systems that possess human-level cognitive abilities and can perform any intellectual task that a human being can.

An administrative state that can respond effectively to AI’s capabilities will make a significant difference in converting AI’s potential into reality, continuing a history of technological breakthroughs that have greatly improved people’s lives for centuries.

Do you have any thoughts on how AI regulation should evolve? Share your comments below!
