Newsy Today

Tag: data centers
Tech

AI Data Centers: The Shift to High-Voltage DC Power & Efficiency

by Chief Editor March 24, 2026

The Data Center Revolution: Why DC Power is the Future of AI Infrastructure

The relentless demand for more processing power, driven by the explosion of artificial intelligence, is forcing a fundamental shift in data center design. While much attention focuses on the latest chip architectures from companies like NVIDIA, the infrastructure supporting those chips is undergoing a quiet revolution – a move from traditional alternating current (AC) to direct current (DC) power distribution.

The Inefficiency of AC in a High-Power World

For decades, data centers have relied on AC power, a system inherited from the broader electrical grid. However, this approach involves multiple conversions – AC to DC, DC to AC, and back to DC – to deliver the power that servers and GPUs actually require. Each conversion introduces energy loss and adds complexity. As AI racks begin to draw closer to 1 MW of power, the inefficiencies of AC become unsustainable. NVIDIA notes that a 1 MW rack could require as much as 200 kg of copper busbar, scaling to 200,000 kg for a 1 GW data center.
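
As a rough sanity check on those copper figures, linear scaling of the per-rack number reproduces the site-wide total (a real design would not scale perfectly linearly, so treat this purely as a check of the article's arithmetic):

```python
# Per-rack copper figure quoted above, scaled to a 1 GW facility.
COPPER_KG_PER_MW = 200.0   # ~200 kg of busbar copper per 1 MW rack
site_mw = 1000.0           # a 1 GW data center

site_copper_kg = COPPER_KG_PER_MW * site_mw
print(f"{site_copper_kg:,.0f} kg")  # 200,000 kg
```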

The Rise of 800 VDC: A Game Changer

The solution gaining traction is high-voltage DC power distribution, specifically 800 VDC. By converting grid power directly to 800 VDC at the data center perimeter, many of the intermediate conversion steps are eliminated. This translates to higher energy efficiency, reduced heat dissipation, improved system reliability, and a smaller physical footprint. Switching to 800 V DC allows 85 percent more power to be transmitted through the same conductor size, reducing resistive losses and copper requirements by 45 percent.
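
The conductor arithmetic behind these claims can be sketched in a few lines. The ~432 V baseline, current limit, and run resistance below are illustrative assumptions (not figures from NVIDIA or the article) chosen so the voltage ratio lands near the quoted 85 percent:

```python
# At a fixed conductor current limit I, deliverable power P = V * I
# scales linearly with voltage, while resistive loss I^2 * R stays
# constant; so the fractional loss falls as voltage rises.

def deliverable_power_w(voltage_v: float, current_a: float) -> float:
    """Power a conductor can carry at a given voltage and current limit."""
    return voltage_v * current_a

I_LIMIT_A = 1000.0   # assumed conductor current rating
R_RUN_OHM = 0.01     # assumed resistance of the busbar run

p_lo = deliverable_power_w(432.0, I_LIMIT_A)   # hypothetical AC baseline
p_hi = deliverable_power_w(800.0, I_LIMIT_A)   # 800 VDC distribution

loss_w = I_LIMIT_A**2 * R_RUN_OHM              # same in both cases
print(f"power gain: {p_hi / p_lo - 1:.0%}")    # ~85%
print(f"loss fraction at 432 V: {loss_w / p_lo:.2%}")
print(f"loss fraction at 800 V: {loss_w / p_hi:.2%}")
```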

Industry Leaders Embrace the DC Shift

Major players in the data center infrastructure space are already responding. Delta, Vertiv, and Eaton have all unveiled novel designs optimized for the AI era and 800 VDC power delivery. Vertiv’s 800 V DC ecosystem is designed to integrate with NVIDIA Vera Rubin platforms and will be commercially available in the second half of 2026. Eaton is developing medium-voltage solid-state transformers (SSTs) for DC power distribution, while Delta has released 800 V DC in-row power racks with embedded battery backup.

Early Adopters and Regional Trends

While the transition is underway, adoption isn’t uniform. Higher voltage DC data centers have already emerged in China. In the Americas, the Mt. Diablo Initiative – a collaboration between Meta, Microsoft, and the Open Compute Project – is experimenting with 400 V DC rack power distribution. SolarEdge is similarly developing a 99%-efficient SST paired with a native DC UPS and DC power distribution layer.

Challenges and the Path Forward

Despite the benefits, widespread adoption of DC power faces hurdles. Patrick Hughes, from the National Electrical Manufacturers Association, emphasizes the need for a complete, coordinated ecosystem – encompassing power electronics, protection, connectors, and safety components. Retooling manufacturing capacity, expanding supply chains, and establishing clear standards are crucial. Many companies are taking a cautious approach, offering adapted solutions while awaiting clearer standards and customer commitments.

Beyond 800 VDC: The Potential of Solid-State Transformers

Solid-state transformers (SSTs) are emerging as a key enabling technology for high-voltage DC data centers. These devices offer higher efficiency, smaller size, and improved reliability compared to traditional transformers. They are also essential for integrating renewable energy sources directly into the data center power infrastructure.

FAQ: DC Power in Data Centers

Q: What is the main benefit of switching to DC power in data centers?
A: Reduced energy loss and improved efficiency due to fewer power conversions.

Q: What voltage level is becoming the standard for high-voltage DC data centers?
A: 800 VDC is emerging as the leading standard.

Q: What is a solid-state transformer (SST)?
A: An SST is a more efficient and compact alternative to traditional transformers, crucial for high-voltage DC systems.

Q: Are all data centers switching to DC power immediately?
A: The transition is gradual, with early adoption in China and experimental projects underway in the Americas.

Q: What are the challenges to wider DC power adoption?
A: Establishing a complete ecosystem of components, retooling manufacturing, and developing clear standards.

Did you know? A 1 GW data center using traditional AC power could require 200,000 kg of copper. Switching to 800 VDC can significantly reduce this amount.

Pro Tip: When evaluating data center infrastructure, consider the long-term benefits of DC power, including reduced operating costs and improved sustainability.

Explore more articles on data center technology and AI infrastructure to stay ahead of the curve. Share your thoughts in the comments below – what challenges do you observe with the transition to DC power?

Business

Can Carbon Credits Clean Up Big Tech’s AI-Fueled Emissions Surge?

by Chief Editor March 22, 2026

The AI Boom’s Dirty Secret: Big Tech’s Reliance on Carbon Credits

The relentless expansion of artificial intelligence is creating an unexpected challenge for the tech industry: a surge in energy demand and carbon emissions. While companies like Amazon, Google, Meta, and Microsoft pledge commitment to net-zero goals, their growing reliance on carbon credits raises questions about the true cost of the AI revolution and whether these credits represent genuine environmental progress or simply a sophisticated form of greenwashing.

Data Centers: The Epicenter of the Energy Crisis

Data centers, the physical infrastructure powering AI and cloud computing, are incredibly energy-intensive. Global electricity consumption by these facilities has been increasing by approximately 12 percent annually since 2017, according to a report by the International Energy Agency (IEA). In fact, the IEA reports that power demand for data centers is growing four times faster than all other sectors combined.

This escalating energy demand is directly linked to rising carbon emissions. Many of the world’s power grids still rely heavily on fossil fuels, meaning increased electricity consumption translates to a larger carbon footprint. Several major tech companies have already experienced a rise in carbon emissions in recent years due to data center expansion, directly contradicting their net-zero pledges.

The Rise of Carbon Credits: A Quick Fix?

To address this growing problem, Big Tech is increasingly turning to carbon credits. These credits are designed to offset emissions by funding projects that reduce or remove carbon dioxide from the atmosphere, such as carbon capture and storage (CCS) technologies and reforestation initiatives. Each credit represents one metric tonne of CO2 reduced or removed.

Purchases of carbon credits have skyrocketed. Data from the carbon credit management platform Ceezer shows a dramatic increase: from 14,200 credits in 2022 to 11.92 million in 2023, and a further jump to 24.4 million in 2024 and 68.4 million in 2025 – a 181% increase year-over-year. Amazon, Google’s parent company Alphabet, Microsoft, and Meta are collectively expected to invest almost $700 billion in AI technology in 2026, fueling this demand.
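
A quick check of the year-over-year arithmetic against the quoted Ceezer volumes (in millions of credits):

```python
# Credit purchase volumes quoted above, in millions.
credits_m = {2023: 11.92, 2024: 24.4, 2025: 68.4}

yoy_2025 = credits_m[2025] / credits_m[2024] - 1
print(f"{yoy_2025:.0%}")  # ~180%, consistent with the article's 181%
```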

Microsoft appears to be leading the charge, reporting a 247 percent increase in credit purchasing between 2022 and 2023, followed by a 337 percent rise between 2023 and 2024, reaching 21.9 million credits.
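
Working backwards from those growth rates gives the implied earlier volumes; these back-computed figures are inferences from the article's percentages, not reported numbers:

```python
# The article gives the 2024 endpoint (21.9 million credits) and the
# two annual growth rates; reversing them implies the earlier volumes.
credits_2024 = 21.9                          # millions of credits
credits_2023 = credits_2024 / (1 + 3.37)     # 337% rise 2023 -> 2024
credits_2022 = credits_2023 / (1 + 2.47)     # 247% rise 2022 -> 2023

print(f"2023 ~{credits_2023:.1f}M, 2022 ~{credits_2022:.1f}M")
```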

Systemic Problems Plague Carbon Credit Schemes

Despite the surge in investment, the effectiveness of carbon credits is under intense scrutiny. A 2025 review paper analyzing 25 years of evidence revealed that the failure of carbon offsets to cut planet-heating pollution isn’t due to isolated incidents, but rather deep-seated systemic problems. The report suggests that gradual changes to the system won’t be enough to address these issues.

Even after widespread efforts to improve carbon credit systems, underlying problems persist, resulting in many programs being of poor quality. The rules established at the 2024 UN climate summit, while anticipated to bring improvements, “did not substantially address the quality problem,” according to the report.

Experts argue that achieving true net-zero requires companies to cut emissions at the source, rather than relying on offsets. The IEA consistently emphasizes this point, but its message appears to be falling on deaf ears.

The Future of Sustainable AI

The current trajectory raises serious concerns about the sustainability of the AI boom. Unless effective carbon credit programs can be demonstrably proven, Big Tech’s massive investment in achieving “net-zero” risks being perceived as little more than greenwashing.

The industry needs to prioritize genuine emissions reductions through operational changes, investments in renewable energy sources, and the development of more energy-efficient AI technologies. A shift towards durable carbon removal – projects that permanently remove CO2 from the atmosphere – is also crucial, but these solutions are currently expensive and limited in scale.

Frequently Asked Questions

What are carbon credits? Carbon credits represent one metric tonne of carbon dioxide reduced or removed from the atmosphere, allowing companies to offset their emissions by funding climate-positive projects.

Why are tech companies buying more carbon credits? The surge in AI development requires massive data centers, which consume huge amounts of energy and generate significant emissions. Carbon credits are being used to offset these emissions and meet net-zero pledges.

Are carbon credits an effective solution? Many researchers are skeptical, citing systemic problems with carbon credit schemes and questioning their ability to deliver genuine emissions reductions.

What is the alternative to carbon credits? Prioritizing direct emissions reductions through operational changes, renewable energy investments, and energy-efficient technologies is considered the most effective approach.

What is Microsoft doing to address its carbon footprint? Microsoft reported a significant increase in carbon credit purchases and aims to become carbon negative by 2030, focusing on both reducing emissions and removing unavoidable emissions.

Did you know? The data center industry currently contributes at least 0.5 percent of global greenhouse gas emissions, and the IEA expects this figure to rise to around 1.4 percent within five years.

Pro Tip: Look beyond headline net-zero pledges and investigate the specific strategies companies are employing to reduce their environmental impact. Focus on verifiable emissions reductions, not just carbon credit purchases.

What are your thoughts on Big Tech’s reliance on carbon credits? Share your opinions in the comments below!

Tech

Google Data Centers: 2.7GW Clean Energy Deal Powers Michigan Expansion

by Chief Editor March 17, 2026

Google’s Power Play: How Data Centers are Driving a New Era of Clean Energy

Google is reshaping how data centers are powered, moving beyond simply consuming electricity to actively procuring and enabling new clean energy resources. This isn’t a new commitment – Google vowed to use 100% carbon-free power seven years ago – but a shift in how that commitment is realized. Recent announcements in Michigan and Minnesota demonstrate a strategic approach where data center development is intrinsically linked to significant investments in renewable energy and grid stability.

The Michigan Model: 2.7 Gigawatts of New Capacity

In partnership with DTE Energy, Google plans to add 2.7 gigawatts (GW) of new resources to the Michigan grid to support a new data center in suburban Detroit. This deal, utilizing Google’s Clean Transition Tariff, mirrors a similar agreement with Xcel Energy in Minnesota. The Michigan plan includes 1.6 GW of solar power, 400 megawatts of four-hour energy storage, 50 megawatts of long-duration energy storage, and 300 megawatts of “additional clean resources.”

This approach differs from traditional power purchase agreements (PPAs), which utilities often treated as isolated events. The Clean Transition Tariff encourages long-range planning and integration of these technologies into the grid.

Demand Response and Grid Flexibility

Beyond generating new clean power, Google’s plan incorporates 350 megawatts of demand response. This involves curtailing electricity use during peak times, either by incentivizing large users or temporarily reducing power consumption at Google’s own data centers. This adds a layer of flexibility to the grid, helping to balance supply and demand.
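
Tallying the quoted resource mix shows how the pieces reach the headline figure:

```python
# Michigan resource mix quoted above, in MW, including the 350 MW of
# demand response alongside the generation and storage tranches.
mix_mw = {
    "solar": 1600,
    "four-hour storage": 400,
    "long-duration storage": 50,
    "additional clean resources": 300,
    "demand response": 350,
}

total_gw = sum(mix_mw.values()) / 1000
print(total_gw)  # 2.7
```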

The $10 Million Energy Impact Fund

Google is also launching a $10 million Energy Impact Fund in Michigan, focused on initiatives like home weatherization and energy workforce development. While the impact of this fund remains to be seen, it signals a broader commitment to energy affordability and community benefits alongside infrastructure development.

Beyond Google: A Trend Towards “Bring Your Own Power”

Google’s strategy isn’t an isolated case. Other tech companies are increasingly exploring similar models, recognizing the need for reliable, clean energy to power their growing data center footprints. This trend, dubbed “bring your own power,” is driven by several factors:

  • Sustainability Goals: Many companies have ambitious carbon reduction targets.
  • Energy Security: Direct investment in energy resources provides greater control and predictability.
  • Cost Management: Long-term contracts for renewable energy can offer price stability.

The Role of Energy Storage

The inclusion of both four-hour and long-duration energy storage in Google’s plans highlights the growing importance of storage technologies. Storage helps to smooth out the intermittency of renewable sources like solar and wind, ensuring a reliable power supply. Long-duration storage, in particular, is crucial for providing backup power during extended periods of low renewable generation.
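
Storage energy capacity is just power times discharge duration; the 10-hour figure for the long-duration tranche below is purely illustrative, since the article doesn't specify a duration:

```python
def energy_mwh(power_mw: float, hours: float) -> float:
    """Storage energy capacity: rated power times discharge duration."""
    return power_mw * hours

four_hour = energy_mwh(400.0, 4.0)   # the 400 MW / 4 h tranche
long_dur = energy_mwh(50.0, 10.0)    # assuming 10 h; duration not given

print(four_hour, long_dur)  # 1600.0 500.0
```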

Challenges and Future Outlook

While these developments are promising, challenges remain. The definition of “clean resources” can be ambiguous, and it’s unclear whether natural gas will be included. Scaling these models requires close collaboration between tech companies, utilities, and regulators.

However, the trend is clear: data centers are no longer just consumers of energy; they are becoming active participants in the energy transition. This shift has the potential to accelerate the deployment of clean energy technologies and create a more resilient and sustainable grid.

FAQ

Q: What is the Clean Transition Tariff?
A: It’s a tariff designed to allow Google to pay a premium to specify the types of power it wants deployed, encouraging utilities to incorporate such technologies into their long-range planning.

Q: What is demand response?
A: It’s a system where large electricity users reduce their consumption during peak times to help stabilize the grid.

Q: How much is Google investing in Michigan?
A: Google is investing $10 million in an Energy Impact Fund and is enabling 2.7 GW of new clean resources for the grid.

Q: Will this impact electricity prices for consumers?
A: The Energy Impact Fund aims to reduce utility bills, but the overall impact on prices remains to be seen.

Did you know? Google’s data center operations will be served by 2.7 gigawatts (GW) of new resources, including solar power, advanced storage technologies and demand flexibility.

Pro Tip: Keep an eye on developments in long-duration energy storage – this technology will be key to unlocking the full potential of renewable energy.

Want to learn more about the future of sustainable data centers? Explore our other articles or subscribe to our newsletter for the latest insights.

Business

It takes more than Nvidia’s chips to build the world’s data centers

by Chief Editor March 12, 2026

The AI Chip Race: Beyond Nvidia’s Dominance

Nvidia currently reigns supreme in the artificial intelligence chip market, with hyperscalers like Amazon, Google, and Meta investing heavily in its GPUs to power their data centers. Nvidia’s revenue surged from $26.9 billion in 2022 to $215.9 billion in 2025, a testament to the explosive demand for AI processing power. However, this dominance isn’t going unchallenged. A significant shift is underway as these tech giants aggressively pursue alternatives, aiming to reduce reliance on a single supplier.

The Rise of Custom ASICs

The key to breaking Nvidia’s hold lies in Application-Specific Integrated Circuits (ASICs). Unlike GPUs, which are versatile but general-purpose, ASICs are designed for specific tasks. This specialization allows for greater efficiency, particularly in “joules-per-token” – a critical metric as AI workloads transition towards inference. Google’s Tensor Processing Units (TPUs) are leading the charge in the ASIC space, with some experts believing they rival or even surpass Nvidia’s GPUs in certain applications.
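
The joules-per-token metric mentioned above is simply energy draw divided by token throughput; the power and throughput numbers in this sketch are made-up placeholders, not benchmarks of any real chip:

```python
def joules_per_token(power_w: float, tokens_per_s: float) -> float:
    """Energy per generated token: watts (J/s) over tokens per second."""
    return power_w / tokens_per_s

# Placeholder figures for illustration only: a 1 kW accelerator
# serving 2,500 tokens/s would spend 0.4 J on each token.
print(joules_per_token(1000.0, 2500.0))  # 0.4
```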

Amazon, Meta, Microsoft, and OpenAI are also developing their own custom AI chips. Amazon’s recently launched “UltraServers” powered by its Trainium 3 chips are a direct challenge to Nvidia and Google. This move towards in-house chip design isn’t about completely replacing GPUs, but about optimizing performance and cost for specific AI workloads.

Pro Tip: ASICs offer significant advantages in power efficiency and cost for dedicated AI tasks, but they lack the flexibility of GPUs. The optimal strategy involves a mix of both.

The Ecosystem Behind the Chips

It’s crucial to understand that Nvidia doesn’t simply deliver chips and walk away. Companies like Dell, Hewlett Packard Enterprise (HPE), and Foxconn play a vital role in building the server infrastructure that houses these processors. These partners are responsible for integrating Nvidia’s GPUs into complete systems, tailoring them to meet the unique needs of each customer.

HPE, for example, works closely with customers to plan data center infrastructure well in advance, considering power and cooling capacity. Dell has streamlined deployment, achieving the ability to bring a server rack online in as little as 24 hours, and even deploying 100,000 GPUs in just six weeks for a single customer.

Software is the Secret Sauce

Nvidia’s success isn’t solely based on its hardware. Its CUDA platform, a comprehensive software ecosystem, is a major draw for developers. CUDA provides the tools and documentation needed to unlock the full potential of Nvidia’s GPUs. Nvidia emphasizes that the majority of its employees are software engineers, highlighting the importance of software in its overall strategy.

This software advantage creates a network effect, attracting developers and further solidifying Nvidia’s position. Competitors must not only match the hardware performance but also build equally robust software ecosystems to truly challenge Nvidia’s dominance.

The Geopolitical Landscape and Future Threats

The AI chip industry is also becoming entangled in geopolitical tensions. Iran has threatened attacks on US tech companies, including Nvidia, Google, Amazon, and Microsoft, alleging their support for military operations. This highlights the strategic importance of these technologies and the potential for disruption beyond market competition.

What’s Next for AI Chips?

The trend towards custom ASICs is expected to accelerate. Analysts predict that the ASIC market will grow even faster than the GPU market in the coming years. We can anticipate further innovation in chip architecture, materials, and manufacturing processes. The focus will remain on improving efficiency, reducing costs, and tailoring solutions to specific AI applications.

FAQ

  • What is an ASIC? An Application-Specific Integrated Circuit is a chip designed for a particular purpose, offering greater efficiency than general-purpose GPUs for specific AI tasks.
  • Why are companies building their own AI chips? To reduce reliance on a single supplier (Nvidia), optimize performance for specific workloads, and potentially lower costs.
  • What role does software play in the AI chip market? Software ecosystems, like Nvidia’s CUDA, are crucial for attracting developers and unlocking the full potential of AI hardware.
  • Is Nvidia losing its dominance? While Nvidia remains the leader, the rise of custom ASICs and increased competition from companies like Google and Amazon are challenging its position.

Explore more about the latest advancements in AI and technology by subscribing to our newsletter. Sign up here!

Tech

Nscale: Nvidia-Backed AI Firm Hits $14.6B Valuation – IPO on the Horizon?

by Chief Editor March 9, 2026

Nscale’s $14.6 Billion Valuation Signals a New Era for European AI Infrastructure

London-based AI infrastructure company Nscale has surged to a $14.6 billion valuation following a $2 billion Series C funding round, solidifying its position as a European decacorn alongside Helsing and Mistral AI. This latest investment, featuring participation from Nvidia, underscores the growing demand for specialized data centers capable of handling the computational demands of artificial intelligence.

The Rise of Vertically Integrated AI Infrastructure

Nscale distinguishes itself through a strategy of vertical integration, controlling all aspects of the AI infrastructure stack – from energy and data centers to compute and orchestration software. This approach allows for greater efficiency and control, crucial in a rapidly evolving market. The Series C funding builds on previous successes, including a $1.1 billion Series B round in September and a $433 million pre-Series C SAFE round backed by industry giants like Dell and Nokia.

IPO Potential and Major Investor Confidence

The involvement of Goldman Sachs and JPMorgan in the Series C raise has fueled speculation about a potential Initial Public Offering (IPO) for Nscale. CEO Josh Payne indicated the company might go public “as early as this year” to secure additional capital. This confidence from leading investment banks signals strong belief in Nscale’s growth trajectory and market potential.

High-Profile Board Appointments Signal Strategic Direction

Adding to the momentum, Nscale has appointed former Meta COO Sheryl Sandberg, former Yahoo president Susan Decker, and former U.K. Deputy Prime Minister Nick Clegg to its board of directors. These appointments bring a wealth of experience in scaling technology companies, navigating complex regulatory landscapes, and shaping global policy – all critical for Nscale’s future success.

“Stargate Norway” and the Expansion of AI Compute

A key component of Nscale’s strategy is the “Stargate Norway” project, a joint venture with Aker ASA. This ambitious initiative aims to establish a massive AI infrastructure hub in Norway, capable of hosting 100,000 Nvidia GPUs by the end of 2026, with OpenAI as an initial customer. The full management of this project now rests with Nscale, streamlining execution and governance.

Strategic Partnerships with Microsoft and Dell

Nscale’s reach extends beyond Norway, with existing partnerships to deploy approximately 200,000 Nvidia GPUs across data centers in Europe and the U.S., in collaboration with Microsoft and Dell. These collaborations demonstrate Nscale’s ability to forge strategic alliances with key players in the technology ecosystem.

Debt Financing and Sustainable Infrastructure

In addition to equity funding, Nscale has secured a $1.4 billion delayed draw term loan backed by GPUs, further bolstering its financial resources. The company is committed to sustainable practices, including utilizing low-cost renewable energy and reusing waste heat, aligning with growing environmental concerns.

Pro Tip:

Vertical integration is becoming increasingly critical in the AI infrastructure space. Companies that can control the entire stack, from hardware to software, will be best positioned to innovate and compete.

FAQ

What is Nscale? Nscale is a British AI infrastructure company building data centers and cloud access to compute.

Who are the major investors in Nscale? Nvidia, Aker ASA, 8090 Industries, Blue Owl, Dell, Nokia, and others.

What is “Stargate Norway”? A joint venture between Nscale and Aker ASA to build a large-scale AI infrastructure hub in Norway.

Is Nscale planning an IPO? The company has indicated it might seek to go public as early as this year.

What makes Nscale different? Its focus on vertical integration, controlling all aspects of the AI infrastructure stack.

Did you know? Nscale’s latest valuation is more than double the valuation achieved with its previous Series B funding round.

Explore more about the future of AI and infrastructure on our blog. Read more articles.

Business

Marvell projects strong fiscal 2028 revenue on AI-driven data center boom, shares jump

by Chief Editor March 6, 2026

Marvell Rides the AI Wave: A Deep Dive into the Future of Data Center Infrastructure

Marvell Technology’s recent revenue forecast exceeding Wall Street estimates signals more than just a good quarter; it underscores a fundamental shift in the data center landscape. Driven by the explosive growth of artificial intelligence, demand for specialized chips and interconnect solutions is soaring, and Marvell is positioning itself as a key enabler of this revolution.

The AI Infrastructure Boom: Why Now?

The current surge in AI adoption is fueling unprecedented investment in infrastructure. Major tech players – Alphabet, Microsoft, Amazon, and Meta – are collectively projected to spend over $630 billion this year building out AI capabilities. This massive influx of capital is directly translating into increased demand for the chips and networking equipment that power these systems. Marvell’s custom application-specific integrated circuits (ASICs) and high-speed interconnect technologies are at the heart of this build-out.

Beyond Nvidia: The Rise of Custom Chip Design

Even as Nvidia currently dominates the AI processor market, hyperscalers are increasingly exploring custom chip designs tailored to their specific data center workloads. Companies like Marvell and Broadcom are capitalizing on this trend, offering design services and specialized components that provide alternatives to general-purpose processors. This move towards customization allows for greater efficiency and performance optimization.

Broadcom’s projection of over $100 billion in AI chip sales next year further validates the immense opportunity in this space. The competition is heating up, and Marvell is actively challenging the status quo.

Optical Interconnects: The Next Frontier

Marvell’s recent acquisitions – Celestial AI ($3.25 billion) and XConn Technologies – highlight a strategic focus on optical interconnects. These technologies utilize light instead of electrical signals to connect AI chips and memory, offering significantly faster data transfer speeds and reduced energy consumption. This is crucial for scaling AI clusters and overcoming the limitations of traditional electrical interconnects.

The company is also making strides in PCIe 8.0 SerDes technology, targeting future bandwidth-intensive AI workloads. These investments demonstrate a commitment to staying ahead of the curve and anticipating the evolving needs of the industry.

Data Center Re-Architecture: A Sustainable Future

Marvell’s innovations aren’t just about speed; they’re about fundamentally re-architecting data centers for the AI era. By addressing bottlenecks in data movement and memory access, the company is enabling more agile, powerful, and sustainable infrastructure. This is increasingly important as data centers face growing pressure to reduce their environmental impact.

Marvell’s Financial Momentum

Marvell’s financial performance reflects its strong position in the market. The company expects revenue to grow nearly 40% and approach $15 billion in fiscal 2028, significantly exceeding analyst expectations. A 22% revenue increase in the fourth quarter, reaching $2.22 billion, further demonstrates this momentum. The data center segment, its largest business, saw a 21% rise to $1.65 billion.
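
The guidance above also lets us back out the implied prior-year base; this is an inference from the quoted growth rate, not company guidance:

```python
# "~$15B in fiscal 2028 on nearly 40% growth" implies roughly the
# following fiscal 2027 revenue base (in billions of dollars).
implied_base_b = 15 / 1.40
print(f"~${implied_base_b:.1f}B")  # ~$10.7B
```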

Frequently Asked Questions

Q: What are optical interconnects and why are they important?
A: Optical interconnects use light to transmit data, offering faster speeds and lower energy consumption compared to traditional electrical connections, which is vital for AI workloads.

Q: What is CXL (Compute Express Link)?
A: CXL is an industry standard interconnect that enables coherent data sharing between CPUs, GPUs, and other accelerators, improving performance and efficiency in AI systems.

Q: How is Marvell different from Nvidia?
A: Nvidia primarily focuses on AI processors (GPUs), while Marvell specializes in the underlying infrastructure – the chips and interconnects that connect and support those processors.

Q: What is PCIe 8.0?
A: PCIe 8.0 is the latest generation of the Peripheral Component Interconnect Express standard, offering significantly increased bandwidth for data transfer within servers and data centers.

Did you know? Marvell completed its acquisition of Celestial AI, adding on-chip optical expertise that directly targets high-performance AI clusters.

Pro Tip: Keep an eye on customer adoption and design wins for Marvell’s new technologies, as these are key indicators of future revenue growth.

Explore more about the evolving landscape of AI infrastructure and the companies shaping the future of computing. Read our latest analysis on data center trends.

Business

CPUs are back en vogue in the data center

by Chief Editor March 5, 2026

The CPU Renaissance: How the Workhorse of Computing is Staging an AI Comeback

For years, the narrative in the data center world was clear: GPUs (Graphics Processing Units) were the future of AI, while CPUs (Central Processing Units) were relegated to supporting roles. Billions were invested in securing high-end GPUs to train and run increasingly complex AI models. But a shift is underway. Recent deals and statements from industry giants suggest the CPU isn’t ready to cede the AI arena just yet.

Meta’s Bets on Both Sides

The tide began to turn with recent announcements from Meta. The social media giant expanded its deal with Nvidia for GPU deployment, but simultaneously revealed its largest-ever deployment of Nvidia’s Grace CPU-only servers. Meta also struck a deal with AMD, incorporating servers running the company’s Venice and next-generation Verano CPUs. This dual investment signals a recognition that AI workloads aren’t solely the domain of GPUs.

Intel Sees AI as a CPU Driver

Intel CEO Lip-Bu Tan highlighted AI as a “major driver for CPU demand” during the company’s January earnings call. This represents a significant statement, considering Intel’s recent struggles and turnaround efforts. The implication is that the proliferation of AI, in its various forms, is creating new opportunities for CPU utilization.

Why the CPU is Back in the AI Conversation

The resurgence of the CPU in AI isn’t about replacing GPUs; it’s about recognizing the evolving nature of AI workloads. While GPUs excel at the intensive parallel processing required for training large AI models, CPUs are proving crucial for other aspects of the AI ecosystem.

The Rise of AI Inference and Agentic AI

As companies move beyond simply training models to deploying them for real-world applications – a process known as inference – CPUs are becoming increasingly important. Smaller language models and domain-specific models often run more efficiently on CPUs. The emergence of “agentic AI” – semi- and fully autonomous bots capable of performing tasks on your behalf – is driving increased CPU usage. These agents need to interact with existing systems, navigate files, and process data, tasks where CPUs traditionally shine.

CPUs: The Glue Holding AI Together

CPUs aren’t just about running AI models directly. They play a vital role in the broader AI infrastructure. They are essential for data mining, personalization, and the analysis that provides context to AI models. As Nvidia VP of hyperscale and high-performance computing, Ian Buck, explained, much of the “data management and wrangling” happens on CPUs, across entire fleets of servers.

The Economic Impact: A Growing Market

Analysts predict a significant boost to the CPU market as AI adoption expands. BofA Global Research estimates the total addressable market for CPUs could climb from $27 billion in 2025 to as much as $60 billion by 2030, with AI servers accounting for approximately 70% of that growth.
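Taken at face value, those endpoints imply a brisk compound annual growth rate. The short sketch below is our own illustration; only the $27 billion and $60 billion figures come from the BofA Global Research estimate cited above.

```python
# Implied compound annual growth rate (CAGR) of the CPU market,
# using the BofA Global Research endpoints cited above.
tam_2025 = 27e9   # total addressable market in 2025, USD
tam_2030 = 60e9   # projected TAM in 2030, USD
years = 2030 - 2025

# CAGR = (end / start)^(1 / years) - 1
cagr = (tam_2030 / tam_2025) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 17.3% per year
```

In other words, the forecast assumes the CPU market compounds at around 17% annually for five years, a rate closer to accelerator markets than to the historically flat server-CPU business.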

A Symbiotic Relationship: GPUs and CPUs Working Together

It’s important to note that this isn’t a competition between GPUs and CPUs. AMD’s Dan McNamara emphasizes that the growth of CPUs doesn’t mean GPUs are slowing down. Instead, the increasing complexity and diversity of AI workloads are driving demand for both types of processors. GPUs need CPUs to function, handling data transfer and other essential tasks.

Looking Ahead: Future Trends

The interplay between CPUs and GPUs in AI is likely to become even more nuanced. We can expect to see:

  • Specialized CPU Architectures: Chipmakers will continue to develop CPUs optimized for specific AI tasks, incorporating AI accelerators and other features to enhance performance.
  • Heterogeneous Computing: Systems will increasingly combine CPUs, GPUs, and other specialized processors to create highly efficient and adaptable AI infrastructure.
  • Edge AI: As AI moves closer to the data source (edge computing), CPUs will play a critical role in processing data locally, reducing latency and bandwidth requirements.

FAQ

Q: Will CPUs replace GPUs in AI?
A: No. CPUs and GPUs have different strengths and will continue to complement each other in the AI ecosystem.

Q: What is AI inference?
A: AI inference is the process of using a trained AI model to make predictions or decisions on new data.

Q: What is agentic AI?
A: Agentic AI refers to AI systems that can autonomously perform tasks on behalf of users.

Q: What role do CPUs play in data management for AI?
A: CPUs are crucial for mining, processing, and analyzing the vast amounts of data required to train and run AI models.

Did you know? The demand for CPUs in the AI ecosystem is projected to more than double by 2030, reaching a $60 billion market.

Pro Tip: When evaluating AI infrastructure, consider the entire workload, not just the training phase. CPUs are essential for inference, data processing, and agentic AI.

Want to learn more about the latest developments in AI and computing? Explore our technology news section for in-depth analysis and expert insights.

Tech

Google’s Minnesota Data Center: Pioneering Clean Power & Massive Battery Storage

by Chief Editor March 2, 2026
written by Chief Editor

Google’s Minnesota Data Center: A Blueprint for the Future of AI and Energy

Data centers, the backbone of the digital world, are facing increasing scrutiny over their energy consumption. Concerns are rising about potential higher electric bills and the risk of prolonging the lifespan of outdated coal plants as demand for AI continues to surge. However, a groundbreaking project by Google in Pine Island, Minnesota, offers a compelling alternative – a data center powered by clean energy and bolstered by the world’s largest battery.

Beyond Net-Zero: Paying for the Transition

Google isn’t simply aiming for net-zero emissions; it’s actively investing in the infrastructure needed to support its growing energy demands without burdening existing customers. “Google has long been committed to scaling our infrastructure responsibly, which includes paying for the electricity and associated costs of our growth,” explains Lucia Tian, Google’s head of advanced energy technologies. This commitment extends to funding 1,900 megawatts of new clean energy through an agreement with Xcel Energy.

This approach mirrors a similar initiative in Nevada, where Google financed a geothermal power plant developed by Fervo, a company pioneering next-generation energy technology. In Minnesota, the investment breaks down to 1,400 megawatts of wind power and 200 megawatts of solar power, coupled with a revolutionary long-duration energy storage solution.

The Rise of Long-Duration Energy Storage: Form Energy’s Iron-Air Battery

Central to the Minnesota project is a battery developed by Form Energy. Unlike traditional lithium-ion batteries, which typically store energy for a few hours, Form Energy’s iron-air chemistry works by reversibly rusting iron, allowing it to store and release energy over 100 hours. The Minnesota plant will boast a capacity of 300 megawatts and 30 gigawatt-hours, surpassing the combined storage capacity of all battery projects completed in the U.S. in 2024.
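The 100-hour figure follows directly from the plant’s two ratings. As a quick illustrative check (the 300 MW and 30 GWh numbers are from the project as reported above; the arithmetic is ours):

```python
# Storage duration = energy capacity / power rating,
# using the Minnesota plant figures cited above.
power_mw = 300     # power rating, megawatts
energy_gwh = 30    # energy capacity, gigawatt-hours

# Convert GWh to MWh so units match, then divide by power.
duration_hours = (energy_gwh * 1000) / power_mw
print(f"Full-power discharge duration: {duration_hours:.0f} hours")  # 100 hours
```

That is, the battery can discharge at its full 300 MW rating for roughly four days straight before it is depleted.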

This extended storage capability is crucial for addressing the intermittent nature of renewable energy sources. As Xcel Energy notes, a long-duration battery can mitigate the impact of prolonged periods of low solar and wind generation, such as cloudy winter days. Importantly, the technology is cost-competitive with natural gas, making it a viable alternative for grid-scale energy storage.

The Broader Implications: A Shift in Tech’s Energy Strategy

Google’s Minnesota project isn’t an isolated incident. Major tech companies are expected to pledge greater responsibility for their energy consumption at a White House event. This signals a broader industry trend towards proactive energy management and investment in clean energy infrastructure.

Beyond Data Centers: Applications for Grid Resilience

The implications of long-duration energy storage extend far beyond data centers. These batteries can play a vital role in enhancing grid resilience, enabling greater integration of renewable energy sources, and reducing reliance on fossil fuels. They can also support the electrification of transportation and other sectors.

The success of Form Energy’s iron-air battery could spur further innovation in long-duration storage technologies, potentially leading to even more efficient and cost-effective solutions.

FAQ

Q: What is long-duration energy storage?
A: Long-duration energy storage refers to technologies capable of storing energy for 100 hours or more, unlike traditional batteries that typically store energy for a few hours.

Q: How does Form Energy’s battery function?
A: Form Energy’s battery uses iron-air technology, which involves reversibly rusting iron to store and release energy.

Q: Why is Google investing in clean energy?
A: Google is committed to scaling its infrastructure responsibly and paying for the electricity and associated costs of its growth.

Q: Will this project impact electricity bills for Minnesota residents?
A: No, Google is paying to build enough clean power so that existing customers won’t foot the bill.

Did you know? The battery in Minnesota will store more energy than all battery projects built in the U.S. in 2024 combined.

Pro Tip: Investing in long-duration energy storage is key to unlocking the full potential of renewable energy sources.

Explore more articles on sustainable technology and energy solutions here. Subscribe to our newsletter for the latest updates and insights!

Business

Meta and AMD Partner for Longterm AI Infrastructure Agreement

by Chief Editor February 24, 2026
written by Chief Editor

Meta and AMD Forge AI Partnership: A New Era of Compute Power

Meta Platforms and Advanced Micro Devices (AMD) have announced a significant multi-year agreement to bolster Meta’s AI infrastructure. Under the deal, potentially exceeding $100 billion, AMD will provide up to 6 gigawatts of its Instinct GPUs to power Meta’s next generation of AI models.

Beyond the Gigawatts: A Strategic Alignment

This partnership extends beyond a simple hardware supply agreement. Meta and AMD are aligning their roadmaps across silicon, systems, and software. This vertical integration aims to accelerate innovation and scale, enabling Meta to rapidly deploy cutting-edge AI capabilities. The collaboration builds on existing work, including the jointly developed Helios rack-scale architecture, unveiled at the 2025 Open Compute Project Global Summit.

Helios: The Foundation for Scalable AI

AMD Helios is designed to enable scalable, rack-level AI infrastructure. This architecture is crucial for handling the immense computational demands of modern AI workloads. Shipments supporting the first gigawatt deployment are expected to begin in the second half of 2026, utilizing a custom AMD Instinct GPU based on the MI450 architecture and 6th Gen AMD EPYC “Venice” CPUs.

Diversifying Compute for ‘Personal Superintelligence’

Meta’s strategy isn’t solely reliant on AMD. The company emphasizes a “portfolio-based approach” to infrastructure, combining hardware from multiple partners with its own Meta Training and Inference Accelerator (MTIA) silicon program. This diversification aims to create a more resilient and flexible tech stack, future-proofing its leadership in AI and enabling the development of “personal superintelligence.”

A 10% Stake in the Making?

Adding another layer to the agreement, AMD has issued a performance-based warrant allowing Meta to acquire up to 160 million AMD shares – approximately 10% of the company. This warrant vests in tranches tied to GPU shipments and stock price milestones, aligning the interests of both companies and demonstrating a long-term commitment.

The Broader AI Landscape

This deal highlights the escalating demand for AI-specific hardware. Meta’s simultaneous expansion of its Nvidia partnership underscores the need for diverse compute resources. The competition for AI dominance is driving significant investment in GPU technology and related infrastructure.

FAQ

Q: What is the primary goal of this partnership?
A: To power Meta’s AI infrastructure with AMD Instinct GPUs and align technology roadmaps for faster innovation.

Q: What is the Helios rack-scale architecture?
A: A jointly developed architecture by AMD and Meta designed for scalable AI infrastructure.

Q: When will the first GPU deployments begin?
A: Shipments are expected to begin in the second half of 2026.

Q: What is Meta’s “portfolio-based approach” to infrastructure?
A: A strategy of diversifying hardware partnerships and utilizing its own silicon development to create a resilient and flexible infrastructure.

Q: Does Meta have other AI hardware partnerships?
A: Yes, Meta recently expanded its partnership with Nvidia.

Did you know? The agreement includes a performance-based warrant for Meta to acquire a significant stake in AMD, demonstrating a strong long-term commitment.

Pro Tip: Diversifying your technology stack is a key strategy for mitigating risk and fostering innovation in the rapidly evolving AI landscape.

What are your thoughts on this new partnership? Share your insights in the comments below!

Tech

Amazon Earmarks $12 Billion for Louisiana Data Centers

by Chief Editor February 24, 2026
written by Chief Editor

Amazon’s $12 Billion Louisiana Investment: A Sign of the Future for AI Infrastructure

Amazon’s recent commitment of $12 billion to build AI data center campuses in northwest Louisiana marks a significant escalation in the tech giant’s infrastructure investments. This move, announced on February 23, 2026, isn’t just about expanding capacity; it’s a strategic play signaling where the future of cloud computing and artificial intelligence is headed.

The Scale of the Investment and its Components

The $12 billion will fund not only the data centers themselves, but also crucial supporting infrastructure. Amazon will cover all expenses for new energy infrastructure upgrades needed to power the facilities. The company plans to invest in solar energy projects, aiming to add up to 200 MW of carbon-free energy to the Louisiana grid. Up to $400 million will be allocated to public water infrastructure improvements to support the campuses.

Louisiana’s Appeal: Why the Pelican State?

According to Louisiana Governor Jeff Landry, Amazon chose the state due to its “prime sites, infrastructure, and workforce.” This highlights a growing trend: companies are seeking locations that offer not just land availability, but also robust existing infrastructure and a skilled labor pool. The partnership with STACK Infrastructure, a digital infrastructure firm, will be key to building the facilities.

A Broader Trend: Amazon’s Nationwide Infrastructure Buildout

Louisiana is not an isolated case. Amazon Web Services (AWS) announced plans in January to invest at least $11 billion in Georgia to expand AI infrastructure. Prior to that, in June, Amazon committed at least $20 billion to Pennsylvania for similar data center expansion. These investments demonstrate a “relentless commitment to powering our customers’ digital innovation through cloud and AI technologies,” according to Roger Wehner, vice president of economic development at AWS.

The AI and Cloud Computing Connection

The driving force behind these massive investments is the insatiable demand for AI and cloud computing resources. AI models require enormous processing power and data storage, necessitating the construction of specialized data centers. Cloud computing, in turn, relies on these data centers to deliver on-demand services to businesses and individuals.

Impact on Local Economies

Amazon’s investment in Louisiana is expected to create significant economic opportunities for local communities. Governor Landry emphasized that the investment will “connect our communities to jobs that power how Americans live, work and do business.” Similar effects are anticipated in Georgia and Pennsylvania, as these projects generate both construction jobs and long-term employment opportunities in the tech sector.

Sustainability Considerations

Amazon’s commitment to investing in renewable energy sources, like solar power, and upgrading water infrastructure demonstrates a growing awareness of the environmental impact of data centers. Data centers are energy-intensive operations, and sustainability is becoming an increasingly key factor in site selection and design.

Frequently Asked Questions

What is an AI data center? An AI data center is a specialized facility designed to handle the massive computing and storage requirements of artificial intelligence applications.

Why is Amazon investing so heavily in data centers? Amazon is investing to meet the growing demand for its cloud computing services (AWS) and to support the development and deployment of AI technologies.

What is STACK Infrastructure’s role in this project? STACK Infrastructure is the developer and owner of the data center campuses, partnering with Amazon to build and operate the facilities.

Will these investments lead to job creation? Yes, these investments are expected to create both construction jobs and long-term employment opportunities in the tech sector.

Is Amazon focused on sustainability in these projects? Yes, Amazon is investing in renewable energy sources and upgrading water infrastructure to reduce the environmental impact of its data centers.

Did you know? The demand for data center space is projected to grow exponentially in the coming years, driven by the increasing adoption of AI and cloud computing.

Pro Tip: Keep an eye on states with favorable infrastructure, skilled workforces, and supportive government policies – they are likely to attract further data center investments.

Explore more about Amazon’s commitment to sustainability here. What are your thoughts on the future of AI infrastructure? Share your comments below!
