What’s next for Broadcom stock after a 240% three-year climb?

Dataconomy

Analysts expect the serviceable addressable market for Broadcom's custom AI accelerators and networking chips to range between $60 billion and $90 billion by fiscal year 2027. Despite these promising figures, the company faces several challenges.

Dell’Oro Group: AI data center switch spending to exceed $100 billion by 2029

Dataconomy

Boujelbene noted that Ethernet is gaining traction as the primary fabric for large-scale AI clusters, driven by supply and demand dynamics. Notably, even major NVIDIA GPU-based clusters, such as xAI's Colossus, are adopting Ethernet, pulling the projected crossover between Ethernet and InfiniBand forward by one year.

AI at warp speed: Nvidia’s new GB300 superchip arrives this year

Dataconomy

Nvidia has announced its next generation of AI superchips: the Blackwell Ultra GB300, shipping in the second half of this year; the Vera Rubin, due in the second half of next year; and the Rubin Ultra, set to arrive in the second half of 2027. The Blackwell Ultra offers a multiple of its predecessor's FP4 inference performance and can accelerate AI reasoning tasks.

Your next phone will live longer thanks to Brussels

Dataconomy

Winners, laggards and the 2027 review

Early winners include modular-phone pioneers and premium brands already shipping IP68 chassis and extended-support policies. Laggards cluster among entry-level OEMs that outsource design and run on razor-thin margins; for them, the seven-year spare-part stockpile is a capital-intensive hurdle.

Real value, real time: Production AI with Amazon SageMaker and Tecton

AWS Machine Learning Blog

Global ecommerce fraud is predicted to exceed $343 billion by 2027. Orchestrate with Tecton-managed EMR clusters: after features are deployed, Tecton automatically creates the scheduling, provisioning, and orchestration needed for pipelines that can run on Amazon EMR compute engines.

Broadcom stock climbs 13%: The AI boom investors can’t ignore

Dataconomy

CEO Hock Tan emphasized the potential of custom AI chips currently in development for three large cloud customers, anticipating that these clients will deploy 1 million AI chips in networked clusters by 2027. While the AI segment thrived, non-AI semiconductor revenue experienced a decline of 23% year-over-year.

The history of Kubernetes

IBM Journey to AI blog

Borg's large-scale cluster management system essentially acts as a central brain for running containerized workloads across Google's data centers. Omega took the Borg ecosystem further, providing a flexible, scalable scheduling solution for large-scale compute clusters. Kubernetes clusters include control plane nodes, which control the cluster.