
Amazon Web Services Unveils New AI Powerhouses Amid Chip Armageddon

Amazon Web Services, the cloud computing unit of tech titan Amazon (NASDAQ:AMZN), put on a show on Tuesday, unveiling two new processors: a general-purpose CPU for cloud workloads and an AI chip designed to speed up model training while cutting the power required to do so. AWS also broadened its existing partnership with GPU giant Nvidia (NVDA).


Amazon (AMZN) said the new Graviton4 and Trainium2 processors, unveiled at AWS re:Invent, offer better performance, more cores, and more memory than their predecessors. Graviton4 delivers 30% better compute performance than Graviton3, 50% more cores, and up to 75% more memory bandwidth, the company said.
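For readers curious how that shows up in practice, here is a minimal sketch, assuming the AWS SDK for Python (boto3), of how a customer could list the Arm-based (Graviton-family) EC2 instance types available in a region. The "r8g" family name flagged in the comments as Graviton4-based is an assumption for illustration; the article does not name the instance family.

```python
# Sketch: list arm64 (Graviton-family) EC2 instance types with boto3.
# Assumes AWS credentials are configured; "r8g" as the Graviton4 family
# name is an assumption, not something stated in the article.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

paginator = ec2.get_paginator("describe_instance_types")
pages = paginator.paginate(
    Filters=[
        {"Name": "processor-info.supported-architecture", "Values": ["arm64"]}
    ]
)

for page in pages:
    for itype in page["InstanceTypes"]:
        name = itype["InstanceType"]
        vcpus = itype["VCpuInfo"]["DefaultVCpus"]
        mem_mib = itype["MemoryInfo"]["SizeInMiB"]
        # Flag the (assumed) Graviton4 family if it shows up in this region.
        marker = "  <- assumed Graviton4-based" if name.startswith("r8g") else ""
        print(f"{name}: {vcpus} vCPUs, {mem_mib} MiB{marker}")
```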

Trainium2, Amazon’s other new chip, promises up to four times faster training than its predecessor. It is set to be deployed in AWS’s EC2 UltraClusters of up to 100,000 chips, which AWS says can train foundation models and large language models (LLMs) in “a fraction of the time” while delivering up to twice the energy efficiency of the prior generation.
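As a rough illustration only, the sketch below, again assuming boto3, shows how a customer might request a single Trainium-backed EC2 instance. The AMI ID is a placeholder and the "trn2" instance type name is an assumption (the article does not name the Trainium2 instance family; the first-generation chips shipped as "trn1"); UltraCluster-scale training deployments involve additional capacity and networking setup not shown here.

```python
# Sketch: request one Trainium-backed EC2 instance with boto3.
# The AMI ID is a placeholder and "trn2.48xlarge" is an assumed
# instance type name used only for illustration.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: e.g. a Deep Learning AMI with the Neuron SDK
    InstanceType="trn2.48xlarge",      # assumed Trainium2 instance type
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[
        {
            "ResourceType": "instance",
            "Tags": [{"Key": "workload", "Value": "llm-training"}],
        }
    ],
)

print(response["Instances"][0]["InstanceId"])
```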

David Brown, vice president of Compute and Networking at AWS, said in a statement, “By focusing our chip designs on real workloads that matter to customers, we’re able to deliver the most advanced cloud infrastructure to them. Graviton4 marks the fourth generation we’ve delivered in just five years, and is the most powerful and energy efficient chip we have ever built for a broad range of workloads.

“And with the surge of interest in generative AI, Trainium2 will help customers train their ML models faster, at a lower cost, and with better energy efficiency.” Amazon (AMZN) announced that Anthropic, Databricks, Datadog, Epic, Honeycomb, and SAP are among the AWS customers that will use the new chips.

In a further move, AWS and Nvidia (NVDA) expanded their prior collaboration to make Nvidia’s H200 AI GPUs available on AWS via Nvidia DGX Cloud. AWS also said it plans to offer the first cloud AI supercomputer featuring the NVIDIA Grace Hopper Superchip and AWS UltraCluster scalability.

Amazon (AMZN) joins the ranks of tech behemoths flexing their chip-making prowess amid a race to ease the GPU shortfall caused by skyrocketing demand for LLM training during the generative AI boom. Earlier this month, Microsoft (MSFT) unveiled its first pair of in-house processors at its annual Ignite conference, one focused on artificial intelligence and the other on cloud computing.

In August, Google (GOOG) (GOOGL) rolled out its fifth-generation tensor processing unit, the TPU v5e, which Google said delivers up to twice the training performance per dollar and up to two-and-a-half times the inference performance per dollar for LLMs and generative AI models compared with the prior model. AMD (AMD) is slated to hold an event next month to showcase its new MI300 series of AI accelerators.
