
Broadcom (NASDAQ: AVGO)
Q1 2025 Earnings Call
Mar 06, 2025, 5:00 p.m. ET
Overview of the Q1 2025 Earnings Call
- Prepared Remarks
- Questions and Answers
- Call Participants
Prepared Remarks
Operator
Welcome to Broadcom, Inc.’s financial results conference call for the first quarter of fiscal year 2025. For opening remarks and introductions, I would like to turn the call over to Ji Yoo, head of investor relations.
Ji Yoo — Director, Investor Relations
Thank you, Sherie, and good afternoon, everyone. Joining today’s call are Hock Tan, president and CEO; Kirsten Spears, chief financial officer; and Charlie Kawwas, president of the Semiconductor Solutions Group. Broadcom released a press release and financial tables after the market closed describing our first quarter fiscal year 2025 financial performance. If you did not receive a copy, the information is available in the Investors section of Broadcom’s website at broadcom.com.
This conference call is being webcast live, and an audio replay will be available for one year on Broadcom’s website. Hock and Kirsten will discuss our Q1 fiscal results, guidance for Q2, and comments on the current business environment. We’ll take questions after their prepared remarks. For specific risk factors that could impact our actual results, please refer to our press release and recent SEC filings.
Financial Highlights of Q1 2025
Hock E. Tan — President, Chief Executive Officer, and Director
Thank you, Ji, and thanks to everyone for joining us today. In fiscal Q1 2025, we achieved total revenue of $14.9 billion, an increase of 25% year-over-year, with consolidated adjusted EBITDA reaching a record $10.1 billion, up 41% year-over-year. Let’s focus on our Semiconductor business first. We generated revenue of $8.2 billion in Q1, which is an 11% year-over-year growth.
The growth was largely fueled by the rise in AI, with AI revenue reaching $4.1 billion—up 77% from a year ago—and exceeding our guidance of $3.8 billion, thanks to robust shipments of networking solutions. Hyperscaler partners have aggressively invested in next-generation frontier AI models that require advanced high-performance accelerators and larger AI data centers. In response, we are increasing our R&D investments in two crucial areas.
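As a quick sanity check on these growth rates, the implied year-ago comparables can be backed out arithmetically. This is a rough sketch only: as noted later in the call, Q1 fiscal 2024 had 14 weeks versus 13 weeks in Q1 fiscal 2025, so the implied prior-year figures are approximations, not reported numbers.

```python
# Back out implied year-ago figures from the growth rates quoted on the call.
total_q1_fy25 = 14.9   # $B, Q1 FY2025 total revenue
total_yoy = 0.25       # 25% year-over-year growth
ai_q1_fy25 = 4.1       # $B, Q1 FY2025 AI revenue
ai_yoy = 0.77          # 77% year-over-year growth

implied_total_fy24 = total_q1_fy25 / (1 + total_yoy)  # ~$11.9B
implied_ai_fy24 = ai_q1_fy25 / (1 + ai_yoy)           # ~$2.3B
ai_share = ai_q1_fy25 / total_q1_fy25                 # AI is ~28% of total
```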
First, we are innovating by creating the next generation of accelerators. We are currently developing industry-leading two-nanometer AI XPUs packaged in 3.5D technology, targeting 10,000 teraflops of XPU performance. Second, we plan to scale clusters to 500,000 accelerators for our hyperscale customers, having recently doubled the radix capacity of our existing Tomahawk 5 switches.
Moreover, to enable these AI clusters to scale toward 1 million XPUs, we are advancing our next-generation 100-terabit Tomahawk 6 switch, currently in development with 1.6-terabit bandwidth and sample deliveries to customers expected within months. These R&D investments align with the roadmap timelines of our three large hyperscale customers, each aiming to achieve 1 million XPU clusters by the end of 2027. We reaffirm our view that these customers will create a serviceable addressable market (SAM) estimated between $60 billion and $90 billion by fiscal 2027.
In addition to our three major customers, we are also actively working with two other hyperscalers to develop their own customized AI accelerators, keeping on track to tape out their XPUs this year. It has become evident from our collaborations that while hyperscalers excel in software, Broadcom leads in hardware, effectively co-optimizing large language models.
Notably, we’ve gained traction since our last earnings call, with two additional hyperscalers selecting Broadcom to develop custom accelerators for their next-generation frontier models. In total, beyond our three established customers, four more hyperscalers are now deeply engaged with us on their own accelerator developments; these do not factor into our previously mentioned SAM estimate. This indicates an encouraging market trend.
As new frontier models exert pressure on AI systems, achieving a single system design point for all model clusters proves challenging. The trajectory toward specialized XPUs signifies a multi-year investment endeavor.
Looking Ahead
Turning to fiscal year 2025, we foresee continued deployment growth for our XPUs and networking products. AI revenue was $4.1 billion in Q1, and we anticipate that momentum carrying into Q2, with AI revenue projected to rise to $4.4 billion, up 44% year over year. Non-AI semiconductor revenue, however, declined 9% sequentially to $4.1 billion, reflecting a seasonal drop in the wireless sector.
Overall, during Q1, we observed a slow recovery in non-AI semiconductor segments, with broadband showing signs of improvement since its low point in Q4 2024.
Solid Q2 Growth Expected as Semiconductor and Software Sectors Advance
Broadband is showing signs of recovery, with double-digit sequential growth in Q1 expected to continue into Q2 as service providers and telecommunications companies ramp up their spending. Server storage declined slightly in Q1, but projections indicate a high single-digit sequential increase for Q2. In contrast, enterprise networking remains flat in the first half of fiscal year 2025 as customers work through their channel inventory.
Wireless revenue declined sequentially on seasonality yet was flat year on year, and we expect Q2 wireless revenue to again be flat year on year. Meanwhile, industrial resales dropped significantly in Q1 and are expected to decline further in Q2. Collectively, these dynamics suggest that non-AI semiconductor revenue will remain roughly stable sequentially despite continued year-on-year growth in bookings.
Overall, for Q2, total semiconductor revenue is expected to rise 2% sequentially and 17% year on year to $8.4 billion. Shifting to the Infrastructure Software segment, Q1 revenue was $6.7 billion, up 47% from the previous year and 15% sequentially, aided by deals that slipped from Q4 into Q1. This marks the first quarter in fiscal ’25 in which the year-on-year comparison includes VMware in both periods.
We are witnessing remarkable growth in the Software segment primarily due to two key factors. First, there is a transition from a largely perpetual licensing footprint to a complete subscription model; currently, over 60% of this transition is complete. Second, the previous perpetual licenses were primarily for compute virtualization, known as vSphere.
Additionally, we are upselling clients to the full VMware Cloud Foundation (VCF), which allows entire data centers to be virtualized and enables customers to create private cloud environments on-site. As of the end of Q1, around 70% of our top 10,000 clients have adopted VCF, revealing substantial potential for future growth as these enterprises increasingly shift toward AI workloads.
A large number of enterprises are now running AI workloads on their on-premise data centers, requiring both GPU servers and traditional CPUs. Just as VCF virtualizes traditional data centers using CPUs, it is also poised to virtualize GPUs within a unified platform, enabling the importation of AI models for local data processing. This platform, known as the VMware Private AI Foundation, has already garnered interest from 39 enterprises in collaboration with NVIDIA.
Customer interest reflects our open ecosystem, advanced load-balancing, and automation features, which allow for efficient workload management across both GPU and CPU infrastructures, ultimately driving down costs. For Q2, we anticipate software segment revenue to be around $6.5 billion, marking a 23% increase from the previous year. Thus, our overall guidance for Q2 consolidated revenue is approximately $14.9 billion, which represents a year-on-year growth of 19%.
This growth is expected to lead to an adjusted EBITDA of about 66% of revenue. With that, I’ll now turn the call over to Kirsten M. Spears, Chief Financial Officer and Chief Accounting Officer.
Kirsten M. Spears — Chief Financial Officer and Chief Accounting Officer
Thank you, Hock. Let’s delve into our Q1 financial performance in greater detail. It’s important to consider that Q1 of fiscal 2024 had 14 weeks while Q1 of fiscal 2025 has 13 weeks. Consolidated revenue for the quarter reached $14.9 billion, showing a 25% increase from last year.
In Q1, our gross margin was 79.1% of revenue, exceeding earlier guidance due to stronger infrastructure software revenue and a more favorable mix within semiconductor revenue. Operating expenses totaled $2 billion, with R&D accounting for $1.4 billion. Our operating income was $9.8 billion, a 44% increase year over year, reflecting an operating margin of 66%. Adjusted EBITDA reached a record $10.1 billion, or 68% of revenue, surpassing our guidance of 66%.
This figure excludes $142 million in depreciation. Now, looking at our segments, semiconductor revenue was $8.2 billion, comprising 55% of total revenue for the quarter, marking an 11% increase from a year ago. The gross margin for this segment was approximately 68%, a rise of 70 basis points year on year due to revenue mix improvements.
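The consolidated margin figures above are mutually consistent, which a short sketch can verify (figures in $B, taken from the remarks above; a rough cross-check, not a reported reconciliation):

```python
revenue = 14.9          # $B, Q1 consolidated revenue
gross_margin = 0.791    # 79.1% of revenue
opex = 2.0              # $B, operating expenses
op_income = 9.8         # $B, stated operating income
ebitda = 10.1           # $B, adjusted EBITDA

# Gross profit minus operating expenses should land near stated
# operating income.
derived_op_income = revenue * gross_margin - opex   # ~$9.8B
op_margin = op_income / revenue                     # ~66%
ebitda_margin = ebitda / revenue                    # ~68%
```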
Operating expenses increased by 3% from the previous year to $890 million as we further invested in R&D for advanced AI semiconductors, resulting in a semiconductor operating margin of 57%. Turning now to the Infrastructure Software segment, we generated $6.7 billion in revenue, which was 45% of total revenue and represented a significant 47% increase year on year largely due to VMware’s performance. The gross margin for Infrastructure Software hit 92.5% this quarter, up from 88% last year.
Operating expenses were about $1.1 billion, resulting in an operating margin of 76%, up substantially from 59% a year ago. The improvement reflects the integration of VMware and disciplined execution of our VCF strategy. Turning to cash flow, we generated $6 billion in free cash flow, or 40% of revenue. Free cash flow as a percentage of revenue continues to be affected by cash interest expense on VMware-related debt and by cash taxes.
During the quarter, we allocated $100 million for capital expenditures. Our Days Sales Outstanding stood at 30 days this quarter compared to 41 days last year. At the end of Q1, inventory reached $1.9 billion, up 8% sequentially to prepare for future revenue needs, while our inventory turnover days remained at 65. We concluded Q1 with $9.3 billion in cash and $68.8 billion in gross principal debt.
Throughout the quarter, we repaid $495 million in fixed-rate debt and $7.6 billion in floating-rate debt, using new senior notes, commercial paper, and cash reserves, resulting in a net debt reduction of $1.1 billion. This led to an average coupon rate of 3.8% and a maturity of 7.3 years for our $58.8 billion in fixed-rate debt, while our $6 billion in floating-rate debt has a 5.4% rate and 3.8 years’ maturity, with commercial paper averaging 4.6%.
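Putting the debt figures together in a hedged sketch: the commercial paper balance is not stated directly on the call and is inferred here as the residual of the $68.8 billion gross principal debt, so treat that figure and the resulting blended rate as back-of-envelope estimates.

```python
gross_debt = 68.8    # $B, total gross principal debt at quarter end
fixed = 58.8         # $B of fixed-rate debt at a 3.8% average coupon
floating = 6.0       # $B of floating-rate debt at a 5.4% rate
cp_rate = 0.046      # commercial paper averaging 4.6%

# Implied commercial paper balance (a residual inference, not a figure
# stated on the call).
cp = gross_debt - fixed - floating   # ~$4.0B

# Blended cost of debt across the three tranches.
blended = (fixed * 0.038 + floating * 0.054 + cp * cp_rate) / gross_debt
```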
Regarding capital allocation, during Q1, we distributed $2.8 billion in cash dividends based on a $0.59 per share quarterly common stock dividend. We also repurchased 8.7 million AVGO shares from employees for tax purposes, costing $2 billion. For Q2, we anticipate the non-GAAP diluted share count to be about 4.95 billion shares. Our Q2 guidance estimates consolidated revenue at $14.9 billion, and we expect semiconductor revenue to hit approximately $8.4 billion, reflecting a 17% year-on-year increase. AI revenue for Q2 is anticipated to reach $4.4 billion, a substantial 44% year on year growth, while non-AI semiconductor revenue is expected to be around $4 billion. Infrastructure Software revenue is projected to be about $6.5 billion, up 23% year on year.
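The Q2 guidance components tie out: AI plus non-AI semiconductor revenue matches the semiconductor total, and semiconductors plus software match the consolidated figure. A simple arithmetic sketch (the implied year-ago figure is a derivation from the stated growth rate, not a reported number):

```python
ai = 4.4        # $B, Q2 AI revenue guidance
non_ai = 4.0    # $B, Q2 non-AI semiconductor revenue guidance
software = 6.5  # $B, Q2 infrastructure software guidance

semis = ai + non_ai       # ~$8.4B, matches stated semiconductor guidance
total = semis + software  # ~$14.9B, matches consolidated guidance

# Implied year-ago semiconductor revenue from the 17% growth rate.
implied_semis_prior = semis / 1.17   # ~$7.2B
```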
In summary, we estimate Q2 adjusted EBITDA to be roughly 66%. Additionally, for forecasting purposes, we anticipate a decline of approximately 20 basis points in Q2 consolidated gross margins, influenced by the mix of our infrastructure software and semiconductor products as we continue increasing R&D investments in advanced AI technologies.
Financial Insights: Q2 Results and Future Growth at Broadcom
We also anticipate a non-GAAP tax rate of approximately 14% for both Q2 and fiscal year 2025.
That wraps up my prepared remarks. Operator, please open the floor for questions.
Questions & Answers
Operator
Thank you. [Operator instructions] Due to time constraints, we request that you limit your questions to one each. Please stand by while we compile the Q&A roster. Our first question comes from Ben Reitzes with Melius.
Your line is open.
Ben Reitzes — Analyst
Hi, everyone. Thank you, and congratulations on the results. Hock, you mentioned that four more customers are in the pipeline. Could you elaborate on the trends you’re observing? Is there potential for these customers to match the current three? Additionally, how does this reflect the overall trend in custom silicon and your long-term business outlook? Thanks.
Hock E. Tan — President, Chief Executive Officer, and Director
Great question, Ben. Thanks for your kind words. To clarify, the four potential customers are still in development, not yet classified as customers by our metrics. In our work with XPUs, we act as enablers rather than creators. We collaborate with hyperscalers and partners to design the chips and integrated systems that power compute tasks. This partnership is crucial for training large frontier models.
Importantly, while we develop the hardware, its effectiveness hinges on the software models and algorithms from our partners before they can achieve full scale. Currently, we only classify three as customers because they have deployed at scale and are prepared for higher production volumes. On the other hand, the four potential partners are working towards similar goals but are still in the earlier stages of development.
Creating the first chip can typically take about a year and a half. However, since we have established methodologies that have successfully worked for our current customers, we see no reason these new partners can’t eventually create similar demand, albeit on a later timeline.
Ben Reitzes — Analyst
Thank you very much.
Operator
Thank you. One moment for our next question. This comes from Harlan Sur with JPMorgan. Your line is open.
Harlan Sur — Analyst
Good afternoon, and congratulations on your solid quarterly performance, Hock and team. It’s encouraging to see the ongoing growth in the AI sector during the first half of your fiscal year and the expansion of your AI ASIC customer base. Hock, last earnings you noted that significant growth is expected in the second half of the fiscal year, primarily driven by the launch of new three-nanometer AI acceleration programs. Can you share any qualitative or quantitative insights into what to expect for the second half compared to the first half? Have any dynamics shifted in the last 90 days?
Hock E. Tan — President, Chief Executive Officer, and Director
Harlan, you’re asking me to predict my customers’ intentions, which is always a bit challenging. However, we have been exceeding expectations in Q1 and see promising signs for Q2. This success stems from improved networking shipments, which cluster XPUs and AI accelerators, including some GPUs for hyperscalers. We also suspect there have been some accelerated pull-ins of shipments in fiscal 2025.
Harlan Sur — Analyst
Regarding the anticipated second-half three-nanometer ramp, is that still proceeding as planned?
Hock E. Tan — President, Chief Executive Officer, and Director
Thank you, Harlan. At this time, I can only provide guidance for Q2. Let’s refrain from speculating about the second half.
Harlan Sur — Analyst
Understood, thank you, Hock.
Operator
Thank you. Please hold for our next question, which comes from William Stein with Truist Securities. Your line is open.
William Stein — Analyst
Thanks for taking my question. Congratulations on your impressive results. Given recent headlines concerning tariffs and DeepSeek, it seems there could be some disruptions, with some customers feeling paralyzed in decision-making. In such times, strong companies often emerge even stronger. Broadcom has shown tremendous growth over the past decade, particularly in the AI field. Are you observing disruptions in this environment, and can we expect significant changes for Broadcom stemming from these dynamics?
Hock E. Tan — President, Chief Executive Officer, and Director
You raised some thought-provoking questions, William, and the topics are relevant. However, it’s still early to determine the outcomes. The potential threat of tariffs, especially concerning chips, hasn’t yet materialized, and it’s unclear how they will be implemented.
Nonetheless, we are experiencing a constructive disruption within the semiconductor industry, notably driven by generative AI. This trend is having substantial effects, and I’ve addressed the significance of this earlier.
Semiconductor Technology Gains Momentum Amid AI’s Rising Demands
As the semiconductor industry evolves rapidly, advancements in technology related to process, packaging, and design are accelerating. Companies are pushing for higher performance in accelerators and networking functionalities, responding to intriguing challenges that arise each month.
At the forefront of this evolution are XPUs, where firms are focused on optimizing their products to meet the needs of their hyperscale customers. Optimizing these accelerators involves more than assessing compute capacity measured in teraflops; it requires understanding the complexities of distributed computing, because the performance of a single XPU or GPU is intrinsically linked to the network bandwidth it shares with adjacent units.
When optimizing these systems, several factors must be balanced, including whether the focus is on training, prefill, or post-training fine-tuning. Important considerations are the amount of memory needed and the allowable latency, which ties directly to memory bandwidth. Engineers must navigate at least four critical variables, possibly five if memory bandwidth is counted separately. This challenge gives engineers a gratifying opportunity to innovate and build advanced hardware infrastructure capable of supporting generative AI.
Moreover, AI is influencing not only hardware solutions for enterprises but also the architecture of data centers themselves. Maintaining data privacy becomes paramount amidst this shift. Consequently, enterprises are reassessing their reliance on public cloud services for AI workloads. Many are now considering the necessity of running these processes on-premises, leading to upgrades in data center capabilities. This trend has surfaced over the past year, supporting discussions around the VMware Private AI Foundation and reflecting enterprises’ urgent need to manage where their AI workloads run.
These trends are primarily driven by AI advancements and a growing emphasis on cloud sovereignty and data regulations. Regarding tariffs, it is still too early to determine their full impact. A clearer picture may emerge in three to six months.
William Stein — Analyst
Thank you.
Operator
Thank you. One moment for our next question from Ross Seymore with Deutsche Bank. Your line is open.
Ross Seymore — Analyst
Thanks for the opportunity. Hock, I want to revisit the XPU segment. You mentioned four new engagements with unnamed customers last quarter and two more this quarter. Transitioning from design wins to actual deployments is complex. How do you assess that progress? There seems to be varied opinions on whether design wins translate into meaningful deployments.
Hock E. Tan — President, Chief Executive Officer, and Director
That’s an interesting question, Ross. Our definition of a design win differs from what many peers might use. We recognize a design win only when our product is produced at scale and deployed in actual operations, a process that can take one to two years from tape-out until the product reaches our partner for production.
Furthermore, producing and deploying merely 5,000 XPUs does not constitute genuine production in our perspective. We prefer to work with partners who require larger volumes for ongoing training of large language models and frontier models. We are selective about our partnerships, prioritizing those who are sustainable long-term.
This approach is consistent with how we’ve conducted our ASIC business over the past 15 years. We carefully choose our customers and engage in multi-year planning with them, focusing on proven entities rather than startups.
Ross Seymore — Analyst
Thank you.
Operator
One moment for the next question, which will come from Stacy Rasgon with Bernstein Research. Your line is open.
Stacy Rasgon — Analyst
Thanks for taking my question. I’m interested in the three existing customers currently in volume. Are there concerns about new regulations or AI diffusion rules set to take effect in May affecting these design wins or shipments? It seems you believe these three customers are proceeding as planned, but any insight on the potential regulatory impact would be valuable.
Hock E. Tan — President, Chief Executive Officer, and Director
In light of current geopolitical tensions and government actions, it’s natural to have some concerns. However, to answer your question directly, we currently have no concerns regarding these engagements.
Stacy Rasgon — Analyst
So none of these customers are located in China or are Chinese clients?
Hock E. Tan — President, Chief Executive Officer, and Director
No comment. Are you trying to identify who they are?
Stacy Rasgon — Analyst
Okay, that’s insightful. Thank you.
Hock E. Tan — President, Chief Executive Officer, and Director
Thank you.
Operator
One moment for our next question.
AI Market Dynamics: Insights from Broadcom’s Leadership on Growth and Strategy
Vivek Arya — Analyst
Thank you for the opportunity. Hock, you’ve often highlighted the significance of training workloads in the AI space. However, there’s a growing perception that the inference workload might dominate the AI market, especially with new reasoning models emerging. How does this shift affect your potential market share? If inference takes precedence, does it expand your serviceable addressable market (SAM) beyond the $60 billion to $90 billion range? Alternatively, does it maintain that value with a different product mix, or does an inference-heavy market favor GPUs over XPUs? Thank you.
Hock E. Tan — President, Chief Executive Officer, and Director
That’s a thoughtful question. While I frequently discuss training workloads, I want to clarify that our XPUs are also designed for inference as a distinct product line. The architectures of inference and training chips differ significantly, and both contribute to the $60 billion to $90 billion opportunity I’ve mentioned. If I wasn’t clear before, I apologize for any confusion. It’s essential to note, however, that a substantial portion of revenue comes from training, rather than inference, within the current service parameters we’re discussing.
Vivek Arya — Analyst
Thank you.
Operator
One moment for our next question from Harsh Kumar with Piper Sandler. Your line is open.
Harsh Kumar — Analyst
Thanks to the Broadcom team for your great execution. I have a quick question. We’ve noted that nearly all large clusters with over 100K units are shifting to Ethernet. Could you elaborate on what factors customers consider when selecting between a provider known for having exceptional Switch ASICs, like Broadcom, versus one with superior compute capabilities?
Hock E. Tan — President, Chief Executive Officer, and Director
That’s a critical point. For hyperscalers, the decision is heavily influenced by performance, especially when it comes to connecting and scaling AI accelerators, whether they’re XPUs or GPUs. The primary goal for these hyperscalers is to achieve the highest performance as they train advanced models. Therefore, customers prioritize proven hardware and systems. They require technology that guarantees optimal functioning during this process.
In this context, Broadcom holds a competitive edge. We have extensive experience in networking, switching, and routing built over the past decade, making our solutions particularly appealing. The continued advancement of our technology, from 800 gigabits per second to 1.6 and on to 3.2 terabits per second, underscores our commitment to meeting hyperscalers’ needs. We are investing heavily in our products, including Tomahawk 5 and future iterations such as Tomahawk 6, 7, and 8, tailored to the requirements of a few high-demand customers while aiming to capture a significant share of the market.
Harsh Kumar — Analyst
Thank you, Hock.
Operator
Thank you. One moment for our next question from Timothy Arcuri with UBS. Your line is open.
Timothy Arcuri — Analyst
Thank you. Hock, you’ve previously indicated that XPU units are anticipated to grow from around 2 million last year to approximately 7 million by 2027-2028. Do the four new customers you’ve mentioned contribute to that 7 million-unit target? You previously discussed an average selling price of $20,000 by that timeframe. Are the first three customers part of that 7 million, and do the new four engagements influence that total?
Hock E. Tan — President, Chief Executive Officer, and Director
Thanks for the question, Tim. To clarify, the market we’ve referenced regarding unit growth pertains specifically to the three customers we currently serve. The additional four engagements we mentioned are still in the partnership phase and do not yet factor into our serviceable available market.
Timothy Arcuri — Analyst
Understood, so those new engagements would augment that figure. Thank you, Hock.
Hock E. Tan — President, Chief Executive Officer, and Director
Thanks.
Operator
One moment for our next question from C.J. Muse with Cantor Fitzgerald. Your line is open.
C.J. Muse — Analyst
Good afternoon. Thank you for answering my question. Hock, I’d like to follow up on your prepared remarks regarding optimizing hardware for hyperscalers. Could you elaborate on how your expanded portfolio, now encompassing six mega-scale frontier models, supports differentiation among these players? Achieving exaflops of compute while optimizing capital expenditure and energy consumption is their ultimate goal. How is Broadcom aiding in this quest, and where do you think the line is drawn in terms of collaboration versus proprietary development?
Hock E. Tan — President, Chief Executive Officer, and Director
Our role involves providing essential semiconductor technology that enables hyperscalers to optimize their models and algorithms. This cooperation allows us to tailor our offerings to their unique requirements. The degree of optimization we provide largely depends on the feedback we receive from our partners, which directly influences our visibility and capabilities.
The XPU model we market maximizes performance alongside power efficiency, which is crucial. While we guide clients in performance enhancements, the primary drive for differentiation lies within their control, and we strive to enable that as much as possible.
Broadcom Discusses AI Opportunities and Market Dynamics
When considering total cost of ownership, power consumption plays a crucial role. Companies need to focus on how they design power management systems, balancing cluster sizes for different processes, including training, pre-training, post-training, inference, and test time scaling. Each of these has its unique characteristics, which highlights the benefit of working closely on XPU development. Regarding China, I don’t have a specific perspective; for us, it remains largely a technical matter.
C.J. Muse — Analyst
Thank you very much.
Operator
Next, we’ll hear from Christopher Rolland at Susquehanna. Your line is open.
Christopher Rolland — Analyst
Hi, thank you for taking my question. This one is directed at Hock and Kirsten. With your comprehensive connectivity portfolio, how do you see new greenfield scale-up opportunities unfolding, whether in optical or copper? Additionally, can you explain where your increased opex is being directed within the AI landscape?
Thank you.
Hock E. Tan — President, Chief Executive Officer, and Director
Your question covers our broad portfolio. Currently, we are benefiting from many hyperscale customers who are focused predominantly on greenfield expansions as opposed to brownfield developments. This trend provides exciting opportunities for us.
Both copper and optical connections are part of our deployment strategy, with significant potential coming from optical networking. We’re engaged with various active elements, such as multi-mode lasers, called VCSELs, and edge-emitting lasers for single mode. We offer both types to meet market demands.
Specifically, we have opportunities in scaling up versus scaling out. Our expertise extends into several protocols beyond Ethernet, including PCI Express, where we are at the forefront of networking innovation. We have intelligent switching architectures, such as those from our Jericho family, alongside standard NICs. Our product range gives us an edge in these initiatives.
Based on earlier discussions, I would estimate networking at around 20% to 30% of our AI-related revenue. Last quarter it peaked at nearly 40%, although that wasn’t typical. Generally, networking contributes about 30%, with XPU accelerators comprising the remaining 70%. This illustrates how different components of our product line play distinct roles in this growth area.
Christopher Rolland — Analyst
Thank you, Hock.
Kirsten M. Spears — Chief Financial Officer and Chief Accounting Officer
On the research and development front, as mentioned, we invested $1.4 billion in R&D in Q1, and I expect that number to increase in Q2. While we prioritize R&D across all product lines to stay competitive, a notable focus is developing the first two-nanometer AI XPU packaged in 3.5D.
Additionally, we’ve doubled the radix capacity of our existing Tomahawk 5 products, facilitating scalability toward 1 million XPUs for our AI customers. This is a major focus for us moving forward.
Christopher Rolland — Analyst
Thank you, Kirsten.
Operator
Next, we will hear from Vijay Rakesh at Mizuho. Your line is open.
Vijay Rakesh — Analyst
Hello, Hock. Just a quick inquiry regarding the networking segment:
How much sequential growth in AI-related networking revenue do you anticipate? Also, do you have any thoughts on future M&A, especially with recent Intel developments?
Hock E. Tan — President, Chief Executive Officer, and Director
On the networking side, we certainly saw a boost in Q1. However, I’m not expecting a consistent split of 60% compute and 40% networking revenue going forward. That mix seems more like a temporary anomaly. Over time, I believe it will stabilize closer to 70-30, possibly nearer to 30% for networking.
As we look into Q2, I think that positive trend may continue, but I view it as a brief surge rather than a new normal. Regarding M&A, at this time, our focus remains on AI initiatives and VMware integration, so we are not active in that area.
Vijay Rakesh — Analyst
Thank you, Hock.
Operator
We appreciate your participation. I will now turn the call back to Ji Yoo for closing remarks.
Ji Yoo — Director, Investor Relations
Thank you, Sherie. Broadcom plans to report its earnings for the second quarter of fiscal year 2025 after market close on Thursday, June 5, 2025. A public webcast of the earnings conference call will follow at 2:00 p.m. Pacific time.
That concludes our earnings call today. Thank you for joining us. Sherie, you may end the call.
Operator
[Operator signoff]
Call Participants:
Ji Yoo — Director, Investor Relations
Hock E. Tan — President, Chief Executive Officer, and Director
Kirsten M. Spears — Chief Financial Officer and Chief Accounting Officer
Ben Reitzes — Analyst
Harlan Sur — Analyst
William Stein — Analyst
Ross Seymore — Analyst
Stacy Rasgon — Analyst
Vivek Arya — Analyst
Harsh Kumar — Analyst
Timothy Arcuri — Analyst
C.J. Muse — Analyst
Christopher Rolland — Analyst
Vijay Rakesh — Analyst
This article is a transcript of the conference call produced for The Motley Fool. While we aim for accuracy, errors or omissions may occur. The Motley Fool encourages readers to conduct their research and consult the company’s SEC filings. Please see our Terms and Conditions for details regarding our liability disclaimers.
The Motley Fool recommends Broadcom. The Motley Fool has a disclosure policy.
The viewpoints expressed in this article represent the author’s opinions and do not necessarily reflect those of Nasdaq, Inc.