· During the earnings call, Broadcom executives also discussed strong customer demand, the potential impact of AI sales on gross margins, and the company's development of customized XPU solutions for multiple customers.
· Broadcom CEO Hock Tan repeatedly emphasized that the $73 billion backlog does not represent total delivery revenue over the next 18 months, and that future order growth may still accelerate.
After the market close on Thursday Eastern Time, Broadcom released its fiscal fourth-quarter 2025 earnings report. The report showed that driven by robust AI demand, the company's earnings per share reached $1.95, surpassing analysts' expectations of $1.87; revenue was $18.02 billion, also higher than the expected $17.46 billion.

Despite positive performance, Broadcom’s stock fell more than 4% in after-hours trading. One reason behind this was that the earnings report revealed a $73 billion backlog of AI product orders, which disappointed some investors.

During the earnings call, Broadcom CEO Hock Tan repeatedly stressed that the current $73 billion backlog does not equate to total delivery revenue over the next 18 months, and future order volumes could still accelerate. However, the market reaction remained cautious.
In addition, during the earnings call, Broadcom executives also discussed the potential impact of AI sales on gross margins, the company’s development of customized XPU solutions for multiple customers, and news regarding Broadcom’s collaboration with OpenAI.
Key Points
· Broadcom’s Q4 revenue increased by 28% year-over-year to reach $18 billion.
· Earnings per share of $1.95 exceeded expectations by 4.28%.
· Revenue from AI business grew by 65%, making a significant contribution to overall performance.
· The company announced an increase in quarterly dividends and an extension of its stock repurchase program.
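For readers who want to verify the headline beats above, a quick back-of-the-envelope check (using the actual and consensus figures quoted in this article) reproduces the percentages:

```python
# Sanity check of the headline beats reported in this article.
# "Expected" values are the analyst consensus figures the article cites.
eps_actual, eps_expected = 1.95, 1.87      # earnings per share, USD
rev_actual, rev_expected = 18.02, 17.46    # revenue, billions of USD

eps_beat_pct = (eps_actual - eps_expected) / eps_expected * 100
rev_beat_pct = (rev_actual - rev_expected) / rev_expected * 100

print(f"EPS beat:     {eps_beat_pct:.2f}%")   # ~4.28%, matching the Key Points
print(f"Revenue beat: {rev_beat_pct:.2f}%")   # ~3.21%
```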
Earnings Call Transcript (translated with AI assistance; some content has been abridged)
Conference Call Operator Cherie: Welcome to Broadcom's fiscal year 2025 fourth quarter and full-year financial results conference call. I will now turn the call over to Ji Yoo, Head of Investor Relations at Broadcom, for opening remarks and introductions.
Head of Investor Relations at Broadcom, Ji Yoo: Thank you, Cherie, and good afternoon, everyone. Joining me on today’s call are President and CEO Hock Tan, CFO Kirsten Spears, and President of the Semiconductor Solutions Group, Charlie Kawwas. Broadcom has released its earnings press release and financial statements after the market close, detailing the financial performance for the fourth quarter and full year of fiscal 2025.
President and CEO of Broadcom, Hock Tan: Thank you, Ji. Thank you all for joining us today. We have just concluded our fiscal fourth quarter of 2025. Before diving into the details of the quarter, let me briefly recap the full-year performance.
In fiscal 2025, consolidated revenue grew by 24% year-over-year to a record $64 billion, driven by the AI semiconductor business and VMware. Revenue from AI-related operations surged 65% year-over-year to $20 billion, propelling the semiconductor segment's annual revenue to a record $37 billion. In the infrastructure software segment, widespread adoption of VMware Cloud Foundation (VCF) drove revenue growth of 26% year-over-year to $27 billion. All in all, 2025 was another strong year for Broadcom, and we anticipate that customer spending momentum in AI will continue to accelerate into 2026. Now, let us delve into the details of the fourth quarter of fiscal 2025.
Fourth-quarter total revenue reached a record $18 billion, growing 28% year-over-year and surpassing our guidance, primarily due to stronger-than-expected growth in the AI semiconductor and infrastructure software businesses. Fourth-quarter consolidated adjusted EBITDA hit a record $12.12 billion, up 34% year-over-year. Let me now provide further details on the performance of our two business segments. Semiconductor Solutions revenue was $11.1 billion, with growth accelerating to 35% year-over-year. This robust performance was fueled by AI semiconductor revenue of $6.5 billion, which surged 74% year-over-year. Revenue from this business has grown more than tenfold over the 11 quarters since we began disclosing it. Our custom accelerator business more than doubled year-over-year as customers increasingly adopt custom accelerators (which we refer to as XPUs) to train large language models (LLMs) and monetize platforms via inference APIs and applications.
I would like to add that these XPUs are not only being used by customers for training and inference of internal workloads but, in some cases, are also being extended to other LLM peers. A notable example is Google, whose Tensor Processing Units (TPUs), developed for Gemini, are also being utilized by companies such as Apple, Cohere, and SSI for AI cloud computing. The scale of this trend could be quite substantial.
As you know, in Q3 of fiscal 2025, we received a $10 billion order from Anthropic for our latest Ironwood TPU (the newest generation of Google's TPU). In this quarter (Q4), we received an additional $11 billion order from the same client, with deliveries scheduled for late 2026. However, this does not imply that our other two major clients are also using TPUs. In fact, they prefer to maintain control over their own direction and continue advancing multi-year plans to develop their own custom AI accelerators (XPUs).
Today, I am pleased to announce that this quarter, we secured our fifth XPU client through a $1 billion order with delivery scheduled for late 2026. Now, let’s discuss the AI networking business. Demand in this area has been even stronger as customers expand their data center infrastructures before deploying AI accelerators. Our current backlog of AI switch orders exceeds $10 billion, driven by continued record-breaking demand for our latest 102-terabit-per-second Tomahawk 6 switch — currently the only product on the market with this level of performance. And this is just part of our business.
We have also received record orders for digital signal processors (DSPs), lasers, and other optical components, as well as PCI Express switches that will be deployed in AI data centers. Including our XPUs, the total value of our current orders exceeds $73 billion, accounting for nearly half of Broadcom’s consolidated backlog of $162 billion. We expect this $73 billion in AI-related backlog to be delivered within the next 18 months, and in Q1 of fiscal 2026, our AI business revenue is projected to double year-over-year to $8.2 billion.
Turning to non-AI semiconductor business, Q4 revenue was $4.6 billion, growing 2% year-over-year and increasing 16% sequentially due to seasonal tailwinds in wireless. On a year-over-year basis, broadband showed solid recovery, wireless remained flat, and all other end markets declined as signs of a corporate spending recovery remain limited. Consequently, we expect non-AI semiconductor revenue in Q1 of fiscal 2026 to be approximately $4.1 billion, flat compared to the same period last year, with a sequential decline due to typical seasonality in wireless.
Now, I will discuss the infrastructure software business segment. In the fourth quarter, revenue from the infrastructure software business reached $6.9 billion, an increase of 19% year-over-year, surpassing our expectation of $6.7 billion. Order volumes remained strong, with total contract value signed in the fourth quarter exceeding $10.4 billion, compared to $8.2 billion in the same period last year. By the end of the year, the backlog of orders for the infrastructure software business increased from $49 billion last year to $73 billion.
We expect the first quarter renewals to exhibit seasonal characteristics, with infrastructure software business revenue projected to be approximately $6.8 billion. However, we still anticipate that revenue from the infrastructure software business for fiscal year 2026 will achieve low double-digit growth. Below is our outlook for 2026.
Overall, we expect AI-related business revenue to continue accelerating, becoming the primary driver of the company’s growth, while non-AI semiconductor revenue remains stable. Revenue from the infrastructure software business will continue to benefit from VMware's low double-digit growth. For the first quarter of fiscal year 2026, we project consolidated revenue to reach approximately $19.1 billion, representing a 28% year-over-year increase, with adjusted EBITDA expected to account for 67% of revenue.
Broadcom CFO Kirsten Spears: I will now provide a detailed review of the financial performance for the fourth quarter. Consolidated revenue for the quarter reached a record $18 billion, growing by 28% year-over-year. The gross margin for the quarter was 77.9% of revenue, higher than our initial guidance, primarily due to growth in software business revenue and product mix optimization within the semiconductor business. Consolidated operating expenses totaled $2.1 billion, with $1.5 billion allocated to R&D. Operating income for the fourth quarter reached a record $11.9 billion, up 35% year-over-year. Sequentially, despite a 50-basis-point decline in gross margin due to the semiconductor product mix, operating margin improved by 70 basis points to 66.2%, driven by favorable operating leverage. Adjusted EBITDA was $12.12 billion, accounting for 68% of revenue, surpassing our guidance of 67%. This figure excludes depreciation expenses of $148 million. Now, let us review the profit and loss statements (P&L) for the two business segments, starting with the semiconductor business.
Revenue from the semiconductor solutions segment reached a record $11.1 billion, accelerating to 35% year-over-year growth, driven by the AI business. Semiconductor business revenue accounted for 61% of total revenue this quarter. The gross margin for the semiconductor solutions segment was approximately 68%. Operating expenses grew by 16% year-over-year to $1.1 billion due to increased investment in cutting-edge AI semiconductor R&D. The operating margin for the semiconductor business was 59%, improving by 250 basis points year-over-year. Next, we turn to the infrastructure software business. Revenue from the infrastructure software business reached $6.9 billion, growing 19% year-over-year and accounting for 39% of total revenue. The gross margin for the infrastructure software business this quarter was 93%, compared to 91% in the same period last year. Operating expenses for the quarter were $1.1 billion, with the operating margin for the infrastructure software business improving to 78% from 72% a year ago, reflecting the completion of VMware integration.
Turning to cash flow, free cash flow for the quarter was $7.5 billion, representing 41% of revenue. Capital expenditures amounted to $237 million. Days sales outstanding (DSO) for the fourth quarter was 36 days, compared to 29 days in the same period last year. Inventory at the end of the fourth quarter stood at $2.3 billion, up 4% sequentially. Inventory days on hand were 58 days, down from 66 days in the third quarter, indicating our continued disciplined approach to inventory management across the ecosystem. Cash balance at the end of the fourth quarter was $16.2 billion, increasing by $5.5 billion sequentially, driven by strong cash flow. Our total fixed-rate debt of $76.1 billion carries a weighted average coupon rate of 4% and an average remaining maturity of 7.2 years. Moving on to capital allocation.
In the fourth quarter, we paid $2.8 billion in cash dividends to shareholders based on a quarterly dividend of $0.59 per common share. We project diluted shares outstanding under non-GAAP measures to be approximately 4.97 billion for the first quarter of fiscal year 2026, excluding any potential impact from stock repurchases. Now, I will summarize the financial performance for fiscal year 2025. Company revenue reached a record $63.9 billion, with organic growth accelerating to 24% year-over-year. Revenue from the semiconductor business was $36.9 billion, up 22% year-over-year, while revenue from the infrastructure software business was $27 billion, growing 26% year-over-year. Adjusted EBITDA for fiscal year 2025 was $43 billion, accounting for 67% of revenue. Free cash flow grew 39% year-over-year to $26.9 billion. In fiscal year 2025, we returned $17.5 billion in cash to shareholders through $11.1 billion in dividends and $6.4 billion in stock repurchases and retirements.
In light of the prior year's cash flow growth, we announced an increase in the quarterly cash dividend for common shares to $0.65 per share for the first quarter of fiscal year 2026, representing a 10% increase from the previous quarter. We plan to maintain this target quarterly dividend throughout fiscal year 2026, subject to quarterly board approval. This implies a record annual dividend of $2.60 per common share for fiscal year 2026, marking a 10% year-over-year increase. I would like to emphasize that this represents the 15th consecutive year of annual dividend increases since we began paying dividends in fiscal year 2011. The board also approved an extension of the stock repurchase program through the end of calendar year 2026, with $7.5 billion of repurchase capacity remaining available. Moving on to guidance.
We project consolidated revenue for the first quarter of fiscal year 2026 to reach $19.1 billion, representing a 28% year-over-year increase. We expect semiconductor business revenue to be approximately $12.3 billion, growing 50% year-over-year. Among this, AI semiconductor business revenue for the first quarter is projected to reach $8.2 billion, growing approximately 100% year-over-year. Revenue from the infrastructure software business is expected to be around $6.8 billion, increasing 2% year-over-year. For modeling purposes, we anticipate consolidated gross margin to decline sequentially by approximately 100 basis points in the first quarter, primarily driven by the higher proportion of AI-related revenue. It is important to note that the full-year consolidated gross margin will be influenced by the revenue mix between the infrastructure software and semiconductor businesses, as well as the product mix within the semiconductor business. We expect adjusted EBITDA for the first quarter to account for approximately 67% of revenue. Due to the impact of the global minimum tax rate and changes in geographic revenue composition compared to fiscal year 2025, we forecast the non-GAAP tax rate for the first quarter and full year of fiscal year 2026 to rise from 14% to approximately 16.5%.
That concludes my prepared remarks. Operator, please open the floor for questions.
Conference Call Operator Cherie: Thank you. The first question comes from Vivek Arya at Bank of America. Your line is now open.
Vivek Arya: Thank you. I just want to confirm, you mentioned that there will be $73 billion worth of AI-related orders delivered over the next 18 months, implying that AI revenue for fiscal year 2026 could be around $50 billion, correct? My main question is, there has been some discussion in the market about customers developing their own tools. Do you think your hyperscale customers might prefer to take more of this work in-house? How do you see the share of your XPU products evolving over the next one to two years with your largest customers? Thank you.
Hock Tan, President and CEO of Broadcom: To answer your first question, our statement is accurate — as of now, we have a backlog of $73 billion in orders related to XPUs, switches, DSPs, lasers, and other AI data center products, which are expected to be delivered within the next 18 months. Clearly, this is just the current figure. We fully expect more orders to come in during this period. Therefore, don’t interpret this $73 billion as the total delivery revenue for the next 18 months; we are merely stating that these orders currently exist, and the order volume has been accelerating.
Frankly speaking, we are seeing strong order growth not only in XPUs but also in switches, DSPs, and all other AI data center-related components. The orders we have received over the past three months are on a scale we have never seen before, especially for the Tomahawk 6 switch. It is one of the fastest-deploying products among all the switches we've launched, which is highly noteworthy. Part of the reason is that our Tomahawk 6 switch is currently the only product on the market capable of delivering 102 terabits per second of performance, a core requirement for scaling the latest GPU and XPU clusters.
However, regarding future developments, your core question pertains to the XPU business, correct? My response is, don’t take external rumors as conclusive. This is a long-term development process, a journey that spans multiple years. Currently, there are very few companies engaged in large language model (LLM) businesses, and they have good reasons for wanting to develop their own custom AI accelerators. For functionalities that general-purpose GPUs achieve through software and kernels, you can directly implement them in custom hardware.
Through custom-designed, hardware-driven XPUs, performance can far exceed that of general-purpose GPUs. We have observed this in TPUs and all the accelerators we’ve developed for other clients — whether in sparse cores, training, inference, or decision-making inference, custom XPUs perform significantly better across the board.
So, does this mean that over time, all customers will opt to develop their own solutions? Not necessarily. In fact, silicon technology is constantly evolving. If you’re an LLM company competing in this space, where would you allocate your resources? Especially when you ultimately have to compete with commercial GPU vendors who are relentless in advancing their technologies. Therefore, I believe the notion that customers will shift entirely to developing their own tools is an exaggerated assumption. Frankly, I don’t think it will happen. Thank you.
Conference Call Operator Cherie: Next, we have a question from Ross Seymour at Deutsche Bank. Your line is now open.
Ross Seymour: Hello, thank you for the opportunity to ask a question. Hock, I’d like to follow up on your earlier comments about expanding TPU sales to a broader set of commercial customers. Do you view this as a substitution effect for customers who might otherwise collaborate with you on developing ASICs, or is it actually expanding the overall market size? From your perspective, what financial impacts do you foresee?
Hock Tan: Ross, that’s an excellent question. The most evident trend we are observing right now is that TPUs are primarily being sold to customers who already use TPUs. Their alternative option is typically commercial GPUs. Switching to other customized products represents a completely different scenario — investing in custom accelerators is a multi-year journey, a strategic direction, rather than a transactional or short-term decision.
The shift from GPUs to TPUs is a transactional decision, while developing an in-house AI accelerator is a long-term strategic initiative. We see nothing deterring these customers from their sustained investment toward the ultimate goal of successfully developing and deploying their own custom AI accelerators. This is the trend we currently observe. Thank you.
Conference Call Operator Cherie: The next question comes from Harlan Sur at JPMorgan.
Harlan Sur: Good afternoon. Thank you for the opportunity to ask a question, and congratulations on the strong results, guidance, and execution, Hock. I just want to clarify again: you mentioned a total AI-related backlog of $73 billion over the next six quarters—this is simply a snapshot of the current order book, correct? But considering lead times, I believe customers will continue placing AI-related orders in the fourth, fifth, and sixth quarters. Therefore, as time progresses, the backlog for deliveries in the second half of 2026 could continue to grow, correct? Is that the right understanding? Additionally, given the strong and growing backlog, has the team secured supply commitments for 3nm and 2nm wafers, wafer-level system integration (CoWoS), substrates, and high-bandwidth memory (HBM) to meet all demands outlined in the order book?
I understand you are working to address this issue through advanced packaging solutions, such as building a new facility in Singapore. Could you remind us which specific aspects of the advanced packaging process the Singapore team is focusing on? Thank you.
Hock Tan, President and CEO of Broadcom: Thank you. To answer your first straightforward question, your understanding is correct. The $73 billion represents the backlog we currently have on hand that will be delivered over the next six quarters. You can also expect additional orders to be added to this backlog given our lead times. Therefore, one way to look at it is that revenue over the next six quarters will be at least $73 billion, but we anticipate actual revenue to far exceed this as more orders come in during this timeframe. Lead times vary by product, generally ranging from six months to a year.
Your core question relates to the supply chain, correct? Specifically, the key supply chains in silicon-based products and packaging? Yes. This has been a significant challenge we’ve been continuously addressing. With robust demand growth and increasing requirements for more innovative packaging technologies—what we call advanced packaging—because every custom accelerator now requires multi-chip integration, packaging has become a highly challenging technical issue. The primary purpose of building our facility in Singapore is to internalize part of the advanced packaging process. We believe that with the current strong demand, internalization not only reduces costs but, more importantly, ensures supply chain security and delivery stability. As you mentioned, we are constructing a fairly sizable advanced packaging facility in Singapore dedicated to addressing these advanced packaging needs. Regarding silicon-based products, we still rely on Taiwan Semiconductor’s manufacturing processes, so we’ve been actively securing more 2nm and 3nm capacity. So far, we haven’t encountered limitations in this area, but given our growing backlog, the situation remains to be seen.
Conference Call Operator Cherie: Next, we have a question from Blaine Curtis at Jefferies. Your line is open.
Blaine Curtis: Good afternoon. Thank you for the opportunity to ask a question. Regarding the initial $10 billion deal, you mentioned it was for rack sales. I’m curious about how subsequent orders, as well as those from the fifth customer, will be fulfilled—will they be delivered as XPUs or racks? Could you elaborate on the specifics of what is being delivered and provide related data? Clearly, Google uses its own networking equipment. So, I’m wondering whether the products you deliver will align entirely with what Google is currently using, or will they include your own networking equipment? Thank you.
Hock Tan, President and Chief Executive Officer at Broadcom: Blaine, that's a very complex question. Let me explain: it's a systems sale. It truly is a systems sale. In any AI system used by hyperscalers, there are numerous components beyond just the XPU and custom accelerators. Therefore, we believe it makes sense to conduct business in the form of a systems sale, and we take full responsibility for the entire system, or what you referred to as the rack. I think it's easier to understand this as a systems sale. For the fourth customer, we are selling in the form of a system, which includes our core components. This is different from selling chips alone: throughout the sales process, we are responsible for certifying the system and ensuring it is operational.
Conference Call Operator Cherie: Next, we have a question from Stacy Rasgon at Bernstein. Your line is open.
Stacy Rasgon: Hello. Thank you for the opportunity to ask a question. I would like to discuss the issue of gross margin, which may be somewhat related to the previous question. I understand why the gross margin of the AI business is relatively low — this includes the cost pass-through of high-bandwidth memory (HBM), and system sales might further reduce the gross margin. You have hinted at this in the past, but I would like to know if you could provide more clarity: as AI business revenue begins to grow and system sales gradually progress, how should we view the gross margin level over the next four or six quarters? Could it drop to just above 70%? Or could the overall company gross margin even fall below 70% (starting with a 6)? Additionally, while I understand that the gross margin will decline, what impact will this have on operating margins? Do you believe you can achieve sufficient operating leverage in terms of operating expenses to maintain stable operating margins, or will operating margins also decline?
Broadcom President and CEO Hock Tan: I will let Kirsten provide you with the detailed data, but I’ll briefly address this at a high level, Stacy. This is an excellent question. Currently, we have started some system sales, but you haven’t yet seen the impact on gross margins in the financial data — however, this impact will become evident in the future, and we have publicly stated this before. Clearly, the gross margin of the AI business is lower than that of our other businesses (including software), but we expect that as AI business revenue continues to grow significantly, we will achieve sufficient operating leverage in terms of operating expenses, thus maintaining a high growth level in operating profits. Therefore, although we anticipate gross margins will begin to decline, operating leverage will help support operating margins. That’s the overall picture.
Broadcom CFO Kirsten Spears: Yes. I think Hock's explanation was very accurate. In the second half of this year, when we begin delivering more systems, it will become clear: we will pass through more costs of non-proprietary components — similar to how memory is included in XPU products and related costs are passed through. In rack sales, we will pass through more costs, resulting in lower gross margins. However, as Hock mentioned, in absolute terms, gross profit will increase, but the gross margin percentage will decline. Due to operating leverage, the absolute amount of operating profit will also increase, though the operating margin (as a percentage of revenue) will decline slightly. We will provide more specific guidance closer to the end of the year.
Conference Call Operator Cherie: Next, we have a question from Jim Schneider of Goldman Sachs. Your line is now open.
Jim Schneider: Good afternoon. Thank you for the opportunity to ask a question. Hock, I would like you to clarify your expectations for AI business revenue in fiscal year 2026. I recall you mentioning that the growth rate of AI business revenue in fiscal 2026 will accelerate further from the 65% growth rate in fiscal 2025, and you projected 100% growth for the first quarter. So, I’m wondering if the growth rate for the first quarter serves as a good starting point for the full year, or will the full-year growth rate be slightly lower than the first quarter? Additionally, could you confirm separately whether the $1 billion order from the fifth customer indeed comes from OpenAI — you previously issued a separate announcement regarding this. Thank you.
Broadcom President and CEO Hock Tan: Wow, that’s quite a few questions. Let me start with the outlook for fiscal 2026. As I mentioned earlier, our backlog is changing very rapidly and continues to grow. You are correct; six months ago, we indicated that AI business revenue growth for fiscal 2026 could be in the range of 60%-70% year-over-year. Today, we project that AI business revenue for the first quarter of fiscal 2026 will double. The reason for this projection is the continuous influx of new orders, and we’ve provided milestone data — namely, the $73 billion backlog scheduled for delivery over the next 18 months. As I mentioned in answering the previous question, we fully expect the $73 billion backlog over the next 18 months to continue growing. This is a dynamically evolving target that will adjust over time, but the overall trend is upward.
It is difficult for me to predict the exact growth rate for fiscal 2026 with precision. Therefore, I would rather not provide specific full-year guidance — which is why we do not give annual guidance, but we do provide guidance for the first quarter. Please be patient, and we will provide guidance for the second quarter later. You’re correct in asking whether this represents an accelerating trend. My response is that as we move through 2026, the growth rate is likely to continue accelerating. I hope that answers your question.
Conference Call Operator Cherie: Next, we have a question from Ben Reitzes of Melius Research. Your line is now open.
Ben Reitzes: Hello. Thank you very much. I wanted to ask about something — I’m not sure if the previous questioner mentioned it, but I didn’t hear the answer. I’d like to learn more about the contract with OpenAI, which is expected to begin in the second half of this year and continue until 2029, involving 10 gigawatts of capacity. I assume this refers to the fifth customer’s order. I want to know if you remain confident that this order will serve as a driver of business growth. Are there any obstacles preventing it from becoming a major growth driver? And when do you expect this order to start contributing to revenue, and how confident are you about it? Thank you very much, Hock.
Broadcom President and CEO Hock Tan: In the previous question from Jim, you didn’t hear the answer because I didn’t respond — and I won’t respond to that question now either. This is the fifth customer, a real client, and their business will continue to grow. They are advancing a multi-year proprietary XPU development plan. That’s all I’ll say about that. Regarding the situation with OpenAI you mentioned, we recognize this as a long-term collaboration extending until 2029 — as reflected in our press release with OpenAI. Ben, the 10 gigawatts of capacity corresponds to 2027, 2028, and 2029, not 2026. More specifically, it pertains to 10 gigawatts of capacity during the 2027-2029 period. This is the discussion regarding OpenAI. I refer to it as an agreement, reflecting alignment with OpenAI, a highly respected and valued client. However, we do not expect this agreement to contribute significantly to revenue in 2026.
Conference call operator Cherie: One moment, please. The next question comes from C.J. Muse at Cantor Fitzgerald. Your line is now open.
C.J. Muse: Good afternoon. Thank you for the opportunity to ask a question. I'd like to discuss the custom silicon business and your expectations for generational growth in Broadcom’s compute business. As context, one of your competitors recently launched an XPU product, which is essentially an accelerator for large-scale context windows. I’m curious whether you see this as creating more opportunities for additional XPU product options among the existing five customers. Thank you very much.
Broadcom President and CEO Hock Tan: Thank you. Yes, you are absolutely correct. The advantage of custom accelerators is that we do not attempt a 'one-size-fits-all' approach, and during generational evolution, each of these five customers can develop their own custom XPU accelerators for both training and inference. Essentially, for each customer, these two development tracks progress almost simultaneously. Therefore, we have sufficient product versions to meet demand without needing to create additional ones – simply by developing custom accelerators for these customers, we already possess a diverse portfolio. By the way, when developing custom accelerators, we prefer to integrate more unique, differentiated hardware features rather than relying on software and kernels to achieve those functionalities.
I understand that achieving functionality in software is also highly complex, but by comparison, integrating features such as Sparse Core data routers and dense matrix multipliers into the same chip in hardware is one of the key advantages of custom accelerators. Likewise, even for the same customer, memory capacity or bandwidth may vary across chips, because even within inference a customer may want to optimize for different stages such as decode or prefill. So in practice we develop specialized hardware tailored to the distinct needs of our customers' training and inference workloads. This is a fascinating area, and we see that each customer has multiple chip requirements.
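The portfolio structure Tan describes, parallel training and inference tracks per customer with variants differing in memory capacity and bandwidth, can be pictured with a small model. This sketch is purely illustrative; the customer labels, capacities, and bandwidth figures are hypothetical and not taken from the call.

```python
from dataclasses import dataclass

# Illustrative model of per-customer custom XPU variants: each customer
# runs parallel training and inference development tracks, and variants
# differ in memory capacity and bandwidth. All names and numbers below
# are hypothetical placeholders, not figures from the earnings call.
@dataclass(frozen=True)
class XPUVariant:
    customer: str       # anonymized customer label
    workload: str       # "training" or "inference"
    hbm_gb: int         # on-package memory capacity, GB
    mem_bw_tbps: float  # memory bandwidth, TB/s

portfolio = [
    XPUVariant("customer_a", "training", 192, 8.0),
    XPUVariant("customer_a", "inference", 96, 4.0),   # decode/prefill tuned
    XPUVariant("customer_b", "training", 256, 9.6),
    XPUVariant("customer_b", "inference", 128, 6.4),
]

# Group workload tracks by customer: each carries at least two variants,
# one per track, without any one-size-fits-all part.
per_customer = {}
for v in portfolio:
    per_customer.setdefault(v.customer, set()).add(v.workload)

print({c: sorted(ws) for c, ws in per_customer.items()})
```

The point of the model is simply that the variant count scales with customers times workload tracks, which is why Tan says the existing five customers already yield a diverse portfolio.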
Conference call operator Cherie: One moment, please. The next question comes from Harsh Kumar at Piper Sandler. Your line is now open.
Harsh Kumar: First, congratulations on delivering remarkable results. I have a straightforward question and a more strategic one. The simple question is about your guidance for AI business growth, which indicates a sequential increase approaching $1.7 billion. I am curious about how this growth is distributed among your three existing customers – is it relatively balanced with all customers growing, or is one customer the primary driver? Secondly, Hock, from a strategic perspective, one of your competitors recently acquired a photonics architecture company. I’d like to hear your thoughts on this technology – do you view it as disruptive, or is it currently just a gimmick?
Broadcom President and CEO Hock Tan: I appreciate the way you framed that question; you seemed somewhat hesitant. Thank you for that. On the first question: yes, our business momentum is extremely strong, so much so that it feels like this growth will never end. It is driven collectively by our existing customers and current XPU products, with the latter contributing the most. That does not, however, imply any slowdown in demand for switches (not only Tomahawk 6 but also Tomahawk 5); our latest 1.6 terabit-per-second DSP product, used mainly for scale-out optical interconnects, is also seeing robust demand.
Accordingly, demand for optical components such as lasers and PIN diodes is exceptionally high. All these businesses together are driving the growth. Of course, as you might expect, compared to XPUs, these segments represent relatively smaller revenue streams. To give you a better sense, let's consider the backlog: of the $73 billion AI business backlog projected over the next 18 months, approximately $20 billion comes from other products, with the remainder attributed to XPUs. I hope this gives you insight into our product mix. That said, the $20 billion in other product orders is still substantial, and we take it very seriously. Regarding your second question, silicon photonics, as a technology enabling more efficient and lower-power interconnects, applies not only to scale-out but also holds potential for scale-up applications in the future – I believe that at some point, silicon photonics will become the only solution.
We are not fully there yet, but we have mastered the technology and continue advancing its development – starting with 400 gigabit bandwidth, followed by 800 gigabit bandwidth. These products are not yet ready. We are currently developing silicon photonics-based switches and interconnects for 1.6 terabit bandwidth. However, we are unsure if these will be fully deployed because engineers, both within our company and across the industry, will try to achieve scale-up using copper cables within racks and pluggable optical modules wherever possible. Only when pluggable optics fail to meet requirements, or even copper cannot suffice, will silicon photonics become the inevitable choice – and that day will come. We are prepared, but it is not the right time yet.
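The backlog figures in the answer above imply a rough product split. As a back-of-the-envelope sketch in Python: the $73 billion and roughly $20 billion are the numbers stated on the call, while the XPU remainder and percentage are derived, not quoted.

```python
# Back-of-the-envelope split of the AI backlog described on the call:
# $73B total over ~18 months, of which roughly $20B is networking and
# other products (switches, DSPs, optics), with the remainder in XPUs.
# Figures in billions of USD; the split percentage is derived here.
total_backlog = 73.0
other_products = 20.0

xpu_backlog = total_backlog - other_products
xpu_share = xpu_backlog / total_backlog

print(f"XPU backlog: ${xpu_backlog:.0f}B (~{xpu_share:.0%} of total)")
```

In other words, on the stated figures roughly three quarters of the AI backlog is XPU compute, with networking and optics making up the rest.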
Conference call operator Cherie: One moment, please. The next question comes from Carl Ackerman at BNP Paribas. Your line is now open.
Carl Ackerman: Yes, thank you. Could you discuss your supply chain resilience and visibility with key materials suppliers, particularly in the area of wafer-level system integration (CoWoS) — where you need to support existing customer projects while also meeting the demand for the two new custom computing processors announced this quarter? My core question is, given your significant role in multiple critical segments of the AI networking and computing supply chain, and considering the record backlog you mentioned, what are some of the current supply chain bottlenecks you're facing, and how do you plan to address and mitigate these bottlenecks? How do you see these bottlenecks improving by 2026? Thank you.
Broadcom President and CEO Hock Tan: Generally speaking, supply chain bottlenecks exist across the board. In some respects, though, we are fortunate: we have leading product technologies and business lines that supply many of the key cutting-edge components today's most advanced AI data centers require. For instance, as I mentioned earlier, our DSP products now reach 1.6 terabits per second, providing the leading bandwidth connectivity needed by top-tier XPUs and even GPUs, and we will continue to maintain that leadership. We also offer complementary laser products, including electro-absorption modulated lasers (EML), vertical-cavity surface-emitting lasers (VCSEL), and continuous-wave (CW) lasers. So, fortunately, we have these critical active components and can identify demand trends early, which lets us scale production accordingly during the product design phase.
That was a rather long introduction, but my core point is this: among all suppliers to AI rack-scale data centers (setting aside power enclosures, transformers, gas turbines, and the like), we may be in the best position to understand where the bottlenecks are, because sometimes we ourselves are part of the bottleneck, and we take proactive measures to address these issues. We are optimistic about the supply chain situation heading into 2026.
Conference call operator Cherie: One moment, please. The next question comes from Christopher Rolland at Susquehanna. Your line is now open.
Christopher Rolland: Hello. Thank you for the opportunity to ask a question. First, a clarification question, followed by my main inquiry. I apologize for revisiting this topic, but I want to confirm: your agreement with OpenAI is non-binding, similar to the agreements with NVIDIA and AMD, correct? Secondly, you mentioned that revenue from non-AI semiconductor businesses will remain stable. Is this due to ongoing inventory backlogs? What conditions would be necessary for this business to return to growth? Do you believe this segment will eventually recover and grow again? Thank you.
Broadcom President and CEO Hock Tan: On the non-AI semiconductor business, we are indeed seeing a steady recovery in broadband, but the other segments are not showing the same trend. We expect this business to remain stable, and we currently see no signs of a sustained, significant recovery; it may take a few more quarters. We do not anticipate further deterioration in demand either. In my view, the softness can be attributed to AI-related spending drawing substantial enterprise and hyperscaler investment away from other areas. Apart from broadband, we foresee neither significant declines nor a rapid recovery in the non-AI semiconductor business. That is a brief summary of the non-AI outlook.
Regarding OpenAI, I do not intend to go deep into the details, but I would like to clarify the nature of the 10-gigawatt announcement. Our collaboration with OpenAI on custom accelerators is proceeding very well and will soon come to fruition, including clear commitment terms. The 10-gigawatt announcement I referenced earlier, however, represents an agreement to develop 10 gigawatts of capacity for OpenAI between 2027 and 2029. That is all there is to it; it is separate from the XPU project we are developing for them.
Conference Call Operator Cherie: Thank you. We still have time for one final question. The last question comes from Joe Moore of Morgan Stanley. Your line is now open.
Joe Moore: Thank you very much. If rack-based revenue reaches $2.1 billion in the second half of 2026, will that revenue level be sustained? And will you continue selling racks, or will this business evolve over time? What I mainly want to understand is what proportion of your 18-month backlog consists of full systems.
Broadcom President and CEO Hock Tan: That's an interesting question. Fundamentally, it comes down to our customers' demand for compute capacity 18 months from now. Based on the information we have today, your guess may be as good as mine; it all depends on customer needs. If they require more compute capacity, rack sales will keep growing, potentially at an even larger scale; if not, they may not. What we want to convey is that, based on our current visibility, the guidance reflects customer demand within that timeframe.
Conference call operator Cherie: I will now hand the call back to Ji Yoo for closing remarks.
Broadcom Head of Investor Relations Ji Yoo: Thank you, operator. Broadcom will participate in the New Street Research Virtual AI Big Ideas Conference on Monday, December 15, 2025. Broadcom is currently scheduled to release its Q1 fiscal year 2026 earnings report after market close on Wednesday, March 4, 2026, followed by a live webcast of the earnings call at 2:00 PM Pacific Time. This concludes today’s earnings call. Thank you all for participating. Operator, you may now conclude the meeting.
Editor: rice
