Penguin Solutions Raises Full-Year Outlook on Memory Surge, but Advanced Computing Slippage Signals Execution Risk in AI Factory Pivot
Q2 Fiscal 2026 Earnings Call — April 1, 2026
Penguin Solutions delivered a mixed but net-positive second quarter, raising its full-year revenue and earnings outlook on the back of a supercharged memory business while simultaneously trimming expectations for its Advanced Computing segment. The company now guides to 12% full-year net sales growth at the midpoint, up from 6% previously, with non-GAAP diluted EPS of $2.15 versus the prior $2.00 target. The headline improvement, however, obscures a meaningful downward revision to the AI infrastructure business that management attributed to deployment timing — a characterization the market will want to verify as the year progresses.
Memory Becomes the Engine — and the Risk
Integrated Memory is now the dominant revenue contributor, accounting for 50% of total Q2 net sales at $172 million, up 63% year-over-year. The segment's full-year growth guidance has been lifted to a range of 65% to 75%, driven by a combination of strong AI-related demand across networking, telecommunications, and computing, as well as favorable pricing dynamics in what remains a tight supply environment. CFO Nate Olmstead was direct about the upside ceiling: "To get to the high end of that outlook really just refers to our ability to secure materials, which is the only inhibitor we see right now." The company is using its balance sheet aggressively to buy ahead, with inventory rising to $322 million from $200 million a year ago and accounts payable expanding to $401 million from $238 million. Days of inventory stood at 51 versus 37 a year ago, reflecting deliberate pre-purchasing rather than demand weakness, but the working capital build is real and worth monitoring.
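For readers unfamiliar with the metric, days of inventory can be computed with the standard formula, sketched below with purely hypothetical figures (the company does not disclose the exact basis of its own calculation):

```python
def days_of_inventory(inventory: float, cogs: float, period_days: int = 91) -> float:
    """Days of inventory outstanding: how many days of cost of goods
    sold the current inventory balance represents. A rising figure can
    signal either pre-purchasing ahead of demand or slowing sell-through."""
    return inventory / cogs * period_days

# Hypothetical example: $200M of inventory against $400M of quarterly
# COGS over a 90-day quarter works out to 45 days of inventory.
print(days_of_inventory(200, 400, 90))
```

The same balance held against a smaller COGS base yields a higher day count, which is why the metric must be read alongside demand indicators before concluding anything about inventory health.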
On margin, management flagged that second-half gross margins will face pressure. The full-year non-GAAP gross margin outlook was trimmed by one percentage point to 28%, plus or minus 50 basis points, due to a richer mix of lower-margin memory and AI hardware sales and rising memory input costs in the AI factory business. A favorable timing dynamic between inventory purchases and shipment pricing boosted Q2 gross margins to 31.2%, up 120 basis points sequentially, but Olmstead cautioned that this effect moderates as price increases slow. Flash memory, which carries higher margins within the portfolio, also contributed positively to the quarter's mix.
Advanced Computing: The Honest Problem
Advanced Computing net sales of $116 million were down 42% year-over-year, making it the segment that forced management's hand on guidance. The full-year outlook for the segment was revised to minus 25% to minus 15% year-over-year, worse than prior expectations. Management offered two structural explanations and one cyclical one. The structural factors are well-understood: the wind-down of the high-margin Penguin Edge business and the deliberate exit from hyperscale hardware sales together represent approximately a 30-percentage-point drag on Advanced Computing year-over-year growth. The cyclical explanation — revenue lagging bookings by three to six months, with five months remaining in the fiscal year — is harder to verify independently but was consistent across management's answers.
CEO Kash Shaikh, on his first earnings call in the role having joined in early February, acknowledged the issue squarely: "Most of the bookings that we are expecting may not materialize into the revenue for the second half of this fiscal, but we believe that it will have a positive impact going into the first half of next fiscal." That is a reasonable explanation for project-based infrastructure businesses, but it defers the revenue recognition story to a period beyond the current fiscal year, which investors need to weigh carefully.
The underlying non-hyperscale AI/HPC business does show genuine momentum. Non-hyperscale AI/HPC net sales grew 50% year-over-year for the first half of fiscal 2026 and now represent over 40% of Advanced Computing revenue, versus roughly 20% in the prior-year first half. The company added five new AI/HPC customer logos in Q2 alone — across financial services, biomedical research, and energy — bringing the first-half total to seven versus three in the same period last year. Deployment cycles of 12 to 18 months, however, mean these wins will translate to revenue on a timeline that extends well into fiscal 2027.
The AI Factory Platform Strategy Takes Shape
Shaikh used the call to articulate a six-pillar AI factory platform framework: ClusterWare infrastructure management software, the new MemoryAI line of inference-optimized systems, Advanced Computing Systems, OriginAI reference architectures, end-to-end services, and a partner ecosystem anchored by NVIDIA, SK Telecom, and Dell. The strategic framing is coherent and addresses a real market transition. "Model training was largely compute bound; inference powering agentic AI is memory bound and latency sensitive," Shaikh stated, positioning the company's combined memory and compute heritage as structurally differentiated.
The MemoryAI product line, announced at NVIDIA GTC in March, is the most significant new product disclosure. It includes a CXL-based KV Cache server — a system designed to store inference context and accelerate large language model response times — as well as a broader portfolio of scalable memory systems built on Compute Express Link interconnects. Shaikh offered an unusually accessible explanation of the value proposition: "If you are writing a book and you have to write a new sentence without having memory as a supporting component, you will have to reread the entire book before writing the next sentence. That's kind of how it is changing for enterprises." The analogy captures why KV Cache is becoming architecturally important as inference scales.
Proof points are emerging. The company sold CXL-based KV Cache servers to a Tier 1 financial institution for its on-premises AI factory — the same customer is also purchasing broader AI infrastructure — and received a "substantial order" for CXL cards from a generative AI company building inference solutions. These are early but meaningful validations that the product strategy is connecting with buyers. Olmstead confirmed that CXL solutions are expected to carry materially higher gross margins than the core memory module business, given the software and hardware differentiation embedded in the product.
Photonic Memory and Celestial AI Proceeds
Penguin disclosed receiving approximately $32 million in proceeds from the sale of its investment in Celestial AI following that company's acquisition by Marvell Technology in a multibillion-dollar deal. Celestial AI was developing photonic interconnect technology for memory scaling — the same technology underpinning what Penguin calls its Photonic Memory Appliance. Management was careful to position this not as a strategic setback but as a future opportunity, noting that they are now "positioning for future growth in this market" as the Marvell relationship develops. Shaikh clarified the technical architecture: CXL provides a solid near-term solution for memory pooling between GPUs and CPUs, the KV Cache server addresses latency requirements for larger inference context windows, and photonic connectivity represents the next scaling step — offering greater memory-sharing capacity than electrical CXL alone. The three layers are complementary rather than sequential dependencies.
Capital Allocation and Balance Sheet
The company ended Q2 with $489 million in cash and equivalents and $450 million in debt, maintaining a net cash position. With convertible notes retired and no scheduled debt maturities until 2029, the balance sheet provides meaningful operational flexibility. The company repurchased approximately 1.7 million shares for $32 million in the quarter, with $64.5 million remaining under the current authorization as of late February. Working capital expansion — driven by strategic memory inventory builds and higher accounts receivable from memory volume growth — consumed cash in the quarter, with operating cash flow at $55 million versus $73 million a year earlier. Capital expenditures remain minimal at $2 million for the quarter, reflecting the asset-light nature of the business model.
New Leadership and Organizational Signals
Beyond Shaikh's arrival as CEO, the company appointed Ian Colle as Chief Product Officer, bringing over two decades of AI infrastructure and HPC experience, most recently from Amazon Web Services. The hire is notable given Penguin's stated intent to accelerate product innovation in ClusterWare software and the MemoryAI line. Olmstead noted that a new Chief Revenue Officer who came in a couple of quarters ago "has done a nice job of adding some more rigor to the planning process in our AI business." The combined leadership reset is encouraging but also means investors are evaluating a largely new executive team against an ambitious strategic transition simultaneously — a risk factor that is difficult to quantify but real.
NVIDIA Alignment and Competitive Positioning
When asked whether NVIDIA's own AI factory reference designs represent competitive pressure, Shaikh was unambiguous that they are complementary: "Their blueprints are more complementary to our AI factory platform and the components that make up for it." The company's OriginAI Factory Architecture overlays on top of NVIDIA blueprints, adding cluster management software, memory systems, services, and ecosystem integration. As NVIDIA increasingly targets enterprise customers — a strategic priority Shaikh noted aligns directly with Penguin's own go-to-market shift — the partnership dynamic looks constructive. The Deepgram collaboration, deploying Dell PowerEdge servers with NVIDIA RTX Pro 6000 Blackwell GPUs for enterprise voice AI in healthcare and retail, illustrates how the three-way partnership can generate real wins in vertical markets.
LED Segment: Managed Decline
The Optimized LED segment continues on a controlled downward trajectory, with Q2 net sales of $56 million down 7% year-over-year. Full-year guidance remains minus 15% to minus 5%. Management described the business as operating with "focused leadership and dedicated operational discipline," language that signals maintenance mode rather than investment. Tariff cost recovery provided some gross margin support in Q2, though that benefit is expected to diminish in the second half.
Penguin Solutions Deep Dive
Architectural Framework and Value Capture
Penguin Solutions, having completed its corporate metamorphosis from SMART Global Holdings in late 2024, operates as a pure-play artificial intelligence infrastructure architect. Rather than merely assembling commoditized hardware, the company monetizes the immense friction associated with deploying enterprise-scale computing. The business model is segmented into three primary vectors: Advanced Computing, Integrated Memory, and a legacy Optimized LED division. Advanced Computing serves as the tip of the spear, where Penguin designs, liquid-cools, deploys, and actively manages high-performance computing clusters. Instead of simply handing a client a pallet of servers, Penguin utilizes its proprietary ICE ClusterWare orchestration software to offer turnkey, pre-validated artificial intelligence factories under the OriginAI banner. The monetization engine here is bilateral, capturing upfront high-ticket integration revenues alongside a sticky, recurring stream of managed services and lifecycle support contracts. The Integrated Memory segment functions as a critical adjacent pillar, leveraging decades of specialty module expertise to design high-margin, ruggedized memory solutions that solve severe bandwidth bottlenecks within compute-heavy environments. By managing both the structural plumbing of the data center and the specific memory constraints of the chips inside it, Penguin commands a unique, vertically integrated value proposition that yielded non-GAAP gross margins of 31.2 percent in the most recent quarter, a stark contrast to the low double-digit margins of standard original equipment manufacturers.
Ecosystem Topography: Customers and Supply Chain
The customer base has undergone a deliberate evolution over the past three years. Historically anchored by massive, bespoke hyperscale deployments, most notably serving as the primary infrastructure architect for Meta's original artificial intelligence Research SuperCluster, Penguin has actively diversified its revenue concentration. Today, the demand matrix is heavily weighted toward non-hyperscale entities grappling with the sudden imperative to deploy generative models. This includes tier-two neo-cloud service providers, sovereign-scale projects such as Korea's SK Telecom, which recently deployed a massive NVIDIA Blackwell cluster, leading tier-one financial institutions, and specialized national laboratories. On the supply side, the ecosystem is tightly tethered to the dominant silicon designers. As an Elite Partner, Penguin relies heavily on NVIDIA for graphics processing unit allocations, while concurrently maintaining deep engineering partnerships with Advanced Micro Devices and Intel for central processing unit and advanced memory integration. While this supplier concentration introduces inherent allocation risks, Penguin's ability to act as a preferred, highly reliable conduit for getting complex silicon into production environments cements its indispensability within the vendor hierarchy.
Market Share Dynamics and the Competitive Arena
The global artificial intelligence server market is currently a battleground dominated by massive scale on one end and fierce margin degradation on the other. Industry data indicates Dell Technologies leads the pack with roughly a 20 percent market share, followed closely by Hewlett Packard Enterprise at 15 percent, Chinese conglomerate Inspur at 12 percent, Lenovo at 11 percent, and Supermicro sitting near 9 percent. Within this oligopolistic structure, Penguin Solutions operates as a smaller, highly specialized insurgent. Instead of competing on raw volume and brutal price undercutting against the likes of Taiwanese original design manufacturers, Penguin targets the complex middle layer of the market. This segment comprises enterprise and sovereign buyers who lack the internal engineering armies of a Google or Amazon and therefore require intensive co-design, liquid cooling expertise, and post-deployment cluster management. By focusing on total cost of ownership and rapid time-to-first-token rather than the cheapest initial hardware bill of materials, Penguin defends its premium pricing. The current market reality dictates that while behemoths like Dell leverage vast direct sales teams to capture generalized corporate IT spending, Penguin captures the highly technical, thermal-constrained, and bespoke supercomputing deployments where failure is catastrophic.
Industry Tailwinds and the Governance Vacuum
The most acute industry dynamic accelerating Penguin's growth trajectory is the unprecedented governance crisis currently engulfing a primary competitor. In March 2026, the United States Department of Justice unsealed indictments against executives linked to Supermicro, alleging a staggering multi-billion dollar scheme to smuggle restricted artificial intelligence servers to Chinese buyers. This legal reckoning, compounding earlier accounting irregularities, has fundamentally altered the procurement calculus for sophisticated infrastructure buyers. Sovereign wealth funds, highly regulated financial institutions, and government agencies face massive compliance risks and cannot afford to design mission-critical architecture around a compromised vendor. This has created an estimated 15 billion dollar compliance vacuum in the tier-two cloud and enterprise market. Penguin Solutions is currently the cleanest artificial intelligence infrastructure play capable of absorbing this exact demand profile. The flight to governance is actively redirecting procurement pipelines toward Penguin, providing a structural tailwind that transcends normal cyclical hardware demand. Simultaneously, the broader transition from air-cooled data centers to direct liquid cooling necessitated by next-generation silicon power draws acts as a natural filter, eliminating low-value hardware assemblers and further consolidating pricing power among specialized integrators.
Structural Moats in a Commoditized Hardware Landscape
Hardware assembly is notoriously bereft of economic moats, yet Penguin has engineered distinct structural advantages that insulate it from a pure race to the bottom. The first moat is its 25-year pedigree in supercomputing and thermal orchestration. As rack power densities soar past 120 kilowatts to support new silicon architectures, deploying infrastructure is no longer an exercise in plugging in servers; it is an exercise in advanced fluid dynamics and facility-level power management. Competitors attempting to pivot into high-density artificial intelligence deployments face a steep, expensive learning curve in direct liquid cooling, whereas Penguin possesses proven, battle-tested reference designs. The second moat is software-defined stickiness. Penguin's ICE ClusterWare provides the critical control plane that keeps thousands of disparate graphics processing units functioning synchronously, predicting hardware failures, and minimizing cluster downtime. This software layer transforms a transactional hardware sale into an ongoing operational dependency. The final moat is vertical integration via its Integrated Memory division. While competitors must source third-party memory modules subject to brutal spot market pricing, Penguin designs its own application-optimized memory, insulating its supply chain and allowing it to co-design server architectures from the silicon up to the software layer.
Technological Catalysts: Dismantling the Memory Wall
As the artificial intelligence industry pivots aggressively from training base models toward running persistent, enterprise-scale inference and agentic workflows, the primary operational bottleneck has shifted from raw compute power to memory bandwidth, a phenomenon commonly known as the memory wall. In early 2026, Penguin capitalized on this inflection point by launching the industry's first production-ready memory appliance utilizing Compute Express Link, or CXL, technology. Branded as the MemoryAI KV Cache Server, this hardware creates a massive, disaggregated pool of memory, up to 11 terabytes, that can be accessed by the entire computing cluster. By offloading the key-value cache from the expensive and strictly constrained memory on the graphics processing units themselves, Penguin dramatically reduces inference latency and eliminates compute idle time. The revenue implications are already visible: the Integrated Memory segment posted 63 percent year-over-year growth in the second fiscal quarter of 2026, driven by broad AI-related demand, with early CXL-based orders emerging as an additional driver. This establishes Penguin not just as a cluster builder, but as an inventor of technologies that address the most expensive bottlenecks in modern computing.
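The offload pattern described above can be sketched as a two-tier cache. In this hypothetical simplification (not Penguin's architecture), a small fast tier stands in for constrained GPU memory and a large pooled tier stands in for a CXL-attached appliance; when the fast tier fills, the oldest entries spill to the pool instead of being evicted and recomputed:

```python
from collections import OrderedDict


class TieredKVCache:
    """Hypothetical two-tier KV cache: a small fast tier (standing in
    for GPU high-bandwidth memory) spills its oldest entries to a
    large pooled tier (standing in for disaggregated CXL memory)."""

    def __init__(self, fast_capacity: int):
        self.fast_capacity = fast_capacity
        self.fast = OrderedDict()   # insertion-ordered: oldest first
        self.pooled = {}            # effectively unbounded here

    def put(self, token_id, kv):
        self.fast[token_id] = kv
        if len(self.fast) > self.fast_capacity:
            # Offload the oldest entry rather than discarding it,
            # so its attention state never has to be recomputed.
            oldest, spilled = self.fast.popitem(last=False)
            self.pooled[oldest] = spilled

    def get(self, token_id):
        if token_id in self.fast:
            return self.fast[token_id]       # fast path
        return self.pooled.get(token_id)     # slower, but no recompute


# Usage: a fast tier holding only 2 entries still serves older context.
cache = TieredKVCache(fast_capacity=2)
cache.put(1, "kv-1")
cache.put(2, "kv-2")
cache.put(3, "kv-3")   # entry 1 spills to the pooled tier
print(cache.get(1))    # served from the pool instead of recomputed
```

The economic argument in the paragraph above falls out of this structure: pooled retrieval is slower than local memory but far cheaper than re-running the model over old context, so total cost of inference drops as the pool absorbs overflow.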
Leadership Transition and Strategic Execution
The management track record reflects a successful, albeit complex, strategic pivot. Former Chief Executive Officer Mark Adams orchestrated the critical transformation of the company, shedding the legacy SMART Global Holdings identity, divesting non-core geographic assets like the Brazilian memory division, and unifying the disparate advanced computing assets under a cohesive enterprise brand. In February 2026, the company executed a calculated leadership transition, appointing Kash Shaikh as the new Chief Executive Officer. Shaikh brings a highly relevant pedigree from his tenures at cybersecurity and cloud optimization firms, where he specialized in scaling agentic computing solutions and driving global enterprise software adoption. His mandate is to transition Penguin from an infrastructure provider to a comprehensive artificial intelligence platform company. The strategic execution over the past few years has been clinically disciplined, evidenced by the deliberate wind-down of the low-margin legacy Edge computing business. While this wind-down creates an optical drag on top-line revenue growth, it surgically removes dilutive revenues, directly resulting in the consistent expansion of operating margins and underlying earnings per share.
The Scorecard
Penguin Solutions stands as a structurally vital, yet chronically underappreciated, architect of the modern computing economy. By successfully marrying complex thermal engineering, proprietary orchestration software, and bespoke memory design, the company captures outsized economics in a market otherwise characterized by commoditization and margin compression. The current strategic posture is uniquely asymmetric; Penguin possesses the technical pedigree to deploy the world's most advanced computing clusters while serving as the primary beneficiary of a massive, compliance-driven procurement shift stemming from the legal collapse of its closest specialized competitor.
The financial translation of this positioning is highly compelling. With the memory segment currently demonstrating explosive growth of more than 60 percent year-over-year, driven by structural technology adoption, and the computing pipeline expanding aggressively into sovereign and enterprise verticals, the business is operating from a point of maximum operational leverage. By discarding dilutive legacy units and focusing exclusively on solving the acute friction of architectural scaling, the current management team is actively widening the company's competitive moat, ensuring durable profitability in the most capital-intensive technological arms race of the decade.