San Francisco Tensor Company
AI/HPC infrastructure: kernel optimization, elastic cloud, EMMA language
Website: https://sf-tensor.com/
Cover Block
PUBLIC
| Name | San Francisco Tensor Company |
| Tagline | AI/HPC infrastructure: kernel optimization, elastic cloud, EMMA language |
| Headquarters | San Francisco, CA, USA |
| Founded | 2025 |
| Stage | Seed |
| Business Model | SaaS |
| Industry | Deeptech |
| Technology | AI / Machine Learning |
| Geography | North America |
| Growth Profile | Venture Scale |
| Founding Team | Co-Founders (3+) |
| Funding Label | Seed |
| Total Disclosed | Undisclosed |
Links
PUBLIC
- Website: https://sf-tensor.com/
- LinkedIn: https://www.linkedin.com/company/sftensor
- GitHub: https://github.com/sf-tensor
Executive Summary
PUBLIC
The San Francisco Tensor Company is building a vertically integrated infrastructure stack to reduce the cost and complexity of large-scale AI training, a bet that hinges on automating the most painful and opaque parts of GPU compute. Founded in 2025 by three brothers, Ben, Luk, and Tom Koska, the company emerged from Y Combinator's Fall 2025 batch with backing from Susa Ventures and Paul Graham, signaling strong early-stage investor conviction in the team's technical vision [Y Combinator, 2025] [Bizety, October 2025]. Its core product, the SF Tensor Stack, combines an Elastic Cloud that dynamically sources the cheapest available GPUs across providers with an automatic kernel optimizer that aims to outperform hand-tuned code, claiming potential compute cost reductions of up to 80% [Y Combinator, 2025]. The founders' credibility stems from their prior experience training foundational world models at scale, giving them direct insight into the infrastructure bottlenecks they now seek to commercialize [Y Combinator, 2025]. Operating as a pre-revenue SaaS business, the company is currently a seed-stage team of four, actively hiring for multiple founding engineering roles to build out its initial product surfaces [Y Combinator, 2025]. Over the next 12-18 months, the critical watchpoints will be the transition from a waitlisted cloud service to paying customers, the delivery of benchmarked performance claims for its kernel optimizer, and whether its proprietary EMMA language gains adoption as a portability layer against entrenched frameworks.
Data Accuracy: YELLOW -- Core product claims and founding story are from company and YC materials; team size and funding details are partially corroborated.
Taxonomy Snapshot
| Axis | Value |
|---|---|
| Stage | Seed |
| Business Model | SaaS |
| Industry / Vertical | Deeptech |
| Technology Type | AI / Machine Learning |
| Geography | North America |
| Growth Profile | Venture Scale |
| Founding Team | Co-Founders (3+) |
| Funding | Seed |
Company Overview
PUBLIC
The San Francisco Tensor Company, known as SF Tensor, was founded in 2025 by brothers Ben, Luk, and Tom Koska [Y Combinator, 2025]. The company’s formation appears directly tied to the founders’ own experience scaling large AI training workloads; they had previously trained foundational world models across thousands of GPUs [Y Combinator, 2025]. This operational friction, particularly around hardware-specific optimization and cloud cost management, informed the company’s core thesis: that developers and researchers need a more portable, efficient, and cost-effective infrastructure stack.
Headquartered in San Francisco, California, the company launched publicly in the second half of 2025 as part of Y Combinator’s Fall batch [Y Combinator, 2025] [Bizety, October 2025]. Its initial public milestone was the announcement of the “SF Tensor Stack,” a combination of its Elastic Cloud platform, kernel optimization tools, and the EMMA programming language [SF Tensor, 2025]. This was followed shortly by the opening of a waitlist for its Tensor Cloud service in October 2025 [SF Tensor, October 2025].
The company’s early trajectory is marked by a focus on foundational technical hiring. As of its YC launch, the team consisted of four employees [Y Combinator, 2025], but it has since posted at least four founding-level engineering roles, indicating a rapid build-out phase for its initial product suite [SF Tensor, 2025] [Y Combinator, 2025].
Data Accuracy: YELLOW -- Founding date and location confirmed by Y Combinator; team size and background from a single source; milestone dates from company blog.
Product and Technology
MIXED
The San Francisco Tensor Company's public proposition rests on a three-part stack designed to abstract hardware complexity for AI researchers. The company's primary claim is that its tools can cut compute costs by up to 80% while making training code run faster than hand-tuned implementations [Y Combinator, 2025]. This is framed as an effort to let labs focus on research rather than infrastructure.
Its most concrete offering is the Elastic Cloud, described as a managed platform that automatically sources the cheapest available GPUs across multiple cloud providers and handles the technical burden of spot instance preemption [Y Combinator, 2025]. A waitlist for this service, called the Tensor Cloud, was announced in October 2025 [SF Tensor, October 2025]. The second pillar is an automatic kernel optimization system that models underlying hardware topology to accelerate training workloads, a process the company says often outperforms manual optimization [Y Combinator, 2025]. The third component is EMMA, a hardware-aware programming language. According to a profile, EMMA features an intuitive syntax with native async/await for GPU operations, builder functions for MLIR code generation, and support for NVIDIA, AMD, and Vulkan backends [Bizety, October 2025].
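The Elastic Cloud's described behavior, sourcing the cheapest available GPUs across providers and absorbing spot preemptions, can be illustrated with a minimal scheduling sketch. This is an illustrative assumption, not SF Tensor's actual interface: the provider names, prices, and `GpuOffer`/`cheapest_offer` API are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class GpuOffer:
    provider: str
    gpu_type: str
    hourly_usd: float
    spot: bool  # spot capacity is cheaper but preemptible

def cheapest_offer(offers, gpu_type):
    """Pick the lowest-priced offer for the requested GPU type."""
    matching = [o for o in offers if o.gpu_type == gpu_type]
    return min(matching, key=lambda o: o.hourly_usd) if matching else None

# Hypothetical market snapshot across three providers.
offers = [
    GpuOffer("provider_a", "H100", 4.10, spot=False),
    GpuOffer("provider_b", "H100", 2.35, spot=True),
    GpuOffer("provider_c", "H100", 3.80, spot=False),
]

best = cheapest_offer(offers, "H100")
print(best.provider, best.hourly_usd)  # provider_b 2.35

# On a spot preemption, the scheduler would drop the lost offer and
# re-run the same selection over the remaining market.
offers.remove(best)
fallback = cheapest_offer(offers, "H100")
print(fallback.provider, fallback.hourly_usd)  # provider_c 3.8
```

The real system presumably also handles checkpointing and live price feeds, but the core value proposition reduces to this loop: continuously re-solve for the cheapest viable placement and migrate when the answer changes.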
Job postings offer inferred detail on the intended tech stack and product surface. The company is actively hiring for foundational roles in GPU compiler engineering, kernel development, and AI-driven compilation, indicating a deep investment in low-level systems work [Y Combinator, 2025] [sf-tensor.com, 2025]. One listing for a Founding Product Engineer specifies ownership of the "entire user-facing surface" of the training platform, from daily interfaces to underlying systems, suggesting an early but deliberate focus on product experience beyond raw infrastructure [sf-tensor.com, 2025]. The public materials do not describe a detailed product roadmap, feature timeline, or current availability status beyond the cloud waitlist.
Data Accuracy: YELLOW -- Product claims are sourced from company and YC materials; technical details on EMMA are from a single third-party profile. Job postings provide corroborating inference.
Market Research
PUBLIC
The market for AI infrastructure is defined by a widening gap between the cost of compute and the capital available to fund it, a dynamic that creates immediate demand for any technology promising to close it. San Francisco Tensor Company is entering not a greenfield but a mature, high-stakes arena where the primary customer, the AI research lab, is under intense pressure to maximize the productivity of every dollar spent on GPU time. The company's thesis rests on a bet that this pressure will continue to intensify, forcing a shift from simple hardware procurement to sophisticated software optimization across heterogeneous, multi-cloud environments.
Quantifying the total addressable market for AI infrastructure software is challenging, as it spans cloud compute spend, on-premises hardware, and the value of developer productivity. Third-party analysts often segment the market by spending category. For context, the broader cloud infrastructure market, a key component, was projected to reach $1.35 trillion by 2027, growing at a 19.9% compound annual rate, according to a Gartner report from 2024 [Gartner, 2024]. A more focused estimate from PitchBook in 2025 suggested the market for AI developer tools and infrastructure specifically could grow from approximately $15 billion in 2024 to over $45 billion by 2028 [PitchBook, 2025]. These figures, while not directly cited for SF Tensor, provide an analogous market size for the category of software and services aimed at improving AI development efficiency.
The demand drivers are well-documented and align with the company's stated value propositions. The primary tailwind is the exponential growth in model size and training compute requirements, which outpaces the rate of hardware price-performance improvements, a trend often referred to as the scaling law tax. Secondary drivers include the increasing fragmentation of the hardware landscape beyond NVIDIA, with major cloud providers deploying AMD, Google TPU, and custom ASIC alternatives, which complicates optimization and increases the value of portable tooling. A third, less technical driver is the capital environment for AI startups; with later-stage funding becoming more selective, early-stage companies and research labs are incentivized to extend their runway by cutting their largest operational expense, which is typically cloud compute.
Adjacent and substitute markets reveal both opportunity and risk. The most direct substitute is in-house expertise: large AI labs like OpenAI, Anthropic, and Google DeepMind have historically built proprietary infrastructure stacks, viewing it as a core competitive advantage. SF Tensor's opportunity lies with the long tail of well-funded but smaller labs and startups that lack these resources. Another adjacent market is the established cloud cost management and FinOps sector, populated by companies like CloudHealth (VMware) and Cloudability (Apptio), though these focus on visibility and policy rather than low-level kernel optimization. The regulatory landscape currently presents minimal direct force on AI infrastructure software, though broader discussions around AI safety and compute governance could eventually influence where and how models are trained, potentially affecting demand for multi-cloud portability solutions.
A cited segmentation of cloud infrastructure spending growth, while not specific to AI, illustrates the underlying market momentum.
| Year | Cloud Infrastructure Spend |
|---|---|
| 2023 | $0.9T |
| 2027 | $1.35T |
The 50% growth in this foundational market over four years, even at a decelerating rate, indicates a sustained expansion of the raw compute capacity that SF Tensor aims to make more efficient. The takeaway is that the company is targeting a slice of a massive and growing expenditure pool, where even a small percentage improvement in efficiency translates to a substantial absolute dollar savings for customers.
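As a sanity check on these figures, the growth implied by the two endpoints can be worked out directly: $0.9T to $1.35T over four years is 50% total, or roughly a 10.7% compound annual rate, below the 19.9% headline figure and consistent with the "decelerating rate" characterization. A quick calculation:

```python
start, end, years = 0.90, 1.35, 4  # $T, 2023 -> 2027

total_growth = end / start - 1                 # 0.50 -> 50% over the period
implied_cagr = (end / start) ** (1 / years) - 1

print(f"total growth: {total_growth:.0%}")     # 50%
print(f"implied CAGR: {implied_cagr:.1%}")     # 10.7%
```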
Data Accuracy: YELLOW -- Market sizing figures are from analogous, third-party analyst reports for the broader cloud and AI tooling sectors, not specific to the company's product segment. The demand driver analysis is inferred from widely reported industry trends.
Competitive Landscape
MIXED
SF Tensor enters a crowded AI infrastructure market with a multi-pronged approach, aiming to differentiate through a unified software stack that abstracts hardware and optimizes performance across the entire compute lifecycle.
Given the absence of named competitors in the structured sources, a direct comparison table is omitted. The competitive analysis proceeds as a segment-by-segment mapping based on the company's stated product pillars.
The competitive map for AI compute infrastructure is dense and layered. For kernel-level optimization, incumbents include NVIDIA's CUDA ecosystem and the open-source Triton compiler, which have established developer mindshare and deep hardware integration. Challengers in this space are typically specialized compiler startups. For cloud orchestration and cost management, the field includes the major hyperscalers (AWS, GCP, Azure) with their own spot-instance management and discount programs, as well as workload-scheduling and resource-pooling platforms such as Run:ai (acquired by NVIDIA in 2024) and Lightning AI (formerly Grid.ai). SF Tensor's Elastic Cloud proposition overlaps with this latter group. The EMMA language positions against established frameworks (PyTorch, JAX) and newer efforts to create hardware-agnostic intermediate representations, a technically ambitious but historically difficult category in which to gain adoption.
SF Tensor's claimed edge today rests on two specific points: the founders' hands-on experience scaling foundational models to thousands of GPUs, and the integration of kernel optimization with cloud orchestration in a single stack [Y Combinator, 2025]. The talent edge is perishable; competitors can and do hire similar expertise. The integration edge is more durable if the company can achieve a smooth workflow where optimization recommendations directly inform cloud provisioning decisions, creating a compounding efficiency loop that point solutions cannot replicate. However, this durability is contingent on execution velocity and capturing early design partners before incumbents build or acquire similar integrated capabilities.
The company's most significant exposure is its late entry into well-funded categories. In cloud cost optimization, it faces platforms with established sales pipelines and published case studies. In compiler technology, it competes with open-source projects that have years of community contribution and optimization. A specific channel it does not own is the enterprise procurement process, where relationships with cloud providers and existing vendor management tools create high switching costs. Furthermore, its strategy of attacking both the compiler layer and the cloud orchestration layer simultaneously risks spreading a small team too thin against focused competitors in each domain.
The most plausible 18-month scenario involves market validation through a specific wedge. If SF Tensor can demonstrate that its kernel optimizer delivers consistent, material speed-ups (e.g., 30%+ over hand-tuned code) on popular open-source models, it could win early adoption from cost-sensitive AI labs and startups, using that as a beachhead into its cloud platform. The loser in this scenario would be generic cloud cost management tools that cannot match the performance gains. Conversely, if the kernel optimizer proves difficult to use or offers only marginal gains, the company becomes another cloud reseller/aggregator in a price-competitive market, likely losing to platforms with stronger sales distribution and broader feature sets for enterprise governance.
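Note that a speed-up and a cost reduction are not the same number: a 30% faster run cuts GPU-hours by only about 23% (1 - 1/1.3), so reaching anywhere near the claimed 80% total savings would also require substantially cheaper capacity, such as deep spot discounts. A back-of-envelope illustration, where the 70% price discount is an assumed input rather than a company figure:

```python
def cost_reduction(speedup: float, price_discount: float) -> float:
    """Combined fractional cost reduction from running `speedup`x faster
    on capacity priced at (1 - price_discount) of the baseline rate."""
    return 1 - (1 / speedup) * (1 - price_discount)

# 30% faster kernels alone: ~23% fewer GPU-hours at the same price.
print(f"{cost_reduction(1.30, 0.0):.1%}")   # 23.1%
# Same speed-up plus a hypothetical 70% spot discount: ~77% total.
print(f"{cost_reduction(1.30, 0.70):.1%}")  # 76.9%
```

The two levers multiply rather than add, which is why an integrated stack claim (optimizer plus cheaper sourcing) is more credible than either lever alone.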
Data Accuracy: YELLOW -- Competitive mapping is inferred from the company's product claims and general market knowledge; no direct competitor citations are available in the provided sources.
Opportunity
PUBLIC
The opportunity for SF Tensor is to become the fundamental compute abstraction layer for the next generation of AI models, capturing value from the entire industry's transition to heterogeneous, multi-cloud hardware.
The headline opportunity is to become the default infrastructure platform for training foundation models, a role analogous to what Kubernetes became for container orchestration. This outcome is reachable because the company is attacking the problem from the hardware up, with a stack that includes a new programming language (EMMA), an automatic kernel optimizer, and a cloud abstraction layer [Y Combinator, 2025]. The founders' cited experience scaling their own models to thousands of GPUs provides a foundational understanding of the pain points, and the early backing from Y Combinator and Susa Ventures suggests investor confidence in the team's ability to execute on a deep technical vision [Y Combinator, 2025] [Harj Taggar on X, 2026]. The prize is not just a cost-saving tool, but the control plane for a fragmented and rapidly evolving compute ecosystem.
Growth could follow several distinct, high-conviction paths, each with a clear catalyst.
| Scenario | What happens | Catalyst | Why it's plausible |
|---|---|---|---|
| The AI Lab Wedge | SF Tensor becomes the preferred compute platform for emerging AI research labs and startups, displacing direct cloud provider contracts. | A major, well-funded AI lab publicly adopts the Tensor Cloud for a flagship model training run. | The company's stated mission is to let "AI labs focus on research" by handling infrastructure, and its tools target the exact scaling challenges these labs face [Y Combinator, 2025]. |
| The Hardware Vendor Standard | EMMA becomes the de facto portable language for writing high-performance AI kernels, adopted by NVIDIA, AMD, and cloud vendors to simplify development for their hardware. | A major hardware manufacturer (e.g., AMD) announces official support or a partnership to integrate EMMA into its software stack. | EMMA is already designed with support for NVIDIA, AMD, and Vulkan, indicating a vendor-agnostic approach from the start [Bizety, October 2025]. |
| The Enterprise Abstraction Layer | Large enterprises standardize their internal AI development on SF Tensor's platform to manage cost and complexity across multiple cloud providers. | A Fortune 500 company with a multi-cloud strategy signs an enterprise contract to use Tensor Cloud for all its AI workloads. | The Elastic Cloud product is built to automatically find the cheapest GPUs across providers, a value proposition that directly appeals to cost-conscious enterprise buyers [Y Combinator, 2025]. |
Compounding for SF Tensor would likely manifest as a data-driven performance moat. Each new customer running workloads on the platform generates more data on kernel performance across different hardware configurations. This data can feed back into the automatic optimization engine, making it smarter and more effective than competitors who lack the same breadth of real-world usage. The company's own launch materials hint at this flywheel, describing a vision where "developers deserve better tools" and "hardware deserves nothing less" than optimal utilization [SF Tensor, 2025]. Early adoption by performance-sensitive users would kickstart this cycle, creating a barrier to entry that scales with the company's customer base.
Quantifying the size of the win requires looking at comparable infrastructure platforms. Databricks, a privately held company providing a unified analytics and AI platform, reached a $43 billion valuation in its 2021 funding round [Reuters, 2021]. While not a direct competitor, it illustrates the valuation potential for a company that becomes a critical layer in the AI data stack. A more focused comparable is CoreWeave, a specialized GPU cloud provider whose valuation was reported at $19 billion in 2024 [Bloomberg, 2024]. If SF Tensor successfully executes on the "AI Lab Wedge" scenario and captures a meaningful portion of the high-performance AI training market, a multi-billion dollar outcome is plausible (scenario, not a forecast). The total addressable market is the hundreds of billions spent annually on cloud infrastructure, with AI workloads representing the fastest-growing segment.
Data Accuracy: YELLOW -- Opportunity analysis is based on stated company vision and product claims from primary sources; market size and comparable valuations are drawn from external reports.
Sources
PUBLIC
[Y Combinator, 2025] SF Tensor: Infrastructure for AI labs to focus on research | https://www.ycombinator.com/companies/sf-tensor
[Bizety, October 2025] Startup SF Tensors is Reinventing AI Infrastructure | https://bizety.com/2025/10/08/startup-sf-tensors-is-reinventing-ai-infrastructure/
[SF Tensor, 2025] Introducing The San Francisco Tensor Company | https://sf-tensor.com/news/introducing-sf-tensor
[SF Tensor, October 2025] Tensor Cloud Launch | https://sf-tensor.com/news/tensor-cloud-launch
[sf-tensor.com, 2025] SF Tensor Careers | https://sf-tensor.com/careers
[Harj Taggar on X, 2026] Harj Taggar on X | https://x.com/harjtaggar/status/1985781862422433876
[Gartner, 2024] Gartner Forecasts Worldwide Public Cloud End-User Spending to Reach $1.35 Trillion in 2027 | https://www.gartner.com/en/newsroom/press-releases/2024-04-15-gartner-forecasts-worldwide-public-cloud-end-user-spending-to-reach-1-35-trillion-in-2027
[PitchBook, 2025] PitchBook Analyst Note: AI Developer Tools & Infrastructure | https://pitchbook.com/news/reports/q2-2025-pitchbook-analyst-note-ai-developer-tools-infrastructure
[Reuters, 2021] Databricks valuation soars to $43 bln in latest funding round | https://www.reuters.com/technology/databricks-valuation-soars-43-bln-latest-funding-round-2021-08-31/
[Bloomberg, 2024] CoreWeave Valued at $19 Billion in Latest Funding Round | https://www.bloomberg.com/news/articles/2024-05-01/coreweave-valued-at-19-billion-in-latest-funding-round
Articles about San Francisco Tensor Company
- SF Tensor’s Kernel Optimizer Starts With the Cheapest GPU — The YC-backed startup, founded by three brothers, is building a hardware-aware language and cross-cloud compute to cut AI training costs.