You imagine the words, ‘Hello, world.’ The text appears on the screen. There is no microphone, no keyboard, no subvocal murmur in the throat. The only motion is the faint, invisible pulse of blood in your brain. This is the promise of MindSpeech, the flagship model from London-based MindPortal, a research company that has spent the last five years trying to turn a science fiction trope into a working interface. The premise is simple, and almost impossibly ambitious: to decode the fluid, continuous stream of imagined speech directly into text, creating a silent, thought-powered bridge between a human and a machine [MindPortal website].
It is a bet that has shifted shape. Founded in 2019, MindPortal initially developed a non-invasive, optical brain-computer interface headband, a piece of hardware designed to capture brain activity with greater precision than traditional EEG [PRNewswire, May 2021]. By late 2023, the company claimed this device could facilitate a direct, real-time dialogue with ChatGPT through imagined speech [PRNewswire, Dec 2023]. Today, the company’s public positioning has pivoted, describing itself as a pure AI research lab focused exclusively on model development, with hardware seemingly relegated to the past [Y Combinator, 2024]. The core ambition, however, remains unchanged: to build the first general-purpose conduit from thought to language.
The pivot from hardware to model
This shift from a hardware-plus-software stack to a pure AI play is the most telling product decision in the company’s brief history. The initial vision was a full-stack solution: a wearable device that could capture high-fidelity neural data to feed proprietary models. The 2021 seed round of $5 million, led by Learn Capital with participation from Kleiner Perkins and Y Combinator, funded that path [PRNewswire, May 2021]. The company filed 13 patents, presumably around its optical sensing technology [MindPortal /company]. But building reliable, consumer-grade biometric hardware is a notoriously difficult and capital-intensive endeavor, one that has left a graveyard of failed startups. The pivot to ‘model development rather than hardware’ suggests a strategic retreat to the problem MindPortal believes it can uniquely solve: the translation layer itself.
MindSpeech is that translation layer. The company claims it is the first supervised AI model to decode free-form imagined speech into text, moving beyond the recognition of pre-memorized words or phrases to capture the ‘fluid nature of internal speech’ [MindPortal website]. The technical paper for MindSpeech lists authors from the company alongside academic collaborators, pointing to a research foundation in functional near-infrared spectroscopy (fNIRS) and prompt tuning [dblp, 2024]. For co-founders Ekram Alam, formerly a VR/AR app developer, and Jack Baber, the technical lead, the model is the product [Virtual Reality Society] [The Org, 2026]. Its applications, as they see them, span assistive communication for non-verbal individuals, hands-free productivity, and immersive control in virtual environments [Y Combinator, 2024].
The silent market
The potential market is defined by absence: the silence of a stroke patient, the quiet focus of a programmer unwilling to break flow, the immersive silence of a VR user who cannot speak to an AI companion. MindPortal’s wedge is the claim of continuity. Previous research, they argue, could only decode isolated words. MindSpeech, they say, can handle sentences [MindPortal website]. This is the foundational bet: that imagined speech is not a series of discrete symbols but a continuous signal, and that, with enough data and the right architecture, an AI can learn to transcribe it.
The investor lineup suggests a belief in the long-term viability of this frontier. Beyond the institutional leads, the seed round included individual checks from figures like Fitbit co-founder James Park and former Facebook design director Julie Zhou [PRNewswire, May 2021]. This mix points to a thesis that bridges deep tech and consumer intuition. The total disclosed funding remains at that $5 million seed from 2021, however, with no subsequent rounds announced publicly. The company’s Y Combinator profile lists a team size of 10 but marks the company as ‘Inactive,’ a contradictory status that is not explained, though a separate YC jobs page lists it as active [Y Combinator, 2024] [Y Combinator jobs, 2024].
The burden of proof
The central challenge for MindPortal is one of validation. The company’s claims of a ‘first-of-its-kind technology’ and a ‘major breakthrough’ are self-reported, primarily through press releases and its own website [PRNewswire, Dec 2023] [Unite.AI, 2024]. Independent, peer-reviewed replication of these results in a public scientific venue is not cited in the available materials. For a technology that promises to read minds, even in the limited sense of decoding imagined speech, the burden of proof is exceptionally high. The market skepticism is not about utility, since the use cases are compelling, but about fundamental feasibility and accuracy outside of controlled lab conditions.
- The data moat. The company’s primary advantage, if it exists, would be a proprietary dataset of high-density fNIRS recordings paired with imagined speech transcripts, painstakingly gathered during its hardware phase. This dataset would be extraordinarily difficult for a new entrant to replicate.
- The interface problem. Even with a perfect model, a user needs a way to capture neural signals. MindPortal’s retreat from building its own hardware passes this problem to others, potentially limiting early deployment to research labs with expensive equipment.
- The silent competitor set. The field of neural decoding is crowded with well-funded labs and large tech companies, from Meta’s Reality Labs to nascent startups like Synchron. MindPortal’s differentiation rests on its specific focus on continuous imagined speech, a niche within a niche.
The path forward likely involves partnering with academic and clinical institutions to demonstrate efficacy, publishing more openly, and securing a significantly larger round to scale both research and early, targeted deployments. The table below outlines the company’s known backing.
| Investor | Type | Note |
|---|---|---|
| Learn Capital | Venture Firm | Lead investor, $5M Seed Round [PRNewswire, May 2021] |
| Kleiner Perkins | Venture Firm | Participating investor [PRNewswire, May 2021] |
| Y Combinator | Accelerator | Participating investor [Y Combinator, 2024] |
| 7pc, Scrum Ventures | Venture Firms | Participating investors [PRNewswire, May 2021] |
| James Park, Julie Zhou, Dan Siroker, Matt Bellamy | Angels | Individual investors [PRNewswire, May 2021] |
Every new interface asks a question of the culture it enters. The mouse asked if we were comfortable pointing. The touchscreen asked if we were comfortable touching. Voice assistants asked if we were comfortable talking. MindPortal’s silent model asks a more intimate, and perhaps more unsettling, question: are we comfortable being read? The product’s implicit answer is that we will be, when the value of a frictionless, private, and universally accessible channel to our machines outweighs the instinct to keep our thoughts to ourselves. The next twelve months will show whether the data, and the market, agree.
Sources
- [MindPortal website] MindPortal - The Future of Human-AI Communication | https://mindportal.com/
- [PRNewswire, May 2021] MindPortal Raises $5 Million Led by Learn Capital... | https://www.prnewswire.com/news-releases/mindportal-raises-5-million-led-by-learn-capital-and-participants-include-a-rockstar-and-several-high-profile-individuals-301297148.html
- [PRNewswire, Dec 2023] MindPortal Achieves First Non-Invasive Optical Brain-Computer Interface... | https://www.prnewswire.com/news-releases/mindportal-achieves-first-non-invasive-optical-brain-computer-interface-that-enables-users-to-seamlessly-communicate-with-chatgpt-through-imagined-speech-302007382.html
- [Y Combinator, 2024] MindPortal: Thought-to-Language AI Models | https://www.ycombinator.com/companies/mindportal
- [dblp, 2024] MindSpeech: Continuous Imagined Speech Decoding using High-Density fNIRS and Prompt Tuning... | https://dblp.org/rec/journals/corr/abs-2408-05362.html
- [Virtual Reality Society] An Interview With Ekram Alam - Virtual Reality Society | https://www.vrs.org.uk/expert-insights/interview-ekram-alam/
- [The Org, 2026] MindPortal Company Profile | https://theorg.com/org/mindportal
- [Y Combinator jobs, 2024] Jobs at MindPortal | https://www.ycombinator.com/companies/mindportal/jobs
- [MindPortal /company] MindPortal - The AI Lab Building Tomorrow's Interfaces | https://mindportal.com/company
- [Unite.AI, 2024] Major Breakthrough in Telepathic Human-AI Communication... | https://www.unite.ai/major-breakthrough-in-telepathic-human-ai-communication-mindspeech-decodes-smooth-thoughts-into-text/