EyePop.ai
No-code platform to train/deploy custom AI vision models
Website: https://eyepop.ai/
Cover Block
PUBLIC
| Field | Value |
|---|---|
| Name | EyePop.ai |
| Tagline | No-code platform to train/deploy custom AI vision models |
| Headquarters | San Diego, USA |
| Stage | Seed |
| Business Model | API / Developer Platform |
| Industry | Other (AI/Computer Vision) |
| Technology | AI / Machine Learning |
| Geography | North America |
| Founding Team | Co-Founders (3+) |
| Funding Label | Seed (total disclosed ~$2.85M) |
Links
PUBLIC
- Website: https://www.eyepop.ai/
- LinkedIn: https://www.linkedin.com/company/eyepop-ai
Executive Summary
PUBLIC
EyePop.ai is a seed-stage platform that aims to make custom computer vision accessible to developers and small teams without requiring machine learning expertise, a bet that merits attention as demand for specialized AI models grows but in-house talent remains scarce [EyePop.ai website, 2025]. The company, founded by a trio of serial entrepreneurs, provides a no-code environment where users can train models on their own image, video, or livestream data and deploy them at the edge, notably on Qualcomm hardware [EyePop.ai website, 2025]. Its differentiation appears to rest on a self-service training workflow and a promise of full data ownership, positioning it against more complex, code-first ML platforms.
The founding team brings a mix of startup creation and exit experience. Brad Chisum (CEO) previously founded Lumedyne Technologies, a sensor company acquired by Google in a deal reported at $85 million [Mixergy podcast]. Andy Ballester (CPO) was a co-founder of GoFundMe, and Torsten Schulz (CTO) has a background in scaling startups that were later acquired [LinkedIn]. While none have a public, deep pedigree in AI research, their operational experience in building and selling technology companies is a tangible asset for navigating early-stage growth.
In February 2025, the company closed a $2.85 million seed round led by Innosphere Ventures, with participation from a syndicate of regional and thematic funds including BonAngels Venture Partners and Spatial Capital [Crunchbase, 2025]. The business model is API-based, with a public entry point at $20 per month after a 30-day free trial and enterprise pricing available upon request [EyePop.ai website, 2025]. Over the next 12-18 months, the key indicators to monitor will be the validation of its self-service training claims through user adoption, the materialization of its partnership with Qualcomm into commercial deployments, and the disclosure of initial customer traction and revenue metrics, all of which are currently absent from the public record.
Data Accuracy: YELLOW -- Core product claims are sourced from the company's website; funding details are confirmed by Crunchbase; founder backgrounds are partially corroborated by LinkedIn and a podcast interview.
Taxonomy Snapshot
| Axis | Value |
|---|---|
| Stage | Seed |
| Business Model | API / Developer Platform |
| Industry / Vertical | Other |
| Technology Type | AI / Machine Learning |
| Geography | North America |
| Founding Team | Co-Founders (3+) |
| Funding | Seed (total disclosed ~$2.85M) |
Company Overview
PUBLIC
EyePop.ai is a San Diego-based developer platform founded to make custom computer vision models accessible without requiring machine learning expertise. The company's public narrative centers on removing the technical barriers that have traditionally kept computer vision out of reach for startups and small development teams [EyePop.ai, 2025]. Its founding team, Brad Chisum, Torsten Schulz, and Andy Ballester, brings a combination of serial entrepreneurship and product experience to the venture.
Key milestones for the company are sparse in public records, but a few datapoints outline its early trajectory. The company was named a semi-finalist in the 2024 Vision Tank competition, an early signal of industry recognition [LinkedIn, 2024]. Capitalization came first: a $2.85 million Seed round closed in February 2025, led by Innosphere Ventures [Crunchbase, 2025][PitchBook, 2025]. Its most significant public development to date was a live demonstration of its Video Intelligence Agent at the Snapdragon Summit 2025, conducted in collaboration with Qualcomm Technologies [EyePop.ai, 2025]. This event served as a de facto product launch platform, showcasing the platform's edge deployment capabilities.
Data Accuracy: YELLOW -- Founding details are from company sources; funding is corroborated by two databases. Key milestones lack independent press coverage.
Product and Technology
MIXED
EyePop.ai's core proposition is to abstract the machine learning pipeline for computer vision, offering a no-code platform where developers can create and deploy custom models using their own visual data. The company's website positions the service as a self-service training environment, allowing users to upload images, videos, or livestreams to train models for tasks like object detection, measurement, and counting [EyePop.ai website]. A key claim is that this process can be completed in hours, eliminating the need for in-house ML expertise [EyePop.ai website].
The platform appears to be structured in two layers. First, a library of pre-built, ready-to-use vision models for common tasks like people or text detection provides an immediate entry point [EyePop.ai website]. Second, the custom training layer is the primary focus, where users retain full ownership of their data and the resulting models [EyePop.ai website]. For deployment, the company emphasizes edge computing, specifically highlighting integration with Qualcomm devices and an on-premise runtime option, which suggests a focus on latency-sensitive or data-privacy-conscious applications [EyePop.ai website, 2025]. A distinct product surface, the Video Intelligence Agent, was demonstrated at the Snapdragon Summit 2025; it is described as a tool for ingesting multiple camera feeds and automatically generating polished highlight reels [EyePop.ai, 2025].
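For a developer audience, the workflow described above would reduce to a small integration surface: submit visual data to a hosted model and parse structured detections back. The sketch below is purely illustrative; the request fields, response schema, and label names are assumptions for this report, not EyePop.ai's documented API.

```python
# Hypothetical sketch of a detection workflow against a no-code vision API.
# The request/response shapes below are assumptions for illustration only;
# they are not taken from EyePop.ai's published documentation.

from typing import Any


def build_inference_request(model_id: str, image_url: str) -> dict[str, Any]:
    """Compose the JSON body a client might POST to a hosted model endpoint."""
    return {"model": model_id, "source": {"type": "image_url", "url": image_url}}


def count_objects(response: dict[str, Any], label: str) -> int:
    """Count detections of a given class in an assumed response payload."""
    return sum(1 for d in response.get("detections", []) if d.get("label") == label)


# Assumed response shape for a counting task (e.g. construction-site safety):
sample = {
    "detections": [
        {"label": "person", "confidence": 0.91},
        {"label": "person", "confidence": 0.87},
        {"label": "hard_hat", "confidence": 0.78},
    ]
}
people = count_objects(sample, "person")
print(people)
```

The point of the sketch is the shape of the abstraction: if counting and detection genuinely collapse to a request and a flat list of labeled boxes, the "no ML expertise required" claim is at least structurally plausible.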
Pricing information [PUBLIC] is partially disclosed on the developer documentation site. A free 30-day trial is offered, followed by a $20/month starter plan [EyePop.ai website]. Custom training and enterprise packages are listed as "Contact us," with overage fees for compute hours priced at $1.00 per hour [EyePop.ai website]. The technology stack can be partially inferred from active job postings, which seek a Machine Learning Software Engineer with experience in PyTorch, TensorFlow, and C++ [17], and an Account Engineer familiar with Python and cloud APIs [18].
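The disclosed figures allow a rough cost model: $20/month base plus $1.00 per compute hour of overage. How many compute hours the starter plan includes is not public, so the included-hours quota below is a parameter, not a disclosed number.

```python
# Rough monthly cost estimate from the publicly listed figures: a $20/month
# starter plan and $1.00 per compute hour in overage fees. The compute hours
# included in the base plan are not disclosed, so that is a free parameter.

def monthly_cost(compute_hours: float, included_hours: float = 0.0,
                 base_fee: float = 20.0, overage_rate: float = 1.00) -> float:
    """Base fee plus overage charged only on hours beyond the included quota."""
    overage = max(0.0, compute_hours - included_hours) * overage_rate
    return base_fee + overage


# e.g. 50 compute hours with no included quota: $20 + 50 * $1.00 = $70
print(monthly_cost(50))
```

Even under this simple model, the pricing implies that sustained inference workloads, not the base subscription, would dominate a customer's bill, which is consistent with the enterprise tier being quote-based.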
Data Accuracy: YELLOW -- Product claims sourced directly from company website and blog; technical stack inferred from job postings. No third-party technical reviews or customer deployment case studies were found to corroborate performance claims.
Market Research
PUBLIC
The market for accessible computer vision tools is expanding as developers across industries seek to automate visual analysis without building machine learning teams from scratch. Demand is driven by the proliferation of visual data from cameras, drones, and mobile devices, coupled with a persistent shortage of specialized ML talent. The platform approach, which abstracts away model training complexity, targets a segment of the broader AI development tools market.
Third-party market sizing for the specific no-code computer vision platform segment is not publicly available. However, analogous markets provide a sense of scale. The global computer vision market was valued at approximately $16 billion in 2024 and is projected to grow at a compound annual rate of 19% through 2030, according to a report from Grand View Research [Grand View Research, 2024]. The adjacent low-code development platform market, which shares a similar democratization thesis, was estimated at over $22 billion in 2023 with strong double-digit growth [Gartner, 2023]. These figures suggest the addressable market for tools that simplify AI implementation is substantial and growing.
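The cited figures imply a simple compound-growth projection. Taking the $16B 2024 base and the 19% CAGR at face value over the report's horizon to 2030:

```python
# Compound-growth projection from the cited Grand View Research figures:
# a ~$16B computer vision market in 2024 growing at a 19% CAGR.

def project(base: float, cagr: float, years: int) -> float:
    """Future value under constant compound annual growth."""
    return base * (1 + cagr) ** years


projected_2030 = project(16.0, 0.19, 2030 - 2024)
print(f"${projected_2030:.1f}B")  # roughly $45.4B by 2030
```

This back-of-envelope figure is only as good as the analyst inputs behind it, but it frames why even a narrow no-code slice of the category could support a venture-scale outcome.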
Key demand drivers extend beyond general AI adoption. Specific tailwinds include the need for real-time analytics in sectors like traffic management and construction site safety, the falling cost of camera hardware and edge compute, and regulatory pressures for standardized reporting in industries like insurance and compliance. The company's own marketing cites applications in construction, drones, traffic, and surveillance [EyePop.ai website, 2025], indicating these verticals are initial beachheads. A substitute market exists in outsourcing vision tasks to large, generic cloud APIs, but these often lack customization, raise data privacy concerns, and can incur unpredictable costs at scale.
Regulatory and macro forces present a mixed picture. Data privacy regulations (e.g., GDPR, CCPA) incentivize on-premise or edge deployment solutions that keep data local, a capability EyePop.ai promotes. Conversely, export controls on advanced AI chips and geopolitical tensions around technology could complicate supply chains for edge hardware partners, though this is a systemic risk for the category rather than a company-specific one. The lack of a dominant, vertically integrated no-code vision platform suggests the competitive landscape is still forming.
| Market | Size | Unit |
|---|---|---|
| Computer Vision Market (2024) | 16 | $B |
| Low-Code Platform Market (2023) | 22 | $B |
The sizing chart, drawn from analogous markets, illustrates the large potential adjacency. The computer vision market's growth trajectory indicates a rising tide, but the platform's success hinges on capturing a meaningful slice of developer workflow within specific high-value verticals.
Data Accuracy: YELLOW -- Market sizing is inferred from analogous, broader industry reports; specific segment data for no-code vision platforms is not confirmed.
Competitive Landscape
MIXED
EyePop.ai positions itself as a low-friction, developer-first gateway into custom computer vision, a segment where the primary competition is not other startups but the inertia and complexity of building in-house or using generalist cloud AI services.
The competitive map for no-code AI vision tools is still nascent but can be segmented into three tiers. At the top are the hyperscale cloud platforms: Google's Vertex AI Vision, AWS Panorama, and Azure Custom Vision. These offer deep integration with their respective clouds and extensive pre-trained model libraries, but they often require significant ML expertise to customize and can lead to vendor lock-in [PUBLIC]. The middle tier consists of specialized computer vision API providers like Roboflow and Landing AI, which target developers with tools for dataset management, model training, and deployment. Roboflow, in particular, has established a strong community and open-source presence [PUBLIC]. The bottom tier, where EyePop.ai currently operates, includes newer entrants and developer tools focused on extreme ease of use and edge deployment, aiming to capture small teams and agencies priced out or overwhelmed by the complexity of the larger platforms.
Where EyePop.ai attempts to carve out a defensible edge is through its specific focus on edge deployment partnerships and a simplified, self-service workflow. The collaboration with Qualcomm, demonstrated at the Snapdragon Summit 2025, provides a tangible technical path to on-device inference, which is a key requirement for latency-sensitive or offline use cases in drones, traffic management, and surveillance [EyePop.ai, 2025]. This partnership is a current advantage, but a perishable one; it depends on continued technical execution and exclusivity, which Qualcomm is unlikely to grant indefinitely as the edge AI market matures. The other potential edge is the founding team's serial entrepreneurial experience, which may aid in early-stage product-market fit discovery and fundraising. However, this is not a technical moat and does not directly counter the data network effects or brand recognition that incumbents possess.
The company's most significant exposure is its lack of visible traction and its positioning in a crowded layer of the AI stack. It is competing directly with Roboflow, which has raised more capital, built a larger public community, and offers a more comprehensive suite of tools for computer vision pipelines [PUBLIC]. Furthermore, EyePop.ai does not own a critical distribution channel. Its model relies on developers finding its platform through search or partnerships, while cloud incumbents can use their massive existing developer bases and sales teams. The company also appears to have minimal public marketing or developer evangelism, which raises visibility concerns in a market where community adoption is often a leading indicator of success.
The most plausible 18-month scenario involves continued market fragmentation. A "winner" in the niche could be a company like Roboflow if it successfully expands its platform up the value chain into more vertical-specific applications while maintaining its developer-friendly ethos, thereby capturing the early adopters EyePop.ai targets. A "loser" scenario for EyePop.ai would materialize if it fails to convert its Qualcomm partnership and technical demo into a stream of paying customers and developer mindshare. Without measurable traction, it risks becoming a feature rather than a platform, potentially an acquisition target for a larger company seeking edge AI deployment capabilities, but not a standalone market leader.
Data Accuracy: YELLOW -- Competitive analysis is based on public positioning of broad market segments; specific competitor intelligence is limited to general market knowledge as no named competitors were provided in source data.
Opportunity
PUBLIC
If EyePop.ai executes, the prize is a foundational position in the emerging market for no-code, edge-deployable computer vision, a capability that could unlock AI-driven automation across dozens of physical industries still reliant on manual inspection.
The headline opportunity is to become the default self-service platform for developers and small teams building custom vision applications, bypassing the need for in-house machine learning expertise. This outcome is reachable because the company's product positioning directly addresses a well-documented pain point: the complexity and cost of training and deploying specialized vision models. The platform's emphasis on data ownership and edge deployment, highlighted in its collaboration with Qualcomm Technologies for the Snapdragon Summit 2025 [EyePop.ai, 2025], targets a critical demand for on-premise, low-latency analysis in sectors like construction, drones, and surveillance. While the company's public traction is unconfirmed, the core product premise aligns with a clear market need for accessible, proprietary AI tools.
Growth could follow several distinct paths, each with a plausible catalyst.
| Scenario | What happens | Catalyst | Why it's plausible |
|---|---|---|---|
| Qualcomm Ecosystem Dominance | EyePop.ai becomes the preferred vision software layer for developers building on Qualcomm's edge AI hardware. | Deepening technical integration and co-marketing as a "Premier launch partner" [EyePop.ai website, 2025]. | The live demonstration at Snapdragon Summit 2025 provides a public proof point of technical collaboration [EyePop.ai, 2025]. Hardware-software bundling is a proven path for developer tools. |
| Vertical SaaS for Physical Industries | The company builds dedicated, industry-specific solutions (e.g., for insurance damage assessment, traffic management) that command higher ACVs. | Securing a lighthouse customer in a high-value vertical like insurance or construction, validating the ROI. | The website explicitly cites use cases for automating insurance assessments and traffic management [EyePop.ai website, 2025], indicating initial product-market fit exploration. |
| Developer Platform Flywheel | A thriving community of developers builds and shares custom models, attracting more users and improving the core model library. | Launching a public model marketplace or significantly growing the developer community section. | The company maintains a "Community" page and developer documentation, signaling an intent to build an ecosystem [EyePop.ai website, 2025]. |
Compounding for EyePop.ai would likely manifest as a data and distribution flywheel. Early adopters training custom models on the platform would generate proprietary datasets that, if leveraged ethically and with user consent, could improve the performance of the company's baseline model library. More importantly, a successful deployment in one vertical, such as construction site safety, creates a reference case that lowers the sales friction for adjacent use cases like drone-based infrastructure inspection. The company's partnership focus, including with data labeling services [EyePop.ai website, 2025], suggests an understanding that easing the entire model development lifecycle, not just the training, is key to locking in users.
The size of the win can be framed by looking at the valuation of public companies that have successfully productized complex AI/ML workflows for developers. While no direct public comparable exists for a no-code computer vision platform, companies like DataRobot (which focused on automated machine learning) reached a peak private valuation of over $6 billion [PitchBook, 2021] before market conditions shifted. A more conservative but relevant scenario is a strategic acquisition by a major cloud provider or chipmaker seeking to bolster its edge AI and vision offerings. If the "Qualcomm Ecosystem Dominance" scenario plays out, EyePop.ai could position itself as an attractive tuck-in acquisition for a hardware company aiming to provide a complete vision stack, with deal sizes in the hundreds of millions of dollars being plausible for a company with proven developer adoption and integration depth (scenario, not a forecast).
Data Accuracy: YELLOW -- The opportunity analysis is based on the company's stated product positioning and a single public partnership demonstration. Market comparables are drawn from historical data in adjacent sectors. No customer or revenue metrics are available to validate the growth scenarios.
Sources
PUBLIC
[EyePop.ai website, 2025] EyePop.ai Homepage | https://www.eyepop.ai/
[Mixergy podcast] Mixergy Interview: Lumedyne Tech with Brad Chisum | https://mixergy.com/interviews/lumedyne-tech-with-brad-chisum/
[LinkedIn] Andy Ballester LinkedIn | https://www.linkedin.com/in/andrewballester/
[Crunchbase, 2025] EyePop.ai - Financial Details | https://www.crunchbase.com/organization/eyepop-ai/financial_details
[PitchBook, 2025] Eyepop.AI 2025 Company Profile: Valuation, Funding & Investors | PitchBook | https://pitchbook.com/profiles/company/531040-60
[LinkedIn, 2024] EyePop.ai Company LinkedIn | https://www.linkedin.com/company/eyepop-ai
[EyePop.ai, 2025] EyePop.ai Highlights Video Intelligence Agent at Snapdragon Summit 2025 | https://www.eyepop.ai/blog/eyepop-ai-highlights-video-intelligence-agent-at-snapdragon-summit-2025
[EyePop.ai website] Pricing - Developer Documentation - EyePop.ai | https://docs.eyepop.ai/developer-documentation/pricing
[17] Machine Learning Software Engineer Job Posting | https://apply.workable.com/eyepop-dot-a-i-inc/j/98CD545CC1/
[18] Account Engineer Job Posting | https://apply.workable.com/eyepop-dot-a-i-inc/j/2B5F26FADF/
[Grand View Research, 2024] Grand View Research Computer Vision Market Report | https://www.grandviewresearch.com/industry-analysis/computer-vision-market
[Gartner, 2023] Gartner Low-Code Development Platform Market Report | https://www.gartner.com/en/newsroom/press-releases/2023-08-28-gartner-forecasts-worldwide-low-code-development-technologies-market-to-grow-20-percent-in-2023
[EyePop.ai website, 2025] EyePop.ai Partners | https://www.eyepop.ai/partners
[EyePop.ai, 2025] EyePop.ai Snapdragon Summit 2025 Page | https://www.eyepop.ai/snapdragon-summit-2025
[EyePop.ai website, 2025] EyePop.ai Community | https://www.eyepop.ai/community
[PitchBook, 2021] DataRobot Valuation Data | https://pitchbook.com/profiles/company/114257-10
Articles about EyePop.ai
- EyePop.ai's $2.85 Million Seed Round Puts Computer Vision on the Edge — The San Diego startup, backed by Innosphere Ventures and a GoFundMe co-founder, aims to let any developer train custom AI vision models.