
Investment Notes: Octen US$10M Seed


Traditional search infrastructure was built for humans, ranked by ads, and formatted for browsing. That worked brilliantly until the way information is accessed fundamentally changed: AI started doing the searching.

Today, a large and fast-growing share of queries is initiated not by people but by LLMs, AI agents, and the applications built on top of them. And the way these systems consume search results is completely different: they need structured, low-latency data they can reason over in milliseconds.

This is the problem Octen is solving. Octen is building the real-time, LLM-native search layer AI agents actually need: faster, cleaner, and purpose-built for how AI consumes information. We're proud to have led Octen's US$10M seed round alongside Argor and a group of leading AI scientists.

Introducing Octen

AI models are powerful, but their capabilities are inherently bottlenecked by static training data. To act on real-time information, they need web search — but the way agents search is fundamentally different from how humans do. Rather than querying sequentially, agents can fire thousands of searches simultaneously, collapsing what used to be long research chains into a few high-concurrency rounds that deliver context directly to the model.
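The fan-out pattern described above can be sketched in a few lines of Python. This is a minimal illustration, not Octen's interface: the `search` stub below simulates a network call, and the result shape is an assumption for the sake of the example.

```python
import asyncio

async def search(query: str) -> dict:
    # Stand-in for a real search API call; a production client would
    # issue an HTTP request here. The result shape is hypothetical.
    await asyncio.sleep(0.05)  # simulate ~50ms of network latency
    return {"query": query, "results": [f"doc for {query!r}"]}

async def research(queries: list[str]) -> list[dict]:
    # Fire every query concurrently instead of one after another:
    # a single high-concurrency round replaces a long sequential chain.
    return await asyncio.gather(*(search(q) for q in queries))

queries = [f"subtopic {i}" for i in range(100)]
results = asyncio.run(research(queries))
print(len(results))  # 100 result sets, gathered in roughly one round-trip
```

Sequentially, 100 queries at 50ms each would take about five seconds; gathered concurrently they complete in roughly the time of a single call, which is why agent workloads reward high-concurrency, low-latency search backends.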

Octen is built for exactly this. Their proprietary distributed search engine is purpose-built for AI workloads — handling over 1M queries per second on a single account, with a P50 latency of 62ms and real-time indexing that keeps data fresh within 5-minute intervals. The result is an information retrieval layer that is fast, concurrent, and reliable enough to power the next generation of AI agents.

The Team

Octen’s founder, Kuan Zou, has spent nearly a decade building search and AI infrastructure. Before founding Octen, he led AI search at Alibaba Cloud for over five years, powering systems serving hundreds of millions of end-users worldwide, and earlier built Baidu’s enterprise search platform from the ground up. 

He has since put together an impressive team of engineers and AI researchers from Meta, Google, TikTok, Alibaba, Baidu, DeepSeek, and Xiaohongshu, with extensive experience building systems that power search at significant scale with high throughput and reliability. They are among the best in the world at what they do, and we're excited to be partnering with them as they build Octen.

Their ability to execute is already proven: within weeks of starting development, Octen's embedding model swept the RTEB leaderboard and is now open-sourced on HuggingFace as one of the top-performing models globally for precision and long-context understanding.

What's Next for Octen? 

Octen has officially launched its web search API and embedding search API, enabling developers to ship LLM-powered applications, chatbots, research assistants, and autonomous agents with real-time, high-quality data.

The seed funding will be used to accelerate product development, expand developer adoption, forge enterprise partnerships, and grow the engineering and developer relations teams over the next 12 months.

We're proud to be backing Kuan and the Octen team as they build what we believe will become a foundational layer of the agentic internet. Welcome to Square Peg.
