SurfSense

Freemium

A tool to build AI knowledge bases from documents.

SurfSense is an open-source AI research agent designed to build knowledge bases from internal documents and external integrations. It features support for over 50 file formats, real-time team collaboration with Role-Based Access Control, and compatibility with various LLM providers via LiteLLM. This tool is built for teams and researchers who need a self-hostable alternative for managing and chatting with private data sources (verified: 2026-01-30).


Key facts

Pricing

Freemium

Use cases

  • Research teams connecting Large Language Models to internal knowledge sources for real-time collaborative chat and data analysis (verified: 2026-01-30)
  • Knowledge managers centralizing information from over fifty file extensions and external platforms like Slack, Jira, and Notion (verified: 2026-01-30)
  • Privacy-focused organizations deploying self-hosted AI research agents that work with local LLMs like vLLM and Ollama (verified: 2026-01-30)

Strengths

  • The platform supports over 50 file extensions including documents, images, and video files for building personal knowledge bases (verified: 2026-01-30)
  • Role-Based Access Control provides granular permissions for documents, chats, and connectors with specific roles like Owner, Admin, Editor, and Viewer (verified: 2026-01-30)
  • Integration flexibility exists through the OpenAI spec and LiteLLM, which enables connections to virtually any inference provider (verified: 2026-01-30)
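The integration flexibility noted above comes down to the OpenAI chat-completions request shape, which every OpenAI-compatible server accepts. The sketch below builds such a payload; the model name and the idea of targeting a local Ollama endpoint are illustrative assumptions, not values taken from SurfSense itself.

```python
import json

def build_chat_request(model: str, user_message: str) -> dict:
    """Build a minimal OpenAI-spec /v1/chat/completions payload.

    Any OpenAI-compatible server (a LiteLLM proxy, Ollama, vLLM, or a
    hosted provider) accepts this same shape, which is what makes
    swapping inference providers cheap.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You answer from the team knowledge base."},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.2,
    }

# Hypothetical request for a local model; the "llama3" name is an example.
payload = build_chat_request("llama3", "Summarize the Q3 design notes.")
print(json.dumps(payload, indent=2))
```

Because the payload is provider-agnostic, pointing the same request at a different base URL is all a provider swap requires.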

Limitations

  • The software requires users to manage their own self-hosted deployment and infrastructure for local operation (verified: 2026-01-30)
  • Users must provide and configure their own inference providers or local LLM instances to enable the chat functionality (verified: 2026-01-30)
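In practice, "configure their own inference providers" usually means supplying an endpoint, key, and model name through environment variables. The variable names below are hypothetical placeholders for illustration, not SurfSense's actual configuration keys.

```python
import os

# Hypothetical settings; the names are illustrative, not SurfSense's keys.
# Defaults assume a local Ollama server, whose OpenAI-compatible API
# listens on port 11434 by default.
llm_api_base = os.environ.get("LLM_API_BASE", "http://localhost:11434/v1")
llm_api_key = os.environ.get("LLM_API_KEY", "not-needed-for-local")
llm_model = os.environ.get("LLM_MODEL", "llama3")

print(f"inference endpoint: {llm_api_base} (model: {llm_model})")
```

Local servers typically ignore the API key, but OpenAI-style clients often require one to be set, so a dummy value is common.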

Last verified

Jan 30, 2026



FAQ

What specific external data sources does the platform integrate with for research?

The tool integrates with search engines like SearxNG and Tavily, productivity suites including Google Drive and Microsoft Teams, and project management tools such as Jira, Linear, and ClickUp. It also connects to communication platforms like Slack and Discord to pull data into a centralized knowledge base for real-time AI interaction (verified: 2026-01-30).

How does the platform manage team collaboration and data access security?

Collaboration is managed through Role-Based Access Control for search spaces, where owners invite members with specific roles to manage permissions for documents, chats, and settings. This system ensures that granular permissions are applied to all shared knowledge bases, allowing teams to collaborate securely while maintaining strict data governance (verified: 2026-01-30).
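The role hierarchy described above can be sketched as a simple permission table. This is an illustrative model of how Owner/Admin/Editor/Viewer tiers typically map to actions, not SurfSense's actual code; the action names are assumptions.

```python
# Illustrative RBAC table; role names mirror the review, action
# names are hypothetical.
PERMISSIONS = {
    "Owner":  {"read", "write", "manage_members", "delete_space"},
    "Admin":  {"read", "write", "manage_members"},
    "Editor": {"read", "write"},
    "Viewer": {"read"},
}

def can(role: str, action: str) -> bool:
    """Return True if the role grants the action; unknown roles get nothing."""
    return action in PERMISSIONS.get(role, set())

assert can("Editor", "write")
assert not can("Viewer", "write")
```

The deny-by-default lookup (`.get(role, set())`) is the important design choice: a misspelled or revoked role silently loses all access rather than gaining any.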

Does the software support the use of local Large Language Models?

The software works with local LLMs such as vLLM and Ollama, which allows for private, self-hosted deployments that do not require external cloud providers. By using the OpenAI specification and LiteLLM, users can connect virtually any inference provider to their local instance to maintain full control over their data processing (verified: 2026-01-30).
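Since both Ollama and vLLM expose OpenAI-compatible endpoints, pointing a client at either is a one-line base-URL change. The ports below are the upstream projects' documented defaults; whether SurfSense resolves providers this way is an assumption.

```python
# Default OpenAI-compatible base URLs for common local LLM servers.
# Ollama serves /v1 on port 11434; vLLM's server defaults to 8000.
LOCAL_PROVIDERS = {
    "ollama": "http://localhost:11434/v1",
    "vllm": "http://localhost:8000/v1",
}

def base_url(provider: str) -> str:
    """Look up the default local endpoint for a provider name."""
    try:
        return LOCAL_PROVIDERS[provider]
    except KeyError:
        raise ValueError(f"unknown local provider: {provider}")

print(base_url("ollama"))
```

Keeping all data on these local endpoints is what makes the fully private, cloud-free deployment described above possible.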