Replies: 2 comments 3 replies
-
Docusaurus is a static site generator and its only role is to build a static deployment that you can host on simple providers such as GitHub Pages.

I don't know enough about MCP to understand what you want us to implement. Please explain like I'm 5, assuming I know almost nothing about MCPs. What does it mean to "add an MCP server hosted on the Docusaurus site"? Since Docusaurus builds a static site, can you create a concrete static example of what this means? You could, for instance, create an HTTP resource manually first, and then we'll figure out how to automate the process, but we need a very concrete example first to see what the outcome of implementing such a feature should be.

If "add an MCP server" means adding a runtime server (which would mean Docusaurus is no longer a static site generator), this is likely out of scope for Docusaurus and should be implemented by the community as a separate project. If someone wants to build that, I'm here to help and ensure that we expose primitives that permit you to do so.
-
Made a prototype as it interests me. I just put it together and haven't tested anything; only the main code idea is there for now.
-
I have been working on setting up my development workflow using various coding agents (Cline, Roo Code, Copilot, etc.) and have frequently come across the need to reference documentation. Since many documentation sites are built on the Docusaurus framework, I wanted to see if there have been any discussions on building a native plugin / feature that would give AI agents the ability to access and read through the documentation site via the Model Context Protocol.
Right now, people have come up with various custom solutions (semantic search databases, etc.) to fetch and index the docs locally for querying; however, this results in outdated/stale content and doesn't support versioning.
A second option is to use MCP servers like fetch or firecrawl and ask the agent to crawl specific pages when you need them (this can be cumbersome, since the user has to search manually and provide the URL, which the agent can then scrape).
My proposal is to add an MCP server hosted directly on the Docusaurus site (since MCP now supports HTTP instead of SSE, making implementation much simpler) that would expose functionality to the agent like:
Sites that have MCP enabled can expose a URL that can be configured with various MCP Clients for use.
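To make the proposal concrete, here is a minimal, self-contained sketch of what the endpoint side of such a plugin might do: answer an MCP-style JSON-RPC `tools/call` request for a hypothetical `search_docs` tool backed by a build-time index. The tool name, the index shape, and the response format are my assumptions for illustration; a real implementation would use the official MCP SDK and generate the index from the site's markdown sources.

```typescript
// Hypothetical sketch, NOT the real MCP SDK: shows the shape of an
// MCP-over-HTTP "tools/call" exchange for a docs-search tool.

type DocPage = { url: string; title: string; content: string };

// In a real plugin this index would be emitted at build time from the
// site's markdown sources; here it is a tiny in-memory stand-in.
const docsIndex: DocPage[] = [
  { url: "/docs/intro", title: "Introduction", content: "Docusaurus builds static sites." },
  { url: "/docs/deploy", title: "Deployment", content: "Deploy to GitHub Pages or any static host." },
];

interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params: { name: string; arguments: { query: string } };
}

// Naive substring search over titles and content.
function searchDocs(query: string): DocPage[] {
  const q = query.toLowerCase();
  return docsIndex.filter(
    (p) => p.title.toLowerCase().includes(q) || p.content.toLowerCase().includes(q)
  );
}

// Handle a "tools/call" request the way an MCP endpoint might:
// return a JSON-RPC result with text content items, or a method-not-found error.
function handleToolsCall(req: JsonRpcRequest) {
  if (req.method !== "tools/call" || req.params.name !== "search_docs") {
    return { jsonrpc: "2.0", id: req.id, error: { code: -32601, message: "Unknown tool" } };
  }
  const hits = searchDocs(req.params.arguments.query);
  return {
    jsonrpc: "2.0",
    id: req.id,
    result: { content: hits.map((h) => ({ type: "text", text: `${h.title}: ${h.url}` })) },
  };
}

const response = handleToolsCall({
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: { name: "search_docs", arguments: { query: "deploy" } },
});
console.log(JSON.stringify(response));
```

Because the index is produced at build time, the handler itself is stateless, which is what would let it run on the cheap serverless/edge tier of hosts that otherwise serve the static site.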