
Current State of MCP Servers

Published: 2025-03-12 at 10:00 AM

Today is 2025-03-12. OpenAI just released a new Agents SDK (https://github.com/openai/openai-agents-python), which signals its war against Anthropic’s proposed MCP server ecosystem. OpenAI has also just introduced the new /v1/responses endpoint to supersede /v1/chat/completions, with batteries included for tool usage, streaming, multimodal input, and server-side conversation state that saves retransmitting context.
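
To make the “batteries included” point concrete, here is a sketch of the new endpoint through the openai Python SDK. The parameter and field names (tools, previous_response_id, output_text) follow OpenAI’s launch announcement; treat the details as illustrative rather than authoritative.

# Sketch of the Responses API via the openai Python SDK (names per the
# launch announcement; illustrative, not authoritative).
from openai import OpenAI

client = OpenAI()

# First turn: a built-in hosted tool (web search) instead of a hand-rolled loop.
first = client.responses.create(
    model="gpt-4o",
    input="What's new with MCP servers this week?",
    tools=[{"type": "web_search_preview"}],
)
print(first.output_text)

# Follow-up turn: reference the previous response by id instead of resending
# the whole conversation; this is the part that saves context transmission.
followup = client.responses.create(
    model="gpt-4o",
    previous_response_id=first.id,
    input="Summarize that in one sentence.",
)
print(followup.output_text)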

However, it is an open question whether OpenAI, the pioneer but sometimes-beaten player, can pull the world’s momentum away from MCP servers.

Here’s a review of the current state of MCP servers.

A lot of people comment that the MCP protocol is badly designed (or even not designed at all). My point is that the same is true of the /v1/completions protocol.

Vague way of launching MCP servers

Most MCP servers are launched locally via either npx (for servers implemented in JS/TS) or uvx (for servers implemented in Python).

One example is Fetch (https://github.com/modelcontextprotocol/servers/tree/main/src/fetch)

uvx mcp-server-fetch

Another is FileSystem (https://github.com/modelcontextprotocol/servers/blob/main/src/filesystem)

npx -y @modelcontextprotocol/server-filesystem /path/to/other/allowed/dir

Or in JSON config format:

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/username/Desktop",
        "/path/to/other/allowed/dir"
      ]
    }
  }
}

This has two issues.

  1. When using Cursor over SSH remote, the MCP server actually runs on the local GUI machine, not on the remote server (see the ssh sketch after this list).
  2. If you installed uvx or npx into a user directory and rely on a modified PATH to run them, the MCP client cannot launch the server, because “command” is executed directly, without going through your shell profile.
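
For the first issue, one workaround is to wrap the launch command in ssh so the server process runs on the remote machine. This is a sketch, not an officially documented feature; the host name and path are placeholders. It should work because stdio-based MCP servers just speak JSON-RPC over stdin/stdout, which ssh forwards transparently.

{
  "mcpServers": {
    "filesystem": {
      "command": "ssh",
      "args": [
        "your-remote-host",
        "npx",
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/other/allowed/dir"
      ]
    }
  }
}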

For the second issue, one solution is to provide a wrapper script that fixes up the environment before running the MCP server.

wrapper_script.sh

#!/bin/zsh
# Put the directory containing your user-local npx/uvx on PATH first
export PATH="/path/to/your/npx/or/uvx:$PATH"

# Load the shell profile so any PATH tweaks defined there apply as well
source ~/.zshrc

# Replace this shell with the actual MCP server command passed as arguments
exec "$@"

Then change the JSON config to:

{
  "mcpServers": {
    "filesystem": {
      "command": "/path/to/your/wrapper_script.sh",
      "args": [
        "npx",
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/username/Desktop",
        "/path/to/other/allowed/dir"
      ]
    }
  }
}

Lack of a minimalist client

Currently, most MCP clients are part of a bigger product like Cursor, Cline, or Windsurf, or of GUI chatbots like Cherry Studio.

I’m looking forward to an Aider-like project with tool-usage ability. The closest one is Claude Code, but it’s closed source and vendor-locked.
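
To give a sense of how small such a client could be, here is a minimal sketch using the official MCP Python SDK (https://github.com/modelcontextprotocol/python-sdk). The class and method names follow that SDK’s documented client interface; the tool name “fetch” and its “url” argument come from the Fetch server above.

# Minimal stdio MCP client: launch a server, list its tools, call one.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Same launch command as `uvx mcp-server-fetch`, but spawned by us.
    params = StdioServerParameters(command="uvx", args=["mcp-server-fetch"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("tools:", [t.name for t in tools.tools])
            result = await session.call_tool(
                "fetch", arguments={"url": "https://example.com"}
            )
            print(result.content)


asyncio.run(main())

Everything else (the chat loop, wiring tool results back into the LLM) is bookkeeping on top of these few calls, which is why a vendor-neutral, Aider-like client feels within reach.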

Current shortcoming of the Aider project

Aider is a pioneer in LLM-guided programming, but it still lacks tool usage ability, which is a deal breaker given the landscape of current LLM products. The disadvantage is huge, even just for coding tasks.

For example, suppose I want the LLM to move a file from one directory to another. With tool usage enabled, the LLM can just issue the command to move the file. Without tool usage, the LLM has to ingest the file content and spit it back out at the new location.

This wastes both tokens and time, and risks the LLM corrupting the file content along the way.
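
Concretely, with tool usage the move is a single call. A sketch, again assuming the MCP Python SDK; the move_file tool and its source/destination arguments are per my reading of the filesystem server’s README, and the paths are placeholders.

# With tool usage, moving a file never loads its bytes into model context.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def move_with_tool() -> None:
    params = StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp/allowed"],
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # One tool call, constant cost regardless of file size.
            await session.call_tool(
                "move_file",
                arguments={
                    "source": "/tmp/allowed/report.txt",
                    "destination": "/tmp/allowed/archive/report.txt",
                },
            )


asyncio.run(move_with_tool())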

Corporate Firewall Workaround

When using Claude behind corporate firewalls, HTTP proxy configuration is often restricted. However, this limitation can be bypassed using the fetch MCP server, which provides web access capabilities through the MCP protocol rather than direct HTTP connections.
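
For example, a config along these lines should work. A sketch: the env block is part of the common mcpServers config format, the fetch server’s HTTP stack (httpx) honors the standard proxy environment variables by default, and the proxy URL is a placeholder.

{
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"],
      "env": {
        "HTTP_PROXY": "http://corporate-proxy:8080",
        "HTTPS_PROXY": "http://corporate-proxy:8080"
      }
    }
  }
}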

Suggestions and visions for the agent development/debugging workflow

  1. I look forward to an LLM at the gpt-4o-mini cost level but with the tool-usage ability of the models used in Cursor, so the user is not burdened by the cost.