MCP Myths – The Biggest MCP Myths That Refuse To Go Away

Despite MCP being very new, this fertile and fast-moving area has already generated its own myths and misconceptions, some of which are proving very difficult to dispel.

Here’s my rundown of the biggest MCP myths that I’ve seen running free and propagating in the wild. I’ve added explanations which should dispel each myth for anyone who is still confounded by any of them.

Quick List of The Biggest MCP Myths

  1. MCP Is Just An API: No, MCP ≠ API, they’re very different. MCP takes an entirely different approach to communication; it’s stateful, flexible, maintains context, and more. 
  2. Sandboxed MCP Servers Are Safe: Sandboxing/containerizing MCP servers makes them safer, but it doesn’t eliminate all security risks/accidental damage. 
  3. Having More Tools Empowers Agents: The more tools an AI agent has to choose from, the more likely it is to get stuck in a tool-selection loop or make poor tool selections.
  4. Big Name MCP Servers Are Secure: Numerous significant vulnerabilities have already been exposed in servers launched by Asana, Atlassian, and GitHub, to name just a few. 
  5. MCP OAuth Is Normal OAuth: OAuth flows in MCP differ, introducing additional complexity, challenges, and considerations not present in typical OAuth flows.
  6. You Can Use Prompts To Lock Down Agent Behavior: Well-crafted malicious prompts can override any red lines you’ve given to the AI. You need stronger guardrails.
  7. Auth Is Mandatory For MCP Servers: The MCP specification doesn’t mandate any authorization for MCP servers.

Myth #1: MCP is just an API

Browse any forum about MCP servers, and it won’t take long to encounter someone glibly saying that MCP servers are just fancy APIs, or an API for AI.  

It’s technically true that MCPs are a form of API, but that’s like saying a USB cable is a cable. In reality, when many people say MCPs are just APIs, they’re usually referring to RESTful APIs.

RESTful API connections are static, stateless, use precise endpoints, and don’t manage context (the client manages all context). Each request and response follows a rigid, preset path and is isolated from previous and subsequent requests – neither the resource nor the API maintains context.

MCP connections are flexible, stateful, and negotiated at runtime, utilizing a dynamic and conversational handshake (rather than fixed endpoints), with context maintained by both the MCP server and client. 

MCP connections are not a string of singular, isolated transactions. They are a dance, or collaborative conversation between the client and server. This means they have new and different capabilities, but also new security risks and scalability challenges.
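To make the contrast concrete, here's a minimal sketch of the JSON-RPC handshake that opens an MCP session, next to a one-shot REST call. The field names follow the public MCP specification, but the endpoint, versions, and values are invented for illustration:

```typescript
// Illustrative only: field names follow the public MCP spec, but treat this
// as a sketch rather than a reference implementation.

// A REST call is a single, stateless transaction against a fixed endpoint:
async function fetchTask(): Promise<unknown> {
  const response = await fetch("https://api.example.com/v1/tasks/123"); // hypothetical endpoint
  return response.json(); // the client alone carries any context forward
}

// An MCP session starts with a negotiated handshake over JSON-RPC.
// The client declares what it supports...
const initializeRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2025-03-26",                    // negotiated, not assumed
    capabilities: { sampling: {} },                   // what the client can do
    clientInfo: { name: "example-client", version: "0.1.0" },
  },
};

// ...and the server replies with *its* capabilities. Both sides keep this
// shared state for the rest of the session; later calls (tools/list,
// tools/call, ...) happen inside it rather than as isolated transactions.
const initializeResult = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    protocolVersion: "2025-03-26",
    capabilities: { tools: { listChanged: true }, resources: {} },
    serverInfo: { name: "example-server", version: "0.4.0" },
  },
};
```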

Learn more about the differences between MCP and API and where you should use each, and watch this video where our CEO explains some of the main ways MCPs are different from APIs:

Myth #2: Sandboxed MCPs Are Safe MCPs

Sandboxing an MCP server, typically in a Docker container, is much safer than running the server freely. It’s a best practice for using MCP servers securely, but it leaves significant security risks unmitigated. 

Sandboxed MCPs can still allow corrupted agents to execute actions or exfiltrate data from the resources they’re allowed to access. Misconfiguration creates risks, too: researchers found hundreds of MCP servers binding to “0.0.0.0”, allowing outside access and full system control without any authentication.

If you don’t lock down network access from the container, a malicious MCP server can still make outbound network calls. For maximum security, isolate machines running local MCP servers from your corporate network as well.
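As a concrete illustration of the binding issue above, here's a minimal Node/TypeScript sketch (port and handler are hypothetical) showing the difference between exposing a locally run server on every interface and keeping it on loopback only:

```typescript
import http from "node:http";

const server = http.createServer((req, res) => {
  // ...hand the request to your MCP server's HTTP transport here...
  res.end("ok");
});

// Risky: 0.0.0.0 binds every network interface, so anything that can reach
// the machine can reach the MCP server -- the misconfiguration researchers
// found in the wild.
// server.listen(8080, "0.0.0.0");

// Safer default for a local server: bind the loopback interface only, so the
// server is reachable solely from the same machine.
server.listen(8080, "127.0.0.1", () => {
  console.log("MCP server listening on 127.0.0.1:8080 (loopback only)");
});
```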

Securing MCP servers requires additional measures, including: 

  • OAuth 2.1 with PKCE
  • Finely-scoped, frequently rotated, securely stored access tokens
  • Prompt sanitization 
  • AI-agent and MCP-specific identity management
  • Other MCP/AI-specific runtime controls

For maximum security, run your MCP servers behind an MCP gateway or proxy, ideally with failsafes against data exfiltration, such as data masking and controls over export/send capabilities or other MCP tools an agent could use to exfiltrate data.

Myth #3: Having More Tools Empowers Agents

I regularly see people – often newcomers to the MCP space – getting excited about building stacks of MCP servers and tools to create super-powered AI agents.

These people quickly run into problems – here’s why: 

LLMs have a limited context window. When they connect to MCP servers, each server sends JSON descriptions of its tools and their capabilities. 

Sending lots of tool descriptions consumes the LLM’s context window. This reduces the LLM’s available “thinking space” to select the right tool, and also consumes your tokens. The LLM can even become completely stuck and fail to select any tool. 
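To see why, here's a rough sketch of the kind of JSON a server returns from tools/list. The field names follow the public MCP spec, but the tools themselves are invented; every description and schema like this lands in the model's context before it does any real work:

```typescript
// Illustrative shape of a tools/list result (field names per the public MCP
// spec; the tools are hypothetical). Each description and JSON Schema below
// is injected into the model's context window.
const toolsListResult = {
  tools: [
    {
      name: "create_ticket",
      description: "Create a new support ticket with a title, body, and priority...",
      inputSchema: {
        type: "object",
        properties: {
          title: { type: "string" },
          body: { type: "string" },
          priority: { type: "string", enum: ["low", "medium", "high"] },
        },
        required: ["title", "body"],
      },
    },
    // ...imagine 40 more of these spread across a handful of servers...
  ],
};

// A rough (and very approximate) way to see the cost: serialized size is a
// proxy for the tokens consumed before the model starts the actual task.
const approxChars = JSON.stringify(toolsListResult).length;
console.log(`~${approxChars} characters of tool metadata in context`);
```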

Even having a smaller number of tools can cause issues if the tools are similar. For instance, choosing between a hammer and a screwdriver is easier than comparing different sizes or, worse, brands of screwdrivers.

Approaches to prevent agents from being overwhelmed by too many tools include:

  • Use an MCP gateway/proxy to filter tools based on user/agent role (identity) or task type
  • Use RAG-MCP techniques to offload tool selection to a vector-database-connected LLM
  • Filter tools directly in the client (see the sketch after this list)
  • Include tool-selection guidance in your prompts
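As a concrete example of the client-side filtering approach, here's a minimal sketch (roles, tool names, and the allowlist are all hypothetical) of exposing only role-relevant tools to the model:

```typescript
// A minimal sketch of role-based tool filtering on the client side. The
// point is that the model only ever sees the tools relevant to the current
// agent, keeping the context window small and the choice unambiguous.
interface ToolDefinition {
  name: string;
  description: string;
}

const toolAllowlistByRole: Record<string, Set<string>> = {
  "support-agent": new Set(["create_ticket", "search_tickets"]),
  "release-bot": new Set(["create_release", "list_tags"]),
};

function filterToolsForRole(allTools: ToolDefinition[], role: string): ToolDefinition[] {
  const allowed = toolAllowlistByRole[role] ?? new Set<string>();
  return allTools.filter((tool) => allowed.has(tool.name));
}

// Only the filtered subset is passed to the LLM.
const visibleTools = filterToolsForRole(
  [
    { name: "create_ticket", description: "Create a support ticket" },
    { name: "delete_repo", description: "Delete a repository" },
  ],
  "support-agent",
);
console.log(visibleTools.map((t) => t.name)); // ["create_ticket"]
```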

Read more about all these approaches in our GitHub guide: Improving MCP Tool Selection.

Myth #4: Big Name MCP Servers Are Secure

We’re still in the very early days of MCP, and numerous cases have already shown that you can’t rely on MCP servers launched by big names to be secure.

For example:

  • Asana spent $8m rectifying a design flaw in their MCP server that dissolved tenancy boundaries, allowing different companies to view each other’s projects and data.
  • Researchers demonstrated that various prompt injection attacks were possible via Atlassian’s MCP.
  • Researchers were able to deploy prompt injection payloads via issues added to GitHub repositories accessed via the GitHub MCP server.

We’re maintaining an index of reported vulnerabilities in MCP servers. Take a look there to track emerging MCP server vulnerabilities, and submit any that are missing, too. 

Myth #5: MCP OAuth Flows Are Regular OAuth Flows

MCP OAuth flows build on standard OAuth 2.1 with PKCE, but they include additional mechanisms that make them distinctive and often difficult to set up correctly.

Complications you may discover when setting up OAuth for MCP systems include:

  • Server Roles: It’s best practice in standard OAuth to use separate resource servers and authorization servers. However, some MCP setups may use a combined resource and authorization server. 
  • Dynamic Client Registration: MCP OAuth requires clients to dynamically register without human intervention, which is rare in standard OAuth flows. Dynamic registration enables MCP clients to access resources as needed without requiring pre-provisioning with IDs and secrets.
  • Dynamic Discovery: OAuth typically operates in systems with hardcoded endpoints, but MCP OAuth requires MCP clients to dynamically fetch resource server and authorization server metadata, typically starting from a 401 Unauthorized response. This is a common failure point when people attempt to set up OAuth for MCP servers themselves (see the sketch below).
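To illustrate the dynamic discovery step, here's a hedged sketch of what a client has to do. The exact header and metadata formats should be checked against the current MCP authorization spec (and RFC 9728 / RFC 8414), and the server URL is made up:

```typescript
// A hedged sketch of MCP OAuth discovery; verify the details against the
// current spec before relying on any of it.
async function discoverAuthServer(mcpServerUrl: string) {
  // 1. An unauthenticated request should come back as 401 Unauthorized...
  const probe = await fetch(mcpServerUrl, { method: "POST" });
  if (probe.status !== 401) {
    throw new Error(`Expected 401 from ${mcpServerUrl}, got ${probe.status}`);
  }

  // 2. ...with a WWW-Authenticate header pointing at the protected resource
  //    metadata document.
  const wwwAuthenticate = probe.headers.get("www-authenticate") ?? "";
  const match = wwwAuthenticate.match(/resource_metadata="([^"]+)"/);
  if (!match) {
    throw new Error("No resource_metadata in WWW-Authenticate header");
  }

  // 3. The resource metadata lists the authorization server(s), whose own
  //    metadata gives the authorization and token endpoints.
  const resourceMetadata = await (await fetch(match[1])).json();
  const authServerUrl: string = resourceMetadata.authorization_servers[0];
  const authServerMetadata = await (
    await fetch(new URL("/.well-known/oauth-authorization-server", authServerUrl))
  ).json();

  return {
    authorizationEndpoint: authServerMetadata.authorization_endpoint,
    tokenEndpoint: authServerMetadata.token_endpoint,
  };
}

discoverAuthServer("https://mcp.example.com/mcp").catch(console.error);
```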

Use our MCP OAuth Troubleshooting Checklist if you’re struggling to get your own MCP OAuth flows working.

If you’re scaling up MCP server use at an enterprise level, you should consider using an MCP gateway or proxy to centralize, standardize, and easily manage authentication, authorization, and identity management for your MCP ecosystem.

Myth #6: You Can Use Prompts To “Lock Down” AI Agent Behavior

Numerous tests by researchers have demonstrated that AI agents can be made to override security instructions and take actions that the initial user has specifically instructed them never to take.

Research shows that attackers can override prompt-based guardrails with explicit instructions, or use indirect, more manipulative language to trick the AI into ignoring them. 

Aside from using various prompt injection methods (such as tool poisoning and RADE) to corrupt the AI agent, attackers can also use attack vectors like “Server Spoofing” to trick the AI into sending data to a malicious server (which impersonates the real, innocent server), but we’re straying into other areas now.

To use AI agents with MCP servers securely, you need more robust prompt sanitization, agent monitoring, and other runtime guardrails provided by an MCP gateway (like MCP Manager!).
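The underlying principle is that rules enforced in code can't be talked around the way prompt rules can. Here's a deliberately simple, hypothetical sketch of a runtime guardrail (a tool-call allowlist checked before anything is forwarded to the server); it illustrates the idea, not any particular product's implementation:

```typescript
// A hypothetical illustration of a runtime guardrail: the check runs in code
// before any tool call is forwarded, so prompt injection alone cannot get an
// agent past it.
interface ToolCall {
  name: string;
  arguments: Record<string, unknown>;
}

const allowedTools = new Set(["search_tickets", "create_ticket"]);

function enforceToolPolicy(call: ToolCall): ToolCall {
  if (!allowedTools.has(call.name)) {
    throw new Error(`Blocked tool call: ${call.name} is not on the allowlist`);
  }
  return call;
}

// Even if a poisoned prompt convinces the model to request "delete_repo",
// the call is rejected before it ever reaches the MCP server.
try {
  enforceToolPolicy({ name: "delete_repo", arguments: { repo: "prod-app" } });
} catch (err) {
  console.error((err as Error).message); // "Blocked tool call: delete_repo ..."
}
```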

Myth #7: Auth Is Mandatory For MCP Servers

Although it feels like every MCP server developer has felt the pain of adding OAuth, the MCP authorization spec doesn’t actually mandate any authorization. The spec’s wording can confuse some readers: it briefly states that authorization is optional, then mandates which methods you must use if you do add it. It could be clearer. 

However, for full clarity, there is no requirement for MCP servers to have any authorization in place. Instead, the onus is ultimately on each user to ensure authorization is in place.

Even if a server does have authorization, it still makes sense to inspect and make sure everything is working correctly before you roll it out. Or you can use a gateway/proxy to centralize and standardize authorization, authentication, and identity management. 
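As a quick pre-rollout sanity check, you can probe whether a server actually challenges unauthenticated requests. This is a hypothetical sketch (the URL and request shape are illustrative), not a substitute for a proper review:

```typescript
// If an unauthenticated request to the MCP endpoint does NOT come back
// 401/403, the server is accepting traffic without any authorization.
async function checkAuthEnforced(mcpServerUrl: string): Promise<boolean> {
  const response = await fetch(mcpServerUrl, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "tools/list" }),
  });
  return response.status === 401 || response.status === 403;
}

checkAuthEnforced("https://mcp.example.com/mcp").then((enforced) => {
  console.log(enforced ? "Auth appears to be enforced" : "WARNING: no auth challenge");
});
```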

The Myth Busting Is Just Beginning 

Hopefully, you found this clear-up of common MCP myths useful and thought-provoking. I’m sure there are some myths you can think of that I have missed. Feel free to reach out and share those with me, and I’ll add them to this post.

Unfortunately, I feel my short list of myths is certain to keep growing, especially as people using or hearing about MCP servers spread beyond the current highly technical core audience. Their misunderstandings will be very different, and perhaps more difficult to dispel. 

The more voices involved – technical or otherwise – the more myths will arise, and the faster they will spread, but to some extent, that’s the sign of a technology that is really taking off, being talked about, and widely used. 

If you want to make it easier for your organization to use MCP servers at scale and securely, then you should use an MCP gateway like MCP Manager.

Get in touch with us today, and we’ll demonstrate how we can help make your MCP adoption smooth and successful, with the lasting impact you want.

Ready to give MCP Manager a try?

Learn More

MCP Manager secures AI agent activity.