Apigee Example Authorization On MCP Server
This example shows how you can use Apigee as a proxy in front of a bare MCP Server implemented in Python+FastMCP, to add Authorization controls.
Apigee Example - External Access Control for MCPs
This repository shows one way to wrap insecure MCP Servers with an Apigee proxy that enforces externally-defined access control decisions.
Many people think MCP is different enough that it requires a different gateway, an entirely different approach in the network.
But it's not that different from what you already know.
- MCP uses JSON-RPC, well known and in use since 2010
- MCP remote transport uses HTTP
- MCP security is built on OAuth
Sound familiar? If you have Apigee, you already have a platform that can provide the right governance for this flavor of API.
Access control options
It is possible to include basic access control in Apigee, using the Apigee configuration flow language. By "basic" I mean, you can configure an API Proxy to check whether the inbound request presents a token, and whether that token is valid for the given operation. But this is a "client-oriented" authorization check - it does not consider the identity of the attached user - and it is static.
But to include more dynamic access control in an Apigee API proxy, you will often want to externalize the access control decision and let Apigee enforce it. This externalized decision could rely on arbitrary data, including the identity of the calling user, ambient data, recent history for that particular user, time of day, and so on.
This repository shows one way to do that, using an implementation involving:
- An MCP Server, implemented in Python, running in Cloud Run
- An API Proxy configured in Apigee X (cloud) that proxies to that server
- A separate Cloud Run service, implemented in C#, that makes access control decisions
- An ExternalCallout policy in the Apigee proxy that calls the Access Control Cloud Run service
Screencast
There's a screencast that shows how all of this works. I encourage you to check it out!
Disclaimer
This example is not an official Google product, nor is it part of an official Google product. It's just an example.
Significant Assembly Required
This example relies on numerous moving parts and systems:
- an Open ID Connect Identity Provider
- An MCP Server implemented in Python with FastMCP, deployed to Cloud Run
- an Authorization Rules server, implemented in C#/.NET, deployed to Cloud Run
- a spreadsheet that holds the rules for the above Authorization Server
- several API Proxies in Apigee
It would be a bigger effort than I can commit to at this time to share a repo with completely tested, working code and scripts that would let you reproduce all of this on your own.
But I'm publishing the code and configuration anyway, because I think just reading and examining the moving parts will be helpful.
Background
Basic access control in Apigee, using the Apigee configuration flow language, is easy. For example, it's really easy to configure an Apigee API proxy to allow access only if the caller presents a valid token (using the built-in Apigee policy OAuthV2, with Operation = VerifyAccessToken), or a valid, unexpired API Key (using the built-in Apigee policy VerifyAPIKey).
In the simple case, the OAuthV2/VerifyAccessToken policy would look like this:

```xml
<OAuthV2 name="OAuthV2-Verify-Access-Token">
  <Operation>VerifyAccessToken</Operation>
</OAuthV2>
```
And the VerifyAPIKey policy would look like this:

```xml
<VerifyAPIKey name="APIKeyVerifier">
  <APIKey ref="request.queryparam.apikey" />
</VerifyAPIKey>
```
In the former case, the one relying on the OAuthV2 access token, of course, the calling app must have previously obtained the access token, via some grant flow. That is just the standard OAuthV2 model, nothing new there.
But as you can see, whether using a key or a token, the control is binary: either the caller has a key or token that is valid for the current call, or it does not. With MCP servers and tools in particular, you often want more control and flexibility than this coarse-grained check can provide.
The use of API Products for Access Control
To extend beyond basic checks, Apigee offers the API Product concept. API publishers can configure specific client credentials (client IDs or API keys) to be authorized for specific API Products. The Products are really just collections of API Proxies, with metadata. Each inbound request presents a credential, which resolves to one or more API Products. At runtime, Apigee verifies that the presented client credential is authorized for an API Product that includes the particular verb + path pair the current API request is using.
For a 15-minute screencast review of the API Product concept and the implicit verb+path authorization checks, see here. But the basics are:

- At configuration time:
  - API publishers define API Products. Each one includes 1 or more verb + path pairs.
  - Client developers obtain credentials (client IDs) for their apps. Each credential is authorized for one or more API Products.
  - Client developers embed those credentials into the apps they build.
- At runtime:
  - The client app sends in GET /foo (verb = GET, path = /foo).
  - When you call VerifyAPIKey or VerifyAccessToken, Apigee checks the key or token.
  - If valid, Apigee implicitly checks that the verb + path pair is authorized via at least one of the API Products associated with the credential.
And beyond the basics, you can also configure Apigee to check a scope on an Access Token.
There is a handy working sample that walks you through this, actually working in Apigee. Check it out!
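To make the implicit product check concrete, here is a small sketch in Python. This is not Apigee's actual implementation; the data structures, product names, and client IDs are illustrative assumptions.

```python
# Sketch of the implicit API Product authorization check.
# Illustrative only; not Apigee's actual implementation.

# Each API Product is modeled as a set of authorized (verb, path) pairs.
API_PRODUCTS = {
    "product-readonly": {("GET", "/foo"), ("GET", "/bar")},
    "product-full": {("GET", "/foo"), ("POST", "/foo"), ("DELETE", "/foo")},
}

# Each client credential is authorized for one or more API Products.
CREDENTIALS = {
    "client-abc": ["product-readonly"],
    "client-xyz": ["product-readonly", "product-full"],
}

def is_authorized(client_id: str, verb: str, path: str) -> bool:
    """Return True if any Product bound to the credential covers verb+path."""
    for product in CREDENTIALS.get(client_id, []):
        if (verb, path) in API_PRODUCTS.get(product, set()):
            return True
    return False

print(is_authorized("client-abc", "GET", "/foo"))   # True
print(is_authorized("client-abc", "POST", "/foo"))  # False
```

The key point: the credential itself carries no verb+path information; authorization comes from the Products the credential is bound to.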
What about more flexible controls?
One thing that is missing here is "role based access control", a/k/a RBAC, which would allow an access control decision based on the identity of the human operating the application. Also missing is ABAC, what OWASP calls "Attribute Based Access Control", which allows control based not just on the role or identity of the caller, but also on additional data, such as: job role, time of day, project name, originating IP address, record creation date, prior activity pattern, and others. Apigee does not have a good mechanism, by itself, for performing either user-by-user RBAC or the more general ABAC.
To accomplish user-based RBAC, or the more general ABAC, the typical pattern is to externalize the access control decision and use Apigee to enforce the decision. You would use this as a complement to the basic authorization checks Apigee can do with API Products.
The way it works for handling an inbound call (whether MCP or some other variant):

1. The Apigee runtime collects or determines all of the information it needs to inform an access control decision. This might be information about the requesting user, a billing account status, patterns of recent activity, and so on. Normally the user information is obtained from something like an ID Token that is signed by an independent Identity Provider.
2. Apigee sends an access control request to an external Access Control system. This request must include all the metadata the external system will need to make a decision: the identity of the caller, the resource being requested, the specific action being requested, the source IP address, and so on. Whatever is required.
3. The external system makes the decision (Allow or Deny) and sends it back to Apigee.
4. The Apigee API proxy then enforces that decision.
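The access control request described above can be sketched as follows. The field names here are illustrative assumptions, not a fixed contract; your external service defines its own schema.

```python
import json

def build_access_control_request(jwt_payload: dict, method: str,
                                 tool: str, source_ip: str) -> str:
    """Assemble the metadata an external decision service would need.
    Field names are illustrative; the real schema is whatever your
    Access Control service expects."""
    request = {
        "caller": jwt_payload.get("sub"),
        "groups": jwt_payload.get("az_groups", []),
        "action": method,      # e.g. the MCP method, like "tools/call"
        "resource": tool,      # e.g. the MCP tool name
        "source_ip": source_ip,
    }
    return json.dumps(request)

payload = build_access_control_request(
    {"sub": "alice@example.com", "az_groups": ["viewers"]},
    "tools/call", "get-products", "203.0.113.7")
print(payload)
```

The point is that the proxy gathers everything the decision needs up front, so the external system can remain stateless with respect to the request.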
The example contained in this repository shows how you can implement this pattern using a custom Cloud Run service to externalize the access control decision for MCP Servers.
Implementation Details
The example here shows the basic idea. Here's how it works.
1. A client app (eg an Agent) sends an MCP call request to an Apigee API proxy. This call must include an Access Token in the Authorization header.
2. The Apigee API proxy verifies the access token, checking that it is valid for the given proxy.
3. If that passes, the Apigee API proxy calls out to an external access control service, passing it {jwt payload, MCP method, MCP tool}. This service happens to be implemented in C#, but that's just a detail.
4. The access control service uses the Google Sheets REST API to retrieve access control rules from a Google Sheet. This data is cached in the Access Control service.
5. The access control service applies the access rules against the inbound data. It uses the "az_groups" claim on the Access Token, and the MCP method and tool, to find a matching rule. If there is an ALLOW entry, the request is allowed. Otherwise, not. The service returns an "ALLOW" or "DENY" to the proxy.
6. The proxy enforces that decision: for ALLOW it proxies to the upstream MCP Server; otherwise it issues a 403 status.
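For step 3, extracting claims like "az_groups" from the token payload is just base64url-decoding the middle segment of the JWT (the signature has already been verified in step 2). A minimal stdlib sketch, with a fake token constructed for illustration:

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode the payload (middle) segment of a JWT.
    NOTE: this does NOT verify the signature; in this example the
    proxy has already verified the token before this point."""
    seg = token.split(".")[1]
    seg += "=" * (-len(seg) % 4)          # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(seg))

def _b64(obj) -> str:
    """Base64url-encode a JSON object, unpadded (JWT convention)."""
    raw = json.dumps(obj).encode()
    return base64.urlsafe_b64encode(raw).decode().rstrip("=")

# Fake token for illustration: header.payload.signature
token = ".".join([_b64({"alg": "RS256"}),
                  _b64({"sub": "alice", "az_groups": ["viewers"]}),
                  "fake-signature"])
print(jwt_payload(token)["az_groups"])   # ['viewers']
```

Never use this kind of decode-only helper in place of signature verification; here it only illustrates how the claim reaches the access control service.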
The rules are stored as rows in the Google Sheet (the screenshot is not reproduced here). The logic that evaluates whether a request should be authorized lives in the C# Access Control service.
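A minimal sketch of that evaluation, in Python rather than C#, assuming each rule row holds a group, an MCP method, a tool, and an ALLOW effect (the exact column layout in the real sheet may differ):

```python
# Hypothetical rule rows, as they might be read from the sheet.
# Columns assumed: group, MCP method, tool, effect. "*" matches anything.
RULES = [
    {"group": "viewers", "method": "tools/list", "tool": "*",
     "effect": "ALLOW"},
    {"group": "viewers", "method": "tools/call", "tool": "get-products",
     "effect": "ALLOW"},
    {"group": "admins", "method": "*", "tool": "*", "effect": "ALLOW"},
]

def _matches(pattern: str, value: str) -> bool:
    return pattern == "*" or pattern == value

def decide(az_groups: list, method: str, tool: str) -> str:
    """Return "ALLOW" if any rule permits the call, else "DENY"."""
    for rule in RULES:
        if (rule["group"] in az_groups
                and _matches(rule["method"], method)
                and _matches(rule["tool"], tool)
                and rule["effect"] == "ALLOW"):
            return "ALLOW"
    return "DENY"

print(decide(["viewers"], "tools/call", "get-products"))    # ALLOW
print(decide(["viewers"], "tools/call", "delete-product"))  # DENY
```

This default-deny shape (allow only on an explicit matching rule) mirrors the behavior described above: no matching ALLOW entry means the request is rejected.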
Some implementation notes:

- In step 1, the access token that the agent sends is obtained via an OAuthV2 authorization code grant, from the OpenID Connect server registered for the particular MCP Server. You need to supply your own OIDC Server for this. You may use Auth0.com; for instructions, see Auth0 setup. Whatever OIDC server you use, it must emit an access token with an "az_groups" claim, which should be a list of strings. The Access Control Server examines that claim to determine whether to allow the request.
- The API Proxy assumes that the JWKS endpoint is available at `${OIDC_SERVER}/jwks` and that the token issuer is the same as the `${OIDC_SERVER}` URL.
- The access control service is a gRPC service. That means it is relatively fast and efficient to call from your Apigee API Proxy, and it should be acceptable to incur that check for every API request. If even that latency is not acceptable, you can move the rules evaluation logic into the Apigee proxy itself. That is not shown here.
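The implementation steps above mention that the rules read from the sheet are cached in the Access Control service. A simple time-based cache, sketched in Python (the real service is C#, and its caching strategy may differ; names here are illustrative):

```python
import time

class RulesCache:
    """Tiny TTL cache for rules fetched from the Sheets API.
    Illustrative only; the real C# service may cache differently."""
    def __init__(self, fetch_fn, ttl_seconds: float = 300.0):
        self._fetch = fetch_fn          # e.g. a call to the Sheets REST API
        self._ttl = ttl_seconds
        self._rules = None
        self._loaded_at = 0.0

    def get(self):
        now = time.monotonic()
        if self._rules is None or now - self._loaded_at > self._ttl:
            self._rules = self._fetch()
            self._loaded_at = now
        return self._rules

# Demonstrate that repeated reads within the TTL hit the cache.
calls = []
def fake_fetch():
    calls.append(1)
    return [{"group": "viewers", "effect": "ALLOW"}]

cache = RulesCache(fake_fetch, ttl_seconds=300)
cache.get()
cache.get()
print(len(calls))   # 1  (second read served from cache)
```

A TTL cache like this keeps per-request latency low while bounding how stale the rules can be; edits to the sheet take effect within one TTL window.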
Why not use OPA for Access Control?
Good question!! Open Policy Agent (OPA) is a good solution for storing, managing, and evaluating access rules, for arbitrary systems or resources. It's open source, well maintained, and available as a deployable container image. You can deploy the container image right to something like Cloud Run; no need to build the code.
All sounds good, right? The one drawback that I've seen is that OPA depends on Rego to express policies. Rego is a domain-specific language; I have not seen it used anywhere other than OPA. And it is somewhat novel. That can be an obstacle for some teams.
For this particular example, I decided to use a Google Sheet to store the access rules for these reasons:
- it's visual - it's easy to see what specific rules are in place, and easy to demonstrate;
- it's easy to update and maintain the access rules;
- it's easy to protect the access rules with user rights on the Sheets document;
- it's easy to get a log of who changed what - just look at the version history on the sheet.

All of that you get "for free" with a Google Sheet.
The C# logic that retrieves and applies the rules is also fairly easy to understand. The combination of all of those factors means using Sheets and C# makes for a solution that is more broadly accessible than one based on the combination of OPA and REGO.
BUT, the architectural model of the solution using OPA would be exactly the same as what I've got here with a custom C# service and a Google Sheet.
Deploying it for your own purposes
To follow the instructions to deploy this in your own environment, you will need the following prerequisites:
- Apigee X or hybrid
- a Google Cloud project with Cloud Run and Cloud Build enabled
- a Google Workspace environment that allows you to create and share spreadsheets
- .NET 8.0 - if you want to modify the source code and build locally. Otherwise Cloud Build will build it for you remotely, and you don't need .NET on your workstation.
- various tools: bash, curl, gcloud CLI, apigeecli, jq
You can get all of these things in the Google Cloud Shell.
Steps to follow
These will require some customization by you. This is not fully tested and vetted.
1. Modify the env.sh file to suit your environment. Then source it to set those variables for use in subsequent commands:

   ```sh
   source ./env.sh
   ```

2. Enable the services needed:

   ```sh
   ./1-enable-services.sh
   ```

3. Sign in with gcloud to allow the script to create a spreadsheet:

   ```sh
   ./2-auth-login.sh
   ```

4. Deploy the "Products" MCP Server to Cloud Run:

   ```sh
   ./3-deploy-products-mcp-to-cloud-run.sh
   ```

5. Create the sheet that holds Rules + Roles:

   ```sh
   ./4-create-sheet.sh
   ```

   When the script finishes, define the shell variable for the Sheet ID. Find that value in the output of the "create sheet" step:

   ```sh
   export SHEET_ID=VALUE-FROM-PRIOR-STEP
   ```

6. Create the service account for the Access Control service:

   ```sh
   ./5-create-service-account-for-access-control-service.sh
   ```

7. Manually share the sheet created previously with the SA email address.

8. Deploy the Cloud Run service that will read and apply the Rules in the sheet:

   ```sh
   ./6-deploy-access-control-service-to-cloud-run.sh
   ```

   This takes a few minutes. It sends the source code up to Cloud Build, builds the service, then deploys it from the image.

9. Create the Apigee Target Server. This is the server entity pointing to the access control server:

   ```sh
   ./7-create-apigee-target-server-for-authz.sh
   ```

10. Install apigeecli:

    ```sh
    ./8-install-apigeecli.sh
    ```

11. Import and deploy the Apigee API Proxies. There is one to handle the MCP "well-known" endpoints (basepath `/.well-known/`) and another to handle the other MCP transactions:

    ```sh
    ./9-import-and-deploy-apigee-proxies.sh
    ```

12. Configure an MCP Server in the chatbot or agent of your choice (Gemini CLI works), like this:

    ```json
    "mcpServers": {
      "products": {
        "httpUrl": "https://your-apigee-endpoint/mcp-access-control/mcp",
        "oauth": {
          "enabled": true,
          "clientId": "ab4aded9d20f44RHgmrNCq",
          "clientSecret": "26a86ab545704312b748e331f854"
        }
      }
    }
    ```

    The clientId and clientSecret need to be known by your OIDC Server. Or, you can omit them if your server supports Dynamic Client Registration (DCR).

13. Start the agent; it should kick off the OAuth flow and eventually invoke the MCP Server. You can open a Trace session in Apigee to see the interactions.
Clean Up
- Remove the Apigee assets. This includes the target server and the API proxy:

  ```sh
  ./99a-clean-apigee-entities.sh
  ```

- Remove the Cloud Run assets. This includes the service account:

  ```sh
  ./99b-clean-cloud-run-authorization-service.sh
  ```

- Manually delete the Google Sheet.
Support
This callout and example proxy are open-source software, and are not a supported part of Apigee. If you have questions or need assistance, you can try inquiring on the Google Cloud Community forum dedicated to Apigee. There is no service-level guarantee for responses to inquiries posted to that site.
License
This material is Copyright © 2025 Google LLC and is licensed under the Apache 2.0 License. This includes the code as well as the API Proxy configuration.
Bugs
- The Cloud Run service is deployed to allow "unauthenticated access". If you use something like this in a real system, you will want to deploy the Cloud Run service to allow `run.invoke` from the service account your Access Control Service runs as.
- The API Proxy does not perform "VerifyAPIKey" on the client ID contained within the Access Token. This is a simple extension. It requires synchronizing the Apps in Apigee with the Client IDs in the OIDC Server.
- The Access Control service does not check for malformed rules.
