Goose Cloud Foundry Buildpack
A Cloud Foundry supply buildpack that bundles the Goose AI coding agent CLI into containers, enabling Java applications to invoke Goose programmatically.
Overview
This buildpack installs the Goose CLI (a native Rust binary) into Cloud Foundry containers, allowing applications to leverage Goose for AI-assisted coding tasks. Unlike other AI agent buildpacks, no runtime (Node.js, Python) is required - Goose is a single self-contained binary.
Features
- Native Binary - Goose CLI is a ~30MB Rust binary, no runtime dependencies
- Multi-Provider Support - Works with Anthropic, OpenAI, Google, Databricks, Ollama, and more
- GenAI Auto-Discovery - Automatically discovers and configures Tanzu GenAI service bindings
- Java Wrapper Library - Clean Java API for invoking Goose from Spring Boot applications
- MCP Support - Configure Model Context Protocol servers to extend capabilities
- Agent Credential Broker - Centralized credential management for OAuth-protected MCP servers via delegation tokens
- Skills Support - Define reusable instruction sets for common workflows
- Spring Boot Ready - Auto-configuration for seamless Spring Boot integration
Quick Start
1. Create Configuration File
Add .goose-config.yml to your application root:
goose:
  enabled: true
  version: "latest"  # Always use latest release
  provider: anthropic
  model: claude-sonnet-4-20250514
2. Update Manifest
Configure your manifest.yml:
applications:
- name: my-app
  buildpacks:
  - goose-buildpack
  - java_buildpack
  env:
    ANTHROPIC_API_KEY: sk-ant-xxxxx
    GOOSE_ENABLED: true
3. Add Java Wrapper (Optional)
For Java applications, add the Google Artifact Registry repository and wrapper dependency to your pom.xml:
<repositories>
    <repository>
        <id>gcp-maven-public</id>
        <name>GCP Artifact Registry - Public Maven Repository</name>
        <url>https://us-central1-maven.pkg.dev/cf-mcp/maven-public</url>
        <releases>
            <enabled>true</enabled>
        </releases>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
    </repository>
</repositories>

<dependencies>
    <dependency>
        <groupId>org.tanzu.goose</groupId>
        <artifactId>goose-cf-wrapper</artifactId>
        <version>1.1.0</version>
    </dependency>
</dependencies>
4. Use in Code
@Autowired
private GooseExecutor gooseExecutor;

public void processWithGoose() {
    String result = gooseExecutor.execute("Analyze this code for security issues");
    System.out.println(result);
}
Offline / Disconnected Installation
For air-gapped or disconnected Cloud Foundry foundations, use the pre-built cached buildpack zip. It bundles the Goose binary so no outbound internet is needed during staging.
Download goose_buildpack-cached-vX.Y.Z-goose-vA.B.C.zip from the Releases page, then upload it once to your foundation:
cf create-buildpack goose_buildpack ./goose_buildpack-cached-vX.Y.Z-goose-vA.B.C.zip 99 --enable
Then reference it by name in manifest.yml (no URL):
applications:
- name: my-app
  buildpacks:
  - goose_buildpack
  - java_buildpack
The bundled Goose version is fixed. To upgrade Goose, upload a new cached zip:
cf update-buildpack goose_buildpack -p ./goose_buildpack-cached-vX.Y.Z-goose-vB.C.D.zip
Detection
The buildpack detects and activates when:
- .goose-config.yml or .goose-config.yaml exists in the app root
- A JAR file contains .goose-config.yml (Spring Boot apps)
- GOOSE_ENABLED=true environment variable is set
- manifest.yml contains goose-enabled: true
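The file and environment-variable rules above can be sketched as a small shell function. This is illustrative only, not the actual bin/detect script, which also inspects JAR contents and manifest.yml:

```shell
# Illustrative sketch of the detection rules - not the real bin/detect script,
# which additionally checks inside JAR files and manifest.yml.
goose_detect() {
  local build_dir="$1"
  # Rule: a config file exists in the app root
  if [ -f "$build_dir/.goose-config.yml" ] || [ -f "$build_dir/.goose-config.yaml" ]; then
    return 0
  fi
  # Rule: explicit opt-in via environment variable
  if [ "${GOOSE_ENABLED:-}" = "true" ]; then
    return 0
  fi
  return 1
}
```

Any one matching rule is enough for the buildpack to activate.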
Configuration
Environment Variables
| Variable | Description | Required |
|---|---|---|
| GOOSE_ENABLED | Enable buildpack detection | No |
| GOOSE_VERSION | Goose version: latest, stable, or a specific version like v1.21.1 (default: latest) | No |
| ANTHROPIC_API_KEY | Anthropic API key | One provider required* |
| OPENAI_API_KEY | OpenAI or OpenAI-compatible API key | One provider required* |
| OPENAI_HOST | Base URL for OpenAI-compatible endpoints (e.g. https://my-llm.example.com/openai) | For custom endpoints |
| GOOGLE_API_KEY | Google API key | One provider required* |
| DATABRICKS_HOST | Databricks workspace URL | With DATABRICKS_TOKEN |
| DATABRICKS_TOKEN | Databricks access token | With DATABRICKS_HOST |
| OLLAMA_HOST | Ollama server URL | For local inference |
| GOOSE_PROVIDER | Default LLM provider | No |
| GOOSE_MODEL | Default LLM model | No |
| BROKER_BASE_URL | Agent Credential Broker URL (enables broker integration) | No |
| GOOSE_LOG_LEVEL | Log level (debug/info/warn/error) | No |
| GOOSE_TIMEOUT_MINUTES | Execution timeout in minutes | No (default: 5) |
| GOOSE_MAX_TURNS | Max conversation turns | No (default: 100) |
| BYPASS_GENAI | Skip GenAI service auto-discovery | No |
| GENAI_SERVICE_NAME | Set by the buildpack when a GenAI service is discovered | Output only |
*Not required if using a GenAI service binding - credentials are auto-discovered.
Configuration File (.goose-config.yml)
goose:
  enabled: true
  version: "latest"  # "latest" (always newest), "stable", or "v1.21.1" (specific)

  # LLM Provider
  provider: anthropic  # anthropic, openai, google, databricks, ollama
  model: claude-sonnet-4-20250514

  # Settings
  logLevel: info  # debug, info, warn, error

  # Environment Variables (optional)
  # Exported into the container shell at runtime. Values may reference
  # manifest.yml env vars using ${VAR_NAME} syntax (expanded at runtime).
  env:
    MY_CUSTOM_VAR: "static value"
    ANOTHER_VAR: "${MANIFEST_VAR}"

  # MCP Servers (optional)
  mcpServers:
    # Local stdio server (requires runtime in container)
    - name: filesystem
      type: stdio
      command: npx
      args:
        - "-y"
        - "@modelcontextprotocol/server-filesystem"
      env:
        ALLOWED_DIRECTORIES: "/home/vcap/app,/tmp"
    # Remote authenticated server (credentials managed by Agent Credential Broker)
    - name: github
      type: streamable_http
      url: "https://api.githubcopilot.com/mcp/"
      brokerAuth: true  # Credentials injected by broker at runtime

  # Skills (optional) - reusable instruction sets
  skills:
    - name: code-review
      description: Code review checklist
      content: |
        # Code Review Checklist
        - [ ] Code does what the PR claims
        - [ ] Edge cases handled
        - [ ] Follows project style guide
User-Defined Environment Variables
The env: section in .goose-config.yml lets you declare environment variables that are exported into the container shell at runtime - useful for values that Goose or MCP servers need but that don't belong in manifest.yml.
goose:
  env:
    MY_CUSTOM_VAR: "static value"
    ANOTHER_VAR: "${MANIFEST_VAR}"
Values are written to a .profile.d script at staging time and sourced when the container starts. References to other environment variables (e.g. ${MANIFEST_VAR}) are not expanded at staging - they are expanded at runtime, so any variable set in manifest.yml will be available for interpolation.
Order of precedence: variables already set in the environment (e.g. from manifest.yml) take precedence over same-named entries in the env: section, since manifest.yml env vars are present before the profile script runs.
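This precedence behavior can be sketched with the usual shell ${VAR:-default} idiom. The script contents below are a hypothetical illustration of what the buildpack might generate, not a dump of the actual .profile.d script:

```shell
# Simulate variables already present before the profile script runs
# (e.g. set via manifest.yml env).
MANIFEST_VAR="from-manifest"
MY_CUSTOM_VAR="manifest-wins"

# Hypothetical sketch of the generated .profile.d script contents:
# ${VAR:-default} keeps any value already set, so manifest.yml wins
# over same-named entries from the env: section.
export MY_CUSTOM_VAR="${MY_CUSTOM_VAR:-static value}"
export ANOTHER_VAR="${ANOTHER_VAR:-${MANIFEST_VAR}}"
```

After sourcing, MY_CUSTOM_VAR keeps its manifest value, while ANOTHER_VAR picks up the interpolated manifest variable because it was not previously set.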
Agent Credential Broker Integration
The Java wrapper integrates with the Agent Credential Broker for centralized credential management. Instead of each agent application managing its own OAuth flows, the broker handles credential acquisition, storage, and refresh, and agents receive short-lived access tokens via delegation.
How It Works
- User grants access - A user pre-authorizes target systems (e.g., GitHub, Cloud Foundry) in the Credential Broker's web UI
- Delegation token - At session creation, the host application obtains a signed delegation token from the broker using the user's identity
- Credential injection - Before each Goose execution, the wrapper's BrokerCredentialInjector exchanges the delegation token for short-lived access tokens and injects them as Authorization headers into Goose's config.yaml
- Transparent auth - MCP servers receive authenticated requests without any credential handling in the agent code
Configuration
The only configuration needed is the BROKER_BASE_URL environment variable:
# manifest.yml
env:
  BROKER_BASE_URL: ((BROKER_BASE_URL))
MCP servers that require authentication should have brokerAuth: true in .goose-config.yml. No clientId, clientSecret, or scopes are needed - the broker manages those details.
mcpServers:
  - name: github
    type: streamable_http
    url: "https://api.githubcopilot.com/mcp/"
    brokerAuth: true
  - name: cloud-foundry
    type: streamable_http
    url: "https://cf-mcp-server.apps.example.com/mcp"
    brokerAuth: true
Prerequisites
- The Agent Credential Broker must be deployed and accessible
- The host application and broker must share the same SSO identity provider so that user identities are consistent
- Users must have active grants in the broker for the target systems they want to use
Java Wrapper Components
| Class | Responsibility |
|---|---|
| BrokerCredentialInjector | Requests access tokens from the broker and injects Authorization headers into Goose's config |
| CredentialBrokerClient | REST client for the broker's /api/credentials/request endpoint |
| BrokerAutoConfiguration | Spring Boot auto-configuration - creates broker beans when BROKER_BASE_URL is set |
MCP Server Auto-Configuration via Service Binding
As an alternative to the Agent Credential Broker, the buildpack also supports automatic MCP server discovery from Cloud Foundry service bindings. This approach is suitable when MCP servers use simple API key authentication rather than OAuth flows.
Service instances are detected as MCP servers when their credentials contain both a uri field and an X-API-KEY header.
When an MCP server is detected, the buildpack will:
- Extract the MCP server URI from the service credentials
- Automatically add it to Goose's configuration as a streamable_http extension
- Make it available to Goose without manual configuration
Requirements:
- The service credentials must include a uri field
- The service credentials must include an X-API-KEY header (within the headers object)
- Only Streaming HTTP transport is supported for remote MCP servers
Example: MCP server created with a user-provided service.
cf create-user-provided-service my-mcp-server \
-p '{"uri":"https://my-mcp-server.apps.example.com/mcp","headers":{"X-API-KEY":"your-api-key"}}'
The buildpack will automatically detect and configure the MCP server at runtime. Service bindings take precedence over manually configured MCP servers in .goose-config.yml - if a service binding has the same name as a config file server, the service binding will be used.
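The "uri plus X-API-KEY" shape check can be approximated in a few lines of shell. This is purely illustrative - the real buildpack parses VCAP_SERVICES as structured JSON rather than grepping for key names:

```shell
# Purely illustrative: the real buildpack parses VCAP_SERVICES as JSON;
# a grep check only approximates the "uri + X-API-KEY" detection rule.
VCAP_SERVICES='{"user-provided":[{"name":"my-mcp-server","credentials":{"uri":"https://my-mcp-server.apps.example.com/mcp","headers":{"X-API-KEY":"your-api-key"}}}]}'

detected=false
if echo "$VCAP_SERVICES" | grep -q '"uri"' \
   && echo "$VCAP_SERVICES" | grep -q '"X-API-KEY"'; then
  detected=true
fi
echo "MCP binding detected: $detected"   # -> MCP binding detected: true
```

A binding missing either field would leave detected=false and be ignored by the MCP configurator.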
GenAI Service Auto-Discovery
The buildpack automatically discovers and configures Tanzu GenAI Proxy service bindings at container startup. This eliminates the need to manually configure LLM providers when using GenAI services in Cloud Foundry.
How it works:
- At container startup, the buildpack parses VCAP_SERVICES for GenAI service bindings
- Services are detected when they have:
  - A genai tag OR a label starting with genai
  - Credentials containing an endpoint object with api_key, api_base, and config_url
- The buildpack calls the config_url endpoint to discover available models
- It automatically selects the first model with TOOLS capability (required for Goose's tool calling)
- Environment variables are set automatically: OPENAI_API_KEY, OPENAI_HOST, GOOSE_PROVIDER, GOOSE_MODEL
Example VCAP_SERVICES structure for GenAI:
{
  "genai": [{
    "name": "my-genai-service",
    "tags": ["genai"],
    "credentials": {
      "endpoint": {
        "api_key": "your-api-key",
        "api_base": "https://genai-proxy.example.com/instance-id",
        "config_url": "https://genai-proxy.example.com/instance-id/config"
      }
    }
  }]
}
Priority (highest to lowest):
1. GenAI service binding - Automatically discovered at runtime
2. Environment variables - GOOSE_PROVIDER, GOOSE_MODEL set in manifest
3. Config file defaults - Values from .goose-config.yml
Bypassing GenAI discovery:
Set BYPASS_GENAI=true to skip GenAI service discovery and use manually configured providers instead:
env:
  BYPASS_GENAI: true
  OPENAI_API_KEY: your-key
  GOOSE_PROVIDER: openai
  GOOSE_MODEL: gpt-4
Custom OpenAI-Compatible Endpoints
If you have a model endpoint that is compatible with the OpenAI API (many LLM proxies and self-hosted models expose this format), configure it using OPENAI_HOST and OPENAI_API_KEY along with provider: openai in your config file.
.goose-config.yml:
goose:
  enabled: true
  provider: openai       # use openai for any OpenAI-compatible endpoint
  model: your-model-name # model name as exposed by your endpoint
manifest.yml:
applications:
- name: my-app
  buildpacks:
  - goose-buildpack
  - java_buildpack
  env:
    OPENAI_HOST: https://your-llm-endpoint.example.com/openai
    OPENAI_API_KEY: your-api-key
OPENAI_HOST is the base URL of the endpoint - Goose appends the appropriate API paths (e.g. /chat/completions) automatically. The provider: openai setting tells Goose to use the OpenAI wire format, regardless of the actual provider behind the endpoint.
Verifying GenAI configuration:
The buildpack sets GENAI_SERVICE_NAME when a GenAI service is discovered. Applications can check this variable to determine if GenAI is configured:
String genaiService = System.getenv("GENAI_SERVICE_NAME");
if (genaiService != null) {
    // Using GenAI service: genaiService
}
Skills
Skills are reusable sets of instructions that teach Goose how to perform specific tasks. They follow the Agent Skills format compatible with Claude Desktop and other agents.
Skill Types
The buildpack supports three types of skills:
1. Inline Skills
Embed skill content directly in your .goose-config.yml:
skills:
  - name: production-deploy
    description: Safe deployment procedure for production
    content: |
      # Production Deployment

      ## Pre-deployment
      1. Ensure all tests pass
      2. Get approval from reviewers
      3. Notify #deployments channel

      ## Deploy
      1. Create release branch from main
      2. Run ./gradlew build
      3. Deploy to staging, verify, then production
2. File-Based Skills
Reference skills from directories in your application:
skills:
  - name: deployment
    path: .goose/skills/deployment
The skill directory must contain a SKILL.md file with YAML frontmatter:
---
name: deployment
description: Deployment workflow for this project
---
# Deployment Steps
1. Build the application
2. Run integration tests
...
3. Git-Based Skills
Clone skills from a Git repository during staging:
skills:
  - name: company-standards
    source: https://github.com/org/goose-skills.git
    branch: main
    path: skills/company-standards
Note: Git-based skills require network access during Cloud Foundry staging.
Skill Locations
Skills are installed to .goose/skills/ in the application directory and copied to ~/.config/goose/skills/ at runtime. Goose automatically discovers and uses skills based on your requests.
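The install-and-copy flow can be simulated end to end in temp directories. The paths below stand in for /home/vcap/app and $HOME; the copy step mirrors what the description above says happens at runtime:

```shell
# Simulate the skill install locations: staging writes under .goose/skills/
# in the app dir; at runtime skills are copied to ~/.config/goose/skills/.
app=$(mktemp -d)    # stands in for /home/vcap/app
home=$(mktemp -d)   # stands in for $HOME

# What staging leaves in the app directory
mkdir -p "$app/.goose/skills/deployment"
printf -- '---\nname: deployment\n---\n# Deployment Steps\n' \
  > "$app/.goose/skills/deployment/SKILL.md"

# The runtime copy step
mkdir -p "$home/.config/goose/skills"
cp -r "$app/.goose/skills/." "$home/.config/goose/skills/"
```

After the copy, Goose finds the skill at ~/.config/goose/skills/deployment/SKILL.md.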
Java Wrapper Library
The buildpack includes a Java wrapper library for easy integration:
Synchronous Execution
GooseExecutor executor = new GooseExecutorImpl();
String result = executor.execute("What is this code doing?");
With Options
GooseOptions options = GooseOptions.builder()
    .timeout(Duration.ofMinutes(10))
    .maxTurns(50)
    .build();
String result = executor.execute("Refactor this function", options);
Streaming
try (Stream<String> lines = executor.executeStreaming("Explain this algorithm")) {
    lines.forEach(System.out::println);
}
Spring Boot Integration
The wrapper auto-configures with Spring Boot:
# application.yml
goose:
  enabled: true
  timeout: 5m
  max-turns: 100
@RestController
public class AIController {

    private final GooseExecutor gooseExecutor;

    // Constructor injection: Spring supplies the auto-configured executor
    public AIController(GooseExecutor gooseExecutor) {
        this.gooseExecutor = gooseExecutor;
    }

    @PostMapping("/api/ai/analyze")
    public String analyze(@RequestBody String prompt) {
        return gooseExecutor.execute(prompt);
    }
}
SSE Normalizing Proxy
When using GenAI services, the Java wrapper automatically starts a local SSE normalizing proxy. This proxy handles compatibility issues between GenAI proxy services and Goose CLI:
- SSE Format Normalization: Converts data:{...} to data: {...} (adds the required space)
- Tool Call Index Injection: Adds the missing "index":0 field to tool call responses
- Transparent: The proxy is started automatically when apiKey and baseUrl are configured in GooseOptions
The proxy runs on a random local port and is automatically cleaned up when the session ends.
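The data: normalization can be sketched with sed. This is an illustrative stand-in for what the Java SseNormalizingProxy class does per SSE line, not its actual implementation:

```shell
# Illustrative stand-in for SseNormalizingProxy's format fix:
# insert the space after "data:" that strict SSE parsers expect.
normalize_sse_line() {
  sed 's/^data:{/data: {/'
}

echo 'data:{"choices":[]}' | normalize_sse_line   # -> data: {"choices":[]}
```

Lines that already carry the space match nothing and pass through unchanged, so the transform is idempotent.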
Comparison with Claude Code
| Aspect | Claude Code Buildpack | Goose Buildpack |
|---|---|---|
| Runtime | Node.js required | No runtime (native binary) |
| Binary Source | npm install | GitHub releases |
| Config File | .claude-code-config.yml | .goose-config.yml |
| Authentication | ANTHROPIC_API_KEY only | Multiple providers |
| Size | ~100MB (Node + npm) | ~30MB (single binary) |
| CLI Invocation | claude -p "prompt" | goose session --text "prompt" |
Directory Structure
goose-buildpack/
├── bin/
│   ├── detect                   # Buildpack detection script
│   └── supply                   # Buildpack supply script
├── lib/
│   ├── config_parser.sh         # Parse .goose-config.yml settings
│   ├── environment.sh           # Runtime environment setup
│   ├── genai_configurator.sh    # GenAI service auto-discovery
│   ├── goose_configurator.sh    # Main config orchestrator
│   ├── installer.sh             # Goose binary installation
│   ├── mcp_configurator.sh      # MCP server configuration
│   ├── profiles_generator.sh    # Generate profiles.yaml
│   ├── skills_configurator.sh   # Skills installation
│   └── validator.sh             # Validation utilities
├── java-wrapper/                # Java wrapper library
│   ├── pom.xml
│   └── src/main/java/org/tanzu/goose/cf/
│       ├── GooseExecutor.java
│       ├── GooseExecutorImpl.java
│       ├── GooseEnvironmentManager.java
│       ├── GooseOptions.java
│       ├── McpServerInfo.java
│       ├── SseNormalizingProxy.java   # SSE format normalization
│       ├── broker/                    # Agent Credential Broker integration
│       │   ├── BrokerAutoConfiguration.java
│       │   ├── BrokerCredentialInjector.java
│       │   └── CredentialBrokerClient.java
│       └── spring/                    # Spring Boot integration
├── examples/
│   ├── sample-goose-config.yml
│   └── sample-manifest.yml
├── resources/
│   └── default-config.yml
├── buildpack.yml
└── README.md
Supported Providers
| Provider | Environment Variables | Notes |
|---|---|---|
| Tanzu GenAI | Auto-discovered | Auto-configured from service bindings |
| Anthropic | ANTHROPIC_API_KEY | Claude models |
| OpenAI | OPENAI_API_KEY | GPT models |
| OpenAI-compatible | OPENAI_API_KEY, OPENAI_HOST | Any LLM proxy or endpoint with an OpenAI-compatible API; use provider: openai |
| Google | GOOGLE_API_KEY | Gemini models |
| Databricks | DATABRICKS_HOST, DATABRICKS_TOKEN | Enterprise |
| Ollama | OLLAMA_HOST | Local inference |
| Azure OpenAI | AZURE_OPENAI_* | Azure-hosted |
| AWS Bedrock | AWS_* | AWS-hosted |
Requirements
- Cloud Foundry with cflinuxfs4 stack (Ubuntu 22.04)
- At least one LLM provider: API key OR Tanzu GenAI service binding
- For Java wrapper: Java 21+
License
MIT License - see LICENSE
