Kubernetes Custom Controller
A powerful Kubernetes management tool built with Cobra CLI, client-go, and controller-runtime, providing advanced deployment management and real-time event monitoring capabilities.
Table of Contents
- Features
- Quick Start
- Configuration
- API Server
- Controller Runtime
- Docker Support
- CLI Commands
- Project Structure
- Future Development
- License
Features
- Multi-Cluster Management: Monitor deployments across multiple Kubernetes clusters simultaneously
- Real-time Informer: Watch deployment changes with live event logging
- Controller-Runtime Integration: Advanced controller with detailed event logging
- FastHTTP API Server: Fast HTTP API with Swagger UI for programmatic access
- Flexible Authentication: Kubeconfig and in-cluster authentication support
- Powerful CLI: Clean, intuitive command interface
- Comprehensive Testing: Integration with the real Kubernetes API via EnvTest
- Advanced Configuration: Layered configuration system with environment variables
Quick Start
Installation
# Clone the repository
git clone https://github.com/obezsmertnyi/k8s-custom-controller.git
cd k8s-custom-controller
# Build the binary
make build
Docker Usage
# Pull the pre-built image
docker pull ghcr.io/obezsmertnyi/k8s-custom-controller/k8s-custom-controller:latest
# Run with mounted kubeconfig and configuration file
docker run --rm --network host \
  -v ~/.kube/config:/root/.kube/config \
  -v ./docs/config-example.yaml:/app/config.yaml \
  ghcr.io/obezsmertnyi/k8s-custom-controller/k8s-custom-controller:latest \
  --config=/app/config.yaml
# Build Docker image
make docker-build
# Run in Docker container
make docker-run
Basic Commands
# Start with default configuration
cd bin
./k8s-cli
# List available commands
./k8s-cli --help
# Start with a specific config file
./k8s-cli --config=./config.yaml
# Run with command-line options
./k8s-cli --port=8090 --enable-swagger=false
# List deployments in a namespace
./k8s-cli list --namespace default
Project Structure
.
├── charts/        # Helm charts for Kubernetes deployment
├── cmd/           # CLI commands and application entrypoints
├── config/        # Kubernetes resources for deployment
├── docs/          # Documentation and examples
├── pkg/           # Core functionality packages
│   ├── ctrl/      # Controller-runtime implementation
│   ├── informer/  # Kubernetes informer implementation
│   └── testutil/  # Testing utilities
├── scripts/       # Helper scripts for development
└── tests/         # Integration tests
Configuration
The application uses a flexible, layered configuration system based on Viper.
Configuration Example
Below is a complete production-ready configuration example:
# Kubernetes connection settings
kubernetes:
  kubeconfig: ~/.kube/config   # Path to kubeconfig file
  in_cluster: false            # Set to true when running inside a Kubernetes cluster
  context: "my-context"        # Kubernetes context to use
  namespace: "default"         # Default namespace
  qps: 10.0                    # API server QPS limit
  burst: 20                    # API server burst limit
  timeout: 20s                 # API server timeout

# API server settings
api_server:
  enabled: true                # Enable API server component
  host: "0.0.0.0"              # Listen address
  port: 8080                   # Listen port
  enable_swagger: true         # Enable Swagger documentation
  security:
    rate_limit_requests_per_second: 10  # Rate limit requests per second
    max_connections_per_ip: 100         # Maximum connections per IP
    idle_timeout_seconds: 120           # Idle connection timeout
    read_timeout_seconds: 10            # Read timeout
    write_timeout_seconds: 30           # Write timeout
    disable_keepalive: false            # Disable keepalive in production

# Informer settings
informer:
  enabled: true                # Enable informer component
  namespace: ""                # Namespace to watch; leave empty for all namespaces
  resync_period: 2m            # How often to resync the informer cache
  label_selector: ""           # Filter resources by label
  field_selector: ""           # Filter resources by field

# Controller-runtime settings
controller_runtime:
  leader_election:
    enabled: true                  # Enable leader election for controller high availability
    id: "k8s-custom-controller"    # Leader election ID
    namespace: "kube-system"       # Namespace for leader election
  metrics:
    bind_address: ":8081"          # Address to expose metrics on

# Logging configuration
logging:
  format: json                 # Log format (json or console)
  level: info                  # Global log level (debug, info, warn, error)
  time_format: rfc3339         # Time format for logs
  output: stdout               # Log output destination
CLI Commands
The k8s-cli provides a set of powerful commands to manage Kubernetes resources:
Commands:
config Manage configuration
create Create a Kubernetes deployment in the specified namespace
delete Delete a Kubernetes deployment in the specified namespace
help Help about any command
list List Kubernetes deployments in the specified namespace
Flags:
--config string Config file path (default is $HOME/.k8s-custom-controller/config.yaml)
--enable-leader-election Enable leader election for controller manager (default true)
--enable-swagger Enable Swagger UI documentation (default true)
-h, --help help for k8s-cli
--host string Host address to bind the server to (default "0.0.0.0")
--kubeconfig string Path to the kubeconfig file (default: ~/.kube/config)
--leader-election-id string ID for leader election (default "k8s-custom-controller-leader-election")
--leader-election-namespace string Namespace for leader election resources (default "default")
--log-level string Set log level: trace, debug, info, warn, error (default "info")
--metrics-bind-address string Bind address for metrics server (default "0.0.0.0")
--metrics-port int Port for controller manager metrics (default 8081)
--port int Port to run the server on (default 8080)
Examples
# List all deployments in the default namespace
./k8s-cli list
# Create a new deployment
./k8s-cli create --name nginx-deployment --image nginx:latest --replicas 3 --port 80 --namespace default
# Delete a deployment
./k8s-cli delete nginx-app --namespace production
# View configuration
./k8s-cli config view
Configuration Layers
flowchart TD
A[Command-line flags] -->|Highest Priority| E[Final Configuration]
B[Environment Variables] -->|KCUSTOM_ prefix| E
C[Configuration YAML file] --> E
D[Default Values] -->|Lowest Priority| E
Architecture Overview
flowchart TB
CLI[Command Line Interface] --> Config[Configuration Manager]
Config --> K8sClient[Kubernetes Client]
Config --> APIServer[API Server]
Config --> Informer[Resource Informer]
Config --> Runtime[Controller Runtime]
K8sClient --> Informer
K8sClient --> Runtime
APIServer --> Swagger[Swagger UI]
APIServer --> HealthAPI[Health Endpoint]
APIServer --> ResourceAPI[Resource Endpoints]
Informer --> EventHandlers[Event Handlers]
Runtime --> Controllers[Custom Controllers]
subgraph "External Integrations"
ResourceAPI --> MultiCluster[Multi-Cluster Manager]
end
Component Diagram
flowchart LR
User([User]) --> |Uses| CLI
CLI[k8s-cli] --> |Configures| Server[FastHTTP Server]
CLI --> |Initializes| K8s[Kubernetes Client]
CLI --> |Manages| CR[Controller Runtime]
CLI --> |Watches| Informers[Resource Informers]
Server --> |Provides| API[JSON API]
Server --> |Exposes| Swagger[Swagger UI]
K8s --> |Access| Clusters[(Kubernetes Clusters)]
Informers --> |Monitor| Resources[(Kubernetes Resources)]
CR --> |Reconciles| CRDs[(Custom Resources)]
Configuration Priority
The configuration system prioritizes values in the following order (highest to lowest):
- Command-line flags (e.g., --port, --host, --enable-swagger)
- Environment variables (with the KCUSTOM_ prefix)
- Configuration file (YAML/JSON)
- Default values
Configuration File
The application searches for a configuration file in these locations:
- Path specified with the --config flag
- ./config.yaml in the current directory
- $HOME/.k8s-custom-controller/config.yaml
- /etc/k8s-custom-controller/config.yaml
Environment Variables
The tool supports setting any config value via environment variables with the KCUSTOM_ prefix. Example:
# Logging configuration
KCUSTOM_LOGGING_FORMAT=json
KCUSTOM_LOGGING_LEVEL=debug
API Server
The API server provides endpoints for managing Kubernetes resources. It runs on port 8080 by default.
API Endpoints
Deployments API
List deployments:
# Get all deployments
curl http://localhost:8080/deployments
# Get deployments in specific namespace
curl "http://localhost:8080/deployments?namespace=default"
# Get simplified list of deployments
curl "http://localhost:8080/deployments?format=simple"
Create deployment:
curl -X POST http://localhost:8080/deployments \
  -H "Content-Type: application/json" \
  -d '{
    "name": "test-nginx",
    "namespace": "default",
    "image": "nginx:latest",
    "replicas": 2,
    "port": 80
  }'
Create deployment with custom labels:
curl -X POST http://localhost:8080/deployments \
  -H "Content-Type: application/json" \
  -d '{
    "name": "my-app",
    "namespace": "default",
    "image": "nginx:alpine",
    "replicas": 3,
    "port": 8080,
    "labels": {
      "environment": "production",
      "version": "1.0"
    }
  }'
Delete deployment:
curl -X DELETE "http://localhost:8080/deployments?name=test-nginx&namespace=default"
The application exposes a REST API server using the FastHTTP framework for optimal performance. When enabled, it provides access to Kubernetes resources through a JSON API.
Key Features
- FastHTTP Engine: High-performance HTTP server optimized for low latency
- Swagger UI Integration: Interactive API documentation and testing
- JSON API: Standardized JSON responses for all endpoints
- Rate Limiting: Configurable per-IP and global rate limiting
- Security Headers: Modern security headers for protection
Starting the API Server
Enable via Configuration File
api_server:
  enabled: true
  host: "0.0.0.0"
  port: 8080
  enable_swagger: true
Endpoints
| Endpoint | Method | Description |
|---|---|---|
| /health | GET | Health check for the API server |
| /clusters | GET | List registered clusters |
| /deployments | GET | List deployments across clusters |
| /pods | GET | List pods across clusters |
| /services | GET | List services across clusters |
| /nodes | GET | List nodes across clusters |
| /swagger | GET | Swagger UI interface |
Controller Runtime
The application integrates with controller-runtime to provide advanced Kubernetes resource handling and event monitoring.
Key Features
- Leader Election: Ensure only one controller is active in clustered deployments
- Metrics Server: Prometheus-compatible metrics endpoint
- Event Broadcasting: Standardized event handling and recording
- Resource Watching: Efficient resource change monitoring
Configuration
controller_runtime:
  leader_election:
    enabled: true
    id: k8s-custom-controller-leader-election
    namespace: default
  metrics:
    bind_address: :8081
Architecture
flowchart LR
A[Controller Manager] --> B[Reconciler]
B --> C[Kubernetes API]
B --> D[Caching Layer]
D --> C
B --> E[Events]
E --> C
Features
- Automatic Reconciliation: Handles CREATE, UPDATE, DELETE events
- Rate Limiting: Configurable reconciliation rate
- Leader Election: Optional for high-availability deployments
- Metrics: Prometheus metrics for reconciliations, errors, and latencies
- Event Recording: Kubernetes events for controller actions
Docker Support
The application provides comprehensive Docker support for containerized deployments.
Pre-built Images
# Pull latest image
docker pull ghcr.io/obezsmertnyi/k8s-custom-controller/k8s-custom-controller:latest
# Run with mounted kubeconfig
docker run --rm --network host \
  -v ~/.kube/config:/root/.kube/config \
  -v ./docs/config-example.yaml:/app/config.yaml \
  ghcr.io/obezsmertnyi/k8s-custom-controller/k8s-custom-controller:latest \
  --config=/app/config.yaml
Building Custom Images
# Build image
make docker-build
# Build and tag for registry
make docker-tag
# Build and run
make docker-run
Kubernetes Deployment
A Helm chart is available in the charts/ directory:
helm install k8s-controller ./charts/k8s-custom-controller \
  --set kubeconfig.enabled=false \
  --set incluster.enabled=true
Future Development
Roadmap Status
| Step | Feature | Status | Target Date |
|---|---|---|---|
| Step 11 | Custom CRD and Multi-Project Support | In Progress | Q3 2025 |
| Step 12 | Platform Engineering Integration | Planned | Q3 2025 |
| Step 13 | MCP Server Integration | Planned | Q3 2025 |
| Step 14 | JWT Authentication | Backlog | Q3 2025 |
| Step 15 | Testing and Observability | Backlog | Q4 2025 |
Current Development
Step 11: Custom CRD and Multi-Project Support
- Initial CRD definition created
- Custom CRD Frontendpage with dedicated informer
- Controller with additional reconciliation logic for the custom resource
- Multi-project client configuration for management clusters
Step 12: Platform Engineering Integration
- Integration with Port.io
- API handler for actions to CRUD custom resources
- Discord notifications integration
- Add update action support for IDP and controller
Step 13: MCP Server Integration
- Integrate with github.com/mark3labs/mcp-go/mcp to create MCP server
- API handlers as MCP tools with configurable port
- Add delete/update MCP tools
- Add OIDC authentication to MCP
Step 14: JWT Authentication
- JWT authentication and authorization for API
- JWT authentication and authorization for MCP
- Role-based access control for all endpoints
Step 15: Testing and Observability
- Basic OpenTelemetry code instrumentation
- Achieve 90% test coverage
- End-to-end testing of all components
License
This project is licensed under the MIT License - see the LICENSE file for details.
Copyright (c) 2025 Oleksandr Bezsmertnyi
