# Kubernetes Deployment

Deploy ollyScale on Kubernetes (Minikube) for local development!
*Service map showing microservices running on Kubernetes*
All examples are launched from the repo; clone it first or download the current GitHub release archive.
## Prerequisites
## Deployment Architecture

ollyScale uses a modular deployment architecture with the following components deployed in order:

- Infrastructure (sync wave 10-20): ArgoCD, cert-manager, Gateway API, OpenTelemetry Operator
- Middleware (sync wave 30-45):
    - CloudNativePG Operator (wave 30)
    - ollyscale-postgres chart (wave 45) - PostgreSQL database
    - Redis, Kafka operators
- Observability (sync wave 50+):
    - ollyscale chart (wave 50) - API, Web UI, OpAMP server, OTLP receiver
    - Demo applications
!!! note "Database Requirement"

    The PostgreSQL database is now deployed separately via the ollyscale-postgres chart.
    This must be deployed before the main ollyscale chart.
## 1. Deploy ollyScale Core
- Start Minikube:
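    For example (adjust driver and resource flags to suit your machine):

    ```bash
    minikube start
    ```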
- Deploy ollyScale:

    Deploy using Helm (images will be pulled from Docker Hub automatically):

    ```bash
    cd charts
    task deploy
    ```

    !!! note "Local Development Build (Optional)"

        To build and deploy custom images for local development:

        ```bash
        cd charts
        ./build-and-push-local.sh
        ```

- Access the UI:
    To access the ollyScale UI (Service Type: LoadBalancer) on macOS with Minikube, you need to use
    `minikube tunnel`. Open a new terminal window and run:
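    The command is just:

    ```bash
    minikube tunnel
    ```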
    You may be asked for your password. Keep this terminal open.

    Now you can access the ollyScale UI at: http://localhost:5002

    OpenTelemetry Collector + OpAMP Config Page: Navigate to the "OpenTelemetry Collector + OpAMP Config" tab in the UI to view and manage collector configurations remotely. See the OpAMP Configuration section below for setup instructions.
- Send Telemetry from Host Apps:
    To send telemetry from applications running on your host machine (outside Kubernetes), use
    `kubectl port-forward` to expose the OTel Collector ports. Open a new terminal window and run:
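    A sketch of the port-forward, assuming the collector Service is named `otel-collector` in the current namespace; adjust the name and namespace to match your deployment:

    ```bash
    # Forward both OTLP ports (gRPC and HTTP) from the collector Service
    kubectl port-forward svc/otel-collector 4317:4317 4318:4318
    ```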
    Keep this terminal open. Now point your application's OpenTelemetry exporter to:

    - gRPC: http://localhost:4317
    - HTTP: http://localhost:4318

    Example environment variables:
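    A typical setup using the standard OpenTelemetry SDK environment variables (the service name is a placeholder):

    ```bash
    # Standard OTel SDK environment variables; the service name is a placeholder
    export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
    export OTEL_EXPORTER_OTLP_PROTOCOL=grpc   # use http/protobuf with port 4318
    export OTEL_SERVICE_NAME=my-host-app
    ```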
    For apps running inside the Kubernetes cluster, use the Kubernetes service name:

    - gRPC: http://otel-collector:4317
    - HTTP: http://otel-collector:4318

- Clean Up:
    Uninstall ollyScale using Helm:

    Shut down Minikube:

    If Minikube becomes unstable, deleting the cluster entirely gives a cleaner slate:
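    Sketches of these commands; the Helm release names are assumptions, so verify the actual names first:

    ```bash
    # Release names below are assumptions - verify with: helm list -A
    helm uninstall ollyscale
    helm uninstall ollyscale-postgres

    # Shut down Minikube
    minikube stop

    # Or delete the cluster entirely for a clean slate
    minikube delete
    ```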
## 2. Demo Applications (Optional)
To see ollyScale in action with instrumented microservices:
The deploy script pulls demo images from Docker Hub by default. For local development, you can build images locally when prompted.
To clean up the demo:
The demo includes two microservices that automatically generate traffic, showcasing distributed tracing across service boundaries.
## 3. OpenTelemetry Demo (~20 Services - Optional)
To deploy the full OpenTelemetry Demo with ~20 microservices:
Prerequisites:
- ollyScale must be deployed first (see Setup above)
- Helm installed
- Sufficient cluster resources (demo is resource-intensive)
Deploy:
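A sketch using the upstream OpenTelemetry Demo Helm chart; the repo likely provides its own script or values file that wires the demo's exporters to ollyScale's collector on port 4318, so treat the release name and bare install below as placeholders:

```bash
# Placeholder release name; a custom values file pointing exporters at
# ollyScale's collector (HTTP, port 4318) would normally be passed via -f
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update
helm install otel-demo open-telemetry/opentelemetry-demo
```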
This deploys all OpenTelemetry Demo services configured to send telemetry to ollyScale's collector via HTTP on port 4318. Built-in observability tools (Jaeger, Grafana, Prometheus) are disabled.
Cleanup:
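Assuming the demo was installed as a Helm release named `otel-demo` (a placeholder):

```bash
helm uninstall otel-demo   # placeholder release name
```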
This removes the OpenTelemetry Demo but leaves ollyScale running.
## 4. ollyScale Core-Only Deployment: Use Your Own Kubernetes OpenTelemetry Collector
Use this variant to deploy ollyScale without the bundled OTel Collector (e.g., if you have an existing collector DaemonSet). It still includes the OpAMP server for optional remote collector configuration management:
- Deploy Core:

- Access UI: Run `minikube tunnel` and access http://localhost:5002.

- Cleanup:
### Use ollyScale with Any OpenTelemetry Collector
Swap out the included OTel Collector for any OTel Collector distribution.
Point your OpenTelemetry exporters to `ollyscale-otlp-receiver:4343`, e.g.:
```yaml
exporters:
  debug:
    verbosity: detailed
  otlp:
    endpoint: "ollyscale-otlp-receiver:4343"
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug, otlp, spanmetrics]
    metrics:
      receivers: [otlp, spanmetrics]
      processors: [batch]
      exporters: [debug, otlp]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug, otlp]
```
The OTel Collector will forward everything to ollyScale's OTLP receiver, which processes the telemetry and stores it in Redis in OTel format for the backend and UI to access.
## OpAMP Configuration (Optional)
The OpenTelemetry Collector + OpAMP Config page in the ollyScale UI allows you to view and manage collector configurations remotely. To enable this feature, add the OpAMP extension to your collector config:
```yaml
extensions:
  opamp:
    server:
      ws:
        endpoint: ws://ollyscale-opamp-server:4320/v1/opamp

service:
  extensions: [opamp]
```
The default configuration template (included as a ConfigMap in k8s-core-only/ollyscale-opamp-server.yaml) shows a complete example with OTLP receivers, OpAMP extension, batch processing, and spanmetrics connector. Your collector will connect to the OpAMP server and receive configuration updates through the ollyScale UI.
## Building Images
By default, deployment scripts pull pre-built images from GitHub Container Registry (GHCR). For building images locally (Minikube) or publishing to GHCR, see build/README.md.