
# Kubernetes Deployment

Deploy ollyScale on Kubernetes (Minikube) for local development!

*Figure: Service map showing microservices running on Kubernetes*


All examples are run from the repo; clone it first (or download the current GitHub release archive) and change into it:

git clone https://github.com/ryanfaircloth/ollyscale
cd ollyscale

## Prerequisites

All of the following tools are used by the commands in this guide:

- git (to clone the repo)
- Minikube and kubectl
- Helm
- Task (the `task deploy` step)

## Deployment Architecture

ollyScale uses a modular deployment architecture with the following components deployed in order:

  1. Infrastructure (sync wave 10-20): ArgoCD, cert-manager, Gateway API, OpenTelemetry Operator
  2. Middleware (sync wave 30-45):
      - CloudNativePG Operator (wave 30)
      - ollyscale-postgres chart (wave 45) - PostgreSQL database
      - Redis, Kafka operators
  3. Observability (sync wave 50+):
      - ollyscale chart (wave 50) - API, Web UI, OpAMP server, OTLP receiver
      - Demo applications
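
Since each component is an ArgoCD application ordered by sync waves, you can watch the rollout progress. A minimal check, assuming ArgoCD runs in the standard argocd namespace:

```bash
# List ArgoCD applications with their sync and health status.
kubectl -n argocd get applications.argoproj.io
```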

### Database Requirement

The PostgreSQL database is deployed separately via the ollyscale-postgres chart, which must be installed before the main ollyscale chart.
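
For a manual Helm install, the ordering looks like the sketch below (chart paths and release names here are assumptions; the `task deploy` step in section 1 drives the actual deployment):

```bash
# Hypothetical chart paths under the repo's charts/ directory;
# the postgres chart must go in first.
helm install ollyscale-postgres ./charts/ollyscale-postgres -n ollyscale --create-namespace
helm install ollyscale ./charts/ollyscale -n ollyscale
```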

## 1. Deploy ollyScale Core

  1. Start Minikube:

    minikube start
    
  2. Deploy ollyScale:

    Deploy using Helm (images will be pulled from Docker Hub automatically):
    
    ```bash
    cd charts
    task deploy
    ```
    
    !!! note "Local Development Build (Optional)"
        To build and deploy custom images for local development:

        ```bash
        cd charts
        ./build-and-push-local.sh
        ```
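
    Before moving on, you can wait for the pods in the ollyscale namespace (the namespace used in the cleanup step below) to become ready:

    ```bash
    # Watch pod status until everything is Running and Ready (Ctrl-C to exit).
    kubectl -n ollyscale get pods -w
    ```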

  3. Access the UI:

    To access the ollyScale UI (Service Type: LoadBalancer) on macOS with Minikube, you need to use minikube tunnel.

    Open a new terminal window and run:

    minikube tunnel
    

    You may be asked for your password. Keep this terminal open.

    Now you can access the ollyScale UI at: http://localhost:5002
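
    A quick check that the tunnel is working (assuming the UI answers plain HTTP on this port):

    ```bash
    # Expect an HTTP 200 once the tunnel and service are up.
    curl -s -o /dev/null -w '%{http_code}\n' http://localhost:5002
    ```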

    OpenTelemetry Collector + OpAMP Config Page: Navigate to the "OpenTelemetry Collector + OpAMP Config" tab in the UI to view and manage collector configurations remotely. See the OpAMP Configuration section below for setup instructions.

  4. Send Telemetry from Host Apps:

    To send telemetry from applications running on your host machine (outside Kubernetes), use kubectl port-forward to expose the OTel Collector ports:

    Open a new terminal window and run:

    kubectl port-forward service/otel-collector 4317:4317 4318:4318
    

    Keep this terminal open. Now point your application's OpenTelemetry exporter to:

    - gRPC: http://localhost:4317
    - HTTP: http://localhost:4318

    Example environment variables:

    export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
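    # Optional companion settings (standard OpenTelemetry SDK variables;
    # the service name below is illustrative, not from the ollyScale docs):
    export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
    export OTEL_SERVICE_NAME=my-host-app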
    

    For apps running inside the Kubernetes cluster, use the Kubernetes service name instead:

    - gRPC: http://otel-collector:4317
    - HTTP: http://otel-collector:4318
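
    To verify the port-forward end to end, you can POST an empty OTLP/HTTP payload; an accepting collector answers with HTTP 200 (a minimal smoke test, not a real trace):

    ```bash
    curl -s -o /dev/null -w '%{http_code}\n' \
      -X POST http://localhost:4318/v1/traces \
      -H 'Content-Type: application/json' \
      -d '{"resourceSpans":[]}'
    ```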

  5. Clean Up:

    Uninstall ollyScale using Helm:

    helm uninstall ollyscale -n ollyscale
    kubectl delete namespace ollyscale
    

    Shut down Minikube:

    minikube stop
    

    If Minikube becomes unstable across repeated deployments, delete the cluster entirely for a clean slate:

    minikube delete
    

## 2. Demo Applications (Optional)

To see ollyScale in action with instrumented microservices:

cd k8s-demo
./02-deploy.sh

The deploy script pulls demo images from Docker Hub by default. For local development, you can build images locally when prompted.
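
To watch the demo pods come up (listed across all namespaces, since the demo's namespace is set by the script):

```bash
kubectl get pods -A -w
```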

To clean up the demo:

./03-cleanup.sh

The demo includes two microservices that automatically generate traffic, showcasing distributed tracing across service boundaries.


## 3. OpenTelemetry Demo (~20 Services - Optional)

To deploy the full OpenTelemetry Demo with ~20 microservices:

Prerequisites:

- ollyScale must be deployed first (see section 1 above)
- Helm installed
- Sufficient cluster resources (the demo is resource-intensive)
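
Because the demo is resource-intensive, it helps to check headroom first. This needs the metrics-server addon (a standard Minikube addon):

```bash
minikube addons enable metrics-server
kubectl top nodes
```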

Deploy:

cd k8s-otel-demo
./01-deploy-otel-demo-helm.sh

This deploys all OpenTelemetry Demo services configured to send telemetry to ollyScale's collector via HTTP on port 4318. Built-in observability tools (Jaeger, Grafana, Prometheus) are disabled.
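
Once the script finishes, you can watch the demo pods become ready (the namespace is chosen by the script; otel-demo here is an assumption):

```bash
kubectl get pods -n otel-demo -w
```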

Cleanup:

cd k8s-otel-demo
./02-cleanup-otel-demo-helm.sh

This removes the OpenTelemetry Demo but leaves ollyScale running.

## 4. ollyScale Core-Only Deployment: Use Your Own Kubernetes OpenTelemetry Collector

Deploy ollyScale without the bundled OTel Collector (e.g., if you already run your own collector DaemonSet). This deployment still includes the OpAMP server for optional remote collector configuration management:

  1. Deploy Core:

    cd k8s-core-only
    ./01-deploy.sh

  2. Access UI: Run minikube tunnel and access http://localhost:5002.

  3. Cleanup:

    ./02-cleanup.sh

### Use ollyScale with Any OpenTelemetry Collector

Swap the bundled OTel Collector for any OpenTelemetry Collector distribution.

Point your collector's OTLP exporter at ollyscale-otlp-receiver:4343, for example:

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

connectors:
  spanmetrics:

exporters:
  debug:
    verbosity: detailed

  otlp:
    endpoint: "ollyscale-otlp-receiver:4343"
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug, otlp, spanmetrics]

    metrics:
      receivers: [otlp, spanmetrics]
      processors: [batch]
      exporters: [debug, otlp]

    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug, otlp]

The OTel Collector forwards everything to ollyScale's OTLP receiver, which processes the telemetry and stores it in Redis in OTel format for the backend and UI to access.
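
If telemetry does not arrive, first confirm that the receiver service referenced in the config above resolves in your cluster:

```bash
# Service name taken from the exporter endpoint above; add -n <namespace>
# if ollyScale is not deployed in your current namespace.
kubectl get svc ollyscale-otlp-receiver
```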

### OpAMP Configuration (Optional)

The OpenTelemetry Collector + OpAMP Config page in the ollyScale UI allows you to view and manage collector configurations remotely. To enable this feature, add the OpAMP extension to your collector config:

extensions:
  opamp:
    server:
      ws:
        endpoint: ws://ollyscale-opamp-server:4320/v1/opamp

service:
  extensions: [opamp]

The default configuration template (included as a ConfigMap in k8s-core-only/ollyscale-opamp-server.yaml) shows a complete example with OTLP receivers, OpAMP extension, batch processing, and spanmetrics connector. Your collector will connect to the OpAMP server and receive configuration updates through the ollyScale UI.
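
To confirm the OpAMP server referenced in the ws endpoint above is present (the service name is taken from that snippet):

```bash
kubectl get svc ollyscale-opamp-server
```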


## Building Images

By default, deployment scripts pull pre-built images from GitHub Container Registry (GHCR). For building images locally (Minikube) or publishing to GHCR, see build/README.md.