The Backstage Kubernetes plugin brings your cluster view into the Backstage catalog. It is built for service owners, letting you check the health of a service in one place across many clusters. The plugin highlights problems and lets you drill into the deployments and pods that belong to that service.
It connects to clusters through the Kubernetes API and works with any cloud provider or managed platform. Use it to watch rollouts during a release, catch errors sooner, and reduce context switching.
For most teams, the value is a clear picture of what is running, where it runs, and whether it is healthy.

Installation Instructions
These instructions apply to self-hosted Backstage only. To use this plugin on Roadie, visit the docs.
1. Install the packages
Run the following commands in your root directory:
# Install the frontend package
yarn --cwd packages/app add @backstage/plugin-kubernetes
# Install the backend package
yarn --cwd packages/backend add @backstage/plugin-kubernetes-backend
Additional packages available:
- @backstage/plugin-kubernetes-common - Shared types and utilities
- @backstage/plugin-kubernetes-node - Extension points and node types for custom implementations
- @backstage/plugin-kubernetes-cluster - Cluster resource viewer component
2. Backend setup
The Kubernetes plugin now uses Backstage's new backend system. The legacy KubernetesBuilder approach has been removed and is no longer supported.
New Backend System (Current)
Add the plugin to your backend in packages/backend/src/index.ts:
// packages/backend/src/index.ts
import { createBackend } from '@backstage/backend-defaults';
const backend = createBackend();
// Add the Kubernetes plugin
backend.add(import('@backstage/plugin-kubernetes-backend'));
// ... other plugins
backend.start();
Legacy Backend System (Deprecated)
If you're still using the legacy backend system, you'll need to migrate to the new backend system. The old KubernetesBuilder pattern is no longer available. See the Backstage migration guide for details.
3. Add Kubernetes config to your app-config.yaml
The backend plugin needs to know how to connect to your clusters. Here are the most common authentication methods:
Option A: Service Account (Standard / On-Prem)
This is the most common method for generic clusters.
kubernetes:
  serviceLocatorMethod:
    type: 'multiTenant'
  clusterLocatorMethods:
    - type: 'config'
      clusters:
        - url: https://<CLUSTER_API_URL>
          name: <CLUSTER_NAME>
          title: 'My Production Cluster' # Optional: Human-readable display name
          authProvider: 'serviceAccount'
          skipTLSVerify: false
          serviceAccountToken: ${K8S_SERVICE_ACCOUNT_TOKEN}
Option B: AWS EKS (Cloud Native)
If Backstage is running in AWS, it can use the IAM role of the pod/instance.
kubernetes:
  serviceLocatorMethod:
    type: 'multiTenant'
  clusterLocatorMethods:
    - type: 'config'
      clusters:
        - url: https://<EKS_CLUSTER_ENDPOINT>
          name: <CLUSTER_NAME>
          authProvider: 'aws'
          skipTLSVerify: false
          # Backstage will automatically use the AWS credentials from the environment
You can also provide explicit AWS configuration:
kubernetes:
  clusterLocatorMethods:
    - type: 'config'
      clusters:
        - url: https://<EKS_CLUSTER_ENDPOINT>
          name: <CLUSTER_NAME>
          authProvider: 'aws'
          authMetadata:
            kubernetes.io/aws-assume-role: ${ROLE_ARN}
            kubernetes.io/aws-external-id: ${EXTERNAL_ID}
            kubernetes.io/x-k8s-aws-id: ${CLUSTER_NAME}
Option C: Google GKE (Cloud Native)
If Backstage is running on GCP, it can use Application Default Credentials or service account authentication.
Using Application Default Credentials (ADC):
kubernetes:
  serviceLocatorMethod:
    type: 'multiTenant'
  clusterLocatorMethods:
    - type: 'config'
      clusters:
        - url: https://<GKE_CLUSTER_ENDPOINT>
          name: <CLUSTER_NAME>
          authProvider: 'googleServiceAccount'
          skipTLSVerify: false
Auto-discovering GKE clusters:
kubernetes:
  clusterLocatorMethods:
    - type: 'gke'
      projectId: 'my-gcp-project'
      region: 'us-central1' # Optional: specific region
      exposeDashboard: true
      matchingResourceLabels: # Optional: filter clusters
        - key: 'environment'
          value: 'production'
User OAuth for GKE (client-side):
kubernetes:
  clusterLocatorMethods:
    - type: 'config'
      clusters:
        - url: https://<GKE_CLUSTER_ENDPOINT>
          name: <CLUSTER_NAME>
          authProvider: 'google'
          skipTLSVerify: false
Additional Authentication Methods
Backstage supports additional authentication providers including Azure AKS (azure, aks), generic OIDC (oidc), catalog-based cluster discovery, and local development with kubectl proxy. For complete documentation on all authentication methods, see the Backstage Kubernetes Authentication guide.
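For local development, for example, the plugin can talk to a cluster exposed via kubectl proxy; a minimal sketch, assuming kubectl proxy is running on its default port:
kubernetes:
  serviceLocatorMethod:
    type: 'multiTenant'
  clusterLocatorMethods:
    - type: 'localKubectlProxy'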
4. Frontend setup
New Frontend System (Recommended)
If you're using the new frontend system, register the plugin feature in your app:
// packages/app/src/App.tsx
import { createApp } from '@backstage/frontend-defaults';
import kubernetesPlugin from '@backstage/plugin-kubernetes/alpha';

export const app = createApp({
  features: [
    // ... other plugins
    kubernetesPlugin,
  ],
});
Then enable the entity content extension in app-config.yaml:
app:
  extensions:
    - entity-content:kubernetes/kubernetes
Optional: Filter which entity kinds show the Kubernetes tab
app:
  extensions:
    - entity-content:kubernetes/kubernetes:
        config:
          filter: kind:component,resource
          title: Kubernetes
          path: kubernetes
Old Frontend System
To expose the UI in the old frontend system, open packages/app/src/components/catalog/EntityPage.tsx, import the component, and add it to the entity page:
// packages/app/src/components/catalog/EntityPage.tsx
import React from 'react';
import { EntityLayout } from '@backstage/plugin-catalog';
import {
  EntityKubernetesContent,
  isKubernetesAvailable,
} from '@backstage/plugin-kubernetes';

const serviceEntityPage = (
  <EntityLayout>
    {/* other tabs... */}
    <EntityLayout.Route
      if={isKubernetesAvailable}
      path="/kubernetes"
      title="Kubernetes"
    >
      <EntityKubernetesContent refreshIntervalMs={30000} />
    </EntityLayout.Route>
  </EntityLayout>
);
Note: The isKubernetesAvailable helper ensures the tab only appears when the entity has Kubernetes annotations.
Viewing Cluster Resources
To view cluster-level resources (not tied to a specific service), use the cluster plugin:
import { EntityKubernetesClusterContent } from '@backstage/plugin-kubernetes-cluster';

const clusterEntityPage = (
  <EntityLayout>
    <EntityLayout.Route path="/kubernetes" title="Kubernetes">
      <EntityKubernetesClusterContent />
    </EntityLayout.Route>
  </EntityLayout>
);
Enable Pod Deletion (Optional)
To allow users to delete pods from the Backstage UI:
kubernetes:
  frontend:
    podDelete:
      enabled: true
You'll also need to update RBAC permissions (see below).
Using RBAC Authorization
The backend exposes endpoints that require permissions. The identity you use (Service Account or Cloud Role) must have read-only cluster-wide access for the specific objects Backstage needs to fetch.
These objects typically include:
- Core resources: pods, pods/log, services, configmaps, limitranges, resourcequotas
- Apps: deployments, replicasets, statefulsets, daemonsets
- Batch: jobs, cronjobs
- Networking: ingresses
- Autoscaling: horizontalpodautoscalers
- Metrics: pods (from metrics.k8s.io)
Here is an example of a ClusterRole to grant read access:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backstage-read-only
rules:
  - apiGroups:
      - '*'
    resources:
      - pods
      - pods/log
      - services
      - configmaps
      - limitranges
      - resourcequotas
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - apps
    resources:
      - deployments
      - replicasets
      - statefulsets
      - daemonsets
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - batch
    resources:
      - jobs
      - cronjobs
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - autoscaling
    resources:
      - horizontalpodautoscalers
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - metrics.k8s.io
    resources:
      - pods
    verbs:
      - get
      - list
If pod deletion is enabled, add the delete permission:
  - apiGroups:
      - ''
    resources:
      - pods
    verbs:
      - get
      - watch
      - list
      - delete
Backstage Permissions (v1.36.0+)
If you're using Backstage's permissions framework, the following permissions are available:
- kubernetes.clusters.read - View cluster information
- kubernetes.resources.read - View Kubernetes resources
These can be configured in your permission policies to control access at the Backstage level.
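As an illustration, here is a minimal sketch of a custom permission policy that denies kubernetes.resources.read while allowing everything else; the KubernetesReadPolicy name and file location are hypothetical, and the policy still has to be registered with the permission backend as described in the Backstage permissions docs:
// packages/backend/src/plugins/permissionPolicy.ts (hypothetical location)
import {
  AuthorizeResult,
  PolicyDecision,
} from '@backstage/plugin-permission-common';
import {
  PermissionPolicy,
  PolicyQuery,
} from '@backstage/plugin-permission-node';

// Deny Kubernetes resource reads for everyone; allow all other permissions.
export class KubernetesReadPolicy implements PermissionPolicy {
  async handle(request: PolicyQuery): Promise<PolicyDecision> {
    if (request.permission.name === 'kubernetes.resources.read') {
      return { result: AuthorizeResult.DENY };
    }
    return { result: AuthorizeResult.ALLOW };
  }
}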
Surfacing your Kubernetes components as part of an entity
There are two ways to surface your Kubernetes components as part of an entity. The label selector takes precedence over the annotation/service id.
Common backstage.io/kubernetes-id label
Adding the entity annotation
In order for Backstage to detect that an entity has Kubernetes components, the following annotation should be added to the entity's catalog-info.yaml:
# catalog-info.yaml
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: dice-roller
  annotations:
    'backstage.io/kubernetes-id': dice-roller
spec:
  type: service
  lifecycle: production
  owner: team-a
Labeling Kubernetes components
For Kubernetes components to show up in the software catalog as part of an entity, the components themselves can carry the following label:
metadata:
  labels:
    'backstage.io/kubernetes-id': <BACKSTAGE_ENTITY_NAME>
This means you can label your Kubernetes Deployment like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dice-roller
  labels:
    backstage.io/kubernetes-id: dice-roller
spec:
  # ... rest of deployment spec
Label selector query annotation
You can write your own custom label selector query that Backstage will use to look up the objects (similar to kubectl --selector="your query here"). Review the labels and selectors Kubernetes documentation for more info.
# catalog-info.yaml
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: my-service
  annotations:
    'backstage.io/kubernetes-label-selector': 'app=my-app,component=front-end'
spec:
  type: service
  lifecycle: production
  owner: team-a
Note: Ensure your actual Kubernetes Deployments and Pods have matching labels applied to them. For example, if using app=my-app,component=front-end, your Deployment should include:
metadata:
  labels:
    app: my-app
    component: front-end
Other annotation options
Namespace filtering:
annotations:
  'backstage.io/kubernetes-namespace': 'production'
Cluster filtering:
annotations:
  'backstage.io/kubernetes-cluster': 'production-cluster'
Note: The kubernetes-label-selector annotation takes precedence over kubernetes-id if both are present.
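Putting these together, a catalog-info.yaml that pins the earlier dice-roller example to a single namespace and cluster might look like this (the namespace and cluster names are illustrative):
# catalog-info.yaml
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: dice-roller
  annotations:
    'backstage.io/kubernetes-id': dice-roller
    'backstage.io/kubernetes-namespace': 'production'
    'backstage.io/kubernetes-cluster': 'production-cluster'
spec:
  type: service
  lifecycle: production
  owner: team-a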
Advanced: Visualizing Custom Resources (CRDs)
Most modern platforms rely on Custom Resource Definitions (CRDs) like Argo Rollouts, Tekton Pipelines, or Istio VirtualServices. You can extend the plugin to visualize these.
To fetch CRDs, update your app-config.yaml:
kubernetes:
  # ... existing config ...
  customResources:
    - group: 'argoproj.io'
      apiVersion: 'v1alpha1'
      plural: 'rollouts'
    - group: 'tekton.dev'
      apiVersion: 'v1beta1'
      plural: 'pipelineruns'
    - group: 'networking.istio.io'
      apiVersion: 'v1beta1'
      plural: 'virtualservices'
This allows Backstage to fetch these objects alongside standard resources.
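Note that the identity Backstage uses also needs read access to any custom resources you configure. A minimal sketch of an extra ClusterRole rule for the Argo Rollouts example above:
  - apiGroups:
      - argoproj.io
    resources:
      - rollouts
    verbs:
      - get
      - watch
      - list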
Example: Generating a Service Account Token (Kubernetes 1.24+)
If you are using the Service Account method, note that Kubernetes 1.24+ no longer auto-generates token Secrets for service accounts. You have two options:
Option 1: Short-lived tokens (Recommended)
Use the TokenRequest API to generate tokens:
# Create the Service Account
kubectl create sa backstage-user
# Generate a token (valid for 1 hour by default)
kubectl create token backstage-user
# Or specify a longer duration
kubectl create token backstage-user --duration=999999h
Option 2: Long-lived tokens
Manually create a Secret to get a long-lived token:
apiVersion: v1
kind: Secret
metadata:
  name: backstage-user-secret
  annotations:
    kubernetes.io/service-account.name: backstage-user
type: kubernetes.io/service-account-token
Apply the secret:
kubectl apply -f backstage-user-secret.yaml
Get the token:
kubectl get secret backstage-user-secret -o=jsonpath='{.data.token}' | base64 --decode
Bind the ClusterRole to the Service Account:
kubectl create clusterrolebinding backstage-read-only-binding \
--clusterrole=backstage-read-only \
--serviceaccount=default:backstage-user
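Finally, make the token available to the Backstage backend as the environment variable referenced in Option A. A quick sketch, assuming the short-lived token approach and a local yarn dev workflow:
# Export the token so ${K8S_SERVICE_ACCOUNT_TOKEN} resolves in app-config.yaml
export K8S_SERVICE_ACCOUNT_TOKEN=$(kubectl create token backstage-user)
# Start Backstage locally (in production, inject the variable via your secret manager)
yarn dev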
Things to Know
Troubleshooting Common Issues
If you have configured the plugin but see the "No resources found" error, check these common culprits:
1. Label Mismatch
The annotation in catalog-info.yaml must exactly match the labels on your Kubernetes resources. A typo in backstage.io/kubernetes-label-selector is the most common error.
Check your catalog annotation:
annotations:
  'backstage.io/kubernetes-label-selector': 'app=my-service'
Verify your deployment has matching labels:
kubectl get deployments -l app=my-service --all-namespaces
2. RBAC Permissions
The identity used by Backstage must have get, list, and watch permissions for the resources you are trying to view.
Verify permissions:
kubectl auth can-i list pods --as=system:serviceaccount:default:backstage-user
kubectl auth can-i get deployments --as=system:serviceaccount:default:backstage-user
3. Namespace Visibility
By default, Backstage queries all namespaces. If you use the backstage.io/kubernetes-namespace annotation, ensure it matches the actual namespace where resources are deployed.
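To confirm, list the resources in the annotated namespace directly; a quick check, assuming the production namespace and the app=my-service selector used earlier:
kubectl get deployments,pods -n production -l app=my-service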
4. Configuration Errors
Check your Backstage logs for connection errors:
# Look for Kubernetes-related errors
kubectl logs <backstage-pod> | grep -i kubernetes
Common issues include:
- Invalid cluster URLs
- Expired or incorrect authentication tokens
- Network connectivity problems
- Certificate validation failures (check the skipTLSVerify setting)
5. Duplicate Cluster Names
Backstage will fail to start if duplicate cluster names exist in your configuration. Ensure each cluster has a unique name.
Changelog
This changelog is produced from commits made to the Backstage Kubernetes plugin over the past 4 months. It may not contain information about all commits. Releases and version bumps are intentionally omitted. This changelog is generated by AI.
Breaking changes
- Remove support for the legacy backend system. Apps that call createRouter or KubernetesBuilder must migrate to the new backend system
- Remove the backend-common dependency from the Kubernetes backend plugin
- Stop re-exporting types that came from kubernetes-node; import these types from their source packages instead
From #30864 merged 2 months ago