Version: 1.3.1.0

Ambari Kubernetes Manager View (Tech Preview)

Tech Preview — ODP 1.3.2.0

This feature will be included in ODP 1.3.2.0 as a Tech Preview, currently in qualification. It is available for early enterprise testing.

Interested in early access? Contact our team to join the enterprise early access program.

The Ambari Kubernetes Manager View is an Ambari plugin that extends cluster management to Kubernetes workloads. It provides a unified interface for deploying, configuring, monitoring, and managing the full lifecycle of Helm-based applications running on a connected Kubernetes or OpenShift cluster — all within the same Ambari UI used for managing HDFS, YARN, Hive, and other cluster services.

How the Kubernetes View Works

The Kubernetes View operates as a server-side plugin within Ambari. When you deploy an application through the View, Ambari:

  1. Reads the current cluster configuration (Hive Metastore URI, Kerberos realm, Ranger REST URL, LDAP settings, etc.)
  2. Generates the appropriate Helm values, merging cluster-derived settings with any user-supplied overrides
  3. Executes the Helm install or upgrade against the configured Kubernetes cluster
  4. Tracks the deployment operation asynchronously and reports progress through the Ambari background operations interface
  5. Monitors the resulting Helm release via Flux for ongoing status

This approach ensures that configuration values derived from the ODP cluster (URIs, hostnames, security parameters) are always consistent with the actual cluster state, rather than being duplicated and potentially drifting.
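As an illustration, the values generated in step 2 for a Trino deployment might combine cluster-derived settings with user overrides along these lines. This is a sketch only: the key names, hostnames, and realm shown are hypothetical, and the actual chart values depend on the chart version.

```yaml
# Sketch of merged Helm values (hypothetical keys; actual chart keys vary).
# Cluster-derived settings, read from Ambari configs at deploy time:
hive:
  metastoreUri: thrift://metastore-1.example.com:9083
kerberos:
  realm: EXAMPLE.COM
  kdc: kdc.example.com
ranger:
  restUrl: https://ranger.example.com:6182
# User-supplied overrides from the wizard take precedence:
coordinator:
  resources:
    requests:
      cpu: "2"
      memory: 8Gi
```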

Installing the Kubernetes View Plugin

The Kubernetes Manager View is bundled with Ambari 2.8.2.0; if you are running that version, the view is already present in your Ambari installation and only needs to be activated.

To activate the view:

  1. Log into Ambari as a cluster administrator.
  2. Navigate to Admin > Views.
  3. Locate KUBERNETES_MANAGER in the list of available views.
  4. Click Create Instance and provide:
    • Instance Name: a label for this view instance (e.g., k8s-prod)
    • Display Name: the label shown in the Ambari UI sidebar
    • Description: optional description

Once the instance is created, the view appears in the Ambari Views menu and is accessible to users with the appropriate Ambari role.

Connecting Ambari to a Kubernetes Cluster

Before deploying workloads, you must configure the connection between Ambari and your Kubernetes cluster.

Service Account Setup

Create a dedicated service account in your Kubernetes cluster for Ambari to use:

# Create namespace for ODP-managed apps
kubectl create namespace odp-apps

# Create service account
kubectl create serviceaccount ambari-manager -n odp-apps

# Create ClusterRole (or namespace-scoped Role) with required permissions
kubectl create clusterrolebinding ambari-manager-binding \
  --clusterrole=cluster-admin \
  --serviceaccount=odp-apps:ambari-manager
Least Privilege

The cluster-admin binding above is used for simplicity. In production, restrict the role to the specific API groups and resources required: apps/deployments, core/services, core/configmaps, core/secrets, core/persistentvolumeclaims, and batch/jobs.
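A namespace-scoped Role covering only the API groups and resources listed above might look like the following sketch; the name is illustrative, and you may want to trim the verb list further for your environment.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ambari-manager
  namespace: odp-apps
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["services", "configmaps", "secrets", "persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```

Bind it to the service account with kubectl create rolebinding ambari-manager-binding --role=ambari-manager --serviceaccount=odp-apps:ambari-manager -n odp-apps.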

Kubeconfig Configuration

In the Kubernetes View settings, provide the kubeconfig or connection parameters:

| Parameter | Description |
| --- | --- |
| Kubernetes API URL | The API server endpoint (e.g., https://k8s-api.example.com:6443) |
| CA Certificate | The cluster CA certificate (PEM format) |
| Service Account Token | Token for the ambari-manager service account |
| Namespace | Target namespace for deployments (e.g., odp-apps) |
| Helm Binary Path | Path to the Helm 3 binary on the Ambari server |

Ambari stores the service account token encrypted in the Ambari database.
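If you prefer to assemble a full kubeconfig for the ambari-manager service account rather than entering individual parameters, a minimal file can be written as below. The server URL, CA path, and token are placeholders for your own cluster values.

```shell
# Assemble a minimal kubeconfig for the ambari-manager service account.
# The server URL, CA certificate path, and token are placeholders.
cat > ambari-kubeconfig.yaml <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: odp-k8s
  cluster:
    server: https://k8s-api.example.com:6443
    certificate-authority: /etc/ambari-server/k8s-ca.pem
contexts:
- name: ambari-manager@odp-k8s
  context:
    cluster: odp-k8s
    user: ambari-manager
    namespace: odp-apps
current-context: ambari-manager@odp-k8s
users:
- name: ambari-manager
  user:
    token: REPLACE_WITH_SERVICE_ACCOUNT_TOKEN
EOF
```

On kubectl 1.24 and later, a token for the service account can be issued with kubectl create token ambari-manager -n odp-apps.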

Verifying the Connection

After saving the connection parameters, click Test Connection in the Kubernetes View. Ambari will attempt to list resources in the configured namespace. A successful test confirms that the API URL, credentials, and network connectivity are all working.

The Kubernetes Manager UI

Once connected, the Kubernetes View provides the following management flows:

Application Catalog

The main screen lists the applications available for deployment: currently Trino and Apache Superset. Each entry shows:

  • Application name and version
  • Deployment status (Not Deployed / Deployed / Upgrading / Failed)
  • Helm release name
  • Last operation timestamp

Deploy Flow

  1. Select an application from the catalog.
  2. Click Deploy.
  3. The configuration wizard presents grouped settings:
    • General: replica counts, resource requests and limits
    • Security: Kerberos settings (pre-populated from the cluster), OIDC parameters
    • Connectivity: connector URIs (pre-populated from the cluster)
    • Advanced: raw Helm values override (YAML editor)
  4. Click Deploy to submit. Ambari creates a background operation.
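The Advanced tab accepts raw Helm values in YAML. As a sketch, an override that scales workers and raises their resource limits might look like this; the key names are hypothetical and depend on the chart in use.

```yaml
# Hypothetical raw-values override entered in the Advanced YAML editor.
worker:
  replicas: 5
  resources:
    limits:
      cpu: "8"
      memory: 32Gi
```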

Upgrade Flow

When a new chart version is available:

  1. Select the deployed application.
  2. Click Upgrade.
  3. Review the configuration diff between the current and new version.
  4. Confirm and submit. Ambari executes helm upgrade and tracks the rollout.

Rollback

If an upgrade fails or produces issues:

  1. Select the deployed application.
  2. Click Rollback.
  3. Select the target revision from the Helm release history.
  4. Confirm. Ambari executes helm rollback and returns the release to the selected revision.

Uninstall

To remove a deployed application:

  1. Select the deployed application.
  2. Click Uninstall.
  3. Confirm. Ambari executes helm uninstall and removes all Kubernetes resources created by the chart.

Background Operations and Progress Tracking

All Helm operations (install, upgrade, rollback, uninstall) run as background operations in Ambari. This means:

  • You do not need to keep the browser window open for the operation to complete.
  • Progress is visible in the Ambari Background Operations panel (the clock icon in the Ambari toolbar).
  • Each operation produces a structured log that shows Helm output and any errors.
  • Operations have a configurable timeout (default: 10 minutes).

If an operation fails, the background operation log contains the full error output from Helm, which is essential for troubleshooting.

GitOps and Flux Release Status Monitoring

The Kubernetes View integrates with Flux (GitOps toolkit) to provide ongoing release status monitoring. When Flux is configured in your Kubernetes cluster and managing the Helm releases deployed by Ambari, the View displays:

  • Flux HelmRelease status: whether the release is reconciled, pending, or in error
  • Last reconcile time: when Flux last checked the release against the desired state
  • Drift detection: if manual changes have been made to Kubernetes resources outside of Ambari/Flux, the status reflects the drift

This is particularly useful in environments where infrastructure changes go through a GitOps review process. Ambari's Helm install creates or updates the Flux HelmRelease custom resource; Flux handles the actual reconciliation from the Git repository.
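A HelmRelease custom resource of the kind described above might look like the following sketch. The release name, chart, and source reference are illustrative, and the exact spec fields depend on your Flux version.

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: trino
  namespace: odp-apps
spec:
  interval: 5m            # how often Flux reconciles against the desired state
  chart:
    spec:
      chart: trino
      sourceRef:
        kind: HelmRepository
        name: odp-charts     # illustrative chart repository name
        namespace: odp-apps
  values:
    coordinator:
      replicas: 1
```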

To use Flux integration, install Flux in your Kubernetes cluster before connecting it to Ambari:

flux install

The Kubernetes View will automatically detect Flux if the Flux CRDs are present in the cluster.

Kerberos Keytab Delegation

For applications that need to authenticate to ODP services (Hive Metastore, HDFS, Ranger), Ambari handles keytab provisioning:

  1. Ambari generates or retrieves a service keytab from the cluster's Kerberos infrastructure (FreeIPA or MIT KDC).
  2. The keytab is stored as a Kubernetes Secret in the application namespace.
  3. The Helm chart mounts the keytab secret into the application containers.
  4. Application configuration (e.g., Trino's core-site.xml) references the keytab path.

The service principal used for each application is configurable in the deployment wizard. The default naming convention follows: <service>/<hostname>@<REALM>.

Keytab rotation: when the keytab is renewed in Kerberos, re-triggering the Helm upgrade from Ambari will update the Kubernetes Secret with the new keytab.
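The keytab Secret created in step 2 is an ordinary Kubernetes Secret; conceptually it looks like the following, with the Secret name and data key being illustrative.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: trino-keytab
  namespace: odp-apps
type: Opaque
data:
  trino.keytab: <base64-encoded keytab>   # written by Ambari at deploy time
```

The chart then mounts this Secret as a volume, and the application's Kerberos configuration points at the mounted keytab path.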

OIDC Authentication Integration

For workloads that expose a web UI (Apache Superset), Ambari supports configuring OIDC (OpenID Connect) authentication:

| Parameter | Description |
| --- | --- |
| OIDC Provider URL | The OIDC provider's issuer URL (e.g., your Keycloak or Dex instance) |
| Client ID | The OAuth2 client ID registered for this application |
| Client Secret | The OAuth2 client secret (stored encrypted in Ambari) |
| Allowed Groups | LDAP/AD groups whose members are permitted to access the application |
| Admin Groups | Groups granted administrator access within the application |

When OIDC is configured, the Helm chart is deployed with the OIDC proxy sidecar or native OIDC configuration (depending on the application). Users accessing Superset are redirected to the OIDC provider for authentication.

OIDC and Kerberos are complementary in this architecture: Kerberos secures backend service-to-service communication, while OIDC secures user-facing web interfaces.
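In Helm-values terms, the OIDC parameters above translate into a fragment along these lines. The keys shown are hypothetical; the actual structure depends on the application's chart, and the client secret would be referenced indirectly rather than written in plain text.

```yaml
# Hypothetical OIDC values fragment generated from the wizard settings.
auth:
  oidc:
    issuerUrl: https://keycloak.example.com/realms/odp
    clientId: superset
    existingSecret: superset-oidc-client   # client secret injected by Ambari as a Secret
    allowedGroups:
      - analysts
    adminGroups:
      - platform-admins
```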

Ranger and LDAP Configuration Materialization

One of the key benefits of the Kubernetes View is that it reads existing ODP security configuration and injects it into Helm values automatically. At deployment time, Ambari materializes:

| ODP Config Source | Materialized Into |
| --- | --- |
| Ranger REST URL and admin credentials | Trino Ranger plugin configuration |
| Hive Metastore URIs (from Hive config) | Trino Hive catalog hive.metastore.uri |
| Kerberos realm and KDC address | krb5.conf configmap in Kubernetes |
| LDAP/AD server URL and bind DN | Superset auth configuration |
| HDFS NameNode URI | Trino HDFS config |

This eliminates the need to manually copy configuration values from your Ambari configs into Helm values files — a process that is error-prone and often leads to misconfiguration.
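For example, the Kerberos realm and KDC address would be materialized into a ConfigMap of roughly the following shape; the realm, hostnames, and ConfigMap name are placeholders.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: krb5-conf
  namespace: odp-apps
data:
  krb5.conf: |
    [libdefaults]
      default_realm = EXAMPLE.COM
    [realms]
      EXAMPLE.COM = {
        kdc = kdc.example.com
        admin_server = kdc.example.com
      }
```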

Known Limitations (Tech Preview)

| Limitation | Notes |
| --- | --- |
| No YARN integration | Trino resource management is independent of YARN queues |
| No Atlas lineage for Trino | Queries through Trino are not captured in Atlas in this release |
| Superset HA not configured via Ambari | Multiple Superset replicas require manual Helm override |
| Keytab rotation requires manual re-deploy | No automatic keytab renewal trigger yet |
| Limited to one Kubernetes cluster per Ambari View instance | Multi-cluster support is planned |
| OpenShift Security Context Constraints | May require additional SCC configuration for some charts on OpenShift |

These limitations will be addressed in future ODP releases as the Kubernetes integration moves from Tech Preview toward general availability.