
Deploying DataHub with Kubernetes

Introduction#

Helm charts for deploying DataHub on a Kubernetes cluster are located in this repository. We provide charts for deploying DataHub and its dependencies (Elasticsearch, optionally Neo4j, MySQL, and Kafka) on a Kubernetes cluster.

This doc is a guide to deploying an instance of DataHub on a Kubernetes cluster from scratch using the above charts.

Setup#

  1. Set up a Kubernetes cluster
  2. Install the following tools:
    • kubectl to manage Kubernetes resources
    • helm to deploy the resources based on Helm charts. Note, we only support Helm 3.

Components#

DataHub consists of 4 main components: GMS, MAE Consumer (optional), MCE Consumer (optional), and Frontend. The Kubernetes deployment for each component is defined as a subchart under the main DataHub helm chart.

The main components are powered by 4 external dependencies:

  • Kafka
  • Local DB (MySQL, Postgres, MariaDB)
  • Search Index (Elasticsearch)
  • Graph Index (Supports either Neo4j or Elasticsearch)

The dependencies must be deployed before deploying DataHub. We created a separate chart for deploying the dependencies with example configuration. They can also be deployed separately on-prem or leveraged as managed services. To remove your dependency on Neo4j, set enabled to false for it in the values.yaml of the prerequisites chart. Then, override the graph_service_impl field in the values.yaml of datahub to elasticsearch instead of neo4j.
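As a hedged sketch of the Neo4j swap described above: the value keys below (neo4j-community.enabled, global.graph_service_impl) are assumptions that should be checked against the values.yaml of your chart versions.

```shell
# Hedged sketch: exact value keys depend on your chart version.
# Disable the Neo4j subchart in the prerequisites chart...
helm install prerequisites datahub/datahub-prerequisites \
  --set neo4j-community.enabled=false

# ...and point DataHub's graph service at Elasticsearch instead of Neo4j.
helm install datahub datahub/datahub \
  --set global.graph_service_impl=elasticsearch
```

The same overrides can equally be placed in a values file passed via --values, which is easier to keep in version control.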

Quickstart#

Assuming the kubectl context points to the correct Kubernetes cluster, first create Kubernetes secrets that contain the MySQL and Neo4j passwords.

kubectl create secret generic mysql-secrets --from-literal=mysql-root-password=datahub
kubectl create secret generic neo4j-secrets --from-literal=neo4j-password=datahub

The above commands set the passwords to "datahub" as an example. Change them to any passwords of your choice.
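If you want to confirm that a secret was stored with the value you intended, one way is to read it back; Kubernetes keeps secret data base64-encoded, so it must be decoded:

```shell
# Read back and decode the MySQL root password from the secret created above.
kubectl get secret mysql-secrets \
  -o jsonpath='{.data.mysql-root-password}' | base64 -d
```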

Add the datahub helm repo by running the following

helm repo add datahub https://helm.datahubproject.io/

Then, deploy the dependencies by running the following

helm install prerequisites datahub/datahub-prerequisites

Note, the above uses the default configuration defined here. You can change any of the configuration values and deploy by running the following command.

helm install prerequisites datahub/datahub-prerequisites --values <<path-to-values-file>>
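As a hedged sketch of such an override file: the keys below (elasticsearch.replicas, neo4j-community.enabled) are illustrative and must be checked against the default values.yaml of your prerequisites chart version.

```shell
# Hypothetical override file; key names must match your chart version's values.yaml.
cat > prerequisites-values.yaml <<'EOF'
elasticsearch:
  replicas: 1            # shrink the Elasticsearch cluster for a dev setup
neo4j-community:
  enabled: false         # skip Neo4j if using Elasticsearch as the graph index
EOF

helm install prerequisites datahub/datahub-prerequisites \
  --values prerequisites-values.yaml
```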

Run kubectl get pods to check whether all the pods for the dependencies are running. You should get a result similar to below.

NAME                                               READY   STATUS      RESTARTS   AGE
elasticsearch-master-0                             1/1     Running     0          62m
elasticsearch-master-1                             1/1     Running     0          62m
elasticsearch-master-2                             1/1     Running     0          62m
prerequisites-cp-schema-registry-cf79bfccf-kvjtv   2/2     Running     1          63m
prerequisites-kafka-0                              1/1     Running     2          62m
prerequisites-mysql-0                              1/1     Running     1          62m
prerequisites-neo4j-community-0                    1/1     Running     0          52m
prerequisites-zookeeper-0                          1/1     Running     0          62m

Deploy DataHub by running the following

helm install datahub datahub/datahub

Values in values.yaml have been preset to point to the dependencies deployed using the prerequisites chart with release name "prerequisites". If you deployed the helm chart using a different release name, update the quickstart-values.yaml file accordingly before installing.

Run kubectl get pods to check whether all the datahub pods are running. You should get a result similar to below.

NAME                                               READY   STATUS      RESTARTS   AGE
datahub-datahub-frontend-84c58df9f7-5bgwx          1/1     Running     0          4m2s
datahub-datahub-gms-58b676f77c-c6pfx               1/1     Running     0          4m2s
datahub-datahub-mae-consumer-7b98bf65d-tjbwx       1/1     Running     0          4m3s
datahub-datahub-mce-consumer-8c57d8587-vjv9m       1/1     Running     0          4m2s
datahub-elasticsearch-setup-job-8dz6b              0/1     Completed   0          4m50s
datahub-kafka-setup-job-6blcj                      0/1     Completed   0          4m40s
datahub-mysql-setup-job-b57kc                      0/1     Completed   0          4m7s
elasticsearch-master-0                             1/1     Running     0          97m
elasticsearch-master-1                             1/1     Running     0          97m
elasticsearch-master-2                             1/1     Running     0          97m
prerequisites-cp-schema-registry-cf79bfccf-kvjtv   2/2     Running     1          99m
prerequisites-kafka-0                              1/1     Running     2          97m
prerequisites-mysql-0                              1/1     Running     1          97m
prerequisites-neo4j-community-0                    1/1     Running     0          88m
prerequisites-zookeeper-0                          1/1     Running     0          97m

You can run the following to expose the frontend locally. Note, you can find the pod name using the command above. In this case, the datahub-frontend pod name was datahub-datahub-frontend-84c58df9f7-5bgwx.

kubectl port-forward <datahub-frontend pod name> 9002:9002
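Putting the port-forward together with a quick smoke test might look like the following sketch; the pod name is taken from the example output above, so substitute your own:

```shell
# Forward local port 9002 to the frontend pod (runs in the background)...
kubectl port-forward datahub-datahub-frontend-84c58df9f7-5bgwx 9002:9002 &
sleep 2

# ...then check that the frontend answers with an HTTP status code.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:9002
```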

You should be able to access the frontend via http://localhost:9002.

Once you confirm that the pods are running well, you can set up ingress for datahub-frontend to expose the 9002 port to the public.
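One hedged way to set that up is a standard Kubernetes Ingress routing to the frontend Service. The Service name (datahub-datahub-frontend), host, and ingress class below are assumptions to adapt; an ingress controller must already be installed in the cluster.

```shell
# Hedged sketch: a plain networking.k8s.io/v1 Ingress pointing at the frontend
# Service. Service name and host are assumptions to adapt to your deployment.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: datahub-frontend
spec:
  rules:
    - host: datahub.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: datahub-datahub-frontend
                port:
                  number: 9002
EOF
```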

Other useful commands#

Command                    Description
helm uninstall datahub     Remove DataHub
helm ls                    List of Helm charts
helm history               Fetch a release history