Highly Available Cluster Setup
You can run three Dgraph Alpha servers and three Dgraph Zero servers in a highly
available cluster setup. For a highly available setup, start the Dgraph Zero
server with the `--replicas 3` flag so that all data is replicated on three
Alpha servers, forming one Alpha group. You can install a highly available
cluster using:
- dgraph-ha.yaml file
- Helm charts.
Install a highly available Dgraph cluster using YAML or Helm
Before you begin
- Install the Kubernetes command line tool.
- Ensure that you have a production-ready Kubernetes cluster with at least three worker nodes running in a cloud provider of your choice.
- (Optional) To run Dgraph Alpha with TLS, see TLS Configuration.
Installing a highly available Dgraph cluster
1. Verify that you are able to access the nodes in the Kubernetes cluster:
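For example, with kubectl configured against your cluster:

```shell
# List the cluster nodes; expect at least three worker nodes in the Ready state.
kubectl get nodes
```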
After your Kubernetes cluster is up, you can use dgraph-ha.yaml to start the cluster.
2. Start a StatefulSet that creates Pods with `Zero`, `Alpha`, and `Ratel UI`:
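A typical invocation applies the manifest directly from the Dgraph repository. The path below is illustrative; confirm the current location of `dgraph-ha.yaml` in the `dgraph-io/dgraph` repository before running it:

```shell
# Create the Zero and Alpha StatefulSets and the Ratel UI Deployment.
# Path is illustrative -- verify it against the dgraph-io/dgraph repository.
kubectl apply -f https://raw.githubusercontent.com/dgraph-io/dgraph/main/contrib/config/kubernetes/dgraph-ha/dgraph-ha.yaml
```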
3. Confirm that the Pods were created successfully:
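For example:

```shell
# All Zero, Alpha, and Ratel Pods should eventually report a Running status.
kubectl get pods
```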
   You can check the logs for a Pod using `kubectl logs --follow <POD_NAME>`.
4. Port forward from your local machine to the Pod:
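For example, to reach the Ratel UI locally. The service name below assumes the `dgraph-ha.yaml` manifest; check `kubectl get services` for the actual names in your cluster:

```shell
# Forward local port 8000 to the Ratel service (name is an assumption --
# verify with `kubectl get services`).
kubectl port-forward service/dgraph-ratel-public 8000:8000
```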
5. Go to `http://localhost:8000` to access Dgraph using the Ratel UI.
Deleting highly available Dgraph resources
Delete all the resources using:
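For example, assuming you installed from the manifest shown earlier (the URL and label selector are illustrative; verify them against the manifest you used):

```shell
# Remove the resources created by the manifest.
kubectl delete -f https://raw.githubusercontent.com/dgraph-io/dgraph/main/contrib/config/kubernetes/dgraph-ha/dgraph-ha.yaml

# Persistent Volume Claims are not always removed automatically; delete them
# explicitly if you no longer need the data (label selector may vary).
kubectl delete pvc -l app=dgraph
```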
Before you begin
- Install the Kubernetes command line tool.
- Ensure that you have a production-ready Kubernetes cluster with at least three worker nodes running in a cloud provider of your choice.
- Install Helm.
- (Optional) To run Dgraph Alpha with TLS, see TLS Configuration.
Installing a highly available Dgraph cluster using Helm
1. Verify that you are able to access the nodes in the Kubernetes cluster:
After your Kubernetes cluster is up and running, you can use the Dgraph Helm chart to install a highly available Dgraph cluster.
2. Add the Dgraph Helm repository:
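For example:

```shell
# Register the Dgraph chart repository and refresh the local chart index.
helm repo add dgraph https://charts.dgraph.io
helm repo update
```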
3. Install the chart with `<RELEASE-NAME>`. You can also specify the chart version.
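For example (chart name `dgraph/dgraph` assumes the repository added as `dgraph`):

```shell
# Install the chart, replacing <RELEASE-NAME> with a name of your choice.
helm install <RELEASE-NAME> dgraph/dgraph

# Or pin a specific chart version (recommended for production):
helm install <RELEASE-NAME> dgraph/dgraph --version <CHART-VERSION>
```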
   When configuring the Dgraph image tag, be careful not to use `latest` or `main` in a production environment. These tags can change the Dgraph version underneath you, creating a mixed-version Dgraph cluster that can lead to an outage and potential data loss.
4. Get the names of the Pods in the cluster using `kubectl get pods`.
5. Get the Dgraph Alpha HTTP/S endpoint by running these commands:
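A common pattern is shown below; the service name is an assumption based on typical chart naming, so verify it with `kubectl get services` first:

```shell
# Forward local port 8080 to the Alpha service (name may differ by chart version).
kubectl port-forward service/<RELEASE-NAME>-dgraph-alpha 8080:8080

# Query the Alpha health endpoint.
curl http://localhost:8080/health
```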
Deleting the resources from the cluster
1. Delete the Helm deployment using:
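For example (Helm 3 syntax):

```shell
helm uninstall <RELEASE-NAME>
```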
2. Delete the associated Persistent Volume Claims:
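PVCs created by the StatefulSets survive `helm uninstall`, so remove them explicitly. The label selector below is an assumption and may differ across chart versions; confirm with `kubectl get pvc --show-labels`:

```shell
kubectl delete pvc -l release=<RELEASE-NAME>
```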
Dgraph configuration files
You can create Dgraph configuration files for the Alpha and Zero servers with
Helm chart configuration values, `<MY-CONFIG-VALUES>`. For more information
about the values, see the latest configuration settings.
1. Open an editor of your choice and create a configuration file named `<MY-CONFIG-VALUES>.yaml`:
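A sketch of what such a file can look like. The `configFile` keys shown here are assumptions based on common chart conventions; check the chart's values reference for the exact schema:

```yaml
# <MY-CONFIG-VALUES>.yaml -- illustrative only; verify keys against the chart docs.
alpha:
  configFile:
    config.yaml: |
      # Alpha server options go here, for example an access allow list:
      security:
        whitelist: 10.0.0.0/8
zero:
  configFile:
    config.yaml: |
      # Zero server options go here.
```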
2. Change to the directory in which you created `<MY-CONFIG-VALUES>.yaml` and then install with the Alpha and Zero configuration using:
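For example:

```shell
helm install <RELEASE-NAME> dgraph/dgraph --values <MY-CONFIG-VALUES>.yaml
```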
Exposing Alpha and Ratel Services
By default, the Zero and Alpha services are exposed only within the Kubernetes
cluster as Kubernetes service type `ClusterIP`.
To expose the Alpha and Ratel services publicly, you can use the Kubernetes
service type `LoadBalancer` or an Ingress resource.
Public Internet
To use an external load balancer, set the service type to `LoadBalancer`.
For security purposes, we recommend limiting access to any public endpoints, for example by using an allow list.
- To expose only the Alpha service to the Internet, use:
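For example (the `alpha.service.type` value name is assumed from the chart's service configuration; verify against the chart's values reference):

```shell
helm install <RELEASE-NAME> dgraph/dgraph --set alpha.service.type="LoadBalancer"
```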
- To expose both the Alpha and Ratel services to the Internet, use:
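For example (value names assumed as above):

```shell
helm install <RELEASE-NAME> dgraph/dgraph \
  --set alpha.service.type="LoadBalancer" \
  --set ratel.service.type="LoadBalancer"
```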
Private network
An external load balancer can be configured to face internally to a private subnet rather than the public Internet. This way it can be accessed securely by clients on the same network, through a VPN, or from a jump server. In Kubernetes, this is often configured through service annotations by the provider. Here’s a short list of annotations from cloud providers:
| Provider | Documentation Reference | Annotation |
|---|---|---|
| AWS | Amazon EKS: Load Balancing | `service.beta.kubernetes.io/aws-load-balancer-internal: "true"` |
| Azure | AKS: Internal Load Balancer | `service.beta.kubernetes.io/azure-load-balancer-internal: "true"` |
| Google Cloud | GKE: Internal Load Balancing | `cloud.google.com/load-balancer-type: "Internal"` |
The following example uses Amazon EKS as the provider.
1. Create a Helm chart configuration values file, `<MY-CONFIG-VALUES>.yaml`:
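A sketch of such a values file for an internal load balancer on EKS. The `service.annotations` keys are assumptions based on common chart conventions; verify them against the chart's values reference:

```yaml
# <MY-CONFIG-VALUES>.yaml -- illustrative only; verify keys against the chart docs.
alpha:
  service:
    type: LoadBalancer
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
ratel:
  service:
    type: LoadBalancer
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```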
2. To expose the Alpha and Ratel services privately, use:
You can expose Alpha and Ratel using an Ingress resource that can route traffic
to service resources. Before using this option, you may need to install an
ingress controller first, as is the case with AKS and EKS; GKE comes bundled
with a default ingress controller. When routing traffic based on the hostname,
you may want to integrate an add-on like ExternalDNS so that DNS records can be
registered automatically when deploying Dgraph.
As an example, you can configure a single ingress resource that uses ingress-nginx for Alpha and Ratel services.
1. Create a Helm chart configuration values file, `<MY-CONFIG-VALUES>.yaml`:
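A sketch of such a values file for ingress-nginx. The `global.ingress` keys are assumptions based on common chart conventions; verify them against the chart's values reference:

```yaml
# <MY-CONFIG-VALUES>.yaml -- illustrative only; verify keys against the chart docs.
global:
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
    alpha_hostname: alpha.<my-domain-name>
    ratel_hostname: ratel.<my-domain-name>
```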
2. To expose the Alpha and Ratel services through an Ingress, use:
You can run `kubectl get ingress` to see the status, then access these services
through their hostnames, such as `http://alpha.<my-domain-name>` and
`http://ratel.<my-domain-name>`.
Ingress controllers usually have an option to configure access for private
internal networks. Consult the documentation from your ingress controller
provider for further information.
Upgrading the Helm chart
You can update your cluster configuration by updating the configuration of the
Helm chart. Because Dgraph is a stateful database, upgrade the configuration
carefully, and in the right order, to move your cluster to the desired state.
In general, you can use `helm upgrade` to update the configuration values of
the cluster. Depending on your change, you may need to upgrade the
configuration in multiple steps.
To upgrade to an HA cluster setup:
1. Ensure that the shard replication setting, `zero.shardReplicaCount`, is greater than one. For example, set the shard replica flag on the Zero node group to 3: `zero.shardReplicaCount=3`.
2. Run the Helm upgrade command to restart the Zero node group:
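For example:

```shell
# Apply the Zero shard replication change while keeping the other values.
helm upgrade <RELEASE-NAME> dgraph/dgraph \
  --set zero.shardReplicaCount=3 \
  --reuse-values
```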
3. Set the Alpha replica count flag. For example: `alpha.replicaCount=3`.
4. Run the Helm upgrade command again:
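For example:

```shell
# Scale the Alpha node group to three replicas, keeping the other values.
helm upgrade <RELEASE-NAME> dgraph/dgraph \
  --set zero.shardReplicaCount=3 \
  --set alpha.replicaCount=3 \
  --reuse-values
```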