Create multi-cluster mesh
To create a multi-cluster mesh with Service Mesh Manager, you need:
- At least two Kubernetes clusters, with access to their kubeconfig files.
- The Service Mesh Manager CLI tool installed on your computer.
- Network connectivity properly configured between the participating clusters.
Create a multi-cluster mesh
To create a multi-cluster mesh with Service Mesh Manager, complete the following steps.
Note: If you are installing Service Mesh Manager on a managed Kubernetes solution of a public cloud provider (for example, Amazon EKS, AKS, or GKE) or kOps, the cluster name auto-discovered by Service Mesh Manager is incompatible with Kubernetes resource naming restrictions and Istio’s method of identifying clusters in a multicluster mesh.
To install Service Mesh Manager, you MUST use the --cluster-name parameter to set a cluster name that complies with the RFC 1123 DNS subdomain/label format (alphanumeric string without “_” or “.” characters).
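Before installing, you can check a proposed name locally. The following is a hedged sketch, not part of the smm CLI: is_valid_cluster_name is a hypothetical helper that tests a name against the restriction described above (lowercase alphanumerics and '-', no "_" or "."):

```shell
# Hypothetical helper (not part of the smm CLI): check that a proposed
# cluster name is an RFC 1123 DNS label -- lowercase alphanumerics and
# '-', starting and ending with an alphanumeric character.
is_valid_cluster_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]*[a-z0-9])?$'
}

is_valid_cluster_name "gke-gcp-cluster-region" && echo "valid"
is_valid_cluster_name "gke_gcp-cluster_region" || echo "invalid: contains '_'"
```

The second example mirrors the auto-discovered GKE-style name shown later in this page, which fails the check because of its underscores.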
Install Service Mesh Manager to the master cluster using the following command. This will install all Service Mesh Manager components to the cluster.
If you are installing Service Mesh Manager on a managed Kubernetes solution of a public cloud provider (for example, Amazon EKS, AKS, or GKE) or kOps, run:
smm install -a --cluster-name <name-of-your-cluster>
Otherwise, run:
smm install -a
If you experience errors during the installation, try running the installation in verbose mode:
smm install -v
Service Mesh Manager supports KUBECONFIG contexts having the following authentication methods:
- certfile and keyfile
- certdata and keydata
- bearer token
- exec/auth provider
Username-password pairs are not supported. If you are installing Service Mesh Manager in a test environment, you can install it without requiring authentication by running:
smm install --anonymous-auth -a --run-demo
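To see which authentication method a kubeconfig context relies on, one rough local check is to scan for the standard kubeconfig user fields. This is a hedged heuristic sketch, not part of the smm CLI; has_supported_auth is a hypothetical helper:

```shell
# Hypothetical heuristic (not part of smm): scan a kubeconfig file for
# the standard user fields that correspond to the supported methods
# (client certificates/keys, bearer token, exec/auth provider).
has_supported_auth() {
  grep -Eq 'client-certificate(-data)?:|client-key(-data)?:|token:|exec:|auth-provider:' "$1"
}

has_supported_auth "$HOME/.kube/config" \
  && echo "supported auth method found" \
  || echo "no supported auth method found (username/password is not supported)"
```

A kubeconfig that only carries username/password fields fails this check, matching the restriction above.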
On the primary Service Mesh Manager cluster, attach the peer cluster to the mesh using one of the following commands.
Note: To understand the difference between the remote Istio and primary Istio clusters, see the Istio control plane models section in the official Istio documentation. The short summary is that remote Istio clusters do not have a separate Istio control plane, while primary Istio clusters do.
The following commands automate the process of creating the resources necessary for the peer cluster, generate and set up the kubeconfig for that cluster, and attach the cluster to the mesh.
To attach a remote Istio cluster with the default options, run:
smm istio cluster attach <PEER_CLUSTER_KUBECONFIG_FILE>
To attach a primary Istio cluster (one that has an active Istio control plane installed), run:
smm istio cluster attach <PEER_CLUSTER_KUBECONFIG_FILE> --active-istio-control-plane
Note: If the name of the cluster cannot be used as a Kubernetes resource name (for example, because it contains an underscore, colon, or another special character), you must manually specify a name to use when you are attaching the cluster to the service mesh. For example:
smm istio cluster attach <PEER-CLUSTER-KUBECONFIG-FILE> --name <KUBERNETES-COMPLIANT-CLUSTER-NAME> --active-istio-control-plane
Otherwise, the following error occurs when you try to attach the cluster:
could not attach peer cluster: graphql: Secret "example-secret" is invalid: metadata.name: Invalid value: "gke_gcp-cluster_region": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.'
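When the auto-discovered name is not compliant, one way to derive a compliant name to pass to --name is to lowercase it and replace the offending characters. A hedged sketch; sanitize_cluster_name is a hypothetical helper, and you can of course choose any compliant name by hand:

```shell
# Hypothetical helper: derive a Kubernetes-compliant name by lowercasing
# and replacing '_', ':' and '.' with '-'. Double-check the result still
# starts and ends with an alphanumeric character before using it.
sanitize_cluster_name() {
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr '_:.' '---'
}

sanitize_cluster_name "gke_gcp-cluster_region"   # prints gke-gcp-cluster-region
```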
Verify the name that will be used to refer to the cluster in the mesh. To use the name of the cluster, press Enter. The name must comply with the RFC 1123 DNS subdomain/label format (alphanumeric string without “_” or “.” characters).
? Cluster must be registered. Please enter the name of the cluster (<current-name-of-the-cluster>)
Wait until the peer cluster is attached. Attaching the peer cluster takes some time, because it can complete only after the ingress gateway address is available. You can verify that the peer cluster is attached successfully with the following command:
smm istio cluster status
The process is finished when you see Available in the Status field of all clusters.
To attach other clusters, or to customize the network settings of the cluster, see Attach a new cluster to the mesh.
Deploy the demo application. You can deploy the demo application in a distributed way to multiple clusters with the following commands:
smm demoapp install -s frontpage,catalog,bookings,postgresql
smm -c <PEER_CLUSTER_KUBECONFIG_FILE> demoapp install -s movies,payments,notifications,analytics,database,mysql --peer
After installation, the demo application automatically starts generating traffic, and the dashboard draws a picture of the data flow. (If it doesn't, run the smm demoapp load start command, or select Generate load on the UI. To stop generating traffic, run smm demoapp load stop.)
If you are looking to deploy your own application, check out Deploy custom application for some guidelines.
If you are installing Service Mesh Manager on a managed Kubernetes solution of a public cloud provider (for example, AWS, Azure, or Google Cloud), assign admin roles so that you can tail the logs of your containers from the Service Mesh Manager UI and perform various tasks from the CLI that require custom permissions. Run the following command:
kubectl create clusterrolebinding user-cluster-admin --clusterrole=cluster-admin --user=<gcp/aws/azure username>
Open the dashboard and look around.
If you have purchased a commercial license for Service Mesh Manager, apply the license. For details, see Paid tier.
To remove the demo application from a peer cluster, run the following command:
smm -c <PEER_CLUSTER_KUBECONFIG_FILE> demoapp uninstall
To remove a peer cluster from the mesh, run the following command:
smm istio cluster detach <PEER_CLUSTER_KUBECONFIG_FILE>
For details, see Detach a cluster from the mesh.