Create a test cluster

You need a Kubernetes cluster to run Service Mesh Manager. If you don’t already have a Kubernetes cluster to work with, create one with one of the following methods.

CAUTION:

Supported providers and Kubernetes versions

The cluster must run a Kubernetes version that Istio supports. For Istio 1.13.x, these are Kubernetes 1.19, 1.20, 1.21 and 1.22.

Service Mesh Manager is tested and known to work on the following Kubernetes providers:

  • Cisco Intersight Kubernetes Service (IKS)
  • Amazon Elastic Kubernetes Service (Amazon EKS)
  • Google Kubernetes Engine (GKE)
  • Azure Kubernetes Service (AKS)
  • On-premises installation of stock Kubernetes with load balancer support (and optionally PVCs for persistence)

Resource requirements:

Make sure that your Kubernetes cluster has sufficient resources. The default installation (with Service Mesh Manager and demo application) requires the following amount of resources on the cluster:

  • CPU:
    • 12 vCPU in total
    • 4 vCPU available for allocation per worker node
  • Memory:
    • 16 GB in total
    • 2 GB available for allocation per worker node
  • 12 GB of ephemeral storage on the Kubernetes worker nodes (for Traces and Metrics)

Note: These minimum requirements need to be available for allocation within your cluster, in addition to the requirements of any other loads running in your cluster (for example, DaemonSets and Kubernetes node-agents). If Kubernetes cannot allocate sufficient resources to Service Mesh Manager, some pods will remain in Pending state, and Service Mesh Manager will not function properly.

Enabling additional features, such as High Availability, increases these requirements.

The default installation, when enough headroom is available in the cluster, should be able to support at least 150 running Pods and the same number of Services. To set up Service Mesh Manager for bigger workloads, see scaling Service Mesh Manager.
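One way to verify these prerequisites from the command line is sketched below; it only assumes kubectl is installed, and the memory-conversion helper (`ki_to_gib`) is an illustrative addition, not part of Service Mesh Manager.

```shell
# Check the server version (must be one Istio 1.13.x supports, 1.19-1.22)
# and the allocatable capacity per node, i.e. what the scheduler can
# actually assign. Skipped entirely if kubectl is not installed.
if command -v kubectl >/dev/null; then
  kubectl version
  kubectl get nodes -o custom-columns='NAME:.metadata.name,CPU:.status.allocatable.cpu,MEM:.status.allocatable.memory'
  # Pods stuck in Pending usually mean the requirements above are not met:
  kubectl get pods -A --field-selector=status.phase=Pending
fi

# Allocatable memory is reported in Ki; a small helper to convert such a
# value to GiB for comparison against the 16 GB total requirement:
ki_to_gib() { awk -v ki="${1%Ki}" 'BEGIN { printf "%.1f", ki / 1048576 }'; }
ki_to_gib 16374236Ki   # prints 15.6
```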

  • Run locally (~5 minutes): Deploy Service Mesh Manager to a single-node Kubernetes cluster running on your development machine.
  • Run on a Kubernetes cluster (~10 minutes): Deploy Service Mesh Manager to a Kubernetes cluster of your choice.

Run Service Mesh Manager locally

Recommended if you don’t have or don’t want to create a Kubernetes cluster, but want to try out Service Mesh Manager quickly.

  1. Install one of the following tools to run a Kubernetes cluster locally: Minikube, Docker for Desktop, or Kind.

  2. Ensure that the local Kubernetes cluster has at least:

    • 6 CPUs
    • 8.0 GiB memory
    • 16 GB disk space
  3. Launch the local Kubernetes cluster with one of the following tools:

    • Minikube:

      minikube start --cpus=6 --memory=8192
      
    • Docker for Desktop: In preferences, choose Enable Kubernetes.

    • Kind:

      kind create cluster
      
  4. Proceed to Install Service Mesh Manager.

  5. When you’re done experimenting, you can remove the demo application, Service Mesh Manager, and Istio from your cluster with the following command, which removes all of these components in the correct order:

    smm uninstall -a
    

    Note: Uninstalling Service Mesh Manager does not remove the Custom Resource Definitions (CRDs) from the cluster, because removing a CRD removes all related resources. Since Service Mesh Manager uses several external components, this could remove things not belonging to Service Mesh Manager.
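Once the local cluster from step 3 is up, a quick readiness check before installing can save debugging later; this is a minimal sketch that only assumes kubectl is on the PATH:

```shell
# How long to wait for the node to become Ready (tune for slower machines):
READY_TIMEOUT=120s

# Confirm the cluster is reachable and its node is schedulable before
# installing Service Mesh Manager. Skipped if kubectl is not installed.
if command -v kubectl >/dev/null; then
  kubectl get nodes
  kubectl wait --for=condition=Ready node --all --timeout="$READY_TIMEOUT"
fi
```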

Run on a Kubernetes cluster

Recommended if you have a Kubernetes cluster and want to try out Service Mesh Manager quickly.

  1. Using your preferred provider, create a cluster that runs a supported Kubernetes version and meets the resource requirements listed in the CAUTION at the beginning of this page.

  2. Set Kubernetes configuration and context.

    The Service Mesh Manager command-line tool uses your current Kubernetes context, from the file named in the KUBECONFIG environment variable (~/.kube/config by default). Verify that this context points to the cluster you plan to deploy the product on by running the following command:

    kubectl config get-contexts
    

    If there are multiple contexts in the kubeconfig file, specify the one you want to use with the use-context subcommand, for example:

    kubectl config use-context <context-to-use>
    
  3. Proceed to Install Service Mesh Manager.
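Putting the commands above together, a typical session might look like the following; the kubeconfig path and context name are placeholders for your own values:

```shell
# Point kubectl (and the smm CLI) at an explicit kubeconfig file; the
# path here is just the default location, shown as an example:
export KUBECONFIG="$HOME/.kube/config"

# List the available contexts; the current one is marked with '*'.
# Skipped if kubectl is not installed.
if command -v kubectl >/dev/null; then
  kubectl config get-contexts
fi

# Then switch to the context of the target cluster, for example:
# kubectl config use-context <context-to-use>
```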