Microservice Cluster Setup¶
Creating a Kubernetes cluster is the first step in deploying microservices. Assure1 minimizes the complexities around creating and maintaining Kubernetes clusters. Rancher Kubernetes Engine (RKE) is a command-line tool for deploying Kubernetes. The Assure1 clusterctl application is a frontend to RKE and provides the configuration necessary for the opinionated setup.
Review the architecture and components involved in Understanding Microservices before creating a cluster.
Roles¶
The Cluster.Master and Cluster.Worker roles must be installed on one or more servers. For a single-server development system, the recommendation is to install both roles. On production systems, each data center or availability zone should have at least 3 servers with the Cluster.Master role. These servers can also have the Cluster.Worker role, depending on the resources available. The following Package commands must be run as the root user.
- Install both the Cluster.Master and Cluster.Worker roles by using the Cluster meta role:
$A1BASEDIR/bin/Package install-role Cluster
- Install only the Cluster.Master role:
$A1BASEDIR/bin/Package install-role Cluster.Master
- Install only the Cluster.Worker role:
$A1BASEDIR/bin/Package install-role Cluster.Worker
Note
These roles can be added during installation of a new server by specifying them to SetupWizard.
Set Up SSH Keys¶
Each server in an Assure1 instance needs an SSH key so that the assure1 user can access the other servers. This step is not needed on the primary presentation server. The following CreateSSLCertificate command should be run as the assure1 user.
$A1BASEDIR/bin/CreateSSLCertificate --Type SSH
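As a quick check that key-based access works, you can try connecting from one cluster server to another as the assure1 user. The hostname below is a hypothetical example:
su - assure1
ssh cluster-pri2.example.com hostname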
Creating Clusters¶
The clusterctl command-line application provides the interface for creating, updating, and removing clusters. It determines which servers belong to each cluster based on their Cluster.* roles and whether those servers have already been associated with an existing cluster. The clusterctl command must be run as the root user.
$A1BASEDIR/bin/cluster/clusterctl create
Note
For redundant clusters across data centers, care must be taken if all the roles are installed before cluster creation. By default, clusterctl pulls in all available servers and creates a single cluster. To define two separate clusters, specify the hosts explicitly.
- Primary cluster:
$A1BASEDIR/bin/cluster/clusterctl create --host cluster-pri1.example.com --host cluster-pri2.example.com --host cluster-pri3.example.com
- Redundant cluster:
$A1BASEDIR/bin/cluster/clusterctl create --host cluster-sec1.example.com --host cluster-sec2.example.com --host cluster-sec3.example.com --secondary
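After creation completes, you can confirm that the expected servers joined as nodes. This is a sketch that assumes the a1k kubectl wrapper shown in the troubleshooting section below; run it as the assure1 user:
a1k get nodes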
Update the Helm Repository¶
The Helm Repository should be updated on at least one server in a cluster, usually one of the primaries.
su - assure1
export WEBFQDN=<Primary Presentation Web FQDN>
a1helm repo update
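For example, with a hypothetical presentation server name, and assuming a1helm wraps the standard helm CLI so that the available charts can then be listed:
export WEBFQDN=presentation.example.com
a1helm repo update
a1helm search repo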
Install Helm Packages¶
Helm packages are installed as releases, which can have unique names. By convention, a release is named the same as its Helm chart. Each install must define the location of the Docker registry and the namespace for the release. Additional configuration can be set during install, depending on the options each chart provides. An example install command follows the list below.
- The Assure1 Trap Collector microservice documentation has the information needed to deploy the service.
- An example event collection and processing pipeline includes multiple microservices:
  - The Assure1 Trap Collector microservice receives traps.
  - The Assure1 FCOM Processor microservice processes the traps.
  - The Assure1 Event Sink microservice inserts the traps into the database.
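The following is an illustrative sketch only; the release name, chart name, namespace, and registry parameter are assumptions for illustration, not confirmed chart options. See each microservice's documentation for the exact values:
su - assure1
export WEBFQDN=<Primary Presentation Web FQDN>
a1helm install trap-collector assure1/trap-collector -n a1-zone1-pri --set global.imageRegistry=$WEBFQDN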
Customizing the Cluster Configuration File¶
Some installations may need to customize the configuration file that is used when creating clusters.
Creating a New Cluster¶
When creating a new cluster, edit the template file found in the following location:
$A1BASEDIR/etc/rke/cluster-tmpl.yml
Clusters can then be created as described above.
Updating an Existing Cluster¶
For a server with clusters already running, update the cluster.yml file, then update the clusters by using the clusterctl upgrade command. The configuration file is in the following location:
$A1BASEDIR/etc/rke/cluster.yml
Once the file has been changed, the following command must be run to upgrade the clusters with the new configurations:
Note
This command must be run as the root user:
$A1BASEDIR/bin/cluster/clusterctl upgrade
Example: Changing the File Size Limit Used by the Vision Ingress Controller¶
One example is to change the maximum body size for the ingress controller. In the relevant configuration file, find the ingress section. In the options definition of that section, edit or add the following line to change the maximum size allowed:
proxy-body-size: 15m
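In context, the edited ingress section of the configuration file might look like the following sketch, which assumes the default nginx ingress provider used by RKE:
ingress:
  provider: nginx
  options:
    proxy-body-size: 15m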
Then run the upgrade command above when upgrading existing clusters, or follow the documentation above to create new clusters.
Troubleshooting¶
Helm deployments and the associated Kubernetes pods, services, and other components can fail to initialize or can crash unexpectedly. The following commands can help troubleshoot these issues.
Note
These commands must be run as the assure1 user:
su - assure1
- Look at all running pods:
a1k get pods --all-namespaces
- Describe a pod to get events if it fails to start:
a1k describe pod <Pod Name> -n <Namespace>
Note
The <Pod Name> and <Namespace> values are available in the output of the get pods command above.
- Get and tail the logs of a running pod:
a1k logs <Pod Name> -n <Namespace> -f
Note
The <Pod Name> and <Namespace> values are available in the output of the get pods command above.
- Uninstall a microservice:
a1helm uninstall <Release Name> -n <Namespace>
Note
The <Release Name> and <Namespace> values are available in the output of the following command:
a1helm list --all-namespaces