A Kubernetes cluster can run on OpenStack. One of the many options is to use Rancher Kubernetes Engine. In this demo, we explore the capabilities of Kubernetes with a simple example slightly beyond "Hello World". We use Kubernetes to create a StatefulSet of NGINX pods. The system has fail-over protection and disaster recovery built in. The web content is stored in an S3 bucket, as web servers in the cloud normally do.
In particular, we will discuss items 1 - 4 and leave items 5 - 7 as post-workshop reading:
...
...
Note that you can use `kubectl config current-context` to check the current context and `kubectl config use-context local` to reset it. If you are working with multiple Kubernetes clusters in the same or different clouds, you always want to check and switch contexts with these two commands.
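For example, to check the active context and then switch back to the new cluster (named `local` here)::

    kubectl config current-context
    kubectl config use-context local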
A floating IP is assigned to the new cluster for various endpoints. The value can be found in the config file or with the following command::
Combined with the FIP found above, you can access the monitor at http://45.86.170.94:30002/login.
There are many useful dashboards built for you already. The most frequently used one is `Kubernetes / Nodes`. It provides a very good overview of resource consumption, for example:
...
...
Create an ingress so that the cluster IP gets exposed to the external network::
NAME HOSTS ADDRESS PORTS AGE
nginx-ingress * 80 26s
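A minimal sketch of such an ingress, assuming an NGINX ingress controller and a backing Service named `nginx` on port 80 (the service name and rewrite rule are illustrative and may differ from the manifest used in the workshop)::

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: nginx-ingress
      annotations:
        # strip the /nginx prefix before forwarding requests to the service
        nginx.ingress.kubernetes.io/rewrite-target: /$2
    spec:
      rules:
      - http:
          paths:
          - path: /nginx(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: nginx
                port:
                  number: 80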
Now NGINX is accessible via the same floating IP as the other endpoints, which is provided in the config file. In my cluster, the URL is http://45.86.170.94/nginx/.
Note the NodePort. It is needed to access the web UI via the floating IP, for example http://45.86.170.94:30968/. Log in with the access key and secret key specified in minio.yml. Upload files via the GUI. Follow the Minio documentation to use the REST interface to load a large number of files.
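If you prefer the command line, the NodePort can also be read from the Minio service; a quick sketch, assuming the service created by minio.yml is named `minio`::

    kubectl get svc minio -o jsonpath='{.spec.ports[0].nodePort}'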
Deleting Minio after using it for better security
+++++++++++++++++++++++++++++++++++++++++++++++++
...
...
Update pvc.yml, minio.yml and web.yml to make sure that the mount points match.
deployment.extensions/minio-nginx created
service/minio-nginx created
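For reference, messages like the above are printed when the updated manifests are re-applied, for example (assuming the file names listed above)::

    kubectl apply -f pvc.yml -f minio.yml -f web.yml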
Log onto Minio at http://45.86.170.94:30968/, where 30968 is the new NodePort shown in the GUI. Create a bucket `html` and place an index.html file in it.
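If you would rather script this step than use the web UI, the Minio client `mc` can create the bucket and upload the file; a sketch, assuming the access and secret keys from minio.yml::

    mc alias set demo http://45.86.170.94:30968 <access-key> <secret-key>
    mc mb demo/html
    mc cp index.html demo/html/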
Check NGINX at http://45.86.170.94/nginx/. You should see an HTML page without styling instead of an HTTP 404 error.
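You can verify the same from the command line, for example::

    curl -i http://45.86.170.94/nginx/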
However, Minikube does not have the storage class nfs-client::
NAME PROVISIONER AGE
standard (default) k8s.io/minikube-hostpath 47h
We will create a toy NFS server providing such a storage class on Minikube by running `~/adv-k8s/osk/nfs-server.sh`. Provide your password when prompted. After a little while, you should see messages ending with the following::
Waiting for 1 pods to be ready...
partitioned roll out complete: 1 new pods have been updated...
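Once the script completes, the new storage class should be listed next to the default one::

    kubectl get storageclass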
...
...
Make sure that the arguments to initialize the container refer to the same mount point.
Apply the `Deployment` for Minio to turn the shared persistent volume in `ReadWriteMany` mode into S3 storage::
You can create a cluster from **Operations** > **Kubernetes** > **Add Kubernetes cluster** > **Create cluster on GKE**.
You need to create an account on Google Cloud to be able to use this feature. Once the cluster is created, you need to enable:
- **Ingress** (for load balancing)
- **cert-manager** (optional - for SSL certificates)
- **Prometheus** (optional - for monitoring cluster/application)
Once the Ingress setup is done, you need to set the base domain to $public-ip$.nip.io. There should be a suggestion below the base domain input box for your reference.
### Enabling Auto-DevOps ###
Enable Auto-DevOps from **Settings** > **CI/CD** > **Auto-DevOps**,
Once Auto-DevOps is enabled, you can go to **CI/CD** to check the latest pipeline. The latest pipeline should show the application build and deployment steps.
Your application link should be displayed in the **Production** stage.
### Checking Security Dashboard ###
Once the pipeline has succeeded, you can go to **Security & Compliance** > **Security Dashboard** to check the security scan reports.
### Checking Monitoring Dashboard ###
If you have enabled **Prometheus** in Cluster Applications, you can check **Operations** > **Metrics** for application CPU/Memory/IO metrics.
### Conclusion ###
This tutorial guides you through setting up Auto-DevOps with a sample application. However, some customization is needed for your own application to work with Auto-DevOps. You can find more about customization in the GitLab Auto DevOps documentation. A few lessons learned:
1. Always have a Dockerfile ready; don't let Auto-DevOps build with buildpacks, as that generally does not turn out well.
2. Instead of enabling Auto-DevOps through the settings, you can create a `.gitlab-ci.yml` and import the Auto-DevOps template. It does basically the same thing, but the latter gives you more flexibility in case you need to add more steps (see the sketch after this list).
3. If you are trying this out on a cloud Kubernetes service, be mindful that this can incur heavy costs. Please destroy the cluster once you have finished.
4. Adding your own cluster is the cost-efficient option. If your organization provides a Kubernetes cluster in a private cloud, you can add the cluster by following the guide below:
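For option 2 above, a minimal `.gitlab-ci.yml` that imports the Auto-DevOps template could look like the following sketch (extend it with your own jobs as needed):

```yaml
# Pull in GitLab's built-in Auto-DevOps pipeline definition.
include:
  - template: Auto-DevOps.gitlab-ci.yml
```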
**Cons**: The most complicated installation, configuration and authentication. The ancient Python version on the EBI cluster makes life harder.
Log into the EBI cluster (e.g. `ssh ebi-cli` or `ssh ebi-login`) from a terminal window. Run the following commands to install Miniconda3 as instructed by https://docs.conda.io/en/latest/miniconda.html. Answer `yes` to all the questions::
    [davidyuan@noah-login-03 ~]$ cd "${HOME}" && curl -O https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
This should create a base conda environment with Python 3.8 installed. You can confirm that with the following commands::
Python 3.4.x is no longer officially supported by the Google Cloud SDK, yet the EBI cluster is still on that ancient version. gsutil refuses to work with Python 3.4 on the EBI cluster, so you cannot copy files into object storage with gsutil there. You have two options:
You can use rsync to copy files to the object storage bucket::
Now you can install Google Cloud SDK with Miniconda3 as instructed by https://anaconda.org/conda-forge/google-cloud-sdk. Again, answer `yes` when asked::
You can SSH from the EBI cluster to any node on GCP (e.g. `gcloud compute ssh --zone $ZONE $LOGIN_HOST --tunnel-through-iap --project $PROJECT`).
Notes:
#. You can install a local copy of Python 3.7 under your $HOME to use gsutil with the correct version of Python.
#. You can SCP files to or from Cloud Shell and use gsutil in the Cloud Shell to further move files to or from object storage.
#. You can use IAP with SCP in a similar fashion (e.g. `gcloud compute scp --zone $ZONE --tunnel-through-iap --project $PROJECT <normal_scp_parameters>`). It can be handy to push or pull files between EBI cluster and GCP nodes via SCP.
# It is a good idea to create a plan first. The input values can be provided with var-files.
#
terraform plan -lock=false -var-file="${DIR}/resops.tf" -out="${DIR}/kubespray/out.plan" "${DIR}/kubespray/contrib/terraform/openstack"
# Apply the planned changes explicitly. Terraform does not guarantee that plan and apply generate the same plan. Thus, it is a good idea to provide an explicit plan and turn on auto approval.
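#
# A sketch of the corresponding apply step, assuming the plan file created above.
# Applying a saved plan file runs without an interactive approval prompt.
terraform apply "${DIR}/kubespray/out.plan"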