Commit 3906e326 authored by Chandra Deep Tiwari's avatar Chandra Deep Tiwari

Merge branch 'latest' into 'external'

Latest to external merge request

See merge request !156
parents bc3a3e44 e4e69db8
Pipeline #100075 passed with stages
in 1 minute and 59 seconds
......@@ -44,8 +44,8 @@ Course guidelines
+----------+------------------------------------------------------------------------------------------------------------------+
| 60 min | `Kubernetes 101 <../../_static/pdf/resops2019/Kubernetes-101.pdf>`_ |
+----------+------------------------------------------------------------------------------------------------------------------+
| 120 min | `Kubernetes (Demo) <Kubernetes-Demo-2019.html>`_ |
| | / `Kubernetes Practicals <../../_static/pdf/resops2019/KubernetesPracticals.pdf>`_ |
| 120 min | `Overview <../../_static/pdf/resops2019/KubernetesPracticals.pdf>`_ |
| | / `Kubernetes (Demo) <Kubernetes-Demo-2019.html>`_ |
+----------+------------------------------------------------------------------------------------------------------------------+
| 45 min | `Overview <../../_static/pdf/resops2019/KubernetesPracticals.pdf>`_ |
| | / `Kubernetes Practical <Minikube-and-NGINX-Practical-2019.html>`_ |
......
......@@ -25,4 +25,5 @@ The exercises are:
- [Extend the pipeline by adding further steps](gitlab/05_add-further-steps.md)
- [Change the order of the pipeline steps](gitlab/06_change-order-of-steps.md)
- [Pass secret information to the build pipeline](gitlab/07_pass-build-secrets.md)
- [Learn about AutoDevOps](https://docs.gitlab.com/ee/topics/autodevops/).
\ No newline at end of file
- [Gitlab 101 Tool Certification](https://about.gitlab.com/handbook/people-group/learning-and-development/certifications/gitlab-101/).
- [AutoDevOps Tutorial](gitlab/auto-devops-tutorial.md).
\ No newline at end of file
Kubernetes on OpenStack
=======================
Kubernetes cluster can run on OpenStack. One of the many options is to use Rancher Kubernetes Engine. In this demo, we will explore capabilities of Kubernetes with an simple example slightly beyond "Hello World". We will use Kubernetes to create a StatefulSet of NGINX pods. The system has fail-over protection and disaster recovery built in. The web content is stored in a S3 bucket as web servers in clouds would normally do.
Kubernetes cluster can run on OpenStack. One of the many options is to use Rancher Kubernetes Engine. In this demo, we will explore capabilities of Kubernetes with an simple example slightly beyond "Hello World". We use Kubernetes to create a StatefulSet of NGINX pods. The system has fail-over protection and disaster recovery built in. The web content is stored in a S3 bucket as web servers in clouds would normally do.
.. /static/images/resops2019/Nginx.NFS.Minio.png
.. image:: /static/images/resops2019/Nginx.NFS.Minio.Sphinx.png
Figure: NGINX, NFS and Minio integartion
Figure: NGINX, NFS and Minio integration
In particular, we will discuss items 1 - 4 and leave items 5 - 7 as reading after the workshop:
......@@ -62,12 +62,13 @@ After restarting bash session or reloading ~/.bash_profile, you should see the c
Note that you can use `kubectl config use-context local` to reset the current context. If you are working with multiple Kubernetes clusters in the same or different clouds, you always want to check and switch contexts with these two commands.
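For reference, here is a quick sketch of the two commands (assuming the context is named `local`, as in this demo)::

    kubectl config get-contexts
    kubectl config use-context local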
A floating IP is assigned to the new cluster for various endpoints. The value can be found in the kube.yml. Here is the relevant section in kube.yml::
A floating IP is assigned to the new cluster for various endpoints. The value can be found in the config file or with the following command::
- cluster:
certificate-authority-data: __DELETED__
server: https://193.62.55.64:6443
name: local
C02XD1G9JGH7:.kube davidyuan$ kubectl cluster-info
Kubernetes master is running at https://45.86.170.94:6443
CoreDNS is running at https://45.86.170.94:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
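If you only need the API server address, it can also be extracted from the kubeconfig directly. A minimal sketch, assuming the default single-cluster layout of the generated config::

    kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'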
Kubectl
-------
......@@ -230,7 +231,7 @@ A resource monitor is created specific to your cluster for you, where Prometheus
prometheus-operator-prometheus-custom NodePort 10.43.21.250 <none> 9090:30986/TCP 86d
prometheus-operator-prometheus-node-exporter ClusterIP 10.43.179.232 <none> 9100/TCP 86d
Combining with FIP from kube.yml, you can access the monitor at `http://193.62.55.64:30002/login <http://193.62.55.64:30002/login>`_.
Combining with FIP from kube.yml, you can access the monitor at http://45.86.170.94:30002/login.
There are many useful dashboards built for you already. The most frequently used one is `Kubernetes / Nodes`. It provides a very good overview of resource consumption, for example:
......@@ -308,7 +309,7 @@ Create an ingress so that the cluster IP gets exposed to the external network::
NAME HOSTS ADDRESS PORTS AGE
nginx-ingress * 80 26s
Now the NGINX is accessible via the the same floating IP for other endpoints, which is provided in kube.yml. In my cluster, the URL is `http://193.62.55.64/nginx/ <http://193.62.55.64/nginx/>`_.
Now the NGINX is accessible via the same floating IP as the other endpoints, which is provided in kube.yml. In my cluster, the URL is http://45.86.170.94/nginx/.
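A quick sanity check from the command line (a sketch using the floating IP of my cluster; expect an HTTP 404 until the web content is loaded later)::

    curl -I http://45.86.170.94/nginx/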
.. image:: /static/images/resops2019/nginx.404.png
......@@ -324,6 +325,8 @@ Minio deployment and service
Create Minio deployment and service. Check if everything is started successfully::
C02XD1G9JGH7:minio davidyuan$ accesskey=__DELETED__
C02XD1G9JGH7:minio davidyuan$ secretkey=__DELETED__
C02XD1G9JGH7:tsi-ccdoc davidyuan$ kubectl create secret generic minio --from-literal=accesskey=${accesskey} --from-literal=secretkey=${secretkey}
secret/minio created
......@@ -334,7 +337,7 @@ Create Minio deployment and service. Check if everything is started successfully
minio Opaque 2 0s
C02XD1G9JGH7:tsi-ccdoc davidyuan$ kubectl apply -f https://gitlab.ebi.ac.uk/TSI/tsi-ccdoc/raw/master/tsi-cc/ResOps/scripts/minio/minio.yml
deployment.extensions/minio-nginx created
deployment.apps/minio-nginx created
service/minio-nginx created
C02XD1G9JGH7:tsi-ccdoc davidyuan$ kubectl get deployment
......@@ -348,7 +351,7 @@ Create Minio deployment and service. Check if everything is started successfully
minio-nginx NodePort 10.43.151.136 <none> 9000:30968/TCP 52s
nginx ClusterIP 10.43.173.206 <none> 80/TCP 30m
Note the NodePort. It is needed to access the web UI via the floating IP, for example `http://193.62.55.64:30968/ <http://193.62.55.64:30968/>`_. Login with the access key and secret key specified in minio.yml. Upload files via GUI. Follow Minio documentation to use REST interface to load large number of files.
Note the NodePort. It is needed to access the web UI via the floating IP, for example http://45.86.170.94:30968/. Log in with the access key and secret key specified in minio.yml. Upload files via the GUI. Follow the Minio documentation to use the REST interface to load a large number of files.
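If you have many files to upload, the MinIO client `mc` is usually easier than the GUI. A minimal sketch, assuming the access key and secret key created earlier and a hypothetical local directory `./html` (older `mc` releases use `mc config host add` instead of `mc alias set`)::

    mc alias set myminio http://45.86.170.94:30968 ${accesskey} ${secretkey}
    mc mb myminio/html
    mc cp --recursive ./html/ myminio/html/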
Deleting Minio after using it for better security
+++++++++++++++++++++++++++++++++++++++++++++++++
......@@ -426,9 +429,9 @@ Update pvc.yml, minio.yml and web.yml to make sure that the mount points are mat
deployment.extensions/minio-nginx created
service/minio-nginx created
Log onto Minio at `http://193.62.55.64:30968/ <http://193.62.55.64:30968/>`_, where 30968 is the new NodePort show on GUI. Create a bucket `html` and place an index.html file in it.
Log onto Minio at http://45.86.170.94:30968/, where 30968 is the new NodePort shown on the GUI. Create a bucket `html` and place an index.html file in it.
Check NGINX `http://193.62.55.64/nginx/ <http://193.62.55.64/nginx/>`_. You should see an HTML page without styling instead of HTTP404 error.
Check NGINX http://45.86.170.94/nginx/. You should see an HTML page without styling instead of an HTTP 404 error.
.. image:: /static/images/resops2019/nginx.home.png
......@@ -480,7 +483,7 @@ If you are curious how the backends work, connect to either one of the three pod
By the way, the same `index.html` can also be accessed via S3. Here is what the link may look like::
http://193.62.55.64:30968/html/index.html?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=davidyuan%2F20190513%2F%2Fs3%2Faws4_request&X-Amz-Date=20190513T102759Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Signature=297542c0b696b6980acd9252e35da7604623006334ef9b20d028c7b736217ae8
http://45.86.170.94:30968/html/index.html?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=davidyuan%2F20190513%2F%2Fs3%2Faws4_request&X-Amz-Date=20190513T102759Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Signature=297542c0b696b6980acd9252e35da7604623006334ef9b20d028c7b736217ae8
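A presigned link like this can be generated with the MinIO client, for example (a sketch, assuming the `myminio` alias from the upload step above; the expiry is capped at 7 days)::

    mc share download --expire 168h myminio/html/index.html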
The metadata for S3 protocol can be found under `/usr/share/nginx/.minio.sys`::
......
......@@ -51,7 +51,7 @@ However, Minikube does not have the storage class nfs-client::
NAME PROVISIONER AGE
standard (default) k8s.io/minikube-hostpath 47h
We are to create a toy NFS server providing such storage class on Minikube by running `~/adv-k8s/osk/nfs-server.sh`. After a little while, you should see messages ending with the following::
We will create a toy NFS server providing such a storage class on Minikube by running `~/adv-k8s/osk/nfs-server.sh`. Provide your password when prompted. After a little while, you should see messages ending with the following::
Waiting for 1 pods to be ready...
partitioned roll out complete: 1 new pods have been updated...
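You can verify that the `nfs-client` storage class is now registered (a quick check; the provisioner name depends on the script used)::

    kubectl get storageclass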
......@@ -323,7 +323,7 @@ Make sure that the arguments to initialize the container must refer to the same
Apply the `Deployment` for Minio to turn the shared persistent volume in `ReadWriteMany` mode into a S3 storage::
ubuntu@resops-k8s-node-nf-2:~$ kubectl apply -f ~/adv-k8s/osk/dpm/minio.yml
deployment.extensions/minio-freebayes created
deployment.apps/minio-freebayes created
service/minio-freebayes created
ubuntu@resops-k8s-node-nf-2:~$ kubectl get svc
......
## Auto DevOps tutorial ##
### Objective ###
Learn how to set up Auto DevOps.
### Forking Repo ###
Fork the Auto DevOps example provided by GitLab from the link below:
https://gitlab.com/auto-devops-examples/minimal-ruby-app
### Adding Kubernetes Cluster ###
You can create a cluster from **Operations** > **Kubernetes** > **Add Kubernetes cluster** > **Create cluster on GKE**.
You need an account on Google Cloud to be able to use this feature. Once the cluster is created, you need to enable:
- **Ingress** (For load balancing)
- **cert-manager** (Optional - for SSL certificates)
- **Prometheus** (Optional - for monitoring cluster/application)
Once the Ingress setup is done, you need to set the base domain to $public-ip$.nip.io. There should be a suggestion below the base domain input box for your reference.
### Enabling Auto-DevOps ###
Enable Auto DevOps from **Settings** > **CI/CD** > **Auto DevOps**:
![](/static/images/resops2019/auto-devops-capture-1.png)
### Checking Deployment ###
Once Auto DevOps is enabled, you can go to **CI/CD** to check the latest pipeline. The latest pipeline should show the application build and deployment steps.
Your application link should be displayed in the **Production** stage.
### Checking Security Dashboard ###
Once the pipeline has succeeded, you can go to **Security & Compliance** > **Security Dashboard** to check the security scan reports.
### Checking Monitoring Dashboard ###
If you have enabled **Prometheus** in Cluster Applications, you can check **Operations** > **Metrics** for application CPU/Memory/IO metrics.
### Conclusion ###
This tutorial guides you through setting up Auto DevOps with a sample application. Some customization is usually needed to enable Auto DevOps for your own application. You can find more about customization at:
https://docs.gitlab.com/ee/topics/autodevops/customize.html
### Best Practices ###
1. Always have a Dockerfile ready; don't let Auto DevOps build with buildpacks, which generally does not produce good results.
2. Instead of enabling Auto DevOps through the settings, you can create a .gitlab-ci.yml and include the Auto DevOps template (see the sketch after this list). It does basically the same thing, but the latter gives you more flexibility in case you need to add more steps.
3. If you are trying this out on a cloud Kubernetes service, be mindful that it can incur significant cost. Please destroy the cluster once you are done.
4. Adding your own cluster is the cost-efficient option. If your organization provides a Kubernetes cluster in a private cloud, you can add the cluster by following the guide below:
https://docs.gitlab.com/ee/user/project/clusters/add_remove_clusters.html
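For best practice 2, here is a minimal sketch of enabling Auto DevOps by importing the template into your own `.gitlab-ci.yml` (the template name is the one shipped with GitLab; add your custom jobs after the `include`):

```sh
# Create a minimal .gitlab-ci.yml that reuses the Auto DevOps template
cat > .gitlab-ci.yml <<'EOF'
include:
  - template: Auto-DevOps.gitlab-ci.yml
EOF
git add .gitlab-ci.yml && git commit -m "Enable Auto DevOps via CI template"
```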
......@@ -44,34 +44,33 @@ SSH from EBI cluster
**Cons**: Most complicated installation, configuration and authentication. The ancient Python version in the EBI cluster makes life harder.
Log into EBI cluster (e.g. `ssh ebi-cli` or `ssh ebi-login`) from a terminal window. Follow the instructions on `Using the Google Cloud SDK installer <https://cloud.google.com/sdk/docs/downloads-interactive#linux>`_ to install Google Cloud SDK interactively or silently. Once your shell is restarted and the gcloud environment is initialized, you can SSH from EBI cluster to any node on GCP (e.g. `gcloud compute ssh --zone $ZONE $LOGIN_HOST --tunnel-through-iap --project $PROJECT`).
Log into EBI cluster (e.g. `ssh ebi-cli` or `ssh ebi-login`) from a terminal window. Run the following commands to install Miniconda3 as instructed by https://docs.conda.io/en/latest/miniconda.html. Answer `yes` to all the questions::
Notes:
#. You can use IAP with SCP in a similar fashion (e.g. `gcloud compute scp --zone $ZONE --tunnel-through-iap --project $PROJECT <normal_scp_parameters>`). It can be handy to push or pull files between EBI cluster and GCP nodes via SCP.
[davidyuan@noah-login-03 ~]$ cd "${HOME}" && curl -O https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
[davidyuan@noah-login-03 ~]$ chmod +x ./Miniconda3-latest-Linux-x86_64.sh
[davidyuan@noah-login-03 ~]$ ./Miniconda3-latest-Linux-x86_64.sh
[davidyuan@noah-login-03 ~]$ . "${HOME}"/.bashrc
WORKAROUND: rsync for object store from EBI cluster
---------------------------------------------------
This should create a base conda environment with Python 3.8 installed. You can confirm that with the following commands::
Python 3.4.x is no longer officially supported by the Google Cloud SDK. EBI cluster is still on an ancient version. The gsutil refuses to work with python 3.4 on EBI cluster. Thus, you can not copy files into object storage with gsutil in EBI cluster. You have two options:
(base) [davidyuan@noah-login-03 ~]$ python --version
Python 3.8.3
(base) [davidyuan@noah-login-03 ~]$ which python
~/miniconda3/bin/python
You can use rsync to copy files to the object storage bucket::
Now you can install Google Cloud SDK with Miniconda3 as instructed by https://anaconda.org/conda-forge/google-cloud-sdk. Again, answer `yes` when asked::
gsutil rsync -r a gs://<my_bucket>/<uri>
(base) [davidyuan@noah-login-03 ~]$ conda install -c conda-forge google-cloud-sdk
Or, you can rsync from filesystem to filesystem, by setting up an SSH helper script on the EBI side::
It is always a good idea to double-check what you have done::
cat >gcloud-compute-ssh <<EOF
export CLOUDSDK_PYTHON=\$(which python3)
host="\$1"
shift
exec gcloud compute ssh "\$host" -- "\$@"
EOF
(base) [davidyuan@noah-login-03 ~]$ which gcloud
~/miniconda3/bin/gcloud
(base) [davidyuan@noah-login-03 ~]$ gcloud --version
Google Cloud SDK 310.0.0
chmod +x gcloud-compute-ssh
rsync --rsh ./gcloud-compute-ssh --recursive --partial --times --progress local_dir/ g2-controller:remote_dir/
You can SSH from EBI cluster to any node on GCP (e.g. `gcloud compute ssh --zone $ZONE $LOGIN_HOST --tunnel-through-iap --project $PROJECT`).
You have two options as workarounds:
Notes:
#. You can install a local copy of Python 3.7 under your $HOME to use gsutil with the correct version of Python.
#. You can SCP files to or from Cloud Shell and use gsutil in the Cloud Shell to further move files to or from object storage.
#. You can use IAP with SCP in a similar fashion (e.g. `gcloud compute scp --zone $ZONE --tunnel-through-iap --project $PROJECT <normal_scp_parameters>`). It can be handy to push or pull files between EBI cluster and GCP nodes via SCP.
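For example, a sketch of pushing a file from the EBI cluster to a GCP node through IAP (the file name is only an illustration)::

    gcloud compute scp --zone "${ZONE}" --tunnel-through-iap --project "${PROJECT}" ./results.tar.gz "${LOGIN_HOST}":~/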
......@@ -173,6 +173,7 @@ Site map
ResOps/2019/gitlab/Github-pipeline.rst
ResOps/2019/gitlab/Gitlab-DevOps.rst
ResOps/2019/gitlab/Tracing.rst
ResOps/2019/gitlab/auto-devops-tutorial.md
ResOps/2019/Important-considerations-for-research-pipelines.rst
ResOps/2019/Kubernetes-Demo-2019.rst
ResOps/2019/Minikube-and-NGINX-Practical-2019.rst
......
......@@ -3,7 +3,7 @@
# Adding Minikube to the new VMs
# https://computingforgeeks.com/how-to-install-vnc-server-on-ubuntu-18-04-lts/
# xfce4 xfce4-goodies
cmd_minikube='sudo sed -i.bak -e "s#PasswordAuthentication no#PasswordAuthentication yes#g" /etc/ssh/sshd_config && sudo service sshd restart && grp=$(id -ng) && i=$(hostname -I | cut -d "." -f 4 | cut -d " " -f 1) && sudo useradd -m -g ${grp} -s /bin/bash -p $(openssl passwd -1 passw${i}rd) resops${i} && sudo apt-get update && sudo apt-get install -y socat nfs-common docker.io firefox vnc4server && sudo apt-get -y autoremove && sudo snap install kubectl --classic && curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && sudo cp minikube /usr/local/bin && rm minikube && sudo adduser resops${i} sudo && sudo usermod -G docker -a ubuntu && sudo usermod -G docker -a resops${i} && git config --global credential.helper "cache --timeout=3600" && git config --global user.name "resops${i}" && git config --global user.email "resops${i}@localhost" && sudo mv /home/ubuntu/.gitconfig /home/resops${i}'
cmd_minikube='sudo sed -i.bak -e "s#PasswordAuthentication no#PasswordAuthentication yes#g" /etc/ssh/sshd_config && sudo service sshd restart && grp=$(id -ng) && i=$(hostname -I | cut -d "." -f 4 | cut -d " " -f 1) && sudo useradd -m -g ${grp} -s /bin/bash -p $(openssl passwd -1 passw${i}rd) resops${i} && sudo apt-get update && sudo apt-get install -y socat nfs-common docker.io firefox vnc4server conntrack && sudo apt-get -y autoremove && sudo snap install kubectl --classic && curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && sudo cp minikube /usr/local/bin && rm minikube && sudo adduser resops${i} sudo && sudo usermod -G docker -a ubuntu && sudo usermod -G docker -a resops${i} && git config --global credential.helper "cache --timeout=3600" && git config --global user.name "resops${i}" && git config --global user.email "resops${i}@localhost" && sudo mv /home/ubuntu/.gitconfig /home/resops${i}'
# Set environment to connect to certain project with the openrc script generated by Horizon
source ~/Downloads/ResOps-openrc-V2.sh
......@@ -23,7 +23,7 @@ for worker in "${workers[@]}"; do
ORG1_IFS=${IFS}; IFS=', ' tokens=( ${worker} ) && worker=${tokens[${#tokens[@]}-2]}; IFS=${ORG1_IFS} && echo "Tokens: ${tokens[*]}." && echo "Worker: ${worker}." && echo
fip=${tokens[${#tokens[@]}-1]}
i=$(echo ${worker} | cut -d "." -f 4 | cut -d " " -f 1)
i=$(echo "${worker}" | cut -d "." -f 4 | cut -d " " -f 1)
echo "ssh resops${i}@${fip},passw${i}rd" | tee >> ~/IdeaProjects/tsi/tsi-ccdoc/tsi-cc/ResOps/scripts/kubespray/list.txt
done
......@@ -32,8 +32,8 @@ for worker in "${workers[@]}"; do
ORG1_IFS=${IFS}; IFS=', ' tokens=( ${worker} ) && worker=${tokens[${#tokens[@]}-2]}; IFS=${ORG1_IFS} && echo "Tokens: ${tokens[*]}." && echo "Worker: ${worker}." && echo
ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=No -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ubuntu@${bastion}" ubuntu@${worker} "echo 'warm up bastion server.'"
ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=No -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ubuntu@${bastion}" ubuntu@${worker} ${cmd_minikube}&
ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=No -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ubuntu@${bastion}" "ubuntu@${worker}" "echo 'warm up bastion server.'"
ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=No -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ubuntu@${bastion}" "ubuntu@${worker}" "${cmd_minikube}" &
done
wait
......@@ -44,12 +44,12 @@ for worker in "${workers[@]}"; do
ORG1_IFS=${IFS}; IFS=', ' tokens=( ${worker} ) && worker=${tokens[${#tokens[@]}-2]}; IFS=${ORG1_IFS} && echo "Tokens: ${tokens[*]}." && echo "Worker: ${worker}." && echo
ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=No -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ubuntu@${bastion}" ubuntu@${worker} "echo 'warm up bastion server.'"
ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=No -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ubuntu@${bastion}" ubuntu@${worker} 'i=$(hostname -I | cut -d "." -f 4 | cut -d " " -f 1) && vnc4server :1 -geometry 1024x768 -depth 24 <<EOF
ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=No -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ubuntu@${bastion}" "ubuntu@${worker}" "echo 'warm up bastion server.'"
ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=No -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ubuntu@${bastion}" "ubuntu@${worker}" 'i=$(hostname -I | cut -d "." -f 4 | cut -d " " -f 1) && vnc4server :1 -geometry 1024x768 -depth 24 <<EOF
passw${i}rd
passw${i}rd
EOF'
ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=No -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ubuntu@${bastion}" ubuntu@${worker} ${cmd_vnc}&
ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=No -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ubuntu@${bastion}" "ubuntu@${worker}" "${cmd_vnc}" &
done
wait && date
......@@ -61,10 +61,10 @@ for worker in "${workers[@]}"; do
ORG1_IFS=${IFS}; IFS=', ' tokens=( ${worker} ) && worker=${tokens[${#tokens[@]}-2]}; IFS=${ORG1_IFS} && echo "Tokens: ${tokens[*]}." && echo "Worker: ${worker}." && echo
echo "Verifying ${worker}..."
ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=No -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ubuntu@${bastion}" -t ubuntu@${worker} "${cmd_check}"
ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=No -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ubuntu@${bastion}" -t "ubuntu@${worker}" "${cmd_check}"
done
# && sudo minikube start --vm-driver=none && sudo kubectl get node
i=$(echo ${worker} | cut -d "." -f 4 | cut -d " " -f 1)
ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=No -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ubuntu@${bastion}" -t resops${i}@${worker} "${cmd_check}"
i=$(echo "${worker}" | cut -d "." -f 4 | cut -d " " -f 1)
ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=No -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ubuntu@${bastion}" -t "resops${i}@${worker}" "${cmd_check}"
IFS=${ORG_IFS}
......@@ -57,7 +57,7 @@ number_of_k8s_masters_no_floating_ip_no_etcd = 0
# flavor_k8s_master = "bba3e111-9247-40b7-9e55-9c5a1fa8bcfe"
# worker nodes with floating IPs
number_of_k8s_nodes = 35
number_of_k8s_nodes = 50
# worker nodes without floating IPs
number_of_k8s_nodes_no_floating_ip = 0
......
......@@ -17,14 +17,15 @@ ORG_IFS=${IFS}; IFS=$'\n' workers=( $(openstack server list -f csv | grep 'k8s-n
# Install and configure VMs
for worker in "${workers[@]}"; do
# shellcheck disable=SC2206
ORG1_IFS=${IFS}; IFS=', ' tokens=( ${worker} ) && worker=${tokens[${#tokens[@]}-2]}; IFS=${ORG1_IFS} && echo "Tokens: ${tokens[*]}." && echo "Worker: ${worker}." && echo
ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=No -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ubuntu@${bastion}" ubuntu@${worker} "echo 'warm up bastion server.'"
ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=No -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ubuntu@${bastion}" ubuntu@${worker} ${cmd_start}&
ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=No -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ubuntu@${bastion}" "ubuntu@${worker}" "echo 'warm up bastion server.'"
ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=No -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ubuntu@${bastion}" "ubuntu@${worker}" "${cmd_start}" &
done
wait && date
cmd_check='i=$(hostname -I | cut -d "." -f 4 | cut -d " " -f 1) && id resops${i} && docker images list && sudo minikube start --vm-driver=none && sudo kubectl get node && sudo minikube stop'
i=$(echo ${worker} | cut -d "." -f 4 | cut -d " " -f 1)
ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=No -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ubuntu@${bastion}" -t resops${i}@${worker} "${cmd_check}"
i=$(echo "${worker}" | cut -d "." -f 4 | cut -d " " -f 1)
ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=No -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ubuntu@${bastion}" -t resops"${i}"@"${worker}" "${cmd_check}"
IFS=${ORG_IFS}
......@@ -4,13 +4,13 @@
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
# Terraform is highly sensitive to CWD. Always change to a fixed directory.
cd ${DIR}/kubespray/contrib/terraform
cd "${DIR}/kubespray/contrib/terraform" || exit
# Set environment to connect to certain project with the openrc script generated by Horizon
source ~/Downloads/ResOps-openrc.sh
# Initialize Terraform with a Terraform working directory, which is needed only when it is not CWD
terraform init ${DIR}/kubespray/contrib/terraform/openstack
terraform init "${DIR}/kubespray/contrib/terraform/openstack"
# Destroy a deployment. It is a good idea not to use auto-approval.
terraform destroy -lock=false
\ No newline at end of file
......@@ -4,21 +4,19 @@
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
# Terraform is highly sensitive to CWD. Always change to a fixed directory.
cd ${DIR}
git clone https://github.com/kubernetes-sigs/kubespray/
cd kubespray/contrib/terraform
cd "${DIR}" && git clone https://github.com/kubernetes-sigs/kubespray/
cd kubespray/contrib/terraform || exit
# Set environment to connect to certain project with the openrc script generated by Horizon
source ~/Downloads/ResOps-openrc.sh
# Initialize Terraform with a Terraform working directory, which is needed only when it is not CWD
terraform init ${DIR}/kubespray/contrib/terraform/openstack
terraform init "${DIR}/kubespray/contrib/terraform/openstack"
# It is a good idea to create a plan first. The input values can be provided with var-files.
#
terraform plan -lock=false -var-file ${DIR}/resops.tf -out ${DIR}/kubespray/out.plan ${DIR}/kubespray/contrib/terraform/openstack
terraform plan -lock=false -var-file "${DIR}/resops.tf" -out "${DIR}/kubespray/out.plan" "${DIR}/kubespray/contrib/terraform/openstack"
# Apply the changes planned explicitly. Terraform does not guarantee that plan and apply generate the same plan. Thus, it is a good idea to provide an explicit plan and turn on auto-approval.
#
terraform apply -lock=false -auto-approve ${DIR}/kubespray/out.plan
terraform apply -lock=false -auto-approve "${DIR}/kubespray/out.plan"
......@@ -11,4 +11,7 @@ kubectl create secret generic minio --from-literal=accesskey=${accesskey} --from
kubectl get secret
kubectl apply -f source/static/scripts/minio/minio.yml
kubectl get svc
\ No newline at end of file
kubectl get svc
# https://helm.min.io/
# helm repo add minio https://helm.min.io
# helm install minio --set replicas=1,service.type=NodePort,service.nodePort=32001,persistence.existingClaim=minio-pvc-nginx,existingSecret=minio,mountPath="/usr/share/nginx/" minio/minio
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
# This name uniquely identifies the Deployment
name: minio-nginx
spec:
replicas: 1
selector:
matchLabels:
app: minio
strategy:
type: Recreate
template:
......