Commit 05857130 authored by Dave Johnson, committed by GitHub

Merge pull request #161 from thisdavejohnson/master

Moved AWS-specific bits to a sub-directory and added provisioning playbooks
parents 2a5c30b3 bfb399a2
---
- name: Provision instances
  hosts: localhost
  connection: local
  gather_facts: False

  # Load AWS variables from this group_vars file
  vars_files:
    - group_vars/all

  tasks:
    - name: Launch instances
      ec2:
        access_key: "{{ ec2_access_key }}"
        secret_key: "{{ ec2_secret_key }}"
        keypair: "{{ ec2_keypair }}"
        group: "{{ ec2_security_group }}"
        type: "{{ ec2_instance_type }}"
        image: "{{ ec2_image }}"
        region: "{{ ec2_region }}"
        instance_tags: "{'ansible_group':'jboss', 'type':'{{ ec2_instance_type }}', 'group':'{{ ec2_security_group }}', 'Name':'demo_{{ tower_user_name }}'}"
        count: "{{ ec2_instance_count }}"
        wait: true
      register: ec2

    - name: Wait for SSH to come up
      wait_for:
        host: "{{ item.public_dns_name }}"
        port: 22
        delay: 60
        timeout: 320
        state: started
      with_items: "{{ ec2.instances }}"
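A minimal invocation sketch for this playbook (assuming it is saved as demo-aws-launch.yml, as elsewhere in this commit, and that the blank AWS keys in group_vars/all are supplied at runtime rather than committed):

    ansible-playbook demo-aws-launch.yml -e "ec2_access_key=YOUR_KEY ec2_secret_key=YOUR_SECRET"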
http_port: 8080
https_port: 8443
# AWS-specific variables
ec2_access_key:
ec2_secret_key:
ec2_region: us-east-1
ec2_zone:
ec2_image: ami-6c1e8f04
ec2_instance_type: m1.small
ec2_keypair: djohnson
ec2_security_group: default
ec2_instance_count: 3
ec2_tag: demo
ec2_tag_name_prefix: dj
ec2_hosts: all
wait_for_port: 22
# This user name will be set by Tower when run through Tower
tower_user_name: admin
capability to dynamically add and remove web server nodes from the deployment.
It also includes examples to do a rolling update of a stack without affecting
the service.
(To use this demonstration with Amazon Web Services, please use the "aws" sub-directory.)
You can also optionally configure a Nagios monitoring node.
### Initial Site Setup
Copyright (C) 2013 AnsibleWorks, Inc.
This work is licensed under the Creative Commons Attribution 3.0 Unported License.
To view a copy of this license, visit http://creativecommons.org/licenses/by/3.0/deed.en_US.
LAMP Stack + HAProxy: Example Playbooks for Amazon Web Services
-----------------------------------------------------------------------------
- Requires Ansible 1.2
- Expects CentOS/RHEL 6 hosts
This example is an extension of the simple LAMP deployment. Here we'll install
and configure a web server with an HAProxy load balancer in front, and deploy
an application to the web servers. This set of playbooks also has the
capability to dynamically add and remove web server nodes from the deployment.
It also includes examples to do a rolling update of a stack without affecting
the service.
You can also optionally configure a Nagios monitoring node.
### Initial Site Setup
First, we provision the hosts necessary for this demonstration using the included playbook, "demo-aws-launch.yml". This provisions the following instances, with the group structure specified below. The hosts are tagged via AWS EC2 tagging, and the Ansible inventory sync script (or Tower) creates the appropriate groups from these tags.
    [tag_ansible_group_webservers]
    webserver1
    webserver2

    [tag_ansible_group_dbservers]
    dbserver

    [tag_ansible_group_lbservers]
    lbserver

    [tag_ansible_group_monitoring]
    nagios
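To provision the instances and then refresh the dynamic inventory from the command line, one workflow (a sketch; it assumes the stock ec2.py inventory script, whose --refresh-cache flag rebuilds its cached view of EC2) is:

    ansible-playbook demo-aws-launch.yml
    ./ec2.py --refresh-cache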
After provisioning, execute the following command to deploy the site:

    ansible-playbook -i ec2.py site.yml
The deployment can be verified by accessing the IP address of your load
balancer host in a web browser: http://<ip-of-lb>:8888. Reloading the page
should land you on a different web server each time.
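The same check can be done from the command line (a sketch; <ip-of-lb> is a placeholder for your load balancer's public address):

    curl http://<ip-of-lb>:8888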
The Nagios web interface can be reached at http://<ip-of-nagios>/nagios/
The default username and password are "nagiosadmin" / "nagiosadmin".
### Removing and Adding a Node
Removal and addition of nodes to the cluster is as simple as creating new
instances, syncing the Ansible inventory, and re-running:

    ansible-playbook -i ec2.py site.yml
### Rolling Update
Rolling updates are the preferred way to update the web server software or
deployed application, since the load balancer can be dynamically configured
to take the hosts to be updated out of the pool. This will keep the service
running on other servers so that the users are not interrupted.
In this example the hosts are updated in serial fashion, meaning that only
one server is updated at a time. If you have many web server hosts, this
behaviour can be changed by setting the 'serial' keyword in the
webservers.yml file, as sketched below.
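For instance, a play header along these lines updates two hosts at a time (a sketch, not the literal contents of webservers.yml; the role names are illustrative):

    - hosts: tag_ansible_group_webservers
      serial: 2
      roles:
        - common
        - web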
Once the code has been updated in your application's source repository
(which can be defined in the group_vars/all file), execute the following
command:

    ansible-playbook -i ec2.py rolling_update.yml
You can optionally pass -e webapp_version=xxx to the rolling_update
playbook to deploy a specific version of the example webapp.
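For example, to deploy a hypothetical tagged build (the version string is illustrative):

    ansible-playbook -i ec2.py rolling_update.yml -e webapp_version=v1.2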
---
# Provision instances in AWS specific to the LAMP + HAProxy demo
- name: Provision instances in AWS
  hosts: localhost
  connection: local
  gather_facts: False

  # Load AWS variables from this group_vars file
  vars_files:
    - group_vars/all

  tasks:
    - name: Launch webserver instances
      ec2:
        access_key: "{{ ec2_access_key }}"
        secret_key: "{{ ec2_secret_key }}"
        keypair: "{{ ec2_keypair }}"
        group: "{{ ec2_security_group }}"
        type: "{{ ec2_instance_type }}"
        image: "{{ ec2_image }}"
        region: "{{ ec2_region }}"
        instance_tags: "{'ansible_group':'webservers', 'type':'{{ ec2_instance_type }}', 'group':'{{ ec2_security_group }}', 'Name':'demo_{{ tower_user_name }}'}"
        count: "{{ ec2_instance_count }}"
        wait: true
      # Register each launch separately; reusing one variable would overwrite
      # earlier results and leave only the last batch to wait on below.
      register: ec2_web

    - name: Launch database instance
      ec2:
        access_key: "{{ ec2_access_key }}"
        secret_key: "{{ ec2_secret_key }}"
        keypair: "{{ ec2_keypair }}"
        group: "{{ ec2_security_group }}"
        type: "{{ ec2_instance_type }}"
        image: "{{ ec2_image }}"
        region: "{{ ec2_region }}"
        instance_tags: "{'ansible_group':'dbservers', 'type':'{{ ec2_instance_type }}', 'group':'{{ ec2_security_group }}', 'Name':'demo_{{ tower_user_name }}'}"
        count: "1"
        wait: true
      register: ec2_db

    - name: Launch load balancing instance
      ec2:
        access_key: "{{ ec2_access_key }}"
        secret_key: "{{ ec2_secret_key }}"
        keypair: "{{ ec2_keypair }}"
        group: "{{ ec2_security_group }}"
        type: "{{ ec2_instance_type }}"
        image: "{{ ec2_image }}"
        region: "{{ ec2_region }}"
        instance_tags: "{'ansible_group':'lbservers', 'type':'{{ ec2_instance_type }}', 'group':'{{ ec2_security_group }}', 'Name':'demo_{{ tower_user_name }}'}"
        count: "1"
        wait: true
      register: ec2_lb

    - name: Launch monitoring instance
      ec2:
        access_key: "{{ ec2_access_key }}"
        secret_key: "{{ ec2_secret_key }}"
        keypair: "{{ ec2_keypair }}"
        group: "{{ ec2_security_group }}"
        type: "{{ ec2_instance_type }}"
        image: "{{ ec2_image }}"
        region: "{{ ec2_region }}"
        instance_tags: "{'ansible_group':'monitoring', 'type':'{{ ec2_instance_type }}', 'group':'{{ ec2_security_group }}', 'Name':'demo_{{ tower_user_name }}'}"
        count: "1"
        wait: true
      register: ec2_monitoring

    - name: Wait for SSH to come up
      wait_for:
        host: "{{ item.public_dns_name }}"
        port: 22
        delay: 60
        timeout: 320
        state: started
      with_items: "{{ ec2_web.instances + ec2_db.instances + ec2_lb.instances + ec2_monitoring.instances }}"
---
# Variables here are applicable to all host groups
httpd_port: 80
ntpserver: 192.168.1.2
# AWS-specific variables
ec2_access_key:
ec2_secret_key:
ec2_region: us-east-1
ec2_zone:
ec2_image: ami-bc8131d4
ec2_instance_type: m1.small
ec2_keypair: djohnson
ec2_security_group: default
ec2_instance_count: 3
ec2_tag: demo
ec2_tag_name_prefix: dj
ec2_hosts: all
wait_for_port: 22
# This user name will be set by Tower when run through Tower
tower_user_name: admin
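Any of these defaults can be overridden per run without editing the file; for example (a sketch, with illustrative values):

    ansible-playbook demo-aws-launch.yml -e "ec2_instance_count=5 ec2_keypair=mykey"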
---
# This role installs httpd
- name: Install http
  yum: name={{ item }} state=present
  with_items:
    - httpd

- name: http service state
  service: name=httpd state=started enabled=yes
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.5 (GNU/Linux)
mQINBEvSKUIBEADLGnUj24ZVKW7liFN/JA5CgtzlNnKs7sBg7fVbNWryiE3URbn1
JXvrdwHtkKyY96/ifZ1Ld3lE2gOF61bGZ2CWwJNee76Sp9Z+isP8RQXbG5jwj/4B
M9HK7phktqFVJ8VbY2jfTjcfxRvGM8YBwXF8hx0CDZURAjvf1xRSQJ7iAo58qcHn
XtxOAvQmAbR9z6Q/h/D+Y/PhoIJp1OV4VNHCbCs9M7HUVBpgC53PDcTUQuwcgeY6
pQgo9eT1eLNSZVrJ5Bctivl1UcD6P6CIGkkeT2gNhqindRPngUXGXW7Qzoefe+fV
QqJSm7Tq2q9oqVZ46J964waCRItRySpuW5dxZO34WM6wsw2BP2MlACbH4l3luqtp
Xo3Bvfnk+HAFH3HcMuwdaulxv7zYKXCfNoSfgrpEfo2Ex4Im/I3WdtwME/Gbnwdq
3VJzgAxLVFhczDHwNkjmIdPAlNJ9/ixRjip4dgZtW8VcBCrNoL+LhDrIfjvnLdRu
vBHy9P3sCF7FZycaHlMWP6RiLtHnEMGcbZ8QpQHi2dReU1wyr9QgguGU+jqSXYar
1yEcsdRGasppNIZ8+Qawbm/a4doT10TEtPArhSoHlwbvqTDYjtfV92lC/2iwgO6g
YgG9XrO4V8dV39Ffm7oLFfvTbg5mv4Q/E6AWo/gkjmtxkculbyAvjFtYAQARAQAB
tCFFUEVMICg2KSA8ZXBlbEBmZWRvcmFwcm9qZWN0Lm9yZz6JAjYEEwECACAFAkvS
KUICGw8GCwkIBwMCBBUCCAMEFgIDAQIeAQIXgAAKCRA7Sd8qBgi4lR/GD/wLGPv9
qO39eyb9NlrwfKdUEo1tHxKdrhNz+XYrO4yVDTBZRPSuvL2yaoeSIhQOKhNPfEgT
9mdsbsgcfmoHxmGVcn+lbheWsSvcgrXuz0gLt8TGGKGGROAoLXpuUsb1HNtKEOwP
Q4z1uQ2nOz5hLRyDOV0I2LwYV8BjGIjBKUMFEUxFTsL7XOZkrAg/WbTH2PW3hrfS
WtcRA7EYonI3B80d39ffws7SmyKbS5PmZjqOPuTvV2F0tMhKIhncBwoojWZPExft
HpKhzKVh8fdDO/3P1y1Fk3Cin8UbCO9MWMFNR27fVzCANlEPljsHA+3Ez4F7uboF
p0OOEov4Yyi4BEbgqZnthTG4ub9nyiupIZ3ckPHr3nVcDUGcL6lQD/nkmNVIeLYP
x1uHPOSlWfuojAYgzRH6LL7Idg4FHHBA0to7FW8dQXFIOyNiJFAOT2j8P5+tVdq8
wB0PDSH8yRpn4HdJ9RYquau4OkjluxOWf0uRaS//SUcCZh+1/KBEOmcvBHYRZA5J
l/nakCgxGb2paQOzqqpOcHKvlyLuzO5uybMXaipLExTGJXBlXrbbASfXa/yGYSAG
iVrGz9CE6676dMlm8F+s3XXE13QZrXmjloc6jwOljnfAkjTGXjiB7OULESed96MR
XtfLk0W5Ab9pd7tKDR6QHI7rgHXfCopRnZ2VVQ==
=V/6I
-----END PGP PUBLIC KEY BLOCK-----
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
[epel-debuginfo]
name=Extra Packages for Enterprise Linux 6 - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch/debug
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1
[epel-source]
name=Extra Packages for Enterprise Linux 6 - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/6/SRPMS
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1
---
# Handlers for common notifications
- name: restart ntp
  service: name=ntpd state=restarted

- name: restart iptables
  service: name=iptables state=restarted
---
# This role contains common plays that will run on all nodes.
- name: Install python bindings for SE Linux
  yum: name={{ item }} state=present
  with_items:
    - libselinux-python
    - libsemanage-python

- name: Create the repository for EPEL
  copy: src=epel.repo dest=/etc/yum.repos.d/epel.repo

- name: Create the GPG key for EPEL
  copy: src=RPM-GPG-KEY-EPEL-6 dest=/etc/pki/rpm-gpg

- name: install some useful nagios plugins
  yum: name={{ item }} state=present
  with_items:
    - nagios-nrpe
    - nagios-plugins-swap
    - nagios-plugins-users
    - nagios-plugins-procs
    - nagios-plugins-load
    - nagios-plugins-disk

- name: Install ntp
  yum: name=ntp state=present
  tags: ntp

- name: Configure ntp file
  template: src=ntp.conf.j2 dest=/etc/ntp.conf
  tags: ntp
  notify: restart ntp

- name: Start the ntp service
  service: name=ntpd state=started enabled=yes
  tags: ntp

# work around RHEL 7, for now
- name: insert iptables template
  template: src=iptables.j2 dest=/etc/sysconfig/iptables
  when: ansible_distribution_major_version != '7'
  notify: restart iptables

# Record the SELinux mode (Enforcing/Permissive/Disabled); later roles,
# such as the db role, condition their SELinux tweaks on this result.
- name: test to see if selinux is running
  command: getenforce
  register: sestatus
  changed_when: false
# {{ ansible_managed }}
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
{% if (inventory_hostname in groups.tag_ansible_group_webservers) or (inventory_hostname in groups.tag_ansible_group_monitoring) %}
-A INPUT -p tcp --dport 80 -j ACCEPT
{% endif %}
{% if (inventory_hostname in groups.tag_ansible_group_dbservers) %}
-A INPUT -p tcp --dport 3306 -j ACCEPT
{% endif %}
{% if (inventory_hostname in groups.tag_ansible_group_lbservers) %}
-A INPUT -p tcp --dport {{ listenport }} -j ACCEPT
{% endif %}
{% for host in groups.tag_ansible_group_monitoring %}
-A INPUT -p tcp -s {{ hostvars[host].ansible_default_ipv4.address }} --dport 5666 -j ACCEPT
{% endfor %}
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
driftfile /var/lib/ntp/drift
restrict 127.0.0.1
restrict -6 ::1
server {{ ntpserver }}
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
---
# Handler to handle DB tier notifications
- name: restart mysql
  service: name=mysqld state=restarted
---
# This role installs MySQL, creates a db user, and grants permissions.
- name: Install Mysql package
  yum: name={{ item }} state=present
  with_items:
    - mysql-server
    - MySQL-python

# sestatus is registered by the common role from `getenforce`, which prints
# Enforcing, Permissive, or Disabled; only set the boolean when SELinux is on.
- name: Configure SELinux to start mysql on any port
  seboolean: name=mysql_connect_any state=true persistent=yes
  when: sestatus.stdout != 'Disabled'

- name: Create Mysql configuration file
  template: src=my.cnf.j2 dest=/etc/my.cnf
  notify:
    - restart mysql

- name: Start Mysql Service
  service: name=mysqld state=started enabled=yes

- name: Create Application Database
  mysql_db: name={{ dbname }} state=present

- name: Create Application DB User
  mysql_user: name={{ dbuser }} password={{ upassword }} priv=*.*:ALL host='%' state=present
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
port={{ mysql_port }}
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid