Commit 40e2dd6f authored by Dave Johnson, committed by GitHub

Merge pull request #159 from ansible/sa-hackathon

Refactoring ansible examples
parents 3756b36d 05857130
......@@ -6,7 +6,7 @@
These playbooks deploy a very basic implementation of JBoss Application Server,
version 7. To use them, first edit the "hosts" inventory file to contain the
hostnames of the machines on which you want JBoss deployed, and edit the
-group_vars/jboss-servers file to set any JBoss configuration parameters you need.
+group_vars/all file to set any JBoss configuration parameters you need.
Then run the playbook, like this:
......
---
- name: Provision instances
hosts: localhost
connection: local
gather_facts: False
# load AWS variables from this group vars file
vars_files:
- group_vars/all
tasks:
- name: Launch instances
ec2:
access_key: "{{ ec2_access_key }}"
secret_key: "{{ ec2_secret_key }}"
keypair: "{{ ec2_keypair }}"
group: "{{ ec2_security_group }}"
type: "{{ ec2_instance_type }}"
image: "{{ ec2_image }}"
region: "{{ ec2_region }}"
instance_tags: "{'ansible_group':'jboss', 'type':'{{ ec2_instance_type }}', 'group':'{{ ec2_security_group }}', 'Name':'demo_{{ tower_user_name }}'}"
count: "{{ ec2_instance_count }}"
wait: true
register: ec2
- name: Wait for SSH to come up
wait_for:
host: "{{ item.public_dns_name }}"
port: 22
delay: 60
timeout: 320
state: started
with_items: "{{ ec2.instances }}"
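The wait_for task above polls port 22 on each new instance until SSH answers. A minimal Python sketch of the same polling loop (the function name and poll interval are illustrative, not part of the playbook) looks like this:

```python
import socket
import time

def wait_for_port(host, port, timeout=320, poll_interval=0.2):
    """Poll a TCP port until it accepts connections or the timeout expires.

    Rough equivalent of the wait_for task above (minus the initial delay);
    returns True once the port is open, False if the timeout is hit.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=poll_interval):
                return True  # connection succeeded, port is up
        except OSError:
            time.sleep(poll_interval)  # refused or unreachable; retry
    return False
```

The Ansible module additionally honors `delay: 60` by sleeping before the first attempt, which this sketch omits.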
---
# This playbook deploys two simple applications to JBoss server.
- hosts: all
roles:
# Optionally, (re)deploy JBoss here
# - jboss-standalone
- java-app
# Here are variables related to the standalone JBoss installation
http_port: 8080
https_port: 8443
# AWS specific variables
ec2_access_key:
ec2_secret_key:
ec2_region: us-east-1
ec2_zone:
ec2_image: ami-6c1e8f04
ec2_instance_type: m1.small
ec2_keypair: djohnson
ec2_security_group: default
ec2_instance_count: 3
ec2_tag: demo
ec2_tag_name_prefix: dj
ec2_hosts: all
wait_for_port: 22
# This user name will be set by Tower, when run through Tower
tower_user_name: admin
# Here are variables related to the standalone JBoss installation
http_port: 8080
https_port: 8443
[jboss-servers]
webserver1
appserver1
---
- name: Copy application WAR file to host
copy: src=jboss-helloworld.war dest=/tmp
- name: Deploy HelloWorld to JBoss
jboss: deploy_path=/usr/share/jboss-as/standalone/deployments/ src=/tmp/jboss-helloworld.war deployment=helloworld.war state=present
- name: Copy application WAR file to host
copy: src=ticket-monster.war dest=/tmp
- name: Deploy Ticket Monster to JBoss
jboss: deploy_path=/usr/share/jboss-as/standalone/deployments/ src=/tmp/ticket-monster.war deployment=ticket-monster.war state=present
\ No newline at end of file
......@@ -4,12 +4,14 @@
with_items:
- unzip
- java-1.7.0-openjdk
- libselinux-python
- libsemanage-python
- name: Download JBoss from jboss.org
get_url: url=http://download.jboss.org/jbossas/7.1/jboss-as-7.1.1.Final/jboss-as-7.1.1.Final.zip dest=/opt/jboss-as-7.1.1.Final.zip
- name: Extract archive
-  command: chdir=/usr/share /usr/bin/unzip -q /opt/jboss-as-7.1.1.Final.zip creates=/usr/share/jboss-as
+  unarchive: dest=/usr/share src=/opt/jboss-as-7.1.1.Final.zip creates=/usr/share/jboss-as copy=no
# Rename the dir to avoid encoding the version in the init script
- name: Rename install directory
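Both the old `command` task and the new `unarchive` task rely on `creates=` for idempotency: the extraction is skipped whenever the target path already exists. A small Python sketch of that check (function and marker names are illustrative):

```python
import os
import zipfile

def extract_archive(src, dest, creates):
    """Unpack a zip only if the `creates` path does not already exist.

    Mirrors the creates= idempotency of the unarchive task above;
    returns True when extraction actually ran ("changed").
    """
    if os.path.exists(creates):
        return False  # already extracted on a previous run; do nothing
    with zipfile.ZipFile(src) as zf:
        zf.extractall(dest)
    return True
```

Re-running the play is therefore safe: the second run reports no change instead of unpacking over the live installation.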
......@@ -36,4 +38,21 @@
- name: deploy iptables rules
template: src=iptables-save dest=/etc/sysconfig/iptables
when: ansible_distribution_major_version != "7"
notify: restart iptables
- name: Ensure that firewalld is installed
yum: name=firewalld state=present
when: ansible_distribution_major_version == "7"
- name: Ensure that firewalld is started
service: name=firewalld state=started
when: ansible_distribution_major_version == "7"
- name: deploy firewalld rules
firewalld: immediate=yes port={{ item }} state=enabled permanent=yes
when: ansible_distribution_major_version == "7"
with_items:
- "{{ http_port }}/tcp"
- "{{ https_port }}/tcp"
---
# This playbook deploys a simple standalone JBoss server.
- hosts: jboss-servers
remote_user: root
- hosts: all
roles:
- jboss-standalone
......@@ -11,6 +11,8 @@ capability to dynamically add and remove web server nodes from the deployment.
It also includes examples to do a rolling update of a stack without affecting
the service.
(To use this demonstration with Amazon Web Services, please use the "aws" sub-directory.)
You can also optionally configure a Nagios monitoring node.
### Initial Site Setup
......
Copyright (C) 2013 AnsibleWorks, Inc.
This work is licensed under the Creative Commons Attribution 3.0 Unported License.
To view a copy of this license, visit http://creativecommons.org/licenses/by/3.0/deed.en_US.
LAMP Stack + HAProxy: Example Playbooks for Amazon Web Services
-----------------------------------------------------------------------------
- Requires Ansible 1.2
- Expects CentOS/RHEL 6 hosts
This example is an extension of the simple LAMP deployment. Here we'll install
and configure a web server with an HAProxy load balancer in front, and deploy
an application to the web servers. This set of playbooks also has the
capability to dynamically add and remove web server nodes from the deployment.
It also includes examples to do a rolling update of a stack without affecting
the service.
You can also optionally configure a Nagios monitoring node.
### Initial Site Setup
First, we provision the hosts necessary for this demonstration using the included playbook, "demo-aws-launch.yml". This will provision the following instances, with the group structure specified below. The hosts are tagged via AWS EC2 tagging, and the Ansible inventory sync script (or Tower) will create the appropriate groups from these tags.
[tag_ansible_group_webservers]
webserver1
webserver2
[tag_ansible_group_dbservers]
dbserver
[tag_ansible_group_lbservers]
lbserver
[tag_ansible_group_monitoring]
nagios
After which we execute the following command to deploy the site:
ansible-playbook -i ec2.py site.yml
The deployment can be verified by accessing the IP address of your load
balancer host in a web browser: http://<ip-of-lb>:8888. Reloading the page
should show responses from different web servers.
The Nagios web interface can be reached at http://<ip-of-nagios>/nagios/
The default username and password are "nagiosadmin" / "nagiosadmin".
### Removing and Adding a Node
Removal and addition of nodes to the cluster is as simple as creating new instances, syncing the
Ansible inventory and re-running:
ansible-playbook -i ec2.py site.yml
### Rolling Update
Rolling updates are the preferred way to update the web server software or
deployed application, since the load balancer can be dynamically configured
to take the hosts to be updated out of the pool. This will keep the service
running on other servers so that the users are not interrupted.
In this example the hosts are updated in serial fashion, which means that
only one server will be updated at one time. If you have a lot of web server
hosts, this behaviour can be changed by setting the 'serial' keyword in the
webservers.yml file.
Once the code has been updated in your application's source repository
(the repository can be set in the group_vars/all file), execute the
following command:
ansible-playbook -i ec2.py rolling_update.yml
You can optionally pass: -e webapp_version=xxx to the rolling_update
playbook to specify a specific version of the example webapp to deploy.
---
# Provision instances in AWS specific to the LAMP HA Proxy demo
- name: Provision instances in AWS
hosts: localhost
connection: local
gather_facts: False
# load AWS variables from this group vars file
vars_files:
- group_vars/all
tasks:
- name: Launch webserver instances
ec2:
access_key: "{{ ec2_access_key }}"
secret_key: "{{ ec2_secret_key }}"
keypair: "{{ ec2_keypair }}"
group: "{{ ec2_security_group }}"
type: "{{ ec2_instance_type }}"
image: "{{ ec2_image }}"
region: "{{ ec2_region }}"
instance_tags: "{'ansible_group':'webservers', 'type':'{{ ec2_instance_type }}', 'group':'{{ ec2_security_group }}', 'Name':'demo_{{ tower_user_name }}'}"
count: "{{ ec2_instance_count }}"
wait: true
register: ec2
- name: Launch database instance
ec2:
access_key: "{{ ec2_access_key }}"
secret_key: "{{ ec2_secret_key }}"
keypair: "{{ ec2_keypair }}"
group: "{{ ec2_security_group }}"
type: "{{ ec2_instance_type }}"
image: "{{ ec2_image }}"
region: "{{ ec2_region }}"
instance_tags: "{'ansible_group':'dbservers', 'type':'{{ ec2_instance_type }}', 'group':'{{ ec2_security_group }}', 'Name':'demo_{{ tower_user_name }}'}"
count: "1"
wait: true
register: ec2
- name: Launch load balancing instance
ec2:
access_key: "{{ ec2_access_key }}"
secret_key: "{{ ec2_secret_key }}"
keypair: "{{ ec2_keypair }}"
group: "{{ ec2_security_group }}"
type: "{{ ec2_instance_type }}"
image: "{{ ec2_image }}"
region: "{{ ec2_region }}"
instance_tags: "{'ansible_group':'lbservers', 'type':'{{ ec2_instance_type }}', 'group':'{{ ec2_security_group }}', 'Name':'demo_{{ tower_user_name }}'}"
count: "1"
wait: true
register: ec2
- name: Launch monitoring instance
ec2:
access_key: "{{ ec2_access_key }}"
secret_key: "{{ ec2_secret_key }}"
keypair: "{{ ec2_keypair }}"
group: "{{ ec2_security_group }}"
type: "{{ ec2_instance_type }}"
image: "{{ ec2_image }}"
region: "{{ ec2_region }}"
instance_tags: "{'ansible_group':'monitoring', 'type':'{{ ec2_instance_type }}', 'group':'{{ ec2_security_group }}', 'Name':'demo_{{ tower_user_name }}'}"
count: "1"
wait: true
register: ec2
- name: Wait for SSH to come up
wait_for:
host: "{{ item.public_dns_name }}"
port: 22
delay: 60
timeout: 320
state: started
with_items: "{{ ec2.instances }}"
---
# Variables here are applicable to all host groups
httpd_port: 80
ntpserver: 192.168.1.2
# AWS specific variables
ec2_access_key:
ec2_secret_key:
ec2_region: us-east-1
ec2_zone:
ec2_image: ami-bc8131d4
ec2_instance_type: m1.small
ec2_keypair: djohnson
ec2_security_group: default
ec2_instance_count: 3
ec2_tag: demo
ec2_tag_name_prefix: dj
ec2_hosts: all
wait_for_port: 22
# This user name will be set by Tower, when run through Tower
tower_user_name: admin
---
# The variables file used by the playbooks in the dbservers group.
# These don't have to be explicitly imported by vars_files: they are autopopulated.
mysqlservice: mysqld
mysql_port: 3306
dbuser: root
dbname: foodb
upassword: abc
---
# Variables for the HAproxy configuration
# HAProxy supports "http" and "tcp". For SSL, SMTP, etc, use "tcp".
mode: http
# Port on which HAProxy should listen
listenport: 8888
# A name for the proxy daemon; this will be the suffix in the logs.
daemonname: myapplb
# Balancing algorithm. Available options:
# roundrobin, source, leastconn, uri
# (if persistence is required, use "source")
balance: roundrobin
# Ethernet interface on which the load balancer should listen
# Defaults to the first interface. Change this to:
#
# iface: eth1
#
# ...to override.
#
iface: '{{ ansible_default_ipv4.interface }}'
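The default `balance: roundrobin` above hands each incoming request to the next backend in rotation, which is why reloading the demo page hits different web servers. A minimal sketch of the scheme (names are illustrative):

```python
import itertools

def round_robin(backends):
    """Yield backends in rotation, the same scheme HAProxy's
    roundrobin balance algorithm uses to spread requests
    evenly across the web server pool."""
    return itertools.cycle(backends)
```

The "source" algorithm, by contrast, hashes the client address so a given client keeps hitting the same backend, which is what makes it the persistence option.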
---
# Variables for the web server configuration
# Ethernet interface on which the web server should listen.
# Defaults to the first interface. Change this to:
#
# iface: eth1
#
# ...to override.
#
iface: '{{ ansible_default_ipv4.interface }}'
# this is the repository that holds our sample webapp
repository: https://github.com/bennojoy/mywebapp.git
# this is the sha1sum of V5 of the test webapp.
webapp_version: 351e47276cc66b018f4890a04709d4cc3d3edb0d
---
# This role installs httpd
- name: Install httpd
yum: name={{ item }} state=present
with_items:
- httpd
- name: http service state
service: name=httpd state=started enabled=yes