V2 Deployment Checklist
Contents
- V2 Deployment Checklist
- Introduction
- DNS Name Configuration
- Setup Virtual Machine & Virtual Host
- Configuration for Vagrantfile Generation
- Define Reverse Proxy
- Define Application Backup & Data Directories
- Run Puppet Agent on the Host Server
- Document New Virtual Machine on Wiki
- Define a Puppet Node for the New Virtual Machine
- Define a Virtual Host on the VM
- Start Virtual Machine
- Setup V2 User SSH Certificate
- Setup V2 Deployment Instance
- Deploy Initial V2 System
- Start Delayed Job Task
- Register Application in MAPS
- Test Outgoing Email
- Configure & Test Incoming Email
- Install V2 Configuration
- Import Configuration
- Import Data & Documents
- End-to-End Test
- Setup Backups
- Setup Monitoring
Introduction
DNS Name Configuration
- DNS configuration is performed on the Puppet server mgt.vh01.softxs.ch
ssh -p 10104 root@loki
cd /etc/puppet
Configure new DNS names. As root on the Puppet server:
cd /etc/puppet/modules/bind9/files/master
# -- Edit the following two files:
vi zones/all/{domain_name}.
  # Update the 'Serial' with today's date and the sequence number (normally '01')
  :x
vi generic/all/{domain_name}.
  # Add CNAME record for the sub-domain pointing to the host server
  :x
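The serial value can be generated from the shell. This is a small sketch of the convention described above (date in YYYYMMDD form followed by a two-digit sequence number, 01 for the first change of the day):

```shell
# Zone serial per the convention above: today's date plus a two-digit
# sequence number (01 for the first change of the day).
SERIAL="$(date +%Y%m%d)01"
echo "$SERIAL"
```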
- Test DNS Names. Make sure the new DNS name is defined on all of the following DNS servers:
zg-1.softxs.ch - Public server
zg-3.softxs.ch - Public server
loki.softxs.ch - LU DMZ
odin.softxs.ch - LU DMZ
modi.softxs.ch - LU Internal network
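A quick way to check all five servers is a loop over dig queries. This is a sketch, not part of the official procedure: it assumes dig is installed on the machine you run it from, and the DIG variable exists only so the resolver command can be overridden.

```shell
# check_dns NAME -- query NAME against each DNS server in the list above.
# The resolver command defaults to 'dig +short' but can be overridden
# via the DIG variable.
check_dns() {
  name=$1
  for server in zg-1.softxs.ch zg-3.softxs.ch loki.softxs.ch odin.softxs.ch modi.softxs.ch; do
    printf '%s: ' "$server"
    ${DIG:-dig +short} "@$server" "$name"
  done
}
# Usage: check_dns v0402.vh03.softxs.ch
```

Every server should return the same address; an empty answer from one server means its zone has not picked up the new serial yet.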
Setup Virtual Machine & Virtual Host
Requires the following steps:
- Define VM Vagrantfile generation
- Define VM configuration
- Define Virtual Host configuration
- Start new VM
- Log into VM and perform the initial OS update and first puppet agent run
- Setup SSH login certificate
Configuration for Vagrantfile Generation
Configure the Vagrantfile generation on the Puppet server mgt.vh01.softxs.ch. As user root on the Puppet server:
ssh -p 10104 root@lu-4.softxs.ch   # loki.softxs.ch
cd /etc/puppet/manifests
vi nodes.pp
# Find the node definition for the host server (zg-1, loki, odin, etc.)
# Add a 'v2_server::vm' definition, similar to the following:
node "zg-3.softxs.ch" {            # host VM
  ...
  create_resources( v2_server::vm, {
    # VM definition
    ...
    'v0402' => {
      vm_hostname        => 'v0402',
      vm_fqdn            => 'v0402.vh03.softxs.ch',
      vm_box             => 'precise64_rails',
      vm_memory          => 3072,
      vm_ip              => '172.16.4.2',
      vm_netmask         => '255.240.0.0',
      vm_ssh_port        => 20402,
      vm_synched_folders => {
        'vagrant' => { vm_dir => "/vagrant", host_dir => "." },
        'data'    => {
          vm_dir        => "/data",
          host_dir      => "/v01/data/v0402",
          owner         => "www-data",
          group         => "v2",
          mount_options => ["dmode=775", "fmode=764"],
        },
        'backup'  => {
          vm_dir        => "/backup",
          host_dir      => "/v01/backup/local/v0402",
          owner         => "www-data",
          group         => "v2",
          mount_options => ["dmode=775", "fmode=764"],
        },
      },
    },
    ...
The critical parameters are (see V2VirtualServers for example configurations and the setup conventions):
- Hostname
- Fully qualified domain name (fqdn)
- Virtual machine box
- Memory
- IP address
- Port number
- Paths for the /data & /backup directories
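The example values above (v0402, IP 172.16.4.2, SSH port 20402) suggest a numbering convention in which hostname vNNMM maps to IP 172.16.NN.MM and SSH port 20000 + NN*100 + MM. This is an inference from the example, not a documented rule; check V2VirtualServers before relying on it. A sketch of that derivation:

```shell
# vm_params HOSTNAME -- derive IP address and SSH port from a vNNMM
# hostname, per the apparent convention (an assumption inferred from the
# example above, not a documented rule).
vm_params() {
  num=${1#v}                  # strip leading 'v' -> NNMM
  nn=${num%??}; nn=${nn#0}    # NN without leading zero
  mm=${num#??}; mm=${mm#0}    # MM without leading zero
  echo "ip=172.16.$nn.$mm port=$((20000 + nn * 100 + mm))"
}
vm_params v0402   # -> ip=172.16.4.2 port=20402
```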
Define Reverse Proxy
Define a reverse proxy entry on the host machine, to forward http requests to the VM. As user root on the Puppet server:
cd /etc/puppet
vi manifests/nodes.pp
node "zg-3.softxs.ch" {   # Find the node of the host machine
  ...
  create_resources( rproxy::vhost, {
    ...
    # Add an entry similar to the following:
    'v0402' => { host => 'v030402.softxs.ch', ip => '172.16.4.2' },
    ...
  }
  ...
}
- Where:
v0402 is the hostname of the VM
host is the external (public) DNS name
ip is the internal IP address of the VM
Define Application Backup & Data Directories
Define Backup & Data directories for the VM. As user root on the Puppet server:
cd /etc/puppet
vi manifests/nodes.pp
node "zg-3.softxs.ch" {   # Find the node of the host machine
  ...
  create_resources( v2_server::mkdir, {
    ...
    # Add entries similar to the following:
    "v0402-backup" => { target => "/v01/backup/local/v0402" },
    "v0402-data"   => { target => "/v01/data/v0402" },
    ...
  }
  ...
}
- Where:
v0402 is the hostname of the VM
/v01/backup/local is the top-level directory of the host machine's local backup tree
/v01/data is the top-level directory of the host machine's application directory tree
Run Puppet Agent on the Host Server
Wait for puppet agent to run on the host machine to implement the updates
- Or run it by hand:
puppet agent --onetime --no-daemonize --verbose
- This will also create a new VM directory on the host machine:
- /home/vagrant/vms/{name}
Where {name} is the new VM's host name
Document New Virtual Machine on Wiki
Document the new VM in the Wiki page: V2VirtualServers
Define a Puppet Node for the New Virtual Machine
Define a new puppet node for the VM, named with the VM's fully qualified domain name:
node "v0402.vh03.softxs.ch" {   # VM FQDN
  class { 'puppet_agent': period => 10 }
  class { 'postfix_satellite':
    node_name => $name,
    mailname  => 'softxs.ch',
    relayhost => 'smtp.softxs.ch'
  }
  include apache
  include mysql
  include passenger
  class { 'rails_user': user => 'v2' }
  # ..place for vhost definition (see next section)..
}
Define a Virtual Host on the VM
Add an apache::vhost definition to the VM node:
node "v0402.vh03.softxs.ch" {   # VM node
  ...
  apache::vhost {'trial.softxs.ch':
    template   => 'apache/vhost-v2.conf.erb',
    docroot    => '/home/v2/rails/trial.softxs.ch',
    priority   => 25,
    servername => 'trial.softxs.ch',
    options    => '-Indexes',
    subvhosts  => {
      'trial1' => { },
      'trial2' => { },
    },
  ...
}
The subvhosts are used to define the relative paths for the individual applications
Start Virtual Machine
As user vagrant on the host machine start the VM:
cd /home/vagrant/vms/{name}   # Where 'name' is the new VM host name
vagrant up                    # Starts the VM
- Log into the VM
vagrant ssh
- Perform an initial software update on the VM
su -
apt-get update
apt-get upgrade
- Perform an initial puppet agent run on the VM
puppet agent --onetime --no-daemonize --verbose
- This should create a vhost, including:
- Apache configuration files, located at:
- /etc/apache2/sites-available
- /etc/apache2/sites-enabled
- Base directories for the application(s):
- /home/v2/rails/{domain}/{app(s)}
Setup V2 User SSH Certificate
- In order to allow automatic login for application deployment using Capistrano, you must install an SSH key on the VM for the v2 user:
ssh -p {port} v2@{host}
- Where
port is the SSH port number defined in the VM's Vagrantfile
host is the base server's domain name
- Create a .ssh directory. As user v2 on the VM:
cd ~
mkdir .ssh          # If the directory doesn't already exist
chmod 0700 .ssh
cd .ssh
vi authorized_keys
  # Add your SSH public key, typically from your local machine,
  # in the file '~/.ssh/identity.pub' or '~/.ssh/id_dsa.pub'
  :x
chmod 0600 authorized_keys
- Test the key. As your local user on your development (or other) system, from which you plan to do application deployment, make sure the following command works without having to enter a password:
ssh -p {port} v2@{host}
- Where:
port is the SSH port number defined in the VM's Vagrantfile
host is the base server's domain name
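If the machine you deploy from does not yet have a key pair, one can be generated as follows. This is a sketch: the ed25519 key type is a suggestion, and the older RSA/DSA keys mentioned above work just as well.

```shell
# setup_key FILE -- generate an ed25519 key pair at FILE unless one
# already exists (no passphrase, so Capistrano can log in unattended).
setup_key() {
  keyfile=$1
  mkdir -p "$(dirname "$keyfile")"
  [ -f "$keyfile" ] || ssh-keygen -q -t ed25519 -N "" -f "$keyfile"
}
# Usage:
#   setup_key ~/.ssh/id_ed25519
#   cat ~/.ssh/id_ed25519.pub | ssh -p {port} v2@{host} 'cat >> ~/.ssh/authorized_keys'
```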
Setup V2 Deployment Instance
Deployment of the Works Organiser application is done from the git repository: git.softxs.ch:/home/git/gitroot/v2deploy.git
Define Instance
Go to the Works Organiser instance directory at the top of the v2deploy repository:
cd app/v2p0/site/proto/instance
- Create a new instance directory. Typically it's named {dns_name}-{path}, where
dns_name is the DNS name used to access the application host (a VM)
path is the relative path
For example: demo.softxs.ch-hydro, which would be accessed with the URL http://demo.softxs.ch/hydro
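The instance name can be split back into its two parts with standard parameter expansion. A sketch, assuming the path component itself contains no '-':

```shell
# Split an instance directory name of the form {dns_name}-{path}.
inst="demo.softxs.ch-hydro"
dns_name=${inst%-*}    # everything before the last '-'
path=${inst##*-}       # everything after the last '-'
echo "http://$dns_name/$path"   # -> http://demo.softxs.ch/hydro
```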
Create the following directory & files in the instance directory:
deploy.rb - Capistrano deployment parameters, including server, directories, etc.
files - directory
config - directory
database.yml - Define the application's database
email_secret.txt - Key for sending email
settings.local.yml - Define the site/instance application specific settings
- The settings for these files are explained in the following sections
Define deploy.rb
The deploy.rb file defines the Capistrano deployment parameters.
Example deploy.rb file:
# deploy.rb
# app: v2p0, site: proto, instance: demo.softxs.ch-tasks
Capistrano::Configuration.instance.load do
  # -- Server parameters
  role :web, "demo.softxs.ch"
  role :app, "demo.softxs.ch"
  role :db,  "demo.softxs.ch", :primary => true
  set :vm_ssh_port, 20108
  set :basepath, "/home/v2/rails"
  set :servpath, "demo.softxs.ch"
  set :relpath,  "tasks"
  set :deploy_to, "#{basepath}/#{servpath}/#{relpath}-app"

  # -- Relative path on server (disable if no relative path)
  # WARNING: RAILS_RELATIVE_URL_ROOT must = '/' + Settings.SXS.Application.Url.Path
  set :asset_env, "RAILS_RELATIVE_URL_ROOT='/#{relpath}'"

  # -- Owner/group of deployed code
  set :user, "v2"
  # set :group, "www-data"

  # -- Git parameters
  set :branch, "master"                          # Git branch to deploy
  set :scm_user, ENV['CAP_USER'] || ENV['USER']

  # -- Installation
  set :use_sudo, false
end
- The key settings are:
roles - The server(s) where the application will be defined. Normally they are all set to the same machine
:web - The web server
:app - The application server
:db - The SQL server
:vm_ssh_port - The SSH port to use to log into the server(s)
:basepath - The absolute path where applications are installed
:servpath - The relative path to the vhost
:relpath - The relative path to the application
:branch - The git branch to install
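How these settings compose into the deployment directory can be seen by expanding them by hand (values taken from the example deploy.rb above):

```shell
# Compose deploy_to exactly as the example deploy.rb does:
basepath="/home/v2/rails"    # where applications are installed
servpath="demo.softxs.ch"    # relative path to the vhost
relpath="tasks"              # relative path to the application
deploy_to="$basepath/$servpath/$relpath-app"
echo "$deploy_to"   # -> /home/v2/rails/demo.softxs.ch/tasks-app
```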
settings.local.yml
database.yml
Email Secret Key
Home Page Setup
Deploy Initial V2 System
Capistrano Deployment
Test Local Login
Test Invitation & MAPS Login
Start Delayed Job Task
Register Application in MAPS
Test Outgoing Email
Configure & Test Incoming Email
Install V2 Configuration
Import Configuration
Import Data & Documents
End-to-End Test
Setup Backups
Application backups run on a nightly basis. There are three parts:
- A database backup to the application's ../shared/backup directory
- A backup of the application's -app directory tree (which includes shared/backup) to a sub-directory in the VM's /backup (NFS-mounted from the host server)
- A backup server rsync-ing the host machine's backup area (TODO: described elsewhere)
Backups are based on the following cronjobs on the application VM:
- Backup the application database
- Backup the application's -app directory tree
Application Database Backup
On the application VM, as user v2, set up a cronjob like the following:
23 03 * * * /home/v2/rails/{domain}/{relpath}-app/current/script/ruby_cron.sh \
    /home/v2/rails/{domain}/{relpath}-app/current/script/backup_db \
    -o /home/v2/rails/{domain}/{relpath}-app/shared/backup \
    -u{usr} -p{pwd} -z
The cronjob should run every 24 hours and its time should be before the root rsync cronjob shown below
- Where:
{domain} is the application's domain name. E.g. poyry.works-organiser.com
{relpath} is the relative URL path to the application. E.g. stadelhofen
{usr} is the MySQL administrative user's name. E.g. root
{pwd} is the MySQL administrative user's password. E.g. secret
-z is the option to compress the output with bzip2
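Substituting the example values ({domain} = poyry.works-organiser.com, {relpath} = stadelhofen) shows the paths the cron entry expands to. A sketch only; {usr} and {pwd} are left as placeholders:

```shell
# Expand the placeholders in the cron command above for the example values.
domain="poyry.works-organiser.com"
relpath="stadelhofen"
app="/home/v2/rails/$domain/$relpath-app"
echo "$app/current/script/ruby_cron.sh"
echo "$app/current/script/backup_db -o $app/shared/backup -u{usr} -p{pwd} -z"
```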
Application Directory Tree Backup
Warning: These cronjobs currently run as root, as there are permissions errors running as user v2
On the application VM, as user root, set up a cronjob like the following:
/usr/bin/rsync -qa /home/v2/rails/{domain}/{relpath}-app/ /backup/{domain}/{relpath}/
The cronjob should run every 24 hours and its time should be before the backup server's collection of the host machine's nightly backup data
- Where:
{domain} is the application's domain name. E.g. poyry.works-organiser.com
{relpath} is the relative URL path to the application. E.g. stadelhofen
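A root crontab entry matching the above might look as follows. This is a sketch: the 03:40 start time is an assumption; pick any time after the database backup above and before the backup server's collection run.

```shell
# Edit root's crontab with 'crontab -e' and add a line like:
# 40 03 * * * /usr/bin/rsync -qa /home/v2/rails/{domain}/{relpath}-app/ /backup/{domain}/{relpath}/
```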