V2 Server Setup

V2Master

Introduction

  1. A Linux server capable of hosting multiple virtual machines (VMs)
  2. Basic setup performed by hand:
    1. OS installation
    2. Install development tools
    3. Setup RAID disk mirroring
  3. Further provisioning automated by use of Puppet:

    1. VirtualBox and Vagrant installation

    2. Rinetd setup on host system
    3. Reverse proxy gateway in VM
    4. Internal DNS server in VM

Platform

  1. Hardware:
    1. Multi-core Intel architecture with 64-bit processor(s)
    2. 1 x SSD drive for the root, swap and service/application VMs
    3. 2 x identical hard disks for VM datasets and backups
  2. Software:
    1. VirtualBox - Virtualization software.

      1. See: https://www.virtualbox.org

    2. Vagrant - Ruby-based command line front-end for VirtualBox.

      1. See: http://www.vagrantup.com/

    3. Other standard Linux packages (Bind, Apache, MySQL, Rails, etc.) as needed

Operating System Installation

  1. Prepare a bootable memory stick with Ubuntu
  2. Install Ubuntu
  3. Set the root password and update the OS to the latest version of the distribution

Prepare a Bootable Memory Stick

  1. Download Ubuntu 12.04.3 LTS Server

    1. Available from: http://www.ubuntu.com/download/server

    2. Select: 64-bit (recommended) - This is the amd64 version, which is appropriate for all 64-bit Intel and AMD processors

  2. Prepare a bootable memory stick
    1. See instructions at:
      1. On an Ubuntu system: http://www.ubuntu.com/download/desktop/create-a-usb-stick-on-ubuntu

Ubuntu Installation

  1. Boot system from memory stick containing Ubuntu 12.04.3 LTS server

  2. Select language: English

  3. Select option: Install Ubuntu Server

  4. Select system language: English

  5. Select location: other --> Europe --> Switzerland

  6. Select locale: en_US.UTF-8

  7. Configure network:
    1. DHCP configuration will start; select Cancel

    2. Select primary network interface - This question does not appear for systems with a single ethernet port

    3. Select: Configure manually

    4. IP address: enter the system's IP address

    5. Gateway IP address: enter the system's gateway address

    6. Nameserver address(es): enter the system's primary DNS IP address(es)

    7. Hostname: enter system's hostname

  8. Create initial user
    1. Full name:
    2. User name:
    3. Enter & re-enter password

    4. Encrypt home directory: no

  9. Time zone: Accept Europe/Zurich

  10. Partition disks:
    1. Select entry for SSD drive
    2. Select Automatically partition and use LVM

    3. Select entire disk
    4. Accept proposed partition and accept writing partition table to disk
  11. Installs base operating system

  12. Enter proxy information: Press return (i.e. no proxy required)

  13. Software selection: Select OpenSSH server, and nothing else

  14. Installs and configures more software

  15. Install GRUB boot loader: Yes

  16. Installation complete. Remove the memory stick and reboot

Initial Configuration

  1. Login as the user defined during the installation
  2. Set the root password:
    • $ sudo bash
      # passwd
  3. Update system packages and upgrade the distribution: (this can take about 5 minutes or more)
    • # apt-get update
      # apt-get -y dist-upgrade
  4. Install development tools, which are required for the VirtualBox installation

    • # apt-get -y install build-essential autoconf libtool pkg-config

Setup Mirrored Disks

  1. Setup RAID-1 based mirrored disks
  2. Configure the RAID disks to be mounted on system boot

Device Names

In a typical installation, the SSD and two hard disks have device names assigned as follows:

  1. SSD: /dev/sda

  2. Hard disk 1: /dev/sdb

  3. Hard disk 2: /dev/sdc
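
Device names can vary between systems and even between boots, so it is worth confirming the assignments before partitioning anything. A minimal sketch that lists the block devices the kernel sees, reading /proc/partitions (present on any Linux system):

```shell
# List every block device the kernel knows about, with its size in MB.
# /proc/partitions columns are: major, minor, #blocks (1K units), name
awk 'NR > 2 { printf "/dev/%s  %d MB\n", $4, $3 / 1024 }' /proc/partitions
```

On most systems `lsblk -d -o NAME,SIZE,MODEL` gives a friendlier per-disk view, including the drive model, which makes it easy to tell the SSD from the two hard disks.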

Procedure

  1. Partition and format hard disks with ext4 filesystems

    1. Partition: Perform this procedure once for each hard disk, typically for /dev/sdb and /dev/sdc. Enter the following commands:

      • # fdisk /dev/sdb
          p  # Print partition table
          d  # Delete all existing partitions. May need to use this command multiple times
          n  # Create a new partition, accept defaults, which are for a primary partition using all available disk space
          w  # Write partition table to disk and exit
    2. Example:
      • Command (m for help): p
          ..prints partition table (which will be empty for a new disk)..
        Command (m for help): n
        Partition type:
          p   primary (0 primary, 0 extended, 4 free)
          e   extended
        Select (default p):
        Partition number (1-4, default 1):
        Using default value 1
        
        Command (m for help): w
        The partition table has been altered!
        Calling ioctl() to re-read partition table.
  2. Setup RAID-1 set:
    1. Create mount point:
      • # mkdir /v01
    2. Create multi-disk array for RAID set:
      • # mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
  3. Show information about the RAID set:
    • # mdadm --detail /dev/md0
      /dev/md0:
              Version : 1.2
        Creation Time : Tue Nov 19 16:12:03 2013
           Raid Level : raid1
           Array Size : 1953381248 (1862.89 GiB 2000.26 GB)
        Used Dev Size : 1953381248 (1862.89 GiB 2000.26 GB)
         Raid Devices : 2
        Total Devices : 2
          Persistence : Superblock is persistent
      
          Update Time : Thu Nov 21 16:48:44 2013
                State : active
       Active Devices : 2
      Working Devices : 2
       Failed Devices : 0
        Spare Devices : 0
      
                 Name : odin:0  (local to host odin)
                 UUID : 7289ef81:334df27d:389e9383:c225b4f5
               Events : 160
      
          Number   Major   Minor   RaidDevice State
             0       8       17        0      active sync   /dev/sdb1
             1       8       33        1      active sync   /dev/sdc1
      
      # mdadm --detail --scan
      ARRAY /dev/md/0 metadata=1.2 name=odin:0 UUID=7289ef81:334df27d:389e9383:c225b4f5
      
      # blkid /dev/md0
      /dev/md0: UUID="21f4f1a5-7f60-4f89-986d-84cfd6de49b4" TYPE="ext4"
  4. Format the RAID set:
    • # mkfs -t ext4 /dev/md0
  5. Update /etc/mdadm/mdadm.conf and add the following: (Note use of the UUID, obtained from the mdadm --detail --scan command):

    • # 2 x 2TB mirrored hard drives:
      ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 UUID=7289ef81:334df27d:389e9383:c225b4f5
  6. Update /etc/fstab and add the following: (Note use of the UUID, obtained from the blkid /dev/md0 command):

    • # 2 x 2TB mirrored disks
      UUID=21f4f1a5-7f60-4f89-986d-84cfd6de49b4 /v01 ext4 defaults 0 2
  7. Update /etc/initramfs-tools/conf.d/mdadm to contain the following. (This allows the server to boot when the RAID set is degraded. If this is not set, the system will not boot while the RAID set is degraded, which is the case while the RAID set is initially being built, except through the rescue entry of the system's boot menu.)

    • BOOT_DEGRADED=true
  8. Test mount and umount the RAID set:
    • # mount /v01
      # df -h
      ...should list /dev/md0 with the expected disk capacity
      # umount /v01
  9. Reboot system to make sure it boots and that the /v01 file system is present:

    • # reboot now
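
The initial RAID-1 build runs in the background and can take several hours for 2 TB disks; progress can be followed with `cat /proc/mdstat`. A small sketch for pulling the completion percentage out of a resync status line (the line below is a hypothetical example of mdstat output):

```shell
# A resync line from /proc/mdstat looks like this while the mirror is
# building (values here are hypothetical):
line='[=>...................]  resync =  7.4% (145000000/1953381248) finish=212.5min speed=141000K/sec'

# Extract the completion percentage:
pct=$(echo "$line" | sed -n 's/.*resync = *\([0-9.]*\)%.*/\1/p')
echo "resync ${pct}% complete"
```

`watch cat /proc/mdstat` gives a live view. The array is safe to mount and use while it resyncs, only slower.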

Additional System Configuration

  1. Install and configure the NTP daemon

NTP Configuration

  1. Install NTP daemon
    • # apt-get -y install ntp
  2. Edit /etc/ntp.conf to contain the following:

    • server ntp1.softxs.ch
      server ntp2.softxs.ch
      server ntp3.softxs.ch
      server ntp4.softxs.ch
  3. Start NTP daemon:
    • # service ntp restart
  4. Check that the daemon is running and able to connect with the time servers. You should see output like the following:
    • # ntpq -p
           remote           refid      st t when poll reach   delay   offset  jitter
      ==============================================================================
       caledonia.dataw 129.69.1.153     2 u    7   64    1   19.662   54.345   0.000
       ntp0.as34288.ne .MRS.            1 u    6   64    1   27.857   14.460   0.000
       arthur.testserv 162.23.41.56     2 u    5   64    1   23.029   12.674   0.000
       ms21.snowflakeh 81.94.123.17     2 u    4   64    1   35.802   19.073   0.000
       europium.canoni 193.79.237.14    2 u    3   64    1   40.978    9.453   0.000

Puppet Installation and Configuration

  1. Install the latest Puppet software, obtained from the Puppet Labs web site
  2. Configure puppet for use as an agent

  3. Setup a certificate on the agent system and sign it on the puppet master to allow remote connections

Puppet Installation

  1. Setup backup area:
    • # mkdir -p /v01/home/backup
  2. Download and install recent version of Puppet (the Ubuntu package is generally out of date)
    • # cd /v01/home/backup
      # wget http://apt.puppetlabs.com/puppetlabs-release-precise.deb
      
      # dpkg -i puppetlabs-release-precise.deb
      # apt-get update
      # apt-get -y install puppet
  3. Check version of puppet and facter (Puppet's support tool for getting OS specific information):

    • # puppet --version
      3.3.2
      
      # facter --version
      1.7.3

      Make sure the puppet version is 3.3.x or later.
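
The "3.3.x or later" check can be scripted; a sketch using `sort -V` (the version-aware sort in GNU coreutils) to verify that the installed puppet meets the minimum:

```shell
# Compare the installed puppet version against the required minimum.
# sort -V sorts version strings numerically, so the oldest sorts first.
required=3.3.0
installed=$(puppet --version 2>/dev/null || echo 0)
oldest=$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n 1)
if [ "$oldest" = "$required" ]; then
    echo "puppet $installed is new enough (>= $required)"
else
    echo "puppet $installed is too old; need >= $required" >&2
fi
```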

Puppet Configuration

  1. Setup Puppet main directory:
    • # mkdir -p /etc/puppet
      # cd /etc/puppet
  2. Edit file /etc/puppet/puppet.conf to contain the following:

    • [agent]
        server      = puppet
        report      = true
  3. Edit file /etc/puppet/auth.conf to contain the following:

    • # This is an example auth.conf file, which implements the
      # defaults used by the puppet master.
      
      ### Authenticated paths - these apply only when the client
      ### has a valid certificate and is thus authenticated
      
      # allow nodes to retrieve their own catalog
      path ~ ^/catalog/([^/]+)$
      method find
      allow $1
      
      # allow nodes to retrieve their own node definition
      path ~ ^/node/([^/]+)$
      method find
      allow $1
      #allow *
      #allow thor.softxs.ch
      
      # allow all nodes to access the certificates services
      path /certificate_revocation_list/ca
      method find
      allow *
      
      # allow all nodes to store their reports
      path /report
      method save
      allow *
      
      # unconditionally allow access to all file services
      # which means in practice that fileserver.conf will
      # still be used
      path /file
      allow *
      
      ### Unauthenticated ACL, for clients for which the current master doesn't
      ### have a valid certificate; we allow authenticated users, too, because
      ### there isn't a great harm in letting that request through.
      
      # allow access to the master CA
      path /certificate/ca
      auth any
      method find
      allow *
      
      path /certificate/
      auth any
      method find
      allow *
      
      path /certificate_request
      auth any
      method find, save
      allow *
      
      # this one is not strictly necessary, but it has the merit
      # of showing the default policy, which is deny everything else
      path /
      auth any

Puppet Connection to Puppetmaster

  1. On the new system as root: Make a test connection to the puppetmaster (puppet.softxs.ch):

    • # cd /etc/puppet
      # puppet agent --test --verbose
      Info: Creating a new SSL key for odin.softxs.ch
      Notice: Using less secure serialization of reports and query parameters for compatibility
      Notice: with older puppet master. To remove this notice, please upgrade your master(s)
      Notice: to Puppet 3.3 or newer.
      Notice: See http://links.puppetlabs.com/deprecate_yaml_on_network for more information.
      Info: Caching certificate for ca
      Info: Creating a new SSL certificate request for odin.softxs.ch
      Info: Certificate Request fingerprint (SHA256): B3:F3:30:C0:AD:C3:48:2E:31:34:EA:36:74:DD:24:75:4B:E9:82:45:F7:93:A1:9B:F1:A8:A7:B8:54:8F:5B:FA
      Exiting; no certificate found and waitforcert is disabled
  2. On the puppetmaster, puppet.softxs.ch, view and sign the certificate:
    • # puppet cert --list
        "odin.softxs.ch" (SHA256) B3:F3:30:C0:AD:C3:48:2E:31:34:EA:36:74:DD:24:75:4B:E9:82:45:F7:93:A1:9B:F1:A8:A7:B8:54:8F:5B:FA
      
      # puppet cert --sign odin.softxs.ch
      Notice: Signed certificate request for odin.softxs.ch
      Notice: Removing file Puppet::SSL::CertificateRequest odin.softxs.ch at '/etc/puppet/ssl/ca/requests/odin.softxs.ch.pem'
  3. Run the puppet test again to verify that the certificate works (on the new system as root):
    • # puppet agent --verbose --no-daemonize  --onetime
      Notice: Using less secure serialization of reports and query parameters for compatibility
      Notice: with older puppet master. To remove this notice, please upgrade your master(s)
      Notice: to Puppet 3.3 or newer.
      Notice: See http://links.puppetlabs.com/deprecate_yaml_on_network for more information.
      Info: Caching certificate for odin.softxs.ch
      Info: Caching certificate_revocation_list for ca
      Info: Retrieving plugin
      Notice: /File[/var/lib/puppet/lib/puppet]/ensure: created
      Notice: /File[/var/lib/puppet/lib/puppet/face]/ensure: created
      ...and many more messages about files in /var/lib/puppet...
      Info: Caching catalog for odin.softxs.ch
      Info: Applying configuration version '1385393941'
      Info: Creating state file /var/lib/puppet/state/state.yaml
      Notice: Finished catalog run in 0.03 seconds

Virtualization Tools Installation and Test

  1. Install VirtualBox and Vagrant using Puppet.

  2. Test the virtualization tools by creating a VM and verifying that it comes up

Install Virtualization Tools

  1. On the puppetmaster (puppet.softxs.ch), as root, edit the file /etc/puppet/manifests/nodes.conf and add the following (set the hostname as appropriate):

    • node "odin.softxs.ch" {
        include vm_host
      }
  2. On the new machine as root make a puppet run:
    • # puppet agent --onetime --no-daemonize --verbose
      Info: Retrieving plugin
      Info: Caching catalog for odin.softxs.ch
      Info: Applying configuration version '1385394816'
      Notice: /Stage[main]/Vm_host/File[vmhost-install-script.sh]/ensure:
      ... followed by messages about uninstalled packages... followed by what appears to be an error...
      Error: /tmp/vmhost-install-script.sh /tmp/virtualbox-4.3_4.3.0-89960~Ubuntu~precise_amd64.deb 4.3.0r89960 /usr/bin/VBoxManage --version returned 1 instead of one of [0]
      Error: /Stage[main]/Vm_host/Exec[run-install-vbox]/returns: change from notrun to 0 failed: /tmp/vmhost-install-script.sh /tmp/virtualbox-4.3_4.3.0-89960~Ubuntu~precise_amd64.deb 4.3.0r89960 /usr/bin/VBoxManage --version returned 1 instead of one of [0]
      ...
      Notice: /Group[vagrant]/ensure: created
      Notice: /User[vagrant]/ensure: created
      Notice: /Stage[main]/Vm_host/File[/home/vms]/ensure: created
      Notice: /Stage[main]/Vm_host/Exec[run-install-vagrant]/returns: executed successfully
      Notice: Finished catalog run in 102.17 seconds

      Note that the error listed above can be ignored, provided the message 'executed successfully' appears at the end.

  3. Verify that VirtualBox and Vagrant have been installed and that user vagrant exists:

    • # VBoxManage --version
      4.3.0r89960
      
      # vagrant --version
      Vagrant 1.3.5
      
      # id vagrant
      uid=1001(vagrant) gid=1001(vagrant) groups=1001(vagrant)

Test Virtualization Tools

Note that all vagrant operations must be performed as user vagrant, and that you must generally be in the directory containing the Vagrantfile for most commands to work.
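
A small guard sketch that makes the second point explicit (the check itself is hypothetical, not part of vagrant):

```shell
# vagrant looks for a Vagrantfile in the current directory (and its
# parents); warn before running vagrant commands from the wrong place:
if [ -f Vagrantfile ]; then
    echo "Vagrantfile present; vagrant commands will operate on this VM"
else
    echo "No Vagrantfile in $(pwd); cd into the VM's directory first" >&2
fi
```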

  1. Import a VM box for testing:

    • # su - vagrant
      $ mkdir boxes
      $ scp root@loki.softxs.ch:/home/vagrant/boxes/Ubuntu-12.04-precise64.box /home/vagrant/boxes
      
      $ vagrant box add precise64 /home/vagrant/boxes/Ubuntu-12.04-precise64.box
      Downloading or copying the box...
      Extracting box...e: 0/s, Estimated time remaining: --:--:--)
      Successfully added box 'precise64' with provider 'virtualbox'!

      Note the use of the name precise64, which is the box's name

  2. Create test VM to make sure that the Virtualization system is functioning correctly:
    • # su - vagrant
      $ cd ../vms
      $ mkdir vmt1
      $ cd vmt1
  3. Edit the file Vagrantfile so that it contains the following:

    • # -- Test VM - Ubuntu
      Vagrant.configure("2") do |config|
        config.vm.box = "precise64"
        config.vm.hostname = "vmt1.softxs.ch"
        config.vm.provider :virtualbox do |v|
          v.customize ["modifyvm", :id, "--name", "vhmt1"]
          v.customize ["modifyvm", :id, "--memory", "512"]
        end
      end
  4. Start the VM:
    • $ vagrant up
      Bringing machine 'default' up with 'virtualbox' provider...
      [default] Importing base box 'precise64'...
      [default] Matching MAC address for NAT networking...
      [default] Setting the name of the VM...
      [default] Clearing any previously set forwarded ports...
      [default] Creating shared folders metadata...
      [default] Clearing any previously set network interfaces...
      [default] Preparing network interfaces based on configuration...
      [default] Forwarding ports...
      [default] -- 22 => 2222 (adapter 1)
      [default] Running 'pre-boot' VM customizations...
      [default] Booting VM...
      [default] Waiting for machine to boot. This may take a few minutes...
      [default] Machine booted and ready!
      [default] The guest additions on this VM do not match the installed version of
      VirtualBox! In most cases this is fine, but in rare cases it can
      cause things such as shared folders to not work properly. If you see
      shared folder errors, please update the guest additions within the
      virtual machine and reload your VM.
      
      Guest Additions Version: 4.2.0
      VirtualBox Version: 4.3
      [default] Setting hostname...
      [default] Mounting shared folders...
      [default] -- /vagrant
  5. Connect to the VM:
    • $ vagrant ssh
      Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.2.0-23-generic x86_64)
       * Documentation:  https://help.ubuntu.com/
      Welcome to your Vagrant-built virtual machine.
      Last login: Fri Sep 14 06:23:18 2012 from 10.0.2.2
      vagrant@vmt1:~$

      You are now logged into the VM called vmt1

      vagrant@vmt1:~$ uptime
       16:09:47 up 2 min,  1 user,  load average: 0.09, 0.06, 0.03
      vagrant@vmt1:~$ lsb_release -a
      No LSB modules are available.
      Distributor ID: Ubuntu
      Description:    Ubuntu 12.04 LTS
      Release:        12.04
      Codename:       precise
      vagrant@vmt1:~$ df -h
      Filesystem                  Size  Used Avail Use% Mounted on
      /dev/mapper/precise64-root   79G  2.2G   73G   3% /
      udev                        237M  4.0K  237M   1% /dev
      tmpfs                        99M  272K   99M   1% /run
      none                        5.0M     0  5.0M   0% /run/lock
      none                        246M     0  246M   0% /run/shm
      /dev/sda1                   228M   25M  192M  12% /boot
      /vagrant                     95G  8.5G   86G   9% /vagrant
      vagrant@vmt1:~$ ls -l /vagrant/
      total 4
      -rw-rw-r-- 1 vagrant vagrant 282 Nov 25 16:05 Vagrantfile

      Note that the filesystem /vagrant is the directory /home/vms/vmt1 on the host system. Make sure it is present and that it contains the Vagrantfile. This verifies that file sharing with the host system is working.

  6. If the VM does not come up, i.e. you get a message like the following:
    • ...
      [default] Waiting for machine to boot. This may take a few minutes...
      The guest machine entered an invalid state while waiting for it
      to boot. Valid states are 'starting, running'. The machine is in the
      'poweroff' state. Please verify everything is configured
      properly and try again.
      
      If the provider you're using has a GUI that comes with it,
      it is often helpful to open that and watch the machine, since the
      GUI often has more helpful error messages than Vagrant can retrieve.
      For example, if you're using VirtualBox, run `vagrant up` while the
      VirtualBox GUI is open.

      Look in the VirtualBox log file:

    • /home/vagrant/VirtualBox VMs/vmt1/Logs

      This directory contains the boot messages of the VM and may indicate what went wrong. In particular, check for a log message like the following, which indicates that the virtualization features are not enabled in the host system's BIOS:

      00:00:00.398875 VMSetError: VT-x is not available
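
The log check can be scripted; a sketch (the path assumes the default VM name vmt1 used in this test, and VBox.log is VirtualBox's standard log file name):

```shell
# Search the VM's VirtualBox log for the tell-tale VT-x message.
# grep -qs: quiet, and no error if the log file does not exist.
logdir="/home/vagrant/VirtualBox VMs/vmt1/Logs"
if grep -qs 'VT-x is not available' "$logdir"/VBox.log; then
    echo "Enable VT-x/AMD-V in the host's BIOS, then retry 'vagrant up'"
else
    echo "No VT-x error found (or log not present)"
fi
```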
  7. Shutdown and delete the VM
    • vagrant@vmt1:~$ exit
      
      $ vagrant status
      Current machine states:
        default                   running (virtualbox)
      The VM is running. To stop this VM, you can run `vagrant halt` to
      shut it down forcefully, or you can run `vagrant suspend` to simply
      suspend the virtual machine. In either case, to restart it again,
      simply run `vagrant up`.
      
      $ vagrant halt
      [default] Attempting graceful shutdown of VM...
      
      $ vagrant status
      Current machine states:
        default                   poweroff (virtualbox)
      The VM is powered off. To restart the VM, simply run `vagrant up`
      
      $ vagrant destroy
      Are you sure you want to destroy the 'default' VM? [y/N] y
      [default] Destroying VM and associated drives...
      
      $ cd /home/vms
      $ rm -rf vmt1

V2ServerSetup (last edited 2017-05-24 13:11:26 by TiborNagy)

Copyright 2008-2014, SoftXS GmbH, Switzerland