Building Stand-Alone BOSH and Concourse

This should be the last “how to install concourse” post; with this, I think I’ve covered all of the interesting ways to install it.  Using BOSH is by far my favorite approach.  After this, I hope to post more about actually using Concourse and building pipelines.

Overview

There are three phases to this deployment:

  1. BOSH-start – We’ll set up an ubuntu VM to create the BOSH director from.  We’ll be using BOSH v2 and not bosh-init
  2. BOSH Director – This does all the work for us, but has to be instructed how to connect to vSphere
  3. Concourse – We’ll use a deployment manifest in BOSH to deploy concourse

I took the approach that – where possible – I would manually download the files and transfer them to the target, rather than having the install process pull them down automatically.  I went through a lot of trial-and-error, so I did not want to re-download the files every time.  In addition, I wanted to get a feel for what a self-contained (no Internet access) solution would look like.  Note that Concourse itself still requires Internet access in order to reach Docker Hub for the containers its pipelines run in.

Starting position

Make sure you have the following:

  • Working vSphere environment with some available storage and compute capacity
  • At least one network on a vSwitch or Distributed vSwitch with available IP addresses
  • Account for BOSH to connect to vSphere with permissions to create folders, resource pools, and VMs
  • An Ubuntu VM template.  Mine is 16.04 LTS
  • PuTTY, WinSCP, or similar tools

BOSH-start

  1. Deploy a VM from your Ubuntu template.  Give it a name – I call mine BOSH-start – and an IP address, then power it on.  In my case, I’m logged in with my own account to avoid using root unless necessary.
  2. Install dependencies:
    sudo apt-get install -y build-essential zlibc zlib1g-dev ruby ruby-dev openssl \
    libxslt-dev libxml2-dev libssl-dev libreadline6 libreadline6-dev \
    libyaml-dev libsqlite3-dev sqlite3
  3. Download the BOSH CLI v2, make it executable, and move it onto your path.  Get the latest version of the BOSH v2 CLI here.
    wget https://s3.amazonaws.com/bosh-cli-artifacts/bosh-cli-2.0.16-linux-amd64
    chmod +x ./bosh-cli-*
    sudo mv ./bosh-cli-* /usr/local/bin/bosh
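    You can confirm the CLI is installed and on your path by checking its version:
    bosh -v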

BOSH Director

  1. Get the Director deployment templates
    mkdir ~/bosh-1
    cd ~/bosh-1
    git clone https://github.com/cloudfoundry/bosh-deployment
  2. From that folder, use bosh to create the environment.  This command will create several “state” files and our BOSH Director with the information you provide.  Replace the placeholder values in angle brackets with your own.
    
    bosh create-env bosh-deployment/bosh.yml \
        --state=state.json \
        --vars-store=creds.yml \
        -o bosh-deployment/vsphere/cpi.yml \
        -o bosh-deployment/vsphere/resource-pool.yml \
        -o bosh-deployment/misc/dns.yml \
        -v internal_dns=<DNS Servers ex: [192.168.100.10,192.168.100.11]> \
        -v director_name=<name of BOSH director. eg:boshdir> \
        -v internal_cidr=<CIDR for network ex: 172.16.9.0/24> \
        -v internal_gw=<Gateway Address> \
        -v internal_ip=<IP Address to assign to BOSH director> \
        -v network_name="<vSphere vSwitch Port Group>" \
        -v vcenter_dc=<vSphere Datacenter> \
        -v vcenter_ds=<vSphere Datastore> \
        -v vcenter_ip=<IP address of vCenter Server> \
        -v vcenter_user=<username for connecting to vCenter Server> \
        -v vcenter_password=<password for that account> \
        -v vcenter_templates=<location for templates ex:/BOSH/templates> \
        -v vcenter_vms=<location for VM.  ex:/BOSH/vms> \
        -v vcenter_disks=<folder on datastore for bosh disks.  ex:bosh-1-disks> \
        -v vcenter_cluster=<vCenter Cluster Name> \
        -v vcenter_rp=<Resource Pool Name>

    One note here: if you do not add the lines for dns.yml and internal_dns, your BOSH Director will use 8.8.8.8 as its DNS server and won’t be able to resolve anything internal. This step will take a little while to download the bits and set up the Director for you.

  3. Connect to Director.  The following commands will create an alias for the new BOSH environment named “bosh-1”. Replace 10.0.0.6 with the IP of your BOSH Director from the create-env command:
    # Configure local alias
    bosh alias-env bosh-1 -e 10.0.0.6 --ca-cert <(bosh int ./creds.yml --path /director_ssl/ca)
    export BOSH_CLIENT=admin
    export BOSH_CLIENT_SECRET=`bosh int ./creds.yml --path /admin_password`
    bosh -e bosh-1 env
  4. Next we’ll need a “cloud config”.  This tells the BOSH Director how to configure the CPI for interaction with vSphere.  You can find examples and details here.  For expediency, what I ended up with is below. As usual, you’ll want to update the placeholder values to match your environment.  Save this file as ~/bosh-1/cloud-config.yml on the BOSH-start VM
    azs:
    - name: z1
      cloud_properties:
        datacenters:
        - name: <vSphere Datacenter Name>
          clusters:
          - <vSphere Cluster Name>: {resource_pool: <Resource Pool in that cluster>}
    properties:
      vcenter:
        address: <IP or FQDN of vCenter Server>
        user: <account to connect to vSphere with>
        password: <Password for that account>
        default_disk_type: thin
        enable_auto_anti_affinity_drs_rules: false
        datacenters:
        - name: <vSphere Datacenter Name>
          vm_folder: /BOSH/vms
          template_folder: /BOSH/templates
          disk_path: prod-disks
          datastore_pattern: <regex filter for datastores to use ex: '\AEQL-THICK0\d' >
          persistent_datastore_pattern: <regex filter for datastores to use ex: '\AEQL-THICK0\d' >
          clusters:
          - <vSphere Cluster Name>: {resource_pool: <Resource Pool in that cluster>}
    
    vm_types:
    - name: default
      cloud_properties:
        cpu: 2
        ram: 4096
        disk: 16_384
    - name: large
      cloud_properties:
        cpu: 2
        ram: 8192
        disk: 32_768
    
    disk_types:
    - name: default
      disk_size: 16_384
      cloud_properties:
        type: thin
    - name: large
      disk_size: 32_768
      cloud_properties:
        type: thin
    
    networks:
    - name: default
      type: manual
      subnets:
      - range: <network CIDR where to place VMs ex:192.168.10.0/26>
        reserved: <reserved range in that CIDR ex:[192.168.10.1-192.168.10.42] >
        gateway: <gateway address for that network>
        az: z1
        dns: <DNS Server IPs ex: [192.168.100.50,192.168.100.150] >
        cloud_properties:
          name: <name of port group to attach created VMs to>
    
    compilation:
      workers: 5
      reuse_compilation_vms: true
      az: z1
      vm_type: large
      network: default
    
    
  5. Update Cloud Config with our file:
    bosh -e bosh-1 update-cloud-config ./cloud-config.yml

    This is surprisingly fast.  You should now have a functional BOSH Director.
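    If you want to double-check what the Director now has, you can ask it to print the active cloud config back to you:
    bosh -e bosh-1 cloud-config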

Concourse

Let’s deploy something with BOSH!

Prereqs:

  • Copy the URLs for the Concourse and Garden runC BOSH releases from here
  • Copy the URL for the latest Ubuntu Trusty stemcell for vSphere from here
  1. Upload Stemcell.  You’ll see it create a VM with a name beginning with “sc” in vSphere
    bosh -e bosh-1 upload-stemcell <URL to stemcell>
  2. Upload Garden runC release to BOSH
    bosh -e bosh-1 upload-release <URL to garden-runc tgz>
  3. Upload Concourse release to BOSH
    bosh -e bosh-1 upload-release <URL to concourse tgz>
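    To confirm the uploads landed on the Director, you can list what it now knows about (using the bosh-1 alias from earlier):
    bosh -e bosh-1 stemcells
    bosh -e bosh-1 releases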
  4. A BOSH deployment must have a stemcell, a release, and a manifest.  You can get a Concourse manifest from here, or start with the one I’m using.  You’ll notice that many of the values here must match those in our cloud config.  Save the Concourse manifest as ~/concourse.yml
    ---
    name: concourse
    
    releases:
    - name: concourse
      version: latest
    - name: garden-runc
      version: latest
    
    stemcells:
    - alias: trusty
      os: ubuntu-trusty
      version: latest
    
    instance_groups:
    - name: web
      instances: 1
      # replace with a VM type from your BOSH Director's cloud config
      vm_type: default
      stemcell: trusty
      azs: [z1]
      networks: [{name: default}]
      jobs:
      - name: atc
        release: concourse
        properties:
          # replace with your CI's externally reachable URL, e.g. https://ci.foo.com
          external_url: http://concourse.mydomain.com
    
          # replace with username/password, or configure GitHub auth
          basic_auth_username: myuser
          basic_auth_password: mypass
    
          postgresql_database: &atc_db atc
      - name: tsa
        release: concourse
        properties: {}
    
    - name: db
      instances: 1
      # replace with a VM type from your BOSH Director's cloud config
      vm_type: large
      stemcell: trusty
      # replace with a disk type from your BOSH Director's cloud config
      persistent_disk_type: default
      azs: [z1]
      networks: [{name: default}]
      jobs:
      - name: postgresql
        release: concourse
        properties:
          databases:
          - name: *atc_db
            # make up a role and password
            role: atc_db
            password: mypass
    
    - name: worker
      instances: 1
      # replace with a VM type from your BOSH Director's cloud config
      vm_type: default
      stemcell: trusty
      azs: [z1]
      networks: [{name: default}]
      jobs:
      - name: groundcrew
        release: concourse
        properties: {}
      - name: baggageclaim
        release: concourse
        properties: {}
      - name: garden
        release: garden-runc
        properties:
          garden:
            listen_network: tcp
            listen_address: 0.0.0.0:7777
    
    update:
      canaries: 1
      max_in_flight: 1
      serial: false
      canary_watch_time: 1000-60000
      update_watch_time: 1000-60000

    A couple of notes:

    • The worker instance will need plenty of disk space, especially if you’re planning to use PCF Pipeline Automation, as it’ll have to download the massive binaries from PivNet. Make sure your cloud config defines a sufficiently large VM type and that the worker instance group in the Concourse manifest references it; a sketch is below.
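    One way to handle that (a sketch only; the “worker” type name and the sizes are placeholders, not values from this environment) is to add a larger VM type to ~/bosh-1/cloud-config.yml and point the worker instance group in ~/concourse.yml at it:
    # addition to vm_types in cloud-config.yml
    - name: worker            # hypothetical name; use whatever you like
      cloud_properties:
        cpu: 4
        ram: 8192
        disk: 102_400         # roughly 100 GB so large downloads fit

    # then, in concourse.yml, the worker instance group would use:
    #   vm_type: worker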
  5. Now we have everything we need to deploy Concourse.  Notice that we’re using BOSH v2, so the deployment syntax is a little different than in BOSH v1.  This command will create a handful of VMs, compile a bunch of packages, and push them to the VMs.  You’ll need a couple of extra IPs for the compilation VMs – these go away after the deployment is complete.
    bosh -e bosh-1 -d concourse deploy ./concourse.yml
  6. Odds are that you’ll have to make adjustments to the cloud config and the deployment manifest.  If so, you can easily apply updates to the cloud config with the bosh update-cloud-config command and then re-run the deploy, as shown below.
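    A typical edit-and-retry loop (using the file names from the steps above) looks like:
    bosh -e bosh-1 update-cloud-config ./cloud-config.yml
    bosh -e bosh-1 -d concourse deploy ./concourse.yml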
  7. If the deployment is completely hosed and you need to remove it, you can do so with
    bosh -e bosh-1 -d concourse stop && bosh -e bosh-1 -d concourse delete-deployment

Try it out

  1. Get the IP address of the web instance by running
    bosh -e bosh-1 vms

    From the results, identify the IP address of the web instance.

  2. Point your browser to http://<IP of web instance>:8080
  3. Click Login, select the “main” team, and log in with the username and password you used in the manifest (myuser and mypass in the example)
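If you’d rather check it from the command line, the fly CLI (downloadable from links on the Concourse web page) can target the new instance. A minimal sketch, assuming the web instance IP and the credentials from the manifest, with “lab” as an arbitrary target name:
    fly -t lab login -c http://<IP of web instance>:8080 -n main
    fly -t lab workers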
