Getting Started with CredHub in Concourse

First, some background – I promise to keep it short.  You should never have credentials in a public GitHub repo.  It's probably not a good idea to have them in a private repo either.  At Pivotal, the git client is configured with cred-alert, which complains when I try to push credentials to GitHub.  To stay compliant, I needed a way to update the stuff in the repo and still have my credentials.  Concourse supports a few credential managers, among them Vault and CredHub.  Since CredHub is built into the Ops-Manager-deployed BOSH director, we don't have to spin anything else up.

The simplified diagram shows how this will work.  CredHub runs on the BOSH director, so it'll need to be reachable from the Concourse web node (the ATC) and from anywhere the credhub cli will be used.  If your BOSH director is behind a NAT, you may want to configure a DNAT so it can be reached.

In this case, we’re using a “management/infrastructure” Operations Manager and BOSH director to deploy and manage concourse and minio.  The pipelines on concourse will be used to deploy and maintain other foundations in the environment.

Configure UAA

  1. Log on to Ops Manager and navigate to Status to record the IP address of the BOSH director.  If your BOSH director is behind a NAT, note its DNAT address instead.
  2. Navigate to the credentials tab.  We’re going to need the uaa_login_client_credentials password and the uaa_admin_client_credentials password.
  3. While here, save the ops manager root ca to your computer.  From the installation dashboard, click on your name in the upper right, select Settings.  Then click Advanced and Download Root CA.
  4. SSH into your ops manager:  ssh ubuntu@<ops manager name or IP>
  5. Set uaac target

    uaac target https://<IP of BOSH director>:8443 --ca-cert /var/tempest/workspaces/default/root_ca_certificate

  6. Log in to uaac – ok, this gets awkward

    uaac token owner get login -s <uaa_login_client_credentials>

    • Replace <uaa_login_client_credentials> with the value you saved
    • When prompted for a username enter admin
    • For the password, enter the uaa_admin_client_credentials password you saved
    • You should see “Successfully fetched token…”
  7. Create a uaac client for concourse to use with credhub

    uaac client add --name concourse-to-credhub --authorized-grant-types client_credentials --authorities credhub.read,credhub.write --access-token-validity 30 --secret MySecretPassword

    Please replace MySecretPassword with something else

  8. Create a uaac user for use with the CredHub cli

    uaac user add credhub --email credhub@whatever.com -p MySecretPassword
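
    If you want to sanity-check what you just created, uaac can show both objects – a quick check while you're still targeting the director's UAA (commands per the uaac help; verify against your version):

    uaac client get concourse-to-credhub
    uaac user get credhub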

Try out Credhub cli

    1. Download and install the credhub cli.  On mac, you can use brew install credhub
    2. From a terminal/command line run this to point the cli to the credhub instance on the BOSH director:

      credhub api -s <IP of BOSH director>:8844 --ca-cert ./root_ca_certificate

      • Replace <IP of BOSH director> with the name or reachable IP of the director
      • root_ca_certificate is the root CA from ops manager you downloaded earlier
    3. Log in to credhub:

      credhub login -u credhub -p MySecretPassword

      The username and password are those of the user we added to UAA earlier

    4. Set a test value:

      credhub set --type=value --name=/testval --value=hello

      Here we're setting a key (aka credential) with the name /testval to the value "hello".  Note that all the things stored in credhub start with a slash and that there are several types of credentials that can be stored, the simplest being "value" – a couple more type examples follow this list.

    5. Get our value:

      credhub get --name /testval

      This will return the value and metadata for our key/credential
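
      For reference, here are a couple more examples using other credential types and commands – the names are made up for illustration, so adjust to taste:

      # have credhub generate a random password-type credential
      credhub generate --type password --name /testpw --length 20
      # list everything stored under a path
      credhub find --path /
      # fetch a single credential
      credhub get --name /testpw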

Configuring Concourse to use CredHub

Concourse's web node (the ATC) must be configured to use CredHub as a credential manager. I'm using BOSH-deployed concourse, so I'll simply update the deployment manifest with the new params. If you're using concourse via docker-compose, you'll want to update the yml with the additional params as described here.
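
For the docker-compose case, the extra params boil down to the CredHub settings on the web container. A sketch of the environment involved – I'm assuming Concourse's usual CONCOURSE_<flag> variable naming here, so verify the exact names against your version:

CONCOURSE_CREDHUB_URL=https://192.168.100.200:8844
CONCOURSE_CREDHUB_CLIENT_ID=concourse-to-credhub
CONCOURSE_CREDHUB_CLIENT_SECRET=MySecretPassword
CONCOURSE_CREDHUB_CA_CERT=/path/to/root_ca_certificate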

For concourse deployed via BOSH and using concourse-bosh-deployment, we'll include the operations/credhub.yml ops file and the additional params.  For me this looks like:

bosh -e core deploy -d concourse concourse.yml \
-l ../versions.yml \
--vars-store cluster-creds.yml \
-o operations/static-web.yml \
-o operations/basic-auth.yml \
-o operations/scale.yml \
-o operations/privileged-http.yml \
-o operations/credhub.yml \
--var web_ip=192.168.100.205 \
--var external_url=http://concourse.ragazzilab.com \
--var network_name=INFRA \
--var web_vm_type=small.disk \
--var db_vm_type=small.disk \
--var azs=[BOSH] \
--var db_persistent_disk_type=10240 \
--var worker_vm_type=concourse.worker \
--var deployment_name=concourse \
--var local_user.username=myuser \
--var local_user.password=mypass \
--var web_instances=1 \
--var worker_instances=1 \
--var syslog_address=syslog.ragazzilab.com \
--var syslog_port=514 \
--var syslog_permitted_peer=syslog.ragazzilab.com \
--var credhub_url="https://192.168.100.200:8844" \
--var credhub_client_id=concourse-to-credhub \
--var credhub_client_secret=MySecretPassword \
--var credhub_ca_cert="$(cat root_ca_certificate)"

Test a pipeline

    1. Use credhub cli to create a value

      credhub set --type value --name /concourse/main/hello-credhub/hello --value World

      Concourse has a default pattern for looking up interpolation values: /concourse/<team name>/<pipeline name>/<key>, falling back to /concourse/<team name>/<key> if the pipeline-scoped path doesn't exist.  There's a quick way to check these from the credhub cli after this list.

    2. Get the test pipeline from here.

      jobs:
      - name: hello-credhub
        plan:
        - do:
          - task: hello-credhub
            config:
              platform: linux
              image_resource:
                type: docker-image
                source:
                  repository: ubuntu
              run:
                path: sh
                args:
                - -exc
                - |
                  echo "Hello $WORLD_PARAM"
              params:
                WORLD_PARAM: ((hello))
    3. Use fly to set the test pipeline

      fly -t concourse login -c http://concourse -u myuser -p mypass -n main
      fly -t concourse sp -p hello-credhub -c hello-credhub.yml

    4. Run the test pipeline in concourse. If all goes well, it should say "Hello World"
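
      If the job complains that it can't find ((hello)), check the paths with the credhub cli – remember that Concourse tries the pipeline-scoped path first, then the team-scoped one:

      credhub get --name /concourse/main/hello-credhub/hello
      credhub find --path /concourse/main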

Automating PKS Upgrades

Last night, Pivotal announced new versions of PKS and Harbor, so I thought it was time to simplify the upgrade process. Here is a concourse pipeline that essentially aggregates the upgrade-tile pipeline so that PKS and Harbor are upgraded in one go.

What it does:

  1. Runs on a schedule – you set the time and days it may run
  2. Downloads the latest version of PKS and Harbor from PivNet – you set the major.minor version range
  3. Uploads the PKS and Harbor releases to your BOSH director
  4. Determines whether the new release needs a stemcell that is not already present, downloads it from PivNet and uploads it to the BOSH director
  5. Stages the tiles/releases
  6. Applies changes

What you need:

  1. A working Concourse instance that is able to reach the Internet to pull down the binaries and repo
  2. The fly cli and credentials for your Concourse.
  3. A token from your PivNet account
  4. An instance of PKS 1.0.2 or 1.0.3 AND Harbor 1.4.x deployed on Ops Manager
  5. Credentials for your Ops Manager
  6. (optional) A token from your GitHub account

How to use the pipeline:

  1. Download params.yml and pipeline.yml from here.
  2. Edit the params.yml by replacing the values in double-parentheses with the actual value. Each line has a bit explaining what it’s expecting.  For example, ((ops_mgr_host)) becomes opsmgr.pcf1.domain.local
    • Remove the parens
    • If you have a GitHub Token, pop that value in, otherwise remove ((github_token))
    • The current pks_major_minor_version regex will get the latest 1.0.x.  If you want to pin it to a specific version, or when PKS 1.1.x is available, you can make those changes here.
    • The ops_mgr_usr and ops_mgr_pwd credentials are those you use to logon to Ops Manager itself.  Typically set when the Ops Manager OVA is deployed.
    • The schedule params should be adjusted to a convenient time to apply the upgrade.  Remember that in addition to the PKS Service being offline (it’s a singleton) during the upgrade, your Kubernetes clusters may be affected if you have the “Upgrade all Clusters” errand set to run in the PKS configuration, so schedule wisely!

  3. Open your cli and log in to concourse with fly

    fly -t concourse login -c http://concourse.domain.local:8080 -u username -p password

  4. Set the new pipeline. Here, I’m naming the pipeline “PKS_Upgrade”. You’ll pass the pipeline.yml with the “-c” param and your edited params.yml with the “-l” param

    fly -t concourse sp -p PKS_Upgrade -c pipeline.yml -l params.yml

    Answer “y” to “Apply Configuration”…

  5. Unpause the pipeline so it can run in the scheduled window

    fly -t concourse up -p PKS_Upgrade

  6. Log in to the Concourse web UI to see our shiny new pipeline!

    If you don't want to deal with the schedule and simply want it to upgrade on-demand, use the pipeline-nosched.yml instead of pipeline.yml; just be aware that when you unpause the pipeline, it'll start doing its thing.  YMMV, but for me, it took about 8 minutes to complete the upgrade.
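
    Once it's unpaused, a couple of fly commands are handy for keeping an eye on it without opening the web UI:

    fly -t concourse pipelines     # confirm PKS_Upgrade is listed and no longer paused
    fly -t concourse builds        # see builds as they kick off in the scheduled window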

Behind the scenes
It's not immediately obvious how the pipeline does what it does. When I first started out, I found it frustrating that there just isn't much to the pipeline itself. To that end, I tried making pipelines that were entirely self-contained. This was good in that you could read the pipeline and see everything it was doing; plus it could be made to run in an air-gapped environment. The downside is that there is no separation – one error in any task and you have to edit the whole pipeline file.
As I learned a little more and poked around in what others were doing, it made sense to split the "tasks" out, keep them in a public GitHub repo and pull them down to run on-demand.

Pipelines generally have two main sections: resources and jobs.
Resources are objects that are used by jobs. In this case, the binary installation files, a zip of the GitHub repo and the schedule are resources.
Jobs are (essentially) made up of plans, and plans have tasks.
Each task in most pipelines uses another source yml. This task.yml indicates which image concourse should build a container from and what it should do in that container (typically, run a script). All of these task components are in the GitHub repo, so when the pipeline job runs, it clones the repo and runs the appropriate task script in a container built from an image pulled from Docker Hub.
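
To make that concrete, here is roughly what one of those task files looks like – the path, file names and image below are illustrative, not the actual contents of my repo:

$ cat tasks/example/task.yml
platform: linux
image_resource:
  type: docker-image
  source: {repository: ubuntu}    # the image concourse pulls from dockerhub to build the container
inputs:
- name: pipelines-repo            # the cloned GitHub repo resource, so the script below is present
run:
  path: pipelines-repo/tasks/example/task.sh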

More info
I've got several pipelines in the repo.  Some of them do what they're supposed to. 🙂 Most of them are derived from others' work, so many thanks to Pivotal Services and Sabha Parameswaran.

Resolutions for 2018

If I put it here, I’m much more likely to follow-through.  Like many, I work best under some pressure.  Here is a list of what I want to do differently (with regard to technology) next year.

  1. Do more blogging.  I can make a ton of excuses for not blogging as much this year.  I love sharing what I’ve learned; the more new stuff I learn, the more I share.  So….
  2. Do more for NSX for vSphere and NSX-T.  I feel strongly that SDN is critical to the future of how datacenters operate.  NSX is the logical leader in this space and will only grow in interest.  There is still a tendency to replicate what was done with pre-SDN technology and I’d like to see modern ways to solve problems while finding and pushing the limits of what can be done in SDN.
  3. Do more with containers and PKS.  The technologies that Pivotal provides are cutting edge.  Already and continuing, containers and applications-as-code methods are growing and will define the datacenter of the future.  Just as we stopped thinking of hardware servers as single-purpose a few years ago, we'll embrace multiple workloads within a VM.
  4. Do more coding.  I love concourse and pipelines, but have a lot to learn.  Let’s find the limits of BOSH and pipelines.  Can we not only deploy, but automate the operation and maintenance of a PaaS solution?
  5. Do more coding.  I feel that as we move to “applications-as-code”, it’s important to understand what that means to developers and operators.  What sort of problems become irrelevant in this approach?  What molehills become mountains?

Hope to see you next year!

 

Building Stand-Alone BOSH and Concourse

This should be the last "how to install concourse" post; with this, I think I've covered all the interesting ways to install it.  Using BOSH is by far my favorite approach.  After this, I hope to post more related to the use of concourse and pipelines.

Overview

There are three phases to this deployment:

  1. BOSH-start – We’ll set up an ubuntu VM to create the BOSH director from.  We’ll be using BOSH v2 and not bosh-init
  2. BOSH Director – This does all the work for us, but has to be instructed how to connect to vSphere
  3. Concourse – We’ll use a deployment manifest in BOSH to deploy concourse

I took the approach that – where possible – I would manually download the files and transfer them to the target, rather than having the install process pull the files down automatically.  In my case, I went through a lot of trial-and-error, so I did not want to pull down the files every time.  In addition, I’d like to get a feel for what a self-contained (no Internet access) solution would look like. BTW, concourse requires Internet access in order to get to docker hub for a container to run its pipelines.

Starting position

Make sure you have the following:

  • Working vSphere environment with some available storage and compute capacity
  • At least one network on a vSwitch or Distributed vSwitch with available IP addresses
  • Account for BOSH to connect to vSphere with permissions to create folders, resource pools, and VMs
  • An Ubuntu  VM template.  Mine is 16.04 LTS
  • PuTTY, Win-SCP or similar tools

BOSH-start

  1. Deploy a VM from your Ubuntu template.  Give it a name – I call mine BOSH-start – and an IP address, then power it on.  In my case, I'm logged in as my own account to avoid using root unless necessary.
  2. Install dependencies:
    sudo apt-get install -y build-essential zlibc zlib1g-dev ruby ruby-dev openssl \
    libxslt-dev libxml2-dev libssl-dev libreadline6 libreadline6-dev \
    libyaml-dev libsqlite3-dev sqlite3
  3. Download BOSH CLI v2, make it executable and move it to the path.  Get the latest version of the BOSH v2 CLI here.
    wget https://s3.amazonaws.com/bosh-cli-artifacts/bosh-cli-2.0.16-linux-amd64
    chmod +x bosh-cli-*
    sudo mv bosh-cli-* /usr/local/bin/bosh
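
    A quick check that the CLI is installed and on your path:

    bosh -v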

BOSH Director

  1. Clone the BOSH Director templates
    mkdir ~/bosh-1
    cd ~/bosh-1
    git clone https://github.com/cloudfoundry/bosh-deployment
  2. Use bosh to create the environment.  This command will create several "state" files and our BOSH director with the information you provide.  Replace the values in angle brackets with your own.
    
    bosh create-env bosh-deployment/bosh.yml \
        --state=state.json \
        --vars-store=creds.yml \
        -o bosh-deployment/vsphere/cpi.yml \
        -o bosh-deployment/vsphere/resource-pool.yml \
        -o bosh-deployment/misc/dns.yml \
        -v internal_dns=<DNS Servers ex: [192.168.100.10,192.168.100.11]> \
        -v director_name=<name of BOSH director. eg:boshdir> \
        -v internal_cidr=<CIDR for network ex: 172.16.9.0/24> \
        -v internal_gw=<Gateway Address> \
        -v internal_ip=<IP Address to assign to BOSH director> \
        -v network_name="<vSphere vSwitch Port Group>" \
        -v vcenter_dc=<vSphere Datacenter> \
        -v vcenter_ds=<vSphere Datastore> \
        -v vcenter_ip=<IP address of vCenter Server> \
        -v vcenter_user=<username for connecting to vCenter Server> \
        -v vcenter_password=<password for that account> \
        -v vcenter_templates=<location for templates ex:/BOSH/templates> \
        -v vcenter_vms=<location for VM.  ex:/BOSH/vms> \
        -v vcenter_disks=<folder on datastore for bosh disks.  ex:bosh-1-disks> \
        -v vcenter_cluster=<vCenter Cluster Name> \
        -v vcenter_rp=<Resource Pool Name>

    One note here: if you do not add the line for dns.yml and internal_dns, your BOSH director will use 8.8.8.8 as its DNS server and won't be able to resolve anything internal.  The create-env step will take a little while to download the bits and set up the Director for you.

  3. Connect to Director.  The following commands will create an alias for the new BOSH environment named “bosh-1”. Replace 10.0.0.6 with the IP of your BOSH Director from the create-env command:
    # Configure local alias
    bosh alias-env bosh-1 -e 10.0.0.6 --ca-cert <(bosh int ./creds.yml --path /director_ssl/ca)
    export BOSH_CLIENT=admin
    export BOSH_CLIENT_SECRET=`bosh int ./creds.yml --path /admin_password`
    bosh -e bosh-1 env
  4. Next we'll need a "cloud config".  This indicates to the BOSH Director how to configure the CPI for interaction with vSphere.  You can find examples and details here.  For expediency, what I ended up with is below.  As usual, you'll want to update the values in angle brackets to match your environment.  Save this file as ~/bosh-1/cloud-config.yml on the BOSH-start VM
    azs:
    - name: z1
      cloud_properties:
        datacenters:
        - name: <vSphere Datacenter Name>
          clusters:
          - <vSphere Cluster Name>: {resource_pool: <Resource Pool in that cluster>}
    properties:
      vcenter:
        address: <IP of FQDN of vCenter Server>
        user: <account to connect to vSphere with>
        password: <Password for that account>
        default_disk_type: thin
        enable_auto_anti_affinity_drs_rules: false
        datacenters:
        - name: <vSphere Datacenter Name>
          vm_folder: /BOSH/vms
          template_folder: /BOSH/templates
          disk_path: prod-disks
          datastore_pattern: <regex filter for datastores to use ex: '\AEQL-THICK0\d' >
          persistent_datastore_pattern: <regex filter for datastores to use ex: '\AEQL-THICK0\d' >
          clusters:
          - <vSphere Cluster Name>: {resource_pool: <Resource Pool in that cluster>}
    
    vm_types:
    - name: default
      cloud_properties:
        cpu: 2
        ram: 4096
        disk: 16_384
    - name: large
      cloud_properties:
        cpu: 2
        ram: 8192
        disk: 32_768
    
    disk_types:
    - name: default
      disk_size: 16_384
      cloud_properties:
        type: thin
    - name: large
      disk_size: 32_768
      cloud_properties:
        type: thin
    
    networks:
    - name: default
      type: manual
      subnets:
      - range: <network CIDR where to place VMs ex:192.168.10.0/26>
        reserved: <reserved range in that CIDR ex:[192.168.10.1-192.168.10.42] >
        gateway: <gateway address for that network>
        az: z1
        dns: <DNS Server IPs ex: [192.168.100.50,192.168.100.150] >
        cloud_properties:
          name: <name of port group to attach created VMs to>
    
    compilation:
      workers: 5
      reuse_compilation_vms: true
      az: z1
      vm_type: large
      network: default
    
    
  5. Update Cloud Config with our file:
    bosh -e bosh-1 update-cloud-config ./cloud-config.yml

    This is surprisingly fast.  You should now have a functional BOSH Director.
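
    You can review what the Director is now holding with:

    bosh -e bosh-1 cloud-config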

Concourse

Let’s deploy something with BOSH!

Prereqs:

  • Copy the URLs for the Concourse and Garden runC BOSH releases from here
  • Copy the URL for the latest Ubuntu Trusty stemcell for vSphere from here
  1. Upload Stemcell.  You’ll see it create a VM with a name beginning with “sc” in vSphere
    bosh -e bosh-1 upload-stemcell <URL to stemcell>
  2. Upload Garden runC release to BOSH
    bosh -e bosh-1 upload-release <URL to garden-runc tgz>
  3. Upload Concourse release to BOSH
    bosh -e bosh-1 upload-release <URL to concourse tgz>
  4. A BOSH deployment must have a stemcell, a release and a manifest.  You can get a concourse manifest from here, or start with the one I’m using.  You’ll notice that a lot of the values here must match those in our cloud-config.  Save the concourse manifest as ~/concourse.yml
    ---
    name: concourse
    
    releases:
    - name: concourse
      version: latest
    - name: garden-runc
      version: latest
    
    stemcells:
    - alias: trusty
      os: ubuntu-trusty
      version: latest
    
    instance_groups:
    - name: web
      instances: 1
      # replace with a VM type from your BOSH Director's cloud config
      vm_type: default
      stemcell: trusty
      azs: [z1]
      networks: [{name: default}]
      jobs:
      - name: atc
        release: concourse
        properties:
          # replace with your CI's externally reachable URL, e.g. https://ci.foo.com
          external_url: http://concourse.mydomain.com
    
          # replace with username/password, or configure GitHub auth
          basic_auth_username: myuser
          basic_auth_password: mypass
    
          postgresql_database: &atc_db atc
      - name: tsa
        release: concourse
        properties: {}
    
    - name: db
      instances: 1
      # replace with a VM type from your BOSH Director's cloud config
      vm_type: large
      stemcell: trusty
      # replace with a disk type from your BOSH Director's cloud config
      persistent_disk_type: default
      azs: [z1]
      networks: [{name: default}]
      jobs:
      - name: postgresql
        release: concourse
        properties:
          databases:
          - name: *atc_db
            # make up a role and password
            role: atc_db
            password: mypass
    
    - name: worker
      instances: 1
      # replace with a VM type from your BOSH Director's cloud config
      vm_type: default
      stemcell: trusty
      azs: [z1]
      networks: [{name: default}]
      jobs:
      - name: groundcrew
        release: concourse
        properties: {}
      - name: baggageclaim
        release: concourse
        properties: {}
      - name: garden
        release: garden-runc
        properties:
          garden:
            listen_network: tcp
            listen_address: 0.0.0.0:7777
    
    update:
      canaries: 1
      max_in_flight: 1
      serial: false
      canary_watch_time: 1000-60000
      update_watch_time: 1000-60000

    A couple of notes:

    • The Worker instance will need plenty of space, especially if you’re planning to use PCF Pipeline Automation, as it’ll have to download the massive binaries from PivNet. You’ll want to make sure that you have a sufficiently large vm type defined in your cloud config and assigned as worker in the Concourse manifest
  5. Now, we have everything we need to deploy concourse.  Notice that we're using BOSH v2 and the deployment syntax is a little different than in BOSH v1.  This command will create a handful of VMs, compile a bunch of packages and push them to the VMs.  You'll need a couple of extra IPs for the compilation VMs – these will go away after the deployment is complete.
    bosh -e bosh-1 -d concourse deploy ./concourse.yml
  6. Odds are that you’ll have to make adjustments to the cloud-config and deployment manifest.  If so, you can easily apply updates to the cloud-config with the bosh update-cloud-config command.
  7. If the deployment is completely hosed up and you need to remove it, you can do so with
    bosh -e bosh-1 -d concourse stop && bosh -e bosh-1 -d concourse delete-deployment

Try it out

  1. Get the IP address of the web instance by running
    bosh -e bosh-1 vms

    From the results, identify the IP address of the web instance:

  2. Point your browser to http://<IP of web instance>:8080
  3. Click Login, select the "main" team and log in with the username and password (myuser and mypass in the example) you used in the manifest


Building a Concourse CI VM on Ubuntu

Recently, I've found myself needing a Concourse CI system. I struggled with the documentation on concourse.ci and couldn't find any comprehensive build guides, and I knew for certain I wasn't going to use VirtualBox.  So, having worked it out, I thought I'd share what I went through to get to a working system.

Starting Position
I discovered that the CentOS version I was using previously did not have a compatible Linux kernel version.  CentOS 7.2 uses kernel 3.10, while Concourse requires 3.19+.  So, I'm starting with a freshly-deployed Ubuntu Server 16.04 LTS this time.

Prep Ubuntu
Not a lot we have to do, but still pretty important:

  1. Make sure the port for concourse is open

    sudo ufw allow 8080
    sudo ufw status

    sudo ufw disable

    I disabled the firewall on ubuntu because it was preventing the concourse worker and concourse web from communicating.

  2. Update and make sure wget is installed

    apt-get update
    apt-get install wget

Postgresql
Concourse expects to use a postgresql database and I don't have one standing by, so let's install it.

  1. Pretty straightforward on Ubuntu too:

    apt-get install postgresql postgresql-contrib

    Enter y to install the bits.  On Ubuntu, we don’t have to take extra steps to configure the service.

  2. Ok, now we have to create an account and a database for concourse. First, let's create the linux account. I'm calling mine "concourse" because I'm creative like that.

    adduser concourse
    passwd concourse

  3. Next, we create the account (aka “role” or “user”) in postgres via the createuser command. In order to do this, we have to switch to the postgres account, do that with sudo:

    sudo -i -u postgres

    Now, while in as postgres we can use the createuser command

    createuser --interactive

    You’ll enter the name of the account, and answer a couple of special permissions questions.

  4. While still logged in as postgres, run this command to create a new database for concourse. I’m naming my database “concourse” – my creativity is legendary. Actually, I think it makes life easier if the role and database are named the same

    createdb concourse

  5. Test by switching users to the concourse account and making sure it can run psql against the concourse database (see the sketch after this list).  While in psql, use this command to set the password for the account in postgres:

    ALTER ROLE concourse WITH PASSWORD 'changeme';

  6. Type \q to exit psql
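
The test in step 5 looks something like this, assuming the role and the database are both named concourse:

    sudo -i -u concourse
    psql -d concourse
    # you should land at a concourse=> prompt; run the ALTER ROLE above there, then \q to exit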

Concourse
Ok, we have a running postgresql service and an account to be used for concourse. Let's go.

  1. Create a folder for concourse. I used /concourse, but you can use /var/lib/whatever/concourse if you feel like it.
  2. Download the binary from concourse.ci/downloads.html into your /concourse folder using wget or transfer via scp.
  3. Create a symbolic link named “concourse” to the file you downloaded and make it executable

    ln -s ./concourse_linux_amd64 ./concourse
    chmod +x ./concourse_linux_amd64

  4. Create keys for concourse

    cd /concourse

    mkdir -p keys/web keys/worker

    ssh-keygen -t rsa -f ./keys/web/tsa_host_key -N ''
    ssh-keygen -t rsa -f ./keys/web/session_signing_key -N ''
    ssh-keygen -t rsa -f ./keys/worker/worker_key -N ''
    cp ./keys/worker/worker_key.pub ./keys/web/authorized_worker_keys
    cp ./keys/web/tsa_host_key.pub ./keys/worker

  5. Create start-up script for Concourse. Save this as /concourse/start.sh:

    /concourse/concourse web \
    --basic-auth-username myuser \
    --basic-auth-password mypass \
    --session-signing-key /concourse/keys/web/session_signing_key \
    --tsa-host-key /concourse/keys/web/tsa_host_key \
    --tsa-authorized-keys /concourse/keys/web/authorized_worker_keys \
    --external-url http://192.168.103.81:8080 \
    --postgres-data-source postgres://concourse:changeme@127.0.0.1/concourse?sslmode=disable &

    /concourse/concourse worker \
    --work-dir /opt/concourse/worker \
    --tsa-host 127.0.0.1 \
    --tsa-public-key /concourse/keys/worker/tsa_host_key.pub \
    --tsa-worker-private-key /concourse/keys/worker/worker_key

    The values shown should definitely be changed for your environment: the external-url uses the IP address of the VM it's running on, and the username and password in the postgres-data-source should reflect what you set up earlier.  Note the trailing & on the web command – it backgrounds the web process so the worker below can start too.  Save the file and be sure to set it as executable (chmod +x ./start.sh)

  6. Run the script “./start.sh”. You should see several lines go by concerning worker-collectors and builder-reapers.
    • If you instead see a message about authentication, you'll want to make sure that 1) the credentials in the script are correct and 2) the account has had its password set in linux and in postgres
    • If you instead see a message about the connection not accepting SSL, be sure that the connection string in the script includes “?sslmode=disable” after the database name
  7. Test by pointing a browser at the value you assigned to the external_url. You should see “no pipelines configured”.  You can login using the basic-auth username and password you specified in the startup script.

    Success!
  8. Back in your SSH session, you can kill it with <CTRL>+C

Finishing Up
Now we just have to make sure that concourse starts when the system reboots. I am certain that there are better/safer/more reliable ways to do this, but here’s what I did:
Use nano or your favorite text editor to add “/concourse/start.sh” to /etc/rc.local ABOVE the line that reads “exit 0”
Now, reboot your VM and retest the connectivity to the concourse page.
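
If you'd rather not lean on rc.local, a minimal systemd unit is one of those better ways – this is a sketch that assumes the paths above, not what I actually ran:

sudo tee /etc/systemd/system/concourse.service <<'EOF'
[Unit]
Description=Concourse CI (web + worker via start.sh)
After=network.target postgresql.service

[Service]
ExecStart=/concourse/start.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now concourse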

Thanks

EMC ECS Community Edition project for how to start the script on boot.

Mitchell Anicas’ very helpful post on setting up postgres on Ubuntu.

Concourse.ci for some wholly inadequate documentation

Alfredo Sánchez for bringing the issue with Concourse and CentOS to my attention

Building a Concourse CI VM on CentOS

Recently, I've found myself needing a Concourse CI system. I struggled with the documentation on concourse.ci and couldn't find any comprehensive build guides, and I knew for certain I wasn't going to use VirtualBox.  So, having worked it out, I thought I'd share what I went through to get to a working system.

WARNING

It has been brought to my attention that CentOS does not have a compatible Linux kernel, so I’ve redone this post using Ubuntu instead.

Starting Position
I’m starting with a freshly-deployed CentOS 7 VM. I use Simon’s template build, so it comes up quickly and reliably.  Logged on as root.

Prep CentOS
Not a lot we have to do, but still pretty important:

  1. Open the firewall port for concourse

    firewall-cmd --add-port=8080/tcp --permanent
    firewall-cmd --reload

    optionally, you can open 5432 for postgres if you feel like it

  2. Update and make sure wget is installed

    yum update
    yum install wget

Postgresql
Concourse expects to use a postgresql database and I don't have one standing by, so let's install it.

  1. Pretty straightforward on CentOS:

    yum install postgresql-server postgresql-contrib

    Enter y to install the bits.

  2. When that step is done, we’ll set it up with this command:

    sudo postgresql-setup initdb

  3. Next, we'll update the postgresql config to allow password authentication. Use your favorite editor to open /var/lib/pgsql/data/pg_hba.conf. We need to update the value in the method column for the IPv4 and IPv6 connections from "ident" to "md5", then save the file (there's a sed sketch for this edit after this list).
  4. Now, let’s start postgresql and set it to run automatically

    sudo systemctl start postgresql
    sudo systemctl enable postgresql

  5. Ok, now we have to create an account and a database for concourse. First, let's create the linux account. I'm calling mine "concourse" because I'm creative like that.

    adduser concourse
    passwd concourse

  6. Next, we create the account (aka “role” or “user”) in postgres via the createuser command. In order to do this, we have to switch to the postgres account, do that with sudo:

    sudo -i -u postgres

    Now, while in as postgres we can use the createuser command

    createuser --interactive

    You’ll enter the name of the account, and answer a couple of special permissions questions.

  7. While still logged in as postgres, run this command to create a new database for concourse. I’m naming my database “concourse” – my creativity is legendary. Actually, I think it makes life easier if the role and database are named the same

    createdb concourse

  8. Test by switching users to the concourse account and making sure it can run psql against the concourse database.  While in psql, use this command to set the password for the account in postgres:

    ALTER ROLE concourse WITH PASSWORD 'changeme';

  9. Type \q to exit psql
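
For reference, the pg_hba.conf edit in step 3 can also be scripted – a sketch that assumes the stock CentOS 7 file, so eyeball your copy before trusting the patterns:

    sudo sed -i 's|^\(host\s\+all\s\+all\s\+127\.0\.0\.1/32\s\+\)ident|\1md5|' /var/lib/pgsql/data/pg_hba.conf
    sudo sed -i 's|^\(host\s\+all\s\+all\s\+::1/128\s\+\)ident|\1md5|' /var/lib/pgsql/data/pg_hba.conf
    sudo systemctl restart postgresql    # pick up the change if postgres is already running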

Concourse
Ok, we have a running postgresql service and an account to be used for concourse. Let's go.

  1. Create a folder for concourse. I used /concourse, but you can use /var/lib/whatever/concourse if you feel like it.
  2. Download the binary from concourse.ci/downloads.html into your /concourse folder using wget or transfer via scp.
  3. Create a symbolic link named “concourse” to the file you downloaded and make it executable

    ln -s ./concourse_linux_amd64 ./concourse
    chmod +x ./concourse_linux_amd64

  4. Create keys for concourse

    cd /concourse

    mkdir -p keys/web keys/worker

    ssh-keygen -t rsa -f ./keys/web/tsa_host_key -N ''
    ssh-keygen -t rsa -f ./keys/web/session_signing_key -N ''
    ssh-keygen -t rsa -f ./keys/worker/worker_key -N ''
    cp ./keys/worker/worker_key.pub ./keys/web/authorized_worker_keys
    cp ./keys/web/tsa_host_key.pub ./keys/worker

  5. Create start-up script for Concourse. Save this as /concourse/start.sh:

    /concourse/concourse web \
    --basic-auth-username myuser \
    --basic-auth-password mypass \
    --session-signing-key /concourse/keys/web/session_signing_key \
    --tsa-host-key /concourse/keys/web/tsa_host_key \
    --tsa-authorized-keys /concourse/keys/web/authorized_worker_keys \
    --external-url http://192.168.103.81:8080 \
    --postgres-data-source postgres://concourse:changeme@127.0.0.1/concourse?sslmode=disable &

    /concourse/concourse worker \
    --work-dir /opt/concourse/worker \
    --tsa-host 127.0.0.1 \
    --tsa-public-key /concourse/keys/worker/tsa_host_key.pub \
    --tsa-worker-private-key /concourse/keys/worker/worker_key

    The values shown should definitely be changed for your environment: the external-url uses the IP address of the VM it's running on, and the username and password in the postgres-data-source should reflect what you set up earlier.  Note the trailing & on the web command – it backgrounds the web process so the worker below can start too.  Save the file and be sure to set it as executable (chmod +x ./start.sh)

  6. Run the script “./start.sh”. You should see several lines go by concerning worker-collectors and builder-reapers.
    • If you instead see a message about authentication, you'll want to make sure that 1) the credentials in the script are correct, 2) the account has had its password set in linux and in postgres and 3) the pg_hba.conf file has been updated to use md5 instead of ident
    • If you instead see a message about the connection not accepting SSL, be sure that the connection string in the script includes “?sslmode=disable” after the database name
  7. Test by pointing a browser at the value you assigned to the external_url. You should see “no pipelines configured”

    Success!
  8. Back in your SSH session, you can kill it with <CTRL>+C

Finishing Up
Now we just have to make sure that concourse starts when the system reboots. I am certain that there are better/safer/more reliable ways to do this, but here’s what I did:

echo "/concourse/start.sh" >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local

Now, reboot your VM and retest the connectivity to the concourse page.

Thanks

EMC ECS Community Edition project for how to start the script on boot.

Mitchell Anicas’ very helpful post on setting up postgres on CentOS.

Concourse.ci for some wholly inadequate documentation