Archive
Building Stand-Alone BOSH and Concourse
This should be the last "how to install Concourse" post; with this, I think I've covered all the interesting ways to install it. Using BOSH is by far my favorite approach. After this, I hope to post more about actually using Concourse and pipelines.
Overview
There are three phases to this deployment:
- BOSH-start – We'll set up an Ubuntu VM to create the BOSH Director from. We'll be using the BOSH CLI v2 and not bosh-init
- BOSH Director – This does all the work for us, but has to be instructed how to connect to vSphere
- Concourse – We’ll use a deployment manifest in BOSH to deploy concourse
I took the approach that – where possible – I would manually download the files and transfer them to the target, rather than having the install process pull them down automatically. I went through a lot of trial-and-error, so I did not want to re-download the files every time. In addition, I'd like to get a feel for what a self-contained (no Internet access) solution would look like. BTW, Concourse itself requires Internet access in order to reach Docker Hub for the containers that run its pipelines.
Starting position
Make sure you have the following:
- Working vSphere environment with some available storage and compute capacity
- At least one network on a vSwitch or Distributed vSwitch with available IP addresses
- Account for BOSH to connect to vSphere with permissions to create folders, resource pools, and VMs
- An Ubuntu VM template. Mine is 16.04 LTS
- PuTTY, Win-SCP or similar tools
BOSH-start
- Deploy a VM from your Ubuntu template. Give it a name – I call mine BOSH-start – and an IP address, then power it on. In my case, I'm logged in with my own account to avoid using root unless necessary.
- Install dependencies:
sudo apt-get install -y build-essential zlibc zlib1g-dev ruby ruby-dev openssl \
  libxslt-dev libxml2-dev libssl-dev libreadline6 libreadline6-dev \
  libyaml-dev libsqlite3-dev sqlite3
- Download BOSH CLI v2, make it executable and move it to the path. Get the latest version of the BOSH v2 CLI here.
wget https://s3.amazonaws.com/bosh-cli-artifacts/bosh-cli-2.0.16-linux-amd64
chmod +x ./bosh-cli-*
sudo mv ./bosh-cli-* /usr/local/bin/bosh
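To confirm the CLI landed in your path, check its version:
bosh --version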
BOSH Director
- Clone the BOSH Director deployment templates from GitHub
mkdir ~/bosh-1
cd ~/bosh-1
git clone https://github.com/cloudfoundry/bosh-deployment
- Use bosh to create the environment. This command will create several "state" files and our BOSH Director with the information you provide. Replace the placeholder values in angle brackets with your own.
bosh create-env bosh-deployment/bosh.yml \
  --state=state.json \
  --vars-store=creds.yml \
  -o bosh-deployment/vsphere/cpi.yml \
  -o bosh-deployment/vsphere/resource-pool.yml \
  -o bosh-deployment/misc/dns.yml \
  -v internal_dns=<DNS Servers ex: [192.168.100.10,192.168.100.11]> \
  -v director_name=<name of BOSH director eg: boshdir> \
  -v internal_cidr=<CIDR for network ex: 172.16.9.0/24> \
  -v internal_gw=<Gateway Address> \
  -v internal_ip=<IP Address to assign to BOSH director> \
  -v network_name="<vSphere vSwitch Port Group>" \
  -v vcenter_dc=<vSphere Datacenter> \
  -v vcenter_ds=<vSphere Datastore> \
  -v vcenter_ip=<IP address of vCenter Server> \
  -v vcenter_user=<username for connecting to vCenter Server> \
  -v vcenter_password=<password for that account> \
  -v vcenter_templates=<location for templates ex: /BOSH/templates> \
  -v vcenter_vms=<location for VMs ex: /BOSH/vms> \
  -v vcenter_disks=<folder on datastore for BOSH disks ex: bosh-1-disks> \
  -v vcenter_cluster=<vCenter Cluster Name> \
  -v vcenter_rp=<Resource Pool Name>
One note here: if you do not add the dns.yml ops file and the internal_dns value, your BOSH Director will use 8.8.8.8 as its DNS server and won't be able to resolve anything internal. This step will take a little while to download the bits and set up the Director for you.
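For reference, a fully filled-in command ends up looking something like this – every value below is made up, so substitute your own:
bosh create-env bosh-deployment/bosh.yml \
  --state=state.json \
  --vars-store=creds.yml \
  -o bosh-deployment/vsphere/cpi.yml \
  -o bosh-deployment/vsphere/resource-pool.yml \
  -o bosh-deployment/misc/dns.yml \
  -v internal_dns='[192.168.100.10,192.168.100.11]' \
  -v director_name=boshdir \
  -v internal_cidr=172.16.9.0/24 \
  -v internal_gw=172.16.9.1 \
  -v internal_ip=172.16.9.10 \
  -v network_name="VM Network" \
  -v vcenter_dc=Lab-DC \
  -v vcenter_ds=EQL-THICK01 \
  -v vcenter_ip=192.168.100.20 \
  -v vcenter_user=bosh@vsphere.local \
  -v vcenter_password='SuperSecret1' \
  -v vcenter_templates=/BOSH/templates \
  -v vcenter_vms=/BOSH/vms \
  -v vcenter_disks=bosh-1-disks \
  -v vcenter_cluster=Cluster01 \
  -v vcenter_rp=BOSH-RP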
- Connect to Director. The following commands will create an alias for the new BOSH environment named “bosh-1”. Replace 10.0.0.6 with the IP of your BOSH Director from the create-env command:
# Configure local alias
bosh alias-env bosh-1 -e 10.0.0.6 --ca-cert <(bosh int ./creds.yml --path /director_ssl/ca)
export BOSH_CLIENT=admin
export BOSH_CLIENT_SECRET=`bosh int ./creds.yml --path /admin_password`
bosh -e bosh-1 env
- Next, we'll need a "cloud config". This tells the BOSH Director how to configure the CPI for interaction with vSphere. You can find examples and details here. What I ended up with is below; as usual, update the placeholder values to match your environment. Save this file as ~/bosh-1/cloud-config.yml on the BOSH-start VM
azs:
- name: z1
  cloud_properties:
    datacenters:
    - name: <vSphere Datacenter Name>
      clusters:
      - <vSphere Cluster Name>: {resource_pool: <Resource Pool in that cluster>}

properties:
  vcenter:
    address: <IP or FQDN of vCenter Server>
    user: <account to connect to vSphere with>
    password: <Password for that account>
    default_disk_type: thin
    enable_auto_anti_affinity_drs_rules: false
    datacenters:
    - name: <vSphere Datacenter Name>
      vm_folder: /BOSH/vms
      template_folder: /BOSH/templates
      disk_path: prod-disks
      datastore_pattern: <regex filter for datastores to use ex: '\AEQL-THICK0\d'>
      persistent_datastore_pattern: <regex filter for datastores to use ex: '\AEQL-THICK0\d'>
      clusters:
      - <vSphere Cluster Name>: {resource_pool: <Resource Pool in that cluster>}

vm_types:
- name: default
  cloud_properties:
    cpu: 2
    ram: 4096
    disk: 16_384
- name: large
  cloud_properties:
    cpu: 2
    ram: 8192
    disk: 32_768

disk_types:
- name: default
  disk_size: 16_384
  cloud_properties:
    type: thin
- name: large
  disk_size: 32_768
  cloud_properties:
    type: thin

networks:
- name: default
  type: manual
  subnets:
  - range: <network CIDR where to place VMs ex: 192.168.10.0/26>
    reserved: <reserved range in that CIDR ex: [192.168.10.1-192.168.10.42]>
    gateway: <gateway address for that network>
    az: z1
    dns: <DNS Server IPs ex: [192.168.100.50,192.168.100.150]>
    cloud_properties:
      name: <name of port group to attach created VMs to>

compilation:
  workers: 5
  reuse_compilation_vms: true
  az: z1
  vm_type: large
  network: default
- Update Cloud Config with our file:
bosh -e bosh-1 update-cloud-config ./cloud-config.yml
This is surprisingly fast. You should now have a functional BOSH Director.
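If you want to double-check what the Director ended up with, you can ask it to echo the cloud config back:
bosh -e bosh-1 cloud-config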
Concourse
Let’s deploy something with BOSH!
Prereqs:
- Copy the URLs for the Concourse and Garden runC BOSH releases from here
- Copy the URL for the latest Ubuntu Trusty stemcell for vSphere from here
- Upload Stemcell. You’ll see it create a VM with a name beginning with “sc” in vSphere
bosh -e bosh-1 upload-stemcell <URL to stemcell>
- Upload Garden runC release to BOSH
bosh -e bosh-1 upload-release <URL to garden-runc tgz>
- Upload Concourse release to BOSH
bosh -e bosh-1 upload-release <URL to concourse tgz>
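- At this point you can confirm that the stemcell and both releases registered with the Director before moving on:
bosh -e bosh-1 stemcells
bosh -e bosh-1 releases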
- A BOSH deployment must have a stemcell, a release and a manifest. You can get a concourse manifest from here, or start with the one I’m using. You’ll notice that a lot of the values here must match those in our cloud-config. Save the concourse manifest as ~/concourse.yml
---
name: concourse

releases:
- name: concourse
  version: latest
- name: garden-runc
  version: latest

stemcells:
- alias: trusty
  os: ubuntu-trusty
  version: latest

instance_groups:
- name: web
  instances: 1
  # replace with a VM type from your BOSH Director's cloud config
  vm_type: default
  stemcell: trusty
  azs: [z1]
  networks: [{name: default}]
  jobs:
  - name: atc
    release: concourse
    properties:
      # replace with your CI's externally reachable URL, e.g. https://ci.foo.com
      external_url: http://concourse.mydomain.com
      # replace with username/password, or configure GitHub auth
      basic_auth_username: myuser
      basic_auth_password: mypass
      postgresql_database: &atc_db atc
  - name: tsa
    release: concourse
    properties: {}

- name: db
  instances: 1
  # replace with a VM type from your BOSH Director's cloud config
  vm_type: large
  stemcell: trusty
  # replace with a disk type from your BOSH Director's cloud config
  persistent_disk_type: default
  azs: [z1]
  networks: [{name: default}]
  jobs:
  - name: postgresql
    release: concourse
    properties:
      databases:
      - name: *atc_db
        # make up a role and password
        role: atc_db
        password: mypass

- name: worker
  instances: 1
  # replace with a VM type from your BOSH Director's cloud config
  vm_type: default
  stemcell: trusty
  azs: [z1]
  networks: [{name: default}]
  jobs:
  - name: groundcrew
    release: concourse
    properties: {}
  - name: baggageclaim
    release: concourse
    properties: {}
  - name: garden
    release: garden-runc
    properties:
      garden:
        listen_network: tcp
        listen_address: 0.0.0.0:7777

update:
  canaries: 1
  max_in_flight: 1
  serial: false
  canary_watch_time: 1000-60000
  update_watch_time: 1000-60000
A couple of notes:
- The worker instance will need plenty of disk space, especially if you're planning to use the PCF Pipeline Automation tasks, as it'll have to download the massive binaries from PivNet. You'll want to make sure that you have a sufficiently large vm_type defined in your cloud-config and assigned to the worker in the Concourse manifest – see the example at the end of these notes
- Now we have everything we need to deploy Concourse. Notice that we're using BOSH v2, so the deployment syntax is a little different than in BOSH v1. This command will create a handful of VMs, compile a bunch of packages and push them to the VMs. You'll need a couple of extra IPs for the compilation VMs – these will go away after the deployment is complete.
bosh -e bosh-1 -d concourse deploy ./concourse.yml
- Odds are that you’ll have to make adjustments to the cloud-config and deployment manifest. If so, you can easily apply updates to the cloud-config with the bosh update-cloud-config command.
- If the deployment is completely hosed up and you need to remove it, you can do so with
bosh -e bosh-1 -d concourse stop && bosh -e bosh-1 -d concourse deld
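- As an example of the worker sizing note above, you could add a dedicated vm_type to the cloud-config and point the worker instance group at it. The name "worker" and the sizes here are just placeholders I made up:
# added to cloud-config.yml under vm_types:
- name: worker
  cloud_properties:
    cpu: 4
    ram: 8192
    disk: 102_400

# and in concourse.yml, on the worker instance group:
vm_type: worker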
Try it out
- Get the IP address of the web instance by running
bosh -e bosh-1 vms
From the results, identify the IP address of the web instance.
- Point your browser to http://<IP of web instance>:8080
- Click Login, select the "main" team, and log in with the username and password (myuser and mypass in the example) you used in the manifest
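- If you want to go a step further than the web UI, grab the fly CLI (the download links are on the Concourse page itself) and log in from your workstation. The target name "lab" and the pipeline file below are just placeholders:
chmod +x fly && sudo mv fly /usr/local/bin/
fly -t lab login -c http://<IP of web instance>:8080
fly -t lab set-pipeline -p hello -c pipeline.yml
fly -t lab unpause-pipeline -p hello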
References:
- http://bosh.io/docs/cloud-config.html – Help for figuring out the schema of the cloud config
- http://bosh.io/docs/networks.html – Help figuring out the network section of the cloud config
- http://concourse.ci/clusters-with-bosh.html – Where I got most of my information. Note that (as of July 2017) the deploying method linked on this page only works for BOSH v1
- Thanks to Danny Berger for this comment on GitHub – it saved me from pulling all my hair out.
Building a Concourse CI VM on Ubuntu
Recently, I've found myself needing a Concourse CI system. I struggled with the documentation on concourse.ci and couldn't find any comprehensive build guides, and I knew for certain I wasn't going to use VirtualBox. So, having worked it out, I thought I'd share what I went through to get to a working system.
Starting Position
I discovered that the CentOS version I was using previously did not have a compatible Linux kernel: CentOS 7.2 uses kernel 3.10, and Concourse requires 3.19+. So, I'm starting with a freshly-deployed Ubuntu Server 16.04 LTS this time.
Prep Ubuntu
Not a lot we have to do, but still pretty important:
- Make sure the port for Concourse is open
sudo ufw allow 8080
sudo ufw status
sudo ufw disable
I disabled the firewall on Ubuntu because it was preventing the Concourse worker and web processes from communicating.
- Update and make sure wget is installed
apt-get update
apt-get install wget
Postgresql
Concourse expects a postgresql database, and I don't have one standing by, so let's install it.
- Pretty straightforward on Ubuntu too:
apt-get install postgresql postgresql-contrib
Enter y to install the bits. On Ubuntu, we don’t have to take extra steps to configure the service.
- OK, now we have to create an account and a database for Concourse. First, let's create the Linux account. I'm calling mine "concourse" because I'm creative like that.
adduser concourse
passwd concourse
- Next, we create the account (aka "role" or "user") in postgres via the createuser command. To do this, we switch to the postgres account with sudo:
sudo -i -u postgres
Now, while in as postgres we can use the createuser command
createuser --interactive
You’ll enter the name of the account, and answer a couple of special permissions questions.
- While still logged in as postgres, run this command to create a new database for concourse. I’m naming my database “concourse” – my creativity is legendary. Actually, I think it makes life easier if the role and database are named the same
createdb concourse
- Test by switching users to the concourse account and making sure it can run psql against the concourse database.
While in psql, use this command to set the password for the account in postgres:
ALTER ROLE concourse WITH PASSWORD 'changeme';
- Type \q to exit psql
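- As an aside, if you'd rather skip the interactive prompts, the same role, database and password can be created non-interactively. This is just a shortcut equivalent to the steps above, assuming the same "concourse" names:
sudo -u postgres createuser --createdb --no-superuser --no-createrole concourse
sudo -u postgres createdb -O concourse concourse
sudo -u postgres psql -c "ALTER ROLE concourse WITH PASSWORD 'changeme';"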
Concourse
OK, we have a running postgresql service and an account to be used for Concourse. Let's go.
- Create a folder for concourse. I used /concourse, but you can use /var/lib/whatever/concourse if you feel like it.
- Download the binary from concourse.ci/downloads.html into your /concourse folder using wget or transfer via scp.
- Create a symbolic link named “concourse” to the file you downloaded and make it executable
ln -s ./concourse_linux_amd64 ./concourse
chmod +x ./concourse_linux_amd64
- Create keys for concourse
cd /concourse
mkdir -p keys/web keys/worker
ssh-keygen -t rsa -f ./keys/web/tsa_host_key -N ''
ssh-keygen -t rsa -f ./keys/web/session_signing_key -N ''
ssh-keygen -t rsa -f ./keys/worker/worker_key -N ''
cp ./keys/worker/worker_key.pub ./keys/web/authorized_worker_keys
cp ./keys/web/tsa_host_key.pub ./keys/worker
- Create start-up script for Concourse. Save this as /concourse/start.sh:
/concourse/concourse web \
  --basic-auth-username myuser \
  --basic-auth-password mypass \
  --session-signing-key /concourse/keys/web/session_signing_key \
  --tsa-host-key /concourse/keys/web/tsa_host_key \
  --tsa-authorized-keys /concourse/keys/web/authorized_worker_keys \
  --external-url http://192.168.103.81:8080 \
  --postgres-data-source postgres://concourse:changeme@127.0.0.1/concourse?sslmode=disable &

/concourse/concourse worker \
  --work-dir /opt/concourse/worker \
  --tsa-host 127.0.0.1 \
  --tsa-public-key /concourse/keys/worker/tsa_host_key.pub \
  --tsa-worker-private-key /concourse/keys/worker/worker_key
The placeholder values should definitely be changed for your environment: --external-url uses the IP address of the VM it's running on, and the username and password in the postgres-data-source should reflect what you set up earlier. The trailing & backgrounds the web process so the worker can start from the same script. Save the file and be sure to set it as executable (
chmod +x ./start.sh
) - Run the script "./start.sh". You should see several lines go by concerning worker-collectors and builder-reapers.
- If you instead see a message about authentication, you'll want to make sure that 1) the credentials in the script are correct, and 2) the account actually has its password set in Linux and in postgres
- If you instead see a message about the connection not accepting SSL, be sure that the connection string in the script includes “?sslmode=disable” after the database name
- Test by pointing a browser at the value you assigned to the external URL. You should see "no pipelines configured". You can log in using the basic-auth username and password you specified in the startup script.
Success!
- Back in your SSH session, you can kill it with <CTRL>+C
Finishing Up
Now we just have to make sure that concourse starts when the system reboots. I am certain that there are better/safer/more reliable ways to do this, but here’s what I did:
Use nano or your favorite text editor to add “/concourse/start.sh” to /etc/rc.local ABOVE the line that reads “exit 0”
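For reference, the tail end of /etc/rc.local should then look roughly like this. If the foreground worker holds up rc.local at boot, appending an & to the start.sh line to background it is a reasonable tweak (that ampersand is my suggestion, not part of the original steps):
/concourse/start.sh
exit 0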
Now, reboot your VM and retest the connectivity to the concourse page.
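If you'd rather retest from the shell than a browser, hitting the ATC's info endpoint works too. The IP here is the example value from the start script, so adjust it to your own external URL:
curl http://192.168.103.81:8080/api/v1/info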
Thanks
EMC ECS Community Edition project for how to start the script on boot.
Mitchell Anicas’ very helpful post on setting up postgres on Ubuntu.
Concourse.ci for some wholly inadequate documentation
Alfredo Sánchez for bringing the issue with Concourse and CentOS to my attention
Building a Concourse CI VM on CentOS
Recently, I've found myself needing a Concourse CI system. I struggled with the documentation on concourse.ci and couldn't find any comprehensive build guides, and I knew for certain I wasn't going to use VirtualBox. So, having worked it out, I thought I'd share what I went through to get to a working system.
WARNING
It has been brought to my attention that CentOS does not have a compatible Linux kernel, so I’ve redone this post using Ubuntu instead.
Starting Position
I’m starting with a freshly-deployed CentOS 7 VM. I use Simon’s template build, so it comes up quickly and reliably. Logged on as root.
Prep CentOS
Not a lot we have to do, but still pretty important:
- Open the firewall port for Concourse
firewall-cmd --add-port=8080/tcp --permanent
firewall-cmd --reload
Optionally, you can open 5432 for postgres if you feel like it.
- Update and make sure wget is installed
yum update
yum install wget
Postgresql
Concourse expects a postgresql database, and I don't have one standing by, so let's install it.
- Pretty straightforward on CentOS:
yum install postgresql-server postgresql-contrib
Enter y to install the bits.
- When that step is done, we’ll set it up with this command:
sudo postgresql-setup initdb
- Next, we'll update the postgresql config to allow passwords. Use your favorite editor to open /var/lib/pgsql/data/pg_hba.conf. We need to update the value in the METHOD column for the IPv4 and IPv6 connections from "ident" to "md5", then save the file. The relevant lines are shown below.
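The lines end up looking something like this (the default CentOS 7 file may differ slightly; only the METHOD column changes):
# before
host    all             all             127.0.0.1/32            ident
host    all             all             ::1/128                 ident

# after
host    all             all             127.0.0.1/32            md5
host    all             all             ::1/128                 md5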
- Now, let’s start postgresql and set it to run automatically
sudo systemctl start postgresql
sudo systemctl enable postgresql
- OK, now we have to create an account and a database for Concourse. First, let's create the Linux account. I'm calling mine "concourse" because I'm creative like that.
adduser concourse
passwd concourse
- Next, we create the account (aka "role" or "user") in postgres via the createuser command. To do this, we switch to the postgres account with sudo:
sudo -i -u postgres
Now, while in as postgres we can use the createuser command
createuser --interactive
You’ll enter the name of the account, and answer a couple of special permissions questions.
- While still logged in as postgres, run this command to create a new database for concourse. I’m naming my database “concourse” – my creativity is legendary. Actually, I think it makes life easier if the role and database are named the same
createdb concourse
- Test by switching users to the concourse account and making sure it can run psql against the concourse database.
While in psql, use this command to set the password for the account in postgres:
ALTER ROLE concourse WITH PASSWORD 'changeme';
- Type \q to exit psql
Concourse
OK, we have a running postgresql service and an account to be used for Concourse. Let's go.
- Create a folder for concourse. I used /concourse, but you can use /var/lib/whatever/concourse if you feel like it.
- Download the binary from concourse.ci/downloads.html into your /concourse folder using wget or transfer via scp.
- Create a symbolic link named “concourse” to the file you downloaded and make it executable
ln -s ./concourse_linux_amd64 ./concourse
chmod +x ./concourse_linux_amd64
- Create keys for concourse
cd /concourse
mkdir -p keys/web keys/worker
ssh-keygen -t rsa -f ./keys/web/tsa_host_key -N ''
ssh-keygen -t rsa -f ./keys/web/session_signing_key -N ''
ssh-keygen -t rsa -f ./keys/worker/worker_key -N ''
cp ./keys/worker/worker_key.pub ./keys/web/authorized_worker_keys
cp ./keys/web/tsa_host_key.pub ./keys/worker
- Create start-up script for Concourse. Save this as /concourse/start.sh:
/concourse/concourse web \
  --basic-auth-username myuser \
  --basic-auth-password mypass \
  --session-signing-key /concourse/keys/web/session_signing_key \
  --tsa-host-key /concourse/keys/web/tsa_host_key \
  --tsa-authorized-keys /concourse/keys/web/authorized_worker_keys \
  --external-url http://192.168.103.81:8080 \
  --postgres-data-source postgres://concourse:changeme@127.0.0.1/concourse?sslmode=disable &

/concourse/concourse worker \
  --work-dir /opt/concourse/worker \
  --tsa-host 127.0.0.1 \
  --tsa-public-key /concourse/keys/worker/tsa_host_key.pub \
  --tsa-worker-private-key /concourse/keys/worker/worker_key
The placeholder values should definitely be changed for your environment: --external-url uses the IP address of the VM it's running on, and the username and password in the postgres-data-source should reflect what you set up earlier. The trailing & backgrounds the web process so the worker can start from the same script. Save the file and be sure to set it as executable (
chmod +x ./start.sh
) - Run the script "./start.sh". You should see several lines go by concerning worker-collectors and builder-reapers.
- If you instead see a message about authentication, you'll want to make sure that 1) the credentials in the script are correct, 2) the account actually has its password set in Linux and in postgres, and 3) the pg_hba.conf file has been updated to use md5 instead of ident
- If you instead see a message about the connection not accepting SSL, be sure that the connection string in the script includes “?sslmode=disable” after the database name
- Test by pointing a browser at the value you assigned to the external_url. You should see “no pipelines configured”
Success!
- Back in your SSH session, you can kill it with <CTRL>+C
Finishing Up
Now we just have to make sure that concourse starts when the system reboots. I am certain that there are better/safer/more reliable ways to do this, but here’s what I did:
echo "/concourse/start.sh" >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local
Now, reboot your VM and retest the connectivity to the concourse page.
Thanks
EMC ECS Community Edition project for how to start the script on boot.
Mitchell Anicas’ very helpful post on setting up postgres on CentOS.
Concourse.ci for some wholly inadequate documentation