
Logging into a Kubernetes cluster with an OIDC LDAP account

09/09/2019

I confess, most of my experience with Kubernetes is with Pivotal Container Service (PKS) Enterprise.  PKS makes it rather easy to get started and I found that I took some tasks for granted.

In PKS Enterprise, one can use the pks CLI not only to life-cycle clusters, but also to obtain the credentials to a cluster and automatically update the kubeconfig with the account information.  So, administrative/operations users can run the command “pks get-credentials my-cluster” to have their kubeconfig updated with the authentication tokens and parameters needed to connect to my-cluster.
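
For example, an operator might run something like this (the cluster name is illustrative):

    # Fetch credentials for "my-cluster" and merge them into ~/.kube/config
    pks get-credentials my-cluster

    # Confirm kubectl is now pointed at the cluster
    kubectl config current-context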

K8s OIDC using UAA on PKS

The PKS controller includes the User Account and Authentication (UAA) component, which is used to authenticate users into PKS Enterprise.  UAA can also be easily configured to connect to an existing LDAP service – this is the desired configuration in most organizations so that user accounts exist in one place (Active Directory in my example).

So, I found myself wondering “I don’t want to provide the PKS CLI to developers, so how can they connect kubectl to the cluster?”

Assumptions:

  • PKS Enterprise on vSphere (with or without NSX-T)
  • Active Directory
  • Developer user account belongs to the k8s-devs security group in AD

Prerequisite configuration:

  1. UAA on PKS configured with UAA User Account Store: LDAP Server.  This links UAA to LDAP/Active Directory
  2. User Search Filter: userPrincipalName={0}  This means that users can log in as user@domain.tld
  3. Group Search Filter: member={0} This ensures that AD groups may be used rather than specifying individual users
  4. Configure created clusters to use UAA as the OIDC provider: Enabled.  This pre-configures the Kubernetes API server to use OpenID Connect with UAA. If not using PKS Enterprise, you’ll need to provide another OpenID Connect-compliant endpoint (such as Dex), link it to Active Directory, and update the Kubernetes API server manually to use OpenID authentication (see the example flags after this list).
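
For reference, here is a minimal sketch of the manual route: the OIDC flags on the kube-apiserver that PKS otherwise sets for you. The values and claim names are placeholders and depend on your provider:

    # Illustrative only; not needed when PKS configures OIDC for you.
    # Add these flags to the kube-apiserver (e.g. in its static pod manifest):
    --oidc-issuer-url=https://your-oidc-provider.example.com
    --oidc-client-id=kubernetes
    --oidc-ca-file=/etc/kubernetes/pki/oidc-ca.pem
    --oidc-username-claim=email
    --oidc-groups-claim=groups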

 

Operator: Create Role and RoleBinding:

While authentication is handled by OIDC/UAA/LDAP, authorization must be configured on the cluster to provide access to resources via RBAC.  This is done by defining a Role (or ClusterRole) that indicates what actions may be taken on what resources, and a RoleBinding which links the Role to one or more “subjects”.

  1.  Authenticate to the Kubernetes cluster with an administrative account (for example, using the PKS CLI to connect)
  2.  Create a yaml file for our Role and RoleBinding:
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: developers
    rules:
    - apiGroups: ["", "extensions", "apps"]
      resources: ["deployments", "replicasets", "pods"]
      verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] 
      # You can also use ["*"]
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: developer-dev-binding
    subjects:
    - kind: Group
      name: k8s-devs
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: developers
      apiGroup: rbac.authorization.k8s.io
    

    In the example above, we’re creating a Role named “developers”, granting access to the core, extensions and apps API groups and several actions against deployments, ReplicaSets and pods. Notice that developers in this role would not have access to secrets (for example)
    The example RoleBinding binds a group named “k8s-devs” to the developers role. Notice that we have not created the k8s-devs group in Kubernetes or UAA; it exists in Active Directory

  3. Use kubectl to apply the yaml, creating the Role and RoleBinding in the targeted namespace, as shown below
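
    For example, assuming the yaml above was saved as developers-role.yml (the filename and the "scratch" namespace are placeholders):

    kubectl apply -f developers-role.yml --namespace scratch

    # Confirm the objects exist in the target namespace
    kubectl get role,rolebinding --namespace scratch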

 

Creating the kubeconfig – the hard way

To get our developer connected with kubectl, they’ll need a kubeconfig with the authentication and connection details.  The Hard way steps are:

  1. Operator obtains the cluster’s certificate authority data.  This can be done via curl or by copying the value from an existing kubeconfig (see the example after this list).
  2. Operator creates a template kubeconfig, replacing the value specified, then sends it to the developer user
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: <OBTAINED IN STEP 1>
        server: <FQDN of the cluster's master node>
      name: PROVIDED-BY-ADMIN
    contexts:
    - context:
        cluster: PROVIDED-BY-ADMIN
        user: PROVIDED-BY-USER
      name:  PROVIDED-BY-ADMIN
    current-context: PROVIDED-BY-ADMIN
    kind: Config
    preferences: {}
    users:
    - name: PROVIDED-BY-USER
      user:
        auth-provider:
          config:
            client-id: pks_cluster_client
            client-secret: ""
            id-token: PROVIDED-BY-USER
            idp-issuer-url: https://PROVIDED-BY-ADMIN:8443/oauth/token
            refresh-token:  PROVIDED-BY-USER
          name: oidc
  3. The developer user obtains the id_token and refresh_token from UAA, via a curl command
    curl 'https://PKS-API:8443/oauth/token' -k -XPOST \
      -H 'Accept: application/json' \
      -d 'client_id=pks_cluster_client&client_secret=&grant_type=password&username=UAA-USERNAME&response_type=id_token' \
      --data-urlencode 'password=UAA-PASSWORD'
  4. The developer user updates the kubeconfig with the id_token and refresh_token values returned by UAA
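
    For step 1 above, one way for the operator to pull the certificate authority data (and the API server address) out of an existing admin kubeconfig (the context name "my-cluster" is illustrative):

    kubectl config view --raw -o jsonpath='{.clusters[?(@.name=="my-cluster")].cluster.certificate-authority-data}'
    kubectl config view --raw -o jsonpath='{.clusters[?(@.name=="my-cluster")].cluster.server}'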

 

Creating the kubeconfig – the easy way

Assuming the developer is using Mac or Linux…

  1. Install jq on developer workstation
  2. Download the get-pks-k8s-config.sh script, make it executable (chmod +x get-pks-k8s-config.sh)
  3. Execute the script (replace the params with your own)
    ./get-pks-k8s-config.sh --API=api.pks.mydomain.com \
    --CLUSTER=cl1.pks.mydomain.com \
    --USER=dev1@mydomain.com \
    --NS=scratch
    • API – FQDN to PKS Controller, for UAA
    • CLUSTER – FQDN to master node of k8s cluster
    • USER – userPrincipalName for the user
    • NS – Namespace to target; optional
  4. After entering the user’s password, the script will set the params in the kubeconfig and switch context automatically
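
    For the curious, a rough sketch of what a script like this does under the covers (this is not the actual script, and the variable names are mine):

    # Get tokens from UAA (same call as the hard way), then wire them into the kubeconfig
    TOKENS=$(curl -sk "https://${API}:8443/oauth/token" \
      -H 'Accept: application/json' \
      -d "client_id=pks_cluster_client&client_secret=&grant_type=password&username=${USER}&response_type=id_token" \
      --data-urlencode "password=${PASS}")
    ID_TOKEN=$(echo "${TOKENS}" | jq -r .id_token)
    REFRESH_TOKEN=$(echo "${TOKENS}" | jq -r .refresh_token)

    kubectl config set-credentials "${USER}" \
      --auth-provider=oidc \
      --auth-provider-arg=idp-issuer-url="https://${API}:8443/oauth/token" \
      --auth-provider-arg=client-id=pks_cluster_client \
      --auth-provider-arg=id-token="${ID_TOKEN}" \
      --auth-provider-arg=refresh-token="${REFRESH_TOKEN}"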

Try it out
Our developer user should be able to “see” pods but not namespaces, for example:

dev1 can see pods but not namespaces
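
In text form, that looks roughly like this (output abbreviated and illustrative):

    kubectl get pods -n scratch
    # NAME         READY   STATUS    RESTARTS   AGE
    # ...

    kubectl get namespaces
    # Error from server (Forbidden): namespaces is forbidden: User "dev1@mydomain.com"
    # cannot list resource "namespaces" in API group "" at the cluster scope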

 

Creating the kubeconfig – the easiest way

  1. Provide the developer with the PKS CLI tool; remember, we have not added them to any group or role with PKS admin permissions.
  2. Provide the developer with the PKS API endpoint FQDN and the cluster name
  3. The developer may run this command to generate the updated kubeconfig and set the current context
    pks get-kubeconfig CLUSTERNAME -a API -u USER -k

    • CLUSTERNAME is the name of the cluster
    • API – FQDN to PKS Controller
    • USER – userPrincipalName for the user
  4. You’ll be prompted for the account password. Once entered, the tool will fetch the user-specific kubeconfig.
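
    Using the sample values from earlier (and guessing the cluster name is cl1, based on its FQDN), that might look like:

    pks get-kubeconfig cl1 -a api.pks.mydomain.com -u dev1@mydomain.com -k
    # You'll be prompted for dev1's password; the kubeconfig is then updated in place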

Use PKS CLI to get the kubeconfig

 

Getting Started with CredHub in Concourse

05/10/2019

First, some background – I promise to keep it short.  You should never have credentials in a public GitHub repo.  It’s probably not good to have them in a private repo either.  At Pivotal, the GitHub client is configured with credalert, which complains when I try to push credentials to GitHub.  To maintain compliance, I needed a way to update the stuff in the repo and still have my credentials.  Concourse supports a handful of credential managers: the AWS ones, Vault, and CredHub.  Since CredHub is built into the Ops Manager-deployed BOSH director, we don’t have to spin anything else up.

The simplified diagram shows how this will work.  CredHub is on the BOSH director, so it’ll need to be reachable from the Concourse web (ATC) service and from anywhere the credhub cli will be used. If your BOSH director is behind a NAT, you may want to configure a DNAT so it can be reached.

In this case, we’re using a “management/infrastructure” Operations Manager and BOSH director to deploy and manage concourse and minio.  The pipelines on concourse will be used to deploy and maintain other foundations in the environment.

Configure UAA

  1. Log on to Ops Manager and navigate to the Status tab to record the IP address of the BOSH director.  If your BOSH director is behind a NAT, note its DNAT address instead.
  2. Navigate to the credentials tab.  We’re going to need the uaa_login_client_credentials password and the uaa_admin_client_credentials password.
  3. While here, save the ops manager root ca to your computer.  From the installation dashboard, click on your name in the upper right, select Settings.  Then click Advanced and Download Root CA.
  4. SSH into your Ops Manager:  ssh ubuntu@<ops manager name or IP>
  5. Set uaac target

    uaac target https://<IP of BOSH director>:8443 --ca-cert /var/tempest/workspaces/default/root_ca_certificate

  6. Log in to uaac – ok, this gets awkward

    uaac token owner get login -s <uaa_login_client_credentials>

    • Replace <uaa_login_client_credentials> with the value you saved
    • When prompted for a username enter admin
    • For the password, enter the uaa_admin_client_credentials value you saved
    • You should see “Successfully fetched token…”
  7. Create a uaac client for concourse to use with credhub

    uaac client add --name concourse-to-credhub --authorized-grant-types client_credentials --authorities credhub.read,credhub.write --access-token-validity 30 --secret MySecretPassword

    Please replace MySecretPassword with something else

  8. Create a uaac user for use with the CredHub cli

    uaac user add credhub --email credhub@whatever.com -p MySecretPassword
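
    If you want to confirm both were created, you can list them from the same uaac session (output omitted here):

    uaac clients
    uaac users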

Try out Credhub cli

    1. Download and install the credhub cli.  On mac, you can use brew install credhub
    2. From a terminal/command line run this to point the cli to the credhub instance on the BOSH director:

      credhub api -s <IP of BOSH director>:8844 --ca-cert ./root_ca_certificate

      • Replace <IP of BOSH director> with the name or reachable IP of the director
      • root_ca_certificate is the root CA from ops manager you downloaded earlier
    3. Log in to credhub:

      credhub login -u credhub -p MySecretPassword

      The username and password are those of the user we added to UAA earlier

    4. Set a test value:

      credhub set --type value --name /testval --value hello

      Here we’re setting a key (aka credential) named /testval to the value “hello”. Note that everything stored in CredHub starts with a slash and that there are several types of credentials that can be stored, the simplest being “value”

    5. Get our value:

      credhub get --name /testval

      This will return the metadata for our key/credential

Configuring Concourse to use CredHub

The Concourse web node (ATC) must be configured to look to CredHub as a credential manager. I’m using BOSH-deployed Concourse, so I’ll simply update the deployment manifest with the new params. If you’re using Concourse via docker-compose, you’ll want to update the yml with the additional params as described here; a rough sketch follows.
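
If you go the docker-compose route, the equivalent is to pass the CredHub settings to the web container as environment variables. A rough sketch using the values from this post (the CA cert path is a placeholder):

    # docker-compose.yml excerpt; only the CredHub-related settings on the web service shown
    web:
      environment:
        CONCOURSE_CREDHUB_URL: https://192.168.100.200:8844
        CONCOURSE_CREDHUB_CLIENT_ID: concourse-to-credhub
        CONCOURSE_CREDHUB_CLIENT_SECRET: MySecretPassword
        CONCOURSE_CREDHUB_CA_CERT: /concourse-keys/credhub_ca.crt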

For Concourse deployed via BOSH and using concourse-bosh-deployment, we’ll include the operations/credhub.yml file and the additional params.  For me, this looks like:

bosh -e core deploy -d concourse concourse.yml \
-l ../versions.yml \
--vars-store cluster-creds.yml \
-o operations/static-web.yml \
-o operations/basic-auth.yml \
-o operations/scale.yml \
-o operations/privileged-http.yml \
-o operations/credhub.yml \
--var web_ip=192.168.100.205 \
--var external_url=http://concourse.ragazzilab.com \
--var network_name=INFRA \
--var web_vm_type=small.disk \
--var db_vm_type=small.disk \
--var azs=[BOSH] \
--var db_persistent_disk_type=10240 \
--var worker_vm_type=concourse.worker \
--var deployment_name=concourse \
--var local_user.username=myuser \
--var local_user.password=mypass \
--var web_instances=1 \
--var worker_instances=1 \
--var syslog_address=syslog.ragazzilab.com \
--var syslog_port=514 \
--var syslog_permitted_peer=syslog.ragazzilab.com \
--var credhub_url="https://192.168.100.200:8844" \
--var credhub_client_id=concourse-to-credhub \
--var credhub_client_secret=MySecretPassword \
--var credhub_ca_cert="$(cat root_ca_certificate)"

Test a pipeline

    1. Use credhub cli to create a value

      credhub set --type value --name /concourse/main/hello-credhub/hello --value World

      Concourse has a default pattern for looking up interpolation values. It’s /concourse/<team name>/<pipeline name>/<key>

    2. Get the test pipeline from here.

      jobs:
      - name: hello-credhub
        plan:
        - do:
          - task: hello-credhub
            config:
              platform: linux
              image_resource:
                type: docker-image
                source:
                  repository: ubuntu
              run:
                path: sh
                args:
                - -exc
                - |
                  echo "Hello $WORLD_PARAM"
              params:
                WORLD_PARAM: ((hello))

    3. Use fly to set the test pipeline

      fly -t concourse login -c http://concourse -u myuser -p mypass -n main
      fly -t concourse sp -p hello-credhub -c hello-credhub.yml

    4. Run the test pipeline in Concourse. If all goes well, it should say “Hello World”.