Configure Tanzu Kubernetes Grid to use Active Directory

Tanzu Kubernetes Grid includes and supports packages for dex and Gangway.  These are used to extend authentication to LDAP and OIDC endpoints.  Recall that Kubernetes does not do user management or traditional authentication.  As a K8s cluster admin, you can of course create service accounts, but those are not meant to be used by developers.

Think of dex as a translation layer: it uses ‘connectors’ for upstream identity providers (IdPs) like Active Directory for LDAP or Okta for SAML, and presents an OpenID Connect (OIDC) endpoint for k8s to use.

TKG provides not only the packages mentioned above, but also a collection of yaml files and documentation for implementation.  The current (as of May 12, 2020) documentation for configuring authentication is pretty general, and the default values in the config files are suitable for OpenLDAP.  So, I thought I’d share the specific settings for connecting dex to Active Directory.

Assumptions:

    1. TKG Management cluster is deployed
    2. Following the VMware documentation
    3. Using the TKG-provided tkg-extensions
    4. dex will be deployed to the management cluster or to a specific workload cluster

Edits to authentication/dex/vsphere/ldap/03-cm.yaml – from Docs

  1. Replace <MGMT_CLUSTER_IP> with the IP address of one of the control plane nodes of your management cluster; this is one of the control plane nodes where dex will be running.
  2. If the LDAP server is listening on the default port 636, which is the secured configuration, replace <LDAP_HOST> with the IP or DNS address of your LDAP server. If the LDAP server is listening on any other port, replace <LDAP_HOST> with the address and port of the LDAP server, for example 192.168.10.22:389 or ldap.mydomain.com:389.  Never, never, never use unencrypted LDAP.  You’ll need to specify port 636, unless your targeted AD controller is also a Global Catalog server, in which case you’ll specify port 3269.  Check with the AD team if you’re unsure.  (See the connector sketch after this list.)
  3. If your LDAP server is configured to listen on an unsecured connection, uncomment insecureNoSSL: true. Note that such connections are not recommended as they send credentials in plain text over the network. Never, never, never use unencrypted LDAP.
  4. Update the userSearch and groupSearch parameters with your LDAP server configuration.  This needs much more detail for Active Directory – see the steps below
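Taken together, the first three edits land in the top portion of the ldap connector entry in 03-cm.yaml. Here is a minimal sketch of how that area might look once edited; the hostname is a placeholder, the surrounding keys and the port in the issuer URL come from the tkg-extensions file itself, and anything not shown keeps its shipped value:

    # Sketch only - placeholders, not values from the shipped file
    issuer: https://<MGMT_CLUSTER_IP>:<DEX_PORT>   # the OIDC endpoint dex exposes; the port is set in the provided file
    connectors:
    - type: ldap
      id: ldap
      name: LDAP
      config:
        host: dc01.ragazzilab.com:636              # hypothetical AD controller, LDAPS; 3269 for a Global Catalog
        # insecureNoSSL: true                      # leave commented out - never unencrypted LDAP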

Edits to authentication/dex/vsphere/ldap/03-cm.yaml – AD specific

  1. Obtain the root CA public certificate for your AD controller. Save a base64-encoded version of the certificate: for example, cat root64.cer | base64 > rootcer.b64 will write the data from the PEM-encoded root64.cer file into a base64-encoded file named rootcer.b64
  2. Add the base64-encoded certificate content to the rootCAData key.  Be sure to remove the leading “#”.  This is an alternative to using the rootCA key, where we’ll have to place the file on each Control Plane node
  3. Update the userSearch values as follows:

     key       | default                      | set to                                                  | notes
     baseDN    | ou=people,dc=vmware,dc=com   | DN of the OU in AD under which user accounts are found  | Example: ou=User Accounts,DC=ragazzilab,DC=com
     filter    | "(objectClass=posixAccount)" | "(objectClass=person)"                                  |
     username  | uid                          | userPrincipalName                                       |
     idAttr    | uid                          | DN                                                      | Case-sensitive
     emailAttr | mail                         | userPrincipalName                                       |
     nameAttr  | givenName                    | cn                                                      |
  4. Update the groupSearch values as follows (both tables are pulled together in the sketch after this list):

     key       | default                     | set to                                                    | notes
     baseDN    | ou=people,dc=vmware,dc=com  | DN of the OU in AD under which security groups are found  | Example: DC=ragazzilab,DC=com
     filter    | "(objectClass=posixGroup)"  | "(objectClass=group)"                                     |
     userAttr  | uid                         | DN                                                        | Case-sensitive
     groupAttr | memberUid                   | "member:1.2.840.113556.1.4.1941:"                         | Necessary to search within nested groups in AD
     nameAttr  | cn                          | cn                                                        |
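Plugging the "set to" values from both tables into the connector's config section gives something like the sketch below. This is a hedged example using the ragazzilab.com values above; your base DNs, host and certificate data will differ, and any connector keys not discussed here keep their shipped defaults:

    config:
      host: dc01.ragazzilab.com:636                   # hypothetical AD controller from the earlier sketch
      rootCAData: <CONTENTS OF rootcer.b64>           # base64-encoded root CA from step 1, leading "#" removed
      userSearch:
        baseDN: ou=User Accounts,DC=ragazzilab,DC=com
        filter: "(objectClass=person)"
        username: userPrincipalName
        idAttr: DN
        emailAttr: userPrincipalName
        nameAttr: cn
      groupSearch:
        baseDN: DC=ragazzilab,DC=com
        filter: "(objectClass=group)"
        userAttr: DN
        groupAttr: "member:1.2.840.113556.1.4.1941:"  # LDAP_MATCHING_RULE_IN_CHAIN; resolves nested AD groups
        nameAttr: cn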

Other important Notes
When you create the oidc secret in the workload clusters running Gangway, the clientSecret value is base64-encoded, but the corresponding secret for the workload cluster in the staticClients section of the dex ConfigMap is the decoded (plain-text) value. This can be confusing, since the decoded value is randomly generated and looks like it could itself be encoded.
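A hedged illustration of that relationship, with made-up names and a made-up secret value: the same random string appears base64-encoded in the workload cluster's Secret and in plain text in the dex ConfigMap's staticClients entry.

    # Workload cluster: the oidc Secret consumed by Gangway (data values are base64-encoded)
    apiVersion: v1
    kind: Secret
    metadata:
      name: oidc                                  # hypothetical name and namespace
      namespace: tanzu-system-auth
    type: Opaque
    data:
      clientSecret: TXkxU2VjcmV0MFZhbHVlIQ==      # base64 of "My1Secret0Value!"

    # Management cluster: the matching entry inside the dex ConfigMap's staticClients (plain text)
    staticClients:
    - id: my-workload-cluster
      name: my-workload-cluster
      secret: My1Secret0Value!                    # decoded value of clientSecret above
      redirectURIs:
      - https://<GANGWAY_FQDN>/callback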


Logging into a Kubernetes cluster with an OIDC LDAP account

I confess, most of my experience with Kubernetes is with Pivotal Container Service (PKS) Enterprise.  PKS makes it rather easy to get started and I found that I took some tasks for granted.

In PKS Enterprise, one can use the pks cli not only to life-cycle clusters, but also to obtain the credentials to the cluster and automatically update the kubeconfig with the account information.  So, administrative/operations users can run the command “pks get-credentials my-cluster” to have a kubeconfig updated with the authentication tokens and parameters to connect to my-cluster.

K8s OIDC using UAA on PKS

The PKS controller includes the User Account and Authentication (UAA) component, which is used to authenticate users into PKS Enterprise.  UAA can also be easily configured to connect to an existing LDAP service – this is the desired configuration in most organizations, so that user accounts exist in one place (Active Directory in my example).

So, I found myself wondering “I don’t want to provide the PKS CLI to developers, so how can they connect kubectl to the cluster?”

Assumptions:

  • PKS Enterprise on vSphere (with or without NSX-T)
  • Active Directory
  • Developer user account belongs to the k8s-devs security group in AD

Prerequisite configuration:

  1. UAA on PKS is configured with the UAA User Account Store set to LDAP Server.  This links UAA to LDAP/Active Directory
  2. User Search Filter: userPrincipalName={0}  This means that users can log in as user@domain.tld
  3. Group Search Filter: member={0} This ensures that AD groups may be used rather than specifying individual users
  4. Configure created clusters to use UAA as the OIDC provider: Enabled.  This pre-configures the Kubernetes API server to use OpenID Connect with UAA. If not using PKS Enterprise, you’ll need to provide another OpenID Connect-compliant endpoint (like dex), link it to Active Directory and update the Kubernetes API server configuration manually to use OpenID Connect authentication (see the sketch below).
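For that last, non-PKS case, the API server is pointed at the OIDC provider with flags. Here is a hedged sketch of the relevant flags as they might appear in a kube-apiserver static pod manifest; the issuer URL, client id, claim names and CA path are placeholders, and how you actually set the flags depends on how the cluster was deployed:

    # Excerpt from a kube-apiserver static pod manifest - OIDC flags only
    spec:
      containers:
      - name: kube-apiserver
        command:
        - kube-apiserver
        # ...existing flags...
        - --oidc-issuer-url=https://uaa.pks.mydomain.com:8443/oauth/token   # UAA or dex issuer URL
        - --oidc-client-id=pks_cluster_client
        - --oidc-username-claim=user_name      # id_token claim to use as the username
        - --oidc-groups-claim=roles            # id_token claim carrying group membership
        - --oidc-ca-file=/etc/kubernetes/oidc/ca.pem   # CA that signed the issuer's certificate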

 

Operator: Create Role and RoleBinding:

While authentication is handled by OIDC/UAA/LDAP, authorization must be configured on the cluster to provide access to resources via RBAC.  This is done by defining a Role (or ClusterRole) that indicates what actions may be taken on what resources, and a RoleBinding that links the Role to one or more “subjects”.

  1.  Authenticate to the Kubernetes cluster with an administrative account (for example, using the PKS cli to connect)
  2.  Create a yaml file for our Role and RoleBinding:
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: developers
    rules:
    - apiGroups: ["", "extensions", "apps"]
      resources: ["deployments", "replicasets", "pods"]
      verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] 
      # You can also use ["*"]
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: developer-dev-binding
    subjects:
    - kind: Group
      name: k8s-devs
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: developers
      apiGroup: rbac.authorization.k8s.io
    

    In the example above, we’re creating a Role named “developers”, granting access to the core, extensions and apps API groups and several actions against deployments, replicaSets and pods. Notice that developers in this role would not have access to secrets (for example)
    The example RoleBinding binds a group named “k8s-devs” to the developers role. Notice that we have not created the k8s-devs group in Kubernetes or UAA; it exists in Active Directory

  3. Use kubectl to apply the yaml, creating the Role and RoleBinding in the targeted namespace

 

Creating the kubeconfig – the hard way

To get our developer connected with kubectl, they’ll need a kubeconfig with the authentication and connection details.  The hard-way steps are:

  1. Operator obtains the cluster’s certificate authority data.  This can be done via curl or by copying the value from the existing kubeconfig.
  2. Operator creates a template kubeconfig, replacing the values specified, then sends it to the developer user
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: <OBTAINED IN STEP 1>
        server: <FQDN of the cluster's master node>
      name: PROVIDED-BY-ADMIN
    contexts:
    - context:
        cluster: PROVIDED-BY-ADMIN
        user: PROVIDED-BY-USER
      name:  PROVIDED-BY-ADMIN
    current-context: PROVIDED-BY-ADMIN
    kind: Config
    preferences: {}
    users:
    - name: PROVIDED-BY-USER
      user:
        auth-provider:
          config:
            client-id: pks_cluster_client
            cluster_client_secret: ""
            id-token: PROVIDED-BY-USER
            idp-issuer-url: https://PROVIDED-BY-ADMIN:8443/oauth/token
            refresh-token:  PROVIDED-BY-USER
          name: oidc
  3. The developer user obtains the id_token and refresh_token from UAA, via a curl command
    curl 'https://PKS-API:8443/oauth/token' -k -XPOST \
      -H 'Accept: application/json' \
      -d "client_id=pks_cluster_client&client_secret=&grant_type=password&username=UAA-USERNAME&response_type=id_token" \
      --data-urlencode password=UAA-PASSWORD
  4. The developer user updates the kubeconfig with the id_token and refresh_token values returned by UAA

 

Creating the kubeconfig – the easy way

Assuming the developer is using Mac or Linux…

  1. Install jq on developer workstation
  2. Download the get-pks-k8s-config.sh script, make it executable (chmod +x get-pks-k8s-config.sh)
  3. Execute the script (replace the params with your own)
    ./get-pks-k8s-config.sh --API=api.pks.mydomain.com \
    --CLUSTER=cl1.pks.mydomain.com \
    --USER=dev1@mydomain.com \
    --NS=scratch
    • API – FQDN to PKS Controller, for UAA
    • CLUSTER – FQDN to master node of k8s cluster
    • USER – userPrincipalName for the user
    • NS – Namespace to target; optional
  4. After entering the user’s password, the script will set the params in the kubeconfig and switch context automatically

Try it out
Our developer user should be able to “see” pods but not namespaces, for example:

dev1 can see pods but not namespaces

 

Creating the kubeconfig – the easiest way

  1. Provide the developer with the PKS CLI tool; remember, we have not added them to any group or role with PKS admin permissions.
  2. Provide the developer with the PKS API endpoint FQDN and the cluster name
  3. The developer may run this command to generate the updated kubeconfig and set the current context
    pks get-kubeconfig CLUSTERNAME -a API -u USER -k

    • CLUSTERNAME is the name of the cluster
    • API – FQDN to PKS Controller
    • USER – userPrincipalName for the user
  4. You’ll be prompted for the account password. Once entered, the tool will fetch the user-specific kubeconfig.
Use PKS CLI to get the kubeconfig