
Saturday, June 22, 2024

vSphere with Tanzu using NSX-T - Part31 - Troubleshooting inaccessible TKC with expired control plane certs

In the course of managing multiple Tanzu Kubernetes Clusters (TKC), I encountered an unexpected issue: the control plane certificates had expired, preventing us from accessing the cluster using the kubeconfig file. To make matters worse, we were unable to SSH into the TKC control plane Virtual Machines (VMs) due to the vmware-system-user password expiring in accordance with STIG Hardening.

The recommended workaround for updating the vmware-system-user password expiry involves applying a specific daemonset on Guest Clusters. However, this approach requires access to the TKC using its admin kubeconfig file, which was unavailable due to the expired certificates.

Warning: In case of critical production issues that affect the accessibility of your Tanzu Kubernetes Cluster (TKC), it is strongly advised to submit a product support request to our team for assistance. This will ensure that you receive expert guidance and a timely resolution to help minimize the impact on your environment.

To resolve this issue, I followed an alternative workaround: I reset the root password of the TKC control plane VMs through the vCenter VM console, as outlined in this knowledge base article. Once the root password was reset, I was able to log directly into the TKC control plane VM using the VM console.




After gaining access to the TKC control plane VM, I proceeded to renew the control plane certificates using kubeadm, as detailed in this blog post. It's essential to apply this process to all control plane nodes in your cluster to ensure proper functionality.

root [ /etc/kubernetes ]# kubeadm certs check-expiration

root [ /etc/kubernetes ]# kubeadm certs renew all
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[renew] Error reading configuration from the Cluster. Falling back to default configuration

certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed

Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
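Since kubeadm runs these control plane components as static pods managed by the kubelet, one generic way to restart them (a sketch, not the exact steps from this environment) is to temporarily move their manifests out of /etc/kubernetes/manifests and then move them back:

# Move the static pod manifests out; the kubelet stops those pods within a few seconds
mkdir -p /etc/kubernetes/manifests-backup
mv /etc/kubernetes/manifests/*.yaml /etc/kubernetes/manifests-backup/
sleep 20
# Move them back; the kubelet recreates the pods, which pick up the renewed certificates
mv /etc/kubernetes/manifests-backup/*.yaml /etc/kubernetes/manifests/
# Verify the control plane containers come back up
crictl ps | egrep "kube-apiserver|kube-controller-manager|kube-scheduler|etcd"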

Although this workaround required some additional steps, it ultimately allowed us to regain access to our Tanzu Kubernetes Cluster and maintain its security and functionality.

Hope it was useful. Cheers!

Saturday, April 20, 2024

Hugging Face - Part5 - Deploy your LLM app on Kubernetes

In our previous blog post, we explored the process of containerizing the Large Language Model (LLM) from Hugging Face using FastAPI and Docker. The next step is deploying this containerized application on a Kubernetes cluster. Additionally, I'll share my observations and insights gathered during this exercise. 


You can access the deployment yaml spec and detailed instructions in my GitHub repo: 

https://github.com/vineethac/huggingface/tree/main/6-deploy-on-k8s

Requirements

  • I am using a Tanzu Kubernetes Cluster (TKC).
  • Each node is of size best-effort-2xlarge which has 8 vCPU and 64Gi of memory.

❯ KUBECONFIG=gckubeconfig k get node
NAME                                             STATUS   ROLES                  AGE    VERSION
tkc01-control-plane-49jx4                        Ready    control-plane,master   97d    v1.23.8+vmware.3
tkc01-control-plane-m8wmt                        Ready    control-plane,master   105d   v1.23.8+vmware.3
tkc01-control-plane-z6gxx                        Ready    control-plane,master   97d    v1.23.8+vmware.3
tkc01-worker-nodepool-a1-pqq7j-dc6957d97-8gjn8   Ready    <none>                 21d    v1.23.8+vmware.3
tkc01-worker-nodepool-a1-pqq7j-dc6957d97-c9nfq   Ready    <none>                 21d    v1.23.8+vmware.3
tkc01-worker-nodepool-a1-pqq7j-dc6957d97-cngff   Ready    <none>                 21d    v1.23.8+vmware.3
❯

  • I've attached 256Gi storage volumes to the worker nodes, mounted at /var/lib/containerd. The worker nodes on which these LLM pods run should have enough storage space; otherwise you may notice the pods getting stuck, restarting, or going into an Unknown status. If the worker nodes run out of disk space, you will see pods getting evicted with the warning The node was low on resource: ephemeral-storage. The TKC spec is available in the above-mentioned Git repo.
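For example, to spot pods affected by ephemeral-storage pressure, you could run checks like these (generic commands, not from the original post):

❯ KUBECONFIG=gckubeconfig k get pods -A | grep -i evicted
❯ KUBECONFIG=gckubeconfig k get events -A --field-selector reason=Evicted
❯ KUBECONFIG=gckubeconfig k describe node <worker-node-name> | grep -i -A2 "ephemeral-storage"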

Deployment

  • This works on a CPU powered Kubernetes cluster. Additional configurations might be required if you want to run this on a GPU powered cluster.
  • We have already instrumented the Readiness and Liveness functionality in the LLM app itself. 
  • The readiness probe invokes the /healthz endpoint exposed by the FastAPI app. This ensures the FastAPI server itself is healthy and responding to API calls.
  • The liveness probe runs the liveness.py script within the app. The script invokes the /ask endpoint, which interacts with the LLM and returns a response, ensuring the LLM is answering user queries. If for some reason the LLM hangs or stops responding, the liveness probe will fail and the container will eventually be restarted (a minimal sketch of how such probes might be declared is shown right after the apply command below).
  • You can apply the deployment yaml spec as follows:
❯ KUBECONFIG=gckubeconfig k apply -f fastapi-llm-app-deploy-cpu.yaml
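For reference, here is a minimal sketch of how the probe section of such a deployment spec might look. The container name, image, port, script path, and timings are assumptions based on the description above; refer to the actual yaml in the GitHub repo for the exact spec.

      containers:
      - name: fastapi-llm-app
        image: <your-registry>/fastapi-llm-app:latest   # hypothetical image reference
        ports:
        - containerPort: 5000
        readinessProbe:
          httpGet:
            path: /healthz        # FastAPI health endpoint
            port: 5000
          initialDelaySeconds: 30
          periodSeconds: 10
        livenessProbe:
          exec:
            command: ["python", "liveness.py"]   # script that invokes the /ask endpoint
          initialDelaySeconds: 300
          periodSeconds: 180
          timeoutSeconds: 60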

Validation


❯ KUBECONFIG=gckubeconfig k get deploy fastapi-llm-app
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
fastapi-llm-app   2/2     2            2           21d
❯
❯ KUBECONFIG=gckubeconfig k get pods | grep fastapi-llm-app
fastapi-llm-app-758c7c58f7-79gmq                               1/1     Running   1 (71m ago)    13d
fastapi-llm-app-758c7c58f7-gqdc6                               1/1     Running   1 (99m ago)    13d
❯
❯ KUBECONFIG=gckubeconfig k get svc fastapi-llm-app
NAME              TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
fastapi-llm-app   LoadBalancer   10.110.228.33   10.216.24.104   5000:30590/TCP   5h24m
❯

Now you can just do a curl against the EXTERNAL-IP of the above mentioned fastapi-llm-app service.

❯ curl http://10.216.24.104:5000/ask -X POST -H "Content-Type: application/json" -d '{"text":"list comprehension examples in python"}'

In our next blog post, we'll try enhancing our FastAPI application with robust instrumentation. Specifically, we'll explore the process of integrating FastAPI metrics into our application, allowing us to gain valuable insights into its performance and usage metrics. Furthermore, we'll take a look at incorporating traces using OpenTelemetry, a powerful tool for distributed tracing and observability in modern applications. By leveraging OpenTelemetry, we'll be able to gain comprehensive visibility into the behavior of our application across distributed systems, enabling us to identify performance bottlenecks and optimize resource utilization.

Stay tuned for an insightful exploration of FastAPI metrics instrumentation and OpenTelemetry integration in our upcoming blog post!

Hope it was useful. Cheers!

Monday, January 15, 2024

Ollama - Part1 - Deploy Ollama on Kubernetes

Docker published the GenAI stack around October 2023. It consists of large language models (LLMs) from Ollama, vector and graph databases from Neo4j, and the LangChain framework. These utilities give developers the resources they need to kick-start new applications using generative AI. Ollama can be used to deploy and run LLMs locally. In this exercise we will deploy Ollama to a Kubernetes cluster and prompt it.

In my case I am using a Tanzu Kubernetes Cluster (TKC) running on the vSphere with Tanzu 7u3 platform, powered by Dell PowerEdge R640 servers. The TKC nodes use the best-effort-2xlarge vmclass with 8 CPU and 64Gi memory. Note that I am running it on a regular Kubernetes cluster without GPUs; if you have GPUs, additional configuration steps might be required.
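Here is a minimal, hedged sketch of one way to get Ollama running and prompt it on such a cluster. The ollama/ollama image and port 11434 are Ollama's published defaults; the namespace and model name (llama2) are just examples, not necessarily what was used in this post.

# Deploy Ollama and expose it inside the cluster
kubectl create namespace ollama
kubectl -n ollama create deployment ollama --image=ollama/ollama --port=11434
kubectl -n ollama expose deployment ollama --port=11434

# Pull a model and prompt it through the Ollama API
kubectl -n ollama exec deploy/ollama -- ollama pull llama2
kubectl -n ollama port-forward svc/ollama 11434:11434 &
curl http://localhost:11434/api/generate -d '{"model": "llama2", "prompt": "Why is the sky blue?", "stream": false}'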



Hope it was useful. Cheers!

Saturday, November 18, 2023

vSphere with Tanzu using NSX-T - Part29 - Logging using Loki stack

Grafana Loki is a log aggregation system that we can use for Kubernetes. In this post we will deploy Loki stack on a Tanzu Kubernetes cluster.

❯ KUBECONFIG=gc.kubeconfig kg no
NAME                                            STATUS   ROLES                  AGE    VERSION
tkc01-control-plane-k8fzb                       Ready    control-plane,master   144m   v1.23.8+vmware.3
tkc01-worker-nodepool-a1-pqq7j-76d555c9-4n5kh   Ready    <none>                 132m   v1.23.8+vmware.3
tkc01-worker-nodepool-a1-pqq7j-76d555c9-8pcc6   Ready    <none>                 128m   v1.23.8+vmware.3
tkc01-worker-nodepool-a1-pqq7j-76d555c9-rx7jf   Ready    <none>                 134m   v1.23.8+vmware.3
❯
❯ helm repo add grafana https://grafana.github.io/helm-charts
❯ helm repo update
❯ helm repo list
❯ helm search repo loki

I saved the values file using helm show values grafana/loki-stack and made necessary modifications as mentioned below. 

  • I enabled Grafana by setting enabled: true. This will create a new Grafana instance.
  • I also added a section under grafana.ingress in the loki-stack/values.yaml, that will create an ingress resource for this new Grafana instance.

 Here is the values.yaml file.

test_pod:
  enabled: true
  image: bats/bats:1.8.2
  pullPolicy: IfNotPresent

loki:
  enabled: true
  isDefault: true
  url: http://{{(include "loki.serviceName" .)}}:{{ .Values.loki.service.port }}
  readinessProbe:
    httpGet:
      path: /ready
      port: http-metrics
    initialDelaySeconds: 45
  livenessProbe:
    httpGet:
      path: /ready
      port: http-metrics
    initialDelaySeconds: 45
  datasource:
    jsonData: "{}"
    uid: ""


promtail:
  enabled: true
  config:
    logLevel: info
    serverPort: 3101
    clients:
      - url: http://{{ .Release.Name }}:3100/loki/api/v1/push

fluent-bit:
  enabled: false

grafana:
  enabled: true
  sidecar:
    datasources:
      label: ""
      labelValue: ""
      enabled: true
      maxLines: 1000
  image:
    tag: 8.3.5
  ingress:
    ## If true, Grafana Ingress will be created
    ##
    enabled: true

    ## IngressClassName for Grafana Ingress.
    ## Should be provided if Ingress is enable.
    ##
    ingressClassName: nginx

    ## Annotations for Grafana Ingress
    ##
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"

    ## Labels to be added to the Ingress
    ##
    labels: {}

    ## Hostnames.
    ## Must be provided if Ingress is enable.
    ##
    # hosts:
    #   - grafana.domain.com
    hosts:
      - grafana-loki-vineethac-poc.test.com

    ## Path for grafana ingress
    path: /

    ## TLS configuration for grafana Ingress
    ## Secret must be manually created in the namespace
    ##
    tls: []
    # - secretName: grafana-general-tls
    #   hosts:
    #   - grafana.example.com

prometheus:
  enabled: false
  isDefault: false
  url: http://{{ include "prometheus.fullname" .}}:{{ .Values.prometheus.server.service.servicePort }}{{ .Values.prometheus.server.prefixURL }}
  datasource:
    jsonData: "{}"

filebeat:
  enabled: false
  filebeatConfig:
    filebeat.yml: |
      # logging.level: debug
      filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
        processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
      output.logstash:
        hosts: ["logstash-loki:5044"]

logstash:
  enabled: false
  image: grafana/logstash-output-loki
  imageTag: 1.0.1
  filters:
    main: |-
      filter {
        if [kubernetes] {
          mutate {
            add_field => {
              "container_name" => "%{[kubernetes][container][name]}"
              "namespace" => "%{[kubernetes][namespace]}"
              "pod" => "%{[kubernetes][pod][name]}"
            }
            replace => { "host" => "%{[kubernetes][node][name]}"}
          }
        }
        mutate {
          remove_field => ["tags"]
        }
      }
  outputs:
    main: |-
      output {
        loki {
          url => "http://loki:3100/loki/api/v1/push"
          #username => "test"
          #password => "test"
        }
        # stdout { codec => rubydebug }
      }

# proxy is currently only used by loki test pod
# Note: If http_proxy/https_proxy are set, then no_proxy should include the
# loki service name, so that tests are able to communicate with the loki
# service.
proxy:
  http_proxy: ""
  https_proxy: ""
  no_proxy: ""

Deploy using Helm

❯ helm upgrade --install --atomic loki-stack grafana/loki-stack --values values.yaml --kubeconfig=gc.kubeconfig --create-namespace --namespace=loki-stack
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: gc.kubeconfig
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: gc.kubeconfig
Release "loki-stack" does not exist. Installing it now.
W1203 13:36:48.286498   31990 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1203 13:36:48.592349   31990 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1203 13:36:55.840670   31990 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1203 13:36:55.849356   31990 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
NAME: loki-stack
LAST DEPLOYED: Sun Dec  3 13:36:45 2023
NAMESPACE: loki-stack
STATUS: deployed
REVISION: 1
NOTES:
The Loki stack has been deployed to your cluster. Loki can now be added as a datasource in Grafana.

See http://docs.grafana.org/features/datasources/loki/ for more detail.

 

Verify

❯ KUBECONFIG=gc.kubeconfig kg all -n loki-stack
NAME                                     READY   STATUS    RESTARTS   AGE
pod/loki-stack-0                         1/1     Running   0          89s
pod/loki-stack-grafana-dff58c989-jdq2l   2/2     Running   0          89s
pod/loki-stack-promtail-5xmrj            1/1     Running   0          89s
pod/loki-stack-promtail-cts5j            1/1     Running   0          89s
pod/loki-stack-promtail-frwvw            1/1     Running   0          89s
pod/loki-stack-promtail-wn4dw            1/1     Running   0          89s

NAME                            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/loki-stack              ClusterIP   10.110.208.35    <none>        3100/TCP   90s
service/loki-stack-grafana      ClusterIP   10.104.222.214   <none>        80/TCP     90s
service/loki-stack-headless     ClusterIP   None             <none>        3100/TCP   90s
service/loki-stack-memberlist   ClusterIP   None             <none>        7946/TCP   90s

NAME                                 DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/loki-stack-promtail   4         4         4       4            4           <none>          90s

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/loki-stack-grafana   1/1     1            1           90s

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/loki-stack-grafana-dff58c989   1         1         1       90s

NAME                          READY   AGE
statefulset.apps/loki-stack   1/1     91s

❯ KUBECONFIG=gc.kubeconfig kg ing -n loki-stack
NAME                 CLASS   HOSTS                                 ADDRESS        PORTS   AGE
loki-stack-grafana   nginx   grafana-loki-vineethac-poc.test.com   10.216.24.45   80      7m16s
❯

Now, in my case I have an ingress controller and DNS resolution in place. If you don't have those configured, you can simply port forward the loki-stack-grafana service to view the Grafana dashboard, as shown below.
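For example (3000 is just an arbitrary local port; 80 is the Grafana service port shown in the output above):

❯ KUBECONFIG=gc.kubeconfig kubectl port-forward svc/loki-stack-grafana -n loki-stack 3000:80

With the port-forward running, Grafana should be reachable at http://localhost:3000.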

To get the username and password you should decode the following secret:

❯ KUBECONFIG=gc.kubeconfig kg secrets -n loki-stack loki-stack-grafana -oyaml
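For example, assuming the chart's default admin-user/ admin-password keys in that secret, you could decode them like this:

❯ KUBECONFIG=gc.kubeconfig kubectl get secret loki-stack-grafana -n loki-stack -o jsonpath='{.data.admin-user}' | base64 -d
❯ KUBECONFIG=gc.kubeconfig kubectl get secret loki-stack-grafana -n loki-stack -o jsonpath='{.data.admin-password}' | base64 -d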

Log in to the Grafana instance and check the Data Sources section; the Loki data source should already be configured. Now click on the Explore option and use the log browser to query logs.

Hope it was useful. Cheers!

Saturday, February 4, 2023

vSphere with Tanzu using NSX-T - Part23 - Supervisor cluster certificates expiry

Note that the supervisor control plane component certificates will expire after one year. 

Here is the VMware KB: https://kb.vmware.com/s/article/89324

NOTE: If certificates expire on the Supervisor or Guest Clusters, access and management of the clusters will fail. And, you will need to raise a case with VMware support team for assistance.

Keep a note of this cert expiry date; if you update the supervisor cluster at least once a year, these certs will get updated.

Here is a quick way to check the expiry of the supervisor control plane certs. 

❯ k config current-context
sc2-06-d5165f-vc01

❯ k cluster-info
Kubernetes control plane is running at https://10.43.69.117:6443
KubeDNS is running at https://10.43.69.117:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

❯ echo | openssl s_client -servername 10.43.69.117 -connect 10.43.69.117:6443 | openssl x509 -noout -dates
depth=0 CN = kube-apiserver
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 CN = kube-apiserver
verify error:num=21:unable to verify the first certificate
verify return:1
DONE
notBefore=Jun 2 09:36:17 2023 GMT
notAfter=Jun 1 09:36:18 2024 GMT


Thanks to my friend Ravikrithik Udainath for the above openssl tip!

I am using the admin kubeconfig of the supervisor cluster. Here is the link to my previous article on exporting WCP admin kubeconfig file. In this case, 10.43.69.117 is the floating IP for the supervisor control plane and it is assigned to one of the supervisor control plane VMs.

This vSphere with Tanzu cluster was deployed on June 02, 2023, and as you can see above, the certificate expiry will be after one year, which in this case is June 01, 2024. 

You can set up some sort of monitoring/ alerting for all your supervisor clusters to get notified about these expiry dates.
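For example, a small shell check like the following could be run periodically from a cron job or your monitoring system. This is just a sketch: it assumes GNU date, and 10.43.69.117 is the supervisor floating IP from this example.

#!/bin/bash
# Alert when the supervisor control plane certificate is close to expiry
VIP="10.43.69.117"
EXPIRY=$(echo | openssl s_client -connect ${VIP}:6443 2>/dev/null | openssl x509 -noout -enddate | cut -d= -f2)
DAYS_LEFT=$(( ( $(date -d "${EXPIRY}" +%s) - $(date +%s) ) / 86400 ))
echo "Certificate for ${VIP} expires in ${DAYS_LEFT} days (on ${EXPIRY})"
if [ "${DAYS_LEFT}" -lt 30 ]; then
  echo "WARNING: supervisor control plane certificate expires in less than 30 days!"
fi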

Hope it was useful. Cheers!

Sunday, September 11, 2022

vSphere with Tanzu using NSX-T - Part19 - Troubleshooting TKC stuck at creating phase

This article provides basic troubleshooting steps for TKCs (Tanzu Kubernetes Cluster) stuck at creating phase.

Verify status of the TKC

  • Use the following commands to verify the TKC status.
kubectl get tkc -n <supervisor_namespace>
kubectl get tkc -n <supervisor_namespace> -o json
kubectl describe tkc <tkc_name> -n <supervisor_namespace>
kubectl get cluster-api -n <supervisor_namespace>
kubectl get vm,machine,wcpmachine -n <supervisor_namespace> 

Cluster health

  • Verify health of the supervisor cluster.
❯ kubectl get node
NAME STATUS ROLES AGE VERSION
4201a7b2667b0f3b021efcf7c9d1726b Ready control-plane,master 86d v1.22.6+vmware.wcp.2
4201bead67e21a8813415642267cd54a Ready control-plane,master 86d v1.22.6+vmware.wcp.2
4201e0e8e29b0ddb4b59d3165dd40941 Ready control-plane,master 86d v1.22.6+vmware.wcp.2
wxx-08-r02esx13.xxxxxyyyy.com Ready agent 85d v1.22.6-sph-db56d46
wxx-08-r02esx14.xxxxxyyyy.com Ready agent 85d v1.22.6-sph-db56d46
wxx-08-r02esx15.xxxxxyyyy.com Ready agent 85d v1.22.6-sph-db56d46
wxx-08-r02esx16.xxxxxyyyy.com Ready agent 85d v1.22.6-sph-db56d46
wxx-08-r02esx17.xxxxxyyyy.com Ready agent 85d v1.22.6-sph-db56d46
wxx-08-r02esx18.xxxxxyyyy.com Ready agent 85d v1.22.6-sph-db56d46
wxx-08-r02esx19.xxxxxyyyy.com Ready agent 85d v1.22.6-sph-db56d46
wxx-08-r02esx20.xxxxxyyyy.com Ready agent 85d v1.22.6-sph-db56d46
wxx-08-r02esx21.xxxxxyyyy.com Ready agent 85d v1.22.6-sph-db56d46
wxx-08-r02esx22.xxxxxyyyy.com Ready agent 85d v1.22.6-sph-db56d46
wxx-08-r02esx23.xxxxxyyyy.com Ready agent 85d v1.22.6-sph-db56d46
wxx-08-r02esx24.xxxxxyyyy.com Ready agent 85d v1.22.6-sph-db56d46

❯ kubectl get --raw '/healthz?verbose'
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check passed 

Terminating namespaces

  • Check for namespaces stuck at the Terminating phase. If there are any, clean them up properly by removing all child objects.
  • You can use the kubectl get-all plugin to see all resources under a namespace, and then clean them up properly. In most cases you need to set the finalizers of the remaining child resources to null. Following is a sample case where 2 PVCs were stuck at Terminating and were cleaned up by setting their finalizers to null.
❯ kg ns | grep Terminating
rgettam-gettam Terminating 226d

❯ k get-all -n rgettam-gettam
NAME NAMESPACE AGE
persistentvolumeclaim/58ef0d27-ba66-4f4e-b4d7-43bd1c4fb833-c8c0c111-e480-4df4-baf8-d140d0237e1d rgettam-gettam 86d
persistentvolumeclaim/58ef0d27-ba66-4f4e-b4d7-43bd1c4fb833-e5c99b7e-1397-4a9d-b38c-53a25cab6c3f rgettam-gettam 86d

❯ kg pvc -n rgettam-gettam
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
58ef0d27-ba66-4f4e-b4d7-43bd1c4fb833-c8c0c111-e480-4df4-baf8-d140d0237e1d Terminating pvc-bd4252fb-bfed-4ef3-ab5a-43718f9cbed5 8Gi RWO sxx-01-vcxx-wcp-mgmt 86d
58ef0d27-ba66-4f4e-b4d7-43bd1c4fb833-e5c99b7e-1397-4a9d-b38c-53a25cab6c3f Terminating pvc-8bc9daa1-21cf-4af2-973e-af28d66a7f5e 30Gi RWO sxx-01-vcxx-wcp-mgmt 86d

❯ kg pvc -n rgettam-gettam --no-headers | awk '{print $1}' | xargs -I{} kubectl patch -n rgettam-gettam pvc {} -p '{"metadata":{"finalizers": null}}'
  • You can also do kubectl get namespace <namespace> -oyaml and the status section will show if there are resources/ content to be deleted or any finalizers remaining.
  • Verify vmop-controller pod logs, and restart them if required.

IP_BLOCK_EXHAUSTED

  • Check CIDR usage of the supervisor cluster.
❯ kg clusternetworkinfos
NAME                                                AGE
domain-c1006-06046c54-c9e5-41aa-bc2c-52d72c05bce4   160d

❯ kg clusternetworkinfos domain-c1006-06046c54-c9e5-41aa-bc2c-52d72c05bce4 -o json | jq .usage
{
  "egressCIDRUsage": {
    "allocated": 33,
    "total": 1024
  },
  "ingressCIDRUsage": {
    "allocated": 42,
    "total": 1024
  },
  "subnetCIDRUsage": {
    "allocated": 832,
    "total": 1024
  }
} 
  • When the IP blocks of supervisor cluster are exhausted, you will find the following warning when you describe the TKC.
 Conditions:
    Last Transition Time:  2022-10-05T18:34:35Z
    Message:               Cannot realize subnet
    Reason:                ClusterNetworkProvisionFailed
    Severity:              Warning
    Status:                False
    Type:                  Ready 
  • Also when you check the namespace, you can see the following ncp error IP_BLOCK_EXHAUSTED.
 ❯ kg ns tsql-integration-test -oyaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    calaxxxx.xxxyy.com/xxxrole-created: "1"
    ncp/error: IP_BLOCK_EXHAUSTED
    ncp/router_id: t1_d0a2af0f-8430-4250-9fcf-807a4afe51aa_rtr
    vmware-system-resource-pool: resgroup-307480
    vmware-system-vm-folder: group-v307481
  creationTimestamp: "2022-10-05T17:35:18Z"

Notes:

  • If the subnetCIDRUsage IP block is exhausted, you may need to remove some old/ unused namespaces to release some IPs. If that is not possible, you may need to consider adding a new subnet.
  • After removing the old/ unused namespaces, the TKCs may still be stuck at the creating phase even though IPs are available. In that case, check the ncp, vmop, and capw controller pods; you may need to restart them. What I observed is that usually after restarting the ncp pod, the vmop-controller pods, and all pods under the vmware-system-capw namespace, the VMs start getting deployed and the TKC creation progresses and completes successfully.

Resource availability

  • Check whether there are enough resources available in the cluster.
LAST SEEN  TYPE   REASON       OBJECT                    MESSAGE
3m23s    Warning  UpdateFailure   virtualmachine/magna3-control-plane-9rhl4   The host does not have sufficient CPU resources to satisfy the reservation.
80s     Warning  ReconcileFailure  wcpmachine/magna3-control-plane-s5s9t-p2cxj  vm is not yet powered on: vmware-system-capw-controller-manager/WCPMachine//chakravartha-magna3/magna3/magna3-control-plane-s5s9t-p2cxj 

  • Check for resource limits applied to the namespace.
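For example (generic commands; substitute your supervisor namespace):

kubectl get resourcequota -n <supervisor_namespace>
kubectl describe resourcequota -n <supervisor_namespace>
kubectl get limitrange -n <supervisor_namespace>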

Check whether storage policy is assigned to the namespace

27m         Warning   ReconcileFailure               wcpmachine/gc-pool-0-cv8vz-5snbc          admission webhook "default.validating.virtualmachine.vmoperator.xxxyy.com" denied the request: StorageClass wdc-10-vc21c01-wcp-pod is not assigned to any ResourceQuotas in namespace mpereiramaia-demo2

  • In this case, the storage policy wasn't assigned to the namespace. I assigned the storage policy wdc-10-vc21c01-wcp-pod to the respective namespace, and the TKC deployment was successful.

Check Content library can sync properly

  • Sometimes issues related to the Content Library (CL) can cause TKCs to get stuck at the creating phase. Check this blog post for more details.

KCP can't remediate

Message:               KCP can't remediate if current replicas are less or equal then 1
Reason:                WaitingForRemediation @ Machine/gc-control-plane-zpssc
Severity:              Warning
  • In this case, you can edit the TKC spec, change the control plane vmclass to a different class, and save. Once the deployment is complete and the TKC is running, edit the TKC spec again and revert the vmclass to its original class. This process re-provisions the control plane.

TKC VMs waiting for IP

  • In this case, take a look at NSXT and check whether all Edge nodes are healthy. If there are mismatch errors, resolve them.
  • You may also check ncp pod logs and restart ncp pod if required.

VirtualMachineClassBindingNotFound

Conditions:
    Last Transition Time:  2021-05-05T18:19:10Z
    Message:               1 of 2 completed
    Reason:                VirtualMachineClassBindingNotFound @ Machine/tkc-dev-control-plane-wxd57
    Severity:              Error
    Status:                False
    Message:               0/1 Control Plane Node(s) healthy. 0/2 Worker Node(s) healthy
Events:
  Normal  PhaseChanged  7m22s  vmware-system-tkg/vmware-system-tkg-controller-manager/tanzukubernetescluster-status-controller  cluster changes from creating phase to failed phase

  • This happens when the virtualmachineclassbindings are missing, and it can be resolved by adding all the required VM Classes to the namespace using the vSphere Client. Following are the steps to add VM Classes to a namespace:
  • Log into vCenter web UI
  • From Hosts and Clusters > Select the namespace > Summary tab > VM Service tile > Click Manage VM Classes
  • Select all required VM Classes and click OK
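You can also verify the bindings from the supervisor cluster. For example (the exact resource names may vary slightly between releases):

kubectl get virtualmachineclassbindings -n <supervisor_namespace>
kubectl get virtualmachineclasses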

Verify NSX-T objects

  • Issues on the NSX-T side can also cause the TKC to be stuck at the creating phase. Following is a sample case; you can see these logs when you describe the TKC:
Message: 2 errors occurred:
* failed to configure DNS for /, Kind= namespace-test-01/gc: unable to reconcile kubeadm ConfigMap's CoreDNS info: unable to retrieve kubeadm Configmap from the guest cluster: configmaps "kubeadm-config" not found * failed to configure kube-proxy for /, Kind= namespace-test-01/gc: unable to retrieve kube-proxy daemonset from the guest cluster: daemonsets.apps "kube-proxy" not found
  • In this case, there were some issues with the virtual servers in the load balancer. Some stale virtual server entries were still present, their IPs didn't get removed properly, and this was causing intermittent connectivity issues to some of the other services of type LoadBalancer. New TKC deployments within that affected namespace also got stuck because of this. In our case we deleted the affected namespace and recreated it; that cleaned up all those stale virtual server entries and the load balancer, and new TKC deployments were successful. So it is worth checking the health and status of NSX-T objects if you have TKC deployment issues.

Check for broken TKCs in the cluster

  • Sometimes TKC deployments are very slow and take more than 30 minutes. In this case, you may notice that the first control plane VM gets deployed only 30-45 minutes after the TKC creation has started. Look at the vmop controller logs. Following is a sample log:
❯ kail -n vmware-system-vmop
vmware-system-vmop/vmware-system-vmop-controller-manager-55459cb46b-2psrk[manager]: E1027 11:49:44.725620       1 readiness_worker.go:111] readiness-probe "msg"="readiness probe fails" "error"="dial tcp 172.29.9.212:6443: connect: connection refused" "vmName"="ciroscosta-cartographer/kontinue-control-plane-svlk4" "result"=-1

vmware-system-vmop/vmware-system-vmop-controller-manager-55459cb46b-2psrk[manager]: E1027 11:49:49.888653       1 readiness_worker.go:111] readiness-probe "msg"="readiness probe fails" "error"="dial tcp 172.29.2.66:6443: connect: connection refused" "vmName"="whaozhe-platform/gc-control-plane-mf4p5" "result"=-1

  • In the above case, two of the TKCs were broken/ stuck at the updating phase and we were unable to connect to their control planes.
ciroscosta-cartographer    kontinue    updating       2021-10-29T18:47:46Z   v1.20.9+vmware.1-tkg.1.a4cee5b    1     2
whaozhe-platform           gc          updating       2022-01-27T03:59:31Z   v1.20.12+vmware.1-tkg.1.b9a42f3   1     10
  • After removing the namespaces with the broken TKCs, new deployments were completing successfully.

Restart system pods

  • Sometimes restarting some of the system controller pods resolves the issue. I usually delete all the pods in the following namespaces, and they get recreated in a few seconds.
k delete pod --all --namespace=vmware-system-vmop
k delete pod --all --namespace=vmware-system-capw
k delete pod --all --namespace=vmware-system-tkg
k delete pod --all --namespace=vmware-system-csi
k delete pod --all --namespace=vmware-system-nsx

Hope this was useful. Cheers!

Saturday, July 30, 2022

vSphere with Tanzu using NSX-T - Part17 - Troubleshooting TKCs stuck at updating phase

Ideally, if everything goes well, the TKCs (Tanzu Kubernetes Cluster, aka Guest Cluster) should be in the running phase. But sometimes, due to several reasons, a TKC may be stuck at the updating phase. In this article, we will take a sample case and look at troubleshooting/ fixing it.

Following is an example:

NAMESPACE              NAME                    PHASE      CREATIONTIME           VERSION                           CP    WORKER
karvea-vc17ns11 sc201vc17pace updating 2021-11-19T12:17:24Z v1.20.9+vmware.1-tkg.1.a4cee5b 1 4

Let's connect to this TKC. Here I have a small plugin (kubectl-gckc) that generates the TKC kubeconfig, and gcc is an alias for KUBECONFIG=gckubeconfig, where gckubeconfig is the TKC admin kubeconfig file.
❯ k gckc karvea-vc17ns11 sc201vc17pace
❯ gcc kg no
NAME STATUS ROLES AGE VERSION
sc201vc17pace-control-plane-zt99l Ready control-plane,master 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz Ready,SchedulingDisabled <none> 189d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw Ready,SchedulingDisabled <none> 189d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt Ready <none> 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp Ready <none> 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 Ready <none> 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv Ready <none> 139d v1.20.9+vmware.1

❯ kg vm -n karvea-vc17ns11
NAME POWERSTATE AGE
sc201vc17pace-control-plane-zt99l poweredOn 139d
sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz poweredOn 189d
sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw poweredOn 189d
sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt poweredOn 139d
sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp poweredOn 139d
sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 poweredOn 139d
sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv poweredOn 139d



❯ kg machine -n karvea-vc17ns11
NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
sc201vc17pace-control-plane-zt99l sc201vc17pace sc201vc17pace-control-plane-zt99l vsphere://4201e660-3124-9aa5-4ec2-6fbc2ff3ecea Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz sc201vc17pace sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz vsphere://42010982-8b25-ad7b-2a1d-bb949def4834 Deleting 189d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw sc201vc17pace sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw vsphere://4201a640-2b39-3d66-5a26-db95a612f6e5 Deleting 189d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt vsphere://42013a9b-dffb-4609-89d6-4ca123c4dc1e Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp vsphere://4201160b-21c9-ccc2-6826-e3545e34b490 Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 vsphere://420125a8-e45c-04b7-5612-ce3149e86d74 Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv vsphere://4201238f-c9a3-a9b2-9c31-4ed99318bd30 Running 139d v1.20.9+vmware.1


As you can see above, there are two worker machines stuck at the Deleting phase. This is because the corresponding two worker nodes are in Ready,SchedulingDisabled status; for some reason the nodes have not been drained yet. Once they get drained properly, their status will change to NotReady,SchedulingDisabled. Now let's try to drain those worker nodes manually.
❯ gcc k drain sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz
node/sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz already cordoned
DEPRECATED WARNING: Aborting the drain command in a list of nodes will be deprecated in v1.23.
The new behavior will make the drain command go through all nodes even if one or more nodes failed during the drain.
For now, users can try such experience via: --ignore-errors
error: unable to drain node "sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz", aborting command...

There are pending nodes to be drained:
sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz
cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/calico-node-wqlmq, kube-system/kube-proxy-78z5k, nsxi-platform/nsxi-platform-fluent-bit-pdzjx, projectcontour/projectcontour-envoy-r9pg7, vmware-system-csi/vsphere-csi-node-p2gtd
cannot delete Pods with local storage (use --delete-emptydir-data to override): nsxi-platform/kafka-2

❯ gcc k drain sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz --ignore-daemonsets --delete-emptydir-data
node/sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-wqlmq, kube-system/kube-proxy-78z5k, nsxi-platform/nsxi-platform-fluent-bit-pdzjx, projectcontour/projectcontour-envoy-r9pg7, vmware-system-csi/vsphere-csi-node-p2gtd
evicting pod nsxi-platform/kafka-2
error when evicting pods/"kafka-2" -n "nsxi-platform" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod nsxi-platform/kafka-2
error when evicting pods/"kafka-2" -n "nsxi-platform" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
^C
❯ gcc kg pdb
No resources found in default namespace.
❯ gcc kg pdb -A
NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
nsxi-platform kafka N/A 1 0 188d
nsxi-platform zookeeper N/A 1 1 188d


Here, the worker node sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz is not getting drained because of the presence of a pod disruption budget (PDB). So, in order to drain the node, I am taking a backup of the PDB yaml and then deleting it. Once the nodes are drained, I will apply the PDB yaml back onto the cluster.
❯ gcc kg pdb -n nsxi-platform kafka -oyaml > pdb-nsxi-platform-kafka.yaml
❯ code pdb-nsxi-platform-kafka.yaml
❯ gcc kg pdb -n nsxi-platform zookeeper -oyaml > pdb-nsxi-platform-zookeeper.yaml
❯ code pdb-nsxi-platform-zookeeper.yaml

❯ gcc k delete pdb kafka -n nsxi-platform
poddisruptionbudget.policy "kafka" deleted
❯ gcc kg pdb -A
NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
nsxi-platform zookeeper N/A 1 1 188d

❯ gcc k drain sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz --ignore-daemonsets --delete-emptydir-data
node/sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-wqlmq, kube-system/kube-proxy-78z5k, nsxi-platform/nsxi-platform-fluent-bit-pdzjx, projectcontour/projectcontour-envoy-r9pg7, vmware-system-csi/vsphere-csi-node-p2gtd
evicting pod nsxi-platform/kafka-2
pod/kafka-2 evicted
node/sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz evicted


❯ gcc k drain sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz --ignore-daemonsets --delete-emptydir-data
node/sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-wqlmq, kube-system/kube-proxy-78z5k, nsxi-platform/nsxi-platform-fluent-bit-pdzjx, projectcontour/projectcontour-envoy-r9pg7, vmware-system-csi/vsphere-csi-node-p2gtd
node/sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz drained


❯ gcc k drain sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw
node/sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw already cordoned
DEPRECATED WARNING: Aborting the drain command in a list of nodes will be deprecated in v1.23.
The new behavior will make the drain command go through all nodes even if one or more nodes failed during the drain.
For now, users can try such experience via: --ignore-errors
error: unable to drain node "sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw", aborting command...

There are pending nodes to be drained:
sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw
error: cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/calico-node-4tz4x, kube-system/kube-proxy-q726d, nsxi-platform/nsxi-platform-fluent-bit-b24nn, projectcontour/projectcontour-envoy-rppkx, vmware-system-csi/vsphere-csi-node-mpbsh
❯ gcc k drain sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw --ignore-daemonsets
node/sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-4tz4x, kube-system/kube-proxy-q726d, nsxi-platform/nsxi-platform-fluent-bit-b24nn, projectcontour/projectcontour-envoy-rppkx, vmware-system-csi/vsphere-csi-node-mpbsh
node/sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw drained
The worker nodes are now drained.
❯ gcc kg no
NAME STATUS ROLES AGE VERSION
sc201vc17pace-control-plane-zt99l Ready control-plane,master 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz NotReady,SchedulingDisabled <none> 189d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw NotReady,SchedulingDisabled <none> 189d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt Ready <none> 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp Ready <none> 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 Ready <none> 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv Ready <none> 139d v1.20.9+vmware.1

❯ gcc kg no
NAME STATUS ROLES AGE VERSION
sc201vc17pace-control-plane-zt99l Ready control-plane,master 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw NotReady,SchedulingDisabled <none> 189d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt Ready <none> 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp Ready <none> 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 Ready <none> 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv Ready <none> 139d v1.20.9+vmware.1
As soon as the worker nodes were drained, one of them got removed/ deleted successfully, but the other worker node was still present. When we look at the machine resources, you can still see one of the worker machines stuck at the Deleting phase. In this case I manually deleted the worker node, but the corresponding worker machine remained stuck at the Deleting phase.
❯ kg machine -n karvea-vc17ns11
NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
sc201vc17pace-control-plane-zt99l sc201vc17pace sc201vc17pace-control-plane-zt99l vsphere://4201e660-3124-9aa5-4ec2-6fbc2ff3ecea Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw sc201vc17pace sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw vsphere://4201a640-2b39-3d66-5a26-db95a612f6e5 Deleting 189d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt vsphere://42013a9b-dffb-4609-89d6-4ca123c4dc1e Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp vsphere://4201160b-21c9-ccc2-6826-e3545e34b490 Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 vsphere://420125a8-e45c-04b7-5612-ce3149e86d74 Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv vsphere://4201238f-c9a3-a9b2-9c31-4ed99318bd30 Running 139d v1.20.9+vmware.1


❯ gcc k delete node sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw
node "sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw" deleted

❯ gcc kg no
NAME STATUS ROLES AGE VERSION
sc201vc17pace-control-plane-zt99l Ready control-plane,master 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt Ready <none> 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp Ready <none> 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 Ready <none> 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv Ready <none> 139d v1.20.9+vmware.1
Now let's describe the worker machine stuck at Deleting. In this case you can see that there are two PVCs stuck at Terminating status, so I edited those two PVCs' yaml and set their finalizers to null.
❯ kg machine -n karvea-vc17ns11
NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
sc201vc17pace-control-plane-zt99l sc201vc17pace sc201vc17pace-control-plane-zt99l vsphere://4201e660-3124-9aa5-4ec2-6fbc2ff3ecea Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw sc201vc17pace sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw vsphere://4201a640-2b39-3d66-5a26-db95a612f6e5 Deleting 189d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt vsphere://42013a9b-dffb-4609-89d6-4ca123c4dc1e Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp vsphere://4201160b-21c9-ccc2-6826-e3545e34b490 Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 vsphere://420125a8-e45c-04b7-5612-ce3149e86d74 Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv vsphere://4201238f-c9a3-a9b2-9c31-4ed99318bd30 Running 139d v1.20.9+vmware.1



❯ kg vm -n karvea-vc17ns11
NAME POWERSTATE AGE
sc201vc17pace-control-plane-zt99l poweredOn 139d
sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt poweredOn 139d
sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp poweredOn 139d
sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 poweredOn 139d
sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv poweredOn 139d


❯ kd machine sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw -n karvea-vc17ns11

Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal DetectedUnhealthy 13m (x2 over 17m) machinehealthcheck-controller Machine karvea-vc17ns11/sc201vc17pace-workers-jrcb6/sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw/sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw has unhealthy node sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw
Normal SuccessfulDrainNode 13m (x2 over 19m) machine-controller success draining Machine's node "sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw"
Normal NodeVolumesDetached 12m (x2 over 19m) machine-controller success waiting for node volumes detach Machine's node "sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw"
Normal MachineMarkedUnhealthy 106s (x4 over 9m58s) machinehealthcheck-controller Machine karvea-vc17ns11/sc201vc17pace-workers-jrcb6/sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw/sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw has been marked as unhealthy

❯ kg pvc -n karvea-vc17ns11
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
a366a76b-2000-4d33-a817-a9c1b9e60b1b-1f4b5ee8-f378-445e-97d3-f4c4656863bb Bound pvc-1dc35d76-86c6-4a70-82e7-99609480a0b3 10Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-3509d39d-e632-492b-a0c4-b5b3874b01a6 Bound pvc-97e6e063-9a9e-4837-9999-284523379453 128Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-42a0f98e-0f9c-4fc1-bc9f-862e94086624 Bound pvc-be6bd318-140c-4cb8-9c22-daf9ec8dac65 128Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-48b9ddc4-41bc-4228-a6b5-0aea3a470811 Bound pvc-faa7798e-c045-420f-9d09-44674d9d2326 20Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-8c880e33-681a-4eae-a57d-3aaf0fb9c950 Bound pvc-cf1a6c2e-0e9e-425c-ae46-b010b086c325 10Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-aa196378-d10f-45ed-a528-b0d691ec6447 Bound pvc-49fca2f0-3402-429f-884f-7db9012934d6 8Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-bbe074ee-9ba3-4839-b519-af82214a9ad0 Bound pvc-3887e89c-0a5b-4d08-938b-c9cb0a1efaca 8Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-bfb23073-29e8-4f0d-b2c0-934ff808ad2c Bound pvc-f966f803-ca92-45b6-9395-8d1d24c67f8e 10Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-d39e8f9b-692e-46ac-a52c-2d977f0a95fa Bound pvc-25d7c8c2-7994-4ee8-9ef8-725ae1c8c8a1 8Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-ef1e2362-83bc-4af4-b748-a496aa911009 Bound pvc-7aefd3fe-3279-4e20-8a00-5ca60cc61e40 128Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-f072ee1b-034a-4ac8-965c-f66a2d8bd61c Bound pvc-276acbee-ba6c-4cc9-8bc5-e18525abd256 20Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
sc201vc17pace-workers-wswdh-2hz8w-containerd Bound pvc-e67e3a6f-99d6-4e21-813d-e9c9994b25d6 42Gi RWO sc2-01-vc17c01-wcp-mgmt 139d
sc201vc17pace-workers-wswdh-5pjrc-containerd Bound pvc-fb162388-4347-4f48-825e-c2c2d62ceb90 42Gi RWO sc2-01-vc17c01-wcp-mgmt 139d
sc201vc17pace-workers-wswdh-755m6-containerd Terminating pvc-da2e4866-bb41-4f74-a4b7-0f74bc7061a1 42Gi RWO sc2-01-vc17c01-wcp-mgmt 189d
sc201vc17pace-workers-wswdh-dgmjs-containerd Terminating pvc-64eac528-f160-444c-9a0f-0ed9f6393e06 42Gi RWO sc2-01-vc17c01-wcp-mgmt 189d
sc201vc17pace-workers-wswdh-djp2m-containerd Bound pvc-a7542552-de13-4670-ac45-84ed39c3c916 42Gi RWO sc2-01-vc17c01-wcp-mgmt 139d
sc201vc17pace-workers-wswdh-flwtt-containerd Bound pvc-1b8ee843-709a-4e2a-955d-a9a9a6a83c73 42Gi RWO sc2-01-vc17c01-wcp-mgmt 139d
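Following is an example of how the finalizers of those two Terminating PVCs could be removed with a patch, which is equivalent to editing their yaml:

❯ kubectl patch pvc sc201vc17pace-workers-wswdh-755m6-containerd -n karvea-vc17ns11 -p '{"metadata":{"finalizers":null}}'
❯ kubectl patch pvc sc201vc17pace-workers-wswdh-dgmjs-containerd -n karvea-vc17ns11 -p '{"metadata":{"finalizers":null}}'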

As soon as the PVCs were removed, the worker machine that was stuck at Deleting got removed, and the TKC changed its status to running.
❯ kg pvc -n karvea-vc17ns11
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
a366a76b-2000-4d33-a817-a9c1b9e60b1b-1f4b5ee8-f378-445e-97d3-f4c4656863bb Bound pvc-1dc35d76-86c6-4a70-82e7-99609480a0b3 10Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-3509d39d-e632-492b-a0c4-b5b3874b01a6 Bound pvc-97e6e063-9a9e-4837-9999-284523379453 128Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-42a0f98e-0f9c-4fc1-bc9f-862e94086624 Bound pvc-be6bd318-140c-4cb8-9c22-daf9ec8dac65 128Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-48b9ddc4-41bc-4228-a6b5-0aea3a470811 Bound pvc-faa7798e-c045-420f-9d09-44674d9d2326 20Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-8c880e33-681a-4eae-a57d-3aaf0fb9c950 Bound pvc-cf1a6c2e-0e9e-425c-ae46-b010b086c325 10Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-aa196378-d10f-45ed-a528-b0d691ec6447 Bound pvc-49fca2f0-3402-429f-884f-7db9012934d6 8Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-bbe074ee-9ba3-4839-b519-af82214a9ad0 Bound pvc-3887e89c-0a5b-4d08-938b-c9cb0a1efaca 8Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-bfb23073-29e8-4f0d-b2c0-934ff808ad2c Bound pvc-f966f803-ca92-45b6-9395-8d1d24c67f8e 10Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-d39e8f9b-692e-46ac-a52c-2d977f0a95fa Bound pvc-25d7c8c2-7994-4ee8-9ef8-725ae1c8c8a1 8Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-ef1e2362-83bc-4af4-b748-a496aa911009 Bound pvc-7aefd3fe-3279-4e20-8a00-5ca60cc61e40 128Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-f072ee1b-034a-4ac8-965c-f66a2d8bd61c Bound pvc-276acbee-ba6c-4cc9-8bc5-e18525abd256 20Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
sc201vc17pace-workers-wswdh-2hz8w-containerd Bound pvc-e67e3a6f-99d6-4e21-813d-e9c9994b25d6 42Gi RWO sc2-01-vc17c01-wcp-mgmt 139d
sc201vc17pace-workers-wswdh-5pjrc-containerd Bound pvc-fb162388-4347-4f48-825e-c2c2d62ceb90 42Gi RWO sc2-01-vc17c01-wcp-mgmt 139d
sc201vc17pace-workers-wswdh-djp2m-containerd Bound pvc-a7542552-de13-4670-ac45-84ed39c3c916 42Gi RWO sc2-01-vc17c01-wcp-mgmt 139d
sc201vc17pace-workers-wswdh-flwtt-containerd Bound pvc-1b8ee843-709a-4e2a-955d-a9a9a6a83c73 42Gi RWO sc2-01-vc17c01-wcp-mgmt 139d

❯ kg machine -n karvea-vc17ns11
NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
sc201vc17pace-control-plane-zt99l sc201vc17pace sc201vc17pace-control-plane-zt99l vsphere://4201e660-3124-9aa5-4ec2-6fbc2ff3ecea Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt vsphere://42013a9b-dffb-4609-89d6-4ca123c4dc1e Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp vsphere://4201160b-21c9-ccc2-6826-e3545e34b490 Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 vsphere://420125a8-e45c-04b7-5612-ce3149e86d74 Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv vsphere://4201238f-c9a3-a9b2-9c31-4ed99318bd30 Running 139d v1.20.9+vmware.1

❯ kgtkca | grep karvea
karvea-vc17ns11 sc201vc17pace running 2021-11-19T12:17:24Z v1.20.9+vmware.1-tkg.1.a4cee5b 1 4

Note: The above case is a sample scenario; the reasons why a TKC is stuck at updating may vary based on several conditions. This is a generic method one can follow while approaching these kinds of issues.
 
Hope it was useful. Cheers!

Sunday, July 17, 2022

vSphere with Tanzu using NSX-T - Part16 - Troubleshooting content library related issues

In this article, we will take a look at troubleshooting some of the content library related issues that you may encounter while managing/ administering vSphere with Tanzu clusters.


Case 1:
 
TKC (guest K8s cluster) deployments were failing as the VMs were not getting deployed. You can see a Failed to deploy OVF package error in the VC UI. This was due to the error A general system error occurred: HTTP request error: cannot authenticate SSL certificate for host wp-content.vmware.com while syncing the content library.
 
 

Following is a sample log for this issue from the vmop-controller-manager:

Warning CreateFailure 5m29s (x26 over 50m) vmware-system-vmop/vmware-system-vmop-controller-manager-85484c67b7-9jncl/virtualmachine-controller deploy from content library failed for image "ob-19344082-tkgs-ova-ubuntu-2004-v1.21.6---vmware.1-tkg.1": POST https://sc2-01-vcxx.xx.xxxx.com:443/rest/com/vmware/vcenter/ovf/library-item/id:8b34e422-cc30-4d44-9d78-367528df0622?~action=deploy: 500 Internal Server Error
This can be resolved by editing the content library and accepting the new certificate thumbprint.
 

Case 2:
 
Missing TKRs. Even though the CL is present in the VC and has all the required OVF templates, the TKR resources are missing/ not found on the supervisor cluster.
❯ kubectl get tkr
No resources found

This could happen if there are duplicate content libraries present in the VC with the same subscription URL. If you find duplicate CLs, try removing them. If there are CLs that are not being used, consider deleting them. Also, try synchronizing the CL.

If this doesn't resolve the issue, try to delete and recreate the CL, and make sure you select the newly created CL under Cluster > Configure > Supervisor Cluster > General > Tanzu Kubernetes Grid Service > Content Library.


You may also verify the vmware-system-vmop-controller-manager pod logs and capw-controller-manager pod logs. Check if those pods are running, or getting continuously restarted. If required you may restart those pods.



Case 3:
 

TKC deployments were failing as the VMs were not getting deployed. Sample vmop-controller-manager logs are given below:
E0803 18:51:30.638787       1 vmprovider.go:155] vsphere "msg"="Clone VirtualMachine failed" "error"="deploy from content library failed for image \"ob-18900476-photon-3-k8s-v1.21.6---vmware.1-tkg.1.b3d708a\": deploy error: The operation failed due to An error occurred during host configuration." "vmName"="rkatz-testmigrationvm5/gc-lab-control-plane-kxwn2"

E0803 18:51:30.638821 1 virtualmachine_controller.go:660] VirtualMachine "msg"="Provider failed to create VirtualMachine" "error"="deploy from content library failed for image \"ob-18900476-photon-3-k8s-v1.21.6---vmware.1-tkg.1.b3d708a\": deploy error: The operation failed due to An error occurred during host configuration." "name"="rkatz-testmigrationvm5/gc-lab-control-plane-kxwn2"

E0803 18:51:30.638851 1 virtualmachine_controller.go:358] VirtualMachine "msg"="Failed to reconcile VirtualMachine" "error"="deploy from content library failed for image \"ob-18900476-photon-3-k8s-v1.21.6---vmware.1-tkg.1.b3d708a\": deploy error: The operation failed due to An error occurred during host configuration." "name"="rkatz-testmigrationvm5/gc-lab-control-plane-kxwn2"

E0803 18:51:30.639301 1 controller.go:246] controller "msg"="Reconciler error" "error"="deploy from content library failed for image \"ob-18900476-photon-3-k8s-v1.21.6---vmware.1-tkg.1.b3d708a\": deploy error: The operation failed due to An error occurred during host configuration." "controller"="virtualmachine" "name"="gc-lab-control-plane-kxwn2" "namespace"="rkatz-testmigrationvm5" "reconcilerGroup"="vmoperator.xxxx.com" "reconcilerKind"="VirtualMachine"

This could be resolved by restarting the cm-inventory service on all nsx-t manager nodes. Following are the commands to restart cm-inventory service on NSX-T manager nodes:
get service cm-inventory  
restart service cm-inventory

Case 4: 
Sometimes in the WCP K8s layer you will notice stale contentsources object entries. Contentsources are the corresponding K8s-layer objects of content libraries. Due to various reasons/ requirements you might have created multiple content libraries and later deleted some of them from the vCenter, but they may not have been removed properly from the WCP K8s layer, which is how these stale contentsources objects appear. You can use PowerCLI to list the current content libraries present in the VC, compare them with the contentsources, and remove the stale entries.
> Get-ContentLibrary | select Name,Id | fl

Name : wdc-01-vc18c01-wcp
Id   : 17209f4b-3f7f-4bcb-aeaf-fd0b53b66d0d

> kg contentsources
NAME                                   AGE
0f00d3fa-de54-4630-bc99-aa13ccbe93db   173d
17209f4b-3f7f-4bcb-aeaf-fd0b53b66d0d   321d
451ce3f3-49d7-47d3-9a04-2839c5e5c662   242d
75e0668c-0cdc-421e-965d-fd736187cc57   173d
818c8700-efa4-416b-b78f-5f22e9555952   173d
9abbd108-aeb3-4b50-b074-9e6c00473b02   173d
a6cd1685-49bf-455f-a316-65bcdefac7cf   173d
acff9a91-0966-4793-9c3a-eb5272b802bd   242d
fcc08a43-1555-4794-a1ae-551753af9c03   173d

In the above sample case you can see multiple contentsource objects, but there is only one content library. So you can delete all the contentsource objects, except 17209f4b-3f7f-4bcb-aeaf-fd0b53b66d0d.
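For example, one of the stale entries could be removed like this (repeat for each stale ID; contentsources is the resource name used in the listing above):

> kubectl delete contentsources 0f00d3fa-de54-4630-bc99-aa13ccbe93db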

Hope it was useful. Cheers!