Saturday, July 30, 2022

vSphere with Tanzu using NSX-T - Part17 - Troubleshooting TKCs stuck at updating phase

Ideally, if everything goes well, a TKC (Tanzu Kubernetes Cluster, also known as a Guest Cluster) should be in the running phase. But sometimes, for various reasons, it may get stuck in the updating phase. In this article, we will take a sample case and look at troubleshooting and fixing it.

Following is an example:
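The listing below is taken from the Supervisor cluster context; kg is my alias for kubectl get, so a command roughly like the following produces it (the exact alias and filtering are part of my own setup, shown here only for context):

❯ kg tkc -A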

NAMESPACE         NAME            PHASE      CREATIONTIME           VERSION                          CP    WORKER
karvea-vc17ns11   sc201vc17pace   updating   2021-11-19T12:17:24Z   v1.20.9+vmware.1-tkg.1.a4cee5b   1     4

Let's connect to this TKC. Here I have a small plugin (kubectl-gckc) that generates the TKC kubeconfig, and gcc is an alias for KUBECONFIG=gckubeconfig, where gckubeconfig is the TKC admin kubeconfig file.
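For reference, the shell shortcuts used throughout this post look roughly like this; the kubectl-gckc plugin is my own helper, so treat this as a sketch of my local setup rather than anything that ships with the product:

# assumption: my local shell aliases
alias k='kubectl'
alias kg='kubectl get'
# trailing space lets the next word (k/kg) be alias-expanded too
alias gcc='KUBECONFIG=gckubeconfig '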
❯ k gckc karvea-vc17ns11 sc201vc17pace
❯ gcc kg no
NAME STATUS ROLES AGE VERSION
sc201vc17pace-control-plane-zt99l Ready control-plane,master 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz Ready,SchedulingDisabled <none> 189d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw Ready,SchedulingDisabled <none> 189d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt Ready <none> 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp Ready <none> 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 Ready <none> 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv Ready <none> 139d v1.20.9+vmware.1

❯ kg vm -n karvea-vc17ns11
NAME POWERSTATE AGE
sc201vc17pace-control-plane-zt99l poweredOn 139d
sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz poweredOn 189d
sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw poweredOn 189d
sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt poweredOn 139d
sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp poweredOn 139d
sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 poweredOn 139d
sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv poweredOn 139d



❯ kg machine -n karvea-vc17ns11
NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
sc201vc17pace-control-plane-zt99l sc201vc17pace sc201vc17pace-control-plane-zt99l vsphere://4201e660-3124-9aa5-4ec2-6fbc2ff3ecea Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz sc201vc17pace sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz vsphere://42010982-8b25-ad7b-2a1d-bb949def4834 Deleting 189d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw sc201vc17pace sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw vsphere://4201a640-2b39-3d66-5a26-db95a612f6e5 Deleting 189d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt vsphere://42013a9b-dffb-4609-89d6-4ca123c4dc1e Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp vsphere://4201160b-21c9-ccc2-6826-e3545e34b490 Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 vsphere://420125a8-e45c-04b7-5612-ce3149e86d74 Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv vsphere://4201238f-c9a3-a9b2-9c31-4ed99318bd30 Running 139d v1.20.9+vmware.1


As you can see above, two worker machines are stuck in the Deleting phase. This is because the corresponding two worker nodes are in Ready,SchedulingDisabled status. The nodes have not been drained yet for some reason. Once they are drained properly, their status will change to NotReady,SchedulingDisabled. Now let's try to drain those worker nodes manually.
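Optionally, before draining, something like the following shows what is still running on the cordoned node (the field selector is just one convenient way to check):

❯ gcc kg pods -A --field-selector spec.nodeName=sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz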
❯ gcc k drain sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz
node/sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz already cordoned
DEPRECATED WARNING: Aborting the drain command in a list of nodes will be deprecated in v1.23.
The new behavior will make the drain command go through all nodes even if one or more nodes failed during the drain.
For now, users can try such experience via: --ignore-errors
error: unable to drain node "sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz", aborting command...

There are pending nodes to be drained:
sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz
cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/calico-node-wqlmq, kube-system/kube-proxy-78z5k, nsxi-platform/nsxi-platform-fluent-bit-pdzjx, projectcontour/projectcontour-envoy-r9pg7, vmware-system-csi/vsphere-csi-node-p2gtd
cannot delete Pods with local storage (use --delete-emptydir-data to override): nsxi-platform/kafka-2

❯ gcc k drain sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz --ignore-daemonsets --delete-emptydir-data
node/sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-wqlmq, kube-system/kube-proxy-78z5k, nsxi-platform/nsxi-platform-fluent-bit-pdzjx, projectcontour/projectcontour-envoy-r9pg7, vmware-system-csi/vsphere-csi-node-p2gtd
evicting pod nsxi-platform/kafka-2
error when evicting pods/"kafka-2" -n "nsxi-platform" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod nsxi-platform/kafka-2
error when evicting pods/"kafka-2" -n "nsxi-platform" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
^C
❯ gcc kg pdb
No resources found in default namespace.
❯ gcc kg pdb -A
NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
nsxi-platform kafka N/A 1 0 188d
nsxi-platform zookeeper N/A 1 1 188d


Here the worker node sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz is not getting drained because of the presence of a pod disruption budget (PDB). So, in order to drain the node, I am taking a backup of the PDB YAML and then deleting it. Once the nodes are drained, I will apply the PDB YAML back onto the cluster.
❯ gcc kg pdb -n nsxi-platform kafka -oyaml > pdb-nsxi-platform-kafka.yaml
❯ code pdb-nsxi-platform-kafka.yaml
❯ gcc kg pdb -n nsxi-platform zookeeper -oyaml > pdb-nsxi-platform-zookeeper.yaml
❯ code pdb-nsxi-platform-zookeeper.yaml

❯ gcc k delete pdb kafka -n nsxi-platform
poddisruptionbudget.policy "kafka" deleted
❯ gcc kg pdb -A
NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
nsxi-platform zookeeper N/A 1 1 188d

❯ gcc k drain sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz --ignore-daemonsets --delete-emptydir-data
node/sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-wqlmq, kube-system/kube-proxy-78z5k, nsxi-platform/nsxi-platform-fluent-bit-pdzjx, projectcontour/projectcontour-envoy-r9pg7, vmware-system-csi/vsphere-csi-node-p2gtd
evicting pod nsxi-platform/kafka-2
pod/kafka-2 evicted
node/sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz evicted


❯ gcc k drain sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz --ignore-daemonsets --delete-emptydir-data
node/sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-wqlmq, kube-system/kube-proxy-78z5k, nsxi-platform/nsxi-platform-fluent-bit-pdzjx, projectcontour/projectcontour-envoy-r9pg7, vmware-system-csi/vsphere-csi-node-p2gtd
node/sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz drained


❯ gcc k drain sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw
node/sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw already cordoned
DEPRECATED WARNING: Aborting the drain command in a list of nodes will be deprecated in v1.23.
The new behavior will make the drain command go through all nodes even if one or more nodes failed during the drain.
For now, users can try such experience via: --ignore-errors
error: unable to drain node "sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw", aborting command...

There are pending nodes to be drained:
sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw
error: cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/calico-node-4tz4x, kube-system/kube-proxy-q726d, nsxi-platform/nsxi-platform-fluent-bit-b24nn, projectcontour/projectcontour-envoy-rppkx, vmware-system-csi/vsphere-csi-node-mpbsh
❯ gcc k drain sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw --ignore-daemonsets
node/sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-4tz4x, kube-system/kube-proxy-q726d, nsxi-platform/nsxi-platform-fluent-bit-b24nn, projectcontour/projectcontour-envoy-rppkx, vmware-system-csi/vsphere-csi-node-mpbsh
node/sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw drained
The worker nodes are now drained.
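Since the kafka PDB was deleted only to unblock the drain, it should be applied back from the backup taken earlier; something along these lines (you may need to strip cluster-generated fields such as resourceVersion and status from the exported YAML before applying):

❯ gcc k apply -f pdb-nsxi-platform-kafka.yaml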
❯ gcc kg no
NAME STATUS ROLES AGE VERSION
sc201vc17pace-control-plane-zt99l Ready control-plane,master 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-5c7d9548f-pn6vz NotReady,SchedulingDisabled <none> 189d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw NotReady,SchedulingDisabled <none> 189d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt Ready <none> 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp Ready <none> 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 Ready <none> 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv Ready <none> 139d v1.20.9+vmware.1

❯ gcc kg no
NAME STATUS ROLES AGE VERSION
sc201vc17pace-control-plane-zt99l Ready control-plane,master 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw NotReady,SchedulingDisabled <none> 189d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt Ready <none> 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp Ready <none> 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 Ready <none> 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv Ready <none> 139d v1.20.9+vmware.1
As soon as the worker nodes were drained, one of them was successfully removed/deleted, but the other worker node was still present. Looking at the machine resources, you can see that one of the worker machines is still stuck in the Deleting phase. In this case I manually deleted the worker node (shown below), but the corresponding worker machine remained stuck in the Deleting phase.
❯ kg machine -n karvea-vc17ns11
NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
sc201vc17pace-control-plane-zt99l sc201vc17pace sc201vc17pace-control-plane-zt99l vsphere://4201e660-3124-9aa5-4ec2-6fbc2ff3ecea Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw sc201vc17pace sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw vsphere://4201a640-2b39-3d66-5a26-db95a612f6e5 Deleting 189d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt vsphere://42013a9b-dffb-4609-89d6-4ca123c4dc1e Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp vsphere://4201160b-21c9-ccc2-6826-e3545e34b490 Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 vsphere://420125a8-e45c-04b7-5612-ce3149e86d74 Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv vsphere://4201238f-c9a3-a9b2-9c31-4ed99318bd30 Running 139d v1.20.9+vmware.1


❯ gcc k delete node sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw
node "sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw" deleted

❯ gcc kg no
NAME STATUS ROLES AGE VERSION
sc201vc17pace-control-plane-zt99l Ready control-plane,master 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt Ready <none> 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp Ready <none> 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 Ready <none> 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv Ready <none> 139d v1.20.9+vmware.1
Now let's describe the worker machine stuck at Deleting. In this case, you can see that there are two PVCs stuck in Terminating status. So I edited the YAML of those two PVCs and set their finalizers to null (a sample patch command is shown after the PVC listing below).
❯ kg machine -n karvea-vc17ns11
NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
sc201vc17pace-control-plane-zt99l sc201vc17pace sc201vc17pace-control-plane-zt99l vsphere://4201e660-3124-9aa5-4ec2-6fbc2ff3ecea Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw sc201vc17pace sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw vsphere://4201a640-2b39-3d66-5a26-db95a612f6e5 Deleting 189d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt vsphere://42013a9b-dffb-4609-89d6-4ca123c4dc1e Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp vsphere://4201160b-21c9-ccc2-6826-e3545e34b490 Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 vsphere://420125a8-e45c-04b7-5612-ce3149e86d74 Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv vsphere://4201238f-c9a3-a9b2-9c31-4ed99318bd30 Running 139d v1.20.9+vmware.1



❯ kg vm -n karvea-vc17ns11
NAME POWERSTATE AGE
sc201vc17pace-control-plane-zt99l poweredOn 139d
sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt poweredOn 139d
sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp poweredOn 139d
sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 poweredOn 139d
sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv poweredOn 139d


❯ kd machine sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw -n karvea-vc17ns11

Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal DetectedUnhealthy 13m (x2 over 17m) machinehealthcheck-controller Machine karvea-vc17ns11/sc201vc17pace-workers-jrcb6/sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw/sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw has unhealthy node sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw
Normal SuccessfulDrainNode 13m (x2 over 19m) machine-controller success draining Machine's node "sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw"
Normal NodeVolumesDetached 12m (x2 over 19m) machine-controller success waiting for node volumes detach Machine's node "sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw"
Normal MachineMarkedUnhealthy 106s (x4 over 9m58s) machinehealthcheck-controller Machine karvea-vc17ns11/sc201vc17pace-workers-jrcb6/sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw/sc201vc17pace-workers-jrcb6-5c7d9548f-w64lw has been marked as unhealthy

❯ kg pvc -n karvea-vc17ns11
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
a366a76b-2000-4d33-a817-a9c1b9e60b1b-1f4b5ee8-f378-445e-97d3-f4c4656863bb Bound pvc-1dc35d76-86c6-4a70-82e7-99609480a0b3 10Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-3509d39d-e632-492b-a0c4-b5b3874b01a6 Bound pvc-97e6e063-9a9e-4837-9999-284523379453 128Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-42a0f98e-0f9c-4fc1-bc9f-862e94086624 Bound pvc-be6bd318-140c-4cb8-9c22-daf9ec8dac65 128Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-48b9ddc4-41bc-4228-a6b5-0aea3a470811 Bound pvc-faa7798e-c045-420f-9d09-44674d9d2326 20Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-8c880e33-681a-4eae-a57d-3aaf0fb9c950 Bound pvc-cf1a6c2e-0e9e-425c-ae46-b010b086c325 10Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-aa196378-d10f-45ed-a528-b0d691ec6447 Bound pvc-49fca2f0-3402-429f-884f-7db9012934d6 8Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-bbe074ee-9ba3-4839-b519-af82214a9ad0 Bound pvc-3887e89c-0a5b-4d08-938b-c9cb0a1efaca 8Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-bfb23073-29e8-4f0d-b2c0-934ff808ad2c Bound pvc-f966f803-ca92-45b6-9395-8d1d24c67f8e 10Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-d39e8f9b-692e-46ac-a52c-2d977f0a95fa Bound pvc-25d7c8c2-7994-4ee8-9ef8-725ae1c8c8a1 8Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-ef1e2362-83bc-4af4-b748-a496aa911009 Bound pvc-7aefd3fe-3279-4e20-8a00-5ca60cc61e40 128Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-f072ee1b-034a-4ac8-965c-f66a2d8bd61c Bound pvc-276acbee-ba6c-4cc9-8bc5-e18525abd256 20Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
sc201vc17pace-workers-wswdh-2hz8w-containerd Bound pvc-e67e3a6f-99d6-4e21-813d-e9c9994b25d6 42Gi RWO sc2-01-vc17c01-wcp-mgmt 139d
sc201vc17pace-workers-wswdh-5pjrc-containerd Bound pvc-fb162388-4347-4f48-825e-c2c2d62ceb90 42Gi RWO sc2-01-vc17c01-wcp-mgmt 139d
sc201vc17pace-workers-wswdh-755m6-containerd Terminating pvc-da2e4866-bb41-4f74-a4b7-0f74bc7061a1 42Gi RWO sc2-01-vc17c01-wcp-mgmt 189d
sc201vc17pace-workers-wswdh-dgmjs-containerd Terminating pvc-64eac528-f160-444c-9a0f-0ed9f6393e06 42Gi RWO sc2-01-vc17c01-wcp-mgmt 189d
sc201vc17pace-workers-wswdh-djp2m-containerd Bound pvc-a7542552-de13-4670-ac45-84ed39c3c916 42Gi RWO sc2-01-vc17c01-wcp-mgmt 139d
sc201vc17pace-workers-wswdh-flwtt-containerd Bound pvc-1b8ee843-709a-4e2a-955d-a9a9a6a83c73 42Gi RWO sc2-01-vc17c01-wcp-mgmt 139d
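For reference, instead of hand-editing the YAML, the finalizers on the two Terminating PVCs can be cleared with a patch; a sketch, run against the Supervisor namespace, with the names taken from the listing above:

❯ k patch pvc sc201vc17pace-workers-wswdh-755m6-containerd -n karvea-vc17ns11 --type=merge -p '{"metadata":{"finalizers":null}}'
❯ k patch pvc sc201vc17pace-workers-wswdh-dgmjs-containerd -n karvea-vc17ns11 --type=merge -p '{"metadata":{"finalizers":null}}'

Note that forcibly removing finalizers should be a last resort, as it can leave the backing volume behind for manual cleanup.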

As soon as the PVCs are removed, the worker machine that was stuck at Deleting gets removed as well, and the TKC changes its status to running.
❯ kg pvc -n karvea-vc17ns11
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
a366a76b-2000-4d33-a817-a9c1b9e60b1b-1f4b5ee8-f378-445e-97d3-f4c4656863bb Bound pvc-1dc35d76-86c6-4a70-82e7-99609480a0b3 10Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-3509d39d-e632-492b-a0c4-b5b3874b01a6 Bound pvc-97e6e063-9a9e-4837-9999-284523379453 128Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-42a0f98e-0f9c-4fc1-bc9f-862e94086624 Bound pvc-be6bd318-140c-4cb8-9c22-daf9ec8dac65 128Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-48b9ddc4-41bc-4228-a6b5-0aea3a470811 Bound pvc-faa7798e-c045-420f-9d09-44674d9d2326 20Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-8c880e33-681a-4eae-a57d-3aaf0fb9c950 Bound pvc-cf1a6c2e-0e9e-425c-ae46-b010b086c325 10Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-aa196378-d10f-45ed-a528-b0d691ec6447 Bound pvc-49fca2f0-3402-429f-884f-7db9012934d6 8Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-bbe074ee-9ba3-4839-b519-af82214a9ad0 Bound pvc-3887e89c-0a5b-4d08-938b-c9cb0a1efaca 8Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-bfb23073-29e8-4f0d-b2c0-934ff808ad2c Bound pvc-f966f803-ca92-45b6-9395-8d1d24c67f8e 10Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-d39e8f9b-692e-46ac-a52c-2d977f0a95fa Bound pvc-25d7c8c2-7994-4ee8-9ef8-725ae1c8c8a1 8Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-ef1e2362-83bc-4af4-b748-a496aa911009 Bound pvc-7aefd3fe-3279-4e20-8a00-5ca60cc61e40 128Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
a366a76b-2000-4d33-a817-a9c1b9e60b1b-f072ee1b-034a-4ac8-965c-f66a2d8bd61c Bound pvc-276acbee-ba6c-4cc9-8bc5-e18525abd256 20Gi RWO sc2-01-vc17c01-wcp-mgmt 188d
sc201vc17pace-workers-wswdh-2hz8w-containerd Bound pvc-e67e3a6f-99d6-4e21-813d-e9c9994b25d6 42Gi RWO sc2-01-vc17c01-wcp-mgmt 139d
sc201vc17pace-workers-wswdh-5pjrc-containerd Bound pvc-fb162388-4347-4f48-825e-c2c2d62ceb90 42Gi RWO sc2-01-vc17c01-wcp-mgmt 139d
sc201vc17pace-workers-wswdh-djp2m-containerd Bound pvc-a7542552-de13-4670-ac45-84ed39c3c916 42Gi RWO sc2-01-vc17c01-wcp-mgmt 139d
sc201vc17pace-workers-wswdh-flwtt-containerd Bound pvc-1b8ee843-709a-4e2a-955d-a9a9a6a83c73 42Gi RWO sc2-01-vc17c01-wcp-mgmt 139d

❯ kg machine -n karvea-vc17ns11
NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
sc201vc17pace-control-plane-zt99l sc201vc17pace sc201vc17pace-control-plane-zt99l vsphere://4201e660-3124-9aa5-4ec2-6fbc2ff3ecea Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-gxmtt vsphere://42013a9b-dffb-4609-89d6-4ca123c4dc1e Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-j4wvp vsphere://4201160b-21c9-ccc2-6826-e3545e34b490 Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-l2dq5 vsphere://420125a8-e45c-04b7-5612-ce3149e86d74 Running 139d v1.20.9+vmware.1
sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv sc201vc17pace sc201vc17pace-workers-jrcb6-85c4844f6c-xqlkv vsphere://4201238f-c9a3-a9b2-9c31-4ed99318bd30 Running 139d v1.20.9+vmware.1

❯ kgtkca | grep karvea
karvea-vc17ns11 sc201vc17pace running 2021-11-19T12:17:24Z v1.20.9+vmware.1-tkg.1.a4cee5b 1 4

Note: The above case is a sample scenario, and the reasons why a TKC gets stuck at updating may vary based on several conditions. This is a generic method one can follow when approaching this kind of issue.
 
Hope it was useful. Cheers!
