
Saturday, October 18, 2025

Working with GPUs - Part 1 - Using nvidia-smi

GPUs are the backbone of modern AI and HPC clusters, and understanding their basic health and configuration is the first step toward reliable operations. In this first post of the series, we start with nvidia-smi - the primary tool for discovering NVIDIA GPUs, validating driver and CUDA versions, and performing essential health checks. These fundamentals form the baseline for monitoring, performance benchmarking, and troubleshooting GPU compute nodes at scale.

Verify version 

nvidia-smi --version
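
On older driver branches the --version flag may not be available; in that case the driver version can still be read per GPU through the query interface (shown here as an alternative, not a replacement):

nvidia-smi --query-gpu=driver_version --format=csv,noheader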


List all GPUs 

nvidia-smi -L
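
When scripting node checks, the same listing is easy to turn into a quick GPU count (a small convenience example, assuming a standard shell):

nvidia-smi -L | wc -l        # should print 8 on a fully populated H100 SXM node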


Current state of every GPU

nvidia-smi


The following are the key observations from the above output:

  • All 8 GPUs detected (NVIDIA H100 80GB HBM3). Confirms full hardware enumeration. 
  • HBM3 memory present (80GB per GPU). Validates expected SKU (H100 SXM vs PCIe). This is important because SXM GPUs behave differently in power, cooling, and NVLink bandwidth; troubleshooting playbooks differ by form factor.
  • Driver version 550.90.07 with CUDA compatibility 12.4. This confirms a Hopper‑supported, production‑ready driver stack. Many issues (NCCL failures, DCGM errors, framework crashes) trace back to unsupported driver–CUDA combinations.
  • Persistence Mode: On. This avoids GPU driver reinitialization delays and flaky behavior between jobs. Turning this off in clusters can cause intermittent job start failures or longer warm‑up times.
  • Temperatures in 34–41 °C range at idle. This indicates healthy cooling and airflow. High idle temperatures point to heatsink issues, airflow obstructions, fan/BMC problems, or thermal paste degradation.
  • Performance State: P0 (highest performance). This shows the GPUs are not power- or thermally-throttled. If GPUs remain in lower P-states under load, suspect thermal limits, power caps, or firmware misconfigurations.
  • Power usage ~70–76 W against a 700 W cap. This confirms ample power headroom and no throttling. GPUs hitting the power cap under load may show reduced performance even when utilization appears high.
  • GPU utilization at 0% and no running processes. This confirms the node is idle and clean. Useful to rule out “ghost” workloads, leaked CUDA contexts, or stuck processes when diagnosing performance drops.
  • Memory usage ~1 MiB per GPU. Only driver bookkeeping allocations present. Any significant memory use at idle suggests leftover processes or failed container teardown.
  • Volatile Uncorrected ECC errors: 0. Confirms memory integrity. Any non‑zero uncorrected ECC errors are serious and usually justify isolating the GPU and starting RMA/vendor diagnostics.
  • MIG mode: Disabled. Ensures full GPU and NVLink bandwidth availability. MIG partitions can severely impact NCCL and large‑model training if enabled unintentionally.
  • Compute mode: Default. Allows multiple processes (expected in shared clusters). Exclusive modes can cause unexpected job failures or scheduling issues.
  • Fan: N/A (SXM platform). Normal for chassis‑controlled cooling. Fan values appearing unexpectedly may indicate incorrect sensor readings or platform misidentification.
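
Most of the signals called out above can also be pulled in a single line, which is handy for scripting or fleet-wide checks. The query below is a sketch built from standard query fields; exact field availability can vary slightly across driver versions:

nvidia-smi --query-gpu=index,persistence_mode,pstate,temperature.gpu,power.draw,power.limit,utilization.gpu,memory.used,ecc.errors.uncorrected.volatile.total,mig.mode.current,compute_mode --format=csv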


Health metrics of all GPUs


nvidia-smi -q

This shows details like:
  • Serial Number
  • VBIOS Version
  • GPU Part Number
  • Utilization
  • ECC Errors
  • Temperature, etc.
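
The full -q report is long, so it helps to narrow it to specific sections and a single GPU when chasing a particular symptom. The section names below follow the -d (display) option; adjust as needed:

nvidia-smi -q -d ECC,TEMPERATURE,POWER -i 0        # ECC, thermal, and power details for GPU 0 only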

Query GPU health metrics


Help: nvidia-smi --help-query-gpu

GPU memory usage and utilization: nvidia-smi --query-gpu=index,name,uuid,driver_version,memory.total,memory.used,utilization.gpu --format=csv
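
For watching a job in flight, the same query can be repeated on an interval and emitted without headers or units, which makes the output easy to log or plot (the 5-second interval is just an example):

nvidia-smi --query-gpu=index,utilization.gpu,memory.used --format=csv,noheader,nounits -l 5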


GPU temperature status: nvidia-smi --query-gpu=index,name,uuid,temperature.gpu,temperature.gpu.tlimit,temperature.memory --format=csv


GPU reset state: nvidia-smi --query-gpu=index,name,uuid,reset_status.reset_required,reset_status.drain_and_reset_recommended --format=csv


  • reset_status.reset_required - indicates whether the GPU must be reset to return to a clean operational state.
  • reset_status.drain_and_reset_recommended - Yes indicates the GPU/node should be drained first and then reset; No indicates the reset can be performed immediately.
  • Note: In production Kubernetes-based GPU clusters, the safest and recommended practice is to always drain the node before attempting GPU recovery. For H100 SXM systems, recovery is performed via a node reboot, not individual GPU resets (see the sketch below).
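
A rough sketch of that drain-then-reboot flow (the node name and kubectl flags below are illustrative and should be adapted to your cluster's policies):

kubectl drain gpu-node-01 --ignore-daemonsets --delete-emptydir-data    # evict workloads first; node name is a placeholder
# reboot the node through the usual provisioning/BMC workflow, then re-run the nvidia-smi checks above
kubectl uncordon gpu-node-01                                            # return the node to scheduling once it looks healthy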


NVLink topology

nvidia-smi topo -m


Note: Any non‑NVLink GPU‑to‑GPU path on H100 SXM immediately explains poor NCCL performance and requires hardware correction.
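
A practical habit is to capture the matrix and compare it against a known-good node; any drift (for example a GPU pair showing PIX, PHB, or SYS instead of an NV* link) stands out immediately in a diff. The file paths below are only placeholders:

nvidia-smi topo -m > /tmp/topo_current.txt
diff /tmp/topo_known_good.txt /tmp/topo_current.txt    # empty output means the topology matches the baseline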

nvidia-smi nvlink -s         # shows the per-direction (Tx or Rx) bandwidth of every NVLink on all GPUs

nvidia-smi nvlink -s -i 0    # same, restricted to GPU 0
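
Beyond link state and speed, NVLink error counters are worth checking when NCCL throughput looks off; non-zero and growing counters usually point to a marginal link or connector (the -e option prints error counters, where supported by the installed driver):

nvidia-smi nvlink -e -i 0    # NVLink error counters for GPU 0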


Hope it was useful. Cheers!