[INTERNAL USE]

...


HOW TO INSTALL TOOL

Tool File Names:
Hopper (H100 GPU): 629-24287-XXXX-FLD-38780.tgz
Ampere (A100 GPU): 629-23587-XX86-FLD-38782.tgz
Download Location (INTERNAL QA SERVER): scp root@172.25.10.35:/root/HGX_Tool/<tool file name> .

PROBLEM SITUATION

Supermicro provided this file to diagnose HGX H100 GPU issues. Related to ZD-6179 / SMC CRM Case: SM2310022368.

The reported issue involved the 4x NVSwitches used by the 8x H100 GPUs: only 3 of the 4 NVSwitches were recognized, so the Fabric Manager service was unable to run.

Unable to start Fabric Manager Service.

Code Block
root@rdlab:/home/exx# systemctl enable nvidia-fabricmanager.service
root@rdlab:/home/exx# systemctl start nvidia-fabricmanager.service
Job for nvidia-fabricmanager.service failed because the control process exited with error code.
See "systemctl status nvidia-fabricmanager.service" and "journalctl -xeu nvidia-fabricmanager.service" for details.

Checked the service status.

Code Block
root@rdlab:/home/exx# systemctl status nvidia-fabricmanager.service
× nvidia-fabricmanager.service - NVIDIA fabric manager service
     Loaded: loaded (/lib/systemd/system/nvidia-fabricmanager.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Fri 2023-09-29 15:18:28 PDT; 1min 41s ago
    Process: 16773 ExecStart=/usr/bin/nv-fabricmanager -c /usr/share/nvidia/nvswitch/fabricmanager.cfg (code=exited, status=1/FAILURE)
        CPU: 16ms

Sep 29 15:18:27 rdlab systemd[1]: Starting NVIDIA fabric manager service...
Sep 29 15:18:28 rdlab nv-fabricmanager[16775]: Connected to 1 node.
Sep 29 15:18:28 rdlab nv-fabricmanager[16775]: detected number of NVSwitches don't match with any supported system topology, aborting fabric ma>
Sep 29 15:18:28 rdlab nv-fabricmanager[16775]: detected number of NVSwitches don't match with any supported system topology, aborting fabric ma>
Sep 29 15:18:28 rdlab systemd[1]: nvidia-fabricmanager.service: Control process exited, code=exited, status=1/FAILURE
Sep 29 15:18:28 rdlab systemd[1]: nvidia-fabricmanager.service: Failed with result 'exit-code'.
Sep 29 15:18:28 rdlab systemd[1]: Failed to start NVIDIA fabric manager service.

 
The journal shows repeated start attempts; each one connects to the node but then fails with the same NVSwitch topology error.

Code Block
root@rdlab:/home/exx# journalctl -xeu nvidia-fabricmanager.service
Sep 29 15:18:28 rdlab nv-fabricmanager[16775]: detected number of NVSwitches don't match with any supported system topology, aborting fabric ma>
Sep 29 15:18:28 rdlab nv-fabricmanager[16775]: detected number of NVSwitches don't match with any supported system topology, aborting fabric ma>
Sep 29 15:18:28 rdlab systemd[1]: nvidia-fabricmanager.service: Control process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░
░░ An ExecStart= process belonging to unit nvidia-fabricmanager.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 1.
Sep 29 15:18:28 rdlab systemd[1]: nvidia-fabricmanager.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░
░░ The unit nvidia-fabricmanager.service has entered the 'failed' state with result 'exit-code'.
Sep 29 15:18:28 rdlab systemd[1]: Failed to start NVIDIA fabric manager service.
░░ Subject: A start job for unit nvidia-fabricmanager.service has failed
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░
░░ A start job for unit nvidia-fabricmanager.service has finished with a failure.
░░
░░ The job identifier is 10553 and the job result is failed.
Sep 29 15:23:52 rdlab systemd[1]: Starting NVIDIA fabric manager service...
░░ Subject: A start job for unit nvidia-fabricmanager.service has begun execution
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░
░░ A start job for unit nvidia-fabricmanager.service has begun execution.
░░
░░ The job identifier is 10924.
Sep 29 15:23:53 rdlab nv-fabricmanager[16995]: Connected to 1 node.
Sep 29 15:23:53 rdlab nv-fabricmanager[16995]: detected number of NVSwitches don't match with any supported system topology, aborting fabric ma>
Sep 29 15:23:53 rdlab nv-fabricmanager[16995]: detected number of NVSwitches don't match with any supported system topology, aborting fabric ma>
Sep 29 15:23:53 rdlab systemd[1]: nvidia-fabricmanager.service: Control process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░
░░ An ExecStart= process belonging to unit nvidia-fabricmanager.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 1.
Sep 29 15:23:53 rdlab systemd[1]: nvidia-fabricmanager.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░
░░ The unit nvidia-fabricmanager.service has entered the 'failed' state with result 'exit-code'.
Sep 29 15:23:53 rdlab systemd[1]: Failed to start NVIDIA fabric manager service.
░░ Subject: A start job for unit nvidia-fabricmanager.service has failed
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░
░░ A start job for unit nvidia-fabricmanager.service has finished with a failure.
░░
░░ The job identifier is 10924 and the job result is failed.

Log Files:

View file: RMA28206-fabricmanager.log
View file: RMA28206-NVSwitch Detection..txt
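A quick way to sanity-check how many NVSwitches the OS actually enumerates is to count NVIDIA bridge-class PCIe devices. This is a generic check, not part of the SMC tool; the assumption is that the NVSwitches enumerate as PCIe bridges under NVIDIA's vendor ID (10de), so a healthy 8x H100 baseboard should show 4.

```shell
# Count NVIDIA (vendor 10de) bridge-class PCIe devices. On HGX baseboards
# the NVSwitches enumerate as bridges, so an 8x H100 system should show 4.
# Prints 0 if lspci is unavailable or nothing matches.
count=$(lspci -d 10de: 2>/dev/null | grep -ci bridge || true)
echo "NVSwitch-class devices detected: ${count:-0}"
```

On the failing system described here, this count would come back as 3 rather than 4, matching the Fabric Manager topology error.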

INVESTIGATION DETAILS


Tool Installation

Download Location (INTERNAL QA SERVER):
Hopper (H100 GPU): scp root@172.25.10.35:/root/HGX_Tool/629-24287-XXXX-FLD-38780.tgz .
Ampere (A100 GPU): scp root@172.25.10.35:/root/HGX_Tool/629-23587-XX86-FLD-38782.tgz .
Unload Nvidia Driver: scp root@172.25.10.35:/root/HGX_Tool/unload_nvidia_driver.sh .

The tool is expected to be placed in the /var/diags folder. Create this folder if it does not exist.
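The download-and-extract steps above can be chained as a sketch like this. The `run` wrapper and `DRY_RUN` flag are my additions for safety, not part of the tool, and the sketch assumes the archive carries its own top-level directory (as the listings below suggest).

```shell
# Hypothetical end-to-end install sketch of the steps above (Hopper package;
# substitute the Ampere file name as needed). DRY_RUN=1 (the default) only
# prints each command; set DRY_RUN=0 to actually execute them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

PKG=629-24287-XXXX-FLD-38780.tgz
run mkdir -p /var/diags                                   # tool lives in /var/diags
run scp "root@172.25.10.35:/root/HGX_Tool/$PKG" /var/diags/
run tar -xzf "/var/diags/$PKG" -C /var/diags              # archive has its own top-level dir
```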

Extracted folder (Hopper) content:

Code Block
root@rdlab:/var/diags/629-24287-XXXX-FLD-38780# ll
total 473124
drwxr-xr-x 2 root root      4096 Oct  4 10:55 ./
drwxr-xr-x 3 exx  exx       4096 Oct  4 10:55 ../
-rwxr-xr-x 1 exx  exx      17895 Sep 11 09:47 fdmain.sh*
-rwxr-xr-x 1 exx  exx      32888 Sep 11 09:47 fieldiag.sh*
-r-xr-xr-x 1 exx  exx   11194648 Sep 11 09:47 nvflash*
-rwxr-xr-x 1 exx  exx  473142559 Sep 11 09:47 onediagfield.r6.252.tgz*
-r-xr-xr-x 1 exx  exx       2906 Sep 11 09:47 README.txt*
-r-xr-xr-x 1 exx  exx       1702 Sep 11 09:47 relnotes.txt*
-r-xr-xr-x 1 exx  exx      18541 Sep 11 09:47 sku_hopper-hgx-8-gpu.json*
-r-xr-xr-x 1 exx  exx      18477 Sep 11 09:47 sku_hopper-hgx-8-gpu_tpol.json*
-rw-rw-r-- 1 exx  exx       3428 Sep 11 09:47 spec_hopper-hgx-8-gpu_level1_field.json
-rw-rw-r-- 1 exx  exx       3428 Sep 11 09:47 spec_hopper-hgx-8-gpu_level2_field.json
-rw-rw-r-- 1 exx  exx       2312 Sep 11 09:47 spec_hopper-hgx-8-gpu_sit_field.json
-r-xr-xr-x 1 exx  exx       6832 Sep 11 09:47 testargs_hopper-hgx-8-gpu.json*

Extracted folder (Ampere) content:

Code Block
root@rdlab:/var/diags/629-23587-XX86-FLD-38782# ll
total 243940
drwxr-xr-x 4 root root      4096 Mar  4 22:55 ./
drwxr-xr-x 4 root root      4096 Mar  5 17:03 ../
drwxr-xr-x 8 root root      4096 Mar  4 22:55 dgx/
-rw-r--r-- 1 root root         0 Mar  4 22:55 dgx_log_creation_lock
-rw-r--r-- 1 root root         0 Mar  4 22:55 dgx_unpack_package_lock
-rw-r--r-- 1 root root     26360 Mar  5 00:43 fieldiag.log
-rwxr-xr-x 1 exx  exx      31232 Sep 19 20:53 fieldiag.sh*
-rwxr-xr-x 1 exx  exx  238456629 Sep 19 20:53 hgxfieldiag.r3.102*
drwxr-xr-x 2 root root      4096 Mar  5 00:43 logs/
-r-xr-xr-x 1 exx  exx   11104504 Sep 19 20:53 nvflash*
-rwxr-xr-x 1 exx  exx       2773 Sep 19 20:53 README.txt*
-rwxr-xr-x 1 exx  exx       4497 Sep 19 20:53 relnotes.txt*
-rwxr-xr-x 1 exx  exx       1823 Sep 19 20:53 sku_hgx-a100-8-gpu_40g_aircooled.json*
-rwxr-xr-x 1 exx  exx       1482 Sep 19 20:53 sku_hgx-a100-8-gpu_40g_hybrid.json*
-rwxr-xr-x 1 exx  exx       3787 Sep 19 20:53 sku_hgx-a100-8-gpu_80g_aircooled.json*
-rwxr-xr-x 1 exx  exx       3611 Sep 19 20:53 sku_hgx-a100-8-gpu_80g_hybrid.json*
-rwxr-xr-x 1 exx  exx      25076 Sep 19 20:53 testargs_hgx-a100-8-gpu_2tray.json*
-rwxr-xr-x 1 exx  exx      24734 Sep 19 20:53 testargs_hgx-a100-8-gpu_d00_2tray.json*
-rwxr-xr-x 1 exx  exx      14310 Sep 19 20:53 testargs_hgx-a100-8-gpu_d00.json*
-rwxr-xr-x 1 exx  exx      14652 Sep 19 20:53 testargs_hgx-a100-8-gpu.json*


FIELDIAG TOOL USAGE

Review the README.txt for details on usage and options.

View file
nameREADME.txt

If the following error is encountered when running fieldiag.sh, run the unload_nvidia_driver.sh script to stop the NVIDIA services and unload the driver.

Code Block
root@rdlab:/home/exx/smc_fieldiag/629-24287-XXXX-FLD-38780# ./fieldiag.sh
Unpacking onediag...
Could not determine HGX baseboard SKU
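If the error persists after running the unload script, it is worth confirming that no nvidia kernel modules are still loaded before retrying. This is a generic check, not part of the SMC tool:

```shell
# Count nvidia* kernel modules still loaded; expect 0 before rerunning
# fieldiag.sh. Prints 0 if lsmod is unavailable.
loaded=$(lsmod 2>/dev/null | grep -c '^nvidia' || true)
echo "nvidia modules still loaded: ${loaded:-0}"
```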

SMC provided a script to unload Nvidia drivers.

File Name: unload_nvidia_driver.sh

View file
nameunload_nvidia_driver.sh
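The attached script is authoritative. As a rough sketch, a driver unload typically stops the NVIDIA services and then removes the kernel modules in dependency order; the service and module lists below are assumptions and may differ from the SMC script.

```shell
#!/bin/sh
# Sketch of a typical NVIDIA driver unload sequence; the SMC-provided
# unload_nvidia_driver.sh attachment is the authoritative version.
systemctl stop nvidia-fabricmanager.service 2>/dev/null || true
systemctl stop nvidia-persistenced.service  2>/dev/null || true

# Dependent modules must be removed before the core nvidia module.
mods="nvidia_uvm nvidia_drm nvidia_modeset nvidia_peermem nvidia"
for mod in $mods; do
    rmmod "$mod" 2>/dev/null || true   # ignore modules that are not loaded
done
```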

GPU Field Diagnostic test failure example. Only 2 of the 8 GPUs were properly identified for testing and passed; the remaining 6 failed.

Code Block
root@rdlab:/var/diags/629-24287-XXXX-FLD-38780# ./fieldiag.sh --gpufielddiag
/var/diags/629-24287-XXXX-FLD-38780
Warning: Stopping systemd-udevd.service, but it can still be activated by:
  systemd-udevd-kernel.socket
  systemd-udevd-control.socket
************************************************************
*                                                          *
*                   GPU FIELD DIAGNOSTIC                   *
*                                                          *
************************************************************
Version            629-24287-XXXX-FLD-38780
Logs               /var/diags/629-24287-XXXX-FLD-38780/dgx/logs-20231002-150508/gpu_fd_logs

Running fieldiag...
GPU Devices Under Test: 0:04:00.0 0:23:00.0 0:43:00.0 0:64:00.0 0:84:00.0 0:a3:00.0 0:c3:00.0 0:e4:00.0
Running Parallel Tests

MODS start: Mon Oct  2 15:06:29 2023
GPU 0: RUNNING  GPU 1: RUNNING  GPU 2: RUNNING  GPU 3: RUNNING  GPU 4: RUNNING  GPU 5: RUNNING  GPU 6: RUNNING  GPU 7: RUNNING
GPU 0: RUNNING  GPU 1: RUNNING  GPU 2: FAIL     GPU 3: FAIL     GPU 4: FAIL     GPU 5: FAIL     GPU 6: FAIL     GPU 7: FAIL
Initializing... |====================| 100.0 %
Running test 489 on            GPU 0 [7] [04:00.0] -   2 tests remaining |====================| 99.6 %
Done    test 491 on            GPU 0 [7] [04:00.0] -   0 tests remaining |====================| 100.0 %FAIL     GPU 7: FAIL

Error Code = 000000000000 (ok)

 #######     ####     ######    ######
 ########   ######   ########  ########
 ##    ##  ##    ##  ##     #  ##     #
 ##    ##  ##    ##   ###       ###
 ########  ########    ####      ####
 #######   ########      ###       ###
 ##        ##    ##  #     ##  #     ##
 ##        ##    ##  ########  ########
 ##        ##    ##   ######    ######

MODS end  : Mon Oct  2 23:25:10 2023  [29921.052 seconds (08:18:41.052 h:m:s)]
MODS end  : Mon Oct  2 23:25:10 2023  [29921.081 seconds (08:18:41.081 h:m:s)]
GPU 0: PENDING  GPU 1: PENDING  GPU 2: FAIL     GPU 3: FAIL     GPU 4: FAIL     GPU 5: FAIL     GPU 6: FAIL     GPU 7: FAIL
ls: cannot access '__fieldiag2/*.log': No such file or directory
ls: cannot access '__fieldiag3/*.log': No such file or directory
ls: cannot access '__fieldiag4/*.log': No such file or directory
ls: cannot access '__fieldiag5/*.log': No such file or directory
ls: cannot access '__fieldiag6/*.log': No such file or directory
ls: cannot access '__fieldiag7/*.log': No such file or directory
----------------------
Fieldiag Testing Completed
GPU 0: PASS     GPU 1: PASS     GPU 2: FAIL     GPU 3: FAIL     GPU 4: FAIL     GPU 5: FAIL     GPU 6: FAIL     GPU 7: FAIL

Results Summary
GPU ID    |       GPU SN#      |   STATUS
===============================================
GPU0      |   1650723008686    |    PASS
GPU1      |   1650723001355    |    PASS

 #######     ####     ######    ######
 ########   ######   ########  ########
 ##    ##  ##    ##  ##     #  ##     #
 ##    ##  ##    ##   ###       ###
 ########  ########    ####      ####
 #######   ########      ###       ###
 ##        ##    ##  #     ##  #     ##
 ##        ##    ##  ########  ########
 ##        ##    ##   ######    ######

Done
Failed to send reload request: No such file or directory

Test methods requested by Supermicro in the CRM ticket to focus on the NVSwitch issue.

Change the test speed value in the connectivity section of the relevant spec JSON file to Gen 3 - 8000, then run:

./fieldiag.sh --no_bmc --sit # or --level1 or --level2

Spec files:

spec_hopper-hgx-8-gpu_sit_field.json
spec_hopper-hgx-8-gpu_level1_field.json
spec_hopper-hgx-8-gpu_level2_field.json

Section to alter speed value.

...

Example of running the Field Test, showing the log output location.

...

Failure example from ZD-12288: fieldiag.log

View file
nameS434992x3806626-fieldiag-FAILED-8xA100.log