                        SPEChpc(TM) 2021 Tiny Result
                           Advanced Micro Devices
               Milan Cluster: Gigabyte H262-Z63 (AMD EPYC 7763)

hpc2021 License: 0017                                    Test date: Aug-2021
Test sponsor: Advanced Micro Devices          Hardware availability: Apr-2021
Tested by:    Advanced Micro Devices          Software availability: Aug-2021

                 Base     Base   Thrds   Base       Base     Peak     Peak   Thrds   Peak       Peak
Benchmarks       Model    Ranks  pr Rnk  Run Time   Ratio    Model    Ranks  pr Rnk  Run Time   Ratio
---------------  -------  -----  ------  ---------  -------  -------  -----  ------  ---------  -------
505.lbm_t        MPI        512       1       89.7     25.1  S
505.lbm_t        MPI        512       1       90.1     25.0  *
505.lbm_t        MPI        512       1       90.2     25.0  S
513.soma_t       MPI        512       1        323     11.5  S
513.soma_t       MPI        512       1        316     11.7  S
513.soma_t       MPI        512       1        316     11.7  *
518.tealeaf_t    MPI        512       1        109     15.2  S
518.tealeaf_t    MPI        512       1       79.9     20.7  *
518.tealeaf_t    MPI        512       1       74.4     22.2  S
519.clvleaf_t    MPI        512       1        193     8.56  *
519.clvleaf_t    MPI        512       1        178     9.26  S
519.clvleaf_t    MPI        512       1        215     7.66  S
521.miniswp_t    MPI        512       1        161     9.94  S
521.miniswp_t    MPI        512       1        159     10.0  S
521.miniswp_t    MPI        512       1        160     10.0  *
528.pot3d_t      MPI        512       1        139     15.3  *
528.pot3d_t      MPI        512       1        149     14.2  S
528.pot3d_t      MPI        512       1        136     15.6  S
532.sph_exa_t    MPI        512       1        184     10.6  S
532.sph_exa_t    MPI        512       1        185     10.5  *
532.sph_exa_t    MPI        512       1        187     10.4  S
534.hpgmgfv_t    MPI        512       1        125     9.42  *
534.hpgmgfv_t    MPI        512       1        125     9.39  S
534.hpgmgfv_t    MPI        512       1        121     9.73  S
535.weather_t    MPI        512       1        131     24.6  S
535.weather_t    MPI        512       1        129     25.0  S
535.weather_t    MPI        512       1        131     24.7  *
============================================================================================================
505.lbm_t        MPI        512       1       90.1     25.0  *
513.soma_t       MPI        512       1        316     11.7  *
518.tealeaf_t    MPI        512       1       79.9     20.7  *
519.clvleaf_t    MPI        512       1        193     8.56  *
521.miniswp_t    MPI        512       1        160     10.0  *
528.pot3d_t      MPI        512       1        139     15.3  *
532.sph_exa_t    MPI        512       1        185     10.5  *
534.hpgmgfv_t    MPI        512       1        125     9.42  *
535.weather_t    MPI        512       1        131     24.7  *

SPEChpc 2021_tny_base                               13.9
SPEChpc 2021_tny_peak                            Not Run
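Note: the overall base metric above is the geometric mean of the nine starred
(median) base ratios. The short sketch below is an independent cross-check
written for this reformatted report, not part of the SPEC hpc2021 tools; the
ratio list is copied from the table above (requires Python 3.8+ for math.prod).

    # Cross-check sketch: recompute the overall base metric as the geometric
    # mean of the per-benchmark median (starred) base ratios listed above.
    from math import prod

    median_base_ratios = [25.0, 11.7, 20.7, 8.56, 10.0, 15.3, 10.5, 9.42, 24.7]
    geo_mean = prod(median_base_ratios) ** (1.0 / len(median_base_ratios))
    print(f"SPEChpc 2021_tny_base ~ {geo_mean:.1f}")   # prints 13.9, matching the reported score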
BENCHMARK DETAILS
-----------------
      Type of System: Homogenous Cluster
  Compute Nodes Used: 4
         Total Chips: 8
         Total Cores: 512
       Total Threads: 512
        Total Memory: 2 TB
            Compiler: LLVM/Clang 13.0
                      C/C++/Fortran: Version 13.0-0 MLSE
                      ROCm 4.3.0 Compilers
                      Compiler available by installing ROCm 4.3 or getting
                      https://repo.radeon.com/rocm/apt/4.3/pool/main/l/llvm-amdgpu/llvm-amdgpu_13.0.0.21295.40300_amd64.deb
                      https://repo.radeon.com/rocm/apt/4.3/pool/main/o/openmp-extras4.3.0/openmp-extras4.3.0_12.43.0.40300-52_amd64.deb
         MPI Library: OpenMPI Version 4.0.5
      Other MPI Info: None
      Other Software: None
 Base Parallel Model: MPI
      Base Ranks Run: 512
    Base Threads Run: 1
Peak Parallel Models: Not Run


Node Description: Gigabyte H262-Z63
===================================

HARDWARE
--------
     Number of nodes: 4
    Uses of the node: compute
              Vendor: Gigabyte
               Model: Gigabyte H262-Z63
            CPU Name: AMD EPYC 7763
    CPU(s) orderable: 1,2 chips
       Chips enabled: 2
       Cores enabled: 128
      Cores per chip: 64
    Threads per core: 1
 CPU Characteristics: Max Boost Clock disabled
             CPU MHz: 2450
       Primary Cache: 32 KB I + 32 KB D on chip per core
     Secondary Cache: 512 KB I+D on chip per core
            L3 Cache: 256 MB I+D on chip per chip
                      32 MB shared / 8 cores
         Other Cache: None
              Memory: 512 GB (16 x 32 GB 2Rx4 PC4-3200AA-R)
      Disk Subsystem: Intel SSD 520 Series 240GB, 2.5in SATA 6Gb/s
      Other Hardware: None
             Adapter: ConnectX-6 Dual port, model number: MCX653106A
  Number of Adapters: 0
           Slot Type: None
           Data Rate: None
          Ports Used: 0
   Interconnect Type: None

SOFTWARE
--------
             Adapter: ConnectX-6 Dual port, model number: MCX653106A
      Adapter Driver: None
    Adapter Firmware: None
    Operating System: CentOS Linux release 8.3.2011
                      Kernel 4.18.0-193 [native to CentOS 8.3]
   Local File System: xfs
  Shared File System: NFS share
        System State: Multi-user, run level 3
      Other Software: None


Interconnect Description: Mellanox
==================================

HARDWARE
--------
              Vendor: Mellanox
               Model: NVIDIA MCX653106A-EFAT ConnectX-6 VPI Adapter Card
                      HDR100/EDR/100GbE
        Switch Model: MLNX_OFED_LINUX-5.2.1.0 (OFED-5.2.1.0)
              Switch: 27_2008_2202-MQM8790-HS2X_Ax
  Number of Switches: 2
     Number of Ports: 40
           Data Rate: InfiniBand HDR 100 Gb/s
            Firmware: HCA: 20.29.1016
            Topology: non-blocking fat tree
         Primary Use: MPI Traffic

SOFTWARE
--------


Submit Notes
------------
The config file option 'submit' was used.
MPI startup command:
mpirun command was used to start MPI jobs.


Compiler Version Notes
----------------------
==============================================================================
 CXXC 532.sph_exa_t(base)
------------------------------------------------------------------------------
/home/rlieberm/rocm/rocm-4.3.0-llvm/llvm/bin/clang++: /lib64/libtinfo.so.5: no
version information available (required by
/home/rlieberm/rocm/rocm-4.3.0-llvm/llvm/bin/clang++)
clang version 13.0.0 (https://github.com/RadeonOpenCompute/llvm-project
roc-4.3.0 21295 f2943f684437d2c1143a56e418d29fc6b3314072)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /home/rlieberm/rocm/rocm-4.3.0-llvm/llvm/bin
------------------------------------------------------------------------------

==============================================================================
 CC 505.lbm_t(base) 513.soma_t(base) 518.tealeaf_t(base) 521.miniswp_t(base)
    534.hpgmgfv_t(base)
------------------------------------------------------------------------------
/home/rlieberm/rocm/rocm-4.3.0-llvm/llvm/bin/clang: /lib64/libtinfo.so.5: no
version information available (required by
/home/rlieberm/rocm/rocm-4.3.0-llvm/llvm/bin/clang)
clang version 13.0.0 (https://github.com/RadeonOpenCompute/llvm-project
roc-4.3.0 21295 f2943f684437d2c1143a56e418d29fc6b3314072)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /home/rlieberm/rocm/rocm-4.3.0-llvm/llvm/bin
------------------------------------------------------------------------------

==============================================================================
 FC 519.clvleaf_t(base) 528.pot3d_t(base) 535.weather_t(base)
------------------------------------------------------------------------------
/home/rlieberm/rocm/rocm-4.3.0-llvm/llvm/bin/flang: /lib64/libtinfo.so.5: no
version information available (required by
/home/rlieberm/rocm/rocm-4.3.0-llvm/llvm/bin/flang)
flang-new version 13.0.0 (https://github.com/RadeonOpenCompute/llvm-project
roc-4.3.0 21295 f2943f684437d2c1143a56e418d29fc6b3314072)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /home/rlieberm/rocm/rocm-4.3.0-llvm/llvm/bin
------------------------------------------------------------------------------


Base Compiler Invocation
------------------------
C benchmarks:
 mpicc

C++ benchmarks:
 mpicxx

Fortran benchmarks:
 mpif90


Base Portability Flags
----------------------
   519.clvleaf_t: -DSPEC_USE_MPIFH
   521.miniswp_t: -DUSE_KBA -DUSE_ACCELDIR
     528.pot3d_t: -DSPEC_USE_MPIFH
   532.sph_exa_t: -DSPEC_USE_LT_IN_KERNELS
   535.weather_t: -DSPEC_USE_MPIFH


Base Optimization Flags
-----------------------
C benchmarks:
 -O3

C++ benchmarks:
 -O3

Fortran benchmarks:
 -O3


Base Other Flags
----------------
C benchmarks:
 -I/home/rlieberm/rocm/rocm-4.3.0-llvm/llvm/include

C++ benchmarks:
 -I/home/rlieberm/rocm/rocm-4.3.0-llvm/llvm/include

Fortran benchmarks:
 -I/home/rlieberm/rocm/rocm-4.3.0-llvm/llvm/include
 -I/home/software/openmpi/aocc30/4.0.5/include/
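For illustration, the sketch below (an assumed composition written for this
reformatted report, not taken from the SPEC build logs; the source-file list is
a hypothetical placeholder) shows how the base invocation, portability,
optimization, and other flags reported above combine into a single compile line
for one C benchmark, 521.miniswp_t.

    # Illustrative sketch only: join the reported base flag sections into one
    # compile command string for 521.miniswp_t.
    invocation   = "mpicc"                                                 # Base Compiler Invocation, C
    portability  = ["-DUSE_KBA", "-DUSE_ACCELDIR"]                         # Base Portability Flags
    optimization = ["-O3"]                                                 # Base Optimization Flags
    other        = ["-I/home/rlieberm/rocm/rocm-4.3.0-llvm/llvm/include"]  # Base Other Flags
    sources      = ["<521.miniswp_t sources>"]                             # hypothetical placeholder

    print(" ".join([invocation, *portability, *optimization, *other, *sources]))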
The flags file that was used to format this result can be browsed at
http://www.spec.org/hpc2021/flags/amd2021_flags.html

You can also download the XML flags source by saving the following link:
http://www.spec.org/hpc2021/flags/amd2021_flags.xml

SPEChpc is a trademark of the Standard Performance Evaluation Corporation.
All other brand and product names appearing in this result are trademarks or
registered trademarks of their respective holders.

---------------------------------------------------------------------------------------------------------------------------------------
For questions about this result, please contact the tester.
For other inquiries, please contact info@spec.org.
Copyright 2021-2023 Standard Performance Evaluation Corporation
Tested with SPEChpc2021 v1.0.2 on 2021-08-25 15:56:44-0400.
Report generated on 2023-08-25 18:56:59 by hpc2021 ASCII formatter v1.0.3.
Originally published on 2021-10-20.