Avoiding runaccel
Using the SPECaccel® 2023 benchmarks while making minimal use of SPEC's tool set

Latest: www.spec.org/accel2023/Docs/

Contents

Introduction

Environments

Steps

Review one rule

Install

Pick a benchmark

Pick a config file

Fake it

Find the log

Find the build dir

Copy the build dir (triple only)

Build it

Place the binary in the run dir

Copy the run dir

Run it

Save your work

Repeat

Validation

Introduction

This document is for those who prefer to avoid using some of the SPEC-supplied tools, typically because of a need for more direct access to the benchmarks: for example, because you are driving the benchmarks from a simulator, or because the SPEC tools do not (yet) run in your environment.

If the above describes you, here is a suggested path which should lead quickly to your desired state. This document shows you how to use SPEC's tools for the minimal purpose of just generating work directories, for use as a private sandbox. Note, however, that you cannot do formal, "reportable" runs without using SPEC's toolset.

Caution: Examples below use size=test in order to demonstrate working with a benchmark with its simplest workload. The test workload would be wildly inappropriate for performance work. Once you understand the techniques shown here, use the ref workload. If you are unable to do that (perhaps because you are using a slow simulator), you still should not use test, because it is likely to lead your research in the wrong direction. Various other benchmark test workloads just do a quick check that the binary starts and can open its files, then take the rest of the day off to go get a cup of tea (that is, do almost none of their real work). If you are really unable to simulate the ref workload, a more defensible choice would be to sample traces from ref.

License reminder: Various commands below demonstrate copying benchmarks among systems. These examples assume that all the systems belong to licensed users of SPECaccel 2023. For the SPECaccel 2023 license, see www.spec.org/accel2023/Docs/licenses/SPEC-License.pdf, and for information about all the licensed software in SPECaccel 2023, see SPECaccel 2023 Licenses.

Environments

Three different environments are referenced in this document, using these labels:

    unified - the SPEC tools, the compilers, and the benchmark runs are all on one system.

    cross-compile - two systems: the benchmarks are built (using the SPEC tools and the compilers) on one system and run on another.

    triple - three systems: the SPEC tools run on one, the compilers are on a second, and the benchmarks are run on a third.

Steps

  1. Review one rule: Please read the rule on research/academic usage. It is understood that the suite may be used in ways other than the formal environment that the tools help to enforce. If you plan to publish your results, you must state how your usage of the suite differs from the standard usage.

    So even if you skip over the tools and the run rules today, you should plan a time to come back and learn them later.

  2. Install: Get through a successful installation, even if it is on a different system than the one that you care about. Yes, we are about to teach you how to mostly bypass the tools, but there will still be some minimal use. So you need a working toolset and a valid installation. If you have troubles with the install procedures described in install-guide-linux.html, please see techsupport.html and we'll try to help you.
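
    For reference, a typical installation sequence looks something like the following; the mount point here is only an illustration, and the exact installer invocation for your media is described in install-guide-linux.html:

    $ cd /mnt/accel2023                            (or wherever the installation image is located)
    $ ./install.sh -d /Users/carl/spec/accel2023
    $ cd /Users/carl/spec/accel2023
    $ source shrc                                  (sets $SPEC and puts the tool directory on your PATH)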

  3. Pick a benchmark: Pick a benchmark that will be your starting point.

    Choose one benchmark from the SPECaccel 2023 suite that you'd like to start with. For example, you might start with 463.swim (Fortran) or 404.lbm (C). These are two of the shortest benchmarks in terms of lines of code, and therefore relatively easy to understand.

  4. Pick a config file: Pick a config file for an environment that resembles your environment. You'll find a variety of config files in the directory $SPEC/config/ on Linux systems or at www.spec.org/accel2023 with the submitted SPECaccel 2023 results. Don't worry if the config file you pick doesn't exactly match your environment; you're just looking for a somewhat reasonable starting point.
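
    For instance, you can list the example config files that shipped with your installation; the exact set of files varies by release:

    $ ls $SPEC/config/*.cfg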

  5. Fake it: Execute a "fake" run to set up run directories, including a build directory for source code, for the benchmark.

    For example, let's suppose that you want to work with 463.swim and your environment is at least partially similar to the environment described in the comments for Example_nvhpc.cfg:

    $ pwd
    /Users/carl/spec/accel2023
    $ source shrc
    $ cd config
    $ cp Example_nvhpc.cfg my_test.cfg 
    $ runaccel --fake --loose --size test --tune base --config my_test 463.swim  
    .
    .
    . (lots of stuff goes by)
    .
    .
    
    Success: 1x463.swim
    
    The log for this run is in /Users/carl/spec/accel2023/result/accel2023.007.log
    

    This command should report a success for the build, run and validation phases of the test case, but the actual commands have not been run. It is only a report of what would be run according to the config file that you have supplied.

  6. Find the log: Near the bottom of the output from the previous step, notice the location of the log file for this run -- in the example above, log number 007. The log file contains a record of the commands as reported by the "fake" run. You can find the commands by searching for "%%".
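
    For example, a quick way to locate those markers (using the log number from the fake run above):

    $ cd $SPEC/result
    $ grep -n '%%' accel2023.007.log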

  7. Find the build dir: To find the build directory that was set up in the fake run, you can search for the string build/ in the log:

    $ cd $SPEC/result
    $ grep build/ accel2023.007.log
    Wrote to makefile '/Users/carl/spec/accel2023/benchspec/ACCEL/463.swim/build/build_base_nvhpc.0000/Makefile.deps':
    Wrote to makefile '/Users/carl/spec/accel2023/benchspec/ACCEL/463.swim/build/build_base_nvhpc.0000/Makefile.spec':
    $ 

    Or, you can just go directly to the benchmark build directories and look for the most recent one. For example:

    $ go 463.swim build
    /Users/carl/spec/accel2023/benchspec/ACCEL/463.swim/build
    $ ls -gtd build*
    drwxrwxr-x 2 staff 4096 Sep  8 10:54 build_base_nvhpc.0000/
    $ 

    In the example above, go is a shorthand command for getting around the SPEC tree. The ls -gtd command lists the build subdirectories, most recent first. If this is your first time here, only one directory will be listed, as in the example above.

    You can work in this build directory, make source code changes, and try other build commands without affecting the original sources.

  8. Copy the build dir (triple only): If you are using a unified or cross-compile environment, you can skip to the next step. But if you are using a triple environment, then you will want to package up the build directory with a program such as tar -- a handy copy is in the bin directory of your SPEC installation, as spectar. You can compress it with specxz. Then, you will move the package off to whatever system has compilers.

    For example, you might say something like this:

    $ spectar -cf - build_base_nvhpc.0000/ | specxz > mybuild.tar.xz
    $ scp mybuild.tar.xz nick@somesys:                                  [reminder: copying]
    mybuild.tar.xz                          100%   13KB 181.7KB/s   00:00    
    $  

    Note that the above example assumes that you have versions of xz and tar available on the system that has compilers, which you will use to unpack the compressed tarfile, typically with a command similar to this:

    xz -dc mybuild.tar.xz | tar -xvf -

    If you don't have xz available, you might try bzip2 or gzip on both the sending and receiving systems. If you use some other compression utility, be sure that it does not corrupt the files by destroying line endings, re-wrapping long lines, or otherwise subtracting value.
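
    For example, with gzip on both sides (substitute bzip2 if you prefer), the equivalent commands would look something like this:

    $ spectar -cf - build_base_nvhpc.0000/ | gzip > mybuild.tar.gz      [sending system]
    $ gzip -dc mybuild.tar.gz | tar -xvf -                              [receiving system]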

  9. Build it: Generate an executable using the build directory. If you are using a unified or cross-compile environment, then you can say commands such as these:

    $ cd build_base_nvhpc.0000/
    $ specmake clean
    rm -rf *.o  SWIM7 swim.out
    find . \( -name \*.o -o -name '*.fppized.f*' -o -name '*.i' -o -name '*.mod' \) -print | xargs rm -rf
    rm -rf swim
    rm -rf swim.exe
    rm -rf core
    rm -rf options.err compiler-version.err make.out compiler-version.out options.out
    $ specmake
    specperl specpp -DSPEC_OPENACC -DSPEC -DNDEBUG swim.F -o swim.fppized.f
    nvfortran -c -o swim.fppized.o -fast -acc swim.fppized.f
    nvfortran      -fast -acc          swim.fppized.o                      -o swim
    

    Note above that the $SPEC environment variable is used to find the SPECaccel common makefile as well as the location of the specperl and specpp utilities to perform the Fortran preprocessing.

    You can also carry out a dry run of the build, which will display the build commands without attempting to run them, by adding -n to the specmake command line. You might find it useful to capture the output of specmake -n to a file, so it can easily be edited, and used as a script.

    If you are trying to debug a new system, you can prototype changes to Makefile.spec or even to the benchmark sources. The make variables used in Makefile.spec will vary according to what was included in the config file you used above. The most commonly used make variables are described in Section II.A of config.html.
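
    For example, you might prototype a different set of optimization flags by overriding a make variable on the specmake command line. OPTIMIZE is one of the commonly used variables; whether a particular variable is honored depends on the config file you started from, so treat this only as a sketch:

    $ specmake clean
    $ specmake OPTIMIZE="-O2 -acc -Minfo=accel"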

    If you are using a triple environment, then presumably it's because you don't have specmake working on the system where the compiler resides. But fear not: specmake is just GNU make under another name, so whatever make you have handy on the target system may work fine with the above commands. If not, you'll need to extract the build commands before creating the bundle, create and edit a local build script, and try it on that system. For example:

    $ go 463.swim build
    /Users/carl/spec/accel2023/benchspec/ACCEL/463.swim/build
    $ cd build_base_nvhpc.0000
    $ specmake -n > bld.sh 
    $ vi bld.sh  (Edit the build script with your favorite editor)
    $ cat bld.sh 
    nvfortran -c -o swim.o -DSPEC_OPENACC -DSPEC -DNDEBUG -fast -acc swim.F
    nvfortran -fast -acc swim.o -o swim
    $ cd ..
    $ spectar -cf - build_base_nvhpc.0000/ | specxz > mybuild.tar.xz
    $ scp mybuild.tar.xz nick@somesys:     
    $

    Then on the remote system:

    $ tar Jxvf mybuild.tar.xz
    $ cd build_base_nvhpc.0000
    $ sh -x bld.sh
    + nvfortran -c -o swim.o -DSPEC_OPENACC -DSPEC -DNDEBUG -fast -acc swim.F
    + nvfortran -fast -acc swim.o -o swim
    

    Note that the edited "bld.sh" script removed the specpp command, added the define flags to the compile line, and changed the source file name from "swim.fppized.f" to "swim.F". It is common practice for Fortran compilers to preprocess source files whose suffix uses an upper-case "F". However, preprocessing is not part of the Fortran standard, so not all Fortran compilers support it. If you are using a Fortran compiler that does not support preprocessing, you will need either to save the preprocessed Fortran source (*.fppized.f*) before creating the bundle, or to install the SPEC tools on the remote system so that specpp is available.
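
    If you choose to save the preprocessed source, you can generate it on the system where the SPEC tools are installed by re-running the same specpp command that specmake issued above, and then include the resulting file in the bundle:

    $ specperl specpp -DSPEC_OPENACC -DSPEC -DNDEBUG swim.F -o swim.fppized.f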

  10. Find the run directory, and add the binary to it: Using techniques similar to those used to find the build directory, find the run directory established above, and place the binary into it. If you are using a unified or cross-compile environment, you can copy the binary directly into the run directory; if you are using a triple environment, then you'll have to retrieve the binary from the compilation system using whatever program you use to communicate between systems.

    In a unified environment, the commands might look something like this:

    $ go result
    /Users/carl/spec/accel2023/result/
    $ grep 'Setting up' accel2023.007.log
     Setting up 463.swim test base nvhpc: run_base_test_nvhpc.0000
    $ go 463.swim run 
    /Users/carl/spec/accel2023/benchspec/ACCEL/463.swim/run/
    $ cd run_base_test_nvhpc.0000/
    $ cp ../../build/build_base_nvhpc.0000/swim .
    $  

    In the result directory, we search log 007 to find the correct name of the directory, go there, and copy the binary into it.

  11. Copy the run dir: If you are using a unified environment, you can skip this step. Otherwise, you'll need to package up the run directory and transport it to the system where you want to run the benchmark. For example:

    $ go 463.swim run
    /Users/carl/spec/accel2023/benchspec/ACCEL/463.swim/run/
    $ spectar cf - run_base_test_nvhpc.0000/ | specxz > myrun.tar.xz
    $ scp myrun.tar.xz nick@mysys: 
    $  

    Note that the above example assumes that you have versions of xz and tar available on the run-time system, which you will use to unpack the compressed tarfile, typically with one of these commands:

    xz -dc myrun.tar.xz | tar -xvf -

    tar Jxvf myrun.tar.xz

    If you don't have xz available, you might try bzip2 or gzip on both the sending and receiving systems. If you use some other compression utility, be sure that it does not corrupt the files by destroying line endings, re-wrapping long lines, or otherwise subtracting value.

  12. Run it: If you are using a unified environment, you can use specinvoke to see the command lines that run the benchmark, and/or capture them to a shell script. You can also run them using judicious(*) cut and paste:

    $ go 463.swim run/run_base_test_nvhpc.0000 
    /Users/carl/spec/accel2023/benchspec/ACCEL/463.swim/run/run_base_test_nvhpc.0000
    $ cp ../../build/build_base_nvhpc.0000/swim .
    $ specinvoke -n
    # specinvoke r4356
    #  Invoked as: specinvoke -n
    # timer ticks over every 1000 ns
    # Use another -n on the command line to see chdir commands and env dump
    # Starting run for copy #0
    ../run_base_test_nvhpc.0000/swim_base.nvhpc < swim.in > swim.out 2>> swim.err
    specinvoke exit: rc=0
    

    (*) Note above that specinvoke expects the swim binary name to include additional identifiers (_base.nvhpc). We simply drop them in the cut-and-pasted command below, because the binary built by hand is just swim.

    $ ./swim < swim.in > swim.out
    $ cat swim.out
      SPEC benchmark 463.swim
    
     NUMBER OF POINTS IN THE X DIRECTION     512
     NUMBER OF POINTS IN THE Y DIRECTION     512
     GRID SPACING IN THE X DIRECTION      25000.
     GRID SPACING IN THE Y DIRECTION      25000.
     TIME STEP                               20.
     TIME FILTER PARAMETER                 0.001
     NUMBER OF ITERATIONS                     10
    
     Pcheck =   0.1311E+11
     Ucheck =   0.5215E+05
     Vcheck =   0.5215E+05
    
    $  

    If you are using a cross-compile or triple environment, you can capture the commands to a file and execute that. Be sure to follow the instructions carefully for how to do so, noting in particular the items about your environment, in the specinvoke chapter of SPECaccel 2023 Utilities.
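
    For example, on the system where the run directory was created, you might capture the commands to a small script that travels with the run directory; the editing step removes specinvoke's comment and exit lines and, as noted above, adjusts the binary name if needed:

    $ cd run_base_test_nvhpc.0000
    $ specinvoke -n > run.sh
    $ vi run.sh       (keep only the benchmark command lines)
    $ sh -x run.sh                                                      [later, on the run-time system]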

    Alternatively, you can extract the run commands from speccmds.cmd.

    $ go 463.swim run/run_base_test_nvhpc.0000 
    /Users/carl/spec/accel2023/benchspec/ACCEL/463.swim/run/run_base_test_nvhpc.0000
    $ tail -1 speccmds.cmd
    -i swim.in -o swim.out -e swim.err ../run_base_test_nvhpc.0000/swim_base.nvhpc
    $  

    speccmds.cmd is the script specinvoke uses to run the benchmarks. The "-E" option sets the environment variables; "-i", "-o", and "-e" give the file names used for stdin, stdout, and stderr. The benchmark command line itself follows these options.

  13. Save your work: Important: if you are at all interested in saving your work, move the build/build* and run/run* directories to some safer location, so that your work areas are not accidentally deleted the next time someone comes along and uses one of the runaccel cleanup actions.
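
    For example (the destination is just an illustration):

    $ cd $SPEC/benchspec/ACCEL/463.swim
    $ mkdir -p ~/saved/463.swim
    $ mv build/build_base_nvhpc.0000 run/run_base_test_nvhpc.0000 ~/saved/463.swim/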

  14. Repeat: Admittedly, the large number of steps that it took to get here may seem like a lot of trouble. But that's why you started with a simple benchmark and the simplest workload (--size test in the fake step). Now that you've got the pattern down, it is hoped that it will be straightforward to repeat the process for the other available workloads.
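
    For example, the same fake step with the reference workload sets up the corresponding build and run directories:

    $ runaccel --fake --loose --size ref --tune base --config my_test 463.swim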

    But if you're finding it tedious... then maybe this is an opportunity to sell you on the notion of using runaccel after all, which automates all this tedium. If the reason you came here was because runaccel doesn't work on your brand-new environment, then perhaps you'll want to try to get it built, using the hints in tools-build.html.

Validation

Note that this document has only discussed getting the benchmarks built and running. Presumably at some point you'd like to know whether your system got the correct answer. At that point, you can use specdiff, which is explained in utility.html.
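
A minimal sketch of such a check, assuming the expected output for the test workload lives under the benchmark's data/test/output directory (see utility.html for specdiff's actual options and the per-benchmark tolerances):

    $ cd $SPEC/benchspec/ACCEL/463.swim/run/run_base_test_nvhpc.0000
    $ specdiff -l 10 $SPEC/benchspec/ACCEL/463.swim/data/test/output/swim.out swim.out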

Avoiding runaccel: Using the SPECaccel® 2023 benchmarks while making minimal use of SPEC's tool set: Copyright © 2023 Standard Performance Evaluation Corporation (SPEC)