GPU Benchmark Tests of Genoa, Milan and Ice Lake Platforms


In our previous blog, we announced that the AEWIN SCB-1932C Server has been validated as an NVIDIA-Certified System for the enterprise edge. Today we will explore the GPU benchmark tests across different AEWIN platforms in more detail.

System Configurations
The tests were run on three AEWIN High Performance Appliances: SCB-1946C, SCB-1932C, and SCB-1937C.

Servers for Testing/Benchmark
| System    | SCB-1946C                   | SCB-1932C                             | SCB-1937C                  | Nvidia Benchmark   |
|-----------|-----------------------------|---------------------------------------|----------------------------|--------------------|
| Processor | Dual AMD EPYC 9174F (Genoa) | Dual Intel Xeon Gold 5318S (Ice Lake) | Dual AMD EPYC 7543 (Milan) | Dual AMD EPYC 7003 |
| Freq      | 4.1 GHz                     | 2.1 GHz                               | 2.8 GHz                    | N/A                |
| Memory    | 1x 32GB                     | 2x 32GB                               | 1x 32GB                    | N/A                |
| GPU       | 1x Nvidia A30               | 1x Nvidia A30                         | 1x Nvidia A30              | 1x Nvidia A30      |
| OS        | Ubuntu 20.04.3 LTS          | Ubuntu 20.04.3 LTS                    | Ubuntu 20.04.3 LTS         | N/A                |
| Framework | TensorRT 8.6.1              | TensorRT 8.6.1                        | TensorRT 8.6.1             | TensorRT 8.6.1     |

GPU Status Monitor
As preparation, write a GPU monitor script "" on the host so that any throttling during the test can be observed.

Input the status refresh duration as the interval, then input "y" to save a log or "n" to skip logging.

Benchmark Test
Run the script "" from the host. It will drop you into the GPU-accelerated container. From within the container, run the script "". It will ask you to choose between int8 mode and fp16 mode for the test; input 1 to run in int8 mode.

Run the script "" on the host to start the test. The picture below shows an example of the benchmark results.

For the benchmark results, we only consider the percentile value of GPU Compute. For example, the percentile value shown in the figure above is 8.88623 (the 99th-percentile latency in milliseconds). To calculate the performance in img/sec for any GPU, we use the formula 1000/(percentile/128), where 128 is the batch size of the current test. The int8 throughput is therefore approximately 14,404 images/sec.
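The conversion from percentile latency to throughput can be sketched as a small shell snippet (the percentile value is the one from the figure; the variable names are our own, not part of the test scripts):

```shell
#!/bin/sh
# Convert the 99th-percentile GPU Compute latency (ms) reported by trtexec
# into throughput (images/sec) for the batch size used in the test.
percentile=8.88623   # example value from the benchmark figure
batch=128            # batch size of the current test
# throughput = 1000 / (percentile / batch) = batch * 1000 / percentile
awk -v p="$percentile" -v b="$batch" \
    'BEGIN { printf "%.0f img/sec\n", b * 1000 / p }'
# prints: 14404 img/sec
```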

Testing Script
1. sh script in the container

echo -e "for int8 test, press 1; for fp16 test, press 2 : "
read testmode
if [ "${testmode}" -eq 1 ]; then
    /workspace/tensorrt/bin/trtexec --batch=128 --iterations=400 --workspace=1024 --percentile=99 --deploy=ResNet50_N2.prototxt --model=ResNet50_fp32.caffemodel --output=prob --int8
elif [ "${testmode}" -eq 2 ]; then
    /workspace/tensorrt/bin/trtexec --batch=128 --iterations=400 --workspace=1024 --percentile=99 --deploy=ResNet50_N2.prototxt --model=ResNet50_fp32.caffemodel --output=prob --fp16
else
    echo -e "input wrong !!!"
fi

2. sh script in the host

docker run --gpus '"device=0"' -it --rm --name trt_2011 -w /workspace/tensorrt/data/resnet50/ trt:2011

3. burn-in script in the container

    mpirun --allow-run-as-root -np 1 --mca btl ^openib python -u ./ --batch_size 128 --num_iter 28800 --precision fp16 --iter_unit batch

4. burn-in script in the host

docker run --gpus '"device=0"' -it --rm --name tf_2011tf2 -w /workspace/nvidia-examples/cnn tf:2011tf2

5. GPU monitor script "" in the host

#echo " " > ./gpu_log.txt
echo "please enter interval (sec) : "
read interval
echo "Do you want to save the log file?(y/n)"
read logflag
for((i=1;i>0;i++))
do
    if [ "${logflag}" = "y" ]; then
        echo -e "\n=====i : ${i}=====\n" > ./gpu_log_tmp.txt
        nvidia-smi >> ./gpu_log_tmp.txt
        sleep 1
        nvidia-smi -q -d CLOCK | grep -v N/A | grep -v "Not Found" >> ./gpu_log_tmp.txt
        cat ./gpu_log_tmp.txt
        cat ./gpu_log_tmp.txt >> gpu_log.txt
        sleep "${interval}"
    elif [ "${logflag}" = "n" ]; then
        echo -e "\n=====i : ${i}=====\n"
        nvidia-smi
        sleep 1
        nvidia-smi -q -d CLOCK | grep -v N/A | grep -v "Not Found"
        sleep "${interval}"
    else
        echo "Input error! Please enter y or n."
        break
    fi
done

As shown in the benchmark results, we verified the A30 on the SCB-1946C (Genoa), SCB-1932C (Ice Lake), and SCB-1937C (Milan) platforms. All of them deliver results that are similar to or better than the Nvidia reference benchmark.

AEWIN platforms range from edge AI appliances and general purpose computing systems to high performance servers, so customers can select the most suitable one with the GPUs required for each application. Reach out to our friendly sales team and discover more about AEWIN GPU Server platforms!