Update 05/08/2018: For the newest version of Diskspd please visit:

https://aka.ms/diskspd

------------

Update 05/06/2016: New version of Diskspd now available: v2.0.17. Changes from v2.0.15:

Running Diskspd on current versions of Nano Server (>=TP5) no longer requires the installation of the reverse forwarders package (-ReverseForwarders) for proper operation.

Source code is hosted at the following repo: https://github.com/microsoft/diskspd. In addition to Diskspd itself, this repo hosts measurement frameworks that use Diskspd. The initial example is the VM Fleet used for Windows Server 2016 Hyper-Converged Storage Spaces Direct work. Look for these under the Frameworks directory.

Updated the examples below to include PowerShell syntax due to command interpreter differences (i.e., backtick-escape the comma separators and leading hashes within parameters).
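
For example, an affinity list and a physical drive target (the drive number below is only a placeholder) would be written at the command prompt as:

Diskspd.exe -a0,1 #1

and in PowerShell as:

Diskspd.exe -a0`,1 `#1

Without the backticks, PowerShell splits -a0,1 at the comma and treats #1 as the start of a comment.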

Past tool update notes are now included in the readme.txt file in the download package.

------------

A feature-rich and versatile storage testing tool, Diskspd (version 2.0.17) combines robust and granular IO workload definition with flexible runtime and output options, making it an ideal tool for synthetic storage subsystem testing and validation.

Although complete usage documentation is included in the download package, below are a couple of examples to get you started.

Example 1 

Set the block size to 256K, run a sequential (no -r switch), 100% read (no -w switch) test for 10 seconds, use 8 overlapped IOs per thread and 4 threads per target, affinitize threads to CPUs 0 and 1 (each target's threads are affinitized to both CPUs), and target Physical Disk 9.

Command Line:

Diskspd.exe -b256K -d10 -o8 -t4 -a0,1 #9

PowerShell:

Diskspd.exe -b256K -d10 -o8 -t4 -a0`,1 `#9
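
To confirm which physical drive number to pass as the #<n> target, the Storage module's Get-Disk cmdlet (available on Windows 8 / Windows Server 2012 and later; shown here only as a convenience, it is not part of Diskspd) lists the disk numbers:

Get-Disk | Sort-Object Number | Format-Table Number, FriendlyName, Size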

Sample Output

Command Line: Diskspd.exe -b256K -d10 -o8 -t4 -a0,1 #9

Input parameters:

        timespan:   1
        -------------
        duration: 10s
        warm up time: 5s
        cool down time: 0s
        random seed: 0
        advanced affinity: 0, 1
        path: '#9'
                think time: 0ms
                burst size: 0
                using software and hardware cache
                performing read test
                block size: 262144
                number of outstanding I/O operations: 8
                stride size: 262144
                thread stride size: 0
                threads per file: 4
                using I/O Completion Ports
                IO priority: normal

Results for timespan 1:
*******************************************************************************

actual test time:       10.01s
thread count:           4

Total IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  file
------------------------------------------------------------------------------
     0 |      1385431040 |         5285 |     132.02 |     528.06 | #9 (186GB)
     1 |      1385431040 |         5285 |     132.02 |     528.06 | #9 (186GB)
     2 |      1385431040 |         5285 |     132.02 |     528.06 | #9 (186GB)
     3 |      1385693184 |         5286 |     132.04 |     528.16 | #9 (186GB)
------------------------------------------------------------------------------
total:        5541986304 |        21141 |     528.09 |    2112.35

Read IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  file
------------------------------------------------------------------------------
     0 |      1385431040 |         5285 |     132.02 |     528.06 | #9 (186GB)
     1 |      1385431040 |         5285 |     132.02 |     528.06 | #9 (186GB)
     2 |      1385431040 |         5285 |     132.02 |     528.06 | #9 (186GB)
     3 |      1385693184 |         5286 |     132.04 |     528.16 | #9 (186GB)
------------------------------------------------------------------------------
total:        5541986304 |        21141 |     528.09 |    2112.35

Write IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  file
------------------------------------------------------------------------------
     0 |               0 |            0 |       0.00 |       0.00 | #9 (186GB)
     1 |               0 |            0 |       0.00 |       0.00 | #9 (186GB)
     2 |               0 |            0 |       0.00 |       0.00 | #9 (186GB)
     3 |               0 |            0 |       0.00 |       0.00 | #9 (186GB)
------------------------------------------------------------------------------
total:                 0 |            0 |       0.00 |       0.00

C:\Data\Diskspd>

Example 2

Set the block size to 8K, run the test for 60 seconds, disable all hardware and software caching, measure and display latency statistics, use 2 overlapped IOs per thread and 4 threads per target, perform random IO with 30% writes and 70% reads, and create a 50MB test file at c:\io.dat.

Command Line:

Diskspd.exe -b8K -d60 -h -L -o2 -t4 -r -w30 -c50M c:\io.dat

PowerShell:

Diskspd.exe -b8K -d60 -h -L -o2 -t4 -r -w30 -c50M c:\io.dat
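
When scripting runs from PowerShell, it can also be handy to keep a copy of the console output for later comparison; for example (the log file name is only an illustration):

Diskspd.exe -b8K -d60 -h -L -o2 -t4 -r -w30 -c50M c:\io.dat | Tee-Object -FilePath .\diskspd-8k-r70w30.txt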

Sample Output

Command Line: Diskspd.exe -b8K -d60 -h -L -o2 -t4 -r -w30 -c50M c:\io.dat

Input parameters:

        timespan:   1
        -------------
        duration: 60s
        warm up time: 5s
        cool down time: 0s
        measuring latency
        random seed: 0
        path: 'c:\io.dat'
                think time: 0ms
                burst size: 0
                software and hardware cache disabled
                performing mix test (write/read ratio: 30/100)
                block size: 8192
                using random I/O (alignment: 8192)
                number of outstanding I/O operations: 2
                stride size: 8192
                thread stride size: 0
                threads per file: 4
                using I/O Completion Ports
                IO priority: normal

Results for timespan 1:
*******************************************************************************

actual test time:       60.00s
thread count:           4

Total IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |        44900352 |         5481 |       0.71 |      91.35 |   21.910 |    27.633 | c:\io.dat (50MB)
     1 |        44720128 |         5459 |       0.71 |      90.98 |   21.987 |    26.877 | c:\io.dat (50MB)
     2 |        44761088 |         5464 |       0.71 |      91.07 |   21.981 |    26.822 | c:\io.dat (50MB)
     3 |        45817856 |         5593 |       0.73 |      93.22 |   21.466 |    26.323 | c:\io.dat (50MB)
-----------------------------------------------------------------------------------------------------
total:         180199424 |        21997 |       2.86 |     366.61 |   21.834 |    26.916

Read IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |        31842304 |         3887 |       0.51 |      64.78 |   12.384 |    13.325 | c:\io.dat (50MB)
     1 |        31121408 |         3799 |       0.49 |      63.32 |   12.258 |    13.198 | c:\io.dat (50MB)
     2 |        31326208 |         3824 |       0.50 |      63.73 |   12.344 |    13.800 | c:\io.dat (50MB)
     3 |        32366592 |         3951 |       0.51 |      65.85 |   11.886 |    12.602 | c:\io.dat (50MB)
-----------------------------------------------------------------------------------------------------
total:         126656512 |        15461 |       2.01 |     257.68 |   12.216 |    13.235

Write IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |        13058048 |         1594 |       0.21 |      26.57 |   45.140 |    37.837 | c:\io.dat (50MB)
     1 |        13598720 |         1660 |       0.22 |      27.67 |   44.251 |    35.563 | c:\io.dat (50MB)
     2 |        13434880 |         1640 |       0.21 |      27.33 |   44.453 |    35.090 | c:\io.dat (50MB)
     3 |        13451264 |         1642 |       0.21 |      27.37 |   44.518 |    35.010 | c:\io.dat (50MB)
-----------------------------------------------------------------------------------------------------
total:          53542912 |         6536 |       0.85 |     108.93 |   44.585 |    35.880


  %-ile |  Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
    min |      0.152 |      3.474 |      0.152
   25th |      4.242 |     20.145 |      6.114
   50th |      8.638 |     34.130 |     12.401
   75th |     15.380 |     57.890 |     27.417
   90th |     27.425 |     89.141 |     52.325
   95th |     37.417 |    112.730 |     74.555
   99th |     63.537 |    173.054 |    129.122
3-nines |    114.707 |    285.271 |    228.023
4-nines |    156.141 |    423.908 |    317.251
5-nines |    157.008 |    423.908 |    423.908
6-nines |    157.008 |    423.908 |    423.908
7-nines |    157.008 |    423.908 |    423.908
8-nines |    157.008 |    423.908 |    423.908
    max |    157.008 |    423.908 |    423.908

D:\Temp\DiskSpd>

And for a listing of all program options, run:

diskspd.exe -?

In addition, the source code for Diskspd is open source! Check it out at https://github.com/microsoft/diskspd.


Note: Diskspd v2.0.17 has been fully tested on: