The best approach would be to use the native PowerShell cmdlets. If you have a good case for using Invoke-WebRequest instead, a few suggestions:

Generating the derived authorization header (SharedKey) from the storage account key may be a bit difficult in this environment. It’s much easier to use a SAS token for this and append it to the query string of the REST request.
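For example, appending the SAS token is just string concatenation on the query string; a minimal sketch (the account, container, and SAS values below are hypothetical placeholders):

```shell
# Hypothetical account, container, and SAS values for illustration only.
base='https://myaccount.blob.core.windows.net/ptest/file.png'
sas='sv=2018-03-28&sp=cw&spr=https&sig=XXXXX'
uri="${base}?${sas}"
echo "$uri"
```

The resulting $uri can be used directly with Invoke-WebRequest or curl; no Authorization header is needed, because the sig parameter carries the signature.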

If you have a relatively small file (one you can load into memory and upload with a single Put Blob call):

$file = [System.IO.File]::ReadAllBytes('.\file.png')   # read as bytes; Get-Content -Raw returns a string and corrupts binary data
$uri = "https://<account>.blob.core.windows.net/<container>/file.png?se=...T20:59:20Z&st=2019-02-10T20:59:20Z&spr=https&sig=XXXXX"
$headers = @{ 'x-ms-blob-type' = 'BlockBlob' }          # required by Put Blob
Invoke-WebRequest -Uri $uri -Method Put -Body $file -ContentType "image/png" -Headers $headers
Note: the account, container, and SAS signature in $uri are redacted to protect my storage account.

I wouldn’t recommend this approach for anything over ~4MiB in size.
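A quick size check makes that cutoff explicit; a minimal sketch (the ~4 MiB threshold is the rule of thumb above, not an API limit, and sample.bin stands in for the real file):

```shell
# Choose an upload strategy by file size; 4 MiB is a rule of thumb,
# not a hard API limit.
threshold=$((4 * 1024 * 1024))
head -c 1024 /dev/zero > sample.bin        # stand-in file for the sketch
size=$(($(wc -c < sample.bin)))
if [ "$size" -le "$threshold" ]; then
  echo "small enough for a single Put Blob"
else
  echo "consider a chunked (Put Block) upload"
fi
```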

Additional information: REST with PowerShell. It appears that cURL translates to Invoke-WebRequest in PowerShell.

Once you’ve created a SAS token, curl really offers about the same functionality as Invoke-WebRequest for doing the upload.

e.g., to upload to ptest/file.png: if the user knows ahead of time that the blob will be called file.png, generate a Blob SAS signature for that specific name. If they need to accommodate a variety of blob names, they can generate a SAS at the container level and insert the blob name into the URL path.
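With a container-level SAS, the blob name is spliced into the path just before the query string; a small sketch (hypothetical SAS values):

```shell
# Container-level SAS URL (hypothetical values).
container_sas='https://myaccount.blob.core.windows.net/ptest?sv=2018-03-28&sp=cw&sig=XXXXX'
blob='file.png'
# Insert the blob name between the container path and the query string.
uri="${container_sas%%\?*}/${blob}?${container_sas#*\?}"
echo "$uri"
```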

You can use curl as: uri='. . .'

cat file.png | curl -X PUT -H 'x-ms-blob-type: BlockBlob' -m 300 -H 'Content-Type: image/png' -d @- "$uri"

to pipe the file in and upload. -m sets a 300-second request timeout (HTTPS takes time), and Content-Type is optional.


  1. On the efficiency side, adding the two curl options to disable the 100-Continue processing (‘-H Expect:’ and ‘--expect100-timeout 0’) would yield only a trivial benefit as Azure Storage responds with the 100 quickly.   Leaving those options out has the added gain of a fast-fail (before data payload is sent) in the case of a >256MB payload or a bad URI.
  2. Despite the Put Blob doc page not being explicit about defaults, there’s no longer a need to send the Storage API version explicitly (e.g., -H 'x-ms-version: 2016-05-31', or higher) to allow data sizes in the 64MB to 256MB range. 2016-05-31 is currently the default assumed by Storage (at least for this API use).
  3. If you try to send a >256MB (well, really MiB – 1024-based) payload, the request fails with a 413 error.
  4. ‘--data-binary’ is nearly always what you want; curl can trip on EOF characters and stop early with just ‘-d’.
  5. curl can also read the file directly via ‘--data-binary @filename’ instead of ‘--data-binary @-’.
  6. Depending on the balance between network capacity and processing power for the local workstation, it may also be beneficial to include the curl ‘--tr-encoding’ option to have curl gzip the payload before transmission.
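For payloads over the single-request limit, the REST API offers Put Block / Put Block List to upload in pieces. A minimal sketch under those assumptions; the curl calls are left commented out and the URI is a hypothetical placeholder:

```shell
# Sketch of a chunked upload via Put Block + Put Block List.
# $uri is a hypothetical blob URL carrying a SAS query string.
uri='https://myaccount.blob.core.windows.net/ptest/big.bin?sv=2018-03-28&sp=cw&sig=XXXXX'

head -c 9000000 /dev/zero > big.bin   # stand-in payload for the sketch
split -b 4194304 big.bin chunk-       # ~4 MiB pieces: chunk-aa, chunk-ab, ...

blocklist='<?xml version="1.0" encoding="utf-8"?><BlockList>'
for f in chunk-*; do
  # Block IDs must be base64 and all the same length; the '=' padding
  # needs URL-encoding (%3D) when placed in the query string.
  id=$(printf '%s' "$f" | base64)
  # curl -X PUT --data-binary @"$f" "${uri}&comp=block&blockid=${id}"
  blocklist="${blocklist}<Latest>${id}</Latest>"
done
blocklist="${blocklist}</BlockList>"
# curl -X PUT --data-binary "$blocklist" "${uri}&comp=blocklist"
echo "$blocklist"
```

Each Put Block can be retried independently, and the blob only becomes visible once the Put Block List commit succeeds.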