Knowing the difference between Azure Premium Storage and Azure Standard Storage is essential when architecting cloud infrastructure solutions.

Let’s start with a quick side-by-side comparison: (correct as of April 2016)

Premium Storage

Supports page blobs (disks) only

Is charged by disk tier (if you use 100 GB of a 1 TB disk, you pay for the 1 TB disk)

Service Level Agreement on Storage Performance

The premium storage disk maps to a physical SSD that is not shared with others

Performance is backed by published targets

Locally redundant only

Higher cost

Standard Storage

Supports disks, blobs, tables, queues, and Azure file shares

Is charged by usage (if you use 100 GB of a 1 TB disk, you pay for 100 GB)

Best Effort Service

User shares the storage with others

Performance may fluctuate

Geo Redundant Storage options available

Lower cost
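The billing difference in the two lists above can be sketched in a few lines of Python. This is a simplification for illustration only; real pricing also depends on the disk tier, region and redundancy option:

```python
def billed_gb(provisioned_gb, used_gb, premium):
    """Simplified billing model for illustration.

    Premium Storage: charged by disk tier, i.e. the provisioned size,
    regardless of actual usage. Standard Storage: charged by usage.
    """
    return provisioned_gb if premium else used_gb

# Using 100 GB of a 1 TB disk:
assert billed_gb(1024, 100, premium=True) == 1024   # you pay for the 1 TB disk
assert billed_gb(1024, 100, premium=False) == 100   # you pay for 100 GB
```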

Using Premium Storage may be recommended depending on your scenario

For all latency-critical workloads, Microsoft recommends the use of Premium Storage. These latency-critical scenarios often occur when dealing with larger databases (both MySQL and MSSQL can benefit greatly from Premium Storage): https://azure.microsoft.com/en-gb/documentation/articles/virtual-machines-windows-sql-performance/

How do I improve my VM’s performance if I cannot move to premium storage?

There are several reasons why you may not want to move your workload to premium storage.
Even if Premium Storage is not an option, there are still a number of things you can do to improve performance.

Move from A to D

Moving from a type-A machine to a type-D machine will provide better performance because your machine will be using a newer generation of processors and RAM. Additionally, you will get an SSD-backed temporary drive (or swap partition on Linux) that can be used for your page file or for temporary storage during computation.
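As a small illustration, scratch files can be pointed at the temporary drive where it exists. The mount points below are assumptions about the default Azure layout; verify them on your own VM, and remember the temporary drive is ephemeral, so never keep anything there that you cannot afford to lose:

```python
import os
import tempfile

# Typical temporary-drive locations (assumptions: default Azure layout).
CANDIDATES = ["/mnt/resource", "/mnt", "D:\\"]

def scratch_dir():
    """Return a writable scratch directory, preferring the fast temp drive."""
    for path in CANDIDATES:
        if os.path.isdir(path) and os.access(path, os.W_OK):
            return path
    return tempfile.gettempdir()  # fall back to the OS default temp directory

print(scratch_dir())
```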

Max out on IOPS and throughput

To achieve the highest IOPS and throughput possible, Microsoft recommends that you attach the maximum number of data disks available for your machine size and combine them using Storage Spaces on Windows Server or a RAID-0 configuration on Linux.

Once this large storage space or RAID volume is created, you should aim to move most of your computation and, preferably, all of your data onto it. Because the volume consists of more than one disk, you get a multiple of the per-disk IOPS to use (IOPS are capped at the individual disk level).

You may still be limited by the bandwidth available to your VM. More information: https://azure.microsoft.com/en-us/documentation/articles/storage-scalability-targets/#scalability-targets-for-blobs-queues-tables-and-files
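As a rough model of why striping helps, aggregate IOPS scale with the number of data disks until the VM-level limit kicks in. Both figures below are assumptions for illustration (a typical standard-tier per-disk cap and a hypothetical VM limit); look up the real numbers for your disk and VM size in the scalability-targets documentation linked above:

```python
# Assumption: per-disk cap typical of standard-tier disks of this era.
IOPS_PER_STANDARD_DISK = 500
# Assumption: hypothetical VM-level IOPS limit for a given machine size.
VM_IOPS_CAP = 6400

def effective_iops(num_disks):
    """Striped IOPS scale per disk until the VM-level cap is reached."""
    return min(num_disks * IOPS_PER_STANDARD_DISK, VM_IOPS_CAP)

print(effective_iops(4))    # 2000: four striped disks
print(effective_iops(16))   # 6400: capped by the VM, not the disks
```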

Follow the best practices for your scenario

Every scenario is different and so is every workload. Make sure you follow the best practices for your workload.
Microsoft has a wealth of resources available here, but it is also worth contacting the vendor of the application that you are running for additional advice.

For example: Best Practices for MySQL: https://azure.microsoft.com/en-gb/documentation/articles/virtual-machines-linux-classic-optimize-mysql/

Expecting a spike in traffic? Warm things up!

Standard Storage is a multi-tenanted, elastic storage environment, and if a large amount of traffic suddenly hits Azure, it can take some time for this storage to scale out. If you are expecting a busy period, it may make sense to run a couple of benchmarks or send some warm-up traffic just before the period of high traffic.
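One simple way to generate warm-up traffic is to read a spread of offsets from a large data file so the underlying storage sees I/O before the real spike arrives. This is a hedged sketch; the path in the commented-out invocation and the read size are illustrative assumptions:

```python
import os
import random

def warm_up(path, reads=64, block=4096):
    """Read `reads` random blocks from `path` to send warm-up I/O to storage."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        for _ in range(reads):
            f.seek(random.randrange(max(size - block, 1)))
            f.read(block)

# warm_up("/data/mydb/ibdata1")  # example invocation with a hypothetical path
```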

Use Azure Platform as a Service offerings to assist content delivery

There is a wealth of offerings available to assist the delivery of your content. Redis Cache, for example, can be used to reduce the load on your compute instances and serve cached content when it is available.
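The caching approach described above is commonly implemented as the cache-aside pattern, sketched below. A plain dict stands in for Redis here so the example is self-contained; with a real Azure Redis Cache you would use a Redis client and its get/set commands instead:

```python
cache = {}  # stand-in for Redis; swap for a real Redis client in production
calls = {"expensive": 0}

def render_page(page_id):
    """Simulates an expensive origin lookup, e.g. a database query."""
    calls["expensive"] += 1
    return f"<html>content for {page_id}</html>"

def get_page(page_id):
    # Cache-aside: serve from the cache if present, otherwise render and store.
    if page_id in cache:
        return cache[page_id]
    page = render_page(page_id)
    cache[page_id] = page
    return page

get_page("home")
get_page("home")
print(calls["expensive"])  # 1: the second request was served from the cache
```

With a real Redis client you would also set a time-to-live on each entry so stale content eventually expires.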