Unexplained Azure Table Storage transaction limitations

I’m running performance tests against ATS, and it’s behaving a bit strangely when using multiple virtual machines against the same table / storage account.

The entire pipeline is non-blocking (async/await) and uses the TPL for concurrent and parallel execution.
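The original code is .NET/TPL and isn’t shown here; as a rough illustration of the pattern being described (a fully non-blocking insert pipeline with a bounded number of in-flight requests), here is a Python asyncio sketch where `insert_entity` is a hypothetical stand-in for the real table-storage call:

```python
import asyncio
import time
import uuid

CONCURRENCY = 64  # simultaneous in-flight inserts (tuning knob)

async def insert_entity(entity):
    # Stand-in for the real non-blocking HTTP insert; the actual test
    # issues one insert per entity against the table endpoint.
    await asyncio.sleep(0.005)  # pretend ~5 ms round trip

async def run(n_rows=1000):
    sem = asyncio.Semaphore(CONCURRENCY)

    async def bounded_insert(entity):
        async with sem:  # cap the number of concurrent requests
            await insert_entity(entity)

    # Unique PartitionKey and RowKey per entity, as in the test setup
    entities = [{"PartitionKey": str(uuid.uuid4()),
                 "RowKey": str(uuid.uuid4())} for _ in range(n_rows)]
    start = time.perf_counter()
    await asyncio.gather(*(bounded_insert(e) for e in entities))
    elapsed = time.perf_counter() - start
    return n_rows / elapsed  # rough insertions per second

if __name__ == "__main__":
    print(f"{asyncio.run(run()):.0f} inserts/sec (simulated)")
```

With a simulated 5 ms round trip the throughput is governed almost entirely by the concurrency cap, which is why measured numbers like the ones below say more about the service and the connection limits than about CPU on the client.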

First of all, it’s very strange that with this setup I’m only getting about 1,200 insertions per second. This is running on an L-sized VM: 4 cores + 800 Mbps.

I’m inserting 100,000 rows with a unique PK and a unique RK per row, which should give the best possible partition distribution.

Even stranger is the following non-deterministic behavior:

When I run 1 VM I get about 1,200 insertions per second. When I run 3 VMs I get about 730 insertions per second on each.

It’s quite humorous to read the blog post where they specify their targets:

Single Table Partition – a table partition is all of the entities in a table with the same partition key value, and usually tables have many partitions. The throughput target for a single table partition is:

Up to 2,000 entities per second

Note, this is for a single partition, not a single table. Therefore, a table with good partitioning can process up to 20,000 entities/second, which is the overall account target described above.
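Since every entity here has a unique PK, the per-partition limit shouldn’t be the bottleneck, yet the aggregate across the 3 VMs is nowhere near the account target. A quick sanity check of the numbers:

```python
# Published scalability targets (per the blog post quoted above)
PER_PARTITION_TARGET = 2_000   # entities/sec, single partition
PER_ACCOUNT_TARGET = 20_000    # entities/sec, whole account

# Observed numbers from the tests above
single_vm = 1_200              # entities/sec on one VM
three_vms = 3 * 730            # aggregate across three VMs

print(f"3-VM aggregate: {three_vms} entities/sec")
print(f"fraction of the account target: {three_vms / PER_ACCOUNT_TARGET:.0%}")
# The aggregate (~2,190/sec) is barely above the SINGLE-partition target
# and only about 11% of the 20,000/sec account target.
```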

What should I do to be able to utilize the 20k per second, and how is it possible to execute more than 1.2k per VM?


Update 1:

I’ve now also tried using 3 storage accounts, one for each individual node, and I’m still getting the same performance / throttling behavior, which I can’t find a logical reason for.

Update 2:

I’ve optimized the code further and can now execute about 1,550 per second by removing some threading contention with the TPL scheduler.
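The specific .NET fix isn’t shown, but assuming the contention came from many tasks hammering a shared synchronized structure, the general idea can be sketched in Python: pre-partition the work so each worker owns its own slice and touches the shared lock once per worker instead of once per item:

```python
import threading

def worker_owned_slices(items, n_workers=4):
    """Process items in parallel with minimal lock contention."""
    results = []
    lock = threading.Lock()

    def run(slice_):
        # All per-item work happens on worker-local data, lock-free.
        local = [item * 2 for item in slice_]  # stand-in for per-entity work
        with lock:                             # one acquisition per worker
            results.extend(local)

    size = (len(items) + n_workers - 1) // n_workers
    threads = [threading.Thread(target=run, args=(items[i * size:(i + 1) * size],))
               for i in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

The same principle (fewer synchronization points on the hot path) is what a custom TPL `TaskScheduler` or partitioned work queues buy you on .NET.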

Update 3:

I’ve now also tried US West. The performance is worse there, about 33% lower. Probably more customers in that DC. The storage account was, of course, a new one in the same DC.

Update 4:

I tried executing the code from an XL machine, which has 8 cores instead of 4 and double the memory and bandwidth, and got only a 2% increase in performance, so clearly this problem is not on my side.

This question is also published on Stack Overflow for feedback: