Workaround for NTFS deduplication error 0x8007000E “Not enough storage is available to complete this operation”

This error can pop up when starting an optimization job, even on a machine with plenty of RAM and even if you give the job itself plenty of memory. The error message is misleading: “storage” here means memory, not disk space.

The workaround is simply to increase the page file. I came across this issue on a Server Core 2016 machine that had 24GB of RAM for a 16TB volume. The analysis job caused the commit charge to grow to almost 90% of the commit limit (without releasing it in time), so the optimization job could not allocate any memory. I didn’t dig any deeper (RAMMap etc.), though. After increasing the page file from the automatically managed ~2GB to 16GB, the jobs work just fine.
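
On Server Core there is no System Properties dialog, so the page file has to be changed from PowerShell. Here is a minimal sketch using the Win32_ComputerSystem and Win32_PageFileSetting CIM classes; C:\pagefile.sys and the 16GB size are just the values from my case, and the change only takes effect after a reboot:

    # Take page file sizing out of Windows' hands (assumes the local machine)
    Get-CimInstance Win32_ComputerSystem |
        Set-CimInstance -Property @{ AutomaticManagedPagefile = $false }

    # Give C:\pagefile.sys a fixed 16GB (sizes are in MB). Right after
    # disabling automatic management no setting object may exist yet,
    # so create one in that case.
    $pf = Get-CimInstance Win32_PageFileSetting -Filter "Name='C:\\pagefile.sys'"
    if ($pf) {
        $pf | Set-CimInstance -Property @{ InitialSize = 16384; MaximumSize = 16384 }
    } else {
        New-CimInstance -ClassName Win32_PageFileSetting -Property @{
            Name        = 'C:\pagefile.sys'
            InitialSize = [UInt32]16384
            MaximumSize = [UInt32]16384
        }
    }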

Keep in mind that commit does not mean that memory or the page file is actually in use. It just means the application has been promised that the memory will be available when it is actually touched. Unused commit is backed by the page file first, so it is basically free performance-wise, apart from the increased disk space use.
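
If you want to confirm that it is the commit limit, not physical RAM, being exhausted, the built-in memory counters are enough (no RAMMap required). A quick sketch; the counter paths below are the standard English-locale names:

    # Commit charge vs. commit limit (physical RAM + page file), in GB
    $s = (Get-Counter '\Memory\Committed Bytes', '\Memory\Commit Limit').CounterSamples
    '{0:N1} GB committed of a {1:N1} GB limit' -f ($s[0].CookedValue / 1GB), ($s[1].CookedValue / 1GB)

    # Or watch the percentage live while the analysis job runs (Ctrl+C to stop)
    Get-Counter '\Memory\% Committed Bytes In Use' -SampleInterval 5 -Continuous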

One thought on “Workaround for NTFS deduplication error 0x8007000E Not enough storage is available to complete this operation”

  1. What can also help in this case: start the job with a lower maximum memory usage. In my case, on a server with 16GB RAM, the job failed with error 0x8007000E when allowed to use all of the memory, but worked when limited to just 50% of the available memory. Use the following command to do this:

    # E: stands in for the volume to optimize (-Volume requires a drive letter)
    Start-DedupJob -Type Optimization -Volume E: -Memory 50 -Cores 80 -Priority High
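
To confirm a throttled job actually runs to completion, the dedup cmdlets themselves are enough; E: below is again a stand-in for your volume:

    # List currently running dedup jobs and their progress
    Get-DedupJob

    # Once finished, check the optimization results for the volume
    Get-DedupStatus -Volume E: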
