Over-provisioning an SSD - does it still hold?











Multiple (but not very recent) sources suggest that ~7% of an SSD's space should be left unallocated in order to reduce drive wear. Is that advice still valid today, or has the situation changed?










migrated from unix.stackexchange.com Jul 24 '15 at 18:01


This question came from our site for users of Linux, FreeBSD and other Un*x-like operating systems.















  • It does matter because of the TRIM-enable problem, as I said. I suspect SU is a good fit for this question, but it does now need editing to mention Linux!
    – sourcejedi
    Jul 24 '15 at 18:10










  • Free space allows for better performance. Drive wear & tear is overstated and possibly a myth now; a good quality SSD can last over 10 years writing to it 24/7. This might be a good article to review: howtogeek.com/165472/… Writing to an empty block is fairly quick, but writing to a partially-filled block involves reading the partially-filled block, modifying its value, and then writing it back. Repeat this many, many times for each file you write to the drive, as the file will likely consume many blocks.
    – Sun
    Jul 30 '15 at 16:42















Tags: partitioning, ssd






asked Jul 24 '15 at 8:14









marmistrz





























3 Answers

















Accepted answer (14 votes), answered Jul 24 '15 at 11:11 by sourcejedi










Windows will generally use TRIM. This means that as long as you have X% free space on the filesystem, the drive will see that X% as unallocated.[*] Over-provisioning is not required.



Exception: historically, SSDs with Sandforce controllers/firmware have not restored full performance after TRIM :(.



Performance loss when the drive is full can be significant, more so than on some other drives. It is associated with high write amplification, and hence increased wear. Source: AnandTech reviews.



So over-provisioning is necessary if and only if:




  • you're not sure that TRIM will be used. AFAIK it's still not enabled by default on Linux, because of performance issues with a few old & badly-behaving drives.

  • OR you're worried about filling a Sandforce drive (and that the content won't be amenable to compression by the smart controller).


It's not too hard to enable TRIM on Linux, and you're unlikely to notice any problems.
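As a minimal sketch of what enabling TRIM looks like on a typical systemd-based distro (device paths here are examples; adjust for your system, and note these commands need root):

```shell
# Check whether the drive advertises TRIM support: non-zero DISC-GRAN
# and DISC-MAX values mean discard/TRIM is available.
lsblk --discard /dev/sda

# One-off TRIM of a mounted filesystem:
sudo fstrim -v /

# Periodic TRIM via the systemd timer (usually preferable to the
# 'discard' mount option, which issues discards synchronously on
# every delete):
sudo systemctl enable --now fstrim.timer
```

The timer approach batches TRIM work weekly, which avoids the per-delete latency that gave the `discard` mount option a bad reputation on some drives.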



Fortunately, several of the most popular brands make their own controllers. The Sandforce controllers are not as popular as they used to be. Sandforce issues make me skeptical about that specific "smart" controller design, which was very aggressive for its time. Apologies to Sandforce, but I don't have a reference for the exact controller models affected.





[*] Filesystems like having plenty of free space too, to reduce fragmentation. So TRIM is great, because you don't have to add two safety margins together: the same free space helps both of them. The drive can take advantage of the unallocated space to improve performance, as well as avoiding high wear, as you say.






  • Is it possible to detect a Sandforce controller once having an installed SSD?
    – marmistrz
    Jul 24 '15 at 13:54










  • There isn't a standard read-out or a test that I'm aware of. You need to know what the drive is & try looking it up. About all I can see is a "model number" e.g. in GNOME Disks like "M4-CT128M4SSD2", which might be awkward to match. Sorry again. I believe Samsung use their own controllers, and Crucial/Micron use Marvell. OCZ/Toshiba have used various controllers including Sandforce.
    – sourcejedi
    Jul 24 '15 at 17:37










  • This assumes that you actually have free space on the filesystem, right? :D
    – endolith
    Sep 15 '16 at 1:27










  • Be specific as to what part you're responding to please, this was over a year ago. I think you mean "[hypothetical] I have no discipline, need my system to push back against continued abuse, and have no software to set a high water mark alarm". Which is a perfectly valid criticism, albeit interesting from someone who would consider over-provisioning in the first place.
    – sourcejedi
    Sep 15 '16 at 7:30


















Answer (6 votes)













Modern SSD controllers are smart enough that overprovisioning is not typically necessary for everyday use. However, there are still situations, primarily in datacenter environments, where overprovisioning is recommended. To understand why overprovisioning can be useful, it is necessary to understand how SSDs work.



SSDs must cope with the limitations of flash memory when writing data



SSDs use a type of memory called NAND flash memory. Unlike hard drives, NAND cells containing data cannot be directly overwritten; the drive needs to erase existing data before it can write new data. Furthermore, while SSDs write data in pages that are typically 4 KB to 16 KB in size, they can only erase data in large groups of pages called blocks, typically several hundred KBs to several MBs in size in modern SSDs.



NAND also has a limited amount of write endurance. To avoid rewriting data unnecessarily in order to erase blocks, and to ensure that no block receives a disproportionate number of writes, the drive tries to spread out writes, especially small random writes, to different blocks. If the writes replace old data, it marks the old pages as invalid. Once all the pages in a block are marked invalid, the drive is free to erase it without having to rewrite valid data.



SSDs need free space to function optimally, but not every workload is conducive to maintaining free space



If the drive has little or no free space remaining, it will not be able to spread out writes. Instead, the drive will need to erase blocks right away as writes are sent to the drive, rewriting any valid data within those blocks into other blocks. This results in more data being written to the NAND than is sent to the drive, a phenomenon known as write amplification. Write amplification is especially pronounced with random write-intensive workloads, such as online transaction processing (OLTP), and needs to be kept to a minimum because it results in reduced performance and endurance.
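As an illustration (with made-up numbers, since the real counters are vendor-specific SMART attributes), write amplification is simply the ratio of bytes programmed to the NAND to bytes written by the host:

```shell
# Hypothetical counters: the host wrote 100 GiB, but the drive had to
# program 250 GiB of NAND to absorb it (rewriting valid pages while
# erasing blocks).
host_written=$((100 * 1024 * 1024 * 1024))
nand_written=$((250 * 1024 * 1024 * 1024))

# Write amplification factor, kept as an integer scaled by 10
# (bash has no floating point):
wa_x10=$(( nand_written * 10 / host_written ))
echo "write amplification: $((wa_x10 / 10)).$((wa_x10 % 10))x"   # 2.5x
```

A factor of 1.0x would mean the drive wrote exactly what the host sent; anything above that is extra wear and lost write bandwidth.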



To reduce write amplification, most modern systems support a command called TRIM, which tells the drive which blocks no longer contain valid data so they can be erased. This is necessary because the drive would otherwise need to assume that data logically deleted by the operating system is still valid, which hinders the drive's ability to maintain adequate free space.



However, TRIM is sometimes not possible, such as when the drive is in an external enclosure (most enclosures do not support TRIM) or when the drive is used with an older operating system. Furthermore, under highly-intensive random-write workloads, writes will be spread over large regions of the underlying NAND, which means that forced rewriting of data and attendant write amplification can occur even if the drive is not nearly full.



Modern SSDs experience significantly less write amplification than older drives but some workloads can still benefit from overprovisioning



The earliest SSDs had much less mature firmware that would tend to rewrite data much more often than necessary. Early Indilinx and JMicron controllers (the JMF602 was infamous for stuttering and abysmal random write performance) suffered from extremely high write amplification under intensive random-write workloads, sometimes exceeding 100x. (Imagine writing over 100 MB of data to the NAND when you're just trying to write 1 MB!). Newer controllers, with the benefit of higher processing power, improved flash management algorithms, and TRIM support, are much better able to handle these situations, although heavy random-write workloads can still cause write amplification in excess of 10x in modern SSDs.



Overprovisioning provides the drive with a larger region of free space to handle random writes and avoid forced rewriting of data. All SSDs are overprovisioned to at least some minimal degree; some use only the difference between GB and GiB to provide about 7% of spare space for the drive to work with, while others have more overprovisioning to optimize performance for the needs of specific applications. For example, an enterprise SSD for write-heavy OLTP or database workloads may have 512 GiB of physical NAND yet have an advertised capacity of 400 GB, rather than the 480 to 512 GB typical of consumer SSDs with similar amounts of NAND.
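The arithmetic behind those figures (the drive capacities are illustrative, but the GiB-vs-GB gap itself is exact):

```shell
gib=$((1024 * 1024 * 1024))
gb=$((1000 * 1000 * 1000))

nand=$((512 * gib))   # 512 GiB of physical NAND

# Consumer drive advertised as 512 GB: the spare area is just the
# binary-vs-decimal unit gap.
spare_pct=$(( (nand - 512 * gb) * 100 / (512 * gb) ))
echo "512 GB consumer drive:   ~${spare_pct}% spare"      # ~7%

# Enterprise drive with the same NAND advertised as 400 GB:
spare_pct_ent=$(( (nand - 400 * gb) * 100 / (400 * gb) ))
echo "400 GB enterprise drive: ~${spare_pct_ent}% spare"  # ~37%
```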



If your workload is particularly demanding, or if you're using the drive in an environment where TRIM is not supported, you can manually overprovision space by partitioning the drive so that some space is unused. For example, you can partition a 512 GB SSD to 400 GB and leave the remaining space unallocated, and the drive will use the unallocated space as spare space. Do note, however, that this unallocated space must be trimmed if it has been written to before; otherwise, it will have no benefit as the drive will see that space as occupied. (Partitioning utilities should be smart enough to do this, but I'm not 100% sure; see "Does Windows trim unpartitioned (unformatted) space on an SSD?")
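A sketch of what that manual over-provisioning might look like on Linux (/dev/sdX is a hypothetical device; both commands are destructive, so triple-check the device name before running anything like this):

```shell
# Partition a 512 GB drive down to 400 GB, leaving the rest unallocated:
sudo parted --script /dev/sdX mklabel gpt mkpart primary ext4 1MiB 400GB

# If the unallocated region was ever written to, discard it once so the
# controller knows it is free (otherwise it still counts as occupied):
sudo blkdiscard --offset $((400 * 1000 * 1000 * 1000)) /dev/sdX
```

Without the `blkdiscard` step, previously-written unallocated space gives no benefit, exactly as the answer notes.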



If you're just a normal consumer, overprovisioning is generally not necessary



In typical consumer environments where TRIM is supported, the SSD is less than 70-80% full, and is not getting continuously slammed with random writes, write amplification is typically not an issue and overprovisioning is generally not necessary.



Ultimately, most consumers will not write nearly enough data to disk to wear out the NAND within the intended service life of most SSDs, even with high write amplification, so it's not something to lose sleep over.






    Answer (1 vote)













    The size of the additional space differs greatly between SSD models, but in general the advice still holds.






    • Do you have any reference for this?
      – Léo Lam
      Jul 28 '15 at 3:04










    • Do you mean a reference for a specific drive? Many drives (not only SSDs) have public technical references, but sorry, I don't have time to search. However, if you're interested in a general reference, check this: samsung.com/global/business/semiconductor/minisite/SSD/…
      – Tomasz Klim
      Jul 28 '15 at 5:41






    • Yes, I was wondering whether this is still true for all drives, or only for specific models or brands, as the other answer suggests that over-provisioning is no longer necessary on recent drives.
      – Léo Lam
      Jul 28 '15 at 5:44










    • The other answer is not exactly right. Drives with Sandforce controllers indeed had plenty of additional space; that's perfectly true. But any other SSD controller uses additional space too, just not as much. And this probably won't change.
      – Tomasz Klim
      Jul 28 '15 at 5:50






    • From the doc for Samsung SSD: "there is always the option to manually set aside additional space for even further-improved performance (e.g. under demanding workloads)", i.e. they suggest you don't need to start considering this unless you have a "demanding workload".
      – sourcejedi
      Aug 2 '15 at 8:48














    3 Answers
    3






    active

    oldest

    votes








    3 Answers
    3






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes








    up vote
    14
    down vote



    accepted










    Windows will generally use TRIM. This means as long as you have X% free space on the filesystem, the drive will see X% as unallocated.[*] Over-provisioning not required.



    Exception: historically, SSDs with Sandforce controllers/firmware have not restored full performance after TRIM :(.



    Performance loss on the full drive can be significant, and more so than some other drives. This will be associated with high write amplification, and hence increases wear. Source: Anandtech reviews.



    So it's necessary if and only if




    • you're not sure that TRIM will be used. AFAIK it's still not enabled by default on Linux, because of performance issues with a few old & badly-behaving drives.

    • OR you're worried about filling a Sandforce drive (and that the content won't be amenable to compression by the smart controller).


    It's not too hard to enable TRIM on Linux, and you're unlikely to notice any problems.



    Fortunately, several of the most popular brands make their own controller. The Sandforce controllers are not as popular as they used to be. Sandforce issues make me skeptical about that specific "smart" controller design, which was very aggressive for its time. Apologies to Sandforce but I don't have a reference for the exact controller models affected.





    [*] Filesystems like having plenty of free space too, to reduce fragmentation. So TRIM is great, because you don't have to add two safety margins together, the same free space helps both of them :).
    The drive can take advantage of the unallocated space to improve performance, as well as avoiding high wear as you say.






    share|improve this answer





















    • Is it possible to detect a Sandforce controller once having an installed SSD?
      – marmistrz
      Jul 24 '15 at 13:54










    • There isn't a standard read-out or a test that I'm aware of. You need to know what the drive is & try looking it up. About all I can see is a "model number" e.g. in GNOME Disks like "M4-CT128M4SSD2", which might be awkward to match. Sorry again. I believe Samsung use their own controllers, and Crucial/Micron use Marvell. OCZ/Toshiba have used various controllers including Sandforce.
      – sourcejedi
      Jul 24 '15 at 17:37










    • This assumes that you actually have free space on the filesystem, right? :D
      – endolith
      Sep 15 '16 at 1:27










    • Be specific as to what part you're responding to please, this was over a year ago. I think you mean "[hypothetical] I have no discipline, need my system to push back against continued abuse, and have no software to set a high water mark alarm". Which is a perfectly valid criticism, albeit interesting from someone who would consider over-provisioning in the first place.
      – sourcejedi
      Sep 15 '16 at 7:30















    up vote
    14
    down vote



    accepted










    Windows will generally use TRIM. This means as long as you have X% free space on the filesystem, the drive will see X% as unallocated.[*] Over-provisioning not required.



    Exception: historically, SSDs with Sandforce controllers/firmware have not restored full performance after TRIM :(.



    Performance loss on the full drive can be significant, and more so than some other drives. This will be associated with high write amplification, and hence increases wear. Source: Anandtech reviews.



    So it's necessary if and only if




    • you're not sure that TRIM will be used. AFAIK it's still not enabled by default on Linux, because of performance issues with a few old & badly-behaving drives.

    • OR you're worried about filling a Sandforce drive (and that the content won't be amenable to compression by the smart controller).


    It's not too hard to enable TRIM on Linux, and you're unlikely to notice any problems.



    Fortunately, several of the most popular brands make their own controller. The Sandforce controllers are not as popular as they used to be. Sandforce issues make me skeptical about that specific "smart" controller design, which was very aggressive for its time. Apologies to Sandforce but I don't have a reference for the exact controller models affected.





    [*] Filesystems like having plenty of free space too, to reduce fragmentation. So TRIM is great, because you don't have to add two safety margins together, the same free space helps both of them :).
    The drive can take advantage of the unallocated space to improve performance, as well as avoiding high wear as you say.






    share|improve this answer





















    • Is it possible to detect a Sandforce controller once having an installed SSD?
      – marmistrz
      Jul 24 '15 at 13:54










    • There isn't a standard read-out or a test that I'm aware of. You need to know what the drive is & try looking it up. About all I can see is a "model number" e.g. in GNOME Disks like "M4-CT128M4SSD2", which might be awkward to match. Sorry again. I believe Samsung use their own controllers, and Crucial/Micron use Marvell. OCZ/Toshiba have used various controllers including Sandforce.
      – sourcejedi
      Jul 24 '15 at 17:37










    • This assumes that you actually have free space on the filesystem, right? :D
      – endolith
      Sep 15 '16 at 1:27










    • Be specific as to what part you're responding to please, this was over a year ago. I think you mean "[hypothetical] I have no discipline, need my system to push back against continued abuse, and have no software to set a high water mark alarm". Which is a perfectly valid criticism, albeit interesting from someone who would consider over-provisioning in the first place.
      – sourcejedi
      Sep 15 '16 at 7:30













    up vote
    14
    down vote



    accepted







    up vote
    14
    down vote



    accepted






    Windows will generally use TRIM. This means as long as you have X% free space on the filesystem, the drive will see X% as unallocated.[*] Over-provisioning not required.



    Exception: historically, SSDs with Sandforce controllers/firmware have not restored full performance after TRIM :(.



    Performance loss on the full drive can be significant, and more so than some other drives. This will be associated with high write amplification, and hence increases wear. Source: Anandtech reviews.



    So it's necessary if and only if




    • you're not sure that TRIM will be used. AFAIK it's still not enabled by default on Linux, because of performance issues with a few old & badly-behaving drives.

    • OR you're worried about filling a Sandforce drive (and that the content won't be amenable to compression by the smart controller).


    It's not too hard to enable TRIM on Linux, and you're unlikely to notice any problems.



    Fortunately, several of the most popular brands make their own controller. The Sandforce controllers are not as popular as they used to be. Sandforce issues make me skeptical about that specific "smart" controller design, which was very aggressive for its time. Apologies to Sandforce but I don't have a reference for the exact controller models affected.





    [*] Filesystems like having plenty of free space too, to reduce fragmentation. So TRIM is great, because you don't have to add two safety margins together, the same free space helps both of them :).
    The drive can take advantage of the unallocated space to improve performance, as well as avoiding high wear as you say.






    share|improve this answer












    Windows will generally use TRIM. This means as long as you have X% free space on the filesystem, the drive will see X% as unallocated.[*] Over-provisioning not required.



    Exception: historically, SSDs with Sandforce controllers/firmware have not restored full performance after TRIM :(.



    Performance loss on the full drive can be significant, and more so than some other drives. This will be associated with high write amplification, and hence increases wear. Source: Anandtech reviews.



    So it's necessary if and only if




    • you're not sure that TRIM will be used. AFAIK it's still not enabled by default on Linux, because of performance issues with a few old & badly-behaving drives.

    • OR you're worried about filling a Sandforce drive (and that the content won't be amenable to compression by the smart controller).


    It's not too hard to enable TRIM on Linux, and you're unlikely to notice any problems.



    Fortunately, several of the most popular brands make their own controller. The Sandforce controllers are not as popular as they used to be. Sandforce issues make me skeptical about that specific "smart" controller design, which was very aggressive for its time. Apologies to Sandforce but I don't have a reference for the exact controller models affected.





    [*] Filesystems like having plenty of free space too, to reduce fragmentation. So TRIM is great, because you don't have to add two safety margins together, the same free space helps both of them :).
    The drive can take advantage of the unallocated space to improve performance, as well as avoiding high wear as you say.







    share|improve this answer












    share|improve this answer



    share|improve this answer










    answered Jul 24 '15 at 11:11









    sourcejedi

    1,74821128




    1,74821128












    • Is it possible to detect a Sandforce controller once having an installed SSD?
      – marmistrz
      Jul 24 '15 at 13:54










    • There isn't a standard read-out or a test that I'm aware of. You need to know what the drive is & try looking it up. About all I can see is a "model number" e.g. in GNOME Disks like "M4-CT128M4SSD2", which might be awkward to match. Sorry again. I believe Samsung use their own controllers, and Crucial/Micron use Marvell. OCZ/Toshiba have used various controllers including Sandforce.
      – sourcejedi
      Jul 24 '15 at 17:37










    • This assumes that you actually have free space on the filesystem, right? :D
      – endolith
      Sep 15 '16 at 1:27










    • Be specific as to what part you're responding to please, this was over a year ago. I think you mean "[hypothetical] I have no discipline, need my system to push back against continued abuse, and have no software to set a high water mark alarm". Which is a perfectly valid criticism, albeit interesting from someone who would consider over-provisioning in the first place.
      – sourcejedi
      Sep 15 '16 at 7:30


















    • Is it possible to detect a Sandforce controller once having an installed SSD?
      – marmistrz
      Jul 24 '15 at 13:54










    • There isn't a standard read-out or a test that I'm aware of. You need to know what the drive is & try looking it up. About all I can see is a "model number" e.g. in GNOME Disks like "M4-CT128M4SSD2", which might be awkward to match. Sorry again. I believe Samsung use their own controllers, and Crucial/Micron use Marvell. OCZ/Toshiba have used various controllers including Sandforce.
      – sourcejedi
      Jul 24 '15 at 17:37










    • This assumes that you actually have free space on the filesystem, right? :D
      – endolith
      Sep 15 '16 at 1:27










    • Be specific as to what part you're responding to please, this was over a year ago. I think you mean "[hypothetical] I have no discipline, need my system to push back against continued abuse, and have no software to set a high water mark alarm". Which is a perfectly valid criticism, albeit interesting from someone who would consider over-provisioning in the first place.
      – sourcejedi
      Sep 15 '16 at 7:30
















    Is it possible to detect a Sandforce controller once having an installed SSD?
    – marmistrz
    Jul 24 '15 at 13:54




    Is it possible to detect a Sandforce controller once having an installed SSD?
    – marmistrz
    Jul 24 '15 at 13:54












    There isn't a standard read-out or a test that I'm aware of. You need to know what the drive is & try looking it up. About all I can see is a "model number" e.g. in GNOME Disks like "M4-CT128M4SSD2", which might be awkward to match. Sorry again. I believe Samsung use their own controllers, and Crucial/Micron use Marvell. OCZ/Toshiba have used various controllers including Sandforce.
    – sourcejedi
    Jul 24 '15 at 17:37




    There isn't a standard read-out or a test that I'm aware of. You need to know what the drive is & try looking it up. About all I can see is a "model number" e.g. in GNOME Disks like "M4-CT128M4SSD2", which might be awkward to match. Sorry again. I believe Samsung use their own controllers, and Crucial/Micron use Marvell. OCZ/Toshiba have used various controllers including Sandforce.
    – sourcejedi
    Jul 24 '15 at 17:37












    This assumes that you actually have free space on the filesystem, right? :D
    – endolith
    Sep 15 '16 at 1:27




    This assumes that you actually have free space on the filesystem, right? :D
    – endolith
    Sep 15 '16 at 1:27
























    up vote
    6
    down vote













    Modern SSD controllers are smart enough that overprovisioning is not typically necessary for everyday use. However, there are still situations, primarily in datacenter environments, where overprovisioning is recommended. To understand why overprovisioning can be useful, it is necessary to understand how SSDs work.



    SSDs must cope with the limitations of flash memory when writing data



    SSDs use NAND flash memory. Unlike hard drives, NAND cells containing data cannot be directly overwritten; the drive must erase existing data before it can write new data. Furthermore, while SSDs write data in pages that are typically 4 KB to 16 KB in size, they can only erase data in larger groups of pages called blocks, typically several hundred KB to several MB in modern SSDs.



    NAND also has a limited amount of write endurance. To avoid rewriting data unnecessarily in order to erase blocks, and to ensure that no block receives a disproportionate number of writes, the drive tries to spread out writes, especially small random writes, to different blocks. If the writes replace old data, it marks the old pages as invalid. Once all the pages in a block are marked invalid, the drive is free to erase it without having to rewrite valid data.
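    The out-of-place write behavior described above can be sketched with a toy flash translation layer. This is purely illustrative: `ToyFTL`, its naive first-free-page allocation, and the tiny 4-page blocks are invented for this example and do not model any real controller.

    ```python
    # Toy FTL: writes go to fresh pages, the old copy is merely marked
    # invalid, and a block becomes free to erase only once no page in it
    # still holds valid data.
    PAGES_PER_BLOCK = 4

    class ToyFTL:
        def __init__(self, blocks=2):
            # each physical page is 'free', 'valid', or 'invalid'
            self.blocks = [['free'] * PAGES_PER_BLOCK for _ in range(blocks)]
            self.map = {}  # logical page number -> (block, page)

        def write(self, lpn):
            # out-of-place update: the old physical copy is only marked invalid
            if lpn in self.map:
                b, p = self.map[lpn]
                self.blocks[b][p] = 'invalid'
            # place the new data in the first free page found
            for b, blk in enumerate(self.blocks):
                for p, state in enumerate(blk):
                    if state == 'free':
                        blk[p] = 'valid'
                        self.map[lpn] = (b, p)
                        return
            raise RuntimeError("no free pages: garbage collection needed")

        def erasable(self, b):
            # erasable "for free" only when no page in the block is still valid
            return all(s != 'valid' for s in self.blocks[b])

    ftl = ToyFTL()
    for _ in range(2):          # write logical pages 0-3, then update them all
        for lpn in range(4):
            ftl.write(lpn)
    print(ftl.erasable(0))      # True: block 0 now holds only stale copies
    print(ftl.erasable(1))      # False: block 1 holds the live data
    ```

    Note that if block 0 were erased here, no valid data would need rewriting; that is exactly the cheap case the controller tries to engineer by spreading writes out.
    
    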



    SSDs need free space to function optimally, but not every workload is conducive to maintaining free space



    If the drive has little or no free space remaining, it will not be able to spread out writes. Instead, the drive will need to erase blocks right away as writes are sent to the drive, rewriting any valid data within those blocks into other blocks. This results in more data being written to the NAND than is sent to the drive, a phenomenon known as write amplification. Write amplification is especially pronounced with random write-intensive workloads, such as online transaction processing (OLTP), and needs to be kept to a minimum because it results in reduced performance and endurance.
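    The cost of this forced rewriting is easy to quantify. A back-of-the-envelope sketch (illustrative numbers only; `write_amplification` is just the ratio of NAND pages physically written to pages the host asked to write):

    ```python
    # When a nearly-full drive must reclaim a block to service a small write,
    # every still-valid page in the victim block gets rewritten as well.
    def write_amplification(valid_pages_relocated, host_pages_written=1):
        """NAND pages physically written divided by pages the host requested."""
        return (host_pages_written + valid_pages_relocated) / host_pages_written

    # Nearly-full drive: 511 of 512 pages in the victim block still valid.
    print(write_amplification(511))   # 512.0 -> one 4 KB write costs a whole block
    # Well-trimmed drive: only 10 valid pages need relocating.
    print(write_amplification(10))    # 11.0
    ```
    
    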



    To reduce write amplification, most modern systems support a command called TRIM, which tells the drive which blocks no longer contain valid data so they can be erased. This is necessary because the drive would otherwise need to assume that data logically deleted by the operating system is still valid, which hinders the drive's ability to maintain adequate free space.



    However, TRIM is sometimes not possible, such as when the drive is in an external enclosure (most enclosures do not support TRIM) or when the drive is used with an older operating system. Furthermore, under highly intensive random-write workloads, writes will be spread over large regions of the underlying NAND, which means that forced rewriting of data and the attendant write amplification can occur even if the drive is not nearly full.



    Modern SSDs experience significantly less write amplification than older drives but some workloads can still benefit from overprovisioning



    The earliest SSDs had much less mature firmware that would rewrite data far more often than necessary. Early Indilinx and JMicron controllers (the JMF602 was infamous for stuttering and abysmal random write performance) suffered from extremely high write amplification under intensive random-write workloads, sometimes exceeding 100x (imagine writing over 100 MB of data to the NAND when you're just trying to write 1 MB!). Newer controllers, with the benefit of higher processing power, improved flash management algorithms, and TRIM support, handle these situations much better, although heavy random-write workloads can still cause write amplification in excess of 10x in modern SSDs.



    Overprovisioning provides the drive with a larger region of free space to handle random writes and avoid forced rewriting of data. All SSDs are overprovisioned to at least some minimal degree; some use only the difference between GB and GiB to provide about 7% of spare space for the drive to work with, while others have more overprovisioning to optimize performance for the needs of specific applications. For example, an enterprise SSD for write-heavy OLTP or database workloads may have 512 GiB of physical NAND yet have an advertised capacity of 400 GB, rather than the 480 to 512 GB typical of consumer SSDs with similar amounts of NAND.
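    The spare area implied by the capacities quoted above works out like this (`GIB`, `GB`, and `op_percent` are just arithmetic helpers for the example):

    ```python
    GIB = 2**30   # binary gigabyte (GiB)
    GB = 10**9    # decimal gigabyte (GB)

    def op_percent(physical_bytes, user_bytes):
        """Overprovisioning as a percentage of user-visible capacity."""
        return 100 * (physical_bytes - user_bytes) / user_bytes

    # Consumer drive relying only on the GiB-vs-GB gap: 512 GiB NAND, 512 GB visible
    print(round(op_percent(512 * GIB, 512 * GB), 1))   # 7.4 -> the "about 7%"
    # Enterprise drive from the example: 512 GiB NAND, 400 GB visible
    print(round(op_percent(512 * GIB, 400 * GB), 1))   # 37.4
    ```
    
    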



    If your workload is particularly demanding, or if you're using the drive in an environment where TRIM is not supported, you can manually overprovision space by partitioning the drive so that some space is unused. For example, you can partition a 512 GB SSD to 400 GB and leave the remaining space unallocated, and the drive will use the unallocated space as spare space. Do note, however, that this unallocated space must be trimmed if it has been written to before; otherwise, it will have no benefit as the drive will see that space as occupied. (Partitioning utilities should be smart enough to do this, but I'm not 100% sure; see "Does Windows trim unpartitioned (unformatted) space on an SSD?")
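    On Linux, a minimal sketch of this might look like the following, assuming util-linux's `blkdiscard` and GNU `parted` are available. `/dev/sdX` is a placeholder for a drive you intend to wipe; these commands destroy all data on it.

    ```shell
    # Trim the entire device first so the drive knows every page is free;
    # this matters if the space you are about to leave unallocated was
    # ever written to before.
    sudo blkdiscard /dev/sdX

    # Partition only 400 GB of a 512 GB drive; the remainder stays
    # unallocated and serves as extra spare area for the controller.
    sudo parted /dev/sdX --script mklabel gpt mkpart primary ext4 1MiB 400GB
    ```
    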



    If you're just a normal consumer, overprovisioning is generally not necessary



    In typical consumer environments where TRIM is supported, the SSD is less than 70-80% full, and is not getting continuously slammed with random writes, write amplification is typically not an issue and overprovisioning is generally not necessary.



    Ultimately, most consumers will not write nearly enough data to disk to wear out the NAND within the intended service life of most SSDs, even with high write amplification, so it's not something to lose sleep over.






        edited Nov 24 at 22:15

























        answered Dec 7 '16 at 20:35









        bwDraco























            up vote
            1
            down vote













    The amount of reserved spare space varies considerably between SSD models, but in general the advice still holds.



























            • Do you have any reference for this?
              – Léo Lam
              Jul 28 '15 at 3:04










            • Do you mean reference for a specific drive? Many drives (not only SSD) have public technical references, but sorry, I don't have time to search. However if you're interested in general reference, check this: samsung.com/global/business/semiconductor/minisite/SSD/…
              – Tomasz Klim
              Jul 28 '15 at 5:41






            • 1




              Yes, I was wondering whether this is still true for all drives, or only for specific models or brands, as the other answer suggests that over-provisioning is no longer necessary on recent drives.
              – Léo Lam
              Jul 28 '15 at 5:44










            • The other answer is not exactly right. Indeed drives with Sandforce controllers had plenty of additional space. That's perfectly true. But also any other SSD controller uses additional space, just not as much. As this probably won't change.
              – Tomasz Klim
              Jul 28 '15 at 5:50






            • 1




              From the doc for Samsung SSD: "there is always the option to manually set aside additional space for even further-improved performance (e.g. under demanding workloads)", i.e. they suggest you don't need to start considering this unless you have a "demanding workload".
              – sourcejedi
              Aug 2 '15 at 8:48

















            answered Jul 24 '15 at 9:56









            Tomasz Klim




















