I used to write enterprise-class hard drive firmware for HGST before they were bought by Western Digital. Our code was also shared with the consumer SATA drives. Hard drives have *very* robust error detection and correction. They will generally go to extraordinary lengths to recover data from marginal sectors and relocate it, and the ECC algorithms implemented in the hardware are very strong. Drives do need to be powered up once in a while to do background scanning for weak sectors, so just writing data and putting the drive on a shelf is not the best idea.

As with many other things, the best approach is defense in depth (layers of defense). Store several copies of the data, and not on the same brand of media or hard drive model. To detect corruption, verify the data against a hash created at the time the data was written; if a copy fails the check, restore from another copy (one that hopefully isn't also corrupted). Filesystems like ZFS are awesome for this, since they checksum everything and scrub for corruption automatically.

All of the big cloud storage providers keep your data in multiple places; even two copies aren't considered safe enough. And of course they spread the copies out geographically to mitigate damage from external factors.

Mike
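P.S. For anyone who wants to do the hash-at-write-time check by hand, here's a minimal sketch in Python. It assumes a sidecar ".sha256" file next to each data file (the naming is just for illustration); the standard sha256sum tool with its -c flag does essentially the same job.

    import hashlib
    import pathlib

    def sha256_of(path, chunk=1 << 20):
        # Hash the file in chunks so large files don't need to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return h.hexdigest()

    def write_sidecar(path):
        # At write time: record the hash next to the file.
        pathlib.Path(str(path) + ".sha256").write_text(sha256_of(path) + "\n")

    def verify(path):
        # Later: recompute and compare. A mismatch means this copy is
        # corrupt and you should restore it from another copy.
        expected = pathlib.Path(str(path) + ".sha256").read_text().strip()
        return sha256_of(path) == expected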