No Data Corruption & Data Integrity
Discover what data corruption is, how we guarantee data integrity, and how this could be beneficial for the files within your website hosting account.
Data corruption is the process of files being damaged due to a hardware or software failure, and it is one of the main problems that hosting companies face: the larger a hard disk is and the more information is stored on it, the more likely it is for data to get corrupted. Various fail-safes exist, but the data is often corrupted silently, so neither the file system nor the administrators notice anything. As a result, a corrupted file is treated as a good one, and if the hard drive is part of a RAID, the file is copied to all the other drives. In principle, this is done for redundancy, but in practice it makes the damage worse. Once a file is damaged, it becomes partly or completely unreadable: a text file can no longer be opened, an image file shows a random mix of colors if it opens at all, and an archive cannot be unpacked, so you risk losing your website content. Although the most widespread server file systems include various checks, they often fail to detect a problem early enough, or they need a long time to check all the files, during which the server is not operational.
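To see why corruption can be "silent", consider a minimal sketch in Python: a plain read of a damaged file returns the bytes without any error, and only comparing against a checksum recorded at write time reveals the problem. The names and the simulated bit flip below are purely illustrative, not part of any real hosting platform.

```python
# Illustrates silent data corruption: a flipped bit goes unnoticed by
# an ordinary read, but a previously stored checksum exposes it.
import hashlib

def checksum(data: bytes) -> str:
    """Return a SHA-256 digest of the data."""
    return hashlib.sha256(data).hexdigest()

original = b"important website content"
stored_digest = checksum(original)  # recorded when the file was written

# Simulate a single flipped bit caused by a hardware fault.
corrupted = bytearray(original)
corrupted[0] ^= 0x01

# Reading the damaged bytes raises no error -- the corruption is
# silent. Only re-computing the digest and comparing reveals it.
print(checksum(bytes(corrupted)) == stored_digest)  # False
```

Without such a recorded fingerprint, the damaged file would be indistinguishable from a healthy one and would be replicated across a RAID as-is.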
No Data Corruption & Data Integrity in Shared Hosting
We guarantee the integrity of the information uploaded to every shared hosting account created on our cloud platform because we use the advanced ZFS file system. It was designed specifically to avert silent data corruption through a unique checksum for each file. We store your information on multiple SSD drives that work in a RAID, so the same files are available in several places at once. ZFS checks the digital fingerprint of all files on all of the drives in real time, and if the checksum of any file differs from what it should be, the file system replaces that file with an undamaged copy from another drive in the RAID. Most conventional file systems do not use checksums, so data can be silently damaged and the bad file duplicated across all drives over time; since that cannot happen on a server using ZFS, you do not have to worry about the integrity of your information.
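The compare-and-repair idea described above can be sketched as follows. This is a toy model, not the actual ZFS implementation: each mirrored copy is verified against the recorded checksum, and any damaged copy is overwritten with a verified one. The function and variable names are assumptions made for illustration.

```python
# A toy model of checksum-based self-healing across mirrored copies,
# in the spirit of what ZFS does on a mirror: verify every copy and
# repair bad ones from a copy that passes verification.
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 fingerprint of a block of data."""
    return hashlib.sha256(data).hexdigest()

def self_heal(mirrors: list, expected: str) -> bool:
    """Overwrite any mirror whose digest mismatches with a good copy.

    Returns True if every copy is valid afterwards, False if no
    intact copy was left to repair from.
    """
    good = next((m for m in mirrors if digest(bytes(m)) == expected), None)
    if good is None:
        return False  # all copies damaged; nothing to repair from
    for m in mirrors:
        if digest(bytes(m)) != expected:
            m[:] = good  # replace the damaged copy in place
    return True

data = b"site index page"
expected = digest(data)
drives = [bytearray(data), bytearray(data)]  # two-way mirror
drives[1][3] ^= 0xFF                         # silent damage on drive 2

self_heal(drives, expected)
print(all(digest(bytes(m)) == expected for m in drives))  # True
```

The key design point is that the checksum is stored separately from the data it describes, so a damaged block cannot vouch for itself; the real ZFS keeps checksums in parent blocks of its on-disk tree for the same reason.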