No Data Corruption & Data Integrity
What does the 'No Data Corruption & Data Integrity' slogan actually mean for the user of an Internet hosting account?
Data corruption is the unintended alteration of a file or the loss of information, which usually occurs while data is being read or written. The cause can be a hardware or software failure, and as a result a file can become partially or entirely corrupted, so it no longer works as it should because some of its bits are scrambled or missing. An image file, for example, will no longer display an accurate picture but a random mix of colors, an archive will be impossible to unpack because its content is unreadable, and so on. If such a problem appears and is not detected by the system or by an administrator, the data is corrupted silently, and when this happens on a drive that is part of a RAID array where the information is synchronized between several drives, the corrupted file is copied to all the other drives and the damage becomes permanent. Many widely used file systems either have no real-time integrity checks or have ones that cannot detect a problem before the damage is done, so silent data corruption is a very common issue on hosting servers where huge volumes of data are stored.
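To illustrate why a checksum catches silent corruption that an ordinary read would miss, here is a minimal sketch: it computes a SHA-256 fingerprint of a file and compares it with a value recorded while the file was known to be healthy. The function names and the read-chunk size are illustrative assumptions, not part of any particular file system.

```python
# Illustrative sketch (not any specific file system's code): detect silent
# corruption by comparing a file's current checksum against one recorded
# when the file was known to be good.
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Return the SHA-256 digest of the file's contents."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_intact(path: Path, known_good_digest: str) -> bool:
    """True only if the file still matches the checksum taken when it was healthy."""
    return fingerprint(path) == known_good_digest

# A single flipped bit changes the digest, so the mismatch is caught even
# though the file still opens and reads without any I/O error.
```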
No Data Corruption & Data Integrity in Semi-dedicated Servers
We have eliminated the risk of files getting corrupted silently because the servers where your semi-dedicated server account will be created use a powerful file system called ZFS. Its key advantage over other file systems is that it keeps a unique checksum for every block of data it stores - a digital fingerprint that is verified in real time. Since we keep all content on multiple NVMe drives, ZFS checks whether the fingerprint of the data on one drive matches the checksum it has stored and the copies on the other drives. If there is a mismatch, the bad copy is replaced with a good one from another drive, and because this happens in real time, there is no chance that a damaged copy remains on our web servers or that it gets duplicated to the other drives in the RAID. No other widely used file system performs this kind of check, and even during a file system check after an unexpected power failure, none of them can detect silently corrupted files. In contrast, ZFS stays consistent after a power loss, and its constant checksum verification makes a lengthy file system check unnecessary.
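The self-healing behavior described above can be sketched in a very simplified form. This is only an illustration under the assumption of an in-memory stand-in for mirrored copies; it is not ZFS's actual implementation, which verifies checksums per block inside the storage pool, and the function names here are hypothetical.

```python
# Simplified sketch of checksum-based self-healing across mirrored copies.
import hashlib

def checksum(data: bytes) -> str:
    """Digital fingerprint of a piece of data."""
    return hashlib.sha256(data).hexdigest()

def read_with_repair(copies: list[bytes], known_good: str) -> bytes:
    """Return a verified copy and overwrite any copy whose checksum does not match."""
    good = next((c for c in copies if checksum(c) == known_good), None)
    if good is None:
        raise IOError("all copies are corrupted - data cannot be recovered")
    for i, c in enumerate(copies):
        if checksum(c) != known_good:
            copies[i] = good  # repair the damaged copy from a healthy one
    return good

# Usage: three mirrored copies, one of them silently corrupted.
original = b"customer website content"
stored = checksum(original)
mirror = [original, b"customer websXte content", original]
data = read_with_repair(mirror, stored)
assert data == original and all(checksum(c) == stored for c in mirror)
```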