I am running a RAIDZ2 across five hard drives and noticed very poor performance and high CPU usage (~50%) when reading or writing files on the encrypted portion of the pool (only about 170 MiB/s), while the plain, unencrypted portion performs fine (~4% CPU usage, about 515 MiB/s). I investigated, tested, and played around a bit and noticed that both the linux516-zfs and zfs-dkms packages from the extra repository suffer from this performance degradation, while zfs-dkms compiled from the AUR manages speeds of nearly 550 MB/s for me at one fifth the CPU load (below 10%). linux516-zfs and zfs-dkms both ship ZFS v2.1.2, while the AUR build is ZFS v2.1.3.

On Friday afternoon, the OpenZFS project released version 2.1.0 (zfs-2.1). However, judging from the changelog, no "encryption performance fix" was included. Almost certainly they will upgrade to OpenZFS 2.1.

The headline feature is dRAID: dRAID vdevs can be rebuilt quickly using distributed spare capacity in place of dedicated spare disks. It's a mature concept that complements the ZFS tool set; corporate vendors such as IBM and Panasas have been shipping other distributed RAID systems for more than ten years. The OpenZFS regression test suite, ztest, is a good indication that dRAID satisfies the ZFS commitment to data protection. When dRAID is considered stable, it will most likely be added to the GUI.

If you have ZFS storage pools from a previous Solaris release, such as the Solaris 10 10/09 release, you can upgrade your pools with the zpool upgrade command.
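The throughput comparison above can be sketched as a small dd-based read test. This is a minimal sketch, not the original poster's method: on a real pool you would point `read_test` at a large file on the encrypted and the unencrypted dataset and compare the reported MB/s; the scratch file here is only so the sketch runs anywhere.

```shell
#!/bin/sh
# Sketch of a sequential read test, assuming a generic Linux box.
# Dataset paths and file sizes are illustrative placeholders.
set -eu

# Report the loaded ZFS module version, if any (e.g. 2.1.2 vs 2.1.3).
[ -r /sys/module/zfs/version ] && cat /sys/module/zfs/version || true

# Read a file with dd and keep only the final throughput line.
read_test() {
  dd if="$1" of=/dev/null bs=1M 2>&1 | tail -n 1
}

# Scratch file so the sketch is self-contained; reads here hit the page
# cache, so absolute numbers are meaningless -- only the procedure matters.
scratch=$(mktemp)
dd if=/dev/zero of="$scratch" bs=1M count=64 status=none
result=$(read_test "$scratch")
echo "$result"
rm -f "$scratch"
```

On a real system, dropping caches (or reading a file larger than RAM) between runs is needed for the encrypted-vs-unencrypted comparison to measure disk and decryption speed rather than cached data.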