I have a Supermicro X10SL7-F motherboard, which includes an integrated LSI 2308 SAS/SATA controller, running a fresh install of ESXi 5.5 Update 1 patch rollup 2.

I have updated the firmware of the LSI 2308 controller to v19 and installed the latest VMware driver (version 19).

Reminder: this LSI controller is on VMware's supported device list. I am using two Western Digital Caviar Black 2TB drives in a RAID 1 configuration on this controller, and the write performance is absolutely terrible. Out of the box, write performance was about 4 MB/s; after contacting Supermicro support and setting Disk Cache Policy = Enabled, I get about 13 MB/s write throughput.

(FYI, this disk cache policy setting is not accessible via the BIOS menu; you HAVE to use LSI's MegaRAID Storage Manager software to set it, which is a huge headache all its own.)

This same RAID 1 configuration allows over 100 MB/s read performance in ESXi, which is perfectly acceptable. Also, as a test I temporarily re-configured the array as RAID 0 (stripe), and both read and write performance in ESXi were just fine, exceeding 100 MB/s. To me, this clearly indicates the write performance problem lies with VMware's driver and is not a limitation of the controller or the underlying hard drives.

Before people jump all over me for using an integrated controller with no onboard RAM cache: I temporarily ran Fedora Linux and was able to demonstrate over 135 MB/s reads and writes to the same RAID 1 configuration.

I have scoured the internet, and all I see are people re-flashing the controller to run in IT mode, then using passthrough to make the drives accessible to some underlying VM (e.g. a ZFS storage array, then re-sharing it back to its parent hypervisor via NFS or iSCSI). I really don't want to do this if I don't have to!

Does anyone know if this same problem exists when running a RAID 10 configuration? Please, is there anyone who has words of wisdom to make the RAID 1 performance on an LSI 2308 controller not suck on ESXi?

---

"Before people jump all over me for using an integrated controller with no onboard RAM cache, I temporarily ran Fedora Linux and was able to demonstrate over 135 MB/s reads and writes to the same RAID 1 configuration."

You cannot compare the results you got from Linux, because Linux (and any other modern OS) does disk caching. It reserves part of RAM (even a few GB) for I/O buffers and disk cache. ESXi does not sacrifice a single bit of RAM for caching disk I/O; instead, it counts on the RAID controller to do it. And if your RAID controller does not have its own cache, you are running your RAID arrays basically un-cached. Truly, your on-board controller (based on the LSI SAS2308 SoC) has no cache, and that *is* the problem. This has been said here a zillion times, and yet some people do not want to face hard reality.
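For what it's worth, the Linux page-cache effect described above is easy to demonstrate with `dd`: a plain write reports roughly the speed of RAM, while forcing a flush reports the speed of the actual array. This is only a sketch; the test file path and the 16 MB size are placeholders, and you would point the path at a file on the RAID 1 volume to measure the real thing.

```shell
# Sketch: cached vs. flushed write speed on Linux.
# /tmp/raid_bench.bin and the 16 MB size are placeholders; use a path
# on the RAID 1 volume to exercise the actual array.
TESTFILE=/tmp/raid_bench.bin

# Naive write: the data lands in the Linux page cache, so the reported
# throughput mostly measures RAM, not the disks.
dd if=/dev/zero of="$TESTFILE" bs=1M count=16 2>&1 | tail -n 1

# Same write, but conv=fdatasync makes dd flush the file to stable
# storage before reporting, so the number includes the real write to
# the (cache-less) controller.
dd if=/dev/zero of="$TESTFILE" bs=1M count=16 conv=fdatasync 2>&1 | tail -n 1
```

On a machine with free RAM, the first number is typically far higher than the second; it is the second, flushed number that is comparable to the ~13 MB/s figure seen under ESXi, since ESXi does no host-side write caching.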