This post continues a series comparing various operating systems as network-attached storage solutions for VMDK files in a VMware environment. Please read Part 1 and Part 2 in the series.
Test Setup
My test lab had three major components: the NAS server itself, the ESX host, and the VM running the tests on the ESX host.
NAS Server
HP ML570 G4
4x Dual-Core 3.2GHz/800MHz Processors
8GB of RAM
P400 Array Controller with 512MB of Cache
2x 72GB 15K SAS Drives - RAID 1 - For Operating System
14x 72GB 15K SAS Drives - RAID 0 - For Data
2x 1Gig-E NICs - Management Only
1x Intel 10Gig-E NIC - NFS Traffic Only
ESX Host
HP BL 460c G6
8x Cores at 2.4GHz
90GB of RAM
2x 10 Gig-E Dedicated NICs for NFS Traffic
2x 10 Gig-E Dedicated NICs for VM Traffic
2x 1 Gig-E Dedicated NICs for ESX Management Traffic
2x 72GB 15K local drives for ESX Install
VM Guest
Windows 2008 R2
2x CPU
4GB of RAM
50GB C: VMDK Drive - VMDK drive stored on Fibre Channel storage separate from the NAS
750GB E: VMDK Drive - Stored on NAS and accessed via NFS.
Running IOMeter v2006.07.27
Note on jumbo frames: Jumbo frames were enabled on the ESX host and switches; however, they were not enabled on the NAS server.
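Since a mismatched MTU can silently fall back to fragmentation, it is worth confirming jumbo frames end to end before testing. A quick check from the ESX service console, sketched with placeholder addresses (10.0.0.20 for the NAS, 10.0.0.10 for the host; 8972 bytes = 9000-byte MTU minus 28 bytes of IP/ICMP headers):

```shell
# From the ESX host: send a don't-fragment ping at near-jumbo size
# over the VMkernel NFS interface (10.0.0.20 is a placeholder address)
vmkping -d -s 8972 10.0.0.20

# The equivalent check from a Linux NAS server back toward the host
ping -M do -s 8972 10.0.0.10
```

If either command reports the message is too long, jumbo frames are not active on every hop of the path.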
Several different IOMeter profiles were used to emulate various applications. Each worker process had an I/O queue depth of 32, ensuring that IOMeter kept plenty of I/O outstanding. At this level, the workload far exceeds what a typical server would present.
IOMeter was run in the VM. The VM's C: drive was hosted on a datastore unrelated to the test. The 750GB E: drive was created on an NFS datastore hosted on the NAS gateway server. Each I/O profile was tested three times for 15 minutes per run, and the three results were averaged together.
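For reference, the NFS datastore itself can be attached from the ESX service console. The hostname, export path, and datastore label below are placeholders for this lab, not the actual values used:

```shell
# Mount the NAS export as an NFS datastore on the ESX host
esxcfg-nas -a -o nas01.lab.local -s /mnt/data/vmfs nfs-testds

# List configured NFS datastores to confirm the mount
esxcfg-nas -l
```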
The IO profiles are summarized in the table below.
Some Notes on the NAS OS
For all operating systems, the default installation and setup were used. The standard patches and updates were applied. No other changes were made except where noted, and no additional or upgraded drivers were installed.
Of course, each vendor has tweaks to improve performance. To keep the test process and the time required manageable, the defaults were used. Otherwise, significant research would have been needed to understand the best options for each OS in order to maximize its performance. That in turn opens the door to two accusations: first, that the proper procedure was not followed, invalidating the test; and second, that the test setup was purposely manipulated to favor a particular vendor. By choosing the defaults, the test implicitly relies on each vendor to set up its operating system for the best performance; in other words, it's the vendor's responsibility to configure its own OS well.
- Windows 2008 64-Bit R2
- Standard list of patches and updates
- No additional software
- Services for NFS
- Installed the HP Support Pack. Initial performance was so poor that it seemed installing the latest HP Support Pack might resolve the problem. It didn't.
- OpenFiler
- v2.3 64-Bit
- Completed the standard system update
- Standard NFS share options, except "no_root_squash" was turned on
- Used XFS file system
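For context, ESX mounts NFS datastores as root, so root squashing has to be disabled for the datastore to work at all. The OpenFiler setting above corresponds to an /etc/exports entry along these lines (the path and client subnet are placeholders):

```shell
# /etc/exports - rw access with root squashing disabled for the ESX subnet
/mnt/data/vmfs 10.0.0.0/24(rw,no_root_squash)
```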
- RedHat
- Enterprise Linux v5
- Used EXT3 file system
- NFS SYNC Option on
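The sync option tells the NFS server to commit each write to disk before acknowledging it, which is safer for VM data than async but costs latency. A sketch of the corresponding /etc/exports entry (path and subnet are placeholders):

```shell
# /etc/exports - synchronous writes, root squashing disabled for ESX
/mnt/data/vmfs 10.0.0.0/24(rw,sync,no_root_squash)
```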
- Nexenta
- Used version 3 with latest updates
- There were numerous problems with this OS, and I opted not to use it in the test. Some of the problems were:
- No native support for the Intel 10Gig-E NIC and no simple way to add the driver
- Random crashes in the middle of tests and, at times, simply during boot-up
- Significant questions regarding the longevity of this product given Oracle's lack of support for its base OS, OpenSolaris
- Suse Linux Enterprise 11 SP1
- Kernel version v2.6.32
- Used EXT3 file system
- Solaris 10
- Used the ZFS file system
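On Solaris 10 the data volume would typically be pooled and exported directly through ZFS rather than via /etc/exports. A rough sketch, with the pool, file system, and device names as placeholders:

```shell
# Create a pool on the RAID 0 logical drive and a file system for the export
zpool create datapool c1t1d0
zfs create datapool/vmfs

# Share it over NFS, granting root access to the ESX host
zfs set sharenfs=rw,root=esxhost datapool/vmfs
```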
Test Results
Test Conclusions
Windows NFS Performance was Horrible
It is stunning how poorly a major operating system like Windows 2008 did in these tests. It stands out simply for its poor performance. As an example, the next-best competitor in the File Server IOPS category was 5x better. That can't be explained away by a few simple tweaks; results that poor point to a fundamental problem with the OS's NFS implementation.
Attempts were made to improve performance by tweaking settings, installing updated drivers from HP, searching for solutions online, and so on. None of these steps were required for any of the other operating systems. Regardless, all of this additional work was to no avail: even with all the changes and updates, Windows 2008 failed to produce any noticeable improvement.
As a result of these tests, Windows 2008 should not be used as a VMware NFS datastore. Only the smallest, simplest workloads would be served adequately by this operating system.
Wide Variance in Results
Excluding Windows 2008's poor results, there was still a wide variance among the remaining operating systems. The expectation going into the test was that the differences would be small: three of the five operating systems use a modern version of Linux, so it seemed likely that the Linux variants would provide similar results.
However, there was a wide discrepancy even among the Linux operating systems. In several cases, OpenFiler and Suse put up performance numbers that were DOUBLE RedHat's. A difference of a few percentage points would be reasonable, but double the performance is an unusual result.
Again, steps were taken to tweak RedHat for better performance; again, none of the changes or updates had any effect.
Openfiler a Good Open Source Option
Openfiler had great results. It was consistently one of the better Linux operating systems, and in the Exchange and File Server tests it beat all the other Linux variants.
Openfiler and Suse were virtually tied overall, with Openfiler winning Exchange and File Server and Suse winning Web Server and SQL.
If you like open source software, Openfiler should be a strong contender for your NAS gateway.
Solaris Dominates
Solaris far and away crushed the competition. It posted the best results in every test and in some cases beat the other operating systems by a wide margin. If you don't mind using Solaris and dealing with Oracle, it's certainly the NAS gateway of choice.
Some of these great results were probably due to the ZFS file system. ZFS is simply an incredible file system, which I believe is currently the best commercially available file system on the planet. It offers effectively unlimited volume sizes, deduplication, unlimited snapshots with no write penalty, huge read and write caches, and more. It's too bad there isn't a version that runs on Linux; it's better than anything Linux has, including the upcoming btrfs.
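To make the snapshot claims concrete, here is roughly what the workflow looks like in ZFS (pool and file system names are placeholders):

```shell
# Snapshots are instant and copy no data; space is consumed only as blocks diverge
zfs snapshot datapool/vmfs@before-test
zfs list -t snapshot

# Roll the file system back to the snapshot's point in time
zfs rollback datapool/vmfs@before-test
```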