Friday 10 December 2010

HP DL360 G5 very slow hard drive performance

One of our servers became free recently (an HP ProLiant DL360 G5 with two dual-core 3GHz processors, 9GB RAM and four 147GB 15k rpm drives in RAID 0+1 - or RAID 10, I'm never sure of the difference). Nice fast server, I thought.

So I install VMware ESXi 4.1 and start testing. Abysmal performance on the hard drives! After much testing I work out that although read speeds are OK, write is useless (26MB/s read, <1MB/s write). I was expecting double that on the read speeds and far more on the write. I try a cheap iSCSI setup with FreeNAS and get ~12MB/s read and write, so the server itself is fine - the problem is the local storage. I'm using Iometer for this testing, a Windows application from 2006 which takes a bit of setting up but is free and very powerful once you figure out how it works.
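
If you don't want to set Iometer up, something like this rough Python sketch gives a ballpark figure for the same sort of thing - it's not the Iometer workload itself, and the file path and sizes are just examples:

```python
# Rough sequential write/read throughput check - a simplified stand-in for the
# kind of number Iometer reports, not the actual Iometer workload.
import os, time

TEST_FILE = "testfile.bin"   # hypothetical file on the disk under test
BLOCK = 4 * 1024             # 4k blocks
TOTAL = 50 * 1024 * 1024     # 50MB test file

def write_test():
    data = os.urandom(BLOCK)
    start = time.time()
    with open(TEST_FILE, "wb") as f:
        for _ in range(TOTAL // BLOCK):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())          # push everything out of the OS write cache
    return TOTAL / (time.time() - start) / (1024 * 1024)

def read_test():
    start = time.time()
    with open(TEST_FILE, "rb") as f:
        while f.read(BLOCK):
            pass
    return TOTAL / (time.time() - start) / (1024 * 1024)

if __name__ == "__main__":
    print(f"write: {write_test():.1f} MB/s")
    print(f"read:  {read_test():.1f} MB/s")   # note: inflated by the OS read cache
    os.remove(TEST_FILE)
```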

Eventually, after much testing, I figure out that although this server has a write cache, it does not have a backup battery for that cache. VMware detects the missing battery and decides not to use the cache, since losing cached writes in a power failure would corrupt data. I've just ordered HP parts 398648-001 (battery for the P400i RAID controller cache) and 409125-001 (the power cable for that battery), which should get this server back up to speed. The battery was substituted with part 381573-001, which is compatible and replaces 398648-001.
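
A quick way to confirm what the controller thinks of its cache and battery - a minimal sketch, assuming HP's hpacucli utility is available on the host (the exact output wording varies between firmware versions):

```python
# Check Smart Array cache and battery status via HP's hpacucli utility.
# Assumes hpacucli is installed and on the PATH.
import subprocess

def cache_battery_status():
    # "ctrl all show status" reports lines such as "Cache Status: OK" and
    # "Battery/Capacitor Status: OK" (or a failure when no BBWC is fitted).
    out = subprocess.run(
        ["hpacucli", "ctrl", "all", "show", "status"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        line = line.strip()
        if line.startswith(("Cache Status", "Battery/Capacitor Status")):
            print(line)

if __name__ == "__main__":
    cache_battery_status()
```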

I had some trouble finding where to buy the parts but eventually came across www.chilternitparts.com who seem to have lots of parts for HP and Dell systems.

UPDATE:
The battery and cable arrived. Installed with no problems (there is a handy guide printed on the inside of the server cover) and performance has increased greatly. The battery is still charging four hours after installation, but VMware has obviously decided to trust it, as write speeds have gone from 0.63MB/s to 23.49MB/s (Iometer settings: 1 worker, 50MB file on the C: drive, 4k block size, 0% read, 0% random, left to run for about 5 minutes).
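
For a sense of scale, at a 4k block size those figures convert directly into I/O operations per second - some quick back-of-the-envelope arithmetic:

```python
# Convert the quoted 4k write throughput into IOPS.
BLOCK = 4 * 1024  # 4k block size used in the Iometer run

for label, mb_per_s in [("before battery", 0.63), ("after battery", 23.49)]:
    iops = mb_per_s * 1024 * 1024 / BLOCK
    print(f"{label}: {mb_per_s} MB/s ≈ {iops:,.0f} IOPS")

# before battery: 0.63 MB/s ≈ 161 IOPS
# after battery: 23.49 MB/s ≈ 6,013 IOPS
```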

My domain controller (2003 Small Business Server on a PowerEdge 2600) currently gets 30MB/s during backups - normally it gets nowhere near that in daily use. However, the tests I have been doing are deliberately hard on the throughput. By making the tests easier (32k block size instead of 4k, a 32-deep job queue, and a 10GB test file to make sure I outran the cache and measured real sustained throughput) I get an easy 60MB/s write speed and 75MB/s read speed. Writes using the cache peaked at 125MB/s, and when I used a smaller file to read pure cache I was getting a rock-solid 250MB/s. Tests over - it's now time to start thinking about actually getting the domain controller virtualised. Probably an overnight job to do before Christmas. Luckily, with another IT guy, one of us can do the 6pm start of the cold clone and the other can check the clone thoroughly at 6am and bring it live.
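
The 10GB test file matters because the P400i cache is only a few hundred megabytes and gets blown through almost immediately; a rough illustration, assuming a 256MB cache module (an assumption - I haven't checked which module this server has):

```python
# Why a 10GB test file defeats the controller cache.
# Assumes a 256MB P400i cache module - an assumption, not a checked spec.
CACHE_MB = 256
TEST_FILE_MB = 10 * 1024          # the 10GB test file
SUSTAINED_WRITE_MBPS = 60         # measured sustained write speed
CACHE_WRITE_MBPS = 125            # measured cached write peak

seconds_to_fill_cache = CACHE_MB / CACHE_WRITE_MBPS
cache_fraction = CACHE_MB / TEST_FILE_MB

print(f"cache full after ~{seconds_to_fill_cache:.1f} s at the cached peak")
print(f"cache can absorb only {cache_fraction:.1%} of the 10GB test,"
      f" so the run settles at ~{SUSTAINED_WRITE_MBPS} MB/s array speed")
```
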
UPDATE: Nope, that did not work. The cold clone over the network took far longer than expected (17 hours for ~115GB of files across 3 partitions totalling ~260GB), partly because we were resizing the disks and partly because it is just a known slow process. Plan B is to find a long weekend; we might have to delay as we don't want to do it during our busy period (Jan, Feb, Mar).
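
For reference, that clone works out to a pretty dismal effective transfer rate - rough arithmetic from the 17 hours and ~115GB figures above:

```python
# Effective throughput of the cold clone: ~115GB of files in 17 hours.
DATA_MB = 115 * 1024
HOURS = 17

mb_per_s = DATA_MB / (HOURS * 3600)
print(f"~{mb_per_s:.1f} MB/s effective")   # ≈ 1.9 MB/s
```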

3 comments:

  1. Hello,

    Thanks for your post.

    I'm curious - did you find a solution?

    Thank you in advance.

    1. Hi Anon,
      The slow write speed was fixed with the battery. The speed of the cold clone was mainly poor because of the old source server, I believe. I just managed to agree a 24-hour maintenance window. Once on the VMware server the domain controller was nice and fast and ran well for 3 years until we upgraded to Server 2012.

  2. It's rare for me to find something on the web that's as entertaining and intriguing as what you have got here. Your page is sweet, your graphics are great, and what's more, you use sources that are relevant to what you're saying. You are certainly one in a million, well done!
