
Fastest Write Speed for Server RAID Array


CVByrne

Recommended Posts

Hi Guys,

 

I've got a question for the guys who work in IT here.

 

We have a peak write speed of 700 MB/s with the model runs we're doing. It hits that speed often and writes out data for up to 15 hours at a time, so we'd want storage with a max speed of, say, 800 MB/s.
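For scale, if it somehow held that peak for the whole run (it won't, 700 is only the peak), this is the absolute ceiling on output per run:

# Upper bound on data written per run, assuming (unrealistically) that the
# 700 MB/s peak is sustained for the full 15 hours.
peak_mb_s = 700
hours = 15

total_tb = peak_mb_s * 3600 * hours / 1e6   # MB -> TB (decimal)
print(f"Absolute worst case per run: ~{total_tb:.1f} TB")   # ~37.8 TB

The real figure is much lower since 700 is only the peak, but it shows why we have to copy data off to the fileserver during and after runs.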

 

We've tested this on 6 SSDs in RAID 1. It's a very expensive solution, as we then have to copy the output to a fileserver for storage and clean down the SSD drives.

 

What we're wondering is whether it's possible to get such write speeds with HDDs in the fastest RAID setup, with the fastest disks. If it is, we can save money by going down that route.

 

 

Cheers

 

Link to comment
Share on other sites

Well, we currently have a four-year-old HP solution called Ibrix which is so bad we lose 15-20% of all model runs due to failures in reading files from it. It's based around proprietary software which allows very high write speeds on HDDs.

 

I got one of the guys from High Performance Computing to measure our max output speed, which is 700 MB/s. That's the constraint from bus speed etc.; basically it's the max rate the node groups will write data out at. So now I know what we require in I/O speed.

 

The SSD setup we got from HP is in my opinion too expensive for our needs: it's 6 enterprise-level disks of 400 GB each set up in a RAID array. They cost us £40k in total, and the idea is to buy 3 more of the same setup for each Computing Node Group. We're going to double the computing power and go for 8 nodes per group and 4 groups in 2 locations.

 

Due to internal politics here which I don't understand, IT Ops seem to want to persevere with this Ibrix. But a 15-20% failure rate to me means doubling the amount of lost compute time when we double the grid size. It makes no sense. Their argument against the SSD solution is the cost.

 

In the end the board will decide, but I want to know what I'm not being told by HP or IT Ops. To me there has to be a solution to get 4 x 2 TB of storage on each of the 8 node groups, with a max throughput of 700 MB/s, that isn't going to cost £120k.

 

Is there something I'm missing here? There has to be a middle ground between these crazy-fast SSDs and sticking with this junk Ibrix solution.

Link to comment
Share on other sites

I think we have to stick with HP for this and go through IT Ops on the purchase. But I have a feeling they are against this because, if this Ibrix solution lasts one more year, they will have total control over the new storage solution for the next 5 years.

 

 

Simple question: if I get 6 HDDs and put them in a RAID 5 layout, can I get speeds of 700 MB/s?

 

Is it possible with RAID 0?
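My own back-of-envelope, with a guessed per-disk figure, assuming purely sequential writes and that the controller isn't the bottleneck (so very much an ideal upper bound):

# Per-disk sequential write is a guess -- a 15k SAS drive typically manages
# somewhere around 100-150 MB/s sustained. All figures are ideal upper bounds.
per_disk_mb_s = 120
n_disks = 6

raid0 = n_disks * per_disk_mb_s            # every disk carries data
raid5 = (n_disks - 1) * per_disk_mb_s      # one disk's worth lost to parity
                                           # (assumes ideal full-stripe writes)
raid10 = (n_disks // 2) * per_disk_mb_s    # mirrored pairs, striped

print(f"RAID 0:  ~{raid0} MB/s")           # ~720
print(f"RAID 5:  ~{raid5} MB/s")           # ~600
print(f"RAID 10: ~{raid10} MB/s")          # ~360

So even in the ideal case, 6 spindles in RAID 5 look marginal for 700 MB/s, and RAID 0 only just gets there with no redundancy at all. Am I thinking about this the right way?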

Link to comment
Share on other sites

http://h18000.www1.hp.com/products/quickspecs/13714_div/13714_div.pdf

 

Those are the parts we've got: 6 x 400 GB in RAID 5.

 

 

Now the IT Ops guy is saying these are the performance figures:

 

 

 

 

                 Storage Blade (D2200) - SSD    Storage Blade (D2200) - SAS HDD
Read (MB/s)      410                            600*
Read (IO/s)      40000                          11000*
Write (MB/s)     180                            600*
Write (IO/s)     14500                          11000*

* Max external shared

 

 

I think these are clearly bullshit. 180 MB/s on a D2200!?

 

My laptop can write faster than 180 MB/s.

Link to comment
Share on other sites

The throughput you will get depends on so many factors. How is it connected? What protocol is in use? Which crappy vendor makes the kit? What is the read and write bandwidth of the individual disks? How big is the controller's cache? Does it have a flash tier? Are the controllers properly configured, or is this a JBOD?

 

If you have money to spend, contact a vendor and tell them what performance you want and then see if it fits your budget.

 

If you don't, look at the cache configuration options on your controller. Make sure it's forced to write-back with adaptive read-ahead, and use the caches on the HDDs themselves (if you are paying for HP you'll have multiply redundant power). Try the different RAID configurations. You will get the best results with striping if you use smaller disks and play with the stripe width. I would say the manufacturer can help with this, but your manufacturer is HP, so it'll be quicker to just do the work yourself.
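If you want to sanity-check any configuration yourself rather than trusting vendor figures, even something rough will do. This is only a sketch: the path is a placeholder, a proper tool like fio will give better numbers, and you need to write well past any controller cache to see the sustained rate.

import os
import time

# Placeholder path on the array under test -- change to wherever it's mounted.
TEST_FILE = "/mnt/array/throughput_test.bin"
BLOCK = b"\0" * (4 * 1024 * 1024)   # 4 MiB per write
TOTAL_GIB = 64                      # write far more than any controller cache

def sequential_write_mib_s(path, total_gib):
    n_blocks = total_gib * 256      # 256 x 4 MiB = 1 GiB
    start = time.time()
    with open(path, "wb", buffering=0) as f:
        for _ in range(n_blocks):
            f.write(BLOCK)
        os.fsync(f.fileno())        # make sure the data has actually hit the array
    elapsed = time.time() - start
    return total_gib * 1024 / elapsed

if __name__ == "__main__":
    rate = sequential_write_mib_s(TEST_FILE, TOTAL_GIB)
    print(f"~{rate:.0f} MiB/s sustained sequential write")

Run it a few times with different RAID levels and stripe widths and compare the numbers.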

 

How do you know your bottleneck is just disk I/O? Observational evidence or calculation?

Link to comment
Share on other sites

Cheers. The bottleneck is at 700 MB/s: we tested it on both the Ibrix and the SSD solution, which have different (and higher) max write speeds, yet both hit that 700 MB/s wall.

I want to get as cheap a solution as possible as an interim solution for the year, then allow a total system solution to be implemented when we know exactly what the requirements are from an optimised model, and get the hardware that fits.

Link to comment
Share on other sites

You won't get anywhere near SSD performance with spinning disks once you exhaust the write cache. That said, lots of SANs have very large write caches now, such as the IBM v7k with 32 GB per controller. The choice is a simple cost vs performance one. If you need fast write performance with datasets which exhaust any caching you have, you have to pay for the SSDs. If you reach a bandwidth bottleneck then you need to be looking at your transport. You can get 16Gb FC now, but lots of servers still only have 4Gb HBAs, or mostly 8Gb.
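For ballpark numbers on the transport side (nominal per-direction throughput for each link speed, ignoring protocol overhead, so rough figures only, not measurements):

# Nominal usable throughput per FC link, per direction, ignoring protocol
# overhead. Ballpark figures only.
fc_mb_s = {"4Gb FC": 400, "8Gb FC": 800, "16Gb FC": 1600}
required = 700  # MB/s the model runs push at peak

for link, mb_s in fc_mb_s.items():
    verdict = "enough" if mb_s >= required else "not enough"
    print(f"{link}: ~{mb_s} MB/s -> {verdict} for a {required} MB/s stream")

So a single 4Gb HBA can't carry that stream on its own, one 8Gb link is marginal, and 16Gb (or multipathed 8Gb) has headroom.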

Haven't had a chance to read this thread properly, but if you can make use of it, storage virtualization is really nice. With IBM's SVC, and to a lesser extent the 3PAR stuff, it puts a virtualization layer between the arrays and the vvols and moves cold data down the tiers. You normally have 3 tiers: one SSD, one 15k SAS, one nearline. It moves all of the data to the highest tier it can, while watching for cold data and dropping it down the tiers.
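If it helps to picture it, the placement logic is roughly this (a toy sketch of the idea only, not how SVC/Easy Tier is actually implemented; the thresholds are made up):

import time

# Toy illustration of automated tiering: hot extents live on the fast tier,
# cold extents get demoted. Thresholds are arbitrary for the sketch.
COLD_AFTER_S = 7 * 24 * 3600        # "cold" after a week idle (made up)

def place_extent(last_access_ts, now=None):
    idle = (now or time.time()) - last_access_ts
    if idle < COLD_AFTER_S:
        return "ssd"                # hottest data on the SSD tier
    if idle < 4 * COLD_AFTER_S:
        return "15k_sas"            # warm data on fast spinning disk
    return "nearline"               # cold data on cheap capacity disk

# Example: something touched 3 days ago still sits on the SSD tier.
print(place_extent(time.time() - 3 * 24 * 3600))   # -> ssd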

Link to comment
Share on other sites


A cheap solution is a single IBM v7k with SSDs, a Brocade FC switch, and some 8Gb HBAs. About 40k all in for a decent-sized SSD tier. I've hit 1 GB/s sustained sequential write on one of those.

Link to comment
Share on other sites

My storage experience is pretty blinkered so I don't know a lot outside IBM and HP. Just looked at that Nimble though and it looks good :)

I think it's pretty amazing, especially considering the price. It's like NetApp from 6 years ago.

Link to comment
Share on other sites
