Alright, hey, I'm Steve Johnson with LSI; I work in our CTO office as a technologist. What we're showing here is a full rack-scale demonstration of 12 Gb/s SAS, with everything connected through a new 12 Gb/s SAS switch. This is a prototype demonstration, not a product yet. What we have here are 20 servers connected to 10 JBODs, all running 12 Gb/s SAS and connected to a prototype 12 Gb/s SAS switch. The SAS switch is a 40-port, x4 SAS switch, so it's effectively 160 gigabytes of throughput, and it's connected with x8 links to each of these JBODs and x4 links to each of the servers. Each server has a 12 Gb/s SAS MegaRAID controller in it, and they're all connected to the switch, and then through the switch to the JBODs.

What this shows is direct-attached storage with scalable SAN attributes: every server can access any drive, all at the same time, with simultaneous access. What this enables is different applications such as real-time analytics running Hadoop, where every node can access all of the storage simultaneously, definitely improving performance, instead of driving things over Ethernet with static configurations where data has to be pushed to the node. It also enables things like VMware mobility: because the storage is shared, if a node goes down or you want to do load balancing, all the storage is centralized and we can move the virtual disks between nodes as needed.

In this particular demonstration, what's blinking all these lights is a technology that we're working on now called DHIRE, a MegaRAID product. Basically what it does is take the 220-some drives, pull one-gigabyte data chunks from each drive, and then from that giant pool of storage each one of the MegaRAID controllers can create virtual disks. In this particular case we've got RAID 5 and RAID 6 volumes running, and what's nice about that is what happens if any particular disk dies.
RAID 5 and RAID 6 rebuild times are spread out across all the nodes and all the drives. Typically, a 4-terabyte RAID 5 or RAID 6 rebuild today may take one or two weeks, depending on what workloads are running; in this case we can get that down to about 20 minutes. What's also really nice about RAID 5 and RAID 6 is the physical capacity in relation to the virtual capacity, which is probably about 75 or 80 percent, depending on what you're doing. So this is a really nice setup: you get all the attributes and performance characteristics of direct-attached storage, DAS, and you get many of the SAN attributes associated with other SANs.

Today a lot of people build servers disaggregated from the storage, and in that case you've got to buy SAS cables and connect them all up. In this case, the only thing you need to do is put a SAS switch in the middle and add a few more cables, and that gives you all of these SAN capabilities. So SAS is a very high-performance, cost-effective solution compared to other SANs that are out there, like InfiniBand or Fibre Channel, which are very expensive. This thing is very easy to manage; it's going to be using common MegaRAID. And if you want to use this setup without MegaRAID, with just standard HBA controllers, then you would run open-source host software such as Gluster or Swift on top. There's another demonstration over there that Nebula is doing, a smaller version of this running 6 Gb/s, that will kind of show you where we're looking at driving data centers and cloud storage moving forward with this.
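The chunk-pooling, distributed-rebuild, and capacity points above can be sketched in a few lines. This is a minimal illustration, not LSI's implementation: the round-robin chunk layout, the 50 MB/s effective rebuild rate, the 8+2 RAID 6 geometry, and the names `build_pool` and `carve_virtual_disk` are all assumptions for the sake of the sketch.

```python
# Sketch of a pooled, declustered RAID layout: every drive is cut into
# 1 GB chunks, the chunks form one big pool, and each virtual disk draws
# its chunks from as many drives as possible.

CHUNK_GB = 1

def build_pool(num_drives, drive_tb):
    """Map drive id -> list of free 1 GB chunk ids on that drive."""
    chunks_per_drive = drive_tb * 1000 // CHUNK_GB  # decimal TB, for simplicity
    return {d: list(range(chunks_per_drive)) for d in range(num_drives)}

def carve_virtual_disk(pool, size_gb):
    """Allocate size_gb worth of chunks round-robin across all drives,
    so the virtual disk touches as many spindles as possible."""
    allocation = []                      # (drive, chunk) pairs
    drives = list(pool)
    i = 0
    while len(allocation) < size_gb:
        d = drives[i % len(drives)]
        if pool[d]:                      # skip drives that are already full
            allocation.append((d, pool[d].pop()))
        i += 1
    return allocation

pool = build_pool(num_drives=220, drive_tb=4)
vd = carve_virtual_disk(pool, size_gb=2000)
drives_touched = len({d for d, _ in vd})  # a 2 TB virtual disk spans every drive

# Why rebuilds get fast: a classic RAID group rebuilds onto a single spare,
# bottlenecked by one drive's write rate; declustering spreads the rebuild
# work across every surviving drive.
REBUILD_MBPS = 50                               # assumed effective per-drive rate
classic_hours = 4 * 1e6 / REBUILD_MBPS / 3600   # 4 TB written to one spare
declustered_minutes = classic_hours * 60 / 219  # shared by 219 survivors

# Capacity efficiency: an 8+2 RAID 6 group keeps 8 of every 10 chunks as
# data, matching the roughly 75-80 percent figure quoted above.
raid6_efficiency = 8 / 10
```

With these assumed numbers the single-spare rebuild takes on the order of a day of pure writing (weeks under real workloads, per the demo), while the declustered rebuild drops to minutes, the same order of magnitude as the 20-minute figure quoted above.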
