

So it sounds like the big difference between shared SAS and a SAN is the fact that I will not be able to use Storage vMotion? I guess you could also take direct-attached storage and share it out over the iSCSI protocol from one box? I did mean SAS when I said DAS (all the acronyms :)). I understand that iSCSI, FC, and SAS are the interfaces, but I'm not sure what you mean by iSCSI DAS.

I'm no expert on VSAN, but if the data is shared across hosts, won't my bottleneck still be the 1 Gb network? I would look at VSAN if I didn't already have hosts with no drives. My Oracle database will need more horsepower, and right now my bottleneck is the 1 Gb links, due to the latency of the small writes. I have an EqualLogic 6100X that does great for general-purpose VMs. This is exactly the problem, and the potential solution, that I am looking at, tleavit.

I'd seriously consider VMware Virtual SAN (though more time needs to go into investigating your workload). It looks like iSCSI and DAS are both the wrong options here: with only 3 hosts, iSCSI is overkill, and DAS beyond 3 hosts will need SAS switches, which is an LSI lock-in and quite an expensive one.

My question is: are there any cons to using this in VMware? I use Veeam as my backup solution, so I do not need the snapshot and replication features of my current iSCSI box. It looks like DAS such as a Dell MD3420 will be a better fit, since it will be high speed and native SAS.
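To put rough numbers on that 1 Gb bottleneck, here is a back-of-envelope sketch in Python. All of the figures (link speeds, I/O size, round-trip latencies) are illustrative assumptions, not measurements from this environment.

```python
# Back-of-envelope: why 1 GbE hurts a small-write Oracle workload.
# All figures below are illustrative assumptions.

GBE_BYTES_PER_SEC = 1_000_000_000 / 8       # 1 GbE raw: ~125 MB/s
SAS12_BYTES_PER_SEC = 12_000_000_000 / 8    # one 12 Gb/s SAS lane: ~1.5 GB/s
IO_SIZE = 8 * 1024                          # assumed 8 KiB database write

# Bandwidth-bound ceiling (protocol overhead only makes this worse):
print("1 GbE   max 8 KiB IOPS:", int(GBE_BYTES_PER_SEC / IO_SIZE))    # ~15k
print("12G SAS max 8 KiB IOPS:", int(SAS12_BYTES_PER_SEC / IO_SIZE))  # ~183k

# Latency-bound view: a synchronous commit waits a full round trip,
# so at queue depth 1, IOPS is roughly 1 / latency.
for label, rtt_us in [("iSCSI over 1 GbE (~200 us assumed)", 200),
                      ("direct SAS (~50 us assumed)", 50)]:
    print(f"{label}: ~{1_000_000 // rtt_us} sync writes/s per stream")
```

The exact numbers will vary, but the shape is the point: for small synchronous writes, the per-I/O round trip on the 1 Gb path dominates long before raw bandwidth does.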

I am in the process of weighing my options between iSCSI and DAS. I have 1 Gb iSCSI currently and will be moving an Oracle database to VMware. My current iSCSI solution does not perform the same as my physical Oracle platform (an MD1000 on 6 Gb SAS) when baselining. I can look at 10 Gb or FC, but my costs will be substantially higher since I don't have 10 Gb or FC switches. I only have 3 hosts in my VMware environment, so support for more than that is a negligible concern.
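For baselining the small-write behavior specifically, something like the following sketch can help compare the two platforms. It assumes a Linux guest and a hypothetical test path on the datastore under test; a production baseline would normally use a dedicated tool such as fio instead.

```python
# Small synchronous-write latency baseline (illustrative sketch).
import os, statistics, time

PATH = "/mnt/testlun/latency.bin"   # hypothetical mount on the LUN under test
IO_SIZE = 8 * 1024                  # 8 KiB, in the ballpark of redo writes
COUNT = 1000

buf = os.urandom(IO_SIZE)
# O_SYNC makes each write wait for the storage to acknowledge it,
# which is roughly what a database commit does.
fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
lat_us = []
for _ in range(COUNT):
    t0 = time.perf_counter()
    os.write(fd, buf)
    lat_us.append((time.perf_counter() - t0) * 1_000_000)
os.close(fd)

lat_us.sort()
print(f"median: {statistics.median(lat_us):.0f} us, "
      f"p99: {lat_us[int(COUNT * 0.99)]:.0f} us")
```

Run the same script against the existing 1 Gb iSCSI datastore and the MD1000-backed physical box; the gap in median latency should line up with the baselining difference described above.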
If you go with 12 Gb/s SAS on the HBAs and on the MD34xx, you should have a fast server storage I/O solution, and if you add some software read (or write) cache on a local SSD (drive or PCIe), it will be even faster. Good point about reads versus writes: some of the block caching software only does reads. However, if you are mainly doing reads, then you let the writes drop down to be handled by a fast storage array that is not being bothered with lots of reads. Some options for block caching include Proximal Data (now part of Samsung), Virtunet (which lets you use a portion of an SSD (card, drive, etc.) as cache and part as a target), SanDisk, and others. As for cache software/appliances, many of those will work with any PCIe, SAS, SATA, FC, iSCSI, or other storage, while some have narrower support lists.
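To make the read-only caching idea concrete, here is a toy sketch. The `array_read`/`array_write` callables are hypothetical stand-ins for the real array path; the point is only the shape: reads may be served from the SSD-resident cache, while writes bypass it and land on the array.

```python
from collections import OrderedDict

class ReadOnlyBlockCache:
    """Toy write-around read cache: reads may hit the (SSD) cache,
    writes always go straight to the backing array."""

    def __init__(self, array_read, array_write, capacity_blocks=1024):
        self.array_read = array_read      # stand-in for the real array path
        self.array_write = array_write
        self.capacity = capacity_blocks
        self.cache = OrderedDict()        # LRU: block number -> data

    def read(self, lba):
        if lba in self.cache:             # hit: served at SSD latency
            self.cache.move_to_end(lba)
            return self.cache[lba]
        data = self.array_read(lba)       # miss: fetch from array, then cache
        self.cache[lba] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)
        return data

    def write(self, lba, data):
        # Write-around: send to the array and invalidate any stale copy,
        # so the array handles writes while the cache absorbs the reads.
        self.array_write(lba, data)
        self.cache.pop(lba, None)
```

The write-back mode discussed below differs exactly at that last step: writes are acknowledged from the server-side cache and replicated to a peer host instead of going straight to the array.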

Phil435: do you know how many hosts the MD3420 supports? I was told up to four, but I haven't seen documentation on that yet. My other option is to go with Pernix and a caching card. I have been testing this setup with some Micron 700 GB PCIe SSDs. They support write-back mode and writing to multiple hosts as well. I have tested this with write-back to one host, and the performance gain is huge. The benefit is that you have server-side caching for both reads and writes, and it supports vMotion as well, which is a plus. The only problem is that I will need a 10 Gb pipe between the hosts to support the redundant write I/O requirements.

Yes, it will support up to four; see the links in my last post, or click here to see the MD34xx deployment manual, which shows the different topologies, including up to four hosts with a single controller, or four with HA. You have lots of options for your given environment scenario; just figure out what you want or need to do, what you could do, and whether it would add or remove complexity. Ask yourself this: do you need to be able to do Storage vMotion, which requires some form of shared storage (hardware- or software-based)? If you do not need Storage vMotion or other functions that require shared storage, then you can get around things. You can also share local dedicated internal or external DAS (SAS, SATA, or PCIe) using software from VMware (VSAN), StarWind, and others, though consider whether that adds complexity. Also, with only 3 VMware hosts, besides using 10 GbE in a point-to-point triangle, you could do the same using QDR InfiniBand with IPoIB and other interconnects supported by VMware. And you have an option with the MD3000 series (depending on the specific model): with 3 VM hosts you can SAS-attach directly to the controller's SAS ports (without a switch) and run at 6 Gb/s on each of the paths.

It looks like it will come down to either the Pernix solution or the shared SAS.
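As a sanity check on the direct-attach fan-out, here is a little arithmetic sketch. The port counts are assumptions; the MD34xx deployment manual is the authority for your actual model.

```python
# Direct-attach SAS fan-out, with assumed port counts (verify against
# the MD34xx deployment manual for the specific model).

PORTS_PER_CONTROLLER = 4   # assumed host-facing SAS ports per controller
CONTROLLERS = 2            # assumed HA / dual-controller configuration

def max_direct_hosts(paths_per_host: int) -> int:
    """How many hosts can direct-attach, given the paths wanted per host."""
    total_ports = PORTS_PER_CONTROLLER * CONTROLLERS
    return total_ports // paths_per_host

print("single path per host:", max_direct_hosts(1), "hosts")
print("dual paths per host :", max_direct_hosts(2), "hosts")  # lines up with "four with HA"
```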

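And on the write-back interconnect question, a rough sizing sketch of why the replica traffic pushes toward 10 GbE. The write rate and replica count are purely assumed numbers for illustration.

```python
# Write-back cache replication sizing (illustrative assumptions only).
# In write-back mode, every acknowledged write is also copied to one or
# more peer hosts, so the inter-host network must carry the full write stream.

write_mb_s = 300   # assumed sustained write rate of the Oracle VM
replicas = 1       # assumed number of peer copies

replica_traffic_mb_s = write_mb_s * replicas
print(f"replication traffic: ~{replica_traffic_mb_s} MB/s")

for name, link_mb_s in [("1 GbE", 125), ("10 GbE", 1250)]:
    utilization = replica_traffic_mb_s / link_mb_s
    print(f"{name}: {utilization:.0%} of the link just for replica writes")
# 1 GbE would be saturated outright; 10 GbE leaves real headroom.
```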