View Full Version : iscsi for vm's



Rick B
07-Mar-2012, 14:43
I have been doing some reading on iSCSI for hosting domUs. For a couple of
years I have been hosting the VMs as file-based images on OCFS2, and I am now
trying to redesign this to use iSCSI-backed raw disks, I suppose you would put
it. What I need to know is whether anyone has real-world numbers comparing the
two. I would migrate all the VMs in that direction if it will improve
filesystem performance for the domUs.

Thanks for any advice.

Magic31
13-Mar-2012, 09:09
What I need to know is whether anyone has real-world numbers comparing the two.

No real numbers that I can give you. Back in the SLES 10.1 / 10.2 days I did a lot of comparison between file-based VMs on OCFS2 and plain LVM / raw disk access (over iSCSI as well as Fibre Channel) for the domUs, and I found OCFS2 to be slower and less stable. There has been plenty of development on OCFS2 since then, but I still prefer setting up the domU pointing to an LVM volume for the OS disk (so LVM snapshots can be made of the OS volume) and phy access to all the data disks (as these are on a SAN in our case, we can then leverage SAN snapshots for the data volumes).
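To give an idea of the layout I mean, a domU config along these lines (all names here are made up for illustration, not our actual setup):

```
# /etc/xen/vm/example-domU  -- hypothetical names, just a sketch of the layout
name   = "example-domU"
memory = 2048
vcpus  = 2
disk   = [
    # OS disk on a local/clustered LVM volume -> LVM snapshots possible
    'phy:/dev/vg_xen/example-os,xvda,w',
    # data disk passed straight through from the SAN LUN -> SAN snapshots
    'phy:/dev/disk/by-id/scsi-<wwid-of-data-lun>,xvdb,w',
]
```

The point being that both disks are phy: block devices from the dom0's point of view, no image files or cluster filesystem in the data path.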

As we don't use the HAE pack in SLES 11, we don't have cLVM. So, to make sure that a domU does not get started twice across our Xen cluster, we use the Xen dom0 locking mechanism, where each Xen host writes to a shared NFS share. Management of the clusters is mainly done through the open source Convirt solution (which also helps prevent accidental dual start).
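If I remember right, the xend side of that is just two settings in /etc/xen/xend-config.sxp on each host (the path below is an example; point it at a directory on the shared NFS mount):

```
# /etc/xen/xend-config.sxp -- domain locking as shipped with SUSE's xend
(xend-domain-lock yes)
(xend-domain-lock-path /srv/xen-locks)
```

With that in place, a second host trying to start the same domU fails because the lock file is already held.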

That has been working very well for us for the last two/three years.

-Willem

Rick B
21-Mar-2012, 18:59
Magic31 wrote:

> [...] I still prefer setting up the domU pointing to an LVM volume
> for the OS disk [...] and phy access to all the datadisks [...]
>
> As we don't use the HAE pack in SLES 11, we don't have cLVM. So to
> make sure that one domU does not get loaded twice across our Xen
> cluster, we use the Xen dom0 locking mechanism where each Xen host
> writes to a shared NFS share.
>
> -Willem

Hi Willem,

Can you elaborate a little more, please? I have the new iSCSI SAN in place
and have been trying to use iscsi:iqn,lun to attach, but virt-manager does not
like the ,lun at the end; it only accesses the first LUN when no LUN is
specified. I guess I need to know if anyone else is actually doing it this way.
This is SLES 11 SP1.

Does the tracking of which node a domU is running on have to be on an NFS
volume?

Thanks

jmozdzen
22-Mar-2012, 12:33
Rick,

to answer the NFS part of your question:

> > As we don't use the HAE pack in SLES 11, we don't have cLVM. So to
> > make sure that one domU does not get loaded twice across our Xen
> > cluster, we use the Xen dom0 locking mechanism where each Xen host
> > writes to a shared NFS share.
> Does that have to be an nfs volume for the tracking of the node it is sitting on?

Nope, all you need is shared access to that "file space" from all Dom0s, with working locking support.

We're using SLES 11 SP1 + HAE with OCFS2 via FC for /etc/xen/vm on (currently) only two Dom0s, and have a subdirectory /etc/xen/vm/vm_locks for all those xend DomU lock files. While it generally works, we do see some severe hiccups that we believe come from somewhere in the OCFS2 layer (which seems not to cope well with all the locking going on). We had that part on NFS for some time (without any problems) but switched back to OCFS2 for reasons beyond the scope of this discussion.
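Side note: those xend lock files are just plain advisory locks, so a quick way to sanity-check whether a given shared mount honours locking is a few lines of Python along these lines (the path here is made up; in real use you'd point it at the shared directory and run it from two hosts):

```python
import fcntl
import os
import tempfile

def try_lock(path):
    """Try to take an exclusive, non-blocking flock on path.
    Returns the open fd if we got the lock, None if it's already held."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd
    except BlockingIOError:
        os.close(fd)
        return None

# In real use this would live on the shared NFS/OCFS2 mount
# (e.g. under /etc/xen/vm/vm_locks); a temp dir is only for the demo.
lockfile = os.path.join(tempfile.mkdtemp(), "demo-domU.lock")

first = try_lock(lockfile)   # first taker gets the lock
second = try_lock(lockfile)  # a second descriptor is refused while it's held

print(first is not None, second is None)
```

On a mount where this always "succeeds" from both sides at once, the dual-start protection is silently doing nothing.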

Regards,
Jens

Rick B
23-Mar-2012, 15:05
Magic31 wrote:

> [...] Management of the clusters is mainly done through the open
> source Convirt solution (which also helps prevent accidental dual
> start). [...]
>
> -Willem

Willem, do you know if Convirt will work with SLES 11 SP2?