
Thread: OES11 virtual machines

  1. #1
    Rick B NNTP User

    OES11 virtual machines

    I am trying to get my head around this one using OES servers with NCS. I
    have done this using iSCSI, which is easy, but I am not quite sure how you
    do it with FC. The layout would be 3 hosts connected to the FC switch and
    an FC SAN connected to the switch. I will be running the VMs off a portion
    of the SAN, and they will be set up with HA between the servers, with OCFS2
    managing which nodes they run on. My confusion comes from how to get these
    VMs to then see the clustered NSS volumes on the SAN. I have tried to
    speculate on this but so far have not reached an understanding. These will
    be running on Xen hosted on SLES 11 SP2.

    Thanks
    Rick

  2. #2

    Re: OES11 virtual machines

    Quote Originally Posted by Rick B View Post
    I am trying to get my head around this one using OES servers with NCS. I
    have done this using iSCSI, which is easy, but I am not quite sure how you
    do it with FC. The layout would be 3 hosts connected to the FC switch and
    an FC SAN connected to the switch. I will be running the VMs off a portion
    of the SAN, and they will be set up with HA between the servers, with OCFS2
    managing which nodes they run on. My confusion comes from how to get these
    VMs to then see the clustered NSS volumes on the SAN. I have tried to
    speculate on this but so far have not reached an understanding. These will
    be running on Xen hosted on SLES 11 SP2.

    Thanks
    Rick
    Hi Rick,

    I'm not sure what you are looking for. If you have a setup with 3 Xen hosts that all have a simultaneous connection to the SAN storage, the OES VMs can also access those devices/partitions/disks. Adding an exclamation mark (!) to the shared disks defined in the VM configuration files, followed by deleting and re-importing the VM definitions into Xen with the 'xm new [vm name]' command, should allow the VMs to access the shared disks simultaneously.
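
    As a rough sketch of what that looks like (the VM name, config path and device IDs here are made up - adjust them to your own environment), the disk entry and the re-import steps would be something like:
    Code:
    # /etc/xen/vm/oes-node1 -- hypothetical domU config excerpt
    # the '!' appended to the mode marks the disk as shareable between domUs
    disk=[ 'phy:/dev/disk/by-id/scsi-3600000e00d000000,xvda,w',
           'phy:/dev/disk/by-id/scsi-3600000e00d000001,xvdb,w!', ]

    # re-import the changed definition into xend's managed configuration
    xm delete oes-node1
    xm new -f /etc/xen/vm/oes-node1
    xm start oes-node1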

    I have not run OES NCS domU's on Xen before, but I have done so successfully with NetWare-based clusters a couple of times in the past. (For NetWare VMs it also involved setting "set XenBlk Barriers=0" on each NetWare domU; I don't think an equivalent is needed for OES domU's.)

    Some more info on what you are finding to be an issue might give a better response.

    Cheers,
    Willem
    Knowledge Partner (voluntary sysop)
    ---
    If you find a post helpful and are logged into the web interface,
    please show your appreciation and click on the star below it. Thanks!

  3. #3
    Rick B NNTP User

    Re: OES11 virtual machines

    Magic31 wrote:

    >
    > Rick B;4879 Wrote:
    >> I am trying to get my head around this one using OES servers with NCS. I
    >> have done this using iscsi which is easy. I am not quite sure how you
    >> do
    >> this with FC. The layout would be 3 hosts connected to the fc switch
    >> and a
    >> fc san connected to the switch. I will be running the vm's off of a
    >> portion
    >> of the san and they will be setup with HA between the servers with
    >> ocsfs for
    >> managing which nodes they will be run on. My confusion comes from how
    >> to get
    >> these vm's than to see the clustered nss volumes on the san. I have
    >> tried to
    >> speculate on this but so far have not gotten an understanding. These
    >> will be
    >> running on Xen hosted on Sles 11 sp2.
    >>
    >> Thanks
    >> Rick

    >
    > Hi Rick,
    >
    > I'm not sure what you are looking for. If you have setup with 3 Xen
    > hosts that all have a simultaneous connection to the SAN stores, the OES
    > vm's can also access devices/partitions/disks. Adding an exclamation (!)
    > to the shared disks defined in the vm configuration files followed by
    > deleting and reimporting the vm definitions into Xen with the 'xm new
    > [vm name]' command, should allow the vm's to access the shared disks
    > simultaneously.
    >
    > I have not run OES NCS domU's on Xen before, but have done so
    > successfully with NetWare based clusters a couple of times in the past.
    > (for NetWare vm's it involved also setting the "set XenBlk Barriers=0"
    > on each NetWare domU, I don't think such an equivalent is needed for OES
    > domU's).
    >
    > Some more info on what you are finding to be an issue might give a
    > better response.
    >
    > Cheers,
    > Willem
    >
    >


    Sorry for not following up sooner, but I have been very swamped. I am in the
    middle of this now, trying to get virtualized servers working using NPIV.
    Does anyone know the correct settings to put in the disk field? I have tried
    using the setting npiv:wwn,lun and can't connect. Is there somewhere that
    explains how to do this in an understandable way? There is not much
    documentation on this. I am using a QLogic 4Gb FC switch and QLogic HBAs. I
    have a Fujitsu DX60 SAN.

    Thanks


  4. Re: OES11 virtual machines

    Hi Rick,

    Quote Originally Posted by Rick B View Post
    [...] I am in the middle of this now, trying to get virtualized servers
    working using NPIV. Does anyone know the correct settings to put in the
    disk field? I have tried using the setting npiv:wwn,lun and can't connect.
    Is there somewhere that explains how to do this in an understandable way? [...]
    Thanks
    the syntax is
    Code:
    disk=[ 'npiv:<fabric>-<wwpn>-<wwnn>-<wwpn-san>-<lun>,xvda,w', ]
    Concerning WWNN and WWPN, I once looked up the value for "locally defined numbers" and decided that 29001100aaXXXXXX (with XXXXXX being any six-digit hex number unique to your fabric) is a good range for WWPNs and 29011100aaXXXXXX the one for WWNNs. (I use NPIV WWPN and WWNN in pairs, so those are unique per DomU.)
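
    To make that concrete, a filled-in line could look like the following - all WWNs and the LUN number are hypothetical placeholders, use your own fabric WWN, a vport WWPN/WWNN pair from your chosen range and the target WWPN/LUN as presented by your SAN:
    Code:
    # fabric WWN - vport WWPN - vport WWNN - target (SAN) WWPN - LUN
    disk=[ 'npiv:1000000533aabbcc-29001100aa000001-29011100aa000001-500000e0d0a1b2c3-1,xvda,w', ]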

    The doc is in /etc/xen/scripts/block-npiv:
    Code:
    #!/bin/bash
    
    # Usage: block-npiv [add npiv | remove dev]
    
    dir=$(dirname "$0")
    . "$dir/block-npiv-common.sh"
    . "$dir/block-common.sh"
    
    #set -x
    #command=$1
    
    case "$command" in
      add)
        # Params is one big arg, with fields separated by hyphens:
        #   FABRIC-VPWWPN-VPWWNN-TGTWWPN-LUN#
        #    arg 2 - Fabric Name
        #    arg 3 - VPORT's WWPN
        #    arg 4 - VPORT's WWNN
        #    arg 5 - Target's WWPN
        #    arg 6 - LUN # on Target
        # no wwn contains a leading 0x - it is a 16 character hex value
        # You may want to optionally pick a specific adapter ?
    A word of warning: from bad experience, I suggest taking extra precautions so that only a single node will ever write to a specific LUN, for instance by setting up DomU locking within xend... having the DomU started on more than one node will get you into trouble for sure...
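
    If I remember the SLES 11 xend options correctly, that locking is switched on in /etc/xen/xend-config.sxp roughly as follows (the lock path is only an example - ideally it sits on storage that all hosts can see, so the lock really is cluster-wide):
    Code:
    # /etc/xen/xend-config.sxp -- have xend create a per-domU lock file so a
    # domU cannot be started on a second host while it is already running
    (xend-domain-lock yes)
    (xend-domain-lock-path /var/lib/xen/images/vm_locks)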

    And another experience: when using the above syntax, the LUN will be attached to Dom0 and handed to the DomU like any other local Dom0 resource. I recommend configuring LVM on Dom0 to only use the devices that actually carry Dom0 PVs (those that are meant for the Dom0) by specifying the appropriate filters in /etc/lvm/lvm.conf. Additionally, there were/are some minor flaws when using LVM inside the DomUs, independent of the LVM filter settings: if you use the same VG name on Dom0 and DomU *and* you have an LV in the DomU that is not present in the Dom0 VG, then the DomU boot process will hang - this persists until you reboot the Dom0 and affects any subsequent DomU start. The fix is to change /lib/udev/activate_vg to not activate the VG if the new VG has the same name as a Dom0 VG.
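
    For reference, a Dom0 /etc/lvm/lvm.conf filter could look like this (device names are hypothetical - accept only the local disk(s) that really carry Dom0 PVs and reject everything else, e.g. the NPIV LUNs that belong to the DomUs):
    Code:
    # /etc/lvm/lvm.conf on the Dom0
    devices {
        # scan only the local system disk, reject all other block devices
        filter = [ "a|^/dev/sda|", "r|.*|" ]
    }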

    With regards,
    Jens

    PS: If you find a way to specify an NPIV adapter for forwarding to the DomU (AKA pvSCSI), let me know. It should be possible manually, but I've found no /etc/xen/vm/*-support for this so far. Using pvSCSI would avoid seeing the NPIV adapter in Dom0 at all, keeping it limited to its intended use: inside the DomU...

  5. #5

    Re: OES11 virtual machines

    Quote Originally Posted by jmozdzen View Post
    .. The fix is to change /lib/udev/activate_vg to not activate the VG if the new VG has the same name as a Dom0 VG.

    Good points Jens (locking is certainly one to watch for, especially when using cluster-unaware volume management like LVM2 vs. cLVM)!

    The LVM heads-up is also a good one to mention. I'd add that a best practice is to not use LVM-configured storage for the partitions the dom0/host itself is using - configure storage/partitioning for the Xen host/dom0 itself with non-LVM "traditional primary partitions", and use LVM type partitions (or direct phy access to partitions) for the domU/guests.

    The tip on /lib/udev/activate_vg has gone into my little book of notes!

    Cheers,
    Willem
    Last edited by Magic31; 21-Jun-2012 at 07:46.
    Knowledge Partner (voluntary sysop)
    ---
    If you find a post helpful and are logged into the web interface,
    please show your appreciation and click on the star below it. Thanks!

  6. Re: OES11 virtual machines

    Quote Originally Posted by Magic31 View Post
    Good points Jens (locking is certainly one to watch for, especially when using cluster-unaware volume management like LVM2 vs. cLVM)!
    When you use an NPIV "adapter" per DomU, use of cluster FSs usually doesn't come to mind: It's like assigning a dedicated disk to the single DomU.

    But I have to admit that my advice might sound illogical - even when activating the same WWPN on another Dom0 (by starting the DomU there), your fabric should route packets to only a single NPIV HBA (the one with the latest login). It's just that I had to deal with a bug in SCST that resulted in double assignments of LUNs to seemingly two different WWPNs, although it was configured for a one-to-one assignment only. We've submitted a fix, which is in the trunk code of SCST, but I doubt that many vendors have switched to using "trunk" rather than one of the "stable" SCST releases - which do not have that fix applied. Details are available here.

    Quote Originally Posted by Magic31 View Post
    The LVM heads-up is also a good one to mention. I'd add that a best practice is to not use LVM-configured storage for the partitions the dom0/host itself is using - configure storage/partitioning for the Xen host/dom0 itself with non-LVM "traditional primary partitions", and use LVM type partitions (or direct phy access to partitions) for the domU/guests.
    I'm too much of a fan of LVM to follow this advice - but I believe I'm one of the "I know what I'm doing, wanted to do it and can handle the consequences" variety. Had it not been for the activate_vg bug, the filters in Dom0's lvm.conf would have been sufficient to keep me out of trouble.

    Quote Originally Posted by Magic31 View Post
    The tip on /lib/udev/activate_vg has gone into my little book of notes!

    Cheers,
    Willem
    Then I'd better provide my version of activate_vg here (since the message from 2010 on the Novell forum was another victim of the Novell/SUSE forum split):
    Code:
    #!/bin/sh
    # replacement for /lib/udev/activate_vg - called by udev with the VG UUID and VG name
    VGUUID=$1
    VGNAME=$2

    now=$(date)
    # name(s) of the VG(s) the Dom0 itself already knows about
    SYSTEMVG=$(/sbin/lvm vgdisplay|grep "VG Name"|awk '{ print $3 }')

    # unconditionally skip activation: just log the call and exit.
    # NOTE: everything below this unconditional exit is never reached.
    echo "$now: not acting on $0 $1 $2, VG $2 unwanted with system VG $SYSTEMVG" >> /var/log/lvm2.log
    exit 0

    if [ "$SYSTEMVG" = "$VGNAME" ] ; then
            echo "$now: not acting on $0 $1 $2, VG $2 conflicts with system VG $SYSTEMVG" >> /var/log/lvm2.log
    else
            echo "$now: SYSTEMVG=$SYSTEMVG" >> /var/log/lvm2.log
            echo "$now: " /sbin/lvm vgchange --config "devices {filter = [ \"a|/dev/disk/by-id/lvm2-vgpv-$VGUUID-.*|\", \"r/.*/\" ]}" -a y $VGNAME >> /var/log/lvm2.log
            /sbin/lvm vgchange --config "devices {filter = [ \"a|/dev/disk/by-id/lvm2-vgpv-$VGUUID-.*|\", \"r/.*/\" ]}" -a y $VGNAME
    fi
    I failed to reproduce this problem at the end of April (with the then-latest patches) - but today I've seen new log entries from May and June, so this *might* still be a problem. I'll have to re-test when I find some time...

    YMMV

    With regards,
    Jens

  7. #7
    Rick B NNTP User

    Re: OES11 virtual machines

    jmozdzen wrote:

    >
    > Magic31;5207 Wrote:
    >> Good points Jens (locking is certainly one to watch for, especially when
    >> using cluster unaware filesystems like LVM2 vs cLVM)!

    > When you use an NPIV "adapter" per DomU, use of cluster FSs usually
    > doesn't come to mind: It's like assigning a dedicated disk to the single
    > DomU.
    >
    > But I have to admit that my advice might sound illogical - even when
    > activating the same WWPN on another Dom0 (by starting the DomU there),
    > your fabric should route packets to only a single NPIV HBA (the one with
    > the latest login). It's just that I had to deal with a bug in SCST that
    > resulted in double-assignments of LUNs to seemingly two different WWPNs,
    > although configured only for a one-on-one assignment. We've submitted a
    > fix, which is in the trunk code of SCST, but I doubt that many vendors
    > have switched to using "trunk" rather than one of the "stable" SCST
    > releases - which do not have that fix applied. Details are available
    > 'here'
    > (http://technik.blogs.nde.ag/2012/02/...-pitfalls-dug-deep/).
    >
    > Magic31;5207 Wrote:
    >> Also the LVM heads up is a good one to mention, and also added that a
    >> best practice is to not use LVM configured storage for partitions the
    >> dom0/host is using - configure storagre/partitioning for the Xen
    >> host/dom0 itself with non LVM "traditional primary partitions" and use
    >> LVM type partitions (or direct phy access to partitions) for the
    >> domU/guests.

    > I'm too much a fan of LVM to follow this advice - but I believe to be
    > one of the "I know what I'm doing, wanted to do it and can handle the
    > consequences" variety Had it not been for the activate_vg bug, the
    > filters in Dom0's lvm.conf were sufficient for me to stay out of
    > trouble.
    >
    > Magic31;5207 Wrote:
    >> The tip on /lig/udev/activate_vg has gone into my little book of notes!
    >>
    >>
    >> Cheers,
    >> Willem

    >
    > Then I better provide my version of activate_vg here (since the message
    > from 2010 on the Novell forum was another victim of the Novell/Suse
    > forum split):
    >
    > Code:
    > --------------------
    > #!/bin/sh
    > VGUUID=$1
    > VGNAME=$2
    >
    > now=$(date)
    > SYSTEMVG=$(/sbin/lvm vgdisplay|grep "VG Name"|awk '{ print $3 }')
    >
    > echo "$now: not acting on $0 $1 $2, VG $2 unwanted with system VG
    > $SYSTEMVG" >> /var/log/lvm2.log exit 0
    >
    > if [ "$SYSTEMVG" = "$VGNAME" ] ; then
    > echo "$now: not acting on $0 $1 $2, VG $2 conflicts with system VG
    > $SYSTEMVG" >> /var/log/lvm2.log else
    > echo "$now: SYSTEMVG=$SYSTEMVG" >> /var/log/lvm2.log
    > echo "$now: " /sbin/lvm vgchange --config "devices {filter = [
    > \"a|/dev/disk/by-id/lvm2-vgpv-$VGUUID-.*|\", \"r/.*/\" ]}" -a y $VGNAME
    > >> /var/log/lvm2.log /sbin/lvm vgchange --config "devices {filter = [

    > \"a|/dev/disk/by-id/lvm2-vgpv-$VGUUID-.*|\", \"r/.*/\" ]}" -a y $VGNAME
    > fi
    > --------------------
    >
    > I failed to reproduce this problem end of April (with then latest
    > patches) - but today I've seen new log entries from May and June, so
    > this *might* still be a problem. I'll have to re-test this when I find
    > some time...
    >
    > YMMV
    >
    > With regards,
    > Jens
    >
    >


    Thanks Jens and Willem. I have this running now with 2 VMs so far. They are
    both OES11. I will be doing the clustering at this level, from the domU's.
    Can I assign all the cluster devices using the same format to all VMs that
    are in the cluster? I would assume that since it is a device, it would show
    up and I could then create pools. Also, can the SBD be addressed the same
    way?

    I know there are so many ways to skin the cat these days.

    Thanks again.

  8. Re: OES11 virtual machines

    Hi Rick,

    Quote Originally Posted by Rick B View Post
    Thanks Jens and Willem. I have this running now with 2 VMs so far. They are
    both OES11. I will be doing the clustering at this level, from the domU's.
    Can I assign all the cluster devices using the same format to all VMs that
    are in the cluster? I would assume that since it is a device, it would show
    up and I could then create pools. Also, can the SBD be addressed the same
    way?

    I know there are so many ways to skin the cat these days.

    Thanks again.
    you'd create NPIV HBAs per DomU (so with two DomUs, you have two different NPIV addresses) and configure your SAN to give both vHBAs access to the shared resource(s).

    From that point on, it should be the same as with any other type of shared disk (both VMs have concurrent access to the same disk space).
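
    As a sketch (all WWNs and the LUN are again placeholders): each DomU gets its own vport WWPN/WWNN pair, but both disk lines end in the same target WWPN and LUN, and both vHBAs are zoned/masked to that LUN on the SAN side:
    Code:
    # domU "oes-node1" -- its own vport pair, shared target LUN
    disk=[ 'npiv:1000000533aabbcc-29001100aa000001-29011100aa000001-500000e0d0a1b2c3-1,xvdb,w', ]

    # domU "oes-node2" -- a different vport pair, same target WWPN and LUN
    disk=[ 'npiv:1000000533aabbcc-29001100aa000002-29011100aa000002-500000e0d0a1b2c3-1,xvdb,w', ]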

    As I have no experience with OES, I cannot answer the "pool" part of your question.

    AFAIK, SBD fencing can be done on any shared device, i.e. a separate partition on a single "shared disk". But as we use power switches for fencing, I only know the theory.
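
    For completeness, and only as a hedged sketch from the SLES HAE side (NCS brings its own SBD tooling, sbdutil/NSSMU, so check the OES docs for that path): with the generic Linux HA stack, the shared SBD partition is initialized once and then verified from every node. The device name below is a placeholder:
    Code:
    # initialize the SBD slot area on a small shared partition (run once, on any node)
    sbd -d /dev/disk/by-id/scsi-3600000e00d0000ff-part1 create

    # verify from each node that the SBD header is visible
    sbd -d /dev/disk/by-id/scsi-3600000e00d0000ff-part1 dump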

    Regards,
    Jens

  9. #9
    Rick B NNTP User

    Re: OES11 virtual machines

    jmozdzen wrote:

    >
    > Hi Rick,
    >
    > Rick B;5213 Wrote:
    >> Thanks Jens And Willem. I have this now running with 2 vm's so far. They
    >> are
    >> booth oes11. I will be doing clustering at this level from the domu's.
    >> Can I
    >> assign all the cluster devices using the same format to all vm's that
    >> are in
    >> the cluster? I would assume since it is a device it would show up and
    >> than
    >> be able to create pools. Also the sbd can be addressed the same way?
    >>
    >> I know there are so may ways to skin the cat these days.
    >>
    >> Thanks again.

    >
    > you'd create NPIV HBAs per DomU (so with two DomUs, you have two
    > different NPIV addresses) and configure your SAN to give both vHBAs
    > access to the share resource(s).
    >
    > From that point on, it should be the same as with any other type of
    > shared disk (both VMs have concurrent access to the same disk space).
    >
    > As I have no experience with OES, I cannot answer the "pool" part of
    > your question.
    >
    > AFAIK, sbd fencing can be done on any shared device, ie a separate
    > partition on a single "shared disk". But as we use power switches for
    > fencing, I only know the theory.
    >
    > Regards,
    > Jens
    >
    >


    I don't know whether I said this earlier: I am on SLES 11 SP2. Is there an
    easier way to add additional disks to VMs, rather than having to edit the
    config files each time and then deleting and re-creating all the domU's?


  10. #10
    Rick B NNTP User

    Re: OES11 virtual machines

    Rick B wrote:

    > jmozdzen wrote:
    >
    >>
    >> Hi Rick,
    >>
    >> Rick B;5213 Wrote:
    >>> Thanks Jens And Willem. I have this now running with 2 vm's so far. They
    >>> are
    >>> booth oes11. I will be doing clustering at this level from the domu's.
    >>> Can I
    >>> assign all the cluster devices using the same format to all vm's that
    >>> are in
    >>> the cluster? I would assume since it is a device it would show up and
    >>> than
    >>> be able to create pools. Also the sbd can be addressed the same way?
    >>>
    >>> I know there are so may ways to skin the cat these days.
    >>>
    >>> Thanks again.

    >>
    >> you'd create NPIV HBAs per DomU (so with two DomUs, you have two
    >> different NPIV addresses) and configure your SAN to give both vHBAs
    >> access to the share resource(s).
    >>
    >> From that point on, it should be the same as with any other type of
    >> shared disk (both VMs have concurrent access to the same disk space).
    >>
    >> As I have no experience with OES, I cannot answer the "pool" part of
    >> your question.
    >>
    >> AFAIK, sbd fencing can be done on any shared device, ie a separate
    >> partition on a single "shared disk". But as we use power switches for
    >> fencing, I only know the theory.
    >>
    >> Regards,
    >> Jens
    >>
    >>

    >
    > I don't know whether I said this earlier. I am on Sles 11 sp2. Is there an
    > easier way to add additional disks to vm's rather than having to edit
    > files each time and than deleting and creating new all the domu's?


    I think I figured it out. It looks like 'xm block-attach' updates the config
    as well.
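
    For the record, and with placeholder names, the call looks roughly like this; for a xend-managed domU the attached disk is also written back to the stored configuration:
    Code:
    # attach an additional shared disk to the running domU "oes-node1":
    # backend device, frontend device name, mode ('w!' = writable and shareable)
    xm block-attach oes-node1 phy:/dev/disk/by-id/scsi-3600000e00d000002 xvdc 'w!'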

    Thanks

