
Thread: Enabling multipath for boot volume SLES 11 SP3 and EMC VNX


    I am working on a project to document how to connect SAN disk to a Linux on System z server and need some help.

    The SAN team provided me with four 50GB LUNs on an EMC VNX that appear to Linux as vendor=DGC and product=VRAID, as well as two 100GB LUNs on an IBM V7000.

    The Enterprise Server is a z114 running z/VM 6.2, with a four-port FCP card connected to two Brocade switches.

    Phase 1 of my project is to add SAN disk to an existing SLES 11 SP3 system that is installed and running on ECKD DASD (as a guest under z/VM 6.2).

    Phase 2 of my project is to install SLES 11 SP3 onto SAN disk (as a guest under z/VM 6.2).

    Phase 1 was successful: with multipath enabled, I was able to access all six LUNs.
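    For context, bringing the Phase 1 LUNs online amounted to setting the FCP adapter and LUN paths online and then letting multipath pick them up. The sequence below is a rough sketch of that; the device bus-ID, WWPN, and LUN values are placeholders, not my actual numbers.

    Code:
    # Bring the FCP (zfcp) adapter online -- the bus-ID below is a placeholder
    zfcp_host_configure 0.0.f000 1

    # Configure one path to one LUN -- WWPN and LUN values are placeholders
    zfcp_disk_configure 0.0.f000 0x500601605ea04a12 0x0000000000000000 1

    # Verify the SCSI devices and the resulting multipath maps
    lszfcp -D
    multipath -ll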

    FYI - though the SLES 11 doc says the EMC VNX is supported out of the box, I was not able to enable multipath without extensive modification of /etc/multipath.conf. First off, the default entry for this hardware in the sample file /usr/share/doc/packages/multipath-tools/multipath.conf.defaults is incorrect (at least in my environment). I copied the following stanza into /etc/multipath.conf and uncommented each line. (Note: I had already copied /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic to /etc/multipath.conf.)

    Code:
    #       device {
    #               vendor "DGC"
    #               product ".*"
    #               product_blacklist "LUNZ"
    #               path_grouping_policy "group_by_prio"
    #               path_checker "emc_clariion"
    #               features "1 queue_if_no_path"
    #               hardware_handler "1 emc"
    #               prio "emc"
    #               failback "immediate"
    #               rr_weight "uniform"
    #               no_path_retry 60
    #       }
    To get multipath working on the EMC VNX, I also had to change path_checker to "readsector0", hardware_handler to "1 alua", and prio to "alua"; the resulting stanza is shown below. The IBM LUNs were recognized without adding or modifying a device stanza.
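    For clarity, here is how that device stanza ends up with those three changes applied (reconstructed from the defaults entry above rather than pasted from my working file):

    Code:
    device {
            vendor "DGC"
            product ".*"
            product_blacklist "LUNZ"
            path_grouping_policy "group_by_prio"
            path_checker "readsector0"
            features "1 queue_if_no_path"
            hardware_handler "1 alua"
            prio "alua"
            failback "immediate"
            rr_weight "uniform"
            no_path_retry 60
    }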

    I'm having trouble with Phase 2: getting multipath enabled on the EMC VNX boot volume. I haven't attempted to install SLES 11 on the IBM device yet, though I expect it will work as described in the SLES documentation (Storage manual). I have successfully installed SLES 11 onto one of the EMC VNX LUNs and have been able to shut down the guest and IPL from the SAN disk. I created /etc/multipath.conf with parameters similar to those I used for Phase 1, along with other parameter recommendations from the Storage manual.

    The /etc/multipath.conf file I'm using is below.
    Code:
    ##
    ## This is a template multipath-tools configuration file
    ## Uncomment the lines relevant to your environment
    ##
     defaults {
            polling_interval        30
            path_checker            tur
            no_path_retry           5
            user_friendly_names     yes
            dev_loss_tmo 90
            fast_io_fail_tmo 5
     }
     blacklist {
            devnode "^dasd[a-z][[0-9]*]"
            device {
                    vendor "DGC"
                    product "LUNZ"
            }
            device {
                    vendor "EMC"
                    product "LUNZ"
            }
     }
     multipaths {
            multipath {
                    wwid                    3600601608fa034006e987a05a0dfe311
                    alias                   EMC-VNX_LUN0
            }
     }
     devices {
            device {
                    vendor "DGC"
                    product "*"
                    product_blacklist "LUNZ"
                    path_grouping_policy "group_by_prio"
                    path_checker "readsector0"
                    features "1 queue_if_no_path"
                    hardware_handler "1 alua"
                    prio "alua"
                    failback "immediate"
                    rr_weight "uniform"
                    no_path_retry 60
            }
    }
    I have followed the procedure in the manual: add 'dm-multipath' to the INITRD_MODULES= parameter in /etc/sysconfig/kernel, save the file, run mkinitrd, run zipl, add boot.multipath and multipathd to the startup processes, then shut down and IPL (the commands are sketched below). When I IPL the server, Linux begins to access the SAN disk, but when it starts to enable multipath it displays about 30 lines of error-type messages that repeat until I terminate the boot (via the z/VM command #CP I CMS).
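    For reference, those steps correspond roughly to the following commands (a sketch of what I ran, not a verbatim transcript):

    Code:
    # 1. Add dm-multipath to the initrd module list in /etc/sysconfig/kernel:
    #       INITRD_MODULES="... dm-multipath"

    # 2. Rebuild the initrd and rewrite the IPL record
    mkinitrd
    zipl

    # 3. Enable the multipath services at boot
    insserv boot.multipath
    insserv multipathd

    # 4. Shut down the guest and re-IPL from the SAN disk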

    Any thoughts on what else I need to do to enable multipath on the boot volume? I need the path redundancy in case we decide to use SAN disk for some or all of our Enterprise servers.

    Note: the attached zip file contains the Word document for this phase of the project.

    Harley
