SLES 11 SP4 HAE: clvm, can't create logical volume

Hi,

I followed this guide:
https://www.suse.com/documentation/sle_ha/book_sleha/data/sec_ha_clvm_config.html#sec_ha_clvm_config_resources
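
For reference, the configuration from that guide boils down to something like this (a sketch; the monitor values are illustrative, only prim_dlm and prim_clvmd match my setup):

ha-idg-1:~ # crm configure primitive prim_dlm ocf:pacemaker:controld \
        op monitor interval="60" timeout="60"
ha-idg-1:~ # crm configure primitive prim_clvmd ocf:lvm2:clvmd \
        op monitor interval="60" timeout="60"
ha-idg-1:~ # crm configure group base-group prim_dlm prim_clvmd
ha-idg-1:~ # crm configure clone base-clone base-group meta interleave="true"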

The clone resource is running on both nodes:
Last updated: Wed Feb  1 14:47:16 2017
Last change: Tue Jan 31 15:14:55 2017 by hacluster via crmd on ha-idg-1
Stack: classic openais (with plugin)
Current DC: ha-idg-2 - partition with quorum
Version: 1.1.12-f47ea56
2 Nodes configured, 2 expected votes
6 Resources configured


Node ha-idg-1: online
        prim_clvmd      (ocf::lvm2:clvmd):      Started
        prim_dlm        (ocf::pacemaker:controld):      Started
Node ha-idg-2: online
        prim_clvmd      (ocf::lvm2:clvmd):      Started
        prim_dlm        (ocf::pacemaker:controld):      Started

Node Attributes:
* Node ha-idg-1:
* Node ha-idg-2:

Migration summary:
* Node ha-idg-2:
   prim_stonith_ilo_ha-idg-1: migration-threshold=1000000 fail-count=1000000 last-failure='Tue Jan 31 15:14:54 2017'
* Node ha-idg-1:
   prim_stonith_ilo_ha-idg-2: migration-threshold=1000000 fail-count=1000000 last-failure='Tue Jan 31 15:15:07 2017'

Failed actions:
    prim_stonith_ilo_ha-idg-1_start_0 on ha-idg-2 'unknown error' (1): call=100, status=Timed Out, exit-reason='none', last-rc-change='Tue Jan 31 15:14:34 2017', queued=0ms, exec=20004ms
    prim_stonith_ilo_ha-idg-2_start_0 on ha-idg-1 'unknown error' (1): call=107, status=Error, exit-reason='none', last-rc-change='Tue Jan 31 15:14:55 2017', queued=0ms, exec=11584ms

I created the PV and the VG without problems:
ha-idg-2:~ # pvs
  PV                                                  VG            Fmt  Attr PSize    PFree
  /dev/mapper/3600508b1001c5037520913a9b581d78d_part1 vg1           lvm2 a--   295.00g       0
  /dev/mapper/3600c0ff00012824b04af7a5201000000       vg_cluster_01 lvm2 a--  1006.75g 1006.75g
ha-idg-2:~ # vgs
  VG            #PV #LV #SN Attr   VSize    VFree
  vg1             1   1   0 wz--n-  295.00g       0
  vg_cluster_01   1   0   0 wz--nc 1006.75g 1006.75g
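
For completeness, I created them roughly like this (from memory; the 'c' in the wz--nc attributes confirms the VG is clustered):

ha-idg-2:~ # pvcreate /dev/mapper/3600c0ff00012824b04af7a5201000000
ha-idg-2:~ # vgcreate -c y vg_cluster_01 /dev/mapper/3600c0ff00012824b04af7a5201000000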

The cluster VG seems to have enough free space,
but I can't create an LV on top of it:
ha-idg-2:~ # lvcreate -nlv -m1 -l10%VG vg_cluster_01 --mirrorlog mirrored
  Insufficient suitable allocatable extents for logical volume lv: 6442 more required
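
To dig into the numbers behind that message, the extent counts can be listed with the standard report fields (a quick check, nothing SLES-specific):

ha-idg-2:~ # vgs -o +vg_extent_size,vg_free_count vg_cluster_01
ha-idg-2:~ # pvs -o +pv_pe_count,pv_pe_alloc_count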

What I found out:
There is no device directory for vg_cluster_01, although I activated it:
ha-idg-2:~ # ll /dev/vg*
crw-rw---- 1 root root 10, 63 Jan 30 19:28 /dev/vga_arbiter

/dev/vg1:
total 0
lrwxrwxrwx 1 root root 7 Jan 30 19:28 lv_root -> ../dm-5
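
As far as I know, the /dev/<vgname> directory is only created once the VG contains at least one active LV, so the missing directory might simply mean the VG is still empty rather than not activated. Re-activating does no harm, though:

ha-idg-2:~ # vgchange -a y vg_cluster_01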

Thanks for any help.


Bernd

Comments

  • Hi,

    Using this, I succeeded: lvcreate -nlv_cluster_01 -m1 --alloc anywhere -L200G vg_cluster_01
    I have to admit that I don't know exactly what I did here.
    man lvcreate says:
    -m, --mirrors Mirrors
                  Creates a mirrored logical volume with Mirrors copies.  For example, specifying "-m 1" would result in a mirror with two-sides; that is, a linear volume plus one copy.
    
                  Specifying the optional argument --nosync will cause the creation of the mirror to skip the initial resynchronization.  Any data written afterwards will be mirrored, but the original contents will not be copied.  This
                  is useful for skipping a potentially long and resource intensive initial sync of an empty device.
    
                  The optional argument --mirrorlog specifies the type of log to be used. The default is disk, which is
                  persistent and requires a small amount of storage space, usually on a separate device from the data
                  being mirrored. Using core means the mirror is regenerated by copying the data from the first device
                  again each time the device is activated, for example, after every reboot. Using "mirrored" will create
                  a persistent log that is itself mirrored.
    
                  The optional argument --corelog is equivalent to --mirrorlog core.
    
                  Every leg of the mirrored logical volume could be placed on the pvs with same tag, see details(mirror_legs_require_separate_pvs) in lvm.conf.
    
    -m1 results in a mirror with two sides, i.e. a linear volume plus one copy.
    The used space seems to confirm that: 1006.75g - 606.72g ≈ 400g used, i.e. the 200g volume plus its copy (plus a small mirrored log):
    ha-idg-2:~ # vgs
      VG            #PV #LV #SN Attr   VSize    VFree
      vg1             1   1   0 wz--n-  295.00g      0
      vg_cluster_01   1   1   0 wz--nc 1006.75g 606.72g
    ha-idg-2:~ # lvs
      LV            VG            Attr      LSize   Pool Origin Data%  Move Log                Copy%  Convert
      lv_root       vg1           -wi-ao--- 295.00g
      lv_cluster_01 vg_cluster_01 mwa-a-m-- 200.00g                         lv_cluster_01_mlog   8.16
    
    Did I create the LV twice? And after filling it with data, do I have the data twice?
    What is the purpose/advantage? My PVs are volumes on a SAN, and the SAN is a RAID5. Currently I don't see any benefit in doing that.
    Did I miss something?
    The manpage talks of "legs", but in reality I don't have two legs, because pvX from hostA is the same as from hostB.
    Is it necessary to create a mirror in a clustered LVM scenario? Currently I just see the disadvantage of losing space.
    And maybe it's slower?
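
    My current understanding (not verified): with the default allocation policy every mirror leg and every copy of the mirrored log must sit on a separate PV, and vg_cluster_01 has only one PV; that would explain both why the first lvcreate failed and why --alloc anywhere succeeded. And if the SAN RAID5 already provides the redundancy, a plain linear LV would avoid the doubled space entirely, e.g.:

    ha-idg-2:~ # lvcreate -n lv_cluster_01 -L 200G vg_cluster_01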

    Bernd