SLES HA Software Load Balancer Pacemaker Issue

Dear Team,

I need your feedback on my queries below:

1. I have two nodes to join into a cluster. Each node has bond0 (OAM) and bond1 (traffic), which belong to the same subnet, and my LVS VIP will be added to bond1.
Is it possible for a SLES HA cluster to work with both bond0 and bond1 on the same subnet?

2. I have set up the HA cluster, but I am facing an issue, which I have attached in a doc for reference.

Link: https://drive.google.com/open?id=1a-JTaf050is-oLTuMiqThPTaV_lsZWSX

My OS is SLES 12 SP3, with the SLES 12 SP3 HA ISO.

Thanks,
Anup

Comments

  • anup007 New or Quiet Member
    17gmpc01:~ # ifconfig -a
    bond0 Link encap:Ethernet HWaddr 08:F1:EA:CB:B7:70
    inet addr:10.76.204.33 Bcast:10.76.204.255 Mask:255.255.255.0
    inet6 addr: fe80::af1:eaff:fecb:b770/64 Scope:Link
    UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
    RX packets:28845 errors:0 dropped:2459 overruns:0 frame:0
    TX packets:5109 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:5324592 (5.0 Mb) TX bytes:656032 (640.6 Kb)

    bond1 Link encap:Ethernet HWaddr 08:F1:EA:CB:B7:71
    inet addr:10.76.204.46 Bcast:10.76.204.255 Mask:255.255.255.0
    inet6 addr: fe80::af1:eaff:fecb:b771/64 Scope:Link
    UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
    RX packets:32390 errors:0 dropped:2459 overruns:0 frame:0
    TX packets:29509 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:4607943 (4.3 Mb) TX bytes:3100888 (2.9 Mb)


    node 172805153: 17gmpc01
    node 172805167: 17gmpc02
    primitive ldirectord ldirectord \
    operations $id=ldirectord-operations \
    op monitor interval=20 timeout=60
    primitive lvsvip IPaddr2 \
    operations $id=lvsvip-operations \
    op monitor interval=10 timeout=60 \
    params ip=10.76.204.45 lvs_support=true cidr_netmask=32 nic="eth<X>:vip4"
    primitive ping ocf:pacemaker:ping \
    operations $id=ping-operations \
    op monitor interval=10 timeout=60 \
    params host_list="10.76.204.254 10.76.204.46 10.76.204.47" \
    meta target-role=Started
    group lvsgroup ldirectord lvsvip \
    meta target-role=Started
    clone pingclone ping \
    meta target-role=started
    location lvs_ping_location lvsgroup \
    rule -inf: pingd eq 0
    property cib-bootstrap-options: \
    have-watchdog=false \
    dc-version=1.1.16-4.8-77ea74d \
    cluster-infrastructure=corosync \
    cluster-name=cluster \
    stonith-enabled=false \
    no-quorum-policy=ignore \
    placement-strategy=balanced
    rsc_defaults rsc-options: \
    resource-stickiness=1 \
    migration-threshold=3
    op_defaults op-options: \
    record-pending=true \
    timeout=600

    17gmpc02:~ # ifconfig -a
    bond0 Link encap:Ethernet HWaddr 08:F1:EA:CB:B5:20
    inet addr:10.76.204.34 Bcast:10.76.204.255 Mask:255.255.255.0
    inet6 addr: fe80::af1:eaff:fecb:b520/64 Scope:Link
    UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
    RX packets:3749 errors:0 dropped:1876 overruns:0 frame:0
    TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:311900 (304.5 Kb) TX bytes:864 (864.0 b)

    bond1 Link encap:Ethernet HWaddr 08:F1:EA:CB:B5:21
    inet addr:10.76.204.47 Bcast:10.76.204.255 Mask:255.255.255.0
    inet6 addr: fe80::af1:eaff:fecb:b521/64 Scope:Link
    UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
    RX packets:31735 errors:0 dropped:1874 overruns:0 frame:0
    TX packets:49525 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:3415957 (3.2 Mb) TX bytes:8750847 (8.3 Mb)



    GMPC1:
    # /etc/corosync/corosync.conf file autogenerated by YaST2.
    # Manually changed configurations may get lost when reconfigured by YaST2.
    totem {
    #How long to wait for consensus to be achieved before starting a new round of membership configuration.
    consensus: 6000

    #The mode for the redundant ring. none is used when only one interface is specified; otherwise, only active or passive may be chosen
    rrp_mode: none

    #How many token retransmits should be attempted before forming a new configuration.
    token_retransmits_before_loss_const: 10

    #How long to wait for join messages in membership protocol. in ms
    join: 60

    #The maximum number of messages that may be sent by one processor on receipt of the token.
    max_messages: 20

    #The virtual synchrony filter type used to identify a primary component. Change with care.
    vsftype: none

    #Timeout for a token lost. in ms
    token: 5000

    #This specifies the name of cluster
    cluster_name: cluster

    #Specifies version of IP to use for communication. Value can be one of ipv4 or ipv6.
    ip_version: ipv4

    #HMAC/SHA1 should be used to authenticate all messages
    secauth: off

    #The only valid version is 2
    version: 2

    #Transport protocol
    transport: udp

    interface {
    #The multicast address to be used
    mcastaddr: 226.94.1.1

    #Network address to bind to for this interface setting
    bindnetaddr: 10.76.204.0

    #The multicast port to be used
    mcastport: 5405

    #The ringnumber assigned to this interface setting
    ringnumber: 0

    }
    #To make sure the auto-generated nodeid is positive
    clear_node_high_bit: yes

    }
    logging {
    #Log to a specified file
    to_logfile: no

    #Log timestamp as well
    timestamp: off

    #Facility in syslog
    syslog_facility: daemon

    #Log to syslog
    to_syslog: yes

    #Whether or not turning on the debug information in the log
    debug: off

    #Log to the standard error output
    to_stderr: no

    #Logging file line in the source code as well
    fileline: off

    }
    quorum {
    #votequorum requires an expected_votes value to function
    expected_votes: 2

    #Enables two node cluster operations
    two_node: 1

    #Enable and configure quorum subsystem
    provider: corosync_votequorum

    }


    GMPC2:
    # /etc/corosync/corosync.conf on GMPC2 is identical to the GMPC1 file above.
  • strahil-nikolov-dxc Established Member
    anup007 wrote:
    Is it possible for a SLES HA cluster to work with both bond0 and bond1 on the same subnet?

    Hi,

    Can you paraphrase your question? I do not understand what 'perform' is supposed to mean here.
    The IPaddr2 resource type just does what we can do with the ip command.
    If you give an example of how you would do it manually (add/remove the IP), then it will be a piece of cake.
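For reference, the manual equivalent of what an IPaddr2 resource does is roughly the following sketch (the VIP 10.76.204.45/32 on bond1 is taken from the crm configuration posted above; these commands need root):

```shell
# Bring the VIP up as an extra address on bond1 (roughly what IPaddr2 does on start)
ip addr add 10.76.204.45/32 dev bond1
# Send gratuitous ARP so neighbours learn where the VIP now lives
arping -U -I bond1 -c 3 10.76.204.45
# Take the VIP down again (roughly what IPaddr2 does on stop)
ip addr del 10.76.204.45/32 dev bond1
```

If adding and removing the address by hand behaves as expected, the resource agent should too, since it drives the same `ip` command underneath.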
  • strahil-nikolov-dxc Established Member
    If you are asking whether we can control the device: yes, this is possible.

    The metadata for a resource is viewable via:
    [code]crm ra info ocf:heartbeat:IPaddr2[/code]

    Note: in newer implementations, IPaddr is a symlink to IPaddr2, which uses the 'ip' command. This means an IP assigned by the cluster will not be visible in ifconfig output; 'ifconfig' has been deprecated for a very long time.
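For example, to list every address on an interface, including secondary addresses that the cluster adds, you can use `ip` (shown here against loopback so it runs on any Linux host):

```shell
# 'ip -o addr show' prints one line per address, including secondary
# addresses that plain ifconfig output can omit.
ip -o -4 addr show dev lo
```

On a cluster node you would point this at bond1 to confirm whether the VIP is actually assigned.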
  • anup007 New or Quiet Member
    Hi,

    Can you paraphrase your question? I do not understand what 'perform' is supposed to mean here.
    The IPaddr2 resource type just does what we can do with the ip command.
    If you give an example of how you would do it manually (add/remove the IP), then it will be a piece of cake.

    Hello,
    My question: when I am configuring the software load balancer, the OAM login is lost and I am not able to connect through OAM. After commenting out the lines below, the login is restored:
    #net.ipv4.conf.all.arp_ignore = 1
    #net.ipv4.conf.all.arp_announce = 2
    #net.ipv4.conf.bond1.arp_ignore = 1
    #net.ipv4.conf.bond1.arp_announce = 2

    In my setup, bond0 is OAM and bond1 carries the VIP (and both bond0 and bond1 are in the same subnet).
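A likely reason the `all` lines break OAM: for arp_ignore and arp_announce, the kernel applies the maximum of the `conf/all` value and the per-interface value, so `net.ipv4.conf.all.arp_ignore = 1` also tightens ARP handling on bond0, and with both bonds in the same subnet the host may stop answering ARP for the OAM address. A sketch of a sysctl fragment that scopes the LVS tuning to bond1 only (the file name is just an example):

```
# /etc/sysctl.d/90-lvs-arp.conf (example name): LVS direct-routing ARP
# tuning scoped to bond1 only. The kernel uses max(all, per-interface)
# for these keys, so leaving the 'all' keys at their defaults keeps
# bond0 (OAM) answering ARP normally.
net.ipv4.conf.bond1.arp_ignore = 1
net.ipv4.conf.bond1.arp_announce = 2
```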
  • strahil-nikolov-dxc Established Member
    anup007 wrote:
    Hello,
    My question: when I am configuring the software load balancer, the OAM login is lost and I am not able to connect through OAM. After commenting out the lines below, the login is restored:
    #net.ipv4.conf.all.arp_ignore = 1
    #net.ipv4.conf.all.arp_announce = 2
    #net.ipv4.conf.bond1.arp_ignore = 1
    #net.ipv4.conf.bond1.arp_announce = 2

    In my setup, bond0 is OAM and bond1 carries the VIP (and both bond0 and bond1 are in the same subnet).



    It's quite strange to have two bonds in the same subnet. Why don't you aggregate all of the bond slaves into a single bonded device and make your life easier?
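If you go that route, a single bond on SLES 12 is configured with an ifcfg file under /etc/sysconfig/network. A sketch (the slave names eth0/eth1 and the active-backup mode are assumptions; adjust to your hardware):

```
# /etc/sysconfig/network/ifcfg-bond0 (sketch for SLES 12)
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='10.76.204.33/24'
BONDING_MASTER='yes'
BONDING_MODULE_OPTS='mode=active-backup miimon=100'
BONDING_SLAVE0='eth0'
BONDING_SLAVE1='eth1'
```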