I have created a two-node cluster on SLES 12. The configuration is as follows:

msnode1:~ # crm --version
crm 3.0.0

msnode1:~ # corosync -v
Corosync Cluster Engine, version '2.3.6'
Copyright (c) 2006-2009 Red Hat, Inc.

msnode1:~ # crm config show
node 1: msnode1
node 2: msnode2
primitive mspersonal systemd:mspersonal \
op monitor interval=30s
primitive virtip IPaddr \
params ip=10.243.109.103 cidr_netmask=21 \
op monitor interval=30s
location cli-prefer-virtip virtip role=Started inf: msnode1
colocation msconstraint inf: virtip mspersonal
order msorder Mandatory: virtip mspersonal
property cib-bootstrap-options: \
have-watchdog=false \
dc-version=1.1.16-4.8-77ea74d \
cluster-infrastructure=corosync \
cluster-name=mscluster \
stonith-enabled=false \
placement-strategy=balanced \
help \
list \
last-lrm-refresh=1561341732
rsc_defaults rsc-options: \
resource-stickiness=100 \
migration-threshold=2
op_defaults op-options: \
timeout=600 \
record-pending=true

msnode1:~ # crm status
Stack: corosync
Current DC: msnode1 (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Tue Jun 25 17:43:44 2019
Last change: Tue Jun 25 17:38:21 2019 by hacluster via cibadmin on msnode1

2 nodes configured
2 resources configured

Online: [ msnode1 msnode2 ]

Full list of resources:

virtip (ocf::heartbeat:IPaddr): Started msnode1
mspersonal (systemd:mspersonal): Started msnode1
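
One thing I notice in the configuration above: the cli-prefer-virtip constraint pins virtip to msnode1 with an infinity score. As far as I understand, these cli-prefer-* constraints are what crm resource move/migrate leaves behind until they are cleared. For comparison, a deliberate soft preference with a finite score, which a stickiness of 100 could then override, would look roughly like this (a sketch only, reusing the names from my configuration):

```
# Hypothetical alternative: prefer msnode1 with score 50 instead of inf,
# so that resource-stickiness=100 outweighs it and resources stay put
location prefer-virtip virtip 50: msnode1
```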


When I stop the cluster on msnode1 (or reboot it), the resources start on msnode2, but then they are immediately stopped again and the status changes to:

msnode1:~ # systemctl stop pacemaker
msnode2:~ # crm status
Stack: corosync
Current DC: msnode2 (version 1.1.16-4.8-77ea74d) - partition WITHOUT quorum
Last updated: Tue Jun 25 17:44:26 2019
Last change: Tue Jun 25 17:38:20 2019 by hacluster via cibadmin on msnode1

2 nodes configured
2 resources configured

Online: [ msnode2 ]
OFFLINE: [ msnode1 ]

Full list of resources:

virtip (ocf::heartbeat:IPaddr): Stopped
mspersonal (systemd:mspersonal): Stopped
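
One thing that stands out in the status above is "partition WITHOUT quorum". If the cluster is still running with the default no-quorum-policy=stop, I believe losing quorum alone could explain the resources being stopped on msnode2 here. For reference, the usual two-node quorum setup in corosync.conf would look roughly like this (a sketch; I have not pasted my actual corosync.conf, which may differ):

```
# /etc/corosync/corosync.conf -- quorum section only (hypothetical)
quorum {
    provider: corosync_votequorum
    two_node: 1   # the surviving node keeps quorum when its peer leaves
}
```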

When I restart the pacemaker service on msnode1, the resources start back on msnode1:

msnode1:~ # systemctl start pacemaker
msnode1:~ # crm status
Stack: corosync
Current DC: msnode2 (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Tue Jun 25 17:46:09 2019
Last change: Tue Jun 25 17:38:20 2019 by hacluster via cibadmin on msnode1

2 nodes configured
2 resources configured

Online: [ msnode1 msnode2 ]

Full list of resources:

virtip (ocf::heartbeat:IPaddr): Started msnode1
mspersonal (systemd:mspersonal): Started msnode1

But when I repeat the same exercise, the resources actually start on msnode2:

msnode1:~ # systemctl stop pacemaker
msnode2:~ # crm status
Stack: corosync
Current DC: msnode2 (version 1.1.16-4.8-77ea74d) - partition WITHOUT quorum
Last updated: Tue Jun 25 17:47:00 2019
Last change: Tue Jun 25 17:38:20 2019 by hacluster via cibadmin on msnode1

2 nodes configured
2 resources configured

Online: [ msnode2 ]
OFFLINE: [ msnode1 ]

Full list of resources:

virtip (ocf::heartbeat:IPaddr): Started msnode2
mspersonal (systemd:mspersonal): Started msnode2

But when I start pacemaker again on msnode1, the resources move back to msnode1. I did not expect this, because resource-stickiness is set to 100, and stickiness does work as expected when one of the resources fails.

I cannot figure out what I am missing in this cluster configuration. What do I need to change to get the expected behavior?
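
One thing I am wondering about: should I clear the leftover cli-prefer-virtip constraint (presumably left behind by an earlier crm resource move)? Something along these lines, though I have not tried it yet on this cluster:

```
# Clear the inf location constraint that 'crm resource move' leaves behind
crm resource unmove virtip
# or delete it from the configuration directly:
crm configure delete cli-prefer-virtip
```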