
SLES 11 SP3



Rickb
04-Nov-2013, 15:15
I am looking for how to upgrade from SLES 11 SP2 to SP3 when you are also
running HA. I have been searching the docs for how to do this. I have quite a
few VMs running under HA and am looking for the cleanest way to upgrade. This
is a 2-node HA cluster.

malcolmlewis
04-Nov-2013, 15:23
On Mon 04 Nov 2013 02:15:39 PM CST, Rickb wrote:

> I am looking for how to upgrade from SLES 11 SP2 to SP3 when you are
> also running HA. I have been searching the docs for how to do this. I
> have quite a few VMs running under HA and am looking for the cleanest
> way to upgrade. This is a 2-node HA cluster.


Hi
Have you read this one?
https://www.suse.com/documentation/sle_ha/singlehtml/book_sleha/book_sleha.html#sec.ha.migration.sle11.sp3

--
Cheers Malcolm °¿° SUSE Knowledge Partner (Linux Counter #276890)
SLED 11 SP3 (x86_64) GNOME 2.28.0 Kernel 3.0.93-0.8-default

jmozdzen
04-Nov-2013, 18:17
Hi Rickb,


> I am looking for how to upgrade from SLES 11 SP2 to SP3 when you are also
> running HA. I have been searching the docs for how to do this. I have
> quite a few VMs running under HA and am looking for the cleanest way to
> upgrade. This is a 2-node HA cluster.

From what I can tell, you need to take down the cluster to upgrade (no rolling update) - but then it's as simple as described in https://www.suse.com/support/kb/doc.php?id=7012368, i.e. "Update by using zypper". HAE is a simple add-on product that can be upgraded online easily - so in steps 7/8, simply include the HAE migration product as well and include the HAE repos in the "zypper dup" step.
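
Roughly, the per-node sequence would look something like this - only a sketch, and the migration product names and repository aliases below are my assumptions, so verify them with "zypper se -t product migration" and "zypper lr" before running anything:

# refresh repos/services and bring the package stack fully up to date first
zypper ref -s
zypper up -t patch          # repeat until no further patches are offered

# install the SP3 migration products for the base system *and* HAE
# (product names are examples - list the real ones with:
#  zypper se -t product migration)
zypper in -t product SUSE_SLES-SP3-migration sle-hae-SP3-migration

# refresh the registration so the SP3 pool/update repositories get added
suse_register -d 2 -L /root/.suse_register.log

# run the distribution upgrade from the SP3 repos, SLES and HAE together
# (repository aliases are examples - check "zypper lr" for the real ones)
zypper ref -s
zypper dup --from SLES11-SP3-Pool --from SLES11-SP3-Updates \
           --from SLE11-HAE-SP3-Pool --from SLE11-HAE-SP3-Updates

# reboot into the updated kernel and cluster stack
shutdown -r now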

Regards,
Jens

smflood
05-Nov-2013, 14:00
On 04/11/2013 17:41, jmozdzen wrote:

> from what I can tell, you need to take down the cluster to upgrade (no
> rolling update) - but then it's as simple as described in
> https://www.suse.com/support/kb/doc.php?id=7012368, i.e. "Update by
> using zypper". HAE is a simple add-on product that can be upgraded
> online easily - so in step 7 / 8, simply include the HAE migration
> product as well and include the HAE repos in the "zypper dup" step.

Correct. I've literally just upgraded a SLES11 SP2 HAE cluster node to
SP3 using the Zypper method as per section 7.6.3 of the SLES11
Deployment Guide[1].

HTH.

[1]
https://www.suse.com/documentation/sles11/book_sle_deployment/data/sec_update_sle11sp2.html
--
Simon
SUSE Knowledge Partner


paca
06-Nov-2013, 08:07
On 4.11.2013 19:41, jmozdzen wrote:

> from what I can tell, you need to take down the cluster to upgrade (no
> rolling update) - but then it's as simple as described in
> https://www.suse.com/support/kb/doc.php?id=7012368, i.e. "Update by
> using zypper". HAE is a simple add-on product that can be upgraded

Have you tried a rolling update for a cluster? (And failed?) Or where did
you find out that a rolling update is not supported?

I'm planning to update a 4-node cluster and could really use some
real-world experience with upgrading. ;)

My plan per node is:
rcopenais stop           # stop the cluster stack on this node
chkconfig openais off    # keep it from starting during the reboots
(now run zypper to update the server and HA products, rebooting as many
times as needed)
chkconfig openais on
rcopenais start

and then continue with the next node.

jmozdzen
06-Nov-2013, 11:54
Hi paca,

> Have you tried a rolling update for a cluster? (And failed?) Or where
> did you find out that a rolling update is not supported?

we tried and had trouble getting our OCFS2 file systems to re-join. It might have been something other than the version upgrade, but in our case it was a scheduled maintenance window anyhow, so it was easiest to bring down the cluster.

Regards,
Jens

paca
11-Nov-2013, 14:09
On 6.11.2013 13:04, jmozdzen wrote:

> we tried and had trouble getting our OCFS2 file systems to re-join. It
> might have been something other than the version upgrade, but in our case
> it was a scheduled maintenance window anyhow, so it was easiest to bring
> down the cluster.
>
> Regards,
> Jens
>
>
I also have had some problems in the past with major OCFS2 version upgrades.
But now I'm trying a rolling update again. So far everything is OK. The first
node is updated, OCFS2 is mounted, and I can migrate virtual machines from/to
the updated node.

The only problem is that when testing OCFS2 performance by copying an ISO
file within OCFS2, I got this in dmesg: "JBD: cp wants too many credits
(811648 > 16384)". This occurs only on OCFS2, not on the root filesystem
(btrfs), and both devices share the same iSCSI adapter. I'm hoping this is
just a minor issue between the different OCFS2 versions.
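
In case it turns out to be journal-related, this is how I plan to compare the OCFS2 journal between the old and new nodes (the device path below is just a placeholder for our shared LUN):

# superblock info (block and cluster size) of the shared OCFS2 device
debugfs.ocfs2 -R "stats" /dev/mapper/shared-ocfs2

# size of the journal system file for node slot 0
# (journal:0001 etc. for the other slots)
debugfs.ocfs2 -R "stat //journal:0000" /dev/mapper/shared-ocfs2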

I'll continue with the rolling update and post back if I hit any major issues.

paca
12-Nov-2013, 12:13
On 11.11.2013 15:09, Petri Asikainen wrote:
> On 6.11.2013 13:04, jmozdzen wrote:

>
> I'll continue with the rolling update and post back if I hit any major
> issues.
>

Updating my SLES 11 SP2 HAE cluster to SP3 went well until updating the
last node, which was running as the CRM DC.
When stopping openais (rcopenais stop) on the DC node, OCFS2 timed out on
the other nodes for at least a minute or two; during that time the OCFS2
filesystems could not be accessed.
I'm not sure whether this is related to slow election of a new DC, but it
caused all Xen domU resources to be marked as failed, and so a
reboot/fencing of the whole cluster. :(

So, the questions I asked myself after this:
Is it possible to force Pacemaker to elect the DC on a new node before
shutting down the current one?
Is there a configuration parameter that could help with this issue?

And maybe I should do something else for a living? ;)

jmozdzen
12-Nov-2013, 13:31
Hi paca,

On 11.11.2013 15:09, Petri Asikainen wrote:
> On 6.11.2013 13:04, jmozdzen wrote:
>
> I'll continue with the rolling update and post back if I hit any major
> issues.
>
> Updating my SLES 11 SP2 HAE cluster to SP3 went well until updating the
> last node, which was running as the CRM DC.
> When stopping openais (rcopenais stop) on the DC node, OCFS2 timed out on
> the other nodes for at least a minute or two; during that time the OCFS2
> filesystems could not be accessed.

Sounds familiar to me - it's probably the DLM being unavailable during the election period.


> So, the questions I asked myself after this:
> Is it possible to force Pacemaker to elect the DC on a new node before
> shutting down the current one?
> Is there a configuration parameter that could help with this issue?

DC election is considered an internal detail of the cluster stack, and as far as I can tell there's no way to force the DC away from a specific node other than by shutting down the stack on that node.
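
What you can do is drain the node before stopping the stack, so that at least the resources are moved away in a controlled fashion. A rough sketch with standard crmsh commands (the node name is just an example, and this won't avoid the DLM pause during DC re-election):

# show the current DC and where resources are running
crm_mon -1

# put the node about to be updated into standby so Pacemaker migrates
# its resources (including the Xen domUs) off it
crm node standby node1

# once crm_mon shows the node free of resources, stop the stack there
rcopenais stop

# after the upgrade and reboot(s), rejoin the cluster
rcopenais start
crm node online node1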


> And maybe I should do something else for a living? ;)

Hey, if it were easy, we wouldn't get paid for doing it ;)

Regards,
Jens

Rickb
25-Nov-2013, 22:38
Simon Flood wrote:

> On 04/11/2013 17:41, jmozdzen wrote:
>
>> from what I can tell, you need to take down the cluster to upgrade (no
>> rolling update) - but then it's as simple as described in
>> https://www.suse.com/support/kb/doc.php?id=7012368, i.e. "Update by
>> using zypper". HAE is a simple add-on product that can be upgraded
>> online easily - so in step 7 / 8, simply include the HAE migration
>> product as well and include the HAE repos in the "zypper dup" step.
>
> Correct. I've literally just upgraded a SLES11 SP2 HAE cluster node to
> SP3 using the Zypper method as per section 7.6.3 of the SLES11
> Deployment Guide[1].
>
> HTH.
>
> [1]
>
> https://www.suse.com/documentation/sles11/book_sle_deployment/data/sec_update_sle11sp2.html

Simon, I saw that you said you did a node. Did you finish the rest of the
nodes? Did the wagon upgrade tool pick up the HA add-on and upgrade it as
well? How many nodes are in the cluster?

Thanks

jmozdzen
26-Nov-2013, 13:15
Hi rickb,


> Simon, I saw that you said you did a node. Did you finish the rest of the
> nodes? Did the wagon upgrade tool pick up the HA add-on and upgrade it as
> well? How many nodes are in the cluster?
>
> Thanks

While I'm not Simon, I can report that wagon didn't treat the cluster nodes in any special way - you'll have to run the steps on each node individually.
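
In other words, per node it boils down to something like this (the plain "wagon" invocation is how we ran it - treat this as a sketch and adapt it to your setup):

# take this node out of the cluster and keep the stack down across reboots
rcopenais stop
chkconfig openais off

# run the SP2 -> SP3 migration; wagon walks you through the migration
# products, repositories and the actual "zypper dup", and may ask for
# one or more reboots
wagon

# re-enable the cluster stack and rejoin
chkconfig openais on
rcopenais start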

Regards,
Jens

Rickb
26-Nov-2013, 18:27
jmozdzen wrote:

>
> Hi rickb,
> Rickb;17687 Wrote:
>>
>> Simon, I saw that you said you did a node. Did you finish the rest of
>> the nodes? Did the wagon upgrade tool pick up the HA add-on and upgrade
>> it as well? How many nodes are in the cluster?
>>
>> Thanks
>
> While I'm not Simon, I can report that wagon didn't treat the cluster
> nodes in any special way - you'll have to run the steps on each node
> individually.
>
> Regards,
> Jens
>
>


Thank you for all the help. I am working on the second node now and so far
everything has gone smoothly, except for an issue with one VM that was not
coming up properly. I'm still not sure what happened, but I finally migrated
it back and forth a few times and then it completed its start. It seems to
be coming together.

Again thanks

smflood
28-Nov-2013, 18:28
On 25/11/2013 21:38, Rickb wrote:

> Simon, I saw that you said you did a node. Did you finish the rest of the
> nodes? Did the wagon upgrade tool pick up the HA add-on and upgrade it as
> well? How many nodes are in the cluster?

In our case we had installed two SLES11 SP3 servers to add to an existing
one to create an HAE cluster, but after having problems setting things up we
discovered the existing server was SLES11 SP2, so I upgraded it to match.

HTH.
--
Simon
SUSE Knowledge Partner
