
Move SLES DomU from SLES10 SP4 host to SLES11 SP2 host - no go



kjhurni
19-Oct-2012, 19:06
Here's my setup:
SLES 10 SP4 64-bit XEN Host
I have TWO physical disk-backed DomU
One is paravirtualized SLES 10 SP3 64-bit
One is Fully Virtualized SLES 11 SP1 32-bit

Rather than upgrade the host in place to SLES11 SP2, I powered off the server.
Disconnected the LUNs via our SAN (we boot from SAN)
Created a new LUN0
Attached it to the server
Booted up the SLES11 SP2 media and installed SLES11 SP2 as a physical machine
After install, I patched the server
Then I went into YaST and added the XEN Hypervisor and Tools
All is good.

Now, I powered server off
I re-attached my two other LUNs that hold my DomU
I boot up server
Server can see the LUNs just fine

I manually went into virtual manager and created a paravirtualized VM with the same settings as what was on the SLES10 host (I have the xm -l file I exported so I know what the config was). I point it to the same physical disk (/dev/disk/by-id/scsi-bighairyGUID)

However, when the DomU tries to boot all I get is:
Boot failed
Boot loader didn't return any data


????

The FULLY virtual machine actually boots, but for some reason cannot find the / partition (but obviously finds the /boot partition).

Now, if I power off the server, remove the SLES11 SP2 Dom0 boot lun, re-attach the SLES10 SP4 Dom0 boot lun, boot it up, things load just fine.

Is there some conversion that needs to be done between a DomU that was originally created in SLES10 SP4 64-bit when going to SLES11 SP2 64-bit?
I couldn't find anything in the SLES 11 SP2 docs EXCEPT if you created a VM in SLES10 (no SP), but that is not the case here.
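For reference, here's how I exported that config on the old host before the move (a sketch; "mydomu" is just a placeholder for whatever your DomU is actually named):

# on the SLES10 SP4 Dom0: dump the DomU's XenStore definition (SXP format)
xm list -l mydomu > mydomu.sxp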

jmozdzen
19-Oct-2012, 20:03
Hi kjhurni,

SLES10 is quite some time ago for me, but I recall having had to change our DomU configurations to migrate Dom0 from SLES10 to SLES11.

I'd recommend setting up a new DomU on the SLES11 Dom0 and cloning that config (pointing to the old DomU disk image, adjusting the NIC MAC address and the like). I don't remember having had to change anything inside the DomU, just its configuration. But again - it's been many moons since I've done this, so ymmv...

Regards,
Jens

kjhurni
19-Oct-2012, 20:39
jmozdzen wrote:
> I'd recommend setting up a new DomU on the SLES11 Dom0 and cloning that config (pointing to the old DomU disk image, adjusting the NIC MAC address and the like).

Thanks, but I think that's basically what I did on the SLES11 SP2 Dom0:

Created a new VM in Virtual Manager, said I had an EXISTING disk with OS on it, pointed it to the same disk (old DomU disk), etc.

kjhurni
19-Oct-2012, 20:53
jmozdzen wrote:
> I'd recommend setting up a new DomU on the SLES11 Dom0 and cloning that config (pointing to the old DomU disk image, adjusting the NIC MAC address and the like).

Ugh, apparently it's this:

http://www.novell.com/support/kb/doc.php?id=7002815

Nasty way to "convert" things: either configure it to use the old loader stuff (probably not ideal) or go through the pain of mounting the DomU inside the Dom0 (ack!)
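If I have to go the mounting route, I assume it's something along these lines (untested; assuming the DomU's /boot is the first partition, and the DomU must be shut down first):

# on the Dom0: mount the PV DomU's /boot partition to inspect the grub files
mount /dev/disk/by-id/scsi-bighairyGUID-part1 /mnt
ls /mnt/grub/menu.lst
umount /mnt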

jmozdzen
19-Oct-2012, 20:56
Hi,

sorry then - I thought you had some way to re-import those exported settings. We're mostly working with config files, so I'm not up to date on what magic virt-manager can do nowadays ;)

Any hints in the DomU's log? Whenever I came across that error message, it was because the DomU loader was unable to cope with the disk or menu.lst layout. Anything unusual about, e.g., the file system used for the DomU's /boot (maybe no longer supported by the new Dom0)?

Regards,
Jens

jmozdzen
19-Oct-2012, 21:14
Thanks for reporting back.

Having to modify DomUs to run them under an upgraded Dom0 is a PITA, for sure.

How's your HVM going, have you got that running, too?

Regards,
Jens

kjhurni
19-Oct-2012, 21:41
jmozdzen wrote:
> Having to modify DomUs to run them under an upgraded Dom0 is a PITA, for sure. How's your HVM going, have you got that running, too?

Well I was hoping to use the "easy" part of the TID, but it's extremely sparse (no details, etc):


running domU's on SLES 10 and SLES 11

If a DomU will be run on both SLES 10 and SLES 11 Dom0 hosts, install the DomU on SLES 10. After the DomU is created, use the /etc/xen/vm file to make changes to hardware and to start and stop the DomU.

But they don't state which file (the .xml or the other one). Further, those files are only ever there ONCE (when you initially create the VM in XEN). Any changes made later (e.g., RAM) are not reflected in them, and Novell doesn't list a way to get them exported there. The xm commands export to a completely different file/format.

I COULD use the domUloader method in the TID (I have the config files) but then I'm not sure what'll happen with the networking, because in SLES 10 the network cards are defined with an entirely different syntax than in SLES11 (brX vs. vifXX)


domUloader: use or import legacy /etc/xen/vm files

domUloader still ships with SLES 11, but is not the default boot loader for "vm-install" which is used by Virt-Manager and YaST. If you have not modified the domU using "xm" commands, YaST, or "Virt-Manager", then you can re-import the DomU configuration. To do so simply type "xm new -f <configuration file>".

I'm just wanting to make sure I don't shoot myself in the foot with using the "legacy" stuff (that always implies stuff is going away and shouldn't be used).
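For what it's worth, a legacy /etc/xen/vm file for the domUloader route would look roughly like this (a sketch from memory, untested; the name, sizes, MAC, and the "bootargs" key are my assumptions here):

name="mydomu"
memory=2048
vcpus=2
bootloader="/usr/lib/xen/boot/domUloader.py"
bootargs="--entry=xvda1:/boot/vmlinuz-xen,/boot/initrd-xen"
disk=[ 'phy:/dev/disk/by-id/scsi-bighairyGUID,xvda,w', ]
vif=[ 'mac=00:16:3e:00:00:01', ]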

I suppose I could be daring and try this:


pygrub: add a menu.lst file to existing hosts

But these are physical disks, so I can't use the steps the TID lists, although I can easily fire up the SLES10 XEN host, boot into the XEN DomU, and manually do what the rest of the section says to do, I guess.

KBOYLE
19-Oct-2012, 21:42
kjhurni wrote:

> Ugh, apparently it's this:
>
> http://www.novell.com/support/kb/doc.php?id=7002815

I was thinking about your issue then saw this post. As I read through
the TID, I found it a bit confusing.

As I understand it, DomU's running under a SLES 11 Dom0 need grub to
boot. Since you have both SLES 10 and SLES 11 Dom0's available, can't
you just:

1. Start DomU from your SLES 10 Dom0

2. Install grub boot loader.

3. Shutdown

4. Start DomU from your SLES 11 Dom0


Just curious. It might save having to access the DomU's file system from Dom0.


kjhurni
19-Oct-2012, 22:11
KBOYLE wrote:
> As I understand it, DomU's running under a SLES 11 Dom0 need grub to boot. Since you have both SLES 10 and SLES 11 Dom0's available, can't you just start the DomU from your SLES 10 Dom0, install the grub boot loader, shut down, and then start the DomU from your SLES 11 Dom0?

I can certainly try that, but the TID seems to indicate you can only do that at initial installation time AND only on SLES11. That's the first item in the TID:


installation change: install a boot loader
During the installation of any Linux DomU under SLES 11, make sure to install Grub. Installing a boot loader will ensure that a /boot/grub/menu.lst file is populated, and will allow DomU's to boot normally.

What's puzzling is that two items in the TID indicate it's because you're missing /boot/grub/menu.lst, but if I mount the paravirtualized physical disk's /boot partition I do see a:
/grub/menu.lst

But it may be one of those odd/weird things with the XEN host where it's not really there even though it appears to be?
However, I think the first part of the TID kind of conflicts with the third item.

The first item indicates all you need is the menu.lst file.
The third item indicates that not only do you need that, it needs to contain a VERY specific entry (which seems to differ from the existing one I have):

title Default Kernel
root (hd0,0)
kernel /vmlinuz-xen root=/dev/xvda2 splash=silent showopts vga=0x31a
initrd /initrd-xen

I believe the one that SLES10 creates has a different kernel line and initrd line there.
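For comparison, what a SLES10 install writes looks more like this (version strings and root device illustrative; the exact paths also depend on whether /boot is its own partition):

title XEN
root (hd0,0)
kernel /boot/vmlinuz-2.6.16.60-0.832.2-xen root=/dev/xvda3 splash=silent showopts
initrd /boot/initrd-2.6.16.60-0.832.2-xen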

jmozdzen
19-Oct-2012, 23:05
> But they don't state which file (the .xml or the other one). Further, those files are only ever there ONCE (when you initially create the VM in XEN).

I think they're talking about the "other" one - at least that's what we're using... And while the files are there until you delete them, you are right that they are not updated when you alter the definitions stored in the Xen config DB. Once you're into clustering, you'll want to *only* work with the files, as modifications to the Xen DB will only persist on a single Xen server, while you can share the files across many Xen Dom0s. But it should be easy to create up-to-date files, either manually or using virt-manager (just create a new VM with the proper settings and e.g. use the result as a template). Just keep in mind that you'll have to actually DELETE the DomU definitions from the Xen store, else "xm create <filename>" won't have the desired effect and "xm start <domuname>" will use the Xen store definitions rather than the config files.



> I COULD use the domUloader method in the TID (I have the config files) but then I'm not sure what'll happen with the networking, because in SLES 10 the network cards are defined with an entirely different syntax than in SLES11 (brX vs. vifXX)

Actually, they're not different. SLES11 has simply dropped the Xen scripts that set up the whole bridging environment - you do that in advance (outside the Xen configuration, e.g. via YaST) and then, in the DomU config file, reference the bridge name you want the VIF to attach to. We did the same in SLES10 already, as we had other ideas about bridge names etc. than the way Xen handled it :D
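In config file terms it's just something like this (MAC and bridge name made up; the bridge "br1" must already exist on the Dom0):

vif=[ 'mac=00:16:3e:12:34:56,bridge=br1', ]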




> I'm just wanting to make sure I don't shoot myself in the foot with using the "legacy" stuff (that always implies stuff is going away and shouldn't be used).

I believe the DomU config files will remain usable for quite some time - it's more difficult to run a Xen cluster via a shared Xen store - AFAIK that isn't available yet, and IMO there's no "business case" to implement it just to replace config files. But as boot loaders change and new features become available, the syntax and/or available commands within the config files will change over time... like when going from SLES10 to SLES11.


> I suppose I could be daring and try this: pygrub: add a menu.lst file to existing hosts... but these are physical disks, so I can't use the steps the TID lists, although I can easily fire up the SLES10 XEN host, boot into the XEN DomU and manually do what the rest of the section says to do, I guess.

Indeed a valid approach - just make sure those changes are compatible with the SLES10 environment, else you're stuck running a SLES11 Dom0 ;)

With regards,
Jens

KBOYLE
19-Oct-2012, 23:10
I also found the TID confusing. It implied that there was no menu.lst
and no grub boot loader and that resolving that would allow it to
boot; however, you say your menu.lst is already present.

When boot fails, do you still have access to the console? Maybe there
are some error messages that would help? Perhaps there is something in
the kernel ring buffer, if it is not too early in the boot process? It
can be displayed using "dmesg". I think there is a key combination too
(<ctrl><alt><F9>) but I'm not sure.



Petri Asikainen
21-Oct-2012, 20:05
Using multipath on the SAN?
Try adding features "no_partitions" (in multipath.conf) for your SAN LUNs.
This prevents multipath from creating maps for partitions;
those maps prevent direct access to the disks, and that's why VMs are
not booting from the LUN.
This issue affects only paravirtualized DomUs, or DomUs
where paravirtualized drivers are installed (like Windows servers).

My two cents,

Petri

On 19.10.2012 21:14, kjhurni wrote:
> Here's my setup: SLES 10 SP4 64-bit XEN host with two physical disk-backed DomUs (one paravirtualized SLES 10 SP3 64-bit, one fully virtualized SLES 11 SP1 32-bit). [...] However, when the DomU tries to boot all I get is:
> Boot failed
> Boot loader didn't return any data

paca
22-Oct-2012, 07:27
That should be features "1 no_partitions" in the SAN device section
of /etc/multipath.conf.
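Something like this (the vendor/product strings are placeholders - they must match your array as reported by "multipath -ll"):

devices {
    device {
        vendor "YOURSAN"
        product "YOURLUN"
        features "1 no_partitions"
    }
}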

On 21.10.2012 22:05, Petri Asikainen wrote:
> Using multipath on the SAN? Try adding features "no_partitions" (in multipath.conf) for your SAN LUNs.

kjhurni
22-Oct-2012, 14:33
paca wrote:
> That should be features "1 no_partitions" in the SAN device section of /etc/multipath.conf.

Interesting - yes, we are using multipathing. Odd that creating the devmapper stuff in SLES11 would break it, since it works fine in SLES10. Although, if it were the mapping, I'm puzzled why the fully virtualized DomU's /boot partition works fine in SLES11.

But I can give it a whirl and see what happens.

kjhurni
22-Oct-2012, 14:35
KBOYLE wrote:
> When boot fails, do you still have access to the console? Maybe there are some error messages that would help?

The boot fails in that you cannot even start the virtual machine via Virt Manager.
The FULLY virtualized one you can start; it boots up to the Grub menu, tries to boot, and then complains it cannot find the / partition.

I may try manually editing the file and making the changes or something.

KBOYLE
22-Oct-2012, 17:11
kjhurni wrote:

> Here's my setup:
> SLES 10 SP4 64-bit XEN Host
> I have TWO physical disk-backed DomU
> One is paravirtualized SLES 10 SP3 64-bit
> One is Fully Virtualized SLES 11 SP1 32-bit

I have one customer whose Dom0 is also SLES 10 SP4 64-bit. There were
issues, which have now been resolved, that prevented an upgrade. Upon
checking XenStore DomU configurations, some show:

> (bootloader /usr/bin/pygrub)

while others show:

> (bootloader /usr/lib/xen/boot/domUloader.py)
> (bootloader_args '--entry=xvda1:/boot/vmlinuz-xen,/boot/initrd-xen')

...depending on the version of SLES on the Dom0 when the DomU was created.


http://www.novell.com/support/kb/doc.php?id=7002815

...shows a sample configuration with
> bootloader="/usr/lib/xen/domUloader.sys"
but as you can see, mine is not the same.

Obviously, I'm very interested in any progress you make. I have a
production server here and need to understand just how an upgrade on
Dom0 to SLES11-SP2 will impact my DomU's.

Thank you for sharing.



KBOYLE
22-Oct-2012, 17:32
KBOYLE wrote:

> I have one customer whose Dom0 is also SLES 10 SP4 64-bit.

Correction:

The DomU is SLES11. That explains why one DomU was created with pygrub.

All DomU's are working properly. I wasn't even aware of the different
bootloaders until I looked. Still, I'll wait until I see how you make
out before upgrading to SLES11-SP2. :-)


paca
23-Oct-2012, 07:14
And remember, after changing/adding to /etc/multipath.conf, to run
mkinitrd and reboot the system.
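Roughly:

# rebuild the initrd so the multipath change is active at boot, then reboot
mkinitrd
reboot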

On 22.10.2012 16:34, kjhurni wrote:
> Interesting - yes, we are using multipathing. [...] But I can give it a whirl and see what happens.

kjhurni
23-Oct-2012, 19:32
KBOYLE wrote:
> Upon checking XenStore DomU configurations, some show (bootloader /usr/bin/pygrub) while others show (bootloader /usr/lib/xen/boot/domUloader.py), depending on the version of SLES on the Dom0 when the DomU was created. [...] Obviously, I'm very interested in any progress you make.

Okay, I've come to the conclusion that the TID is full of all sorts of problems/inaccuracies.

You cannot just do:

xm new -f file

because it'll complain about the DomID (how do you determine the domid of the Xen host?).

Also, in my case I am NOT upgrading the Xen host - at least not in the sense that I'd throw in the SLES11 SP2 ISO, boot from it, and "upgrade". Last time I tried that it totally wrecked my networking setup from SLES10 SP3 (granted, it was SLES11 at the time).

KBOYLE
23-Oct-2012, 20:58
kjhurni wrote:

> Okay I've come to the conclusion that the TID is full of all sorts of
> problems/inaccuracies

Perhaps it's time you provided some TID feedback. A revised TID may
not resolve your issue but it may prevent others from experiencing the
same frustrations you have!

Despite the "inaccuracies", it also appears that the TID is incomplete.
It would be much more helpful if a single document addressed *all* the
undocumented issues one might encounter when attempting such an upgrade.


KBOYLE
23-Oct-2012, 21:16
kjhurni wrote:

> You cannot just do the:
> xm new -f file
>
> Because it'll complain about the DomID (how do you determine the domid
> of the Xen host)?

When you reboot a Xen host, the hypervisor starts Domain 0. Being the
first domain, it is assigned a domain ID of 0 (zero). It is the same
every time the host is rebooted. Each time a DomU is started, it is
assigned the next available sequential number.

The first DomU started will be assigned "1". The second one started
will be assigned "2". If the first one is stopped and restarted, it
will be assigned "3". And so it goes.

Before re-importing a DomU configuration into the XenStore, I make sure
the original has been removed - after backing it up, of course. Make
sure to shut down the DomU (MyDomU) first.



xm list -l MyDomU > MyDomu.py
xm delete MyDomU
xm new -f MyRevisedDomU.py


I have never had any issues with this procedure.


kjhurni
23-Oct-2012, 21:58
jmozdzen wrote:
> Actually, they're not different. SLES11 has simply dropped the Xen scripts that set up the bridging environment - you do that in advance (outside the Xen configuration, e.g. via YaST) and then, in the DomU config file, reference the bridge name you want the VIF to attach to.

Well, actually, the network bridging IS different. I just attempted to import the "old" config (from the SLES10 machine) and the networking doesn't even match up with how it is in SLES 11. Although that may be because I was using bridging with a custom script (the only way you really could do it in SLES 10).

Anyway, it seems the TID that Novell has is almost completely worthless. The first option is invalid, as this isn't a new install. The second option is vague, and I can't even get NTS to answer me back as to what it means.

The third option I cannot get to work because it won't let me chroot /mnt or whatever (I get an error about /bin/bash not found or something). Never had that happen before, but then again I usually only do that via the rescue installation.

The 4th option also doesn't work if you ever made a change to the configuration. And I'm not sure the command is listed properly: Novell has you running xm with a -f instead of a -F. I believe -f only works with the legacy file, whereas -F works with a file you exported via the xm list -l blah > blah command.

Now, I COULD get the 4th option to semi-work by using -F to import the exported VM. I still got the error, but at that point it would let me modify the bootloader line.

Once I did that (paravirtualized DomU, BTW), it would boot, but then it refused to find /dev/xvda3 for some reason (it IS there, because the DomU obviously boots fine on SLES10). Oh, and that's also where I found that the bridging is totally different: it was trying to use "eth1", which isn't even valid anymore; you have to pick "paravirtualized - eth1 (br1)" or whatever from the list.

So my whole opinion of the process is that you should give up hope on upgrading a SLES10 SP3 Dom0 to SLES11 SP2 and just start the entire thing from scratch and manually reinstall all your DomUs.

Apparently nobody has ever bothered to actually TEST taking a DomU from SLES10 SP3 and trying to get it to run under a SLES11 SP2 Dom0. At least the authors of the TID didn't try it, because none of their stuff works.

I may be daring, snapshot the LUNs, and see what happens on an actual upgrade of the host OS (Dom0), but so far it seems one cannot easily take a DomU created on SLES10 SP3 and run it on a SLES11 SP2 Dom0.

KBOYLE
24-Oct-2012, 16:09
KBOYLE wrote:

>

> xm list -l MyDomU > MyDomu.py
> xm delete MyDomU
> xm new -f MyRevisedDomU.py
>

Oops! Should have used a capital F.



xm list -l MyDomU > MyDomu.py
xm delete MyDomU
xm new -F MyRevisedDomU.py



kjhurni
24-Oct-2012, 20:41
KBOYLE wrote:
> Oops! Should have used a capital F.
>
> xm list -l MyDomU > MyDomu.py
> xm delete MyDomU
> xm new -F MyRevisedDomU.py

Yes, that's where I messed up as well. The TID said to use "-f", not "-F".
However, apparently they meant that the legacy config files in the /etc/xen directory only work with -f.
But those don't work for me, since they're like 2 years old and we've changed things since then, so apparently I have no choice in this whole mess.

I did ask NTS to ask about IMPORTING a VM from SLES10 into SLES11, and they're checking.

Otherwise it looks like I will either have to:

1) Upgrade the XEN host (something I'm loath to do)
or
2) Reinstall the VM (DomU) on SLES11 from scratch. In THIS case it's NAM stuff, so it's not QUITE that bad for the IDP and LAG/MAG.
But it would be a PITA for the Admin Console.

Makes me want to use VMware.

KBOYLE
27-Oct-2012, 17:03
kjhurni wrote:

> The boot fails in that you cannot even start the virtual machine via
> virt manager.

I came across this TID which may help.

TID 7001376 How to manually install GRUB after a failed installation or
disk corruption of the MBR.
http://www.novell.com/support/kb/doc.php?id=7001376

If TID 7002815 suggests your issue can occur when grub is not installed
on the DomU, and TID 7001376 says there are instances when grub was not
installed properly, then maybe...


kjhurni
29-Oct-2012, 15:59
KBOYLE wrote:
> I came across this TID which may help. TID 7001376 How to manually install GRUB after a failed installation or disk corruption of the MBR.
> http://www.novell.com/support/kb/doc.php?id=7001376

Thanks for the info, but unfortunately I am not using LVM or EVMS, so the TID doesn't really help much.

I'm ASSUMING grub is installed, because I can go into YaST -> Bootloader and it shows grub being used. However, with paravirtualized DomUs I vaguely remember there being some odd funkiness about where the boot stuff really lives (the Dom0 or the DomU)?

jmozdzen
29-Oct-2012, 16:37
Hi kjhurni,

> However, with paravirtualized DomU I vaguely remember there's some odd/funkiness with the boot stuff as to where it's really at (the Dom0 or the DomU)?

as far as our Xen setup is concerned (SLES10 and SLES11SP1), booting a DomU involves the following steps:

- mount the DomU's virtual disk in Dom0 for bootup

- determine initrd and kernel (see comment below)

- copy kernel and initrd to Dom0 disk space

- unmount DomU's virtual disk in Dom0

- mount DomU's virtual disk(s) for normal operation

- start VM using Dom0's copy of DomU's initrd & kernel

- delete Dom0's copy of DomU's initrd & kernel

Determining the initrd and kernel file is the part done by the different boot loader settings. It seems that "pygrub" fetches its information from the DomU's menu.lst - that's why GRUB "needs to be installed", or at least a proper menu.lst is required. Of course you can also set up direct pointers to these files, even up to files stored locally on Dom0 (e.g. using bootloader_args and a supporting boot loader like "domUloader.py").

So in order to boot your DomU, three things have to work:
- access to the DomU kernel/initrd prior to DomU startup (meaning "Dom0 can access DomU's disk to copy those files")
- working DomU kernel/initrd (DomU kernel must work with Xen version of Dom0)
- proper setup (boot loader must be able to determine which DomU kernel/initrd to copy)

"boot loader didn't return any data" usually means that it couldn't locate/copy kernel/initrd.

If you can identify the kernel and initrd file you'd like to use, maybe using domUloader will get the VMs up&running?

I'm unsure what the current problem state is: What boot loader (and it's configuration) is failing under SLES11? What's in xend.log after (failed) DomU start?
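(On the Dom0 that would be something like

tail -n 50 /var/log/xen/xend.log

right after the failed start.)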

Regards,
Jens

kjhurni
29-Oct-2012, 17:01
jmozdzen wrote:
> I'm unsure what the current problem state is: which boot loader (and which configuration) is failing under SLES11? What's in xend.log after the (failed) DomU start?

Thanks Jens

The original issue was that I could not import a DomU created on SLES10 SP3 into a SLES11 SP2 Xen host. I would get the error in my original post (boot failed, blah blah).

I opened an SR, and that's where they pointed me to the TID that indicates that, due to kernel changes, you cannot do that. The TID has some suggestions, most of which either assume that you've never changed the original VM (i.e., your legacy files are identical to your currently exported ones - which in my case they are NOT), or that you can manually create/adjust the files.

I ended up basically creating a new DomU that was "empty" (I just needed the machine definition in Virt Manager), and then edited it to point to the "SLES10 DomU" disk, along with adjusting the bootloader args per the TID.

At that point, the paravirtualized SLES10-created DomU will actually boot, but fails to find the / partition and all you get is the lovely $ prompt.

The FULLY virtualized machine created in SLES10 will actually "import" (for lack of a better word) into the SLES11 Xen host, but it suffers the same fate: you boot up, see the Grub menu, it starts to boot, and then it cannot find the / partition.

Originally I thought it was due to MPIO, as someone posted that issue here, so I rebuilt the SLES11 XEN host without MPIO, and the same thing occurs.

I know that normally (on a physical machine) the "cannot find /dev/blah /" errors are due to a bad menu.lst or /etc/fstab or something, and normally I'd boot into the rescue system; but of course, doing THAT on paravirtualized XEN is a royal nightmare (I tried the old TID on that, but the "chroot /mnt" command gives me errors about not finding /bin/bash).
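(What I tried was roughly this, from the Dom0 with the DomU shut down - partition numbers are from my setup, and it's the chroot step that fails:

mount /dev/disk/by-id/scsi-bighairyGUID-part3 /mnt
mount /dev/disk/by-id/scsi-bighairyGUID-part1 /mnt/boot
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
chroot /mnt /bin/bash
)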

It seems, so far, that the only info Novell has on the whole situation is to upgrade the XEN host, as moving/running the DomU around isn't supported UNLESS you're lucky enough to have the legacy files match the current state of your DomU, in which case supposedly it would work.

jmozdzen
29-Oct-2012, 17:20
Hi kjhurni,

thanks for the summary - I was briefly following this thread over the days, but somehow got lost :[

> At that point, the paravirtualized SLES10-created DomU will actually boot, but fails to find the / partition and all you get is the lovely $ prompt.

You might have already checked: is the device name of your virtual disk (as the DomU will see it) the same as before the "upgrade"? Looking at the new configuration should give the new name, even if the DomU doesn't start - something like "disk=[ '__some_Dom0_disk_description___,xvda,w', ]" in the configuration file. If the disk's name ("xvda" in this example) doesn't match, then most likely the DomU kernel loader (or rather its initrd script) will not be able to find the device, and you might want to add to your config something like
extra="root=/dev/xvda3"
or however the root file system is to be identified inside the DomU.

Regards,
Jens

KBOYLE
29-Oct-2012, 20:17
kjhurni wrote:

> At that point, the paravirtualized SLES10-created DomU will actually
> boot, but fails to find the / partition and all you get is the lovely
> $ prompt.

Have you tried to run mkinitrd at this point? If the DomU is frozen,
perhaps you can try the technique described in:

> TID 7001376 How to manually install GRUB after a failed installation
> or disk corruption of the MBR.
> http://www.novell.com/support/kb/doc.php?id=7001376



kjhurni
29-Oct-2012, 21:06
jmozdzen wrote:
> Is the device name of your virtual disk (as the DomU will see it) the same as before the "upgrade"? If the disk's name doesn't match, the DomU kernel loader will not be able to find the device, and you might want to add something like extra="root=/dev/xvda3" to your config.

I'm pretty sure I double-checked. The physical disk is correct (/dev/disk/by-id/scsi-GUIDTHINGY-part3).
I'll have to look inside, but I'm 99.9% sure it's still:
/dev/xvda3

kjhurni
29-Oct-2012, 21:48
> I'm pretty sure I double-checked. The physical disk is correct (/dev/disk/by-id/scsi-GUIDTHINGY-part3). I'll have to look inside, but I'm 99.9% sure it's still /dev/xvda3.

Okay, the DomU is most certainly using grub (if YaST -> Bootloader is correct).
The "edit configuration files" view seems to indicate that the menu.lst for grub is set to:

root (hd0,0)
kernel /boot/vmlinuz-2.6.16.60-0.832.2-xen
initrd /boot/initrd-2.6.16.60-0.83.2-xen

The SLES10 DomU config file has the bootloader args as:
(bootloader_args '--entry=xvda1:/boot/vmlinuz-xen,/boot/initrd-xen')

whereas the SLES11 DomU would have something like:
(bootloader_args -q)

jmozdzen
29-Oct-2012, 23:09
> whereas, the SLES11 DomU would have something like:
> (bootloader_args -q)

Have you tried setting

(bootloader_args '--entry=xvda1:/boot/vmlinuz-2.6.16.60-0.832.2-xen,/boot/initrd-2.6.16.60-0.83.2-xen')

(using the appropriate syntax for your file, of course)

Regards,
Jens