
View Full Version : Backup of data partition of a VM



John_Gill
05-Aug-2014, 10:47
Hi,

Running SLES 11sp3 with XEN virtualization.

The host's partition table has 3 TB allocated to /dev/sdb.
The VM's partition table has the 3 TB /dev/xvdb2, which is mounted as /var/data.

My VM is running SLES 11 SP3 as a 100 GB image containing the OS and other stuff. I want to back up both the VM's raw disk image and /var/data.
So from the host, I shut down the VM and I can then backup /var/lib/xen/images/archive/disk0.raw. All good so far.

Now I need to access the /var/data on the VM.
So my thinking is mount 3TB VM partition on the host as "mount /dev/sdb /mnt/data"
Then run the backup and then umount /mnt/data and restart the VM.

So will this work, break stuff or cause chaos ..... or is there a better way?
Your thoughts will be appreciated.
Regards
John :confused:

jmozdzen
05-Aug-2014, 15:14
Hi John,

Hi,

Running SLES 11sp3 with XEN virtualization.

The host partition table has a 3TB allocated to /dev/sdb
The vm's partition table has the 3TB /dev/xvdb2 which is mounted as /var/data

That's strange - typically you map whole block devices, making the Dom0's (AKA "host") device /dev/sdb appear as "/dev/xvdb" inside the DomU ("VM").

If you think my assumption doesn't fit your situation, please post the block device configuration of your DomU so we can see the actual mapping.


My VM is running SLES11sp3 as 100gb image containing the OS and other stuff. I want to run a backup of the VM's raw disk image and the /var/data.
So from the host, I shut down the VM and I can then backup /var/lib/xen/images/archive/disk0.raw. All good so far.

Now I need to access the /var/data on the VM.
So my thinking is mount 3TB VM partition on the host as "mount /dev/sdb /mnt/data"
Then run the backup and then umount /mnt/data and restart the VM.

Sounds wrong to me: I assume that Dom0:/dev/sdb is DomU:/dev/xvdb (see my comment above), and that you partitioned that device (now irrelevant whether done inside the DomU or the Dom0). Hence you'd have to "mount /dev/sdb2 /mnt/data".


So will this work, break stuff or cause chaos ..... or is there a better way?
Your thoughts will be appreciated.
Regards
John :confused:

The remaining steps (umount, then restart the DomU) sound fine. As an extra safety measure, you might want to mount the FS in read-only mode (mount -o ro), so that even if the DomU is restarted while the FS is still mounted on Dom0, no accidental write operations from Dom0 can occur and the file system does not get corrupted. (Of course, when this situation arises, read errors on Dom0 are likely - but you'd only have to umount on Dom0 and wouldn't cause any *harm*.)

Regards,
Jens

John_Gill
05-Aug-2014, 15:51
Hi,

I have attached screenshots of the Dom0 and DomU partition tables. Please note that the storage is multipathed to a SAN.
Dom0 86
DomU 87

Thanks in advance.
Regards
John

jmozdzen
05-Aug-2014, 17:08
Hi John,

unfortunately, the screenshots don't make things any clearer to me.

The Dom0 3TB "drive" has a single partition (009_part1), while DomU's xvdb *seems* to have more than one partition or a different layout - /dev/xvdb2 sounds like the second "primary" partition slot on that drive, albeit the only one (as it takes up all the space).

How's xvdb configured in the DomU configuration? If all else fails, let us know the relevant section from "xm list -l DomUname", run on Dom0. Look for a "(device (vbd" section with a "(dev xvdb:disk)" element - I'm after the value of the "uname" element.

If, as I now assume (and I may be wrong), you have *0009_part1 mapped to xvdb, you'll basically have a *Dom0 partition* that is a *complete DomU disk* (including another partition table!). You'd then need to use e.g. "kpartx" to create Dom0 pseudo devices that will allow you to mount the "DomU partition".

It should also be verifiable by running "fdisk -l /dev/mapper/.....009_part1" inside Dom0, which should print the same partition table as you see when running "fdisk -l /dev/xvdb" inside the DomU.
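That check could be sketched like this (a sketch only: it uses the full multipath device name that appears later in this thread, and the commands are wrapped in a function guarded behind an explicit flag, since they need root on the Xen Dom0):

```shell
#!/bin/sh
# Sketch only: the device name below is the one from this thread and will
# differ on any other system.
DISK=/dev/mapper/3600507680280883e2800000000000000009_part1

verify_mapping() {
    # On Dom0: print the partition table of the mapped "disk"...
    fdisk -l "$DISK"
    # ...then compare with "fdisk -l /dev/xvdb" run *inside* the DomU.
    # If the mapping guess is right, both tables are identical.
}

# Guarded so the sketch can be read/sourced without touching any device.
if [ "${1:-}" = "--run" ]; then verify_mapping; fi
```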

Regards,
Jens

John_Gill
06-Aug-2014, 09:57
Hi,

This is how I set up the disks, multipathed to an IBM Storwize 7000:
Dom0
/dev/mapper/36005...8 1TB
/dev/mapper/36005...8_part1 2GB - swap
/dev/mapper/36005...8_part2 1021GB - /

/dev/mapper/36005...9 3TB
/dev/mapper/36005...9_part1 3TB

/dev/sda 1TB /dev/mapper/36005... 8
/dev/sdb 3TB /dev/mapper/36005... 9
/dev/sdc 1TB /dev/mapper/36005... 8 (duplicated by the multipath)
/dev/sdd 3TB /dev/mapper/36005... 9 (duplicated by the multipath)

So far so good...

DomU
/dev/xvda 100GB
/dev/xvda1 1GB boot
/dev/xvda2 2GB swap
/dev/xvda3 97GB /

/dev/xvdb 3TB
/dev/xvdb2 3TB /var/data

In DomU I can see and have full access to the 3TB partition that is mounted on /var/data

*****************************
*** Output from "xm list -l archive" ***
*****************************

(device
(vbd
(protocol x86_64-abi)
(uuid 2f46e149-e213-403d-b457-dd5e9b8d7ea0)
(bootable 1)
(dev xvda:disk)
(uname file:/var/lib/xen/images/archive/disk0.raw)
(mode w)
(backend 0)
(VDI '')
)
)
(device
(vbd
(protocol x86_64-abi)
(uuid 45fa6a99-e0d3-7289-07dc-419caa51deaf)
(bootable 0)
(dev xvdb:disk)
(uname phy:/dev/mapper/3600507680280883e2800000000000000009_part1)
(mode w)
(backend 0)
(VDI '')
)
)

**********************************************************
*** Output from "fdisk -l /dev/mapper/36005...9_part1" ***
**********************************************************
WARNING: GPT (GUID Partition Table) detected on '/dev/mapper/36005...9_part1'!
The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/mapper/36005...9_part1: 3298.5 GB, 3298532786176 bytes
255 heads, 63 sectors/track, 401023 cylinders, total 6442446848 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Device Boot Start End Blocks Id System
/dev/mapper/36005...9_part1p4 1 1 0+ ee GPT


Thanks again.
John

jmozdzen
06-Aug-2014, 13:46
Hi John,


(device
(vbd
(protocol x86_64-abi)
(uuid 45fa6a99-e0d3-7289-07dc-419caa51deaf)
(bootable 0)
(dev xvdb:disk)
(uname phy:/dev/mapper/3600507680280883e2800000000000000009_part1)
(mode w)
(backend 0)
(VDI '')
)
)

**********************************************************
*** Output from "fdisk -l /dev/mapper/36005...9_part1" ***
**********************************************************
WARNING: GPT (GUID Partition Table) detected on '/dev/mapper/36005...9_part1'!
The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/mapper/36005...9_part1: 3298.5 GB, 3298532786176 bytes
255 heads, 63 sectors/track, 401023 cylinders, total 6442446848 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Device Boot Start End Blocks Id System
/dev/mapper/36005...9_part1p4 1 1 0+ ee GPT

seems I guessed right concerning the disk layout: You have mapped Dom0's *0009_part1 (the first partition of that multipathed device) to xvdb, making that single partition appear as a complete disk inside the VM.
I was wrong in assuming you created a traditional partition table inside the VM - it's a new-style GPT, hence asking for fdisk output was a bad idea ;)

But still, you can use kpartx to map the GPT partitions inside the *0009_part1 "device" to device-mapper devices, so that you can mount the resulting DM device to access the file system (mounted as /var/data inside the DomU).
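On Dom0, that could look roughly like this (a sketch, not a tested script: the device name is the one from this thread, and the pseudo-device name kpartx creates - here assumed to end in "p2", matching xvdb2 - should be checked in /dev/mapper after the "-a" call):

```shell
#!/bin/sh
# Sketch: expose the GPT partitions inside *009_part1 on Dom0 with kpartx,
# then mount the DomU's data file system read-only for the backup.
# Device and pseudo-device names are assumptions from this thread - verify
# them in /dev/mapper on your own system.
DISK=/dev/mapper/3600507680280883e2800000000000000009_part1

mount_domu_fs() {
    kpartx -a "$DISK"                  # creates e.g. ${DISK}p2
    mount -o ro "${DISK}p2" /mnt/data  # xvdb2's file system, read-only
}

unmount_domu_fs() {
    umount /mnt/data
    kpartx -d "$DISK"                  # remove the pseudo devices again
}

# Guarded: needs root on the Xen Dom0, so nothing runs by default.
if [ "${1:-}" = "--run" ]; then
    mount_domu_fs
    # ... run the backup here ...
    unmount_domu_fs
fi
```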

Regards,
Jens

John_Gill
08-Aug-2014, 08:07
Hi Jens,

Shot for the help. I fiddled with "kpartx" and now I can see the 3TB volume and run backups.

Regards
John

jmozdzen
08-Aug-2014, 09:31
Hi John,

thanks for letting us know you succeeded - and just as a reminder: please have kpartx remove those devices (kpartx -d) before restarting the DomU. It won't cause any immediate harm not to do so, but you'll clutter your /dev/mapper directory and get changing device names on every consecutive kpartx ("create devices") call - nothing nice when you're scripting things. Not to forget that you *might* run some tool that scans for "disk partitions", sees these devices and acts on them, which isn't wanted either.
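Put together, the whole cycle this thread arrived at might be scripted roughly like this (the DomU name "archive" and the image path are from the thread; the backup target, the kpartx pseudo-device name, and using "xm start" to restart the managed domain are assumptions - verify everything before use):

```shell
#!/bin/sh
# Sketch of the complete backup cycle discussed in this thread. Needs root
# on the Xen Dom0; paths and device names must be verified on your system.
DISK=/dev/mapper/3600507680280883e2800000000000000009_part1

backup_cycle() {
    xm shutdown -w archive                            # stop the DomU, wait
    cp /var/lib/xen/images/archive/disk0.raw /backup/ # OS disk image
    kpartx -a "$DISK"                                 # map the GPT partitions
    mount -o ro "${DISK}p2" /mnt/data                 # the DomU's /var/data
    tar -czf /backup/var-data.tar.gz -C /mnt/data .
    umount /mnt/data
    kpartx -d "$DISK"   # remove the mappings BEFORE restarting the DomU
    xm start archive
}

# Guarded so the sketch does nothing unless explicitly invoked.
if [ "${1:-}" = "--run" ]; then backup_cycle; fi
```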

Regards,
Jens