slow performance on logical volumes



sharfuddin
10-Apr-2013, 19:28
It's a SUSE 11 SP2 Xen virtual machine with 8 GB of memory, running kernel 3.0.58-0.6.6-xen #1 SMP.

We recently changed from traditional/static partitioning (/dev/xvdc{1,2,3}) to LVM-based volumes, and since then we have been seeing serious performance issues. For example, with the old static partitioning a backup took only 45 minutes; now it takes 2 hours or more. Our end users are complaining about performance as well.
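For reference, the new layout (full details further down) would have been created with commands roughly like the following. This is only a reconstruction from the pvdisplay/lvdisplay output below, not a transcript of the actual migration:

pvcreate /dev/xvdc
vgcreate VG-ORG-ORACLE /dev/xvdc
lvcreate -L 15G -n sap-ora-sapmnt VG-ORG-ORACLE
lvcreate -L 30G -n sap-ora-usrsap VG-ORG-ORACLE
lvcreate -L 248G -n sap-oracle VG-ORG-ORACLE
lvcreate -L 50G -n oracle-sap-backup VG-ORG-ORACLE
mkfs.ext3 /dev/VG-ORG-ORACLE/sap-oracle    # and likewise for the other LVs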

running "hdparm -t" on the raw disk /dev/xvdc returns far better results then running "hdparm -t" on logical volumes



/dev/xvdc:
Timing buffered disk reads: 742 MB in 3.00 seconds = 247.31 MB/sec

/dev/xvdc:
Timing buffered disk reads: 670 MB in 3.00 seconds = 223.31 MB/sec

/dev/xvdc:
Timing buffered disk reads: 718 MB in 3.00 seconds = 239.23 MB/sec

while running "hdparm -t" on a logical volume returns:


/dev/mapper/VG--ORG--ORACLE-sap--oracle:
Timing buffered disk reads: 148 MB in 3.01 seconds = 49.17 MB/sec

/dev/mapper/VG--ORG--ORACLE-sap--oracle:
Timing buffered disk reads: 192 MB in 3.10 seconds = 62.02 MB/sec

/dev/mapper/VG--ORG--ORACLE-sap--oracle:
Timing buffered disk reads: 154 MB in 3.03 seconds = 50.87 MB/sec

/dev/mapper/VG--ORG--ORACLE-sap--oracle:
Timing buffered disk reads: 184 MB in 3.03 seconds = 60.77 MB/sec


Looking at these results, it seems there must be something wrong with the logical volumes.
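One thing that might be worth ruling out before blaming LVM itself: "hdparm -t" does sequential buffered reads, so a different read-ahead setting on the device-mapper devices than on the raw disk can skew the comparison. The lvdisplay output below shows read-ahead currently at 1024 sectors on three of the LVs and 8192 on sap-oracle, while the raw disk's value isn't shown and may differ. A quick way to compare the settings and, purely as a test, align them (8192 is only an example value):

blockdev --getra /dev/xvdc
blockdev --getra /dev/mapper/VG--ORG--ORACLE-sap--oracle
lvchange -r 8192 VG-ORG-ORACLE/sap-oracle    # set read-ahead for the test; revert with 'lvchange -r auto'
hdparm -t /dev/mapper/VG--ORG--ORACLE-sap--oracle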

Please help and suggest what to check.

The following are the storage details:


df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/xvda3 ext3 47G 7.6G 38G 17% /
devtmpfs devtmpfs 4.1G 140K 4.1G 1% /dev
tmpfs tmpfs 8.0G 179M 7.9G 3% /dev/shm
/dev/xvda1 ext3 243M 24M 207M 11% /boot
/dev/mapper/VG--ORG--ORACLE-sap--oracle ext3 245G 222G 11G 96% /oracle
/dev/mapper/VG--ORG--ORACLE-sap--ora--sapmnt ext3 15G 1.7G 13G 12% /sapmnt
/dev/mapper/VG--ORG--ORACLE-sap--ora--usrsap ext3 30G 24G 4.3G 85% /usr/sap


pvdisplay
--- Physical volume ---
PV Name /dev/xvdc
VG Name VG-ORG-ORACLE
PV Size 350.00 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 89599
Free PE 1791
Allocated PE 87808
PV UUID jhypNN-fPpy-2uzc-nsxP-qZIl-9JAg-O9aenE


vgdisplay
--- Volume group ---
VG Name VG-ORG-ORACLE
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 6
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 4
Open LV 4
Max PV 0
Cur PV 1
Act PV 1
VG Size 350.00 GiB
PE Size 4.00 MiB
Total PE 89599
Alloc PE / Size 87808 / 343.00 GiB
Free PE / Size 1791 / 7.00 GiB
VG UUID MSnb2T-A5Pf-3i7B-5sz2-f10M-2T2s-uCbdJn

lvdisplay
--- Logical volume ---
LV Name /dev/VG-ORG-ORACLE/sap-ora-sapmnt
VG Name VG-ORG-ORACLE
LV UUID mSM4Mg-jxoV-phr5-8p5S-i5XY-VSbN-p6McFk
LV Write Access read/write
LV Status available
# open 1
LV Size 15.00 GiB
Current LE 3840
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 1024
Block device 253:0

--- Logical volume ---
LV Name /dev/VG-ORG-ORACLE/sap-ora-usrsap
VG Name VG-ORG-ORACLE
LV UUID zbwf4e-E3HL-6Vnw-ayXK-19PH-Dtir-uqi52g
LV Write Access read/write
LV Status available
# open 1
LV Size 30.00 GiB
Current LE 7680
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 1024
Block device 253:1

--- Logical volume ---
LV Name /dev/VG-ORG-ORACLE/sap-oracle
VG Name VG-ORG-ORACLE
LV UUID 4zt2BF-BhIJ-a4I7-mr9i-KVBk-c6Dt-BvztDQ
LV Write Access read/write
LV Status available
# open 1
LV Size 248.00 GiB
Current LE 63488
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:2

--- Logical volume ---
LV Name /dev/VG-ORG-ORACLE/oracle-sap-backup
VG Name VG-ORG-ORACLE
LV UUID vXL33N-CaUY-ILsT-YwqE-Hn0Z-xLRV-Pm2aRS
LV Write Access read/write
LV Status available
# open 1
LV Size 50.00 GiB
Current LE 12800
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 1024
Block device 253:3

sharfuddin
10-Apr-2013, 20:54
I think I have found the solution.

Our Xen hypervisor (Dom0) is using LVM, and we are also using LVM in our virtual machines (DomU), which is not recommended:


Only use Logical Volume Manager (LVM) on Dom0 or DomU, but not both
https://www.suse.com/support/kb/doc.php?id=3559698

Regards

jmozdzen
10-Apr-2013, 21:15
Hi sharfuddin,

we ran that Dom0/DomU LVM mix for a long time before switching to dedicated SAN LUNs for our VMs, without noticeable performance degradation, and certainly not to the extent you are experiencing. Granted, this was on SLES 10, but the recommendation originated in that time frame too, AFAIK.

Were those non-LVM numbers above measured before switching to LVM, or during a test after switching back to non-LVM?

Should you ever get to the point where you switch to LVM in the DomU with physical partitioning in Dom0 and regain the original performance, I'd be glad to hear about it, too.

Regards,
Jens

KBOYLE
10-Apr-2013, 21:40
sharfuddin wrote:

>
> I think I have found the solution.
>
> Our Xen hypervisor (Dom0) is using LVM, and we are also using LVM in
> our virtual machines (DomU), which is not recommended:
>
> > Only use Logical Volume Manager (LVM) on Dom0 or DomU, but not both
> https://www.suse.com/support/kb/doc.php?id=3559698
>
> Regards

Perhaps... perhaps not!

That caveat has been there since 2008 without any additional
explanation. I have tried in vain to obtain clarification. For example,
is it cautioning against either or both of these scenarios?

1. LVM is used in Dom0 but sda3 is assigned to DomU. LVM is used in
DomU.

2. LVM is used in Dom0 and LV1 is assigned to DomU. LVM is also used in
DomU.
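
To make the two cases concrete, the disk line in the Xen guest configuration would look roughly like this (the device and volume names here are made up purely for illustration):

disk = [ 'phy:/dev/sda3,xvda,w' ]           # case 1: physical partition from Dom0, LVM inside the DomU
disk = [ 'phy:/dev/vg_dom0/lv1,xvda,w' ]    # case 2: Dom0 logical volume, LVM inside the DomU as well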

The TID says that "I/O performance suffers" with no further
explanation. Certainly LVM can be misconfigured and performance can
suffer, but there may be other factors to consider.

If DomU and Dom0 each try to optimise I/O without knowing the other one
is trying to do the same, performance can suffer.
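
For instance, both layers run their own I/O scheduler and do their own read-ahead. Just as an illustration (not a quote from the TID below), the scheduler in use inside the DomU can be checked and, for a test, changed like this:

cat /sys/block/xvdc/queue/scheduler
echo noop > /sys/block/xvdc/queue/scheduler    # often suggested for virtual guests; revert if it doesn't help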

Have a look at TID 7009616:
Virtual SLES11SP1 shows bad IO performance when accessing raw disks
http://www.novell.com/support/kb/doc.php?id=7009616

If you care to try the suggestions in this TID, please report back what
you learn.

--
Kevin Boyle - Knowledge Partner