
SLES 12 SP1 sizing: / on SLES 12 box compared to SLES 11



skunkboy
14-Apr-2016, 16:52
Hey,

New to btrfs and its snapshots. I've been working with them on some test boxes and noticed that after large sets of patches are applied, the snapshots can grow rapidly.
On SLES 11 we could comfortably keep / on 15 GB disks; we keep applications and users on other drives. Does anyone have rules of thumb or sizing information for snapshot growth?
One of our test boxes was 54% full before I installed patches earlier this month; it went to 92% full because of the snapshots created during the patching.

I've looked around online and can find docs on how snapshots work, but no best practices or things to watch out for.

I've seen some suggestions on here to reduce the number of snapshots kept, which makes sense and would reduce the amount of drive space devoted to snapshots.

Any pointers, suggestions, or helpful links?

Thanks,
Matt

malcolmlewis
14-Apr-2016, 18:30

Hi
If you want to use snapper with btrfs, 15G is a bit light (as you have
discovered).

Let's see, I would configure /etc/snapper/configs/root to keep only a few snapshots.
I use:


NUMBER_LIMIT="4"
NUMBER_LIMIT_IMPORTANT="2"
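
A couple of snapper commands make it easy to check those limits and apply them by hand (a sketch; it assumes the default config name of "root"):

```shell
# Show the current cleanup limits for the root config
snapper -c root get-config | grep NUMBER_LIMIT

# Apply the number-based cleanup policy right away,
# deleting snapshots beyond the configured limits
snapper -c root cleanup number

# See which snapshots are left
snapper -c root list
```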


My / is hovering around 20G, per the output of:


btrfs fi usage /
Overall:
Device size: 62.29GiB
Device allocated: 19.03GiB
Device unallocated: 43.26GiB
Device missing: 0.00B
Used: 13.83GiB
Free (estimated): 47.24GiB (min: 47.24GiB)
Data ratio: 1.00
Metadata ratio: 1.00
Global reserve: 288.00MiB (used: 0.00B)
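
If you want to see which snapshots are actually eating the space, snapper can list them; per-snapshot space accounting needs btrfs quotas enabled (a sketch, and note that enabling quotas carries some performance cost):

```shell
# List all snapshots for the root config
snapper -c root list

# Per-snapshot space accounting requires btrfs quotas
btrfs quota enable /
btrfs qgroup show /
```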


There is a daily cron job, /etc/cron.daily/suse.de-snapper, which you can run manually after a
big update. There should also be a weekly cron job,
/etc/cron.weekly/btrfs-balance.sh; running that manually should
clean things up as well. It's a symlink to:


ls -la /etc/cron.weekly/btrfs-balance.sh
lrwxrwxrwx 1 root root 44 Apr 13 11:41 /etc/cron.weekly/btrfs-balance.sh -> /usr/share/btrfsmaintenance/btrfs-balance.sh

As long as you have the btrfsmaintenance package installed.
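
So after a big round of patching, a manual cleanup pass might look like this (a sketch; the paths are the SLES 12 defaults mentioned above):

```shell
# Run the daily snapper cleanup by hand
/etc/cron.daily/suse.de-snapper

# Run the weekly balance to compact partially used block groups
/etc/cron.weekly/btrfs-balance.sh

# Check how much space came back
btrfs fi usage /
```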

If you get stuck with a full disk, have a read here:
https://www.suse.com/documentation/sles-12/stor_admin/data/sect_filesystems_trouble.html

Good reference here as well:
https://btrfs.wiki.kernel.org/index.php/FAQ

--
Cheers Malcolm °¿° LFCS, SUSE Knowledge Partner (Linux Counter #276890)
SUSE Linux Enterprise Desktop 12 SP1|GNOME 3.10.4|3.12.53-60.30-default