
SES5 via NFS to VMware 6.5: OK=1, FAIL=3



polezhaevdmi
20-Dec-2017, 17:12
Happy MidWinter!

I tried to mount an SES5 file service as an NFS datastore on ESXi 6.5, using different protocol versions (NFS v3/v4) and backends (Object store / CephFS). The results:
- NFS v3 + CephFS mounts and stores VMDKs fine;
- NFS v3 + Object store mounts, but reports zero available space, so writes are not possible;
- Neither NFS v4 combination would mount at all (see the esxcli sketch below this list).
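For reference, adding the datastores from the ESXi shell would look roughly like the sketch below; the server address and share path are the ones from the Linux mount example further down, and the datastore names are made up for illustration:

# NFS v3 datastore (the combination that works with CephFS):
esxcli storage nfs add --host=172.18.66.74 --share=/nfsuobj4/nfsbobj4 --volume-name=ses5-nfs3
# NFS v4.1 datastore (neither v4 combination would mount):
esxcli storage nfs41 add --hosts=172.18.66.74 --share=/nfsuobj4/nfsbobj4 --volume-name=ses5-nfs41
# List what actually got mounted:
esxcli storage nfs list
esxcli storage nfs41 list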

Additional info:
- The SES5 cluster is fresh and fully updated;
- The deployment was performed with Salt (as documented);
- NFS-Ganesha and the users were configured via the openATTIC GUI;
- The NFS guides I followed are here (http://masteringvmware.com/how-to-add-nfs-datastore/) and here (http://wahlnetwork.com/2015/02/02/nfs-v4-1/);
- Attempts to mount these exports on a Linux workstation (Debian) gave almost the same results (1 OK and 3 FAILs). For example, on a 138 GB share:


# mount.nfs 172.18.66.74:/nfsuobj4/nfsbobj4 /mnt
# cd /mnt
# ls -la
total 4
drwxrwxrwx 1 nobody nogroup 0 Dec 20 16:25 .
drwxr-xr-x 23 root root 4096 Mar 16 2017 ..
# mkdir 1
# cd 1
# dd if=/dev/zero of=dd.tmp bs=4K
dd: error writing ‘dd.tmp’: Input/output error
25018+0 records in
25017+0 records out
102469632 bytes (102 MB) copied, 0.737502 s, 139 MB/s


Thus, I suspect something is wrong with the SES5 NFS + Object store stack, or with the configuration engine scripts.
Regarding the NFS v4 support in openATTIC/Ganesha/SES5: does that mean v4.0 or v4.1 of the NFS protocol?
Can anyone advise on a workaround to make, for example, the NFS v4 + Object store combination work?
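A sketch of how the protocol version can be pinned explicitly on the Linux client, to separate v4.0 from v4.1 (same export as above; the vers= option and nfsstat -m are standard nfs-utils tools):

# Force NFS v3 (the case that currently works):
mount -t nfs -o vers=3 172.18.66.74:/nfsuobj4/nfsbobj4 /mnt
# Try v4.0 and v4.1 separately to see which minor version Ganesha answers:
mount -t nfs -o vers=4.0 172.18.66.74:/nfsuobj4/nfsbobj4 /mnt
mount -t nfs -o vers=4.1 172.18.66.74:/nfsuobj4/nfsbobj4 /mnt
# Show the negotiated version of the current mounts:
nfsstat -m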

Cheers!

Automatic Reply
25-Dec-2017, 06:30
polezhaevdmi,

It appears that in the past few days you have not received a response to your
posting. That concerns us, and has triggered this automated reply.

These forums are peer-to-peer, best-effort, and volunteer-run; if your issue
is urgent or not getting a response, you might try one of the following options:

- Visit http://www.suse.com/support and search the knowledgebase and/or check all
the other support options available.
- Open a service request: https://www.suse.com/support
- You could also try posting your message again. Make sure it is posted in the
correct newsgroup. (http://forums.suse.com)

Be sure to read the forum FAQ about what to expect in the way of responses:
http://forums.suse.com/faq.php

If this is a reply to a duplicate posting or otherwise posted in error, please
ignore it and accept our apologies; rest assured we will issue a stern reprimand
to our posting bot.

Good luck!

Your SUSE Forums Team
http://forums.suse.com

eblock
28-Sep-2018, 14:30
Hi,

I know this thread is quite old, but the topic is not outdated, and other users may run into similar problems.
While I can't comment on the NFSv4 mount issue (in my lab environment I can mount both v3 and v4, with both CephFS and RGW backends), I can explain your Input/output error.

There are defaults configured for the maximum write size (osd_max_write_size, in MB) and the maximum object size (osd_max_object_size, in bytes):

osd-3:~ # ceph daemon osd.5 config show |grep osd_max_write
"osd_max_write_size": "90",

osd-3:~ # ceph daemon osd.5 config show |grep osd_max_object_size
"osd_max_object_size": "134217728",


I don't know the background, but the maximum object size was reduced from 100 GB to 128 MB (see this pull request (https://github.com/ceph/ceph/pull/15520)). Here is another thread (http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-November/022544.html) on the ceph-users mailing list discussing the maximum write size.
Considering that, I don't think RGW is the place to store large VM disks. You could of course tweak these values to your needs, but there probably was a good reason for the change. Maybe what you really need is not RGW but RBD (or CephFS, as you already tried).
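If you do want to experiment with raising the limits (at your own risk), something along these lines should work on a Luminous-based SES5 cluster; the value 1024 is only an example:

# Inject a higher write limit into the running OSDs (not persistent across restarts):
ceph tell osd.* injectargs '--osd_max_write_size 1024'
# osd_max_object_size can be raised the same way if really needed.
# To make the change persistent, set it in the [osd] section of ceph.conf
# (or in the Salt-managed configuration) and restart the OSDs:
# [osd]
# osd max write size = 1024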