Happy MidWinter!

I tried to mount the SES5 file service on ESXi 6.5, using various protocols (NFS v3/v4) and backend stores (Object / CephFS). The results are:
- The NFS v3 + CephFS combination mounts and stores VMDKs fine;
- The NFS v3 + Object combination mounts, but reports zero available space, so writing is not possible;
- Neither combination with NFS v4 would mount at all.
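For reference, the mount attempts on the Linux side were roughly of this shape (the hostname and export paths below are placeholders, not my actual cluster values):

```shell
# NFS v3 (CephFS export mounts and writes; Object export mounts but cannot write)
# mount -t nfs -o vers=3 ganesha-host:/cephfs-export /mnt
# mount -t nfs -o vers=3 ganesha-host:/rgw-export /mnt

# NFS v4 (neither export mounts)
# mount -t nfs -o vers=4 ganesha-host:/cephfs-export /mnt
# mount -t nfs -o vers=4 ganesha-host:/rgw-export /mnt
```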

Additional info:
- The SES5 cluster is freshly deployed and fully updated;
- The deployment was performed with Salt (as documented);
- NFS-Ganesha and the users were configured via the openATTIC GUI;
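For what it's worth, my understanding is that the export block generated for an Object (RGW) backend looks roughly like the sketch below; all IDs, paths and keys here are made-up placeholders, not values taken from my cluster, and the exact set of options SES5/openATTIC emits may differ:

```
EXPORT
{
    Export_Id = 1;
    Path = "/";
    Pseudo = "/rgw";
    Access_Type = RW;
    Squash = No_Root_Squash;
    Protocols = 3, 4;
    Transports = TCP;
    FSAL
    {
        Name = RGW;
        User_Id = "nfs-user";
        Access_Key_Id = "ACCESSKEY";
        Secret_Access_Key = "SECRETKEY";
    }
}
```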
- Links to NFS guides are here and here;
- Attempts to mount these exports on a Linux (Debian) workstation gave almost the same results (1 OK and 3 FAILs). For example, on a 138 GB share:
# mount.nfs /mnt
# cd /mnt
# ls -la
total 4
drwxrwxrwx 1 nobody nogroup 0 Dec 20 16:25 .
drwxr-xr-x 23 root root 4096 Mar 16 2017 ..
# mkdir 1
# cd 1
# dd if=/dev/zero of=dd.tmp bs=4K
dd: error writing dd.tmp: Input/output error
25018+0 records in
25017+0 records out
102469632 bytes (102 MB) copied, 0.737502 s, 139 MB/s
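A quick sanity check on the dd output above: the byte count equals records-out times the 4 KiB block size, so exactly one 4 KiB write (the 25018th record) hit the I/O error. Data flows until a write is actually refused, which fits the "zero available space" symptom on the Object export:

```shell
# records_out * block_size should reproduce dd's byte count:
echo $((25017 * 4096))   # prints 102469632
```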
Thus, I suspect something is wrong with the SES5 NFS / Object store stack, or with the configuration engine scripts.
Regarding the NFS v4 support in openATTIC/Ganesha/SES5: does it mean v4.0 or v4.1 of the NFS protocol?
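This matters because, as far as I know, the ESXi 6.5 NFS client only speaks v3 and v4.1 (it has no v4.0 support), so if the generated Ganesha config only enables v4.0, the ESXi v4 mounts would fail exactly as above. On the Linux side one can check what actually gets negotiated; hostname below is a placeholder:

```shell
# Force v4.1 explicitly, then inspect the vers= field of the resulting mount:
# mount -t nfs -o vers=4.1 ganesha-host:/ /mnt
# grep /mnt /proc/mounts
```

I believe Ganesha has a setting for enabled v4 minor versions in its config, but I don't know what SES5 generates there.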
Can anyone advise how to find a workaround that makes, for example, the NFS v4 + Object store combination operable?