I know this thread is quite old, but the topic is not outdated and other users may run into similar problems.
While I can't comment on the NFSv4 mount issue (in my lab environment I can mount both v3 and v4 with both CephFS and RGW), I can explain your Input/Output error.

There are defaults configured for the maximum write size and the maximum object size (note the different units: osd_max_write_size is in MB, osd_max_object_size in bytes, so the two limits below are roughly 90 MB and 128 MB):
osd-3:~ # ceph daemon osd.5 config show | grep osd_max_write
    "osd_max_write_size": "90",

osd-3:~ # ceph daemon osd.5 config show | grep osd_max_object_size
    "osd_max_object_size": "134217728",
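If you really need a higher limit, both options can be raised cluster-wide. A hedged sketch, assuming a recent Ceph release with the centralized config database (`ceph config set`) and an admin keyring on the node; the values here are just examples:

```shell
# Raise the per-write limit to 180 MB (osd_max_write_size is in MB)
ceph config set osd osd_max_write_size 180

# Raise the object size cap to 256 MB (osd_max_object_size is in bytes)
ceph config set osd osd_max_object_size 268435456

# Verify on a running OSD that the new values took effect
ceph daemon osd.5 config show | grep -E 'osd_max_write_size|osd_max_object_size'
```

On older releases without the config database you would inject the values at runtime with `ceph tell osd.* injectargs` instead.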
I don't know the background, but the max object size was reduced from 100 GB to 128 MB (see this). Here's another thread on the ceph-users mailing list discussing the max write size.
Considering that, I don't think RGW is the place to store large VM disks. You could of course tweak that value to your needs, but there was probably a good reason for the change. Maybe your requirements call for RBD rather than RGW (or CephFS, as you already tried).
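For completeness, RBD is the usual home for VM disks, and it sidesteps the object size limit entirely because an image is striped into many small RADOS objects (4 MB by default). A minimal sketch, assuming a pool named `rbd` already exists and your client keyring has the necessary caps; the image name and size are just examples:

```shell
# Create a 100 GB image; it is stored as ~4 MB objects,
# so the 128 MB osd_max_object_size is never hit
rbd create rbd/vm-disk-01 --size 100G

# Map it on a client; it appears as a regular block device (/dev/rbdX)
rbd map rbd/vm-disk-01
```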