PDA

View Full Version : SES Planning: many RADOS-GWs with same data



polezhaevdmi
18-Jul-2016, 15:23
Let's consider a simple task: keep RADOS-GW available (active-active or active-passive protection is acceptable) in a small SES cluster (fewer than 7 nodes; multiple roles per node are supported).

Is the federated configuration with one zone for all RGWs (on several nodes) the only option, or is there a simpler way to achieve the same result?
https://www.suse.com/documentation/ses-3/book_storage_admin/data/ceph_rgw_fed.html

Is it possible to create 'placeholders' on nodes 2, 3, 4, ... for the RADOS-GW created on node 1, and have any of these 'placeholders' able to run on their own node?

Automatic Reply
25-Jul-2016, 05:30
polezhaevdmi,

It appears that in the past few days you have not received a response to your
posting. That concerns us, and has triggered this automated reply.

These forums are peer-to-peer, best-effort, and volunteer-run; if your issue
is urgent or not getting a response, you might try one of the following options:

- Visit http://www.suse.com/support and search the knowledgebase and/or check all
the other support options available.
- Open a service request: https://www.suse.com/support
- You could also try posting your message again. Make sure it is posted in the
correct newsgroup. (http://forums.suse.com)

Be sure to read the forum FAQ about what to expect in the way of responses:
http://forums.suse.com/faq.php

If this is a reply to a duplicate posting or otherwise posted in error, please
ignore it and accept our apologies, and rest assured we will issue a stern
reprimand to our posting bot.

Good luck!

Your SUSE Forums Team
http://forums.suse.com

polezhaevdmi
25-Jul-2016, 14:18
The door is open: all RADOS gateways in the cluster use the same object space. No federation or multisite headache is required: just create additional gateways on the other nodes, synchronize the Ceph configuration, and list the gateway names/IPs in the client, load balancer, or HAProxy daemon.

https://www.mail-archive.com/ceph-users@lists.ceph.com/msg25994.html
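For illustration, here is a minimal sketch of that setup (the hostnames, ports, and section names are made-up examples, not values from this thread): each node gets its own [client.rgw.*] section in ceph.conf, and HAProxy balances requests across the gateways. Since all instances serve the same RADOS pools, any of them can answer any request.

```
# /etc/ceph/ceph.conf -- one RGW instance per node (example names)
[client.rgw.node1]
host = node1
rgw frontends = "civetweb port=7480"

[client.rgw.node2]
host = node2
rgw frontends = "civetweb port=7480"

# /etc/haproxy/haproxy.cfg -- spread S3/Swift traffic over the gateways
frontend rgw_front
    bind *:80
    default_backend rgw_back

backend rgw_back
    balance roundrobin
    option httpchk GET /
    server node1 node1:7480 check
    server node2 node2:7480 check
```

As long as the gateways all stay in one zone (the default), no zone or federation configuration is needed; HAProxy's health checks take a failed gateway out of rotation automatically.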

Such a reasonable, 'must-have' feature turns out to be almost undocumented.