versions. But there have been lots of fixes in this area ... e.g.
https://tracker.ceph.com/issues/39657
Is upgrading Ceph to a more recent version an option for you?
Regards
Christian
so it's sure to be
picked up.
Thanks a bunch. If you miss the train, you miss the train - fair enough.
Nice to know there is another one going soon and that bug is going to be
on it!
Regards
Christian
in one of my
clusters.
Regards
Christian
I would love for RGW to support more detailed bucket policies,
especially with external / Keystone authentication.
Regards
Christian
Hi Casey,
Interesting. Especially since the request it hangs on is a GET request.
I set the option and restarted the RGW I test with.
The POSTs for deleting take a while, but there are no longer blocking GET
or POST requests.
Thank you!
Best,
Christian
PS: Sorry for pressing the wrong reply
"This section applies only to the older Filestore OSD back end. Since
Luminous BlueStore has been default and preferred."
It's totally obsolete with bluestore.
Regards
Christian
On 08.03.24 14:25, Christian Rohmann wrote:
What do you mean by blocking IO? No bucket actions (read / write) or
high IO utilization?
According to https://docs.ceph.com/en/latest/radosgw/dynamicresharding/
"Writes to the target bucket are blocked (but reads are not) briefly
during resharding."
Regards
Christian
situation or at least where to or what to look
for?
Best,
Christian
On 04.03.24 22:24, Daniel Brown wrote:
debian-reef/
Now appears to be:
debian-reef_OLD/
Could this have been some sort of "release script" just messing up the
renaming / symlinking to the most recent stable?
Regards
Christian
On 23.02.24 16:18, Christian Rohmann wrote:
I just noticed issues with ceph-crash using the Debian / Ubuntu
packages (package: ceph-base):
While the /var/lib/ceph/crash/posted folder is created by the package
install, it's not properly chowned to ceph:ceph by the postinst script.
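Until the packaging is fixed, a manual workaround sketch (assuming the stock ceph-crash systemd unit):
  chown ceph:ceph /var/lib/ceph/crash /var/lib/ceph/crash/posted
  systemctl restart ceph-crash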
Regards
Christian
On 01.02.24 10:10, Christian Rohmann wrote:
[...]
I am wondering if ceph-exporter [2] is also built and packaged via
the ceph packages [3] for installations that use them?
[2] https://github.com/ceph/ceph/tree/main/src/exporter
[3] https://docs.ceph.com/en/latest/install/get-packages/
I
atest" documentation is at
https://docs.ceph.com/en/latest/install/get-packages/#ceph-development-packages.
But it seems nothing has changed. There are dev packages available at
the URLs mentioned there.
Regards
Christian
wondering if
ceph-exporter [2] is also built and packaged via the ceph packages [3]
for installations that use them?
Regards
Christian
[1]
https://docs.ceph.com/en/latest/mgr/prometheus/#ceph-daemon-performance-counters-metrics
[2] https://github.com/ceph/ceph/tree/main/src/exporter
[3
containers being built somewhere to
use with cephadm.
Regards
Christian
I could be wrong, but as far as I can see you have 9 chunks, which
requires 9 failure domains.
Your failure domain is set to datacenter, of which you only have 3. So that
won't work.
You need to set your failure domain to host and then create a CRUSH rule to
choose a DC and choose 3 hosts within
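A rough sketch of such a rule (rule name, id and the "default" root are made-up assumptions, not taken from your map):
  rule ec_3dc_3hosts {
      id 10
      type erasure
      step take default
      step choose indep 3 type datacenter
      step chooseleaf indep 3 type host
      step emit
  }
That places 3 chunks in each of the 3 datacenters; whether a full DC outage is then survivable depends on your m being at least 3.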
(Keystone in my case) at full rate.
Regards
Christian
Happy New Year Ceph-Users!
With the holidays and people likely being away, I take the liberty to
bluntly BUMP this question about protecting RGW from DoS below:
On 22.12.23 10:24, Christian Rohmann wrote:
Hey Ceph-Users,
RGW does have options [1] to rate limit ops or bandwidth per bucket
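For reference, a minimal sketch of those per-bucket limits (Quincy or later; bucket name and numbers purely illustrative):
  radosgw-admin ratelimit set --ratelimit-scope=bucket --bucket=mybucket \
      --max-read-ops=1024 --max-write-ops=256 \
      --max-read-bytes=104857600 --max-write-bytes=10485760
  radosgw-admin ratelimit enable --ratelimit-scope=bucket --bucket=mybucket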
General complaint about docker is usually that it by default stops all
running containers when the docker daemon gets shut down. There is the
"live-restore" option (which has been around for a while) but that's turned
off by default (and requires a daemon restart to enable). It only supports
patch
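For completeness, enabling it is a small daemon config change plus one restart, e.g.:
  # /etc/docker/daemon.json (merge with whatever is already configured there)
  { "live-restore": true }
  # then: systemctl restart docker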
lace?
* Does it make sense to extend RGW's capabilities to deal with those
cases itself?
** adding negative caching
** rate limits on concurrent external authentication requests (or is
there a pool of connections for those requests?)
Regards
Christian
[1] https://docs.ceph.com/en/latest
You can structure your crush map so that you get multiple EC chunks per
host in a way that you can still survive a host outage even though
you have fewer hosts than k+1.
For example if you run an EC=4+2 profile on 3 hosts you can structure your
crushmap so that you have 2 chunks per host.
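A rough sketch of such a rule (name and id made up, assuming the default root):
  rule ec42_two_per_host {
      id 11
      type erasure
      step take default
      step choose indep 3 type host
      step chooseleaf indep 2 type osd
      step emit
  }
With k=4, m=2 a single host outage then takes out exactly 2 chunks, so no data is lost.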
Christian
Sorry to dig up this old thread ...
On 25.01.23 10:26, Christian Rohmann wrote:
On 20/10/2022 10:12, Christian Rohmann wrote:
1) May I bring up again my remarks about the timing:
On 19/10/2022 11:46, Christian Rohmann wrote:
I believe the upload of a new release to the repo prior
On Mon, 9 Oct 2023 at 14:24, Anthony D'Atri wrote:
>
>
> > AFAIK the standing recommendation for all flash setups is to prefer fewer
> > but faster cores
>
> Hrm, I think this might depend on what you’re solving for. This is the
> conventional wisdom for MDS for sure. My sense is that OSDs can
AFAIK the standing recommendation for all flash setups is to prefer fewer
but faster cores, so something like a 75F3 might be yielding better latency.
Plus you probably want to experiment with partitioning the NVMEs and
running multiple OSDs per drive - either 2 or 4.
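ceph-volume can do that splitting for you; a sketch (device path made up):
  ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1   # or 4, depending on benchmarks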
On Sat, 7 Oct 2023 at 08:23,
) about this, as I could not find
one yet.
It seems a weird way of disclosing such a thing and I am wondering if
anybody knows any more about this?
Regards
Christian
I am unfortunately still observing this issue of the RADOS pool
"*.rgw.log" filling up with more and more objects:
On 26.06.23 18:18, Christian Rohmann wrote:
On the primary cluster I am observing an ever growing (objects and
bytes) "sitea.rgw.log" pool, not so on the r
Hi,
interesting, that’s something we can definitely try!
Thanks!
Christian
> On 5. Sep 2023, at 16:37, Manuel Lausch wrote:
>
> Hi,
>
> in older versions of ceph with the auto-repair feature the PG state of
> scrubbing PGs had always the repair state as well.
> With la
all daemons to the same minor version those
> errors were gone.
>
> Regards,
> Eugen
>
> Zitat von Christian Theune :
>
>> Hi,
>>
>> this is a bit older cluster (Nautilus, bluestore only).
>>
>> We’ve noticed that the cluster is almost continuous
any relevant issue either.
Any ideas?
Liebe Grüße,
Christian Theune
--
Christian Theune · c...@flyingcircus.io · +49 345 219401 0
Flying Circus Internet Operations GmbH · https://flyingcircus.io
Leipziger Str. 70/71 · 06108 Halle (Saale) · Deutschland
HR Stendal HRB 21169 · Geschäftsführer
his is what I am
currently doing (lvcreate + ceph-volume lvm create). My question
therefore is, if ceph-volume (!) could somehow create this LV for the DB
automagically if I'd just give it a device (or existing VG)?
Thank you very much for your patience in clarifying and responding to my
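To make the question concrete, a sketch (device and VG/LV names made up) of what I do now versus what I am hoping for -- "lvm batch" seems to slice the DB device up on its own when given --db-devices:
  # today: pre-create the LV, then hand it to ceph-volume
  lvcreate -L 60G -n osd-sdb-db ceph-db-vg
  ceph-volume lvm create --data /dev/sdb --block.db ceph-db-vg/osd-sdb-db
  # hoped for: let ceph-volume carve up the fast device itself
  ceph-volume lvm batch /dev/sdb /dev/sdc --db-devices /dev/nvme0n1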
with DB or WAL on fast storage.
Regards
Christian
name like the rbd and the corresponding
"rbd-read-only" profile?
Regards
Christian
a few LVs is hard... it's just that ceph volume
does apply some structure to the naming of LVM VGs and LVs on the OSD
device and also adds metadata. That would then be up to the user, right?
Regards
Christian
On 10/08/2023 13:30, Christian Rohmann wrote:
It's already fixed in master, but the backports are all still pending ...
There are PRs for the backports now:
* https://tracker.ceph.com/issues/62060
* https://tracker.ceph.com/issues/62061
* https://tracker.ceph.com/issues/62062
Regards
: https://tracker.ceph.com/issues/55260
It's already fixed in master, but the backports are all still pending ...
Regards
Christian
> Thank you for the information, Christian. When you reshard the bucket id is
> updated (with most recent versions of ceph, a generation number is
> incremented). The first bucket id matches the bucket marker, but after the
> first reshard they diverge.
This makes a lot of sense
_info": "false"
}
}
> 4. After you resharded previously, did you get command-line output along the
> lines of:
> 2023-07-24T13:33:50.867-0400 7f10359f2a80 1 execute INFO: reshard of bucket
> “" completed successfully
I think so, at least for the second reshard. But I wouldn't bet my
life on it. I fear I might have missed an error on the first one since
I have done a radosgw-admin bucket reshard so often and never seen it
fail.
Christian
something like 97. Or I could directly "downshard" to 97.
Also, the second zone has a similar problem, but as the error message lets me
know, this would be a bad idea. Will it just take more time until the sharding
is transferred to the second zone?
Best,
Christian Kugler
Based on my understanding of CRUSH it basically works down the hierarchy
and then randomly (but deterministically for a given CRUSH map) picks
buckets (based on the specific selection rule) on that level for the object
and then it does this recursively until it ends up at the leaf nodes.
Given
. In reality it was simply the
private, RFC1918, IP of the test machine that came in as source.
Sorry for the noise and thanks for your help.
Christian
P.S. With IPv6, this would not have happened.
is that required, and why does there seem to be no periodic trimming
happening?
Regards
Christian
looking for data others might have collected on their similar
use-cases.
Also I am still wondering if there really is nobody that worked/played
more with zstd since that has become so popular in recent months...
Regards
Christian
g of the log trimming activity that I
should expect? Or that might indicate why trimming does not happen?
Regards
Christian
[1]
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/WZCFOAMLWV3XCGJ3TVLHGMJFVYNZNKLD/
"bytes_sent":0,"bytes_received":64413,"object_size":64413,"total_time":155,"user_agent":"aws-sdk-go/1.27.0
(go1.16.15; linux; amd64)
S3Manager","referrer":"","trans_id":"REDACTED","authentication_typ
e decision on
the compression algo?
Regards
Christian
[1]
https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#confval-bluestore_compression_algorithm
[2] https://github.com/ceph/ceph/pull/33790
[3] https://github.com/facebook/zstd/
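For context, a sketch of how the settings from [1] can be applied, either globally or per pool (pool name illustrative):
  ceph config set osd bluestore_compression_algorithm zstd
  ceph config set osd bluestore_compression_mode aggressive
  # or per pool:
  ceph osd pool set mypool compression_algorithm zstd
  ceph osd pool set mypool compression_mode aggressive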
://download.ceph.com/debian-quincy/ bullseye main
to
deb https://download.ceph.com/debian-quincy/ bookworm main
in the near future!?
Regards,
Christian
zonegroups referring to the same pools and this
should only run through proper abstractions … o_O
Cheers,
Christian
> On 14. Jun 2023, at 17:42, Christian Theune wrote:
>
> Hi,
>
> further note to self and for posterity … ;)
>
> This turned out to be a no-go as well, becau
> Did I get something wrong?
>
>
>
>
> Kind regards,
> Nino
>
>
> On Wed, Jun 14, 2023 at 5:44 PM Christian Theune wrote:
> Hi,
>
> further note to self and for posterity … ;)
>
> This turned out to be a no-go as well, because you can’t silently switch th
, not the public IP
of the client.
So the actual remote address is NOT used in my case.
Did I miss any config setting anywhere?
Regards and thanks for your help
Christian
ately seems not even supported by the Beast library which RGW uses.
I opened feature requests ...
** https://tracker.ceph.com/issues/59422
** https://github.com/chriskohlhoff/asio/issues/1091
** https://github.com/boostorg/beast/issues/2484
but there is no outcome yet.
Regards
a few very large buckets (200T+) that will take a
while to copy. We can pre-sync them of course, so the downtime will only be
during the second copy.
Christian
> On 13. Jun 2023, at 14:52, Christian Theune wrote:
>
> Following up to myself and for posterity:
>
> I’m going to
is still 2.4 hours …
Cheers,
Christian
> On 9. Jun 2023, at 11:16, Christian Theune wrote:
>
> Hi,
>
> we are running a cluster that has been alive for a long time and we tread
> carefully regarding updates. We are still a bit lagging and our cluster (that
> started around
and I guess that would be a good
comparison for what timing to expect when running an update on the metadata.
I’ll also be in touch with colleagues from Heinlein and 42on but I’m open to
other suggestions.
Hugs,
Christian
[1] We currently have 215TiB data in 230M objects. Using the “official
Hm, this thread is confusing
in the context of S3 client-side encryption means - the user is responsible
to encrypt the data with their own keys before submitting it. As far as I'm
aware, client-side encryption doesn't require any specific server support -
it's a function of the client SDK used
enlighten me.
Thank you and with kind regards
Christian
On 02/02/2022 20:10, Christian Rohmann wrote:
Hey ceph-users,
I am debugging a mgr pg_autoscaler WARN which states a
target_size_bytes on a pool would overcommit the available storage.
There is only one pool with value
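For reference, a sketch of how to inspect the numbers the autoscaler bases that warning on:
  # TARGET SIZE vs. RAW CAPACITY shows which pools the autoscaler considers overcommitted
  ceph osd pool autoscale-status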
With failure domain host your max usable cluster capacity is essentially
constrained by the total capacity of the smallest host which is 8TB if I
read the output correctly. You need to balance your hosts better by
swapping drives.
On Fri, 31 Mar 2023 at 03:34, Nicola Mori wrote:
> Dear Ceph
create their own roles and policies to
use them by default?
All the examples talk about the requirement for admin caps and
individual setting of '--caps="user-policy=*"'.
If there was a default role + policy (question #1) that could be applied
to externally authenticated users, I'd like for
ative of the community response. I learned a lot
in the process, had an outage-inducing scenario rectified very quickly, and got
back to work. Thanks so much! Happy to answer any followup questions and
return the favor when I can.
From: Rice, Christian
Date: Wednesday, March 8, 2023 at 3:57 PM
To:
I have a large number of misplaced objects, and I have set all osd settings
to “1” already:
sudo ceph tell osd.\* injectargs '--osd_max_backfills=1
--osd_recovery_max_active=1 --osd_recovery_op_priority=1'
How can I slow it down even more? The cluster is too large, it’s impacting
other network
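One knob beyond the backfill/recovery counts that is often suggested is the recovery sleep; a sketch (values illustrative, and on Quincy+ with the mClock scheduler these may be ignored):
  sudo ceph tell osd.\* injectargs '--osd_recovery_sleep_hdd=0.2 --osd_recovery_sleep_ssd=0.1'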
ing it with the new name.
> You just have to keep the ID of the node in the crushmap!
>
> Regards
> Manuel
>
>
> On Mon, 13 Feb 2023 22:22:35 +
> "Rice, Christian" wrote:
>
>> Can anyone please point me at a doc that explains the most
>> efficie
Can anyone please point me at a doc that explains the most efficient procedure
to rename a ceph node WITHOUT causing a massive misplaced objects churn?
When my node came up with a new name, it properly joined the cluster and owned
the OSDs, but the original node with no devices remained. I
Hey everyone,
On 20/10/2022 10:12, Christian Rohmann wrote:
1) May I bring up again my remarks about the timing:
On 19/10/2022 11:46, Christian Rohmann wrote:
I believe the upload of a new release to the repo prior to the
announcement happens quite regularly - it might just be due
and then a total failure of an OSD?
Would be nice to fix this though to not "block" the warning status with
something that's not actually a warning.
Regards
Christian
On 15/12/2022 10:31, Christian Rohmann wrote:
May I kindly ask for an update on how things are progressing? Mostly I
am interested in the (persisting) implications for testing new point
releases (e.g. 16.2.11) with more and more bugfixes in them.
I guess I just have not looked on the right
!
Christian
creators to apply such a policy themselves, but to apply this as a
global default in RGW, forcing all buckets to have SSE enabled -
transparently.
If there is no way to achieve this just yet, what are your thoughts
about adding such an option to RGW?
Regards
I'm facing -.-
But there is a fix committed, pending backports to Quincy / Pacific:
https://tracker.ceph.com/issues/57306
Regards
Christian
807) about Cloud Sync being
broken since Pacific?
Regards
Christian
ct RGW in both zones.
Regards
Christian
Hi all,
we're running a ceph cluster with v15.2.17 and cephadm on various CentOS
hosts. Since CentOS 8.x is EOL, we'd like to upgrade/migrate/reinstall
the OS, possibly migrating to Rocky or CentOS stream:
host | CentOS | Podman
-|--|---
osd* | 7.9.2009 | 1.6.4 x5
osd*
which we
are waiting for. TBH I was about
to ask if it would not be sensible to do an intermediate release and not
let it grow bigger and
bigger (with even more changes / fixes) going out at once.
Regards
Christian
have) resulted in
8 extra Ceph OSDs with no DB device.
Best,
Christian
it sounds like it would limit the number of
SSDs used for DB devices.
How can I use all of the SSDs' capacity?
Best,
Christian
this week.
Thanks for the info.
1) May I bring up again my remarks about the timing:
On 19/10/2022 11:46, Christian Rohmann wrote:
I believe the upload of a new release to the repo prior to the
announcement happens quite regularly - it might just be due to the
technical process of releasing.
But I
debian-17.2.4/ return 404.
Regards
Christian
date notes.
Regards
Christian
://tracker.ceph.com/projects/rgw/issues?query_id=247
But you are not syncing the data in your deployment? Maybe that's a
different case then?
Regards
Christian
es require users to
actively make use of SSE-S3, right?
Thanks again with kind regards,
Christian
On Sun, 19 Jun 2022 at 02:29, Satish Patel wrote:
> Greeting folks,
>
> We are planning to build Ceph storage for mostly cephFS for HPC workload
> and in future we are planning to expand to S3 style but that is yet to be
> decided. Because we need mass storage, we bought the following HW.
>
> 15
we had issues with slow ops on SSD AND NVMe; mostly fixed by raising aio-max-nr
from 64K to 1M, e.g. "fs.aio-max-nr=1048576" if I remember correctly.
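A sketch of making that persistent (file name made up):
  echo "fs.aio-max-nr = 1048576" > /etc/sysctl.d/90-ceph-aio.conf
  sysctl --system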
On 3/29/22, 2:13 PM, "Alex Closs" wrote:
Hey folks,
We have a 16.2.7 cephadm cluster that's had slow ops and several
(constantly
I would not host multiple OSDs on a spinning drive (unless it's one of those
Seagate MACH.2 drives that have two independent heads) - head seek time
will most likely kill performance. The main reason to host multiple OSDs on
a single SSD or NVMe is typically to make use of the large IOPS capacity
On 28/02/2022 20:54, Sascha Vogt wrote:
Is there a way to clear the error counter on pacific? If so, how?
No, no anymore. See https://tracker.ceph.com/issues/54182
Regards
Christian
lock /semaphore or something
along this line, this certainly is affected by the latency on the
underlying storage.
Could you maybe manually trigger a deep-scrub on all your OSDs, just to
see if that does anything?
Thanks again for keeping in touch!
Regards
Chri
are the worst of bugs, and with some
unpredictability added to their occurrence we likely need
more evidence to have a chance to narrow this down. And since you seem
to observe something similar, could you gather
and post debug info about them to the ticket as well maybe?
Regards
Christian
"omap_digest_mismatch","client.4349063.0:10289913"
".dir.9cba42a3-dd1c-46d4-bdd2-ef634d12c0a5.56337947.1562","omap_digest_mismatch","client.4364800.0:10934595"
".dir.06f9b7c7-6326-4a41-9115-d4d092cf74ce.1163207.114.9","omap_digest_mismatch"
rely must be a bug then, as those bytes are not really
"actual_raw_used".
I was about to raise a bug, but I wanted to ask here on the ML first if
I misunderstood the mechanisms at play here.
Thanks and with kind regards,
Christian
occur on the secondary side
only
Regarding your scrub errors: do you still have those coming up at random?
Could you check with "list-inconsistent-obj" whether yours are within the
OMAP data and in the metadata pools only?
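For reference, a sketch of the commands to pull that information (pool name and PG id are placeholders):
  rados list-inconsistent-pg default.rgw.buckets.index
  rados list-inconsistent-obj 7.1f --format=json-pretty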
Regards
Christian
large OMAP structures with lots of movement. And
the issues are only with the metadata pools.
Regards
Christian
the next
inconsistency occurs?
Could there be any misconfiguration causing this?
Thanks and with kind regards
Christian
repairing the
other, adding to the theory of something really odd going on.
Did you upgrade to Octopus in the end then? Any more issues with such
inconsistencies on your side Tomasz?
Regards
Christian
On 20/10/2021 10:33, Tomasz Płaza wrote:
As the upgrade process states, rgw are the last one
I think Marc uses containers - but they've chosen Apache Mesos as
orchestrator and cephadm doesn't work with that.
Currently essentially two ceph container orchestrators exist - Rook, which
is a ceph orchestrator for Kubernetes, and cephadm, which is an orchestrator
expecting docker or podman
Admittedly I
In addition to what the others said - generally there is little point
in splitting DB and WAL partitions - just stick to one for both.
What model are you SSDs and how well do they handle small direct
writes? Because that's what you'll be getting on them and the wrong
type of SSD can make things
wrote:
>>
>> Den tors 28 okt. 2021 kl 10:18 skrev Lokendra Rathour
>> :
>> >
>> > Hi Christian,
>> > Thanks for the update.
>> > I have 5 SSD on each node i.e. a total of 15 SSD using which I have
>> > created this RAID 0 Disk, wh
- What is the expected file/object size distribution and count?
- Is it write-once or modify-often data?
- What's your overall required storage capacity?
- 18 OSDs per WAL/DB drive seems a lot - recommended is ~6-8
- With 12TB OSD the recommended WAL/DB size is 120-480GB (1-4%) per
bucket stats --bucket mybucket
Doing a bucket_size / number_of_objects gives you an average object size
per bucket and that certainly is an indication of
buckets with rather small objects.
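A quick sketch of that calculation (field names as in recent radosgw-admin JSON output, bucket name illustrative):
  radosgw-admin bucket stats --bucket=mybucket \
    | jq '.usage["rgw.main"] | .size_actual / .num_objects'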
Regards
Christian
n a
> replicated pool writes and reads are handled by the primary PG, which would
> explain this write bandwidth limit.
>
> /Z
>
> On Tue, 5 Oct 2021, 22:31 Christian Wuerdig,
> wrote:
>
>> Maybe some info is missing but 7k write IOPs at 4k block size seem f
Maybe some info is missing but 7k write IOPs at 4k block size seem fairly
decent (as you also state) - the bandwidth automatically follows from that
so not sure what you're expecting?
I am a bit puzzled though - by my math 7k IOPS at 4k should only be
27MiB/sec - not sure how the 120MiB/sec was
A couple of notes to this:
Ideally you should have at least 2 more failure domains than your base
resilience (K+M for EC or size=N for replicated) - reasoning: Maintenance
needs to be performed so chances are every now and then you take a host
down for a few hours or possibly days to do some