Hi list,
We have two Luminous RGWs running behind an F5 load balancer. Every couple of
seconds the F5 sends a keep-alive request to the RGWs and saturates the
Civetweb log with HTTP entries, making it very difficult to troubleshoot
user connections. Example:
172.16.212.86 - -
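One way to quiet this noise (my assumption, not a fix confirmed in the thread) is to lower the civetweb debug level for the RGW daemons in ceph.conf, so per-request entries stop being logged:

```ini
# Hypothetical ceph.conf fragment; the section name is an example
# and should match your actual RGW daemon instance.
[client.rgw.gateway1]
debug civetweb = 0/0
```

The trade-off is that you also lose the legitimate access-log entries, so this is best applied only while the log volume is a problem.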
Yeah, sadly it looks like btrfs will never materialize as the next filesystem
of the future. Red Hat, for example, has even dropped it from its roadmap, as
others probably will too, and some already have.
On Sun, Sep 23, 2018 at 11:28 AM mj wrote:
> Hi,
>
> Just a very quick and simple reply:
>
> XFS has *always*
Hi,
Just a very quick and simple reply:
XFS has *always* treated us nicely, and we have been using it for a VERY
long time, ever since the pre-2000 SUSE 5.2 days, on pretty much all our
machines.
We have seen only very few corruptions on XFS, and the few times we
tried btrfs, (almost)
On Fri, Sep 21, 2018 at 04:17:35PM -0400, Jin Mao wrote:
> I am looking for an API equivalent of 'radosgw-admin log list' and
> 'radosgw-admin log show'. Existing /usage API only reports bucket level
> numbers like 'radosgw-admin usage show' does. Does anyone know if this is
> possible from rest
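The thread does not resolve whether a log equivalent exists, but for reference, a request against the admin-ops entry point the poster already uses can be sketched like this. The host, user ID, and the `/admin/usage` path are assumptions for illustration; real requests additionally need S3-style signing by an admin user with the appropriate caps, which is omitted here:

```python
# Sketch of building a radosgw admin-ops URL mirroring
# 'radosgw-admin usage show' (authentication/signing omitted).
from urllib.parse import urlencode, urlunsplit

def usage_url(host: str, uid: str, show_log_entries: bool = True) -> str:
    """Compose the /admin/usage URL for a given user."""
    query = urlencode({
        "uid": uid,
        "show-log-entries": str(show_log_entries).lower(),
    })
    return urlunsplit(("http", host, "/admin/usage", query, ""))

print(usage_url("rgw.example.com:7480", "johndoe"))
# → http://rgw.example.com:7480/admin/usage?uid=johndoe&show-log-entries=true
```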
Hi Paul,
thanks for the hint, I just checked and it works perfectly.
I found this guide:
https://www.reddit.com/r/ceph/comments/72yc9m/ceph_openstack_with_ec/
It works well with one meta/data setup but not with multiple (like
device-class based pools).
The link above uses client auth, is
The usual trick for clients not supporting this natively is the option
"rbd_default_data_pool" in ceph.conf which should also work here.
Paul
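For reference, the option Paul mentions can be sketched like this; the pool name is an example, not taken from the thread:

```ini
# Hypothetical ceph.conf fragment: newly created RBD images put their
# data objects in the EC pool while metadata stays in the replicated
# pool the image is created in.
[client]
rbd_default_data_pool = ec_data
```

With that in place, a plain `qemu-img convert -f vmdk -O raw input.vmdk rbd:rbd/myimage` should land the image data in the EC pool, assuming qemu's librbd reads ceph.conf as usual.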
On Sun, Sep 23, 2018 at 18:03, Kevin Olbrich wrote:
>
> Hi!
>
> Is it possible to set data-pool for ec-pools on qemu-img?
> For repl-pools I used
Hi!
Is it possible to set data-pool for ec-pools on qemu-img?
For repl-pools I used "qemu-img convert" to convert from e.g. vmdk to raw
and write to rbd/ceph directly.
The rbd utility is able to do this for raw or empty images but without
convert (converting 800G and writing it again would now
Short answer: no and no.
Long:
1. Having size = 2 is safe *if you also keep min_size at 2*. But
that's not highly available, so you usually don't want this. min_size =
1 (or reducing min_size on an EC pool) is basically a guarantee that you
will lose at least some data/writes in the long run.
2. It's no
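The trade-off in point 1 can be sketched with a toy model (illustrative only, not Ceph code):

```python
# Toy model: a write is accepted when at least min_size replicas are up.
def write_accepted(replicas_up: int, min_size: int) -> bool:
    return replicas_up >= min_size

# size = 2, one OSD down:
assert write_accepted(1, min_size=2) is False  # I/O blocks, no lone copy
assert write_accepted(1, min_size=1) is True   # write lands on one copy

# If that single surviving copy's disk dies before recovery finishes,
# the acknowledged write existed nowhere else: min_size = 1 trades
# durability for uptime.
```

With min_size = 2 the cluster blocks writes instead, which is unavailable but never leaves an acknowledged write on only one disk.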
"when using BlueStore, Ceph can ensure data integrity by conducting a cyclical
redundancy check (CRC) on write operations; then, store the CRC value in the
block database. On read operations, Ceph can retrieve the CRC value from the
block database and compare it with the generated CRC of the
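The mechanism described in that quote can be illustrated with a toy sketch; `zlib.crc32` stands in for BlueStore's checksum, and two dicts stand in for the disk and the block database:

```python
import zlib

disk = {}     # stand-in for on-disk blocks
crc_db = {}   # stand-in for BlueStore's block database

def write_block(addr: int, data: bytes) -> None:
    # On write: store the data and record its CRC in the block database.
    disk[addr] = data
    crc_db[addr] = zlib.crc32(data)

def read_block(addr: int) -> bytes:
    # On read: recompute the CRC and compare with the stored value.
    data = disk[addr]
    if zlib.crc32(data) != crc_db[addr]:
        raise IOError(f"checksum mismatch at block {addr}")
    return data

write_block(0, b"hello")
assert read_block(0) == b"hello"

disk[0] = b"hellp"  # simulate silent corruption on disk
try:
    read_block(0)
except IOError as e:
    print(e)  # checksum mismatch at block 0
```

The point of storing the CRC separately from the data is that a flipped bit in either place makes the comparison fail, so corrupted reads are detected rather than silently returned.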