The quick answer is that they are optimized for different use cases.
Relational databases (MySQL, PostgreSQL) benefit from the performance
that a dedicated filesystem on RBD can provide; shared filesystems are
usually contraindicated for such software.
Shared filesystems like cephfs
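To make the distinction concrete, here is a minimal sketch of the two consumption models; the pool, image, and volume names (db-pool, pgdata, shared) are hypothetical, and the commands assume a running cluster with an admin keyring:

```shell
# Block storage (RBD): a dedicated device for one client, e.g. a database host
ceph osd pool create db-pool 64
rbd pool init db-pool
rbd create db-pool/pgdata --size 100G
rbd map db-pool/pgdata            # exposes a /dev/rbdX device to format locally

# Shared filesystem (CephFS): one volume mountable by many clients at once
ceph fs volume create shared
mount -t ceph :/ /mnt/shared -o name=admin
```

The RBD device behaves like any local disk (one writer, local filesystem on top), while the CephFS mount can be shared across many hosts simultaneously.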
Yeah, agreed. My first question would be how is your user going to consume
the storage?
You'll struggle to run VMs on RadosGW, and if they are doing archival
backups then RBD is likely not the best solution.
Each has very different requirements at the hardware level; for example, if
you are
Hi Jorge,
I think it depends on your workload.
On Tue, May 25, 2021 at 7:43 PM Jorge Garcia wrote:
>
> This may be too broad of a topic, or opening a can of worms, but we are
> running a CEPH environment and I was wondering if there's any guidance
> about this question:
>
> Given that some
This may be too broad of a topic, or opening a can of worms, but we are
running a CEPH environment and I was wondering if there's any guidance
about this question:
Given that some group would like to store 50-100 TBs of data on CEPH and
use it from a linux environment, are there any
Hi,
On my setup I didn't enable a stretch cluster. It's just a 3 x VM setup
running on the same Proxmox node, and all the nodes share a single
network. I installed Ceph using the documented cephadm flow.
Hi everyone,
The Ceph Month June schedule is now available:
https://pad.ceph.com/p/ceph-month-june-2021
We have great sessions from component updates, performance best
practices, Ceph on different architectures, BoF sessions to get more
involved with working groups in the community, and more!
Thanks for the confirmation, Greg! I'll try with a newer release then.
That’s why we’re testing, isn’t it? ;-)
Then the OP's issue is probably not resolved yet, since he didn't
mention a stretch cluster. Sorry for hijacking the thread.
Quoting Gregory Farnum:
On Tue, May 25, 2021 at
On Tue, May 25, 2021 at 7:17 AM Eugen Block wrote:
> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/16.2.4/rpm/el8/BUILD/ceph-16.2.4/src/osd/OSDMap.cc:
> In function 'void
Thank you Janne,
I will give upmap a shot. Need to try it first in some non-prod
cluster. Non-prod clusters are doing much better for me, even though
they have a lot fewer OSDs.
Thanks everyone!
On Tue, May 25, 2021 at 12:48 AM Janne Johansson wrote:
>
> I would suggest enabling the upmap
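For reference, enabling the upmap balancer is a short sequence; this is a sketch assuming all clients in the cluster are luminous or newer:

```shell
# Clients must understand pg-upmap entries before the balancer may use them
ceph osd set-require-min-compat-client luminous
ceph balancer mode upmap
ceph balancer on
ceph balancer status   # reports the active mode and any pending optimization plans
```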
Hi,
I wanted to explore the stretch mode in pacific (16.2.4) and see how
it behaves with a DC failure. It seems as if I'm hitting the same or
at least a similar issue here. To verify if it's the stretch mode I
removed the cluster and rebuilt it without stretch mode, three hosts
in three
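For anyone reproducing this, stretch mode in Pacific follows the documented flow; below is a sketch in which the mon names (a-e), the datacenter buckets (dc1-dc3), and the CRUSH rule name (stretch_rule) are illustrative assumptions:

```shell
# Stretch mode requires the connectivity election strategy and mon locations first
ceph mon set election_strategy connectivity
ceph mon set_location a datacenter=dc1
ceph mon set_location b datacenter=dc1
ceph mon set_location c datacenter=dc2
ceph mon set_location d datacenter=dc2
ceph mon set_location e datacenter=dc3   # tiebreaker mon outside both DCs
ceph mon enable_stretch_mode e stretch_rule datacenter
```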
On Tue, May 25, 2021 at 09:23, Boris Behrens wrote:
>
> Hi,
> I am still searching for a reason why these two values differ so much.
>
> I am currently deleting a giant amount of orphan objects (43mio, most
> of them under 64kb), but the difference gets larger instead of smaller.
>
> This was
Eugen,
Eugen Block wrote:
: Mykola explained it in this thread [1] a couple of months ago:
:
: `rbd cp` will copy only one image snapshot (or the image head) to the
: destination.
:
: `rbd deep cp` will copy all image snapshots and the image head.
Thanks for the explanation. I have created a
Hi
The server runs 15.2.9 and has 15 HDDs and 3 SSDs.
The OSDs were created with this YAML file:
hdd.yml
service_type: osd
service_id: hdd
placement:
  host_pattern: 'pech-hd-*'
data_devices:
  rotational: 1
db_devices:
  rotational: 0
The result was that the 3 SSDs were added to 1 VG with
On Tue, May 25, 2021 at 09:39, Konstantin Shalygin wrote:
>
> Hi,
>
> On 25 May 2021, at 10:23, Boris Behrens wrote:
>
> I am still searching for a reason why these two values differ so much.
>
> I am currently deleting a giant amount of orphan objects (43mio, most
> of them under 64kb),
Not sure what I'm doing wrong; I suspect it's the way I'm running
ceph-volume.
root@drywood12:~# cephadm ceph-volume lvm create --data /dev/sda --dmcrypt
Inferring fsid 1518c8e0-bbe4-11eb-9772-001e67dc85ea
Using recent ceph image ceph/ceph@sha256
Hi,
> On 25 May 2021, at 10:23, Boris Behrens wrote:
>
> I am still searching for a reason why these two values differ so much.
>
> I am currently deleting a giant amount of orphan objects (43mio, most
> of them under 64kb), but the difference gets larger instead of smaller.
When user trough
Hi,
I am still searching for a reason why these two values differ so much.
I am currently deleting a giant amount of orphan objects (43 million, most
of them under 64kb), but the difference gets larger instead of smaller.
This was the state two days ago:
>
> [root@s3db1 ~]# radosgw-admin bucket stats
Hi,
Mykola explained it in this thread [1] a couple of months ago:
`rbd cp` will copy only one image snapshot (or the image head) to the
destination.
`rbd deep cp` will copy all image snapshots and the image head.
It depends on the number of snapshots that need to be copied; if there
are
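The difference is easy to see on a test image; the pool and image names below (rbd/img1) are hypothetical:

```shell
rbd snap create rbd/img1@snap1
rbd cp rbd/img1 rbd/img1-flat        # copies only the image head; snap1 is not carried over
rbd deep cp rbd/img1 rbd/img1-deep   # copies the head plus all snapshots
rbd snap ls rbd/img1-deep            # the deep copy still lists snap1
```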