On 21/09/2020 5:40 am, Stefan Kooman wrote:
My experience with bonding and Ceph is pretty good (Open vSwitch). Ceph
uses lots of TCP connections, and those can get shifted (balanced)
between interfaces depending on load.
Same here - I'm running 4x1GbE (LACP, balance-tcp) on a 5-node cluster
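For anyone reproducing this setup, the bond described above can be sketched with Open vSwitch roughly as follows (the bridge and interface names br0, bond0, eth0-eth3 are placeholders, not from the original mail):

```shell
# Four-member LACP bond; balance-tcp hashes on L2-L4 headers, so
# Ceph's many TCP connections get spread across the member links.
ovs-vsctl add-bond br0 bond0 eth0 eth1 eth2 eth3 \
    lacp=active bond_mode=balance-tcp

# Inspect negotiated LACP state and per-member traffic
ovs-appctl bond/show bond0
ovs-appctl lacp/show bond0
```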
On 2020-09-20 12:19, Marc Roos wrote:
>
>
> - pat yourself on the back for choosing ceph, there are a lot of
> experts (not including me :)) here willing to help (during office hours)
> - decide what you like to use ceph for, and how much storage you need.
> - Running just an osd on a server has
On 2020-09-20 10:25, Philip Rhoades wrote:
> People,
>
> I am interested in experimenting with CEPH on say 4 or 8 small form
> factor computers (SBCs?) - any suggestions about how to get started?
What do you want to learn? What Ceph features are you interested in
(RGW, CephFS, RBD)? The
Hi all,
I'm in the process of testing out Ceph on a small cluster. The cluster
is virtually empty with no clients, just a few OSDs. I noticed extensive disk
write I/O on some nodes and tracked it down to the ceph-mon daemon. A cursory
log over 1 hour shows that each monitor generates
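A quick way to put numbers on per-daemon write traffic like this is to sample /proc/&lt;pid&gt;/io; a minimal sketch (the one-second interval and the pidof lookup are my assumptions, not from the original report):

```shell
# Print the bytes/second a process writes to disk, sampled over 1s,
# using the cumulative write_bytes counter from /proc/<pid>/io.
io_write_rate() {
    pid=$1
    before=$(awk '/^write_bytes:/ {print $2}' "/proc/${pid}/io")
    sleep 1
    after=$(awk '/^write_bytes:/ {print $2}' "/proc/${pid}/io")
    echo $(( after - before ))
}

# Usage against the monitor daemon, e.g.:
#   io_write_rate "$(pidof ceph-mon)"
```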
Eugene,
Thanks for your help. The info is really helpful. In my case, the OSDs were
encrypted so the process is a bit more involved but I managed to
/usr/bin/ceph --cluster ceph --name client.osd-lockbox.${OSD_FSID} --keyring $OSD_PATH/lockbox.keyring config-key get dm-crypt/osd/$OSD_FSID/luks
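For others hitting the encrypted case: the rest of the manual unlock can be sketched around a config-key call like the one in the thread (the device path, the /tmp key file, and the dm name are illustrative assumptions, not the exact steps from this thread):

```shell
# Fetch the LUKS passphrase that ceph-volume stored for this OSD
/usr/bin/ceph --cluster ceph --name client.osd-lockbox.${OSD_FSID} \
    --keyring ${OSD_PATH}/lockbox.keyring \
    config-key get dm-crypt/osd/${OSD_FSID}/luks > /tmp/osd.key

# Open the encrypted block device with it (device path is a placeholder)
cryptsetup --key-file /tmp/osd.key luksOpen \
    /dev/ceph-vg/osd-block-${OSD_FSID} ceph-${OSD_FSID}-dmcrypt

# Don't leave the passphrase on disk
shred -u /tmp/osd.key
```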
Thanks Oliver, useful checks!
-----Original Message-----
To: ceph-users
Subject: Re: [ceph-users] ceph-volume lvm cannot zap???
Hi,
We have also seen such cases; it seems that sometimes (when the
controller / device is broken in special ways) device mapper keeps the
volume locked.
You
https://docs.ceph.com/docs/mimic/man/8/ceph-volume-systemd/
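When device mapper is holding the volume like that, one hedged way out (the mapping name and the device path are placeholders) is to drop the stale mapping by hand and retry the zap:

```shell
# See which device-mapper targets still reference the broken disk
dmsetup ls
dmsetup info -c

# Force-remove the stale mapping (name is a placeholder)
dmsetup remove --force ceph--vg-osd--block

# Retry; --destroy also wipes the LVM metadata on the device
ceph-volume lvm zap /dev/sdX --destroy
```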
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
People,
I am interested in experimenting with CEPH on say 4 or 8 small form
factor computers (SBCs?) - any suggestions about how to get started?
I haven't bought anything yet - I have some working Fedora Workstations
and Servers and a laptop but I don't want to experiment on them . .
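One low-risk way to start without touching the existing machines, assuming the boards (or a few throwaway VMs) run a container runtime, is a cephadm bootstrap; the hostnames and IPs below are placeholders:

```shell
# Bootstrap a one-monitor cluster on the first board
cephadm bootstrap --mon-ip 192.168.1.10

# Enroll the remaining boards
ceph orch host add node2 192.168.1.11
ceph orch host add node3 192.168.1.12
ceph orch host add node4 192.168.1.13

# Let the orchestrator create an OSD on every unused disk it finds
ceph orch apply osd --all-available-devices
```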
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/drivers/md/dm-writecache.c?h=v5.8.10&id=c1005322ff02110a4df7f0033368ea015062b583
On 19/09/2020 10:31, huxia...@horebdata.cn wrote:
Dear Maged,
Thanks a lot for the detailed explanation on dm-writecache with Ceph.
You mentioned
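For context on the setup being discussed: wiring dm-writecache in front of an OSD's data LV is usually done through LVM, roughly as below (the VG/LV names and sizes are placeholders):

```shell
# Carve a cache LV out of a fast NVMe device (names/sizes are placeholders)
lvcreate -n osd_cache -L 50G ceph_vg /dev/nvme0n1p1

# Attach it as a writecache in front of the slow data LV
lvconvert --type writecache --cachevol osd_cache ceph_vg/osd_data

# Later: flush dirty blocks back and detach the cache
lvconvert --splitcache ceph_vg/osd_data
```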
- pat yourself on the back for choosing ceph, there are a lot of
experts (not including me :)) here willing to help (during office hours)
- decide what you like to use ceph for, and how much storage you need.
- Running just an OSD on a server has not that many implications, so you
could rethink