[ceph-users] Re: Setting up a small experimental CEPH network

2020-09-20 Thread Lindsay Mathieson
On 21/09/2020 5:40 am, Stefan Kooman wrote:
> My experience with bonding and Ceph is pretty good (OpenvSwitch). Ceph uses lots of tcp connections, and those can get shifted (balanced) between interfaces depending on load.

Same here - I'm running 4*1GB (LACP, Balance-TCP) on a 5 node cluster
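The bonding setup described above can be sketched roughly as follows under Open vSwitch. This is an illustrative sketch, not the posters' actual configuration; the bridge and interface names (`br0`, `eth0`..`eth3`) are placeholders.

```shell
# Sketch: 4x1GbE LACP bond on an OVS bridge, balancing per TCP flow.
# Interface names are placeholders; the switch side must also run LACP (802.3ad).
ovs-vsctl add-br br0
ovs-vsctl add-bond br0 bond0 eth0 eth1 eth2 eth3 \
    lacp=active bond_mode=balance-tcp

# Verify the bond settings took effect:
ovs-vsctl list port bond0
```

With `balance-tcp`, individual TCP flows are hashed across the slave links, which is why Ceph's many connections spread out well, while any single connection is still capped at one link's bandwidth.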

[ceph-users] Re: Setting up a small experimental CEPH network

2020-09-20 Thread Stefan Kooman
On 2020-09-20 12:19, Marc Roos wrote:
> - pat yourself on the back for choosing ceph, there are a lot of experts (not including me :)) here willing to help (during office hours)
> - decide what you like to use ceph for, and how much storage you need.
> - Running just an osd on a server has

[ceph-users] Re: Setting up a small experimental CEPH network

2020-09-20 Thread Stefan Kooman
On 2020-09-20 10:25, Philip Rhoades wrote:
> People,
>
> I am interested in experimenting with CEPH on say 4 or 8 small form factor computers (SBCs?) - any suggestions about how to get started?

What do you want to learn? What Ceph features are you interested in (RGW, CephFS, RBD)? The

[ceph-users] Is ceph-mon disk write i/o normal at more than 1/2TB a day on an empty cluster?

2020-09-20 Thread tri
Hi all, I'm in the process of testing out ceph on a small cluster. The cluster is virtually empty with no clients, just a few OSDs. I noticed heavy disk write I/O on some nodes and tracked it down to the ceph-mon daemon. A cursory review of the logs over 1 hour shows that each monitor generates
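One way to quantify a daemon's write rate on Linux (not from the thread, just a sketch) is to sample the `write_bytes` counter in `/proc/<pid>/io` over an interval:

```python
import time

def write_bytes(pid):
    """Cumulative bytes a process has caused to be written to storage (Linux)."""
    with open(f"/proc/{pid}/io") as f:
        for line in f:
            if line.startswith("write_bytes:"):
                return int(line.split()[1])
    raise RuntimeError(f"write_bytes not found in /proc/{pid}/io")

def write_rate(pid, interval=5.0):
    """Average write rate (bytes/s) of `pid` over `interval` seconds."""
    before = write_bytes(pid)
    time.sleep(interval)
    return (write_bytes(pid) - before) / interval
```

On a mon node you would pass the ceph-mon pid (e.g. from `pgrep ceph-mon`). For scale: 1/2 TB per day works out to roughly 500e9 / 86400 ≈ 5.8 MB/s sustained.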

[ceph-users] Re: Process for adding a separate block.db to an osd

2020-09-20 Thread tri
Eugene, Thanks for your help. The info is really helpful.

/usr/bin/ceph --cluster ceph --name client.osd-lockbox.${OSD_FSID} --keyring $OSD_PATH/lockbox.keyring config-key get dm-crypt/osd/$OSD_FSID/luks

In my case, the OSDs were encrypted so the process is a bit more involved but I managed to
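Building on the `config-key get` command above, the sequence for getting an encrypted OSD's data device open might look like this. This is a hypothetical sketch; `OSD_FSID`, `OSD_PATH`, and `/dev/sdX1` are placeholders you must fill in for your own cluster.

```shell
# Hypothetical sketch for a dmcrypt'd OSD; all values below are placeholders.
OSD_FSID="<osd-fsid>"
OSD_PATH="/var/lib/ceph/osd/ceph-<id>"

# Fetch the LUKS key stored under the lockbox credentials (command from the thread):
/usr/bin/ceph --cluster ceph --name client.osd-lockbox.${OSD_FSID} \
    --keyring ${OSD_PATH}/lockbox.keyring \
    config-key get dm-crypt/osd/${OSD_FSID}/luks > /tmp/osd.luks.key

# Open the encrypted data partition so the block.db work can proceed:
cryptsetup --key-file /tmp/osd.luks.key luksOpen /dev/sdX1 ${OSD_FSID}

shred -u /tmp/osd.luks.key   # don't leave the key material on disk
```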

[ceph-users] Re: ceph-volume lvm cannot zap???

2020-09-20 Thread Marc Roos
Thanks Oliver, useful checks!

-----Original Message-----
To: ceph-users
Subject: Re: [ceph-users] ceph-volume lvm cannot zap???

Hi, we have also seen such cases, it seems that sometimes (when the controller / device is broken in special ways), device mapper keeps the volume locked. You

[ceph-users] ceph docs redirect not good

2020-09-20 Thread Marc Roos
https://docs.ceph.com/docs/mimic/man/8/ceph-volume-systemd/

[ceph-users] Setting up a small experimental CEPH network

2020-09-20 Thread Philip Rhoades
People,

I am interested in experimenting with CEPH on say 4 or 8 small form factor computers (SBCs?) - any suggestions about how to get started? I haven't bought anything yet - I have some working Fedora Workstations and Servers and a laptop but I don't want to experiment on them . .

[ceph-users] Re: Benchmark WAL/DB on SSD and HDD for RGW RBD CephFS

2020-09-20 Thread Maged Mokhtar
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/drivers/md/dm-writecache.c?h=v5.8.10&id=c1005322ff02110a4df7f0033368ea015062b583

On 19/09/2020 10:31, huxia...@horebdata.cn wrote:
> Dear Maged, Thanks a lot for detailed explanation on dm-writecache with Ceph. You mentioned
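For context, putting an HDD-backed LV behind a dm-writecache on SSD can be sketched with LVM along these lines. This is an illustrative sketch assuming LVM 2.03+ with writecache support; device, VG, and LV names are placeholders, not from the thread.

```shell
# Sketch: attach an SSD writecache to an HDD-backed logical volume.
# /dev/sdb (HDD) and /dev/nvme0n1 (SSD) are placeholder device names.
vgcreate vg0 /dev/sdb /dev/nvme0n1

lvcreate -n osd0  -L 900G vg0 /dev/sdb      # data LV on the HDD
lvcreate -n cache -L  50G vg0 /dev/nvme0n1  # cache LV on the SSD

lvconvert --type writecache --cachevol cache vg0/osd0

lvs -a -o name,size,segtype vg0   # segtype of osd0 should now be "writecache"
```

Unlike dm-cache, dm-writecache only caches writes (absorbing them on the fast device and writing back later), which is why it comes up in WAL/DB-on-SSD discussions.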

[ceph-users] Re: Setting up a small experimental CEPH network

2020-09-20 Thread Marc Roos
- pat yourself on the back for choosing ceph, there are a lot of experts (not including me :)) here willing to help (during office hours)
- decide what you like to use ceph for, and how much storage you need.
- Running just an osd on a server has not that many implications so you could rethink