[ceph-users] Re: Re layout help: need chassis local io to minimize net links

2020-06-29 Thread Harry G. Coin
Anthony asked about the 'use case'.  Well, I haven't gone into details because I worried it wouldn't help much.  From a 'ceph' perspective, the sandbox layout goes like this:  4 pretty much identical old servers, each with 6 drives, and a smaller server just running a mon to break ties.  Usual

[ceph-users] Re: Re layout help: need chassis local io to minimize net links

2020-06-29 Thread Anthony D'Atri
> Thanks for the thinking. By 'traffic' I mean: when a user space rbd
> write has as a destination three replica osds in the same chassis
eek.
> does the whole write get shipped out to the mon and then back
Mons are control-plane only.
> All the 'usual suspects' like lossy ethernets and
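(A quick way to see the data path in practice, with hypothetical pool and object names: 'ceph osd map' prints the PG and the acting OSD set an object maps to; the librbd client writes directly to the primary OSD of that set, and the mon never carries the payload.)

    # hypothetical pool/object names -- shows the PG and the acting OSD set
    # (primary listed first) that a given object maps to
    ceph osd map rbd_sandbox rbd_data.12ab34cd5678.0000000000000000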

[ceph-users] Re: Re layout help: need chassis local io to minimize net links

2020-06-29 Thread Jeff Welling
I've not been replying to the list, apologies.
> just the write metadata to the mon, with the actual write data content
> not having to cross a physical ethernet cable but directly to the
> chassis-local osds via the 'virtual' internal switch?
This is my understanding as well, yes. I've not

[ceph-users] Re: Re layout help: need chassis local io to minimize net links

2020-06-29 Thread Harry G. Coin
Thanks for the thinking.  By 'traffic' I mean:  when a user space rbd write has as a destination three replica osds in the same chassis, does the whole write get shipped out to the mon and then back, or just the write metadata to the mon,  with the actual write data content not having to cross a

[ceph-users] Re: Re layout help: need chassis local io to minimize net links

2020-06-29 Thread Anthony D'Atri
What does “traffic” mean? Reads? Writes will have to hit the net regardless of any machinations.
> On Jun 29, 2020, at 7:31 PM, Harry G. Coin wrote:
>
> I need exactly what ceph is for a whole lot of work, that work just
> doesn't represent a large fraction of the total local traffic.

[ceph-users] Re layout help: need chassis local io to minimize net links

2020-06-29 Thread Harry G. Coin
I need exactly what ceph is for a whole lot of work, that work just doesn't represent a large fraction of the total local traffic.  Ceph is the right choice.  Plainly ceph has tremendous support for replication within a chassis, among chassis and among racks.  I just need intra-chassis traffic to

[ceph-users] Octopus upgrade breaks Ubuntu 18.04 libvirt

2020-06-29 Thread Andrei Mikhailovsky
Hello, I've upgraded ceph to Octopus (15.2.3 from repo) on one of the Ubuntu 18.04 host servers. The upgrade causes a problem with libvirtd, which hangs when it tries to access the storage pools. The problem doesn't exist on Nautilus. The libvirtd process simply hangs; nothing seems to happen. The

[ceph-users] Octopus missing rgw-orphan-list tool

2020-06-29 Thread Andrei Mikhailovsky
Hello, I have been struggling a lot with radosgw bucket space wastage: currently about 2/3 of the utilised space is wasted and unaccounted for. I've tried to use the tools to find the orphan objects, but these ran in a loop for weeks on end without producing any results.
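(For reference, on releases that ship it, the orphan scanner is a standalone script that takes the RGW data pool as its argument; a hedged sketch, pool name hypothetical:)

    # hypothetical data pool name; the script writes its findings to a local file
    rgw-orphan-list default.rgw.buckets.data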

[ceph-users] Re: Debian install

2020-06-29 Thread Anastasios Dados
Hello Rafael, can you check the apt sources list on your ceph-deploy node? Perhaps it still points at the Luminous Debian packages. Regards, Anastasios
On Mon, 2020-06-29 at 06:59 -0300, Rafael Quaglio wrote:
> Hi,
>
> We have already installed a new Debian (10.4) server and I
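(A hedged sketch of the check being suggested; the file path and repo line are the usual ceph-deploy defaults, adjust to your setup:)

    # on the ceph-deploy node and on node1, inspect the Ceph apt source
    cat /etc/apt/sources.list.d/ceph.list
    # for Nautilus on Debian 10 (buster) it should point at something like:
    # deb https://download.ceph.com/debian-nautilus/ buster main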

[ceph-users] Re: Bluestore performance tuning for hdd with nvme db+wal

2020-06-29 Thread Mark Kirkwood
That is an interesting point. We are using 12 OSDs per NVMe journal device on our Filestore nodes (which seems to work ok). The workload for wal + db is different, so that could be a factor. However, when I've looked at the IO metrics for the nvme it seems to be only lightly loaded, so it does not appear to
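(A minimal sketch of that kind of check, with a hypothetical device name:)

    # watch utilisation and latency of the shared WAL/DB device every 5 seconds
    # while the cluster is under write load
    iostat -xm 5 nvme0n1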

[ceph-users] Re: NFS Ganesha 2.7 in Xenial not available

2020-06-29 Thread Victoria Martinez de la Cruz
Hey Cephers, Can someone help us out with this? Seems that it could be fixed by just rerunning that job Goutham pointed out. We have a bunch of changes waiting for this to merge. Thanks in advance, V
On Fri, Jun 26, 2020 at 2:49 PM Goutham Pacha Ravi wrote:
> Hello!
>
> Thanks for bringing

[ceph-users] Re: *****SPAM***** layout help: need chassis local io to minimize net links

2020-06-29 Thread Marc Roos
I wonder if you should not have chosen a different product? Ceph is meant to distribute data across nodes, racks, data centers etc. For a nail use a hammer, for a screw use a screwdriver.
-Original Message-
To: ceph-users@ceph.io
Subject: *SPAM* [ceph-users] layout help:

[ceph-users] Issue with ceph-ansible installation, No such file or directory

2020-06-29 Thread sachin . nicky
Error occurs here:
- name: look up for ceph-volume rejected devices
  ceph_volume:
    cluster: "{{ cluster }}"
    action: "inventory"
  register: rejected_devices
  environment:
    CEPH_VOLUME_DEBUG: 1
    CEPH_CONTAINER_IMAGE: "{{
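(For context, that task wraps 'ceph-volume inventory'; a hedged way to reproduce it by hand on the target host:)

    # run the same inventory scan the ansible task performs, with debug output
    CEPH_VOLUME_DEBUG=1 ceph-volume inventory --format json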

[ceph-users] Re: rgw : unable to find part(s) of aborted multipart upload of [object].meta

2020-06-29 Thread EDH - Manuel Rios
Hi Amit, yes, in the non-EC pool there are about 600 .meta files, but I don't know if it's safe to move them to the data pool. Does anyone know if there is a way to generate a synthetic .meta only, so that the delete command is able to delete the file? Regards, Manuel
From: Amit Ghadge Sent: Monday,

[ceph-users] [RGW] Space usage vastly overestimated since Octopus upgrade

2020-06-29 Thread Liam Monahan
Hi, Since upgrading from Nautilus 14.2.9 -> Octopus 15.2.3 two weeks ago we are seeing large upticks in the reported size (both space and object count) for a number of our RGW users. It does not seem to be isolated to just one user, so I don't think it's something wrong in the users' usage
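(A hedged sketch of how to recompute and cross-check the reported numbers; uid and bucket names are hypothetical:)

    # recalculate a user's usage stats and compare with per-bucket stats
    radosgw-admin user stats --uid=someuser --sync-stats
    radosgw-admin bucket stats --bucket=somebucket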

[ceph-users] layout help: need chassis local io to minimize net links

2020-06-29 Thread Harry G. Coin
Hi I have a few servers each with 6 or more disks, with a storage workload that's around 80% done entirely within each server.   From a work-to-be-done perspective there's no need for 80% of the load to traverse network interfaces, the rest needs what ceph is all about.   So I cooked up a set of
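(A minimal sketch of the kind of CRUSH rule such a layout implies, assuming each server is its own 'host' bucket; rule id and name are hypothetical, and note that keeping all replicas inside one chassis trades away cross-host redundancy:)

    rule chassis_local {
        id 10
        type replicated
        min_size 1
        max_size 10
        step take default
        # pick a single host, then place all replicas on OSDs inside it
        step choose firstn 1 type host
        step chooseleaf firstn 0 type osd
        step emit
    }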

[ceph-users] Re: Move WAL/DB to SSD for existing OSD?

2020-06-29 Thread Lindsay Mathieson
On 29/06/2020 11:44 pm, Stefan Priebe - Profihost AG wrote:
You need to use:
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-${OSD} bluefs-bdev-new-db --dev-target /dev/bluesfs_db/db-osd${OSD}
and
ceph-bluestore-tool --path dev/osd1/ --devs-source dev/osd1/block --dev-target
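(The commands above are truncated in the archive; a hedged, untruncated sketch of the same procedure, with hypothetical OSD id and VG/LV names, run with the OSD stopped:)

    systemctl stop ceph-osd@${OSD}
    # attach a new DB device to the OSD
    ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-${OSD} \
        bluefs-bdev-new-db --dev-target /dev/bluefs_db/db-osd${OSD}
    # migrate existing BlueFS (WAL/DB) data off the main device onto it
    ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-${OSD} \
        --devs-source /var/lib/ceph/osd/ceph-${OSD}/block \
        --dev-target /var/lib/ceph/osd/ceph-${OSD}/block.db \
        bluefs-bdev-migrate
    systemctl start ceph-osd@${OSD}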

[ceph-users] Re: Nautilus latest builds for CentOS 8

2020-06-29 Thread Giulio Fidente
On 6/18/20 12:11 AM, Ken Dreyer wrote:
> On Wed, Jun 17, 2020 at 9:25 AM David Galloway wrote:
>> If there will be a 14.2.10 or 14.3.0 (I don't actually know), it will be
>> built and signed for CentOS 8.
>>
>> Is this sufficient?
>
> Yes, thanks!
I can see the newer 14.2.10 build for el8

[ceph-users] Re: Octopus Grafana inside the dashboard

2020-06-29 Thread Lenz Grimmer
Hi Simon,
On 6/29/20 2:12 PM, Simon Sutter wrote:
> I'm trying to get Grafana working inside the Dashboard.
> If I press on "Overall Performance" tab, I get an error, because the iframe
> tries to connect to the internal hostname, which cannot be resolved from my
> machine.
> If I directly
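(A hedged sketch of the usual fix, with a hypothetical hostname: point the dashboard's embedded Grafana iframes at a URL the browser can actually resolve.)

    ceph dashboard set-grafana-api-url https://grafana.example.net:3000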

[ceph-users] Re: Move WAL/DB to SSD for existing OSD?

2020-06-29 Thread Stefan Priebe - Profihost AG
Hi Lindsay,
On 29.06.20 at 15:37, Lindsay Mathieson wrote:
> Nautilus - Bluestore OSD's created with everything on disk. Now I have
> some spare SSD's - can I move the location of the existing WAL and/or DB
> to SSD partitions without recreating the OSD?
>
> I suspect not - saw emails from

[ceph-users] Move WAL/DB to SSD for existing OSD?

2020-06-29 Thread Lindsay Mathieson
Nautilus - Bluestore OSD's created with everything on disk. Now I have some spare SSD's - can I move the location of the existing WAL and/or DB to SSD partitions without recreating the OSD? I suspect  not - saw emails from 2018, in the negative :( Failing that - is it difficult to add

[ceph-users] Re: Nautilus 14.2.10 mon_warn_on_pool_no_redundancy

2020-06-29 Thread Martin Verges
I agree, please make the check also cover min_size=1 with size=2 configs, as we have done in our software for years. It is important and can prevent a lot of issues.
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerges

[ceph-users] Nautilus 14.2.10 mon_warn_on_pool_no_redundancy

2020-06-29 Thread Wout van Heeswijk
Hi All, I really like the idea of warning users against unsafe practices. Wouldn't it make sense to warn against using min_size=1 instead of size=1? I've seen data loss happen with size=2, min_size=1 when multiple failures occur and writes have been done between the failures. Effectively
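(A minimal sketch of the safer setting being argued for; the pool name is hypothetical:)

    # with size=3, min_size=2 blocks writes once only one replica is left,
    # avoiding the divergent-write scenario described above
    ceph osd pool set mypool min_size 2
    ceph osd pool get mypool min_size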

[ceph-users] Octopus Grafana inside the dashboard

2020-06-29 Thread Simon Sutter
Hello, I'm trying to get Grafana working inside the Dashboard. If I press the "Overall Performance" tab, I get an error, because the iframe tries to connect to the internal hostname, which cannot be resolved from my machine. If I open Grafana directly, everything works. How can I tell the

[ceph-users] Debian install

2020-06-29 Thread Rafael Quaglio
Hi, we have already installed a new Debian (10.4) server and I need to put it in a Ceph cluster. When I execute the command to install ceph on this node:
ceph-deploy install --release nautilus node1
It starts to install version 12.x on my node... (...)