[ceph-users] Re: NoSuchKey on key that is visible in s3 list/radosgw bk

2020-07-27 Thread Robin H. Johnson
On Mon, Jul 27, 2020 at 08:02:23PM +0200, Mariusz Gronczewski wrote: > Hi, > > I've got a problem on Octopus (15.2.3, debian packages) install, bucket > S3 index shows a file: > > s3cmd ls s3://upvid/255/38355 --recursive > 2020-07-27 17:48 50584342 > >

[ceph-users] Re: Cluster became unresponsive: e5 handle_auth_request failed to assign global_id

2020-07-27 Thread Dino Godor
Well, port 6800 is not a monitor port, as I just looked up, so I wouldn't look there. Can you use the ceph command from another mon? Also, maybe the user you use can't access the admin keyring - as far as I remember that led to infinitely hanging commands on my test cluster (but that was Nautilus,

[ceph-users] Re: rbd-nbd stuck request

2020-07-27 Thread Jason Dillaman
On Mon, Jul 27, 2020 at 3:08 PM Herbert Alexander Faleiros wrote: > > Hi, > > On Fri, Jul 24, 2020 at 12:37:38PM -0400, Jason Dillaman wrote: > > On Fri, Jul 24, 2020 at 10:45 AM Herbert Alexander Faleiros > > wrote: > > > > > > On Fri, Jul 24, 2020 at 07:28:07PM +0500, Alexander E. Patrakov

[ceph-users] Re: rbd-nbd stuck request

2020-07-27 Thread Herbert Alexander Faleiros
Hi, On Fri, Jul 24, 2020 at 12:37:38PM -0400, Jason Dillaman wrote: > On Fri, Jul 24, 2020 at 10:45 AM Herbert Alexander Faleiros > wrote: > > > > On Fri, Jul 24, 2020 at 07:28:07PM +0500, Alexander E. Patrakov wrote: > > > On Fri, Jul 24, 2020 at 6:01 PM Herbert Alexander Faleiros > > > wrote:

[ceph-users] NoSuchKey on key that is visible in s3 list/radosgw bk

2020-07-27 Thread Mariusz Gronczewski
Hi, I've got a problem on an Octopus (15.2.3, Debian packages) install; the bucket S3 index shows a file: s3cmd ls s3://upvid/255/38355 --recursive 2020-07-27 17:48 50584342 s3://upvid/255/38355/juz_nie_zyjesz_sezon_2___oficjalny_zwiastun___netflix_mp4 radosgw-admin bi list also shows it
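A way to narrow down whether the index entry is stale or the RADOS object is really gone (not from the thread; the bucket and object name are taken from the post, the data pool name is an assumption based on default RGW pool naming):

  # does RGW itself still know about the object?
  radosgw-admin object stat --bucket=upvid \
      --object=255/38355/juz_nie_zyjesz_sezon_2___oficjalny_zwiastun___netflix_mp4

  # is the data still present in the data pool? (can be slow on large pools;
  # pool name assumes the default "default.rgw.buckets.data")
  rados -p default.rgw.buckets.data ls | grep 38355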

[ceph-users] Re: mimic: much more raw used than reported

2020-07-27 Thread Frank Schilder
Hi Igor, thanks for your answer. I was thinking about that, but as far as I understood, hitting this bug actually requires a partial rewrite to happen. However, these are disk images in storage servers with basically static files, many of which are very large (15 GB). Therefore, I believe, the vast

[ceph-users] Re: Cluster became unresponsive: e5 handle_auth_request failed to assign global_id

2020-07-27 Thread Илья Борисович Волошин
Here are all the active ports on mon1 (with the exception of sshd and ntpd): # netstat -npl Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 :3300 0.0.0.0:* LISTEN 1582/ceph-mon tcp 0 0 :6789

[ceph-users] Re: Fwd: BlueFS assertion ceph_assert(h->file->fnode.ino != 1)

2020-07-27 Thread Igor Fedotov
Hi Alexei, just left a comment in the ticket... Thanks, Igor On 7/25/2020 3:31 PM, Aleksei Zakharov wrote: Hi all, I wonder if someone else has faced the issue described in the tracker: https://tracker.ceph.com/issues/45519 We thought that this problem is caused by high OSD fragmentation,

[ceph-users] Re: Cluster became unresponsive: e5 handle_auth_request failed to assign global_id

2020-07-27 Thread Dino Godor
Hi, have you tried to locally connect to the ports with netcat (or telnet)? Is the process listening ? (something like netstat -4ln or the current equivalent thereof) Is the old (new) Firewall maybe still running ? On 27.07.20 16:00, Илья Борисович Волошин wrote: Hello, I've created an
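For reference, a quick check along those lines (hypothetical host name "mon1"; 3300 and 6789 are the standard msgr v2/v1 monitor ports):

  # is ceph-mon listening on the monitor ports?
  ss -4ltnp | grep -E ':(3300|6789)'

  # can the ports be reached from another host?
  nc -vz mon1 3300
  nc -vz mon1 6789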

[ceph-users] Cluster became unresponsive: e5 handle_auth_request failed to assign global_id

2020-07-27 Thread Илья Борисович Волошин
Hello, I've created an Octopus 15.2.4 cluster with 3 monitors and 3 OSDs (6 hosts in total, all ESXi VMs). It lived through a couple of reboots without problems, then I reconfigured the main host a bit: set iptables-legacy as the current option in update-alternatives (this is a Debian 10 system),
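Since the reconfiguration described above switches the iptables backend, a hedged sketch of how to inspect that on Debian 10 (standard Debian/iptables tooling, not taken from the thread):

  # which backend is currently selected?
  update-alternatives --display iptables

  # list the active rules under both backends to see where traffic is filtered
  iptables-legacy -S
  iptables-nft -S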

[ceph-users] snaptrim blocks IO on ceph nautilus

2020-07-27 Thread Manuel Lausch
Hi, for some days now I have been trying to debug a problem with snap trimming under Nautilus. I have a cluster on Nautilus (v14.2.10), 44 nodes with 24 OSDs each at 14 TB. I create a snapshot every day and keep it for 7 days. Every time an old snapshot is deleted I get bad IO performance and blocked requests for several
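Not part of the original mail, but a commonly discussed knob for this symptom is the snap trim sleep; a minimal sketch (the value 2 is an arbitrary example, verify against your release's defaults before changing anything):

  # see how many PGs are currently in a snaptrim state
  ceph pg dump pgs_brief 2>/dev/null | grep -c snaptrim

  # slow down snap trimming so client IO is less affected
  ceph config set osd osd_snap_trim_sleep 2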

[ceph-users] Re: Push config to all hosts

2020-07-27 Thread Ricardo Marques
Hi Cem, Since https://github.com/ceph/ceph/pull/35576 you will be able to tell cephadm to keep your `/etc/ceph/ceph.conf` updated on all hosts by running: # ceph config set mgr mgr/cephadm/manage_etc_ceph_ceph_conf true But this feature has not been released yet, so you will have to wait for
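Once on a release that includes that PR, a minimal sketch of enabling and then verifying the setting (the verification step assumes the standard ceph config get syntax, it is not from the thread):

  ceph config set mgr mgr/cephadm/manage_etc_ceph_ceph_conf true

  # confirm the value actually took effect
  ceph config get mgr mgr/cephadm/manage_etc_ceph_ceph_conf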

[ceph-users] Re: mimic: much more raw used than reported

2020-07-27 Thread Igor Fedotov
Frank, I suggest starting with a perf counter analysis as per the second part of my previous email... Thanks, Igor On 7/27/2020 2:30 PM, Frank Schilder wrote: Hi Igor, thanks for your answer. I was thinking about that, but as far as I understood, to hit this bug actually requires a partial
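For reference, a hedged sketch of the kind of perf counter check meant here (run on an OSD host; osd.0 is a placeholder id):

  # compare how much BlueStore has allocated vs. how much data it actually stores
  ceph daemon osd.0 perf dump bluestore | grep -E '"bluestore_(allocated|stored)"'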

[ceph-users] Re: please help me fix iSCSI Targets not available

2020-07-27 Thread Ricardo Marques
Hi David, which Ceph version are you using? From: David Thuong Sent: Wednesday, July 22, 2020 10:45 AM To: ceph-users@ceph.io Subject: [ceph-users] please help me fix iSCSI Targets not available iSCSI Targets not available Please consult the documentation on how
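For completeness, the usual way to answer that question:

  ceph --version   # version of the local ceph CLI/binaries
  ceph versions    # versions reported by all running daemons in the cluster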

[ceph-users] Re: mimic: much more raw used than reported

2020-07-27 Thread Igor Fedotov
Hi Frank, you might be being hit by https://tracker.ceph.com/issues/44213 In short, the root causes are significant space overhead due to the high BlueStore allocation unit (64K) and the EC overwrite design. This is fixed for the upcoming Pacific release by using a 4K alloc unit, but it is unlikely to be
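A rough, hypothetical worst-case illustration of the overhead being described (numbers are only for intuition; the real amplification depends on the EC profile and write pattern):

  # a 4 KiB partial overwrite that dirties one 64 KiB allocation unit on a
  # shard consumes 64 KiB of raw space on that shard instead of 4 KiB
  echo "$((64 / 4))x worst-case space amplification for such a write"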

[ceph-users] cache tier dirty status

2020-07-27 Thread Budai Laszlo
Hello all, is there a way to interrogate a cache tier pool about the number of dirty objects/bytes that it contains? Thank you, Laszlo
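Not from the thread, but one place this has traditionally been visible (assuming the DIRTY column is still present in your release):

  # per-pool stats; for cache tier pools older releases include a DIRTY column
  ceph df detail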

[ceph-users] Reinitialize rgw garbage collector

2020-07-27 Thread Michael Bisig
Hi all, I have a question about the garbage collector within RGWs. We run Nautilus 14.2.8 and we have 32 garbage objects in the gc pool with a total of 39 GB of garbage that needs to be processed. When we run radosgw-admin gc process --include-all, objects are processed but most of them won't
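For context, a hedged sketch of how to inspect what is still queued for garbage collection (commands exist in Nautilus; the head is only there to keep the output manageable):

  # look at the pending gc entries
  radosgw-admin gc list --include-all | head -n 50

  # relevant tuning options to review in your configuration:
  # rgw_gc_max_objs, rgw_gc_processor_max_time, rgw_gc_obj_min_wait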