[ceph-users] Delete unused RBD volume takes too long.

2017-07-15 Thread Alvaro Soto
Hi, has anyone experienced this, or does anyone know why the delete process takes longer than the creation of an RBD volume? My test was this: - Create a 1 PB volume -> less than a minute - Delete the volume created -> about 2 days The result was unexpected to me, and until now I don't know the reason, the
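
For context, a minimal python-rbd sketch of the test described above (the pool name, image name and ceph.conf path are assumptions, not from the post). Creation only writes a small amount of metadata, while removal has to account for every potential backing object of the thin-provisioned image, which is the usual explanation for deletes taking far longer than creates.

    # Sketch of the create/delete test, assuming a pool named "rbd" and a
    # local /etc/ceph/ceph.conf; names and sizes are illustrative only.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')

    rbd_inst = rbd.RBD()
    # Creation is fast: the image is thin-provisioned, so only header
    # metadata is written regardless of the requested size.
    rbd_inst.create(ioctx, 'bigvol', 1024 ** 5)  # 1 PiB

    # Removal is slow: the client has to deal with every possible backing
    # object of the image, whether or not it was ever written.
    rbd_inst.remove(ioctx, 'bigvol')

    ioctx.close()
    cluster.shutdown()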

[ceph-users] iSCSI production ready?

2017-07-15 Thread Alvaro Soto
Hi guys, does anyone have any news about which release the iSCSI interface is going to be production ready in, if it isn't yet? I mean without the use of a gateway, like a different endpoint connector to a Ceph cluster. Thanks in advance. Best. -- ATTE. Alvaro Soto Escobar

[ceph-users] some OSDs stuck down after 10.2.7 -> 10.2.9 update

2017-07-15 Thread Lincoln Bryant
Hi all, After updating to 10.2.9, some of our SSD-based OSDs get put into "down" state and die as in [1]. After bringing these OSDs back up, they sit at 100% CPU utilization and never become up/in. From the log I see (from [2]): heartbeat_map is_healthy 'OSD::osd_op_tp thread

Re: [ceph-users] RBD cache being filled up in small increases instead of 4MB

2017-07-15 Thread Ruben Rodriguez
On 14/07/17 18:43, Ruben Rodriguez wrote: > How to reproduce... I'll provide more concise details on how to test this behavior: Ceph config: [client] rbd readahead max bytes = 0 # we don't want forced readahead to fool us rbd cache = true Start a qemu VM with an rbd image attached with
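
As a hedged aside, those client options could also be applied programmatically via python-rados instead of editing ceph.conf; only the option names above come from the post, everything else in this sketch is assumed.

    # Sketch: set the client options from the recipe through python-rados
    # (conffile path is an assumption; this does not replace the qemu setup).
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.conf_set('rbd_readahead_max_bytes', '0')  # no forced readahead
    cluster.conf_set('rbd_cache', 'true')             # enable the RBD cache
    cluster.connect()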

Re: [ceph-users] RBD cache being filled up in small increases instead of 4MB

2017-07-15 Thread Ruben Rodriguez
On 15/07/17 09:43, Nick Fisk wrote: >> -Original Message- >> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of >> Gregory Farnum >> Sent: 15 July 2017 00:09 >> To: Ruben Rodriguez >> Cc: ceph-users >> Subject: Re:

Re: [ceph-users] RBD cache being filled up in small increases instead of 4MB

2017-07-15 Thread Ruben Rodriguez
On 15/07/17 15:33, Jason Dillaman wrote: > On Sat, Jul 15, 2017 at 9:43 AM, Nick Fisk wrote: >> Unless you tell the rbd client to not disable readahead after reading the >> 1st x number of bytes (rbd readahead disable after bytes=0), it will stop >> reading ahead and will

Re: [ceph-users] Broken Ceph Cluster when adding new one - Proxmox 5.0 & Ceph Luminous

2017-07-15 Thread Udo Lembke
Hi, On 15.07.2017 16:01, Phil Schwarz wrote: > Hi, > ... > > While investigating, I wondered about my config: > Question regarding the /etc/hosts file: > Should I use the private_replication_LAN IP or the public ones? private_replication_LAN!! And the pve-cluster should use another network (NICs) if

Re: [ceph-users] RBD cache being filled up in small increases instead of 4MB

2017-07-15 Thread Jason Dillaman
On Sat, Jul 15, 2017 at 9:43 AM, Nick Fisk wrote: > Unless you tell the rbd client to not disable readahead after reading the 1st > x number of bytes (rbd readahead disable after bytes=0), it will stop reading > ahead and will only cache exactly what is requested by the client.
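
To illustrate the option Nick is referring to, a small python-rados/python-rbd sketch follows; the pool and image names are made up, and the behaviour shown is only what the option is documented to control (readahead stays active when rbd readahead disable after bytes is 0).

    # Sketch: keep readahead enabled indefinitely and issue sequential
    # small reads (pool/image names are assumptions).
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.conf_set('rbd_cache', 'true')
    cluster.conf_set('rbd_readahead_disable_after_bytes', '0')  # never switch readahead off
    cluster.connect()

    ioctx = cluster.open_ioctx('rbd')
    with rbd.Image(ioctx, 'testimage') as image:
        # sequential 4 KiB reads; with readahead still active, later reads
        # should be served from data prefetched into the cache
        for offset in range(0, 64 * 1024 * 1024, 4096):
            image.read(offset, 4096)
    ioctx.close()
    cluster.shutdown()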

[ceph-users] Re: Re: Re: No "snapset" attribute for clone object

2017-07-15 Thread 许雪寒
I debugged a little and found that this might have something to do with the "cache evict" and "list_snaps" operations. I examined the core file of the process with gdb and confirmed that the object that caused the segmentation fault is rbd_data.d18d71b948ac7.062e, just as the

[ceph-users] When are bugs available in the rpm repository

2017-07-15 Thread Marc Roos
When are fixes for bugs like this one, http://tracker.ceph.com/issues/20563, available in the rpm repository (https://download.ceph.com/rpm-luminous/el7/x86_64/)? I sort of don't get it from this page: http://docs.ceph.com/docs/master/releases/. Maybe something could specifically be mentioned here about the

[ceph-users] Broken Ceph Cluster when adding new one - Proxmox 5.0 & Ceph Luminous

2017-07-15 Thread Phil Schwarz
Hi, short version: I broke my cluster! Long version, with context: a 4-node Proxmox cluster; the nodes are all Proxmox 5.05 + Ceph Luminous with filestore: 3 mon+OSD, 1 LXC+OSD. It was working fine. I added a fifth node (Proxmox + Ceph) today and broke everything... Though every node can ping each

Re: [ceph-users] RBD cache being filled up in small increases instead of 4MB

2017-07-15 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Gregory Farnum > Sent: 15 July 2017 00:09 > To: Ruben Rodriguez > Cc: ceph-users > Subject: Re: [ceph-users] RBD cache being filled up in small