[ceph-users] Hammer update

2017-03-01 Thread Sasha Litvak
Hello everyone, the Hammer 0.94.10 update was announced on the blog a week ago. However, there are no packages available for either version of Red Hat. Can someone tell me what is going on?

Re: [ceph-users] Hammer update

2017-03-02 Thread Sasha Litvak
I run CentOS 6.8, so no 0.94.10 packages for el6. On Mar 2, 2017 8:47 AM, "Abhishek L" <abhis...@suse.com> wrote: Sasha Litvak writes: > Hello everyone, > > Hammer 0.94.10 update was announced in the blog a week ago. However, there are no packages available for eith

Re: [ceph-users] Mon Create currently at the state of probing

2017-06-18 Thread Sasha Litvak
Do you have a firewall enabled on the new server by any chance? On Sun, Jun 18, 2017 at 8:18 PM, Jim Forde wrote: > I have an eight node ceph cluster running Jewel 10.2.5. > > One Ceph-Deploy node. Four OSD nodes and three Monitor nodes. > > Ceph-Deploy node is r710T > > OSD’s are r710a,
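For reference, a minimal check along these lines, assuming the new monitor host runs firewalld (the port is the Ceph monitor default, not something quoted in this thread):

    # Is firewalld active on the new monitor host?
    systemctl is-active firewalld
    # If so, open the monitor port (6789/tcp on Jewel) and reload
    firewall-cmd --permanent --add-port=6789/tcp
    firewall-cmd --reload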

Re: [ceph-users] Ceph release cadence

2017-09-09 Thread Sasha Litvak
As a user, I would like to add that I would like to see real 2-year support for LTS releases. Hammer releases were sketchy at best in 2017. When Luminous was released, the outstanding bugs were auto-closed: good bye and good riddance. Also, the decision to drop certain OS support created a

Re: [ceph-users] Client admin socket for RBD

2019-06-25 Thread Sasha Litvak
nfig get" on a client.admin? There is no daemon for client.admin, I get > nothing. Can you please explain? > > > Tarek Zegar > Senior SDS Engineer > Email *tze...@us.ibm.com* > Mobile *630.974.7172* > > > > > [image: Inactive hide details for Sasha Lit

Re: [ceph-users] Commit and Apply latency on nautilus

2019-09-30 Thread Sasha Litvak
> On Mon, Sep 30, 2019 at 8:46 PM Sasha Litvak wrote: > > In my case, I am using premade Prometheus-sourced dashboards in Grafana. > > For indiv

Re: [ceph-users] OSD crashed during the fio test

2019-10-01 Thread Sasha Litvak
It was hardware indeed. The Dell server reported a disk being reset with power on. I am checking the usual suspects, i.e. controller firmware, controller event log (if I can get one), and drive firmware. I will report more when I get a better idea. Thank you! On Tue, Oct 1, 2019 at 2:33 AM Brad Hubbard
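For what it's worth, a quick way to pull the drive-side pieces mentioned above, assuming smartmontools is installed and /dev/sdX stands in for the suspect disk on a plain HBA:

    # Model, firmware revision, and health attributes of the drive
    smartctl -a /dev/sdX
    # The drive's own error log, if it keeps one
    smartctl -l error /dev/sdX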

Re: [ceph-users] Commit and Apply latency on nautilus

2019-10-01 Thread Sasha Litvak
urces loads you get step by step. Latency from 4M will not be > the same as 4k. > > I would also run fio tests on the raw Nytro 1551 devices including sync > writes. > > I would not recommend you increase readahead for random io. > > I do not recommend making RAID0
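A minimal sketch of such a raw-device sync-write test (destructive to data on /dev/sdX; device name and runtime are placeholders, not values from this thread):

    # 4k synchronous random writes directly against the raw SSD
    fio --name=syncwrite --filename=/dev/sdX --rw=randwrite --bs=4k \
        --ioengine=libaio --direct=1 --sync=1 --iodepth=1 --numjobs=1 \
        --runtime=60 --time_based --group_reporting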

Re: [ceph-users] OSD crashed during the fio test

2019-10-01 Thread Sasha Litvak
19:35:13.721 7f8d03150700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f8cd3dde700' had timed out after 60 The spike of latency on this OSD is 6 seconds at that time. Any ideas? On Tue, Oct 1, 2019 at 8:03 AM Sasha Litvak wrote: > It was hardware indeed. Dell server reported a d
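One way to see what that OSD was doing around the spike is its admin socket (osd.N below is a placeholder for the affected OSD):

    # In-flight and recently completed slow operations
    ceph daemon osd.N dump_ops_in_flight
    ceph daemon osd.N dump_historic_ops
    # Latency-related performance counters for the same OSD
    ceph daemon osd.N perf dump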

Re: [ceph-users] Commit and Apply latency on nautilus

2019-09-30 Thread Sasha Litvak
In my case, I am using premade Prometheus-sourced dashboards in Grafana. For individual OSD read latency, the query looks like this: irate(ceph_osd_op_r_latency_sum{ceph_daemon=~"$osd"}[1m]) / on (ceph_daemon) irate(ceph_osd_op_r_latency_count[1m])
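Spelled out on separate lines, with the write-side counterpart added for comparison (the _w_ metric names are assumed to follow the same scheme as the _r_ ones quoted above):

    # Per-OSD average read latency over the last minute (mgr prometheus module)
    irate(ceph_osd_op_r_latency_sum{ceph_daemon=~"$osd"}[1m])
      / on (ceph_daemon)
    irate(ceph_osd_op_r_latency_count[1m])

    # Same shape for writes
    irate(ceph_osd_op_w_latency_sum{ceph_daemon=~"$osd"}[1m])
      / on (ceph_daemon)
    irate(ceph_osd_op_w_latency_count[1m])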

Re: [ceph-users] user and group acls on cephfs mounts

2019-11-05 Thread Sasha Litvak
Figured it out. Nothing Ceph related. Someone created multiple ACL entries on a directory; ls -l showed the expected mode bits, but getfacl showed its real colors. Group write permissions were disabled at that level. On Tue, Nov 5, 2019 at 7:10 PM Yan, Zheng wrote: > On Wed, Nov 6, 2019 at 5:47 AM Alex
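A small illustration of the effect described above (directory and group names are hypothetical):

    # ls only hints at extra ACL entries with a trailing "+" on the mode string
    ls -ld /mnt/cephfs/shared
    # getfacl lists every entry, including mask:: and named group entries,
    # which determine the permissions a group member actually gets
    getfacl /mnt/cephfs/shared
    # restore group write access on that entry if it was stripped there
    setfacl -m g:team:rwx /mnt/cephfs/shared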

[ceph-users] SPAM in the ceph-users list

2019-11-12 Thread Sasha Litvak
I am seeing more and more spam on this list. Recently, for example, a strain of messages announcing services and businesses in Bangalore.

Re: [ceph-users] CephFS kernel module lockups in Ubuntu linux-image-5.0.0-32-generic?

2019-10-24 Thread Sasha Litvak
Also, search for this topic on the list. Ubuntu Disco with the most recent kernel, 5.0.0-32, seems to be unstable. On Thu, Oct 24, 2019 at 10:45 AM Paul Emmerich wrote: > Could it be related to the broken backport as described in > https://tracker.ceph.com/issues/40102 ? > > (It did affect 4.19,

[ceph-users] lists and gmail

2020-01-20 Thread Sasha Litvak
It seems that people are now split between the new and old list servers. Regardless of which one, I am missing a number of messages that appear on the archive pages but never seem to make it to my inbox. And no, they are not in my junk folder. I wonder if some of my questions are not getting a

Re: [ceph-users] low io with enterprise SSDs ceph luminous - can we expect more? [klartext]

2020-01-20 Thread Sasha Litvak
So hdparm -W 0 /dev/sdx doesn't work, or it makes no difference? Also, I am not sure I understand why it should happen before the OSDs have been started. At least in my experience, hdparm applies the setting to the hardware regardless. On Mon, Jan 20, 2020, 2:25 AM Frank Schilder wrote: > We are using Micron 5200

Re: [ceph-users] low io with enterprise SSDs ceph luminous - can we expect more? [klartext]

2020-01-21 Thread Sasha Litvak
Frank, sorry for the confusion. I thought that turning off the cache using hdparm -W 0 /dev/sdx takes effect right away, and that in the case of non-RAID controllers and Seagate or Micron SSDs I would see a difference when starting an fio benchmark right after executing hdparm. So I wonder whether it makes a difference
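A minimal way to verify whether the setting actually sticks on a given drive (device name is a placeholder; behind some RAID controller firmware the flag can be ignored, which would explain seeing no difference in fio):

    # Report the current volatile write cache state
    hdparm -W /dev/sdx
    # Disable it, then read the state back to confirm
    hdparm -W 0 /dev/sdx
    hdparm -W /dev/sdx
    # For SAS devices, sdparm exposes the same setting
    sdparm --get=WCE /dev/sdx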