Hello everyone,
The Hammer 0.94.10 update was announced on the blog a week ago. However, there
are no packages available for either version of Red Hat. Can someone tell
me what is going on?
___
ceph-users mailing list
ceph-users@lists.ceph.com
I run CentOS 6.8, so there are no 0.94.10 packages for el6.
On Mar 2, 2017 8:47 AM, "Abhishek L" <abhis...@suse.com> wrote:
Sasha Litvak writes:
> Hello everyone,
>
> Hammer 0.94.10 update was announced in the blog a week ago. However,
there are no packages available for eith
Do you have a firewall enabled on the new server, by any chance?
On Sun, Jun 18, 2017 at 8:18 PM, Jim Forde wrote:
> I have an eight node ceph cluster running Jewel 10.2.5.
>
> One Ceph-Deploy node. Four OSD nodes and three Monitor nodes.
>
> Ceph-Deploy node is r710T
>
> OSD’s are r710a,
As a user, I would like to add that I would like to see real two-year support
for LTS releases. Hammer releases were sketchy at best in 2017. When
Luminous was released, the outstanding bugs were auto-closed: good bye and
good riddance.
Also, the decision to drop certain OS support created a
nfig get" on a client.admin? There is no daemon for client.admin, I get
> nothing. Can you please explain?
>
>
> Tarek Zegar
> Senior SDS Engineer
> Email *tze...@us.ibm.com*
> Mobile *630.974.7172*
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Mon, Sep 30, 2019 at 8:46 PM Sasha Litvak
> wrote:
> >
> > In my case, I am using premade Prometheus sourced dashboards in grafana.
> >
> > For indiv
It was hardware indeed. The Dell server reported a disk being reset with power
on. I am checking the usual suspects, i.e. controller firmware, the controller
event log (if I can get one), and drive firmware.
I will report more when I get a better idea.
Thank you!
On Tue, Oct 1, 2019 at 2:33 AM Brad Hubbard
urces loads you get step by step. Latency from 4M will not be
> the same as 4k.
>
> I would also run fio tests on the raw Nytro 1551 devices, including sync
> writes.
>
> I would not recommend you increase readahead for random io.
>
> I do not recommend making RAID0
>
>
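For reference, a sync-write test like the one suggested above could be expressed as an fio job file along these lines (the device path is a placeholder, and all parameter values are illustrative, not a recommendation):

```ini
; Sketch of a 4k sync-write latency test against a raw device.
; WARNING: writing to a raw device destroys its data; /dev/sdX is a placeholder.
[sync-write-test]
filename=/dev/sdX
rw=randwrite
bs=4k
ioengine=libaio
iodepth=1
; direct=1 bypasses the page cache; sync=1 opens the device O_SYNC,
; so each write is a synchronous write as suggested above
direct=1
sync=1
runtime=60
time_based
```

Run it as root with fio <jobfile> and compare the resulting completion latencies against the 4M numbers.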
19:35:13.721 7f8d03150700  1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f8cd3dde700' had timed out after 60
The latency spike on this OSD is 6 seconds at that time. Any ideas?
On Tue, Oct 1, 2019 at 8:03 AM Sasha Litvak
wrote:
> It was hardware indeed. Dell server reported a d
In my case, I am using premade Prometheus sourced dashboards in grafana.
For individual latency, the query looks like this:
irate(ceph_osd_op_r_latency_sum{ceph_daemon=~"$osd"}[1m]) / on (ceph_daemon) irate(ceph_osd_op_r_latency_count[1m])
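The query divides the per-interval increase of the latency sum by the increase of the op count, which yields the average read latency per op. A quick sketch of the same arithmetic on made-up scrape samples:

```shell
# What the PromQL above computes, on invented sample values:
# increase of ceph_osd_op_r_latency_sum divided by the increase of
# ceph_osd_op_r_latency_count = average read latency per op.
sum_prev=10.0;  sum_now=12.5     # latency sum (seconds) at t-1m and t
count_prev=100; count_now=150    # op count at t-1m and t
awk -v sp="$sum_prev" -v sn="$sum_now" -v cp="$count_prev" -v cn="$count_now" \
    'BEGIN { printf "avg read latency: %.3f s/op\n", (sn-sp)/(cn-cp) }'
# prints: avg read latency: 0.050 s/op
```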
Figured it out. Nothing Ceph-related. Someone created multiple ACL entries
on a directory, and ls -l showed the expected permission bits, but getfacl
revealed its true colors: group write permissions were disabled at that level.
On Tue, Nov 5, 2019 at 7:10 PM Yan, Zheng wrote:
> On Wed, Nov 6, 2019 at 5:47 AM Alex
I am seeing more and more spam on this list. Recently, for example, a string
of messages announcing services and businesses in Bangalore.
Also, search for this topic on the list. Ubuntu Disco with the most recent
kernel 5.0.0-32 seems to be unstable.
On Thu, Oct 24, 2019 at 10:45 AM Paul Emmerich
wrote:
> Could it be related to the broken backport as described in
> https://tracker.ceph.com/issues/40102 ?
>
> (It did affect 4.19,
It seems that people are now split between the new and old list servers.
Regardless of which one, I am missing a number of messages that
appear on the archive pages but never seem to make it to my inbox. And no,
they are not in my junk folder. I wonder if some of my questions are not
getting a
So hdparm -W 0 /dev/sdx doesn't work, or it makes no difference? Also, I am
not sure I understand why it should happen before the OSDs have been started.
At least in my experience, hdparm applies the setting to the hardware regardless.
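If the goal is simply to guarantee the cache is off before any OSD starts, one common approach is a udev rule that runs hdparm when the disk appears; a sketch (the rule file name, device match, and hdparm path are assumptions to adapt):

```
# e.g. /etc/udev/rules.d/99-disable-write-cache.rules
# Turn the volatile write cache off as soon as a SATA/SAS disk shows up,
# i.e. before any OSD that uses it has started.
ACTION=="add|change", KERNEL=="sd[a-z]", RUN+="/usr/sbin/hdparm -W 0 /dev/%k"
```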
On Mon, Jan 20, 2020, 2:25 AM Frank Schilder wrote:
> We are using Micron 5200
Frank,
Sorry for the confusion. I thought that turning off the cache using hdparm -W
0 /dev/sdx takes effect right away, and that in the case of non-RAID controllers
and Seagate or Micron SSDs I would see a difference when starting an fio
benchmark right after executing hdparm. So I wonder whether it makes a difference