On 2/3/17, 3:23 AM, "ceph-users on behalf of Wido den Hollander"
wrote:
>
>> On 3 February 2017 at 11:03, Maxime Guyot wrote:
>>
>>
>> Hi,
>>
>> Interesting feedback!
>>
>> > In my opinion the SMR can
On 1/11/17, 10:31 AM, "ceph-users on behalf of Reed Dier"
wrote:
>>2017-01-03 12:10:23.514577 7f1d821f2800 0 ceph version 10.2.5
>>(c461ee19ecbc0c5c330aca20f7392c9a00730367), process ceph-osd, pid 19754
>> 2017-01-03
-Sam
>
>On Tue, Jan 10, 2017 at 9:41 AM, Stillwell, Bryan J
><bryan.stillw...@charter.com> wrote:
>> This is from:
>>
>> ceph version 11.1.1 (87597971b371d7f497d7eabad3545d72d18dd755)
>>
>> On 1/10/17, 10:23 AM, "Samuel Just"
>On Tue, Jan 10, 2017 at 9:00 AM, Stillwell, Bryan J
><bryan.stillw...@charter.com> wrote:
>> On 1/10/17, 5:35 AM, "John Spray" <jsp...@redhat.com> wrote:
>>
>>>On Mon, Jan 9, 2017 at 11:46 PM, Stillwell, Bryan J
>>><bryan.still
On 1/10/17, 2:56 AM, "ceph-users on behalf of Breunig, Steve (KASRL)"
wrote:
>Hi list,
>
>
>I'm running a cluster which is currently migrating from hammer to
>jewel.
>
>
>Currently I have the problem that the
On 1/10/17, 5:35 AM, "John Spray" <jsp...@redhat.com> wrote:
>On Mon, Jan 9, 2017 at 11:46 PM, Stillwell, Bryan J
><bryan.stillw...@charter.com> wrote:
>> Last week I decided to play around with Kraken (11.1.1-1xenial) on a
>> single node, two OSD cluster,
Last week I decided to play around with Kraken (11.1.1-1xenial) on a
single node, two OSD cluster, and after a while I noticed that the new
ceph-mgr daemon is frequently using a lot of CPU:
17519 ceph 20 0 850044 168104208 S 102.7 4.3 1278:27 ceph-mgr
Restarting it with
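For reference (a sketch, not part of the original message): on
systemd-based installs such as 11.1.1 on xenial, a restart is typically
something like the following, where the daemon id is an assumption and is
usually the short hostname:

    systemctl restart ceph-mgr@<id>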
On 11/1/16, 1:45 PM, "Sage Weil" <s...@newdream.net> wrote:
>On Tue, 1 Nov 2016, Stillwell, Bryan J wrote:
>> I recently learned that 'MAX AVAIL' in the 'ceph df' output doesn't
>> represent what I thought it did. It actually represents the amount of
>> data
I recently learned that 'MAX AVAIL' in the 'ceph df' output doesn't
represent what I thought it did. It actually represents the amount of
data that can be used before the first OSD becomes full, and not the sum
of all free space across a set of OSDs. This means that balancing the
data with 'ceph
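A worked example with illustrative numbers (a size=3 pool across three
1 TB OSDs, assuming uniform CRUSH weights): if the OSDs are 80%, 20%, and
20% full, only about 200 GB of new pool data fits before the fullest OSD
fills up, so MAX AVAIL reports roughly 200 GB even though about 1.8 TB of
raw space is still free across the cluster.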
Do you run a large Ceph cluster? Do you find that you run into issues
that you didn't have when your cluster was smaller? If so, we have a new
mailing list for you!
Announcing the new ceph-large mailing list. This list is targeted at
experienced Ceph operators with cluster(s) over 500 OSDs to
On 10/14/16, 2:29 PM, "Alfredo Deza" <ad...@redhat.com> wrote:
>On Thu, Oct 13, 2016 at 5:19 PM, Stillwell, Bryan J
><bryan.stillw...@charter.com> wrote:
>> On 10/13/16, 2:32 PM, "Alfredo Deza" <ad...@redhat.com> wrote:
>>
>>>On T
On 10/13/16, 2:32 PM, "Alfredo Deza" <ad...@redhat.com> wrote:
>On Thu, Oct 13, 2016 at 11:33 AM, Stillwell, Bryan J
><bryan.stillw...@charter.com> wrote:
>> I have a basement cluster that is partially built with Odroid-C2 boards
>>and
>> when
I have a basement cluster that is partially built with Odroid-C2 boards and
when I attempted to upgrade to the 10.2.3 release I noticed that this release
doesn't have an arm64 build. Are there any plans to continue making
arm64 builds?
Thanks,
Bryan
Thanks Kefu!
Downgrading the mons to 0.94.6 got us out of this situation. I appreciate
you tracking this down!
Bryan
On 10/4/16, 1:18 AM, "ceph-users on behalf of kefu chai"
wrote:
>hi ceph users,
>
>If user upgrades the
ng osdmaps from the
>MONs causing some kind of spinlock condition.
>
>> On Sep 21, 2016, at 4:21 PM, Stillwell, Bryan J
>><bryan.stillw...@charter.com> wrote:
>>
>> While attempting to upgrade a 1200+ OSD cluster from 0.94.6 to 0.94.9
>>I've
>>
While attempting to upgrade a 1200+ OSD cluster from 0.94.6 to 0.94.9 I've
run into serious performance issues every time I restart an OSD.
At first I thought the problem I was running into was caused by the osdmap
encoding bug that Dan and Wido ran into when upgrading to 0.94.7, because
I was
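As context, a common precaution when restarting OSDs one at a time during
an upgrade (a sketch, not from the original message):

    ceph osd set noout     # don't mark restarting OSDs out
    # ...restart the OSDs one at a time...
    ceph osd unset noout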
ff has changed between this.
>This is a good test case, and I doubt any of us is testing by enabling
>fsck() on mount/unmount.
>
>Thanks & Regards
>Somnath
>
>-----Original Message-----
>From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>Stillwe
I've been doing some benchmarking of BlueStore in 10.2.2 the last few days
and
have come across a failure that keeps happening after stressing the cluster
fairly heavily. Some of the OSDs started failing and attempts to restart
them
fail to log anything in /var/log/ceph/, so I tried starting them
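One common way to capture output from an OSD that logs nothing on startup
is to run it in the foreground; a sketch, with a hypothetical OSD id:

    # -d keeps the daemon in the foreground and sends log output to stderr
    ceph-osd -d --cluster ceph --id 12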
, "Stillwell, Bryan J" <bryan.stillw...@charter.com>
wrote:
>On one of my test clusters that I've upgraded from Infernalis to Jewel
>(10.2.1), I'm having a problem where reads are resulting in unfound
>objects.
>
>I'm using cephfs on top of an erasure coded pool with
On 5/27/16, 3:23 PM, "Gregory Farnum" <gfar...@redhat.com> wrote:
>On Fri, May 27, 2016 at 2:22 PM, Stillwell, Bryan J
><bryan.stillw...@charter.com> wrote:
>> Here's the full 'ceph -s' output:
>>
>> # ceph -s
>> cluster c7ba6111-e
>...need to
>mark it as repaired. That's a monitor command.
>
>On Fri, May 27, 2016 at 2:09 PM, Stillwell, Bryan J
><bryan.stillw...@charter.com> wrote:
>> On 5/27/16, 3:01 PM, "Gregory Farnum" <gfar...@redhat.com> wrote:
>>
On 5/27/16, 3:01 PM, "Gregory Farnum" wrote:
>>
>> So would the next steps be to run the following commands?:
>>
>> cephfs-table-tool 0 reset session
>> cephfs-table-tool 0 reset snap
>> cephfs-table-tool 0 reset inode
>> cephfs-journal-tool --rank=0 journal reset
>>
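For context: these commands match the jewel-era CephFS disaster-recovery
procedure, and 'journal reset' discards whatever metadata is still in the
journal, so the docs of that era recommend exporting a backup first, e.g.:

    cephfs-journal-tool journal export backup.bin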
On 5/27/16, 11:27 AM, "Gregory Farnum" <gfar...@redhat.com> wrote:
>On Fri, May 27, 2016 at 9:44 AM, Stillwell, Bryan J
><bryan.stillw...@charter.com> wrote:
>> I have a Ceph cluster at home that I've been running CephFS on for the
>> last few years
I have a Ceph cluster at home that I've been running CephFS on for the
last few years. Recently my MDS server became damaged, and while
attempting to fix it I believe I've destroyed my CephFS journal, based on
this:
2016-05-25 16:48:23.882095 7f8d2fac2700 -1 log_channel(cluster) log [ERR]
: Error
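As a non-destructive first step when a journal may be damaged (a sketch
for context, not from the original message):

    cephfs-journal-tool journal inspect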
On one of my test clusters that I've upgraded from Infernalis to Jewel
(10.2.1), I'm having a problem where reads are resulting in unfound
objects.
I'm using cephfs on top of an erasure coded pool with cache tiering, which I
believe is related.
From what I can piece together, here is what the
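For context, a minimal sketch of the cache-tier-over-EC arrangement
described above, with hypothetical pool names 'ec-data' and 'cache-pool':

    ceph osd tier add ec-data cache-pool
    ceph osd tier cache-mode cache-pool writeback
    ceph osd tier set-overlay ec-data cache-pool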