Can you open a ticket with the exact version of your Ceph cluster?
http://tracker.ceph.com
Thanks,
On Sun, Dec 10, 2017 at 10:34 PM, Martin Preuss wrote:
> Hi,
>
> I'm new to Ceph. I started a Ceph cluster from scratch on Debian 9,
> consisting of 3 hosts, each host has 3-4
On Fri, Oct 13, 2017 at 3:29 PM, Ashley Merrick <ash...@amerrick.co.uk> wrote:
> Hello,
>
>
> Is it possible to limit a cephx user to one image?
>
>
> I have looked and seems it's possible per a pool, but can't find a per image
> option.
What did you look at?
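For what it's worth, one approach that has come up on this list is to restrict the user's OSD caps to the object prefixes of a single image. A rough sketch, assuming a format-2 image named `myimage` in pool `rbd` whose block-name prefix (see `rbd info`) is `rbd_data.1234`; all of those names are hypothetical:

```shell
# Look up the image's block-name prefix first (hypothetical pool/image names)
rbd info rbd/myimage | grep block_name_prefix
# Create a user whose OSD caps only match that image's objects
ceph auth get-or-create client.oneimage \
    mon 'allow r' \
    osd 'allow rwx pool=rbd object_prefix rbd_data.1234; allow rwx pool=rbd object_prefix rbd_header.1234; allow rx pool=rbd object_prefix rbd_id.myimage'
```

Note that `rbd_data.*` / `rbd_header.*` are keyed by the image id while `rbd_id.*` is keyed by the image name, so both prefixes are needed.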
So, has the problem you faced been completely solved?
On Thu, Sep 28, 2017 at 7:51 PM, Richard Hesketh
wrote:
> On 27/09/17 19:35, John Spray wrote:
>> On Wed, Sep 27, 2017 at 1:18 PM, Richard Hesketh
>> wrote:
>>> On 27/09/17 12:32,
Are we going to have the next CDM in an APAC-friendly time slot again?
On Thu, Sep 28, 2017 at 12:08 PM, Leonardo Vaz wrote:
> Hey Cephers,
>
> This is just a friendly reminder that the next Ceph Developer Monthly
> meeting is coming up:
>
> http://wiki.ceph.com/Planning
>
> If
Just for clarification.
Did you upgrade your cluster from Hammer to Luminous, then hit an assertion?
On Wed, Sep 27, 2017 at 8:15 PM, Richard Hesketh
wrote:
> As the subject says... any ceph fs administrative command I try to run hangs
> forever and kills monitors
It would be much better to explain why, as of today, the object-map feature
is not supported by the kernel client, or to document it.
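As a common workaround (not a fix), the unsupported features can be disabled on the image before mapping; a sketch with a hypothetical pool/image name:

```shell
# object-map (and features depending on it) are not understood by krbd;
# disable them on the image, then mapping should succeed
rbd feature disable rbd/myimage fast-diff object-map deep-flatten
rbd map rbd/myimage
```

`fast-diff` has to be disabled along with `object-map` since it depends on it; passing several features to one `rbd feature disable` handles that.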
On Tue, Aug 15, 2017 at 8:08 PM, Ilya Dryomov wrote:
> On Tue, Aug 15, 2017 at 11:34 AM, moftah moftah wrote:
>> Hi All,
>>
>> I
On Sun, Apr 23, 2017 at 4:09 AM, Donny Davis wrote:
> Just in case anyone was curious as to how amazing ceph actually is, I did
> the migration to ceph seamlessly. I was able to bring the other two nodes
> into the cluster, and then turn on replication between them without a
You don't need to recompile that tool. Please see
``ceph_erasure_code_benchmark -h``.
Some examples are:
https://github.com/ceph/ceph/blob/master/src/erasure-code/isa/README#L31-L48
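An invocation along the lines of the README examples (the flag values here are only illustrative; check `-h` for the exact options in your build):

```shell
# Benchmark ISA-L encode: 1000 iterations over 4 MiB buffers with k=2, m=2
ceph_erasure_code_benchmark \
    --plugin isa \
    --workload encode \
    --iterations 1000 \
    --size 4194304 \
    --parameter k=2 --parameter m=2
```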
On Sat, Apr 8, 2017 at 8:21 AM, Henry Ngo wrote:
> Hello,
>
> I have a 6 node cluster and I
Please open a ticket so that we can track it.
http://tracker.ceph.com/
Regards,
On Sat, Apr 8, 2017 at 1:40 AM, Patrick Donnelly
wrote:
> Hello Andras,
>
> On Wed, Mar 29, 2017 at 11:07 AM, Andras Pataki
> wrote:
> > Below is a crash we had on a
Adding Patrick, who might be the best person to answer this.
Regards,
On Wed, Apr 5, 2017 at 6:16 PM, Wido den Hollander wrote:
>
>> On 5 April 2017 at 8:14, SJ Zhu wrote:
>>
>>
>> Wido, ping?
>>
>
> This might take a while! Has to go through a few hops for this to get
> I am sure I remember having to reduce min_size to 1 temporarily in the past
> to allow recovery from having two drives irrecoverably die at the same time
> in one of my clusters.
What was the situation in which you had to do that?
Thanks in advance for sharing your experience.
Regards,
So the description of Jewel is wrong?
http://docs.ceph.com/docs/master/releases/
On Thu, Mar 16, 2017 at 2:27 AM, John Spray <jsp...@redhat.com> wrote:
> On Wed, Mar 15, 2017 at 5:04 PM, Shinobu Kinjo <ski...@redhat.com> wrote:
>> It may be probably kind of challenge but p
a LTS (Long Term Stable) and will receive
> updates until two LTS are published.
>
> --
> Deepak
>
>> On Mar 15, 2017, at 10:09 AM, Shinobu Kinjo <ski...@redhat.com> wrote:
>>
>> It may be probably kind of challenge but please consider Kraken (or
>> later)
as basically for
> bug fixes for 0.94.9.
>
>
>
> On Wed, Mar 15, 2017 at 9:16 AM, Shinobu Kinjo <ski...@redhat.com> wrote:
>>
>> FYI:
>> https://plus.google.com/+Cephstorage/posts/HuCaTi7Egg3
>>
>> On Thu, Mar 16, 2017 at 1:05 AM, Shain Miley <smi..
FYI:
https://plus.google.com/+Cephstorage/posts/HuCaTi7Egg3
On Thu, Mar 16, 2017 at 1:05 AM, Shain Miley wrote:
> Hello,
> I am trying to deploy ceph to a new server using ceph-deply which I have
> done in the past many times without issue.
>
> Right now I am seeing a timeout
We already discussed this:
https://www.spinics.net/lists/ceph-devel/msg34559.html
What do you think of the comment posted in that thread?
Would that make sense to you as well?
On Tue, Feb 28, 2017 at 2:41 AM, Vasu Kulkarni wrote:
> Ilya,
>
> Many folks hit this and its quite
Please open a ticket at http://tracker.ceph.com, if you haven't yet.
On Thu, Feb 16, 2017 at 6:07 PM, Muthusamy Muthiah
wrote:
> Hi Wido,
>
> Thanks for the information and let us know if this is a bug.
> As workaround we will go with small bluestore_cache_size to
If ``ceph pg deep-scrub <pg id>`` does not work, then run:
``ceph pg repair <pg id>``
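Put together, the usual workflow for an inconsistent PG looks roughly like this (the PG id ``2.1a`` is a placeholder):

```shell
# Identify the inconsistent PG
ceph health detail | grep inconsistent
# Deep-scrub it again first
ceph pg deep-scrub 2.1a
# If it stays inconsistent, ask the primary OSD to repair it
ceph pg repair 2.1a
```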
On Sat, Feb 18, 2017 at 10:02 AM, Tracy Reed wrote:
> I have a 3 replica cluster. A couple times I have run into inconsistent
> PGs. I googled it and ceph docs and various blogs say run a repair
>
On Sat, Feb 18, 2017 at 9:03 AM, Matyas Koszik wrote:
>
>
> Looks like you've provided me with the solution, thanks!
:)
> I've set the tunables to firefly, and now I only see the normal states
> associated with a recovering cluster, there're no more stale pgs.
> I hope it'll stay
You may need to increase ``choose_total_tries`` from the default of 50
up to 100.
-
http://docs.ceph.com/docs/master/rados/operations/crush-map/#editing-a-crush-map
- https://github.com/ceph/ceph/blob/master/doc/man/8/crushtool.rst
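The change itself goes through the usual decompile/edit/recompile cycle, roughly:

```shell
ceph osd getcrushmap -o crushmap.bin       # fetch the compiled CRUSH map
crushtool -d crushmap.bin -o crushmap.txt  # decompile to editable text
# edit crushmap.txt:  tunable choose_total_tries 100
crushtool -c crushmap.txt -o crushmap.new  # recompile
ceph osd setcrushmap -i crushmap.new       # inject the updated map
```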
On Sat, Feb 18, 2017 at 5:25 AM, Matyas Koszik
Can you run the following?
* ceph osd getcrushmap -o ./crushmap.o; crushtool -d ./crushmap.o -o ./crushmap.txt
On Sat, Feb 18, 2017 at 3:52 AM, Gregory Farnum wrote:
> Situations that are stable lots of undersized PGs like this generally
> mean that the CRUSH map is failing to allocate
Would you simply run:
* ceph -s
On Fri, Feb 17, 2017 at 6:26 AM, Benjeman Meekhof wrote:
> As I'm looking at logs on the OSD mentioned in previous email at this
> point, I mostly see this message repeating...is this normal or
> indicating a problem? This osd is marked up in
On Wed, Feb 15, 2017 at 2:18 AM, Lukáš Kubín wrote:
> Hi,
> I'm most probably hitting bug http://tracker.ceph.com/issues/13755 - when
> libvirt mounted RBD disks suspend I/O during snapshot creation until hard
> reboot.
>
> My Ceph cluster (monitors and OSDs) is running
Do you know why the CPU cost is so high?
> Are there any solutions or suggestions to this problem?
>
> Cheers
>
> -----Original Message-----
> From: Shinobu Kinjo [mailto:ski...@redhat.com]
> Sent: February 13, 2017 10:54
> To: chenyehua 11692 (RD)
> Cc: kc...@redhat.com; ceph-users@lists.
> Sent: February 13, 2017 9:40
> To: 'Shinobu Kinjo'
> Cc: kc...@redhat.com; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] mon is stuck in leveldb and costs nearly 100% cpu
>
> My ceph version is 10.2.5
>
> -----Original Message-----
> From: Shinobu Kinjo [mailto:ski...@redhat.com]
> Sent
Which Ceph version are you using?
On Sat, Feb 11, 2017 at 5:02 PM, Chenyehua wrote:
> Dear Mr Kefu Chai
>
> Sorry to disturb you.
>
> I meet a problem recently. In my ceph cluster ,health status has warning
> “store is getting too big!” for several days; and ceph-mon costs
ule loaded?
>
> If the answer to that question is "yes" the follow-up question is
> "Why?" as it is not required for a MON or OSD host.
>
> On Sat, Feb 11, 2017 at 1:18 PM, Michael Andersen <mich...@steelcode.com>
> wrote:
>> Yeah, all three mons have OSDs on the same mach
Yeah, all three mons have OSDs on the same machines.
>
> On Feb 10, 2017 7:13 PM, "Shinobu Kinjo" <ski...@redhat.com> wrote:
>>
>> Is your primary MON running on the host which some OSDs are running on?
>>
>> On Sat, Feb 11, 2017 at 11:53 AM, Mic
Is your primary MON running on the host which some OSDs are running on?
On Sat, Feb 11, 2017 at 11:53 AM, Michael Andersen
wrote:
> Hi
>
> I am running a small cluster of 8 machines (80 osds), with three monitors on
> Ubuntu 16.04. Ceph version 10.2.5.
>
> I cannot reboot
What exactly did you do?
On Fri, Feb 10, 2017 at 11:48 AM, 周威 wrote:
> The version I'm using is 0.94.9
>
> And when I want to create a pool, It shows:
>
> Error EINVAL: error running crushmap through crushtool: (1) Operation
> not permitted
>
> What's wrong about this?
>
.0
> 5 0.04390 osd.5 up 1.0 1.0
>
>
> Appreciate your help
>
> Craig
>
> -Original Message-
> From: Shinobu Kinjo [mailto:ski...@redhat.com]
> Sent: Thursday, February 9, 2017 2:34 PM
> To: Craig Read <cr...@litewiredata.com>
> C
4 OSD nodes or daemons?
Please run:
* ceph -v
* ceph -s
* ceph osd tree
On Fri, Feb 10, 2017 at 5:26 AM, Craig Read wrote:
> We have 4 OSDs in test environment that are all stuck unclean
>
>
>
> I’ve tried rebuilding the whole environment with the same result.
>
>
>
>
On Wed, Feb 8, 2017 at 8:07 PM, Dan van der Ster wrote:
> Hi,
>
> This is interesting. Do you have a bit more info about how to identify
> a server which is suffering from this problem? Is there some process
> (xfs* or kswapd?) we'll see as busy in top or iotop.
That's my
If you were able to reproduce the issue intentionally under a
particular condition (which I have no idea about at the moment), that
would be helpful.
There were some previous mailing-list threads regarding a *similar* issue.
# google "libvirt rbd issue"
Regards,
On Tue, Feb 7, 2017 at 7:50 PM, Tracy Reed
ue on Ubuntu 14.04 with Ceph
> repositories (also latest Jewel and ceph-deploy) as well.
>
The community Ceph packages are running on the Ubuntu box, right?
If so, please run `ceph -v` on the Ubuntu box.
And please also show us the same issue as you hit it on the SUSE box.
>
> On Wed, Feb 8, 2017
patch(Message*)+0xc3) [0x557c51b25703]
>> > 16: (DispatchQueue::entry()+0x78b) [0x557c5200d06b]
>> > 17: (DispatchQueue::DispatchThread::entry()+0xd) [0x557c51ee5dcd]
>> > 18: (()+0x8734) [0x7f7e95dea734]
>> > 19: (clone()+0x6d) [0x7f7e93d80d3d]
>> >
100 409600
./rados -p cephfs_data_a ls | wc -l
100
If you could reproduce the issue and share the procedure with us, that
would definitely help.
Will try again.
On Tue, Feb 7, 2017 at 2:01 AM, Florent B <flor...@coppint.com> wrote:
> On 02/06/2017 05:49 PM, Shinobu Kinjo wrote:
>
How about *pve01-rbd01*?
* rados -p pve01-rbd01 ls | wc -l
On Mon, Feb 6, 2017 at 9:40 PM, Florent B wrote:
> On 02/06/2017 11:12 AM, Wido den Hollander wrote:
>>> On 6 February 2017 at 11:10, Florent B wrote:
>>>
>>>
>>> # ceph -v
>>> ceph
On Sun, Feb 5, 2017 at 1:15 AM, John Spray wrote:
> On Fri, Feb 3, 2017 at 5:28 PM, Florent B wrote:
>> Hi everyone,
>>
>> On a Jewel test cluster I have :
Please run `ceph -v`.
>>
>> # ceph df
>> GLOBAL:
>> SIZE AVAIL RAW USED %RAW USED
>>
You may want to add this to your FIO recipe:
* exec_prerun=echo 3 > /proc/sys/vm/drop_caches
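In a job file that would look something like this (the target file and job parameters are only illustrative):

```ini
[global]
ioengine=libaio
direct=1
; drop the page cache before each run so cached reads don't skew results (needs root)
exec_prerun=echo 3 > /proc/sys/vm/drop_caches

[seqread]
rw=read
bs=4M
size=1G
filename=/mnt/cephfs/fio.testfile
```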
Regards,
On Fri, Feb 3, 2017 at 12:36 AM, Wido den Hollander wrote:
>
>> On 2 February 2017 at 15:35, Ahmed Khuraidah wrote:
>>
>>
>> Hi all,
>>
>> I am still
30, 2017 at 1:23 PM, Gregory Farnum <gfar...@redhat.com>
>>>>> wrote:
>>>>> > On Sun, Jan 29, 2017 at 6:40 AM, Muthusamy Muthiah
>>>>> > <muthiah.muthus...@gmail.com> wrote:
>>>>> >> Hi All,
>>>>>
On Wed, Feb 1, 2017 at 1:51 AM, Joao Eduardo Luis wrote:
> On 01/31/2017 03:35 PM, David Turner wrote:
>>
>> If you do have a large enough drive on all of your mons (and always
>> intend to do so) you can increase the mon store warning threshold in the
>> config file so that it no
First off, please provide the following:
* ceph -s
* ceph osd tree
* ceph pg dump
and
* what you actually did, with the exact commands.
Regards,
On Tue, Jan 31, 2017 at 6:10 AM, José M. Martín wrote:
> Dear list,
>
> I'm having some big problems with my setup.
>
> I was
There were some related mailing-list threads.
Google this:
[ceph-users] Ceph Plugin for Collectd
On Sun, Jan 29, 2017 at 8:43 AM, Marc Roos wrote:
>
>
> Is there a doc that describes all the parameters that are published by
> collectd-ceph?
>
> Is there maybe a default grafana
`ceph pg dump` should show you something like:
* active+undersized+degraded ... [NONE,3,2,4,1] 3 [NONE,3,2,4,1]
Sam,
Am I wrong? Or is it due to something else?
On Sat, Jan 21, 2017 at 4:22 AM, Gregory Farnum wrote:
> I'm pretty sure the default configs won't let an
What does `ceph -s` say?
On Sat, Jan 21, 2017 at 3:39 AM, Wido den Hollander wrote:
>
>> On 20 January 2017 at 17:17, Kai Storbeck wrote:
>>
>>
>> Hello ceph users,
>>
>> My graphs of several counters in our Ceph cluster are showing abnormal
>> behaviour after
On Fri, Jan 20, 2017 at 2:54 AM, Brian Andrus
wrote:
> Much of the Ceph project VMs (including tracker.ceph.com) is currently
> hosted on DreamCompute. The migration to our new service/cluster that was
> completed on 2017-01-17, the Ceph project was somehow enabled in
Now I'm totally clear.
Regards,
On Fri, Jan 13, 2017 at 6:59 AM, Samuel Just wrote:
> That would work.
> -Sam
>
> On Thu, Jan 12, 2017 at 1:40 PM, Gregory Farnum wrote:
>> On Thu, Jan 12, 2017 at 1:37 PM, Samuel Just wrote:
>>> Oh, this
,
On Thu, Jan 12, 2017 at 1:01 PM, Jason Dillaman <jdill...@redhat.com> wrote:
> On Wed, Jan 11, 2017 at 10:43 PM, Shinobu Kinjo <ski...@redhat.com> wrote:
>> +2
>> * Reduce manual operation as much as possible.
>> * A recovery tool in case that we break somethin
Sorry, I don't get your question.
Generally speaking, the MON maintains maps of the cluster state:
* Monitor map
* OSD map
* PG map
* CRUSH map
Regards,
On Thu, Jan 12, 2017 at 7:03 PM, wrote:
> Hi all,
> I had just reboot all 3 nodes (one after one) of an small
On Thu, Jan 12, 2017 at 12:28 PM, Christian Balzer wrote:
>
> Hello,
>
> On Wed, 11 Jan 2017 11:09:46 -0500 Jason Dillaman wrote:
>
>> I would like to propose that starting with the Luminous release of Ceph,
>> RBD will no longer support the creation of v1 image format images via
On Thu, Jan 12, 2017 at 2:41 AM, Ilya Dryomov <idryo...@gmail.com> wrote:
> On Wed, Jan 11, 2017 at 6:01 PM, Shinobu Kinjo <ski...@redhat.com> wrote:
>> It would be fine to not support v1 image format at all.
>>
>> But it would be probably friendly fo
It would be fine to not support v1 image format at all.
But it would probably be friendlier to users to provide a more
understandable message when they hit a feature mismatch, instead of just
displaying:
* rbd: map failed: (6) No such device or address
For instance, show the following
?
>>>
>>> - We can also read:
>>> WHICH CLIENT VERSIONS SUPPORT CRUSH_TUNABLES2
>>> - v0.55 or later, including bobtail series (v0.56.x)
>>> - Linux kernel version v3.9 or later (for the file system and RBD kernel
>>> clients)
>>>
&
>> should? For me I thought everything is fine because ceph -s said they are up
>> and running.
>>
>> I would think of a problem with the crush map.
>>
>>> On 10.01.2017 at 08:06, Shinobu Kinjo <ski...@redhat.com> wrote:
>>>
>>> e.g.,
>
t where do you see this? I think this indicates that they are up:
> osdmap e3114: 9 osds: 9 up, 9 in; 4 remapped pgs?
>
>
>> On 10.01.2017 at 07:50, Shinobu Kinjo <ski...@redhat.com> wrote:
>>
>> On Tue, Jan 10, 2017 at 3:44 PM, Marcus Müller <mueller.mar...@poste
9 1.24
> TOTAL 55872G 15071G 40801G 26.97
> MIN/MAX VAR: 0.61/1.70 STDDEV: 13.16
>
> As you can see, now osd2 also went down to 45% Use and „lost“ data. But I
> also think this is no problem and ceph just clears everything up after
> backfilling.
>
>
> Am 1
Looking at the ``ceph -s`` output you originally provided, all OSDs are up.
> osdmap e3114: 9 osds: 9 up, 9 in; 4 remapped pgs
But looking at ``pg query``, OSD.0 / 1 are not up. Is that somehow
related to this?:
> Ceph1, ceph2 and ceph3 are vms on one physical host
Are those OSDs running on vm
> pg 9.7 is stuck unclean for 512936.160212, current state active+remapped,
> last acting [7,3,0]
> pg 7.84 is stuck unclean for 512623.894574, current state active+remapped,
> last acting [4,8,1]
> pg 8.1b is stuck unclean for 513164.616377, current state active+remapped,
> last acting [4,7,2]
s in question and remove the ones with
> inconsistencies (which should remove the underlying rados objects). But
> it'd be perhaps good to do some searching on how/why this problem came about
> before doing this.
>
> andras
>
>
>
> On 01/07/2017 06:48 PM, Shinobu Kinjo
+undersized+degraded
>> 164 active+undersized+degraded
>>
>>
>>
>> root@alex-desktop:/var/lib/ceph/mon/ceph-alex-desktop# ls -ls
>> total 8
>> 0 -rw-r--r-- 1 ceph ceph0 Jan 7 21:11 done
>> 4 -rw--- 1 ceph ceph 77 Jan 7 21:05 keyring
>>
ONITOR (MANUAL)
>
>
>
> Alex F. Evonosky
>
> <https://twitter.com/alexevon> <https://www.linkedin.com/in/alexevonosky>
>
> On Sat, Jan 7, 2017 at 6:36 PM, Shinobu Kinjo <ski...@redhat.com> wrote:
>
>> How did you add a third MON?
>>
>> Re
How did you add a third MON?
Regards,
On Sun, Jan 8, 2017 at 7:01 AM, Alex Evonosky wrote:
> Anyone see this before?
>
>
> 2017-01-07 16:55:11.406047 7f095b379700 0 cephx: verify_reply couldn't
> decrypt with error: error decoding block for decryption
> 2017-01-07
On Wed, Jan 4, 2017 at 6:05 PM, 许雪寒 wrote:
> We've already restarted the OSD successfully.
> Now, we are trying to figure out why the OSD suicide itself
A network issue causing pretty unstable communication with other
OSDs in the same acting set is what usually causes a suicide.
>
> Re:
On Wed, Jan 4, 2017 at 4:33 PM, Henrik Korkuc wrote:
> On 17-01-04 03:16, Gregory Farnum wrote:
>>
>> On Fri, Dec 23, 2016 at 12:04 AM, Henrik Korkuc wrote:
>>>
>>> Hello,
>>>
>>> I wondered if Ceph can emit stats (via perf counters, statsd or in some
>>> other
"parent_split_bits": 0,
> "last_scrub": "342266'14514",
> "last_scrub_stamp": "2016-10-28 16:41:06.563820",
> "last_deep_scrub": "342266'14514",
> "last_deep_scrub_stamp": "201
The description of ``--pool=data`` is fine, but it just confuses users.
http://docs.ceph.com/docs/jewel/start/quick-ceph-deploy/
should be synced with
https://github.com/ceph/ceph/blob/master/doc/start/quick-ceph-deploy.rst
I would recommend referring to ``quick-ceph-deploy.rst`` because the docs
in git
Yeah, DreamHost seems to have an internal issue, which is not good for us.
Sorry for that.
On Tue, Jan 3, 2017 at 5:41 PM, Rajib Hossen
wrote:
> Hello, I can't browse docs.ceph.com for last 2/3 days. Google says it takes
> too many time to reload. I also
I've never done a migration of cephfs_metadata from spindle disks to
SSDs, but logically you could achieve this in 2 phases:
#1 Configure a CRUSH rule including both spindle disks and SSDs
#2 Configure a CRUSH rule pointing only to SSDs
* This would cause massive data shuffling.
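With Jewel-era commands, phase #2 boils down to repointing the pool at the new rule (the ruleset id ``1`` and the pool name are hypothetical):

```shell
# After adding an SSD-only rule to the decompiled CRUSH map and injecting it,
# point the metadata pool at that rule (ruleset id is hypothetical)
ceph osd pool set cephfs_metadata crush_ruleset 1
# Watch the (massive) data movement
ceph -w
```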
On Mon, Jan
The best practice for reweighting OSDs is to run
test-reweight-by-utilization, which is a dry run of the reweighting,
before running reweight-by-utilization.
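In practice that is (120 is the default oversubscription threshold in percent; verify against your release):

```shell
# Dry run: show what reweight-by-utilization *would* do
ceph osd test-reweight-by-utilization 120
# If the proposed reweights look sane, apply them
ceph osd reweight-by-utilization 120
```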
On Sat, Dec 31, 2016 at 3:05 AM, Brian Andrus
wrote:
> We have a set it and forget it cronjob setup once an hour to
On Fri, Dec 30, 2016 at 7:27 PM, Kees Meijs wrote:
> Thanks, I'll try a manual reweight at first.
Great.
CRUSH would probably be able to be more clever in the future anyway.
>
> Have a happy new year's eve (yes, I know it's a day early)!
>
> Regards,
> Kees
>
> On 30-12-16
On Fri, Dec 30, 2016 at 7:17 PM, Wido den Hollander wrote:
>
>> On 30 December 2016 at 11:06, Kees Meijs wrote:
>>
>>
>> Hi Asley,
>>
>> We experience (using Hammer) a similar issue. Not that I have a perfect
>> solution to share, but I felt like mentioning a "me
You can track the activity of the acting set using:
# ceph daemon osd.${osd id} dump_ops_in_flight
On Fri, Dec 30, 2016 at 3:59 PM, Jaemyoun Lee
wrote:
> Dear Wido,
> Is there a command to check the ACK? Or, may you tell me a source code
> function for the received ACK?
>
>
hu, Dec 29, 2016 at 2:01 PM, Ukko Hakkarainen
> <ukkohakkarai...@gmail.com> wrote:
>>
>> Shinobe,
>>
>> I'll re-check if the info I'm after is there, I recall not. I'll get back
>> to you later.
>>
>> Thanks!
>>
>> > Shinob
And we may be interested in your cluster's configuration.
# ceph --show-config > $(hostname).$(date +%Y%m%d).ceph_conf.txt
On Fri, Dec 30, 2016 at 7:48 AM, David Turner wrote:
> Another thing that I need to make sure on is that your number of PGs in
> the pool
I always tend to jump into:
https://github.com/ceph
Everything is there.
On Fri, Dec 30, 2016 at 2:34 AM, Michael Hackett wrote:
> Hello Andre,
>
> The Ceph site would be the best place to get the information you are looking
> for, specifically the docs section:
Please see the following:
http://docs.ceph.com/docs/giant/architecture/
Everything you would want to know about is there.
Regards,
On Thu, Dec 29, 2016 at 8:27 AM, Ukko wrote:
> I'd be interested in CRUSH algorithm simplified in series of
> pictures. How does a
On Sun, Dec 25, 2016 at 7:33 AM, Brad Hubbard wrote:
> On Sun, Dec 25, 2016 at 3:33 AM, w...@42on.com wrote:
>>
>>
>>> On 24 Dec 2016 at 17:20, L. Bader wrote the following:
>>>
>>> Do you have any references on this?
>>>
quot;data_digest": 2293522445,
> "omap_digest": 4294967295,
> "expected_object_size": 4194304,
> "expected_write_size": 4194304,
> "alloc_hint_flags": 53,
> "watchers": {}
> }
>
> Depending on the output o
Would you be able to execute ``ceph pg ${PG ID} query`` against that
particular PG?
On Wed, Dec 21, 2016 at 11:44 PM, Andras Pataki
wrote:
> Yes, size = 3, and I have checked that all three replicas are the same zero
> length object on the disk. I think some
Can you share exact steps you took to build the cluster?
On Thu, Dec 22, 2016 at 3:39 AM, Aakanksha Pudipeddi
wrote:
> I mean setup a Ceph cluster after compiling from source and make install. I
> usually use the long form to setup the cluster. The mon setup is fine
Would you give us some outputs?
# getfattr -n ceph.quota.max_bytes /some/dir
and
# ls -l /some/dir
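For reference, the quota is set through the same xattr; a sketch with a placeholder path and size:

```shell
# Set a ~100 MB quota on a CephFS directory (path and size are placeholders)
setfattr -n ceph.quota.max_bytes -v 100000000 /mnt/cephfs/some/dir
# Read it back
getfattr -n ceph.quota.max_bytes /mnt/cephfs/some/dir
```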
On Thu, Dec 15, 2016 at 4:41 PM, gjprabu wrote:
>
> Hi Team,
>
> We are using ceph version 10.2.4 (Jewel) and data's are mounted
> with cephfs file system in
{cache pool} ls
# rados -p ${cache pool} get ${object} /tmp/file
# ls -l /tmp/file
-- Original --
From: "Shinobu Kinjo"<ski...@redhat.com>;
Date: Tue, Dec 13, 2016 06:21 PM
To: "JiaJia Zhong"<zhongjia...@haomaiyi.com>;
Cc:
On Tue, Dec 13, 2016 at 4:38 PM, JiaJia Zhong
wrote:
> hi cephers:
> we are using ceph hammer 0.94.9, yes, It's not the latest ( jewel),
> with some ssd osds for tiering, cache-mode is set to readproxy,
> everything seems to be as expected,
> but when
On Sat, Dec 10, 2016 at 11:00 PM, Jason Dillaman wrote:
> I should clarify that if the OSD has silently failed (e.g. the TCP
> connection wasn't reset and packets are just silently being dropped /
> not being acked), IO will pause for up to "osd_heartbeat_grace" before
The
On Sat, Nov 19, 2016 at 6:59 AM, Brad Hubbard wrote:
> +ceph-devel
>
> On Fri, Nov 18, 2016 at 8:45 PM, Nick Fisk wrote:
>> Hi All,
>>
>> I want to submit a PR to include fix in this tracker bug, as I have just
>> realised I've been experiencing it.
>>
>>
D."
>
>
> On 8 August 2016 at 13:16, Shinobu Kinjo <shinobu...@gmail.com> wrote:
>>
>> On Mon, Aug 8, 2016 at 8:01 PM, Mykola Dvornik <mykola.dvor...@gmail.com>
>> wrote:
>> > Dear ceph community,
>> >
>> > One of the OS
On Mon, Aug 8, 2016 at 8:01 PM, Mykola Dvornik wrote:
> Dear ceph community,
>
> One of the OSDs in my cluster cannot start due to the
>
> ERROR: osd init failed: (28) No space left on device
>
> A while ago it was recommended to manually delete PGs on the OSD to let it
On Sun, Aug 7, 2016 at 6:56 PM, Christian Balzer wrote:
>
> [Reduced to ceph-users, this isn't community related]
>
> Hello,
>
> On Sat, 6 Aug 2016 20:23:41 +0530 Venkata Manojawa Paritala wrote:
>
>> Hi,
>>
>> We have configured single Ceph cluster in a lab with the below
>>
osd_heartbeat_addr must be in the [osd] section.
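i.e. something like this in ceph.conf (all addresses below are placeholders):

```ini
[global]
public_network = 192.168.0.0/24
cluster_network = 10.0.0.0/24

[osd]
; heartbeat address must be set here, not in [global]; address is a placeholder
osd_heartbeat_addr = 10.0.0.11
```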
On Thu, Jul 28, 2016 at 4:31 AM, Venkata Manojawa Paritala
wrote:
> Hi,
>
> I have configured the below 2 networks in Ceph.conf.
>
> 1. public network
> 2. cluster_network
>
> Now, the heart beat for the OSDs is happening thru
ractive.de
>
> Anschrift:
>
> IP Interactive UG ( haftungsbeschraenkt )
> Zum Sonnenberg 1-3
> 63571 Gelnhausen
>
> HRB 93402 beim Amtsgericht Hanau
> Geschäftsführung: Oliver Dzombic
>
> Steuer Nr.: 35 236 3622 1
> UST ID: DE2740861
Can you reproduce with debug client = 20?
On Tue, Jul 5, 2016 at 10:16 AM, Goncalo Borges <
goncalo.bor...@sydney.edu.au> wrote:
> Dear All...
>
> We have recently migrated all our ceph infrastructure from 9.2.0 to 10.2.2.
>
> We are currently using ceph-fuse to mount cephfs in a number of
Reproduce with 'debug mds = 20' and 'debug ms = 20'.
shinobu
On Mon, Jul 4, 2016 at 9:42 PM, Lihang wrote:
> Thank you very much for your advice. The command "ceph mds repaired 0"
> work fine in my cluster, my cluster state become HEALTH_OK and the cephfs
> state become
write operations to RADOS will be cancelled (maybe `cancelled` is not the
appropriate word in this sentence) until the full epoch, before touching
the same object, since clients must have the latest OSD map.
Does it make sense?
Anyway, in case I've been missing something, someone will add more.
>
> Does this make se
"recovery_progress": {
> "backfill_targets": [],
> "waiting_on_backfill": [],
> "last_backfill_started": "MIN",
> "backfill_info": {
> "begin&q
What does `ceph pg 6.263 query` show you?
On Thu, Jun 30, 2016 at 12:02 PM, Goncalo Borges <
goncalo.bor...@sydney.edu.au> wrote:
> Dear Cephers...
>
> Today our ceph cluster gave us a couple of scrub errors regarding
> inconsistent pgs. We just upgraded from 9.2.0 to 10.2.2 two days ago.
>
> #
What will the following show you?
ceph pg 12.258 list_unfound // may hang...
ceph pg dump_stuck
And enable debug logging on osd.4:
debug osd = 20
debug filestore = 20
debug ms = 1
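Those debug levels can also be injected into the running daemon without a restart, e.g.:

```shell
# Raise debug levels on osd.4 at runtime
ceph tell osd.4 injectargs '--debug-osd 20 --debug-filestore 20 --debug-ms 1'
```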
But honestly, my best bet is to upgrade to the latest release; it would
make your life much easier.
- Shinobu
On Thu, May 26,
On Sat, Apr 30, 2016 at 5:32 PM, Oliver Dzombic wrote:
> Hi,
>
> there is a memory allocation bug, at least in hammer.
>
Could you give us a pointer?
> Mouting an rbd volume as a block device on a ceph node might run you
> into that. Then your mount wont work, and you
This is a previous thread about journal disk replacement.
http://lists.opennebula.org/pipermail/ceph-users-ceph.com/2014-May/039434.html
I hope this is helpful for you.
Cheers,
S
- Original Message -
From: "Martin Wilderoth"
To: ceph-us...@ceph.com