Hello,
I'm a newbie to Ceph, gaining some familiarity by hosting some virtual
machines on a test cluster. I'm using a virtualisation product called
Proxmox Virtual Environment, which conveniently handles cluster setup,
pool setup, OSD creation etc.
During the attempted removal of an OSD, my pool
will always hit at least 22% of your OSDs, and probably more. If you're
unable to add more disks, I would highly recommend adding SSD journals.
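For anyone following along: a filestore journal can be pointed at an SSD
partition in ceph.conf before the OSD is created. A minimal sketch, with a
placeholder partition path (not from this thread):

  [osd]
  # placeholder SSD partition; journal size is in MB
  osd journal = /dev/disk/by-partlabel/ceph-journal
  osd journal size = 5120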
On Fri, Dec 19, 2014 at 8:08 AM, Chris Murray chrismurra...@gmail.com wrote:
Hello,
I'm a newbie to Ceph, gaining some familiarity by hosting some virtual
Hi all,
I think I know the answer to this already after reading similar queries,
but I'll ask in case times have changed.
After an error on my part, I have a very small number of PGs in
remapped+peering. They don't look like they'll get out of that state.
Some IO is blocked too, as you
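To see which PGs are stuck and why, commands along these lines help (the PG
id in the query is a placeholder):

  ceph health detail              # lists stuck PGs and blocked requests
  ceph pg dump_stuck inactive     # PGs that are not active
  ceph pg 2.1f query              # detailed state of one PG (placeholder id)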
No, I/O will block for those PGs as long as you don't mark them as
lost.
Isn't there any way to get those OSDs back? If you can, you can restore
the PGs.
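For reference, marking an OSD lost is done with a command along these lines
(osd.7 is a placeholder id; any data that only lived on it is written off):

  # tell the cluster the OSD is gone for good so blocked PGs can proceed
  ceph osd lost 7 --yes-i-really-mean-it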
Interesting, 'lost' is a term I'm not yet familiar with regarding Ceph.
I'll read up on it.
One of the OSDs was re-used straight away, and ...
In case I should be troubleshooting this side: is this happening to
others, or isn't it?
-Original Message-
From: Gregory Farnum [mailto:g...@gregs42.com]
Sent: 16 March 2015 20:40
To: Chris Murray
Cc: ceph-users
Subject: Re: [ceph-users] More than 50% osds down, CPUs still busy; will the
cluster recover without help?
Apologies if anyone receives this twice. I didn't see this e-mail come back
through to the list ...
-Original Message-
From: Chris Murray
Sent: 14 March 2015 08:56
To: 'Gregory Farnum'
Cc: 'ceph-users'
Subject: RE: [ceph-users] More than 50% osds down, CPUs still busy; will the
cluster recover without help?
is fixed in something later than 0.80.9?
-Original Message-
From: Gregory Farnum [mailto:g...@gregs42.com]
Sent: 18 March 2015 14:01
To: Chris Murray
Cc: ceph-users
Subject: Re: [ceph-users] More than 50% osds down, CPUs still busy; will the
cluster recover without help?
On Wed, Mar 18, 2015
when it never seems to finish 'draining'? Could
my suspicions be true that it's somehow a BTRFS funny?
Thanks again,
Chris
-Original Message-
From: Chris Murray
Sent: 03 March 2015 09:45
To: Gregory Farnum
Cc: ceph-users
Subject: RE: [ceph-users] More than 50% osds down, CPUs still busy
From: Gregory Farnum [mailto:g...@gregs42.com]
Sent: 02 March 2015 18:05
To: Chris Murray
Cc: ceph-users
Subject: Re: [ceph-users] More than 50% osds down, CPUs still busy; will the
cluster recover without help?
You can turn the filestore debug level up to 20 instead of 1. ;) You might also explore
what information you can
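Raising the filestore debug level looks roughly like this (osd.3 is a
placeholder id):

  # permanently, in ceph.conf under [osd]:
  #   debug filestore = 20
  # or injected into a running daemon:
  ceph tell osd.3 injectargs '--debug-filestore 20'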
reveals references to log files which have very similar entries,
but I can't see anything that just repeats like mine does.
-Original Message-
From: Gregory Farnum [mailto:g...@gregs42.com]
Sent: 26 February 2015 22:37
To: Chris Murray
Cc: ceph-users
Subject: Re: [ceph-users] More than 50% osds down, CPUs still busy
Thanks,
Chris
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Chris Murray
Sent: 27 February 2015 10:32
To: Gregory Farnum
Cc: ceph-users
Subject: Re: [ceph-users] More than 50% osds down, CPUs still busy
? If what I've assumed about the OSD map numbers is true.
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Chris Murray
Sent: 27 February 2015 08:33
To: Gregory Farnum
Cc: ceph-users
Subject: Re: [ceph-users] More than 50% osds down, CPUs still busy
... Trying to send again after reporting bounce backs to dreamhost ...
... Trying to send one more time after seeing mails come through the
list today ...
Hi all,
First off, I should point out that this is a 'small cluster' issue and
may well be due to the stretched resources. If I'm doomed to
Thanks, Greg
After seeing some recommendations I found in another thread, my impatience got
the better of me, and I've started the process again, but there is some logic, I
promise :-)
I've copied the process from Michael Kidd, I believe, and it goes along the
lines of:
setting noup, noin,
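The snippet cuts off here, but the flag-setting part of such a procedure
presumably looks something like this sketch:

  # keep restarted OSDs from being marked up/in while they settle
  ceph osd set noup
  ceph osd set noin
  # ... restart the OSDs, wait for them to peer, then clear the flags
  ceph osd unset noup
  ceph osd unset noin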
perhaps? Is the activity leading up to something, and BTRFS is slowly
doing what Ceph is asking of it, or is it just going round and round in
circles and I just can't see? :-)
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Chris Murray
Sent: 25
That's fair enough Greg, I'll keep upgrading when the opportunity arises, and
maybe it'll spring back to life someday :-)
-Original Message-
From: Gregory Farnum [mailto:g...@gregs42.com]
Sent: 20 March 2015 23:05
To: Chris Murray
Cc: ceph-users
Subject: Re: [ceph-users] More than 50% osds down, CPUs still busy; will the
cluster recover without help?
After messing up some of my data in the past (my own doing, playing with
BTRFS in old kernels), I've been extra cautious and now run a ZFS mirror
across multiple RBD images. It's led me to believe that I have a faulty
SSD in one of my hosts:
sdb without a journal - fine (but slow)
sdc without a
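A rough sketch of that ZFS-on-RBD layout (pool, image names, and sizes are
invented for illustration):

  rbd create rbd/zmirror-a --size 102400   # 100 GB image
  rbd create rbd/zmirror-b --size 102400
  rbd map rbd/zmirror-a                    # maps to e.g. /dev/rbd0
  rbd map rbd/zmirror-b                    # maps to e.g. /dev/rbd1
  zpool create tank mirror /dev/rbd0 /dev/rbd1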
Hello all,
Please can someone offer some advice. In ceph.conf, I use:
osd_mkfs_type = btrfs
osd_mount_options_btrfs = noatime,nodiratime,compress-force=lzo
filestore_btrfs_snap = false
However, some of my OSDs are becoming much more full than
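To quantify how uneven the OSDs are, on reasonably recent releases:

  ceph osd df     # per-OSD utilisation and deviation from the mean
  ceph osd tree   # CRUSH weights, to spot reweighting candidates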
Hi all,
Might anyone be able to help me troubleshoot an "apt-get dist-upgrade"
which is stuck at "Setting up ceph-osd (10.2.3-1~bpo80+1)"?
I'm upgrading from 10.2.2. The two OSDs on this node are up, and think
they are version 10.2.3, but the upgrade doesn't appear to be finishing
... ?
Thanks
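Two quick checks for this situation, for what it's worth:

  ceph tell osd.* version   # what the running daemons report
  dpkg -l ceph-osd          # what dpkg thinks is installed/configured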
On 13/10/2016 11:49, Henrik Korkuc wrote:
Is apt/dpkg doing something now? Is the problem repeatable, e.g. by
killing the upgrade and starting it again? Are there any stuck systemctl
processes?
I had no problems upgrading 10.2.x clusters to 10.2.3
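Command sketches for those checks (the OSD unit id is a placeholder):

  ps aux | grep -E 'apt|dpkg'       # is the package step actually doing anything?
  systemctl list-jobs               # any hung systemd jobs?
  journalctl -u ceph-osd@0 -n 50    # recent log for one OSD unit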
On 16-10-13 13:41, Chris Murray wrote:
On 22/09/2016 15:29, Chris Murray wrote:
Hi all,
Might anyone be able to help me troubleshoot an "apt-get dist-upgrade"
which is stuck at "Setting up ceph-osd (10.2.3-1~bpo80+1)"?
I'm upgrading from 10.2.2. The two OSDs on this node are up, and think
they are version 10.2.3 ...