Hello,
I have two pools (default and sas).
Is it possible to keep an OSD in the non-default pool after a restart without
setting the crushmap?
Thanks.
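One way this is usually handled is in ceph.conf, so the OSD goes back under the right
CRUSH root on every start (a minimal sketch; osd.12, root=sas and host=node1 are
made-up names, not taken from your cluster):

[osd.12]
    # keep the start-up script from moving the OSD back under the default root
    osd crush update on start = false
    # or pin its CRUSH location explicitly
    osd crush location = root=sas host=node1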
Hi,
I was trying to determine performance impact of deep-scrubbing with
osd_disk_thread_ioprio_class option set but it looks like it's ignored.
Performance (during deep-scrub) is the same whether this option is set or
left at its default (1/3 of normal performance).
# ceph --admin-daemon
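For reference, the value can be checked and changed through the admin socket (a
sketch; the socket path and osd.0 are assumptions, and the setting only takes effect
when the disk uses the CFQ I/O scheduler):

# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config get osd_disk_thread_ioprio_class
# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config set osd_disk_thread_ioprio_class idle
# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config set osd_disk_thread_ioprio_priority 7
# cat /sys/block/sda/queue/scheduler    # should show [cfq] for these options to matter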
And, of course, I had to forget to cc ceph-users as promised.
-Joao
Original Message
Subject: Re: [ceph-users] Fwd: Latest firefly: osd not joining cluster
after re-creation
Date: Thu, 23 Oct 2014 08:55:40 +0100
From: Joao Eduardo Luis joao.l...@inktank.com
To: Andrey
On 10/22/2014 07:41 PM, Andrey Korolyov wrote:
Hello,
given a small test cluster, the following sequence resulted in a freshly
formatted OSD being unable to join back:
- update cluster sequentially from cuttlefish to dumpling to firefly,
- execute tunables change, wait for recovery completion,
-
Hi Dan,
I have to move my OSDs and MONs to a different subnet on different network
interfaces soon. I would appreciate a short write-up.
Regards,
Rein
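For the monitor part, the usual approach is to edit the monmap (a rough sketch only;
mon-a and 192.168.2.10 are made-up names/addresses, and the public/cluster network
settings in ceph.conf have to be updated to match):

$ ceph mon getmap -o /tmp/monmap
$ monmaptool --rm mon-a /tmp/monmap
$ monmaptool --add mon-a 192.168.2.10:6789 /tmp/monmap
# stop the monitor, inject the edited map, then start it again
$ ceph-mon -i mon-a --inject-monmap /tmp/monmap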
On 23 Oct 2014, at 12:14, Christian Kauhaus k...@gocept.com wrote:
On 22.10.2014 at 20:07, Dan Geist wrote:
Is there an interest in
On 10/23/2014 09:10 AM, Paweł Sadowski wrote:
Hi,
I was trying to determine performance impact of deep-scrubbing with
osd_disk_thread_ioprio_class option set but it looks like it's ignored.
Performance (during deep-scrub) is the same whether this option is set or
left at its default (1/3 of normal
Hi.
The necessary changes are already in git.
https://github.com/ceph/ceph/commit/86926c6089d63014dd770b4bb61fc7aca3998542
2014-10-23 16:42 GMT+04:00 Paweł Sadowski c...@sadziu.pl:
On 10/23/2014 09:10 AM, Paweł Sadowski wrote:
Hi,
I was trying to determine performance impact of
Sorry, I see the problem.
osd.0 10.6.0.1:6800/32051 clashes with existing osd: different fsid
(ours: d0aec02e-8513-40f1-bf34-22ec44f68466 ; theirs:
16cbb1f8-e896-42cd-863c-bcbad710b4ea). Anyway, it is clearly a bug; the fsid
should be silently discarded there if the OSD itself contains no epochs.
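One way to compare the two fsids and clear the stale entry (a sketch, assuming this
is osd.0 with the default data path; the rm/create step re-registers the OSD under
the uuid it now carries on disk):

$ ceph osd dump | grep '^osd.0 '        # uuid the monitors have recorded
$ cat /var/lib/ceph/osd/ceph-0/fsid     # uuid the freshly formatted OSD carries
$ ceph osd rm 0
$ ceph osd create 16cbb1f8-e896-42cd-863c-bcbad710b4ea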
Heh, it looks like the osd process is unable to reach any of the mon
members. Since mkfs goes just fine (which requires the same mon set to
work), I suspect a bug there.
osd0-monc10.log.gz
Description: GNU Zip compressed data
mon0-dbg.log.gz
Description: GNU Zip compressed data
It is not so easy. When I added the fsid under the selected osd's section and
reformatted the store/journal, it aborted at start in
FileStore::_do_transaction (see attach). On the next launch, the fsid in the
mon store for this OSD magically changes to something else and I am back at
the same doorstep (if I
Hi.
I’m having some trouble getting radosgw started. Any pointers would be greatly
appreciated…
Here is a log excerpt:
2014-10-23 09:25:27.231701 7fdcb08de820 0 ceph version 0.80.7
(6c0127fcb58008793d3c8b62d925bc91963672a3), process radosgw, pid 20159
2014-10-23 09:25:27.334087
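When the log just stops like that, running the gateway in the foreground with verbose
logging usually shows where it gets stuck (a sketch; client.radosgw.gateway is an
assumed instance name):

$ radosgw -d -n client.radosgw.gateway --debug-rgw 20 --debug-ms 1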
On Thu, 23 Oct 2014, GuangYang wrote:
Thanks Sage for the quick response!
We are using firefly (v0.80.4 with a couple of back-ports). One
observation we have is that during the peering stage (especially if the OSD
was down/in for several hours under high load), the peering OPs are in
On Oct 22, 2014, at 8:22 PM, Craig Lewis wrote:
Shot in the dark: try manually deep-scrubbing the PG. You could also try
marking various OSDs out, in an attempt to get the acting set to include
osd.25 again, then do the deep-scrub again. That probably won't help though,
because the pg
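The commands for that would be along these lines (a sketch; 3.1 stands in for the
actual PG id and 17 for whichever OSD you choose to mark out, reverted with
'ceph osd in'):

$ ceph pg deep-scrub 3.1
$ ceph osd out 17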
Hello,
I am new to Ceph. I created a cluster with 2 OSDs and 1 MDS.
But ls in a specific directory hangs.
Can anyone help me?
My clients are Ubuntu 14.04 with kernel 3.13.0-24-generic
And my servers are CentOS 6.5 with kernel 2.6.32-431.23.3.el6.x86_64
The Ceph version is 0.80.5
Thanks,
Best regards,
Hi all
the procedure does not work for me; I still have 47 active+remapped PGs. Does anyone
have an idea how to fix this issue?
@Wido: my cluster's usage is now below 80% - thanks for your advice.
Harry
On 21.10.2014 at 22:38, Craig Lewis wrote:
On 10/23/2014 05:33 PM, Harald Rößler wrote:
Hi all
the procedure does not work for me; I still have 47 active+remapped PGs. Does anyone
have an idea how to fix this issue?
If you look at those PGs using ceph pg dump, what is their prefix?
They should start with a number and that number
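For example (a sketch; the first column of the dump is the PG id, and the part before
the dot is the pool number, which can be matched against the pool list):

$ ceph pg dump_stuck unclean
$ ceph osd dump | grep '^pool'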
@Wido: sorry, I don't understand 100% what you mean, so I generated some output which
may help.
Ok the pool:
pool 3 'bcf' rep size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num
832 pgp_num 832 last_change 8000 owner 0
all remapped PGs have a pg_temp entry:
pg_temp 3.1 [14,20,0]
pg_temp
Let me re-CC the list as this may be worthwhile for the archives.
On 10/23/2014 04:19 PM, Andrey Korolyov wrote:
Doing off-list post again.
So I was inaccurate in the initial bug description:
- mkfs goes just fine
- on first start the OSD crashes with ABRT and the trace from the previous
message; changing
On Thu, Oct 23, 2014 at 9:18 PM, Joao Eduardo Luis
joao.l...@inktank.com wrote:
Let me re-CC the list as this may be worthwhile for the archives.
On 10/23/2014 04:19 PM, Andrey Korolyov wrote:
Doing off-list post again.
So I was inaccurate in the initial bug description:
- mkfs goes just fine
-
Hello,
in a nutshell, I can confirm the write amplification; see inline.
On Mon, 20 Oct 2014 10:43:51 -0500 Mark Nelson wrote:
On 10/20/2014 09:28 AM, Mark Wu wrote:
2014-10-20 21:04 GMT+08:00 Mark Nelson mark.nel...@inktank.com
mailto:mark.nel...@inktank.com:
On 10/20/2014
On Wednesday 22/10/2014, Christian Balzer wrote:
Hello,
On Wed, 22 Oct 2014 17:41:45 -0300 Ricardo J. Barberis wrote:
On Tuesday 21/10/2014, Christian Balzer wrote:
Hello,
I'm trying to change the value of mon_osd_down_out_subtree_limit from
rack to something, anything else
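The obvious runtime attempt would be something along these lines (a sketch; note that
this may be one of the settings that cannot be changed at runtime, in which case it
has to go into the [mon] section of ceph.conf followed by a monitor restart):

$ ceph tell mon.* injectargs '--mon-osd-down-out-subtree-limit host'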
I'm doing some fio tests on Giant using fio rbd driver to measure
performance on a new ceph cluster.
However, with block sizes of 1M (initially noticed with 4M) I am seeing
absolutely no IOPS for *reads* - and the fio process becomes
non-interruptible (needs kill -9):
$ ceph -v
ceph version
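For reference, a job file along these lines reproduces the pattern described above
(a sketch; pool, image name and iodepth are assumptions, not taken from the report):

[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=test-image
bs=1M
rw=read
direct=1

[rbd-read]
iodepth=16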
On Thu, Oct 23, 2014 at 7:33 AM, Daniel Takatori Ohara
dtoh...@mochsl.org.br wrote:
Hello,
I am new to Ceph. I created a cluster with 2 OSDs and 1 MDS.
But ls in a specific directory hangs.
Can anyone help me?
How many mounted clients, and how many files are in the filesystem? I can't
figure out
I'm having a problem getting RadosGW replication to work after upgrading to
Apache 2.4 on my primary test cluster. Upgrading the secondary cluster to
Apache 2.4 doesn't cause any problems. Both Ceph's apache packages and
Ubuntu's packages cause the same problem.
I'm pretty sure I'm missing
On 24/10/14 13:09, Mark Kirkwood wrote:
I'm doing some fio tests on Giant using fio rbd driver to measure
performance on a new ceph cluster.
However, with block sizes of 1M (initially noticed with 4M) I am seeing
absolutely no IOPS for *reads* - and the fio process becomes
non-interruptible
Hi all,
can we deploy multiple RGWs on one Ceph cluster?
If so, does it bring us any problems?
Thanks.
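For what it's worth, running several gateway instances against one cluster is a
common setup; a rough ceph.conf sketch (instance names, hosts and paths below are
made up, and each instance needs its own cephx key):

[client.radosgw.gw1]
host = gw1
keyring = /etc/ceph/ceph.client.radosgw.gw1.keyring
rgw socket path = /var/run/ceph/ceph.radosgw.gw1.fastcgi.sock
log file = /var/log/ceph/radosgw.gw1.log

[client.radosgw.gw2]
host = gw2
keyring = /etc/ceph/ceph.client.radosgw.gw2.keyring
rgw socket path = /var/run/ceph/ceph.radosgw.gw2.fastcgi.sock
log file = /var/log/ceph/radosgw.gw2.log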
---
Date: Thu, 23 Oct 2014 06:58:58 -0700
From: s...@newdream.net
To: yguan...@outlook.com
CC: ceph-de...@vger.kernel.org; ceph-users@lists.ceph.com
Subject: RE: Filestore throttling
On Thu, 23 Oct 2014, GuangYang wrote:
Thanks Sage for the quick
On Fri, 24 Oct 2014, GuangYang wrote:
commit 44dca5c8c5058acf9bc391303dc77893793ce0be
Author: Sage Weil s...@inktank.com
Date: Sat Jan 19 17:33:25 2013 -0800
filestore: disable extra committing queue allowance
The motivation here is if there is a problem draining the op queue
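For context, the queue allowance that commit touches is governed by these tunables
in the [osd] section (a sketch; the numbers are placeholders, not recommendations):

[osd]
filestore queue max ops = 500
filestore queue max bytes = 104857600
filestore queue committing max ops = 500
filestore queue committing max bytes = 104857600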
Dear everyone,
I can't start osd.21 (log file attached).
Some PGs can't be repaired. I'm using replica 3 for my data pool.
It seems some objects in those PGs are corrupted.
I tried to delete the data related to those objects, but osd.21 still won't
start, and I removed osd.21, but other OSDs (e.g.
I spent a frustrating day trying to build a new test cluster; it turned out
I had jumbo frames set on the cluster network only, but having
re-wired the machines recently with a new switch, I forgot to check that it
could handle jumbo frames (it can't).
Symptoms were stuck/unclean PGs - a small subset of
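A quick sanity check for that situation is a non-fragmenting ping at close to the
jumbo size between two cluster-network hosts (the interface and address below are
placeholders; 8972 = 9000 minus the IP/ICMP headers):

$ ip link show dev eth1 | grep mtu
$ ping -M do -s 8972 10.0.1.2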