Dear developers,
We would very much like IO priorities ;)
During a snapshot rollback, slow queries appear.
Thanks
--
Best regards, Irek Fasikhov
Mob.: +79229045757
Hi all,
I tried to use two OSDs to create a cluster. After the deploy finished, I
found the health status was 88 active+degraded, 104 active+remapped.
Before, when I used two OSDs to create a cluster, the result was OK. I'm confused why this
situation happened. Do I need to adjust the CRUSH map to fix this problem?
Hi.
Because the default number of replicas is 3, the data requires three different
hosts.
2014-10-29 10:56 GMT+03:00 Vickie CH mika.leaf...@gmail.com:
Hi all,
I tried to use two OSDs to create a cluster. After the deploy finished, I
found the health status was 88 active+degraded, 104
It looks to me like this has been considered (mapping default pool size
to 2). However just to check - this *does* mean that you need two (real
or virtual) hosts - if the two osds are on the same host then crush map
adjustment (hosts - osds) will be required.
Regards
Mark
On 29/10/14
Hi.
This parameter is not applied to pools that already exist.
Check with 'ceph osd dump | grep pool' and see what size= shows.
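If the already-created pools need two replicas, a minimal sketch (assuming the 0.8x default pool names data, metadata and rbd):
$ ceph osd pool set data size 2
$ ceph osd pool set metadata size 2
$ ceph osd pool set rbd size 2
$ ceph osd dump | grep 'replicated size'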
2014-10-29 11:40 GMT+03:00 Vickie CH mika.leaf...@gmail.com:
Dear Irek,
Thanks for your reply.
Even with osd_pool_default_size = 2 already set, does the cluster still need 3
different hosts?
Is this
Hi,
the next Ceph MeetUp in Berlin is scheduled for November 24.
Lars Marowsky-Brée of SuSE will talk about Ceph performance.
Please RSVP at http://www.meetup.com/Ceph-Berlin/events/215147892/
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
That is not my experience:
$ ceph -v
ceph version 0.86-579-g06a73c3 (06a73c39169f2f332dec760f56d3ec20455b1646)
$ cat /etc/ceph/ceph.conf
[global]
...
osd pool default size = 2
$ ceph osd dump|grep size
pool 2 'hot' replicated size 2 min_size 1 crush_ruleset 0 object_hash
rjenkins pg_num 128
Mark.
I meant that this parameter is not used for the existing pools.
I'm sure the pools data, metadata, rbd (they are created by default) have
size = 3.
2014-10-29 11:56 GMT+03:00 Mark Kirkwood mark.kirkw...@catalyst.net.nz:
That is not my experience:
$ ceph -v
ceph version 0.86-579-g06a73c3
Dear all,
Thanks for the reply.
Pool replicated size is 2, because the replicated-size parameter was already
written into ceph.conf before the deploy.
Since I'm not familiar with the CRUSH map, I will follow Mark's information and
do a test that changes the CRUSH map to see the result.
ceph osd tree please :)
2014-10-29 12:03 GMT+03:00 Vickie CH mika.leaf...@gmail.com:
Dear all,
Thanks for the reply.
Pool replicated size is 2, because the replicated-size parameter was already
written into ceph.conf before the deploy.
Since I'm not familiar with the CRUSH map, I will follow Mark's
Hi Support,
Can someone please help me with the below error so I can proceed with my
cluster installation? It has been a week now of not knowing how to carry on.
Regards,
Sakhi Hadebe
Engineer: South African National Research Network (SANReN) Competency Area,
Meraka, CSIR
Tel: +27 12 841
Hi:
--------------------- ceph osd tree ---------------------
# id    weight  type name       up/down reweight
-1      1.82    root default
-2      1.82            host storage1
0       0.91                    osd.0   up      1
1       0.91                    osd.1   up      1
Hi Sakhi:
I ran into this problem before. The host OS was Ubuntu 14.04, kernel 3.13.0-24-generic.
In the end I used fdisk on /dev/sdX to delete all partitions and rebooted. Maybe you
can try that.
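If fdisk is fiddly, a rough sketch of the same cleanup with ceph-deploy or sgdisk (hostname and device are placeholders):
$ ceph-deploy disk zap storage1:sdX
# or, on the node itself, wipe the partition table directly:
$ sgdisk --zap-all /dev/sdX
$ reboot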
Best wishes,
Mika
2014-10-29 17:13 GMT+08:00 Sakhi Hadebe shad...@csir.co.za:
Hi Support,
Can someone please help me with
Hi Haomai, all.
Today, after an unexpected power failure, one of the kv stores (placed on ext4
with default mount options) refused to work. I think it may be
interesting to revive it, because this is almost the first time among
hundreds of power failures (and their simulations) that a data store got
broken.
Hi,
We are looking to use ZFS for our OSD backend, but I have some questions.
My main question is: does Ceph already support the writeparallel mode
for ZFS? (as described here:
http://www.sebastien-han.fr/blog/2013/12/02/ceph-performance-interesting-things-going-on/)
I've found this, but
Righty, both OSDs are on the same host, so you will need to amend the
default CRUSH rule. It will look something like:
rule replicated_ruleset {
ruleset 0
type replicated
min_size 1
max_size 10
step take default
step chooseleaf firstn 0 type host
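The quoted rule is cut off above; for a single-host test the change is in the chooseleaf step. A hedged sketch of the amended rule and of applying it (file names are placeholders, not from the original post):
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type osd
        step emit
}
$ ceph osd getcrushmap -o crush.bin
$ crushtool -d crush.bin -o crush.txt
# edit crush.txt, changing "type host" to "type osd" in the rule, then:
$ crushtool -c crush.txt -o crush.new
$ ceph osd setcrushmap -i crush.new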
Thanks, Andrey.
Does the attached osd.1 log contain only these lines? I really can't find
any detailed info in it.
Maybe you need to raise debug_osd to 20/20?
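For reference, a sketch of raising that debug level (assuming osd.1 from this thread):
$ ceph tell osd.1 injectargs '--debug-osd 20/20'   # if the daemon is still running
# or set it in ceph.conf on that host before restarting the OSD:
[osd]
        debug osd = 20/20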
On Wed, Oct 29, 2014 at 5:25 PM, Andrey Korolyov and...@xdel.ru wrote:
Hi Haomai, all.
Today after unexpected power failure one of kv
On Wed, Oct 29, 2014 at 1:11 PM, Haomai Wang haomaiw...@gmail.com wrote:
Thanks, Andrey.
Does the attached osd.1 log contain only these lines? I really can't find
any detailed info in it.
Maybe you need to raise debug_osd to 20/20?
On Wed, Oct 29, 2014 at 5:25 PM, Andrey Korolyov
Thanks!
You mean osd.1 exited abruptly without a Ceph callback trace?
Does anyone have ideas about this log? @sage @gregory
On Wed, Oct 29, 2014 at 6:19 PM, Andrey Korolyov and...@xdel.ru wrote:
On Wed, Oct 29, 2014 at 1:11 PM, Haomai Wang haomaiw...@gmail.com wrote:
Thanks, Andrey.
The
Hi RHEL/CentOS users,
This is just a heads up that we observe slow requests during the RHEL6.6
upgrade. The upgrade includes selinux-policy-targeted, which runs this during
the update:
/sbin/restorecon -i -f - -R -p -e /sys -e /proc -e /dev -e /mnt -e /var/tmp
-e /home -e /tmp -e /dev
On Wed, Oct 29, 2014 at 1:28 PM, Haomai Wang haomaiw...@gmail.com wrote:
Thanks!
You mean osd.1 exited abruptly without a Ceph callback trace?
Does anyone have ideas about this log? @sage @gregory
On Wed, Oct 29, 2014 at 6:19 PM, Andrey Korolyov and...@xdel.ru wrote:
On Wed, Oct 29, 2014 at
Maybe you can run it directly with debug_osd=20/20 and get the final logs:
ceph-osd -i 1 -c /etc/ceph/ceph.conf -f
On Wed, Oct 29, 2014 at 6:34 PM, Andrey Korolyov and...@xdel.ru wrote:
On Wed, Oct 29, 2014 at 1:28 PM, Haomai Wang haomaiw...@gmail.com wrote:
Thanks!
You mean osd.1 exited abruptly
On Wed, Oct 29, 2014 at 1:37 PM, Haomai Wang haomaiw...@gmail.com wrote:
Maybe you can run it directly with debug_osd=20/20 and get the final logs:
ceph-osd -i 1 -c /etc/ceph/ceph.conf -f
On Wed, Oct 29, 2014 at 6:34 PM, Andrey Korolyov and...@xdel.ru wrote:
On Wed, Oct 29, 2014 at 1:28 PM,
There are two different statistics that are collected. One is the
'usage' information, which collects data about the actual operations that
clients perform in a period of time. This information can be accessed
through the admin API. The other one is the user stats info that is
part of the user quota system,
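A hedged sketch of pulling each kind of statistic from the command line (the uid is a placeholder):
# per-operation usage log (what the admin API exposes)
$ radosgw-admin usage show --uid=johndoe --show-log-entries=true
# per-user stats used by the quota system
$ radosgw-admin user stats --uid=johndoe --sync-stats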
Hi clewis,
My environment:
One Ceph cluster, 3 nodes, each node has one monitor and one OSD. One
rgw (rgw1) is on one of them (osd1). Before I deployed the second rgw (rgw2),
the first rgw worked well.
After I deployed the second rgw, it could not start.
The number of radosgw processes increases
Hello,
I've found my Ceph v0.80.3 cluster in a state with 5 of 34 OSDs being down
overnight, after months of running without changes. From the Linux logs I
found out the OSD processes were killed because they consumed all available
memory.
Those 5 failed OSDs were from different hosts of my 4-node
Hi!
We are exploring options to regularly preserve (i.e. backup) the
contents of the pools backing our rados gateways. For that we create
nightly snapshots of all the relevant pools when there is no activity
on the system to get consistent states.
In order to restore the whole pools back to a
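For context, a sketch of the kind of commands involved (pool and snapshot names are placeholders); note that rolling back from a pool snapshot works per object, not per pool:
# create a nightly pool snapshot
$ ceph osd pool mksnap .rgw.buckets nightly-2014-10-29
# roll a single object back to that snapshot
$ rados -p .rgw.buckets rollback <objname> nightly-2014-10-29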
Hello Greg,
I added the debug options which you mentioned and started the process again:
[root@th1-mon001 ~]# /usr/bin/ceph-mds -i th1-mon001 --pid-file
/var/run/ceph/mds.th1-mon001.pid -c /etc/ceph/ceph.conf --cluster ceph
--reset-journal 0
old journal was 9483323613~134233517
new journal
Hello Lukas,
Please try the following process for getting all your OSDs up and
operational...
* Set the following flags: noup, noin, noscrub, nodeep-scrub, norecover,
nobackfill
for i in noup noin noscrub nodeep-scrub norecover nobackfill; do ceph osd
set $i; done
* Stop all OSDs (I know, this
On Wed, 29 Oct 2014, Kenneth Waegeman wrote:
Hi,
We are looking to use ZFS for our OSD backend, but I have some questions.
My main question is: does Ceph already support the writeparallel mode for ZFS
? (as described here:
Bump :-)
Any ideas on this? They would be much appreciated.
Also: Sorry for a possible double post, client had forgotten its email config.
On 2014-10-22 21:21:54 +, Daniel Schneller said:
We have been running several rounds of benchmarks through the Rados
Gateway. Each run creates
Would it be possible to establish an announcement mailing list, used
only for announcing new versions?
Many other projects have similar lists, and they're very helpful for
keeping up on changes, while not being particularly noisy.
I'm finishing a Masters degree and using Ceph for the first time, and would
really appreciate any help because I'm really stuck with this.
Thank you.
On Tue, Oct 28, 2014 at 9:23 PM, Pedro Miranda potter...@gmail.com wrote:
Hi, I'm new to Ceph and I have a very basic Ceph cluster with 1 mon
On Tue, Oct 28, 2014 at 2:23 PM, Pedro Miranda potter...@gmail.com wrote:
Hi, I'm new to Ceph and I have a very basic Ceph cluster with 1 mon in one
node and 2 OSDs in two separate nodes (all CentOS 7). I followed the
quick-ceph-deploy tutorial.
All went well.
Then I started the quick-rgw
Hey cephers,
For those of you unable to attend yesterday (or those who would like a
replay), the video sessions from yesterday's Ceph Developer Summit:
Hammer have been posted to YouTube. You can access them directly from
the playlist at:
I've ended up at step ceph osd unset noin. My OSDs are up, but not in,
even after an hour:
[root@q04 ceph-recovery]# ceph osd stat
osdmap e2602: 34 osds: 34 up, 0 in
flags nobackfill,norecover,noscrub,nodeep-scrub
There seems to be no activity generated by OSD processes,
Hi all,
I'm new to Ceph. What is wrong with this cluster? How can I make the status
change to HEALTH_OK? Please help.
$ceph status
cluster 62e2f40c-401b-4b3e-804a-cebbec1016c5
health HEALTH_WARN 104 pgs degraded; 88 pgs incomplete; 88 pgs stuck
inactive; 192 pgs stuck unclean
monmap e1:
Ah, sorry... since they were set out manually, they'll need to be set in
manually..
for i in $(ceph osd tree | grep osd | awk '{print $3}'); do ceph osd in $i;
done
Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services
- by Red Hat
On Wed, Oct 29, 2014 at 12:33 PM, Lukáš Kubín
Hi Ceph,
TL;DR: Register for the Micro Ceph and OpenStack Design Summit November 3rd,
2014 11:40am
http://kilodesignsummit.sched.org/event/f2e49f4547a757cc3d51f5641b2000cb
November 3rd, 2014 11:40am, during the OpenStack summit in Paris[1], the
present and the future of Ceph and OpenStack
Forgot to mention, when you create the ZFS/zpool datasets, make sure to set the
xattr setting to sa,
e.g.
zpool create osd01 -O xattr=sa -O compression=lz4 sdb
OR if zpool/zfs dataset already created
zfs set xattr=sa osd01
Cheers
-Original Message-
From: ceph-users
I should have figured that out myself since I did that recently. Thanks.
Unfortunately, I'm still at the step ceph osd unset noin. After setting
all the OSDs in, the original issue reappears, preventing me from proceeding with
recovery. It now appears mostly at a single OSD, osd.10, which consumes ~200%
Hi Michal,
Thanks for the info. We will certainly try it and see if we come to the
same conclusions ;)
One small detail: since you were using CentOS 7, I'm assuming you were
using ZoL 0.6.3?
stijn
On 10/29/2014 08:03 PM, Michal Kozanecki wrote:
Forgot to mention, when you create the
Hi Stijn,
Yes, on my cluster I am running: CentOS 7, ZoL 0.6.3, Ceph 0.80.5.
Cheers
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Stijn
De Weirdt
Sent: October-29-14 3:49 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] use ZFS for
Hi,
With ceph -w I can see Ceph writes, reads and IO.
But the reads seem to be only reads which are not served from the OSD or monitor
cache.
As we have 128 GB in every Ceph server, our monitors and OSDs are set to use a
lot of RAM.
Monitoring only very rarely shows some Ceph reads... but a lot
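One way to cross-check the read traffic a given OSD actually serves, including reads answered from the page cache, is the admin socket counters; a sketch, assuming osd.0 on the local host:
$ ceph daemon osd.0 perf dump | grep '"op_r'
# op_r / op_r_out_bytes count all client read ops handled by the OSD,
# regardless of whether the data came from cache or disk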
Hello,
Will there be any benefit in making the journal the size of an entire ssd disk?
I was also thinking on increasing journal max write entries and
journal queue max ops.
But will it matter, or it will have the same effect as a 4gb journal
on the same ssd?
Thank you,
Cristian Falcas
Journals that are too small can cause performance problems; it
basically takes away the SSD journal speedup, and forces all writes to
go at the speed of the HDD.
Once you make the journal big enough to prevent that, there is no
benefit to making it larger.
There might be a slight performance
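For a concrete rule of thumb, a sketch based on the documented sizing guidance (the numbers below are assumed for illustration):
[osd]
        # journal size >= 2 * expected throughput (MB/s) * filestore max sync interval (s)
        # e.g. 2 * 100 MB/s * 5 s  ->  1000 MB
        osd journal size = 1000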
This release will form the basis for the Giant stable series,
v0.87.x. Highlights for Giant include:
* *RADOS Performance*: a range of improvements have been made in the
OSD and client-side librados code that improve the throughput on
flash backends and improve parallelism and scaling on
I am doing some testing on our new ceph cluster:
- 3 ceph nodes (8 cpu 128G, Ubuntu 12.04 + 3.13 kernel)
- 8 osd on each (i.e 24 in total)
- 4 compute nodes (ceph clients)
- 10G networking
- ceph 0.86 (97dcc0539dfa7dac3de74852305d51580b7b1f82)
I'm using one of the compute nodes to run some fio
Probably never.
I'm having trouble finding documentation, but my understanding is that
dumpling and firefly are the only supported releases. I believe
emperor became unsupported when firefly came out. Similarly, giant
will be supported until hammer comes out. Once hammer comes out,
dumpling
On 30/10/14 11:16, Mark Kirkwood wrote:
I am doing some testing on our new ceph cluster:
- 3 ceph nodes (8 cpu 128G, Ubuntu 12.04 + 3.13 kernel)
- 8 osd on each (i.e 24 in total)
- 4 compute nodes (ceph clients)
- 10G networking
- ceph 0.86 (97dcc0539dfa7dac3de74852305d51580b7b1f82)
I'm using
On 30/10/2014 8:56 AM, Sage Weil wrote:
* *Degraded vs misplaced*: the Ceph health reports from 'ceph -s' and
related commands now make a distinction between data that is
degraded (there are fewer than the desired number of copies) and
data that is misplaced (stored in the wrong
[Re-adding the list, so this is archived for future posterity.]
On Wed, Oct 29, 2014 at 6:11 AM, Patrick Darley
patrick.dar...@codethink.co.uk wrote:
Thanks again for the reply Greg!
On 2014-10-28 17:39, Gregory Farnum wrote:
I'm sorry, you're right — I misread it. :(
No worries, I had
On Wed, Oct 29, 2014 at 7:51 AM, Jasper Siero
jasper.si...@target-holding.nl wrote:
Hello Greg,
I added the debug options which you mentioned and started the process again:
[root@th1-mon001 ~]# /usr/bin/ceph-mds -i th1-mon001 --pid-file
/var/run/ceph/mds.th1-mon001.pid -c
On Thu, 30 Oct 2014 10:40:38 +1100 Nigel Williams wrote:
On 30/10/2014 8:56 AM, Sage Weil wrote:
* *Degraded vs misplaced*: the Ceph health reports from 'ceph -s' and
related commands now make a distinction between data that is
degraded (there are fewer than the desired number of
On 30/10/2014 11:51 AM, Christian Balzer wrote:
Thus objects are (temporarily) not where they're supposed to be, but still
present in sufficient replication.
thanks for the reminder, I suppose that is obvious :-)
A much more benign scenario than degraded and I hope that this doesn't
even
Hello,
On Wed, 29 Oct 2014 23:01:55 +0200 Cristian Falcas wrote:
Hello,
Will there be any benefit in making the journal the size of an entire
ssd disk?
Not really, Craig already pointed out a number of things.
To put numbers on things, I size my journals so they can at least hold 20
Hi list,
How did you solve this issue? I run into it when I try to deploy 2 RGWs on one
Ceph cluster in the default region and default zone.
Thanks
At 2014-07-01 09:06:24, Brian Rak b...@gameservers.com wrote:
That sounds like you have some kind of odd situation going on. We only
Dan (who wrote that slide deck) is probably your best bet here, but I
believe pool deletion is not very configurable and fairly expensive
right now. I suspect that it will get better in Hammer or Infernalis,
once we have a unified op work queue that we can independently
prioritize all IO through
I have updated the http://ceph.com/get page to reflect a more generic
approach to linking. It's also worth noting that the new
http://download.ceph.com/ infrastructure is available now.
To get to the rpms specifically you can either crawl the
download.ceph.com tree or use the symlink at
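As an illustration only (the exact path is my assumption, not the symlink referred to above), a yum repo stanza pointing at the new tree might look like:
[ceph]
name=Ceph packages
baseurl=http://download.ceph.com/rpm-giant/el7/x86_64/
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc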
On Wed, Oct 29, 2014 at 7:49 AM, Daniel Schneller
daniel.schnel...@centerdevice.com wrote:
Hi!
We are exploring options to regularly preserve (i.e. backup) the
contents of the pools backing our rados gateways. For that we create
nightly snapshots of all the relevant pools when there is no
I had missed a detail: setting FastCgiWrapper off.
thanks
At 2014-10-30 10:01:19, yuelongguang fasts...@163.com wrote:
Hi list,
How did you solve this issue? I run into it when I try to deploy 2 RGWs on one
Ceph cluster in the default region and default zone.
thanks
At
Hey cephers,
Given some of the recent interest in utilizing Docker with Ceph I'm
taking another survey of the landscape. I know that Loic recently got
Teuthology running with Docker (http://dachary.org/?p=3330) but I'd
like to look at running a containerized Ceph setup as well.
So far I see
Also to note - we're running on CoreOS and making use of the etcd
distributed key value store to store configuration data, and confd to
template out some of the configuration from etcd. So it's a cool marriage
of various tools in the ecosystem.
*Chris Armstrong*Head of Services
OpDemand /
Sure thing. I'll work on something and send it over early next week.
*Chris Armstrong*Head of Services
OpDemand / Deis.io
GitHub: https://github.com/deis/deis -- Docs: http://docs.deis.io/
On Wed, Oct 29, 2014 at 10:24 PM, Patrick McGarry patr...@inktank.com
wrote:
Christopher,
I would
Christopher,
I would definitely welcome a writeup for the ceph.com blog! Feel free
to send something my way as soon as is convenient. :)
As an aside to anyone doing fun/cool/interesting/wacky things...I'm
always looking for ceph.com blog content and love to feature
everything from small
Hey Patrick,
We recently added a new component to Deis which is based entirely on
running Ceph in containers. We're running mons, OSDs, and MDSes in
containers, and consuming from containers with radosgw as well as CephFS.
See the source here: https://github.com/deis/deis/tree/master/store
I'm
On Thu, 30 Oct 2014, Nigel Williams wrote:
On 30/10/2014 8:56 AM, Sage Weil wrote:
* *Degraded vs misplaced*: the Ceph health reports from 'ceph -s' and
related commands now make a distinction between data that is
degraded (there are fewer than the desired number of copies) and
I've created a survey to try to get a feel for what people are using in
environments where Ceph is deployed today. If you have a moment, please
help us figure out where we should be spending our effort, here! In
particular, if you use kerberos or active directory or something similar,
we