Currently, users do not know when some PGs have been scrubbing for a long time.
I wonder whether we could emit a warning when that happens (with the threshold
defined as osd_scrub_max_time).
It would tell the user that something may be wrong in the cluster.
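There is no such warning today; as a rough interim sketch (assuming the pgs_brief variant of pg dump is available in your release), an operator can spot long-running scrubs by hand:

  # list PGs whose current state includes scrubbing / deep scrubbing
  ceph pg dump pgs_brief | grep -i scrub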
2015-03-17 21:21 GMT+08:00 池信泽 xmdx...@gmail.com:
On Tue, Mar 17, 2015 at
Hello,
On Wed, 18 Mar 2015 11:05:47 -0700 Gregory Farnum wrote:
On Wed, Mar 18, 2015 at 8:04 AM, Nick Fisk n...@fisk.me.uk wrote:
Hi Greg,
Thanks for your input, and I completely agree that we cannot expect
developers to fully document what impact each setting has on a
cluster,
On Wed, 18 Mar 2015 08:59:14 +0100 Josef Johansson wrote:
Hi,
On 18 Mar 2015, at 05:29, Christian Balzer ch...@gol.com wrote:
Hello,
On Wed, 18 Mar 2015 03:52:22 +0100 Josef Johansson wrote:
[snip]
We thought of doing a cluster with 3 servers, and any recommendation of
I don't use ceph-deploy, but creating the OSDs with ceph-disk
automatically uses the by-partuuid reference for the journals (at
least I recall only passing /dev/sdX as the journal reference, which is
what I have in my documentation). Since ceph-disk does all the
partitioning, it automatically
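For example (a sketch with placeholder device names and OSD id), after a ceph-disk prepare the journal symlink ends up pointing at a stable by-partuuid path:

  ceph-disk prepare /dev/sdb /dev/sdc      # data device, journal device
  ls -l /var/lib/ceph/osd/ceph-0/journal   # symlink should resolve via by-partuuid
  ls -l /dev/disk/by-partuuid/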
Hi,
I have a 5-node Ceph (v0.87) cluster and am trying to deploy Hadoop with
CephFS. I have installed hadoop-1.1.1 on the nodes and changed the
conf/core-site.xml file according to the Ceph documentation
(http://ceph.com/docs/master/cephfs/hadoop/), but after changing the file the
namenode is not
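When debugging that kind of setup, a quick sanity check of the pieces the linked doc relies on might look like this (a sketch; the property names are assumptions to be verified against the doc, and jar names vary by build):

  grep -A1 'fs.default.name\|fs.ceph.impl\|ceph.conf.file' conf/core-site.xml
  ls lib/ | grep -i ceph     # the cephfs-hadoop bindings jar must be on the classpath
  bin/hadoop fs -ls /        # should list the CephFS root if the plugin loads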
Dear all,
Ceph 0.72.2 is deployed on three hosts, but the cluster's status is HEALTH_WARN.
The status is as follows:
# ceph -s
cluster e25909ed-25d9-42fd-8c97-0ed31eec6194
health HEALTH_WARN 768 pgs degraded; 768 pgs stuck unclean; recovery 2/3
objects degraded (66.667%)
monmap
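(Not specific to this paste, but the usual first checks for a fully degraded three-host cluster are whether all OSDs are up and in, and whether the pool replica count can actually be satisfied by CRUSH:)

  ceph osd tree                 # are all OSDs up/in and spread across the three hosts?
  ceph osd dump | grep ^pool    # pool size / min_size vs. what the cluster can place
  ceph health detail            # which PGs are degraded and why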
Hi guys,
I was creating new buckets and adjusting the crush map when 1 monitor
stopped replying.
The scenario is:
2 servers
2 MONs
21 OSDs each server
Error message in the mon.log:
NOTE: a copy of the executable, or `objdump -rdS executable` is
needed to interpret this.
I uploaded the
Hello,
On Wed, 18 Mar 2015 11:41:17 +0100 Francois Lafont wrote:
Hi,
Christian Balzer wrote :
Consider what you think the IO load (writes) generated by your
client(s) will be, multiply that by your replication factor, and divide by
the number of OSDs; that will give you the base load
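As a made-up worked example of that arithmetic (numbers are placeholders, not from this thread): 1000 client write IOPS with 3x replication spread over 30 OSDs is about 100 write IOPS of base load per OSD:

  # client_write_iops * replication_factor / osd_count
  echo $(( 1000 * 3 / 30 ))    # -> 100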
Hi,
From the documentation:
Cache Tier readonly:
Read-only Mode: When admins configure tiers with readonly mode, Ceph
clients write data to the backing tier. On read, Ceph copies the
requested object(s) from the backing tier to the cache tier.
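For context, a read-only tier is wired up with the tiering commands roughly like this (a sketch with placeholder pool names; whether you also set the overlay depends on how clients address the pools):

  ceph osd tier add basepool cachepool          # attach cachepool as a tier of basepool
  ceph osd tier cache-mode cachepool readonly   # serve reads by copying objects into the cache
  ceph osd tier set-overlay basepool cachepool  # redirect client traffic through the tier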
Can anyone tell me where the code for deleting objects with the command
rados rm test-object-1 --pool=data can be found in ceph version 0.80.5?
Thanks.
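One way to find it without trusting anyone's memory of file names (a sketch; the source layout differs between releases, so grep rather than assume a path):

  git clone https://github.com/ceph/ceph.git && cd ceph
  git checkout v0.80.5
  grep -rn '"rm"' src --include='*.cc' | grep -i rados   # CLI side: the rados tool's rm subcommand
  grep -rn 'CEPH_OSD_OP_DELETE' src/osd                  # OSD side: where the delete op is applied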
Hi,
I have a Ceph cluster with both ARM and x86 based servers.
Is there a way for me to define pools, or some logical separation, that would
allow me to use only one set of machines for a particular test?
That way it makes it easy for me to run tests on either x86 or ARM and do
Pankaj,
You can define them via different crush rules, and then assign a pool to a
given crush rule. This is the same in practice as having a node type with
all SSDs and another with all spinners. You can read more about how to set
this up here:
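Roughly, that means giving each hardware class its own CRUSH root and rule, then pointing pools at the matching rule (a sketch with placeholder bucket, host and pool names; pre-Hammer releases use the crush_ruleset pool setting):

  ceph osd crush add-bucket x86 root
  ceph osd crush add-bucket arm root
  ceph osd crush move x86-host1 root=x86               # repeat for each host
  ceph osd crush move arm-host1 root=arm
  ceph osd crush rule create-simple x86_rule x86 host
  ceph osd crush rule create-simple arm_rule arm host
  ceph osd pool set testpool crush_ruleset 1           # rule ids from 'ceph osd crush rule dump'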
On 03/19/2015 12:27 PM, Robert LeBlanc wrote:
Udev already provides some of this for you. Look in /dev/disk/by-*.
You can reference drives by UUID, id or path (for
SAS/SCSI/FC/iSCSI/etc) which will provide some consistency across
reboots and hardware changes.
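For example, listing the stable names udev creates (output obviously varies per machine):

  ls -l /dev/disk/by-id/        # serial / WWN based names
  ls -l /dev/disk/by-path/      # bus and port based names (SAS/SCSI/FC/iSCSI)
  ls -l /dev/disk/by-partuuid/  # GPT partition UUIDs (what ceph-disk uses for journals)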
Thanks for the quick responses.
- Original Message -
From: Potato Farmer potato_far...@outlook.com
To: ceph-users@lists.ceph.com
Sent: Thursday, March 19, 2015 12:26:41 PM
Subject: [ceph-users] FastCGI and RadosGW issue?
Hi,
I am running into an issue uploading to a bucket over an s3 connection to
Yehuda,
You rock! Thank you for the suggestion. That fixed the issue. :)
-Original Message-
From: Yehuda Sadeh-Weinraub [mailto:yeh...@redhat.com]
Sent: Thursday, March 19, 2015 12:45 PM
To: Potato Farmer
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] FastCGI and RadosGW
On Thu, Mar 19, 2015 at 4:46 AM, Matthijs Möhlmann
matth...@cacholong.nl wrote:
Hi,
From the documentation:
Cache Tier readonly:
Read-only Mode: When admins configure tiers with readonly mode, Ceph
clients write data to the backing tier.
Hi,
Is there a ceiling on the number of placement groups per
OSD beyond which steady-state and/or recovery performance will start
to suffer?
Example: I need to create a pool spanning 750 OSDs (25 OSDs per server, 50 servers).
The PG calculator gives me 65536 placement groups with 300 PGs
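The back-of-the-envelope formula behind such calculators is (OSDs x target PGs per OSD) / replica count, rounded to a power of two; with this poster's numbers and an assumed replica count of 3:

  echo $(( 750 * 300 / 3 ))   # 75000, nearest power of two is 65536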
On Thu, 19 Mar 2015, Xinze Chi wrote:
Currently, users do not know when some PGs have been scrubbing for a long time.
I wonder whether we could emit a warning when that happens (with the threshold
defined as osd_scrub_max_time).
It would tell the user that something may be wrong in the cluster.
This should be pretty
I understand there's a KMOD_CCISS package available. However, I can't find it
for download. Anybody have any ideas?
Thanks!
Dan O'Reilly
UNIX Systems Administration
9601 S. Meridian Blvd.
Englewood, CO 80112
720-514-6293
Hello:
We have a cuttlefish (0.61.9) 192-OSD cluster that we are trying to get
back to quorum. We have 2 mon nodes up and ready; we just need this 3rd.
We moved the data dir (/var/lib/ceph/mon) over from one of the good ones to
this 3rd node, but it won't start - we see this error, after which
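For what it's worth, copying another mon's data dir across is not the documented way to (re)create a monitor; the add-a-monitor procedure (sketched here with a placeholder mon id, details in the docs for your release) rebuilds the store from the current monmap and keyring:

  ceph mon getmap -o /tmp/monmap           # fetch the current monmap from the existing quorum
  ceph auth get mon. -o /tmp/mon.keyring   # fetch the mon keyring
  ceph-mon -i c --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
  service ceph start mon.c                 # then start it (init script name varies by distro)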
Hi,
I am running into an issue uploading to a bucket over an s3 connection to
ceph. I can create buckets just fine. I just can't create a key and copy
data to it.
Command that causes the error:
key.set_contents_from_string("testing from string")
I encounter the following error:
Udev already provides some of this for you. Look in /dev/disk/by-*.
You can reference drives by UUID, id or path (for
SAS/SCSI/FC/iSCSI/etc) which will provide some consistency across
reboots and hardware changes.
On Thu, Mar 19, 2015 at 1:10 PM, Colin Corr co...@pc-doctor.com wrote:
Greetings
Greetings Cephers,
I have been lurking on this list for a while, but this is my first inquiry. I
have been playing with Ceph for the past 9 months and am in the process of
deploying a production Ceph cluster. I am seeking advice on an issue that I
have encountered. I do not believe it is a
I think this could be part of what I am seeing. I found this post from back in
2003
http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/12083
which seems to describe a workaround for the behaviour I am seeing.
The constant small block IO I was seeing looks like it was either
On Wed, Mar 18, 2015 at 11:10 PM, Christian Balzer ch...@gol.com wrote:
Hello,
On Wed, 18 Mar 2015 11:05:47 -0700 Gregory Farnum wrote:
On Wed, Mar 18, 2015 at 8:04 AM, Nick Fisk n...@fisk.me.uk wrote:
Hi Greg,
Thanks for your input, and I completely agree that we cannot expect
Hello, everyone!
I have created a Ceph cluster (v0.87.1-1) using the info from the 'Quick
deploy' page (http://docs.ceph.com/docs/master/start/quick-ceph-deploy/),
with the following setup:
- 1 x admin / deploy node;
- 3 x OSD and MON nodes;
- each OSD node has 2 x 8 GB HDDs;
The
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Bogdan SOLGA
Sent: 19 March 2015 20:51
To: ceph-users@lists.ceph.com
Subject: [ceph-users] PGs issue
Hello, everyone!
I have created a Ceph cluster (v0.87.1-1) using the info from the
I'm looking at trialling OSDs with a small flashcache device over them to
hopefully reduce the impact of metadata updates when doing small block IO.
Inspiration from here:-
http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/12083
One thing I suspect will happen, is that when the OSD
On Thu, Mar 19, 2015 at 2:41 PM, Nick Fisk n...@fisk.me.uk wrote:
I'm looking at trialling OSDs with a small flashcache device over them to
hopefully reduce the impact of metadata updates when doing small block IO.
Inspiration from here:-
On 19/03/2015, at 15.50, Andrew Diller dill...@gmail.com wrote:
We moved the data dir over (/var/lib/ceph/mon) from one of the good ones to
this 3rd node, but it won't start - we see this error, after which no further
logging occurs:
2015-03-19 06:25:05.395210 7fcb57f1c7c0 -1 failed to
On 19/03/2015, at 15.57, O'Reilly, Dan daniel.orei...@dish.com wrote:
I understand there’s a KMOD_CCISS package available. However, I can’t find
it for download. Anybody have any ideas?
Oh, I believe HP swapped the cciss driver for the hpsa (Smart Array) driver long ago… so maybe
only download cciss
The problem with using the hpsa driver is that I need to install RHEL 7.1 on a
Proliant system using the SmartArray 400 controller. Therefore, I need a
driver that supports it to even install RHEL 7.1. RHEL 7.1 doesn’t generically
recognize that controller out of the box.
From: Steffen W