Problem fixed.
The default pool size is set to 2, but for the rbd pool the size was set to 3.
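For anyone checking the same thing, a minimal sketch of verifying and, if desired, changing a pool's replica count (the target value of 2 below is illustrative, not a recommendation):
# show the current replica count of the rbd pool
ceph osd pool get rbd size
# change it only if a size of 2 is actually what you want
ceph osd pool set rbd size 2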
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Mateusz Skała
Sent: Tuesday, March 10, 2015 10:22 AM
To: 'Henrik Korkuc'; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph
So EPEL is not required?
Jesus Chavez
SYSTEMS ENGINEER-C.SALES
jesch...@cisco.com
Phone: +52 55 5267 3146
Mobile: +51 1 5538883255
CCIE - 44433
On Mar 9, 2015, at 8:58 AM, HEWLETT, Paul (Paul)** CTR **
Hi all,
I've just attempted to add a new node and OSD to an existing ceph cluster (it's
a small one I use as a NAS at home, not like the big production ones I normally
work on) and it seems to be throwing some odd errors...
Just looking for where to poke it next...
Log is below,
It's a two
Hi folks! I'm testing a cache tier for an erasure-coded pool with an
RBD image on it. Now I'm facing a problem with a full cache pool:
objects are not evicted automatically, only if I run manually
rados -p cache cache-flush-evict-all
The client side is:
superuser@share:~$ uname
Hi,
you need to set target_max_bytes and/or target_max_objects, as these two
parameters default to 0 for your cache pool:
ceph osd pool set cache_pool_name target_max_objects x
ceph osd pool set cache_pool_name target_max_bytes x
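As a concrete illustration of the two commands above (the pool name and numbers are placeholders, not recommendations), together with the ratios discussed next, which are applied relative to these absolute limits:
# absolute limits the tiering agent works against (example values)
ceph osd pool set my_cache target_max_bytes 1000000000000
ceph osd pool set my_cache target_max_objects 1000000
# flush/evict thresholds, expressed as fractions of the limits above
ceph osd pool set my_cache cache_target_dirty_ratio 0.4
ceph osd pool set my_cache cache_target_full_ratio 0.8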
The ratios you already set (dirty_ratio = 0.4 and
Can you reproduce this with
debug osd = 20
debug filestore = 20
debug ms = 1
on the crashing osd? Also, what sha1 are the other osds and mons running?
-Sam
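A hedged sketch of how the debug settings Sam asks for might be applied to the OSD in question without editing ceph.conf (the OSD id is a placeholder; if the daemon won't stay up long enough, the same values can go under [osd] in ceph.conf before restarting it):
ceph tell osd.3 injectargs '--debug-osd 20 --debug-filestore 20 --debug-ms 1'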
- Original Message -
From: Malcolm Haak malc...@sgi.com
To: ceph-users@lists.ceph.com
Sent: Tuesday, March 10, 2015 3:28:26 AM
Hi Samuel,
The sha1? I'm going to admit ignorance as to what you are looking for. They are
all running the same release if that is what you are asking.
Same tarball built into rpms using rpmbuild on both nodes...
Only difference being that the other node has been upgraded and the problem
node
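For reference, a minimal sketch of how the exact build, including the git sha1 Sam asks about, can usually be read off each node (the daemon id is a placeholder):
# package/binary version, includes the git sha1
ceph --version
# version a running daemon reports via its admin socket
ceph daemon osd.0 version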
On Tue, Mar 10, 2015 at 4:20 AM, Florent B flor...@coppint.com wrote:
Hi all,
I'm testing flock() locking system on CephFS (Giant) using Fuse.
It seems that lock works per client, and not over all clients.
Am I right, or is it supposed to work across different clients? Does the MDS
have such a
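A rough way to test cross-client flock behavior from the shell, assuming both clients have the filesystem mounted at /mnt/cephfs (path and timing are placeholders):
# on client A: take an exclusive lock and hold it for a minute
flock -x /mnt/cephfs/locktest -c 'sleep 60' &
# on client B: a non-blocking attempt on the same file should fail
# while client A holds the lock, if locks are enforced cluster-wide
flock -xn /mnt/cephfs/locktest -c 'echo got lock' || echo 'lock held elsewhere'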
Thanks a lot to Be-El from #ceph (irc://irc.oftc.net/ceph).
The problem is resolved after setting 'target_max_bytes' for the cache pool:
$ ceph osd pool set cache target_max_bytes 1840
Setting only 'cache_target_full_ratio' to 0.7 is not
sufficient for the cache tiering agent,
Hi Jesus
EPEL is required for the libunwind library.
If libunwind is copied to the ceph repo then EPEL would not be required.
Regards
Paul Hewlett
Senior Systems Engineer
Velocix, Cambridge
Alcatel-Lucent
t: +44 1223 435893 m: +44 7985327353
From: Jesus Chavez
What do you mean by unblocked but still stuck?
-Sam
On Mon, 2015-03-09 at 22:54 +, joel.merr...@gmail.com wrote:
On Mon, Mar 9, 2015 at 2:28 PM, Samuel Just sj...@redhat.com wrote:
You'll probably have to recreate osds with the same ids (empty ones),
let them boot, stop them, and mark
On Tue, 10 Mar 2015, Christian Eichelmann wrote:
Hi Sage,
we hit this problem a few months ago as well and it took us quite a while to
figure out what was wrong.
As a system administrator I don't like the idea that daemons or even init
scripts are changing system-wide configuration
Thanks! But I still don't quite get it. What can I do to install libunwind?
[root@aries ~]# yum install libunwind
Loaded plugins: langpacks, priorities, product-id, subscription-manager
7 packages excluded due to repository priority protections
No package libunwind available.
Error: Nothing to
Or maybe installing the RPM directly from:
http://www.mirrorservice.org/sites/ceph.com/rpm-firefly/rhel7/x86_64/libunwind-1.1-3.el7.x86_64.rpm
but this is not for Giant, it seems to be for firefly =(
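For what it's worth, libunwind is a general-purpose third-party library rather than a Ceph component, so the package in the firefly repo would likely still satisfy the dependency for a Giant install; a hedged sketch using the URL above:
yum install http://www.mirrorservice.org/sites/ceph.com/rpm-firefly/rhel7/x86_64/libunwind-1.1-3.el7.x86_64.rpm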
Hi
I have a basic architecture-related question. I know Calamari collects
system usage data (via the Diamond collector) using performance counters. I need
to know whether all the system performance data that Calamari shows remains in
memory or whether it uses files to store it.
Thanks
sumit
Hi,
I had a ceph cluster in HEALTH_OK state with Firefly 0.80.9. I just
wanted to remove an OSD (which worked well). So after:
ceph osd out 3
I waited for the rebalancing but I had PGs stuck unclean:
---
~# ceph -s
cluster
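A minimal sketch of the commands typically used to dig into stuck PGs at this point (the PG id in the last line is a placeholder):
# which PGs are stuck, and on which OSDs
ceph health detail
ceph pg dump_stuck unclean
# full peering state for one of the reported PGs
ceph pg 0.2a query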
On Wed, 11 Mar 2015, Christian Balzer wrote:
On Tue, 10 Mar 2015 12:34:14 -0700 (PDT) Sage Weil wrote:
Adjusting CRUSH maps
* This point release fixes several issues with CRUSH that trigger
excessive data migration when adjusting OSD weights. These are most
Stuck unclean and stuck inactive. I can fire up a full query and
health dump somewhere useful if you want (full pg query info on ones
listed in health detail, tree, osd dump etc). There were blocked_by
operations that no longer exist after doing the OSD addition.
Side note, spent some time
Joao, it looks like map 2759 is causing trouble, how would he get the
full and incremental maps for that out of the mons?
-Sam
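For the full map, something along these lines might work (epoch 2759 from the message above; the output path is a placeholder), while incrementals may need to be pulled out of the mon store directly:
# fetch the full osdmap for a specific epoch from the monitors
ceph osd getmap 2759 -o /tmp/osdmap.2759
# decode it for inspection
osdmaptool --print /tmp/osdmap.2759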
On Tue, 2015-03-10 at 14:12 +, Malcolm Haak wrote:
Hi Samuel,
The sha1? I'm going to admit ignorance as to what you are looking for. They
are all running the
Yeah, get a ceph pg query on one of the stuck ones.
-Sam
On Tue, 2015-03-10 at 14:41 +, joel.merr...@gmail.com wrote:
Stuck unclean and stuck inactive. I can fire up a full query and
health dump somewhere useful if you want (full pg query info on ones
listed in health detail, tree, osd
Hi guys,
The latest trusty version, 0.80.9, was pushed to the deb
http://ceph.com/debian-firefly/ trusty main repository yesterday.
The latest packages have version 0.80.9-1trusty, but I cannot find
the corresponding source packages in
Hi,
The option --dmcrypt-key-dir, used when you activate/create a new
OSD, is unusable by default, because the default path
/etc/ceph/dmcrypt-keys/ is hard-coded in the udev rules.
I have found and tested two simple ways to solve this:
- Change the path of the keys in ''/lib/udev/rules.d/95-ceph-osd.rules''
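A hedged sketch of that first approach, done as an override so a package update doesn't undo it (the replacement key directory is an example):
# copy the shipped rule into /etc/udev, which takes precedence over /lib/udev
cp /lib/udev/rules.d/95-ceph-osd.rules /etc/udev/rules.d/95-ceph-osd.rules
# point the hard-coded key path at the directory passed to --dmcrypt-key-dir
sed -i 's|/etc/ceph/dmcrypt-keys|/srv/ceph/dmcrypt-keys|g' /etc/udev/rules.d/95-ceph-osd.rules
udevadm control --reload-rules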
What is going on with Ceph?
[ceph_deploy.gatherkeys][WARNIN] Unable to find
/etc/ceph/ceph.client.admin.keyring on aries
[ceph_deploy][ERROR ] KeyNotFoundError: Could not find keyring file:
/etc/ceph/ceph.client.admin.keyring on host aries
=(
gosh
Jesus Chavez
On 09/03/2015, at 22.44, Nick Fisk n...@fisk.me.uk wrote:
Either option #1 or #2, depending on whether your data has hot spots or you need
to use EC pools. I'm finding that the cache tier can actually slow stuff
down depending on how much data is in the cache tier vs on the slower tier.
Writes
Hello,
I'm running Debian 8 with Ceph 0.80.6-1 firefly in
production and I need to double the number of PGs. I've found that it was an
experimental feature; is it safe now?
Is there still an
--allow-experimental-feature switch for the ceph osd pool set {pool-name}
pg_num {pg_num} command?
Thanks
Thanks for the reply,
On 10.03.2015 19:52, Weeks, Jacob (RIS-BCT)
wrote:
I am not sure about v0.80.6-1 but in v0.80.7 the
--allow-experimental-feature option is not required. I have increased
pg_num and pgp_num in v0.80.7 without any issues.
On how big a cluster,
how long did it take to
I am not sure about v0.80.6-1 but in v0.80.7 the --allow-experimental-feature
option is not required. I have increased pg_num and pgp_num in v0.80.7
without any issues.
On how big a cluster, how long did it take to recover from this change?
The largest pool was roughly 200TB.
It took less than
This is a bugfix release for firefly. It fixes a performance regression
in librbd, an important CRUSH misbehavior (see below), and several RGW
bugs. We have also backported support for flock/fcntl locks to ceph-fuse
and libcephfs.
We recommend that all Firefly users upgrade.
For more
I am not sure about v0.80.6-1 but in v0.80.7 the --allow-experimental-feature
option is not required. I have increased pg_num and pgp_num in v0.80.7 without
any issues.
It may be safer to make the change incrementally rather than all at once. Since
v0.72, ceph does not allow extreme changes
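A sketch of what "incrementally" can look like in practice (pool name and numbers are illustrative only):
# grow pg_num in steps, bumping pgp_num after each step so data
# actually rebalances before the next increase
ceph osd pool set rbd pg_num 1024
ceph osd pool set rbd pgp_num 1024
# wait for HEALTH_OK, then repeat with the next step (e.g. 2048)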
Hello Christian,
hello everyone,
That would be my guess, as a similar 4-node cluster here with a flat, host-only
topology is having its data distributed as expected by the weights.
To verify this, I eliminated the room buckets.
Now all hosts are located within the root bucket.
(Same tree as
Yehuda Sadeh-Weinraub yehuda@... writes:
According to the API specified here,
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUT.html,
there's no response body expected. I can only assume that the application
tries to decode the XML if an XML content type is returned.
Also what I hinted App
Hi! Is it possible somehow to have a kind of OSD benchmark for CPU?
It would be very useful to measure the actual compatibility of a server with
a given number of OSDs, PGs and so on.
The reason for the request is that the rule of 1 GHz per OSD might not really
hold water
(for reasons like AMD vs
On 11 March 2015 at 06:53, Jesus Chavez (jeschave) jesch...@cisco.com
wrote:
KeyNotFoundError: Could not find keyring file:
/etc/ceph/ceph.client.admin.keyring on host aries
Well - have you verified the keyring is there on host aries and has the
right permissions?
--
Lindsay
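A minimal check along the lines Lindsay suggests, run from the node where ceph-deploy was invoked (hostname from the thread):
# does the admin keyring exist on aries, and is it readable?
ssh aries 'ls -l /etc/ceph/ceph.client.admin.keyring'
ssh aries 'ls -l /etc/ceph/'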
- Original Message -
From: Steffen Winther ceph.u...@siimnet.dk
To: ceph-users@lists.ceph.com
Sent: Tuesday, March 10, 2015 12:06:38 AM
Subject: Re: [ceph-users] S3 RadosGW - Create bucket OP
Yehuda Sadeh-Weinraub yehuda@... writes:
According to the API specified here
We have a large number of shadow files in our cluster that aren't being
deleted automatically as data is deleted.
Is it safe to delete these files?
Is there something we need to be aware of when deleting them?
Is there a script that we can run that will delete these safely?
Is there something
Are the slides or videos from Ceph Day presentations made available
somewhere? I noticed some links for the Frankfurt Ceph Day, but not for the
other Ceph Days.
-- Tom
Hi,
In my cluster something is wrong with the free space. In a cluster with 10 OSDs
(5*1TB + 5*2TB) 'ceph -s' shows:
11425 GB used, 2485 GB / 13910 GB avail
But I have only 2 rbd disks in one pool ('rbd'):
rados df
pool name category KB objects clones
degraded
Hi Sage,
we hit this problem a few months ago as well and it took us quite a
while to figure out what was wrong.
As a system administrator I don't like the idea that daemons or even init
scripts are changing system-wide configuration parameters, so I wouldn't
like to see the OSDs do it
On 3/10/15 11:06, Mateusz Skała wrote:
Hi,
In my cluster something is wrong with the free space. In a cluster with
10 OSDs (5*1TB + 5*2TB) 'ceph -s' shows:
11425 GB used, 2485 GB / 13910 GB avail
But I have only 2 rbd disks in one pool ('rbd'):
rados df
pool name category KB objects
Thanks for the reply,
ceph df
GLOBAL:
    SIZE      AVAIL    RAW USED    %RAW USED
    13910G    2472G    11437G      82.22
POOLS:
    NAME    ID    USED     %USED    MAX AVAIL    OBJECTS
    rbd     0     3792G    27.26    615G         971526
How do I free the raw used space?
On Tue, 10 Mar 2015 12:34:14 -0700 (PDT) Sage Weil wrote:
Adjusting CRUSH maps
* This point release fixes several issues with CRUSH that trigger
excessive data migration when adjusting OSD weights. These are most
obvious when a very small weight change (e.g., a
Hi,
On 10/03/2015 04:40, Leslie Teo wrote:
we used `rados export poolA /opt/zs.rgw-buckets` to export the Ceph cluster pool
named poolA into the local directory /opt/, and imported the directory
/opt/zs.rgw-buckets into another Ceph cluster pool named hello, and got the
following error: shell rados