On 31.01.2013 08:43, Stefan Priebe - Profihost AG wrote:
Hello,
while compiling the bobtail branch I've seen this warning:
mon/PGMap.cc: In member function ‘void
PGMap::apply_incremental(CephContext*, const PGMap::Incremental)’:
mon/PGMap.cc:247: warning: comparison between signed and unsigned integer expressions
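For anyone curious about the fix, this is the usual GCC complaint about comparing a signed index with an unsigned size. A minimal, self-contained sketch of the pattern and the common remedy (made-up names, not the actual PGMap.cc code):

#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical illustration only -- not the actual mon/PGMap.cc code.
// GCC warns when a signed index is compared against an unsigned size.
int count_above(const std::vector<uint32_t> &items, uint32_t threshold) {
    int hits = 0;
    // Would warn: for (int i = 0; i < items.size(); i++) -- 'i' is signed,
    // items.size() is unsigned (size_t). Use an unsigned index instead,
    // or cast one side explicitly when mixing signedness is unavoidable.
    for (std::size_t i = 0; i < items.size(); ++i) {
        if (items[i] > threshold)
            ++hits;
    }
    return hits;
}

int main() {
    std::vector<uint32_t> v{1, 5, 9};
    return count_above(v, 4) == 2 ? 0 : 1;
}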
We are using nginx on top of rgw. In nginx we managed to create logic
for using AMQP and async operations via queues. Then workers on
each side get data from their own queue and copy the data from the
source to the destination via the S3 API. This works for PUT/DELETE, and works
automatically when production
Hi,
great to see that we now have op/s and B/s output in ceph -w / ceph -s.
But is it reading or writing or both? Also, if there are no ops, the ';'
and the rest of the line are missing instead of printing zeros. This
makes parsing harder.
See:
2013-01-31 10:46:42.045874 mon.0 [INF] pgmap v4037097:
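Until the output format settles, one way to keep scripts simple is to treat the throughput fields as optional and default them to zero when the trailing part of the line is absent. A rough sketch (the regex and line layout are guesses based on the sample line above, not a documented format):

#include <iostream>
#include <regex>
#include <string>

// Sketch only: extract op/s from a pgmap status line if present, otherwise
// report 0, so a missing trailing section never breaks downstream parsing.
long ops_per_sec(const std::string &line) {
    static const std::regex re(R"((\d+)\s*op/s)");
    std::smatch m;
    if (std::regex_search(line, m, re))
        return std::stol(m[1].str());
    return 0;  // no ops reported: treat as zero instead of failing
}

int main() {
    std::string busy = "pgmap v4037097: 8128 pgs: 8128 active+clean; "
                       "172 GB data; 1520 KB/s wr, 35 op/s";
    std::string idle = "pgmap v4037098: 8128 pgs: 8128 active+clean; 172 GB data";
    std::cout << ops_per_sec(busy) << "\n";   // prints 35
    std::cout << ops_per_sec(idle) << "\n";   // prints 0
    return 0;
}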
Hi list,
I just rebuilt my Ceph setup with 6 nodes (20 SATA + 4 SSD as journal
+ 10GbE) per node; the software stack is Ubuntu 12.10 + kernel 3.6.3 + XFS + Ceph
0.56.2. Before building up the Ceph cluster, I checked that all my disks can reach
90 MB+/s for sequential write and 100 MB+/s for sequential
On Thu, 31 Jan 2013, Stefan Priebe - Profihost AG wrote:
Hi,
great to see that we now have op/s and B/s output in ceph -w / ceph -s.
But is it reading or writing or both? Also, if there are no ops, the ';'
and the rest of the line are missing instead of printing zeros. This
makes parsing
On 01/31/2013 12:16 PM, Sage Weil wrote:
On Thu, 31 Jan 2013, Stefan Priebe - Profihost AG wrote:
Hi,
great to see that we now have op/s and B/s output in ceph -w / ceph -s.
But is it reading or writing or both? Also, if there are no ops, the ';'
and the rest of the line are missing instead of
On 01/30/2013 06:33 PM, Josh Durgin wrote:
It'd be nice to have a log message when libceph_compatible fails.
It could be a future patch though.
Reviewed-by: Josh Durgin josh.dur...@inktank.com
You're right, and I had thought of doing that but forgot.
I will add it before I commit. Thanks a
Hi-
Can you reproduce this with logs? It looks like there are a few ops that
are hanging for a very long time, but there isn't enough information here
except to point to osds 610, 612, 615, and 68...
thanks!
s
On Thu, 31 Jan 2013, Chen, Xiaoxi wrote:
Hi list,
I just rebuilt my ceph
this does not work:
#~ ceph --format=json -s
health HEALTH_OK
monmap e1: 3 mons at
{a=10.255.0.100:6789/0,b=10.255.0.101:6789/0,c=10.255.0.102:6789/0},
election epoch 2502, quorum 0,1,2 a,b,c
osdmap e14994: 24 osds: 24 up, 24 in
pgmap v4046683: 8128 pgs: 8128 active+clean; 172 GB
On 01/31/2013 12:46 PM, Stefan Priebe - Profihost AG wrote:
this does not work:
#~ ceph --format=json -s
health HEALTH_OK
monmap e1: 3 mons at
{a=10.255.0.100:6789/0,b=10.255.0.101:6789/0,c=10.255.0.102:6789/0},
election epoch 2502, quorum 0,1,2 a,b,c
osdmap e14994: 24 osds: 24 up,
On 01/31/2013 12:54 PM, Joao Eduardo Luis wrote:
On 01/31/2013 12:46 PM, Stefan Priebe - Profihost AG wrote:
this does not work:
#~ ceph --format=json -s
health HEALTH_OK
monmap e1: 3 mons at
{a=10.255.0.100:6789/0,b=10.255.0.101:6789/0,c=10.255.0.102:6789/0},
election epoch 2502,
Hi,
I disabled scrubbing using
ceph osd tell \* injectargs '--osd-scrub-min-interval 100'
ceph osd tell \* injectargs '--osd-scrub-max-interval 1000'
and the leak seems to be gone.
See the graph at http://i.imgur.com/A0KmVot.png with the OSD memory
for the 12 osd processes over the
Don't use po::value<string>()->required() since this breaks build on
RHEL/CentOS 6. Check if the options are set as in the code of other
ceph parts.
Move some checks up in the code to validate options as soon
as possible. Remove printing 'help' twice.
Signed-off-by: Danny Al-Gaaf
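For illustration, a minimal sketch of the pattern the commit message describes, with invented option names: declare the option without ->required() (which the boost shipped on RHEL/CentOS 6 does not support) and validate its presence by hand right after parsing:

#include <iostream>
#include <string>
#include <boost/program_options.hpp>

namespace po = boost::program_options;

// Illustrative option names only -- not the actual patch. The point is to
// avoid po::value<string>()->required(), which older boost lacks, and to
// check for required options explicitly instead.
int main(int argc, char **argv) {
    po::options_description desc("Options");
    desc.add_options()
        ("help", "show help")
        ("name", po::value<std::string>(), "name to operate on");

    po::variables_map vm;
    po::store(po::parse_command_line(argc, argv, desc), vm);
    po::notify(vm);

    if (vm.count("help")) {
        std::cout << desc << std::endl;   // print help once, not twice
        return 0;
    }
    // Validate required options as early as possible, before doing any work.
    if (!vm.count("name")) {
        std::cerr << "error: --name is required" << std::endl;
        return 1;
    }
    std::cout << "name: " << vm["name"].as<std::string>() << std::endl;
    return 0;
}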
On 01/31/2013 05:43 AM, Sage Weil wrote:
Hi-
Can you reproduce this with logs? It looks like there are a few ops that
are hanging for a very long time, but there isn't enough information here
except to point to osds 610, 612, 615, and 68...
FWIW, I have a small pile of disks with bad
Hi,
I'm crossing my fingers, but I just noticed that since I upgraded to kernel
version 3.2.0-36-generic on Ubuntu 12.04 the other day, ceph-osd memory
usage has stayed stable.
Unfortunately for me, I'm already on 3.2.0-36-generic (Ubuntu 12.04 as well).
Cheers,
Sylvain
PS: Dave
Hi All ,
I need some help and guidance on compiling the ceph client in Eclipse.
Hi Sage,
Thanks for your reply. Sorry to bother you again.
I like your suggestion to create different pools with different types of
storage devices. Can you give me a quick guide: if I want to
implement this function, which code files do I need to review on the ceph
server side?
I tried to figure out
Can you pastebin the output of rados -p rbd ls?
On Thu, Jan 31, 2013 at 10:17 AM, Andrey Korolyov and...@xdel.ru wrote:
Hi,
Please take a look; this data remains for days and does not seem like it
will be deleted in the future either:
pool name category KB objects clones
Hello,
Thank you very, very much, it works in general now ☺
Can you say something about how to limit the rights of the user I created, like
in the manual?
We want to create an account which has no rights to create buckets. I cannot
find a fitting manual for that.
Thank you very much
Regards
On 01/31/2013 06:17 PM, Andrey Korolyov wrote:
Hi,
Please take a look; this data remains for days and does not seem like it
will be deleted in the future either:
pool name category KB objects clones degraded unfound rd rd KB wr wr KB
data
http://xdel.ru/downloads/ceph-log/rados-out.txt.gz
On Thu, Jan 31, 2013 at 10:31 PM, Gregory Farnum g...@inktank.com wrote:
Can you pastebin the output of rados -p rbd ls?
On Thu, Jan 31, 2013 at 10:17 AM, Andrey Korolyov and...@xdel.ru wrote:
Hi,
Please take a look, this data remains for
On Thu, Jan 31, 2013 at 10:50 AM, Andrey Korolyov and...@xdel.ru wrote:
http://xdel.ru/downloads/ceph-log/rados-out.txt.gz
On Thu, Jan 31, 2013 at 10:31 PM, Gregory Farnum g...@inktank.com wrote:
Can you pastebin the output of rados -p rbd ls?
Well, that sure is a lot of rbd objects. Looks
You might want to ask a more-specific question.
On 01/31/2013 08:24 AM, charles L wrote:
I need some help and guide on compiling ceph client on Eclipse..
On Thu, 31 Jan 2013, Sylvain Munaut wrote:
Hi,
I disabled scrubbing using
ceph osd tell \* injectargs '--osd-scrub-min-interval 100'
ceph osd tell \* injectargs '--osd-scrub-max-interval 1000'
and the leak seems to be gone.
See the graph at http://i.imgur.com/A0KmVot.png
On Thu, 31 Jan 2013, Jim Schutt wrote:
On 01/31/2013 05:43 AM, Sage Weil wrote:
Hi-
Can you reproduce this with logs? It looks like there are a few ops that
are hanging for a very long time, but there isn't enough information here
except to point to osds 610, 612, 615, and 68...
On Thu, 31 Jan 2013, sheng qiu wrote:
Hi Sage,
Thanks for your reply. Sorry to bother you again.
I like your suggestion to create different pools with different types of
storage devices. Can you give me a quick guide: if I want to
implement this function, which code files do I need to review in
On 01/31/2013 01:00 PM, Sage Weil wrote:
On Thu, 31 Jan 2013, Jim Schutt wrote:
On 01/31/2013 05:43 AM, Sage Weil wrote:
Hi-
Can you reproduce this with logs? It looks like there are a few ops that
are hanging for a very long time, but there isn't enough information here
except to point
Skowron,
Can you go into a bit more detail on your specific use-case? What type
of data are you storing in rbd (type, volume)?
Neil
On Wed, Jan 30, 2013 at 10:42 PM, Skowron Sławomir
slawomir.skow...@grupaonet.pl wrote:
I am making a new thread because I think it's a different case.
We have managed
Yes, anyone could do this now by setting up the OSDs on top of
dm-crypted disks, correct? This would just automate the process, and
manage keys for us?
On Tue, Jan 22, 2013 at 5:04 PM, Sage Weil s...@inktank.com wrote:
On Tue, 22 Jan 2013, James Page wrote:
-BEGIN PGP SIGNED MESSAGE-
Correct.
I wasn't actually involved in this (or any other real) work,
but as I recall the only real trick is how much key management
you want:
Do we want to be able to recover the key if a good disk
is rescued from a destroyed server and added to a new
server?
Do we want to ensure that
That's a great idea; it simplifies things for the admin and doesn't
sound too hairy on the monitor side. One alternative option is to do
it in one of the ways that ZFS does; that is, you can provide a config
option that says 'get the keys from here'. For example, you could set
up an https server and
If there are any users who work for organizations which have strict
encryption or key management policies, I'd be interested to learn a
bit more about your needs, to drive some of the roadmap around the
encryption features longer-term.
Neil
On Wed, Jan 30, 2013 at 02:42:14PM +, Al Viro wrote:
On Tue, Jan 29, 2013 at 01:03:23PM -0800, Sage Weil wrote:
We should drop the mds_client.c hunk from your patch, and then do
something like the below. I'll put it in the ceph tree so we can do some
basic testing. Unfortunately,
There is a check in the completion path for osd requests that
ensures the number of pages allocated is enough to hold the amount
of incoming data expected.
For bio requests coming from rbd the number of pages is not really
meaningful (although total length would be). So stop requiring that
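To make the intent concrete, here is a standalone sketch of that distinction with invented names (the real check lives in the kernel osd client, not in code like this): the capacity check keys off the page count only when pages actually back the buffer, and off the total length otherwise.

#include <cstddef>
#include <iostream>

// Invented names for illustration only; not the kernel osd client code.
constexpr std::size_t kPageSize = 4096;

enum class DataKind { PageVector, Bio };

struct ReplyBuffer {
    DataKind kind;
    std::size_t page_count;  // meaningful only for PageVector
    std::size_t total_len;   // meaningful for both
};

// A page-count check only makes sense when pages back the buffer;
// for bio-backed data, compare against the total length instead.
bool reply_fits(const ReplyBuffer &buf, std::size_t incoming_len) {
    if (buf.kind == DataKind::PageVector)
        return buf.page_count * kPageSize >= incoming_len;
    return buf.total_len >= incoming_len;
}

int main() {
    ReplyBuffer bio{DataKind::Bio, /*page_count=*/0, /*total_len=*/8192};
    std::cout << std::boolalpha << reply_fits(bio, 4096) << "\n";  // true
    return 0;
}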
The ceph messenger has a few spots that are only used when
bio messages are supported, and that's only when CONFIG_BLOCK
is defined. This surrounds a couple of spots with #ifdef's
that would cause a problem if CONFIG_BLOCK were not present
in the kernel configuration.
This resolves:
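Purely to illustrate the shape of the guard being described (placeholder function names, not the actual messenger code): bio-only code compiles away cleanly when CONFIG_BLOCK is absent.

// Placeholder names; this only shows the CONFIG_BLOCK guard pattern,
// it is not the actual ceph messenger code.
#ifdef CONFIG_BLOCK
static void write_partial_msg_bio() {
    // bio-specific send path: compiled only when the block layer exists
}
#endif

static void write_partial_msg() {
#ifdef CONFIG_BLOCK
    write_partial_msg_bio();
#else
    // without CONFIG_BLOCK there is no bio data to handle
#endif
}

int main() {
    write_partial_msg();
    return 0;
}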
The patches have been applied to the wip-rpm-update3 branch. Additionally,
rbd-fuse has been put into its own package.
Please have a look and let me know if that works for everyone.
Thanks,
Gary
On Jan 30, 2013, at 9:10 PM, Dan Mick wrote:
On 01/30/2013 09:08 PM, Dan Mick wrote:
Hello to all,
Here is my setup :
- 2 racks
- osd1 .. osd6 in rack1
- osd7 .. osd12 in rack2
- replica = 2
- CRUSH map set to put replicas across racks
My question :
Let's imagine that one day, I need to unplug one of the racks (let's
say, rack1). No problem because another copy of my objects
Try ceph osd set noout beforehand and then ceph osd unset noout. That will
prevent any OSDs from getting removed from the mapping, so no data will be
rebalanced. I don't think there's a way to prevent OSDs from getting zapped on
an individual basis, though.
This is described briefly in the