On 01/25/2013 11:47 AM, Ugis wrote:
This could work, thanks!
P.S. Is there a way to tell which client has mapped a certain rbd image if no
rbd lock is used?
What you could do is this:
$ rbd lock add myimage `hostname`
That way you know which client locked the image.
Wido
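A minimal sketch of the full lock round-trip, assuming an image named
myimage (the lock id and the client id in the last line are hypothetical):

$ rbd lock add myimage `hostname`        # take a lock named after this host
$ rbd lock list myimage                  # shows the lock id plus the locker's client id
$ rbd lock remove myimage `hostname` client.4123   # release it; client.4123 is made up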
It would be useful to
On Fri, 25 Jan 2013, Andrey Korolyov wrote:
On Fri, Jan 25, 2013 at 4:52 PM, Ugis ugi...@gmail.com wrote:
I mean if you map an rbd and do not use the rbd lock .. command, can you
tell which client has mapped a certain rbd anyway?
Not yet. We need to add the ability to list watchers in librados, which
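(Later Ceph releases did grow this ability; a hedged sketch, assuming a
build whose rados CLI exposes watcher listing and a format-1 image whose
header object is named myimage.rbd:)

$ rados -p rbd listwatchers myimage.rbd   # watchers on the image header object
$ rbd status myimage                      # newer releases summarize the same info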
Hi Sage,
I appreciate your reply.
From my reading of the client code, I think Ceph allocates a msg per
individual file. In other words, if one client is updating different files
(each file doing small writes/updates, e.g. 4 KB), Ceph has to compose
different msgs
Hi,
Could you provide those heaps? Is that possible?
--
Regards,
Sébastien Han.
On Tue, Jan 22, 2013 at 10:38 PM, Sébastien Han han.sebast...@gmail.com wrote:
Well ideally you want to run the profiler during the scrubbing process
when the memory leaks appear :-).
--
Regards,
Sébastien Han.
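(For reference, the tcmalloc heap-profiling hooks that ceph exposes should
let you capture those heaps; the OSD id is hypothetical, and the .heap
dumps should land in the daemon's log directory:)

$ ceph osd tell 0 heap start_profiler
$ ceph osd tell 0 heap dump
$ ceph osd tell 0 heap stop_profiler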
Faidon/paravoid's cluster has a bunch of OSDs that are up, but the pg
queries indicate they are tens of thousands of epochs behind:
"history": { "epoch_created": 14,
             "last_epoch_started": 88174,
             "last_epoch_clean": 88174,
             "last_epoch_split": 0,
             "same_up_since":
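(That history block comes straight out of a PG query; the pg id here is
hypothetical:)

$ ceph pg 2.1f query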
Gregory, the physical network layout is simple; the two networks are
separate. The 192.168.0 and 192.168.1 networks are not subnets within a
single network.
Isaac
- Original Message -
From: Gregory Farnum g...@inktank.com
To: Isaac Otsiabah zmoo...@yahoo.com
Cc: ceph-devel@vger.kernel.org
On Fri, Jan 25, 2013 at 12:18 AM, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2013/1/25 Cesar Mello cme...@gmail.com:
Just as a curiosity, I lost some time because I didn't read this:
.. important:: Check the key output. Sometimes ``radosgw-admin``
generates a key with an
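(The doc note being quoted warns about escape characters in generated
secrets. A hedged example of regenerating a key until it is clean; the uid
is hypothetical:)

$ radosgw-admin user create --uid=tester --display-name="Tester"
$ radosgw-admin key create --uid=tester --gen-secret   # reroll if the secret contains a \ escape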
On Friday, January 25, 2013 at 9:50 AM, Sage Weil wrote:
Faidon/paravoid's cluster has a bunch of OSDs that are up, but the pg
queries indicate they are tens of thousands of epochs behind: [...]
The only way I can see that this would happen is if maps were being
generated much more quickly than PGs could consume them. The solution to
that would be to throttle new map handling at the OSD to the rate at which
PGs consume them. Alternatively, you could tweak the map creation rate at
the mons.
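(A hedged sketch of those two knobs; the option names are real config
options, but the values are guesses and the tell syntax varies by release:)

$ ceph tell osd.* injectargs '--osd-map-message-max 40'    # cap maps shipped per message
$ ceph tell mon.* injectargs '--paxos-propose-interval 2'  # batch updates into fewer maps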
Could you provide those heaps? Is that possible?
We're updating this weekend to 0.56.1.
If it still happens after the update, I'll try to reproduce it on our
test infra and do the profile there, because unfortunately running the
profiler seems to make it eat up a lot of CPU and RAM ...
I also need to
Hi Sage,
I see the Pipe class is a very important structure on the server side. It
has two threads for reading/writing messages on the connected socket.
For example, if one client sends a write request to an OSD, the reader
reads the msg and parses out the msg type, data and so on.
so the msg type
On Thu, Jan 24, 2013 at 9:27 AM, Cesar Mello cme...@gmail.com wrote:
Hi!
I have successfully prototyped read/write access to ceph from Windows
using the S3 API, thanks so much for the help.
Now I would like to do some prototypes targeting performance
evaluation. My scenario typically
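(As an aside, a quick smoke test of that S3 path works from any machine,
assuming an .s3cfg pointed at the gateway; the bucket name is hypothetical:)

$ s3cmd mb s3://proto-test
$ s3cmd put ./block.bin s3://proto-test/block.bin
$ s3cmd get s3://proto-test/block.bin ./roundtrip.bin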
On Fri, Jan 25, 2013 at 10:07 AM, Andrey Korolyov and...@xdel.ru wrote:
Sorry, I wrote too little yesterday because I was sleepy.
That's obviously cache pressure, since dropping caches made these errors
disappear for a long period. I'm not very familiar
with kernel memory
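(For anyone following along, the usual way to drop the caches:)

$ sync && echo 3 > /proc/sys/vm/drop_caches   # 1 = page cache only, 3 = also dentries/inodes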
On Fri, Jan 25, 2013 at 11:51 AM, Isaac Otsiabah zmoo...@yahoo.com wrote:
Gregory, the physical network layout is simple; the two networks are
separate. The 192.168.0 and 192.168.1 networks are not subnets within a
single network.
Hi Isaac,
Could you send us your routing tables on the osds (route -n).
If the S3 API is not well suited to my scenario, then my effort should
be better directed to porting or writing a native ceph client for
Windows. I just need an API to read and write/append blocks to files.
Any comments are really appreciated.
Hopefully someone with more Windows experience
Have you tried rest-bench on localhost at the rados gateway? I was
playing with the rados gateway in a VM the other day, and was getting
up to 400/s on 4k objects. Above that I was getting connection
failures, but I think it was just due to a default max connections
setting somewhere or something.
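(A hedged rest-bench invocation for that kind of test; the flags are from
memory, so check rest-bench --help, and the keys are placeholders:)

$ rest-bench --api-host=localhost --access-key=... --secret=... \
      --bucket=bench --block-size=4096 --concurrent-ios=16 --seconds=60 write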