Hi Greg

I opened this one:

http://tracker.ceph.com/issues/16567

Let's see what they say.

Cheers

G.


On 07/01/2016 04:09 AM, Gregory Farnum wrote:
On Wed, Jun 29, 2016 at 10:50 PM, Goncalo Borges
<goncalo.bor...@sydney.edu.au> wrote:
Hi Shinobu

Sorry, I probably don't understand your question properly.
Is what you're worried about that an object mapped to a specific PG could be
overwritten on different OSDs?
Not really. I was worried by seeing object sizes change on the fly.

I will try to clarify.

We are enabling cephfs for our user community.

My mind was set on a context where the data is not changing. I got scared because I
did not find it plausible for an object to change size and content. I thought
this was a consequence of a bad repair.

However, thinking it over: if we have some application overwriting the same
file over and over again (which I think we have), that means we will see the
same objects change size and content over time. In CephFS, the names of the
objects are derived directly from the file inode and how the file is striped, so
the object names do not actually change when a file is overwritten. Right?! So,
in summary, in this scenario it is normal for objects to change size and content
all the time.
That's true...
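
For reference, a minimal sketch of that naming scheme (illustrative Python, not
CephFS source; it assumes the default layout of 4 MB objects with no custom
striping):

    # Illustrative sketch, not CephFS source: with the default layout
    # (4 MB objects, no custom striping), file data for inode `ino` at
    # byte `offset` lands in a RADOS object named
    # "<ino in hex>.<object index as 8 hex digits>".
    OBJECT_SIZE = 4 * 1024 * 1024

    def cephfs_object_name(ino, offset, object_size=OBJECT_SIZE):
        index = offset // object_size
        return "%x.%08x" % (ino, index)

    # The same byte range of the same file always maps to the same name,
    # so overwriting the file rewrites these objects in place:
    print(cephfs_object_name(0x10000000001, 0))          # 10000000001.00000000
    print(cephfs_object_name(0x10000000001, 5 * 2**20))  # 10000000001.00000001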

A consequence of this is that very fast overwrites of files/objects could
raise scrub errors if, by chance, Ceph is scrubbing PGs whose objects are
changing on the fly.
...but that's definitely not the case. Scrubbing and client IO
shouldn't race with each other. If you're seeing size errors, it could
be some minor scrub race, but it's definitely a bug. You should
discuss it on IRC and/or open a ticket for the RADOS team to look at.
-Greg
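
For anyone chasing the same symptom before such a ticket is resolved, here is a
rough sketch of pulling the scrub inconsistency report for a PG. It assumes
Jewel or later, where `rados list-inconsistent-obj` exists; the PG id below is
a placeholder, and the Python wrapper is just one way you might script it:

    #!/usr/bin/env python
    # Sketch: print the inconsistencies recorded by the last scrub of a PG.
    # Assumes `rados list-inconsistent-obj <pgid> --format=json` (Jewel+)
    # and that the scrub results for that PG are still available.
    import json
    import subprocess

    pgid = "1.23"  # placeholder; take the real PG id from `ceph health detail`
    out = subprocess.check_output(
        ["rados", "list-inconsistent-obj", pgid, "--format=json"])
    report = json.loads(out)
    for inc in report.get("inconsistents", []):
        name = inc["object"]["name"]
        print("%s: %s" % (name, ", ".join(inc.get("errors", []))))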

--
Goncalo Borges
Research Computing
ARC Centre of Excellence for Particle Physics at the Terascale
School of Physics A28 | University of Sydney, NSW  2006
T: +61 2 93511937

