Re: [Gluster-devel] handling statfs call in USS

2015-01-06 Thread RAGHAVENDRA TALUR
On Tue, Jan 6, 2015 at 12:43 PM, Raghavendra Bhat rab...@redhat.com wrote:
 On Monday 29 December 2014 01:19 PM, RAGHAVENDRA TALUR wrote:

 On Sun, Dec 28, 2014 at 5:03 PM, Vijay Bellur vbel...@redhat.com wrote:

 On 12/24/2014 02:30 PM, Raghavendra Bhat wrote:


 Hi,

 I have a question. In user serviceable snapshots (USS), the statfs call is
 not implemented as of now. There are two ways statfs can be handled:

 1) Whenever the snapview-client xlator gets a statfs call on a path that
 belongs to the snapshot world, it can send the
 statfs call to the main volume itself, with the path and the inode set
 to the root of the main volume.

 OR

 2) It can redirect the call to the snapshot world (the snapshot daemon,
 which talks to all the snapshots of that particular volume) and send
 back the reply it obtains.

 Each entry in .snaps can be thought of as a specially mounted read-only
 filesystem, and doing a statfs on such a filesystem should return
 statistics associated with it. So approach 2 seems more appropriate.

 I agree with Vijay here. Treating each entry in .snaps as a specially
 mounted read-only filesystem is necessary in order to send proper error
 codes to Samba.


 Yeah, makes sense. But one challenge is: what should be done if someone
 does a statfs on the .snaps directory itself? Because .snaps is a
 virtual directory, I can think of two ways:
 1) Make the snapview-server xlator return zeros when it receives a
 statfs on .snaps, so that the output is similar to the one obtained
 when statfs is done on /proc.

I think some applications may require info from statfs, such as the file
system block size, before they proceed with operations. It may not be a
hard requirement, though.
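To illustrate that point, here is a minimal sketch using plain statvfs(3), not GlusterFS xlator code; the helper name is made up for illustration:

```c
#include <sys/statvfs.h>

/* Hypothetical helper: return the filesystem block size for a path,
 * or 0 on error.  Applications often use f_bsize to size their I/O
 * buffers, which is why a statfs reply filled entirely with zeros
 * can trip them up. */
unsigned long fs_block_size(const char *path)
{
    struct statvfs vfs;

    if (statvfs(path, &vfs) != 0)
        return 0;
    return vfs.f_bsize;
}
```

An application doing buffered I/O would typically call something like this once and then read/write in multiples of the returned size.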

 OR, if the above output is not right:
 2) If statfs comes on .snaps, then wind the call to the regular volume
 itself. Anything beyond .snaps will be sent to the snapshot world.

I see one problem with winding to the regular volume: Samba would expect
the read-only flag to be set in the statfs result for .snaps too;
otherwise a delete of .snaps in Windows Explorer
would behave weirdly.
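A possible compromise, sketched below purely as an assumption (this is not the actual snapview-server code, and the 4096 block size is an arbitrary choice): fill the statfs reply for the virtual .snaps directory with zeroed counters, like /proc, but keep a non-zero block size and set the read-only flag:

```c
#include <string.h>
#include <sys/statvfs.h>

/* Hypothetical sketch combining option 1 with the read-only concern:
 * zero the block/inode counters (like /proc), but keep a non-zero
 * block size for applications that need it, and mark the virtual
 * tree read-only so clients such as Samba see that .snaps cannot be
 * deleted or written to. */
void fill_snaps_statvfs(struct statvfs *buf)
{
    memset(buf, 0, sizeof(*buf)); /* zeroed counters, as for /proc */
    buf->f_bsize  = 4096;         /* assumed block size */
    buf->f_frsize = 4096;
    buf->f_flag   = ST_RDONLY;    /* read-only virtual directory */
}
```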



 Regards,
 Raghavendra Bhat


 -Vijay

 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-devel







-- 
Raghavendra Talur


Re: [Gluster-devel] handling statfs call in USS

2015-01-05 Thread Raghavendra Bhat

On Monday 29 December 2014 01:19 PM, RAGHAVENDRA TALUR wrote:

On Sun, Dec 28, 2014 at 5:03 PM, Vijay Bellur vbel...@redhat.com wrote:

On 12/24/2014 02:30 PM, Raghavendra Bhat wrote:


Hi,

I have a question. In user serviceable snapshots (USS), the statfs call is
not implemented as of now. There are two ways statfs can be handled:

1) Whenever the snapview-client xlator gets a statfs call on a path that
belongs to the snapshot world, it can send the
statfs call to the main volume itself, with the path and the inode set
to the root of the main volume.

OR

2) It can redirect the call to the snapshot world (the snapshot daemon,
which talks to all the snapshots of that particular volume) and send
back the reply it obtains.


Each entry in .snaps can be thought of as a specially mounted read-only
filesystem, and doing a statfs on such a filesystem should return
statistics associated with it. So approach 2 seems more appropriate.

I agree with Vijay here. Treating each entry in .snaps as a specially mounted
read-only filesystem is necessary in order to send proper error codes to Samba.


Yeah, makes sense. But one challenge is: what should be done if someone
does a statfs on the .snaps directory itself? Because .snaps is a
virtual directory, I can think of two ways:
1) Make the snapview-server xlator return zeros when it receives a
statfs on .snaps, so that the output is similar to the one obtained
when statfs is done on /proc.

OR, if the above output is not right:
2) If statfs comes on .snaps, then wind the call to the regular volume
itself. Anything beyond .snaps will be sent to the snapshot world.


Regards,
Raghavendra Bhat


-Vijay








Re: [Gluster-devel] handling statfs call in USS

2014-12-28 Thread Vijay Bellur

On 12/24/2014 02:30 PM, Raghavendra Bhat wrote:


Hi,

I have a question. In user serviceable snapshots (USS), the statfs call is
not implemented as of now. There are two ways statfs can be handled:

1) Whenever the snapview-client xlator gets a statfs call on a path that
belongs to the snapshot world, it can send the
statfs call to the main volume itself, with the path and the inode set
to the root of the main volume.

OR

2) It can redirect the call to the snapshot world (the snapshot daemon,
which talks to all the snapshots of that particular volume) and send
back the reply it obtains.



Each entry in .snaps can be thought of as a specially mounted read-only
filesystem, and doing a statfs on such a filesystem should return
statistics associated with it. So approach 2 seems more appropriate.


-Vijay


Re: [Gluster-devel] handling statfs call in USS

2014-12-28 Thread RAGHAVENDRA TALUR
On Sun, Dec 28, 2014 at 5:03 PM, Vijay Bellur vbel...@redhat.com wrote:
 On 12/24/2014 02:30 PM, Raghavendra Bhat wrote:


 Hi,

 I have a question. In user serviceable snapshots (USS), the statfs call is
 not implemented as of now. There are two ways statfs can be handled:

 1) Whenever the snapview-client xlator gets a statfs call on a path that
 belongs to the snapshot world, it can send the
 statfs call to the main volume itself, with the path and the inode set
 to the root of the main volume.

 OR

 2) It can redirect the call to the snapshot world (the snapshot daemon,
 which talks to all the snapshots of that particular volume) and send
 back the reply it obtains.


 Each entry in .snaps can be thought of as a specially mounted read-only
 filesystem, and doing a statfs on such a filesystem should return
 statistics associated with it. So approach 2 seems more appropriate.

I agree with Vijay here. Treating each entry in .snaps as a specially mounted
read-only filesystem is necessary in order to send proper error codes to Samba.


 -Vijay




-- 
Raghavendra Talur


[Gluster-devel] handling statfs call in USS

2014-12-24 Thread Raghavendra Bhat


Hi,

I have a question. In user serviceable snapshots (USS), the statfs call is
not implemented as of now. There are two ways statfs can be handled:


1) Whenever the snapview-client xlator gets a statfs call on a path that
belongs to the snapshot world, it can send the
statfs call to the main volume itself, with the path and the inode set
to the root of the main volume.


OR

2) It can redirect the call to the snapshot world (the snapshot daemon,
which talks to all the snapshots of that particular volume) and send
back the reply it obtains.
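The two options can be sketched as a routing decision in the snapview-client. This is hypothetical illustration code, not the actual xlator; option 2 is the one shown:

```c
#include <string.h>

/* Sketch of the routing choice for statfs in the snapview-client
 * (hypothetical, not actual GlusterFS code). */
enum statfs_target { WIND_TO_MAIN_VOLUME, WIND_TO_SNAPD };

/* Returns 1 if the path lies in the snapshot world, i.e. it is
 * ".snaps" itself or contains a "/.snaps" component. */
int in_snapshot_world(const char *path)
{
    const char *p = strstr(path, "/.snaps");

    return p != NULL && (p[7] == '\0' || p[7] == '/');
}

/* Option 2 from the mail: redirect snapshot-world paths to the
 * snapshot daemon; everything else goes to the main volume. */
enum statfs_target route_statfs(const char *path)
{
    return in_snapshot_world(path) ? WIND_TO_SNAPD
                                   : WIND_TO_MAIN_VOLUME;
}
```

Under option 1, the snapshot-world branch would instead wind to the main volume with the path/inode rewritten, as described above.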


Please provide feedback.

Regards,
Raghavendra Bhat



Re: [Gluster-devel] handling statfs call in USS

2014-12-24 Thread Vijaikumar M


On Wednesday 24 December 2014 02:30 PM, Raghavendra Bhat wrote:


Hi,

I have a question. In user serviceable snapshots (USS), the statfs call is
not implemented as of now. There are two ways statfs can be handled:


1) Whenever the snapview-client xlator gets a statfs call on a path that
belongs to the snapshot world, it can send the
statfs call to the main volume itself, with the path and the inode set
to the root of the main volume.


In this approach, sending the statfs call to the main volume with the
path and inode set to the root can give incorrect values when quota and
deem-statfs are enabled.

The path/inode should be set to the parent of '.snaps' instead.

Thanks,
Vijay
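Vijaikumar's suggestion of using the parent of '.snaps' could be derived along these lines; this is a hypothetical path-manipulation sketch, not actual GlusterFS code, which resolves by inode rather than string:

```c
#include <string.h>

/* Hypothetical helper: given a path inside the snapshot world, derive
 * the parent of the ".snaps" component, which is what a statfs wound
 * to the main volume should use so that quota/deem-statfs report the
 * right numbers.  Returns 0 on success, -1 if the path contains no
 * "/.snaps" component or the output buffer is too small. */
int snaps_parent_path(const char *path, char *out, size_t outlen)
{
    const char *p = strstr(path, "/.snaps");
    size_t len;

    if (!p || (p[7] != '\0' && p[7] != '/'))
        return -1;
    len = (size_t)(p - path);
    if (len == 0) {               /* ".snaps" at the volume root */
        if (outlen < 2)
            return -1;
        strcpy(out, "/");
        return 0;
    }
    if (len + 1 > outlen)
        return -1;
    memcpy(out, path, len);
    out[len] = '\0';
    return 0;
}
```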


OR

2) It can redirect the call to the snapshot world (the snapshot daemon,
which talks to all the snapshots of that particular volume) and send
back the reply it obtains.


Please provide feedback.

Regards,
Raghavendra Bhat


