Re: [Gluster-devel] [Gluster-users] User-serviceable snapshots design

2014-05-07 Thread Anand Subramanian
Hi Paul, that is definitely doable and a very nice suggestion. It is 
just that we probably won't be able to get to that in the immediate code 
drop (what we like to call phase-1 of the feature). But yes, let us try 
to implement what you suggest for phase-2. Soon :-)


Regards,
Anand

On 05/06/2014 07:27 AM, Paul Cuzner wrote:
Just one question, relating to how you apply a filter to the snapshot 
view from a user's perspective.


In the considerations section, it states: "We plan to introduce a 
configurable option to limit the number of snapshots visible under the 
USS feature."

Would it not be possible to take the metadata from the snapshots and 
form a tree hierarchy when the number of snapshots present exceeds a 
given threshold, effectively organising the snaps by time? I think 
this would work better from an end-user workflow perspective.


i.e.
.snaps
  \/  Today
      +-- snap01_20140503_0800
      +-- snap02_20140503_1400
  >   Last 7 days
  >   7-21 days
  >   21-60 days
  >   60-180 days
  >   180+ days
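
A rough sketch of the kind of age-based bucketing Paul is describing, 
purely illustrative (the function name and the exact day thresholds are 
my assumptions, not part of any agreed design):

#include <stdio.h>
#include <time.h>

/* Illustrative only: choose the time bucket a snapshot would appear
 * under in the .snaps tree, based on its creation time. */
static const char *
snap_age_bucket(time_t snap_time, time_t now)
{
    double days = difftime(now, snap_time) / 86400.0;

    if (days < 1.0)
        return "Today";
    if (days < 7.0)
        return "Last 7 days";
    if (days < 21.0)
        return "7-21 days";
    if (days < 60.0)
        return "21-60 days";
    if (days < 180.0)
        return "60-180 days";
    return "180+ days";
}

int
main(void)
{
    time_t now = time(NULL);
    time_t snap01 = now - 3 * 86400;    /* e.g. a snapshot taken 3 days ago */

    printf("snap01 -> %s\n", snap_age_bucket(snap01, now));
    return 0;
}

In principle the grouping needs only each snapshot's creation 
timestamp, so it could be computed on the fly rather than stored.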





From: Anand Subramanian ansub...@redhat.com
To: gluster-de...@nongnu.org, gluster-users gluster-us...@gluster.org
Cc: Anand Avati aav...@redhat.com
Sent: Saturday, 3 May, 2014 2:35:26 AM
Subject: [Gluster-users] User-serviceable snapshots design

Attached is a basic write-up of the user-serviceable snapshot feature
design (Avati's). Please take a look and let us know if you have
questions of any sort...

We have a basic implementation up now; reviews and upstream commit
should follow very soon over the next week.

Cheers,
Anand



Re: [Gluster-devel] [Gluster-users] User-serviceable snapshots design

2014-05-07 Thread Ira Cooper
Anand, I also have a concern regarding the user-serviceable snapshot feature.

You rightfully call out the lack of scaling caused by maintaining the gfid-to-gfid 
mapping tables, and correctly point out that this will limit the use cases 
this feature is applicable to on the client side.

If Gluster does in fact generate its gfids randomly, and has always done so, I 
propose that we change the algorithm used to determine the mapping and 
eliminate the scaling problem in our solution.

We can create a fixed per-snapshot constant.  (It can live just in the client's 
memory or be stored on disk; that is an implementation detail.)  We will 
call this constant n.

I propose we simply add the constant to the gfid to determine the new gfid.  It 
turns out that this new gfid has the same chance of collision as any random 
gfid: adding a fixed constant is a bijection, so uniformly random inputs stay 
uniformly random.  (It may take a moment to convince yourself of this, but the 
argument is fairly intuitive.)  If we do this, I'd suggest we do it on the 
first 32 bits of the gfid, because we can use simple unsigned math and let it 
just overflow.  (If we get up to 2^32 snapshots, we can revisit this aspect of 
the design, but we'll have other issues at that number.)

By using addition this way, we also allow for subtraction to be used for a 
later purpose.
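
A minimal sketch of the arithmetic being proposed, assuming a plain 16-byte 
gfid type; the function names and the memcpy handling of the 32-bit prefix are 
my own illustration, not anything in the current code:

#include <stdint.h>
#include <string.h>

typedef unsigned char gfid_t[16];   /* gfid/UUID as a raw 16-byte array */

/* Map a gfid from the live volume into snapshot "n"'s gfid space by
 * adding the per-snapshot constant to the first 32 bits.  Unsigned
 * arithmetic wraps modulo 2^32, so overflow is harmless.  Which four
 * bytes are used, and in what byte order, only needs to be consistent
 * between map and unmap. */
static void
snap_gfid_map(const gfid_t in, gfid_t out, uint32_t n)
{
    uint32_t prefix;

    memcpy(out, in, sizeof(gfid_t));
    memcpy(&prefix, out, sizeof(prefix));
    prefix += n;                        /* wraps on overflow by definition */
    memcpy(out, &prefix, sizeof(prefix));
}

/* The reverse mapping: subtracting n (implemented as adding 2^32 - n)
 * recovers the original gfid, which is what makes subtraction useful
 * for a later purpose. */
static void
snap_gfid_unmap(const gfid_t in, gfid_t out, uint32_t n)
{
    snap_gfid_map(in, out, (uint32_t)(0u - n));
}

Because the mapping is a pure function of (gfid, n), no per-entry gfid-to-gfid 
table has to be kept in memory or on disk; the only per-snapshot state is n 
itself, which is where the scaling win comes from.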

Note: this design relies on our random gfid generator not turning out a linear 
range of numbers.  If it has in the past, or will in the future, this design 
clearly has flaws; but I know of no such plans.  As long as the randomness is 
sufficient (i.e. it doesn't turn out linear results), there should be no issue.

Thanks,

-Ira / ira@(redhat.com|samba.org)

PS: +1 to Jeff here.  He's spotting major issues that should be looked at, 
beyond the one I raise above.

- Original Message -
  Attached is a basic write-up of the user-serviceable snapshot feature
  design (Avati's). Please take a look and let us know if you have
  questions of any sort...
 
 A few.
 
 The design creates a new type of daemon: snapview-server.
 
 * Where is it started?  One server (selected how) or all?
 
 * How do clients find it?  Are we dynamically changing the client
   side graph to add new protocol/client instances pointing to new
   snapview-servers, or is snapview-client using RPC directly?  Are
   the snapview-server ports managed through the glusterd portmapper
   interface, or patched in some other way?
 
 * Since a snap volume will refer to multiple bricks, we'll need
   more brick daemons as well.  How are *those* managed?
 
 * How does snapview-server manage user credentials for connecting
   to snap bricks?  What if multiple users try to use the same
   snapshot at the same time?  How does any of this interact with
   on-wire or on-disk encryption?
 
 I'm sure I'll come up with more later.  Also, next time it might
 be nice to use the upstream feature proposal template *as it was
 designed* to make sure that questions like these get addressed
 where the whole community can participate in a timely fashion.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel