[Gluster-devel] REST API authentication: JWT - Shared Token vs Shared Secret

2016-03-01 Thread Aravinda

Hi,

For Gluster REST project we are planning to use JSON Web Token for
authentication. There are two approaches to use JWT, please help us to
evaluate between these two options.

http://jwt.io/

For both approaches, the user/app will register with a Username and Secret.

Shared Token Approach (default as per the JWT website
http://jwt.io/introduction/):

--
The server generates a JWT with a pre-configured expiry once the user logs
in by providing the Username and Secret. The Secret is stored encrypted on
the server. Clients must include that JWT in all subsequent requests.

Advantages:
1. Clients need not know anything about JWT signing.
2. A single secret on the server side can be used to verify all tokens.
3. This is a stateless authentication mechanism, as the user state is
   never saved in server memory (http://jwt.io/introduction/).
4. The Secret is stored encrypted on the server.

Disadvantages:
1. URL tampering can be prevented only by using HTTPS.
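
For illustration, a minimal sketch of this shared-token flow, assuming the
PyJWT library; the helper names and the expiry value are placeholders, not
the final Gluster REST API:

    import time
    import jwt  # PyJWT, assumed installed

    SERVER_SECRET = "server-side-signing-key"   # single signing secret kept on the server
    TOKEN_EXPIRY = 3600                          # pre-configured expiry, in seconds

    def issue_token(username):
        # Called once the user has logged in with Username and Secret.
        claims = {"iss": username, "exp": int(time.time()) + TOKEN_EXPIRY}
        return jwt.encode(claims, SERVER_SECRET, algorithm="HS256")

    def verify_token(token):
        # Run for every request; raises jwt.InvalidTokenError on a bad or
        # expired token, so no per-user state is kept on the server.
        return jwt.decode(token, SERVER_SECRET, algorithms=["HS256"])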

Shared Secret Approach:
---
The Secret is not encrypted on the server side because it is required for
JWT signing and verification. Clients sign every request using their
Secret and send that signature along with the request. The server signs
again using the same secret and checks that the signatures match.

Advantages:
1. Protection against URL tampering even without HTTPS.
2. Different expiry-time management based on the issued time.

Disadvantages:
1. Clients must be aware of JWT and signing.
2. Shared secrets are stored in plain text on the server.
3. Every request requires a per-user lookup of the shared secret.
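
A corresponding sketch of the shared-secret flow, again assuming PyJWT.
Binding the signature to the request with a hash of the method and URL
(the "qsh" claim below) is only one possible anti-tampering scheme and is
an assumption, not a decided design; the username is assumed to travel
with the request (for example in a header) so the server can look up the
per-user secret:

    import hashlib
    import time
    import jwt  # PyJWT, assumed installed

    def sign_request(username, user_secret, method, url):
        # Client signs every request with its own shared secret.
        qsh = hashlib.sha256(("%s %s" % (method, url)).encode()).hexdigest()
        claims = {"iss": username, "iat": int(time.time()), "qsh": qsh}
        return jwt.encode(claims, user_secret, algorithm="HS256")

    def verify_request(username, token, method, url, lookup_secret):
        # Server looks up this user's shared secret on every request, verifies
        # the signature and checks that the signed method/URL match the request.
        claims = jwt.decode(token, lookup_secret(username), algorithms=["HS256"])
        expected = hashlib.sha256(("%s %s" % (method, url)).encode()).hexdigest()
        if claims["qsh"] != expected:
            raise ValueError("URL/method does not match the signed request")
        return claims

Expiry can then be enforced from the 'iat' claim with whatever maximum age
the server chooses.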

--
regards
Aravinda

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Bitrot/Tiering: Bad files get migrated and hence corruption goes undetected.

2016-03-01 Thread Venky Shankar
On Sat, Feb 27, 2016 at 06:32:38AM -0500, Joseph Fernandes wrote:
> Yep Agree! :)
> 
> Lets hear from the bitrot folks, what they have to propose.

Apologies for late reply.

> 
> ~Joe 
> 
> - Original Message -
> From: "Niels de Vos" 
> To: "Joseph Fernandes" 
> Cc: "Gluster Devel" 
> Sent: Saturday, February 27, 2016 4:28:43 PM
> Subject: Re: [Gluster-devel] Bitrot/Tiering: Bad files get migrated and hence
> corruption goes undetected.
> 
> On Fri, Feb 26, 2016 at 11:01:28PM -0500, Joseph Fernandes wrote:
> > Well, correctly, we don't migrate the existing signature; the file starts
> > its life fresh in the new tier (i.e. it gets bitrot version 1 on the new
> > tier).
> > Now, this is also the case with any special xattrs/attributes of the
> > file.
> > Again, we rely heavily on the dht rebalance mechanism for migrations,
> > which also doesn't carry special attributes/xattrs.
> 
> Is there a good reason to not migrate the bitrot signature? Relying on
> an existing functionality is fine, but if it does not address all your
> needs, you have a valid use-case to improve it.

That could be done. However, an I/O operation on the object during migration
should invalidate the signature, and the object should be signed again.

AFAICS, there needs to be some infrastructure to avoid (re)signing an
object if it is fresh after migration.

Thoughts?

> 
> Niels
> 
> > 
> > 
> > - Original Message -
> > From: "Niels de Vos" 
> > To: "Joseph Fernandes" 
> > Cc: "Gluster Devel" 
> > Sent: Friday, February 26, 2016 10:33:11 PM
> > Subject: Re: [Gluster-devel] Bitrot/Tiering: Bad files get migrated and
> > hence corruption goes undetected.
> > 
> > On Fri, Feb 26, 2016 at 09:32:46AM -0500, Joseph Fernandes wrote:
> > > Hi All,
> > > 
> > > This is a discussion mail on the following issue, 
> > > 
> > > 1. Object is corrupted before it could be signed: In this case, the
> > >    corrupted object is signed and gets migrated upon I/O. There's no
> > >    way to identify corruption for this set of objects.
> > > 
> > > 2. Object is signed (but not scrubbed) and corruption happens thereafter:
> > >In this case, as of now, integrity checking is not done on the fly
> > >and the object would get migrated (and signed again in the hot tier).
> > > 
> > > 
> > > (1) is definitely not an issue with bitrot and tiering. But for (2) we
> > > can do something to avoid the corrupted file from getting migrated.
> > > Before we migrate files we could scrub them, but that's just a naive
> > > thought; any better suggestions?
> > 
> > Is there a reason the existing signature can not be migrated? Why does
> > it become invalid?
> > 
> > Niels
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Community Meeting - March changes proposed

2016-03-01 Thread Venky Shankar
On Wed, Mar 02, 2016 at 10:47:03AM +0530, Kaushal M wrote:
> Couldn't reply earlier as I was asleep at the time.
> 
> The time change should have been announced during last week's meeting, but
> no one around remembered it (I'd forgotten as well).
> 
> I propose that we do this week's meeting at 12:00 UTC, and shift the
> meetings for the 9th and 25th to 15:00 UTC. We'll announce the change during
> the meeting and in the meeting minutes afterwards, so that everyone knows.
> 
> Does anyone have objections to this?

Works for me.

> 
> ~kaushal
> 
> On Wed, Mar 2, 2016 at 2:46 AM, Joe Julian  wrote:
> > I'm not sure if it's of any value for me to be there, but I can make that
> > time for 20 minutes.
> >
> > (I would have replied sooner, but this got lost in the overnight flood of
> > mail.)
> >
> >
> > On 02/10/2016 04:13 AM, Amye Scavarda wrote:
> >
> > In order to increase the number of people attending the Gluster Community
> > Meetings, we'd like to try something new for March.
> >
> > We'd like to move the March 2 and March 16th meetings to a slightly
> > different time, 3 hours later, making it UTC 15:00.
> >
> > Does this time work for people, or should I look for different times?
> > Thanks!
> >
> > --
> > Amye Scavarda | a...@redhat.com | Gluster Community Lead
> >
> >
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> >
> >
> >
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Community Meeting - March changes proposed

2016-03-01 Thread Kaushal M
Couldn't reply earlier as I was asleep at the time.

The time change should have been announced during last week's meeting, but
no one around remembered it (I'd forgotten as well).

I propose that we do this week's meeting at 12:00 UTC, and shift the
meetings for the 9th and 25th to 15:00 UTC. We'll announce the change during
the meeting and in the meeting minutes afterwards, so that everyone knows.

Does anyone have objections to this?

~kaushal

On Wed, Mar 2, 2016 at 2:46 AM, Joe Julian  wrote:
> I'm not sure if it's of any value for me to be there, but I can make that
> time for 20 minutes.
>
> (I would have replied sooner, but this got lost in the overnight flood of
> mail.)
>
>
> On 02/10/2016 04:13 AM, Amye Scavarda wrote:
>
> In order to increase the number of people attending the Gluster Community
> Meetings, we'd like to try something new for March.
>
> We'd like to move the March 2 and March 16th meetings to a slightly
> different time, 3 hours later, making it UTC 15:00.
>
> Does this time work for people, or should I look for different times?
> Thanks!
>
> --
> Amye Scavarda | a...@redhat.com | Gluster Community Lead
>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>
>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Community Meeting - March changes proposed

2016-03-01 Thread Joe Julian
I'm not sure if it's of any value for me to be there, but I can make 
that time for 20 minutes.


(I would have replied sooner, but this got lost in the overnight flood 
of mail.)


On 02/10/2016 04:13 AM, Amye Scavarda wrote:
In order to increase the number of people attending the Gluster 
Community Meetings, we'd like to try something new for March.


We'd like to move the March 2 and March 16th meetings to a slightly 
different time, 3 hours later, making it UTC 15:00.


Does this time work for people, or should I look for different times?
Thanks!

--
Amye Scavarda | a...@redhat.com  | Gluster 
Community Lead



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Community Meeting - March changes proposed

2016-03-01 Thread Amye Scavarda
On Wed, Feb 10, 2016 at 4:16 AM, Kaushal M  wrote:
> On Wed, Feb 10, 2016 at 5:43 PM, Amye Scavarda  wrote:
>> In order to increase the number of people attending the Gluster Community
>> Meetings, we'd like to try something new for March.
>>
>> We'd like to move the March 2 and March 16th meetings to a slightly
>> different time, 3 hours later, making it UTC 15:00.
>>
>> Does this time work for people, or should I look for different times?
>> Thanks!
>
> This works for me!
>
>>
>> --
>> Amye Scavarda | a...@redhat.com | Gluster Community Lead
>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel

As we've only gotten Kaushal to weigh in on this, do we still wish to
move tomorrow's meeting to slightly later, or change our schedule up
for March?
Thanks!
- amye

-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Cores generated with ./tests/geo-rep/georep-basic-dr-tarssh.t

2016-03-01 Thread Kotresh Hiremath Ravishankar
Hi Soumya,

I analysed the issue and found that the crash is happening because of patch [1].

The patch doesn't set the transport object to NULL in 'rpc_clnt_disable' but
instead does it in 'rpc_clnt_trigger_destroy'. So if there are pending rpc
invocations on the rpc object that was disabled (such instances are possible,
as is happening now in changelog), it will trigger a CONNECT notify again with
'mydata' that has already been freed, causing a crash. This happens because
'rpc_clnt_submit' reconnects if the rpc is not connected.

 rpc_clnt_submit (...) {
        ...
        if (conn->connected == 0) {
                /* reconnects even when the rpc was only disabled; the
                 * resulting CONNECT notify is delivered with the
                 * already-freed 'mydata' */
                ret = rpc_transport_connect (conn->trans,
                                             conn->config.remote_port);
        }
        ...
 }

Without your patch, conn->trans was set to NULL, hence the CONNECT failed and
did not result in a CONNECT notify call. The cleanup also happens in that
failure path.

So the memory leak can happen only if there is no rpc invocation attempted
after a DISCONNECT; it will be cleaned up otherwise.


[1] http://review.gluster.org/#/c/13507/

Thanks and Regards,
Kotresh H R

- Original Message -
> From: "Kotresh Hiremath Ravishankar" 
> To: "Soumya Koduri" 
> Cc: avish...@redhat.com, "Gluster Devel" 
> Sent: Monday, February 29, 2016 4:15:22 PM
> Subject: Re: Cores generated with ./tests/geo-rep/georep-basic-dr-tarssh.t
> 
> Hi Soumya,
> 
> I just tested that it is reproducible only with your patch, both in master
> and the 3.76 branch.
> The geo-rep test cases are marked bad in master, so it's not hit there.
> rpc was introduced in the changelog xlator to communicate with applications
> via libgfchangelog.
> Venky or I will check why the crash is happening and will update.
> 
> 
> Thanks and Regards,
> Kotresh H R
> 
> - Original Message -
> > From: "Soumya Koduri" 
> > To: avish...@redhat.com, "kotresh" 
> > Cc: "Gluster Devel" 
> > Sent: Monday, February 29, 2016 2:10:51 PM
> > Subject: Cores generated with ./tests/geo-rep/georep-basic-dr-tarssh.t
> > 
> > Hi Aravinda/Kotresh,
> > 
> > With [1], I consistently see cores generated with the test
> > './tests/geo-rep/georep-basic-dr-tarssh.t' in release-3.7 branch. From
> > the cores, looks like we are trying to dereference a freed
> > changelog_rpc_clnt_t(crpc) object in changelog_rpc_notify(). Strangely
> > this was not reported in master branch.
> > 
> > I tried debugging but couldn't find any possible suspects. I request you
> > to take a look and let me know if [1] caused any regression.
> > 
> > Thanks,
> > Soumya
> > 
> > [1] http://review.gluster.org/#/c/13507/
> > 
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Spurious failures in ec/quota.t and distribute/bug-860663.t

2016-03-01 Thread Raghavendra Gowdappa


- Original Message -
> From: "Poornima Gurusiddaiah" 
> To: "Gluster Devel" , "Manikandan Selvaganesan" 
> , "Susant Palai"
> , "Nithya Balachandran" 
> Sent: Tuesday, March 1, 2016 4:49:51 PM
> Subject: [Gluster-devel] Spurious failures in ec/quota.t and  
> distribute/bug-860663.t
> 
> Hi,
> 
> I see these test cases failing spuriously,
> 
> ./tests/basic/ec/quota.t Failed Tests: 1-13, 16, 18, 20, 2
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/18637/consoleFull
> ./tests/bugs/distribute/bug-860663.t Failed Test: 13
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/18622/consoleFull

The test that failed is just an umount; I am not sure why it failed:

# Unmount and remount to make sure we're doing fresh lookups.
TEST umount $M0

Alternatively we can do another fresh mount on, say, $M1, and run the future
tests there. Can you check whether patch [1] fixes your issue (push your patch
as a dependency of [1])?

[1] http://review.gluster.org/13567

> 
> Could any one from Quota and dht look into it?
> Regards,
> Poornima
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Spurious failures in ec/quota.t and distribute/bug-860663.t

2016-03-01 Thread Poornima Gurusiddaiah
Thank you, I have rebased the patch.

Regards,
Poornima

- Original Message -
> From: "Xavier Hernandez" 
> To: "Poornima Gurusiddaiah" , "Gluster Devel" 
> , "Manikandan
> Selvaganesan" , "Susant Palai" , 
> "Nithya Balachandran" 
> Sent: Tuesday, March 1, 2016 4:57:11 PM
> Subject: Re: [Gluster-devel] Spurious failures in ec/quota.t and 
> distribute/bug-860663.t
> 
> Hi Poornima,
> 
> On 01/03/16 12:19, Poornima Gurusiddaiah wrote:
> > Hi,
> >
> > I see these test cases failing spuriously,
> >
> > ./tests/basic/ec/quota.t Failed Tests: 1-13, 16, 18, 20, 2
> > https://build.gluster.org/job/rackspace-regression-2GB-triggered/18637/consoleFull
> 
> This is already solved by http://review.gluster.org/13446/. It has been
> merged just a couple hours ago.
> 
> Xavi
> 
> >
> > ./tests/bugs/distribute/bug-860663.t Failed Test: 13
> > https://build.gluster.org/job/rackspace-regression-2GB-triggered/18622/consoleFull
> >
> > Could any one from Quota and dht look into it?
> >
> > Regards,
> > Poornima
> >
> >
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> >
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Review request for md-cache improvement patches

2016-03-01 Thread Poornima Gurusiddaiah
Hi, 

Here are the improvements proposed for md-cache:
- Integrate it with upcall cache invalidation so that the cache timeout can
  be increased beyond 1 second.
- Enable md-cache to cache xattrs.

The doc explaining these in detail can be found at
http://review.gluster.org/#/c/13408/

Here are the patches implementing the same: 
md-cache: http://review.gluster.org/#/c/12951/ 
http://review.gluster.org/#/c/13406/ 
Upcall: http://review.gluster.org/#/c/12995/ 
http://review.gluster.org/#/c/12996/ 

Please review them.

Regards, 
Poornima 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Spurious failures in ec/quota.t and distribute/bug-860663.t

2016-03-01 Thread Sakshi Bansal
Hi,

Patch http://review.gluster.org/#/c/10906/ (recently merged) fixes 
./tests/bugs/distribute/bug-860663.t.


- Original Message -
From: "Xavier Hernandez" 
To: "Poornima Gurusiddaiah" , "Gluster Devel" 
, "Manikandan Selvaganesan" , 
"Susant Palai" , "Nithya Balachandran" 
Sent: Tuesday, March 1, 2016 4:57:11 PM
Subject: Re: [Gluster-devel] Spurious failures in ec/quota.t and 
distribute/bug-860663.t

Hi Poornima,

On 01/03/16 12:19, Poornima Gurusiddaiah wrote:
> Hi,
>
> I see these test cases failing spuriously,
>
> ./tests/basic/ec/quota.t Failed Tests: 1-13, 16, 18, 20, 2
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/18637/consoleFull

This is already solved by http://review.gluster.org/13446/. It has been 
merged just a couple hours ago.

Xavi

>
> ./tests/bugs/distribute/bug-860663.t Failed Test: 13
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/18622/consoleFull
>
> Could any one from Quota and dht look into it?
>
> Regards,
> Poornima
>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Spurious failures in ec/quota.t and distribute/bug-860663.t

2016-03-01 Thread Xavier Hernandez

Hi Poornima,

On 01/03/16 12:19, Poornima Gurusiddaiah wrote:

Hi,

I see these test cases failing spuriously,

./tests/basic/ec/quota.t Failed Tests: 1-13, 16, 18, 20, 2
https://build.gluster.org/job/rackspace-regression-2GB-triggered/18637/consoleFull


This is already solved by http://review.gluster.org/13446/. It has been 
merged just a couple hours ago.


Xavi



./tests/bugs/distribute/bug-860663.t Failed Test: 13
https://build.gluster.org/job/rackspace-regression-2GB-triggered/18622/consoleFull

Could any one from Quota and dht look into it?

Regards,
Poornima


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Spurious failures in ec/quota.t and distribute/bug-860663.t

2016-03-01 Thread Vijaikumar Mallikarjuna
Hi Poornima,

The below patches might solve the regression failure for
'./tests/basic/ec/quota.t':

http://review.gluster.org/#/c/13446/
http://review.gluster.org/#/c/13447/

Thanks,
Vijay


On Tue, Mar 1, 2016 at 4:49 PM, Poornima Gurusiddaiah 
wrote:

> Hi,
>
> I see these test cases failing spuriously,
>
> ./tests/basic/ec/quota.t Failed Tests: 1-13, 16, 18, 20, 2
>
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/18637/consoleFull
>
> ./tests/bugs/distribute/bug-860663.t Failed Test: 13
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/18622/consoleFull
>
> Could any one from Quota and dht look into it?
>
> Regards,
> Poornima
>
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Spurious failures in ec/quota.t and distribute/bug-860663.t

2016-03-01 Thread Poornima Gurusiddaiah
Hi, 

I see these test cases failing spuriously, 

./tests/basic/ec/quota.t Failed Tests: 1-13, 16, 18, 20, 2 
https://build.gluster.org/job/rackspace-regression-2GB-triggered/18637/consoleFull
 
./tests/bugs/distribute/bug-860663.t Failed Test: 13 
https://build.gluster.org/job/rackspace-regression-2GB-triggered/18622/consoleFull
 

Could any one from Quota and dht look into it? 
Regards, 
Poornima 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Introducing file based snapshots in gluster

2016-03-01 Thread Prasanna Kumar Kalever
On Tuesday, March 1, 2016 1:15:06 PM, Kaushal M wrote:
> On Tue, Mar 1, 2016 at 12:37 PM, Prasanna Kumar Kalever
>  wrote:
> > Hello Gluster,
> >
> >
> > Introducing a new file-based snapshot feature in gluster, based on the
> > reflinks feature which will be available in xfs in a couple of months
> > (downstream).
> >
> >
> > what is a reflink ?
> >
> > You might have surely used softlinks and hardlinks everyday!
> >
> > A reflink supports transparent copy-on-write, unlike soft/hard links,
> > which makes it useful for snapshotting. Basically a reflink points to the
> > same data blocks that are used by the actual file (the blocks are common
> > to the real file and the reflink file, hence space efficient). Reflinks
> > use different inode numbers, so they can have different permissions to
> > access the same data blocks. Although they may look similar to hardlinks,
> > they are more space efficient and can handle all operations that can be
> > performed on a regular file, unlike hardlinks, which are limited to
> > unlink().
> >
> > Which filesystems support reflink?
> > I think it was Btrfs that introduced it first; now xfs is trying hard to
> > make it available, and in the future we may see it in ext4 as well.
> >
> > You can get a feel of reflinks by following tutorial
> > https://pkalever.wordpress.com/2016/01/22/xfs-reflinks-tutorial/
> >
> >
> > POC in gluster: https://asciinema.org/a/be50ukifcwk8tqhvo0ndtdqdd?speed=2
> >
> >
> > How are we doing it?
> > Currently we don't have a specific system call that gives a handle to
> > reflinks, so I decided to go with an ioctl call using the XFS_IOC_CLONE
> > command.
> >
> > In POC I have used setxattr/getxattr to create/delete/list the snapshot.
> > Restore feature will use setxattr as well.
> >
> > We can have a fop although FUSE doesn't understand it; we will manage with
> > a setxattr at the FUSE mount point, and from the client side it will again
> > be a fop till the posix xlator, then an ioctl to the underlying filesystem.
> > Planning to expose APIs for create, delete, list and restore.
> >
> > Are these snapshots Internal or external?
> > We will have a separate file each time we create a snapshot; obviously the
> > snapshot file will have a different inode number and will be read-only.
> > All these files are maintained in the ".fsnap/" directory kept in the
> > parent directory where the snapshotted/actual file resides, therefore they
> > will not be visible to the user (even with the ls -a option, just like
> > USS).
> >
> > *** We can always restore to any snapshot available in the list, and the
> > best part is we can delete any snapshot between snapshot1 and snapshotN
> > because all of them are independent ***
> >
> > It is the application's duty to ensure the consistency of the file before
> > it tries to create a snapshot; say, in the case of a VM file snapshot, it
> > is the hypervisor that should freeze the I/O and then request the snapshot.
> >
> >
> >
> > Integration with gluster: (Initial state, need more investigation)
> >
> > Quota:
> > Since the snapshot files reside in the ".fsnap/" directory kept in the
> > same directory where the actual file exists, they fall under the same
> > user's quota :)
> >
> > DHT:
> > As said, the snapshot files will reside in the same directory where the
> > actual file resides, maybe in a ".fsnap/" directory.
> >
> > Re-balancing:
> > The simplest solution could be: copy the actual file as a whole, then for
> > the snapshot files rsync only the deltas and recreate the snapshot history
> > by repeating the snapshot sequence after each snapshot-file rsync.
> >
> > AFR:
> > Mostly this will be the same as a write fop (inodelks and quorum). There
> > could be no way to recover or recreate a snapshot on a node (brick, to be
> > precise) that was down while taking the snapshot and comes back later in
> > time.
> >
> > Disperse:
> > Mostly take the inodelk and snapshot the file on each of the bricks; that
> > should work.
> >
> > Sharding:
> > Assume we have a file split into 4 shards. If the fop to take a snapshot
> > is sent to all the subvols having the shards, it would be sufficient. All
> > shards will have the snapshot for the state of the shard.
> > List of snap fop should be sent only to the main subvol where shard 0
> > resides.
> > Delete of a snap should be similar to create.
> > Restore would be a little difficult because metadata of the file needs to
> > be updated in shard xlator.
> > 
> > Also, in case of sharding, the bricks have a gfid-based flat filesystem.
> > Hence the snaps created will also be in the shard directory, so quota is
> > not straightforward and needs additional work in this case.
> >
> >
> > How can we make it better ?
> > Discussion page: http://pad.engineering.redhat.com/kclYd9TPjr
> 
> This link is not accessible externally. Could you move the contents to
> a public location?

Thanks Kaushal, I have copied it to
https://public.pad.fsfe.org/p/Snapshots_in_glusterfs
Let's use this from now on.
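
To make the ioctl-based cloning described in the proposal above concrete,
here is a minimal sketch of how a reflink clone can be exercised from
Python. It assumes a kernel and filesystem with reflink support and uses
the generic Linux FICLONE ioctl (derived from XFS_IOC_CLONE/BTRFS_IOC_CLONE);
the names below are illustrative only, and the real implementation in the
posix xlator would of course be in C:

    import fcntl
    import os

    FICLONE = 0x40049409  # _IOW(0x94, 9, int)

    def reflink_snapshot(src_path, snap_path):
        # Create snap_path as a copy-on-write clone of src_path: it shares
        # data blocks with the source but gets a new inode, and is created
        # read-only like the proposed ".fsnap/" snapshot files.
        src_fd = os.open(src_path, os.O_RDONLY)
        try:
            snap_fd = os.open(snap_path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o444)
            try:
                fcntl.ioctl(snap_fd, FICLONE, src_fd)
            finally:
                os.close(snap_fd)
        finally:
            os.close(src_fd)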

-Prasanna
> 
> >
> >
> > Thanks to "Pranith Kumar Karam

[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 3 hours)

2016-03-01 Thread Jiffin Tony Thottan

Hi all,

This meeting is scheduled for anyone who is interested in learning more
about, or assisting with, the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting  )
- date: every Tuesday
- time: 12:00 UTC
 (in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Thank you
Jiffin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel