On 27/06/2014, at 5:20 AM, Ravishankar N wrote:
> On 06/27/2014 09:10 AM, Justin Clift wrote:
>> On 27/06/2014, at 2:51 AM, Kaushal M wrote:
>>> 3.3 -> 1
>>> 3.4.x -> 2
>>> 3.5.0 -> 3
>>> 3.5.1 -> 30501
>>> 3.6.0 -> 30600
>> Thanks Kaushal, added it here:
>>
>> http://www.gluster.org/community/documentation/index.php/OperatingVersions
Inline.
- Original Message -
From: "Atin Mukherjee"
To: "Sachin Pandit" , "Gluster Devel"
, gluster-us...@gluster.org
Sent: Thursday, June 26, 2014 3:30:31 PM
Subject: Re: [Gluster-devel] Need clarification regarding the "force" option
for snapshot delete.
On 06/26/2014 01:58 PM, Sachin Pandit wrote:
On 06/27/2014 09:10 AM, Justin Clift wrote:
> On 27/06/2014, at 2:51 AM, Kaushal M wrote:
>> 3.3 -> 1
>> 3.4.x -> 2
>> 3.5.0 -> 3
>> 3.5.1 -> 30501
>> 3.6.0 -> 30600
> Thanks Kaushal, added it here:
> http://www.gluster.org/community/documentation/index.php/OperatingVersions
Updated the page with the nomenclature
On 27/06/2014, at 2:51 AM, Kaushal M wrote:
> 3.3 -> 1
> 3.4.x -> 2
> 3.5.0 -> 3
> 3.5.1 -> 30501
> 3.6.0 -> 30600
Thanks Kaushal, added it here:
http://www.gluster.org/community/documentation/index.php/OperatingVersions
Regards and best wishes,
Justin Clift
--
GlusterFS - http://www.gluster.org
Hi all,
The "rackspace-regression-2GB-triggered" job in Jenkins is
enabled again. This job _should_ automatically start a
full regression test whenever a new Gerrit CR is created,
or a new version is uploaded.
If it seems to be going weird though, please feel welcome
to disable it, and manually
3.3 -> 1
3.4.x -> 2
3.5.0 -> 3
3.5.1 -> 30501
3.6.0 -> 30600
~kaushal
On Fri, Jun 27, 2014 at 2:57 AM, Justin Clift wrote:
> On 26/06/2014, at 9:15 PM, James Shubin wrote:
>> Does someone have an operating version table for GlusterFS?
>>
>> These are the operating-version= values in glusterd.info
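From the values Kaushal lists above, the numbering from 3.5.1 onward appears to follow a simple pattern (this is an inference from the listed values, not an official definition):

  op-version = major * 10000 + minor * 100 + patch
  3.5.1 -> 3*10000 + 5*100 + 1 = 30501
  3.6.0 -> 3*10000 + 6*100 + 0 = 30600

Releases up to and including 3.5.0 simply used the small integers 1, 2 and 3.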
The following test case demonstrates the bug:
sh# mount -t glusterfs localhost:meta-test /mnt/one
sh# mount -t glusterfs localhost:meta-test /mnt/two
sh# echo stuff > /mnt/one/file; rm -f /mnt/two/file; echo stuff > /mnt/one/file
bash: /mnt/one/file: Stale file handle
sh# echo stuff
On 26/06/2014, at 9:15 PM, James Shubin wrote:
> Does someone have an operating version table for GlusterFS?
>
> These are the operating-version= values in glusterd.info
>
> I'm looking to know which gluster versions correspond to which
> operating versions, e.g.:
>
> '3.3' => '1', # eg: blank..
Does someone have an operating version table for GlusterFS?
These are the operating-version= values in glusterd.info
I'm looking to know which gluster versions correspond to which
operating versions, e.g.:
'3.3' => '1', # eg: blank...
'3.4.0' => '2',
'3.4.1' => ???
and so on...
Cheers,
James
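A quick way to see which operating version a node is currently running at is to read it straight out of glusterd.info; a minimal sketch, assuming the usual default location of that file (it may differ on your install):

sh# grep operating-version /var/lib/glusterd/glusterd.info
operating-version=2

The value printed (2 is just an example here) can then be mapped back to a release series using the table Kaushal posted earlier in the thread.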
I wanted to add a different angle to the thinking about data-classified volumes.
One of the reasons for classifying data (be it tiering or other schemes, such as
mapping high-profile users to high-profile storage backends) is to handle the
protection of that data differently.
With th
Hi,
A feature page for improved rebalance performance is put up here:
http://www.gluster.org/community/documentation/index.php/Features/improve_rebalance_performance
This mail is a request for comments and further ideas in this regard.
In short, this aims at improving the existing rebalance perf
http://review.gluster.org/#/c/8181/ - I posted a new change. Wouldn't it
be worth adding this to the smoke tests rather than to ./rfc.sh? We
could provide a detailed summary there, since we do not have 'commit/push'
style patch submission.
We can leverage our smoke tests. Thoughts?
On Wed, Jun 25, 2014 at
Anders,
Please find the modified patch to be applied on master for the SGID bit
propagation issue, https://bugzilla.redhat.com/show_bug.cgi?id=1110262
Other comments inline.
> > DHT winds a call to mkdir as a part of the dht_selfheal_directory (in
> > dht_selfheal_dir_mkdir where it winds a cal
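For anyone following along, the POSIX behaviour at stake can be checked from a client mount with a few commands; a minimal sketch, where the mount point /mnt/gv0 and the group 'testers' are made-up names:

sh# mkdir /mnt/gv0/parent
sh# chgrp testers /mnt/gv0/parent
sh# chmod g+s /mnt/gv0/parent
sh# mkdir /mnt/gv0/parent/child
sh# stat -c '%A %G' /mnt/gv0/parent/child
drwxr-sr-x testers

A directory created inside a setgid directory should inherit both the group and the setgid bit; the issue discussed here is how that propagation interacts with the mkdir that DHT issues from dht_selfheal_directory.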
I don't think brick splitting implemented by LVM would affect directory
browsing any more than adding an additional brick would,
- Original Message -
From: "Justin Clift"
To: "Dan Lambright"
Cc: "Shyamsundar Ranganathan" , "Gluster Devel"
Sent: Thursday, June 26, 2014 12:01:16 PM
Subj
Implementing brick splitting using LVM would allow you to treat each logical
volume (split) as an independent brick. Each split would have its own
.glusterfs subdirectory. I think this would help with taking snapshots as well.
- Original Message -
From: "Shyamsundar Ranganathan"
To: "Kr
On 26/06/2014, at 4:54 PM, Dan Lambright wrote:
> Implementing brick splitting using LVM would allow you to treat each logical
> volume (split) as an independent brick. Each split would have its own
> .glusterfs subdirectory. I think this would help with taking snapshots as
> well.
Would brick
> > > For the short term, wouldn't it be OK to disallow adding bricks in
> > > counts that are not a multiple of the group size?
> >
> > In the *very* short term, yes. However, I think that will quickly
> > become an issue for users who try to deploy erasure coding, because those
> > group sizes will be quite large
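To put a rough number on "quite large" (the figures below are only an illustration, not taken from the design): with a disperse group of 8 data bricks plus 4 redundancy bricks,

  group size     = 8 data + 4 redundancy = 12 bricks
  allowed growth = 12, 24, 36, ... bricks at a time

which would be awkward for anyone running a small cluster.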
On 06/26/2014 01:58 PM, Sachin Pandit wrote:
> Hi all,
>
> We had some concerns regarding the snapshot delete "force" option;
> that is why we thought of getting advice from everyone out here.
>
> Currently when we give "gluster snapshot delete <snapname>", it gives a
> notification saying t
On Wednesday 25 June 2014 11:42:10 Jeff Darcy wrote:
> > How will space be allocated to each new sub-brick? Some sort of thin
> > provisioning, or will it be distributed evenly on each split?
>
> That's left to the user. The latest proposal, based on discussion of
> the first, is here:
>
> htt
Hi all,
We had some concerns regarding the snapshot delete "force" option;
that is why we thought of getting advice from everyone out here.
Currently when we give "gluster snapshot delete <snapname>", it gives a
notification saying that the mentioned snapshot will be deleted and asking
"Do you still want to continue?"
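For reference, the interaction being described looks roughly like this (the snapshot name and the exact prompt wording are illustrative, reconstructed from the description above rather than copied from a running system):

sh# gluster snapshot delete snap1
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n)

The "force" option under discussion presumably relates to this confirmation step.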
On 2014-06-25 22:41, Shyamsundar Ranganathan wrote:
> Hi Anders,
>
> There are multiple problems that I see in the test provided, here is
> answering one of them and the reason why this occurs. It does get
> into the code and functions a bit, but bott