Re: [Gluster-devel] Beta2 NFS sanity tests.

2014-06-23 Thread Niels de Vos
On Wed, Jun 18, 2014 at 12:45:35PM -0400, Benjamin Turner wrote:
 I re-ran the individual test 20 times, re-ran the fs_mark test suite 10
 times, and re-ran the whole fs-sanity test suite again, and was unable to
 reproduce.

Many thanks for running these tests!

Niels

 
 -b
 
 
 On Tue, Jun 17, 2014 at 4:23 PM, Benjamin Turner bennytu...@gmail.com
 wrote:
 
  I saw 1 failure on NFS mounts, I am investigating:
 
   final pass/fail report =
 Test Date: Tue Jun 17 16:15:38 EDT 2014
 Total : [43]
 Passed: [41]
 Failed: [2]
 Abort : [0]
 Crash : [0]
  -
 [   PASS   ]  FS Sanity Setup
 [   PASS   ]  Running tests.
 [   PASS   ]  FS SANITY TEST - arequal
 [   PASS   ]  FS SANITY LOG SCAN - arequal
 [   PASS   ]  FS SANITY TEST - bonnie
 [   PASS   ]  FS SANITY LOG SCAN - bonnie
 [   PASS   ]  FS SANITY TEST - glusterfs_build
 [   PASS   ]  FS SANITY LOG SCAN - glusterfs_build
 [   PASS   ]  FS SANITY TEST - compile_kernel
 [   PASS   ]  FS SANITY LOG SCAN - compile_kernel
 [   PASS   ]  FS SANITY TEST - dbench
 [   PASS   ]  FS SANITY LOG SCAN - dbench
 [   PASS   ]  FS SANITY TEST - dd
 [   PASS   ]  FS SANITY LOG SCAN - dd
 [   PASS   ]  FS SANITY TEST - ffsb
 [   PASS   ]  FS SANITY LOG SCAN - ffsb
 [   PASS   ]  FS SANITY TEST - fileop
 [   PASS   ]  FS SANITY LOG SCAN - fileop
 [   PASS   ]  FS SANITY TEST - fsx
 [   PASS   ]  FS SANITY LOG SCAN - fsx
 [   PASS   ]  FS SANITY LOG SCAN - fs_mark
 [   PASS   ]  FS SANITY TEST - iozone
 [   PASS   ]  FS SANITY LOG SCAN - iozone
 [   PASS   ]  FS SANITY TEST - locks
 [   PASS   ]  FS SANITY LOG SCAN - locks
 [   PASS   ]  FS SANITY TEST - ltp
 [   PASS   ]  FS SANITY LOG SCAN - ltp
 [   PASS   ]  FS SANITY TEST - multiple_files
 [   PASS   ]  FS SANITY LOG SCAN - multiple_files
 [   PASS   ]  FS SANITY LOG SCAN - posix_compliance
 [   PASS   ]  FS SANITY TEST - postmark
 [   PASS   ]  FS SANITY LOG SCAN - postmark
 [   PASS   ]  FS SANITY TEST - read_large
 [   PASS   ]  FS SANITY LOG SCAN - read_large
 [   PASS   ]  FS SANITY TEST - rpc
 [   PASS   ]  FS SANITY LOG SCAN - rpc
 [   PASS   ]  FS SANITY TEST - syscallbench
 [   PASS   ]  FS SANITY LOG SCAN - syscallbench
 [   PASS   ]  FS SANITY TEST - tiobench
 [   PASS   ]  FS SANITY LOG SCAN - tiobench
 [   PASS   ]  FS Sanity Cleanup
 
 [   FAIL   ]  FS SANITY TEST - fs_mark
 [   FAIL   ]  
  /rhs-tests/beaker/rhs/auto-tests/components/sanity/fs-sanity-tests-v2
 
  The failed test was:
 
  #  fs_mark  -d  .  -D  4  -t  4  -S  1
  #   Version 3.3, 4 thread(s) starting at Tue Jun 17 13:39:36 2014
  #   Sync method: INBAND FSYNC: fsync() per file in write loop.
  #   Directories:  Time based hash between directories across 4 
  subdirectories with 180 seconds per subdirectory.
  #   File names: 40 bytes long, (16 initial bytes of time stamp with 24 
  random bytes at end of name)
  #   Files info: size 51200 bytes, written with an IO size of 16384 bytes 
  per write
  #   App overhead is time in microseconds spent in the test not doing file 
  writing related system calls.
 
   FSUse%    Count    Size    Files/sec    App Overhead
  Error in unlink of ./00/53a07d587SRWZLFBMIUOEVGM4RY9F5P3 : No such 
  file or directory
  fopen failed to open: fs_log.txt.19509
  fs-mark pass # 1 failed
 
  I will investigate and open a BZ if this is reproducible.
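A failure like this is usually chased with a rerun loop; a minimal sketch (the `repeat_check` helper and the mount path are illustrative, not part of the fs-sanity suite):

```shell
#!/bin/sh
# Illustrative helper (not part of fs-sanity): run a command N times and
# stop at the first failure, mirroring the "re-ran it 20 times" check.
repeat_check() {
    n=$1; shift
    i=1
    while [ "$i" -le "$n" ]; do
        if ! "$@" >/dev/null 2>&1; then
            echo "failed on iteration $i"
            return 1
        fi
        i=$((i + 1))
    done
    echo "all $n iterations passed"
}

# Example, on an NFS mount of the volume (same invocation as the failed run):
#   cd /mnt/glustervol && repeat_check 20 fs_mark -d . -D 4 -t 4 -S 1
```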
 
  -b
 
 
 

 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Spurious failure - ./tests/bugs/bug-859581.t

2014-06-23 Thread Ravishankar N

On 06/18/2014 10:13 AM, Pranith Kumar Karampuri wrote:


On 06/18/2014 10:11 AM, Atin Mukherjee wrote:


On 06/18/2014 10:04 AM, Pranith Kumar Karampuri wrote:

On 06/18/2014 09:39 AM, Atin Mukherjee wrote:

Pranith,

Regression test mentioned in $SUBJECT failed (testcases 14 & 16).

Console log can be found at
http://build.gluster.org/job/rackspace-regression-2GB/227/consoleFull

My initial suspicion is HEAL_TIMEOUT (set to 60 seconds): healing might
not have completed within this time frame, which is why EXPECT_WITHIN
fails.
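For context, an EXPECT_WITHIN-style check polls a probe until its output matches or a deadline passes; roughly (a sketch, not the actual implementation in tests/include.rc, and the probe shown is just the kind the tests use):

```shell
#!/bin/sh
# Sketch of EXPECT_WITHIN-style polling: rerun a probe command until its
# output equals the expected value, or give up after $timeout seconds.
expect_within() {
    timeout=$1; expected=$2; shift 2
    elapsed=0
    while [ "$elapsed" -le "$timeout" ]; do
        if [ "$("$@")" = "$expected" ]; then
            return 0
        fi
        sleep 1
        elapsed=$((elapsed + 1))
    done
    echo "expected '$expected' within ${timeout}s, never saw it"
    return 1
}

# e.g. expect_within "$HEAL_TIMEOUT" "0" get_pending_heal_count "$V0"
# Raising HEAL_TIMEOUT just widens this loop's deadline.
```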


I am not sure on what basis this HEAL_TIMEOUT's value was derived.
Probably you would be the better to analyse it. Having a larger 
time out

value might help here?

I don't think it is a spurious failure. There seems to be a bug in
afr-v2. I will have to fix that.

If it's not a spurious failure, why isn't it failing every time?
It depends on which subvolume AFR picks in readdir: if it reads the one 
with the directory it will succeed; otherwise it will fail.


The bug is easy to hit if the steps are carried out in a 2-node setup. I 
have created BZ https://bugzilla.redhat.com/show_bug.cgi?id=1112158 
and am looking into it.

Pranith

Pranith

Cheers,
Atin






Re: [Gluster-devel] 3.5.1-beta2 Problems with suid and sgid bits on directories

2014-06-23 Thread Niels de Vos
On Tue, Jun 17, 2014 at 11:49:26AM -0400, Shyamsundar Ranganathan wrote:
 You may be looking at the problem being fixed here, [1].
 
 On lookup, an attribute mismatch was not being healed across 
 directories, and this patch attempts to address that. The current 
 version of the patch does not heal the S_ISUID and S_ISGID bits; that 
 is work in progress (but easy enough to incorporate and test based on 
 the patch at [1]).
 
 On a separate note, add-brick just adds a brick to the cluster; the 
 lookup is where the heal (or creation of the directory across all 
 subvolumes in the DHT xlator) is done.

I assume that this is not a regression between 3.5.0 and 3.5.1? If that 
is the case, we can pull the fix in 3.5.2 because 3.5.1 really should 
not get delayed much longer.

Thanks,
Niels

 
 Shyam
 
 [1] http://review.gluster.org/#/c/6983/
 
 - Original Message -
  From: Anders Blomdell anders.blomd...@control.lth.se
  To: Gluster Devel gluster-devel@gluster.org
  Sent: Tuesday, June 17, 2014 10:53:52 AM
  Subject: [Gluster-devel] 3.5.1-beta2 Problems with suid and sgid bits on
  directories
  
  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA1
  
  With a glusterfs-3.5.1-0.3.beta2.fc20.x86_64 with a reverted
  3dc56cbd16b1074d7ca1a4fe4c5bf44400eb63ff (due to local lack of IPv4
  addresses), I get
  weird behavior if I:
  
  1. Create a directory with suid/sgid/sticky bit set (/mnt/gluster/test)
  2. Make a subdirectory of #1 (/mnt/gluster/test/dir1)
  3. Do an add-brick
  
  Before add-brick
  
 755 /mnt/gluster
7775 /mnt/gluster/test
2755 /mnt/gluster/test/dir1
  
  After add-brick
  
 755 /mnt/gluster
1775 /mnt/gluster/test
 755 /mnt/gluster/test/dir1
  
  On the server it looks like this:
  
7775 /data/disk1/gluster/test
2755 /data/disk1/gluster/test/dir1
1775 /data/disk2/gluster/test
 755 /data/disk2/gluster/test/dir1
  
  Filed as bug:
  
https://bugzilla.redhat.com/show_bug.cgi?id=1110262
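Until the heal covers these bits, they can be restored by hand from a brick that still has them; a rough sketch (assumes GNU stat; `copy_mode` is an illustrative name, not a gluster command):

```shell
#!/bin/sh
# Illustrative repair: copy the full mode, including setuid/setgid/sticky
# bits, from a directory that kept them to one that lost them.
copy_mode() {
    src=$1; dst=$2
    mode=$(stat -c %a "$src")   # e.g. 2755, including the high bits
    chmod "$mode" "$dst"
}

# e.g. copy_mode /data/disk1/gluster/test/dir1 /data/disk2/gluster/test/dir1
```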
  
  If somebody can point me to where the logic of add-brick is placed, I can
  give
  it a shot (a find/grep on mkdir didn't immediately point me to the right
  place).
  
  
  Regards
  
  Anders Blomdell
  
  
  
  
  - --
  Anders Blomdell  Email: anders.blomd...@control.lth.se
  Department of Automatic Control
  Lund University  Phone:+46 46 222 4625
  P.O. Box 118 Fax:  +46 46 138118
  SE-221 00 Lund, Sweden
  
  -BEGIN PGP SIGNATURE-
  Version: GnuPG v1
  Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/
  
  iQEcBAEBAgAGBQJToFZ/AAoJENZYyvaDG8NcIVgH/0FnyTuB/yutrAdKhOCFTGGY
  fKqWEozJjiUB4TE8hvAnYw7DalT6jlPLUre6vGzUuioS6TQNn8emTFA7GN9Ghklv
  pc2I8NWtwju2iXqLO5ACjBDRuFcYaDLQRVzBFiQpOoOkwrly0uEvcSgUKFxrSuMx
  NrUZKgYTjZb+8kwnSsFv/QNlcPR7zWAiyqbu7rh2a2Q9ArwEsLyTi+se6z/T3PIH
  ASEIR86jWywaP/JDRoSIUX0PIIS8v7mciFtCVGmgIHfugmEwDH2ZxQtbrkxHOC3/
  UjOaGY0TYwPNRnlzk2qkk6Yo3bALGzHa4SUfdRf+gvNa0wZLQWFTdnhWP1dPMc0=
  =tMUX
  -END PGP SIGNATURE-


[Gluster-devel] Plea for reviews

2014-06-23 Thread Jeff Darcy
I have several patches queued up for 3.6, which have all passed
regression tests.  Unfortunately, they're all in areas where our
resources are pretty thin, so getting the required +1 reviews
is proving to be a challenge.  The patches are as follows:

* For heterogeneous bricks [1]
  http://review.gluster.org/8093

* For better ssl [2]
  http://review.gluster.org/3695
  http://review.gluster.org/8040
  http://review.gluster.org/8094

* Not in feature list, but turning out to be important
  http://review.gluster.org/7702

I know these are all in tricky areas.  I'd be glad to do
walkthroughs to explain what each patch is doing in more
detail.  Thanks in advance to anyone who can help!
  

[1] 
http://www.gluster.org/community/documentation/index.php/Features/heterogeneous-bricks

[2] http://www.gluster.org/community/documentation/index.php/Features/better-ssl


Re: [Gluster-devel] Data classification proposal

2014-06-23 Thread Dan Lambright
A frustrating aspect of Linux is the complexity of /etc configuration file 
formats (rsyslog.conf, logrotate, cron, yum repo files, etc.). In that spirit I 
would simplify the select in the data classification proposal (copied below) 
to accept only a list of bricks/sub-tiers with wildcards ('*'), rather than 
full-blown regular expressions or key/value pairs. I would drop the "unclaimed" 
keyword, and not have the keywords "media-type" and "rack". It does not seem 
necessary to introduce new keys for the underlying block device type (SSD vs. 
disk) any more than we need to express the filesystem (XFS vs. ext4). In other 
words, I think tiering can be fully expressed in the configuration file while 
still abstracting the underlying storage. That said, the configuration file 
could be built up by a CLI or GUI, and richer expressibility could exist at 
that level.

example:

brick host1:/brick ssd-group0-1

brick host2:/brick ssd-group0-2

brick host3:/brick disk-group0-1

rule tier-1
select ssd-group0*

rule tier-2
select disk-group0

rule all
select tier-1
# use repeated select to establish order
select tier-2
type features/tiering
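A wildcard select like "select ssd-group0*" above could be resolved with plain shell pattern matching; a sketch (the brick names come from the example, but the `select_bricks` helper itself is hypothetical):

```shell
#!/bin/sh
# Sketch: expand a wildcard "select" pattern against a list of brick names.
select_bricks() {
    pattern=$1; shift
    for name in "$@"; do
        # an unquoted case pattern gives us the '*' wildcard for free
        case "$name" in
            $pattern) echo "$name" ;;
        esac
    done
}

# select_bricks 'ssd-group0*' ssd-group0-1 ssd-group0-2 disk-group0-1
# prints the two ssd bricks and skips the disk brick
```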

The filtering option's regular expressions seem hard to avoid. Even if matching 
on just the file name satisfies most use cases (that we know of?), I do not see 
a way to avoid regular expressions in the option for filters. (Down the road, 
if we were to allow complete flexibility in how files can be distributed across 
subvolumes, the filtering problems may start to look similar to 90s-era packet 
classification, with a solution along the lines of the Berkeley packet filter.)

There may be different rules by which data is distributed at the tiering 
level. For example, under one tiering policy the fast tier (first listed) 
would be a cache for the slow tier (second listed). I think the "option" 
keyword could handle that.

rule all
select tier-1
 # use repeated select to establish order
select tier-2
type features/tiering
option tier-cache, mode=writeback, dirty-watermark=80
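The dirty-watermark idea above reduces to a simple threshold test; a sketch (the `needs_flush` name and the percentage arithmetic are illustrative, not an actual tiering option):

```shell
#!/bin/sh
# Sketch of a writeback watermark check: once at least $watermark percent
# of the fast tier is dirty, it should start flushing to the slow tier.
needs_flush() {
    dirty_bytes=$1; total_bytes=$2; watermark=$3
    pct=$((dirty_bytes * 100 / total_bytes))
    [ "$pct" -ge "$watermark" ]
}

# needs_flush 85 100 80 succeeds (85% dirty >= 80% watermark)
```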

Another example tiering policy could be based on compliance: when a file needs 
to become read-only, it moves from the first listed tier to the second.

rule all
 select tier-1
 # use repeated select to establish order
 select tier-2
 type features/tiering
option tier-retention

- Original Message -
From: Jeff Darcy jda...@redhat.com
To: Gluster Devel gluster-devel@gluster.org
Sent: Friday, May 23, 2014 3:30:39 PM
Subject: [Gluster-devel] Data classification proposal

One of the things holding up our data classification efforts (which include 
tiering but also other stuff as well) has been the extension of the same 
conceptual model from the I/O path to the configuration subsystem and 
ultimately to the user experience.  How does an administrator define a tiering 
policy without tearing their hair out?  How does s/he define a mixed 
replication/erasure-coding setup without wanting to rip *our* hair out?  The 
included Markdown document attempts to remedy this by proposing one out of many 
possible models and user interfaces.  It includes examples for some of the most 
common use cases, including the "replica 2.5" case we've been discussing 
recently.  Constructive feedback would be greatly appreciated.



# Data Classification Interface

The data classification feature is extremely flexible, to cover use cases from
SSD/disk tiering to rack-aware placement to security or other policies.  With
this flexibility comes complexity.  While this complexity does not affect the
I/O path much, it does affect both the volume-configuration subsystem and the
user interface to set placement policies.  This document describes one possible
model and user interface.

The model we used is based on two kinds of information: brick descriptions and
aggregation rules.  Both are contained in a configuration file (format TBD)
which can be associated with a volume using a volume option.

## Brick Descriptions

A brick is described by a series of simple key/value pairs.  Predefined keys
include:

 * **media-type**  
   The underlying media type for the brick.  In its simplest form this might
   just be *ssd* or *disk*.  More sophisticated users might use something like
   *15krpm* to represent a faster disk, or *perc-raid5* to represent a brick
   backed by a RAID controller.

 * **rack** (and/or **row**)  
   The physical location of the brick.  Some policy rules might be set up to
   spread data across more than one rack.

User-defined keys are also allowed.  For example, some users might use a
*tenant* or *security-level* tag as the basis for their placement policy.

## Aggregation Rules

Aggregation rules are used to define how bricks should be combined into
subvolumes, and those potentially combined into higher-level subvolumes, and so
on until all of the bricks are accounted for.  Each aggregation 

Re: [Gluster-devel] Data classification proposal

2014-06-23 Thread Jeff Darcy
 A frustrating aspect of Linux is the complexity of /etc configuration file's
 formats (rsyslog.conf, logrotate, cron, yum repo files, etc) In that spirit
 I would simplify the select in the data classification proposal (copied
 below) to only accept a list of bricks/sub-tiers with wild-cards '*', rather
 than full-blown regular expressions or key/value pairs.

Then how does *the user* specify which files should go into which tier/group?
If we don't let them specify that in configuration, then it can only be done
in code and we've taken a choice away from them.

 I would drop the
 unclaimed keyword

Then how do you specify any kind of default rule for files not matched
elsewhere?  If certain files can be placed only in certain locations due to
security or compliance considerations, how would they specify the location(s)
for files not subject to any such limitation?

 and not have keywords media type, and rack. It does
 not seem necessary to introduce new keys for the underlying block device
 type (SSD vs disk) any more than we need to express the filesystem (XFS vs
 ext4).

The idea is to let users specify whatever criteria matter *to them*; media
type and rack/row are just examples to get them started.

 In other words, I think tiering can be fully expressed in the
 configuration file while still abstracting the underlying storage.

Yes, *tiering* can be expressed using a simpler syntax.  I was trying for
something that could also support placement policies other than strict
linear above vs. below with only the migration policies we've written
into code.

 That
 said, the configuration file could be built up by a CLI or GUI, and richer
 expressibility could exist at that level.
 
 example:
 
 brick host1:/brick ssd-group0-1
 
 brick host2:/brick ssd-group0-2
 
 brick host3:/brick disk-group0-1
 
 rule tier-1
   select ssd-group0*
 
 rule tier-2
   select disk-group0
 
 rule all
   select tier-1
   # use repeated select to establish order
   select tier-2
   type features/tiering
 
 The filtering option's regular expressions seem hard to avoid. If just the
 name of the file satisfies most use cases (that we know of?) I do not think
 there is any way to avoid regular expressions in the option for filters.
 (Down the road, if we were to allow complete flexibility in how files can be
 distributed across subvolumes, the filtering problems may start to look
 similar to 90s-era packet classification with a solution along the lines of
 the Berkeley packet filter.)
 
 There may be different rules by which data is distributed at the tiering
 level. For example, one tiering policy could be the fast tier (first
 listed). It would be a cache for the slow tier (second listed). I think
 the option keyword could handle that.
 
 rule all
   select tier-1
# use repeated select to establish order
   select tier-2
   type features/tiering
   option tier-cache, mode=writeback, dirty-watermark=80
 
 Another example tiering policy could be based on compliance ; when a file
 needs to become read-only, it moves from the first listed tier to the
 second.
 
 rule all
select tier-1
# use repeated select to establish order
select tier-2
type features/tiering
   option tier-retention

OK, good so far.  How would you handle the replica 2.5 sanlock case with
the simplified syntax?  Or security-aware placement equivalent to this?

   rule secure
  select brick-0-*
  option encryption on

   rule insecure
  select brick-1-*
  option encryption off

   rule all
  select secure
  select insecure
  type features/filter
  option filter-condition-1 security-level:high
  option filter-target-1 secure
  option default-subvol insecure

In true agile fashion, maybe we should compile a set of user stories and
treat those as test cases for any proposed syntax.  That would need to
include at least

   * hot/cold tiering

   * HIPAA/EUPD style compliance (file must *always* or *never* be in X)

   * security-aware placement

   * multi-tenancy

   * sanlock case

I'm not trying to create complexity for its own sake.  If there's a
simpler syntax that doesn't eliminate some of these cases in favor of
tiering and nothing else, that would be great.

 - Original Message -
 From: Jeff Darcy jda...@redhat.com
 To: Gluster Devel gluster-devel@gluster.org
 Sent: Friday, May 23, 2014 3:30:39 PM
 Subject: [Gluster-devel] Data classification proposal
 
 One of the things holding up our data classification efforts (which include
 tiering but also other stuff as well) has been the extension of the same
 conceptual model from the I/O path to the configuration subsystem and
 ultimately to the user experience.  How does an administrator define a
 tiering policy without tearing their hair out?  How does s/he define a mixed
 replication/erasure-coding setup without wanting to rip *our* hair out?  The
 included Markdown document attempts to 

Re: [Gluster-devel] Data classification proposal

2014-06-23 Thread Dan Lambright
Rather than using the "unclaimed" keyword, my instinct was to explicitly list 
which bricks have not been claimed. Perhaps you have something more subtle 
in mind; it is not apparent to me from your response. Can you provide an 
example of why it is necessary and where a list could not be provided in its 
place? If the list is somehow difficult to figure out, due to a particularly 
complex setup or some such, I'd prefer that a CLI/GUI build that list rather 
than having sysadmins hand-edit this file.

The key/value piece seems like syntactic sugar, i.e. an alias. If so, let the 
name itself be the alias; no notions of SSD or physical location need be 
inserted. Unless I am missing that it *is* necessary, I stand by that value 
judgement as a philosophy of not putting anything into the configuration file 
that you don't require. Can you provide an example of where it is necessary?

As to your point on filtering (which files go into which tier/group): I wrote 
a little further down in the email that I do not see a way around regular 
expressions within the filter-condition keyword. My understanding of your 
proposal is that the select statement does not do file-name filtering; the 
filter-condition option does. I'm OK with that.

As far as the user stories idea goes, that seems like a good next step.

- Original Message -
From: Jeff Darcy jda...@redhat.com
To: Dan Lambright dlamb...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org
Sent: Monday, June 23, 2014 5:24:14 PM
Subject: Re: [Gluster-devel] Data classification proposal

 A frustrating aspect of Linux is the complexity of /etc configuration file's
 formats (rsyslog.conf, logrotate, cron, yum repo files, etc) In that spirit
 I would simplify the select in the data classification proposal (copied
 below) to only accept a list of bricks/sub-tiers with wild-cards '*', rather
 than full-blown regular expressions or key/value pairs.

Then how does *the user* specify which files should go into which tier/group?
If we don't let them specify that in configuration, then it can only be done
in code and we've taken a choice away from them.

 I would drop the
 unclaimed keyword

Then how do you specify any kind of default rule for files not matched
elsewhere?  If certain files can be placed only in certain locations due to
security or compliance considerations, how would they specify the location(s)
for files not subject to any such limitation?

 and not have keywords media type, and rack. It does
 not seem necessary to introduce new keys for the underlying block device
 type (SSD vs disk) any more than we need to express the filesystem (XFS vs
 ext4).

The idea is to let users specify whatever criteria matter *to them*; media
type and rack/row are just examples to get them started.

 In other words, I think tiering can be fully expressed in the
 configuration file while still abstracting the underlying storage.

Yes, *tiering* can be expressed using a simpler syntax.  I was trying for
something that could also support placement policies other than strict
linear above vs. below with only the migration policies we've written
into code.

 That
 said, the configuration file could be built up by a CLI or GUI, and richer
 expressibility could exist at that level.
 
 example:
 
 brick host1:/brick ssd-group0-1
 
 brick host2:/brick ssd-group0-2
 
 brick host3:/brick disk-group0-1
 
 rule tier-1
   select ssd-group0*
 
 rule tier-2
   select disk-group0
 
 rule all
   select tier-1
   # use repeated select to establish order
   select tier-2
   type features/tiering
 
 The filtering option's regular expressions seem hard to avoid. If just the
 name of the file satisfies most use cases (that we know of?) I do not think
 there is any way to avoid regular expressions in the option for filters.
 (Down the road, if we were to allow complete flexibility in how files can be
 distributed across subvolumes, the filtering problems may start to look
 similar to 90s-era packet classification with a solution along the lines of
 the Berkeley packet filter.)
 
 There may be different rules by which data is distributed at the tiering
 level. For example, one tiering policy could be the fast tier (first
 listed). It would be a cache for the slow tier (second listed). I think
 the option keyword could handle that.
 
 rule all
   select tier-1
# use repeated select to establish order
   select tier-2
   type features/tiering
   option tier-cache, mode=writeback, dirty-watermark=80
 
 Another example tiering policy could be based on compliance ; when a file
 needs to become read-only, it moves from the first listed tier to the
 second.
 
 rule all
select tier-1
# use repeated select to establish order
select tier-2
type features/tiering
   option tier-retention

OK, good so far.  How would you handle the replica 2.5 sanlock case with
the simplified syntax?  Or security-aware placement equivalent to this?

   rule secure
  

Re: [Gluster-devel] Plea for reviews

2014-06-23 Thread Raghavendra Gowdappa
Jeff,

Comments inlined.

- Original Message -
 From: Jeff Darcy jda...@redhat.com
 To: Gluster Devel gluster-devel@gluster.org
 Sent: Monday, June 23, 2014 6:53:53 PM
 Subject: [Gluster-devel] Plea for reviews
 
 I have several patches queued up for 3.6, which have all passed
 regression tests.  Unfortunately, they're all in areas where our
 resources are pretty thin, so getting the required +1 reviews
 is proving to be a challenge.  The patches are as follows:
 
 * For heterogeneous bricks [1]
   http://review.gluster.org/8093

I'm caught up with release schedules at the moment, but I'll try to take this up on high priority.

 
 * For better ssl [2]
   http://review.gluster.org/3695
   http://review.gluster.org/8040
   http://review.gluster.org/8094
 
 * Not in feature list, but turning out to be important
   http://review.gluster.org/7702
 
 I know these are all in tricky areas.  I'd be glad to do
 walkthroughs to explain what each patch is doing in more
 detail.  Thanks in advance to anyone who can help!
   
 
 [1]
 http://www.gluster.org/community/documentation/index.php/Features/heterogeneous-bricks
 
 [2]
 http://www.gluster.org/community/documentation/index.php/Features/better-ssl

regards,
Raghavendra.


Re: [Gluster-devel] Build glusterfs from source code on sqeezy

2014-06-23 Thread Humble Devassy Chirammal
Also, the 3.4.2 release is available at http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.2/


On Tue, Jun 24, 2014 at 11:23 AM, Humble Devassy Chirammal 
humble.deva...@gmail.com wrote:

 Hi,

 You can fetch sources from
 http://bits.gluster.org/pub/gluster/glusterfs/src/

 --Humble


 On Tue, Jun 24, 2014 at 4:57 AM, Cary Tsai f4l...@gmail.com wrote:

  I need to compile and build glusterfs 3.4.2 from source in a
  Debian 6.0.6 32-bit environment.
  [Indeed, I only need the glusterfs client.]

  I could not find the source on the download site:
  1. Should I download it from github.com?
  2. I downloaded it from github, and INSTALL says to
  run ./configure, but there is no ./configure,
  so how do I build glusterfs?
  Am I looking in the wrong place?
  Thanks for your help.
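For what it's worth, a git checkout ships no pre-generated ./configure; autogen.sh produces it. A rough build sequence (the `build_glusterfs` function name is illustrative, and a release tarball from the download site skips the autogen step):

```shell
#!/bin/sh
# Illustrative build steps for a glusterfs git checkout; a release tarball
# already ships ./configure, so autogen.sh is only needed for git trees.
build_glusterfs() {
    cd glusterfs || return 1
    ./autogen.sh          # generates ./configure from configure.ac
    ./configure           # standard autoconf options apply
    make -j"$(nproc)" && sudo make install
}

# Usage:
#   git clone https://github.com/gluster/glusterfs.git
#   build_glusterfs
```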


