Re: [Gluster-devel] Glusterfs as a root file system on the same node

2015-10-14 Thread satish kondapalli
Does anyone have any thoughts on this?

Sateesh

On Tue, Oct 13, 2015 at 5:44 PM, satish kondapalli wrote:

> Hi,
>
> I want to mount a gluster volume as the root file system for my node.
>
> The node will boot from the network (only kernel and initrd images), but
> the root file system has to be a gluster volume (the bricks of the volume
> are on disks attached to the same node). The gluster configuration files
> are also part of the root file system.
>
> Here I am facing a chicken-and-egg problem. Initially I thought of keeping
> the glusterfs libraries and binaries in the initrd and starting the gluster
> server as part of initrd execution. But the gluster configuration files
> needed to mount the root file system (which is a gluster volume) are
> themselves stored in the root file system. My assumption is that without
> the gluster configuration files (/var/lib/glusterd/xx) gluster will not
> find any volumes.
>
> Can someone help me on this?
>
> Sateesh
>
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Lock migration as a part of rebalance

2015-10-14 Thread Raghavendra G
The original design didn't address some areas (like blocked locks and the
atomicity of getlkinfo and setlkinfo). Hence we came up with a newer design
(with slight changes to the original design posted earlier in this thread):

* Active/Granted lock migration involves only the rebalance process. Clients
have no active role to play.

  As of now, the only state that changes from src to dst during migration
is the connection id. Also note that clients most likely already have a
connection established to the destination. So, if the rebalance process can
reconstruct the connection id (without involving the client), it can just
associate locks on dst with the correct connection. Based on this we decided
to change the way we construct connection identifiers. From now on,
connection ids will have two parts: a connection-specific part and a
process-specific part. The connection-specific part will be constant across
different client processes (mount process, rebalance process, etc.); the
identifiers differ only in the process-specific part. For example, if a
mount process (say mnt1) has two clients/transports (say c1 and c2) speaking
to two different bricks (say b1 and b2), then the connection identifiers are:

  on b1, between b1 and c1 from mnt1, the id will be (c1-id, mnt1-id)
  on b2, between b2 and c2 from mnt1, the id will be (c2-id, mnt1-id)

 The connection identifiers from the rebalance process (rebal-process) to the
same bricks would be:

  on b1, between b1 and c1 from rebal-process, the id will be (c1-id, rebal-id)
  on b2, between b2 and c2 from rebal-process, the id will be (c2-id, rebal-id)

  Note that the connection-specific part of the ids is the same for the
rebalance process and the mount process.

  So, if the rebalance process is migrating a file/lock from b1 to b2, all it
has to do is change the connection-specific part of the id from b1's to b2's.
So, if the connection identifier of the lock on b1 is (c1-id, mnt1-id),
rebalance can reconstruct the id for b2 as (c2-id, mnt1-id) (note that it can
derive the connection-specific id to b2 from its own connection to b2).

  If the client has not established a connection to the dst brick at the time
of migration, to keep things simple the rebalance process fails the migration
of that file.
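
  To make the id reconstruction concrete, here is a minimal sketch in Python.
It assumes connection ids can be modelled as (connection-part, process-part)
pairs; the names and the representation are illustrative only, not the actual
GlusterFS RPC structures.

```
# Sketch only: models the two-part connection ids described above.
# The (connection_part, process_part) representation and all names are
# assumptions for illustration, not GlusterFS's actual RPC code.
from collections import namedtuple

ConnId = namedtuple("ConnId", ["connection_part", "process_part"])

def reconstruct_conn_id(lock_id_on_src, rebal_id_to_dst):
    """Rebuild the id a client's lock should carry on the destination brick.

    Keep the process-specific part from the lock's original id (it identifies
    the mount process that owns the lock) and take the connection-specific
    part from the rebalance process's own connection to the destination.
    """
    return ConnId(connection_part=rebal_id_to_dst.connection_part,
                  process_part=lock_id_on_src.process_part)

# Example: a lock held on b1 by mount process mnt1 over client c1;
# rebalance talks to b2 over client c2.
lock_on_b1 = ConnId("c1-id", "mnt1-id")
rebal_to_b2 = ConnId("c2-id", "rebal-id")
print(reconstruct_conn_id(lock_on_b1, rebal_to_b2))
# -> ConnId(connection_part='c2-id', process_part='mnt1-id')
```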

* Blocked locks:

  The previous design didn't address blocked locks. The thing with blocked
locks is that they have additional state in the form of the call-stack for
the reply path, which will be unwound when the lock is granted. So, to
migrate blocked locks to dst, the rebalance process asks the brick to unwind
all the blocked locks with a special error (carrying the information about
the destination to which the file is migrated). Dht in the client/mnt process
will interpret this special error and will wind a lock request to dst.
Active/granted locks are migrated before blocked locks, so the new lock
requests (corresponding to blocked locks) will block on dst too. However,
note that the blocked lock state is lost on src at this point, so post this
point migration to dst _has_ to complete. We have two options to recover
from this situation:

a. If there is a failure in migration, the rebalance process has to migrate
the blocked lock state from dst back to src in a similar way, or ask dst to
fail the lock requests. But the rebalance process itself might crash before
it gets an opportunity to do so, so this is not a fool-proof solution.

b. Attempt blocked lock migration _after_ file migration is complete (when
the file on the destination is marked as a data file by the rebalance
process). Since these locks are blocked locks, unlike active locks it is not
an issue if some other client acquires a conflicting lock after migration but
before blocked lock migration. If there are errors, the lock requests are
failed and an error response is sent to the application.

I personally feel option b is simpler and better.
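
  A rough, self-contained sketch of option b's sequencing follows; all names
are made up for illustration and none of this is the actual dht/posix-locks
code.

```
# Toy model of option (b): blocked locks are moved only after data migration
# has completed, so any error simply fails the lock back to the application.
# Everything here is illustrative; it is not the real dht implementation.

def migrate_blocked_locks(blocked_locks, data_migration_done, wind_to_dst):
    """Return (locks re-issued on dst, locks failed back to the application)."""
    if not data_migration_done:
        # Nothing is lost: the locks are still blocked on src.
        return [], []
    moved, failed = [], []
    for lk in blocked_locks:
        # src unwinds the blocked lock with a "file migrated" error carrying
        # the destination; the client re-winds the lock request to dst.
        try:
            wind_to_dst(lk)
            moved.append(lk)
        except IOError:
            failed.append(lk)   # error response goes to the application
    return moved, failed

def accept(lock):
    pass                        # pretend dst accepted (and blocked) the lock

print(migrate_blocked_locks(["lk1", "lk2"], True, accept))   # (['lk1', 'lk2'], [])
```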

* Atomicity of getlkinfo and setlkinfo done by rebalance process during
Active/granted lock migration:

  While the rebalance process is migrating active locks, the lock state can
change on src between the getlkinfo on src and the setlkinfo on dst. To
preserve atomicity, we propose that the rebalance process hold a mandatory
write-lock (a "meta-lock" guarding the lock-state, as opposed to file data)
on src for the period during which it is doing active/granted lock migration.
Any lock requests from the mnt/client process between the getlkinfo and
setlkinfo of the rebalance process will be unwound by the brick with EAGAIN
errors carrying the relevant information. To make these errors simple for
clients to handle, posix/locks on the src brick will queue these lock
requests during the time-window specified above, until an unlock is issued by
the rebalance process on the "meta" mandatory lock it acquired before lock
migration. Once the unlock is issued on the "meta-lock", the src brick
replays all the locks in the queue before a response to the unlock is sent to
the rebalance process. What happens next depends on the types of lock
requests in the queue:

a. If the lock gets granted, a successful response is unwound to the client
along with the information that the file is under migration (similar to
phase1 of data migration in dht); the client then acquires the lock on the
destination
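
  A small self-contained toy model of the queue-and-replay behaviour described
above (the class and method names are assumptions purely for illustration; the
real logic lives in the posix/locks translator):

```
# Toy model: while rebalance holds the "meta-lock" on a file, the src brick
# queues incoming client lock requests and replays them when the meta-lock
# is released, before answering the unlock. Illustrative assumptions only.

class SrcBrick:
    def __init__(self):
        self.meta_locked = False
        self.queued = []     # lock requests held back during lock migration
        self.granted = []

    def meta_lock(self):     # rebalance: start of active-lock migration
        self.meta_locked = True

    def lock(self, client, lk):
        if self.meta_locked:
            self.queued.append((client, lk))
            return "queued"
        self.granted.append(lk)
        return "granted"

    def meta_unlock(self):   # rebalance: migration done, release meta-lock
        self.meta_locked = False
        # Replay queued locks before the unlock response goes to rebalance.
        replayed = [(c, lk, self.lock(c, lk)) for c, lk in self.queued]
        self.queued = []
        return replayed

src = SrcBrick()
src.meta_lock()
print(src.lock("mnt1", "L1"))   # queued (arrived during the migration window)
print(src.meta_unlock())        # [('mnt1', 'L1', 'granted')]
```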

[Gluster-devel] gluster readdocs unaccessible

2015-10-14 Thread Avra Sengupta

Hi,

I am unable to access gluster.readdocs.org. Is anyone else facing the same
issue?


Regards,
Avra
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] gluster readdocs unaccessible

2015-10-14 Thread Sankarshan Mukhopadhyay
On Wed, Oct 14, 2015 at 2:03 PM, Avra Sengupta  wrote:
> I am unable to access gluster.readdocs.org. Is anyone else facing the same
> issue?

 gluster.readthedocs.org ?


-- 
sankarshan mukhopadhyay

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] [Status]: RichACL support in GlusterFS

2015-10-14 Thread Rajesh Joseph
Hi all,

It's been a while since I sent any update on the RichACL work being done for
GlusterFS. Following is the current state of the work.

Current State:
+ Initial patch uploaded at https://github.com/rajeshjoseph/glusterfs
+ Able to use the setrichacl and getrichacl tools on a gluster mount to set
and retrieve the RichACL for a file.
+ Currently using ext4 with RichACL support as the backend.
+ Using the test cases provided by Andreas
(https://github.com/andreas-gruenbacher/richacl) to validate the implementation.

Some of the TODOs:
+ Provide support for chmod command
+ Move RichACL enforcement from Ext4 to Gluster.
+ Integrate this with NFS and SMB

Best Regards,
Rajesh
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] glusterfs-3.7.5 released

2015-10-14 Thread Pranith Kumar Karampuri

Hi all,

I'm pleased to announce the release of GlusterFS-3.7.5. This release
includes 70 changes after 3.7.4. The list of fixed bugs is included
below.

Tarball and RPMs can be downloaded from
http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.5/

Ubuntu debs are available from
https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.7

Debian Unstable (sid) packages have been updated and should be
available from default repos.

NetBSD has updated ports at
ftp://ftp.netbsd.org/pub/pkgsrc/current/pkgsrc/filesystems/glusterfs/README.html


Upgrade notes from 3.7.2 and earlier

GlusterFS uses insecure ports by default from release v3.7.3. This
causes problems when upgrading from release 3.7.2 and below to 3.7.3
and above. Performing the following steps before upgrading helps avoid
problems.

- Enable insecure ports for all volumes (see the scripted sketch after these
steps).

 ```
 gluster volume set  server.allow-insecure on
 gluster volume set  client.bind-insecure on
 ```

- Enable insecure ports for GlusterD. Set the following line in
`/etc/glusterfs/glusterd.vol`

 ```
 option rpc-auth-allow-insecure on
 ```

 This needs to be done on all the members in the cluster.
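
 A hedged sketch of a helper for the per-volume options above, assuming the
gluster CLI is on PATH and the script is run on one member of a healthy
cluster (verify on a test setup first):

 ```
 # Sketch only: apply the two volume-level options above to every volume.
 # Assumes the gluster CLI is available; run on one node of the cluster.
 import subprocess

 def set_option(volume, key, value):
     subprocess.check_call(["gluster", "volume", "set", volume, key, value])

 volumes = subprocess.check_output(["gluster", "volume", "list"]).decode().split()
 for vol in volumes:
     set_option(vol, "server.allow-insecure", "on")
     set_option(vol, "client.bind-insecure", "on")

 # The GlusterD option ("option rpc-auth-allow-insecure on") still has to be
 # added to /etc/glusterfs/glusterd.vol by hand on every member of the cluster.
 ```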


Fixed bugs
==
1258313 - Start self-heal and display correct heal info after replace brick
1268804 - Test tests/bugs/shard/bug-1245547.t failing consistently when run 
with patch http://review.gluster.org/#/c/11938/
1261234 - Possible memory leak during rebalance with large quantity of files
1259697 - Disperse volume: Huge memory leak of glusterfsd process
1267817 - No quota API to get real hard-limit value.
1267822 - Have a way to disable readdirp on dht from glusterd volume set command
1267823 - Perf: Getting bad performance while doing ls
1267532 - Data Tiering:CLI crashes with segmentation fault when user tries "gluster 
v tier" command
1267149 - Perf: Getting bad performance while doing ls
1266822 - Add more logs in failure code paths + port existing messages to the 
msg-id framework
1262335 - Fix invalid logic in tier.t
1251821 - /usr/lib/glusterfs/ganesha/ganesha_ha.sh is distro specific
1258338 - Data Tiering: Tiering related information is not displayed in gluster 
volume info xml output
1266872 - FOP handling during file migration is broken in the release-3.7 
branch.
1266882 - RFE: posix: xattrop 'GF_XATTROP_ADD_DEF_ARRAY' implementation
1246397 - POSIX ACLs as used by a FUSE mount can not use more than 32 groups
1265633 - AFR : "gluster volume heal  dest=:1.65 reply_serial=2"
1265890 - rm command fails with "Transport end point not connected" during add 
brick
1261444 - cli : volume start will create/overwrite ganesha export file
1258347 - Data Tiering: Tiering related information is not displayed in gluster 
volume status xml output
1258340 - Data Tiering:Volume task status showing as remove brick when detach 
tier is trigger
1260919 - Quota+Rebalance : While rebalance is in progress , quota list shows 
'Used Space' more than the Hard Limit set
1264738 - 'gluster v tier/attach-tier/detach-tier help' command shows the 
usage, and then throws 'Tier command failed' error message
1262700 - DHT + rebalance :- file permission got changed (sticky bit and setgid 
is set) after file migration failure
1263191 - Error not propagated correctly if selfheal layout lock fails
1258244 - Data Tieirng:Change error message as detach-tier error message throws as 
"remove-brick"
1263746 - Data Tiering:Setting only promote frequency and no demote frequency 
causes crash
1262408 - Data Tieirng:Detach tier status shows number of failures even when 
all files are migrated successfully
1262547 - `getfattr -n replica.split-brain-status ' command hung on the 
mount
1262344 - quota: numbers of warning messages in nfs.log a single file itself
1260858 - glusterd: volume status backward compatibility
1261742 - Tier: glusterd crash when trying to detach , when hot tier is having 
exactly one brick and cold tier is of replica type
1262197 - DHT: Few files are missing after remove-brick operation
1261008 - Do not expose internal sharding xattrs to the application.
1262341 - Database locking due to write contention between CTR sql connection 
and tier migrator sql connection
1261715 - [HC] Fuse mount crashes, when client-quorum is not met
1260511 - fuse client crashed during i/o
1261664 - Tiering status command is very cumbersome.
1259694 - Data Tiering:Regression:Commit of detach tier passes without directly 
without even issuing a detach tier start
1260859 - snapshot: from nfs-ganesha mount no content seen in 
.snaps/ directory
1260856 - xml output for volume status on tiered volume
1260593 - man or info page of gluster needs to be updated with self-heal 
commands.
1257394 - Provide more meaningful errors on peer probe and peer detach
1258769 - Porting log messages to new framework
1255110 - client is sending io to arbiter with replica 2
1259652 - quota test 'quota-nfs.t' 

Re: [Gluster-devel] gluster readdocs unaccessible

2015-10-14 Thread Avra Sengupta

On 10/14/2015 02:05 PM, Sankarshan Mukhopadhyay wrote:

On Wed, Oct 14, 2015 at 2:03 PM, Avra Sengupta  wrote:

I am unable to access gluster.readdocs.org. Is anyone else facing the same
issue?

 gluster.readthedocs.org ?



Thanks. Looks like I messed up the url :p
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] NSR design document

2015-10-14 Thread Manoj Pillai


- Original Message -
> > "The reads will also be sent to, and processed by the current
> > leader."
> > 
> > So, at any given time, only one brick in the replica group is
> > handling read requests? For a read-only workload-phase,
> > all except one will be idle in any given term?
> 
> By default and in theory, yes.  The question is: does it matter in practice?
> If you only have one replica set, and if you haven't turned on the option
> to allow reads from non-leaders (which is not the default because it does
> imply a small reduction in consistency across failure scenarios), and if the
> client(s) bandwidth isn't already saturated, then yeah, it might be slower
> than AFR.  Then again, even that might be outweighed by gains in cache
> efficiency and avoidance of any need for locking.  In the absolute worst
> case, we can split bricks and create multiple replica sets across the same
> bricks, each with their own leader.  That would parallelize reads as much as
> AFR, while still gaining all of the other NSR advantages.
> 
> In other words, yes, in theory it could be a problem.  In practice?  No.

Or maybe: in theory, it shouldn't be a problem in practice :).

We _could_ split bricks to distribute the load more or less evenly. So what
would naturally be a replica-3/JBOD configuration (i.e. each disk is a brick,
multiple bricks per server) could be changed to carve 3 bricks out of each
disk to distribute load (otherwise 2/3 of the disks would be idle in said
read-only workload phase, IIUC). Such carving could have its downsides,
though. E.g. 3x the number of bricks could be a problem if the workload has
operations that don't scale well with brick count. Plus the brick
configuration guidelines would not exactly be elegant.
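
To make the carving idea concrete, here is a hedged sketch (hostnames, paths
and counts are made up) of how the brick list for such a volume might be
generated; consecutive groups of 3 bricks form the replica sets, so each disk
ends up hosting 3 replica sets, each with its own leader under NSR.

```
# Illustrative only: build a "gluster volume create" command that carves
# three sub-directory bricks out of each disk. Hostnames/paths are made up.
servers = ["server1", "server2", "server3"]
disks = ["/bricks/disk1", "/bricks/disk2"]           # one brick per disk today
subbricks_per_disk = 3                               # carve 3 bricks per disk

bricks = []
for sub in range(1, subbricks_per_disk + 1):
    for disk in disks:
        for server in servers:                       # 3 consecutive bricks ->
            bricks.append("%s:%s/sub%d" % (server, disk, sub))  # one replica set

cmd = ["gluster", "volume", "create", "demo-vol", "replica", "3"] + bricks
print(" ".join(cmd))
```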

FWIW, if I look at the performance and perf regressions tests 
that are run at my place of work (as these tests stand today), I'd 
expect AFR to significantly outperform this design on reads.

-- Manoj


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] NSR design document

2015-10-14 Thread Jeff Darcy
October 14 2015 3:11 PM, "Manoj Pillai"  wrote:
> E.g. 3x number of bricks could be a problem if workload has
> operations that don't scale well with brick count.

Fortunately we have DHT2 to address that.

> Plus the brick
> configuration guidelines would not exactly be elegant.

And we have Heketi to address that.

> FWIW, if I look at the performance and perf regressions tests
> that are run at my place of work (as these tests stand today), I'd
> expect AFR to significantly outperform this design on reads.

Reads tend to be absorbed by caches above us, *especially* in read-only
workloads.  See Rosenblum and Ousterhout's 1992 log-structured file
system paper, and about a bazillion others ever since.  We need to be
concerned at least as much about write performance, and NSR's write
performance will *far* exceed AFR's because AFR uses neither networks
nor disks efficiently.  It splits client bandwidth between N replicas,
and it sprays writes all over the disk (data blocks plus inode plus
index).  Most other storage systems designed in the last ten years can
turn that into nice sequential journal writes, which can even be on a
separate SSD or NVMe device (something AFR can't leverage at all).
Before work on NSR ever started, I had already compared AFR to other
file systems using these same methods and data flows (e.g. Ceph and
MooseFS) many times.  Consistently, I'd see that the difference was
quite a bit more than theoretical.  Despite all of the optimization work
we've done on it, AFR's write behavior is still a huge millstone around
our necks.

OK, let's bring some of these thoughts together.  If you've read
Hennessy and Patterson, you've probably seen this formula before.

value (of an optimization) =
benefit_when_applicable * probability -
penalty_when_inapplicable * (1 - probability)
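
To make that concrete, here is a tiny worked instance (the numbers are made
up purely for illustration):

```
# Hypothetical numbers only, to make the formula concrete.
def value(benefit, penalty, probability):
    return benefit * probability - penalty * (1 - probability)

# e.g. writes 40% faster on 80% of workloads, reads 10% slower on the rest:
print(value(benefit=0.40, penalty=0.10, probability=0.80))   # 0.30 net gain
```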

If NSR's write performance is significantly better than AFR's, and write
performance is either dominant or at least highly relevant for most real
workloads, what does that mean for performance overall?  As prototyping
showed long ago, it means a significant improvement.  Is it *possible*
to construct a read-dominant workload that shows something different?
Of course it is.  It's even possible that write performance will degrade
in certain (increasingly rare) physical configurations.  No design is
best for every configuration and workload.  Some people tried to focus
on the outliers when NSR was first proposed.  Our competitors will be
glad to do the same, for the same reason - to keep their own pet designs
from looking too bad.  The important question is whether performance
improves for *most* real-world configurations and workloads.  NSR is
quite deliberately somewhat write-optimized, because it's where we were
the furthest behind and because it's the harder problem to solve.
Optimizing for read-only workloads leaves users with any other kind of
workload in a permanent hole.

Also, even for read-heavy workloads where we might see a deficit, we
have not one but two workarounds.  One (brick splitting) we've just
discussed, and it is quite deliberately being paired with other
technologies in 4.0 to make it more effective.  The other (read from
non-leaders) is also perfectly viable.  It's not the default because it
reduces consistency to AFR levels, which I don't think serves our users
very well.  However, if somebody's determined to make AFR comparisons,
then it's only fair to compare at the same consistency level.  Giving
users the ability to decide on such tradeoffs, instead of forcing one
choice on everyone, has been part of NSR's design since day one.

I'm not saying your concern is invalid, but NSR's leader-based approach
is *essential* to improving write performance - and thus performance
overall - for most use cases.  It's also essential to improving
functional behavior, especially with respect to split brain, and I
consider that even more important.  Sure, reads don't benefit as much.
They might even get worse, though that remains to be seen and is only
likely to be true in certain scenarios.  As long as we know how to work
around that, is there any need to dwell on it further?
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] REMINDER: Weekly gluster community meeting to start in 30 minutes

2015-10-14 Thread Raghavendra Bhat
Hi All,

In 30 minutes from now we will have the regular weekly Gluster
Community meeting.

Meeting details:
- location: #gluster-meeting on Freenode IRC
- date: every Wednesday
- time: 12:00 UTC, 14:00 CEST, 17:30 IST
(in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-community-meetings

Currently the following items are listed:
* Roll Call
* Status of last week's action items
* Gluster 3.7
* Gluster 3.8
* Gluster 3.6
* Gluster 3.5
* Gluster 4.0
* Open Floor
- bring your own topic!

The last topic has space for additions. If you have a suitable topic to
discuss, please add it to the agenda.


Regards,
Raghavendra Bhat
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] NSR design document

2015-10-14 Thread Manoj Pillai

- Original Message -
> From: "Avra Sengupta" 
> To: "Gluster Devel" 
> Sent: Wednesday, October 14, 2015 2:10:33 PM
> Subject: [Gluster-devel] NSR design document
> 
> Hi,
> 
> Please find attached the NSR design document. It captures the
> architecture of NSR, and is in continuation from the discussions that
> happened during the GLuster Next community hangout. I would like to
> request you to kindly go through the document, and share any queries or
> concerns regarding the same.

>From "2. Introduction":
"The reads will also be sent to, and processed by the current
leader."

So, at any given time, only one brick in the replica group is 
handling read requests? For a read-only workload-phase, 
all except one will be idle in any given term?

-- Manoj


> Currently gluster.readdocs.org is down, and I will upload this document
> there once it is back online.
> 
> Regards,
> Avra
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Request for regression test suite details

2015-10-14 Thread Amogha V
Hi,
We are building an application that uses open-source DFS products like Ceph
and GlusterFS.
While going through the web materials for DFS I learnt that regression
testing is run on GlusterFS. I wanted to understand the tests run by the QA
team before GlusterFS is released to the outside world; more specifically I
need the following:
* configuration details of the test setup at GlusterFS
* test plan document
* individual test cases run before GlusterFS is released to the outside world

This would help me avoid duplicate testing and add new scenarios to test our
application.


Thanks,
Amogha
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Gluster 4.0 - upgrades & backward compatibility strategy

2015-10-14 Thread Roman
Hi,

It's hard to comment on plans and things like these, but I expect everyone
will be happy to have the possibility to upgrade from 3 to 4 without a fresh
installation; an offline upgrade (shut down volumes and upgrade) would also
be OK. And I'm fairly sure this upgrade process should be flawless enough
that no one under any circumstances would need any kind of rollback, so there
should not be any IFs :)

2015-10-07 8:32 GMT+03:00 Atin Mukherjee :

> Hi All,
>
> Over the course of the design discussion, we got a chance to discuss
> about the upgrades and backward compatibility strategy for Gluster 4.0
> and here is what we came up with:
>
> 1. 4.0 cluster would be separate from 3.x clusters. Heterogeneous
> support won't be available.
>
> 2. All CLI interfaces exposed in 3.x would continue to work with 4.x.
>
> 3. ReSTful APIs for all old & new management actions.
>
> 4. Upgrade path from 3.x to 4.x would be necessary. We need not support
> rolling upgrades, however all data layouts from 3.x would need to be
> honored. Our upgrade path from 3.x to 4.x should not be cumbersome.
>
>
> Initiative wise upgrades strategy details:
>
> GlusterD 2.0
> 
>
> - No rolling upgrade, service disruption is expected
> - Smooth upgrade from 3.x to 4.x (migration script)
> - Rollback - If upgrade fails, revert back to 3.x, old configuration
> data shouldn't be wiped off.
>
>
> DHT 2.0
> ---
> - No in place upgrade to DHT2
> - Needs migration of data
> - Hence, backward compatibility does not exist
>
> NSR
> ---
> - volume migration from AFR to NSR is possible with an offline upgrade
>
> We would like to hear from the community about your opinion on this
> strategy.
>
> Thanks,
> Atin
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>



-- 
Best regards,
Roman.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] NSR design document

2015-10-14 Thread Jeff Darcy
> "The reads will also be sent to, and processed by the current
> leader."
> 
> So, at any given time, only one brick in the replica group is
> handling read requests? For a read-only workload-phase,
> all except one will be idle in any given term?

By default and in theory, yes.  The question is: does it matter in practice?  
If you only have one replica set, and if you haven't turned on the option to 
allow reads from non-leaders (which is not the default because it does imply a 
small reduction in consistency across failure scenarios), and if the client(s) 
bandwidth isn't already saturated, then yeah, it might be slower than AFR.  
Then again, even that might be outweighed by gains in cache efficiency and 
avoidance of any need for locking.  In the absolute worst case, we can split 
bricks and create multiple replica sets across the same bricks, each with their 
own leader.  That would parallelize reads as much as AFR, while still gaining 
all of the other NSR advantages.

In other words, yes, in theory it could be a problem.  In practice?  No.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Too many open bugs filed against mainline and pre-release versions

2015-10-14 Thread Kaleb S. KEITHLEY

At the present time there are 1044 open bugs filed against 'mainline'
and 111 bugs filed against 'pre-release' versions of GlusterFS.

Some of the really old ones date back to 2009! Some have actually been
fixed but were not ever CLOSED. Some are RFEs (Requests for
Enhancements) that we might want to hang on to.

We don't do pre-releases or release candidates any more. The pre-release
version in bugzilla should be removed.

In any event 1000+ old bugs are just too many to have open.

I propose to bulk close the majority of them or, if appropriate, reassign
them to a numbered version. After dealing with the pre-release bugs I propose
to remove 'pre-release' as a version in bugzilla.

In the future, when we make a 3.X (or 4.X) version we will reassign
mainline bugs to the 3.X.0 version.
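
For what it's worth, here is a rough sketch of how a bulk update could be
scripted against the Bugzilla REST API. The endpoint and field names are my
understanding of the standard Bugzilla REST interface and should be treated
as assumptions to verify against bugzilla.redhat.com's documentation; it also
needs a real API key and a dry run before touching anything.

```
# Hedged sketch only: close (or reassign) a list of bug ids via the Bugzilla
# REST API. Endpoint and field names are assumptions about the standard
# Bugzilla REST interface; verify against bugzilla.redhat.com before use.
import requests

BASE = "https://bugzilla.redhat.com/rest/bug"
API_KEY = "replace-with-a-real-api-key"   # placeholder

def close_bug(bug_id, comment):
    resp = requests.put(
        "%s/%d" % (BASE, bug_id),
        params={"api_key": API_KEY},
        json={
            "status": "CLOSED",
            "resolution": "DEFERRED",   # to reassign instead, send e.g. {"version": "3.7.5"}
            "comment": {"body": comment},
        },
    )
    resp.raise_for_status()

# Example (commented out on purpose):
# close_bug(765124, "Bulk-closing stale pre-release bugs; reopen if still valid.")
```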

Below you can find a list of the bugs. If there are any that you think
are special, please go ahead and update them in bugzilla. If you have
any thoughts, comments, or concerns about reassigning or bulk closing,
please reply to this email (Reply-To: is set to gluster-devel@) and
share them with the community. Rest assured I will not take any action
until the community has a chance to weigh in with any opinions.

Open bugs filed against version 'pre-release':

(bugzilla query
--from-url='https://bugzilla.redhat.com/buglist.cgi?bug_status=NEW_status=ASSIGNED_status=POST_status=MODIFIED=Community_id=3937132=GlusterFS_format=advanced=pre-release')

 #765124 ASSIGNED   - sw...@redhat.com - running 'pi' two times throws
exception on 'dist-striped-rep' gluster volume
 #765324 ASSIGNED   - nsath...@redhat.com -
[cb2c6982bd6d588a91fa2827f95f0d9cf3ff2560]: quota limit-usage on child
directories should allowed to exceed than /
 #765333 ASSIGNED   - sw...@redhat.com - pi estimator job failed with
2*3 striped-replicated set-up
 #765347 ASSIGNED   - sw...@redhat.com - terasort failed with
quick.slave.io on
 #765360 ASSIGNED   - sw...@redhat.com - 'grep' job gives wrong results
with quick-slave-io ON
 #765372 ASSIGNED   - sw...@redhat.com - "randomwriter" job failed with
'transport endpoint not connected" error in quick-slave-io ON
 #765396 ASSIGNED   - rgowd...@redhat.com - glusterd crashed when trying
to mount a tcp,rdma volume via rdma transport
 #765438 ASSIGNED   - sw...@redhat.com - 'grep' job fails for N-1
failover tests
 #765465 ASSIGNED   - vbel...@redhat.com - [glusterfs-3.2.4]: smallfiles
rewrite performance on xfs very slow
 #765490 ASSIGNED   - rgowd...@redhat.com - [glusterfs-3.2.5qa2] -
iozone fails in volume with tcp,rdma transport type
 #765580 ASSIGNED   - sauj...@redhat.com - object-storage: volume mount
does not happen
 #767898 ASSIGNED   - thi...@redhat.com - object-storage:
X-Account-Container-Count not updated properly
 #768816 ASSIGNED   - sauj...@redhat.com - object-strorage: cyberduck
not working
 #771585 ASSIGNED   - nsath...@redhat.com - [glusterfs-3.3.0qa19]:
replace brick with some tests running increases quota size to more than
the limit
 #782777 ASSIGNED   - b...@gluster.org - kernel untar failed with
transport endpoint not connected when build with lefence
 #786007 NEW- b...@gluster.org -
[c3aa99d907591f72b6302287b9b8899514fb52f1]: server crashed when dict_set
for truster.afr.vol-client when compiled with efence
 #786068 ASSIGNED   - kpart...@redhat.com - replace-brick on a volume
with rdma transport failed
 #797160 ASSIGNED   - sw...@redhat.com - mountpoint needs to be
unmounted when stop-mapred is issued.
 #798546 NEW- b...@gluster.org - [glusterfs-3.3.0qa24]: create
set command for md-cache & remove stat-prefetch set
 #798883 ASSIGNED   - sw...@redhat.com - All map-reduce jobs fails with
"error in opening zip file" error
 #800396 ASSIGNED   - pkara...@redhat.com -
[228d01916c57d5a5716e1097e39e7aa06f31f3e4]: nfs client reports IO error
with transport endpoint not connected
 #802243 NEW- b...@gluster.org - NFS: mount point unresponsive
when disks get full
 #810844 ASSIGNED   - sw...@redhat.com - terasort job failed with lot of
"java.io.IOException: Spill failed" exceptions
 #813141 ASSIGNED   - b...@gluster.org - Socket server event handler:
Too many open files when data copied from windows
 #823181 ASSIGNED   - vbel...@redhat.com - Mount point shows "Invalid
arguments" upon rebalance followed by rm -rf
 #823664 ASSIGNED   - nsath...@redhat.com - Arequal check sum mismatch
after remove brick start operation from distributed-stripe volume
 #825559 MODIFIED   - di...@redhat.com - [glusterfs-3.3.0q43]: Cannot
heal split-brain
 #826100 ASSIGNED   - b...@gluster.org - [glusterfs-3.3.0-qa43]:
glusterfs server asserted while unrefing (destroying) the connection object
 #829172 ASSIGNED   - b...@gluster.org - reopen_fd_count is becoming -ve
because of stale fdctx
 #834172 ASSIGNED   - pkara...@redhat.com - Conservative merge for
meta-data split-brain
 #865867 MODIFIED   - pport...@redhat.com - gluster-swift upgrade to
3.3.1 wipes out proxy-server config
 #912239 POST   - 

[Gluster-devel] FOSDEM and DevConf.cz CFP

2015-10-14 Thread Amye Scavarda
Heyo!

FOSDEM closes its call for papers soon - like this Friday, October 16th
soon - so if you're interested in giving a talk in the main tracks at
FOSDEM, move quickly.
https://fosdem.org/2016/news/2015-09-24-call-for-participation/

DevConf.CZ is in Brno, Czech Republic the weekend directly after FOSDEM and
that one has a little bit more time.
http://devconf.cz/

If you do propose a talk, please add it to the Gluster events etherpad:
https://public.pad.fsfe.org/p/gluster-events

I've proposed a Storage DevRoom at FOSDEM, if it's accepted, you'll see
more from me on that.

Thanks!
 - amye

-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Glusterfs as a root file system on the same node

2015-10-14 Thread Atin Mukherjee
I don't think this is possible. I'd like to know why you want to use a
Gluster volume as the root file system; what's your use case? Technically
this is impossible (wrt GlusterD) as we then have no way to segregate the
configuration data.

~Atin

On 10/15/2015 12:09 AM, satish kondapalli wrote:
> Does anyone have any thoughts on this?
> 
> Sateesh
> 
> On Tue, Oct 13, 2015 at 5:44 PM, satish kondapalli wrote:
> 
> Hi,
> 
> I want to mount a gluster volume as the root file system for my node.
> 
> The node will boot from the network (only kernel and initrd images), but
> the root file system has to be a gluster volume (the bricks of the volume
> are on disks attached to the same node). The gluster configuration files
> are also part of the root file system.
> 
> Here I am facing a chicken-and-egg problem. Initially I thought of keeping
> the glusterfs libraries and binaries in the initrd and starting the gluster
> server as part of initrd execution. But the gluster configuration files
> needed to mount the root file system (which is a gluster volume) are
> themselves stored in the root file system. My assumption is that without
> the gluster configuration files (/var/lib/glusterd/xx) gluster will not
> find any volumes.
> 
> Can someone help me on this?
> 
> Sateesh
> 
> 
> 
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Journal translator spec

2015-10-14 Thread Atin Mukherjee
Hi Avra/Jeff,

Could you push the design document along with the journal spec to
gluster.readthedocs.org as PRs?

~Atin

On 10/14/2015 09:55 PM, Jeff Darcy wrote:
> I've attached the spec for the full data-journaling translator needed by NSR, 
> and possibly usable by other components as well.  It's formatted as a 
> presentation, but there's a significant amount of text in the notes, so if 
> you're viewing the ODP make sure to use the "notes" view (for the PDF this is 
> automatic).  Feedback is welcome.
> 
> 
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Gluster 4.0 - upgrades & backward compatibility strategy

2015-10-14 Thread Atin Mukherjee


On 10/14/2015 05:50 PM, Roman wrote:
> Hi,
> 
> It's hard to comment on plans and things like these, but I expect everyone
> will be happy to have the possibility to upgrade from 3 to 4 without a fresh
> installation; an offline upgrade (shut down volumes and upgrade) would also
> be OK. And I'm fairly sure this upgrade process should be flawless enough
> that no one under any circumstances would need any kind of rollback, so
> there should not be any IFs :)
Just to clarify, there will be and has to be an upgrade path; that's what I
mentioned in point 4 of my mail. The only limitation here is that there is no
rolling upgrade support.
> 
> 2015-10-07 8:32 GMT+03:00 Atin Mukherjee:
> 
> Hi All,
> 
> Over the course of the design discussion, we got a chance to discuss
> about the upgrades and backward compatibility strategy for Gluster 4.0
> and here is what we came up with:
> 
> 1. 4.0 cluster would be separate from 3.x clusters. Heterogeneous
> support won't be available.
> 
> 2. All CLI interfaces exposed in 3.x would continue to work with 4.x.
> 
> 3. ReSTful APIs for all old & new management actions.
> 
> 4. Upgrade path from 3.x to 4.x would be necessary. We need not support
> rolling upgrades, however all data layouts from 3.x would need to be
> honored. Our upgrade path from 3.x to 4.x should not be cumbersome.
> 
> 
> Initiative wise upgrades strategy details:
> 
> GlusterD 2.0
> 
> 
> - No rolling upgrade, service disruption is expected
> - Smooth upgrade from 3.x to 4.x (migration script)
> - Rollback - If upgrade fails, revert back to 3.x, old configuration
> data shouldn't be wiped off.
> 
> 
> DHT 2.0
> ---
> - No in place upgrade to DHT2
> - Needs migration of data
> - Hence, backward compatibility does not exist
> 
> NSR
> ---
> - volume migration from AFR to NSR is possible with an offline upgrade
> 
> We would like to hear from the community about your opinion on this
> strategy.
> 
> Thanks,
> Atin
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org 
> http://www.gluster.org/mailman/listinfo/gluster-users
> 
> 
> 
> 
> -- 
> Best regards,
> Roman.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Review request

2015-10-14 Thread Atin Mukherjee
I've couple of patches which need review attention

- http://review.gluster.org/#/c/12171/
- http://review.gluster.org/#/c/12329/

Raghavendra T - http://review.gluster.org/#/c/12328/ has got a +2 from
Jeff, could you merge it?

Thanks,
Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel