[Gluster-devel] setfacl: testfile: Remote I/O error (zfsonlinux, gluster 3.6, CentOS 6.6)

2014-11-24 Thread Kiran Patil
Testcase bug-847622.t is failing with Remote I/O error. Steps to reproduce:

    [root@fractal-c92e glusterfs]# glusterd
    [root@fractal-c92e glusterfs]# gluster --mode=script --wignore volume create patchy fractal-c92e.fractal.lan:/d/backends/brick0
    volume create: patchy:

Re: [Gluster-devel] BitRot notes

2014-11-24 Thread Vijay Bellur
On 10/31/2014 04:09 PM, Venky Shankar wrote: Hey folks, Raghavendra (@rabhat) and I have been discussing BitRot[1] and came up with a list of high-level tasks (breakup items), captured here[2]. The pad will be updated on an ongoing basis, reflecting the current status/items that are

Re: [Gluster-devel] glfs_resolve new file force lookup

2014-11-24 Thread Rudra Siva
Thank you for the email. When expanded to multiple bricks, I did see that the inode table did not change - it pointed to the same inode table. I don't want/need to resolve whether the file exists; I just need the potential brick information for each file and the ability to dispatch as a single fop with entries on

Re: [Gluster-devel] glfs_resolve new file force lookup

2014-11-24 Thread Rudra Siva
If I understand the structure correctly, the lookup itself generally may not pose a problem - it is the specific flags triggering the over-the-wire request that I wish to suppress at this time (the fop on the backend is fully capable of creating the file if it does not exist) - the lookup drives through

Re: [Gluster-devel] Glusterd 'Management Volume' proposal

2014-11-24 Thread Jeff Darcy
We have been thinking of many approaches to address some of Glusterd's correctness (during failures and at scale) and scalability concerns. A recent email thread on Glusterd-2.0 was along these lines. While that discussion is still valid, we have been considering dogfooding as a viable option

Re: [Gluster-devel] glfs_resolve new file force lookup

2014-11-24 Thread Shyam
On 11/24/2014 07:43 AM, Rudra Siva wrote: If I understand the structure correctly, the lookup itself generally may not pose a problem - it is the specific flags triggering the over-the-wire request that I wish to suppress at this time (the fop on the backend is fully capable of creating the file if it

[Gluster-devel] Is the macro EC_MAX_NODES error?

2014-11-24 Thread Feng Wang
Hi all, The macro is defined in the ec.c file as the following statement:

    #define EC_MAX_NODES (EC_MAX_FRAGMENTS + ((EC_MAX_FRAGMENTS - 1) / 2))

But according to my understanding and the comments above the statement, it seems that (EC_MAX_FRAGMENTS - 1) should not be divided by 2. The number
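To make the two readings concrete, a small sketch contrasting them; the EC_MAX_FRAGMENTS value below is a placeholder chosen for illustration, not taken from the source, and the "proposed" macro is the reporter's suggested reading, not a committed patch:

    /* The two readings of EC_MAX_NODES, side by side. */
    #include <stdio.h>

    #define EC_MAX_FRAGMENTS 16  /* placeholder value, for illustration only */

    /* Definition questioned in this thread (from ec.c): */
    #define EC_MAX_NODES_CURRENT  (EC_MAX_FRAGMENTS + ((EC_MAX_FRAGMENTS - 1) / 2))

    /* Reading proposed in the thread: F fragments can carry up to F - 1
     * redundancy bricks (total bricks > 2 * redundancy), so: */
    #define EC_MAX_NODES_PROPOSED (EC_MAX_FRAGMENTS + (EC_MAX_FRAGMENTS - 1))

    int main(void)
    {
        printf("current:  %d\n", EC_MAX_NODES_CURRENT);   /* 23 */
        printf("proposed: %d\n", EC_MAX_NODES_PROPOSED);  /* 31 */
        return 0;
    }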

[Gluster-devel] Call for proposals: LSF/MM conference

2014-11-24 Thread Dave McAllister
The annual Linux Storage, Filesystem and Memory Management Summit for 2015 will be held on March 9th and 10th, before the Linux Foundation Vault conference, at the Revere Hotel, Boston MA. For those who do not know, Vault is designed to be an event where open source storage and filesystem

Re: [Gluster-devel] Is the macro EC_MAX_NODES error?

2014-11-24 Thread Feng Wang
Hi Xavi, Here it is. https://bugzilla.redhat.com/show_bug.cgi?id=1167419 Thank you. Best Regards, Feng Wang On Tuesday, 25 November 2014, 0:58, Xavier Hernandez xhernan...@datalab.es wrote: Yes, you are correct. Can you file a bug for this? Otherwise I'll do it. Xavi On

[Gluster-devel] Wrong behavior on fsync of md-cache ?

2014-11-24 Thread Xavier Hernandez
Hi, I have an issue in ec caused by what seems to be an incorrect behavior in md-cache, at least on NetBSD (on Linux this doesn't seem to happen). The problem happens when multiple writes are sent in parallel and one of them fails with an error. After the error, an fsync is issued, before all
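A minimal sketch of the I/O pattern described, written against the 3.6-era libgfapi signatures. The volume name, server, and path are placeholders, and nothing below forces one of the writes to fail; it only reproduces the shape of parallel writes followed by an fsync. Build with: gcc repro.c -lgfapi -lpthread

    /* Parallel writes followed by fsync, the sequence the report describes. */
    #include <glusterfs/api/glfs.h>
    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    static glfs_fd_t *fd;

    static void *do_write(void *arg)
    {
        char buf[4096];
        memset(buf, 'x', sizeof(buf));
        /* Several of these run concurrently; in the reported case one
         * of them fails with an error. */
        if (glfs_pwrite(fd, buf, sizeof(buf), (long)arg * 4096, 0) < 0)
            perror("glfs_pwrite");
        return NULL;
    }

    int main(void)
    {
        glfs_t *fs = glfs_new("patchy");   /* placeholder volume name */
        glfs_set_volfile_server(fs, "tcp", "localhost", 24007);
        if (glfs_init(fs) != 0)
            return 1;

        fd = glfs_creat(fs, "/fsync-test", O_RDWR, 0644);
        if (!fd)
            return 1;

        pthread_t t[4];
        for (long i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, do_write, (void *)i);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);

        /* The report concerns this call: an fsync issued right after a
         * failed write, while other writes may still be in flight. */
        if (glfs_fsync(fd) != 0)
            perror("glfs_fsync");

        glfs_close(fd);
        glfs_fini(fs);
        return 0;
    }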

Re: [Gluster-devel] setfacl: testfile: Remote I/O error (zfsonlinux, gluster 3.6, CentOS 6.6)

2014-11-24 Thread Vijay Bellur
On 11/24/2014 05:55 PM, Kiran Patil wrote:

    getrlimit(RLIMIT_NOFILE, {rlim_cur=1024, rlim_max=4*1024}) = 0
    lstat(testfile, {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
    getxattr(testfile, system.posix_acl_access, 0x7fff9ce10d00, 132) = -1 ENODATA (No data available)
    stat(testfile,
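ENODATA here is the normal reply when no ACL has been set on the file; the failure in this thread is the zfsonlinux backend rejecting POSIX ACL xattrs outright. A small standalone probe for the same call, handy for testing a brick filesystem directly (the default path is just a placeholder):

    /* Probe the getxattr call shown above directly against a path. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/xattr.h>

    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1] : "testfile";
        char buf[132];   /* same buffer size the strace shows */

        ssize_t ret = getxattr(path, "system.posix_acl_access",
                               buf, sizeof(buf));
        if (ret < 0)
            printf("%s: %s\n", path, strerror(errno));
        else
            printf("%s: ACL xattr present, %zd bytes\n", path, ret);
        return 0;
    }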

[Gluster-devel] Are you going to GlusterFS Future Features: Discussion on BitRot tomorrow?

2014-11-24 Thread Google+
Are you going to GlusterFS Future Features: Discussion on BitRot tomorrow? Tue, November 25, 5:00 AM PST. Jon Archer, Dave McAllister, Atin Mukherjee and 8 more are invited. View Invitation:

Re: [Gluster-devel] Glusterd 'Management Volume' proposal

2014-11-24 Thread James
On Mon, Nov 24, 2014 at 9:29 AM, Jeff Darcy jda...@redhat.com wrote: I think the ideal might be to embed a consensus protocol implementation (Paxos, Raft, or Viewstamped Replication) directly into glusterd, so it's guaranteed to start up and die exactly when those daemons do and be subject to
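For readers unfamiliar with the protocols named above, a toy sketch of the Raft vote-granting rule (term comparison plus one vote per term); this is illustrative only, not glusterd code, and it omits the log-recency checks a real implementation needs:

    /* Raft-style vote granting: term comparison plus one vote per term. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint64_t current_term;
        int      voted_for;   /* -1 = no vote cast in this term */
    } raft_state_t;

    static bool request_vote(raft_state_t *s, uint64_t cand_term, int cand_id)
    {
        if (cand_term < s->current_term)
            return false;                /* stale candidate */
        if (cand_term > s->current_term) {
            s->current_term = cand_term; /* step down into the newer term */
            s->voted_for = -1;
        }
        if (s->voted_for == -1 || s->voted_for == cand_id) {
            s->voted_for = cand_id;      /* at most one vote per term */
            return true;
        }
        return false;
    }

    int main(void)
    {
        raft_state_t s = { .current_term = 3, .voted_for = -1 };
        printf("node 1, term 4: %d\n", request_vote(&s, 4, 1)); /* granted */
        printf("node 2, term 4: %d\n", request_vote(&s, 4, 2)); /* denied  */
        return 0;
    }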

Re: [Gluster-devel] USS test cases failure with core

2014-11-24 Thread Justin Clift
On Mon, 24 Nov 2014 12:22:10 +0530 Avra Sengupta aseng...@redhat.com wrote: snip As we can see, we don't check whether value has got any memory allocated. Looks like mem_get0 ran out of pool space and returned null, in turn forcing get_new_data() to return null. Dereferencing value without checking
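For readers following along, a self-contained model of that failure mode; mem_get0 and get_new_data below are toy stand-ins named after the libglusterfs functions, not the real implementations:

    /* Fixed-size pool whose allocator returns NULL when exhausted. */
    #include <stdio.h>
    #include <string.h>

    #define POOL_SIZE 2

    typedef struct { char data[64]; } data_t;

    static data_t pool[POOL_SIZE];
    static int    used;

    static data_t *mem_get0(void)
    {
        if (used >= POOL_SIZE)
            return NULL;              /* pool exhausted, as in the report */
        data_t *d = &pool[used++];
        memset(d, 0, sizeof(*d));
        return d;
    }

    static data_t *get_new_data(void)
    {
        data_t *d = mem_get0();
        if (!d)
            return NULL;              /* propagate the failure upward */
        return d;
    }

    int main(void)
    {
        for (int i = 0; i < 3; i++) {
            data_t *value = get_new_data();
            if (!value) {             /* the check the mail says is missing */
                fprintf(stderr, "allocation %d failed: pool exhausted\n", i);
                continue;
            }
            snprintf(value->data, sizeof(value->data), "item-%d", i);
            printf("%s\n", value->data);
        }
        return 0;
    }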

Re: [Gluster-devel] Glusterd 'Management Volume' proposal

2014-11-24 Thread Krishnan Parthasarathi
The main issue I have with this, and why I didn't suggest it myself, is that it creates a bit of a chicken and egg problem. Any kind of server-side replication, such as NSR, depends on this subsystem to elect leaders and store its own metadata. How will these things be done if we create a

Re: [Gluster-devel] setfacl: testfile: Remote I/O error (zfsonlinux, gluster 3.6, CentOS 6.6)

2014-11-24 Thread Kiran Patil
d-backends-brick0.log file contents:

    ---
    [2014-11-25 05:11:33.080409] I [MSGID: 100030] [glusterfsd.c:2018:main] 0-/usr/local/sbin/glusterfsd: Started running /usr/local/sbin/glusterfsd version 3.6.1 (args: /usr/local/sbin/glusterfsd -s