[Gluster-devel] [master] FAILED: freebsd smoke

2016-03-11 Thread Milind Changire

https://build.gluster.org/job/freebsd-smoke/12914/

=


Making install in nsr-server
--- install-recursive ---
Making install in src
--- nsr-cg.c ---
/usr/local/bin/python /usr/home/jenkins/root/workspace/freebsd-smoke/xlators/experimental/nsr-server/src/gen-fops.py /usr/home/jenkins/root/workspace/freebsd-smoke/xlators/experimental/nsr-server/src/all-templates.c /usr/home/jenkins/root/workspace/freebsd-smoke/xlators/experimental/nsr-server/src/nsr.c > nsr-cg.c
--- nsr-cg.lo ---
  CC   nsr-cg.lo
nsr-cg.c: In function 'nsr_get_changelog_dir':
nsr-cg.c:9666:24: error: 'ENODATA' undeclared (first use in this function)
 return ENODATA;
^
nsr-cg.c:9666:24: note: each undeclared identifier is reported only once for each function it appears in
nsr-cg.c: In function 'nsr_get_terms':
nsr-cg.c:9692:20: error: 'ENODATA' undeclared (first use in this function)
 op_errno = ENODATA; /* Most common error after this. */
^
nsr-cg.c: In function 'nsr_open_term':
nsr-cg.c:9872:28: error: 'ENODATA' undeclared (first use in this function)
 op_errno = ENODATA;
^
nsr-cg.c: In function 'nsr_next_entry':
nsr-cg.c:9929:28: error: 'ENODATA' undeclared (first use in this function)
 op_errno = ENODATA;
^
*** [nsr-cg.lo] Error code 1


=
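
For what it's worth, the failure is that the FreeBSD builder's
<errno.h> does not declare ENODATA. A guarded fallback along these
lines is one possible workaround (mapping ENODATA to FreeBSD's
ENOATTR is an assumption on my part, not a vetted fix):

    /* Possible compat guard; assumes ENOATTR is an acceptable
     * substitute for ENODATA on these code paths. */
    #include <errno.h>

    #ifndef ENODATA
    #define ENODATA ENOATTR
    #endif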

Please advise.

--
Milind

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] races in dict_foreach() causing crashes in tier-file-creat.t

2016-03-11 Thread Jeff Darcy
> Tier does send lookups serially, which fail on the hashed subvolumes of
> the DHTs. Both of them trigger lookup_everywhere, which is executed in
> epoll threads; thus they are executed in parallel.

According to your earlier description, items are being deleted by EC
(i.e. the cold tier) while AFR (i.e. the hot tier) is trying to access
the same dictionary.  That sounds pretty parallel across the two.  It
doesn't matter, though, because I think we agree that this solution is
too messy anyway.

> > (3) Enhance dict_t with a gf_lock_t that can be used to serialize
> > access.  We don't have to use the lock in every invocation of
> > dict_foreach (though we should probably investigate that).  For
> > now, we can just use it in the code paths we know are contending.
> 
> dict already has a lock.

Yes, we have a lock which is used in get/set/add/delete - but not in
dict_foreach for the reasons you mention.  I should have been clearer
that I was suggesting a *different* lock that's only used in this
case.  Manually locking with the lock we already have might not work
due to recursive locking, but the lock ordering with a separate
higher-level lock is pretty simple and it won't affect any other uses.
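
To illustrate, a minimal sketch of that idea (not an actual patch):
a second lock in dict_t, say a hypothetical "foreach_lock" member of
type gf_lock_t, taken only here and on the known contending writer
paths:

    #include "dict.h"
    #include "locking.h"

    /* Wrapper used only on the contending code paths; assumes a new
     * gf_lock_t foreach_lock member in dict_t (hypothetical). */
    int
    dict_foreach_serialized (dict_t *dict,
                             int (*fn)(dict_t *, char *, data_t *, void *),
                             void *data)
    {
        int ret;

        LOCK (&dict->foreach_lock);  /* ordered before the per-op lock */
        ret = dict_foreach (dict, fn, data);
        UNLOCK (&dict->foreach_lock);

        return ret;
    }

Since foreach_lock is always acquired before the existing per-op lock
and never the other way around, the ordering stays trivially safe.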

> Xavi was mentioning that dict_copy_with_ref is too costly, which is
> true, if we make this change it will be even more costly :-(.

There are probably MVCC-ish approaches that could be both safe and
performant, but they'd be quite complicated to implement.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Status update on SEEK_DATA/HOLE for GlusterFS 3.8

2016-03-11 Thread Ravishankar N

On 03/11/2016 05:07 PM, Niels de Vos wrote:
> Hi all,
>
> I thought I would give a short status update on the tasks related to the
> new SEEK procedure/FOP that has been added for GlusterFS 3.8. We had
> several goals, and (most of) the basics have been completed:

Great! Thank *you* Niels for doing a major chunk of the work.
-Ravi

>  - implement SEEK as network protocol FOP
>  - add support for SEEK in the server-side xlators (thanks Xavi for EC)
>  - add support for SEEK in the client-side xlators
>  - extend glfs_lseek() in libgfapi
>  - pass lseek() on through the Linux FUSE kernel module (thanks Ravi)
>  - handle lseek() in the fuse-bridge (thanks Ravi)
>  - add dissecting of SEEK in Wireshark
>
> Some of the outstanding topics include:
>
>  - SEEK for sharding, high on the wishlist (bug 1301647)
>  - SEEK for stripe, bmap, low on the wishlist
>  - QEMU usage of glfs_lseek()
>    patch under review:
>    http://lists.nongnu.org/archive/html/qemu-block/2016-03/msg00288.html
>  - NFSv4.2 SEEK procedure in NFS-Ganesha
>    untested patch available on request
>  - enhancement for Samba/vfs_gluster
>  - enhancement for (Linux) coreutils providing "cp" etc.
>    (currently uses FIEMAP ioctl(), add fallback to seek)
>
> A design and feature page that has more details about these tasks is
> still forthcoming, sorry about the delay.
>
> Niels


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel




[Gluster-devel] Status update on SEEK_DATA/HOLE for GlusterFS 3.8

2016-03-11 Thread Niels de Vos
Hi all,

I thought I would give a short status update on the tasks related to the
new SEEK procedure/FOP that has been added for GlusterFS 3.8. We had
several goals, and (most of) the basics have been completed:

 - implement SEEK as network protocol FOP
 - add support for SEEK in the server-side xlators (thanks Xavi for EC)
 - add support for SEEK in the client-side xlators
 - extend glfs_lseek() in libgfapi
 - pass lseek() on through the Linux FUSE kernel module (thanks Ravi)
 - handle lseek() in the fuse-bridge (thanks Ravi)
 - add dissecting of SEEK in Wireshark
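
As a quick illustration of the glfs_lseek() item above, a minimal
sketch of how a libgfapi client can probe data/hole regions with the
new support (volume name, server, and file path are made up; error
handling omitted):

    #define _GNU_SOURCE              /* SEEK_DATA / SEEK_HOLE */
    #include <unistd.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <glusterfs/api/glfs.h>

    int
    main (void)
    {
        glfs_t *fs = glfs_new ("testvol");   /* hypothetical volume */
        glfs_set_volfile_server (fs, "tcp", "localhost", 24007);
        glfs_init (fs);

        glfs_fd_t *fd = glfs_open (fs, "/sparse-file", O_RDONLY);

        /* First data region at/after offset 0, then the hole after it. */
        off_t data = glfs_lseek (fd, 0, SEEK_DATA);
        off_t hole = glfs_lseek (fd, data, SEEK_HOLE);
        printf ("data at %lld, next hole at %lld\n",
                (long long) data, (long long) hole);

        glfs_close (fd);
        glfs_fini (fs);
        return 0;
    }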

Some of the outstanding topics include:

 - SEEK for sharding, high on the wishlist (bug 1301647)
 - SEEK for stripe, bmap, low on the wishlist
 - QEMU usage of glfs_lseek()
   patch under review: 
http://lists.nongnu.org/archive/html/qemu-block/2016-03/msg00288.html
 - NFSv4.2 SEEK procedure in NFS-Ganesha
   untested patch available on request
 - enhancement for Samba/vfs_gluster
 - enhancement for (Linux) coreutils providing "cp" etc.
   (currently uses FIEMAP ioctl(), add fallback to seek)
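
On the coreutils item, the usual lseek()-based fallback walks the data
regions like this (a sketch, not the actual coreutils patch;
copy_range() is a hypothetical pread/pwrite helper and error handling
is omitted):

    #define _GNU_SOURCE              /* SEEK_DATA / SEEK_HOLE */
    #include <unistd.h>

    extern void copy_range (int in_fd, int out_fd, off_t off, off_t len);

    void
    copy_sparse (int in_fd, int out_fd)
    {
        off_t data = 0, hole;

        /* SEEK_DATA returns -1/ENXIO once no data remains. */
        while ((data = lseek (in_fd, data, SEEK_DATA)) != (off_t) -1) {
            hole = lseek (in_fd, data, SEEK_HOLE); /* EOF counts as a hole */
            copy_range (in_fd, out_fd, data, hole - data);
            data = hole;
        }
    }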

A design and feature page that has more details about these tasks is
still forthcoming, sorry about the delay.

Niels


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] races in dict_foreach() causing crashes in tier-file-creat.t

2016-03-11 Thread Pranith Kumar Karampuri

hi,
      I think this is the RCA for the issue:

Basically, with distributed EC as the cold tier and distributed
replicate as the hot tier, tier sends a lookup which fails on EC (by
this time the dict already contains EC xattrs). After this, the
lookup_everywhere code path is hit in tier, which triggers a lookup on
each of distribute's hashed subvolumes; those fail too, which leads to
the cold and hot DHTs running lookup_everywhere in two parallel epoll
threads. When EC's thread then tries to set
trusted.ec.version/dirty/size in the dictionary, the older values
against the same keys get erased. While this erasing is going on, if
the thread doing the lookup on AFR's subvolume accesses these members,
either in dict_copy_with_ref or in the client xlator trying to
serialize, that can lead to either a crash or a hang, depending on
when the spin/mutex lock is called on invalid memory.


For now I have sent http://review.gluster.org/13680 (I am pressed for
time because I need to provide a build for our customer with a fix),
which avoids the parallel accesses to elements that step on each other.


Raghavendra G and I discussed this problem, and the right way to fix it
is to have dict_foreach take a copy of the dictionary (without using
dict_foreach itself) inside a lock, and then loop over the local copy,
as sketched below. I am worried about the performance implications of
this, so I am wondering if anyone has a better idea.
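
To make the idea concrete, here is a rough sketch of that approach
(not the actual patch; the function name is hypothetical, the
members_list walk assumes the current dict internals, and I assume
dict_set takes its own reference on the value as dict.c does today):

    #include "dict.h"
    #include "locking.h"

    int
    dict_foreach_snapshot (dict_t *dict,
                           int (*fn)(dict_t *, char *, data_t *, void *),
                           void *data)
    {
        dict_t *copy = dict_new ();
        data_pair_t *pair = NULL;
        int ret = -1;

        if (!copy)
            return -1;

        /* Snapshot the members under the dict lock, without calling
         * dict_foreach(). */
        LOCK (&dict->lock);
        for (pair = dict->members_list; pair; pair = pair->next)
            dict_set (copy, pair->key, pair->value);
        UNLOCK (&dict->lock);

        /* Iterate the stable, thread-private copy lock-free. */
        ret = dict_foreach (copy, fn, data);

        dict_unref (copy);
        return ret;
    }

The cost is an allocation plus a ref per member on every iteration,
which is exactly the performance worry above.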


I have also included Xavi, who earlier said we need to change dict.c
but that it is a bigger change. Maybe the time has come? I would love
to gather all your inputs and implement a better version of dict if we
need one.


Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel