[...truncated 6 lines...]
https://bugzilla.redhat.com/1518150 / build: GlusterFS not available for Fedora 
27 Modular Server
https://bugzilla.redhat.com/1524058 / core: gluster peer command stops working 
with unhelpful error messages when DNS doesn't work
https://bugzilla.redhat.com/1524048 / core: gluster volume set is very slow
https://bugzilla.redhat.com/1518582 / core: Reduce lock contention on fdtable 
https://bugzilla.redhat.com/1512437 / distribute: parallel-readdir = TRUE 
prevents directory listing
https://bugzilla.redhat.com/1517260 / distribute: Volume wrong size
https://bugzilla.redhat.com/1522710 / fuse: Directory listings on fuse mount 
are very slow due to small number of getdents() entries
https://bugzilla.redhat.com/1523219 / fuse: fuse xlator uses block size and 
fragment size 128KB leading to rounding off in df output
https://bugzilla.redhat.com/1514363 / fuse: Glusterfs client file access 
permission control incorrect
https://bugzilla.redhat.com/1523050 / glusterd: glusterd consuming high memory
https://bugzilla.redhat.com/1513692 / io-stats: io-stats appends now instead of 
overwriting which floods filesystem with logs
https://bugzilla.redhat.com/1521213 / libgfapi: crash when glfs_set_logging is 
called concurrently
https://bugzilla.redhat.com/1522591 / logging: Modify glusterd log print level 
to Warning, glusterfsd and glusterfs process log output level has not been 
https://bugzilla.redhat.com/1523295 / md-cache: md-cache should have an option 
to cache STATFS calls
https://bugzilla.redhat.com/1512691 / nfs: PostgreSQL DB Restore: unexpected 
data beyond EOF
https://bugzilla.redhat.com/1513258 / porting: NetBSD port
https://bugzilla.redhat.com/1521045 / posix: Race in file creation while a 
brick is offline
https://bugzilla.redhat.com/1516682 / project-infrastructure: builder11 and 
builder21 seem to be centos6 rather than centos7
https://bugzilla.redhat.com/1515446 / project-infrastructure: build.gluster.org 
times out and does not load!
https://bugzilla.redhat.com/1516198 / project-infrastructure: Create 
jenk...@build.gluster.org and send to /dev/null
https://bugzilla.redhat.com/1514365 / project-infrastructure: Generate report 
to identify first time contributors
https://bugzilla.redhat.com/1518208 / project-infrastructure: gluster-devel ML 
subscription is broken
https://bugzilla.redhat.com/1518093 / project-infrastructure: Jumphost for 
machines in the cage
https://bugzilla.redhat.com/1514369 / project-infrastructure: Make Gluster code 
history available with cregit
https://bugzilla.redhat.com/1521013 / project-infrastructure: rfc.sh should 
allow custom remote names for ORIGIN
https://bugzilla.redhat.com/1521034 / protocol: client: fix a race in the 
client reconnect code
https://bugzilla.redhat.com/1521038 / protocol: Core dumps in protocol/server 
from 3.8-fb ports
https://bugzilla.redhat.com/1520374 / protocol: Crash in regression test 
https://bugzilla.redhat.com/1523122 / protocol: fix several bugs found while 
testing protocol/client
https://bugzilla.redhat.com/1519598 / protocol: Reduce lock contention on 
protocol client manipulating fd
https://bugzilla.redhat.com/1521014 / quota: quota_unlink_cbk crashes when 
loc.inode is null
https://bugzilla.redhat.com/1522651 / rdma: rdma transport may access an 
obsolete item in gf_rdma_device_t->all_mr, and causes glusterfsd/glusterfs 
process crash.
https://bugzilla.redhat.com/1521041 / rpc: rpc: fix the timedout tests
https://bugzilla.redhat.com/1521030 / rpc: rpc: unregister programs before 
registering them again
https://bugzilla.redhat.com/1521119 / sharding: GlusterFS segmentation fault 
when deleting files from sharded tiered volume
https://bugzilla.redhat.com/1517961 / tests: Failure of some regression tests 
on CentOS 7 (passes on CentOS 6)
https://bugzilla.redhat.com/1519684 / tests: regressions completed within 4:30 
hr instead of 6:00 hr
https://bugzilla.redhat.com/1522808 / tiering: Gluster client crashes while 
using both tiering and sharding
[...truncated 2 lines...]


Gluster-devel mailing list