[Gluster-Maintainers] Gluster Maintainer's meeting: 7th Jan, 2019 - Meeting minutes

2019-01-08 Thread Amar Tumballi Suryanarayan
Meeting date: 2019-01-07 18:30 IST, 13:00 UTC, 08:00 EST
BJ Link

   - Bridge: https://bluejeans.com/217609845
   - Watch: https://bluejeans.com/s/sGFpa

Attendance
Agenda

   - Welcome 2019: New goals / Discuss:
      - https://hackmd.io/OiQId65pStuBa_BPPazcmA
      - Give it a week, then take it to the mailing list to discuss and agree
        upon.
      - [Nigel] Some of the above points are threads of their own; they may
        need separate threads.
   - Progress with GCS
      - Email about GCS sent to the community.
      - RWX:
         - Scale testing shows GD2 can scale to 1000s of PVs (each PV is a
           gluster volume).
         - Bricks with LVM.
         - Some delete issues seen, especially at LV-command scale. Patch
           sent.
         - Create rate: 500 PVs / 12 mins.
         - More details by end of the week, including delete numbers.
      - RWO:
         - The new CSI driver for gluster-block is showing good scale
           numbers, reaching higher than the current 1k RWO PVs per cluster,
           but a few things still need to be ironed out.
           (https://github.com/gluster/gluster-csi-driver/pull/105)
         - 280 pods on 3 hosts, 1-1 Pod->PV ratio: leaner graph.
         - 1080 PVs with a 1-12 ratio on 3 machines.
         - Working on 3000+ PVCs on just 3 hosts; will update in another 2
           days.
         - Poornima is coming up with steps and details about the PR/version
           used, etc.
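For context on how such PV scale runs are typically driven: creating hundreds of claims is scripted rather than done by hand. A minimal sketch, assuming a hypothetical StorageClass name "glusterfs-csi" (the actual class name depends on how the CSI driver from the PR above is deployed):

```shell
#!/bin/sh
# Generate N PersistentVolumeClaim manifests for a scale test.
# The StorageClass name "glusterfs-csi" is an assumption, not taken
# from the minutes; substitute whatever the driver deployment defines.

gen_pvcs() {
    n=$1
    i=1
    while [ "$i" -le "$n" ]; do
        cat <<EOF
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scale-pvc-$i
spec:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: glusterfs-csi
  resources:
    requests:
      storage: 1Gi
EOF
        i=$((i + 1))
    done
}

# Sanity check: three manifests means three "kind:" lines.
gen_pvcs 3 | grep -c '^kind:'   # prints 3
```

Piping the output to `kubectl apply -f -` (e.g. `gen_pvcs 500 | kubectl apply -f -`) submits all claims in one shot, which is roughly what a 500-PV create-rate measurement exercises.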
   - Static Analyzers:
      - glusterfs:
         - Coverity: 63 open
           (https://scan.coverity.com/projects/gluster-glusterfs?tab=overview)
         - clang-scan: 32 open
           (https://build.gluster.org/job/clang-scan/lastCompletedBuild/clangScanBuildBugs/)
      - gluster-block:
         - Coverity: 1 open (66 last week)
           (https://scan.coverity.com/projects/gluster-gluster-block?tab=overview)
   - GlusterFS-6:
      - Any priority reviews needed?
         - Fencing patches
         - Reducing threads (GH issue: 475)
         - glfs-api statx patches [merged]
      - Which critical areas need focus?
         - ASan build? Currently not green.
         - Some Java errors, machine offline. Need to look into this.
      - How do we make the automated Glusto tests a blocker for the release?
      - Upgrade tests; need to start early.
      - Schedule as called out in the mail.

        NOTE: Working backwards on the schedule, here's what we have:
         - Announcement: Week of Mar 4th, 2019
         - GA tagging: Mar-01-2019
         - RC1: On demand before GA
         - RC0: Feb-04-2019
         - Late features cut-off: Week of Jan-21st, 2019
         - Branching (feature cutoff date): Jan-14-2019 (~45 days prior to GA)
         - Feature/scope proposal for the release (end date): Dec-12-2018
   - Round Table?
      - [Sunny] Meetup in BLR this weekend. Please do come (at least those
        who are in BLR).
      - [Susant] Softserve has a 4-hour timeout, which is not enough for a
        full regression cycle. Can we get at least 2 more hours added, so a
        full regression can be run?



[Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #1078

2019-01-08 Thread jenkins
See 


--
[...truncated 1009.33 KB...]
this = 0x7f3d30006ce0
stub = 0x0
#2  0x7f3d72ad2dd5 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#3  0x7f3d7239aead in clone () from /lib64/libc.so.6
No symbol table info available.

Thread 5 (Thread 0x7f3d1a7fc700 (LWP 7843)):
#0  0x7f3d72ad6965 in pthread_cond_wait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0
No symbol table info available.
#1  0x7f3d73a42c5d in rpcsvc_request_handler (arg=0x7f3d5e18f0d0) at 
:2195
queue = 0x7f3d5e18f0d0
program = 0x7f3d5e18f060
req = 0x7f3d1a7fb3a8
tmp_req = 0x7f3d1a7fb3a8
actor = 0x7f3d5e886ba0 
done = false
ret = 0
tmp_list = {next = 0x7f3d1a7fbe80, prev = 0x7f3d1a7fbe80}
__FUNCTION__ = "rpcsvc_request_handler"
#2  0x7f3d72ad2dd5 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#3  0x7f3d7239aead in clone () from /lib64/libc.so.6
No symbol table info available.

Thread 4 (Thread 0x7f3d497fa700 (LWP 7794)):
#0  0x7f3d72ad6d12 in pthread_cond_timedwait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0
No symbol table info available.
#1  0x7f3d653476f3 in janitor_get_next_fd (ctx=0x248b010, janitor_sleep=10) 
at 
:1550
pfd = 0x0
timeout = {tv_sec = 1546981262, tv_nsec = 0}
#2  0x7f3d653477af in posix_ctx_janitor_thread_proc (data=0x7f3d60008da0) 
at 
:1581
this = 0x7f3d60008da0
pfd = 0x7f3d3c004240
ctx = 0x248b010
priv = 0x7f3d600899e0
sleep_duration = 10
__FUNCTION__ = "posix_ctx_janitor_thread_proc"
#3  0x7f3d72ad2dd5 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#4  0x7f3d7239aead in clone () from /lib64/libc.so.6
No symbol table info available.

Thread 3 (Thread 0x7f3d5c0a4700 (LWP 7797)):
#0  0x7f3d72ad6d12 in pthread_cond_timedwait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0
No symbol table info available.
#1  0x7f3d5f7a57dc in iot_worker (data=0x7f3d3003fdf0) at 
:197
conf = 0x7f3d3003fdf0
this = 0x7f3d300119f0
stub = 0x0
sleep_till = {tv_sec = 1546981374, tv_nsec = 714561800}
ret = 0
pri = -1
bye = false
__FUNCTION__ = "iot_worker"
#2  0x7f3d72ad2dd5 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#3  0x7f3d7239aead in clone () from /lib64/libc.so.6
No symbol table info available.

Thread 2 (Thread 0x7f3d485f7700 (LWP 7796)):
#0  0x7f3d72ad6965 in pthread_cond_wait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0
No symbol table info available.
#1  0x7f3d5ef4852e in index_worker (data=0x7f3d30018ed0) at 
:216
priv = 0x7f3d30032760
this = 0x7f3d30018ed0
stub = 0x0
bye = false
#2  0x7f3d72ad2dd5 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#3  0x7f3d7239aead in clone () from /lib64/libc.so.6
No symbol table info available.

Thread 1 (Thread 0x7f3d65da2700 (LWP 7783)):
#0  0x0040be0d in ?? ()
No symbol table info available.
#1  0x7f3d741cad98 in ?? ()
No symbol table info available.
#2  0x0003 in ?? ()
No symbol table info available.
#3  0x in ?? ()
No symbol table info available.
=
  Finish backtrace
 program name : /build/install/sbin/glusterfsd
 corefile : /glfs_epoll001-7775.core
=

+ rm -f /build/install/cores/gdbout.txt
+ for corefile in '$CORELIST'
+ getliblistfromcore /build/install/cores/glfs_brosign-12074.core
+ rm -f /build/install/cores/gdbout.txt
+ gdb -c /build/install/cores/glfs_brosign-12074.core -q -ex 'set pagination 
off' -ex 'info sharedlibrary' -ex q
+ set +x
+ rm -f /build/install/cores/gdbout.txt
+ for corefile in '$CORELIST'
+ getliblistfromcore /build/install/cores/glfs_brosign-14960.core
+ rm -f /build/install/cores/gdbout.txt
+ gdb -c /build/install/cores/glfs_brosign-14960.core -q -ex 'set pagination 
off' -ex 'info sharedlibrary' -ex q
+ set +x
+ rm -f /build/install/cores/gdbout.txt
+ for corefile in '$CORELIST'
+ getliblistfromcore /build/install/cores/glfs_brosign-15247.core
+ rm -f /build/install/cores/gdbout.txt
+ gd
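The shell trace above loops over each core file and dumps its shared-library list with gdb. A rough sketch of what `getliblistfromcore` appears to do, reconstructed only from the visible commands (the `corename`/`corepid` helpers are hypothetical, added to show how the program name and PID are encoded in the core's filename):

```shell
#!/bin/sh
# Sketch of the per-corefile step seen in the regression log.
# Assumption: cores are named <program>-<pid>.core, as in
# glfs_brosign-12074.core.

getliblistfromcore() {
    corefile=$1
    rm -f /build/install/cores/gdbout.txt
    # 'info sharedlibrary' lists the libraries mapped into the core;
    # pagination is disabled so gdb runs non-interactively, then quits.
    gdb -c "$corefile" -q \
        -ex 'set pagination off' \
        -ex 'info sharedlibrary' \
        -ex q > /build/install/cores/gdbout.txt 2>&1
}

# Hypothetical helpers: recover program name and PID from the filename.
corename() { basename "$1" .core | cut -d- -f1; }
corepid()  { basename "$1" .core | cut -d- -f2; }

corename /build/install/cores/glfs_brosign-12074.core   # prints glfs_brosign
corepid  /build/install/cores/glfs_brosign-12074.core   # prints 12074
```

The outer loop in the log then simply calls this once per entry in `$CORELIST`, removing the previous `gdbout.txt` between runs.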

[Gluster-Maintainers] Build failed in Jenkins: regression-test-with-multiplex #1077

2019-01-08 Thread jenkins
See 


Changes:

[Vijay Bellur] performance/io-threads: Improve debuggability in statedump

[Kotresh H R] features/bit-rot: do not send version and signature keys in dict

[Amar Tumballi] Revert "iobuf: Get rid of pre allocated iobuf_pool and use per 
thread

[Xavi Hernandez] features/shard: Assign fop id during background deletion to 
prevent

--
[...truncated 980.24 KB...]
./tests/bugs/glusterfs/bug-872923.t  -  9 second
./tests/bugs/gfapi/bug-1447266/1460514.t  -  9 second
./tests/bugs/ec/bug-1179050.t  -  9 second
./tests/bugs/core/bug-949242.t  -  9 second
./tests/bugs/bitrot/1207029-bitrot-daemon-should-start-on-valid-node.t  -  9 
second
./tests/basic/xlator-pass-through-sanity.t  -  9 second
./tests/basic/fop-sampling.t  -  9 second
./tests/basic/afr/ta-write-on-bad-brick.t  -  9 second
./tests/basic/afr/tarissue.t  -  9 second
./tests/basic/afr/arbiter-statfs.t  -  9 second
./tests/basic/afr/arbiter-remove-brick.t  -  9 second
./tests/gfid2path/get-gfid-to-path.t  -  8 second
./tests/bugs/upcall/bug-1458127.t  -  8 second
./tests/bugs/snapshot/bug-1064768.t  -  8 second
./tests/bugs/replicate/bug-1448804-check-quorum-type-values.t  -  8 second
./tests/bugs/replicate/bug-1250170-fsync.t  -  8 second
./tests/bugs/quota/bug-1250582-volume-reset-should-not-remove-quota-quota-deem-statfs.t
  -  8 second
./tests/bugs/quota/bug-1243798.t  -  8 second
./tests/bugs/nfs/bug-915280.t  -  8 second
./tests/bugs/io-stats/bug-1598548.t  -  8 second
./tests/bugs/io-cache/bug-read-hang.t  -  8 second
./tests/bugs/glusterfs/bug-861015-log.t  -  8 second
./tests/bugs/fuse/bug-985074.t  -  8 second
./tests/bugs/cli/bug-1087487.t  -  8 second
./tests/bugs/changelog/bug-1208470.t  -  8 second
./tests/bugs/bitrot/1209751-bitrot-scrub-tunable-reset.t  -  8 second
./tests/basic/volume-status.t  -  8 second
./tests/basic/glusterd/arbiter-volume-probe.t  -  8 second
./tests/basic/gfapi/upcall-cache-invalidate.t  -  8 second
./tests/basic/gfapi/mandatory-lock-optimal.t  -  8 second
./tests/basic/gfapi/bug-1241104.t  -  8 second
./tests/basic/ec/ec-read-policy.t  -  8 second
./tests/basic/ec/ec-anonymous-fd.t  -  8 second
./tests/basic/distribute/file-create.t  -  8 second
./tests/basic/afr/ta-shd.t  -  8 second
./tests/basic/afr/gfid-mismatch.t  -  8 second
./tests/basic/afr/afr-read-hash-mode.t  -  8 second
./tests/bugs/snapshot/bug-1260848.t  -  7 second
./tests/bugs/shard/bug-1258334.t  -  7 second
./tests/bugs/replicate/bug-1561129-enospc.t  -  7 second
./tests/bugs/replicate/bug-1365455.t  -  7 second
./tests/bugs/quota/bug-1287996.t  -  7 second
./tests/bugs/quota/bug-1104692.t  -  7 second
./tests/bugs/io-cache/bug-858242.t  -  7 second
./tests/bugs/glusterfs/bug-902610.t  -  7 second
./tests/bugs/glusterfs/bug-848251.t  -  7 second
./tests/bugs/glusterd/bug-948729/bug-948729-force.t  -  7 second
./tests/bugs/glusterd/bug-1482906-peer-file-blank-line.t  -  7 second
./tests/bugs/fuse/bug-963678.t  -  7 second
./tests/bugs/ec/bug-1227869.t  -  7 second
./tests/bugs/distribute/bug-912564.t  -  7 second
./tests/bugs/distribute/bug-882278.t  -  7 second
./tests/bugs/distribute/bug-1088231.t  -  7 second
./tests/bugs/core/bug-986429.t  -  7 second
./tests/bugs/core/bug-908146.t  -  7 second
./tests/bugs/bug-1371806_1.t  -  7 second
./tests/bugs/bug-1258069.t  -  7 second
./tests/bugs/bitrot/bug-1229134-bitd-not-support-vol-set.t  -  7 second
./tests/bugs/bitrot/1209818-vol-info-show-scrub-process-properly.t  -  7 second
./tests/bitrot/br-stub.t  -  7 second
./tests/basic/md-cache/bug-1317785.t  -  7 second
./tests/basic/gfapi/glfd-lkowner.t  -  7 second
./tests/basic/gfapi/gfapi-dup.t  -  7 second
./tests/basic/gfapi/anonymous_fd.t  -  7 second
./tests/basic/ec/ec-internal-xattrs.t  -  7 second
./tests/basic/distribute/throttle-rebal.t  -  7 second
./tests/basic/ctime/ctime-noatime.t  -  7 second
./tests/basic/afr/heal-info.t  -  7 second
./tests/basic/afr/gfid-heal.t  -  7 second
./tests/gfid2path/block-mount-access.t  -  6 second
./tests/features/readdir-ahead.t  -  6 second
./tests/features/flock_interrupt.t  -  6 second
./tests/features/delay-gen.t  -  6 second
./tests/bugs/upcall/bug-upcall-stat.t  -  6 second
./tests/bugs/upcall/bug-1369430.t  -  6 second
./tests/bugs/transport/bug-873367.t  -  6 second
./tests/bugs/snapshot/bug-1178079.t  -  6 second
./tests/bugs/shard/bug-1342298.t  -  6 second
./tests/bugs/shard/bug-1260637.t  -  6 second
./tests/bugs/shard/bug-1256580.t  -  6 second
./tests/bugs/replicate/bug-976800.t  -  6 second
./tests/bugs/replicate/bug-767585-gfid.t  -  6 second
./tests/bugs/replicate/bug-1101647.t  -  6 second
./tests/bugs/readdir-ahead/bug-1439640.t  -  6 second
./tests/bugs/posix/bug-765380.t  -  6 second
./tests/bugs/nfs/socket-as-fifo.t  -  6 second
./tests/bugs/nfs/bug-1143880-fix-gNFSd-auth-crash.t  -  6 second
./tests/bugs/nfs/bug-1116503.t  -  6 sec