On 17/02/20 6:26 pm, sankarshan wrote:
Looking at https://fstat.gluster.org/summary, I see quite a few tests
still failing - are these failures being looked into?

./tests/basic/fencing/afr-lock-heal-basic.t
./tests/basic/fencing/afr-lock-heal-advanced.t

I will take a look at these two.

-Ravi

./tests/bugs/glusterd/quorum-validation.t

[Multiple bits in-line]

On Mon, 17 Feb 2020 at 14:38, Deepshikha Khandelwal <[email protected]> wrote:

04:08:01 Skipping script  : #!/bin/bash
04:08:01
04:08:01 ARCHIVE_BASE="/archives"
04:08:01 ARCHIVED_LOGS="logs"
04:08:01 UNIQUE_ID="${JOB_NAME}-${BUILD_ID}"
04:08:01 SERVER=$(hostname)
04:08:01
04:08:01 filename="${ARCHIVED_LOGS}/glusterfs-logs-${UNIQUE_ID}.tgz"
04:08:01 sudo -E tar -czf "${ARCHIVE_BASE}/${filename}" /var/log/glusterfs /var/log/messages*;
04:08:01 echo "Logs archived in http://${SERVER}/${filename}";
04:08:01 sudo reboot


FYI, here we are skipping the post-build script! The `echo "Logs archived in 
http://${SERVER}/${filename}"` line is part of that script.


You can find the logs in the Build Artifacts of 
https://build.gluster.org/job/regression-test-with-multiplex/1611/



Sunil, were you able to access the logs? Please acknowledge on this list.

On Thu, Jan 16, 2020 at 2:24 PM Sunil Kumar Heggodu Gopala Acharya 
<[email protected]> wrote:
Hi,

Please have the log location fixed so that others can take a look at the 
failures.

https://build.gluster.org/job/regression-test-with-multiplex/1611/consoleFull

21:10:48 echo "Logs archived in http://${SERVER}/${filename}";



On Thu, Jan 16, 2020 at 2:18 PM Sanju Rakonde <[email protected]> wrote:
The below glusterd test cases are consistently failing in the brick-mux regression:
./tests/bugs/glusterd/bug-857330/normal.t
./tests/bugs/glusterd/bug-857330/xml.t
./tests/bugs/glusterd/quorum-validation.t

./tests/bugs/glusterd/bug-857330/normal.t and ./tests/bugs/glusterd/bug-857330/xml.t are 
timing out after 200 seconds. I don't find any abnormality in the logs; do we need to 
increase the timeout? I'm unable to run these tests in my local setup, as they always 
fail with "ModuleNotFoundError: No module named 'xattr'". Is the same 
happening in CI as well?
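
For anyone hitting the same thing locally, a minimal sketch of the fix - the 
package names here are my assumption (the 'xattr' Python module is provided by 
python3-pyxattr on Fedora/CentOS, or by the 'xattr' package on pip):

    # install the missing Python binding for extended attributes
    sudo dnf install python3-pyxattr   # or: sudo pip3 install xattr

    # then re-run one of the failing tests on its own
    sudo prove -vf ./tests/bugs/glusterd/bug-857330/normal.t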

I am not a big fan of arbitrarily increasing the time. Do we know from
the logs why it might need more than 200s (that's 3+ minutes -
quite a bit of time)?

Also, we don't print the output of "prove -vf <test>" when the test 
times out. It would be great if we printed that output; it would help us debug and see which 
step took the most time.
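
Something like the sketch below is what I have in mind for the CI wrapper - 
this assumes GNU coreutils timeout is available, and the test path is only an 
example:

    t=./tests/bugs/glusterd/bug-857330/normal.t
    log="/tmp/$(basename "$t").out"

    # keep the same 200s budget, but tee the verbose prove output to a
    # file so it survives the kill when the test is timed out
    timeout 200 prove -vf "$t" 2>&1 | tee "$log"
    if [ "${PIPESTATUS[0]}" -eq 124 ]; then
        echo "TIMED OUT: last completed steps are in $log"
    fi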

Was there a patch merged to enable this?

./tests/bugs/glusterd/quorum-validation.t is failing because of a regression caused by 
https://review.gluster.org/#/c/glusterfs/+/21651/. Rafi is looking into this issue. To 
explain the issue in brief: after a reboot, glusterd spawns multiple brick 
processes for a single brick instance, and volume status shows the brick as offline.
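
If anyone wants to see the symptom for themselves, a rough way to check it - 
the volume name "patchy" and the brick path are the usual .t-test defaults and 
are only illustrative here:

    # more than one glusterfsd per brick path indicates the duplicate spawn
    pgrep -af glusterfsd | grep '/d/backends/patchy'

    # glusterd's own view: the brick shows "N" under Online even though
    # a brick process (or two) is running
    gluster volume status patchy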

Was this issue not evident during the review of the patch?




_______________________________________________
maintainers mailing list
[email protected]
https://lists.gluster.org/mailman/listinfo/maintainers
