On Wed, Sep 9, 2015 at 8:21 PM, Michael Scherer <[email protected]> wrote:
> Hi,
>
> just found out that slave21 and 23 were offline due to their disks being
> full. The issue is 14G of logs, caused by something creating a tarball
> of /var/log/glusterfs and placing the tarball in /var/log/glusterfs/.
>
> Ndevos says the bug is fixed, but I would rather investigate in more
> detail. Does anyone have some information or a pointer?
>
> (slave26 and slave46 are just without ssh, however, so I am going to
> reboot them)

I am the culprit. This patch, http://review.gluster.org/#/c/12109/, is the reason, and it has been taken care of. The test ran on Sept 5th. If the slave had a little space left and executed the next build, the logs should have been cleared and everything should be fine.

Sorry for the trouble; I will be more careful next time I am messing with test-infra scripts. A better fix is posted at http://review.gluster.org/#/c/12110/ and is waiting for reviews!

--
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
> _______________________________________________
> Gluster-infra mailing list
> [email protected]
> http://www.gluster.org/mailman/listinfo/gluster-infra
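For the record, a minimal sketch of the failure mode described above: a script that tars up a log directory but writes the (per-run) tarball into that same directory, so every later run re-archives the earlier tarballs and disk usage snowballs. The paths here use /tmp rather than the real /var/log/glusterfs, and the script shape is an assumption, not the actual patch:

```shell
#!/bin/sh
# Hypothetical reproduction, not the real infra script.
set -e
logs=/tmp/demo-logs
rm -rf "$logs" && mkdir -p "$logs"
echo "some log data" > "$logs/glusterd.log"

# Problematic pattern: the archive lands inside the directory being
# archived, so run N's tarball contains the tarballs from runs 1..N-1.
for i in 1 2 3; do
  tar -czf "$logs/archive-$i.tar.gz" -C /tmp demo-logs 2>/dev/null || true
done

# Safer pattern: write the archive outside the tree being archived
# (or pass an explicit --exclude for the archive path).
tar -czf /tmp/demo-archive.tar.gz -C /tmp demo-logs
```

Listing archive-3.tar.gz with `tar -tzf` shows archive-1 and archive-2 nested inside it, which is how a 14G pile can accumulate from small logs.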
