Oops, important note: CentOS Regression jobs may have ended up canceled. Please
retry them.
On Fri, Aug 24, 2018 at 9:31 PM Nigel Babu wrote:
> Hello,
>
> We've had to do an unplanned Jenkins restart. Jenkins was overloaded and
> not responding to any requests. There was a backlog of over 100 jobs as
Hello,
We've had to do an unplanned Jenkins restart. Jenkins was overloaded and
not responding to any requests. There was a backlog of over 100 jobs as
well. The restart seems to have fixed things up.
More details in bug: https://bugzilla.redhat.com/show_bug.cgi?id=1622173
--
nigelb
On 2018-08-23 at 13:54 -0700, Joe Julian wrote:
> Personally, I'd like to see the glusterd service replaced by a k8s native
> controller (named "kluster").
If you are exclusively interested in gluster for kubernetes
storage, this might seem the right approach. But I think
this is much too
GlusterFS Coverity covscan results for the master branch are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2018-08-24-3cb5b63a/
Coverity covscan results for other active branches are also available at
On 20 August 2018 at 23:06, Shyam Ranganathan wrote:
> Although tests have stabilized quite a bit, and from the maintainers
> meeting we know that some tests have patches coming in, here is a
> readout of other tests that needed a retry. We need to reduce failures
> on retries as well, to be
On Mon, Aug 20, 2018 at 5:24 AM Abhay Singh wrote:
> Hi Vijay,
>
> As per your previous reply, I tried running the test cases with an
> endianness check via the command lscpu | grep "Big endian". Thankfully,
> the namespace.t test case passed successfully.
This is good to hear. Would you
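For reference, the endianness gate described in the exchange above can be sketched as a small shell guard. This is only a sketch, not the actual test-suite code: it assumes `lscpu` (from util-linux) is available and prints a "Byte Order" field such as "Big Endian", and it falls back to running the test when `lscpu` is missing.

```shell
#!/bin/sh
# Sketch of an endianness guard for a test like namespace.t.
# lscpu (util-linux) reports a "Byte Order" field, e.g. "Big Endian";
# if lscpu is unavailable, default to running the test.
if command -v lscpu >/dev/null 2>&1 && lscpu | grep -q 'Big Endian'; then
    echo "SKIP: big-endian host"
else
    echo "RUN"
fi
```

On a typical little-endian x86_64 builder this prints RUN; a real test harness would replace the echo in the SKIP branch with its own skip mechanism.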