Update:
The Cinder GlusterFS CI job (Ubuntu based) was added as experimental
(non-voting) to the cinder project [1].
It's running successfully without any issues so far [2], [3].
We will monitor it for a few days and, if it continues to run fine, we will
propose a patch to make it check (voting).
[1]: http
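For anyone unfamiliar with the mechanics, promoting a job from the experimental pipeline to check in a Zuul layout of that era was a small layout.yaml change along these lines (the job name below is illustrative, not the actual patch):

```yaml
# Sketch of a zuul layout.yaml change (job name is hypothetical):
# the job stays non-voting via the job attribute while listed in check.
jobs:
  - name: gate-tempest-dsvm-full-glusterfs
    voting: false

projects:
  - name: openstack/cinder
    check:
      - gate-tempest-dsvm-full-glusterfs   # previously under experimental
```

Making it voting later would then just be flipping `voting: false` (or dropping the attribute, since voting is the default).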
On Fri, Feb 27, 2015 at 4:02 PM, Deepak Shetty wrote:
On Wed, Feb 25, 2015 at 11:48 PM, Deepak Shetty wrote:
On Thu, Feb 26, 2015, at 03:03 AM, Deepak Shetty wrote:
On Wed, Feb 25, 2015 at 6:11 AM, Jeremy Stanley wrote:
On Wed, Feb 25, 2015 at 8:42 PM, Deepak Shetty wrote:
On Wed, Feb 25, 2015 at 6:34 PM, Jeremy Stanley wrote:
On 2015-02-25 17:02:34 +0530 (+0530), Deepak Shetty wrote:
[...]
> Run 2) We removed glusterfs backend, so Cinder was configured with
> the default storage backend i.e. LVM. We re-created the OOM here
> too
>
> So that proves that glusterfs doesn't cause it, as it's happening
> without glusterfs to
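For context, switching Cinder between the GlusterFS backend and the default LVM backend at the time was a cinder.conf change along these lines (section names and paths shown are typical devstack values, not taken from the job itself):

```ini
# Run 1: GlusterFS backend
[DEFAULT]
enabled_backends = glusterfs

[glusterfs]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs_shares

# Run 2: default LVM backend (what devstack configures out of the box)
# [DEFAULT]
# enabled_backends = lvm
#
# [lvm]
# volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
# volume_group = stack-volumes
```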
On 2015-02-25 01:02:07 +0530 (+0530), Bharat Kumar wrote:
[...]
> After running 971 test cases VM inaccessible for 569 ticks
[...]
Glad you're able to reproduce it. For the record that is running
their 8GB performance flavor with a CentOS 7 PVHVM base image. The
steps to recreate are http://paste.
Ran the job manually on a rax VM provided by Jeremy (thank you, Jeremy).
After running 971 test cases, the VM was inaccessible for 569 ticks, then
continued... (see the console.log [1])
Also have a look at the dstat log [2].
The summary is:
==
Totals
==
Ran: 1125 tests in 5835. sec.
- P
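When hunting the OOM, the kernel log (dmesg/syslog) records which process the OOM killer chose and how much memory it held. A small sketch of pulling the victim out of such a log; the sample lines below are made up in the style of real oom-killer output, not taken from this job's logs:

```python
import re

# Kernel-log excerpt in the style of oom-killer output
# (illustrative sample, not from the actual job).
LOG = """\
kernel: java invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
kernel: Out of memory: Kill process 2471 (mysqld) score 521 or sacrifice child
kernel: Killed process 2471 (mysqld) total-vm:11329996kB, anon-rss:4915200kB, file-rss:0kB
"""

def oom_victims(log_text):
    """Return (pid, name, anon_rss_kb) for each process the OOM killer killed."""
    pat = re.compile(r"Killed process (\d+) \((\S+)\).*anon-rss:(\d+)kB")
    return [(int(pid), name, int(rss)) for pid, name, rss in pat.findall(log_text)]

for pid, name, rss_kb in oom_victims(LOG):
    print(f"OOM killed {name} (pid {pid}), anon RSS ~{rss_kb // 1024} MiB")
```

Running the same extraction over the job's syslog would confirm the victim and its resident size at kill time.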
On Fri, Feb 20, 2015 at 10:49:29AM -0800, Joe Gordon wrote:
FWIW, we tried to run our job in a rax provider VM (provided by ianw from
his personal account) and we ran the tempest tests twice, but the OOM did
not re-create. Of the 2 runs, one used the same PYTHONHASHSEED as we had
in one of the failed runs; still no OOM.
Jeremy graciously agreed
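For anyone unfamiliar, PYTHONHASHSEED pins Python's string-hash randomization, which in turn pins anything whose ordering derives from hashes (such as test orderings). A quick generic illustration of why reusing a seed makes runs comparable; this is not the tempest setup itself:

```python
import os
import subprocess
import sys

def child_hash(seed):
    """hash('spam') as computed by a fresh interpreter with PYTHONHASHSEED=seed."""
    env = dict(os.environ, PYTHONHASHSEED=seed)
    out = subprocess.run([sys.executable, "-c", "print(hash('spam'))"],
                         env=env, capture_output=True, text=True, check=True)
    return out.stdout.strip()

# Pinning the seed makes string hashing (and anything whose ordering
# depends on it) reproducible across interpreter runs.
print(child_hash("0") == child_hash("0"))  # -> True
```

This is why repeating a failed run's PYTHONHASHSEED is a reasonable way to rule hash-dependent ordering in or out as a factor.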
On Feb 21, 2015 12:26 AM, "Joe Gordon" wrote:
On Feb 21, 2015 12:20 AM, "Jeremy Stanley" wrote:
On Fri, Feb 20, 2015 at 7:29 AM, Deepak Shetty wrote:
On 2015-02-20 16:29:31 +0100 (+0100), Deepak Shetty wrote:
[...]
Today I reran it after you rolled back som
Hi Jeremy,
Couldn't find anything strong in the logs to back the reason for OOM.
At the time OOM happens, mysqld and java processes have the most RAM hence
OOM selects mysqld (4.7G) to be killed.
From a glusterfs backend perspective, I haven't found anything suspicious,
and we don't have the lo
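The kernel's choice of mysqld here is expected: OOM victim selection is driven by each task's badness score, which is dominated by its memory footprint and biased by oom_score_adj. A toy model of that selection over a made-up process table (figures are illustrative, not the job's actual numbers):

```python
# Toy model of Linux OOM victim selection (illustrative numbers): badness
# is roughly the task's resident memory in pages, biased by oom_score_adj
# (-1000..1000) as thousandths of total memory.
TOTAL_PAGES = 8 * 1024 * 1024 // 4   # 8 GiB host with 4 KiB pages

def badness(rss_pages, oom_score_adj=0):
    return rss_pages + oom_score_adj * TOTAL_PAGES // 1000

MIB = 256  # 4 KiB pages per MiB
procs = {
    "mysqld":        badness(4700 * MIB),  # ~4.7 GiB resident
    "java":          badness(2100 * MIB),  # ~2.1 GiB
    "cinder-volume": badness(350 * MIB),
}
victim = max(procs, key=procs.get)
print("OOM killer would pick:", victim)  # -> mysqld
```

So the largest resident process gets killed regardless of which component actually leaked the memory, which is why the mysqld kill by itself doesn't implicate mysqld.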
On 2015-02-19 17:03:49 +0100 (+0100), Deepak Shetty wrote:
[...]
> For some reason we are seeing the centos7 glusterfs CI job getting
> aborted/killed, either by a Java exception or by the build getting
> aborted due to timeout.
[...]
> Hoping to root cause this soon and get the cinder-glusterfs CI job
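On the timeout side, one way to make that failure mode less opaque is to bound the long-running step explicitly inside the job, so it fails cleanly (with logs uploaded) rather than being hard-aborted by the CI harness. A generic sketch with coreutils timeout, not from the actual job definition:

```shell
# GNU coreutils `timeout` exits with status 124 when the limit is hit.
# Here `sleep` stands in for the long-running tempest run; the real job
# would wrap its tempest step the same way, with a budget like 90m.
timeout --signal=TERM 2 sleep 10
status=$?
if [ "$status" -eq 124 ]; then
    echo "run exceeded its time budget"
fi
```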