On Thu, Apr 9, 2020 at 1:11 PM Gianluca Cecchi
wrote:
>
>> This ^^, right here is the reason the VM paused. Are you using a plain
>> distribute volume here?
>> Can you share some of the log messages that occur right above these
>> errors?
>> Also, can you check if the file
>> $VMSTORE_BRICKPATH/.
On Tue, Apr 7, 2020 at 8:16 PM Strahil Nikolov
wrote:
Hi Gianluca,
>
>
> The positive thing is that you can reproduce the issue.
>
> I would ask you to check your gluster version and if there are any
> updates - update the cluster.
>
I'd prefer to stick with the oVirt release version of Gluster if possible.
OK. So I set the log at least at INFO level on all subsystems and tried a
redeploy of Openshift with 3 master nodes and 7 worker nodes.
One worker got the error and the VM went into paused mode:
Apr 7, 2020, 3:27:28 PM VM worker-6 has been paused due to unknown storage
error.
The VM has only one 100Gb virtual disk
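For reference, raising gluster log verbosity as described above can be done per volume. A minimal sketch, assuming the volume is named vmstore (taken from the brick log filename later in the thread); verify the option names against `gluster volume set help` on your gluster version:

```shell
# Sketch: set gluster log levels to INFO for one volume.
# "vmstore" is an assumed volume name, not confirmed by the poster.
gluster volume set vmstore diagnostics.brick-log-level INFO
gluster volume set vmstore diagnostics.client-log-level INFO
# Confirm the setting took effect:
gluster volume get vmstore diagnostics.brick-log-level
```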
On Sat, Mar 28, 2020 at 8:26 PM Nir Soffer wrote:
>
>
> Gluster disks are thin (raw-sparse) by default just like any other
> file based storage.
>
> If this theory were correct, this would fail consistently on gluster:
>
> 1. create raw sparse image
>
> truncate -s 100g /rhev/data-center/mnt/g
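Nir's raw-sparse check can be run end to end on any filesystem. A sketch under assumed paths (the real image would sit under the gluster mount, whose path is truncated above; /tmp is a stand-in):

```shell
# Create a raw sparse image and show that truncate allocates no data
# blocks; only blocks actually written consume space.
img=/tmp/test-sparse.img          # hypothetical path, not the gluster mount
truncate -s 100g "$img"
stat -c 'apparent=%s blocks=%b' "$img"   # huge apparent size, near-zero blocks
dd if=/dev/zero of="$img" bs=1M count=10 conv=notrunc status=none
stat -c 'apparent=%s blocks=%b' "$img"   # apparent size unchanged, blocks grew
rm -f "$img"
```

On gluster this is the behavior a thin (raw-sparse) disk relies on: the 100Gb disk of worker-6 only consumes real space as the guest writes.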
On Sat, Mar 28, 2020 at 8:26 PM Nir Soffer wrote:
[snip]
> Hey Nir,
> > You are right ... This is just a theory based on my knowledge and it
> might not be valid.
> > We need the libvirt logs to confirm or reject the theory, but I'm
> convinced that is the reason.
> >
> > Yet, it's quite poss
On Sat, Mar 28, 2020 at 5:00 AM Gianluca Cecchi
wrote:
...
> Further information.
> What I see around that time frame in the gluster brick log file
> gluster_bricks-vmstore-vmstore.log (the timestamp is 1 hour behind in the log file)
>
> [2020-03-27 23:30:38.575808] I [MSGID: 101055]
> [client_t.c:436:gf_client_
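The 1-hour offset noted above is consistent with gluster writing its log timestamps in UTC while the host clock runs local time (CET, UTC+1, on March 27). A quick way to convert a brick-log timestamp to local time with GNU date, assuming an Italian timezone for the poster:

```shell
# Convert the UTC brick-log timestamp to local time (GNU date).
# Europe/Rome is an assumption based on the poster's locale.
TZ=Europe/Rome date -d '2020-03-27 23:30:38 UTC' '+%Y-%m-%d %H:%M:%S %Z'
# → 2020-03-28 00:30:38 CET
```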
Nic,
I didn’t see what version of gluster you were running? There was a leak that
caused similar behavior for me in early 6.x versions, but it was fixed in 6.6
(I think, you’d have to find it in the bugzilla to be sure) and I haven’t seen
this in a while. Not sure it’s exactly your symptoms (min
On March 28, 2020 3:21:45 AM GMT+02:00, Gianluca Cecchi
wrote:
>Hello,
>having deployed oVirt 4.3.9 single host HCI with Gluster, I sometimes see
>VMs going into paused state for the error above and needing to be manually
>resumed (sometimes this resume operation fails).
>Actually it only happened with empty disks (thin provisioned) and sudden
>high I/O during the initial phase of install of the OS