Hi there,
For the last few days we have been facing issues with paused VMs. In the past a VM would pause for a few seconds while its LV device was resized, but now it does not resume at all. We migrated to a 4.5.2 cluster; this never happened before with the same storage.
There is almost nothing in the engine log:
2022-09-06 09:47:11,160+02 INFO
On Fri, Mar 16, 2018 at 1:25 PM, Enrico Becchetti <
enrico.becche...@pg.infn.it> wrote:
> Dear All,
> Has anyone seen this error? When I run this command from my virtual
> machine:
>
> # time dd if=/dev/zero of=enrico.dd bs=4k count=1000
>
I don't think it's a very interesting test.
On 16/03/2018 at 13:28, Karli Sjöberg wrote:
On 16 March 2018 at 12:26, Enrico Becchetti wrote:
Dear All,
Has anyone seen this error?
Yes, I experienced it dozens of times on 3.6 (my 4.2 setup has
insufficient workload to trigger such an event).
Yes ... it's thin provisioning; in fact, with a preallocated disk type I
don't have any problem.
Thank you so much.
Best Regards
Enrico
Dear All,
Has anyone seen this error? When I run this command from my
virtual machine:
# time dd if=/dev/zero of=enrico.dd bs=4k count=1000
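For what it's worth, a bs=4k count=1000 dd mostly exercises the guest page cache and may finish before anything reaches the thin LV at all. A minimal variant (file path illustrative) that forces the data to stable storage before dd exits:

```shell
# Write 16 MiB and flush it to stable storage before dd returns
# (conv=fsync), so the write actually reaches the underlying device
dd if=/dev/zero of=/tmp/enrico.dd bs=1M count=16 conv=fsync
stat -c %s /tmp/enrico.dd   # -> 16777216
```

With oflag=direct instead of conv=fsync the page cache is bypassed entirely, but not every filesystem accepts O_DIRECT.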
The VM was paused due to some kind of storage error/problem. The message is
strange because it talks about a "no storage space" error, but oVirt puts the virtual
On Thu, Apr 14, 2016 at 12:38 PM, Fred Rolland wrote:
> Nir,
> See attached the repoplot output.
So we have about one concurrent lvm command without any disk operations, and
everything seems snappy.
Nicolás, maybe this storage or the host is overloaded by the VMs? Are your
Nir,
See attached the repoplot output.
On Thu, Apr 14, 2016 at 12:02 PM, Fred Rolland wrote:
> From the log, we can see that the lvextend command took 18 sec, which is
> quite long.
Fred, can you run repoplot on this log file? It may explain why this lvm
call took 18 seconds.
Nir
From the log, we can see that the lvextend command took 18 sec, which is
quite long.
60decf0c-6d9a-4c3b-bee6-de9d2ff05e85::DEBUG::2016-04-13 10:52:06,759::lvm::290::Storage.Misc.excCmd::(cmd) /usr/bin/taskset --cpu-list 0-23 /usr/bin/sudo -n /usr/sbin/lvm lvextend --config ' devices
{
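A rough way to eyeball lvextend timings without repoplot is to pull the timestamps out of vdsm.log. The sample line below (shortened from the excerpt above) makes the command runnable as-is; on a real host you would point awk at /var/log/vdsm/vdsm.log and compare the cmd timestamp against the matching command-completion line:

```shell
# One sample vdsm.log line in the "::"-separated format shown above
cat > vdsm.log <<'EOF'
60decf0c-6d9a-4c3b-bee6-de9d2ff05e85::DEBUG::2016-04-13 10:52:06,759::lvm::290::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm lvextend ...
EOF
# Field 3 of the "::"-split record is the timestamp; print it for every
# lvextend invocation so slow calls stand out
awk -F'::' '/lvextend/ {print $3}' vdsm.log
```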
Ok, that makes sense, thanks for the insight both Alex and Fred. I'm
attaching the VDSM log of the SPM node at the time of the pause. I
couldn't find anything that would clearly identify the problem, but
maybe you'll be able to.
Thanks.
Regards.
On 2016-04-13 13:09, Fred Rolland wrote:
Hi,
Yes, just as Alex explained, if the disk has been created as thin
provisioned, VDSM will extend it once a watermark is reached.
Usually it should not get to the state where the VM is paused.
From the log, you can see that the request for extension was sent
before the VM got to the No
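The mechanism Fred describes can be sketched as a simple watermark rule. All numbers below are made up for illustration; the real thresholds live in VDSM's configuration and differ by version:

```shell
# Toy sketch of the extension rule: when free space in the thin LV drops
# below the watermark, VDSM asks the SPM to grow the LV by a fixed chunk.
lv_size_mb=4096      # current LV size
used_mb=3800         # highest written offset in the image
watermark_mb=512     # free-space threshold (illustrative value)
chunk_mb=1024        # extension step (illustrative value)

free_mb=$((lv_size_mb - used_mb))
if [ "$free_mb" -lt "$watermark_mb" ]; then
  lv_size_mb=$((lv_size_mb + chunk_mb))
fi
echo "$lv_size_mb"   # -> 5120
```

If the guest writes faster than the extension round-trip completes, the LV fills up before the chunk arrives and QEMU pauses the VM with the misleading "no space" error.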
Ahh, we've seen this as well in RHEV and have wondered what was going on.
A better message would be good.
Hi,
If you have set up VM disks as Thin Provisioned, the VM has to pause
when the disk image needs to expand. You won't see this on VMs with
preallocated storage.
It's not the SAN that's running out of space, it's the VM image needing
to be expanded incrementally each time.
Cheers
Alex
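The thin-vs-preallocated distinction Alex describes can be demonstrated with an ordinary sparse file, the filesystem analogue of a thin provisioned disk:

```shell
# A 1 GiB sparse file: the apparent size is 1 GiB, but almost no blocks
# are allocated until data is actually written. A preallocated disk is
# the opposite: all blocks are reserved up front, so no runtime extension
# (and no extension-related pause) is ever needed.
truncate -s 1G /tmp/thin.img
stat -c 'apparent=%s blocks=%b' /tmp/thin.img
```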
Hi Fred,
This is an iSCSI storage. I'm attaching the VDSM logs from the host
where this machine has been running. Should you need any further info,
don't hesitate to ask.
Thanks.
Regards.
On 2016-04-13 11:54, Fred Rolland wrote:
Hi,
What kind of storage do you have? (iSCSI, FC, NFS...)
Hi,
What kind of storage do you have? (iSCSI, FC, NFS...)
Can you provide the vdsm logs from the host where this VM runs?
Thanks,
Freddy
Hi,
We're running oVirt 3.6.4.1-1. Lately we're seeing a bunch of events
like these:
2016-04-13 10:52:30,735 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (DefaultQuartzScheduler_Worker-86) [60dea18f] VM 'f9cd282e-110a-4896-98d3-6d320662744d'(vm.domain.com) moved from 'Up' -->
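To get a feel for how often this happens, the pause transitions can be grepped out of engine.log. The sample line below reconstructs the snippet above (the target state is assumed, since the original is cut off); on an engine host you would grep /var/log/ovirt-engine/engine.log instead:

```shell
# One sample engine.log line in the format shown above (target state
# 'Paused' is an assumption; the original snippet is truncated)
cat > engine.log <<'EOF'
2016-04-13 10:52:30,735 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (DefaultQuartzScheduler_Worker-86) [60dea18f] VM 'f9cd282e-110a-4896-98d3-6d320662744d'(vm.domain.com) moved from 'Up' --> 'Paused'
EOF
# Count transitions out of 'Up' to see how often VMs are pausing
grep -c "moved from 'Up'" engine.log   # -> 1
```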