will simply put new content in there.
Not sure about vdsm, maybe some component (vdsm, engine) is still trying to
get status of node 3. I have been away for too long, so Didi will have to
pitch in.
Martin Sivak
>
> On Thu, Feb 3, 2022 at 6:02 PM Ayansh Rocks
> wrote:
>
>> Hi Y
ballooning aggressivity.
Best regards
--
Martin Sivak
ex-oVirt maintainer of this area
On Mon, Sep 7, 2020 at 9:38 PM KISHOR K wrote:
>
> Hi All,
>
> I'm new to Ovirt and not having a perfect experience with Ovirt yet.
> I ran into a strange issue today when I tried to creat
werManagement)
enables the power cycling mechanism and the second one
(HostsInReserve) controls how many empty hosts are allowed to stay up.
When not enough hosts are empty anymore, a new one will be started.
Best regards
--
Martin Sivak
> But you can set a script to change the performance
. In fact, it almost
does not happen at all, especially in the virtual desktop use case. So
we let the user specify how much the VM allocation can grow above the
physical memory capacity of a node (minus some overhead for the system and
such).
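As a rough illustration of that limit (the numbers and the 110% figure below are made-up examples, not oVirt defaults), the allowed total allocation is just host memory scaled by the configured overcommit percentage:

```shell
# Sketch only: how an overcommit percentage caps total VM allocation.
host_mem_mib=65536      # physical memory of the node, in MiB (example value)
overcommit_pct=110      # allow allocations up to 110% of physical memory
max_alloc_mib=$(( host_mem_mib * overcommit_pct / 100 ))
echo "$max_alloc_mib"   # prints 72089
```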
Does this make sense to you?
Best regards
--
Martin Sivak
ex
mom.Controllers.Balloon - INFO - Ballooning
> guest:node1 from 695648 to 660865
> 2019-06-13 07:12:51,437 - mom.GuestMonitor.Thread - INFO - GuestMonitor-node1
> ending
>
> Can someone clarify what exactly does this (from to yyyy) mean ?
>
> Best Regards,
> Strahil Nik
).
--
Martin Sivak
used to be maintainer of mom
On Thu, Jun 13, 2019 at 12:26 AM Darrell Budic wrote:
>
> Do you have the ovirt-guest-agent running on your VMs? It's required for
> ballooning to control allocations on the guest side.
>
> On Jun 12, 2019, at 11:32 AM, Strahil w
Hi,
the stale records are not an issue at all. You can remove them for
visually cleaner reports (hosted-engine --clean-metadata command, check
the man page), but it makes no difference to the algorithms.
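For reference, a command sketch of that cosmetic cleanup (the host-id value is an example; check the man page for the exact options your version supports):

```shell
# Remove the stale record of one host from the HE metadata (cosmetic only).
# Run with the HA agent stopped on that host; --host-id=3 is a placeholder.
hosted-engine --clean-metadata --host-id=3 --force-clean
```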
Best regards
Martin Sivak
On Thu, May 2, 2019 at 11:31 AM Andreas Elvers
wrote:
>
>
Hi,
as far as I know you can manually migrate hosted engine from the
webadmin UI by clicking at the migrate button.
The question is, why would you want to?
Best regards
Martin Sivak
On Mon, Mar 11, 2019 at 2:01 PM wrote:
>
> Am I reading these right in that manual migration is not po
of our colleagues wrote a blog post about this:
https://mpolednik.github.io/2017/06/26/hugepages-and-ovirt/
Best regards
Martin Sivak
On Fri, Feb 15, 2019 at 5:35 AM Vincent Royer wrote:
> How do I know how many huge pages my hosts can support?
>
> cat /proc/meminfo |
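Continuing that thought, the arithmetic is just MemTotal divided by Hugepagesize from /proc/meminfo. A small sketch (the sample values below are assumptions; on a real host read /proc/meminfo directly):

```shell
# Upper bound on hugepages that would fit in RAM, from /proc/meminfo values.
# meminfo holds sample numbers here; on a real host feed awk /proc/meminfo.
meminfo="MemTotal:       65806944 kB
Hugepagesize:       2048 kB"
printf '%s\n' "$meminfo" |
  awk '/^MemTotal:/ {t=$2} /^Hugepagesize:/ {s=$2} END {print int(t/s)}'
```

Note this is only an upper bound; memory already used by the system and other processes reduces what can actually be reserved.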
end filtering out the emails as
a workaround before this can be fully investigated and fixed.
Simone, Denis: I can't do more here, looks like a race in agent - broker
initialization and host id management.
Best regards
Martin Sivak
On Fri, Dec 14, 2018 at 12:35 PM fsoyer wrote:
> In b
Hi,
check the broker.log as well. The connect is used to talk to
ovirt-ha-broker service socket.
Best regards
Martin Sivak
On Fri, Dec 14, 2018 at 12:20 PM fsoyer wrote:
> I think I have it in agent.log. What can be this "file not found" ?
>
> MainThread::ERROR::2018-1
Hi,
no, StartState is not common; it is only ever entered when the agent
boots up. So something restarted or killed the agent process. Check
the agent log in /var/log/ovirt-hosted-engine-ha for errors.
Best regards
Martin Sivak
On Fri, Dec 14, 2018 at 12:05 PM fsoyer wrote:
>
> Hi Martin
(look for the notification section) or the
hosted-engine tool (search --help for set config) depending on the
version of hosted engine you are using.
Best regards
--
Martin Sivak
On Thu, Dec 13, 2018 at 3:10 PM fsoyer wrote:
>
> Hi,
> I don't find revelant answer about this. Sor
just
faster when searching for identical pages.
Best regards
--
Martin Sivak
On Tue, Nov 13, 2018 at 8:13 AM, wrote:
> Hi buddy, I encountered an issue:
> Environment: oVirt 3.5, node2+node3+engine; node2 and node3 are in the same
> cluster.
> node2:64GB memory
> node3:64GB memor
:) We do not recommend using the old procedure anymore
unless there is something special that does not work there. In other
words, try ansible first from now on.
Best regards
--
Martin Sivak
HE ex-maintainer :)
On Fri, Nov 9, 2018 at 1:56 PM, Gianluca Cecchi
wrote:
>
> On Fri, Nov 9, 2
;,
> "vms_rule": {
> "enabled": "true",
> "enforcing": "true",
> "positive": "true"
> },
>
> But both the VMs are coming up in TestHost
a bit first, do the migration and restore the affinity
rule.
Martin Sivak
On Mon, Oct 15, 2018 at 12:13 PM, Staniforth, Paul
wrote:
> Hello,
> I have found that migration doesn't work when using vms_rule
> Enforcing Hard.
> Scenario vms in an affinity group wit
unch the VM.
Correct.
> It also won’t make
> changes to bring all VMs into compliance with the Affinity rules, from what
> I can tell.
It will try that too, but rather less aggressively.
Best regards
Martin Sivak
On Sun, Oct 14, 2018 at 7:59 PM, Darrell Budic wrote:
> VM to VM affinit
keys you can
change (on 4.2 for sure and very probably 4.1 too).
Best regards
Martin Sivak
On Wed, Aug 29, 2018 at 11:56 PM, Douglas Duckworth
wrote:
> Yes, indeed!
>
> How can I change this internal Python setting?
>
> On Wed, Aug 29, 2018, 5:43 PM Martin Sivak wrote:
>>
&
ype :
> broker
The value here is a regular expression that is matched against the
state transition string in the email.
Best regards
Martin Sivak
On Wed, Aug 29, 2018 at 10:34 PM, Douglas Duckworth
wrote:
> Thanks for sharing
>
> I may want to do that
>
> Though first I want
Hi,
did you put the host to maintenance and activated it again when you
enabled ballooning? Or search for a slightly hidden Sync MOM policy
link somewhere in the host area that would force it.
The log seems to indicate ballooning is either disabled or "not necessary".
Martin
On Tue, Jul 31, 201
Hi,
> 2018-07-31 11:49:44,258 - mom.Monitor - DEBUG - Field 'mem_free' not known.
> Ignoring.
No, this is "normal". Should not affect ballooning.
> I can't find an INFO log file with My_VM- is ready
And the line where the GuestMonitor is starting? You need to get back
to the time the VM was
user_limit', 'vcpu_quota'])
...
2018-08-01 10:52:25,808 - mom.Monitor - INFO - ms-vhost-1 is ready
There might be a message reporting that some required data fields are
not available and the ballooning won't work for that VM i
se spaces in the path? Also make sure the
backslash in \:ovirt is doubled if you execute this from bash like
you seem to be doing (\\:ovirt)
Martin
On Mon, Jun 25, 2018 at 3:38 PM, Reznikov Alexei wrote:
> 25.06.2018 15:12, Martin Sivak wrote:
>
>> Hi,
>>
>> yes there is
when no disks for
lockspace and metadata exist at all.
Best regards
Martin Sivak
On Mon, Jun 25, 2018 at 9:52 AM, Reznikov Alexei wrote:
> 21.06.2018 20:15, reznikov...@soskol.com wrote:
>>
>> Hi list!
>>
>> After upgrade my cluster from 4.1.9 to 4.2.2, agent and br
by
enabling this like this:
- use the logcontrol script from here
https://github.com/oVirt/ovirt-engine/blob/master/contrib/log-control.sh
- and enable DEBUG for
org.ovirt.engine.core.bll.scheduling.policyunits.RankSelectorPolicyUnit
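A command sketch of those two steps (the argument order for log-control.sh is an assumption here; check the script's own usage output before running it):

```shell
# Hypothetical invocation: raise the ranking policy unit's log level to DEBUG.
./log-control.sh org.ovirt.engine.core.bll.scheduling.policyunits.RankSelectorPolicyUnit DEBUG
```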
Best regards
--
Martin Sivak
SLA, oVirt
On Mon, Jun 18, 2018
It should not be. Was there anything in the log? Storage failure or something?
Best regards
Martin Sivak
On Thu, Jun 14, 2018 at 11:59 AM, Callum Smith wrote:
> So the one host with the stale-data, putting that into maintenance and then
> rebooting seems to have brought it back and stopp
Dear Callum,
unknown stale-data means the hosts did not submit a status update during
the last minute. That might be just a glitch, or something happened to
the storage connection there.
Best regards
Martin Sivak
On Thu, Jun 14, 2018 at 11:28 AM, Callum Smith wrote:
> Dear Martin,
>
engine is up. Manually
clicking the migrate button should also work.
Best regards
Martin Sivak
On Thu, Jun 14, 2018 at 10:41 AM, Callum Smith wrote:
> Dear All,
>
> Getting an issue where the HE can't be migrated, the log is full of:
> "VM HostedEngine is down with error.
s when free
memory is above the threshold.
There are many ways to describe those (above, below, used memory, free
memory) and it is too late to change it anyway.
Best regards
Martin Sivak
On Thu, Jun 14, 2018 at 12:12 AM, Alastair Neil wrote:
> when the free memory is below defined maxi
regards
--
Martin Sivak
oVirt
On Wed, Jun 13, 2018 at 7:14 PM, Alastair Neil wrote:
> Can someone clarify these setting for me, I am having difficulty parsing
> what exactly they mean. They seem to me to be backwards.
>
> If I wish to set a threshold at which I want my host to be c
Hi,
it actually does not make much difference unless you need some special
customization. Node is plug and play; CentOS + oVirt repos might have
fresher packages (snapshots, nightlies) and allow live changes.
--
Martin Sivak
On Wed, Jun 13, 2018 at 1:07 PM, Jayme wrote:
> I'm about to
repositories:
For example the URL for CentOS 7 based installation:
http://resources.ovirt.org/pub/ovirt-4.1/rpm/el7/noarch/ovirt-hosted-engine-ha-2.1.9-1.el7.noarch.rpm
Best regards
--
Martin Sivak
SLA / oVirt
On Mon, Jun 4, 2018 at 2:45 PM, wrote:
> Thank you again so very much Andrej! I
Hi,
hosted engine does not care about disks. Gluster might, but I do not
know enough about best practice brick layout there, sorry.
Martin
On Wed, May 23, 2018 at 11:44 AM, femi adegoke wrote:
> Must all 7 hosts have the same amount of storage/number of disks?
>
Hi,
we recommend up to 7 HE hosts, it is not important if the number is
odd or even. The real implementation limit is much higher and you
won't reach it. But since we do not test more than 7 as part of the QE
process, we can't recommend it.
Best regards
Martin Sivak
On Wed, May 23,
Hi,
you can use multiple different inventory sources at the same time - so
use your file + ovirt4.py
https://docs.ansible.com/ansible/2.5/user_guide/intro_dynamic_inventory.html#using-inventory-directories-and-multiple-inventory-sources
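In practice that combination looks something like this (file names are examples; -i can be repeated or pointed at a directory):

```shell
# Mix a static inventory file with the dynamic oVirt inventory script.
ansible-playbook -i hosts.ini -i ovirt4.py site.yml

# Or drop both sources into one directory and point Ansible at it:
ansible-playbook -i inventory/ site.yml
```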
Best regards
Martin Sivak
On Tue, May 22, 2018 at 9:50
> Does the MAC address ever change during the deployment process?
No.
> What is meant by "proper MAC reservation"?
The DHCP server must for a given MAC always return the same IP that is
paired with the requested FQDN in the DNS server (or injected into
/etc/hosts).
Best regar
anged the engine to static, the deployment passed.
DHCP is tricky as you need to make sure that the FQDN always matches
the same VM (proper MAC, DNS and DHCP reservations come into play). But
this is the default for all installs I did and it always worked fine.
Best regards
Martin Sivak
On Mon, May 21
vm.conf in the process.
Best regards
Martin Sivak
On Wed, May 2, 2018 at 2:52 AM, Justin Zygmont wrote:
> After rebooting the node hosting the engine, I get this:
>
>
>
> # hosted-engine --connect-storage
>
> # hosted-engine --vm-start
>
> The hosted engine configura
-b9b6b8fe073c,address:None}
Best regards
--
Martin Sivak
SLA / oVirt
On Fri, Apr 27, 2018 at 4:40 PM, Nico De Ranter wrote:
>
> It seems I messed up my glusterfs filesystem as a result the hosted-engine
> didn't start anymore.
>
> Sigh.
>
> Nico
>
> On Fri, Apr 27,
at hand:
> hosted-engine.ovirt.com=192.168.122.91, it is engine VM, visit
> hosted-engine.ovirt.com show me web UI.
> [root@hosted-engine2 ~]# curl
> http://hosted-engine.ovirt.com/ovirt-engine/services/health
> Error404 - Not Found
Best regards
Martin Sivak
On Thu, Apr 26,
ute to host
I told you before. This is normal as it is trying to figure out
whether the host is up.
Best regards
Martin Sivak
On Thu, Apr 26, 2018 at 4:14 AM, wrote:
> engine VM:192.168.122.91
> hosted-engine1 : 192.168.122.66
> hosted-engine2 : 192.168.122.223
>
> I can not
and .122.66. The engine
constantly monitors all its hosts and that means it is trying to
connect to them every now and then.
Please execute the two following commands on Host B and show us the
results (use the proper fqdn):
$(hosted-engine --check-liveliness)
$(curl http://{fqdn}/ovirt-engine/s
The engine will try connecting to all registered hosts all the time.
That is normal.
If your host can reach the engine then check whether it can reach
http://{fqdn}/ovirt-engine/services/health as that is what is used to
make sure the engine is alive.
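A sketch of checking that endpoint by hand (ENGINE_FQDN is a placeholder, and the exact body text of a healthy response is an assumption that may differ between versions):

```shell
# Poll the engine health servlet roughly the way the liveliness check does.
check_health() {
  # Treat any response mentioning "Health Status" as alive (assumed marker).
  case "$1" in
    *"Health Status"*) echo alive ;;
    *)                 echo down ;;
  esac
}

body=$(curl -fsS --max-time 3 \
  "http://${ENGINE_FQDN:-engine.example.com}/ovirt-engine/services/health" \
  2>/dev/null)
check_health "$body"
```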
Best regards
Martin Sivak
On Wed, Apr 25
it manually by executing $(hosted-engine
--check-liveliness) from the host.
Best regards
Martin Sivak
On Wed, Apr 25, 2018 at 12:51 PM, wrote:
> Hi,
>
> two nodes:
> 192.168.122.66 hosted-engine1
> 192.168.122.223 hosted-engine2
>
> I power off hosted-engine1, so I do not
engine needs something that was only
available on the dead host (A) like some storage, host B cannot ping
the gateway..
Best regards
Martin Sivak
On Wed, Apr 25, 2018 at 11:33 AM, wrote:
> sorry, I misspoke,
>
> I have two nodes, A: 192.168.122.65, B: 192.168.122.66 with host
Hi,
was that a clean host? What does virsh -r net-list show?
Best regards
Martin Sivak
On Tue, Apr 24, 2018 at 9:12 AM, wrote:
> Hi,
>
> I deploy hosted engine but it has some error,
>
> # hosted-engine --deploy
>
> [ INFO ] TASK [Check status of default libvirt network
he DB every time they add a host out of order and the
SPM ID is not selectable by user, because it needs to fit some storage
constraints (we use it for protecting storage metadata) and must match
the hosted engine ID now.
Best regards
Martin Sivak
On Mon, Apr 23, 2018 at 2:47 PM, Thomas Klute
backup if it is a production
environment and I can only propose this because I used it repeatedly
during development tests. But production use is on your own risk.
Best regards
Martin Sivak
On Mon, Apr 23, 2018 at 3:45 PM, Thomas Klute wrote:
> Dear Martin,
>
> a follow up question regarding
lease change hosted engine ID to match SPM ID
(/etc/ovirt-hosted-engine/hosted-engine.conf) and ignore the hostname
vs ID mismatch. All other options might cost you..
Best regards
--
Martin Sivak
SLA / oVirt
On Mon, Apr 23, 2018 at 2:26 PM, Thomas Klute wrote:
> Dear Simone,
>
> thanks f
fact) and we would like to know whether
this might be related to that change or not.
Simone, do you know how to debug this? Are there logs we could use to
check the behavior? The host-deploy logs maybe?
Best regards
--
Martin Sivak
SLA / oVirt
On Mon, Apr 23, 2018 at 12:54 PM, Thomas Klute wrote
(if fencing is properly configured).
Best regards
Martin Sivak
On Fri, Apr 20, 2018 at 12:13 PM, wrote:
> this process is not error ?
> - Original Message -
> From: Martin Sivak
> To: dhy336
> Cc: users
> Subject: Re: Re: [ovirt-users] 回复:Re: Hosted-engine can not_s
Hi,
the engine does not know you killed the host. It will notice
eventually and handle the situation. Just give it time (5 minutes or
so).
Best regards
--
Martin Sivak
SLA / oVirt
On Fri, Apr 20, 2018 at 12:00 PM, wrote:
> Hi, thanks for your feedback. I have another question
>
>
Hi,
your ovirt-hosted-engine-ha package is too old. You need at least
2.1.9 to properly support 4.2 engine. The same applies to vdsm. Please
upgrade the node.
Best regards
Martin Sivak
On Fri, Apr 20, 2018 at 3:58 AM, wrote:
> Hi I find some error logs in /var/log/ovirt-hosted-engine
We need more than just this small log snippet. Please check the vdsm
and libvirt logs as well.
Best regards
Martin Sivak
On Thu, Apr 19, 2018 at 2:05 PM, wrote:
> Hi,
> I deploy three node with hosted engine, I force shut down a node which
> Host-engine VM is run, But hosted eng
Hi,
That part is related to the hosted engine storage. You need an
additional storage domain for regular VMs as specified in the note I
sent you. Add the storage using the webadmin UI.
Best regards
--
Martin Sivak
SLA / oVirt
On Wed, Apr 18, 2018 at 11:55 AM, wrote:
> Select the type
Virt 4.2.2
release now supports much better and safer deployment method.
Best regards
--
Martin Sivak
SLA / oVirt
On Wed, Apr 18, 2018 at 11:08 AM, wrote:
> Hi,
> I set up hosted engine and it succeeded, but it has not added my shared
> storage (nfs) to Storage Domain,
> I don`t find en
10:04 AM, Simone Tiraboschi wrote:
>
>
> On Sat, Apr 14, 2018 at 1:47 PM, Martin Sivak wrote:
>>
>> Hi,
>>
>> the vnc device is there by default (I copied it out of my own hosted
>> engine instance), I do not know why it was missing in your case.
>
>
&
Hi,
the vnc device is there by default (I copied it out of my own hosted
engine instance), I do not know why it was missing in your case.
Best regards
Martin Sivak
On Fri, Apr 13, 2018 at 5:13 PM, Thomas Klute wrote:
> Dear Martin,
>
> yes, that worked. Thank you so much!!
> We
the VNC approach again.
Best regards
Martin Sivak
On Fri, Apr 13, 2018 at 2:35 PM, Thomas Klute wrote:
> Dear Martin,
>
> thanks for the feedback.
> We already read this and tried it.
> It seems to me that the graphics device was removed from the hosted
> engine by some ovirt rel
/#handle-engine-vm-boot-problems
Best regards
--
Martin Sivak
SLA / oVirt
On Fri, Apr 13, 2018 at 11:26 AM, Thomas Klute wrote:
> Dear oVirt Team,
>
> after trying to reboot a hosted engine setup on oVirt 4.2 the VM won't
> come up anymore.
> The qemu-kvm process is there
add an additional host with hosted engine
bits directly from the webadmin UI (HostedEngine side tab of Add new host
dialog, select Deploy).
Best regards
Martin Sivak
On Mon, Apr 9, 2018 at 6:21 PM, FERNANDO FREDIANI wrote:
> Hello Simone
>
> The doubt is once one hosted engine is deploye
-fedora/ for examples. Most of
them (if not all) should be valid for CentOS as well.
The hostname you set must be resolvable to IP and that IP has to point
back to the host you are on.
Best regards
Martin Sivak
On Wed, Apr 4, 2018 at 12:50 PM, dhy336 wrote:
> thanks, but i do not know why is
Hi,
make sure you have at least ovirt-hosted-engine-ha-2.2.1 and the
service was properly restarted.
The situation you are describing can happen when you run older hosted
engine agent with 4.2 ovirt-engine.
It was tracked as: https://bugzilla.redhat.com/1518887
Best regards
Martin Sivak
On
>
> Why did it trash it?
Split brain and concurrent filesystem access...
The bug only happened in 4.2.2 and was never released officially apart
from development builds. And it should be fixed now.
Martin
On Fri, Mar 16, 2018 at 11:04 AM, Yaniv Kaul wrote:
>
>
> On Mar 15, 2018 9:21 PM, "Maton,
p.html
Best regards
Martin Sivak
On Tue, Mar 13, 2018 at 4:45 PM, Gianluca Cecchi
wrote:
> On Tue, Mar 13, 2018 at 4:14 PM, Martin Sivak wrote:
>>
>> Hi,
>>
>> make sure the service is actually started and the firewall is
>> configured properly:
>>
>&
-cmd --reload
Best regards
Martin Sivak
On Tue, Mar 13, 2018 at 3:33 PM, Peter Hudec wrote:
> Hi,
>
> after upgrade to 4.2. there was running the cockpit on each host.
> Right now, there is no service on port 9090. Is there any special setup
> how to put it back?
>
> [
. See "Storage" in the Administration Guide for different
storage options and on how to add a data storage domain."
Best regards
Martin Sivak
On Fri, Mar 9, 2018 at 10:08 AM, Oliver Dietzel wrote:
> Install from node iso on gluster works fine, the hosted engine vm installs
can consider it when planning the feature. And stand assured that
we are thinking about how to implement this properly.
Best regards
--
Martin Sivak
SLA / oVirt
On Mon, Feb 26, 2018 at 10:09 PM, Fabrice SOLER <
fabrice.so...@ac-guadeloupe.fr> wrote:
> Hello,
>
> My node (IP ovirtmg
what happened that you started
looking into logs in the first place.
Best regards
Martin Sivak
On Wed, Feb 21, 2018 at 12:04 AM, Jamie Lawrence
wrote:
> Hello,
>
> I have a sanlock problem. I don't fully understand the logs, but from what I
> can gather, messages like this
Hi Artem,
just a restart of ovirt-ha-agent services should be enough.
Best regards
Martin Sivak
On Mon, Feb 19, 2018 at 4:40 PM, Artem Tambovskiy
wrote:
> Ok, understood.
> Once I set correct host_id on both hosts how to take changes in force? With
> minimal downtime? Or i need re
deploying RHHI (hyper converged RH product) is
here:
https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure/1.1/html-single/deploying_red_hat_hyperconverged_infrastructure/index#deploy
Best regards
Martin Sivak
On Mon, Feb 12, 2018 at 4:25 PM, Philipp Richter
w
://bugzilla.redhat.com/show_bug.cgi?id=1373930)
For all those reasons we do not recommend using it in production, but
we are not aware about anything that would really block you from doing
it. It just hasn't been tested and polished enough yet.
Best regards
Martin Sivak
On Fri, Feb 9, 2018 at 1:02 PM, S
different
SD!!).
We have two tracking bugs for the related work:
https://bugzilla.redhat.com/show_bug.cgi?id=1455169 and
https://bugzilla.redhat.com/show_bug.cgi?id=1393902 - most of what was
needed was fixed already.
Best regards
Martin Sivak
On Fri, Feb 9, 2018 at 11:06 AM, Gianluca Cecchi
highly available at all.
Best regards
Martin Sivak
On Fri, Feb 9, 2018 at 8:25 AM, Gianluca Cecchi
wrote:
> On Fri, Feb 9, 2018 at 2:30 AM, Thomas Letherby wrote:
>>
>> Thanks, that answers my follow up question! :)
>>
>> My concern is that I could have a host off-li
Andrej, this might be related to the recent fixes of yours in that
area. Can you take a look please?
Best regards
Martin Sivak
On Thu, Feb 8, 2018 at 4:18 PM, Donny Davis wrote:
> Ovirt 4.2 has been humming away quite nicely for me in the last few months,
> and now I am hitting an issu
trying to install on a network with out a gateway.
How were your users accessing the VMs? Was this some kind of super
secure deployment with no outside connectivity?
Best regards
Martin Sivak
On Tue, Feb 6, 2018 at 4:32 PM, Ben De Luca wrote:
> This is expected behaviour, even if it’s not
network for data center, but you can change the address if your
topology is different.
Best regards
Martin Sivak
On Tue, Feb 6, 2018 at 4:27 PM, Alex K wrote:
> Hi,
>
> I have seen hosts rendered unresponsive when gateway is lost.
> I will be able to provide more info once I prepare an
should fix it too.
Best regards
Martin Sivak
On Mon, Jan 22, 2018 at 8:03 AM, Artem Tambovskiy
wrote:
> Hello Kasturi,
>
> Yes, I set global maintenance mode intentionally,
> I'm run out of the ideas troubleshooting my cluster and decided to undeploy
> the hosted engine from se
.
Best regards
Martin Sivak
On Fri, Jan 19, 2018 at 1:01 PM, Alex K wrote:
> Hi All,
>
> I have a 3 server ovirt 4.1 selft hosted setup with gluster replica 3.
>
> I see that suddenly one of the hosts reported as unresponsive and at same
> time the /var/log/messages logged:
&
maintenance mode (and check that it is
visible from the other host using he --vm-status)
- mount storage domain (hosted-engine --connect-storage)
- check sanlock client status to see if proper lockspaces are present
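Those steps map to roughly the following commands (a sketch; run them on the affected host and double-check the maintenance mode before applying):

```shell
hosted-engine --set-maintenance --mode=none  # or =local / =global as needed
hosted-engine --vm-status                    # verify from the *other* host
hosted-engine --connect-storage              # mount the HE storage domain
sanlock client status                        # the HE lockspace should be listed
```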
Best regards
Martin Sivak
On Tue, Jan 16, 2018 at 1:16 PM, Derek Atkins wrote:
>
I actually do not agree with Simone here. The fix he talks about adds
a call to prepareImage, but your log clearly shows that prepareImage
is the call that fails:
Jan 12 16:52:36 cultivar0 journal: vdsm storage.Dispatcher ERROR
FINISH prepareImage error=Volume does not exist:
(u'8582bdfc-ef54-47af
ck (most recent call last):
> File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py",
> line 88, in run
> self._storage_broker.get_raw_stats()
> File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/stora
> Can you please stop all hosted engine tooling (
On all hosts I should have added.
Martin
On Fri, Jan 12, 2018 at 3:22 PM, Martin Sivak wrote:
>> RequestError: failed to read metadata: [Errno 2] No such file or directory:
>> '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c
a-agent,
ovirt-ha-broker), move the file (metadata file is not important when
services are stopped, but better safe than sorry) and restart all
services again?
> Could there possibly be a permissions
> problem somewhere?
Maybe, but the file itself looks out of the ordinary. I wonder how it got ther
tion one more thing. I originally upgraded the engine VM
>> first using new RPMS then engine-setup. It failed due to not being in
>> global maintenance, so I set global maintenance and ran it again, which
>> appeared to complete as intended but never came back up after. Just in case
in virsh -r list?
Best regards
Martin Sivak
On Thu, Jan 11, 2018 at 10:00 PM, Jayme wrote:
> Please help, I'm really not sure what else to try at this point. Thank you
> for reading!
>
>
> I'm still working on trying to get my hosted engine running after a botched
>
Hi,
check the messages the host is reporting (click through to the details
page). Some of the usual issues are insufficient cpu level to satisfy
cluster requirements, network or storage connection issues and such.
Best regards
Martin Sivak
On Fri, Jan 12, 2018 at 8:39 AM, Tomeu Sastre
installing python-lxml.
I am not sure what happened to your other VM.
Best regards
Martin Sivak
On Thu, Jan 11, 2018 at 6:15 AM, Jayme wrote:
> I performed Ovirt 4.2 upgrade on a 3 host cluster with NFS shared storage.
> The shared storage is mounted from one of the hosts.
>
> I
ive.
We do not have UI for this, but it is easily done using cron and REST
API (either directly or using some SDK).
We were even working on it for a while, but it was put on the backburner
since sysadmins know cron well and the UI would have to be limited
anyway.
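As a sketch of that cron + REST API idea (engine URL, credentials and HOST_ID are placeholders; deactivate/activate actions exist in the oVirt REST API, but verify the exact paths for your version):

```shell
# crontab sketch (edit with `crontab -e`): put a spare host into maintenance
# at night and bring it back in the morning.
# m  h  dom mon dow  command
0 22 * * * curl -s -k -u admin@internal:PASS -X POST -H 'Content-Type: application/xml' -d '<action/>' https://engine.example.com/ovirt-engine/api/hosts/HOST_ID/deactivate
0 7  * * * curl -s -k -u admin@internal:PASS -X POST -H 'Content-Type: application/xml' -d '<action/>' https://engine.example.com/ovirt-engine/api/hosts/HOST_ID/activate
```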
Best regards
--
Martin Sivak
SLA / oVirt
;> --he-remove-hosts
>> Right? And after that?
>>
>> Can you help me to better understand?
>> Thank you!
>>
>> Il 03 Gen 2018 14:39, "Martin Sivak" ha scritto:
>>
>> Hi,
>>
>> we do not have any nice procedure to do that. Moving
. When the host is not empty anymore a new one
will be started.
Best regards
--
Martin Sivak
SLA / oVirt
On Mon, Jan 8, 2018 at 12:29 PM, Karli Sjöberg wrote:
>
>
> On 8 Jan 2018 12:07, Alex Shen wrote:
>
> Hi,
>
>
>
> I’m wondering how to apply power-saving schedule in
Hi,
we do not have any nice procedure to do that. Moving hosted engine to
a different storage usually involves backup and restore of the engine
database. See for example here:
http://lists.ovirt.org/pipermail/users/2017-June/082466.html
Best regards
--
Martin Sivak
SLA / oVirt
On Wed, Jan 3
within the nested VM.
Best regards
--
Martin Sivak
SLA / oVirt
On Wed, Dec 27, 2017 at 9:05 AM, Michal Skrivanek
wrote:
>
>
> On 25 Dec 2017, at 13:03, Roman Drovalev wrote:
>
>>I'm not sure I understand the layering - hyper-V on oVirt or vice-versa?
>
> the layering
Hi,
one of the new features of oVirt 4.2 is support for Replica 1 all in
one setup using hosted engine and gluster in hyper-converged mode.
So it should be again possible to use just a single host for
everything, I am not sure we have a documentation ready for that
though.
Best regards
Martin
Btw lacking vdsm logs here this seems to be the same issue Jason
Brooks just reported here too. Hosted engine is trying to get storage
info from VDSM and gets error instead..
--
Martin Sivak
SLA / oVirt
On Thu, Dec 21, 2017 at 9:02 AM, Simone Tiraboschi wrote:
>
>
> On Thu, Dec 21, 201
Hi,
I am afraid we do not have logs that would go that deep into the stack. DNS
resolution issues will definitely affect both the notification system (if
not using localhost smtp) and the engine status checks (because we use the
fqdn).
Best regards
Martin
On Wed, Dec 13, 2017 at 3:15 PM, Luca '
Hi,
we also have a proper fix now and will release it with the next RC build.
Best regards
--
Martin Sivak
SLA / oVirt
On Mon, Dec 11, 2017 at 12:10 PM, Maton, Brett
wrote:
> Really short version ( I can't find the link to the ovirt doc at the
> moment)
>
> Put hosted en
-ha/{agent.log,broker.log} - the log files
The usual reasons for migrating (well, stopping and starting) the VM are
issues with pinging the configured gateway or a crash of the VM.
Best regards
Martin Sivak
On Fri, Dec 1, 2017 at 4:36 AM, Terry hey wrote:
> Hello all,
>
> i created two
ith
> broker.log ad DEBUG level. Where i should start to identify root
> cause? Log is somewhat chatty at this level.
>
> Luca
>
> On Fri, Dec 1, 2017 at 1:24 PM, Martin Sivak wrote:
>> Hi,
>>
>>> [logger_root]
>>> level=INFO
>>
>>>