The hosted engine storage is located in an external Fibre Channel SAN.
On 25/4/2016 16:19, Martin Sivak wrote:
Hi,
it seems that all nodes lost access to storage for some reason after
the host was killed. Where is your hosted engine storage located?
Regards
--
Martin Sivak
SLA / oVirt
On Mon, Apr 25, 2016 at 10:58 AM, Wee Sritippho wrote:
> Hi,
>
> From the hosted-engine FAQ, the
Hi,
From the hosted-engine FAQ, the engine VM should be up and running in
about 5 minutes after its host was forcibly powered off. However, after
updating oVirt 3.6.4 to 3.6.5, the engine VM won't restart automatically
even after 10+ minutes (I already made sure that global maintenance mode
is set
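For anyone hitting the same symptom: the HA agent will not restart the engine VM while global maintenance is on, so that is the first thing to confirm. A minimal sketch; the grep pattern matches the banner that `hosted-engine --vm-status` prints in 3.6:

```shell
# Reads `hosted-engine --vm-status` output on stdin and reports whether the
# cluster is in global maintenance (auto-restart is disabled while it is).
maintenance_state() {
  if grep -q 'GLOBAL MAINTENANCE'; then
    echo "global"
  else
    echo "none"
  fi
}

# On a host (assumes hosted-engine is on PATH):
#   hosted-engine --vm-status | maintenance_state
# If it prints "global", clear it with:
#   hosted-engine --set-maintenance --mode=none
```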
Subject: [ovirt-users] Hosted Engine Almost setup
I realize now I shouldn't have set the default cluster name, is there a way
I can resume the install of the hosted engine?
I've got the engine up and running, so I just need to jump in from after the
engine install.
Ideas?
Checking for oVirt-Engine
I can confirm the problem.
Thank you!
On ven, 2016-04-22 at 15:02 +0200, Simone Tiraboschi wrote:
> On Fri, Apr 22, 2016 at 2:48 PM, Stefano Stagnaro
> wrote:
> > [root@h4 ~]# /usr/sbin/dmidecode -s system-uuid
> > Not Settable
>
> Ok, the issue is there.
>
On Fri, Apr 22, 2016 at 2:48 PM, Stefano Stagnaro
wrote:
> [root@h4 ~]# /usr/sbin/dmidecode -s system-uuid
> Not Settable
Ok, the issue is there.
Please check your BIOS/UEFI settings.
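For anyone wanting to script this check across hosts: vdsm needs a real SMBIOS system-uuid, and "Not Settable" means the vendor never burned one in, so it has to be fixed in the BIOS/UEFI setup or with vendor tooling. A hedged helper:

```shell
# Returns success only if the argument looks like a real UUID; dmidecode
# output such as "Not Settable" or "To Be Filled By O.E.M." fails the check.
valid_system_uuid() {
  echo "$1" | grep -Eq '^[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}$'
}

# Example:
#   valid_system_uuid "$(/usr/sbin/dmidecode -s system-uuid)" || echo "unusable system-uuid"
```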
> On ven, 2016-04-22 at 14:35 +0200, Simone Tiraboschi wrote:
>> On Fri, Apr
[root@h4 ~]# /usr/sbin/dmidecode -s system-uuid
Not Settable
On ven, 2016-04-22 at 14:35 +0200, Simone Tiraboschi wrote:
> On Fri, Apr 22, 2016 at 2:26 PM, Stefano Stagnaro
> wrote:
> > Ciao Simone,
> >
> > here it is:
> >
> > [root@h4 ~]# vdsClient -s 0
Ciao Simone,
here it is:
[root@h4 ~]# vdsClient -s 0 getVdsCaps
Traceback (most recent call last):
  File "/usr/share/vdsm/vdsClient.py", line 3001, in
    code, message = commands[command][0](commandArgs)
  File "/usr/share/vdsm/vdsClient.py", line 542, in do_getCap
    return
On Fri, Apr 22, 2016 at 1:07 PM, Stefano Stagnaro
wrote:
> Hi,
>
> while deploying Hosted Engine in hyper-converged configuration, installation
> fails with following error:
> [ ERROR ] Failed to execute stage 'Environment customization': ":cannot marshal None
Here is host01's broker.log:
https://gist.github.com/weeix/d73aa8506b296c27110747464ea33312/raw/e73938f4dce3591006b07e6ea61760831f4a2f18/broker.log
On 22 April 2016 15:04:40 GMT+07:00, Simone Tiraboschi
wrote:
>On Fri, Apr 22, 2016 at 9:46 AM, Simone
I have created vms in the vmware ESXi.
Thanks,
Nagaraju
On Fri, Apr 22, 2016 at 3:38 PM, Simone Tiraboschi
wrote:
> On Fri, Apr 22, 2016 at 10:27 AM, Budur Nagaraju
> wrote:
> > HI
> >
> > I thought Its promiscuous mode issue ,after reboot of engine
On Fri, Apr 22, 2016 at 10:27 AM, Budur Nagaraju wrote:
> HI
>
> I thought Its promiscuous mode issue ,after reboot of engine unable to reach
> again ? is there anyways to resolve ?
Are you using a nested env?
> Thanks,
> Nagaraju
>
>
> On Fri, Apr 22, 2016 at 11:45 AM, Budur
Hi,
I thought it was a promiscuous mode issue; after a reboot of the engine
it is unreachable again. Is there any way to resolve this?
Thanks,
Nagaraju
On Fri, Apr 22, 2016 at 11:45 AM, Budur Nagaraju wrote:
> I found the solution ,promiscuous mode was rejecting the packets this was
>
On Fri, Apr 22, 2016 at 9:46 AM, Simone Tiraboschi wrote:
> On Fri, Apr 22, 2016 at 9:44 AM, Wee Sritippho wrote:
>> Hi,
>>
>> I were upgrading oVirt from 3.6.4.1 to 3.6.5. The engine-vm was running on
>> host02. These are the steps that I've done:
>>
>>
Hi,
I was upgrading oVirt from 3.6.4.1 to 3.6.5. The engine-vm was running
on host02. These are the steps that I've done:
1. Set hosted engine maintenance mode to global
2. Accessed engine-vm and upgraded oVirt to latest version
3. Run 'reboot' in engine-vm
4. After about 10 minutes, the
I found the solution: promiscuous mode was rejecting the packets, which
was causing the issue. After enabling it, I am now able to ping without any issues.
Thanks for the support!
On Fri, Apr 22, 2016 at 8:39 AM, Budur Nagaraju wrote:
> HI
>
> Any updates issue is blocking ?
>
> Thanks,
>
Hi,
Any updates? The issue is blocking.
Thanks,
Nagaraju
On Thu, Apr 21, 2016 at 7:20 PM, Budur Nagaraju wrote:
> Below are the details, unable to ping the gateway from the ovirt engine
> ,able to ping the ovirt engine from host as both are there in the same
> network.
>
>
>
Can you ping by ip but not name? If so check your /etc/resolv.conf
file. It's probably missing the DNS servers. I found that when I
installed hosted-engine deployment the first phase of setup (before
creating the engine VM) did not copy over the DNS settings from my
original NIC interface so
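The check described above can be scripted; a minimal sketch (the path is a parameter so you can point it at the engine VM's copy):

```shell
# Counts nameserver entries in a resolv.conf-style file; 0 means DNS was not
# carried over, so names won't resolve even though IPs ping fine.
dns_count() {
  grep -c '^nameserver' "$1"
}

# On the engine VM:
#   dns_count /etc/resolv.conf
```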
Below are the details: I am unable to ping the gateway from the oVirt
engine, but able to ping the oVirt engine from the host, as both are in
the same network.
[root@oe ~]# ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd
On Thu, Apr 21, 2016 at 1:39 PM, Budur Nagaraju wrote:
> HI
>
> Installed hosted engine and after configuring IP to ovirt engine unable to
> ping the gateway and found that there is no issue with the Network .
You mean that from inside the engine machine, you cannot ping the
Hi,
I installed the hosted engine, and after configuring the IP on the oVirt
engine I am unable to ping the gateway, though I found no issue with the
network itself. Is there anything I am missing while installing the
hosted engine? Below are the output details:
--== CONFIGURATION PREVIEW ==--
Engine FQDN
On 20 April 2016 18:29:26 GMT+07:00, Yedidyah Bar David
wrote:
>On Wed, Apr 20, 2016 at 1:42 PM, Wee Sritippho
>wrote:
>> Hi Didi & Martin,
>>
>> I followed your instructions and are able to add the 2nd host. Thank
>you :)
>>
>> This is
On Thu, Apr 21, 2016 at 11:40 AM, Budur Nagaraju wrote:
> HI
>
> I have just modified the below ones.
>
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key
>
> to
>
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub
>
> will this affect to the
Hi,
I have just modified the below:
https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key
to
https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub
Will this affect the setup?
On Thu, Apr 21, 2016 at 1:48 PM, Richard Neuboeck
wrote:
On 21/04/16 10:01, Budur Nagaraju wrote:
> Hi
>
> Getting the below error while installing the hosted engine
> packages,installing oVirt3.5.
>
>
>
> Total size: 76 M
> Installed size: 281 M
> Downloading Packages:
> warning: rpmts_HdrFromFdno: Header V4 RSA/SHA256 Signature, key ID
> d5dc52dc:
Hi,
it seems that in the latest version(s) of glusterfs the pub.key file
is not present even though some yum repo configurations still try to
access it.
Last time I had this problem I updated the repo configuration
(/etc/yum.repos.d/ovirt-3.6-dependencies.repo) to use the other key
(rsa.pub).
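Richard's manual edit can be sketched as a one-liner (assumption: the repo file references the gluster pub.key URL exactly as quoted above; a .bak backup is kept):

```shell
# Rewrites the dead pub.key URL to rsa.pub in the given yum repo file.
fix_gluster_key_url() {
  sed -i.bak 's|glusterfs/LATEST/pub.key|glusterfs/LATEST/rsa.pub|g' "$1"
}

# Example (path from the message above):
#   fix_gluster_key_url /etc/yum.repos.d/ovirt-3.6-dependencies.repo
```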
Hi
Getting the below error while installing the hosted engine
packages; installing oVirt 3.5.
Total size: 76 M
Installed size: 281 M
Downloading Packages:
warning: rpmts_HdrFromFdno: Header V4 RSA/SHA256 Signature, key ID
d5dc52dc: NOKEY
Retrieving key from
Hi everybody,
I added the procedure to the wiki, if you would be so kind to review it.
https://github.com/oVirt/ovirt-site/pull/188
Thanks
Martin
On Wed, Apr 20, 2016 at 1:29 PM, Yedidyah Bar David wrote:
> On Wed, Apr 20, 2016 at 1:42 PM, Wee Sritippho
Hello All,
my setup of a hosted engine on Centos7.2 hangs on : [ INFO ] Connecting
Storage Pool
In the log file I see a lot of items with "ok":
2016-04-20 13:04:01 DEBUG
otopi.plugins.ovirt_hosted_engine_setup.storage.storage
storage._activateStorageDomain:1067
Hi,
can you please attach the whole logs?
On Wed, Apr 20, 2016 at 1:15 PM, Johan Vermeulen wrote:
> Hello All,
>
> my setup of a hosted engine on Centos7.2 hangs on : [ INFO ] Connecting
> Storage Pool
> In the log file I see a lot of items with "ok":
>
> 2016-04-20
On Wed, Apr 20, 2016 at 1:42 PM, Wee Sritippho wrote:
> Hi Didi & Martin,
>
> I followed your instructions and are able to add the 2nd host. Thank you :)
>
> This is what I've done:
>
> [root@host01 ~]# hosted-engine --set-maintenance --mode=global
>
> [root@host01 ~]#
Hi Didi & Martin,
I followed your instructions and was able to add the 2nd host. Thank you :)
This is what I've done:
[root@host01 ~]# hosted-engine --set-maintenance --mode=global
[root@host01 ~]# systemctl stop ovirt-ha-agent
[root@host01 ~]# systemctl stop ovirt-ha-broker
[root@host01
> And we also do not clean on upgrades... Perhaps we can? Should? Optionally?
>
We can't. We do not execute any setup tool during upgrade and the
clean procedure
requires that all hosted engine tooling is shut down.
Martin
On Wed, Apr 20, 2016 at 11:40 AM, Yedidyah Bar David
On Wed, Apr 20, 2016 at 11:40 AM, Martin Sivak wrote:
>> Doesn't cleaning sanlock lockspace require also to stop sanlock itself?
>> I guess it's supposed to be able to handle this, but perhaps users want
>> to clean the lockspace because dirt there causes also problems with
>>
> Doesn't cleaning sanlock lockspace require also to stop sanlock itself?
> I guess it's supposed to be able to handle this, but perhaps users want
> to clean the lockspace because dirt there causes also problems with
> sanlock, no?
Sanlock can be up, but the lockspace has to be unused.
> So the
On Wed, Apr 20, 2016 at 11:20 AM, Martin Sivak wrote:
>> after moving to global maintenance.
>
> Good point.
>
>> Martin - any advantage of this over '--reinitialize-lockspace'? Besides
>> that it works also in older versions? Care to add this to the howto page?
>
> Reinitialize
> after moving to global maintenance.
Good point.
> Martin - any advantage of this over '--reinitialize-lockspace'? Besides
> that it works also in older versions? Care to add this to the howto page?
Reinitialize lockspace clears the sanlock lockspace, not the metadata
file. Those are two
On Wed, Apr 20, 2016 at 11:11 AM, Martin Sivak wrote:
>> Assuming you never deployed a host with ID 52, this is likely a result of a
>> corruption or dirt or something like that.
>
>> I see that you use FC storage. In previous versions, we did not clean such
>> storage, so you
> Assuming you never deployed a host with ID 52, this is likely a result of a
> corruption or dirt or something like that.
> I see that you use FC storage. In previous versions, we did not clean such
> storage, so you might have dirt left.
This is the exact reason for an error like yours:
MetadataError: Metadata version 2 from host 52 too new for this agent
(highest compatible version: 1)
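The fix that grew out of this thread (and the wiki procedure mentioned later) boils down to clearing the stale whiteboard entry for the bogus host. A hedged sketch; verify the exact flags against `hosted-engine --help` on your version, and run it while the cluster is in global maintenance. With DRY_RUN=1 the commands are only printed:

```shell
# DRY_RUN=1 (default) prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

run hosted-engine --set-maintenance --mode=global
run hosted-engine --clean-metadata --host-id=52   # the bogus host from the error
run hosted-engine --set-maintenance --mode=none
```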
----- Original Message -----
From: "Yedidyah Bar David" <d...@redhat.com>
To: "Wee Sritippho" <we...@forest.go.th>
Cc: "users" <users@ovirt.org>
Sent: Wednesday
On Wed, Apr 20, 2016 at 7:15 AM, Wee Sritippho wrote:
> Hi,
>
> I used CentOS-7-x86_64-Minimal-1511.iso to install the hosts and the engine.
>
> The 1st host and the hosted-engine were installed successfully, but the 2nd
> host failed with this error message:
>
> "Failed to
Hi,
I used CentOS-7-x86_64-Minimal-1511.iso to install the hosts and the engine.
The 1st host and the hosted-engine were installed successfully, but the
2nd host setup failed with this error message:
"Failed to execute stage 'Setup validation': Metadata version 2 from
host 52 too new for
On Fri, Apr 15, 2016 at 2:33 PM, Paul Groeneweg | Pazion wrote:
> Thanks for the help!
>
> I managed to fix it :-)
>
> I made a device from the hosted engine file with losetup and for the LVM
> again.
> From there I was able to fsck my partition. And answered a lot of y to the
>
onal storage and non-hosted engine hosts added etc.
>> >>> >>>
>> >>> >>> Additional VMs added to hosted-engine storage (oVirt Reports VM
>> and
>> >>> >>> Cinder VM). Additional VM's are hosted by other storage - cinder
>>
Thanks for the help!
I managed to fix it :-)
I made a device from the hosted engine file with losetup and activated
the LVM again.
From there I was able to fsck my partition, answering 'y' to the long
list of inconsistencies.
I removed the loopback devices.
I turned off maintenance mode and
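Paul's recovery can be reconstructed as a hedged sketch: the image path and the VG/LV names below are placeholders, and with DRY_RUN=1 (the default) the commands are only printed instead of touching any devices.

```shell
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

IMG=/path/to/hosted-engine-disk.img   # placeholder: the engine VM disk file

run losetup -f --show "$IMG"          # expose the image as a block device
run vgscan                            # let LVM discover the VM's volume group
run vgchange -ay                      # activate its logical volumes
run fsck -y /dev/vg_engine/lv_root    # placeholder VG/LV names; -y answers yes
run vgchange -an                      # deactivate before detaching
run losetup -d /dev/loop0             # assumes loop0 was the device losetup picked
```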
On Fri, Apr 15, 2016 at 11:02 AM, Paul Groeneweg | Pazion
wrote:
> Thanks!
>
> I managed to get the console through:
>
> hosted-engine --add-console-password
> /bin/remote-viewer vnc://localhost:5900
>
> Turns out, there seems to be some corruption on the partition:
>
Thanks!
I managed to get the console through:
hosted-engine --add-console-password
/bin/remote-viewer vnc://localhost:5900
Turns out, there seems to be some corruption on the partition:
http://screencast.com/t/6iR0U3QuI
Is there a way to boot from CD, so I can start rescue mode?
Op vr 15
Hi,
you can access the console using vnc or use virsh to get access to the
serial console.
Check the following commands on the host where the VM is currently running:
virsh -r list
virsh -r console HostedEngine
virsh -r vncdisplay HostedEngine
Those should give you enough pointers to connect
Tonight my server with NFS hosted-engine mount crashed.
Now all is back online ,except the hosted engine. I can't ping or ssh the
machine
when I do hosted-engine --vm-status, I get:
..
--== Host 2 status ==--
Status up-to-date : True
Hostname
> >>> Cinder VM). Additional VM's are hosted by other storage - cinder
> and
> >>> >>> NFS.
> >>> >>>
> >>> >>> The system is in production.
> >>> >>>
> >>> >>>
> >>> >>> Engine can be migrated around with the
n h2 - h3 into maintenance (local) upgrade and
>>> >>> Reboot
>>> >>> h3 - No issues - Local maintenance removed from h3.
>>> >>>
>>> >>> - Engine placed on h3 - h2 into maintenance (local) upgrade and
>>> >>> Reb
From: Sandro Bonazzola <sbona...@redhat.com>
Sent: Monday, 11 April 2016 7:11 PM
To: Richard Neuboeck; Simone Tiraboschi; Roy Golan; Martin Sivak;
Sahina Bose
Cc: Bond, Darryl; users
Subject: Re: [ovirt-users] Hosted engine on gluster problem
On Mon, Apr 11, 2016 at 9:37 AM, Richard Neuboeck
mal BIOS probing)
> >>>
> >>> - Engine starts after h1 comes back and stabilises
> >>>
> >>> - VM(cinder) unpauses itself, VM(reports) continued fine the whole
> time.
> >>> I can do no diagnosis on the 2 VMs as the engine is not
ue is with gluster itself as the volume remains
>>> accessible on all hosts during this time albeit with a missing server
>>> (gluster volume status) as each gluster server is rebooted.
>>>
>>> Gluster was upgraded as part of the process, no issues were seen here.
>
sue without the upgrade by following
> the same sort of timeline.
>
>
>
On 04/11/2016 11:11 AM, Sandro Bonazzola wrote:
> On Mon, Apr 11, 2016 at 9:37 AM, Richard Neuboeck
> > wrote:
>
> Hi Darryl,
>
> I'm still experimenting with my oVirt installation so I tried to
> recreate the problems you've
On Mon, Apr 11, 2016 at 9:37 AM, Richard Neuboeck
wrote:
> Hi Darryl,
>
> I'm still experimenting with my oVirt installation so I tried to
> recreate the problems you've described.
>
> My setup has three HA hosts for virtualization and three machines
> for the gluster
Hi Darryl,
I'm still experimenting with my oVirt installation so I tried to
recreate the problems you've described.
My setup has three HA hosts for virtualization and three machines
for the gluster replica 3 setup.
I manually migrated the Engine from the initial install host (one)
to host
There seems to be a pretty severe bug with using hosted engine on gluster.
If the host that was used as the initial hosted-engine --deploy host goes away,
the engine VM will crash and cannot be restarted until the host comes back.
This is regardless of which host the engine was currently
Hi,
What are you trying to accomplish? Hosted engine is a special case
where the management is itself just a VM that can be migrated across
the cluster. But it does not make much sense when you only have one
server.
But to describe the requirements:
You should generally have a network subnet
On Tue, Mar 29, 2016 at 3:32 PM, Taste-Of-IT wrote:
> Hello,
>
> one question from a beginner. The case, i rent a server with 1 public
> IP-Address. Is it possible to setup oVirt as Hosted-Engine via ssh and vnc
> remote with only 1 public IP-Address, or need i a second
Hello,
a question from a beginner. The situation: I rent a server with one
public IP address. Is it possible to set up oVirt as Hosted-Engine
remotely via ssh and vnc with only one public IP address, or do I need a
second public IP address for the engine? Perhaps it is about a host entry or something
Hi Didi,
It was indeed the iptables issue. I forgot to open the udp ports.
Here are the versions of relevant packages:
Host:
libgovirt-0.3.3-1.el7_2.1.x86_64
ovirt-engine-appliance-3.6-20160301.1.el7.centos.noarch
ovirt-release36-005-1.noarch
ovirt-vmconsole-1.0.0-1.el7.centos.noarch
On Wed, Mar 23, 2016 at 6:40 AM, Wee Sritippho wrote:
> Hi,
>
> I'm installing oVirt hosted-engine using a fibre channel storage. During the
> deployment I found this error:
>
> [ ERROR ] The VDSM host was found in a failed state. Please check engine
> and bootstrap
tication. I can live
with this for the moment, but hopefully the bug can be fixed soon.
Thanks for the quick responses,
Regards,
Paul
-Original Message-
From: Ondra Machacek [mailto:omach...@redhat.com]
Sent: donderdag 17 maart 2016 19:12
To: Paul <p...@kenla.nl>; users@ovirt.org
Subject: Re: [ovirt-u
redentials)
Any suggestions?
-Original Message-
From: Ondra Machacek [mailto:omach...@redhat.com]
Sent: donderdag 17 maart 2016 16:58
To: Paul <p...@kenla.nl>; users@ovirt.org
Subject: Re: [ovirt-users] Hosted engine Single Sign-On to VM with freeIPA
not working
Hi,
your authz name should m
Hi,
your authz name should match kerberos name.
So please change your authz name from 'DOMAIN-authz' to 'DOMAIN'
Please see this bz[1] for more detail.
Ondra
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1133137#c7
On 03/17/2016 04:22 PM, Paul wrote:
Hi,
I am having an issue with getting
Hi:
In 3.5, I can add a nic for the hosted engine vm by following these
steps:
- Created the network in the UI
- hosted-engine --set-maintenance --mode=global
- edited /etc/ovirt-hosted-engine/vm.conf; duplicated the existing network
line, changing the macAddr, network, deviceId, and
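For reference, a 3.5-era vm.conf nic line looks roughly like the following. This is a hedged illustration with made-up MAC, network name, deviceId, and PCI slot; copy the existing line from your own vm.conf and change exactly the fields listed above:

```
devices={nicModel:pv,macAddr:00:16:3e:00:00:02,linkActive:true,network:mynet2,specParams:{},deviceId:00000000-0000-0000-0000-000000000002,address:{bus:0x00,slot:0x04,domain:0x0000,type:pci,function:0x0},device:bridge,type:interface}
```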
Hi,
I am having an issue with getting SSO to work when a standard user(UserRole)
logs in to the UserPortal.
The user has permission to use only this VM, so after login the console is
automatically opened for that VM.
Problem is that it doesn't login on the VM system with the provided
On Thu, Mar 17, 2016 at 11:34 AM, Wee Sritippho wrote:
> Hi,
>
> I setup the host's network while installing CentOS 7 (GUI), so the network
> configuration is like this:
>
> eno1 --> bond0_slave1 --\
> |--> bond0
> eno2 --> bond0_slave2 --/
>
> After
On Mon, Mar 14, 2016 at 7:09 PM, Christophe TREFOIS
wrote:
> This procedure is what makes so scared.
>
> Restoring a backup, usually, ends up in cataclysmic nightmares. Maybe not so
> in oVirt :)
>
> Is there a recommended way to test restoring an ovirt-engine backup to
This procedure is what makes me so scared.
Restoring a backup usually ends up in cataclysmic nightmares. Maybe not so in
oVirt :)
Is there a recommended way to test restoring an ovirt-engine backup to see if
it would “fail” or “work” in production?
On 14 Mar 2016, at 07:57, Sandro Bonazzola
On Mon, Mar 14, 2016 at 8:59 AM, Sandro Bonazzola wrote:
>
>
> On Thu, Feb 25, 2016 at 7:58 AM, Wee Sritippho wrote:
>>
>> Hi,
>>
>> I'm trying to deploy a 2nd host to my hosted-engine environment, but the
>> 1st host doesn't have a root password - it
On Thu, Feb 25, 2016 at 7:58 AM, Wee Sritippho wrote:
> Hi,
>
> I'm trying to deploy a 2nd host to my hosted-engine environment, but the
> 1st host doesn't have a root password - it only has a sudo account.
this kind of configuration is not supported by Hosted Engine.
On Fri, Mar 4, 2016 at 6:27 PM, Pat Riehecky wrote:
> I'm on oVirt 3.6
>
> I'd like to migrate my hosted engine storage to another location and have
> a few questions:
>
There is no special procedure to migrate the Hosted Engine storage to a new
one.
A possible way to do
On Mon, Mar 7, 2016 at 3:39 PM, Pat Riehecky wrote:
> Is there a way to configure the hosted engine to only use SPICE and not
> VNC?
>
At setup time hosted-engine-setup asked you
Please specify the console type you would like to use to
connect to the VM (vnc,
Is there a way to configure the hosted engine to only use SPICE and not VNC?
/usr/libexec/qemu-kvm -name HostedEngine -S -machine
rhel6.5.0,accel=kvm,usb=off -cpu qemu64,-svm -m 4096 -realtime
-device
I'm on oVirt 3.6
I'd like to migrate my hosted engine storage to another location and
have a few questions:
(a) what is the right syntax for glusterfs in
/etc/ovirt-hosted-engine/hosted-engine.conf? (I'm currently on nfs3)
(b) what is the right syntax for fibre channel?
(c) where are
On Fri, Feb 26, 2016 at 3:40 AM, Wee Sritippho wrote:
> Hi,
>
> I'm trying to deploy a 2nd host to my hosted-engine environment, but at some
> point, the setup ask me to type a password for admin@internal again. Do I
> need to type the same password that I choose when
Hi,
I'm trying to deploy a 2nd host to my hosted-engine environment, but at
some point the setup asks me to type a password for admin@internal
again. Do I need to type the same password that I chose when deploying
the 1st host? If not, would it replace the old password?
Thank you,
Wee
---
Hi,
Wow. How dumb of me. I just realized that I answered "Yes" to this
configuration question:
iptables was detected on your computer, do you wish setup to configure
it? (Yes, No)[Yes]:
So the hosted-engine setup configured my empty iptables to allow just
some necessary ports (excluding
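For anyone comparing: the rule set hosted-engine-setup writes covers roughly the ports below. This is a best-recollection sketch for 3.6, so verify it against the generated /etc/sysconfig/iptables; anything else the box serves (e.g. NFS or gluster storage exported from the same host) has to be added by hand.

```
-A INPUT -p tcp --dport 22 -j ACCEPT            # ssh
-A INPUT -p tcp --dport 54321 -j ACCEPT         # vdsm
-A INPUT -p tcp --dport 16514 -j ACCEPT         # libvirt tls
-A INPUT -p tcp --dport 5900:6923 -j ACCEPT     # vnc/spice consoles
-A INPUT -p tcp --dport 49152:49216 -j ACCEPT   # qemu live migration
```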
The error indicates : OSError: [Errno 30] Read-only file system
Can you check the output of "gluster volume status gv0" on
host01.ovirt.forest.go.th. Please make sure that firewall is not
blocking gluster ports from communicating on the 3 nodes.
On a different note, since you are using gv0
I am having problems getting the hosted-engine storage domain imported into
the web interface.
I upgrade my hosts from el6 to el7.
I am running oVirt 3.6.2
The storage domain is locked: http://screencast.com/t/gzBarFhH0
I am unable to attach in the datacenter as it says "There are no compatible
Hi,
I've updated from 3.6.1 to 3.6.2 and was hoping that the issue with
the hosted engine storage on FC not being imported would be solved.
It managed to start the import but now my storage has been stuck in
"Locked" for weeks. None of the options in the web UI are available
except right click
I cannot get the HE to run on node1. I did 'hosted-engine --vm-shutdown'
on node2. After it was down I did 'hosted-engine --vm-start' on node1
but did not even get a qemu process. I did 'hosted-engine
--vm-poweroff' on node1 and 'hosted-engine --vm-start' on node2 and got
it up and running
Adding Martin
On Thu, Jan 28, 2016 at 5:54 AM, Peter wrote:
>
> I am running oVirt 3.6.2 (original install was 3.6) hosted-engine on a couple
> of Centos 7.2 servers with SAS attached storage using the new FC support to
> connect to the LUNs.
>
> Neither the hosted-engine
Hi,
we really need more logs. Preferably the full agent, vdsm and engine
log around the time the migration is attempted.
The warn/error messages are all related to the fact that hosted engine
runs in the "3.5 mode" with the storage domain not being imported to
the engine yet.
--
Martin Sivak
Martin,
The current state is HE on node2 and all other VMs on node1. Node2 is
in local maintenance as of last night and HE should have migrated but
can't. The requested logs for this situation are at
ftp://aftp.fsl.noaa.gov/divisions/its/peter/ovirt-logs-201601028a/.
I will try shutting down
I am running oVirt 3.6.2 (original install was 3.6) hosted-engine on a couple
of Centos 7.2 servers with SAS attached storage using the new FC support to
connect to the LUNs.
Neither the hosted-engine storage nor the hosted-engine VM show up in the GUI.
I know there have been a lot of bugs