Re: [ovirt-users] Question about taking glusterfs storage offline

2016-04-28 Thread Sahina Bose
In earlier versions, host maintenance did not stop gluster-related 
services. With 3.6, when putting a host into maintenance there's an option 
to also stop all gluster-related services.

Use this option to shut down your gluster services from oVirt.
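
If you need to do the same thing by hand on each gluster node, a rough
equivalent from the shell would be something like the sketch below (assuming
EL7 hosts with systemd, and that all VMs using the volumes are already stopped
or moved; <VOLNAME> is a placeholder for your volume name, so treat this as a
starting point rather than the exact steps oVirt performs):

systemctl stop glusterd               # management daemon
pkill glusterfs                       # self-heal / client helper processes, if any
pkill glusterfsd                      # brick processes

# after the hardware work:
systemctl start glusterd
gluster volume status                 # bricks should come back online
gluster volume heal <VOLNAME> info    # wait until no entries remain before
                                      # taking down the next node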

On 04/28/2016 08:32 PM, Edward Clay wrote:

Hello,  I have a 2 node replicated glusterfs storage cluster configured
that shows up under storage.  I need to take both of these glusterfs
nodes down to perform some hardware upgrades.  I'm wondering if I need
to put this storage into maintenance mode or if putting each host into
maintenance mode is good enough?  The reason I'm asking is the last
time I took one of these glusterfs servers down leaving the other up we
ended up in a split brain state that I'm trying to avoid.  Once the
hardware upgrade has been performed I will be adding a 3rd brick/node
to this config which should make things a bit happier.

Thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to power off the vm

2016-04-28 Thread Budur Nagaraju
Thank you very much for your support. I was able to kill the process and can now
perform all the functions in the UI.

On Thu, Apr 28, 2016 at 6:01 PM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

>
> On 28 Apr 2016, at 14:18, Budur Nagaraju  wrote:
>
> Not able to access through console ,SSH, even the migration option is not
> getting highlighted unable to perform any actions.
>
> To reboot host I need to migrate remaining vms to other host , that is
> time consuming.
>
> Any commands to kill the process without rebooting  the host?
>
> find the right qemu process. it should have the vm name on the command line
> then kill -9, if it helps then it might be ok and you can start the VM
> again.
> if you don’t know how to do that then really the best option is to migrate
> all other vms away and reboot
>
>
> On Apr 28, 2016 5:42 PM, "Michal Skrivanek" 
> wrote:
>
>
> On 28 Apr 2016, at 14:11, Budur Nagaraju  wrote:
>
> Earlier it was working ,now  not able to power on/off  shutdown. deploy in
> another host etc.
>
>
> I don’t mean in ovirt, I mean the guest itself. Can you get to the
> console? Can you ssh to that guest? Does it do anything?
> if so it might be worth trying to save it (e.g. migrate), if not, just
> kill it from the host…or migrate everything else away and reboot the host
>
> On Apr 28, 2016 5:38 PM, "Michal Skrivanek" 
> wrote:
>
>>
>> On 28 Apr 2016, at 14:01, Budur Nagaraju  wrote:
>>
>> ovirt node is having 50vms and one  VM is having issues, by restarting
>> libvirt will the  other vms get affect? And am not getting the option to
>> delete.
>>
>>
>> it will not affect the running VMs, they will keep running
>> again, does that one VM actually work?
>>
>> On Apr 28, 2016 5:26 PM, "Michal Skrivanek" 
>> wrote:
>>
>>>
>>> > On 28 Apr 2016, at 13:49, Budur Nagaraju  wrote:
>>> >
>>> > Any commands to check the same ?
>>>
>>> so does the VM actually work?
>>> what’s the status of the process?
>>>
>>> if it works, restart libvirtd (that will induce a vdsm restart as well),
>>> and check if it makes any difference. If not then I guess you’re out of
>>> luck and you can try to kill the qemu process yourself…or reboot the box
>>>
>>> > On Apr 28, 2016 5:10 PM, "Michal Skrivanek" <
>>> michal.skriva...@redhat.com>
>>> > wrote:
>>> >
>>> >>
>>> >>> On 28 Apr 2016, at 10:03, Budur Nagaraju  wrote:
>>> >>>
>>> >>> HI
>>> >>>
>>> >>> One of the vm is showing "?" and  unable to perform any actions below
>>> >> are the logs ,let me know is there any ways to bring it back ?
>>> >>
>>> >> then it’s probably broken in lower layers. check/add vdsm.log from
>>> that
>>> >> period, but it is likely that libvirt lost control over the qemu
>>> process.
>>> >> You may want to check that particular qemu process if it is alright
>>> or .g
>>>
>>>
>>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Open in Full Screen

2016-04-28 Thread Colin Coe
Hi all

Is there a setting for globally turning on the console option of "Open in
Full Screen"?

Also, can the "Connect Automatically" be permanently and globally disabled?

We're on RHEV 3.5.7

Thanks

CC
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Fwd: Having issues with Hosted Engine

2016-04-28 Thread Luiz Claudio Prazeres Goncalves
Hi Simone, I was reviewing the changelog of 3.6.6 at the link below, but I
was not able to find the bug (https://bugzilla.redhat.com/1327516) listed as
fixed. According to Bugzilla the target really is 3.6.6, so what's
wrong?


http://www.ovirt.org/release/3.6.6/


Thanks
Luiz

On Thu, 28 Apr 2016 at 11:33, Luiz Claudio Prazeres Goncalves <
luiz...@gmail.com> wrote:

> Nice!... so, I'll survive a bit more with these issues until the version
> 3.6.6 gets released...
>
>
> Thanks
> -Luiz
>
> 2016-04-28 4:50 GMT-03:00 Simone Tiraboschi :
>
>> On Thu, Apr 28, 2016 at 8:32 AM, Sahina Bose  wrote:
>> > This seems like issue reported in
>> > https://bugzilla.redhat.com/show_bug.cgi?id=1327121
>> >
>> > Nir, Simone?
>>
>> The issue is here:
>> MainThread::INFO::2016-04-27
>>
>> 03:26:27,185::storage_server::229::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(disconnect_storage_server)
>> Disconnecting storage server
>> MainThread::INFO::2016-04-27
>>
>> 03:26:27,816::upgrade::983::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(fix_storage_path)
>> Fixing storage path in conf file
>>
>> And it's tracked here: https://bugzilla.redhat.com/1327516
>>
>> We already have a patch, it will be fixed with 3.6.6
>>
>> As far as I saw this issue will only cause a lot of mess in the logs
>> and some false alert but it's basically harmless
>>
>> > On 04/28/2016 05:35 AM, Luiz Claudio Prazeres Goncalves wrote:
>> >
>> >
>> > Hi everyone,
>> >
>> > Until today my environment was fully updated (3.6.5+centos7.2) with 3
>> nodes
>> > (kvm1,kvm2 and kvm3 hosts) . I also have 3 external gluster nodes
>> > (gluster-root1,gluster1 and gluster2 hosts ) , replica 3, which the
>> engine
>> > storage domain is sitting on top (3.7.11 fully updated+centos7.2)
>> >
>> > For some weird reason i've been receiving emails from oVirt with
>> > EngineUnexpectedDown (attached picture) on a daily basis more or less,
>> but
>> > the engine seems to be working fine and my vm's are up and running
>> normally.
>> > I've never had any issue to access the User Interface to manage the vm's
>> >
>> > Today I ran "yum update" on the nodes and realised that vdsm was
>> outdated,
>> > so I updated the kvm hosts and they are now , again, fully updated.
>> >
>> >
>> > Reviewing the logs It seems to be an intermittent connectivity issue
>> when
>> > trying to access the gluster engine storage domain as you can see
>> below. I
>> > don't have any network issue in place and I'm 100% sure about it. I have
>> > another oVirt Cluster using the same network and using an engine storage
>> > domain on top of an iSCSI Storage Array with no issues.
>> >
>> > Here seems to be the issue:
>> >
>> > Thread-::INFO::2016-04-27
>> > 23:01:27,864::fileSD::357::Storage.StorageDomain::(validate)
>> > sdUUID=03926733-1872-4f85-bb21-18dc320560db
>> >
>> > Thread-::DEBUG::2016-04-27
>> > 23:01:27,865::persistentDict::234::Storage.PersistentDict::(refresh)
>> read
>> > lines (FileMetadataRW)=[]
>> >
>> > Thread-::DEBUG::2016-04-27
>> > 23:01:27,865::persistentDict::252::Storage.PersistentDict::(refresh)
>> Empty
>> > metadata
>> >
>> > Thread-::ERROR::2016-04-27
>> > 23:01:27,865::task::866::Storage.TaskManager.Task::(_setError)
>> > Task=`d2acf575-1a60-4fa0-a5bb-cd4363636b94`::Unexpected error
>> >
>> > Traceback (most recent call last):
>> >
>> >   File "/usr/share/vdsm/storage/task.py", line 873, in _run
>> >
>> > return fn(*args, **kargs)
>> >
>> >   File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
>> >
>> > res = f(*args, **kwargs)
>> >
>> >   File "/usr/share/vdsm/storage/hsm.py", line 2835, in
>> getStorageDomainInfo
>> >
>> > dom = self.validateSdUUID(sdUUID)
>> >
>> >   File "/usr/share/vdsm/storage/hsm.py", line 278, in validateSdUUID
>> >
>> > sdDom.validate()
>> >
>> >   File "/usr/share/vdsm/storage/fileSD.py", line 360, in validate
>> >
>> > raise se.StorageDomainAccessError(self.sdUUID)
>> >
>> > StorageDomainAccessError: Domain is either partially accessible or
>> entirely
>> > inaccessible: (u'03926733-1872-4f85-bb21-18dc320560db',)
>> >
>> > Thread-::DEBUG::2016-04-27
>> > 23:01:27,865::task::885::Storage.TaskManager.Task::(_run)
>> > Task=`d2acf575-1a60-4fa0-a5bb-cd4363636b94`::Task._run:
>> > d2acf575-1a60-4fa0-a5bb-cd4363636b94
>> > ('03926733-1872-4f85-bb21-18dc320560db',) {} failed - stopping task
>> >
>> > Thread-::DEBUG::2016-04-27
>> > 23:01:27,865::task::1246::Storage.TaskManager.Task::(stop)
>> > Task=`d2acf575-1a60-4fa0-a5bb-cd4363636b94`::stopping in state preparing
>> > (force False)
>> >
>> > Thread-::DEBUG::2016-04-27
>> > 23:01:27,865::task::993::Storage.TaskManager.Task::(_decref)
>> > Task=`d2acf575-1a60-4fa0-a5bb-cd4363636b94`::ref 1 aborting True
>> >
>> > Thread-::INFO::2016-04-27
>> > 23:01:27,865::task::1171::Storage.TaskManager.Task::(prepare)
>> > Task=`d2acf575-1a60-4fa0-a5bb-cd4363636b94`::aborting: Task is aborted:
>> > 

Re: [ovirt-users] audit_log table performance tuning

2016-04-28 Thread Marina Kalinin


- Original Message -
> On 15.04.2016 00:12, Marina Kalinin wrote:
> > Hi,
> > 
> > Any suggestions or maybe already available features in the pipeline for
> > tuning the database, and specifically the audit_log table?
> > 
> > The problem today is that, with multiple applications accessing the engine
> > through the REST API, especially deployments with CloudForms, a huge
> > amount of login records is created in the audit_log table. This, in turn, consumes most
> > of the available memory on the machine running the engine and the database
> > and results in terrible performance of the engine and an inaccessible Web UI.
> > 
> > The solution today is to delete those records from the table [1]:
> > => delete from audit_log where message like '%logged%';
> > 
> > 
> > Are there any current tunings we can apply to the database?
> > And if not - do we have any RFEs on limiting the records entered to the
> > database or a way to delete/filter those records somehow from the WebUI?
> > All I could find was RFE#1120659 [2], but it does not describe the exact
> > issue.
> > 
> > 
> 
> Hi,
> 
> I remember filing a BZ about this topic some years ago.
> 
> I will mail it tomorrow to this thread as I had this exact issue, as a
> user of the rest api (without any persistent authentication).
Sven, Did you find anything?

I still cannot get my head around the right solution for this case, so I would 
appreciate seeing your bug.
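
In the meantime, one way to at least keep the table from growing without bound
is to schedule the delete from the quoted workaround above. A minimal sketch
(it assumes the default 'engine' database name, a local postgres superuser, and
an audit_log.log_time timestamp column; please verify all of that against your
own setup before using it):

#!/bin/sh
# e.g. /etc/cron.daily/purge-audit-log-logins (sketch only)
# keep two weeks of login/logout records, drop anything older
su - postgres -c "psql engine" <<'SQL'
DELETE FROM audit_log
 WHERE message LIKE '%logged%'
   AND log_time < now() - interval '14 days';
SQL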
> 
> the answer, as I recall (I haven't access to this BZ atm), was to simply
> truncate the event log, which is far from a "solution" at all.
> 
> kind regards
> 
> Sven
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 

-- 
--
 mku
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Errors while trying to join an external LDAP provider

2016-04-28 Thread Ondra Machacek

On 04/28/2016 06:02 PM, Alexis HAUSER wrote:




pool.default.ssl.truststore.file = /tmp/.jks


Maybe trailing space here ^ ?


pool.default.ssl.truststore.password = 



Sadly it doesn't help



So please also ensure that the file '/tmp/.jks' is readable by the ovirt 
user. The configuration looks fine.
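
Two quick checks that might help here (just a sketch; the keystore path and
password below are placeholders for whatever pool.default.ssl.truststore.file
and pool.default.ssl.truststore.password actually point at):

KS=/tmp/your-truststore.jks                    # placeholder path
sudo -u ovirt test -r "$KS" && echo readable || echo "NOT readable by ovirt"
keytool -list -keystore "$KS" -storepass 'your-password' | head
ls -lZ "$KS"                                   # SELinux context can also get in the way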

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] node unresponsive after reboot

2016-04-28 Thread Campbell McLeay
Hi,

I have a two node + engine ovirt setup, and I was having problems
doing a live migration between nodes. I looked in the vdsm logs and
noticed selinux errors, so I checked the selinux config, and both the
ovirt-engine host and one of the nodes had selinux disabled. So I
thought I would enable it on these two hosts, as it is officially
supported anyway. I started with the node, and put it into maintenance
mode, which interestingly, migrated the VMs off to the other node
without issue. After modifying the selinux config, I then rebooted
that node, which came back up. I then tried to activate the node but
it fails and marks it as unresponsive. From the log:


--8<--

2016-04-28 16:34:31,326 INFO
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp
Reactor) [29acb18b] Connecting to
kvm-ldn-02.ldn.framestore.com/172.16.75.189
2016-04-28 16:34:31,327 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler_Worker-32) [ac322cb] Command
'GetCapabilitiesVDSCommand(HostName = kvm-ldn-02,
VdsIdAndVdsVDSCommandParametersBase:{runAsync='true',
hostId='b12c0b80-d64d-42fd-8a55-94f92b9ca3aa',
vds='Host[kvm-ldn-02,b12c0b80-d64d-42fd-8a55-94f92b9ca3aa]'})'
execution failed:
org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection
failed
2016-04-28 16:34:31,327 ERROR
[org.ovirt.engine.core.vdsbroker.HostMonitoring]
(DefaultQuartzScheduler_Worker-32) [ac322cb] Failure to refresh Vds
runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException:
Connection failed
2016-04-28 16:34:31,327 ERROR
[org.ovirt.engine.core.vdsbroker.HostMonitoring]
(DefaultQuartzScheduler_Worker-32) [ac322cb] Exception:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection
failed
at 
org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.createNetworkException(VdsBrokerCommand.java:157)
[vdsbroker.jar:]
at 
org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:120)
[vdsbroker.jar:]
at 
org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:65)
[vdsbroker.jar:]
at 
org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33)
[dal.jar:]
at 
org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:467)
[vdsbroker.jar:]
at 
org.ovirt.engine.core.vdsbroker.VdsManager.refreshCapabilities(VdsManager.java:652)
[vdsbroker.jar:]
at 
org.ovirt.engine.core.vdsbroker.HostMonitoring.refreshVdsRunTimeInfo(HostMonitoring.java:119)
[vdsbroker.jar:]
at 
org.ovirt.engine.core.vdsbroker.HostMonitoring.refresh(HostMonitoring.java:84)
[vdsbroker.jar:]
at 
org.ovirt.engine.core.vdsbroker.VdsManager.onTimer(VdsManager.java:227)
[vdsbroker.jar:]
at sun.reflect.GeneratedMethodAccessor120.invoke(Unknown
Source) [:1.8.0_71]
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_71]
at java.lang.reflect.Method.invoke(Method.java:497) [rt.jar:1.8.0_71]
at 
org.ovirt.engine.core.utils.timer.JobWrapper.invokeMethod(JobWrapper.java:81)
[scheduler.jar:]
at 
org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:52)
[scheduler.jar:]
at org.quartz.core.JobRunShell.run(JobRunShell.java:213) [quartz.jar:]
at 
org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557)
[quartz.jar:]
Caused by: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException:
Connection failed
at 
org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient.connect(ReactorClient.java:157)
[vdsm-jsonrpc-java-client.jar:]
at 
org.ovirt.vdsm.jsonrpc.client.JsonRpcClient.getClient(JsonRpcClient.java:114)
[vdsm-jsonrpc-java-client.jar:]
at 
org.ovirt.vdsm.jsonrpc.client.JsonRpcClient.call(JsonRpcClient.java:73)
[vdsm-jsonrpc-java-client.jar:]
at 
org.ovirt.engine.core.vdsbroker.jsonrpc.FutureMap.(FutureMap.java:68)
[vdsbroker.jar:]
at 
org.ovirt.engine.core.vdsbroker.jsonrpc.JsonRpcVdsServer.getCapabilities(JsonRpcVdsServer.java:268)
[vdsbroker.jar:]
at 
org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand.executeVdsBrokerCommand(GetCapabilitiesVDSCommand.java:15)
[vdsbroker.jar:]
at 
org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:110)
[vdsbroker.jar:]
... 14 more

--8<--

Any ideas?

Thanks,

Cam
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Errors while trying to join an external LDAP provider

2016-04-28 Thread Alexis HAUSER


> pool.default.ssl.truststore.file = /tmp/.jks

Maybe trailing space here ^ ?

> pool.default.ssl.truststore.password = 
>

Sadly it doesn't help
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Keyboard mapping VNC

2016-04-28 Thread Jonas Israelsson

we have one now and working on it!:)
https://bugzilla.redhat.com/show_bug.cgi?id=1331274

Missed this, so I too opened a bugzilla. My apologies
Feel free to dupe it --> https://bugzilla.redhat.com/show_bug.cgi?id=1331333

Rgds,
Jonas
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Question about taking glusterfs storage offline

2016-04-28 Thread Edward Clay
Hello,  I have a 2 node replicated glusterfs storage cluster configured
that shows up under storage.  I need to take both of these glusterfs
nodes down to perform some hardware upgrades.  I'm wondering if I need
to put this storage into maintenance mode or if putting each host into
maintenance mode is good enough?  The reason I'm asking is the last
time I took one of these glusterfs servers down leaving the other up we
ended up in a split brain state that I'm trying to avoid.  Once the
hardware upgrade has been performed I will be adding a 3rd brick/node
to this config which should make things a bit happier.

Thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Fwd: Having issues with Hosted Engine

2016-04-28 Thread Luiz Claudio Prazeres Goncalves
Nice!... so, I'll survive a bit more with these issues until the version
3.6.6 gets released...


Thanks
-Luiz

2016-04-28 4:50 GMT-03:00 Simone Tiraboschi :

> On Thu, Apr 28, 2016 at 8:32 AM, Sahina Bose  wrote:
> > This seems like issue reported in
> > https://bugzilla.redhat.com/show_bug.cgi?id=1327121
> >
> > Nir, Simone?
>
> The issue is here:
> MainThread::INFO::2016-04-27
>
> 03:26:27,185::storage_server::229::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(disconnect_storage_server)
> Disconnecting storage server
> MainThread::INFO::2016-04-27
>
> 03:26:27,816::upgrade::983::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(fix_storage_path)
> Fixing storage path in conf file
>
> And it's tracked here: https://bugzilla.redhat.com/1327516
>
> We already have a patch, it will be fixed with 3.6.6
>
> As far as I saw this issue will only cause a lot of mess in the logs
> and some false alert but it's basically harmless
>
> > On 04/28/2016 05:35 AM, Luiz Claudio Prazeres Goncalves wrote:
> >
> >
> > Hi everyone,
> >
> > Until today my environment was fully updated (3.6.5+centos7.2) with 3
> nodes
> > (kvm1,kvm2 and kvm3 hosts) . I also have 3 external gluster nodes
> > (gluster-root1,gluster1 and gluster2 hosts ) , replica 3, which the
> engine
> > storage domain is sitting on top (3.7.11 fully updated+centos7.2)
> >
> > For some weird reason i've been receiving emails from oVirt with
> > EngineUnexpectedDown (attached picture) on a daily basis more or less,
> but
> > the engine seems to be working fine and my vm's are up and running
> normally.
> > I've never had any issue to access the User Interface to manage the vm's
> >
> > Today I ran "yum update" on the nodes and realised that vdsm was
> outdated,
> > so I updated the kvm hosts and they are now , again, fully updated.
> >
> >
> > Reviewing the logs It seems to be an intermittent connectivity issue when
> > trying to access the gluster engine storage domain as you can see below.
> I
> > don't have any network issue in place and I'm 100% sure about it. I have
> > another oVirt Cluster using the same network and using an engine storage
> > domain on top of an iSCSI Storage Array with no issues.
> >
> > Here seems to be the issue:
> >
> > Thread-::INFO::2016-04-27
> > 23:01:27,864::fileSD::357::Storage.StorageDomain::(validate)
> > sdUUID=03926733-1872-4f85-bb21-18dc320560db
> >
> > Thread-::DEBUG::2016-04-27
> > 23:01:27,865::persistentDict::234::Storage.PersistentDict::(refresh) read
> > lines (FileMetadataRW)=[]
> >
> > Thread-::DEBUG::2016-04-27
> > 23:01:27,865::persistentDict::252::Storage.PersistentDict::(refresh)
> Empty
> > metadata
> >
> > Thread-::ERROR::2016-04-27
> > 23:01:27,865::task::866::Storage.TaskManager.Task::(_setError)
> > Task=`d2acf575-1a60-4fa0-a5bb-cd4363636b94`::Unexpected error
> >
> > Traceback (most recent call last):
> >
> >   File "/usr/share/vdsm/storage/task.py", line 873, in _run
> >
> > return fn(*args, **kargs)
> >
> >   File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
> >
> > res = f(*args, **kwargs)
> >
> >   File "/usr/share/vdsm/storage/hsm.py", line 2835, in
> getStorageDomainInfo
> >
> > dom = self.validateSdUUID(sdUUID)
> >
> >   File "/usr/share/vdsm/storage/hsm.py", line 278, in validateSdUUID
> >
> > sdDom.validate()
> >
> >   File "/usr/share/vdsm/storage/fileSD.py", line 360, in validate
> >
> > raise se.StorageDomainAccessError(self.sdUUID)
> >
> > StorageDomainAccessError: Domain is either partially accessible or
> entirely
> > inaccessible: (u'03926733-1872-4f85-bb21-18dc320560db',)
> >
> > Thread-::DEBUG::2016-04-27
> > 23:01:27,865::task::885::Storage.TaskManager.Task::(_run)
> > Task=`d2acf575-1a60-4fa0-a5bb-cd4363636b94`::Task._run:
> > d2acf575-1a60-4fa0-a5bb-cd4363636b94
> > ('03926733-1872-4f85-bb21-18dc320560db',) {} failed - stopping task
> >
> > Thread-::DEBUG::2016-04-27
> > 23:01:27,865::task::1246::Storage.TaskManager.Task::(stop)
> > Task=`d2acf575-1a60-4fa0-a5bb-cd4363636b94`::stopping in state preparing
> > (force False)
> >
> > Thread-::DEBUG::2016-04-27
> > 23:01:27,865::task::993::Storage.TaskManager.Task::(_decref)
> > Task=`d2acf575-1a60-4fa0-a5bb-cd4363636b94`::ref 1 aborting True
> >
> > Thread-::INFO::2016-04-27
> > 23:01:27,865::task::1171::Storage.TaskManager.Task::(prepare)
> > Task=`d2acf575-1a60-4fa0-a5bb-cd4363636b94`::aborting: Task is aborted:
> > 'Domain is either partially accessible or entirely inaccessible' - code
> 379
> >
> > Thread-::DEBUG::2016-04-27
> > 23:01:27,866::task::1176::Storage.TaskManager.Task::(prepare)
> > Task=`d2acf575-1a60-4fa0-a5bb-cd4363636b94`::Prepare: aborted: Domain is
> > either partially accessible or entirely inaccessible
> >
> >
> > Question: Anyone know what might be happening? I have several gluster
> > config's, as you can see below. All the storage domain are using the same
> > config's
> >
> >
> > More information:

Re: [ovirt-users] Max amount of datacenters per ovirt engine

2016-04-28 Thread Michal Skrivanek

> On 28 Apr 2016, at 14:39, jo...@familiealbers.nl wrote:
> 
> Ok that sounds promising. Does it require 3.6 at minimum?

yes, the 3.6 requires far less network bandwidth during stable conditions 
(nothing’s going on with VM status)

> We already handle starting vms using vdsm in case the network from 
> hypervisors to engine is down.

that’s…well, a bit tricky to do. Can you share how exactly? What compromises 
did you do?

> As well as disabling fencing to avoid a sloppy network to try and make 
> changes. I would like to reduce the comms between hosts and engine. Are there 
> any ideas you would have about that. 

fencing sometimes does crazy things indeed. There were also quite a few 
enhancements in 3.5/3.6, but iirc they are not default so you’d need to enable 
them (e.g. skipping fencing when storage lease is active)
there really shouldn’t be much else
some parameters might help in unstable conditions; I guess increasing 
vdsHeartbeatInSeconds would make a big difference in general, at the expense of 
slower detection of network outages (mostly relevant for fencing, which you don’t 
use, so it shouldn’t matter to you)
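
For example, something along these lines (a sketch only; double-check the option 
name and version string on your release, and note the engine has to be restarted 
to pick up the change):

engine-config -g vdsHeartbeatInSeconds                     # current value
engine-config -s vdsHeartbeatInSeconds=60 --cver=general   # 60 is just an example
systemctl restart ovirt-engine                             # required for the new value to apply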

> 
> Sent from my iPhone
> 
>> Op 28 apr. 2016 om 13:55 heeft Michal Skrivanek 
>>  het volgende geschreven:
>> 
>> 
 On 26 Apr 2016, at 16:48, Martin Sivak  wrote:
 
 @awels: to add another layer of indirection via a dedicated
 hosted-engine per outlet seems a little much. we are talking about 500 *
 4GB RAM at least in this example, so 2 TB RAM just for management
 purposes, if you follow engine hardware recommendations?
>>> 
>>> I would not go that far. Creating zones per continent (for example)
>>> might be enough.
>>> 
 At least RHEV states in the documentation you support up to 200 hosts
 per cluster alone.
>>> 
>>> The default configuration seems to only allow 250 hosts per datacenter.
>>> 
>>> # engine-config -g MaxNumberOfHostsInStoragePool
>>> MaxNumberOfHostsInStoragePool: 250 version: general
>> 
>> yep, but that limit is there because within a DC there is a lot of assumption 
>> for flawless fast enough communication, the most problematic is that all 
>> hosts need to access the same storage and the monitoring gets expensive then.
>> This is a different situation with separate DCs, there’s no cross-DC 
>> communication.
>> I would guess many DCs work great actually.
>> 
>> Too many hosts and VMs in total might be an issue, but since the last 
>> official updates there were a lot of changes. E.g. in stable state due to VM 
>> status events introduced in 3.6 the traffic required between each host and 
>> engine is much lower.
>> I would not be so afraid of thousands anymore, but of course YMMV
>> 
>>> 
>>> --
>>> Martin Sivak
>>> SLA / oVirt
>>> 
 On Tue, Apr 26, 2016 at 4:03 PM, Sven Kieske  wrote:
> On 26.04.2016 14:46, Martin Sivak wrote:
> I think that 1000 hosts per engine is a bit over what we recommend
> (and support). The fact that all of them are going to be remote might
> not be ideal either. The engine assumes the network connection to all
> hosts is almost flawless and the necessary routing and distance to
> your hosts might not play nice with (for example) the fencing logic.
 
 Hi,
 
 this seems a little surprising.
 
 At least RHEV states in the documentation you support up to 200 hosts
 per cluster alone.
 
 There are no documented maxima for clusters or datacenters though.
 
 @awels: to add another layer of indirection via a dedicated
 hosted-engine per outlet seems a little much. we are talking about 500 *
 4GB RAM at least in this example, so 2 TB RAM just for management
 purposes, if you follow engine hardware recommendations?
>> 
>> yeah. currently the added layer of manageiq with HEs everywhere is not that 
>> helpful for this particular case. Still, a per-continent split or 
>> per-low-latency-area might not be a bad idea.
>> I can imagine with a bit more tolerant timeouts and refreshes it might work 
>> well, with incidents/disconnects being isolated within a DC
>> 
 
 But I agree, ovirt does not handle unstable or remote connections that
>> 
>> right. but most of that is again per-DC. You can’t do much cross-DC though 
>> (e.g. sharing a template is a pain)
>> 
>> Thanks
>> michal
>> 
 well, so you might be better off with hundreds of remote engines, but
 it seems to be a nightmare to manage, even if you automate everything.
 
 My personal experience is, that ovirt does scale at least until about
 30-50 DCs managed by a single engine, but that setup was also on a LAN
 (but I would say it could scale well beyond these numbers, at least on a
 LAN).
 
 HTH
 
 Sven
 
 
 ___
 Users mailing list
 Users@ovirt.org
 

Re: [ovirt-users] Errors while trying to join an external LDAP provider

2016-04-28 Thread Ondra Machacek

On 04/28/2016 02:59 PM, Alexis HAUSER wrote:

Hi,


I'm using 3.6.3.4-1.el7.centos and I'm having troubles joining an LDAP provider.

When I try to login into the new profile, I get a "general command validation 
failure" error.

This is what I can get from ovirt-engine/engine.log :


tail -n 400 /var/log/ovirt-engine/engine.log | grep -i error
2016-04-28 09:27:08,355 WARN  
[org.ovirt.engineextensions.aaa.ldap.AuthnExtension] (default task-56) [] 
[ovirt-engine-extension-aaa-ldap.authn::public-authn] Cannot initialize LDAP 
framework, deferring initialization. Error: /etc/ovirt-engine/aaa/.jks  (No 
such file or directory)
2016-04-28 09:27:08,356 ERROR [org.ovirt.engine.core.bll.aaa.LoginUserCommand] 
(default task-56) [] Error during CanDoActionFailure.: Class: class 
org.ovirt.engine.core.extensions.mgr.ExtensionInvokeCommandFailedException
2016-04-28 09:27:13,941 WARN  
[org.ovirt.engineextensions.aaa.ldap.AuthnExtension] (default task-58) [] 
[ovirt-engine-extension-aaa-ldap.authn::public-authn] Cannot initialize LDAP 
framework, deferring initialization. Error: /etc/ovirt-engine/aaa/.jks  (No 
such file or directory)
2016-04-28 09:27:13,941 ERROR [org.ovirt.engine.core.bll.aaa.LoginUserCommand] 
(default task-58) [] Error during CanDoActionFailure.: Class: class 
org.ovirt.engine.core.extensions.mgr.ExtensionInvokeCommandFailedException


I checked the permissions of the file and its path and they are all right. 
Changing the path to /tmp/xxx.jks didn't help either.

Here is my .profile :


include = 
vars.server = 
vars.user = cn=,ou=,o=,dc=,dc=
vars.password = 
pool.default.auth.simple.bindDN = ${global:vars.user}
pool.default.auth.simple.password = ${global:vars.password}
pool.default.serverset.type = single
pool.default.serverset.single.server = ${global:vars.server}
pool.default.ssl.enable = true
pool.default.serverset.single.port = 636
pool.default.ssl.truststore.file = /tmp/.jks


Maybe trailing space here ^ ?


pool.default.ssl.truststore.password = 


Any idea how to deal with that problem ?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Errors while trying to join an external LDAP provider

2016-04-28 Thread Alexis HAUSER
Hi, 


I'm using 3.6.3.4-1.el7.centos and I'm having troubles joining an LDAP provider.

When I try to login into the new profile, I get a "general command validation 
failure" error.

This is what I can get from ovirt-engine/engine.log :


tail -n 400 /var/log/ovirt-engine/engine.log | grep -i error
2016-04-28 09:27:08,355 WARN  
[org.ovirt.engineextensions.aaa.ldap.AuthnExtension] (default task-56) [] 
[ovirt-engine-extension-aaa-ldap.authn::public-authn] Cannot initialize LDAP 
framework, deferring initialization. Error: /etc/ovirt-engine/aaa/.jks  (No 
such file or directory)
2016-04-28 09:27:08,356 ERROR [org.ovirt.engine.core.bll.aaa.LoginUserCommand] 
(default task-56) [] Error during CanDoActionFailure.: Class: class 
org.ovirt.engine.core.extensions.mgr.ExtensionInvokeCommandFailedException
2016-04-28 09:27:13,941 WARN  
[org.ovirt.engineextensions.aaa.ldap.AuthnExtension] (default task-58) [] 
[ovirt-engine-extension-aaa-ldap.authn::public-authn] Cannot initialize LDAP 
framework, deferring initialization. Error: /etc/ovirt-engine/aaa/.jks  (No 
such file or directory)
2016-04-28 09:27:13,941 ERROR [org.ovirt.engine.core.bll.aaa.LoginUserCommand] 
(default task-58) [] Error during CanDoActionFailure.: Class: class 
org.ovirt.engine.core.extensions.mgr.ExtensionInvokeCommandFailedException


I checked the permissions of the file and its path and they are all right. 
Changing the path to /tmp/xxx.jks didn't help either.

Here is my .profile :


include = 
vars.server = 
vars.user = cn=,ou=,o=,dc=,dc=
vars.password = 
pool.default.auth.simple.bindDN = ${global:vars.user}
pool.default.auth.simple.password = ${global:vars.password}
pool.default.serverset.type = single
pool.default.serverset.single.server = ${global:vars.server}
pool.default.ssl.enable = true
pool.default.serverset.single.port = 636
pool.default.ssl.truststore.file = /tmp/.jks 
pool.default.ssl.truststore.password = 


Any idea how to deal with that problem ?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Max amount of datacenters per ovirt engine

2016-04-28 Thread jo...@familiealbers.nl
OK, that sounds promising. Does it require 3.6 at minimum? We already handle 
starting VMs using vdsm in case the network from the hypervisors to the engine is 
down, as well as disabling fencing to avoid a sloppy network triggering changes. 
I would like to reduce the communication between hosts and the engine. Are there 
any ideas you would have about that? 

Sent from my iPhone

> Op 28 apr. 2016 om 13:55 heeft Michal Skrivanek  
> het volgende geschreven:
> 
> 
>>> On 26 Apr 2016, at 16:48, Martin Sivak  wrote:
>>> 
>>> @awels: to add another layer of indirection via a dedicated
>>> hosted-engine per outlet seems a little much. we are talking about 500 *
>>> 4GB RAM at least in this example, so 2 TB RAM just for management
>>> purposes, if you follow engine hardware recommendations?
>> 
>> I would not go that far. Creating zones per continent (for example)
>> might be enough.
>> 
>>> At least RHEV states in the documentation you support up to 200 hosts
>>> per cluster alone.
>> 
>> The default configuration seems to only allow 250 hosts per datacenter.
>> 
>> # engine-config -g MaxNumberOfHostsInStoragePool
>> MaxNumberOfHostsInStoragePool: 250 version: general
> 
> yep, but that limit is there because within a DC there is a lot of assumption 
> for flawless fast enough communication, the most problematic is that all 
> hosts need to access the same storage and the monitoring gets expensive then.
> This is a different situation with separate DCs, there’s no cross-DC 
> communication.
> I would guess many DCs work great actually.
> 
> Too many hosts and VMs in total might be an issue, but since the last 
> official updates there were a lot of changes. E.g. in stable state due to VM 
> status events introduced in 3.6 the traffic required between each host and 
> engine is much lower.
> I would not be so afraid of thousands anymore, but of course YMMV
> 
>> 
>> --
>> Martin Sivak
>> SLA / oVirt
>> 
>>> On Tue, Apr 26, 2016 at 4:03 PM, Sven Kieske  wrote:
 On 26.04.2016 14:46, Martin Sivak wrote:
 I think that 1000 hosts per engine is a bit over what we recommend
 (and support). The fact that all of them are going to be remote might
 not be ideal either. The engine assumes the network connection to all
 hosts is almost flawless and the necessary routing and distance to
 your hosts might not play nice with (for example) the fencing logic.
>>> 
>>> Hi,
>>> 
>>> this seems a little surprising.
>>> 
>>> At least RHEV states in the documentation you support up to 200 hosts
>>> per cluster alone.
>>> 
>>> There are no documented maxima for clusters or datacenters though.
>>> 
>>> @awels: to add another layer of indirection via a dedicated
>>> hosted-engine per outlet seems a little much. we are talking about 500 *
>>> 4GB RAM at least in this example, so 2 TB RAM just for management
>>> purposes, if you follow engine hardware recommendations?
> 
> yeah. currently the added layer of manageiq with HEs everywhere is not that 
> helpful for this particular case. Still, a per-continent split or 
> per-low-latency-area might not be a bad idea.
> I can imagine with a bit more tolerant timeouts and refreshes it might work 
> well, with incidents/disconnects being isolated within a DC
> 
>>> 
>>> But I agree, ovirt does not handle unstable or remote connections that
> 
> right. but most of that is again per-DC. You can’t do much cross-DC though 
> (e.g. sharing a template is a pain)
> 
> Thanks
> michal
> 
>>> well, so you might be better off with hundreds of remote engines, but
>>> it seems to be a nightmare to manage, even if you automate everything.
>>> 
>>> My personal experience is, that ovirt does scale at least until about
>>> 30-50 DCs managed by a single engine, but that setup was also on a LAN
>>> (but I would say it could scale well beyond these numbers, at least on a
>>> LAN).
>>> 
>>> HTH
>>> 
>>> Sven
>>> 
>>> 
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to power off the vm

2016-04-28 Thread Michal Skrivanek

> On 28 Apr 2016, at 14:18, Budur Nagaraju  wrote:
> 
> Not able to access through console ,SSH, even the migration option is not 
> getting highlighted unable to perform any actions.
> 
> To reboot host I need to migrate remaining vms to other host , that is time 
> consuming.
> 
> Any commands to kill the process without rebooting  the host?
> 
> 
find the right qemu process. it should have the vm name on the command line
then kill -9, if it helps then it might be ok and you can start the VM again.
if you don’t know how to do that then really the best option is to migrate all 
other vms away and reboot
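
For example (a sketch; the VM name and PID below are placeholders):

ps -ef | grep '[q]emu-kvm' | grep -i your-vm-name   # the VM name appears on the qemu command line
kill -9 <pid>                                       # PID taken from the matching line
# the VM should then go to Down in the engine and can be started again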


> On Apr 28, 2016 5:42 PM, "Michal Skrivanek"  > wrote:
> 
>> On 28 Apr 2016, at 14:11, Budur Nagaraju > > wrote:
>> 
>> Earlier it was working ,now  not able to power on/off  shutdown. deploy in 
>> another host etc.
>> 
>> 
> 
> I don’t mean in ovirt, I mean the guest itself. Can you get to the console? 
> Can you ssh to that guest? Does it do anything?
> if so it might be worth trying to save it (e.g. migrate), if not, just kill 
> it from the host…or migrate everything else away and reboot the host
> 
>> On Apr 28, 2016 5:38 PM, "Michal Skrivanek" > > wrote:
>> 
>>> On 28 Apr 2016, at 14:01, Budur Nagaraju >> > wrote:
>>> 
>>> ovirt node is having 50vms and one  VM is having issues, by restarting 
>>> libvirt will the  other vms get affect? And am not getting the option to 
>>> delete.
>>> 
>>> 
>> 
>> it will not affect the running VMs, they will keep running
>> again, does that one VM actually work?
>> 
>>> On Apr 28, 2016 5:26 PM, "Michal Skrivanek" >> > wrote:
>>> 
>>> > On 28 Apr 2016, at 13:49, Budur Nagaraju >> > > wrote:
>>> >
>>> > Any commands to check the same ?
>>> 
>>> so does the VM actually work?
>>> what’s the status of the process?
>>> 
>>> if it works, restart libvirtd (that will induce a vdsm restart as well), 
>>> and check if it makes any difference. If not then I guess you’re out of 
>>> luck and you can try to kill the qemu process yourself…or reboot the box
>>> 
>>> > On Apr 28, 2016 5:10 PM, "Michal Skrivanek" >> > >
>>> > wrote:
>>> >
>>> >>
>>> >>> On 28 Apr 2016, at 10:03, Budur Nagaraju >> >>> > wrote:
>>> >>>
>>> >>> HI
>>> >>>
>>> >>> One of the vm is showing "?" and  unable to perform any actions below
>>> >> are the logs ,let me know is there any ways to bring it back ?
>>> >>
>>> >> then it’s probably broken in lower layers. check/add vdsm.log from that
>>> >> period, but it is likely that libvirt lost control over the qemu process.
>>> >> You may want to check that particular qemu process if it is alright or .g
>>> 
>> 
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to power off the vm

2016-04-28 Thread Budur Nagaraju
I am not able to access it through the console or SSH; even the migration option
is not highlighted, and I am unable to perform any actions.

To reboot the host I would need to migrate the remaining VMs to another host, which
is time consuming.

Are there any commands to kill the process without rebooting the host?
On Apr 28, 2016 5:42 PM, "Michal Skrivanek" 
wrote:


On 28 Apr 2016, at 14:11, Budur Nagaraju  wrote:

Earlier it was working ,now  not able to power on/off  shutdown. deploy in
another host etc.


I don’t mean in ovirt, I mean the guest itself. Can you get to the console?
Can you ssh to that guest? Does it do anything?
if so it might be worth trying to save it (e.g. migrate), if not, just kill
it from the host…or migrate everything else away and reboot the host

On Apr 28, 2016 5:38 PM, "Michal Skrivanek" 
wrote:

>
> On 28 Apr 2016, at 14:01, Budur Nagaraju  wrote:
>
> ovirt node is having 50vms and one  VM is having issues, by restarting
> libvirt will the  other vms get affect? And am not getting the option to
> delete.
>
>
> it will not affect the running VMs, they will keep running
> again, does that one VM actually work?
>
> On Apr 28, 2016 5:26 PM, "Michal Skrivanek" 
> wrote:
>
>>
>> > On 28 Apr 2016, at 13:49, Budur Nagaraju  wrote:
>> >
>> > Any commands to check the same ?
>>
>> so does the VM actually work?
>> what’s the status of the process?
>>
>> if it works, restart libvirtd (that will induce a vdsm restart as well),
>> and check if it makes any difference. If not then I guess you’re out of
>> luck and you can try to kill the qemu process yourself…or reboot the box
>>
>> > On Apr 28, 2016 5:10 PM, "Michal Skrivanek" <
>> michal.skriva...@redhat.com>
>> > wrote:
>> >
>> >>
>> >>> On 28 Apr 2016, at 10:03, Budur Nagaraju  wrote:
>> >>>
>> >>> HI
>> >>>
>> >>> One of the vm is showing "?" and  unable to perform any actions below
>> >> are the logs ,let me know is there any ways to bring it back ?
>> >>
>> >> then it’s probably broken in lower layers. check/add vdsm.log from that
>> >> period, but it is likely that libvirt lost control over the qemu
>> process.
>> >> You may want to check that particular qemu process if it is alright or
>> .g
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to power off the vm

2016-04-28 Thread Michal Skrivanek

> On 28 Apr 2016, at 14:11, Budur Nagaraju  wrote:
> 
> Earlier it was working ,now  not able to power on/off  shutdown. deploy in 
> another host etc.
> 
> 

I don’t mean in ovirt, I mean the guest itself. Can you get to the console? Can 
you ssh to that guest? Does it do anything?
if so it might be worth trying to save it (e.g. migrate), if not, just kill it 
from the host…or migrate everything else away and reboot the host

> On Apr 28, 2016 5:38 PM, "Michal Skrivanek"  > wrote:
> 
>> On 28 Apr 2016, at 14:01, Budur Nagaraju > > wrote:
>> 
>> ovirt node is having 50vms and one  VM is having issues, by restarting 
>> libvirt will the  other vms get affect? And am not getting the option to 
>> delete.
>> 
>> 
> 
> it will not affect the running VMs, they will keep running
> again, does that one VM actually work?
> 
>> On Apr 28, 2016 5:26 PM, "Michal Skrivanek" > > wrote:
>> 
>> > On 28 Apr 2016, at 13:49, Budur Nagaraju > > > wrote:
>> >
>> > Any commands to check the same ?
>> 
>> so does the VM actually work?
>> what’s the status of the process?
>> 
>> if it works, restart libvirtd (that will induce a vdsm restart as well), and 
>> check if it makes any difference. If not then I guess you’re out of luck and 
>> you can try to kill the qemu process yourself…or reboot the box
>> 
>> > On Apr 28, 2016 5:10 PM, "Michal Skrivanek" > > >
>> > wrote:
>> >
>> >>
>> >>> On 28 Apr 2016, at 10:03, Budur Nagaraju > >>> > wrote:
>> >>>
>> >>> HI
>> >>>
>> >>> One of the vm is showing "?" and  unable to perform any actions below
>> >> are the logs ,let me know is there any ways to bring it back ?
>> >>
>> >> then it’s probably broken in lower layers. check/add vdsm.log from that
>> >> period, but it is likely that libvirt lost control over the qemu process.
>> >> You may want to check that particular qemu process if it is alright or .g
>> 
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to power off the vm

2016-04-28 Thread Budur Nagaraju
Earlier it was working; now I am not able to power on/off, shut down, or deploy on
another host, etc.
On Apr 28, 2016 5:38 PM, "Michal Skrivanek" 
wrote:

>
> On 28 Apr 2016, at 14:01, Budur Nagaraju  wrote:
>
> ovirt node is having 50vms and one  VM is having issues, by restarting
> libvirt will the  other vms get affect? And am not getting the option to
> delete.
>
>
> it will not affect the running VMs, they will keep running
> again, does that one VM actually work?
>
> On Apr 28, 2016 5:26 PM, "Michal Skrivanek" 
> wrote:
>
>>
>> > On 28 Apr 2016, at 13:49, Budur Nagaraju  wrote:
>> >
>> > Any commands to check the same ?
>>
>> so does the VM actually work?
>> what’s the status of the process?
>>
>> if it works, restart libvirtd (that will induce a vdsm restart as well),
>> and check if it makes any difference. If not then I guess you’re out of
>> luck and you can try to kill the qemu process yourself…or reboot the box
>>
>> > On Apr 28, 2016 5:10 PM, "Michal Skrivanek" <
>> michal.skriva...@redhat.com>
>> > wrote:
>> >
>> >>
>> >>> On 28 Apr 2016, at 10:03, Budur Nagaraju  wrote:
>> >>>
>> >>> HI
>> >>>
>> >>> One of the vm is showing "?" and  unable to perform any actions below
>> >> are the logs ,let me know is there any ways to bring it back ?
>> >>
>> >> then it’s probably broken in lower layers. check/add vdsm.log from that
>> >> period, but it is likely that libvirt lost control over the qemu
>> process.
>> >> You may want to check that particular qemu process if it is alright or
>> .g
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to power off the vm

2016-04-28 Thread Michal Skrivanek

> On 28 Apr 2016, at 14:01, Budur Nagaraju  wrote:
> 
> ovirt node is having 50vms and one  VM is having issues, by restarting 
> libvirt will the  other vms get affect? And am not getting the option to 
> delete.
> 
> 

it will not affect the running VMs, they will keep running
again, does that one VM actually work?

> On Apr 28, 2016 5:26 PM, "Michal Skrivanek"  > wrote:
> 
> > On 28 Apr 2016, at 13:49, Budur Nagaraju  > > wrote:
> >
> > Any commands to check the same ?
> 
> so does the VM actually work?
> what’s the status of the process?
> 
> if it works, restart libvirtd (that will induce a vdsm restart as well), and 
> check if it makes any difference. If not then I guess you’re out of luck and 
> you can try to kill the qemu process yourself…or reboot the box
> 
> > On Apr 28, 2016 5:10 PM, "Michal Skrivanek"  > >
> > wrote:
> >
> >>
> >>> On 28 Apr 2016, at 10:03, Budur Nagaraju  >>> > wrote:
> >>>
> >>> HI
> >>>
> >>> One of the vm is showing "?" and  unable to perform any actions below
> >> are the logs ,let me know is there any ways to bring it back ?
> >>
> >> then it’s probably broken in lower layers. check/add vdsm.log from that
> >> period, but it is likely that libvirt lost control over the qemu process.
> >> You may want to check that particular qemu process if it is alright or .g
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to power off the vm

2016-04-28 Thread Budur Nagaraju
The oVirt node has 50 VMs and one VM is having issues. If I restart
libvirt, will the other VMs be affected? And I am not getting the option to
delete it.
On Apr 28, 2016 5:26 PM, "Michal Skrivanek" 
wrote:

>
> > On 28 Apr 2016, at 13:49, Budur Nagaraju  wrote:
> >
> > Any commands to check the same ?
>
> so does the VM actually work?
> what’s the status of the process?
>
> if it works, restart libvirtd (that will induce a vdsm restart as well),
> and check if it makes any difference. If not then I guess you’re out of
> luck and you can try to kill the qemu process yourself…or reboot the box
>
> > On Apr 28, 2016 5:10 PM, "Michal Skrivanek"  >
> > wrote:
> >
> >>
> >>> On 28 Apr 2016, at 10:03, Budur Nagaraju  wrote:
> >>>
> >>> HI
> >>>
> >>> One of the vm is showing "?" and  unable to perform any actions below
> >> are the logs ,let me know is there any ways to bring it back ?
> >>
> >> then it’s probably broken in lower layers. check/add vdsm.log from that
> >> period, but it is likely that libvirt lost control over the qemu
> process.
> >> You may want to check that particular qemu process if it is alright or
> .g
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to power off the vm

2016-04-28 Thread Michal Skrivanek

> On 28 Apr 2016, at 13:49, Budur Nagaraju  wrote:
> 
> Any commands to check the same ?

so does the VM actually work?
what’s the status of the process?

if it works, restart libvirtd (that will induce a vdsm restart as well), and 
check if it makes any difference. If not then I guess you’re out of luck and 
you can try to kill the qemu process yourself…or reboot the box
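
On an EL7 host that would be roughly (a sketch; vdsm normally comes back by
itself once libvirtd restarts, but it is worth checking):

systemctl restart libvirtd
systemctl status vdsmd                  # should be active again; restart it if not
journalctl -u libvirtd -u vdsmd -n 50   # confirm both came back cleanly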

> On Apr 28, 2016 5:10 PM, "Michal Skrivanek" 
> wrote:
> 
>> 
>>> On 28 Apr 2016, at 10:03, Budur Nagaraju  wrote:
>>> 
>>> HI
>>> 
>>> One of the vm is showing "?" and  unable to perform any actions below
>> are the logs ,let me know is there any ways to bring it back ?
>> 
>> then it’s probably broken in lower layers. check/add vdsm.log from that
>> period, but it is likely that libvirt lost control over the qemu process.
>> You may want to check that particular qemu process if it is alright or .g

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Max amount of datacenters per ovirt engine

2016-04-28 Thread Michal Skrivanek

> On 26 Apr 2016, at 16:48, Martin Sivak  wrote:
> 
>> @awels: to add another layer of indirection via a dedicated
>> hosted-engine per outlet seems a little much. we are talking about 500 *
>> 4GB RAM at least in this example, so 2 TB RAM just for management
>> purposes, if you follow engine hardware recommendations?
> 
> I would not go that far. Creating zones per continent (for example)
> might be enough.
> 
>> At least RHEV states in the documentation you support up to 200 hosts
>> per cluster alone.
> 
> The default configuration seems to only allow 250 hosts per datacenter.
> 
> # engine-config -g MaxNumberOfHostsInStoragePool
> MaxNumberOfHostsInStoragePool: 250 version: general

yep, but that limit is there because within a DC there is a lot of assumption 
for flawless fast enough communication, the most problematic is that all hosts 
need to access the same storage and the monitoring gets expensive then.
This is a different situation with separate DCs, there’s no cross-DC 
communication.
I would guess many DCs work great actually.

Too many hosts and VMs in total might be an issue, but since the last official 
updates there were a lot of changes. E.g. in stable state due to VM status 
events introduced in 3.6 the traffic required between each host and engine is 
much lower.
I would not be so afraid of thousands anymore, but of course YMMV
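
And if someone really wants to go past that per-DC default, the value itself can 
be raised with engine-config; a sketch only, the storage monitoring caveat above 
still applies, and 400 is just an example number:

engine-config -s MaxNumberOfHostsInStoragePool=400 --cver=general
systemctl restart ovirt-engine          # restart needed for the new value to apply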

> 
> --
> Martin Sivak
> SLA / oVirt
> 
> On Tue, Apr 26, 2016 at 4:03 PM, Sven Kieske  wrote:
>> On 26.04.2016 14:46, Martin Sivak wrote:
>>> I think that 1000 hosts per engine is a bit over what we recommend
>>> (and support). The fact that all of them are going to be remote might
>>> not be ideal either. The engine assumes the network connection to all
>>> hosts is almost flawless and the necessary routing and distance to
>>> your hosts might not play nice with (for example) the fencing logic.
>> 
>> Hi,
>> 
>> this seems a little surprising.
>> 
>> At least RHEV states in the documentation you support up to 200 hosts
>> per cluster alone.
>> 
>> There are no documented maxima for clusters or datacenters though.
>> 
>> @awels: to add another layer of indirection via a dedicated
>> hosted-engine per outlet seems a little much. we are talking about 500 *
>> 4GB RAM at least in this example, so 2 TB RAM just for management
>> purposes, if you follow engine hardware recommendations?

yeah. currently the added layer of manageiq with HEs everywhere is not that 
helpful for this particular case. Still, a per-continent split or 
per-low-latency-area might not be a bad idea.
I can imagine with a bit more tolerant timeouts and refreshes it might work 
well, with incidents/disconnects being isolated within a DC

>> 
>> But I agree, ovirt does not handle unstable or remote connections that

right. but most of that is again per-DC. You can’t do much cross-DC though 
(e.g. sharing a template is a pain)

Thanks
michal

>> well, so you might be better off with hundreds of remote engines, but
>> it seems to be a nightmare to manage, even if you automate everything.
>> 
>> My personal experience is, that ovirt does scale at least until about
>> 30-50 DCs managed by a single engine, but that setup was also on a LAN
>> (but I would say it could scale well beyond these numbers, at least on a
>> LAN).
>> 
>> HTH
>> 
>> Sven
>> 
>> 
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to power off the vm

2016-04-28 Thread Budur Nagaraju
Any commands to check the same ?
On Apr 28, 2016 5:10 PM, "Michal Skrivanek" 
wrote:

>
> > On 28 Apr 2016, at 10:03, Budur Nagaraju  wrote:
> >
> > HI
> >
> > One of the vm is showing "?" and  unable to perform any actions below
> are the logs ,let me know is there any ways to bring it back ?
>
> then it’s probably broken in lower layers. check/add vdsm.log from that
> period, but it is likely that libvirt lost control over the qemu process.
> You may want to check that particular qemu process if it is alright or e.g.
> stuck in D state or something else.
> Sometimes bouncing libvirtd and vdsm might help
>
> Thanks,
> michal
>
> >
> >
> > 2016-04-28 13:31:10,501 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
> (org.ovirt.thread.pool-8-thread-39) [191cb679] Failed in DestroyVDS method
> > 2016-04-28 13:31:10,502 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
> (org.ovirt.thread.pool-8-thread-39) [191cb679] Command
> org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand return value
> >  StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=42,
> mMessage=Virtual machine destroy error]]
> > 2016-04-28 13:31:10,502 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
> (org.ovirt.thread.pool-8-thread-39) [191cb679] HostName = cstkvm1
> > 2016-04-28 13:31:10,503 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
> (org.ovirt.thread.pool-8-thread-39) [191cb679] Command
> DestroyVDSCommand(HostName = cstkvm1, HostId =
> 808e0118-32af-47d0-b9eb-43c34494c292,
> vmId=7212cdd2-8171-4882-977e-82b7c95f5e62, force=false, secondsToWait=0,
> gracefully=false, reason=) execution failed. Exception: VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to DestroyVDS, error =
> Virtual machine destroy error, code = 42
> > 2016-04-28 13:31:10,503 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
> (org.ovirt.thread.pool-8-thread-39) [191cb679] FINISH, DestroyVDSCommand,
> log id: 5e35dcad
> > 2016-04-28 13:31:10,504 ERROR
> [org.ovirt.engine.core.vdsbroker.DestroyVmVDSCommand]
> (org.ovirt.thread.pool-8-thread-39) [191cb679] VDS::destroy Failed
> destroying vm 7212cdd2-8171-4882-977e-82b7c95f5e62 in vds =
> 808e0118-32af-47d0-b9eb-43c34494c292 : cstkvm1, error =
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to DestroyVDS, error =
> Virtual machine destroy error, code = 42
> > 2016-04-28 13:31:10,513 INFO
> [org.ovirt.engine.core.vdsbroker.DestroyVmVDSCommand]
> (org.ovirt.thread.pool-8-thread-39) [191cb679] FINISH, DestroyVmVDSCommand,
> log id: 6ff5ad79
> > 2016-04-28 13:31:10,516 ERROR [org.ovirt.engine.core.bll.StopVmCommand]
> (org.ovirt.thread.pool-8-thread-39) [191cb679] Command
> org.ovirt.engine.core.bll.StopVmCommand throw Vdc Bll exception. With error
> message VdcBLLException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to DestroyVDS, error =
> Virtual machine destroy error, code = 42 (Failed with error destroyErr and
> code 42)
> > 2016-04-28 13:31:10,531 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (org.ovirt.thread.pool-8-thread-39) [191cb679] Correlation ID: 191cb679,
> Job ID: b8b77109-e559-46b4-be1f-6402e85c0a62, Call Stack: null, Custom
> Event ID: -1, Message: Failed to power off VM raghpai-MOSS10 (Host:
> cstkvm1, User: admin@internal).
> >
> >
> >
> > Thanks,
> > Nagaraju
> >
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to power off the vm

2016-04-28 Thread Michal Skrivanek

> On 28 Apr 2016, at 10:03, Budur Nagaraju  wrote:
> 
> HI 
> 
> One of the vm is showing "?" and  unable to perform any actions below are the 
> logs ,let me know is there any ways to bring it back ?

then it's probably broken in the lower layers. Check/add vdsm.log from that period,
but it is likely that libvirt lost control over the qemu process. You may want
to check that particular qemu process to see whether it is alright or e.g. stuck in D state
or something else.
Sometimes bouncing libvirtd and vdsm might help.
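
For reference, a rough way to do both on the host (assuming the VM name from the
log below, raghpai-MOSS10, and the usual CentOS 7 service names):

  # find the qemu process for that guest and look at its state (D = uninterruptible sleep)
  ps -eo pid,stat,etime,cmd | grep '[q]emu' | grep raghpai-MOSS10

  # bouncing the daemons does not touch the running guests
  systemctl restart libvirtd
  systemctl restart vdsmd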

Thanks,
michal

> 
> 
> 2016-04-28 13:31:10,501 ERROR 
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
> (org.ovirt.thread.pool-8-thread-39) [191cb679] Failed in DestroyVDS method
> 2016-04-28 13:31:10,502 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
> (org.ovirt.thread.pool-8-thread-39) [191cb679] Command 
> org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand return value 
>  StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=42, 
> mMessage=Virtual machine destroy error]]
> 2016-04-28 13:31:10,502 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
> (org.ovirt.thread.pool-8-thread-39) [191cb679] HostName = cstkvm1
> 2016-04-28 13:31:10,503 ERROR 
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
> (org.ovirt.thread.pool-8-thread-39) [191cb679] Command 
> DestroyVDSCommand(HostName = cstkvm1, HostId = 
> 808e0118-32af-47d0-b9eb-43c34494c292, 
> vmId=7212cdd2-8171-4882-977e-82b7c95f5e62, force=false, secondsToWait=0, 
> gracefully=false, reason=) execution failed. Exception: VDSErrorException: 
> VDSGenericException: VDSErrorException: Failed to DestroyVDS, error = Virtual 
> machine destroy error, code = 42
> 2016-04-28 13:31:10,503 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
> (org.ovirt.thread.pool-8-thread-39) [191cb679] FINISH, DestroyVDSCommand, log 
> id: 5e35dcad
> 2016-04-28 13:31:10,504 ERROR 
> [org.ovirt.engine.core.vdsbroker.DestroyVmVDSCommand] 
> (org.ovirt.thread.pool-8-thread-39) [191cb679] VDS::destroy Failed destroying 
> vm 7212cdd2-8171-4882-977e-82b7c95f5e62 in vds = 
> 808e0118-32af-47d0-b9eb-43c34494c292 : cstkvm1, error = 
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: 
> VDSGenericException: VDSErrorException: Failed to DestroyVDS, error = Virtual 
> machine destroy error, code = 42
> 2016-04-28 13:31:10,513 INFO  
> [org.ovirt.engine.core.vdsbroker.DestroyVmVDSCommand] 
> (org.ovirt.thread.pool-8-thread-39) [191cb679] FINISH, DestroyVmVDSCommand, 
> log id: 6ff5ad79
> 2016-04-28 13:31:10,516 ERROR [org.ovirt.engine.core.bll.StopVmCommand] 
> (org.ovirt.thread.pool-8-thread-39) [191cb679] Command 
> org.ovirt.engine.core.bll.StopVmCommand throw Vdc Bll exception. With error 
> message VdcBLLException: 
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: 
> VDSGenericException: VDSErrorException: Failed to DestroyVDS, error = Virtual 
> machine destroy error, code = 42 (Failed with error destroyErr and code 42)
> 2016-04-28 13:31:10,531 ERROR 
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (org.ovirt.thread.pool-8-thread-39) [191cb679] Correlation ID: 191cb679, Job 
> ID: b8b77109-e559-46b4-be1f-6402e85c0a62, Call Stack: null, Custom Event ID: 
> -1, Message: Failed to power off VM raghpai-MOSS10 (Host: cstkvm1, User: 
> admin@internal).
> 
> 
> 
> Thanks,
> Nagaraju
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Gluster-infra] [Attention needed] GlusterFS repository down - affects CI / Installations

2016-04-28 Thread Niels de Vos
On Wed, Apr 27, 2016 at 04:51:10PM +0200, Sandro Bonazzola wrote:
> On Wed, Apr 27, 2016 at 11:09 AM, Niels de Vos  wrote:
> 
> > On Wed, Apr 27, 2016 at 02:30:57PM +0530, Ravishankar N wrote:
> > > @gluster infra  - FYI.
> > >
> > > On 04/27/2016 02:20 PM, Nadav Goldin wrote:
> > > >Hi,
> > > >The GlusterFS repository became unavailable this morning; as a result, all
> > > >Jenkins jobs that use the repository will fail. The common error would be:
> > > >
> > > >
> > > >http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-7/noarch/repodata/repomd.xml:
> > > >[Errno 14] HTTP Error 403 - Forbidden
> > > >
> > > >
> > > >Also, installations of oVirt will fail.
> >
> > I thought oVirt moved to using the packages from the CentOS Storage SIG?
> >
> 
> We did that for CentOS Virt SIG builds.
> On oVirt upstream we're still on Gluster upstream.
> We'll move to Storage SIG there as well.

Ah, ok, thanks!
Niels


> 
> 
> 
> > In any case, automated tests should probably use those instead of the
> > packages on download.gluster.org. We're trying to minimize the work
> > packagers need to do, and get the glusterfs and other components in the
> > repositories that are provided by different distributions.
> >
> > For more details, see the quickstart for the Storage SIG here:
> >   https://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart
> >
> > HTH,
> > Niels
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
> >
> 
> 
> -- 
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com


signature.asc
Description: PGP signature
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Keyboard mapping VNC

2016-04-28 Thread Michal Skrivanek

> On 28 Apr 2016, at 12:53, Jonas Israelsson  
> wrote:
> 
> 
> 
> On 2016-04-28 12:31, Michal Skrivanek wrote:
>>> On 28 Apr 2016, at 10:32, Jonas Israelsson  
>>> wrote:
>>> 
>>> 
>>> 
> I've tested now also with a hosted engine setup. Can't see there either 
> that the keyboard layout information set in the web-ui ever reaches the 
> VM.
 That should work. HE is a bit different, and noVNC client doesn't support 
 keymaps properly, but you should see it in xml
 
 Does spice work? Any special reason you can't use it?
>>> Spice does work. I do, however, prefer a web-based console, and the spice-based
>>> one is in a very sorry state.
>> indeed, but bear in mind the noVNC client is also not so great on keyboard
>> mappings anyway. Swedish might be ok, but if it is not, then please rather
>> check with the standalone client
> Right, but once the keyboard map is actually being used, tweaks to the mapping
> can be made to make the experience 'quite pleasant'. In fact the only thing
> missing right now (with a Swedish layout) is the recognition of the
> key. That too works with the native client.
> 
> There is, as you probably know, an old issue in noVNC that was brought back to
> life just a few months ago, and which to my understanding addresses this issue:
> https://github.com/kanaka/noVNC/issues/21

Ah, there's quite some activity recently. Sounds good.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Keyboard mapping VNC

2016-04-28 Thread Jonas Israelsson



On 2016-04-28 12:31, Michal Skrivanek wrote:

On 28 Apr 2016, at 10:32, Jonas Israelsson  
wrote:




I've tested now also with a hosted engine setup. Can't see there either that 
the keyboard layout information set in the web-ui ever reaches the VM.

That should work. HE is a bit different, and noVNC client doesn't support 
keymaps properly, but you should see it in xml

Does spice work? Any special reason you can't use it?

Spice does work. I do, however, prefer a web-based console, and the spice-based
one is in a very sorry state.

indeed, but bear in mind the noVNC client is also not so great on keyboard
mappings anyway. Swedish might be ok, but if it is not, then please rather
check with the standalone client
Right, but once the keyboard map is actually being used, tweaks to the
mapping can be made to make the experience 'quite pleasant'. In fact
the only thing missing right now (with a Swedish layout) is the
recognition of the  key. That too works with the native client.


There is, as you probably know, an old issue in noVNC that was brought
back to life just a few months ago, and which to my understanding addresses
this issue:

https://github.com/kanaka/noVNC/issues/21


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Keyboard mapping VNC

2016-04-28 Thread Michal Skrivanek

> On 28 Apr 2016, at 10:32, Jonas Israelsson  
> wrote:
> 
> 
> 
>>> I've tested now also with a hosted engine setup. Can't see there either 
>>> that the keyboard layout information set in the web-ui ever reaches the VM.
>> That should work. HE is a bit different, and noVNC client doesn't support 
>> keymaps properly, but you should see it in xml
>> 
>> Does spice work? Any special reason you can't use it?
> Spice does work. I do, however, prefer a web-based console, and the spice-based
> one is in a very sorry state.

indeed, but bear in mind the noVNC client is also not so great on keyboard
mappings anyway. Swedish might be ok, but if it is not, then please rather
check with the standalone client
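
(If you want to double-check whether the layout made it into the guest definition
at all, something like this on the host should show it, read-only, with <vm-name>
being whichever guest you set the layout on:

  virsh -r dumpxml <vm-name> | grep -i graphics

and for a Swedish layout I would expect to see a keymap='sv' attribute on the
<graphics type='vnc' .../> element.)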

>>> Think we have a bug here.
>> Looks like that is a regression indeed. Feel free to open a bug yourself; it
>> shouldn't be difficult to fix
> Will do, thanks (a million) !

we have one now and are working on it! :)
https://bugzilla.redhat.com/show_bug.cgi?id=1331274

Thanks,
michal

> 
> 
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
> 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Deleting templates

2016-04-28 Thread Ollie Armstrong
On 28 April 2016 at 10:20, Nicolas Ecarnot  wrote:
> IIRC, I should be able to delete a template if all my templated VMs are 
> *cloned*, and I should not be able to delete a template if some of my 
> templated VMs are "based on/thinned/tpl-snapshotted/whatever" ?

As far as my understanding goes, this is correct.

In my environment at least, whenever a VM is created through the web
UI it is cloned and I can delete the template. By default, the API
doesn't seem to clone the disk, but this can be specified.
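
If memory serves (so treat this as a sketch rather than gospel), the clone can be
requested in the create-VM request body, e.g. with placeholder names and engine URL:

  curl -k -u 'admin@internal:password' -H 'Content-Type: application/xml' \
       -d '<vm>
             <name>myvm</name>
             <cluster><name>Default</name></cluster>
             <template><name>mytemplate</name></template>
             <disks><clone>true</clone></disks>
           </vm>' \
       https://engine.example.com/api/vms

Without the <disks><clone>true</clone></disks> part you get the thin/dependent
disk, which is what keeps the template undeletable.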
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Deleting templates

2016-04-28 Thread Nicolas Ecarnot

Hello,

I'm not using templates, and I know I should.
I'm using other automated ways that are working fine, but I'd like to
explore templating.

Before jumping in, I read this doc :

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html-single/Virtual_Machine_Management_Guide/index.html#chap-Templates

and I read contradictory information.

IIRC, I should be able to delete a template if all my templated VMs are 
*cloned*, and I should not be able to delete a template if some of my 
templated VMs are "based on/thinned/tpl-snapshotted/whatever" ?


Is it correct?

If so, parts of the doc above are unclear.

--
Nicolas ECARNOT
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Keyboard mapping VNC

2016-04-28 Thread Jonas Israelsson




I've tested now also with a hosted engine setup. Can't see there either that 
the keyboard layout information set in the web-ui ever reaches the VM.

That should work. HE is a bit different, and noVNC client doesn't support 
keymaps properly, but you should see it in xml

Does spice work? Any special reason you can't use it?
Spice does work. I do, however, prefer a web-based console, and the spice-based
one is in a very sorry state.

Think we have a bug here.

Looks like that is a regression indeed. Feel free to open a bug yourself; it
shouldn't be difficult to fix

Will do, thanks (a million) !




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Unable to power off the vm

2016-04-28 Thread Budur Nagaraju
Hi,

One of the VMs is showing "?" and I am unable to perform any actions. Below are
the logs; let me know if there is any way to bring it back.


2016-04-28 13:31:10,501 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(org.ovirt.thread.pool-8-thread-39) [191cb679] Failed in DestroyVDS method
2016-04-28 13:31:10,502 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(org.ovirt.thread.pool-8-thread-39) [191cb679] Command
org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand return value
 StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=42,
mMessage=Virtual machine destroy error]]
2016-04-28 13:31:10,502 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(org.ovirt.thread.pool-8-thread-39) [191cb679] HostName = cstkvm1
2016-04-28 13:31:10,503 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(org.ovirt.thread.pool-8-thread-39) [191cb679] Command
DestroyVDSCommand(HostName = cstkvm1, HostId =
808e0118-32af-47d0-b9eb-43c34494c292,
vmId=7212cdd2-8171-4882-977e-82b7c95f5e62, force=false, secondsToWait=0,
gracefully=false, reason=) execution failed. Exception: VDSErrorException:
VDSGenericException: VDSErrorException: Failed to DestroyVDS, error =
Virtual machine destroy error, code = 42
2016-04-28 13:31:10,503 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(org.ovirt.thread.pool-8-thread-39) [191cb679] FINISH, DestroyVDSCommand,
log id: 5e35dcad
2016-04-28 13:31:10,504 ERROR
[org.ovirt.engine.core.vdsbroker.DestroyVmVDSCommand]
(org.ovirt.thread.pool-8-thread-39) [191cb679] VDS::destroy Failed
destroying vm 7212cdd2-8171-4882-977e-82b7c95f5e62 in vds =
808e0118-32af-47d0-b9eb-43c34494c292 : cstkvm1, error =
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to DestroyVDS, error =
Virtual machine destroy error, code = 42
2016-04-28 13:31:10,513 INFO
[org.ovirt.engine.core.vdsbroker.DestroyVmVDSCommand]
(org.ovirt.thread.pool-8-thread-39) [191cb679] FINISH, DestroyVmVDSCommand,
log id: 6ff5ad79
2016-04-28 13:31:10,516 ERROR [org.ovirt.engine.core.bll.StopVmCommand]
(org.ovirt.thread.pool-8-thread-39) [191cb679] Command
org.ovirt.engine.core.bll.StopVmCommand throw Vdc Bll exception. With error
message VdcBLLException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to DestroyVDS, error =
Virtual machine destroy error, code = 42 (Failed with error destroyErr and
code 42)
2016-04-28 13:31:10,531 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-39) [191cb679] Correlation ID: 191cb679,
Job ID: b8b77109-e559-46b4-be1f-6402e85c0a62, Call Stack: null, Custom
Event ID: -1, Message: Failed to power off VM raghpai-MOSS10 (Host:
cstkvm1, User: admin@internal).



Thanks,
Nagaraju
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vms in paused state

2016-04-28 Thread Michal Skrivanek

> On 27 Apr 2016, at 19:16, Bill James  wrote:
> 
> virsh # list --all
> error: failed to connect to the hypervisor
> error: no valid connection
> error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such 
> file or directory
> 

you need to run virsh in read-only mode:
virsh -r list --all
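and for a single guest something along these lines should work too (read-only
again, since the read-write connection is reserved for vdsm):
virsh -r domstate <vm-name>
virsh -r dominfo <vm-name>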

> [root@ovirt1 test vdsm]# systemctl status libvirtd
> ● libvirtd.service - Virtualization daemon
>   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor 
> preset: enabled)
>  Drop-In: /etc/systemd/system/libvirtd.service.d
>   └─unlimited-core.conf
>   Active: active (running) since Thu 2016-04-21 16:00:03 PDT; 5 days ago
> 
> 
> tried systemctl restart libvirtd.
> No change.
> 
> Attached vdsm.log and supervdsm.log.
> 
> 
> [root@ovirt1 test vdsm]# systemctl status vdsmd
> ● vdsmd.service - Virtual Desktop Server Manager
>   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor 
> preset: enabled)
>   Active: active (running) since Wed 2016-04-27 10:09:14 PDT; 3min 46s ago
> 
> 
> vdsm-4.17.18-0.el7.centos.noarch

the vdsm.log attached is good, but it covers too short an interval; it only shows
the recovery (vdsm restart) phase, when the VMs are identified as paused. Can you add
earlier logs? Did you restart vdsm yourself, or did it crash?
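
If the default log rotation is in place, the earlier chunks should still be on the
host as rotated files, e.g. (adjust the compression suffix to whatever is actually
there):

  ls -ltr /var/log/vdsm/vdsm.log*
  zgrep api1.test.j2noc.com /var/log/vdsm/vdsm.log.*.gz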


> libvirt-daemon-1.2.17-13.el7_2.4.x86_64
> 
> 
> Thanks.
> 
> 
> On 04/26/2016 11:35 PM, Michal Skrivanek wrote:
>>> On 27 Apr 2016, at 02:04, Nir Soffer  wrote:
>>> 
>>> On Wed, Apr 27, 2016 at 2:03 AM, Bill James  wrote:
 I have a hardware node that has 26 VMs.
 9 are listed as "running", 17 are listed as "paused".
 
 In truth all VMs are up and running fine.
 
 I tried telling the db they are up:
 
 engine=> update vm_dynamic set status = 1 where vm_guid =(select
 vm_guid from vm_static where vm_name = 'api1.test.j2noc.com');
 
 GUI then shows it up for a short while,
 
 then puts it back in paused state.
 
 2016-04-26 15:16:46,095 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer]
 (DefaultQuartzScheduler_Worker-16) [157cc21e] VM '242ca0af-4ab2-4dd6-b515-5
 d435e6452c4'(api1.test.j2noc.com) moved from 'Up' --> 'Paused'
 2016-04-26 15:16:46,221 INFO [org.ovirt.engine.core.dal.dbbroker.auditlogh
 andling.AuditLogDirector] (DefaultQuartzScheduler_Worker-16) [157cc21e] Cor
 relation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM api1.
 test.j2noc.com has been paused.
 
 
 Why does the engine think the VMs are paused?
 Attached engine.log.
 
 I can fix the problem by powering off the VM then starting it back up.
 But the VM is working fine! How do I get ovirt to realize that?
>>> If this is an issue in engine, restarting engine may fix this.
>>> but having this problem only with one node, I don't think this is the issue.
>>> 
>>> If this is an issue in vdsm, restarting vdsm may fix this.
>>> 
>>> If this does not help, maybe this is a libvirt issue? Did you try to check the vm
>>> status using virsh?
>> this looks more likely, as it seems such a status is being reported.
>> Logs would help, vdsm.log at the very least.
>> 
>>> If virsh thinks that the vms are paused, you can try to restart libvirtd.
>>> 
>>> Please file a bug about this in any case with engine and vdsm logs.
>>> 
>>> Adding Michal in case he has better idea how to proceed.
>>> 
>>> Nir
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Fwd: Having issues with Hosted Engine

2016-04-28 Thread Sahina Bose
This seems like the issue reported in
https://bugzilla.redhat.com/show_bug.cgi?id=1327121


Nir, Simone?

On 04/28/2016 05:35 AM, Luiz Claudio Prazeres Goncalves wrote:


Hi everyone,

Until today my environment was fully updated (3.6.5 + CentOS 7.2) with 3
nodes (kvm1, kvm2 and kvm3 hosts). I also have 3 external gluster
nodes (gluster-root1, gluster1 and gluster2 hosts), replica 3, on top of
which the engine storage domain is sitting (gluster 3.7.11 fully
updated + CentOS 7.2).


For some weird reason I've been receiving emails from oVirt with
EngineUnexpectedDown (attached picture) more or less on a daily basis,
but the engine seems to be working fine and my VMs are up and running
normally. I've never had any issue accessing the User Interface to
manage the VMs.


Today I ran "yum update" on the nodes and realised that vdsm was
outdated, so I updated the kvm hosts and they are now, again, fully
updated.



Reviewing the logs, it seems to be an intermittent connectivity issue
when trying to access the gluster engine storage domain, as you can see
below. I don't have any network issue in place and I'm 100% sure about
that. I have another oVirt cluster using the same network and an
engine storage domain on top of an iSCSI storage array with no issues.


*Here seems to be the issue:*

Thread-::INFO::2016-04-27 
23:01:27,864::fileSD::357::Storage.StorageDomain::(validate) 
sdUUID=03926733-1872-4f85-bb21-18dc320560db


Thread-::DEBUG::2016-04-27 
23:01:27,865::persistentDict::234::Storage.PersistentDict::(refresh) 
read lines (FileMetadataRW)=[]


Thread-::DEBUG::2016-04-27 
23:01:27,865::persistentDict::252::Storage.PersistentDict::(refresh) 
Empty metadata


Thread-::ERROR::2016-04-27 
23:01:27,865::task::866::Storage.TaskManager.Task::(_setError) 
Task=`d2acf575-1a60-4fa0-a5bb-cd4363636b94`::Unexpected error


Traceback (most recent call last):

  File "/usr/share/vdsm/storage/task.py", line 873, in _run

return fn(*args, **kargs)

  File "/usr/share/vdsm/logUtils.py", line 49, in wrapper

res = f(*args, **kwargs)

  File "/usr/share/vdsm/storage/hsm.py", line 2835, in 
getStorageDomainInfo


dom = self.validateSdUUID(sdUUID)

  File "/usr/share/vdsm/storage/hsm.py", line 278, in validateSdUUID

sdDom.validate()

  File "/usr/share/vdsm/storage/fileSD.py", line 360, in validate

raise se.StorageDomainAccessError(self.sdUUID)

StorageDomainAccessError: Domain is either partially accessible or 
entirely inaccessible: (u'03926733-1872-4f85-bb21-18dc320560db',)


Thread-::DEBUG::2016-04-27 
23:01:27,865::task::885::Storage.TaskManager.Task::(_run) 
Task=`d2acf575-1a60-4fa0-a5bb-cd4363636b94`::Task._run: 
d2acf575-1a60-4fa0-a5bb-cd4363636b94 
('03926733-1872-4f85-bb21-18dc320560db',) {} failed - stopping task


Thread-::DEBUG::2016-04-27 
23:01:27,865::task::1246::Storage.TaskManager.Task::(stop) 
Task=`d2acf575-1a60-4fa0-a5bb-cd4363636b94`::stopping in state 
preparing (force False)


Thread-::DEBUG::2016-04-27 
23:01:27,865::task::993::Storage.TaskManager.Task::(_decref) 
Task=`d2acf575-1a60-4fa0-a5bb-cd4363636b94`::ref 1 aborting True


Thread-::INFO::2016-04-27 
23:01:27,865::task::1171::Storage.TaskManager.Task::(prepare) 
Task=`d2acf575-1a60-4fa0-a5bb-cd4363636b94`::aborting: Task is 
aborted: 'Domain is either partially accessible or entirely 
inaccessible' - code 379


Thread-::DEBUG::2016-04-27 
23:01:27,866::task::1176::Storage.TaskManager.Task::(prepare) 
Task=`d2acf575-1a60-4fa0-a5bb-cd4363636b94`::Prepare: aborted: Domain 
is either partially accessible or entirely inaccessible



*Question: Does anyone know what might be happening? I have several gluster
config options set, as you can see below. All the storage domains are using the
same config.*
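
In case it helps with the diagnosis, these are the kind of checks I can run and
share from the gluster/host side (assuming the volume name "engine", as shown
further below):

gluster volume status engine
gluster volume heal engine info
ls /rhev/data-center/mnt/glusterSD/   # the fuse mount point vdsm uses for gluster domains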



*More information:*

I have the "engine" storage domain, "vmos1" storage domain and 
"master" storage domain, so everything looks good.


[root@kvm1 vdsm]# vdsClient -s 0 getStorageDomainsList

03926733-1872-4f85-bb21-18dc320560db

35021ff4-fb95-43d7-92a3-f538273a3c2e

e306e54e-ca98-468d-bb04-3e8900f8840c


*Gluster config:*

[root@gluster-root1 ~]# gluster volume info

Volume Name: engine

Type: Replicate

Volume ID: 64b413d2-c42e-40fd-b356-3e6975e941b0

Status: Started

Number of Bricks: 1 x 3 = 3

Transport-type: tcp

Bricks:

Brick1: gluster1.xyz.com:/gluster/engine/brick1

Brick2: gluster2.xyz.com:/gluster/engine/brick1

Brick3: gluster-root1.xyz.com:/gluster/engine/brick1

Options Reconfigured:

performance.cache-size: 1GB

performance.write-behind-window-size: 4MB

performance.write-behind: off

performance.quick-read: off

performance.read-ahead: off

performance.io-cache: off

performance.stat-prefetch: off

cluster.eager-lock: enable

cluster.quorum-type: auto

network.remote-dio: enable

cluster.server-quorum-type: server

cluster.data-self-heal-algorithm: full

performance.low-prio-threads: 32

features.shard-block-size: 512MB

features.shard: on

storage.owner-gid: 36

storage.owner-uid: 36