[ovirt-users] cleanup old recovery backup

2017-04-12 Thread eric stam
Good morning,
Just a question.
When I do an upgrade of a hosted-engine with the "hosted-engine
--upgrade-appliance" command, the procedure will automatically make a backup
for recovery. Is there a way to remove an old backup without having to find
out yourself which files are used and which are not (and hoping you will not
remove the wrong one)?

-- 
Gr. Eric Stam
*Mob*.: 06-50278119


[ovirt-users] Empty ovirt engine web pages

2017-04-12 Thread shubham dubey
Hello,
I have installed ovirt-engine from source and installed all the other required
packages as well, including ovirt-js-dependencies. But when I log in to the
admin account I get a blank page every time. Some other pages also come up
empty sometimes.
I have pasted the logs from
$HOME/ovirt-engine/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py
start [1]. The relevant error appears to be:

2017-04-13 09:07:09,902+05 ERROR
[org.ovirt.engine.ui.frontend.server.dashboard.DashboardDataServlet.CacheUpdate.Utilization]
(EE-ManagedThreadFactory-default-Thread-1) [] Could not update the
Utilization Cache: Error while running SQL query:
org.ovirt.engine.ui.frontend.server.dashboard.DashboardDataException: Error
while running SQL query
   at
org.ovirt.engine.ui.frontend.server.dashboard.dao.BaseDao.runQuery(BaseDao.java:60)
[frontend.jar:]
   at
org.ovirt.engine.ui.frontend.server.dashboard.dao.HostDwhDao.getTotalCpuMemCount(HostDwhDao.java:78)
[frontend.jar:]
   at
org.ovirt.engine.ui.frontend.server.dashboard.HourlySummaryHelper.getTotalCpuMemCount(HourlySummaryHelper.java:43)
[frontend.jar:]
   at
org.ovirt.engine.ui.frontend.server.dashboard.HourlySummaryHelper.getCpuMemSummary(HourlySummaryHelper.java:21)
[frontend.jar:]
   at
org.ovirt.engine.ui.frontend.server.dashboard.DashboardDataServlet.lookupGlobalUtilization(DashboardDataServlet.java:294)
[frontend.jar:]
   at
org.ovirt.engine.ui.frontend.server.dashboard.DashboardDataServlet.getDashboard(DashboardDataServlet.java:268)
[frontend.jar:]
   at
org.ovirt.engine.ui.frontend.server.dashboard.DashboardDataServlet.populateUtilizationCache(DashboardDataServlet.java:231)
[frontend.jar:]
   at
org.ovirt.engine.ui.frontend.server.dashboard.DashboardDataServlet.access$000(DashboardDataServlet.java:26)
[frontend.jar:]
   at
org.ovirt.engine.ui.frontend.server.dashboard.DashboardDataServlet$1.run(DashboardDataServlet.java:106)
[frontend.jar:]
   at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[rt.jar:1.8.0_121]
   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
[rt.jar:1.8.0_121]
   at
org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383)
[javax.enterprise.concurrent-1.0.jar:]
   at
org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534)
[javax.enterprise.concurrent-1.0.jar:]
   at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[rt.jar:1.8.0_121]
   at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[rt.jar:1.8.0_121]
   at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_121]
   at
org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250)
[javax.enterprise.concurrent-1.0.jar:]
Caused by: java.sql.SQLException: javax.resource.ResourceException:
IJ000453: Unable to get managed connection for java:/DWHDataSource
   at
org.jboss.jca.adapters.jdbc.WrapperDataSource.getConnection(WrapperDataSource.java:146)
   at
org.jboss.as.connector.subsystems.datasources.WildFlyDataSource.getConnection(WildFlyDataSource.java:66)
   at
org.ovirt.engine.ui.frontend.server.dashboard.dao.BaseDao.runQuery(BaseDao.java:53)
[frontend.jar:]
   ... 16 more
Caused by: javax.resource.ResourceException: IJ000453: Unable to get
managed connection for java:/DWHDataSource
   at
org.jboss.jca.core.connectionmanager.AbstractConnectionManager.getManagedConnection(AbstractConnectionManager.java:656)
   at
org.jboss.jca.core.connectionmanager.tx.TxConnectionManagerImpl.getManagedConnection(TxConnectionManagerImpl.java:429)
   at
org.jboss.jca.core.connectionmanager.AbstractConnectionManager.allocateConnection(AbstractConnectionManager.java:747)
   at
org.jboss.jca.adapters.jdbc.WrapperDataSource.getConnection(WrapperDataSource.java:138)
   ... 18 more
Caused by: javax.resource.ResourceException: IJ031084: Unable to create
connection
   at
org.jboss.jca.adapters.jdbc.local.LocalManagedConnectionFactory.createLocalManagedConnection(LocalManagedConnectionFactory.java:345)
   at
org.jboss.jca.adapters.jdbc.local.LocalManagedConnectionFactory.getLocalManagedConnection(LocalManagedConnectionFactory.java:352)
   at
org.jboss.jca.adapters.jdbc.local.LocalManagedConnectionFactory.createManagedConnection(LocalManagedConnectionFactory.java:287)
   at
org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreConcurrentLinkedDequeManagedConnectionPool.createConnectionEventListener(SemaphoreConcurrentLinkedDequeManagedConnectionPool.java:1320)
   at
org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreConcurrentLinkedDequeManagedConnectionPool.getConnection(SemaphoreConcurrentLinkedDequeManagedConnectionPool.java:496)
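The last "Caused by" suggests the engine cannot open a connection to the DWH
database (java:/DWHDataSource), which would explain the empty dashboard
pages. A quick check I can run, assuming the default engine-setup name for
the DWH database and user (ovirt_engine_history - that name is an assumption
on my side):

  # does the DWH database exist at all? (name assumed to be the default)
  su - postgres -c 'psql -l' | grep ovirt_engine_history
  # can we actually connect as the assumed DWH user?
  psql -U ovirt_engine_history -h localhost -d ovirt_engine_history -c 'SELECT 1;'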
   

Re: [ovirt-users] Hosted engine setup shooting dirty pool

2017-04-12 Thread Jamie Lawrence

> On Apr 12, 2017, at 1:31 AM, Evgenia Tokar  wrote:
> 
> Hi Jamie, 
> 
> Are you trying to set up the hosted engine using the "hosted-engine --deploy" 
> command, or are you trying to migrate an existing HE VM? 
>  
> For hosted engine setup you need to provide a clean storage domain, which is 
> not a part of your 4.1 setup. This storage domain will be used for the hosted 
> engine and will be visible in the UI once the deployment of the hosted engine 
> is complete.
> If your storage domain appears in the UI it means that it is already 
> connected to the storage pool and is not "clean".

Hi Jenny,

Thanks for the response.

I’m using `hosted-engine --deploy`, yes. (Actually, the last few attempts have 
been with an answerfile, but the responses are the same.)
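(For reference, the answer-file runs look roughly like this - the file path is
just an example:

  hosted-engine --deploy --config-append=/root/he-answers.conf
)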

I think I may have been unclear.  I understand that it wants an unmolested SD. 
There just doesn’t seem to be a path to provide that with an Ovirt-managed 
Gluster cluster.

I guess my question is how to provide that with an Ovirt-managed gluster 
installation. Or a different way of asking, I guess, would be how do I make 
Ovirt/VDSM ignore a newly created gluster SD so that `hosted-engine` can pick 
it up? I don’t see any options to tell the Gluster cluster to not auto-discover 
or similar. So as soon as I create it, the non-hosted engine picks it up. This 
happens within seconds - I vainly tried to time it with running the installer.
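For reference, the volume creation itself is nothing exotic - roughly the
following, with hostnames, volume name and brick paths as placeholders:

  gluster volume create engine replica 3 \
      gl01:/gluster/engine/brick gl02:/gluster/engine/brick gl03:/gluster/engine/brick
  gluster volume start engine

Within seconds of the start command, the volume is already listed in the oVirt UI.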

This is why I mentioned dismissing the idea of using another Gluster 
installation, unattached to Ovirt. That’s the only way I could think of to give 
it a clean pool. (I dismissed it because I can’t run this in production with 
that sort of dependency.)

Do I need to take this Gluster cluster out of Ovirt control (delete the Gluster 
cluster from the Ovirt GUI, recreate outside of Ovirt manually), install on to 
that, and then re-associate it in the GUI or something similar?

-j


Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-04-12 Thread Rafał Wojciechowski

hi,

I will answer as well; however, I am using a single hypervisor, so without HA,
and I have not performed the steps from:

https://www.ovirt.org/documentation/how-to/hosted-engine/

1. yes - however I have to start in headless mode, so it is quite obvious:
if I try to start with a spice/vnc console I get a segfault
from libvirtd

2. as above
3. as above


On 12.04.2017 at 14:12, Arsène Gschwind wrote:


Hi all,

I will answer your questions:

1. definitively yes
2. the command hosted-engine --console works well and I'm able to connect.
3. Here are the device entries

devices={device:qxl,alias:video0,type:video,deviceId:5210a3c3-9cc4-4aed-90c6-432dd2d37c46,address:{slot:0x02,
bus:0x00,domain:0x,type:pci,function:0x0}}
devices={device:console,type:console}

Thanks and rgds,
Arsène

On 04/12/2017 10:53 AM, Evgenia Tokar wrote:

Hi all,

I have managed to reproduce this issue and opened a bug for tracking 
it: https://bugzilla.redhat.com/show_bug.cgi?id=1441570 .


There is no solution yet, but I would appreciate it if anyone who 
encountered this issue would answer some questions:

1. Is the console button greyed out in the UI?
2. On the hosted engine host, does the command hosted-engine 
--console fail?
 If it fails, try upgrading ovirt-hosted-engine-ha on the hosted 
engine host. We had a bug related to this issue that was fixed 
(https://bugzilla.redhat.com/show_bug.cgi?id=1364132).
 After upgrade and restart of the vm, this should work, and you 
should be able to connect to the console.
3. On the hosted engine host look at the content of: 
/var/run/ovirt-hosted-engine-ha/vm.conf

Does it contain a graphical device? Or a console device?
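For example, a quick way to check (just a sketch):

  grep '^devices=' /var/run/ovirt-hosted-engine-ha/vm.conf

A working setup should show a video device entry (e.g. device:qxl); if only a
console device is present, the VM comes up headless.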

Thanks,
Jenny


On Mon, Apr 10, 2017 at 11:44 AM, Martin Sivak wrote:


Hi,

we are working on that, we can only ask for patience now, Jenny
was trying to find out what happened and how to fix it all week.

Best regards

--
Martin Sivak
SLA / oVirt

On Mon, Apr 10, 2017 at 9:38 AM, Rafał Wojciechowski wrote:

hi,

I have a similar issue (I also started my own mail thread) after
upgrading 4.0 to 4.1.

Version 4.1.1.8-1.el7.centos (before it was some 4.1.0.x or
similar - the update did not fix it)

To run a VM I have to set Headless mode in the Console tab -
without it I get a libvirtd segfault (logs attached in my
mail thread).

So I am able to run VMs only without a console - do you also
have to set headless before running a VM?

I noticed that libvirt-daemon was also upgraded to version 2.0
during the ovirt upgrade - I don't think that 4.1 went untested
with such a libvirtd upgrade... but maybe?

Regards,
Rafal Wojciechowski

On 10.04.2017 at 08:24, Arsène Gschwind wrote:


Hi,

After updating to the oVirt 4.1.1 Async release I can confirm
that the problem still persists.

Rgds,
Arsène


On 03/25/2017 12:25 PM, Arsène Gschwind wrote:


Hi,

After updating to 4.1.1 I'm observing the same behavior: HE
without any console.
Even when trying to edit the HE VM it doesn't change
anything; Graphics stays at NONE.

Thanks for any Help.

Regards,
Arsène

On 03/24/2017 03:11 PM, Nelson Lameiras wrote:

Hello,

When upgrading my test setup from 4.0 to 4.1, my engine vm
lost its console (from SPICE to None in the GUI)

My test setup :
2 manually built hosts using centos 7.3, ovirt 4.1
1 manually built hosted engine centos 7.3, oVirt
4.1.0.4-el7, accessible with SPICE console via GUI

I updated ovirt-engine from 4.1.0 to 4.1.1 by doing on
engine :
- yum update
- engine-setup
- reboot engine

When accessing 4.1.1 GUI, Graphics is set to "None" on
"Virtual Machines" page, with "console button" greyed out
(all other VMs have the same Graphics set to the same
value as before)
I tried to edit engine VM settings, and console options
are the same as before (SPICE, QXL).

I'm hoping this is not a new feature, since if we lose
network on the engine, the console is the only way to debug...

Is this a bug?

ps. I was able to reproduce this bug 2 times

cordialement, regards,



Nelson LAMEIRAS
Ingénieur Systèmes et Réseaux/ Systems and Networks engineer
Tel: +33 5 32 09 09 70
nelson.lamei...@lyra-network.com

www.lyra-network.com | www.payzen.eu



Re: [ovirt-users] I’m having trouble deleting a test gluster volume

2017-04-12 Thread knarra

On 04/12/2017 08:45 PM, Precht, Andrew wrote:


Hi all,

You asked: Any errors in the ovirt-engine.log file?

Yes, in the engine.log this error is repeated about every 3 minutes:


2017-04-12 07:16:12,554-07 ERROR 
[org.ovirt.engine.core.bll.gluster.GlusterTasksSyncJob] 
(DefaultQuartzScheduler3) [ccc8ed0d-8b91-4397-b6b9-ab0f77c5f7b8] Error 
updating tasks from CLI: 
org.ovirt.engine.core.common.errors.EngineException: EngineException: 
Command execution failed
error: Error : Request timed out
return code: 1
(Failed with error GlusterVolumeStatusAllFailedException and code 4161)
error: Error : Request timed out



I am not sure why this says "Request timed out".


1) gluster volume list ->  Still shows the deleted volume (test1)

2) gluster peer status -> Shows one of the peers twice with different 
uuid’s:


Hostname: 192.168.10.109
Uuid: 42fbb7de-8e6f-4159-a601-3f858fa65f6c
State: Peer in Cluster (Connected)

Hostname: 192.168.10.109
Uuid: e058babe-7f9d-49fe-a3ea-ccdc98d7e5b5
State: Peer in Cluster (Connected)



How did this happen? Is the hostname the same for two different hosts?
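To find where the duplicate entry lives, it helps to compare the CLI view
with glusterd's on-disk peer state on each node (a sketch):

  gluster peer status
  ls /var/lib/glusterd/peers/     # one file per known peer, named by uuid
  cat /var/lib/glusterd/peers/*   # each file records the peer's uuid, state and hostname

If the same hostname appears under two uuids, one of the peer files is stale.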


I tried a gluster volume stop test1, with this result: volume stop: 
test1: failed: Another transaction is in progress for test1. Please 
try again after sometime.



Can you restart glusterd and then try to stop and delete the volume?
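For example (volume name taken from your mail):

  systemctl restart glusterd
  gluster volume stop test1     # answer y at the confirmation prompt
  gluster volume delete test1

"Another transaction is in progress" usually means a stale cluster-wide lock
left by an earlier operation, which the glusterd restart should clear.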


The etc-glusterfs-glusterd.vol.log shows no activity triggered by 
trying to remove the test1 volume from the UI.



The ovirt-engine.log shows this repeating many times, when trying to 
remove the test1 volume from the UI:



2017-04-12 07:57:38,049-07 INFO 
 [org.ovirt.engine.core.bll.lock.InMemoryLockManager] 
(DefaultQuartzScheduler9) [ccc8ed0d-8b91-4397-b6b9-ab0f77c5f7b8] 
Failed to acquire lock and wait lock 
'EngineLock:{exclusiveLocks='[b0e1b909-9a6a-49dc-8e20-3a027218f7e1=]', sharedLocks='null'}'


Can you restart the ovirt-engine service? I see that "failed to 
acquire lock" message; once ovirt-engine is restarted, whoever is holding 
the lock should release it and things should work fine.
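For example, on the engine machine:

  systemctl restart ovirt-engine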


Last but not least, if none of the above works:

Login to all your nodes in the cluster.
rm -rf /var/lib/glusterd/vols/*
rm -rf /var/lib/glusterd/peers/*
systemctl restart glusterd on all the nodes.

Login to UI and see if any volumes / hosts are present. If yes, remove them.

This should clear things for you and you can start again from scratch; a
scripted sketch of the node cleanup follows below.
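Roughly, assuming passwordless ssh and hypothetical node names:

  for node in node1 node2 node3; do
      ssh root@$node 'rm -rf /var/lib/glusterd/vols/* /var/lib/glusterd/peers/* && systemctl restart glusterd'
  done

Note that this wipes all volume and peer configuration on every node, so it
is only acceptable here because these are empty test volumes.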



Thanks much,

Andrew


*From:* knarra 
*Sent:* Tuesday, April 11, 2017 11:10:04 PM
*To:* Precht, Andrew; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon 
Mureinik; Nir Soffer

*Cc:* users
*Subject:* Re: [ovirt-users] I’m having trouble deleting a test 
gluster volume

On 04/12/2017 03:35 AM, Precht, Andrew wrote:


I just noticed this in the Alerts tab: Detected deletion of volume 
test1 on cluster 8000-1, and deleted it from engine DB.


Yet it still shows in the web UI?

Any errors in the ovirt-engine.log file? If the volume is deleted from the DB, 
ideally it should be deleted from the UI too. Can you go to the gluster nodes 
and check the following:


1) gluster volume list -> should not return anything since you have 
deleted the volumes.


2) gluster peer status -> on all the nodes should show that all the 
peers are in connected state.


Can you tail -f /var/log/ovirt-engine/ovirt-engine.log and the gluster log 
and capture the error messages when you try deleting the volume from the UI?


The log you pasted in the previous mail only gives info messages, and I 
could not get any details from it on why the volume delete is failing.




*From:* Precht, Andrew
*Sent:* Tuesday, April 11, 2017 2:39:31 PM
*To:* knarra; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon 
Mureinik; Nir Soffer

*Cc:* users
*Subject:* Re: [ovirt-users] I’m having trouble deleting a test 
gluster volume

The plot thickens…
I put all hosts in the cluster into maintenance mode, with the Stop 
Gluster service checkbox checked. I then deleted the 
/var/lib/glusterd/vols/test1 directory on all hosts. I then took the 
host that the test1 volume was on out of maintenance mode. Then I 
tried to remove the test1 volume from within the web UI. With no 
luck, I got the message: Could not delete Gluster Volume test1 on 
cluster 8000-1.


I went back and checked all hosts for the test1 directory; it is not 
on any host. Yet I still can’t remove it…


Any suggestions?


*From:* Precht, Andrew
*Sent:* Tuesday, April 11, 2017 1:15:22 PM
*To:* knarra; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon 
Mureinik; Nir Soffer

*Cc:* users
*Subject:* Re: [ovirt-users] I’m having trouble deleting a test 
gluster volume

Here is an update…

I checked the /var/log/glusterfs/etc-glusterfs-glusterd.vol.log on 
the node that had the trouble volume (test1). I didn’t see any errors.

Re: [ovirt-users] Adding existing kvm hosts

2017-04-12 Thread Konstantin Raskoshnyi
+Users

We're using SCLinux 6.7; the latest Python version available in updates is 2.6.6.

So I'm going to fix this
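For reference, the quick check on a host (values from our setup):

  cat /etc/redhat-release    # Scientific Linux release 6.7
  python -V                  # Python 2.6.6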

Thanks

On Wed, Apr 12, 2017 at 9:42 AM, Yaniv Kaul  wrote:

> Right. How did you end up with such an ancient version?
>
> Also, please email the users mailing list, not just me (so, for example,
> others will know what the issue is).
> Thanks,
> Y.
>
>
> On Apr 12, 2017 6:52 PM, "Konstantin Raskoshnyi" 
> wrote:
>
>> I just found this error on oVirt engine: Python version 2.6 is too old,
>> expecting at least 2.7.
>>
>> So going to upgrade python first
>>
>> On Wed, Apr 12, 2017 at 4:41 AM, Yaniv Kaul  wrote:
>>
>>> Can you share the vdsm log? The host deploy log (from the engine) ?
>>> Y.
>>>
>>>
>>> On Wed, Apr 12, 2017 at 8:13 AM, Konstantin Raskoshnyi <
>>> konra...@gmail.com> wrote:
>>>
Hi guys, we've never had mgmt for our kvm machines

 I installed oVirt 4.1 on CentOS73 and trying to add existing kvm hosts
 but oVirt fails with this error

 2017-04-12 05:08:46,430Z ERROR [org.ovirt.engine.core.uutils.ssh.SSHDialog]
 (org.ovirt.thread.pool-7-thread-21) [4a1d5f35] Exception:
 java.io.IOException: Command returned failure code 1 during SSH session
 'root@tank3'

 I don't experience any problems connecting to virtank3 under root.

 2017-04-12 05:08:46,445Z ERROR [org.ovirt.engine.core.dal.dbb
 roker.auditloghandling.AuditLogDirector] 
 (org.ovirt.thread.pool-7-thread-21)
 [4a1d5f35] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), Correlation
 ID: 4a1d5f35, Call Stack: null, Custom Event ID: -1, Message: Failed to
 install Host tank3. Command returned failure code 1 during SSH session
 'root@tank3'.
 2017-04-12 05:08:46,445Z ERROR 
 [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase]
 (org.ovirt.thread.pool-7-thread-21) [4a1d5f35] Error during host tank3
 install, prefering first exception: Unexpected connection termination
 2017-04-12 05:08:46,445Z ERROR [org.ovirt.engine.core.bll.hos
 tdeploy.InstallVdsInternalCommand] (org.ovirt.thread.pool-7-thread-21)
 [4a1d5f35] Host installation failed for host 
 'cec720ed-460a-48aa-a9fc-2262b6da5a83',
 'tank3': Unexpected connection termination
 2017-04-12 05:08:46,446Z INFO  
 [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
 (org.ovirt.thread.pool-7-thread-21) [4a1d5f35] START,
 SetVdsStatusVDSCommand(HostName = tank3, 
 SetVdsStatusVDSCommandParameters:{runAsync='true',
 hostId='cec720ed-460a-48aa-a9fc-2262b6da5a83', status='InstallFailed',
 nonOperationalReason='NONE', stopSpmFailureLogged='false',
 maintenanceReason='null'}), log id: 4bbc52f9
 2017-04-12 05:08:46,449Z INFO  
 [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
 (org.ovirt.thread.pool-7-thread-21) [4a1d5f35] FINISH,
 SetVdsStatusVDSCommand, log id: 4bbc52f9
 2017-04-12 05:08:46,457Z ERROR [org.ovirt.engine.core.dal.dbb
 roker.auditloghandling.AuditLogDirector] 
 (org.ovirt.thread.pool-7-thread-21)
 [4a1d5f35] EVENT_ID: VDS_INSTALL_FAILED(505), Correlation ID: 4a1d5f35, Job
 ID: 8af22af5-72a5-4ec4-b216-4e26ceaa48d6, Call Stack: null, Custom
 Event ID: -1, Message: Host tank3 installation failed. Unexpected
 connection termination.
 2017-04-12 05:08:46,496Z INFO  [org.ovirt.engine.core.bll.ho
 stdeploy.InstallVdsInternalCommand] (org.ovirt.thread.pool-7-thread-21)
 [4a1d5f35] Lock freed to object 'EngineLock:{exclusiveLocks='[
 cec720ed-460a-48aa-a9fc-2262b6da5a83=]', sharedLocks='null'}'
 2017-04-12 05:09:02,742Z INFO  [org.ovirt.engine.core.bll.RemoveVdsCommand]
 (default task-48) [13050988-bf00-4391-9862-a8ed8ade34dd] Lock Acquired
 to object 
 'EngineLock:{exclusiveLocks='[cec720ed-460a-48aa-a9fc-2262b6da5a83=]', sharedLocks='null'}'
 2017-04-12 05:09:02,750Z INFO  [org.ovirt.engine.core.bll.RemoveVdsCommand]
 (org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
 Running command: RemoveVdsCommand internal: false. Entities affected :  ID:
 cec720ed-460a-48aa-a9fc-2262b6da5a83 Type: VDSAction group DELETE_HOST
 with role type ADMIN
 2017-04-12 05:09:02,822Z INFO  
 [org.ovirt.engine.core.vdsbroker.RemoveVdsVDSCommand]
 (org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
 START, RemoveVdsVDSCommand( RemoveVdsVDSCommandParameters:{runAsync='true',
 hostId='cec720ed-460a-48aa-a9fc-2262b6da5a83'}), log id: 26e68c12
 2017-04-12 05:09:02,822Z INFO  [org.ovirt.engine.core.vdsbroker.VdsManager]
 (org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
 vdsManager::disposing
 2017-04-12 05:09:02,822Z INFO  
 [org.ovirt.engine.core.vdsbroker.RemoveVdsVDSCommand]
 (org.ovirt.thread.pool-7-thread-22) 

Re: [ovirt-users] Compiling oVirt for Debian.

2017-04-12 Thread Leni Kadali Mutungi
Thank you both for the warm welcome. I had already gone through the
development guide, but the other links are good; I will definitely
check them out and get back to you.

On 4/12/17, Yedidyah Bar David  wrote:
> On Wed, Apr 12, 2017 at 10:28 AM, Sandro Bonazzola 
> wrote:
>
>> Hi Leni, welcome to the oVirt community!
>>
>> On Wed, Apr 12, 2017 at 5:57 AM, Leni Kadali Mutungi <
>> lenikmutu...@gmail.com> wrote:
>>
>>> Hello all.
>>>
>>> I am trying to install oVirt on Debian. So far I've managed to install
>>> a good chunk of the dependencies.
>>
>>
>> Nice to see interest in getting oVirt on Debian! I'm adding Milan Zamal
>> who was looking into getting vdsm running on Debian.
>>
>>
>>
>>> However I haven't been able to
>>> install otopi, ovirt-host-deploy, ovirt-js-dependencies,
>>> ovirt-setup-lib since Debian has no packages for these. With the
>>> exception of otopi (whose build instructions I was unable to make
>>> sense of on GitHub),
>>> everything else is to be gotten from Fedora/EPEL repos.
>>>
>>
>>
>> Please note that otopi does not support Debian yet.
>> It's missing support for the packaging system used by Debian. It
>> currently supports only the yum and dnf package managers.
>> Since ovirt-host-deploy and ovirt-setup-lib depend on otopi, you'll
>> need to work on the otopi code first.
>>
>
> Adding apt/dpkg support to otopi is indeed interesting and useful, but
> imo isn't mandatory for a first milestone. Not having an apt packager will
> simply mean you can't install/update packages using otopi, but other things
> should work. Notably, you won't be able to use engine-setup for upgrades,
> at least not the way it's done with yum and versionlock.
>
>
>>
>>
>>>
>>> I had thought of using alien to convert from rpm to deb, but
>>> apparently the recommended thing is to compile from source, since
>>> using alien can lead to a complex version of dependency hell.
>>>
>>> I can download WildFly from source, though again the recommended
>>> procedure is to install ovirt-engine-wildfly and
>>> ovirt-wildfly-overlay.
>>>
>>
>> I would suggest getting in touch with the WildFly community about having
>> WildFly packaged for Debian.
>>
>>
>>>
>>> Any assistance in tracking down the source code of the above packages
>>> so that I can install them is appreciated.
>>>
>>
> I think that all of them are maintained on gerrit.ovirt.org, and most have
> mirrors on github.com/ovirt.
>
> If you haven't yet, you might want to check also:
>
> http://www.ovirt.org/develop/developer-guide/engine/engine-development-environment/
> http://www.ovirt.org/develop/dev-process/working-with-gerrit/
>
> The engine used to work on Gentoo in the past, although I do not think
> anyone tried that in the last 1.5 years, so the following is not up-to-date
> or working, but can still give you some ideas:
>
> https://wiki.gentoo.org/wiki/OVirt
>
> Good luck and best regards,
>
>
>>
>>> --
>>> - Warm regards
>>> Leni Kadali Mutungi
>>>
>>
>>
>>
>> --
>>
>> SANDRO BONAZZOLA
>>
>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>>
>> Red Hat EMEA 
>> 
>> TRIED. TESTED. TRUSTED. 
>>
>>
>>
>
>
> --
> Didi
>


-- 
- Warm regards
Leni Kadali Mutungi


Re: [ovirt-users] Compiling oVirt for Debian.

2017-04-12 Thread Yedidyah Bar David
On Wed, Apr 12, 2017 at 10:28 AM, Sandro Bonazzola 
wrote:

> Hi Leni, welcome to the oVirt community!
>
> On Wed, Apr 12, 2017 at 5:57 AM, Leni Kadali Mutungi <
> lenikmutu...@gmail.com> wrote:
>
>> Hello all.
>>
>> I am trying to install oVirt on Debian. So far I've managed to install
>> a good chunk of the dependencies.
>
>
> Nice to see interest in getting oVirt on Debian! I'm adding Milan Zamal
> who was looking into getting vdsm running on Debian.
>
>
>
>> However I haven't been able to
>> install otopi, ovirt-host-deploy, ovirt-js-dependencies,
>> ovirt-setup-lib since Debian has no packages for these. With the
>> exception of otopi (whose build instructions I was unable to make
>> sense of on GitHub),
>> everything else is to be gotten from Fedora/EPEL repos.
>>
>
>
> Please note that otopi does not support Debian yet.
> It's missing support for the packaging system used by Debian. It currently
> supports only the yum and dnf package managers.
> Since ovirt-host-deploy and ovirt-setup-lib depend on otopi, you'll
> need to work on the otopi code first.
>

Adding apt/dpkg support to otopi is indeed interesting and useful, but
imo isn't mandatory for a first milestone. Not having an apt packager will
simply mean you can't install/update packages using otopi, but other things
should work. Notably, you won't be able to use engine-setup for upgrades,
at least not the way it's done with yum and versionlock.


>
>
>>
>> I had thought of using alien to convert from rpm to deb, but
>> apparently the recommended thing is to compile from source, since
>> using alien can lead to a complex version of dependency hell.
>>
>> I can download WildFly from source, though again the recommended
>> procedure is to install ovirt-engine-wildfly and
>> ovirt-wildfly-overlay.
>>
>
> I would suggest getting in touch with the WildFly community about having
> WildFly packaged for Debian.
>
>
>>
>> Any assistance in tracking down the source code of the above packages
>> so that I can install them is appreciated.
>>
>
I think that all of them are maintained on gerrit.ovirt.org, and most have
mirrors on github.com/ovirt.
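For example (the repository names here are my assumption, matching the
package names):

  git clone https://gerrit.ovirt.org/otopi
  git clone https://gerrit.ovirt.org/ovirt-host-deploy
  git clone https://gerrit.ovirt.org/ovirt-setup-lib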

If you haven't yet, you might want to check also:

http://www.ovirt.org/develop/developer-guide/engine/engine-development-environment/
http://www.ovirt.org/develop/dev-process/working-with-gerrit/

The engine used to work on Gentoo in the past, although I do not think
anyone tried that in the last 1.5 years, so the following is not up-to-date
or working, but can still give you some ideas:

https://wiki.gentoo.org/wiki/OVirt

Good luck and best regards,


>
>> --
>> - Warm regards
>> Leni Kadali Mutungi
>>
>
>
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
> Red Hat EMEA 
> 
> TRIED. TESTED. TRUSTED. 
>
>
>


-- 
Didi


Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-04-12 Thread Arsène Gschwind

Hi all,

I will answer your questions:

1. definitively yes
2. the command hosted-engine --console works well and I'm able to connect.
3. Here are the device entries

devices={device:qxl,alias:video0,type:video,deviceId:5210a3c3-9cc4-4aed-90c6-432dd2d37c46,address:{slot:0x02,
bus:0x00,domain:0x,type:pci,function:0x0}}
devices={device:console,type:console}

Thanks and rgds,
Arsène

On 04/12/2017 10:53 AM, Evgenia Tokar wrote:

Hi all,

I have managed to reproduce this issue and opened a bug for tracking 
it: https://bugzilla.redhat.com/show_bug.cgi?id=1441570 .


There is no solution yet, but I would appreciate it if anyone who 
encountered this issue would answer some questions:

1. Is the console button greyed out in the UI?
2. On the hosted engine host, does the command hosted-engine --console 
fail?
 If it fails, try upgrading ovirt-hosted-engine-ha on the hosted 
engine host. We had a bug related to this issue that was fixed 
(https://bugzilla.redhat.com/show_bug.cgi?id=1364132).
 After upgrade and restart of the vm, this should work, and you 
should be able to connect to the console.
3. On the hosted engine host look at the content of: 
/var/run/ovirt-hosted-engine-ha/vm.conf

Does it contain a graphical device? Or a console device?

Thanks,
Jenny


On Mon, Apr 10, 2017 at 11:44 AM, Martin Sivak wrote:


Hi,

we are working on that, we can only ask for patience now, Jenny
was trying to find out what happened and how to fix it all week.

Best regards

--
Martin Sivak
SLA / oVirt

On Mon, Apr 10, 2017 at 9:38 AM, Rafał Wojciechowski wrote:

hi,

I have a similar issue (I also started my own mail thread) after
upgrading 4.0 to 4.1.

Version 4.1.1.8-1.el7.centos (before it was some 4.1.0.x or
similar - the update did not fix it)

To run a VM I have to set Headless mode in the Console tab - without
it I get a libvirtd segfault (logs attached in my mail thread).

So I am able to run VMs only without a console - do you also
have to set headless before running a VM?

I noticed that libvirt-daemon was also upgraded to version 2.0
during the ovirt upgrade - I don't think that 4.1 went untested
with such a libvirtd upgrade... but maybe?

Regards,
Rafal Wojciechowski

On 10.04.2017 at 08:24, Arsène Gschwind wrote:


Hi,

After updating to the oVirt 4.1.1 Async release I can confirm
that the problem still persists.

Rgds,
Arsène


On 03/25/2017 12:25 PM, Arsène Gschwind wrote:


Hi,

After updating to 4.1.1 I'm observing the same behavior: HE
without any console.
Even when trying to edit the HE VM it doesn't change
anything; Graphics stays at NONE.

Thanks for any Help.

Regards,
Arsène

On 03/24/2017 03:11 PM, Nelson Lameiras wrote:

Hello,

When upgrading my test setup from 4.0 to 4.1, my engine vm
lost its console (from SPICE to None in the GUI)

My test setup :
2 manually built hosts using centos 7.3, ovirt 4.1
1 manually built hosted engine centos 7.3, oVirt
4.1.0.4-el7, accessible with SPICE console via GUI

I updated ovirt-engine from 4.1.0 to 4.1.1 by doing on engine :
- yum update
- engine-setup
- reboot engine

When accessing 4.1.1 GUI, Graphics is set to "None" on
"Virtual Machines" page, with "console button" greyed out
(all other VMs have the same Graphics set to the same value
as before)
I tried to edit engine VM settings, and console options are
the same as before (SPICE, QXL).

I'm hoping this is not a new feature, since if we lose
network on the engine, the console is the only way to debug...

Is this a bug?

ps. I was able to reproduce this bug 2 times

cordialement, regards,



Nelson LAMEIRAS
Ingénieur Systèmes et Réseaux/ Systems and Networks engineer
Tel: +33 5 32 09 09 70
nelson.lamei...@lyra-network.com

www.lyra-network.com | www.payzen.eu








Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE





Re: [ovirt-users] Adding existing kvm hosts

2017-04-12 Thread Yaniv Kaul
Can you share the vdsm log? The host deploy log (from the engine) ?
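For reference, they usually live here (exact file names vary per run):

  /var/log/ovirt-engine/host-deploy/   # host-deploy logs, on the engine
  /var/log/vdsm/vdsm.log               # vdsm log, on the host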
Y.


On Wed, Apr 12, 2017 at 8:13 AM, Konstantin Raskoshnyi 
wrote:

> Hi guys, we've never had mgmt for our kvm machines
>
> I installed oVirt 4.1 on CentOS73 and trying to add existing kvm hosts but
> oVirt fails with this error
>
> 2017-04-12 05:08:46,430Z ERROR [org.ovirt.engine.core.uutils.ssh.SSHDialog]
> (org.ovirt.thread.pool-7-thread-21) [4a1d5f35] Exception:
> java.io.IOException: Command returned failure code 1 during SSH session
> 'root@tank3'
>
> I don't experience any problems connecting to virtank3 under root.
>
> 2017-04-12 05:08:46,445Z ERROR [org.ovirt.engine.core.dal.
> dbbroker.auditloghandling.AuditLogDirector] 
> (org.ovirt.thread.pool-7-thread-21)
> [4a1d5f35] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), Correlation ID:
> 4a1d5f35, Call Stack: null, Custom Event ID: -1, Message: Failed to install
> Host tank3. Command returned failure code 1 during SSH session 'root@tank3
> '.
> 2017-04-12 05:08:46,445Z ERROR 
> [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase]
> (org.ovirt.thread.pool-7-thread-21) [4a1d5f35] Error during host tank3
> install, prefering first exception: Unexpected connection termination
> 2017-04-12 05:08:46,445Z ERROR 
> [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
> (org.ovirt.thread.pool-7-thread-21) [4a1d5f35] Host installation failed
> for host 'cec720ed-460a-48aa-a9fc-2262b6da5a83', 'tank3': Unexpected
> connection termination
> 2017-04-12 05:08:46,446Z INFO  
> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
> (org.ovirt.thread.pool-7-thread-21) [4a1d5f35] START,
> SetVdsStatusVDSCommand(HostName = tank3, 
> SetVdsStatusVDSCommandParameters:{runAsync='true',
> hostId='cec720ed-460a-48aa-a9fc-2262b6da5a83', status='InstallFailed',
> nonOperationalReason='NONE', stopSpmFailureLogged='false',
> maintenanceReason='null'}), log id: 4bbc52f9
> 2017-04-12 05:08:46,449Z INFO  
> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
> (org.ovirt.thread.pool-7-thread-21) [4a1d5f35] FINISH,
> SetVdsStatusVDSCommand, log id: 4bbc52f9
> 2017-04-12 05:08:46,457Z ERROR [org.ovirt.engine.core.dal.
> dbbroker.auditloghandling.AuditLogDirector] 
> (org.ovirt.thread.pool-7-thread-21)
> [4a1d5f35] EVENT_ID: VDS_INSTALL_FAILED(505), Correlation ID: 4a1d5f35, Job
> ID: 8af22af5-72a5-4ec4-b216-4e26ceaa48d6, Call Stack: null, Custom Event
> ID: -1, Message: Host tank3 installation failed. Unexpected connection
> termination.
> 2017-04-12 05:08:46,496Z INFO  
> [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
> (org.ovirt.thread.pool-7-thread-21) [4a1d5f35] Lock freed to object
> 'EngineLock:{exclusiveLocks='[cec720ed-460a-48aa-a9fc-2262b6da5a83=<ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
> 2017-04-12 05:09:02,742Z INFO  [org.ovirt.engine.core.bll.RemoveVdsCommand]
> (default task-48) [13050988-bf00-4391-9862-a8ed8ade34dd] Lock Acquired to
> object 
> 'EngineLock:{exclusiveLocks='[cec720ed-460a-48aa-a9fc-2262b6da5a83=<ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
> 2017-04-12 05:09:02,750Z INFO  [org.ovirt.engine.core.bll.RemoveVdsCommand]
> (org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
> Running command: RemoveVdsCommand internal: false. Entities affected :  ID:
> cec720ed-460a-48aa-a9fc-2262b6da5a83 Type: VDSAction group DELETE_HOST
> with role type ADMIN
> 2017-04-12 05:09:02,822Z INFO  
> [org.ovirt.engine.core.vdsbroker.RemoveVdsVDSCommand]
> (org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
> START, RemoveVdsVDSCommand( RemoveVdsVDSCommandParameters:{runAsync='true',
> hostId='cec720ed-460a-48aa-a9fc-2262b6da5a83'}), log id: 26e68c12
> 2017-04-12 05:09:02,822Z INFO  [org.ovirt.engine.core.vdsbroker.VdsManager]
> (org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
> vdsManager::disposing
> 2017-04-12 05:09:02,822Z INFO  
> [org.ovirt.engine.core.vdsbroker.RemoveVdsVDSCommand]
> (org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
> FINISH, RemoveVdsVDSCommand, log id: 26e68c12
> 2017-04-12 05:09:02,824Z WARN  
> [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker]
> (ResponseWorker) [] Exception thrown during message processing
> 2017-04-12 05:09:02,848Z INFO  [org.ovirt.engine.core.dal.
> dbbroker.auditloghandling.AuditLogDirector] 
> (org.ovirt.thread.pool-7-thread-22)
> [13050988-bf00-4391-9862-a8ed8ade34dd] EVENT_ID: USER_REMOVE_VDS(44),
> Correlation ID: 13050988-bf00-4391-9862-a8ed8ade34dd, Call Stack: null,
> Custom Event ID: -1, Message: Host tank3 was removed by
> admin@internal-authz.
> 2017-04-12 05:09:02,848Z INFO  [org.ovirt.engine.core.bll.RemoveVdsCommand]
> (org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
> Lock freed to object 'EngineLock:{exclusiveLocks='[
> cec720ed-460a-48aa-a9fc-2262b6da5a83=<ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
> 

Re: [ovirt-users] Hosted engine setup shooting dirty pool

2017-04-12 Thread Evgenia Tokar
Hi Jamie,

Are you trying to set up the hosted engine using the "hosted-engine --deploy"
command, or are you trying to migrate an existing HE VM?

For hosted engine setup you need to provide a clean storage domain, which
is not a part of your 4.1 setup. This storage domain will be used for the
hosted engine and will be visible in the UI once the deployment of the
hosted engine is complete.
If your storage domain appears in the UI it means that it is already
connected to the storage pool and is not "clean".
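One way to verify a domain is clean before running the deploy, as a sketch
(server, volume and mount point are just examples):

  mount -t glusterfs myserver:/engine /mnt/he-check
  ls -A /mnt/he-check    # should print nothing for a clean domain
  umount /mnt/he-check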

Thanks,
Jenny

On Wed, Apr 12, 2017 at 2:47 AM, Jamie Lawrence 
wrote:

> Or at least, refusing to mount a dirty pool.
>
> I have 4.1 set up, configured and functional, currently wired up with two
> VM hosts and three Gluster hosts. It is configured with a (temporary) NFS
> data storage domain, with the end-goal being two data domains on Gluster;
> one for the hosted engine, one for other VMs.
>
> The issue is that `hosted-engine` sees any gluster volumes offered as
> dirty. (I have been creating them via the command line  right before
> attempting the hosted-engine migration; there is nothing in them at that
> stage.)  I *think* what is happening is that ovirt-engine notices a newly
> created volume and has its way with the volume (visible in the GUI; the
> volume appears in the list), and the hosted-engine installer becomes upset
> about that. What I don’t know is what to do about it. Relevant log lines
> below. The installer almost sounds like it is asking me to remove the
> UUID-directory and whatnot, but I’m pretty sure that’s just going to leave
> me with two problems instead of fixing the first one. I’ve considered
> attempting to wire this together in the DB, which also seems like a great
> way to break things. I’ve even thought of using a Gluster installation that
> Ovirt knows nothing about, mainly as an experiment to see if it would even
> work, but decided it doesn’t matter, because I can’t deploy in that state
> anyway and it doesn’t actually get me any closer to getting this working.
>
> I noticed several bugs in the tracker seemingly related, but the bulk of
> those were for past versions and I saw nothing that seemed actionable from
> my end in the others.
>
> So, can anyone spare a clue as to what is going wrong, and what to do
> about that?
>
> -j
>
> - - - - ovirt-hosted-engine-setup.log - - - -
>
> 2017-04-11 16:14:39 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._storageServerConnection:408 connectStorageServer
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._storageServerConnection:475 {'status': {'message': 'Done',
> 'code': 0}, 'items': [{u'status': 0, u'id': u'890e82cf-5570-4507-a9bc-
> c610584dea6e'}]}
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._storageServerConnection:502 {'status': {'message': 'Done',
> 'code': 0}, 'items': [{u'status': 0, u'id': u'cd1a1bb6-e607-4e35-b815-
> 1fd88b84fe14'}]}
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._check_existing_pools:794 _check_existing_pools
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._check_existing_pools:795 getConnectedStoragePoolsList
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._check_existing_pools:797 {'status': {'message': 'Done', 'code':
> 0}}
> 2017-04-11 16:14:40 INFO otopi.plugins.gr_he_setup.storage.storage
> storage._misc:956 Creating Storage Domain
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createStorageDomain:513 createStorageDomain
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createStorageDomain:547 {'status': {'message': 'Done', 'code':
> 0}}
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createStorageDomain:549 {'status': {'message': 'Done', 'code':
> 0}, u'mdasize': 0, u'mdathreshold': True, u'mdavalid': True, u'diskfree':
> u'321929216000', u'disktotal': u'321965260800', u'mdafree': 0}
> 2017-04-11 16:14:40 INFO otopi.plugins.gr_he_setup.storage.storage
> storage._misc:959 Creating Storage Pool
> 2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createFakeStorageDomain:553 createFakeStorageDomain
> 2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createFakeStorageDomain:570 {'status': {'message': 'Done',
> 'code': 0}}
> 2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createFakeStorageDomain:572 {'status': {'message': 'Done',
> 'code': 0}, u'mdasize': 0, u'mdathreshold': True, u'mdavalid': True,
> u'diskfree': u'1933930496', u'disktotal': u'2046640128', u'mdafree': 0}
> 2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createStoragePool:587 createStoragePool
> 2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage
> storage._createStoragePool:627 createStoragePool(args=[
> 

[ovirt-users] Networking setup

2017-04-12 Thread Alexis HAUSER
Hi, 


I have an oVirt installation with 3 nodes (5 soon), each with 6 network cards 
(8 soon), plus a multipath iSCSI array, and I would like to know how you would 
advise me to choose which links to bond or not. 

I thought about : 

1+2 : ovirtmgmt (bond) 
3+4 : iSCSI (multipath) 
5 : VM and Display 
6 : Migration 

What do you think about this configuration? 
Is it a bad idea to put the VM and Display networks on the same interface? 
Does ovirtmgmt need high bandwidth? 
In terms of bandwidth, is it a bad idea to have one single NIC for Migration? 
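For the bonds themselves I was thinking of something like this on EL7 (a
sketch only; mode and names are examples, and I understand oVirt would
normally create the bond from the engine UI so that VDSM manages it):

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  TYPE=Bond
  BONDING_MASTER=yes
  BONDING_OPTS="mode=802.3ad miimon=100"
  ONBOOT=yes
  BRIDGE=ovirtmgmt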


Thanks in advance for your suggestions 


Re: [ovirt-users] Compiling oVirt for Debian.

2017-04-12 Thread Sandro Bonazzola
Hi Leni, welcome to the oVirt community!

On Wed, Apr 12, 2017 at 5:57 AM, Leni Kadali Mutungi  wrote:

> Hello all.
>
> I am trying to install oVirt on Debian. So far I've managed to install
> a good chunk of the dependencies.


Nice to see interest in getting oVirt on Debian! I'm adding Milan Zamal who
was looking into getting vdsm running on Debian.



> However I haven't been able to
> install otopi, ovirt-host-deploy, ovirt-js-dependencies,
> ovirt-setup-lib since Debian has no packages for these. With the
> exception of otopi (whose build instructions I was unable to make
> sense of on GitHub),
> everything else is to be gotten from Fedora/EPEL repos.
>


Please note that otopi does not support Debian yet.
It's missing support for the packaging system used by Debian. It currently
supports only the yum and dnf package managers.
Since ovirt-host-deploy and ovirt-setup-lib depend on otopi, you'll need
to work on the otopi code first.


>
> I had thought of using alien to convert from rpm to deb, but
> apparently the recommended thing is to compile from source, since
> using alien can lead to a complex version of dependency hell.
>
> I can download WildFly from source, though again the recommended
> procedure is to install ovirt-engine-wildfly and
> ovirt-wildfly-overlay.
>

I would suggest getting in touch with the WildFly community about having
WildFly packaged for Debian.


>
> Any assistance in tracking down the source code of the above packages
> so that I can install them is appreciated.
>
> --
> - Warm regards
> Leni Kadali Mutungi
>



-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA 

TRIED. TESTED. TRUSTED. 


Re: [ovirt-users] Storage latency message

2017-04-12 Thread Yaniv Kaul
On Tue, Apr 11, 2017 at 3:57 PM, Chris Adams  wrote:

> I've been getting an occasional message like:
>
> Storage domain hosted_storage experienced a high latency of
> 5.26121 seconds from host node3.
>
> I'm not sure what is causing them though.  I look at my storage
> (EqualLogic iSCSI SAN) and storage network switches and don't see any
> issues.
>
> When the above message was logged, node3 was not hosting the engine
> (doesn't even have engine HA installed), nor was it the SPM, so why
> would it have even been accessing the hosted_storage domain?
>

All hosts are monitoring their access to all storage domains in the data
center.
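You can look at the per-domain numbers each host reports, for example (a
vdsClient invocation from the 4.1 era; field names may differ):

  vdsClient -s 0 repoStats

and check the 'delay' value for the domain in question.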
Y.


> This is with oVirt 4.1.
> --
> Chris Adams 
>


Re: [ovirt-users] I’m having trouble deleting a test gluster volume

2017-04-12 Thread Precht, Andrew
The plot thickens…
I put all hosts in the cluster into maintenance mode, with the Stop Gluster 
service checkbox checked. I then deleted the /var/lib/glusterd/vols/test1 
directory on all hosts. I then took the host that the test1 volume was on out 
of maintenance mode. Then I tried to remove the test1 volume from within the 
web UI. With no luck, I got the message: Could not delete Gluster Volume test1 
on cluster 8000-1.

I went back and checked all hosts for the test1 directory; it is not on any 
host. Yet I still can’t remove it…

Any suggestions?



From: Precht, Andrew
Sent: Tuesday, April 11, 2017 1:15:22 PM
To: knarra; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon Mureinik; Nir Soffer
Cc: users
Subject: Re: [ovirt-users] I’m having trouble deleting a test gluster volume


Here is an update…

I checked the /var/log/glusterfs/etc-glusterfs-glusterd.vol.log on the node 
that had the trouble volume (test1). I didn’t see any errors. So, I ran a tail 
-f on the log as I tried to remove the volume using the web UI. Here is what 
was appended:

[2017-04-11 19:48:40.756360] I [MSGID: 106487] 
[glusterd-handler.c:1474:__glusterd_handle_cli_list_friends] 0-glusterd: 
Received cli list req
[2017-04-11 19:48:42.238840] I [MSGID: 106488] 
[glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: 
Received get vol req
The message "I [MSGID: 106487] 
[glusterd-handler.c:1474:__glusterd_handle_cli_list_friends] 0-glusterd: 
Received cli list req" repeated 6 times between [2017-04-11 19:48:40.756360] 
and [2017-04-11 19:49:32.596536]
The message "I [MSGID: 106488] 
[glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: 
Received get vol req" repeated 20 times between [2017-04-11 19:48:42.238840] 
and [2017-04-11 19:49:34.082179]
[2017-04-11 19:51:41.556077] I [MSGID: 106487] 
[glusterd-handler.c:1474:__glusterd_handle_cli_list_friends] 0-glusterd: 
Received cli list req

I’m seeing that the timestamps on these log entries do not match the time on 
the node.

The next steps
I stopped the glusterd service on the node with volume test1
I deleted it with:  rm -rf /var/lib/glusterd/vols/test1
I started the glusterd service.

After starting the gluster service back up, the directory 
/var/lib/glusterd/vols/test1 reappears.
I’m guessing it is syncing from the other nodes?
Is this because I have the Volume Option: auth allow *
Do I need to remove the directory /var/lib/glusterd/vols/test1 on all nodes in 
the cluster individually?

thanks



From: knarra 
Sent: Tuesday, April 11, 2017 11:51:18 AM
To: Precht, Andrew; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon Mureinik; 
Nir Soffer
Cc: users
Subject: Re: [ovirt-users] I’m having trouble deleting a test gluster volume

On 04/11/2017 11:28 PM, Precht, Andrew wrote:
Hi all,
The node is oVirt Node 4.1.1 with glusterfs-3.8.10-1.el7.
On the node I cannot find /var/log/glusterfs/glusterd.log. However, there is a 
/var/log/glusterfs/glustershd.log
Can you check if /var/log/glusterfs/etc-glusterfs-glusterd.vol.log exists? If 
yes, can you check if there is any error present in that file?

What happens if I follow the four steps outlined here to remove the volume from 
the node BUT I do have another volume present in the cluster? It too is a test 
volume. Neither one has any data on them, so data loss is not an issue.
Running those four steps will remove the volume from your cluster. If the 
volumes you have are test volumes, you could just follow the steps outlined 
to delete them (since you are not able to delete them from the UI) and bring 
the cluster back into a normal state.


From: knarra 
Sent: Tuesday, April 11, 2017 10:32:27 AM
To: Sandro Bonazzola; Precht, Andrew; Sahina Bose; Tal Nisan; Allon Mureinik; 
Nir Soffer
Cc: users
Subject: Re: [ovirt-users] I’m having trouble deleting a test gluster volume

On 04/11/2017 10:44 PM, Sandro Bonazzola wrote:
Adding some people

On 11/Apr/2017 19:06, "Precht, Andrew" wrote:
Hi Ovirt users,
I’m a newbie to oVirt and I’m having trouble deleting a test gluster volume. 
The nodes are 4.1.1 and the engine is 4.1.0

When I try to remove the test volume, I click Remove, the dialog box prompting 
to confirm the deletion pops up and after I click OK, the dialog box changes to 
show a little spinning wheel and then it disappears. In the end the volume is 
still there.
With the latest versions of glusterfs & ovirt we do not see any issue with 
deleting a volume. Can you please check the /var/log/glusterfs/glusterd.log 
file for any errors?


The test volume was distributed with two host members. One of the hosts I was 
able to remove from the volume by removing the host from the cluster. When I 
try to remove the remaining host in the volume, even with the “Force Remove” 
box 

[ovirt-users] Adding existing kvm hosts

2017-04-12 Thread Konstantin Raskoshnyi
Hi guys, we've never had mgmt for our kvm machines

I installed oVirt 4.1 on CentOS73 and trying to add existing kvm hosts but
oVirt fails with this error

2017-04-12 05:08:46,430Z ERROR [org.ovirt.engine.core.uutils.ssh.SSHDialog]
(org.ovirt.thread.pool-7-thread-21) [4a1d5f35] Exception:
java.io.IOException: Command returned failure code 1 during SSH session
'root@tank3'

I don't experience any problems connecting to virtank3 under root.

2017-04-12 05:08:46,445Z ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-7-thread-21) [4a1d5f35] EVENT_ID:
VDS_INSTALL_IN_PROGRESS_ERROR(511), Correlation ID: 4a1d5f35, Call Stack:
null, Custom Event ID: -1, Message: Failed to install Host tank3. Command
returned failure code 1 during SSH session 'root@tank3'.
2017-04-12 05:08:46,445Z ERROR
[org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase]
(org.ovirt.thread.pool-7-thread-21) [4a1d5f35] Error during host tank3
install, prefering first exception: Unexpected connection termination
2017-04-12 05:08:46,445Z ERROR
[org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
(org.ovirt.thread.pool-7-thread-21) [4a1d5f35] Host installation failed for
host 'cec720ed-460a-48aa-a9fc-2262b6da5a83', 'tank3': Unexpected connection
termination
2017-04-12 05:08:46,446Z INFO
 [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(org.ovirt.thread.pool-7-thread-21) [4a1d5f35] START,
SetVdsStatusVDSCommand(HostName = tank3,
SetVdsStatusVDSCommandParameters:{runAsync='true',
hostId='cec720ed-460a-48aa-a9fc-2262b6da5a83', status='InstallFailed',
nonOperationalReason='NONE', stopSpmFailureLogged='false',
maintenanceReason='null'}), log id: 4bbc52f9
2017-04-12 05:08:46,449Z INFO
 [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(org.ovirt.thread.pool-7-thread-21) [4a1d5f35] FINISH,
SetVdsStatusVDSCommand, log id: 4bbc52f9
2017-04-12 05:08:46,457Z ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-7-thread-21) [4a1d5f35] EVENT_ID:
VDS_INSTALL_FAILED(505), Correlation ID: 4a1d5f35, Job ID:
8af22af5-72a5-4ec4-b216-4e26ceaa48d6, Call Stack: null, Custom Event ID:
-1, Message: Host tank3 installation failed. Unexpected connection
termination.
2017-04-12 05:08:46,496Z INFO
 [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
(org.ovirt.thread.pool-7-thread-21) [4a1d5f35] Lock freed to object
'EngineLock:{exclusiveLocks='[cec720ed-460a-48aa-a9fc-2262b6da5a83=]', sharedLocks='null'}'
2017-04-12 05:09:02,742Z INFO  [org.ovirt.engine.core.bll.RemoveVdsCommand]
(default task-48) [13050988-bf00-4391-9862-a8ed8ade34dd] Lock Acquired to
object
'EngineLock:{exclusiveLocks='[cec720ed-460a-48aa-a9fc-2262b6da5a83=]', sharedLocks='null'}'
2017-04-12 05:09:02,750Z INFO  [org.ovirt.engine.core.bll.RemoveVdsCommand]
(org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
Running command: RemoveVdsCommand internal: false. Entities affected :  ID:
cec720ed-460a-48aa-a9fc-2262b6da5a83 Type: VDSAction group DELETE_HOST with
role type ADMIN
2017-04-12 05:09:02,822Z INFO
 [org.ovirt.engine.core.vdsbroker.RemoveVdsVDSCommand]
(org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
START, RemoveVdsVDSCommand( RemoveVdsVDSCommandParameters:{runAsync='true',
hostId='cec720ed-460a-48aa-a9fc-2262b6da5a83'}), log id: 26e68c12
2017-04-12 05:09:02,822Z INFO  [org.ovirt.engine.core.vdsbroker.VdsManager]
(org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
vdsManager::disposing
2017-04-12 05:09:02,822Z INFO
 [org.ovirt.engine.core.vdsbroker.RemoveVdsVDSCommand]
(org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
FINISH, RemoveVdsVDSCommand, log id: 26e68c12
2017-04-12 05:09:02,824Z WARN
 [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker)
[] Exception thrown during message processing
2017-04-12 05:09:02,848Z INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
EVENT_ID: USER_REMOVE_VDS(44), Correlation ID:
13050988-bf00-4391-9862-a8ed8ade34dd, Call Stack: null, Custom Event ID:
-1, Message: Host tank3 was removed by admin@internal-authz.
2017-04-12 05:09:02,848Z INFO  [org.ovirt.engine.core.bll.RemoveVdsCommand]
(org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
Lock freed to object
'EngineLock:{exclusiveLocks='[cec720ed-460a-48aa-a9fc-2262b6da5a83=]', sharedLocks='null'}'
2017-04-12 05:10:56,139Z INFO
 [org.ovirt.engine.core.bll.storage.ovfstore.OvfDataUpdater]
(DefaultQuartzScheduler8) [] Attempting to update VMs/Templates Ovf.


Package vdsm-tool installed.

Any thoughts?

Thanks


Re: [ovirt-users] I’m having trouble deleting a test gluster volume

2017-04-12 Thread Precht, Andrew
I just noticed this in the Alerts tab: Detected deletion of volume test1 on 
cluster 8000-1, and deleted it from engine DB.

Yet it still shows in the web UI?


From: Precht, Andrew
Sent: Tuesday, April 11, 2017 2:39:31 PM
To: knarra; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon Mureinik; Nir Soffer
Cc: users
Subject: Re: [ovirt-users] I’m having trouble deleting a test gluster volume


The plot thickens…
I put all hosts in the cluster into maintenance mode, with the Stop Gluster 
service checkbox checked. I then deleted the /var/lib/glusterd/vols/test1 
directory on all hosts. I then took the host that the test1 volume was on out 
of maintenance mode. Then I tried to remove the test1 volume from within the 
web UI. With no luck, I got the message: Could not delete Gluster Volume test1 
on cluster 8000-1.

I went back and checked all hosts for the test1 directory; it is not on any 
host. Yet I still can’t remove it…

Any suggestions?



From: Precht, Andrew
Sent: Tuesday, April 11, 2017 1:15:22 PM
To: knarra; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon Mureinik; Nir Soffer
Cc: users
Subject: Re: [ovirt-users] I’m having trouble deleting a test gluster volume


Here is an update…

I checked the /var/log/glusterfs/etc-glusterfs-glusterd.vol.log on the node 
that had the trouble volume (test1). I didn’t see any errors. So, I ran a tail 
-f on the log as I tried to remove the volume using the web UI. Here is what 
was appended:

[2017-04-11 19:48:40.756360] I [MSGID: 106487] 
[glusterd-handler.c:1474:__glusterd_handle_cli_list_friends] 0-glusterd: 
Received cli list req
[2017-04-11 19:48:42.238840] I [MSGID: 106488] 
[glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: 
Received get vol req
The message "I [MSGID: 106487] 
[glusterd-handler.c:1474:__glusterd_handle_cli_list_friends] 0-glusterd: 
Received cli list req" repeated 6 times between [2017-04-11 19:48:40.756360] 
and [2017-04-11 19:49:32.596536]
The message "I [MSGID: 106488] 
[glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: 
Received get vol req" repeated 20 times between [2017-04-11 19:48:42.238840] 
and [2017-04-11 19:49:34.082179]
[2017-04-11 19:51:41.556077] I [MSGID: 106487] 
[glusterd-handler.c:1474:__glusterd_handle_cli_list_friends] 0-glusterd: 
Received cli list req

I’m seeing that the timestamps on these log entries do not match the time on 
the node.

The next steps:
I stopped the glusterd service on the node with volume test1.
I deleted it with: rm -rf /var/lib/glusterd/vols/test1
I started the glusterd service.

After starting the gluster service back up, the directory 
/var/lib/glusterd/vols/test1 reappears.
I’m guessing it’s syncing with the other nodes?
Is this because I have the Volume Option: auth allow *?
Do I need to remove the directory /var/lib/glusterd/vols/test1 on all nodes in 
the cluster individually?

thanks



From: knarra 
Sent: Tuesday, April 11, 2017 11:51:18 AM
To: Precht, Andrew; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon Mureinik; 
Nir Soffer
Cc: users
Subject: Re: [ovirt-users] I’m having trouble deleting a test gluster volume

On 04/11/2017 11:28 PM, Precht, Andrew wrote:
Hi all,
The node is oVirt Node 4.1.1 with glusterfs-3.8.10-1.el7.
On the node I cannot find /var/log/glusterfs/glusterd.log. However, there is a 
/var/log/glusterfs/glustershd.log.
Can you check if /var/log/glusterfs/etc-glusterfs-glusterd.vol.log exists? If 
yes, can you check whether any errors are present in that file?

What happens if I follow the four steps outlined here to remove the volume from 
the node, BUT I do have another volume present in the cluster? It too is a test 
volume. Neither one has any data on it, so data loss is not an issue.
Running those four steps will remove the volume from your cluster. If the 
volumes you have are test volumes, you can just follow the steps outlined to 
delete them (since you are not able to delete them from the UI) and bring the 
cluster back into a normal state.


From: knarra 
Sent: Tuesday, April 11, 2017 10:32:27 AM
To: Sandro Bonazzola; Precht, Andrew; Sahina Bose; Tal Nisan; Allon Mureinik; 
Nir Soffer
Cc: users
Subject: Re: [ovirt-users] I’m having trouble deleting a test gluster volume

On 04/11/2017 10:44 PM, Sandro Bonazzola wrote:
Adding some people

On 11/Apr/2017 19:06, "Precht, Andrew" wrote:
Hi Ovirt users,
I’m a newbie to oVirt and I’m having trouble deleting a test gluster volume. 
The nodes are 4.1.1 and the engine is 4.1.0

When I try to remove the test volume, I click Remove; the dialog box prompting 
to confirm the deletion pops up, and after I click OK it changes to show a 
little spinning wheel and then disappears. In the end the volume is still there.

Re: [ovirt-users] I’m having trouble deleting a test gluster volume

2017-04-12 Thread Precht, Andrew
Here is an update…

I checked the /var/log/glusterfs/etc-glusterfs-glusterd.vol.log on the node 
that had the trouble volume (test1). I didn’t see any errors. So, I ran a tail 
-f on the log as I tried to remove the volume using the web UI. Here is what 
was appended:

[2017-04-11 19:48:40.756360] I [MSGID: 106487] 
[glusterd-handler.c:1474:__glusterd_handle_cli_list_friends] 0-glusterd: 
Received cli list req
[2017-04-11 19:48:42.238840] I [MSGID: 106488] 
[glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: 
Received get vol req
The message "I [MSGID: 106487] 
[glusterd-handler.c:1474:__glusterd_handle_cli_list_friends] 0-glusterd: 
Received cli list req" repeated 6 times between [2017-04-11 19:48:40.756360] 
and [2017-04-11 19:49:32.596536]
The message "I [MSGID: 106488] 
[glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: 
Received get vol req" repeated 20 times between [2017-04-11 19:48:42.238840] 
and [2017-04-11 19:49:34.082179]
[2017-04-11 19:51:41.556077] I [MSGID: 106487] 
[glusterd-handler.c:1474:__glusterd_handle_cli_list_friends] 0-glusterd: 
Received cli list req

I’m seeing that the timestamps on these log entries do not match the time on 
the node.

The next steps:
I stopped the glusterd service on the node with volume test1.
I deleted it with: rm -rf /var/lib/glusterd/vols/test1
I started the glusterd service.

After starting the gluster service back up, the directory 
/var/lib/glusterd/vols/test1 reappears.
I’m guessing it’s syncing with the other nodes?
Is this because I have the Volume Option: auth allow *?
Do I need to remove the directory /var/lib/glusterd/vols/test1 on all nodes in 
the cluster individually?

thanks



From: knarra 
Sent: Tuesday, April 11, 2017 11:51:18 AM
To: Precht, Andrew; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon Mureinik; 
Nir Soffer
Cc: users
Subject: Re: [ovirt-users] I’m having trouble deleting a test gluster volume

On 04/11/2017 11:28 PM, Precht, Andrew wrote:
Hi all,
The node is oVirt Node 4.1.1 with glusterfs-3.8.10-1.el7.
On the node I cannot find /var/log/glusterfs/glusterd.log. However, there is a 
/var/log/glusterfs/glustershd.log.
Can you check if /var/log/glusterfs/etc-glusterfs-glusterd.vol.log exists? If 
yes, can you check whether any errors are present in that file?

What happens if I follow the four steps outlined here to remove the volume from 
the node, BUT I do have another volume present in the cluster? It too is a test 
volume. Neither one has any data on it, so data loss is not an issue.
Running those four steps will remove the volume from your cluster. If the 
volumes you have are test volumes, you can just follow the steps outlined to 
delete them (since you are not able to delete them from the UI) and bring the 
cluster back into a normal state.


From: knarra 
Sent: Tuesday, April 11, 2017 10:32:27 AM
To: Sandro Bonazzola; Precht, Andrew; Sahina Bose; Tal Nisan; Allon Mureinik; 
Nir Soffer
Cc: users
Subject: Re: [ovirt-users] I’m having trouble deleting a test gluster volume

On 04/11/2017 10:44 PM, Sandro Bonazzola wrote:
Adding some people

On 11/Apr/2017 19:06, "Precht, Andrew" wrote:
Hi Ovirt users,
I’m a newbie to oVirt and I’m having trouble deleting a test gluster volume. 
The nodes are 4.1.1 and the engine is 4.1.0

When I try to remove the test volume, I click Remove; the dialog box prompting 
to confirm the deletion pops up, and after I click OK it changes to show a 
little spinning wheel and then disappears. In the end the volume is still there.
With the latest version of glusterfs & oVirt we do not see any issue with 
deleting a volume. Can you please check the /var/log/glusterfs/glusterd.log 
file for any errors present?


The test volume was distributed with two host members. One of the hosts I was 
able to remove from the volume by removing the host from the cluster. When I 
try to remove the remaining host in the volume, even with the “Force Remove” 
box ticked, I get this response: Cannot remove Host. Server having Gluster 
volume.

What to try next?
Since you have already removed the volume from one host in the cluster and 
still see it on another host, you can do the following to remove the volume 
from that host.

1) Login to the host where the volume is present.
2) cd to /var/lib/glusterd/vols
3) rm -rf <volume_name>
4) Restart glusterd on that host.

And before doing the above, make sure that you do not have any other volume 
present in the cluster.

The above steps should not be run on a production system, as you might lose 
the volume and data.

Now removing the host from the UI should succeed.
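
A minimal shell sketch of those four steps, assuming the stale volume is named
test1 and that glusterd is managed by systemd (adjust both to your setup):

  # On the host that still carries the stale volume definition:
  cd /var/lib/glusterd/vols
  rm -rf test1                  # remove the stale volume definition
  systemctl restart glusterd    # restart the management daemon

After the restart, gluster volume list should no longer show the volume.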


P.S. I’ve tried to join this user group several times in the past, with no 
response.
Is it possible for me to join this group?

Regards,
Andrew





Re: [ovirt-users] I’m having trouble deleting a test gluster volume

2017-04-12 Thread Precht, Andrew
Hi all,
The node is oVirt Node 4.1.1 with glusterfs-3.8.10-1.el7.
On the node I cannot find /var/log/glusterfs/glusterd.log. However, there is a 
/var/log/glusterfs/glustershd.log.

What happens if I follow the four steps outlined here to remove the volume from 
the node BUT, I do have another volume present in the cluster. It too is a test 
volume. Neither one has any data on them. So, data loss is not an issue.



From: knarra 
Sent: Tuesday, April 11, 2017 10:32:27 AM
To: Sandro Bonazzola; Precht, Andrew; Sahina Bose; Tal Nisan; Allon Mureinik; 
Nir Soffer
Cc: users
Subject: Re: [ovirt-users] I’m having trouble deleting a test gluster volume

On 04/11/2017 10:44 PM, Sandro Bonazzola wrote:
Adding some people

On 11/Apr/2017 19:06, "Precht, Andrew" wrote:
Hi Ovirt users,
I’m a newbie to oVirt and I’m having trouble deleting a test gluster volume. 
The nodes are 4.1.1 and the engine is 4.1.0

When I try to remove the test volume, I click Remove; the dialog box prompting 
to confirm the deletion pops up, and after I click OK it changes to show a 
little spinning wheel and then disappears. In the end the volume is still there.
With the latest version of glusterfs & oVirt we do not see any issue with 
deleting a volume. Can you please check the /var/log/glusterfs/glusterd.log 
file for any errors present?


The test volume was distributed with two host members. One of the hosts I was 
able to remove from the volume by removing the host from the cluster. When I 
try to remove the remaining host in the volume, even with the “Force Remove” 
box ticked, I get this response: Cannot remove Host. Server having Gluster 
volume.

What to try next?
Since you have already removed the volume from one host in the cluster and 
still see it on another host, you can do the following to remove the volume 
from that host.

1) Login to the host where the volume is present.
2) cd to /var/lib/glusterd/vols
3) rm -rf <volume_name>
4) Restart glusterd on that host.

And before doing the above, make sure that you do not have any other volume 
present in the cluster.

The above steps should not be run on a production system, as you might lose 
the volume and data.

Now removing the host from the UI should succeed.


P.S. I’ve tried to join this user group several times in the past, with no 
response.
Is it possible for me to join this group?

Regards,
Andrew




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




[ovirt-users] I’m having trouble deleting a test gluster volume

2017-04-12 Thread Precht, Andrew
Hi Ovirt users,
I’m a newbie to oVirt and I’m having trouble deleting a test gluster volume. 
The nodes are 4.1.1 and the engine is 4.1.0

When I try to remove the test volume, I click Remove; the dialog box prompting 
to confirm the deletion pops up, and after I click OK it changes to show a 
little spinning wheel and then disappears. In the end the volume is still there.

The test volume was distributed with two host members. One of the hosts I was 
able to remove from the volume by removing the host from the cluster. When I 
try to remove the remaining host in the volume, even with the “Force Remove” 
box ticked, I get this response: Cannot remove Host. Server having Gluster 
volume.

What to try next?

P.S. I’ve tried to join this user group several times in the past, with no 
response.
Is it possible for me to join this group?

Regards,
Andrew

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] I’m having trouble deleting a test gluster volume

2017-04-12 Thread knarra

On 04/12/2017 03:35 AM, Precht, Andrew wrote:


I just noticed this in the Alerts tab: Detected deletion of volume 
test1 on cluster 8000-1, and deleted it from engine DB.


Yet, it still shows in the web UI?

Any errors in the ovirt-engine.log file? If the volume is deleted from the DB, 
it should ideally be deleted from the UI too. Can you go to the gluster nodes 
and check the following:


1) gluster volume list -> should not return anything, since you have deleted 
the volumes.

2) gluster peer status -> run on all the nodes; it should show that all the 
peers are in the connected state.


Can you tail -f /var/log/ovirt-engine/ovirt-engine.log and the gluster log, 
and capture the error messages when you try deleting the volume from the UI?


The log you pasted in the previous mail only gives info-level messages, and I 
could not get any details from it on why the volume delete is failing.
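
A quick sketch of those checks (log paths as given in this thread; the psql
step is an assumption about the engine schema, namely that gluster volumes are
kept in a gluster_volumes table in the default 'engine' database):

  # On each gluster node:
  gluster volume list    # should return nothing if all test volumes are gone
  gluster peer status    # each peer should show State: Peer in Cluster (Connected)

  # On the engine host, while retrying the delete from the UI:
  tail -f /var/log/ovirt-engine/ovirt-engine.log

  # On the gluster node, at the same time:
  tail -f /var/log/glusterfs/etc-glusterfs-glusterd.vol.log

  # Optional, on the engine host: see what the engine DB still holds
  # (table name is an assumption about the engine schema).
  su - postgres -c "psql engine -c 'SELECT * FROM gluster_volumes;'"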




*From:* Precht, Andrew
*Sent:* Tuesday, April 11, 2017 2:39:31 PM
*To:* knarra; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon 
Mureinik; Nir Soffer

*Cc:* users
*Subject:* Re: [ovirt-users] I’m having trouble deleting a test 
gluster volume

The plot thickens…
I put all hosts in the cluster into maintenance mode, with the Stop 
Gluster service checkbox checked. I then deleted the 
/var/lib/glusterd/vols/test1 directory on all hosts. I then took the 
host that the test1 volume was on out of maintenance mode. Then I 
tried to remove the test1 volume from within the web UI. With no luck, 
I got the message: Could not delete Gluster Volume test1 on cluster 
8000-1.


I went back and checked all hosts for the test1 directory; it is not on 
any host. Yet I still can’t remove it…


Any suggestions?


*From:* Precht, Andrew
*Sent:* Tuesday, April 11, 2017 1:15:22 PM
*To:* knarra; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon 
Mureinik; Nir Soffer

*Cc:* users
*Subject:* Re: [ovirt-users] I’m having trouble deleting a test 
gluster volume

Here is an update…

I checked the /var/log/glusterfs/etc-glusterfs-glusterd.vol.log on the 
node that had the trouble volume (test1). I didn’t see any errors. So, 
I ran a tail -f on the log as I tried to remove the volume using the 
web UI. Here is what was appended:


[2017-04-11 19:48:40.756360] I [MSGID: 106487] 
[glusterd-handler.c:1474:__glusterd_handle_cli_list_friends] 
0-glusterd: Received cli list req
[2017-04-11 19:48:42.238840] I [MSGID: 106488] 
[glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 
0-management: Received get vol req
The message "I [MSGID: 106487] 
[glusterd-handler.c:1474:__glusterd_handle_cli_list_friends] 
0-glusterd: Received cli list req" repeated 6 times between 
[2017-04-11 19:48:40.756360] and [2017-04-11 19:49:32.596536]
The message "I [MSGID: 106488] 
[glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 
0-management: Received get vol req" repeated 20 times between 
[2017-04-11 19:48:42.238840] and [2017-04-11 19:49:34.082179]
[2017-04-11 19:51:41.556077] I [MSGID: 106487] 
[glusterd-handler.c:1474:__glusterd_handle_cli_list_friends] 
0-glusterd: Received cli list req


I’m seeing that the timestamps on these log entries do not match the 
time on the node.


The next steps:
I stopped the glusterd service on the node with volume test1.
I deleted it with: rm -rf /var/lib/glusterd/vols/test1
I started the glusterd service.

After starting the gluster service back up, the directory 
/var/lib/glusterd/vols/test1 reappears.

I’m guessing it’s syncing with the other nodes?
Is this because I have the Volume Option: auth allow *?
Do I need to remove the directory /var/lib/glusterd/vols/test1 on all 
nodes in the cluster individually?


thanks


*From:* knarra 
*Sent:* Tuesday, April 11, 2017 11:51:18 AM
*To:* Precht, Andrew; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon 
Mureinik; Nir Soffer

*Cc:* users
*Subject:* Re: [ovirt-users] I’m having trouble deleting a test 
gluster volume

On 04/11/2017 11:28 PM, Precht, Andrew wrote:

Hi all,
The node is oVirt Node 4.1.1 with glusterfs-3.8.10-1.el7.
On the node I cannot find /var/log/glusterfs/glusterd.log. However, there is a 
/var/log/glusterfs/glustershd.log.
Can you check if /var/log/glusterfs/etc-glusterfs-glusterd.vol.log exists? If 
yes, can you check whether any errors are present in that file?


What happens if I follow the four steps outlined here to remove the volume 
from the node, BUT I do have another volume present in the cluster? It too is 
a test volume. Neither one has any data on it, so data loss is not an issue.
Running those four steps will remove the volume from your cluster. If the 
volumes you have are test volumes, you can just follow the steps outlined to 
delete them (since you are not able to delete them from the UI) and bring the 
cluster back into a normal state.



Re: [ovirt-users] I’m having trouble deleting a test gluster volume

2017-04-12 Thread knarra

On 04/12/2017 01:45 AM, Precht, Andrew wrote:

Here is an update…

I checked the /var/log/glusterfs/etc-glusterfs-glusterd.vol.log on the 
node that had the trouble volume (test1). I didn’t see any errors. So, 
I ran a tail -f on the log as I tried to remove the volume using the 
web UI. Here is what was appended:


[2017-04-11 19:48:40.756360] I [MSGID: 106487] 
[glusterd-handler.c:1474:__glusterd_handle_cli_list_friends] 
0-glusterd: Received cli list req
[2017-04-11 19:48:42.238840] I [MSGID: 106488] 
[glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 
0-management: Received get vol req
The message "I [MSGID: 106487] 
[glusterd-handler.c:1474:__glusterd_handle_cli_list_friends] 
0-glusterd: Received cli list req" repeated 6 times between 
[2017-04-11 19:48:40.756360] and [2017-04-11 19:49:32.596536]
The message "I [MSGID: 106488] 
[glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 
0-management: Received get vol req" repeated 20 times between 
[2017-04-11 19:48:42.238840] and [2017-04-11 19:49:34.082179]
[2017-04-11 19:51:41.556077] I [MSGID: 106487] 
[glusterd-handler.c:1474:__glusterd_handle_cli_list_friends] 
0-glusterd: Received cli list req


I’m seeing that the timestamps on these log entries do not match the 
time on the node.
Gluster logs are in UTC format. That is why you might be seeing a different 
timestamp on your node than in the gluster logs.
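
A quick way to confirm the offset on a node (date -u prints UTC, which is what
the gluster log timestamps use):

  date      # local time on the node
  date -u   # UTC; this should line up with the gluster log entries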


The next steps:
I stopped the glusterd service on the node with volume test1.
I deleted it with: rm -rf /var/lib/glusterd/vols/test1
I started the glusterd service.

After starting the gluster service back up, the directory 
/var/lib/glusterd/vols/test1 reappears.

I’m guessing it’s syncing with the other nodes?

Yes, since you deleted it on only one node.

Is this because I have the Volume Option: auth allow *?
Do I need to remove the directory /var/lib/glusterd/vols/test1 on all 
nodes in the cluster individually?
You need to remove the directory /var/lib/glusterd/vols/test1 on all nodes 
and restart the glusterd service on all the nodes in the cluster.
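
A minimal sketch of that cleanup, assuming root ssh access and that node1,
node2 and node3 stand in for your actual node hostnames:

  # Remove the stale volume definition and restart glusterd on every node,
  # so that no peer syncs the definition back.
  for node in node1 node2 node3; do
      ssh root@"$node" 'rm -rf /var/lib/glusterd/vols/test1 && systemctl restart glusterd'
  done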


thanks


*From:* knarra 
*Sent:* Tuesday, April 11, 2017 11:51:18 AM
*To:* Precht, Andrew; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon 
Mureinik; Nir Soffer

*Cc:* users
*Subject:* Re: [ovirt-users] I’m having trouble deleting a test 
gluster volume

On 04/11/2017 11:28 PM, Precht, Andrew wrote:

Hi all,
The node is oVirt Node 4.1.1 with glusterfs-3.8.10-1.el7.
On the node I cannot find /var/log/glusterfs/glusterd.log. However, there is a 
/var/log/glusterfs/glustershd.log.
Can you check if /var/log/glusterfs/etc-glusterfs-glusterd.vol.log exists? If 
yes, can you check whether any errors are present in that file?


What happens if I follow the four steps outlined here to remove the volume 
from the node, BUT I do have another volume present in the cluster? It too is 
a test volume. Neither one has any data on it, so data loss is not an issue.
Running those four steps will remove the volume from your cluster. If the 
volumes you have are test volumes, you can just follow the steps outlined to 
delete them (since you are not able to delete them from the UI) and bring the 
cluster back into a normal state.



*From:* knarra 
*Sent:* Tuesday, April 11, 2017 10:32:27 AM
*To:* Sandro Bonazzola; Precht, Andrew; Sahina Bose; Tal Nisan; Allon 
Mureinik; Nir Soffer

*Cc:* users
*Subject:* Re: [ovirt-users] I’m having trouble deleting a test 
gluster volume

On 04/11/2017 10:44 PM, Sandro Bonazzola wrote:

Adding some people

On 11/Apr/2017 19:06, "Precht, Andrew" wrote:


Hi Ovirt users,
I’m a newbie to oVirt and I’m having trouble deleting a test
gluster volume. The nodes are 4.1.1 and the engine is 4.1.0

When I try to remove the test volume, I click Remove; the dialog box
prompting to confirm the deletion pops up, and after I click OK it
changes to show a little spinning wheel and then disappears. In the
end the volume is still there.

With the latest version of glusterfs & oVirt we do not see any issue 
with deleting a volume. Can you please check the 
/var/log/glusterfs/glusterd.log file for any errors present?




The test volume was distributed with two host members. One of
the hosts I was able to remove from the volume by removing the
host from the cluster. When I try to remove the remaining host
in the volume, even with the “Force Remove” box ticked, I get
this response: Cannot remove Host. Server having Gluster volume.

What to try next?

Since you have already removed the volume from one host in the 
cluster and still see it on another host, you can do the following 
to remove the volume from that host.


1) Login to the host where the volume is