Re: [ovirt-users] Video RAM

2016-06-13 Thread Amador Pahim

On 06/14/2016 04:31 AM, qinglong.d...@horebdata.cn wrote:

Hi, all
I want to modify the three arguments ram, vram and vgamem in the
qemu command line with which virtual machines are created. I found that
the arguments have default values in libvirt and I wonder how to modify
them. Could they be modified in a certain XML file or in
a before_vm_start VDSM hook?
(http://www.ovirt.org/documentation/draft/video-ram/)


Yes, a hook is the way to go.


I don't know where the XML file is or
The XML is created every time you start a VM and deleted every time you 
shut down a VM. A hook can change the resulting XML before the VM is started.


> how to write the hook.

Here are some examples: https://github.com/oVirt/vdsm/tree/master/vdsm_hooks
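
For reference, a minimal before_vm_start hook along those lines could look like
the sketch below. It is untested; it assumes the standard VDSM hooking module and
hard-codes example values (in KiB), so adjust the sizes and add whatever filtering
you need (e.g. only for specific VMs):

#!/usr/bin/python
# Sketch of /usr/libexec/vdsm/hooks/before_vm_start/50_videoram
# Bump the ram/vram/vgamem attributes of the qxl video device in the
# libvirt domain XML before the VM is started.
import hooking

domxml = hooking.read_domxml()
for video in domxml.getElementsByTagName('video'):
    for model in video.getElementsByTagName('model'):
        if model.getAttribute('type') == 'qxl':
            model.setAttribute('ram', '131072')    # example value, KiB
            model.setAttribute('vram', '65536')    # example value, KiB
            model.setAttribute('vgamem', '32768')  # example value, KiB
hooking.write_domxml(domxml)

Drop the script into /usr/libexec/vdsm/hooks/before_vm_start/ on each host and
make it executable; VDSM runs it right before the domain XML is handed to libvirt.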


Anyone can help? Thanks!





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Video RAM

2016-06-13 Thread qinglong.d...@horebdata.cn
Hi, all
I want to modify the three arguments ram, vram and vgamem in the qemu 
command line with which virtual machines are created. I found that the arguments have 
default values in libvirt and I wonder how to modify them. Could they be 
modified in a certain XML file or in a before_vm_start VDSM hook?
(http://www.ovirt.org/documentation/draft/video-ram/) I don't know where 
the XML file is or how to write the hook.
Anyone can help? Thanks!




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] host kernel upgrade

2016-06-13 Thread Nir Soffer
On Tue, Jun 14, 2016 at 1:12 AM, Rafael Almeida
 wrote:
>
> Hello, friends, is it safe to reboot my host after updating the kernel on my
> CentOS 7.2 x64? The oVirt Engine 3.6 runs on this CentOS on an
> independent host. How frequently do the
> hosts/hypervisors communicate with the oVirt engine?

The hypervisors do not communicate with the engine; the engine communicates with them,
so you can safely reboot the engine host.

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] host kernel upgrade

2016-06-13 Thread Rafael Almeida


Hello, friends, is it safe to reboot my host after updating the kernel on my
CentOS 7.2 x64? The oVirt Engine 3.6 runs on this CentOS on an
independent host. How frequently do the
hosts/hypervisors communicate with the oVirt engine?

greetings
--
Rafhael Almeida Orellana

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Unable to install ovirt 4 hosted engine

2016-06-13 Thread Simone Tiraboschi
On Mon, Jun 13, 2016 at 9:53 PM, Claude Durocher
 wrote:
> Testing the latest Ovirt 4.0 RC and I'm unable to install the hosted engine.

Thanks Claude,
we fixed it today: https://bugzilla.redhat.com/1344900

> On the console :
>
> ...
>   --== HOSTED ENGINE CONFIGURATION ==--
>
> [ INFO  ] Stage: Setup validation
>
>   --== CONFIGURATION PREVIEW ==--
>
>   Bridge interface   : eno1
>   Engine FQDN: ovirt-manager.local.net
>   Bridge name: ovirtmgmt
>   Host address   : ovirt-1
>   SSH daemon port: 22
>   Firewall manager   : iptables
>   Gateway address: 10.1.1.193
>   Host name for web application  : ovirt-1.local.net
>   Storage Domain type: nfs3
>   Host ID: 1
>   Image size GB  : 10
>   Storage connection :
> 192.168.101.196:/exports/ovirt-engine
>   Console type   : qxl
>   Memory size MB : 16384
>   MAC address: 00:16:3e:16:fd:c7
>   Boot type  : disk
>   Number of CPUs : 4
>   OVF archive (for disk boot):
> /usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.0-20160603.1.el7.centos.ova
>   Restart engine VM after engine-setup: True
>   CPU Type   : model_SandyBridge
> [ INFO  ] Stage: Transaction setup
> [ INFO  ] Stage: Misc configuration
> [ INFO  ] Stage: Package installation
> [ INFO  ] Stage: Misc configuration
> [ INFO  ] Configuring libvirt
> [ INFO  ] Configuring VDSM
> [ INFO  ] Starting vdsmd
> [ ERROR ] Failed to execute stage 'Misc configuration': Error storage server
> connection: (u"domType=6, spUUID=----,
> conList=[{u'vfsType': u'ext3', u'connection':
> u'/var/lib/ovirt-hosted-engine-setup/tmpzDbNZ4', u'spec':
> u'/var/lib/ovirt-hosted-engine-setup/tmpzDbNZ4', u'id':
> u'33b4138f-9937-45a6-828a-433072d8ffbd'}]",)
> [ INFO  ] Stage: Clean up
> [ INFO  ] Generating answer file
> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20160613154009.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> [ ERROR ] Hosted Engine deployment failed: this system is not reliable,
> please check the issue, fix and redeploy
>   Log file is located at
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160613153910-41q5xe.log
>
>
> In vdsm.log :
>
> jsonrpc.Executor/1::DEBUG::2016-06-13
> 15:40:08,877::fileUtils::190::Storage.fileUtils::(createdir) Creating
> directory: /rhev/data-center/mnt/192.168.101.196:_exports_ovirt-engine mode:
> None
> jsonrpc.Executor/1::INFO::2016-06-13
> 15:40:08,878::mount::222::storage.Mount::(mount) mounting
> 192.168.101.196:/exports/ovirt-engine at
> /rhev/data-center/mnt/192.168.101.196:_exports_ovirt-engine
> jsonrpc.Executor/1::DEBUG::2016-06-13
> 15:40:08,921::utils::870::storage.Mount::(stopwatch)
> /rhev/data-center/mnt/192.168.101.196:_exports_ovirt-engine mounted: 0.05
> seconds
> jsonrpc.Executor/1::DEBUG::2016-06-13
> 15:40:08,924::outOfProcess::69::Storage.oop::(getProcessPool) Creating
> ioprocess Global
> jsonrpc.Executor/1::DEBUG::2016-06-13
> 15:40:08,924::__init__::315::IOProcessClient::(_run) Starting IOProcess...
> jsonrpc.Executor/1::DEBUG::2016-06-13
> 15:40:08,955::hsm::2327::Storage.HSM::(__prefetchDomains) nfs local path:
> /rhev/data-center/mnt/192.168.101.196:_exports_ovirt-engine
> jsonrpc.Executor/1::DEBUG::2016-06-13
> 15:40:08,957::hsm::2351::Storage.HSM::(__prefetchDomains) Found SD uuids: ()
> jsonrpc.Executor/1::DEBUG::2016-06-13
> 15:40:08,957::hsm::2411::Storage.HSM::(connectStorageServer) knownSDs: {}
> jsonrpc.Executor/1::INFO::2016-06-13
> 15:40:08,958::logUtils::52::dispatcher::(wrapper) Run and protect:
> connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id':
> u'e3f4dde2-cfe3-4aa3-8e01-dec66d34f616'}]}
> jsonrpc.Executor/1::DEBUG::2016-06-13
> 15:40:08,958::task::1193::Storage.TaskManager.Task::(prepare)
> Task=`5699f840-cf9b-407e-a83c-40468821d173`::finished: {'statuslist':
> [{'status': 0, 'id': u'e3f4dde2-cfe3-4aa3-8e01-dec66d34f616'}]}
> jsonrpc.Executor/1::DEBUG::2016-06-13
> 15:40:08,958::task::597::Storage.TaskManager.Task::(_updateState)
> Task=`5699f840-cf9b-407e-a83c-40468821d173`::moving from state preparing ->
> state finished
> jsonrpc.Executor/1::DEBUG::2016-06-13
> 15:40:08,958::resourceManager::952::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
> jsonrpc.Executor/1::DEBUG::2016-06-13
> 15:40:08,959::resourceManager::989::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
> 

[ovirt-users] Unable to install ovirt 4 hosted engine

2016-06-13 Thread Claude Durocher

Testing the latest oVirt 4.0 RC and I'm unable to install the hosted engine. On 
the console:

...
  --== HOSTED ENGINE CONFIGURATION ==--
     
[ INFO  ] Stage: Setup validation
     
  --== CONFIGURATION PREVIEW ==--
     
  Bridge interface   : eno1
  Engine FQDN    : ovirt-manager.local.net
  Bridge name    : ovirtmgmt
  Host address   : ovirt-1
  SSH daemon port    : 22
  Firewall manager   : iptables
  Gateway address    : 10.1.1.193
  Host name for web application  : ovirt-1.local.net
  Storage Domain type    : nfs3
  Host ID    : 1
  Image size GB  : 10
  Storage connection : 
192.168.101.196:/exports/ovirt-engine
  Console type   : qxl
  Memory size MB : 16384
  MAC address    : 00:16:3e:16:fd:c7
  Boot type  : disk
  Number of CPUs : 4
  OVF archive (for disk boot)    : 
/usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.0-20160603.1.el7.centos.ova
  Restart engine VM after engine-setup: True
  CPU Type   : model_SandyBridge
[ INFO  ] Stage: Transaction setup
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Package installation
[ INFO  ] Stage: Misc configuration
[ INFO  ] Configuring libvirt
[ INFO  ] Configuring VDSM
[ INFO  ] Starting vdsmd
[ ERROR ] Failed to execute stage 'Misc configuration': Error storage server 
connection: (u"domType=6, spUUID=----, 
conList=[{u'vfsType': u'ext3', u'connection': 
u'/var/lib/ovirt-hosted-engine-setup/tmpzDbNZ4', u'spec': 
u'/var/lib/ovirt-hosted-engine-setup/tmpzDbNZ4', u'id': 
u'33b4138f-9937-45a6-828a-433072d8ffbd'}]",)
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file 
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20160613154009.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable, please 
check the issue, fix and redeploy
  Log file is located at 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20160613153910-41q5xe.log


In vdsm.log :

jsonrpc.Executor/1::DEBUG::2016-06-13 
15:40:08,877::fileUtils::190::Storage.fileUtils::(createdir) Creating 
directory: /rhev/data-center/mnt/192.168.101.196:_exports_ovirt-engine mode: 
None
jsonrpc.Executor/1::INFO::2016-06-13 
15:40:08,878::mount::222::storage.Mount::(mount) mounting 
192.168.101.196:/exports/ovirt-engine at 
/rhev/data-center/mnt/192.168.101.196:_exports_ovirt-engine
jsonrpc.Executor/1::DEBUG::2016-06-13 
15:40:08,921::utils::870::storage.Mount::(stopwatch) 
/rhev/data-center/mnt/192.168.101.196:_exports_ovirt-engine mounted: 0.05 
seconds
jsonrpc.Executor/1::DEBUG::2016-06-13 
15:40:08,924::outOfProcess::69::Storage.oop::(getProcessPool) Creating 
ioprocess Global
jsonrpc.Executor/1::DEBUG::2016-06-13 
15:40:08,924::__init__::315::IOProcessClient::(_run) Starting IOProcess...
jsonrpc.Executor/1::DEBUG::2016-06-13 
15:40:08,955::hsm::2327::Storage.HSM::(__prefetchDomains) nfs local path: 
/rhev/data-center/mnt/192.168.101.196:_exports_ovirt-engine
jsonrpc.Executor/1::DEBUG::2016-06-13 
15:40:08,957::hsm::2351::Storage.HSM::(__prefetchDomains) Found SD uuids: ()
jsonrpc.Executor/1::DEBUG::2016-06-13 
15:40:08,957::hsm::2411::Storage.HSM::(connectStorageServer) knownSDs: {}
jsonrpc.Executor/1::INFO::2016-06-13 
15:40:08,958::logUtils::52::dispatcher::(wrapper) Run and protect: 
connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': 
u'e3f4dde2-cfe3-4aa3-8e01-dec66d34f616'}]}
jsonrpc.Executor/1::DEBUG::2016-06-13 
15:40:08,958::task::1193::Storage.TaskManager.Task::(prepare) 
Task=`5699f840-cf9b-407e-a83c-40468821d173`::finished: {'statuslist': 
[{'status': 0, 'id': u'e3f4dde2-cfe3-4aa3-8e01-dec66d34f616'}]}
jsonrpc.Executor/1::DEBUG::2016-06-13 
15:40:08,958::task::597::Storage.TaskManager.Task::(_updateState) 
Task=`5699f840-cf9b-407e-a83c-40468821d173`::moving from state preparing -> 
state finished
jsonrpc.Executor/1::DEBUG::2016-06-13 
15:40:08,958::resourceManager::952::Storage.ResourceManager.Owner::(releaseAll) 
Owner.releaseAll requests {} resources {}
jsonrpc.Executor/1::DEBUG::2016-06-13 
15:40:08,959::resourceManager::989::Storage.ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
jsonrpc.Executor/1::DEBUG::2016-06-13 
15:40:08,959::task::995::Storage.TaskManager.Task::(_decref) 
Task=`5699f840-cf9b-407e-a83c-40468821d173`::ref 0 aborting False
jsonrpc.Executor/1::DEBUG::2016-06-13 
15:40:08,959::__init__::550::jsonrpc.JsonRpcServer::(_serveRequest) Return 

Re: [ovirt-users] how to build and install ovirt to the Product Environment

2016-06-13 Thread Martin Perina
On Mon, Jun 13, 2016 at 6:27 PM, Nir Soffer  wrote:

> For such issues better use de...@ovirt.org mailing list:
> http://lists.ovirt.org/mailman/listinfo/devel
>
> Nir
>
> On Mon, Jun 13, 2016 at 6:58 PM, Dewey Du  wrote:
> > To build and install ovirt-engine in your home folder under the ovirt-engine
> > directory, execute the following command:
> >
> > $ make clean install-dev PREFIX="${PREFIX}"
> >
> > What about installing to a production environment? Is the following command
> > right?
>

​Do you want to use oVirt in production? If so, then I'd highly recommend
to use the latest stable version installed from RPMs. More info can be found at

http://www.ovirt.org/download/
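
For reference, installing the current stable release from RPMs on CentOS 7 is
roughly the following (a sketch: the release RPM URL below is the one for the 3.6
series, so pick the package matching the version you want from the download page):

  yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release36.rpm
  yum install ovirt-engine
  engine-setup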

Martin Perina
​


> >
> > $ make clean install PREFIX="${PREFIX}"
> >
> >
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how to build and install ovirt to the Product Environment

2016-06-13 Thread Nir Soffer
For such issues, better to use the de...@ovirt.org mailing list:
http://lists.ovirt.org/mailman/listinfo/devel

Nir

On Mon, Jun 13, 2016 at 6:58 PM, Dewey Du  wrote:
> To build and install ovirt-engine in your home folder under the ovirt-engine
> directory, execute the following command:
>
> $ make clean install-dev PREFIX="${PREFIX}"
>
> What about installing to a production environment? Is the following command
> right?
>
> $ make clean install PREFIX="${PREFIX}"
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] how to build and install ovirt to the Product Environment

2016-06-13 Thread Dewey Du
To build and install ovirt-engine in your home folder under the
ovirt-engine directory, execute the following command:

$ make clean install-dev PREFIX="${PREFIX}"

What about installing to a production environment? Is the following command right?

$ make clean install PREFIX="${PREFIX}"
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Problem with Getty and Serial Consoles

2016-06-13 Thread Christophe TREFOIS
Dear all,

I am running 3.6.6 and am able to select a console; however, the screen is 
black.

On the hypervisor, I tried to start the getty service manually and ended up with 
the following error in the journal:

Jun 13 17:01:01 elephant-server.lcsb.uni.lu 
systemd[1]: Stopping user-0.slice.
-- Subject: Unit user-0.slice has begun shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit user-0.slice has begun shutting down.
Jun 13 17:01:37 elephant-server.lcsb.uni.lu 
systemd[1]: Job dev-hvc0.device/start timed out.
Jun 13 17:01:37 elephant-server.lcsb.uni.lu 
systemd[1]: Timed out waiting for device dev-hvc0.device.
-- Subject: Unit dev-hvc0.device has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit dev-hvc0.device has failed.
--
-- The result is timeout.
Jun 13 17:01:37 elephant-server.lcsb.uni.lu 
systemd[1]: Dependency failed for Serial Getty on hvc0.
-- Subject: Unit serial-getty@hvc0.service 
has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit serial-getty@hvc0.service has failed.

I am running CentOS 7.2.

Does anybody have some pointers on what could be the issue here?

Thank you,

—
Christophe




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] java.io.IOException: Command returned failure code 1 during SSH session

2016-06-13 Thread Roy Golan
Please share the whole installation log; this vdsDeploy part is new and I
want to get more details on why the connection says it terminated suddenly.

The log path is visible in an 'Event' raised in the webadmin after the
installation fails.

On Mon, Jun 13, 2016 at 2:59 PM, Phillip Bailey  wrote:

> On Mon, Jun 13, 2016 at 4:34 AM, Yedidyah Bar David 
> wrote:
>
>> On Sat, Jun 11, 2016 at 5:56 PM, Gregor Binder 
>> wrote:
>> > -BEGIN PGP SIGNED MESSAGE-
>> > Hash: SHA256
>> >
>> > Hi,
>> >
>> > during inspecting the engine.log I found this entry:
>> >
>> > - 
>> > [org.ovirt.engine.core.uutils.ssh.SSHDialog]
>> > (DefaultQuartzScheduler_Worker-81) [] Exception: java.io.IOException:
>> > Command returned failure code 1 during SSH session 'root@<host name remove>'
>> > - 
>> > Message: Failed to check for available updates on host host01 with
>> > message 'Command returned failure code 1 during SSH session
>> > 'root@'
>> > - 
>> >
>> > Looks like a serious problem because the engine can't check for updates.
>>
>> You should have some more information in the log. Please check around this
>> line and/or post full log. Thanks.
>>
>> >
>> > cheers
>> > gregor
>> > - --
>> > GPG-Key: 67F1534F
>> > URL:
>> http://pgp.mit.edu:11371/pks/lookup?op=get=0x137FB29D67F1534
>> > F
>> > -BEGIN PGP SIGNATURE-
>> > Version: GnuPG v2
>> >
>> > iQIcBAEBCAAGBQJXXCaMAAoJEBN/sp1n8VNP1B4QAL1EZRBMe+TFYENj2WH0saTm
>> > GBOZxKljwkno0xdGpql64ZsmPogQ9Ybtus6eEWBuzGScc0uHvsbzVKWrVNf2afAP
>> > XbvWYvdTWECfhSTbQQ0MS/itwkuOfeEONywdo9jcCv+261oEJwQyltjDKK6NDgYl
>> > K4L5Qyvhac0EZsjRpDtKDyHj+QT321hLI5gRps/eMPIAHWl8zaq+LJVFDI4EV3gE
>> > 9Ndcljyxjd6IyqIG4LzQobNowA8Jp+QAIrA316ekkb9BLF7o/W9VaITmS+5xS5Dl
>> > y2lL7Ga/LYdpEkMh8ZQmLwjoTWZvKoL08xFQgnUQ4Ry/UI8ENukmIXecuQebhEHH
>> > Bs4WnaZCDxditHymI809lwf2jpeGVjLkOPuLfev38AIfKS00acm0Yb3TIWNbzs6F
>> > ZJ+rz9X6gtPBET2XOSDPWa/JsCcIbg/XjqEM4qzOANmKzWA4mJpVh7uM6M8mgFd+
>> > 4kZA8hVz4sckat1jbXFXgIJuMvNDwjgKUsDVBoZ1wKJvfj/btfgFaVI/osDnNQ8l
>> > rtnuGCDNhZvJCctxIbLOpC7+raImWLOy89Od1W3KMYg6ECgAM7t5A7VRtTdzSqyt
>> > pji7NaXsZxNqCh2QCXa8srjUQgWttpkRsH/iR3xq/s7QdS+Moail5AoF8XeCzFvB
>> > GX+DLN8RPMcIilUXuaUs
>> > =OnF7
>> > -END PGP SIGNATURE-
>> > ___
>> > Users mailing list
>> > Users@ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/users
>>
>> Didi,
>
> I've been having the same problem since the end of last week. I get the
> error whenever I try to add a new host or run reinstall on one that failed
> to install correctly. The relevant part of my log file is below. It covers
> everything from the point that I start the action that causes the error to
> the end of the log file. Also, I have SSH'd to the host to make sure that
> there's nothing wrong with the connectivity and everything there works as
> expected. Any help you can provide would be greatly appreciated.
>
> 2016-06-13 07:52:56,816 INFO
>  [org.ovirt.engine.core.bll.hostdeploy.InstallVdsCommand] (default task-81)
> [1a1eab5e] Running command: InstallVdsCommand internal: false. Entities
> affected :  ID: 244264be-4156-45a1-aed5-d05681303c07 Type: VDSAction group
> EDIT_HOST_CONFIGURATION with role type ADMIN
> 2016-06-13 07:52:56,862 INFO
>  [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (default
> task-81) [1a1eab5e] Lock Acquired to object
> 'EngineLock:{exclusiveLocks='[244264be-4156-45a1-aed5-d05681303c07= ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
> 2016-06-13 07:52:56,868 INFO
>  [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
> (org.ovirt.thread.pool-8-thread-1) [1a1eab5e] Running command:
> InstallVdsInternalCommand internal: true. Entities affected :  ID:
> 244264be-4156-45a1-aed5-d05681303c07 Type: VDS
> 2016-06-13 07:52:56,887 INFO
>  [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
> (org.ovirt.thread.pool-8-thread-1) [1a1eab5e] Before Installation host
> 244264be-4156-45a1-aed5-d05681303c07, m2-h1
> 2016-06-13 07:52:56,889 WARN
>  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-81) [1a1eab5e] Correlation ID: null, Call Stack: null, Custom
> Event ID: -1, Message: Failed to verify Power Management configuration for
> Host m2-h1.
> 2016-06-13 07:52:56,918 INFO
>  [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
> (org.ovirt.thread.pool-8-thread-1) [1a1eab5e] START,
> SetVdsStatusVDSCommand(HostName = m2-h1,
> SetVdsStatusVDSCommandParameters:{runAsync='true',
> hostId='244264be-4156-45a1-aed5-d05681303c07', status='Installing',
> nonOperationalReason='NONE', stopSpmFailureLogged='false',
> maintenanceReason='null'}), log id: 7656e770
> 2016-06-13 07:52:56,921 INFO
>  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-81) [1a1eab5e] Correlation ID: 1a1eab5e, Call Stack: null,
> Custom Event ID: -1, Message: Host m2-h1 

Re: [ovirt-users] Disk from FC storage not removed

2016-06-13 Thread Nir Soffer
On Mon, Jun 13, 2016 at 12:06 PM, Krzysztof Dajka  wrote:
> Hi Nir,
>
> Thanks for the solution. I didn't notice the guest /dev/backupvg01/backuplv01 on
> all the hypervisors. It seems that I've got this issue with 2 additional
> volumes, but no one noticed because they were only a few GB.
>
>
> [root@wrops2 BLUE/WRO ~]# ls -l /sys/block/$(basename $(readlink
> /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8))/holders
> total 0
> lrwxrwxrwx. 1 root root 0 Jun 13 10:48 dm-43 -> ../../dm-43
>
> [root@wrops2 BLUE/WRO ~]# pvscan --cache
> [root@wrops2 BLUE/WRO ~]# vgs -o pv_name,vg_name
>   PV
> VG
>
> /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
> backupvg01
>   /dev/sda2
> centos_wrops2
>
> /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/99a1c067-9728-484a-a0cb-cb6689d5724c
> deployvg
>   /dev/mapper/360e00d240572
> e69d1c16-36d1-4375-aaee-69f5a5ce1616
>
> /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/86a6d83f-2661-4fe3-8874-ce4d8a111c0d
> jenkins
>   /dev/sda3
> w2vg1
>
> [root@wrops2 BLUE/WRO ~]# dmsetup info
> Name:  backupvg01-backuplv01
> State: ACTIVE
> Read Ahead:8192
> Tables present:LIVE
> Open count:0
> Event number:  0
> Major, minor:  253, 43
> Number of targets: 1
> UUID: LVM-ubxOH5R2h6B8JwLGfhpiNjnAKlPxMPy6KfkeLBxXajoT3gxU0yC5JvOQQVkixrTA
>
> [root@wrops2 BLUE/WRO ~]# lvchange -an /dev/backupvg01/backuplv01
> [root@wrops2 BLUE/WRO ~]# lvremove
> /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
> Do you really want to remove active logical volume
> ee53af81-820d-4916-b766-5236ca99daf8? [y/n]: y
>   Logical volume "ee53af81-820d-4916-b766-5236ca99daf8" successfully removed
>
>
> Would this configuration in lvm.conf:
> filter = [ "r|/dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/.*|" ]
> on all hypervisors solve problem of scanning guest volumes?

I would use global_filter, to make sure that commands using a filter
from the command line do not override your filter. vdsm is such an
application; it uses --config 'devices { filter = ... }'.
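
For example, something like this in the devices section of /etc/lvm/lvm.conf on
each hypervisor (the regex is just the one from your question; a filter passed
with --config, as vdsm does, does not replace global_filter):

  devices {
      global_filter = [ "r|/dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/.*|" ]
  }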

Nir

>
> 2016-06-11 23:16 GMT+02:00 Nir Soffer :
>>
>> On Thu, Jun 9, 2016 at 11:46 AM, Krzysztof Dajka 
>> wrote:
>> > Hi,
>> >
>> > Recently I tried to delete 1TB disk created on top ~3TB LUN from
>> > ovirtengine.
>> > Disk is preallocated and I backuped data to other disk so I could
>> > recreate
>> > it once again as thin volume. I couldn't remove this disk when it was
>> > attached to a VM. But once I detached it I could remove it permanently.
>> > The
>> > thing is it only disappeared from ovirtengine GUI.
>> >
>> > I've got 4 hosts with FC HBA attached to storage array and all of them
>> > are
>> > saying that this 1TB disk which should be gone is opened by all hosts
>> > simultaneously.
>> >
>> > [root@wrops1 BLUE ~]# lvdisplay -m
>> >
>> > /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
>> >   --- Logical volume ---
>> >   LV Path
>> >
>> > /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
>> >   LV Nameee53af81-820d-4916-b766-5236ca99daf8
>> >   VG Namee69d1c16-36d1-4375-aaee-69f5a5ce1616
>> >   LV UUIDsBdBRk-tNyZ-Rval-F4lw-ka6X-wOe8-AQenTb
>> >   LV Write Accessread/write
>> >   LV Creation host, time wrops1.blue, 2015-07-31 10:40:57 +0200
>> >   LV Status  available
>> >   # open 1
>> >   LV Size1.00 TiB
>> >   Current LE 8192
>> >   Segments   1
>> >   Allocation inherit
>> >   Read ahead sectors auto
>> >   - currently set to 8192
>> >   Block device   253:29
>> >
>> >   --- Segments ---
>> >   Logical extents 0 to 8191:
>> > Typelinear
>> > Physical volume /dev/mapper/360e00d240572
>> > Physical extents8145 to 16336
>> >
>> > Deactivating LV doesn't work:
>> > [root@wrops1 BLUE ~]# lvchange -an
>> >
>> > /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
>> >   Logical volume
>> >
>> > e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8 
>> > is
>> > used by another device.
>>
>> Looks like your lv is used as a physical volume on another vg - probably
>> a vg created on a guest. Lvm and systemd are trying hard to discover
>> stuff on multipath devices and expose anything to the hypervisor.
>>
>> Can you share the output of:
>>
>> ls -l /sys/block/$(basename $(readlink
>>
>> /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8))/holders
>>
>> And:
>>
>> pvscan --cache
>> vgs -o pv_name,vg_name
>>
>> Nir
>>
>> > Removing from hypervisor doesn't work either.
>> > [root@wrops1 BLUE ~]# lvremove --force
>> >
>> > /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
>> >   Logical 

Re: [ovirt-users] Problem accessing to hosted-engine after wrong network config

2016-06-13 Thread Alexis HAUSER
>Thanks for the report.
>Can you please summarize how you solved the wrong-vlan issue? Thanks.

Actually, this isn't very clear. After changing the ovirtmgmt VLAN, I wasn't 
able to access the web interface anymore (or even to ping the FQDN of the 
hosted-engine VM).
After trying a lot of different things with no success, I decided to reboot the 
hypervisor.
I don't know if this reboot was a bad idea, but I started to realize the VM 
wasn't really started:
- hosted-engine --vm-status was showing the VM as if it were started, but with 
"unknown stale data"
- vdsClient -s 0 list was showing the VM as down, with "exitMessage = Failed to 
acquire lock: No space left on device"

I tried everything involving maintenance mode / stopping the VM / starting it with 
oVirt commands, but the VM was not starting; it kept crashing with the error 
message above (and was unreachable from the network, of course).
I found out there was an option in the hosted-engine command to reinitialize the 
lockspace, but I still had the same error.

Before deleting everything on my NFS data domain, I tried to delete the file 
called __DIRECT_IO_TEST__, which seems to be a lock file (there is no 
documentation at all concerning this, from what I can see), and I got lucky: 
the VM started again, with a good status, and was accessible.

So there are 3 points I don't understand:
1) On the hypervisor, none of the config files or configuration I could get related 
to ovirtmgmt had any VLAN option: does that mean that from the moment I 
changed this VLAN option on the VM, its link with the hypervisor was cut, 
and the information about the VLAN in the VM never came back to the hypervisor?
2) The fact that hosted-engine --reinitialize-lockspace didn't 
reinitialize the lockspace correctly and I had to do it manually... and that only deleting 
this file manually made everything work again.
3) After this file was deleted, why was I able to ping and contact my VM again 
while it was still configured on another, wrong VLAN? I should have lost 
connectivity completely.

Maybe some of these behaviors are bugs, but it's hard to guess which part in order 
to be able to file a new bug report...
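
For reference, the commands involved here were roughly the following (a sketch, not
an official procedure; exact options can differ between versions, and deleting
__DIRECT_IO_TEST__ was a last resort):

  hosted-engine --vm-status        # was reporting "unknown stale data"
  vdsClient -s 0 list              # showed the real (down) VM state on the host
  hosted-engine --set-maintenance --mode=global
  hosted-engine --reinitialize-lockspace
  hosted-engine --set-maintenance --mode=none
  hosted-engine --vm-start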


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted-engine vm-status stale data and cluster seems "broken"

2016-06-13 Thread Charles Kozler
It is up. I can do "ps -Aef | grep -i qemu-kvm | grep -i hosted" and see it
running. I also forcefully shut it down with hosted-engine --vm-stop when
it was on node 1 and then did --vm-start on node 2, and it came up. Also, the
web UI is reachable, so that's how I also know the hosted engine VM is running.

On Mon, Jun 13, 2016 at 8:24 AM, Alexis HAUSER <
alexis.hau...@telecom-bretagne.eu> wrote:

>
> > http://imgur.com/a/6xkaS
>
> I had similar errors with one single host and a hosted-engine VM.
> My case may be totally different, but one thing you could try first is
> to check whether the VM is really up.
> In my case, the VM was shown by the hosted-engine command as up, but it was down.
> With the vdsClient command, you can check its status in more detail.
>
> What is the result for you of the following command ?
>
>  vdsClient -s 0 list
>



-- 

*Charles Kozler*
*Vice President, IT Operations*

FIX Flyer, LLC
225 Broadway | Suite 1600 | New York, NY 10007
1-888-349-3593
http://www.fixflyer.com 

NOTICE TO RECIPIENT: THIS E-MAIL IS MEANT ONLY FOR THE INTENDED
RECIPIENT(S) OF THE TRANSMISSION, AND CONTAINS CONFIDENTIAL INFORMATION
WHICH IS PROPRIETARY TO FIX FLYER LLC.  ANY UNAUTHORIZED USE, COPYING,
DISTRIBUTION, OR DISSEMINATION IS STRICTLY PROHIBITED.  ALL RIGHTS TO THIS
INFORMATION IS RESERVED BY FIX FLYER LLC.  IF YOU ARE NOT THE INTENDED
RECIPIENT, PLEASE CONTACT THE SENDER BY REPLY E-MAIL AND PLEASE DELETE THIS
E-MAIL FROM YOUR SYSTEM AND DESTROY ANY COPIES.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted-engine vm-status stale data and cluster seems "broken"

2016-06-13 Thread Alexis HAUSER

> http://imgur.com/a/6xkaS 

I had similar errors with one single host and a hosted-engine VM.
My case may be totally different, but one thing you could try first is to 
check whether the VM is really up.
In my case, the VM was shown by the hosted-engine command as up, but it was down. With 
the vdsClient command, you can check its status in more detail.

What is the result of the following command for you?

 vdsClient -s 0 list
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Migrate hosted-engine from NFS to ISCSI

2016-06-13 Thread Simone Tiraboschi
On Thu, Jun 9, 2016 at 9:00 AM, Madhuranthakam, Ravi Kumar
 wrote:
> Is there any solution to it on oVirt 3.6?

You can try to follow the discussion here:
https://bugzilla.redhat.com/show_bug.cgi?id=1240466

Basically you have to:
- take a backup of the engine with engine-backup
- deploy from scratch on a host pointing to the new storage domain
- - if you are going to use the engine appliance, here you have to
avoid automatically executing engine setup since:
- - - you have to manually copy the backup to the new VM
- - - you have to run engine-backup to restore it,
- - - only after that you can execute engine-setup
- at the end you can continue with hosted-engine setup ***
- then you have to run hosted-engine --deploy again on each host to
point to the new storage domain

*** the flow is currently broken here: hosted-engine-setup will fail since:
- the old hosted-engine storage domain is already in the engine (since
you restored the DB) but you are deploying on a different one
- the engine VM is already in the DB but you are deploying with a new VM
- all the hosted-engine hosts are already in the engine DB

So you'll probably need to manually edit the engine-DB just after DB
recovery in order to:
- remove the hosted-engine storage domain from the engine DB
- remove the hosted-engine VM from the engine DB
- remove all the hosted-engine hosts from the engine DB since you are
going to redeploy them

We are looking into adding this capability to engine-backup.
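
For reference, the backup/restore part boils down to something like the commands
below (a sketch only; check engine-backup --help on your version for the exact
restore options):

  # on the current engine VM
  engine-backup --mode=backup --file=engine-backup.tar.gz --log=backup.log

  # copy the file to the new engine VM, then restore and run setup there
  engine-backup --mode=restore --file=engine-backup.tar.gz --log=restore.log \
      --provision-db --restore-permissions
  engine-setup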

> I am also planning to move hosted engine from NFS storage to ISCSI .
>
>
>
> ~Ravi
>
>
>
>
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] java.io.IOException: Command returned failure code 1 during SSH session

2016-06-13 Thread Phillip Bailey
On Mon, Jun 13, 2016 at 4:34 AM, Yedidyah Bar David  wrote:

> On Sat, Jun 11, 2016 at 5:56 PM, Gregor Binder 
> wrote:
> > -BEGIN PGP SIGNED MESSAGE-
> > Hash: SHA256
> >
> > Hi,
> >
> > during inspecting the engine.log I found this entry:
> >
> > - 
> > [org.ovirt.engine.core.uutils.ssh.SSHDialog]
> > (DefaultQuartzScheduler_Worker-81) [] Exception: java.io.IOException:
> > Command returned failure code 1 during SSH session 'root@<host name remove>'
> > - 
> > Message: Failed to check for available updates on host host01 with
> > message 'Command returned failure code 1 during SSH session
> > 'root@'
> > - 
> >
> > Looks like a serious problem because the engine can't check for updates.
>
> You should have some more information in the log. Please check around this
> line and/or post full log. Thanks.
>
> >
> > cheers
> > gregor
> > - --
> > GPG-Key: 67F1534F
> > URL: http://pgp.mit.edu:11371/pks/lookup?op=get=0x137FB29D67F1534
> > F
> > -BEGIN PGP SIGNATURE-
> > Version: GnuPG v2
> >
> > iQIcBAEBCAAGBQJXXCaMAAoJEBN/sp1n8VNP1B4QAL1EZRBMe+TFYENj2WH0saTm
> > GBOZxKljwkno0xdGpql64ZsmPogQ9Ybtus6eEWBuzGScc0uHvsbzVKWrVNf2afAP
> > XbvWYvdTWECfhSTbQQ0MS/itwkuOfeEONywdo9jcCv+261oEJwQyltjDKK6NDgYl
> > K4L5Qyvhac0EZsjRpDtKDyHj+QT321hLI5gRps/eMPIAHWl8zaq+LJVFDI4EV3gE
> > 9Ndcljyxjd6IyqIG4LzQobNowA8Jp+QAIrA316ekkb9BLF7o/W9VaITmS+5xS5Dl
> > y2lL7Ga/LYdpEkMh8ZQmLwjoTWZvKoL08xFQgnUQ4Ry/UI8ENukmIXecuQebhEHH
> > Bs4WnaZCDxditHymI809lwf2jpeGVjLkOPuLfev38AIfKS00acm0Yb3TIWNbzs6F
> > ZJ+rz9X6gtPBET2XOSDPWa/JsCcIbg/XjqEM4qzOANmKzWA4mJpVh7uM6M8mgFd+
> > 4kZA8hVz4sckat1jbXFXgIJuMvNDwjgKUsDVBoZ1wKJvfj/btfgFaVI/osDnNQ8l
> > rtnuGCDNhZvJCctxIbLOpC7+raImWLOy89Od1W3KMYg6ECgAM7t5A7VRtTdzSqyt
> > pji7NaXsZxNqCh2QCXa8srjUQgWttpkRsH/iR3xq/s7QdS+Moail5AoF8XeCzFvB
> > GX+DLN8RPMcIilUXuaUs
> > =OnF7
> > -END PGP SIGNATURE-
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>
> Didi,

I've been having the same problem since the end of last week. I get the
error whenever I try to add a new host or run reinstall on one that failed
to install correctly. The relevant part of my log file is below. It covers
everything from the point that I start the action that causes the error to
the end of the log file. Also, I have SSH'd to the host to make sure that
there's nothing wrong with the connectivity and everything there works as
expected. Any help you can provide would be greatly appreciated.

2016-06-13 07:52:56,816 INFO
 [org.ovirt.engine.core.bll.hostdeploy.InstallVdsCommand] (default task-81)
[1a1eab5e] Running command: InstallVdsCommand internal: false. Entities
affected :  ID: 244264be-4156-45a1-aed5-d05681303c07 Type: VDSAction group
EDIT_HOST_CONFIGURATION with role type ADMIN
2016-06-13 07:52:56,862 INFO
 [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (default
task-81) [1a1eab5e] Lock Acquired to object
'EngineLock:{exclusiveLocks='[244264be-4156-45a1-aed5-d05681303c07=]', sharedLocks='null'}'
2016-06-13 07:52:56,868 INFO
 [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
(org.ovirt.thread.pool-8-thread-1) [1a1eab5e] Running command:
InstallVdsInternalCommand internal: true. Entities affected :  ID:
244264be-4156-45a1-aed5-d05681303c07 Type: VDS
2016-06-13 07:52:56,887 INFO
 [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
(org.ovirt.thread.pool-8-thread-1) [1a1eab5e] Before Installation host
244264be-4156-45a1-aed5-d05681303c07, m2-h1
2016-06-13 07:52:56,889 WARN
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-81) [1a1eab5e] Correlation ID: null, Call Stack: null, Custom
Event ID: -1, Message: Failed to verify Power Management configuration for
Host m2-h1.
2016-06-13 07:52:56,918 INFO
 [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(org.ovirt.thread.pool-8-thread-1) [1a1eab5e] START,
SetVdsStatusVDSCommand(HostName = m2-h1,
SetVdsStatusVDSCommandParameters:{runAsync='true',
hostId='244264be-4156-45a1-aed5-d05681303c07', status='Installing',
nonOperationalReason='NONE', stopSpmFailureLogged='false',
maintenanceReason='null'}), log id: 7656e770
2016-06-13 07:52:56,921 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-81) [1a1eab5e] Correlation ID: 1a1eab5e, Call Stack: null,
Custom Event ID: -1, Message: Host m2-h1 configuration was updated by
admin@internal.
2016-06-13 07:52:56,933 INFO
 [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(org.ovirt.thread.pool-8-thread-1) [1a1eab5e] FINISH,
SetVdsStatusVDSCommand, log id: 7656e770
2016-06-13 07:52:57,140 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-1) [1a1eab5e] Correlation ID: 1a1eab5e,
Call Stack: null, Custom Event ID: -1, Message: Installing Host m2-h1.

Re: [ovirt-users] Can't start VM after shutdown

2016-06-13 Thread Colin Coe
Initially we only saw this on VMs with 2 or more disks. Subsequently we
confirmed that it does happen on single disk VMs also.

CC

---

Sent from my Nexus 5
On Jun 13, 2016 5:12 PM, "gregor"  wrote:

> The VM has two disks, both VirtIO. During testing it's now clear that
> the problem occurs only with two disks. When I select only one disk for
> the snapshot it works.
> Is this a problem of oVirt, or is it not possible to use two disks on a
> VM in oVirt?
>
> Have you also two or more disks on your VM?
>
> Here are the Testresults:
> -
> What does not work:
> - Export the VM: Failed with error "ImageIsNotLegalChain and code 262"
> - Clone the VM: Failed with error "IRSErrorException: Image is not a
> legal chain" with the ID of the second Disk.
>
> After removing the second Disk:
> - Create offline snapshot: Works
> - Remove offline snapshot: After two hours I ran "engine-setup
> --offline" to clean the locked snapshot !!!
> - Export the VM: Works
> - Import the exported VM: Works
> - Add Disk to the imported VM: Works
> - Create offline snapshot of the imported VM: Failed
> - Clone the VM: Works
> - Add Disk to the cloned VM: Works
> - Create offline snapshot of the cloned VM: Failed
>
> What works:
> - Make offline snapshot only with the system disk: Works
> - Remove offline snapshot of the system disk: Works
> - Make online snapshot only with the system disk: Works
> - Remove online snapshot of the system disk: Works
>
> cheers
> gregor
>
> On 12/06/16 19:42, gregor wrote:
> > Hi,
> >
> > I solved my problem, here are the steps but be carefully if you don't
> > know what the commands did and how to restore from backup don't follow
> this:
> >
> > - ssh to the host
> > - systemctl stop ovirt-engine
> > - backup the database with "engine-backup"
> > - navigate to the image files
> > - backup the images: sudo -u vdsm rsync -av  
> > - check which one is the backing file: qemu-img info 
> > - check for damages: qemu-img check 
> > - qemu-img commit 
> > - rename the  + .lease and .meta so it can't be accessed
> >
> > - vmname=srv03
> > - db=engine
> > - sudo -u postgres psql $db -c "SELECT b.disk_alias, s.description,
> > s.snapshot_id, i.creation_date, s.status, i.imagestatus, i.size,
> > i.image_group_id, i.vm_snapshot_id, i.image_guid, i.parentid, i.active
> > FROM images as i JOIN snapshots AS s ON (i.vm_snapshot_id =
> > s.snapshot_id) LEFT JOIN vm_static AS v ON (s.vm_id = v.vm_guid) JOIN
> > base_disks AS b ON (i.image_group_id = b.disk_id) WHERE v.vm_name =
> > '$vmname' ORDER BY creation_date, description, disk_alias"
> >
> > - note the image_guid and parent_id from the broken snapshot and the
> > active snapshot, the active state is the image_guuid with the parentid
> > ----
> > - igid_active=
> > - igid_broken=
> > - the parentid of the image_guuid of the broken snapshot must be the
> > same as the activ snapshots image_guuid
> > - note the snapshot id
> > - sid_active=
> > - sid_broken=
> >
> > - delete the broken snapshot
> > - sudo -u postgres psql $db -c "DELETE FROM snapshots AS s WHERE
> > s.snapshot_id = '$sid_broken'"
> >
> > - pid_new=----
> > - sudo -u postgres psql $db -c "SELECT * FROM images WHERE
> > vm_snapshot_id = '$sid_active' AND image_guid = '$igid_broken'"
> > - sudo -u postgres psql $db -c "DELETE FROM images WHERE vm_snapshot_id
> > = '$sid_broken' AND image_guid = '$igid_active'"
> > - sudo -u postgres psql $db -c "SELECT * FROM image_storage_domain_map
> > WHERE image_id = '$igid_broken'"
> > - sudo -u postgres psql $db -c "DELETE FROM image_storage_domain_map
> > WHERE image_id = '$igid_broken'"
> > - sudo -u postgres psql $db -c "UPDATE images SET image_guid =
> > '$igid_active', parentid = '$pid_new' WHERE vm_snapshot_id =
> > '$sid_active' AND image_guid = '$igid_broken'"
> > - sudo -u postgres psql $db -c "SELECT * FROM image_storage_domain_map"
> > - storid=
> > - diskprofileid=
> > - sudo -u postgres psql $db -c "INSERT INTO image_storage_domain_map
> > (image_id, storage_domain_id, disk_profile_id) VALUES ('$igid_broken',
> > '$stor_id', '$diskprofileid')"
> >
> > - check values
> > - sudo -u postgres psql $db -c "SELECT b.disk_alias, s.description,
> > s.snapshot_id, i.creation_date, s.status, i.imagestatus, i.size,
> > i.image_group_id, i.vm_snapshot_id, i.image_guid, i.parentid, i.active
> > FROM images as i JOIN snapshots AS s ON (i.vm_snapshot_id =
> > s.snapshot_id) LEFT JOIN vm_static AS v ON (s.vm_id = v.vm_guid) JOIN
> > base_disks AS b ON (i.image_group_id = b.disk_id) WHERE v.vm_name =
> > '$vmname' ORDER BY creation_date, description, disk_alias"could not
> > change directory to "/root/Backups/oVirt"
> >
> > - check for errors
> > - engine-setup --offline
> > - systemctl start ovirt-engine
> >
> > Now you should have a clean state and a working VM ;-)
> >
> > What was tested:
> > - Power up and down the VM
> >
> > What does not work:
> > - 

Re: [ovirt-users] Can't start VM after shutdown

2016-06-13 Thread gregor
The VM has two disks, both VirtIO. During testing it's now clear that
the problem occurs only with two disks. When I select only one disk for
the snapshot it works.
Is this a problem of oVirt, or is it not possible to use two disks on a
VM in oVirt?

Have you also two or more disks on your VM?

Here are the Testresults:
-
What does not work:
- Export the VM: Failed with error "ImageIsNotLegalChain and code 262"
- Clone the VM: Failed with error "IRSErrorException: Image is not a
legal chain" with the ID of the second Disk.

After removing the second Disk:
- Create offline snapshot: Works
- Remove offline snapshot: After two hours I ran "engine-setup
--offline" to clean the locked snapshot !!!
- Export the VM: Works
- Import the exported VM: Works
- Add Disk to the imported VM: Works
- Create offline snapshot of the imported VM: Failed
- Clone the VM: Works
- Add Disk to the cloned VM: Works
- Create offline snapshot of the cloned VM: Failed

What works:
- Make offline snapshot only with the system disk: Works
- Remove offline snapshot of the system disk: Works
- Make online snapshot only with the system disk: Works
- Remove online snapshot of the system disk: Works

cheers
gregor

On 12/06/16 19:42, gregor wrote:
> Hi,
> 
> I solved my problem. Here are the steps, but be careful: if you don't
> know what the commands do and how to restore from backup, don't follow this:
> 
> - ssh to the host
> - systemctl stop ovirt-engine
> - backup the database with "engine-backup"
> - navigate to the image files
> - backup the images: sudo -u vdsm rsync -av <image> <backup destination>
> - check which one is the backing file: qemu-img info <image>
> - check for damages: qemu-img check <image>
> - qemu-img commit <snapshot image>
> - rename the <snapshot image> + .lease and .meta so it can't be accessed
> 
> - vmname=srv03
> - db=engine
> - sudo -u postgres psql $db -c "SELECT b.disk_alias, s.description,
> s.snapshot_id, i.creation_date, s.status, i.imagestatus, i.size,
> i.image_group_id, i.vm_snapshot_id, i.image_guid, i.parentid, i.active
> FROM images as i JOIN snapshots AS s ON (i.vm_snapshot_id =
> s.snapshot_id) LEFT JOIN vm_static AS v ON (s.vm_id = v.vm_guid) JOIN
> base_disks AS b ON (i.image_group_id = b.disk_id) WHERE v.vm_name =
> '$vmname' ORDER BY creation_date, description, disk_alias"
> 
> - note the image_guid and parent_id from the broken snapshot and the
> active snapshot, the active state is the image_guuid with the parentid
> ----
> - igid_active=
> - igid_broken=
> - the parentid of the image_guuid of the broken snapshot must be the
> same as the activ snapshots image_guuid
> - note the snapshot id
> - sid_active=
> - sid_broken=
> 
> - delete the broken snapshot
> - sudo -u postgres psql $db -c "DELETE FROM snapshots AS s WHERE
> s.snapshot_id = '$sid_broken'"
> 
> - pid_new=----
> - sudo -u postgres psql $db -c "SELECT * FROM images WHERE
> vm_snapshot_id = '$sid_active' AND image_guid = '$igid_broken'"
> - sudo -u postgres psql $db -c "DELETE FROM images WHERE vm_snapshot_id
> = '$sid_broken' AND image_guid = '$igid_active'"
> - sudo -u postgres psql $db -c "SELECT * FROM image_storage_domain_map
> WHERE image_id = '$igid_broken'"
> - sudo -u postgres psql $db -c "DELETE FROM image_storage_domain_map
> WHERE image_id = '$igid_broken'"
> - sudo -u postgres psql $db -c "UPDATE images SET image_guid =
> '$igid_active', parentid = '$pid_new' WHERE vm_snapshot_id =
> '$sid_active' AND image_guid = '$igid_broken'"
> - sudo -u postgres psql $db -c "SELECT * FROM image_storage_domain_map"
> - storid=
> - diskprofileid=
> - sudo -u postgres psql $db -c "INSERT INTO image_storage_domain_map
> (image_id, storage_domain_id, disk_profile_id) VALUES ('$igid_broken',
> '$stor_id', '$diskprofileid')"
> 
> - check values
> - sudo -u postgres psql $db -c "SELECT b.disk_alias, s.description,
> s.snapshot_id, i.creation_date, s.status, i.imagestatus, i.size,
> i.image_group_id, i.vm_snapshot_id, i.image_guid, i.parentid, i.active
> FROM images as i JOIN snapshots AS s ON (i.vm_snapshot_id =
> s.snapshot_id) LEFT JOIN vm_static AS v ON (s.vm_id = v.vm_guid) JOIN
> base_disks AS b ON (i.image_group_id = b.disk_id) WHERE v.vm_name =
> '$vmname' ORDER BY creation_date, description, disk_alias"could not
> change directory to "/root/Backups/oVirt"
> 
> - check for errors
> - engine-setup --offline
> - systemctl start ovirt-engine
> 
> Now you should have a clean state and a working VM ;-)
> 
> What was tested:
> - Power up and down the VM
> 
> What does not work:
> - It's not possible to make offline snapshots; online was not tested
> because I will not get into such trouble again. It took many hours
> before the machine was up again.
> 
> PLEASE be aware and don't destroy your Host and VM !!!
> 
> cheers
> gregor
> 
> On 12/06/16 13:40, Colin Coe wrote:
>> We've seen this with both Linux and Windows VMs.  I'm guessing that
>> you've had failures on this VM in both snapshot create and delete
>> 

Re: [ovirt-users] Disk from FC storage not removed

2016-06-13 Thread Krzysztof Dajka
Hi Nir,

Thanks for the solution. I didn't notice the guest /dev/backupvg01/backuplv01
on all the hypervisors. It seems that I've got this issue with 2 additional
volumes, but no one noticed because they were only a few GB.


[root@wrops2 BLUE/WRO ~]# ls -l /sys/block/$(basename $(readlink
/dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8))/holders
total 0
lrwxrwxrwx. 1 root root 0 Jun 13 10:48 dm-43 -> ../../dm-43

[root@wrops2 BLUE/WRO ~]# pvscan --cache
[root@wrops2 BLUE/WRO ~]# vgs -o pv_name,vg_name
  PV
  VG

/dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
backupvg01
  /dev/sda2
 centos_wrops2

/dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/99a1c067-9728-484a-a0cb-cb6689d5724c
deployvg
  /dev/mapper/360e00d240572
 e69d1c16-36d1-4375-aaee-69f5a5ce1616

/dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/86a6d83f-2661-4fe3-8874-ce4d8a111c0d
jenkins
  /dev/sda3
 w2vg1

[root@wrops2 BLUE/WRO ~]# dmsetup info
Name:  backupvg01-backuplv01
State: ACTIVE
Read Ahead:8192
Tables present:LIVE
Open count:0
Event number:  0
Major, minor:  253, 43
Number of targets: 1
UUID: LVM-ubxOH5R2h6B8JwLGfhpiNjnAKlPxMPy6KfkeLBxXajoT3gxU0yC5JvOQQVkixrTA

[root@wrops2 BLUE/WRO ~]# lvchange -an /dev/backupvg01/backuplv01
[root@wrops2 BLUE/WRO ~]# lvremove
/dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
Do you really want to remove active logical volume
ee53af81-820d-4916-b766-5236ca99daf8? [y/n]: y
  Logical volume "ee53af81-820d-4916-b766-5236ca99daf8" successfully removed


Would this configuration in lvm.conf:
filter = [ "r|/dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/.*|" ]
on all hypervisors solve problem of scanning guest volumes?

2016-06-11 23:16 GMT+02:00 Nir Soffer :

> On Thu, Jun 9, 2016 at 11:46 AM, Krzysztof Dajka 
> wrote:
> > Hi,
> >
> > Recently I tried to delete 1TB disk created on top ~3TB LUN from
> > ovirtengine.
> > Disk is preallocated and I backuped data to other disk so I could
> recreate
> > it once again as thin volume. I couldn't remove this disk when it was
> > attached to a VM. But once I detached it I could remove it permanently.
> The
> > thing is it only disappeared from ovirtengine GUI.
> >
> > I've got 4 hosts with FC HBA attached to storage array and all of them
> are
> > saying that this 1TB disk which should be gone is opened by all hosts
> > simultaneously.
> >
> > [root@wrops1 BLUE ~]# lvdisplay -m
> >
> /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
> >   --- Logical volume ---
> >   LV Path
> >
> /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
> >   LV Nameee53af81-820d-4916-b766-5236ca99daf8
> >   VG Namee69d1c16-36d1-4375-aaee-69f5a5ce1616
> >   LV UUIDsBdBRk-tNyZ-Rval-F4lw-ka6X-wOe8-AQenTb
> >   LV Write Accessread/write
> >   LV Creation host, time wrops1.blue, 2015-07-31 10:40:57 +0200
> >   LV Status  available
> >   # open 1
> >   LV Size1.00 TiB
> >   Current LE 8192
> >   Segments   1
> >   Allocation inherit
> >   Read ahead sectors auto
> >   - currently set to 8192
> >   Block device   253:29
> >
> >   --- Segments ---
> >   Logical extents 0 to 8191:
> > Typelinear
> > Physical volume /dev/mapper/360e00d240572
> > Physical extents8145 to 16336
> >
> > Deactivating LV doesn't work:
> > [root@wrops1 BLUE ~]# lvchange -an
> >
> /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
> >   Logical volume
> >
> e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8 is
> > used by another device.
>
> Looks like your lv is used as a physical volume on another vg - probably
> a vg created on a guest. Lvm and systemd are trying hard to discover
> stuff on multipath devices and expose anything to the hypervisor.
>
> Can you share the output of:
>
> ls -l /sys/block/$(basename $(readlink
>
> /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8))/holders
>
> And:
>
> pvscan --cache
> vgs -o pv_name,vg_name
>
> Nir
>
> > Removing from hypervisor doesn't work either.
> > [root@wrops1 BLUE ~]# lvremove --force
> >
> /dev/e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8
> >   Logical volume
> >
> e69d1c16-36d1-4375-aaee-69f5a5ce1616/ee53af81-820d-4916-b766-5236ca99daf8 is
> > used by another device.
> >
> > I tried and rebooted one host and as soon as it booted the volume became
> > opened once again. Lsof on all hosts doesn't give anything meaningful
> > regarding this LV. As opposed to other LV which are used by qemu-kvm.
> >
> > Has anyone encountered similar problem? How can I remove this LV?
> >
> > 

Re: [ovirt-users] java.io.IOException: Command returned failure code 1 during SSH session

2016-06-13 Thread Yedidyah Bar David
On Sat, Jun 11, 2016 at 5:56 PM, Gregor Binder  wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Hi,
>
> during inspecting the engine.log I found this entry:
>
> - 
> [org.ovirt.engine.core.uutils.ssh.SSHDialog]
> (DefaultQuartzScheduler_Worker-81) [] Exception: java.io.IOException:
> Command returned failure code 1 during SSH session 'root@<host name remove>'
> - 
> Message: Failed to check for available updates on host host01 with
> message 'Command returned failure code 1 during SSH session
> 'root@'
> - 
>
> Looks like a serious problem because the engine can't check for updates.

You should have some more information in the log. Please check around this
line and/or post full log. Thanks.

>
> cheers
> gregor
> - --
> GPG-Key: 67F1534F
> URL: http://pgp.mit.edu:11371/pks/lookup?op=get=0x137FB29D67F1534
> F
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v2
>
> iQIcBAEBCAAGBQJXXCaMAAoJEBN/sp1n8VNP1B4QAL1EZRBMe+TFYENj2WH0saTm
> GBOZxKljwkno0xdGpql64ZsmPogQ9Ybtus6eEWBuzGScc0uHvsbzVKWrVNf2afAP
> XbvWYvdTWECfhSTbQQ0MS/itwkuOfeEONywdo9jcCv+261oEJwQyltjDKK6NDgYl
> K4L5Qyvhac0EZsjRpDtKDyHj+QT321hLI5gRps/eMPIAHWl8zaq+LJVFDI4EV3gE
> 9Ndcljyxjd6IyqIG4LzQobNowA8Jp+QAIrA316ekkb9BLF7o/W9VaITmS+5xS5Dl
> y2lL7Ga/LYdpEkMh8ZQmLwjoTWZvKoL08xFQgnUQ4Ry/UI8ENukmIXecuQebhEHH
> Bs4WnaZCDxditHymI809lwf2jpeGVjLkOPuLfev38AIfKS00acm0Yb3TIWNbzs6F
> ZJ+rz9X6gtPBET2XOSDPWa/JsCcIbg/XjqEM4qzOANmKzWA4mJpVh7uM6M8mgFd+
> 4kZA8hVz4sckat1jbXFXgIJuMvNDwjgKUsDVBoZ1wKJvfj/btfgFaVI/osDnNQ8l
> rtnuGCDNhZvJCctxIbLOpC7+raImWLOy89Od1W3KMYg6ECgAM7t5A7VRtTdzSqyt
> pji7NaXsZxNqCh2QCXa8srjUQgWttpkRsH/iR3xq/s7QdS+Moail5AoF8XeCzFvB
> GX+DLN8RPMcIilUXuaUs
> =OnF7
> -END PGP SIGNATURE-



-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Disable iptables management by ovirt

2016-06-13 Thread Yedidyah Bar David
On Thu, Jun 9, 2016 at 11:53 PM, Mark Gagnon  wrote:
> Is it possible to disable the automated management of iptables rules once
> your hosts/engine are running?
>
> @CUSTOM_RULES@ won't cut it because  we wanted to filter by source.

You mean on hosts?

It's a checkbox you can unmark when adding a host, under "Advanced Parameters".

>
> Thank you
>
>



-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Problem accessing to hosted-engine after wrong network config

2016-06-13 Thread Yedidyah Bar David
On Thu, Jun 9, 2016 at 7:01 PM, Alexis HAUSER
 wrote:
> Actually I found my answer : it was just a problem on the NFS share, no 
> relationship with ovirt itself, sorry about that.

Thanks for the report.

Can you please summarize how you solved the wrong-vlan issue? Thanks.




-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users