Re: [Users] Reattach storage domain

2014-02-03 Thread Alexandr
 01.02.2014 23:12, Itamar Heim wrote:
 On 02/01/2014 09:58 PM, Alexandr wrote:
 01.02.2014 21:57, Meital Bourvine wrote:
 I think that it should be something like this:

 Method: PUT

 URL:https://IP/api/storageconnections/ID

 Body:
 <storage_connection>
   <address></address>
   <type></type>
   <port></port>
   <target></target>
 </storage_connection>

 (Add the correct details to the body.)
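
 For example, with curl (the engine address, password and connection ID here are placeholders, and the exact type string for a gluster connection may differ by version):

 curl -k -u admin@internal:PASSWORD -X PUT \
   -H "Content-Type: application/xml" \
   -d '<storage_connection><address>vms02.lis.ua</address><type>glusterfs</type><path>STORAGE</path></storage_connection>' \
   https://IP/api/storageconnections/ID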

 - Original Message -
 From: Itamar Heim ih...@redhat.com
 To: Alexandr shur...@shurik.kiev.ua, users@ovirt.org
 Sent: Saturday, February 1, 2014 7:58:56 PM
 Subject: Re: [Users] Reattach storage domain

 On 02/01/2014 06:56 PM, Alexandr wrote:
 Thank you. Can you provide me with more detailed steps? I'm not
 familiar with
 the REST API :(
 Sorry, I can't give more details right now:
 manage connection details:
 http://www.ovirt.org/Features/Manage_Storage_Connections

 rest api (and other things):
 https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.3/html-single/Developer_Guide/index.html



 01.02.2014 19:51, Itamar Heim wrote:
 On 02/01/2014 06:38 PM, Alexandr wrote:
 Hello!

 Unfortunately my master storage domain (gluster) is dead. I set up
 another gluster storage and attached it to oVirt. The hostname, path
 and volume name are the same as the old ones. Then I restored all
 files from a tar archive. But I cannot activate the master domain; the
 operation fails and the domain status remains inactive. I can see it
 is mounted on the nodes:

 vms02.lis.ua:STORAGE on
 /rhev/data-center/mnt/glusterSD/vms02.lis.ua:STORAGE
 I attached engine.log; can someone provide me with recovery steps?

 P.S. Sorry for my English


 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

 The right way to do this would be to restore it outside the engine,
 then use the REST API to edit the storage domain connection (mount) in
 the engine.
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

 I tried to change it via ovirt-shell and received an error:

 [oVirt shell (connected)]# update storageconnection
 4a1d8b07-f393-4134-86cc-4f46145cca2b --address vms02.lis.ua --path
 STORAGE

 error:
 status: 400
 reason: Bad Request
 detail: Cannot edit Storage Connection. Storage connection parameters
 can be edited only for NFS, Posix, local or iSCSI data domains.

 alisa/derez - any reason we don't support edit connection for gluster sd?
I still cannot solve my problem with the storage, so for now I have
started the most important virtual machines directly via libvirt, setting
the disk configuration in the domain XML file to:

<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/rhev/data-center/mnt/glusterSD/vms02.lis.ua:STORAGE/7f9709c1-3ab6-4af8-9e58-955ef7c9452e/images/fd0bf58e-3ba1-428c-a3e2-7e5410d6bc75/4ebf1396-28ac-4851-8b86-e0f1085f735e'/>
</disk>
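
For anyone trying the same workaround: assuming the disk definition above is part of a complete domain XML saved to a file (vm01.xml is a hypothetical name), the VM can be run with plain libvirt:

# virsh define vm01.xml
# virsh start vm01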

Now I have two questions:

1. If I set up a fresh oVirt installation, how can I attach an existing
gluster storage and import the disks? Are
http://www.ovirt.org/Features/Import_an_existing_Storage_Domain and
http://www.ovirt.org/Features/Domain_Scan already implemented in 3.3.2?
or
2. Maybe I can move these disk images into a new SD?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt Node Interface Renaming Problem

2014-02-03 Thread Fabian Deutsch
On Thursday, 2014-01-30 at 10:43 -0800, David Li wrote:
 Hi,
 
 I am using oVirt Node 3.0.3. It seems systemd renames all the interfaces 
 from ethX to something else. Not sure why, but this creates lots of problems 
 for some old scripts. 
 
 For example:
 
 [root@localhost ~]# dmesg | grep -i eth0
 [2.441579] bnx2 :10:00.0 eth0: Broadcom NetXtreme II BCM5709 
 1000Base-SX (C0) PCI Express found at mem fa00, IRQ 30, node addr 
 5c:f3:fc:20:6e:58
 [   27.222803] systemd-udevd[822]: renamed network interface eth0 to enp16s0f0
 
 
 Is there any way to prevent this?

Hey David,

as Antoni already pointed out, this is a basic Fedora/systemd feature.
It actually solves problems, and like Antoni I'd suggest adopting these
new names.

I don't know of a way of turning this naming off.

This upstream document gives some more insight:
http://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/
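
For plain Fedora, that page also documents a way to opt out of the scheme (untested on Node, so treat this as a sketch): mask the naming rule and rebuild the initramfs, e.g.

# ln -s /dev/null /etc/udev/rules.d/80-net-name-slot.rules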

One note: The CentOS based Node is still using the old NIC naming.

- fabian


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] about the size of an offline snapshot

2014-02-03 Thread Maor Lipchuk
On 02/02/2014 02:26 PM, Gianluca Cecchi wrote:
 On Sun, Feb 2, 2014 at 11:21 AM, Maor Lipchuk  wrote:
 
 That is correct; you can also see the size and the fields through the
 API or ovirt-cli
 (see
 http://documentation-devel.engineering.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.3-Beta/html-single/Developer_Guide/index.html#Floating_Disk_Elements),
 though you cannot see the true size on the floating disk; IINM you can
 see it under the VM snapshot disks in the API.
 
 Just to correct the link, as 3.3 is not beta any more and the link
 provided is probably accessible only to Red Hat employees:
 
 https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.3/html/Developer_Guide/chap-Floating_Disks.html#Floating_Disk_Elements
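
 For example, with curl (engine address and credentials are placeholders; IINM the disk elements carry size fields such as provisioned_size and actual_size):

 curl -k -u admin@internal:PASSWORD https://ENGINE/api/disks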
 
 Gianluca
 
Thanks for noticing

Regards,
Maor
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.4 test day - results

2014-02-03 Thread Michal Skrivanek

On Jan 27, 2014, at 17:02 , Alexander Wels aw...@redhat.com wrote:

 Hi, I tested the following items during the test day and here are my results:
  
 1. reboot VM functionality
  
 The related feature page is: http://www.ovirt.org/Features/Guest_Reboot
 The feature page mentions a policy selection checkbox which I was unable to 
 find in the web admin UI at all. I checked the patches that implement the 
 feature and did not see the checkbox implementation. The patches did show me 
 that all I needed to do to use the feature was install the guest agent on the 
 guest. So for my test I installed a Fedora guest and installed the guest 
 agent on it. About a minute after starting the guest, the reboot 
 button was enabled, and pressing it started the reboot sequence on the guest. 
  
 I had a console open on the guest and it informed me that the admin had 
 started the reboot process and the guest would be rebooted in a minute. I did 
 not find a way to change the time it took for the reboot to happen.
  
 I did the same test with the REST api, with the same result. The reboot was 
 scheduled for a minute after I issued the command. I did not find a way to 
 change the time with the REST api either. I am guessing that is a future 
 feature.

indeed. The scope of the feature was cut down significantly; no customizations 
other than via vdsm.conf. We hope we'll get the original promises in 3.5 :)
The default 60-second delay applies to shutdown as well; future 
customizations would allow removing it.

  
 2. Fix Control-Alt-Delete functionality in console options
  
 I had trouble getting spice to work in my test setup, but no issues with VNC. 
 So I tested VNC. I checked the VM console options to make sure that 'Map 
 ctrl-alt-del shortcut to ctrl+alt+end' was checked. Then I connected to a 
 running VM with VNC. I pressed ctrl+alt+end, expecting it to issue a 
 ctrl+alt+del to the guest. Nothing happened. I pressed ctrl+alt+del and it 
 was properly issued to the guest. I made sure there was no issue 
 with my client by using the menu to issue a ctrl+alt+del to the guest, which 
 also resulted in the proper action on the guest. I opened a bug for this:
 https://bugzilla.redhat.com/show_bug.cgi?id=1057763
  
 I did this test on my Fedora machine, and the description mentions that 
 certain OSes capture the ctrl-alt-del before sending it to the guest, Fedora 
 is not one of those OSes, so maybe my test was not valid?

It seems only the Windows version of virt-viewer supports the mapping. We'll 
grey it out for Linux clients then (or try to push the change to the Linux versions 
as well, whichever can be done first).

Thanks,
michal

  
 3. Show name of the template in General tab for a VM if the VM is 
 deployed from template via clone allocation.
  
 This is a very straight forward test. I created a template from a VM. I named 
 the template. Then created a VM from that template using clone allocation. I 
 verified that the name of the template is now properly shown in the VM 
 general sub tab. Works as expected.
  
 Overall I had issues getting the engine installed due to the shmmax issue 
 reported in other threads, and then I had a really hard time adding new hosts 
 from a blank Fedora minimal install. I was successful in one out of three 
 attempts, which I feel was probably a yum repository issue, as I was getting 
 conflicting python-cpopen packages causing VDSM to not start. 
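
 For reference, the usual workaround for the shmmax issue was to raise the kernel limit before running engine-setup; the value below is illustrative only:

 # sysctl -w kernel.shmmax=68719476736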
  
 Thanks,
 Alexander
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Hosted Engine always reports unknown stale-data

2014-02-03 Thread Andrew Lau
Hi,

I was wondering if anyone has this same notice when they run:
hosted-engine --vm-status

The engine status will always be unknown stale-data even when the VM is
powered on and the engine is online. engine-health will actually report the
correct status.

eg.

--== Host 1 status ==--

Status up-to-date  : False
Hostname   : 172.16.0.11
Host ID: 1
Engine status  : unknown stale-data

Is it some sort of blocked port causing this or is this by design?

Thanks,
Andrew
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Hosted Engine always reports unknown stale-data

2014-02-03 Thread Doron Fediuck


- Original Message -
 From: Andrew Lau and...@andrewklau.com
 To: users users@ovirt.org
 Sent: Monday, February 3, 2014 12:32:45 PM
 Subject: [Users] Hosted Engine always reports unknown stale-data
 
 Hi,
 
 I was wondering if anyone has this same notice when they run:
 hosted-engine --vm-status
 
 The engine status will always be unknown stale-data even when the VM is
 powered on and the engine is online. engine-health will actually report the
 correct status.
 
 eg.
 
 --== Host 1 status ==--
 
 Status up-to-date : False
 Hostname : 172.16.0.11
 Host ID : 1
 Engine status : unknown stale-data
 
 Is it some sort of blocked port causing this or is this by design?
 
 Thanks,
 Andrew
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 

Hi Andrew,
it looks like an issue with the time stamp.
Which time stamp do you have? How relevant is it?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ovirt test day: HA VM Reservation feature test summary

2014-02-03 Thread Doron Fediuck


- Original Message -
 From: Moti Asayag masa...@redhat.com
 To: users users@ovirt.org
 Sent: Monday, January 27, 2014 5:54:22 PM
 Subject: [Users] ovirt test day: HA VM Reservation feature test summary
 
 Hi All,
 
 In the latest ovirt-test-day I've tested the HA VM resource reservation
 feature [1] according to the basic scenarios described in [2].
 
 The new feature notifies the admin via an event log about the cluster's
 inability to reserve resources for HA VMs. I've reported 2 bugs based
 on the behavior: the cluster check doesn't consider the state of the
 cluster's hosts when it calculates the resources [3], and a minor issue
 with the audit log translation into a message [4].
 
 [1] http://www.ovirt.org/Features/HA_VM_reservation
 [2] http://www.ovirt.org/OVirt_3.4_TestDay#SLA
 [3] Bug 1057579 -HA Vm reservation check ignores host status
 https://bugzilla.redhat.com/show_bug.cgi?id=1057579
 [4] Bug 1057584 -HA Vm reservation event log is not well resolved
 https://bugzilla.redhat.com/show_bug.cgi?id=1057584
 
 Thanks,
 Moti

Thanks, Moti.
Good catches. When looking at the code I also noticed the 'none' policy does
not use the HA reservations weight module. Were you using the default policy
or something else?

Thanks,
Doron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Reattach storage domain

2014-02-03 Thread Itamar Heim

On 02/03/2014 09:17 AM, Alexandr wrote:

01.02.2014 23:12, Itamar Heim wrote:

On 02/01/2014 09:58 PM, Alexandr wrote:

01.02.2014 21:57, Meital Bourvine wrote:

I think that it should be something like this:

Method: PUT

URL:https://IP/api/storageconnections/ID

Body:
<storage_connection>
  <address></address>
  <type></type>
  <port></port>
  <target></target>
</storage_connection>

(Add the correct details to the body.)

- Original Message -

From: Itamar Heim ih...@redhat.com
To: Alexandr shur...@shurik.kiev.ua, users@ovirt.org
Sent: Saturday, February 1, 2014 7:58:56 PM
Subject: Re: [Users] Reattach storage domain

On 02/01/2014 06:56 PM, Alexandr wrote:

Thank you. Can you provide me with more detailed steps? I'm not
familiar with
the REST API :(

Sorry, I can't give more details right now:
manage connection details:
http://www.ovirt.org/Features/Manage_Storage_Connections

rest api (and other things):
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.3/html-single/Developer_Guide/index.html




01.02.2014 19:51, Itamar Heim wrote:

On 02/01/2014 06:38 PM, Alexandr wrote:

Hello!

Unfortunately my master storage domain (gluster) is dead. I set up
another gluster storage and attached it to oVirt. The hostname, path
and volume name are the same as the old ones. Then I restored all
files from a tar archive. But I cannot activate the master domain; the
operation fails and the domain status remains inactive. I can see it
is mounted on the nodes:

vms02.lis.ua:STORAGE on
/rhev/data-center/mnt/glusterSD/vms02.lis.ua:STORAGE
I attached engine.log; can someone provide me with recovery steps?

P.S. Sorry for my English


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


The right way to do this would be to restore it outside the engine,
then use the REST API to edit the storage domain connection (mount) in
the engine.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


I tried to change it via ovirt-shell and received an error:

[oVirt shell (connected)]# update storageconnection
4a1d8b07-f393-4134-86cc-4f46145cca2b --address vms02.lis.ua --path
STORAGE

error:
status: 400
reason: Bad Request
detail: Cannot edit Storage Connection. Storage connection parameters
can be edited only for NFS, Posix, local or iSCSI data domains.


alisa/derez - any reason we don't support edit connection for gluster sd?

I still cannot solve my problem with the storage, so for now I have
started the most important virtual machines directly via libvirt, setting
the disk configuration in the domain XML file to:

<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/rhev/data-center/mnt/glusterSD/vms02.lis.ua:STORAGE/7f9709c1-3ab6-4af8-9e58-955ef7c9452e/images/fd0bf58e-3ba1-428c-a3e2-7e5410d6bc75/4ebf1396-28ac-4851-8b86-e0f1085f735e'/>
</disk>

Now I have two questions:

1. If I set up a fresh oVirt installation, how can I attach an existing
gluster storage and import the disks? Are
http://www.ovirt.org/Features/Import_an_existing_Storage_Domain and
http://www.ovirt.org/Features/Domain_Scan already implemented in 3.3.2?
or
2. Maybe I can move these disk images into a new SD?


we're working on this, but for now you can only attach a clean SD. You 
can later move the disks to it and register them via the domain scan.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ovirt test day: HA VM Reservation feature test summary

2014-02-03 Thread Moti Asayag


- Original Message -
 From: Doron Fediuck dfedi...@redhat.com
 To: Moti Asayag masa...@redhat.com
 Cc: users users@ovirt.org
 Sent: Monday, February 3, 2014 1:07:37 PM
 Subject: Re: [Users] ovirt test day: HA VM Reservation feature test summary
 
 
 
 - Original Message -
  From: Moti Asayag masa...@redhat.com
  To: users users@ovirt.org
  Sent: Monday, January 27, 2014 5:54:22 PM
  Subject: [Users] ovirt test day: HA VM Reservation feature test summary
  
  Hi All,
  
  In the latest ovirt-test-day I've tested the HA VM resource reservation
  feature [1] according to the basic scenarios described in [2].
  
  The new feature notifies the admin via an event log about the cluster's
  inability to reserve resources for HA VMs. I've reported 2 bugs based
  on the behavior: the cluster check doesn't consider the state of the
  cluster's hosts when it calculates the resources [3], and a minor issue
  with the audit log translation into a message [4].
  
  [1] http://www.ovirt.org/Features/HA_VM_reservation
  [2] http://www.ovirt.org/OVirt_3.4_TestDay#SLA
  [3] Bug 1057579 -HA Vm reservation check ignores host status
  https://bugzilla.redhat.com/show_bug.cgi?id=1057579
  [4] Bug 1057584 -HA Vm reservation event log is not well resolved
  https://bugzilla.redhat.com/show_bug.cgi?id=1057584
  
  Thanks,
  Moti
 
 Thanks, Moti.
 Good catches. When looking at the code I also noticed the 'none' policy does
 not use the HA reservations weight module. Were you using the default policy
 or something else?
 

I used the default ('None') policy in my testing.

 Thanks,
 Doron
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ovirt test day: HA VM Reservation feature test summary

2014-02-03 Thread Doron Fediuck


- Original Message -
 From: Moti Asayag masa...@redhat.com
 To: Doron Fediuck dfedi...@redhat.com
 Cc: users users@ovirt.org
 Sent: Monday, February 3, 2014 1:17:38 PM
 Subject: Re: [Users] ovirt test day: HA VM Reservation feature test summary
 
 
 
 - Original Message -
  From: Doron Fediuck dfedi...@redhat.com
  To: Moti Asayag masa...@redhat.com
  Cc: users users@ovirt.org
  Sent: Monday, February 3, 2014 1:07:37 PM
  Subject: Re: [Users] ovirt test day: HA VM Reservation feature test summary
  
  
  
  - Original Message -
   From: Moti Asayag masa...@redhat.com
   To: users users@ovirt.org
   Sent: Monday, January 27, 2014 5:54:22 PM
   Subject: [Users] ovirt test day: HA VM Reservation feature test summary
   
   Hi All,
   
   In the latest ovirt-test-day i've tested the HA VM resource reservation
   feature [1] according to the basic scenarios as described on [2].
   
   The new feature notifies the admin via an event log about his cluster
   inability to preserve resources for HA VMs. I've reported 2 bugs based
   on the behavior: The cluster check doesn't consider the state of the
   cluster's hosts when it calculates the resources [3] and a minor issue
   of the audit log translation into a message [4].
   
   [1] http://www.ovirt.org/Features/HA_VM_reservation
   [2] http://www.ovirt.org/OVirt_3.4_TestDay#SLA
   [3] Bug 1057579 -HA Vm reservation check ignores host status
   https://bugzilla.redhat.com/show_bug.cgi?id=1057579
   [4] Bug 1057584 -HA Vm reservation event log is not well resolved
   https://bugzilla.redhat.com/show_bug.cgi?id=1057584
   
   Thanks,
   Moti
  
  Thanks, Moti.
  Good catches. When looking at the code I also noticed 'none' policy does
  not
  use
  the ha reservations weight module. Were you using the default policy or
  something
  else?
  
 
 I used the default ('None') policy in my testing.
 

So indeed we may have another issue.

Thanks again,
Doron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Hosted Engine always reports unknown stale-data

2014-02-03 Thread Andrew Lau
On Mon, Feb 3, 2014 at 9:53 PM, Doron Fediuck dfedi...@redhat.com wrote:



 - Original Message -
  From: Andrew Lau and...@andrewklau.com
  To: users users@ovirt.org
  Sent: Monday, February 3, 2014 12:32:45 PM
  Subject: [Users] Hosted Engine always reports unknown stale-data
 
  Hi,
 
  I was wondering if anyone has this same notice when they run:
  hosted-engine --vm-status
 
  The engine status will always be unknown stale-data even when the VM
 is
  powered on and the engine is online. engine-health will actually report
 the
  correct status.
 
  eg.
 
  --== Host 1 status ==--
 
  Status up-to-date : False
  Hostname : 172.16.0.11
  Host ID : 1
  Engine status : unknown stale-data
 
  Is it some sort of blocked port causing this or is this by design?
 
  Thanks,
  Andrew
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 

 Hi Andrew,
 it looks like an issue with the time stamp.
 Which time stamp do you have? How relevant is it?


Timestamps seem to be outdated by a lot; interesting error in the broker.log:

Thread-24::INFO::2014-02-03
22:33:14,801::engine_health::90::engine_health.CpuLoadNoEngine::(action) VM
not running on this host, status down
Thread-22::INFO::2014-02-03
22:33:14,834::mem_free::53::mem_free.MemFree::(action) memFree: 27382
Thread-23::ERROR::2014-02-03
22:33:14,922::cpu_load_no_engine::156::cpu_load_no_engine.EngineHealth::(update_stat_file)
Failed to getVmStats: 'pid'
Thread-23::INFO::2014-02-03
22:33:14,923::cpu_load_no_engine::121::cpu_load_no_engine.EngineHealth::(calculate_load)
System load total=0.0124, engine=0., non-engine=0.0124

I'm assuming that update_stat_file is the metadata file the vm-status is
getting pulled from?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Hosted Engine always reports unknown stale-data

2014-02-03 Thread Doron Fediuck


- Original Message -
 From: Andrew Lau and...@andrewklau.com
 To: Doron Fediuck dfedi...@redhat.com
 Cc: users users@ovirt.org, Jiri Moskovcak jmosk...@redhat.com, Greg 
 Padgett gpadg...@redhat.com
 Sent: Monday, February 3, 2014 1:35:01 PM
 Subject: Re: [Users] Hosted Engine always reports unknown stale-data
 
 On Mon, Feb 3, 2014 at 9:53 PM, Doron Fediuck dfedi...@redhat.com wrote:
 
 
 
  - Original Message -
   From: Andrew Lau and...@andrewklau.com
   To: users users@ovirt.org
   Sent: Monday, February 3, 2014 12:32:45 PM
   Subject: [Users] Hosted Engine always reports unknown stale-data
  
   Hi,
  
   I was wondering if anyone has this same notice when they run:
   hosted-engine --vm-status
  
   The engine status will always be unknown stale-data even when the VM
  is
   powered on and the engine is online. engine-health will actually report
  the
   correct status.
  
   eg.
  
   --== Host 1 status ==--
  
   Status up-to-date : False
   Hostname : 172.16.0.11
   Host ID : 1
   Engine status : unknown stale-data
  
   Is it some sort of blocked port causing this or is this by design?
  
   Thanks,
   Andrew
  
   ___
   Users mailing list
   Users@ovirt.org
   http://lists.ovirt.org/mailman/listinfo/users
  
 
  Hi Andrew,
  it looks like an issue with the time stamp.
  Which time stamp do you have? How relevant is it?
 
 
 timestamps seem to be outdated by a lot, interesting error in the broker.log
 
 Thread-24::INFO::2014-02-03
 22:33:14,801::engine_health::90::engine_health.CpuLoadNoEngine::(action) VM
 not running on this host, status down
 Thread-22::INFO::2014-02-03
 22:33:14,834::mem_free::53::mem_free.MemFree::(action) memFree: 27382
 Thread-23::ERROR::2014-02-03
 22:33:14,922::cpu_load_no_engine::156::cpu_load_no_engine.EngineHealth::(update_stat_file)
 Failed to getVmStats: 'pid'
 Thread-23::INFO::2014-02-03
 22:33:14,923::cpu_load_no_engine::121::cpu_load_no_engine.EngineHealth::(calculate_load)
 System load total=0.0124, engine=0., non-engine=0.0124
 
 I'm assuming that update_stat_file is the metadata file the vm-status is
 getting pulled from?
 

Yep.
Can you please verify the time your host actually has?
i.e., we have a known issue with time, since we assume all
hosts are in sync. So if one of your hosts has a time-sync
issue, that can explain the problem you see.
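
For example, compare the clocks directly on each host, and check NTP if it is set up (illustrative commands):

# date +%s
# ntpq -p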
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Hosted Engine always reports unknown stale-data

2014-02-03 Thread Andrew Lau
On Mon, Feb 3, 2014 at 10:40 PM, Doron Fediuck dfedi...@redhat.com wrote:



 - Original Message -
  From: Andrew Lau and...@andrewklau.com
  To: Doron Fediuck dfedi...@redhat.com
  Cc: users users@ovirt.org, Jiri Moskovcak jmosk...@redhat.com,
 Greg Padgett gpadg...@redhat.com
  Sent: Monday, February 3, 2014 1:35:01 PM
  Subject: Re: [Users] Hosted Engine always reports unknown stale-data
 
  On Mon, Feb 3, 2014 at 9:53 PM, Doron Fediuck dfedi...@redhat.com
 wrote:
 
  
  
   - Original Message -
From: Andrew Lau and...@andrewklau.com
To: users users@ovirt.org
Sent: Monday, February 3, 2014 12:32:45 PM
Subject: [Users] Hosted Engine always reports unknown stale-data
   
Hi,
   
I was wondering if anyone has this same notice when they run:
hosted-engine --vm-status
   
The engine status will always be unknown stale-data even when
 the VM
   is
powered on and the engine is online. engine-health will actually
 report
   the
correct status.
   
eg.
   
--== Host 1 status ==--
   
Status up-to-date : False
Hostname : 172.16.0.11
Host ID : 1
Engine status : unknown stale-data
   
Is it some sort of blocked port causing this or is this by design?
   
Thanks,
Andrew
   
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
   
  
   Hi Andrew,
   it looks like an issue with the time stamp.
   Which time stamp do you have? How relevant is it?
  
 
  timestamps seem to be outdated by a lot, interesting error in the
 broker.log
 
  Thread-24::INFO::2014-02-03
  22:33:14,801::engine_health::90::engine_health.CpuLoadNoEngine::(action)
 VM
  not running on this host, status down
  Thread-22::INFO::2014-02-03
  22:33:14,834::mem_free::53::mem_free.MemFree::(action) memFree: 27382
  Thread-23::ERROR::2014-02-03
 
 22:33:14,922::cpu_load_no_engine::156::cpu_load_no_engine.EngineHealth::(update_stat_file)
  Failed to getVmStats: 'pid'
  Thread-23::INFO::2014-02-03
 
 22:33:14,923::cpu_load_no_engine::121::cpu_load_no_engine.EngineHealth::(calculate_load)
  System load total=0.0124, engine=0., non-engine=0.0124
 
  I'm assuming that update_stat_file is the metadata file the vm-status is
  getting pulled from?
 

 Yep.
 Can you please verify the time your host actually has?
 ie- we have a known issue with time, since we assume all
 hosts are in sync. So if one of your hosts has a time sync
 issue, this can explain the problem you see.


--== Host 1 status ==--

Status up-to-date  : False
Hostname   : 172.16.0.11
Host ID: 1
Engine status  : unknown stale-data
Score  : 0
Local maintenance  : False
Host timestamp : 1391417611

--== Host 2 status ==--

Status up-to-date  : False
Hostname   : 172.16.0.12
Host ID: 2
Engine status  : unknown stale-data
Score  : 0
Local maintenance  : False
Host timestamp : 1391417171


[root@hv01 ~]# date +%s
1391427754
[root@hv02 ~]# date +%s
1391427755

(The two hosts agree to within a second, so the host clocks are in sync;
the metadata timestamps above lag the current time by roughly 10,000
seconds.)
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Error message constantly being reported

2014-02-03 Thread Itamar Heim

On 02/03/2014 07:52 AM, Sahina Bose wrote:


On 02/03/2014 12:06 PM, Itamar Heim wrote:

On 02/03/2014 07:35 AM, Sahina Bose wrote:


On 02/03/2014 05:02 AM, Itamar Heim wrote:

On 02/02/2014 08:01 PM, Jon Archer wrote:

Hi All,

Constantly seeing this message in the logs:
vdsm vds ERROR vdsm exception occured#012Traceback (most recent call
last):#012  File /usr/share/vdsm/BindingXMLRPC.py, line 952, in
wrapper#012res = f(*args, **kwargs)#012  File
/usr/share/vdsm/gluster/api.py, line 54, in wrapper#012 rv =
func(*args, **kwargs)#012  File /usr/share/vdsm/gluster/api.py, line
306, in tasksList#012status =
self.svdsmProxy.glusterTasksList(taskIds)#012  File
/usr/share/vdsm/supervdsm.py, line 50, in __call__#012 return
callMethod()#012  File /usr/share/vdsm/supervdsm.py, line 48, in
lambda#012**kwargs)#012  File string, line 2, in
glusterTasksList#012  File
/usr/lib64/python2.6/multiprocessing/managers.py, line 740, in
_callmethod#012raise convert_to_error(kind,
result)#012GlusterCmdExecFailedException: Command execution
failed#012error: tasks is not a valid status option#012Usage: volume
status [all | VOLNAME [nfs|shd|BRICK]]
[detail|clients|mem|inode|fd|callpool]#012return code: 1


looks like an option which isn't recognised by the gluster volume
status command.

Any ideas how to resolve? It's not causing any problems, but I would
like to stop it.

Cheers

Jon
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


sahina - iirc, there is a patch removing that noise?


Yes, there was a patch removing this for clusters below the 3.4
compatibility version.

For 3.4 gluster clusters, we need a version of gluster (3.5 or later) to
support the gluster async task feature. This version has support for
'gluster volume status tasks'.
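
With a new enough gluster the query vdsm issues is accepted, e.g.:

# gluster volume status all tasks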




was this backported to stable 3.3?


Unfortunately, no - missed this.

Have submitted a patch now - http://gerrit.ovirt.org/23982




ok, as 3.3.3 is going out hopefully now, this will be in 3.3.4.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] [Spice-devel] Full-size display Windows vs Fedora 20 guests

2014-02-03 Thread Bob Doolittle
Thanks! Do you happen to know the bug ID(s)?

-Bob
On Feb 3, 2014 3:13 AM, Christophe Fergeau cferg...@redhat.com wrote:

 Hey,

 On Sat, Feb 01, 2014 at 04:08:02PM +0100, Itamar Heim wrote:
  On 01/31/2014 09:16 PM, Bob Doolittle wrote:
  On 01/31/2014 03:06 PM, Bob Doolittle wrote:
  Hi,
  
  When I select View/Full Screen on a VM running a Windows guest, the
  display resolution automatically adjusts to fit the new canvas.
  
  However, when I do this on a VM running Fedora 20, it doesn't. Nor do
  I know how to query the new canvas size so that I can issue a manual
  xrandr command to fit it (without doing an ssh into the system and
  somehow finagling xwininfo to give me the size).
  
  In the guest I am running spice-vdagent-0.15.0-1, and restarting it
  has no effect. Shouldn't it be the one responsible for display
  optimization?

 Some needed qxl/kms patches are missing in the Fedora 20 kernels; using one
 of the vanilla kernels from

 https://fedoraproject.org/wiki/Kernel_Vanilla_Repositories#Linux_vanilla_kernels_for_Fedora
 helped in my testing (I picked the latest kernel-vanilla-mainline).

 Christophe

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] changing hostname in ovirt

2014-02-03 Thread Sven Kieske
Hi,

just to complete this thread.

I changed the hostname by hacking the database, which was successful.

The complete procedure can be found in this BZ:

https://bugzilla.redhat.com/show_bug.cgi?id=1060215

What I don't know / didn't research is whether the certificates
really need to be redeployed and/or recreated.

So far the host runs fine with the old certificates and the new
address in ovirt.

On 27.01.2014 14:07, Alon Bar-Lev wrote:
 
 
 - Original Message -
 From: Sven Kieske s.kie...@mittwald.de
 To: d...@redhat.com
 Cc: Alon Bar-Lev alo...@redhat.com, Users@ovirt.org List 
 Users@ovirt.org
 Sent: Monday, January 27, 2014 2:52:03 PM
 Subject: Re: [Users] changing hostname in ovirt

 well, that's not what I want, because I'm talking about a local
 storage DC. I just want to change the host's address which oVirt
 uses to connect to the host.

 This isn't possible without changing the certificates (re-deploying)?
 
 You can... it's just a lot of places.
 
 Generate an empty-subject certificate request using the
 /etc/pki/vdsm/keys/vdsmkey.pem key:
 
 # openssl req -new -key /etc/pki/vdsm/keys/vdsmkey.pem -subj /
 
 Use the engine utility /usr/share/ovirt-engine/bin/pki-enroll-request.sh to 
 enroll a new certificate with the new host name:
 
 # cat > /etc/pki/ovirt-engine/requests/xxx.req
 paste request
 # /usr/share/ovirt-engine/bin/pki-enroll-request.sh --name=xxx 
 --subject=/CN=/O=/C=xxx
 # cat /etc/pki/ovirt-engine/certs/xxx.cer
 
 Copy the certificate into:
 /etc/pki/vdsm/certs/vdsmcert.pem
 /etc/pki/vdsm/libvirt-spice/server-cert.pem
 /etc/pki/libvirt/clientcert.pem
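
 After replacing them, the services that use these certificates most likely need a restart (a sketch; the exact service set depends on the setup):

 # service vdsmd restart
 # service libvirtd restart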
 

 Am 27.01.2014 13:27, schrieb Dafna Ron:
 well, if you can re-deploy the hosts that would change the certificates
 as well (create new ones with the new hostname).


-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

System Administrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Managing Director: Robert Meyer
Tax no.: 331/5721/1033, VAT ID: DE814773217, HRA 6640, AG Bad Oeynhausen
General partner: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] [Spice-devel] Full-size display Windows vs Fedora 20 guests

2014-02-03 Thread Christophe Fergeau
On Mon, Feb 03, 2014 at 06:53:14AM -0500, Bob Doolittle wrote:
 On Feb 3, 2014 3:13 AM, Christophe Fergeau cferg...@redhat.com wrote:
  Some needed qxl/kms patches are missing in the fedora 20 kernels, using one
  of the vanilla kernels from
 
  https://fedoraproject.org/wiki/Kernel_Vanilla_Repositories#Linux_vanilla_kernels_for_Fedora
  helped in my testing (I picked the latest kernel-vanilla-mainline).

 Thanks! Do you happen to know the bug ID(s)?

See https://bugzilla.redhat.com/show_bug.cgi?id=1060327 for one.

Christophe


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] I can't remove VM

2014-02-03 Thread Eduardo Ramos

Hi all!

I'm having trouble removing virtual machines. My environment runs on an 
iSCSI storage domain. When I try to remove one, the SPM logs:


# Start vdsm SPM log #
Thread-6019517::INFO::2014-02-03 
09:58:09,293::logUtils::41::dispatcher::(wrapper) Run and protect: 
deleteImage(sdUUID='c332da29-ba9f-4c94-8fa9-346bb8e04e2a', 
spUUID='9dbc7bb1-c460-4202-8f10-862d2ed3ed9a', 
imgUUID='57ba1906-2035-4503-acbc-5f6f077f75cc', postZero='false', 
force='false')
Thread-6019517::INFO::2014-02-03 
09:58:09,293::blockSD::816::Storage.StorageDomain::(validate) 
sdUUID=c332da29-ba9f-4c94-8fa9-346bb8e04e2a
Thread-6019517::ERROR::2014-02-03 
09:58:10,061::task::833::TaskManager.Task::(_setError) 
Task=`8cbf9978-ed51-488a-af52-a3db030e44ff`::Unexpected error

Traceback (most recent call last):
  File /usr/share/vdsm/storage/task.py, line 840, in _run
return fn(*args, **kargs)
  File /usr/share/vdsm/logUtils.py, line 42, in wrapper
res = f(*args, **kwargs)
  File /usr/share/vdsm/storage/hsm.py, line 1429, in deleteImage
allVols = dom.getAllVolumes()
  File /usr/share/vdsm/storage/blockSD.py, line 972, in getAllVolumes
return getAllVolumes(self.sdUUID)
  File /usr/share/vdsm/storage/blockSD.py, line 172, in getAllVolumes
vImg not in res[vPar]['imgs']):
KeyError: '63650a24-7e83-4c0a-851d-0ce9869a294d'
Thread-6019517::INFO::2014-02-03 
09:58:10,063::task::1134::TaskManager.Task::(prepare) 
Task=`8cbf9978-ed51-488a-af52-a3db030e44ff`::aborting: Task is aborted: 
u'63650a24-7e83-4c0a-851d-0ce9869a294d' - code 100
Thread-6019517::ERROR::2014-02-03 
09:58:10,066::dispatcher::70::Storage.Dispatcher.Protect::(run) 
'63650a24-7e83-4c0a-851d-0ce9869a294d'

Traceback (most recent call last):
  File /usr/share/vdsm/storage/dispatcher.py, line 62, in run
result = ctask.prepare(self.func, *args, **kwargs)
  File /usr/share/vdsm/storage/task.py, line 1142, in prepare
raise self.error
KeyError: '63650a24-7e83-4c0a-851d-0ce9869a294d'
Thread-6019518::INFO::2014-02-03 
09:58:10,087::logUtils::41::dispatcher::(wrapper) Run and protect: 
getSpmStatus(spUUID='9dbc7bb1-c460-4202-8f10-862d2ed3ed9a', options=None)
Thread-6019518::INFO::2014-02-03 
09:58:10,088::logUtils::44::dispatcher::(wrapper) Run and protect: 
getSpmStatus, Return response: {'spm_st': {'spmId': 14, 'spmStatus': 
'SPM', 'spmLver': 64}}
Thread-6019519::INFO::2014-02-03 
09:58:10,100::logUtils::41::dispatcher::(wrapper) Run and protect: 
getAllTasksStatuses(spUUID=None, options=None)
Thread-6019519::INFO::2014-02-03 
09:58:10,101::logUtils::44::dispatcher::(wrapper) Run and protect: 
getAllTasksStatuses, Return response: {'allTasksStatus': {}}
Thread-6019520::INFO::2014-02-03 
09:58:10,109::logUtils::41::dispatcher::(wrapper) Run and protect: 
spmStop(spUUID='9dbc7bb1-c460-4202-8f10-862d2ed3ed9a', options=None)
Thread-6019520::INFO::2014-02-03 
09:58:10,681::clusterlock::121::SafeLease::(release) Releasing cluster 
lock for domain c332da29-ba9f-4c94-8fa9-346bb8e04e2a
Thread-6019521::INFO::2014-02-03 
09:58:11,054::logUtils::41::dispatcher::(wrapper) Run and protect: 
repoStats(options=None)
Thread-6019521::INFO::2014-02-03 
09:58:11,054::logUtils::44::dispatcher::(wrapper) Run and protect: 
repoStats, Return response: {u'51eb6183-157d-4015-ae0f-1c7ffb1731c0': 
{'delay': '0.00799298286438', 'lastCheck': '5.3', 'code': 0, 'valid': 
True}, u'c332da29-ba9f-4c94-8fa9-346bb8e04e2a': {'delay': 
'0.0197920799255', 'lastCheck': '4.9', 'code': 0, 'valid': True}, 
u'0e0be898-6e04-4469-bb32-91f3cf8146d1': {'delay': '0.00803208351135', 
'lastCheck': '5.3', 'code': 0, 'valid': True}}
Thread-6019520::INFO::2014-02-03 
09:58:11,732::logUtils::44::dispatcher::(wrapper) Run and protect: 
spmStop, Return response: None
Thread-6019523::INFO::2014-02-03 
09:58:11,835::logUtils::41::dispatcher::(wrapper) Run and protect: 
getAllTasksStatuses(spUUID=None, options=None)
Thread-6019523::INFO::2014-02-03 
09:58:11,835::logUtils::44::dispatcher::(wrapper) Run and protect: 
getAllTasksStatuses, Return response: {'allTasksStatus': {}}
Thread-6019524::INFO::2014-02-03 
09:58:11,844::logUtils::41::dispatcher::(wrapper) Run and protect: 
spmStop(spUUID='9dbc7bb1-c460-4202-8f10-862d2ed3ed9a', options=None)
Thread-6019524::ERROR::2014-02-03 
09:58:11,846::task::833::TaskManager.Task::(_setError) 
Task=`00df5ff7-bbf4-4a0e-b60b-1b06dcaa7683`::Unexpected error

Traceback (most recent call last):
  File /usr/share/vdsm/storage/task.py, line 840, in _run
return fn(*args, **kargs)
  File /usr/share/vdsm/logUtils.py, line 42, in wrapper
res = f(*args, **kwargs)
  File /usr/share/vdsm/storage/hsm.py, line 601, in spmStop
pool.stopSpm()
  File /usr/share/vdsm/storage/securable.py, line 66, in wrapper
raise SecureError()
SecureError
Thread-6019524::INFO::2014-02-03 
09:58:11,855::task::1134::TaskManager.Task::(prepare) 
Task=`00df5ff7-bbf4-4a0e-b60b-1b06dcaa7683`::aborting: Task is aborted: 
u'' - code 100
Thread-6019524::ERROR::2014-02-03 

Re: [Users] Hosted Engine always reports unknown stale-data

2014-02-03 Thread Andrew Lau
The issue was split-brain on the dom_md/ids file, causing an
input/output error. Thanks!

On Mon, Feb 3, 2014 at 10:43 PM, Andrew Lau and...@andrewklau.com wrote:

 On Mon, Feb 3, 2014 at 10:40 PM, Doron Fediuck dfedi...@redhat.comwrote:



 - Original Message -
  From: Andrew Lau and...@andrewklau.com
  To: Doron Fediuck dfedi...@redhat.com
  Cc: users users@ovirt.org, Jiri Moskovcak jmosk...@redhat.com,
 Greg Padgett gpadg...@redhat.com
  Sent: Monday, February 3, 2014 1:35:01 PM
  Subject: Re: [Users] Hosted Engine always reports unknown stale-data
 
  On Mon, Feb 3, 2014 at 9:53 PM, Doron Fediuck dfedi...@redhat.com
 wrote:
 
  
  
   - Original Message -
From: Andrew Lau and...@andrewklau.com
To: users users@ovirt.org
Sent: Monday, February 3, 2014 12:32:45 PM
Subject: [Users] Hosted Engine always reports unknown stale-data
   
Hi,
   
I was wondering if anyone has this same notice when they run:
hosted-engine --vm-status
   
The engine status will always be unknown stale-data even when
 the VM
   is
powered on and the engine is online. engine-health will actually
 report
   the
correct status.
   
eg.
   
--== Host 1 status ==--
   
Status up-to-date : False
Hostname : 172.16.0.11
Host ID : 1
Engine status : unknown stale-data
   
Is it some sort of blocked port causing this or is this by design?
   
Thanks,
Andrew
   
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
   
  
   Hi Andrew,
   it looks like an issue with the time stamp.
   Which time stamp do you have? How relevant is it?
  
 
  timestamps seem to be outdated by a lot, interesting error in the
 broker.log
 
  Thread-24::INFO::2014-02-03
 
 22:33:14,801::engine_health::90::engine_health.CpuLoadNoEngine::(action) VM
  not running on this host, status down
  Thread-22::INFO::2014-02-03
  22:33:14,834::mem_free::53::mem_free.MemFree::(action) memFree: 27382
  Thread-23::ERROR::2014-02-03
 
 22:33:14,922::cpu_load_no_engine::156::cpu_load_no_engine.EngineHealth::(update_stat_file)
  Failed to getVmStats: 'pid'
  Thread-23::INFO::2014-02-03
 
 22:33:14,923::cpu_load_no_engine::121::cpu_load_no_engine.EngineHealth::(calculate_load)
  System load total=0.0124, engine=0., non-engine=0.0124
 
  I'm assuming that update_stat_file is the metadata file the vm-status is
  getting pulled from?
 

 Yep.
 Can you please verify the time your host actually has?
 ie- we have a known issue with time, since we assume all
 hosts are in sync. So if one of your hosts has a time sync
 issue, this can explain the problem you see.


 --== Host 1 status ==--

 Status up-to-date  : False
 Hostname   : 172.16.0.11
 Host ID: 1
 Engine status  : unknown stale-data
 Score  : 0
 Local maintenance  : False
 Host timestamp : 1391417611

 --== Host 2 status ==--

 Status up-to-date  : False
 Hostname   : 172.16.0.12
 Host ID: 2
 Engine status  : unknown stale-data
 Score  : 0
 Local maintenance  : False
 Host timestamp : 1391417171


 [root@hv01 ~]# date +%s
 1391427754
 [root@hv02 ~]# date +%s
 1391427755


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Hosted Engine always reports unknown stale-data

2014-02-03 Thread Itamar Heim

On 02/03/2014 01:19 PM, Andrew Lau wrote:

The issue was a split-brain issue on the dom_md/ids file causing an
input/output error, thanks!


is this with gluster?



On Mon, Feb 3, 2014 at 10:43 PM, Andrew Lau and...@andrewklau.com wrote:

On Mon, Feb 3, 2014 at 10:40 PM, Doron Fediuck dfedi...@redhat.com wrote:



- Original Message -
  From: Andrew Lau and...@andrewklau.com
  To: Doron Fediuck dfedi...@redhat.com
  Cc: users users@ovirt.org, Jiri Moskovcak jmosk...@redhat.com,
  Greg Padgett gpadg...@redhat.com
  Sent: Monday, February 3, 2014 1:35:01 PM
  Subject: Re: [Users] Hosted Engine always reports unknown
stale-data
 
  On Mon, Feb 3, 2014 at 9:53 PM, Doron Fediuck dfedi...@redhat.com wrote:
 
  
  
   - Original Message -
 From: Andrew Lau and...@andrewklau.com
 To: users users@ovirt.org
Sent: Monday, February 3, 2014 12:32:45 PM
Subject: [Users] Hosted Engine always reports unknown
stale-data
   
Hi,
   
I was wondering if anyone has this same notice when they run:
hosted-engine --vm-status
   
The engine status will always be unknown stale-data
even when the VM
   is
powered on and the engine is online. engine-health will
actually report
   the
correct status.
   
eg.
   
--== Host 1 status ==--
   
Status up-to-date : False
Hostname : 172.16.0.11
Host ID : 1
Engine status : unknown stale-data
   
Is it some sort of blocked port causing this or is this
by design?
   
Thanks,
Andrew
   
___
Users mailing list
Users@ovirt.org mailto:Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
   
  
   Hi Andrew,
   it looks like an issue with the time stamp.
   Which time stamp do you have? How relevant is it?
  
 
  timestamps seem to be outdated by a lot, interesting error in
the broker.log
 
  Thread-24::INFO::2014-02-03
 
22:33:14,801::engine_health::90::engine_health.CpuLoadNoEngine::(action)
VM
  not running on this host, status down
  Thread-22::INFO::2014-02-03
  22:33:14,834::mem_free::53::mem_free.MemFree::(action)
memFree: 27382
  Thread-23::ERROR::2014-02-03
 

22:33:14,922::cpu_load_no_engine::156::cpu_load_no_engine.EngineHealth::(update_stat_file)
  Failed to getVmStats: 'pid'
  Thread-23::INFO::2014-02-03
 

22:33:14,923::cpu_load_no_engine::121::cpu_load_no_engine.EngineHealth::(calculate_load)
  System load total=0.0124, engine=0., non-engine=0.0124
 
  I'm assuming that update_stat_file is the metadata file the
vm-status is
  getting pulled from?
 

Yep.
Can you please verify the time your host actually has?
ie- we have a known issue with time, since we assume all
hosts are in sync. So if one of your hosts has a time sync
issue, this can explain the problem you see.


--== Host 1 status ==--

Status up-to-date  : False
Hostname   : 172.16.0.11
Host ID: 1
Engine status  : unknown stale-data
Score  : 0
Local maintenance  : False
Host timestamp : 1391417611

--== Host 2 status ==--

Status up-to-date  : False
Hostname   : 172.16.0.12
Host ID: 2
Engine status  : unknown stale-data
Score  : 0
Local maintenance  : False
Host timestamp : 1391417171


[root@hv01 ~]# date +%s
1391427754
[root@hv02 ~]# date +%s
1391427755




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org

Re: [Users] Hosted Engine always reports unknown stale-data

2014-02-03 Thread Itamar Heim

On 02/03/2014 01:25 PM, Andrew Lau wrote:

On Mon, Feb 3, 2014 at 11:23 PM, Itamar Heim ih...@redhat.com wrote:

On 02/03/2014 01:19 PM, Andrew Lau wrote:

The issue was a split-brain issue on the dom_md/ids file causing an
input/output error, thanks!


is this with gluster?


Yup, a 2-brick replicated gluster instance serving the NFS server; sorry,
I meant to say I resolved it too.


you have to use a gluster with quorum, or this will happen often





On Mon, Feb 3, 2014 at 10:43 PM, Andrew Lau
and...@andrewklau.com mailto:and...@andrewklau.com
mailto:and...@andrewklau.com mailto:and...@andrewklau.com__
wrote:

 On Mon, Feb 3, 2014 at 10:40 PM, Doron Fediuck
dfedi...@redhat.com mailto:dfedi...@redhat.com
 mailto:dfedi...@redhat.com mailto:dfedi...@redhat.com
wrote:



 - Original Message -
   From: Andrew Lau and...@andrewklau.com
mailto:and...@andrewklau.com
 mailto:and...@andrewklau.com
mailto:and...@andrewklau.com__
   To: Doron Fediuck dfedi...@redhat.com
mailto:dfedi...@redhat.com
 mailto:dfedi...@redhat.com mailto:dfedi...@redhat.com
   Cc: users users@ovirt.org
mailto:users@ovirt.org mailto:users@ovirt.org
mailto:users@ovirt.org, Jiri
 Moskovcak jmosk...@redhat.com
mailto:jmosk...@redhat.com mailto:jmosk...@redhat.com
mailto:jmosk...@redhat.com,
 Greg Padgett gpadg...@redhat.com
mailto:gpadg...@redhat.com mailto:gpadg...@redhat.com
mailto:gpadg...@redhat.com
   Sent: Monday, February 3, 2014 1:35:01 PM
   Subject: Re: [Users] Hosted Engine always reports
unknown
 stale-data
  
   On Mon, Feb 3, 2014 at 9:53 PM, Doron Fediuck
 dfedi...@redhat.com mailto:dfedi...@redhat.com
mailto:dfedi...@redhat.com mailto:dfedi...@redhat.com wrote:
  
   
   
- Original Message -
 From: Andrew Lau and...@andrewklau.com
mailto:and...@andrewklau.com
 mailto:and...@andrewklau.com
mailto:and...@andrewklau.com__
 To: users users@ovirt.org
mailto:users@ovirt.org mailto:users@ovirt.org
mailto:users@ovirt.org
 Sent: Monday, February 3, 2014 12:32:45 PM
 Subject: [Users] Hosted Engine always reports
unknown
 stale-data

 Hi,

 I was wondering if anyone has this same notice
when they run:
 hosted-engine --vm-status

 The engine status will always be unknown
stale-data
 even when the VM
is
 powered on and the engine is online.
engine-health will
 actually report
the
 correct status.

 eg.

 --== Host 1 status ==--

 Status up-to-date : False
 Hostname : 172.16.0.11
 Host ID : 1
 Engine status : unknown stale-data

 Is it some sort of blocked port causing this or
is this
 by design?

 Thanks,
 Andrew

 _
 Users mailing list
 Users@ovirt.org mailto:Users@ovirt.org
mailto:Users@ovirt.org mailto:Users@ovirt.org

 http://lists.ovirt.org/__mailman/listinfo/users
http://lists.ovirt.org/mailman/listinfo/users

   
Hi Andrew,
it looks like an issue with the time stamp.
Which time stamp do you have? How relevant is it?
   
  
   timestamps seem to be outdated by a lot, interesting
error in
 the broker.log
  
   Thread-24::INFO::2014-02-03
  


22:33:14,801::engine_health::__90::engine_health.__CpuLoadNoEngine::(action)
 VM
   not running on this host, status down
   Thread-22::INFO::2014-02-03
   22:33:14,834::mem_free::53::__mem_free.MemFree::(action)
 memFree: 27382
   Thread-23::ERROR::2014-02-03
  



Re: [Users] Hosted Engine always reports unknown stale-data

2014-02-03 Thread Andrew Lau
On Mon, Feb 3, 2014 at 11:27 PM, Itamar Heim ih...@redhat.com wrote:

 On 02/03/2014 01:25 PM, Andrew Lau wrote:

 On Mon, Feb 3, 2014 at 11:23 PM, Itamar Heim ih...@redhat.com wrote:


 On 02/03/2014 01:19 PM, Andrew Lau wrote:

 The issue was a split-brain issue on the dom_md/ids file causing
 an
 input/output error, thanks!


 is this with gluster?


 Yup a 2 brick gluster replicated instance serving the NFS server, sorry
 was meant to say I resolved it too.


 you have to use a gluster with quorum, or this will happen often


Yeah, I disabled quorum temporarily because I'm only using a two-host
scenario, and I need to handle the case where one host is shut down
without the VMs ending up in a paused state.







 On Mon, Feb 3, 2014 at 10:43 PM, Andrew Lau
 and...@andrewklau.com mailto:and...@andrewklau.com
 mailto:and...@andrewklau.com mailto:and...@andrewklau.com__
 wrote:

  On Mon, Feb 3, 2014 at 10:40 PM, Doron Fediuck
 dfedi...@redhat.com mailto:dfedi...@redhat.com
  mailto:dfedi...@redhat.com mailto:dfedi...@redhat.com
 wrote:



  - Original Message -
From: Andrew Lau and...@andrewklau.com
 mailto:and...@andrewklau.com
  mailto:and...@andrewklau.com
 mailto:and...@andrewklau.com__
To: Doron Fediuck dfedi...@redhat.com
 mailto:dfedi...@redhat.com
  mailto:dfedi...@redhat.com mailto:dfedi...@redhat.com
 
Cc: users users@ovirt.org
 mailto:users@ovirt.org mailto:users@ovirt.org
 mailto:users@ovirt.org, Jiri
  Moskovcak jmosk...@redhat.com
 mailto:jmosk...@redhat.com mailto:jmosk...@redhat.com
 mailto:jmosk...@redhat.com,
  Greg Padgett gpadg...@redhat.com
 mailto:gpadg...@redhat.com mailto:gpadg...@redhat.com
 mailto:gpadg...@redhat.com
Sent: Monday, February 3, 2014 1:35:01 PM
Subject: Re: [Users] Hosted Engine always reports
 unknown
  stale-data
   
On Mon, Feb 3, 2014 at 9:53 PM, Doron Fediuck
  dfedi...@redhat.com mailto:dfedi...@redhat.com
 mailto:dfedi...@redhat.com mailto:dfedi...@redhat.com wrote:
   


 - Original Message -
  From: Andrew Lau and...@andrewklau.com
 mailto:and...@andrewklau.com
  mailto:and...@andrewklau.com
 mailto:and...@andrewklau.com__
  To: users users@ovirt.org
 mailto:users@ovirt.org mailto:users@ovirt.org
 mailto:users@ovirt.org
  Sent: Monday, February 3, 2014 12:32:45 PM
  Subject: [Users] Hosted Engine always reports
 unknown
  stale-data
 
  Hi,
 
  I was wondering if anyone has this same notice
 when they run:
  hosted-engine --vm-status
 
  The engine status will always be unknown
 stale-data
  even when the VM
 is
  powered on and the engine is online.
 engine-health will
  actually report
 the
  correct status.
 
  eg.
 
  --== Host 1 status ==--
 
  Status up-to-date : False
  Hostname : 172.16.0.11
  Host ID : 1
  Engine status : unknown stale-data
 
  Is it some sort of blocked port causing this or
 is this
  by design?
 
  Thanks,
  Andrew
 
  _
  Users mailing list
  Users@ovirt.org mailto:Users@ovirt.org
 mailto:Users@ovirt.org mailto:Users@ovirt.org

  http://lists.ovirt.org/__mailman/listinfo/users
 http://lists.ovirt.org/mailman/listinfo/users
 

 Hi Andrew,
 it looks like an issue with the time stamp.
 Which time stamp do you have? How relevant is it?

   
timestamps seem to be outdated by a lot, interesting
 error in
  the broker.log
   
Thread-24::INFO::2014-02-03
   

 

Re: [Users] Hosted Engine always reports unknown stale-data

2014-02-03 Thread Itamar Heim

On 02/03/2014 01:29 PM, Andrew Lau wrote:

On Mon, Feb 3, 2014 at 11:27 PM, Itamar Heim ih...@redhat.com wrote:

On 02/03/2014 01:25 PM, Andrew Lau wrote:

On Mon, Feb 3, 2014 at 11:23 PM, Itamar Heim ih...@redhat.com wrote:


 On 02/03/2014 01:19 PM, Andrew Lau wrote:

 The issue was a split-brain issue on the dom_md/ids
file causing an
 input/output error, thanks!


 is this with gluster?


Yup a 2 brick gluster replicated instance serving the NFS server, sorry,
was meant to say I resolved it too.


you have to use a gluster with quorum, or this will happen often


Yeah, I disabled quorum temporarily because I'm only using a two-host
scenario, and I need to handle the case where one host is shut down
without the VMs ending up in a paused state.


IIUC, without quorum you'll get both the hosted engine and the SPM into 
split-brain.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Hosted Engine always reports unknown stale-data

2014-02-03 Thread Andrew Lau
On Mon, Feb 3, 2014 at 11:33 PM, Itamar Heim ih...@redhat.com wrote:

 On 02/03/2014 01:29 PM, Andrew Lau wrote:

 On Mon, Feb 3, 2014 at 11:27 PM, Itamar Heim ih...@redhat.com wrote:

 On 02/03/2014 01:25 PM, Andrew Lau wrote:

 On Mon, Feb 3, 2014 at 11:23 PM, Itamar Heim ih...@redhat.com wrote:



  On 02/03/2014 01:19 PM, Andrew Lau wrote:

  The issue was a split-brain issue on the dom_md/ids
 file causing an
  input/output error, thanks!


  is this with gluster?


 Yup a 2 brick gluster replicated instance serving the NFS
 server, sorry
 was meant to say I resolved it too.


 you have to use a gluster with quorum, or this will happen often



 Yeah, I disabled quorum temporarily because I'm only using a two host
 scenario and I need to have the case scenario where one is to be
 shutdown the VMs won't end up in a paused state.


 iiuc, without quorum you'll get both hosted engine and the SPM into split
 brains


I've been using the non-quorum method for a while though and it has seemed
to work all right; the only case where I had split-brain was because I was
actually messing with gluster, deleting full brick contents to test
cgroups.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] [vdsm] Migration failed (previous migrations succeded)

2014-02-03 Thread Michal Skrivanek

On Feb 1, 2014, at 18:25 , Itamar Heim ih...@redhat.com wrote:

 On 01/31/2014 09:30 AM, Sven Kieske wrote:
 Hi,
 
 is there any documentation regarding all
 allowed settings in the vdsm.conf?
 
 I didn't find anything related in the rhev docs
 
 that's a question for vdsm mailing list - cc-ing…

vdsm.conf has a description for each parameter… just search for everything 
containing 'migration' :)
We do want to expose some/most/all of them in the UI/REST eventually; the 
migration downtime setting is there now in 3.4, but others are missing.
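
For example, the migration-related knobs live in the [vars] section of /etc/vdsm/vdsm.conf; the values below are illustrative, and the names and defaults should be checked against your local copy:

[vars]
migration_max_bandwidth = 300
max_outgoing_migrations = 3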

Thanks,
michal
 
 
 Am 30.01.2014 21:43, schrieb Itamar Heim:
 On 01/30/2014 10:37 PM, Markus Stockhausen wrote:
 Von: Itamar Heim [ih...@redhat.com]
 Gesendet: Donnerstag, 30. Januar 2014 21:25
 An: Markus Stockhausen; ovirt-users
 Betreff: Re: [Users] Migration failed (previous migrations succeded)
 
 
 Now I'm getting serious problems. During the migration the VM was
 doing a rather slow download at 1.5 MB/s, so the memory changed
 by 15 MB per 10 seconds. No wonder a check every 10 seconds
 was not able to see any progress. I'm scared of what will happen if I
 want to migrate a moderately loaded system running a database.
 
 Any tip for parametrization?
 
 Markus
 
 
 what's the bandwidth? The default is up to 30 MB/s, to allow up to 3 VMs to
 migrate on 1 GbE without congesting it.
 You could raise that if you have 10 GbE, or raise the bandwidth cap and
 reduce the max number of concurrent VMs, etc.
 
 My migration network is IPoIB, 10 Gbit. During our tests only one VM
 was migrated. Neither the bandwidth cap nor the number of concurrent VMs
 was changed after the default install.
 
 Is migration_max_bandwidth in /etc/vdsm/vdsm.conf still the right place?
 
 probably
 
 And what settings do you suggest?
 
 well, to begin with, 300MB/sec on 10GE (still allowing concurrent
 migrations)
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
 
 
 
 
 
 ___
 vdsm-devel mailing list
 vdsm-de...@lists.fedorahosted.org
 https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ovirt-3.3.3 release postponed due to blockers

2014-02-03 Thread Douglas Schilling Landgraf

On 02/01/2014 06:05 PM, Dan Kenigsberg wrote:

On Fri, Jan 31, 2014 at 04:16:54PM -0500, Douglas Schilling Landgraf wrote:

On 01/31/2014 05:17 AM, Dan Kenigsberg wrote:

On Fri, Jan 31, 2014 at 09:36:48AM +0100, Sandro Bonazzola wrote:

On 30/01/2014 22:38, Robert Story wrote:

Can we revert these packages to previous versions in the 3.3.2 stable repo
so those of us who want/need to install new hosts in our clusters aren't
dead in the water waiting for 3.3.3?


Hi Robert, I think you can still install 3.3.2 on your clusters by manually 
installing python-cpopen before trying to install vdsm.
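
For example (a sketch, assuming python-cpopen is available in the enabled
repos):

  # yum install python-cpopen
  # yum install vdsm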

About 3.3.3, I think vdsm should really drop the dependency on vdsm-python-cpopen:
it's a package obsoleted by python-cpopen, so there's no point in still 
requiring it, especially if keeping that requirement still breaks dependency
resolution.


I really wanted to avoid eliminating a subpackage during a micro
release. That's impolite and surprising.
But given the awkward yum bug, persistent dependency problems, and
the current release delay, I give up.

Let's eliminate vdsm-python-cpopen from ovirt-3.3 branch, and require
python-cpopen. Yaniv, Douglas: could you handle it?



Sure. Done: http://gerrit.ovirt.org/#/c/23942/


Acked. Could you cherry-pick it into dist-git and rebuild the
ovirt-3.3.3 candidate (without other changes - those can wait for
ovirt-3.3.4)?


vdsm-4.13.3-3 available at:

F19: http://koji.fedoraproject.org/koji/taskinfo?taskID=6484103
F20: http://koji.fedoraproject.org/koji/taskinfo?taskID=6484142
EL6: http://koji.fedoraproject.org/koji/taskinfo?taskID=6484377

Patch included in this version:
spec: replace requires vdsm-python-cpopen
http://gerrit.ovirt.org/23942

--
Cheers
Douglas
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] I can't remove VM

2014-02-03 Thread Dafna Ron

please attach full vdsm and engine logs.
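
(For what it's worth, the usual locations are /var/log/ovirt-engine/engine.log
on the engine machine and /var/log/vdsm/vdsm.log* on each host - a sketch,
paths may vary by version:

  tar czf engine-logs.tar.gz /var/log/ovirt-engine/engine.log   # on the engine
  tar czf vdsm-logs.tar.gz /var/log/vdsm/vdsm.log*              # on the SPM host
)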

Thanks,

Dafna


On 02/03/2014 12:11 PM, Eduardo Ramos wrote:

Hi all!

I'm having trouble removing virtual machines. My environment runs on 
an iSCSI storage domain. When I try to remove one, the SPM logs:


# Start vdsm SPM log #
Thread-6019517::INFO::2014-02-03 
09:58:09,293::logUtils::41::dispatcher::(wrapper) Run and protect: 
deleteImage(sdUUID='c332da29-ba9f-4c94-8fa9-346bb8e04e2a', 
spUUID='9dbc7bb1-c460-4202-8f10-862d2ed3ed9a', 
imgUUID='57ba1906-2035-4503-acbc-5f6f077f75cc', postZero='false', 
force='false')
Thread-6019517::INFO::2014-02-03 
09:58:09,293::blockSD::816::Storage.StorageDomain::(validate) 
sdUUID=c332da29-ba9f-4c94-8fa9-346bb8e04e2a
Thread-6019517::ERROR::2014-02-03 
09:58:10,061::task::833::TaskManager.Task::(_setError) 
Task=`8cbf9978-ed51-488a-af52-a3db030e44ff`::Unexpected error

Traceback (most recent call last):
  File /usr/share/vdsm/storage/task.py, line 840, in _run
return fn(*args, **kargs)
  File /usr/share/vdsm/logUtils.py, line 42, in wrapper
res = f(*args, **kwargs)
  File /usr/share/vdsm/storage/hsm.py, line 1429, in deleteImage
allVols = dom.getAllVolumes()
  File /usr/share/vdsm/storage/blockSD.py, line 972, in getAllVolumes
return getAllVolumes(self.sdUUID)
  File /usr/share/vdsm/storage/blockSD.py, line 172, in getAllVolumes
vImg not in res[vPar]['imgs']):
KeyError: '63650a24-7e83-4c0a-851d-0ce9869a294d'
Thread-6019517::INFO::2014-02-03 
09:58:10,063::task::1134::TaskManager.Task::(prepare) 
Task=`8cbf9978-ed51-488a-af52-a3db030e44ff`::aborting: Task is 
aborted: u'63650a24-7e83-4c0a-851d-0ce9869a294d' - code 100
Thread-6019517::ERROR::2014-02-03 
09:58:10,066::dispatcher::70::Storage.Dispatcher.Protect::(run) 
'63650a24-7e83-4c0a-851d-0ce9869a294d'

Traceback (most recent call last):
  File /usr/share/vdsm/storage/dispatcher.py, line 62, in run
result = ctask.prepare(self.func, *args, **kwargs)
  File /usr/share/vdsm/storage/task.py, line 1142, in prepare
raise self.error
KeyError: '63650a24-7e83-4c0a-851d-0ce9869a294d'
Thread-6019518::INFO::2014-02-03 
09:58:10,087::logUtils::41::dispatcher::(wrapper) Run and protect: 
getSpmStatus(spUUID='9dbc7bb1-c460-4202-8f10-862d2ed3ed9a', options=None)
Thread-6019518::INFO::2014-02-03 
09:58:10,088::logUtils::44::dispatcher::(wrapper) Run and protect: 
getSpmStatus, Return response: {'spm_st': {'spmId': 14, 'spmStatus': 
'SPM', 'spmLver': 64}}
Thread-6019519::INFO::2014-02-03 
09:58:10,100::logUtils::41::dispatcher::(wrapper) Run and protect: 
getAllTasksStatuses(spUUID=None, options=None)
Thread-6019519::INFO::2014-02-03 
09:58:10,101::logUtils::44::dispatcher::(wrapper) Run and protect: 
getAllTasksStatuses, Return response: {'allTasksStatus': {}}
Thread-6019520::INFO::2014-02-03 
09:58:10,109::logUtils::41::dispatcher::(wrapper) Run and protect: 
spmStop(spUUID='9dbc7bb1-c460-4202-8f10-862d2ed3ed9a', options=None)
Thread-6019520::INFO::2014-02-03 
09:58:10,681::clusterlock::121::SafeLease::(release) Releasing cluster 
lock for domain c332da29-ba9f-4c94-8fa9-346bb8e04e2a
Thread-6019521::INFO::2014-02-03 
09:58:11,054::logUtils::41::dispatcher::(wrapper) Run and protect: 
repoStats(options=None)
Thread-6019521::INFO::2014-02-03 
09:58:11,054::logUtils::44::dispatcher::(wrapper) Run and protect: 
repoStats, Return response: {u'51eb6183-157d-4015-ae0f-1c7ffb1731c0': 
{'delay': '0.00799298286438', 'lastCheck': '5.3', 'code': 0, 'valid': 
True}, u'c332da29-ba9f-4c94-8fa9-346bb8e04e2a': {'delay': 
'0.0197920799255', 'lastCheck': '4.9', 'code': 0, 'valid': True}, 
u'0e0be898-6e04-4469-bb32-91f3cf8146d1': {'delay': '0.00803208351135', 
'lastCheck': '5.3', 'code': 0, 'valid': True}}
Thread-6019520::INFO::2014-02-03 
09:58:11,732::logUtils::44::dispatcher::(wrapper) Run and protect: 
spmStop, Return response: None
Thread-6019523::INFO::2014-02-03 
09:58:11,835::logUtils::41::dispatcher::(wrapper) Run and protect: 
getAllTasksStatuses(spUUID=None, options=None)
Thread-6019523::INFO::2014-02-03 
09:58:11,835::logUtils::44::dispatcher::(wrapper) Run and protect: 
getAllTasksStatuses, Return response: {'allTasksStatus': {}}
Thread-6019524::INFO::2014-02-03 
09:58:11,844::logUtils::41::dispatcher::(wrapper) Run and protect: 
spmStop(spUUID='9dbc7bb1-c460-4202-8f10-862d2ed3ed9a', options=None)
Thread-6019524::ERROR::2014-02-03 
09:58:11,846::task::833::TaskManager.Task::(_setError) 
Task=`00df5ff7-bbf4-4a0e-b60b-1b06dcaa7683`::Unexpected error

Traceback (most recent call last):
  File /usr/share/vdsm/storage/task.py, line 840, in _run
return fn(*args, **kargs)
  File /usr/share/vdsm/logUtils.py, line 42, in wrapper
res = f(*args, **kwargs)
  File /usr/share/vdsm/storage/hsm.py, line 601, in spmStop
pool.stopSpm()
  File /usr/share/vdsm/storage/securable.py, line 66, in wrapper
raise SecureError()
SecureError
Thread-6019524::INFO::2014-02-03 
09:58:11,855::task::1134::TaskManager.Task::(prepare) 
Task=`00df5ff7-bbf4-4a0e-b60b-1b06dcaa7683`::aborting: Task 

[Users] My first wiki page

2014-02-03 Thread Juan Pablo Lorier
Hi,

I've created my first wiki page and I'd like someone to review it and
tell me if there's something that needs to be changed (besides the fact
that it does not have any style yet).
The URL is
http://www.ovirt.org/oVirt_Wiki:How_to_change_Gluster%27s_network_interface
Regards,




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.3.3 RC EL6 Live Snapshot

2014-02-03 Thread Steve Dainard
[root@ovirt002 ~]# vdsClient -s 0 getStorageDomainInfo
a52938f7-2cf4-4771-acb2-0c78d14999e5
uuid = a52938f7-2cf4-4771-acb2-0c78d14999e5
pool = ['fcb89071-6cdb-4972-94d1-c9324cebf814']
lver = 5
version = 3
role = Master
remotePath = gluster-store-vip:/rep1
spm_id = 2
type = NFS
class = Data
master_ver = 1
name = gluster-store-rep1
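
(For reference, the domain UUID used above can first be discovered by listing
all domains - a sketch, assuming the same vdsClient setup:

  vdsClient -s 0 getStorageDomainsList
)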


*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | *Rethink Traffic*
519-513-2407 ex.250
877-646-8476 (toll-free)

*Blog http://miovision.com/blog  |  **LinkedIn
https://www.linkedin.com/company/miovision-technologies  |  Twitter
https://twitter.com/miovision  |  Facebook
https://www.facebook.com/miovision*
--
 Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, ON,
Canada | N2C 1L3
This e-mail may contain information that is privileged or confidential. If
you are not the intended recipient, please delete the e-mail and any
attachments and notify us immediately.


On Sun, Feb 2, 2014 at 2:55 PM, Dafna Ron d...@redhat.com wrote:

 please run vdsClient -s 0 getStorageDomainInfo a52938f7-2cf4-4771-acb2-0c78d14999e5

 Thanks,

 Dafna



 On 02/02/2014 03:02 PM, Steve Dainard wrote:

 Logs attached with VM running on qemu-kvm-rhev packages installed.



 On Sun, Feb 2, 2014 at 5:05 AM, Dafna Ron d...@redhat.com wrote:

 can you please upload full engine, vdsm, libvirt and vm's qemu logs?


 On 02/02/2014 02:08 AM, Steve Dainard wrote:

 I have two CentOS 6.5 Ovirt hosts (ovirt001, ovirt002)

 I've installed the applicable qemu-kvm-rhev packages from this
 site: http://www.dreyou.org/ovirt/vdsm32/Packages/ on ovirt002.

 On ovirt001 if I take a live snapshot:

 Snapshot 'test qemu-kvm' creation for VM 'snapshot-test' was
 initiated by admin@internal.
 The VM is paused
 Failed to create live snapshot 'test qemu-kvm' for VM
 'snapshot-test'. VM restart is recommended.
 Failed to complete snapshot 'test qemu-kvm' creation for VM
 'snapshot-test'.
 The VM is then started, and the status for the snapshot
 changes to OK.

 On ovirt002 (with the packages from dreyou) I don't get any
 messages about a snapshot failing, but my VM is still paused
 to complete the snapshot. Is there something else other than
 the qemu-kvm-rhev packages that would enable this functionality?

 I've looked for some information on when the packages would be
 built as required in the CentOS repos, but I don't see
 anything definitive.

 http://lists.ovirt.org/pipermail/users/2013-December/019126.html
 Looks like one of the maintainers is waiting for someone to
 tell him what flags need to be set.

 Also, another thread here:
 http://comments.gmane.org/gmane.comp.emulators.ovirt.arch/1618
 same maintainer, mentioning that he hasn't seen anything in
 the bug tracker.

 There is a bug here:
 https://bugzilla.redhat.com/show_bug.cgi?id=1009100 that seems
 to have ended in finding a way for qemu to expose whether it
 supports live snapshots, rather than figuring out how to get
 the CentOS team the info they need to build the packages with
 the proper flags set.

 I have bcc'd both dreyou (packaged the qemu-kvm-rhev packages
 listed above) and Russ (CentOS maintainer mentioned in the
 other threads) if they wish to chime in and perhaps
 collaborate on which flags, if any, should be set for the
 qemu-kvm builds so we can get a CentOS bug report going and
 hammer this out.

 Thanks everyone.

 **crosses fingers and hopes for live snapshots soon**




Re: [Users] My first wiki page

2014-02-03 Thread Matt Warren
First, you have to set...


On 2/3/14, 8:48 AM, Juan Pablo Lorier jplor...@gmail.com wrote:

Hi,

I've created my first wiki page and I'd like someone to review it and
tell me if there's something that needs to be changed (besides the fact
that it does not have any style yet).
The URL is
http://www.ovirt.org/oVirt_Wiki:How_to_change_Gluster%27s_network_interface
Regards,



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] [vdsm] Migration failed (previous migrations succeeded)

2014-02-03 Thread Gianluca Cecchi
On Mon, Feb 3, 2014 at 1:43 PM, Michal Skrivanek
michal.skriva...@redhat.com wrote:

 On Feb 1, 2014, at 18:25 , Itamar Heim ih...@redhat.com wrote:

 On 01/31/2014 09:30 AM, Sven Kieske wrote:
 Hi,

 is there any documentation regarding all
 allowed settings in the vdsm.conf?

 I didn't find anything related in the rhev docs

 that's a question for vdsm mailing list - cc-ing…

 the vdsm.conf has a description for each parameter…just search for all 
 containing migration:)
 we do want to expose some/most/all of them in UI/REST eventually, the 
 migration downtime setting is there now in 3.4, but others are missing

 Thanks,
 michal


 Am 30.01.2014 21:43, schrieb Itamar Heim:
 On 01/30/2014 10:37 PM, Markus Stockhausen wrote:
 From: Itamar Heim [ih...@redhat.com]
 Sent: Thursday, 30 January 2014 21:25
 To: Markus Stockhausen; ovirt-users
 Subject: Re: [Users] Migration failed (previous migrations succeeded)


 Now I'm getting serious problems. During the migration the VM was
 doing a rather slow download at 1.5 MB/s. So the memory changed
 by 15 MB per 10 seconds. No wonder that a check every 10 seconds
 was not able to see any progress. I'm scared of what will happen if I
 want to migrate a medium-loaded system running a database.

 Any tip for a parametrization?

 Markus


 what's the bandwidth? default is up to 30MB/sec, to allow up to 3 VMs to
 migrate on 1Gb without congesting it.
 you could raise that if you have 10GB, or raise the bandwidth cap and
 reduce max number of concurrent VMs, etc.

 My migration network is IPoIB 10GBit. During our tests only one VM
 was migrated.  Bandwidth cap or number of concurrent VMs has not
 been changed after default install.

 Is migration_max_bandwidth in /etc/vdsm/vdsm.conf still the right place?

 probably

 And what settings do you suggest?



I haven't tried changing the values myself, but in a previous thread
(actually about limiting rather than speeding up migration ;-) these two
parameters were described; they are to be put in each host's /etc/vdsm/vdsm.conf

max_outgoing_migrations
(eg 1 for allowing only one migration at a time)

migration_max_bandwidth
(unit is MiB/s; the vdsm default is 32 MiB/s, and it applies to a single
migration, not overall)

I think it is necessary to follow this workflow for every host (a config sketch follows the list):
- put host into maintenance
- stop vdsmd service
- change values
- start vdsmd service
- activate host
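
A minimal sketch of the edit itself, assuming these parameters live in the
[vars] section of /etc/vdsm/vdsm.conf (the values are illustrative only, not
recommendations):

  [vars]
  migration_max_bandwidth = 300
  max_outgoing_migrations = 2

with service vdsmd stop / service vdsmd start around the edit, as per the
list above.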

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] [vdsm] Migration failed (previous migrations succeeded)

2014-02-03 Thread Michal Skrivanek

On Feb 3, 2014, at 16:20 , Gianluca Cecchi gianluca.cec...@gmail.com wrote:

 On Mon, Feb 3, 2014 at 1:43 PM, Michal Skrivanek
 michal.skriva...@redhat.com wrote:
 
 On Feb 1, 2014, at 18:25 , Itamar Heim ih...@redhat.com wrote:
 
 On 01/31/2014 09:30 AM, Sven Kieske wrote:
 Hi,
 
 is there any documentation regarding all
 allowed settings in the vdsm.conf?
 
 I didn't find anything related in the rhev docs
 
 that's a question for vdsm mailing list - cc-ing…
 
 the vdsm.conf has a description for each parameter…just search for all 
 containing migration:)
 we do want to expose some/most/all of them in UI/REST eventually, the 
 migration downtime setting is there now in 3.4, but others are missing
 
 Thanks,
 michal
 
 
 Am 30.01.2014 21:43, schrieb Itamar Heim:
 On 01/30/2014 10:37 PM, Markus Stockhausen wrote:
  From: Itamar Heim [ih...@redhat.com]
  Sent: Thursday, 30 January 2014 21:25
  To: Markus Stockhausen; ovirt-users
  Subject: Re: [Users] Migration failed (previous migrations succeeded)
 
 
  Now I'm getting serious problems. During the migration the VM was
  doing a rather slow download at 1.5 MB/s. So the memory changed
  by 15 MB per 10 seconds. No wonder that a check every 10 seconds
  was not able to see any progress. I'm scared of what will happen if I
  want to migrate a medium-loaded system running a database.
 
 Any tip for a parametrization?
 
 Markus
 
 
 what's the bandwidth? default is up to 30MB/sec, to allow up to 3 VMs to
 migrate on 1Gb without congesting it.
 you could raise that if you have 10GB, or raise the bandwidth cap and
 reduce max number of concurrent VMs, etc.
 
 My migration network is IPoIB 10GBit. During our tests only one VM
 was migrated.  Bandwidth cap or number of concurrent VMs has not
 been changed after default install.
 
 Is migration_max_bandwidth in /etc/vdsm/vdsm.conf still the right place?
 
 probably
 
 And what settings do you suggest?
 
 
 
 I haven't tried changing the values myself, but in a previous thread
 (actually about limiting rather than speeding up migration ;-) these two
 parameters were described; they are to be put in each host's /etc/vdsm/vdsm.conf

yep, they have (brief) description there

 
 max_outgoing_migrations
 (eg 1 for allowing only one migration at a time)
 
 migration_max_bandwidth
 (unit is MiB/s; the vdsm default is 32 MiB/s, and it applies to a single
 migration, not overall)
 
 I think it is necessary to follow this workflow for every host

unfortunately yes

 - put host into maintenance
 - stop vdsmd service
 - change values
 - start vdsmd service
 - activate host

 
 Gianluca

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.3.3 RC EL6 Live Snapshot

2014-02-03 Thread Dafna Ron

Can you also post the vdsm, libvirt and qemu package versions?

Thanks,
Dafna


On 02/03/2014 04:49 PM, Steve Dainard wrote:

FYI I'm running version 3.3.2, not the 3.3.3 beta.




On Mon, Feb 3, 2014 at 11:24 AM, Dafna Ron d...@redhat.com wrote:


Thanks Steve.

from the logs I can see that the create snapshot succeeds and that
the vm is resumed.
the vm moves to pause as part of libvirt flows:

2014-02-02 14:41:20.872+0000: 5843: debug : qemuProcessHandleStop:728 : Transitioned guest snapshot-test to paused state
2014-02-02 14:41:30.031+0000: 5843: debug : qemuProcessHandleResume:776 : Transitioned guest snapshot-test out of paused into resumed state

There are bugs here but I am not sure yet if this is libvirt
regression or engine.

I'm adding Elad and Maor since in engine logs I can't see anything
calling for live snapshot (only for snapshot) - Maor, shouldn't
live snapshot command be logged somewhere in the logs?
Is it possible that the engine is calling plain create snapshot rather
than live snapshot, which is why the vm pauses?

Elad, if engine is not logging live snapshot anywhere I would open
a bug for engine (to print that in the logs).
Also, there is a bug in vdsm log for sdc where the below is logged
as ERROR and not INFO:

Thread-23::ERROR::2014-02-02
09:51:19,497::sdc::137::Storage.StorageDomainCache::(_findDomain)
looking for unfetched domain a52938f7-2cf4-4771-acb2-0c78d14999e5
Thread-23::ERROR::2014-02-02
09:51:19,497::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain a52938f7-2cf4-4771-acb2-0c78d14999e5

If the engine was sending live snapshot, or if there is no
difference in the two commands on the engine side, then I would open a
bug for libvirt for pausing the vm during live snapshot.

Dafna


On 02/03/2014 02:41 PM, Steve Dainard wrote:

[root@ovirt002 ~]# vdsClient -s 0 getStorageDomainInfo
a52938f7-2cf4-4771-acb2-0c78d14999e5
uuid = a52938f7-2cf4-4771-acb2-0c78d14999e5
pool = ['fcb89071-6cdb-4972-94d1-c9324cebf814']
lver = 5
version = 3
role = Master
remotePath = gluster-store-vip:/rep1
spm_id = 2
type = NFS
class = Data
master_ver = 1
name = gluster-store-rep1




On Sun, Feb 2, 2014 at 2:55 PM, Dafna Ron d...@redhat.com wrote:

please run vdsClient -s 0 getStorageDomainInfo
a52938f7-2cf4-4771-acb2-0c78d14999e5

Thanks,

Dafna



On 02/02/2014 03:02 PM, Steve Dainard wrote:

Logs attached with VM running on qemu-kvm-rhev
packages installed.


Re: [Users] oVirt 3.3.3 RC EL6 Live Snapshot

2014-02-03 Thread Steve Dainard
[root@ovirt002 ~]# rpm -qa | egrep 'qemu|vdsm|libvirt' | sort
gpxe-roms-qemu-0.9.7-6.10.el6.noarch
libvirt-0.10.2-29.el6_5.3.x86_64
libvirt-client-0.10.2-29.el6_5.3.x86_64
libvirt-lock-sanlock-0.10.2-29.el6_5.3.x86_64
libvirt-python-0.10.2-29.el6_5.3.x86_64
qemu-img-rhev-0.12.1.2-2.355.el6.5.x86_64
qemu-kvm-rhev-0.12.1.2-2.355.el6.5.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.355.el6.5.x86_64
vdsm-4.13.3-2.el6.x86_64
vdsm-cli-4.13.3-2.el6.noarch
vdsm-python-4.13.3-2.el6.x86_64
vdsm-xmlrpc-4.13.3-2.el6.noarch


*Steve Dainard *
IT Infrastructure Manager
Miovision http://miovision.com/ | *Rethink Traffic*
519-513-2407 ex.250
877-646-8476 (toll-free)

*Blog http://miovision.com/blog  |  **LinkedIn
https://www.linkedin.com/company/miovision-technologies  |  Twitter
https://twitter.com/miovision  |  Facebook
https://www.facebook.com/miovision*
--
 Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, ON,
Canada | N2C 1L3
This e-mail may contain information that is privileged or confidential. If
you are not the intended recipient, please delete the e-mail and any
attachments and notify us immediately.


On Mon, Feb 3, 2014 at 11:54 AM, Dafna Ron d...@redhat.com wrote:

 Can you also post the vdsm, libvirt and qemu package versions?

 Thanks,
 Dafna



 On 02/03/2014 04:49 PM, Steve Dainard wrote:

 FYI I'm running version 3.3.2, not the 3.3.3 beta.



 On Mon, Feb 3, 2014 at 11:24 AM, Dafna Ron d...@redhat.com wrote:

 Thanks Steve.

 from the logs I can see that the create snapshot succeeds and that
 the vm is resumed.
 the vm moves to pause as part of libvirt flows:

 2014-02-02 14:41:20.872+0000: 5843: debug : qemuProcessHandleStop:728 : Transitioned guest snapshot-test to paused state
 2014-02-02 14:41:30.031+0000: 5843: debug : qemuProcessHandleResume:776 : Transitioned guest snapshot-test out of paused into resumed state

 There are bugs here but I am not sure yet if this is libvirt
 regression or engine.

 I'm adding Elad and Maor since in engine logs I can't see anything
 calling for live snapshot (only for snapshot) - Maor, shouldn't
 live snapshot command be logged somewhere in the logs?
 Is it possible that the engine is calling plain create snapshot rather
 than live snapshot, which is why the vm pauses?

 Elad, if engine is not logging live snapshot anywhere I would open
 a bug for engine (to print that in the logs).
 Also, there is a bug in vdsm log for sdc where the below is logged
 as ERROR and not INFO:

 Thread-23::ERROR::2014-02-02
 09:51:19,497::sdc::137::Storage.StorageDomainCache::(_findDomain)
 looking for unfetched domain a52938f7-2cf4-4771-acb2-0c78d14999e5
 Thread-23::ERROR::2014-02-02
 09:51:19,497::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
 looking for domain a52938f7-2cf4-4771-acb2-0c78d14999e5

 If the engine was sending live snapshot, or if there is no
 difference in the two commands on the engine side, then I would open a
 bug for libvirt for pausing the vm during live snapshot.

 Dafna


 On 02/03/2014 02:41 PM, Steve Dainard wrote:

 [root@ovirt002 ~]# vdsClient -s 0 getStorageDomainInfo
 a52938f7-2cf4-4771-acb2-0c78d14999e5
 uuid = a52938f7-2cf4-4771-acb2-0c78d14999e5
 pool = ['fcb89071-6cdb-4972-94d1-c9324cebf814']
 lver = 5
 version = 3
 role = Master
 remotePath = gluster-store-vip:/rep1
 spm_id = 2
 type = NFS
 class = Data
 master_ver = 1
 name = gluster-store-rep1



Re: [Users] oVirt 3.3.3 RC EL6 Live Snapshot

2014-02-03 Thread Maor Lipchuk
From the engine logs it seems that indeed live snapshot is called (The
command is snapshotVDSCommand see [1]).
This is done right after the snapshot has been created in the VM and it
signals the qemu process to start using the new volume created.

When live snapshot does not succeed we should see in the log something
like Wasn't able to live snapshot due to error:..., but it does not
appear so it seems that this worked out fine.
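
(A hedged way to check is to grep the engine log for that string - a sketch,
assuming the default log location:

  grep "Wasn't able to live snapshot" /var/log/ovirt-engine/engine.log
)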

At some point I can see in the logs that VDSM reports to the engine that
the VM is paused.


[1]
2014-02-02 09:41:20,564 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(pool-6-thread-49) START, SnapshotVDSCommand(HostName = ovirt002, HostId
= 3080fb61-2d03-4008-b47f-9b66276a4257,
vmId=e261e707-a21f-4ae8-9cff-f535f4430446), log id: 7e0d7872
2014-02-02 09:41:21,119 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-93) VM snapshot-test
e261e707-a21f-4ae8-9cff-f535f4430446 moved from Up --> Paused
2014-02-02 09:41:30,234 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(pool-6-thread-49) FINISH, SnapshotVDSCommand, log id: 7e0d7872
2014-02-02 09:41:30,238 INFO
[org.ovirt.engine.core.bll.CreateSnapshotCommand] (pool-6-thread-49)
[67ea047a] Ending command successfully:
org.ovirt.engine.core.bll.CreateSnapshotCommand
...

Regards,
Maor

On 02/03/2014 06:24 PM, Dafna Ron wrote:
 Thanks Steve.
 
 from the logs I can see that the create snapshot succeeds and that the
 vm is resumed.
 the vm moves to pause as part of libvirt flows:
 
 2014-02-02 14:41:20.872+0000: 5843: debug : qemuProcessHandleStop:728 : Transitioned guest snapshot-test to paused state
 2014-02-02 14:41:30.031+0000: 5843: debug : qemuProcessHandleResume:776 : Transitioned guest snapshot-test out of paused into resumed state
 
 There are bugs here but I am not sure yet if this is libvirt regression
 or engine.
 
 I'm adding Elad and Maor since in engine logs I can't see anything
 calling for live snapshot (only for snapshot) - Maor, shouldn't live
 snapshot command be logged somewhere in the logs?
 Is it possible that the engine is calling plain create snapshot rather
 than live snapshot, which is why the vm pauses?
 
 Elad, if engine is not logging live snapshot anywhere I would open a bug
 for engine (to print that in the logs).
 Also, there is a bug in vdsm log for sdc where the below is logged as
 ERROR and not INFO:
 
 Thread-23::ERROR::2014-02-02
 09:51:19,497::sdc::137::Storage.StorageDomainCache::(_findDomain)
 looking for unfetched domain a52938f7-2cf4-4771-acb2-0c78d14999e5
 Thread-23::ERROR::2014-02-02
 09:51:19,497::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
 looking for domain a52938f7-2cf4-4771-acb2-0c78d14999e5
 
 If the engine was sending live snapshot, or if there is no difference in
 the two commands on the engine side, then I would open a bug for libvirt
 for pausing the vm during live snapshot.
 
 Dafna
 
 On 02/03/2014 02:41 PM, Steve Dainard wrote:
 [root@ovirt002 ~]# vdsClient -s 0 getStorageDomainInfo
 a52938f7-2cf4-4771-acb2-0c78d14999e5
 uuid = a52938f7-2cf4-4771-acb2-0c78d14999e5
 pool = ['fcb89071-6cdb-4972-94d1-c9324cebf814']
 lver = 5
 version = 3
 role = Master
 remotePath = gluster-store-vip:/rep1
 spm_id = 2
 type = NFS
 class = Data
 master_ver = 1
 name = gluster-store-rep1




 On Sun, Feb 2, 2014 at 2:55 PM, Dafna Ron d...@redhat.com wrote:

 please run vdsClient -s 0 getStorageDomainInfo
 a52938f7-2cf4-4771-acb2-0c78d14999e5

 Thanks,

 Dafna



 On 02/02/2014 03:02 PM, Steve Dainard wrote:

 Logs attached with VM running on qemu-kvm-rhev packages
 installed.


Re: [Users] oVirt 3.3.3 RC EL6 Live Snapshot

2014-02-03 Thread Maor Lipchuk
On 02/03/2014 07:18 PM, Dafna Ron wrote:
 Maor,
 
 If snapshotVDSCommand is for live snapshot, what is the offline create
 snapshot command?
It is the CreateSnapshotVdsCommand which calls createVolume in VDSM
 
 we did not say that live snapshot did not succeed :)  we said that the
 vm is paused and restarted - which is something that should not happen
 for live snapshot (or at least never did before).
It's not certain that the restart is related to the live snapshot, but that
should be observable in the libvirt/vdsm logs.
 as I wrote before, we know that vdsm is reporting the vm as paused; that
 is because libvirt is reporting the vm as paused, and I think that it's
 happening because libvirt is not doing a live snapshot and so pauses the
 vm while taking the snapshot.
That sounds logical to me; it needs to be checked with libvirt whether that
kind of behaviour could happen.
 
 Dafna
 
 
 On 02/03/2014 05:08 PM, Maor Lipchuk wrote:
  From the engine logs it seems that indeed live snapshot is called (The
 command is snapshotVDSCommand see [1]).
 This is done right after the snapshot has been created in the VM and it
 signals the qemu process to start using the new volume created.

 When live snapshot does not succeed we should see in the log something
 like Wasn't able to live snapshot due to error:..., but it does not
 appear so it seems that this worked out fine.

 At some point I can see in the logs that VDSM reports to the engine that
 the VM is paused.


 [1]
 2014-02-02 09:41:20,564 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
 (pool-6-thread-49) START, SnapshotVDSCommand(HostName = ovirt002, HostId
 = 3080fb61-2d03-4008-b47f-9b66276a4257,
 vmId=e261e707-a21f-4ae8-9cff-f535f4430446), log id: 7e0d7872
 2014-02-02 09:41:21,119 INFO
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
 (DefaultQuartzScheduler_Worker-93) VM snapshot-test
 e261e707-a21f-4ae8-9cff-f535f4430446 moved from Up --> Paused
 2014-02-02 09:41:30,234 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
 (pool-6-thread-49) FINISH, SnapshotVDSCommand, log id: 7e0d7872
 2014-02-02 09:41:30,238 INFO
 [org.ovirt.engine.core.bll.CreateSnapshotCommand] (pool-6-thread-49)
 [67ea047a] Ending command successfully:
 org.ovirt.engine.core.bll.CreateSnapshotCommand
 ...

 Regards,
 Maor

 On 02/03/2014 06:24 PM, Dafna Ron wrote:
 Thanks Steve.

 from the logs I can see that the create snapshot succeeds and that the
 vm is resumed.
 the vm moves to pause as part of libvirt flows:

 2014-02-02 14:41:20.872+0000: 5843: debug : qemuProcessHandleStop:728 : Transitioned guest snapshot-test to paused state
 2014-02-02 14:41:30.031+0000: 5843: debug : qemuProcessHandleResume:776 : Transitioned guest snapshot-test out of paused into resumed state

 There are bugs here but I am not sure yet if this is libvirt regression
 or engine.

 I'm adding Elad and Maor since in engine logs I can't see anything
 calling for live snapshot (only for snapshot) - Maor, shouldn't live
 snapshot command be logged somewhere in the logs?
 Is it possible that the engine is calling plain create snapshot rather
 than live snapshot, which is why the vm pauses?

 Elad, if engine is not logging live snapshot anywhere I would open a bug
 for engine (to print that in the logs).
 Also, there is a bug in vdsm log for sdc where the below is logged as
 ERROR and not INFO:

 Thread-23::ERROR::2014-02-02
 09:51:19,497::sdc::137::Storage.StorageDomainCache::(_findDomain)
 looking for unfetched domain a52938f7-2cf4-4771-acb2-0c78d14999e5
 Thread-23::ERROR::2014-02-02
 09:51:19,497::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)

 looking for domain a52938f7-2cf4-4771-acb2-0c78d14999e5

 If the engine was sending live snapshot, or if there is no difference in
 the two commands on the engine side, then I would open a bug for libvirt
 for pausing the vm during live snapshot.

 Dafna

 On 02/03/2014 02:41 PM, Steve Dainard wrote:
 [root@ovirt002 ~]# vdsClient -s 0 getStorageDomainInfo
 a52938f7-2cf4-4771-acb2-0c78d14999e5
 uuid = a52938f7-2cf4-4771-acb2-0c78d14999e5
 pool = ['fcb89071-6cdb-4972-94d1-c9324cebf814']
 lver = 5
 version = 3
 role = Master
 remotePath = gluster-store-vip:/rep1
 spm_id = 2
 type = NFS
 class = Data
 master_ver = 1
 name = gluster-store-rep1




 On 

Re: [Users] Users Digest, Vol 29, Issue 20

2014-02-03 Thread Juan Pablo Lorier
Hi,

I have to apologise, as I moved the page after sending it to the list. Here
it is:

http://www.ovirt.org/Change_network_interface_for_Gluster

Matt: I don't get your post - are you suggesting I start like that?

Regards,



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] oVirt 3.3.3 release

2014-02-03 Thread Sandro Bonazzola
The oVirt development team is very happy to announce the general
availability of oVirt 3.3.3 as of February 3rd 2014. This release
solidifies oVirt as a leading KVM management application and open
source alternative to VMware vSphere.

oVirt is available now for Fedora 19 and Red Hat Enterprise Linux 6.5
(or similar).

This release of oVirt includes numerous bug fixes.
See the release notes [1] for a list of the new features and bugs fixed.

IMPORTANT NOTE: If you're upgrading from a previous version, please update
ovirt-release to the latest version (10.0.1-3) and verify you have the correct
repositories enabled by running the following commands

# yum update ovirt-release
# yum repolist enabled

before upgrading with the usual procedure. You should see the ovirt-3.3.3 and
ovirt-stable repositories listed in the output of the repolist command.
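
For example (a sketch, just filtering the repo list):

# yum repolist enabled | grep -i ovirt

You should see repo ids for both ovirt-3.3.3 and ovirt-stable in the output.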

A new oVirt Node build will be available soon as well.

[1] http://www.ovirt.org/OVirt_3.3.3_release_notes


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.3.3 RC EL6 Live Snapshot

2014-02-03 Thread Dafna Ron

Maor,

If snapshotVDSCommand is for live snapshot, what is the offline create 
snapshot command?


we did not say that live snapshot did not succeed :)  we said that the 
vm is paused and restarted - which is something that should not happen 
for live snapshot (or at least never did before).
as I wrote before, we know that vdsm is reporting the vm as paused, that 
is because libvirt is reporting the vm as paused and I think that its 
happening because libvirt is not doing a live snapshot and so pauses the 
vm while taking the snapshot.


Dafna


On 02/03/2014 05:08 PM, Maor Lipchuk wrote:

 From the engine logs it seems that indeed live snapshot is called (The
command is snapshotVDSCommand see [1]).
This is done right after the snapshot has been created in the VM and it
signals the qemu process to start using the new volume created.

When live snapshot does not succeed we should see in the log something
like Wasn't able to live snapshot due to error:..., but it does not
appear so it seems that this worked out fine.

At some point I can see in the logs that VDSM reports to the engine that
the VM is paused.


[1]
2014-02-02 09:41:20,564 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(pool-6-thread-49) START, SnapshotVDSCommand(HostName = ovirt002, HostId
= 3080fb61-2d03-4008-b47f-9b66276a4257,
vmId=e261e707-a21f-4ae8-9cff-f535f4430446), log id: 7e0d7872
2014-02-02 09:41:21,119 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-93) VM snapshot-test
e261e707-a21f-4ae8-9cff-f535f4430446 moved from Up --> Paused
2014-02-02 09:41:30,234 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(pool-6-thread-49) FINISH, SnapshotVDSCommand, log id: 7e0d7872
2014-02-02 09:41:30,238 INFO
[org.ovirt.engine.core.bll.CreateSnapshotCommand] (pool-6-thread-49)
[67ea047a] Ending command successfully:
org.ovirt.engine.core.bll.CreateSnapshotCommand
...

Regards,
Maor

On 02/03/2014 06:24 PM, Dafna Ron wrote:

Thanks Steve.

from the logs I can see that the create snapshot succeeds and that the
vm is resumed.
the vm moves to pause as part of libvirt flows:

2014-02-02 14:41:20.872+0000: 5843: debug : qemuProcessHandleStop:728 : Transitioned guest snapshot-test to paused state
2014-02-02 14:41:30.031+0000: 5843: debug : qemuProcessHandleResume:776 : Transitioned guest snapshot-test out of paused into resumed state

There are bugs here but I am not sure yet if this is libvirt regression
or engine.

I'm adding Elad and Maor since in engine logs I can't see anything
calling for live snapshot (only for snapshot) - Maor, shouldn't live
snapshot command be logged somewhere in the logs?
Is it possible that the engine is calling plain create snapshot rather
than live snapshot, which is why the vm pauses?

Elad, if engine is not logging live snapshot anywhere I would open a bug
for engine (to print that in the logs).
Also, there is a bug in vdsm log for sdc where the below is logged as
ERROR and not INFO:

Thread-23::ERROR::2014-02-02
09:51:19,497::sdc::137::Storage.StorageDomainCache::(_findDomain)
looking for unfetched domain a52938f7-2cf4-4771-acb2-0c78d14999e5
Thread-23::ERROR::2014-02-02
09:51:19,497::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain a52938f7-2cf4-4771-acb2-0c78d14999e5

If the engine was sending live snapshot, or if there is no difference in
the two commands on the engine side, then I would open a bug for libvirt
for pausing the vm during live snapshot.

Dafna

On 02/03/2014 02:41 PM, Steve Dainard wrote:

[root@ovirt002 ~]# vdsClient -s 0 getStorageDomainInfo
a52938f7-2cf4-4771-acb2-0c78d14999e5
uuid = a52938f7-2cf4-4771-acb2-0c78d14999e5
pool = ['fcb89071-6cdb-4972-94d1-c9324cebf814']
lver = 5
version = 3
role = Master
remotePath = gluster-store-vip:/rep1
spm_id = 2
type = NFS
class = Data
master_ver = 1
name = gluster-store-rep1




On Sun, Feb 2, 2014 at 2:55 PM, Dafna Ron d...@redhat.com wrote:

 please run vdsClient -s 0 getStorageDomainInfo
 a52938f7-2cf4-4771-acb2-0c78d14999e5

 Thanks,

 Dafna



 On 02/02/2014 03:02 PM, Steve Dainard wrote:

 Logs attached with VM running on qemu-kvm-rhev packages
installed.


Re: [Users] oVirt 3.3.3 RC EL6 Live Snapshot

2014-02-03 Thread Dafna Ron

On 02/03/2014 05:34 PM, Maor Lipchuk wrote:

On 02/03/2014 07:18 PM, Dafna Ron wrote:

Maor,

If snapshotVDSCommand is for live snapshot, what is the offline create
snapshot command?

It is the CreateSnapshotVdsCommand which calls createVolume in VDSM


but we need to be able to know that a live snapshot was sent and not an 
offline snapshot.
Elad, somewhere in this flow we need to know that the snapshot was taken 
on a running vm :) this seems like a bug to me.

we did not say that live snapshot did not succeed :)  we said that the
vm is paused and restarted - which is something that should not happen
for live snapshot (or at least never did before).

It's not certain that the restart is related to the live snapshot, but that
should be observable in the libvirt/vdsm logs.


yes, I am sure because the user is reporting it and the logs show it...

as I wrote before, we know that vdsm is reporting the vm as paused; that
is because libvirt is reporting the vm as paused, and I think that it's
happening because libvirt is not doing a live snapshot and so pauses the
vm while taking the snapshot.

That sounds logical to me; it needs to be checked with libvirt whether that
kind of behaviour could happen.


Elad, can you please try to reproduce and open a bug to libvirt?


Dafna


On 02/03/2014 05:08 PM, Maor Lipchuk wrote:

  From the engine logs it seems that indeed live snapshot is called (The
command is snapshotVDSCommand see [1]).
This is done right after the snapshot has been created in the VM and it
signals the qemu process to start using the new volume created.

When live snapshot does not succeed we should see in the log something
like Wasn't able to live snapshot due to error:..., but it does not
appear so it seems that this worked out fine.

At some point I can see in the logs that VDSM reports to the engine that
the VM is paused.


[1]
2014-02-02 09:41:20,564 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(pool-6-thread-49) START, SnapshotVDSCommand(HostName = ovirt002, HostId
= 3080fb61-2d03-4008-b47f-9b66276a4257,
vmId=e261e707-a21f-4ae8-9cff-f535f4430446), log id: 7e0d7872
2014-02-02 09:41:21,119 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-93) VM snapshot-test
e261e707-a21f-4ae8-9cff-f535f4430446 moved from Up --> Paused
2014-02-02 09:41:30,234 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(pool-6-thread-49) FINISH, SnapshotVDSCommand, log id: 7e0d7872
2014-02-02 09:41:30,238 INFO
[org.ovirt.engine.core.bll.CreateSnapshotCommand] (pool-6-thread-49)
[67ea047a] Ending command successfully:
org.ovirt.engine.core.bll.CreateSnapshotCommand
...

Regards,
Maor

On 02/03/2014 06:24 PM, Dafna Ron wrote:

Thanks Steve.

from the logs I can see that the create snapshot succeeds and that the
vm is resumed.
the vm moves to pause as part of libvirt flows:

2014-02-02 14:41:20.872+0000: 5843: debug : qemuProcessHandleStop:728 : Transitioned guest snapshot-test to paused state
2014-02-02 14:41:30.031+0000: 5843: debug : qemuProcessHandleResume:776 : Transitioned guest snapshot-test out of paused into resumed state

There are bugs here but I am not sure yet if this is libvirt regression
or engine.

I'm adding Elad and Maor since in engine logs I can't see anything
calling for live snapshot (only for snapshot) - Maor, shouldn't live
snapshot command be logged somewhere in the logs?
Is it possible that the engine is calling plain create snapshot rather
than live snapshot, which is why the vm pauses?

Elad, if engine is not logging live snapshot anywhere I would open a bug
for engine (to print that in the logs).
Also, there is a bug in vdsm log for sdc where the below is logged as
ERROR and not INFO:

Thread-23::ERROR::2014-02-02
09:51:19,497::sdc::137::Storage.StorageDomainCache::(_findDomain)
looking for unfetched domain a52938f7-2cf4-4771-acb2-0c78d14999e5
Thread-23::ERROR::2014-02-02
09:51:19,497::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)

looking for domain a52938f7-2cf4-4771-acb2-0c78d14999e5

If the engine was sending live snapshot, or if there is no difference in
the two commands on the engine side, then I would open a bug for libvirt
for pausing the vm during live snapshot.

Dafna

On 02/03/2014 02:41 PM, Steve Dainard wrote:

[root@ovirt002 ~]# vdsClient -s 0 getStorageDomainInfo
a52938f7-2cf4-4771-acb2-0c78d14999e5
uuid = a52938f7-2cf4-4771-acb2-0c78d14999e5
pool = ['fcb89071-6cdb-4972-94d1-c9324cebf814']
lver = 5
version = 3
role = Master
remotePath = gluster-store-vip:/rep1
spm_id = 2
type = NFS
class = Data
master_ver = 1
name = gluster-store-rep1



Re: [Users] oVirt 3.3.3 RC EL6 Live Snapshot

2014-02-03 Thread Maor Lipchuk
On 02/03/2014 07:46 PM, Dafna Ron wrote:
 On 02/03/2014 05:34 PM, Maor Lipchuk wrote:
 On 02/03/2014 07:18 PM, Dafna Ron wrote:
 Maor,

 If snapshotVDSCommand is for live snapshot, what is the offline create
 snapshot command?
 It is the CreateSnapshotVdsCommand which calls createVolume in VDSM
 
 but we need to be able to know that a live snapshot was sent and not an
 offline snapshot.
Yes, in the logs we can see the whole process:

First a request to create a snapshot (new volume) sent to VDSM:
2014-02-02 09:41:09,557 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
(pool-6-thread-49) [67ea047a] START, CreateSnapshotVDSCommand(
storagePoolId = fcb89071-6cdb-4972-94d1-c9324cebf814,
ignoreFailoverLimit = false, storageDomainId =
a52938f7-2cf4-4771-acb2-0c78d14999e5, imageGroupId =
c1cb6b66-655e-48c3-8568-4975295eb037, imageSizeInBytes = 21474836480,
volumeFormat = COW, newImageId = 6d8c80a4-328f-4a53-86a2-a4080a2662ce,
newImageDescription = , imageId = 5085422e-6592-415a-9da3-9e43dac9374b,
sourceImageGroupId = c1cb6b66-655e-48c3-8568-4975295eb037), log id: 7875f3f5

after the snapshot gets created :
2014-02-02 09:41:20,553 INFO
[org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
(pool-6-thread-49) Ending command successfully:
org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand

then the engine calls the live snapshot (see also [1])
2014-02-02 09:41:30,234 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(pool-6-thread-49) FINISH, SnapshotVDSCommand, log id: 7e0d7872
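
(A hedged way to follow both phases in engine.log - a sketch, assuming the
default log location; SnapshotVDSCommand also matches CreateSnapshotVDSCommand,
so one grep shows the volume-creation phase and the live snapshot call together:

  grep SnapshotVDSCommand /var/log/ovirt-engine/engine.log
)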

 Elad, somewhere in this flow we need to know that the snapshot was taken
 on a running vm :) this seems like a bug to me.
 we did not say that live snapshot did not succeed :)  we said that the
 vm is paused and restarted - which is something that should not happen
 for live snapshot (or at least never did before).
 It's not certain that the restart is related to the live snapshot, but that
 should be observable in the libvirt/vdsm logs.
 
 yes, I am sure because the user is reporting it and the logs show it...
 as I wrote before, we know that vdsm is reporting the vm as paused; that
 is because libvirt is reporting the vm as paused, and I think that it's
 happening because libvirt is not doing a live snapshot and so pauses the
 vm while taking the snapshot.
 That sounds logical to me; it needs to be checked with libvirt whether that
 kind of behaviour could happen.
 
 Elad, can you please try to reproduce and open a bug to libvirt?
 
 Dafna


 On 02/03/2014 05:08 PM, Maor Lipchuk wrote:
   From the engine logs it seems that indeed live snapshot is called
 (The
 command is snapshotVDSCommand see [1]).
 This is done right after the snapshot has been created in the VM and it
 signals the qemu process to start using the new volume created.

 When live snapshot does not succeed we should see in the log something
 like Wasn't able to live snapshot due to error:..., but it does not
 appear so it seems that this worked out fine.

 At some point I can see in the logs that VDSM reports to the engine
 that
 the VM is paused.


 [1]
 2014-02-02 09:41:20,564 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
 (pool-6-thread-49) START, SnapshotVDSCommand(HostName = ovirt002,
 HostId
 = 3080fb61-2d03-4008-b47f-9b66276a4257,
 vmId=e261e707-a21f-4ae8-9cff-f535f4430446), log id: 7e0d7872
 2014-02-02 09:41:21,119 INFO
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
 (DefaultQuartzScheduler_Worker-93) VM snapshot-test
 e261e707-a21f-4ae8-9cff-f535f4430446 moved from Up --> Paused
 2014-02-02 09:41:30,234 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
 (pool-6-thread-49) FINISH, SnapshotVDSCommand, log id: 7e0d7872
 2014-02-02 09:41:30,238 INFO
 [org.ovirt.engine.core.bll.CreateSnapshotCommand] (pool-6-thread-49)
 [67ea047a] Ending command successfully:
 org.ovirt.engine.core.bll.CreateSnapshotCommand
 ...

 Regards,
 Maor

 On 02/03/2014 06:24 PM, Dafna Ron wrote:
 Thanks Steve.

 from the logs I can see that the create snapshot succeeds and that the
 vm is resumed.
 the vm moves to pause as part of libvirt flows:

 2014-02-02 14:41:20.872+0000: 5843: debug : qemuProcessHandleStop:728 : Transitioned guest snapshot-test to paused state
 2014-02-02 14:41:30.031+0000: 5843: debug : qemuProcessHandleResume:776 : Transitioned guest snapshot-test out of paused into resumed state

 There are bugs here but I am not sure yet if this is libvirt
 regression
 or engine.

 I'm adding Elad and Maor since in engine logs I can't see anything
 calling for live snapshot (only for snapshot) - Maor, shouldn't live
 snapshot command be logged somewhere in the logs?
 Is it possible that the engine is calling plain create snapshot rather
 than live snapshot, which is why the vm pauses?

 Elad, if engine is not logging live snapshot anywhere I would open
 a bug
 for engine (to print that in the logs).
 Also, there is a bug in vdsm log for sdc where the below is logged as
 ERROR and not INFO:

 

Re: [Users] oVirt January 2014 Updates

2014-02-03 Thread Karli Sjöberg


Sent from my iPhone

 3 feb 2014 kl. 13:29 skrev Itamar Heim ih...@redhat.com:
 
 1. VERSIONS
 - oVirt 3.3.3 in last phases
  http://www.ovirt.org/OVirt_3.3.3_release_notes
 
 - oVirt 3.4 with another slew of features is getting into test day,
  beta, etc.
  http://red.ht/1eo9TiS
 
 2. WORKSHOPS
 - FOSDEM - the oVirt stand was packed, as well as a virt & IaaS devroom
  with many oVirt talks. More details next time.
 
 - more oVirt talks in cfgmgmtcamp and infra.next this week, including:
 
 -- Hervé Leclerc How we build a cloud with CentOS, oVirt, and
   Cisco-UCS Wednesday 5th February in Infrastructure.Next Ghent
   http://bit.ly/1fjTJVC
 
 -- oVirt Node being used as a Discovery Node in The Foreman project
   talk at cfgmgmtcamp, february 3rd
   http://bit.ly/1gAnneI
 
 - oVirt Korea group meeting this Saturday in Seoul
  Register here http://onoffmix.com/event/23134
 
 - Open Virtualization Workshop in Tehran (26, 27 February 2014) &
  Isfahan (5, 6 March 2014)
  http://t.co/9PR3BxOnpd
 
 3. USING oVirt
 
 - More details on Wind River using ovirt
  http://bit.ly/1i2LtLI
 
 - New Case Study: Nieuwland Geo-Informati
  http://www.ovirt.org/Nieuwland_case_study
 
 - oVirt Node being used as a Discovery Node in The Foreman project
  talk at cfgmgmtcamp, february 3rd
  http://bit.ly/1gAnneI
 
 4. Users
 - Double the amount of emails on the users mailing list - up from 800 to
  1600 this month!
 
 - Karli updated how to use spice with ovirt from OS X
  http://www.ovirt.org/SPICE_Remote-Viewer_on_OS_X
 
 - Opaque (spice android ovirt client) v1.1.8 beta released
  https://plus.google.com/communities/116099119712127782216
 
 - how to deploy windows guest agent on windows:
  http://bit.ly/1kr5tJo

This is also explained at:
http://www.ovirt.org/OVirt_Guest_Agent_For_Windows

/K

 
 - Andrew Lau posted on working through Hosted Engine with 3.4.0 beta
  http://bit.ly/1eobzZw
 
 - Deep Dive into oVirt 3.3.3 by Li Jiansheng (chinese)
  http://slidesha.re/1eFWQ8G
 
 - Matthew Bingham posted a series of videos on setting up oVirt:
  Install oVirt 3.3.2
  http://www.youtube.com/watch?v=GWT-m-oWSjQ
 
  Optional export nfs mount for oVirt 3.3.2
  http://www.youtube.com/watch?v=MLbPln5-2jE
 
  Initial webgui oVirt 3.3.2 Steps for storage
  http://www.youtube.com/watch?v=dL0_03ZICw4
 
  Download and upload client iso to ISO_DOMAIN for oVirt 3.3.2
  http://www.youtube.com/watch?v=pDzTHFSmvGE
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Mixing tagged and untagged VLANs

2014-02-03 Thread Trey Dockendorf
Using 3.3.2 I seem unable to mix tagged and untagged VLANs on a single
interface.  I'm trying to put the following logical networks on a
host's eth0.

ovirtmgmt:
 - Display Network
 - Migration Network
 - NOT VM Network
 - NO VLAN

private:
 - VM network
 - NO VLAN

ipmi:
 - VM Network
 - VLAN 2

In the host's network setup ovirtmgmt is already linked to eth0. If I
attach 'ipmi (VLAN 2)' and then try to attach 'private', the message is
Cannot have more than one non-VLAN network on one interface. The same
occurs if I try to attach 'private' when only 'ovirtmgmt' is assigned
to eth0.

Is it not possible to have multiple untagged networks associated
with one interface in oVirt?
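
(Inferring from the error text that only one untagged network is allowed per
nic, a possible workaround sketch would be to tag the VM network too, e.g.:

private:
 - VM network
 - VLAN 3 (illustrative tag)

leaving ovirtmgmt as the single untagged network on eth0 - though whether a
tagged 'private' fits the switch setup is a separate question.)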

Thanks
- Trey
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.3.3 RC EL6 Live Snapshot

2014-02-03 Thread Dafna Ron
Maor, I am not saying that we are not doing a live snapshot :) I am
saying that we need a print in the log stating that the live snapshot
command was called, i.e. print "LiveSnapshotCommand" in the log - this
can then call the rest of snapshotVDSCreateCommand.



On 02/03/2014 07:38 PM, Maor Lipchuk wrote:

On 02/03/2014 07:46 PM, Dafna Ron wrote:

On 02/03/2014 05:34 PM, Maor Lipchuk wrote:

On 02/03/2014 07:18 PM, Dafna Ron wrote:

Maor,

If snapshotVDSCommand is for live snapshot, what is the offline create
snapshot command?

It is the CreateSnapshotVdsCommand which calls createVolume in VDSM

but we need to be able to know that a live snapshot was sent and not an
offline snapshot.

Yes, in the logs we can see the whole process:

First, a request to create a snapshot (new volume) is sent to VDSM:
2014-02-02 09:41:09,557 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
(pool-6-thread-49) [67ea047a] START, CreateSnapshotVDSCommand(
storagePoolId = fcb89071-6cdb-4972-94d1-c9324cebf814,
ignoreFailoverLimit = false, storageDomainId =
a52938f7-2cf4-4771-acb2-0c78d14999e5, imageGroupId =
c1cb6b66-655e-48c3-8568-4975295eb037, imageSizeInBytes = 21474836480,
volumeFormat = COW, newImageId = 6d8c80a4-328f-4a53-86a2-a4080a2662ce,
newImageDescription = , imageId = 5085422e-6592-415a-9da3-9e43dac9374b,
sourceImageGroupId = c1cb6b66-655e-48c3-8568-4975295eb037), log id: 7875f3f5

After the snapshot gets created:
2014-02-02 09:41:20,553 INFO
[org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
(pool-6-thread-49) Ending command successfully:
org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand

Then the engine calls the live snapshot (see also [1]):
2014-02-02 09:41:30,234 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(pool-6-thread-49) FINISH, SnapshotVDSCommand, log id: 7e0d7872
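
To follow the whole flow, something like this should pull the relevant
lines out of engine.log (assuming the default log location):

  grep -E 'CreateSnapshotVDSCommand|CreateAllSnapshotsFromVmCommand|SnapshotVDSCommand' \
    /var/log/ovirt-engine/engine.log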


Elad, somewhere in this flow we need to know that the snapshot was taken
on a running vm :) this seems like a bug to me.

We did not say that the live snapshot did not succeed :) We said that the
VM is paused and restarted - which is something that should not happen
for a live snapshot (or at least never did before).

It's not certain that the restart is related to the live snapshot, but
that should be observable in the libvirt/vdsm logs.

yes, I am sure because the user is reporting it and the logs show it...

As I wrote before, we know that vdsm is reporting the VM as paused; that
is because libvirt is reporting the VM as paused, and I think it's
happening because libvirt is not doing a live snapshot and so pauses the
VM while taking the snapshot.

That sounds logical to me; it needs to be checked with libvirt whether
that kind of behaviour could happen.

Elad, can you please try to reproduce and open a bug to libvirt?
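
A quick way to check it outside of oVirt might be to drive libvirt
directly on the host (a sketch; the domain name is taken from the logs
above):

  # take a live external (disk-only) snapshot and watch the guest state
  virsh snapshot-create-as snapshot-test snap1 --disk-only --atomic
  virsh domstate snapshot-test   # should stay 'running' for a true live snapshot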


Dafna


On 02/03/2014 05:08 PM, Maor Lipchuk wrote:

From the engine logs it seems that live snapshot is indeed called (the
command is snapshotVDSCommand, see [1]).
This is done right after the snapshot has been created for the VM, and it
signals the qemu process to start using the newly created volume.

When a live snapshot does not succeed, we should see in the log something
like "Wasn't able to live snapshot due to error: ...", but that does not
appear, so it seems that this worked out fine.

At some point I can see in the logs that VDSM reports to the engine
that
the VM is paused.


[1]
2014-02-02 09:41:20,564 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(pool-6-thread-49) START, SnapshotVDSCommand(HostName = ovirt002,
HostId
= 3080fb61-2d03-4008-b47f-9b66276a4257,
vmId=e261e707-a21f-4ae8-9cff-f535f4430446), log id: 7e0d7872
2014-02-02 09:41:21,119 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-93) VM snapshot-test
e261e707-a21f-4ae8-9cff-f535f4430446 moved from Up -- Paused
2014-02-02 09:41:30,234 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand]
(pool-6-thread-49) FINISH, SnapshotVDSCommand, log id: 7e0d7872
2014-02-02 09:41:30,238 INFO
[org.ovirt.engine.core.bll.CreateSnapshotCommand] (pool-6-thread-49)
[67ea047a] Ending command successfully:
org.ovirt.engine.core.bll.CreateSnapshotCommand
...

Regards,
Maor

On 02/03/2014 06:24 PM, Dafna Ron wrote:

Thanks Steve.

From the logs I can see that the create snapshot succeeds and that the
VM is resumed.
The VM moves to paused as part of the libvirt flows:

2014-02-02 14:41:20.872+: 5843: debug :
qemuProcessHandleStop:728 :
Transitioned guest snapshot-test to paused state
2014-02-02 14:41:30.031+: 5843: debug :
qemuProcessHandleResume:776
: Transitioned guest snapshot-test out of paused into resumed state
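
Those transitions can be pulled straight out of the libvirt log with
something like this (assuming libvirtd is configured to log to the
usual file):

  grep 'Transitioned guest snapshot-test' /var/log/libvirt/libvirtd.log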

There are bugs here, but I am not sure yet whether this is a libvirt
regression or an engine one.

I'm adding Elad and Maor, since in the engine logs I can't see anything
calling for live snapshot (only for snapshot) - Maor, shouldn't the live
snapshot command be logged somewhere in the logs?
Is it possible that the engine is calling create snapshot and not create
live snapshot?

[Users] (quick tip) Controlling glusterfsd CPU outbreaks with cgroups

2014-02-03 Thread Andrew Lau
Hi all,

I've seen quite a few posts from people running gluster and oVirt on the
same physical box, so I think this may be relevant.
After some discussions on the gluster mailing list, I put together a quick
post [1] about how you can limit gluster's CPU resources, so that when it
comes to self-healing etc. your VMs will have higher priority and not
crawl to a halt when glusterfsd takes all the resources.
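
In rough terms the idea is something like this (a minimal sketch against
the raw cgroup v1 cpu controller, not the exact steps from the post;
group name and shares value are illustrative):

  # create a cgroup and give gluster a low CPU weight (default is 1024)
  mkdir -p /sys/fs/cgroup/cpu/gluster
  echo 256 > /sys/fs/cgroup/cpu/gluster/cpu.shares
  # move every running glusterfsd process into the group
  for pid in $(pgrep glusterfsd); do
    echo $pid > /sys/fs/cgroup/cpu/gluster/tasks
  done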

So far, I haven't been able to apply these cgroup rules without losing the
dynamic ones set by libvirt so you must do this in maintenance mode
followed by a full host restart after following those instructions.

I'm open to suggestions and feedback.

Thanks,
Andrew

[1]
http://www.andrewklau.com/controlling-glusterfsd-cpu-outbreaks-with-cgroups/
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] VM install failures on a stateless node

2014-02-03 Thread David Li
Itamar,

It hasn't been completely resolved.

What I can do right now is to run a live Ubuntu VM on the node without
actually installing it. I don't know if anyone in the community has ever
installed a VM on a stateless node. I have a few questions:

1. When I edit the VM settings, there is a stateless option. Does it mean
I can run a live VM in the memory of a stateless node if I check it?

2. My node is diskless, but I have an iSCSI LUN attached to it. Does this
mean the VM will be installed on the iSCSI LUN?

David



- Original Message -
 From: Itamar Heim ih...@redhat.com
 To: David Li david...@sbcglobal.net; users@ovirt.org users@ovirt.org
 Cc: 
 Sent: Sunday, February 2, 2014 8:45 AM
 Subject: Re: [Users] VM install failures on a stateless node
 
 On 01/28/2014 02:08 AM, David Li wrote:
  Hi,
 
  I have been trying to install my first VM on a stateless node. So far I
 have failed twice, with the node ending up in Non-responsive mode. I had
 to reboot to recover, and it took a while to reconfigure everything
 since this is stateless.
 
  I can still get into the node via the console. It's not dead.  But the 
 ovirtmgmt interface seems to be dead. The other iSCSI interface is running ok.
 
 
  Can anyone recommend ways how to debug this problem?
 
 
 
  Thanks.
 
  David
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 
 
 was this resolved?
 is vdsm up and running? output of vdsClient -s 0 getVdsCaps?
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] VM install failures on a stateless node

2014-02-03 Thread Itamar Heim

On 02/04/2014 04:49 AM, David Li wrote:

Itamar,

It hasn't been completely resolved.

What I can do right now is to run a live Ubuntu VM on the node without
actually installing it. I don't know if anyone in the community has ever
installed a VM on a stateless node. I have a few questions:

1. When I edit the VM settings, there is a stateless option. Does it mean
I can run a live VM in the memory of a stateless node if I check it?

2. My node is diskless, but I have an iSCSI LUN attached to it. Does this
mean the VM will be installed on the iSCSI LUN?

David


what type of storage domain are you using?




- Original Message -

From: Itamar Heim ih...@redhat.com
To: David Li david...@sbcglobal.net; users@ovirt.org users@ovirt.org
Cc:
Sent: Sunday, February 2, 2014 8:45 AM
Subject: Re: [Users] VM install failures on a stateless node

On 01/28/2014 02:08 AM, David Li wrote:

  Hi,

  I have been trying to install my first VM on a stateless node. So far I
have failed twice, with the node ending up in Non-responsive mode. I had
to reboot to recover, and it took a while to reconfigure everything
since this is stateless.


  I can still get into the node via the console. It's not dead.  But the

ovirtmgmt interface seems to be dead. The other iSCSI interface is running ok.



  Can anyone recommend ways how to debug this problem?



  Thanks.

  David

  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users



was this resolved?
is vdsm up and running? output of vdsClient -s 0 getVdsCaps?




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users