Re: [Users] The purpose of Wipe on delete ?

2014-02-28 Thread Sandro Bonazzola
Il 27/02/2014 22:16, Dafna Ron ha scritto:
 wipe = writing zeros on the space allocated to that disk, to make sure any 
 data once written is deleted permanently.
 
 so it's a security vs. speed decision on using this option - since we zero 
 the disk to make sure any information once written is overwritten,
 deleting a large disk can take a while.

I think this may not be really useful; zeroing files on modern file systems 
can't guarantee any kind of security improvement.
According to the shred man page:

   CAUTION: Note that shred relies on a very important assumption: that the file system overwrites data in place. This is the traditional way to do things, but many modern file system designs do not satisfy this assumption. The following are examples of file systems on which shred is not effective, or is not guaranteed to be effective in all file system modes:

   * log-structured or journaled file systems, such as those supplied with AIX and Solaris (and JFS, ReiserFS, XFS, Ext3, etc.)

   * file systems that write redundant data and carry on even if some writes fail, such as RAID-based file systems

   * file systems that make snapshots, such as Network Appliance's NFS server

   * file systems that cache in temporary locations, such as NFS version 3 clients

   * compressed file systems

   In the case of ext3 file systems, the above disclaimer applies (and shred is thus of limited effectiveness) only in data=journal mode, which journals file data in addition to just metadata. In both the data=ordered (default) and data=writeback modes, shred works as usual. Ext3 journaling modes can be changed by adding the data=something option to the mount options for a particular file system in the /etc/fstab file, as documented in the mount man page (man mount).

   In addition, file system backups and remote mirrors may contain copies of the file that cannot be removed, and that will allow a shredded file to be recovered later.
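For concreteness, the wipe operation Dafna describes amounts to overwriting the allocated space with zeros before the disk is removed. A minimal sketch on a plain file (the file name is illustrative; on block storage the same idea applies to the LV, and the caveats above about the backing file system still apply):

```shell
# Minimal sketch of "wipe on delete" for a file-backed disk image.
# disk.img stands in for a VM disk; the name is illustrative only.

printf 'sensitive-guest-data' > disk.img
size=$(stat -c %s disk.img)

# Wipe: overwrite the whole allocation with zeros, in place.
dd if=/dev/zero of=disk.img bs="$size" count=1 conv=notrunc 2>/dev/null
sync

# Verify only zeros remain, then delete the image.
if [ "$(tr -d '\0' < disk.img | wc -c)" -eq 0 ]; then
    wiped=yes
else
    wiped=no
fi
echo "wiped: $wiped"
rm -f disk.img
```

Note this is exactly the operation shred's caution applies to: on a copy-on-write or journaling file system the zeros may land in freshly allocated blocks while the old data persists elsewhere.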



 
 Dafna
 
 
 
 
 On 02/27/2014 04:14 PM, Richard Davis wrote:
 Hi

 What is the purpose of the Wipe on delete option for a VM disk ?
 Why would you not want data wiped on delete if the alternative is to leave 
 LV metadata and other data languishing on the SD ?


 Thanks

 Rich
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
 


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com


Re: [Users] Fresh installation of ovirt-engine

2014-02-28 Thread Sandro Bonazzola
Il 28/02/2014 09:13, Andy Michielsen ha scritto:
 Hello,
 
 No, I made the change during the execution of the engine-setup script. When it 
 wants to create an ISO_DOMAIN it suggests /var/lib/exports/iso and I
 changed it to /exports/iso.
 
 But when everything is finished and I check with showmount -e FQDN it 
 tells me that it is exporting /var/lib/exports/iso.

Can you upload your setup logs (for example on http://www.fpaste.org/) so we 
can take a look?

 
 Kind regards.
 
 
 
 
 2014-02-28 8:26 GMT+01:00 Sandro Bonazzola sbona...@redhat.com:
 
 Il 27/02/2014 22:18, Itamar Heim ha scritto:
  On 02/27/2014 06:14 PM, Andy Michielsen wrote:
  Hello,
 
   I just finished an installation of an ovirt-engine and changed the
   ISO_DOMAIN path to /exports/iso as this is where my 1 TB of disk space is.
  
   But now I see that the NFS share created is /var/lib/exports/iso. I
   would expect it to be /exports/iso but that doesn't seem to be the case.
  
   Do I need to modify this manually or did I do something wrong.
 
  So you used the default /var/lib/exports/iso while running engine-setup and 
  then changed it manually from the web app?
  Can you upload your setup logs (for example on http://www.fpaste.org/) so 
  we can take a look?
 
 
 
  Kind regards.
 
 
  ___
  Users mailing list
  Users@ovirt.org mailto:Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 
 
  didi/sandro - thoughts?
 
 
 
 
 
 
 
 




[Users] Setting up an ovirt-node

2014-02-28 Thread Andy Michielsen
Hello,

I did a clean install of an ovirt-node with the ISO provided by oVirt.

Everything went fine until I logged on with the admin user and configured
the ovirt-engine's address.

Now I don't have any network connection any more.

I have 2 NICs available and defined only the first one with a static IP.

When I check the network settings in the admin menu it tells me I have
several bond devices.

If I log on as the root user I see that under
/etc/sysconfig/network-scripts there is an ifcfg-em1, an ifcfg-ovirtmgmt
and an ifcfg-brem1.

The last two devices use the same static IP that I defined on ifcfg-em1.

How can I get my network back up and running, as I will need this to connect
to the engine, which is running on another server.

Kind regards.
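For what it's worth, the layout described above is roughly what VDSM creates when it builds the ovirtmgmt bridge: the physical NIC is enslaved to the bridge and the IP address should move to the bridge alone. Illustrative file contents (device names and the address are examples, not taken from this report; in a working setup only the bridge carries the IP):

```ini
# /etc/sysconfig/network-scripts/ifcfg-em1 -- physical NIC, enslaved, no IP
DEVICE=em1
ONBOOT=yes
BRIDGE=ovirtmgmt

# /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt -- the bridge carries the IP
DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.10
NETMASK=255.255.255.0
```

If both the NIC and the bridge end up configured with the same static IP, as reported here, the duplicate address can break connectivity.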


Re: [Users] oVirt 3.5 planning

2014-02-28 Thread Sven Kieske
Bacula?

Am 27.02.2014 20:44, schrieb Itamar Heim:
 I'd rather see integrated with backup solutions to tackle this
 (hopefully, there are relevant open source ones as well)

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH  Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen


Re: [Users] Fresh installation of ovirt-engine

2014-02-28 Thread Andy Michielsen
Hello,

I checked this logfile myself and it doesn't mention my /exports/iso input at
all. I posted the log on http://www.fpaste.org/

Kind regards.




Re: [Users] Fresh installation of ovirt-engine

2014-02-28 Thread Sven Kieske
Sounds like a bug to me.
You should file a bug report at bugzilla.redhat.com to track it.

Am 28.02.2014 09:13, schrieb Andy Michielsen:
 Hello,
 
 No I did the change during the execution of the engine-setup script. When
 it want's to create an ISO_DOMAIN it suggest /var/lib/exports/iso and I
 changed it to /exports/iso.
 
 But when everything is finnished and I check with showmount -e FQDN it
 tells me that it is exporting /var/lib/exports/iso.
 
 Kind regards.



Re: [Users] Setting up an ovirt-node

2014-02-28 Thread Andy Michielsen
Hello,

Will try that. Do I need to configure both NICs with a static IP address?

Kind regards.


2014-02-28 9:25 GMT+01:00 Alon Bar-Lev alo...@redhat.com:




 Hi,

  I suggest you try a different method.
  Try to enable SSH access and set up a password; do not enter the engine address.
  Then add that ovirt-node via the engine's Add Host.

 


Re: [Users] Fresh installation of ovirt-engine

2014-02-28 Thread Sandro Bonazzola
Il 28/02/2014 09:45, Andy Michielsen ha scritto:
 Hello,
 
 I checked this logfile myself and it doesn't mention my /exports/iso input at 
 all. I posted the log on http://www.fpaste.org/

Ehm, what is the paste number / full URL of the paste?


 


Re: [Users] Fresh installation of ovirt-engine

2014-02-28 Thread Sandro Bonazzola
Il 28/02/2014 09:46, Sven Kieske ha scritto:
 Sounds like a bug to me.
 You should file a bugreport at bugzilla.redhat.com to track it.

Let's start by checking what's in the logs :-)


 


Re: [Users] Fresh installation of ovirt-engine

2014-02-28 Thread Sandro Bonazzola
Il 28/02/2014 10:11, Andy Michielsen ha scritto:
 Hello,
 
 Hope this works.

2014-02-27 16:29:32 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND Local ISO domain path [/var/lib/exports/iso]:
2014-02-27 16:29:42 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:RECEIVE /exports/iso
2014-02-27 16:29:42 DEBUG otopi.ovirt_engine_setup.domains domains.check_valid_path:90 validate '/exports/iso' as a valid mount point
2014-02-27 16:29:42 DEBUG otopi.ovirt_engine_setup.domains domains.check_base_writable:104 Attempting to write temp file to /exports/iso
2014-02-27 16:29:42 DEBUG otopi.ovirt_engine_setup.domains domains.check_available_space:122 Checking available space on /exports/iso
2014-02-27 16:29:42 DEBUG otopi.ovirt_engine_setup.domains domains.check_available_space:129 Available space on /exports/iso is 950698Mb
2014-02-27 16:29:42 ERROR otopi.plugins.ovirt_engine_setup.config.iso_domain iso_domain._customization:342 Cannot access mount point /exports/iso: Error: directory /exports/iso is not empty
2014-02-27 16:29:42 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND Local ISO domain path [/var/lib/exports/iso]:
2014-02-27 16:30:55 DEBUG otopi.ovirt_engine_setup.domains domains.check_valid_path:90 validate '/var/lib/exports/iso' as a valid mount point
2014-02-27 16:30:55 DEBUG otopi.ovirt_engine_setup.domains domains.check_base_writable:104 Attempting to write temp file to /var/lib
2014-02-27 16:30:55 DEBUG otopi.ovirt_engine_setup.domains domains.check_available_space:122 Checking available space on /var/lib
2014-02-27 16:30:55 DEBUG otopi.ovirt_engine_setup.domains domains.check_available_space:129 Available space on /var/lib is 17352Mb
2014-02-27 16:30:55 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:215 DIALOG:SEND Local ISO domain name [ISO_DOMAIN]:
2014-02-27 16:30:58 DEBUG otopi.context context.dumpEnvironment:441 ENVIRONMENT DUMP - BEGIN
2014-02-27 16:30:58 DEBUG otopi.context context.dumpEnvironment:456 ENV OVESETUP_CONFIG/isoDomainMountPoint=str:'/var/lib/exports/iso'
2014-02-27 16:30:58 DEBUG otopi.context context.dumpEnvironment:456 ENV OVESETUP_CONFIG/isoDomainName=str:'ISO_DOMAIN'



Setup proposed /var/lib/exports/iso.
You specified /exports/iso as the mount point and it failed validation because the directory was not empty.
Setup then proposed /var/lib/exports/iso again; it was accepted and validated successfully.

I think there's no bug here.
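The behaviour amounts to a simple guard: engine-setup refuses a non-empty target directory and re-prompts with the default. The validation is essentially the following (a sketch with hypothetical /tmp paths, not the actual otopi code):

```shell
# Sketch of the "directory is not empty" check engine-setup applies to a
# proposed ISO domain mount point (illustrative, not the actual otopi code).
check_mount_point() {
    dir=$1
    if [ -d "$dir" ] && [ -n "$(ls -A "$dir")" ]; then
        echo "Error: directory $dir is not empty"
        return 1
    fi
}

mkdir -p /tmp/iso-demo
touch /tmp/iso-demo/leftover-file   # e.g. lost+found or an old ISO
check_mount_point /tmp/iso-demo || echo "rejected, re-prompting with default"

mkdir -p /tmp/iso-demo-empty
check_mount_point /tmp/iso-demo-empty && echo "accepted"
```

So pointing the ISO domain at a directory that already contains files triggers the fallback seen in the log.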


 

Re: [Users] The purpose of Wipe on delete ?

2014-02-28 Thread Dafna Ron
1. You cannot use this option for NFS-based storage since we zero the 
files anyway when we delete the disk (the only way to actually delete 
it on NFS).
2. Configuration on the storage side is the administrator's decision... 
they can choose not to use this option and use a different method on 
the storage side.


Dafna


--
Dafna Ron


Re: [Users] Fresh installation of ovirt-engine

2014-02-28 Thread Andy Michielsen
Hello,

So it's a simple permissions problem. My bad. Apologies.

Kind regards.



Re: [Users] Disk Migration

2014-02-28 Thread Nir Soffer
- Original Message -
 From: Nir Soffer nsof...@redhat.com
 To: d...@redhat.com
 Cc: Maurice James midnightst...@msn.com, Ofer Blaut 
 obl...@redhat.com, users@ovirt.org, Michal Skrivanek
 mskri...@redhat.com, Dan Kenigsberg dan...@redhat.com
 Sent: Thursday, February 27, 2014 9:15:24 AM
 Subject: Re: [Users] Disk Migration
 
 - Original Message -
  From: Dafna Ron d...@redhat.com
  To: Maurice James midnightst...@msn.com
  Cc: Ofer Blaut obl...@redhat.com, users@ovirt.org
  Sent: Wednesday, February 26, 2014 7:34:11 PM
  Subject: Re: [Users] Disk Migration
  
  On 02/26/2014 05:24 PM, Maurice James wrote:
    I have a specific interface set up for migrations. Why do disk
    migrations not use the interface that I have set for migrations? Is
    that by design? Shouldn't it use the interfaces that I have set aside
    for migrations? VM migrations work as they should, but not disk migrations.
 
   I don't think that you can configure an interface for disk migration.
   Disk migration is actually a copy of data from the original disk to
   a new disk created on a new domain, plus a delete of the original disk once
   that is done.
   It's not actually a migration, so I am not sure you can
   configure an interface for it.
   Adding Ofer - perhaps he has a solution, or it's possible and I am not
   aware of it.
 
  I guess that *not* using the migration network for storage operations is
  the expected behavior, to make migration faster and safer.

This seems to be the only documentation for the migration network:
www.ovirt.org/Features/Migration_Network

And it seems that it works as intended.

Nir


Re: [Users] Disk Migration

2014-02-28 Thread Nir Soffer
- Original Message -
 From: Michal Skrivanek mskri...@redhat.com
 To: Nir Soffer nsof...@redhat.com
 Cc: d...@redhat.com, Dan Kenigsberg dan...@redhat.com, Maurice James 
 midnightst...@msn.com, Ofer Blaut
 obl...@redhat.com, Users@ovirt.org Users users@ovirt.org
 Sent: Thursday, February 27, 2014 10:12:26 AM
 Subject: Re: [Users] Disk Migration
 
 
 On Feb 27, 2014, at 08:15 , Nir Soffer nsof...@redhat.com wrote:
 
  
  I guess that *not* using the migration network for storage operation is
  the expected behavior, to make migration faster and safer.
  
  Michal, Dan, can you elaborate on this?
 
  with storage offloading it's probably not going to be significant; however,
  today it likely is.
  Nir, why would not using the migration network make it better? Won't we have the
  same problem as before without a migration network at all, i.e. choking the
  management channel?

Gigs of data sent over the same network used for migrations would make migration
slower when the network is saturated.

 Should we maybe consider a dedicated storage network?

This can be setup now in 3.4.

Sergey, can you explain how this is configured now in 3.4?


Re: [Users] Disk Migration

2014-02-28 Thread Michal Skrivanek

On 28 Feb 2014, at 15:27, Nir Soffer wrote:

 
 Gigs of data sent over the same network used for migrations would make
 migration slower when the network is saturated.

well, you saturate one or the other… still, from the functional perspective it's 
better not to choke the management… so either use the migration network (and the 
migrations will compete with disk moves) or a separate network.
I'd be tempted to say that typically having too many different networks might 
be overkill and difficult to set up/maintain.

 
 Should we maybe consider a dedicated storage network?
 
 This can be setup now in 3.4.
 
 Sergey, can you explain how this is configured now in 3.4?



Re: [Users] Snapshot merging and the effect on underlying LV metadata

2014-02-28 Thread Davis, Richard
A very neat solution. I like that.
Thank you for sharing.



-Original Message-
From: R P Herrold [mailto:herr...@owlriver.com] 
Sent: 27 February 2014 19:20
To: Davis, Richard
Cc: 'users@ovirt.org'
Subject: Snapshot merging and the effect on underlying LV metadata

On Thu, 27 Feb 2014, Davis, Richard wrote:

 I am being told that unless the Wipe After Delete option is set on a 
 vDisk, any subsequent snapshot merging of the related VM will not 
 delete LV metadata (or any data!) from the volume created by the 
 snapshot. Is this correct? I'm kinda hoping not!

It is my belief that deletion cannot be relied upon to have happened in all 
cases.  Some option flag sets in lvm ** do ** persist old data, and so our 
security practice at PMman is to treat data on removed LVs as though it persists.

There are published reports that instances on other public cloud providers have 
been deployed with 'non-wiped' drives in the 'slack space'.  Why run the 
reputational risk?

When we reclaim an LV, we perform a 'renaming' that permits us to spot 'dirty' 
and 'scratched' instances needing wiping.  [We also fill a new VG / PV with LVs 
indicating they need wiping, as we do not wish to expose content if a drive is 
pulled and then re-used after testing, when SMART errors appeared but were not 
enough to disqualify the drive.]

Later a cron-driven process, sensitive to IO load, runs.  It builds a list of 
candidates over a day old, using 'find' and the LV name series showing an LV is 
dirty and scratched.  Then, for each LV found, it fires off a sub-task (when 
load is low), which in turn performs a 'niced' 'shred' operation on that LV, 
followed by the shred 'zeroing' operation.  When load is too high, it sleeps 
for a couple of minutes and retries.

fragment (quoting appears to have been stripped by the archive; reconstructed):
 $_shredCmd = "ionice -c 3 shred -n "
. $_num_passes . " -z " . $_working_lvm;

Only when that sub-process has completed do we 'rename' and later 'remove' a 
given LV, to let its space re-enter the assignment pool
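That cron-driven wipe pass might be sketched as follows (a minimal illustration under stated assumptions, not PMman's actual tooling: the "dirty-" name prefix, the volume group name, the pass count, and the load threshold are all hypothetical):

```shell
#!/bin/sh
# Sketch of a load-sensitive LV wipe loop. Assumptions: LVs awaiting a wipe
# have been renamed with a "dirty-" prefix, and DRY_RUN=1 prints the shred
# command instead of running it (the destructive step needs root).
VG=${VG:-vg_guests}          # hypothetical volume group name
NUM_PASSES=${NUM_PASSES:-3}  # shred overwrite passes before the final zeroing
MAX_LOAD=${MAX_LOAD:-4}      # 1-minute load average above which we sleep

wipe_lv() {
    # Sleep while the box is busy, then shred the LV at idle IO priority;
    # -z adds a final pass of zeros so the LV reads back empty.
    while [ "$(cut -d. -f1 /proc/loadavg 2>/dev/null || echo 0)" -ge "$MAX_LOAD" ]; do
        sleep 120
    done
    cmd="ionice -c 3 nice shred -n $NUM_PASSES -z $1"
    if [ "${DRY_RUN:-0}" = "1" ]; then echo "$cmd"; else $cmd; fi
}

# Candidates: dirty LVs more than a day old. Only rename and lvremove them
# after the wipe has completed, so their extents re-enter the pool clean.
find "/dev/$VG" -name 'dirty-*' -mtime +1 2>/dev/null |
while read -r lv; do
    wipe_lv "$lv"
done
```

Run with DRY_RUN=1 first to confirm which LVs would be shredded before letting it loose on real volumes.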

-- Russ herrold




[Users] [ANN] oVirt 3.4.0 Release Candidate is now available

2014-02-28 Thread Sandro Bonazzola
The oVirt team is pleased to announce that the 3.4.0 Release Candidate is now 
available for testing.

Release notes and information on the changes for this update are still being 
worked on and will be available soon on the wiki[1].
Please be sure to follow the install instructions from the release notes if 
you're going to test it.
The existing repository ovirt-3.4.0-prerelease has been updated to deliver 
this release candidate and future refreshes until the final release.

An oVirt Node ISO is already available, unchanged from the third beta.

You're welcome to join us in testing this release candidate on next week's 
test day [2], scheduled for 2014-03-06!


[1] http://www.ovirt.org/OVirt_3.4.0_release_notes
[2] http://www.ovirt.org/OVirt_3.4_Test_Day

-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com


[Users] virt-v2v

2014-02-28 Thread Maurice James
This is really annoying. This is the output of trying to import a VM from ESX.
My command:
virt-v2v -ic esx://172.16.10.200/?no_verify=1 -o rhev -osd hostname.domain.com:/storage/exports --network ovirtmgmt Tester


part of the output (the repeated progress-bar updates condensed):

Tester_Tester:   4% [=>                       ]
...
Tester_Tester: 100% [=========================] D 0h01m45s
virt-v2v: Didn't receive full volume. Received 1140183276 of 21474836480 bytes.




Re: [Users] [ANN] oVirt 3.4.0 Release Candidate is now available

2014-02-28 Thread Brad House

You're welcome to join us testing this release candidate in next week test day 
[2] scheduled for 2014-03-06!

[1] http://www.ovirt.org/OVirt_3.4.0_release_notes
[2] http://www.ovirt.org/OVirt_3.4_Test_Day


Known issues should list some information about Gluster, I think.
Such as the fact that libgfapi is not currently being used even
when choosing GlusterFS instead of POSIXFS; instead it creates
a POSIX mount and uses that.  This was an advertised 3.3 feature,
so this would be considered a regression or known issue, right?

I was told it was due to BZ #1017289

This has been observed in Fedora 19, though that BZ lists RHEL6.

Thanks!
-Brad


Re: [Users] Snapshot merging and the effect on underlying LV metadata

2014-02-28 Thread Alasdair G Kergon
In lvm2 version 2.02.105 lvconvert gained a --splitsnapshot option to
allow people to wipe snapshot content before releasing the extents
for reallocation.

   --splitsnapshot
      Separates SnapshotLogicalVolume from its origin.  The volume
      that is split off contains the chunks that differ from the
      origin along with the metadata describing them.  This volume
      can be wiped and then destroyed with lvremove.  The inverse
      of --snapshot.
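The intended workflow might look like this (a sketch only: the VG and LV names are hypothetical, the commands need root on a host running lvm2 >= 2.02.105, and the pass count for the wipe is a matter of local policy):

```shell
# Hypothetical names: VG "vg_guests", snapshot LV "vm1-snap".
# 1. Detach the snapshot's chunks (the data differing from the origin)
#    into a standalone volume; the origin LV is left untouched.
lvconvert --splitsnapshot vg_guests/vm1-snap

# 2. Wipe the split-off volume before its extents return to the free pool.
shred -n 1 -z /dev/vg_guests/vm1-snap

# 3. Only now release the extents for reallocation.
lvremove vg_guests/vm1-snap
```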

Alasdair



Re: [Users] oVirt 3.5 planning

2014-02-28 Thread Jonas Israelsson

  
  
That is the extended tab. If only having user-permissions, that tab
is not available, nor should it be.

My point is, the basic tab lacks functions, and the extended tab gives
an ordinary user far too many options (which, as earlier stated, won't
work anyway, since an ordinary user won't get any permission to create
new machines, disks, etc.), hence my suggestion to add a few new features
to the basic tab.

Rgds Jonas


On 27/02/14 14:42, Maurice James wrote:

 Its a "plug" icon

 Date: Thu, 27 Feb 2014 15:31:34 +0200
 From: ih...@redhat.com
 To: jo...@israelsson.com; users@ovirt.org; mskri...@redhat.com
 Subject: Re: [Users] oVirt 3.5 planning

 On 02/25/2014 12:00 PM, Jonas Israelsson wrote:
  Not sure if this already exists, but I have had to help quite a few users
  that have only user-permissions to restart their VM if they managed to
  hang the OS.
  This since they lack the permission to power off the machine, and
  shutdown simply is not enough. Giving them more permission can help,
  since they then will have the extended tab with more options, including
  the ability to power off a VM; this however IMO is overkill, since they
  are then presented with a vast number of options such as add disk, NIC,
  networks etc, all not working since they have no (and should have no)
  permission on those objects.

  So adding a power off button to the basic view in the user portal, and
  extending the ordinary user-permission to also include power off, would
  I think be a good idea.

  Rgds Jonas

 michal - don't we have power off vm in the basic user portal?


Re: [Users] oVirt 3.5 planning

2014-02-28 Thread Juan Pablo Lorier
Hi,

I'm kind of out of date at this time, but I'd like to propose something
that was meant for 3.4 and I don't know if it made it in: use any NFS
share as either an ISO or export domain, so you can just copy into the
share and then update the DB in some way.
Also, make the export domain shareable among DCs, as the ISO domain is;
that is an RFE from a long time ago, and a useful one.
Attaching and detaching domains is both time-consuming and boring.
Also, allow tagged and untagged networks on top of the same NIC.
Everybody does that except for oVirt.
I'd also like to say that though I have huge enthusiasm for oVirt's fast
evolution, I think you may need to slow down on adding new features
until most of the RFEs that are over a year old are done, because
otherwise it's kind of disappointing to open an RFE just to see it
sleeping for so long. Don't take this wrong: I've been listened to and
helped by the team every time I needed it, and I'm thankful for that.

 Regards,


Re: [Users] SD Disk's Logical Volume not visible/activated on some nodes

2014-02-28 Thread Nir Soffer
- Original Message -
 From: Boyan Tabakov bl...@alslayer.net
 To: Nir Soffer nsof...@redhat.com
 Cc: users@ovirt.org
 Sent: Tuesday, February 25, 2014 11:53:45 AM
 Subject: Re: [Users] SD Disk's Logical Volume not visible/activated on some 
 nodes
 
 Hello,
 
 On 22.2.2014, 22:19, Nir Soffer wrote:
  - Original Message -
  From: Boyan Tabakov bl...@alslayer.net
  To: Nir Soffer nsof...@redhat.com
  Cc: users@ovirt.org
  Sent: Wednesday, February 19, 2014 7:18:36 PM
  Subject: Re: [Users] SD Disk's Logical Volume not visible/activated on
  some nodes
 
  Hello,
 
  On 19.2.2014, 17:09, Nir Soffer wrote:
  - Original Message -
  From: Boyan Tabakov bl...@alslayer.net
  To: users@ovirt.org
  Sent: Tuesday, February 18, 2014 3:34:49 PM
  Subject: [Users] SD Disk's Logical Volume not visible/activated on some
  nodes
 
  Consequently, when creating/booting
  a VM with the said disk attached, the VM fails to start on host2,
  because host2 can't see the LV. Similarly, if the VM is started on
  host1, it fails to migrate to host2. Extract from host2 log is in the
  end. The LV in question is 6b35673e-7062-4716-a6c8-d5bf72fe3280.
 
  As far as I could quickly track in the vdsm code, there are only calls to lvs
  and not to lvscan or lvchange, so the host2 LVM doesn't fully refresh.

lvs should see any change on the shared storage.

  The only workaround so far has been to restart VDSM on host2, which
  makes it refresh all LVM data properly.

When vdsm starts, it calls multipath -r, which ensures that we see all physical 
volumes.

 
  When is host2 supposed to pick up any newly created LVs in the SD VG?
  Any suggestions where the problem might be?
 
  When you create a new lv on the shared storage, the new lv should be
  visible on the other host. Lets start by verifying that you do see
  the new lv after a disk was created.
 
  Try this:
 
  1. Create a new disk, and check the disk uuid in the engine ui
  2. On another machine, run this command:
 
  lvs -o vg_name,lv_name,tags
 
  You can identify the new lv using tags, which should contain the new disk
  uuid.
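For example, the check on the second host might look like this (the disk UUID below is a hypothetical placeholder; substitute the one shown in the engine UI):

```shell
# Hypothetical disk UUID copied from the engine UI
DISK_UUID="f81d4fae-7dec-11d0-a765-00a0c91e6bf6"

# List VG name, LV name and tags; the LV backing the new disk should show
# the disk UUID in its tags on every host that sees the shared storage.
lvs --noheadings -o vg_name,lv_name,tags 2>/dev/null | grep "$DISK_UUID" \
    || echo "LV for disk $DISK_UUID not visible on this host"
```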
 
  If you don't see the new lv from the other host, please provide
  /var/log/messages
  and /var/log/sanlock.log.
 
  Just tried that. The disk is not visible on the non-SPM node.
  
  This means that storage is not accessible from this host.
 
 Generally, the storage seems accessible ok. For example, if I restart
 the vdsmd, all volumes get picked up correctly (become visible in lvs
 output and VMs can be started with them).

Let's repeat this test, but now, if you do not see the new lv, please 
run:

multipath -r

And report the results.

  Here's the full
  sanlock.log for that host:
 ...
  0x7fc37c0008c0:0x7fc37c0008d0:0x7fc391f5f000 ioto 10 to_count 1
  2014-02-06 05:24:10+0200 563065 [31453]: s1 delta_renew read rv -202
  offset 0 /dev/3307f6fa-dd58-43db-ab23-b1fb299006c7/ids
  
  Sanlock cannot write to the ids lockspace
 
 Which line shows that sanlock can't write? The messages are not very
 human readable.

The one above my comment at 2014-02-06 05:24:10+0200

I suggest setting the sanlock debug level to get more detailed output in the 
sanlock log.

Edit /etc/sysconfig/sanlock and add:

# -L 7: use debug level logging to sanlock log file
SANLOCKOPTS="$SANLOCKOPTS -L 7"

 
  Last entry is from yesterday, while I just created a new disk.
  
  What was the status of this host in the engine from 2014-02-06
  05:24:10+0200 to 2014-02-18 14:22:16?
  
  vdsm.log and engine.log for this time frame will make it more clear.
 
 Host was up and running. The vdsm and engine logs are quite large, as we
 were running some VM migrations between the hosts. Any pointers at what
 to look for? For example, I noticed many entries in engine.log like this:

It will be hard to make any progress without the logs.

 
 One warning that I keep seeing in vdsm logs on both nodes is this:
 
 Thread-1617881::WARNING::2014-02-24
 16:57:50,627::sp::1553::Storage.StoragePool::(getInfo) VG
 3307f6fa-dd58-43db-ab23-b1fb299006c7's metadata size exceeded
  critical size: mdasize=134217728 mdafree=0

Can you share the output of the command below?

lvs -o 
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
 
I suggest that you open a bug and attach engine.log, /var/log/messages, 
vdsm.log and sanlock.log there.

Please also give detailed info on the host os, vdsm version etc.

Nir


Re: [Users] virt-v2v

2014-02-28 Thread Douglas Schilling Landgraf

Hi,

On 02/28/2014 11:13 AM, Maurice James wrote:

This is really annoying. This is the output of trying to import a vm
from esx


I would suggest enabling debugging and sharing the output here so the virt-v2v 
people can help.

https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.2/html-single/V2V_Guide/index.html#sect-v2v_general



My command:
virt-v2v -ic esx://172.16.10.200/?no_verify=1 -o rhev -osd
hostname.domain.com:/storage/exports --network ovirtmgmt Tester



part of the output (the repeated progress-bar updates condensed):

Tester_Tester:   4% [=>                       ]
...
Tester_Tester: 100% [=========================] D 0h01m45s
virt-v2v: Didn't receive full volume. Received 1140183276 of 21474836480 bytes.









--
Cheers
Douglas


Re: [Users] virt-v2v

2014-02-28 Thread Maurice James
I ran (the redirection appears to have been mangled by the archive; it was 2>&1):
LIBGUESTFS_TRACE=1 LIBGUESTFS_DEBUG=1 virt-v2v -ic esx://172.16.10.200/?no_verify=1 -o rhev -osd host.domain.com:/storage/exports --network ovirtmgmt Tester 2>&1 | tee virt-v2v.log

The contents of virt-v2v.log are:
virt-v2v: Transferring storage volume Tester_Tester: 21474836480 bytes
virt-v2v: Didn't receive full volume. Received 1139880072 of 21474836480 bytes.



 Date: Fri, 28 Feb 2014 16:22:15 -0500
 From: dougsl...@redhat.com
 To: midnightst...@msn.com
 CC: users@ovirt.org; rjo...@redhat.com; mbo...@redhat.com
 Subject: Re: [Users] virt-v2v
 
 Hi,
 
 On 02/28/2014 11:13 AM, Maurice James wrote:
  This is really annoying. This is the output of trying to import a vm
  from esx
 
 I would suggest enabling debugging and sharing the output here so the 
 virt-v2v people can help.
 https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.2/html-single/V2V_Guide/index.html#sect-v2v_general
 
 
  My command:
  virt-v2v -ic esx://172.16.10.200/?no_verify=1 -o rhev -osd
  hostname.domain.com:/storage/exports --network ovirtmgmt Tester
 
 
 
  part of the output (the repeated progress-bar updates condensed):
 
  Tester_Tester:   4% [=>                       ]
  ...
  Tester_Tester: 100% [=========================] D 0h01m45s
  virt-v2v: Didn't receive full volume. Received 1140183276 of 21474836480 bytes.
 
 
 
 
 
 
 
 
 -- 
 Cheers
 Douglas


[Users] Windows Guest Agent

2014-02-28 Thread Lindsay Mathieson
The KVM release notes (http://wiki.qemu.org/ChangeLog/1.7#Guest_agent)
mention that the Windows guest agent now supports VSS.

Whereabouts can I download this?

NB: I am not a redhat customer.

-- 
Lindsay


[Users] Snapshot merging and the effect on underlying LV metadata

2014-02-28 Thread R P Herrold
On Fri, 28 Feb 2014, Alasdair G Kergon wrote:

 In lvm2 version 2.02.105 lvconvert gained a --splitsnapshot option to
 allow people to wipe snapshot content before releasing the extents
 for reallocation.
 
 --splitsnapshot
    Separates SnapshotLogicalVolume from its origin.  The volume
    that is split off contains the chunks that differ from the
    origin along with the metadata describing them.  This volume
    can be wiped and then destroyed with lvremove.  The inverse
    of --snapshot.

Nice to know ... we use the snapshot feature heavily in our 
virtualization, but as: 
CentOS 6 is at lvm2-2.02.100-8.el6.x86_64, and 
C 5 at lvm2-2.02.88-12.el5, 

we will need to wait a bit before relying on its presence.  
Any chance of a re-basing / refresh / backport at least into 
RHEL 6 (we have only one Xen-oriented dom0 at this point on 
C5)?

Thanks

--Russ herrold


[Users] Host requirements for 3.4 compatibility

2014-02-28 Thread Darren Evenson
I have updated my engine to 3.4 rc.

I created a new cluster with 3.4 compatibility version, and then I moved a host 
I had in maintenance mode to the new cluster.

When I activate it, I get the error "Host kvmhost2 is compatible with versions 
(3.0,3.1,3.2,3.3) and cannot join Cluster Cluster_new which is set to version 
3.4."

My host was Fedora 20 with the latest updates:

Kernel Version: 3.13.4 - 200.fc20.x86_64
KVM Version: 1.6.1 - 3.fc20
LIBVIRT Version: libvirt-1.1.3.3-5.fc20
VDSM Version: vdsm-4.13.3-3.fc20

So I enabled fedora-virt-preview and updated, but I still get the same error, 
even now with libvirt 1.2.1:

Kernel Version: 3.13.4 - 200.fc20.x86_64
KVM Version: 1.7.0 - 5.fc20
LIBVIRT Version: libvirt-1.2.1-3.fc20
VDSM Version: vdsm-4.13.3-3.fc20

What am I missing?

- Darren



Re: [Users] Windows Guest Agent

2014-02-28 Thread Nicholas Kesick
Lindsay,
You have to build the Windows guest agent. I have some directions that I need 
to add to the wiki about that. I'll get them added and reply with a link, if 
you don't figure it out before then.

- Nick

--- Original Message ---

From: Lindsay Mathieson lindsay.mathie...@gmail.com
Sent: February 28, 2014 2:44 PM
To: users@ovirt.org
Subject: [Users] Windows Guest Agent

The KVM release notes (http://wiki.qemu.org/ChangeLog/1.7#Guest_agent)
mention that the Windows guest agent now supports VSS.

Whereabouts can I download this?

NB: I am not a redhat customer.

--
Lindsay


Re: [Users] [ANN] oVirt 3.4.0 Release Candidate is now available

2014-02-28 Thread Darrell Budic
Started testing this on two self-hosted clusters, with mixed results. There 
were updates from 3.4.0 beta 3.

On both, I was informed the system was going to reboot in 2 minutes while it 
was still installing yum updates.

On the faster system, the whole update process finished before the 2 minutes 
were up, the VM restarted, and all appears normal.

On the other, slower cluster, the 2 minutes hit while the yum updates were 
still being installed, and the system rebooted. It continued rebooting every 3 
minutes or so, and the engine console web pages were not available because the 
engine doesn't start. It did this at least 3 times before I went ahead and 
reran engine-setup, which completed successfully. The system stopped restarting 
and the web interface was available again. A quick perusal of the system and 
engine-setup logs didn't reveal what requested the reboot.

That was rather impolite of something to do without warning :) At least it 
was recoverable. Scheduling the reboot while the yum updates were still 
running seems like a poor idea as well.

  -Darrell

On Feb 28, 2014, at 10:11 AM, Sandro Bonazzola sbona...@redhat.com wrote:

 The oVirt team is pleased to announce that the 3.4.0 Release Candidate is now 
 available for testing.
 
 Release notes and information on the changes for this update are still being 
 worked on and will be available soon on the wiki[1].
 Please ensure to follow install instruction from release notes if you're 
 going to test it.
 The existing repository ovirt-3.4.0-prerelease has been updated for 
 delivering this release candidate and future refreshes until final release.
 
 An oVirt Node iso is already available, unchanged from third beta.
 
 You're welcome to join us testing this release candidate in next week test 
 day [2] scheduled for 2014-03-06!
 
 
 [1] http://www.ovirt.org/OVirt_3.4.0_release_notes
 [2] http://www.ovirt.org/OVirt_3.4_Test_Day
 
 -- 
 Sandro Bonazzola
 Better technology. Faster innovation. Powered by community collaboration.
 See how it works at redhat.com


Re: [Users] Snapshot merging and the effect on underlying LV metadata

2014-02-28 Thread Alasdair G Kergon
On Fri, Feb 28, 2014 at 02:45:28PM -0500, R P Herrold wrote:
 Nice to know ... we use the snapshot feature heavily in our 
 virtualization, but as: 
   CentOS 6 is at lvm2-2.02.100-8.el6.x86_64, and 
   C 5 at lvm2-2.02.88-12.el5, 
 we will need to wait a bit before relying on its presence.  
 Any chance of a re-basing / refresh / backport at least into 
 RHEL 6 (we have only one Xen-oriented dom0 at this point on 
 C5)?
 
It will be in RHEL6.6.

Alasdair



[Users] Setting up proof of concept

2014-02-28 Thread Andy Michielsen
Hello,

I would like some help in setting up an oVirt solution to provide an
environment for me and my co-workers.

I have an HP ProLiant 380 G5 with 2 quad-cores, 32 GB of RAM and 8 300 GB
disks set up in a RAID 10. It also has 6 1 Gb NICs. This one I am thinking
of using as the oVirt engine and NFS server.

I also have a Dell R420 with 2 hex-cores, 64 GB of RAM, 2 146 GB disks and
2 1 Gb NICs. This one I would like to use as an oVirt node.

I have been thinking about it and trying stuff out, but always seem to be
doing something wrong.

How would you go about this? How would you use the resources available?

Do I need to configure the NICs in a specific way to separate NFS access,
or not?

I would like to use a separate, virtual network for the VMs.

All the tutorials I have found are not so clear to me.

Any help would be greatly appreciated on this. Or just point me in the
right direction.

Kind regards.