[Users] Copy VM to external disk and import back

2014-01-25 Thread Alan Murrell
Hello,

I did some updates to my test installation from an unofficial repo (I
had installed a compiled version of qemu-kvm-rhev from a private repo).
After rebooting, I was encountering errors when adding networks (when I
tried to save, it would say that my already-existing networks were
being added twice).

Anyway, I'm not sure what exactly broke, as there were some other oVirt
updates as well.  I was planning on just wiping my all-in-one install
and starting from scratch, but I first wanted to see if it was possible
to copy my Win7 VM to an external USB drive, then import it back in
after I have a fresh all-in-one install.

I already have an export domain and have tested exporting one of my
other VMs, but am not sure if I would then be able to:

  1.) Copy the resulting directory to external storage
  2.) Do my fresh install
  3.) Copy the directory from external storage to my new export domain
  4.) Import the VM from the export domain

I guess my question is: is the export domain like the ISO domain? Can I
copy files directly to it (with the appropriate metadata, of course) and,
after a few minutes, have oVirt automatically see the contents and be
able to import them back in?

Thanks! :-)

-Alan


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Copy VM to external disk and import back

2014-01-25 Thread Itamar Heim
Yes. You can copy files to and from the export domain. You can also just
detach it, and attach it to the new installation later.
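
For anyone following along, here is a minimal sketch of the copy-out/copy-back
steps; the paths below are hypothetical, and it assumes the export domain is a
plain NFS directory owned by vdsm:kvm (36:36):

# Put the export domain into Maintenance (and detach it) in the webadmin first,
# then copy the whole domain directory to the USB disk:
rsync -a --progress /exports/export-domain/ /mnt/usb/export-domain-backup/

# After the fresh all-in-one install, copy it back to the new export path:
rsync -a --progress /mnt/usb/export-domain-backup/ /exports/export-domain/

# Make sure ownership survived the round trip; oVirt expects vdsm:kvm (36:36):
chown -R 36:36 /exports/export-domain

With ownership and the metadata files intact, attaching the export domain (or
the detach/attach route mentioned above) should let the import dialog list the
VM again.
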
On Jan 25, 2014 10:37 AM, Alan Murrell li...@murrell.ca wrote:

 Hello,

 I did some updates to my test installation from an unofficial repo (I
 had installed a compiled version of qemu-kvm-rhev from a private repo).
 After rebooting, I was encountering errors when adding networks (when I
 tried to save, it would say that my already-existing networks were
 being added twice).

 Anyway, not sure what exactly broke, as there were some other oVirt
 updates as well.  I was planning on just wiping my all-in-one install
 and starting from scratch, but I first wanted to see if it was possible
 to copy my Win7 VM to an external USB drive, then import it back in
 after I have a fresh all-in-one install?

 I already have an export domain and have tested exporting one of my
 other VMs, but am not sure if I would then be able to:

   1.) Copy the resulting directory to external storage
   2.) Do my fresh install
   3.) Copy the directory from external storage to my new export domain
   4.) Import the VM from the export domain

 I guess my question is: is the export domain like the ISO domain? Can I
 copy files directly to it (with the appropriate metadata, of course) and,
 after a few minutes, have oVirt automatically see the contents and be
 able to import them back in?

 Thanks! :-)

 -Alan


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Hosted-engine runtime issues (3.4 BETA)

2014-01-25 Thread Frank Wall
Hi,

finally I've got the new hosted-engine feature running on
RHEL6 using oVirt 3.4 BETA/nightly. I've come across a few
issues and wanted to clarify if this is the desired 
behaviour:

1.) hosted-engine storage domain not visible in GUI
The NFS-Storage I've used to install the hosted-engine
is not visible in oVirt's Admin Portal. Though it is mounted 
on my oVirt Node below /rhev/data-center/mnt/. I tried to
import this storage domain, but apparently this fails because
it's already mounted.
Is there any way to make this storage domain visible?

2.) hosted-engine VM devices are not visible in GUI
The disk and network devices are not visible in the
admin portal. Thus I'm unable to change anything.
Is this intended? If so, how am I supposed to make changes?

3.) move hosted-engine VM to a different storage
Because of all of the above I seem to be unable to move
my hosted-engine VM to a different NFS-Storage. How can
this be done?


Thanks
- Frank
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Reboot causes poweroff of VM 3.4 Beta

2014-01-25 Thread Jon Archer

Hi,

Seem to be suffering an issue in 3.4 where, if a VM is rebooted, it
actually shuts down; this occurs for all guests regardless of the OS
installed within.


Anyone seen this?

Jon
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Copy VM to external disk and import back

2014-01-25 Thread Alan Murrell
On Sat, 2014-01-25 at 12:42 +0200, Itamar Heim wrote:
 yes. You can copy files to and from the export domain.

OK, good to know.  So basically as long as I keep everything intact
(the meta file, etc.), there should be no problem getting it
imported back in.
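
To sanity-check the copy before importing, the export domain layout can be
listed; the path is the same hypothetical one as above, and the UUID-named
files are whatever your export produced:

# Each exported VM keeps an OVF under master/vms/ and its disk volumes plus
# .meta files under images/ -- verify they all made the trip:
find /exports/export-domain -name '*.ovf'
find /exports/export-domain -name '*.meta'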

-Alan


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Reboot causes poweroff of VM 3.4 Beta

2014-01-25 Thread Roy Golan
Please attach engine.log and vdsm.log
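
For reference, on a default install those logs live at the paths below
(adjust if your layout differs):

# On the machine running ovirt-engine:
tail -n 1000 /var/log/ovirt-engine/engine.log > engine.log.txt
# On the host that was running the VM:
tail -n 1000 /var/log/vdsm/vdsm.log > vdsm.log.txt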

On Jan 25, 2014 5:59 PM, Jon Archer j...@rosslug.org.uk wrote:

 Hi, 

 Seem to be suffering an issue in 3.4 where, if a VM is rebooted, it
 actually shuts down; this occurs for all guests regardless of the OS
 installed within.

 Anyone seen this?

 Jon
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] [vdsm] The machine type of one cluster

2014-01-25 Thread Itamar Heim

On 01/25/2014 04:23 AM, Kewei Yu wrote:

Hi all:
 There is a machine type per cluster; it decides which QEMU machine
type will be used. When we add the first host to a cluster, a default
machine type is shown. We can change the engine DB's value to
set the machine type.
 I just want to know how the cluster chooses the default machine type. Is
it decided by VDSM, by QEMU, or is it just a fixed value in the engine's DB?

Regard
Kewei





Basically, for Fedora it's 'pc', which may have some live migration
issues between different versions in the same cluster.
For .el6, it's 'rhel63/rhel64/rhel65/etc.', which is a stable definition
of emulation mode for the cluster (i.e., even a .el7 host should live
migrate to .el6 if we specify its emulation mode as rhel65, etc.)


The engine defines the expected emulation mode per cluster.
VDSM reports what it gets from libvirt/QEMU, so the engine can check the
host is a match.
If the first host in the cluster is Fedora, it will be set to 'pc'; if
it's .el6, it will be set to the 'rhelxx' option.
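
As a quick way to see what a given host actually offers, one can look at what
VDSM and QEMU report; a rough sketch, assuming an .el6 host with qemu-kvm in
the usual libexec location:

# What VDSM reports to the engine (look for the emulatedMachines list):
vdsClient -s 0 getVdsCaps | grep -A5 emulatedMachines

# What the QEMU binary itself supports:
/usr/libexec/qemu-kvm -M ?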

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Hosted-engine runtime issues (3.4 BETA)

2014-01-25 Thread Itamar Heim

On 01/25/2014 03:43 PM, Frank Wall wrote:

Hi,

finally I've got the new hosted-engine feature running on
RHEL6 using oVirt 3.4 BETA/nightly. I've come across a few
issues and wanted to clarify if this is the desired
behaviour:

1.) hosted-engine storage domain not visible in GUI
The NFS-Storage I've used to install the hosted-engine
is not visible in oVirt's Admin Portal. Though it is mounted
on my oVirt Node below /rhev/data-center/mnt/. I tried to
import this storage domain, but apparently this fails because
it's already mounted.
Is there any way to make this storage domain visible?


not yet.



2.) hosted-engine VM devices are not visible in GUI
The disk and network devices are not visible in the
admin portal. Thus I'm unable to change anything.
Is this intended? If so, how am I supposed to make changes?


The VM should be visible; the disks/NICs, not yet.



3.) move hosted-engine VM to a different storage
Because of all of the above I seem to be unable to move
my hosted-engine VM to a different NFS-Storage. How can
this be done?


Not yet (from the webadmin).
If you shut down the engine and fix the config manually, it should be doable.
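
A rough, untested sketch of where the relevant pieces live (assuming a default
3.4 hosted-engine setup; the new export path is made up). The engine VM's data
still has to be copied to the new NFS export before repointing anything:

# Stop the HA services on every hosted-engine host first:
service ovirt-ha-agent stop
service ovirt-ha-broker stop

# The storage pointer lives here; change the storage= line to the new export:
vi /etc/ovirt-hosted-engine/hosted-engine.conf
# e.g. storage=newnfs.example.com:/export/hosted-engine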




Thanks
- Frank



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Extremely poor disk access speeds in Windows guest

2014-01-25 Thread Steve Dainard
Thanks for the responses everyone, really appreciate it.

I've condensed the other questions into this reply.


Steve,
 What is the CPU load of the GlusterFS host when comparing the raw brick
 test to the gluster mount point test? Give it 30 seconds and see what top
 reports. You’ll probably have to significantly increase the count on the
 test so that it runs that long.

 - Nick



Gluster mount point:

*4K* on GLUSTER host
[root@gluster1 rep2]# dd if=/dev/zero of=/mnt/rep2/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 100.076 s, 20.5 MB/s

Top reported this right away:
PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND

 1826 root  20   0  294m  33m 2540 S 27.2  0.4   0:04.31 glusterfs

 2126 root  20   0 1391m  31m 2336 S 22.6  0.4  11:25.48 glusterfsd

Then at about 20+ seconds top reports this:
  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND

 1826 root  20   0  294m  35m 2660 R 141.7  0.5   1:14.94 glusterfs

 2126 root  20   0 1392m  31m 2344 S 33.7  0.4  11:46.56 glusterfsd

*4K* Directly on the brick:
dd if=/dev/zero of=test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 4.99367 s, 410 MB/s

 7750 root  20   0  102m  648  544 R 50.3  0.0   0:01.52 dd

 7719 root  20   0 000 D  1.0  0.0   0:01.50 flush-253:2

Same test, gluster mount point on OVIRT host:
dd if=/dev/zero of=/mnt/rep2/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 42.4518 s, 48.2 MB/s

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND

 2126 root  20   0 1396m  31m 2360 S 40.5  0.4  13:28.89 glusterfsd


Same test, on OVIRT host but against NFS mount point:
dd if=/dev/zero of=/mnt/rep2-nfs/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 18.8911 s, 108 MB/s

PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND

 2141 root  20   0  550m 184m 2840 R 84.6  2.3  16:43.10 glusterfs

 2126 root  20   0 1407m  30m 2368 S 49.8  0.4  13:49.07 glusterfsd


Interesting: it looks like if I use an NFS mount point, I incur a CPU hit
on two processes instead of just the daemon. I also get much better
performance if I'm not running dd (FUSE) on the GLUSTER host.


The storage servers are a bit older, but are both dual socket quad core
opterons with 4x 7200rpm drives.


A block size of 4k is quite small, so the context-switch overhead
involved with FUSE would be more perceivable.

Would it be possible to increase the block size for dd and test?



 I'm in the process of setting up a share from my desktop and I'll see if
I can bench between the two systems. Not sure if my ssd will impact the
tests, I've heard there isn't an advantage using ssd storage for glusterfs.


Do you have any pointers to this source of information? Typically glusterfs
performance for virtualization workloads is bound by the slowest element
in the entire stack. Usually storage/disks happen to be the bottleneck and
SSD storage does benefit glusterfs.

-Vijay


I had a couple technical calls with RH (re: RHSS), and when I asked if
SSD's could add any benefit I was told no. The context may have been in a
product comparison to other storage vendors, where they use SSD's for
read/write caching, versus having an all SSD storage domain (which I'm not
proposing, but which is effectively what my desktop would provide).

Increasing bs against NFS mount point (gluster backend):
dd if=/dev/zero of=/mnt/rep2-nfs/test1 bs=128k count=16000
16000+0 records in
16000+0 records out
2097152000 bytes (2.1 GB) copied, 19.1089 s, 110 MB/s


GLUSTER host top reports:
  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND

 2141 root  20   0  550m 183m 2844 R 88.9  2.3  17:30.82 glusterfs

 2126 root  20   0 1414m  31m 2408 S 46.1  0.4  14:18.18 glusterfsd


So roughly the same performance as 4k writes remotely. I'm guessing if I
could randomize these writes we'd see a large difference.


Check this thread out:
http://raobharata.wordpress.com/2012/10/29/qemu-glusterfs-native-integration/
It's quite dated but I remember seeing similar figures.

In fact, when I used FIO on a libgfapi-mounted VM I got slightly faster
read/write speeds than on the physical box itself (I assume because of some
level of caching). On NFS it was close to half. You'll probably get
more interesting results using FIO as opposed to dd.

( -Andrew)
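
In that spirit, a minimal fio run one could try against the same NFS mount
(the file path reuses the mount point from the tests above; fio and libaio
are assumed to be installed):

# 60 seconds of direct, random 4k writes into a 2G test file:
fio --name=randwrite-test --filename=/mnt/rep2-nfs/fio-test \
    --rw=randwrite --bs=4k --size=2G --ioengine=libaio --direct=1 \
    --iodepth=16 --runtime=60 --time_based --group_reporting

Random 4k writes at a deeper queue depth should separate the FUSE, NFS and
libgfapi paths much more clearly than a sequential dd.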


Sorry Andrew, I meant to reply to your other message - it looks like CentOS
6.5 can't use libgfapi right now; I stumbled across this info in a couple of
threads. Something about how the CentOS build has different flags set at
build time for RHEV snapshot support than RHEL, so native gluster storage
domains are disabled because snapshot support is assumed and would break
otherwise. I'm assuming this is still valid as I cannot get a storage

Re: [Users] [vdsm] The machine type of one cluster

2014-01-25 Thread Kewei Yu
2014/1/26 Itamar Heim ih...@redhat.com

 On 01/25/2014 04:23 AM, Kewei Yu wrote:

 Hi all:
  There is a machine type per cluster; it decides which QEMU machine
 type will be used. When we add the first host to a cluster, a default
 machine type is shown. We can change the engine DB's value to set the
 machine type.
  I just want to know how the cluster chooses the default machine type. Is
 it decided by VDSM, by QEMU, or is it just a fixed value in the engine's DB?

 Regard
 Kewei




 Basically, for Fedora it's 'pc', which may have some live migration issues
 between different versions in the same cluster.
 For .el6, it's 'rhel63/rhel64/rhel65/etc.', which is a stable definition of
 emulation mode for the cluster (i.e., even a .el7 host should live migrate
 to .el6 if we specify its emulation mode as rhel65, etc.)

 The engine defines the expected emulation mode per cluster.
 VDSM reports what it gets from libvirt/QEMU, so the engine can check the
 host is a match.
 If the first host in the cluster is Fedora, it will be set to 'pc'; if it's
 .el6, it will be set to the 'rhelxx' option.

Thanks for your answer, it is helpful to me.



Regard
Kewei
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] The machine type of one cluster

2014-01-25 Thread Kewei Yu
Hi all:
There is a machine type per cluster; it decides which QEMU machine type
will be used. When we add the first host to a cluster, a default machine
type is shown. We can change the engine DB's value to set the machine type.
I just want to know how the cluster chooses the default machine type. Is it
decided by VDSM, by QEMU, or is it just a fixed value in the engine's DB?

Regard
Kewei
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] [SOLVED] Guest Agent Data under Network Interfaces empty

2014-01-25 Thread Yedidyah Bar David
- Original Message -
 From: Itamar Heim ih...@redhat.com
 To: Yedidyah Bar David d...@redhat.com, Joop jvdw...@xs4all.nl
 Cc: users users@ovirt.org, Moti Asayag masa...@redhat.com, Lior 
 Vernia lver...@redhat.com
 Sent: Thursday, January 23, 2014 6:45:11 PM
 Subject: Re: [Users] [SOLVED] Guest Agent Data under Network Interfaces 
 empty
 
 On 01/23/2014 04:20 PM, Yedidyah Bar David wrote:
  - Original Message -
  From: Yedidyah Bar David d...@redhat.com
  To: Joop jvdw...@xs4all.nl
  Cc: users users@ovirt.org
  Sent: Thursday, January 23, 2014 3:30:27 PM
  Subject: Re: [Users] Guest Agent Data under Network Interfaces empty
 
  - Original Message -
  From: Joop jvdw...@xs4all.nl
  Cc: users users@ovirt.org
  Sent: Thursday, January 23, 2014 3:25:15 PM
  Subject: Re: [Users] Guest Agent Data under Network Interfaces empty
 
  Yedidyah Bar David wrote:
  Hi all,
 
  I installed ovirt engine 3.4 beta with two VMs - one opensuse 13.1
  with ovirt-guest-agent from [1] and another fedora 19 with oga from
  fedora. Both of them seem to work well - I can see installed
  applications,
  logged in user, memory usage. But in both of them, under Network
  Interfaces,
  the Guest Agent Data tab on the right has just headers, with no data.
 
  'vdsClient -s 0 getAllVmStats' on the host does show such data correctly
  for both VMs.
 
  Am I missing anything? Is it a bug, or I should do something to get
  there
  data from the agent (through vdsm)?
 
  I'm guessing that you're missing ethtool and/or python-ethtool? (sorry
  can't find the right name right now)
 
  Both have python-ethtool, which is a dependency of the guest agent.
  And vdsm does report correctly - I am pretty certain it's a problem in the
  engine and not on the host/VMs.
 
  Thanks anyway,
  --
  Didi
 
 
  Found https://bugzilla.redhat.com/907781, and following comment 7 there,
  restarted the browser (logout/login was not enough) and now it's ok.
 
 why would that be an ok behavior?

I didn't say it's ok - I opened bz #1057163 for it.
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Extremely poor disk access speeds in Windows guest

2014-01-25 Thread Itamar Heim

On 01/26/2014 02:37 AM, Steve Dainard wrote:

Thanks for the responses everyone, really appreciate it.

I've condensed the other questions into this reply.


Steve,
What is the CPU load of the GlusterFS host when comparing the raw
brick test to the gluster mount point test? Give it 30 seconds and
see what top reports. You’ll probably have to significantly increase
the count on the test so that it runs that long.

- Nick



Gluster mount point:

*4K* on GLUSTER host
[root@gluster1 rep2]# dd if=/dev/zero of=/mnt/rep2/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 100.076 s, 20.5 MB/s

Top reported this right away:
PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  1826 root  20   0  294m  33m 2540 S 27.2  0.4   0:04.31 glusterfs
  2126 root  20   0 1391m  31m 2336 S 22.6  0.4  11:25.48 glusterfsd

Then at about 20+ seconds top reports this:
   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  1826 root  20   0  294m  35m 2660 R 141.7  0.5   1:14.94 glusterfs
  2126 root  20   0 1392m  31m 2344 S 33.7  0.4  11:46.56 glusterfsd

*4K* Directly on the brick:
dd if=/dev/zero of=test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 4.99367 s, 410 MB/s

  7750 root  20   0  102m  648  544 R 50.3  0.0   0:01.52 dd
  7719 root  20   0 000 D  1.0  0.0   0:01.50 flush-253:2

Same test, gluster mount point on OVIRT host:
dd if=/dev/zero of=/mnt/rep2/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 42.4518 s, 48.2 MB/s

   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  2126 root  20   0 1396m  31m 2360 S 40.5  0.4  13:28.89 glusterfsd


Same test, on OVIRT host but against NFS mount point:
dd if=/dev/zero of=/mnt/rep2-nfs/test1 bs=4k count=500000
500000+0 records in
500000+0 records out
2048000000 bytes (2.0 GB) copied, 18.8911 s, 108 MB/s

PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  2141 root  20   0  550m 184m 2840 R 84.6  2.3  16:43.10 glusterfs
  2126 root  20   0 1407m  30m 2368 S 49.8  0.4  13:49.07 glusterfsd

Interesting: it looks like if I use an NFS mount point, I incur a CPU
hit on two processes instead of just the daemon. I also get much better
performance if I'm not running dd (FUSE) on the GLUSTER host.


The storage servers are a bit older, but are both dual socket
quad core

opterons with 4x 7200rpm drives.


A block size of 4k is quite small so that the context switch
overhead involved with fuse would be more perceivable.

Would it be possible to increase the block size for dd and test?



I'm in the process of setting up a share from my desktop and
I'll see if

I can bench between the two systems. Not sure if my ssd will
impact the

tests, I've heard there isn't an advantage using ssd storage for
glusterfs.


Do you have any pointers to this source of information? Typically
glusterfs performance for virtualization work loads is bound by the
slowest element in the entire stack. Usually storage/disks happen to
be the bottleneck and ssd storage does benefit glusterfs.

-Vijay


I had a couple technical calls with RH (re: RHSS), and when I asked if
SSD's could add any benefit I was told no. The context may have been in
a product comparison to other storage vendors, where they use SSD's for
read/write caching, versus having an all SSD storage domain (which I'm
not proposing, but which is effectively what my desktop would provide).

Increasing bs against NFS mount point (gluster backend):
dd if=/dev/zero of=/mnt/rep2-nfs/test1 bs=128k count=16000
16000+0 records in
16000+0 records out
2097152000 bytes (2.1 GB) copied, 19.1089 s, 110 MB/s


GLUSTER host top reports:
   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  2141 root  20   0  550m 183m 2844 R 88.9  2.3  17:30.82 glusterfs
  2126 root  20   0 1414m  31m 2408 S 46.1  0.4  14:18.18 glusterfsd

So roughly the same performance as 4k writes remotely. I'm guessing if I
could randomize these writes we'd see a large difference.


Check this thread out:
http://raobharata.wordpress.com/2012/10/29/qemu-glusterfs-native-integration/
It's quite dated but I remember seeing similar figures.

In fact, when I used FIO on a libgfapi-mounted VM I got slightly
faster read/write speeds than on the physical box itself (I assume
because of some level of caching). On NFS it was close to half.
You'll probably get more interesting results using FIO as
opposed to dd.

( -Andrew)


Sorry Andrew, I meant to reply to your other message - it looks like
CentOS 6.5 can't use libgfapi right now, I stumbled across this info in
a couple