that could explain this issue.
Thanks,
Steve Dainard
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
I have two options, virtio and virtio-scsi.
I was using virtio, and have also attempted virtio-scsi on another Windows
guest with the same results.
Using the newest drivers, virtio-win-0.1-74.iso.
Steve Dainard
IT Infrastructure Manager
Miovision http://miovision.com/ | Rethink Traffic
519
if I
can bench between the two systems. Not sure if my SSD will impact the
tests; I've heard there isn't an advantage to using SSD storage for glusterfs.
Does anyone have a hardware reference design for glusterfs as a backend for
virt? Or is there a benchmark utility?
root 20 0 1380m 31m 2396 S 2.3 0.4 83:19.40 glusterfsd
slowly, but I
seem to have more than the 100KB/s write speeds I was seeing previously.
Does anyone know what this device is for? I have the Red Hat VirtIO SCSI
Controller listed under storage controllers.
to allow the
other engine to connect it to the new pool for restore.
On Tue, Jan 28, 2014 at 12:41 PM, Juan Pablo Lorier jplor...@gmail.com wrote:
Hi,
I had some issues with a gluster cluster and after some time trying to
get the storage domain up or delete it (I opened a BZ
PCI\VEN_1AF4&DEV_1004&CC_0100
A disclaimer: there are time and patch gaps between RHEL and other
versions.
Ronen.
On 01/28/2014 10:39 PM, Steve Dainard wrote:
I've had a bit of luck here.
Overall IO performance is very poor during Windows updates, but a
contributing
: current
transaction is aborted, commands ignored until end of transaction block;
nested exception is org.postgresql.util.PSQLException: ERROR: current
transaction is aborted, commands ignored until end of transaction block
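The error itself is standard PostgreSQL behaviour: once any statement inside an explicit transaction fails, every subsequent command is rejected until the transaction block is ended. A minimal sketch (the 'engine' database name and the query are illustrative, and this needs a running PostgreSQL):

```shell
# Reproduce the aborted-transaction state in psql; after the failing
# statement, PostgreSQL ignores everything until ROLLBACK or COMMIT.
psql engine <<'SQL'
BEGIN;
SELECT no_such_column FROM vdc_options;  -- fails, aborts the transaction
SELECT 1;  -- ERROR: current transaction is aborted, commands ignored...
ROLLBACK;  -- clears the aborted state; new statements work again
SQL
```

The engine log usually shows the statement that triggered the abort just above the repeated "commands ignored" lines, so that first failure is the one worth chasing.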
file) server, would this have any impact? I would think not as it's a local
db.
When I changed the password for the psql engine user, is there any config
file this is referenced in that may not have been updated?
Thanks,
Is this file supposed to exist:
2014-01-30 10:24:18,990 WARN [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-23) The file /etc/ovirt-engine/engine.conf doesn't
exist or isn't readable. Will return an empty set of properties.
I can't find it anywhere on the system.
I've reconfigured my setup (good success below, but need clarity on gluster
option):
Two nodes total, both running virt and glusterfs storage (2 node replica,
quorum).
I've created an NFS storage domain, pointed at the first node's IP address.
I've launched a 2008 R2 SP1 install with a virtio-scsi
I've enabled nfs.trusted-sync (
http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options#nfs.trusted-sync
) on the gluster volume, and the speed difference is immeasurable. Can
anyone explain what this option does, and what the risks are with a 2
node
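For reference, the option can be toggled and inspected from the CLI (a sketch; VOLNAME is a placeholder). As I understand it, nfs.trusted-sync makes the gluster NFS server acknowledge writes before they are committed to disk (async NFS semantics), which explains the speedup but means a server crash can lose writes the client believed were safe:

```shell
# Enable unsafe-but-fast async NFS replies on the volume, then confirm.
gluster volume set VOLNAME nfs.trusted-sync on
gluster volume info VOLNAME | grep nfs.trusted-sync
```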
How would you developers, speaking for the oVirt-community, propose to
solve this for CentOS _now_ ?
I would imagine that the easiest way is that you build and host this one
package (qemu-kvm-rhev), since you've basically already got the source
and recipe (since you're already providing it
IDE is just as slow. Just over 2 hours for 2008R2 install.
Is this what you mean by kvm?
lsmod | grep kvm
kvm_intel 54285 3
kvm 332980 1 kvm_intel
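Those modules are the kernel side of KVM. A related sanity check, on standard Linux paths, is whether the CPU itself reports virtualization extensions (vmx is Intel VT-x, svm is AMD-V):

```shell
# The kvm/kvm_intel modules only help if the CPU exposes the extensions.
if grep -qE 'vmx|svm' /proc/cpuinfo; then
  echo "hardware virtualization: available"
else
  echo "hardware virtualization: not reported"
fi
```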
, should be set for
the qemu-kvm builds so we can get a CentOS bug report going and hammer this
out.
Thanks everyone.
**crosses fingers and hopes for live snapshots soon**
master_ver = 1
name = gluster-store-rep1
.x86_64
qemu-kvm-rhev-0.12.1.2-2.355.el6.5.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.355.el6.5.x86_64
vdsm-4.13.3-2.el6.x86_64
vdsm-cli-4.13.3-2.el6.noarch
vdsm-python-4.13.3-2.el6.x86_64
vdsm-xmlrpc-4.13.3-2.el6.noarch
=
REMOTE_PATH=10.0.10.3:/rep2
ROLE=Regular
SDUUID=471487ed-2946-4dfc-8ec3-96546006be12
TYPE=POSIXFS
VERSION=3
_SHA_CKSUM=469191aac3fb8ef504b6a4d301b6d8be6fffece1
I should be able to provide any logs required, I've reverted to my NFS
storage domain but can move a host over to POSIX whenever necessary.
total 0
drwxr-xr-x. 4 vdsm kvm 91 Jan 30 17:34 iso-mount
drwxr-xr-x. 3 root root 23 Jan 30 17:31 lv-iso-domain
drwxr-xr-x. 3 vdsm kvm 35 Jan 29 17:43 lv-storage-domain
drwxr-xr-x. 3 vdsm kvm 17 Feb 4 15:43 lv-vm-domain
drwxr-xr-x. 4 vdsm kvm 91 Feb 5 10:36 rep2-mount
Thanks,
Hi Dafna,
No snapshots of either of those VM's have been taken, and there are no
updates for any of those packages on EL 6.5.
not.
Are there any increased logging levels that might help determine what the
issue is?
Thanks,
/ids:0
flags 0
ovirt002 sanlock.log has no entries during that time frame.
the risks of 2 nodes in quorum ensuring
storage consistency, or 2 nodes no quorum with an extra shot at uptime.
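The trade-off above maps onto gluster's quorum options (a sketch; VOLNAME is a placeholder). With client-side quorum set to auto on a 2-brick replica, writes stop whenever the majority (including the first brick) is unreachable, trading uptime for consistency:

```shell
# Consistency over uptime: enforce client and server quorum.
gluster volume set VOLNAME cluster.quorum-type auto
gluster volume set VOLNAME cluster.server-quorum-type server
# Or favour uptime: disable client quorum and accept split-brain risk.
gluster volume set VOLNAME cluster.quorum-type none
```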
On Mon, Feb 17, 2014 at 3:50 AM, Justin Clacherty jus...@redfish.com.au wrote:
Hi,
I'm just setting up some storage for use with oVirt and am wondering why I
might choose glusterfs rather than just exporting a raid array as iscsi.
The oVirt team seem to be pushing gluster (though that could
, including 'you are insane for thinking this is a good
idea' (and some supported reasoning would be great).
Thanks,
On Sun, Feb 23, 2014 at 4:27 AM, Ayal Baron aba...@redhat.com wrote:
- Original Message -
I'm looking for some opinions on this configuration in an effort to
increase
write performance:
3 storage nodes using glusterfs in replica 3, quorum.
gluster doesn't support replica 3
On Sun, Feb 23, 2014 at 3:20 PM, Ayal Baron aba...@redhat.com wrote:
- Original Message -
On Sun, Feb 23, 2014 at 4:27 AM, Ayal Baron aba...@redhat.com wrote:
- Original Message -
I'm looking for some opinions on this configuration in an effort to
increase
When I create a new vm from a template (centos 6, 20gig thin disk) it takes
close to 40 minutes to clone the disk image.
Are there any settings that control how much bandwidth the cloning process
can use, or how it's prioritized? This is brutally slow, and although a
quick clone isn't needed
This may have been lost over the weekend...
Steve
On Thu, Apr 17, 2014 at 1:21 PM, Steve wrote:
When I create a new vm from a template (centos 6, 20gig thin disk) it
takes close to 40 minutes to clone the disk image.
Are there any settings that control how much bandwidth the cloning
for GLUSTER ?
On 21 Apr 2014, at 16:00, Steve Dainard
sdain...@miovision.com wrote:
This may have been lost over the weekend...
Steve
On Thu, Apr 17, 2014 at 1:21 PM, Steve wrote:
When I create a new vm from a template (centos 6, 20gig thin disk) it
takes close
-46f8-9f4b-964d8af0675b/1a67de4b-aa
1c-4436-baca-ca55726d54d7\' backing_fmt=\'qcow2\' encryption=off
cluster_size=65536 ],)'
do you have any alert in the webadmin to restart the vm?
Dafna
On 04/22/2014 03:31 PM, Steve Dainard wrote:
Sorry for the confusion.
I attempted to take a live
under that vm.
btw, you know that your export domain is getting StorageDomainDoesNotExist
in the vdsm log? is that domain in up state? can you try to deactivate the
export domain?
Thanks,
Dafna
On 04/22/2014 05:20 PM, Steve Dainard wrote:
Ominous..
23 snapshots. Is there an upper
I'm currently using a two node combined virt/storage setup with Ovirt 3.3.4
and Gluster 3.4.2 (replica 2, glusterfs storage domain). I'll call this
pair PROD.
I'm then geo-replicating to another gluster replica pair on the local net,
btrfs underlying storage, and volume snapshots so I can recover
wrote:
On 04/23/2014 09:57 PM, R P Herrold wrote:
On Wed, 23 Apr 2014, Steve Dainard wrote:
I have other VM's with the same amount of snapshots without this problem.
No conclusion jumping going on. More interested in what the best practice
is for VM's that accumulate snapshots over time
Hello Ovirt team,
Reading this bulletin: https://access.redhat.com/site/solutions/117763
there is a reference to 'private Red Hat Bug # 523354' covering online
backups of VM's.
Can someone comment on this feature, and rough timeline? Is this a native
backup solution that will be included with
I'm running a trial of RHEV 3.4 and decided to run through our DR plan.
I have two storage domains.
vm-store
aws_storage_gateway
vm-store is the domain which contains all local VM's which would be
considered important. I must have put vm-store into maintenance mode before
aws_storage_gateway
And 5 minutes later I found...
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.4/html-single/Administration_Guide/index.html
3.5.6. Re-Initializing a Data Center: Recovery Procedure
Which worked quite well.
Steve
On Mon, Jul 14, 2014 at 10:27 AM, Steve
Hello,
I'd like to get an understanding of the relationship between VM's using a
storage domain, and the child directories listed under .../storage domain
name/storage domain uuid/images/.
Running through some backup scenarios I'm noticing a significant difference
between the number of
Thanks,
Steve
On Fri, Jul 18, 2014 at 8:12 AM, Yair Zaslavsky yzasl...@redhat.com wrote:
- Original Message -
From: Steve Dainard sdain...@miovision.com
To: users users@ovirt.org
Sent: Thursday, July 17, 2014 7:51:31 PM
Subject: [ovirt-users] Relationship bw storage domain
uuid
Anyone?
On Fri, Jun 20, 2014 at 9:35 AM, Steve Dainard sdain...@miovision.com
wrote:
Hello Ovirt team,
Reading this bulletin: https://access.redhat.com/site/solutions/117763
there is a reference to 'private Red Hat Bug # 523354' covering online
backups of VM's.
Can someone comment
I added a hook to rhevm, and then restarted the engine service which
triggered a hosted-engine VM shutdown (likely because of the failed
liveliness check).
Once the hosted-engine VM shutdown it did not restart on the other host.
On both hosts configured for hosted-engine I'm seeing logs from
I'm using the hostusb hook on RHEV 3.4 trial.
The usb device is passed through to the VM, but I'm getting errors in a
Windows VM when the device driver is loaded.
I started with a simple usb drive, on the host it is listed as:
Bus 002 Device 010: ID 05dc:c75c Lexar Media, Inc.
Which I added as
I should mention I can mount this usb drive in a CentOS 6.5 VM without any
problems.
On Mon, Jul 21, 2014 at 2:11 PM, Steve Dainard sdain...@miovision.com
wrote:
I'm using the hostusb hook on RHEV 3.4 trial.
The usb device is passed through to the VM, but I'm getting errors in a
Windows VM
Hi Michal,
How can I generate libvirt xml from rhevm?
Thanks,
Steve
On Tue, Jul 22, 2014 at 4:12 AM, Michal Skrivanek
michal.skriva...@redhat.com wrote:
On 21 Jul 2014, at 20:54, Steve Dainard wrote:
I should mention I can mount this usb drive in a CentOS 6.5 VM without any
problems
, Steve Dainard sdain...@miovision.com wrote:
Hi Michal,
How can I generate libvirt xml from rhevm?
virsh -r dumpxml domain on the host
Thanks,
michal
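Michal's suggestion in full, as a sketch ('myvm' is a placeholder VM name; -r opens a read-only libvirt connection, so vdsm's sasl credentials aren't needed):

```shell
# On the host running the VM: list domains, then dump the libvirt XML.
virsh -r list --all
virsh -r dumpxml myvm > myvm.xml
```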
Thanks,
Steve
On Tue, Jul 22, 2014 at 4:12 AM, Michal Skrivanek
michal.skriva...@redhat.com wrote:
On 21 Jul 2014
, 2014 at 03:50:59PM +0200, Michal Skrivanek wrote:
On Jul 22, 2014, at 15:49 , Steve Dainard sdain...@miovision.com
wrote:
Hi Michal,
How can I generate libvirt xml from rhevm?
virsh -r dumpxml domain on the host
Or dig into vdsm.log (in case the VM is no longer
Any other ideas here? Is there a specific driver I should load instead of
the Windows default one?
Thanks,
Steve
On Tue, Jul 22, 2014 at 10:23 AM, Steve Dainard sdain...@miovision.com
wrote:
I just saw the 'your device can perform faster' warning again in Windows
and decided to check it out
Creating /var/lib/glusterd/groups/virt on each node and adding parameters
found here:
https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.0/html/Quick_Start_Guide/chap-Quick_Start_Guide-Virtual_Preparation.html
solved
this issue.
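A sketch of the fix described above (VOLNAME is a placeholder; the option list follows the Red Hat Storage 2.0 quick-start guide linked above and may differ between gluster versions):

```shell
# Create the 'virt' tuning group on each node, then apply it to the volume.
cat > /var/lib/glusterd/groups/virt <<'EOF'
quick-read=off
read-ahead=off
io-cache=off
stat-prefetch=off
eager-lock=enable
remote-dio=enable
EOF
gluster volume set VOLNAME group virt
```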
]
(DefaultQuartzScheduler_Worker-42) Cleared all tasks of pool
5849b030-626e-47cb-ad90-3ce782d831b3.
12:39:51,795 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(pool-6-thread-50) Lock freed to object EngineLock [exclusiveLocks= key:
8e2c9057-deee-48a6-8314-a34530fc53cb value: VM
, sharedLocks= ]
importing any data storage domain I assume?
Yes.
Possibility of direct use of HW by VM.
such as?
Telephone modems. PCI-express cards. Graphics cards
+ USB devices, we have hardware license keys for some software. In kvm/qemu
I can expose the license key directly to a VM.
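A sketch of that kind of passthrough with plain libvirt (the vendor/product IDs follow the lsusb style used earlier in the thread; 'myvm' and the IDs are placeholders, and this needs a running libvirt host):

```shell
# Describe the USB key as a libvirt hostdev and hot-attach it to the guest.
cat > usbkey.xml <<'EOF'
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x05dc'/>
    <product id='0xc75c'/>
  </source>
</hostdev>
EOF
virsh attach-device myvm usbkey.xml --live
```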
and
Awesome.
Dave can you have someone edit the installation page and specifically
mention that EL hypervisors don't have native Gluster (fuse only) support -
It would be very frustrating to think you can have all the features, and OS
stability as well, only to find out EL doesn't support a key
Hello,
New Ovirt 3.3 install on Fedora 19.
When I try to add a gluster storage domain I get the following:
*UI error:*
*Error while executing action Add Storage Connection: Permission settings
on the specified path do not allow access to the storage.*
*Verify permission settings on the
on the fact that the throughput is almost exactly
half.
I'd like to request a setting in VM pools which forces the VM to be stateless.
I see from the docs this will occur if selected under run once, or
started from the user portal, but it would be nice to set this for a
normal admin start of the VM(s) as well.
I should also mention that I can't start multiple pool VM's at the
same time with run-once from the admin portal, so I'd have to
individually run once each VM which is a member of the pool.
On Fri, Aug 21, 2015 at 3:57 PM, Steve Dainard sdain...@spd1.com wrote:
I'd like to request a setting
Hello,
Trying to configure Ovirt 3.5.3.1-1.el7.centos for LDAP authentication.
I've configured the appropriate aaa profile but I'm getting TLS errors
when I search for users to add via ovirt:
The connection reader was unable to successfully complete TLS
negotiation:
I have two hosts, only one of them was running VM's at the time of
this crash so I can't tell if this is a node specific problem.
rpm -qa | egrep -i 'gluster|vdsm|libvirt' |sort
glusterfs-3.6.7-1.el7.x86_64
glusterfs-api-3.6.7-1.el7.x86_64
glusterfs-cli-3.6.7-1.el7.x86_64
> From: "Simone Tiraboschi" <stira...@redhat.com>
> > To: "Steve Dainard" <sdain...@spd1.com>, "Francesco Romani" <
> from...@redhat.com>
> > Cc: "users" <users@ovirt.org>
> > Sent: Friday, October 1