- Original Message -
On 24.02.2014 at 17:59, Itamar Heim wrote:
with oVirt 3.4 getting close to GA with many many great features, time
to collect requests for 3.5...
Well I've got a list of bugs I'd like to get fixed
and a list of RFEs, some of these might also be viable for
- Original Message -
On 02/25/2014 03:30 PM, Sven Kieske wrote:
Well, afaik the current workflow is like this:
I have iso domain (hosted on NFS server) a, attached to host b.
On host b I got vms c,d and e.
When I attach an ISO from a to c, the iso gets copied from the
Michal, please see inline
- Original Message -
- Original Message -
From: R P Herrold herr...@owlriver.com
To: Ricky Schneberger ri...@schneberger.se
Cc: users@ovirt.org
Sent: Monday, February 24, 2014 6:42:54 PM
Subject: [Users] After upgrade of ovirt 3.3 all Windows
- Original Message -
On 2/23/2014 3:20 PM, Ayal Baron wrote:
- Original Message -
On Sun, Feb 23, 2014 at 4:27 AM, Ayal Baron aba...@redhat.com wrote:
- Original Message -
I'm looking for some opinions on this configuration in an effort to increase write
- Original Message -
I'm looking for some opinions on this configuration in an effort to increase
write performance:
3 storage nodes using glusterfs in replica 3, quorum.
gluster doesn't support replica 3 yet, so I'm not sure how heavily I'd rely on this.
Ovirt storage domain
- Original Message -
On Sun, Feb 23, 2014 at 4:27 AM, Ayal Baron aba...@redhat.com wrote:
- Original Message -
I'm looking for some opinions on this configuration in an effort to increase write performance:
3 storage nodes using glusterfs in replica 3
- Original Message -
There's a bug on this:
https://bugzilla.redhat.com/show_bug.cgi?id=1063979
you can install the qemu-kvm-rhev package to solve this as described here:
http://comments.gmane.org/gmane.linux.centos.general/138593
Thanks,
Dafna
On 02/13/2014 09:14 AM,
- Original Message -
- Original Message -
From: Eduardo Ramos edua...@freedominterface.org
To: Nir Soffer nsof...@redhat.com
Cc: users users@ovirt.org
Sent: Monday, February 10, 2014 9:13:50 PM
Subject: Re: [Users] 'Drive' object has no attribute 'format'
Hi
- Original Message -
- Original Message -
From: Ayal Baron aba...@redhat.com
To: Nir Soffer nsof...@redhat.com
Cc: Eduardo Ramos edua...@freedominterface.org, users users@ovirt.org
Sent: Tuesday, February 11, 2014 8:17:53 PM
Subject: Re: [Users] 'Drive' object has
Hi Karli,
- Original Message -
Sent from my iPhone
On 28 Dec 2013, at 16:04, Andrew Cathrow acath...@redhat.com wrote:
- Original Message -
From: Gianluca Cecchi gianluca.cec...@gmail.com
To: Ayal Baron aba...@redhat.com
Cc: users users@ovirt.org
Sent
- Original Message -
Hi all,
I'm about to migrate from the dreyou 3.2 repo to the official 3.3
repositories. I used to take live snapshots with
qemu-kvm-rhev-0.12.1.2-2.355.el6.5.x86_64, which was recompiled from the
Red Hat srpms, because live snapshot was not supported by the
- Original Message -
- Original Message -
From: Sander Grendelman san...@grendelman.com
To: Itamar Heim ih...@redhat.com
Cc: users@ovirt.org, Michal Skrivanek mskri...@redhat.com
Sent: Wednesday, December 18, 2013 11:40:36 AM
Subject: Re: [Users] Excessive syslog
- Original Message -
On Wed, Dec 18, 2013 at 4:10 PM, Nir Soffer nsof...@redhat.com wrote:
Well in node1.log, we have 7687 errors:
$ grep 'has no attribute' vdsm-node1.log | wc -l
7687
But no such errors in vdsm-node2.log:
$ grep 'has no attribute' vdsm-node2.log | wc -l
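The counting pipeline above can be reproduced on a sample log; this is just an illustrative sketch (the file name and log contents are made up, not from the actual node logs):

```shell
# Build a small sample log and count the AttributeError lines the same way
cat > /tmp/vdsm-sample.log <<'EOF'
Thread-1::ERROR::'Drive' object has no attribute 'format'
Thread-2::INFO::task completed
Thread-3::ERROR::'Drive' object has no attribute 'format'
EOF
grep "has no attribute" /tmp/vdsm-sample.log | wc -l    # prints 2
```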
- Original Message -
Ohad, can you have a look?
Still no Ohad.
Just forward him the email as an attachment (that way he can reply to the original mail)
- Original Message -
From: Vered Volansky ve...@redhat.com
To: Rob Abshear rabsh...@citytwist.net
Cc: users@ovirt.org
so it's a bigger scope. What we can do relatively quickly is just update
the new size in the engine.
But we can use the same bug to track progress.
Thanks.
Regards,
On 21/11/13 10:57, Ayal Baron wrote:
- Original Message -
Hi again,
I've tried again to set the SD to maintenance
- Original Message -
Hi again,
I've tried again to set the SD to maintenance and back and it didn't do
the trick.
I've posted in the bug you referred to.
Thanks to you all.
Regards,
Currently we do not update the size of the LUN in the db.
Please file a bug on this as in 3.3 we
Hi Paul,
First of all, thanks for the detailed answer, it really helps.
See comments inline.
- Original Message -
Hello Itamar.
The specific use case is a particular proprietary filesystem that needs to see
a SCSI device. It will do SCSI inquiry commands to verify suitability.
In
Adding Stefan with the correct email this time.
- Original Message -
If I understand correctly you have a storage domain which is built of multiple (at least 2) LUNs.
One of these LUNs seems to be missing (Wy3Ymi-J7bJ-hVxg-sg3L-F5Gv-MQmz-Utwv7z
is an LVM PV UUID).
It looks like you are either not fully connected to the storage server (missing a connection) or
for the update.
Coincidentally we have a patch upstream that should ignore the ISO and export
domains in such situations (http://gerrit.ovirt.org/#/c/17986/) and would obviate
the need for you to deactivate them.
Dan
On 9/22/13 6:08 AM, Ayal Baron wrote:
If I understand correctly you have a storage
as explained above) and once that is done just put sd1 in
maintenance. In this scenario, the system will automatically move the master
storage domain to be sd2, no downtime for VMs which don't have disks on sd1.
- DHC
On Sun, Aug 18, 2013 at 5:47 AM, Ayal Baron aba...@redhat.com wrote
- Original Message -
Pausing the VMs can work in certain situations for simple maintenance.
However, suppose the purpose of the storage shutdown is to move data around
for certain VMs or perhaps change that particular underlying storage
Then why not live migrate the relevant disks?
Hi Lukasz,
- Original Message -
Hello,
I'm trying to set up an oVirt 3.2.1 cluster with FC storage.
However, I have encountered a problem when adding the storage to the
cluster. The action fails with the error shown in the log fragment
below.
Thread-628::DEBUG::2013-07-25
- Original Message -
Forgot one thing, the paste bin link!
http://pastebin.com/3t7E41JK
Date: Wednesday, July 17, 2013 7:08 PM
To: users users@ovirt.org
Subject: HELP, ISCSI Freakin!
Hey Guys,
I am running oVirt 3.1, and I am getting the following in dmesg, and I have
Hi Hans,
- Original Message -
Hello,
I tried to move a disk from one iSCSI domain to another and back again.
Finally I ended up with 2 disks of the same name in a storage domain.
One is in status ok, the other in status locked. I tried to put the locked
one into status ok, and tried to delete it (even
through some shell processing when run from python.
What version of lvm do you have on this host?
Cheers,
Daniele
On 07/21/2013 01:03 PM, Ayal Baron wrote:
Hi Daniele,
Your lvm version seems wrong:
9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
11:01:07,161::misc::84
Hi Daniele,
Your lvm version seems wrong:
9b238eef-380f-4fd4-aca1-465f675b7656::DEBUG::2013-07-18
11:01:07,161::misc::84::Storage.Misc.excCmd::(lambda) '/usr/bin/sudo -n
/sbin/lvm vgchange ... --deltag MDT_POOL_SPM_LVER=18 --deltag
MDT_POOL_SPM_ID=-1 --deltag
- Original Message -
Tal,
Thanks for responding, I'll try that sequence. But, what about the vgresize?
There is no such command, only pvresize (which does what you need).
It will do this by itself? I saw that there are some logical volumes...
Thanks again.
From: Tal Nisan
- Original Message -
hi all,
1. can I add new features?
Did you have something specific in mind?
2. can I fix bugs?
If you need guidance or suggestions on where to start, please say so.
3. anything else?
There are lots of things where any contribution would be more than welcome
- Original Message -
On Sun 2013-04-21 at 08:42 -0400, Ayal Baron wrote:
- Original Message -
On 04/19/2013 11:46 AM, Gianluca Cecchi wrote:
On Fri, Apr 19, 2013 at 9:58 AM, Itamar Heim wrote:
qemu-guest-agent isn't ovirt/rhev-guest-agent.
now that qemu
- Original Message -
On 04/19/2013 11:46 AM, Gianluca Cecchi wrote:
On Fri, Apr 19, 2013 at 9:58 AM, Itamar Heim wrote:
qemu-guest-agent isn't ovirt/rhev-guest-agent.
now that qemu started their own guest agent, ovirt/rhev-guest-agent isn't
used for things covered by the
Can you attach the sanlock log and the full vdsm log? (compress it if it's too
big and not xz yet)
Thanks.
- Original Message -
Any ideas on what can cause that storage crash?
could it be related to using a SSD?
Thanks,
Yuval Meir
On Wed, Mar 27, 2013 at 6:08 PM, Yuval M
- Original Message -
Hi nnex,
To add to Ofri's comment - Live Storage Migration only works on
domains in the same DC, so they must be of the same type - you
cannot migrate a disk from an NFS storage to a block device or vice
versa.
At the moment at least.
- Original
- Original Message -
Hi, I am using ovirt 3.2 and noticed that during image operations (move, copy)
the user has no progress report. This is painful, especially if the
images are large or the storage is slow.
Vdsm uses 'dd' to transfer images and dd provides a simple progress
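One way to surface that progress from a long-running dd is SIGUSR1: GNU dd prints its transfer statistics to stderr when it receives the signal and keeps copying. A minimal runnable sketch (file paths and sizes are illustrative, not vdsm's actual image paths):

```shell
# Create a small source file, then copy it in the background
dd if=/dev/zero of=/tmp/src.img bs=1M count=4 2>/dev/null
dd if=/tmp/src.img of=/tmp/dst.img bs=1M &
DD_PID=$!
sleep 1   # give dd a moment to start (on this tiny file it may already be done)
# GNU dd reports bytes copied so far on SIGUSR1 and continues copying
kill -USR1 "$DD_PID" 2>/dev/null || true
wait "$DD_PID"
```

On a real image transfer you would send the signal periodically and parse the stderr lines to build a progress report.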
- Original Message -
Hi there,
I was doing some testing around ovirt and iscsi and found an issue
where, when you use dd to create backing-stores for iscsi and
you point ovirt at it to discover and log in, it thinks the LUN ID is
the same although the target is different
- Original Message -
On 14/02/2013 16:43, Itamar Heim wrote:
That is exactly the case: it is indeed a pure data disk with an ext3
filesystem (no LVM, no nothing).
I intend to attach it to some existing VM.
I know how to convert it into qcow2 or whatever.
I know how to
Thanks for the comprehensive answer!
One small nit - I would create the disk thin and not preallocated on nfs to
avoid the lengthy 'dd'.
- Original Message -
In the mean time I can share my successful one-shot experience with a
WIndows XP VM.
Coming from Fedora 16 with Qemu/KVM as
- Original Message -
On Wed, Jan 23, 2013 at 4:41 PM, Yeela Kaplan wrote:
Yes, you need a different DC and host for iSCSI SDs.
Possibly I can test tomorrow adding another host that should go into
the same DC, but I can temporarily put it in another newly created
iSCSI DC for
Hi Gianluca,
The problem you have now doesn't seem related $subject.
It looks like http://gerrit.ovirt.org/#/c/10024/5 to me.
Can you check to see if your code contains the above fix?
Did you install vdsm from nightly as well, or from Fedora 18 rpms?
Fede, does the f18 build include the above fix?
82579V Gigabit Network Connection (rev 05)
03:00.0 Ethernet controller: D-Link System Inc RTL8139 Ethernet (rev 10)
2012/11/22 Ayal Baron aba...@redhat.com
What type of NICs do you have? (It's a shot in the dark but I know
there is an issue with bnx2x driver which causes
What type of NICs do you have? (It's a shot in the dark but I know there is an
issue with bnx2x driver which causes random reboots which some users have hit).
Can you attach full vdsm.log and spm-lock.log ?
- Original Message -
[2012-11-18 15:20:08] Protecting spm lock for vdsm pid
On Sun, Nov 11, 2012 at 1:03 AM, Ayal Baron aba...@redhat.com
wrote:
- Original Message -
I installed only the 3.1 version of all packages. I dropped the
firewall and selinux.
I tried to add a new storage domain with the gui and via cli:
[oVirt shell (connected
- Original Message -
I installed only the 3.1 version of all packages. I dropped the
firewall and selinux.
I tried to add a new storage domain with the gui and via cli:
[oVirt shell (connected)]# create storagedomain --host-name
local_host --storage-type localfs --storage-path
- Original Message -
Dear all,
After some problems I have my ovirt setup running now. But I run into a
new problem with the network configuration.
We have a quad NIC with 4 gigabit connections over 2 switches (2x2) and
an active/passive iSCSI SAN. The first is connected to
- Original Message -
On Sat, Sep 29, 2012 at 3:47 PM, Ayal Baron aba...@redhat.com
wrote:
However, as I read this email, it occurred to me that some other things
might not be equal. Specifically, using multiple LUNs could provide
a means of shrinking the storage domain
Hi Josh,
- Original Message -
On Sun, Sep 23, 2012 at 8:41 AM, Itamar Heim ih...@redhat.com
wrote:
On 09/23/2012 05:33 PM, Josh Logan wrote:
On Sun, Sep 23, 2012 at 6:10 AM, Itamar Heim ih...@redhat.com wrote:
On 09/22/2012
- Original Message -
On Thu, Sep 27, 2012 at 11:08 AM, Ayal Baron aba...@redhat.com
wrote:
Alan Johnson a...@datdec.com meant to write:
So, no change.
This looks like an LVM issue. Have you tried deactivating the VG
before pvresize?
I have not, but I
- Original Message -
On Wed, Sep 26, 2012 at 6:12 PM, Ayal Baron aba...@redhat.com
wrote:
Sounds really over-complicated for what you're trying to do.
Agreed! That's why I asked. =) To be clear, all that was necessary to
end up where I wanted was to reboot the hosts
- Original Message -
On Sat, Aug 11, 2012 at 5:56 PM, Ayal Baron aba...@redhat.com
wrote:
- Original Message -
Hi,
pvresize doesn't work, still the same size.
How do I disconnect the iSCSI session?
Between disconnecting and oVirt connecting again, will I
- Original Message -
Hi,
Due to some issues we had with the latest stable, I started testing
the latest nightly build of the ovirt node iso.
The problems I run into now:
- After node installation the network interfaces are both disabled. I
had to log in to the node with the
- Original Message -
Hi Daniel,
Thanks for your reply. Regarding the live snapshot I'm using VDSM
4.1, but I'm getting an error and it looks like a qemu version problem.
Could it be?
yes it could.
what distribution are you using?
what qemu version?
The best way to make
What you can do prior to running pvresize is run:
iscsiadm -m session -R
Hope this helps.
Best regards,
Ricardo Esteves.
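A hedged sketch of the sequence suggested above (rescan iSCSI sessions so the kernel sees the new LUN size, then grow the PV), wrapped in a dry-run helper so the commands can be reviewed before touching a live host; the device path is illustrative, not from this thread:

```shell
# Dry-run wrapper: prints the commands it would run instead of executing them
grow_pv() {
    run() { echo "+ $*"; }      # swap 'echo' for real execution on an actual host
    run iscsiadm -m session -R  # rescan all iSCSI sessions to pick up the new LUN size
    run pvresize "$1"           # then grow the LVM physical volume to fill the LUN
}
grow_pv /dev/mapper/example-lun
```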
-Original Message-
From: Ayal Baron aba...@redhat.com
To: Johan Kragsterman johan.kragster...@capvert.se
Cc: users@ovirt.org, Ricardo Esteves
- Original Message -
On 07/20/2012 09:19 PM, Trey Dockendorf wrote:
On Fri, Jul 20, 2012 at 11:32 AM, Itamar Heim ih...@redhat.com
wrote:
On 07/20/2012 07:21 PM, Trey Dockendorf wrote:
On Fri, Jul 20, 2012 at 3:52 AM, Itamar Heim ih...@redhat.com
wrote:
On 07/20/2012
SNIP
Would a qcow2 image with preallocation=metadata be possible on an
iSCSI data store?
ayal?
nope. metadata preallocation means that each logical block has a
corresponding physical block.
Ayal, by saying logical block and physical block here, what do they
stand for in Linux
in linking it?
- Nick
Subject: Re: [Users] Adding oVirt Node Fails
From: mbu...@redhat.com
To: aba...@redhat.com
CC: cybertimber2...@hotmail.com; users@ovirt.org
Date: Sun, 6 May 2012 21:26:08 -0400
On Sat, 2012-05-05 at 16:39 -0400, Ayal Baron wrote:
- Original
- Original Message -
Date: Sat, 5 May 2012 12:29:20 +0300
From: dfedi...@redhat.com
To: cybertimber2...@hotmail.com
CC: mbu...@redhat.com; users@ovirt.org
Subject: Re: [Users] Adding oVirt Node Fails
On 04/05/12 02:49, Nicholas Kesick wrote:
Fri, 27 Apr 2012
- Original Message -
On 04/25/2012 05:08 PM, David Metcalf wrote:
The Balloon Driver on Windows 7 isn't starting. I also notice that the
qemu command does not show a balloon flag, so I think the device was
never presented to the guest. How do I get the balloon device to show up?
Sounds like vdsm should just require a newer version of qemu-kvm as well?
Have you tested with a newer qemu-kvm?
- Original Message -
Seeing an issue wherein ovirt moves a managed host to non-operational
state. This occurs with the currently released version of ovirt and the
latest