On Tue, Mar 15, 2016 at 1:14 PM, Pavel Gashev wrote:
> Nir,
>
>
> On 05/03/16 15:35, "Nir Soffer" wrote:
>>On Sat, Mar 5, 2016 at 10:52 AM, Pavel Gashev wrote:
>>> On 05/03/16 02:23, "Nir Soffer" wrote:
>>>>On Sat, Mar 5, 2016 at 12:
On Tue, Mar 15, 2016 at 10:28 PM, Joop wrote:
> On 14-3-2016 10:43, Allon Mureinik wrote:
>
> Odd. Never seen such behavior in any of our setups.
> Can you please include vdsm's logs, sanlock's logs and /var/log/messages?
>
> I have noticed the same behaviour, not on server hardware but on my
On Fri, Mar 18, 2016 at 6:18 PM, Hal Martin wrote:
> Hello everyone,
>
> Please excuse me if this question has been asked before, I've searched
> the mailing list archives and can't find someone having the same
> problem as I am.
>
> I have a new oVirt installation that I'm trying to configure, an
On Fri, Mar 18, 2016 at 9:27 PM, Nir Soffer wrote:
> On Fri, Mar 18, 2016 at 7:59 PM, Charles Tassell
> wrote:
>> Hi Martin,
>>
>> Thanks, I checked that log and found the error " AttributeError:
>> 'SourceThread' object has no attribute
On Fri, Mar 18, 2016 at 7:59 PM, Charles Tassell wrote:
> Hi Martin,
>
> Thanks, I checked that log and found the error " AttributeError:
> 'SourceThread' object has no attribute '_destServer'" which from what I see
> from my googling was supposedly fixed... I'm running
> vdsm-4.17.23-0.el7.cen
On Thu, Mar 17, 2016 at 11:35 AM, p...@email.cz wrote:
> I used that, but lock active in a few seconds again.
> And oVirt do not update any VM's status
Unlocking entities is ok when you know that the operation that took
the lock is finished
or failed. This is a workaround for buggy operations lea
On Fri, Mar 18, 2016 at 7:55 PM, Nathanaël Blanchet wrote:
> Hello,
>
> I can create snapshot when no one exists but I'm not able to remove it
> after.
Are you trying to remove it while the vm is running?
> It concerns many of my vms, and when stopping them, they can't boot anymore
> because of the i
On Thu, Mar 17, 2016 at 10:49 AM, Johan Kooijman wrote:
> Hi all,
>
> Since we upgraded to the latest ovirt node running 7.2, we're seeing that
> nodes become unavailable after a while. It's running fine, with a couple of
> VM's on it, until it becomes non-responsive. At that moment it doesn't ev
On Thu, Mar 17, 2016 at 4:37 PM, p...@email.cz wrote:
> Hello,
> how can I put a copy of VM disks (a copy made outside the ovirt environment)
> into the ovirt inventory, so it is available for the "attach disk" option?
Do you mean how to import an existing vm disk into ovirt?
You can use v2v (in ovirt-3.6) to i
On Fri, Mar 18, 2016 at 9:37 PM, Nir Soffer wrote:
> On Fri, Mar 18, 2016 at 9:27 PM, Nir Soffer wrote:
>> On Fri, Mar 18, 2016 at 7:59 PM, Charles Tassell
>> wrote:
>>> Hi Martin,
>>>
>>> Thanks, I checked that log and found the error " A
On Thu, Mar 24, 2016 at 7:51 PM, Christophe TREFOIS
wrote:
> Hi Nir,
>
> And the second one is down now too. see some comments below.
>
>> On 13 Mar 2016, at 12:51, Nir Soffer wrote:
>>
>> On Sun, Mar 13, 2016 at 9:46 AM, Christophe TREFOIS
>> wrote:
>>
o catch the required contents?
>
> For the idle vs stuck in a loop, I guess the VM has 4 children qemu threads,
> and one of them was at 100%.
>
> Thank you for your help,
>
> --
> Christophe
>
>> -Original Message-
>> From: Nir Soffer [mailto:nsof...@r
6124
> F: +352 46 66 44 6949
> http://www.uni.lu/lcsb
>
>
>
>
> This message is confidential and may contain privileged information.
> It is intended for the named recipient only.
> If you receive it in error please notify me and permanently delete the
> origin
On Thu, Mar 31, 2016 at 12:03 AM, Alastair Neil wrote:
> Is it planned to allow the snapshots that are created during a live storage
> migration to be automatically deleted once the migration has completed? It
> is easy to forget about them and end up with large snapshots.
Yes, it is planned for
On Fri, Apr 1, 2016 at 4:13 PM, Pavel Gashev wrote:
> Hello,
>
> I was trying to improve iSCSI performance by increasing number of
> simultaneous connections to iSCSI target. Please note it's not about iSCSI
> multipathing. It's about using multiple connections over the same
> network/path to util
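For reference, open-iscsi can open several sessions over the same network path via its nr_sessions setting; a minimal sketch (the setting name comes from open-iscsi's documented options, the value 4 is only an example; verify against your open-iscsi version):

```
# /etc/iscsi/iscsid.conf -- applies to newly discovered targets; existing
# node records can be updated with iscsiadm -m node -o update.
# Open 4 iSCSI sessions to each target over the same network path.
node.session.nr_sessions = 4
```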
On Thu, Apr 7, 2016 at 5:05 PM, wrote:
> Hi,
>
> Lately we're having a lot of events like these:
>
> 2016-04-07 14:54:25,247 WARN
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
> (org.ovirt.thread.pool-8-thread-2) [] domain
> '5de4a000-a9c4-489c-8eee-10368647c413:iscsi01' in problem.
On Thu, Apr 7, 2016 at 6:28 PM, Matthew Bohnsack wrote:
> Thanks for your responses.
>
> Is gaining the ability to have local and shared storage domains in the same
> datacenter (and therefore host) on the roadmap? It's most certainly
> functionality we require in our virtual environment.
I don't
On Fri, Apr 8, 2016 at 7:17 PM, Brett I. Holcomb wrote:
> On Fri, 2016-04-08 at 11:31 +0200, Martin Sivak wrote:
>
> Hi,
>
>
> I set highly available on, did not pin to any host, and also set the
> watchdog which should reset if they go down but I'm not sure that will start
> them if the host come
On Fri, Apr 8, 2016 at 9:33 PM, Brett I. Holcomb wrote:
> On Fri, 2016-04-08 at 19:25 +0300, Nir Soffer wrote:
>
> On Fri, Apr 8, 2016 at 7:17 PM, Brett I. Holcomb
> wrote:
>
> On Fri, 2016-04-08 at 11:31 +0200, Martin Sivak wrote:
>
> Hi,
>
>
> I set highly avai
On Sat, Apr 9, 2016 at 12:07 AM, Brett I. Holcomb wrote:
> On Fri, 2016-04-08 at 21:50 +0300, Nir Soffer wrote:
>
> On Fri, Apr 8, 2016 at 9:33 PM, Brett I. Holcomb
> wrote:
>
> On Fri, 2016-04-08 at 19:25 +0300, Nir Soffer wrote:
>
> On Fri, Apr 8, 2016 at 7:17 PM, B
t=ovirt-engine
Component should be RFEs
oVirt team is probably SLA
We have a voting feature now, so users interested in this feature
can vote on the bug.
Thanks,
Nir
>
>
>
>
> On Sat, 2016-04-09 at 01:06 +0300, Nir Soffer wrote:
>
> On Sat, Apr 9, 2016 at 12:07 AM, Brett I. Ho
On Sat, Apr 9, 2016 at 3:57 PM, Paolo Smiraglia
wrote:
> Hi all!
>
> My name is Paolo and I'm the sysadmin of the research group (infosec)
> where I work.
>
> Recently our infrastructure was updated and I'm planning to use oVirt
> as virtualisation manager. The new "toys" they gave me are
>
> -
On Sun, Apr 10, 2016 at 1:16 PM, Yaniv Kaul wrote:
>
>
> On Sun, Apr 10, 2016 at 11:02 AM, Yedidyah Bar David
> wrote:
>>
>> On Sun, Apr 10, 2016 at 10:43 AM, Barak Korren wrote:
>> > Hi there, I use the following Python SDK snippet to create a template
>> > from an existing VM:
>> >
>> > te
On Tue, Apr 12, 2016 at 3:05 PM, Luiz Claudio Prazeres Goncalves
wrote:
> Hi Sandro, I've been using gluster with 3 external hosts for a while and
> things are working pretty well, however this single point of failure looks
> like a simple feature to implement, but critical to anyone who wants to u
On Tue, Apr 12, 2016 at 6:12 PM, Martin Sivak wrote:
> Hi,
>
> thanks for the summary, this is what I was suspecting.
>
> Just a clarification about the hosted engine host-id and lockspace.
> Hosted engine has a separate lockspace from VDSM and uses
> hosted-engine's host-id there consistently to
On Wed, Apr 13, 2016 at 10:23 PM, Markus Stockhausen
wrote:
> Hi,
>
> I was just annoyed that during my tests I got a second error. It seemed
> to me as if everything finally worked fine, although the single disk live
> merge task was still shown as running.
>
> Nevertheless after Nirs ex
On Thu, Apr 14, 2016 at 12:02 PM, Fred Rolland wrote:
> From the log, we can see that the lvextend command took 18 sec, which is
> quite long.
Fred, can you run repoplot on this log file? It may explain why this lvm
call took 18 seconds.
Nir
>
> 60decf0c-6d9a-4c3b-bee6-de9d2ff05e85::DEBUG:
of io?
The lvextend operation should be very fast; this is just a metadata change,
allocating a couple of extents to that lv.
Zdenek, how do you suggest debugging slow lvm commands?
See the attached pdf, lvm commands took 15-50 seconds.
>
> On Thu, Apr 14, 2016 at 12:18 PM, Nir S
On Thu, Apr 14, 2016 at 1:23 PM, wrote:
> Hi Nir,
>
> On 2016-04-14 11:02, Nir Soffer wrote:
>>
>> On Thu, Apr 14, 2016 at 12:38 PM, Fred Rolland
>> wrote:
>>>
>>> Nir,
>>> See attached the repoplot output.
>>
>>
>>
which can be easily fixed.
>
> What do you think?
>
> Regards
> -Luiz
>
>
>
> 2016-04-13 4:15 GMT-03:00 Sandro Bonazzola :
>>
>>
>>
>> On Tue, Apr 12, 2016 at 6:47 PM, Nir Soffer wrote:
>>>
>>> On Tue, Apr 12, 2016 at 3:05 PM, Luiz
On Mon, Apr 18, 2016 at 3:16 PM, Clint Boggio wrote:
> OVirt 3.6, 4 node cluster with dedicated engine. Main storage domain is
> iscsi, ISO and Export domains are NFS.
>
> Several of my VM snapshot disks show to be in an "illegal state". The system
> will not allow me to manipulate the snapshots
un VM Bill-V on Host KVM03.
We need the entire engine log to investigate.
>
> ###
> # END LOG OUTPUT FROM ENGINE
> ###########
>
> I have followed the storage chain out to where the UUID'ed snapshots live,
ot like your backup script trying
too hard.
>
> I can upload the logs and a copy of the backup script. Do you all have
> a repository you'd like me to upload to? Let me know and I'll upload
> them right now.
Please file a bug and attach the files there.
Nir
>
>
&g
up.
It would be useful if you dump the xml of the vms with this issue and
attach it to
the bug.
Nir
>
> Thank you all for your help.
>
>> On Apr 20, 2016, at 3:19 PM, Nir Soffer wrote:
>>
>>> On Wed, Apr 20, 2016 at 10:42 PM, Clint Boggio wrote:
>>> I gr
On Thu, Apr 21, 2016 at 8:44 AM, Idan Shaby wrote:
> Hi Satheesaran,
>
> Please file a BZ and attach all the relevant logs so we can investigate it
> properly.
In particular we would like to see sanlock.log (/var/log/sanlock.log)
and glusterfs logs
(/var/log/glusterfs/server:path*.log)
Nir
>
>
>
On Tue, Apr 26, 2016 at 1:33 AM, Pat Riehecky wrote:
> I've just done a clean install of the 3.6 hosted engine (decided to wipe out
> my previous system)
>
> The install went in just fine, no errors I saw, but I'm getting interesting
> errors in the ovirt-hosted-engine-ha agent.log
>
> I have no i
On Tue, Apr 26, 2016 at 9:50 PM, Pat Riehecky wrote:
>
>
> On 04/26/2016 09:43 AM, Nir Soffer wrote:
>>
>> On Tue, Apr 26, 2016 at 1:33 AM, Pat Riehecky wrote:
>>>
>>> I've just done a clean install of the 3.6 hosted engine (decided to wipe
>>
On Wed, Apr 27, 2016 at 2:03 AM, Bill James wrote:
> I have a hardware node that has 26 VMs.
> 9 are listed as "running", 17 are listed as "paused".
>
> In truth all VMs are up and running fine.
>
> I tried telling the db they are up:
>
> engine=> update vm_dynamic set status = 1 where vm_guid =
On Thu, Apr 28, 2016 at 2:16 AM, Bill James wrote:
> oh, never mind. I found it. Its now in the gui under "Configure".
>
>
>
>
> On 04/27/2016 04:08 PM, Bill James wrote:
>
> How do you increase and display the settings of
> I saw some references to changing via MaxMacsCountInPool & MacPoolRanges
On Fri, Apr 29, 2016 at 9:17 PM, wrote:
> Hi,
>
> We're running oVirt 3.6.5.3-1 and lately we're experiencing some issues with
> some VMs being paused because they're marked as non-responsive. Mostly,
> after a few seconds they recover, but we want to debug precisely this
> problem so we can fix
art libvirtd.
> No change.
>
> Attached vdsm.log and supervdsm.log.
>
>
> [root@ovirt1 test vdsm]# systemctl status vdsmd
> ● vdsmd.service - Virtual Desktop Server Manager
> Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor
> preset: enabled)
>
On Sat, Apr 30, 2016 at 11:33 AM, Nicolás wrote:
> Hi Nir,
>
> On 29/04/16 at 22:34, Nir Soffer wrote:
>>
>> On Fri, Apr 29, 2016 at 9:17 PM, wrote:
>>>
>>> Hi,
>>>
>>> We're running oVirt 3.6.5.3-1 and lately we're
On Sat, Apr 30, 2016 at 7:16 PM, wrote:
> On 2016-04-30 16:55, Nir Soffer wrote:
>>
>> On Sat, Apr 30, 2016 at 11:33 AM, Nicolás wrote:
>>>
>>> Hi Nir,
>>>
>>> On 29/04/16 at 22:34, Nir Soffer wrote:
>>>>
>>&
On Sat, Apr 30, 2016 at 11:33 AM, Nicolás wrote:
> Hi Nir,
>
> On 29/04/16 at 22:34, Nir Soffer wrote:
>>
>> On Fri, Apr 29, 2016 at 9:17 PM, wrote:
>>>
>>> Hi,
>>>
>>> We're running oVirt 3.6.5.3-1 and lately we're
On Sat, Apr 30, 2016 at 10:28 PM, Nir Soffer wrote:
> On Sat, Apr 30, 2016 at 7:16 PM, wrote:
>> On 2016-04-30 16:55, Nir Soffer wrote:
>>>
>>> On Sat, Apr 30, 2016 at 11:33 AM, Nicolás wrote:
>>>>
>>>> Hi Nir,
>>>>
>>>
On Sun, May 1, 2016 at 12:48 AM, wrote:
> On 2016-04-30 22:37, Nir Soffer wrote:
>>
>> On Sat, Apr 30, 2016 at 10:28 PM, Nir Soffer wrote:
>>>
>>> On Sat, Apr 30, 2016 at 7:16 PM, wrote:
>>>>
>>>> El 2016-04-30 16:55, Nir Soffer esc
On Sun, May 1, 2016 at 1:35 AM, wrote:
> On 2016-04-30 23:22, Nir Soffer wrote:
>>
>> On Sun, May 1, 2016 at 12:48 AM, wrote:
>>>
>>> On 2016-04-30 22:37, Nir Soffer wrote:
>>>>
>>>>
>>>> On Sat, Apr 30, 2016 at 10:28
On Sun, May 1, 2016 at 3:31 PM, wrote:
> On 2016-04-30 23:22, Nir Soffer wrote:
>>
>> On Sun, May 1, 2016 at 12:48 AM, wrote:
>>>
>>> On 2016-04-30 22:37, Nir Soffer wrote:
>>>>
>>>>
>>>> On Sat, Apr 30, 2016 at 10:28
On Wed, May 4, 2016 at 8:52 AM, knarra wrote:
> Hi All,
>
> I have hosted engine setup with glusterfs as my storage domain. when i
> move a storage domain to maintenance and activate it again i see that there
> are two glusterfs process running for every storage domain. Is this expected
> beha
On Fri, May 6, 2016 at 4:01 PM, Chris Adams wrote:
> One of my oVirt clusters, running 3.6.5, lost power last night (power
> failure plus bad UPS batteries - batteries on order!). When power came
> back, the storage and nodes came back, and then the hosted engine
> started, but nothing else happe
On Tue, May 10, 2016 at 6:20 PM, Neal Gompa wrote:
> Hello,
>
> I recently found out that oVirt "deprecated" the All-in-One
> configuration option in oVirt 3.6 and removed it in oVirt 4.0. This is
> a huge problem for me, as it means that my oVirt machines don't have
> an upgrade path.
>
> My expe
On Sun, May 8, 2016 at 3:14 AM, Luciano Natale wrote:
> Hi everyone. I've been having trouble when exporting VM's. I get an error
> when moving the image. I've created a whole new storage domain exclusively
> for this issue, and the same thing happens. It's not always the same VM that fails, but
> once it fail
On Thu, May 12, 2016 at 6:24 PM, Roman A. Chukov
wrote:
> Hello there,
>
> I have a question about the algorithm of changing of master storage domain.
> I have a fresh installation of oVirt 3.6 on 2 hardware nodes, using a number
> of NFS shares as storage domains, and one of them was a master. Wh
On Fri, May 13, 2016 at 10:35 AM, Nicolas Ecarnot wrote:
> On 13/05/2016 09:22, Nir Soffer wrote:
>>
>> On Thu, May 12, 2016 at 6:24 PM, Roman A. Chukov
>> wrote:
>>>
>>> Hello there,
>>>
>>> I have a question about the algorithm of
On Thu, May 12, 2016 at 10:50 AM, Gianluca Cecchi
wrote:
> On Thu, May 12, 2016 at 4:50 AM, Jason Ziemba wrote:
>>
>> I'm fairly new to oVirt (coming from ProxMox) and trying to wrap my head
>> around the mixed (local/NAS) data domain options that are available.
>>
>> I'm trying to configure a se
On Thu, May 12, 2016 at 5:50 AM, Jason Ziemba wrote:
> I'm fairly new to oVirt (coming from ProxMox) and trying to wrap my head
> around the mixed (local/NAS) data domain options that are available.
>
> I'm trying to configure a set of systems to have local storage, as their
> primary data storage
On Fri, May 13, 2016 at 1:34 PM, Grundmann, Christian
wrote:
> Hi
>
> I create a lot of Thin Disk VMs from templates every day, and get „paused
> due to no storage space error“ and „paused due to unknown storage error“ on
> a few of them.
>
> Sometimes the VMs are resumed a few seconds later but s
On Fri, May 13, 2016 at 1:34 PM, Grundmann, Christian
wrote:
> 4GB would be the maximum Size I need before I destroy the VM. So if the
> initial Size would be 4GB the pausing shouldn’t happen anymore
Maybe you want to use preallocated disks instead?
Nir
On Fri, May 13, 2016 at 2:32 PM, Grundmann, Christian
wrote:
> Hi,
> @ This looks correct if this is in the [irs] section of the configuration.
> It is
>
> cat /etc/vdsm/vdsm.conf
> [vars]
> ssl = true
>
> [addresses]
> management_port = 54321
>
> [irs ]
This is the "irs " section (note the trailing space), not "irs".
Thx
> Christian
>
> -Original Message-
> From: Nir Soffer [mailto:nsof...@redhat.com]
> Sent: Friday, May 13, 2016 13:52
> To: Grundmann, Christian
> Cc: users@ovirt.org
> Subject: Re: [ovirt-users] volume_utilization_chunk_mb not working
>
> On Fri, May 13, 20
On Fri, May 13, 2016 at 3:53 PM, Charles Tassell wrote:
> Hi Gervais,
>
> Okay, I see two problems: there are some leftover directories causing
> issues, and for some reason VDSM seems to be trying to bind to a port that
> something is already running on (probably an older version of VDSM). Try
> rem
On Fri, May 13, 2016 at 3:37 PM, Gervais de Montbrun
wrote:
> Hi Charles,
>
> I think the problem I am having is due to the setup failing and not
> something in vdsm configs as I have never gotten this server to start up
> properly and the BRIDGE ethernet interface + ovirt routes are not set up.
>
This smells like SELinux issues, did you try with permissive mode?
On May 20, 2016 at 7:59 PM, "Bill James" wrote:
> Nobody has any ideas or thoughts on how to troubleshoot?
>
> why does qemu group work but not kvm when qemu is part of kvm group?
>
> [root@ovirt1 prod vdsm]# grep qemu /etc/gro
On Fri, May 20, 2016 at 9:02 PM, Bill James wrote:
> [root@ovirt1 prod ~]# sestatus
> SELinux status: disabled
>
Same on ovirt2?
>
>
>
>
> On 5/20/16 10:49 AM, Nir Soffer wrote:
>
> This smells like SELinux issues, did you try with permissive mo
user,fuser,egroup,rgroup,sgroup,fgroup,cmd | egrep
'qemu|libvirt'
ps auxe | egrep 'qemu|libvirt'
>
>
>
>
> On 5/20/16 11:13 AM, Nir Soffer wrote:
>
> On Fri, May 20, 2016 at 9:02 PM, Bill James wrote:
>
>> [root@ovirt1 prod ~]#
luster configuration.
Nir
>
>
>
>
> On 5/20/16 11:47 AM, Nir Soffer wrote:
>
> On Fri, May 20, 2016 at 9:25 PM, Bill James wrote:
>>
>> yes
>>
>> [root@ovirt2 prod .shard]# sestatus
>> SELinux status: disabled
>>
>>
> [root@ovirt3 prod 4c4bfdf7-bc70-41b2-ab58-710ff8e850bf]# grep ^user
> /etc/libvirt/qemu.conf
> user = "root"
>
> I'm assuming that's what sets the qemu user.
>
>
>
> When I first tried using that script without setting "user = root" it didn't
install it manually from here:
https://github.com/oVirt/ovirt-imageio/archive/master.zip
To install the package, do:
cd ovirt-imageio/common
make rpm
yum install dist/ovirt-imageio-common-0.1-1.noarch.rpm
This issue will be resolved soon.
Nir
>
>
>
>
>
> On 5/20/16 2:44 PM, N
On Mon, May 30, 2016 at 1:44 PM, Alessandro De Salvo
wrote:
> Hi,
> just to answer myself, I found these instructions that solved my problem:
>
> http://7xqb88.com1.z0.glb.clouddn.com/Features_Cinder%20Integration.pdf
>
> Basically, I was missing the step to add the ceph key to the Authentication
On Mon, May 30, 2016 at 4:07 PM, Nicolas Ecarnot wrote:
> Hello,
>
> We're planning a move from our old building towards a new one a few meters
> away.
>
>
>
> In a similar way of Martijn
> (https://www.mail-archive.com/users@ovirt.org/msg33182.html), I have
> maintenance planed on our storage sid
On Mon, May 30, 2016 at 11:06 AM, InterNetX - Juergen Gotteswinter
wrote:
> Hi,
>
> for some time we have been getting error messages from Sanlock, and so far I
> was not able to figure out what exactly they mean and, more importantly,
> whether it's something which can be ignored or needs to be fixed (and how)
On Wed, Jun 1, 2016 at 2:27 PM, Nicolas Ecarnot wrote:
> Hello,
>
> Last week, one of our DC went through a network crash, and surprisingly,
> most of our hosts did resist.
> Some of them lost their connectivity, and were stonithed.
>
> I'd like to be sure to understand what tests are made to decl
On Thu, Jun 2, 2016 at 6:35 PM, David Teigland wrote:
>> verify_leader 2 wrong space name
>> 4643f652-8014-4951-8a1a-02af41e67d08
>> f757b127-a951-4fa9-bf90-81180c0702e6
>> /dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids
>
>> leader1 delta_acquire_begin error -226 lockspace
>> f757b127-a951-4fa9-bf9
- Original Message -
> From: "lofyer"
> To: "users"
> Sent: Saturday, October 19, 2013 12:38:16 PM
> Subject: [Users] install engine with everything setup like node-iso and
> xen-server
>
> I'm trying to make a livecd of ovirt-engine
Did you know about ovirt-live?
http://wiki.ovi
- Original Message -
> From: "Patrick Hurrelmann"
> To: "oVirt Mailing List"
> Sent: Tuesday, November 12, 2013 11:34:20 AM
> Subject: [Users] Low quality of el6 vdsm rpms
>
> Hi all,
>
> sorry for this rant, but...
>
> /usr/bin/pep8 --exclude="config.py,constants.py" --filename '*.p
Hi all,
I will be the cloud storage group community gardener next week, starting at
December 16.
I plan to work on:
- Adding a Python mocking library and integrating it with the vdsm unit-tests
- Improving ovirt-live documentation
- Other ovirt-live stuff
Thanks,
Nir
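As a generic illustration of what a mocking library buys the unit tests (an assumed sketch, not vdsm code; the command name and helper are invented for the example), a test can replace a command runner with a canned double:

```python
from unittest import mock  # shipped as the external "mock" package on Python 2

def read_vm_status(runner):
    # 'runner' executes a command and returns (rc, out, err);
    # the command name below is purely illustrative.
    rc, out, err = runner(["vdsm-client", "VM", "status"])
    return out.strip() if rc == 0 else None

# The test never runs a real command: the runner is a mock with a
# canned return value, and the call can be asserted afterwards.
runner = mock.Mock(return_value=(0, "Up\n", ""))
assert read_vm_status(runner) == "Up"
runner.assert_called_once()
```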
- Original Message -
> From: "Sander Grendelman"
> To: "Itamar Heim"
> Cc: users@ovirt.org, "Michal Skrivanek"
> Sent: Wednesday, December 18, 2013 11:40:36 AM
> Subject: Re: [Users] Excessive syslog logging from vdsm/sampling.py
>
> - vdsm-4.13.0-11.el6.x86_64
> - FC storage domain
> -
- Original Message -
> From: "Nir Soffer"
> To: a...@ovirt.org, "oVirt Mailing List"
> Sent: Sunday, December 15, 2013 6:02:00 PM
> Subject: Community Gardener for week starting at December 16
>
> Hi all,
>
> I will be the cloud storage gro
- Original Message -
> From: "Ayal Baron"
> To: "Nir Soffer"
> Cc: "Sander Grendelman" , "Michal Skrivanek"
> , users@ovirt.org
> Sent: Wednesday, December 18, 2013 1:27:36 PM
> Subject: Re: [Users] Excessive syslog loggi
- Original Message -
> From: "Dave Neary"
> To: "Nir Soffer" , a...@ovirt.org, "oVirt Mailing List"
>
> Sent: Wednesday, December 18, 2013 2:01:14 PM
> Subject: Re: Community Gardener for week starting at December 16
>
>
>
> O
- Original Message -
> From: "Sander Grendelman"
> To: "Nir Soffer"
> Cc: "Michal Skrivanek" , users@ovirt.org, "Ayal Baron"
>
> Sent: Wednesday, December 18, 2013 4:00:21 PM
> Subject: Re: [Users] Excessive syslog logging from
- Original Message -
> From: "Sander Grendelman"
> To: "Nir Soffer"
> Cc: "Michal Skrivanek" , users@ovirt.org, "Ayal Baron"
>
> Sent: Wednesday, December 18, 2013 4:00:21 PM
> Subject: Re: [Users] Excessive syslog logging from
- Original Message -
> From: "Sander Grendelman"
> To: "Nir Soffer"
> Cc: "Michal Skrivanek" , users@ovirt.org, "Ayal Baron"
>
> Sent: Wednesday, December 18, 2013 4:00:21 PM
> Subject: Re: [Users] Excessive syslog logging from
- Original Message -
> From: "Nir Soffer"
> To: "Sander Grendelman"
> Cc: users@ovirt.org, "Michal Skrivanek"
> Sent: Wednesday, December 18, 2013 5:46:37 PM
> Subject: Re: [Users] Excessive syslog logging from vdsm/sampling.py
>
e several hosts, and the iscsi
>> SAN has been fenced and/or rebooted.
>>
>> Thanks,
>>
>> Juergen
>>
>>
>> Am 6/2/2016 um 6:03 PM schrieb David Teigland:
>>> On Thu, Jun 02, 2016 at 06:47:37PM +0300, Nir Soffer wrote:
>>>>> This is
On Sat, Jun 4, 2016 at 5:17 PM, David Gossage
wrote:
> This morning I updated my gluster version to 3.7.11 and during this I
> shut down ovirt completely and all VM's. On bringing them back up ovirt
> seems to have not recreated symlinks to gluster storage
>
> Before VM's were created and the path
On Wed, Jun 8, 2016 at 4:12 PM, Barak Korren wrote:
> On 8 June 2016 at 15:58, Fernando Frediani
> wrote:
>> Hi there,
>>
>> I'm spending a fair amount of time to find out how (if possible) to upload a
>> .img image to a oVirt Storage Domain and be able to mount it in a VM as a
>> disk.
>>
>> It
On Thu, Jun 9, 2016 at 11:46 AM, Krzysztof Dajka wrote:
> Hi,
>
> Recently I tried to delete a 1TB disk created on top of a ~3TB LUN from
> ovirt-engine.
> Disk is preallocated and I backuped data to other disk so I could recreate
> it once again as thin volume. I couldn't remove this disk when it was
> a
of scanning guest volumes?
I would use global_filter, to make sure that commands passing a filter on
the command line do not override your filter. vdsm is such an application,
using --config 'devices { filter = ... }'
Nir
>
> 2016-06-11 23:16 GMT+02:00 Nir Soffer :
>>
>> On
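A hedged sketch of such a global_filter in /etc/lvm/lvm.conf (the accept pattern is an example for a multipath-backed setup, not taken from this thread; adjust it to your device naming):

```
devices {
    # Accept only multipath devices; reject everything else, so host-side
    # lvm commands never scan LVs that belong to guest images.
    global_filter = [ "a|^/dev/mapper/|", "r|.*|" ]
}
```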
For such issues it is better to use the de...@ovirt.org mailing list:
http://lists.ovirt.org/mailman/listinfo/devel
Nir
On Mon, Jun 13, 2016 at 6:58 PM, Dewey Du wrote:
> To build and install ovirt-engine at your home folder under ovirt-engine
> directory execute the folllowing command:
>
> $ make clean inst
On Tue, Jun 14, 2016 at 1:12 AM, Rafael Almeida
wrote:
>
> Hello, friends, it is safe reboot my host after update the kernel in my
> centos 7.2 x64, the ovirt engine 3.6 run over this centos in a
> independent host. which it is the frequency at which the
> host/hypervisors communicates with the en
On Tue, Jun 14, 2016 at 5:26 PM, Ryan Mahoney
wrote:
> On my hosts, I have configured a 1gbe nic for ovirtmgmt whose usage is
> currently setup for Management, Display, VM and Migration. I also have a 2
> 10gbe nics bonded LACP which are VLAN tagged and assigned the dozen or so
> VLANS needed for
On Tue, Jun 14, 2016 at 8:59 PM, Fernando Frediani
wrote:
> Hi there,
>
> I see that supported storage types in oVirt are: iSCSI, FCoE NFS, Local and
> Gluster.
We support iSCSI, FC, FCoE, NFS, Gluster, Ceph, Local, and any POSIX-like
shared file system.
> Specifically speaking about iSCSI and FC
On Tue, Jun 14, 2016 at 11:23 PM, Fernando Frediani
wrote:
> Hi Nir,
> Thanks for clarification.
>
> Answering your questions: The intent was to use a Posix like filesystem
> similar to VMFS5 (GFS2, OCFS2, or other) where you have no choice for how
> the block storage is presented to multiple serv
On Wed, Jun 15, 2016 at 8:41 PM, Cam Mac wrote:
> Hi,
>
> I haven't had any luck using the oVirt GUI or virt-v2v (see earlier email),
> and I need to find a way to migrate quite a few Windows hosts (Windows 7,
> 2012, 2008, 2k3 etc) into my test oVirt cluster as a PoC so I can make a
> compelling
On Thu, Jun 16, 2016 at 12:31 AM, Claude Durocher
wrote:
> I get this warning in vdsm logs with ovirt 3.6 installed on CentOS 7.2 :
>
> WARNING: lvmetad is running but disabled. Restart lvmetad before enabling
> it!
>
> The service is effectively running but disabled in systemd. In
> /etc/lvm/lvm.
https://nssesxi-mgmt/folder/wvm2_1?dcPath=North%2520Sutton%2520Street&dsName=nssesxi%252dc1%252dr10%252dlun2
Smells like a VMware bug, double-escaping %20 to %2520.
Did you check with vmware support?
I would eliminate the spaces in the data center name to avoid such issues.
Nir
>
> Kind
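The double escaping is easy to demonstrate with Python's urllib (a standalone illustration of the symptom, not VMware code):

```python
from urllib.parse import quote

name = "North Sutton Street"
once = quote(name)    # each space becomes %20
twice = quote(once)   # '%' itself is escaped, so %20 becomes %2520

assert once == "North%20Sutton%20Street"
assert twice == "North%2520Sutton%2520Street"
```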
On Fri, Jun 17, 2016 at 5:30 AM, Fernando Frediani
wrote:
> +1
>
>
>
> On 16/06/2016 23:14, Bond, Darryl wrote:
>>
>> Has there been any consideration of allowing the hosted engine to be
>> installed on a Ceph rbd.
>>
>> I'm not suggesting using cinder but addressing the rbd directly in the
>> hos
On Mon, Jun 20, 2016 at 2:23 AM, Bond, Darryl wrote:
> Nir,
> I absolutely understand why you would want to use Cinder for oVirt disk
> management.
> My question was only about the hosted engine storage which is a 'special
> case'. The engine-setup process could have the path to the RBD and secre
On Fri, Jun 24, 2016 at 11:39 AM, Vinzenz Feenstra wrote:
>
>> On Jun 24, 2016, at 9:10 AM, nico...@devels.es wrote:
>>
>> Hi,
>>
>> We're trying to delete an auto-generated live snapshot that has been created
>> after migrating an online VM's storage to a different domain. oVirt version
>> is 3