On 01/09/2014 12:29 PM, Nicolas Ecarnot wrote:
> Hi Maor, hi everyone,
>
> Le 07/01/2014 04:09, Maor Lipchuk a écrit :
>> Looking at bugzilla, it could be related to
>> https://bugzilla.redhat.com/1029069
>> (based on the exception described at
>> https://bu
Adding Sandro to the thread
On 01/12/2014 11:26 AM, Maor Lipchuk wrote:
> On 01/09/2014 12:29 PM, Nicolas Ecarnot wrote:
>> Hi Maor, hi everyone,
>>
>> Le 07/01/2014 04:09, Maor Lipchuk a écrit :
>>> Looking at bugzilla, it could be related to
>>> https://bu
Hi Trey,
Can you please also attach the engine/vdsm logs.
Thanks,
Maor
On 01/27/2014 06:12 PM, Trey Dockendorf wrote:
> I setup my first oVirt instance since 3.0 a few days ago and it went
> very well, and I left the single host cluster running with 1 VM over
> the weekend. Today I come back an
Hi Sandy,
Virtual size is the size of the disk as the VM sees it; it is the size
you chose when creating it.
The true size is the sum of the actual sizes of all the volumes related
to the disk.
So, for example, suppose you have one disk of 20GB and you have occupied 18GB of it.
Then you created a snaps
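The distinction can be sketched in a few lines (a toy illustration, not oVirt code; the snapshot's 3GB figure below is an assumed value to complete the truncated example):

```python
# Sketch: a disk's virtual size vs. its true (actual) size.
# Virtual size is fixed at creation; true size is the sum of the space
# really consumed by every volume (base image + snapshots) in the chain.

def actual_size(volume_chain):
    """Sum the space actually consumed by each volume in the chain."""
    return sum(vol["actual_gb"] for vol in volume_chain)

# Hypothetical 20GB thin-provisioned disk with one snapshot:
disk_virtual_gb = 20
chain = [
    {"name": "base", "actual_gb": 18},       # data written before the snapshot
    {"name": "snapshot-1", "actual_gb": 3},  # changes written after the snapshot
]

print(disk_virtual_gb)     # virtual size stays 20
print(actual_size(chain))  # true size is 18 + 3 = 21
```

Note that the true size can exceed the virtual size once snapshots accumulate, since each volume keeps its own copy of changed data.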
which is
> still in place and was in place during the crash.
>
> Thanks
> - Trey
>
> On Tue, Jan 28, 2014 at 2:45 AM, Maor Lipchuk wrote:
>> Hi Trey,
>>
>> Can you please also attach the engine/vdsm logs.
>>
>> Thanks,
>> Maor
>>
>> On
Hi Nicolas,
Can you please attach the VDSM logs of the problematic nodes and of the
valid nodes, the engine log, and also the sanlock log.
You wrote that many nodes suddenly began to become unresponsive.
Do you mean that the hosts switched to non-responsive status in the engine?
I'm asking that because non
Hi,
Please see inline response.
Regards,
Maor
On 01/29/2014 02:35 PM, Nicolas Ecarnot wrote:
> Le 29/01/2014 13:29, Maor Lipchuk a écrit :
>> Hi Nicolas,
>>
>> Can u please attach the VDSM logs of the problematic nodes and valid
>> nodes, the engine log and also the s
On 02/01/2014 06:52 PM, Itamar Heim wrote:
> On 01/29/2014 10:45 AM, Maor Lipchuk wrote:
>> Hi Sandy,
>>
>> virtual size is the size of the disk the VM knows, it is actually the
>> size you chose to create it with.
>> The true size is the summerise of all the true
On 02/02/2014 02:26 PM, Gianluca Cecchi wrote:
> On Sun, Feb 2, 2014 at 11:21 AM, Maor Lipchuk wrote:
>
>> That is correct, you can also see the size and the fields through the
>> API or ovirt-cli
>> (see
>> http://documentation-devel.engineering.redha
From the engine logs it seems that live snapshot is indeed called (the
command is SnapshotVDSCommand, see [1]).
This is done right after the snapshot has been created for the VM, and it
signals the qemu process to start using the newly created volume.
When live snapshot does not succeed we should see i
ibvirt, if that
kind of behaviour could happen.
>
> Dafna
>
>
> On 02/03/2014 05:08 PM, Maor Lipchuk wrote:
>> From the engine logs it seems that indeed live snapshot is called (The
>> command is snapshotVDSCommand see [1]).
>> This is done right after the snapshot has b
On 02/03/2014 07:46 PM, Dafna Ron wrote:
> On 02/03/2014 05:34 PM, Maor Lipchuk wrote:
>> On 02/03/2014 07:18 PM, Dafna Ron wrote:
>>> Maor,
>>>
>>> If snapshotVDSCommand is for live snapshot, what is the offline create
>>> snapshot command?
>
Hi Andreas,
Basically it means that the snapshot was created, but the QEMU process is
still writing to the original volume (the snapshot), so any changes you
make while this VM is running will be in the snapshot.
This could be fixed by restarting the VM (as described in the event),
a
Hi Boyan,
Generally we don't disconnect external LUN disks when we remove them
from oVirt management.
You can disconnect them manually from the host, or use a restart.
IIRC, one reason for that is that we might have Storage Domains which
use the same target.
Another reason is that we kee
Hi mad,
Since oVirt version 3.4 we support only two Data Center types: shared
and local.
Changing the Data Center between the two is supported, though the Data
Center should not contain any Storage Domains.
That means that if you have VMs with disks, you will need to export
them to an export do
Hi Tomasz,
I'm very pleased to hear that you are interested in the GSOC project;
see my answers inline.
It's great to see that you know the material pretty well. I'm interested
to hear some more ideas and feedback.
I will start a thread soon (maybe also schedule a call) once I get
more feedback
On 03/11/2014 05:20 PM, Itamar Heim wrote:
> On 03/11/2014 05:14 PM, Eyal Edri wrote:
>>
>>
>> - Original Message -
>>> From: "Itamar Heim"
>>> To: "Eyal Edri" , "Tomasz Kołek"
>>> , users@ovirt.org, "infra"
>>> Sent: Tuesday, March 11, 2014 5:10:54 PM
>>> Subject: Re: [Users] [GSOC][Gerr
ill be more
generic.
>
>
> BR,
> Tomek
>
> -Original Message-
> From: Maor Lipchuk [mailto:mlipc...@redhat.com]
> Sent: Tuesday, March 11, 2014 8:40 PM
> To: Meital Bourvine; Eyal Edri
> Cc: Tomasz Kołek; users@ovirt.org; infra
> Subject: Re: [Users] [G
On 03/12/2014 05:57 PM, Alon Bar-Lev wrote:
>
>
> - Original Message -
>> From: "Itamar Heim"
>> To: "Eli Mesika" , users@ovirt.org
>> Cc: "Maor Lipchuk" , "Tomasz Kołek"
>> , "infra"
>> Sent:
Hi Gary,
Please take a look at
http://www.ovirt.org/Feature/iSCSI-Multipath#User_Experience
Regards,
Maor
On 03/27/2014 05:59 PM, Gary Lloyd wrote:
> Hello
>
> I have just deployed Ovirt 3.4 on our test environment. Does anyone know
> how the ISCSI multipath issue is resolved ? At the moment it
----
>
>
> On 27 March 2014 16:02, Maor Lipchuk <mailto:mlipc...@redhat.com>> wrote:
>
> Hi Gary,
>
> Please take a look at
> http://www.ovirt.org/Feature/iSCSI-Multipath#User_Experience
>
> Regards,
> Maor
&
You should only be able to extend a disk's size, not shrink it.
There was an idea to integrate virt-sparsify for thin-provisioned disks
(see
http://www.google-melange.com/gsoc/proposal/review/org/google/gsoc2014/utkarshsins/5629499534213120),
although this is not supported yet.
Regards,
Maor
On 04
Hi Shubham,
In case you do not need the DWH setup, you can disable the Dashboard
background queries via the ovirt-engine.conf.
Please add the following file under your $DEV_OVIRT_PREFIX folder:
"$DEV_OVIRT_PREFIX/etc/ovirt-engine/engine.conf.d/99-no-dashboard.conf"
and add to it the following content
On Tue, Apr 18, 2017 at 10:50 AM, Maor Lipchuk wrote:
> Hi Shubham,
>
> In case you do not need the DWH setup, you can disable the Dashboard
> background queries via the ovirt-engine.conf.
>
> Please, add the following file to your $DEV_OVIRT_PREFIX folder:
> "$DEV_OVI
Hi Bryan,
It seems like there was an error from qemu-img while reading sector 143654878.
The image copy (conversion) failed with a low-level qemu-img failure:
CopyImageError: low level Image copy failed:
("cmd=['/usr/bin/taskset', '--cpu-list', '0-31', '/usr/bin/nice',
'-n', '19', '/usr/bin/ionic
Hi Alex,
I saw a bug that might be related to the issue you encountered:
https://bugzilla.redhat.com/show_bug.cgi?id=1386443
Sahina, maybe you have any advice? Do you think that BZ1386443 is related?
Regards,
Maor
On Sat, Jun 3, 2017 at 8:45 PM, Abi Askushi wrote:
> Hi All,
>
> I have install
It seems related to the destroy of the storage domain, since destroy
will still remove it from the engine if there is a problem.
Have you tried to restart VDSM, or reboot the host?
There could be a problem with unmounting the storage domain.
Can you please send the VDSM logs?
Regards,
Maor
On Thu, Jun
Hi Marcelo,
Can you please elaborate a bit more on the issue?
Can you please also attach the engine and VDSM logs?
Thanks,
Maor
On Wed, May 31, 2017 at 10:08 AM, Sandro Bonazzola
wrote:
>
>
> On Tue, May 23, 2017 at 4:25 PM, Marcelo Leandro
> wrote:
>
>> I see now that image base id stay i
> ---- Original message
> From: Maor Lipchuk
> Date: 6/2/17 5:29 PM (GMT-06:00)
> To: Bryan Sockel
> Cc: users@ovirt.org
> Subject: Re: [ovirt-users] VM/Template copy issue
>
>
> From : Maor Lipchuk [mlipc...@redhat.com]
seems that gluster replica
>> needs 512B block size.
>> Is there a way to make gluster function with 4K drives?
>>
>> Thank you!
>>
>> On Sun, Jun 4, 2017 at 2:34 PM, Maor Lipchuk wrote:
>>>
>>> Hi Alex,
>>>
>>> I saw a bug that mi
Hi Arsène,
See my comments inline
On Mon, Jun 12, 2017 at 1:02 PM, Arsène Gschwind
wrote:
> Hi
>
> Our setup looks like:
>
> - 2 clusters in 2 different site connected with 10GBit LAN
> - Storage based on FC SAN replicated on both site and available for both
> site (The LUNs are available over 4
Hi Vinícius,
For some reason it looks like your networks are both connected to the
same IPs, based on the VDSM logs:
u'connectionParams':[
{
u'netIfaceName':u'eno3.11',
u'connection':u'192.168.11.14',
},
{
u'netIfaceName':u'eno
Hi Vinícius,
I was trying to reproduce your scenario and also encountered this
issue, so please disregard my last comment. Can you please open a bug
on that so we can investigate it properly?
Thanks,
Maor
On Tue, Jul 25, 2017 at 11:26 AM, Maor Lipchuk wrote:
> Hi Vinícius,
>
> For so
Can you please try to connect to your iSCSI server using iscsiadm from
your VDSM host, for example like so:
iscsiadm -m node -T
iqn.2017-07.br.ufrj.if.cc.storage.ctl:ovirt-mpio -I eno3.11 -p
192.168.12.14:3260 --login
On Tue, Jul 25, 2017 at 11:54 AM, Maor Lipchuk wrote:
> Hi Vinícius,
&g
it can still be activated by:
> iscsid.socket
>
> [root@ovirt3 ~]# mv /var/lib/iscsi/ifaces/* /tmp/ifaces
>
> [root@ovirt3 ~]# service iscsid start
> Redirecting to /bin/systemctl start iscsid.service
>
> And finally:
>
> [root@ovirt3 ~]# hosted-engine --vm-status
>
On Thu, Jul 27, 2017 at 1:14 PM, Abi Askushi wrote:
> Hi All,
>
> For VM backups I am using some python script to automate the snapshot ->
> clone -> export -> delete steps (although with some issues when trying to
> backups a Windows 10 VM)
>
> I was wondering if there is there any plan to integr
Hi Johan,
Can you please share the vdsm and engine logs.
Also, it won't hurt to get the sanlock logs as well, in case sanlock
was configured to save all debugging in a log file (see
http://people.redhat.com/teigland/sanlock-messages.txt).
Try to share the sanlock output by running 'sanlock clie
Hi Andy,
Can you please elaborate more regarding the requirements and what you
are trying to achieve.
Have you tried to use vm-pool? Those VMs should be configured as
stateless and acquire minimal storage capacity.
Regards,
Maor
On Fri, Jul 28, 2017 at 10:19 PM, Andy Michielsen
wrote:
> Hello a
ho just start
> with firefox or virt-viewer.
>
> All idea's and suggestions are welcome.
>
> Thanks.
>
>> Op 30 jul. 2017 om 10:32 heeft Maor Lipchuk het
>> volgende geschreven:
>>
>> Hi Andy,
>>
>> Can you please elaborate more regarding the
If you are referring to a plain OS to run the laptops from, I'm not
aware of any specific OS. Maybe DSL (http://www.damnsmalllinux.org/)?
On Sun, Jul 30, 2017 at 1:01 PM, Maor Lipchuk wrote:
> What about ovirt-live?
>http://www.ovirt.org/download/ovirt-live/
>
> On Sun, J
result -226
>
>
> vdsm logs doesnt have any errors and engine.log does not have any
> errors.
>
> And if i check the ids file manually. I can see that everything in it
> is correct except for the first host in the cluster where the space
> name and host id is broken.
>
>
On Sun, Jul 30, 2017 at 4:24 PM, Maor Lipchuk wrote:
> Hi David,
Sorry, I meant Johan
>
> I'm not sure how it got to that character in the first place.
> Nir, Is there a safe way to fix that while there are running VMs?
>
> Regards,
> Maor
>
> On Sun, Jul 30, 2017
On Mon, Aug 21, 2017 at 6:12 PM, Matthias Leopold
wrote:
> Hi,
>
> we're experimenting with Cinder/Ceph Storage on oVirt 4.1.3. When we tried
> to snapshot a VM (2 disks on Cinder storage domain) the task never finished
> and now seems to be in an uninterruptible loop. We tried to stop it in
> var
On Tue, Aug 22, 2017 at 1:08 PM, Matthias Leopold
wrote:
>
>
> Am 2017-08-22 um 09:33 schrieb Maor Lipchuk:
>>
>> On Mon, Aug 21, 2017 at 6:12 PM, Matthias Leopold
>> wrote:
>>>
>>> Hi,
>>>
>>> we're experimenting with Cinder/Ce
On Sun, Oct 1, 2017 at 2:50 PM, Nir Soffer wrote:
> On Sun, Oct 1, 2017 at 9:58 AM Yaniv Kaul wrote:
>>
>> On Sat, Sep 30, 2017 at 8:41 PM, Charles Kozler
>> wrote:
>>>
>>> Hello,
>>>
>>> I recently read on this list from a redhat member that export domain is
>>> either being deprecated or looki
On Sun, Oct 15, 2017 at 8:17 PM, Yaniv Kaul wrote:
>
>
> On Sun, Oct 15, 2017 at 7:13 PM, Nir Soffer wrote:
>>
>> On Sun, Oct 1, 2017 at 3:32 PM Maor Lipchuk wrote:
>>>
>>> On Sun, Oct 1, 2017 at 2:50 PM, Nir Soffer wrote:
>>> &g
- Original Message -
> From: "Itamar Heim"
> To: "Juan Carlos YJ Lin" , users@ovirt.org, "Maor
> Lipchuk" , "Allon
> Mureinik"
> Sent: Tuesday, November 4, 2014 3:00:06 AM
> Subject: Re: [ovirt-users] Ovirt Node migration
>
inal Message -
> From: "Raul Laansoo"
> To: "Maor Lipchuk"
> Cc: "users"
> Sent: Tuesday, November 4, 2014 9:57:46 AM
> Subject: Re: [ovirt-users] Import FC Storage as iSCSI
>
> Hi.
>
> I have managed to get my oVirt node running using oVirt
- Original Message -
> From: "Raul Laansoo"
> To: "Maor Lipchuk"
> Cc: "Piotr Kliczewski" , "users"
> Sent: Tuesday, November 4, 2014 11:06:09 AM
> Subject: Re: [ovirt-users] Import FC Storage as iSCSI
>
> Progress. Now it
It should be from a 3.5 Data Center, considering the option_value in the
vdc_options table.
Liron, do you have any insights on this?
Raul, was your 'source' Data Center from version 3.5?
Regards,
Maor
- Original Message -
> From: "Raul Laansoo"
> To: &q
- Original Message -
> From: "Raul Laansoo"
> To: "Maor Lipchuk"
> Cc: "Piotr Kliczewski" , "users"
> Sent: Wednesday, November 5, 2014 9:06:05 AM
> Subject: Re: [ovirt-users] Import FC Storage as iSCSI
>
> Hi.
>
The OVF_STORE disk (related to the OvfStoreOnAnyDomain feature) should only be
supported for a 3.5 Data Center.
Liron, do you have any insight on that?
Regards,
Maor
- Original Message -
> From: "Raul Laansoo"
> To: "Maor Lipchuk"
> Cc: "Piotr Klicze
Hi Coffee,
The OVF_STORE disk syncs the VMs' and Templates' data asynchronously
every 60 minutes (see [1]);
this might be the reason that sometimes you see the entities as candidates for
import and sometimes you don't.
There are patches which sync the VMs/Templates data once the Storage Domai
- Original Message -
> From: "mad Engineer"
> To: users@ovirt.org
> Sent: Thursday, December 4, 2014 11:26:32 AM
> Subject: [ovirt-users] shared storage with iscsi
>
> Hello All,
> I am using NFS as shared storage and is working fine,able
> to migrate instances across nodes.
Hi Jason,
Did you try to create a new local Data Center, and add a local storage domain
there?
Or does it have to be in the same Data Center containing the hosted engine?
Regards,
Maor
- Original Message -
> From: "Jason Greene"
> To: users@ovirt.org
> Sent: Friday, December 5, 2014 11:20:
- Original Message -
> From: "Wolfgang Bucher"
> To: users@ovirt.org
> Sent: Wednesday, January 28, 2015 9:01:45 AM
> Subject: Re: [ovirt-users] OVF Storage
>
> AW: [ovirt-users] OVF Storage
>
> Hello
>
>
>
>
> whitch logs do you need. Attached a screenshot of the messages.
Hi wolf
Hi Lars,
Are you still getting
VAR__ACTION__ADD,VAR__TYPE__STORAGE__CONNECTION,ACTION_TYPE_FAILED_STORAGE_CONNECTION_ALREADY_EXISTS?
If so, try to log in to the engine database and run: "SELECT * FROM
storage_server_connections where connection ilike '%[Your connection path]%';"
See if you ge
- Original Message -
> From: "Roy Golan"
> To: "Mikola Rose" , "Maor Lipchuk"
> Cc: users@ovirt.org
> Sent: Wednesday, January 28, 2015 12:02:50 PM
> Subject: Re: [ovirt-users] oVirt 3.5.1 - VM "hostedengine" Failing to start
&g
Hi Donald,
Can you please elaborate more on the process; I'm not sure I understood the
scenario clearly.
Regards,
Maor
- Original Message -
> From: "DONALD DAVIS"
> To: "Maor Lipchuk"
> Cc: "Allon Mureinik" , users@ovirt.org
> Sent:
"Elad Ben Aharon" , "Maor Lipchuk"
> , "Liron Aravot"
>
> Cc: users@ovirt.org
> Sent: Wednesday, February 11, 2015 7:07:00 PM
> Subject: RE: [ovirt-users] importing iscsi storage domain
>
> yes, no problem. I am including the logs from yesterda
Hi Pascal,
Can you please attach the VDSM logs of the host used to add the Storage Domain
(they should be located under /var/log/vdsm/).
Error 351 means "Error creating a storage domain"; we should look in the VDSM
logs to get more info about the reason for the failure.
Might be worth to take a look at
Hi John,
Well done on writing the script.
I was wondering, what is it that you are trying to do? If you are trying to
migrate disks from one setup to another, you might want to consider using
"Import Storage Domain" (see [1]),
though it is only supported from oVirt 3.5.
[1]
http://www.ovirt.org/Featu
Hi Eduardo,
Can you please open an RFE on that issue:
https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt
Thanks,
Maor
- Original Message -
> From: "Eduardo Terzella"
> To: users@ovirt.org
> Sent: Friday, February 20, 2015 10:56:39 PM
> Subject: [ovirt-users] It is possible - Ovirt ?
I assume you are referring to this RFE:
https://bugzilla.redhat.com/1183995 - [RFE] Allow force attaching an export
domain with stale pool meta-data by turning it into a data domain
Regards,
Maor
- Original Message -
> From: "Allon Mureinik"
> To: "Maor L
ot;Eli Mesika" , "Ido Barkan" ,
> "Adam Litke" ,
> "Francesco Romani" , "Maor Lipchuk"
> , "Vered Volansky"
> , "Eli Mesika" , Users@ovirt.org,
> de...@ovirt.org
> Sent: Wednesday, March 18, 2015 9:56:17 AM
> Su
Hi Vince,
Live merge is not implemented yet, although it is on the roadmap.
Regards,
Maor
On 11/22/2012 03:29 PM, Vincent Miszczak wrote:
> Hi,
>
>
>
> I’ve tried live VM disks snapshot. Its ok. It appears I cannot delete
> those snapshots while the VM is running.
>
>
>
> Is it a bug? If
Hi Erik,
How much free storage space do you have in your domains?
Can you please add the VDSM log.
Regards,
Maor
On 12/19/2012 05:58 AM, Erik Jacobs wrote:
> I attempted to take a snapshot of a machine while it was running. I
> noticed that the machine was paused, and then I attempted to resume it.
Hi Adrián,
Does the template disk allocation policy indicate thin provisioning or
preallocated?
I also didn't find any bug related to it.
Can you please attach the VDSM and engine logs.
Thanks,
Maor
On 12/20/2012 11:28 PM, Adrian Gibanel wrote:
> I'm going to describe the bug.
>
> You create a vir
Hi Alex, if the VMs that you created used thin provisioning for the
template, then the disks of the created VMs are based on the volumes of
the template images using qcow, and therefore they will be related to
this template.
If you don't want the VM to be based on the template images you can use
t
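The backing relationship described above can be modeled roughly as follows (a simplified copy-on-write sketch, not the actual qcow2 format or oVirt code; the block values are made up):

```python
# Sketch: why a thin-provisioned VM disk depends on its template image.
# Reads fall through to the template (backing) volume until the VM has
# written its own copy of a block (copy-on-write).

class Volume:
    def __init__(self, backing=None):
        self.blocks = {}        # only blocks written to this volume
        self.backing = backing  # template image, or None for a base volume

    def write(self, offset, data):
        self.blocks[offset] = data

    def read(self, offset):
        if offset in self.blocks:
            return self.blocks[offset]
        if self.backing is not None:
            return self.backing.read(offset)  # fall through to the template
        return b"\0"            # unallocated block: reads as zeros

template = Volume()
template.write(0, b"template-data")

vm_disk = Volume(backing=template)  # thin-provisioned VM disk
print(vm_disk.read(0))              # served by the template image
vm_disk.write(0, b"vm-data")
print(vm_disk.read(0))              # now served by the VM's own copy
```

This is why such a VM disk cannot outlive its template image: unwritten blocks are still read from the template's volumes.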
Hi Jiri,
Perhaps you are referring to quota (http://www.ovirt.org/Features/Quota)
Regards,
Maor
On 01/09/2013 12:15 PM, Jiri Belka wrote:
> Hi,
>
> in vSphere you can create a resource pool[1] and define to it access control
> and delegation...
>
>
> Access control and delegation - When a top
Hi Alex,
Can you please open a bug on that issue.
It appears that since your template has dots in its name it cannot be
imported.
Regards,
Maor
On 01/16/2013 05:55 PM, Omer Frenkel wrote:
> [adding Gilad]
> what engine version is this? (latest upstream looks different and i am
> able to import "co
Hi Adrian,
Sorry for the late response.
Please see my inline comments,
and feel free to ask if there are any more questions or other issues you
want us to address.
P.S. Since some of the responses were in different threads, I gathered
all of them into this email.
> 3) About the attached logs the vi
Hi Alex,
The storage domains are part of the Data Center, and should not be
related to clusters.
Clusters are used for migrating VMs, which are the qemu processes,
between hosts.
Disks are created on the storage domains regardless of clusters.
The host which writes the data in the storage domain to
storage domains, one of the reasons
we use the SPM is to avoid collisions when hosts write to the same
storage domain.
>
> Alex
>
>
> On 01/17/2013 07:39 PM, Maor Lipchuk wrote:
>> Hi Alex,
>> The storage domains are part of the Data Center, and should not be
>>
Would changing the SPM priority in the host config have the desired
> effect w/o putting the host into Maint mode ? Or can I only do that when
> the host is in Maint mode ?
You can't change your SPM host while your host is up and running.
>
>
> On 17 January 2013 20:42, Maor L
o putting the host into Maint mode ? Or can I only do that
> when the host is in Maint mode ?
>
>
> On 17 January 2013 20:42, Maor Lipchuk <mailto:mlipc...@redhat.com>> wrote:
>
> Hi Alex,
> See my last respond, I add a comment i
Hi Jason, can you please add the engine and VDSM logs.
There is an open bug which might reflect the issue you encountered:
https://bugzilla.redhat.com/884635.
I can verify it the moment you send the logs.
Regards,
Maor
On 02/05/2013 06:31 AM, Jason Lawer wrote:
> Hi,
>
> I appear t
Hi xianghuadu,
any chance it is iptables rejecting the communication from your
engine to your oVirt node?
Regards,
Maor
On 02/25/2013 04:07 PM, xianghuadu wrote:
> I am in installation ovirt - engine + ovirt - node, when ovirt - engine add
> node - host node unable to activate. I throug
Hi Gianluca, I checked the code of 3.2 and you are right, the patch was
not pushed there.
Although it seems that the original fix exists in 3.2, I will send a patch
to fix that.
Regards,
Maor
On 03/19/2013 05:41 PM, Gianluca Cecchi wrote:
> Hello,
> in my opinion 3.2.1 for fedora 18 doesn't contain
FYI, the patch is at http://gerrit.ovirt.org/#/c/13172/
Thanks for your attention,
Maor
On 03/19/2013 05:52 PM, Maor Lipchuk wrote:
> Hi Gianluca, I checked the code of 3.2 and you are right, the patch was
> not pushed there.
> Although it seems that the origin fix exists in 3.2, I wi
Adding Timothy to the thread.
Removing a live snapshot is not supported yet, but it's on the roadmap.
For now, the VM must be shut down before removing a snapshot.
On 03/19/2013 07:11 PM, Gianluca Cecchi wrote:
> On Tue, Mar 19, 2013 at 5:52 PM, wrote:
>> I can create a snapshot, but how can I revert it
Also adding Federico to the thread; this could be a sanlock issue:
SanlockException(-203, 'Sanlock lockspace add failure', 'Sanlock exception')
Regards,
Maor
On 03/20/2013 01:50 PM, Eli Mesika wrote:
>
>
> - Original Message -
>> From: "Limor Gavish"
>> To: "Eli Mesika"
>> Cc: users@ovirt
Hi Jose, you first need to add the user.
After the user is added, you can select them, and then you should see
the Permissions sub-tab.
Regards,
Maor
On 03/20/2013 04:10 PM, supo...@logicworks.pt wrote:
> using the administrator portal, in the users tab -> add, I only have a
> search field.
> Don't f
Hi Juan,
I think you encountered this bug: https://bugzilla.redhat.com/881941; the
log there is quite similar.
The auto-recovery process should fix that after 15 minutes, but we need to
see if it is enabled in your environment.
On 03/20/2013 05:29 PM, Juan Jose wrote:
> I forgot the vdsm.log file,
>
On 03/20/2013 05:58 PM, Maor Lipchuk wrote:
> Hi Juan,
> I think you encountered this bug https://bugzilla.redhat.com/881941, the
> log there is quite the same.
> The auto recovery process should fix that after 15 minutes, but need to
> see if it is enabled in your environment.
Th
t;
> Many thanks,
>
> Juanjo.
>
> On Wed, Mar 20, 2013 at 5:02 PM, Maor Lipchuk <mailto:mlipc...@redhat.com>> wrote:
>
> On 03/20/2013 05:58 PM, Maor Lipchuk wrote:
> > Hi Juan,
> > I think you encountered this bug
> https://bugzi
DB directly but I'm asking
> if it is possible to have some kind of workaround for these situations.
>
> Many thanks in avanced,
>
> Juanjo.
>
> On Thu, Mar 21, 2013 at 4:52 PM, Maor Lipchuk <mailto:mlipc...@redhat.com>> wrote:
>
> thanks for the
On 03/21/2013 06:12 PM, Maor Lipchuk wrote:
> Was VDSM restart did any change?
>
> If your host is UP and running you can try to Re-initialize your data
> center (By right click on the Data Center) and pick a new storage to be
> the master.
You will need first to add a new stora
Hi Gianluca,
You can set the VM to run on a specific host by editing the VM,
choosing the Host tab, and selecting "Run On a specific host".
Generally, you can configure your cluster to use three different types
of selection policies: None, Evenly Distributed and Power Saving.
This can be set in the Cluster Policy tab
Hi Gianluca,
See my comments inline.
Regards,
Maor
On 03/22/2013 10:56 AM, Gianluca Cecchi wrote:
> Hello,
> this is my situation. Downtime of the VMs is not a problems as it is a
> test environment
> all based on 3.2.1 on f18 and ovirt stable repo
>
> DC1
> FC type with cluster1
> 2 hosts conne
Hi Gianluca,
The only host which creates disks, or performs any storage-related
allocation operation, is the SPM in the DC.
Regards,
Maor
On 03/24/2013 10:30 AM, Gianluca Cecchi wrote:
> On Sun, Mar 24, 2013 at 9:05 AM, Maor Lipchuk wrote:
>> Hi Gianluca,
>> You can set the VM to ru
From the VDSM log, it seems that the master storage domain was not
responding.
Thread-23::DEBUG::2013-03-22
18:50:20,263::domainMonitor::216::Storage.DomainMonitorThread::(_monitorDomain)
Domain 1083422e-a5db-41b6-b667-b9ef1ef244f0 changed its status to Invalid
Traceback (most recent call la
On 03/27/2013 10:37 AM, Dan Kenigsberg wrote:
> (dropping announce list. they only care about the finished product, not
> the road to get it done)
>
> On Tue, Mar 26, 2013 at 10:24:16AM +0100, Gianluca Cecchi wrote:
>> On Thu, Mar 21, 2013 at 2:36 PM, Mike Burns wrote:
>>
do they
013 17:10, "Maor Lipchuk" <mailto:mlipc...@redhat.com>> ha scritto:
>>
>> FYI, the patch is at http://gerrit.ovirt.org/#/c/13172/
>>
>> Thanks for your attention,
>> Maor
>>
>> On 03/19/2013 05:52 PM, Maor Lipchuk wrote:
>>
Hi suporte,
You probably encountered this bug: https://bugzilla.redhat.com/884635.
The fix for this bug should delete the disk from the engine even though
the image does not exist in the storage.
Can you please attach or check the VDSM and engine logs just to be sure
that you encountered the same sc
8e279f5'", 'code': 268}}
>
> Can you please teach me how to apply the fix?
>
> Thanks
> Jose
>
>
> *From: *"Maor Lipchuk"
> *To: *supo...@logicworks.pt
> *Cc: *"users@oVirt.org"
> *Sent: *Segu
...@logicworks.pt wrote:
> I'm using Version 3.2.1-, libvirt-0.10.2.4-1.fc18, vdsm-4.10.3-10.fc18
> is there an upgrade?
>
> Regards
> Jose
>
> --------
> *From: *"Maor Lipchuk"
> *To: *supo
Hi,
Can you please also attach the engine log and the full VDSM log.
In the log you sent I see you got the message:
supervdsm::190::SuperVdsmProxy::(_connect) Connect to svdsm failed
This behaviour could be related to https://bugzilla.redhat.com/910005,
which was fixed in a later version of VDSM.
But
Thanks,
> Vadim
>
> Вск 26 Май 2013 21:41:54 +0400, Maor Lipchuk написал:
>> Hi,
>> Can u please also attach the engine log, and the full log of VDSM
>>
>> In the log you sent I see you got the message
>> supervdsm::190::SuperVdsmProxy::(_connect) Connect to sv
hose VMs were slimmer than the 3.1 VM.
Did you manage to reproduce it on other VMs from 3.1?
Does it reproduce every time?
Regards,
Maor
On 05/27/2013 02:47 PM, ov...@qip.ru wrote:
> Hi, Maor
>
> + messages
>
> Thanks,
> Vadim
>
> Пнд 27 Май 2013 12:56:50 +0400, M