[Users] *SAN LUNs* thin-provisioning : Reclaiming block device storage

2013-11-04 Thread Nicolas Ecarnot

[This post is *not* about *VM* thin-provisioning]

Hi,

Our setup is made of an Equalogic SAN (PS6100) connected via a dedicated 
iSCSI network, an oVirt 3.3.0-4.el6 manager and CentOS nodes.

This is working fine (thank you all).

I usually create a LUN for the master storage domain, configure it in 
thin-provisioned mode on the SAN side, and fill it with VMs until it 
reaches roughly 90%.
Then I create another LUN the same way, add it as an additional storage 
domain, move some VMs onto it from the first domain, and everything is fine.


The issue is that I have no means to reclaim any freed storage on the 
master storage LUN. So the question (not specific to the *master* domain - 
it concerns any storage domain) is how to reclaim space on these domains.


I began to have a closer look at how a storage domain is set up (PV, VG, 
LVs). I also found some hints about a mount option (unmap/discard) that can 
be passed when mounting ext4 filesystems.
But I guess the issue is way more complex, as LVM may add a layer 
between ext4 and the block device.
How does the LVM setup work? I saw there is one LV per disk (plus some 
metadata LVs), in one big VG on one big PV. Is the entire disk space 
allocated to the PV, or is only part of it used at first, with an 
extension (growing) made each time more space is needed as VMs are added?


I have the opportunity to test and play with a test LUN in a test oVirt, 
so I'm not afraid of crashing anything. If the only answer is "unavailable 
feature", that's OK, but I just need to know.


--
Nicolas Ecarnot
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ISO_DOMAIN won't attach

2013-11-04 Thread Sandro Bonazzola
On 04/11/2013 20:58, Bob Doolittle wrote:
> Aha - finally nailed it.
> 
> Although iptables was disabled on engine, firewalld was not. Once I disabled 
> firewalld, nfsv3 mounts worked fine, and ISO_DOMAIN was able to attach.
> 
> Is the admin expected to take care of this, or is engine-setup supposed to 
> disable firewalld?

While running engine-setup, it asks whether you want to configure firewalld, if 
present.
If you say no, it asks whether you want to configure iptables, if present.
If you say no again, you'll need to take care of configuring the firewall manually.
If iptables has been chosen to be configured, firewalld is disabled if present.
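If you prefer to handle the firewall yourself, the manual switch is roughly 
the following (a sketch only, assuming a Fedora 19 engine host):

  systemctl stop firewalld.service
  systemctl disable firewalld.service
  yum install iptables-services
  systemctl enable iptables.service
  systemctl start iptables.service

and then open the ports oVirt needs in /etc/sysconfig/iptables.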

Did you add the host from the web interface instead of using the all-in-one plugin?
oVirt Engine is not yet able to handle firewalld, see 
https://bugzilla.redhat.com/show_bug.cgi?id=995362
Bug 995362 - [RFE] Support firewalld.



> 
> -Bob
> 
> On 11/04/2013 02:52 PM, Bob Doolittle wrote:
>> By wrapping the mount command on the node and recording the mount args, I 
>> was able to reproduce the issue manually.
>>
>> Although I can remote mount the iso dir using default options, when I 
>> specify -o nfsvers=3 the mount times out, which is the problem.
>>
>> I can do a loopback mount on the engine using nfsvers=3, but I can't do a 
>> remote mount from the node. I have SELinux set to 'permissive' on both
>> engine and node.
>>
>> I know I can work around this issue by changing the advanced parameters to 
>> specify V4, but would like to understand the real issue first. I've seen
>> the opposite, where v3 works but v4 times out (e.g. if your export isn't 
>> part of the root filesystem) but never the opposite like this where v4
>> works and v3 does not.
>>
>> Any clues?
>>
>> -Bob
>>
>> On 11/04/2013 12:06 PM, Bob Doolittle wrote:
>>> I have a fresh, default oVirt 3.3 setup, with F19 on the engine and the 
>>> node (after applying the shmmax workaround discussed in separate thread). I
>>> followed the Quick Start Guide.
>>>
>>> I've added the node, and a storage domain on the node, so the Datacenter is 
>>> now initialized.
>>>
>>> However, my ISO_DOMAIN remains unattached. If I select it, select its Data 
>>> Center tab, and click on Attach, it times out.
>>>
>>> The following messages are seen in engine.log when the operation is 
>>> attempted:
>>> http://pastebin.com/WguQJFRu
>>>
>>> I can loopback mount the ISO_DOMAIN directory manually, so I'm not sure why 
>>> anything is timing out?
>>>
>>> -Bob
>>>
>>> P.S. I also note that I have to do a "Shift-Refresh" on the Storage page 
>>> before ISO_DOMAIN shows up. This is consistently reproducible. Firefox 25.
>>
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Problem in 3.3 with shmmax configuration

2013-11-04 Thread Sandro Bonazzola
On 04/11/2013 17:17, Bob Doolittle wrote:
> 
> On 11/04/2013 05:43 AM, Sandro Bonazzola wrote:
>> On 02/11/2013 17:51, Bob Doolittle wrote:
>>> Hi,
>>>
>>> I'm setting up Engine for the 2nd time - the first time I answered a 
>>> configuration question wrong. So I did:
>>>
>>> engine-setup
>>> engine-cleanup
>>> engine-setup
>>>
>>> Things worked, until I rebooted the system. I found that postgresql would 
>>> not startup, and was failing with "could not create shared memory segment:
>>> Invalid Argument".
>>>
>>> I resolved this issue by creating a file /etc/sysctl.d/10-shmmax.conf, 
>>> containing the line:
>>> kernel.shmmax = 10
>>>
>>> (I read somewhere that postgresql recommends setting shmmax to 1/4 of 
>>> physical memory, and I have 4GB)
>>>
>>> 1. Is this a known bug? If not, should I file one? If so, how do I do that? 
>>> :)
>> Which version are you installing?
> 
> 3.3, on Fedora 19.
> 
>> Can you please attach all 3 logs from above sequence (setup, cleanup, setup)?
>> I think something may have gone wrong on second setup execution while 
>> setting shmmax.
> 
> Unfortunately I no longer have those logs.
> 
> *However* last night I powered down my node and engine. Last night engine 
> would at least come up.
> 
> Now when I boot up my engine I get the same error. So my guess is that the 
> shmmax setting isn't being configured in a persistent fashion somehow, or
> is somehow reverting.
> 
> My most recent setup log is here (I hate to send large logs to lists):
> https://dl.dropboxusercontent.com/u/35965416/ovirt-engine-setup-20131103141618.log

2013-11-03 14:18:18 DEBUG otopi.plugins.ovirt_engine_setup.system.sysctl 
plugin.execute:441 execute-output: ('/sbin/sysctl', '-n', 'kernel.shmmax')
stdout:
35554432

Setup detected that your shmmax was already configured with a good value, so it 
didn't change the configuration.
If you set the above value yourself before running setup, please retry running 
setup without setting it, or set it to a value lower than 35554432.
Setup will then detect that it is too low and will create the configuration 
files needed to fix it.
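If you want to check and fix it by hand, this is roughly what setup ends up 
doing (the file name below is my own example, not necessarily the one setup 
writes):

  sysctl -n kernel.shmmax                          # show the current value
  echo "kernel.shmmax = 35554432" > /etc/sysctl.d/99-ovirt-shmmax.conf
  sysctl -p /etc/sysctl.d/99-ovirt-shmmax.conf     # apply without rebooting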



> 
> -Bob
> 
>>> 2. Is there a better fix than the one I settled on? Does the normal 
>>> configuration wind up increasing shmmax, or reducing postgresql's limits? 
>>> What are
>>> the default values for this in a normal engine configuration?
>> No better fix; engine-setup just does something like that, setting shmmax to 
>> 35554432.
>>
>>
>>> Thanks,
>>>  Bob
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
> 


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ovirt-live questions

2013-11-04 Thread i iordanov
Hi Itamar,

Just a follow-up to our previous conversation. I believe all my
troubles were due to running the VMs inside an oVirt node which itself
was a VM (with nested virtualization).

I reworked my setup so that the node is now installed directly on
hardware, and everything is operating lightning fast with no hangups.

Thanks for the quick reply and take care!

Cheers,
iordan

On Sat, Nov 2, 2013 at 10:46 AM, i iordanov  wrote:
> Hi Dan,
>
> I reproduced this bug within oVirt Live 1.1. One difference from a
> traditional setup in my case was that I was running the oVirt Live
> distro within a VM with nested virtualization enabled. I don't think
> that this is the cause because I was able to run VMs successfully when
> I removed the -uuid option, but it's worth mentioning.
>
> On Sat, Nov 2, 2013 at 11:24 AM, Dan Kenigsberg  wrote:
>> How are you sure that -uuid is the trigger for this qemu bug? Could you
>> copy here the shortest command line that reproduces the bug?
>
> When I get a chance, I will get that VM running again and will give
> you the shortest command-line that reproduces the hang.
>
>> I suppose that a qemu mailing list could provide more help. What is your
>> host kernel and exact qemu version?
>
> I'll get that for you in the follow-up as well.
>
> Thanks!
> iordan
>
>
> --
> The conscious mind has only one thread of execution.



-- 
The conscious mind has only one thread of execution.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] oVirt at LISA 13, Washington, D.C November 5th-7th

2013-11-04 Thread Itamar Heim

Einav will present oVirt at a booth at LISA 13 today through Thursday.

If you are in the area, come and say hi.

https://www.usenix.org/conference/lisa13

Thanks,
   Itamar
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Installing Windows VMs

2013-11-04 Thread Greg Sheremeta

- Original Message -
> From: "Bob Doolittle" 
> To: users@ovirt.org
> Sent: Monday, November 4, 2013 6:43:00 PM
> Subject: [Users] Installing Windows VMs
> 
> The QuickStart Guide implies that you should find a virtual
> floppy image for virtio-win in the ISO_DOMAIN. Or at least I can find no
> information on where to find/populate that image before the Guide shows
> its use.
> 
> However, in my default 3.3 installation, there was nothing pre-populated
> in my ISO_DOMAIN. Should there have been?
> 
> Regards,
>  Bob
> 

I've never seen that happen on any of my installs.

You can grab the drivers at
http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/

Greg
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Installing Windows VMs

2013-11-04 Thread Bob Doolittle
The QuickStart Guide implies that you should find a virtual 
floppy image for virtio-win in the ISO_DOMAIN. Or at least I can find no 
information on where to find/populate that image before the Guide shows 
its use.


However, in my default 3.3 installation, there was nothing pre-populated 
in my ISO_DOMAIN. Should there have been?


Regards,
Bob

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Custom settings for Kernel Same-page Merging (KSM)?

2013-11-04 Thread Itamar Heim

On 11/04/2013 01:24 PM, Frank Wall wrote:

Hi Martin,

On 2013-11-04 11:12, Martin Sivak wrote:

we do not currently have anything in the engine that would allow you
to do this. If you create an oVirt RFE bug for it, it would make it
easier for us to add the feature and track the progress.


good advice, here we go:
https://bugzilla.redhat.com/show_bug.cgi?id=1026294

I wasn't sure if I should file a bug for vdsmd or ovirt-engine,
so maybe someone needs to reclassify this PR.


However you can update the policy file you found to accomplish this.
Just set (defvar ksm_sleep_ms_baseline 10) to a higher value and do
not forget to properly restart VDSM when you do.


Thanks, I've already thought of doing this as a temporary workaround,
but wanted to make sure that it's not customizable yet. So I'm going
to hardcode my preferred value on next maintenance.


I'm more concerned by the fact that KSM supposedly consumed RAM before memory 
usage was >80%.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ISO_DOMAIN won't attach

2013-11-04 Thread Bob Doolittle

Aha - finally nailed it.

Although iptables was disabled on engine, firewalld was not. Once I 
disabled firewalld, nfsv3 mounts worked fine, and ISO_DOMAIN was able to 
attach.


Is the admin expected to take care of this, or is engine-setup supposed 
to disable firewalld?


-Bob

On 11/04/2013 02:52 PM, Bob Doolittle wrote:
By wrapping the mount command on the node and recording the mount 
args, I was able to reproduce the issue manually.


Although I can remote mount the iso dir using default options, when I 
specify -o nfsvers=3 the mount times out, which is the problem.


I can do a loopback mount on the engine using nfsvers=3, but I can't 
do a remote mount from the node. I have SELinux set to 'permissive' on 
both engine and node.


I know I can work around this issue by changing the advanced 
parameters to specify V4, but would like to understand the real issue 
first. I've seen the opposite, where v3 works but v4 times out (e.g. 
if your export isn't part of the root filesystem) but never the 
opposite like this where v4 works and v3 does not.


Any clues?

-Bob

On 11/04/2013 12:06 PM, Bob Doolittle wrote:
I have a fresh, default oVirt 3.3 setup, with F19 on the engine and 
the node (after applying the shmmax workaround discussed in separate 
thread). I followed the Quick Start Guide.


I've added the node, and a storage domain on the node, so the 
Datacenter is now initialized.


However, my ISO_DOMAIN remains unattached. If I select it, select its 
Data Center tab, and click on Attach, it times out.


The following messages are seen in engine.log when the operation 
is attempted:

http://pastebin.com/WguQJFRu

I can loopback mount the ISO_DOMAIN directory manually, so I'm not 
sure why anything is timing out?


-Bob

P.S. I also note that I have to do a "Shift-Refresh" on the Storage 
page before ISO_DOMAIN shows up. This is consistently reproducible. 
Firefox 25.




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ISO_DOMAIN won't attach

2013-11-04 Thread Bob Doolittle
By wrapping the mount command on the node and recording the mount args, 
I was able to reproduce the issue manually.


Although I can remote mount the iso dir using default options, when I 
specify -o nfsvers=3 the mount times out, which is the problem.


I can do a loopback mount on the engine using nfsvers=3, but I can't do 
a remote mount from the node. I have SELinux set to 'permissive' on both 
engine and node.
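In case it is useful, this is what I have been poking at from the node to 
narrow it down (a sketch; the export path is a placeholder for whatever 
engine-setup created):

  rpcinfo -p <engine-fqdn>       # are rpcbind/mountd/nfs v3 services reachable?
  showmount -e <engine-fqdn>     # can the exports even be listed?
  mount -t nfs -o nfsvers=3,tcp <engine-fqdn>:/<iso-export-path> /mnt/test

NFSv3 needs rpcbind (port 111) and mountd in addition to 2049, so a host 
firewall on the engine is an obvious suspect.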


I know I can work around this issue by changing the advanced parameters 
to specify V4, but would like to understand the real issue first. I've 
seen the opposite, where v3 works but v4 times out (e.g. if your export 
isn't part of the root filesystem) but never the opposite like this 
where v4 works and v3 does not.


Any clues?

-Bob

On 11/04/2013 12:06 PM, Bob Doolittle wrote:
I have a fresh, default oVirt 3.3 setup, with F19 on the engine and 
the node (after applying the shmmax workaround discussed in separate 
thread). I followed the Quick Start Guide.


I've added the node, and a storage domain on the node, so the 
Datacenter is now initialized.


However, my ISO_DOMAIN remains unattached. If I select it, select its 
Data Center tab, and click on Attach, it times out.


The following messages are seen in engine.log when the operation 
is attempted:

http://pastebin.com/WguQJFRu

I can loopback mount the ISO_DOMAIN directory manually, so I'm not 
sure why anything is timing out?


-Bob

P.S. I also note that I have to do a "Shift-Refresh" on the Storage 
page before ISO_DOMAIN shows up. This is consistently reproducible. 
Firefox 25.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ISO_DOMAIN won't attach

2013-11-04 Thread Frank Wall

On 2013-11-04 18:54, Bob Doolittle wrote:

If there's any diagnostic info anybody would like me to
acquire first, please speak up asap.


It's always a good idea to keep a copy of /var/log/vdsm/vdsm.log
from oVirt node and /var/log/ovirt-engine/engine.log from
oVirt engine.



Regards
- Frank
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ISO_DOMAIN won't attach

2013-11-04 Thread Bob Doolittle
I appreciate the tip, although I'm concerned about masking over issues 
we'd all prefer to be uncovered and fixed.


If there's any diagnostic info anybody would like me to acquire first, 
please speak up asap.


-Bob

On 11/04/2013 12:12 PM, Frank Wall wrote:

On 2013-11-04 18:06, Bob Doolittle wrote:

However, my ISO_DOMAIN remains unattached. If I select it, select its
Data Center tab, and click on Attach, it times out.


Try putting the oVirt node in maintenance mode, wait,
and activate it again. Every once in a while this helped
me solve strange problems.


Regards
- Frank
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ISO_DOMAIN won't attach

2013-11-04 Thread Frank Wall

On 2013-11-04 18:06, Bob Doolittle wrote:

However, my ISO_DOMAIN remains unattached. If I select it, select its
Data Center tab, and click on Attach, it times out.


Try putting the oVirt node in maintenance mode, wait,
and activate it again. Every once in a while this helped
me solve strange problems.


Regards
- Frank
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] ISO_DOMAIN won't attach

2013-11-04 Thread Bob Doolittle
I have a fresh, default oVirt 3.3 setup, with F19 on the engine and the 
node (after applying the shmmax workaround discussed in separate 
thread). I followed the Quick Start Guide.


I've added the node, and a storage domain on the node, so the Datacenter 
is now initialized.


However, my ISO_DOMAIN remains unattached. If I select it, select its 
Data Center tab, and click on Attach, it times out.


The following messages are seen in engine.log when the operation is 
attempted:

http://pastebin.com/WguQJFRu

I can loopback mount the ISO_DOMAIN directory manually, so I'm not sure 
why anything is timing out?


-Bob

P.S. I also note that I have to do a "Shift-Refresh" on the Storage page 
before ISO_DOMAIN shows up. This is consistently reproducible. Firefox 25.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Problem in 3.3 with shmmax configuration

2013-11-04 Thread Bob Doolittle


On 11/04/2013 05:43 AM, Sandro Bonazzola wrote:

On 02/11/2013 17:51, Bob Doolittle wrote:

Hi,

I'm setting up Engine for the 2nd time - the first time I answered a 
configuration question wrong. So I did:

engine-setup
engine-cleanup
engine-setup

Things worked, until I rebooted the system. I found that postgresql would not 
startup, and was failing with "could not create shared memory segment:
Invalid Argument".

I resolved this issue by creating a file /etc/sysctl.d/10-shmmax.conf, 
containing the line:
kernel.shmmax = 10

(I read somewhere that postgresql recommends setting shmmax to 1/4 of physical 
memory, and I have 4GB)

1. Is this a known bug? If not, should I file one? If so, how do I do that? :)

Which version are you installing?


3.3, on Fedora 19.


Can you please attach all 3 logs from above sequence (setup, cleanup, setup)?
I think something may have gone wrong on second setup execution while setting 
shmmax.


Unfortunately I no longer have those logs.

*However* last night I powered down my node and engine. Last night 
engine would at least come up.


Now when I boot up my engine I get the same error. So my guess is that 
the shmmax setting isn't being configured in a persistent fashion 
somehow, or is somehow reverting.


My most recent setup log is here (I hate to send large logs to lists):
https://dl.dropboxusercontent.com/u/35965416/ovirt-engine-setup-20131103141618.log

-Bob


2. Is there a better fix than the one I settled on? Does the normal 
configuration wind up increasing shmmax, or reducing postgresql's limits? What 
are
the default values for this in a normal engine configuration?

No better fix; engine-setup just does something like that, setting shmmax to 
35554432.



Thanks,
 Bob


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Tagged and Untagged Traffic on the same Interface.

2013-11-04 Thread Matt Curry
Thanks, I am just limited on the node's interfaces. I appreciate the response.
I will have two VLANs going through one bonded interface on the node to try to
take advantage of the bond.

Matt Curry  |  Sr. Systems Administrator  |  Skopos Web
e: mcu...@skopos.us  t: 214.347.7103  f: 214.520.5079


From: Assaf Muller <amul...@redhat.com>
Date: Sunday, November 3, 2013 4:03 AM
To: Matt Curry <mcu...@skopos.us>
Cc: "users@ovirt.org" <users@ovirt.org>
Subject: Re: [Users] Tagged and Untagged Traffic on the same Interface.

Hi Matt,

You can associate multiple networks with a single nic / bond, provided that 
the untagged network is also non-VM.
You may have up to one untagged non-VM network, and zero or more tagged 
networks on the same network device.

Example - All networks on the same device (a rough sketch of the resulting host devices follows below):
Non-VM, untagged management network
Tagged, non-VM network X 3
Tagged, VM network X 2
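To make that concrete, this is roughly what the node ends up with when such 
a layout is applied (a sketch only - the VLAN IDs and the VM network name 
are made up):

  bond0       <- untagged non-VM management network, IP sits on the bond itself
  bond0.100   <- tagged non-VM network, IP sits on the VLAN device
  bond0.200   <- tagged VM network, the VLAN device is enslaved to a bridge
                 named after the logical network (e.g. "vmnet")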

- Original Message -
From: "Matt Curry" mailto:mcu...@skopos.us>>
To: users@ovirt.org
Sent: Saturday, November 2, 2013 12:11:04 AM
Subject: [Users] Tagged and Untagged Traffic on the same Interface.

Hello All,

Is it possible to have tagged and untagged traffic on the same interface?

Example.
2 nics on node.
Bond them together.

Management traffic and other traffic on same bonded interface?

All help is appreciated.

PS. I am on #ovirt as MCLinux…
Thanks


This is a PRIVATE message. If you are not the intended recipient, please delete 
without copying and kindly advise us by e-mail of the mistake in delivery. 
NOTE: Regardless of content, this e-mail shall not operate to bind SKOPOS to 
any order or other contract unless pursuant to explicit written agreement or 
government initiative expressly permitting the use of e-mail for such purpose.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Master Data Domain fails to activate

2013-11-04 Thread Mark Shields
oVirt 3.3 on CentOS 6.4.

I had the bright idea of shutting down all boxes then rebooting them
(trying to fix another unrelated problem).  Now I can't get the master
data domain to activate.  Here's the relevant section from the
engine.log on the ovirt-engine system (virt01):


http://pastebin.com/hS6DLdf5


From the above, what appears to be the error:


2013-11-04 09:58:23,726 ERROR
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
(pool-6-thread-13) [1455cecf] Command
org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand throw Vdc
Bll exception. With error message VdcBLLException: Cannot allocate IRS
server (Failed with VDSM error IRS_REPOSITORY_NOT_FOUND and code 5009)
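In case it helps, this is what I can collect from the SPM candidate host (I 
am not sure these are the right vdsClient verbs, so treat them as a guess):

  tail -f /var/log/vdsm/vdsm.log          # while retrying the activation
  vdsClient -s 0 getStorageDomainsList
  vdsClient -s 0 getStorageDomainInfo <master-domain-uuid>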



Can someone provide assistance?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Live storage migration fails on CentOS 6.4 + ovirt3.3 cluster

2013-11-04 Thread Sander Grendelman
Moving the storage of a (running) VM to a different (FC) storage domain fails.

Steps to reproduce:
1) Create new VM
2) Start VM
3) Start move of the VM to a different storage domain

When I look at the logs it seems that vdsm/libvirt tries to use an
option that is unsupported by libvirt or the qemu-kvm version on
CentOS 6.4:

"libvirtError: unsupported configuration: reuse is not supported with
this QEMU binary"

Information in the "Events" section of the oVirt engine manager:

2013-Nov-04, 14:45 VM migratest powered off by grendelmans (Host: gnkvm01).
2013-Nov-04, 14:05 User grendelmans moving disk migratest_Disk1 to
domain gneva03_vmdisk02.
2013-Nov-04, 14:04 Snapshot 'Auto-generated for Live Storage
Migration' creation for VM 'migratest' has been completed.
2013-Nov-04, 14:04 Failed to create live snapshot 'Auto-generated for
Live Storage Migration' for VM 'migratest'. VM restart is recommended.
2013-Nov-04, 14:04 Snapshot 'Auto-generated for Live Storage
Migration' creation for VM 'migratest' was initiated by grendelmans.
2013-Nov-04, 14:04 VM migratest started on Host gnkvm01
2013-Nov-04, 14:03 VM migratest was started by grendelmans (Host: gnkvm01).

Information from the vdsm log:

Thread-100903::DEBUG::2013-11-04
14:04:56,548::lvm::311::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0
Thread-100903::DEBUG::2013-11-04
14:04:56,615::lvm::448::OperationMutex::(_reloadlvs) Operation 'lvm
reload operation' released the operation mutex
Thread-100903::DEBUG::2013-11-04
14:04:56,622::blockVolume::588::Storage.Misc.excCmd::(getMetadata)
'/bin/dd iflag=direct skip=38 bs=512
if=/dev/dfbbc8dd-bfae-44e1-8876-2bb82921565a/metadata count=1' (cwd
None)
Thread-100903::DEBUG::2013-11-04
14:04:56,642::blockVolume::588::Storage.Misc.excCmd::(getMetadata)
SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B)
copied, 0.000208694 s, 2.5 MB/s\n'; <rc> = 0
Thread-100903::DEBUG::2013-11-04
14:04:56,643::misc::288::Storage.Misc::(validateDDBytes) err: ['1+0
records in', '1+0 records out', '512 bytes (512 B) copied, 0.000208694
s, 2.5 MB/s'], size: 512
Thread-100903::INFO::2013-11-04
14:04:56,644::logUtils::47::dispatcher::(wrapper) Run and protect:
prepareImage, Return response: {'info': {'path':
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821',
'volType': 'path'}, 'path':
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821',
'chain': [{'path':
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/7af63c13-c44b-4418-a1d4-e0e092ee7f04',
'domainID': 'dfbbc8dd-bfae-44e1-8876-2bb82921565a', 'vmVolInfo':
{'path': 
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/7af63c13-c44b-4418-a1d4-e0e092ee7f04',
'volType': 'path'}, 'volumeID':
'7af63c13-c44b-4418-a1d4-e0e092ee7f04', 'imageID':
'57ff3040-0cbd-4659-bd21-f07036d84dd8'}, {'path':
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821',
'domainID': 'dfbbc8dd-bfae-44e1-8876-2bb82921565a', 'vmVolInfo':
{'path': 
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821',
'volType': 'path'}, 'volumeID':
'4d05730d-433c-40d9-8600-6fb0eb5af821', 'imageID':
'57ff3040-0cbd-4659-bd21-f07036d84dd8'}]}
Thread-100903::DEBUG::2013-11-04
14:04:56,644::task::1168::TaskManager.Task::(prepare)
Task=`0f953aa3-e2b9-4008-84ad-f271136d8d23`::finished: {'info':
{'path': 
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821',
'volType': 'path'}, 'path':
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-433c-40d9-8600-6fb0eb5af821',
'chain': [{'path':
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/7af63c13-c44b-4418-a1d4-e0e092ee7f04',
'domainID': 'dfbbc8dd-bfae-44e1-8876-2bb82921565a', 'vmVolInfo':
{'path': 
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/7af63c13-c44b-4418-a1d4-e0e092ee7f04',
'volType': 'path'}, 'volumeID':
'7af63c13-c44b-4418-a1d4-e0e092ee7f04', 'imageID':
'57ff3040-0cbd-4659-bd21-f07036d84dd8'}, {'path':
'/rhev/data-center/def9b712-876a-49a9-b4b9-df9770befac4/dfbbc8dd-bfae-44e1-8876-2bb82921565a/images/57ff3040-0cbd-4659-bd21-f07036d84dd8/4d05730d-

Re: [Users] Cannot delete a template

2013-11-04 Thread Eli Mesika


- Original Message -
> From: "Karli Sjöberg" 
> To: "Eli Mesika" 
> Cc: "Users@ovirt.org" 
> Sent: Monday, November 4, 2013 3:00:27 PM
> Subject: Re: [Users] Cannot delete a template
> 
> Mon 2013-11-04 at 07:53 -0500, Eli Mesika wrote:
> 
> 
> 
> - Original Message -
> > From: "Karli Sjöberg" mailto:karli.sjob...@slu.se>>
> > To: "Eli Mesika" mailto:emes...@redhat.com>>
> > Cc: "Users@ovirt.org"
> > mailto:users@ovirt.org>>
> > Sent: Monday, November 4, 2013 1:11:14 PM
> > Subject: Re: [Users] Cannot delete a template
> >
>> Mon 2013-11-04 at 06:06 -0500, Eli Mesika wrote:
> >
> >
> >
> > - Original Message -
> > > From: "Karli Sjöberg"
> > > mailto:karli.sjob...@slu.se>>
> > > To: "Eli Mesika"
> > > mailto:emes...@redhat.com>>
> > > Cc: "Users@ovirt.org"
> > > mailto:users@ovirt.org>>
> > > Sent: Monday, November 4, 2013 1:00:46 PM
> > > Subject: Re: [Users] Cannot delete a template
> > >
>>> Mon 2013-11-04 at 05:56 -0500, Eli Mesika wrote:
> > >
> > >
> > >
> > > - Original Message -
> > > > From: "Karli Sjöberg"
> > > > mailto:karli.sjob...@slu.se>>
> > > > To:
> > > > "Users@ovirt.org"
> > > > mailto:users@ovirt.org>>
> > > > Sent: Monday, November 4, 2013 9:08:19 AM
> > > > Subject: [Users] Cannot delete a template
> > > >
> > > > Hi!
> > > >
> > > > I´m trying to delete an old template (NFS Datastore) but engine won´t
> > > > let
> > > > me.
> > > > Here´s the relevant part of SPM´s vdsmd.log:
> > >
> > > Did you notice that VDSM claims that the volume is shared :
> > >
> > > CannotDeleteSharedVolume: Shared Volume cannot be deleted: ("Cannot
> > > delete
> > > shared image ae73b435-2935-4e9f-9269-c6c8fad2cf38. ..
> > >
> > >
> > > Yes I noticed that but I don´t even know what that means, betting that´s
> > > a
> > > bad thing? How do I change that, I found no toggle of that in Template ->
> > > Edit, or looking at it through Disk?
> >
> > Do you have any VMs that were created from this old template ,
> > You can see that when you select a template from the web-admin UI ...
> >
> >
> > I go for Templates, click on the problematic one, click subtab Virtual
> > Machines and:
> > No Virtual Machines to display
> >
> > So no, there shouldn´t be anything holding it.
> 
> can you paste here the result of the following query
> 
> psql -U <user> -c "select * from vm_images_view where image_guid =
> 'ae73b435-2935-4e9f-9269-c6c8fad2cf38';" <db_name>
> 
> 
> No problemo:
> # psql -U engine -c "select * from vm_images_view where image_guid =
> 'ae73b435-2935-4e9f-9269-c6c8fad2cf38';" engine
> storage_id | storage_path | storage_name | storage_pool_id | image_guid |
> creation_date | actual_size | read_rate | read_latency_seconds |
> write_latency_seconds | flush_latency_seconds | write_rate | size | it_guid |
> description | parentid | imagestatus | lastmodified | app_list |
> vm_snapshot_id | volume_type | image_group_id | active | volume_format |
> disk_interface | boot | wipe_after_delete | propagate_errors | entity_type |
> number_of_vms | vm_names | quota_id | quota_name | quota_enforcement_type |
> disk_id | disk_alias | disk_description | shareable
> (0 rows)


Well, it seems that at least from the engine side there is no reason not to be 
able to remove the template.
CCing Alon M from the storage team.


> 
> 
> 
> 
> 
> 
> >
> >
> >
> >
> > >
> > >
> > >
> > >
> > > > Thread-304915::DEBUG::2013-11-04
> > > > 07:51:56,269::BindingXMLRPC::161::vds::(wrapper) [130.238.96.66]
> > > > Thread-304915::DEBUG::2013-11-04
> > > > 07:51:56,270::task::568::TaskManager.Task::(_updateState)
> > > > Task=`78b8f42d-14cd-48e2-9e3e-61a6610c9f66`::moving from state init ->
> > > > state
> > > > preparing
> > > > Thread-304915::INFO::2013-11-04
> > > > 07:51:56,270::logUtils::41::dispatcher::(wrapper) Run and protect:
> > > > deleteImage(sdUUID='f231d176-a11a-47a8-84e7-12bfa9095bdd',
> > > > spUUID='8b70c99d-50f2-4a30-9214-c51ef20cb48a',
> > > > imgUUID='ae73b435-2935-4e9f-9269-c6c8fad2cf38', postZero='false',
> > > > force='false')
> > > > Thread-304915::INFO::2013-11-04
> > > > 07:51:56,27

Re: [Users] virt-v2v convert failed

2013-11-04 Thread Matthew Booth
> Hello,
> 
> I tried to convert a kvm (libvirt) guest to oVirt 3.3 on a computer running
> Fedora 19 and it failed. I was converting a 450GB VM with Windows 2008.
> 
> Output is:
> 
> Caching w2k8t.qcow2: 100%  
> [===]D19h52m18s
> Error in mkstemp using /tmp/XX: Could not create temp file  
> /tmp/lLmnkh9ktk: Operation not permited at  
> /usr/share/perl5/vendor_perl/Sys/VirtConvert/ExecHelper.pm line 74.
> 
> Command was:
> virt-v2v -i libvirt -ic qemu+ssh://r...@server.example.org/system -o  
> rhev -os 10.20.30.40:/export -of qcow2 -oa sparse -n ovirtmgmt w2k8t
> My /tmp (where it caches) was big enough, because I connected a 1TB
> external drive as /tmp.

mkstemp is part of the core perl library, and if it reports that it
can't write to /tmp then it's almost certainly correct. Are you sure you
made /tmp world writable when you mounted your external drive?
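A quick way to check and fix that (a sketch):

  ls -ld /tmp          # should show drwxrwxrwt, i.e. mode 1777 with sticky bit
  chmod 1777 /tmp      # if the freshly created filesystem came up as 0755 root:root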

> I used virt-v2v-0.9.0-3.fc19.i686.
> Same problem was with Linux server. It fails with same problem.
> 
> 
> 
> 
> And the second question is: how can I convert such a big VM faster? Is it
> possible to connect to the libvirt daemon without ssh? Or even convert the
> VM offline on a virt-v2v capable PC (fedora, centos, rhel)?

You could run virt-v2v on your libvirt/kvm host. That way it wouldn't
have to copy it over the network. It also wouldn't have to use any space
in /tmp.
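Something along these lines, run directly on the libvirt/kvm host (the same 
options as your command, just without the remote -ic URI; adjust as needed):

  virt-v2v -i libvirt -o rhev -os 10.20.30.40:/export -of qcow2 -oa sparse -n ovirtmgmt w2k8t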

Matt

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Cannot delete a template

2013-11-04 Thread Eli Mesika


- Original Message -
> From: "Karli Sjöberg" 
> To: "Eli Mesika" 
> Cc: "Users@ovirt.org" 
> Sent: Monday, November 4, 2013 1:11:14 PM
> Subject: Re: [Users] Cannot delete a template
> 
> Mon 2013-11-04 at 06:06 -0500, Eli Mesika wrote:
> 
> 
> 
> - Original Message -
> > From: "Karli Sjöberg" mailto:karli.sjob...@slu.se>>
> > To: "Eli Mesika" mailto:emes...@redhat.com>>
> > Cc: "Users@ovirt.org"
> > mailto:users@ovirt.org>>
> > Sent: Monday, November 4, 2013 1:00:46 PM
> > Subject: Re: [Users] Cannot delete a template
> >
>> Mon 2013-11-04 at 05:56 -0500, Eli Mesika wrote:
> >
> >
> >
> > - Original Message -
> > > From: "Karli Sjöberg"
> > > mailto:karli.sjob...@slu.se>>
> > > To: "Users@ovirt.org"
> > > mailto:users@ovirt.org>>
> > > Sent: Monday, November 4, 2013 9:08:19 AM
> > > Subject: [Users] Cannot delete a template
> > >
> > > Hi!
> > >
> > > I´m trying to delete an old template (NFS Datastore) but engine won´t let
> > > me.
> > > Here´s the relevant part of SPM´s vdsmd.log:
> >
> > Did you notice that VDSM claims that the volume is shared :
> >
> > CannotDeleteSharedVolume: Shared Volume cannot be deleted: ("Cannot delete
> > shared image ae73b435-2935-4e9f-9269-c6c8fad2cf38. ..
> >
> >
> > Yes I noticed that but I don´t even know what that means, betting that´s a
> > bad thing? How do I change that, I found no toggle of that in Template ->
> > Edit, or looking at it through Disk?
> 
> Do you have any VMs that were created from this old template ,
> You can see that when you select a template from the web-admin UI ...
> 
> 
> I go for Templates, click on the problematic one, click subtab Virtual
> Machines and:
> No Virtual Machines to display
> 
> So no, there shouldn´t be anything holding it.

can you paste here the result of the following query 

psql -U <user> -c "select * from vm_images_view where image_guid = 
'ae73b435-2935-4e9f-9269-c6c8fad2cf38';" <db_name>


> 
> 
> 
> 
> >
> >
> >
> >
> > > Thread-304915::DEBUG::2013-11-04
> > > 07:51:56,269::BindingXMLRPC::161::vds::(wrapper) [130.238.96.66]
> > > Thread-304915::DEBUG::2013-11-04
> > > 07:51:56,270::task::568::TaskManager.Task::(_updateState)
> > > Task=`78b8f42d-14cd-48e2-9e3e-61a6610c9f66`::moving from state init ->
> > > state
> > > preparing
> > > Thread-304915::INFO::2013-11-04
> > > 07:51:56,270::logUtils::41::dispatcher::(wrapper) Run and protect:
> > > deleteImage(sdUUID='f231d176-a11a-47a8-84e7-12bfa9095bdd',
> > > spUUID='8b70c99d-50f2-4a30-9214-c51ef20cb48a',
> > > imgUUID='ae73b435-2935-4e9f-9269-c6c8fad2cf38', postZero='false',
> > > force='false')
> > > Thread-304915::INFO::2013-11-04
> > > 07:51:56,270::fileSD::302::Storage.StorageDomain::(validate)
> > > sdUUID=f231d176-a11a-47a8-84e7-12bfa9095bdd
> > > Thread-304915::DEBUG::2013-11-04
> > > 07:51:56,279::persistentDict::234::Storage.PersistentDict::(refresh) read
> > > lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=ZFS2-1_DS1_Data',
> > > 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',
> > > 'LOCKRENEWALINTERVALSEC=5', 'MASTER_VERSION=902',
> > > 'POOL_DESCRIPTION=C4241',
> > > 'POOL_DOMAINS=773370d4-be5d-43d8-95cd-ffd3a6a5b1c8:Active,50aba7a1-bbc1-4eca-8480-45ec37b64e63:Active,4f46680f-a70d-4846-bfc3-9318ce7d4d6d:Active,f231d176-a11a-47a8-84e7-12bfa9095bdd:Active',
> > > 'POOL_SPM_ID=-1', 'POOL_SPM_LVER=-1',
> > > 'POOL_UUID=8b70c99d-50f2-4a30-9214-c51ef20cb48a',
> > > 'REMOTE_PATH=hostnfs5-ua.sto.slu.se:/export/ds1/data', 'ROLE=Master',
> > > 'SDUUID=f231d176-a11a-47a8-84e7-12bfa9095bdd', 'TYPE=NFS', 'VERSION=3',
> > > '_SHA_CKSUM=de1b77f5912e94052c4fbd21299f44a6498275d9']
> > > Thread-304915::DEBUG::2013-11-04
> > > 07:51:56,280::resourceManager::190::ResourceManager.Request::(__init__)
> > > ResName=`Storage.ae73b435-2935-4e9f-9269-c6c8fad2cf38`ReqID=`e412eadc-5af7-426f-94f9-4d52783614e7`::Request
> > > was made in '/usr/share/vdsm/storage/resourceManager.py' line '189' at
> > > '__init__'
> > > Thread-304915::DEBUG::2013-11-04
> > > 07:51:56,280::resourceManager::504::ResourceManager::(registerResource)
> > > Trying to register resource
> > > 'Storage.ae73b435-2935-4e9f-9269-c6c8fad2cf38'
> > > for lock type 'exclusive'
> > > Thread-304915::DEBUG::2013-11-04
> > > 07:51:56,281::resourceManager::547::ResourceManager::(registerResource)
> > > Resource 'Storage.ae73b435-2935-4e9f-9269-c6c8fad2cf38' is free. Now
> > > locking
> > > as 'exclusive' (1 active user)
> > > Thread-304915::DEBUG::2013-11-04
> > > 07:51:56,281::resourceManager::227::ResourceManager.Request::(grant)
> > > ResName=`Storage.ae73b435-2935-4e9f-9269-c6c8fad2cf38`ReqID=`e412eadc-5af7-426f-94f9-4d52783614e7`::Granted
> > > request
> > > Thread-304915::DEBUG::2013-11-04
> > > 07:51:56,281::task::794::TaskManager.Task::(resourceAcquired)
> > > Task=`78b8f42d-14cd-48e2-9e3e-61a6610c9f66`::_resourcesAcquired:
> > > Storage.ae

[Users] Fwd: Major frontend refactoring landed into master

2013-11-04 Thread Vojtech Szocs
Hi, we did some frontend refactoring in oVirt master branch, see below for 
details.

Let us know if you find any issues.

Vojtech


- Forwarded Message -
From: "Vojtech Szocs" 
To: "engine-devel" 
Cc: "Alexander Wels" , "Einav Cohen" , 
"Itamar Heim" 
Sent: Monday, November 4, 2013 12:55:33 PM
Subject: Major frontend refactoring landed into master

Hey guys,

I've just merged patch "frontend refactor phase 2" 
[http://gerrit.ovirt.org/#/c/17356/] into master branch.

This is the second patch in a series dedicated to improving parts of frontend 
code responsible for communication with Engine backend: 
http://www.ovirt.org/Features/Design/FrontendRefactor

If you find any issues related to GUI (WebAdmin, UserPortal) - just let Alex 
(CC'ed) or me know and we'll fix them.

Regards,
Vojtech
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Custom settings for Kernel Same-page Merging (KSM)?

2013-11-04 Thread Frank Wall

Hi Martin,

On 2013-11-04 11:12, Martin Sivak wrote:

we do not currently have anything in the engine that would allow you
to do this. If you create an oVirt RFE bug for it, it would make it
easier for us to add the feature and track the progress.


good advice, here we go:
https://bugzilla.redhat.com/show_bug.cgi?id=1026294

I wasn't sure if I should file a bug for vdsmd or ovirt-engine,
so maybe someone needs to reclassify this PR.


However you can update the policy file you found to accomplish this.
Just set (defvar ksm_sleep_ms_baseline 10) to a higher value and do
not forget to properly restart VDSM when you do.


Thanks, I've already thought of doing this as a temporary workaround,
but wanted to make sure that it's not customizable yet. So I'm going
to hardcode my preferred value on next maintenance.
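In the meantime the current values can at least be watched through the 
standard KSM sysfs interface on the node (nothing oVirt-specific):

  cat /sys/kernel/mm/ksm/run
  cat /sys/kernel/mm/ksm/sleep_millisecs
  cat /sys/kernel/mm/ksm/pages_to_scan
  cat /sys/kernel/mm/ksm/pages_sharing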


Thanks
- Frank
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Cannot delete a template

2013-11-04 Thread Eli Mesika


- Original Message -
> From: "Karli Sjöberg" 
> To: "Eli Mesika" 
> Cc: "Users@ovirt.org" 
> Sent: Monday, November 4, 2013 1:00:46 PM
> Subject: Re: [Users] Cannot delete a template
> 
> Mon 2013-11-04 at 05:56 -0500, Eli Mesika wrote:
> 
> 
> 
> - Original Message -
> > From: "Karli Sjöberg" mailto:karli.sjob...@slu.se>>
> > To: "Users@ovirt.org"
> > mailto:users@ovirt.org>>
> > Sent: Monday, November 4, 2013 9:08:19 AM
> > Subject: [Users] Cannot delete a template
> >
> > Hi!
> >
> > I´m trying to delete an old template (NFS Datastore) but engine won´t let
> > me.
> > Here´s the relevant part of SPM´s vdsmd.log:
> 
> Did you notice that VDSM claims that the volume is shared :
> 
> CannotDeleteSharedVolume: Shared Volume cannot be deleted: ("Cannot delete
> shared image ae73b435-2935-4e9f-9269-c6c8fad2cf38. ..
> 
> 
> Yes I noticed that but I don´t even know what that means, betting that´s a
> bad thing? How do I change that, I found no toggle of that in Template ->
> Edit, or looking at it through Disk?

Do you have any VMs that were created from this old template ,
You can see that when you select a template from the web-admin UI ... 

> 
> 
> 
> 
> > Thread-304915::DEBUG::2013-11-04
> > 07:51:56,269::BindingXMLRPC::161::vds::(wrapper) [130.238.96.66]
> > Thread-304915::DEBUG::2013-11-04
> > 07:51:56,270::task::568::TaskManager.Task::(_updateState)
> > Task=`78b8f42d-14cd-48e2-9e3e-61a6610c9f66`::moving from state init ->
> > state
> > preparing
> > Thread-304915::INFO::2013-11-04
> > 07:51:56,270::logUtils::41::dispatcher::(wrapper) Run and protect:
> > deleteImage(sdUUID='f231d176-a11a-47a8-84e7-12bfa9095bdd',
> > spUUID='8b70c99d-50f2-4a30-9214-c51ef20cb48a',
> > imgUUID='ae73b435-2935-4e9f-9269-c6c8fad2cf38', postZero='false',
> > force='false')
> > Thread-304915::INFO::2013-11-04
> > 07:51:56,270::fileSD::302::Storage.StorageDomain::(validate)
> > sdUUID=f231d176-a11a-47a8-84e7-12bfa9095bdd
> > Thread-304915::DEBUG::2013-11-04
> > 07:51:56,279::persistentDict::234::Storage.PersistentDict::(refresh) read
> > lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=ZFS2-1_DS1_Data',
> > 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',
> > 'LOCKRENEWALINTERVALSEC=5', 'MASTER_VERSION=902', 'POOL_DESCRIPTION=C4241',
> > 'POOL_DOMAINS=773370d4-be5d-43d8-95cd-ffd3a6a5b1c8:Active,50aba7a1-bbc1-4eca-8480-45ec37b64e63:Active,4f46680f-a70d-4846-bfc3-9318ce7d4d6d:Active,f231d176-a11a-47a8-84e7-12bfa9095bdd:Active',
> > 'POOL_SPM_ID=-1', 'POOL_SPM_LVER=-1',
> > 'POOL_UUID=8b70c99d-50f2-4a30-9214-c51ef20cb48a',
> > 'REMOTE_PATH=hostnfs5-ua.sto.slu.se:/export/ds1/data', 'ROLE=Master',
> > 'SDUUID=f231d176-a11a-47a8-84e7-12bfa9095bdd', 'TYPE=NFS', 'VERSION=3',
> > '_SHA_CKSUM=de1b77f5912e94052c4fbd21299f44a6498275d9']
> > Thread-304915::DEBUG::2013-11-04
> > 07:51:56,280::resourceManager::190::ResourceManager.Request::(__init__)
> > ResName=`Storage.ae73b435-2935-4e9f-9269-c6c8fad2cf38`ReqID=`e412eadc-5af7-426f-94f9-4d52783614e7`::Request
> > was made in '/usr/share/vdsm/storage/resourceManager.py' line '189' at
> > '__init__'
> > Thread-304915::DEBUG::2013-11-04
> > 07:51:56,280::resourceManager::504::ResourceManager::(registerResource)
> > Trying to register resource 'Storage.ae73b435-2935-4e9f-9269-c6c8fad2cf38'
> > for lock type 'exclusive'
> > Thread-304915::DEBUG::2013-11-04
> > 07:51:56,281::resourceManager::547::ResourceManager::(registerResource)
> > Resource 'Storage.ae73b435-2935-4e9f-9269-c6c8fad2cf38' is free. Now
> > locking
> > as 'exclusive' (1 active user)
> > Thread-304915::DEBUG::2013-11-04
> > 07:51:56,281::resourceManager::227::ResourceManager.Request::(grant)
> > ResName=`Storage.ae73b435-2935-4e9f-9269-c6c8fad2cf38`ReqID=`e412eadc-5af7-426f-94f9-4d52783614e7`::Granted
> > request
> > Thread-304915::DEBUG::2013-11-04
> > 07:51:56,281::task::794::TaskManager.Task::(resourceAcquired)
> > Task=`78b8f42d-14cd-48e2-9e3e-61a6610c9f66`::_resourcesAcquired:
> > Storage.ae73b435-2935-4e9f-9269-c6c8fad2cf38 (exclusive)
> > Thread-304915::DEBUG::2013-11-04
> > 07:51:56,281::task::957::TaskManager.Task::(_decref)
> > Task=`78b8f42d-14cd-48e2-9e3e-61a6610c9f66`::ref 1 aborting False
> > Thread-304915::DEBUG::2013-11-04
> > 07:51:56,282::resourceManager::190::ResourceManager.Request::(__init__)
> > ResName=`Storage.f231d176-a11a-47a8-84e7-12bfa9095bdd`ReqID=`544caeb2-9d61-42e4-9fc7-1f7811700f4b`::Request
> > was made in '/usr/share/vdsm/storage/resourceManager.py' line '189' at
> > '__init__'
> > Thread-304915::DEBUG::2013-11-04
> > 07:51:56,282::resourceManager::504::ResourceManager::(registerResource)
> > Trying to register resource 'Storage.f231d176-a11a-47a8-84e7-12bfa9095bdd'
> > for lock type 'shared'
> > Thread-304915::DEBUG::2013-11-04
> > 07:51:56,282::resourceManager::547::ResourceManager::(registerResource)
> > Resource 'Storage.f231d176-a11a-47a8-84e7-12bfa9095bdd' is free. Now
> > locking
> > as 'shared' (1 active user)
> > T

Re: [Users] Urgent: Export NFS Migration issue oVirt 3.0 -> 3.2.1

2013-11-04 Thread Eli Mesika


- Original Message -
> From: "Sven Knohsalla" 
> To: users@ovirt.org
> Sent: Monday, November 4, 2013 9:48:00 AM
> Subject: [Users] Urgent: Export NFS Migration issue oVirt 3.0 -> 3.2.1
> 
> 
> 
> Hi,
> 
> 
> 
> we are currently running into the following issue when trying to migrate VMs
> from oVirt 3.0 -> oVirt 3.2.1:
> 
> NFS Exports are mounted separately to both oVirt instances, both are working.
> 
> 
> When detaching NFS Export with exported VMs from oVirt 3.0 to oVirt 3.2.1 the
> following error message occurs:
> “Failed to attach Storage Domain DE-SD-Export-NFS to Data Center Default.
> (User: X)”
> 
> engine log:
> 2013-11-04 08:33:46,763 ERROR
> [org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand]
> (pool-3-thread-49) [6348553f] Command
> org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand throw Vdc
> Bll exception. With error message VdcBLLException:
> org.ovirt.engine.core.vdsbroker.irsbroker.IrsOperationFailedNoFailoverException:
> IRSGenericException: IRSErrorException: Storage domain already attached to
> pool: 'domain=5c1dec62-3144-4fc4-8ac5-be4e8c04c3ae,
> pool=4e37e18a-ae32-41a3-a558-af2495d64da8'


Are you sure that this is the correct portion of the log? You say that you 
are doing a DETACH while the log clearly shows that ATTACH was called.
Can you please attach the full engine + vdsm logs?

Thanks 

> 
> 2013-11-04 08:33:46,772 INFO
> [org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand]
> (pool-3-thread-49) [6348553f] Command
> [id=9095ca2b-f875-42f4-8861-1c89805e6b74]: Compensating NEW_ENTITY_ID of
> org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot:
> storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, storageId =
> 5c1dec62-3144-4fc4-8ac5-be4e8c04c3ae.
> 
> 
> 
> This issue first occurred when exporting an oVirt 3.0 VM to the NFS export share
> of oVirt 3.2.1.
> The opposite way (attaching the NFS export from oVirt 3.2.1 to 3.0) worked with
> no issue.
> 
> 
> Is this a known bug?
> 
> Thanks in advance for your help!
> 
> Regards,
> 
> Sven.
> 
> 
> 
> Sven Knohsalla | Sr. IT Systems Administrator
> 
> 
> 
> Office +49 631 68036 433 | Fax +49 631 68036 111 |E-Mail
> s.knohsa...@netbiscuits.com | Skype: netbiscuits.admin
> 
> Netbiscuits GmbH | Europaallee 10 | 67657 | GERMANY
> 
> 
> 
> 
> 
> 
> 
> Register Court: Local Court Kaiserslautern | Commercial Register ID: HR B
> 3604
> Management Board : Guido Moggert, Michael Neidhöfer, Christian Reitz, Martin
> Süß
> 
> 
> 
> This message and any files transmitted with it are confidential and intended
> solely for the use of the individual or entity to whom they are addressed.
> It may also be privileged or otherwise protected by work product immunity or
> other legal rules. Please notify the sender immediately by e-mail if you
> have received this e-mail by mistake and delete this e-mail from your
> system. If you are not the intended recipient you are notified that
> disclosing, copying, distributing or taking any action in reliance on the
> contents of this information is strictly prohibited.
> 
> Warning: Although Netbiscuits has taken reasonable precautions to ensure no
> viruses are present in this email, the company cannot accept responsibility
> for any loss or damage arising from the use of this email or attachments.
> 
> 
> 
> Please consider the environment before printing
> 
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Cannot delete a template

2013-11-04 Thread Karli Sjöberg
Mon 2013-11-04 at 05:56 -0500, Eli Mesika wrote:



- Original Message -
> From: "Karli Sjöberg" mailto:karli.sjob...@slu.se>>
> To: "Users@ovirt.org" 
> mailto:users@ovirt.org>>
> Sent: Monday, November 4, 2013 9:08:19 AM
> Subject: [Users] Cannot delete a template
>
> Hi!
>
> I´m trying to delete an old template (NFS Datastore) but engine won´t let me.
> Here´s the relevant part of SPM´s vdsmd.log:

Did you notice that VDSM claims that the volume is shared :

CannotDeleteSharedVolume: Shared Volume cannot be deleted: ("Cannot delete 
shared image ae73b435-2935-4e9f-9269-c6c8fad2cf38. ..


Yes I noticed that but I don´t even know what that means, betting that´s a bad 
thing? How do I change that, I found no toggle of that in Template -> Edit, or 
looking at it through Disk?




> Thread-304915::DEBUG::2013-11-04
> 07:51:56,269::BindingXMLRPC::161::vds::(wrapper) [130.238.96.66]
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,270::task::568::TaskManager.Task::(_updateState)
> Task=`78b8f42d-14cd-48e2-9e3e-61a6610c9f66`::moving from state init -> state
> preparing
> Thread-304915::INFO::2013-11-04
> 07:51:56,270::logUtils::41::dispatcher::(wrapper) Run and protect:
> deleteImage(sdUUID='f231d176-a11a-47a8-84e7-12bfa9095bdd',
> spUUID='8b70c99d-50f2-4a30-9214-c51ef20cb48a',
> imgUUID='ae73b435-2935-4e9f-9269-c6c8fad2cf38', postZero='false',
> force='false')
> Thread-304915::INFO::2013-11-04
> 07:51:56,270::fileSD::302::Storage.StorageDomain::(validate)
> sdUUID=f231d176-a11a-47a8-84e7-12bfa9095bdd
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,279::persistentDict::234::Storage.PersistentDict::(refresh) read
> lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=ZFS2-1_DS1_Data',
> 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',
> 'LOCKRENEWALINTERVALSEC=5', 'MASTER_VERSION=902', 'POOL_DESCRIPTION=C4241',
> 'POOL_DOMAINS=773370d4-be5d-43d8-95cd-ffd3a6a5b1c8:Active,50aba7a1-bbc1-4eca-8480-45ec37b64e63:Active,4f46680f-a70d-4846-bfc3-9318ce7d4d6d:Active,f231d176-a11a-47a8-84e7-12bfa9095bdd:Active',
> 'POOL_SPM_ID=-1', 'POOL_SPM_LVER=-1',
> 'POOL_UUID=8b70c99d-50f2-4a30-9214-c51ef20cb48a',
> 'REMOTE_PATH=hostnfs5-ua.sto.slu.se:/export/ds1/data', 'ROLE=Master',
> 'SDUUID=f231d176-a11a-47a8-84e7-12bfa9095bdd', 'TYPE=NFS', 'VERSION=3',
> '_SHA_CKSUM=de1b77f5912e94052c4fbd21299f44a6498275d9']
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,280::resourceManager::190::ResourceManager.Request::(__init__)
> ResName=`Storage.ae73b435-2935-4e9f-9269-c6c8fad2cf38`ReqID=`e412eadc-5af7-426f-94f9-4d52783614e7`::Request
> was made in '/usr/share/vdsm/storage/resourceManager.py' line '189' at
> '__init__'
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,280::resourceManager::504::ResourceManager::(registerResource)
> Trying to register resource 'Storage.ae73b435-2935-4e9f-9269-c6c8fad2cf38'
> for lock type 'exclusive'
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,281::resourceManager::547::ResourceManager::(registerResource)
> Resource 'Storage.ae73b435-2935-4e9f-9269-c6c8fad2cf38' is free. Now locking
> as 'exclusive' (1 active user)
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,281::resourceManager::227::ResourceManager.Request::(grant)
> ResName=`Storage.ae73b435-2935-4e9f-9269-c6c8fad2cf38`ReqID=`e412eadc-5af7-426f-94f9-4d52783614e7`::Granted
> request
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,281::task::794::TaskManager.Task::(resourceAcquired)
> Task=`78b8f42d-14cd-48e2-9e3e-61a6610c9f66`::_resourcesAcquired:
> Storage.ae73b435-2935-4e9f-9269-c6c8fad2cf38 (exclusive)
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,281::task::957::TaskManager.Task::(_decref)
> Task=`78b8f42d-14cd-48e2-9e3e-61a6610c9f66`::ref 1 aborting False
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,282::resourceManager::190::ResourceManager.Request::(__init__)
> ResName=`Storage.f231d176-a11a-47a8-84e7-12bfa9095bdd`ReqID=`544caeb2-9d61-42e4-9fc7-1f7811700f4b`::Request
> was made in '/usr/share/vdsm/storage/resourceManager.py' line '189' at
> '__init__'
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,282::resourceManager::504::ResourceManager::(registerResource)
> Trying to register resource 'Storage.f231d176-a11a-47a8-84e7-12bfa9095bdd'
> for lock type 'shared'
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,282::resourceManager::547::ResourceManager::(registerResource)
> Resource 'Storage.f231d176-a11a-47a8-84e7-12bfa9095bdd' is free. Now locking
> as 'shared' (1 active user)
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,283::resourceManager::227::ResourceManager.Request::(grant)
> ResName=`Storage.f231d176-a11a-47a8-84e7-12bfa9095bdd`ReqID=`544caeb2-9d61-42e4-9fc7-1f7811700f4b`::Granted
> request
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,283::task::794::TaskManager.Task::(resourceAcquired)
> Task=`78b8f42d-14cd-48e2-9e3e-61a6610c9f66`::_resourcesAcquired:
> Storage.f231d176-a11a-47a8-84e7-12bfa9095bdd (shared)
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,283::task::957::TaskManager.Task::(_dec

Re: [Users] Cannot delete a template

2013-11-04 Thread Eli Mesika


- Original Message -
> From: "Karli Sjöberg" 
> To: "Users@ovirt.org" 
> Sent: Monday, November 4, 2013 9:08:19 AM
> Subject: [Users] Cannot delete a template
> 
> Hi!
> 
> I´m trying to delete an old template (NFS Datastore) but engine won´t let me.
> Here´s the relevant part of SPM´s vdsmd.log:

Did you notice that VDSM claims that the volume is shared :

CannotDeleteSharedVolume: Shared Volume cannot be deleted: ("Cannot delete 
shared image ae73b435-2935-4e9f-9269-c6c8fad2cf38. ..

> Thread-304915::DEBUG::2013-11-04
> 07:51:56,269::BindingXMLRPC::161::vds::(wrapper) [130.238.96.66]
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,270::task::568::TaskManager.Task::(_updateState)
> Task=`78b8f42d-14cd-48e2-9e3e-61a6610c9f66`::moving from state init -> state
> preparing
> Thread-304915::INFO::2013-11-04
> 07:51:56,270::logUtils::41::dispatcher::(wrapper) Run and protect:
> deleteImage(sdUUID='f231d176-a11a-47a8-84e7-12bfa9095bdd',
> spUUID='8b70c99d-50f2-4a30-9214-c51ef20cb48a',
> imgUUID='ae73b435-2935-4e9f-9269-c6c8fad2cf38', postZero='false',
> force='false')
> Thread-304915::INFO::2013-11-04
> 07:51:56,270::fileSD::302::Storage.StorageDomain::(validate)
> sdUUID=f231d176-a11a-47a8-84e7-12bfa9095bdd
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,279::persistentDict::234::Storage.PersistentDict::(refresh) read
> lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=ZFS2-1_DS1_Data',
> 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',
> 'LOCKRENEWALINTERVALSEC=5', 'MASTER_VERSION=902', 'POOL_DESCRIPTION=C4241',
> 'POOL_DOMAINS=773370d4-be5d-43d8-95cd-ffd3a6a5b1c8:Active,50aba7a1-bbc1-4eca-8480-45ec37b64e63:Active,4f46680f-a70d-4846-bfc3-9318ce7d4d6d:Active,f231d176-a11a-47a8-84e7-12bfa9095bdd:Active',
> 'POOL_SPM_ID=-1', 'POOL_SPM_LVER=-1',
> 'POOL_UUID=8b70c99d-50f2-4a30-9214-c51ef20cb48a',
> 'REMOTE_PATH=hostnfs5-ua.sto.slu.se:/export/ds1/data', 'ROLE=Master',
> 'SDUUID=f231d176-a11a-47a8-84e7-12bfa9095bdd', 'TYPE=NFS', 'VERSION=3',
> '_SHA_CKSUM=de1b77f5912e94052c4fbd21299f44a6498275d9']
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,280::resourceManager::190::ResourceManager.Request::(__init__)
> ResName=`Storage.ae73b435-2935-4e9f-9269-c6c8fad2cf38`ReqID=`e412eadc-5af7-426f-94f9-4d52783614e7`::Request
> was made in '/usr/share/vdsm/storage/resourceManager.py' line '189' at
> '__init__'
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,280::resourceManager::504::ResourceManager::(registerResource)
> Trying to register resource 'Storage.ae73b435-2935-4e9f-9269-c6c8fad2cf38'
> for lock type 'exclusive'
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,281::resourceManager::547::ResourceManager::(registerResource)
> Resource 'Storage.ae73b435-2935-4e9f-9269-c6c8fad2cf38' is free. Now locking
> as 'exclusive' (1 active user)
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,281::resourceManager::227::ResourceManager.Request::(grant)
> ResName=`Storage.ae73b435-2935-4e9f-9269-c6c8fad2cf38`ReqID=`e412eadc-5af7-426f-94f9-4d52783614e7`::Granted
> request
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,281::task::794::TaskManager.Task::(resourceAcquired)
> Task=`78b8f42d-14cd-48e2-9e3e-61a6610c9f66`::_resourcesAcquired:
> Storage.ae73b435-2935-4e9f-9269-c6c8fad2cf38 (exclusive)
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,281::task::957::TaskManager.Task::(_decref)
> Task=`78b8f42d-14cd-48e2-9e3e-61a6610c9f66`::ref 1 aborting False
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,282::resourceManager::190::ResourceManager.Request::(__init__)
> ResName=`Storage.f231d176-a11a-47a8-84e7-12bfa9095bdd`ReqID=`544caeb2-9d61-42e4-9fc7-1f7811700f4b`::Request
> was made in '/usr/share/vdsm/storage/resourceManager.py' line '189' at
> '__init__'
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,282::resourceManager::504::ResourceManager::(registerResource)
> Trying to register resource 'Storage.f231d176-a11a-47a8-84e7-12bfa9095bdd'
> for lock type 'shared'
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,282::resourceManager::547::ResourceManager::(registerResource)
> Resource 'Storage.f231d176-a11a-47a8-84e7-12bfa9095bdd' is free. Now locking
> as 'shared' (1 active user)
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,283::resourceManager::227::ResourceManager.Request::(grant)
> ResName=`Storage.f231d176-a11a-47a8-84e7-12bfa9095bdd`ReqID=`544caeb2-9d61-42e4-9fc7-1f7811700f4b`::Granted
> request
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,283::task::794::TaskManager.Task::(resourceAcquired)
> Task=`78b8f42d-14cd-48e2-9e3e-61a6610c9f66`::_resourcesAcquired:
> Storage.f231d176-a11a-47a8-84e7-12bfa9095bdd (shared)
> Thread-304915::DEBUG::2013-11-04
> 07:51:56,283::task::957::TaskManager.Task::(_decref)
> Task=`78b8f42d-14cd-48e2-9e3e-61a6610c9f66`::ref 1 aborting False
> Thread-304915::ERROR::2013-11-04
> 07:51:56,313::task::833::TaskManager.Task::(_setError)
> Task=`78b8f42d-14cd-48e2-9e3e-61a6610c9f66`::Unexpected error
> Traceback (most recent call last):
> File "/usr/share/vdsm/storage/task.py", line 840, in _run

Re: [Users] Problem in 3.3 with shmmax configuration

2013-11-04 Thread Sandro Bonazzola
On 02/11/2013 17:51, Bob Doolittle wrote:
> Hi,
> 
> I'm setting up Engine for the 2nd time - the first time I answered a 
> configuration question wrong. So I did:
> 
> engine-setup
> engine-cleanup
> engine-setup
> 
> Things worked, until I rebooted the system. I found that postgresql would not 
> startup, and was failing with "could not create shared memory segment:
> Invalid Argument".
> 
> I resolved this issue by creating a file /etc/sysctl.d/10-shmmax.conf, 
> containing the line:
> kernel.shmmax = 10
> 
> (I read somewhere that postgresql recommends setting shmmax to 1/4 of 
> physical memory, and I have 4GB)
> 
> 1. Is this a known bug? If not, should I file one? If so, how do I do that? :)

Which version are you installing?
Can you please attach all 3 logs from above sequence (setup, cleanup, setup)?
I think something may have gone wrong on second setup execution while setting 
shmmax.


> 2. Is there a better fix than the one I settled on? Does the normal 
> configuration wind up increasing shmmax, or reducing postgresql's limits? 
> What are
> the default values for this in a normal engine configuration?

No better fix; engine-setup just does something like that, setting shmmax to 
35554432.


> 
> Thanks,
> Bob
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Custom settings for Kernel Same-page Merging (KSM)?

2013-11-04 Thread Martin Sivak
Hi Frank,

we do not currently have anything in the engine that would allow you to do 
this. If you create an oVirt RFE bug for it, it would make it easier for us to 
add the feature and track the progress.

However you can update the policy file you found to accomplish this. Just set 
(defvar ksm_sleep_ms_baseline 10) to a higher value and do not forget to 
properly restart VDSM when you do.
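Roughly like this (the 100 below is only an example value, not a 
recommendation):

  # in /etc/vdsm/mom.d/03-ksm.policy, change
  (defvar ksm_sleep_ms_baseline 10)
  # to something like
  (defvar ksm_sleep_ms_baseline 100)

  # then restart VDSM on the node, e.g.
  service vdsmd restart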

Regards

--
Martin Sivák
msi...@redhat.com
Red Hat Czech
RHEV-M SLA / Brno, CZ

- Original Message -
> Hi,
> 
> recently I've noticed that ksmd is using quite a lot of CPU on
> my oVirt node. While I have plenty of RAM, my CPU is way slow.
> Thus ksmd is eating up the most limited resource here.
> 
> Is it supported to use custom settings for ksmd? I've noticed
> that it is handled by MoM: /etc/vdsm/mom.d/03-ksm.policy.
> May I just change some settings here, or is there a tuning
> parameter in oVirt engine? Couldn't find a matching one
> with engine-config.
> 
> I'd like to use a higher value for sleep_millisecs to reduce
> CPU usage (with the possible side effect of higher RAM usage).
> 
> 
> Thanks
> - Frank
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users