Re: [ovirt-users] Using Microsoft NFS server as storage domain

2016-01-22 Thread Pavel Gashev
Nir,


On 21/01/16 23:55, "Nir Soffer"  wrote:
>live migration starts by creating a snapshot, then copying the disks to the new
>storage, and then mirroring the active layer so both the old and the
>new disks are
>the same. Finally we switch to the new disk, and delete the old disk.
>
>So probably the issue is in the mirroring step. This is most likely a
>qemu issue.

Thank you for the clarification. This gave me the idea to check the consistency of
the old disk.

I performed the following test:
1. Create a VM on MS NFS
2. Initiate live disk migration to another storage domain
3. Catch the source files before oVirt removes them by creating hard links
to another directory
4. Shut down the VM
5. Create another VM and move the caught files to the location where the new
disk files are located
6. Check the consistency of the filesystem in both VMs


The source disk is consistent. The destination disk is corrupted.
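As an aside, a quick image-level sanity check of a qcow2 volume, separate from
the in-guest fsck, is also possible with qemu-img (the path below is only a
placeholder):

qemu-img check /rhev/data-center/...path.to.disk.image..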

>
>I'll try to get instructions for this from libvirt developers. If this
>happen with
>libvirt alone, this is a libvirt or qemu bug, and there is little we (ovirt) 
>can
>do about it.


I've tried to reproduce the mirroring of the active layer:

1. Create two thin-provisioned VMs from the same template on different
storage domains.
2. Start VM1
3. virsh blockcopy VM1 vda /rhev/data-center/...path.to.disk.of.VM2.. --wait
--verbose --reuse-external --shallow
4. virsh blockjob VM1 vda --abort --pivot
5. Shut down VM1
6. Start VM2, boot it in recovery mode and check the filesystem.

I tried this a dozen times. Everything worked fine. No data corruption.
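For reference, the manual test above can be wrapped into a loop roughly like
this (a minimal sketch; the VM names and the destination disk path are the
same placeholders as in the steps above):

for i in $(seq 1 12); do
    virsh start VM1
    virsh blockcopy VM1 vda /rhev/data-center/...path.to.disk.of.VM2.. \
        --wait --verbose --reuse-external --shallow
    virsh blockjob VM1 vda --abort --pivot
    virsh shutdown VM1
    # then start VM2 in recovery mode and fsck its filesystem
done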


Ideas?



Re: [ovirt-users] memory leak in 3.5.6 - not vdsm

2016-01-22 Thread Charles Kozler
Hi Nir -

Do you have a release target date for 3.5.8? Any estimate would help.

If it's not VDSM, what is it exactly? Sorry, I understood from the ticket that
it was something inside vdsm; was I mistaken?

The servers are CentOS 6, 6.7 to be exact.

I have done all the forms of flushing that I can (page cache, inodes, dentries,
etc.) and have also moved VMs around to other nodes, and nothing changes the
memory usage. How can I find the leak? Where is the leak? RES shows the
following, and the totals don't add up to 20GB:

   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 19044 qemu      20   0 8876m 4.0g 5680 S  3.6 12.9   1571:44 qemu-kvm
 26143 qemu      20   0 5094m 1.1g 5624 S  9.2  3.7   6012:12 qemu-kvm
  5837 root       0 -20  964m 624m 3664 S  0.0  2.0  85:22.09 glusterfs
 14328 root       0 -20  635m 169m 3384 S  0.0  0.5  43:15.23 glusterfs
  5134 vdsm       0 -20 4368m 111m  10m S  5.9  0.3   3710:50 vdsm
  4095 root      15  -5  727m  43m  10m S  0.0  0.1   0:02.00 supervdsmServer

4.0G + 1.1G + 624M + 169M + 111M + 43M = ~7GB

This was top sorted by RES from highest to lowest

At that point I wouldn't know where else to look except slab / kernel
structures, of which slab shows:

[compute[root@node1 ~]$ cat /proc/meminfo | grep -i slab
Slab:          2549748 kB

So roughly 2-3GB. Adding that to the other usage of 7GB, we still have about
10GB unaccounted for.

On Fri, Jan 22, 2016 at 4:24 PM, Nir Soffer  wrote:

> On Fri, Jan 22, 2016 at 11:08 PM, Charles Kozler 
> wrote:
> > Hi Nir -
> >
> > Thanks for getting back to me. Will the patch to 3.6 be backported to
> 3.5?
>
> We plan to include them in 3.5.8.
>
> > As you can tell from the images, it takes days and days for it to
> increase
> > over time. I also wasnt sure if that was the right bug because VDSM
> memory
> > shows normal from top ...
> >
> >PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
> >   5134 vdsm   0 -20 4368m 111m  10m S  2.0  0.3   3709:28 vdsm
>
> As you wrote, this issue is not related to vdsm.
>
> >
> > Res is only 111M. This is from node1 which is showing currently 20GB of
> 32GB
> > used with only 2 VMs running on it - 1 with 4G and another with ~1 GB of
> RAM
> > configured
> >
> > The images are from nagios and the value here is a direct correlation to
> > what you would see in the free command output. See below from an example
> of
> > node 1 and node 2
> >
> > [compute[root@node1 ~]$ free
> >  total   used   free sharedbuffers cached
> > Mem:  32765316   20318156   12447160252  30884 628948
> > -/+ buffers/cache:   19658324   13106992
> > Swap: 19247100  0   19247100
> > [compute[root@node1 ~]$ free -m
> >  total   used   free sharedbuffers cached
> > Mem: 31997  19843  12153  0 30614
> > -/+ buffers/cache:  19199  12798
> > Swap:18795  0  18795
> >
> > And its correlated image http://i.imgur.com/PZLEgyx.png (~19GB used)
> >
> > And as a control, node 2 that I just restarted today
> >
> > [compute[root@node2 ~]$ free
> >  total   used   free sharedbuffers cached
> > Mem:  327653161815324   30949992212  35784 717320
> > -/+ buffers/cache:1062220   31703096
> > Swap: 19247100  0   19247100
>
> Is this rhel/centos 6?
>
> > [compute[root@node2 ~]$ free -m
> >  total   used   free sharedbuffers cached
> > Mem: 31997   1772  30225  0 34700
> > -/+ buffers/cache:   1036  30960
> > Swap:18795  0  18795
> >
> > And its correlated image http://i.imgur.com/8ldPVqY.png  (~2GB used).
> Note
> > how 1772 in the image is exactly what is registered under 'used' in free
> > command
>
> I guess you should start looking at the processes running on these nodes.
>
> Maybe try to collect memory usage per process using ps?
>
> >
> > On Fri, Jan 22, 2016 at 3:59 PM, Nir Soffer  wrote:
> >>
> >> On Fri, Jan 22, 2016 at 9:25 PM, Charles Kozler 
> >> wrote:
> >> > Here is a screenshot of my three nodes and their increased memory
> usage
> >> > over
> >> > 30 days. Note that node #2 had 1 single VM that had 4GB of RAM
> assigned
> >> > to
> >> > it. I had since shut it down and saw no memory reclamation occur.
> >> > Further, I
> >> > flushed page caches and inodes and ran 'sync'. I tried everything but
> >> > nothing brought the memory usage down. vdsm was low too (couple
> hundred
> >> > MB)
> >>
> >> Note that there is an old leak in vdsm, will be fixed in next 3.6 build:
> >> https://bugzilla.redhat.com/1269424
> >>
> >> > and there was no qemu-kvm process running so I'm at a loss
> >> >
> >> > http://imgur.com/a/aFPcK
> >> >
> >> > Please advise on what I can do to debug this. Note I 

Re: [ovirt-users] Using Microsoft NFS server as storage domain

2016-01-22 Thread Nir Soffer
On Fri, Jan 22, 2016 at 5:15 PM, Pavel Gashev  wrote:
> Nir,
>
>
> On 21/01/16 23:55, "Nir Soffer"  wrote:
>>live migration starts by creating a snapshot, then copying the disks to the 
>>new
>>storage, and then mirroring the active layer so both the old and the
>>new disks are
>>the same. Finally we switch to the new disk, and delete the old disk.
>>
>>So probably the issue is in the mirroring step. This is most likely a
>>qemu issue.
>
> Thank you for clarification. This brought me an idea to check consistency of 
> the old disk.
>
> I performed the following testing:
> 1. Create a VM on MS NFS
> 2. Initiate live disk migration to another storage
> 3. Catch the source files before oVirt has removed them by creating hard 
> links to another directory
> 4. Shutdown VM
> 5. Create another VM and move the catched files to the place where new disk 
> files is located
> 6. Check consistency of filesystem in both VMs
>
>
> The source disk is consistent. The destination disk is corrupted.
>
>>
>>I'll try to get instructions for this from libvirt developers. If this
>>happen with
>>libvirt alone, this is a libvirt or qemu bug, and there is little we (ovirt) 
>>can
>>do about it.
>
>
> I've tried to reproduce the mirroring of active layer:
>
> 1. Create two thin template provisioned VMs from the same template on 
> different storages.
> 2. Start VM1
> 3. virsh blockcopy VM1 vda /rhev/data-center/...path.to.disk.of.VM2.. --wait 
> --verbose --reuse-external --shallow
> 4. virsh blockjob VM1 vda --abort --pivot
> 5. Shutdown VM1
> 6. Start VM2. Boot in recovery mode and check filesystem.
>
> I did try this a dozen times. Everything works fine. No data corruption.

If you take the same VM and do a live storage migration in oVirt, is the
file system corrupted after the migration?

What is the guest OS? Did you try with more than one?

>
>
> Ideas?

Thanks for this research!

The next step is to open a bug with the logs I requested in my last
message. Please mark the bug as urgent.

I'm adding Kevin (from qemu) and Eric (from libvirt); hopefully they can tell
whether the virsh flow is indeed identical to what oVirt does, and what the
next step for debugging this should be.

oVirt uses blockCopy if available (it should have been available everywhere
for some time now), and falls back to blockRebase otherwise. Do you see this
warning?

blockCopy not supported, using blockRebase

For reference, this is the relevant code in oVirt for the mirroring part.
The mirroring starts with diskReplicateStart() and ends with
diskReplicateFinish(). I removed the parts about managing vdsm state and
left the calls to libvirt.

    def diskReplicateFinish(self, srcDisk, dstDisk):
        ...
        blkJobInfo = self._dom.blockJobInfo(drive.name, 0)
        ...
        if srcDisk != dstDisk:
            self.log.debug("Stopping the disk replication switching to the "
                           "destination drive: %s", dstDisk)
            blockJobFlags = libvirt.VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT
        ...
        else:
            self.log.debug("Stopping the disk replication remaining on the "
                           "source drive: %s", dstDisk)
            blockJobFlags = 0
        ...
        try:
            # Stopping the replication
            self._dom.blockJobAbort(drive.name, blockJobFlags)
        except Exception:
            self.log.exception("Unable to stop the replication for"
                               " the drive: %s", drive.name)
        ...

    def _startDriveReplication(self, drive):
        destxml = drive.getReplicaXML().toprettyxml()
        self.log.debug("Replicating drive %s to %s", drive.name, destxml)

        flags = (libvirt.VIR_DOMAIN_BLOCK_COPY_SHALLOW |
                 libvirt.VIR_DOMAIN_BLOCK_COPY_REUSE_EXT)

        # TODO: Remove fallback when using libvirt >= 1.2.9.
        try:
            self._dom.blockCopy(drive.name, destxml, flags=flags)
        except libvirt.libvirtError as e:
            if e.get_error_code() != libvirt.VIR_ERR_NO_SUPPORT:
                raise

            self.log.warning("blockCopy not supported, using blockRebase")

            base = drive.diskReplicate["path"]
            self.log.debug("Replicating drive %s to %s", drive.name, base)

            flags = (libvirt.VIR_DOMAIN_BLOCK_REBASE_COPY |
                     libvirt.VIR_DOMAIN_BLOCK_REBASE_REUSE_EXT |
                     libvirt.VIR_DOMAIN_BLOCK_REBASE_SHALLOW)

            if drive.diskReplicate["diskType"] == DISK_TYPE.BLOCK:
                flags |= libvirt.VIR_DOMAIN_BLOCK_REBASE_COPY_DEV

            self._dom.blockRebase(drive.name, base, flags=flags)


Re: [ovirt-users] SDK: Direct export of a snapshot into a storage domain?

2016-01-22 Thread Nir Soffer
On Fri, Jan 22, 2016 at 11:22 AM, gregor  wrote:
> Hi,
>
> is it possible to export a snapshot directly into a storage domain?

This is possible using the UI (export from the snapshots sub-tab).

Arik, is this possible using the REST API?

>
> This will improve the speed and space overhead of my backup, tool.
>
> https://github.com/wefixit-AT/oVirtBackup/issues/7
>
> regards
> gregor
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] memory leak in 3.5.6 - not vdsm

2016-01-22 Thread Charles Kozler
Hi Nir -

Thanks for getting back to me. Will the patch to 3.6 be backported to 3.5?
As you can tell from the images, it takes days and days for it to increase
over time. I also wasn't sure if that was the right bug because VDSM memory
looks normal in top ...

   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  5134 vdsm       0 -20 4368m 111m  10m S  2.0  0.3   3709:28 vdsm

RES is only 111M. This is from node1, which is currently showing 20GB of
32GB used with only 2 VMs running on it: 1 with 4GB and another with ~1GB
of RAM configured.

The images are from Nagios, and the value there correlates directly with
what you would see in the free command output. See below for an example from
node 1 and node 2.

[compute[root@node1 ~]$ free
             total       used       free     shared    buffers     cached
Mem:      32765316   20318156   12447160        252      30884     628948
-/+ buffers/cache:   19658324   13106992
Swap:     19247100          0   19247100
[compute[root@node1 ~]$ free -m
             total       used       free     shared    buffers     cached
Mem:         31997      19843      12153          0         30        614
-/+ buffers/cache:      19199      12798
Swap:        18795          0      18795

And its corresponding image: http://i.imgur.com/PZLEgyx.png (~19GB used)

And as a control, node 2 that I just restarted today

[compute[root@node2 ~]$ free
             total       used       free     shared    buffers     cached
Mem:      32765316    1815324   30949992        212      35784     717320
-/+ buffers/cache:    1062220   31703096
Swap:     19247100          0   19247100
[compute[root@node2 ~]$ free -m
             total       used       free     shared    buffers     cached
Mem:         31997       1772      30225          0         34        700
-/+ buffers/cache:       1036      30960
Swap:        18795          0      18795

And its corresponding image: http://i.imgur.com/8ldPVqY.png (~2GB used). Note
how 1772 in the image is exactly what is reported under 'used' by the free
command.

On Fri, Jan 22, 2016 at 3:59 PM, Nir Soffer  wrote:

> On Fri, Jan 22, 2016 at 9:25 PM, Charles Kozler 
> wrote:
> > Here is a screenshot of my three nodes and their increased memory usage
> over
> > 30 days. Note that node #2 had 1 single VM that had 4GB of RAM assigned
> to
> > it. I had since shut it down and saw no memory reclamation occur.
> Further, I
> > flushed page caches and inodes and ran 'sync'. I tried everything but
> > nothing brought the memory usage down. vdsm was low too (couple hundred
> MB)
>
> Note that there is an old leak in vdsm, will be fixed in next 3.6 build:
> https://bugzilla.redhat.com/1269424
>
> > and there was no qemu-kvm process running so I'm at a loss
> >
> > http://imgur.com/a/aFPcK
> >
> > Please advise on what I can do to debug this. Note I have restarted node
> 2
> > (which is why you see the drop) to see if it raises in memory use over
> tim
> > even with no VM's running
>
> Not sure what is "memory" that you show in the graphs. Theoretically this
> may be
> normal memory usage, Linux using free memory for the buffer cache.
>
> Can you instead show the output of "free", during one day, maybe run once
> per hour?
>
> You may also like to install sysstat for collecting and monitoring
> resources usage.
>
> >
> > [compute[root@node2 log]$ rpm -qa | grep -i ovirt
> > libgovirt-0.3.2-1.el6.x86_64
> > ovirt-release35-006-1.noarch
> > ovirt-hosted-engine-ha-1.2.8-1.el6.noarch
> > ovirt-hosted-engine-setup-1.2.6.1-1.el6.noarch
> > ovirt-engine-sdk-python-3.5.6.0-1.el6.noarch
> > ovirt-host-deploy-1.3.2-1.el6.noarch
> >
> >
> > --
> >
> > Charles Kozler
> > Vice President, IT Operations
> >
> > FIX Flyer, LLC
> > 225 Broadway | Suite 1600 | New York, NY 10007
> > 1-888-349-3593
> > http://www.fixflyer.com
> >
> > NOTICE TO RECIPIENT: THIS E-MAIL IS MEANT ONLY FOR THE INTENDED
> RECIPIENT(S)
> > OF THE TRANSMISSION, AND CONTAINS CONFIDENTIAL INFORMATION WHICH IS
> > PROPRIETARY TO FIX FLYER LLC.  ANY UNAUTHORIZED USE, COPYING,
> DISTRIBUTION,
> > OR DISSEMINATION IS STRICTLY PROHIBITED.  ALL RIGHTS TO THIS INFORMATION
> IS
> > RESERVED BY FIX FLYER LLC.  IF YOU ARE NOT THE INTENDED RECIPIENT, PLEASE
> > CONTACT THE SENDER BY REPLY E-MAIL AND PLEASE DELETE THIS E-MAIL FROM
> YOUR
> > SYSTEM AND DESTROY ANY COPIES.
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>



-- 

*Charles Kozler*
*Vice President, IT Operations*

FIX Flyer, LLC
225 Broadway | Suite 1600 | New York, NY 10007
1-888-349-3593
http://www.fixflyer.com 


Re: [ovirt-users] memory leak in 3.5.6 - not vdsm

2016-01-22 Thread Nir Soffer
On Fri, Jan 22, 2016 at 9:25 PM, Charles Kozler  wrote:
> Here is a screenshot of my three nodes and their increased memory usage over
> 30 days. Note that node #2 had 1 single VM that had 4GB of RAM assigned to
> it. I had since shut it down and saw no memory reclamation occur. Further, I
> flushed page caches and inodes and ran 'sync'. I tried everything but
> nothing brought the memory usage down. vdsm was low too (couple hundred MB)

Note that there is an old leak in vdsm, which will be fixed in the next 3.6 build:
https://bugzilla.redhat.com/1269424

> and there was no qemu-kvm process running so I'm at a loss
>
> http://imgur.com/a/aFPcK
>
> Please advise on what I can do to debug this. Note I have restarted node 2
> (which is why you see the drop) to see if it raises in memory use over tim
> even with no VM's running

I'm not sure what the "memory" you show in the graphs is. Theoretically this
may be normal memory usage: Linux using free memory for the buffer cache.

Can you instead show the output of "free" over one day, maybe run once
per hour?

You may also like to install sysstat for collecting and monitoring
resource usage.
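A minimal sketch for the hourly collection (the log path is just an example;
once sysstat is installed, "sar -r" gives a similar memory history):

echo '0 * * * * root /usr/bin/free -m >> /var/log/free-hourly.log' > /etc/cron.d/free-hourly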

>
> [compute[root@node2 log]$ rpm -qa | grep -i ovirt
> libgovirt-0.3.2-1.el6.x86_64
> ovirt-release35-006-1.noarch
> ovirt-hosted-engine-ha-1.2.8-1.el6.noarch
> ovirt-hosted-engine-setup-1.2.6.1-1.el6.noarch
> ovirt-engine-sdk-python-3.5.6.0-1.el6.noarch
> ovirt-host-deploy-1.3.2-1.el6.noarch
>
>
> --
>
> Charles Kozler
> Vice President, IT Operations
>
> FIX Flyer, LLC
> 225 Broadway | Suite 1600 | New York, NY 10007
> 1-888-349-3593
> http://www.fixflyer.com
>
> NOTICE TO RECIPIENT: THIS E-MAIL IS MEANT ONLY FOR THE INTENDED RECIPIENT(S)
> OF THE TRANSMISSION, AND CONTAINS CONFIDENTIAL INFORMATION WHICH IS
> PROPRIETARY TO FIX FLYER LLC.  ANY UNAUTHORIZED USE, COPYING, DISTRIBUTION,
> OR DISSEMINATION IS STRICTLY PROHIBITED.  ALL RIGHTS TO THIS INFORMATION IS
> RESERVED BY FIX FLYER LLC.  IF YOU ARE NOT THE INTENDED RECIPIENT, PLEASE
> CONTACT THE SENDER BY REPLY E-MAIL AND PLEASE DELETE THIS E-MAIL FROM YOUR
> SYSTEM AND DESTROY ANY COPIES.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>


Re: [ovirt-users] VM manipulate DMI Chassis Serial value

2016-01-22 Thread Nir Soffer
Adding Francesco

On Fri, Jan 22, 2016 at 1:40 PM,   wrote:
> Hi list,
>
> it is possible to change the DMI Chassis Serial value in the VM config?
>
> Virtualbox is able to perform this via the following command for example:
> c:\Program Files\Oracle\VirtualBox\VBoxManage.exe setextradata
> "NAME_OF_VIRTUAL_MACHINE"
> "VBoxInternal/Devices/pcbios/0/Config/DmiChassisSerial" "123456789"
>
> Handle 0x0300, DMI type 3, 21 bytes
> Chassis Information
> Manufacturer: Red Hat
> Type: Other
> Lock: Not Present
> Version: RHEL 7.2.0 PC (i440FX + PIIX, 1996)
> Serial Number: Not Specified
> Asset Tag: Not Specified
> Boot-up State: Safe
> Power Supply State: Safe
> Thermal State: Safe
> Security Status: Unknown
> OEM Information: 0x
> Height: Unspecified
> Number Of Power Cords: Unspecified
> Contained Elements: 0
>
> root@demo:~ # dmidecode -s chassis-serial-number
> Not Specified
>
>
> I want to change the red above!!
> Is there a way to change that in Ovirt as well?
>
> Best regards and thank you for your help.
> Christoph
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>


Re: [ovirt-users] memory leak in 3.5.6 - not vdsm

2016-01-22 Thread Sandro Bonazzola
On 22/Jan/2016 22:31, "Charles Kozler"  wrote:
>
> Hi Nir -
>
> do you have a release target date for 3.5.8? Any estimate would help.
>

There won't be any supported release after 3.5.6. Please update to 3.6.2
next week.

> If its not VDSM, what is it exactly? Sorry, I understood from the ticket
it was something inside vdsm, was I mistaken?
>
> CentOS 6 is the servers. 6.7 to be exact
>
> I have done all forms of flushing that I can (page cache, inodes,
dentry's, etc) and as well moved VM's around to other nodes and nothing
changes the memory. How can I find the leak? Where is the leak? RES shows
the following of which, the totals dont add up to 20GB
>
>PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND


>  19044 qemu  20   0 8876m 4.0g 5680 S  3.6 12.9   1571:44 qemu-kvm


>  26143 qemu  20   0 5094m 1.1g 5624 S  9.2  3.7   6012:12 qemu-kvm


>   5837 root   0 -20  964m 624m 3664 S  0.0  2.0  85:22.09 glusterfs


>  14328 root   0 -20  635m 169m 3384 S  0.0  0.5  43:15.23 glusterfs


>   5134 vdsm   0 -20 4368m 111m  10m S  5.9  0.3   3710:50 vdsm


>   4095 root  15  -5  727m  43m  10m S  0.0  0.1   0:02.00
supervdsmServer
>
> 4.0G + 1.1G + 624M + 169 + 111M + 43M = ~7GB
>
> This was top sorted by RES from highest to lowest
>
> At that point I wouldnt know where else to look except slab / kernel
structures. Of which slab shows:
>
> [compute[root@node1 ~]$ cat /proc/meminfo | grep -i slab
> Slab:2549748 kB
>
> So roughly 2-3GB. Adding that to the other use of 7GB we have still about
10GB unaccounted for
>
> On Fri, Jan 22, 2016 at 4:24 PM, Nir Soffer  wrote:
>>
>> On Fri, Jan 22, 2016 at 11:08 PM, Charles Kozler 
wrote:
>> > Hi Nir -
>> >
>> > Thanks for getting back to me. Will the patch to 3.6 be backported to
3.5?
>>
>> We plan to include them in 3.5.8.
>>
>> > As you can tell from the images, it takes days and days for it to
increase
>> > over time. I also wasnt sure if that was the right bug because VDSM
memory
>> > shows normal from top ...
>> >
>> >PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
>> >   5134 vdsm   0 -20 4368m 111m  10m S  2.0  0.3   3709:28 vdsm
>>
>> As you wrote, this issue is not related to vdsm.
>>
>> >
>> > Res is only 111M. This is from node1 which is showing currently 20GB
of 32GB
>> > used with only 2 VMs running on it - 1 with 4G and another with ~1 GB
of RAM
>> > configured
>> >
>> > The images are from nagios and the value here is a direct correlation
to
>> > what you would see in the free command output. See below from an
example of
>> > node 1 and node 2
>> >
>> > [compute[root@node1 ~]$ free
>> >  total   used   free sharedbuffers
 cached
>> > Mem:  32765316   20318156   12447160252  30884
 628948
>> > -/+ buffers/cache:   19658324   13106992
>> > Swap: 19247100  0   19247100
>> > [compute[root@node1 ~]$ free -m
>> >  total   used   free sharedbuffers
 cached
>> > Mem: 31997  19843  12153  0 30
614
>> > -/+ buffers/cache:  19199  12798
>> > Swap:18795  0  18795
>> >
>> > And its correlated image http://i.imgur.com/PZLEgyx.png (~19GB used)
>> >
>> > And as a control, node 2 that I just restarted today
>> >
>> > [compute[root@node2 ~]$ free
>> >  total   used   free sharedbuffers
 cached
>> > Mem:  327653161815324   30949992212  35784
 717320
>> > -/+ buffers/cache:1062220   31703096
>> > Swap: 19247100  0   19247100
>>
>> Is this rhel/centos 6?
>>
>> > [compute[root@node2 ~]$ free -m
>> >  total   used   free sharedbuffers
 cached
>> > Mem: 31997   1772  30225  0 34
700
>> > -/+ buffers/cache:   1036  30960
>> > Swap:18795  0  18795
>> >
>> > And its correlated image http://i.imgur.com/8ldPVqY.png  (~2GB used).
Note
>> > how 1772 in the image is exactly what is registered under 'used' in
free
>> > command
>>
>> I guess you should start looking at the processes running on these nodes.
>>
>> Maybe try to collect memory usage per process using ps?
>>
>> >
>> > On Fri, Jan 22, 2016 at 3:59 PM, Nir Soffer  wrote:
>> >>
>> >> On Fri, Jan 22, 2016 at 9:25 PM, Charles Kozler 
>> >> wrote:
>> >> > Here is a screenshot of my three nodes and their increased memory
usage
>> >> > over
>> >> > 30 days. Note that node #2 had 1 single VM that had 4GB of RAM
assigned
>> >> > to
>> >> > it. I had since shut it down and saw no memory reclamation occur.
>> >> > Further, I
>> >> > flushed page caches and inodes and ran 'sync'. I tried everything
but
>> >> > nothing brought the memory usage down. vdsm was low too (couple
hundred
>> >> > MB)
>> >>
>> >> Note that there is an old leak in vdsm, will be fixed in 

Re: [ovirt-users] memory leak in 3.5.6 - not vdsm

2016-01-22 Thread Charles Kozler
Sandro -

Do you have documentation available that covers upgrading a self-hosted
setup? I followed this:
http://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/

Would it be as easy as installing the RPM and then running yum upgrade?

Thanks

On Fri, Jan 22, 2016 at 4:42 PM, Sandro Bonazzola 
wrote:

>
> Il 22/Gen/2016 22:31, "Charles Kozler"  ha scritto:
> >
> > Hi Nir -
> >
> > do you have a release target date for 3.5.8? Any estimate would help.
> >
>
> There won't be any supported release after 3.5.6. Please update to 3.6.2
> next week
>
> > If its not VDSM, what is it exactly? Sorry, I understood from the ticket
> it was something inside vdsm, was I mistaken?
> >
> > CentOS 6 is the servers. 6.7 to be exact
> >
> > I have done all forms of flushing that I can (page cache, inodes,
> dentry's, etc) and as well moved VM's around to other nodes and nothing
> changes the memory. How can I find the leak? Where is the leak? RES shows
> the following of which, the totals dont add up to 20GB
> >
> >PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
>
>
> >  19044 qemu  20   0 8876m 4.0g 5680 S  3.6 12.9   1571:44 qemu-kvm
>
>
> >  26143 qemu  20   0 5094m 1.1g 5624 S  9.2  3.7   6012:12 qemu-kvm
>
>
> >   5837 root   0 -20  964m 624m 3664 S  0.0  2.0  85:22.09 glusterfs
>
>
> >  14328 root   0 -20  635m 169m 3384 S  0.0  0.5  43:15.23 glusterfs
>
>
> >   5134 vdsm   0 -20 4368m 111m  10m S  5.9  0.3   3710:50 vdsm
>
>
> >   4095 root  15  -5  727m  43m  10m S  0.0  0.1   0:02.00
> supervdsmServer
> >
> > 4.0G + 1.1G + 624M + 169 + 111M + 43M = ~7GB
> >
> > This was top sorted by RES from highest to lowest
> >
> > At that point I wouldnt know where else to look except slab / kernel
> structures. Of which slab shows:
> >
> > [compute[root@node1 ~]$ cat /proc/meminfo | grep -i slab
> > Slab:2549748 kB
> >
> > So roughly 2-3GB. Adding that to the other use of 7GB we have still
> about 10GB unaccounted for
> >
> > On Fri, Jan 22, 2016 at 4:24 PM, Nir Soffer  wrote:
> >>
> >> On Fri, Jan 22, 2016 at 11:08 PM, Charles Kozler 
> wrote:
> >> > Hi Nir -
> >> >
> >> > Thanks for getting back to me. Will the patch to 3.6 be backported to
> 3.5?
> >>
> >> We plan to include them in 3.5.8.
> >>
> >> > As you can tell from the images, it takes days and days for it to
> increase
> >> > over time. I also wasnt sure if that was the right bug because VDSM
> memory
> >> > shows normal from top ...
> >> >
> >> >PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
> >> >   5134 vdsm   0 -20 4368m 111m  10m S  2.0  0.3   3709:28 vdsm
> >>
> >> As you wrote, this issue is not related to vdsm.
> >>
> >> >
> >> > Res is only 111M. This is from node1 which is showing currently 20GB
> of 32GB
> >> > used with only 2 VMs running on it - 1 with 4G and another with ~1 GB
> of RAM
> >> > configured
> >> >
> >> > The images are from nagios and the value here is a direct correlation
> to
> >> > what you would see in the free command output. See below from an
> example of
> >> > node 1 and node 2
> >> >
> >> > [compute[root@node1 ~]$ free
> >> >  total   used   free sharedbuffers
>  cached
> >> > Mem:  32765316   20318156   12447160252  30884
>  628948
> >> > -/+ buffers/cache:   19658324   13106992
> >> > Swap: 19247100  0   19247100
> >> > [compute[root@node1 ~]$ free -m
> >> >  total   used   free sharedbuffers
>  cached
> >> > Mem: 31997  19843  12153  0 30
> 614
> >> > -/+ buffers/cache:  19199  12798
> >> > Swap:18795  0  18795
> >> >
> >> > And its correlated image http://i.imgur.com/PZLEgyx.png (~19GB used)
> >> >
> >> > And as a control, node 2 that I just restarted today
> >> >
> >> > [compute[root@node2 ~]$ free
> >> >  total   used   free sharedbuffers
>  cached
> >> > Mem:  327653161815324   30949992212  35784
>  717320
> >> > -/+ buffers/cache:1062220   31703096
> >> > Swap: 19247100  0   19247100
> >>
> >> Is this rhel/centos 6?
> >>
> >> > [compute[root@node2 ~]$ free -m
> >> >  total   used   free sharedbuffers
>  cached
> >> > Mem: 31997   1772  30225  0 34
> 700
> >> > -/+ buffers/cache:   1036  30960
> >> > Swap:18795  0  18795
> >> >
> >> > And its correlated image http://i.imgur.com/8ldPVqY.png  (~2GB
> used). Note
> >> > how 1772 in the image is exactly what is registered under 'used' in
> free
> >> > command
> >>
> >> I guess you should start looking at the processes running on these
> nodes.
> >>
> >> Maybe try to collect memory usage per process using ps?
> >>
> >> >
> >> > On Fri, Jan 22, 2016 at 3:59 PM, Nir Soffer 
> wrote:
> >> >>

Re: [ovirt-users] memory leak in 3.5.6 - not vdsm

2016-01-22 Thread Nir Soffer
On Fri, Jan 22, 2016 at 11:08 PM, Charles Kozler  wrote:
> Hi Nir -
>
> Thanks for getting back to me. Will the patch to 3.6 be backported to 3.5?

We plan to include them in 3.5.8.

> As you can tell from the images, it takes days and days for it to increase
> over time. I also wasnt sure if that was the right bug because VDSM memory
> shows normal from top ...
>
>PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
>   5134 vdsm   0 -20 4368m 111m  10m S  2.0  0.3   3709:28 vdsm

As you wrote, this issue is not related to vdsm.

>
> Res is only 111M. This is from node1 which is showing currently 20GB of 32GB
> used with only 2 VMs running on it - 1 with 4G and another with ~1 GB of RAM
> configured
>
> The images are from nagios and the value here is a direct correlation to
> what you would see in the free command output. See below from an example of
> node 1 and node 2
>
> [compute[root@node1 ~]$ free
>  total   used   free sharedbuffers cached
> Mem:  32765316   20318156   12447160252  30884 628948
> -/+ buffers/cache:   19658324   13106992
> Swap: 19247100  0   19247100
> [compute[root@node1 ~]$ free -m
>  total   used   free sharedbuffers cached
> Mem: 31997  19843  12153  0 30614
> -/+ buffers/cache:  19199  12798
> Swap:18795  0  18795
>
> And its correlated image http://i.imgur.com/PZLEgyx.png (~19GB used)
>
> And as a control, node 2 that I just restarted today
>
> [compute[root@node2 ~]$ free
>  total   used   free sharedbuffers cached
> Mem:  327653161815324   30949992212  35784 717320
> -/+ buffers/cache:1062220   31703096
> Swap: 19247100  0   19247100

Is this RHEL/CentOS 6?

> [compute[root@node2 ~]$ free -m
>  total   used   free sharedbuffers cached
> Mem: 31997   1772  30225  0 34700
> -/+ buffers/cache:   1036  30960
> Swap:18795  0  18795
>
> And its correlated image http://i.imgur.com/8ldPVqY.png  (~2GB used). Note
> how 1772 in the image is exactly what is registered under 'used' in free
> command

I guess you should start looking at the processes running on these nodes.

Maybe try to collect memory usage per process using ps?
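For example, something like this (a minimal sketch, assuming the stock procps
ps; adjust the columns as needed):

ps -eo pid,user,rss,vsz,comm --sort=-rss | head -20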

>
> On Fri, Jan 22, 2016 at 3:59 PM, Nir Soffer  wrote:
>>
>> On Fri, Jan 22, 2016 at 9:25 PM, Charles Kozler 
>> wrote:
>> > Here is a screenshot of my three nodes and their increased memory usage
>> > over
>> > 30 days. Note that node #2 had 1 single VM that had 4GB of RAM assigned
>> > to
>> > it. I had since shut it down and saw no memory reclamation occur.
>> > Further, I
>> > flushed page caches and inodes and ran 'sync'. I tried everything but
>> > nothing brought the memory usage down. vdsm was low too (couple hundred
>> > MB)
>>
>> Note that there is an old leak in vdsm, will be fixed in next 3.6 build:
>> https://bugzilla.redhat.com/1269424
>>
>> > and there was no qemu-kvm process running so I'm at a loss
>> >
>> > http://imgur.com/a/aFPcK
>> >
>> > Please advise on what I can do to debug this. Note I have restarted node
>> > 2
>> > (which is why you see the drop) to see if it raises in memory use over
>> > tim
>> > even with no VM's running
>>
>> Not sure what is "memory" that you show in the graphs. Theoretically this
>> may be
>> normal memory usage, Linux using free memory for the buffer cache.
>>
>> Can you instead show the output of "free", during one day, maybe run once
>> per hour?
>>
>> You may also like to install sysstat for collecting and monitoring
>> resources usage.
>>
>> >
>> > [compute[root@node2 log]$ rpm -qa | grep -i ovirt
>> > libgovirt-0.3.2-1.el6.x86_64
>> > ovirt-release35-006-1.noarch
>> > ovirt-hosted-engine-ha-1.2.8-1.el6.noarch
>> > ovirt-hosted-engine-setup-1.2.6.1-1.el6.noarch
>> > ovirt-engine-sdk-python-3.5.6.0-1.el6.noarch
>> > ovirt-host-deploy-1.3.2-1.el6.noarch
>> >
>> >
>> > --
>> >
>> > Charles Kozler
>> > Vice President, IT Operations
>> >
>> > FIX Flyer, LLC
>> > 225 Broadway | Suite 1600 | New York, NY 10007
>> > 1-888-349-3593
>> > http://www.fixflyer.com
>> >
>> > NOTICE TO RECIPIENT: THIS E-MAIL IS MEANT ONLY FOR THE INTENDED
>> > RECIPIENT(S)
>> > OF THE TRANSMISSION, AND CONTAINS CONFIDENTIAL INFORMATION WHICH IS
>> > PROPRIETARY TO FIX FLYER LLC.  ANY UNAUTHORIZED USE, COPYING,
>> > DISTRIBUTION,
>> > OR DISSEMINATION IS STRICTLY PROHIBITED.  ALL RIGHTS TO THIS INFORMATION
>> > IS
>> > RESERVED BY FIX FLYER LLC.  IF YOU ARE NOT THE INTENDED RECIPIENT,
>> > PLEASE
>> > CONTACT THE SENDER BY REPLY E-MAIL AND PLEASE DELETE THIS E-MAIL FROM
>> > YOUR
>> > SYSTEM AND DESTROY ANY COPIES.
>> >
>> > ___
>> > Users mailing list
>> > Users@ovirt.org
>> > 

Re: [ovirt-users] oVirt 3.6.1 with FreeIPA Auth domain performance

2016-01-22 Thread Justin Bushey
Ondra,

Thanks again. You've definitely saved me from spending too much time going
down a bunny hole.

-- Justin

On Fri, Jan 22, 2016 at 4:35 AM, Ondra Machacek  wrote:

> Hi,
>
> the best thing you can do is to migrate to new AAA ldap[1],
> as anyway you will have to do so in 4.0, as manage-domains
> will be removed, so I think better invest time to migration,
> then to searching for root cause. We will be happy to help
> you with migration. You can also try migration tool[2].
>
> Ondra
>
> [1]
> https://gerrit.ovirt.org/gitweb?p=ovirt-engine-extension-aaa-ldap.git;a=blob;f=README
> [2]
> https://github.com/machacekondra/ovirt-engine-kerbldap-migration/releases
>
>
> On 01/22/2016 09:37 AM, Justin Bushey wrote:
>
>> Hello,
>>
>> I just wanted to see if anyone else has seen issues with using FreeIPA
>> as an authentication domain with oVirt 3.6.1. Specifically, I'm seeing
>> extremely slow performance when authenticating as an IPA user, between
>> 5-10 minutes to get logged into the UI. On the KDC side I'm seeing
>> ticket requests from the oVirt host, which succeed and are repeated.
>> Eventually authentication succeeds to the Web UI.
>>
>> The IPA domain was added using `engine-manage-domains` with the IPA
>> provider option. I could configure direct LDAP authentication if
>> absolutely need be, but this is really bugging me.
>>
>> Google hasn't turned up any similar issues so I wanted to check if
>> anyone else has seen anything like this. I can post logs tomorrow if
>> anyone wants to assist me in troubleshooting ;)
>>
>> Thanks,
>>
>> Justin Bushey
>> InfoRelay Online Systems, Inc.
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>


[ovirt-users] memory leak in 3.5.6 - not vdsm

2016-01-22 Thread Charles Kozler
Here is a screenshot of my three nodes and their increased memory usage
over 30 days. Note that node #2 had a single VM with 4GB of RAM
assigned to it. I have since shut it down and saw no memory reclamation
occur. Further, I flushed page caches and inodes and ran 'sync'. I tried
everything, but nothing brought the memory usage down. vdsm was low too
(a couple hundred MB) and there was no qemu-kvm process running, so I'm at a
loss.

http://imgur.com/a/aFPcK

Please advise on what I can do to debug this. Note that I have restarted node 2
(which is why you see the drop) to see if its memory use rises over time
even with no VMs running.

[compute[root@node2 log]$ rpm -qa | grep -i ovirt
libgovirt-0.3.2-1.el6.x86_64
ovirt-release35-006-1.noarch
ovirt-hosted-engine-ha-1.2.8-1.el6.noarch
ovirt-hosted-engine-setup-1.2.6.1-1.el6.noarch
ovirt-engine-sdk-python-3.5.6.0-1.el6.noarch
ovirt-host-deploy-1.3.2-1.el6.noarch


-- 

*Charles Kozler*
*Vice President, IT Operations*

FIX Flyer, LLC
225 Broadway | Suite 1600 | New York, NY 10007
1-888-349-3593
http://www.fixflyer.com 



Re: [ovirt-users] Ovirt python API clone VM to other data domain

2016-01-22 Thread gregor
Hi,

If you have issues or improvements, please post them on GitHub to improve
the backup tool.

https://github.com/wefixit-AT/oVirtBackup/issues

regards
gregor

On 20/01/16 10:20, Algirdas Žemaitis wrote:
> Thank you, this was the solution, 
> Modified version of original script available here: 
> http://pastebin.com/GEtSJMCN
> Original : https://github.com/wefixit-AT/oVirtBackup
> 
> Diff: moded version is doing backup for all running VMS in cluster, trying to 
> clone VM into different storage domain (avoiding load on main domain)
> Please remove "iterration" if statement, I was using it to stop script from 
> doing backup for all VMs (testing)
> 
> -Original Message-
> From: Juan Hernández [mailto:jhern...@redhat.com] 
> Sent: Monday, January 18, 2016 5:37 PM
> To: Algirdas Žemaitis ; users@ovirt.org
> Subject: Re: [ovirt-users] Ovirt python API clone VM to other data domain
> 
> On 01/18/2016 03:39 PM, Algirdas Žemaitis wrote:
>> Hello,
>>
>>  
>>
>> It seems ovirt API lacks of possibility to define storage domain, in 
>> which cloned vm (from snapshot) will be created, is it done on 
>> purpose, or it is just missing/under development ?
>>
>>  
>>
>> Scenario using WEB GUI :
>>
>> Create snapshot of any running VM, right-click on snapshot, select “clone”
>>
>> In pop-up window you can enter new VM name etc, under “resource 
>> allocation” section **you can select where it will be created**
>>
>>  
>>
>>  
>>
>> Scenario using python-sdk/ovirt-api
>>
>>  
>>
>> I was trying to create vm for example like this:
>>
>> ##
>>
>> vm_params = params.VM(name=vm_from_list + '__bak', 
>> cluster=api.clusters.get("Default"),
>> storage_domain=api.storagedomains.get("temp"), memory=vm.get_memory(),
>> snapshots=snapshots_param)
>>
>> api.vms.add(vm_params)
>>
>> ##
>>
>> # temp is „other“ storage domain, NFS v3
>>
>> VM will be still created on same storage as original VM, no matter 
>> what domain I will define in params...
>>
>> Also tried other variations, using templates, disk profiles and so on, 
>> but nothing has changed where new VM is created.
>>
>>  
>>
>> I know cloning is not intended for backup purpose, but it is 
>> workaround probably half of Ovirt users use.
>>
>> In my case, it is not very smart to do snapshot and create VM 
>> (allocate disk space, etc) on same storage domain where there is 
>> already running a lot of VMs, environment is already busy.
>>
>>  
>>
>> Thanks !
>>
> 
> See here:
> 
> http://lists.ovirt.org/pipermail/users/2016-January/037321.html
> 
> --
> Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta 3ºD, 
> 28016 Madrid, Spain Inscrita en el Reg. Mercantil de Madrid – C.I.F. 
> B82657941 - Red Hat S.L.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 


[ovirt-users] SDK: Direct export of a snapshot into a storage domain?

2016-01-22 Thread gregor
Hi,

Is it possible to export a snapshot directly into a storage domain?

This would improve the speed and reduce the space overhead of my backup tool.

https://github.com/wefixit-AT/oVirtBackup/issues/7

regards
gregor


Re: [ovirt-users] oVirt 3.6.1 with FreeIPA Auth domain performance

2016-01-22 Thread Ondra Machacek

Hi,

the best thing you can do is to migrate to the new AAA LDAP [1],
as you will have to do so in 4.0 anyway, when manage-domains
will be removed. So I think it's better to invest the time in migration
than in searching for the root cause. We will be happy to help
you with the migration. You can also try the migration tool [2].

Ondra

[1] 
https://gerrit.ovirt.org/gitweb?p=ovirt-engine-extension-aaa-ldap.git;a=blob;f=README
[2] 
https://github.com/machacekondra/ovirt-engine-kerbldap-migration/releases


On 01/22/2016 09:37 AM, Justin Bushey wrote:

Hello,

I just wanted to see if anyone else has seen issues with using FreeIPA
as an authentication domain with oVirt 3.6.1. Specifically, I'm seeing
extremely slow performance when authenticating as an IPA user, between
5-10 minutes to get logged into the UI. On the KDC side I'm seeing
ticket requests from the oVirt host, which succeed and are repeated.
Eventually authentication succeeds to the Web UI.

The IPA domain was added using `engine-manage-domains` with the IPA
provider option. I could configure direct LDAP authentication if
absolutely need be, but this is really bugging me.

Google hasn't turned up any similar issues so I wanted to check if
anyone else has seen anything like this. I can post logs tomorrow if
anyone wants to assist me in troubleshooting ;)

Thanks,

Justin Bushey
InfoRelay Online Systems, Inc.




Re: [ovirt-users] memory leak in 3.5.6 - not vdsm

2016-01-22 Thread Nir Soffer
On Fri, Jan 22, 2016 at 11:30 PM, Charles Kozler  wrote:
> Hi Nir -
>
> do you have a release target date for 3.5.8? Any estimate would help.
>
> If its not VDSM, what is it exactly? Sorry, I understood from the ticket it
> was something inside vdsm, was I mistaken?

The bug I mentioned in my previous mail *is* a vdsm leak. This issue is not.

>
> CentOS 6 is the servers. 6.7 to be exact
>
> I have done all forms of flushing that I can (page cache, inodes, dentry's,
> etc) and as well moved VM's around to other nodes and nothing changes the
> memory. How can I find the leak? Where is the leak? RES shows the following
> of which, the totals dont add up to 20GB
>
>PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
>  19044 qemu  20   0 8876m 4.0g 5680 S  3.6 12.9   1571:44 qemu-kvm
>  26143 qemu  20   0 5094m 1.1g 5624 S  9.2  3.7   6012:12 qemu-kvm
>   5837 root   0 -20  964m 624m 3664 S  0.0  2.0  85:22.09 glusterfs
>  14328 root   0 -20  635m 169m 3384 S  0.0  0.5  43:15.23 glusterfs
>   5134 vdsm   0 -20 4368m 111m  10m S  5.9  0.3   3710:50 vdsm
>   4095 root  15  -5  727m  43m  10m S  0.0  0.1   0:02.00
> supervdsmServer
>
> 4.0G + 1.1G + 624M + 169 + 111M + 43M = ~7GB
>
> This was top sorted by RES from highest to lowest

Can you list *all* the processes and sum the RSS of all of them?

Use something like:

for status in /proc/*/status; do egrep '^VmRSS' $status; done |
awk '{sum+=$2} END {print sum}'
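If the per-process sum still falls well short of the "used" figure, a rough
next step (just a sketch) is to look at the kernel-side counters in
/proc/meminfo, which top and ps do not attribute to any process:

egrep 'Slab|SUnreclaim|PageTables|KernelStack|AnonHugePages|HugePages_Total|Shmem' /proc/meminfo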

> At that point I wouldnt know where else to look except slab / kernel
> structures. Of which slab shows:
>
> [compute[root@node1 ~]$ cat /proc/meminfo | grep -i slab
> Slab:2549748 kB
>
> So roughly 2-3GB. Adding that to the other use of 7GB we have still about
> 10GB unaccounted for
>
> On Fri, Jan 22, 2016 at 4:24 PM, Nir Soffer  wrote:
>>
>> On Fri, Jan 22, 2016 at 11:08 PM, Charles Kozler 
>> wrote:
>> > Hi Nir -
>> >
>> > Thanks for getting back to me. Will the patch to 3.6 be backported to
>> > 3.5?
>>
>> We plan to include them in 3.5.8.
>>
>> > As you can tell from the images, it takes days and days for it to
>> > increase
>> > over time. I also wasnt sure if that was the right bug because VDSM
>> > memory
>> > shows normal from top ...
>> >
>> >PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
>> >   5134 vdsm   0 -20 4368m 111m  10m S  2.0  0.3   3709:28 vdsm
>>
>> As you wrote, this issue is not related to vdsm.
>>
>> >
>> > Res is only 111M. This is from node1 which is showing currently 20GB of
>> > 32GB
>> > used with only 2 VMs running on it - 1 with 4G and another with ~1 GB of
>> > RAM
>> > configured
>> >
>> > The images are from nagios and the value here is a direct correlation to
>> > what you would see in the free command output. See below from an example
>> > of
>> > node 1 and node 2
>> >
>> > [compute[root@node1 ~]$ free
>> >  total   used   free sharedbuffers
>> > cached
>> > Mem:  32765316   20318156   12447160252  30884
>> > 628948
>> > -/+ buffers/cache:   19658324   13106992
>> > Swap: 19247100  0   19247100
>> > [compute[root@node1 ~]$ free -m
>> >  total   used   free sharedbuffers
>> > cached
>> > Mem: 31997  19843  12153  0 30
>> > 614
>> > -/+ buffers/cache:  19199  12798
>> > Swap:18795  0  18795
>> >
>> > And its correlated image http://i.imgur.com/PZLEgyx.png (~19GB used)
>> >
>> > And as a control, node 2 that I just restarted today
>> >
>> > [compute[root@node2 ~]$ free
>> >  total   used   free sharedbuffers
>> > cached
>> > Mem:  327653161815324   30949992212  35784
>> > 717320
>> > -/+ buffers/cache:1062220   31703096
>> > Swap: 19247100  0   19247100
>>
>> Is this rhel/centos 6?
>>
>> > [compute[root@node2 ~]$ free -m
>> >  total   used   free sharedbuffers
>> > cached
>> > Mem: 31997   1772  30225  0 34
>> > 700
>> > -/+ buffers/cache:   1036  30960
>> > Swap:18795  0  18795
>> >
>> > And its correlated image http://i.imgur.com/8ldPVqY.png  (~2GB used).
>> > Note
>> > how 1772 in the image is exactly what is registered under 'used' in free
>> > command
>>
>> I guess you should start looking at the processes running on these nodes.
>>
>> Maybe try to collect memory usage per process using ps?
>>
>> >
>> > On Fri, Jan 22, 2016 at 3:59 PM, Nir Soffer  wrote:
>> >>
>> >> On Fri, Jan 22, 2016 at 9:25 PM, Charles Kozler 
>> >> wrote:
>> >> > Here is a screenshot of my three nodes and their increased memory
>> >> > usage
>> >> > over
>> >> > 30 days. Note that node #2 had 1 single VM that had 4GB of RAM
>> >> > assigned
>> >> > to
>> >> > it. I had since shut it down and saw no memory 

[ovirt-users] oVirt 3.6.1 with FreeIPA Auth domain performance

2016-01-22 Thread Justin Bushey
Hello,

I just wanted to see if anyone else has seen issues with using FreeIPA as
an authentication domain with oVirt 3.6.1. Specifically, I'm seeing
extremely slow performance when authenticating as an IPA user, between 5-10
minutes to get logged into the UI. On the KDC side I'm seeing ticket
requests from the oVirt host, which succeed and are repeated. Eventually
authentication succeeds to the Web UI.

The IPA domain was added using `engine-manage-domains` with the IPA
provider option. I could configure direct LDAP authentication if absolutely
need be, but this is really bugging me.

Google hasn't turned up any similar issues so I wanted to check if anyone
else has seen anything like this. I can post logs tomorrow if anyone wants
to assist me in troubleshooting ;)

Thanks,

Justin Bushey
InfoRelay Online Systems, Inc.


Re: [ovirt-users] oVirt 3.6.1 with FreeIPA Auth domain performance

2016-01-22 Thread Justin Bushey
Also, I forgot to mention that I am running this against FreeIPA 4.2.

Thanks

On Fri, Jan 22, 2016 at 3:37 AM, Justin Bushey 
wrote:

> Hello,
>
> I just wanted to see if anyone else has seen issues with using FreeIPA as
> an authentication domain with oVirt 3.6.1. Specifically, I'm seeing
> extremely slow performance when authenticating as an IPA user, between 5-10
> minutes to get logged into the UI. On the KDC side I'm seeing ticket
> requests from the oVirt host, which succeed and are repeated. Eventually
> authentication succeeds to the Web UI.
>
> The IPA domain was added using `engine-manage-domains` with the IPA
> provider option. I could configure direct LDAP authentication if absolutely
> need be, but this is really bugging me.
>
> Google hasn't turned up any similar issues so I wanted to check if anyone
> else has seen anything like this. I can post logs tomorrow if anyone wants
> to assist me in troubleshooting ;)
>
> Thanks,
>
> Justin Bushey
> InfoRelay Online Systems, Inc.
>


[ovirt-users] VM manipulate DMI Chassis Serial value

2016-01-22 Thread ovirt

Hi list,

Is it possible to change the DMI Chassis Serial value in the VM config?

Virtualbox is able to perform this via the following command for example:
c:\Program Files\Oracle\VirtualBox\VBoxManage.exe setextradata 
"NAME_OF_VIRTUAL_MACHINE" 
"VBoxInternal/Devices/pcbios/0/Config/DmiChassisSerial" "123456789"


Handle 0x0300, DMI type 3, 21 bytes
Chassis Information
Manufacturer: Red Hat
Type: Other
Lock: Not Present
Version: RHEL 7.2.0 PC (i440FX + PIIX, 1996)
*Serial Number: Not Specified*
Asset Tag: Not Specified
Boot-up State: Safe
Power Supply State: Safe
Thermal State: Safe
Security Status: Unknown
OEM Information: 0x
Height: Unspecified
Number Of Power Cords: Unspecified
Contained Elements: 0

root@demo:~ # dmidecode -s chassis-serial-number
*Not Specified*


I want to change the "Serial Number" value highlighted above!
Is there a way to change that in oVirt as well?

Best regards and thank you for your help.
Christoph




Re: [ovirt-users] Issues after upgrade

2016-01-22 Thread Martin Perina


- Original Message -
> From: "Roman Mohr" 
> To: "Fabien CARRE" 
> Cc: "users" 
> Sent: Friday, January 22, 2016 1:38:40 PM
> Subject: Re: [ovirt-users] Issues after upgrade
> 
> Hi Fabien,
> 
> On Fri, Jan 22, 2016 at 1:08 PM, Fabien CARRE < carre.fab...@gmail.com >
> wrote:
> 
> 
> 
> Hello,
> I am experiencing some issues ever since I upgraded ovirt-engine from
> 3.6.0.3-1.el6 to 3.6.1.3-1.el6.
> 
> The first problem was after the renewal of the certifcate, The browser saying
> : firefox (Error code: sec_error_reused_issuer_and_serial)
> I had to switch back to the previous one to access the Portal.
> 
> The second issue is the portals themselves (user and admin), I am supposed to
> fill all the fields but the profile is an empty drop down menu. (cf attached
> file)
> 
> 
> could it be that you did not run engine-setup again after you updated
> ovirt-engine?

Yes, if the profile/domain combo box in the login dialog is empty, you have
most probably hit [1]. Please try to execute engine-setup and take a look at
the README.admin contained in the ovirt-engine-extension-aaa-jdbc package.
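A quick way to locate that file, assuming the package is installed:

rpm -qd ovirt-engine-extension-aaa-jdbc | grep -i readme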


Thanks

Martin Perina


[1] https://bugzilla.redhat.com/show_bug.cgi?id=1293338

> 
> 
> 
> 
> Has anyone faced those issues ?
> 
> Thank you
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
> 
> Best regards,
> Roman
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 


Re: [ovirt-users] Issues after upgrade

2016-01-22 Thread Fabien CARRE
Hi,
I did run the engine-setup and I had to answer some questions.

regards,

On 22 January 2016 at 13:38, Roman Mohr  wrote:

> Hi Fabien,
>
> On Fri, Jan 22, 2016 at 1:08 PM, Fabien CARRE 
> wrote:
>
>> Hello,
>> I am experiencing some issues ever since I upgraded ovirt-engine from
>> 3.6.0.3-1.el6 to 3.6.1.3-1.el6.
>>
>> The first problem was after the renewal of the certifcate, The browser
>> saying : firefox (Error code: sec_error_reused_issuer_and_serial)
>> I had to switch back to the previous one to access the Portal.
>>
>> The second issue is the portals themselves (user and admin), I am
>> supposed to fill all the fields but the profile is an empty drop down menu.
>> (cf attached file)
>>
>>
> could it be that you did not run engine-setup again after you updated
> ovirt-engine?
>
>
>> Has anyone faced those issues ?
>>
>> Thank you
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
> Best regards,
> Roman
>


Re: [ovirt-users] Issues after upgrade

2016-01-22 Thread Roman Mohr
Hi Fabien,

On Fri, Jan 22, 2016 at 1:08 PM, Fabien CARRE 
wrote:

> Hello,
> I am experiencing some issues ever since I upgraded ovirt-engine from
> 3.6.0.3-1.el6 to 3.6.1.3-1.el6.
>
> The first problem was after the renewal of the certifcate, The browser
> saying : firefox (Error code: sec_error_reused_issuer_and_serial)
> I had to switch back to the previous one to access the Portal.
>
> The second issue is the portals themselves (user and admin), I am supposed
> to fill all the fields but the profile is an empty drop down menu. (cf
> attached file)
>
>
Could it be that you did not run engine-setup again after you updated
ovirt-engine?


> Has anyone faced those issues ?
>
> Thank you
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
Best regards,
Roman


Re: [ovirt-users] memory leak in 3.5.6 - not vdsm

2016-01-22 Thread Charles Kozler
Thanks Sandro. I should clarify that my storage is external on a redundant SAN.
The steps I was concerned about were for the actual upgrade. I tried to upgrade
before and it brought my entire stack crumbling down, so I'm hesitant. This
seems like a huge bug that should be backported if at all possible because,
to me, it renders the entire 3.5.6 branch unusable, as no VMs can be deployed
since OOM will eventually kill them. In any case, that's just my opinion, and
I'm a new user of oVirt. The docs I followed originally got me going the way I
needed and somehow didn't work for 3.6 in the same fashion, so naturally I'm
hesitant to upgrade, but I clearly have no option if I want to continue my
infrastructure on oVirt. Thank you again for taking the time to assist me,
I truly appreciate it. I will try an upgrade next week and pray it all goes
well :-)
On Jan 23, 2016 12:40 AM, "Sandro Bonazzola"  wrote:

>
>
> On Fri, Jan 22, 2016 at 10:53 PM, Charles Kozler 
> wrote:
>
>> Sandro -
>>
>> Do you have available documentation that can support upgrading self
>> hosted? I followed this
>> http://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/
>>
>> Would it be as easy as installing the RPM and then running yum upgrade?
>>
>>
> Note that mentioned article describes an unsupported hyperconverged setup
> running NFS over Gluster.
> That said,
> 1) put the hosted-engine storage domain into global maintenance mode
> 2) upgrade the engine VM
> 3) select the first host to upgrade and put it under maintenance from the
> engine, wait for the engine vm to migrate if needed.
> 4) yum upgrade the first host and wait until ovirt-ha-agent completes
> 5) exit global and local maintenance mode
> 6) repeat 3-5 on all the other hosts
> 7) once all hosts are updated you can increase the cluster compatibility
> level to 3.6. At this point the engine will trigger the auto-import of the
> hosted-engine storage domain.
>
> Simone, Roy, can you confirm above steps? Maybe also you can update
> http://www.ovirt.org/Hosted_Engine_Howto#Upgrade_Hosted_Engine
>
>
>
>> Thanks
>>
>> On Fri, Jan 22, 2016 at 4:42 PM, Sandro Bonazzola 
>> wrote:
>>
>>>
>>> Il 22/Gen/2016 22:31, "Charles Kozler"  ha
>>> scritto:
>>> >
>>> > Hi Nir -
>>> >
>>> > do you have a release target date for 3.5.8? Any estimate would help.
>>> >
>>>
>>> There won't be any supported release after 3.5.6. Please update to 3.6.2
>>> next week
>>>
>>> > If its not VDSM, what is it exactly? Sorry, I understood from the
>>> ticket it was something inside vdsm, was I mistaken?
>>> >
>>> > CentOS 6 is the servers. 6.7 to be exact
>>> >
>>> > I have done all forms of flushing that I can (page cache, inodes,
>>> dentry's, etc) and as well moved VM's around to other nodes and nothing
>>> changes the memory. How can I find the leak? Where is the leak? RES shows
>>> the following of which, the totals dont add up to 20GB
>>> >
>>> >PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
>>>
>>>
>>> >  19044 qemu  20   0 8876m 4.0g 5680 S  3.6 12.9   1571:44 qemu-kvm
>>>
>>>
>>> >  26143 qemu  20   0 5094m 1.1g 5624 S  9.2  3.7   6012:12 qemu-kvm
>>>
>>>
>>> >   5837 root   0 -20  964m 624m 3664 S  0.0  2.0  85:22.09
>>> glusterfs
>>>
>>> >  14328 root   0 -20  635m 169m 3384 S  0.0  0.5  43:15.23
>>> glusterfs
>>>
>>> >   5134 vdsm   0 -20 4368m 111m  10m S  5.9  0.3   3710:50 vdsm
>>>
>>>
>>> >   4095 root  15  -5  727m  43m  10m S  0.0  0.1   0:02.00
>>> supervdsmServer
>>> >
>>> > 4.0G + 1.1G + 624M + 169 + 111M + 43M = ~7GB
>>> >
>>> > This was top sorted by RES from highest to lowest
>>> >
>>> > At that point I wouldnt know where else to look except slab / kernel
>>> structures. Of which slab shows:
>>> >
>>> > [compute[root@node1 ~]$ cat /proc/meminfo | grep -i slab
>>> > Slab:2549748 kB
>>> >
>>> > So roughly 2-3GB. Adding that to the other use of 7GB we have still
>>> about 10GB unaccounted for
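
A quick way to cross-check this kind of accounting (a sketch only; `smem`, if installed, gives a cleaner shared-memory-aware breakdown):

  # sum the resident set size of every process, in kB
  ps -eo rss= | awk '{s += $1} END {print s, "kB"}'

  # kernel-side consumers that per-process RES does not cover
  grep -E '^(Slab|SReclaimable|SUnreclaim|PageTables|KernelStack|Shmem)' /proc/meminfo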
>>> >
>>> > On Fri, Jan 22, 2016 at 4:24 PM, Nir Soffer 
>>> wrote:
>>> >>
>>> >> On Fri, Jan 22, 2016 at 11:08 PM, Charles Kozler <
>>> char...@fixflyer.com> wrote:
>>> >> > Hi Nir -
>>> >> >
>>> >> > Thanks for getting back to me. Will the patch to 3.6 be backported
>>> to 3.5?
>>> >>
>>> >> We plan to include them in 3.5.8.
>>> >>
>>> >> > As you can tell from the images, it takes days and days for it to
>>> increase
>>> >> > over time. I also wasnt sure if that was the right bug because VDSM
>>> memory
>>> >> > shows normal from top ...
>>> >> >
>>> >> >PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+
>>> COMMAND
>>> >> >   5134 vdsm   0 -20 4368m 111m  10m S  2.0  0.3   3709:28 vdsm
>>> >>
>>> >> As you wrote, this issue is not related to vdsm.
>>> >>
>>> >> >
>>> >> > Res is only 111M. This is from node1 which is showing currently
>>> 20GB of 32GB
>>> >> > used with only 2 VMs running on it - 1 with 4G 

Re: [ovirt-users] memory leak in 3.5.6 - not vdsm

2016-01-22 Thread Sandro Bonazzola
On Fri, Jan 22, 2016 at 10:53 PM, Charles Kozler 
wrote:

> Sandro -
>
> Do you have available documentation that can support upgrading self
> hosted? I followed this
> http://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/
>
> Would it be as easy as installing the RPM and then running yum upgrade?
>
>
Note that the mentioned article describes an unsupported hyperconverged setup
running NFS over Gluster.
That said,
1) put the hosted-engine storage domain into global maintenance mode
2) upgrade the engine VM
3) select the first host to upgrade and put it under maintenance from the
engine, wait for the engine vm to migrate if needed.
4) yum upgrade the first host and wait until ovirt-ha-agent completes
5) exit global and local maintenance mode
6) repeat 3-5 on all the other hosts
7) once all hosts are updated you can increase the cluster compatibility
level to 3.6. At this point the engine will trigger the auto-import of the
hosted-engine storage domain.
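
Expressed as shell commands, the above is roughly the following (a sketch only; it assumes the 3.6 release RPM is already installed on the engine VM and on each host, and that putting hosts into maintenance is done from the webadmin UI):

  # step 1, on one of the hosts
  hosted-engine --set-maintenance --mode=global

  # step 2, inside the engine VM
  yum update "ovirt-engine-setup*" && engine-setup

  # steps 3-5, on each host once it is in maintenance in the engine
  yum update
  service ovirt-ha-broker restart && service ovirt-ha-agent restart
  hosted-engine --set-maintenance --mode=none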

Simone, Roy, can you confirm the above steps? Maybe you can also update
http://www.ovirt.org/Hosted_Engine_Howto#Upgrade_Hosted_Engine



> Thanks
>
> On Fri, Jan 22, 2016 at 4:42 PM, Sandro Bonazzola 
> wrote:
>
>>
>> Il 22/Gen/2016 22:31, "Charles Kozler"  ha scritto:
>> >
>> > Hi Nir -
>> >
>> > do you have a release target date for 3.5.8? Any estimate would help.
>> >
>>
>> There won't be any supported release after 3.5.6. Please update to 3.6.2
>> next week
>>
>> > If its not VDSM, what is it exactly? Sorry, I understood from the
>> ticket it was something inside vdsm, was I mistaken?
>> >
>> > CentOS 6 is the servers. 6.7 to be exact
>> >
>> > I have done all forms of flushing that I can (page cache, inodes,
>> dentry's, etc) and as well moved VM's around to other nodes and nothing
>> changes the memory. How can I find the leak? Where is the leak? RES shows
>> the following of which, the totals dont add up to 20GB
>> >
>> >PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
>>
>>
>> >  19044 qemu  20   0 8876m 4.0g 5680 S  3.6 12.9   1571:44 qemu-kvm
>>
>>
>> >  26143 qemu  20   0 5094m 1.1g 5624 S  9.2  3.7   6012:12 qemu-kvm
>>
>>
>> >   5837 root   0 -20  964m 624m 3664 S  0.0  2.0  85:22.09 glusterfs
>>
>>
>> >  14328 root   0 -20  635m 169m 3384 S  0.0  0.5  43:15.23 glusterfs
>>
>>
>> >   5134 vdsm   0 -20 4368m 111m  10m S  5.9  0.3   3710:50 vdsm
>>
>>
>> >   4095 root  15  -5  727m  43m  10m S  0.0  0.1   0:02.00
>> supervdsmServer
>> >
>> > 4.0G + 1.1G + 624M + 169 + 111M + 43M = ~7GB
>> >
>> > This was top sorted by RES from highest to lowest
>> >
>> > At that point I wouldnt know where else to look except slab / kernel
>> structures. Of which slab shows:
>> >
>> > [compute[root@node1 ~]$ cat /proc/meminfo | grep -i slab
>> > Slab:2549748 kB
>> >
>> > So roughly 2-3GB. Adding that to the other use of 7GB we have still
>> about 10GB unaccounted for
>> >
>> > On Fri, Jan 22, 2016 at 4:24 PM, Nir Soffer  wrote:
>> >>
>> >> On Fri, Jan 22, 2016 at 11:08 PM, Charles Kozler 
>> wrote:
>> >> > Hi Nir -
>> >> >
>> >> > Thanks for getting back to me. Will the patch to 3.6 be backported
>> to 3.5?
>> >>
>> >> We plan to include them in 3.5.8.
>> >>
>> >> > As you can tell from the images, it takes days and days for it to
>> increase
>> >> > over time. I also wasnt sure if that was the right bug because VDSM
>> memory
>> >> > shows normal from top ...
>> >> >
>> >> >PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
>> >> >   5134 vdsm   0 -20 4368m 111m  10m S  2.0  0.3   3709:28 vdsm
>> >>
>> >> As you wrote, this issue is not related to vdsm.
>> >>
>> >> >
>> >> > Res is only 111M. This is from node1 which is showing currently 20GB
>> of 32GB
>> >> > used with only 2 VMs running on it - 1 with 4G and another with ~1
>> GB of RAM
>> >> > configured
>> >> >
>> >> > The images are from nagios and the value here is a direct
>> correlation to
>> >> > what you would see in the free command output. See below from an
>> example of
>> >> > node 1 and node 2
>> >> >
>> >> > [compute[root@node1 ~]$ free
>> >> >              total       used       free     shared    buffers     cached
>> >> > Mem:      32765316   20318156   12447160        252      30884     628948
>> >> > -/+ buffers/cache:   19658324   13106992
>> >> > Swap:     19247100          0   19247100
>> >> > [compute[root@node1 ~]$ free -m
>> >> >              total       used       free     shared    buffers     cached
>> >> > Mem:         31997      19843      12153          0         30        614
>> >> > -/+ buffers/cache:      19199      12798
>> >> > Swap:        18795          0      18795
>> >> >
>> >> > And its correlated image http://i.imgur.com/PZLEgyx.png (~19GB used)
>> >> >
>> >> > And as a control, node 2 that I just restarted today
>> >> >
>> >> > [compute[root@node2 ~]$ free
>> >> >   

[ovirt-users] Cannot login after upgrade from 3.5 to 3.6

2016-01-22 Thread Marcelo Leandro
After running engine-setup to go from ovirt-engine-3.5.6.2-1.el7.centos to
ovirt-engine-3.6.1.3-1.el7.centos, I'm not able to log in anymore. The
engine.log says:
Caused by: org.codehaus.jackson.map.JsonMappingException: Invalid type
id 'org.ovirt.engine.core.common.businessentities.DiskImage' (for id
type 'Id.class'): no such class found (through reference chain:
org.ovirt.engine.core.common.action.AddVmFromSnapshotParameters["vm"]->org.ovirt.engine.core.common.businessentities.VM["diskList"])

Here is the setup log:
https://copy.com/YBddV8bg3tnjHnwp

And here is the engine.log:
https://copy.com/TWjgIu7KwSkcmRZ8

Need urgent assistance. Thank you in advance.

Best Regards,
Marcelo Leandro
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cannot login after upgrade from 3.5 to 3.6

2016-01-22 Thread Amador Pahim

On 01/22/2016 03:03 PM, Marcelo Leandro wrote:

After running engine-setup to go from ovirt-engine-3.5.6.2-1.el7.centos to
ovirt-engine-3.6.1.3-1.el7.centos, I'm not able to log in anymore. The
engine.log says:
Caused by: org.codehaus.jackson.map.JsonMappingException: Invalid type
id 'org.ovirt.engine.core.common.businessentities.DiskImage' (for id
type 'Id.class'): no such class found (through reference chain:
org.ovirt.engine.core.common.action.AddVmFromSnapshotParameters["vm"]->org.ovirt.engine.core.common.businessentities.VM["diskList"])


Hmm, I see a similar message here:
http://lists.ovirt.org/pipermail/devel/2015-July/010976.html

Not sure if related. Eli? Any thoughts here?
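
If this turns out to be the same class of problem (commands/tasks serialized before the upgrade with class names that no longer resolve), one thing worth checking is whether the engine database still holds such entries. A read-only sketch, assuming the default database name 'engine' and local peer authentication for the postgres user:

  sudo -u postgres psql engine -c "SELECT count(*) FROM async_tasks;"
  sudo -u postgres psql engine -c "SELECT count(*) FROM command_entities;"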



Here is the setup log:
https://copy.com/YBddV8bg3tnjHnwp

And here is the engine.log:
https://copy.com/TWjgIu7KwSkcmRZ8

Need urgent assistance. Thank you in advance.

Best Regards,
Marcelo Leandro
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Issues after upgrade

2016-01-22 Thread Fabien CARRE
Hello,
After another engine-setup run, everything is back to normal!
Thanks

On 22 January 2016 at 13:47, Martin Perina  wrote:

>
>
> - Original Message -
> > From: "Roman Mohr" 
> > To: "Fabien CARRE" 
> > Cc: "users" 
> > Sent: Friday, January 22, 2016 1:38:40 PM
> > Subject: Re: [ovirt-users] Issues after upgrade
> >
> > Hi Fabien,
> >
> > On Fri, Jan 22, 2016 at 1:08 PM, Fabien CARRE < carre.fab...@gmail.com >
> > wrote:
> >
> >
> >
> > Hello,
> > I am experiencing some issues ever since I upgraded ovirt-engine from
> > 3.6.0.3-1.el6 to 3.6.1.3-1.el6.
> >
> > The first problem appeared after the renewal of the certificate; Firefox
> > shows (Error code: sec_error_reused_issuer_and_serial)
> > I had to switch back to the previous one to access the Portal.
> >
> > The second issue is the portals themselves (user and admin): I am
> > supposed to fill in all the fields, but the profile drop-down menu is empty
> > (see attached file)
> >
> >
> > could it be that you did not run engine-setup again after you updated
> > ovirt-engine?
>
> Yes, if the profile/domain combo box in the login dialog is empty, you are
> most probably hitting [1]. Please try to execute engine-setup and take a look
> at README.admin contained in the ovirt-engine-extension-aaa-jdbc package.
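
For reference, on the engine host that boils down to roughly this (a sketch; package and file names as shipped with 3.6):

  engine-setup                                                # re-run setup after updating the packages
  rpm -qd ovirt-engine-extension-aaa-jdbc | grep -i readme    # locate README.admin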
>
>
> Thanks
>
> Martin Perina
>
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1293338
>
> >
> >
> >
> >
> > Has anyone faced those issues ?
> >
> > Thank you
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
> >
> > Best regards,
> > Roman
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 3.6.1 with FreeIPA Auth domain performance

2016-01-22 Thread Donny Davis
I use FreeIPA without issue with the AAA LDAP extension.

Here is a simple write-up that may help you understand how aaa-ldap works.
It is outdated, so don't just copy and paste, but it will help
you get the gist:

https://ipv6cloud.wordpress.com/2014/12/16/ovirt-simple-ldap-aaa/
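
For 3.6, a minimal IPA connection profile has roughly the following shape (a sketch based on the aaa-ldap README; the file name, server, bind DN and password below are placeholders to adapt, and you still need the matching authn/authz extension files under /etc/ovirt-engine/extensions.d/):

  cat > /etc/ovirt-engine/aaa/example.properties << 'EOF'
  include = <ipa.properties>
  vars.server = ipa.example.com
  vars.user = uid=ovirt-search,cn=users,cn=accounts,dc=example,dc=com
  vars.password = ChangeMe

  pool.default.serverset.single.server = ${global:vars.server}
  pool.default.auth.simple.bindDN = ${global:vars.user}
  pool.default.auth.simple.password = ${global:vars.password}
  EOF
  # then restart the ovirt-engine service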

On Fri, Jan 22, 2016 at 2:08 PM, Justin Bushey 
wrote:

> Ondra,
>
> Thanks again. You've definitely saved me from spending too much time going
> down a bunny hole.
>
> -- Justin
>
> On Fri, Jan 22, 2016 at 4:35 AM, Ondra Machacek 
> wrote:
>
>> Hi,
>>
>> the best thing you can do is to migrate to the new AAA LDAP extension [1],
>> as you will have to do so in 4.0 anyway, since manage-domains
>> will be removed; so I think it is better to invest time in the migration
>> than in searching for the root cause. We will be happy to help
>> you with the migration. You can also try the migration tool [2].
>>
>> Ondra
>>
>> [1]
>> https://gerrit.ovirt.org/gitweb?p=ovirt-engine-extension-aaa-ldap.git;a=blob;f=README
>> [2]
>> https://github.com/machacekondra/ovirt-engine-kerbldap-migration/releases
>>
>>
>> On 01/22/2016 09:37 AM, Justin Bushey wrote:
>>
>>> Hello,
>>>
>>> I just wanted to see if anyone else has seen issues with using FreeIPA
>>> as an authentication domain with oVirt 3.6.1. Specifically, I'm seeing
>>> extremely slow performance when authenticating as an IPA user, between
>>> 5-10 minutes to get logged into the UI. On the KDC side I'm seeing
>>> ticket requests from the oVirt host, which succeed and are repeated.
>>> Eventually authentication succeeds to the Web UI.
>>>
>>> The IPA domain was added using `engine-manage-domains` with the IPA
>>> provider option. I could configure direct LDAP authentication if
>>> absolutely need be, but this is really bugging me.
>>>
>>> Google hasn't turned up any similar issues so I wanted to check if
>>> anyone else has seen anything like this. I can post logs tomorrow if
>>> anyone wants to assist me in troubleshooting ;)
>>>
>>> Thanks,
>>>
>>> Justin Bushey
>>> InfoRelay Online Systems, Inc.
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Donny Davis
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users