[ovirt-users] Export/Import Virtual Machines

2017-02-02 Thread Fernando Frediani

Hi.

I found that it is possible to import Virtual Machines from a VMware 
environment, XEN, or even Libvirt/KVM, but not directly from 
another oVirt environment. Is this expected?


I have two environments where I need to transfer VMs from one to the other, 
and the only way I see is to export a VM to an Export Domain, copy it over to 
an Export Domain on the other side, and then import it. Is this intended to 
be like that? Why do other environments work straight away while oVirt's 
own does not?


Thanks

Fernando

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] planned Jenkins and Resources restart

2017-02-02 Thread Evgheni Dereveanchin
Maintenance completed; all services are back up
and running. As always, if you see any
issues please report them to Jira.

Regards, 
Evgheni Dereveanchin 

- Original Message -
From: "Evgheni Dereveanchin" 
To: "infra" 
Cc: "devel" , users@ovirt.org
Sent: Thursday, 2 February, 2017 6:43:18 PM
Subject: planned Jenkins and Resources restart

Hi everyone,

I will be applying security updates to our
production systems today. During the maintenance the
following services may be unavailable for short
periods of time:

jenkins.ovirt.org
resources.ovirt.org
artifactory.ovirt.org
templates.ovirt.org
mirrors.phx.ovirt.org
proxy.phx.ovirt.org

Jenkins will not schedule new builds while services
are rebooting, as this can cause false positives.

I will let you know once the maintenance is over.

Regards, 
Evgheni Dereveanchin 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-02 Thread serg_k



Updated from 4.0.6.
The docs are quite incomplete: they don't mention that ovirt-release41 has to be installed manually on CentOS hypervisors and oVirt nodes, so you have to guess.
Also, links in the release notes are broken ( https://www.ovirt.org/release/4.1.0/ ).
They point to https://www.ovirt.org/release/4.1.0/Hosted_Engine_Howto , but the docs for 4.1.0 are absent.

The upgrade went well; everything migrated without problems (I only needed to restart VMs to change the cluster level to 4.1).
Good news: the SPICE HTML 5 client is now working for me on a Windows client with Firefox; on 4.x it kept sending connect requests forever.

There are some bugs I've found playing with the new version:
1) some storage tabs display "No items to display".
For example:
if I expand System\Data centers\[dc name]\ and select Storage, it displays nothing in the main tab, but displays all domains in the tree;
if I select [dc name] and the Storage tab, also nothing;
but in the System \ Storage tab all domains are present,
and in the Clusters\[cluster name]\ Storage tab they are present too.

2) links to embedded files and clients aren't working; the engine returns 404. Examples:
https://[your manager's address]/ovirt-engine/services/files/spice/usbdk-x64.msi
https://[your manager's address]/ovirt-engine/services/files/spice/virt-viewer-x64.msi
and others,
even though they appear in the docs (both oVirt's and RHEL's).

3) there is also a link in the "Console options" menu (right-click on a VM) called "Console Client Resources"; it points to a dead location:
http://www.ovirt.org/documentation/admin-guide/virt/console-client-resources
If you are going to fix issue 2, adding links directly to the embedded installation files would be even more helpful for users.

4) a little disappointed about "pass discards" on NFS storage, as I've found that the NFS
implementation (even 4.1) in CentOS 7 doesn't support fallocate(FALLOC_FL_PUNCH_HOLE),
which qemu uses for file storage; it was added only in kernel 3.18. Sparsify is also not
working, but I'll start a separate thread with this question.

-- 

Thursday, February 2, 2017, 15:19:29:

Hi,
did you install/update to 4.1.0? Let us know your experience!
We end up knowing only when things don't work well; let us know if it works fine for you :-)

If you're not planning an update to 4.1.0 in the near future, let us know why.
Maybe we can help.

Thanks!
-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Export/Import Virtual Machines

2017-02-02 Thread Nir Soffer
On Thu, Feb 2, 2017 at 10:38 PM, Fernando Frediani
 wrote:
> Hi.
>
> I found that it is possible to import Virtual Machines from a VMware
> environment, XEN, or even Libvirt/KVM, but not directly from another
> oVirt environment. Is this expected?

Yes, but I agree that having *one* import UI for any kind of virtualization
solution would be cool.

> I have two environments where I need to transfer VMs from one to the other, and
> the only way I see is to export a VM to an Export Domain, copy it over to an
> Export Domain on the other side, and then import it.

No, you can export the vm to the export domain, then detach the export
domain from one environment and attach it to the other, where you can import
the vm.

Another way is to detach the entire data domain from one environment, and attach
it to the other environment, where you can import the vms without copying
anything, assuming that you can leave the domain attached to the new
environment.
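
A rough sketch of that detach/attach flow against the REST API (v3-style
paths; the engine addresses, IDs and the domain name below are placeholders,
not taken from this thread, and the domain must be in maintenance before
detaching):

# detach the export domain from the source data center
curl --cacert ca.pem --user "admin@internal:password" \
     --request DELETE \
     "https://engine1.example.com/ovirt-engine/api/datacenters/DC_ID/storagedomains/SD_ID"

# attach it to the destination data center
curl --cacert ca.pem --user "admin@internal:password" \
     --request POST \
     --header "Content-Type: application/xml" \
     --data '<storage_domain><name>export1</name></storage_domain>' \
     "https://engine2.example.com/ovirt-engine/api/datacenters/DC_ID/storagedomains"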

We support upload/download of disk snapshots. If you have a vm with one disk
without snapshots, you can download the disk from one environment, and upload
it to the new environment, and then attach it to existing or new vm. We plan to
support upload/download of entire vms in the future.

Can you share more details about why you need to move vms from one environment
to another regularly?

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] problem after rebooting the node

2017-02-02 Thread Edward Haas
Hello Shalabh,

Please provide the logs from your node:
- messages
- vdsm/vdsm.log, vdsm/supervdsm.log

It may be that the openvswitch package is missing, although VDSM should
not require it for its operation.
Thanks,
Edy.


On Thu, Feb 2, 2017 at 2:10 PM, Shalabh Goel 
wrote:

> HI,
>
> I am getting the following error on my node after rebooting it.
>
> VDSM ovirtnode2 command HostSetupNetworksVDS failed: Executing commands
> failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection
> failed (No such file or directory)
>
>
> To solve this, I am trying to restart ovsdb-server using the following
> command,
>
> ovsdb-server --remote=punix:/var/run/openvswitch/db.sock \
>   --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
>   --private-key=db:Open_vSwitch,SSL,private_key \
>   --certificate=db:Open_vSwitch,SSL,certificate \
>   --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert \
>   --pidfile --detach
>
> But I am getting the following error.
>
> ovsdb-server: /var/run/openvswitch/ovsdb-server.pid.tmp: create failed
> (No such file or directory)
>
> How do I restart the ovsdb-server? Also, the ovirtmgmt network is missing from
> my node. It happened after I rebooted my node after it was upgraded to
> oVirt 4.1
>
> --
> Shalabh Goel
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM has been paused due to storage I/O problem

2017-02-02 Thread Gianluca Cecchi
On Thu, Feb 2, 2017 at 10:53 PM, Benjamin Marzinski 
wrote:

>
> > > I'm trying to mitigate inserting a timeout for my SAN devices but I'm
> not
> > > sure of its effectiveness as CentOS 7 behavior  of "multipathd -k" and
> then
> > > "show config" seems different from CentOS 6.x
> > > In fact my attempt for multipath.conf is this
>
> There was a significant change in how multipath deals with merging
> device configurations between RHEL6 and RHEL7.  The short answer is, as
> long as you copy the entire existing configuration, and just change what
> you want changed (like you did), you can ignore the change.  Also,
> multipath doesn't care if you quote numbers.
>
> > If you want to verify that no_path_retry is being set as intended, you
> can run:
>
> # multipath -r -v3 | grep no_path_retry
>

Hi Benjamin,
thank you very much for the explanations, especially the long one ;-)
I tried and confirmed that I have no_path_retry = 4 as expected.

The regex matching is only for merge, correct?
So in your example if in RH EL 7 I put this

device {
vendor "IBM"
product "^1814"
no_path_retry 12
}

It would not match for merging, but it would match for applying to my
device (because it is put at the end of the config, which is read backwards).
And it would apply only the no_path_retry setting, while all the other ones
would not be picked from the builtin configuration for the device, but from
the general defaults.
So, for example, it would not set path_checker this way:
path_checker "rdac"

but this way:
path_checker "directio"
which is the default.

Correct?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] NFS and pass discards\unmap question

2017-02-02 Thread Sergey Kulikov

I've upgraded to the 4.1 release; it has a great feature, "Pass discards", which
can now be used without vdsm hooks.
After the upgrade I tested it with NFS 4.1 storage exported from a NetApp, but
unfortunately found out that it's not working. After some investigation, I found
that the NFS implementation (even 4.1) in CentOS 7 doesn't support sparse files
or fallocate(FALLOC_FL_PUNCH_HOLE), which qemu uses for file storage; it was
added only in kernel 3.18, and sparse file support is an announced feature of
the upcoming NFS 4.2.
Sparsify is also not working on these data domains (it runs, but nothing happens).

This test also shows that FALLOC_FL_PUNCH_HOLE is not working; it was executed on
a CentOS oVirt host with a mounted NFS share:
# truncate -s 1024 test1
# fallocate -p -o 0 -l 1024 test1
fallocate: keep size mode (-n option) unsupported
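
A slightly fuller probe along the same lines (a sketch; the mount point and
file name are placeholders): the second du should shrink if FALLOC_FL_PUNCH_HOLE
is supported by the filesystem.
# dd if=/dev/zero of=/mnt/nfs/punchtest bs=1M count=100
# du -h /mnt/nfs/punchtest
# fallocate --punch-hole --offset 0 --length 50M /mnt/nfs/punchtest
# du -h /mnt/nfs/punchtest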

Are there any plans to backport this feature to node-ng or CentOS, or should we
wait for RHEL 8?
NFS is more and more popular, so discard is a VERY useful feature.
I'm also planning to test fallocate on the latest Fedora with a 4.x kernel and a
mounted NFS share.

Thanks for your work!

-- 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM has been paused due to storage I/O problem

2017-02-02 Thread Benjamin Marzinski
On Wed, Feb 01, 2017 at 09:39:45AM +0200, Nir Soffer wrote:
> On Tue, Jan 31, 2017 at 6:09 PM, Gianluca Cecchi
>  wrote:
> > On Tue, Jan 31, 2017 at 3:23 PM, Nathanaël Blanchet 
> > wrote:
> >>
> >> exactly the same issue by there with FC EMC domain storage...
> >>
> >>
> >
> > I'm trying to mitigate inserting a timeout for my SAN devices but I'm not
> > sure of its effectiveness as CentOS 7 behavior  of "multipathd -k" and then
> > "show config" seems different from CentOS 6.x
> > In fact my attempt for multipath.conf is this

There was a significant change in how multipath deals with merging
device configurations between RHEL6 and RHEL7.  The short answer is, as
long as you copy the entire existing configuration, and just change what
you want changed (like you did), you can ignore the change.  Also,
multipath doesn't care if you quote numbers.

If you want to verify that no_path_retry is being set as intended, you
can run:

# multipath -r -v3 | grep no_path_retry

This reloads your multipath devices with verbosity turned up. You should see
lines like:

Feb 02 09:38:30 | mpatha: no_path_retry = 12 (controller setting)

That will tell you what no_path_retry is set to.

The configuration Nir suggested at the end of this email looks good to
me.

Now, here's the long answer:

multipath allows you to merge device configurations.  This means that as
long as you put in the "vendor" and "product" strings, you only need to
set the other values that you care about. On RHEL6, this would work

device {
vendor "IBM"
product "^1814"
no_path_retry 12
}

And it would create a configuration that was exactly the same as the
builtin config for this device, except that no_path_retry was set to 12.
However, this wasn't as easy for users as it was supposed to be.
Specifically, users would often add their device's vendor and product
information, as well as whatever they wanted changed, and then be
surprised when multipath didn't retain all the information from the
builtin configuration as advertised. This is because they used the
actual vendor and product strings for their device, but the builtin
device configuration's vendor and product strings were regexes. In
RHEL6, multipath only merged configurations if the vendor and product
strings matched exactly as strings. So users would try

device {
vendor "IBM"
product "1814 FASt"
no_path_retry 12
}

and it wouldn't work as expected, since the product strings didn't
match.  To fix this, when RHEL7 checks whether a user configuration should be
merged with a builtin configuration, all that is required is that the
user configuration's vendor and product strings regex-match the builtin's.
This means that the above configuration will work as expected in RHEL7.
However, the first configuration won't, because "^1814" doesn't regex-match
"^1814".  This means that multipath would treat it as a completely
new configuration, and not merge any values from the builtin
configuration.  You can reenable the RHEL6 behaviour in RHEL7 by setting 

hw_str_match yes

in the defaults section.
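
In context, that is just a minimal fragment like:

defaults {
hw_str_match yes
}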

Now, because the builtin configurations could handle more than one
device type per configuration, since they used regexes to match the
vendor and product strings, multipath couldn't just remove the original
builtin configuration when users added a new configuration that modified
it.  Otherwise, devices that regex matched the builtin configuration's
vendor and product strings but not the user configuration's vendor and
product strings wouldn't have any device configuration information. So
multipath keeps the original builtin configuration as well as the new
one.  However, when it's time to assign a device configuration to a
device, multipath looks through the device configurations list
backwards, and finds the first match.  This means that it will always
use the user configuration instead of the builtin one (since new
configurations get added to the end of the list).

Like I said before, if you add all the values you want set in your
configuration, instead of relying on them being merged from the builtin
configuration, then you don't need to worry about any of this.

-Ben

> >
> > # VDSM REVISION 1.3
> > # VDSM PRIVATE
> >
> > defaults {
> > polling_interval 5
> > no_path_retry fail
> > user_friendly_names no
> > flush_on_last_del yes
> > fast_io_fail_tmo 5
> > dev_loss_tmo 30
> > max_fds 4096
> > }
> >
> > # Remove devices entries when overrides section is available.
> > devices {
> > device {
> > # These settings overrides built-in devices settings. It does not
> > apply
> > # to devices without built-in settings (these use the settings in
> > the
> > # "defaults" section), or to devices defined in the "devices"
> > section.

Re: [ovirt-users] ovirt-engine failed to check for updates

2017-02-02 Thread Martin Perina
Hi,

could you please share what repositories you have configured on the host
nodes?

As already mentioned in the thread ovirt-imageio is included in 4.0, but
not in 3.6

Thanks

Martin Perina


On Wed, Feb 1, 2017 at 8:52 PM, Michael Watters  wrote:

> The engine is 4.0, the host nodes are running 3.6 with VDSM 4.17.35.
>
>
> On 02/01/2017 02:41 PM, Victor Jose Acosta wrote:
> > Hello
> >
> > ovirt-imageio-daemon is part of ovirt 4 repository, is your engine
> > version 4 or 3.6?
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to update ovirt nodes

2017-02-02 Thread Yaniv Kaul
On Thu, Feb 2, 2017 at 10:32 AM, Yedidyah Bar David  wrote:

> On Thu, Feb 2, 2017 at 8:14 AM, Shalabh Goel 
> wrote:
> > Hi
> >
> > Ovirt 4.1 has been released, I want to know how to upgrade the Ovirt
> > nodes to 4.1. I was able to upgrade ovirt-engine but I could not find
> how to
> > upgrade node.
>
> Something like:
>
> 1. Move node to maintenance
> 2. Add 4.1 repos
> 3. yum update
> 4. reboot
> 5. Activate (exit maintenance)
>

Once all nodes are at the 4.1 level you should move the cluster and then the DC
to 4.1 and enjoy its new features.
Y.


>
> See also:
>
> https://www.ovirt.org/node/
> http://www.ovirt.org/node/faq/
>
> If in "node" you referred to ovirt-node, and not a regular OS install, then
> the flow is exactly the same, but what actually happens inside it is very
> different - the entire OS image is replaced.
>
> You might want to check also:
>
> http://www.ovirt.org/develop/release-management/features/
> engine/upgrademanager/
>
> Best,
> --
> Didi
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to update ovirt nodes

2017-02-02 Thread Yedidyah Bar David
On Thu, Feb 2, 2017 at 8:14 AM, Shalabh Goel  wrote:
> Hi
>
> Ovirt 4.1 has been released, I want to know how to upgrade the Ovirt
> nodes to 4.1. I was able to upgrade ovirt-engine but I could not find how to
> upgrade node.

Something like:

1. Move node to maintenance
2. Add 4.1 repos
3. yum update
4. reboot
5. Activate (exit maintenance)
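
For steps 2-4 on a host, a minimal sketch (assuming the usual location of the
4.1 release RPM; run it only after the host is in maintenance):

yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
yum update
reboot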

See also:

https://www.ovirt.org/node/
http://www.ovirt.org/node/faq/

If in "node" you referred to ovirt-node, and not a regular OS install, then
the flow is exactly the same, but what actually happens inside it is very
different - the entire OS image is replaced.

You might want to check also:

http://www.ovirt.org/develop/release-management/features/engine/upgrademanager/

Best,
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM has been paused due to storage I/O problem

2017-02-02 Thread Yaniv Kaul
On Thu, Feb 2, 2017 at 12:04 PM, Gianluca Cecchi 
wrote:

>
>
> On Thu, Feb 2, 2017 at 10:48 AM, Nir Soffer  wrote:
>
>> On Thu, Feb 2, 2017 at 1:11 AM, Gianluca Cecchi
>>  wrote:
>> > On Wed, Feb 1, 2017 at 8:22 PM, Gianluca Cecchi <
>> gianluca.cec...@gmail.com>
>> > wrote:
>> >>
>> >>
>> >> OK. In the mean time I have applied your suggested config and restarted
>> >> the 2 nodes.
>> >> Let we test and see if I find any problems running also some I/O tests.
>> >> Thanks in the mean time,
>> >> Gianluca
>> >
>> >
>> >
>> > Quick test without much success
>> >
>> > Inside the guest I run this loop
>> > while true
>> > do
>> > time dd if=/dev/zero bs=1024k count=1024 of=/home/g.cecchi/testfile
>>
>
A single 'dd' rarely saturates high-performance storage.
There are better utilities for testing ('fio', 'vdbench' and 'ddpt', for
example).
It's also testing a very theoretical scenario: you very rarely write zeros,
and very rarely write that much sequential IO with a fixed block
size. So it's almost 'hero numbers'.
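
For example, a direct-IO sequential write test with fio might look like this
(a sketch; the target path, size and queue depth are placeholders):

fio --name=seqwrite --filename=/home/g.cecchi/testfile \
    --rw=write --bs=1M --size=1g --direct=1 \
    --ioengine=libaio --iodepth=16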

> sleep 5
>> > done
>>
>> I don't think this test is related to the issues you reported earlier.
>>
>>
> I thought the same too, and all related comments you wrote.
> I'm going to test the suggested modifications for chunks.
> In general do you recommend thin provisioning at all on SAN storage?
>

Depends on your SAN. On a thin-provisioned one (with potentially inline dedup
and compression, such as XtremIO, Pure, Nimble and others) I don't see
great value in thin provisioning.


>
> I decided to switch to preallocated for further tests and confirm
> So I created a snapshot and then a clone of the VM, changing allocation
> policy of the disk to preallocated.
> So far so good.
>
> Feb 2, 2017 10:40:23 AM VM ol65preallocated creation has been completed.
> Feb 2, 2017 10:24:15 AM VM ol65preallocated creation was initiated by
> admin@internal-authz.
> Feb 2, 2017 10:22:31 AM Snapshot 'for cloning' creation for VM 'ol65' has
> been completed.
> Feb 2, 2017 10:22:31 AM Snapshot 'for cloning' creation for VM 'ol65' was
> initiated by admin@internal-authz.
>
> so the throughput seems ok based on this storage type (the LUNs are on
> RAID5 made with sata disks): 16 minutes to write 90Gb is about 96MBytes/s,
> what expected
>

What is your expectation? Is it FC, iSCSI? How many paths? What is the IO
scheduler in the VM? Is it using virtio-blk or virtio-SCSI?
Y.



>
> What I see in messages during the cloning phase, from 10:24 to 10:40:
>
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:14 ovmsrv05 journal: vdsm root WARNING File:
> /rhev/data-center/588237b8-0031-02f6-035d-0136/
> 922b5269-ab56-4c4d-838f-49d33427e2ab/images/9d1c977f-
> 540d-436a-9d93-b1cb0816af2a/607dbf59-7d4d-4fc3-ae5f-e8824bf82648 already
> removed
> Feb  2 10:24:14 ovmsrv05 multipathd: dm-15: remove map (uevent)
> Feb  2 10:24:14 ovmsrv05 multipathd: dm-15: devmap not registered, can't
> remove
> Feb  2 10:24:14 ovmsrv05 multipathd: dm-15: remove map (uevent)
> Feb  2 10:24:17 ovmsrv05 kernel: blk_update_request: critical target
> error, dev dm-4, sector 44566529
> Feb  2 10:24:17 ovmsrv05 kernel: dm-15: WRITE SAME failed. Manually
> zeroing.
> Feb  2 10:40:07 ovmsrv05 kernel: scsi_verify_blk_ioctl: 16 callbacks
> suppressed
> Feb  2 10:40:07 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:40:17 ovmsrv05 multipathd: dm-15: remove map (uevent)
> Feb  2 10:40:17 ovmsrv05 multipathd: dm-15: devmap not registered, can't
> remove
> Feb  2 10:40:17 ovmsrv05 multipathd: dm-15: remove map (uevent)
> Feb  2 10:40:22 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
>
>
>
>
>> > After about 7 rounds I get this in messages of the host where the VM is
>> > running:
>> >
>> > Feb  1 23:31:39 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a
>> partition!
>> > Feb  1 23:31:39 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a
>> partition!
>> > Feb  1 23:31:44 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a
>> partition!
>> > Feb  1 23:31:45 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a
>> partition!
>> > Feb  1 23:31:45 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a
>> partition!
>> > Feb  1 23:31:47 

Re: [ovirt-users] ovirt-engine failed to check for updates

2017-02-02 Thread Yedidyah Bar David
On Wed, Feb 1, 2017 at 10:45 PM, Victor Jose Acosta 
wrote:

> So, that's the problem: the engine is looking for updates for version 4, and it
> does not matter that your node is version 3.6. I think this is a bug - the engine
> should be looking for node version updates instead of engine version.
>

Do you intend a 4.0 engine to update a 3.6 host to a later 3.6.z version?
That's not supported.

To update the host to 4.0, you should add 4.0 repos to the host.
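
For example (a sketch, assuming the usual location of the 4.0 release RPM):

yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm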


>
> On 01/02/17 17:33, users-requ...@ovirt.org wrote:
>
> Re:  ovirt-engine failed to check for updates
>
>
> --
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM has been paused due to storage I/O problem

2017-02-02 Thread Nir Soffer
On Thu, Feb 2, 2017 at 1:11 AM, Gianluca Cecchi
 wrote:
> On Wed, Feb 1, 2017 at 8:22 PM, Gianluca Cecchi 
> wrote:
>>
>>
>> OK. In the mean time I have applied your suggested config and restarted
>> the 2 nodes.
>> Let we test and see if I find any problems running also some I/O tests.
>> Thanks in the mean time,
>> Gianluca
>
>
>
> Quick test without much success
>
> Inside the guest I run this loop
> while true
> do
> time dd if=/dev/zero bs=1024k count=1024 of=/home/g.cecchi/testfile
> sleep 5
> done

I don't think this test is related to the issues you reported earlier.

What you test here is how fast ovirt thin provisioning can extend your
disk when writing zeros. We don't handle that very well: each extend
needs about 4-6 seconds to complete before we refresh the lv
on the host running the vm, and this is the *best* case. In the worst
case it can take much longer.

Also what you tested here is how fast you can write to your vm buffer cache,
since you are not using direct io.

A better way to perform this test is:

time dd if=/dev/zero bs=1024k count=1024 of=/home/g.cecchi/testfile oflag=direct

This will give you the time to actually write data to storage.

If you have a real issue with vms pausing during writes when vm disk has to
be extended, you can enlarge the extend chunk, 1GiB by default.

To use chunks of 2GiB, set:

[irs]
volume_utilization_percent = 50
volume_utilization_chunk_mb = 2048

This will extend the drive when free space is less than 1024MiB
(volume_utilization_chunk_mb * (100 - volume_utilization_percent) / 100)

If this is not enough, you can also use lower volume_utilization_percent,
for example, this will extend the disk in 2GiB chunks when free space
is bellow 1536MiB:

[irs]
volume_utilization_percent = 25
volume_utilization_chunk_mb = 2048

> BTW: my home is inside / filesystem on guest that has space to accomodate
> 1Gb of the dd command:
> [g.cecchi@ol65 ~]$ df -h /home/g.cecchi/
> FilesystemSize  Used Avail Use% Mounted on
> /dev/mapper/vg_ol65-lv_root
>20G  4.9G   14G  27% /
> [g.cecchi@ol65 ~]$
>
> After about 7 rounds I get this in messages of the host where the VM is
> running:
>
> Feb  1 23:31:39 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  1 23:31:39 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  1 23:31:44 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  1 23:31:45 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  1 23:31:45 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  1 23:31:47 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  1 23:31:50 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  1 23:31:50 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  1 23:31:56 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  1 23:31:57 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a partition!

This is interesting; we have seen these messages before, but could never
detect the flow causing them. Are you sure you see this each time you extend
your disk?

If you can reproduce this, please file a bug.

>
> Nothing on the other host.
> In web admin events pane:
> Feb 1, 2017 11:31:44 PM VM ol65 has been paused due to no Storage space
> error.
> Feb 1, 2017 11:31:44 PM VM ol65 has been paused.
>
> I stop the dd loop and after some seconds:
> Feb 1, 2017 11:32:32 PM VM ol65 has recovered from paused back to up
>
> Multipath status for my device:
>
> 3600a0b8000299aa8d08b55014119 dm-2 IBM ,1814  FAStT
> size=4.0T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1
> rdac' wp=rw
> |-+- policy='service-time 0' prio=0 status=active
> | |- 0:0:1:3 sdj 8:144 active undef running
> | `- 2:0:1:3 sdp 8:240 active undef running
> `-+- policy='service-time 0' prio=0 status=enabled
>   |- 0:0:0:3 sdd 8:48  active undef running
>   `- 2:0:0:3 sdk 8:160 active undef running
>
> In engine.log
>
> 2017-02-01 23:22:01,449 INFO
> [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-15)
> [530aee87] Running command: CreateUserSessionCommand internal: false.
> 2017-02-01 23:22:04,011 INFO
> [org.ovirt.engine.docs.utils.servlet.ContextSensitiveHelpMappingServlet]
> (default task-12) [] Context-sensitive help is not installed. Manual
> directory doesn't exist: /usr/share/ovirt-engine/manual
> 2017-02-01 23:31:43,936 INFO
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
> (ForkJoinPool-1-worker-10) [10c15e39] VM
> '932db7c7-4121-4cbe-ad8d-09e4e99b3cdd'(ol65) moved from 'Up' --> 'Paused'
> 2017-02-01 23:31:44,087 INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (ForkJoinPool-1-worker-10) [10c15e39] Correlation ID: null, Call Stack:
> null, Custom Event ID: -1, Message: VM ol65 has been paused.
> 2017-02-01 23:31:44,227 ERROR
> 

Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-02 Thread Sandro Bonazzola
On Fri, Feb 3, 2017 at 5:51 AM, Shalabh Goel 
wrote:

> HI,
>
> I am having the following issue in two of my nodes after upgrading. The
> ovirt engine says that it is not able to find ovirtmgmt network on the
> nodes and hence the nodes are set to non-operational. More details are in
> the following message.
>
>
> Thanks
>
> Shalabh Goel
>
>
>> --
>>
>> Message: 2
>> Date: Thu, 2 Feb 2017 17:40:05 +0530
>> From: Shalabh Goel 
>> To: users 
>> Subject: [ovirt-users] problem after rebooting the node
>> Message-ID:
>> 

Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-02 Thread Sandro Bonazzola
On Thu, Feb 2, 2017 at 9:59 PM,  wrote:

>
>
> Updated from 4.0.6
> Docs are quite incomplete, it's not mentioned about installing
> ovirt-release41 on centos HV and ovirt-nodes manually, you need to guess.
> Also links in release notes are broken ( https://www.ovirt.org/release/
> 4.1.0/ )
> They are going to https://www.ovirt.org/release/4.1.0/Hosted_Engine_Howto
> 
> , but docs for 4.1.0 are absent.
>
>
Thanks, opened https://github.com/oVirt/ovirt-site/issues/765
I'd like to ask if you can push your suggested documentation fixes /
improvements by editing the website, following the "Edit this page on GitHub"
link at the bottom of the page.
Any help getting the documentation updated and more useful to users is really
appreciated.


> Upgrade went well, everything migrated without problems(I need to restart
> VMs only to change cluster level to 4.1).
> Good news, SPICE HTML 5 client now working for me on Win client with
> firefox, before on 4.x it was sending connect requests forever.
>
> There is some bugs I've found playing with new version:
> 1) some storage tabs displaying "No items to display "
> for example:
> if I'm expanding System\Data centers\[dc name]\ and selecting Storage it
> displays nothing in main tab, but displays all domains in tree,
> if I'm selecting [dc name] and Storage tab, also nothing,
> but in System \ Strorage tab all domains present,
> also in Clusters\[cluster name]\ Storage tab they present.
>

Thanks, opened https://bugzilla.redhat.com/show_bug.cgi?id=1418924



>
> 2) links to embedded files and clients aren't working, engine says 404,
> examples:
> https://[your manager's address]/ovirt-engine/services/files/spice/usbdk-x64.msi
> https://[your manager's address]/ovirt-engine/services/files/spice/virt-viewer-x64.msi
> and other,
> but they are in docs(in ovirt and also in rhel)
>


Thanks, opened https://bugzilla.redhat.com/show_bug.cgi?id=1418923



>
> 3) there is also link in "Console options" menu (right click on VM) called
> "Console Client Resources", it's going to dead location:
> http://www.ovirt.org/documentation/admin-guide/virt/console-client-resources
> If you are going to fix issue №2 maybe also adding links directly to
> installation files embedded will be more helpful for users)
>
>
Thanks, opened https://bugzilla.redhat.com/show_bug.cgi?id=1418921



> 4) a little disappointed about "pass discards" on NFS storage, as I've found
> that the NFS implementation (even 4.1) in CentOS 7 doesn't support
> fallocate(FALLOC_FL_PUNCH_HOLE), which qemu uses for file storage; it was
> added only in kernel 3.18. Sparsify is also not working, but I'll start a separate
> thread with this question.
>
>
>
>
>
>
>
> -- Thursday, February 2, 2017, 15:19:29:
>
> Hi,
> did you install/update to 4.1.0? Let us know your experience!
> We end up knowing only when things don't work well; let us know if it works
> fine for you :-)
>
> If you're not planning an update to 4.1.0 in the near future, let us know
> why.
> Maybe we can help.
>
> Thanks!
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>



-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-02 Thread Shalabh Goel
HI,

I am having the following issue on two of my nodes after upgrading. The
ovirt engine says that it is not able to find the ovirtmgmt network on the
nodes, and hence the nodes are set to non-operational. More details are in
the following message.


Thanks

Shalabh Goel


> --
>
> Message: 2
> Date: Thu, 2 Feb 2017 17:40:05 +0530
> From: Shalabh Goel 
> To: users 
> Subject: [ovirt-users] problem after rebooting the node
> Message-ID:
> 

Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-02 Thread Sandro Bonazzola
On Fri, Feb 3, 2017 at 7:02 AM, Lars Seipel  wrote:

> On Thu, Feb 02, 2017 at 01:19:29PM +0100, Sandro Bonazzola wrote:
> > did you install/update to 4.1.0? Let us know your experience!
> > We end up knowing only when things doesn't work well, let us know it
> works
> > fine for you :-)
>
> Will do that in a week or so. What's the preferred way to upgrade to
> 4.1.0 starting from a 4.0.x setup with a hosted engine?
>
> Is it recommended to use engine-setup/yum (i.e. chapter 2 of the Upgrade
> Guide) or would you prefer an appliance upgrade using hosted-engine(8)
> as described in the HE guide?
>

The appliance upgrade flow was designed to help with transitioning from the 3.6
el6 appliance to the 4.0 el7 appliance.
I would recommend using engine-setup/yum within the appliance to upgrade
the engine.
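
In short, something like this inside the engine VM (a sketch of the usual flow):

yum update "ovirt-*-setup*"
engine-setup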


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ANN] oVirt 4.1.0 Second Beta Release is now available for testing

2017-02-02 Thread Yedidyah Bar David
On Mon, Jan 30, 2017 at 3:09 PM, Yaniv Dary  wrote:
> Adding one more person to look.
>
> Yaniv Dary
> Technical Product Manager
> Red Hat Israel Ltd.
> 34 Jerusalem Road
> Building A, 4th floor
> Ra'anana, Israel 4350109
>
> Tel : +972 (9) 7692306
> 8272306
> Email: yd...@redhat.com
> IRC : ydary
>
>
> On Mon, Jan 30, 2017 at 1:35 PM, Sandro Bonazzola 
> wrote:
>>
>>
>>
>> On Mon, Jan 30, 2017 at 12:24 PM, Nathanaël Blanchet 
>> wrote:
>>>
>>>
>>>
>>> On 28/12/2016 at 15:25, Nathanaël Blanchet wrote:
>>>
>>>
>>>
>>> On 28/12/2016 at 15:09, Yaniv Bronheim wrote:
>>>
>>>
>>>
>>> On Wed, Dec 28, 2016 at 3:43 PM, Nathanaël Blanchet 
>>> wrote:

 Hello,

 On my 4.1 Second Beta test platform, I meet this issue on the three
 hosts: VDSM gaua3 command failed: <Fault 1: "<type
 'exceptions.AttributeError'>:'NoneType' object has no attribute
 'statistics'">
>>>
>>> Still the same error with RC2, and GA is on the 1st of February...
>>
>>
>> Adding some people, looks related to metrics but I may be wrong.

Doesn't seem so to me.

>>
>>
>>
>>>
>>>
>>>
>>> Hi Nathanael, Thank you for the report
>>>
>>> Hi Yaniv
>>>
>>>
>>> please send also the following logs for deeper investigation
>>> /var/log/vdsm.log
>>> /var/log/supervdsm.log
>>> /var/log/messages or journalctl -xn output

Did you attach the last one? Could not find it.

In vdsm.log I see:

2016-12-28 14:22:32,068 ERROR (periodic/1) [virt.periodic.Operation]
 operation failed
(periodic:192)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/periodic.py", line
190, in __call__
self._func()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/sampling.py", line
563, in __call__
stats = hostapi.get_stats(self._cif, self._samples.stats())
  File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 72,
in get_stats
ret.update(cif.mom.getKsmStats())
  File "/usr/lib/python2.7/site-packages/vdsm/momIF.py", line 71, in getKsmStats
stats = self._mom.getStatistics()['host']
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
return self.__send(self.__name, args)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
verbose=self.__verbose
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1306, in single_request
return self.parse_response(response)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1482, in parse_response
return u.close()
  File "/usr/lib64/python2.7/xmlrpclib.py", line 794, in close
raise Fault(**self._stack[0])
Fault: <Fault 1: "<type 'exceptions.AttributeError'>:'NoneType' object
has no attribute 'statistics'">
2016-12-28 14:22:33,277 INFO  (jsonrpc/2) [dispatcher] Run and
protect: repoStats(options=None) (logUtils:49)
2016-12-28 14:22:33,277 INFO  (jsonrpc/2) [dispatcher] Run and
protect: repoStats, Return response:
{u'38eff02a-1061-4f33-b870-beaea860f59b': {'code': 0, 'actual': True,
'version': 0, 'acquired': True, 'delay': '0.000274322', 'lastCheck':
'0.2', 'valid': True}, u'5dd036bb-10dc-4f1d-b80b-3549ceabdc24':
{'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay':
'0.000385412', 'lastCheck': '5.5', 'valid': True}} (logUtils:52)
2016-12-28 14:22:33,278 WARN  (jsonrpc/2) [MOM] MOM not available. (momIF:116)
2016-12-28 14:22:33,279 WARN  (jsonrpc/2) [MOM] MOM not available, KSM
stats will be missing. (momIF:79)

It seems to me like vdsm tries to connect to mom and fails.

If this still happens, I suggest to:
1. Move host to maintenance
2. Restart both vdsm daemons and mom
3. If it still happens, check/share all logs to find out why it fails.
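
For step 2, something like this on the host (a sketch; vdsmd, supervdsmd and
mom-vdsm are the usual service names on oVirt 4 hosts):

systemctl restart vdsmd supervdsmd mom-vdsm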

Best,

>>>
>>> Also, please specify a bit the platform you are running on and when this
>>> issue occurs
>>>
>>> 3 el7 hosts, 1 gluster + virt cluster, FC domain storage with the latest
>>> 4.1 beta, independent el7 engine
>>>
>>>
>>> Greetings,
>>> Yaniv Bronhaim.
>>>
>>>


 On 21/12/2016 at 16:12, Sandro Bonazzola wrote:

 The oVirt Project is pleased to announce the availability of the Second
 Beta Release of oVirt 4.1.0 for testing, as of December 21st, 2016

 This is pre-release software. Please take a look at our community
 page[1]
 to know how to ask questions and interact with developers and users.
 All issues or bugs should be reported via oVirt Bugzilla[2].
 This pre-release should not to be used in production.

 This release is available now for:
 * Fedora 24 (tech preview)
 * Red Hat Enterprise Linux 7.3 or later
 * CentOS Linux (or similar) 7.3 or later

 This release supports Hypervisor Hosts running:
 * Red Hat Enterprise Linux 7.3 or later
 * CentOS Linux (or similar) 7.3 or later
 * Fedora 24 (tech preview)

 See the release notes draft [3] for installation / upgrade instructions
 and
 a list of new 

Re: [ovirt-users] Safe to install/use epel repository?

2017-02-02 Thread Yedidyah Bar David
On Thu, Feb 2, 2017 at 1:43 PM, Gianluca Cecchi
 wrote:
> On Wed, Feb 1, 2017 at 8:26 AM, Yedidyah Bar David  wrote:
>>
>>
>>
>> Perhaps add them with includepkgs, to be on the safe side, and not
>> all of epel.
>>
>> Best,
>> --
>> Didi
>
>
> Added the needed packages at the end of the includepkgs line of
> [ovirt-4.0-epel] section of ovirt-4.0-dependencies.repo

I think you can have your own repo file, with a different name/id
but the same mirrorlist/baseurl. I didn't try it. That way, it won't be
overwritten if you later update ovirt-release40. That file won't
be updated anymore, now that 4.1 is out, but the same applies to
4.1 of course.
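
Something like this (a sketch; the includepkgs list is a placeholder for the
packages you actually need):

# /etc/yum.repos.d/my-epel.repo
[my-epel]
name=EPEL 7 (pinned packages)
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
includepkgs=pkg1 pkg2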

I updated the 4.1 release notes adding a note about EPEL.

Best,
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] problem after rebooting the node

2017-02-02 Thread Shalabh Goel
HI,

I am getting the following error on my node after rebooting it.

VDSM ovirtnode2 command HostSetupNetworksVDS failed: Executing commands
failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection
failed (No such file or directory)


To solve this, I am trying to restart ovsdb-server using the following
command,

ovsdb-server --remote=punix:/var/run/openvswitch/db.sock \
  --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
  --private-key=db:Open_vSwitch,SSL,private_key \
  --certificate=db:Open_vSwitch,SSL,certificate \
  --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert \
  --pidfile --detach

But I am getting the following error.

ovsdb-server: /var/run/openvswitch/ovsdb-server.pid.tmp: create failed (No
such file or directory)

How do I restart the ovsdb-server? Also, the ovirtmgmt network is missing from my
node. It happened after I rebooted my node after it was upgraded to oVirt
4.1
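
A possible route via systemd (a sketch, assuming the stock CentOS 7 openvswitch
packaging; the missing pidfile directory is only a guess at the cause of the
error above):

mkdir -p /var/run/openvswitch
systemctl restart openvswitch
systemctl status openvswitch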

-- 
Shalabh Goel
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM has been paused due to storage I/O problem

2017-02-02 Thread Gianluca Cecchi
On Thu, Feb 2, 2017 at 12:09 PM, Yaniv Kaul  wrote:

>
>
>
>>
>> I decided to switch to preallocated for further tests and confirm
>> So I created a snapshot and then a clone of the VM, changing allocation
>> policy of the disk to preallocated.
>> So far so good.
>>
>> Feb 2, 2017 10:40:23 AM VM ol65preallocated creation has been completed.
>> Feb 2, 2017 10:24:15 AM VM ol65preallocated creation was initiated by
>> admin@internal-authz.
>> Feb 2, 2017 10:22:31 AM Snapshot 'for cloning' creation for VM 'ol65' has
>> been completed.
>> Feb 2, 2017 10:22:31 AM Snapshot 'for cloning' creation for VM 'ol65' was
>> initiated by admin@internal-authz.
>>
>> so the throughput seems ok based on this storage type (the LUNs are on
>> RAID5 made with sata disks): 16 minutes to write 90Gb is about 96MBytes/s,
>> what expected
>>
>
> What is your expectation? Is it FC, iSCSI? How many paths? What is the IO
> scheduler in the VM? Is it using virtio-blk or virtio-SCSI?
> Y.
>
>
Peak bandwidth no more than 140 MBytes/s, based on storage capabilities, but
I don't have to do a crude performance test. I need stability.
Each host has a mezzanine dual-port HBA (4 Gbit); each HBA is connected to a
different FC switch, and the multipath connection has 2 active paths (one
for each HBA).

I confirm that with the preallocated disk of the cloned VM I indeed don't have
the previous problems.
The same loop executed about 66 times in a 10-minute interval without
any problem registered on the hosts.
No message at all in /var/log/messages of either host.
My storage domain was not compromised.
The question about thin provisioning and SAN LUNs (i.e.
with LVM-based disks) remains important.
In my opinion I shouldn't have to care about the kind of I/O made inside a VM,
and anyway it shouldn't interfere with my storage domain, bringing down
my hosts/VMs completely.
In theory there could be an application inside a VM that generates
something similar to my loop and so would cause problems.
For sure I can then notify the VM owner about his/her workload, but it
should not compromise my virtual infrastructure.
I could have an RDBMS inside a VM and a user that creates a big datafile,
and that would imply many extend operations if the disk is thin
provisioned.

What about the [irs] values? Where are they located - in vdsm.conf?
What are the defaults for volume_utilization_percent and
volume_utilization_chunk_mb?
Did they change from 3.6 to 4.0 to 4.1?
What should I do after changing them to make them active?

Thanks in advance,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM has been paused due to storage I/O problem

2017-02-02 Thread Nir Soffer
On Thu, Feb 2, 2017 at 12:04 PM, Gianluca Cecchi
 wrote:
>
>
> On Thu, Feb 2, 2017 at 10:48 AM, Nir Soffer  wrote:
>>
>> On Thu, Feb 2, 2017 at 1:11 AM, Gianluca Cecchi
>>  wrote:
>> > On Wed, Feb 1, 2017 at 8:22 PM, Gianluca Cecchi
>> > 
>> > wrote:
>> >>
>> >>
>> >> OK. In the mean time I have applied your suggested config and restarted
>> >> the 2 nodes.
>> >> Let we test and see if I find any problems running also some I/O tests.
>> >> Thanks in the mean time,
>> >> Gianluca
>> >
>> >
>> >
>> > Quick test without much success
>> >
>> > Inside the guest I run this loop
>> > while true
>> > do
>> > time dd if=/dev/zero bs=1024k count=1024 of=/home/g.cecchi/testfile
>> > sleep 5
>> > done
>>
>> I don't think this test is related to the issues you reported earlier.
>>
>
> I thought the same too, and all related comments you wrote.
> I'm going to test the suggested modifications for chunks.
> In general do you recommend thin provisioning at all on SAN storage?

Only if your storage does not support thin provisioning, or you need snapshot
support.

If you don't need these features, using raw will be much more reliable
and faster.

Even if you use raw, you can still perform live storage migration; we
create a snapshot
using qcow2 format, copy the base raw volume to another storage, and
finally delete
the snapshot on the destination storage.

In the future (ovirt 5?) we would like to use only smart storage thin
provisioning
and snapshot support.

> I decided to switch to preallocated for further tests and confirm
> So I created a snapshot and then a clone of the VM, changing allocation
> policy of the disk to preallocated.
> So far so good.
>
> Feb 2, 2017 10:40:23 AM VM ol65preallocated creation has been completed.
> Feb 2, 2017 10:24:15 AM VM ol65preallocated creation was initiated by
> admin@internal-authz.
> Feb 2, 2017 10:22:31 AM Snapshot 'for cloning' creation for VM 'ol65' has
> been completed.
> Feb 2, 2017 10:22:31 AM Snapshot 'for cloning' creation for VM 'ol65' was
> initiated by admin@internal-authz.
>
> so the throughput seems ok based on this storage type (the LUNs are on RAID5
> made with sata disks): 16 minutes to write 90Gb is about 96MBytes/s, what
> expected
>
> What I see in messages during the cloning phase, from 10:24 to 10:40:
>
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:13 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:24:14 ovmsrv05 journal: vdsm root WARNING File:
> /rhev/data-center/588237b8-0031-02f6-035d-0136/922b5269-ab56-4c4d-838f-49d33427e2ab/images/9d1c977f-540d-436a-9d93-b1cb0816af2a/607dbf59-7d4d-4fc3-ae5f-e8824bf82648
> already removed
> Feb  2 10:24:14 ovmsrv05 multipathd: dm-15: remove map (uevent)
> Feb  2 10:24:14 ovmsrv05 multipathd: dm-15: devmap not registered, can't
> remove
> Feb  2 10:24:14 ovmsrv05 multipathd: dm-15: remove map (uevent)
> Feb  2 10:24:17 ovmsrv05 kernel: blk_update_request: critical target error,
> dev dm-4, sector 44566529
> Feb  2 10:24:17 ovmsrv05 kernel: dm-15: WRITE SAME failed. Manually zeroing.
> Feb  2 10:40:07 ovmsrv05 kernel: scsi_verify_blk_ioctl: 16 callbacks
> suppressed
> Feb  2 10:40:07 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!
> Feb  2 10:40:17 ovmsrv05 multipathd: dm-15: remove map (uevent)
> Feb  2 10:40:17 ovmsrv05 multipathd: dm-15: devmap not registered, can't
> remove
> Feb  2 10:40:17 ovmsrv05 multipathd: dm-15: remove map (uevent)
> Feb  2 10:40:22 ovmsrv05 kernel: dd: sending ioctl 80306d02 to a partition!

Let's file a bug to investigate these "kernel: dd: sending ioctl
80306d02 to a partition!" messages.

Please attach the vdsm log from the machine emitting these logs.

>> > After about 7 rounds I get this in messages of the host where the VM is
>> > running:
>> >
>> > Feb  1 23:31:39 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a
>> > partition!
>> > Feb  1 23:31:39 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a
>> > partition!
>> > Feb  1 23:31:44 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a
>> > partition!
>> > Feb  1 23:31:45 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a
>> > partition!
>> > Feb  1 23:31:45 ovmsrv06 kernel: dd: sending ioctl 80306d02 to a
>> > partition!
>> > Feb  1 23:31:47 ovmsrv06 

[ovirt-users] Data / ISOs / Export - Domains

2017-02-02 Thread Fernando Frediani

Hi there.

What are the main differences between the three types of Domains 
available, such that you cannot use one for all purposes? For example, on the 
same Datastore put Virtual Machines, ISO files (if a filesystem) and 
Export Machines. Is there anything specific that forbids it and could 
cause trouble when mixing files of different types in the same Datastore, 
or is it just a design wish to keep them separate?


This could help in the sense that it gives more flexibility in more limited 
scenarios.


Also, what is the limitation that makes it impossible to use Local Storage 
on a Host that belongs to a Shared Datacenter? What would happen if I 
had the possibility to run VMs on Shared Storage and also on Local 
Storage (or store ISO files there, for example)?


Thanks

Fernando

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Safe to install/use epel repository?

2017-02-02 Thread Gianluca Cecchi
On Wed, Feb 1, 2017 at 8:26 AM, Yedidyah Bar David  wrote:

>
>
> Perhaps add them with includepkgs, to be on the safe side, and not
> all of epel.
>
> Best,
> --
> Didi
>

Added the needed packages at the end of the includepkgs line
of [ovirt-4.0-epel] section of ovirt-4.0-dependencies.repo
Thanks!
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM has been paused due to storage I/O problem

2017-02-02 Thread Nir Soffer
On Thu, Feb 2, 2017 at 1:34 PM, Gianluca Cecchi
 wrote:
> On Thu, Feb 2, 2017 at 12:09 PM, Yaniv Kaul  wrote:
>>
>>
>>
>>>
>>>
>>> I decided to switch to preallocated for further tests and confirm
>>> So I created a snapshot and then a clone of the VM, changing allocation
>>> policy of the disk to preallocated.
>>> So far so good.
>>>
>>> Feb 2, 2017 10:40:23 AM VM ol65preallocated creation has been completed.
>>> Feb 2, 2017 10:24:15 AM VM ol65preallocated creation was initiated by
>>> admin@internal-authz.
>>> Feb 2, 2017 10:22:31 AM Snapshot 'for cloning' creation for VM 'ol65' has
>>> been completed.
>>> Feb 2, 2017 10:22:31 AM Snapshot 'for cloning' creation for VM 'ol65' was
>>> initiated by admin@internal-authz.
>>>
>>> so the throughput seems ok based on this storage type (the LUNs are on
>>> RAID5 made with sata disks): 16 minutes to write 90Gb is about 96MBytes/s,
>>> what expected
>>
>>
>> What is your expectation? Is it FC, iSCSI? How many paths? What is the IO
>> scheduler in the VM? Is it using virtio-blk or virtio-SCSI?
>> Y.
>>
>
> Peak bandwith no more than 140 MBytes/s, based on storage capabilities, but
> I don't have to do a rude performance test. I need stability
> Hosts has a mezzanine dual-port HBA (4 Gbit); each HBA connected to a
> different FC-switch and the multipath connection has 2 active paths (one for
> each HBA).
>
> I confirm that with preallocated disk of the cloned VM I don't have indeed
> the previous problems.
> The same loop executed for about 66 times in a 10 minutes interval without
> any problem registered on hosts
> No message at all in /var/log/messages of both hosts.
> My storage domain not compromised
> It remains important the question about thin provisioning and SAN LUNs (aka
> with LVM based disks).
> In my opinion I shouldn't care of the kind of I/O made inside a VM and
> anyway it shouldn't interfere with my storage domain, bringing down
> completely my hosts/VMs.
> In theory there could be an application inside a VM that generates something
> similar to my loop and so would generate problems.
> For sure I can then notify VM responsible about his/her workload, but it
> should not compromise my virtual infrastructure
> I could have an RDBMS inside a VM and a user that creates a big datafile and
> that should imply many extend operations if the disk is thin provisioned
>
> What about [irs] values? Where are they located, in vdsm.conf?

Yes, but you should not modify them in vdsm.conf.

> What are defaults for volume_utilization_percent and
> volume_utilization_chunk_mb?
> Did they change from 3.6 to 4.0 to 4.1?

No, the defaults did not change in the last 3.5 years.

In 4.0 we introduced dropin support, and this is the recommended
way to perform configuration changes.

To change these values, you create a file at

/etc/vdsm/vdsm.conf.d/50_my.conf

The name of the file does not matter; vdsm will read all files in
the vdsm.conf.d directory, sort them by name (this is why you
should use the 50_ prefix), and apply the changes to the configuration.

In this file you put the sections and options you need, like:

[irs]
volume_utilization_percent = 25
volume_utilization_chunk_mb = 2048

> What I should do after changing them to make them active?

Restart vdsm
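
Putting the two steps together (a minimal sketch; vdsmd is the usual systemd
service name):

cat > /etc/vdsm/vdsm.conf.d/50_my.conf <<'EOF'
[irs]
volume_utilization_percent = 25
volume_utilization_chunk_mb = 2048
EOF
systemctl restart vdsmd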

Using this method, you can provision the same file on all hosts
using standard provisioning tools.

It is not recommended to modify /etc/vdsm/vdsm.conf. If you do,
you will have to manually merge changes from vdsm.conf.rpmnew
after upgrading vdsm.

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-shell] update hostnic/nic ???

2017-02-02 Thread Nathanaël Blanchet
Hi, I managed to create my dhcp hostnic thanks to the python script, but
not the same with ovirt-shell:

 * [oVirt shell (connected)]# add networkattachment --parent-host-name
   taal --network-name brv106 --host_nic-name enp2s0f0
   => OK, but I didn't find any way to set the boot protocol to DHCP

 * [oVirt shell (connected)]# add networkattachment --parent-host-name
   taal --network-name brv106 --host_nic-name enp2s0f0
   --ip_address_assignments-ip_address_assignment dhcp

   === ERROR ===
   "dhcp" is invalid segment at option
   "--ip_address_assignments-ip_address_assignment".

 * [oVirt shell (connected)]# update nic enp3s0f0 --parent-host-name
   zonda --network-name brv106 --boot_protocol dhcp

   === ERROR ===
   status: 405
   reason: Method Not Allowed
   detail:

It should be much simpler to do such a thing via the CLI; what's wrong there?

On 15/01/2016 at 12:20, Juan Hernández wrote:

On 01/14/2016 01:28 PM, Bloemen, Jurriën wrote:

On 14-01-16 12:16, Juan Hernández wrote:

On 01/14/2016 11:24 AM, Bloemen, Jurriën wrote:

Hi,

First I created a bonding interface:

# add nic --parent-host-name server01 --name bond0 --network-name
VLAN602 --bonding-slaves-host_nic host_nic.name=eno1
--bonding-slaves-host_nic host_nic.name=eno2

This works great but no IP is set on VLAN602.

Then I'm trying to add an ip address to a network with the following
command:

# update hostnic --parent-host-name server01 --network-name VLAN602
--boot_protocol static --ip-address 10.10.10.10 --ip-netmask 255.255.255.0

=== ERROR ===
wrong number of arguments, try 'help update' for help.



Looking at this document
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6-Beta/html/RHEVM_Shell_Guide/nic.html
I need to use "nic" instead of "hostnic" but then I don't have the
options to say this is a --parent-host-name. Only VM related command
options.

So I think the documentation is behind.

Can somebody help me with what the command is to add a IP to a
VLAN/Network for a host?



The command should be like this:

   # update nic bond0 --parent-host-name server01 --network-name VLAN602
--boot_protocol static --ip-address 10.10.10.10 --ip-netmask 255.255.255.0

Note that it is "nic" instead of "hostnic" and that you need to
specify the name of that NIC, in this case "bond0".

The command will work if you type it like that, but auto-completion
won't work. This is a bug in the CLI, indirectly caused by the fact that
the name of the URL segment used in the RESTAPI is "nics" (from
/hosts/{host:id}/*nics*) but the name of the XML schema complex type is
"HostNIC".


Thanks! That works!

Another question:

Now I got the message that my network is out-of-sync. How can i force
within the ovirt-shell that it syncs the networks?

hmz pressed sent by accident

What I want to say is:

Now I got the message that my network is out-of-sync. How can i force
within the ovirt-shell that it syncs the networks?
Because when I press "Sync All Networks" the IP address disappears

But when I check the box "Sync Network" within the VLAN602 options it
gets pushed to the host.

Is there a difference between the two? And how do I run both via
ovirt-shell?


The "sync network" operation is not supported by ovirt-shell.

If you want to set the network configuration, and make it persistent,
then you will need to use one of the "setupNetworks" operations. These
aren't fully usable with ovirt-shell either, so if you want to use it
you will need to use the API directly or one of the SDKs. For example,
let's assume that you have a host with network interfaces eth0, eth1, and
eth2, and that you want to configure eth1 and eth2 as a bond, to put
your VLAN and IP address on top. You can do that with a script like this:

---8<---
#!/bin/sh -ex

url="https://engine.example.com/ovirt-engine/api;
user="admin@internal"
password="..."

curl \
--verbose \
--cacert /etc/pki/ovirt-engine/ca.pem \
--user "${user}:${password}" \
--request POST \
--header "Content-Type: application/xml" \
--header "Accept: application/xml" \
--data '
<action>
  <modified_network_attachments>
    <network_attachment>
      <network>
        <name>VLAN602</name>
      </network>
      <host_nic>
        <name>bond0</name>
      </host_nic>
      <ip_address_assignments>
        <ip_address_assignment>
          <assignment_method>static</assignment_method>
          <ip>
            <address>10.10.10.10</address>
            <netmask>255.255.255.0</netmask>
          </ip>
        </ip_address_assignment>
      </ip_address_assignments>
    </network_attachment>
  </modified_network_attachments>
</action>' \
"${url}/hosts/<host-id>/setupnetworks"
--->8---

(The XML body above was mangled by the list archive; it is reconstructed
here from the surviving element text (VLAN602, bond0, static) and the
addresses quoted earlier in the thread, assuming the 3.6 setupnetworks
request format. The host id is a placeholder.)

[ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-02 Thread Sandro Bonazzola
Hi,
did you install/update to 4.1.0? Let us know your experience!
We end up knowing only when things don't work well, so let us know if it
works fine for you :-)

If you're not planning an update to 4.1.0 in the near future, let us know
why.
Maybe we can help.

Thanks!
-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to update ovirt nodes

2017-02-02 Thread Shalabh Goel
Hi, can you just confirm something for me? I had to change the existing
ovirt-4.0 repos to ovirt-4.1 (by replacing 40 with 41 in the repo files).
Then I ran yum update, rebooted the nodes, and booted into
ovirt-node 4.1. Is that procedure correct?
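
In shell terms, what I ran on each node was roughly this (a sketch; the
repo file names are an assumption):

    sed -i 's/40/41/g' /etc/yum.repos.d/ovirt*.repo  # point the 4.0 repos at 4.1
    yum update
    reboot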

Thanks

Shalabh Goel

On Thu, Feb 2, 2017 at 2:02 PM, Yedidyah Bar David  wrote:

> On Thu, Feb 2, 2017 at 8:14 AM, Shalabh Goel 
> wrote:
> > Hi
> >
> > Ovirt 4.1 has been released, I want to to know how to upgrade the Ovirt
> > nodes to 4.1. I was able to upgrade ovirt-engine but I could not find
> how to
> > upgrade node.
>
> Something like:
>
> 1. Move node to maintenance
> 2. Add 4.1 repos
> 3. yum update
> 4. reboot
> 5. Activate (exit maintenance)
>
> See also:
>
> https://www.ovirt.org/node/
> http://www.ovirt.org/node/faq/
>
> If in "node" you referred to ovirt-node, and not a regular OS install, then
> the flow is exactly the same, but what actually happens inside it is very
> different - the entire OS image is replaced.
>
> You might want to check also:
>
> http://www.ovirt.org/develop/release-management/features/
> engine/upgrademanager/
>
> Best,
> --
> Didi
>



-- 
Shalabh Goel
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-02 Thread Lars Seipel
On Thu, Feb 02, 2017 at 01:19:29PM +0100, Sandro Bonazzola wrote:
> did you install/update to 4.1.0? Let us know your experience!
> We end up knowing only when things don't work well, so let us know if it
> works fine for you :-)

Will do that in a week or so. What's the preferred way to upgrade to
4.1.0 starting from a 4.0.x setup with a hosted engine?

Is it recommended to use engine-setup/yum (i.e. chapter 2 of the Upgrade
Guide) or would you prefer an appliance upgrade using hosted-engine(8)
as described in the HE guide?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-02 Thread Fernando Frediani

Hello

Thanks for sharing your procedures.

Why did you have to restart VMs for the migration to work? Is it
mandatory for an upgrade?


Fernando


On 02/02/2017 12:23, Краснобаев Михаил wrote:

Hi,
upgraded my cluster (3 hosts, engine, nfs-share) to the latest 4.1 
release and Centos 7.3 (from 4.0.6).

Did the following:
1. Upgraded engine machine to Centos 7.3
2. Upgraded engine packages and ran "engine-setup"
3. Upgraded hosts one by one to 7.3 + packages from the new 4.1 repo 
and refreshed host capabilities.

4. Raised cluster and datacenter compatibility level to 4.1.
5. Restarted virtual machines and tested migration.
6. Profit! Everything went really smoothly. No errors.
Now trying to figure out how the sparsify function works. Do I need to 
run trimming from inside the VM first?

Best regards, Mikhail.
02.02.2017, 15:19, "Sandro Bonazzola" :

Hi,
did you install/update to 4.1.0? Let us know your experience!
We end up knowing only when things don't work well, so let us know if it 
works fine for you :-)
If you're not planning an update to 4.1.0 in the near future, let us 
know why.

Maybe we can help.
Thanks!
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com 
,

___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users


--
Best regards, Краснобаев Михаил.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM has been paused due to storage I/O problem

2017-02-02 Thread Nir Soffer
On Thu, Feb 2, 2017 at 4:16 PM, Gianluca Cecchi
 wrote:
> On Thu, Feb 2, 2017 at 12:48 PM, Nir Soffer  wrote:
>>
>>
>> >
>> > What about [irs] values? Where are they located, in vdsm.conf?
>>
>> Yes but you should not modify them in vdsm.conf.
>>
>> > What are defaults for volume_utilization_percent and
>> > volume_utilization_chunk_mb?
>> > Did they change from 3.6 to 4.0 to 4.1?
>>
>> No, the defaults did not change in the last 3.5 years.
>
>
> OK.
> In a previous message of this thread you wrote that the default
> volume_utilization_chunk_mb is 1024
> What about default for volume_utilization_percent? 50?

Yes, this should be a commented out option in /etc/vdsm/vdsm.conf.

If you don't have a vdsm.conf file, or the file is empty, you can
generate a new file
like this:

python /usr/lib64/python2.7/site-packages/vdsm/config.py > vdsm.conf.example

>
>
>>
>>
>> > What I should do after changing them to make them active?
>>
>> Restart vdsm
>
>
> with host into maintenance mode or could it be active?
>
> Thanks
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Infiniband migration network

2017-02-02 Thread Logan Kuhn
We are starting to scale and have started to notice the limitations of
using the default network for everything.  We have InfiniBand in our VM
hosts and would like to use that as our migration network, but haven't
figured out how.  Creating a logical network and assigning it to one of the
InfiniBand networks doesn't seem to work because InfiniBand doesn't handle
bridging well.

Regards,
Logan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM has been paused due to storage I/O problem

2017-02-02 Thread Gianluca Cecchi
On Thu, Feb 2, 2017 at 12:48 PM, Nir Soffer  wrote:

>
> >
> > What about [irs] values? Where are they located, in vdsm.conf?
>
> Yes but you should not modify them in vdsm.conf.
>
> > What are defaults for volume_utilization_percent and
> > volume_utilization_chunk_mb?
> > Did they change from 3.6 to 4.0 to 4.1?
>
> No, the defaults did not change in the last 3.5 years.
>

OK.
In a previous message of this thread you wrote that the default
volume_utilization_chunk_mb is 1024.
What about the default for volume_utilization_percent? 50?



>
> > What I should do after changing them to make them active?
>
> Restart vdsm
>

with host into maintenance mode or could it be active?

Thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-02 Thread Краснобаев Михаил
Hi,

> Why did you have to restart VMs for the migration to work? Is it
> mandatory for an upgrade?

I had to restart the VMs (even shutdown and start) for the raised
compatibility level to kick in. Migration works even if you don't
restart VMs.

> Is it mandatory for an upgrade?

No. But at some point you will have to, or the VMs' cluster
compatibility level stays at the previous version.

Best regards, Mikhail

02.02.2017, 17:25, "Fernando Frediani":
> [earlier thread trimmed; quoted in full above]
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-02 Thread Yaniv Kaul
On Thu, Feb 2, 2017 at 4:23 PM, Краснобаев Михаил  wrote:

> Hi,
>
> upgraded my cluster (3 hosts, engine, nfs-share) to the latest 4.1 release
> and Centos 7.3 (from 4.06).
>
> Did the following:
>
> 1. Upgraded engine machine to Centos 7.3
> 2. Upgraded engine packages and ran "engine-setup"
> 3. Upgraded one by one hosts to 7.3 + packages from the new 4.1. repo and
> refreshed hosts capabilities.
> 4. Raised cluster and datacenter compatibility level to 4.1.
> 5. Restarted virtual machines and tested migration.
> 6. Profit! Everything went really smoothly. No errors.
>
> Now trying to figure out how the sparsify function works. I need to run
> trimming from inside the VM first?
>

If you've configured it to use virtio-SCSI, and DISCARD is enabled, you
can. But I believe virt-sparsify does a bit.

BTW, depending on the OS, if DISCARD is enabled, I would not do anything -
for example, in Fedora, there's a systemd timer that once a week runs
fstrim for you.

If not, then it has to be turned off and then you can run virt-sparsify.
Y.
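
If DISCARD does work in the guest, the weekly trim mentioned above can be
checked or run by hand; a sketch, assuming util-linux ships the
fstrim.timer unit as on Fedora:

    systemctl enable --now fstrim.timer  # periodic fstrim of mounted filesystems
    systemctl list-timers fstrim.timer   # confirm the next scheduled run
    fstrim -av                           # or trim all supported mounts once, manually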


>
> Best regards, Mikhail.
>
>
>
> 02.02.2017, 15:19, "Sandro Bonazzola" :
>
> Hi,
> did you install/update to 4.1.0? Let us know your experience!
> We end up knowing only when things don't work well, so let us know if it
> works fine for you :-)
>
> If you're not planning an update to 4.1.0 in the near future, let us know
> why.
> Maybe we can help.
>
> Thanks!
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
> ,
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> --
> Best regards, Краснобаев Михаил.
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM has been paused due to storage I/O problem

2017-02-02 Thread Gianluca Cecchi
On Thu, Feb 2, 2017 at 3:30 PM, Nir Soffer  wrote:

>
> If you don't have a vdsm.conf file, or the file is empty, you can
> generate a new file
> like this:
>
> python /usr/lib64/python2.7/site-packages/vdsm/config.py >
> vdsm.conf.example
>

thanks.
It seems that the package python-libs-2.7.5-48.el7.x86_64 actually
uses /usr/lib path instead of /usr/lib64...
What worked:
python /usr/lib/python2.7/site-packages/vdsm/config.py > vdsm.conf.example
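
A quick way to eyeball the defaults in the generated sample (assuming the
option names discussed above):

    grep -B 2 -A 2 'volume_utilization' vdsm.conf.example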


> >
> >
> >>
> >>
> >> > What I should do after changing them to make them active?
> >>
> >> Restart vdsm
> >
> >
> > with host into maintenance mode or could it be active?
> >
> > Thanks
> >
>

Can you confirm that the host can be active when I restart vdsmd service?

Thanks
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM has been paused due to storage I/O problem

2017-02-02 Thread Nir Soffer
On Thu, Feb 2, 2017 at 4:42 PM, Gianluca Cecchi
 wrote:
> On Thu, Feb 2, 2017 at 3:30 PM, Nir Soffer  wrote:
>>
>>
>> If you don't have a vdsm.conf file, or the file is empty, you can
>> generate a new file
>> like this:
>>
>> python /usr/lib64/python2.7/site-packages/vdsm/config.py >
>> vdsm.conf.example
>
>
> thanks.
> It seems that the package python-libs-2.7.5-48.el7.x86_64 actually uses
> /usr/lib path instead of /usr/lib64...
> What worked
> python /usr/lib/python2.7/site-packages/vdsm/config.py > vdsm.conf.example

Indeed, my mistake, we moved to lib several versions ago.

>
>>
>> >
>> >
>> >>
>> >>
>> >> > What I should do after changing them to make them active?
>> >>
>> >> Restart vdsm
>> >
>> >
>> > with host into maintenance mode or could it be active?
>> >
>> > Thanks
>> >
>
>
> Can you confirm that the host can be active when I restart vdsmd service?

Sure. This may abort a storage operation if one is running when you restart
vdsm, but vdsm is designed so you can restart or kill it safely.

For example, if you abort a disk copy in the middle, the operation will fail
and the destination disk will be deleted.

If you want to avoid such issue, you can put a host to maintenance, but this
requires migration of vms to other hosts.
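
For completeness, moving a host to maintenance can also be scripted
against the REST API; a sketch, with placeholder engine address and
host id:

    curl \
    --cacert /etc/pki/ovirt-engine/ca.pem \
    --user "admin@internal:password" \
    --request POST \
    --header "Content-Type: application/xml" \
    --data '<action/>' \
    "https://engine.example.com/ovirt-engine/api/hosts/<host-id>/deactivate"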

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Host Issue

2017-02-02 Thread Bryan Sockel
Hi,

Came into the office with an issue with my oVirt setup this morning.  On one 
of my hosts the / partition was completely full, causing the host to go into 
an unknown state.  I was able to clear out some space for the time being and 
am attempting to recover that host.  VMs are still running and responding 
on the host.

I am using Gluster volumes in my configuration, and had to restart gluster 
service on that host.  I also restarted the ovirt-ha-agent service.
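
For reference, this is roughly how I found what to clear (a sketch; the
exact paths are an assumption):

    df -h /                                          # confirm the root fs is full
    du -xsh /var/log/* 2>/dev/null | sort -h | tail  # largest consumers under /var/log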

I am seeing this entry in my agent.log every two seconds:

MainThread::INFO::2017-02-02 
09:11:19,606::util::214::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(connect_vdsm_json_rpc)
 
Waiting for VDSM hardware info

In my vdsm.log i am seeing this
jsonrpc.Executor/4::INFO::2017-02-02 
09:13:42,088::__init__::525::jsonrpc.JsonRpcServer::(_handle_request) In 
recovery, ignoring 'Host.getAllVmStats' in bridge with {}
jsonrpc.Executor/4::INFO::2017-02-02 
09:13:42,088::__init__::513::jsonrpc.JsonRpcServer::(_serveRequest) RPC call 
Host.getAllVmStats failed (error 99) in 0.00 seconds
jsonrpc.Executor/5::INFO::2017-02-02 
09:13:42,114::__init__::525::jsonrpc.JsonRpcServer::(_handle_request) In 
recovery, ignoring 'Host.getHardwareInfo' in bridge with {}
jsonrpc.Executor/5::INFO::2017-02-02 
09:13:42,115::__init__::513::jsonrpc.JsonRpcServer::(_serveRequest) RPC call 
Host.getHardwareInfo failed (error 99) in 0.00 seconds
jsonrpc.Executor/6::INFO::2017-02-02 
09:13:44,121::__init__::525::jsonrpc.JsonRpcServer::(_handle_request) In 
recovery, ignoring 'Host.getHardwareInfo' in bridge with {}
jsonrpc.Executor/6::INFO::2017-02-02 
09:13:44,122::__init__::513::jsonrpc.JsonRpcServer::(_serveRequest) RPC call 
Host.getHardwareInfo failed (error 99) in 0.00 seconds
jsonrpc.Executor/7::INFO::2017-02-02 
09:13:46,127::__init__::525::jsonrpc.JsonRpcServer::(_handle_request) In 
recovery, ignoring 'Host.getHardwareInfo' in bridge with {}
jsonrpc.Executor/7::INFO::2017-02-02 
09:13:46,127::__init__::513::jsonrpc.JsonRpcServer::(_serveRequest) RPC call 
Host.getHardwareInfo failed (error 99) in 0.00 seconds
clientIFinit::DEBUG::2017-02-02 
09:13:46,257::task::597::Storage.TaskManager.Task::(_updateState) 
Task=`ba88b701-9f7a-488e-9c80-3d61cec38053`::moving from state init -> state 
preparing
clientIFinit::INFO::2017-02-02 
09:13:46,258::logUtils::49::dispatcher::(wrapper) Run and protect: 
getConnectedStoragePoolsList(options=None)
clientIFinit::INFO::2017-02-02 
09:13:46,258::logUtils::52::dispatcher::(wrapper) Run and protect: 
getConnectedStoragePoolsList, Return response: {'poollist': []}
clientIFinit::DEBUG::2017-02-02 
09:13:46,258::task::1193::Storage.TaskManager.Task::(prepare) 
Task=`ba88b701-9f7a-488e-9c80-3d61cec38053`::finished: {'poollist': []}
clientIFinit::DEBUG::2017-02-02 
09:13:46,258::task::597::Storage.TaskManager.Task::(_updateState) 
Task=`ba88b701-9f7a-488e-9c80-3d61cec38053`::moving from state preparing -> 
state finished
clientIFinit::DEBUG::2017-02-02 
09:13:46,258::resourceManager::952::Storage.ResourceManager.Owner::(releaseAll) 
Owner.releaseAll requests {} resources {}
clientIFinit::DEBUG::2017-02-02 
09:13:46,259::resourceManager::989::Storage.ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
clientIFinit::DEBUG::2017-02-02 
09:13:46,259::task::995::Storage.TaskManager.Task::(_decref) 
Task=`ba88b701-9f7a-488e-9c80-3d61cec38053`::ref 0 aborting False
clientIFinit::INFO::2017-02-02 
09:13:46,259::clientIF::558::vds::(_waitForStoragePool) recovery: waiting 
for storage pool to go up
jsonrpc.Executor/0::INFO::2017-02-02 
09:13:48,133::__init__::525::jsonrpc.JsonRpcServer::(_handle_request) In 
recovery, ignoring 'Host.getHardwareInfo' in bridge with {}
jsonrpc.Executor/0::INFO::2017-02-02 
09:13:48,134::__init__::513::jsonrpc.JsonRpcServer::(_serveRequest) RPC call 
Host.getHardwareInfo failed (error 99) in 0.00 seconds
jsonrpc.Executor/1::INFO::2017-02-02 
09:13:50,140::__init__::525::jsonrpc.JsonRpcServer::(_handle_request) In 
recovery, ignoring 'Host.getHardwareInfo' in bridge with {}
jsonrpc.Executor/1::INFO::2017-02-02 
09:13:50,140::__init__::513::jsonrpc.JsonRpcServer::(_serveRequest) RPC call 
Host.getHardwareInfo failed (error 99) in 0.00 seconds
clientIFinit::DEBUG::2017-02-02 
09:13:51,265::task::597::Storage.TaskManager.Task::(_updateState) 
Task=`e8c558b1-f5d0-49ea-ac92-51660d03636e`::moving from state init -> state 
preparing
clientIFinit::INFO::2017-02-02 
09:13:51,265::logUtils::49::dispatcher::(wrapper) Run and protect: 
getConnectedStoragePoolsList(options=None)
clientIFinit::INFO::2017-02-02 
09:13:51,265::logUtils::52::dispatcher::(wrapper) Run and protect: 
getConnectedStoragePoolsList, Return response: {'poollist': []}
clientIFinit::DEBUG::2017-02-02 
09:13:51,265::task::1193::Storage.TaskManager.Task::(prepare) 
Task=`e8c558b1-f5d0-49ea-ac92-51660d03636e`::finished: {'poollist': []}
clientIFinit::DEBUG::2017-02-02 
09:13:51,266::task::597::Storage.TaskManager.Task::(_updateState) 

Re: [ovirt-users] [Call for feedback] did you install/update to 4.1.0?

2017-02-02 Thread Краснобаев Михаил
Hi,

upgraded my cluster (3 hosts, engine, nfs-share) to the latest 4.1
release and Centos 7.3 (from 4.0.6).

Did the following:

1. Upgraded engine machine to Centos 7.3
2. Upgraded engine packages and ran "engine-setup"
3. Upgraded hosts one by one to 7.3 + packages from the new 4.1 repo
   and refreshed host capabilities.
4. Raised cluster and datacenter compatibility level to 4.1.
5. Restarted virtual machines and tested migration.
6. Profit! Everything went really smoothly. No errors.

Now trying to figure out how the sparsify function works. Do I need to
run trimming from inside the VM first?

Best regards, Mikhail.

02.02.2017, 15:19, "Sandro Bonazzola":
> Hi,
> did you install/update to 4.1.0? Let us know your experience!
> We end up knowing only when things don't work well, so let us know if
> it works fine for you :-)
>
> If you're not planning an update to 4.1.0 in the near future, let us
> know why.
> Maybe we can help.
>
> Thanks!
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com

--
Best regards, Краснобаев Михаил.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Host Issue

2017-02-02 Thread Martin Sivak
Hi,

VDSM error 99 means RecoveryInProgress and it might take some time
depending on how many VMs there are.

So I suggest you wait a bit more for now and see what happens.

Best regards

--
Martin Sivak
oVirt / SLA

On Thu, Feb 2, 2017 at 4:18 PM, Bryan Sockel  wrote:
>  Hi,
>
> Came into the office with an issue with my ovirt setup this morning.  On one
> of my hosts the / partition was completely full causing the host to go into
> an unknown state.  I was able to clear out some space for the time being and
> am attempting to recover that host.  VMs are still running and responding
> on the host.
>
> I am using Gluster volumes in my configuration, and had to restart gluster
> service on that host.  I also restarted the ovirt-ha-agent service.
>
> I am seeing this entry in my agent.log every two seconds:
>
> MainThread::INFO::2017-02-02
> 09:11:19,606::util::214::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(connect_vdsm_json_rpc)
> Waiting for VDSM hardware info
>
> In my vdsm.log i am seeing this
> jsonrpc.Executor/4::INFO::2017-02-02
> 09:13:42,088::__init__::525::jsonrpc.JsonRpcServer::(_handle_request) In
> recovery, ignoring 'Host.getAllVmStats' in bridge with {}
> jsonrpc.Executor/4::INFO::2017-02-02
> 09:13:42,088::__init__::513::jsonrpc.JsonRpcServer::(_serveRequest) RPC call
> Host.getAllVmStats failed (error 99) in 0.00 seconds
> jsonrpc.Executor/5::INFO::2017-02-02
> 09:13:42,114::__init__::525::jsonrpc.JsonRpcServer::(_handle_request) In
> recovery, ignoring 'Host.getHardwareInfo' in bridge with {}
> jsonrpc.Executor/5::INFO::2017-02-02
> 09:13:42,115::__init__::513::jsonrpc.JsonRpcServer::(_serveRequest) RPC call
> Host.getHardwareInfo failed (error 99) in 0.00 seconds
> jsonrpc.Executor/6::INFO::2017-02-02
> 09:13:44,121::__init__::525::jsonrpc.JsonRpcServer::(_handle_request) In
> recovery, ignoring 'Host.getHardwareInfo' in bridge with {}
> jsonrpc.Executor/6::INFO::2017-02-02
> 09:13:44,122::__init__::513::jsonrpc.JsonRpcServer::(_serveRequest) RPC call
> Host.getHardwareInfo failed (error 99) in 0.00 seconds
> jsonrpc.Executor/7::INFO::2017-02-02
> 09:13:46,127::__init__::525::jsonrpc.JsonRpcServer::(_handle_request) In
> recovery, ignoring 'Host.getHardwareInfo' in bridge with {}
> jsonrpc.Executor/7::INFO::2017-02-02
> 09:13:46,127::__init__::513::jsonrpc.JsonRpcServer::(_serveRequest) RPC call
> Host.getHardwareInfo failed (error 99) in 0.00 seconds
> clientIFinit::DEBUG::2017-02-02
> 09:13:46,257::task::597::Storage.TaskManager.Task::(_updateState)
> Task=`ba88b701-9f7a-488e-9c80-3d61cec38053`::moving from state init -> state
> preparing
> clientIFinit::INFO::2017-02-02
> 09:13:46,258::logUtils::49::dispatcher::(wrapper) Run and protect:
> getConnectedStoragePoolsList(options=None)
> clientIFinit::INFO::2017-02-02
> 09:13:46,258::logUtils::52::dispatcher::(wrapper) Run and protect:
> getConnectedStoragePoolsList, Return response: {'poollist': []}
> clientIFinit::DEBUG::2017-02-02
> 09:13:46,258::task::1193::Storage.TaskManager.Task::(prepare)
> Task=`ba88b701-9f7a-488e-9c80-3d61cec38053`::finished: {'poollist': []}
> clientIFinit::DEBUG::2017-02-02
> 09:13:46,258::task::597::Storage.TaskManager.Task::(_updateState)
> Task=`ba88b701-9f7a-488e-9c80-3d61cec38053`::moving from state preparing ->
> state finished
> clientIFinit::DEBUG::2017-02-02
> 09:13:46,258::resourceManager::952::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
> clientIFinit::DEBUG::2017-02-02
> 09:13:46,259::resourceManager::989::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
> clientIFinit::DEBUG::2017-02-02
> 09:13:46,259::task::995::Storage.TaskManager.Task::(_decref)
> Task=`ba88b701-9f7a-488e-9c80-3d61cec38053`::ref 0 aborting False
> clientIFinit::INFO::2017-02-02
> 09:13:46,259::clientIF::558::vds::(_waitForStoragePool) recovery: waiting
> for storage pool to go up
> jsonrpc.Executor/0::INFO::2017-02-02
> 09:13:48,133::__init__::525::jsonrpc.JsonRpcServer::(_handle_request) In
> recovery, ignoring 'Host.getHardwareInfo' in bridge with {}
> jsonrpc.Executor/0::INFO::2017-02-02
> 09:13:48,134::__init__::513::jsonrpc.JsonRpcServer::(_serveRequest) RPC call
> Host.getHardwareInfo failed (error 99) in 0.00 seconds
> jsonrpc.Executor/1::INFO::2017-02-02
> 09:13:50,140::__init__::525::jsonrpc.JsonRpcServer::(_handle_request) In
> recovery, ignoring 'Host.getHardwareInfo' in bridge with {}
> jsonrpc.Executor/1::INFO::2017-02-02
> 09:13:50,140::__init__::513::jsonrpc.JsonRpcServer::(_serveRequest) RPC call
> Host.getHardwareInfo failed (error 99) in 0.00 seconds
> clientIFinit::DEBUG::2017-02-02
> 09:13:51,265::task::597::Storage.TaskManager.Task::(_updateState)
> Task=`e8c558b1-f5d0-49ea-ac92-51660d03636e`::moving from state init -> state
> preparing
> clientIFinit::INFO::2017-02-02
> 09:13:51,265::logUtils::49::dispatcher::(wrapper) Run and protect:
> getConnectedStoragePoolsList(options=None)
> 

Re: [ovirt-users] VM has been paused due to storage I/O problem

2017-02-02 Thread Gianluca Cecchi
On Thu, Feb 2, 2017 at 3:51 PM, Nir Soffer  wrote:

>
> > Can you confirm that the host can be active when I restart vdsmd service?
>
> Sure. This may abort a storage operation if one is running when you restart
> vdsm, but vdsm is designed so you can restart or kill it safely.
>
> For example, if you abort a disk copy in the middle, the operation will
> fail
> and the destination disk will be deleted.
>
> If you want to avoid such issue, you can put a host to maintenance, but
> this
> requires migration of vms to other hosts.
>
> Nir
>

OK. Created 50_thin_block_extension_rules.conf under /etc/vdsm/vdsm.conf.d
and restarted vdsmd

One last (latest probably... ;-) question
Is it expected that if I restart vdsmd on the host that is the SPM, then
SPM is shifted to another node?
Because when restarting vdsmd on the host that is not the SPM I didn't get
any message in the web admin GUI, and the restart of vdsmd itself was very
fast. Instead, on the host with the SPM, the command took several seconds
and I got these events:

Feb 2, 2017 4:01:23 PM Host ovmsrv05 power management was verified
successfully.
Feb 2, 2017 4:01:23 PM Status of host ovmsrv05 was set to Up.
Feb 2, 2017 4:01:19 PM Executing power management status on Host ovmsrv05
using Proxy Host ovmsrv06 and Fence Agent ilo:10.4.192.212.
Feb 2, 2017 4:01:18 PM Storage Pool Manager runs on Host ovmsrv06 (Address:
ovmsrv06.datacenter.polimi.it).
Feb 2, 2017 4:01:13 PM VDSM ovmsrv05 command failed: Recovering from crash
or Initializing
Feb 2, 2017 4:01:11 PM Host ovmsrv05 is initializing. Message: Recovering
from crash or Initializing
Feb 2, 2017 4:01:11 PM VDSM ovmsrv05 command failed: Recovering from crash
or Initializing
Feb 2, 2017 4:01:11 PM Invalid status on Data Center Default. Setting Data
Center status to Non Responsive (On host ovmsrv05, Error: Recovering from
crash or Initializing).
Feb 2, 2017 4:01:11 PM VDSM ovmsrv05 command failed: Recovering from crash
or Initializing
Feb 2, 2017 4:01:05 PM Host ovmsrv05 is not responding. It will stay in
Connecting state for a grace period of 80 seconds and after that an attempt
to fence the host will be issued.
Feb 2, 2017 4:01:05 PM Host ovmsrv05 is not responding. It will stay in
Connecting state for a grace period of 80 seconds and after that an attempt
to fence the host will be issued.
Feb 2, 2017 4:01:05 PM VDSM ovmsrv05 command failed: Connection reset by
peer

Thanks
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM has been paused due to storage I/O problem

2017-02-02 Thread Nir Soffer
On Thu, Feb 2, 2017 at 6:05 PM, Gianluca Cecchi
 wrote:
> On Thu, Feb 2, 2017 at 3:51 PM, Nir Soffer  wrote:
>>
>>
>> > Can you confirm that the host can be active when I restart vdsmd
>> > service?
>>
>> Sure. This may abort a storage operation if one is running when you
>> restart
>> vdsm, but vdsm is designed so you can restart or kill it safely.
>>
>> For example, if you abort a disk copy in the middle, the operation will
>> fail
>> and the destination disk will be deleted.
>>
>> If you want to avoid such issue, you can put a host to maintenance, but
>> this
>> requires migration of vms to other hosts.
>>
>> Nir
>
>
> OK. Created 50_thin_block_extension_rules.conf under /etc/vdsm/vdsm.conf.d
> and restarted vdsmd
>
> One last (latest probably... ;-) question
> Is it expected that if I restart vdsmd on the host that is the SPM, then SPM
> is shifted to another node?

Yes, engine will move the SPM to another host when the SPM fails, unless
you have disabled the SPM role on all other hosts (see the host's SPM tab).

> Because when restarting vdsmd on the host that is not SPM I didn't get any
> message in web admin gui and restart of vdsmd itself was very fast.
> Instead on the host with SPM, the command took several seconds and I got
> these events

It is expected that restarting the SPM is slower, but we need to see vdsm
logs to understand why.

> Feb 2, 2017 4:01:23 PM Host ovmsrv05 power management was verified
> successfully.
> Feb 2, 2017 4:01:23 PM Status of host ovmsrv05 was set to Up.
> Feb 2, 2017 4:01:19 PM Executing power management status on Host ovmsrv05
> using Proxy Host ovmsrv06 and Fence Agent ilo:10.4.192.212.
> Feb 2, 2017 4:01:18 PM Storage Pool Manager runs on Host ovmsrv06 (Address:
> ovmsrv06.datacenter.polimi.it).
> Feb 2, 2017 4:01:13 PM VDSM ovmsrv05 command failed: Recovering from crash
> or Initializing
> Feb 2, 2017 4:01:11 PM Host ovmsrv05 is initializing. Message: Recovering
> from crash or Initializing
> Feb 2, 2017 4:01:11 PM VDSM ovmsrv05 command failed: Recovering from crash
> or Initializing
> Feb 2, 2017 4:01:11 PM Invalid status on Data Center Default. Setting Data
> Center status to Non Responsive (On host ovmsrv05, Error: Recovering from
> crash or Initializing).
> Feb 2, 2017 4:01:11 PM VDSM ovmsrv05 command failed: Recovering from crash
> or Initializing
> Feb 2, 2017 4:01:05 PM Host ovmsrv05 is not responding. It will stay in
> Connecting state for a grace period of 80 seconds and after that an attempt
> to fence the host will be issued.
> Feb 2, 2017 4:01:05 PM Host ovmsrv05 is not responding. It will stay in
> Connecting state for a grace period of 80 seconds and after that an attempt
> to fence the host will be issued.
> Feb 2, 2017 4:01:05 PM VDSM ovmsrv05 command failed: Connection reset by
> peer

It looks like the engine discovered that the SPM was down, and reconnected.

It is expected that changes in the SPM status are detected early and that
engine tries to recover the SPM; the SPM role is critical in oVirt.

Are you sure you did not get any message when restarting the other host?
I would expect engine to detect and report a restart of any host.

If you can reproduce this (restarting vdsm is not detected by engine and
not reported in the engine event log), please file a bug.

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Infiniband migration network

2017-02-02 Thread Giorgio Biacchi

Hi,
you can't bridge InfiniBand. You have to set the network as "no VM network"
so it will not be bridged.


I have a similar setup where InfiniBand is used for data domain mounts and
VM migrations; to have it working in connected mode with an MTU of 65520
you also need a couple of vdsm hooks on all the hypervisors.
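
On the host side, connected mode and the large MTU on the IPoIB interface
look roughly like this (a sketch; the interface name ib0 is an assumption,
and the vdsm hooks mentioned above are still needed to persist it):

    echo connected > /sys/class/net/ib0/mode  # switch IPoIB to connected mode
    ip link set ib0 mtu 65520                 # max MTU for connected mode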


Regards

On 02/02/2017 03:57 PM, Logan Kuhn wrote:

We are starting to scale and have started to notice the limitations of using the
default network for everything.  We have infiniband in our vm hosts and would
like to use that as our migration network, but haven't figured out how.
Creating a logical network and assigning it to one of the infiniband networks
doesn't seem to work because infiniband doesn't handle bridging well.

Regards,
Logan


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
gb

PGP Key: http://pgp.mit.edu/
Primary key fingerprint: C510 0765 943E EBED A4F2 69D3 16CC DC90 B9CB 0F34
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Guest IP

2017-02-02 Thread Alexandr Krivulya

Hi,

is there any way to get the guest IP provided by the guest tools from the
user portal? It is present in the admin portal, but not in the user portal.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt-engine failed to check for updates

2017-02-02 Thread Martin Perina
Hi,

unfortunately you have found a bug [1]. The workaround is to upgrade the
3.6 host to 4.0; for 4.0+ hosts the package lists for the "Check for
upgrades" flow are correct.

Martin Perina


[1] https://bugzilla.redhat.com/show_bug.cgi?id=1418757


On Wed, Feb 1, 2017 at 8:52 PM, Michael Watters  wrote:

> The engine is 4.0, the host nodes are running 3.6 with VDSM 4.17.35.
>
>
> On 02/01/2017 02:41 PM, Victor Jose Acosta wrote:
> > Hello
> >
> > ovirt-imageio-daemon is part of ovirt 4 repository, is your engine
> > version 4 or 3.6?
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt-engine failed to check for updates

2017-02-02 Thread Michael Watters
My intention was to upgrade the ovirt-engine to 4.0 first and then the
host nodes when we can get to them.  

On Thu, 2017-02-02 at 10:50 +0200, Yedidyah Bar David wrote:
> On Wed, Feb 1, 2017 at 10:45 PM, Victor Jose Acosta wrote:
> > So, that's the problem, engine is looking for updates for version
> > 4, it does not matter if your node is version 3.6, I think this is
> > a bug. Engine should be looking for node version updates instead of
> > engine version.
> > 
> 
> Do you intend a 4.0 engine to update a 3.6 host to a later 3.6.z
> version? That's not supported.
> 
> To update the host to 4.0, you should add 4.0 repos to the host.
>  
> > On 01/02/17 17:33, users-requ...@ovirt.org wrote:
> > > Re:  ovirt-engine failed to check for updates
> >  
> > -- 
> > 
> > 
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> > 
> 
> 
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt-engine failed to check for updates

2017-02-02 Thread Michael Watters
Thanks.  I guess we'll just have to ignore these errors until the host
nodes get upgraded.


On Thu, 2017-02-02 at 17:38 +0100, Martin Perina wrote:
> Hi,
> 
> unfortunately you have found a bug [1]. The workaround is to upgrade
> 3.6 host to 4.0, for 4.0+ package lists for "Check for upgrades" flow
> are correct.
> 
> Martin Perina
> 
> 
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1418757
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] planned Jenkins and Resources restart

2017-02-02 Thread Evgheni Dereveanchin
Hi everyone,

I will be applying security updates to our
production systems today. During the maintenance
following services may be unavailable for short
periods of time:

jenkins.ovirt.org
resources.ovirt.org
artifactory.ovirt.org
templates.ovirt.org
mirrors.phx.ovirt.org
proxy.phx.ovirt.org

Jenkins will not schedule new builds while services
are rebooting as this can cause false positives.

I will let you know once the maintenance is over.

Regards, 
Evgheni Dereveanchin 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users