Re: [ovirt-users] no vm.conf?

2017-03-21 Thread Michael Kleinpaste
Just to verify the process (is there documentation somewhere that I'm not finding
for the new procedure?):

   1. Put the cluster in global-ha-maintenance mode.
   2. Make the changes in the webadmin UI.
   3. Reboot the hosted-engine VM.
   4. All should be good?
   5. Disable global-ha-maintenance mode.

Sound right?
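For reference, a minimal sketch of how steps 1, 3 and 5 map to the hosted-engine CLI on
one of the HA hosts (the webadmin edit in step 2 stays manual; treat the exact commands
as an assumption about the usual tooling, not a verified procedure):

    # step 1: enable global HA maintenance so the agents don't restart the VM themselves
    hosted-engine --set-maintenance --mode=global
    # step 2: edit the HostedEngine VM (memory, CPUs) in the webadmin UI
    # step 3: restart the hosted-engine VM so the new configuration takes effect
    hosted-engine --vm-shutdown
    hosted-engine --vm-status     # wait until the VM is reported down
    hosted-engine --vm-start
    # step 5: leave global maintenance once the engine is reachable again
    hosted-engine --set-maintenance --mode=none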

On Tue, Mar 21, 2017 at 4:02 PM Michael Kleinpaste <
michael.kleinpa...@sharperlending.com> wrote:

Great, thanks!

On Tue, Mar 21, 2017 at 3:58 PM Martin Sivak  wrote:

Hi,

yes, it changed (in 4.0 I think). Just edit the hosted engine VM using
the webadmin UI using the same dialog as for any other VM. Memory
size, cpu counts and some other fields are editable and will be
automatically transferred to all nodes.

Best regards

--
Martin Sivak
SLA / oVirt

On Tue, Mar 21, 2017 at 11:45 PM, Michael Kleinpaste
 wrote:
> Hello all,
>
> I wanted to adjust the hosted-engine VM to use a little less CPU and
memory.
> However, none of my hosts have the vm.conf file in
> /etc/ovirt-hosted-engine-ha/ as instructions say it should be.  Has the
> process for changing the hosted-engine VM settings been changed with 4.1?
> --
> Michael Kleinpaste
> Senior Systems Administrator
> SharperLending, LLC.
> www.SharperLending.com
> michael.kleinpa...@sharperlending.com
> (509) 324-1230   Fax: (509) 324-1234
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>

-- 
*Michael Kleinpaste*
Senior Systems Administrator
SharperLending, LLC.
www.SharperLending.com
michael.kleinpa...@sharperlending.com
(509) 324-1230   Fax: (509) 324-1234

-- 
*Michael Kleinpaste*
Senior Systems Administrator
SharperLending, LLC.
www.SharperLending.com
michael.kleinpa...@sharperlending.com
(509) 324-1230   Fax: (509) 324-1234
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] About Template Sub Version

2017-03-21 Thread 张 余歌
Hello, friends.

I want to use "Template Sub Version = latest" + "Stateless status" to realize some
functionality, but I find that I will lose my data this way; it seems I need to
create an FTP server to solve this problem. Is there another good method to avoid
data loss? Thanks!
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how to handle mount options for hosted engine on glusterfs

2017-03-21 Thread Ian Neilsen
Good to know. I've been using the one-dash notation following the RH document.
Don't think I've seen the two-dash notation before.

On the original cluster I used IPs; on the second cluster I used FQDNs, but made
sure I have a hosts file present.
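In case it helps, either spelling can be sanity-checked with a plain manual mount outside
of oVirt; the backup server names below are placeholders, so this is only a sketch:

    # volume name taken from this thread; gluster2/gluster3 stand in for the backup servers
    mkdir -p /mnt/glustertest
    mount -t glusterfs -o backup-volfile-servers=gluster2.example.com:gluster3.example.com \
        gluster1.example.com:/gvol0 /mnt/glustertest
    umount /mnt/glustertest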

On 22 March 2017 at 10:09, /dev/null  wrote:

> Ian, knara,
>
> success! I got it working using the two-dash-notation and ip addresses.
> Shure this is the most relyable way, even with local hosts file.
>
> In my case, the hosted vm dies and takes some time to be running again. Is
> it possible to have the vm surviving the switch to the
> backup-volfile-server?
>
> Thanks & regards
>
> /dev/null
>
> *On Tue, 21 Mar 2017 11:52:32 +0530, knarra wrote*
> > On 03/21/2017 10:52 AM, Ian Neilsen wrote:
> >
>
>
> >
> > knara
> >
> > Looks like your conf is incorrect for mnt option.
> >
> >
>
> Hi Ian,
> >
> > mnt_option should be mnt_options=backup-volfile-servers=:
> and this is how we test it.
> >
> > Thanks
> > kasturi.
> >
>
>
> >
> > It should be I believe;  mnt_options=backupvolfile-server=server name
> >
> > not
> >
> > mnt_options=backup-volfile-servers=host2
> >
> > If your dns isnt working or your hosts file is incorrect this will
> prevent it as well.
> >
> >
> >
> > On 21 March 2017 at 03:30, /dev/null  wrote:
> >
>>
>>
>> > Hi kasturi,
>> >
>> > thank you. I tested and it seems not to work, even after rebooting the
>> current mount does not show up the mnt_options nor the switch over works.
>> >
>> > [root@host2 ~]# cat /etc/ovirt-hosted-engine/hosted-engine.conf
>> > ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
>> > gateway=192.168.2.1
>> > iqn=
>> > conf_image_UUID=7bdc29ad-bee6-4a33-8d58-feae9f45d54f
>> > ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
>> > sdUUID=1775d440-649c-4921-ba3b-9b6218c27ef3
>> > connectionUUID=fcf70593-8214-4e8d-b546-63c210a3d5e7
>> > conf_volume_UUID=06dd17e5-a440-417a-94e8-75929b6f9ed5
>> > user=
>> > host_id=2
>> > bridge=ovirtmgmt
>> > metadata_image_UUID=6252c21c-227d-4dbd-bb7b-65cf342154b6
>> > spUUID=----
>> > mnt_options=backup-volfile-servers=host2
>> > fqdn=ovirt.test.lab
>> > portal=
>> > vm_disk_id=1bb9ea7f-986c-4803-ae82-8d5a47b1c496
>> > metadata_volume_UUID=426ff2cc-58a2-4b83-b22f-3f7dc99890d4
>> > vm_disk_vol_id=b57d40d2-e68b-440a-bab7-0a9631f4baa4
>> > domainType=glusterfs
>> > port=
>> > console=qxl
>> > ca_subject="C=EN, L=Test, O=Test, CN=Test"
>> > password=
>> > vmid=272942f3-99b9-48b9-aca4-19ec852f6874
>> > lockspace_image_UUID=9fbdbfd4-3b31-43ce-80e2-283f0aeead49
>> > lockspace_volume_UUID=b1e4d3ed-ec78-41cd-9a39-4372f488fb92
>> > vdsm_use_ssl=true
>> > storage=host1:/gvol0
>> > conf=/var/run/ovirt-hosted-engine-ha/vm.conf
>> >
>> > [root@host2 ~]# mount |grep gvol0
>> > host1:/gvol0 on /rhev/data-center/mnt/glusterSD/host1:_gvol0 type fuse.glusterfs
>> > (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>> >
>> > Any suggestion?
>> >
>> > I will try an answerfile-install as well later, but it was helpful to
>> know, where to set this.
>> >
>> > Thanks & best regards
>> >
>> * > On Mon, 20 Mar 2017 12:12:25 +0530, knarra wrote*
>> >
>> > > On 03/20/2017 05:09 AM, /dev/null wrote:
>> > >
>>
>> Hi,
>>
>> how do I make the hosted_storage aware of a gluster server failure? In --deploy I
>> cannot provide backup-volfile-servers. In /etc/ovirt-hosted-engine/hosted-engine.conf
>> there is an mnt_options line, but I read
>> (https://github.com/oVirt/ovirt-hosted-engine-setup/commit/995c6a65ab897d804f794306cc3654214f2c29b6)
>> that this setting gets lost during deployment on secondary servers.
>>
>> Is there an official way to deal with that? Should this option be set manually on all nodes?
>>
>> Thanks!
>>
>> /dev/null
>>
>> Hi,
>>
>> I think in the above patch they are just hiding the query for mount_options,
>> but I think all the code is still present and you should not lose mount
>> options during additional host deployment. For more info you can refer to [1].
>>
>> You can set this option manually on all nodes by editing
>> /etc/ovirt-hosted-engine/hosted-engine.conf. The following steps will help
>> you achieve this:
>>
>> 1) Move each host to maintenance and edit the file
>>    '/etc/ovirt-hosted-engine/hosted-engine.conf'.
>> 2) Set mnt_options=backup-volfile-servers=:
>> 3) Restart the services: 'systemctl restart ovirt-ha-agent' ;
>>    'systemctl restart ovirt-ha-broker'
>> 4) Activate the node.
>>
>> Repeat the above steps for all the nodes in the cluster.
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1426517#c2
>>
>> Hope this helps!
>>
>> Thanks
>> kasturi
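A compact sketch of those four steps as they would look on a single node (the backup
server names are placeholders, and moving the host in and out of maintenance is still
done from the webadmin UI):

    # run on each host after putting it into maintenance in the webadmin UI
    sed -i 's/^mnt_options=.*/mnt_options=backup-volfile-servers=gluster2.example.com:gluster3.example.com/' \
        /etc/ovirt-hosted-engine/hosted-engine.conf
    systemctl restart ovirt-ha-agent
    systemctl restart ovirt-ha-broker
    # then activate the host again and repeat on the next node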
>>
>>
>> > >
>>
>> 

Re: [ovirt-users] how to handle mount options for hosted engine on glusterfs

2017-03-21 Thread /dev/null
Ian, knara,

Success! I got it working using the two-dash notation and IP addresses. Surely
this is the most reliable way, even with a local hosts file.

In my case, the hosted VM dies and takes some time to be running again. Is it
possible to have the VM survive the switch to the backup-volfile-server?

Thanks & regards

/dev/null

On Tue, 21 Mar 2017 11:52:32 +0530, knarra wrote
> On 03/21/2017 10:52 AM, Ian Neilsen wrote:
> 
> 
> knara
> 
> Looks like your conf is incorrect for mnt option.
> 
> Hi Ian,
>     
>     mnt_option should be mnt_options=backup-volfile-servers=: and
>     this is how we test it.
> 
> Thanks
> kasturi.
> 
> 
> It should be I believe; mnt_options=backupvolfile-server=server name
> 
> not
> 
> mnt_options=backup-volfile-servers=host2
> 
> If your DNS isn't working or your hosts file is incorrect this will prevent it
> as well.
> 
> 
> 
> On 21 March 2017 at 03:30, /dev/null wrote:
> 
> Hi kasturi,
> 
> thank you. I tested and it seems not to work, even after rebooting the current
> mount does not show up the mnt_options nor the switch over works.
> 
> [root@host2 ~]# cat /etc/ovirt-hosted-engine/hosted-engine.conf
> ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
> gateway=192.168.2.1
> iqn=
> conf_image_UUID=7bdc29ad-bee6-4a33-8d58-feae9f45d54f
> ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
> sdUUID=1775d440-649c-4921-ba3b-9b6218c27ef3
> connectionUUID=fcf70593-8214-4e8d-b546-63c210a3d5e7
> conf_volume_UUID=06dd17e5-a440-417a-94e8-75929b6f9ed5
> user=
> host_id=2
> bridge=ovirtmgmt
> metadata_image_UUID=6252c21c-227d-4dbd-bb7b-65cf342154b6
> spUUID=----
> mnt_options=backup-volfile-servers=host2
> fqdn=ovirt.test.lab
> portal=
> vm_disk_id=1bb9ea7f-986c-4803-ae82-8d5a47b1c496
> metadata_volume_UUID=426ff2cc-58a2-4b83-b22f-3f7dc99890d4
> vm_disk_vol_id=b57d40d2-e68b-440a-bab7-0a9631f4baa4
> domainType=glusterfs
> port=
> console=qxl
> ca_subject="C=EN, L=Test, O=Test, CN=Test"
> password=
> vmid=272942f3-99b9-48b9-aca4-19ec852f6874
> lockspace_image_UUID=9fbdbfd4-3b31-43ce-80e2-283f0aeead49
> lockspace_volume_UUID=b1e4d3ed-ec78-41cd-9a39-4372f488fb92
> vdsm_use_ssl=true
> storage=host1:/gvol0
> conf=/var/run/ovirt-hosted-engine-ha/vm.conf
> 
> [root@host2 ~]# mount |grep gvol0
> host1:/gvol0 on /rhev/data-center/mnt/glusterSD/host1:_gvol0 type fuse.glusterfs
> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
> 
> Any suggestion?
> 
> I will try an answerfile-install as well later, but it was helpful to know
> where to set this.
> 
> Thanks & best regards
> 
> On Mon, 20 Mar 2017 12:12:25 +0530, knarra wrote
> 
> > On 03/20/2017 05:09 AM, /dev/null wrote:
> > Hi,

how do I make the hosted_storage aware of a gluster server failure? In --deploy I
cannot provide backup-volfile-servers. In /etc/ovirt-hosted-engine/hosted-engine.conf
there is an mnt_options line, but I read
(https://github.com/oVirt/ovirt-hosted-engine-setup/commit/995c6a65ab897d804f794306cc3654214f2c29b6)
that this setting gets lost during deployment on secondary servers.

Is there an official way to deal with that? Should this option be set manually on all nodes?

Thanks!

/dev/null

Hi,

I think in the above patch they are just hiding the query for mount_options,
but I think all the code is still present and you should not lose mount
options during additional host deployment. For more info you can refer to [1].

You can set this option manually on all nodes by editing
/etc/ovirt-hosted-engine/hosted-engine.conf. The following steps will help
you achieve this:

1) Move each host to maintenance and edit the file
   '/etc/ovirt-hosted-engine/hosted-engine.conf'.
2) Set mnt_options=backup-volfile-servers=:
3) Restart the services: 'systemctl restart ovirt-ha-agent' ;
   'systemctl restart ovirt-ha-broker'
4) Activate the node.

Repeat the above steps for all the nodes in the cluster.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1426517#c2

Hope this helps!

Thanks
kasturi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
> 
> --
> Ian Neilsen
> Mobile: 0424 379 762
> Linkedin: http://au.linkedin.com/in/ianneilsen
> 

Re: [ovirt-users] Lost our HostedEngineVM

2017-03-21 Thread Juan Pablo
whats the output of the /var/log/ovirt-hosted-engine-ha/agent.log ?

regards
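For anyone following along, the state being asked about can usually be gathered like
this on a hosted-engine host (the standard hosted-engine CLI plus the log path mentioned
above; exact output will of course vary):

    # recent HA agent activity
    tail -n 200 /var/log/ovirt-hosted-engine-ha/agent.log
    # current view of the hosted-engine VM across the HA hosts
    hosted-engine --vm-status
    # if the VM is simply down and storage is back, it can be started manually
    hosted-engine --vm-start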

2017-03-21 18:58 GMT-03:00 Matt Emma :

> We’re in a bit of a panic mode, so excuse any shortness.
>
>
>
> We had a storage failure. We rebooted a VMHost that had the HostedEngine
> VM - the HostedEngine did not try to move to the other hosts. We've since
> restored storage and we are able to successfully restart the paused VMs. We
> know the HostedEngine's VM ID; is there a way we can force-load it from the
> mounted storage?
>
>
>
> -Matt
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] no vm.conf?

2017-03-21 Thread Michael Kleinpaste
Great, thanks!

On Tue, Mar 21, 2017 at 3:58 PM Martin Sivak  wrote:

> Hi,
>
> yes, it changed (in 4.0 I think). Just edit the hosted engine VM using
> the webadmin UI using the same dialog as for any other VM. Memory
> size, cpu counts and some other fields are editable and will be
> automatically transferred to all nodes.
>
> Best regards
>
> --
> Martin Sivak
> SLA / oVirt
>
> On Tue, Mar 21, 2017 at 11:45 PM, Michael Kleinpaste
>  wrote:
> > Hello all,
> >
> > I wanted to adjust the hosted-engine VM to use a little less CPU and
> memory.
> > However, none of my hosts have the vm.conf file in
> > /etc/ovirt-hosted-engine-ha/ as instructions say it should be.  Has the
> > process for changing the hosted-engine VM settings been changed with 4.1?
> > --
> > Michael Kleinpaste
> > Senior Systems Administrator
> > SharperLending, LLC.
> > www.SharperLending.com
> > michael.kleinpa...@sharperlending.com
> > (509) 324-1230   Fax: (509) 324-1234
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
-- 
*Michael Kleinpaste*
Senior Systems Administrator
SharperLending, LLC.
www.SharperLending.com
michael.kleinpa...@sharperlending.com
(509) 324-1230   Fax: (509) 324-1234
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] no vm.conf?

2017-03-21 Thread Martin Sivak
Hi,

yes, it changed (in 4.0 I think). Just edit the hosted engine VM using
the webadmin UI using the same dialog as for any other VM. Memory
size, cpu counts and some other fields are editable and will be
automatically transferred to all nodes.

Best regards

--
Martin Sivak
SLA / oVirt

On Tue, Mar 21, 2017 at 11:45 PM, Michael Kleinpaste
 wrote:
> Hello all,
>
> I wanted to adjust the hosted-engine VM to use a little less CPU and memory.
> However, none of my hosts have the vm.conf file in
> /etc/ovirt-hosted-engine-ha/ as instructions say it should be.  Has the
> process for changing the hosted-engine VM settings been changed with 4.1?
> --
> Michael Kleinpaste
> Senior Systems Administrator
> SharperLending, LLC.
> www.SharperLending.com
> michael.kleinpa...@sharperlending.com
> (509) 324-1230   Fax: (509) 324-1234
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] no vm.conf?

2017-03-21 Thread Michael Kleinpaste
Hello all,

I wanted to adjust the hosted-engine VM to use a little less CPU and
memory.  However, none of my hosts have the vm.conf file
in /etc/ovirt-hosted-engine-ha/ as instructions say it should be.  Has the
process for changing the hosted-engine VM settings been changed with 4.1?
-- 
*Michael Kleinpaste*
Senior Systems Administrator
SharperLending, LLC.
www.SharperLending.com
michael.kleinpa...@sharperlending.com
(509) 324-1230   Fax: (509) 324-1234
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Lost our HostedEngineVM

2017-03-21 Thread Matt Emma
We're in a bit of a panic mode, so excuse any shortness.

We had a storage failure. We rebooted a VMHost that had the HostedEngine VM -
the HostedEngine did not try to move to the other hosts. We've since restored
storage and we are able to successfully restart the paused VMs. We know the
HostedEngine's VM ID; is there a way we can force-load it from the mounted
storage?

-Matt
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Strange network performance on VirtIIO VM NIC

2017-03-21 Thread Yaniv Kaul
On Tue, Mar 21, 2017 at 8:14 PM, FERNANDO FREDIANI <
fernando.fredi...@upx.com> wrote:

> Hello Yaniv.
>
> Have a new information about this scenario: I have load-balanced the
> requests between both vNICs, so each is receiving/sending half of the
> traffic in average and the packet loss although it still exists it lowered
> to 1% - 2% (which was expected as the CPU to process this traffic is shared
> by more than one CPU at a time).
> However the Load on the VM is still high probably due to the interrupts.
>
> Find below in-line the answers to some of your points:
>
> On 21/03/2017 12:31, Yaniv Kaul wrote:
>
>
> So there are 2 NUMA nodes on the host? And where are the NICs located?
>
> Tried to search how to check it but couldn't find how. Could you give me a
> hint ?
>

I believe 'lspci -vmm' should provide you with node information per PCI
device.
'numactl' can also provide interesting information.
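A sketch of commands along those lines (eth0 is a placeholder for whichever physical NIC
backs the bond):

    # NUMA topology of the host
    numactl --hardware
    # NUMA node of a given network device (-1 means no affinity reported)
    cat /sys/class/net/eth0/device/numa_node
    # full PCI details, which on most systems include a "NUMA node" line
    lspci -vvv -s "$(ethtool -i eth0 | awk '/bus-info/ {print $2}')"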

>
>
> BTW, since those are virtual interfaces, why do you need two on the same
> VLAN?
>
> Very good question. It's because of an specific situation where I need to
> 2 MAC addresses in order to balance the traffic in LAG in a switch which
> does only layer 2 hashing.
>
>
> Are you using hyper-threading on the host? Otherwise, I'm not sure threads
> per core would help.
>
> Yes I have hyper-threading enabled on the Host. Is it worth to enable it ?
>

Depends on the workload. Some benefit from it, some don't. I wouldn't in
your case (it benefits mainly the case of many VMs with small number of
vCPUs).
Y.


>
>
> Thanks
> Fernando
>
>
>> On 18/03/2017 12:53, Yaniv Kaul wrote:
>>
>>
>>
>> On Fri, Mar 17, 2017 at 6:11 PM, FERNANDO FREDIANI <
>> fernando.fredi...@upx.com> wrote:
>>
>>> Hello all.
>>>
>>> I have a peculiar problem here which perhaps others may have had or know
>>> about and can advise.
>>>
>>> I have Virtual Machine with 2 VirtIO NICs. This VM serves around 1Gbps
>>> of traffic with thousands of clients connecting to it. When I do a packet
>>> loss test to the IP pinned to NIC1 it varies from 3% to 10% of packet loss.
>>> When I run the same test on NIC2 the packet loss is consistently 0%.
>>>
>>> From what I gather I may have something to do with possible lack of
>>> Multi Queu VirtIO where NIC1 is managed by a single CPU which might be
>>> hitting 100% and causing this packet loss.
>>>
>>> Looking at this reference (https://fedoraproject.org/wik
>>> i/Features/MQ_virtio_net) I see one way to test it is start the VM with
>>> 4 queues (for example), but checking on the qemu-kvm process I don't see
>>> option present. Any way I can force it from the Engine ?
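As an aside, a quick way to see from inside the guest whether multi-queue virtio-net is
available, and to enable it if the host exposes extra queues (eth0 is a placeholder; this
is only a sketch, not a statement about what the engine generates):

    # show current and maximum channel (queue) counts for the vNIC
    ethtool -l eth0
    # if more than one combined channel is offered, enable them in the guest
    ethtool -L eth0 combined 4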
>>>
>>
>> I don't see a need for multi-queue for 1Gbps.
>> Can you share the host statistics, the network configuration, the
>> qemu-kvm command line, etc.?
>> What is the difference between NIC1 and NIC2, in the way they are
>> connected to the outside world?
>>
>>
>>>
>>> This other reference (https://www.linux-kvm.org/pag
>>> e/Multiqueue#Enable_MQ_feature) points to the same direction about
>>> starting the VM with queues=N
>>>
>>> Also trying to increase the TX ring buffer within the guest with ethtool
>>> -g eth0 is not possible.
>>>
>>> Oh, by the way, the Load on the VM is significantly high despite the CPU
>>> usage isn't above 50% - 60% in average.
>>>
>>
>> Load = latest 'top' results? Vs. CPU usage? Can mean a lot of processes
>> waiting for CPU and doing very little - typical for web servers, for
>> example. What is occupying the CPU?
>> Y.
>>
>>
>>>
>>> Thanks
>>> Fernando
>>>
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>>
>>
>>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted-engine with iscsi storage domain

2017-03-21 Thread Simone Tiraboschi
On Tue, Mar 21, 2017 at 3:06 PM, Devin A. Bougie 
wrote:

> On Mar 20, 2017, at 12:54 PM, Simone Tiraboschi 
> wrote:
> > The engine should import it by itself once you add your first storage
> domain for regular VMs.
> > No manual import actions are required.
>
> It didn't seem to for us.  I don't see it in the Storage tab (maybe I
> shouldn't?).  I can install a new host from the engine web ui, but I don't
> see any hosted-engine options.  If I put the new host in maintenance and
> reinstall, I can select DEPLOY under "Choose hosted engine deployment
> action."  However, the web UI then complains that:
> Cannot edit Host.  You are using an unmanaged hosted engine VM.  Please
> upgrade the cluster level to 3.6 and wait for the hosted engine storage
> domain to be properly imported.
>
>
Did you already add your first storage domain for regular VMs?
If that one is also on iSCSI, it should be connected through a different
iSCSI portal.

This is on a new 4.1 cluster with the hosted-engine created using
> hosted-engine --deploy on the first host.
>
> > No, a separate network for the storage is even recommended.
>
> Glad to hear, thanks!
>
> Devin
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Adding posix compliant FS

2017-03-21 Thread carl langlois
Okay, I have managed to use the POSIX compliant FS.

The first thing I did was to remove any multipath stuff from the disk and use a
standard partition table, i.e. /dev/sdb1 (but I do not think that really helped).
Then I changed the block device's (/dev/sdb1) group and owner to vdsm:kvm (that
did not do the trick either; I still got permission denied).
Finally I created a directory /rhev/data-center/mnt/_dev_sdb1 and set its owner
and group to vdsm:kvm (this did the trick).

So why did I have to create that last directory by hand to make it work? Am I
missing something?
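For the record, a sketch of the sequence described above (device and mount-point names
are the ones from this thread; whether the manual mkdir should really be needed is
exactly the open question):

    # standard partition, no multipath
    chown vdsm:kvm /dev/sdb1
    # mount point oVirt ends up using for a POSIX domain on this device
    mkdir -p /rhev/data-center/mnt/_dev_sdb1
    chown vdsm:kvm /rhev/data-center/mnt/_dev_sdb1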

Thanks
Carl




On Tue, Mar 21, 2017 at 10:50 AM, Fred Rolland  wrote:

> Can you try :
>
> chown -R vdsm:kvm /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1
>
> On Tue, Mar 21, 2017 at 4:32 PM, carl langlois 
> wrote:
>
>>
>> jsonrpc.Executor/7::WARNING::2017-03-21 09:27:40,099::outOfProcess::19
>> 3::Storage.oop::(validateAccess) Permission denied for directory:
>> /rhev/data-center/mnt/_dev_mapper_KINGSTON__SV300S37A240G__50026B726804F13B1
>> with permissions:7
>> jsonrpc.Executor/7::INFO::2017-03-21 
>> 09:27:40,099::mount::233::storage.Mount::(umount)
>> unmounting /rhev/data-center/mnt/_dev_mapper_KINGSTON__SV300S37A240G__5
>> 0026B726804F13B1
>> jsonrpc.Executor/7::DEBUG::2017-03-21 
>> 09:27:40,104::utils::871::storage.Mount::(stopwatch)
>> /rhev/data-center/mnt/_dev_mapper_KINGSTON__SV300S37A240G__50026B726804F13B1
>> unmounted: 0.00 seconds
>> jsonrpc.Executor/7::ERROR::2017-03-21 
>> 09:27:40,104::hsm::2403::Storage.HSM::(connectStorageServer)
>> Could not connect to storageServer
>> Traceback (most recent call last):
>>   File "/usr/share/vdsm/storage/hsm.py", line 2400, in
>> connectStorageServer
>> conObj.connect()
>>   File "/usr/share/vdsm/storage/storageServer.py", line 249, in connect
>> six.reraise(t, v, tb)
>>   File "/usr/share/vdsm/storage/storageServer.py", line 242, in connect
>> self.getMountObj().getRecord().fs_file)
>>   File "/usr/share/vdsm/storage/fileSD.py", line 81, in validateDirAccess
>> raise se.StorageServerAccessPermissionError(dirPath)
>> StorageServerAccessPermissionError: Permission settings on the specified
>> path do not allow access to the storage. Verify permission settings on the
>> specified storage path.: 'path = /rhev/data-center/mnt/_dev_map
>> per_KINGSTON__SV300S37A240G__50026B726804F13B1'
>> jsonrpc.Executor/7::DEBUG::201
>>
>> Thanks again.
>>
>>
>> On Tue, Mar 21, 2017 at 10:14 AM, Fred Rolland 
>> wrote:
>>
>>> Can you share the VDSM log again ?
>>>
>>> On Tue, Mar 21, 2017 at 4:08 PM, carl langlois 
>>> wrote:
>>>
 Interesting, when i'm using 
 /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1
 now the UI give error on the permission setting..

 root@ovhost4 ~]# ls -al /dev/mapper/KINGSTON_SV300S37A
 240G_50026B726804F13B1
 lrwxrwxrwx 1 root root 7 Mar 18 08:28 
 /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1
 -> ../dm-3

 and the permission on the dm-3

 [root@ovhost4 ~]# ls -al /dev/dm-3
 brw-rw 1 vdsm kvm 253, 3 Mar 18 08:28 /dev/dm-3


 how do i change the permission on the sym link..

 Thanks




 On Tue, Mar 21, 2017 at 10:00 AM, Fred Rolland 
 wrote:

> Can you try to use /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1
> in the UI.
> It seems the kernel change the path that we use to mount and then we
> cannot validate that the mount exists.
>
> It should be anyway better as the mapping could change after reboot.
>
> On Tue, Mar 21, 2017 at 2:20 PM, carl langlois  > wrote:
>
>> Here is the /proc/mounts
>>
>> rootfs / rootfs rw 0 0
>> sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
>> proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
>> devtmpfs /dev devtmpfs 
>> rw,nosuid,size=65948884k,nr_inodes=16487221,mode=755
>> 0 0
>> securityfs /sys/kernel/security securityfs
>> rw,nosuid,nodev,noexec,relatime 0 0
>> tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
>> devpts /dev/pts devpts 
>> rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
>> 0 0
>> tmpfs /run tmpfs rw,nosuid,nodev,mode=755 0 0
>> tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
>> cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatim
>> e,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd
>> 0 0
>> pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
>> cgroup /sys/fs/cgroup/cpu,cpuacct cgroup
>> rw,nosuid,nodev,noexec,relatime,cpuacct,cpu 0 0
>> cgroup /sys/fs/cgroup/net_cls,net_prio cgroup
>> rw,nosuid,nodev,noexec,relatime,net_prio,net_cls 0 0
>> cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids
>> 0 0
>> cgroup /sys/fs/cgroup/devices cgroup 
>> 

Re: [ovirt-users] Strange network performance on VirtIIO VM NIC

2017-03-21 Thread FERNANDO FREDIANI

Hello Yaniv.

I have some new information about this scenario: I have load-balanced the
requests between both vNICs, so each is receiving/sending half of the
traffic on average, and the packet loss, although it still exists, has
dropped to 1% - 2% (which was expected, as the CPU work to process this
traffic is now shared by more than one CPU at a time).

However the Load on the VM is still high probably due to the interrupts.

Find below in-line the answers to some of your points:


On 21/03/2017 12:31, Yaniv Kaul wrote:


So there are 2 NUMA nodes on the host? And where are the NICs located?
I tried to search for how to check it but couldn't find out how. Could you give
me a hint?


BTW, since those are virtual interfaces, why do you need two on the 
same VLAN?
Very good question. It's because of a specific situation where I need
2 MAC addresses in order to balance the traffic in a LAG on a switch
which does only layer 2 hashing.
Are you using hyper-threading on the host? Otherwise, I'm not sure 
threads per core would help.

Yes, I have hyper-threading enabled on the Host. Is it worth enabling it?

Thanks
Fernando



On 18/03/2017 12:53, Yaniv Kaul wrote:



On Fri, Mar 17, 2017 at 6:11 PM, FERNANDO FREDIANI
>
wrote:

Hello all.

I have a peculiar problem here which perhaps others may have
had or know about and can advise.

I have Virtual Machine with 2 VirtIO NICs. This VM serves
around 1Gbps of traffic with thousands of clients connecting
to it. When I do a packet loss test to the IP pinned to NIC1
it varies from 3% to 10% of packet loss. When I run the same
test on NIC2 the packet loss is consistently 0%.

From what I gather I may have something to do with possible
lack of Multi Queu VirtIO where NIC1 is managed by a single
CPU which might be hitting 100% and causing this packet loss.

Looking at this reference
(https://fedoraproject.org/wiki/Features/MQ_virtio_net
) I
see one way to test it is start the VM with 4 queues (for
example), but checking on the qemu-kvm process I don't see
option present. Any way I can force it from the Engine ?


I don't see a need for multi-queue for 1Gbps.
Can you share the host statistics, the network configuration,
the qemu-kvm command line, etc.?
What is the difference between NIC1 and NIC2, in the way they
are connected to the outside world?


This other reference
(https://www.linux-kvm.org/page/Multiqueue#Enable_MQ_feature
)
points to the same direction about starting the VM with queues=N

Also trying to increase the TX ring buffer within the guest
with ethtool -g eth0 is not possible.

Oh, by the way, the Load on the VM is significantly high
despite the CPU usage isn't above 50% - 60% in average.


Load = latest 'top' results? Vs. CPU usage? Can mean a lot of
processes waiting for CPU and doing very little - typical for
web servers, for example. What is occupying the CPU?
Y.


Thanks
Fernando



___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users











___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] OSSEC reporting hidden processes

2017-03-21 Thread Charles Kozler
Unfortunately, by the time I am able to SSH to the server and start looking
around, that PID is nowhere to be found.

So it seems something winds up in oVirt, runs, doesn't register in /proc (I
think even threads register themselves in /proc), and then dies off.

Any ideas?
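For what it's worth, OSSEC's rootcheck essentially probes every possible PID with signal 0
and flags ones that respond but have no /proc entry, so a very short-lived helper process
can trip it. A rough sketch of that check (an assumption about rootcheck's internals, not
something taken from this thread):

    # probe every PID; a hit here is either a race with a short-lived process or something hidden
    for pid in $(seq 1 32768); do
        if kill -0 "$pid" 2>/dev/null && [ ! -d "/proc/$pid" ]; then
            echo "PID $pid is alive but not listed in /proc"
        fi
    done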

On Tue, Mar 21, 2017 at 3:10 AM, Yedidyah Bar David  wrote:

> On Mon, Mar 20, 2017 at 5:59 PM, Charles Kozler 
> wrote:
> > Hi -
> >
> > I am wondering why OSSEC would be reporting hidden processes on my ovirt
> > nodes? I run OSSEC across the infrastructure and multiple ovirt clusters
> > have assorted nodes that will report a process is running but does not
> have
> > an entry in /proc and thus "possible rootkit" alert is fired
> >
> > I am well aware that I do not have rootkits on these systems but am
> > wondering what exactly inside ovirt is causing this to trigger? Or any
> > ideas? Below is sample alert. All my google-fu turns up is that a process
> > would have to **try** to hide itself from /proc, so curious what this is
> > inside ovirt. Thanks!
> >
> > -
> >
> > OSSEC HIDS Notification.
> > 2017 Mar 20 11:54:47
> >
> > Received From: (ovirtnode2.mydomain.com2) any->rootcheck
> > Rule: 510 fired (level 7) -> "Host-based anomaly detection event
> > (rootcheck)."
> > Portion of the log(s):
> >
> > Process '24574' hidden from /proc. Possible kernel level rootkit.
>
> What do you get from:
>
> ps -eLf | grep -w 24574
>
> Thanks,
> --
> Didi
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] How do you oVirt?

2017-03-21 Thread Sandro Bonazzola
As we continue to develop oVirt 4.2 and future releases, the Development
and Integration teams at Red Hat would value
insights on how you are deploying the oVirt environment.
Please help us hit the mark by completing this short survey. The survey will
close on April 15th.

Here's the link to the survey:
https://docs.google.com/forms/d/e/1FAIpQLSdloxiIP2HrW2HguU0UVbNtKgpSBaJXj-Z9lxyNAR7B9_S0Zg/viewform?usp=fb_send_twt

Thanks,

-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Strange network performance on VirtIIO VM NIC

2017-03-21 Thread Yaniv Kaul
On Tue, Mar 21, 2017 at 5:00 PM, FERNANDO FREDIANI <
fernando.fredi...@upx.com> wrote:

> Hi Yaniv
> On 21/03/2017 06:19, Yaniv Kaul wrote:
>
>
> Is your host with NUMA support (multiple sockets) ? Are all your
> interfaces connected to the same socket? Perhaps one is on the 'other'
> socket (a different PCI bus, etc.)? This can introduce latency.
> In general, you would want to align everything, from host (interrupts of
> the drivers) all the way to the guest to perform the processing on the same
> socket.
>
> I believe so it is. Look:
> ~]# dmesg | grep -i numa
> [0.00] Enabling automatic NUMA balancing. Configure with
> numa_balancing= or the kernel.numa_balancing sysctl
> [0.693082] pci_bus :00: on NUMA node 0
> [0.696457] pci_bus :40: on NUMA node 1
> [0.700678] pci_bus :3f: on NUMA node 0
> [0.704844] pci_bus :7f: on NUMA node 1
>

So there are 2 NUMA nodes on the host? And where are the NICs located?


>
> The thing is, if was something affecting the underlying network layer
> (drivers for the physical nics for example) it would affect all traffic to
> the VM, not just the one going in/out via vNIC1, right ?
>

Most likely.


>
>
> Layer 2+3 may or may not provide you with good distribution across the
> physical links, depending on the traffic. Layer 3+4 hashing is better, but
> is not entirely compliant with all vendors/equipment.
>
> Yes, I have tested with both and both work well. Have settled on layer2+3
> as it balances the traffic equally layer3+4 for my scenario.
> Initially I have guessed it could be the bonding, but ruled that out when
> I tested with another physical interface that doesn't have any bonding and
> the problem happened the same for the VM in question.
>
> Linux is not always happy with multiple interfaces on the same L2 network.
> I think there are some params needed to be set to make it happy?
>
> Yes you are right and yes, knowing of that I have configured PBR using
> iproute2 which makes Linux work happy in this scenario. Works like a charm.
>

BTW, since those are virtual interfaces, why do you need two on the same
VLAN?


>
>
>
> That can explain it.  Ideally, you need to also streamline the processing
> in the guest. The relevant application should be on the same NUMA node as
> the vCPU processing the virtio-net interrupts.
> In your case, the VM sees a single NUMA node - does that match the
> underlying host architecture as well?
>
> Not sure. The command line from qemu-kvm is automatically generated by
> oVirt. Perhaps some extra option to be changed under Advanced Parameters on
> VM CPU configuration ? Also I was wondering if enabling "IO Threads
> Enabled" under Resource Allocation could be of any help.
>

IO threads are for IO (= storage; perhaps that's not clear and we need to
clarify it) and are only useful with a large number of disks (and IO, of course).


>
> To finish I more inclined to understand that problem is restricted to the
> VM, not to the Host(drivers, physical NICs, etc), given the packet loss
> happens in vNIC1 not in vNIC2 when it has no traffic. If it was in the Host
> level or bonding it would affect the whole VM traffic in either vNICs.
> As a last resource I am considering add an extra 2 vCPUs to the VMs, but I
> guess that will only lower the problem. Does anyone think that "Threads per
> Core" or IO Thread could be a better choice ?
>

Are you using hyper-threading on the host? Otherwise, I'm not sure threads
per core would help.
Y.



>
> Thanks
> Fernando
>
>
> On 18/03/2017 12:53, Yaniv Kaul wrote:
>
>
>
> On Fri, Mar 17, 2017 at 6:11 PM, FERNANDO FREDIANI <
> fernando.fredi...@upx.com> wrote:
>
>> Hello all.
>>
>> I have a peculiar problem here which perhaps others may have had or know
>> about and can advise.
>>
>> I have Virtual Machine with 2 VirtIO NICs. This VM serves around 1Gbps of
>> traffic with thousands of clients connecting to it. When I do a packet loss
>> test to the IP pinned to NIC1 it varies from 3% to 10% of packet loss. When
>> I run the same test on NIC2 the packet loss is consistently 0%.
>>
>> From what I gather I may have something to do with possible lack of Multi
>> Queu VirtIO where NIC1 is managed by a single CPU which might be hitting
>> 100% and causing this packet loss.
>>
>> Looking at this reference (https://fedoraproject.org/wik
>> i/Features/MQ_virtio_net) I see one way to test it is start the VM with
>> 4 queues (for example), but checking on the qemu-kvm process I don't see
>> option present. Any way I can force it from the Engine ?
>>
>
> I don't see a need for multi-queue for 1Gbps.
> Can you share the host statistics, the network configuration, the qemu-kvm
> command line, etc.?
> What is the difference between NIC1 and NIC2, in the way they are
> connected to the outside world?
>
>
>>
>> This other reference (https://www.linux-kvm.org/pag
>> e/Multiqueue#Enable_MQ_feature) points to the same direction about
>> starting the VM with queues=N
>>
>> Also trying to 

Re: [ovirt-users] Strange network performance on VirtIIO VM NIC

2017-03-21 Thread FERNANDO FREDIANI

Hi Yaniv

On 21/03/2017 06:19, Yaniv Kaul wrote:


Is your host with NUMA support (multiple sockets) ? Are all your 
interfaces connected to the same socket? Perhaps one is on the 'other' 
socket (a different PCI bus, etc.)? This can introduce latency.
In general, you would want to align everything, from host (interrupts 
of the drivers) all the way to the guest to perform the processing on 
the same socket.

I believe so it is. Look:
~]# dmesg | grep -i numa
[0.00] Enabling automatic NUMA balancing. Configure with 
numa_balancing= or the kernel.numa_balancing sysctl

[0.693082] pci_bus :00: on NUMA node 0
[0.696457] pci_bus :40: on NUMA node 1
[0.700678] pci_bus :3f: on NUMA node 0
[0.704844] pci_bus :7f: on NUMA node 1

The thing is, if it was something affecting the underlying network layer
(drivers for the physical NICs, for example) it would affect all traffic
to the VM, not just the traffic going in/out via vNIC1, right?


Layer 2+3 may or may not provide you with good distribution across the 
physical links, depending on the traffic. Layer 3+4 hashing is better, 
but is not entirely compliant with all vendors/equipment.
Yes, I have tested with both and both work well. I have settled on
layer2+3 as it balances the traffic as evenly as layer3+4 for my scenario.
Initially I guessed it could be the bonding, but ruled that out when I
tested with another physical interface that doesn't have any bonding and
the problem happened the same for the VM in question.
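A quick way to confirm which hash policy a bond is actually using (bond0 is a placeholder
for the bond device name):

    # the "Transmit Hash Policy" line reports layer2, layer2+3 or layer3+4
    grep -i "hash policy" /proc/net/bonding/bond0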
Linux is not always happy with multiple interfaces on the same L2 
network. I think there are some params needed to be set to make it happy?
Yes, you are right, and knowing that, I have configured PBR using
iproute2, which keeps Linux happy in this scenario. Works like a charm.
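For readers wondering what that PBR setup looks like, a minimal iproute2 sketch (addresses,
table number and interface name are placeholders; the real configuration here may differ):

    # reply from the second vNIC's own address via its own gateway
    ip route add default via 192.0.2.1 dev eth1 table 100
    ip rule add from 192.0.2.10/32 lookup 100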


That can explain it.  Ideally, you need to also streamline the 
processing in the guest. The relevant application should be on the 
same NUMA node as the vCPU processing the virtio-net interrupts.
In your case, the VM sees a single NUMA node - does that match the 
underlying host architecture as well?
Not sure. The command line from qemu-kvm is automatically generated by 
oVirt. Perhaps some extra option to be changed under Advanced Parameters 
on VM CPU configuration ? Also I was wondering if enabling "IO Threads 
Enabled" under Resource Allocation could be of any help.


To finish, I am more inclined to think the problem is restricted to the
VM, not to the Host (drivers, physical NICs, etc.), given that the packet
loss happens on vNIC1 and not on vNIC2 when it has no traffic. If it were
at the Host level or in the bonding, it would affect the whole VM traffic
on either vNIC.
As a last resort I am considering adding an extra 2 vCPUs to the VM, but
I guess that will only reduce the problem. Does anyone think that
"Threads per Core" or IO Thread could be a better choice?


Thanks
Fernando



On 18/03/2017 12:53, Yaniv Kaul wrote:



On Fri, Mar 17, 2017 at 6:11 PM, FERNANDO FREDIANI 
> wrote:


Hello all.

I have a peculiar problem here which perhaps others may have had
or know about and can advise.

I have Virtual Machine with 2 VirtIO NICs. This VM serves around
1Gbps of traffic with thousands of clients connecting to it. When
I do a packet loss test to the IP pinned to NIC1 it varies from
3% to 10% of packet loss. When I run the same test on NIC2 the
packet loss is consistently 0%.

From what I gather I may have something to do with possible lack
of Multi Queu VirtIO where NIC1 is managed by a single CPU which
might be hitting 100% and causing this packet loss.

Looking at this reference
(https://fedoraproject.org/wiki/Features/MQ_virtio_net
) I see
one way to test it is start the VM with 4 queues (for example),
but checking on the qemu-kvm process I don't see option present.
Any way I can force it from the Engine ?


I don't see a need for multi-queue for 1Gbps.
Can you share the host statistics, the network configuration, the 
qemu-kvm command line, etc.?
What is the difference between NIC1 and NIC2, in the way they are 
connected to the outside world?



This other reference
(https://www.linux-kvm.org/page/Multiqueue#Enable_MQ_feature
)
points to the same direction about starting the VM with queues=N

Also trying to increase the TX ring buffer within the guest with
ethtool -g eth0 is not possible.

Oh, by the way, the Load on the VM is significantly high despite
the CPU usage isn't above 50% - 60% in average.


Load = latest 'top' results? Vs. CPU usage? Can mean a lot of 
processes waiting for CPU and doing very little - typical for web 
servers, for example. What is occupying the CPU?

Y.


Thanks
Fernando




Re: [ovirt-users] Adding posix compliant FS

2017-03-21 Thread Fred Rolland
Can you try :

chown -R vdsm:kvm /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1

On Tue, Mar 21, 2017 at 4:32 PM, carl langlois 
wrote:

>
> jsonrpc.Executor/7::WARNING::2017-03-21 09:27:40,099::outOfProcess::
> 193::Storage.oop::(validateAccess) Permission denied for directory:
> /rhev/data-center/mnt/_dev_mapper_KINGSTON__SV300S37A240G__50026B726804F13B1
> with permissions:7
> jsonrpc.Executor/7::INFO::2017-03-21 
> 09:27:40,099::mount::233::storage.Mount::(umount)
> unmounting /rhev/data-center/mnt/_dev_mapper_KINGSTON__SV300S37A240G__
> 50026B726804F13B1
> jsonrpc.Executor/7::DEBUG::2017-03-21 
> 09:27:40,104::utils::871::storage.Mount::(stopwatch)
> /rhev/data-center/mnt/_dev_mapper_KINGSTON__SV300S37A240G__50026B726804F13B1
> unmounted: 0.00 seconds
> jsonrpc.Executor/7::ERROR::2017-03-21 09:27:40,104::hsm::2403::
> Storage.HSM::(connectStorageServer) Could not connect to storageServer
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/hsm.py", line 2400, in
> connectStorageServer
> conObj.connect()
>   File "/usr/share/vdsm/storage/storageServer.py", line 249, in connect
> six.reraise(t, v, tb)
>   File "/usr/share/vdsm/storage/storageServer.py", line 242, in connect
> self.getMountObj().getRecord().fs_file)
>   File "/usr/share/vdsm/storage/fileSD.py", line 81, in validateDirAccess
> raise se.StorageServerAccessPermissionError(dirPath)
> StorageServerAccessPermissionError: Permission settings on the specified
> path do not allow access to the storage. Verify permission settings on the
> specified storage path.: 'path = /rhev/data-center/mnt/_dev_
> mapper_KINGSTON__SV300S37A240G__50026B726804F13B1'
> jsonrpc.Executor/7::DEBUG::201
>
> Thanks again.
>
>
> On Tue, Mar 21, 2017 at 10:14 AM, Fred Rolland 
> wrote:
>
>> Can you share the VDSM log again ?
>>
>> On Tue, Mar 21, 2017 at 4:08 PM, carl langlois 
>> wrote:
>>
>>> Interesting, when i'm using 
>>> /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1
>>> now the UI give error on the permission setting..
>>>
>>> root@ovhost4 ~]# ls -al /dev/mapper/KINGSTON_SV300S37A
>>> 240G_50026B726804F13B1
>>> lrwxrwxrwx 1 root root 7 Mar 18 08:28 
>>> /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1
>>> -> ../dm-3
>>>
>>> and the permission on the dm-3
>>>
>>> [root@ovhost4 ~]# ls -al /dev/dm-3
>>> brw-rw 1 vdsm kvm 253, 3 Mar 18 08:28 /dev/dm-3
>>>
>>>
>>> how do i change the permission on the sym link..
>>>
>>> Thanks
>>>
>>>
>>>
>>>
>>> On Tue, Mar 21, 2017 at 10:00 AM, Fred Rolland 
>>> wrote:
>>>
 Can you try to use /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1
 in the UI.
 It seems the kernel change the path that we use to mount and then we
 cannot validate that the mount exists.

 It should be anyway better as the mapping could change after reboot.

 On Tue, Mar 21, 2017 at 2:20 PM, carl langlois 
 wrote:

> Here is the /proc/mounts
>
> rootfs / rootfs rw 0 0
> sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
> proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
> devtmpfs /dev devtmpfs 
> rw,nosuid,size=65948884k,nr_inodes=16487221,mode=755
> 0 0
> securityfs /sys/kernel/security securityfs
> rw,nosuid,nodev,noexec,relatime 0 0
> tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
> devpts /dev/pts devpts 
> rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
> 0 0
> tmpfs /run tmpfs rw,nosuid,nodev,mode=755 0 0
> tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
> cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatim
> e,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd
> 0 0
> pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
> cgroup /sys/fs/cgroup/cpu,cpuacct cgroup 
> rw,nosuid,nodev,noexec,relatime,cpuacct,cpu
> 0 0
> cgroup /sys/fs/cgroup/net_cls,net_prio cgroup
> rw,nosuid,nodev,noexec,relatime,net_prio,net_cls 0 0
> cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids
> 0 0
> cgroup /sys/fs/cgroup/devices cgroup 
> rw,nosuid,nodev,noexec,relatime,devices
> 0 0
> cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset
> 0 0
> cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio
> 0 0
> cgroup /sys/fs/cgroup/perf_event cgroup 
> rw,nosuid,nodev,noexec,relatime,perf_event
> 0 0
> cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory
> 0 0
> cgroup /sys/fs/cgroup/freezer cgroup 
> rw,nosuid,nodev,noexec,relatime,freezer
> 0 0
> cgroup /sys/fs/cgroup/hugetlb cgroup 
> rw,nosuid,nodev,noexec,relatime,hugetlb
> 0 0
> configfs /sys/kernel/config configfs rw,relatime 0 0
> /dev/mapper/cl_ovhost1-root / xfs 

Re: [ovirt-users] Adding posix compliant FS

2017-03-21 Thread carl langlois
jsonrpc.Executor/7::WARNING::2017-03-21
09:27:40,099::outOfProcess::193::Storage.oop::(validateAccess) Permission
denied for directory:
/rhev/data-center/mnt/_dev_mapper_KINGSTON__SV300S37A240G__50026B726804F13B1
with permissions:7
jsonrpc.Executor/7::INFO::2017-03-21
09:27:40,099::mount::233::storage.Mount::(umount) unmounting
/rhev/data-center/mnt/_dev_mapper_KINGSTON__SV300S37A240G__50026B726804F13B1
jsonrpc.Executor/7::DEBUG::2017-03-21
09:27:40,104::utils::871::storage.Mount::(stopwatch)
/rhev/data-center/mnt/_dev_mapper_KINGSTON__SV300S37A240G__50026B726804F13B1
unmounted: 0.00 seconds
jsonrpc.Executor/7::ERROR::2017-03-21
09:27:40,104::hsm::2403::Storage.HSM::(connectStorageServer) Could not
connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2400, in connectStorageServer
conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 249, in connect
six.reraise(t, v, tb)
  File "/usr/share/vdsm/storage/storageServer.py", line 242, in connect
self.getMountObj().getRecord().fs_file)
  File "/usr/share/vdsm/storage/fileSD.py", line 81, in validateDirAccess
raise se.StorageServerAccessPermissionError(dirPath)
StorageServerAccessPermissionError: Permission settings on the specified
path do not allow access to the storage. Verify permission settings on the
specified storage path.: 'path =
/rhev/data-center/mnt/_dev_mapper_KINGSTON__SV300S37A240G__50026B726804F13B1'
jsonrpc.Executor/7::DEBUG::201

Thanks again.


On Tue, Mar 21, 2017 at 10:14 AM, Fred Rolland  wrote:

> Can you share the VDSM log again ?
>
> On Tue, Mar 21, 2017 at 4:08 PM, carl langlois 
> wrote:
>
>> Interesting, when i'm using 
>> /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1
>> now the UI give error on the permission setting..
>>
>> root@ovhost4 ~]# ls -al /dev/mapper/KINGSTON_SV300S37A
>> 240G_50026B726804F13B1
>> lrwxrwxrwx 1 root root 7 Mar 18 08:28 
>> /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1
>> -> ../dm-3
>>
>> and the permission on the dm-3
>>
>> [root@ovhost4 ~]# ls -al /dev/dm-3
>> brw-rw 1 vdsm kvm 253, 3 Mar 18 08:28 /dev/dm-3
>>
>>
>> how do i change the permission on the sym link..
>>
>> Thanks
>>
>>
>>
>>
>> On Tue, Mar 21, 2017 at 10:00 AM, Fred Rolland 
>> wrote:
>>
>>> Can you try to use /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1
>>> in the UI.
>>> It seems the kernel change the path that we use to mount and then we
>>> cannot validate that the mount exists.
>>>
>>> It should be anyway better as the mapping could change after reboot.
>>>
>>> On Tue, Mar 21, 2017 at 2:20 PM, carl langlois 
>>> wrote:
>>>
 Here is the /proc/mounts

 rootfs / rootfs rw 0 0
 sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
 proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
 devtmpfs /dev devtmpfs rw,nosuid,size=65948884k,nr_inodes=16487221,mode=755
 0 0
 securityfs /sys/kernel/security securityfs
 rw,nosuid,nodev,noexec,relatime 0 0
 tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
 devpts /dev/pts devpts 
 rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
 0 0
 tmpfs /run tmpfs rw,nosuid,nodev,mode=755 0 0
 tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
 cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatim
 e,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd
 0 0
 pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
 cgroup /sys/fs/cgroup/cpu,cpuacct cgroup 
 rw,nosuid,nodev,noexec,relatime,cpuacct,cpu
 0 0
 cgroup /sys/fs/cgroup/net_cls,net_prio cgroup
 rw,nosuid,nodev,noexec,relatime,net_prio,net_cls 0 0
 cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids
 0 0
 cgroup /sys/fs/cgroup/devices cgroup 
 rw,nosuid,nodev,noexec,relatime,devices
 0 0
 cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset
 0 0
 cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio
 0 0
 cgroup /sys/fs/cgroup/perf_event cgroup 
 rw,nosuid,nodev,noexec,relatime,perf_event
 0 0
 cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory
 0 0
 cgroup /sys/fs/cgroup/freezer cgroup 
 rw,nosuid,nodev,noexec,relatime,freezer
 0 0
 cgroup /sys/fs/cgroup/hugetlb cgroup 
 rw,nosuid,nodev,noexec,relatime,hugetlb
 0 0
 configfs /sys/kernel/config configfs rw,relatime 0 0
 /dev/mapper/cl_ovhost1-root / xfs rw,relatime,attr2,inode64,noquota 0 0
 systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=35,pgrp=1,timeo
 ut=300,minproto=5,maxproto=5,direct 0 0
 mqueue /dev/mqueue mqueue rw,relatime 0 0
 debugfs /sys/kernel/debug debugfs rw,relatime 0 0
 hugetlbfs /dev/hugepages hugetlbfs rw,relatime 0 0
 tmpfs /tmp tmpfs rw 0 0
 

Re: [ovirt-users] Adding posix compliant FS

2017-03-21 Thread Fred Rolland
Can you share the VDSM log again ?

On Tue, Mar 21, 2017 at 4:08 PM, carl langlois 
wrote:

> Interesting, when i'm using 
> /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1
> now the UI give error on the permission setting..
>
> root@ovhost4 ~]# ls -al /dev/mapper/KINGSTON_SV300S37A240G_
> 50026B726804F13B1
> lrwxrwxrwx 1 root root 7 Mar 18 08:28 
> /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1
> -> ../dm-3
>
> and the permission on the dm-3
>
> [root@ovhost4 ~]# ls -al /dev/dm-3
> brw-rw 1 vdsm kvm 253, 3 Mar 18 08:28 /dev/dm-3
>
>
> how do i change the permission on the sym link..
>
> Thanks
>
>
>
>
> On Tue, Mar 21, 2017 at 10:00 AM, Fred Rolland 
> wrote:
>
>> Can you try to use /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1
>> in the UI.
>> It seems the kernel change the path that we use to mount and then we
>> cannot validate that the mount exists.
>>
>> It should be anyway better as the mapping could change after reboot.
>>
>> On Tue, Mar 21, 2017 at 2:20 PM, carl langlois 
>> wrote:
>>
>>> Here is the /proc/mounts
>>>
>>> rootfs / rootfs rw 0 0
>>> sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
>>> proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
>>> devtmpfs /dev devtmpfs rw,nosuid,size=65948884k,nr_inodes=16487221,mode=755
>>> 0 0
>>> securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime
>>> 0 0
>>> tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
>>> devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
>>> 0 0
>>> tmpfs /run tmpfs rw,nosuid,nodev,mode=755 0 0
>>> tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
>>> cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatim
>>> e,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd
>>> 0 0
>>> pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
>>> cgroup /sys/fs/cgroup/cpu,cpuacct cgroup 
>>> rw,nosuid,nodev,noexec,relatime,cpuacct,cpu
>>> 0 0
>>> cgroup /sys/fs/cgroup/net_cls,net_prio cgroup
>>> rw,nosuid,nodev,noexec,relatime,net_prio,net_cls 0 0
>>> cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids
>>> 0 0
>>> cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices
>>> 0 0
>>> cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset
>>> 0 0
>>> cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio
>>> 0 0
>>> cgroup /sys/fs/cgroup/perf_event cgroup 
>>> rw,nosuid,nodev,noexec,relatime,perf_event
>>> 0 0
>>> cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory
>>> 0 0
>>> cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer
>>> 0 0
>>> cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb
>>> 0 0
>>> configfs /sys/kernel/config configfs rw,relatime 0 0
>>> /dev/mapper/cl_ovhost1-root / xfs rw,relatime,attr2,inode64,noquota 0 0
>>> systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=35,pgrp=1,timeo
>>> ut=300,minproto=5,maxproto=5,direct 0 0
>>> mqueue /dev/mqueue mqueue rw,relatime 0 0
>>> debugfs /sys/kernel/debug debugfs rw,relatime 0 0
>>> hugetlbfs /dev/hugepages hugetlbfs rw,relatime 0 0
>>> tmpfs /tmp tmpfs rw 0 0
>>> nfsd /proc/fs/nfsd nfsd rw,relatime 0 0
>>> /dev/mapper/cl_ovhost1-home /home xfs rw,relatime,attr2,inode64,noquota
>>> 0 0
>>> /dev/sda1 /boot xfs rw,relatime,attr2,inode64,noquota 0 0
>>> sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
>>> tmpfs /run/user/42 tmpfs rw,nosuid,nodev,relatime,size=
>>> 13192948k,mode=700,uid=42,gid=42 0 0
>>> gvfsd-fuse /run/user/42/gvfs fuse.gvfsd-fuse
>>> rw,nosuid,nodev,relatime,user_id=42,group_id=42 0 0
>>> fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
>>> ovhost2:/home/exports/defaultdata 
>>> /rhev/data-center/mnt/ovhost2:_home_exports_defaultdata
>>> nfs rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,so
>>> ft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mounta
>>> ddr=10.8.236.162,mountvers=3,mountport=20048,mountproto=
>>> udp,local_lock=none,addr=10.8.236.162 0 0
>>> ovhost2:/home/exports/ISO /rhev/data-center/mnt/ovhost2:_home_exports_ISO
>>> nfs rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,so
>>> ft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mounta
>>> ddr=10.8.236.162,mountvers=3,mountport=20048,mountproto=
>>> udp,local_lock=none,addr=10.8.236.162 0 0
>>> ovhost2:/home/exports/data /rhev/data-center/mnt/ovhost2:_home_exports_data
>>> nfs rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,so
>>> ft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mounta
>>> ddr=10.8.236.162,mountvers=3,mountport=20048,mountproto=
>>> udp,local_lock=none,addr=10.8.236.162 0 0
>>> tmpfs /run/user/0 tmpfs rw,nosuid,nodev,relatime,size=13192948k,mode=700
>>> 0 0
>>> /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1
>>> /rhev/data-center/mnt/_dev_dm-3 ext4 rw,nosuid,relatime,data=ordered 0 0

Re: [ovirt-users] Adding posix compliant FS

2017-03-21 Thread carl langlois
Interesting: when I'm using
/dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1,
the UI now gives an error on the permission setting.

root@ovhost4 ~]# ls -al /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1
lrwxrwxrwx 1 root root 7 Mar 18 08:28
/dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1 -> ../dm-3

and the permission on the dm-3

[root@ovhost4 ~]# ls -al /dev/dm-3
brw-rw 1 vdsm kvm 253, 3 Mar 18 08:28 /dev/dm-3


How do I change the permission on the symlink?

Thanks




On Tue, Mar 21, 2017 at 10:00 AM, Fred Rolland  wrote:

> Can you try to use /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1
> in the UI.
> It seems the kernel change the path that we use to mount and then we
> cannot validate that the mount exists.
>
> It should be anyway better as the mapping could change after reboot.
>
> On Tue, Mar 21, 2017 at 2:20 PM, carl langlois 
> wrote:
>
>> Here is the /proc/mounts
>>
>> rootfs / rootfs rw 0 0
>> sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
>> proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
>> devtmpfs /dev devtmpfs rw,nosuid,size=65948884k,nr_inodes=16487221,mode=755
>> 0 0
>> securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime
>> 0 0
>> tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
>> devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
>> 0 0
>> tmpfs /run tmpfs rw,nosuid,nodev,mode=755 0 0
>> tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
>> cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatim
>> e,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd
>> 0 0
>> pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
>> cgroup /sys/fs/cgroup/cpu,cpuacct cgroup 
>> rw,nosuid,nodev,noexec,relatime,cpuacct,cpu
>> 0 0
>> cgroup /sys/fs/cgroup/net_cls,net_prio cgroup
>> rw,nosuid,nodev,noexec,relatime,net_prio,net_cls 0 0
>> cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0
>> 0
>> cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices
>> 0 0
>> cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset
>> 0 0
>> cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio
>> 0 0
>> cgroup /sys/fs/cgroup/perf_event cgroup 
>> rw,nosuid,nodev,noexec,relatime,perf_event
>> 0 0
>> cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory
>> 0 0
>> cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer
>> 0 0
>> cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb
>> 0 0
>> configfs /sys/kernel/config configfs rw,relatime 0 0
>> /dev/mapper/cl_ovhost1-root / xfs rw,relatime,attr2,inode64,noquota 0 0
>> systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=35,pgrp=1,timeo
>> ut=300,minproto=5,maxproto=5,direct 0 0
>> mqueue /dev/mqueue mqueue rw,relatime 0 0
>> debugfs /sys/kernel/debug debugfs rw,relatime 0 0
>> hugetlbfs /dev/hugepages hugetlbfs rw,relatime 0 0
>> tmpfs /tmp tmpfs rw 0 0
>> nfsd /proc/fs/nfsd nfsd rw,relatime 0 0
>> /dev/mapper/cl_ovhost1-home /home xfs rw,relatime,attr2,inode64,noquota
>> 0 0
>> /dev/sda1 /boot xfs rw,relatime,attr2,inode64,noquota 0 0
>> sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
>> tmpfs /run/user/42 tmpfs rw,nosuid,nodev,relatime,size=
>> 13192948k,mode=700,uid=42,gid=42 0 0
>> gvfsd-fuse /run/user/42/gvfs fuse.gvfsd-fuse
>> rw,nosuid,nodev,relatime,user_id=42,group_id=42 0 0
>> fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
>> ovhost2:/home/exports/defaultdata 
>> /rhev/data-center/mnt/ovhost2:_home_exports_defaultdata
>> nfs rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,
>> soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,
>> mountaddr=10.8.236.162,mountvers=3,mountport=20048,mountprot
>> o=udp,local_lock=none,addr=10.8.236.162 0 0
>> ovhost2:/home/exports/ISO /rhev/data-center/mnt/ovhost2:_home_exports_ISO
>> nfs rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,
>> soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,
>> mountaddr=10.8.236.162,mountvers=3,mountport=20048,mountprot
>> o=udp,local_lock=none,addr=10.8.236.162 0 0
>> ovhost2:/home/exports/data /rhev/data-center/mnt/ovhost2:_home_exports_data
>> nfs rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,
>> soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,
>> mountaddr=10.8.236.162,mountvers=3,mountport=20048,mountprot
>> o=udp,local_lock=none,addr=10.8.236.162 0 0
>> tmpfs /run/user/0 tmpfs rw,nosuid,nodev,relatime,size=13192948k,mode=700
>> 0 0
>> /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1
>> /rhev/data-center/mnt/_dev_dm-3 ext4 rw,nosuid,relatime,data=ordered 0 0
>>
>> Thanks you for your help.
>>
>> Carl
>>
>>
>> On Tue, Mar 21, 2017 at 6:31 AM, Fred Rolland 
>> wrote:
>>
>>> Can you provide the content of /proc/mounts after it has being mounted
>>> by VDSM ?
>>>
>>> On Tue, Mar 21, 

Re: [ovirt-users] hosted-engine with iscsi storage domain

2017-03-21 Thread Devin A. Bougie
On Mar 20, 2017, at 12:54 PM, Simone Tiraboschi  wrote:
> The engine should import it by itself once you add your first storage domain 
> for regular VMs.
> No manual import actions are required.

It didn't seem to for us.  I don't see it in the Storage tab (maybe I 
shouldn't?).  I can install a new host from the engine web ui, but I don't see 
any hosted-engine options.  If I put the new host in maintenance and reinstall, 
I can select DEPLOY under "Choose hosted engine deployment action."  However, 
the web UI than complains that:
Cannot edit Host.  You are using an unmanaged hosted engine VM.  P{ease upgrade 
the cluster level to 3.6 and wait for the hosted engine storage domain to be 
properly imported.

This is on a new 4.1 cluster with the hosted-engine created using hosted-engine 
--deploy on the first host.
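
In case it helps, two quick things to look at (nothing version-specific is assumed
beyond the default domain name "hosted_storage"):

grep -i hosted_storage /var/log/ovirt-engine/engine.log | tail -20   # on the engine VM: any import attempt?
hosted-engine --vm-status                                            # on a host: overall HA state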

> No, a separate network for the storage is even recommended.

Glad to hear, thanks!

Devin

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Adding posix compliant FS

2017-03-21 Thread Fred Rolland
Can you try to use /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1 in
the UI.
It seems the kernel changes the path that we use to mount, and then we cannot
validate that the mount exists.

It should be better anyway, as the dm-N mapping could change after a reboot.
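
As a quick cross-check (the device name below is taken from your logs), the mapper
name is just a symlink to the kernel's dm-N node, and the dm-N number is what can
change between reboots:

readlink -f /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1   # resolves to /dev/dm-3
ls -l /dev/disk/by-id/ | grep -w dm-3                              # stable by-id aliases for the same device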

On Tue, Mar 21, 2017 at 2:20 PM, carl langlois 
wrote:

> Here is the /proc/mounts
>
> rootfs / rootfs rw 0 0
> sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
> proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
> devtmpfs /dev devtmpfs rw,nosuid,size=65948884k,nr_inodes=16487221,mode=755
> 0 0
> securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime
> 0 0
> tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
> devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
> 0 0
> tmpfs /run tmpfs rw,nosuid,nodev,mode=755 0 0
> tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
> cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,
> relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd
> 0 0
> pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
> cgroup /sys/fs/cgroup/cpu,cpuacct cgroup 
> rw,nosuid,nodev,noexec,relatime,cpuacct,cpu
> 0 0
> cgroup /sys/fs/cgroup/net_cls,net_prio cgroup 
> rw,nosuid,nodev,noexec,relatime,net_prio,net_cls
> 0 0
> cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
> cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices
> 0 0
> cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset
> 0 0
> cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio
> 0 0
> cgroup /sys/fs/cgroup/perf_event cgroup 
> rw,nosuid,nodev,noexec,relatime,perf_event
> 0 0
> cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory
> 0 0
> cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer
> 0 0
> cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb
> 0 0
> configfs /sys/kernel/config configfs rw,relatime 0 0
> /dev/mapper/cl_ovhost1-root / xfs rw,relatime,attr2,inode64,noquota 0 0
> systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=35,pgrp=1,
> timeout=300,minproto=5,maxproto=5,direct 0 0
> mqueue /dev/mqueue mqueue rw,relatime 0 0
> debugfs /sys/kernel/debug debugfs rw,relatime 0 0
> hugetlbfs /dev/hugepages hugetlbfs rw,relatime 0 0
> tmpfs /tmp tmpfs rw 0 0
> nfsd /proc/fs/nfsd nfsd rw,relatime 0 0
> /dev/mapper/cl_ovhost1-home /home xfs rw,relatime,attr2,inode64,noquota 0
> 0
> /dev/sda1 /boot xfs rw,relatime,attr2,inode64,noquota 0 0
> sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
> tmpfs /run/user/42 tmpfs rw,nosuid,nodev,relatime,size=
> 13192948k,mode=700,uid=42,gid=42 0 0
> gvfsd-fuse /run/user/42/gvfs fuse.gvfsd-fuse 
> rw,nosuid,nodev,relatime,user_id=42,group_id=42
> 0 0
> fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
> ovhost2:/home/exports/defaultdata 
> /rhev/data-center/mnt/ovhost2:_home_exports_defaultdata
> nfs rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=
> 255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=
> sys,mountaddr=10.8.236.162,mountvers=3,mountport=20048,
> mountproto=udp,local_lock=none,addr=10.8.236.162 0 0
> ovhost2:/home/exports/ISO /rhev/data-center/mnt/ovhost2:_home_exports_ISO
> nfs rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=
> 255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=
> sys,mountaddr=10.8.236.162,mountvers=3,mountport=20048,
> mountproto=udp,local_lock=none,addr=10.8.236.162 0 0
> ovhost2:/home/exports/data /rhev/data-center/mnt/ovhost2:_home_exports_data
> nfs rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=
> 255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=
> sys,mountaddr=10.8.236.162,mountvers=3,mountport=20048,
> mountproto=udp,local_lock=none,addr=10.8.236.162 0 0
> tmpfs /run/user/0 tmpfs rw,nosuid,nodev,relatime,size=13192948k,mode=700
> 0 0
> /dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1
> /rhev/data-center/mnt/_dev_dm-3 ext4 rw,nosuid,relatime,data=ordered 0 0
>
> Thanks you for your help.
>
> Carl
>
>
> On Tue, Mar 21, 2017 at 6:31 AM, Fred Rolland  wrote:
>
>> Can you provide the content of /proc/mounts after it has being mounted by
>> VDSM ?
>>
>> On Tue, Mar 21, 2017 at 12:28 PM, carl langlois 
>> wrote:
>>
>>> Here is the vdsm.log
>>>
>>>
>>> jsonrpc.Executor/0::ERROR::2017-03-18 
>>> 08:23:48,317::hsm::2403::Storage.HSM::(connectStorageServer)
>>> Could not connect to storageServer
>>> Traceback (most recent call last):
>>>   File "/usr/share/vdsm/storage/hsm.py", line 2400, in
>>> connectStorageServer
>>> conObj.connect()
>>>   File "/usr/share/vdsm/storage/storageServer.py", line 242, in connect
>>> self.getMountObj().getRecord().fs_file)
>>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line
>>> 260, in getRecord
>>> (self.fs_spec, self.fs_file))
>>> OSError: [Errno 2] 

Re: [ovirt-users] Changing gateway ping address

2017-03-21 Thread Sandro Bonazzola
On Fri, Mar 17, 2017 at 11:01 AM, Sven Achtelik 
wrote:

> Thank you, which table is holding that information ? How can I change that
> value in the DB ? Access it directly ?
>

Simone? Martin?
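
For the host side of it, a minimal sketch of what the quoted 2015 answer below
describes (the address is only a placeholder; the engine DB part of Sven's question
is still open):

# on each hosted-engine HA host
sed -i 's/^gateway=.*/gateway=192.0.2.1/' /etc/ovirt-hosted-engine/hosted-engine.conf
systemctl restart ovirt-ha-broker ovirt-ha-agent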




>
>
> *Von:* Simone Tiraboschi [mailto:stira...@redhat.com]
> *Gesendet:* Donnerstag, 16. März 2017 16:20
> *An:* Sandro Bonazzola 
> *Cc:* Sven Achtelik ; Martin Sivak <
> msi...@redhat.com>; Matteo ; users@ovirt.org
>
> *Betreff:* Re: [ovirt-users] Changing gateway ping address
>
>
>
>
>
>
>
> On Thu, Mar 16, 2017 at 1:23 PM, Sandro Bonazzola 
> wrote:
>
>
>
>
>
> On Thu, Mar 16, 2017 at 9:38 AM, Sven Achtelik 
> wrote:
>
> Hi Sandro,
>
> where can I find that answer file ? Running ovirt 4.1.
>
>
>
> Things chaged a lot since 3.5, adding Simone and Martin since I can't
> remember if it's possible to set it from web ui now.
>
>
>
> That answerfile is on the shared storage used for the hosted-engine
> storage domain inside a configuration volume but, If I'm not wrong, the
> engine fetches it when it imports the hosted-engine storage domain but it
> doesn't periodically refresh it so I think that if you want to get it
> stably changed also for future hosts you have to fix that value in the
> engine DB.
>
>
>
>
>
>
>
>
>
>
>
>
> Thank you,
>
> Sven
>
> -Ursprüngliche Nachricht-
> Von: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] Im Auftrag
> von Sandro Bonazzola
> Gesendet: Freitag, 10. Juli 2015 09:26
> An: Matteo ; users@ovirt.org
> Betreff: Re: [ovirt-users] Changing gateway ping address
>
>
> Il 10/07/2015 09:08, Matteo ha scritto:
> > Hi all,
> >
> > I need to change the gateway ping address, the one used by hosted engine
> setup.
> >
> > Is ok to edit /etc/ovirt-hosted-engine/hosted-engine.conf on each
> > node, update the gateway param with the new ip address and restart the
> > agent on each node?
> >
> > With a blind test seems ok, but need to understand if is the right
> procedure.
>
> Yes it's ok.
> You should also change it in the answer files so if you add new nodes it
> will be set automatically.
>
>
> >
> > Thanks,
> > Matteo
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
>
>
> --
>
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>
>
>



-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Adding posix compliant FS

2017-03-21 Thread carl langlois
Here is the /proc/mounts

rootfs / rootfs rw 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
devtmpfs /dev devtmpfs rw,nosuid,size=65948884k,nr_inodes=16487221,mode=755
0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime
0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts
rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,nodev,mode=755 0 0
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup
rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd
0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup
rw,nosuid,nodev,noexec,relatime,cpuacct,cpu 0 0
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup
rw,nosuid,nodev,noexec,relatime,net_prio,net_cls 0 0
cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
cgroup /sys/fs/cgroup/devices cgroup
rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset
0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/perf_event cgroup
rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory
0 0
cgroup /sys/fs/cgroup/freezer cgroup
rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup
rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
/dev/mapper/cl_ovhost1-root / xfs rw,relatime,attr2,inode64,noquota 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs
rw,relatime,fd=35,pgrp=1,timeout=300,minproto=5,maxproto=5,direct 0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,relatime 0 0
tmpfs /tmp tmpfs rw 0 0
nfsd /proc/fs/nfsd nfsd rw,relatime 0 0
/dev/mapper/cl_ovhost1-home /home xfs rw,relatime,attr2,inode64,noquota 0 0
/dev/sda1 /boot xfs rw,relatime,attr2,inode64,noquota 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
tmpfs /run/user/42 tmpfs
rw,nosuid,nodev,relatime,size=13192948k,mode=700,uid=42,gid=42 0 0
gvfsd-fuse /run/user/42/gvfs fuse.gvfsd-fuse
rw,nosuid,nodev,relatime,user_id=42,group_id=42 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
ovhost2:/home/exports/defaultdata
/rhev/data-center/mnt/ovhost2:_home_exports_defaultdata nfs
rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=10.8.236.162,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.8.236.162
0 0
ovhost2:/home/exports/ISO /rhev/data-center/mnt/ovhost2:_home_exports_ISO
nfs
rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=10.8.236.162,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.8.236.162
0 0
ovhost2:/home/exports/data /rhev/data-center/mnt/ovhost2:_home_exports_data
nfs
rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=10.8.236.162,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.8.236.162
0 0
tmpfs /run/user/0 tmpfs rw,nosuid,nodev,relatime,size=13192948k,mode=700 0 0
/dev/mapper/KINGSTON_SV300S37A240G_50026B726804F13B1
/rhev/data-center/mnt/_dev_dm-3 ext4 rw,nosuid,relatime,data=ordered 0 0

Thank you for your help.

Carl


On Tue, Mar 21, 2017 at 6:31 AM, Fred Rolland  wrote:

> Can you provide the content of /proc/mounts after it has being mounted by
> VDSM ?
>
> On Tue, Mar 21, 2017 at 12:28 PM, carl langlois 
> wrote:
>
>> Here is the vdsm.log
>>
>>
>> jsonrpc.Executor/0::ERROR::2017-03-18 
>> 08:23:48,317::hsm::2403::Storage.HSM::(connectStorageServer)
>> Could not connect to storageServer
>> Traceback (most recent call last):
>>   File "/usr/share/vdsm/storage/hsm.py", line 2400, in
>> connectStorageServer
>> conObj.connect()
>>   File "/usr/share/vdsm/storage/storageServer.py", line 242, in connect
>> self.getMountObj().getRecord().fs_file)
>>   File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line
>> 260, in getRecord
>> (self.fs_spec, self.fs_file))
>> OSError: [Errno 2] Mount of `/dev/dm-3` at `/rhev/data-center/mnt/_dev_dm-3`
>> does not exist
>>
>>
>> thanks
>>
>> On Fri, Mar 17, 2017 at 3:06 PM, Fred Rolland 
>> wrote:
>>
>>> Please send Vdsm log.
>>> Thanks
>>>
>>> On Fri, Mar 17, 2017 at 8:46 PM, carl langlois 
>>> wrote:
>>>
 Hi,

 The link that you send is for NFS strorage but i am trying to add a
 POSIX compliant.

 [image: Inline image 1]




 when i press okey it mount the disk to :

 [root@ovhost4 ~]# 

Re: [ovirt-users] Expand an ovirt+glusterfs cluster

2017-03-21 Thread Davide Ferrari
I may add that as of now I've just added the bricks and nothing more; no
VMs or new disks have been created in the oVirt cluster (in case I should
remove the bricks).
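
(If it does come to backing those bricks out again, a hedged sketch of the usual
gluster flow would be the one below; the volume name and brick paths are
placeholders, and on a replicated volume the whole replica set that was added has
to be removed together.)

gluster volume remove-brick data newhost1:/gluster/data/brick newhost2:/gluster/data/brick newhost3:/gluster/data/brick start
gluster volume remove-brick data newhost1:/gluster/data/brick newhost2:/gluster/data/brick newhost3:/gluster/data/brick status
gluster volume remove-brick data newhost1:/gluster/data/brick newhost2:/gluster/data/brick newhost3:/gluster/data/brick commit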

2017-03-21 13:09 GMT+01:00 Davide Ferrari :

> Hello
>
> I have a 4 node oVirt 4.0 cluster running on top of glusterfs volumes,
> managed by the oVirt cluster (option ticked in the cluster properties)
>
> Now, I'm adding several new nodes to this cluster with a better CPU
> (Broadwell vs Haswell) but now comes the "bad" part. Out of enthusiasm I've
> already added storage from these new servers to the data volume already
> present and used by the current cluster. In fact now ovirt has detected new
> hosts and it's asking me if I want to add these host to the cluster.
>
> But there are two problems:
> 1) I want to create a new, Broadwell cluster
> 2) I'm using a separate VLAN+domain name for gluster, and oVirt is
> proposing me to use the gluster (storage) domain names as the new hosts
> identifier.
>
> What are the right steps/actions to take now? It's a running production
> system.
>
> Thanks in advance
>
> --
> Davide Ferrari
> Senior Systems Engineer
>



-- 
Davide Ferrari
Senior Systems Engineer
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Expand an ovirt+glusterfs cluster

2017-03-21 Thread Davide Ferrari
Hello

I have a 4 node oVirt 4.0 cluster running on top of glusterfs volumes,
managed by the oVirt cluster (option ticked in the cluster properties)

Now, I'm adding several new nodes to this cluster with a better CPU
(Broadwell vs Haswell) but now comes the "bad" part. Out of enthusiasm I've
already added storage from these new servers to the data volume already
present and used by the current cluster. In fact now ovirt has detected new
hosts and it's asking me if I want to add these hosts to the cluster.

But there are two problems:
1) I want to create a new, Broadwell cluster
2) I'm using a separate VLAN+domain name for gluster, and oVirt is
proposing that I use the gluster (storage) domain names as the new hosts'
identifiers.

What are the right steps/actions to take now? It's a running production
system.

Thanks in advance

-- 
Davide Ferrari
Senior Systems Engineer
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt and Openstack Neutron: network not working

2017-03-21 Thread Luca 'remix_tj' Lorenzetto
Hello Marcin, hello all,

the provider failure was due to neutron services not behaving
correctly. Now when I try to start a VM attached to an external
network I get this in vdsm.log:

2017-03-21 12:51:38,893 INFO  (vm/23ea52a8) [vds] prepared volume
path: 
/rhev/data-center/0001-0001-0001-0001-0311/3fb1af07-186b-4d2b-8a7a-26ff265f71fb/images/8933217f-47b0-4fe4-bb2f-7defcffb6bb8/4f996ad6-c06c-4527-be1f-45a9e718d01e
(clientIF:374)
2017-03-21 12:51:39,047 INFO  (vm/23ea52a8) [root]  (hooks:108)
2017-03-21 12:51:39,809 INFO  (vm/23ea52a8) [root] Adding vNIC
0eec5c68-49c9-4d0f-8dc6-eeacb09aa2c3 for provider type
OPENSTACK_NETWORK and plugin OPEN_VSWITCHSetting up vNIC (portId
0eec5c68-49c9-4d0f-8dc6-eeacb09aa2c3) security groups
openstacknet hook: [unexpected error]: Traceback (most recent call last):
  File "/usr/libexec/vdsm/hooks/before_device_create/50_openstacknet",
line 194, in 
main()
  File "/usr/libexec/vdsm/hooks/before_device_create/50_openstacknet",
line 148, in main
addOpenstackVnic(domxml, pluginType, vNicId, hasSecurityGroups)
  File "/usr/libexec/vdsm/hooks/before_device_create/50_openstacknet",
line 121, in addOpenstackVnic
addOvsVnic(domxml, iface, portId, hasSecurityGroups)
  File "/usr/libexec/vdsm/hooks/before_device_create/50_openstacknet",
line 90, in addOvsVnic
addOvsHybridVnic(domxml, iface, portId)
  File "/usr/libexec/vdsm/hooks/before_device_create/50_openstacknet",
line 98, in addOvsHybridVnic
portId)
  File "/usr/libexec/vdsm/hooks/before_device_create/openstacknet_utils.py",
line 99, in setUpSecurityGroupVnic
'external-ids:attached-mac=%s' % macAddr])
  File "/usr/libexec/vdsm/hooks/before_device_create/openstacknet_utils.py",
line 47, in executeOrExit
(command, err))
RuntimeError: Failed to execute ['/usr/bin/ovs-vsctl', '--',
'--may-exist', 'add-port', 'br-int', 'qvo0eec5c68-49', '--', 'set',
'Interface', 'qvo0eec5c68-49',
'external-ids:iface-id=0eec5c68-49c9-4d0f-8dc6-eeacb09aa2c3',
'external-ids:iface-status=active',
u'external-ids:attached-mac=00:1a:4a:16:01:51'], due to: ovs-vsctl: no
bridge named br-int



 (hooks:108)
2017-03-21 12:51:39,810 ERROR (vm/23ea52a8) [virt.vm]
(vmId='23ea52a8-e499-41bb-8be4-8621b67869fd') The vm start process
failed (vm:616)
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 552, in _startUnderlyingVm
self._run()
  File "/usr/share/vdsm/virt/vm.py", line 1985, in _run
domxml = hooks.before_vm_start(self._buildDomainXML(),
  File "/usr/share/vdsm/virt/vm.py", line 1667, in _buildDomainXML
self._appendDevices(domxml)
  File "/usr/share/vdsm/virt/vm.py", line 1621, in _appendDevices
deviceXML, self.conf, dev.custom)
  File "/usr/lib/python2.7/site-packages/vdsm/hooks.py", line 132, in
before_device_create
params=customProperties)
  File "/usr/lib/python2.7/site-packages/vdsm/hooks.py", line 118, in
_runHooksDir
raise exception.HookError(err)
HookError: Hook Error: ('Adding vNIC
0eec5c68-49c9-4d0f-8dc6-eeacb09aa2c3 for provider type
OPENSTACK_NETWORK and plugin OPEN_VSWITCHSetting up vNIC (portId
0eec5c68-49c9-4d0f-8dc6-eeacb09aa2c3) security groups\nopenstacknet
hook: [unexpected error]: Traceback (most recent call last):\n  File
"/usr/libexec/vdsm/hooks/before_device_create/50_openstacknet", line
194, in \nmain()\n  File
"/usr/libexec/vdsm/hooks/before_device_create/50_openstacknet", line
148, in main\naddOpenstackVnic(domxml, pluginType, vNicId,
hasSecurityGroups)\n  File
"/usr/libexec/vdsm/hooks/before_device_create/50_openstacknet", line
121, in addOpenstackVnic\naddOvsVnic(domxml, iface, portId,
hasSecurityGroups)\n  File
"/usr/libexec/vdsm/hooks/before_device_create/50_openstacknet", line
90, in addOvsVnic\naddOvsHybridVnic(domxml, iface, portId)\n  File
"/usr/libexec/vdsm/hooks/before_device_create/50_openstacknet", line
98, in addOvsHybridVnic\nportId)\n  File
"/usr/libexec/vdsm/hooks/before_device_create/openstacknet_utils.py",
line 99, in setUpSecurityGroupVnic\n
\'external-ids:attached-mac=%s\' % macAddr])\n  File
"/usr/libexec/vdsm/hooks/before_device_create/openstacknet_utils.py",
line 47, in executeOrExit\n(command, err))\nRuntimeError: Failed
to execute [\'/usr/bin/ovs-vsctl\', \'--\', \'--may-exist\',
\'add-port\', \'br-int\', \'qvo0eec5c68-49\', \'--\', \'set\',
\'Interface\', \'qvo0eec5c68-49\',
\'external-ids:iface-id=0eec5c68-49c9-4d0f-8dc6-eeacb09aa2c3\',
\'external-ids:iface-status=active\',
u\'external-ids:attached-mac=00:1a:4a:16:01:51\'], due to: ovs-vsctl:
no bridge named br-int\n\n\n\n',)
2017-03-21 12:51:39,812 INFO  (vm/23ea52a8) [virt.vm]
(vmId='23ea52a8-e499-41bb-8be4-8621b67869fd') Changed state to Down:
Hook Error: ('Adding vNIC 0eec5c68-49c9-4d0f-8dc6-eeacb09aa2c3 for
provider type OPENSTACK_NETWORK and plugin OPEN_VSWITCHSetting up vNIC
(portId 0eec5c68-49c9-4d0f-8dc6-eeacb09aa2c3) security
groups\nopenstacknet hook: [unexpected error]: Traceback (most recent
call last):\n  File
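
The trace above ends in "ovs-vsctl: no bridge named br-int", so the OVS integration
bridge the hook expects simply is not there. A minimal check/sketch on the host,
assuming openvswitch is installed and running (normally the neutron openvswitch
agent creates br-int itself):

ovs-vsctl show                          # list the bridges that do exist
ovs-vsctl br-exists br-int; echo $?     # exit status 2 means the bridge is missing
ovs-vsctl --may-exist add-br br-int     # manual creation, sketch only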

[ovirt-users] error on live migration: ovn?

2017-03-21 Thread Gianluca Cecchi
Hello,
The environment is on 4.1.
I have a VM with 2 NICs: one of them is on OVN.
Trying to migrate, I get an error. How do I decode it?
Is live migration supported with OVN or mixed NICs?

Thanks,
Gianluca


in engine.log

2017-03-21 10:37:26,209+01 INFO
 [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-26)
[f0cea507-608b-4f36-ac4e-606faec35cd9] Lock Acquired to object
'EngineLock:{exclusiveLocks='[2e571c77-bae1-4c1c-bf98-effaf9fed741=]',
sharedLocks='null'}'
2017-03-21 10:37:26,499+01 INFO
 [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
(org.ovirt.thread.pool-6-thread-43) [f0cea507-608b-4f36-ac4e-606faec35cd9]
Running command: MigrateVmToServerCommand internal: false. Entities
affected :  ID: 2e571c77-bae1-4c1c-bf98-effaf9fed741 Type: VMAction group
MIGRATE_VM with role type USER
2017-03-21 10:37:26,786+01 INFO
 [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
(org.ovirt.thread.pool-6-thread-43) [f0cea507-608b-4f36-ac4e-606faec35cd9]
START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true',
hostId='30677d2c-4eb8-4ed9-ba54-0b89945a45fd',
vmId='2e571c77-bae1-4c1c-bf98-effaf9fed741', srcHost='ovmsrv07.mydomain',
dstVdsId='02bb501a-b641-4ee1-bab1-5e640804e65f',
dstHost='ovmsrv05.mydomain:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='true',
migrateCompressed='false', consoleAddress='null', maxBandwidth='null',
enableGuestEvents='true', maxIncomingMigrations='2',
maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime,
params=[100]}], stalling=[{limit=1, action={name=setDowntime,
params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}},
{limit=3, action={name=setDowntime, params=[300]}}, {limit=4,
action={name=setDowntime, params=[400]}}, {limit=6,
action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
params=[]}}]]'}), log id: 14390155
2017-03-21 10:37:26,787+01 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(org.ovirt.thread.pool-6-thread-43) [f0cea507-608b-4f36-ac4e-606faec35cd9]
START, MigrateBrokerVDSCommand(HostName = ovmsrv07,
MigrateVDSCommandParameters:{runAsync='true',
hostId='30677d2c-4eb8-4ed9-ba54-0b89945a45fd',
vmId='2e571c77-bae1-4c1c-bf98-effaf9fed741', srcHost='ovmsrv07.mydomain',
dstVdsId='02bb501a-b641-4ee1-bab1-5e640804e65f',
dstHost='ovmsrv05.mydomain:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='true',
migrateCompressed='false', consoleAddress='null', maxBandwidth='null',
enableGuestEvents='true', maxIncomingMigrations='2',
maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime,
params=[100]}], stalling=[{limit=1, action={name=setDowntime,
params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}},
{limit=3, action={name=setDowntime, params=[300]}}, {limit=4,
action={name=setDowntime, params=[400]}}, {limit=6,
action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
params=[]}}]]'}), log id: 5fd9f196
2017-03-21 10:37:27,386+01 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(org.ovirt.thread.pool-6-thread-43) [f0cea507-608b-4f36-ac4e-606faec35cd9]
FINISH, MigrateBrokerVDSCommand, log id: 5fd9f196
2017-03-21 10:37:27,445+01 INFO
 [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
(org.ovirt.thread.pool-6-thread-43) [f0cea507-608b-4f36-ac4e-606faec35cd9]
FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14390155
2017-03-21 10:37:27,475+01 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-43) [f0cea507-608b-4f36-ac4e-606faec35cd9]
EVENT_ID: VM_MIGRATION_START(62), Correlation ID:
f0cea507-608b-4f36-ac4e-606faec35cd9, Job ID:
514c0562-de81-44a1-bde3-58027662b536, Call Stack: null, Custom Event ID:
-1, Message: Migration started (VM: c7service, Source: ovmsrv07,
Destination: ovmsrv05, User: g.cecchi@internal-authz).
2017-03-21 10:37:30,341+01 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler4) [53ea57f2] START, FullListVDSCommand(HostName =
ovmsrv07, FullListVDSCommandParameters:{runAsync='true',
hostId='30677d2c-4eb8-4ed9-ba54-0b89945a45fd',
vmIds='[2e571c77-bae1-4c1c-bf98-effaf9fed741]'}), log id: 46e85505
2017-03-21 10:37:30,526+01 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand]
(DefaultQuartzScheduler4) [53ea57f2] FINISH, FullListVDSCommand, return:
[{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0,
vmId=2e571c77-bae1-4c1c-bf98-effaf9fed741,
guestDiskMapping={0QEMU_QEMU_HARDDISK_6af3dfe5-6da7-48e3-9={name=/dev/sda},
QEMU_DVD-ROM_QM3={name=/dev/sr0}}, transparentHugePages=true,
timeOffset=0, cpuType=Opteron_G2, smp=1, pauseCode=NOERR,
guestNumaNodes=[Ljava.lang.Object;@31e4f47d, smartcardEnable=false,
custom={device_39acb1f5-c31e-4810-a4c3-26d460a6e374=VmDevice:{id='VmDeviceId:{deviceId='39acb1f5-c31e-4810-a4c3-26d460a6e374',

Re: [ovirt-users] Adding posix compliant FS

2017-03-21 Thread Fred Rolland
Can you provide the content of /proc/mounts after it has being mounted by
VDSM ?

On Tue, Mar 21, 2017 at 12:28 PM, carl langlois 
wrote:

> Here is the vdsm.log
>
>
> jsonrpc.Executor/0::ERROR::2017-03-18 
> 08:23:48,317::hsm::2403::Storage.HSM::(connectStorageServer)
> Could not connect to storageServer
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/hsm.py", line 2400, in
> connectStorageServer
> conObj.connect()
>   File "/usr/share/vdsm/storage/storageServer.py", line 242, in connect
> self.getMountObj().getRecord().fs_file)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line
> 260, in getRecord
> (self.fs_spec, self.fs_file))
> OSError: [Errno 2] Mount of `/dev/dm-3` at `/rhev/data-center/mnt/_dev_dm-3`
> does not exist
>
>
> thanks
>
> On Fri, Mar 17, 2017 at 3:06 PM, Fred Rolland  wrote:
>
>> Please send Vdsm log.
>> Thanks
>>
>> On Fri, Mar 17, 2017 at 8:46 PM, carl langlois 
>> wrote:
>>
>>> Hi,
>>>
>>> The link that you send is for NFS strorage but i am trying to add a
>>> POSIX compliant.
>>>
>>> [image: Inline image 1]
>>>
>>>
>>>
>>>
>>> when i press okey it mount the disk to :
>>>
>>> [root@ovhost4 ~]# ls -al /rhev/data-center/mnt/_dev_dm-4/
>>> total 28
>>> drwxr-xr-x. 4 vdsm kvm  4096 Mar 16 12:12 .
>>> drwxr-xr-x. 6 vdsm kvm  4096 Mar 17 13:35 ..
>>> drwxr-xr-x. 2 vdsm kvm 16384 Mar 16 11:42 lost+found
>>> drwxr-xr-x. 4 vdsm kvm  4096 Mar 16 12:12 .Trash-0
>>>
>>>
>>> and doing a touch with vdsm user work
>>>
>>> [root@ovhost4 ~]# sudo -u vdsm touch  /rhev/data-center/mnt/_dev_dm
>>> -4/test
>>> [root@ovhost4 ~]# ls -al /rhev/data-center/mnt/_dev_dm-4/
>>> total 28
>>> drwxr-xr-x. 4 vdsm kvm  4096 Mar 17 13:44 .
>>> drwxr-xr-x. 6 vdsm kvm  4096 Mar 17 13:35 ..
>>> drwxr-xr-x. 2 vdsm kvm 16384 Mar 16 11:42 lost+found
>>> -rw-r--r--. 1 vdsm kvm 0 Mar 17 13:44 test
>>> drwxr-xr-x. 4 vdsm kvm  4096 Mar 16 12:12 .Trash-0
>>>
>>>
>>> But it fail with a general exception error and the storage does not
>>> exist in ovirt
>>>
>>> any help would be appreciated.
>>>
>>>
>>> Which log you need to see?
>>>
>>> Thanks
>>>
>>>
>>>
>>> Le jeu. 16 mars 2017 17:02, Fred Rolland  a écrit :
>>>
 Hi,

 Can you check if the folder permissions are OK ?
 Check [1] for more details.

 Can you share more of the log ?


 [1] https://www.ovirt.org/documentation/how-to/troubleshooting/t
 roubleshooting-nfs-storage-issues/

 On Thu, Mar 16, 2017 at 7:49 PM, carl langlois 
 wrote:

 Hi Guys,

 I am trying to add a posix FS on one of my host. Ovirt in actually
 mounting it but fail with "Error while executing action Add Storage
 Connection: General Exception"

 If i look in the vdsm.log i cant see

 sonrpc.Executor/7::DEBUG::2017-03-16 
 12:39:28,248::fileUtils::209::Storage.fileUtils::(createdir)
 Creating directory: /rhev/data-center/mnt/_dev_dm-3 mode: None
 jsonrpc.Executor/7::DEBUG::2017-03-16 
 12:39:28,248::fileUtils::218::Storage.fileUtils::(createdir)
 Using existing directory: /rhev/data-center/mnt/_dev_dm-3
 jsonrpc.Executor/7::INFO::2017-03-16 
 12:39:28,248::mount::226::storage.Mount::(mount)
 mounting /dev/dm-3 at /rhev/data-center/mnt/_dev_dm-3
 jsonrpc.Executor/7::DEBUG::2017-03-16 
 12:39:28,270::utils::871::storage.Mount::(stopwatch)
 /rhev/data-center/mnt/_dev_dm-3 mounted: 0.02 seconds
 jsonrpc.Executor/7::ERROR::2017-03-16 
 12:39:28,271::hsm::2403::Storage.HSM::(connectStorageServer)
 Could not connect to storageServer
 Traceback (most recent call last):
   File "/usr/share/vdsm/storage/hsm.py", line 2400, in
 connectStorageServer
 conObj.connect()
   File "/usr/share/vdsm/storage/storageServer.py", line 242, in connect
 self.getMountObj().getRecord().fs_file)
   File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line
 260, in getRecord
 (self.fs_spec, self.fs_file))
 OSError: [Errno 2] Mount of `/dev/dm-3` at
 `/rhev/data-center/mnt/_dev_dm-3` does not exist


 any help would be appreciated.

 Thanks

 CL


 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users



>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Strange network performance on VirtIIO VM NIC

2017-03-21 Thread Yaniv Kaul
On Mon, Mar 20, 2017 at 5:14 PM, FERNANDO FREDIANI <
fernando.fredi...@upx.com> wrote:

> Hello Yaniv.
>
> It also looks to me initially that for 1Gbps multi-queue would not be
> necessary, however the Virtual Machine is relatively busy where the CPU
> necessary to process it may (or not) be competing with the processes
> running on in the guest.
>
> The network is as following: 3 x 1 Gb interfaces  bonded together with
> layer2+3 has algorithm where the VMs connect to the outside world.
>

Is your host NUMA-capable (multiple sockets)? Are all your interfaces
connected to the same socket? Perhaps one is on the 'other' socket (a
different PCI bus, etc.)? This can introduce latency.
In general, you would want to align everything, from host (interrupts of
the drivers) all the way to the guest to perform the processing on the same
socket.

Layer 2+3 may or may not provide you with good distribution across the
physical links, depending on the traffic. Layer 3+4 hashing is better, but
is not entirely compliant with all vendors/equipment.
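
A quick way to check both points (interface names below are placeholders):

cat /sys/class/net/em1/device/numa_node                              # NUMA node of the NIC, -1 = no locality info
grep -E 'Bonding Mode|Transmit Hash Policy' /proc/net/bonding/bond0  # current mode and hash policy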

vNIC1 and vNIC2 in the VMs are the same VirtIO NIC types. These vNICs are
> connected to the same VLAN and they are both able to output 1Gbps
> throughput each at the same time in iperf tests as the bond below has 3Gb
> capacity.
>

Linux is not always happy with multiple interfaces on the same L2 network.
I think there are some parameters that need to be set to make it happy.
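
Presumably the parameters meant here are the ARP-flux ones; a sketch of what is
commonly set when two NICs sit on the same subnet (values are the usual
suggestions, not something verified on this setup):

sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
sysctl net.ipv4.conf.all.rp_filter      # strict rp_filter (1) can also drop replies arriving on the "wrong" NIC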


> Please note something interesting I mentioned previously: All traffic
> currently goes in and out via vNIC1 which is showing packet loss (3% to
> 10%) on the tests conducted. NIC2 has zero traffic and if the same tests
> are conducted against it shows 0% packets loss.
> At first impression if it was something related to the bond or even to the
> physical NICs on the Host it should show packet loss for ANY of the vNICs
> as the traffic flows through the same physical NIC and bond, but is not the
> case.
>
> This is the qemu-kvm command the Host is executing:
> /usr/libexec/qemu-kvm -name guest=VM_NAME_REPLACED,debug-threads=on -S
> -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/
> qemu/domain-6-VM_NAME_REPLACED/master-key.aes -machine
> pc-i440fx-rhel7.3.0,accel=kvm,usb=off -cpu SandyBridge -m 4096 -realtime
> mlock=off -smp 4,maxcpus=16,sockets=16,cores=1,threads=1 -numa
> node,nodeid=0,cpus=0-3,mem=4096 -uuid 57ffc2ed-fec5-47d6-bfb1-60c728737bd2
> -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-3.1611.el7.
> centos,serial=4C4C4544-0043-5610-804B-B1C04F4E3232,uuid=
> 57ffc2ed-fec5-47d6-bfb1-60c728737bd2 -no-user-config -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-6-
> VM_NAME_REPLACED/monitor.sock,server,nowait -mon 
> chardev=charmonitor,id=monitor,mode=control
> -rtc base=2017-03-17T01:12:39,driftfix=slew -global
> kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on
> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x7 -device
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
> -drive if=none,id=drive-ide0-1-0,readonly=on -device
> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
> file=/rhev/data-center/2325e1a4-c702-469c-82eb-ff43baa06d44/8dcd90f4-c0f0-
> 47db-be39-5b49685acc04/images/ebe10e75-799a-439e-bc52-
> 551b894c34fa/1a73cd53-0e51-4e49-8631-38cf571f6bb9,format=
> qcow2,if=none,id=drive-scsi0-0-0-0,serial=ebe10e75-799a-
> 439e-bc52-551b894c34fa,cache=none,werror=stop,rerror=stop,aio=native
> -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-
> scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 -drive file=/rhev/data-center/
> 2325e1a4-c702-469c-82eb-ff43baa06d44/8dcd90f4-c0f0-
> 47db-be39-5b49685acc04/images/db401b27-006d-494c-a1ee-
> 1d37810710c8/664cffe6-52f8-429d-8bb9-2f43fa7a468f,format=
> qcow2,if=none,id=drive-scsi0-0-0-1,serial=db401b27-006d-
> 494c-a1ee-1d37810710c8,cache=none,werror=stop,rerror=stop,aio=native
> -device 
> scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi0-0-0-1,id=scsi0-0-0-1
> -netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=36 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:60,bus=pci.0,addr=0x3
> -netdev tap,fd=37,id=hostnet1,vhost=on,vhostfd=38 -device
> virtio-net-pci,netdev=hostnet1,id=net1,mac=00:1a:4a:16:01:61,bus=pci.0,addr=0x4
> -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/
> 57ffc2ed-fec5-47d6-bfb1-60c728737bd2.com.redhat.rhevm.vdsm,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=
> charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev
> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/
> 57ffc2ed-fec5-47d6-bfb1-60c728737bd2.org.qemu.guest_agent.0,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=
> charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev
> spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-
> 

Re: [ovirt-users] Event History for a VM

2017-03-21 Thread Yedidyah Bar David
On Thu, Mar 16, 2017 at 12:54 PM, Sven Achtelik  wrote:
> Hi All,
>
>
>
> I would need to have an Event-History of our VMs for auditing purposes that
> is able to go back until the moment the VM was created/imported. I found the
> Events Tab in the VM view and found that this is not showing everything to
> the moment of creation. Things that are important for me would be any change
> in CPUs or Host that the VM is pinned to. Are the Events stored in the
> Engine DB and can I read them in any way ? Is there a value that needs to be
> changed in order to keep all Events for a VM ?

IIUC that's AuditLogAgingThreshold (in engine-config), which defaults to 30 days.
IIUC it's not designed to be extended "forever" - doing so will likely have
a significant impact on performance.
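
For reference, a minimal sketch (the 90 below is just an example value, in days):

engine-config -g AuditLogAgingThreshold
engine-config -s AuditLogAgingThreshold=90
systemctl restart ovirt-engine          # engine-config changes need an engine restart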

You might also want to have a look at Event Notifications:

http://www.ovirt.org/documentation/admin-guide/chap-Event_Notifications/

Best,

>
>
>
> Thank you for helping,
>
>
>
> Sven
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Event History for a VM

2017-03-21 Thread Sven Achtelik
Hi Kevin,

thank you for the details on this. Do you also know how and where things like 
changes to a VM's CPUs, preferred host, or RAM are stored? I'm looking for 
the ability to show the history of a VM.

Example: I have a VM that has 4 vCPUs and 64 GB RAM and is pinned to host 1. 2 or 3 
years later I need to be able to show an auditor that this has not changed 
throughout that time.

Are those changes recorded? And if that's the case, are they recorded in the DB?

Thank you,

Sven

From: Kevin Goldblatt [mailto:kgold...@redhat.com]
Sent: Tuesday, 21 March 2017 09:32
To: Gianluca Cecchi 
Cc: Sven Achtelik ; users@ovirt.org; Raz Tamir 
; Goldblatt, Kevin 
Subject: Re: [ovirt-users] Event History for a VM

Hi Sven,
On your engine you can run the following to get the vms info from the engine 
database:

su - postgres -c "psql -U postgres engine -c  'select * from vms;'" |less -S
You may also find some info on the specific vm in the engine log and the 
libvirt log:
On the engine - /var/log/ovirt-engine/engine.log (this will probably have been 
rotated in your case. Check to see the oldest engine.log in the directory).

On the host the VM runs on - /var/log/libvirt/qemu/vm111.log

Hope this helps,

Kevin

On Tue, Mar 21, 2017 at 9:45 AM, Gianluca Cecchi 
> wrote:
On Tue, Mar 21, 2017 at 8:42 AM, Sven Achtelik 
> wrote:
Hi,

does anyone know if this information is pulled from the logs and if it’s 
related to the log-rotation or if this is part of the Engine DB. I need to know 
if it’s possible to read this information like 2 or 3 years later for some 
auditing purpose. It might help if you could let me know where to look at.

Thank you,

Sven
From: users-boun...@ovirt.org 
[mailto:users-boun...@ovirt.org] On Behalf Of 
Sven Achtelik
Sent: Thursday, 16 March 2017 11:54
To: users@ovirt.org
Subject: [ovirt-users] Event History for a VM

Hi All,

I would need to have an Event-History of our VMs for auditing purposes that is 
able to go back until the moment the VM was created/imported. I found the 
Events Tab in the VM view and found that this is not showing everything to the 
moment of creation. Things that are important for me would be any change in 
CPUs or Host that the VM is pinned to. Are the Events stored in the Engine DB 
and can I read them in any way ? Is there a value that needs to be changed in 
order to keep all Events for a VM ?

Thank you for helping,

Sven


+1

Gianluca

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Event History for a VM

2017-03-21 Thread Kevin Goldblatt
Hi Sven,

On your engine you can run the following to get the vms info from the
engine database:

su - postgres -c "psql -U postgres engine -c  'select * from vms;'" |less -S
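
The events themselves live in the audit_log table, so a similar query scoped to
one VM could look like the line below (the column and VM names are my assumption;
check them with \d audit_log first):

su - postgres -c "psql -U postgres engine -c \"select log_time, severity, message from audit_log where vm_name = 'vm111' order by log_time;\"" | less -S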

You may also find some info on the specific vm in the engine log and the
libvirt log:

On the engine - /var/log/ovirt-engine/engine.log (this will probably have
been rotated in your case. Check to see the oldest engine.log in the
directory).

On the host the VM runs on - /var/log/libvirt/qemu/vm111.log


Hope this helps,


Kevin

On Tue, Mar 21, 2017 at 9:45 AM, Gianluca Cecchi 
wrote:

> On Tue, Mar 21, 2017 at 8:42 AM, Sven Achtelik 
> wrote:
>
>> Hi,
>>
>>
>>
>> does anyone know if this information is pulled from the logs and if it’s
>> related to the log-rotation or if this is part of the Engine DB. I need to
>> know if it’s possible to read this information like 2 or 3 years later for
>> some auditing purpose. It might help if you could let me know where to look
>> at.
>>
>>
>>
>> Thank you,
>>
>>
>>
>> Sven
>>
>> *Von:* users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] *Im
>> Auftrag von *Sven Achtelik
>> *Gesendet:* Donnerstag, 16. März 2017 11:54
>> *An:* users@ovirt.org
>> *Betreff:* [ovirt-users] Event History for a VM
>>
>>
>>
>> Hi All,
>>
>>
>>
>> I would need to have an Event-History of our VMs for auditing purposes
>> that is able to go back until the moment the VM was created/imported. I
>> found the Events Tab in the VM view and found that this is not showing
>> everything to the moment of creation. Things that are important for me
>> would be any change in CPUs or Host that the VM is pinned to. Are the
>> Events stored in the Engine DB and can I read them in any way ? Is there a
>> value that needs to be changed in order to keep all Events for a VM ?
>>
>>
>>
>> Thank you for helping,
>>
>>
>>
>> Sven
>>
>>
>>
>
> +1
>
> Gianluca
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] The Hosted Engine Storage Domain doesn't exist. It should be imported into the setup.

2017-03-21 Thread Paolo Margara
Hi Simone,

I'll respond inline


On 20/03/2017 11:59, Simone Tiraboschi wrote:
>
>
> On Mon, Mar 20, 2017 at 11:15 AM, Simone Tiraboschi
> > wrote:
>
>
> On Mon, Mar 20, 2017 at 10:12 AM, Paolo Margara
> > wrote:
>
> Hi Yedidyah,
>
> Il 19/03/2017 11:55, Yedidyah Bar David ha scritto:
> > On Sat, Mar 18, 2017 at 12:25 PM, Paolo Margara
> > wrote:
> >> Hi list,
> >>
> >> I'm working on a system running on oVirt 3.6 and the Engine
> is reporting
> >> the warning "The Hosted Engine Storage Domain doesn't
> exist. It should
> >> be imported into the setup." repeatedly in the Events tab
> into the Admin
> >> Portal.
> >>
> >> I've read into the list that Hosted Engine Storage Domain
> should be
> >> imported automatically into the setup during the upgrade to 3.6
> >> (original setup was on 3.5), but this not happened while the
> >> HostedEngine is correctly visible into the VM tab after the
> upgrade.
> > Was the upgrade to 3.6 successful and clean?
> The upgrade from 3.5 to 3.6 was successful, as every
> subsequent minor
> release upgrades. I rechecked the upgrade logs I haven't seen any
> relevant error.
> One addition information: I'm currently running on CentOS 7
> and also the
> original setup was on this release version.
> >
> >> The Hosted Engine Storage Domain is on a dedicated gluster
> volume but
> >> considering that, if I remember correctly, oVirt 3.5 at
> that time did
> >> not support gluster as a backend for the HostedEngine at
> that time I had
> >> installed the engine using gluster's NFS server using
> >> 'localhost:/hosted-engine' as a mount point.
> >>
> >> Currently on every nodes I can read into the log of the
> >> ovirt-hosted-engine-ha agent the following lines:
> >>
> >> MainThread::INFO::2017-03-17
> >>
> 
> 14:04:17,773::hosted_engine::462::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> >> Current state EngineUp (score: 3400)
> >> MainThread::INFO::2017-03-17
> >>
> 
> 14:04:17,774::hosted_engine::467::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> >> Best remote host virtnode-0-1 (id: 2
> >> , score: 3400)
> >> MainThread::INFO::2017-03-17
> >>
> 
> 14:04:27,956::hosted_engine::613::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
> >> Initializing VDSM
> >> MainThread::INFO::2017-03-17
> >>
> 
> 14:04:28,055::hosted_engine::658::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
> >> Connecting the storage
> >> MainThread::INFO::2017-03-17
> >>
> 14:04:28,078::storage_server::218::ovirt_hosted_engine_ha.li
> 
> b.storage_server.StorageServer::(connect_storage_server)
> >> Connecting storage server
> >> MainThread::INFO::2017-03-17
> >>
> 14:04:28,278::storage_server::222::ovirt_hosted_engine_ha.li
> 
> b.storage_server.StorageServer::(connect_storage_server)
> >> Connecting storage server
> >> MainThread::INFO::2017-03-17
> >>
> 14:04:28,398::storage_server::230::ovirt_hosted_engine_ha.li
> 
> b.storage_server.StorageServer::(connect_storage_server)
> >> Refreshing the storage domain
> >> MainThread::INFO::2017-03-17
> >>
> 
> 14:04:28,822::hosted_engine::685::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
> >> Preparing images
> >> MainThread::INFO::2017-03-17
> >>
> 
> 14:04:28,822::image::126::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images)
> >> Preparing images
> >> MainThread::INFO::2017-03-17
> >>
> 
> 14:04:29,308::hosted_engine::688::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
> >> Reloading vm.conf from the
> >>  shared storage domain
> >> MainThread::INFO::2017-03-17
> >>
> 
> 14:04:29,309::config::206::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
> >> Trying to get a fresher copy
> >> of vm configuration from the OVF_STORE
> >> MainThread::WARNING::2017-03-17
> >>
> 
> 

Re: [ovirt-users] Event History for a VM

2017-03-21 Thread Gianluca Cecchi
On Tue, Mar 21, 2017 at 8:42 AM, Sven Achtelik 
wrote:

> Hi,
>
>
>
> does anyone know if this information is pulled from the logs and if it’s
> related to the log-rotation or if this is part of the Engine DB. I need to
> know if it’s possible to read this information like 2 or 3 years later for
> some auditing purpose. It might help if you could let me know where to look
> at.
>
>
>
> Thank you,
>
>
>
> Sven
>
> *Von:* users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] *Im
> Auftrag von *Sven Achtelik
> *Gesendet:* Donnerstag, 16. März 2017 11:54
> *An:* users@ovirt.org
> *Betreff:* [ovirt-users] Event History for a VM
>
>
>
> Hi All,
>
>
>
> I would need to have an Event-History of our VMs for auditing purposes
> that is able to go back until the moment the VM was created/imported. I
> found the Events Tab in the VM view and found that this is not showing
> everything to the moment of creation. Things that are important for me
> would be any change in CPUs or Host that the VM is pinned to. Are the
> Events stored in the Engine DB and can I read them in any way ? Is there a
> value that needs to be changed in order to keep all Events for a VM ?
>
>
>
> Thank you for helping,
>
>
>
> Sven
>
>
>

+1

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Event History for a VM

2017-03-21 Thread Sven Achtelik
Hi,

does anyone know if this information is pulled from the logs and if it's 
related to the log rotation, or if this is part of the engine DB? I need to know 
if it's possible to read this information, say, 2 or 3 years later for 
auditing purposes. It might help if you could let me know where to look.

Thank you,

Sven
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of 
Sven Achtelik
Sent: Thursday, 16 March 2017 11:54
To: users@ovirt.org
Subject: [ovirt-users] Event History for a VM

Hi All,

I would need to have an Event-History of our VMs for auditing purposes that is 
able to go back until the moment the VM was created/imported. I found the 
Events Tab in the VM view and found that this is not showing everything to the 
moment of creation. Things that are important for me would be any change in 
CPUs or Host that the VM is pinned to. Are the Events stored in the Engine DB 
and can I read them in any way ? Is there a value that needs to be changed in 
order to keep all Events for a VM ?

Thank you for helping,

Sven

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Get Involved with the oVirt Project! Spring 2017 Edition

2017-03-21 Thread Sandro Bonazzola
Hi,

Got some time to spare? Join us on the oVirt project!

oVirt is a feature rich server virtualization management system with
advanced capabilities. It also provides a crucial venue for user and
developer cooperation, and is the first truly open and comprehensive data
center virtualization management initiative.

If you are new to oVirt, we recommend that you visit the develop
 page to learn more about the project, and
the working with Gerrit
 page for
more about our code review system. If you haven’t worked with Gerrit
before, we recommend you visit the introduction to Gerrit
 page.

On our community page  you’ll discover
the many ways in which you can contribute to the project, even without
programming skills.

More suggestions? Below you’ll find other ways in which you can contribute
to the oVirt project.

Keep ‘em Fresh

Like the idea of having fresh disk images of your favorite distribution in
the oVirt Glance repository? You can help in two ways: First, test existing
online images to ensure that they work with cloud-init. Second, create an
online image and report your success to de...@ovirt.org. We’ll be happy to
upload the images once they are ready.


Get Vdsm Running on Debian

If you like Debian and have programming or packaging skills, you can help
with the ongoing effort to get VDSM running on Debian
.


All the current work on Debian (VDSM and related packages) can be found in
the Debian git repositories . You can also keep up
to date on how the work is progressing by subscribing to the oVirt devel
mailing list .

Fix Some Bugs

These bugs are just waiting to be resolved:

Bug ID - Summary

1411133 - Hosted-Engine: iscsi activation storage failing
1209881 - [RFE] Remove iptables from hosted-engine.spec file to be able to deploy hosted-engine without firewall services installed
1353713 - [RFE] - iSCSI Setup Should use different User/Password For Discovery and Portal
1130445 - [TEXT] - If engine-setup asks for updates and choice is no, suggest '--offline' on re-run.
1174236 - [RFE] Integrate installation with Server Roles of Fedora Server
1356425 - [TEXT] 'hosted-engine --vm-start' said it destroyed the VM
1328488 - The engine fails with oracle java8

Port oVirt to Your Favorite Distribution

Are you sure-handed at packaging software, and using a distribution
currently unsupported by oVirt? Help to get oVirt ported to any of the
following distributions:

Fedora 

CentOS 

Gentoo . Also, check out Google Summer
of Code  to
learn how you can contribute to Gentoo and get paid for it!

Debian
:


Archlinux 

OpenSUSE 

Get into DevOps

If you love DevOps and find yourself counting stable builds in Jenkins CI
while trying to fall asleep, then the oVirt infrastructure team is looking
for you! Join us and dive into the latest and coolest DevOps tools today!
Check out these open tasks
.

You can also help by telling us how you use oVirt in your DevOps
environment. Email us at de...@ovirt.org (Please use [DevOps] in the
subject line. For more information on oVirt DevOps, visit oVirt
infrastructure docs
 and oVirt
infra documentation
.



Improve the oVirt System Testing

Make oVirt system testing
 - and
the Lago system  that runs it - even
better. Visit Lago project documentation
 to learn how to run the test suite
yourself.

More Bugs, No Coding Required

No time for DevOps and you are not a programmer? You can still contribute.
Here are some easy bugs to fix

that don’t require a single line of code.

Tests to Run

Do you prefer to test things? Here are some test cases you can try
, using nightly snapshots
.

Switching to Fedora 25

For oVirt 4.2, support for Fedora will be based on Fedora 25. You can help
to 

Re: [ovirt-users] OSSEC reporting hidden processes

2017-03-21 Thread Yedidyah Bar David
On Mon, Mar 20, 2017 at 5:59 PM, Charles Kozler  wrote:
> Hi -
>
> I am wondering why OSSEC would be reporting hidden processes on my ovirt
> nodes? I run OSSEC across the infrastructure and multiple ovirt clusters
> have assorted nodes that will report a process is running but does not have
> an entry in /proc and thus "possible rootkit" alert is fired
>
> I am well aware that I do not have rootkits on these systems but am
> wondering what exactly inside ovirt is causing this to trigger? Or any
> ideas? Below is sample alert. All my google-fu turns up is that a process
> would have to **try** to hide itself from /proc, so curious what this is
> inside ovirt. Thanks!
>
> -
>
> OSSEC HIDS Notification.
> 2017 Mar 20 11:54:47
>
> Received From: (ovirtnode2.mydomain.com2) any->rootcheck
> Rule: 510 fired (level 7) -> "Host-based anomaly detection event
> (rootcheck)."
> Portion of the log(s):
>
> Process '24574' hidden from /proc. Possible kernel level rootkit.

What do you get from:

ps -eLf | grep -w 24574

Thanks,
-- 
Didi
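
A side note for anyone hitting the same alert: on hosts running vdsm/qemu
these reports are often caused by thread IDs (LWPs) or short-lived helper
processes rather than anything hidden on purpose. A rough check, reusing the
PID from the alert above, might look like this:

   pid=24574
   # Is the ID a thread (LWP) of some process?  Column 4 of 'ps -eLf' is the LWP.
   ps -eLf | awk -v p="$pid" '$4 == p'
   # Threads are not listed in a plain readdir of /proc, but they do appear
   # under their parent's task directory, which is enough to trip rootcheck:
   ls -d /proc/*/task/"$pid" 2>/dev/null
   # If both commands come back empty, the process most likely exited between
   # OSSEC's syscall probe and its /proc scan.
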
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how to handle mount options for hosted engine on glusterfs

2017-03-21 Thread knarra

On 03/21/2017 10:52 AM, Ian Neilsen wrote:

knara

Looks like your conf is incorrect for mnt option.


Hi Ian,

mnt_option should be mnt_options=backup-volfile-servers=<server1>:<server2>
and this is how we test it.


Thanks
kasturi.

It should be I believe; mnt_options=backupvolfile-server=server name

not

mnt_options=backup-volfile-servers=host2

If your DNS isn't working or your hosts file is incorrect, this will
prevent it as well.




On 21 March 2017 at 03:30, /dev/null wrote:


Hi kasturi,

thank you. I tested it and it does not seem to work; even after rebooting,
the current mount does not show the mnt_options, nor does the switchover
work.

[root@host2 ~]# cat /etc/ovirt-hosted-engine/hosted-engine.conf
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
gateway=192.168.2.1
iqn=
conf_image_UUID=7bdc29ad-bee6-4a33-8d58-feae9f45d54f
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
sdUUID=1775d440-649c-4921-ba3b-9b6218c27ef3
connectionUUID=fcf70593-8214-4e8d-b546-63c210a3d5e7
conf_volume_UUID=06dd17e5-a440-417a-94e8-75929b6f9ed5
user=
host_id=2
bridge=ovirtmgmt
metadata_image_UUID=6252c21c-227d-4dbd-bb7b-65cf342154b6
spUUID=----
mnt_options=backup-volfile-servers=host2
fqdn=ovirt.test.lab
portal=
vm_disk_id=1bb9ea7f-986c-4803-ae82-8d5a47b1c496
metadata_volume_UUID=426ff2cc-58a2-4b83-b22f-3f7dc99890d4
vm_disk_vol_id=b57d40d2-e68b-440a-bab7-0a9631f4baa4
domainType=glusterfs
port=
console=qxl
ca_subject="C=EN, L=Test, O=Test, CN=Test"
password=
vmid=272942f3-99b9-48b9-aca4-19ec852f6874
lockspace_image_UUID=9fbdbfd4-3b31-43ce-80e2-283f0aeead49
lockspace_volume_UUID=b1e4d3ed-ec78-41cd-9a39-4372f488fb92
vdsm_use_ssl=true
storage=host1:/gvol0
conf=/var/run/ovirt-hosted-engine-ha/vm.conf


[root@host2 ~]# mount | grep gvol0
host1:/gvol0 on /rhev/data-center/mnt/glusterSD/host1:_gvol0 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

Any suggestion?

I will try an answerfile-install as well later, but it was helpful
to know where to set this.

Thanks & best regards
*On Mon, 20 Mar 2017 12:12:25 +0530, knarra wrote*
> On 03/20/2017 05:09 AM, /dev/null wrote:
>

Hi,

how do i make the hosted_storage aware of a gluster server failure? In
--deploy i cannot provide backup-volfile-servers. In
/etc/ovirt-hosted-engine/hosted-engine.conf there is an mnt_options line,
but i read
(https://github.com/oVirt/ovirt-hosted-engine-setup/commit/995c6a65ab897d804f794306cc3654214f2c29b6)
that these settings get lost during deployment on secondary servers.

Is there an official way to deal with that? Should this option be set
manually on all nodes?

Thanks!

/dev/null

Hi,

I think in the above patch they are just hiding the query for mount_options,
but all the code is still present and you should not lose mount options
during additional host deployment. For more info you can refer to [1].

You can set this option manually on all nodes by editing
/etc/ovirt-hosted-engine/hosted-engine.conf. The following steps will help
you to achieve this:

1) Move each host to maintenance and edit the file
   '/etc/ovirt-hosted-engine/hosted-engine.conf'.
2) Set mnt_options=backup-volfile-servers=<server1>:<server2>.
3) Restart the services: 'systemctl restart ovirt-ha-agent' ;
   'systemctl restart ovirt-ha-broker'.
4) Activate the node.

Repeat the above steps for all the nodes in the cluster.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1426517#c2

Hope this helps !!

Thanks
kasturi
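
For reference, a minimal sketch of kasturi's steps above on a single host,
assuming the backup Gluster servers are named gluster2 and gluster3
(substitute your own host names, and move the host to maintenance in the
webadmin UI first):

   # 1) + 2) edit the config and set the backup servers:
   vi /etc/ovirt-hosted-engine/hosted-engine.conf
   #    mnt_options=backup-volfile-servers=gluster2:gluster3

   # 3) restart the HA services so they pick up the new mount options:
   systemctl restart ovirt-ha-agent ovirt-ha-broker

   # 4) activate the host again in the webadmin UI, then check the result;
   #    once the storage is remounted, the extra volfile servers should show
   #    up on the glusterfs client process command line:
   grep mnt_options /etc/ovirt-hosted-engine/hosted-engine.conf
   ps ax | grep '[g]lusterfs' | grep -o 'volfile-server=[^ ]*'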


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


