Re: [ovirt-users] oVirt + Gluster Hyperconverged

2016-07-18 Thread Hanson Turner

Hi Fernando,

Nothing spectacular that I have seen, but I'm using a minimum of 16GB 
per node.


You'll probably want to set up your hosted engine with 2 CPUs and 4096 MB of 
RAM. I believe those are the minimum requirements.
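
For reference, a rough sketch of how that sizing looks in a hosted-engine
deploy answer file (the key names below are from memory and may differ
between versions, so treat them as illustrative only; the same values can
simply be entered at the interactive prompts of "hosted-engine --deploy"):

    [environment:default]
    OVEHOSTED_VM/vmMemSizeMB=str:4096
    OVEHOSTED_VM/vmVCpus=str:2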


Thanks,

Hanson


On 07/15/2016 09:48 AM, Fernando Frediani wrote:

Hi folks,

I have a few servers with a reasonable amount of raw storage, but there 
are 3 of them with only 8GB of memory each.
I wanted to run them as an oVirt Hyperconverged + Gluster setup, mainly to 
take advantage of the storage spread across them and to have the ability 
to live migrate VMs.


The question is: does running Gluster on the same hypervisor nodes 
consume so much memory that there won't be much left for running VMs?


Thanks
Fernando
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--
-
- Network Engineer  -
-
-  Andrews Wireless -
- 671 Durham road 21-
-Uxbridge ON, L9P 1R4   -
-P: 905.852.8896-
-F: 905.852.7587-
- Toll free  (877)852.8896  -
-

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] disk not bootable

2016-07-18 Thread Fernando Fuentes
Oops... forgot the link:
 
http://pastebin.com/LereJgyw
 
The requested info is in the pastebin.
 
Regards,
 
 
--
Fernando Fuentes
ffuen...@txweather.org
http://www.txweather.org
 
 
 
On Mon, Jul 18, 2016, at 03:16 PM, Fernando Fuentes wrote:
> Nir,
>
> After some playing around with pvscan I was able to get all of the
> needed information.
>
> Please see:
>
>
> --
> Fernando Fuentes
> ffuen...@txweather.org
> http://www.txweather.org
>
>
>
> On Mon, Jul 18, 2016, at 02:30 PM, Nir Soffer wrote:
>> On Mon, Jul 18, 2016 at 6:48 PM, Fernando Fuentes
>>  wrote:
>> > Nir,
>> >
>> > As requested:
>> >
>> > [root@gamma ~]# lsblk
>> > NAME                                                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
>> > sda                                                             8:0    0   557G  0 disk
>> > ├─sda1                                                          8:1    0   500M  0 part  /boot
>> > └─sda2                                                          8:2    0 556.5G  0 part
>> >   ├─vg_gamma-lv_root (dm-0)                                   253:0    0    50G  0 lvm   /
>> >   ├─vg_gamma-lv_swap (dm-1)                                   253:1    0     4G  0 lvm   [SWAP]
>> >   └─vg_gamma-lv_home (dm-2)                                   253:2    0 502.4G  0 lvm   /home
>> > sr0                                                            11:0    1  1024M  0 rom
>> > sdb                                                             8:16   0     2T  0 disk
>> > └─36589cfc00881b9b93c2623780840 (dm-4)                        253:4    0     2T  0 mpath
>> > sdc                                                             8:32   0     2T  0 disk
>> > └─36589cfc0050564002c7e51978316 (dm-3)                        253:3    0     2T  0 mpath
>> >   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-metadata (dm-5)  253:5    0   512M  0 lvm
>> >   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-outbox (dm-6)    253:6    0   128M  0 lvm
>> >   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-leases (dm-7)    253:7    0     2G  0 lvm
>> >   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-ids (dm-8)       253:8    0   128M  0 lvm
>> >   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-inbox (dm-9)     253:9    0   128M  0 lvm
>> >   └─3ccb7b67--8067--4315--9656--d68ba10975ba-master (dm-10)   253:10   0     1G  0 lvm
>> > sdd                                                             8:48   0     4T  0 disk
>> > └─36589cfc0059ccab70662b71c47ef (dm-11)                       253:11   0     4T  0 mpath
>> >   ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-metadata (dm-12) 253:12   0   512M  0 lvm
>> >   ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-ids (dm-13)      253:13   0   128M  0 lvm
>> >   ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-leases (dm-14)   253:14   0     2G  0 lvm
>> >   ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-outbox (dm-15)   253:15   0   128M  0 lvm
>> >   ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-inbox (dm-16)    253:16   0   128M  0 lvm
>> >   └─4861322b--352f--41c6--890a--5cbf1c2c1f01-master (dm-17)   253:17   0     1G  0 lvm
>>
>> So you have 2 storage domains:
>>
>> - 3ccb7b67-8067-4315-9656-d68ba10975ba
>> - 4861322b-352f-41c6-890a-5cbf1c2c1f01
>>
>> But most likely neither of them is active now.
>>
>> Can you share the output of:
>>
>> iscsiadm -m session
>>
>> On a system connected to iscsi storage you will see something like:
>>
>> # iscsiadm -m session
>> tcp: [5] 10.35.0.99:3260,1 iqn.2003-01.org.dumbo.target1 (non-flash)
>>
>> The special lvs (ids, leases, ...) should be active, and you should also
>> see the regular disk lvs (used for vm disks and their snapshots).
>>
>> Here is an example from a machine connected to an active iscsi domain:
>>
>> # lvs
>>   LV                                   VG                                   Attr   LSize
>>   27c4c795-bca4-4d7b-9b40-cda9098790f5 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi---   1.00g
>>   35be1f52-5b28-4c90-957a-710dbbb8f13f 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi---   8.00g
>>   36d9b41b-4b01-4fc2-8e93-ccf79af0f766 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi--- 128.00m
>>   4fda3b44-27a5-4ce4-b8c3-66744aa9937b 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi---   2.12g
>>   c2e78f72-d499-44f0-91f5-9930a599dc87 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi--- 128.00m
>>   d49919b4-30fc-440f-9b21-3367ddfdf396 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi---   4.00g
>>   f3b10280-43ed-4772-b122-18c92e098171 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi---   1.00g
>>   f409cc48-8248-4239-a4ea-66b0b1084416 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi---   1.00g
>>   ids                                  5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-ao 128.00m
>>   inbox                                5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a- 128.00m
>>   leases                               5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a-   2.00g
>>   master                               5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a-   1.00g
>>   metadata                             5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a- 512.00m
>>   outbox                               5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a- 128.00m
>>
>>
>>
>> > [root@gamma ~]#
>> >
>> > Regards,
>> >
>> > --
>> > 

Re: [ovirt-users] disk not bootable

2016-07-18 Thread Fernando Fuentes
Nir,
 
After some playing around with pvscan I was able to get all of the needed
information.
 
Please see:
 
 
--
Fernando Fuentes
ffuen...@txweather.org
http://www.txweather.org
 
 
 
On Mon, Jul 18, 2016, at 02:30 PM, Nir Soffer wrote:
> On Mon, Jul 18, 2016 at 6:48 PM, Fernando Fuentes
>  wrote:
> > Nir,
> >
> > As requested:
> >
> > [root@gamma ~]# lsblk
> > NAME                                                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
> > sda                                                             8:0    0   557G  0 disk
> > ├─sda1                                                          8:1    0   500M  0 part  /boot
> > └─sda2                                                          8:2    0 556.5G  0 part
> >   ├─vg_gamma-lv_root (dm-0)                                   253:0    0    50G  0 lvm   /
> >   ├─vg_gamma-lv_swap (dm-1)                                   253:1    0     4G  0 lvm   [SWAP]
> >   └─vg_gamma-lv_home (dm-2)                                   253:2    0 502.4G  0 lvm   /home
> > sr0                                                            11:0    1  1024M  0 rom
> > sdb                                                             8:16   0     2T  0 disk
> > └─36589cfc00881b9b93c2623780840 (dm-4)                        253:4    0     2T  0 mpath
> > sdc                                                             8:32   0     2T  0 disk
> > └─36589cfc0050564002c7e51978316 (dm-3)                        253:3    0     2T  0 mpath
> >   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-metadata (dm-5)  253:5    0   512M  0 lvm
> >   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-outbox (dm-6)    253:6    0   128M  0 lvm
> >   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-leases (dm-7)    253:7    0     2G  0 lvm
> >   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-ids (dm-8)       253:8    0   128M  0 lvm
> >   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-inbox (dm-9)     253:9    0   128M  0 lvm
> >   └─3ccb7b67--8067--4315--9656--d68ba10975ba-master (dm-10)   253:10   0     1G  0 lvm
> > sdd                                                             8:48   0     4T  0 disk
> > └─36589cfc0059ccab70662b71c47ef (dm-11)                       253:11   0     4T  0 mpath
> >   ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-metadata (dm-12) 253:12   0   512M  0 lvm
> >   ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-ids (dm-13)      253:13   0   128M  0 lvm
> >   ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-leases (dm-14)   253:14   0     2G  0 lvm
> >   ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-outbox (dm-15)   253:15   0   128M  0 lvm
> >   ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-inbox (dm-16)    253:16   0   128M  0 lvm
> >   └─4861322b--352f--41c6--890a--5cbf1c2c1f01-master (dm-17)   253:17   0     1G  0 lvm
>
> So you have 2 storage domains:
>
> - 3ccb7b67-8067-4315-9656-d68ba10975ba
> - 4861322b-352f-41c6-890a-5cbf1c2c1f01
>
> But most likely neither of them is active now.
>
> Can you share the output of:
>
> iscsiadm -m session
>
> On a system connected to iscsi storage you will see something like:
>
> # iscsiadm -m session
> tcp: [5] 10.35.0.99:3260,1 iqn.2003-01.org.dumbo.target1 (non-flash)
>
> The special lvs (ids, leases, ...) should be active, and you should also
> see the regular disk lvs (used for vm disks and their snapshots).
>
> Here is an example from a machine connected to an active iscsi domain:
>
> # lvs
>   LV                                   VG                                   Attr   LSize
>   27c4c795-bca4-4d7b-9b40-cda9098790f5 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi---   1.00g
>   35be1f52-5b28-4c90-957a-710dbbb8f13f 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi---   8.00g
>   36d9b41b-4b01-4fc2-8e93-ccf79af0f766 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi--- 128.00m
>   4fda3b44-27a5-4ce4-b8c3-66744aa9937b 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi---   2.12g
>   c2e78f72-d499-44f0-91f5-9930a599dc87 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi--- 128.00m
>   d49919b4-30fc-440f-9b21-3367ddfdf396 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi---   4.00g
>   f3b10280-43ed-4772-b122-18c92e098171 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi---   1.00g
>   f409cc48-8248-4239-a4ea-66b0b1084416 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi---   1.00g
>   ids                                  5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-ao 128.00m
>   inbox                                5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a- 128.00m
>   leases                               5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a-   2.00g
>   master                               5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a-   1.00g
>   metadata                             5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a- 512.00m
>   outbox                               5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a- 128.00m
>
>
>
> > [root@gamma ~]#
> >
> > Regards,
> >
> > --
> > Fernando Fuentes
> > ffuen...@txweather.org
> > http://www.txweather.org
> >
> > On Mon, Jul 18, 2016, at 07:43 AM, Nir Soffer wrote:
> >> Can you share output of lsblk on this host?
> >>
> >> On Mon, Jul 18, 2016 at 3:52 AM, Fernando Fuentes
> >> 
> >> wrote:
> >> > Nir,
> >> >
> >> > That's odd. gamma is my iscsi host, its in up state and it has
> >> > active
> >> > VM's.
> >> > What am I 

Re: [ovirt-users] deploying ovirt 3.6 engine on a glustered storage

2016-07-18 Thread Andy Michielsen
Hello Joop,

I have 3 servers with 64 GB of RAM and 2 TB of storage.

This sounds like a way to go. I will try, test and get back to you.

Kind regards.

Sent from my iPad

> On 18 Jul 2016, at 21:08, Joop wrote:
> 
>> On 18-7-2016 18:09, Andy Michielsen wrote:
>> Hello,
>> 
>>  That's my problem. I can set it up later but not at the moment. Once I get 
>> this first host up and running with the engine on a glusterfs volume I can 
>> decommission 2 other servers and add those to this setup.
> I have an idea how to get this going but you'll need to test this really well.
> You can make a replica 3 volume with all bricks on the same server and then, 
> when one of your other servers is available, move one brick to it, and when 
> all is healed move the next one to it.
> 
> Now, I'll probably get publicly whipped for this idea so I'm going to say it 
> again, test, test, did I say test it?
> 
> How much storage are you talking about?
> 
> Regards,
> 
> Joop
> 
> PS: are you on ovirt-irc?
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] centos 7.1 and up & ixgbe

2016-07-18 Thread Douglas Schilling Landgraf

Hi Johan,


On 07/18/2016 09:53 AM, Johan Kooijman wrote:

Hi Jeff,

was the issue ever resolved? Don't have permissions to view the bugzilla.


There are proposed patches in the bugzilla; I have requested more 
information about the upstream status.

As soon as I have updates, I will reply here.

For now, if you have the hardware and want to run a test against our 
latest upstream builds, the job links are below:


ovirt-node 3.6:
http://jenkins.ovirt.org/job/ovirt-node_ovirt-3.6_create-iso-el7_merged/

ovirt-node 4.0 (next):
http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-4.0-snapshot_build-artifacts-fc23-x86_64/

Thanks!



On Thu, Mar 17, 2016 at 4:34 PM, Jeff Spahr > wrote:


I had the same issue, and I also have a support case open.  They
referenced https://bugzilla.redhat.com/show_bug.cgi?id=1288237
which is private.  I didn't have any success getting that bugzilla
changed to public.  We couldn't keep waiting for the issue to be
fixed so we replaced the NICs with Broadcom/Qlogic that we knew
had no issues in other hosts.

On Thu, Mar 17, 2016 at 11:27 AM, Sigbjorn Lie
> wrote:

Hi,

Is this on CentOS/RHEL 7.2?

Log in as root and see if you can see any messages from ixgbe
about "tx queue hung" in dmesg. I currently have an open support
case for RHEL7.2 and the ixgbe driver, where there is a driver
issue causing the network adapter to reset continuously when
there is network traffic.


Regards,
Siggi



On Thu, March 17, 2016 12:52, Nir Soffer wrote:
> On Thu, Mar 17, 2016 at 10:49 AM, Johan Kooijman
> wrote:
>
>> Hi all,
>>
>>
>> Since we upgraded to the latest ovirt node running 7.2,
we're seeing that
>> nodes become unavailable after a while. It's running fine,
with a couple of VM's on it, until it
>> becomes non responsive. At that moment it doesn't even
respond to ICMP. It'll come back by
>> itself after a while, but oVirt fences the machine before
that time and restarts VM's elsewhere.
>>
>>
>> Engine tells me this message:
>>
>>
>> VDSM host09 command failed: Message timeout which can be
caused by
>> communication issues
>>
>> Is anyone else experiencing these issues with ixgbe
drivers? I'm running on
>> Intel X540-AT2 cards.
>>
>
> We will need engine and vdsm logs to understand this issue.
>
>
> Can you file a bug and attach ful logs?
>
>
> Nir
> ___
> Users mailing list
> Users@ovirt.org 
> http://lists.ovirt.org/mailman/listinfo/users
>
>


___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users




--
Met vriendelijke groeten / With kind regards,
Johan Kooijman


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] disk not bootable

2016-07-18 Thread Nir Soffer
On Mon, Jul 18, 2016 at 6:48 PM, Fernando Fuentes 
wrote:
> Nir,
>
> As requested:
>
> [root@gamma ~]# lsblk
> NAME                                                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
> sda                                                             8:0    0   557G  0 disk
> ├─sda1                                                          8:1    0   500M  0 part  /boot
> └─sda2                                                          8:2    0 556.5G  0 part
>   ├─vg_gamma-lv_root (dm-0)                                   253:0    0    50G  0 lvm   /
>   ├─vg_gamma-lv_swap (dm-1)                                   253:1    0     4G  0 lvm   [SWAP]
>   └─vg_gamma-lv_home (dm-2)                                   253:2    0 502.4G  0 lvm   /home
> sr0                                                            11:0    1  1024M  0 rom
> sdb                                                             8:16   0     2T  0 disk
> └─36589cfc00881b9b93c2623780840 (dm-4)                        253:4    0     2T  0 mpath
> sdc                                                             8:32   0     2T  0 disk
> └─36589cfc0050564002c7e51978316 (dm-3)                        253:3    0     2T  0 mpath
>   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-metadata (dm-5)  253:5    0   512M  0 lvm
>   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-outbox (dm-6)    253:6    0   128M  0 lvm
>   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-leases (dm-7)    253:7    0     2G  0 lvm
>   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-ids (dm-8)       253:8    0   128M  0 lvm
>   ├─3ccb7b67--8067--4315--9656--d68ba10975ba-inbox (dm-9)     253:9    0   128M  0 lvm
>   └─3ccb7b67--8067--4315--9656--d68ba10975ba-master (dm-10)   253:10   0     1G  0 lvm
> sdd                                                             8:48   0     4T  0 disk
> └─36589cfc0059ccab70662b71c47ef (dm-11)                       253:11   0     4T  0 mpath
>   ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-metadata (dm-12) 253:12   0   512M  0 lvm
>   ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-ids (dm-13)      253:13   0   128M  0 lvm
>   ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-leases (dm-14)   253:14   0     2G  0 lvm
>   ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-outbox (dm-15)   253:15   0   128M  0 lvm
>   ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-inbox (dm-16)    253:16   0   128M  0 lvm
>   └─4861322b--352f--41c6--890a--5cbf1c2c1f01-master (dm-17)   253:17   0     1G  0 lvm

So you have 2 storage domains:

- 3ccb7b67-8067-4315-9656-d68ba10975ba
- 4861322b-352f-41c6-890a-5cbf1c2c1f01

But most likely neither of them is active now.

Can you share the output of:

iscsiadm -m session

On a system connected to iscsi storage you will see something like:

# iscsiadm -m session
tcp: [5] 10.35.0.99:3260,1 iqn.2003-01.org.dumbo.target1 (non-flash)

The special lvs (ids, leases, ...) should be active, and you should also see
the regular disk lvs (used for vm disks and their snapshots).

Here is an example from a machine connected to an active iscsi domain:

# lvs
  LV                                   VG                                   Attr   LSize
  27c4c795-bca4-4d7b-9b40-cda9098790f5 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi---   1.00g
  35be1f52-5b28-4c90-957a-710dbbb8f13f 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi---   8.00g
  36d9b41b-4b01-4fc2-8e93-ccf79af0f766 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi--- 128.00m
  4fda3b44-27a5-4ce4-b8c3-66744aa9937b 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi---   2.12g
  c2e78f72-d499-44f0-91f5-9930a599dc87 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi--- 128.00m
  d49919b4-30fc-440f-9b21-3367ddfdf396 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi---   4.00g
  f3b10280-43ed-4772-b122-18c92e098171 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi---   1.00g
  f409cc48-8248-4239-a4ea-66b0b1084416 5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi---   1.00g
  ids                                  5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-ao 128.00m
  inbox                                5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a- 128.00m
  leases                               5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a-   2.00g
  master                               5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a-   1.00g
  metadata                             5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a- 512.00m
  outbox                               5f35b5c0-17d7-4475-9125-e97f1cdb06f9 -wi-a- 128.00m



> [root@gamma ~]#
>
> Regards,
>
> --
> Fernando Fuentes
> ffuen...@txweather.org
> http://www.txweather.org
>
> On Mon, Jul 18, 2016, at 07:43 AM, Nir Soffer wrote:
>> Can you share output of lsblk on this host?
>>
>> On Mon, Jul 18, 2016 at 3:52 AM, Fernando Fuentes 
>> wrote:
>> > Nir,
>> >
>> > That's odd. gamma is my iscsi host, its in up state and it has active
>> > VM's.
>> > What am I missing?
>> >
>> > Regards,

Re: [ovirt-users] deploying ovirt 3.6 engine on a glustered storage

2016-07-18 Thread Joop
On 18-7-2016 18:09, Andy Michielsen wrote:
> Hello,
>
>  That's my problem. I can set it up later but not at the moment. Once
> I get this first host up and running with the engine on a glusterfs volume I
> can decommission 2 other servers and add those to this setup.
>
I have an idea how to get this going but you'll need to test this really
well.
You can make a replica 3 volume with all bricks on the same server and then,
when one of your other servers is available, move one brick to it, and
when all is healed move the next one to it.
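
A rough sketch of the commands involved (hostnames and brick paths are
placeholders, and gluster needs "force" to accept all replicas on one host):

    # all three bricks on the single server that is available today
    gluster volume create engine replica 3 \
        server1:/gluster/engine/brick1 \
        server1:/gluster/engine/brick2 \
        server1:/gluster/engine/brick3 force
    gluster volume start engine

    # later, when server2 is ready, move one brick over and wait for
    # self-heal to finish before moving the next one
    gluster volume replace-brick engine \
        server1:/gluster/engine/brick2 server2:/gluster/engine/brick2 \
        commit force
    gluster volume heal engine info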

Now, I'll probably get publicly whipped for this idea so I'm going to
say it again, test, test, did I say test it?

How much storage are you talking about?

Regards,

Joop

PS: are you on ovirt-irc?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] API endpoint?

2016-07-18 Thread Alexander Wels
On Monday, July 18, 2016 03:39:56 PM Gervais de Montbrun wrote:
> Hi Folks,
> 
> Has the api endpoint moved?
> 
> I am having issues with my nagios check and also with ovirt-shell. Both are
> trying to hit the API endpoint at https://myhostedengine.mydomain/api
> and both are returning a 404 error.
> 
> Cheers,
> Gervais

Yes, it's /ovirt-engine/api now.
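
A quick way to check from the command line (substitute your own engine host
and credentials; this is just an illustration):

    # old path, which now returns 404
    curl -k -u admin@internal:password https://myhostedengine.mydomain/api
    # new path
    curl -k -u admin@internal:password https://myhostedengine.mydomain/ovirt-engine/api

ovirt-shell takes the new URL through its -l/--url option, and the nagios
check just needs its URL updated the same way.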
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] API endpoint?

2016-07-18 Thread Gervais de Montbrun
Hi Folks,

Has the api endpoint moved?

I am having issues with my nagios check and also with ovirt-shell. Both are 
trying to hit the API endpoint at https://myhostedengine.mydomain/api 
and both are returning a 404 error.

Cheers,
Gervais



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] deploying ovirt 3.6 engine on a glustered storage

2016-07-18 Thread Andy Michielsen
Hello,

 That's my problem. I can set it up later but not at the moment. Once I get 
this first host up and running with the engine on a glusterfs volume I can 
decommission 2 other servers and add those to this setup.

Kind regards

Sent from my iPad

> On 18 Jul 2016, at 13:10, knarra wrote:
> 
>> On 07/18/2016 03:43 PM, knarra wrote:
>>> On 07/17/2016 02:14 PM, Andy Michielsen wrote: 
>>> Hello, 
>>> 
>>> I have the gluster shares set up nd downloaded the appliance. Is there an 
>>> other way deploying it than running hosted-engine deploy ? 
>>> 
>>> Kind regards. 
>>> 
>>> Sent from my iPad
>> Hi, 
>> 
>> Do you have replica 3 volume created ? If not you need to create the 
>> volume. 
>> 
>> Thanks 
>> kasturi.
> Blog [1] has all the steps to setup. Hope this helps !!
> 
> [1] 
> http://blogs-ramesh.blogspot.in/2016/01/ovirt-and-gluster-hyperconvergence.html?m=1
> 
> 
> 
>>> 
 On 16 Jul 2016, at 18:04, Darrell Budic wrote: 
 
 You won’t be able to use the engine-setup for the whole thing, but if you 
 set up the gluster share by hand, you should be able to use it for the 
 hosted engine setup. 
 
 
> On Jul 16, 2016, at 9:57 AM, Andy Michielsen  
> wrote: 
> 
> Hello, 
> 
> I'm trying to install a new oVirt enviroment at my company but I'm having 
> troubles installing the engine on the /gluster/engine/brick as it is not 
> in replica 3 mode. 
> 
> I want to install it first on 1 host as I have only 1 host available at 
> the moment. Afterwards I will add 2 additional host but I need to get 
> past this replica 3 requirement first now. 
> 
> How can I accomplish this ? 
> 
> Kind regards 
> ___ 
> Users mailing list 
> Users@ovirt.org 
> http://lists.ovirt.org/mailman/listinfo/users
>>> ___ 
>>> Users mailing list 
>>> Users@ovirt.org 
>>> http://lists.ovirt.org/mailman/listinfo/users
>> 
>> 
>> ___ 
>> Users mailing list 
>> Users@ovirt.org 
>> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] disk not bootable

2016-07-18 Thread Fernando Fuentes
Nir,

As requested:

[root@gamma ~]# lsblk
NAME                                                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                                             8:0    0   557G  0 disk
├─sda1                                                          8:1    0   500M  0 part  /boot
└─sda2                                                          8:2    0 556.5G  0 part
  ├─vg_gamma-lv_root (dm-0)                                   253:0    0    50G  0 lvm   /
  ├─vg_gamma-lv_swap (dm-1)                                   253:1    0     4G  0 lvm   [SWAP]
  └─vg_gamma-lv_home (dm-2)                                   253:2    0 502.4G  0 lvm   /home
sr0                                                            11:0    1  1024M  0 rom
sdb                                                             8:16   0     2T  0 disk
└─36589cfc00881b9b93c2623780840 (dm-4)                        253:4    0     2T  0 mpath
sdc                                                             8:32   0     2T  0 disk
└─36589cfc0050564002c7e51978316 (dm-3)                        253:3    0     2T  0 mpath
  ├─3ccb7b67--8067--4315--9656--d68ba10975ba-metadata (dm-5)  253:5    0   512M  0 lvm
  ├─3ccb7b67--8067--4315--9656--d68ba10975ba-outbox (dm-6)    253:6    0   128M  0 lvm
  ├─3ccb7b67--8067--4315--9656--d68ba10975ba-leases (dm-7)    253:7    0     2G  0 lvm
  ├─3ccb7b67--8067--4315--9656--d68ba10975ba-ids (dm-8)       253:8    0   128M  0 lvm
  ├─3ccb7b67--8067--4315--9656--d68ba10975ba-inbox (dm-9)     253:9    0   128M  0 lvm
  └─3ccb7b67--8067--4315--9656--d68ba10975ba-master (dm-10)   253:10   0     1G  0 lvm
sdd                                                             8:48   0     4T  0 disk
└─36589cfc0059ccab70662b71c47ef (dm-11)                       253:11   0     4T  0 mpath
  ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-metadata (dm-12) 253:12   0   512M  0 lvm
  ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-ids (dm-13)      253:13   0   128M  0 lvm
  ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-leases (dm-14)   253:14   0     2G  0 lvm
  ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-outbox (dm-15)   253:15   0   128M  0 lvm
  ├─4861322b--352f--41c6--890a--5cbf1c2c1f01-inbox (dm-16)    253:16   0   128M  0 lvm
  └─4861322b--352f--41c6--890a--5cbf1c2c1f01-master (dm-17)   253:17   0     1G  0 lvm
[root@gamma ~]# 

Regards,

-- 
Fernando Fuentes
ffuen...@txweather.org
http://www.txweather.org

On Mon, Jul 18, 2016, at 07:43 AM, Nir Soffer wrote:
> Can you share output of lsblk on this host?
> 
> On Mon, Jul 18, 2016 at 3:52 AM, Fernando Fuentes 
> wrote:
> > Nir,
> >
> > That's odd. gamma is my iscsi host, its in up state and it has active
> > VM's.
> > What am I missing?
> >
> > Regards,
> >
> > --
> > Fernando Fuentes
> > ffuen...@txweather.org
> > http://www.txweather.org
> >
> > On Sun, Jul 17, 2016, at 07:24 PM, Nir Soffer wrote:
> >> On Sun, Jul 17, 2016 at 1:24 AM, Fernando Fuentes 
> >> wrote:
> >> > Nir,
> >> >
> >> > Ok I got the uuid but I am getting the same results as before.
> >> > Nothing comes up.
> >> >
> >> > [root@gamma ~]# pvscan --cache
> >> > [root@gamma ~]# lvs -o vg_name,lv_name,tags | grep
> >> > 3b7d9349-9eb1-42f8-9e04-7bbb97c02b98
> >> > [root@gamma ~]#
> >> >
> >> > without the grep all I get is:
> >> >
> >> > [root@gamma ~]# lvs -o vg_name,lv_name,tags
> >> >   VG   LV  LV Tags
> >> >   vg_gamma lv_home
> >> >   vg_gamma lv_root
> >> >   vg_gamma lv_swap
> >>
> >> You are not connected to the iscsi storage domain.
> >>
> >> Please try this from a host in up state in engine.
> >>
> >> Nir
> >>
> >> >
> >> > On the other hand an fdisk shows a bunch of disks and here is one
> >> > example:
> >> >
> >> > Disk /dev/mapper/36589cfc0050564002c7e51978316: 2199.0 GB,
> >> > 219902322 bytes
> >> > 255 heads, 63 sectors/track, 267349 cylinders
> >> > Units = cylinders of 16065 * 512 = 8225280 bytes
> >> > Sector size (logical/physical): 512 bytes / 32768 bytes
> >> > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
> >> > Disk identifier: 0x
> >> >
> >> >
> >> > Disk /dev/mapper/36589cfc00881b9b93c2623780840: 2199.0 GB,
> >> > 219902322 bytes
> >> > 255 heads, 63 sectors/track, 267349 cylinders
> >> > Units = cylinders of 16065 * 512 = 8225280 bytes
> >> > Sector size (logical/physical): 512 bytes / 32768 bytes
> >> > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
> >> > Disk identifier: 0x
> >> >
> >> >
> >> > Disk /dev/mapper/3ccb7b67--8067--4315--9656--d68ba10975ba-metadata: 536
> >> > MB, 536870912 bytes
> >> > 255 heads, 63 sectors/track, 65 cylinders
> >> > Units = cylinders of 16065 * 512 = 8225280 bytes
> >> > Sector size (logical/physical): 512 bytes / 32768 bytes
> >> > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
> >> > Disk identifier: 0x
> >> >
> >> > Disk 

Re: [ovirt-users] centos 7.1 and up & ixgbe

2016-07-18 Thread Johan Kooijman
Hi Jeff,

was the issue ever resolved? Don't have permissions to view the bugzilla.

On Thu, Mar 17, 2016 at 4:34 PM, Jeff Spahr  wrote:

> I had the same issue, and I also have a support case open.  They
> referenced https://bugzilla.redhat.com/show_bug.cgi?id=1288237 which is
> private.  I didn't have any success getting that bugzilla changed to
> public.  We couldn't keep waiting for the issue to be fixed so we replaced
> the NICs with Broadcom/Qlogic that we knew had no issues in other hosts.
>
> On Thu, Mar 17, 2016 at 11:27 AM, Sigbjorn Lie 
> wrote:
>
>> Hi,
>>
>> Is this on CentOS/RHEL 7.2?
>>
>> Log in as root and see if you can see any messages from ixgbe about "tx
>> queue hung" in dmesg. I currently have an open support case for RHEL7.2
>> and the ixgbe driver, where there is a driver issue causing the network
>> adapter to reset continuously when there is network traffic.
>>
>>
>> Regards,
>> Siggi
>>
>>
>>
>> On Thu, March 17, 2016 12:52, Nir Soffer wrote:
>> > On Thu, Mar 17, 2016 at 10:49 AM, Johan Kooijman <
>> m...@johankooijman.com> wrote:
>> >
>> >> Hi all,
>> >>
>> >>
>> >> Since we upgraded to the latest ovirt node running 7.2, we're seeing
>> that
>> >> nodes become unavailable after a while. It's running fine, with a
>> couple of VM's on it, until it
>> >> becomes non responsive. At that moment it doesn't even respond to
>> ICMP. It'll come back by
>> >> itself after a while, but oVirt fences the machine before that time
>> and restarts VM's elsewhere.
>> >>
>> >>
>> >> Engine tells me this message:
>> >>
>> >>
>> >> VDSM host09 command failed: Message timeout which can be caused by
>> >> communication issues
>> >>
>> >> Is anyone else experiencing these issues with ixgbe drivers? I'm
>> running on
>> >> Intel X540-AT2 cards.
>> >>
>> >
>> > We will need engine and vdsm logs to understand this issue.
>> >
>> >
>> > Can you file a bug and attach ful logs?
>> >
>> >
>> > Nir
>> > ___
>> > Users mailing list
>> > Users@ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/users
>> >
>> >
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Met vriendelijke groeten / With kind regards,
Johan Kooijman
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] disk not bootable

2016-07-18 Thread Nir Soffer
Can you share output of lsblk on this host?

On Mon, Jul 18, 2016 at 3:52 AM, Fernando Fuentes  wrote:
> Nir,
>
> That's odd. gamma is my iscsi host, it's in up state and it has active
> VMs.
> What am I missing?
>
> Regards,
>
> --
> Fernando Fuentes
> ffuen...@txweather.org
> http://www.txweather.org
>
> On Sun, Jul 17, 2016, at 07:24 PM, Nir Soffer wrote:
>> On Sun, Jul 17, 2016 at 1:24 AM, Fernando Fuentes 
>> wrote:
>> > Nir,
>> >
>> > Ok I got the uuid but I am getting the same results as before.
>> > Nothing comes up.
>> >
>> > [root@gamma ~]# pvscan --cache
>> > [root@gamma ~]# lvs -o vg_name,lv_name,tags | grep
>> > 3b7d9349-9eb1-42f8-9e04-7bbb97c02b98
>> > [root@gamma ~]#
>> >
>> > without the grep all I get is:
>> >
>> > [root@gamma ~]# lvs -o vg_name,lv_name,tags
>> >   VG   LV  LV Tags
>> >   vg_gamma lv_home
>> >   vg_gamma lv_root
>> >   vg_gamma lv_swap
>>
>> You are not connected to the iscsi storage domain.
>>
>> Please try this from a host in up state in engine.
>>
>> Nir
>>
>> >
>> > On the other hand an fdisk shows a bunch of disks and here is one
>> > example:
>> >
>> > Disk /dev/mapper/36589cfc0050564002c7e51978316: 2199.0 GB,
>> > 219902322 bytes
>> > 255 heads, 63 sectors/track, 267349 cylinders
>> > Units = cylinders of 16065 * 512 = 8225280 bytes
>> > Sector size (logical/physical): 512 bytes / 32768 bytes
>> > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
>> > Disk identifier: 0x
>> >
>> >
>> > Disk /dev/mapper/36589cfc00881b9b93c2623780840: 2199.0 GB,
>> > 219902322 bytes
>> > 255 heads, 63 sectors/track, 267349 cylinders
>> > Units = cylinders of 16065 * 512 = 8225280 bytes
>> > Sector size (logical/physical): 512 bytes / 32768 bytes
>> > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
>> > Disk identifier: 0x
>> >
>> >
>> > Disk /dev/mapper/3ccb7b67--8067--4315--9656--d68ba10975ba-metadata: 536
>> > MB, 536870912 bytes
>> > 255 heads, 63 sectors/track, 65 cylinders
>> > Units = cylinders of 16065 * 512 = 8225280 bytes
>> > Sector size (logical/physical): 512 bytes / 32768 bytes
>> > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
>> > Disk identifier: 0x
>> >
>> > Disk /dev/mapper/4861322b--352f--41c6--890a--5cbf1c2c1f01-master: 1073
>> > MB, 1073741824 bytes
>> > 255 heads, 63 sectors/track, 130 cylinders
>> > Units = cylinders of 16065 * 512 = 8225280 bytes
>> > Sector size (logical/physical): 512 bytes / 32768 bytes
>> > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
>> > Disk identifier: 0x
>> >
>> > Regards,
>> >
>> > --
>> > Fernando Fuentes
>> > ffuen...@txweather.org
>> > http://www.txweather.org
>> >
>> > On Sat, Jul 16, 2016, at 04:25 PM, Fernando Fuentes wrote:
>> >> Nir,
>> >>
>> >> Ok ill look for it here in a few.
>> >> Thanks for your reply and help!
>> >>
>> >> --
>> >> Fernando Fuentes
>> >> ffuen...@txweather.org
>> >> http://www.txweather.org
>> >>
>> >> On Sat, Jul 16, 2016, at 04:16 PM, Nir Soffer wrote:
>> >> > On Fri, Jul 15, 2016 at 3:50 PM, Fernando Fuentes 
>> >> > wrote:
>> >> > > Nir,
>> >> > >
>> >> > > I try to follow your steps but I cant seem to find the ID of the
>> >> > > template.
>> >> >
>> >> > The image-uuid of the template is displayed in the Disks tab in engine.
>> >> >
>> >> > To find the volume-uuid on block storage, you can do:
>> >> >
>> >> > pvscan --cache
>> >> > lvs -o vg_name,lv_name,tags | grep image-uuid
>> >> >
>> >> > >
>> >> > > Regards,
>> >> > >
>> >> > > --
>> >> > > Fernando Fuentes
>> >> > > ffuen...@txweather.org
>> >> > > http://www.txweather.org
>> >> > >
>> >> > > On Sun, Jul 10, 2016, at 02:15 PM, Nir Soffer wrote:
>> >> > >> On Thu, Jul 7, 2016 at 7:46 PM, Melissa Mesler 
>> >> > >> 
>> >> > >> wrote:
>> >> > >> > All, I did a test for Fernando in our ovirt environment. I created 
>> >> > >> > a vm
>> >> > >> > called win7melly in the nfs domain. I then migrated it to the iscsi
>> >> > >> > domain. It booted without any issue. So it has to be something 
>> >> > >> > with the
>> >> > >> > templates. I have attached the vdsm log for the host the vm 
>> >> > >> > resides on.
>> >> > >>
>> >> > >> The log show a working vm, so it does not help much.
>> >> > >>
>> >> > >> I think that the template you copied from the nfs domain to the block
>> >> > >> domain is
>> >> > >> corrupted, or the volume metadata are incorrect.
>> >> > >>
>> >> > >> If I understand this correctly, this started when Fernando could not 
>> >> > >> copy
>> >> > >> the vm
>> >> > >> disk to the block storage, and I guess the issue was that the 
>> >> > >> template
>> >> > >> was missing
>> >> > >> on that storage domain. I assume that he copied the template to the
>> >> > >> block storage
>> >> > >> domain by opening the templates tab, selecting the template, and 
>> >> > >> choosing
>> >> > >> copy
>> >> > >> from the menu.
>> >> > >>
>> >> > >> Lets compare the 

Re: [ovirt-users] Hosted Engine 4.0 randomly stopping

2016-07-18 Thread Artyom Lukianov
Can you please provide agent.log from /var/log/ovirt-hosted-engine-ha?

On Mon, Jul 18, 2016 at 3:24 PM, Matt .  wrote:

> Hi,
>
> I see an odd behaviour on the 4.0 hosted engine: it randomly stops
> running and needs to be started again.
>
> What is the solution for this? There seem to be some settings that can be
> made for the HA broker which will fix it?
>
> Any details are welcome.
>
> Thanks,
>
> Matt
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hosted Engine 4.0 randomly stopping

2016-07-18 Thread Matt .
Hi,

I see an odd behaviour on the 4.0 hosted engine: it randomly stops
running and needs to be started again.

What is the solution for this? There seem to be some settings that can be
made for the HA broker which will fix it?

Any details are welcome.

Thanks,

Matt
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] CARP Fails on Bond mode=1

2016-07-18 Thread Jorick Astrego

Hi,

I tried sending this last week, but somehow the list blackholes my 
messages

You could try to add " net.inet.carp.drop_echoed=1" to pfsense in 
/etc/sysctl.conf ?

It is an old fix for VMWare and FreeBSD. I am not able to test it at the 
moment but I can see it's not in the config of the latest version of 
PFSense.

Or maybe better:

https://doc.pfsense.org/index.php/CARP_Configuration_Troubleshooting

Client Port Issues

If a physical CARP cluster is connected to a switch with an ESX box
using multiple ports on the ESX box (lagg group or similar), and
only certain devices/IPs are reachable by the target VM, then the
port group settings in ESX may need adjusted to set the load
balancing for the group to hash based on IP, not the originating
interface.

Side effects of having that set incorrectly include:

  * Traffic only reaching the target VM in promisc mode on its NIC
  * Inability to reach the CARP IP from the target VM when the
"real" IP of the primary firewall is reachable
  * Port forwards or other inbound connections to the target VM work
from some IPs and not others.


So you could try with bonding option "xmit_hash_policy=layer2+3" and see 
if that helps...
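
For reference, a minimal sketch of where that option would go if you manage
the bond by hand; device names are only examples, and on oVirt hosts the
same string normally goes into the custom bonding options field of "Setup
Host Networks" rather than a hand-edited file:

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    ONBOOT=yes
    BOOTPROTO=none
    BONDING_OPTS="mode=4 miimon=100 xmit_hash_policy=layer2+3"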

Kind regards,
Jorick Astrego


On 07/13/2016 03:59 PM, Matt . wrote:
> As addition: I get the same result using mode=4, only when I use
> multiple VLANS on the interface.
>
> 2016-07-13 15:58 GMT+02:00 Matt .:
>> Hi Pavel,
>>
>> Thanks for your update. I also saw that the ports are both online but I
>> thought the second nic only advertises the mac so the switch does not
>> get confused.
>>
>> The issue might be that i do VRRP, so the bond is connected to two
>> switches, they are not stacked, only trunked as that's what VRRP
>> requires and works well on the side where there is only one VLAN on
>> the Host interface.
>>
>> It just goes wrong on multiple vlans.
>>
>> This is what I see everywhere.
>>
>> Mode 1 (active-backup)
>> This mode places one of the interfaces into a backup state and will
>> only make it active if the link is lost by the active interface. Only
>> one slave in the bond is active at an instance of time. A different
>> slave becomes active only when the active slave fails. This mode
>> provides fault tolerance.
>>
>> It's sure I need to get my traffic back on my sending port, so that is
>> why the arp for the passive port was there I thought.
>>
>> Are there other modes that should be working on VRRP in your understanding ?
>>
>> Thanks a lot,
>>
>> Matt
>>
>>
>>
>> 2016-07-13 15:43 GMT+02:00 Pavel Gashev:
>>> In mode=1 the active interface sends traffic, but both interfaces accept 
>>> incoming traffic. Hardware switches send broadcast/multicast/unknown 
>>> destination MACs to all ports, including the passive interface. So packet 
>>> sent from the active interface can be received back from the passive 
>>> interface. FreeBSD CARP just would go mad when it receives its own packets.
>>>
>>> I believe if you get Linux implementation, it will work well in the same 
>>> network setup. I use keepalived in oVirt VMs with bonded network, and have 
>>> no issues.
>>>
>>> -Original Message-
>>> From: "Matt ."
>>> Date: Wednesday 13 July 2016 at 15:54
>>> To: Pavel Gashev, users
>>> Subject: Re: [ovirt-users] CARP Fails on Bond mode=1
>>>
>>> How can it lead into packet duplication when the passive should not be
>>> active and only it's mac-address should be visible on the switch to
>>> prevent confusion on the switch ?
>>>
>>> For a VRRP setup on the switch there is no other option then mode=1 as
>>> far as I know ?
>>>
>>> 2016-07-13 14:50 GMT+02:00 Pavel Gashev:
 I would say that bonding breaks CARP somehow. In example mode=1 can lead 
 to packet duplication, so pfsense can receive it's own packets. Try 
 firewall in pfsense all incomming packets that have the same source MAC 
 address as pfsense.

 -Original Message-
 From: "Matt ."
 Date: Wednesday 13 July 2016 at 15:29
 To: Pavel Gashev
 Subject: Re: [ovirt-users] CARP Fails on Bond mode=1

 Hi Pavel,

 No it's Pfsense, so FreeBSD.

 Is there something different there ?



 2016-07-13 13:59 GMT+02:00 Pavel Gashev:
> Matt,
>
> How is CARP implemented? Is it OpenBSD?
>
> -Original Message-
> From:  on behalf of "Matt 
> ."
> Date: Wednesday 13 July 2016 at 12:42
> Cc: users
> Subject: Re: [ovirt-users] CARP Fails on Bond mode=1
>
> Hi Pavel,
>
> This is done and used without the Bond before.
>
> Now I applied a bond it goes wrong and I'm searching but can't find a
> thing about it.
>
>

Re: [ovirt-users] Debian linux and oVirt SSO

2016-07-18 Thread Tadas

The oVirt agent stops on this line and the code below it is not executed:

https://github.com/oVirt/ovirt-guest-agent/blob/master/ovirt-guest-agent/CredServer.py#L147



On Mon, 2016-07-18 at 14:12 +0300, Tadas wrote:
> This is really interesting.
> pam-ovirt-cred is randomly failing on one of two checks:
> 
> https://github.com/oVirt/ovirt-guest-agent/blob/master/pam-ovirt-cred/cred_channel.c#L107
> 
> and
> 
> https://github.com/oVirt/ovirt-guest-agent/blob/master/pam-ovirt-cred/cred_channel.c#L134
> 
> There's no pattern to which step it will fail on. Sometimes it fails on
> writing to the socket, sometimes on reading:
> 
> Jul 18 14:11:02 desktop64 cred-debug: recv() failed
> Jul 18 14:11:14 desktop64 cred-debug: send() failed
> Jul 18 14:11:18 desktop64 cred-debug: recv() failed
> Jul 18 14:11:23 desktop64 cred-debug: recv() failed
> Jul 18 14:11:28 desktop64 cred-debug: send() failed
> Jul 18 14:11:33 desktop64 cred-debug: recv() failed
>
> On Mon, 2016-07-18 at 09:51 +0300, Tadas wrote:
> > After moving to gdm, I've managed to solve the timeout issue. Now i
> > bumped into another one:
> > oVirt agent seem to emit credentials without error:
> > 
> > Dummy-1::DEBUG::2016-07-18
> > 09:29:53,293::OVirtAgentLogic::304::root::User log-in (credentials
> > =
> > '\x00\x00\x00\x04test\x00')
> > Dummy-1::INFO::2016-07-18 09:29:53,293::CredServer::207::root::The
> > following users are allowed to connect: [0]
> > Dummy-1::DEBUG::2016-07-18
> > 09:29:53,294::CredServer::272::root::Token:
> > 250954
> > Dummy-1::INFO::2016-07-18
> > 09:29:53,294::CredServer::273::root::Opening
> > credentials channel...
> > Dummy-1::INFO::2016-07-18
> > 09:29:53,294::CredServer::132::root::Emitting
> > user authenticated signal (250954).
> > Dummy-1::INFO::2016-07-18
> > 09:29:53,349::CredServer::277::root::Credentials channel was
> > closed.
> > 
> > But pam module is failing:
> > gdm-ovirtcred]: pam_ovirt_cred(gdm-ovirtcred:auth): Failed to
> > acquire
> > user's credentials
> > 
> > After poking a bit I've managed to find, that module fails on:
> > 
> >     if (ret == -1) {
> > D(("send() failed."));
> > return -1;
> > }
> > 
> > in cred_channel.c
> > 
> > 
> > Also, i have to mention, that there's no /etc/pamd/password-auth
> > file
> > in Debian Linux. I've copied it from Centos (it is needed by gdm-
> > ovirtcred.pam)
> > > > ___
> > > > Users mailing list
> > > > Users@ovirt.org
> > > > http://lists.ovirt.org/mailman/listinfo/users
> > > 
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] CARP Fails on Bond mode=1

2016-07-18 Thread Jorick Astrego

Hi,

You could try to add " net.inet.carp.drop_echoed=1" to pfsense in 
/etc/sysctl.conf ?

It is an old fix for VMWare and FreeBSD. I am not able to test it at the 
moment but I can see it's not in the config of the latest version of 
PFSense.

Or maybe better:

https://doc.pfsense.org/index.php/CARP_Configuration_Troubleshooting


Client Port Issues

If a physical CARP cluster is connected to a switch with an ESX box
using multiple ports on the ESX box (lagg group or similar), and
only certain devices/IPs are reachable by the target VM, then the
port group settings in ESX may need adjusted to set the load
balancing for the group to hash based on IP, not the originating
interface.

Side effects of having that set incorrectly include:

  * Traffic only reaching the target VM in promisc mode on its NIC
  * Inability to reach the CARP IP from the target VM when the
"real" IP of the primary firewall is reachable
  * Port forwards or other inbound connections to the target VM work
from some IPs and not others.


So you could try with bonding option "xmit_hash_policy=layer2+3" and see 
if that helps...

Kind regards,
Jorick Astrego


On 07/13/2016 03:59 PM, Matt . wrote:
> As addition: I get the same result using mode=4, only when I use
> multiple VLANS on the interface.
>
> 2016-07-13 15:58 GMT+02:00 Matt .:
>> Hi Pavel,
>>
>> Thanks for your update. I also saw that the ports are both online but I
>> thought the second nic only advertises the mac so the switch does not
>> get confused.
>>
>> The issue might be that i do VRRP, so the bond is connected to two
>> switches, they are not stacked, only trunked as that's what VRRP
>> requires and works well on the side where there is only one VLAN on
>> the Host interface.
>>
>> It just goes wrong on multiple vlans.
>>
>> This is what I see everywhere.
>>
>> Mode 1 (active-backup)
>> This mode places one of the interfaces into a backup state and will
>> only make it active if the link is lost by the active interface. Only
>> one slave in the bond is active at an instance of time. A different
>> slave becomes active only when the active slave fails. This mode
>> provides fault tolerance.
>>
>> It's sure I need to get my traffic back on my sending port, so that is
>> why the arp for the passive port was there I thought.
>>
>> Are there other modes that should be working on VRRP in your understanding ?
>>
>> Thanks a lot,
>>
>> Matt
>>
>>
>>
>> 2016-07-13 15:43 GMT+02:00 Pavel Gashev:
>>> In mode=1 the active interface sends traffic, but both interfaces accept 
>>> incoming traffic. Hardware switches send broadcast/multicast/unknown 
>>> destination MACs to all ports, including the passive interface. So packet 
>>> sent from the active interface can be received back from the passive 
>>> interface. FreeBSD CARP just would go mad when it receives its own packets.
>>>
>>> I believe if you get Linux implementation, it will work well in the same 
>>> network setup. I use keepalived in oVirt VMs with bonded network, and have 
>>> no issues.
>>>
>>> -Original Message-
>>> From: "Matt ."
>>> Date: Wednesday 13 July 2016 at 15:54
>>> To: Pavel Gashev, users
>>> Subject: Re: [ovirt-users] CARP Fails on Bond mode=1
>>>
>>> How can it lead into packet duplication when the passive should not be
>>> active and only it's mac-address should be visible on the switch to
>>> prevent confusion on the switch ?
>>>
>>> For a VRRP setup on the switch there is no other option then mode=1 as
>>> far as I know ?
>>>
>>> 2016-07-13 14:50 GMT+02:00 Pavel Gashev:
 I would say that bonding breaks CARP somehow. In example mode=1 can lead 
 to packet duplication, so pfsense can receive it's own packets. Try 
 firewall in pfsense all incomming packets that have the same source MAC 
 address as pfsense.

 -Original Message-
 From: "Matt ."
 Date: Wednesday 13 July 2016 at 15:29
 To: Pavel Gashev
 Subject: Re: [ovirt-users] CARP Fails on Bond mode=1

 Hi Pavel,

 No it's Pfsense, so FreeBSD.

 Is there something different there ?



 2016-07-13 13:59 GMT+02:00 Pavel Gashev:
> Matt,
>
> How is CARP implemented? Is it OpenBSD?
>
> -Original Message-
> From:  on behalf of "Matt 
> ."
> Date: Wednesday 13 July 2016 at 12:42
> Cc: users
> Subject: Re: [ovirt-users] CARP Fails on Bond mode=1
>
> Hi Pavel,
>
> This is done and used without the Bond before.
>
> Now I applied a bond it goes wrong and I'm searching but can't find a
> thing about it.
>
>
>
> 2016-07-13 11:03 GMT+02:00 Pavel Gashev:

Re: [ovirt-users] ovirt-engine-4.0.6 Network Information

2016-07-18 Thread Petr Horacek
Hello,

Cloud-Init is executed only during the first boot. It won't change your
settings if you use it later.
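
If you need an existing guest to pick up new settings, one workaround (a
sketch, assuming the default /var/lib/cloud layout inside the guest) is to
clear the per-instance state so the next boot is treated as a first boot
again:

    rm -rf /var/lib/cloud/instances/* /var/lib/cloud/instance
    reboot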

Regards,
Petr

2016-07-15 12:22 GMT+02:00 转圈圈 <313922...@qq.com>:

> Network information is set up in the interface, but the information in
> the virtual machine is not changed. Why is that?
>
>
> On the interface:
>
>
> On a virtual machine:
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Debian linux and oVirt SSO

2016-07-18 Thread Tadas
This is really interesting.
pam-ovirt-cred is randomly failing on one of two checks:

https://github.com/oVirt/ovirt-guest-agent/blob/master/pam-ovirt-cred/cred_channel.c#L107

and

https://github.com/oVirt/ovirt-guest-agent/blob/master/pam-ovirt-cred/cred_channel.c#L134

There's no pattern to which step it will fail on. Sometimes it fails on
writing to the socket, sometimes on reading:

Jul 18 14:11:02 desktop64 cred-debug: recv() failed
Jul 18 14:11:14 desktop64 cred-debug: send() failed
Jul 18 14:11:18 desktop64 cred-debug: recv() failed
Jul 18 14:11:23 desktop64 cred-debug: recv() failed
Jul 18 14:11:28 desktop64 cred-debug: send() failed
Jul 18 14:11:33 desktop64 cred-debug: recv() failed

On Mon, 2016-07-18 at 09:51 +0300, Tadas wrote:
> After moving to gdm, I've managed to solve the timeout issue. Now i
> bumped into another one:
> oVirt agent seem to emit credentials without error:
> 
> Dummy-1::DEBUG::2016-07-18
> 09:29:53,293::OVirtAgentLogic::304::root::User log-in (credentials =
> '\x00\x00\x00\x04test\x00')
> Dummy-1::INFO::2016-07-18 09:29:53,293::CredServer::207::root::The
> following users are allowed to connect: [0]
> Dummy-1::DEBUG::2016-07-18
> 09:29:53,294::CredServer::272::root::Token:
> 250954
> Dummy-1::INFO::2016-07-18
> 09:29:53,294::CredServer::273::root::Opening
> credentials channel...
> Dummy-1::INFO::2016-07-18
> 09:29:53,294::CredServer::132::root::Emitting
> user authenticated signal (250954).
> Dummy-1::INFO::2016-07-18
> 09:29:53,349::CredServer::277::root::Credentials channel was closed.
> 
> But pam module is failing:
> gdm-ovirtcred]: pam_ovirt_cred(gdm-ovirtcred:auth): Failed to acquire
> user's credentials
> 
> After poking a bit I've managed to find, that module fails on:
> 
>     if (ret == -1) {
> D(("send() failed."));
> return -1;
> }
> 
> in cred_channel.c
> 
> 
> Also, i have to mention, that there's no /etc/pamd/password-auth file
> in Debian Linux. I've copied it from Centos (it is needed by gdm-
> ovirtcred.pam)
> > > ___
> > > Users mailing list
> > > Users@ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/users
> > 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] deploying ovirt 3.6 engine on a glustered storage

2016-07-18 Thread knarra

On 07/18/2016 03:43 PM, knarra wrote:

On 07/17/2016 02:14 PM, Andy Michielsen wrote:

Hello,

I have the gluster shares set up and downloaded the appliance. Is 
there another way of deploying it than running hosted-engine deploy?


Kind regards.

Sent from my iPad

Hi,

Do you have replica 3 volume created ? If not you need to create 
the volume.


Thanks
kasturi.

Blog [1] has all the steps to setup. Hope this helps !!

[1] 
http://blogs-ramesh.blogspot.in/2016/01/ovirt-and-gluster-hyperconvergence.html?m=1






On 16 Jul 2016, at 18:04, Darrell Budic wrote:


You won’t be able to use the engine-setup for the whole thing, but 
if you set up the gluster share by hand, you should be able to use it 
for the hosted engine setup.



On Jul 16, 2016, at 9:57 AM, Andy Michielsen 
 wrote:


Hello,

I'm trying to install a new oVirt enviroment at my company but I'm 
having troubles installing the engine on the /gluster/engine/brick 
as it is not in replica 3 mode.


I want to install it first on 1 host as I have only 1 host 
available at the moment. Afterwards I will add 2 additional host 
but I need to get past this replica 3 requirement first now.


How can I accomplish this ?

Kind regards
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] deploying ovirt 3.6 engine on a glustered storage

2016-07-18 Thread knarra

On 07/17/2016 02:14 PM, Andy Michielsen wrote:

Hello,

I have the gluster shares set up and downloaded the appliance. Is there another 
way of deploying it than running hosted-engine deploy?

Kind regards.

Sent from my iPad

Hi,

Do you have replica 3 volume created ? If not you need to create 
the volume.


Thanks
kasturi.



On 16 Jul 2016, at 18:04, Darrell Budic wrote:

You won’t be able to use the engine-setup for the whole thing, but if you set up 
the gluster share by hand, you should be able to use it for the hosted engine 
setup.



On Jul 16, 2016, at 9:57 AM, Andy Michielsen  wrote:

Hello,

I'm trying to install a new oVirt environment at my company but I'm having 
trouble installing the engine on the /gluster/engine/brick as it is not in 
replica 3 mode.

I want to install it first on 1 host as I have only 1 host available at the 
moment. Afterwards I will add 2 additional hosts, but I need to get past this 
replica 3 requirement first.

How can I accomplish this ?

Kind regards
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] synchronize cache

2016-07-18 Thread Markus Scherer

Hi,

On a freshly installed Fedora Server 23 I got a "Failed to synchronize 
cache for repo 'ovirt-4.0'" error.
I have done a "dnf clean all", a "dnf check-update", and a reboot, but I 
always get the same problem.


thx for help
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Debian linux and oVirt SSO

2016-07-18 Thread Tadas
After moving to gdm, I've managed to solve the timeout issue. Now I've
bumped into another one:
The oVirt agent seems to emit credentials without error:

Dummy-1::DEBUG::2016-07-18
09:29:53,293::OVirtAgentLogic::304::root::User log-in (credentials =
'\x00\x00\x00\x04test\x00')
Dummy-1::INFO::2016-07-18 09:29:53,293::CredServer::207::root::The
following users are allowed to connect: [0]
Dummy-1::DEBUG::2016-07-18 09:29:53,294::CredServer::272::root::Token:
250954
Dummy-1::INFO::2016-07-18 09:29:53,294::CredServer::273::root::Opening
credentials channel...
Dummy-1::INFO::2016-07-18 09:29:53,294::CredServer::132::root::Emitting
user authenticated signal (250954).
Dummy-1::INFO::2016-07-18
09:29:53,349::CredServer::277::root::Credentials channel was closed.

But pam module is failing:
gdm-ovirtcred]: pam_ovirt_cred(gdm-ovirtcred:auth): Failed to acquire
user's credentials

After poking around a bit, I found that the module fails on:

    if (ret == -1) {
D(("send() failed."));
return -1;
}

in cred_channel.c


Also, I have to mention that there's no /etc/pam.d/password-auth file
in Debian Linux. I've copied it from CentOS (it is needed by gdm-
ovirtcred.pam).
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users