Re: [ovirt-users] Wishlist - Mix gluster and local storage in same data center

2015-11-07 Thread Nir Soffer
On Tue, Nov 3, 2015 at 7:48 AM, Liam Curtis  wrote:
>
> Any chance this will change? It is a severe limitation to not be able to use 
> local storage available to a host as that is often very fast storage.

This would couple the VM to that host, so you could not migrate it to
another host. If the host goes down, you cannot run the VM on any other
host, since its storage is gone.

Don't you think this is a severe limitation as well?

The current system is focused on high availability: all hosts must be
able to access all storage domains in the data center, so any VM can run
on any host in the data center.

If you don't need to migrate VMs or manage shared storage, you can use
virt-manager to run VMs on specific hosts.
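
For example, a minimal sketch of creating such a VM directly with
virt-install (the CLI companion of virt-manager) - all names, paths and
the bridge below are placeholders you would adjust:

  virt-install \
    --name local-vm1 \
    --ram 2048 --vcpus 2 \
    --disk path=/var/lib/libvirt/images/local-vm1.qcow2,size=20 \
    --cdrom /path/to/installer.iso \
    --network bridge=br0

Such a VM then lives only on that host's local disk, completely outside
oVirt's management.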

Nir

>
>
> On Sun, Nov 1, 2015 at 10:22 AM, Aharon Canan  wrote:
>>
>> No...
>>
>> When creating a DC you choose shared or local (gluster counts as shared).
>>
>> You can mix shared storage types (gluster/iSCSI/NFS etc.), but not local.
>>
>>
>>
>>
>> Regards,
>> __
>> Aharon Canan
>>
>> 
>>
>> From: "Liam Curtis" 
>> To: Users@ovirt.org
>> Sent: Sunday, November 1, 2015 5:17:58 PM
>> Subject: [ovirt-users] Wishlist - Mix gluster and local storage in same data center
>>
>> Hello all...
>>
>> Would like to be able to use both local storage and gluster within same host 
>> / data center.
>>
>> Wondering if this is something being worked on?
>>
>>
>>
>
>
>
> --
>
> Liam Curtis
> Manager of Systems Engineering
> Datto, Inc.
> (203) 529-4949 x228
> www.datto.com
>
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] volume parameters

2015-11-07 Thread p...@email.cz

Hello,
could you recommend a set of working parameters for a replica 2 volume?
The old ones were (for gluster version 3.5.2):
storage.owner-uid   36
storage.owner-gid   36
performance.io-cache  off
performance.read-ahead  off
network.remote-dio enable
cluster.eager-lock enable
performance.stat-prefetch off
performance.quick-read off
cluster.quorum-count 1
cluster.server-quorum-type none
cluster.quorum-type  fixed

After upgrading to version 3.5.7 and applying the default recommendations,
the volumes became inaccessible (permission denied - fixed by setting the
owner uid/gid back to 36).
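
For reference, putting the ownership back on a volume looks roughly like
this (VOLNAME is a placeholder for the actual volume name; these are just
the old 3.5.2-era values from above, not an official recommendation):

  gluster volume set VOLNAME storage.owner-uid 36
  gluster volume set VOLNAME storage.owner-gid 36
  # show which options are currently reconfigured on the volume
  gluster volume info VOLNAME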

Why have the defaults been changed?
Error / Critical messages still occur (examples follow):

*E* - Error lines grepped from etc-glusterfs-glusterd.vol.log
[2015-11-07 10:49:10.883564] E [socket.c:2965:socket_connect] 
0-management: Failed to set keep-alive: Invalid argument
[2015-11-07 10:49:10.886152] E [socket.c:2965:socket_connect] 
0-management: Failed to set keep-alive: Invalid argument
[2015-11-07 10:49:15.954942] E [rpc-clnt.c:362:saved_frames_unwind] (--> 
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x196)[0x7fa88b014a66] (--> 
/lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fa88addf9be] (--> 
/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fa88addface] (--> 
/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x9c)[0x7fa88ade148c] 
(--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x88)[0x7fa88ade1c98] ) 
0-management: forced unwinding frame type(Peer mgmt) op(--(2)) called at 
2015-11-07 10:49:10.918764 (xid=0x5)
[2015-11-07 10:49:26.719176] E [socket.c:2965:socket_connect] 
0-management: Failed to set keep-alive: Invalid argument
[2015-11-07 10:54:59.738232] E [MSGID: 106243] [glusterd.c:1623:init] 
0-management: creation of 1 listeners failed, continuing with succeeded 
transport
[2015-11-07 10:55:01.860991] E [socket.c:2965:socket_connect] 
0-management: Failed to set keep-alive: Invalid argument
[2015-11-07 10:55:01.863932] E [socket.c:2965:socket_connect] 
0-management: Failed to set keep-alive: Invalid argument
[2015-11-07 10:55:01.866779] E [socket.c:2965:socket_connect] 
0-management: Failed to set keep-alive: Invalid argument


*C* - Critical lines grepped from etc-glusterfs-glusterd.vol.log
[2015-11-07 10:49:16.045778] C [MSGID: 106003] 
[glusterd-server-quorum.c:346:glusterd_do_volume_quorum_action] 
0-management: Server quorum regained for volume 1KVM12-P4. Starting 
local bricks.
[2015-11-07 10:49:16.049319] C [MSGID: 106003] 
[glusterd-server-quorum.c:346:glusterd_do_volume_quorum_action] 
0-management: Server quorum regained for volume 1KVM12-P5. Starting 
local bricks.
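
For reference, the excerpts above correspond to something like the
following, assuming the default glusterd log location:

  grep -F '] E [' /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
  grep -F '] C [' /var/log/glusterfs/etc-glusterfs-glusterd.vol.log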


regs.Paf1


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Wishlist - Mix gluster and local storage in same data center

2015-11-07 Thread Chris Adams
Once upon a time, Nir Soffer  said:
> On Tue, Nov 3, 2015 at 7:48 AM, Liam Curtis  wrote:
> >
> > Any chance this will change? It is a severe limitation to not be able to 
> > use local storage available to a host as that is often very fast storage.
> 
> This would couple the VM to that host, so you could not migrate it to
> another host. If the host goes down, you cannot run the VM on any other
> host, since its storage is gone.
> 
> Don't you think this is a severe limitation as well?

There are many things that tie a VM to a host, like USB device
passthrough, but that's not a reason to remove all such support from
oVirt, is it?

In my case, I'd like to mix iSCSI and local storage, because I have a
couple of systems that need higher disk I/O than I'd like to put on my
shared storage.  The two systems are redundant to each other, so high
availability is taken care of at a different layer.

The two systems don't, however, consume all the resources of the host
machines (lots of CPU and RAM are available).  I'd like to make those
hosts nodes in my oVirt cluster, so the spare resources can be used for
other VMs (which are on shared storage for that level of HA), but I can't
do that (at least as far as I know, with oVirt 3.5).  I thought this had
been mentioned as a feature for 3.6, but I don't see it anywhere in the
features or release notes, so I assume the functionality is still not
available.

-- 
Chris Adams 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vdsm without sanlock

2015-11-07 Thread Devin A. Bougie
On Nov 7, 2015, at 2:10 AM, Nir Soffer  wrote:
>> Mainly the dependence on a shared or remote filesystem (nfs, gfs2, etc.).
> 
> There is no such dependency.
> Sanlock is using either an lv on a block device (iscsi, fcp)

Thanks, Nir!  I was thinking sanlock required a disk_lease_dir, which all the 
documentation says to put on NFS or GFS2.  However, as you say I now see that 
ovirt can use sanlock with block devices without requiring a disk_lease_dir.
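
For anyone else checking this, a quick way to see what sanlock actually
holds on a host (assuming the standard sanlock client tool is installed):

  # lists the lockspaces and resource leases held by this host's sanlock daemon
  sanlock client status

No lease directory is involved; the leases live on the storage itself.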

Thanks again,
Devin

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] MacSpoof with multiple VM's -> bad/slow response on 3.5.3

2015-11-07 Thread Matt .
Updated pfSense and rebooted the hosts where the failovers were done; the
same issue occurs in some form when the firewall is rebooted or turned off
and on again.

It happens randomly, which makes it a pain in the ass to debug, because
you don't expect the setting to just vanish :)

The CARP IPs stop working (sometimes they work again), and eventually you
find out the macspoof setting is gone from the VM.

Has anyone else seen this?
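
If it helps to compare notes, this is a rough sketch of how I'd
double-check it (assuming the vdsm-hook-macspoof custom property is what
is in use here; VM_NAME is a placeholder):

  # on the engine: is the macspoof custom property still defined?
  engine-config -g UserDefinedVMProperties

  # on the host: do the VM's interfaces still carry the no-mac-spoofing
  # filter? (as far as I understand, the hook strips this filterref when
  # macspoof=true)
  virsh -r dumpxml VM_NAME | grep -i filterref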

2015-10-21 17:18 GMT+02:00 Matt . :
> Hi Dan,
>
> Yes it vanished from the gui.
>
> The strange thing is that, looking with vdsClient, I saw my
> loadbalancers were not started with macspoof true, but my firewalls
> were.
>
> In the meantime they have been migrated and restarted during tests, so
> I cannot tell whether it was running with macspoof true before it
> vanished from the GUI, but I'm 200% sure it was in there (GUI).
>
> Now that this has been double-checked and everything is working fine
> again, I wonder whether macspoof can have issues with multiple VHID
> groups between the same hosts on the same VLAN.
>
> It's mostly guessing, as I cannot show how it was set up; this came up
> during a failover test after the machines were moved to dedicated hosts
> a while ago.
>
> So, ask away (or shoot me), and maybe we can work out whether this is a bug.
>
> Thanks,
>
> Matt
>
> 2015-10-21 14:44 GMT+02:00 Dan Kenigsberg :
>> On Wed, Oct 21, 2015 at 07:51:28AM +0200, Matt . wrote:
>>> Hi Guys,
>>>
>>> On a 3.5.3 updated cluster I see issues with macspoofed VM's.
>>>
>>> The cluster is a CentOS 7 cluster which has always performed well.
>>>
>>> I noticed that on my loadbalancers the macspoof=true setting
>>> disappeared in the engine, and when I added it back and rebooted some
>>> other CARP machines, it had vanished on those machines as well.
>>
>> What do you mean by "vanished"? Does it not show on the UI? When did it
>> happen? Can you share the domxml of the VMs that you start?
>>
>>>
>>> It's a quite simple setup:
>>>
>>> 2 static nodes in a multiple-host cluster with CARP machines on them:
>>> per blade one firewall (pfSense) and one loadbalancer (ZEN).
>>>
>>> The cluster IDs differ on ZEN, and the CARP IPs on pfSense have
>>> different VHIDs, so I wondered whether this is a known issue related
>>> to the vanished macspoof=true setting I found (the whole virtual IP
>>> doesn't work in that case).
>>>
>>> Another cluster works fine without any issue; the spoofed systems
>>> there are on CentOS 6.
>>>
>>> This setup has run for more than a year without any issues.
>>>
>>> I hope someone has some information if this issue is known.
>>>
>>> Thanks Matt
>>>
>>> (sorry for my bad typing it's kinda early/late ;))
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users