As long as all your compute nodes are part of the gluster peer group,
mounting via localhost will work.
Just remember, gluster will connect to any server in the pool, so even
if you mount as localhost:/ the client could be accessing the storage
from another host in the gluster peer group.
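
A minimal sketch of what that looks like (the volume name vol1 is taken
from the thread below, and the mount point is just an example):

    # fetch the volfile from the local glusterd, then let the client
    # talk to the brick servers directly
    mount -t glusterfs localhost:/vol1 /mnt/vmstore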


On Fri, Jul 4, 2014 at 3:26 PM, Punit Dambiwal <hypu...@gmail.com> wrote:
> Hi Andrew,
>
> Yes, both on the same node, but I have 4 nodes of this type in the same
> cluster. So should it work or not?
>
> 1. 4 physical nodes with 12 bricks each (distributed replicated)...
> 2. The same 4 nodes are also used for compute...
>
> Do I still require the VIP or not? Because in my testing, even when the
> mount point node goes down, the VMs do not pause and are not affected...
>
>
> On Fri, Jul 4, 2014 at 1:18 PM, Andrew Lau <and...@andrewklau.com> wrote:
>>
>> Or just use localhost, as your compute and storage are on the same box.
>>
>>
>> On Fri, Jul 4, 2014 at 2:48 PM, Punit Dambiwal <hypu...@gmail.com> wrote:
>> > Hi Andrew,
>> >
>> > Thanks for the update. That means HA cannot work without a VIP in
>> > gluster, so it's better to use glusterfs with a VIP to take over the
>> > IP in case of any storage node failure...
>> >
>> >
>> > On Fri, Jul 4, 2014 at 12:35 PM, Andrew Lau <and...@andrewklau.com>
>> > wrote:
>> >>
>> >> Don't forget to take quorum into consideration; that's something
>> >> people often forget.
>> >>
>> >> The reason you're seeing the current behavior is that gluster only
>> >> uses the initial IP address to fetch the volume details (the volfile).
>> >> After that it'll connect directly to ONE of the servers, so in your
>> >> 2 storage server case there's a 50% chance it won't go into the
>> >> paused state.
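>> >>
>> >> As a rough sketch of how to soften that initial-IP dependence (the
>> >> backup-volfile-servers option exists in recent glusterfs fuse
>> >> clients; older releases spell it backupvolfile-server, so verify the
>> >> exact name for your version):
>> >>
>> >>   # if 10.10.10.2 is unreachable at mount time, ask the backups
>> >>   # for the volfile instead
>> >>   mount -t glusterfs \
>> >>     -o backup-volfile-servers=10.10.10.3:10.10.10.4 \
>> >>     10.10.10.2:/vol1 /mnt/vmstore
>> >>
>> >> Note this only helps at mount time; once mounted, the client already
>> >> knows all the bricks from the volfile.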
>> >>
>> >> For the VIP, you could consider CTDB or keepalived, or even just use
>> >> localhost (as your storage and compute are all on the same machine).
>> >> For CTDB, check out
>> >> http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/
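>> >>
>> >> In CTDB terms, the floating IP boils down to two small files (a
>> >> sketch, not taken from that article; the interface name is an
>> >> assumption):
>> >>
>> >>   # /etc/ctdb/nodes -- one internal IP per storage node
>> >>   10.10.10.1
>> >>   10.10.10.2
>> >>   10.10.10.3
>> >>   10.10.10.4
>> >>
>> >>   # /etc/ctdb/public_addresses -- the VIP ctdb floats between nodes
>> >>   10.10.10.10/24 eth0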
>> >>
>> >> I have a BZ open regarding gluster VMs going into a paused state and
>> >> not being resumable, so it's something you should also consider. In
>> >> my case, the switch dies, the gluster volume goes away, and the VMs
>> >> go into a paused state but can't be resumed. Losing one server out of
>> >> a cluster is a different story, though.
>> >> https://bugzilla.redhat.com/show_bug.cgi?id=1058300
>> >>
>> >> HTH
>> >>
>> >> On Fri, Jul 4, 2014 at 11:48 AM, Punit Dambiwal <hypu...@gmail.com>
>> >> wrote:
>> >> > Hi,
>> >> >
>> >> > Thanks. Can you suggest any good how-to/article for glusterfs with
>> >> > ovirt?
>> >> >
>> >> > One strange thing: if I run both (compute & storage) on the same
>> >> > node, the behavior described in the quote below does not happen...
>> >> >
>> >> > ---------------------
>> >> >
>> >> > Right now, if 10.10.10.2 goes away, all your gluster mounts go away
>> >> > and your VMs get paused because the hypervisors can’t access the
>> >> > storage. Your gluster storage is still fine, but ovirt can’t talk
>> >> > to it because 10.10.10.2 isn’t there.
>> >> > ---------------------
>> >> >
>> >> > Even when 10.10.10.2 goes down, I can still access the gluster
>> >> > mounts and no VM pauses. I can access the VMs via ssh with no
>> >> > connection failure. The connection drops only when the SPM goes
>> >> > down and another node is elected as SPM (all the running VMs pause
>> >> > in this condition).
>> >> >
>> >> >
>> >> >
>> >> > On Fri, Jul 4, 2014 at 4:12 AM, Darrell Budic
>> >> > <darrell.bu...@zenfire.com>
>> >> > wrote:
>> >> >>
>> >> >> You need to set up a virtual IP to use as the mount point; most
>> >> >> people use keepalived to provide a virtual IP via VRRP for this.
>> >> >> Set up something like 10.10.10.10 and use that for your mounts.
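>> >> >>
>> >> >> A minimal keepalived sketch of that idea (untested; the interface
>> >> >> name and priority are assumptions, and each node should get a
>> >> >> different priority):
>> >> >>
>> >> >>   # /etc/keepalived/keepalived.conf
>> >> >>   vrrp_instance gluster_vip {
>> >> >>       state BACKUP
>> >> >>       interface eth0
>> >> >>       virtual_router_id 51
>> >> >>       priority 100          # highest priority holds 10.10.10.10
>> >> >>       virtual_ipaddress {
>> >> >>           10.10.10.10/24
>> >> >>       }
>> >> >>   }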
>> >> >>
>> >> >> Right now, if 10.10.10.2 goes away, all your gluster mounts go
>> >> >> away and your VMs get paused because the hypervisors can’t access
>> >> >> the storage. Your gluster storage is still fine, but ovirt can’t
>> >> >> talk to it because 10.10.10.2 isn’t there.
>> >> >>
>> >> >> If the SPM goes down, the other hypervisor hosts will elect a new
>> >> >> one (under the control of the ovirt engine).
>> >> >>
>> >> >> The same scenarios apply if storage & compute are on the same
>> >> >> server; you still need a VIP address for the storage portion to
>> >> >> serve as the mount point so it’s not dependent on any one server.
>> >> >>
>> >> >> -Darrell
>> >> >>
>> >> >> On Jul 3, 2014, at 1:14 AM, Punit Dambiwal <hypu...@gmail.com>
>> >> >> wrote:
>> >> >>
>> >> >> Hi,
>> >> >>
>> >> >> I have some HA-related concerns about glusterfs with Ovirt. Let's
>> >> >> say I have 4 storage nodes with gluster bricks as below:
>> >> >>
>> >> >> 1. 10.10.10.1 to 10.10.10.4 with 2 bricks each, in a distributed
>> >> >> replicated architecture (see the sketch after this list)...
>> >> >> 2. I then attached this gluster storage to ovirt-engine with the
>> >> >> following mount point: 10.10.10.2:/vol1
>> >> >> 3. In my cluster I have 3 hypervisor hosts (10.10.10.5 to
>> >> >> 10.10.10.7); the SPM is on 10.10.10.5...
>> >> >> 4. What happens if 10.10.10.2 goes down? Can the hypervisor hosts
>> >> >> still access the storage?
>> >> >> 5. What happens if the SPM goes down?
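>> >> >>
>> >> >> For reference, a volume like the one in point 1 would be created
>> >> >> roughly like this (brick paths are assumptions; with replica 2,
>> >> >> each consecutive pair of bricks forms a replica set, so pair
>> >> >> bricks from different hosts):
>> >> >>
>> >> >>   gluster volume create vol1 replica 2 \
>> >> >>     10.10.10.1:/data/brick1 10.10.10.2:/data/brick1 \
>> >> >>     10.10.10.3:/data/brick1 10.10.10.4:/data/brick1 \
>> >> >>     10.10.10.1:/data/brick2 10.10.10.2:/data/brick2 \
>> >> >>     10.10.10.3:/data/brick2 10.10.10.4:/data/brick2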
>> >> >>
>> >> >> Note: what happens for points 4 & 5 if storage and compute are
>> >> >> both running on the same server?
>> >> >>
>> >> >> Thanks,
>> >> >> Punit