On the write-behind translator, is there a way to wait for just one of the AFR
replicas to acknowledge the close, finish replicating the data in the
background (which write-behind currently does for writes), and issue the close
system call to the remaining replica servers long after the application has
moved on, since at least one of the replicas is keeping up?
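
To be concrete, the client-side stack I have in mind is roughly this (an
illustrative sketch, not my actual volfile; volume names are made up and
option syntax may differ slightly between releases):

    volume remote-a
      type protocol/client
      option transport-type tcp
      option remote-host server-a
      option remote-subvolume brick
    end-volume

    volume remote-b
      type protocol/client
      option transport-type tcp
      option remote-host server-b
      option remote-subvolume brick
    end-volume

    # replicate (AFR) mirrors every write to both servers
    volume replicate
      type cluster/replicate
      subvolumes remote-a remote-b
    end-volume

    # write-behind on top returns writes to the application early
    volume writebehind
      type performance/write-behind
      subvolumes replicate
    end-volume

The question is whether close() could behave similarly: return once one
replica has acknowledged it, and let the slower replica catch up in the
background.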

Thanks

On Sun, Mar 8, 2009 at 12:00 PM, <[email protected]> wrote:

> Send Gluster-users mailing list submissions to
>        [email protected]
>
> To subscribe or unsubscribe via the World Wide Web, visit
>        http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
> or, via email, send a message with subject or body 'help' to
>        [email protected]
>
> You can reach the person managing the list at
>        [email protected]
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Gluster-users digest..."
>
>
> Today's Topics:
>
>   1. How caches are working on AFR? (Stas Oskin)
>   2. Problems compiling Gluster Patched fuse. (Evan Hart)
>   3. Re: How caches are working on AFR? (Anand Babu Periasamy)
>   4. GlusterFS running, but no syncing is done (Stas Oskin)
>   5. Accessing the host glusterFS directory from OpenVZ virtual
>      server (Stas Oskin)
>   6. mounting glusterfs on /etc/mtab read only (Enno Lange)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sun, 8 Mar 2009 02:22:03 +0200
> From: Stas Oskin <[email protected]>
> Subject: [Gluster-users] How caches are working on AFR?
> To: gluster-users <[email protected]>
> Message-ID:
>        <[email protected]>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi.
>
> I have a question for the GlusterFS developers.
>
> If I have a pair of servers in client-server AFR (A and B) and the
> application running on A writes to disk, how soon does the application
> receive an OK and continue?
>
> After the cache on server A is filled with data (with everything then
> synchronized in the background), or only after the cache on server B
> receives the data as well?
>
> Thanks.
>
> ------------------------------
>
> Message: 2
> Date: Sat, 7 Mar 2009 13:59:19 -0800
> From: Evan Hart <[email protected]>
> Subject: [Gluster-users] Problems compiling Gluster Patched fuse.
> To: [email protected]
> Message-ID:
>        <[email protected]>
> Content-Type: text/plain; charset="iso-8859-1"
>
> I'm having problems compiling fuse-2.7.4glfs11 on the following system:
>
> # uname -a
> Linux cdc 2.6.27-gentoo-r8 #1 SMP Fri Mar 6 12:21:10 PST 2009 x86_64
> Quad-Core AMD Opteron(tm) Processor 2350 AuthenticAMD GNU/Linux
>
> http://pastebin.com/m2dc978be
>
> Any help would be great.
>
> Thanks
>
> ------------------------------
>
> Message: 3
> Date: Sat, 07 Mar 2009 18:18:15 -0800
> From: Anand Babu Periasamy <[email protected]>
> Subject: Re: [Gluster-users] How caches are working on AFR?
> To: Stas Oskin <[email protected]>
> Cc: gluster-users <[email protected]>
> Message-ID: <[email protected]>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Replicate in 2.0 performs atomic writes by default. This means writes
> return control to the application only after both (or all) volumes have
> been written successfully.
>
> To mask the performance penalty of atomic writes, you should load
> write-behind on top of it. Write-behind returns control as soon as it
> receives the write call from the application, but continues writing in
> the background. Write-behind also performs block aggregation: smaller
> writes are aggregated into fewer, larger writes.
>
> POSIX says the application should verify the return status of the close
> system call to ensure all writes were successfully written. If there are
> any pending writes, the close call will block until all the data is
> completely written. There is an option in write-behind to perform even
> the close in the background; it is unsafe and turned off by default.
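>
> For illustration, loading write-behind over replicate in the client volume
> spec looks roughly like this (a sketch; flush-behind is the background-close
> option mentioned above, and exact option names should be checked against
> your release):
>
>   volume writebehind
>     type performance/write-behind
>     # flush-behind enables background close; unsafe, default is off
>     option flush-behind on
>     subvolumes replicate
>   end-volume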
>
> Applications that expect every write to succeed issue synchronous writes.
>
> I hope this answers your question.
>
> Happy Hacking,
> --
> Anand Babu Periasamy
> GPG Key ID: 0x62E15A31
> Blog [http://ab.multics.org]
> GlusterFS [http://www.gluster.org]
> The GNU Operating System [http://www.gnu.org]
>
>
>
> Stas Oskin wrote:
> > Hi.
> >
> > I have a question for the GlusterFS developers.
> >
> > If I have a pair of servers in client-server AFR (A and B) and the
> > application running on A writes to disk, how soon does the application
> > receive an OK and continue?
> >
> > After the cache on server A is filled with data (with everything then
> > synchronized in the background), or only after the cache on server B
> > receives the data as well?
> >
> > Thanks.
> >
> >
> > ------------------------------------------------------------------------
> >
> > _______________________________________________
> > Gluster-users mailing list
> > [email protected]
> > http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>
>
>
> ------------------------------
>
> Message: 4
> Date: Sun, 8 Mar 2009 10:58:17 +0200
> From: Stas Oskin <[email protected]>
> Subject: [Gluster-users] GlusterFS running, but no syncing is done
> To: gluster-users <[email protected]>
> Message-ID:
>        <[email protected]>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi.
>
> I'm trying to run my first GlusterFS setup: basically, 2 servers running in
> AFR mode.
>
> While the servers find and connect to each other, unfortunately the files
> are not being synchronized between them. I mean, when I place a file on one
> of the servers, the other one does not receive it.
>
> Here is what I receive on each of the servers:
> 2009-03-08 02:41:43 N [server-protocol.c:7186:mop_setvolume] server: accepted client from 192.168.253.41:1020
> 2009-03-08 02:41:48 D [client-protocol.c:5924:client_protocol_reconnect] home2: breaking reconnect chain
> 2009-03-08 02:41:48 D [client-protocol.c:5924:client_protocol_reconnect] home2: breaking reconnect chain
>
> and
>
> 2009-03-08 02:41:43 D [client-protocol.c:6557:notify] home2: got GF_EVENT_CHILD_UP
> 2009-03-08 02:41:43 D [socket.c:951:socket_connect] home2: connect () called on transport already connected
> 2009-03-08 02:41:43 N [client-protocol.c:5853:client_setvolume_cbk] home2: connection and handshake succeeded
> 2009-03-08 02:41:53 D [client-protocol.c:5924:client_protocol_reconnect] home2: breaking reconnect chain
> 2009-03-08 02:41:53 D [client-protocol.c:5924:client_protocol_reconnect] home2: breaking reconnect chain
>
> Any idea why the files are not synchronized and how it can be diagnosed?
>
> Thanks.
>
> ------------------------------
>
> Message: 5
> Date: Sun, 8 Mar 2009 11:59:51 +0200
> From: Stas Oskin <[email protected]>
> Subject: [Gluster-users] Accessing the host glusterFS directory from
>        OpenVZ  virtual server
> To: gluster-users <[email protected]>
> Message-ID:
>        <[email protected]>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi.
>
> This might be unrelated to this list, but I'm looking for a way to access a
> GlusterFS partition from an OpenVZ virtual server.
>
> Meaning, a virtual server running on a particular host would access that
> host's GlusterFS directory.
>
> The immediate idea I had was to make the virtual server a client of
> GlusterFS, as it would basically happen over same-machine networking, but
> perhaps there is a way to write the data directly to the host partition?
>
> Thanks.
>
> ------------------------------
>
> Message: 6
> Date: Sun, 08 Mar 2009 14:05:45 +0100
> From: Enno Lange <[email protected]>
> Subject: [Gluster-users] mounting glusterfs on /etc/mtab read only
> To: [email protected]
> Message-ID: <[email protected]>
> Content-Type: text/plain; charset=ISO-8859-15; format=flowed
>
> Hi,
>
> We are running a cluster of diskless Gentoo systems; therefore, /etc/mtab is
> linked to /proc/mounts as usual. Trying to mount a GlusterFS volume fails
> because mtab is not writable. Is there by any chance a way to pass '-n'
> or something equivalent to the underlying mount -t fuse process?
>
> The workaround we have deployed is to link /etc/mtab to a local file
> on a scratch partition, which in my opinion is quite unsatisfactory: the
> mount succeeds, but the mounted filesystem will not appear in the
> linked /etc/mtab.
>
> Enno Lange
>
>
>
> ------------------------------
>
> _______________________________________________
> Gluster-users mailing list
> [email protected]
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>
>
> End of Gluster-users Digest, Vol 11, Issue 12
> *********************************************
>
_______________________________________________
Gluster-users mailing list
[email protected]
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
