Re: [Gluster-users] Errors during dbench run (rename failed)

2013-03-18 Thread Pranith Kumar K

On 03/18/2013 02:49 PM, Pranith Kumar K wrote:

On 03/18/2013 02:36 PM, Hans Lambermont wrote:

Pranith Kumar K wrote on 20130318:


On 03/17/2013 06:55 PM, Marc Seeger wrote:

This is how dbench died:
I, [2013-03-16T05:34:03.176890 #13121]  INFO -- : [710] rename 
/mnt/gfs/something.example.com_1363412031/clients/client2/~dmtmp/PWRPNT/NEWPCB.PPT 
/mnt/gfs/something.example.com_1363412031/clients/client2/~dmtmp/PWRPNT/PPTB1E4.TMP 
failed (No such file or directory) - expected NT_STATUS_OK

...

hi, If acquiring entry locks fails for any of the bricks in a
replica subvolume, rename used to fail. This bug is fixed in 3.4alpha.

Cool !  Will this be backported to the 3.3 branch ?

regards,
Hans Lambermont
That fix is a complete refactor of entry-locks. Let me check if I can 
come up with a small patch to fix just this case.


Pranith.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Here is the fix:
http://review.gluster.com/4689

Pranith.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Possible reason for "meta-data data entry missing-entry gfid self-heal failed"?

2013-03-18 Thread Marc Seeger
Sadly, we keep seeing those. The logs display the same pattern:

[2013-03-18 05:22:49.174382] I 
[afr-self-heal-common.c:1941:afr_sh_post_nb_entrylk_conflicting_sh_cbk] 
0-replicate0: Non blocking entrylks failed.
[2013-03-18 05:22:49.174382] E 
[afr-self-heal-common.c:2160:afr_self_heal_completion_cbk] 0-replicate0: 
background  meta-data data entry missing-entry gfid self-heal failed on 
/home/vcltest256/.drush/vcltest256.aliases.drushrc.php.lock
[2013-03-18 05:22:49.174382] W [inode.c:914:inode_lookup] 
(-->/usr/lib/libglusterfs.so.0(default_lookup_cbk+0xc1) [0x7f14a4cd6d01] 
(-->/usr/lib/glusterfs/3.3.1/xlator/mount/fuse.so(+0xf198) [0x7f14a27fe198] 
(-->/usr/lib/glusterfs/3.3.1/xlator/mount/fuse.so(+0xeffb) [0x7f14a27fdffb]))) 
0-fuse: inode not found



On Mar 7, 2013, at 1:14 AM, Marc Seeger  wrote:

> Almost forgot: These operations were done on a symlinked directory (/home is 
> linked to /mnt/gfs/home where /mnt/gfs is the gluster mountpoint)
> 
> On Mar 7, 2013, at 1:08 AM, Marc Seeger  wrote:
> 
>> Hey, in our testing we sometimes seem to run into a problem with GlusterFS 
>> breaking.
>> The most recent occurrence was two processes on two machines trying to stat 
>> the same lock file:
>> 2013-03-06T16:41:27+00:00 daemon.notice : creating directory: 
>> dir=/home/vcltest464/.drush, user=10036, group=10036, mode=0700
>> 2013-03-06T16:41:27+00:00 daemon.notice : PHP Warning:  stat(): stat failed 
>> for /home/vcltest464/.drush/vcltest464.aliases.drushrc.php.lock in [...].php 
>> on line 695
>> 2013-03-06T16:41:27+00:00 daemon.notice : PHP Warning:  stat(): stat failed 
>> for /home/vcltest464/.drush/vcltest464.aliases.drushrc.php.lock in [...].php 
>> on line 695
>> 2013-03-06T16:41:27+00:00 daemon.notice : PHP Warning:  stat(): stat failed 
>> for /home/vcltest464/.drush/vcltest464.aliases.drushrc.php.lock in [...].php 
>> on line 695
>> (This repeated a few thousand times; it was a bug in our locking code that was triggered when the stat failed.)
>> 
>> It ends up as something like this in the gluster log files:
>> 
>> [2013-03-06 16:34:36.875559] W [client3_1-fops.c:2457:client3_1_link_cbk] 
>> 0-remote8: remote operation failed: File exists 
>> (---- -> /home/vcltest418/prod)
>> [2013-03-06 16:34:36.875559] W [client3_1-fops.c:2457:client3_1_link_cbk] 
>> 0-remote7: remote operation failed: File exists 
>> (---- -> /home/vcltest418/prod)
>> [2013-03-06 16:36:24.809098] W [client3_1-fops.c:327:client3_1_mkdir_cbk] 
>> 0-remote8: remote operation failed: File exists. Path: 
>> /vcltest473/php_sessions (----)
>> [2013-03-06 16:36:24.809098] W [client3_1-fops.c:327:client3_1_mkdir_cbk] 
>> 0-remote7: remote operation failed: File exists. Path: 
>> /vcltest473/php_sessions (----)
>> [2013-03-06 16:36:24.809098] W [fuse-bridge.c:292:fuse_entry_cbk] 
>> 0-glusterfs-fuse: 9061: MKDIR() /vcltest473/php_sessions => -1 (File exists)
>> [2013-03-06 16:36:26.179144] I 
>> [afr-self-heal-common.c:1189:afr_sh_missing_entry_call_impunge_recreate] 
>> 0-replicate0: no missing files - 
>> /home/vcltest473/.drush/vcltest473.aliases.drushrc.php.lock. proceeding to 
>> metadata check
>> [2013-03-06 16:36:34.899435] I [afr-self-heal-entry.c:2333:afr_sh_entry_fix] 
>> 0-replicate0: /vcltest473/livedev: Performing conservative merge
>> [2013-03-06 16:41:02.118580] W [client3_1-fops.c:327:client3_1_mkdir_cbk] 
>> 0-remote8: remote operation failed: File exists. Path: 
>> /vcltest723/files-private (----)
>> [2013-03-06 16:41:02.118580] W [client3_1-fops.c:327:client3_1_mkdir_cbk] 
>> 0-remote7: remote operation failed: File exists. Path: 
>> /vcltest723/files-private (----)
>> [2013-03-06 16:41:02.118580] W [fuse-bridge.c:292:fuse_entry_cbk] 
>> 0-glusterfs-fuse: 12435: MKDIR() /vcltest723/files-private => -1 (File 
>> exists)
>> [2013-03-06 16:41:27.179425] I 
>> [afr-self-heal-common.c:1941:afr_sh_post_nb_entrylk_conflicting_sh_cbk] 
>> 0-replicate0: Non blocking entrylks failed.
>> [2013-03-06 16:41:27.179425] E 
>> [afr-self-heal-common.c:2160:afr_self_heal_completion_cbk] 0-replicate0: 
>> background  meta-data data entry missing-entry gfid self-heal failed on 
>> /home/vcltest464/.drush/vcltest464.aliases.drushrc.php.lock
>> [2013-03-06 16:41:27.179425] W [inode.c:914:inode_lookup] 
>> (-->/usr/lib/libglusterfs.so.0(default_lookup_cbk+0xc1) [0x7f93c6ac7d01] 
>> (-->/usr/lib/glusterfs/3.3.1/xlator/mount/fuse.so(+0xf198) [0x7f93c45ef198] 
>> (-->/usr/lib/glusterfs/3.3.1/xlator/mount/fuse.so(+0xeffb) 
>> [0x7f93c45eeffb]))) 0-fuse: inode not found
>> 
>> 
>> After this, the mountpoint was not responding to file_exists() anymore which 
>> usually means the client died. Any idea what could have caused such 
>> behaviour?
>> 
>> 
>> Cheers,
>> Marc
> 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] How to evaluate the glusterfs performance with small file workload?

2013-03-18 Thread Shawn Nock
nlxswig  writes:

> Hi guys
>
> I have run into some trouble trying to evaluate GlusterFS
> performance with a small-file workload.
>
> 1: What kind of benchmark should I use to test small-file
> operations?

I use fio (http://freecode.com/projects/fio).

Here's an old tutorial:
https://www.linux.com/learn/tutorials/442451-inspecting-disk-io-performance-with-fio/

You'll need to read the docs to use it well, but the tutorial gives
some idea of the operation.
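
If it helps, here is the sort of invocation I would start from for a
small-file test on a Gluster mount; the mount path, file count and sizes
below are only illustrative, so adjust them to your setup:

  # Random 4k writes spread over many small files (all numbers are examples)
  mkdir -p /mnt/gfs/fiotest
  fio --name=smallfile-randwrite \
      --directory=/mnt/gfs/fiotest \
      --ioengine=sync \
      --rw=randwrite \
      --bs=4k \
      --size=4m \
      --nrfiles=1000 \
      --numjobs=4 \
      --group_reporting

With --nrfiles, fio splits each job's --size across that many files, so the
run above works on files of roughly 4 kB; the four jobs add a little
concurrency.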

-- 
Shawn Nock (OpenPGP: 0x65118FA5)


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] different size of nodes

2013-03-18 Thread Papp Tamas

On 03/18/2013 01:43 PM, Thomas Wakefield wrote:


You can set the free disk space limit.  This will force Gluster to write files 
to another brick.

gluster volume set "volume" cluster.min-free-disk XXGB (insert your 
volume name and the amount of free space you want to keep, probably 200-300GB)


It doesn't help in this case; it just makes sure there are no full bricks.


Running a rebalance would help move your files around so that gl4 is not filled 
up.
gluster volume rebalance "volume"  start


Data is rebalanced in terms of used disk space in GB; it's not related to the 
percentage used of a brick, am I wrong?

tamas
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] different size of nodes

2013-03-18 Thread Thomas Wakefield
You can set the free disk space limit.  This will force Gluster to write files 
to another brick.

gluster volume set "volume" cluster.min-free-disk XXGB (insert your 
volume name and the amount of free space you want to keep, probably 200-300GB)

Running a rebalance would help move your files around so that gl4 is not filled 
up.
gluster volume rebalance "volume"  start
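
For the volume mentioned in this thread (w-vol) that would look roughly like
the lines below; the 200GB figure is just an example reserve, not a
recommendation:

  # Stop new files landing on bricks with less than ~200GB free (example value)
  gluster volume set w-vol cluster.min-free-disk 200GB

  # Spread the existing data more evenly across the bricks
  gluster volume rebalance w-vol start

  # Check how the rebalance is progressing
  gluster volume rebalance w-vol status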

-Tom


On Mar 16, 2013, at 6:54 AM, Papp Tamas  wrote:

> hi All,
> 
> There is a distributed cluster with 5 bricks:
> 
> gl0
> Filesystem  Size  Used Avail Use% Mounted on
> /dev/sda4   5.5T  4.1T  1.5T  75% /mnt/brick1
> gl1
> Filesystem  Size  Used Avail Use% Mounted on
> /dev/sda4   5.5T  4.3T  1.3T  78% /mnt/brick1
> gl2
> Filesystem  Size  Used Avail Use% Mounted on
> /dev/sda4   5.5T  4.1T  1.4T  76% /mnt/brick1
> gl3
> Filesystem  Size  Used Avail Use% Mounted on
> /dev/sda4   4.1T  4.1T  2.1G 100% /mnt/brick1
> gl4
> Filesystem  Size  Used Avail Use% Mounted on
> /dev/sda4   4.1T  4.1T   24M 100% /mnt/brick1
> 
> 
> Volume Name: w-vol
> Type: Distribute
> Volume ID: 89e31546-cc2e-4a27-a448-17befda04726
> Status: Started
> Number of Bricks: 5
> Transport-type: tcp
> Bricks:
> Brick1: gl0:/mnt/brick1/export
> Brick2: gl1:/mnt/brick1/export
> Brick3: gl2:/mnt/brick1/export
> Brick4: gl3:/mnt/brick1/export
> Brick5: gl4:/mnt/brick1/export
> Options Reconfigured:
> nfs.mount-udp: on
> nfs.addr-namelookup: off
> nfs.ports-insecure: on
> nfs.port: 2049
> cluster.stripe-coalesce: on
> nfs.disable: off
> performance.flush-behind: on
> performance.io-thread-count: 64
> performance.quick-read: on
> performance.stat-prefetch: on
> performance.io-cache: on
> performance.write-behind: on
> performance.read-ahead: on
> performance.write-behind-window-size: 4MB
> performance.cache-refresh-timeout: 1
> performance.cache-size: 4GB
> network.frame-timeout: 60
> performance.cache-max-file-size: 1GB
> 
> 
> 
> As you can see 2 of the bricks are smaller and they're full.
> The gluster volume is not full of course:
> 
> gl0:/w-vol   25T   21T  4.0T  84% /W/Projects
> 
> 
> I'm not able to write to the volume. Why? Is it an issue? If so, is it known?
> How can I stop writing to full nodes?
> 
> Thanks,
> tamas
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] How to evaluate the glusterfs performance with small file workload?

2013-03-18 Thread Torbjørn Thorsen
On Mon, Mar 18, 2013 at 11:27 AM, nlxswig  wrote:
> Hi guys
> 1: What kind of benchmark should I use to test small-file operations?

I've been wondering a bit about the same thing.
I was thinking it would be nice to have something that records and
synthesizes IO patterns. One could record a process which does a lot
of small-file handling, for example Dovecot, and replay those IO
patterns on top of any filesystem.

A quick look around revealed ioreplay[1].
It seems to work by replaying strace output, which is a cool idea.
I haven't tried it, but it looks to be a nice testing tool.

[1]: https://code.google.com/p/ioapps/wiki/ioreplay
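
As a rough sketch of the recording half (the strace flags are standard; I'll
leave the exact ioreplay invocation to its documentation, since I haven't
tried it, and the command name below is just a placeholder):

  # Capture the I/O-related system calls of a small-file-heavy workload;
  # the resulting trace file is what ioreplay is meant to consume.
  strace -f -ttt -T \
      -e trace=file,desc \
      -o /tmp/smallfile-io.trace \
      your_workload_command_here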

> 4: From the GlusterFS documentation, I gather that there is no write cache
> feature, in order to avoid cache-coherency problems.
>
> Does that mean the memory cache has no influence on small-file write
> performance in GlusterFS?
>
> If so, when testing GlusterFS with:
>
> "dd if=/dev/zero of=test.img bs=10k count=1 oflag=direct" and
>
> "dd if=/dev/zero of=test.img bs=10k count=1"
>
> these two commands should give the same write performance.
>
> But when I run them, the results are not the same, and the gap is big.
>
> How can this be explained?

My impression is that there are write-caching features,
but Gluster tries hard to maintain coherency and correctness regarding writes.
For one type of cache, see the write-behind translator, which is enabled
by default.

AFAIK, the difference between the two dd invocations is that the first
one disables all caches, while the last one doesn't even wait for a sync
before finishing. My understanding leads me to say that the first one
can't use cache at all, while the second one uses all the cache there is.

Try to run the last one with "conv=fsync".
This will sync the file at the end of writing, ensuring that when dd
returns the data should be on disk. This will probably even out the
run time for the two invocations.
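
A minimal sketch of the three variants on a Gluster mount (the path and
count are only illustrative):

  # 1) Direct I/O: bypasses client-side caching entirely
  dd if=/dev/zero of=/mnt/gfs/test.img bs=10k count=1 oflag=direct

  # 2) Default: returns once the data is handed to the cache, not when it is on disk
  dd if=/dev/zero of=/mnt/gfs/test.img bs=10k count=1

  # 3) Cached write, but synced before dd exits; a fairer comparison with (1)
  dd if=/dev/zero of=/mnt/gfs/test.img bs=10k count=1 conv=fsync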



--
Kind regards,
Torbjørn Thorsen
Developer / operations technician

Trollweb Solutions AS
- Professional Magento Partner
www.trollweb.no

Daytime phone: +47 51215300
Evening/weekend phone: For customers with a service agreement

Visiting address: Luramyrveien 40, 4313 Sandnes
Postal address: Maurholen 57, 4316 Sandnes

Remember that all our standard terms and conditions always apply
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] How to evaluate the glusterfs performance with small file workload?

2013-03-18 Thread nlxswig
Hi guys,

I have run into some trouble trying to evaluate GlusterFS performance
with a small-file workload.

1: What kind of benchmark should I use to test small-file operations?

   As we all know, we can use iozone to test large-file operations, but
   because of the memory cache the results will not be correct if we test
   small-file operations with iozone. So what kind of benchmark should
   I use?

   How about "dd oflag=direct"?

2: How do I simulate concurrent access from a large number of clients?

   When we use iozone, its cluster mode can help us test with multiple
   clients. But if the number of clients is in the hundreds, it is
   difficult to deploy that many clients at the same time. Could we run
   multiple processes on one client at the same time to simulate
   concurrent access from multiple clients?

3: For small-file operations, how can I increase the workload generated
   by a single client?

4: From the GlusterFS documentation, I gather that there is no write cache
   feature, in order to avoid cache-coherency problems.

   Does that mean the memory cache has no influence on small-file write
   performance in GlusterFS?

   If so, when testing GlusterFS with:

   "dd if=/dev/zero of=test.img bs=10k count=1 oflag=direct" and

   "dd if=/dev/zero of=test.img bs=10k count=1"

   these two commands should give the same write performance. But when I
   run them, the results are not the same, and the gap is big.

   How can this be explained?

5: How should GlusterFS be tuned for small-file operations?

If you know, please let me know. Many thanks.

Best Regards,

Lixin Niu



   ___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Errors during dbench run (rename failed)

2013-03-18 Thread Pranith Kumar K

On 03/18/2013 02:36 PM, Hans Lambermont wrote:

Pranith Kumar K wrote on 20130318:


On 03/17/2013 06:55 PM, Marc Seeger wrote:

This is how dbench died:
I, [2013-03-16T05:34:03.176890 #13121]  INFO -- : [710] rename 
/mnt/gfs/something.example.com_1363412031/clients/client2/~dmtmp/PWRPNT/NEWPCB.PPT 
/mnt/gfs/something.example.com_1363412031/clients/client2/~dmtmp/PWRPNT/PPTB1E4.TMP 
failed (No such file or directory) - expected NT_STATUS_OK

...

hi, If acquiring entry locks fails for any of the bricks in a
replica subvolume, rename used to fail. This bug is fixed in 3.4alpha.

Cool !  Will this be backported to the 3.3 branch ?

regards,
Hans Lambermont
That fix is a complete refactor of entry-locks. Let me check if I can 
come up with a small patch to fix just this case.


Pranith.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Nightly rpms?

2013-03-18 Thread Nux!

Hello,

On some occasions with 3.4, for example, I seemed to hit bugs that were not 
only already reported, but in some cases even fixed (like a recent quota 
failure issue). Is there a place where I could get nightly 
or at least weekly RPMs?

This way at least I'll hit new or unresolved bugs ... :-)

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Errors during dbench run (rename failed)

2013-03-18 Thread Hans Lambermont
Pranith Kumar K wrote on 20130318:

> On 03/17/2013 06:55 PM, Marc Seeger wrote:
> >This is how dbench died:
> >I, [2013-03-16T05:34:03.176890 #13121]  INFO -- : [710] rename 
> >/mnt/gfs/something.example.com_1363412031/clients/client2/~dmtmp/PWRPNT/NEWPCB.PPT 
> >/mnt/gfs/something.example.com_1363412031/clients/client2/~dmtmp/PWRPNT/PPTB1E4.TMP 
> >failed (No such file or directory) - expected NT_STATUS_OK
...
> hi, If acquiring entry locks fails for any of the bricks in a
> replica subvolume, rename used to fail. This bug is fixed in 3.4alpha.

Cool !  Will this be backported to the 3.3 branch ?

regards,
   Hans Lambermont
-- 
Hans Lambermont | Senior Architect
(t) +31407370104 (w) www.shapeways.com
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Errors during dbench run (rename failed)

2013-03-18 Thread Pranith Kumar K

On 03/17/2013 06:55 PM, Marc Seeger wrote:

Hi,

We just ran into dbench dying on one of our test runs.
We run dbench on each of 2 machines.
We use the following parameters: dbench 6 -t 60 -D $DIRECTORY (host-specific; 
they each write to a separate directory)
The directories are on a mountpoint connected using glusterfs 3.3.1 
(3.3.1-ubuntu1~lucid8 from 
https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.3)

This is how dbench died:

I, [2013-03-16T05:34:03.176890 #13121]  INFO -- : [710] rename 
/mnt/gfs/something.example.com_1363412031/clients/client2/~dmtmp/PWRPNT/NEWPCB.PPT 
/mnt/gfs/something.example.com_1363412031/clients/client2/~dmtmp/PWRPNT/PPTB1E4.TMP 
failed (No such file or directory) - expected NT_STATUS_OK


These are the logs from around that time. They are a bit noisy; the matching 
message is emphasised using *:

[2013-03-16 05:34:03.082813] I [afr-inode-write.c:428:afr_open_fd_fix] 
0-replicate0: Opening fd 0x7f1adb67f274
[2013-03-16 05:34:03.082813] W [client3_1-fops.c:1595:client3_1_entrylk_cbk] 
0-remote9: remote operation failed: No such file or directory
[2013-03-16 05:34:03.082813] W [client3_1-fops.c:418:client3_1_open_cbk] 
0-remote9: remote operation failed: No such file or directory. Path: 
/something.example.com_1363412031/clients/client2/~dmtmp/PWRPNT/PPTB1E4.TMP 
(b49d6051-93f6-4eca-b161-865a5bea964b)
[2013-03-16 05:34:03.082813] I [afr-inode-write.c:428:afr_open_fd_fix] 
0-replicate0: Opening fd 0x7f1adb67f4cc
[2013-03-16 05:34:03.082813] I [afr-inode-write.c:428:afr_open_fd_fix] 
0-replicate0: Opening fd 0x7f1adb67f6c0
[2013-03-16 05:34:03.082813] I [afr-inode-write.c:428:afr_open_fd_fix] 
0-replicate0: Opening fd 0x7f1adb67f468
[2013-03-16 05:34:03.082813] I [afr-inode-write.c:428:afr_open_fd_fix] 
0-replicate0: Opening fd 0x7f1adb67f33c
[2013-03-16 05:34:03.082813] W [client3_1-fops.c:881:client3_1_flush_cbk] 
0-remote9: remote operation failed: No such file or directory
[2013-03-16 05:34:03.092814] W [client3_1-fops.c:418:client3_1_open_cbk] 
0-remote9: remote operation failed: No such file or directory. Path: 
/something.example.com_1363412031/clients/client2/~dmtmp/PWRPNT/ZD16.BMP 
(73e3b099-48cd-4e76-8049-c64bf8f63500)
[2013-03-16 05:34:03.092814] W [client3_1-fops.c:418:client3_1_open_cbk] 
0-remote9: remote operation failed: No such file or directory. Path: 
/something.example.com_1363412031/clients/client2/~dmtmp/PWRPNT/NEWPCB.PPT 
(ba53fb9f-0648-4794-aaa9-bba9331b52cb)
[2013-03-16 05:34:03.092814] W [client3_1-fops.c:418:client3_1_open_cbk] 
0-remote9: remote operation failed: No such file or directory. Path: 
/something.example.com_1363412031/clients/client2/~dmtmp/PWRPNT/PCBENCHM.PPT 
(a0c96e9a-4d4a-4984-9892-ff0b2ecbb7e3)
[2013-03-16 05:34:03.092814] W [client3_1-fops.c:418:client3_1_open_cbk] 
0-remote9: remote operation failed: No such file or directory. Path: 
/something.example.com_1363412031/clients/client4/~dmtmp/PWRPNT/PPTB1E4.TMP 
(2b8f1677-6376-4286-a381-8f4897bc9f4a)
[2013-03-16 05:34:03.092814] I [afr-inode-write.c:428:afr_open_fd_fix] 
0-replicate0: Opening fd 0x7f1adb67f594
[2013-03-16 05:34:03.092814] I [afr-inode-write.c:428:afr_open_fd_fix] 
0-replicate0: Opening fd 0x7f1adb67f3a0
[2013-03-16 05:34:03.092814] I [afr-inode-write.c:428:afr_open_fd_fix] 
0-replicate0: Opening fd 0x7f1adb67f2d8
[2013-03-16 05:34:03.112816] W [client3_1-fops.c:418:client3_1_open_cbk] 
0-remote9: remote operation failed: No such file or directory. Path: 
/something.example.com_1363412031/clients/client4/~dmtmp/PWRPNT/ZD16.BMP 
(eafa5f6a-fe12-4b9c-a5b9-386f2ff2123f)
[2013-03-16 05:34:03.112816] W [client3_1-fops.c:418:client3_1_open_cbk] 
0-remote9: remote operation failed: No such file or directory. Path: 
/something.example.com_1363412031/clients/client4/~dmtmp/PWRPNT/NEWPCB.PPT 
(8c99ede1-3782-49f0-b544-00f4ec3beb9b)
[2013-03-16 05:34:03.112816] W [client3_1-fops.c:418:client3_1_open_cbk] 
0-remote9: remote operation failed: No such file or directory. Path: 
/something.example.com_1363412031/clients/client4/~dmtmp/PWRPNT/PCBENCHM.PPT 
(a725ede8-bc10-42a1-9622-55afad13f9f7)
[2013-03-16 05:34:03.112816] W [client3_1-fops.c:881:client3_1_flush_cbk] 
0-remote9: remote operation failed: No such file or directory
[2013-03-16 05:34:03.112816] W [client3_1-fops.c:881:client3_1_flush_cbk] 
0-remote9: remote operation failed: No such file or directory
[2013-03-16 05:34:03.112816] W [client3_1-fops.c:881:client3_1_flush_cbk] 
0-remote9: remote operation failed: No such file or directory
[2013-03-16 05:34:03.112816] W [client3_1-fops.c:881:client3_1_flush_cbk] 
0-remote9: remote operation failed: No such file or directory
[2013-03-16 05:34:03.112816] W [client3_1-fops.c:881:client3_1_flush_cbk] 
0-remote9: remote operation failed: No such file or directory
[2013-03-16 05:34:03.112816] W [client3_1-fops.c:881:client3_1_flush_cbk] 
0-remote9: remote operation failed: No such file or directory
[2013-03-16 05:34:03.112816] W [client3_1-fops.c:881:client3_1_flush_cbk] 
0-remote9:

Re: [Gluster-users] Questions about gluster/fuse, page cache, and coherence

2013-03-18 Thread nlxswig
Good questions.
Why is there no reply?


At 2011-08-16 04:53:50,"Patrick J. LoPresti"  wrote:
>(FUSE developers:  Although my questions are specifically about
>Gluster, I suspect most of the answers have more to do with FUSE, so I
>figure this is on-topic for your list.  If I figured wrong, I
>apologize.)
>
>I have done quite a bit of searching looking for answers to these
>questions, and I just cannot find them...
>
>I think I understand how the Linux page cache works for an ordinary
>local (non-FUSE) partition.  Specifically:
>
>1) When my application calls read(), it reads from the page cache.  If
>the page(s) are not resident, the kernel puts my application to sleep
>and gets busy reading them from disk.
>
>2) When my application calls write(), it writes to the page cache.
>The kernel will -- eventually, when it feels like it -- flush those
>dirty pages to disk.
>
>3) When my application calls mmap(), page cache pages are mapped into
>my process's address space, allowing me to create a dirty page or read
>a page by accessing memory.
>
>4) When the kernel reads a page, it might decide to read some other
>pages, depending on the underlying block device's read-ahead
>parameters.  I can control these via "blockdev".  On the write side, I
>can exercise some control with various VM parameters (dirty_ratio
>etc).  I can also use calls like fsync() and posix_fadvise() to exert
>some control over page cache management at the application level.
>
>
>My question is pretty simple.  If you had to re-write the above four
>points for a Gluster file system, what would they look like?  If it
>matters, I am specifically interested in Gluster 3.2.2 on Suse Linux
>Enterprise Server 11 SP1 (Linux 2.6.32.43 + whatever Suse does to
>their kernels).
>
>Does Gluster use the page cache on read()?  On write()?  If so, how
>does it ensure coherency between clients?  If not, how does mmap()
>work (or does it not work)?
>
>What read-ahead will the kernel use?  Does posix_fadvise(...,
>POSIX_FADV_WILLNEED) have any effect on a Gluster file system?
>
>I find it hard to imagine that I am the only person with questions
>like these...  Did I miss a FAQ list somewhere?
>
>Thanks.
>
> - Pat
>___
>Gluster-users mailing list
>Gluster-users@gluster.org
>http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users