On 12/14/2015 03:44 AM, Andrus, Brian Contractor wrote:
All,
I have a small gluster filesystem on 3 nodes.
I have a Perl program that multi-threads, and each thread writes its
output to one of 3 files depending on some results.
My trouble is that I am seeing missing lines from the output.
The input is a file of 500 lines. Each line gets written to one of
three files depending on its content, but when I total the lines across
the output files, anywhere from 4 to 8 are missing.
This happens even with an input file whose lines should all go to a
single output file.
BUT… when I have it write to /tmp or /dev/shm, all of the expected
lines are there.
This leads me to think gluster is not happy with concurrent writes.
Here is the code for the actual write:
use Fcntl qw(:flock :seek);                        # for LOCK_EX, LOCK_UN, SEEK_END

flock(GOOD_FILES, LOCK_EX) or die $!;              # take an exclusive lock
seek(GOOD_FILES, 0, SEEK_END) or die $!;           # reposition to the current end of file
print GOOD_FILES $lines_to_process[$tid-1] . "\n";
flock(GOOD_FILES, LOCK_UN) or die $!;              # release the lock
So I would expect proper file locking to be taking place.
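A stand-alone reproducer along these lines can separate the locking
pattern from the filesystem: run it once against the gluster mount and
once against /tmp and compare the counts. This is a sketch under my own
assumptions (the file name, thread count, and line count are arbitrary),
not the actual program:

use strict;
use warnings;
use threads;
use Fcntl qw(:flock :seek);

my $file     = $ARGV[0] // 'out.txt';   # point at the gluster mount, then at /tmp
my $nthreads = 8;
my $nlines   = 500;

unlink $file;                           # start from an empty file

my @workers = map {
    my $tid = $_;
    threads->create(sub {
        open(my $fh, '>>', $file) or die "open: $!";
        for my $i (1 .. $nlines) {
            flock($fh, LOCK_EX)    or die "flock: $!";
            seek($fh, 0, SEEK_END) or die "seek: $!";
            print {$fh} "thread $tid line $i\n";
            flock($fh, LOCK_UN)    or die "flock: $!";
        }
        close($fh) or die "close: $!";
    });
} 1 .. $nthreads;
$_->join for @workers;

# count the lines that actually landed on disk
open(my $in, '<', $file) or die "open: $!";
my $count = () = <$in>;
print "expected ", $nthreads * $nlines, " lines, found $count\n";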
Is it possible that gluster is not writing because of a race condition?
Maybe it is because of caching? Could you try with "gluster volume set
<volname> performance.write-behind off"?
Pranith
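To apply Pranith's suggestion (keeping <volname> as a placeholder for
the actual volume name), and to confirm the option took effect:

gluster volume set <volname> performance.write-behind off
gluster volume info <volname>    # the override shows up under "Options Reconfigured"

If the lines stop going missing with write-behind off, that points at
the client-side write-behind cache rather than the locking in the
Perl program.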
Any insight as to where to look for a solution is appreciated.
Brian Andrus
_______________________________________________
Gluster-users mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-users