Hi,
For a test I read from a file with dd:
dd if=someFile bs=64k of=/dev/null
This will do two reads of 65536 (64k) bytes, one read of 24053 bytes, and one
final read() returning 0 bytes at EOF.
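(As a sanity check, that pattern can be confirmed with strace; someFile is
just the test file from above:)

  # Expect two 65536-byte reads, one 24053-byte read, and a final
  # read() returning 0 at EOF (a few unrelated startup reads from
  # the loader will also show up):
  strace -e trace=read dd if=someFile bs=64k of=/dev/null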
I now check the watch folder (WF) JSON output; looking for the new ReadOffset
and bytesRead fields, I get:
kafkacat -b my-broker -t test1-watch | while read -r mist; do echo "$mist" | python -m json.tool | grep -i read; done
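(Side note: if jq is available the same filtering is a bit cleaner; this
assumes the read counters sit at the top level of each event, adjust the
path if they are nested:)

  kafkacat -b my-broker -t test1-watch | jq '{bytesRead, minReadOffset, maxReadOffset}'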
"bytesRead": "0",
"minReadOffset": "9223372036854775807",
"maxReadOffset": "0",
That is the open(); minReadOffset apparently starts at 2^63-1 (LLONG_MAX) and
maxReadOffset at 0, i.e. no read has happened yet...
"bytesRead": "65536",
"minReadOffset": "0",
"maxReadOffset": "65535",
That is the first read()....
"bytesRead": "131072",
"minReadOffset": "0",
"maxReadOffset": "131071",
Who reads 2x64k bytes from the beginning? 131072 is exactly 65536 + 65536, so
is this the accumulated read data?
"bytesRead": "155125",
"minReadOffset": "0",
"maxReadOffset": "155124",
And again: now it looks as if the entire file were read in one event?
155125 is 131072 + 24053, i.e. the whole file, which again suggests a
running total rather than per-event values.
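If these counters really are cumulative per open file, the individual read()
sizes should fall out by differencing consecutive events. A minimal sketch,
assuming a single watched file, in-order events, and top-level fields:

  # Hypothetical: turn the cumulative bytesRead into per-read deltas.
  kafkacat -b my-broker -t test1-watch \
    | jq -r 'select(.bytesRead != null) | .bytesRead | tonumber' \
    | awk 'NR > 1 { print "read:", $1 - prev } { prev = $1 }'

For the events above that would print 65536, 65536 and 24053, i.e. exactly
dd's three reads.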
The documentation
(https://www.ibm.com/docs/en/spectrum-scale/5.1.3?topic=folder-json-attributes-in-clustered-watch)
doesn't explain this....
--
Dr. Jürgen Hannappel DESY/IT Tel. : +49 40 8998-4616
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org