Hello all, I'm not sure what to make of this one. I just spent a couple of hours trying to find the "leak" in an algorithm of mine. It was reading roughly 1200 records, claiming to have processed all of them, and yet writing only about 450 rows to an output text file. One clue should have been that the number of output rows was somewhat random; I did not fully appreciate that until I worked around the problem.
I tried an explicit #flush - no help. I looked for logic errors and found none. The file was being written from Ubuntu 9.04 to a Windows-hosted share mounted via CIFS (which I am learning to view with contempt). You can see where this is going: writing the file locally gave the expected result.

Any ideas on how one might further isolate the problem? My trust in Windows is well known<g>; I have never liked shared directories; I _really_ do not like CIFS compared (reliability-wise) to SMBFS; and the network between me and the server is in question too (long story). All of that said, file support in Squeak, and hence so far inherited by Pharo, is not the best code I have seen to date, so it is easy to suspect as well. Can one argue that, since it worked locally, Pharo is not the problem?

The little bit that I know of CIFS is not encouraging. It sounds as though things moved from an easily killed user-space process into the kernel, which shows an almost Windows-like unwillingness to shut down when it cannot see servers. I have found numerous reports of problems copying large files over CIFS, and I have encountered them myself.

Bill

_______________________________________________
Pharo-project mailing list
[email protected]
http://lists.gforge.inria.fr/cgi-bin/mailman/listinfo/pharo-project
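One way to take Pharo out of the equation entirely would be a small script in another language that does the same kind of write against both a local path and the mount, then rereads and counts what actually landed on disk. A minimal sketch in Python (the paths are placeholders; substitute your actual CIFS mount point):

```python
#!/usr/bin/env python3
# Cross-check independent of Pharo: write a known number of lines,
# flush and fsync, then reopen the file and count the lines that are
# actually there. If the local path checks out but the CIFS mount comes
# up short, the image is off the hook and CIFS (or the network) is not.
import os

def write_and_verify(path, n=1200):
    """Write n numbered lines to path, then reread and return the count."""
    with open(path, "w") as f:
        for i in range(n):
            f.write("record %d\n" % i)
        f.flush()
        os.fsync(f.fileno())  # push past OS buffers, not just stdio's
    with open(path) as f:
        return sum(1 for _ in f)

if __name__ == "__main__":
    # "/mnt/share/check.txt" is a hypothetical CIFS mount point.
    for target in ("/tmp/check.txt", "/mnt/share/check.txt"):
        try:
            count = write_and_verify(target)
            print("%s: wrote 1200, read back %d" % (target, count))
        except OSError as e:
            print("%s: failed (%s)" % (target, e))
```

Running it a few times in a row would also show whether the shortfall is random, as the row counts were. If even this misbehaves on the mount, the next suspects would be the mount options and the network, not any application code.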
