On 22 Jul 2018, at 12:10, Norbert Hartl wrote:

Thank you guys! It is really great to see that there are people like Max jumping onto problems like this and solving them.

You're welcome!


@sven I’m eager to test when the new configuration is available. Please drop us a note when you are done!

thanks again,

Norbert

On 22 Jul 2018, at 11:06, Sven Van Caekenberghe <[email protected]> wrote:

Ah, that makes total sense. Great catch. Thank you.

===
Name: Zinc-FileSystem-SvenVanCaekenberghe.17
Author: SvenVanCaekenberghe
Time: 22 July 2018, 11:04:40.148339 am
UUID: e97a508e-f42d-0d00-a127-f016078b1897
Ancestors: Zinc-FileSystem-SvenVanCaekenberghe.16

Bugfix to ZnFileSystemUtils class>>#newBinaryFileNamed:do: we must ensure that the binaryStream opened here is closed as well (thx Max Leske for the bug hunting and fix)
===

The method in question now looks as follows:

newBinaryFileNamed: fileName do: block
	| fileReference |
	fileReference := fileName asFileReference.
	^ (fileReference respondsTo: #binaryWriteStreamDo:ifPresent:)
		ifTrue: [
			fileName asFileReference
				binaryWriteStreamDo: block
				ifPresent: [ FileExists signalWith: fileReference ] ]
		ifFalse: [
			fileReference isFile
				ifTrue: [ FileExists signalWith: fileReference ]
				ifFalse: [ | binaryStream |
					binaryStream := self binaryFileStreamFor: fileName.
					[ block value: binaryStream ] ensure: [ binaryStream close ] ] ]
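
As a quick sanity check, something like the following in a playground should now write, close and reread the file without leaving an open handle behind (illustrative only, the file name is arbitrary):

'/tmp/zn-close-test.bin' asFileReference ensureDelete.
ZnFileSystemUtils
	newBinaryFileNamed: '/tmp/zn-close-test.bin'
	do: [ :out | out nextPutAll: #[ 1 2 3 4 ] ].
"rereading immediately should answer 4"
'/tmp/zn-close-test.bin' asFileReference
	binaryReadStreamDo: [ :in | in upToEnd size ]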

I'll create a new 2.9.4 of the config later on.

On 21 Jul 2018, at 22:00, Max Leske <[email protected]> wrote:

Sven, you're right. What I found last time only works as a side effect. The real problem is in ZnFileSystemUtils class>>newBinaryFileNamed:do:. In Pharo 6.1 the logic there leads to the following evaluation:

block value: (self binaryFileStreamFor: fileName)

This is the only place where the stream is created without a surrounding safety block (e.g. #writeStream:do:), which means no one closes the stream. The line should be:

| s |
s := self binaryFileStreamFor: fileName.
[ block value: s ] ensure: [ s close ]

ZnFileSystemUtils class>>newBinaryFileNamed:do: is only used by ZnClient after loading Zinc-HTTP-SvenVanCaekenberghe.472, which is being loaded by version 2.9.1 of ConfigurationOfZincHTTPComponents.

Cheers,
Max

On 21 Jul 2018, at 17:21, Max Leske wrote:

Just to be safe, I'll redo my experiment and get back to you.

On 21 Jul 2018, at 16:59, Sven Van Caekenberghe wrote:
I tried running Max's snippet (Pharo 6.1 on Ubuntu 16.04 LTS),

ZnClient new
url: 'https://github.com/zweidenker/Parasol/archive/master.zip';
downloadTo: '/tmp/foobar.zip'.
bytes := '/tmp/foobar.zip' asFileReference binaryReadStreamDo: [ :s | s contents ].
Transcript open; show: bytes size; cr.
5 seconds asDelay wait.
Transcript show: ('/tmp/foobar.zip' asFileReference binaryReadStreamDo: [ :s | s contents ]) size.

And it gives me,

318420
318420

For me there is nothing to see here ... and I have the impression we are all talking about different things.
On 21 Jul 2018, at 14:18, Max Leske <[email protected]> wrote:

On 21 Jul 2018, at 12:07, Norbert Hartl wrote:
Max,

what do you think would be a proper fix for this? I cannot tell whether this is a VM problem or not.

In my opinion it should be fixed in Zinc as there's no way for the user of Zinc to flush the stream. The alternative would be for Zinc to expose a method or setting that allows the user to specify the expected behaviour. I do not think that this should be handled by the VM plugin or the stream class, as there are indeed different behaviours that make sense.

Cheers,
Max
Norbert
On 14 Jul 2018, at 08:25, Max Leske <[email protected]> wrote:

Hi Nicolas,

The PR you are referring to [1] concerns a missing fflush() when truncating a file. I believe that is a different case from this one, as a) the file is not truncated and b) in the case of truncation it is an explicit requirement that the file be empty before the first write happens. In our case, and in general, it depends on the situation whether flushing must occur immediately, e.g. because the file will be read again right away, or not, e.g. when writing data to a backup file.

Thanks for the pointer though.


Cheers,
Max


[1] https://github.com/OpenSmalltalk/opensmalltalk-vm/pull/271
On 12 Jul 2018, at 23:19, Nicolas Cellier wrote:

Isn't there a recent PR on opensmalltalk-vm that addresses just this?
On Thu, 12 Jul 2018 at 23:16, Max Leske <[email protected]> wrote:

Hi Norbert,

I was able to reproduce the problem and then identify the culprit,
although what I don't yet understand is how this is related to the changes
in Zinc.

Anyway, the problem is that the file stream isn't properly flushed in ZnUtils class>>streamFrom:to:size:. Adding outputStream flush as the last
statement fixes your problem. Apparently, StandardFileStream /
MultiByteFileStream perform a simple close() on the file which, according to the man page on close(), does *not* guarantee that the contents have all been written to file. In this case a flush is necessary because the
entire file is immediately read again.
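
To make that concrete, the fix would have roughly this shape (a rough sketch only, not the actual Zinc source; totalSize is ignored here):

streamFrom: inputStream to: outputStream size: totalSize
	"Sketch: copy in buffered chunks, then flush so the bytes actually
	reach the file before the plain close() that follows later."
	| buffer count |
	buffer := ByteArray new: 16384.
	[ inputStream atEnd ] whileFalse: [
		count := inputStream readInto: buffer startingAt: 1 count: buffer size.
		outputStream next: count putAll: buffer startingAt: 1 ].
	outputStream flush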

Here's a smaller test case for you to play with, Sven:

ZnClient new
url: 'https://github.com/zweidenker/Parasol/archive/master.zip';
downloadTo: '/tmp/foobar.zip'.
bytes := '/tmp/foobar.zip' asFileReference binaryReadStreamDo: [ :s | s contents ].
Transcript open; show: bytes size; cr.
5 seconds asDelay wait.
Transcript show: ('/tmp/foobar.zip' asFileReference binaryReadStreamDo: [ :s | s contents ]) size.

The output in the Transcript should be:

315392
318420

Cheers,
Max

On 12 Jul 2018, at 8:17, Norbert Hartl wrote:

On 12 Jul 2018, at 08:05, Max Leske [email protected] wrote:

On 11 Jul 2018, at 22:44, Norbert Hartl wrote:

Hi Max,

I constructed a case that fails exactly like the one I'm experiencing. Could you
try it? Just unpack the attachment on a Linux machine, set the PHARO_VM env variable to your
executable and execute build.sh.

I will. Might take a couple of days though.

No problem. I'm happy if you find time.

Norbert

Max

Norbert

On 10 Jul 2018, at 09:17, Max Leske [email protected] wrote:

On 10 Jul 2018, at 8:48, Norbert Hartl wrote:

Max,

thanks for the effort.

No worries.

On 10 Jul 2018, at 08:37, Max Leske [email protected] wrote:

I did my best to reproduce the issue but didn't have any luck. I really need access to a Linux VM to debug this. So I'm praying that Apple fixes
the access restrictions to kernel extensions in their next beta...

BTW, Metacello uses ZnClient, which uses ZnFileSystemUtils, so there is
indeed a chance that something goes wrong during download of the zip
archive and that something may be tied to a difference in the versions of
Zinc (although I still think that's unlikely).

Yes, there is potential, but I don't see it. I take a fresh 6.1 image and
load my project into it. I'm not sure, but I think Zinc 2.9.2 is loaded rather
early in that process. So I wonder why it does not go wrong in the first
phase, and also not if I load the test group within the first phase.
It must be either the second Metacello invocation or the stopping, copying
and starting of the image.
I'll try to isolate this case further and provide a script that goes wrong on
my machine. But it will take some time, because I had to stop trying to
solve this after wasting nearly two days on it already.

Let me know once you have something and I'll try to help out.

Norbert

Max

On 9 Jul 2018, at 19:43, Norbert Hartl wrote:

On 09 Jul 2018, at 19:10, Max Leske [email protected] wrote:

I checked the Parasol archive and it does not appear to be in Zip64 format (Metacello uses ZipArchive, which can't cope with Zip64, yet ZipArchive reads the Parasol zip just fine). So my next guess is that there's either a problem with Metacello or with Pharo in the way that ZipArchive is used (e.g. an endianness problem or non-binary stream data). It would therefore be helpful to find out what happens in ZipArchive>>readFrom:, i.e. what kind of stream is passed / opened, is it binary, does the file exist and is it still the
correct file.
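
For example, a quick playground check along these lines would already tell us whether ZipArchive can parse the downloaded file at all (just a sketch; the path is a placeholder for the file from the debug message):

'/path/to/downloaded/Parasol.zip' asFileReference binaryReadStreamDo: [ :in |
	| archive |
	archive := ZipArchive new.
	archive readFrom: in.
	"list what was parsed from the archive's directory"
	archive memberNames ]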

I couldn't see anything obvious. The file in the debug message exists; it
is a readable zip file. The way Metacello uses it, it is a
StandardFileStream, which is switched to binary in the code.
But the only difference between a working and a non-working build is the
upgrade to Zinc 2.9.2.

Norbert

I'd debug it myself but I can't run VirtualBox at the moment because I'm
on the macOS beta and it won't start...

Max

On 9 Jul 2018, at 18:31, Norbert Hartl wrote:

Hi Max,

On 09 Jul 2018, at 18:18, Max Leske [email protected] wrote:

Hi Norbert,

This is a bit of a guess, but it's possible that the archive that is
downloaded from github is in Zip64 format and that the libraries for
extracting Zip64 are missing on your Linux. That would of course contradict
the experience that the same operation appears to work in 6.1.

To be honest, I don't know what the Zip64 format is. I thought the Zip classes
are pure Smalltalk for unpacking.

Try extracting the archive manually on your Linux machine with the same method that Metacello uses (I assume Metacello uses the ZipPlugin, which
will probably use the system libraries).
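
Something like this in a playground would do it with the in-image classes (a sketch; both paths are placeholders, adjust them):

| archive target |
target := '/tmp/parasol-extracted' asFileReference.
target ensureCreateDirectory.
'/path/to/downloaded/Parasol.zip' asFileReference binaryReadStreamDo: [ :in |
	archive := ZipArchive new.
	archive readFrom: in.
	"unpack everything into the target directory"
	archive extractAllTo: target ]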

I have no ZipPlugin library in any of my VMs.

But zips do get downloaded and unpacked. I start the image a first time, loading all the code of my project. Then it is saved, copied to a new name and reopened to load the tests, and then it fails. I did try to load
the tests in the first run, and then it works.

I’m running out of ideas

thanks,

Norbert

Cheers,
Max

On 9 Jul 2018, at 17:14, Norbert Hartl wrote:

With the help of Esteban I got one step further. If I do

MCWorkingCopy
	managersForClass: ZnFileSystemUtils
	do: [ :each | each ancestry initialize ]

before loading my project it updates Zinc-FileSystem as well. Sadly it
still does not work for me because I get

Loading baseline of BaselineOfMobilityMap...
...RETRY->BaselineOfParasol
...RETRY->BaselineOfParasol Error: can't find EOCD position

and I don't know why. The zip file is downloaded from GitHub and present, but
it fails, and only on my Jenkins on Linux. On my Mac this works.

I'll try a few things, but then I'm back on Pharo 6.1 for the time being :(

Norbert

On 07 Jul 2018, at 13:28, Norbert Hartl [email protected] wrote:

Really? I thought the same, but then I didn't believe it works like that.

Anyway, this would be very easy to solve. We just need to ask Sven if he is
fine with doing an empty .16 version of Zinc-FileSystem and either an
in-place version reset of 2.9.2 or a new 2.9.3. I'm not fully convinced
that will solve it, but the cost won't be very high.

Norbert

On 07 Jul 2018, at 13:22, Cyril Ferlicot [email protected] wrote:

I need to investigate more. There are two .15 versions, but there is a year in
between (if you didn't notice, TheIntegrator.15 is from 2017). I just want
to have more context information, because I can only see that this is
strange but I'm lacking insight.

I'm trying to figure out why it does not update Zinc-FileSystem. No matter
what I do, I cannot get Metacello to load the newer package. That would fix
the issue, because I'm loading 2.9.2, which should bring in
Zinc-FileSystem-SVC.15, and not stay on the one included in the image.
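
For reference, this is roughly how I check which Zinc-FileSystem version is actually loaded in the image (quick playground snippet):

MCWorkingCopy managersForClass: ZnFileSystemUtils do: [ :each |
	Transcript
		show: each packageName;
		show: ' -> ';
		show: (each ancestry ancestors collect: [ :ancestor | ancestor name ]) printString;
		cr ]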

I think this is important for everyone who has a product based on 6.1 and
wants to migrate to Pharo 7 someday. As it is, it is impossible to do it
step by step.

If there is a package .15 in the image and a package .15 in the repo, I
think Metacello will not update, because it relies on the numbers to know when
to update. If it finds a .15 and currently has a .15, I think Metacello will
think they are at the same version.
Norbert

--
Cyril Ferlicot
https://ferlicot.fr



