[Pharo-dev] StHub Support Ticket

2014-03-24 Thread Sven Van Caekenberghe
Can someone please help Philippe Marschall, who is having trouble saving to 
StHub? His account is http://www.smalltalkhub.com/#!/~marschall

He seems to be able to log in to the website, but he can't save anything: not 
in the Seaside repositories, nor in a new Test repository that he just created. 
He was able to work normally until very recently (February).

TIA,

Sven




[Pharo-dev] FileSystemGit and Fuel final report

2014-03-24 Thread Max Leske
For those who’ve been interested in what went on during the last two weeks, 
where I visited Lille, I’ve compiled a short report. Regarding my work on Git 
I’ll continue to report on the status from time to time.

I want to thank everyone at RMoD once again for the warm welcome and the 
support I received. Special thanks to Ben and Camillo for providing a place to 
crash at.
Thanks also go to all of you guys and girls on the list for pushing Pharo 
constantly and for showing interest in my work. It’s very encouraging.

Cheers,
Max


Git:
- Esteban Lorenzano and I worked together on preparing the libgit2 and libssh2 
libraries for integration into the VM
- Igor Stasenko worked with me to solve a couple of problems I had with 
NativeBoost; callbacks in particular can be tricky
- I learnt how to build the VM in debug mode so that I could debug FFI calls 
in Xcode
- Stefan Marr worked with me on moving the tests from Phexample to SUnit
- The implementation now enables writing of blobs, trees and commits, as well 
as cloning of, fetching from, and pushing to remote repositories over HTTPS
- Authentication with remote repositories via SSH is working, but clone, fetch 
and push don't work yet (problems with the libssh2 interaction that I haven't 
been able to resolve)
- I set up an initial build infrastructure on the INRIA CI server 
(https://ci.inria.fr/pharo-contribution/job/LibGit2/)
- We defined a rough roadmap for what needs to be done in the near future:
   1. finish the low-level libgit2 abstraction (offer a minimal API that 
hides the bindings; we don't want users to use the bindings directly)
   2. we already have a prototype of a FileSystem wrapper for Git; we want 
to use that on top of the libgit2 abstraction layer
   3. we already have a Monticello FileSystem wrapper prototype; we want to 
use that to abstract from the actual storage method. Together with the 
Git-FileSystem wrapper, this should make it very easy to continue (for now) 
using Monticello and the existing GUI tools while using Git as a backend for 
storage.
- On Friday the 21st I gave a short demo at RMoD on the work accomplished and 
the plans for the future

Fuel:
- Martín Dias and I worked together on:
   - debugging a problem with large object graphs
   - preparing a new baseline for Fuel 2
   - moving the benchmark suite to SMark
   - setting up a benchmark build on the INRIA CI server, which will help us 
track performance when introducing changes
   - defining a rough roadmap for Fuel
- Had a discussion with Usman Bhatti about the uses and the future of Fuel in 
Moose

Re: [Pharo-dev] FileSystemGit and Fuel final report

2014-03-24 Thread Stephan Eggermont
Great!

I have a different (and lower priority) use case for the libgit2 bindings:
Michael Feathers has done some history analysis on GitHub repositories,
which we used at the SPA2011 egg race: basically, commit info at
method level for 4 Ruby projects. I guess I'd have to use the low-level
bindings to do similar things?

Stephan


Re: [Pharo-dev] StHub Support Ticket

2014-03-24 Thread Nicolas Petton

Sven Van Caekenberghe writes:

 Can someone please help Philippe Marschall who is having trouble saving to 
 StHub ? His account is http://www.smalltalkhub.com/#!/~marschall

 He seems to be able to log in to the website part, but can't save
 anything. Not in the Seaside repositories, nor in a new Test
 repository that he just created. He was able to work normally until
 very recently (february).

Hi!

Yes, Philippe sent me an MCZ file, and I'm doing some testing.

Cheers,
Nico


 TIA,

 Sven


-- 
Nicolas Petton
http://nicolas-petton.fr



Re: [Pharo-dev] FileSystemGit and Fuel final report

2014-03-24 Thread Sven Van Caekenberghe
Sounds like a productive week !
Thanks for the report and the work.

On 24 Mar 2014, at 10:06, Max Leske maxle...@gmail.com wrote:

[snip]



Re: [Pharo-dev] FileSystemGit and Fuel final report

2014-03-24 Thread Max Leske

On 24.03.2014, at 11:08, Stephan Eggermont step...@stack.nl wrote:

 Great!
 
 I have a different (and lower priority) use case for the libgit2 bindings:
 Michael Feathers has done some history analysis on github repositories
 which we’ve used at the SPA2011 egg race. Basically, commit info on
 method level for 4 ruby projects. I guess I’d have to use the low level
 bindings to do similar things?

Not necessarily. The API could expose `git log` operations, and then you could 
easily get a list of all commits referencing a specific method (for instance). 
If you want to do that yourself, of course, then yes, you'll have to use the 
bindings (or rather the minimal objectified abstraction; using the bindings 
directly is kind of dangerous). In my opinion, however, Git provides all the 
tools you need for that, and you should use those tools if possible.

Does that answer your question? Bug me again if not :)

Max

 
 Stephan




Re: [Pharo-dev] Debugger acting up

2014-03-24 Thread Gary Chambers

Hi,

maybe related to something I found when using our locked-down images (no 
Tools in ToolRegistry).


In SmalltalkImage>>openLog,

FileStream fileNamed: Smalltalk tools debugger logFileName

was causing a loop, since the default error handling ends up sending 
#logDuring:, which sends #openLog, causing an attempt to log an error (no 
debugger)...

For a quick workaround I changed to

openLog
	"This is a _private_ method.
	Because it really belongs to the logging facility,
	we should delegate to it at some point."

	^ (FileStream fileNamed: Debugger logFileName)
		wantsLineEndConversion: true;
		setToEnd;
		yourself

though obviously it is not ideal to have the direct reference to Debugger there.

Regards, Gary

- Original Message - 
From: Diego Lont diego.l...@delware.nl

To: Pharo Development List pharo-dev@lists.pharo.org
Sent: Friday, March 21, 2014 2:32 PM
Subject: [Pharo-dev] Debugger acting up


Hi all,

I do not have a proper test case yet, but as I have now seen it twice I want 
to ask you guys whether you have any problems with it.


In a recent Moose 5.0 image with Seaside and some other stuff loaded, the 
page would not load. Interrupting the image showed that the debugger was 
lost in the following loop:


MethodContext(ContextPart)>>handleSignal:
MethodContext(ContextPart)>>handleSignal:
MethodContext(ContextPart)>>handleSignal:
... (the same frame repeated many more times) ...

Originally it was a halt that caused the signal, and that should have popped 
up the debugger, but instead it went into an infinite loop.


Now I see that building Pier3 development on a Pharo 2 image also seems to 
cause this loop. The root cause is also in the mail, so maybe this is just 
the size of the stack...:
Startup Error: An attempt to use interactive tools detected, while in 
non-interactive mode
Interactive Request: You are about to load new versions of the following 
packages

that have unsaved changes in the image:

Grease-Tests-Pharo20-Core

If you continue, you will lose these changes:
NonInteractiveUIManager>>nonInteractiveWarning:
NonInteractiveUIManager>>nonInteractiveRequest:title:
NonInteractiveUIManager>>nonInteractiveRequest:
NonInteractiveUIManager>>confirm:trueChoice:falseChoice:cancelChoice:default:
MCMergeOrLoadWarning>>defaultAction
UndefinedObject>>handleSignal:

If anyone knows where this problem might come from, please send a mail to the 
mailing list.


Cheers,
Diego





[Pharo-dev] Created PetitParserExtension project

2014-03-24 Thread Guillaume Larcheveque
Hi everybody,

I am using PetitParser a lot and have implemented some little tools that
help me write or generate grammars. I put two of the most useful ones in
the PetitParserExtension project (
http://smalltalkhub.com/#!/~Moose/PetitParserExtensions).

The first one is useful when you generate grammars with lots of rules:

A PPExtendedCompositeParser offers you a new way to create rules. Just
define your rule as a method in the #rules protocol and it will be managed
exactly like rules in PPCompositeParser.
You will not have the limitation of 256 rules due to the instance variable
limit, but you will need to refer to your rule with the method #rule: in any
other rule that uses it.

You can mix the PPCompositeParser and PPExtendedCompositeParser ways of
defining rules.
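
Based only on the description above, a hypothetical sketch of what that could
look like (the class name MyGrammar and both rule bodies are invented; the
#rules protocol and the #rule: lookup are as described in this mail):

```smalltalk
"Subclass the extended parser instead of PPCompositeParser."
PPExtendedCompositeParser subclass: #MyGrammar
	instanceVariableNames: ''
	classVariableNames: ''
	package: 'PetitParserExtensions-Examples'.

"MyGrammar>>identifier, a method in the #rules protocol:"
identifier
	^ #letter asParser plus flatten

"MyGrammar>>assignment, also in #rules; it references the other rule
 through #rule: rather than through an instance variable:"
assignment
	^ (self rule: #identifier) , $= asParser , (self rule: #identifier)
```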


The second one is handy for token identification because it creates a parser
that parses in time proportional to the token length, not to the number of
different tokens.

A PPMultiStringParser is a tool able to create a really efficient parser over
a huge collection of Strings. This parser will match any string in the
collection, and the longest one if two are matchable: (PPMultiStringParser
on: #('tin' 'tintin')) parse: 'tintin' will match 'tintin' and not just
'tin'.
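
The inline example, reformatted as a workspace snippet (the extra 'milou'
entry is an invented addition to show matching against a larger collection):

```smalltalk
| parser |
parser := PPMultiStringParser on: #('tin' 'tintin' 'milou').
"Per the description above, the longest matchable string wins:"
parser parse: 'tintin'.   "should match 'tintin', not just 'tin'"
parser parse: 'milou'
```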


If someone else has implemented some tools around PetitParser, it will be
great to put those all together so feel free to commit your tools in this
project and to improve mines.

-- 
*Guillaume Larcheveque*


Re: [Pharo-dev] new website about Artefact

2014-03-24 Thread Sergi Reyner
2014-03-24 14:40 GMT+00:00 olivier olivier.auver...@gmail.com:

 Hi,

 A website is now available about Artefact. Documentation and useful
 information are grouped to help you produce many beautiful PDF
 documents with Pharo.

 https://sites.google.com/site/artefactpdf/


I am of the opinion that any documentation that includes unicorns is an
automatic +1.
So, +2!

Cheers,
Sergi


Re: [Pharo-dev] new website about Artefact

2014-03-24 Thread camille teruel
Congrats it's very funny :D


2014-03-24 15:40 GMT+01:00 olivier olivier.auver...@gmail.com:

 Hi,

 A website is now available about Artefact. Documentation and useful
 information are grouped to help you produce many beautiful PDF
 documents with Pharo.

 https://sites.google.com/site/artefactpdf/

 The pasta team



Re: [Pharo-dev] new website about Artefact

2014-03-24 Thread p...@highoctane.be
+3




On Mon, Mar 24, 2014 at 3:53 PM, Sergi Reyner sergi.rey...@gmail.com wrote:

 2014-03-24 14:40 GMT+00:00 olivier olivier.auver...@gmail.com:

 Hi,

 A website is now available about Artefact. Documentation and useful
 information are grouped to help you produce many beautiful PDF
 documents with Pharo.

 https://sites.google.com/site/artefactpdf/


 I am of the opinion that any documentation that includes unicorns is an
 automatic +1.
 So, +2!

 Cheers,
 Sergi



[Pharo-dev] threading in Pharo

2014-03-24 Thread Alexandre Bergel
Hi!

Threads in Pharo have always been mysterious to me.
If I do-it the following: [ true ] whileTrue
Can another thread interrupt this?

Said in other words, can the following piece of code suffer from a 
concurrency problem in Pharo?
anOrderedCollection add: 42

My current understanding of threads is that scheduling may occur each time we 
enter the VM (e.g., a primitive call, instantiating an object, throwing an 
exception). So the code “anOrderedCollection add: 42” will _never_ suffer 
from a concurrent call, because adding 42 to a collection does not enter the 
VM. Does this make sense?

What is the current status of this?

Thanks to you all
Alexandre
-- 
_,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
Alexandre Bergel  http://www.bergel.eu
^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.






Re: [Pharo-dev] threading in Pharo

2014-03-24 Thread Jan Vrany

Hi,

On 24/03/14 17:56, Alexandre Bergel wrote:

Hi!

Threads in Pharo have always been mysterious to me.
If I do-it the following: [ true ] whileTrue
Can another thread interrupt this?

Said in other words, can the following piece of code suffer from a 
concurrency problem in Pharo?
anOrderedCollection add: 42

My current understanding of threads is that scheduling may occur each time we 
enter the VM (e.g., a primitive call, instantiating an object, throwing an 
exception). So the code “anOrderedCollection add: 42” will _never_ suffer 
from a concurrent call, because adding 42 to a collection does not enter the 
VM. Does this make sense?


I don't think so. AFAIK interrupts are checked on message sends and on
backward jumps (because of inlined loops, just like in your example).

So a context switch may happen on each send...

Jan



What is the current status of this?

Thanks to you all
Alexandre






Re: [Pharo-dev] Just found, i can compile this:

2014-03-24 Thread Eliot Miranda
On Mon, Mar 24, 2014 at 10:00 AM, Christophe Demarey 
christophe.dema...@inria.fr wrote:

 Hi Eliot,

 Le 20 mars 2014 à 17:04, Eliot Miranda a écrit :

 On Thu, Mar 20, 2014 at 2:03 AM, Christophe Demarey 
 christophe.dema...@inria.fr wrote:

 Hi Eliot,

 Le 19 mars 2014 à 16:25, Eliot Miranda a écrit :

 Hi Christophe,

 On Mar 19, 2014, at 1:45 AM, Christophe Demarey 
 christophe.dema...@inria.fr wrote:


 Le 18 mars 2014 à 19:50, Eliot Miranda a écrit :

 On Tue, Mar 18, 2014 at 10:10 AM, Christophe Demarey 
 christophe.dema...@inria.fr wrote:

 Why should arguments be read-only?
 They are just temporary variables with an initial value.


 Read the blue book.  It was a decision of the language designers to
 forbid assignment to arguments to allow debugging.  The assignment to block
 arguments is a side-effect of the old BlockContext implementation of blocks
 where block arguments were mapped onto temporary variables of the home
 context.  It is an anachronism and should be forbidden also.


 Thank you for the explanation.
 I'm just curious why it is so difficult to implement a debugger if
 arguments are assignable.
 If you need to restart the execution of a method, you need to get the
 initial value of the argument; I understand you cannot find the value
 anymore in the method context, but it is available in the caller context, no?
 As I have never implemented a debugger, I cannot figure out the difficulties.


 the args are no longer available; they get moved from the caller context
 to the callee. If you think about stack frames, then what happens is that
 the slots containing the outgoing arguments are used as the slots for the 
 incoming arguments. So if arguments are assigned to, they are indeed lost.


 ok, I understand. Thank you for the explanation.


 But look at how many methods are in the system (or in any system). The
 proportion of methods/functions/procedures that could be written to assign
 to their arguments is very small, so the Smalltalk trade-off is a good one.


 I agree we don't really lose anything. You can always assign arguments
 to temporary variables and update the temporary variables. My point was just:
 if possible, why not do it?


 It's that tricky trade-off between value and cost.  Is it worth it?  Does
 doing it provide as much value as doing what it displaces (opportunity
 cost).  IMO it simply doesn't provide enough value, and there are different
 priorities in the compiler.  But if you disagree implement it your self and
 then it'll be done.


 It looks like "My point was just: if possible, why not do it?" was
 misunderstood.
 I wanted to say: if it is available for free (no work needed), why forbid it?
 You gave a very good reason, and I also think that the cost is not worth
 the value. :)


Agreed.  Good discussion.

-- 
best,
Eliot


Re: [Pharo-dev] threading in Pharo

2014-03-24 Thread Alexandre Bergel
 My current understanding of threads is that scheduling may occur each time we 
 enter the VM (e.g., a primitive call, instantiating an object, throwing an 
 exception). So the code “anOrderedCollection add: 42” will _never_ suffer from 
 a concurrent call, because adding 42 to a collection does not enter the VM. 
 Does this make sense?
 
 I don't think so. AFAIK interrupts are checked on message sends and on
 backward jumps (because of inlined loops just like in your example).
 
 So context switch may happen on each send...

Ah yes!! I missed this case.
Any idea what the cost of using a semaphore is? Wrapping the expression 
“anOrderedCollection add: 42” in a semaphore will surely make the expression 
slower. Any idea how much slower?
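
A rough way to estimate that overhead is a workspace micro-benchmark (a 
sketch: it uses Pharo's Semaphore class>>forMutualExclusion, 
Semaphore>>critical: and BlockClosure>>timeToRun; the iteration count is an 
arbitrary choice):

```smalltalk
| mutex coll plain guarded |
mutex := Semaphore forMutualExclusion.
coll := OrderedCollection new.
"Unguarded adds."
plain := [ 1000000 timesRepeat: [ coll add: 42 ] ] timeToRun.
coll := OrderedCollection new.
"The same adds, each wrapped in the semaphore."
guarded := [ 1000000 timesRepeat: [ mutex critical: [ coll add: 42 ] ] ] timeToRun.
{ plain. guarded } "inspect both durations to compare"
```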

Alexandre
-- 
_,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
Alexandre Bergel  http://www.bergel.eu
^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.






Re: [Pharo-dev] threading in Pharo

2014-03-24 Thread Stefan Marr
Hi Alexandre:

On 24 Mar 2014, at 19:20, Alexandre Bergel alexandre.ber...@me.com wrote:

 Any idea what the cost of using a semaphore is? Wrapping the expression 
 “anOrderedCollection add: 42” in a semaphore will surely make the expression 
 slower. Any idea how much slower?

Can you elaborate a little on the problem?
Your granularity does not seem to be at the right level.
Covering a single #add: operation is most probably rather fine-grained, and 
might not give you the guarantees you would expect.
What exactly do you want to achieve?
How many Smalltalk processes are interacting with that collection? How many 
consumers/producers do you have?

And well, at the standard priority you normally get cooperative scheduling 
anyway. So it depends on what you are doing whether there is a real issue.
And if you don't want to use a semaphore, there are also other mechanisms. I 
think there should be something like 'execute uninterruptible' for a block. 
I think that raises the priority of the process to the highest level for the 
execution of the block, if I remember correctly.
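
The mechanism half-remembered here may be BlockClosure>>valueUnpreemptively, 
which Squeak-lineage images provide and which evaluates the block at the 
highest priority; a hedged sketch, assuming that selector is present in your 
image:

```smalltalk
| coll |
coll := OrderedCollection new.
"While the block runs at top priority, no other user-priority
 process can preempt it."
[ coll add: 42 ] valueUnpreemptively.
```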


Best regards
Stefan

-- 
Stefan Marr
INRIA Lille - Nord Europe
http://stefan-marr.de/research/






Re: [Pharo-dev] threading in Pharo

2014-03-24 Thread Alexandre Bergel
 Any idea what the cost of using a semaphore is? Wrapping the expression 
 “anOrderedCollection add: 42” in a semaphore will surely make the expression 
 slower. Any idea how much slower?
 
 Can you elaborate a little on the problem.

I am working on a memory model for expandable collections in Pharo. Currently, 
OrderedCollection, Dictionary and other expandable collections use an internal 
array to store their data. My new collection library recycles these arrays 
instead of letting the garbage collector dispose of them. I simply insert an 
array into an ordered collection when it is no longer necessary. And I remove 
one when I need one. 

In the end, #add: and #remove: are performed on these pools of arrays. I 
haven't been able to spot any problem regarding concurrency, and I made no 
effort to prevent one. I have a simple global collection, and each call site 
of “OrderedCollection new” can pick an element from my global collection.

I have the impression that I simply need to guard access to the global 
pool, which basically means guarding #add:, #remove: and #includes:
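
The guarded pool described here can be sketched in a workspace (variable names 
are invented; Semaphore>>critical: serializes access to the shared pool):

```smalltalk
| poolMutex pool recycledArray |
pool := OrderedCollection new.
poolMutex := Semaphore forMutualExclusion.

"Recycle an internal array once its owning collection no longer needs it."
poolMutex critical: [ pool add: (Array new: 16) ].

"At 'OrderedCollection new' time, reuse a pooled array if one is available,
 otherwise allocate a fresh one."
recycledArray := poolMutex critical: [
	pool isEmpty
		ifTrue: [ Array new: 16 ]
		ifFalse: [ pool removeLast ] ].
```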

 Your granularity does not seem to be of the right level.
 Covering a single #add: operations is most probably rather fine grained, and 
 might not give you the guarantees you would expect.
 What exactly do you want to achieve?
 How many Smalltalk processes are interacting with that collection? How many 
 consumer/producer do you have?

Basically, all processes may access my global pools by inserting or 
removing elements. 

 And well, at the standard priority you normally get cooperative scheduling 
 anyway. So it depends on what you are doing whether there is a real issue.
 And if you don't want to use a semaphore, there are also other mechanisms. I 
 think there should be something like 'execute uninterruptible' for a block. 
 I think that raises the priority of the process to the highest level for the 
 execution of the block, if I remember correctly.

The funny thing is that I did not care at all about multi-threading and 
concurrency, and I have not spotted any problem so far.

Alexandre


 
 
 Best regards
 Stefan
 
 -- 
 Stefan Marr
 INRIA Lille - Nord Europe
 http://stefan-marr.de/research/
 
 
 
 

-- 
_,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
Alexandre Bergel  http://www.bergel.eu
^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.






Re: [Pharo-dev] threading in Pharo

2014-03-24 Thread Jan Vrany

On 24/03/14 18:20, Alexandre Bergel wrote:

My current understanding of threads is that scheduling may occur each time we 
enter the VM (e.g., a primitive call, instantiating an object, throwing an 
exception). So the code “anOrderedCollection add: 42” will _never_ suffer from 
a concurrent call, because adding 42 to a collection does not enter the VM. 
Does this make sense?


I don't think so. AFAIK interrupts are checked on message sends and on
backward jumps (because of inlined loops just like in your example).

So context switch may happen on each send...


Ah yes!! I missed this case.
Any idea what the cost of using a semaphore is? Wrapping the expression 
“anOrderedCollection add: 42” in a semaphore will surely make the expression 
slower. Any idea how much slower?



Not really sure, since #signal and #wait are primitives,
but a simple benchmark should tell you :-)

However, you may want to use a recursion lock (sometimes called a
monitor; in Pharo, the class Monitor), which allows for recursion.
Otherwise you may get a nice deadlock if your code recurs.

The cost of a recursion lock could be reduced to a couple of
machine instructions (if there's no contention), if done
properly. The implementation of Monitor in Pharo seems to be
way, way more costly than a few instructions.
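
The recursion-lock suggestion in workspace form; Monitor>>critical: is 
re-entrant, so a nested call from the same process does not deadlock where a 
plain Semaphore would (a sketch, not a benchmark):

```smalltalk
| monitor pool |
monitor := Monitor new.
pool := OrderedCollection new.
monitor critical: [
	pool add: 42.
	"Re-entering from the same process is fine with a Monitor;
	 with a plain Semaphore this inner critical: would deadlock."
	monitor critical: [ pool add: 43 ] ].
```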

Jan





Re: [Pharo-dev] threading in Pharo

2014-03-24 Thread Jan Vrany

On 24/03/14 18:57, Alexandre Bergel wrote:

Any idea what the cost of using a semaphore is? Wrapping the
expression “anOrderedCollection add: 42” in a semaphore will surely
make the expression slower. Any idea how much slower?


Can you elaborate a little on the problem.


I am working on a memory model for expandable collections in Pharo.
Currently, OrderedCollection, Dictionary and other expandable
collections use an internal array to store their data. My new
collection library recycles these arrays instead of letting the garbage
collector dispose of them. I simply insert an array into an ordered
collection when it is no longer necessary. And I remove one
when I need one.



Just out of curiosity, why do you do that? I would say it is better to throw 
them away (in most cases)


Jan



Re: [Pharo-dev] threading in Pharo

2014-03-24 Thread Alexandre Bergel
 Just out of curiosity, why do you do that? I would say it is better to throw 
 them away (in most cases)

Just for the sake of publishing original ideas :-) Joking :-)
The VM treats expandable collections as simple objects, and this has a cost in 
terms of memory and CPU consumption. I will be able to say more at ESUG, 
hopefully.

Alexandre

-- 
_,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
Alexandre Bergel  http://www.bergel.eu
^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.






Re: [Pharo-dev] threading in Pharo

2014-03-24 Thread Stefan Marr
Hi Alexandre:

On 24 Mar 2014, at 19:57, Alexandre Bergel alexandre.ber...@me.com wrote:

 I am working on a memory model for expandable collections in Pharo. Currently, 
 OrderedCollection, Dictionary and other expandable collections use an internal 
 array to store their data. My new collection library recycles these arrays 
 instead of letting the garbage collector dispose of them. I simply insert an 
 array into an ordered collection when it is no longer necessary. And I remove 
 one when I need one. 

Hm, is that really going to be worth the trouble?

 In the end, #add: and #remove: are performed on these pools of arrays. I 
 haven't been able to spot any problem regarding concurrency, and I made no 
 effort to prevent one. I have a simple global collection, and each call 
 site of “OrderedCollection new” can pick an element from my global collection.
 
 I have the impression that I simply need to guard access to the global 
 pool, which basically means guarding #add:, #remove: and #includes:

One of the AtomicCollections might be the right thing for you?

 The funny thing is that I did not care at all about multi-threading and 
 concurrency, and I have not spotted any problem so far.

There isn’t any ‘multi-threading’ like in Java; you get a much more controlled 
version: cooperative at the same priority, preemptive between priorities.
So I am not surprised. And well, these operations are likely not to be 
problematic when they are racy, except when the underlying data structure could 
get into an inconsistent state itself. The overall operations 
(adding/removing/searching) are racy at the application level anyway.

However, it would be much more interesting to know what kind of benefit you see 
from such reuse.
And especially, with Spur around the corner, will it still pay off then? Or is 
it an application-specific optimization?

Best regards
Stefan



 
 Alexandre
 
 
 
 
 Best regards
 Stefan
 
 -- 
 Stefan Marr
 INRIA Lille - Nord Europe
 http://stefan-marr.de/research/
 
 
 
 
 
 -- 
 _,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
 Alexandre Bergel  http://www.bergel.eu
 ^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.
 
 
 
 

-- 
Stefan Marr
INRIA Lille - Nord Europe
http://stefan-marr.de/research/






Re: [Pharo-dev] threading in Pharo

2014-03-24 Thread Alexandre Bergel
 I am working on a memory model for expandable collections in Pharo. 
 Currently, OrderedCollection, Dictionary and other expandable collections 
 use an internal array to store their data. My new collection library recycles 
 these arrays instead of letting the garbage collector dispose of them. I 
 simply insert an array into an ordered collection when it is no longer 
 necessary. And I remove one when I need one. 
 
 Hm, is that really going to be worth the trouble?

This technique reduces memory consumption by about 15%. 

 In the end, #add: and #remove: are performed on these pools of arrays. I 
 haven't been able to spot any problem regarding concurrency, and I made no 
 effort to prevent one. I have a simple global collection, and each call 
 site of “OrderedCollection new” can pick an element from my global collection.
 
 I have the impression that I simply need to guard access to the global 
 pool, which basically means guarding #add:, #remove: and #includes:
 
 One of the AtomicCollections might be the right things for you?

I will have a look at it.

 What is funny, is that I did not care at all about multi-threading and 
 concurrency, and I have not spotted any problem so far.
 
 There isn’t any ‘multi-threading’ like in Java; you get a much more controlled 
 version: cooperative at the same priority, preemptive between priorities.
 So I am not surprised. And well, these operations are likely not to be 
 problematic when they are racy, except when the underlying data structure 
 could get into an inconsistent state itself. The overall operations 
 (adding/removing/searching) are racy at the application level anyway.
 
 However, much more interesting would be to know what kind of benefit do you 
 see for such reuse?
 And especially, with Spur around the corner, will it still pay off then? Or 
 is it an application-specific optimization?

I am exploring a new design for the collection library of Pharo. Not all of 
the (academic) ideas will be worth porting into mainstream Pharo, but some 
of them will.

Thanks for all your help guys! You’re great!

Cheers,
Alexandre

-- 
_,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
Alexandre Bergel  http://www.bergel.eu
^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.






[Pharo-dev] Migration from Pharo 1.1 to 2.0

2014-03-24 Thread Laurent Laffont
Hi,

is there a way to export / fileOut objects from Pharo 1.1 and reload them into 
Pharo 2.0 or 3.0 ?

Cheers,

Laurent



Re: [Pharo-dev] Migration from Pharo 1.1 to 2.0

2014-03-24 Thread Mariano Martinez Peck
Fuel works from 1.1.1 to 3.0:
http://rmod.lille.inria.fr/web/pier/software/Fuel/Version1.9/Documentation/Installation

If I remember correctly, 1.1.1 was just the same image but with support for
Cog, right? In that case it is likely to work for 1.1 as well.
You can give it a try.
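
The migration path via Fuel, sketched as two workspace snippets (FLSerializer 
and FLMaterializer are Fuel's entry points; `myObjects` is an invented 
placeholder, and exact class-side selectors may differ between Fuel versions):

```smalltalk
"In the Pharo 1.1 image: serialize the objects to a file."
FLSerializer serialize: myObjects toFileNamed: 'objects.fuel'.

"In the Pharo 2.0/3.0 image (with a compatible Fuel loaded):
 load the object graph back."
| restored |
restored := FLMaterializer materializeFromFileNamed: 'objects.fuel'.
```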

Cheers,


On Mon, Mar 24, 2014 at 5:37 PM, Laurent Laffont
laurent.laff...@gmail.com wrote:

 Hi,

 is there a way to export / fileOut objects from Pharo 1.1 and reload them
 into Pharo 2.0 or 3.0 ?

 Cheers,

 Laurent




-- 
Mariano
http://marianopeck.wordpress.com


Re: [Pharo-dev] Migration from Pharo 1.1 to 2.0

2014-03-24 Thread Max Leske
Fuel 1.9.3 still works with 1.1.1

On 24.03.2014, at 21:57, Mariano Martinez Peck marianop...@gmail.com wrote:

 Fuel works from 1.1.1 to 3.0: 
 http://rmod.lille.inria.fr/web/pier/software/Fuel/Version1.9/Documentation/Installation
 
 If I remember correctly, 1.1.1 was just the same image but with support for 
 Cog, right? In that case it is likely to work for 1.1 as well.
 You can give it a try.
 
 Cheers,
 
 
 On Mon, Mar 24, 2014 at 5:37 PM, Laurent Laffont laurent.laff...@gmail.com 
 wrote:
 Hi,
 
 is there a way to export / fileOut objects from Pharo 1.1 and reload them 
 into Pharo 2.0 or 3.0 ?
 
 Cheers,
 
 Laurent
 
 
 
 
 -- 
 Mariano
 http://marianopeck.wordpress.com



Re: [Pharo-dev] Migration from Pharo 1.1 to 2.0

2014-03-24 Thread Laurent Laffont
It works ! Thank you !

Laurent

Le lundi 24 mars 2014, 22:10:14 Max Leske a écrit :
 Fuel 1.9.3 still works with 1.1.1
 
 On 24.03.2014, at 21:57, Mariano Martinez Peck marianop...@gmail.com wrote:
 
  Fuel works from 1.1.1 to 3.0: 
  http://rmod.lille.inria.fr/web/pier/software/Fuel/Version1.9/Documentation/Installation
  
  If I remember correctly, 1.1.1 was just the same image but with support for 
  Cog, right? In that case it is likely to work for 1.1 as well.
  You can give it a try.
  
  Cheers,
  
  
  On Mon, Mar 24, 2014 at 5:37 PM, Laurent Laffont 
  laurent.laff...@gmail.com wrote:
  Hi,
  
  is there a way to export / fileOut objects from Pharo 1.1 and reload them 
  into Pharo 2.0 or 3.0 ?
  
  Cheers,
  
  Laurent
  
  
  
  
 




Re: [Pharo-dev] threading in Pharo

2014-03-24 Thread p...@highoctane.be
On Mon, Mar 24, 2014 at 8:23 PM, Alexandre Bergel
alexandre.ber...@me.com wrote:

  I am working on a memory model for expandable collections in Pharo.
 Currently, OrderedCollection, Dictionary and other expandable collections
 use an internal array to store their data. My new collection library recycles
 these arrays instead of letting the garbage collector dispose of them. I simply
 insert an array into an ordered collection when it is no longer needed,
 and remove one when I need one.
 
  Hm, is that really going to be worth the trouble?

 This technique reduces memory consumption by about 15%.

  At the end, #add: and #remove: are performed on these pools of arrays.
 I haven't been able to spot any problem regarding concurrency and I made no
 effort to prevent one. I have a simple global collection and each call
 site of OrderedCollection new can pick an element of my global collection.
 
  I have the impression that I simply need to guard the access to the
 global pool, which is basically guarding #add:, #remove: and #includes:
 
  One of the AtomicCollections might be the right thing for you?

 I will have a look at it.

  What is funny, is that I did not care at all about multi-threading and
 concurrency, and I have not spotted any problem so far.
 
  There isn't any 'multi-threading' like in Java; you get a much more
 controlled version: cooperative within the same priority, preemptive between
 priorities.
  So, I am not surprised. And well, these operations are likely not to be
 problematic when they are racy, except when the underlying data structure
 could get into an inconsistent state itself. The overall operations
 (adding/removing/searching) are racy at the application level anyway.
 
  However, much more interesting would be to know what kind of benefit do
 you see for such reuse?
  And especially, with Spur around the corner, will it still pay off then?
 Or is it an application-specific optimization?

 I am exploring a new design for the collection library of Pharo. Not all
 of the (academic) ideas will be worth porting into mainstream Pharo, but
 some of them will be.

 Thanks for all your help guys! You're great!

 Cheers,
 Alexandre

 --
 _,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
 Alexandre Bergel  http://www.bergel.eu
 ^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.




An interesting method I stumbled upon which may help in understanding how
these things work (the archive stripped the >> selectors and comment quotes;
the unused semaphore temporary is dropped and the return restored):

BlockClosure>>valueUnpreemptively
	"Evaluate the receiver (block), without the possibility of preemption by
	higher priority processes. Use this facility VERY sparingly!
	Think about using Block>>valueUninterruptably first, and think about using
	Semaphore>>critical: before that, and think about redesigning your
	application even before that!
	After you've done all that thinking, go right ahead and use it..."
	| activeProcess oldPriority result |
	activeProcess := Processor activeProcess.
	oldPriority := activeProcess priority.
	activeProcess priority: Processor highestPriority.
	result := self ensure: [activeProcess priority: oldPriority].
	^result


Re: [Pharo-dev] threading in Pharo

2014-03-24 Thread Guillermo Polito
Here is the best documentation I found online about it, from when I learnt it:

http://wiki.squeak.org/squeak/382

Afterwards... reading the VM and playing with it was the hard way to learn it...

Then it seems there is an ongoing book chapter on the topic:

https://ci.inria.fr/pharo-contribution/job/PharoForTheEnterprise/lastSuccessfulBuild/artifact/ConcurrencyBasics/ConcurrencyBasics.pier.html


On Mon, Mar 24, 2014 at 10:54 PM, p...@highoctane.be p...@highoctane.be wrote:

 On Mon, Mar 24, 2014 at 8:23 PM, Alexandre Bergel alexandre.ber...@me.com
  wrote:

  I am working on a memory model for expandable collections in Pharo.
 Currently, OrderedCollection, Dictionary and other expandable collections
 use an internal array to store their data. My new collection library recycles
 these arrays instead of letting the garbage collector dispose of them. I simply
 insert an array into an ordered collection when it is no longer needed,
 and remove one when I need one.
 
  Hm, is that really going to be worth the trouble?

 This technique reduces memory consumption by about 15%.

  At the end, #add: and #remove: are performed on these pools of
 arrays. I haven't been able to spot any problem regarding concurrency and I
 made no effort to prevent one. I have a simple global collection and
 each call site of OrderedCollection new can pick an element of my global
 collection.
 
  I have the impression that I simply need to guard the access to the
 global pool, which is basically guarding #add:, #remove: and #includes:
 
  One of the AtomicCollections might be the right thing for you?

 I will have a look at it.

  What is funny, is that I did not care at all about multi-threading and
 concurrency, and I have not spotted any problem so far.
 
  There isn't any 'multi-threading' like in Java; you get a much more
 controlled version: cooperative within the same priority, preemptive between
 priorities.
  So, I am not surprised. And well, these operations are likely not to be
 problematic when they are racy, except when the underlying data structure
 could get into an inconsistent state itself. The overall operations
 (adding/removing/searching) are racy at the application level anyway.
 
  However, much more interesting would be to know what kind of benefit do
 you see for such reuse?
  And especially, with Spur around the corner, will it still pay off
 then? Or is it an application-specific optimization?

 I am exploring a new design for the collection library of Pharo. Not all
 of the (academic) ideas will be worth porting into mainstream Pharo, but
 some of them will be.

 Thanks for all your help guys! You're great!

 Cheers,
 Alexandre

 --
 _,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
 Alexandre Bergel  http://www.bergel.eu
 ^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.




 An interesting method I stumbled upon which may help in understanding how
 these things work.

 BlockClosure>>valueUnpreemptively
 	"Evaluate the receiver (block), without the possibility of preemption by
 	higher priority processes. Use this facility VERY sparingly!
 	Think about using Block>>valueUninterruptably first, and think about
 	using Semaphore>>critical: before that, and think about redesigning your
 	application even before that!
 	After you've done all that thinking, go right ahead and use it..."
 	| activeProcess oldPriority result |
 	activeProcess := Processor activeProcess.
 	oldPriority := activeProcess priority.
 	activeProcess priority: Processor highestPriority.
 	result := self ensure: [activeProcess priority: oldPriority].
 	^result




[Pharo-dev] Trait1 Trait2 Trait3 in Smalltalk globals allClassesAndTraits

2014-03-24 Thread p...@highoctane.be
On a fresh 3.0, there are Trait1, Trait2 and Trait3 in Smalltalk globals
allClassesAndTraits

These 3 look like they are equivalents to Trait. What are these for?

Phil


Re: [Pharo-dev] threading in Pharo

2014-03-24 Thread Eliot Miranda
Also Smalltalk-80: the Language and its Implementation (the Blue Book,
http://stephane.ducasse.free.fr/FreeBooks/BlueBook/Bluebook.pdf)
has a good chapter, Chapter 15: Multiple Independent Processes.


On Mon, Mar 24, 2014 at 3:29 PM, Guillermo Polito guillermopol...@gmail.com
 wrote:

 Here is the best documentation I found online about it, from when I learnt it:

 http://wiki.squeak.org/squeak/382

 Afterwards... reading the VM and playing with it was the hard way to learn it...

 Then it seems there is an ongoing book chapter on the topic:


 https://ci.inria.fr/pharo-contribution/job/PharoForTheEnterprise/lastSuccessfulBuild/artifact/ConcurrencyBasics/ConcurrencyBasics.pier.html


 On Mon, Mar 24, 2014 at 10:54 PM, p...@highoctane.be 
 p...@highoctane.be wrote:

 On Mon, Mar 24, 2014 at 8:23 PM, Alexandre Bergel 
 alexandre.ber...@me.com wrote:

  I am working on a memory model for expandable collections in Pharo.
 Currently, OrderedCollection, Dictionary and other expandable collections
 use an internal array to store their data. My new collection library recycles
 these arrays instead of letting the garbage collector dispose of them. I simply
 insert an array into an ordered collection when it is no longer needed,
 and remove one when I need one.
 
  Hm, is that really going to be worth the trouble?

 This technique reduces memory consumption by about 15%.

  At the end, #add: and #remove: are performed on these pools of
 arrays. I haven't been able to spot any problem regarding concurrency and I
 made no effort to prevent one. I have a simple global collection and
 each call site of OrderedCollection new can pick an element of my global
 collection.
 
  I have the impression that I simply need to guard the access to the
 global pool, which is basically guarding #add:, #remove: and #includes:
 
  One of the AtomicCollections might be the right thing for you?

 I will have a look at it.

  What is funny, is that I did not care at all about multi-threading
 and concurrency, and I have not spotted any problem so far.
 
  There isn't any 'multi-threading' like in Java; you get a much more
 controlled version: cooperative within the same priority, preemptive between
 priorities.
  So, I am not surprised. And well, these operations are likely not to
 be problematic when they are racy, except when the underlying data structure
 could get into an inconsistent state itself. The overall operations
 (adding/removing/searching) are racy at the application level anyway.
 
  However, much more interesting would be to know what kind of benefit
 do you see for such reuse?
  And especially, with Spur around the corner, will it still pay off
 then? Or is it an application-specific optimization?

 I am exploring a new design for the collection library of Pharo. Not all
 of the (academic) ideas will be worth porting into mainstream Pharo, but
 some of them will be.

 Thanks for all your help guys! You're great!

 Cheers,
 Alexandre

 --
 _,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
 Alexandre Bergel  http://www.bergel.eu
 ^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.




 An interesting method I stumbled upon which may help in understanding how
 these things work.

 BlockClosure>>valueUnpreemptively
 	"Evaluate the receiver (block), without the possibility of preemption
 	by higher priority processes. Use this facility VERY sparingly!
 	Think about using Block>>valueUninterruptably first, and think about
 	using Semaphore>>critical: before that, and think about redesigning your
 	application even before that!
 	After you've done all that thinking, go right ahead and use it..."
 	| activeProcess oldPriority result |
 	activeProcess := Processor activeProcess.
 	oldPriority := activeProcess priority.
 	activeProcess priority: Processor highestPriority.
 	result := self ensure: [activeProcess priority: oldPriority].
 	^result





-- 
best,
Eliot


[Pharo-dev] Crazy Smalltalk code snippets

2014-03-24 Thread Pavel Krivanek
Who can find the most useful usage of this?

thisContext instVarNamed: #receiver put: 42.
self factorial

GOTO statement in Pharo:

FileStream stdout nextPutAll: 'Hello world'; lf.
thisContext jump: -12.
Let's collect the next ones :-)

Cheers,
-- Pavel


Re: [Pharo-dev] Crazy Smalltalk code snippets

2014-03-24 Thread p...@highoctane.be
I am curious.

Maybe on the first one, substitute something when a DNU is encountered.
Like logging undefined receivers.

I wonder how the next one behaves on JITted methods. I fear that offset
errors may lead to weird errors.


On Mon, Mar 24, 2014 at 11:51 PM, Pavel Krivanek
pavel.kriva...@gmail.com wrote:

 Who can find the most useful usage of this?

 thisContext instVarNamed: #receiver put: 42.
  self factorial

 GOTO statement in Pharo:

 FileStream stdout nextPutAll: 'Hello world'; lf.
  thisContext jump: -12.
 Let's collect the next ones :-)

 Cheers,
 -- Pavel



Re: [Pharo-dev] Crazy Smalltalk code snippets

2014-03-24 Thread Eliot Miranda
On Mon, Mar 24, 2014 at 4:02 PM, p...@highoctane.be p...@highoctane.be wrote:

 I am curious.

 Maybe on the first one, substitute something when a DNU is encountered.
 Like logging undefined receivers.

 I wonder how the next one behaves on JITted methods. I fear that offset
 errors may lead to weird errors.


It *should* just work :-) (provided the jump distance of -12 is correct).
 The VM traps assignments to variables of contexts, and converts them to
vanilla contexts.  Therefore the jump doesn't occur in JITTED code but back
in normal interpreted bytecode.

On Mon, Mar 24, 2014 at 11:51 PM, Pavel Krivanek
pavel.kriva...@gmail.com wrote:

 Who can find the most useful usage of this?

 thisContext instVarNamed: #receiver put: 42.
  self factorial

 GOTO statement in Pharo:

 FileStream stdout nextPutAll: 'Hello world'; lf.
  thisContext jump: -12.
 Let's collect the next ones :-)


this is my favourite, and would lock up a strict Blue Book VM.  (I once
locked up Allen Wirfs-Brock's 4404 with this and he wasn't best pleased
[forgive me Allen])


| a |
a := Array with: #perform:withArguments: with: nil.
a at: 2 put: a.
a perform: a first withArguments: a

;-)



 Cheers,
 -- Pavel





-- 
best,
Eliot


Re: [Pharo-dev] Crazy Smalltalk code snippets

2014-03-24 Thread p...@highoctane.be
On Tue, Mar 25, 2014 at 12:11 AM, Eliot Miranda eliot.mira...@gmail.com wrote:




 On Mon, Mar 24, 2014 at 4:02 PM, p...@highoctane.be p...@highoctane.be wrote:

 I am curious.

 Maybe on the first one, substitute something when a DNU is encountered.
 Like logging undefined receivers.

 I wonder how the next one behaves on JITted methods. I fear that offset
 errors may lead to weird errors.


 It *should* just work :-) (provided the jump distance of -12 is
 correct).  The VM traps assignments to variables of contexts, and converts
 them to vanilla contexts.  Therefore the jump doesn't occur in JITTED code
 but back in normal interpreted bytecode.


How magical!


  On Mon, Mar 24, 2014 at 11:51 PM, Pavel Krivanek 
 pavel.kriva...@gmail.com wrote:

 Who can find the most useful usage of this?

 thisContext instVarNamed: #receiver put: 42.
  self factorial

 GOTO statement in Pharo:

 FileStream stdout nextPutAll: 'Hello world'; lf.
  thisContext jump: -12.
 Let's collect the next ones :-)


 this is my favourite, and would lock up a strict Blue Book VM.  (I once
 locked up Allen Wirfs-Brock's 4404 with this and he wasn't best pleased
 [forgive me Allen])


 | a |
 a := Array with: #perform:withArguments: with: nil.
 a at: 2 put: a.
 a perform: a first withArguments: a

 ;-)



The infinite is near.

Not as blocking, but still... (Alt-. interrupts this one)

Crazy>>run
	Continuation currentDo: [ :cc | here := cc ].
	here value: true.

Crazy new run

BTW, is there any support for partial continuations in Pharo? Continuations
like this one look like full continuations and that's *huge*.



 Cheers,
 -- Pavel





 --
 best,
 Eliot



Re: [Pharo-dev] Crazy Smalltalk code snippets

2014-03-24 Thread Eliot Miranda
On Mon, Mar 24, 2014 at 4:33 PM, p...@highoctane.be p...@highoctane.be wrote:

 On Tue, Mar 25, 2014 at 12:11 AM, Eliot Miranda 
 eliot.mira...@gmail.com wrote:




 On Mon, Mar 24, 2014 at 4:02 PM, p...@highoctane.be 
 p...@highoctane.be wrote:

 I am curious.

 Maybe on the first one, substitute something when a DNU is encountered.
 Like logging undefined receivers.

 I wonder how the next one behaves on JITted methods. I fear that offset
 errors may lead to weird errors.


 It *should* just work :-) (provided the jump distance of -12 is
 correct).  The VM traps assignments to variables of contexts, and converts
 them to vanilla contexts.  Therefore the jump doesn't occur in JITTED code
 but back in normal interpreted bytecode.


 How magical!


  On Mon, Mar 24, 2014 at 11:51 PM, Pavel Krivanek 
 pavel.kriva...@gmail.com wrote:

 Who can find the most useful usage of this?

 thisContext instVarNamed: #receiver put: 42.
  self factorial

 GOTO statement in Pharo:

 FileStream stdout nextPutAll: 'Hello world'; lf.
  thisContext jump: -12.
 Let's collect the next ones :-)


 this is my favourite, and would lock up a strict Blue Book VM.  (I once
 locked up Allen Wirfs-Brock's 4404 with this and he wasn't best pleased
 [forgive me Allen])


 | a |
 a := Array with: #perform:withArguments: with: nil.
 a at: 2 put: a.
 a perform: a first withArguments: a

 ;-)



 The infinite is near.

 Not as blocking, but still... (Alt-. interrupts this one)

 Crazy>>run
 	Continuation currentDo: [ :cc | here := cc ].
 	here value: true.

 Crazy new run

 BTW, is there any support for partial continuations in Pharo?
 Continuations like this one look like full continuations and that's *huge*.


Yes, see Seaside.  Remember a Process is simply a linked list of context
/objects/, so it is easy to copy as much or as little of one as one wants.
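That point — a process is just a chain of context objects, and a continuation is a copy of some or all of that chain — can be sketched in any language. A Python analog (Context and copy_chain_upto are illustrative names, not Pharo API):

```python
class Context:
    """One call frame; a Smalltalk Process is essentially a linked
    list of these, chained through their sender slots."""
    def __init__(self, name, sender=None):
        self.name = name
        self.sender = sender

def copy_chain_upto(ctx, stop):
    # Copy the sender chain from ctx up to (but excluding) stop.
    # Capturing only part of the chain is the essence of a partial
    # continuation; passing stop=None copies the whole chain (a full one).
    if ctx is None or ctx is stop:
        return None
    return Context(ctx.name, copy_chain_upto(ctx.sender, stop))

# A tiny "process": top frame c, called from b, called from a.
a = Context("a")
b = Context("b", a)
c = Context("c", b)

partial = copy_chain_upto(c, a)   # captures c and b, but not a
```

Because only the copied frames are captured, resuming such a partial copy would return into the copy's end rather than the original caller — which is how Seaside delimits what a continuation reinstates.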





 Cheers,
 -- Pavel





 --
 best,
 Eliot





-- 
best,
Eliot


Re: [Pharo-dev] threading in Pharo

2014-03-24 Thread Stephan Eggermont
Alexandre wrote:
I am working on a memory model for expandable collections in Pharo. Currently, 
OrderedCollection, Dictionary and other expandable collections use an internal 
array to store their data. My new collection library recycles these arrays 
instead of letting the garbage collector dispose of them. I simply insert an 
array into an ordered collection when it is no longer needed, and remove one 
when I need one. 

I hope large collections use multiple arrays? 
In other systems I’ve mostly found copying/moving the arrays to be more of a 
bottleneck.
Why do you see 15% less memory use? 

Stephan
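The recycling scheme under discussion can be sketched outside Smalltalk. A rough Python analog (ArrayPool is a hypothetical name, and threading.Lock stands in for the Semaphore>>critical: guard mentioned earlier in the thread):

```python
import threading

class ArrayPool:
    """Recycle backing arrays instead of letting the GC dispose of them."""
    def __init__(self):
        self._free = []                 # the global pool of spare arrays
        self._lock = threading.Lock()   # guards #add:/#remove: on the pool

    def acquire(self, size):
        # Reuse a spare array that is big enough, else allocate a new one.
        with self._lock:
            for i, arr in enumerate(self._free):
                if len(arr) >= size:
                    return self._free.pop(i)
        return [None] * size

    def release(self, arr):
        # An expandable collection hands its old backing array back here
        # (e.g. after growing), instead of dropping it for the GC.
        with self._lock:
            self._free.append(arr)

pool = ArrayPool()
a = pool.acquire(8)
pool.release(a)
b = pool.acquire(4)   # reuses the released array rather than allocating
assert a is b
```

This is only the guarded-pool pattern, not Alexandre's library; whether the bookkeeping pays off against a generational collector (or Spur) is exactly the open question in the thread.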