Re: [Pharo-dev] [ANN] Pharo Association has a new Website!

2016-11-10 Thread Marcus Denker
Hi,

I have already started to improve the integration between the sites.

- The Contribute page now has the Join form embedded: http://pharo.org/contribute
- Not directly related, but http://pharo.org/community now has the newsletter
subscription form embedded, too.

More to be done, step by step...


On Thu, Nov 10, 2016 at 2:25 PM, Denis Kudriashov 
wrote:

> That's nice, we have a donate button now. Is it possible to make it more
> visible? Or at least put it on pharo.org?
>
>
> 2016-11-10 10:22 GMT+01:00 Marcus Denker :
>
>> Hello,
>>
>> We have changed the backend of the Pharo Association.
>>
>> https://association.pharo.org
>>
>> If you ever joined the association in the past, please consider
>> re-subscribing.
>>
>> We have already added all existing active members; in that case you should
>> have already received a new password.
>>
>> For all questions, do not hesitate to contact assocat...@pharo.org
>> 
>>
>>
>> Marcus
>>
>
>


-- 
--
Marcus Denker  --  den...@acm.org
http://www.marcusdenker.de


Re: [Pharo-dev] OpalEncoderForV3PlusClosures error

2016-11-10 Thread Clément Bera
On the latest VM from opensmalltalk you can switch to the SistaV1 encoder,
which solves this problem.
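
A sketch of the switch, assuming Opal's class-side compilation settings (the
selector and encoder class name below are assumptions; check the exact API in
your image):

    "Playground sketch: point Opal at the Sista bytecode set, whose extended
    jump encoding accepts larger displacements, then recompile the methods
    that triggered the error."
    CompilationContext bytecodeBackend: OpalEncoderForSistaV1.
    MyHugeClass compileAll.  "hypothetical class that hit the jump limit"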

On Nov 10, 2016 23:31, "Thierry Goubier"  wrote:

>
>
> 2016-11-10 17:28 GMT+01:00 Max Leske :
>
>> Too many statements probably (e.g. a lot of branches).
>>
>
> I suspected that. Many, many loops over arrays of arrays of arrays of
> float, no branches.
>
> Thierry
>
>
>>
>> Max
>>
>> > On 10 Nov 2016, at 17:11, Thierry Goubier 
>> wrote:
>> >
>> > I've got a compilation error which is:
>> >
>> > genJumpLong: distance index -1504 is out of range -1024 to 1023
>> >
>> > What does it mean? I'm on Pharo6 64 bits.
>> >
>> > Thierry
>>
>>
>>
>


Re: [Pharo-dev] Please test new VMs (round one)

2016-11-10 Thread Hernán Morales Durand
No problem. I was asking for instructions to download and reproduce the
Windows VM compilation problem.
But Esteban already gave me some steps to get started.

Hernán

2016-11-10 13:21 GMT-03:00 Sven Van Caekenberghe :

> What is your exact question ?
>
> > On 10 Nov 2016, at 16:31, Hernán Morales Durand <
> hernan.mora...@gmail.com> wrote:
> >
> > Anyone?
> >
> >
> > 2016-11-09 14:17 GMT-03:00 Hernán Morales Durand <
> hernan.mora...@gmail.com>:
> >
> >
> > 2016-11-09 8:00 GMT-03:00 Esteban Lorenzano :
> > Hi guys,
> >
> > I want to start moving the VM stuff into the new structure. Now I know
> there are still missing things :)
> > Can you download a VM from here: https://bintray.com/estebanlm/
> pharo-vm/build/201611082123#files
> >
> > and start using it, and report problems?
> >
> > thanks!
> > Esteban
> >
> > ps: Windows users: I’m still not there, I’m having problems to build
> third-party libraries with cygwin… also if someone can help me here I would
> thank it :)
> >
> >
> > I can help, let me know how can I reproduce those problems
> >
> > Hernán
> >
> >
>
>
>


Re: [Pharo-dev] Breaking the 4GB barrier with Pharo 6 64-bit

2016-11-10 Thread Igor Stasenko
On 10 November 2016 at 19:58, Stephan Eggermont  wrote:

> Igor wrote:
> >Now i hope at the end of the day,
> >the guys who doing data mining/statistical
> >analysis will finally shut up and happily
> >be able to work with more bloat without
> >need of learning a ways to properly
> >manage memory & resources, and
> >implement them finally.
>
> The actual problem is of course having to work with all that data before
> you understand the structure. Or highly interconnected structures with
> unpredictable access patterns. Partial graphs are nice, once you understand
> how to partition. Needing to understand how to partition first is a
> dependency I'd rather avoid.
>
>
No, no, no! This is simply not true.
It is you who writes the code that generates all that statistical/analysis
data, and its output is fairly predictable.. otherwise you are not collecting
any data, just random noise, aren't you?
Those graphs are far from unpredictable, because they are the product of
software you wrote.
It's not unpredictable, unless you claim that the code you write is
unpredictable, and then I wonder what you are doing in the field of data
analysis, if you admit that your data is nothing but a dice roll.
If you cannot tame & reason about the complexity of your own code, then maybe
it's better to change occupation and go work in a casino? :)

I mean, Doru is light years ahead of me and many others in the field of data
analysis.. so what can I advise him on his own playground?
You are absolutely right that the hardest part, as you identified, is finding
the way to dissect the graph data into smaller chunks. Storing such a
dissected graph in chunks on a hard drive outside the image, and loading them
when needed, is nothing compared to that first part.
And if Doru can't handle this, then who else can? Me? I am nothing compared to
his experience in that field. I have had only little, occasional experience in
my career with this kind of domain. C'mon..


> >Because even if you can fit all data in
> >memory, consider how much time it takes
> >for GC to scan 4+ Gb of memory,
>
> That's often not what is happening. The large data is mostly static, so
> gets moved out of new space very quickly. Otherwise working with large data
> quickly becomes annoying indeed. I fully agree with you on that.
>
> Stephan
>
>


-- 
Best regards,
Igor Stasenko.


Re: [Pharo-dev] Breaking the 4GB barrier with Pharo 6 64-bit

2016-11-10 Thread Stephan Eggermont
Igor wrote:
>Now i hope at the end of the day, 
>the guys who doing data mining/statistical 
>analysis will finally shut up and happily 
>be able to work with more bloat without 
>need of learning a ways to properly 
>manage memory & resources, and 
>implement them finally. 

The actual problem is of course having to work with all that data before you 
understand the structure. Or highly interconnected structures with 
unpredictable access patterns. Partial graphs are nice, once you understand how 
to partition. Needing to understand how to partition first is a dependency I'd 
rather avoid. 

>Because even if you can fit all data in 
>memory, consider how much time it takes 
>for GC to scan 4+ Gb of memory, 

That's often not what is happening. The large data is mostly static, so gets 
moved out of new space very quickly. Otherwise working with large data quickly 
becomes annoying indeed. I fully agree with you on that. 

Stephan



Re: [Pharo-dev] Breaking the 4GB barrier with Pharo 6 64-bit

2016-11-10 Thread Igor Stasenko
On 10 November 2016 at 18:57, Tudor Girba  wrote:

> Hi Igor,
>
> I see you are still having fun :). I am not sure what you are arguing
> about, but it does not seem to be much related to what I said.
>
It is not fun seeing that, years after we discussed this problem and I shared
my view on it, nothing has changed.
I really wish that problem were lifted from your sight. But your rhetoric
tells me that you prefer to sit and wait instead of solving it.
Feel free to tell me if I am wrong.


> And again, I would be very happy to work with you on something concrete.
> Just let me know if this is of interest and perhaps we can channel the
> energy on solutions rather than on discussions like this.
>
Why bother? Let's wait till we have desktops with 1TB of RAM :)
Ohh, sorry.
Yeah, unfortunately I don't have much free time right now to dedicate to
Pharo. But who knows, that may change.
As you can see, I keep coming back, because Smalltalk is not something you can
forget once you have learned it :)

Please don't take my tone too personally. It's my frustration taking offensive
forms. Frustration, because I assume you could help yourself; your problem is
not that hard to solve.
But instead, you prefer to rely on somebody else's effort(s). Arrhgghhh!! :)


> Cheers,
> Doru
>
>
> >
> > --
> > Best regards,
> > Igor Stasenko.
>
> --
> www.tudorgirba.com
> www.feenk.com
>
> "From an abstract enough point of view, any two things are similar."
>
>
>


-- 
Best regards,
Igor Stasenko.


Re: [Pharo-dev] Breaking the 4GB barrier with Pharo 6 64-bit

2016-11-10 Thread Igor Stasenko
On 10 November 2016 at 18:41, Sven Van Caekenberghe  wrote:

>
> > On 10 Nov 2016, at 17:35, Aliaksei Syrel  wrote:
> >
> > > The speed of GC will always be in linear dependency from the size of
> governed memory.
> >
> > Asymptotic complexity of GC is O(N), where N is heap size - amount of
> objects, not memory size.
>
> Even that is not necessarily true, Generational Garbage collection and
> other tricks can avoid a full heap GC for a long time, even (or especially)
> under memory allocation stress.
>

That's why it is asymptotic.. Still, more objects => more memory: O(N) in
objects implies O(K) in bytes.. so my statement holds true.
And all those tricks are puny attempts to work around it: generational,
multi-generational, permanent space, etc. etc. They do help, of course,
but they do not solve the problem, since you can always invent a real-world
scenario that brings them to their knees, so that the 'asymptotic' becomes
quite 'symptotic'.. So all those elaborations do not dismiss my argument,
especially when we're talking about large data.

When it comes to BIG data, manual data/resource management is the way to go.
The rest is handwaving and self-delusion :)


>
> Apart from that, of course we have to write the most resource efficient
> code that we can !
>
> > I agree, however, that it's not good to create a lot of short living
> objects. That is why there are many practices how to overcome this problem.
> For example Object Pool can be nice example.
> >
> > Nevertheless I can imagine many usecasses when breaking 4GB limit is
> useful. For example double buffering during rendering process. 1 pixel
> takes 32bit of memory => 8k image (near future displays) would take 126Mb
> of memory. Double buffering would be useful for Roassal (huge zoomed out
> visualization).
> >
> > Storing 126Mb array object takes a lot of memory but does not influence
> on GC performance since it is just one object on the heap.
> >
> > Cheers
> > Alex
> >
> >
> > On Nov 10, 2016 5:02 PM, "Igor Stasenko"  wrote:
> >
> >
> > On 10 November 2016 at 11:42, Tudor Girba  wrote:
> > Hi Igor,
> >
> > I am happy to see you getting active again. The next step is to commit
> code at the rate you reply emails. I’d be even happier :).
> >
> > To address your point, of course it certainly would be great to have
> more people work on automated support for swapping data in and out of the
> image. That was the original idea behind the Fuel work. I have seen a
> couple of cases on the mailing lists where people are actually using Fuel
> for caching purposes. I have done this a couple of times, too. But, at this
> point these are dedicated solutions and would be interesting to see it
> expand further.
> >
> > However, your assumption is that the best design is one that deals with
> small chunks of data at a time. This made a lot of sense when memory was
> expensive and small. But, these days the cost is going down very rapidly,
> and sizes of 128+ GB of RAM is nowadays quite cheap, and there are strong
> signs of super large non-volatile memories become increasingly accessible.
> The software design should take advantage of what hardware offers, so it is
> not unreasonable to want to have a GC that can deal with large size.
> >
> > The speed of GC will always be in linear dependency from the size of
> governed memory. Yes, yes.. super fast and super clever, made by some
> wizard.. but still same dependency.
> > So, it will be always in your interest to keep memory footprint as small
> as possible. PERIOD.
> >
> > We should always challenge the assumptions behind our designs, because
> the world keeps changing and we risk becoming irrelevant, a syndrome that
> is not foreign to Smalltalk aficionados.
> >
> >
> > What you saying is just: okay, we have a problem here, we hit a wall..
> But we don't look for solutions! Instead let us sit and wait till someone
> else will be so generous to help with it.
> > WOW, what a brilliant strategy!!
> > So, you putting fate of your project(s) into hands of 3-rd party, which
> > a) maybe , only maybe will work to solve your problem in next 10 years
> > b) may decide it not worth effort right now(never) and focus on
> something else, because they have own priorities after all
> >
> > Are you serious?
> > "Our furniture don't fits in modern truck(s), so let us wait will
> industry invent bigger trucks, build larger roads and then we will move"
> Hilarious!
> >
> > In that case, the problem that you arising is not that mission-critical
> to you, and thus making constant noise about your problem(s) is just what
> it is: a noise.
> > Which returns us to my original mail with offensive tone.
> >
> >
> > Cheers,
> > Doru
> >
> >
> >
> > --
> > www.tudorgirba.com
> > www.feenk.com
> >
> > "Not knowing how to do something is not an argument for how it cannot be
> done."
> >
> >
> >
> >
> >
> > --
> > Best regards,
> > Igor Stasenko.
>
>
>


-- 
Best regards,
Igor Stasenko.


Re: [Pharo-dev] Breaking the 4GB barrier with Pharo 6 64-bit

2016-11-10 Thread Aliaksei Syrel
On 10 November 2016 at 17:41, Sven Van Caekenberghe  wrote:

> Even that is not necessarily true, Generational Garbage collection and
> other tricks can avoid a full heap GC for a long time, even (or especially)
> under memory allocation stress.


That is why it is Big O notation (upper bound / worst case) ;)


Re: [Pharo-dev] Breaking the 4GB barrier with Pharo 6 64-bit

2016-11-10 Thread Tudor Girba
Hi Igor,

I see you are still having fun :). I am not sure what you are arguing about, 
but it does not seem to be much related to what I said.

And again, I would be very happy to work with you on something concrete. Just 
let me know if this is of interest and perhaps we can channel the energy on 
solutions rather than on discussions like this.

Cheers,
Doru


> On Nov 10, 2016, at 5:01 PM, Igor Stasenko  wrote:
> 
> 
> 
> On 10 November 2016 at 11:42, Tudor Girba  wrote:
> Hi Igor,
> 
> I am happy to see you getting active again. The next step is to commit code 
> at the rate you reply emails. I’d be even happier :).
> 
> To address your point, of course it certainly would be great to have more 
> people work on automated support for swapping data in and out of the image. 
> That was the original idea behind the Fuel work. I have seen a couple of 
> cases on the mailing lists where people are actually using Fuel for caching 
> purposes. I have done this a couple of times, too. But, at this point these 
> are dedicated solutions and would be interesting to see it expand further.
> 
> However, your assumption is that the best design is one that deals with small 
> chunks of data at a time. This made a lot of sense when memory was expensive 
> and small. But, these days the cost is going down very rapidly, and sizes of 
> 128+ GB of RAM is nowadays quite cheap, and there are strong signs of super 
> large non-volatile memories become increasingly accessible. The software 
> design should take advantage of what hardware offers, so it is not 
> unreasonable to want to have a GC that can deal with large size.
> 
> The speed of GC will always be in linear dependency from the size of governed 
> memory. Yes, yes.. super fast and super clever, made by some wizard.. but 
> still same dependency.
> So, it will be always in your interest to keep memory footprint as small as 
> possible. PERIOD.
>  
> We should always challenge the assumptions behind our designs, because the 
> world keeps changing and we risk becoming irrelevant, a syndrome that is not 
> foreign to Smalltalk aficionados.
> 
> 
> What you saying is just: okay, we have a problem here, we hit a wall.. But we 
> don't look for solutions! Instead let us sit and wait till someone else will 
> be so generous to help with it.
> WOW, what a brilliant strategy!!
> So, you putting fate of your project(s) into hands of 3-rd party, which 
> a) maybe , only maybe will work to solve your problem in next 10 years 
> b) may decide it not worth effort right now(never) and focus on something 
> else, because they have own priorities after all
>  
> Are you serious?
> "Our furniture don't fits in modern truck(s), so let us wait will industry 
> invent bigger trucks, build larger roads and then we will move" Hilarious!
> 
> In that case, the problem that you arising is not that mission-critical to 
> you, and thus making constant noise about your problem(s) is just what it is: 
> a noise.
> Which returns us to my original mail with offensive tone.
> 
> 
> Cheers,
> Doru
> 
> 
> 
> --
> www.tudorgirba.com
> www.feenk.com
> 
> "Not knowing how to do something is not an argument for how it cannot be 
> done."
> 
> 
> 
> 
> 
> -- 
> Best regards,
> Igor Stasenko.

--
www.tudorgirba.com
www.feenk.com

"From an abstract enough point of view, any two things are similar."







Re: [Pharo-dev] Breaking the 4GB barrier with Pharo 6 64-bit

2016-11-10 Thread Sven Van Caekenberghe

> On 10 Nov 2016, at 17:35, Aliaksei Syrel  wrote:
> 
> > The speed of GC will always be in linear dependency from the size of 
> > governed memory.
> 
> Asymptotic complexity of GC is O(N), where N is heap size - amount of 
> objects, not memory size.

Even that is not necessarily true: generational garbage collection and other
tricks can avoid a full heap GC for a long time, even (or especially) under
memory allocation stress.

Apart from that, of course we have to write the most resource-efficient code
that we can!
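
A rough playground illustration of that (not from the thread; the numbers are
illustrative):

    "Allocate a million short-lived arrays: nearly all of them die in new
    space and are reclaimed by cheap scavenges, without a full heap GC."
    [ 1 to: 1000000 do: [ :i | Array new: 8 ] ] timeToRun.

    "Compare a new-space-only collection against a full collection."
    [ Smalltalk garbageCollectMost ] timeToRun.
    [ Smalltalk garbageCollect ] timeToRun.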

> I agree, however, that it's not good to create a lot of short living objects. 
> That is why there are many practices how to overcome this problem. For 
> example Object Pool can be nice example.
> 
> Nevertheless I can imagine many usecasses when breaking 4GB limit is useful. 
> For example double buffering during rendering process. 1 pixel takes 32bit of 
> memory => 8k image (near future displays) would take 126Mb of memory. Double 
> buffering would be useful for Roassal (huge zoomed out visualization).
> 
> Storing 126Mb array object takes a lot of memory but does not influence on GC 
> performance since it is just one object on the heap.
> 
> Cheers
> Alex
> 
> 
> On Nov 10, 2016 5:02 PM, "Igor Stasenko"  wrote:
> 
> 
> On 10 November 2016 at 11:42, Tudor Girba  wrote:
> Hi Igor,
> 
> I am happy to see you getting active again. The next step is to commit code 
> at the rate you reply emails. I’d be even happier :).
> 
> To address your point, of course it certainly would be great to have more 
> people work on automated support for swapping data in and out of the image. 
> That was the original idea behind the Fuel work. I have seen a couple of 
> cases on the mailing lists where people are actually using Fuel for caching 
> purposes. I have done this a couple of times, too. But, at this point these 
> are dedicated solutions and would be interesting to see it expand further.
> 
> However, your assumption is that the best design is one that deals with small 
> chunks of data at a time. This made a lot of sense when memory was expensive 
> and small. But, these days the cost is going down very rapidly, and sizes of 
> 128+ GB of RAM is nowadays quite cheap, and there are strong signs of super 
> large non-volatile memories become increasingly accessible. The software 
> design should take advantage of what hardware offers, so it is not 
> unreasonable to want to have a GC that can deal with large size.
> 
> The speed of GC will always be in linear dependency from the size of governed 
> memory. Yes, yes.. super fast and super clever, made by some wizard.. but 
> still same dependency.
> So, it will be always in your interest to keep memory footprint as small as 
> possible. PERIOD.
>  
> We should always challenge the assumptions behind our designs, because the 
> world keeps changing and we risk becoming irrelevant, a syndrome that is not 
> foreign to Smalltalk aficionados.
> 
> 
> What you saying is just: okay, we have a problem here, we hit a wall.. But we 
> don't look for solutions! Instead let us sit and wait till someone else will 
> be so generous to help with it.
> WOW, what a brilliant strategy!!
> So, you putting fate of your project(s) into hands of 3-rd party, which 
> a) maybe , only maybe will work to solve your problem in next 10 years 
> b) may decide it not worth effort right now(never) and focus on something 
> else, because they have own priorities after all
>  
> Are you serious?
> "Our furniture don't fits in modern truck(s), so let us wait will industry 
> invent bigger trucks, build larger roads and then we will move" Hilarious!
> 
> In that case, the problem that you arising is not that mission-critical to 
> you, and thus making constant noise about your problem(s) is just what it is: 
> a noise.
> Which returns us to my original mail with offensive tone.
> 
> 
> Cheers,
> Doru
> 
> 
> 
> --
> www.tudorgirba.com
> www.feenk.com
> 
> "Not knowing how to do something is not an argument for how it cannot be 
> done."
> 
> 
> 
> 
> 
> -- 
> Best regards,
> Igor Stasenko.




Re: [Pharo-dev] Breaking the 4GB barrier with Pharo 6 64-bit

2016-11-10 Thread Aliaksei Syrel
> The speed of GC will always be in linear dependency from the size of
> governed memory.

The asymptotic complexity of GC is O(N), where N is the heap size measured in
number of objects, not in bytes of memory.

I agree, however, that it's not good to create a lot of short-lived objects.
That is why there are many practices for overcoming this problem; an Object
Pool is one nice example (a sketch follows below).
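
A minimal pool sketch for a playground (the pattern only; the buffer type and
size are made up):

    "Reuse buffers instead of churning out short-lived ones for the GC."
    pool := OrderedCollection new.
    acquire := [ pool isEmpty
        ifTrue: [ ByteArray new: 4096 ]  "allocate only when the pool is empty"
        ifFalse: [ pool removeLast ] ].
    release := [ :buffer | pool add: buffer ].

    buffer := acquire value.
    "... use buffer ..."
    release value: buffer.  "hand it back for reuse instead of making garbage"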

Nevertheless, I can imagine many use cases where breaking the 4GB limit is
useful, for example double buffering during the rendering process: 1 pixel
takes 32 bits of memory, so an 8K frame (on near-future displays) would take
about 126MB of memory. Double buffering would be useful for Roassal (huge
zoomed-out visualizations).

Storing a 126MB array object takes a lot of memory but barely influences GC
performance, since it is just one object on the heap.
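
The arithmetic behind those numbers, as a playground check (assuming "8K"
means the usual 7680x4320 resolution):

    7680 * 4320 * 4.                      "132,710,400 bytes for one 32-bit frame"
    (7680 * 4320 * 4) / (1024 * 1024.0).  "about 126.6 MB"
    buffer := ByteArray new: 7680 * 4320 * 4.  "a single large object on the heap"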

Cheers
Alex

On Nov 10, 2016 5:02 PM, "Igor Stasenko"  wrote:

>
>
> On 10 November 2016 at 11:42, Tudor Girba  wrote:
>
>> Hi Igor,
>>
>> I am happy to see you getting active again. The next step is to commit
>> code at the rate you reply emails. I’d be even happier :).
>>
>
>> To address your point, of course it certainly would be great to have more
>> people work on automated support for swapping data in and out of the image.
>> That was the original idea behind the Fuel work. I have seen a couple of
>> cases on the mailing lists where people are actually using Fuel for caching
>> purposes. I have done this a couple of times, too. But, at this point these
>> are dedicated solutions and would be interesting to see it expand further.
>>
>> However, your assumption is that the best design is one that deals with
>> small chunks of data at a time. This made a lot of sense when memory was
>> expensive and small. But, these days the cost is going down very rapidly,
>> and sizes of 128+ GB of RAM is nowadays quite cheap, and there are strong
>> signs of super large non-volatile memories become increasingly accessible.
>> The software design should take advantage of what hardware offers, so it is
>> not unreasonable to want to have a GC that can deal with large size.
>>
>> The speed of GC will always be in linear dependency from the size of
> governed memory. Yes, yes.. super fast and super clever, made by some
> wizard.. but still same dependency.
> So, it will be always in your interest to keep memory footprint as small
> as possible. PERIOD.
>
>
>> We should always challenge the assumptions behind our designs, because
>> the world keeps changing and we risk becoming irrelevant, a syndrome that
>> is not foreign to Smalltalk aficionados.
>>
>>
> What you saying is just: okay, we have a problem here, we hit a wall.. But
> we don't look for solutions! Instead let us sit and wait till someone else
> will be so generous to help with it.
> WOW, what a brilliant strategy!!
> So, you putting fate of your project(s) into hands of 3-rd party, which
> a) maybe , only maybe will work to solve your problem in next 10 years
> b) may decide it not worth effort right now(never) and focus on something
> else, because they have own priorities after all
>
> Are you serious?
> "Our furniture don't fits in modern truck(s), so let us wait will industry
> invent bigger trucks, build larger roads and then we will move" Hilarious!
>
> In that case, the problem that you arising is not that mission-critical to
> you, and thus making constant noise about your problem(s) is just what it
> is: a noise.
> Which returns us to my original mail with offensive tone.
>
>
> Cheers,
>> Doru
>>
>>
>>
>> --
>> www.tudorgirba.com
>> www.feenk.com
>>
>> "Not knowing how to do something is not an argument for how it cannot be
>> done."
>>
>>
>>
>
>
> --
> Best regards,
> Igor Stasenko.
>


Re: [Pharo-dev] OpalEncoderForV3PlusClosures error

2016-11-10 Thread Thierry Goubier
2016-11-10 17:28 GMT+01:00 Max Leske :

> Too many statements probably (e.g. a lot of branches).
>

I suspected that. Many, many loops over arrays of arrays of arrays of
float, no branches.

Thierry


>
> Max
>
> > On 10 Nov 2016, at 17:11, Thierry Goubier 
> wrote:
> >
> > I've got a compilation error which is:
> >
> > genJumpLong: distance index -1504 is out of range -1024 to 1023
> >
> > What does it mean? I'm on Pharo6 64 bits.
> >
> > Thierry
>
>
>


Re: [Pharo-dev] OpalEncoderForV3PlusClosures error

2016-11-10 Thread Max Leske
Too many statements probably (e.g. a lot of branches).
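
For intuition (inferred only from the numbers in the error, not checked
against the encoder's source): the long-jump displacement looks like a signed
11-bit field, which gives exactly the reported range, and a jump of -1504
does not fit.

    "Signed 11-bit two's-complement bounds match the error message."
    (2 raisedTo: 10) negated.  "-1024"
    (2 raisedTo: 10) - 1.      "1023"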

Max

> On 10 Nov 2016, at 17:11, Thierry Goubier  wrote:
> 
> I've got a compilation error which is:
> 
> genJumpLong: distance index -1504 is out of range -1024 to 1023
> 
> What does it mean? I'm on Pharo6 64 bits.
> 
> Thierry




Re: [Pharo-dev] Instructions for Pharo 6 64bits

2016-11-10 Thread Sven Van Caekenberghe

> On 28 Oct 2016, at 11:56, Esteban Lorenzano  wrote:
> 
> Image here: http://files.pharo.org/get-files/60/pharo-64.zip

In the above image I get the following output on the Transcript while sending
#ast to all methods:

AthensCairoGradientPaint>>initializeRadialBetween:extending:and:extending:withColorRamp:(origin is shadowed)
DynamicSpecExample>>openOnString(ui is shadowed)
DynamicSpecExample>>openOnInteger(ui is shadowed)
ExternalForm>>setExtent:depth:bits:(pointer is shadowed)
ExternalForm>>primCreateManualSurfaceWidth:height:rowPitch:depth:isMSB:(width is shadowed)
ExternalForm>>primCreateManualSurfaceWidth:height:rowPitch:depth:isMSB:(height is shadowed)
ExternalForm>>primCreateManualSurfaceWidth:height:rowPitch:depth:isMSB:(depth is shadowed)
ExternalForm>>primManualSurface:setPointer:(pointer is shadowed)
ExternalType>>asPointerType:(pointerSize is shadowed)
RBRefactoryTestDataApp>>tempVarOverridesInstVar(temporaryVariable is shadowed)
RBSmalllintTestObject>>tempVarOverridesInstVar(temporaryVariable is shadowed)
TraitDescription>>fileOutLocalMethodsInCategory:on:(localSelectors is shadowed)
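
A minimal way to reproduce something like this (a sketch; the actual script
may differ): force AST construction for every method, which reparses the
source and logs semantic warnings such as shadowed variables.

    "Walk all compiled methods and ask each for its AST (slow on a full image)."
    CompiledMethod allInstancesDo: [ :method | method ast ].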

Just to let you know,

Sven




Re: [Pharo-dev] Please test new VMs (round one)

2016-11-10 Thread Sven Van Caekenberghe
What is your exact question ?

> On 10 Nov 2016, at 16:31, Hernán Morales Durand  
> wrote:
> 
> Anyone?
> 
> 
> 2016-11-09 14:17 GMT-03:00 Hernán Morales Durand :
> 
> 
> 2016-11-09 8:00 GMT-03:00 Esteban Lorenzano :
> Hi guys, 
> 
> I want to start moving the VM stuff into the new structure. Now I know there 
> are still missing things :)
> Can you download a VM from here: 
> https://bintray.com/estebanlm/pharo-vm/build/201611082123#files
> 
> and start using it, and report problems?
> 
> thanks!
> Esteban
> 
> ps: Windows users: I’m still not there, I’m having problems to build 
> third-party libraries with cygwin… also if someone can help me here I would 
> thank it :)
> 
> 
> I can help, let me know how can I reproduce those problems
> 
> Hernán 
> 
> 




[Pharo-dev] OpalEncoderForV3PlusClosures error

2016-11-10 Thread Thierry Goubier
I've got a compilation error which is:

genJumpLong: distance index -1504 is out of range -1024 to 1023

What does it mean? I'm on Pharo6 64 bits.

Thierry


Re: [Pharo-dev] Breaking the 4GB barrier with Pharo 6 64-bit

2016-11-10 Thread Igor Stasenko
On 10 November 2016 at 11:42, Tudor Girba  wrote:

> Hi Igor,
>
> I am happy to see you getting active again. The next step is to commit
> code at the rate you reply emails. I’d be even happier :).
>

> To address your point, of course it certainly would be great to have more
> people work on automated support for swapping data in and out of the image.
> That was the original idea behind the Fuel work. I have seen a couple of
> cases on the mailing lists where people are actually using Fuel for caching
> purposes. I have done this a couple of times, too. But, at this point these
> are dedicated solutions and would be interesting to see it expand further.
>
> However, your assumption is that the best design is one that deals with
> small chunks of data at a time. This made a lot of sense when memory was
> expensive and small. But, these days the cost is going down very rapidly,
> and sizes of 128+ GB of RAM is nowadays quite cheap, and there are strong
> signs of super large non-volatile memories become increasingly accessible.
> The software design should take advantage of what hardware offers, so it is
> not unreasonable to want to have a GC that can deal with large size.
>
The speed of GC will always depend linearly on the size of the memory it
governs. Yes, yes.. super fast and super clever, made by some wizard.. but
still the same dependency.
So it will always be in your interest to keep the memory footprint as small as
possible. PERIOD.
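
To put a number on that, one can time a full collection in a playground
(illustrative; the pause grows with the live heap the collector must trace):

    "Milliseconds spent tracing the whole heap once."
    [ Smalltalk garbageCollect ] timeToRun.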


> We should always challenge the assumptions behind our designs, because the
> world keeps changing and we risk becoming irrelevant, a syndrome that is
> not foreign to Smalltalk aficionados.
>
>
What you are saying is just: okay, we have a problem here, we hit a wall.. but
we don't look for solutions! Instead, let us sit and wait till someone else is
generous enough to help with it.
WOW, what a brilliant strategy!!
So you are putting the fate of your project(s) into the hands of a third
party, which
a) maybe, only maybe, will get around to solving your problem in the next 10
years
b) may decide it is not worth the effort right now (or ever) and focus on
something else, because they have their own priorities after all

Are you serious?
"Our furniture doesn't fit in modern truck(s), so let us wait till industry
invents bigger trucks and builds larger roads, and then we will move."
Hilarious!

In that case, the problem you are raising is not that mission-critical to you,
and thus the constant noise about your problem(s) is just what it is: noise.
Which returns us to my original mail with its offensive tone.


> Cheers,
> Doru
>
>
>
> --
> www.tudorgirba.com
> www.feenk.com
>
> "Not knowing how to do something is not an argument for how it cannot be
> done."
>
>
>


-- 
Best regards,
Igor Stasenko.


Re: [Pharo-dev] Please test new VMs (round one)

2016-11-10 Thread Hernán Morales Durand
Anyone?


2016-11-09 14:17 GMT-03:00 Hernán Morales Durand :

>
>
> 2016-11-09 8:00 GMT-03:00 Esteban Lorenzano :
>
>> Hi guys,
>>
>> I want to start moving the VM stuff into the new structure. Now I know
>> there are still missing things :)
>> Can you download a VM from here: https://bintray.com/este
>> banlm/pharo-vm/build/201611082123#files
>>
>> and start using it, and report problems?
>>
>> thanks!
>> Esteban
>>
>> ps: Windows users: I’m still not there, I’m having problems to build
>> third-party libraries with cygwin… also if someone can help me here I would
>> thank it :)
>>
>
>
> I can help, let me know how can I reproduce those problems
>
> Hernán
>
>


Re: [Pharo-dev] [ANN] Pharo Association has a new Website!

2016-11-10 Thread Marcus Denker

> On 10 Nov 2016, at 14:44, Esteban A. Maringolo  wrote:
> 
> Great!
> 
> What do we do with the current BountySource salt?
> (https://salt.bountysource.com/teams/pharo)
> Should we move directly to the new Wild Apricot backend?
> 
Salt is just one payment possibility.

Salt is a bit strange: e.g. we do not get any information about new sponsors
there. No email, nothing.
So it is a bit hard to keep up, but we will just send mails to all sponsors
there asking them to tell us.

Once they are in and the bill gets sent, the Salt sponsorship again serves as
the payment and the bill is accepted as paid.

> Accepting Bitcoin payments would be a plus ;-) (https://bitpay.com/tour)
> 

Like everything else, we just set up the absolute minimum to get this going.
This means that for payment we have only enabled PayPal.

The backend supports:

• 2Checkout
• Authorize.Net
• BluePay – beta
• Global Payments – beta
• iATS Payments – beta
• Moneris
• PayPal Payflow Pro – beta
• PayPal Payments Advanced – beta
• PayPal Payments Standard
• PayPal Express Checkout
• PayPal Payments Pro
• Skrill – beta
• Stripe – beta

and via CRE Secure even more.

I have not looked into this at all.

There is a lot one could do; I will collect all suggestions, but we also need
to keep in mind that every tiny one of them takes effort to make real.

Marcus





Re: [Pharo-dev] [ANN] Pharo Association has a new Website!

2016-11-10 Thread Esteban A. Maringolo
Great!

What do we do with the current BountySource salt?
(https://salt.bountysource.com/teams/pharo)
Should we move directly to the new Wild Apricot backend?

Accepting Bitcoin payments would be a plus ;-) (https://bitpay.com/tour)

Best regards!

Esteban A. Maringolo


2016-11-10 6:22 GMT-03:00 Marcus Denker :
> Hello,
>
> We have changed the backend of the Pharo Association.
>
> https://association.pharo.org
>
> If you ever joined the association in the past, please consider
> re-subscribing.
>
> We have already added all existing active members; in that case you should
> have already received a new password.
>
> For all questions, do not hesitate to contact assocat...@pharo.org
>
>
> Marcus



[Pharo-dev] [pharo-project/pharo-core]

2016-11-10 Thread GitHub
  Branch: refs/tags/60288
  Home:   https://github.com/pharo-project/pharo-core


Re: [Pharo-dev] [ANN] Pharo Association has a new Website!

2016-11-10 Thread Denis Kudriashov
That's nice, we have a donate button now. Is it possible to make it more
visible? Or at least put it on pharo.org?


2016-11-10 10:22 GMT+01:00 Marcus Denker :

> Hello,
>
> We have changed the backend of the Pharo Association.
>
> https://association.pharo.org
>
> If you ever joined the association in the past, please consider
> re-subscribing.
>
> We have already added all existing active members; in that case you should
> have already received a new password.
>
> For all questions, do not hesitate to contact assocat...@pharo.org
> 
>
>
> Marcus
>


[Pharo-dev] [pharo-project/pharo-core] 95f350: 60288

2016-11-10 Thread GitHub
  Branch: refs/heads/6.0
  Home:   https://github.com/pharo-project/pharo-core
  Commit: 95f35034de3196a288254f9c5cb5b78cabe7d4f6
  
https://github.com/pharo-project/pharo-core/commit/95f35034de3196a288254f9c5cb5b78cabe7d4f6
  Author: Jenkins Build Server 
  Date:   2016-11-10 (Thu, 10 Nov 2016)

  Changed paths:
A Deprecated60.package/extension/Pragma/instance/keyword.st
R Deprecated60.package/extension/Pragma/instance/selector.st
M GT-Tests-Spotter.package/GTSpotterStepTest.class/instance/private-navigation/pragma_of_.st
M GT-Tests-Spotter.package/GTSpotterStepTest.class/instance/private-navigation/pragmas_inPackages_.st
M HelpSystem-Core.package/SystemHelp.class/class/private accessing/allSystemHelpPragmas.st
M HelpSystem-Core.package/WikiStyleHelpBuilder.class/class/private accessing/allHelpPragmas.st
M Kernel-Tests.package/PragmaTest.class/instance/tests/testCopy.st
M Kernel.package/CompiledMethod.class/instance/accessing-pragmas & properties/hasPragmaNamed_.st
M Kernel.package/Pragma.class/class/finding/allNamed_from_to_.st
M Kernel.package/Pragma.class/class/finding/allNamed_in_.st
M Kernel.package/Pragma.class/class/private/withPragmasIn_do_.st
M Kernel.package/Pragma.class/definition.st
M Kernel.package/Pragma.class/instance/accessing-method/methodSelector.st
A Kernel.package/Pragma.class/instance/accessing-method/selector.st
M Kernel.package/Pragma.class/instance/accessing-pragma/key.st
R Kernel.package/Pragma.class/instance/accessing-pragma/keyword.st
M Kernel.package/Pragma.class/instance/accessing-pragma/message.st
M Kernel.package/Pragma.class/instance/comparing/=.st
M Kernel.package/Pragma.class/instance/comparing/analogousCodeTo_.st
M Kernel.package/Pragma.class/instance/comparing/hash.st
M Kernel.package/Pragma.class/instance/initialization/setKeyword_.st
M Kernel.package/Pragma.class/instance/printing/printOn_.st
M Kernel.package/Pragma.class/instance/processing/sendTo_.st
M Kernel.package/Pragma.class/instance/testing/hasLiteralSuchThat_.st
M Kernel.package/Pragma.class/instance/testing/hasLiteral_.st
M Keymapping-Core.package/extension/CompiledMethod/instance/isShortcutDeclaration.st
M Keymapping-Pragmas.package/KMPragmaKeymapBuilder.class/instance/registrations handling/pragmaCollector.st
M MenuRegistration.package/PragmaMenuBuilder.class/instance/registrations handling/pragmaCollector.st
M Nautilus.package/MethodIsScriptAction.class/instance/testing/isActionHandled.st
M Nautilus.package/MethodIsScriptWithArgumentAction.class/instance/testing/isActionHandled.st
M Nautilus.package/MethodSampleInstanceAction.class/instance/order/isActionHandled.st
R ScriptLoader60.package/ScriptLoader.class/instance/pharo - scripts/script60287.st
A ScriptLoader60.package/ScriptLoader.class/instance/pharo - scripts/script60288.st
R ScriptLoader60.package/ScriptLoader.class/instance/pharo - updates/update60287.st
A ScriptLoader60.package/ScriptLoader.class/instance/pharo - updates/update60288.st
M ScriptLoader60.package/ScriptLoader.class/instance/public/commentForCurrentUpdate.st
M SmartSuggestions.package/SugsSuggestionFactory.class/class/private/createCollector_.st
M Spec-Core.package/ComposableModel.class/class/protocol/specSelectors.st
M Spec-Core.package/ComposableModel.class/instance/private/defaultSpecSelector.st
A System-Announcements.package/ClassAnnouncement.class/instance/accessing/classAffected.st
A System-Announcements.package/ClassAnnouncement.class/instance/accessing/classTagAffected.st
A System-Announcements.package/ClassAnnouncement.class/instance/accessing/packageAffected.st
M System-Announcements.package/ClassRemoved.class/definition.st
M System-Announcements.package/ClassRemoved.class/instance/accessing/classRemoved_.st
A System-Announcements.package/ClassRemoved.class/instance/accessing/classTagAffected.st
A System-Announcements.package/ClassRemoved.class/instance/accessing/packageAffected.st
M System-Settings.package/SettingBrowser.class/class/accessing/settingsKeywords.st
M System-Settings.package/SettingTree.class/instance/accessing/acceptableKeywords_.st
M System-Settings.package/SettingTreeBuilder.class/instance/accessing/buildPragma_.st
M Tool-Base.package/MethodClassifier.class/instance/classification-rules/classifyByOtherImplementors_.st
M Tool-Base.package/MethodClassifier.class/instance/classification-rules/classifyInSuperclassProtocol_.st

  Log Message:
  ---
  60288
19323 ClassRemoved should provide information about affected package
https://pharo.fogbugz.com/f/cases/19323

18233 improving Pragma API
https://pharo.fogbugz.com/f/cases/18233

19320 Improve MethodClassifier logic
https://pharo.fogbugz.com/f/cases/19320

http://files.pharo.org/image/60/60288.zip




Re: [Pharo-dev] Breaking the 4GB barrier with Pharo 6 64-bit

2016-11-10 Thread Thierry Goubier
2016-11-10 12:38 GMT+01:00 Norbert Hartl :

>
> On 10.11.2016 at 12:27, Thierry Goubier wrote:
>
>
>
> 2016-11-10 12:18 GMT+01:00 Norbert Hartl :
>
>> [ ...]
>>
>> Be it small chunks of data or not. A statement that general is most
>> likely to be wrong. So the best way might be to ignore it. Indeed you are
>> right that hardware got cheap. Even more important is the fact that
>> hardware is almost always cheaper than personal costs. Solving all those
>> technical problems instead of real ones and not trying to act in an
>> economical way ruins a lot of companies out there. You can ignore
>> economical facts (are any other) but that doesn't make you really smart!
>>
>
> I disagree with that. In some areas (HPC, Exascale, HPDA), whatever the
> physical limit is, we will reach and go larger than that.
>
> To what you disagree? I didn't say you never need it. In your case you
> have concrete examples where it is necessary. In a lot of other cases it is
> counter productive. Isn't that an agreement that you cannot say it in a way
> too general way?
>

It is hard to disagree with something so general :) But what we strive for is
still to be general, otherwise we wouldn't have general-purpose programming
languages... mostly because a domain-specific, dedicated solution is a costly
proposition.


>
> Now, about that memory aspect, there is an entire field dedicated to
> algorithmic solutions that never require the entire data set in memory. You
> just have to look and implement the right underlying abstractions to allow
> those algorithms to be implemented and run efficiently.
>
> And that is good. I think you got me wrong. I find it important to be able
> to handle partial graphs in memory. But should everyone doing some
> statistical research be one implementing that? Something like this I took
> from the complaint making that the most important part and it is not.
>

Well, take my "larger than memory" image example again. Optimizing for that
case makes the image viewer more efficient in the general case. So yes, it can
be argued that everybody should write statistical research code in an
"out-of-memory" system: it will cost almost nothing in efficiency on
small-enough datasets, and it will allow the system to scale. Otherwise you
end up with the R situation, where it runs your stats nice and fine until you
reach a size limit unknown to you, where it crashes or seems to run forever
(if not worse).

Thierry


>
> Norbert
>
>
> (my best example for that: satelite imagery viewers... have allways been
> able to handle images larger than the computer RAM size. Just need a
> buffered streaming interface to the file).
>
>
> Thierry
>
>
>>
>> my 2 cents,
>>
>> Norbert
>>
>>
>> > We should always challenge the assumptions behind our designs, because
>> the world keeps changing and we risk becoming irrelevant, a syndrome that
>> is not foreign to Smalltalk aficionados.
>> >
>> > Cheers,
>> > Doru
>> >
>> >
>> >> On Nov 10, 2016, at 9:12 AM, Igor Stasenko  wrote:
>> >>
>> >>
>> >> On 10 November 2016 at 07:27, Tudor Girba 
>> wrote:
>> >> Hi Igor,
>> >>
>> >> Please refrain from speaking down on people.
>> >>
>> >>
>> >> Hi, Doru!
>> >> I just wanted to hear you :)
>> >>
>> >> If you have a concrete solution for how to do things, please feel free
>> to share it with us. We would be happy to learn from it.
>> >>
>> >>
>> >> Well, there's so many solutions, that i even don't know what to offer,
>> and given the potential of smalltalk, i wonder why
>> >> you are not employing any. But in overall it is a quesition of storing
>> most of your data on disk, and only small portion of it
>> >> in image (in most optimal cases - only the portion that user
>> sees/operates with).
>> >> As i said to you before, you will hit this wall inevitably, no matter
>> how much memory is available.
>> >> So, what stops you from digging in that direction?
>> >> Because even if you can fit all data in memory, consider how much time
>> it takes for GC to scan 4+ Gb of memory, comparing to
>> >> 100 MB or less.
>> >> I don't think you'll find it convenient to work in environment where
>> you'll have 2-3 seconds pauses between mouse clicks.
>> >> So, of course, my tone is not acceptable, but its pain to see how
>> people remain helpless without even thinking about
>> >> doing what they need. We have Fuel for how many years now?
>> >> So it can't be as easy as it is, just serialize the data and purge it
>> from image, till it will be required again.
>> >> Sure it will require some effort, but it is nothing comparing to day
>> to day pain that you have to tolerate because of lack of solution.
>> >>
>> >> Cheers,
>> >> Tudor
>> >>
>> >>
>> >>> On Nov 10, 2016, at 4:11 AM, Igor Stasenko 
>> wrote:
>> >>>
>> >>> Nice progress, indeed.
>> >>> Now i hope at the end of the day, the guys who doing data
>> mining/statistical analysis will finally shut up and happily be able
>> >>> to work with more bloat without need of learning a ways to properly
>> manage memory & r

Re: [Pharo-dev] Breaking the 4GB barrier with Pharo 6 64-bit

2016-11-10 Thread Norbert Hartl

> On 10.11.2016 at 12:27, Thierry Goubier wrote:
> 
> 
> 
> 2016-11-10 12:18 GMT+01:00 Norbert Hartl  >:
> [ ...]
> 
> Be it small chunks of data or not. A statement that general is most likely to 
> be wrong. So the best way might be to ignore it. Indeed you are right that 
> hardware got cheap. Even more important is the fact that hardware is almost 
> always cheaper than personal costs. Solving all those technical problems 
> instead of real ones and not trying to act in an economical way ruins a lot 
> of companies out there. You can ignore economical facts (are any other) but 
> that doesn't make you really smart!
> 
> I disagree with that. In some areas (HPC, Exascale, HPDA), whatever the 
> physical limit is, we will reach and go larger than that.
> 
What do you disagree with? I didn't say you never need it. In your case you
have concrete examples where it is necessary. In a lot of other cases it is
counterproductive. Isn't that an agreement that you cannot state it in such a
general way?

> Now, about that memory aspect, there is an entire field dedicated to 
> algorithmic solutions that never require the entire data set in memory. You 
> just have to look and implement the right underlying abstractions to allow 
> those algorithms to be implemented and run efficiently.
> 
And that is good. I think you got me wrong. I find it important to be able to
handle partial graphs in memory. But should everyone doing some statistical
research be the one implementing that? That is what I took from the
complaint: it makes that the most important part, and it is not.

Norbert


> (my best example for that: satelite imagery viewers... have allways been able 
> to handle images larger than the computer RAM size. Just need a buffered 
> streaming interface to the file).
> 

> Thierry
>  
> 
> my 2 cents,
> 
> Norbert
> 
> 
> > We should always challenge the assumptions behind our designs, because the 
> > world keeps changing and we risk becoming irrelevant, a syndrome that is 
> > not foreign to Smalltalk aficionados.
> >
> > Cheers,
> > Doru
> >
> >
> >> On Nov 10, 2016, at 9:12 AM, Igor Stasenko  >> > wrote:
> >>
> >>
> >> On 10 November 2016 at 07:27, Tudor Girba  >> > wrote:
> >> Hi Igor,
> >>
> >> Please refrain from speaking down on people.
> >>
> >>
> >> Hi, Doru!
> >> I just wanted to hear you :)
> >>
> >> If you have a concrete solution for how to do things, please feel free to 
> >> share it with us. We would be happy to learn from it.
> >>
> >>
> >> Well, there's so many solutions, that i even don't know what to offer, and 
> >> given the potential of smalltalk, i wonder why
> >> you are not employing any. But in overall it is a quesition of storing 
> >> most of your data on disk, and only small portion of it
> >> in image (in most optimal cases - only the portion that user sees/operates 
> >> with).
> >> As i said to you before, you will hit this wall inevitably, no matter how 
> >> much memory is available.
> >> So, what stops you from digging in that direction?
> >> Because even if you can fit all data in memory, consider how much time it 
> >> takes for GC to scan 4+ Gb of memory, comparing to
> >> 100 MB or less.
> >> I don't think you'll find it convenient to work in environment where 
> >> you'll have 2-3 seconds pauses between mouse clicks.
> >> So, of course, my tone is not acceptable, but its pain to see how people 
> >> remain helpless without even thinking about
> >> doing what they need. We have Fuel for how many years now?
> >> So it can't be as easy as it is, just serialize the data and purge it from 
> >> image, till it will be required again.
> >> Sure it will require some effort, but it is nothing comparing to day to 
> >> day pain that you have to tolerate because of lack of solution.
> >>
> >> Cheers,
> >> Tudor
> >>
> >>
> >>> On Nov 10, 2016, at 4:11 AM, Igor Stasenko  >>> > wrote:
> >>>
> >>> Nice progress, indeed.
> >>> Now i hope at the end of the day, the guys who doing data 
> >>> mining/statistical analysis will finally shut up and happily be able
> >>> to work with more bloat without need of learning a ways to properly 
> >>> manage memory & resources, and implement them finally.
> >>> But i guess, that won't be long silence, before they again start 
> >>> screaming in despair: please help, my bloat doesn't fits into memory... :)
> >>>
> >>> On 9 November 2016 at 12:06, Sven Van Caekenberghe  >>> > wrote:
> >>> OK, I am quite excited about the future possibilities of 64-bit Pharo. So 
> >>> I played a bit more with the current test version [1], trying to push the 
> >>> limits. In the past, it was only possible to safely allocate about 1.5GB 
> >>> of memory even though a 32-bit process' limit is theoretically 4GB (the 
> >>> OS and the VM need space too).
> >>>
> >>> Allocating a couple of 1GB ByteArrays is one way to push memory use, but

Re: [Pharo-dev] Breaking the 4GB barrier with Pharo 6 64-bit

2016-11-10 Thread Thierry Goubier
2016-11-10 12:18 GMT+01:00 Norbert Hartl :

> [ ...]
>
> Be it small chunks of data or not. A statement that general is most likely
> to be wrong. So the best way might be to ignore it. Indeed you are right
> that hardware got cheap. Even more important is the fact that hardware is
> almost always cheaper than personal costs. Solving all those technical
> problems instead of real ones and not trying to act in an economical way
> ruins a lot of companies out there. You can ignore economical facts (are
> any other) but that doesn't make you really smart!
>

I disagree with that. In some areas (HPC, Exascale, HPDA), whatever the
physical limit is, we will reach it and go beyond it.

Now, about that memory aspect: there is an entire field dedicated to
algorithmic solutions that never require the entire data set in memory. You
just have to find and implement the right underlying abstractions to allow
those algorithms to be implemented and run efficiently.

(My best example of that: satellite imagery viewers... they have always been
able to handle images larger than the computer's RAM. They just need a
buffered streaming interface to the file.)
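
In Pharo terms the same idea looks roughly like this (a sketch; the file name
and chunk size are made up):

    "Process a file far larger than RAM in fixed-size chunks."
    'huge-image.dat' asFileReference binaryReadStreamDo: [ :in |
        [ in atEnd ] whileFalse: [
            | chunk |
            chunk := in next: 1024 * 1024.  "read up to 1MB at a time"
            "... process chunk, then let it be collected ..." ] ].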

Thierry


>
> my 2 cents,
>
> Norbert
>
>
> > We should always challenge the assumptions behind our designs, because
> the world keeps changing and we risk becoming irrelevant, a syndrome that
> is not foreign to Smalltalk aficionados.
> >
> > Cheers,
> > Doru
> >
> >
> >> On Nov 10, 2016, at 9:12 AM, Igor Stasenko  wrote:
> >>
> >>
> >> On 10 November 2016 at 07:27, Tudor Girba  wrote:
> >> Hi Igor,
> >>
> >> Please refrain from speaking down on people.
> >>
> >>
> >> Hi, Doru!
> >> I just wanted to hear you :)
> >>
> >> If you have a concrete solution for how to do things, please feel free
> to share it with us. We would be happy to learn from it.
> >>
> >>
> >> Well, there's so many solutions, that i even don't know what to offer,
> and given the potential of smalltalk, i wonder why
> >> you are not employing any. But in overall it is a quesition of storing
> most of your data on disk, and only small portion of it
> >> in image (in most optimal cases - only the portion that user
> sees/operates with).
> >> As i said to you before, you will hit this wall inevitably, no matter
> how much memory is available.
> >> So, what stops you from digging in that direction?
> >> Because even if you can fit all data in memory, consider how much time
> it takes for GC to scan 4+ Gb of memory, comparing to
> >> 100 MB or less.
> >> I don't think you'll find it convenient to work in environment where
> you'll have 2-3 seconds pauses between mouse clicks.
> >> So, of course, my tone is not acceptable, but its pain to see how
> people remain helpless without even thinking about
> >> doing what they need. We have Fuel for how many years now?
> >> So it can't be as easy as it is, just serialize the data and purge it
> from image, till it will be required again.
> >> Sure it will require some effort, but it is nothing comparing to day to
> day pain that you have to tolerate because of lack of solution.
> >>
> >> Cheers,
> >> Tudor
> >>
> >>
> >>> On Nov 10, 2016, at 4:11 AM, Igor Stasenko  wrote:
> >>>
> >>> Nice progress, indeed.
> >>> Now i hope at the end of the day, the guys who doing data
> mining/statistical analysis will finally shut up and happily be able
> >>> to work with more bloat without need of learning a ways to properly
> manage memory & resources, and implement them finally.
> >>> But i guess, that won't be long silence, before they again start
> screaming in despair: please help, my bloat doesn't fits into memory... :)
> >>>
> >>> On 9 November 2016 at 12:06, Sven Van Caekenberghe 
> wrote:
> >>> OK, I am quite excited about the future possibilities of 64-bit Pharo.
> So I played a bit more with the current test version [1], trying to push
> the limits. In the past, it was only possible to safely allocate about
> 1.5GB of memory even though a 32-bit process' limit is theoretically 4GB
> (the OS and the VM need space too).
> >>>
> >>> Allocating a couple of 1GB ByteArrays is one way to push memory use,
> but it feels a bit silly. So I loaded a bunch of projects (including
> Seaside) to push the class/method counts (7K classes, 100K methods) and
> wrote a script [2] that basically copies part of the class/method metadata
> including 2 copies of each's methods source code as well as its AST
> (bypassing the cache of course). This feels more like a real object graph.
> >>>
> >>> I had to create no less than 7 (SEVEN) copies (each kept open in an
> inspector) to break through the mythical 4GB limit (real allocated & used
> memory).
> >>>
> >>> 
> >>>
> >>> I also have the impression that the image shrinking problem is gone
> (closing everything frees memory, saving the image has it return to its
> original size, 100MB in this case).
> >>>
> >>> Great work, thank you. Bright future again.
> >>>
> >>> Sven
> >>>
> >>> PS: Yes, GC is slower; No, I did not yet try to save such a lar

Re: [Pharo-dev] Breaking the 4GB barrier with Pharo 6 64-bit

2016-11-10 Thread Norbert Hartl

> On 10.11.2016 at 10:42, Tudor Girba wrote:
> 
> Hi Igor,
> 
> I am happy to see you getting active again. The next step is to commit code 
> at the rate you reply emails. I’d be even happier :).
> 
+1

> To address your point, of course it certainly would be great to have more 
> people work on automated support for swapping data in and out of the image. 
> That was the original idea behind the Fuel work. I have seen a couple of 
> cases on the mailing lists where people are actually using Fuel for caching 
> purposes. I have done this a couple of times, too. But, at this point these 
> are dedicated solutions and would be interesting to see it expand further.
> 
And still it would be too general. The only thing you can say is that swapping
in/out will make things slower, so you usually don't want it to swap. It is
comparable to swap space in OSes: in many usage scenarios, having swap at all
is an architectural design failure. So before resources actually get scarce,
there are good reasons not to care too much. And if you do want to do it,
there is no general solution. How do you swap out a partial graph with Fuel?
How can you load back a small part of the graph you swapped out? Do we need to
reify object references into objects in order to make that smart?
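
For concreteness, swapping out with Fuel looks roughly like this (standard
Fuel entry points; the path is made up). Note that it serializes the complete
reachable closure of the graph, which is exactly the open question above:
cutting out a partial graph still has to be done by hand.

    "Swap a graph out to disk, drop the reference, bring it back later."
    FLSerializer serialize: graph toFileNamed: '/tmp/graph.fuel'.
    graph := nil.  "let the GC reclaim the in-image copy"
    Smalltalk garbageCollect.
    "... later ..."
    graph := FLMaterializer materializeFromFileNamed: '/tmp/graph.fuel'.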
It is understandable from a developer's perspective: you have a real problem
you should solve, but then you make up all sorts of technical problems that
you think you need to solve instead of the original problem. That is one
prominent way projects fail.

> However, your assumption is that the best design is one that deals with small 
> chunks of data at a time. This made a lot of sense when memory was expensive 
> and small. But, these days the cost is going down very rapidly, and sizes of 
> 128+ GB of RAM is nowadays quite cheap, and there are strong signs of super 
> large non-volatile memories become increasingly accessible. The software 
> design should take advantage of what hardware offers, so it is not 
> unreasonable to want to have a GC that can deal with large size.
> 
Be it small chunks of data or not, a statement that general is most likely
to be wrong, so the best way might be to ignore it. Indeed, you are right
that hardware got cheap. Even more important is the fact that hardware is
almost always cheaper than personnel costs. Solving all those technical
problems instead of real ones, and not trying to act in an economical way,
ruins a lot of companies out there. You can ignore economical facts (or any
others), but that doesn't make you really smart!

my 2 cents,

Norbert


> We should always challenge the assumptions behind our designs, because the 
> world keeps changing and we risk becoming irrelevant, a syndrome that is not 
> foreign to Smalltalk aficionados.
> 
> Cheers,
> Doru
> 
> 
>> On Nov 10, 2016, at 9:12 AM, Igor Stasenko  wrote:
>> 
>> 
>> On 10 November 2016 at 07:27, Tudor Girba  wrote:
>> Hi Igor,
>> 
>> Please refrain from speaking down on people.
>> 
>> 
>> Hi, Doru!
>> I just wanted to hear you :)
>> 
>> If you have a concrete solution for how to do things, please feel free to 
>> share it with us. We would be happy to learn from it.
>> 
>> 
>> Well, there's so many solutions, that i even don't know what to offer, and 
>> given the potential of smalltalk, i wonder why
>> you are not employing any. But in overall it is a quesition of storing most 
>> of your data on disk, and only small portion of it
>> in image (in most optimal cases - only the portion that user sees/operates 
>> with).
>> As i said to you before, you will hit this wall inevitably, no matter how 
>> much memory is available.
>> So, what stops you from digging in that direction?
>> Because even if you can fit all data in memory, consider how much time it 
>> takes for GC to scan 4+ Gb of memory, comparing to
>> 100 MB or less.
>> I don't think you'll find it convenient to work in environment where you'll 
>> have 2-3 seconds pauses between mouse clicks.
>> So, of course, my tone is not acceptable, but its pain to see how people 
>> remain helpless without even thinking about 
>> doing what they need. We have Fuel for how many years now? 
>> So it can't be as easy as it is, just serialize the data and purge it from 
>> image, till it will be required again.
>> Sure it will require some effort, but it is nothing comparing to day to day 
>> pain that you have to tolerate because of lack of solution.
>> 
>> Cheers,
>> Tudor
>> 
>> 
>>> On Nov 10, 2016, at 4:11 AM, Igor Stasenko  wrote:
>>> 
>>> Nice progress, indeed.
>>> Now i hope at the end of the day, the guys who doing data 
>>> mining/statistical analysis will finally shut up and happily be able
>>> to work with more bloat without need of learning a ways to properly manage 
>>> memory & resources, and implement them finally.
>>> But i guess, that won't be long silence, before they again start screaming 
>>> in despair: please help, my bloat doesn't fits into memory... :)

Re: [Pharo-dev] Breaking the 4GB barrier with Pharo 6 64-bit

2016-11-10 Thread Dimitris Chloupis
On Thu, Nov 10, 2016 at 11:43 AM Tudor Girba  wrote:

> Hi Igor,
>
> I am happy to see you getting active again. The next step is to commit
> code at the rate you reply emails. I’d be even happier :).
>
>

ouch, that was not very nice

I agree with Igor and Phil, there is no genuine interest in the community
in optimising Pharo for big data. Which makes sense, because coders who
care a lot about performance stick with C/C++. Not that I blame them. You
can't have your cake and eat it too.

No idea why you would want to add 128+ GB of RAM to a computer; it's not as
if CPUs are powerful enough to deal with such a massive amount of data even
if you do your coding at C level.

I know because I work daily with 3D graphics.

Foremost, CPUs have lost the war: GPUs have dominated for almost a decade
now, especially in the area of large-scale parallelism. It is quite easy
for a cheap GPU nowadays to outperform a CPU by 10 times, and some
expensive ones can even be 100 times faster than the fastest CPU. But that
is for doing the same calculation over a very large data set.

If you go down that path you need OpenCL or CUDA support in Pharo, assuming
you want to do it all in Pharo. Modern GPUs are so generic in functionality
that they are used in many areas that have nothing to do with graphics, and
they are very popular especially for physical simulations, where data can
easily reach TBs or even PBs.

Also, a solution that I am implementing with CPPBridge would make sense
here: a shared memory area that lives outside the VM memory, so it cannot
be garbage collected, but still inside the Pharo process, so Pharo has
direct access to it with no compromise on performance. Being shared also
means that multiple instances of Pharo can access it directly, giving you
true parallelism.

If you want the comforts of Pharo, including GC, then you move a portion of
the data into the VM by copying it from the shared memory into Pharo
objects, and of course erase or overwrite the data on the shared memory
side so you don't waste RAM.

You can also delegate which Pharo instance deals with which portion of the
shared memory, so you can optimise the use of multiple cores. Data
processing that would benefit from GPU parallelism should be moved to the
GPU with the appropriate Pharo library.

The memory-mapped file backing the shared memory will strip any metadata
and store the data in its most compact format, while data that needs to be
more flexible and more high-level can be stored inside a Pharo image.

If 10 Pharos execute at the same time, one of those instances can perform
the role of manager, streaming data from the hard drive to shared memory in
the background without affecting the performance of the other Pharos. This
will give you the ability to deal with TBs of data and to make use of old
computers with little memory.

Out of all that, I will be materializing the shared memory part, the
protocol, and the memory-mapped file that saves the shared memory, because
I don't need the rest.

Of course, here comes the debate of why do it in Pharo instead of using a
C/C++ library, or C support for CUDA/OpenCL, and letting Pharo just sit in
the driving seat and perform the role of manager.

This is how Python is used by modern scientists: C++ libraries driven by
Python scripting. Pharo can do the same.

I don't believe optimising the GC is an ideal solution. It is not even
necessary.


Re: [Pharo-dev] Breaking the 4GB barrier with Pharo 6 64-bit

2016-11-10 Thread p...@highoctane.be
On Thu, Nov 10, 2016 at 10:31 AM, Denis Kudriashov 
wrote:

>
> 2016-11-10 9:49 GMT+01:00 p...@highoctane.be :
>
>> Ah, but then it may be more interesting to have a data image (maybe a lot
>> of these) and a front end image.
>>
>> Isn't Seamless something that could help us here? No need to bring the
>> data back, just manipulate it through proxies.
>>
>
> Problem that server image will anyway perform GC. And it will be slow if
> server image is big which will stop all world.
>

What if we asked it to not do any GC at all? Like, if we have tons of RAM,
why bother? Especially if all it is used for is keeping datasets: load
them, save the image to disk. When needed, trash the loaded stuff and
reload from zero.
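
Something along those lines already works with nothing but the stock image
API (a rough sketch; #snapshot:andQuit: and #garbageCollect are real, the
DataSets global and the CSV file are placeholders):

"Load once, then freeze the loaded state into the image file on disk."
Smalltalk at: #DataSets put: 'big.csv' asFileReference contents lines.
Smalltalk snapshot: true andQuit: false.
"To refresh: trash the loaded stuff, collect, reload, snapshot again."
Smalltalk at: #DataSets put: nil.
Smalltalk garbageCollect.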

Basically that is what happens with Spark.

http://sujee.net/2015/01/22/understanding-spark-caching/#.WCRIgy0rKpo
https://0x0fff.com/spark-misconceptions/

and Tachyon/Alluxio is kind of solving this kind of issue (it may be nice
to have it interact with the Pharo image). http://www.alluxio.org/ This
thing basically keeps stuff in memory in case one needs to reuse the data
between workload runs.

Or have one object memory for work and one for datasets (the first one gets
GC'd, the other one doesn't).

Phil


Re: [Pharo-dev] Breaking the 4GB barrier with Pharo 6 64-bit

2016-11-10 Thread Sven Van Caekenberghe

> On 10 Nov 2016, at 10:25, Sven Van Caekenberghe  wrote:
> 
> 
>> On 10 Nov 2016, at 10:10, Denis Kudriashov  wrote:
>> 
>> 
>> 2016-11-09 23:30 GMT+01:00 Nicolas Cellier 
>> :
>> uptime  0h0m0s
>> memory  70,918,144 bytes
>>old 61,966,112 bytes (87.4%)
>>young   2,781,608 bytes (3.9004%)
>> I see yet another bad usage of round:/roundTo: --^
>> 
>> It just printed float :). I think anybody round values in this statistics.
> 
> Nothing should be rounded. Just compute the percentage and then use 
> #printShowingDecimalPlaces: or #printOn:showingDecimalPlaces:

For example,

'Status OK - Clock {1} - Allocated {2} bytes - {3} % free.' format: { 
   DateAndTime now.
   self memoryTotal asStringWithCommas. 
   (self memoryFree / self memoryTotal * 100.0) printShowingDecimalPlaces: 2 }

Prints

Status OK - Clock 2016-11-10T09:47:18.367242+00:00 - Allocated 217,852,528 
bytes - 2.36 % free.

(This is part of NeoConsole, a REPL package).




Re: [Pharo-dev] Breaking the 4GB barrier with Pharo 6 64-bit

2016-11-10 Thread Tudor Girba
Hi Igor,

I am happy to see you getting active again. The next step is to commit code
at the rate you reply to emails. I’d be even happier :).

To address your point, of course it would be great to have more people work
on automated support for swapping data in and out of the image. That was
the original idea behind the Fuel work. I have seen a couple of cases on
the mailing lists where people are actually using Fuel for caching
purposes. I have done this a couple of times, too. But at this point these
are dedicated solutions, and it would be interesting to see this expand
further.

However, your assumption is that the best design is one that deals with
small chunks of data at a time. This made a lot of sense when memory was
expensive and small. But these days the cost is going down very rapidly,
128+ GB of RAM is nowadays quite cheap, and there are strong signs of super
large non-volatile memories becoming increasingly accessible. The software
design should take advantage of what the hardware offers, so it is not
unreasonable to want a GC that can deal with large sizes.

We should always challenge the assumptions behind our designs, because the 
world keeps changing and we risk becoming irrelevant, a syndrome that is not 
foreign to Smalltalk aficionados.

Cheers,
Doru


> On Nov 10, 2016, at 9:12 AM, Igor Stasenko  wrote:
> 
> 
> On 10 November 2016 at 07:27, Tudor Girba  wrote:
> Hi Igor,
> 
> Please refrain from speaking down on people.
> 
> 
> Hi, Doru!
> I just wanted to hear you :)
>  
> If you have a concrete solution for how to do things, please feel free to 
> share it with us. We would be happy to learn from it.
> 
> 
> Well, there's so many solutions, that i even don't know what to offer, and 
> given the potential of smalltalk, i wonder why
> you are not employing any. But in overall it is a quesition of storing most 
> of your data on disk, and only small portion of it
> in image (in most optimal cases - only the portion that user sees/operates 
> with).
> As i said to you before, you will hit this wall inevitably, no matter how 
> much memory is available.
> So, what stops you from digging in that direction?
> Because even if you can fit all data in memory, consider how much time it 
> takes for GC to scan 4+ Gb of memory, comparing to
> 100 MB or less.
> I don't think you'll find it convenient to work in environment where you'll 
> have 2-3 seconds pauses between mouse clicks.
> So, of course, my tone is not acceptable, but its pain to see how people 
> remain helpless without even thinking about 
> doing what they need. We have Fuel for how many years now? 
> So it can't be as easy as it is, just serialize the data and purge it from 
> image, till it will be required again.
> Sure it will require some effort, but it is nothing comparing to day to day 
> pain that you have to tolerate because of lack of solution.
>  
> Cheers,
> Tudor
> 
> 
> > On Nov 10, 2016, at 4:11 AM, Igor Stasenko  wrote:
> >
> > Nice progress, indeed.
> > Now i hope at the end of the day, the guys who doing data 
> > mining/statistical analysis will finally shut up and happily be able
> > to work with more bloat without need of learning a ways to properly manage 
> > memory & resources, and implement them finally.
> > But i guess, that won't be long silence, before they again start screaming 
> > in despair: please help, my bloat doesn't fits into memory... :)
> >
> > On 9 November 2016 at 12:06, Sven Van Caekenberghe  wrote:
> > OK, I am quite excited about the future possibilities of 64-bit Pharo. So I 
> > played a bit more with the current test version [1], trying to push the 
> > limits. In the past, it was only possible to safely allocate about 1.5GB of 
> > memory even though a 32-bit process' limit is theoretically 4GB (the OS and 
> > the VM need space too).
> >
> > Allocating a couple of 1GB ByteArrays is one way to push memory use, but it 
> > feels a bit silly. So I loaded a bunch of projects (including Seaside) to 
> > push the class/method counts (7K classes, 100K methods) and wrote a script 
> > [2] that basically copies part of the class/method metadata including 2 
> > copies of each's methods source code as well as its AST (bypassing the 
> > cache of course). This feels more like a real object graph.
> >
> > I had to create no less than 7 (SEVEN) copies (each kept open in an 
> > inspector) to break through the mythical 4GB limit (real allocated & used 
> > memory).
> >
> > 
> >
> > I also have the impression that the image shrinking problem is gone 
> > (closing everything frees memory, saving the image has it return to its 
> > original size, 100MB in this case).
> >
> > Great work, thank you. Bright future again.
> >
> > Sven
> >
> > PS: Yes, GC is slower; No, I did not yet try to save such a large image.
> >
> > [1]
> >
> > VM here: http://bintray.com/estebanlm/pharo-vm/build#files/
> > Image here: http://files.pharo.org/get-files/60/pharo-64.zip
> >
> > [2]
> >
> >

Re: [Pharo-dev] Breaking the 4GB barrier with Pharo 6 64-bit

2016-11-10 Thread Denis Kudriashov
2016-11-10 9:49 GMT+01:00 p...@highoctane.be :

> Ah, but then it may be more interesting to have a data image (maybe a lot
> of these) and a front end image.
>
> Isn't Seamless something that could help us here? No need to bring the
> data back, just manipulate it through proxies.
>

The problem is that the server image will perform GC anyway. And GC will be
slow if the server image is big, which stops the whole world.


Re: [Pharo-dev] Breaking the 4GB barrier with Pharo 6 64-bit

2016-11-10 Thread Tudor Girba
Hi,

There is never any point in talking down on people. It never leads to anything 
except perhaps stifling action and participation.

We want to foster an environment in which people should not be afraid to be a 
novice at something and ask for help.

Cheers,
Doru

> On Nov 10, 2016, at 9:45 AM, p...@highoctane.be wrote:
> 
> Tudor,
> 
> Igor still has a point here. I was talking yesterday with a data science guy 
> and he was indeed more interested in lamenting than working out solutions for 
> his problems.
> 
> Which weren't that hard to begin with as all it took is an hour of work to 
> get his results. But I think he felt better complaining and self aggrandizing 
> than actually making things work and move on to the next challenge.
> 
> Example of his "issues": 
> 
> Him:"I have a lot of data"
> Me: "Like what, more or less than 1TB?"
> Him: "Less than that"
> Me: "kay, can you give me a sample set of this hard disk?"
> Him: "Yeah, but no, well, I need to get it first"
> Me: "Let's sit tomorrow over lunch so that we can ingest it all and work it 
> out"
> Him: "Let me come back to you..."
> 
> I think he was more interested in uttering things like "Spark 2.0" "Lots of 
> data" "Star schema" (and saying it loud so that people could hear it) than 
> solving anything real.
> 
> Overgeneralizing yes, speaking down, heh, not so much. There are indeed super 
> smart/efficient/effective people in data science. But there is also a crowd 
> that is quite, how to say... more interested in the Egyptian-style grand 
> priest status than in the actual problems.
> 
> Phil
> 
> 
> 
> On Thu, Nov 10, 2016 at 7:27 AM, Tudor Girba  wrote:
> Hi Igor,
> 
> Please refrain from speaking down on people.
> 
> If you have a concrete solution for how to do things, please feel free to 
> share it with us. We would be happy to learn from it.
> 
> Cheers,
> Tudor
> 
> 
> > On Nov 10, 2016, at 4:11 AM, Igor Stasenko  wrote:
> >
> > Nice progress, indeed.
> > Now i hope at the end of the day, the guys who doing data 
> > mining/statistical analysis will finally shut up and happily be able
> > to work with more bloat without need of learning a ways to properly manage 
> > memory & resources, and implement them finally.
> > But i guess, that won't be long silence, before they again start screaming 
> > in despair: please help, my bloat doesn't fits into memory... :)
> >
> > On 9 November 2016 at 12:06, Sven Van Caekenberghe  wrote:
> > OK, I am quite excited about the future possibilities of 64-bit Pharo. So I 
> > played a bit more with the current test version [1], trying to push the 
> > limits. In the past, it was only possible to safely allocate about 1.5GB of 
> > memory even though a 32-bit process' limit is theoretically 4GB (the OS and 
> > the VM need space too).
> >
> > Allocating a couple of 1GB ByteArrays is one way to push memory use, but it 
> > feels a bit silly. So I loaded a bunch of projects (including Seaside) to 
> > push the class/method counts (7K classes, 100K methods) and wrote a script 
> > [2] that basically copies part of the class/method metadata including 2 
> > copies of each's methods source code as well as its AST (bypassing the 
> > cache of course). This feels more like a real object graph.
> >
> > I had to create no less than 7 (SEVEN) copies (each kept open in an 
> > inspector) to break through the mythical 4GB limit (real allocated & used 
> > memory).
> >
> > 
> >
> > I also have the impression that the image shrinking problem is gone 
> > (closing everything frees memory, saving the image has it return to its 
> > original size, 100MB in this case).
> >
> > Great work, thank you. Bright future again.
> >
> > Sven
> >
> > PS: Yes, GC is slower; No, I did not yet try to save such a large image.
> >
> > [1]
> >
> > VM here: http://bintray.com/estebanlm/pharo-vm/build#files/
> > Image here: http://files.pharo.org/get-files/60/pharo-64.zip
> >
> > [2]
> >
> > | meta |
> > ASTCache reset.
> > meta := Dictionary new.
> > Smalltalk allClassesAndTraits do: [ :each | | classMeta methods |
> >   (classMeta := Dictionary new)
> > at: #name put: each name asSymbol;
> > at: #comment put: each comment;
> > at: #definition put: each definition;
> > at: #object put: each.
> >   methods := Dictionary new.
> >   classMeta at: #methods put: methods.
> >   each methodsDo: [ :method | | methodMeta |
> > (methodMeta := Dictionary new)
> >   at: #name put: method selector;
> >   at: #source put: method sourceCode;
> >   at: #ast put: method ast;
> >   at: #args put: method argumentNames asArray;
> >   at: #formatted put: method ast formattedCode;
> >   at: #comment put: (method comment ifNotNil: [ :str | str 
> > withoutQuoting ]);
> >   at: #object put: method.
> > methods at: method selector put: methodMeta ].
> >   meta at: each name asSymbol put: classMeta ].
> > meta.
> >
> >
> >
> > --
> > Sven Van Caekenberghe
> > Proudly supporting Pharo
> > http://pharo.org

Re: [Pharo-dev] Breaking the 4GB barrier with Pharo 6 64-bit

2016-11-10 Thread Sven Van Caekenberghe

> On 10 Nov 2016, at 10:10, Denis Kudriashov  wrote:
> 
> 
> 2016-11-09 23:30 GMT+01:00 Nicolas Cellier 
> :
> uptime  0h0m0s
> memory  70,918,144 bytes
> old 61,966,112 bytes (87.4%)
> young   2,781,608 bytes (3.9004%)
> I see yet another bad usage of round:/roundTo: --^
> 
> It just printed float :). I think anybody round values in this statistics.

Nothing should be rounded. Just compute the percentage and then use 
#printShowingDecimalPlaces: or #printOn:showingDecimalPlaces:


Re: [Pharo-dev] [ANN] Pharo Association has a new Website!

2016-11-10 Thread Marcus Denker

> On 10 Nov 2016, at 10:22, Marcus Denker  wrote:
> 
> Hello,
> 
> We have changed the backend of the Pharo Association.
> 
>   https://association.pharo.org 
> 
> If you ever joined the association in the past, please consider to 
> re-subscribe.
> 
> We have added already all existing active members, in this case you should
> have already received a new password.
> 
> For all questions, do not hesitate to contact assocat...@pharo.org 
> 
oops: Pharo Association 



Re: [Pharo-dev] Breaking the 4GB barrier with Pharo 6 64-bit

2016-11-10 Thread Denis Kudriashov
Hi Igor.

2016-11-10 9:12 GMT+01:00 Igor Stasenko :

> Because even if you can fit all data in memory, consider how much time it
> takes for GC to scan 4+ Gb of memory, comparing to
> 100 MB or less.
>

But do you think there is no solution to that? Imagine Pharo as a computer
model: no files, only objects. Is there really no way to implement a proper
GC in such an environment?
(It is of course a question to others and to Eliot.)


[Pharo-dev] [ANN] Pharo Association has a new Website!

2016-11-10 Thread Marcus Denker
Hello,

We have changed the backend of the Pharo Association.

https://association.pharo.org 

If you ever joined the association in the past, please consider
re-subscribing.

We have already added all existing active members; in that case, you should
have already received a new password.

For all questions, do not hesitate to contact assocat...@pharo.org 



Marcus

Re: [Pharo-dev] Breaking the 4GB barrier with Pharo 6 64-bit

2016-11-10 Thread Denis Kudriashov
2016-11-09 23:30 GMT+01:00 Nicolas Cellier <
nicolas.cellier.aka.n...@gmail.com>:

> uptime  0h0m0s
>> memory  70,918,144 bytes
>> old 61,966,112 bytes (87.4%)
>> young   2,781,608 bytes (3.9004%)
>>
> I see yet another bad usage of round:/roundTo: --^
>

It just printed a float :). I don't think anybody rounds values in these
statistics.


Re: [Pharo-dev] Breaking the 4GB barrier with Pharo 6 64-bit

2016-11-10 Thread p...@highoctane.be
Ah, but then it may be more interesting to have a data image (maybe a lot
of these) and a front end image.

Isn't Seamless something that could help us here? No need to bring the data
back, just manipulate it through proxies.

FWIW, I have 2PB of data. Not going to fit in RAM. But it would also take
ages to load anyway, so we work on pieces.

FWIW2: one Hadoop cluster I am managing now will grow by, I think, some
100s of servers in the coming months.
64-bit Pharo is something I can deploy on the boxes, which are 128GB-256GB
things with like 32-48 cores.

As I mentioned before, if you want to run something on it with Pharo, be my
guest.


Phil

On Thu, Nov 10, 2016 at 9:12 AM, Igor Stasenko  wrote:

>
> On 10 November 2016 at 07:27, Tudor Girba  wrote:
>
>> Hi Igor,
>>
>> Please refrain from speaking down on people.
>>
>>
> Hi, Doru!
> I just wanted to hear you :)
>
>
>> If you have a concrete solution for how to do things, please feel free to
>> share it with us. We would be happy to learn from it.
>>
>>
> Well, there's so many solutions, that i even don't know what to offer, and
> given the potential of smalltalk, i wonder why
> you are not employing any. But in overall it is a quesition of storing
> most of your data on disk, and only small portion of it
> in image (in most optimal cases - only the portion that user sees/operates
> with).
> As i said to you before, you will hit this wall inevitably, no matter how
> much memory is available.
> So, what stops you from digging in that direction?
> Because even if you can fit all data in memory, consider how much time it
> takes for GC to scan 4+ Gb of memory, comparing to
> 100 MB or less.
> I don't think you'll find it convenient to work in environment where
> you'll have 2-3 seconds pauses between mouse clicks.
> So, of course, my tone is not acceptable, but its pain to see how people
> remain helpless without even thinking about
> doing what they need. We have Fuel for how many years now?
> So it can't be as easy as it is, just serialize the data and purge it from
> image, till it will be required again.
> Sure it will require some effort, but it is nothing comparing to day to
> day pain that you have to tolerate because of lack of solution.
>
>
>> Cheers,
>> Tudor
>>
>>
>> > On Nov 10, 2016, at 4:11 AM, Igor Stasenko  wrote:
>> >
>> > Nice progress, indeed.
>> > Now i hope at the end of the day, the guys who doing data
>> mining/statistical analysis will finally shut up and happily be able
>> > to work with more bloat without need of learning a ways to properly
>> manage memory & resources, and implement them finally.
>> > But i guess, that won't be long silence, before they again start
>> screaming in despair: please help, my bloat doesn't fits into memory... :)
>> >
>> > On 9 November 2016 at 12:06, Sven Van Caekenberghe 
>> wrote:
>> > OK, I am quite excited about the future possibilities of 64-bit Pharo.
>> So I played a bit more with the current test version [1], trying to push
>> the limits. In the past, it was only possible to safely allocate about
>> 1.5GB of memory even though a 32-bit process' limit is theoretically 4GB
>> (the OS and the VM need space too).
>> >
>> > Allocating a couple of 1GB ByteArrays is one way to push memory use,
>> but it feels a bit silly. So I loaded a bunch of projects (including
>> Seaside) to push the class/method counts (7K classes, 100K methods) and
>> wrote a script [2] that basically copies part of the class/method metadata
>> including 2 copies of each's methods source code as well as its AST
>> (bypassing the cache of course). This feels more like a real object graph.
>> >
>> > I had to create no less than 7 (SEVEN) copies (each kept open in an
>> inspector) to break through the mythical 4GB limit (real allocated & used
>> memory).
>> >
>> > 
>> >
>> > I also have the impression that the image shrinking problem is gone
>> (closing everything frees memory, saving the image has it return to its
>> original size, 100MB in this case).
>> >
>> > Great work, thank you. Bright future again.
>> >
>> > Sven
>> >
>> > PS: Yes, GC is slower; No, I did not yet try to save such a large image.
>> >
>> > [1]
>> >
>> > VM here: http://bintray.com/estebanlm/pharo-vm/build#files/
>> > Image here: http://files.pharo.org/get-files/60/pharo-64.zip
>> >
>> > [2]
>> >
>> > | meta |
>> > ASTCache reset.
>> > meta := Dictionary new.
>> > Smalltalk allClassesAndTraits do: [ :each | | classMeta methods |
>> >   (classMeta := Dictionary new)
>> > at: #name put: each name asSymbol;
>> > at: #comment put: each comment;
>> > at: #definition put: each definition;
>> > at: #object put: each.
>> >   methods := Dictionary new.
>> >   classMeta at: #methods put: methods.
>> >   each methodsDo: [ :method | | methodMeta |
>> > (methodMeta := Dictionary new)
>> >   at: #name put: method selector;
>> >   at: #source put: method sourceCode;
>> >   at: #ast put: method ast;
> >   at: #args put: method argumentNames asArray;

Re: [Pharo-dev] Breaking the 4GB barrier with Pharo 6 64-bit

2016-11-10 Thread p...@highoctane.be
Tudor,

Igor still has a point here. I was talking yesterday with a data science
guy and he was indeed more interested in lamenting than working out
solutions for his problems.

Which weren't that hard to begin with, as all it took was an hour of work
to get his results. But I think he felt better complaining and
self-aggrandizing than actually making things work and moving on to the
next challenge.

Example of his "issues":

Him:"I have a lot of data"
Me: "Like what, more or less than 1TB?"
Him: "Less than that"
Me: "kay, can you give me a sample set of this hard disk?"
Him: "Yeah, but no, well, I need to get it first"
Me: "Let's sit tomorrow over lunch so that we can ingest it all and work it
out"
Him: "Let me come back to you..."

I think he was more interested in uttering things like "Spark 2.0" "Lots of
data" "Star schema" (and saying it loud so that people could hear it) than
solving anything real.

Overgeneralizing yes, speaking down, heh, not so much. There are indeed
super smart/efficient/effective people in data science. But there is also a
crowd that is quite, how to say... more interested in the Egyptian-style
grand priest status than in the actual problems.

Phil



On Thu, Nov 10, 2016 at 7:27 AM, Tudor Girba  wrote:

> Hi Igor,
>
> Please refrain from speaking down on people.
>
> If you have a concrete solution for how to do things, please feel free to
> share it with us. We would be happy to learn from it.
>
> Cheers,
> Tudor
>
>
> > On Nov 10, 2016, at 4:11 AM, Igor Stasenko  wrote:
> >
> > Nice progress, indeed.
> > Now i hope at the end of the day, the guys who doing data
> mining/statistical analysis will finally shut up and happily be able
> > to work with more bloat without need of learning a ways to properly
> manage memory & resources, and implement them finally.
> > But i guess, that won't be long silence, before they again start
> screaming in despair: please help, my bloat doesn't fits into memory... :)
> >
> > On 9 November 2016 at 12:06, Sven Van Caekenberghe  wrote:
> > OK, I am quite excited about the future possibilities of 64-bit Pharo.
> So I played a bit more with the current test version [1], trying to push
> the limits. In the past, it was only possible to safely allocate about
> 1.5GB of memory even though a 32-bit process' limit is theoretically 4GB
> (the OS and the VM need space too).
> >
> > Allocating a couple of 1GB ByteArrays is one way to push memory use, but
> it feels a bit silly. So I loaded a bunch of projects (including Seaside)
> to push the class/method counts (7K classes, 100K methods) and wrote a
> script [2] that basically copies part of the class/method metadata
> including 2 copies of each's methods source code as well as its AST
> (bypassing the cache of course). This feels more like a real object graph.
> >
> > I had to create no less than 7 (SEVEN) copies (each kept open in an
> inspector) to break through the mythical 4GB limit (real allocated & used
> memory).
> >
> > 
> >
> > I also have the impression that the image shrinking problem is gone
> (closing everything frees memory, saving the image has it return to its
> original size, 100MB in this case).
> >
> > Great work, thank you. Bright future again.
> >
> > Sven
> >
> > PS: Yes, GC is slower; No, I did not yet try to save such a large image.
> >
> > [1]
> >
> > VM here: http://bintray.com/estebanlm/pharo-vm/build#files/
> > Image here: http://files.pharo.org/get-files/60/pharo-64.zip
> >
> > [2]
> >
> > | meta |
> > ASTCache reset.
> > meta := Dictionary new.
> > Smalltalk allClassesAndTraits do: [ :each | | classMeta methods |
> >   (classMeta := Dictionary new)
> > at: #name put: each name asSymbol;
> > at: #comment put: each comment;
> > at: #definition put: each definition;
> > at: #object put: each.
> >   methods := Dictionary new.
> >   classMeta at: #methods put: methods.
> >   each methodsDo: [ :method | | methodMeta |
> > (methodMeta := Dictionary new)
> >   at: #name put: method selector;
> >   at: #source put: method sourceCode;
> >   at: #ast put: method ast;
> >   at: #args put: method argumentNames asArray;
> >   at: #formatted put: method ast formattedCode;
> >   at: #comment put: (method comment ifNotNil: [ :str | str
> withoutQuoting ]);
> >   at: #object put: method.
> > methods at: method selector put: methodMeta ].
> >   meta at: each name asSymbol put: classMeta ].
> > meta.
> >
> >
> >
> > --
> > Sven Van Caekenberghe
> > Proudly supporting Pharo
> > http://pharo.org
> > http://association.pharo.org
> > http://consortium.pharo.org
> >
> >
> >
> >
> >
> >
> >
> > --
> > Best regards,
> > Igor Stasenko.
>
> --
> www.tudorgirba.com
> www.feenk.com
>
> "We can create beautiful models in a vacuum.
> But, to get them effective we have to deal with the inconvenience of
> reality."
>
>
>


Re: [Pharo-dev] macOS Sierra support

2016-11-10 Thread Esteban Lorenzano
The regular download by zeroconf should give you a usable VM.
Those CI jobs are not the official place to get VMs from… and they do not
build the actual branch at the moment. In fact, I will disable those jobs,
because there is no point in keeping them running atm and people got
confused.

Esteban

> On 10 Nov 2016, at 00:56, Sean P. DeNigris  wrote:
> 
> Max Leske wrote
>> on Sierra... If you need a Cog VM
> 
> Do we have a Cog VM that works on Sierra yet? I downloaded 547 from
> https://ci.inria.fr/pharo/view/5.0-VM-Legacy/job/PharoVM/Architecture=32,Slave=vm-builder-mac/
> . The application window opened, but the contents were all white (e.g. the
> World never appeared) 
> 
> 
> 
> -
> Cheers,
> Sean
> --
> View this message in context: 
> http://forum.world.st/macOS-Sierra-support-tp4917181p4922435.html
> Sent from the Pharo Smalltalk Developers mailing list archive at Nabble.com.
> 




Re: [Pharo-dev] Breaking the 4GB barrier with Pharo 6 64-bit

2016-11-10 Thread Igor Stasenko
On 10 November 2016 at 07:27, Tudor Girba  wrote:

> Hi Igor,
>
> Please refrain from speaking down on people.
>
>
Hi, Doru!
I just wanted to hear from you :)


> If you have a concrete solution for how to do things, please feel free to
> share it with us. We would be happy to learn from it.
>
>
Well, there are so many solutions that I don't even know what to offer,
and given the potential of Smalltalk, I wonder why you are not employing
any. But overall it is a question of storing most of your data on disk,
and only a small portion of it in the image (in the most optimal cases,
only the portion that the user sees/operates on).
As I said to you before, you will hit this wall inevitably, no matter how
much memory is available.
So, what stops you from digging in that direction?
Because even if you can fit all the data in memory, consider how much time
it takes for the GC to scan 4+ GB of memory, compared to 100 MB or less.
I don't think you'll find it convenient to work in an environment where
you have 2-3 second pauses between mouse clicks.
So, of course, my tone is not acceptable, but it pains me to see how
people remain helpless without even thinking about doing what they need.
We have had Fuel for how many years now?
It can hardly be easier than it is: just serialize the data and purge it
from the image until it is required again.
Sure, it will require some effort, but that is nothing compared to the
day-to-day pain you have to tolerate for lack of a solution.
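
For the record, a minimal workspace sketch of that spill-and-reload
pattern, using only Fuel's documented convenience API (the sample data and
the file name 'cache.fuel' are placeholders):

| cache |
cache := (1 to: 1000000) collect: [ :i | i -> i squared ].  "stand-in for a real data set"
FLSerializer serialize: cache toFileNamed: 'cache.fuel'.
cache := nil.  "purge the in-image reference; the graph becomes collectable"
Smalltalk garbageCollect.  "and the heap the GC has to scan shrinks accordingly"
"... later, when the data is needed again ..."
cache := FLMaterializer materializeFromFileNamed: 'cache.fuel'.

Wrapping this behind a small cache object is where the effort goes, and
none of it needs new VM support.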


> Cheers,
> Tudor
>
>
> > On Nov 10, 2016, at 4:11 AM, Igor Stasenko  wrote:
> >
> > Nice progress, indeed.
> > Now i hope at the end of the day, the guys who doing data
> mining/statistical analysis will finally shut up and happily be able
> > to work with more bloat without need of learning a ways to properly
> manage memory & resources, and implement them finally.
> > But i guess, that won't be long silence, before they again start
> screaming in despair: please help, my bloat doesn't fits into memory... :)
> >
> > On 9 November 2016 at 12:06, Sven Van Caekenberghe  wrote:
> > OK, I am quite excited about the future possibilities of 64-bit Pharo.
> So I played a bit more with the current test version [1], trying to push
> the limits. In the past, it was only possible to safely allocate about
> 1.5GB of memory even though a 32-bit process' limit is theoretically 4GB
> (the OS and the VM need space too).
> >
> > Allocating a couple of 1GB ByteArrays is one way to push memory use, but
> it feels a bit silly. So I loaded a bunch of projects (including Seaside)
> to push the class/method counts (7K classes, 100K methods) and wrote a
> script [2] that basically copies part of the class/method metadata
> including 2 copies of each's methods source code as well as its AST
> (bypassing the cache of course). This feels more like a real object graph.
> >
> > I had to create no less than 7 (SEVEN) copies (each kept open in an
> inspector) to break through the mythical 4GB limit (real allocated & used
> memory).
> >
> > 
> >
> > I also have the impression that the image shrinking problem is gone
> (closing everything frees memory, saving the image has it return to its
> original size, 100MB in this case).
> >
> > Great work, thank you. Bright future again.
> >
> > Sven
> >
> > PS: Yes, GC is slower; No, I did not yet try to save such a large image.
> >
> > [1]
> >
> > VM here: http://bintray.com/estebanlm/pharo-vm/build#files/
> > Image here: http://files.pharo.org/get-files/60/pharo-64.zip
> >
> > [2]
> >
> > | meta |
> > ASTCache reset.
> > meta := Dictionary new.
> > Smalltalk allClassesAndTraits do: [ :each | | classMeta methods |
> >   (classMeta := Dictionary new)
> > at: #name put: each name asSymbol;
> > at: #comment put: each comment;
> > at: #definition put: each definition;
> > at: #object put: each.
> >   methods := Dictionary new.
> >   classMeta at: #methods put: methods.
> >   each methodsDo: [ :method | | methodMeta |
> > (methodMeta := Dictionary new)
> >   at: #name put: method selector;
> >   at: #source put: method sourceCode;
> >   at: #ast put: method ast;
> >   at: #args put: method argumentNames asArray;
> >   at: #formatted put: method ast formattedCode;
> >   at: #comment put: (method comment ifNotNil: [ :str | str
> withoutQuoting ]);
> >   at: #object put: method.
> > methods at: method selector put: methodMeta ].
> >   meta at: each name asSymbol put: classMeta ].
> > meta.
> >
> >
> >
> > --
> > Sven Van Caekenberghe
> > Proudly supporting Pharo
> > http://pharo.org
> > http://association.pharo.org
> > http://consortium.pharo.org
> >
> >
> >
> >
> >
> >
> >
> > --
> > Best regards,
> > Igor Stasenko.
>
> --
> www.tudorgirba.com
> www.feenk.com
>
> "We can create beautiful models in a vacuum.
> But, to get them effective we have to deal with the inconvenience of
> reality."
>
>
>


-- 
Best regards,
Igor Stasenko.