Re: [Pharo-users] Profiling

2016-09-10 Thread stepharo

Hi Vitor

Can you open a bug entry about the clean-up process? Because it should work.

If I remember well, every image we produce has the "clean for production"
code run on it.


Stef


On 10/9/16 at 17:32, Vitor Medina Cruz wrote:
Also, I could not test with a cleaned image, since this procedure
doesn't seem to be working: I get errors in a UI Pharo, or it runs
forever in headless mode.


On Sat, Sep 10, 2016 at 12:30 PM, Vitor Medina Cruz wrote:


OK, my mistake was not taking into account that DO has a MUCH faster
link than the one I have at home... :/ Those ten I/O calls transfer
more than 5Mb, so you are right, Sven :)

Ben: I tried to change the Delay Scheduler, but the problem is
that there is actually too much I/O wait. There was also no
difference between running on Windows or Linux.



On Thu, Sep 8, 2016 at 2:48 PM, Vitor Medina Cruz wrote:

Why not? You are doing (lots of) network I/O. It is
normal that your image code has to wait from time to time
for data to come in from the network. By definition that
is slow (in CPU terms). The Digital Ocean instance is
probably faster in that respect.


But the wait time alone corresponds to 73% (~13 seconds of non-CPU
time) of the entire procedure, which takes ~18 seconds
total. On the remote server the same procedure takes only ~6
seconds; supposing it still spends 73% waiting, that would give
us ~4 seconds of wait time. I think a difference of 4 seconds
versus 13 seconds of wait time is too much, isn't it? I am doing
at most ten I/O calls. It is as if each one takes more than a
second waiting for a response, which does not seem right. I
will do additional tests.

Also, having the IDE UI with lots of tools open might
influence things. Best do some more experiments. But
benchmarking is very tricky.


Yes, I will try doing different stuff here!

Thanks!
Vitor


On Thu, Sep 8, 2016 at 9:09 AM, Sven Van Caekenberghe wrote:


> On 08 Sep 2016, at 14:01, Vitor Medina Cruz wrote:
>
> Thanks for the answers!
>
> If this is time spent on I/O it is really strange. I am
consuming the Twitter API and it doesn't take this long
to get a response. Besides, while those profiles were made
on a local Windows 10 machine, the same code on a Pharo 5
(get.pharo.org) deployed on Linux on Digital Ocean takes
~6 seconds, which means that a lot less time is spent on
I/O. Isn't that strange? I will try to spin up a local
Linux machine with both a headful and a headless Pharo to
see if this time changes.
>
> Is there a way to profile a remote image? I would like
to see what is happening in the Digital Ocean deployment.
Maybe put the headless Pharo there in profiling mode?
>
> Ben: this is a heavy JSON parsing procedure, so I would
expect NeoJSON to take some time. Perhaps there is a
way to optimize this, but what caught my attention was the
huge amount of time spent in the idleProcess. If that is
I/O wait, it shouldn't be like this.

Why not? You are doing (lots of) network I/O. It is
normal that your image code has to wait from time to time
for data to come in from the network. By definition that
is slow (in CPU terms). The Digital Ocean instance is
probably faster in that respect.

Also, having the IDE UI with lots of tools open might
influence things. Best do some more experiments. But
benchmarking is very tricky.

> Thanks,
> Vitor
>
> On Thu, Sep 8, 2016 at 4:42 AM, Clément Bera wrote:
>
>
> On Thu, Sep 8, 2016 at 3:44 AM, Vitor Medina Cruz wrote:
> Hello,
>
> While profiling some I/O code that takes ~20 seconds to
execute under my local image, the report says that about
~13 seconds is wasted on OtherProcesses ->
ProcessorScheduler class>>idleProcess. I could not
understand what this idleProcess does by looking at the
code. First I thought this could be time waiting for the I/O

Re: [Pharo-users] Cryptography packages

2016-09-10 Thread Esteban A. Maringolo
Thanks, tocayo (namesake),

I see the latest Metacello config was done by you, but I can't load it
because I get an MNU exception in the initialization of the VintageFrame class,
because ASN1Module is not in my image.

initializeAsn1Der
	((ASN1Module name: #secureSession) sequence: #VintageFrame mapping: VintageFrame)
		add: #header type: #VintagePayloadHeader;
		add: #payload type: #ASN1AnyType;
		yourself.

Any pointers?

Esteban A. Maringolo
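
For readers hitting the same MNU: a quick playground check of whether the missing
class is really absent (a minimal sketch; ASN1Module comes from the error above,
nothing else here is part of the Cryptography package):

"True if the class the initializer needs is present in this image."
(Smalltalk globals includesKey: #ASN1Module)
	ifFalse: [ Transcript show: 'ASN1Module is missing; load the ASN1 support first'; cr ].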

2016-09-10 3:14 GMT-03:00 Esteban Lorenzano:

> http://www.squeaksource.com/Cryptography.html
>
> some of the algorithms present there are already in the image, but others
> aren't :)
>
> Esteban
>
> On 10 Sep 2016, at 04:03, Esteban A. Maringolo wrote:
>
> Hi there,
>
> What are the most maintained/popular cryptography packages for Pharo?
>
> Regards!
>
> Esteban A. Maringolo
>
>
>


Re: [Pharo-users] Porting TalkFFI / LibClang to Pharo 5 UFFI

2016-09-10 Thread Ben Coman
On Sat, Sep 10, 2016 at 3:14 PM, Ben Coman  wrote:

> Looks like I've been reinventing the wheel making an FFI interface to
> libclang.  I just bumped into Ciprian's TalkFFI which provides a
> NativeBoost interface to libclang (and was used to create libgit2 bindings)
>
> * https://rochiamaya.wordpress.com/2013/07/30/create-bindings-with-talkffi/
> * http://smalltalkhub.com/#!/~CipT/TalkFFI
> * http://smalltalkhub.com/#!/~CipT/LibClang
>
> But it is NativeBoost-based and the Configuration loads the AsmJit and
> NativeBoost packages, which seems to lock up while "Initializing
> Nativeboost."  What is involved in porting TalkFFI to Pharo 5.0 UFFI?
>
> For a start, these can be loaded manually with no problem...
>   LibClang-FFI-Types-CiprianTeodorov.2
>   LibClang-Tests-CiprianTeodorov.5
>   LibClang-Examples-CiprianTeodorov.2
>
> but loading LibClang-FFI-Binding-CiprianTeodorov.1
> reports "This package depends on the following classes:
>   CLLibraryMap
>   CLExternalLibraryWrapper
> You must resolve these dependencies before you will be able to load these
> definitions:
>   CXIndexH"
>
> I found these in TalkFFI-Runtime-CiprianTeodorov.7, but loading this
> reports "This package depends on the following classes:
>   NBExternalLibraryWrapper
> You must resolve these dependencies before you will be able to load these
> definitions:
>   CLExternalLibraryWrapper"
>
> which I see is in Pharo 4.0 but not in Pharo 5.0.  So what is the replacement
> for NBExternalLibraryWrapper?
> What are the other general patterns for porting NativeBoost apps to UFFI?
>
>
For discussion I've compiled a rough comparison of the classes of NativeBoost
and UFFI.  Quite a bit of guesswork, though.  Feel free to edit.  It probably
could be slimmed down by removing less significant classes like tests and
OS-specific classes.
https://docs.google.com/spreadsheets/d/1ZN6GUtzerh7KejODXNtze3njiGAryUUdSvk_U8uLZeM/edit?usp=sharing


cheers -ben


Re: [Pharo-users] Profiling

2016-09-10 Thread Vitor Medina Cruz
Also, I could not test with a cleaned image, since this procedure doesn't
seem to be working: I get errors in a UI Pharo, or it runs forever in
the headless mode.

On Sat, Sep 10, 2016 at 12:30 PM, Vitor Medina Cruz 
wrote:

> OK, my mistake was not taking into account that DO has a MUCH faster link than
> the one I have at home... :/ Those ten I/O calls transfer more than
> 5Mb, so you are right, Sven :)
>
> Ben: I tried to change the Delay Scheduler, but the problem is that there
> is actually too much I/O wait. There was also no difference between running
> on Windows or Linux.
>
>
>
> On Thu, Sep 8, 2016 at 2:48 PM, Vitor Medina Cruz 
> wrote:
>
>>> Why not? You are doing (lots of) network I/O. It is normal that your
>>> image code has to wait from time to time for data to come in from the
>>> network. By definition that is slow (in CPU terms). The Digital Ocean
>>> instance is probably faster in that respect.
>>>
>>
>> But the wait time alone corresponds to 73% (~13 seconds of non-CPU time) of
>> the entire procedure, which takes ~18 seconds total. On the remote
>> server the same procedure takes only ~6 seconds; supposing it still spends
>> 73% waiting, that would give us ~4 seconds of wait time. I think a difference
>> of 4 seconds versus 13 seconds of wait time is too much, isn't it? I am
>> doing at most ten I/O calls. It is as if each one takes more than a
>> second waiting for a response, which does not seem right. I will do
>> additional tests.
>>
>>
>>> Also, having the IDE UI with lots of tools open might influence things.
>>> Best do some more experiments. But benchmarking is very tricky.
>>
>>
>> Yes, I will try doing different stuff here!
>>
>> Thanks!
>> Vitor
>>
>>
>> On Thu, Sep 8, 2016 at 9:09 AM, Sven Van Caekenberghe 
>> wrote:
>>
>>>
>>> > On 08 Sep 2016, at 14:01, Vitor Medina Cruz 
>>> wrote:
>>> >
>>> > Thanks for the answers!
>>> >
>>> > If this is time spent on I/O it is really strange. I am consuming the
>>> Twitter API and it doesn't take this long to get a response.
>>> Besides, while those profiles were made on a local Windows 10 machine, the
>>> same code on a Pharo 5 (get.pharo.org) deployed on Linux on
>>> Digital Ocean takes ~6 seconds, which means that a lot less time is spent
>>> on I/O. Isn't that strange? I will try to spin up a local Linux machine
>>> with both a headful and a headless Pharo to see if this time changes.
>>> >
>>> > Is there a way to profile a remote image? I would like to see what is
>>> happening in the Digital Ocean deployment. Maybe put the headless Pharo there
>>> in profiling mode?
>>> >
>>> > Ben: this is a heavy JSON parsing procedure, so I would expect NeoJSON
>>> to take some time. Perhaps there is a way to optimize this, but what caught
>>> my attention was the huge amount of time spent in the idleProcess. If that
>>> is I/O wait, it shouldn't be like this.
>>>
>>> Why not? You are doing (lots of) network I/O. It is normal that your
>>> image code has to wait from time to time for data to come in from the
>>> network. By definition that is slow (in CPU terms). The Digital Ocean
>>> instance is probably faster in that respect.
>>>
>>> Also, having the IDE UI with lots of tools open might influence things.
>>> Best do some more experiments. But benchmarking is very tricky.
>>>
>>> > Thanks,
>>> > Vitor
>>> >
>>> > On Thu, Sep 8, 2016 at 4:42 AM, Clément Bera 
>>> wrote:
>>> >
>>> >
>>> > On Thu, Sep 8, 2016 at 3:44 AM, Vitor Medina Cruz <
>>> vitormc...@gmail.com> wrote:
>>> > Hello,
>>> >
>>> > While profiling some I/O code that takes ~20 seconds to execute under
>>> my local image, the report says that about ~13 seconds is wasted on
>>> OtherProcesses -> ProcessorScheduler class>>idleProcess. I could not
>>> understand what this idleProcess does by looking at the code. First I thought
>>> this could be time waiting for the I/O operation to terminate, but that doesn't
>>> make much sense because I have the same code on a Digital Ocean Droplet and
>>> it takes ~6 seconds to execute.
>>> >
>>> > Can someone help me understand what this time in the idleProcess
>>> means?
>>> >
>>> > The VM is not event-driven. Hence when all the processes are suspended
>>> or terminated, the VM falls back to the idle process. The idle process
>>> waits for 1ms, checks if any event has occurred and/or if a process can
>>> restart, and if not waits for 1 more ms to check again. That's kind of dumb
>>> but it works and we need both time and funds to make the VM event-driven
>>> (in the latter case the VM restarts directly when an event happens, instead
>>> of checking at the next ms).
>>> >
>>> > Basically the idle process profiled time is the time where Pharo has
>>> nothing to do because all processes are terminated or suspended. You can
>>> say that it is the time spent in I/O operations + the time before Pharo
>>> notices the I/O operation 

Re: [Pharo-users] Profiling

2016-09-10 Thread Vitor Medina Cruz
OK, my mistake was not taking into account that DO has a MUCH faster link than
the one I have at home... :/ Those ten I/O calls transfer more than
5Mb, so you are right, Sven :)

Ben: I tried to change the Delay Scheduler, but the problem is that there
is actually too much I/O wait. There was also no difference between running
on Windows or Linux.



On Thu, Sep 8, 2016 at 2:48 PM, Vitor Medina Cruz 
wrote:

>> Why not? You are doing (lots of) network I/O. It is normal that your
>> image code has to wait from time to time for data to come in from the
>> network. By definition that is slow (in CPU terms). The Digital Ocean
>> instance is probably faster in that respect.
>>
>
> But the wait time alone corresponds to 73% (~13 seconds of non-CPU time) of
> the entire procedure, which takes ~18 seconds total. On the remote
> server the same procedure takes only ~6 seconds; supposing it still spends
> 73% waiting, that would give us ~4 seconds of wait time. I think a difference
> of 4 seconds versus 13 seconds of wait time is too much, isn't it? I am
> doing at most ten I/O calls. It is as if each one takes more than a
> second waiting for a response, which does not seem right. I will do
> additional tests.
>
>
>> Also, having the IDE UI with lots of tools open might influence things.
>> Best do some more experiments. But benchmarking is very tricky.
>
>
> Yes, I will try doing different stuff here!
>
> Thanks!
> Vitor
>
>
> On Thu, Sep 8, 2016 at 9:09 AM, Sven Van Caekenberghe 
> wrote:
>
>>
>> > On 08 Sep 2016, at 14:01, Vitor Medina Cruz 
>> wrote:
>> >
>> > Thanks for the answers!
>> >
>> > If this is time spent on I/O it is really strange. I am consuming the
>> Twitter API and it doesn't take this long to get a response.
>> Besides, while those profiles were made on a local Windows 10 machine, the
>> same code on a Pharo 5 (get.pharo.org) deployed on Linux on
>> Digital Ocean takes ~6 seconds, which means that a lot less time is spent
>> on I/O. Isn't that strange? I will try to spin up a local Linux machine
>> with both a headful and a headless Pharo to see if this time changes.
>> >
>> > Is there a way to profile a remote image? I would like to see what is
>> happening in the Digital Ocean deployment. Maybe put the headless Pharo there
>> in profiling mode?
>> >
>> > Ben: this is a heavy JSON parsing procedure, so I would expect NeoJSON
>> to take some time. Perhaps there is a way to optimize this, but what caught
>> my attention was the huge amount of time spent in the idleProcess. If that
>> is I/O wait, it shouldn't be like this.
>>
>> Why not? You are doing (lots of) network I/O. It is normal that your
>> image code has to wait from time to time for data to come in from the
>> network. By definition that is slow (in CPU terms). The Digital Ocean
>> instance is probably faster in that respect.
>>
>> Also, having the IDE UI with lots of tools open might influence things.
>> Best do some more experiments. But benchmarking is very tricky.
>>
>> > Thanks,
>> > Vitor
>> >
>> > On Thu, Sep 8, 2016 at 4:42 AM, Clément Bera 
>> wrote:
>> >
>> >
>> > On Thu, Sep 8, 2016 at 3:44 AM, Vitor Medina Cruz 
>> wrote:
>> > Hello,
>> >
>> > While profiling some I/O code that takes ~20 seconds to execute under
>> my local image, the report says that about ~13 seconds is wasted on
>> OtherProcesses -> ProcessorScheduler class>>idleProcess. I could not
>> understand what this idleProcess does by looking at the code. First I thought
>> this could be time waiting for the I/O operation to terminate, but that doesn't
>> make much sense because I have the same code on a Digital Ocean Droplet and
>> it takes ~6 seconds to execute.
>> >
>> > Can someone help me understand what this time in the idleProcess means?
>> >
>> > The VM is not event-driven. Hence when all the processes are suspended
>> or terminated, the VM falls back to the idle process. The idle process
>> waits for 1ms, checks if any event has occurred and/or if a process can
>> restart, and if not waits for 1 more ms to check again. That's kind of dumb
>> but it works and we need both time and funds to make the VM event-driven
>> (in the latter case the VM restarts directly when an event happens, instead
>> of checking at the next ms).
>> >
>> > Basically the idle process profiled time is the time where Pharo has
>> nothing to do because all processes are terminated or suspended. You can
>> say that it is the time spent in I/O operations + the time before Pharo
>> notices the I/O operation is terminated, which can be up to 1ms.
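
For reference, the idle process being described is roughly the following loop on the
class side of ProcessorScheduler (a sketch; the exact source may differ slightly
between Pharo versions, but the 1 ms relinquish matches the description above):

ProcessorScheduler class >> idleProcess
	"A background process that runs only when no other process is runnable.
	 It hands ~1 ms slices back to the OS, which is what a profiler attributes
	 to 'idle' time while the image waits, e.g. on network I/O."
	[ true ] whileTrue: [
		self relinquishProcessorForMicroseconds: 1000 ]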
>> >
>> >
>> >
>> > The full report is:
>> >
>> >  - 18407 tallies, 18605 msec.
>> >
>> > **Tree**
>> > 
>> > Process: (40s) Morphic UI Process: nil
>> > 
>> > 25.1% {4663ms} UndefinedObject>>DoIt
>> >   25.1% {4663ms} TweetsServiceRestConsumer(Twee
>> 

[Pharo-users] [ANN] Dr. Geo release 16.10a

2016-09-10 Thread Hilaire
Dear Pharo fellows,

I am proud to announce release 16.10a of Dr. Geo.
It comes with the usual bug fixes and a new French programming API for
French-speaking kids: programmed sketches can now be written in French Smalltalk!

Read more at http://www.drgeo.eu/news/drgeo1610release

Hilaire
-- 
Dr. Geo
http://drgeo.eu




[Pharo-users] Porting TalkFFI / LibClang to Pharo 5 UFFI

2016-09-10 Thread Ben Coman
Looks like I've been reinventing the wheel making an FFI interface to
libclang.  I just bumped into Ciprian's TalkFFI which provides a
NativeBoost interface to libclang (and was used to create libgit2 bindings)

* https://rochiamaya.wordpress.com/2013/07/30/create-bindings-with-talkffi/
* http://smalltalkhub.com/#!/~CipT/TalkFFI
* http://smalltalkhub.com/#!/~CipT/LibClang

But it is NativeBoost-based and the Configuration loads the AsmJit and
NativeBoost packages, which seems to lock up while "Initializing
Nativeboost."  What is involved in porting TalkFFI to Pharo 5.0 UFFI?

For a start, these can be loaded manually with no problem...
  LibClang-FFI-Types-CiprianTeodorov.2
  LibClang-Tests-CiprianTeodorov.5
  LibClang-Examples-CiprianTeodorov.2

but loading LibClang-FFI-Binding-CiprianTeodorov.1
reports "This package depends on the following classes:
  CLLibraryMap
  CLExternalLibraryWrapper
You must resolve these dependencies before you will be able to load these
definitions:
  CXIndexH"

I found these in TalkFFI-Runtime-CiprianTeodorov.7, but loading this
reports "This package depends on the following classes:
  NBExternalLibraryWrapper
You must resolve these dependencies before you will be able to load these
definitions:
  CLExternalLibraryWrapper"

which I see is in Pharo 4.0 but not in Pharo 5.0.  So what is the replacement
for NBExternalLibraryWrapper?
What are the other general patterns for porting NativeBoost apps to UFFI?

cheers -ben
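
For anyone attempting the same port: in UFFI the rough counterpart of an
NBExternalLibraryWrapper subclass is an FFILibrary subclass that names the shared
library per platform, with the binding class pointing at it through an ffiLibrary hook.
The sketch below is a guess at the shape of such a binding, not code from TalkFFI;
the libclang module names and the CXIndexH method are illustrative, and the hook
selector is ffiLibraryName rather than ffiLibrary in some UFFI versions.

"Sketch only -- not TalkFFI code. Library file names are illustrative."
FFILibrary subclass: #CLLibrary
	instanceVariableNames: ''
	classVariableNames: ''
	category: 'LibClang-FFI-Binding'.

CLLibrary >> unixModuleName
	^ 'libclang.so'

CLLibrary >> macModuleName
	^ 'libclang.dylib'

CLLibrary >> win32ModuleName
	^ 'libclang.dll'

"The binding class then resolves its calls through that library:"
CXIndexH class >> ffiLibrary
	^ CLLibrary

CXIndexH class >> createIndex: excludePCH diagnostics: display
	"clang_createIndex is a real libclang entry point; the Smalltalk selector is made up."
	^ self ffiCall: #( void* clang_createIndex (int excludePCH, int display) )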


Re: [Pharo-users] Adopting someone else's image

2016-09-10 Thread Siemen Baader
Thanks. I found out that I had two unrelated problems. Spotter was one and
not critical, and the reason I had errors with Monticello was that I had
made the mistake of renaming the .image file, so it no longer matched the
.changes file.

Siemen 



--
View this message in context: 
http://forum.world.st/Adopting-someone-else-s-image-tp4914933p4915049.html
Sent from the Pharo Smalltalk Users mailing list archive at Nabble.com.



Re: [Pharo-users] arrays with FFI structs

2016-09-10 Thread Esteban Lorenzano
With UFFI, what you’ll do for “being OO” is to: 

1) extend FFIExternalStructure 
2) then add methods to your class that use the structure (passing 
“self” as the argument). 

For example: 

FFIExternalStructure subclass: #MyStruct.

MyStruct >> method1: arg1
	^ self ffiCall: #(int function1(self, arg1))

(an example of this can be found in AthensCairoMatrix, in the image).

cheers, 
Esteban

PS: in Pharo 5, you may want to update to the latest UFFI (not *needed*, but better… 
I need to make a new Pharo 5 build with updated versions)
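
To make the pattern concrete, here is a slightly fuller sketch along the same lines;
MyPoint, its fields and the C function sum_x are made up for illustration, only the
FFIExternalStructure / fieldsDesc / rebuildFieldAccessors / ffiCall: machinery comes
from UFFI:

"Sketch only: an illustrative structure, not an existing API."
FFIExternalStructure subclass: #MyPoint
	instanceVariableNames: ''
	classVariableNames: ''
	category: 'MyFFI-Structs'.

MyPoint class >> fieldsDesc
	"C layout of the struct; evaluate 'MyPoint rebuildFieldAccessors'
	 after editing this to regenerate the field accessors."
	^ #(
		int x;
		int y;
		)

MyPoint >> sumX: other
	"Pass the structure itself as 'self', exactly as in the example above;
	 sum_x is an imaginary C function taking two struct pointers."
	^ self ffiCall: #( int sum_x (self, MyPoint * other) )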


> On 10 Sep 2016, at 04:41, Ben Coman  wrote:
> 
> On Sat, Sep 10, 2016 at 8:47 AM, Pierce Ng  wrote:
>> On Sat, Sep 10, 2016 at 01:11:18AM +0800, Ben Coman wrote:
>>> Are arrays within structs handled? I have a C type declaration...
>> 
>> Ben,
>> 
>> Is it possible to write C functions to manipulate these structures, build 
>> these
>> functions into a shared library, and call the functions from Pharo?
> 
> Maybe.  Particularly since my next challenge is to use a callback.  My
> usage is parsing the VM platform C files as a one-shot import to
> analyse from Pharo, so doing most of the legwork in C and returning
> just the final result to Pharo may be fine.  However at the moment my
> goal is as much about learning to use FFI, so I'll push in that
> direction as long as I can.
> 
>> More "object oriented", heh.
> 
> Actually "clang" is OO being written in C++.  But I understand the C++
> name mangling makes life difficult for our FFI.  "libclang" is the
> plain-C wrapper interface of "clang", which also is advertised as more
> stable, with clang advertised as often changing.  Apparently libclang
> can't access all of clang's features, but I think it will be a while
> before I reach the point of discovering the impact of that, and it may
> well be outside my requirements.
> 
> cheers -ben
> 
>> Of course I've only just had a cursory glance at
>> libclang while typing this reply and don't know what your usage is.
>> 
>> Pierce
>> 
> 




Re: [Pharo-users] Cryptography packages

2016-09-10 Thread Esteban Lorenzano
http://www.squeaksource.com/Cryptography.html 


some of the algorithms present there are already in the image, but others aren't 
:)

Esteban
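
For completeness, loading from that SqueakSource repository usually comes down to a
Gofer expression like the sketch below; the package name 'Cryptography' is an
assumption, so check the repository page for the exact package or configuration to load:

"Sketch: the package name is a guess -- adjust to what the repository lists."
Gofer new
	url: 'http://www.squeaksource.com/Cryptography';
	package: 'Cryptography';
	load.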

> On 10 Sep 2016, at 04:03, Esteban A. Maringolo  wrote:
> 
> Hi there,
> 
> What are the most maintained/popular cryptography packages for Pharo?
> 
> Regards!
> 
> Esteban A. Maringolo