[Pharo-users] [ANN] P3 version 1.4

2022-05-31 Thread Sven Van Caekenberghe
Hi,

There is a new release of P3, the modern, lean and mean PostgreSQL client for 
Pharo.

https://github.com/svenvc/P3

https://github.com/svenvc/P3/releases/tag/v1.4
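
For those who have not used it yet, a minimal connect-and-query sketch (the
connection URL and query are placeholders; see the project README for the full
API):

| client |
client := P3Client new.
client url: 'psql://sven@localhost'.
client connect.
[ client query: 'SELECT 1' ] ensure: [ client close ].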

I thank all contributors and users for their help and feedback: you make a real 
difference.

Sven

--
Sven Van Caekenberghe
Proudly supporting Pharo
http://pharo.org
http://association.pharo.org
http://consortium.pharo.org

[Pharo-users] Pharo Enterprise book HTML chapter links

2022-04-30 Thread Sven Van Caekenberghe
Hi,

All the Chapter Topics links on https://books.pharo.org/enterprise-pharo/ 
(bottom left side) are broken.

Is it still possible to link to individual chapters ?

Can someone have a look? It would be a shame to let this bit rot.

Thx,

Sven

PS: Looking at the other books, it seems HTML reading is completely gone, what a 
shame. This used to work fine.

[Pharo-users] Re: A question about #beginsWith: and #endsWith:

2022-04-20 Thread Sven Van Caekenberghe
Since

'abc' includesSubstring: ''.
 "true"
'abc' indexOfSubCollection: ''.
 "0"

and following basic principles, my first reaction would also be that

'abc' beginsWith: ''.
'abc' endsWith: ''.

should both be true.

On the other hand, this is really a degenerate case. You know the answer 
upfront (it does not depend on the actual receiver) and thus it does not tell 
you much.
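
As a quick sanity check against the textbook definition quoted below (x is a
prefix of y iff there is a z such that y = x , z), the empty string trivially
qualifies; a minimal illustration:

('' , 'abc') = 'abc'.
 "true, so '' is a prefix of 'abc' under that definition"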

I vote for this to be a bug.

> On 20 Apr 2022, at 14:07, Richard O'Keefe  wrote:
> 
> I've just tracked down a nasty little problem
> porting some code to Pharo.  As a result, I
> have added to the comments in my own versions
> of these methods.
> 
> beginsWith: aSequence
>   "Answer true if aSequence is a prefix of the receiver.
>This makes sense for all sequences.
>There is a compatibility issue concerning 'abc' beginsWith: ''
>+ VisualWorks, Dolphin, astc, GNU ST (where the method is
>  called #startsWith:) and VisualAge (where the method
>  is called #wbBeginsWith:)
>  agree that EVERY sequence begins with an empty prefix.
>- Squeak and Pharo
>  agree that NO sequence begins with an empty sequence.
># ST/X chooses compatibility with Squeak, heaving a big unhappy
>  sigh, and adds #startsWith: to have something sensible to use.
>Now ST/X *thinks* it is compatible with VW, though it isn't, so 
>I wonder if this was a bug that VW fixed and Squeak didn't?
>astc goes with the majority here.  This is also compatible with
>Haskell, ML, and with StartsWith in C# and startsWith in Java."
>   ^self beginsWith: aSequence ignoringCase: false
> 
> endsWith: aSequence
>   "Answer true if aSequence is a suffix of the receiver.
>This makes sense for all sequences.
>There is a compatibility issue concerning 'abc' endsWith: ''.  
>+ VisualWorks, Dolphin, astc, GNU ST, and VisualAge (where 
>  the method is called #wbEndsWith:)
>  agree that EVERY sequence ends with an empty suffix.
>- Squeak and Pharo
>  agree that NO sequence ends with an empty suffix.
># ST/X chooses compatibility with the majority, apparently
>  unaware that this makes #beginsWith: and #endsWith: inconsistent.
>astc goes with the majority here.  This is also compatible with
>Haskell, ML, C#, and Java."
>   ^self endsWith: aSequence ignoringCase: false 
> 
> Does anyone have any idea
>  - why Squeak and Pharo are the odd ones out?
>  - why anyone thought making #beginsWith: and #endsWith:, um, "quirky"
>was a good idea (it's pretty standard in books on the theory of
>strings to define "x is a prefix of y iff there is a z such that
>y = x concatenated with z")
>  
> I was about to try to file a bug report for the first time,
> then realised that maybe other people don't think this IS a bug.
> 
> 
> 


[Pharo-users] Re: [Pharo-dev] [ANN] Pharo 10 released!

2022-04-05 Thread Sven Van Caekenberghe
Great news. A big thank you to all those involved !

> On 5 Apr 2022, at 12:39, Esteban Lorenzano  wrote:
> 
> Dear Pharo users and dynamic language lovers: 
> 
> We have released Pharo version 10 !
> 
> Pharo is a pure object-oriented programming language and a powerful 
> environment, focused on simplicity and immediate feedback.
> 
> 
> 
> Pharo 10 was a short iteration where we focused mainly on stability and 
> enhancement of the environment :
> 
>   • Massive system cleanup:
>     • gained speed
>     • removed dead code
>     • removed old/deprecated frameworks (Glamour, GTTools, Spec1)
>   • All remaining tools written using the deprecated frameworks have been 
> rewritten: Dependency Analyser, Critique Browser, and many other small 
> utilities.
>   • Modularisation has made a leap, creating correct baselines (project 
> descriptions) for many internal systems, making the work on and deployment 
> of minimal images possible.
>   • Removing support for the old bytecode sets and embedded blocks 
> simplified the compiler and language core.
>   • As a result, our image size has been reduced by 10% (from 66MB to 
> 58MB).
>   • The VM has also improved in several areas: better async I/O support, 
> socket handling, FFI ABI, ...
> Even though it was a short iteration, we have closed a massive amount of issues: 
> around 600 issues and 700 pull requests. A more extended changelog can be 
> found at 
> https://github.com/pharo-project/pharo-changelogs/blob/master/Pharo100ChangeLogs.md.
> 
> While the technical improvements are significant, still the most impressive 
> fact is that the new code that got in the main Pharo 10 image was contributed 
> by more than 80 people.
> 
> Pharo is more than code. It is an exciting project involving a great 
> community. 
> 
> We thank all the contributors to this release:
> 
> Aaron Bieber, Ackerley Tng, Alban Benmouffek, Alejandra Cossio, Aless Hosry, 
> Alexandre Bergel, Aliaksei Syrel, Alistair Grant, Arturo Zambrano, Asbathou 
> Biyalou-Sama, Axel Marlard, Bastien Degardins, Ben Coman, Bernardo Contreras, 
> Bernhard Pieber, Carlo Teixeira, Carlos Lopez, Carolina Hernandez, Christophe 
> Demarey, Clotilde Toullec, Connor Skennerton, Cyril Ferlicot, Dave Mason, 
> David Wickes, Denis Kudriashov, Eric Gade, Erik Stel, Esteban Lorenzano, 
> Evelyn Cusi Lopez, Ezequiel R. Aguerre, Gabriel Omar Cotelli, Geraldine 
> Galindo, Giovanni Corriga, Guille Polito, Himanshu, Jan Bliznicenko, Jaromir 
> Matas, Kasper Østerbye, Kausthub Thekke Madathil, Konrad Hinsen, Kurt 
> Kilpela, Luz Paz, Marco Rimoldi, Marcus Denker, Martín Dias, Massimo 
> Nocentini, Max Leske, Maximilian-ignacio Willembrinck Santander, Miguel 
> Campero, Milton Mamani Torres, Nahuel Palumbo, Norbert Hartl, Norm Green, 
> Nour Djihan, Noury Bouraqadi, Oleksandr Zaitsev, Pablo Sánchez Rodríguez, 
> Pablo Tesone, Pavel Krivanek, Pierre Misse-Chanabier, Quentin Ducasse, 
> Raffaello Giulietti, Rakshit, Renaud de Villemeur, Rob Sayers, Roland 
> Bernard, Ronie Salgado, Santiago Bragagnolo, Sean DeNigris, Sebastian Jordan 
> Montt, Soufyane Labsari, Stephan Eggermont, Steven Costiou, Stéphane Ducasse, 
> Sven Van Caekenberghe, Theo Rogliano, Thomas Dupriez, Théo Lanord, Torsten 
> Bergmann, Vincent Blondeau.
>  
> 
> (If you contributed to Pharo 10 development in any way and we missed your 
> name, please send us an email and we will add you).
> 
> Enjoy!
> 
> The Pharo Team
> 
> Discover Pharo: https://pharo.org/features
> 
> Try Pharo: http://pharo.org/download
> 
> Learn Pharo: http://pharo.org/documentation


[Pharo-users] Re: Illegal Leading Byte

2022-03-04 Thread Sven Van Caekenberghe
Hi Craig,

> On 3 Mar 2022, at 23:15, craig  wrote:
> 
> Hi Guys,
> 
>  
> I'm reading a text file which is supposed to be ASCII encoded.  This file 
> contains a list of filepaths and was created by a Python program.
> 
>  
> Well, it turns out that file names on Windows can contain illegal UTF-8 
> characters.  This causes ZnUTF8Encoder to signal 'Illegal leading byte for 
> utf-8 encoding' and crash the program.
> 
>  
> I would like to handle this situation more elegantly, is there a more 
> appropriate code-page to use for the Windows filesystem?
> 
> [attached screenshot of the error]
> 
>  
>  
> Craig

We support more than 80 different character encoders. Of course, you should 
first know what encoding is being used; after that, it is easy to use a 
different encoder. Consider:

'/tmp/foo.txt' asFileReference
  readStreamEncoded: #utf8
  do: [ :in | in upToEnd ].

'/tmp/foo.txt' asFileReference
  readStreamEncoded: #windows1252
  do: [ :in | in upToEnd ].

'/tmp/foo.txt' asFileReference
  readStreamEncoded: #latin1
  do: [ :in | in upToEnd ].

ZnCharacterEncoder knownEncodingIdentifiers.

#windows1252 asZnCharacterEncoder.

If you could post a small example of your file, I could try to help. It will 
probably be #windows1252 or #latin1.
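
If you cannot know the encoding upfront, one pragmatic approach (just a sketch:
try UTF-8 first and fall back to a byte encoding when decoding fails) would be:

[ '/tmp/foo.txt' asFileReference
    readStreamEncoded: #utf8
    do: [ :in | in upToEnd ] ]
  on: ZnCharacterEncodingError , ZnInvalidUTF8
  do: [ :error |
    '/tmp/foo.txt' asFileReference
      readStreamEncoded: #windows1252
      do: [ :in | in upToEnd ] ].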

HTH,

Sven




[Pharo-users] Re: [ANN] Pharo LZ4 Tools

2022-02-20 Thread Sven Van Caekenberghe
6"

(104633/69146) reciprocal asFloat.
 
"0.6608431374422983"

Now we get a 33% reduction in size, which is better.

I am sure that with a more careful, better tuned dictionary the compression 
ratio could be improved by a couple of percent. There also exist tools that can 
compute an optimal dictionary from a given input set.

Sorry for the long post, I hope at least someone found this interesting.

Sven


> Sent from my Huawei phone
> 
> 
>  Original message 
> From: Sven Van Caekenberghe 
> Date: Fri, 18 Feb 2022 at 21:13
> To: Any question about pharo is welcome 
> Subject: [Pharo-users] [ANN] Pharo LZ4 Tools
> Hi,
> 
> Pharo LZ4 Tools (https://github.com/svenvc/pharo-lz4-tools) is an 
> implementation of LZ4 compression and decompression in pure Pharo.
> 
> LZ4 is a lossless compression algorithm that is focused on speed. It belongs 
> to the LZ77 family of byte-oriented compression schemes.
> 
> - https://en.wikipedia.org/wiki/LZ4_(compression_algorithm)
> - https://lz4.github.io/lz4/
> - https://github.com/lz4/lz4
> 
> Both the frame format 
> (https://github.com/lz4/lz4/blob/dev/doc/lz4_Frame_format.md) as well as the 
> block format (https://github.com/lz4/lz4/blob/dev/doc/lz4_Block_format.md) 
> are implemented. Dictionary based compression/decompression is available too. 
> The XXHash32 algorithm is also implemented.
> 
> Of course this implementation is not as fast as highly optimised native 
> implementations, but it works quite well and is readable/understandable, if 
> you like this kind of stuff. It can be useful to interact with other systems 
> using LZ4.
> 
> Sven


[Pharo-users] [ANN] Pharo LZ4 Tools

2022-02-18 Thread Sven Van Caekenberghe
Hi,

Pharo LZ4 Tools (https://github.com/svenvc/pharo-lz4-tools) is an 
implementation of LZ4 compression and decompression in pure Pharo.

LZ4 is a lossless compression algorithm that is focused on speed. It belongs to 
the LZ77 family of byte-oriented compression schemes.

 - https://en.wikipedia.org/wiki/LZ4_(compression_algorithm)
 - https://lz4.github.io/lz4/
 - https://github.com/lz4/lz4

Both the frame format 
(https://github.com/lz4/lz4/blob/dev/doc/lz4_Frame_format.md) as well as the 
block format (https://github.com/lz4/lz4/blob/dev/doc/lz4_Block_format.md) are 
implemented. Dictionary based compression/decompression is available too. The 
XXHash32 algorithm is also implemented.

Of course this implementation is not as fast as highly optimised native 
implementations, but it works quite well and is readable/understandable, if you 
like this kind of stuff. It can be useful to interact with other systems using 
LZ4.

Sven


[Pharo-users] Re: Too many parenthesis - a matter of syntax

2022-01-26 Thread Sven Van Caekenberghe
Hi Kasper,

I found the initial example actually reasonably readable.

However, I would simplify it as follows:

(json at: #tree)
  select: [ :each |
    ((each at: #type) = #blob)
      and: [ #(md mic) includes: (Path from: (each at: #path)) extension ] ].

I would personally not try to extend the Smalltalk syntax. The cost is not 
worth the gain.

Tools could be written to help in dealing with deeply nested structures 
(JSON/XML); they could form their own DSL at a higher level.

For a limited example, NeoJSONObject uses the DNU trick to support unary 
accessors:

  json at: #tree 

can be written as

  json tree

and has a path accessor:

  json atPath: #(tree field name)

It also behaves like JavaScript objects in that missing keys are nil. It is not 
that all this is good or better, it is just handy in some cases. Your code 
would then be even simpler (fewer parentheses):

json tree select: [ :each |
  (each type = #blob)
    and: [ #(md mic) includes: (Path from: each path) extension ] ].
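
A small illustration of the above (assuming NeoJSON is loaded, and using the
NeoJSONObject fromString: convenience constructor):

| json |
json := NeoJSONObject fromString: '{ "tree" : { "field" : { "name" : "readme" } } }'.
json tree.
 "the nested object under 'tree', via the DNU accessor"
json atPath: #(tree field name).
 "'readme', via the path accessor"
json foo.
 "nil - missing keys answer nil, like in JavaScript"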

Sven

> On 26 Jan 2022, at 10:19, Kasper Osterbye  wrote:
> 
> Cheers all
> 
> I have noticed that I often end up with quite a number of nested 
> expressions, for example:
> 
> (((json at: 'tree')
>   select: [ :e | (e at: 'type') = 'blob' ])
>   collect: [ :e | Path from: (e at: 'path') ])
>   select: [ :p | p segments last
>     in: [ :name | (name endsWith: '.md') | (name endsWith: '.mic') ] ]
> 
> What kind of proposals (if any) have been made for a different syntax which 
> could give more streamlined expressions?
> 
> My own thinking has been around an alternative to the cascade semicolon. What 
> symbol to use does not matter for me, but something like
> json at: 'tree' º
>   select: [ :e | ((e at: 'type') = 'blob') ] º
>   collect: [ :e | Path from: (e at: 'path') ] º
>   select: [ :p | p segments last
>     in: [ :name | (name endsWith: '.md') | (name endsWith: '.mic') ] ]
> 
> Basically, it sends the right-hand expression to the result of the left-hand 
> expression.
> 
> Has anyone ever tried this, or is it just one of the many small annoyances 
> best left alone?
> 
> Best,
> 
> Kasper


[Pharo-users] Re: An Emacs <-> Pharo bridge (unidirectional at first)

2021-12-09 Thread Sven Van Caekenberghe



> On 8 Dec 2021, at 22:24, Mathieu Dubois via Pharo-users 
>  wrote:
> 
> gettimothy wrote:
> 
> https://github.com/ahungry/geben
> 
> that is the protocol emacs/php uses. Dbg .
> 
> iirc, there is another standard for code browsing, highlighting, etc.
> 
> If i find it I will ping you.
> 
> Are you talking about LSP 
> (https://en.wikipedia.org/wiki/Language_Server_Protocol) ?

This also already exists for Pharo:

https://badetitou.fr/projects/vscode-pharo/2021/12/02/with-gtk/

https://marketplace.visualstudio.com/items?itemName=badetitou.pharo-language-server

Sven


[Pharo-users] Re: An Emacs <-> Pharo bridge (unidirectional at first)

2021-12-08 Thread Sven Van Caekenberghe



> On 8 Dec 2021, at 11:07, Pierre Misse  wrote:
> 
> Hello,
> 
> I am aware of a Visual studio implementation that does pretty much what you 
> described.
> https://github.com/badetitou/vscode-pharo
> 
> I also kinda remember something pharo with emacs called: bubbles?
> I can't remember sorry :/

Haha, it took me a while to find it because I also forgot, but here it is.

It is called Shampoo:

  https://github.com/dmatveev/shampoo-emacs

HTH,

Sven

PS: I wrote and use https://github.com/svenvc/NeoConsole which might help as 
well to get you started

> Pierre
> 
> On 12/8/2021 5:48 AM, Eduardo Ochs wrote:
>> Hi list,
>> 
>> I'm looking for help on doing something VERY un-smalltalkish with
>> Pharo... let me explain. I am working on several variants of this way
>> of controlling external programs with Emacs:
>> 
>>   http://angg.twu.net/LATEX/2021emacsconf.pdf
>> 
>> The slides - link above - are the best way to understand how it works,
>> but there's mode info here:
>> 
>>   http://angg.twu.net/emacsconf2021.html
>> 
>> The method above only works with programs that have REPLs that can be
>> run in terminals, but I have some variants of it that let me send
>> commands - single-line or multi-line - to external programs that only
>> have GUIs. In one of these variants the external program listens to the
>> signals SIGUSR1s and SIGUSR2s, and initially what it does when it
>> receives these signals is:
>> 
>>   on SIGUSR1: print the contents of the file /tmp/bridge-data
>>   on SIGUSR2:  eval the contents of the file /tmp/bridge-data
>> 
>> The action of SIGUSR2 can be used to redefine the action of SIGUSR1.
>> There is a demo for Tcl here:
>> 
>>   http://angg.twu.net/IMAGES/2021-emacs-tcl-bridge.png
>>   http://angg.twu.net/e/tcl.e.html#2021-emacs-tcl-bridge
>> 
>> How can I implement something similar in Pharo? I mean, how do I make
>> it react to SIGUSR1s by printing - in any sense - the contents of
>> /tmp/bridge-data, and react to SIGUSR2s by eval-ing the contents of
>> /tmp/bridge-data?
>> 
>> Thanks in advance! =)
>>   Eduardo Ochs
>>   http://angg.twu.net/#eev


[Pharo-users] Re: uFFI provide buffer to C

2021-11-29 Thread Sven Van Caekenberghe



> On 29 Nov 2021, at 10:10, Guillermo Polito  wrote:
> 
> Hi Gerry,
> 
> I think the point of Tomaz is that an FFI method can only have a single 
> #ffiCall: statement.
> In your example, what you can do is to split your method in two methods:
> 
> First a method with the mapping (following the idea of Tomaz but using libc)
> 
> FFITutorial>>getHostNameInto: hostName ofLength: length
>   ^self ffiCall: #( int gethostname(char *hostName, int length) ) library: 'libc.so.6'
> 
> And then have a second method that calls this one:
> 
> FFITutorial>>getHostName
>   | nameBuffer |
>   nameBuffer := ByteArray new: 256.
>   self getHostNameInto: nameBuffer ofLength: 256.
>   ^nameBuffer asString
> 
> Notice that converting the byte array buffer to a string will only work by 
> chance if the called function returns characters encoded in ASCII.
> In Pharo, strings are sequences of Unicode characters, where each character 
> is a Unicode code point.

In general #utf8Decoded is better than #asString since UTF-8 includes 7-bit 
ASCII. It will fail when the native encoding is one of the simpler byte 
encodings, like Latin1 (or iso-8859-1), though.
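
A quick illustration of the difference (a minimal example):

| bytes |
bytes := 'café' utf8Encoded.
 "#[99 97 102 195 169]"
bytes asString.
 "'cafÃ©' - the bytes are interpreted as Latin-1"
bytes utf8Decoded.
 "'café' - correctly decoded as UTF-8"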

> The best thing is to read the documentation of the library and check the 
> encoding of the string that is put in the buffer to decode it accordingly, 
> for example from utf8.
> 
> G
> 
> 
>> On 29 Nov 2021, at 0:56, Gerry Weaver  wrote:
>> 
>> Hi,
>> 
>> Thanks for the example. Unfortunately, it doesn't work on Linux.
>> 
>> Thanks,
>> Gerry
>> 
>> 
>> From: Tomaž Turk 
>> Sent: Sunday, November 28, 2021 11:39 AM
>> To: Any question about pharo is welcome 
>> Subject: [Pharo-users] Re: uFFI provide buffer to C
>>  
>> Hi,
>> 
>> try with this:
>> 
>> FFITutorial>>getHostNameInto: hostName ofLength: length
>>   ^self ffiCall: #( int gethostname(char *hostName, int length) ) library: 'Ws2_32.dll'. "... on Windows"
>> 
>> From Playground, you can then do:
>> 
>> | name len |
>> len := 50.
>> name := ByteArray new: len.
>> FFITutorial new getHostNameInto: name ofLength: len.
>> name inspect.
>> 
>> I hope this helps.
>> 
>> Best wishes,
>> Tomaz
> 


[Pharo-users] Re: Strategies for (re)using HTTP client

2021-11-03 Thread Sven Van Caekenberghe
Maybe this helps:

https://github.com/svenvc/zinc/commit/d9fe41707b16748b9340540127ec5d77800856b6

> On 1 Nov 2021, at 21:22, Sven Van Caekenberghe  wrote:
> 
> 
> 
>> On 1 Nov 2021, at 20:03, Esteban Maringolo  wrote:
>> 
>> If I need to use cookies, would it make sense to keep a ZnUserAgentSession 
>> and assign it to each new client?
> 
> I don't know or remember, my first reaction would be to say that it was not 
> designed for that purpose, but maybe it could be used for that.
> 
>> I'm asking this in case I have to share cookies between requests. But the 
>> session variable seems to be private in ZnClient given that it doesn't 
>> provide any setter.
> 
> There is the cookie jar object inside the session, maybe you can try to share 
> that ?
> 
> Or you could copy the necessary cookies over ? That certainly seems like the 
> safest thing.
> 
>> Regards,
>> 
>> Esteban A. Maringolo
>> 
>> 
>> On Mon, Nov 1, 2021 at 1:39 PM Esteban Maringolo  
>> wrote:
>> Thank you Sven.
>> 
>> I'll go with instantiating a new client for each request, the less state 
>> shared, the better :-)
>> 
>> Regards!
>> 
>> Esteban A. Maringolo
>> 
>> 
>> On Fri, Oct 29, 2021 at 12:15 PM Sven Van Caekenberghe  wrote:
>> Hi,
>> 
>>> On 29 Oct 2021, at 15:42, Esteban Maringolo  wrote:
>>> 
>>> Hi,
>>> 
>>> I happened to me more than once that I have to create some REST service 
>>> "client" in which I usually wrap an HTTP client inside (ZnClient in this 
>>> case), and all the calls to the service, end up using the HTTP client, 
>>> inside some mutex to serialize the execution.
>> 
>> Yes, that is a good design. However, whether your REST client object 
>> wrapping a ZnClient is used by multiple concurrent threads is another 
>> decision. I would personally not do that. See further.
>> 
>>> But I don't like that, in particular when some endpoints are better with 
>>> streaming responses (large downloads) and I have to fiddle with the client 
>>> and set it back to the settings before executing the request.
>> 
>> You only can and should reuse connections to the same endpoint/service only, 
>> doing similar types of request/responses (let's say simple ones).
>> 
>> A definitive danger point is authentication and authorization: different 
>> calls by different users might need different REST call settings, each time. 
>> Also, caching can be a problem, if user A can see / sees the cache of user B.
>> 
>>> So, long story short... is it always safer to instantiate a new ZnClient on 
>>> a per request basis since no state is shared, but I guess it is also less 
>>> effective if I'm performing several requests to the same server.
>> 
>> A new instance is definitely safer because it is cleaner. Reusing a 
>> connection is more efficient when doing multiple calls in (quick) 
>> succession. The penalty is usually pretty low, you can postpone optimising 
>> until there is an actual problem.
>> 
>> Error handling and recovery are also harder in the reuse case (what state is 
>> the connection in ?).
>> 
>> ZnClient does a form of automatic reconnection/retry though.
>> 
>>> What are the recommended approaches here?
>>> 
>>> 
>>> Esteban A. Maringolo
>> 
>> HTH,
>> 
>> Sven
> 


[Pharo-users] Re: Strategies for (re)using HTTP client

2021-11-01 Thread Sven Van Caekenberghe



> On 1 Nov 2021, at 20:03, Esteban Maringolo  wrote:
> 
> If I need to use cookies, would it make sense to keep a ZnUserAgentSession 
> and assign it to each new client?

I don't know or remember; my first reaction would be to say that it was not 
designed for that purpose, but maybe it could be used for that.

> I'm asking this in case I have to share cookies between requests. But the 
> session variable seems to be private in ZnClient given that it doesn't 
> provide any setter.

There is the cookie jar object inside the session, maybe you can try to share 
that ?

Or you could copy the necessary cookies over ? That certainly seems like the 
safest thing.

> Regards,
> 
> Esteban A. Maringolo
> 
> 
> On Mon, Nov 1, 2021 at 1:39 PM Esteban Maringolo  wrote:
> Thank you Sven.
> 
> I'll go with instantiating a new client for each request, the less state 
> shared, the better :-)
> 
> Regards!
> 
> Esteban A. Maringolo
> 
> 
> On Fri, Oct 29, 2021 at 12:15 PM Sven Van Caekenberghe  wrote:
> Hi,
> 
> > On 29 Oct 2021, at 15:42, Esteban Maringolo  wrote:
> > 
> > Hi,
> > 
> > I happened to me more than once that I have to create some REST service 
> > "client" in which I usually wrap an HTTP client inside (ZnClient in this 
> > case), and all the calls to the service, end up using the HTTP client, 
> > inside some mutex to serialize the execution.
> 
> Yes, that is a good design. However, whether your REST client object wrapping 
> a ZnClient is used by multiple concurrent threads is another decision. I 
> would personally not do that. See further.
> 
> > But I don't like that, in particular when some endpoints are better with 
> > streaming responses (large downloads) and I have to fiddle with the client 
> > and set it back to the settings before executing the request.
> 
> You only can and should reuse connections to the same endpoint/service only, 
> doing similar types of request/responses (let's say simple ones).
> 
> A definitive danger point is authentication and authorization: different 
> calls by different users might need different REST call settings, each time. 
> Also, caching can be a problem, if user A can see / sees the cache of user B.
> 
> > So, long story short... is it always safer to instantiate a new ZnClient on 
> > a per request basis since no state is shared, but I guess it is also less 
> > effective if I'm performing several requests to the same server.
> 
> A new instance is definitely safer because it is cleaner. Reusing a 
> connection is more efficient when doing multiple calls in (quick) succession. 
> The penalty is usually pretty low, you can postpone optimising until there is 
> an actual problem.
> 
> Error handling and recovery are also harder in the reuse case (what state is 
> the connection in ?).
> 
> ZnClient does a form of automatic reconnection/retry though.
> 
> > What are the recommended approaches here?
> > 
> > 
> > Esteban A. Maringolo
> 
> HTH,
> 
> Sven


[Pharo-users] Re: Strategies for (re)using HTTP client

2021-10-29 Thread Sven Van Caekenberghe
Hi,

> On 29 Oct 2021, at 15:42, Esteban Maringolo  wrote:
> 
> Hi,
> 
> I happened to me more than once that I have to create some REST service 
> "client" in which I usually wrap an HTTP client inside (ZnClient in this 
> case), and all the calls to the service, end up using the HTTP client, inside 
> some mutex to serialize the execution.

Yes, that is a good design. However, whether your REST client object wrapping a 
ZnClient is used by multiple concurrent threads is another decision. I would 
personally not do that. See further.

> But I don't like that, in particular when some endpoints are better with 
> streaming responses (large downloads) and I have to fiddle with the client 
> and set it back to the settings before executing the request.

You can and should reuse connections to the same endpoint/service only, 
doing similar types of requests/responses (let's say simple ones).

A definite danger point is authentication and authorization: different calls 
by different users might need different REST call settings, each time. Also, 
caching can be a problem, if user A can see / sees the cache of user B.

> So, long story short... is it always safer to instantiate a new ZnClient on a 
> per request basis since no state is shared, but I guess it is also less 
> effective if I'm performing several requests to the same server.

A new instance is definitely safer because it is cleaner. Reusing a connection 
is more efficient when doing multiple calls in (quick) succession. The penalty 
is usually pretty low; you can postpone optimising until there is an actual 
problem.

Error handling and recovery are also harder in the reuse case (what state is 
the connection in ?).

ZnClient does a form of automatic reconnection/retry though.
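
For reference, the throw-away "new client per request" pattern is as simple as
this (the URL is illustrative):

| client |
client := ZnClient new.
client url: 'https://example.com/api/resource'.
client accept: ZnMimeType applicationJson.
[ client get ] ensure: [ client close ].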

> What are the recommended approaches here?
> 
> 
> Esteban A. Maringolo

HTH,

Sven


[Pharo-users] Re: Splitting a single HTTP Request into multiple concurrent requests

2021-10-19 Thread Sven Van Caekenberghe



> On 19 Oct 2021, at 00:17, Yanni Chiu  wrote:
> 
> A good use case is when one of the downloads fails. When it’s just one big 
> one then you have to start over from the beginning.

Yes, that is a good use case for the Range feature.

It is possible to configure ZnClient to retry when a request fails, for example:

  client numberOfRetries: 3; retryDelay: 2 "seconds".

could be added to the example.

> On Mon, Oct 18, 2021 at 11:05 AM Sven Van Caekenberghe  wrote:
> Hi,
> 
> Somebody asked how you would split single HTTP Request into multiple 
> concurrent requests. This is one way to do it.
> 
> Upfront I should state that 
> 
> - I do no think this is worth the trouble
> - It is only applicable to large downloads (even larger than in the example)
> - The other side (server) must honour Range requests correctly (and be fast)
> 
> This one is based on the data used in the ZnHTTPSTest(s)>>#testTransfers 
> units test. More specifically the files available under 
> https://s3-eu-west-1.amazonaws.com/public-stfx-eu/ such as 
> https://s3-eu-west-1.amazonaws.com/public-stfx-eu/test-2050.txt for the 
> smallest one.
> 
> sizes := (Integer primesUpTo: 100) collect: [ :each | 1024 * each + each ].
> 
> size := sizes last.
> concurrency := 11.
> step := size // concurrency.
> 
> ranges := (0 to: size - 1 by: step) collect: [ :each |
>   { each. (each + step) min: size } ].
> 
> chunks := Array new: ranges size.
> done := Semaphore new.
> ms := 0.
> 
> [
> ms := Time millisecondClockValue.
> ranges withIndexDo: [ :range :index | 
>   [ | client |
>  (client := ZnClient new)
> https;
> host: 's3-eu-west-1.amazonaws.com';
> addPath: 'public-stfx-eu'.
>  client addPath: ('test-{1}.txt' format: { size }).
>  client headerAt: #Range put: ('bytes={1}-{2}' format: range).
>  client get.
>  client close.
>  chunks at: index put: client contents.
>  done signal ] forkAt: Processor lowIOPriority ].
> ranges size timesRepeat: [ done wait ].
> ms := Time millisecondsSince: ms.
> (String empty join: chunks) inspect.
> ] fork.
> 
> This takes about 2 seconds total for me.
> 
> [
>ZnClient new
>  https;
>  host: 's3-eu-west-1.amazonaws.com';
>  addPath: 'public-stfx-eu';
>  addPath: 'test-99425.txt';
>  get.
> ] timeToRun.
> 
> Which is roughly similar to the single request (again, for me).
> 
> Two things to note: connection time dominates, in the parallel case, 11 
> independent requests were executed, so concurrency is definitively happening.
> 
> The largest size file is just 100k, split in about 10 parts, which is most 
> probably not enough to see much effect from doing things in parallel.
> 
> HTH,
> 
> Sven


[Pharo-users] Splitting a single HTTP Request into multiple concurrent requests

2021-10-18 Thread Sven Van Caekenberghe
Hi,

Somebody asked how you would split a single HTTP request into multiple concurrent 
requests. This is one way to do it.

Upfront I should state that 

- I do not think this is worth the trouble
- It is only applicable to large downloads (even larger than in the example)
- The other side (server) must honour Range requests correctly (and be fast)

This one is based on the data used in the ZnHTTPSTest(s)>>#testTransfers unit 
test. More specifically the files available under 
https://s3-eu-west-1.amazonaws.com/public-stfx-eu/ such as 
https://s3-eu-west-1.amazonaws.com/public-stfx-eu/test-2050.txt for the 
smallest one.

sizes := (Integer primesUpTo: 100) collect: [ :each | 1024 * each + each ].

size := sizes last.
concurrency := 11.
step := size // concurrency.

ranges := (0 to: size - 1 by: step) collect: [ :each |
  { each. (each + step) min: size } ].

chunks := Array new: ranges size.
done := Semaphore new.
ms := 0.

[
  ms := Time millisecondClockValue.
  ranges withIndexDo: [ :range :index |
    [ | client |
      (client := ZnClient new)
        https;
        host: 's3-eu-west-1.amazonaws.com';
        addPath: 'public-stfx-eu'.
      client addPath: ('test-{1}.txt' format: { size }).
      client headerAt: #Range put: ('bytes={1}-{2}' format: range).
      client get.
      client close.
      chunks at: index put: client contents.
      done signal ] forkAt: Processor lowIOPriority ].
  ranges size timesRepeat: [ done wait ].
  ms := Time millisecondsSince: ms.
  (String empty join: chunks) inspect.
] fork.

This takes about 2 seconds total for me.

[
  ZnClient new
    https;
    host: 's3-eu-west-1.amazonaws.com';
    addPath: 'public-stfx-eu';
    addPath: 'test-99425.txt';
    get.
] timeToRun.

Which is roughly similar to the single request (again, for me).

Two things to note: connection time dominates, and in the parallel case 11 
independent requests were executed, so concurrency is definitely happening.

The largest file is just 100K, split into about 10 parts, which is most 
probably not enough to see much effect from doing things in parallel.

HTH,

Sven


[Pharo-users] Re: Silly question about storing and displaying currency amounts?

2021-08-30 Thread Sven Van Caekenberghe
David,

> On 30 Aug 2021, at 14:02, David Pennington  wrote:
> 
> Hi everyone. I have a little bank analysis package for my own use but having 
> trouble displaying and storing amounts. I parse out a CSV file from the bank 
> and convert the amounts from text to ScaledDecimal. I then store these into a 
> STON file, which converts them back to text. I then read them in and convert 
> them back to ScaledDecimal again.
> 
> I am not used to Pharo, having spent 24 years using VisualAge Smalltalk, so I 
> need a little bit of help because I am getting 1 penny errors in the 
> conversions. I can cope with this but I would like to get it right.
> 
> Can anyone give me a simple means of managing, say, an amount like £76.49 
> from the bank so that it stops coming back to me as £76.48?
> 
> David
> Totally Objects

Working with money is always challenging. I know that many people say 'Use 
ScaledDecimal' as if that magically solves all problems but it does not. 
ScaledDecimal is a bit of a dangerous class to use. It is just a fraction with 
a certain precision. Internally it might be pretty accurate with respect to 
most operations, but its external representation as a floating point number can 
be misleading, as this representation is prone to all the problems related to 
floating point (including the fact that decimal and binary floating point are 
not the same).

STON does not do anything wrong as far as I know. Consider the following:

| amount ston |
amount := 76.49 asScaledDecimal.
ston := STON toString: amount.
STON fromString: ston.
(STON fromString: ston) = amount.
 "true"

| amount ston |
amount := 76.49 asScaledDecimal: 2.
ston := STON toString: amount.
STON fromString: ston.
(STON fromString: ston) = amount.
 "true"

| amount ston |
amount := 76.49.
ston := STON toString: amount.
STON fromString: ston.
(STON fromString: ston) = amount.
 "true"

BUT, 76.49 asScaledDecimal already has your penny loss (if rounded to pennies).
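
A quick way to see where the penny goes - the Float literal is already slightly 
below 76.49, while the ScaledDecimal literal 76.49s2 is exactly 7649/100:

(76.49 * 100) truncated.
 "7648"
(76.49s2 * 100) truncated.
 "7649"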

HTH,

Sven


[Pharo-users] Re: Using new VM but can't find any log files

2021-08-28 Thread Sven Van Caekenberghe
David,

> On 28 Aug 2021, at 17:47, David Pennington  wrote:
> 
> Hi there. I have installed the new M1 VM that supposedly gives much better 
> information into a log file. However, my image is still disappearing such 
> that it now doesn’t last more than 12 hours before going. I have searched for 
> these log files but I can’t find any. Can anyone help me with this, as I am 
> close to throwing this all into the bin if I can’t keep my simple web site 
> going regularly. My clients are not happy, as you can imagine, and are telling me 
> to just go with a Blogger blog! I don’t want to do that so can anyone help?
> David

We are sorry that you have this problem. Many people have already interacted 
with you via various channels.

Multiple Pharo developers are working on this issue 
(https://github.com/pharo-project/pharo/issues/9565) but right now there is no 
immediate solution.

One option would be to go back to Pharo 7 or 8 (it is normally easy to have the 
same code base for Pharo 7, 8 and 9), but then you probably need to move away 
from M1 hardware.

Another option would be to run your application under control of launchd (on 
macOS) or systemd (on Linux). These tools automatically restart your app when 
it goes down. This is something you need for proper production deployment 
anyway.
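
As an illustration only, a minimal systemd unit could look like this (all paths 
and start-up arguments are placeholders; adapt them to your own VM and image 
layout):

[Unit]
Description=My Pharo web application
After=network.target

[Service]
WorkingDirectory=/opt/myapp
ExecStart=/opt/myapp/pharo /opt/myapp/myapp.image start-server.st
Restart=always
RestartSec=5
User=pharo

[Install]
WantedBy=multi-user.target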

In any case we are working on it and will surely let you know if we make 
progress.

HTH,

Sven

[Pharo-users] Re: [ANN] Bootstrap 5 for Seaside in Pharo

2021-07-30 Thread Sven Van Caekenberghe
Hi Torsten,

Great, thanks for doing the Bootstrap series, it has been very useful for 
me/us, we use it in several production web apps. We're still on 3 though ;-)

Sven

> On 30 Jul 2021, at 09:24, Torsten Bergmann  wrote:
> 
> Hi,
> 
> after some of you are using https://github.com/astares/Seaside-Bootstrap4
> I wanted to let you know that I just published the new / updated project to
> support for Bootstrap 5 web development (https://getbootstrap.com) in Seaside
> using Pharo.
> 
> Project location is on GitHub:
> 
>https://github.com/astares/Seaside-Bootstrap5
> 
> It might not be 100% complete with all the new possibilities of BS5 - but
> it is usable and tests are green in Pharo 8 and Pharo 9. Examples are 
> included,
> just follow the instructions on the Github page.
> 
> Feel free to send contributions via PR's.
> 
> Have fun!
> T.


[Pharo-users] Re: ZnServer on a cheap server: your experiences

2021-07-27 Thread Sven Van Caekenberghe
Hi Yanni,

Thanks for sharing your experience report!

Sven

> On 27 Jul 2021, at 01:53, Yanni Chiu  wrote:
> 
> Yes, very stable. I've had a DigitalOcean droplet running for 6 years
> that was holding various false starts. To my surprise, one (never
> "shipped") website was still running (Pharo-3.?, mongodb, Bootstrap
> css). I'd been poking the site every once in a while, and found that
> the Bootstrap UI steadily degraded, because I'd pointed the site's
> .css at some non-fixed version from some CDN server (so that .css must
> have changed over the years). The uptime on the machine was over 1000
> days, and there have been many emails over the years about network,
> hardware, etc. changes. After years of unattended reboots, some relic
> was still up.
> 
> Anyhow, that instance is now deleted, probably soon to be joined by a
> more recent setup where I started with the DO 1-click docker droplet.
> This setup was useful to learn about docker, and I might go back to it
> in the future. Given Estaban's comments in this thread, I decided to
> do a bare nginx + Pharo image deployment (which is similar to the now
> deleted 6 year instance described above, which had Apache2 instead of
> nginx). I will have the mongo server on a separate machine, but have
> not decided whether to use the DO offering of a managed mongo server.
> It's still a work in progress, and other's experiences are of
> interest.
> 
> Thanks for reading.
> Yanni Chiu
> 
> On Thu, Jul 22, 2021 at 8:33 AM Esteban Maringolo  
> wrote:
>> 
>> Yeap, not much more to add than what Sven said. What's important to
>> mention is that they are really stable.
>> I've been running Pharo servers in DO droplets of all sizes, without
>> issues (some running for months), I only had to upscale one droplet to
>> more memory, and it was because of a leak I introduced with PGSQL
>> connections, otherwise, they're pretty lightweight for today's
>> standards and a normal workload.
>> 
>> I have deployments with nginx and Pharo images as upstreams, and I
>> have one with Docker swarm and Traefik doing the load balancing among
>> different Pharo workers and acting as the HTTPS endpoint. I'm removing
>> this option though and moving back to nginx only.
>> 
>> What I never tried was to host a docker container running Pharo, in
>> some Docker hosting. It might be the best option as a quick start, but
>> after doing the math, it's always more expensive than a Droplet.
>> 
>> Best regards,
>> 
>> Esteban A. Maringolo
>> 
>> On Thu, Jul 22, 2021 at 3:21 AM Sven Van Caekenberghe  wrote:
>>> 
>>> Hi Vince,
>>> 
>>> That is certainly possible and works well. I would recommend an instance 
>>> with 1GB RAM, that leaves you some headroom.
>>> 
>>> Deploying web applications is of course a broad subject, much of the 
>>> required knowledge is not Pharo specific, but needed anyway.
>>> 
>>> The last chapter in the Pharo Enterprise book 
>>> (https://books.pharo.org/enterprise-pharo/) is a good starting point 
>>> (Deployment). But there are other and different approaches.
>>> 
>>> A plain HTTP demo instance running on a DO instance can be found here: 
>>> http://zn.stfx.eu/welcome
>>> 
>>> For production use you should front with NGINX or something similar to add 
>>> HTTPS.
>>> 
>>> May people on this mailing list deploy Pharo server applications, we have 
>>> tens of them in day to day production doing real work.
>>> 
>>> Good luck on your journey, you know where to ask questions.
>>> 
>>> Regards,
>>> 
>>> Sven
>>> 
>>>> On 22 Jul 2021, at 07:56, vin...@gmail.com wrote:
>>>> 
>>>> Anyone here run a web app using plain ZnServer (or subclasses) on a cheap 
>>>> VPS (i.e., $10/month DO droplet or equivalent). What are your 
>>>> experiences?, suggestions.
>>>> 
>>>> I am planning a web app with just plain ZnServer, SQLite3, ATS or 
>>>> equivalent.
>>>> 
>>>> Thanks, Vince
>>>> 


[Pharo-users] Re: Zinc exception logging

2021-07-26 Thread Sven Van Caekenberghe
Hi Esteban,

> On 26 Jul 2021, at 02:47, Esteban Maringolo  wrote:
> 
> Is there a way to have a "stack dump" response in Zinc?
> 
> #debugMode is good for development, but I'm having an "Internal Error:
> 4", that I don't know how to trace.
> 
> *   Trying 167.71.182.110...
> * TCP_NODELAY set
> * Connected to fore.base.golf (167.71.182.110) port 8090 (#0)
>> GET /web HTTP/1.1
>> Host: x.y.z:8090
>> User-Agent: curl/7.58.0
>> Accept: */*
>> 
> < HTTP/1.1 500 Internal Server Error
> < Content-Length: 512
> < Content-Type: text/plain;charset=utf-8
> < Server: Zinc HTTP Components 1.0 (Pharo/8.0)
> < Date: Mon, 26 Jul 2021 00:45:39 GMT
> <
> * Connection #0 to host x.y.z left intact
> Internal Error: 4
> 
> Regards!
> 
> Esteban A. Maringolo

Searching the source code of my image, I only find 'Internal Error' in 
WAExceptionHandler/WAResponseGenerator, not in Zn code. Furthermore, printing 4 
there would be weird (as it is the exception message).

Is this a plain Zn response or a Seaside response ?

I know that Seaside has more logging options.

Zn does not have an exception logging mechanism that dumps a stack trace, only 
the debugMode for interactive development. I see you used curl; can't you fire 
the request against your dev image then ?
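
For completeness, during interactive development that mode is switched on with:

ZnServer default debugMode: true.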

Sven


[Pharo-users] Re: ZnServer on a cheap server: your experiences

2021-07-22 Thread Sven Van Caekenberghe
Hi Vince,

That is certainly possible and works well. I would recommend an instance with 
1GB of RAM; that leaves you some headroom.

Deploying web applications is of course a broad subject, much of the required 
knowledge is not Pharo specific, but needed anyway.

The last chapter in the Pharo Enterprise book 
(https://books.pharo.org/enterprise-pharo/) is a good starting point 
(Deployment). But there are other and different approaches.

A plain HTTP demo instance running on a DO instance can be found here: 
http://zn.stfx.eu/welcome
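
Getting a basic ZnServer up is a one-liner; a minimal sketch (port and response 
are illustrative):

ZnServer startDefaultOn: 8080.
ZnServer default onRequestRespond: [ :request |
  ZnResponse ok: (ZnEntity text: 'Hello from Pharo') ].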

For production use you should front with NGINX or something similar to add 
HTTPS.

Many people on this mailing list deploy Pharo server applications; we have tens 
of them in day-to-day production doing real work.

Good luck on your journey, you know where to ask questions.

Regards,

Sven 

> On 22 Jul 2021, at 07:56, vin...@gmail.com wrote:
> 
> Anyone here run a web app using plain ZnServer (or subclasses) on a cheap VPS 
> (i.e., $10/month DO droplet or equivalent). What are your experiences?, 
> suggestions.
> 
> I am planning a web app with just plain ZnServer, SQLite3, ATS or equivalent.
> 
> Thanks, Vince
> 


[Pharo-users] Re: How to handle (recover) from a ZnInvalidUTF8: Illegal continuation byte for utf-8 encoding error?

2021-07-20 Thread Sven Van Caekenberghe
There is ZnCharacterEncoder knownEncodingIdentifiers.

You either provide an identifier from this list (as string or symbol) or an 
instance (the argument gets sent #asZnCharacterEncoder if you want to know).

Most text editors will tell you the encoding they are using to read your file 
and you can use that to inspect the contents.

If you want, you can send me such a file privately.

Yes, you can access the encoder from the character read stream to configure it 
further. Or you can do it upfront by passing an instance instead of an identifier.

> On 20 Jul 2021, at 15:47, Tim Mackinnon  wrote:
> 
> Hey thanks guys - so looking at readStreamEncoded: - how do I know what the 
> valid encodings are? Skimming those docs Sven referenced, I can start to 
> pick out some - but is there a list? I see that method parameter says 
> “anEncoding” but the type hint on that is misleading as it seems like it's a 
> String or is it a Symbol? If I search for Encoder classes - I do find 
> ZnCharacterEncoder - and it has class methods for latin1, utf8, ascii - so is 
> this the definitive list? And should the encoding strings used in those 
> methods be constants or something I can reference in my code?
> 
> Gosh - this raises a whole host of things I just naively assumed happened for 
> me.
> 
> So it looks like the file giving me issues - seems to have characters like £ 
> or ¬ in it. So I’m wondering how I know what the proper encoding format would 
> be (I think these files were written out with some PHP app) - is it just a 
> trial and error thing?
> 
> I tried changing my code to:
> 
> details parseStream: (firmEfs readStreamEncoded: 'iso-8859-1'). - and other 
> variants like 'ASCII' and 'latin1' - and this then gives me another error:
> "ZnCharacterEncodingError: Character Unicode code point outside encoder range"
> 
> So it does sound like I have a file that isn’t conforming to known standards 
> - and I guess I have to use #beLenient option.
> 
> Sven - In the examples for using #beLenient - you seem to show something that 
> assumes you will iterate with Do - as my existing code takes a stream, that 
> it wants to do a #nextLine on - would it be bad to do something like this:
> 
> efsStream := (firmEfs readStreamEncoded: 'latin1').
> efsStream encoder beLenient.
> 
> details parseStream: efsStream.
> 
> That is - get the encoder from my Stream and make it lenient? 
>
> Appreciate the pointers on this guys - I’m definitely learning something new 
> here.
> 
> Tim
> 
>> On 20 Jul 2021, at 12:11, Guillermo Polito  wrote:
>> 
>> 
>> 
>>> On 20 Jul 2021, at 11:45, Sven Van Caekenberghe  wrote:
>>> 
>>> 
>>> 
>>>> On 20 Jul 2021, at 11:03, Sven Van Caekenberghe  wrote:
>>>> 
>>>> Hi Tim,
>>>> 
>>>> An introduction to this part of the system is in 
>>>> https://ci.inria.fr/pharo-contribution/job/EnterprisePharoBook/lastSuccessfulBuild/artifact/book-result/Zinc-Encoding-Meta/Zinc-Encoding-Meta.html
>>>>  [Character Encoding and Resource Meta Description] from the "Enterprise 
>>>> Pharo" book.
>>>> 
>>>> The error means that a file that you try to read as UTF-8 does contain 
>>>> things that are invalid with respect to the UTF-8 standard.
>>>> 
>>>> Are you sure the file is in UTF-8, maybe it is in ASCII, Latin-1 or 
>>>> something else ?
>>>> 
>>>> It is possible to customise the encoding to something different than the 
>>>> default UTF-8. For non-UTF encoders, there is a strict/lenient option to 
>>>> disallow/allow illegal stuff (but then you will get these in your strings).
>>>> 
>>>> I can show you how to do that if you want.
>>> 
>>> '/var/log/system.log' asFileReference readStreamDo: [ :in | in upToEnd ].
>>> 
>>> '/var/log/system.log' asFileReference binaryReadStreamDo: [ :in |
>>> (ZnCharacterReadStream on: in encoding: #ascii) upToEnd ].
>>> 
>>> '/var/log/system.log' asFileReference binaryReadStreamDo: [ :in |
>>> (ZnCharacterReadStream on: in encoding: ZnCharacterEncoder ascii 
>>> beLenient) upToEnd ].
>> 
>> There is also readStreamEncoded:[do:], which is a bit more concise but does 
>> the same :)
>> 
>>> 
>>> HTH
>>> 
>>>> Sven
>>>> 
>>>>> On 20 Jul 2021, at 10:31, Tim Mackinnon  wrote:
>>>>> 
>>>>> Hi - I’m doing a bit of log file processing with Pharo - and I’ve hit an 
>>>>> unexpected error and am wondering what t

[Pharo-users] Re: How to handle (recover) from a ZnInvalidUTF8: Illegal continuation byte for utf-8 encoding error?

2021-07-20 Thread Sven Van Caekenberghe



> On 20 Jul 2021, at 12:11, Guillermo Polito  wrote:
> 
> 
> 
>> On 20 Jul 2021, at 11:45, Sven Van Caekenberghe  wrote:
>> 
>> 
>> 
>>> On 20 Jul 2021, at 11:03, Sven Van Caekenberghe  wrote:
>>> 
>>> Hi Tim,
>>> 
>>> An introduction to this part of the system is in 
>>> https://ci.inria.fr/pharo-contribution/job/EnterprisePharoBook/lastSuccessfulBuild/artifact/book-result/Zinc-Encoding-Meta/Zinc-Encoding-Meta.html
>>>  [Character Encoding and Resource Meta Description] from the "Enterprise 
>>> Pharo" book.
>>> 
>>> The error means that a file that you try to read as UTF-8 does contain 
>>> things that are invalid with respect to the UTF-8 standard.
>>> 
>>> Are you sure the file is in UTF-8, maybe it is in ASCII, Latin-1 or 
>>> something else ?
>>> 
>>> It is possible to customise the encoding to something different than the 
>>> default UTF-8. For non-UTF encoders, there is a strict/lenient option to 
>>> disallow/allow illegal stuff (but then you will get these in your strings).
>>> 
>>> I can show you how to do that if you want.
>> 
>> '/var/log/system.log' asFileReference readStreamDo: [ :in | in upToEnd ].
>> 
>> '/var/log/system.log' asFileReference binaryReadStreamDo: [ :in |
>>  (ZnCharacterReadStream on: in encoding: #ascii) upToEnd ].
>> 
>> '/var/log/system.log' asFileReference binaryReadStreamDo: [ :in |
>>  (ZnCharacterReadStream on: in encoding: ZnCharacterEncoder ascii 
>> beLenient) upToEnd ].
> 
> There is also readStreamEncoded:[do:], which is a bit more concise but does 
> the same :)

Yes indeed !

>> HTH
>> 
>>> Sven
>>> 
>>>> On 20 Jul 2021, at 10:31, Tim Mackinnon  wrote:
>>>> 
>>>> Hi - I’m doing a bit of log file processing with Pharo - and I’ve hit an 
>>>> unexpected error and am wondering what the best way to approach it is.
>>>> 
>>>> It seems that I have a log file that has unexpected characters, and so my 
>>>> readStream loop that reads lines gets an error: "ZnInvalidUTF8: Illegal 
>>>> continuation byte for utf-8 encoding”.
>>>> 
>>>> For some reason this file (unlike my others) seems to contain characters 
>>>> that it shouldn’t - but what is the best way for me to continue 
>>>> processing? Should I be opening my files in a different way - or can I 
>>>> resume the error somehow- I’m not familiar with this area of Pharo and am 
>>>> after a bit of advice.
>>>> 
>>>> My code is like this (and I get the error when doing nextLine)
>>>> 
>>>> 
>>>> parseStream: aFileStream with: aBlock
>>>>   | line items |
>>>>   [ (line := aFileStream nextLine) isNil ]
>>>>     whileFalse: [ 
>>>>       items := $/ split: line.
>>>>       items size = 3 ifTrue: [aBlock value: items]]
>>>> 
>>>> My stream is created like this:
>>>> 
>>>> firmEfs := (pathName , '/' , firmName , '_files') asFileReference.
>>>> details parseStream: firmEfs readStream.
>>>> 
>>>> 
>>>> Should I be opening the stream a bit differently - or can I catch that 
>>>> encoding error and resume it with some safe character?
>>>> 
>>>> Thanks for any help.
>>>> 
>>>> Tim


[Pharo-users] Re: How to handle (recover) from a ZnInvalidUTF8: Illegal continuation byte for utf-8 encoding error?

2021-07-20 Thread Sven Van Caekenberghe



> On 20 Jul 2021, at 11:03, Sven Van Caekenberghe  wrote:
> 
> Hi Tim,
> 
> An introduction to this part of the system is in 
> https://ci.inria.fr/pharo-contribution/job/EnterprisePharoBook/lastSuccessfulBuild/artifact/book-result/Zinc-Encoding-Meta/Zinc-Encoding-Meta.html
>  [Character Encoding and Resource Meta Description] from the "Enterprise 
> Pharo" book.
> 
> The error means that a file that you try to read as UTF-8 does contain things 
> that are invalid with respect to the UTF-8 standard.
> 
> Are you sure the file is in UTF-8, maybe it is in ASCII, Latin-1 or something 
> else ?
> 
> It is possible to customise the encoding to something different than the 
> default UTF-8. For non-UTF encoders, there is a strict/lenient option to 
> disallow/allow illegal stuff (but then you will get these in your strings).
> 
> I can show you how to do that if you want.

'/var/log/system.log' asFileReference readStreamDo: [ :in | in upToEnd ].

'/var/log/system.log' asFileReference binaryReadStreamDo: [ :in |
  (ZnCharacterReadStream on: in encoding: #ascii) upToEnd ].

'/var/log/system.log' asFileReference binaryReadStreamDo: [ :in |
  (ZnCharacterReadStream on: in encoding: ZnCharacterEncoder ascii beLenient) upToEnd ].

HTH

> Sven
> 
>> On 20 Jul 2021, at 10:31, Tim Mackinnon  wrote:
>> 
>> Hi - I’m doing a bit of log file processing with Pharo - and I’ve hit an 
>> unexpected error and am wondering what the best way to approach it is.
>> 
>> It seems that I have a log file that has unexpected characters, and so my 
>> readStream loop that reads lines gets an error: "ZnInvalidUTF8: Illegal 
>> continuation byte for utf-8 encoding”.
>> 
>> For some reason this file (unlike my others) seems to contain characters 
>> that it shouldn’t - but what is the best way for me to continue processing? 
>> Should I be opening my files in a different way - or can I resume the error 
>> somehow- I’m not familiar with this area of Pharo and am after a bit of 
>> advice.
>> 
>> My code is like this (and I get the error when doing nextLine)
>> 
>> 
>> parseStream: aFileStream with: aBlock
>>  | line items |
>>  [ (line := aFileStream nextLine) isNil ]
>>  whileFalse: [ 
>>  items := $/ split: line.
>>  items size = 3 ifTrue: [aBlock value: items]]
>> 
>> My stream is created like this:
>> 
>> firmEfs := (pathName , '/' , firmName , '_files') asFileReference.
>> details parseStream: firmEfs readStream.
>> 
>> 
>> Should I be opening the stream a bit differently - or can I catch that 
>> encoding error and resume it with some safe character?
>> 
>> Thanks for any help.
>> 
>> Tim
> 


[Pharo-users] Re: How to handle (recover) from a ZnInvalidUTF8: Illegal continuation byte for utf-8 encoding error?

2021-07-20 Thread Sven Van Caekenberghe
Hi Tim,

An introduction to this part of the system is in 
https://ci.inria.fr/pharo-contribution/job/EnterprisePharoBook/lastSuccessfulBuild/artifact/book-result/Zinc-Encoding-Meta/Zinc-Encoding-Meta.html
 [Character Encoding and Resource Meta Description] from the "Enterprise Pharo" 
book.

The error means that a file that you try to read as UTF-8 does contain things 
that are invalid with respect to the UTF-8 standard.

Are you sure the file is in UTF-8 ? Maybe it is in ASCII, Latin-1 or something 
else ?

It is possible to customise the encoding to something different than the 
default UTF-8. For non-UTF encoders, there is a strict/lenient option to 
disallow/allow illegal stuff (but then you will get these in your strings).

I can show you how to do that if you want.

Sven

> On 20 Jul 2021, at 10:31, Tim Mackinnon  wrote:
> 
> Hi - I’m doing a bit of log file processing with Pharo - and I’ve hit an 
> unexpected error and am wondering what the best way to approach it is.
> 
> It seems that I have a log file that has unexpected characters, and so my 
> readStream loop that reads lines gets an error: "ZnInvalidUTF8: Illegal 
> continuation byte for utf-8 encoding”.
> 
> For some reason this file (unlike my others) seems to contain characters that 
> it shouldn’t - but what is the best way for me to continue processing? Should 
> I be opening my files in a different way - or can I resume the error somehow- 
> I’m not familiar with this area of Pharo and am after a bit of advice.
> 
> My code is like this (and I get the error when doing nextLine)
> 
> 
> parseStream: aFileStream with: aBlock
>   | line items |
>   [ (line := aFileStream nextLine) isNil ]
>   whileFalse: [ 
>   items := $/ split: line.
>   items size = 3 ifTrue: [aBlock value: items]]
> 
> My stream is created like this:
> 
> firmEfs := (pathName , '/' , firmName , '_files') asFileReference.
> details parseStream: firmEfs readStream.
> 
> 
> Should I be opening the stream a bit differently - or can I catch that 
> encoding error and resume it with some safe character?
> 
> Thanks for any help.
> 
> Tim


[Pharo-users] Re: [ANN] Pharo 9 released!

2021-07-15 Thread Sven Van Caekenberghe



> On 15 Jul 2021, at 16:44, Russ Whaley  wrote:
> 
> Congratulations everyone on Pharo v9.  I love working in this immersive 
> environment.

Well said, congratulations to all contributors.

> On Thu, Jul 15, 2021 at 5:15 AM Esteban Lorenzano  wrote:
> Dear World and dynamic language lovers: 
> 
> The time has come for Pharo 9 !
> 
> Pharo is a pure object-oriented programming language and a powerful 
> environment, focused on simplicity and immediate feedback.
> 
> 
> 
> Here are the key features of Pharo 9:   
> 
>   • Full redesign of the Spec UI framework (new logic, application, 
> style, GTK3 back-end)
>   • New tools:
>   •
>   • new playground,
>   • new object centric inspector,
>   • new object centric debugger.
>   • better and new Refactorings
>   • class comments are now written in Microdown format (Markdown 
> compatible)
>   • classes now can be defined using a "fluid" api (Preview)
>   • New completion framework that adapts better to edition contexts and 
> is customizable
>   • Fast universal non-blocking FFI which now uses libFFI as backend.
>   • Pharo now supports Windows, OSX, Linux (Ubuntu, Debian, Fedora, 
> openSUSE, Arch, Raspbian) and multiple architectures (Intel/ARM 32/64bits).
>   • Virtual Machine
>   • Idle VM
>   • Support for ARM 64bits
>   • Support for Apple M1
>   • More than 3000 tests
>   • Built for Ubuntu 18.04, 19.04, 20.04, 21.04, 21.10; Debian 9, 
> 10, Testing; Fedora 32, 33, 34; openSUSE 15.1, 15.2, Tumbleweed; Manjaro; Arch
>   • Uses SDL 2.0 as back-end by default. It supports extended event 
> handling, including trackpad support.
>   • General speed up due to compiler optimisations and UI simplification.
>   • And many, many more tests.
> 
> These are just the more prominent highlights, but the details are just as 
> important. We have closed a massive amount of issues: around 1400 issues and 
> 2150 pull requests.
> 
> A more extended changelog can be found at 
> https://github.com/pharo-project/pharo-changelogs/blob/master/Pharo90ChangeLogs.md.
> 
> While the technical improvements are significant, still the most impressive 
> fact is that the new code that got in the main Pharo 9 image was contributed 
> by more than 90 people.
> 
> Pharo is more than code. It is an exciting project involving a great 
> community. 
> 
> We thank all the contributors to this release:
> 
> Aaron Bieber, Ackerley Tng, Alban Benmouffek, Ale Cossio, Alexandre Bergel, 
> Alistair Grant, Allex Oliveira, Angela Chia-Ling, Arturo Zambrano, Asbathou 
> Biyalou-Sama, Ben Coman, Benoit Verhaegue, Carlo Teixeira, Carlos Lopez, 
> Carolina Hernandez, Charles A. Monteiro, Christoph Thiede, Christophe 
> Demarey, Clotilde Toullec, Cyril Ferlicot, Damien Pollet, Daniel Aparicio, 
> David Bajger, David Sánchez i Gregori, Denis Kudriashov, Ellis Harris, Eric 
> Brandwein, Eric Gade, Erik Stel, ErikOnBike, Esteban Lorenzano, Esteban 
> Villalobos, Evelyn Cusi Lopez, Evelyn Cusi Lopez, Ewan Dawson, Francis 
> Pineda, Francis Pineda, Gabriel Omar Cotelli, Geraldine Galindo, Giovanni 
> Corriga, Guille Polito, Himanshu jld, Johan Brichau, Jonathan van Alteren, 
> Jordan Montt, Julien Delplanque, Kamil Kukura, Kasper Østerbye, Kurt Kilpela, 
> Laurine Dargaud, Marco Rimoldi, Marcus Denker, Martin Dias, Martin McClure, 
> Massimo Nocentini, Max Leske, Maximilian Ignacio Willembrinck Santander, 
> Milton Mamani Torres, Moussa Saker, Myroslava Romaniuk, Nicolas Anquetil, 
> Norbert Hartl, Nour Djihan, Oleksandr Zaitsev, Pablo Sánchez Rodríguez, Pablo 
> Tesone, Pavel Krivanek, Philippe Lesueur, Pierre Misse, Rakshit P., Rob 
> Sayers, Roland Bernard, Ronie Salgado, Sean DeNigris, Sebastian Jordan 
> Montaño, Serge Stinckwich, Stephan Eggermont, Steven Costiou, Stéphane 
> Ducasse, Sven Van Caekenberghe, Thomas Dupriez, Théo Lanord, Théo Rogliano, 
> Todd Blanchard, Torsten Bergmann, Vincent Blondeau, Wéslleymberg Lisboa.
>  
> 
> (If you contributed with Pharo 9 development in any way and we missed your 
> name, please send us an email and we will add you).
> 
> Enjoy!
> 
> The Pharo Team
> 
> Try Pharo: http://pharo.org/download
> 
> Learn Pharo: http://pharo.org/documentation
> 
> 
> -- 
> Russ Whaley
> whaley.r...@gmail.com


[Pharo-users] Re: Communication between different images

2021-07-01 Thread Sven Van Caekenberghe



> On 1 Jul 2021, at 21:01, Esteban Maringolo  wrote:
> 
> On Thu, Jul 1, 2021 at 3:00 PM Jesus Mari Aguirre
>  wrote:
>> 
>> As far as I know, zmq doesn't need a broker but subscribers should know the 
>> address of the publisher, if the network increases its complexity with more 
>> publishers you need a broker,  that is a proxy on zmq.
>> If I understand well you need any of them should be able to publish a change 
>> to all of the other images?
> 
> I want to broadcast notifications of updated objects so they can be
> recomputed or reloaded from the database to reflect the latest
> changes.
> The change might happen in any of the worker images, either by user UI
> or API call.

This reminds me that there is a notification mechanism in PostgreSQL. I have 
not used it though. The SQL commands are NOTIFY and (UN)LISTEN. I think these 
will arrive as P3Notifications.
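
As a rough sketch of the SQL side (the connection URL and channel name are made 
up, and I have not verified how P3 surfaces the notifications, so treat this as 
a starting point only):

	"in the image that made the change"
	(P3Client new url: 'psql://user@localhost/appdb')
		connect;
		execute: 'NOTIFY object_updates, ''Invoice 42''';
		close.

	"in the interested images"
	(P3Client new url: 'psql://user@localhost/appdb')
		connect;
		execute: 'LISTEN object_updates'.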

> 
> Regards!
> 
> Esteban A. Maringolo


[Pharo-users] Re: Communication between different images

2021-07-01 Thread Sven Van Caekenberghe



> On 1 Jul 2021, at 20:58, Esteban Maringolo  wrote:
> 
> Hi Sven,
> 
> Thanks, now I understand better the use of QoS and session ID with a
> practical use case.
> 
> How do you deal with the lifetime of each "listener" (subscriber) on
> each image? I mean, how stable is it?
> 
> E.g.
> MQTTClient new
>   open;
>   subscribeToTopic: '/updates';
>   runWith: [ :message |
>     Transcript cr; show: '[UPDATED]'; space; show: message contents asString; cr.
>   ];
>   yourself.
> 
> You just fork that and it will continue to receive messages until you
> close the connection?

Basically, yes.

> Is there a watchdog class/library to ensure
> things like this continue working? (I'm used to having my images
> living for months sometimes).

I have an MQTTListener class that wraps the MQTT client and its process, the 
process's run method looks like this:

run
	[
		self log: 'Starting MQTT reading'.
		self ensureMQTTClient ifTrue: [
			mqttClient runWith: [ :message |
				self handleMessage: message ] ].
		self log: 'End of MQTT reading, restarting in 5s'.
		self closeMQTTClient.
		5 seconds wait ] repeat

It is a bit crude but it does help to restore the connection when the broker 
becomes temporarily unavailable.

> Thanks in advance.
> 
> Regards!
> 
> 
> Esteban A. Maringolo
> 
> 
> On Wed, Jun 30, 2021 at 5:00 AM Sven Van Caekenberghe  wrote:
>> 
>> 
>> 
>>> On 30 Jun 2021, at 01:28, Esteban Maringolo  wrote:
>>> 
>>> I like the Ansible approach using RabbitMQ, in particular because it
>>> would also work to delegate tasks to workers (even if in the same
>>> image), and the task queue would be preserved, whilst in MQTT if there
>>> are no subscribers for a topic, then all messages sent to it are
>>> discarded.
>> 
>> That is correct but less of a problem in practice than it seems to be at 
>> first sight.
>> 
>> As a client, you normally have to be able to start from scratch and load 
>> everything that came before the moment you start up.
>> 
>> When a client with a specific ID has connected once with clean session 
>> false, QoS 1 and a subscription, the persistence of the queue will be 
>> enabled for as long as the server lives (and maybe even beyond restarts).
>> 
>> There is absolutely no question that Rabbit MQ has much more functionality, 
>> I found Mosquitto MQTT very nice to work with, but as always, YMMV.
>> 
>> Sven


[Pharo-users] Re: Communication between different images

2021-06-30 Thread Sven Van Caekenberghe



> On 30 Jun 2021, at 01:28, Esteban Maringolo  wrote:
> 
> I like the Ansible approach using RabbitMQ, in particular because it
> would also work to delegate tasks to workers (even if in the same
> image), and the task queue would be preserved, whilst in MQTT if there
> are no subscribers for a topic, then all messages sent to it are
> discarded.

That is correct but less of a problem in practice than it seems to be at first 
sight.

As a client, you normally have to be able to start from scratch and load 
everything that came before the moment you start up.

When a client with a specific ID has connected once with clean session false, 
QoS 1 and a subscription, the persistence of the queue will be enabled for as 
long as the server lives (and maybe even beyond restarts).

There is absolutely no question that Rabbit MQ has much more functionality, I 
found Mosquitto MQTT very nice to work with, but as always, YMMV.

Sven


[Pharo-users] Re: Communication between different images

2021-06-30 Thread Sven Van Caekenberghe



> On 29 Jun 2021, at 20:10, Esteban Maringolo  wrote:
> 
> I already got MQTT running, It's not completely clear to me how the
> QoS setting works (atLeastOnce, exactlyOnce, atMostOnce), I don't
> remember using that setting in Java (it probably had a sensible
> default).

From the comments in MQTTPacket class side

atMostOnce
	"Quality of service level 0 - no acks, send and forget"

	^ 0

atLeastOnce
	"Quality of service level 1 - single acks, possible duplicates"

	^ 1

exactlyOnce
	"Quality of service level 2 - no loss, no duplicates"

	^ 2

The lowest one is the fastest, but won't work with persistent queues. In most 
cases the middle level is OK and enough - duplicates can only occur when you do 
not receive the ack.




[Pharo-users] Re: Communication between different images

2021-06-29 Thread Sven Van Caekenberghe



> On 29 Jun 2021, at 15:50, Esteban Maringolo  wrote:
> 
> Hi Sven,
> 
> I thought about both RabittMQ and MQTT too.
> 
> For RabittMQ I noticed you provide a docker config to get a container
> running quickly.

That was done for GitHub action testing, by Santiago, you could indeed (re)use 
part of that.

> What is the easy-go solution for MQTT?

Installing mosquitto is very simple, just apt install. See also the GitHub 
action.

There is much to say about both, you can find lots of information on the 
internet on how to configure these services, getting started is quite easy.

> Regards!
> 
> Esteban A. Maringolo
> 
> On Tue, Jun 29, 2021 at 3:15 AM Sven Van Caekenberghe  wrote:
>> 
>> Hi Esteban,
>> 
>>> On 29 Jun 2021, at 04:55, Esteban Maringolo  wrote:
>>> 
>>> Hi,
>>> 
>>> I'm rearchitecting a web app to perform updates only when necessary
>>> (instead of computing them all the time) on each request, I can have a
>>> global announcer and subscribers to know when to update within an
>>> image, but is there a way to have something like that but for
>>> inter-image coordination?
>>> 
>>> I'd only need to communicate the id and the class name (or a similar
>>> identifier), so on other images they'll update accordingly, and if
>>> there is an update in one image, it will notify the other images. The
>>> common data is on the database, so this is just to avoid re-reading a
>>> lot of things.
>>> 
>>> Is a message queue a good fit for this? Pub/Sub?
>>> What is available in Pharo that works without having to set up a lot of 
>>> things?
>>> 
>>> Thanks!
>>> 
>>> Esteban A. Maringolo
>> 
>> RabbitMQ (for which there is the STOMP client 'STAMP' 
>> https://github.com/svenvc/stamp) is one option, but it is a bit more complex.
>> 
>> I guess Redis would work too 
>> (https://medium.com/concerning-pharo/quick-write-me-a-redis-client-5fbe4ddfb13d).
>> 
>> More recently I have been using MQTT (with the client 
>> https://github.com/svenvc/mqtt) which is much simpler.
>> 
>> You post a message on a topic and have one or more listeners see it. You can 
>> even arrange for the broker to keep the messages you miss (within a 
>> reasonable window) - which is very nice for all kinds of reasons (especially 
>> operational). I have a system in production that uses this mechanism to 
>> coordinate different images (and handle ingress of data) for more than a 
>> year now.
>> 
>> Sven


[Pharo-users] Re: Communication between different images

2021-06-29 Thread Sven Van Caekenberghe
Hi Esteban,

> On 29 Jun 2021, at 04:55, Esteban Maringolo  wrote:
> 
> Hi,
> 
> I'm rearchitecting a web app to perform updates only when necessary
> (instead of computing them all the time) on each request, I can have a
> global announcer and subscribers to know when to update within an
> image, but is there a way to have something like that but for
> inter-image coordination?
> 
> I'd only need to communicate the id and the class name (or a similar
> identifier), so on other images they'll update accordingly, and if
> there is an update in one image, it will notify the other images. The
> common data is on the database, so this is just to avoid re-reading a
> lot of things.
> 
> Is a message queue a good fit for this? Pub/Sub?
> What is available in Pharo that works without having to set up a lot of 
> things?
> 
> Thanks!
> 
> Esteban A. Maringolo

RabbitMQ (for which there is the STOMP client 'STAMP' 
https://github.com/svenvc/stamp) is one option, but it is a bit more complex.

I guess Redis would work too 
(https://medium.com/concerning-pharo/quick-write-me-a-redis-client-5fbe4ddfb13d).

More recently I have been using MQTT (with the client 
https://github.com/svenvc/mqtt) which is much simpler.

You post a message on a topic and have one or more listeners see it. You can 
even arrange for the broker to keep the messages you miss (within a reasonable 
window) - which is very nice for all kinds of reasons (especially operational). 
I have a system in production that uses this mechanism to coordinate different 
images (and handle ingress of data) for more than a year now.

Sven


[Pharo-users] Re: Zn and AWS - retrieving object versions?

2021-06-18 Thread Sven Van Caekenberghe
Tim,

> On 17 Jun 2021, at 19:05, Sven Van Caekenberghe  wrote:
> 
> PS: I think there is other AWS S3 for Pharo out there, but I can't remember.

I found two of them:

- https://github.com/newapplesho/aws-sdk-smalltalk

- https://github.com/jvdsandt/pharo-aws-toolbox

I am sure both of these are more up to date and get more AWS details right than 
my old minimal, proof-of-concept package.

Sven


[Pharo-users] Re: Zn and AWS - retrieving object versions?

2021-06-17 Thread Sven Van Caekenberghe
Tim,

I think that from the moment there is a query part added to the REST URL 
something goes wrong.

Step 1 is to make sure the signing works. Now, ZnAWSS3RequestSignatureTool was 
written for an earlier version (I don't remember which one, probably V1). This 
is tricky to figure out.

I know for example that if there are multiple query arguments, they have to be 
sorted, which is currently not happening. But that won't be the only problem 
(encoding could be another). The challenge is to construct the correct 
'canonical' string to sign.

First you got a clear 'signature wrong' error, that seems to be gone. But you 
still get an authentication error, I am not sure what is going on.

I am afraid you will have to try to debug this a bit yourself. It has been 
years since I looked at this code.

If you get this to work with another tool, maybe you can enable debug logging 
to see what is happening with the URL and headers.

Sven

PS: I think there is other AWS S3 for Pharo out there, but I can't remember.

> On 17 Jun 2021, at 18:32, Tim Mackinnon  wrote:
> 
> Hi Sven - actually I think its not just the versions thing - I can’t get 
> filtering to work on the standard query either:
> 
> client keysIn: 'mtt-data' query: (Dictionary with: 'prefix'->'sample').
> 
> Given a normal query works, there must be something extra that is being 
> missed/added that impacts things.
> 
> Tim
> 
>> On 17 Jun 2021, at 17:19, Tim Mackinnon  wrote:
>> 
>> Hi Sven - thanks for taking a quick look, this change did stop me getting an 
>> immediate error - but I seem to get no results back.
>> 
>> Digging into it a bit more - I seem to be issuing the GET request as 
>> indicated by 
>> https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectVersions.html
>> 
>> e.g.
>> 
>> GET /?versions HTTP/1.1
>> 
>> Headers:
>> a ZnMultiValueDictionary('User-Agent'->'Zinc HTTP Components 1.0 
>> (Pharo/8.0)’ 
>> 'Accept'->'*/*' 'Host'->'mtt-data.s3.amazonaws.com’ 
>> 'Date'->'Thu, 17 Jun 2021 16:08:44 GMT’ 
>> 'Authorization'->'AWS AKIAIP2xxx:DqPPbDy’)
>> 
>> From the spec (which I’m no expert on) - the get command looks right (the 
>> other params are optional) and the Bucket name is prefixed onto 
>> s3.amazonaws.com in the header.
>> 
>> But it seems that I am getting a 200 - 403 response:
>> "a ZnResponse(200 OK application/xml 1899B)"
>> HTTP/1.1 403 Forbidden
>> 
>> (Aside - the code should possibly handle this  better - the spec seems to 
>> say that response content needs to be interpreted - but that’s something I 
>> can look to help with when we know the actual issue).
>> 
>> But I’m a bit mystified why the previous client keysIn: 'mtt-data' 
>> invocation would work, but this seemingly extra step would be forbidden. My 
>> IAM role says it has full S3 access, so I’m not sure whether we still 
>> haven’t got the right fix or if it’s something environmental. I’m not sure 
>> what to do - if you have another idea, I’m happy to be a guinea pig to try 
>> it.
>> 
>> Tim
>> 
>> 
>> 
>>> On 17 Jun 2021, at 09:06, Sven Van Caekenberghe  wrote:
>>> 
>>> Hi Tim,
>>> 
>>> It has been a very long time since I last looked at this code. It is a good 
>>> thing that at least part of it still works.
>>> 
>>> Just by reading the code and reasoning about your problem, I would suggest 
>>> you try the following change:
>>> 
>>> In ZnAWSS3RequestSignatureTool>>#canonicalStringFor: replace the last 
>>> 
>>> request uri pathPrintString
>>> 
>>> by
>>> 
>>> request uri pathQueryFragmentPrintString
>>> 
>>> I did not try this myself, so it might not solve your issue.
>>> 
>>> Please let me know if this works.
>>> 
>>> Sven
>>> 
>>>> On 17 Jun 2021, at 03:05, Tim Mackinnon  wrote:
>>>> 
>>>> Hi everyone - I’m wondering if someone knows the trick to listing object 
>>>> versions in AWS S3?
>>>> 
>>>> I was previously using a non-Zn library (there are a few around - but they 
>>>> are quite old and I’m not sure how much they are maintained) - however I 
>>>> hadn’t realised that Zn actually supports S3 until I read it a bit 
>>>> carefully and realised that I needed to load an extra AWS group.
>>>> 
>>>> So - I have a bucket that is versioned, and I wanted to read the versions 
>>>> of an object - and hopefully be able to read the contents of an older 

[Pharo-users] Re: Zn and AWS - retrieving object versions?

2021-06-17 Thread Sven Van Caekenberghe
Hi Tim,

It has been a very long time since I last looked at this code. It is a good 
thing that at least part of it still works.

Just by reading the code and reasoning about your problem, I would suggest you 
try the following change:

In ZnAWSS3RequestSignatureTool>>#canonicalStringFor: replace the last 

 request uri pathPrintString

by

 request uri pathQueryFragmentPrintString

I did not try this myself, so it might not solve your issue.

Please let me know if this works.

Sven

> On 17 Jun 2021, at 03:05, Tim Mackinnon  wrote:
> 
> Hi everyone - I’m wondering if someone knows the trick to listing object 
> versions in AWS S3?
> 
> I was previously using a non-Zn library (there are a few around - but they 
> are quite old and I’m not sure how much they are maintained) - however I 
> hadn’t realised that Zn actually supports S3 until I read it a bit carefully 
> and realised that I needed to load an extra AWS group.
> 
> So - I have a bucket that is versioned, and I wanted to read the versions of 
> an object - and hopefully be able to read the contents of an older version.
> 
> I can list the contents of a bucket, but when I try to read the versions of 
> an object - I get a forbidden error - which when I dig deeper gives a 
> stranger explanation?
> 
> I have confirmed that the AWS CLI is able to list versions in that bucket - 
> so I’m a bit confused what the issue might be?
> 
> (client := ZnAWSS3Client new)
>   accessKeyId: 'xxx';
>   secretAccessKey: 'y';
>   checkIntegrity: true.
> 
> client buckets. "Works"
> client keysIn: 'mtt-data'. "Works"
> 
> client at: 'mtt-data' -> 'sample.txt'. "Works"
> 
> client keysIn: 'mtt-data' query: (Dictionary with: 'versions'->nil). "Gives 
> an error?"
> 
> 
> HTTP/1.1 403 Forbidden
> 
> 
> SignatureDoesNotMatch: The request signature we calculated does not match 
> the signature you provided. Check your key and signing method. xxx GET
> 
> 
> Has anyone tried doing this - I’m sure there is something really simple that 
> I am missing?
> 
> Tim


[Pharo-users] Re: Reading http post data using Zinc

2021-06-16 Thread Sven Van Caekenberghe
Hi Davide,

> On 16 Jun 2021, at 23:17, Davide Varvello via Pharo-users 
>  wrote:
> 
> Hi Guys,
> I'm posting from an http form and I'm wondering how to read data from the
> post. It seems the request should give a ZnMultiPartFormDataEntity, but I
> can't find how to use it.
> 
> Can you help me please?
> Davide
> 
> 
> 
> 
> --
> Sent from: http://forum.world.st/Pharo-Smalltalk-Users-f1310670.html

It depends on how the form is posted (there are two approaches).

A working example of both can be found in ZnDefaultServerDelegate>>#formTest2: 
and #formTest3:

ZnServerTest>>#testFormTest2: and #testFormTest3: exercise this functionality

Basically, you just take the entity from the request and use it.
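
As a rough, untested sketch (the port and the field name 'comment' are made 
up), a handler block that covers both cases could look like this:

	| server |
	server := ZnServer startDefaultOn: 8080.
	server onRequestRespond: [ :request |
		| entity comment |
		entity := request entity.
		"multipart/form-data versus application/x-www-form-urlencoded"
		comment := (entity isKindOf: ZnMultiPartFormDataEntity)
			ifTrue: [ (entity partNamed: 'comment') contents ]
			ifFalse: [ entity at: 'comment' ].
		ZnResponse ok: (ZnEntity text: 'Received: ', comment) ].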

HTH,

Sven



[Pharo-users] Re: Rounding in Floats

2021-06-16 Thread Sven Van Caekenberghe



> On 16 Jun 2021, at 16:20, Konrad Hinsen  wrote:
> 
> On 16/06/2021 15:52, Sven Van Caekenberghe wrote:
>> I am also a bit intrigued by this. Like you said: several other programming 
>> languages (I tried a couple of Common Lisp and Scheme implementations) do 
>> the same as Pharo, but handheld calculators, normal and scientific, do not.
> 
> Handheld calculators use decimal floats, not binary floats. That doesn't 
> remove rounding issues, but it makes conversion to and from print 
> representations loss-free.
> 
> 
> Konrad
> 

mmm, this is interesting.

It would be possible (and maybe it has already been done) to implement/add such 
a decimal floating point number to Pharo. It would require reimplementing all 
operations from scratch (esp. sin/cos/tan log/exp and so on), it would be slow, 
but interesting.
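
As an aside, Pharo already ships ScaledDecimal, which is not a decimal float 
(it is a fraction that prints in decimal), but it does keep simple decimal 
arithmetic exact:

	0.1s2 + 0.2s2.	"=> 0.30s2"
	0.1 + 0.2.	"=> 0.30000000000000004"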




[Pharo-users] Re: Rounding in Floats

2021-06-16 Thread Sven Van Caekenberghe
I am also a bit intrigued by this. Like you said: several other programming 
languages (I tried a couple of Common Lisp and Scheme implementations) do the 
same as Pharo, but handheld calculators, normal and scientific, do not.

I think that going for 1/10 fractions/precision is not going to help: you got a 
division by 113 in your formula 
[https://en.wikipedia.org/wiki/Handicap_(golf)], this will give smaller 
fractions.

The problem is not the calculation (modern 64-bit floats as in Pharo are plenty 
accurate), it is how you handle results. You should just round it correctly and 
be done with it.

Note that

a roundTo: 0.01 (or even a roundTo: 1e-14) still gives 4.5

it is only one step further that you hit the limit and get the ugly but correct 
result.

I assume that most calculators always use a printing precision that is lower 
than their internal precision, hence they hide this reality.

When computing with money, you would be inclined to put everything in cents 
(because you cannot divide them further). But once you start computing 
percentage discounts or taxes, you again get problems. At each of those steps 
you must make sure that no cents are lost.
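
Concretely, a minimal sketch for the handicap case (assuming the intent is one 
decimal of precision before the final integer rounding):

	| a |
	a := 6.7 + (32.8 - 35).	"a Float just below 4.5"
	(a roundTo: 0.1) rounded.	"=> 5"
	a printShowingDecimalPlaces: 1.	"=> '4.5'"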

> On 16 Jun 2021, at 15:19, Esteban Maringolo  wrote:
> 
> On Wed, Jun 16, 2021 at 9:13 AM Konrad Hinsen
>  wrote:
>> 
>> On 15/06/2021 16:05, Esteban Maringolo wrote:
>>> And in this particular case, it might cause issues, since the "full
>>> precision" (whatever that means) must be retained when using such a
>>> number to derive another one.
>> 
>> To me, "full precision" means "no precision is ever lost in arithmetic
>> operations". If that's what you need, your choices are
>> 
>> 1) Integers
>> 
>> 2) Rational numbers (fractions)
> 
>> Pharo has 1) and 2).
> 
> Yeap, I'll switch to 2.
> 
> What I'm not totally convinced of is whether to store numbers as
> integers (knowing beforehand the decimal precision, and converting it
> to fractions when read back) or if to store them as scaled decimals
> (which internally hold a fraction).
> 
> Thanks!


[Pharo-users] Re: Rounding in Floats

2021-06-15 Thread Sven Van Caekenberghe



> On 15 Jun 2021, at 16:19, Esteban Maringolo  wrote:
> 
> Hi Sven,
> 
> I accidentally skipped this.
> 
> How is this different from the GRNumberPrinter?

It is similar, but different (it does several things to produce cleaner 
numbers). Basically, when I produced certain JSON with floats that were results 
of calculations, I got these very long, ugly numbers. Forcing them to a certain 
precision always added trailing zeros and made them floats. I also wanted 
integers to be printed when possible (1 instead of 1.0, or 0 instead of 0.0). I 
also have automatic switching to scientific notation outside a certain range.

> Where is the code available?

It is part of NeoJSON, https://github.com/svenvc/NeoJSON

Keep in mind that this is just an unfinished experiment.

> Thanks!
> 
> Esteban A. Maringolo
> 
> On Mon, Jun 14, 2021 at 6:30 PM Sven Van Caekenberghe  wrote:
>> 
>> BTW, I recently wrote my own float printer, as an experiment.
>> 
>> NeoJSONFloatPrinter lowPrecision print: a.
>> 
>> => 4.5
>> 
>> What I wanted was a human friendly, compact float printer, that tries to go 
>> for the shortest, simplest number. It prefers integers and goes to 
>> scientific notation when needed, while limiting the precision. Maybe it is 
>> interesting to look at the code.
>> 
>> But I am far from an expert.
>> 
>>> On 14 Jun 2021, at 23:23, Sven Van Caekenberghe  wrote:
>>> 
>>> 
>>> 
>>>> On 14 Jun 2021, at 22:44, Esteban Maringolo  wrote:
>>>> 
>>>> I'm coming back to this because I've been bitten by these floating
>>>> points things again.
>>>> 
>>>> If in Pharo [1] you do:
>>>> a := 6.7 + (32.8 - 35)
>>>> 
>>>> It will produce:
>>>> 4.497
>>>> 
>>>> Which, when rounded, will produce 4.
>>> 
>>> But,
>>> 
>>> a roundTo: 0.1 "=> 4.5"
>>> 
>>>> In other places [2] I do the same simple addition and subtraction it
>>>> produces 4.5, that when rounded will produce 5.
>>>> 
>>>> I know now that Pharo doesn't lie to me while other systems do, and
>>>> all that Richard pointed to before.
>>>> 
>>>> The issue here is that I'm following some calculation formula that was
>>>> defined in some of the "other" systems, and so when I follow such a
>>>> formula I get these edgy cases where my system produces a different
>>>> output.
>>>> 
>>>> In this case the formula is for golf handicap calculations, and it
>>>> caused my system to give 4 instead of 5 to a player, resulting in
>>>> giving the first place to a player other than the one deserved.
>>>> It was no big deal (it's not The Masters), but these cases appear from
>>>> time to time.
>>>> 
>>>> Is there any way to "configure" the floating point calculation to
>>>> behave as the "other systems"?
>>>> 
>>>> What is the best way to anticipate these situations, am I the only one
>>>> being bitten by these issues?
>>>> 
>>>> Thanks in advance for any hints about these problems.
>>>> 
>>>> 
>>>> Best regards,
>>>> 
>>>> [1] Dolphin Smalltalk, JS, Python, Ruby, Dart produces the same output as 
>>>> Pharo.
>>>> [2] VisualWorks, VAST, Excel, VB and all calculators I tried
>>>> 
>>>> 
>>>> 
>>>> Esteban A. Maringolo
>>>> 
>>>> On Tue, Sep 8, 2020 at 12:45 AM Esteban Maringolo  
>>>> wrote:
>>>>> 
>>>>> On Tue, Sep 8, 2020 at 12:16 AM Richard O'Keefe  wrote:
>>>>>> 
>>>>>> "7.1 roundTo: 0.1 should return 7.1"
>>>>>> You're still not getting it.
>>>>> 
>>>>> I was until Konrad explained it.
>>>>> 
>>>>>> Binary floating point CANNOT represent either of those numbers.
>>>>>> You seem to be assuming that Pharo is making some mistake.
>>>>>> It isn't.  All it is doing is refusing to lie to you.
>>>>> 
>>>>>> The systems that print 7.1 are LYING to you,
>>>>>> and Pharo is not.
>>>>> 
>>>>> I'm not assuming a mistake from Pharo, I had a wrong expectation what
>>>>> to get if I round to that precision.
>>>>> I don't know whether other systems lie or simply fulfill user
>>>>> expectations, if you send the #roundTo: to a float, I did expect to
>>>>> get a number with the same precision.
>>>>> That is my expectation as a user. As in the other thread I expected
>>>>> two scaled decimals that are printed equal to also be compared as
>>>>> equal  (which they don't).
>>>>> 
>>>>> Whether there is a good reason for those behaviors is beyond my
>>>>> current comprehension, but it certainly doesn't follow the "principle
>>>>> of least surprise".
>>>>> 
>>>>> In any case, the method proposed by Tomohiro solved my issues.
>>>>> 
>>>>> Regards,
>>>>> 
>>>>> Esteban A. Maringolo


[Pharo-users] Re: Rounding in Floats

2021-06-14 Thread Sven Van Caekenberghe
BTW, I recently wrote my own float printer, as an experiment.

NeoJSONFloatPrinter lowPrecision print: a. 

=> 4.5

What I wanted was a human friendly, compact float printer, that tries to go for 
the shortest, simplest number. It prefers integers and goes to scientific 
notation when needed, while limiting the precision. Maybe it is interesting to 
look at the code.

But I am far from an expert.

> On 14 Jun 2021, at 23:23, Sven Van Caekenberghe  wrote:
> 
> 
> 
>> On 14 Jun 2021, at 22:44, Esteban Maringolo  wrote:
>> 
>> I'm coming back to this because I've been bitten by these floating
>> points things again.
>> 
>> If in Pharo [1] you do:
>> a := 6.7 + (32.8 - 35)
>> 
>> It will produce:
>> 4.497
>> 
>> Which, when rounded, will produce 4.
> 
> But,
> 
> a roundTo: 0.1 "=> 4.5"
> 
>> In other places [2] I do the same simple addition and subtraction it
>> produces 4.5, that when rounded will produce 5.
>> 
>> I know now that Pharo doesn't lie to me while other systems do, and
>> all that Richard pointed to before.
>> 
>> The issue here is that I'm following some calculation formula that was
>> defined in some of the "other" systems, and so when I follow such a
>> formula I get these edgy cases where my system produces a different
>> output.
>> 
>> In this case the formula is for golf handicap calculations, and it
>> caused my system to give 4 instead of 5 to a player, resulting in
>> giving the first place to a player other than the one deserved.
>> It was no big deal (it's not The Masters), but these cases appear from
>> time to time.
>> 
>> Is there any way to "configure" the floating point calculation to
>> behave as the "other systems"?
>> 
>> What is the best way to anticipate these situations, am I the only one
>> being bitten by these issues?
>> 
>> Thanks in advance for any hints about these problems.
>> 
>> 
>> Best regards,
>> 
>> [1] Dolphin Smalltalk, JS, Python, Ruby, Dart produces the same output as 
>> Pharo.
>> [2] VisualWorks, VAST, Excel, VB and all calculators I tried
>> 
>> 
>> 
>> Esteban A. Maringolo
>> 
>> On Tue, Sep 8, 2020 at 12:45 AM Esteban Maringolo  
>> wrote:
>>> 
>>> On Tue, Sep 8, 2020 at 12:16 AM Richard O'Keefe  wrote:
>>>> 
>>>> "7.1 roundTo: 0.1 should return 7.1"
>>>> You're still not getting it.
>>> 
>>> I was until Konrad explained it.
>>> 
>>>> Binary floating point CANNOT represent either of those numbers.
>>>> You seem to be assuming that Pharo is making some mistake.
>>>> It isn't.  All it is doing is refusing to lie to you.
>>> 
>>>> The systems that print 7.1 are LYING to you,
>>>> and Pharo is not.
>>> 
>>> I'm not assuming a mistake from Pharo, I had a wrong expectation what
>>> to get if I round to that precision.
>>> I don't know whether other systems lie or simply fulfill user
>>> expectations, if you send the #roundTo: to a float, I did expect to
>>> get a number with the same precision.
>>> That is my expectation as a user. As in the other thread I expected
>>> two scaled decimals that are printed equal to also be compared as
>>> equal  (which they don't).
>>> 
>>> Whether there is a good reason for those behaviors is beyond my
>>> current comprehension, but it certainly doesn't follow the "principle
>>> of least surprise".
>>> 
>>> In any case, the method proposed by Tomohiro solved my issues.
>>> 
>>> Regards,
>>> 
>>> Esteban A. Maringolo


[Pharo-users] Re: Rounding in Floats

2021-06-14 Thread Sven Van Caekenberghe



> On 14 Jun 2021, at 22:44, Esteban Maringolo  wrote:
> 
> I'm coming back to this because I've been bitten by these floating
> points things again.
> 
> If in Pharo [1] you do:
> a := 6.7 + (32.8 - 35)
> 
> It will produce:
> 4.497
> 
> Which, when rounded, will produce 4.

But,

a roundTo: 0.1 "=> 4.5"

> In other places [2] I do the same simple addition and subtraction it
> produces 4.5, that when rounded will produce 5.
> 
> I know now that Pharo doesn't lie to me while other systems do, and
> all that Richard pointed to before.
> 
> The issue here is that I'm following some calculation formula that was
> defined in some of the "other" systems, and so when I follow such a
> formula I get these edgy cases where my system produces a different
> output.
> 
> In this case the formula is for golf handicap calculations, and it
> caused my system to give 4 instead of 5 to a player, resulting in
> giving the first place to a player other than the one deserved.
> It was no big deal (it's not The Masters), but these cases appear from
> time to time.
> 
> Is there any way to "configure" the floating point calculation to
> behave as the "other systems"?
> 
> What is the best way to anticipate these situations, am I the only one
> being bitten by these issues?
> 
> Thanks in advance for any hints about these problems.
> 
> 
> Best regards,
> 
> [1] Dolphin Smalltalk, JS, Python, Ruby, Dart produces the same output as 
> Pharo.
> [2] VisualWorks, VAST, Excel, VB and all calculators I tried
> 
> 
> 
> Esteban A. Maringolo
> 
> On Tue, Sep 8, 2020 at 12:45 AM Esteban Maringolo  
> wrote:
>> 
>> On Tue, Sep 8, 2020 at 12:16 AM Richard O'Keefe  wrote:
>>> 
>>> "7.1 roundTo: 0.1 should return 7.1"
>>> You're still not getting it.
>> 
>> I was until Konrad explained it.
>> 
>>> Binary floating point CANNOT represent either of those numbers.
>>> You seem to be assuming that Pharo is making some mistake.
>>> It isn't.  All it is doing is refusing to lie to you.
>> 
>>> The systems that print 7.1 are LYING to you,
>>> and Pharo is not.
>> 
>> I'm not assuming a mistake from Pharo, I had a wrong expectation what
>> to get if I round to that precision.
>> I don't know whether other systems lie or simply fulfill user
>> expectations, if you send the #roundTo: to a float, I did expect to
>> get a number with the same precision.
>> That is my expectation as a user. As in the other thread I expected
>> two scaled decimals that are printed equal to also be compared as
>> equal  (which they don't).
>> 
>> Whether there is a good reason for those behaviors is beyond my
>> current comprehension, but it certainly doesn't follow the "principle
>> of least surprise".
>> 
>> In any case, the method proposed by Tomohiro solved my issues.
>> 
>> Regards,
>> 
>> Esteban A. Maringolo


[Pharo-users] Re: New Pharo-based commercial software

2021-06-14 Thread Sven Van Caekenberghe
Congratulations. This is a really good concept, and it looks nice as well.

> On 12 Jun 2021, at 11:36, Noury Bouraqadi  wrote:
> 
> Hi everyone,
> 
> I'm glad to announce a new Pharo-based commercial product: PLC3000 
> (https://plc3000.com).
> 
> It's a SaaS solution for teaching PLC programming for factory automation. The 
> server side is based on Zinc and the client side uses PharoJS.
> 
> This wouldn't have been possible without the great work done by the community 
> in large, and more specifically, the Pharo consortium.
> 
> Thank you all,
> Noury


[Pharo-users] Re: STON reader problem

2021-05-27 Thread Sven Van Caekenberghe
David,

You have to load a newer version of STON. You are trying to read a 
ScaledDecimal, support for this was added around Oct 2018.

https://github.com/svenvc/ston/commit/9c83e3cc2f00cab83e57f2e10a139d6ecef3cb30

See https://github.com/svenvc/ston for load/dependency instructions.
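
The load expression is roughly the following (please double check the baseline 
name against that README):

	Metacello new
		baseline: 'Ston';
		repository: 'github://svenvc/ston/repository';
		load.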

Sven

> On 27 May 2021, at 19:11, David Pennington  wrote:
> 
> OK, so I have installed Pharo 9.0 with the M1 VM and everything is nice and 
> quick. I am porting my code across and I have a problem that involves STON 
> Reader.
> 
> I use this to convert my objects for storage in my text based database. I 
> can’t run my code because STON Reader fails when reading in one of the 
> objects.
> 
> The contents are as follows:
> 
> "{#entryName:'PAYPAL 
> *COSYFEET',#entryDate:Date['2021-01-25Z'],#transactionID:'202101250001',#entryAmount:-5041216832887849/140737488355328s8,#entryCategory:nil,#entryDescription:'35314369001
>  GB , 5959 22JAN21',#match:'N'}"
> 
> The error is this:
> 
> 
> 
> Character 122 is at the conjunction of the 4 and 9 in the entryAmount: 
> 5041216832887849/140737488355328s8,
> 
> Any help on this would be great. I have a few thousand entries to fix if 
> there is a problem with the file (although my code works fine with this file 
> under 8.0)
> 
> David Pennington


[Pharo-users] Re: New VM, how do I get it

2021-05-24 Thread Sven Van Caekenberghe
Look carefully, they are both FileReferences, the UI is different.

Also, explore the difference between the working directory and the image 
directory:

FileLocator workingDirectory absolutePath.

FileLocator imageDirectory absolutePath.

The working directory depends on how the image was launched. Check out 
FileLocator for other well-known locations.
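
For example, a minimal sketch (the directory and file names are made up) that 
anchors database files to the image rather than to the working directory:

	| dbDirectory |
	dbDirectory := FileLocator imageDirectory / 'database'.
	dbDirectory ensureCreateDirectory.
	(dbDirectory / 'accounts.ston') exists.

This resolves to the same place no matter how, or from where, the VM was 
launched.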

> On 24 May 2021, at 18:39, David Pennington  wrote:
> 
> Sorry yes, I gave you the screen shots without the code.
> 
> In V 9.0 if I execute the following
> 
> | working stream |
> working := FileSystem disk workingDirectory
> 
> I get an Inspector on /Users/davidpennington. (Not a FileReference)
> 
> In V 8.0 I get an Inspector on a FileReference 
> (/Users/davidpennington/Documents/Pharo/images/Pharo 8.0 - 64bit (stable)).
> 
> As my database relies on disk paths, this is going to create a difference 
> between the two versions which surely should not be so.
> 
> 
>> On 24 May 2021, at 16:38, Stéphane Ducasse  wrote:
>> 
>> 
>> 
>>> On 24 May 2021, at 15:06, David Pennington  wrote:
>>> 
>>> OK, I have V9.0 working. The performance difference is amazing on an 
>>> Inspect. V8.0 - 3 seconds to respond - V9 - almost instantaneous.
>> 
>> good to know. 
>> Now we prefer to work with the slow version (old OS) because they kick us to 
>> pay attention to speed. 
>> Pharo 90 got many speed up improvements too. 
>> 
>>> I have got caught up in a simple file operation - I am trying to rename a 
>>> file and I have come up with the following difference.
>> 
>> I do not get what is the error. 
>> Do you have a script to reproduce it. 
>> We can only fix what we can reproduce. 
>> 
>> S. 
>> 
>>> 
>>> Version 8.0
>>> 
>>> 
>>> 
>>> Version 9.0
>>> 
>>> 
>>> 
>>> So V 9 is different from V 8.0
>>> 
>>> Is there a fix for this?
>>> 
>>> David
>>> 
 On 24 May 2021, at 12:18, Guillermo Polito  
 wrote:
 
 Zeroconf should download the M1 vm, but the Pharo Launcher does not 
 support that yet.
 
 El lun., 24 may. 2021 13:16, Stéphane Ducasse  
 escribió:
 is your machine a M1?
 If this is the case check the email that were sent on how to get the VM.
 
 S. 
 
> On 23 May 2021, at 11:54, da...@totallyobjects.com wrote:
> 
> Thank you. I think that I have it but how do I know if I have the M1 vm? 
> How do I integrate it with Pharo launcher? 
> 
> Sent from my Huawei tablet
> 
> 
>  Original Message 
> Subject: [Pharo-users] Re: New VM, how do I get it
> From: Stéphane Ducasse 
> To: Any question about pharo is welcome 
> CC: 
> 
> 
> David 
> 
> There is no magic. 
> You should also consider that pharo-ui is a shell script and that you can 
> also read it and learn. 
> The vm is an executable and it needs an image and it cannot guess where 
> you put it. 
> 
> S. 
> 
>> On 22 May 2021, at 18:38, David Pennington  
>> wrote:
>> 
>> 
> 
> 
> Stéphane Ducasse
> http://stephane.ducasse.free.fr / http://www.pharo.org 
> 03 59 35 87 52
> Assistant: Aurore Dalle 
> FAX 03 59 57 78 50
> TEL 03 59 35 86 16
> S. Ducasse - Inria
> 40, avenue Halley, 
> Parc Scientifique de la Haute Borne, Bât.A, Park Plaza
> Villeneuve d'Ascq 59650
> France
> 
 
 
 Stéphane Ducasse
 http://stephane.ducasse.free.fr / http://www.pharo.org 
 03 59 35 87 52
 Assistant: Aurore Dalle 
 FAX 03 59 57 78 50
 TEL 03 59 35 86 16
 S. Ducasse - Inria
 40, avenue Halley, 
 Parc Scientifique de la Haute Borne, Bât.A, Park Plaza
 Villeneuve d'Ascq 59650
 France
 
>>> 
>> 
>> 
>> Stéphane Ducasse
>> http://stephane.ducasse.free.fr / http://www.pharo.org 
>> 03 59 35 87 52
>> Assistant: Aurore Dalle 
>> FAX 03 59 57 78 50
>> TEL 03 59 35 86 16
>> S. Ducasse - Inria
>> 40, avenue Halley, 
>> Parc Scientifique de la Haute Borne, Bât.A, Park Plaza
>> Villeneuve d'Ascq 59650
>> France
> 


[Pharo-users] Re: New VM, how do I get it

2021-05-21 Thread Sven Van Caekenberghe
You start an image by typing the following in the Terminal

./pharo-ui Pharo.image

> On 21 May 2021, at 16:31, David Pennington  wrote:
> 
> This is a bit like pulling teeth.
> 
> I have done what you said. I now have the following in Terminal
> 
> davidpennington@Davids-Air ~ % wget -O - https://get.pharo.org/vmLatest90 | 
> bash 
> --2021-05-21 15:24:47--  https://get.pharo.org/vmLatest90
> Resolving get.pharo.org (get.pharo.org)... 2001:41d0:301::23, 164.132.235.17
> Connecting to get.pharo.org (get.pharo.org)|2001:41d0:301::23|:443... 
> connected.
> HTTP request sent, awaiting response... 200 OK
> Length: 5177 (5.1K)
> Saving to: ‘STDOUT’
> 
> -   100%[===>]   5.06K  --.-KB/sin 0s 
>  
> 
> 2021-05-21 15:24:47 (823 MB/s) - written to stdout [5177/5177]
> 
> Downloading the latest pharoVM:
>   http://files.pharo.org/get-files/90/pharo-vm-Darwin-arm64-latest.zip
> pharo-vm/Pharo.app/Contents/MacOS/Pharo
> Creating starter scripts pharo and pharo-ui
> 
> Where do I go next?
> 
> 
>> On 21 May 2021, at 12:31, Stéphane Ducasse  wrote:
>> 
>> you execute on a unix shell
>> 
>> wget -O - https://get.pharo.org/vmLatest90 | bash
>> 
>> S
>> 
>>> On 21 May 2021, at 13:16, David Pennington  wrote:
>>> 
>>> I am sorry but, being a newbie to Pharo, a lot of what you say goes over my 
>>> head.
>>> 
>>> How do I use Zero Conf?
>>> 
>>> I have loaded up the latest 9.0 from Pharo launcher. How do I get the M1 VM 
>>> for that?
>>> 
>>> Sorry for being stupid.
>>> 
>>> David
>>> P.S. 31 years a Smalltalker so its just the underlying bits that pass over 
>>> me - smile.
>>> 
 On 19 May 2021, at 08:38, teso...@gmail.com wrote:
 
 Hi David, 

for M1 we have Pharo 9 compatible VMs, you can download them using Zero 
 Conf or directly. 
 
 For the latest: 
   - wget -O - https://get.pharo.org/vmLatest90 | bash
   - http://files.pharo.org/get-files/90/pharo-vm-Darwin-arm64-latest.zip
 
 For the stable:
   - wget -O - https://get.pharo.org/vm90 | bash
   - http://files.pharo.org/get-files/90/pharo-vm-Darwin-arm64-stable.zip
 
 If you are scripting the download I recommend using ZeroConf. 
 
 For Pharo 8, we don't have a M1 native version, because Pharo 8 requires 
 changes in the image to support the newer VMs. We have plans to backport 
 the changes in the future, now we are putting all efforts in the release 
 of Pharo 9. However, if the community consider it, we can switch 
 priorities but it is not magical; we will need to leave something aside. 
 Also, future versions of the Pharo Launcher will have support for 
 detecting the architecture.
 
 In the meantime, Pharo 8 / 9 can be used without a problem with Rosetta, 
 although the performance is not ideal.
 
 Tell me if you have any problem.
 Cheers,
 Pablo
 
 
 On Tue, May 18, 2021 at 3:54 PM David Pennington 
  wrote:
 Hi there. I am currently using v8.0 on a new M1 MacBookAir. When I save 
 the image I keep getting a message telling me that my VM is too old and to 
 download a new one. I have looked in my Pharo Launcher but there is no new 
 one there. What do I do please?
 
 David
 
 
 -- 
 Pablo Tesone.
 teso...@gmail.com
>>> 
>> 
>> 
>> Stéphane Ducasse
>> http://stephane.ducasse.free.fr / http://www.pharo.org 
>> 03 59 35 87 52
>> Assistant: Aurore Dalle 
>> FAX 03 59 57 78 50
>> TEL 03 59 35 86 16
>> S. Ducasse - Inria
>> 40, avenue Halley, 
>> Parc Scientifique de la Haute Borne, Bât.A, Park Plaza
>> Villeneuve d'Ascq 59650
>> France
>> 
> 


[Pharo-users] Re: P3 connection error

2021-05-19 Thread Sven Van Caekenberghe
Bernard,

> On 18 May 2021, at 22:23, Sven Van Caekenberghe  wrote:
> 
> Tomorrow I will try to test plain and tls connections on my machine.

It took some trial & error with my local PostgreSQL server (on my mac I use 
https://postgresapp.com although I normally use the regular Ubuntu versions on 
servers).

From a high level (P3, Pharo 7, macOS), I can do the following for a plain 
connection:

P3LogEvent logToTranscript.

(P3Client new url: 'psql://sven@localhost') in: [ :client |
  [ client isWorking ] ensure: [ client close ] ].

This returns true and shows the following in the Transcript:

2021-05-19 13:06:44 001 [P3] 68689 #Connect psql://sven@localhost:5432 Trust
2021-05-19 13:06:44 002 [P3] 68689 #Query SELECT 721 AS N
2021-05-19 13:06:44 003 [P3] 68689 #Result SELECT 1, 1 record, 1 colum, 0 ms
2021-05-19 13:06:44 004 [P3] 68689 #Close

For a secure connection, either of the following work:

(P3Client new url: 'psql://sven@localhost?sslmode=require') in: [ :client |
  [ client isWorking ] ensure: [ client close ] ].

(P3Client new url: 'psql://sven@localhost') in: [ :client |
  [ client connectSSL; isWorking ] ensure: [ client close ] ].

These both return true and give the same output:

2021-05-19 13:07:39 005 [P3] 68701 #Connect 
psql://sven@localhost:5432?sslmode=require Trust
2021-05-19 13:07:39 006 [P3] 68701 #Query SELECT 316 AS N
2021-05-19 13:07:39 007 [P3] 68701 #Result SELECT 1, 1 record, 1 colum, 0 ms
2021-05-19 13:07:39 008 [P3] 68701 #Close

This is a local login without a password.


Now, since you both got plain and secure connections working, there is no 
fundamental problem for either type, or the rest of your setup/situation. You 
seem to have hit some edge, resulting from a small configuration difference in 
one of the servers you are trying to log in to.

You say you get a "SSL Exception: connect failed [code:-5]" - we have to find 
out what the -5 means. As you can see in 
ZdcPluginSSLSession>>#primitiveSSL:connect:startingAt:count:into: these errors 
are not specified or explained.

We will have to look into the SSL plugin C code for that, and maybe/probably 
consult the Windows documentation for the system calls used.

Apart from certificate issues, it could also be a cipher issue (as in something 
specific is required but not available).


Sven


PS: to configure the server (version 13.2), I added the following to 
postgresql.conf

ssl = on
ssl_cert_file = '/Users/sven/.ssh/ssl-cert-snakeoil.pem'
ssl_key_file = '/Users/sven/.ssh/ssl-cert-snakeoil.key'

the ssl-cert-snakeoil I took from a standard Ubuntu postgreSQL installation.

In pg_hba.conf the first field allows you to control access over different 
connection types:

- host = both plain & secure
- hostssl = only secure
- hostnossl = only plain


[Pharo-users] Re: P3 connection error

2021-05-18 Thread Sven Van Caekenberghe
Hi,

Since you can connect to 3 of the 4 machines, both over plain and tls, it 
basically works.

You will have to find out what is different in the host configurations of the 
servers.

It could be a certificate issue like you suggest, I don't know.

I am guessing you are on Windows ?

Tomorrow I will try to test plain and tls connections on my machine.

Sven

> On 18 May 2021, at 21:22, Bernhard Pieber  wrote:
> 
> Hi Sven,
> 
> The explicit form does not work either. All the fields contain safe 
> characters.
> 
> However, I just found out that I can connect to three other hosts. All four 
> hosts should have the same settings (databases and users), and just one of 
> them does not work. So there must be a difference in the settings after all.
> 
> I noticed that the error message ends with "SSL off". So maybe the problem is 
> related to SSL after all. Just calling #setSSL does not help, though. I get 
> SSL Exception: connect failed [code:-5]. Maybe I am missing some certificates?
> 
> When I connect with psql, three of the four hosts show this message:
> psql (12.5, Server 12.6)
> SSL connection (protocol: TLSv1.2, cipher: 
> ECDHE-ECDSA-AES128-GCM-SHA256, bits: 128, compression: off)
> 
> The fourth does not mention SSL.
> 
> However, only one of the three hosts that show SSL does not work. Really 
> strange.
> 
> (All of the four hosts work with psql, SQuirreL and DBeaver.)
> 
> Thanks for your support!
> 
> Bernhard
> 
>> On 18.05.2021 at 20:16, Sven Van Caekenberghe wrote:
>> 
>> 
>> (CC-ing the list)
>> 
>> Hmm, that should just work.
>> 
>> Are there any special characters in the username, password or host 
>> (non-ascii, URL unsafe characters) ?
>> 
>> You could try the explicit init form
>> 
>> P3Client new host: 'host'; user: 'user'; password: 'password'; database: 
>> 'database'; yourself.
>> 
>>> On 18 May 2021, at 19:47, Bernhard Pieber  wrote:
>>> 
>>> Hi Sven,
>>> 
>>> Thank you for the fast response.
>>> 
>>> Yes, I can connect using the psql client using this command line:
>>> C:\PostgreSQL\12\bin\psql.exe -h host -U user -d database -p 5432
>>> 
>>> I have to enter the password in the command prompt.
>>> 
>>> The driver URL in SQuirreL is:
>>> jdbc:postgresql://host:5432/database
>>> 
>>> User name and password are separate text fields.
>>> 
>>> pgAdmin also works, by the way.
>>> 
>>> In P3 I use the long form:
>>> P3Client new url: 'psql://user:password@host:5432/database'.
>>> 
>>> Cheers,
>>> Bernhard
>>> 
>>>> On 18.05.2021 at 19:16, Sven Van Caekenberghe wrote:
>>>> 
>>>> 
>>>> Hi Bernard,
>>>> 
>>>>> On 18 May 2021, at 18:40, Bernhard Pieber  wrote:
>>>>> 
>>>>> Hi,
>>>>> 
>>>>> I have a PostgreSQL database on a remote host which I want to access 
>>>>> using P3. I do have a username and a password and can connect via 
>>>>> SQuirreL and DBeaver. Both use a JDBC driver. However, when I try to 
>>>>> access it via Pharo and P3 I get the infamous "no pg_hba.conf entry for 
>>>>> host " error. The thing is that I cannot change the 
>>>>> pg_hba.conf file as the server does not belong to me. I wonder why the 
>>>>> JDBC driver does not run into this problem when connecting from my IP 
>>>>> address? It must do something differently.
>>>>> 
>>>>> As I have just started playing with P3 (and PostgreSQL to be honest) I 
>>>>> may be missing something fundamental. Using #setSSL did not help, by the 
>>>>> way. Any other ideas I could try?
>>>>> 
>>>>> Cheers,
>>>>> Bernhard
>>>> 
>>>> This is an interesting problem: to do a remote, over the network, 
>>>> connection this has to be enabled in PostegreSQL in the pg_hba.conf. But 
>>>> since other clients can connect, it would help if you could give me more 
>>>> details regarding their connection settings. I know this could include 
>>>> confidential information, so be careful what you post.
>>>> 
>>>> You could also try to connect using the command line psql client, from 
>>>> your machine.
>>>> 
>>>> Sven
>>> 
>> 
> 
> 


[Pharo-users] Re: P3 connection error

2021-05-18 Thread Sven Van Caekenberghe
(CC-ing the list)

Hmm, that should just work.

Are there any special characters in the username, password or host (non-ascii, 
URL unsafe characters) ?

You could try the explicit init form

 P3Client new host: 'host'; user: 'user'; password: 'password'; database: 
'database'; yourself.

> On 18 May 2021, at 19:47, Bernhard Pieber  wrote:
> 
> Hi Sven,
> 
> Thank you for the fast response.
> 
> Yes, I can connect using the psql client using this command line:
> C:\PostgreSQL\12\bin\psql.exe -h host -U user -d database -p 5432
> 
> I have to enter the password in the command prompt.
> 
> The driver URL in SQuirreL is:
> jdbc:postgresql://host:5432/database
> 
> User name and password are separate text fields.
> 
> pgAdmin also works, by the way.
> 
> In P3 I use the long form:
> P3Client new url: 'psql://user:password@host:5432/database'.
> 
> Cheers,
> Bernhard
> 
>> On 18.05.2021 at 19:16, Sven Van Caekenberghe wrote:
>> 
>> 
>> Hi Bernard,
>> 
>>> On 18 May 2021, at 18:40, Bernhard Pieber  wrote:
>>> 
>>> Hi,
>>> 
>>> I have a PostgreSQL database on a remote host which I want to access using 
>>> P3. I do have a username and a password and can connect via SQuirreL and 
>>> DBeaver. Both use a JDBC driver. However, when I try to access it via Pharo 
>>> and P3 I get the infamous "no pg_hba.conf entry for host " 
>>> error. The thing is that I cannot change the pg_hba.conf file as the server 
>>> does not belong to me. I wonder why the JDBC driver does not run into this 
>>> problem when connecting from my IP address? It must do something 
>>> differently.
>>> 
>>> As I have just started playing with P3 (and PostgreSQL to be honest) I may 
>>> be missing something fundamental. Using #setSSL did not help, by the way. 
>>> Any other ideas I could try?
>>> 
>>> Cheers,
>>> Bernhard
>> 
>> This is an interesting problem: to do a remote, over the network, connection 
>> this has to be enabled in PostegreSQL in the pg_hba.conf. But since other 
>> clients can connect, it would help if you could give me more details 
>> regarding their connection settings. I know this could include confidential 
>> information, so be careful what you post.
>> 
>> You could also try to connect using the command line psql client, from your 
>> machine.
>> 
>> Sven
> 


[Pharo-users] Re: P3 connection error

2021-05-18 Thread Sven Van Caekenberghe
Hi Bernard,

> On 18 May 2021, at 18:40, Bernhard Pieber  wrote:
> 
> Hi,
> 
> I have a PostgreSQL database on a remote host which I want to access using 
> P3. I do have a username and a password and can connect via SQuirreL and 
> DBeaver. Both use a JDBC driver. However, when I try to access it via Pharo 
> and P3 I get the infamous "no pg_hba.conf entry for host " 
> error. The thing is that I cannot change the pg_hba.conf file as the server 
> does not belong to me. I wonder why the JDBC driver does not run into this 
> problem when connecting from my IP address? It must do something differently.
> 
> As I have just started playing with P3 (and PostgreSQL to be honest) I may be 
> missing something fundamental. Using #setSSL did not help, by the way. Any 
> other ideas I could try?
> 
> Cheers,
> Bernhard

This is an interesting problem: to do a remote, over the network, connection 
this has to be enabled in PostgreSQL in the pg_hba.conf. But since other 
clients can connect, it would help if you could give me more details regarding 
their connection settings. I know this could include confidential information, 
so be careful what you post.

You could also try to connect using the command line psql client, from your 
machine. 

Sven



[Pharo-users] Re: NeoCSVReader and wrong number of fieldAccessors

2021-05-13 Thread Sven Van Caekenberghe
There is now the following commit:

https://github.com/svenvc/NeoCSV/commit/0acc2270b382f52533c478f2f1585341e390d4b5

which should address a couple of issues.
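
For the check-the-header-up-front idea discussed below, a minimal sketch with 
the current API (the sample data is made up):

	| reader headers |
	reader := NeoCSVReader on: (String cr join: #('name,points,rank' 'Alice,10,1' 'Bob,7,2')) readStream.
	headers := reader readHeader.
	headers size = 3
		ifFalse: [ Error signal: 'Expected 3 columns, got ' , headers size asString ].
	reader upToEnd.	"an Array of records, each an Array of field strings"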

> On 22 Jan 2021, at 12:15, jtuc...@objektfabrik.de wrote:
> 
> Tim,
> 
> 
> 
> 
> On 22.01.21 at 10:22, Tim Mackinnon wrote:
>> I’m not doing any CSV processing at the moment, but have in the past - so 
>> was interested in this thread.
>> 
>> @Kasper, can’t you just use #readHeader upfront, and do the assertion 
>> yourself, and then proceed to loop through your records? It would seem that 
>> the Neo caters for what you are suggesting - and if you want to add a helper 
>> method extension you have the building blocks to already do this?
>> 
> This is a good idea. One caveat, however: #readHeader in its current 
> implementation does 2 things: 
> 
>   • read the line respecting each field (thereby, respect line breaks 
> within quoted fields - perfect for this purpose)
>   • update the number of Columns for further reading (assuming 
> #readHeader's purpose is to interpret the header line) 
> This second thing is in our way, because it may influence the way the 
> following lines will be interpreted. That is ecactly why I created an issue 
> on github (https://github.com/svenvc/NeoCSV/issues/20). 
> A method that reads a line without any side effects (other than pushing the 
> position pointer forward to the next line) would come in handy for such 
> scenarios. But you can always argue that this has nothing to do with CSV, 
> because in CSV all lines have the same number of columns, each of them 
> containing the same kind of information, and there may be exactly one header 
> line. Anything else is just some file that may contain CSV-y stuff in it. So 
> I am really not sure if NeoCSV should build lots of stuff for such files. I'd 
> love to have this, but I'd understand if Sven refused to integrate it ;-)
> 
> 
>> The only flaw I can think of, is if there is no header present then I can’t 
>> recall what Neo does - ideally throws an exception so you can decide what to 
>> do - potentially continue if the number of columns is what you expect and 
>> the data matches the columns - or you fail with an error that a header is 
>> required. But I think you would always need to do some basic initial checks 
>> when processing CSV due to the nature of the format?
> Right. You'd always have to write some specific logic for this particular 
> file format and make NeoCSV ignore the right stuff...
> 
> 
> 
> Joachim
> 
> 
> 
> 
> 
>> 
>> Tim
>> 
>> On Fri, 22 Jan 2021, at 6:42 AM, Kasper Osterbye wrote:
>>> As it happened, I ran into the exact same scenario as Joachim just the 
>>> other day,
>>> that is, the external provider of my csv had added some new columns. In my 
>>> case it
>>> manifested itself in an error that an integer field was not an integer 
>>> (because new
>>> columns were added in the middle).
>>> 
>>> Reading through this whole thread leaves me with the feeling that no matter 
>>> what Sven
>>> adds, there is still a risk for error. Nevertheless, my suggestion would be 
>>> to add a 
>>> functionality to #skipHeaders, or make a sister method: 
>>> #assertAndSkipHeaders: numberOfColumns onFailDo: aBlock given the actual 
>>> number of headers
>>> That would give me a way to handle the error up front. 
>>> 
>>> This will only be interesting if your data has headers of course.
>>> 
>>> Thanks for NeoCSV which I use all the time!
>>> 
>>> Best,
>>> 
>>> Kasper 
>> 
> 
> 
> -- 
> ---
> Objektfabrik Joachim Tuchel  
> mailto:jtuc...@objektfabrik.de
> 
> Fliederweg 1 
> http://www.objektfabrik.de
> 
> D-71640 Ludwigsburg  
> http://joachimtuchel.wordpress.com
> 
> Telefon: +49 7141 56 10 86 0 Fax: +49 7141 56 10 86 1
> 
> 
> 
> 


[Pharo-users] Re: Announcements - and whether its bad to check if a subscription would be handled?

2021-04-27 Thread Sven Van Caekenberghe
The whole idea is to decouple producers and consumers, like in messaging 
systems.

You should not care whether there are others listening, just like the listeners 
should not care whether there is someone posting data.

Asking for subscribers is introducing a coupling.

The announcement mechanism will/should deal with this in an efficient way.
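
In code, the decoupling looks roughly like this (a minimal sketch; 
DeleteWordAnnouncement is taken from Tim's example below and is assumed to be 
an Announcement subclass):

announcer := Announcer new.
"the consumer subscribes, without knowing who will announce"
announcer when: DeleteWordAnnouncement do: [ :ann | Transcript show: 'delete requested'; cr ].
"the producer announces, without knowing who (if anyone) is listening"
announcer announce: DeleteWordAnnouncement new.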

> On 27 Apr 2021, at 16:03, Tim Mackinnon  wrote:
> 
> From my rather long ramble below - I am still curious if it's distasteful to 
> have a method on Announcer 
> 
> hasSubscriptionsHandling: anAnnouncement 
> "Answer true if I have any subscribers to anAnnouncement"
> 
> ^(registry subscriptionsHandling: anAnnouncement ) notEmpty 
> 
> Tim
> 
> On Thu, 22 Apr 2021, at 11:34 PM, Tim Mackinnon wrote:
>> Hi everyone - I’ve always thought the article on announcements many 
>> years ago was very cool - and we don’t seem to use them as much as we 
>> could (but equally they aren’t a panacea to be overused everywhere 
>> either - and they do get used in Pharo to some extent).
>> 
>> Anyway, I’ve been playing around with CodeParadise (CP is a very cool 
>> project, and Erik is very supportive and thinking about how to write 
>> web apps a different way… I’m fascinated),
>> 
>> And - CP uses announcements as mechanism to send events from the View 
>> Client (in a web browser) to a Presenter on the server (which makes 
>> total sense).
>> 
>> In taking things for a spin, I hit an interesting problem on how in a 
>> web component world, you should display a spelling test of words - 
>> 
>> e.g. SpellingTest — has many —> SpellingWord(s).
>> 
>> 
>> Initially I bunged it all in a single presenter with its associated 
>> view, and it was a bit messy, and Erik guided me down a route (that CP 
>> nicely supports) - that my SpellingTest view should have the name/date 
>> of the test as well as an add word input field, but the list of current 
>> Words (which I had bunged into a table) - were actually more elegant as 
>> sub-components - hence a WordView - which renders a single word in a 
>> DIV, and for the edit screen I was creating, a Delete button next to 
>> the word (so you could delete it). So a 1-to-many relationship, 
>> essentially.
>> 
>> This is where the announcements kick in (and lead to my ultimate question). 
>> 
>> When you click the Delete button, if I use a sub component - my view 
>> will generate a DeleteWordAnnouncement - which gets fed to my 
>> SpellingWordPresenter - however words in this sense don’t naturally 
>> know their parent (the SpellingTest) - and its the parent test that has 
>> a #deleteWord: method.
>> 
>> I’ve been taking with Erik, on different ways to elegantly handle this.
>> 
>> a) you could change the model so words know their parent (in my case, 
>> I’m using a 3rd party model for Flashcards, and they just don’t know 
>> this - and adapting them would be a nuisance
>> b) my TestPresenter could listen to announcements on the WordPresenter 
>> - and I could get some communications between presenters (although 
>> normally the Presenters just get events from Views, and pure domain 
>> models - so it feels a bit abnormal to consider another Presenter as a 
>> sort of model - but I could live with this
>> c) given the composable nature of views/presenters (and CP is based on a 
>> WebComponent model) - you could bubble up Announcements, so that if an 
>> event isn’t handled by a view’s immediate presenter, you could re-route 
>> it to the parent of the View (the component owner) and see if it’s 
>> presenter could do something.
>> 
>> 
>> I think (c) has a certain expectation to it - in fact when I converted 
>> my initial one-presenter attempt into components, I still had listener 
>> code in my TestPresenter that was expecting to get a deleteWord 
>> announcement and I was initially surprised that I wasn’t getting it (as 
>> it was now just going to the Word component I had refactored out). 
>> 
>> So I wonder if others here would expect things to work this way too 
>> (and are there other examples in the wild that lead you here - or scare 
>> you away from this?).
>> 
>> Back to  my Announcement question - if C is a good idea - why doesn’t 
>> the Announcer class let you check if if will handle a particular 
>> announcement? The API has  #hasSubscriber: and #hasSubscriberClass: , 
>> but its missing:
>> 
>> hasSubscriptionsHandling: anAnnouncement 
>>  "Answer true if I have any subcribers to anAnnouncement"
>> 
>>  ^(registry subscriptionsHandling: anAnnouncement ) notEmpty 
>> 
>> 
>> And I am wondering if this is because it's a bad thing to expect to be 
>> able to check? In my case above, I would want to do this to know if CP 
>> should instead try announcing a message to a parent presenter because 
>> the current presenter won’t handle it.  In my example above, my 
>> WordComponentView will broadcast that the delete button was clicked, 
>> but its actually a parent view which would reasonably want to listen to 
>> this kind of event and 

[Pharo-users] Re: Segfault with ZnClient and https

2021-04-21 Thread Sven Van Caekenberghe



> On 21 Apr 2021, at 09:46, Sven Van Caekenberghe  wrote:
> 
> $ ./pharo Pharo.image eval "(ZnClient new url: 
> 'https://pharo.org/web/files/pharo.png'; get; response) contentType"
> image/png

Actually, this is nicer, simpler and clearer:

$ ./pharo Pharo.image eval "ZnClient new get: 
'https://pharo.org/web/files/pharo.png'; response"
a ZnResponse(200 OK image/png 34696B)


[Pharo-users] Re: Segfault with ZnClient and https

2021-04-21 Thread Sven Van Caekenberghe
Hi Petter,

It should work, but of course there might be something wrong. 
To get a baseline for your issue, here is what I did and what you could try to 
repeat.

$ cat /etc/lsb-release 
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.2 LTS"

$ uname -a
Linux t3-pharo-test 4.15.0-140-generic #144-Ubuntu SMP Fri Mar 19 14:12:35 UTC 
2021 x86_64 x86_64 x86_64 GNU/Linux

$ mkdir pharo

$ cd pharo

$ curl get.pharo.org/64/90+vm | bash
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100  3054  100  30540 0   129k  0 --:--:-- --:--:-- --:--:--  129k
Downloading the latest 90 Image:
http://files.pharo.org/get-files/90/pharo64.zip
Pharo.image
Downloading the latest pharoVM:
http://files.pharo.org/get-files/90/pharo64-linux-stable.zip
pharo-vm/pharo
Creating starter scripts pharo and pharo-ui

$ ./pharo --version
5.0-202002121043  Wed Feb 12 11:06:45 UTC 2020 gcc 5.4.0 [Production Spur 
64-bit VM]
CoInterpreter * VMMaker-CompatibleUserName.1580983506 uuid: 
7aff73cb-5a2e-5002-a356-37de4e762a49 Feb 12 2020
StackToRegisterMappingCogit * VMMaker-CompatibleUserName.1580983506 uuid: 
7aff73cb-5a2e-5002-a356-37de4e762a49 Feb 12 2020
VM: 202002121043 https://github.com/pharo-project/opensmalltalk-vm.git
Date: Wed Feb 12 11:43:20 2020 CommitHash: 52202d8
Plugins: 202002121043 https://github.com/pharo-project/opensmalltalk-vm.git
Linux travis-job-93347cf3-9798-4672-8c23-706b9ceb049b 4.15.0-1028-gcp 
#29~16.04.1-Ubuntu SMP Tue Feb 12 16:31:10 UTC 2019 x86_64 x86_64 x86_64 
GNU/Linux
plugin path: /home/t3/tmp/pharo/pharo-vm/lib/pharo/5.0-202002121043 [default: 
/home/t3/tmp/pharo/pharo-vm/lib/pharo/5.0-202002121043/]

$ ./pharo Pharo.image printVersion
[version] 'Pharo9.0.0' 
'Pharo-9.0.0+build.1343.sha.7b1f2ac8e896d2b755451d8e00686a290240fde5 (64 Bit)'

$ ./pharo Pharo.image eval "(ZnClient new url: 
'https://pharo.org/web/files/pharo.png'; get; response) contentType"
image/png

$ ./pharo Pharo.image eval ZdcPluginSSLSession new
a ZdcPluginSSLSession

You could use ldd to figure out whether any libraries are missing.

Sven

> On 20 Apr 2021, at 23:22, Petter  wrote:
> 
> Hi, I have trouble with latest Fedora 33, plain installation and Pharo 8.0
> stable, 64 bit.
> 
> When I try a 
> 
> ZnClient new url: 'https://something..'; get.
> 
> The image segfaults.
> 
> Last lines from log are:
> 
> primitiveSSL:setStringProperty:toValue:
> atAllPut:
> atAllPut:
> primitiveSSL:connect:startingAt:count:into:
> 
> stack page bytes 8192 available headroom 5576 minimum unused headroom 5952
> 
> ---
> 
> It seems libssl and libcrypto are bundled together with Pharo, looks like
> version 1.0.0.
> 
> I then tried to compile openssl 1.0.0-stable myself, but
> when replacing the Pharo libs with my own I get an error message saying that
> I am using a too old TLS version.
> 
> I am a little confused on what is happening here. Do anyone have a clue?
> 
> (git checkouts over ssh work fine, strangely enough)
> 
> Petter
> 
> 
> 
> --
> Sent from: http://forum.world.st/Pharo-Smalltalk-Users-f1310670.html


[Pharo-users] Re: Seaside disappearing under Windows

2021-04-20 Thread Sven Van Caekenberghe
David,

> On 20 Apr 2021, at 17:01, David Pennington  wrote:
> 
> I have finally given up on trying to sort this out. Under both Windows Server 
> 2012 and Windows 10 my Pharo image just exits leaving nothing, not even a 
> crash file. Hence, I have fixed my MacBookAir so that it stays alive with the 
> lid down and I am now using that as my Seaside server. It has been up for the 
> last 4 days so it seems that this is a Windows only problem and (hopefully) 
> not my code.
> 
> In passing, I am a high-level programmer in Pharo in that I do not use any 
> special calls or processes.
> 
> Davids
> Totally `Objects

I believe you are using some form of persistence that you built yourself, with 
data written as STON to files. Assuming you store each object in its own file, 
you are then also doing a lot of IO. Are you sure that you are properly closing 
your file streams, in all circumstances?

It is possible that you are running out of IO resources.
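
For what it is worth, a pattern like the following (a minimal sketch; the file 
name and object are placeholders) closes the stream for you even when an error 
occurs, since FileReference>>#writeStreamDo: wraps the block in an ensure:

'entries/entry-42.ston' asFileReference writeStreamDo: [ :out |
  STON put: anEntry onStream: out ]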

Also, garbage collection on a server image (without a UI) is sometimes a bit 
strange; it can take a while to kick in.

I am just guessing here.

It is still not good that the image/VM quits without any notice, that is for 
sure.

Sven


[Pharo-users] Re: Postgres P3 Driver in seaside/web apps - strategy for "prepare:"

2021-04-19 Thread Sven Van Caekenberghe
Hi Sanjay,

> On 19 Apr 2021, at 18:14, Sanjay Minni  wrote:
> 
> Hi 
> 
> using the P3 Postgres driver
> 
> what is the optimised way of using the following in a seaside/web
> application 
>  statement := client prepare: ...
>  statement execute:
>  statement close.
> 
> Sven readme on the drivers page 
> "prepared statements ... need to be closed, prepared statement exist ...
> single session / connection ..."

First make sure that you give each session in your (Seaside) web application 
its own connection/client to the database. P3Client is meant to be used 
single-threaded; it is your responsibility to protect this. Make sure these 
clients are initialised and disposed of properly - use logging.

Second, I would not put time into trying to gain performance with prepared 
statements until you can prove that it makes a real, measurable difference. 
PostgreSQL is very fast. But this is IMHO.

See further.

> Typically in a desktop app I would fire the "prepare" statements(s) once
> when open the particular UI / window (say typically 2 or 3 statements in a
> UI) and then "close" when I exit
> 
> How does this work in a multi-tabbed browser app
> lets say I open a tab with a particular UI and fire the "prepare" statement
> 
> now what if the tab idles for too long - and then i press send/save. The
> program would have just fired the "execute ..." assuming the prepare is
> active. would the prepare / session have been automatically closed ?
> 
> - when should i typically fire the prepare statement

Either upfront when you connect, or each time when you need them (see further). 
The scope of a prepared statement is the connection/session.

Preparing tens of statements upfront that you might not need could increase 
connection time.

> - how should i test if the session / prepare is still active or needs to be
> refired

There is P3Client>>#preparedStatementNamed: for that.
You then best use P3Client>>#prepare:named:.
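
A minimal lifecycle sketch, using only the selectors mentioned in this thread 
(the SQL, statement name and arguments are placeholders):

statement := client
  prepare: 'UPDATE entries SET amount = $1 WHERE id = $2'
  named: 'update-amount'.
[ statement execute: #( 100 42 ) ] ensure: [ statement close ]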

> - how are orphan prepared statements disposed of by the database / program

They get thrown out when the connection closes.

HTH,

Sven

PS: 

There is also P3ConnectionPool, which, on top of its base functionality of 
pooling, can protect against concurrent access, prepare connections and warm 
them up. But a challenge with connection pooling is what to do when errors occur.

> thanks for pointers
> Sanjay 
> 
> 
> 
> -
> cheers, 
> Sanjay
> --
> Sent from: http://forum.world.st/Pharo-Smalltalk-Users-f1310670.html


[Pharo-users] Re: Crash after http request, see the dump file

2021-04-14 Thread Sven Van Caekenberghe
I can't see anything networking-, HTTP- or Zinc-related in the dump.
It looks like something went wrong during garbage collection.
I suppose this is not a repeatable case, is it?

> On 14 Apr 2021, at 09:22, Davide Varvello via Pharo-users 
>  wrote:
> 
> Hi guys,
> I'm working on Pharo 8 on BigSur 
> (Pharo 8.0.0 Build information:
> Pharo-8.0.0+build.1128.sha.9f6475d88dda7d83acdeeda794df35d304cf620d (64
> Bit))
> 
> Yesterday after an http call to my zinc server the image crashed.
> Can you please take a look to the dump file and tell me what happened?
> 
> It would be worth so much to me, thank you
> 
> The file is here: https://pastebin.com/SrE6gcCv
> 
> 
> Davide
> 
> 
> 
> --
> Sent from: http://forum.world.st/Pharo-Smalltalk-Users-f1310670.html


[Pharo-users] Re: Whats the easiest/cheapest way to run a Pharo web app in 2021?

2021-04-13 Thread Sven Van Caekenberghe
Although my main instance is on Digital Ocean, I have a test/play instance on 
AWS.

This is really hip & cool: it is an AWS Graviton 2 instance (Amazon's own ARM64 
CPU, much like Apple Silicon) [ https://aws.amazon.com/ec2/graviton/ ]. I run a 
small t4g.micro instance, 1GB RAM, 8GB Disk.

Last bill was just USD 2.89 which is crazy cheap for a full month 24/7.

Thanks to the fact that Pharo has a full JIT VM on ARM64, this is crazy fast as 
well.

I am sure that the reason this is so cheap is the fact that it is super 
efficient.

You can try this easily for yourself.

> On 13 Apr 2021, at 01:57, Esteban Maringolo  wrote:
> 
> What do you use that's so cheap/affordable?
> 
> El lun., 12 de abril de 2021 04:48, Norbert Hartl  
> escribió:
> 
> 
> > Am 12.04.2021 um 04:02 schrieb Jeff Gray :
> > 
> > Considering easiest and cheapest, there's always self hosting, or are you
> > discounting that idea?
> > Most geeks have a bit of spare hardware laying around and broadband
> > up-speeds aren't too bad.
> > I'm guessing that if we are in the $5 a month ball park then we aren't
> > needing a guaranteed up time.
> > 
> 
> My cloud instance is 3€/month. For an additional 20% the instance has 
> a backup. And setting it up is way simpler than getting dynamic DNS updates 
> and all of that configured. Times have changed a bit.
> 
> 
> Norbert


[Pharo-users] Re: Problem with Pharo 9 - repositories missing

2021-04-12 Thread Sven Van Caekenberghe
David,

Since you have so much trouble building on Windows, I would suggest building a 
deployment image on macOS and then copy that over to Windows (*.image *.changes 
*.sources), install a VM on Windows, and run headless with a startup.st script.
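
For example, a startup.st along these lines, passed to the VM on the command 
line when starting the deployment image (the adaptor class and port are only 
illustrative; ZnZincServerAdaptor is the usual Seaside-on-Zinc adaptor, adjust 
to whatever server you actually run):

"startup.st"
ZnZincServerAdaptor startOn: 8080.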

Sven

> On 11 Apr 2021, at 13:43,  
>  wrote:
> 
> Hi everyone.
>  
> I have a simple task, or so I thought. I am trying to port a Mac 
> Pharo/Seaside project to either a Windows 2012 server or to a Windows 10 
> laptop. I have set up my git credentials and that all works fine. However, 
> when I install either 8.0 or 9.0 on my Windows 10 laptop it shows the following
> Repositories              --  status
> Pharo                     Local repository missing
> Pharo-spec2               Local repository missing
> Pharo-newtools            Local repository missing
> Iceberg                   Local repository missing
> Libgit-pharo-bindings     Local repository missing
> tonel                     Local repository missing
>  
> What on earth is going on here, as installing Pharo on my Mac was seamless? I 
> have to get the project onto a Windows machine as these are the only servers 
> that I have. It really can't be this difficult, can it?
>  
> How do I get out of this mess, given that that is a clean install?
> David
> Totally Objects


[Pharo-users] Re: Problem installing Seaside on Windows

2021-04-02 Thread Sven Van Caekenberghe
I normally build on the server where I deploy (Linux of course), but you do not 
have to. Pharo is cross-platform, so it is perfectly possible to build your 
deployment image on your Mac, copy it over and run it on the server. I would 
always advise not to save the image with running servers inside, but to use a 
startup script to start your server explicitly.

> On 2 Apr 2021, at 19:32, David Pennington  wrote:
> 
> Thank you to everyone for your help. I sorted out the public and private keys 
> and shortened the path but it kept up with an authentication issue. I deleted 
> everything and installed V 9.0. I shortened its name and ran the GitHub code 
> and I now have Seaside installed and running. I now need to install STON and 
> I can get my code working.
> 
> What a great community this is!
> 
> David
> 
>  
> 
> On 02/04/2021 17:43, Tomaž Turk wrote:
> 
>> According to my experiences this happens if Pharo images are located deeply 
>> in the directory path. It helps if you move the image in question to some 
>> higher level. In any case, C:\Pharo should work no matter what.
>>  
>> Seaside has lots of dependencies which are then located under its repository.
>>  
>> Tomaz
>> 
>> On Fri, 2 Apr 2021, 18:31 Sanjay Minni,  wrote:
>> Yes you hit the window filename length problem.
>>  
>> Retry by creating one more pharo 8 image. Only don't take the default image 
>> name as you have taken. Change it to very short like p8-1 to keep image file 
>> name and consequently the folder name length to a minimum
>>  
>> Then install seaside. 
>>  
>> If that doesn't work repost and we will try still another option
>>  
>> Regards
>> Sanjay
>> 
>> On Fri, 2 Apr, 2021, 9:36 pm David Pennington,  
>> wrote:
>> OK, I got my software working fine on the Mac but I really need it on 
>> Windows now so I have had another go. I have followed all the instructions 
>> and I have a public key registered with GIT for both the Mac and the Windows 
>> laptop. The Mac now quite happily installs stuff that wouldn't load before. 
>> (TinyLogger). However, the Windows laptop now get a lot further with the 
>> Seaside install but fails with this message:
>>  
>> "LGit_GIT_EEXISTS: Failed to stat file 
>> 'C:/Users/david/Documents/Pharo/images/Pharo 8.0 - 64bit 
>> (sta...aximumAbsoluteAge.maximumRelativeAge.overflowAction..st': The 
>> filename or extension is too long.
>> "
>> Any thoughts?
>>  
>> 
>> On 22/03/2021 18:18, David Pennington wrote:
>> 
>> I am sorry but github is a mystery to me. I installed all of this on my Mac 
>> with no troubles. Surely it can't be any more difficult on a PC?
>> 
>> On 22 Mar 2021 18:05, Stéphane Ducasse  wrote:
>> david 
>>  
>> did you succeed to clone or checkout a github repo from this machine and 
>> without pharo at all?
>> Because Pharo is just using libgit. 
>>  
>> S. 
>> 
>> On 22 Mar 2021, at 18:58, David Pennington  wrote:
>> 
>> Tried that. I got the following
>>  
>> Failed to get server certificate: the handle is in the wrong state for the 
>> requested operation. 
>>  
>> I assume that someone thinks that this is helpful:-)
>> 
>> On 22 Mar 2021 16:06, Sanjay Minni  wrote:
>> Hi David, 
>> 
>> I have repeatedly installed Seaside on Pharo 8 / 9 64 bit - Windows 10 
>> without any issues and I have done it both ssh and https 
>> 
>> I do it quickly / simply by 
>> 
>> tools->iceberg->[+ add](on top panel right) 
>> on popup select: 'clone from github.com' 
>> fill in owner: SeasideSt(case does not matter) 
>>project: seaside 
>>local directory:   (leave the default for 
>> now) 
>>protocol  try https first (not 
>> sure if github requires a password) 
>>  or ssh which may be 
>> slightly complicated 
>> once seaside libraries are pulled in and seaside appears in the iceberg 
>> panel then 
>> right click on seaside 
>>  on popup scroll down to metacello->install baseline (default) 
>> 
>> hope that works 
>> 
>> 
>> 
>> 
>> Long Haired David wrote 
>> > Hi everyone. 
>> > 
>> > I have been developing a new web site using Seaside on my M1 MacBookAir 
>> > and I have had no issues. 
>> > 
>> > To deploy it, I have to install Pharo on either a Windows 10 or a Windows 
>> > Server 2012 server. Pharo has installed on both without any issues. 
>> > However, I am having problems installing Seaside. 
>> > 
>> > I have Pharo 8.0 installed on both (64 bit version). 
>> > 
>> > If I try and install from the Catalog, I get the following error in the 
>> > Transcript. 
>> > 
>> > IceGenericError: Failed to stat file 
>> > 'C:/Users/david/Documents/Pharo/images/Pharo 8.0 - 64bit 
>> > (stable)/pharo-local. 
>> > 
>> > If I try using Monticello, I get the following: 
>> > 
>> > Metacello new 
>> > baseline:'Seaside3'; 
>> > repository: 'github://SeasideSt/Seaside:master/repository'; 
>> > load 
>> > I got an error while 

[Pharo-users] Re: Problem installing Seaside on Windows

2021-03-23 Thread Sven Van Caekenberghe



> On 22 Mar 2021, at 19:05, Stéphane Ducasse  wrote:
> 
> david 
> 
> did you succeed to clone or checkout a github repo from this machine and 
> without pharo at all?

This is indeed step one: make sure that you can check out code from git(hub) on 
the command line in Windows. Only if that works can you move on to the next steps.

BTW, once you have a repository checked out, you can just point Iceberg to it 
and load code.
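
For example, something along these lines loads a baseline from a repository you 
already cloned on disk (the path is a placeholder; this assumes the code is in 
Tonel format under a repository subdirectory, use filetree:// instead if it is 
in FileTree format):

Metacello new
  baseline: 'Seaside3';
  repository: 'tonel://C:/git/Seaside/repository';
  load.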

PS: yes, you will also have to learn the basics of git(hub).

> Because Pharo is just using libgit. 
> 
> S. 
> 
>> On 22 Mar 2021, at 18:58, David Pennington  wrote:
>> 
>> Tried that. I got the following
>> 
>> Failed to get server certificate: the handle is in the wrong state for the 
>> requested operation. 
>> 
>> I assume that someone thinks that this is helpful:-)
>> 
>> On 22 Mar 2021 16:06, Sanjay Minni  wrote:
>> Hi David, 
>> 
>> I have repeatedly installed Seaside on Pharo 8 / 9 64 bit - Windows 10 
>> without any issues and I have done it both ssh and https 
>> 
>> I do it quickly / simply by 
>> 
>> tools->iceberg->[+ add](on top panel right) 
>> on popup select: 'clone from github.com' 
>> fill in owner: SeasideSt(case does not matter) 
>>project: seaside 
>>local directory:   (leave the default for 
>> now) 
>>protocol  try https first (not 
>> sure if github requires a password) 
>>  or ssh which may be 
>> slightly complicated 
>> once seaside libraries are pulled in and seaside appears in the iceberg 
>> panel then 
>> right click on seaside 
>>  on popup scroll down to metacello->install baseline (default) 
>> 
>> hope that works 
>> 
>> 
>> 
>> 
>> Long Haired David wrote 
>> > Hi everyone. 
>> > 
>> > I have been developing a new web site using Seaside on my M1 MacBookAir 
>> > and I have had no issues. 
>> > 
>> > To deploy it, I have to install Pharo on either a Windows 10 or a Windows 
>> > Server 2012 server. Pharo has installed on both without any issues. 
>> > However, I am having problems installing Seaside. 
>> > 
>> > I have Pharo 8.0 installed on both (64 bit version). 
>> > 
>> > If I try and install from the Catalog, I get the following error in the 
>> > Transcript. 
>> > 
>> > IceGenericError: Failed to stat file 
>> > 'C:/Users/david/Documents/Pharo/images/Pharo 8.0 - 64bit 
>> > (stable)/pharo-local. 
>> > 
>> > If I try using Monticello, I get the following: 
>> > 
>> > Metacello new 
>> > baseline:'Seaside3'; 
>> > repository: 'github://SeasideSt/Seaside:master/repository'; 
>> > load 
>> > I got an error while cloning: There was an authentication error while 
>> > trying to execute the operation: . 
>> > This happens usually because you didn't provide a valid set of 
>> > credentials. 
>> > You may fix this problem in different ways: 
>> > 
>> > 1. adding your keys to ssh-agent, executing ssh-add ~/.ssh/id_rsa in your 
>> > command line. 
>> > 2. adding your keys in settings (open settings browser search for "Use 
>> > custom SSH keys" and 
>> > add your public and private keys). 
>> > 3. using HTTPS instead SSH (Just use an url in the form HTTPS://etc.git). 
>> > I will try to clone the HTTPS variant. 
>> > 
>> > Can you help please? 
>> > 
>> > David 
>> > Totally Objects 
>> 
>> 
>> 
>> 
>> 
>> - 
>> cheers, 
>> Sanjay 
>> -- 
>> Sent from: http://forum.world.st/Pharo-Smalltalk-Users-f1310670.html 
>> 
>> 
> 
> 
> Stéphane Ducasse
> http://stephane.ducasse.free.fr / http://www.pharo.org 
> 03 59 35 87 52
> Assistant: Aurore Dalle 
> FAX 03 59 57 78 50
> TEL 03 59 35 86 16
> S. Ducasse - Inria
> 40, avenue Halley, 
> Parc Scientifique de la Haute Borne, Bât.A, Park Plaza
> Villeneuve d'Ascq 59650
> France
> 


[Pharo-users] Re: change process variable while process is running

2021-03-02 Thread Sven Van Caekenberghe
Hi,

You might learn something from the ProcessSpecificVariable class hierarchy, 
especially the methods in the 'process specific' category of the Process class, 
e.g. #psValueAt:[put:]
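
Alternatively, if sharing a mutable holder between the forked block and the 
rest of your code is enough, a minimal sketch (adapted from the example below) 
could look like this:

| box |
box := Array with: 'help me'.
[ 100 timesRepeat: [
  (Delay forSeconds: 0.5) wait.
  Transcript show: (box at: 1); cr ] ] fork.
"later, from the Playground or any other process:"
box at: 1 put: 'thanks, all good now'.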

Sven

> On 2 Mar 2021, at 10:42, mspg...@gmail.com wrote:
> 
> Hello everybody, I am kind of new to Pharo so I apologise if my
> question is silly :)
> how can I change a variable in a process while the process is running?
> for example in:
> [[ | msg| msg := 'help me'. 100 timesRepeat: [(Delay forSeconds: 0.5)
> wait. Transcript show: msg; cr]] fork.
> 
> how do I change the value of msg while the process is running in order
> to modify what the Transcript is showing?
> is that possible?
> thanks.
> Domenico
> 


[Pharo-users] Re: Strings in Playground are read only in Pharo 9

2021-02-28 Thread Sven Van Caekenberghe
Markus,

'hello' is a literal string constant, part of the set of constants of a method 
(or a doit, which is like a temporary method disconnected from a class).

Constants like these are managed by the compiler and can be shared between 
different expressions to avoid duplication.

Changing such a constant is dangerous because it means you are changing 
static/compiled code. Consider a method like

name
 ^ 'Markus'

I could call this method and change the returned string destructively in place. 
The next time someone calls it, s/he would get the modified string. Even worse, 
s/he would not understand what happened, since the source code did not change.

In the past it was not possible to mark such strings as being constant, now we 
can. Which is a big win.

You can use #copy to get a string that you can modify.

'hello' copy at: 2 put: $a; yourself

HTH,

Sven

> On 28 Feb 2021, at 00:50, Markus Wedel  wrote:
> 
> Hi all,
> 
> strings in Playground are read only in Pharo 9.0 Build 1153 so that
> 
> 'hello' at: 2 put: $a; yourself
> ctrl+p
> 
> This throws an error:
> „Modification forbidden: ‚hello‘ is read-only, hence its field cannot be 
> modified with $a
> 
> which is actually a very nice error message but is this supposed to happen?
> The example does work in Pharo 8 without problems.
> 
> 
> Greetings
> Markus


[Pharo-users] Re: A question about packaging

2021-02-16 Thread Sven Van Caekenberghe
Hi David,

> On 16 Feb 2021, at 20:49, David Pennington  wrote:
> 
> Hi everyone. I have been using Pharo sine I got my M1 MacBookAir in late 
> December. Before then, I was a VAST user since around 1995.
> 
> I have a question about packaging. Example, in VAST, I have an Application 
> called Family_Accounts. Inside that I have some classes such as FAEntry, 
> FABudget and so on. If I wanted to extend String to add an instance to 
> convert to ScaledDecimal but using my special amount strings. I select the 
> New menu item and then select either a class or an extension. As I want to 
> extend String, I select Extension and then, within my Family_Accounts, I can 
> see the String class so that I can extend it.
> 
> In Pharo, I can have a package called Family_Accounts. I can extend String, 
> OK  but my extension stays within the original package with, optionally, a 
> Family_Accounts classification.  I can’t see how to add String to the 
> Family_Accounts package to keep everything together in one place. Can this be 
> done?
> 
> David

Extension methods are placed in method categories named after the package, 
*Family_Accounts. With the Calypso Browser, the UI hides this. At the bottom 
right there is a checkbox to make your method an extension and to choose the 
package to place it in. It can be a bit hard to use or confusing, but it does 
work.
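
For example, a method like this defined on String and placed in the protocol 
*Family_Accounts ends up in the Family_Accounts package (the selector and body 
are made up for illustration):

String >> faAsAmount
  "Hypothetical Family_Accounts extension: interpret my contents as an amount.
   Because I live in protocol *Family_Accounts, I belong to that package."
  ^ self asNumber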

Also, once you have extensions, they will show up as a virtual package (sub) 
category, which is very nice to get an overview.

I am sure this is described in some documentation somewhere.

Sven


[Pharo-users] Re: Problem with Dictionary and Associations

2021-02-11 Thread Sven Van Caekenberghe
Ah, I overlooked that as well ;-)

This could happen when the STON input looks like this:

{ ... } : null

where everything after the closing } is junk from a previous file that you 
overwrote with shorter contents.

Try deleting or truncating your (existing) file first, for example using 
#ensureDelete.
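
For example, a write along these lines (the file name and object are 
placeholders) starts from an empty file every time:

| file |
file := 'entry.ston' asFileReference.
file ensureDelete.
file writeStreamDo: [ :out | STON put: myEntry onStream: out ]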

> On 11 Feb 2021, at 18:07,   
> wrote:
> 
> David
> I think you are misreading the debugger display. The bit you are missing is 
> the last few characters in the evaluation display, after the closing 
> parenthesis. They should not be there if it is displaying a dictionary. 
> Evidently the variable 'aDict' is in fact an association whose key is the 
> dictionary and whose value is nil. 
> 
> I can't see where it goes from there, but I think you need to look more 
> closely at the preceding line, FAOEntry>>getEntryObjectFrom:,  and see what 
> exactly it does with the junk in your .dat file.
> 
> HTH
> 
> Peter Kenny
> 
> 
> -Original Message-
> From: David Pennington  
> Sent: 11 February 2021 16:46
> To: Any question about pharo is welcome 
> Subject: [Pharo-users] Re: Problem with Dictionary and Associations
> 
> Thank you for that but it doesn’t resolve my problem. Why does the stack move 
> to an Association when doing the access to the dictionary? It is the 
> Association indexing that fails as it won’t allow #entryAmount as a key.
> 
> If I inspect the following code (STON fromString: result ) in the line ^ self 
> makeObject: (STON fromString: result ) I get an Association. Why don’t I get 
> a Dictionary. I did two days ago when all of this was working perfectly and 
> we were happily matching the Pharo Seaside display to our bank account :=)
> 
> David
> 
>> On 11 Feb 2021, at 16:39, Sven Van Caekenberghe  wrote:
>> 
>> I can parse the file data you provided:
>> 
>> STON fromString: 
>> '{#entryName:''Housekeeping'',#entryDate:Date[''2021-02-25Z''],#transactionID:''2021022501'',#entryAmount:-400/1s8,#entryCategory:''Housekeeping'',#entryDescription:''Housekeeping'',#match:nil}ousekeeping'',#match:nil}'
>>  
>> 
>> "a Dictionary(#entryAmount->-400.s8 #entryCategory->'Housekeeping' 
>> #entryDate->25 February 2021 #entryDescription->'Housekeeping' 
>> #entryName->'Housekeeping' #match->nil #transactionID->'2021022501' )"
>> 
>> (BTW, this is STON not JSON).
>> 
>> What I do see in the input is junk after the last }
>> 
>> That can happen when you overwrite an existing file with shorter content. 
>> You should truncate such a file.
>> 
>> From your screenshot I can see nothing wrong, #entryAmount seems an existing 
>> key in aDict, that should just work.
>> 
>>> On 11 Feb 2021, at 16:33, David Pennington  wrote:
>>> 
>>> I attach a couple of screen shots and a file containing the item that I am 
>>> trying to open. What else can I supply? This has been working for a couple 
>>> of weeks and suddenly doesn’t work. I save the JSON and then load it back 
>>> again. I enclose a file with the JSON as contents. As you can see from the 
>>> screenshot, the debugger shows it as a dictionary but the execution path 
>>> takes it to an Association which is what I don’t understand.
>>> 
>>> <20210225 01>
>>> 
>>>> On 10 Feb 2021, at 19:17, Sven Van Caekenberghe  wrote:
>>>> 
>>>> Hi David,
>>>> 
>>>>> On 10 Feb 2021, at 19:18, da...@totallyobjects.com wrote:
>>>>> 
>>>>> I am using STON to write objects out to disk. Up to two days ago, I was reading 
>>>>> them in as Dictionaries and converting to objects from there. All of a 
>>>>> sudden yesterday morning, I got an error saying that the association is 
>>>>> only indexable with integers. Even so, I don't seem to be able to access 
>>>>> the contents. 
>>>>> 
>>>>> Firstly, any ideas why this has changed and secondly, any ideas how to fix 
>>>>> it? 
>>>>> 
>>>>> David
>>>>> Totally Objects
>>>>> 
>>>>> Sent from my Huawei tablet
>>>> 
>>>> I am afraid I need more information.
>>>> 
>>>> Could you create a reproducible case ?
>>>> Do you have a stack trace ?
>>>> 
>>>> In any case, STON is a text format, that can be edited (in most cases, 
>>>> shared or circular references being hard to edit by hand).
>>>> 
>>>> Sven
>>> 


[Pharo-users] Re: Problem with Dictionary and Associations

2021-02-11 Thread Sven Van Caekenberghe
I can parse the file data you provided:

STON fromString: 
'{#entryName:''Housekeeping'',#entryDate:Date[''2021-02-25Z''],#transactionID:''2021022501'',#entryAmount:-400/1s8,#entryCategory:''Housekeeping'',#entryDescription:''Housekeeping'',#match:nil}ousekeeping'',#match:nil}'
 

"a Dictionary(#entryAmount->-400.s8 #entryCategory->'Housekeeping' 
#entryDate->25 February 2021 #entryDescription->'Housekeeping' 
#entryName->'Housekeeping' #match->nil #transactionID->'2021022501' )"

(BTW, this is STON not JSON).

What I do see in the input is junk after the last }

That can happen when you overwrite an existing file with shorter content. You 
should truncate such a file.

From your screenshot I can see nothing wrong, #entryAmount seems an existing 
key in aDict, that should just work.

> On 11 Feb 2021, at 16:33, David Pennington  wrote:
> 
> I attach a couple of screen shots and a file containing the item that I am 
> trying to open. What else can I supply? This has been working for a couple of 
> weeks and suddenly doesn’t work. I save the JSON and then load it back again. 
> I enclose a file with the JSON as contents. As you can see from the 
> screenshot, the debugger shows it as a dictionary but the execution path 
> takes it to an Association which is what I don’t understand.
> 
> <20210225 01>
> 
>> On 10 Feb 2021, at 19:17, Sven Van Caekenberghe  wrote:
>> 
>> Hi David,
>> 
>>> On 10 Feb 2021, at 19:18, da...@totallyobjects.com wrote:
>>> 
>>> I am using STON to write objects out to disk. Up to two days ago, I was reading 
>>> them in as Dictionaries and converting to objects from there. All of a 
>>> sudden yesterday morning, I got an error saying that the association is 
>>> only indexable with integers. Even so, I don't seem to be able to access 
>>> the contents. 
>>> 
>>> Firstly, any ideas why this has changed and secondly, any ideas how to fix 
>>> it? 
>>> 
>>> David
>>> Totally Objects
>>> 
>>> Sent from my Huawei tablet
>> 
>> I am afraid I need more information.
>> 
>> Could you create a reproducible case ?
>> Do you have a stack trace ?
>> 
>> In any case, STON is a text format, that can be edited (in most cases, 
>> shared or circular references being hard to edit by hand).
>> 
>> Sven
> 


[Pharo-users] Re: Problem with Dictionary and Associations

2021-02-10 Thread Sven Van Caekenberghe
Hi David,

> On 10 Feb 2021, at 19:18, da...@totallyobjects.com wrote:
> 
> I am using STON to write objects out to disk. Up to two days ago, I was reading 
> them in as Dictionaries and converting to objects from there. All of a sudden 
> yesterday morning, I got an error saying that the association is only 
> indexable with integers. Even so, I don't seem to be able to access the 
> contents. 
> 
> Firstly, any ideas why this has changed and secondly, any ideas how to fix it? 
> 
> David
> Totally Objects
> 
> Sent from my Huawei tablet

I am afraid I need more information.

Could you create a reproducible case ?
Do you have a stack trace ?

In any case, STON is a text format that can be edited (in most cases; shared 
or circular references are hard to edit by hand).

Sven


[Pharo-users] Re: RabbitMQ and Pharo

2021-02-05 Thread Sven Van Caekenberghe



> On 5 Feb 2021, at 17:58, Gabriel Cotelli  wrote:
> 
> You can also take a look at https://github.com/ba-st/ansible

Looks nice, Gabriel, I didn't know about this project.

> On Fri, Feb 5, 2021, 13:04 saogat.rab--- via Pharo-users 
>  wrote:
> Hi,
> 
> 
> 
> I was wondering if anyone here used RabbitMQ in Pharo. Would like to hear 
> about your experience.
> 
> 
> 
> Kind regards,
> 
> Saogat
> 


[Pharo-users] Re: RabbitMQ and Pharo

2021-02-05 Thread Sven Van Caekenberghe
Hi,

You could use the following:

 https://github.com/svenvc/stamp

which is an implementation of STOMP 

 https://en.wikipedia.org/wiki/Streaming_Text_Oriented_Messaging_Protocol

which works fine with RabbitMQ and Pharo.

Sven

> On 5 Feb 2021, at 17:03, saogat.rab--- via Pharo-users 
>  wrote:
> 
> Hi,
> 
> 
> 
> I was wondering if anyone here used RabbitMQ in Pharo. Would like to hear 
> about your experience.
> 
> 
> 
> Kind regards,
> 
> Saogat
> 


[Pharo-users] Re: NeoCSVReader and wrong number of fieldAccessors

2021-01-07 Thread Sven Van Caekenberghe



> On 7 Jan 2021, at 07:15, Richard O'Keefe  wrote:
> 
> You aren't sure what point I was making?
> How about the one I actually wrote down:
>   What test data was NeoCSV benchmarked with
>   and can I get my hands on it?
> THAT is the point.  The data points I showed (and
> many others I have not) are not satisfactory to me.
> I have been searching for CSV test collections.
> One site offered 6 files of which only one downloaded.
> I found a "benchmark suite" for CSV containing no
> actual CSV files.
> So where *else* should I look for benchmark data than
> associated with a parser people in this community are
> generally happy with that is described as "efficient"?

Did you actually read my email and look at the code?

NeoCSVBenchmark generates its own test data.

> Is it so unreasonable to suspect that my results might
> be a fluke?  Is it bad manners to assume that something
> described as efficient has tests showing that?
> 
> 
> 
> On Wed, 6 Jan 2021 at 22:23, jtuc...@objektfabrik.de 
>  wrote:
> Richard,
> 
> I am not sure what point you are trying to make here. 
> You have something cooler and faster? Great, how about sharing? 
> You could make a faster one when it doesn't convert numbers and stuff? Great. 
> I guess the time will be spent after parsing in 95% of the use cases. It 
> depends. And that is exactly what you are saying. The word efficient means 
> nothing without context. How is that related to this thread?
> 
> I think this thread mostly shows the strength of a community, especially when 
> there are members who are active, friendly and highly motivated. My problem 
> git solved in blazing speed without me paying anything for it. Just because 
> Sven thought my problem could be other people's problem as well. 
> 
> I am happy with NeoCSV's speed, even if there may be more lightweight and 
> faster solutions. Tbh, my main concern with NeoCSV is not speed, but how well 
> I can understand problems and fix them. I care about data types on parsing. A 
> non-configurable csv parser gives me a bunch of dictionaries and Strings. 
> That could be a waste of cycles and memory once you need the data as objects. 
> My use case is not importing trillions of records all day, and for a few 
> hundred or maybe sometimes thousands, it is good/fast enough. 
> 
> 
> Joachim
> 
> 
> 
> 
> 
> Am 06.01.21 um 05:10 schrieb Richard O'Keefe:
>> NeoCSVReader is described as efficient.  What is that
>> in comparison to?  What benchmark data are used?
>> Here are benchmark results measured today.
>> (5,000 data line file, 9,145,009 characters).
>>  method                 time(ms)
>>  Just read characters        410
>>  CSVDecoder>>next           3415   astc's CSV reader (defaults). 1.26 x CSVParser
>>  NeoCSVReader>>next         4798   NeoCSVReader (default state). 1.78 x CSVParser
>>  CSVParser>>next            2701   pared-to-the-bone CSV reader. 1.00 reference.
>> 
>> (10,000 data line file, 1,544,836 characters).
>>  method                 time(ms)
>>  Just read characters         93
>>  CSVDecoder>>next            530   astc's CSV reader (defaults). 1.26 x CSVParser 
>>  NeoCSVReader>>next          737   NeoCSVReader (default state). 1.75 x CSVParser 
>>  CSVParser>>next             421   pared-to-the-bone CSV reader. 1.00 reference.
>> 
>> CSVParser is just 78 lines and is not customisable.  It really is
>> stripped to pretty much an absolute minimum.  All of the parsers
>> were configured (if that made sense) to return an Array of Strings.
>> Many of the CSV files I've worked with use short records instead
>> of ending a line with a lot of commas.  Some of them also have the 
>> occasional stray comment off to the right, not mentioned in the header.
>> I've also found it necessary to skip multiple lines at the beginning
>> and/or end.  (Really, some government agencies seem to have NO idea
>> that anyone might want to do more with a CSV file than eyeball it in
>> Excel.)
>> 
>> If there is a benchmark suite I can use to improve CSVDecoder,
>> I would like to try it out.
>> 
>> On Tue, 5 Jan 2021 at 02:36, jtuc...@objektfabrik.de 
>>  wrote:
>> Happy new year to all of you! May 2021 be an increasingly less crazy 
>> year than 2020...
>> 
>> 
>> I have a question that sounds a bit strange, but we have two effects 
>> with NeoCSVReader related to wrong definitions of the reader.
>> 
>> One effect is that reading a Stream #upToEnd leads to an endless loop, 
>> the other is that the Reader produces twice as many objects as there are 
>> lines in the file that is being read.
>> 
>> In both scenarios, the reason is that the CSV Reader has a wrong number 
>> of column definitions.
>> 
>> Of course that is my fault: why do I feed a "malformed" CSV file to poor 
>> NeoCSVReader?
>> 
>> Let me explain: we have a few import interfaces which end users can 
>> define using a more or less nice assistant in our Application. The CSV 
>> files they upload to our App come from third parties like payment 
>> providers, banks and other sources. These change their 

[Pharo-users] Re: is there a better way

2021-01-06 Thread Sven Van Caekenberghe
Roelof,

Working with multiple high resolution images, as I believe you are doing, is 
always going to be a real challenge, performance wise. It just takes time to 
transfer lots of data.

First you have to make sure that you are not doing too much work (double 
downloads, using too high resolutions for previews or browsing). Also, make 
sure your ultimate client (the browser) can cache as well if applicable (set 
modification dates on the response).

Next you could cache images locally (on your app server) so that next time you 
need the same image, you do not need to download it again. Of course, this only 
helps if your hit rate is higher than zero (if you actually ask for the same 
image multiple times).
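
A minimal in-image cache sketch (assuming ZnEasy getJpeg: from Zinc for the 
download; substitute whatever request you already do, and keep an eye on memory 
if the cache can grow large):

cache := Dictionary new.
fetchImage := [ :url | cache at: url ifAbsentPut: [ ZnEasy getJpeg: url ] ].
"the first call downloads; later calls for the same url are answered from the cache"
fetchImage value: 'https://example.com/painting.jpg'.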

It is also possible to do multiple download requests concurrently: if the other 
end is fast enough, that can certainly help.

HTH,

Sven

> On 6 Jan 2021, at 18:11, Roelof Wobben via Pharo-users 
>  wrote:
> 
> 
> I did it on the root document and see this : 
> 
> 
> 
> So as far as I see it, most of the time is taken by getting all the data from 
> all the 10 images.
> 
> I hope someone can tell me if I'm on the right track and will help me to 
> figure out faster ways to achieve the same.
> 
> Roelof
> 
> 
> 
> Op 5-1-2021 om 05:16 schreef Richard O'Keefe:
>> Before you take another step, explore the root document.
>> 
>> Profiling is easy.
>> Open a Playground.
>> Type an expression such as
>>   3 tinyBenchmarks
>> Right click and select 'Profile it'.
>> 
>> More generally, in a browser, look at the "Tool - Profilers"
>> class category.  The classic approach was
>>   MessageTally spyOn: [3 tinyBenchmarks]
>> If I understand correctly, 'Profile it' uses TimeProfiler,
>> which has a nicer interface.  (This is in Pharo 8.)
>> 
>> 
>> On Sun, 3 Jan 2021 at 23:03, Roelof Wobben  wrote:
>> I want the code to fetch a URL and some data from the Rijksmuseum API.
>> And as far as I see it the second is not pointless because it gets more 
>> detailed info about the painting than the first get. 
>> 
>> I did not profile it because I never learned how to do that in Pharo. 
>> 
>> Roelof
>> 
>> 
>> 
>> Op 3-1-2021 om 01:09 schreef Richard O'Keefe:
>>> What do you want the code to do?
>>> Have you profiled the code to see where the time is going?
>>> 
>>> A quick look at the code shows
>>>  - Paintings does one web get
>>>  - each Painting does two more web gets
>>>! and the first of those seems to be pretty pointless,
>>>  as it refetches an object that Paintings already fetched
>>>  and just looked at.
>>> 
>>> 
>>> 
>>> On Sun, 3 Jan 2021 at 01:16, Roelof Wobben via Pharo-users 
>>>  wrote:
>>> Hello,
>>> 
>>> I have now this code : https://github.com/RoelofWobben/Rijksmuseam
>>> 
>>> but it seems to be slow.
>>> 
>>> Can anyone help me with a way I can use a sort of cache so the page 
>>> looks first at the cache if a image is there .
>>> If so, take the image from there , if not , ask the api for the url of 
>>> the image.
>>> 
>>> Roelof
>> 
> 


[Pharo-users] Re: NeoCSVReader and wrong number of fieldAccessors

2021-01-06 Thread Sven Van Caekenberghe
Joachim,

> On 6 Jan 2021, at 11:21, jtuc...@objektfabrik.de wrote:
> 
> Hi Sven,
> 
> 
> I must say I am really happy with your change. We get a nice exception 
> whenever the number of fields doesn't match the number of defined 
> fieldAccessors. So far it also seems the endless loops are gone as well. What 
> a leap forward!

Thank you for your kind words.

But thank you as well: it really helps to get constructive feedback from actual 
users, to improve the code for everyone.

> I'm adding an issue on github about the conversion errors, I hope that is a 
> convenient place for such comments/ideas?

Did you see NeoCSVReaderTests>>#testConversionErrors ?

It is not perfect, but you do get an error when a number conversion fails, you 
could make your own conversions fail similarly.

 (NeoCSVReader on: 'a' readStream) addIntegerField; upToEnd.

Like you said: some validation can be done at the CSV level, but certainly not 
everything.

Sven

> Joachim
> 
> 
> 
> 
> 
> 
> Am 05.01.21 um 21:06 schrieb jtuc...@objektfabrik.de:
>> Sven,
>> 
>> 
>> I tested your change with the file and filter (our own way of defining csv 
>> mappings by the end users) which used to send our application into an 
>> endless loop.
>> 
>> And voila: we get an exception instead of a frozen image! I will give the 
>> conversion errors a test drive tomorrow.
>> 
>> I am absolutely happy with your change. Thank you very much.
>> 
>> 
>> Joachim
>> 
>> 
>> P.S: I even learned a little bit about Iceberg. I am not really sure each of 
>> my mouse clicks made sense, but I had your commit in the image and could 
>> test it and port the deltas over to my Smalltalk dialect...
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> Am 05.01.21 um 19:52 schrieb jtuc...@objektfabrik.de:
>>> Hi Sven,
>>> 
>>> 
>>> all I can say is: wow. I have no words.
>>> 
>>> I will have to learn a bit about Pharo and github real quick now in order 
>>> to try your changes
>>> 
>>> Thank you very much. I'll give you feedback as fast as I can.
>>> 
>>> (And forget my questions about #readAtEndOrEndOfLine. I somehow didn't 
>>> understand it is expected to return a Boolean. Not sure why. I thought of 
>>> 'read' as a command, not a question in simple past..., so I thought its job 
>>> should be to read the rest of the line if we're not there yet)
>>> 
>>> 
>>> Joachim
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> Am 05.01.21 um 17:49 schrieb Sven Van Caekenberghe:
>>>> Hi Joachim,
>>>> 
>>>> Have a look at the following commit:
>>>> 
>>>> https://github.com/svenvc/NeoCSV/commit/a3d6258c28138fe3b15aa03ae71cf1e077096d39
>>>>  
>>>> 
>>>> and specifically the added unit tests. These should help clarify the new 
>>>> behaviour.
>>>> 
>>>> If anything is not clear, please ask.
>>>> 
>>>> HTH,
>>>> 
>>>> Sven
>>>> 
>>>>> On 5 Jan 2021, at 08:49, jtuc...@objektfabrik.de wrote:
>>>>> 
>>>>> Sven,
>>>>> 
>>>>> first of all thanks a lot for taking your time with this!
>>>>> 
>>>>> Your test case is so beautifully small I can't believe it ;-)
>>>>> 
>>>>> While I think some kind of validation could help with parsing CSV, I 
>>>>> remember reading your comment on this in some other discussion long ago. 
>>>>> You wrote you don't see it as a responsibility of a parser and that you 
>>>>> wouldn't want to add this to NeoCSV. I must say I tend to agree mostly. 
>>>>> Whatever you do at parsing can only cover part of the problems related to 
>>>>> validation. There will be checks that require access to other fields from 
>>>>> the same line, or some object that will be the owner of the Collection 
>>>>> that you are just importing, so a lot of validation must be done after 
>>>>> parsing anyways.
>>>>> 
>>>>> So I think we can mostly ignore the validation part. Whatever a reader 
>>>>> will do, it will not be good enough.
>>>>> 
>>>>> A nice way of exposing conversion errors for fields created with 
>>>>> #addField:converter: would help a lot, however.
>>>>> 

[Pharo-users] Re: NeoCSVReader and wrong number of fieldAccessors

2021-01-06 Thread Sven Van Caekenberghe
Hi Richard,

Benchmarking is a can of worms; many factors have to be considered. But the 
first requirement is obviously to be completely open about what you are doing 
and what you are comparing.

NeoCSV contains a simple benchmark suite called NeoCSVBenchmark, which was used 
during development. Note that it is a bit tricky to use: you need to run a 
write benchmark with a specific configuration before you can try read 
benchmarks.

The core data is a 100,000-line file (2.5 MB) like this:

1,-1,9
2,-2,8
3,-3,7
4,-4,6
5,-5,5
6,-6,4
7,-7,3
8,-8,2
9,-9,1
10,-10,0
...

That parses in ~250ms on my machine.
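
If you want a quick, comparable number on your own machine, a rough Playground 
sketch like this (not the actual NeoCSVBenchmark, just similarly shaped 
three-column numeric data) gives a ballpark figure:

| data |
data := String streamContents: [ :out |
  1 to: 100000 do: [ :i |
    out print: i; nextPut: $,; print: i negated; nextPut: $,.
    out print: i \\ 10; nextPut: Character lf ] ].
[ (NeoCSVReader on: data readStream) upToEnd ] timeToRun.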

NeoCSV has quite a few features and handles various edge cases. Obviously, a 
minimal, custom implementation could be faster.

NeoCSV is called efficient not just because it is reasonably fast, but because 
it can be configured to generate domain objects without intermediate structures 
and because it can convert individual fields (parse numbers, dates, times, ...) 
while parsing.

Like you said, some generated CSV output out in the wild is very irregular. I 
try to stick with standard CSV as much as possible.

Sven

> On 6 Jan 2021, at 05:10, Richard O'Keefe  wrote:
> 
> NeoCSVReader is described as efficient.  What is that
> in comparison to?  What benchmark data are used?
> Here are benchmark results measured today.
> (5,000 data line file, 9,145,009 characters).
>  method                 time(ms)
>  Just read characters        410
>  CSVDecoder>>next           3415   astc's CSV reader (defaults). 1.26 x CSVParser
>  NeoCSVReader>>next         4798   NeoCSVReader (default state). 1.78 x CSVParser
>  CSVParser>>next            2701   pared-to-the-bone CSV reader. 1.00 reference.
> 
> (10,000 data line file, 1,544,836 characters).
>  method                 time(ms)
>  Just read characters         93
>  CSVDecoder>>next            530   astc's CSV reader (defaults). 1.26 x CSVParser 
>  NeoCSVReader>>next          737   NeoCSVReader (default state). 1.75 x CSVParser 
>  CSVParser>>next             421   pared-to-the-bone CSV reader. 1.00 reference.
> 
> CSVParser is just 78 lines and is not customisable.  It really is
> stripped to pretty much an absolute minimum.  All of the parsers
> were configured (if that made sense) to return an Array of Strings.
> Many of the CSV files I've worked with use short records instead
> of ending a line with a lot of commas.  Some of them also have the occasional 
> stray comment off to the right, not mentioned in the header.
> I've also found it necessary to skip multiple lines at the beginning
> and/or end.  (Really, some government agencies seem to have NO idea
> that anyone might want to do more with a CSV file than eyeball it in
> Excel.)
> 
> If there is a benchmark suite I can use to improve CSVDecoder,
> I would like to try it out.
> 
> On Tue, 5 Jan 2021 at 02:36, jtuc...@objektfabrik.de 
>  wrote:
> Happy new year to all of you! May 2021 be an increasingly less crazy 
> year than 2020...
> 
> 
> I have a question that sounds a bit strange, but we have two effects 
> with NeoCSVReader related to wrong definitions of the reader.
> 
> One effect is that reading a Stream #upToEnd leads to an endless loop, 
> the other is that the Reader produces twice as many objects as there are 
> lines in the file that is being read.
> 
> In both scenarios, the reason is that the CSV Reader has a wrong number 
> of column definitions.
> 
> Of course that is my fault: why do I feed a "malformed" CSV file to poor 
> NeoCSVReader?
> 
> Let me explain: we have a few import interfaces which end users can 
> define using a more or less nice assistant in our Application. The CSV 
> files they upload to our App come from third parties like payment 
> providers, banks and other sources. These change their file structures 
> whenever they feel like it and never tell anybody. So a CSV import that 
> may have been working for years may one day tear a whole web server 
> image down because of a wrong number of fieldAccessors. This is bad on 
> many levels.
> 
> You can easily try the doubling effect at home: define a working CSV 
> Reader and comment out one of the addField: commands before you use the 
> NeoCSVReader to parse a CSV file. Say your CSV file has 3 lines with 4 
> columns each. If you remove one of the fieldAccessors, an #upToEnd will 
> yield an Array of 6 objects rather than 3.
> 
> I haven't found the reason for the cases where this leads to an endless 
> loop, but at least this one is clear...
> 
> I *guess* this is due to the way #readEndOfLine is implemented. It seems 
> to not peek forward to the end of the line. I have the gut feeling 
> #peekChar should peek instead of reading the #next character from the 
> input Stream, but #peekChar has too many senders to just go ahead and 
> mess with it ;-)
> 
> So I wonder if there are any tried approaches to this problem.
> 
> One thing I might do is not use #upToEnd, but read each line using 
> PositionableStream>>#nextLine 

[Pharo-users] Re: NeoCSVReader and wrong number of fieldAccessors

2021-01-05 Thread Sven Van Caekenberghe
Hi Joachim,

Have a look at the following commit:

 
https://github.com/svenvc/NeoCSV/commit/a3d6258c28138fe3b15aa03ae71cf1e077096d39

and specifically the added unit tests. These should help clarify the new 
behaviour.

If anything is not clear, please ask.

HTH,

Sven

> On 5 Jan 2021, at 08:49, jtuc...@objektfabrik.de wrote:
> 
> Sven,
> 
> first of all thanks a lot for taking your time with this!
> 
> Your test case is so beautifully small I can't believe it ;-)
> 
> While I think some kind of validation could help with parsing CSV, I remember 
> reading your comment on this in some other discussion long ago. You wrote you 
> don't see it as a responsibility of a parser and that you wouldn't want to 
> add this to NeoCSV. I must say I tend to agree mostly. Whatever you do at 
> parsing can only cover part of the problems related to validation. There will 
> be checks that require access to other fields from the same line, or some 
> object that will be the owner of the Collection that you are just importing, 
> so a lot of validation must be done after parsing anyways.
> 
> So I think we can mostly ignore the validation part. Whatever a reader will 
> do, it will not be good enough.
> 
> A nice way of exposing conversion errors for fields created with 
> #addField:converter: would help a lot, however.
> 
> I am glad you agree on the underflow bug. This is more a question of 
> well-formedness than of validation. If a reader finds out it doesn't fit for 
> a file structure, it should tell the user/developer about it or at least 
> gracefully return some more or less incomplete object resembling what it 
> could parse. But it shouldn't cross line borders and return a wrong number of 
> objects.
> 
> 
> I will definitely continue my hunt for the endless loop. It is not an ideal 
> situation if one user of our Seaside Application completely blocks an image 
> that may be serving a few other users by just using a CVS parser that doesn't 
> fit with the file. I suspect this has to do with #readEndOfLine in some 
> special case of the underflow bug, but cannot prove it yet. But I have a file 
> and parser that reliably goes into an endless loop. I just need to isolate 
> the bare CSV parsing from the whole machinery we've built around NeoCSV 
> reader for these user-defined mappings... I wouldn't be surprised if it is a 
> problem buried somewhere in our preparations in building a parser from 
> user-defined data... I will report my progress here, I promise!
> 
> 
> One question I keep thinking about in NeoCSV: You implemented a method called 
> #peekChar, but it doesn't #peek. It buffers a character and does read the 
> #next character. I tried replacing the #next with #peek, but that is 
> definitely a shortcut to 100% CPU, because #peekChar is used a lot, not only 
> for consuming an "unmapped remainder" of a line... I somehow have the feeling 
> that at least in #readEndOfLine the next char should be peeked instead of 
> consumed in order to find out if it's workload or part of the crlf/lf...
> Shouldn't a reader step forward by using #peek to see whether there is more 
> data after all fieldAccessors have been applied to the line (see 
> #readNextRecordAsObject)? Otoh, at one point the reader has to skip to the 
> next line, so I am not sure if peek has any place here... I need to debug a 
> little more to understand...
> 
> 
> 
> Joachim
> 
> 
> 
> 
> 
> 
> Am 04.01.21 um 20:57 schrieb Sven Van Caekenberghe:
>> Hi Joachim,
>> 
>> Thanks for the detailed feedback. This is most helpful. I need to think more 
>> about this and experiment a bit. This is what I came up with in a 
>> Workspace/Playground:
>> 
>> input := 'foo,1
>> bar,2
>> foobar,3'.
>> 
>> (NeoCSVReader on: input readStream) upToEnd.
>> (NeoCSVReader on: input readStream) addField; upToEnd.
>> (NeoCSVReader on: input readStream) addField; addField; addField; upToEnd.
>> 
>> (NeoCSVReader on: input readStream) recordClass: Dictionary; addField: [ 
>> :obj :str | obj at: #one put: str]; upToEnd.
>> (NeoCSVReader on: input readStream) recordClass: Dictionary; addField: [ 
>> :obj :str | obj at: #one put: str]; addField: [ :obj :str | obj at: #two 
>> put: str]; addField: [ :obj :str | obj at: #three put: str]; upToEnd.
>> (NeoCSVReader on: input readStream) recordClass: Dictionary; 
>> emptyFieldValue: #passNil; addField: [ :obj :str | obj at: #one put: str]; 
>> addField: [ :obj :str | obj at: #two put: str]; addField: [ :obj :str | obj 
>> at: #three put: str]; upToEnd.
>> 
>> In my opinion there are two distinct issues:
>> 
>> 1. what to do when you define a specific num

[Pharo-users] Re: NeoCSVReader and wrong number of fieldAccessors

2021-01-04 Thread Sven Van Caekenberghe
Hi Joachim,

Thanks for the detailed feedback. This is most helpful. I need to think more 
about this and experiment a bit. This is what I came up with in a 
Workspace/Playground:

input := 'foo,1
bar,2
foobar,3'.

(NeoCSVReader on: input readStream) upToEnd.
(NeoCSVReader on: input readStream) addField; upToEnd.
(NeoCSVReader on: input readStream) addField; addField; addField; upToEnd.

(NeoCSVReader on: input readStream) recordClass: Dictionary; addField: [ :obj 
:str | obj at: #one put: str]; upToEnd.
(NeoCSVReader on: input readStream) recordClass: Dictionary; addField: [ :obj 
:str | obj at: #one put: str]; addField: [ :obj :str | obj at: #two put: str]; 
addField: [ :obj :str | obj at: #three put: str]; upToEnd.
(NeoCSVReader on: input readStream) recordClass: Dictionary; emptyFieldValue: 
#passNil; addField: [ :obj :str | obj at: #one put: str]; addField: [ :obj :str 
| obj at: #two put: str]; addField: [ :obj :str | obj at: #three put: str]; 
upToEnd.

In my opinion there are two distinct issues:

1. what to do when you define a specific number of fields to be read and there 
are not enough of them in the input (underflow), or there are too many of them 
in the input (overflow).

it is clear that the underflow case is wrong and a bug that has to be fixed.
the overflow case seems OK (resulting in nil fields)

2. to validate the input (a functionality not yet present)

this would basically mean to signal an error in the under or overflow case.
but wrong type conversions should be errors too.

I understand that you want to validate foreign input.

It is a pity that you cannot produce an infinite loop example, that would also 
be useful.

That's it for now, I will come back to you.

Regards,

Sven

> On 4 Jan 2021, at 14:46, jtuc...@objektfabrik.de wrote:
> 
> Please find attached a small test case to demonstrate what I mean. There is 
> just some nonsense Business Object class and a simple test case in this 
> fileout.
> 
> 
> Am 04.01.21 um 14:36 schrieb jtuc...@objektfabrik.de:
>> Happy new year to all of you! May 2021 be an increasingly less crazy year 
>> than 2020...
>> 
>> 
>> I have a question that sounds a bit strange, but we have two effects with 
>> NeoCSVReader related to wrong definitions of the reader.
>> 
>> One effect is that reading a Stream #upToEnd leads to an endless loop, the 
>> other is that the Reader produces twice as many objects as there are lines 
>> in the file that is being read.
>> 
>> In both scenarios, the reason is that the CSV Reader has a wrong number of 
>> column definitions.
>> 
>> Of course that is my fault: why do I feed a "malformed" CSV file to poor 
>> NeoCSVReader?
>> 
>> Let me explain: we have a few import interfaces which end users can define 
>> using a more or less nice assistant in our Application. The CSV files they 
>> upload to our App come from third parties like payment providers, banks and 
>> other sources. These change their file structures whenever they feel like it 
>> and never tell anybody. So a CSV import that may have been working for years 
>> may one day tear a whole web server image down because of a wrong number of 
>> fieldAccessors. This is bad on many levels.
>> 
>> You can easily try the doubling effect at home: define a working CSV Reader 
>> and comment out one of the addField: commands before you use the 
>> NeoCSVReader to parse a CSV file. Say your CSV file has 3 lines with 4 
>> columns each. If you remove one of the fieldAccessors, an #upToEnd will 
>> yield an Array of 6 objects rather than 3. If you remove one of the fieldAccessors, an #upToEnd will 
>> 
>> I haven't found the reason for the cases where this leads to an endless 
>> loop, but at least this one is clear...
>> 
>> I *guess* this is due to the way #readEndOfLine is implemented. It seems to 
>> not peek forward to the end of the line. I have the gut feeling #peekChar 
>> should peek instead of reading the #next character from the input Stream, 
>> but #peekChar has too many senders to just go ahead and mess with it ;-)
>> 
>> So I wonder if there are any tried approaches to this problem.
>> 
>> One thing I might do is not use #upToEnd, but read each line using 
>> PositionableStream>>#nextLine and first check each line if the number of 
>> separators matches the number of fieldAccessors minus 1 (and go through the 
>> hoops of handling separators in quoted fields and such...). Only if that 
>> test succeeds, I would then hand a Stream with the whole line to the reader 
>> and do a #next.
>> 
>> This will, however, mean a lot of extra cycles for large files. Of course I 
>> could do this only for some lines, maybe just the first one. Whatever.
>> 
>> 
>> But somehow I have the feeling I should get an exception telling me the line 
>> is not compatible to the Reader's definition or such. Or 
>> #readAtEndOrEndOfLine should just walk the line to the end and ignore the 
>> rest of the line, returning an incomplete object
>> 
>> 
>> Maybe I am just missing the right setting or switch? What best practices did 
>> you 

[Pharo-users] Re: Pharo and Virtual Realitity

2020-12-28 Thread Sven Van Caekenberghe



> On 28 Dec 2020, at 03:25, askoh  wrote:
> 
> ZnReadEvalPrintDelegate startInServerOn: 1701.
> 
> (ZnServer on: 1701)
>bindingAddress: NetNameResolver localHostAddress;
>delegate: ZnReadEvalPrintDelegate new;
>start;
>yourself

I don't know anything about VRIDE but the above two expressions are equivalent, 
they do the same thing, twice. You need only one of them.

[Pharo-users] Re: PrintString in PBE8

2020-12-25 Thread Sven Van Caekenberghe
Maybe his question is (also) why the automatic refactoring did it wrong, the 
rules warned about the wrong use of #printString, suggested a fix, but the 
solution is still using #printString, hence the same problem.

> On 25 Dec 2020, at 16:20, Stéphane Ducasse  wrote:
> 
> Hi 
> 
> this warning is just that printOn: is working on a stream
> 
> and when we do 
> 
> printOn: aStream
> 
>   aStream nextPutAll: x printString
> 
> printString creates yet another stream, then asks for its contents and passes it 
> to the first one
> 
> 
> printOn: aStream
> 
>   x printOn: aStream
> 
> is faster and cleaner in that case. 
> 
> 
>> On 24 Dec 2020, at 18:32, g_patrickb--- via Pharo-users 
>>  wrote:
>> 
>> I started working through PBE8, and in section 3.13 there is a method:
>> 
>> Counter >> printOn: aStream
>> 
>> super printOn: aStream.
>> 
>> aStream nextPutAll: ' with value: ', count printString.
>> 
>> But it returns two warnings:
>> 
>> [printString] No printString inside printOn
>> 
>> Use cascaded nextPutAll:’s instead of #, in #nextPutAll:
>> 
>> 
>> 
>> It has the option to automatically resolve the cascaded nextPutAll: which 
>> results in:
>> 
>> printOn: aStream
>> 
>> super printOn: aStream.
>> 
>> aStream
>> 
>> nextPutAll: ' with value: ';
>> 
>> nextPutAll: count printString
>> 
>> 
>> 
>> But it still has the warning about printString.
>> 
> 
> 
> Stéphane Ducasse
> http://stephane.ducasse.free.fr / http://www.pharo.org 
> 03 59 35 87 52
> Assistant: Aurore Dalle 
> FAX 03 59 57 78 50
> TEL 03 59 35 86 16
> S. Ducasse - Inria
> 40, avenue Halley, 
> Parc Scientifique de la Haute Borne, Bât.A, Park Plaza
> Villeneuve d'Ascq 59650
> France
> 


[Pharo-users] [ Article ] Understanding Base58 Encoding - It is all about integers

2020-12-15 Thread Sven Van Caekenberghe
Hi,

I wrote and published another article in Concerning Pharo:

 Understanding Base58 Encoding

 It is all about integers

 https://medium.com/concerning-pharo/understanding-base58-encoding-23e673e37ff6

This introductory article shows a bit how easy and how much fun it is to do 
integer and binary encoding and decoding as well as printing and parsing in 
Pharo. It also tries to explain Base58 Encoding from a first principles point 
of view.

Sven


[Pharo-users] Re: [Pharo-dev] Call for Beta-testers Pharo ARM64 JIT

2020-12-13 Thread Sven Van Caekenberghe
 give more details of the current status, and the 
> following steps including Apple Silicon, Windows ARM64 and Linux Open Build 
> System support.
> 
> ## Current Status
> 
> Our objective is to have a running JIT for the new aarch64 architecture (ARM 
> 64bits). This task includes not only a new backend for the JIT compiler but 
> also adding support for external libraries, dependencies and the build 
> process. 
> This means having a working VM with comparable features as the one existing 
> in Intel X64. We are targeting all the major operating systems running in 
> this platform (Linux, OSX, Windows).
> Each of them presents different restrictions and conditions.
> 
> This is the current status:
> 
> - We implemented a full backend for the JIT compiler targeting aarch64.
> - All the image side was adapted to run on it, tested on Ubuntu ARM 64 bits. 
> - We added support for: Iceberg (Libgit) / Athens (Cairo) / SDL / GTK
> - We implemented a LibFFI-based FFI backend as the default one for Pharo 9 in 
> aarch64 (next to come in all platforms). 
> This opens the door to easily porting the features to other platforms and OSes. 
> 
> ## Following Steps and Open Betas: Linux Open Build System (OBS), Windows 
> ARM64 and Apple Silicon
> 
> Linux Systems: In the following days, we will also support Raspbian (Debian) 
> and Manjaro on ARM64. For doing so, we are pushing the last details in having 
> a single Linux build system through OBS. So, if you want to start doing 
> beta-testing of these versions please contact us. A public beta will be open 
> in around two weeks.
> 
> Windows Systems: We have extended the build process to fully support 
> Microsoft Visual Studio compilers and more flexibility to select the targets, 
> also we are building it to run in Windows ARM. To correctly run the VM in 
> Windows it is needed to build all dependencies for aarch64. In the following 
> weeks, we expect to have a working Non-JIT version and a JIT version. The 
> remaining points to have a JIT version are related with the build process as 
> the API of the operating system has not changed from X64 to aarch64.
> 
> OSX Systems: Our third target is to have a working version for the newest 
> Apple silicon. We are acquiring the corresponding hardware to test and to 
> address the differences in the API exposed to JIT applications. As it is the 
> case of the Windows VM, there is not need to change the machine code 
> generation backend; but to compile external libraries, and particularities of 
> the new OS version.
> 
> Thanks for your support, and again, if you like to start beta testing the VM 
> please contact us. In the meantime, we will continue giving you news about 
> the current state and where are we going. 
> 
> The consortium would like to particularly thank Schmidt Buro and Lifeware for 
> their contracts. 
> 
> Regards,
> 
> Pablo on behalf of the Pharo Consortium Engineers
> 
> Stéphane Ducasse
> http://stephane.ducasse.free.fr / http://www.pharo.org 
> 03 59 35 87 52
> Assistant: Aurore Dalle 
> FAX 03 59 57 78 50
> TEL 03 59 35 86 16
> S. Ducasse - Inria
> 40, avenue Halley, 
> Parc Scientifique de la Haute Borne, Bât.A, Park Plaza
> Villeneuve d'Ascq 59650
> France
> 

--
Sven Van Caekenberghe
Proudly supporting Pharo
http://pharo.org
http://association.pharo.org
http://consortium.pharo.org





[Pharo-users] Re: (Re)storing code blocks from text strings (hopefully in STON)

2020-12-05 Thread Sven Van Caekenberghe
https://github.com/SquareBracketAssociates/Booklet-STON/releases/tag/continuous

> On 5 Dec 2020, at 00:23, Offray Vladimir Luna Cárdenas 
>  wrote:
> 
> Ohhh :-O ... where is the STON booklet. I would love to read it.
> 
> Thanks,
> 
> Offray
> 
> On 4/12/20 3:24 p. m., Stéphane Ducasse wrote:
>> Done :)
>> 
>> 
>>> On 4 Dec 2020, at 21:20, Stéphane Ducasse  wrote:
>>> 
>>> It looks like it is recurring  enough to be part of the Ston booklet :)
>>> 
>>> I will add it. 
>>> 
>>> S. 
>>> 
>>>> On 1 Dec 2020, at 10:54, Sven Van Caekenberghe  wrote:
>>>> 
>>>> Hi Offray,
>>>> 
>>>> This is a recurring question. BlockClosures are way too general and 
>>>> powerful to be serialised. That is why serialising BlockClosures is not 
>>>> supported in STON.
>>>> 
>>>> The code inside a block can refer to and even affect state outside the 
>>>> block. Furthermore the return operator is quite special as it returns from 
>>>> some outer context.
>>>> 
>>>> A subset of BlockClosures are those that are clean. These do not close 
>>>> over other variables, nor do they contain a return. By using their source 
>>>> code representation, it is possible to serialise/materialise them.
>>>> 
>>>> You can try this by adding the following methods:
>>>> 
>>>> BlockClosure>>#stonOn: stonWriter
>>>>  self isClean
>>>>ifTrue: [ stonWriter writeObject: self listSingleton: self printString ]
>>>>ifFalse: [ stonWriter error: 'Only clean blocks can be serialized' ]
>>>> 
>>>> BlockClosure>>#stonContainSubObjects
>>>>  ^ false
>>>> 
>>>> BlockClosure class>>#fromSton: stonReader
>>>>  ^ self compilerClass new 
>>>>  source: stonReader parseListSingleton; 
>>>>  evaluate
>>>> 
>>>> With these additions you can do the following:
>>>> 
>>>>  STON fromString: (STON toString: [ :x :y | x + y ]).
>>>> 
>>>> Note that the actual class name depends on the Pharo version (BlockClosure 
>>>> in Pharo 7, FullBlockClosure in Pharo 9 and maybe soon CleanBlockClosure - 
>>>> Marcus is working on that last one and that would be very cool because it 
>>>> would say exactly what it is).
>>>> 
>>>> I am still not 100% convinced to add this as a standard feature to STON. 
>>>> Using source code fully exposes the implementation, while using the 
>>>> compiler can be dangerous. It also adds a dependency on source code and 
>>>> the compiler. But it would be good if people can experiment with this 
>>>> feature.
>>>> 
>>>> Does this help you ?
>>>> 
>>>> Regards,
>>>> 
>>>> Sven
>>>> 
>>>> PS: I would not modify an object just to serialise it.
>>>> 
>>>>> On 30 Nov 2020, at 18:19, Offray Vladimir Luna Cárdenas 
>>>>>  wrote:
>>>>> 
>>>>> Hi,
>>>>> 
>>>>> I'm using STON for all my light storage serialization needs, like the
>>>>> Grafoscopio notebooks, and I also love it, as Russ stated in their mail
>>>>> question, and I share with him a similar request: for my Brea[1] static
>>>>> site generator I would like to store some BreaQuery objects as external
>>>>> STON files, and recover them, so I can run the queries that
>>>>> recreate/update the website easily. I could store them as Grafoscopio
>>>>> notebooks, but I don't want to make Grafoscopio a prerequisite for Brea
>>>>> or I could use Fuel, but I would like to store queries as a diff
>>>>> friendly text based format. I have considered Metacello/Iceberg packages
>>>>> to export code in a diff friendly format, but it may be overkill. So I
>>>>> would like to see if STON can serve me here too.
>>>>> 
>>>>> [1] https://mutabit.com/repos.fossil/brea/
>>>>> [2] https://mutabit.com/repos.fossil/indieweb/
>>>>> 
>>>>> So far, I'm able to serialize a code block as a string using:
>>>>> 
>>>>> BreaQuery>>asStonModified
>>>>>    self codeBlock: self codeBlock asString.
>>>>>    ^ STON toStringPretty: self
>>>>> 
>>>>> But I'm unable to populate a block from a string. Is there any way to
>>>>> make a string, let's say 'a + b', become the code contents of a block,
>>>>> ie: [a + b ] ?
>>>>> 
>>>>> Thanks,
>>>>> 
>>>>> Offray
>>>>> 
>>> 
>>> 
>>> Stéphane Ducasse
>>> http://stephane.ducasse.free.fr / http://www.pharo.org 
>>> 03 59 35 87 52
>>> Assistant: Aurore Dalle 
>>> FAX 03 59 57 78 50
>>> TEL 03 59 35 86 16
>>> S. Ducasse - Inria
>>> 40, avenue Halley, 
>>> Parc Scientifique de la Haute Borne, Bât.A, Park Plaza
>>> Villeneuve d'Ascq 59650
>>> France
>>> 
>> 
>> 
>> Stéphane Ducasse
>> http://stephane.ducasse.free.fr / http://www.pharo.org 
>> 03 59 35 87 52
>> Assistant: Aurore Dalle 
>> FAX 03 59 57 78 50
>> TEL 03 59 35 86 16
>> S. Ducasse - Inria
>> 40, avenue Halley, 
>> Parc Scientifique de la Haute Borne, Bât.A, Park Plaza
>> Villeneuve d'Ascq 59650
>> France
>> 


[Pharo-users] Re: (Re)storing code blocks from text strings (hopefully in STON)

2020-12-04 Thread Sven Van Caekenberghe
Thx!

> On 4 Dec 2020, at 21:24, Stéphane Ducasse  wrote:
> 
> Done :)
> 
> 
>> On 4 Dec 2020, at 21:20, Stéphane Ducasse  wrote:
>> 
>> It looks like it is recurring  enough to be part of the Ston booklet :)
>> 
>> I will add it. 
>> 
>> S. 
>> 
>>> On 1 Dec 2020, at 10:54, Sven Van Caekenberghe  wrote:
>>> 
>>> Hi Offray,
>>> 
>>> This is a recurring question. BlockClosures are way too general and 
>>> powerful to be serialised. That is why serialising BlockClosures is not 
>>> supported in STON.
>>> 
>>> The code inside a block can refer to and even affect state outside the 
>>> block. Furthermore the return operator is quite special as it returns from 
>>> some outer context.
>>> 
>>> A subset of BlockClosures are those that are clean. These do not close over 
>>> other variables, nor do they contain a return. By using their source code 
>>> representation, it is possible to serialise/materialise them.
>>> 
>>> You can try this by adding the following methods:
>>> 
>>> BlockClosure>>#stonOn: stonWriter
>>>  self isClean
>>>ifTrue: [ stonWriter writeObject: self listSingleton: self printString ]
>>>ifFalse: [ stonWriter error: 'Only clean blocks can be serialized' ]
>>> 
>>> BlockClosure>>#stonContainSubObjects
>>>  ^ false
>>> 
>>> BlockClosure class>>#fromSton: stonReader
>>>  ^ self compilerClass new 
>>>  source: stonReader parseListSingleton; 
>>>  evaluate
>>> 
>>> With these additions you can do the following:
>>> 
>>>  STON fromString: (STON toString: [ :x :y | x + y ]).
>>> 
>>> Note that the actual class name depends on the Pharo version (BlockClosure 
>>> in Pharo 7, FullBlockClosure in Pharo 9 and maybe soon CleanBlockClosure - 
>>> Marcus is working on that last one and that would be very cool because it 
>>> would say exactly what it is).
>>> 
>>> I am still not 100% convinced to add this as a standard feature to STON. 
>>> Using source code fully exposes the implementation, while using the 
>>> compiler can be dangerous. It also adds a dependency on source code and the 
>>> compiler. But it would be good if people can experiment with this feature.
>>> 
>>> Does this help you ?
>>> 
>>> Regards,
>>> 
>>> Sven
>>> 
>>> PS: I would not modify an object just to serialise it.
>>> 
>>>> On 30 Nov 2020, at 18:19, Offray Vladimir Luna Cárdenas 
>>>>  wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> I'm using STON for all my light storage serialization needs, like the
>>>> Grafoscopio notebooks, and I also love it, as Russ stated in their mail
>>>> question, and I share with him a similar request: for my Brea[1] static
>>>> site generator I would like to store some BreaQuery objects as external
>>>> STON files, and recover them, so I can run the queries that
>>>> recreate/update the website easily. I could store them as Grafoscopio
>>>> notebooks, but I don't want to make Grafoscopio a prerequisite for Brea
>>>> or I could use Fuel, but I would like to store queries as a diff
>>>> friendly text based format. I have considered Metacello/Iceberg packages
>>>> to export code in a diff friendly format, but it may be overkill. So I
>>>> would like to see if STON can serve me here too.
>>>> 
>>>> [1] https://mutabit.com/repos.fossil/brea/
>>>> [2] https://mutabit.com/repos.fossil/indieweb/
>>>> 
>>>> So far, I'm able to serialize a code block as a string using:
>>>> 
>>>> BreaQuery>>asStonModified
>>>>    self codeBlock: self codeBlock asString.
>>>>    ^ STON toStringPretty: self
>>>> 
>>>> But I'm unable to populate a block from a string. Is there any way to
>>>> make a string, let's say 'a + b', become the code contents of a block,
>>>> ie: [a + b ] ?
>>>> 
>>>> Thanks,
>>>> 
>>>> Offray
>>>> 
>> 
>> 
>> Stéphane Ducasse
>> http://stephane.ducasse.free.fr / http://www.pharo.org 
>> 03 59 35 87 52
>> Assistant: Aurore Dalle 
>> FAX 03 59 57 78 50
>> TEL 03 59 35 86 16
>> S. Ducasse - Inria
>> 40, avenue Halley, 
>> Parc Scientifique de la Haute Borne, Bât.A, Park Plaza
>> Villeneuve d'Ascq 59650
>> France
>> 
> 
> 
> Stéphane Ducasse
> http://stephane.ducasse.free.fr / http://www.pharo.org 
> 03 59 35 87 52
> Assistant: Aurore Dalle 
> FAX 03 59 57 78 50
> TEL 03 59 35 86 16
> S. Ducasse - Inria
> 40, avenue Halley, 
> Parc Scientifique de la Haute Borne, Bât.A, Park Plaza
> Villeneuve d'Ascq 59650
> France
> 


[Pharo-users] Re: (Re)storing code blocks from text strings (hopefully in STON)

2020-12-01 Thread Sven Van Caekenberghe
Hi Offray,

This is a recurring question. BlockClosures are way too general and powerful to 
be serialised. That is why serialising BlockClosures is not supported in STON.

The code inside a block can refer to and even affect state outside the block. 
Furthermore the return operator is quite special as it returns from some outer 
context.

A subset of BlockClosures are those that are clean. These do not close over 
other variables, nor do they contain a return. By using their source code 
representation, it is possible to serialise/materialise them.

You can try this by adding the following methods:

BlockClosure>>#stonOn: stonWriter
  self isClean
ifTrue: [ stonWriter writeObject: self listSingleton: self printString ]
ifFalse: [ stonWriter error: 'Only clean blocks can be serialized' ]

BlockClosure>>#stonContainSubObjects
  ^ false

BlockClosure class>>#fromSton: stonReader
  ^ self compilerClass new 
  source: stonReader parseListSingleton; 
  evaluate

With these additions you can do the following:

  STON fromString: (STON toString: [ :x :y | x + y ]).
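
You can check upfront whether a given block qualifies by sending it #isClean (a small playground sketch):

  [ :x :y | x + y ] isClean.    "true - no outer variables, no return"

  | counter |
  counter := 0.
  [ :x | counter := counter + x ] isClean.    "false - it closes over the temp counter"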

Note that the actual class name depends on the Pharo version (BlockClosure in 
Pharo 7, FullBlockClosure in Pharo 9 and maybe soon CleanBlockClosure - Marcus 
is working on that last one and that would be very cool because it would say 
exactly what it is).

I am still not 100% convinced to add this as a standard feature to STON. Using 
source code fully exposes the implementation, while using the compiler can be 
dangerous. It also adds a dependency on source code and the compiler. But it 
would be good if people can experiment with this feature.

Does this help you ?

Regards,

Sven

PS: I would not modify an object just to serialise it.

> On 30 Nov 2020, at 18:19, Offray Vladimir Luna Cárdenas 
>  wrote:
> 
> Hi,
> 
> I'm using STON for all my light storage serialization needs, like the
> Grafoscopio notebooks, and I also love it, as Russ stated in their mail
> question, and I share with him a similar request: for my Brea[1] static
> site generator I would like to store some BreaQuery objects as external
> STON files, and recover them, so I can run the queries that
> recreate/update the website easily. I could store them as Grafoscopio
> notebooks, but I don't want to make Grafoscopio a prerequisite for Brea
> or I could use Fuel, but I would like to store queries as a diff
> friendly text based format. I have considered Metacello/Iceberg packages
> to export code in a diff friendly format, but it may be overkill. So I
> would like to see if STON can serve me here too.
> 
> [1] https://mutabit.com/repos.fossil/brea/
> [2] https://mutabit.com/repos.fossil/indieweb/
> 
> So far, I'm able to serialize a code block as a string using:
> 
> BreaQuery>>asStonModified
>     self codeBlock: self codeBlock asString.
>     ^ STON toStringPretty: self
> 
> But I'm unable to populate a block from a string. Is there any way to
> make a string, let's say 'a + b', become the code contents of a block,
> ie: [a + b ] ?
> 
> Thanks,
> 
> Offray
> 


[Pharo-users] [ANN] Pharo P3 PostgreSQL client extended with SCRAM-SHA-256 authentication support

2020-11-04 Thread Sven Van Caekenberghe
Hi,

P3, the modern, lean and mean PostgreSQL client for Pharo has been extended 
with SCRAM-SHA-256 authentication support.

  https://github.com/svenvc/P3

To authenticate users when a client connects to the database, several 
mechanisms are offered by PostgreSQL. Previously, the following methods were 
supported in P3:

- trust (no password)
- password (plain text password)
- md5 (MD5 based challenge/response)

More recent versions of PostgreSQL offer a method called 'scram-sha-256', which 
is an improved challenge/response scheme using more advanced cryptographic 
techniques.

To make this feature possible, a couple of these cryptography techniques had to 
be implemented:

- https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer
- 
https://en.wikipedia.org/wiki/Salted_Challenge_Response_Authentication_Mechanism
- https://en.wikipedia.org/wiki/PBKDF2

This is all a bit technical, but if there are PostgreSQL users out there that 
do understand this, you could help with testing this new feature - the main 
README file has been updated with a 'Connection and Authentication' section.
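
For those who want to try it, connecting to a password-protected database looks like this (a minimal sketch; host, port, database name and credentials are placeholders):

  | client |
  client := P3Client new.
  client url: 'psql://user:secret@localhost:5432/database'.
  client connect.
  client isWorking.    "answers true after a successful round trip to the server"
  client close.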

The development work for adding this feature (as open source) was generously 
paid for by Ingenieurbüro für Bauwesen Schmidt GmbH (in collaboration with 
Richard Uttner of Projector Software GmbH and Pavel Krivanek) - thank you.

Regards,

Sven


--
Sven Van Caekenberghe
Proudly supporting Pharo
http://pharo.org
http://association.pharo.org
http://consortium.pharo.org



[Pharo-users] Re: Zinc question (may be this is an HTML one)

2020-10-20 Thread Sven Van Caekenberghe
Hi Stef,

> On 20 Oct 2020, at 09:35, Stéphane Ducasse  wrote:
> 
> Hi sven and others
> 
> 
> While trying to improve microdown I have the following question. 
> How can I have a more generic getImage: method. 
> 
> Right now I use getPng: and I would like to support getPng: and getJpeg. 
> 
> I saw that there is getImageOfType: mimeType fromUrl: urlObject
> 
> 
> getImageOfType: mimeType fromUrl: urlObject
>   | client |
>   (client := self client)
>   url: urlObject;
>   accept: mimeType;
>   enforceHttpSuccess: true;
>   enforceAcceptContentType: true;
>   get.
>   "ImageReadWriter does automatic type detection"
>   ^ ImageReadWriter formFromStream: client entity readStream
> 
> So it looks like what I want except that I do not know how to specify a Mime 
> 
> So I did 
> 
>  ZnEasy 
>   getImageOfType:  (ZnMimeType main: 'image' sub: '*')
>   fromUrl: 'http://pharo.org/files/pharo.png'
> 
> Now I would like to know if this is the correct way to do it. 
> I could imagine that we could give a set of possible mime types.

This is an HTTP question (the protocol, not HTML the document format).

And you solved your problem well, I never tried it like that myself ;-)

I would not write it differently.

Like the comment says, it will only work with those types recognised by 
ImageReadWriter, which are the most common ones, JPG, GIF and PNG.

Image/* is a wildcard matching any image type, and there are many, many more. 

The current mechanism in Zinc with #accept: and #enforceAcceptContentType: true 
assumes a singular 'Accept' header. But indeed, technically, it is possible to 
specify more than one.

For example, this also works:

ZnClient new
  url: 'http://pharo.org/files/pharo.png';
  headerAt: 'Accept' add: ZnMimeType imageGif asString;
  headerAt: 'Accept' add: ZnMimeType imagePng asString;
  headerAt: 'Accept' add: ZnMimeType imageJpeg asString;
  contentReader: [ :entity | ImageReadWriter formFromStream: entity readStream 
];
  get.

But I have to think a bit about this issue.

Another approach is to use the extension of the URL.

ZnMimeType forFilenameExtension: 'http://pharo.org/files/pharo.png' asUrl file 
asFileReference extension.

It all depends on what level you want to tell your user they tried getting an 
unsupported file type.

Sven

> S.
> 
> 
> 
> Stéphane Ducasse
> http://stephane.ducasse.free.fr / http://www.pharo.org 
> 03 59 35 87 52
> Assistant: Aurore Dalle 
> FAX 03 59 57 78 50
> TEL 03 59 35 86 16
> S. Ducasse - Inria
> 40, avenue Halley, 
> Parc Scientifique de la Haute Borne, Bât.A, Park Plaza
> Villeneuve d'Ascq 59650
> France
> 


[Pharo-users] Re: Easiest light weight cloud/web persistence for Pharo?

2020-10-09 Thread Sven Van Caekenberghe



> On 9 Oct 2020, at 12:13, Sean P. DeNigris  wrote:
> 
> Tim Mackinnon wrote
>> Thanks, I had completely forgotten about STON, thats a good point too
>> (possibly this is what SimplePersistence uses as well - I'm not sure).
> 
> It currently uses Fuel, but the serializer/materializer is abstracted, so
> STON could probably be plugged in easily

As a textual format, STON has the advantage that you can read/edit the file 
(especially if you pretty print it).

Wrapping a ZnBuffered[Read|Write]Stream around your text/character stream 
before handing it to STON improves speed. 
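
For example, writing a pretty-printed STON file through a buffered stream could look like this (a sketch; myObject stands for whatever you want to serialise):

  'data.ston' asFileReference writeStreamDo: [ :stream |
    | buffered |
    buffered := ZnBufferedWriteStream on: stream.
    STON put: myObject onStreamPretty: buffered.
    buffered flush ]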

> -
> Cheers,
> Sean
> --
> Sent from: http://forum.world.st/Pharo-Smalltalk-Users-f1310670.html


[Pharo-users] Re: how to check for the statusCode

2020-09-28 Thread Sven Van Caekenberghe
You *really* should read the HTTP chapters of 
http://books.pharo.org/enterprise-pharo/

in particular 
https://ci.inria.fr/pharo-contribution/job/EnterprisePharoBook/lastSuccessfulBuild/artifact/book-result/Zinc-HTTP-Client/Zinc-HTTP-Client.html

not just the Web App chapters.

Also, any general introduction to HTTP (maybe even 
https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol) will help you.

You can just say

 response statusLine
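
If all you need is the numeric code or a success check, ZnResponse also understands these directly (easy to verify in the class browser):

 response code.          "e.g. 200"
 response isSuccess.     "true for 2xx responses"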

You also really, really have to learn how to use the tools (the IDE) to help 
yourself. If you browse the ZnResponse class, you can see all this for 
yourself. I know there is a steep learning curve, and that it can be 
intimidating, but the whole idea behind Pharo is that you have this living 
object system that you can explore and learn from.

The introduction texts (Pharo By Example, the MOOC, etc) all try to teach you 
that.

Use spotter to find things, use browsers to look at code, use inspectors to 
look at objects, use senders and implementers, use class references, read class 
and method comments, study unit tests. Learn from the system.

BTW, ZnEasy is just a simple class side facade, ZnClient is the real thing.

> On 28 Sep 2020, at 20:34, Roelof Wobben via Pharo-users 
>  wrote:
> 
> Hello,
> 
> Sorry for asking so many questions but I lost the big picture right now.
> 
> I'm trying to get the response object back from an API call.
> 
> That I can do with :
> 
> ` response := ZnEasy get: url. `
> 
> and I see that it has the fields statusline which has the field code
> 
> I thought I could use it like this: ` response at: #statusline `
> but that gives me an error message that indexes need to be integers.
> 
> How do I get the code field again ?
> 
> Roelof


[Pharo-users] Re: Standalone html builder (a la seaside without seaside?)

2020-09-28 Thread Sven Van Caekenberghe
Hi Tim,

> On 28 Sep 2020, at 19:28, Tim Mackinnon  wrote:
> 
> Hi - has anyone ever managed to extract the html builder out of seaside - or 
> written something equivalent?
> 
> I often find I want to build some HTML, but don’t want the full seaside - and 
> was wondering if anyone has managed to extract it, or have something similar?
> 
> This combined with Renoir from BA-ST would give a good little light weight 
> web potential to run with Zinc.
> 
> Tim

There is a minimal HTML builder to be found in ZnHtmlOutputStream. It is used 
for a couple of examples and default functionality of ZnServer. There are even 
unit tests.

It is not the same as the Seaside HTML approach though.
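
For the curious, using it can look roughly like this (a sketch from memory; the selectors #streamContents:, #page:do: and #tag:with: are the ones I believe ZnDefaultServerDelegate uses, so double-check them against the class and its unit tests):

  ZnHtmlOutputStream streamContents: [ :html |
    html page: 'Hello' do: [
      html tag: #p with: 'Hello from Zinc' ] ]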

Sven


[Pharo-users] Re: why is ZnEasy choking on this and languages as c# and ruby not

2020-09-27 Thread Sven Van Caekenberghe



> On 27 Sep 2020, at 19:34, Roelof Wobben via Pharo-users 
>  wrote:
> 
> Hello, 
> 
> In a project of mine I do this :  
> 
> (ZnEasy get: 
> 'https://www.rijksmuseum.nl/api/nl/collection/SK-C-1368/tiles?key=14OGzuak'
> ) contents .
> 
> but this is giving me a ByteArray.
> 
> When I do the same in other languages I know, like C# or Ruby, I get a 
> dictionary, as I expect. 
> 
> 
> Can someone explain to me why this happens with ZnEasy. 
> 
> Regards, 
> 
> Roelof

This website/webservice has an incomplete http 1.1 fallback IMHO, as it does 
not set the content-type of the response - in that case the default is 
application/octet-stream, hence a ByteArray.

You can see this when you ask curl to use http 1.1

$ curl -v --http1.1 
https://www.rijksmuseum.nl/api/nl/collection/SK-C-1368/tiles?key=14OGzuak

You will see there is no content-type in the response. Now, curl too will 
seemingly show you the expected text, but that is an assumption it cannot 
actually make (it does this in the Unix tradition).


Now, since we know/expect textual JSON data (apparently), you can make the 
request work as follows:

(ZnEasy get: 
'https://www.rijksmuseum.nl/api/nl/collection/SK-C-1368/tiles?key=14OGzuak') 
contents utf8Decoded.

When you have NeoJSON load, you can parse as follows:

NeoJSONObject fromString: (ZnEasy get: 
'https://www.rijksmuseum.nl/api/nl/collection/SK-C-1368/tiles?key=14OGzuak') 
contents utf8Decoded.


Sven

[Pharo-users] [ANN] P3 version 1.3

2020-09-21 Thread Sven Van Caekenberghe
Hi,

There is a new release of P3, the modern, lean and mean PostgreSQL client for 
Pharo.

https://github.com/svenvc/P3

Version 1.3 contains the following changes:

- Add object logging, see the P3LogEvent hierarchy
- Added P3ConnectionPool with tests
- Better management of prepared statements
- Add support for Chronology objects Time, Date and DateAndTime to be used 
directly as binding arguments for formatted/prepared statements, with tests
- Added basic support for array based parameter binding, see P3ValuesArray and 
#printValuesArrayOn:
- Better documentation and fallback for session/connection timezone and 
character encoder/decoder
- Reimplementation of P3Error adding unique codes and #isLocal as opposed to 
PostreSQL server generated messages; signalling now happens with instances 
created by class side accessors
- Bring back P3Client>>#queryEncoding as an alias for P3Client>>#serverEncoding 
as compatibility support for PharoDatabaseAccessor
- Add P3DatabaseDriver>>#connectSSL:
- Various cleanups and internal improvements 

https://github.com/svenvc/P3/releases/tag/v1.3

The quality of open source software is determined by it being alive, supported 
and maintained.

The first way to help is to simply use P3 in your projects and report back 
about your successes and the issues that you encounter. You can ask questions 
on the Pharo mailing lists.
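
For instance, exercising the new P3ConnectionPool could look like this (a rough sketch; the selectors #url: and #withConnection: used here are assumptions, the added unit tests show the actual API):

  | pool |
  pool := P3ConnectionPool url: 'psql://user:secret@localhost:5432/database'.
  pool withConnection: [ :client | client query: 'SELECT 1' ].
  pool close.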

I want to thank all contributors and users for their help and feedback: you 
make a real difference.

Since the end of last year, I have been using P3 in a real commercial 
production context, processing 10.000s of inserts a day and successfully 
supporting a web application for consulting the data. 

Enjoy,

Sven

--
Sven Van Caekenberghe
Proudly supporting Pharo
http://pharo.org
http://association.pharo.org
http://consortium.pharo.org





[Pharo-users] Re: Updating lists.pharo.org: New server, Mailman3 and more

2020-09-15 Thread Sven Van Caekenberghe



> On 14 Sep 2020, at 17:46, Sean P. DeNigris  wrote:
> 
> Marcus Denker-4 wrote
>> We are updating the mailing lists (everything at https://lists.pharo.org/)
> 
> Thanks, Marcus - all this logistical stuff gets no glory but is so important
> :)

+100

Indeed, thank you, Marcus !

> -
> Cheers,
> Sean
> --
> Sent from: http://forum.world.st/Pharo-Smalltalk-Users-f1310670.html


Re: [Pharo-users] Rounding in Floats

2020-09-07 Thread Sven Van Caekenberghe



> On 6 Sep 2020, at 22:21, Esteban Maringolo  wrote:
> 
> It is not for printing but for testing. I want to assert that a
> certain calculation gives the expected result.

Then you should use #assert:closeTo: and friends.

(9.1 + (-2.0)) closeTo: 7.1 precision: 0.1.

Floats should always be compared using an epsilon (precision) value in tests, 
not using equality.
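
In an SUnit test that could read (a minimal sketch, using the standard SUnit assertions):

  testAddition
    self assert: (9.1 + (-2.0)) closeTo: 7.1.
    self assert: (9.1 + (-2.0)) closeTo: 7.1 precision: 0.001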




Re: [Pharo-users] Rounding in Floats

2020-09-06 Thread Sven Van Caekenberghe



> On 6 Sep 2020, at 16:06, Esteban Maringolo  wrote:
> 
> Hi,
> 
> Continuing with my issues with decimals, I'm having one issue that is
> not clear to me why it happens.
> 
> If I do:
> (9.1 + (-2.0)) roundTo: 0.1.
> "7.1005"
> 
> I expect to get a single decimal Float (rounded with whatever
> precision, but a single decimal).
> 
> Even if I do something like this:
> 7.1 roundTo: 0.1
> 
> It gives the wrong result.
> 
> In VW and VAST it provides the right result.
> (9.1 + (-2.0)) roundTo: 0.1 "7.1"
> 
> In Dolphin it also returns the wrong result, it seems to use the same
> algorithm to round it.
> 
> Is this a bug?

Maybe.

But I would not approach the problem of rounding like that. 
You probably want to control how numbers are printed.
I would keep the numbers themselves at maximum internal precision and only do 
something when printing them.

1 / 3 asFloat printShowingDecimalPlaces: 1.

Since you like Seaside, the following is even much more powerful (has *many* 
options):

GRNumberPrinter new precision: 1; print: 1/3 asFloat.

Check it out.

> 
> Esteban A. Maringolo
> 




Re: [Pharo-users] ZnClient - how suppress popup notifications

2020-09-02 Thread Sven Van Caekenberghe
In any case,

[ ZnClient new get: 'http://host-does-not-exist-123123.com' ] on: NetworkError 
do: [ #myFailure ]

returns #myFailure and does not invoke the #defaultAction, nor any UI.

Like I said, you can just catch the exception you want.

> On 2 Sep 2020, at 22:15, Jimmie Houchin  wrote:
> 
> 
> On 9/2/20 2:36 PM, Sven Van Caekenberghe wrote:
>> Hi Jimmie,
>> 
>>> On 2 Sep 2020, at 20:29, Jimmie Houchin  wrote:
>>> 
>>> 
>>> Before I get to my problem. I want to thank Sven for the huge effort that 
>>> had to be made to provide all of the Zinc networking tools. Thank you.
>> Thanks, you're welcome.
>> 
>>> I am using ZnClient in an app. It is working fine. But I do not want any UI 
>>> Notifications. This will eventually be headless on a server. But right now 
>>> in development when the internet goes out I get a pop up from 
>>> NameLookupFailure which offers me the options of "Give Up" or "Retry".
>>> 
>>> I need to suppress this popup. I already have Error handling code which 
>>> will catch NameLookupFailure among many other network based errors.
>> I know this is confusing, but this is not a problem. You can simply catch 
>> the NameLookupFailure and this will work as expected. The problem is the 
>> custom/overwritten NameLookupFailure>>#defaultAction which is doing UI stuff 
>> (although this gets handled differently in a headless image as well). IMHO 
>> this should be removed.
>> 
>> ZnClientTest>>#testIfFailNonExistingHost is an example that does more or 
>> less what you want.
> 
> 
> Yes. In the current stack trace in the debugger it never reaches my Error 
> handling and hits the #defaultAction method.
> 
> 
> Thanks. That puts me on the path I want to go.
> 
> I probably need to learn to read the tests for or as documentation for code. 
> I am not currently in that habit.
> 
> 
>>> Also, is there a better way to check if the network is up other than simply 
>>> making a request and either getting a successful response or an Error?
>> This is not such an easy problem to solve. Doing something simple, like 
>> accessing a known host, is one way (but never 100% since that host might be 
>> down itself).
>> 
>> There is also the problem of timeouts (i.e. very slow networks).
>> 
>> One of my experimental projects, https://github.com/svenvc/NeoDNS, does 
>> contain something called NeoNetworkState that tests internet connectivity 
>> by doing a DNS call. But this probably goes too far.
>> 
>> HTH,
> 
> Helps tremendously. Currently when I get a network error, I have a loop which 
> polls the least resource consuming URL on the server that I need to access. I 
> exit the loop upon a successful response and continue with my app's main 
> loop. I figure that lets me know that networking is back up and the server I 
> need is responding. Covers all my bases. I just wanted to know if I was 
> overlooking something that all you smart people who spend way more time than 
> I do on this stuff had a solution. I couldn't think of an easy generic 
> solution that could be created.
> 
> Again thanks.
> 
> 
>> 
>> Sven
>> 
>>> Thanks for any help.
>>> 
>>> 
>>> Jimmie




Re: [Pharo-users] ZnClient - how suppress popup notifications

2020-09-02 Thread Sven Van Caekenberghe
Hi Jimmie,

> On 2 Sep 2020, at 20:29, Jimmie Houchin  wrote:
> 
> 
> Before I get to my problem. I want to thank Sven for the huge effort that had 
> to be made to provide all of the Zinc networking tools. Thank you.

Thanks, you're welcome.

> I am using ZnClient in an app. It is working fine. But I do not want any UI 
> Notifications. This will eventually be headless on a server. But right now in 
> development when the internet goes out I get a pop up from NameLookupFailure 
> which offers me the options of "Give Up" or "Retry".
> 
> I need to suppress this popup. I already have Error handling code which will 
> catch NameLookupFailure among many other network based errors.

I know this is confusing, but this is not a problem. You can simply catch the 
NameLookupFailure and this will work as expected. The problem is the 
custom/overwritten NameLookupFailure>>#defaultAction which is doing UI stuff 
(although this gets handled differently in a headless image as well). IMHO this 
should be removed.

ZnClientTest>>#testIfFailNonExistingHost is an example that does more or less 
what you want.

> Also, is there a better way to check if the network is up other than simply 
> making a request and either getting a successful response or an Error?

This is not such an easy problem to solve. Doing something simple, like 
accessing a known host, is one way (but never 100% since that host might be 
down itself).

There is also the problem of timeouts (i.e. very slow networks).

One of my experimental projects, https://github.com/svenvc/NeoDNS, does 
contain something called NeoNetworkState that tests internet connectivity by 
doing a DNS call. But this probably goes too far.

HTH,

Sven

> Thanks for any help.
> 
> 
> Jimmie




Re: [Pharo-users] Question ZnClient with file

2020-08-30 Thread Sven Van Caekenberghe



> On 30 Aug 2020, at 18:05, Sabine Manaa  wrote:
> 
> Hi Sven,
> 
> thanks a lot for your answer! I was already replying 3 hours ago but my 
> answer did not pass the mailing list.
> 
> Perhaps you can answer this mail for the mailing list again:
> 
> I was writing: 
> 
> Hi Sven,
> you see me here very happy here. 
> It is much simpler as I was thinking. Just:
> 
> ^ ZnClient new
> url:
> 'https://my.sevdesk.de/api/v1/Voucher/Factory/uploadTempFile?token=32695d076245b124b066faaa56afc71b74';
> addPart:
> (ZnMimePart
> fieldName: 'file'
> fileNamed: '/Users/sabine/Desktop/belege/neue_belege/mcdonalds.jpeg');
> post
> 
> With this, it succeeds with the upload.

Great.

In production you should also check whether the upload succeeded.

One way to do this is by using #enforceHttpSuccess - this will signal an 
exception unless the host returned a 200 or similar code (assuming the 
receiving host acts like this).
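
In your upload that could look like this (a sketch; the token is shortened here and #handleUploadFailure: stands for whatever error handling fits your application):

  ^ ZnClient new
    url: 'https://my.sevdesk.de/api/v1/Voucher/Factory/uploadTempFile?token=...';
    enforceHttpSuccess: true;
    ifFail: [ :exception | self handleUploadFailure: exception ];    "a hypothetical handler"
    addPart: (ZnMimePart
      fieldName: 'file'
      fileNamed: '/Users/sabine/Desktop/belege/neue_belege/mcdonalds.jpeg');
    post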

> ...
> 
> Please allow me to ask one more question:
> 
> My situation in my application is, that I do not have the file local as in 
> the example but in amazon S3 and I have a url like this:
> 
> https://s3.eu-central-1.amazonaws.com/spf-belege-dev/K1000137/201905061113-506963984-9575877/31/7/small-202008241549-7987688-1.png?X-Amz-Algorithm=AWS4-HMAC-SHA256=AKIAJUHEWICJ33EJAUMA%2F20200830%2Feu-central-1%2Fs3%2Faws4_request=20200830T155656Z=6000=host=ab3f329b1e63c896f8hh95e36951f199fae081ca
> 
> My question is: Is it possible to NOT download this file to my server for 
> then sending it with filename and path to the other system but setting this 
> url directly in the ZnClient?
> For the other system it has to be like a file upload...
> This would save transfer costs and time.

I understand, but unless the host supports this, there is not much you can do. 
The data has to be transferred, if they are not willing to fetch it from some 
URL, you will have to do the copying.

If the files are not too large, you can take them temporarily in memory if you 
want (so that you do not have to create a temp file).

See ZnImageExampleDelegateTest and how #testUpload uses #image there.

It probably would also work with a ZnStreamingEntity (so that not everything 
has to be in memory at the same time, just the buffer to do stream copying), 
but that might require some more experimenting.

> Regards
> Sabine
>  
> 
> Sabine
> 
> Am So., 30. Aug. 2020 um 14:48 Uhr schrieb Sabine Manaa 
> :
> Hi Sven,
> 
> you see me here very happy here. 
> 
> It is much simpler as I was thinking. Just:
> 
> ^ ZnClient new
>   url:
>   
> 'https://my.sevdesk.de/api/v1/Voucher/Factory/uploadTempFile?token=32695d076245b124b0faaa56afc71b74';
>   addPart:
>   (ZnMimePart
>   fieldName: 'file'
>   fileNamed: 
> '/Users/sabine/Desktop/belege/neue_belege/mcdonalds.jpeg');
>   post
> 
> succeeds with the upload.
> 
> Thank you very very much!
> I write this in discord, too.
> 
> Sabine
> 
> Am So., 30. Aug. 2020 um 14:03 Uhr schrieb Sven Van Caekenberghe-2 [via 
> Smalltalk] :
> Hi, 
> 
> CC'ing the Pharo Users ML since that gives a permanent record of my answer. 
> 
> File uploads using ZnClient do work in the common case. You can check 
> ZnServerTest>>#testFormTest3 or ZnImageExampleDelegateTest>>#testUpload as 
> well as several other senders of #addPart: 
> 
> First you are mixing 2 types of forms (see 
> https://ci.inria.fr/pharo-contribution/job/EnterprisePharoBook/lastSuccessfulBuild/artifact/book-result/Zinc-HTTP-Client/Zinc-HTTP-Client.html
>  section 6. Submitting HTML Forms). 
> 
> For a file upload you need a ZnMultiPartFormDataEntity which is configured 
> automatically in ZnClient by #multiPartFormDataEntity when you do #addPart: 
> (no need to set a content type). 
> 
> Next you are mixing the file name and the file contents. It is best to use 
> the class side ZnMimePart instance creation methods, either 
> #fieldName:fileName:entity: or #fieldName:fileNamed: 
> 
> Also, when you create a ZnByteArray entity with the contents of a .jpg or 
> .png you not only have to load the actual bytes (obviously), but you also 
> have to set the mime type correctly. In #fieldName:fileNamed: you can see how 
> this is done by using the file extension, but that is just one way to do it, 
> if you know the type upfront, just set it. 
> 
> There is also ZnClient>>#uploadEntityFrom: (used by #testUploadSmallDocument) 
> but that is not using a form. 
> 
> I am sure you will be able to figure it out, if not just ask. 
> 
> Sven 
> 
> > On 30 Aug 2020, at 09:51, Sabine 

Re: [Pharo-users] Question ZnClient with file

2020-08-30 Thread Sven Van Caekenberghe
Hi,

CC'ing the Pharo Users ML since that gives a permanent record of my answer.

File uploads using ZnClient do work in the common case. You can check 
ZnServerTest>>#testFormTest3 or ZnImageExampleDelegateTest>>#testUpload as well 
as several other senders of #addPart:

First you are mixing 2 types of forms (see 
https://ci.inria.fr/pharo-contribution/job/EnterprisePharoBook/lastSuccessfulBuild/artifact/book-result/Zinc-HTTP-Client/Zinc-HTTP-Client.html
 section 6. Submitting HTML Forms).

For a file upload you need a ZnMultiPartFormDataEntity which is configured 
automatically in ZnClient by #multiPartFormDataEntity when you do #addPart: (no 
need to set a content type).

Next you are mixing the file name and the file contents. It is best to use the 
class side ZnMimePart instance creation methods, either 
#fieldName:fileName:entity: or #fieldName:fileNamed: 

Also, when you create a ZnByteArray entity with the contents of a .jpg or .png 
you not only have to load the actual bytes (obviously), but you also have to 
set the mime type correctly. In #fieldName:fileNamed: you can see how this is 
done by using the file extension, but that is just one way to do it, if you 
know the type upfront, just set it.

There is also ZnClient>>#uploadEntityFrom: (used by #testUploadSmallDocument) 
but that is not using a form.

I am sure you will be able to figure it out, if not just ask.

Sven

> On 30 Aug 2020, at 09:51, Sabine Manaa  wrote:
> 
> Hi Sven,
> 
> I hope you are well this serious times!
> 
> I have a problem with ZnClient. I was asking yesterday in Discord but
> we did not find a solution.
> 
> https://discordapp.com/channels/223421264751099906/223421264751099906/749313351859044382
> 
> I write a summary here:
> 
> I have this command, which works fine on the command line:
> 
> curl -X POST 
> "https://my.sevdesk.de/api/v1/Voucher/Factory/uploadTempFile?token=32695d076245b124b0faaa56afc71b74;
> -H "accept: application/xml" -H "Content-Type: multipart/form-data" -F
> "file=@/Users/sabine/Desktop/belege/neue_belege/mcdonalds.jpeg;type=image/jpeg"
> 
> Now, I want to "translate" this in a ZnClient command, but I do not
> get it. my command is:
> 
> ^ ZnClient new
>   systemPolicy;
>   https;
>   accept: ZnMimeType applicationXml;
>   headerAt: 'Content-Type' add: 'multipart/form-data';
>   host: 'my.sevdesk.de';
>   path: 
> '/api/v1/Voucher/Factory/uploadTempFile?token=32695d076245b124b0faaa56afc71b74';
>   ifFail: [ :exception | exception response entity inspect ];
>   formAt: 'file' put:
> '@/Users/sabine/Desktop/belege/neue_belege/mcdonalds.jpeg';
>   formAt: 'type' put: 'image/jpeg';
>   post
> 
> There must be a difference between the command line and the ZnClient
> command because with the ZnClient command, I get this error:
> {"objects":null,"error":{"message":"Uploaded file is not inside the
> allowed directory","code":null,"data":null}}.
> 
> I was also trying to get the command line from the ZnClient instance
> with the method curl but that gives me this:
> 
> echo 
> 66696c653d402f55736572732f736162696e652f4465736b746f702f62656c6567652f6e6575655f62656c6567652f6d63646f6e616c64732e6a70656726747970653d696d6167652f6a706567
> | xxd -r -p | curl -X POST
> https://my.sevdesk.de:443/api/v1/Voucher/Factory/uploadTempFile?token=32695d076245b124b0faaa56afc71b74
> -H"User-Agent:Zinc HTTP Components 1.0 (Pharo/7.0)"
> -H"Accept:application/xml"
> -H"Content-Type:application/x-www-form-urlencoded"
> -H"Host:my.sevdesk.de" -H"Content-Length:77" --data-binary @-
> 
> Would be very nice if you could help me. All I want ist to "translate"
> the above curl command in a corresponding ZnClient command
> 
> Regards
> Sabine




Re: [Pharo-users] CannotWriteData errors in P3 and Seaside

2020-08-21 Thread Sven Van Caekenberghe
Hi Esteban,

That is good to hear !

Sven

> On 20 Aug 2020, at 23:36, Esteban Maringolo  wrote:
> 
> Hi all,
> 
> On Thu, Aug 20, 2020 at 3:02 PM Sven Van Caekenberghe  wrote:
> 
>>>> I don't know what is going on inside Socket, I just stated my opinion.
>>> Maybe there is something to investigate here?
> 
>> Is it not so that in Docker all network connections in/out/between instances 
>> are mediated by some management software ?
>> I even thought it was nginx. Maybe I am totally wrong here.
> 
> No, it is not that, but inside a Swarm all containers run in a
> "overlay network" (basically a VPN) that is independent of the host
> network, this way you can distribute containers among different hosts.
> 
> All the packages are routed by Docker itself, and apparently there is
> an issue there, that if a connection is idle for a certain time, it
> silently stops routing packages, leaving both sides of the connection
> unaware of it.
> https://github.com/moby/moby/issues/31208
> 
>> What if you change
>> 
>> P3DatabaseDriver >> connect: aLogin
> 
>> by inserting
>> 
>> verbose: true
>> 
>> before the last statement.
>> 
>> You could also make a subclass of P3DatabaseDriver.
> 
> I could, but that would only log one extra CONNECT entry, not of much use.
> 
> 
>>> At this point I'm factoring out what might be causing this. It's an
>>> issue that only happens to me in production, and I don't have a better
>>> instrumentation in place to debug it.
> 
> I think I found the culprit, now I need to know how to bypass or setup
> the network to avoid these situations.
> 
> Luckily this will be solved soon. :-)
> 
> Regards.
> 




Re: [Pharo-users] CannotWriteData errors in P3 and Seaside

2020-08-20 Thread Sven Van Caekenberghe



> On 20 Aug 2020, at 19:51, Esteban Maringolo  wrote:
> 
> Hi,
> 
> I'm replying to the list as well... because the last two mails got
> replied to our personal addresses.

Oh, I did not notice that, that was not my intention.

> On Thu, Aug 20, 2020 at 11:55 AM Sven Van Caekenberghe  wrote:
>>> On 20 Aug 2020, at 15:31, Esteban Maringolo  wrote:
>>> 
>>> Hi Sven,
>>> 
>>> If a socketstream doesn't know the state of the connection, then what
>>> is the #socketIsConnected method for? In particular the
>>> #isOtherEndClosed test.
>>> 
>>> ZdcAbstractSocketStream>>#socketIsConnected
>>> ^socket isConnected and: [ socket isOtherEndClosed not ]
>> 
>> I don't know what is going on inside Socket, I just stated my opinion.
> 
> Maybe there is something to investigate here?

Is it not so that in Docker all network connections in/out/between instances 
are mediated by some management software ? I even thought it was nginx. Maybe I 
am totally wrong here.

>> With logging enabled, I can do the following:
>> 
>> $ grep P3 server-2020-08-20.log | grep CONNECT | tail -n 20
>> 
>> 2020-08-20 14:43:06 [P3] 30513 DISCONNECTING 
>> psql://client-xyz:hiddenpassword@client-xyz-db:5432/client-xyz
>> 2020-08-20 14:44:06 [P3] 30516 CONNECTED 
>> psql://client-xyz:hiddenpassword@client-xyz-db:5432/client-xyz
>> 2020-08-20 14:44:06 [P3] 30516 DISCONNECTING 
>> psql://client-xyz:hiddenpassword@client-xyz-db:5432/client-xyz
>> 2020-08-20 14:44:06 [P3] 30517 CONNECTED 
>> psql://client-xyz:hiddenpassword@client-xyz-db:5432/client-xyz
>> 2020-08-20 14:44:06 [P3] 30517 DISCONNECTING 
>> psql://client-xyz:hiddenpassword@client-xyz-db:5432/client-xyz
>> 
>> The number after [P3] is the session identifier (backend process id) of that 
>> connection. You should see each one being opened and closed in pairs.
> 
> Yes, I noticed the pid, and compared it with what I had on the
> pg_stat_activity table.

Right.

> I don't get the CONNECTED log because there is no way to set the
> logging in the P3DatabaseDriver before it creates (and connects) the
> P3Client.
> Maybe there could be a setting on P3Client class to set verbosity
> globally? Or at the P3DatabaseDriver instead.

What if you change 

P3DatabaseDriver >> connect: aLogin
    connection := self connectionClass new.
    connection
        host: aLogin host;
        port: aLogin port asInteger;
        database: aLogin databaseName;
        user: aLogin username;
        password: aLogin password.
    connection connect

by inserting

 verbose: true

before the last statement.

You could also make a subclass of P3DatabaseDriver.
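
Concretely, one way to read that suggestion is to extend the cascade, so the patched
method would look like this (a sketch of the suggested change, not a committed version):

    P3DatabaseDriver >> connect: aLogin
        connection := self connectionClass new.
        connection
            host: aLogin host;
            port: aLogin port asInteger;
            database: aLogin databaseName;
            user: aLogin username;
            password: aLogin password;
            verbose: true.    "so the [P3] CONNECTED log entries are emitted for this connection too"
        connection connect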

> Summarizing... I'm pretty confident that P3 works correctly and also
> the PG server.
> At this point I'm factoring out what might be causing this. It's an
> issue that only happens to me in production, and I don't have better
> instrumentation in place to debug it.
> 
> Again, thanks for the support.
> 
> Regards.




Re: [Pharo-users] CannotWriteData errors in P3 and Seaside

2020-08-19 Thread Sven Van Caekenberghe
IIUC a socket stream does not automatically/automagically know that the state 
of the connection changed, unless/until it tries to use it (read or write to 
it, wait for data, ...).

I would recommend using #isWorking to actually test whether a connection is good, if 
you need to do that. It does an actual query.
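
A minimal sketch of such a check, for instance in a helper that hands out the Glorp/P3
connection; the method name and the reconnect-on-failure policy are assumptions, not an
official P3 recipe:

    ensureWorking: aP3Client
        "Answer a client that passed the #isWorking test query, reconnecting it if needed."
        aP3Client isWorking ifTrue: [ ^ aP3Client ].
        [ aP3Client close ] on: Error do: [ :error | "socket already gone, ignore" ].
        aP3Client connect.
        ^ aP3Client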

Did you enable P3 logging as I suggested ? What did you learn ?

It feels as if in your particular setup the server eagerly closes connections 
that are open for too long. P3 was written under the assumption that that does 
not happen (it does not for me).

> On 19 Aug 2020, at 19:48, Esteban Maringolo  wrote:
> 
> I kept looking into this, and still haven't found what might be causing it.
> 
> However, while trying to "salvage" the situation until I find a solution,
> I ran a "healthcheck" to be sure that the GlorpSession has an active
> connection, and found that a P3Client reports as connected even
> when it's not.
> 
> I connected to a PostgreSQL database running in a Docker container,
> stopped the container, and the driver continues to report as
> connected, even way after the server was stopped (the timeout is using
> the default 10 seconds).
> 
> The ZdcSocketStream reports both ends of the socket as connected, when
> the server side certainly isn't.
> 
> I also noticed that P3Client>>connect calls #ensureOpen, which treats
> the socket as open when it is not, so as soon as it tries to
> flush the data written to it, a `CannotWriteData` exception is
> signaled.
> 
> In my case I'm developing on Pharo 8 on Windows with PostgreSQL
> running in Docker on WSL2 (ubuntu), but on the server it is a 100%
> Linux deployment.
> 
> Any ideas?
> 
> Esteban A. Maringolo
> 
> On Mon, Aug 10, 2020 at 5:44 PM Esteban Maringolo  
> wrote:
>> 
>> Looking at the Postgres side of the log, I find that the connection was
>> reset from the other side (that is, from Pharo).
>> 
>> The reason for that is still unknown to me, since I don't do anything
>> (that I'm aware of).
>> 
>> golfware_database.1.9sl4bt9j6cv5@gw| 2020-08-10 20:02:35.939 UTC
>> [132] LOG:  could not receive data from client: Connection reset by
>> peer
>> golfware_database.1.9sl4bt9j6cv5@gw| 2020-08-10 20:06:58.083 UTC
>> [139] LOG:  could not receive data from client: Connection reset by
>> peer
>> golfware_database.1.9sl4bt9j6cv5@gw| 2020-08-10 20:06:58.083 UTC
>> [137] LOG:  could not receive data from client: Connection reset by
>> peer
>> golfware_database.1.9sl4bt9j6cv5@gw| 2020-08-10 20:33:20.163 UTC
>> [166] LOG:  could not receive data from client: Connection reset by
>> peer
>> golfware_database.1.9sl4bt9j6cv5@gw| 2020-08-10 20:35:22.019 UTC
>> [168] LOG:  could not receive data from client: Connection reset by
>> peer
>> golfware_database.1.9sl4bt9j6cv5@gw| 2020-08-10 20:37:33.091 UTC
>> [177] LOG:  could not receive data from client: Connection reset by
>> peer
>> 
>> I'll have to keep looking.
>> 
>> Best regards!
>> 
>> Esteban A. Maringolo
>> 
>> 
>> On Mon, Aug 10, 2020 at 5:35 PM Esteban Maringolo  
>> wrote:
>>> 
>>> My Seaside session isn't closing the connection (only when it is
>>> unregistered), but this seems to be something else that I don't understand.
>>> 
>>> I saw there is logging, and I need to set it up in general (including
>>> Fuel serialized stacks).
>>> Looking around the web, it seems there is a need for a keepalive that
>>> is not in place.
>>> What disturbs me is that it doesn't happen in development, which makes
>>> things harder.
>>> 
>>> Regards!
>>> 
>>> Esteban A. Maringolo
>>> 
>>> On Mon, Aug 10, 2020 at 5:02 PM Sven Van Caekenberghe  wrote:
>>>> 
>>>> Hi Esteban,
>>>> 
>>>> I have a web app with P3 under Seaside in production and it works fine. 
>>>> But that is without Glorp, nor any connection pooling.
>>>> 
>>>> You say the connection seems closed, maybe the closing got triggered by 
>>>> your app somehow ? How do you clean up expired sessions ? How do you 
>>>> handle logouts ?
>>>> 
>>>> P3 does normally reconnect automatically, IIRC.
>>>> 
>>>> You could try to enable logging in P3Client, that is a recent addition. It 
>>>> should show you what happens to your connections.
>>>> 
>>>> Sven
>>>> 
>>>>> On 10 Aug 2020, at 21:15, Esteban Maringolo  wrote:
>>>>> 
>>>>> Hi all, Sven 

Re: [Pharo-users] CannotWriteData errors in P3 and Seaside

2020-08-10 Thread Sven Van Caekenberghe
Hi Esteban,

I have a web app with P3 under Seaside in production and it works fine. But 
that is without Glorp, nor any connection pooling.

You say the connection seems closed, maybe the closing got triggered by your 
app somehow ? How do you clean up expired sessions ? How do you handle logouts ?

P3 does normally reconnect automatically, IIRC.

You could try to enable logging in P3Client, that is a recent addition. It 
should show you what happens to your connections.

Sven

> On 10 Aug 2020, at 21:15, Esteban Maringolo  wrote:
> 
> Hi all, Sven ;-)
> 
> I'm having erratic P3 errors in a recent application I wrote using
> Pharo, Seaside and Glorp with P3 as driver.
> 
> Each Seaside session has a GlorpSession, which in turn has a
> P3Connection in its accessor. I don't know why, but sometimes the
> P3Connection socket is closed, and then when trying to read from the
> database, it cannot write the query to the P3 socket and exception is
> raised, and it isn't handled by the P3DatabaseDriver (automatically
> trying to reconnect?).
> 
> I don't know if I'm doing something wrong, I plan to migrate the
> GlorpPooledDatabaseAccessor and also use the P3ConnectionPool, but I
> want to be sure that the current setup works or if maybe I'm exceeding
> some limit or timeout that causes the connection to be closed.
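
A rough sketch of what that pooled setup might look like, assuming P3ConnectionPool
accepts the same psql:// url as P3Client and offers a withConnection:-style API; both
the constructor and the protocol are assumptions to be checked against the P3 sources:

    "url, credentials and the pooled usage pattern are illustrative, not verified P3 API"
    pool := P3ConnectionPool url: 'psql://user:password@localhost:5432/database'.
    pool withConnection: [ :client | client query: 'SELECT 1' ].
    pool close.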
> 
> Regards!
> 
> 
> Esteban A. Maringolo
> 



