Re: [Pharo-users] Stream >> <<

2019-09-11 Thread Herby Vojčík

On 11. 9. 2019 11:46, Herby Vojčík wrote:

On 11. 9. 2019 3:23, Richard O'Keefe wrote:

#write: really does not seem to be any improvement over #nextPutAll:.


Will post.


Actually, I won't. I don't care any more.

I found that a Contributor Covenant-derived Code of Conduct was added to Pharo
three months ago. This is unacceptable under any circumstances.


Have fun in your woke hell.

Herby




Re: [Pharo-users] Stream >> <<

2019-09-10 Thread Herby Vojčík

On 10. 9. 2019 15:20, Sven Van Caekenberghe wrote:

Development happens in Pharo 8 first, with possible back ports if really 
necessary.

The last relevant change was the following:

https://github.com/pharo-project/pharo/pull/2698


That's a very nice change, indeed. Thank you.

But there's still one catch. I looked, and in the Pharo 8 branch this is
still the active implementation:


WriteStream >> << anObject [
	"Write anObject to the receiver, dispatching using #putOn:
	This is a shortcut for both nextPut: and nextPutAll: since anObject can be both
	the element type of the receiver as well as a collection of those elements.
	No further conversions of anObject are applied.
	This is an optimisation.
	Return self to accommodate chaining."

	anObject class == collection class
		ifTrue: [ self nextPutAll: anObject ]
		ifFalse: [ anObject putOn: self ]
]

It is wrong to shortcut the putOn: double dispatch based on anObject class
== collection class. I strongly believe this test should pass:


testPutDiverseNestedSequences

	| array otherSequenceable higherOrderArray higherOrderSequenceable result |
	array := Array with: 1 with: 2 with: 3.
	otherSequenceable := OrderedCollection with: 1 with: 2 with: 3.
	higherOrderArray := Array with: array with: otherSequenceable.
	higherOrderSequenceable := OrderedCollection with: array with: otherSequenceable.

	result := Array streamContents: [ :s | s
		<< array << otherSequenceable
		<< higherOrderArray << higherOrderSequenceable ].

	self assert: result equals: #(
		1 2 3 1 2 3
		1 2 3 1 2 3
		1 2 3 1 2 3 )

If I guess correctly, some of them are not unnested, based on the mere
implementation detail of which class holds the 1 2 3.


I understand how that optimization is needed when a string is put on a
character stream; double dispatching every character is insane. But so
is shortcutting when an Array is put on an Array-based stream.


Maybe the optimization should have an additional anObject isString test (in
the case of strings, the counterexample cannot be created, because they
cannot be nested). Or there should be an additional double dispatch via
nextPutString:, if string-based streams have their own hierarchy. You know
the codebase better.
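
A minimal sketch of the guarded version I have in mind (not a patch, just to show the shape; collection is the stream's underlying collection, as in the method above):

WriteStream >> << anObject [
	"Sketch only: keep the shortcut for strings on string-based streams,
	where the nesting counterexample cannot occur; everything else goes
	through the #putOn: double dispatch."

	(anObject isString and: [ anObject class == collection class ])
		ifTrue: [ self nextPutAll: anObject ]
		ifFalse: [ anObject putOn: self ]
]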


Unless it is already solved by some other twist.

Thanks, Herby



Re: [Pharo-users] Stream >> <<

2019-09-10 Thread Herby Vojčík

On 10. 9. 2019 14:54, Richard O'Keefe wrote:

I think it's fair to say that #<< *is* a bug.
There does not seem to be any coherent description of what it means.
It's overloaded to mean *either* #nextPut: *or* #nextPutAll: *or*
something else, in some confusing ways.
CommandLineHandler            #nextPutAll: (sent somewhere else)
Integer                       left shift (someone has been smoking too much C++)
NonInteractiveTranscript      #show: = locked #print:
SocketStream                  #putOn: (which may itself act like
                              #nextPut:, #nextPutAll:, #print:,
                              put elements sans separators, or
                              something else)
Stream                        #putOn: (see above)
WriteStream                   either #nextPutAll: or #putOn:
Transcript                    #show: = locked #print:
ThreadSafeTranscript          #show: = locked #print:
VTermOutputDriver             #putOn:
VTermOutputDriver2            #asString then #nextPutAll:
ZnEncodedWriteStream          #nextPutAll:
ZnHtmlOutputStream            #asString then #nextPutAll:
SequenceableCollection class  #streamContents:

As was once said about PL/I, #<< fills a much-needed gap.
When I see #print:, or #nextPut:, or #nextPutAll:, I know
what to expect.  When I see #putOn:, I have in general no
idea what will happen.  And when I see << it is worse.


I don't think so. I have a pretty coherent view of how << can work. In
Amber this coherent view helped to create the Silk library for DOM
manipulation, by treating a DOM element as a kind of stream.


Having the simple thing work (<< aCollection unpacks the collection;
putOn: lets you customize how objects are put on a stream) can help a
lot, if things are kept consistent.



One point of << is to imitate C++'s composition of outputs.
That might work, too, if only there were some agreement
about what #nextPutAll: returns.  There is not.  It might
return the receiver.  It might return some other stream
related to the receiver.  It might even return the collection
argument.  So when you see
  a << b << c
in general you not only do not have a clue what (a) is going
to do with (b) but you have no idea what object the message
<< c will be sent to.



This is a strawman. We know what str << a << b << c does if we know what
the output of #<< is; it has nothing to do with #nextPutAll:. And it's
simple: Stream >> << should return self, and we're done.
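
To make that concrete, a minimal sketch (not the actual Pharo source) of the contract I mean:

Stream >> << anObject [
	"Sketch: let #putOn: decide how anObject lands on the receiver,
	and always answer self so that a << b << c keeps writing to a."

	anObject putOn: self.
	^ self
]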



Now let's see if we can puzzle out what
Array streamContents: [ :s | s << 10 << '10' << #(10 '10') ]
does.
The output will be going to a WriteStream.
aWriteStream << anInteger
is not, but is like, aWriteStream print: anInteger.
So we add $1 and $0.
aWriteStream << aString
reduces to aWriteStream nextPutAll: aString.
So we add $1 and $0.
aWriteStream on anArray << anotherArray
reduces to aWriteStream nextPutAll: anotherArray.
So we add 10 and '10'.
Thus the result we get is
#($1 $0 $1 $0 10 '10').
What result we should *expect* from this muddle I cannot say.


#(10 '10' 10 '10')

Of course.

After all, I put things on an Array stream, which holds objects, not on a
character stream.



If, on the other hand, you wrote explicitly
Array streamContents: [:stream |
   stream print: 10; nextPutAll: '10'; nextPutAll: #(10 '10')]
you would have an easy time figuring out what to expect.


I see nextPut[All]: as the low-level put API, and print:, write: and << as
the high-level one. I would not combine them.


I actually combine print: with write: to nice effect in Amber. A lot of
the code which exports source to disk uses a combination of these
two to enhance readability (IMO). For example:


exportTraitDefinitionOf: aClass on: aStream
	"Chunk format."

	aStream
		write: 'Trait named: '; printSymbol: aClass name; lf;
		tab; write: 'package: '; print: aClass category; write: '!!'; lf.
	aClass comment ifNotEmpty: [
		aStream
			write: '!!'; print: aClass; write: ' commentStamp!!'; lf;
			write: { self chunkEscape: aClass comment. '!!' }; lf ].
	aStream lf

As write: and << are synonyms in Amber (so they probably were at some
point in Pharo's history), I chose to pair the print: keyword selector with
the write: keyword selector from a style point of view.


Also, since write: is <<, I can write: a collection of pieces to put, and
I don't need to cascade lots of write: sends.


What I wanted to illustrate is that a good implementation of << can be pretty
useful.



By the way, there is no standard definition of #show:, but in
other Smalltalk systems it's usually a variant of #nextPutAll:,
not a variant of #print:.  There's no denying that locked output
is useful to have, but #show: is not perhaps the best name for it.


Herby




[Pharo-users] Stream >> <<

2019-09-10 Thread Herby Vojčík

Hello!

In Pharo 7.0.4,

  Array streamContents: [ :s | s << 10 << '10' << #(10 '10') ]

  >>> #($1 $0 $1 $0 10 '10')

Bug or feature?

Herby



Re: [Pharo-users] Set >> collect:thenDo:

2019-09-08 Thread Herby Vojčík

On 8. 9. 2019 14:28, Peter Kenny wrote:

Two comments:
First, the method comment for Collection>>collect:thenDo: is "Utility method
to improve readability", which is exactly the same as for
collect:thenSelect: and collect:thenReject:. This suggests that the
*intention* of the method is not to introduce new behaviour, but simply to
provide a shorthand for the version with parentheses. For other kinds of


I had that same impression.


collection this is true; just the deduping makes Set different. If we want


I would be more defensive here and say that the generic collection should
have the (collect:) do: implementation, and only sequenceable collections
should have the optimized one (if it indeed is the case that it is a
shorthand for the parenthesized one).
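
In code, the defensive generic version I mean is exactly what Squeak does (quoted further down):

Collection >> collect: collectBlock thenDo: doBlock [
	"Keep the (collect:) do: semantics in the generic case, so a Set
	dedupes after collecting; sequenceable subclasses may optimize."

	^ (self collect: collectBlock) do: doBlock
]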



the different behaviour, this should be indicated by method name and
comment.
Second, if we remove asSet from the second snippet, the output is exactly
the same. It will be the same as long as the original collection has no
duplicates. Somehow the effect is to ignore the asSet. It just smells wrong.

Peter Kenny


Herby


Kasper Osterbye wrote

The first version:

(#(1 2 3) asSet collect: #odd)
do: [ :each | Transcript show: each; cr ]

is rather straight forward I believe, as collect: and do: has been around
forever (relatively speaking).


#(1 2 3) asSet collect: #odd
thenDo: [ :each | Transcript show: each; cr ]


On 8 September 2019 at 09.13.36, Richard Sargent (



richard.sargent@



) wrote:

  I am skeptical of one that relies on a specific implementation rather than
a specific definition.

I share your feeling. I am not sure where such a definition would come
from. In Squeak it is defined as:

collect: collectBlock thenDo: doBlock

^ (self collect: collectBlock) do: doBlock

In pharo as:

collect: collectBlock thenDo: doBlock

^ self do: [ :each | doBlock value: (collectBlock value: each)]

I might have called the method collect:eachDo:, but we each have slightly
different styles. What I like about the Pharo version is that it is a
shorthand for something which is not achieved by mere parentheses.

Best,

Kasper






--
Sent from: http://forum.world.st/Pharo-Smalltalk-Users-f1310670.html






[Pharo-users] Set >> collect:thenDo:

2019-09-07 Thread Herby Vojčík

Hello!


  (#(1 2 3) asSet collect: #odd)
do: [ :each | Transcript show: each; cr ]

  > true
  > false



  #(1 2 3) asSet collect: #odd
thenDo: [ :each | Transcript show: each; cr ]

  > true
  > false
  > true



Bug or feature?

Herby



Re: [Pharo-users] Silence does not mean agreement

2019-09-04 Thread Herby Vojčík

On 4. 9. 2019 20:10, Sven Van Caekenberghe wrote:

If nobody talks casually about guns as if they are a normal part of life on 
this list I will have no problem doing so.


You are not the director here.

Get over your compulsion. If unable yourself, seek medical help.

Herby


Sven




Re: [Pharo-users] SequenceableCollection>>#allButFirst: inconsistence across subclasses

2019-08-30 Thread Herby Vojčík

On 30. 8. 2019 11:56, Ben Coman wrote:



On Fri, 30 Aug 2019 at 15:34, Julien > wrote:


Hello,

I opened that issue: https://github.com/pharo-project/pharo/issues/4442

And I think to fix it we need to actually discuss about what we want.

#allButFirst: behaves differently depending on the actual type of
sequenceable collection when argument is greater than collection size.

For instance:

#(1 2) allButFirst: 3.  "PrimitiveFailed signaled"
(LinkedList with: 1 with: 2) allButFirst: 3. "PrimitiveFailed signaled"
(OrderedCollection with: 1 with: 2) allButFirst: 3.  "an
OrderedCollection() »

The question is then, who is right?


Its worthwhile to at least survey other Smalltalks.
For Visualworks...
       #(1 2) allButFirst: 3.  "==> #()"
       (OrderedCollection with: 1 with: 2) allButFirst: 3.   "==> 
OrderedCollection ()"
       (LinkedList with: Link new with: Link new ) allButFirst: 3.  
"raises an error Subscription out of bounds error"

and also...
       (LinkedList with: Link new with: Link new ) allButFirst: 2.  
"raises an error Subscription out of bounds error"


I feel that proceeding-without-iterating is nicer than
showing-an-application-error.
It provides the opportunity to not check the number of elements or wrap
error handling around it - i.e. less code if it's not important.


That's what I think as well.
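
For concreteness, a minimal sketch of the lenient behaviour (not the actual Pharo implementation):

SequenceableCollection >> allButFirst: n [
	"Sketch: when n meets or exceeds the size, answer an empty copy
	instead of signalling an out-of-bounds error."

	n >= self size ifTrue: [ ^ self copyEmpty ].
	^ self copyFrom: n + 1 to: self size
]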

If it's important not to exceed the number of elements, then that check
can be explicitly coded.


cheers -ben





Re: [Pharo-users] [ANN] Iterators

2019-08-23 Thread Herby Vojčík
On 23. 8. 2019 19:23, Herby Vojčík wrote:
> On 23. 8. 2019 16:14, Julien wrote:

>> Hello,
>>
>> I wanted to have an iterator framework for Pharo for a long time.
>>
>> So I started building it step by step and today I think that, while it
>> still requires more documentation, it is ready to be announced and
>> used by others.
>>
>> I present you Iterators : https://github.com/juliendelplanque/Iterators
>>
>> The idea is that, as described by the iterator design pattern, any
>> object that needs to be walked provides one or many iterators.
>>
>> In the library, #iterator method is the convention to get the default
>> iterator of any object (if it has one).
>>
>> Iterators provides a DSL to deal with iterators combination.
>>
>> It is inspired from shell’s streams manipulation syntax:
>>
>> - The pipe "|" allows one to chain iterators
>> - The ">" allows one to create a new collection with data transformed
>> through chained iterators
>> - The ">>" allows one to fill an existing collection with data
>> transformed through chained iterators
>> For example, one can write:
>>
>> iterator := #(1 2 3) iterator.
>> iterator
>> | [ :x | x * 2 ] collectIt
>> | [ :x :y | x + y ] reduceIt
>>  > Array "#(12)"
>
> Isn't this something readStream should provide?
>
>str := #(1 2 3) readStream.
>str
>  | [ :x | x * 2 ] collectIt
>  | [ :x :y | x + y ] reduceIt
>  > Array "#(12)"
>
> It is an object from which you take the front element, one at a time.
>
> Why have something very similar with different name?
>
> Herby
Now that I think about it, the problem may be one of nomenclature.

This is my understanding, correct me if I am mistaken:

There are pull-based and push-based sequences out there. In Smalltalk it
can be said that a sequence is pull-based if it has #next and #atEnd; it is
push-based if it has #do:.
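
A minimal illustration of the two styles with stock Pharo objects:

	| stream |
	"Pull-based: the consumer asks for the next element."
	stream := #(1 2 3) readStream.
	[ stream atEnd ] whileFalse: [ Transcript show: stream next printString; cr ].

	"Push-based: the producer hands each element to a block."
	#(1 2 3) do: [ :each | Transcript show: each printString; cr ]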


AFAICT the tool to read a pull-based sequence is called an iterator, and
it can have transformations like filter, map etc., sometimes called
"ix". The approach to transforming a push-based sequence (called an
observable) is called reactive programming ("rx").


It seems you created an rx-like library, but called the object an iterator.

Maybe the set of operations you provide should be defined for both
push-based and pull-based sequences, and given names that conform to the
common canon.


Herby




Re: [Pharo-users] [ANN] Iterators

2019-08-23 Thread Herby Vojčík

On 23. 8. 2019 16:14, Julien wrote:

Hello,

I wanted to have an iterator framework for Pharo for a long time.

So I started building it step by step and today I think that, while it 
still requires more documentation, it is ready to be announced and used 
by others.


I present you Iterators : https://github.com/juliendelplanque/Iterators

The idea is that, as described by the iterator design pattern, any 
object that needs to be walked provides one or many iterators.


In the library, #iterator method is the convention to get the default 
iterator of any object (if it has one).


Iterators provides a DSL to deal with iterators combination.

It is inspired from shell’s streams manipulation syntax:

- The pipe "|" allows one to chain iterators
- The ">" allows one to create a new collection with data transformed 
through chained iterators
- The ">>" allows one to fill an existing collection with data 
transformed through chained iterators

For example, one can write:

iterator := #(1 2 3) iterator.
iterator
| [ :x | x * 2 ] collectIt
| [ :x :y | x + y ] reduceIt
 > Array "#(12)"


Isn't this something readStream should provide?

  str := #(1 2 3) readStream.
  str
| [ :x | x * 2 ] collectIt
| [ :x :y | x + y ] reduceIt
> Array "#(12)"

It is an object from which you take the front element, one at a time.

Why have something very similar with different name?

Herby


Or

iterator := #(1 2 3) iterator.
collectionToFill := OrderedCollection new.
iterator
| [ :x | x * 2 ] collectIt
| [ :x :y | x + y ] reduceIt
 > collectionToFill.
collectionToFill "anOrderedCollection(12)"

The equivalent of "/dev/null" in Linux also exists:

iterator := #(1 2 3) iterator.
iterator
| [ :x | x * 2 ] collectIt
| [ :object | object logCr ] doIt "Just print incoming objects in 
transcript."

 > NullAddableObject "Special object that ignore incoming objects."

There are documentation and examples on the GitHub repository.

—

Initially, the goal was to avoid duplicating all of a collection’s iterator
methods (e.g. #collect:, #select:, etc.) in composite objects.


Thus, it provides an IteratorWithCollectionAPI which wraps an iterator
and provides all the methods we want (#collect:, #select:, …).


Via IteratorWithCollectionAPI, your objects automatically get the
Collection API; you just need to access it via the #iterator message:


myObject iterator select: [ :x | x isFoo ]

This is another way to use the framework, just to avoid code duplication.

—

Future work is to provide the possibility to have iterator with multiple 
inputs.


I already have an undocumented prototype on the repository that works 
like this:


it1 := (1 to: 10) iterator.
it2 := (1 to: 10) iterator.
it1 & it2
| [ :x :y | x@y ] mergeIt
 > Array. "{(1@1). (2@2). (3@3). (4@4). (5@5). (6@6). (7@7). (8@8). 
(9@9). (10@10)}"



Yes, "&" operator will again kind of mimic the one from the shell.

—

Hope it helps other people.

Feedback is welcome.

Cheers,

Julien

---
Julien Delplanque
Doctorant à l’Université de Lille
http://juliendelplanque.be/phd.html
Equipe Rmod, Inria
Bâtiment B 40, Avenue Halley 59650 Villeneuve d'Ascq
Numéro de téléphone: +333 59 35 86 40







Re: [Pharo-users] OrderedCollection as an instance variable

2019-07-25 Thread Herby Vojčík

On 25. 7. 2019 4:27, Richard O'Keefe wrote:

Comment 2.
   This is a poor design.  As it is, any object can replace the tracks of an artist
   with *anything*.  And even without doing that, any object can add and remove
   items to an artist's tracks, even if the added items are not Tracks.
   There are a number of OO design principles, notably the Law of Demeter and
   of course the GRASP patterns, which basically say "never mutate another
   object's parts, ask IT to do the mutation."


As outlined nicely in this classic: 
https://alanknightsblog.blogspot.com/2011/10/principles-of-oo-design-or-everything-i.html.




Re: [Pharo-users] OrderedCollection as an instance variable

2019-07-25 Thread Herby Vojčík

On 24. 7. 2019 17:30, sergio ruiz wrote:

hmm???

maybe this is cleaner..

tracks
	tracks ifNil: [ self tracks: OrderedCollection new ].
	^ tracks


IMO, "return ifNil: value" is an understood and used idiom, so I'd say

  ^ tracks ifNil: [ tracks := ... ]

is the cleanest way. Maybe look at the senders of ifNil: and see for
yourself which is the most idiomatic way to do a lazy getter in the
current image (there will be lots of these usages there).
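
Spelled out for this case (just the idiom, using the OrderedCollection from your snippet):

tracks
	"Lazy getter: create the collection on first access."

	^ tracks ifNil: [ tracks := OrderedCollection new ]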


Herby



because your #tracks: returns self, not the collection value



peace,
sergio
photographer, journalist, visionary

Public Key: http://bit.ly/29z9fG0
#BitMessage BM-NBaswViL21xqgg9STRJjaJaUoyiNe2dV
http://www.codeandmusic.com
http://www.twitter.com/sergio_101
http://www.facebook.com/sergio101






Re: [Pharo-users] Bloc of code in tiers programming language

2019-05-18 Thread Herby Vojčík

On 15. 5. 2019 15:44, Tomaž Turk wrote:

In javascript I believe is

var f = function(x) { return Math.cos(x) + x; }
var df = function(x) { return f(x + 1e-8) - f(x) * 1e8; }


You should use modern JS for comparison, though, so:

const f = x => Math.cos(x) + x;
const df = x => (f(x + 1e-8) - f(x)) * 1e8;

(fixed the operator precedence as well)

Herby

P.S.: I would parametrize the epsilon, as well as the function, so

const deriv = epsilon => f => x => (f(x + epsilon) - f(x)) / epsilon;
const df = deriv(1e-8)(f);
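
And, for the record, the same parametrisation sketched with Smalltalk blocks:

	f := [ :x | x cos + x ].
	deriv := [ :epsilon | [ :func | [ :x | ((func value: x + epsilon) - (func value: x)) / epsilon ] ] ].
	df := (deriv value: 1e-8) value: f.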



Best wishes,
Tomaz

-- Original Message --
From: "Atharva Khare" >
To: "Any question about pharo is welcome" >

Sent: 15.5.2019 15:26:11
Subject: Re: [Pharo-users] Bloc of code in tiers programming language


Hey,

I think in python, you use Lambda Expressions. Here is how I would do 
it in python3:

import math
f = lambda x: math.cos(x) + x
d_f = lambda x: (f(x + 1e-8) - f(x)) * 1e8



On Wed, May 15, 2019 at 6:33 PM Hilaire > wrote:


Hi,

We, Smalltalkers, use blocks of code as easily as we breathe air.

I am writing an article on Smalltalk programming for a French
mathematics teachers magazine.

To illustrate the simplicity of Smalltalk, I would like to compare how
the blocks of code 'f' and 'df' below would be implemented in Javascript
and Python:


f := [ :x | x cos + x ].
df := [ :x | (f value: x + 1e-8) - (f value: x) * 1e8].

Here f is a way to implement a function and df its derivative.

Do some of you know how it would be written in Javascript and Python
with their own ad-hoc anonymous functions?

Thanks

Hilaire

-- 
Dr. Geo

http://drgeo.eu








Re: [Pharo-users] Richard Kenneth Eng is NOT Mr. Smalltalk

2019-04-10 Thread Herby Vojčík

On 28. 2. 2019 18:02, Esteban Maringolo wrote:

Hi Michael,

I share the belief that beyond some point pushing for something can 
backfire, and if you keep going then you enter into the trolls or 
fanatics zone.


However I don't believe the community has to do something, or exclude 
anybody. In a wild place like the web, all efforts to exclude, silence 
or ban are futile, so it's up to each one to judge. Transliterating a 
local saying: "When John speaks about Peter, it says more about John 
than about Peter.".


Regards,

Esteban A. Maringolo


+1

Herby



Re: [Pharo-users] is this valid smalltalk

2019-04-04 Thread Herby Vojčík

On 4. 4. 2019 13:16, Roelof Wobben wrote:

Hello,

For a challenge on Exercism I need to check if some parentheses and
brackets are balanced.


so for example

()  = true
([])  = true

but (])  is not true because the bracket has no opening bracket.

Now I wonder if I can prematurely end a #do: like this

collection do: [:element | (condition with element) ifFalse: [^false]]

in pseudo code

is this a valid way to end the do ?


Not sure I understand your question properly, but yes, you can do this, 
it is called "non-local return" and is a known pattern in Smalltalk.



or is there a better way ?


	(collection
		detect: [ :element | (condition with element) not ]
		ifNone: [ sentinel ]) == sentinel
			ifFalse: [ ^ false ]

for example does not force you to employ it, as well as

	(collection allSatisfy: [ :element | condition with element ])
		ifFalse: [ ^ false ]



Roelof


Herby



Re: [Pharo-users] A "with" construct like Pascal - easy to do, but is it terrible?

2019-03-04 Thread Herby Vojčík

On 4. 3. 2019 14:15, Esteban Maringolo wrote:

Well... you have the cascading that provides you that.

book1
title:'C Programming';
author: 'Nuha Ali ';
subject: 'C Programming Tutorial';
book_id: 6495407.

What I would like, sometimes, is the option to nest cascades, very
much like the WITH DO construct of Pascal, Basic and others.


Shameless plug: https://blog.herby.sk/post/customizing-cascade.

Some year(s) ago I did a poll on Twitter on whether people would like this in
Amber. It was 5 for, 5 against.


Herby



But also I take this as a hint of some refactoring needed in my objects.

Regards,

Esteban A. Maringolo




[Pharo-users] Pragma "only for place of definition", is there one?

2019-02-16 Thread Herby Vojčík

Hi!

I'd just like to know if there is some pragma (e.g. ) which
would tell a (class-side) method to use its code only when self is the
method's defining class, and otherwise just delegate to super.


I only see it useful in a class-side `initialize` method, as in:

  Foo class >> initialize
  	self == Foo ifTrue: [ registry := Dictionary new ].
  	^ super initialize

would be

  Foo class >> initialize

  	registry := Dictionary new

I am asking because I am looking for a way to easily subclass JS classes in
Amber and such a pragma would be useful in such cases; and since Amber treats
Pharo as a reference, I am asking for the name of the pragma if there is one.


Thanks, Herby



Re: [Pharo-users] Traits for class methods?

2019-02-13 Thread Herby Vojčík

On 13. 2. 2019 14:54, Cyril Ferlicot wrote:

On Wed, Feb 13, 2019 at 2:40 PM Hilaire  wrote:


I am wondering. Does it not make responsibilities less clear, shared between
the class hierarchy and the traits hierarchy, now both in terms of behavior
but also state?



Hi,

In general I do not choose between inheritance and composition based
on what the language allows. When we had stateless traits, I created
subclasses even if I was not using any state, because it made sense.

I'm not a big fan of limiting the language on composition because the
choice between composition and inheritance should be conceptual and
not technical.


Sorry for the unconstructive and spammy "I like it" post, but:

+1!


I read in an article this way to choose between inheritance and composition:

«Inheritance should only be used when:
- Both classes are in the same logical domain
- The subclass is a proper subtype of the superclass
- The superclass’s implementation is necessary or appropriate for the subclass
- The enhancements made by the subclass are primarily additive.»


Hilaire

--
Dr. Geo
http://drgeo.eu




Re: [Pharo-users] Pharo-users Digest, Vol 55, Issue 206

2017-11-26 Thread Herby Vojčík

Ben Coman wrote:



On 26 November 2017 at 21:28, Herby Vojčík <he...@mailbox.sk> wrote:

Ben Coman wrote:



On 25 November 2017 at 15:18, Викентий Потапов
<vikenti.pota...@gmail.com> wrote:


 I downloaded the latest PharoLauncher image, use VM for
Pharo 6.1.
 1) On load i got an error with UTF8 encoding.
 2) Every time i tried to create image (for example, Pharo 7
image) i
 get an error "Can't find the requested origin".

 Reported.

 Best regards, Vikenti Potapov


The overall difficulty here in getting this addressed is
difficulty in
reproducing your environment.
For example, I think it would be awkward for me to operate with OS
locale set to Russian.
You'll see little action on it until there is an Issue logged at
pharo.fogbugz.com with a simple set of steps
to reproduced for standard environments (i.e. using English locales)

Reviewing your other posts, I'd suggest initially creating a
script to
wrap Pharo startup that cleans all the environment variables to
ascii


-1.

It should all work in other locales. Maybe you should set some if
not all of the jenkins jobs under different locales (at least
russian and japanese) to catch those things fast.

Your (I hate to use the word) privilege to be fine with ASCII only
actually masks lot of bugs from you.


I think you misunderstand me.  This suggestion was only to help isolate
the problem during troubleshooting.
The best way to troubleshoot something is to a.) start with something
working and then b.) be able to reliability break it.
Vikenti doesn't have (a.) so I was giving him a hack to get that, to be
able to proceed to (b.)

But your suggestion to the running some CI jobs under different locals
and timezones could be useful.


Ok, sorry.

Herby


cheers -ben



Herby

P.S.: Other masking thing is the timezone which is UTC in case of
UK, France and lot of the rest of Western Europe. That should also
be set to. say, Sydney and Honolulu to get big GMT+ and GMT- shifts.


characters, temporarily creating required directories so at lest
you can
get it started.  Then extend that script to reintroduce the
error, like
a single non-ascii character in a single environment variable that
causes an error.  That extended part then provides a path for
others to
reproduce your problem, which you would attach to the Issue.
  Then as
you investigate a solution yourself, ask questions about it on
pharo-dev
to keep the Issue active and over time you'll likely get some action
from others on it.

cheers -ben











Re: [Pharo-users] Pharo-users Digest, Vol 55, Issue 206

2017-11-26 Thread Herby Vojčík

Ben Coman wrote:



On 25 November 2017 at 15:18, Викентий Потапов
> wrote:


I downloaded the latest PharoLauncher image, use VM for Pharo 6.1.
1) On load i got an error with UTF8 encoding.
2) Every time i tried to create image (for example, Pharo 7 image) i
get an error "Can't find the requested origin".

Reported.

Best regards, Vikenti Potapov


The overall difficulty here in getting this addressed is difficulty in
reproducing your environment.
For example, I think it would be awkward for me to operate with OS
locale set to Russian.
You'll see little action on it until there is an Issue logged at
pharo.fogbugz.com  with a simple set of steps
to reproduced for standard environments (i.e. using English locales)

Reviewing your other posts, I'd suggest initially creating a script to
wrap Pharo startup that cleans all the environment variables to ascii


-1.

It should all work in other locales. Maybe you should set some, if not
all, of the Jenkins jobs to run under different locales (at least Russian
and Japanese) to catch those things fast.


Your (I hate to use the word) privilege of being fine with ASCII only
actually masks a lot of bugs from you.


Herby

P.S.: Another masking thing is the timezone, which is UTC in the case of the
UK, France and a lot of the rest of Western Europe. That should also be set
to, say, Sydney and Honolulu to get big GMT+ and GMT- shifts.



characters, temporarily creating required directories so at lest you can
get it started.  Then extend that script to reintroduce the error, like
a single non-ascii character in a single environment variable that
causes an error.  That extended part then provides a path for others to
reproduce your problem, which you would attach to the Issue.   Then as
you investigate a solution yourself, ask questions about it on pharo-dev
to keep the Issue active and over time you'll likely get some action
from others on it.

cheers -ben








Re: [Pharo-users] I love the launcher!!!!

2017-11-26 Thread Herby Vojčík

Stephane Ducasse wrote:

Then do not try :) like that you will never get a chance to know :)


Could you please explain? I understood your sentence has a high dose of
humour which I cannot decipher, since I lack certain abilities for that.


Thanks, Herby


Stef


On Sun, Nov 26, 2017 at 1:48 PM, Herby Vojčík<he...@mailbox.sk>  wrote:

Stephane Ducasse wrote:

Why don't you try? It does not bite.

For me it works in all scenario. I have projects that i manage over
several weeks and others I drop day to day.
And I have also startup script per versions.


Maybe I will. The main problem was I didn't see what it's good about at all -
load an image for a version, run it - what's the "wow it's great" about?
The missed part was that it loads the correct VM; plus it makes sense when you
have hundreds (I don't). Now it must be combined with the startup script magic
to work with long-time projects, but those are also new to me - did not know of
them until your booklet, not using them at all yet.

Herby



Stef

On Fri, Nov 24, 2017 at 3:10 PM, Herby Vojčík<he...@mailbox.sk>   wrote:

Thank you all, now I understand it better. Good for lots of "branches".

However, I wonder how does it work with the rule I read somewhere: "start
each day [of work on a project] with a new image", which means I should
not
reuse clean new image (as it needs populating from VCSes etc.) nor reuse
existing image (I should start with the new one). Or does it combine with
some startup-magic described in one of the recent Steph's booklets, the
"one
start per [new] image" case (but again, one should discriminate projects
from each other)?

Thanks, Herby

Peter Uhnák wrote:

Hi Herby,

normally people use different images for their different projects,
different versions, trying things, etc. Which means we end up with many
locations on disk, and it can be hard to track.
So PharoLauncher is a nice tool where you can download fresh image just
by clicking, and you see the list of your local images and can launch
them, etc.

Peter

On Thu, Nov 23, 2017 at 12:56 PM, Herby Vojčík <he...@mailbox.sk> wrote:

  Stephane Ducasse wrote:

  Hi

  I love the PharoLauncher.


  Pardon my question, I have downloaded it and looked at it, but I
  don't get it. What does it do / what are the use cases (honest
  question)?

  Thanks, Herby


  It helps me to manage my parallel development and projects.

  We should put a link on the Pharo web site because

  http://files.pharo.org/platform/launcher/

  is arcane.

  Stef













Re: [Pharo-users] I love the launcher!!!!

2017-11-26 Thread Herby Vojčík

Stephane Ducasse wrote:

Why don't you try? It does not bite.

For me it works in all scenario. I have projects that i manage over
several weeks and others I drop day to day.
And I have also startup script per versions.


Maybe I will. The main problem was I didn't see what it's good about at all
- load an image for a version, run it - what's the "wow it's great" about?
The missed part was that it loads the correct VM; plus it makes sense when
you have hundreds (I don't). Now it must be combined with the startup script
magic to work with long-time projects, but those are also new to me -
did not know of them until your booklet, not using them at all yet.


Herby


Stef

On Fri, Nov 24, 2017 at 3:10 PM, Herby Vojčík<he...@mailbox.sk>  wrote:

Thank you all, now I understand it better. Good for lots of "branches".

However, I wonder how does it work with the rule I read somewhere: "start
each day [of work on a project] with a new image", which means I should not
reuse clean new image (as it needs populating from VCSes etc.) nor reuse
existing image (I should start with the new one). Or does it combine with
some startup-magic described in one of the recent Steph's booklets, the "one
start per [new] image" case (but again, one should discriminate projects
from each other)?

Thanks, Herby

Peter Uhnák wrote:

Hi Herby,

normally people use different images for their different projects,
different versions, trying things, etc. Which means we end up with many
locations on disk, and it can be hard to track.
So PharoLauncher is a nice tool where you can download fresh image just
by clicking, and you see the list of your local images and can launch
them, etc.

Peter

On Thu, Nov 23, 2017 at 12:56 PM, Herby Vojčík <he...@mailbox.sk> wrote:

 Stephane Ducasse wrote:

 Hi

 I love the PharoLauncher.


 Pardon my question, I have downloaded it and looked at it, but I
 don't get it. What does it do / what are the use cases (honest
 question)?

 Thanks, Herby


 It helps me to manage my parallel development and projects.

 We should put a link on the Pharo web site because

 http://files.pharo.org/platform/launcher/

 is arcane.

 Stef













Re: [Pharo-users] I love the launcher!!!!

2017-11-24 Thread Herby Vojčík

Thank you all, now I understand it better. Good for lots of "branches".

However, I wonder how it works with the rule I read somewhere:
"start each day [of work on a project] with a new image", which means I
should not reuse a clean new image (as it needs populating from VCSes
etc.) nor reuse an existing image (I should start with a new one). Or
does it combine with some startup magic described in one of the recent
Steph's booklets, the "one start per [new] image" case (but again, one
should discriminate projects from each other)?


Thanks, Herby

Peter Uhnák wrote:

Hi Herby,

normally people use different images for their different projects,
different versions, trying things, etc. Which means we end up with many
locations on disk, and it can be hard to track.
So PharoLauncher is a nice tool where you can download fresh image just
by clicking, and you see the list of your local images and can launch
them, etc.

Peter

On Thu, Nov 23, 2017 at 12:56 PM, Herby Vojčík <he...@mailbox.sk> wrote:

Stephane Ducasse wrote:

Hi

I love the PharoLauncher.


Pardon my question, I have downloaded it and looked at it, but I
don't get it. What does it do / what are the use cases (honest
question)?

Thanks, Herby


It helps me to manage my parallel development and projects.

We should put a link on the Pharo web site because

http://files.pharo.org/platform/launcher/

is arcane.

Stef









Re: [Pharo-users] I love the launcher!!!!

2017-11-23 Thread Herby Vojčík

Stephane Ducasse wrote:

Hi

I love the PharoLauncher.


Pardon my question, I have downloaded it and looked at it, but I don't 
get it. What does it do / what are the use cases (honest question)?


Thanks, Herby


It helps me to manage my parallel development and projects.

We should put a link on the Pharo web site because

http://files.pharo.org/platform/launcher/

is arcane.

Stef






Re: [Pharo-users] What is code 137 / how to gracefully shut down via SIGTERM?

2017-11-01 Thread Herby Vojčík

Herby Vojčík wrote:

Not as easy as it seems, ran this on docker image herbysk/pharo:64_61

61_64, of course.



Re: [Pharo-users] What is code 137 / how to gracefully shut down via SIGTERM?

2017-11-01 Thread Herby Vojčík

Esteban Lorenzano wrote:

Hi,

I don’t know if is useful in your case, but you made me remember I made
a small tool to trap unix signals within Pharo. I uploaded then to github.

https://github.com/estebanlm/pharo-posix-signal

is very easy to use and it will allow you to trap any signal and do what
you want ;)


Not as easy as it seems, ran this on docker image herbysk/pharo:64_61 
(based on the one from that day Iceberg was backported):




root@2367caef3dac:/# cat s.st
| trap |
Iceberg enableMetacelloIntegration: true; remoteTypeSelector: #httpsUrl.
Metacello new
  repository: 'github://estebanlm/pharo-posix-signal/src';
  baseline: 'POSIXSignal';
  load.
trap := POSIXSignal SIGINT. trap installWith: [ :signal | 'Trapped!!' crLog ].
trap := POSIXSignal SIGTERM. trap installWith: [ :signal | 'TERM!!' crLog ].

root@2367caef3dac:/# pharo /opt/pharo/Pharo.image ${PWD}/s.st --no-quit
pthread_setschedparam failed: Operation not permitted
This VM uses a separate heartbeat thread to update its internal clock
and handle events.  For best operation, this thread should run at a
higher priority, however the VM was unable to change the priority.  The
effect is that heavily loaded systems may experience some latency
issues.  If this occurs, please create the appropriate configuration
file in /etc/security/limits.d/ as shown below:

cat >DoIt (POSIXSignal is Undeclared)

UndefinedObject>>DoIt (POSIXSignal is Undeclared)

Fetched -> BaselineOfPOSIXSignal-GitHub.1509525907 --- 
https://github.com/estebanlm/pharo-posix-signal.git[master] --- 
/opt/pharo/pharo-local/iceberg/estebanlm/pharo-posi

x-signal/src (Libgit)
Loaded -> BaselineOfPOSIXSignal-GitHub.1509525907 --- 
https://github.com/estebanlm/pharo-posix-signal.git[master] --- 
/opt/pharo/pharo-local/iceberg/estebanlm/pharo-posix

-signal/src (Libgit)
Loading baseline of BaselineOfPOSIXSignal...
Fetched -> POSIXSignal-GitHub.1509525907 --- 
https://github.com/estebanlm/pharo-posix-signal.git[master] --- 
/opt/pharo/pharo-local/iceberg/estebanlm/pharo-posix-signal/s

rc (Libgit)
Loaded -> POSIXSignal-GitHub.1509525907 --- 
https://github.com/estebanlm/pharo-posix-signal.git[master] --- cache

finished baseline^C
Segmentation fault Wed Nov  1 19:03:27 2017


/opt/pharo/pharo-vm/lib/pharo/5.0-201707201942/pharo
Pharo VM version: 5.0-201707201942  Thu Jul 20 20:40:54 UTC 2017 gcc 
4.6.3 [Production Spur 64-bit VM]
Built from: CoInterpreter VMMaker.oscog-eem.2254 uuid: 
4f2c2cce-f4a2-469a-93f1-97ed941df0ad Jul 20 2017
With: StackToRegisterMappingCogit VMMaker.oscog-eem.2252 uuid: 
2f3e9b0e-ecd3-4adf-b092-cce2e2587a5c Jul 20 2017
Revision: VM: 201707201942 
https://github.com/OpenSmalltalk/opensmalltalk-vm.git $ Date: Thu Jul 20 
12:42:21 2017 -0700 $ Plugins: 201707201942 https://github.com/OpenSma

lltalk/opensmalltalk-vm.git $
Build host: Linux testing-gce-74d10329-bbfd-42e5-8995-b0e3a68c73cb 
3.13.0-115-generic #162~precise1-Ubuntu SMP Fri Mar 24 16:47:06 UTC 2017 
x86_64 x86_64 x86_64 GNU/Linux
plugin path: /opt/pharo/pharo-vm/lib/pharo/5.0-201707201942 [default: 
/opt/pharo/pharo-vm/lib/pharo/5.0-201707201942/]



C stack backtrace & registers:
rax 0x2c1da080 rbx 0x2c1d9f10 rcx 0x2c1da138 rdx 0x2c1d9fc8
rdi 0x2c1d9ce8 rsi 0x2c1d9ce8 rbp 0x2c1d9e58 rsp 0x2c1da1f0
r8  0x2c1d9728 r9  0x2c1d97e0 r10 0x2c1d9898 r11 0x2c1d9950
r12 0x2c1d9a08 r13 0x2c1d9ac0 r14 0x2c1d9b78 r15 0x2c1d9c30
rip 0x2c1da2a8
*[0x7ffc2c1da2a8]
/opt/pharo/pharo-vm/lib/pharo/5.0-201707201942/pharo[0x41ccd1]
/opt/pharo/pharo-vm/lib/pharo/5.0-201707201942/pharo[0x41d05f]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7f155abba390]
[0x7f155b151000]
[0x0]


Smalltalk stack dump:
0x7ffc2c1de400 M ProcessorScheduler class>idleProcess 0x2f982d8: 
a(n) ProcessorScheduler class
0x7ffc2c1de440 I [] in ProcessorScheduler class>startUp 0x2f982d8: 
a(n) ProcessorScheduler class
0x7ffc2c1de480 I [] in BlockClosure>newProcess 0x60396e8: a(n) 
BlockClosure


Most recent primitives
relinquishProcessorForMicroseconds:
relinquishProcessorForMicroseconds:
relinquishProcessorForMicroseconds:
relinquishProcessorForMicroseconds:
relinquishProcessorForMicroseconds:
relinquishProcessorForMicroseconds:
relinquishProcessorForMicroseconds:
relinquishProcessorForMicroseconds:
relinquishProcessorForMicroseconds:
relinquishProcessorForMicroseconds:
relinquishProcessorForMicroseconds:
relinquishProcessorForMicroseconds:
relinquishProcessorForMicroseconds:
relinquishProcessorForMicroseconds:
relinquishProcessorForMicroseconds:
relinquishProcessorForMicroseconds:
relinquishProcessorForMicroseconds:
relinquishProcessorForMicroseconds:
primUTCMicrosecondsClock
signal
primSignal:atUTCMicroseconds:
wait
millisecondClockValue
millisecondClockValue
tempAt:
tempAt:put:
tempAt:
terminateTo:

Re: [Pharo-users] What is code 137 / how to gracefully shut down via SIGTERM?

2017-10-31 Thread Herby Vojčík

Bruce O'Neel wrote:

Hi,

Well, not so scary.  That's the mac list.  From a not so recent mac with
Xcode 7 installed it looks like the below.

The good news here is that the common signals have the same numbers.
  And how often do you get an SIGXCPU?


I was thinking of SIGUSR1, which can be used for useful things (IIRC nginx or
apache reload when it is sent), but that one is not common. :-( Of course,
SIGTERM/SIGINT/SIGHUP are, so yes, these are the most important.


Herby


cheers

bruce

#define SIGHUP  1   /* hangup */

#define SIGINT  2   /* interrupt */

#define SIGQUIT 3   /* quit */

#define SIGILL  4   /* illegal instruction (not reset when caught) */

#define SIGTRAP 5   /* trace trap (not reset when caught) */

#define SIGABRT 6   /* abort() */

#if  (defined(_POSIX_C_SOURCE) && !defined(_DARWIN_C_SOURCE))

#define SIGPOLL 7   /* pollable event ([XSR] generated, not
supported) */

#else   /* (!_POSIX_C_SOURCE || _DARWIN_C_SOURCE) */

#define SIGIOT  SIGABRT /* compatibility */

#define SIGEMT  7   /* EMT instruction */

#endif  /* (!_POSIX_C_SOURCE || _DARWIN_C_SOURCE) */

#define SIGFPE  8   /* floating point exception */

#define SIGKILL 9   /* kill (cannot be caught or ignored) */

#define SIGBUS  10  /* bus error */

#define SIGSEGV 11  /* segmentation violation */

#define SIGSYS  12  /* bad argument to system call */

#define SIGPIPE 13  /* write on a pipe with no one to read it */

#define SIGALRM 14  /* alarm clock */

#define SIGTERM 15  /* software termination signal from kill */

#define SIGURG  16  /* urgent condition on IO channel */

#define SIGSTOP 17  /* sendable stop signal not from tty */

#define SIGTSTP 18  /* stop signal from tty */

#define SIGCONT 19  /* continue a stopped process */

#define SIGCHLD 20  /* to parent on child stop or exit */

#define SIGTTIN 21  /* to readers pgrp upon background tty read */

#define SIGTTOU 22  /* like TTIN for output if (tp->t_local) */

#if  (!defined(_POSIX_C_SOURCE) || defined(_DARWIN_C_SOURCE))

#define SIGIO   23  /* input/output possible signal */

#endif

#define SIGXCPU 24  /* exceeded CPU time limit */

#define SIGXFSZ 25  /* exceeded file size limit */

#define SIGVTALRM 26/* virtual time alarm */

#define SIGPROF 27  /* profiling time alarm */

#if  (!defined(_POSIX_C_SOURCE) || defined(_DARWIN_C_SOURCE))

#define SIGWINCH 28 /* window size changes */

#define SIGINFO 29  /* information request */

#endif

#define SIGUSR1 30  /* user defined signal 1 */

#define SIGUSR2 31  /* user defined signal 2 */


On 31 October 2017 14:56 Herby Vojčík <he...@mailbox.sk> wrote:

Bruce O'Neel wrote:
 > Hi,
 >
 > Posix requires that if the process is killed the return status is
 > greater than 128.
 >
 > What is convention on linux systems is that if the process is sent a
 > signal then the signal number is added to 128. Therefore 137 is
SIGKILL
 > (kill -9). SIGTERM is 143, SIGABRT is 134, SIGSEGV is 139, and so
on.
 > I've not seen an exception to this but there could be.
 >
 > Signals off of my closest linux system look like:
 >
 > #define SIGHUP 1
 > #define SIGINT 2
 > #define SIGQUIT 3
 > #define SIGILL 4
 > #define SIGTRAP 5
 > #define SIGABRT 6
 > #define SIGIOT 6
 > #define SIGBUS 7
 > #define SIGFPE 8
 > #define SIGKILL 9
 > #define SIGUSR1 10
 > #define SIGSEGV 11
 > #define SIGUSR2 12
 > #define SIGPIPE 13
 > #define SIGALRM 14
 > #define SIGTERM 15

Scary, because Esteban's sigtrapping package has them defined a bit
differently:

{ #category : #'class initialization' }
POSIXSignal class >> initialize [
SIGHUP := 1.
SIGINT := 2.
SIGQUIT := 3.
SIGILL := 4.
SIGTRAP := 5.
SIGABRT := 6.
SIGPOLL := 7.
SIGIOT := SIGABRT.
SIGEMT := 7.
SIGFPE := 8.
SIGKILL := 9.
SIGBUS := 10.
SIGSEGV := 11.
SIGSYS := 12.
SIGPIPE := 13.
SIGALRM := 14.
SIGTERM := 15.
SIGURG := 16.
SIGSTOP := 17.
SIGTSTP := 18.
SIGCONT := 19.
SIGCHLD := 20.
SIGTTIN := 21.
SIGTTOU := 22.
SIGIO := 23.
SIGXCPU := 24.
SIGXFSZ := 25.
SIGVTALRM := 26.
SIGPROF := 27.
SIGWINCH := 28.
SIGINFO := 29.
SIGUSR1 := 30.
SIGUSR2 := 31.
]








Re: [Pharo-users] What is code 137 / how to gracefully shut down via SIGTERM?

2017-10-31 Thread Herby Vojčík

Bruce O'Neel wrote:

Hi,

Posix requires that if the process is killed the return status is
greater than 128.

The convention on Linux systems is that if the process is sent a
signal then the signal number is added to 128.  Therefore 137 is SIGKILL
(kill -9).  SIGTERM is 143, SIGABRT is 134, SIGSEGV is 139, and so on.
I've not seen an exception to this but there could be.

Signals off of my closest linux system look like:

#define SIGHUP   1
#define SIGINT   2
#define SIGQUIT  3
#define SIGILL   4
#define SIGTRAP  5
#define SIGABRT  6
#define SIGIOT   6
#define SIGBUS   7
#define SIGFPE   8
#define SIGKILL  9
#define SIGUSR1 10
#define SIGSEGV 11
#define SIGUSR2 12
#define SIGPIPE 13
#define SIGALRM 14
#define SIGTERM 15


Scary, because Esteban's sigtrapping package has them defined a bit 
differently:


{ #category : #'class initialization' }
POSIXSignal class >> initialize [
SIGHUP  := 1.
SIGINT  := 2.
SIGQUIT := 3.
SIGILL  := 4.
SIGTRAP := 5.
SIGABRT := 6.
SIGPOLL := 7.
SIGIOT  := SIGABRT.
SIGEMT  := 7.
SIGFPE  := 8.
SIGKILL := 9.
SIGBUS  := 10.
SIGSEGV := 11.
SIGSYS  := 12.
SIGPIPE := 13.
SIGALRM := 14.
SIGTERM := 15.
SIGURG  := 16.
SIGSTOP := 17.
SIGTSTP := 18.
SIGCONT := 19.
SIGCHLD := 20.
SIGTTIN := 21.
SIGTTOU := 22.
SIGIO   := 23.
SIGXCPU := 24.
SIGXFSZ := 25.
SIGVTALRM   := 26.
SIGPROF := 27.
SIGWINCH:= 28.
SIGINFO := 29.
SIGUSR1 := 30.
SIGUSR2 := 31.
]



Re: [Pharo-users] What is code 137 / how to gracefully shut down via SIGTERM?

2017-10-29 Thread Herby Vojčík

werner kassens wrote:

Hi Herby,
eventually you might want to look at
https://github.com/moby/moby/issues/1063
but then i dont know anything about these things.


Hopefully they have this solved already... but I learned that 137 generally
means "killed"; I did not know that. :-)



werner


Herby



Re: [Pharo-users] What is code 137 / how to gracefully shut down via SIGTERM?

2017-10-29 Thread Herby Vojčík

Esteban Lorenzano wrote:

Hi,

I don’t know if is useful in your case, but you made me remember I made
a small tool to trap unix signals within Pharo. I uploaded then to github.

https://github.com/estebanlm/pharo-posix-signal

is very easy to use and it will allow you to trap any signal and do what
you want ;)


Thanks, looks like it'll help.


Esteban



On 28 Oct 2017, at 13:39, Herby Vojčík <he...@mailbox.sk> wrote:

Hi,

I had to find out how to automatically deploy the backend written in
Pharo, and so far it uses docker-compose stop to stop the instance
(and later docker-compose up -d to get everything up again).

I noticed the stop phase takes a while and ends with status code 137.
I presume it ended forcefully and not gracefully.

What is the idiomatic way to wait on SIGTERM and close the process
gracefully?

Thanks, Herby








Re: [Pharo-users] [ANN] Iceberg 0.6.2 backported to Pharo 6.1

2017-10-28 Thread Herby Vojčík

Thanks.

I have not yet tried whether I can use it for push, but it's great already,
as I could remove all the workarounds I needed before to work with the local
repo.


Herby

Esteban Lorenzano wrote:

Hi,

I backported the latest Iceberg version to Pharo 6.1 to allow people to benefit
from the latest changes.
This version has an important amount of tweaks and fixes, but most important is
the inclusion of the tonel file format (this is the default for Pharo 7.0, optional for
Pharo 6.1), which introduces a file-per-class format.
The advantages of this format have everything to do with speed of access (it is
easier to reconstruct a package) and space on disk (methods are
usually small but the minimum space on disk is usually 4k, so we waste a lot of
space). It is also a better format for SSD disks.

To backport Iceberg 0.6.2 I also needed to backport latest version of 
Metacello, so Pharo 6.1 and Pharo7+ users now also have the latest version of 
it available :)

cheers!
Esteban










[Pharo-users] Read-only images / how to deploy new image as fast as possible?

2017-10-28 Thread Herby Vojčík

Hi,

I came to the phase where I actually deploy the small backend written in 
Pharo, and I wonder about two things:


  1. Is it possible to make .image / .changes read-only? Will Pharo
just work (not writing the image, but that is not needed, as all data are
in an sqlite file)?


  2. How to "switch" images as fast as possible? I presume there is no
way to tell the VM to "stop doing this and reload a different image" or
something like that? Do I need to stop the running one and then start the
new one? Or is it possible to "swap" the read-only image for a new one and
tell the VM to reload (just wanting to minimize the off-time)?


Thanks, Herby



[Pharo-users] What is code 137 / how to gracefully shut down via SIGTERM?

2017-10-28 Thread Herby Vojčík

Hi,

I had to find out how to automatically deploy the backend written in 
Pharo, and so far it uses docker-compose stop to stop the instance (and 
later docker-compose up -d to get everything up again).


I noticed the stop phase takes a while and ends with status code 137. I 
presume it ended forcefully and not gracefully.


What is the idiomatic way to wait on SIGTERM and close the process 
gracefully?


Thanks, Herby



Re: [Pharo-users] Glorp: #includesKey:

2017-10-25 Thread Herby Vojčík

Niall Ross wrote:

Dear Herby,
adding #includesKey: is certainly doable. If you look at callers above
and subcallers below #anySatisfyDefault: you will see the issues
involved. Your includesKey: needs the same degree of platform-awareness
that Glorp's #anySatisfy: and #allSatisfy: implementations use. But
since #anySatisfy: is there, you have a ready template to follow.


I don't feel competent enough; from what I looked at, Glorp's innards are a
bit complex.



Alternatively, I may well add #includesKey: - though not as my most
urgent task. :-) I did a fair amount of work to extend the usability of
DictionaryMappings in Glorp three years ago (part of demonstrating the
ObjectStudio business-mapping/Glorp-generating tools - see my ESUG 2014
presentation for details) but I did not then think of providing
#includesKey:. Thanks for suggesting the idea.


My pleasure. :-)

As I wrote elsewhere, #keys is also something (probably the most
general thing) that could be added as a matter of allowing one to work with
the key field - as far as I was able to find out, there is no way to
actually get to the key, for which I found a workaround since I mapped an
object, but I would be out of luck if I mapped a single primitive value.


For the moment, it is not pressing, but yes, it would be nice to be able
to have #keys mapping to the key field and #includesKey: as an idiomatic way
to do keys includes:.
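
For concreteness, the two query forms side by side (Agent and the variable names are just illustrative; read:where: is used as in the examples elsewhere in this thread):

	"What I would like to be able to write:"
	session read: Agent where: [ :each | each tools includesKey: aToolId ].

	"The workaround that works today, only because the dictionary values are mapped objects:"
	session read: Agent where: [ :each |
		each tools anySatisfy: [ :tool | tool id = aToolId ] ].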


Thanks again, Herby


If you were to work on this, be aware:

- always reinitialise FunctionExpression class after adding/changing any
Glorp function (or just close and reopen your image, of course)

- If (and only if) you construct Query whereClauses in stages (e.g. you
have code like

myQuery AND: [:customer | customer orders includesKey: #onlineOrders]

or similar) then, using its callers in GlorpTest as a guide, know when
you might need to send #setUpBaseFromSession: to your query while doing
so. (N.B. that method is a Glorp version 8.2.1 addition; you will not
have it in older Glorp.) The point is that stage-constructed where
clauses must convert from block to expression before execution, to
combine the stages. Any that use #anySatisfy:/#allSatisfy: need
platform-specific information to do this; I would expect any
#includesKey: implementation to be the same.

HTH
Niall Ross


Tom Robinson wrote:

Hi Herby,

In my opinion, the way you found to make it work is the way it should
be. The reason is that the first way doesn't translate into SQL and the
second one does. It might be possible to add includesKey: functionality
but resolving that to SQL would be more complex. I would not call this a
bug. I would call it a limitation of the implementation. I don't know of
anyone planning to add this feature to Glorp right now.

Regards,

Tom

On 10/24/2017 12:27 PM, Herby Vojčík wrote:


Hello!

I am using a DictionaryMapping in my code, and I wanted to use
#includesKey: in #where: clause (something akin

each tools includesKey: aToolId

) to select only rows for which DictionaryMapping uses certain key. It
failed with the error in the lines of "#tools does not resolve to
field". I had to come up with

each tools anySatisfy: [ :tool | tool id = aToolId ]

Is it the bug / feature / problem in my approach? If bug, is it
planned to add #includesKey: translation to DictionaryMapping?

Thanks, Herby







Re: [Pharo-users] Glorp: #includesKey:

2017-10-25 Thread Herby Vojčík

jtuc...@objektfabrik.de wrote:

Herby,

I must admit I've never used Dictionary Mappings with Glorp, so I don't
have an answer.
But I am a bit confused by your code examples. See below


Am 24.10.17 um 20:27 schrieb Herby Vojčík:

Hello!

I am using a DictionaryMapping in my code, and I wanted to use
#includesKey: in #where: clause (something akin

  each tools includesKey: aToolId


What SQL expression would you expect here?


SELECT * FROM AGENT a WHERE a.tool_id = :aToolId

AFAICT, DISTINCT is not needed as <id, tool_id> are fks to other table's 
compound primary key <agent_id, id>, so they are known to be unique.



I would guess that you want to build a subquery like exists, because the
way I understand the query, you want to find all instances of (whatever
each is) that hold an Association in their tools dictionary where the
key is aToolId.


Yes.

Maybe it needs EXISTS, I don't know. Semantics is clear, though.



) to select only rows for which DictionaryMapping uses certain key. It
failed with the error in the lines of "#tools does not resolve to
field". I had to come up with




each tools anySatisfy: [ :tool | tool id = aToolId ]

Hmm. This makes me wonder. Is #tools really a Dictionary? Inside the
Block, I'd expect the :tool parameter to be an Association, and that
doesn't understand #id,does it? I guess @each is the parameter within an
Block like in

self session read: MyClass where: [:each| each tools ...]

If so, I have a hard time believing that anySatisfy: would work (never
tried)...


Yes, it works. Dictionary enumerates values, as I have written in reply 
to Tom's post.



Is it the bug / feature / problem in my approach? If bug, is it
planned to add #includesKey: translation to DictionaryMapping?


I don't know, but would guess it is not currently on the Todo-list.

My first tip would be to try and find some slides (most likely made by
Niall and presented at an ESUG) including the words "subquery", "glorp"
and "exists". You won't find much, but that may be a starting point.


I actually managed to get there, but
  a) using an ugly workaround IMO; #includesKey: is part of a dictionary's
protocol and should be known;
  b) as I wrote in the reply to Tom, the workaround only worked because the
mapping was to an object. If the mapping was to a primitive value (number,
string), I would not have any 'tool id' ready to use and I would be left
without options. There is no way to construct such a query atm in
Glorp, afaict, if I cannot use #keys nor #includesKey: in a where clause.
Is that not a bug?



Not sure this helps, ;-)


Joachim


Thanks, Herby



Re: [Pharo-users] Glorp: #includesKey:

2017-10-25 Thread Herby Vojčík

Tom Robinson wrote:

Hi Herby,

In my opinion, the way you found to make it work is the way it should
be. The reason is that the first way doesn't translate into SQL and the
second one does. It might be possible to add includesKey: functionality
but resolving that to SQL would be more complex. I would not call this a


I think I disagree with this, but correct me if I am wrong.

With DictionaryMapping, you map a set of (key, value) pairs into 
appropriate fields in a table. In essence, it does not differ at all from 
mapping any other collection containing objects with fields (actually, 
from what I understood, it does internal tricks to do just that - it creates 
an internal "class mapping" for an association of that particular 
dictionary mapping).


In the case of primitive value dictionaries, it even _is_ the same: the key is 
mapped to one field, the value is mapped to a different field. If I want to 
create a subquery using the value, I can freely use things like #anySatisfy: 
to filter on that value (which I did in my case, but I come to that 
later). Since Dictionary enumerates values in do:, select:, collect: 
(and anySatisfy:), writing


  each tools anySatisfy: [...]

is the same as writing

  each tools values anySatisfy: [...]

but what if I wanted to write

  each tools keys anySatisfy: [...]

? I cannot, Glorp fails on 'keys' (I tried to use `keys includes:` 
instead of `includesKey:`, to no avail).


So what I want to point out here is that in DictionaryMapping I map keys 
and values to different fields in the table (values can be complex, in which 
case they are mapped to more fields, but that is not an important 
distinction here), yet Glorp only allows me to use values (and only 
implicitly) in where clauses; I have no way to use keys at all there.


So I assert here that "resolving that to SQL would be more complex" is 
not true. The key is mapped the same way the value is; if I can use a where 
clause that uses the value in a certain way, I should be able to use the key 
as well - generating SQL from one or the other has the same level of difficulty 
(in fact, I think the key is easier, as you do not actually need to join the 
foreign table); the generated SQL could be something like


  SELECT * FROM AGENT a
    WHERE a.tool_id = <aToolId asDbValue>

The fact that I found the

  each tools anySatisfy: [ :tool | tool id = aToolId ]

workaround at all is in fact only because non-primitive mappings are processed 
differently in DictionaryMapping: non-primitive values are _required_ to have 
defined (not in the table, that is understandable, I need to be able 
to make a join, but in the descriptor) a mapping that contains the key. So 
in essence, that could be represented as


  SELECT * FROM AGENT a
    WHERE a.tool_id IN
      (SELECT t.id FROM TOOL t
        WHERE t.agent_id = a.id
          AND t.id = a.tool_id
          AND t.id = <aToolId asDbValue>)

which is basically the same as above, as actually "a.tool_id = <aToolId 
asDbValue>" is executed here as well (plus checking that such a dictionary 
entry actually exists at all; maybe that should be present in the previous case as 
well, but Glorp can generate the join, that's not the question here).


It is actually an interesting question what SQL Glorp generated 
for "TgAgent readOneOf: [:a|a tools anySatisfy: [:t|t id = toolId]]".


The point here is:

  1. Why do I need to work around it via [:tool | tool id = aToolId] 
when I am only interested in "which tools the agent uses" (in fact, give 
me all agents using this tool)?
  2. Should this be a key -> primitive value mapping, I have simply _no 
way_ to ask the equivalent of #includesKey: at all (as the value is, for 
example, a String or an Integer, so no `tool id` is available).



bug. I would call it a limitation of the implementation. I don't know of
anyone planning to add this feature to Glorp right now.


That's why I would say #keys (and, ideally, #includesKey:) are actually a 
needed addition to Glorp's set of known-and-translated selectors in the case 
of DictionaryMapping.



Regards,

Tom


Thanks, Herby




[Pharo-users] Glorp: #includesKey:

2017-10-24 Thread Herby Vojčík

Hello!

I am using a DictionaryMapping in my code, and I wanted to use 
#includesKey: in #where: clause (something akin


  each tools includesKey: aToolId

) to select only rows for which DictionaryMapping uses certain key. It 
failed with the error in the lines of "#tools does not resolve to 
field". I had to come up with


  each tools anySatisfy: [ :tool | tool id = aToolId ]

Is it the bug / feature / problem in my approach? If bug, is it planned 
to add #includesKey: translation to DictionaryMapping?


Thanks, Herby




Re: [Pharo-users] using mocketry to mock subcall

2017-10-24 Thread Herby Vojčík

Denis Kudriashov wrote:

Hi Herby.

2017-10-20 18:49 GMT+02:00 Herby Vojčík <he...@mailbox.sk
<mailto:he...@mailbox.sk>>:


I had this problem. I tried something like (though not exactly w/
this code):

Foo stub new will: [ :aMessage |
   | original |
   original := MockExpectedOriginalCall new executeFor: aMessage.
   original stub.
   ^ original ]

but IIRC it failed on doing #stub inside will: block.


So the arguments of the block in message #will: are expected to be the arguments
of the stubbed message (not a full message instance).
We can introduce a new kind of expected action to automatically stub any
result of a message send:

Foo stub new willStubRealResult.


Yeah, exactly, something like that is useful.


With help of new subclass of MockExpectedOriginalMethodCall:

MockExpectedMethodResultStub>>executeFor: anOccurredMessage

realMethodResult := super executeFor: anOccurredMessage.

realMethodResult stub.

^realMethodResult.

MockExpectedMessage>>willStubRealResult

self will: MockExpectedMethodResultStub new


Actually I tried this path myself, but it failed badly while doing the 
sub-#stub, with lots of "go one metalevel down" etc. calls in the stack, so I 
gave up with "I don't understand this well enough to make it work atm".


Does your solution actually work?


And then you will be able specify expectations for all Foo instances:

(Instance of: Foo) stub someMessage willReturn: #const


As in, would `Foo new someMessage should be: #const`? In that case, 
great and thanks.
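
That is, the whole thing would read something like this sketch (assuming the 
dev-branch additions above; Foo stands in for any class under test):

  Foo stub new willStubRealResult.
  (Instance of: Foo) stub someMessage willReturn: #const.
  Foo new someMessage should be: #const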



And you will be able assert any message which was sent to Foo instances:

(Instance of: Foo) should receive someRequiredMessage which should
equal: #expectedValue

I committed this code to the dev branch. But I am wondering whether such
a kind of behaviour is really needed. Probably it is useful, as you ask
about it.


Herby



Re: [Pharo-users] using mocketry to mock subcall

2017-10-20 Thread Herby Vojčík

Denis Kudriashov wrote:

So you want to stub message to *any* instance of class. Right?

Conceptually, in Mocketry/StateSpecs way, it should looks like:

(Instance of: Something) stub askForName willReturn: 'new'.

or:

(Kind of: Something) stub askForName willReturn: 'new'.


But it will not really work. It will work only on instances which are
already "stubbed".
Making it really transparent would require crazy magic under the hood. And I
am not sure that it is really possible,
because generally it should cover existing instances and not just ones newly
created during the test.
And for the simple case: to automatically stub new instances, their classes
should be stubbed with special constructors. But the system does not know
which exact class-side messages are constructors.


I had this problem. I tried something like (though not exactly w/ this 
code):


Foo stub new will: [ :aMessage |
  | original |
  original := MockExpectedOriginalCall new executeFor: aMessage.
  original stub.
  ^ original ]

but IIRC it failed on doing #stub inside will: block.



So now you can just use a simple mock which will be returned as a simple
stub from the concrete constructor of your class:

some := Mock new.

some stub askForName willReturn: 'new'.

Something stub new willReturn: some.


I think it looks not bad and no crazy magic is involved. And of
course you can replace the mock with a real instance if you want:

some := Something new.

some stub askForName willReturn: 'new'.

Something stub new willReturn: some.
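
Spelled out as a complete test, a sketch (Something, #askForName and the 
'new suffix' expectation come from Peter's example quoted below):

testRenameWithStubbedConstructor
	| some |
	some := Something new.
	some stub askForName willReturn: 'new'.
	Something stub new willReturn: some.
	Something new name should be: 'new suffix'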



2017-10-20 14:21 GMT+02:00 Denis Kudriashov >:

Yes, but in that case you stub messages to the class itself. For
your example it means:

Something stub askForName willReturn: 'new'.

Something askForName should be: 'new'



2017-10-20 13:58 GMT+02:00 Peter Uhnák >:

Thanks Denis, that did the trick.

But I thought that I can also mock at the class level (at least
it was shown in the docs).

Peter

On Sat, Oct 7, 2017 at 11:24 AM, Denis Kudriashov
> wrote:

Hi Peter.

You should stub instance instead of class:
  s := Something new.
  s stub askFor...

7 окт. 2017 г. 9:18 пользователь "Peter Uhnák"
> написал:

Hi,

maybe I am missing something fundamental, because this
seems like an obvious scenario

I have a class Something with two methods

Something>>askForName
^ UIManager default request: 'Name'

Something>>name
^ self askForName , ' suffix'

Now I want to mock out askForName, so I can run it
automatically in tests...

I've tried just stubbing it...

SomethingTest>>testRenameStub
Something stub askForName willReturn: 'new'.
Something new name should be: 'new suffix'

however it still opens the window to ask for the name,
so the original method is being called.

Right now I have to stub it with metalinks, which
doesn't play well with some tools (hapao)

SomethingTest>>testRenameMeta
| link newName |
link := MetaLink new
metaObject: [ 'new' ];
control: #instead.
(Something >> #askForName) ast link: link.
[ newName := Something new name ]
ensure: [ link uninstall ].
self assert: newName equals: 'new suffix'

Thanks,
Peter









Re: [Pharo-users] Yet another Pharo in docker

2017-10-18 Thread Herby Vojčík

Herby Vojčík wrote:

https://hub.docker.com/r/herbysk/pharo/


Thanks to gotchas@dockerhub whose scripts I adapted.



[Pharo-users] Yet another Pharo in docker

2017-10-18 Thread Herby Vojčík

https://hub.docker.com/r/herbysk/pharo/



Re: [Pharo-users] Mocketry willGenerateValueFrom: (was: Re: How do you mock http?)

2017-10-17 Thread Herby Vojčík

Denis Kudriashov wrote:

I would hide the tricks even more:

   ZnClient stubRequests: [ :request |
   request uri =
('https://onesignal.com/api/v1/players/{1}?app_id={2}
<https://onesignal.com/api/v1/players/%7B1%7D?app_id=%7B2%7D>' format: {
self uidy: 'Q7'. appId }) asZnUrl
 and: [ #(GET HEAD) includes: request method ] ]
 byResponse: [ :request | ZnResponse ok: (ZnEntity json: '{}') ] ].


This would be nice, but it would need the cooperation of Zinc (unless I hide 
the Mocketry use under an extension method).
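
A sketch of what such an extension method could look like (selector taken from 
the proposal above, purely hypothetical, just wrapping the Mocketry + ZnMockClient 
combination used elsewhere in this thread):

  ZnClient class >> stubRequests: conditionBlock byResponse: responseBlock
      self stub new will: [ ZnMockClient
          whenRequest: conditionBlock
          thenResponse: responseBlock ]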



And maybe with set of more simple cases:

   ZnClient
 stubGET: ('https://onesignal.com/api/v1/players/{1}?app_id={2}
<https://onesignal.com/api/v1/players/%7B1%7D?app_id=%7B2%7D>' format: {
self uidy: 'Q7'. appId }) asZnUrl
 byResponse: [ :request | ZnResponse ok: (ZnEntity json: '{}') ] ].


Well, I actually send HEAD in production code, as I only need to know if 
it is 2xx or 4xx (of course using #isSuccess and not testing the status code 
myself) and I want to be nice. But GET is good as well. Now what? :-)



Or better:

   ZnClient
 stubGET: ('https://onesignal.com/api/v1/players' asZnUrl / (self
uidy: 'Q7')
 withParams: {'app_id' -> appId }
 byResponse: [ :request | ZnResponse ok: (ZnEntity json: '{}') ] ].


Yes, we're getting to where the nock library is on node. I didn't want to make 
a full http mocking DSL (it would be nice, though). Actually, I asked if there 
is one in the original thread. :-)


Herby





2017-10-17 14:22 GMT+02:00 Herby Vojčík <he...@mailbox.sk
<mailto:he...@mailbox.sk>>:

Denis Kudriashov wrote:

Hi Herby.

There is message #will: which accepts the block with possible
arguments
(if needed).

   ZnClient stub new will: [ ZnMockClient ...]

But generally your approach looks bad to me. You put too many
details in
your tests which just duplicate the Zinc API used in the domain code. It
makes tests brittle and tightly coupled. And they look quite
unreadable.


BTW, made it shorter and probably more readable as a result (by also
shortening the test condition):

   ZnClient stub new will: [ ZnMockClient
 whenRequest: [ :request |
   request uri =
('https://onesignal.com/api/v1/players/{1}?app_id={2}
<https://onesignal.com/api/v1/players/%7B1%7D?app_id=%7B2%7D>'
format: { self uidy: 'Q7'. appId }) asZnUrl
 and: [ #(GET HEAD) includes: request method ] ]
 thenResponse: [ :request | ZnResponse ok: (ZnEntity json: '{}')
] ].

The reason why I could not test the uri as-is is that Zinc adds ':443' to
it. But as mentioned, the previous version was very unreadable; with asZnUrl
it is probably better.

Herby







Re: [Pharo-users] How do you mock http?

2017-10-17 Thread Herby Vojčík

Herby Vojčík wrote:

Hello!

I felt the need to mock http api (like nock in node, that is, mock http
request-response itself on low-level part, leaving aside the question of
what wrapper / library one uses to get to that http; in node it mocks
basic http layer, here I tackled ZnClient), but struggled for a time how
to grasp it. Finally I used something like this (with help of Mocketry,
`1 to: 10` to mean "enough to actually be used even if there are more
unrelated uses", it could as well be `1 to: 100`):

ZnClient stub new willReturnValueFrom:
((1 to: 10) collect: [ :i | ZnMockClient
whenRequest: [ :request |
{ request uri scheme. request uri authority. request uri
pathPrintString. request uri query associations asSet }
= { #https. 'onesignal.com'. '/api/v1/players/{1}' format: { UUID
fromString36: 'Q7' }. { 'app_id' -> appId } asSet }
and: [ #(GET HEAD) includes: request method ] ]
thenResponse: [ :request | ZnResponse ok: (ZnEntity json: '{}') ] ]).


Actually, cleaned it up a bit, now I use shorter:

  ZnClient stub new will: [ ZnMockClient
whenRequest: [ :request |
  request uri = 
('https://onesignal.com/api/v1/players/{1}?app_id={2}' format: { UUID 
fromString36: 'Q7'. appId }) asZnUrl

and: [ #(GET HEAD) includes: request method ] ]
thenResponse: [ :request | ZnResponse ok: (ZnEntity json: '{}') ] ].


with the help of this class (garbled utf not my fault, iceberg metacello
integration does it):

'From Pharo6.0 of 13 May 2016 [Latest update: #60512] on 17 October 2017
at 12:05:38.908634 pm'!
ZnClient subclass: #ZnMockClient
instanceVariableNames: 'conditionBlock responseBlock'
classVariableNames: ''
poolDictionaries: ''
category: 'Towergame-Tests'!
!ZnMockClient commentStamp: 'HerbyVojcik 10/16/2017 16:43' prior: 0!
I am a mock ZnClient.

I am created with ZnMockClient whenRequest: whenBlock thenResponse:
thenBlock.

Upon execution of the request, when (whenBlock cull: request) is true,
response is set to (thenBlock cull: request). Otherwise, behaviour is
delegated to super.!


!ZnMockClient methodsFor: 'accessing' stamp: 'HerbertVojčík 10/17/2017
12:00:27'!
conditionBlock
^ conditionBlock! !

!ZnMockClient methodsFor: 'accessing' stamp: 'HerbertVojčík 10/17/2017
12:00:27'!
responseBlock: anObject
responseBlock := anObject! !

!ZnMockClient methodsFor: 'accessing' stamp: 'HerbertVojčík 10/17/2017
12:00:27'!
conditionBlock: anObject
conditionBlock := anObject! !

!ZnMockClient methodsFor: 'accessing' stamp: 'HerbertVojčík 10/17/2017
12:00:27'!
responseBlock
^ responseBlock! !


!ZnMockClient methodsFor: 'private protocol' stamp: 'HerbertVojčík
10/17/2017 12:00:27'!
executeRequestResponse
^ (self conditionBlock cull: self request)
ifTrue: [ response := self responseBlock cull: self request. response
contents ]
ifFalse: [ super executeRequestResponse ]! !

"-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- "!

ZnMockClient class
instanceVariableNames: ''!

!ZnMockClient class methodsFor: 'instance creation' stamp:
'HerbertVojčík 10/17/2017 12:00:27'!
whenRequest: aBlock thenResponse: anotherBlock
^ self new
conditionBlock: aBlock;
responseBlock: anotherBlock;
yourself! !

Question 1: Is there a better way?

Question 2: If not, would ZnMockClient be good addition to Zinc itself,
to ease testing for others?

Herby






Re: [Pharo-users] Mocketry willGenerateValueFrom: (was: Re: How do you mock http?)

2017-10-17 Thread Herby Vojčík

Denis Kudriashov wrote:

Hi Herby.

There is message #will: which accepts the block with possible arguments
(if needed).

  ZnClient stub new will: [ ZnMockClient ...]

But generally your approach looks bad to me. You put too many details in
your tests which just duplicate the Zinc API used in the domain code. It
makes tests brittle and tightly coupled. And they look quite unreadable.


BTW, made it shorter and probably more readable as a result (by also 
shortening the test condition):


  ZnClient stub new will: [ ZnMockClient
whenRequest: [ :request |
  request uri = 
('https://onesignal.com/api/v1/players/{1}?app_id={2}' format: { self 
uidy: 'Q7'. appId }) asZnUrl

and: [ #(GET HEAD) includes: request method ] ]
thenResponse: [ :request | ZnResponse ok: (ZnEntity json: '{}') ] ].

The reason why I could not test the uri as-is is that Zinc adds ':443' to it. 
But as mentioned, the previous version was very unreadable; with asZnUrl it is 
probably better.


Herby



[Pharo-users] Mocketry willGenerateValueFrom: (was: Re: How do you mock http?)

2017-10-17 Thread Herby Vojčík

Herby Vojčík wrote:

Hello!

I felt the need to mock http api (like nock in node, that is, mock http
request-response itself on low-level part, leaving aside the question of
what wrapper / library one uses to get to that http; in node it mocks
basic http layer, here I tackled ZnClient), but struggled for a time how
to grasp it. Finally I used something like this (with help of Mocketry,
`1 to: 10` to mean "enough to actually be used even if there are more
unrelated uses", it could as well be `1 to: 100`):

ZnClient stub new willReturnValueFrom:
((1 to: 10) collect: [ :i | ZnMockClient


For Denis Kudriashov: would you be willing to add something like 
`willGenerateValueFrom: aBlock` to Mocketry, so previous two lines could 
be replaced by simpler:


  ZnClient stub new willGenerateValueFrom: [ ZnMockClient

?

If something allowing it is there, I am sorry but I haven't found it.

Herby


whenRequest: [ :request |
{ request uri scheme. request uri authority. request uri
pathPrintString. request uri query associations asSet }
= { #https. 'onesignal.com'. '/api/v1/players/{1}' format: { UUID
fromString36: 'Q7' }. { 'app_id' -> appId } asSet }
and: [ #(GET HEAD) includes: request method ] ]
thenResponse: [ :request | ZnResponse ok: (ZnEntity json: '{}') ] ]).

with the help of this class (garbled utf not my fault, iceberg metacello
integration does it):

'From Pharo6.0 of 13 May 2016 [Latest update: #60512] on 17 October 2017
at 12:05:38.908634 pm'!
ZnClient subclass: #ZnMockClient
instanceVariableNames: 'conditionBlock responseBlock'
classVariableNames: ''
poolDictionaries: ''
category: 'Towergame-Tests'!
!ZnMockClient commentStamp: 'HerbyVojcik 10/16/2017 16:43' prior: 0!
I am a mock ZnClient.

I am created with ZnMockClient whenRequest: whenBlock thenResponse:
thenBlock.

Upon execution of the request, when (whenBlock cull: request) is true,
response is set to (thenBlock cull: request). Otherwise, behaviour is
delegated to super.!


!ZnMockClient methodsFor: 'accessing' stamp: 'HerbertVojčík 10/17/2017
12:00:27'!
conditionBlock
^ conditionBlock! !

!ZnMockClient methodsFor: 'accessing' stamp: 'HerbertVojčík 10/17/2017
12:00:27'!
responseBlock: anObject
responseBlock := anObject! !

!ZnMockClient methodsFor: 'accessing' stamp: 'HerbertVojčík 10/17/2017
12:00:27'!
conditionBlock: anObject
conditionBlock := anObject! !

!ZnMockClient methodsFor: 'accessing' stamp: 'HerbertVojčík 10/17/2017
12:00:27'!
responseBlock
^ responseBlock! !


!ZnMockClient methodsFor: 'private protocol' stamp: 'HerbertVojčík
10/17/2017 12:00:27'!
executeRequestResponse
^ (self conditionBlock cull: self request)
ifTrue: [ response := self responseBlock cull: self request. response
contents ]
ifFalse: [ super executeRequestResponse ]! !

"-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- "!

ZnMockClient class
instanceVariableNames: ''!

!ZnMockClient class methodsFor: 'instance creation' stamp:
'HerbertVojčík 10/17/2017 12:00:27'!
whenRequest: aBlock thenResponse: anotherBlock
^ self new
conditionBlock: aBlock;
responseBlock: anotherBlock;
yourself! !

Question 1: Is there a better way?

Question 2: If not, would ZnMockClient be good addition to Zinc itself,
to ease testing for others?

Herby






[Pharo-users] How do you mock http?

2017-10-17 Thread Herby Vojčík

Hello!

I felt the need to mock an http api (like nock in node, that is, mock the http 
request-response itself at the low level, leaving aside the question of 
what wrapper / library one uses to get to that http; in node it mocks the 
basic http layer, here I tackled ZnClient), but struggled for a while with how 
to approach it. Finally I used something like this (with the help of Mocketry; 
`1 to: 10` means "enough to actually be used even if there are more 
unrelated uses", it could as well be `1 to: 100`):


  ZnClient stub new willReturnValueFrom:
((1 to: 10) collect: [ :i | ZnMockClient
  whenRequest: [ :request |
{ request uri scheme. request uri authority. request uri 
pathPrintString. request uri query associations asSet }
  = { #https. 'onesignal.com'. '/api/v1/players/{1}' format: { 
UUID fromString36: 'Q7' }. { 'app_id' -> appId } asSet }

  and: [ #(GET HEAD) includes: request method ] ]
  thenResponse: [ :request | ZnResponse ok: (ZnEntity json: '{}') ] ]).

with the help of this class (garbled utf not my fault, iceberg metacello 
integration does it):


'From Pharo6.0 of 13 May 2016 [Latest update: #60512] on 17 October 2017 
at 12:05:38.908634 pm'!

ZnClient subclass: #ZnMockClient
instanceVariableNames: 'conditionBlock responseBlock'
classVariableNames: ''
poolDictionaries: ''
category: 'Towergame-Tests'!
!ZnMockClient commentStamp: 'HerbyVojcik 10/16/2017 16:43' prior: 0!
I am a mock ZnClient.

I am created with ZnMockClient whenRequest: whenBlock thenResponse: 
thenBlock.


Upon execution of the request, when (whenBlock cull: request) is true, 
response is set to (thenBlock cull: request). Otherwise, behaviour is 
delegated to super.!



!ZnMockClient methodsFor: 'accessing' stamp: 'HerbertVojčík 10/17/2017 
12:00:27'!

conditionBlock
^ conditionBlock! !

!ZnMockClient methodsFor: 'accessing' stamp: 'HerbertVojčík 10/17/2017 
12:00:27'!

responseBlock: anObject
responseBlock := anObject! !

!ZnMockClient methodsFor: 'accessing' stamp: 'HerbertVojčík 10/17/2017 
12:00:27'!

conditionBlock: anObject
conditionBlock := anObject! !

!ZnMockClient methodsFor: 'accessing' stamp: 'HerbertVojčík 10/17/2017 
12:00:27'!

responseBlock
^ responseBlock! !


!ZnMockClient methodsFor: 'private protocol' stamp: 'HerbertVojčík 
10/17/2017 12:00:27'!

executeRequestResponse
^ (self conditionBlock cull: self request)
		ifTrue: [ response := self responseBlock cull: self request. response 
contents ]

ifFalse: [ super executeRequestResponse ]! !

"-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- "!

ZnMockClient class
instanceVariableNames: ''!

!ZnMockClient class methodsFor: 'instance creation' stamp: 
'HerbertVojčík 10/17/2017 12:00:27'!

whenRequest: aBlock thenResponse: anotherBlock
^ self new
conditionBlock: aBlock;
responseBlock: anotherBlock;
yourself! !

Question 1: Is there a better way?

Question 2: If not, would ZnMockClient be good addition to Zinc itself, 
to ease testing for others?


Herby



[Pharo-users] NeoJSONObject convenience method hint

2017-10-17 Thread Herby Vojčík

newFromAccessors: aCollection ofObject: anObject
"Performs supplied accessors on anObject
and returns my instance with accessor -> result pairs."

	^ self newFrom: (aCollection collect: [ :each | each -> (anObject 
perform: each) ])


"I use it for example in this piece of code:
  NeoJSONObject
newFromAccessors: #(good bad)
ofObject: self dao sumOfAllAnswers
"



Re: [Pharo-users] Zinc release?

2017-10-13 Thread Herby Vojčík

Sven Van Caekenberghe wrote:



On 12 Oct 2017, at 15:58, Herby Vojčík<he...@mailbox.sk>  wrote:

There are a few fixes out there for Zinc, not to mention convenience like ZnEntity 
class>>  json:. Don't you consider releasing the new version (as I tried to 
update it by hand, it is not that easy, it has more components, to load HTTP I had to 
update Character-Encoding as well, so probably better if bumped as a group)?

Herby


That's what configurations are for, to track the latest development release in 
a consistent way. You just do

   ConfigurationOfZincHTTPComponents project bleedingEdge load.

Provided you loaded a recent configuration.


I'm not sure I want the bleeding edge loaded, albeit from a recent configuration, for the production code (though not mission critical in terms of lives or millions of $$$). Also I don't know if configurations are updated after each change out there (it must be done by hand, I presume). So I was asking if it isn't time 
to release another stable one.


If not, and I still do not want the true bleeding edge but a "it works for me" 
snapshot in time, is the way to load a specific version of ConfigurationOf... and then issue ... 
project bleedingEdge load? Will it load only those versions that were bleedingEdge at that 
time?
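
For reference, a sketch of the usual pinning alternatives (assuming the 
configuration declares a stable version; the explicit version string is purely 
hypothetical):

  "latest blessed stable release recorded in the configuration"
  ConfigurationOfZincHTTPComponents project stableVersion load.
  "or pin one explicit, named version"
  (ConfigurationOfZincHTTPComponents project version: '2.6.1') load.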


See the class comment of ConfigurationOfZincHTTPComponents for more info.

Sven








[Pharo-users] Zinc release?

2017-10-12 Thread Herby Vojčík
There are a few fixes out there for Zinc, not to mention conveniences 
like ZnEntity class >> json:. Wouldn't you consider releasing a new 
version? (As I tried to update it by hand, it is not that easy: it has 
several components; to load HTTP I had to update Character-Encoding as 
well, so it is probably better if they are bumped as a group.)


Herby



Re: [Pharo-users] Update of ConfigurationOfGlorp

2017-10-11 Thread Herby Vojčík

stephan wrote:

On 11-10-17 21:50, Herby Vojčík wrote:


129 is from different author, who skipped upload of its 127 and 128
(probably intermediates).

127 is mine, fixes IMO overoptimized case for DirectMapping which
unlike its superclass (thus all the other mappings as there is no
other specialization), stopped converting primary keys to db value in
fk->pk relationships.


Name: ConfigurationOfGlorp-StephanEggermont.62
Author: StephanEggermont
Time: 11 October 2017, 10:00:05.488994 pm
UUID: a7105b9b-ac17-0d00-a62f-44e602acb0bc
Ancestors: ConfigurationOfGlorp-StephanEggermont.61

Patch for stable/release2/2.0.1

Fix error with DirectMapping primary key
not being converted to db type.

In the metarepos, dbxtalk/glorp and dbxtalk/configurations


Thank you very much, sir. :-)



Re: [Pharo-users] Update of ConfigurationOfGlorp

2017-10-11 Thread Herby Vojčík

stephan wrote:

On 11-10-17 20:33, Herby Vojčík wrote:
127 is general, is not sqlite-specific. Fixes any case where primary 
key is not primitive and has converter.


Should 127 and 129 be merged first, and should current development be 
promoted to release? And why is 128 missing?


Stephan




129 is from a different author, who skipped uploading their 127 and 128 (probably 
intermediates).

127 is mine; it fixes an IMO over-optimized case for DirectMapping which, unlike its 
superclass (and thus all the other mappings, as there is no other specialization), 
stopped converting primary keys to db values in fk->pk relationships.




Re: [Pharo-users] Update of ConfigurationOfGlorp

2017-10-11 Thread Herby Vojčík

stephan wrote:

On 06-10-17 17:22, Herby Vojčík wrote:

Any chance of incorporating fixes 127 / 129?


Sure, as soon as someone tells me they are safe to
add. I am just testing Glorp with P3 and Postgres now,
and don't have the capacity to verify these changes
other than by just reading the delta.

Stephan





127 is general, is not sqlite-specific. Fixes any case where primary key is not 
primitive and has converter.



[Pharo-users] Futures, in Scale (was: Re: Embeddable Smalltalk (was: Re: Behold Pharo: The Modern Smalltalk))

2017-10-07 Thread Herby Vojčík

Hi,

I have looked at Scale b/c of different question, and I see it uses futures.

I'd like to ask if those futures are one-off, or whether there is any consensus 
on how futures should look in Pharo. The reason I ask is because Amber 
already has something like that - Promises, which do not use the native 
JS API directly (though it is inspired by it), but have an API of their 
own; and since those concepts seem to be similar (or maybe are the 
same, just called differently), I wonder if the API shouldn't be sort-of 
standardized / used the same way on all occasions.


Herby

P.S.: Including state-of-the-art Promises.st from Amber master branch to 
get a picture:


Smalltalk createPackage: 'Kernel-Promises'!
Object subclass: #Promise
instanceVariableNames: ''
package: 'Kernel-Promises'!

!Promise class methodsFor: 'composites'!

all: aCollection
"Returns a Promise resolved with results of sub-promises."

!

any: aCollection
"Returns a Promise resolved with first result of sub-promises."

! !

!Promise class methodsFor: 'instance creation'!

forBlock: aBlock
"Returns a Promise that is resolved with the value of aBlock,
and rejected if error happens while evaluating aBlock."
^ self new then: aBlock
!

new
"Returns a dumb Promise resolved with nil."

!

new: aBlock
"Returns a Promise that is eventually resolved or rejected.
Pass a block that is called with one argument, model.
You should call model value: ... to resolve the promise
and model signal: ... to reject the promise.
If error happens during run of the block,
promise is rejected with that error as well."

!

signal: anObject
"Returns a Promise rejected with anObject."
Promise.reject(x)})'>

!

value: anObject
"Returns a Promise resolved with anObject."
Promise.resolve(x)})'>

! !

Trait named: #TThenable
package: 'Kernel-Promises'!

!TThenable methodsFor: 'promises'!

catch: aBlock
$core.seamless(function () {

return aBlock._value_(err);
})})'>
!

on: aClass do: aBlock
$core.seamless(function () {

if (err._isKindOf_(aClass)) return aBlock._value_(err);
else throw err;
})})'>
!

on: aClass do: aBlock catch: anotherBlock
^ (self on: aClass do: aBlock) catch: anotherBlock
!

then: aBlockOrArray
"Accepts a block or array of blocks.
Each of blocks in the array or the singleton one is
used in .then call to a promise, to accept a result
and transform it to the result for the next one.
In case a block has more than one argument
and result is an array, first n-1 elements of the array
are put into additional arguments beyond the first.
The first argument always contains the result as-is."
 1 ?
function (result) {return $core.seamless(function () {
if (Array.isArray(result)) {
return 
aBlock._valueWithPossibleArguments_([result].concat(result.slice(0, 
aBlock.length-1)));

} else {
return aBlock._value_(result);
}
})} :
function (result) {return $core.seamless(function () {
return aBlock._value_(result);
})}
);
}, self)'>
!

then: aBlockOrArray catch: anotherBlock
^ (self then: aBlockOrArray) catch: anotherBlock
!

then: aBlockOrArray on: aClass do: aBlock
^ (self then: aBlockOrArray) on: aClass do: aBlock
!

then: aBlockOrArray on: aClass do: aBlock catch: anotherBlock
^ ((self then: aBlockOrArray) on: aClass do: aBlock) catch: anotherBlock
! !

Promise setTraitComposition: {TThenable} asTraitComposition!
! !
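
For flavour, a usage sketch of the protocol above, going only by the method 
comments (purely illustrative):

  (Promise new: [ :model | model value: 41 ])
      then: [ :x | x + 1 ]      "resolves with 42"
      catch: [ :err | err ]     "runs only if the promise was rejected"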



p...@highoctane.be wrote:

https://github.com/guillep/Scale <https://github.com/guillep/Scale> is
quite cool for quick scripts on *nix.

Now for embeddable, yes, it would help. Bootstrapping will be helpful, but
the current main issue is the VM interpreter loop that is in the current VMs.

It is not an intrinsic problem. Telepharo is also a great step to enable
that to exist.

Phil

On Oct 7, 2017 13:05, "Herby Vojčík" <he...@mailbox.sk
<mailto:he...@mailbox.sk>> wrote:

Ben Coman wrote:

Nice article. I like the way you've structured it and pushed the
"updated" angle.

I feel a bit too strong a claim is laid on Pharo producing the
CogVM.
Much of the Cog + Spur + 64bit VM work was originally done for
Squeak
with Pharo riding the coat-tails of that work.  Lately Pharo
community
has been involved in improving VM with hotspot optimisation with
Sista
and moving towards making Pharo embeddable.. (@Clement, is "hotspot


Hi, this made me curious. I always had the problem with Amber (and,
all the rest of Smalltalks in similar vein) that it is hard to use
to one-off scripting, as it presumes existence of the not really
small class library (objects, classes, collections, etc.). This
disadvantaged it IMO in the field of "can I just embe

[Pharo-users] Embeddable Smalltalk (was: Re: Behold Pharo: The Modern Smalltalk)

2017-10-07 Thread Herby Vojčík

Ben Coman wrote:

Nice article. I like the way you've structured it and pushed the
"updated" angle.

I feel a bit too strong a claim is laid on Pharo producing the CogVM.
Much of the Cog + Spur + 64bit VM work was originally done for Squeak
with Pharo riding the coat-tails of that work.  Lately Pharo community
has been involved in improving VM with hotspot optimisation with Sista
and moving towards making Pharo embeddable.. (@Clement, is "hotspot


Hi, this made me curious. I always had the problem with Amber (and all 
the rest of the Smalltalks in a similar vein) that it is hard to use for 
one-off scripting, as it presumes the existence of a not really small 
class library (objects, classes, collections, etc.). This disadvantaged 
it IMO in the field of "can I just embed it here and script it with a 
few lines?" scenarios.


Did you (the Pharo community that is "moving towards making Pharo 
embeddable") find some way to work around this?



cheers -ben


Herby



Re: [Pharo-users] Update of ConfigurationOfGlorp

2017-10-06 Thread Herby Vojčík

Sven Van Caekenberghe wrote:



On 4 Oct 2017, at 17:20, stephan  wrote:

I've added a Pharo 7 version, copied the configurations from DBXTalk/Glorp to 
DBXTalk/Configurations and the metarepos, and
replaced the #'Pharo6.0.x' style names by #'Pharo6.x' style.
Please let me know if that creates problems

Stephan


I tested with the latest ConfigurationOfGlorp-StephanEggermont.61 in Pharo 7 
and that seems OK (not super clean, but OK).

Thx.




Any chance of incorporating fixes 127 / 129?

Herby



Re: [Pharo-users] ZnClient in Pharo 6.1 not working for Https on Windows

2017-10-06 Thread Herby Vojčík

kmo wrote:

I was trying to use Soup on Windows 7 and found I could not access https sites

On windows 7 and 10 the following code fails:

ZnEasy get:'https://genius.com/Alice-nuvole-lyrics'.

This works fine in Pharo 5 on Windows. Also works fine with Pharo 6.1 (32
bit) on Linux.

The problem is Pharo 6.1 on Windows.

The error is:

"SSL/TLS plugin initialization failed (VM plugin missing ? OS libraries
missing ?)"


I had it as well. The reason seems really strange, but the root 
cause was that the zip wasn't unzipped correctly, probably due to Windows 
Defender hijacking it, and SqueakSSL.dll was missing.


Check thoroughly whether all the files present in the .zip were 
actually unzipped into the pharo folder.


Herby


I know that his has been seen on Linux in the past - see
http://forum.world.st/SSL-TLS-plugin-initailization-failed-VM-plugin-missing-OS-libraries-missing-td4945857.html

I'm running the latest vm and 6.1 image.



--
Sent from: http://forum.world.st/Pharo-Smalltalk-Users-f1310670.html






Re: [Pharo-users] How to make pharo find sqlite?

2017-10-02 Thread Herby Vojčík

p...@highoctane.be wrote:

Yes, all of this should work and we need to improve on this.

I am willing to do something about that because it frustrates me too.

Herby,

How would you see it working?


Hard to answer. But linux knows where its libs are, at least when you do 
`/sbin/ldconfig -p` on the CLI. Pharo should probably use this info somehow; 
the question is, is there a kernel API to get to this, or is there another 
sane way to get to it? Of course, also accepting 'libfoo' when asked for 
'foo', but that is probably there already (I hope).


And of course, honouring LD_LIBRARY_PATH, as it _is_ set in the pharo 
start script, so maybe it is even used, but... I am sorry, I don't know how 
shared lib resolving works on linux. I am just the "as far as I can 
tell, the whole point of shared libraries is to ask the OS for them and get 
them handed over" kind of person here.


Ppl love Smalltalk so much that they forgive lots of rough edges - I think 
we should not be that forgiving (like garbled utf-8 names read from git 
in the iceberg-metacello integration; there are lots of details like 
this, the issue in this thread being one of the more serious ones, IMO).


Herby


Phil

On Mon, Oct 2, 2017 at 1:45 PM, Herby Vojčík <he...@mailbox.sk
<mailto:he...@mailbox.sk>> wrote:

Renaud de Villemeur wrote:

Hi


[...snip...]

The reason it pass under windows is because the method library
return by
default sqlite3, which is the dll name you put under pharo VM
directory
to get it work.


Not true. It is not in pharo vm directory. It finds it on %PATH%.

On linux, unless you link your library in the VM folder, the
image has
no clue where to find sqlite3.


And this should be fixed, IMNSHO. It probably _partly_ works, but it
should be made to work better.

Herby

Hope this helps.
Renaud



.






.



Re: [Pharo-users] How to make pharo find sqlite?

2017-10-02 Thread Herby Vojčík

Renaud de Villemeur wrote:

Hi


[...snip...]


The reason it passes under windows is because the method #library returns by
default 'sqlite3', which is the dll name you put under the pharo VM directory
to get it to work.


Not true. It is not in pharo vm directory. It finds it on %PATH%.


On linux, unless you link your library in the VM folder, the image has
no clue where to find sqlite3.


And this should be fixed, IMNSHO. It probably _partly_ works, but it 
should be made to work better.


Herby


Hope this helps.
Renaud



.



Re: [Pharo-users] How to set library path for UFFI on Linux?

2017-09-30 Thread Herby Vojčík

Dan Wilczak wrote:

Hernan -

I haven't opened an issue - how do I do it? (I'm very new to Pharo.)

About continuing the search - I only mean continuing the search of the
LD_LIBRARY_PATH directories, not the whole filesystem. Two changes would be
needed to accomplish this:


FWIW, I had a problem with loading the 'sqlite3' module. Crossposting the 
solution:


It seems FFI for some reason struggles with 'lib' and/or '.so.0' things 
in linux (even if LD_LIBRARY_PATH is properly set).


I had to do this:

TARGETDIR=`find . -type f -name SqueakSSL.so -print0 | xargs -0 dirname`
ln -s `/sbin/ldconfig -p | sed -e 's|[^/]*||' | grep sqlite3` 
${TARGETDIR}/sqlite3.so


(so, a link in the plugin directory, and the name is plain 'sqlite3.so'). With 
that, things work. Maybe libsqlite.so would do the trick as well, but I 
had no nerve left to play with it more.


But, frankly, do not tell me this is what ppl need to do to load 
external libs in linux. :-/


Herby



1) Athens-Cairo>>  CairoLibrary would have to return a list of paths to all
the matching libraries rather than just the first one that it finds. This
part seems easy.

2) UFFI Libraries UnixDynamicLoader>>  loadLibrary:flag: would have to take
the list of paths (rather than just one), check them for being 32-bit or
64-bit, and load the first correct one. I can't find any way to perform that
check in Pharo directly. How would you fork or exec the "file" command from
inside Pharo?

Dan




--
Sent from: http://forum.world.st/Pharo-Smalltalk-Users-f1310670.html






Re: [Pharo-users] How to make pharo find sqlite?

2017-09-30 Thread Herby Vojčík

p...@highoctane.be wrote:

Is https://pharo.fogbugz.com/f/cases/19990 showing again?

What is the module being loaded ?


This seems to be the important question. It seems FFI for some reason 
struggles with 'lib' and/or '.so.0' things in linux (even if 
LD_LIBRARY_PATH is properly set).


I had to do this:

TARGETDIR=`find . -type f -name SqueakSSL.so -print0 | xargs -0 dirname`
ln -s `/sbin/ldconfig -p | sed -e 's|[^/]*||' | grep sqlite3` 
${TARGETDIR}/sqlite3.so


(so, a link in the plugin directory, and the name is plain 'sqlite3.so'). With 
that, things work. Maybe libsqlite.so would do the trick as well, but I 
had no nerve left to play with it more.


But, frankly, do not tell me this is what ppl need to do to load 
external libs in linux. :-/


Herby


Phil


On Sat, Sep 30, 2017 at 1:28 PM, Herby Vojčík <he...@mailbox.sk
<mailto:he...@mailbox.sk>> wrote:

p...@highoctane.be <mailto:p...@highoctane.be> wrote:

What about

LD_LIBRARY_PATH=;$LD_LIBRARYPATH  ./pharo-ui
some.image

Phil


Thanks for answer, did not help.

In fact it must be something different. As can be seen in the stack,
it fails during finalizers, and as can be seen by looking at
UDBCSQLite3DatabaseExternalObject class>>finalizeResourceData: code,
the method it calls is sqlite close. It is hardly the first method
that it should call...

I suspect something around image save / load. Again. Lots of errors
in those parts. But it may be something else, as it kicks in only when
the SQLite-using tests start to run. :-(

Herby

P.S.: I saw there is a similar thread out there, but it has problems
with 32bit loaded by 64bit vm; but here, I have 32bit linux, so the
vm installed should be 32bit.

On Thu, Sep 28, 2017 at 7:40 PM, Herby Vojčík <he...@mailbox.sk
<mailto:he...@mailbox.sk>
<mailto:he...@mailbox.sk <mailto:he...@mailbox.sk>>> wrote:

 Hello!

 I try to deploy UDBCSQLite-using image in a 32bit ubuntu
16.04.3.

 I do have libsqlite3:

 root@32bit-agent:~# find / -name '*libsqlite*' -type f
2>>/dev/null
 /usr/lib/i386-linux-gnu/libsqlite3.so.0.8.6
 /var/lib/dpkg/info/libsqlite0.list
 /var/lib/dpkg/info/libsqlite3-0:i386.postinst
 /var/lib/dpkg/info/libsqlite3-0:i386.md5sums
 /var/lib/dpkg/info/libsqlite3-0:i386.shlibs
 /var/lib/dpkg/info/libsqlite0.postrm
 /var/lib/dpkg/info/libsqlite3-0:i386.symbols
 /var/lib/dpkg/info/libsqlite3-0:i386.list
 /var/lib/dpkg/info/libsqlite3-0:i386.triggers
 /var/cache/apt/archives/libsqlite0_2.8.17-12fakesync1_i386.deb

 but I get this in the output of the CI:

 17:16:54.233 + ../pharo/pharo ./filmtower.image
conf/run-tests.st <http://run-tests.st>
<http://run-tests.st>
 17:16:54.508 pthread_setschedparam failed: Operation not
permitted
 17:16:54.509 This VM uses a separate heartbeat thread to
update its
 internal clock
 17:16:54.509 and handle events.  For best operation, this
thread
 should run at a
 17:16:54.509 higher priority, however the VM was unable to
change
 the priority.  The
 17:16:54.509 effect is that heavily loaded systems may
experience
 some latency
 17:16:54.509 issues.  If this occurs, please create the
appropriate
 configuration
 17:16:54.509 file in /etc/security/limits.d/ as shown below:
 17:16:54.509
 17:16:54.509 cat <https://github.com/OpenSmalltalk/opensmalltalk-vm/releases/tag/r3732#linux

<https://github.com/OpenSmalltalk/opensmalltalk-vm/releases/tag/r3732#linux>

<https://github.com/OpenSmalltalk/opensmalltalk-vm/releases/tag/r3732#linux

<https://github.com/OpenSmalltalk/opensmalltalk-vm/releases/tag/r3732#linux>>
 17:16:54.785
 17:16:54.786 TowergameSyncTests
 17:16:54.831 Error: External module not found
 17:16:54.832 ExternalLibraryFunction(Object)>>error:
 17:16:54.832
ExternalLibraryFunction(Object)>>externalCallFailed
 17:16:54.833
 ExternalLibraryFunction(ExternalFunction)>>invokeWithArguments:
 17:16:54.833 UDBCSQLite3DatabaseExternalObject
 class>>finalizeResourceData:
 17:16:54.834 FFICalloutAPI>>function:module:
 17:16:54.834 UDBCSQLite3Library(Object)>>ffiCall:module:
 17:16:54.835 UDBCSQLite3DatabaseExternalObject
 class>>finalizeResourceData:
 17:16:54.836 FFIExternalResourceExecutor>>fin

Re: [Pharo-users] How to set library path for UFFI on Linux?

2017-09-30 Thread Herby Vojčík

Stephane Ducasse wrote:

Would you pls care to look at the sibling thread "How to make pharo find 
SQLite", it seems that it struggles with similar problem, is it supposed 
to created symlinks on linux, or it should be able to fund the library 
but somehow fails?


Thanks, Herby


https://pharo.fogbugz.com

On Sat, Sep 30, 2017 at 6:53 PM, Dan Wilczak  wrote:

Hernan -

I haven't opened an issue - how do I do it? (I'm very new to Pharo.)

About continuing the search - I only mean continuing the search of the
LD_LIBRARY_PATH directories, not the whole filesystem. Two changes would be
needed to accomplish this:

1) Athens-Cairo>>  CairoLibrary would have to return a list of paths to all
the matching libraries rather than just the first one that it finds. This
part seems easy.

2) UFFI Libraries UnixDynamicLoader>>  loadLibrary:flag: would have to take
the list of paths (rather than just one), check them for being 32-bit or
64-bit, and load the first correct one. I can't find any way to perform that
check in Pharo directly. How would you fork or exec the "file" command from
inside Pharo?

Dan




--
Sent from: http://forum.world.st/Pharo-Smalltalk-Users-f1310670.html








Re: [Pharo-users] How to make pharo find sqlite?

2017-09-30 Thread Herby Vojčík

Herby Vojčík wrote:

p...@highoctane.be wrote:

I am using UDBCSQLite on Windows without problems.


Me, too; when developing.

The problem was on Linux, where I deploy.

Things begin to look as if it was really that the module is not found,
though.


Now that I have looked into the code CairoLibrary uses to try to find the file 
name, and to compose the module name from that, and seen that in my case the 
library name used is the plain 'sqlite3', I don't know - am I supposed 
to make a symlink in the same directory as the image? Is that the thing 
that is normally needed / done routinely?


I may try that... (but on Windows, it just finds the sqlite dll without 
problems).


Herby



I added more diagnostic output, to ExternalFunction >>
invokeWithArguments:, so it writes to transcript whenever the primitive
fails, writing args used to call and self. The output is suddenly filled
with output:

19:48:10.132 + ../pharo/pharo ./filmtower.image conf/run-tests.st
19:48:10.662
19:48:10.662 TowergameServerTests
19:48:10.781 4 run, 4 passes, 0 skipped, 0 expected failures, 0
failures, 0 errors, 0 unexpected passes
19:48:10.781
19:48:10.781 TowergameSyncTests
19:48:10.781
19:48:10.782 TowergameSyncTests>>#testPlayerCanHaveDisabledDeviceSaved
19:48:10.782 ENTER setUp
19:48:10.804
19:48:10.805 an Array('' @ 16r08FF27C0)
19:48:10.806 
19:48:10.806
19:48:10.806 ENTER tearDown
19:48:10.807
19:48:10.807 TowergameSyncTests>>#testPlayerChecksStateVersion
19:48:10.807 ENTER setUp
19:48:10.815
19:48:10.815 an Array('' @ 16r08FF1718)
19:48:10.816 
19:48:10.816
19:48:10.816 ENTER tearDown
19:48:10.816
19:48:10.816
TowergameSyncTests>>#testPlayerChecksStateVersionAndHasFreshlyInstalled
19:48:10.817 ENTER setUp
19:48:10.820
19:48:10.820 an Array('' @ 16r08FF2198)
19:48:10.820 
19:48:10.820
19:48:10.820 ENTER tearDown
19:48:10.821
19:48:10.821 TowergameSyncTests>>#testPlayerChecksStateVersionAndIsBehind
19:48:10.821 ENTER setUp
19:48:10.829
19:48:10.829 an Array('' @ 16r08FFFAD8)
19:48:10.829 
19:48:10.829
19:48:10.830 ENTER tearDown
19:48:10.830
19:48:10.830
TowergameSyncTests>>#testPlayerChecksStateVersionFromDifferentDevice
19:48:10.830 ENTER setUp
19:48:10.832
19:48:10.832 an Array('' @ 16r08FFFAE8)
19:48:10.832 
19:48:10.832
19:48:10.832 ENTER tearDown
19:48:10.832
19:48:10.832
TowergameSyncTests>>#testPlayerChecksStateVersionFromDifferentDeviceAndHasFreshlyInstalled

19:48:10.832 ENTER setUp
19:48:10.839
19:48:10.839 an Array('' @ 16r08FFFAF8)
19:48:10.839 
19:48:10.840
19:48:10.840 ENTER tearDown
19:48:10.840
19:48:10.840
TowergameSyncTests>>#testPlayerChecksStateVersionFromDifferentExistingDevice

19:48:10.840 ENTER setUp
19:48:10.841
19:48:10.841 an Array('' @ 16r08FFFB08)
19:48:10.841 
19:48:10.842
19:48:10.842 ENTER tearDown
19:48:10.842
19:48:10.842
TowergameSyncTests>>#testPlayerChecksStateVersionFromDisabledDevice
19:48:10.842 ENTER setUp
19:48:10.854
19:48:10.854 an Array(@ 16r)
19:48:10.855 
19:48:10.855
19:48:10.855 an Array(@ 16r)
19:48:10.855 
19:48:10.856
19:48:10.856 an Array(@ 16r)
19:48:10.856 
19:48:10.856
19:48:10.856 an Array(@ 16r)
19:48:10.856 
19:48:10.857
19:48:10.857 an Array(@ 16r)
19:48:10.857 
19:48:10.857
19:48:10.858 an Array(@ 16r)
19:48:10.861 
19:48:10.861
19:48:10.862 an Array(@ 16r)
19:48:10.863 
19:48:10.864 Error: External module not found
19:48:10.865 ExternalLibraryFunction(Object)>>error:
etc.

So the mystery of "where are LEAVE messages" is solved: both setUp and
tearDown failed between ENTER and LEAVE.

Now why cannot it find the library (now I am running it in both 32bit
env as well as 64bit env, with appropriate vm installed; but always the
same: sqlite3 module is not found).

At least it seems it is not mysterious vm bug, but (only) failure to
find an external module. Though I don't know how to solve it,
LD_LIBRARY_PATH did not help... :-/


Phil

On Sat, Sep 30, 2017 at 9:11 PM, Herby Vojčík <he...@mailbox.sk
<mailto:he...@mailbox.sk>> wrote:

p...@highoctane.be <mailto:p...@highoctane.be> wrote:

Also, did you try with this VM:

http://get.pharo.org/vmTLatest60 <http://get.pharo.org/vmTLatest60>


18:51:46.191 + curl get.pharo.org/vmTLatest60
<http://get.pharo.org/vmTLatest60>
18:51:46.207 % Total % Received % Xferd Average Speed Time
Time Time Current
18:51:46.208 Dload Upload Total
Spent Left Speed
18:51:46.208
18:51:46.242 0 0 0 0 0 0 0 0 --:--:--
--:--:-- --:--:-- 0
18:51:46.242 100 6126 100 6126 0 0 172k 0 --:--:--
--:--:-- --:--:-- 175k
18:51:46.253 Downloading the latest pharoVM:
18:51:46.253
http://files.pharo.org/get-files/60/pharo-linux-threaded-latest.zip
<http://files.pharo.org/get-files/60/pharo-linux-threaded-latest.zip>
18:51:46.305 [pharo-vm/vm.zip]
18:51:46.305 End-of-central-directory signature not found. Either
this file is not
18:51:46.305 a zipfile, or it constitutes one disk of a multi-part
archive. In the
18:5

Re: [Pharo-users] How to make pharo find sqlite?

2017-09-30 Thread Herby Vojčík

p...@highoctane.be wrote:

I am using UDBCSQLite on Windows without problems.


Me, too; when developing.

The problem was on Linux, where I deploy.

Things begin to look as if it was really that the module is not found, 
though.


I added more diagnostic output to ExternalFunction >> 
invokeWithArguments:, so it writes to the Transcript whenever the primitive 
fails, writing the args used for the call and self. The log is suddenly filled 
with output:


19:48:10.132 + ../pharo/pharo ./filmtower.image conf/run-tests.st
19:48:10.662
19:48:10.662 TowergameServerTests
19:48:10.781 4 run, 4 passes, 0 skipped, 0 expected failures, 0 
failures, 0 errors, 0 unexpected passes

19:48:10.781
19:48:10.781 TowergameSyncTests
19:48:10.781
19:48:10.782 TowergameSyncTests>>#testPlayerCanHaveDisabledDeviceSaved
19:48:10.782 ENTER setUp
19:48:10.804
19:48:10.805 an Array('' @ 16r08FF27C0)
19:48:10.806 
19:48:10.806
19:48:10.806 ENTER tearDown
19:48:10.807
19:48:10.807 TowergameSyncTests>>#testPlayerChecksStateVersion
19:48:10.807 ENTER setUp
19:48:10.815
19:48:10.815 an Array('' @ 16r08FF1718)
19:48:10.816 
19:48:10.816
19:48:10.816 ENTER tearDown
19:48:10.816
19:48:10.816 
TowergameSyncTests>>#testPlayerChecksStateVersionAndHasFreshlyInstalled

19:48:10.817 ENTER setUp
19:48:10.820
19:48:10.820 an Array('' @ 16r08FF2198)
19:48:10.820 
19:48:10.820
19:48:10.820 ENTER tearDown
19:48:10.821
19:48:10.821 TowergameSyncTests>>#testPlayerChecksStateVersionAndIsBehind
19:48:10.821 ENTER setUp
19:48:10.829
19:48:10.829 an Array('' @ 16r08FFFAD8)
19:48:10.829 
19:48:10.829
19:48:10.830 ENTER tearDown
19:48:10.830
19:48:10.830 
TowergameSyncTests>>#testPlayerChecksStateVersionFromDifferentDevice

19:48:10.830 ENTER setUp
19:48:10.832
19:48:10.832 an Array('' @ 16r08FFFAE8)
19:48:10.832 
19:48:10.832
19:48:10.832 ENTER tearDown
19:48:10.832
19:48:10.832 
TowergameSyncTests>>#testPlayerChecksStateVersionFromDifferentDeviceAndHasFreshlyInstalled

19:48:10.832 ENTER setUp
19:48:10.839
19:48:10.839 an Array('' @ 16r08FFFAF8)
19:48:10.839 
19:48:10.840
19:48:10.840 ENTER tearDown
19:48:10.840
19:48:10.840 
TowergameSyncTests>>#testPlayerChecksStateVersionFromDifferentExistingDevice

19:48:10.840 ENTER setUp
19:48:10.841
19:48:10.841 an Array('' @ 16r08FFFB08)
19:48:10.841 
19:48:10.842
19:48:10.842 ENTER tearDown
19:48:10.842
19:48:10.842 
TowergameSyncTests>>#testPlayerChecksStateVersionFromDisabledDevice

19:48:10.842 ENTER setUp
19:48:10.854
19:48:10.854 an Array(@ 16r)
19:48:10.855 
19:48:10.855
19:48:10.855 an Array(@ 16r)
19:48:10.855 
19:48:10.856
19:48:10.856 an Array(@ 16r)
19:48:10.856 
19:48:10.856
19:48:10.856 an Array(@ 16r)
19:48:10.856 
19:48:10.857
19:48:10.857 an Array(@ 16r)
19:48:10.857 
19:48:10.857
19:48:10.858 an Array(@ 16r)
19:48:10.861 
19:48:10.861
19:48:10.862 an Array(@ 16r)
19:48:10.863 
19:48:10.864 Error: External module not found
19:48:10.865 ExternalLibraryFunction(Object)>>error:
etc.

So the mystery of "where are LEAVE messages" is solved: both setUp and 
tearDown failed between ENTER and LEAVE.


Now why can it not find the library? (Now I am running it in both a 32bit 
env and a 64bit env, with the appropriate vm installed; but always the 
same: the sqlite3 module is not found.)


At least it seems it is not a mysterious vm bug, but (only) a failure to 
find an external module. Though I don't know how to solve it; 
LD_LIBRARY_PATH did not help... :-/



Phil

On Sat, Sep 30, 2017 at 9:11 PM, Herby Vojčík <he...@mailbox.sk
<mailto:he...@mailbox.sk>> wrote:

p...@highoctane.be <mailto:p...@highoctane.be> wrote:

Also, did you try with this VM:

http://get.pharo.org/vmTLatest60 <http://get.pharo.org/vmTLatest60>


18:51:46.191 + curl get.pharo.org/vmTLatest60
<http://get.pharo.org/vmTLatest60>
18:51:46.207   % Total% Received % Xferd  Average Speed   Time
Time Time  Current
18:51:46.208  Dload  Upload   Total
SpentLeft  Speed
18:51:46.208
18:51:46.242   0 00 00 0  0  0 --:--:--
--:--:-- --:--:-- 0
18:51:46.242 100  6126  100  61260 0   172k  0 --:--:--
--:--:-- --:--:--  175k
18:51:46.253 Downloading the latest pharoVM:
18:51:46.253
http://files.pharo.org/get-files/60/pharo-linux-threaded-latest.zip
<http://files.pharo.org/get-files/60/pharo-linux-threaded-latest.zip>
18:51:46.305 [pharo-vm/vm.zip]
18:51:46.305   End-of-central-directory signature not found.  Either
this file is not
18:51:46.305   a zipfile, or it constitutes one disk of a multi-part
archive.  In the
18:51:46.305   latter case the central directory and zipfile comment
will be found on
18:51:46.305   the last disk(s) of this archive.
18:51:46.305 unzip:  cannot find zipfile directory in one of
  

Re: [Pharo-users] How to make pharo find sqlite?

2017-09-30 Thread Herby Vojčík

p...@highoctane.be wrote:

Also, did you try with this VM:

http://get.pharo.org/vmTLatest60


18:51:46.191 + curl get.pharo.org/vmTLatest60
18:51:46.207   % Total% Received % Xferd  Average Speed   Time 
Time Time  Current
18:51:46.208  Dload  Upload   Total 
SpentLeft  Speed

18:51:46.208
18:51:46.242   0 00 00 0  0  0 --:--:-- 
--:--:-- --:--:-- 0
18:51:46.242 100  6126  100  61260 0   172k  0 --:--:-- 
--:--:-- --:--:--  175k

18:51:46.253 Downloading the latest pharoVM:
18:51:46.253 
http://files.pharo.org/get-files/60/pharo-linux-threaded-latest.zip

18:51:46.305 [pharo-vm/vm.zip]
18:51:46.305   End-of-central-directory signature not found.  Either 
this file is not
18:51:46.305   a zipfile, or it constitutes one disk of a multi-part 
archive.  In the
18:51:46.305   latter case the central directory and zipfile comment 
will be found on

18:51:46.305   the last disk(s) of this archive.
18:51:46.305 unzip:  cannot find zipfile directory in one of 
pharo-vm/vm.zip or
18:51:46.305 pharo-vm/vm.zip.zip, and cannot find 
pharo-vm/vm.zip.ZIP, period.



It probably does not exist any more (I tried 70+vm and it failed in 
other aspects; it wasn't able to load the git repo).


I tried both 61+vmT and 61+vmI, in both 32vm/32os and 64vm/64os 
combinations. It always ended with the same result.


Must be some error in UDBCSQLite3Library itself. :-/

Although the missing transcript output is scary and shows that the vm may be 
the culprit as well.


Herby


Phil

On Sat, Sep 30, 2017 at 1:28 PM, Herby Vojčík <he...@mailbox.sk
<mailto:he...@mailbox.sk>> wrote:

p...@highoctane.be <mailto:p...@highoctane.be> wrote:

What about

LD_LIBRARY_PATH=;$LD_LIBRARYPATH  ./pharo-ui
some.image

Phil


Thanks for answer, did not help.

In fact it must be something different. As can be seen in the stack,
it fails during finalizers, and as can be seen by looking at
UDBCSQLite3DatabaseExternalObject class>>finalizeResourceData: code,
the method it calls is sqlite close. It is hardly the first method
that it should call...

I suspect something around image save / load. Again. Lots of errors
in those parts. But it may be something else, as it kicks in only when
SQLite-using tests start to run. :-(

Herby

P.S.: I saw there is a similar thread out there, but it has problems
with 32bit loaded by 64bit vm; but here, I have 32bit linux, so the
vm installed should be 32bit.

On Thu, Sep 28, 2017 at 7:40 PM, Herby Vojčík <he...@mailbox.sk
<mailto:he...@mailbox.sk>
<mailto:he...@mailbox.sk <mailto:he...@mailbox.sk>>> wrote:

 Hello!

 I try to deploy UDBCSQLite-using image in a 32bit ubuntu
16.04.3.

 I do have libsqlite3:

 root@32bit-agent:~# find / -name '*libsqlite*' -type f
2>>/dev/null
 /usr/lib/i386-linux-gnu/libsqlite3.so.0.8.6
 /var/lib/dpkg/info/libsqlite0.list
 /var/lib/dpkg/info/libsqlite3-0:i386.postinst
 /var/lib/dpkg/info/libsqlite3-0:i386.md5sums
 /var/lib/dpkg/info/libsqlite3-0:i386.shlibs
 /var/lib/dpkg/info/libsqlite0.postrm
 /var/lib/dpkg/info/libsqlite3-0:i386.symbols
 /var/lib/dpkg/info/libsqlite3-0:i386.list
 /var/lib/dpkg/info/libsqlite3-0:i386.triggers
 /var/cache/apt/archives/libsqlite0_2.8.17-12fakesync1_i386.deb

 but I get this in the output of the CI:

 17:16:54.233 + ../pharo/pharo ./filmtower.image
conf/run-tests.st <http://run-tests.st>
<http://run-tests.st>
 17:16:54.508 pthread_setschedparam failed: Operation not
permitted
 17:16:54.509 This VM uses a separate heartbeat thread to
update its
 internal clock
 17:16:54.509 and handle events.  For best operation, this
thread
 should run at a
 17:16:54.509 higher priority, however the VM was unable to
change
 the priority.  The
 17:16:54.509 effect is that heavily loaded systems may
experience
 some latency
 17:16:54.509 issues.  If this occurs, please create the
appropriate
 configuration
 17:16:54.509 file in /etc/security/limits.d/ as shown below:
 17:16:54.509
 17:16:54.509 cat <https://github.com/OpenSmalltalk/opensmalltalk-vm/releases/tag/r3732#linux

<https://github.com/OpenSmalltalk/opensmalltalk-vm/releases/tag/r3732#linux>

<https://github.com/OpenSmalltalk/opensmalltalk-vm/releases/tag/r3732#linux

<https://github.com/OpenSmalltalk/opensmalltalk-vm/releases/tag/r3732#linux>>
 17:16:54.785
 17:

Re: [Pharo-users] How to make pharo find sqlite?

2017-09-30 Thread Herby Vojčík

p...@highoctane.be wrote:

Is https://pharo.fogbugz.com/f/cases/19990 showing again?


Actually, maybe not, as can be seen in the other thread I posted for the 
error - it seems sqlite3 is used ok until finalizers kick in, when it 
crashes for some reason... but maybe it relates to this error somehow.




What is the module being loaded ?

Phil


On Sat, Sep 30, 2017 at 1:28 PM, Herby Vojčík <he...@mailbox.sk
<mailto:he...@mailbox.sk>> wrote:

p...@highoctane.be <mailto:p...@highoctane.be> wrote:

What about

LD_LIBRARY_PATH=;$LD_LIBRARYPATH  ./pharo-ui
some.image

Phil


Thanks for answer, did not help.

In fact it must be something different. As can be seen in the stack,
it fails during finalizers, and as can be seen by looking at
UDBCSQLite3DatabaseExternalObject class>>finalizeResourceData: code,
the method it calls is sqlite close. It is hardly the first method
that it should call...

I suspect something around image save / load. Again. Lots of errors
in those parts. But it may be something else, as it kicks in only when
SQLite-using tests start to run. :-(

Herby

P.S.: I saw there is a similar thread out there, but it has problems
with 32bit loaded by 64bit vm; but here, I have 32bit linux, so the
vm installed should be 32bit.

On Thu, Sep 28, 2017 at 7:40 PM, Herby Vojčík <he...@mailbox.sk
<mailto:he...@mailbox.sk>
<mailto:he...@mailbox.sk <mailto:he...@mailbox.sk>>> wrote:

 Hello!

 I try to deploy UDBCSQLite-using image in a 32bit ubuntu
16.04.3.

 I do have libsqlite3:

 root@32bit-agent:~# find / -name '*libsqlite*' -type f
2>>/dev/null
 /usr/lib/i386-linux-gnu/libsqlite3.so.0.8.6
 /var/lib/dpkg/info/libsqlite0.list
 /var/lib/dpkg/info/libsqlite3-0:i386.postinst
 /var/lib/dpkg/info/libsqlite3-0:i386.md5sums
 /var/lib/dpkg/info/libsqlite3-0:i386.shlibs
 /var/lib/dpkg/info/libsqlite0.postrm
 /var/lib/dpkg/info/libsqlite3-0:i386.symbols
 /var/lib/dpkg/info/libsqlite3-0:i386.list
 /var/lib/dpkg/info/libsqlite3-0:i386.triggers
 /var/cache/apt/archives/libsqlite0_2.8.17-12fakesync1_i386.deb

 but I get this in the output of the CI:

 17:16:54.233 + ../pharo/pharo ./filmtower.image
conf/run-tests.st <http://run-tests.st>
<http://run-tests.st>
 17:16:54.508 pthread_setschedparam failed: Operation not
permitted
 17:16:54.509 This VM uses a separate heartbeat thread to
update its
 internal clock
 17:16:54.509 and handle events.  For best operation, this
thread
 should run at a
 17:16:54.509 higher priority, however the VM was unable to
change
 the priority.  The
 17:16:54.509 effect is that heavily loaded systems may
experience
 some latency
 17:16:54.509 issues.  If this occurs, please create the
appropriate
 configuration
 17:16:54.509 file in /etc/security/limits.d/ as shown below:
 17:16:54.509
 17:16:54.509 cat <https://github.com/OpenSmalltalk/opensmalltalk-vm/releases/tag/r3732#linux

<https://github.com/OpenSmalltalk/opensmalltalk-vm/releases/tag/r3732#linux>

<https://github.com/OpenSmalltalk/opensmalltalk-vm/releases/tag/r3732#linux

<https://github.com/OpenSmalltalk/opensmalltalk-vm/releases/tag/r3732#linux>>
 17:16:54.785
 17:16:54.786 TowergameSyncTests
 17:16:54.831 Error: External module not found
 17:16:54.832 ExternalLibraryFunction(Object)>>error:
 17:16:54.832
ExternalLibraryFunction(Object)>>externalCallFailed
 17:16:54.833
 ExternalLibraryFunction(ExternalFunction)>>invokeWithArguments:
 17:16:54.833 UDBCSQLite3DatabaseExternalObject
 class>>finalizeResourceData:
 17:16:54.834 FFICalloutAPI>>function:module:
 17:16:54.834 UDBCSQLite3Library(Object)>>ffiCall:module:
 17:16:54.835 UDBCSQLite3DatabaseExternalObject
 class>>finalizeResourceData:
 17:16:54.836 FFIExternalResourceExecutor>>finalize
 17:16:54.836 WeakFinalizerItem>>finalizeValues
 17:16:54.845 [ each finalizeValues ] in [ :each | [ each
 finalizeValues ] on: Exception fork: [ :ex | ex pass ] ] in
 WeakRegistry>>finalizeValues in Block: [ each finalizeValues ]
 17:16:54.846 BlockClosure>>on:do:
 17:16:54.852 [ Processor terminateActive ] in [ :ex |
 17

Re: [Pharo-users] How to make pharo find sqlite?

2017-09-30 Thread Herby Vojčík

p...@highoctane.be wrote:

Is https://pharo.fogbugz.com/f/cases/19990 showing again?

What is the module being loaded ?


It is taken from this message send:

UDBCSQLite3Library >> library

Smalltalk os isMacOS ifTrue: [ ^ #sqlite3 ].
^ 'sqlite3'

So, sqlite3.
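
(For completeness, a sketch of one workaround I am considering - an assumption on
my part, not a tested fix: override the lookup so it answers the full soname that
is actually installed, since only libsqlite3.so.0.8.6 is present and there is no
plain libsqlite3.so symlink on the library path. The path is the one from my
earlier find output; adjust as needed.)

UDBCSQLite3Library >> library
	"Workaround sketch: answer the installed shared object directly instead
	of the bare 'sqlite3' module name, which only resolves when a
	libsqlite3.so symlink (e.g. from the -dev package) exists."
	Smalltalk os isMacOS ifTrue: [ ^ #sqlite3 ].
	Smalltalk os isUnix
		ifTrue: [ ^ '/usr/lib/i386-linux-gnu/libsqlite3.so.0.8.6' ].
	^ 'sqlite3'

Another option might of course be to create a libsqlite3.so symlink on the
library path instead of touching the image.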


Phil


On Sat, Sep 30, 2017 at 1:28 PM, Herby Vojčík <he...@mailbox.sk
<mailto:he...@mailbox.sk>> wrote:

p...@highoctane.be <mailto:p...@highoctane.be> wrote:

What about

LD_LIBRARY_PATH=;$LD_LIBRARYPATH  ./pharo-ui
some.image

Phil


Thanks for answer, did not help.

In fact it must be something different. As can be seen in the stack,
it fails during finalizers, and as can be seen by looking at
UDBCSQLite3DatabaseExternalObject class>>finalizeResourceData: code,
the method it calls is sqlite close. It is hardly the first method
that it should call...

I suspect something around image save / load. Again. Lots of errors
in those parts. But it may be something else, as it kicks in only when
SQLite-using tests start to run. :-(

Herby

P.S.: I saw there is a similar thread out there, but it has problems
with 32bit loaded by 64bit vm; but here, I have 32bit linux, so the
vm installed should be 32bit.

On Thu, Sep 28, 2017 at 7:40 PM, Herby Vojčík <he...@mailbox.sk
<mailto:he...@mailbox.sk>
<mailto:he...@mailbox.sk <mailto:he...@mailbox.sk>>> wrote:

 Hello!

 I try to deploy UDBCSQLite-using image in a 32bit ubuntu
16.04.3.

 I do have libsqlite3:

 root@32bit-agent:~# find / -name '*libsqlite*' -type f
2>>/dev/null
 /usr/lib/i386-linux-gnu/libsqlite3.so.0.8.6
 /var/lib/dpkg/info/libsqlite0.list
 /var/lib/dpkg/info/libsqlite3-0:i386.postinst
 /var/lib/dpkg/info/libsqlite3-0:i386.md5sums
 /var/lib/dpkg/info/libsqlite3-0:i386.shlibs
 /var/lib/dpkg/info/libsqlite0.postrm
 /var/lib/dpkg/info/libsqlite3-0:i386.symbols
 /var/lib/dpkg/info/libsqlite3-0:i386.list
 /var/lib/dpkg/info/libsqlite3-0:i386.triggers
 /var/cache/apt/archives/libsqlite0_2.8.17-12fakesync1_i386.deb

 but I get this in the output of the CI:

 17:16:54.233 + ../pharo/pharo ./filmtower.image
conf/run-tests.st <http://run-tests.st>
<http://run-tests.st>
 17:16:54.508 pthread_setschedparam failed: Operation not
permitted
 17:16:54.509 This VM uses a separate heartbeat thread to
update its
 internal clock
 17:16:54.509 and handle events.  For best operation, this
thread
 should run at a
 17:16:54.509 higher priority, however the VM was unable to
change
 the priority.  The
 17:16:54.509 effect is that heavily loaded systems may
experience
 some latency
 17:16:54.509 issues.  If this occurs, please create the
appropriate
 configuration
 17:16:54.509 file in /etc/security/limits.d/ as shown below:
 17:16:54.509
 17:16:54.509 cat <https://github.com/OpenSmalltalk/opensmalltalk-vm/releases/tag/r3732#linux

<https://github.com/OpenSmalltalk/opensmalltalk-vm/releases/tag/r3732#linux>

<https://github.com/OpenSmalltalk/opensmalltalk-vm/releases/tag/r3732#linux

<https://github.com/OpenSmalltalk/opensmalltalk-vm/releases/tag/r3732#linux>>
 17:16:54.785
 17:16:54.786 TowergameSyncTests
 17:16:54.831 Error: External module not found
 17:16:54.832 ExternalLibraryFunction(Object)>>error:
 17:16:54.832
ExternalLibraryFunction(Object)>>externalCallFailed
 17:16:54.833
 ExternalLibraryFunction(ExternalFunction)>>invokeWithArguments:
 17:16:54.833 UDBCSQLite3DatabaseExternalObject
 class>>finalizeResourceData:
 17:16:54.834 FFICalloutAPI>>function:module:
 17:16:54.834 UDBCSQLite3Library(Object)>>ffiCall:module:
 17:16:54.835 UDBCSQLite3DatabaseExternalObject
 class>>finalizeResourceData:
 17:16:54.836 FFIExternalResourceExecutor>>finalize
 17:16:54.836 WeakFinalizerItem>>finalizeValues
 17:16:54.845 [ each finalizeValues ] in [ :each | [ each
 finalizeValues ] on: Exception fork: [ :ex | ex pass ] ] in
 WeakRegistry>>finalizeValues in Block: [ each finalizeValues ]
 17:16:54.846 BlockClosure>>on:do:
 17:16:54.852 [ Processor terminateActive ] in [ :ex |
 17:16:54.852 | copy onDoCtx process handler bottom thisCtx |

[Pharo-users] Pharo 6.1 UDCBSQLite problem, masked behind "Error: External module not found"?

2017-09-30 Thread Herby Vojčík

Hello!

I got the strange error first reported as "External module not found", 
but after putting a few diagnostic transcript outputs to the code:


TowergameSyncTests >> setUp
Transcript cr; show: self; cr; show: 'ENTER setUp'; cr.
dao := Towergame daoForLogin: self loginToTemporaryDatabase.
session := dao glorpSession.
Transcript cr; show: 'LEAVE setUp'; cr.

TowergameSyncTests >> tearDown
Transcript cr; show: 'ENTER tearDown'; cr.
session logout.
Transcript cr; show: 'LEAVE tearDown'; cr.

I got this in (an excerpt of) the output of my Go CD agent. It runs on 
32-bit Ubuntu 16.04.3 and uses the 61+vm:


13:10:58.277 [go] Start to execute task: Plugin with ID: script-executor.
13:10:58.295 [script-executor] OS detected: 'Linux'. Is Windows? false
13:10:58.313 [script-executor] Script written into 
'/var/lib/go-agent/pipelines/filmtower-srv/cffa4492-a817-41e0-bb64-72a9e5a3d890.sh'.

13:10:58.325 + cd code
13:10:58.325 + ../pharo/pharo ./filmtower.image conf/run-tests.st
13:10:58.866
13:10:58.867 TowergameServerTests
13:10:58.961 4 run, 4 passes, 0 skipped, 0 expected failures, 0 
failures, 0 errors, 0 unexpected passes

13:10:58.962
13:10:58.962 TowergameSyncTests
13:10:58.962
13:10:58.962 TowergameSyncTests>>#testPlayerCanHaveDisabledDeviceSaved
13:10:58.962 ENTER setUp
13:10:58.991
13:10:58.991 ENTER tearDown
13:10:58.994
13:10:58.995 TowergameSyncTests>>#testPlayerChecksStateVersion
13:10:58.995 ENTER setUp
13:10:59.000
13:10:59.000 ENTER tearDown
13:10:59.001
13:10:59.002 
TowergameSyncTests>>#testPlayerChecksStateVersionAndHasFreshlyInstalled

13:10:59.002 ENTER setUp
13:10:59.014
13:10:59.015 ENTER tearDown
13:10:59.015
13:10:59.021 TowergameSyncTests>>#testPlayerChecksStateVersionAndIsBehind
13:10:59.021 ENTER setUp
13:10:59.031 Error: External module not found
13:10:59.031 ExternalLibraryFunction(Object)>>error:
13:10:59.033 ExternalLibraryFunction(Object)>>externalCallFailed
13:10:59.034 ExternalLibraryFunction(ExternalFunction)>>invokeWithArguments:
13:10:59.034 UDBCSQLite3DatabaseExternalObject class>>finalizeResourceData:
13:10:59.035 FFICalloutAPI>>function:module:
13:10:59.035 UDBCSQLite3Library(Object)>>ffiCall:module:
13:10:59.036 UDBCSQLite3DatabaseExternalObject class>>finalizeResourceData:
13:10:59.036 FFIExternalResourceExecutor>>finalize
13:10:59.036 WeakFinalizerItem>>finalizeValues
13:10:59.059 [ each finalizeValues ] in [ :each | [ each finalizeValues 
] on: Exception fork: [ :ex | ex pass ] ] in 
WeakRegistry>>finalizeValues in Block: [ each finalizeValues ]

13:10:59.059 BlockClosure>>on:do:
13:10:59.079 [ Processor terminateActive ] in [ :ex |
13:10:59.079 | copy onDoCtx process handler bottom thisCtx |
13:10:59.079 onDoCtx := thisContext.
13:10:59.079 thisCtx := onDoCtx home.
13:10:59.079
13:10:59.079 "find the context on stack for which this method's is sender"
13:10:59.079 [ onDoCtx sender == thisCtx ]
13:10:59.079whileFalse: [ onDoCtx := onDoCtx sender.
13:10:59.079onDoCtx
13:10:59.079 			ifNil: [ "Can't find our home context. seems like we're 
already forked
13:10:59.079 and handling another exception in new thread. In this 
case, just pass it through handler." ^ handlerAction cull: ex ] ].

13:10:59.079 bottom := [ Processor terminateActive ] asContext.
13:10:59.079 onDoCtx privSender: bottom.
13:10:59.079 handler := [ handlerAction cull: ex ] asContext.
13:10:59.080 handler privSender: thisContext sender.
13:10:59.081 (Process forContext: handler priority: Processor 
activePriority)

13:10:59.081resume.
13:10:59.081
13:10:59.081 "cut the stack of current process"
13:10:59.081 thisContext privSender: thisCtx.
13:10:59.082 nil ] in BlockClosure>>on:fork: in Block: [ Processor 
terminateActive ]

13:10:59.226
13:10:59.228 [script-executor] Script completed with exit code: 1.
13:10:59.285 [go] Current job status: failed.

There are two dimensions to this:

1. It is not "External module not found" as far as I can say, as a few 
tests passed, going through both setUp and tearDown. Something is wrong 
when finalizers kick in. FWIW, the login I use to log in the test db, 
created anew each time, because it is SQLite temp db, is:


TowergameSyncTests >> loginToTemporaryDatabase
^ Login new
database: UDBCSQLite3Platform new;
host: '';
port: '';
username: '';
password: '';
databaseName: '';
yourself

If I understood correctly, this creates a db backed by a temp file which gets 
removed once the connection closes.


Everything works fine on my dev machine (Win 10, non-headless), where 
the tests just pass.


2. Where are "LEAVE setUp" and "LEAVE tearDown" messages? They are 
missing from the transcipt (again, on dev machine, in non-headless mode, 
they show up in the Transcript window).



Can someone hint at what is wrong / udbcsqlite authors look at if there 
isn't something incorrect in the sqlite 

Re: [Pharo-users] How to make pharo find sqlite?

2017-09-30 Thread Herby Vojčík

p...@highoctane.be wrote:

What about

LD_LIBRARY_PATH=;$LD_LIBRARYPATH  ./pharo-ui some.image

Phil


Thanks for answer, did not help.

In fact it must be something different. As can be seen in the stack, it 
fails during finalizers, and as can be seen by looking at 
UDBCSQLite3DatabaseExternalObject class>>finalizeResourceData: code, the 
method it calls is sqlite close. It is hardly the first method that it 
should call...


I suspect something around image save / load. Again. Lots of errors in 
those parts. But it may be something else, as it kicks in only when 
SQLite-using tests start to run. :-(


Herby

P.S.: I saw there is a similar thread out there, but it has problems 
with 32bit loaded by 64bit vm; but here, I have 32bit linux, so the vm 
installed should be 32bit.



On Thu, Sep 28, 2017 at 7:40 PM, Herby Vojčík <he...@mailbox.sk
<mailto:he...@mailbox.sk>> wrote:

Hello!

I try to deploy UDBCSQLite-using image in a 32bit ubuntu 16.04.3.

I do have libsqlite3:

root@32bit-agent:~# find / -name '*libsqlite*' -type f 2>>/dev/null
/usr/lib/i386-linux-gnu/libsqlite3.so.0.8.6
/var/lib/dpkg/info/libsqlite0.list
/var/lib/dpkg/info/libsqlite3-0:i386.postinst
/var/lib/dpkg/info/libsqlite3-0:i386.md5sums
/var/lib/dpkg/info/libsqlite3-0:i386.shlibs
/var/lib/dpkg/info/libsqlite0.postrm
/var/lib/dpkg/info/libsqlite3-0:i386.symbols
/var/lib/dpkg/info/libsqlite3-0:i386.list
/var/lib/dpkg/info/libsqlite3-0:i386.triggers
/var/cache/apt/archives/libsqlite0_2.8.17-12fakesync1_i386.deb

but I get this in the output of the CI:

17:16:54.233 + ../pharo/pharo ./filmtower.image conf/run-tests.st
<http://run-tests.st>
17:16:54.508 pthread_setschedparam failed: Operation not permitted
17:16:54.509 This VM uses a separate heartbeat thread to update its
internal clock
17:16:54.509 and handle events.  For best operation, this thread
should run at a
17:16:54.509 higher priority, however the VM was unable to change
the priority.  The
17:16:54.509 effect is that heavily loaded systems may experience
some latency
17:16:54.509 issues.  If this occurs, please create the appropriate
configuration
17:16:54.509 file in /etc/security/limits.d/ as shown below:
17:16:54.509
17:16:54.509 cat <https://github.com/OpenSmalltalk/opensmalltalk-vm/releases/tag/r3732#linux
<https://github.com/OpenSmalltalk/opensmalltalk-vm/releases/tag/r3732#linux>
17:16:54.785
17:16:54.786 TowergameSyncTests
17:16:54.831 Error: External module not found
17:16:54.832 ExternalLibraryFunction(Object)>>error:
17:16:54.832 ExternalLibraryFunction(Object)>>externalCallFailed
17:16:54.833
ExternalLibraryFunction(ExternalFunction)>>invokeWithArguments:
17:16:54.833 UDBCSQLite3DatabaseExternalObject
class>>finalizeResourceData:
17:16:54.834 FFICalloutAPI>>function:module:
17:16:54.834 UDBCSQLite3Library(Object)>>ffiCall:module:
17:16:54.835 UDBCSQLite3DatabaseExternalObject
class>>finalizeResourceData:
17:16:54.836 FFIExternalResourceExecutor>>finalize
17:16:54.836 WeakFinalizerItem>>finalizeValues
17:16:54.845 [ each finalizeValues ] in [ :each | [ each
finalizeValues ] on: Exception fork: [ :ex | ex pass ] ] in
WeakRegistry>>finalizeValues in Block: [ each finalizeValues ]
17:16:54.846 BlockClosure>>on:do:
17:16:54.852 [ Processor terminateActive ] in [ :ex |
17:16:54.852 | copy onDoCtx process handler bottom thisCtx |
17:16:54.852 onDoCtx := thisContext.
17:16:54.852 thisCtx := onDoCtx home.
17:16:54.852
17:16:54.852 "find the context on stack for which this method's is
sender"
17:16:54.852 [ onDoCtx sender == thisCtx ]
17:16:54.852whileFalse: [ onDoCtx := onDoCtx sender.
17:16:54.852onDoCtx
17:16:54.852ifNil: [ "Can't find our home
context. seems like we're already forked
17:16:54.852and handling another
exception in new thread. In this case, just pass it through
handler." ^ handlerAction cull: ex ] ].
17:16:54.852 bottom := [ Processor terminateActive ] asContext.
17:16:54.853 onDoCtx privSender: bottom.
17:16:54.853 handler := [ handlerAction cull: ex ] asContext.
17:16:54.853 handler privSender: thisContext sender.
17:16:54.853 (Process forContext: handler priority: Processor
activePriority)
17:16:54.853resume.
17:16:54.853
17:16:54.853 "cut the stack of current process"
17:16:54.853 thisContext privSender: thisCtx.
17:16:54.853 nil ] in BlockClosure>>on:fork: in Block: [ Processor
terminateActive ]
17:16:54.989

Looks like Pharo was not able to find the sqlite3 lib.

Any help?

Thanks, Herby








[Pharo-users] How to make pharo find sqlite?

2017-09-28 Thread Herby Vojčík

Hello!

I try to deploy UDBCSQLite-using image in a 32bit ubuntu 16.04.3.

I do have libsqlite3:

root@32bit-agent:~# find / -name '*libsqlite*' -type f 2>>/dev/null
/usr/lib/i386-linux-gnu/libsqlite3.so.0.8.6
/var/lib/dpkg/info/libsqlite0.list
/var/lib/dpkg/info/libsqlite3-0:i386.postinst
/var/lib/dpkg/info/libsqlite3-0:i386.md5sums
/var/lib/dpkg/info/libsqlite3-0:i386.shlibs
/var/lib/dpkg/info/libsqlite0.postrm
/var/lib/dpkg/info/libsqlite3-0:i386.symbols
/var/lib/dpkg/info/libsqlite3-0:i386.list
/var/lib/dpkg/info/libsqlite3-0:i386.triggers
/var/cache/apt/archives/libsqlite0_2.8.17-12fakesync1_i386.deb

but I get this in the output of the CI:

17:16:54.233 + ../pharo/pharo ./filmtower.image conf/run-tests.st
17:16:54.508 pthread_setschedparam failed: Operation not permitted
17:16:54.509 This VM uses a separate heartbeat thread to update its 
internal clock
17:16:54.509 and handle events.  For best operation, this thread should 
run at a
17:16:54.509 higher priority, however the VM was unable to change the 
priority.  The
17:16:54.509 effect is that heavily loaded systems may experience some 
latency
17:16:54.509 issues.  If this occurs, please create the appropriate 
configuration

17:16:54.509 file in /etc/security/limits.d/ as shown below:
17:16:54.509
17:16:54.509 cat <17:16:54.509 and report to the pharo mailing list whether this improves 
behaviour.

17:16:54.512
17:16:54.512 You will need to log out and log back in for the limits to 
take effect.

17:16:54.512 For more information please see
17:16:54.512 
https://github.com/OpenSmalltalk/opensmalltalk-vm/releases/tag/r3732#linux

17:16:54.785
17:16:54.786 TowergameSyncTests
17:16:54.831 Error: External module not found
17:16:54.832 ExternalLibraryFunction(Object)>>error:
17:16:54.832 ExternalLibraryFunction(Object)>>externalCallFailed
17:16:54.833 ExternalLibraryFunction(ExternalFunction)>>invokeWithArguments:
17:16:54.833 UDBCSQLite3DatabaseExternalObject class>>finalizeResourceData:
17:16:54.834 FFICalloutAPI>>function:module:
17:16:54.834 UDBCSQLite3Library(Object)>>ffiCall:module:
17:16:54.835 UDBCSQLite3DatabaseExternalObject class>>finalizeResourceData:
17:16:54.836 FFIExternalResourceExecutor>>finalize
17:16:54.836 WeakFinalizerItem>>finalizeValues
17:16:54.845 [ each finalizeValues ] in [ :each | [ each finalizeValues 
] on: Exception fork: [ :ex | ex pass ] ] in 
WeakRegistry>>finalizeValues in Block: [ each finalizeValues ]

17:16:54.846 BlockClosure>>on:do:
17:16:54.852 [ Processor terminateActive ] in [ :ex |
17:16:54.852 | copy onDoCtx process handler bottom thisCtx |
17:16:54.852 onDoCtx := thisContext.
17:16:54.852 thisCtx := onDoCtx home.
17:16:54.852
17:16:54.852 "find the context on stack for which this method's is sender"
17:16:54.852 [ onDoCtx sender == thisCtx ]
17:16:54.852whileFalse: [ onDoCtx := onDoCtx sender.
17:16:54.852onDoCtx
17:16:54.852 			ifNil: [ "Can't find our home context. seems like we're 
already forked
17:16:54.852 and handling another exception in new thread. In this 
case, just pass it through handler." ^ handlerAction cull: ex ] ].

17:16:54.852 bottom := [ Processor terminateActive ] asContext.
17:16:54.853 onDoCtx privSender: bottom.
17:16:54.853 handler := [ handlerAction cull: ex ] asContext.
17:16:54.853 handler privSender: thisContext sender.
17:16:54.853 (Process forContext: handler priority: Processor 
activePriority)

17:16:54.853resume.
17:16:54.853
17:16:54.853 "cut the stack of current process"
17:16:54.853 thisContext privSender: thisCtx.
17:16:54.853 nil ] in BlockClosure>>on:fork: in Block: [ Processor 
terminateActive ]

17:16:54.989

Looks like Pharo was not able to find the sqlite3 lib.

Any help?

Thanks, Herby



Re: [Pharo-users] Pharo 7 license question

2017-09-21 Thread Herby Vojčík

Jimmie Houchin wrote:

You say it defends rights. It just removed my right to license my
software how I wish. The only way to preserve that option is to not use
GPL software.

Now, should I choose to not use GPL software. How has that benefited
anybody in the GPL ecosystem? Not at all.

We like to talk about the bad big corporation stealing our hard work and
our software and making millions of dollars. Yes big corp. prefers
MIT/BSD. They also prefer to release their own hard work and dollars as
MIT/BSD licensed software. It isn't as if it is all take on big
corporation's side. They prefer the permissive license both as author
and user.

MIT/BSD simply says you the user may do anything you want. Just don't
blame me (author) for anything. And give author(s) credit for what they
have created.

I would rather have people, businesses believe in open source software
and use and release open source software because they are believers and
not because some license forced them to do so. That is how MIT/BSD
software is. And in reality it is how all authors of open source
software are regardless of license. They do it because the believe in
it. It is wrong to think that MIT authors don't believe in the freedoms
of open source software. We do. We want the user to reciprocate because
they believe, not because we forced them. You can't force anybody. They
always have the choice of choosing something different, or writing it
themselves.


+1


Jimmie


Herby



Re: [Pharo-users] Gofer loads wrong version?

2017-09-14 Thread Herby Vojčík

stephan wrote:

On 12-09-17 21:58, Herby Vojčík wrote:

Bump.

Can you pls help me with how to load proper version / finding what is
wrong here?

As shown in replies, it loads two versions (eg. it seems it loads
everything it finds in the repo).

Herby

Herby Vojčík wrote:

Hello!

As I need to load specific version of Glorp with my fix, and I did not
find out how to force-override it in my baseline, I tried to load it
post-the-baseline via Gofer:

(Gofer new smalltalkhubUser: 'DBXTalk' project: 'Glorp')
package: 'Glorp';
version: 'Glorp-HerbyVojcik.127';
load.

But it actually looks like this: [see attached bogusgofer.png].

OTOH, after the image loads the tests pass and monticello browser looks
like this: [see attached mbloaded127.png].

Am I doing something wrong? I am a bit afraid what is actually loaded /
if it won't break b/c of inconsistent state.


No, it is fine. Telling Gofer package: means prepare to load the latest
version, so you give Gofer two versions to load.


Thank you, sir. Changed it to:

(Gofer new smalltalkhubUser: 'DBXTalk' project: 'Glorp')
  version: 'Glorp-HerbyVojcik.127';
  load.

and now it does exactly what I wanted.

Herby


https://www.lukas-renggli.ch/blog/gofer

Stephan




Re: [Pharo-users] Gofer loads wrong version?

2017-09-13 Thread Herby Vojčík

Guillermo Polito wrote:



On Mon, Sep 4, 2017 at 1:01 PM, Herby Vojčík <he...@mailbox.sk
<mailto:he...@mailbox.sk>> wrote:

Hello!

As I need to load specific version of Glorp with my fix, and I did
not find out how to force-override it in my baseline, I tried to
load it post-the-baseline via Gofer:

(Gofer new smalltalkhubUser: 'DBXTalk' project: 'Glorp')
   package: 'Glorp';
   version: 'Glorp-HerbyVojcik.127';
   load.

But it actually looks like this: [see attached bogusgofer.png].

OTOH, after the image loads the tests pass and monticello browser
looks like this: [see attached mbloaded127.png].

Am I doing something wrong? I am a bit afraid what is actually
loaded / if it won't break b/c of inconsistent state.


 From my understanding, loading was right:
  - underlined in monticello means loaded
  - bold means not loaded
  - normal (not bold nor underlined) means ancestor of loaded version

So I'd say you have effectively loaded Glorp-HerbyVojcik.127 in your system.

Now, why in the progress bar somebody is loading the version 129? I'm
not sure, maybe that's the metacello configuration? Or monticello needs
that to calculate something with ancestors? Dunno...


The Epicea item has something like "loading versions 127 and 129" in its 
title, so it probably means it actually loaded both (as they are the 
only ones in the repo, it seems it loaded all the versions it found). :-/


Now is it a bug in M*cello or is it me who is doing something wrong?



Re: [Pharo-users] Gofer loads wrong version?

2017-09-12 Thread Herby Vojčík

Bump.

Can you please help me with how to load the proper version / with finding 
what is wrong here?


As shown in the replies, it loads two versions (i.e. it seems it loads 
everything it finds in the repo).


Herby

Herby Vojčík wrote:

Hello!

As I need to load specific version of Glorp with my fix, and I did not
find out how to force-override it in my baseline, I tried to load it
post-the-baseline via Gofer:

(Gofer new smalltalkhubUser: 'DBXTalk' project: 'Glorp')
package: 'Glorp';
version: 'Glorp-HerbyVojcik.127';
load.

But it actually looks like this: [see attached bogusgofer.png].

OTOH, after the image loads the tests pass and monticello browser looks
like this: [see attached mbloaded127.png].

Am I doing something wrong? I am a bit afraid what is actually loaded /
if it won't break b/c of inconsistent state.

Herby





Re: [Pharo-users] Standalone HTML Model

2017-09-11 Thread Herby Vojčík

Stephane Ducasse wrote:

Hi Pierce Ng

How different is the API from Seaside?
Because I would like to use it.
I like to think modularly :)

Stef


Maybe porting Silk and letting it write to some simulated DOM could be 
interesting as well... :-)


https://lolg.it/herby/silk

Herby


On Mon, Sep 11, 2017 at 12:31 PM, Pierce Ng  wrote:

On Fri, Sep 08, 2017 at 03:15:56PM -0700, Sean P. DeNigris wrote:

I'd like to create HTML via a DSL, like Seaside's canvas builder, but without
loading a whole web framework. Any ideas?

I wrote this but subsequently decided that loading the whole of Seaside into my
image just for this functionality is ok and stopped.

   http://smalltalkhub.com/#!/~PierceNg/WaterMint-HTML

Pierce









[Pharo-users] Glorp: problem with DictionaryMapping with "data objects"

2017-09-07 Thread Herby Vojčík

Hello!

I have a problem making a mapping from a dictionary key to a simple data-holding 
object (two integers) work. Glorp's DictionaryMapping discriminates between 
mapping a simple value (#dictionaryFrom: String to: Integer), where 
things work as assumed (two fields in a table, one for the key, one 
for the value; and of course one for the FK to the owner), and mapping an object 
(#dictionaryFrom: String to: MyClass), where it fails miserably if the 
object does not also contain the key itself as the FK (which is dumb; 
I want it unidirectional). In case I do not include the key-as-FK, it 
fails to grasp updates and/or deletes, and fails during commit with 
either a UNIQUE failure (when an update is interpreted as an insert) or a 
NULL failure (when a delete is interpreted as an insert with nil in place of 
the dict key field).


Maybe it is a missing feature in DictionaryMapping; maybe I am doing 
something wrong. I would like to ask if there isn't someone currently at 
ESUG who would want a bit more sightseeing and would be willing to 
take a detour via Bratislava (I have a place for a stayover) and look at 
the issue face to face (or I can use ScreenHero, but as I understood it 
does not have a Linux client, maybe that's why Esteban has not accepted my 
invitation yet).


Thanks, Herby



Re: [Pharo-users] Gofer loads wrong version?

2017-09-04 Thread Herby Vojčík

Herby Vojčík wrote:

Hello!

As I need to load specific version of Glorp with my fix, and I did not
find out how to force-override it in my baseline, I tried to load it
post-the-baseline via Gofer:

(Gofer new smalltalkhubUser: 'DBXTalk' project: 'Glorp')
package: 'Glorp';
version: 'Glorp-HerbyVojcik.127';
load.

But it actually looks like this: [see attached bogusgofer.png].

OTOH, after the image loads the tests pass and monticello browser looks
like this: [see attached mbloaded127.png].

Am I doing something wrong? I am a bit afraid what is actually loaded /
if it won't break b/c of inconsistent state.


P.S.: Epicea shows "Loading 127 and 129". I only wanted to load 127. ???


Herby





[Pharo-users] Gofer loads wrong version?

2017-09-04 Thread Herby Vojčík

Hello!

As I need to load specific version of Glorp with my fix, and I did not 
find out how to force-override it in my baseline, I tried to load it 
post-the-baseline via Gofer:


(Gofer new smalltalkhubUser: 'DBXTalk' project: 'Glorp')
  package: 'Glorp';
  version: 'Glorp-HerbyVojcik.127';
  load.

But it actually looks like this: [see attached bogusgofer.png].

OTOH, after the image loads the tests pass and monticello browser looks 
like this: [see attached mbloaded127.png].


Am I doing something wrong? I am a bit afraid what is actually loaded / 
if it won't break b/c of inconsistent state.


Herby


Re: [Pharo-users] new chapter on double dispatch for new book :)

2017-08-23 Thread Herby Vojčík

Maybe, in general, rename sumWithFoo: => addSelfToFoo: to make the clues clearer.

Herby




Re: [Pharo-users] new chapter on double dispatch for new book :)

2017-08-23 Thread Herby Vojčík

Stephane Ducasse wrote:

feedback is welcome
Good reading


2.4

"is to explicit type check"
- "is to explicitly type check", or
- "is to do explicit type check"

s/we will haveother/we will have other/

s/distabilizing/destabilizing/

"In fact we just to tell the receiver ,,,"
something's missing or is extra here


s/a die or an handle/a die or a handle/

Figure 2.1

I find the diagram strange at first look. The solid arrow pointing 
to the notes is the cause. I am used to seeing notes connected with a dashed 
line without a tip, as in UML.


2.6

Addition is commutative, so it should not matter from the implementation 
PoV, but it seems strange to see:


The previous method + is definitively what we want to do when we have two 
dice. So let us rename it as sumWithDie: so that we can 
invoke it later.

Die >> sumWithDie: aDie
	^ DieHandle new
		addDie: self;
		addDie: aDie; yourself

(eh? why pillar makes non copy-paste friendly pdfs?!)

Back to the point: to see + renamed to sumWithDie: keeping the order, 
and later have the magic of saying "We just tell the argument (which can be 
a die or a die handle) that we want to add to it an die." - the mind gets 
twisted when trying to grasp it, as things are flipped.


Much better would be to say "So let us rename it as 
sumWithDie: so that we can invoke it later, but let us switch 
the roles while doing it" and make the DD method with the roles switched 
(self added second).


Then the "We just tell the argument (...) that we want to add to it an 
die." clicks much better (I would use "add itself to a die" in the end).


s/Easy, no./Easy, isn't it?/

"It is easy, isn’t?" - remove, too much of the same near each other

s/we simply creates a new die handle/we simply create a new die handle/

s/add all the die of the previous/add all the dice of the previous/

DieHandle >> sumWithDie: - also do reverse order and change the 
description to be on par


2.7

s/the receiver is a die handle/the receiver of + is a die handle/
With DD, things go there and back, better be explicit.

s/to add a die handle this time/to add itself to a die handle this time/

"We know what is to add two die handles"
something is missing or extra here

We rename ... as ... => We rename ... as ... while switching roles.
and reflect it in code

"Remember that sending back a new message to the argument is the key 
aspect. Why? Because we kick in a new message lookup and dispatch."

Not sure if this helps or confuses. Probably find better form.

s/final behavior just have to/final behavior we just have to/

s/withWithHandle:/sumWithHandle:/

Note: Die >> sumWithHandle: is in the correct order; it needs no switching.

2.8

maybe s/trivial to understand deeply/trivial to deeply understand/

maybe s/it requires time to digest it/it requires time to digest/

maybe s/method based on the argument too/method based on the argument/

s/we step badk/we step back/

s/applied twice the Don’t ask, tell principle/applied the Don’t ask, 
tell principle twice/


s/First the message + plus selects/First the message + selects/

s/selecting the correct method either W or Q/selecting either the 
correct method W or the correct method Q/


s/the receiver and the argument of a messages/the receiver and the 
argument of a + message/




Re: [Pharo-users] Glorp: Is there some way to do insert-or-update?

2017-08-23 Thread Herby Vojčík

jtuchel wrote:

Herby,

as Esteban already said, UPSERT doesn't make any sense in an ORM. It


I don't know... I just create a new object (with the same "primary key") and 
register it (yes, I know I get an error - maybe I should be able to set 
the policy to "overwrite" and it would make sense; or not?).



either knows the object as one that has been read in this session or
not. If not, it is new and needs to be inserted.

You could, of course, try and see what happens if you let Glorp's insert
operation always issue an UPSERT. This is probably very easy to do and
at first sight there isn't too much I could think of that could go wrong
with it.

But I guess including a check for existence of an object as Esteban
suggests isn't too bad from the performance and "safety" POV. not sure I
understand how a Dictionary Mapping could help here


Similarly to what was posted above: I can simply at:put: and I don't 
care whether I created a new key-value pair or overwrote the old value 
(in cases where simply putting a new object under a key is feasible, which 
it is in this case).
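
(To spell out the Dictionary analogy with a trivial sketch - hypothetical names, 
just for illustration:

pairings := Dictionary new.
pairings at: userDevice put: aPairing.	"first time: an insert"
pairings at: userDevice put: newerPairing.	"second time: silently overwrites"

That "write regardless of whether the key already exists" is exactly the 
semantics I would like for registering an object with an existing primary key.)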



Joachim


Herby


Am Dienstag, 22. August 2017 12:13:30 UTC+2 schrieb Herbert Vojčík:

Hello!

Is there some way to do an insert-or-update operation (that is,
roughly, to
be able to register an object with possibly existing primary key(s) and
let it be written regardless)?

Thanks, Herby

P.S.: Use case - I want to have a log of USER-DEVICE pairings with a last
timestamp and an 'enabled' field that would be set to false once a push
notification fails - but set to true once the user actually logs in from the
device (again). I don't want to have many historical records, so I want
USER+DEVICE to be a composed primary key. Which means it is
inserted the first time, but possibly updated later.

--
You received this message because you are subscribed to the Google
Groups "glorp-group" group.
To unsubscribe from this group and stop receiving emails from it, send
an email to glorp-group+unsubscr...@googlegroups.com
.
To post to this group, send email to glorp-gr...@googlegroups.com
.
Visit this group at https://groups.google.com/group/glorp-group.
For more options, visit https://groups.google.com/d/optout.





[Pharo-users] Glorp: Is there some way to do insert-or-update?

2017-08-22 Thread Herby Vojčík

Hello!

Is there some way to do an insert-or-update operation (that is, roughly, to 
be able to register an object with possibly existing primary key(s) and 
let it be written regardless)?


Thanks, Herby

P.S.: Use case - I want to have a log of USER-DEVICE pairings with a last 
timestamp and an 'enabled' field that would be set to false once a push 
notification fails - but set to true once the user actually logs in from the 
device (again). I don't want to have many historical records, so I want 
USER+DEVICE to be a composed primary key. Which means it is 
inserted the first time, but possibly updated later.




Re: [Pharo-users] How do I get a list of all packages in the catalog with a 6.0 tag?

2017-08-21 Thread Herby Vojčík

H. Hirzel wrote:

This helps me to get at the information for a particular singular
entry. I am looking for a list of all catalog entries in 6.0.


That said, if it has a menu item (and the catalog has one), I use the Finder to look 
in the source code directly for that string, and that is a good starting 
point. I had the impression I found out where the catalog is held / updated 
using that, but it was a few weeks ago.


Herby


On 8/21/17, Herby Vojčík<he...@mailbox.sk>  wrote:

H. Hirzel wrote:

Hello

On 8/17/17, bdurin<bruno.du...@gmail.com>   wrote:

Maybe something like having the list of all packages and their version
included in a given image version on https://pharo.org could be useful.

What is the code snippet to get a list of all packages with a '6.0'
tag and their description?

The spotter brings up the Catalog Browser but how do I get 'behind the
scenes'?

In Squeak I would just bring up the halo menu, then open an inspector
on the Catalog Browser window and see what model is attached.

How do I do this in Pharo these days?

Load configuration only, then look into ConfigurationOfXxx /
BaselineOfXxx class.

Herby


Thank you for the answer in advance

--Hannes











Re: [Pharo-users] How do I get a list of all packages in the catalog with a 6.0 tag?

2017-08-21 Thread Herby Vojčík

H. Hirzel wrote:

This helps me to get at the information for a particular singular
entry. I am looking for a list of all catalog entries in 6.0.


Ah, sorry. Did not get the question properly.


On 8/21/17, Herby Vojčík<he...@mailbox.sk>  wrote:

H. Hirzel wrote:

Hello

On 8/17/17, bdurin<bruno.du...@gmail.com>   wrote:

Maybe something like having the list of all packages and their version
included in a given image version on https://pharo.org could be useful.

What is the code snippet to get a list of all packages with a '6.0'
tag and their description?

The spotter brings up the Catalog Browser but how do I get 'behind the
scenes'?

In Squeak I would just bring up the halo menu, then open an inspector
on the Catalog Browser window and see what model is attached.

How do I do this in Pharo these days?

Load configuration only, then look into ConfigurationOfXxx /
BaselineOfXxx class.

Herby


Thank you for the answer in advance

--Hannes











Re: [Pharo-users] How do I get a list of all packages in the catalog with a 6.0 tag?

2017-08-21 Thread Herby Vojčík

H. Hirzel wrote:

Hello

On 8/17/17, bdurin  wrote:

Maybe something like having the list of all packages and their version
included in a given image version on https://pharo.org could be useful.


What is the code snippet to get a list of all packages with a '6.0'
tag and their description?

The spotter brings up the Catalog Browser but how do I get 'behind the scenes'?

In Squeak I would just bring up the halo menu, then open an inspector
on the Catalog Browser window and see what model is attached.

How do I do this in Pharo these days?


Load configuration only, then look into ConfigurationOfXxx / 
BaselineOfXxx class.


Herby


Thank you for the answer in advance

--Hannes






Re: [Pharo-users] Where is the installation log? Installing FileMan into Pharo 6.0-60510, which is 6.1.

2017-08-21 Thread Herby Vojčík

H. Hirzel wrote:

Hello

I wanted to install the FileMan package through the catalog into Pharo
6.0-60510 (a.k.a 6.1).

FileMan is library used by Cuis Smalltalk and  also available for
other Smalltalk dialects -  http://wiki.squeak.org/squeak/6333.

There is a FileMan entry in the catalog, but no description and no 6.0
compatibility tag. So I tried that installation.

For two seconds there was a note in the lower left corner of the
screen that there was an installation problem.

I wonder where I can find the installation log? This is necessary in
such cases in order to spot the problem.


I would look into stdout, stderr (on Windows they are saved to a file) and 
the .changes file.




However in this case the solution was to go for the README of

 https://github.com/mumez/FileMan

It has


 Gofer it
   url: 'http://squeaksource.com/MetacelloRepository';
   package: 'ConfigurationOfFileMan';
   load.
 (Smalltalk at: #ConfigurationOfFileMan) perform: #load.


Then 13 out of 16 tests pass.

Regards

Hannes






Re: [Pharo-users] [ANN] Pharo wiki , is here

2017-08-20 Thread Herby Vojčík

Stephane Ducasse wrote:

I added some links to books and blogs.


On Sat, Aug 19, 2017 at 11:20 PM, Dimitris Chloupis
  wrote:

I also turned it into a published webpage; it can be viewed from this
link

https://squarebracketassociates.github.io/PharoWiki/


Seems like another github-account-only solution, is it?


On Sat, Aug 19, 2017 at 11:54 PM Dimitris Chloupis
wrote:

Many seemed to like the idea of a Pharo wiki; I like it too. I created
one, it can be found here and is super easy to contribute to.

https://github.com/SquareBracketAssociates/PharoWiki

Will keep this thread for alerting about important updates to the wiki. Have fun
:)







Re: [Pharo-users] What is proper fix for this?

2017-08-17 Thread Herby Vojčík

Esteban Lorenzano wrote:



On 17 Aug 2017, at 10:35, Guillermo Polito > wrote:

Just a thought out of thin air: wasn't filetree supposed to provide
common ground for this kind of scenarios? If we shared a single
repository in github that would save us a lot of discussion :P


it doesn’t :)
while exporting VM-Glorp to github will simplify the process a lot,
the truth is the dialects are so different they cannot talk to each other in
general, and changes need to be applied (by hand).


I know it sounds like a pipe dream, but I had the impression that the 
non-dialect-specific part of Glorp (that is, most of it) is (was?) 
deliberately written in a subset of Smalltalk (not ifNotNil: but isNil 
ifFalse: etc.) so that it actually aimed to _be_ portable.


It would be nice for that part at least (and the high-level part with descriptors, 
mappings, queries, glorp expressions etc. is such) to actually be 
generic enough to run on "any Smalltalk out there".


Herby


but… having a github mirror to be able to diff properly is a good thing.

Esteban






Re: [Pharo-users] What is proper fix for this?

2017-08-17 Thread Herby Vojčík

Guillermo Polito wrote:



On Thu, Aug 17, 2017 at 10:32 AM, Esteban Lorenzano <esteba...@gmail.com
<mailto:esteba...@gmail.com>> wrote:


>  On 17 Aug 2017, at 10:18, Herby Vojčík <he...@mailbox.sk
<mailto:he...@mailbox.sk>> wrote:
>
>  jtuc...@objektfabrik.de <mailto:jtuc...@objektfabrik.de> wrote:
> > Herby,
> >
> > my comments were not meant to say you are not competent enough
to fix
> > Glorp. I know you have been active as the maintainer of Amber for
quite
> > a while now and know you are an experienced Smalltalker. So this
is not
>
>  Not really. Actually, never did any production-ready project in
Smalltalk. The one I do now is the first time.
>
> > an attempt to make you look incompetent or "unqualified".
> >
> > I just wanted to point out that
> >
> >  * I think that if there is a bug in Glorp, it should be communicated
> >to the maintainers in order to make sure the fix is making it into
> >newer Glorp versions and from there to all dialects that have
a port
>
>  Yeah, sure. But Esteban's mail suggested that it is a long
process, so maybe it _is_ beneficial to try to shortcut the fix at
Pharo side.
>
>  Don't know the local politics, so can't say myself.

It is not about politics :)
Basically: cincom does not maintain platforms other than their own
(and *I am not* complaining, this is a fair choice, we do the same
with pharo related things). So if we want to keep the port updated,
it is this community who has to do it.
Now, in the case of Glorp this is not easy to do, and last year we
(the consortium) spent money to get an updated port. Now, if there
is a bug and it is *we* who found it, I would like to have a fix for
our port (and of course report it to the cincom guys).

In fact… the complexity is so big in this project that every port of
Glorp from VW to Pharo is a “de facto” fork (not desired, but
necessary). And any new port/update of the port will require
important efforts we cannot make at the moment (I guess we could diff
versions and update just the changes we find… just to simplify. But
still, this is a lot of work :P).

Anyway, this is why, if we have the opportunity to fix a bug in our
platform, I would apply it regardless of the cincom process (and our
own update process).


Just a thought out of thin air: wasn't filetree supposed to provide
common ground for this kind of scenarios? If we shared a single
repository in github that would save us a lot of discussion :P


I'd prefer an on-premise git.smalltalkhub.com, but that's just me. :-(

OTOH, it could be interesting if the system it runs on were decently 
extensible, so that Smalltalk-specific plugins could be added (edit in place via 
running in SqueakJS, deep-linking-like integration when opened in an internal 
browser inside the Pharo image, ...).



cheers,
Esteban

 >
 >>of Glorp (Smalltalk is too much of a niche to be able to
stand more
 >>and more niche-ification of forks and stuff, esp. for such a
central
 >>part as Glorp which are way too important to only be
maintained by
 >>one or two developers - which they unfortunately are, at
least to my
 >>knowledge)
 >
 > :-(
 >
 >>  * I am not sure if anybody from Cincom is listening here
looking for
 >>Glorp problems, so I saw/see the danger of "private" fixes /
forks
 >>  * I fixed a few bugs in Glorp in the past just to find out that the
 >>concept was correct but the place to fix it was wrong (or at
least
 >>would not heal all related problems). Glorp is complex and it has
 >>lots of layers. It is a good example of the "avoid
responsibility"
 >>concept that was once (what a coincidence) formulated by Alan
Knight
 >>in an article named "All I've learned about object orientation I
 >>learnt from Dilbert" (or similar) - so I was glad Niall looked
into
 >
 > Yeah, the classic (that is, for me; lots of ppl out there do not
know it, though they should).
 >
 >>these and gave me feedback as well as a "full" fix
 >
 > Yeah, that would be nice.
 >
 >> So I mainly ask you to post your fix and problem description to the
 >> Glorp Mailing list / Google group. It would be a pity if your fix is
 >
 > I posted (that is, I tried to; I hope it got there).
 >
 > Maybe I should reply there with a few more words... or find out
if it got there in the first place.
 >
 >
 >> buried in some fork of Glorp.
 >>
 >> Joachim
 >
 > Herby
 >





--



Guille Polito


Research Engineer

French National Center for Scientific Research - _http://www.cnrs.fr_



*Web:* _http://guillep.github.io_

*Phone: *+33 06 52 70 66 13






Re: [Pharo-users] What is proper fix for this?

2017-08-17 Thread Herby Vojčík

jtuc...@objektfabrik.de wrote:

Herby,

my comments were not meant to say you are not competent enough to fix
Glorp. I know you have been active as the maintainer of Amber for quite
a while now and I know you are an experienced Smalltalker. So this is not


Not really. Actually, I never did any production-ready project in 
Smalltalk. The one I am doing now is the first time.



an attempt to make you look incompetent or "unqualified".

I just wanted to point out that

  * I think that if there is a bug in Glorp, it should be communicated
to the maintainers in order to make sure the fix is making it into
newer Glorp versions and from there to all dialects that have a port


Yeah, sure. But Esteban's mail suggested that it is a long process, so 
maybe it _is_ beneficial to try to shortcut the fix on the Pharo side.


I don't know the local politics, so I can't say myself.


of Glorp (Smalltalk is too much of a niche to be able to stand more
and more niche-ification of forks and stuff, esp. for such a central
part as Glorp, which is way too important to only be maintained by
one or two developers - which they unfortunately are, at least to my
knowledge)


:-(


  * I am not sure if anybody from Cincom is listening here looking for
Glorp problems, so I saw/see the danger of "private" fixes / forks
  * I fixed a few bugs in Glorp in the past just to find out that the
concept was correct but the place to fix it was wrong (or at least
would not heal all related problems). Glorp is complex and it has
lots of layers. It is a good example of the "avoid responsibility"
concept that was once (what a coincidence) formulated by Alan Knight
in an article named "All I've learned about object orientation I
learnt from Dilbert" (or similar) - so I was gad Niall looked into


Yeah, the classic (that is, for me; lots of ppl out there do not know 
it, though they should).



these and gave me feedback as well as a "full" fix


Yeah, that would be nice.


So I mainly ask you to post your fix and problem description to the
Glorp Mailing list / Google group. It would be a pity if your fix is


I posted (that is, I tried to; I hope it got there).

Maybe I should reply there with a few more words... or find out if it 
got there in the first place.




buried in some fork of Glorp.

Joachim


Herby



Re: [Pharo-users] What is proper fix for this? (was: Re: Big Glorp problem w/ type coercion, pls help)

2017-08-16 Thread Herby Vojčík

Herby Vojčík wrote:

Esteban Lorenzano wrote:

but if he is using Glorp for Pharo and cincom takes the bug and fixes
it, it still will not hit Pharo until someone ports it.
So, while I have literally no idea of what Herby is asking for, I
encourage to keep discussion also here, then solution can hit both
platforms.


Thank you.

In short, if there is a DirectMapping with a converter on the field used as a
foreign key to another table's primary key (and I put one there, as I use
UUIDs which need to be converted to/from ByteArray; in the FK as well as in
the other side's PK), a relation is created with
expressionFor:basedOn:relation: (as is done for other mappings when a
relation like #= is used). Mapping has a generic one, which correctly
takes the stValue(s) of the left side(s) and converts them to dbValue(s).


Errata: right side(s)


DirectMapping's one was heavily optimized (probably for perf reasons)
and the conversion was thus lost in the process, I presume.

The fix adds the conversion back, so I can do

where: [ :one | one agent = anAgentObject ]

and have it correctly translated to WHERE table.agentfield
= converted_to_dbvalue(anAgentObject primaryKey).
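
For reference, roughly what the fixed method looks like (a sketch along the
lines of what I put into Glorp-HerbyVojcik.127; the authoritative version is
the one in the repository, so take this as illustration only):

DirectMapping >> expressionFor: anObject basedOn: anExpression relation: aSymbol
	"Keep the optimized shape, but run plain object values through the
	mapping's converter before they end up in the generated field expression."
	| value |
	value := anObject isNil
		ifTrue: [ nil ]
		ifFalse: [ anObject isGlorpExpression
			ifTrue: [ anObject getMapping: self named: self attributeName ]
			ifFalse: [ anObject glorpIsCollection
				ifTrue: [ anObject collect: [ :each |
					self convertedDbValueOf: (attribute getValueFrom: each) ] ]
				ifFalse: [ self convertedDbValueOf: (attribute getValueFrom: anObject) ] ] ].
	^ (anExpression get: self attribute name) get: aSymbol withArguments: (Array with: value)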

Herby



Esteban


On 16 Aug 2017, at 00:07, he...@mailbox.sk wrote:

BTW I took the latter way (as method tries to be as optimized as
possible), it is in
http://smalltalkhub.com/#!/~herby/Glorp/versions/Glorp-HerbyVojcik.127,
consider merging in. Thanks.

Herby Vojčík wrote:

Hello!

I think I found the culprit. Few methods posted here:


Mapping>> expressionFor: anObject basedOn: anExpression relation:
aSymbol
"Return our expression using the object's values. e.g. if this was a
direct mapping from id->ID and the object had id: 3, then return
TABLE.ID=3. Used when rewriting object=object into field=field"

| myValue result |
myValue := self expressionFor: anObject.
result := nil.
myValue with: self join allTargetFields do: [:eachValue :eachField |
| source |
source := anExpression get: self attribute name.
source hasDescriptor ifTrue: [source := source getField: eachField].
result := (source get: aSymbol withArguments: (Array with: eachValue))
AND: result].
^result



DirectMapping>> expressionFor: anObject basedOn: anExpression relation:
aSymbol
"Return our expression using the object's values. e.g. if this was a
direct mapping from id->ID and the object had id: 3, then return
TABLE.ID=3"

| value |
value := anObject isNil
ifTrue: [nil]
ifFalse:
[anObject isGlorpExpression
ifTrue: [anObject getMapping: self named: self attributeName]
ifFalse: [anObject glorpIsCollection
ifTrue: [anObject collect: [:each | attribute getValueFrom: each]]
ifFalse: [attribute getValueFrom: anObject]]].
^(anExpression get: self attribute name) get: aSymbol withArguments:
(Array with: value)



Mapping>> expressionFor: anObject
"Return an expression

representing the value of the object. This can be

nil, an object value or values, an expression, or a collection of
expressions (for a composite key, if we're passed an expression)"

anObject isNil ifTrue: [^#(nil)].
anObject isGlorpExpression ifFalse: [
^self mappedFields collect: [:each |
self valueOfField: each fromObject: anObject]].
^self mappedFields
collect: [:each | (anObject getField: each)]



Mapping>> getValueFrom: anObject

^self attribute getValueFrom: anObject



DirectMapping>> valueOfField: aField fromObject: anObject
field = aField ifFalse: [self error: 'Mapping doesn''t describe
field'].
^self convertedDbValueOf: (self getValueFrom: anObject)



DirectMapping>> mappedFields
"Return a collection of fields that this mapping will write into any of
the containing object's rows"

^Array with: self field


The thing is, both Mapping>> expressionFor:basedOn:relation: and the
overridden DirectMapping's version eventually send

someSource get: aSymbol withArguments: (Array with: eachValue)

but in Mapping's code, the value is taken from `myValue := self
expressionFor: anObject`. which, as seen in #expressionFor: code, gets
the value via

self valueOfField: aMappedField fromObject: anObject

and indeed, if I try aDirectMapping expressionFor: anObject in the
debugger,
it gets the value of the primary key converted in the below case (that
is, as a ByteArray). This is clear from the DirectMapping>>
valueOfField:fromObject: code above, which does `self getValueFrom:
anObject` (which passes it to `attribute getValueFrom: anObject`)
_and_converts_it_.

But in the overridden DirectMapping>> expressionFor:basedOn:relation:,
the value to be passed in the

someSource get: aSymbol withArguments: (Array with: value)

is obtained by a direct

attribute getValueFrom: anObject


but _is_not_converted_. IOW, it seems this method was heavily optimized
(`attribute getValueFrom:` instead of `self getValueFrom:`, for
example), but the conversion, normally present via expressionFor: and
ultimately valueOfField:fromObject:, was optimized away as well.

Re: [Pharo-users] What is proper fix for this? (was: Re: Big Glorp problem w/ type coercion, pls help)

2017-08-16 Thread Herby Vojčík

Esteban Lorenzano wrote:

but if he is using Glorp for Pharo and Cincom takes the bug and fixes it, it 
still will not hit Pharo until someone ports it.
So, while I have literally no idea what Herby is asking for, I encourage 
keeping the discussion here as well, so the solution can hit both platforms.


Thank you.

In short, if there is a DirectMapping with a converter on the field used to 
foreign-key to the other table's primary key (and I put one there, as I use a 
UUID which needs to be converted to/from ByteArray; in the FK as well as in the 
other side's PK), a relation is created with expressionFor:basedOn:relation: 
(as is done for other mappings when a relation like #= is used). Mapping has a 
generic one, which correctly takes the stValue(s) of the left side(s) and 
converts them to dbValue(s). DirectMapping's one was heavily optimized 
(probably for performance reasons) and the conversion was thus lost in the 
process, I presume.

The fix adds the conversion back, so I can do

where: [ :one | one agent = anAgentObject ] and have it correctly translated to 
WHERE table.agentfield = converted_to_dbvalue(anAgentObject primaryKey).
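
For reference, the core of the change (the full proposed method appears in a 
message further down this thread) is simply to run the value obtained from the 
attribute through #convertedDbValueOf: before the relation is built. A minimal 
sketch of the scalar branch inside DirectMapping >> expressionFor:basedOn:relation::

"sketch only: the non-expression, non-collection branch, with the conversion added"
value := self convertedDbValueOf: (attribute getValueFrom: anObject).
^ (anExpression get: self attribute name)
	get: aSymbol withArguments: (Array with: value)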

Herby



Esteban


On 16 Aug 2017, at 00:07, he...@mailbox.sk wrote:

BTW I took the latter way (as the method tries to be as optimized as
possible), it is in
http://smalltalkhub.com/#!/~herby/Glorp/versions/Glorp-HerbyVojcik.127,
consider merging in. Thanks.

Herby Vojčík wrote:

Hello!

I think I found the culprit. Few methods posted here:


Mapping>>  expressionFor: anObject basedOn: anExpression relation:
aSymbol
"Return our expression using the object's values. e.g. if this was a
direct mapping from id->ID and the object had id: 3, then return
TABLE.ID=3. Used when rewriting object=object into field=field"

| myValue result |
myValue := self expressionFor: anObject.
result := nil.
myValue with: self join allTargetFields do: [:eachValue :eachField |
| source |
source := anExpression get: self attribute name.

source hasDescriptor ifTrue: [source := source getField: eachField].
result := (source get: aSymbol withArguments: (Array with: eachValue))
AND: result].
^result



DirectMapping>>  expressionFor: anObject basedOn: anExpression relation:
aSymbol
"Return our expression using the object's values. e.g. if this was a
direct mapping from id->ID and the object had id: 3, then return
TABLE.ID=3"

| value |
value := anObject isNil
ifTrue: [nil]
ifFalse:
[anObject isGlorpExpression
ifTrue: [anObject getMapping: self named: self attributeName]
ifFalse: [anObject glorpIsCollection
ifTrue: [anObject collect: [:each | attribute getValueFrom: each]]
ifFalse: [attribute getValueFrom: anObject]]].
^(anExpression get: self attribute name) get: aSymbol withArguments:
(Array with: value)



Mapping>>  expressionFor: anObject
"Return an expression

representing the value of the object. This can be

nil, an object value or values, an expression, or a collection of
expressions (for a composite key, if we're passed an expression)"

anObject isNil ifTrue: [^#(nil)].
anObject isGlorpExpression ifFalse: [
^self mappedFields collect: [:each |
self valueOfField: each fromObject: anObject]].
^self mappedFields
collect: [:each | (anObject getField: each)]



Mapping>>  getValueFrom: anObject

^self attribute getValueFrom: anObject



DirectMapping>>  valueOfField: aField fromObject: anObject
field = aField ifFalse: [self error: 'Mapping doesn''t describe field'].
^self convertedDbValueOf: (self getValueFrom: anObject)



DirectMapping>>  mappedFields
"Return a collection of fields that this mapping will write into any of
the containing object's rows"

^Array with: self field


The thing is, both Mapping>> expressionFor:basedOn:relation: and the

overridden DirectMapping's version eventually send

someSource get: aSymbol withArguments: (Array with: eachValue)

but in Mapping's code, the value is taken from `myValue := self
expressionFor: anObject`, which, as seen in the #expressionFor: code, gets
the value via

self valueOfField: aMappedField fromObject: anObject

and indeed, if aDirectMapping expressionFor: anObject is tried in the debugger,
it gets the value of the primary key converted in the below case (that
is, as a ByteArray). This is clear from the DirectMapping>>
valueOfField:fromObject: code above, which does `self getValueFrom:
anObject` (which passes it to `attribute getValueFrom: anObject`)
_and_converts_it_.

But in the overridden DirectMapping>>  expressionFor:basedOn:relation:,
the value to be passed in the

someSource get: aSymbol withArguments: (Array with: value)

is obtained by a direct

attribute getValueFrom: anObject


but _is_not_converted_. IOW, it seems this method was heavily optimized
(`attribute getValueFrom:` instead of `self getValueFrom:`, for
example), but the conversion, normally present via expressionFor: and
ultimately valueOfField:fromObject:, was optimized away as well.




Now, what is the correct way

[Pharo-users] What is proper fix for this? (was: Re: Big Glorp problem w/ type coercion, pls help)

2017-08-15 Thread Herby Vojčík
DirectMapping >> expressionFor: anObject basedOn: anExpression relation: aSymbol
	"Return our expression using the object's values. e.g. if this was a 
direct mapping from id->ID and the object had id: 3, then return TABLE.ID=3"

	| value |
	value := anObject isNil
		ifTrue: [nil]
		ifFalse:
			[anObject isGlorpExpression
				ifTrue: [anObject getMapping: self named: self attributeName]
				ifFalse: [anObject glorpIsCollection
					ifTrue: [anObject collect: [:each | self convertedDbValueOf: (attribute getValueFrom: each)]]
					ifFalse: [self convertedDbValueOf: (attribute getValueFrom: anObject)]]].
	^(anExpression get: self attribute name) get: aSymbol withArguments: (Array with: value)




Or something completely different?


Thanks, Herby

Herby Vojčík wrote:

Hello!

I encountered a problem with OneToOneMapping and type coercion. When
writing data, things work; when reading data, the right child of the relation
fails to convert.

I tried everything possible to inject converters (even subclassing
GlorpBlobType), but to no avail. RelationExpression passes conversion to
its left child:

convertedDbValueOf: anObject
"Assume that our types match, so we can ask either child to do the
conversion. That isn't guaranteed, but should at least work for the
common cases."
^leftChild convertedDbValueOf: anObject.

but the left child is FieldExpression in case of OneToOneMapping, which:

convertedDbValueOf: anObject
"We don't do any conversion"
^anObject

What is strange, writing works (even the OneToOneMapping, I opened the
sqlite file with an explorer), but second SELECT, one using the relation
(`state := self dao findStateByAgent: agent` in clientSync), fails with
"GlorpDatabaseReadError: Could not coerce arguments". FWIW, the first
one _does_ convert when creating bindings, as it uses MappingExpression
as left child (stepped over it in debugger).



Is it meant to be a strange case that primary key is something
non-primitive needing coercion (in this case, it is a UUID which needs
coercion to ByteArray, even if it is its subclass)?



Here's the stack of running the test which fails:

PharoDatabaseAccessor(DatabaseAccessor)>>handleError:for:
[ :ex | self handleError: ex for: command ] in [ | result |
self checkPermissionFor: command.
result := [ (self useBinding and: [ command useBinding ])
ifTrue: [ command executeBoundIn: self ]
ifFalse: [ command executeUnboundIn: self ] ]
on: Dialect error
do: [ :ex | self handleError: ex for: command ].
aBoolean
ifTrue: [ result ]
ifFalse: [ result upToEnd ] ] in
PharoDatabaseAccessor(DatabaseAccessor)>>executeCommand:returnCursor:
BlockClosure>>cull:
Context>>evaluateSignal:
Context>>handleSignal:
Error(Exception)>>signal
Error(Exception)>>signal:
ExternalLibraryFunction(Object)>>error:
ExternalLibraryFunction(Object)>>externalCallFailed
ExternalLibraryFunction(ExternalFunction)>>invokeWithArguments:
UDBCSQLite3Library>>apiBindBlob:atColumn:with:with:with:
UDBCSQLite3Library>>with:at:putBlob:
UDBCSQLite3Statement>>at:putByteArray:
UDBCSQLite3ResultSet>>execute:withIndex:withValue:
[ :v | i := self execute: statement withIndex: i withValue: v ] in
UDBCSQLite3ResultSet>>execute:withCollection:
OrderedCollection>>do:
UDBCSQLite3ResultSet>>execute:withCollection:
UDBCSQLite3ResultSet>>execute:with:on:
UDBCSQLite3Connection>>execute:with:
GlorpSQLite3Driver>>basicExecuteSQLString:binding:
PharoDatabaseAccessor>>executeCommandBound:
QuerySelectCommand(DatabaseCommand)>>executeBoundIn:
[ (self useBinding and: [ command useBinding ])
ifTrue: [ command executeBoundIn: self ]
ifFalse: [ command executeUnboundIn: self ] ] in [ | result |
self checkPermissionFor: command.
result := [ (self useBinding and: [ command useBinding ])
ifTrue: [ command executeBoundIn: self ]
ifFalse: [ command executeUnboundIn: self ] ]
on: Dialect error
do: [ :ex | self handleError: ex for: command ].
aBoolean
ifTrue: [ result ]
ifFalse: [ result upToEnd ] ] in
PharoDatabaseAccessor(DatabaseAccessor)>>executeCommand:returnCursor:
BlockClosure>>on:do:
[ | result |
self checkPermissionFor: command.
result := [ (self useBinding and: [ command useBinding ])
ifTrue: [ command executeBoundIn: self ]
ifFalse: [ command executeUnboundIn: self ] ]
on: Dialect error
do: [ :ex | self handleError: ex for: command ].
aBoolean
ifTrue: [ result ]
ifFalse: [ result upToEnd ] ] in
PharoDatabaseAccessor(DatabaseAccessor)>>executeCommand:returnCursor:
[ caught := true.
self wait.
blockValue := mutuallyExcludedBlock value ] in Semaphore>>critical:
BlockClosure>>ensure:
Semaphore>>critical:
PharoDatabaseAccessor(DatabaseAccessor)>>executeCommand:returnCursor:
[ session accessor execu

Re: [Pharo-users] Big Glorp problem w/ type coercion, pls help

2017-08-14 Thread Herby Vojčík

FYI, used a workaround:

TowergameDao >> findStateByAgent: anAgent
| workaround |
workaround := anAgent ifNotNil: [ ByteArray withAll: anAgent id ].
	^ self glorpSession readOneOf: TgState where: [ :one | one agent id = workaround ]


But it is _ugly_ (though, it actually generates the same short SQL; it 
was RelationExpression >> condensePrimaryKeyComparison which inspired me 
to do this). I am sure one of the points of Glorp is to be able to write 
the original:


findStateByAgent: anAgent
	^ self glorpSession readOneOf: TgState where: [ :one | one agent = anAgent ]


Is it true (should this work)?

Herby

Herby Vojčík wrote:

Esteban A. Maringolo wrote:

Do you have the code somewhere loadable? Reading chunk is something I
do only when everything crashed :D
Esteban A. Maringolo


I will attach the .st files... not loadable as it's in a private on-premise
git repo :-(

Thank you very much,

Herby


2017-08-14 13:44 GMT-03:00 Herby Vojčík<he...@mailbox.sk>:

Hello!

I encountered a problem with OneToOneMapping and type coercion. When
writing data, things work; when reading data, the right child of the relation
fails to convert.

I tried everything possible to inject converters (even subclassing
GlorpBlobType), but to no avail. RelationExpression passes conversion
to its
left child:

convertedDbValueOf: anObject
"Assume that our types match, so we can ask either child to do the
conversion. That isn't guaranteed, but should at least work for the
common
cases."
^leftChild convertedDbValueOf: anObject.

but the left child is FieldExpression in case of OneToOneMapping, which:

convertedDbValueOf: anObject
"We don't do any conversion"
^anObject

What is strange, writing works (even the OneToOneMapping, I opened the
sqlite file with an explorer), but second SELECT, one using the relation
(`state := self dao findStateByAgent: agent` in clientSync), fails with
"GlorpDatabaseReadError: Could not coerce arguments". FWIW, the first
one
_does_ convert when creating bindings, as it uses MappingExpression
as left
child (stepped over it in debugger).



Is it meant to be a strange case that primary key is something
non-primitive
needing coercion (in this case, it is a UUID which needs coercion to
ByteArray, even if it is its subclass)?



Here's the stack of running the test which fails:

PharoDatabaseAccessor(DatabaseAccessor)>>handleError:for:
[ :ex | self handleError: ex for: command ] in [ | result |
self checkPermissionFor: command.
result := [ (self useBinding and: [ command useBinding ])
ifTrue: [ command executeBoundIn: self ]
ifFalse: [ command executeUnboundIn: self ] ]
on: Dialect error
do: [ :ex | self handleError: ex for: command ].
aBoolean
ifTrue: [ result ]
ifFalse: [ result upToEnd ] ] in
PharoDatabaseAccessor(DatabaseAccessor)>>executeCommand:returnCursor:
BlockClosure>>cull:
Context>>evaluateSignal:
Context>>handleSignal:
Error(Exception)>>signal
Error(Exception)>>signal:
ExternalLibraryFunction(Object)>>error:
ExternalLibraryFunction(Object)>>externalCallFailed
ExternalLibraryFunction(ExternalFunction)>>invokeWithArguments:
UDBCSQLite3Library>>apiBindBlob:atColumn:with:with:with:
UDBCSQLite3Library>>with:at:putBlob:
UDBCSQLite3Statement>>at:putByteArray:
UDBCSQLite3ResultSet>>execute:withIndex:withValue:
[ :v | i := self execute: statement withIndex: i withValue: v ] in
UDBCSQLite3ResultSet>>execute:withCollection:
OrderedCollection>>do:
UDBCSQLite3ResultSet>>execute:withCollection:
UDBCSQLite3ResultSet>>execute:with:on:
UDBCSQLite3Connection>>execute:with:
GlorpSQLite3Driver>>basicExecuteSQLString:binding:
PharoDatabaseAccessor>>executeCommandBound:
QuerySelectCommand(DatabaseCommand)>>executeBoundIn:
[ (self useBinding and: [ command useBinding ])
ifTrue: [ command executeBoundIn: self ]
ifFalse: [ command executeUnboundIn: self ] ] in [ | result |
self checkPermissionFor: command.
result := [ (self useBinding and: [ command useBinding ])
ifTrue: [ command executeBoundIn: self ]
ifFalse: [ command executeUnboundIn: self ] ]
on: Dialect error
do: [ :ex | self handleError: ex for: command ].
aBoolean
ifTrue: [ result ]
ifFalse: [ result upToEnd ] ] in
PharoDatabaseAccessor(DatabaseAccessor)>>executeCommand:returnCursor:
BlockClosure>>on:do:
[ | result |
self checkPermissionFor: command.
result := [ (self useBinding and: [ command useBinding ])
ifTrue: [ command executeBoundIn: self ]
ifFalse: [ command executeUnboundIn: self ] ]
on: Dialect error
do: [ :ex | self handleError: ex for: command ].
aBoolean
ifTrue: [ result ]
ifFalse: [ result upToEnd ] ] in
PharoDatabaseAccessor(DatabaseAccessor)>>executeCommand:returnCursor:
[ caught := true.
self wait.
blockValue := mutuallyExcludedBlock value ] in Semaphore>>critical:
BlockClosure>>ensure:
Semaphore>>critical

[Pharo-users] Big Glorp problem w/ type coercion, pls help

2017-08-14 Thread Herby Vojčík

Hello!

I encountered a problem with OneToOneMapping and type coercion. When 
writing data, things work; when reading data, the right child of the relation 
fails to convert.


I tried everything possible to inject converters (even subclassing 
GlorpBlobType), but to no avail. RelationExpression passes conversion to 
its left child:


convertedDbValueOf: anObject
	"Assume that our types match, so we can ask either child to do the 
conversion. That isn't guaranteed, but should at least work for the 
common cases."

^leftChild convertedDbValueOf: anObject.

but the left child is FieldExpression in case of OneToOneMapping, which:

convertedDbValueOf: anObject
"We don't do any conversion"
^anObject

What is strange, writing works (even the OneToOneMapping, I opened the 
sqlite file with an explorer), but second SELECT, one using the relation 
(`state := self dao findStateByAgent: agent` in clientSync), fails with 
"GlorpDatabaseReadError: Could not coerce arguments". FWIW, the first 
one _does_ convert when creating bindings, as it uses MappingExpression 
as left child (stepped over it in debugger).




Is it meant to be a strange case that primary key is something 
non-primitive needing coercion (in this case, it is a UUID which needs 
coercion to ByteArray, even if it is its subclass)?
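
(For context, in Pharo a UUID is a ByteArray subclass, yet the blob binding in 
the stack below apparently only accepts a plain ByteArray; a tiny illustrative 
snippet, not taken from the project code:)

| uuid raw |
uuid := UUID new.                "a fresh UUID; UUID inherits from ByteArray"
uuid isKindOf: ByteArray.        "true"
raw := ByteArray withAll: uuid.  "plain ByteArray holding the same 16 bytes"
raw class.                       "ByteArray"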




Here's the stack of running the test which fails:

PharoDatabaseAccessor(DatabaseAccessor)>>handleError:for:
[ :ex | self handleError: ex for: command ] in [ | result |
self checkPermissionFor: command.
result := [ (self useBinding and: [ command useBinding ])
ifTrue: [ command executeBoundIn: self ]
ifFalse: [ command executeUnboundIn: self ] ]
on: Dialect error
do: [ :ex | self handleError: ex for: command ].
aBoolean
ifTrue: [ result ]
	ifFalse: [ result upToEnd ] ] in 
PharoDatabaseAccessor(DatabaseAccessor)>>executeCommand:returnCursor:

BlockClosure>>cull:
Context>>evaluateSignal:
Context>>handleSignal:
Error(Exception)>>signal
Error(Exception)>>signal:
ExternalLibraryFunction(Object)>>error:
ExternalLibraryFunction(Object)>>externalCallFailed
ExternalLibraryFunction(ExternalFunction)>>invokeWithArguments:
UDBCSQLite3Library>>apiBindBlob:atColumn:with:with:with:
UDBCSQLite3Library>>with:at:putBlob:
UDBCSQLite3Statement>>at:putByteArray:
UDBCSQLite3ResultSet>>execute:withIndex:withValue:
[ :v | i := self execute: statement withIndex: i withValue: v ] in 
UDBCSQLite3ResultSet>>execute:withCollection:

OrderedCollection>>do:
UDBCSQLite3ResultSet>>execute:withCollection:
UDBCSQLite3ResultSet>>execute:with:on:
UDBCSQLite3Connection>>execute:with:
GlorpSQLite3Driver>>basicExecuteSQLString:binding:
PharoDatabaseAccessor>>executeCommandBound:
QuerySelectCommand(DatabaseCommand)>>executeBoundIn:
[ (self useBinding and: [ command useBinding ])
ifTrue: [ command executeBoundIn: self ]
ifFalse: [ command executeUnboundIn: self ] ] in [ | result |
self checkPermissionFor: command.
result := [ (self useBinding and: [ command useBinding ])
ifTrue: [ command executeBoundIn: self ]
ifFalse: [ command executeUnboundIn: self ] ]
on: Dialect error
do: [ :ex | self handleError: ex for: command ].
aBoolean
ifTrue: [ result ]
	ifFalse: [ result upToEnd ] ] in 
PharoDatabaseAccessor(DatabaseAccessor)>>executeCommand:returnCursor:

BlockClosure>>on:do:
[ | result |
self checkPermissionFor: command.
result := [ (self useBinding and: [ command useBinding ])
ifTrue: [ command executeBoundIn: self ]
ifFalse: [ command executeUnboundIn: self ] ]
on: Dialect error
do: [ :ex | self handleError: ex for: command ].
aBoolean
ifTrue: [ result ]
	ifFalse: [ result upToEnd ] ] in 
PharoDatabaseAccessor(DatabaseAccessor)>>executeCommand:returnCursor:

[ caught := true.
self wait.
blockValue := mutuallyExcludedBlock value ] in Semaphore>>critical:
BlockClosure>>ensure:
Semaphore>>critical:
PharoDatabaseAccessor(DatabaseAccessor)>>executeCommand:returnCursor:
[ session accessor executeCommand: command returnCursor: true ] in 
SimpleQuery>>rowsFromDatabaseWithParameters:

BlockClosure>>on:do:
SimpleQuery>>rowsFromDatabaseWithParameters:
SimpleQuery(AbstractReadQuery)>>readFromDatabaseWithParameters:
SimpleQuery(AbstractReadQuery)>>executeWithParameters:in:
GlorpSession>>execute:
GlorpSession>>readOneOf:where:
TowergameDao>>findStateByAgent:
[ | agent state |
agent := self dao findAgentById: anObject agentId.
state := self dao findStateByAgent: agent.
^ NeoJSONObject new
agentId: agent id;
stateVersion: state version;
totalAnsweredQuestions:
(NeoJSONObject new
good: 0;
bad: 0;
yourself);
yourself ] in Towergame>>clientSync:
[ myUnitOfWork := self hasUnitOfWork not.
myUnitOfWork
ifTrue: [ self beginUnitOfWork ].
result := aBlock numArgs = 1
ifTrue: [ aBlock value: self ]
  

Re: [Pharo-users] Honest question, new to ecosystem: are Glorp and Garage alive?

2017-08-14 Thread Herby Vojčík

Guillermo Polito wrote:

Hi Holger,


??? :-)


Garage is not maintained as people started developing the alternative
UDBC drivers, that share the same spirit at the end. I do not know if
they share the same API though.


I did not know. So I should use ConfigurationOfGlorpSQLite instead of 
ConfigurationOfGarageGlorp as my dependency?
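
(If so, the usual catalog-style load would look roughly like this; a hedged 
sketch, and the SmalltalkHub user/project names are my assumption, not verified:)

Gofer it
	smalltalkhubUser: 'Pharo' project: 'MetaRepoForPharo60';
	configurationOf: 'GlorpSQLite';
	loadStable.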



About Glorp, I will let Esteban Maringolo ask :).


It seems from other responses that Glorp is fine. :-)


On the other side, you should check again how you downloaded Garage and
GarageGlorp because I still believe you're not using the latest
versions. The last thing I did on that was to update it to pharo 6 and
UFFI, so it is working. Moreover, I moved garage's code base to github
to use travis (because inria CI was very unstable).


I loaded what is in the Pharo6 catalog. That is, GarageGlorp, which I load 
from the Pharo6 meta repo. It then loads Garage and Glorp. The last commit in 
Garage-SQLite loaded via that (official, I hope) configuration is


Name: Garage-Sqlite3-GuillermoPolito.26
Author: GuillermoPolito
Time: 3 May 2016, 11:24:28.35761 am
UUID: 55bd20d2-c691-4065-94cc-8bf76e775410
Ancestors: Garage-Sqlite3-GuillermoPolito.25

Making it work for all platforms

which is pre-Pharo6 as far as I can tell (and the code actually looks like it).



https://github.com/guillep/garage/blob/master/.travis.yml

And travis reports that last build on pharo 6 was green (at the time I
set that up, travis had no cron jobs, so it was the last commit).

https://travis-ci.org/guillep/garage


This indeed contains many more commits beyond May 2016.

Maybe the ConfigurationOfGarage / Pharo6 catalog needs updating?


So, Garage works on pharo 6 using UFFI, but I cannot maintain it mainly
because I don't use relational databases on my daily work. Thus, this is
complicated to maintain 4 backends for 3 platforms, on 2 pharo versions,
if you don't eat your own food.

Now, documentation may be outdated :)

On Mon, Aug 14, 2017 at 11:57 AM, Herby Vojčík <he...@mailbox.sk
<mailto:he...@mailbox.sk>> wrote:

Hello!

Without wanting to offend anyone, I'd like to know, as I do not
follow the tides yet, what the state of Glorp and Garage is:
maintained / having a bright future vs. stalled?

I know about Voyage but I chose not to use it as it seems a bit
"savage" to me with automatically applying to all instances of a
class. Did I back the right horse by using Glorp (and wanting to
use Garage drivers)?

Thanks, Herby




--



Guille Polito


Research Engineer

French National Center for Scientific Research - http://www.cnrs.fr

Web: http://guillep.github.io

Phone: +33 06 52 70 66 13



Thanks, Herby



[Pharo-users] Honest question, new to ecosystem: are Glorp and Garage alive?

2017-08-14 Thread Herby Vojčík

Hello!

Without wanting to offend anyone, I'd like to know, as I do not follow 
the tides yet, what the state of Glorp and Garage is: maintained / 
having a bright future vs. stalled?


I know about Voyage but I chose not to use it as it seems a bit "savage" 
to me with automatically applying to all instances of a class. Did I 
back the right horse by using Glorp (and wanting to use Garage drivers)?


Thanks, Herby



Re: [Pharo-users] Probably a bug in Garage SQLite

2017-08-13 Thread Herby Vojčík

Stephane Ducasse wrote:

ah yes.
probably a missing rename during the migration to the new ffi.

Stef


Unfortunately there's more:

PharoDatabaseAccessor(DatabaseAccessor)>>handleError:for:
[ :ex | self handleError: ex for: command ] in [ | result |
self checkPermissionFor: command.
result := [ (self useBinding and: [ command useBinding ])
ifTrue: [ command executeBoundIn: self ]
ifFalse: [ command executeUnboundIn: self ] ]
on: Dialect error
do: [ :ex | self handleError: ex for: command ].
aBoolean
ifTrue: [ result ]
	ifFalse: [ result upToEnd ] ] in 
PharoDatabaseAccessor(DatabaseAccessor)>>executeCommand:returnCursor:

BlockClosure>>cull:
Context>>evaluateSignal:
Context>>handleSignal:
MessageNotUnderstood(Exception)>>signal
ExternalData(Object)>>doesNotUnderstand: #pointerAt:
FFIExternalArray class>>fromPointer:type:size:
GASqlite3FFI>>blobFrom:at:
GASqlite3Statement>>byteArrayAt:
GASqlite3Statement>>valueOfColumn:
GASqlite3ResultSet>>next
GASqlite3ResultSet>>rows
GAGlorpDriver>>basicExecuteSQLString:
PharoDatabaseAccessor>>basicExecuteSQLString:
PharoDatabaseAccessor>>executeCommandUnbound:
[..snip..]

Now ExternalData does not understand #pointerAt: (only ByteArray and 
FFIExternalStructureReferenceHandle do; I don't know how to fix this 
quickly... :-( )


Herby



On Sun, Aug 13, 2017 at 9:04 PM, Herby Vojčík<he...@mailbox.sk>  wrote:

Stephane Ducasse wrote:

Do you have the same bug in 5.0?
Because I do not know if garage was really tested on 60.


Don't know (don't have it), but the last commit in Garage-SQLite is
GuillermoPolito.26 from 2016. The mentioned #nbBindingOf: is not sent
anywhere in the image (except the supersend in the lone method itself), so
it definitely looks like it needs to be renamed to ffiBindingOf: for 60.

Herby



Stef

On Sun, Aug 13, 2017 at 8:41 PM, Herby Vojčík<he...@mailbox.sk>   wrote:

Stephane Ducasse wrote:

Hi Herby

On which version of Pharo are you trying? Because lot of changes
happened on Pharo 60 FFI.

Stef

On Sun, Aug 13, 2017 at 6:11 PM, Herby Vojčík<he...@mailbox.sk>wrote:

Hello!

This testing code:

| databaseFile login accessor sqlString |
   databaseFile := Smalltalk imageDirectory asFileReference /
'play.sqlite'.
   login := Login new
   database: SQLite3Platform new;
   host: '';
   port: '';
   username: '';
   password: '';
   databaseName: databaseFile fullPath asZnUrl
asString;
   yourself.
   accessor := DatabaseAccessor forLogin: login.
   accessor login.
   sqlString := 'SELECT * FROM AGENT'.
   (accessor basicExecuteSQLString: sqlString) contents inspect.

fails (Pharo 6.1, GarageGlorp #stable loaded a week ago as a project


As I wrote here, Pharo 6.1.



dependency) with:

Error: Unable to resolve external type: sqlite3

FFICallout(Object)>>error:
FFICallout>>resolveType:
FFICallout>>typeName:pointerArity:
FFICallout>>argName:indirectIndex:type:ptrArity:
FFIFunctionParser>>parseArgument
FFIFunctionParser>>parseArguments
FFIFunctionParser>>parseNamedFunction:
FFICalloutMethodBuilder>>parseSignature:
FFICalloutMethodBuilder>>generate
FFICalloutMethodBuilder>>build:
FFICalloutAPI>>function:module:
GASqlite3FFI(Object)>>ffiCall:module:
GASqlite3FFI>>apiErrorMessage:
GASqlite3FFI>>signal:with:on:
GASqlite3FFI>>checkForOk:on:
GASqlite3FFI>>prepare:on:with:
GASqlite3Statement>>prepare
GASqlite3Driver>>prepare:
GASqlite3ResultSet>>prepareStatement:
GASqlite3ResultSet>>execute:withCollection:
GASqlite3ResultSet>>execute:with:on:
GASqlite3Driver>>execute:with:
GASqlite3Driver>>execute:
GAGlorpDriver>>basicExecuteSQLString:
PharoDatabaseAccessor>>basicExecuteSQLString:
UndefinedObject>>DoIt
OpalCompiler>>evaluate
[..snip..]

Problem seems to be that in FFICallout>>resolveType:, the line

binding := resolver ffiBindingOf: name asSymbol.

actually produces nil, resolver being GASqlite3FFI class. If looking at
the
class side of GaSqlite3FFI, there is no ffiBindingOf: at all; there is

nbBindingOf: aTypeName
   ^ TypeMap at: aTypeName ifAbsent: [ super nbBindingOf: aTypeName ]

though. If I copy near-mindlessly and add:

ffiBindingOf: aTypeName
   ^ TypeMap at: aTypeName ifAbsent: [ super ffiBindingOf: aTypeName ]

then the code above fails correctly with: 'no such table: AGENT'.

Am I missing some nb<->ffi bridge? Or is there a bug in Garage SQLite
driver?

Herby











Re: [Pharo-users] Probably a bug in Garage SQLite

2017-08-13 Thread Herby Vojčík

Stephane Ducasse wrote:

Do you have the same bug in 5.0?
Because I do not know if garage was really tested on 60.


Don't know (don't have it), but the last commit in Garage-SQLite is 
GuillermoPolito.26 from 2016. The mentioned #nbBindingOf: is not sent 
anywhere in the image (except the supersend in the lone method itself), 
so it definitely looks like it needs to be renamed to ffiBindingOf: for 60.


Herby



Stef

On Sun, Aug 13, 2017 at 8:41 PM, Herby Vojčík<he...@mailbox.sk>  wrote:

Stephane Ducasse wrote:

Hi Herby

On which version of Pharo are you trying? Because lot of changes
happened on Pharo 60 FFI.

Stef

On Sun, Aug 13, 2017 at 6:11 PM, Herby Vojčík<he...@mailbox.sk>   wrote:

Hello!

This testing code:

| databaseFile login accessor sqlString |
  databaseFile := Smalltalk imageDirectory asFileReference /
'play.sqlite'.
  login := Login new
  database: SQLite3Platform new;
  host: '';
  port: '';
  username: '';
  password: '';
  databaseName: databaseFile fullPath asZnUrl
asString;
  yourself.
  accessor := DatabaseAccessor forLogin: login.
  accessor login.
  sqlString := 'SELECT * FROM AGENT'.
  (accessor basicExecuteSQLString: sqlString) contents inspect.

fails (Pharo 6.1, GarageGlorp #stable loaded a week ago as a project


As I wrote here, Pharo 6.1.



dependency) with:

Error: Unable to resolve external type: sqlite3

FFICallout(Object)>>error:
FFICallout>>resolveType:
FFICallout>>typeName:pointerArity:
FFICallout>>argName:indirectIndex:type:ptrArity:
FFIFunctionParser>>parseArgument
FFIFunctionParser>>parseArguments
FFIFunctionParser>>parseNamedFunction:
FFICalloutMethodBuilder>>parseSignature:
FFICalloutMethodBuilder>>generate
FFICalloutMethodBuilder>>build:
FFICalloutAPI>>function:module:
GASqlite3FFI(Object)>>ffiCall:module:
GASqlite3FFI>>apiErrorMessage:
GASqlite3FFI>>signal:with:on:
GASqlite3FFI>>checkForOk:on:
GASqlite3FFI>>prepare:on:with:
GASqlite3Statement>>prepare
GASqlite3Driver>>prepare:
GASqlite3ResultSet>>prepareStatement:
GASqlite3ResultSet>>execute:withCollection:
GASqlite3ResultSet>>execute:with:on:
GASqlite3Driver>>execute:with:
GASqlite3Driver>>execute:
GAGlorpDriver>>basicExecuteSQLString:
PharoDatabaseAccessor>>basicExecuteSQLString:
UndefinedObject>>DoIt
OpalCompiler>>evaluate
[..snip..]

Problem seems to be that in FFICallout>>   resolveType:, the line

binding := resolver ffiBindingOf: name asSymbol.

actually produces nil, resolver being GASqlite3FFI class. If looking at
the
class side of GaSqlite3FFI, there is no ffiBindingOf: at all; there is

nbBindingOf: aTypeName
  ^ TypeMap at: aTypeName ifAbsent: [ super nbBindingOf: aTypeName ]

though. If I copy near-mindlessly and add:

ffiBindingOf: aTypeName
  ^ TypeMap at: aTypeName ifAbsent: [ super ffiBindingOf: aTypeName ]

then the code above fails correctly with: 'no such table: AGENT'.

Am I missing some nb<->ffi bridge? Or is there a bug in Garage SQLite
driver?

Herby









Re: [Pharo-users] Probably a bug in Garage SQLite

2017-08-13 Thread Herby Vojčík

Stephane Ducasse wrote:

Hi Herby

On which version of Pharo are you trying? Because lot of changes
happened on Pharo 60 FFI.

Stef

On Sun, Aug 13, 2017 at 6:11 PM, Herby Vojčík<he...@mailbox.sk>  wrote:

Hello!

This testing code:

| databaseFile login accessor sqlString |
 databaseFile := Smalltalk imageDirectory asFileReference /
'play.sqlite'.
 login := Login new
 database: SQLite3Platform new;
 host: '';
 port: '';
 username: '';
 password: '';
 databaseName: databaseFile fullPath asZnUrl
asString;
 yourself.
 accessor := DatabaseAccessor forLogin: login.
 accessor login.
 sqlString := 'SELECT * FROM AGENT'.
 (accessor basicExecuteSQLString: sqlString) contents inspect.

fails (Pharo 6.1, GarageGlorp #stable loaded a week ago as a project


As I wrote here, Pharo 6.1.


dependency) with:

Error: Unable to resolve external type: sqlite3

FFICallout(Object)>>error:
FFICallout>>resolveType:
FFICallout>>typeName:pointerArity:
FFICallout>>argName:indirectIndex:type:ptrArity:
FFIFunctionParser>>parseArgument
FFIFunctionParser>>parseArguments
FFIFunctionParser>>parseNamedFunction:
FFICalloutMethodBuilder>>parseSignature:
FFICalloutMethodBuilder>>generate
FFICalloutMethodBuilder>>build:
FFICalloutAPI>>function:module:
GASqlite3FFI(Object)>>ffiCall:module:
GASqlite3FFI>>apiErrorMessage:
GASqlite3FFI>>signal:with:on:
GASqlite3FFI>>checkForOk:on:
GASqlite3FFI>>prepare:on:with:
GASqlite3Statement>>prepare
GASqlite3Driver>>prepare:
GASqlite3ResultSet>>prepareStatement:
GASqlite3ResultSet>>execute:withCollection:
GASqlite3ResultSet>>execute:with:on:
GASqlite3Driver>>execute:with:
GASqlite3Driver>>execute:
GAGlorpDriver>>basicExecuteSQLString:
PharoDatabaseAccessor>>basicExecuteSQLString:
UndefinedObject>>DoIt
OpalCompiler>>evaluate
[..snip..]

Problem seems to be that in FFICallout>>  resolveType:, the line

binding := resolver ffiBindingOf: name asSymbol.

actually produces nil, resolver being GASqlite3FFI class. If looking at the
class side of GaSqlite3FFI, there is no ffiBindingOf: at all; there is

nbBindingOf: aTypeName
 ^ TypeMap at: aTypeName ifAbsent: [ super nbBindingOf: aTypeName ]

though. If I copy near-mindlessly and add:

ffiBindingOf: aTypeName
 ^ TypeMap at: aTypeName ifAbsent: [ super ffiBindingOf: aTypeName ]

then the code above fails correctly with: 'no such table: AGENT'.

Am I missing some nb<->ffi bridge? Or is there a bug in Garage SQLite
driver?

Herby




[Pharo-users] Probably a bug in Garage SQLite

2017-08-13 Thread Herby Vojčík

Hello!

This testing code:

| databaseFile login accessor sqlString |
databaseFile := Smalltalk imageDirectory asFileReference / 
'play.sqlite'.
login := Login new
database: SQLite3Platform new;
host: '';
port: '';
username: '';
password: '';
databaseName: databaseFile fullPath asZnUrl asString;
yourself.
accessor := DatabaseAccessor forLogin: login.
accessor login.
sqlString := 'SELECT * FROM AGENT'.
(accessor basicExecuteSQLString: sqlString) contents inspect.

fails (Pharo 6.1, GarageGlorp #stable loaded a week ago as a project 
dependency) with:


Error: Unable to resolve external type: sqlite3

FFICallout(Object)>>error:
FFICallout>>resolveType:
FFICallout>>typeName:pointerArity:
FFICallout>>argName:indirectIndex:type:ptrArity:
FFIFunctionParser>>parseArgument
FFIFunctionParser>>parseArguments
FFIFunctionParser>>parseNamedFunction:
FFICalloutMethodBuilder>>parseSignature:
FFICalloutMethodBuilder>>generate
FFICalloutMethodBuilder>>build:
FFICalloutAPI>>function:module:
GASqlite3FFI(Object)>>ffiCall:module:
GASqlite3FFI>>apiErrorMessage:
GASqlite3FFI>>signal:with:on:
GASqlite3FFI>>checkForOk:on:
GASqlite3FFI>>prepare:on:with:
GASqlite3Statement>>prepare
GASqlite3Driver>>prepare:
GASqlite3ResultSet>>prepareStatement:
GASqlite3ResultSet>>execute:withCollection:
GASqlite3ResultSet>>execute:with:on:
GASqlite3Driver>>execute:with:
GASqlite3Driver>>execute:
GAGlorpDriver>>basicExecuteSQLString:
PharoDatabaseAccessor>>basicExecuteSQLString:
UndefinedObject>>DoIt
OpalCompiler>>evaluate
[..snip..]

Problem seems to be that in FFICallout >> resolveType:, the line

binding := resolver ffiBindingOf: name asSymbol.

actually produces nil, resolver being GASqlite3FFI class. If looking at 
the class side of GaSqlite3FFI, there is no ffiBindingOf: at all; there is


nbBindingOf: aTypeName
^ TypeMap at: aTypeName ifAbsent: [ super nbBindingOf: aTypeName ]

though. If I copy near-mindlessly and add:

ffiBindingOf: aTypeName
^ TypeMap at: aTypeName ifAbsent: [ super ffiBindingOf: aTypeName ]

then the code above fails correctly with: 'no such table: AGENT'.

Am I missing some nb<->ffi bridge? Or is there a bug in Garage SQLite 
driver?


Herby


