Re: Python transpiler

2020-07-14 Thread lscrd
I have translated several programs from Python to Nim. Some of them were quite 
simple and the translation was easy… for a human who knows what the program is 
intended for. Others were a bit more complicated but, as these are programs I 
wrote, there is little use of dynamic features.

In any case, you have to reconstruct types from objects whose attributes have 
unknown types that must be retrieved from the context. And if an attribute is 
initialized to None (which may be frequent), you are in trouble. You also have 
to find the types of function parameters, if no annotation has been provided to 
help you (and annotations are seldom used). And you have to take care of the 
range of integers, as in Python these are arbitrary-precision numbers. 
Translating them to Nim integers without precautions could lead to some 
disappointments.
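As a Python-side illustration of that last point, Python integers silently grow beyond 64 bits, so a naive mapping to Nim's `int` can overflow (`INT64_MAX` below is just the value of Nim's `int.high` on a 64-bit machine):

```python
# Python integers are arbitrary precision: none of this can overflow.
INT64_MAX = 2 ** 63 - 1   # Nim's int.high on a 64-bit machine
x = 10 ** 19              # an ordinary Python int
print(x > INT64_MAX)      # True: would not fit in a Nim int
```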

For my programs the conversion was possible. But when I tried to convert to Nim 
some libraries I like, I failed. First, I had to find a way to translate 
inheritance, either by using Nim inheritance or by using objects with variants. 
Then I had to deal with dynamic dispatch, to find a way to pass methods as 
parameters of other methods (a nightmare). I also had to do a deep 
restructuring, as there were cross-references between modules. And I finally 
gave up, as there was too much use of dynamic typing.

In fact, it appeared impossible to simply translate on a per-module basis. The 
right way to proceed would have been to do some reverse engineering to get a 
view of the whole structure, then to write the modules from scratch, the Nim 
way, using the existing Python code only to understand the details.

As regards automatic conversion with a transpiler, I think it would only be 
possible in the simplest cases (i.e. no use of dynamic typing). Even in this 
case, the transpiler would have to find a way to remove the cross-references 
between modules. And it would have to find a way to deal with multiple 
inheritance. Even single inheritance might cause difficulties.

There have been attempts to translate Python code to other languages. 
_Shedskin_, for instance, translates to C++, but with a strong limitation: it 
can only translate pure, implicitly statically typed Python (2.4–2.6) programs.

There is also _Nuitka_, which translates to C with few limitations, but which 
produces code linked to _libpython_, the Python runtime. That explains why it 
works. Currently, it doesn’t use the native types, but the Python types. Of 
course, as it uses the Python runtime, the GIL is still a limitation.

Finally, there is _pypy_, which translates Python code to native code during 
execution with its JIT compiler. But this is something totally different.


Re: Why Seq search is faster than Table search

2020-07-06 Thread lscrd
This is strange. I get the long execution time in release mode without changing 
a line of the code.

Why is the code not optimized in my case? To get the same behavior, I have to 
use `-d:danger` instead of `-d:release`. Maybe it depends on some options of 
the C compiler.

But, at least, we have the explanation.


Re: Why Seq search is faster than Table search

2020-07-06 Thread lscrd
I changed the count from 1_000_000 to 100_000 and compiled your test with 
stable and devel in release mode, and my results are totally different. With 
version 1.2.4:


List: (seconds: 3, nanosecond: 71413798)
Table: (seconds: 0, nanosecond: 496792)


Run

So, I retried with 1_000_000 and got these results:


List: (seconds: 326, nanosecond: 776822305)
Table: (seconds: 0, nanosecond: 4693424)


Run

The times are consistent. There are 10 times more values in the list and we do 
10 times more searches, so we can expect the search to take about 100 times 
longer, which is roughly what we get. For the table, as we do 10 times more 
accesses, we need about 10 times more time, which is also roughly what we get.

The time for the table seems to be on par with what you obtain, my CPU being 
probably somewhat faster. But there is clearly a problem with the time you get 
for the list. And I have no idea what the problem could be. 


Re: First look

2020-06-19 Thread lscrd
Which compiler did you use? Your program compiles correctly with 1.2.2 and 
#devel on a Linux platform.

Note that your use of _echo_ is incorrect. As written, you print a tuple which 
contains the string "\nProcessor count " and an int. To get what you expect, 
you have to write:


echo "\nProcessor count ", countProcessors()


Run

or


echo("\nProcessor count ", countProcessors())


Run

without a space after _echo_.


Re: Visual Studio Code plugin

2020-06-18 Thread lscrd
Yes, you are right. But as the Nim syntax doesn’t change a lot, this is 
understandable. Nevertheless, there are several issues which should have been 
solved (easier said than done, of course).

The alternative plugin currently seems more active. So, it may be a better 
choice in the future. But I think that most current users of the original 
plugin do not even know that an alternative now exists. And most are satisfied 
with the original plugin despite some issues. 


Re: Visual Studio Code plugin

2020-06-18 Thread lscrd
Personally, I use the original plugin, as I was not aware that an alternative 
exists. That plugin seems in fact quite recent (first commit by Gary M on 
20 April) and, logically, there is some activity on it.

Besides, we can’t say that the original plugin is abandoned, as the last commit 
was on 26 March. And this plugin works with both the stable and development 
versions of Nim.

I have installed the alternative plugin to look at the differences. An issue 
with symbolic links I reported a long time ago (for the original plugin) is 
also present in the alternative plugin, so we can’t expect all issues to be 
solved.

It seems that the main goal of this new plugin is to provide another style of 
syntax highlighting, while fixing some syntactical bugs. I tried to compare 
the plugins and have not been convinced by this new style. But this is of 
course a question of taste.

For now, the original plugin is still maintained, even if there are old issues 
still not solved. Maybe this new plugin will be more actively maintained, or 
maybe not. Only time will tell.

You can install both plugins and do some comparisons to find the one which best 
suits your taste. Activate only one at a time of course :-).


Re: Help understanding simple string pointer indexing example

2020-04-23 Thread lscrd
I didn’t use `create`, but reading the documentation it is clear that the 
parameter `size`, whose default value is 1, is the number of elements to 
allocate. That is, for a string: 8 bytes for a pointer and, under the hood, 8 
bytes for the length, 8 bytes for the capacity and 0 bytes for the actual 
content, as the capacity is zero.

So your example is wrong. You have indeed allocated memory for `str.len` 
strings, that is 5 strings. And, as I said in my previous comment, each 
string’s capacity is zero, so there is no room to write directly into them. 
You have either to make room using `setLen` or to use `add`.

The corrected code will be, for instance:


# test.nim
proc main =
  let str = "hello"
  var sptr = string.create()   # Allocate one string.
  sptr[].setLen(str.len)   # Make room to store the chars.
  for i in 0 ..< str.len:
    sptr[][i] = str[i]
  echo sptr[]

when isMainModule:
  main()


Run


Re: Help understanding simple string pointer indexing example

2020-04-23 Thread lscrd
When you allocate the string using `create`, it is initialized with zeroes. So 
its capacity and its length are zero, as if it had been assigned `""`. Then, if 
you assign the string as a whole, it works, but not if you assign each element 
individually.

But this works:


# test.nim
proc main =
  let str = "hello"
  var sptr = string.create(str.len)
  #copyMem(sptr, unsafeAddr str, str.len) # this works
  #sptr[] = str # this also works
  # but let's try manual copy
  for i in 0 ..< str.len:
    sptr[].add(str[i])
  echo sptr[]

when isMainModule:
  main()


Run

and this also works:


# test.nim
proc main =
  let str = "hello"
  var sptr = string.create(str.len)
  #copyMem(sptr, unsafeAddr str, str.len) # this works
  #sptr[] = str # this also works
  # but let's try manual copy
  sptr[].setLen(str.len)
  for i in 0 ..< str.len:
    sptr[][i] = str[i]
  echo sptr[]

when isMainModule:
  main()


Run


Re: Help understanding proc()

2020-04-16 Thread lscrd
Sorry, but your second example is no more valid than the first one.

In both cases, as @Hlaaftana said, you need to declare the proc before 
referencing it.


Re: Error: got proc, but expected proc {.closure.}

2020-04-15 Thread lscrd
You need to provide an anonymous proc. This works:


import sugar

proc calc(a: int, b: int): (int) -> int =
  let diff = a - b
  result = proc (c: int): int =
    result = c - diff


let diff_proc = calc(5, 6)
echo diff_proc(7)


Run


Re: Destructor not called for ref object

2020-04-05 Thread lscrd
As far as I know, the default has not changed in 1.2. It’s better that way, as 
some programs will not work with `--gc:arc` without changes (if you have 
created cycles). I suppose the default is likely to change with 2.0.


Re: Destructor not called for ref object

2020-04-05 Thread lscrd
Note that I have been able to get the right result by compiling your example 
with `--gc:arc`.


Re: Custom exceptions

2020-04-04 Thread lscrd
OK, thanks.

My question was for a catchable error but, indeed, maybe I could use two 
exceptions in this case, one catchable, the other not catchable. 


Custom exceptions

2020-04-04 Thread lscrd
Hi,

Until now, the normal way to create a new exception type was to inherit from 
Exception. At least, it was the recommended way in the manual (and still is). 
In the development version and in the new 1.2 version, this is considered bad 
style and you get a warning if you inherit from Exception. It’s logical, as 
there now exist catchable and non-catchable errors.

It is now recommended to inherit from OSError, IOError, ValueError or some 
other catchable error. OK, but I have written some code using a custom 
exception, and this exception is not some kind of OSError, IOError, ValueError 
and so on. It’s an application exception which may be raised for miscellaneous 
reasons. Inheriting from ValueError (one of the most general exceptions) makes 
little sense except in some particular cases.

Of course, it doesn’t matter a lot, but I don’t like things which are not 
logical, and inheriting from ValueError would not be logical. For me, the best 
solution would be to inherit from CatchableError. Now, would this be considered 
OK or would it also be bad style?


Re: How does one get a mutable iterator?

2020-03-07 Thread lscrd
To make _datum_ mutable, you need to use _mitems_ :


import options

type T = ref object
  data: seq[Option[T]]

proc p(t: var T) =
  for datum in t.data.mitems:
    datum = none(T)


Run


Re: Why does the this code work?

2020-02-19 Thread lscrd
I have tried to declare a `varargs[string | seq[string]]` but the compiler 
rejects it. It seems that, for `varargs`, stricter rules apply to avoid some 
nasty problems.

But there may exist other problematic cases as Araq said. Adding – in a future 
version – an explicit unpacking as he suggests would simplify things (and 
Python people would say that explicit is better than implicit). Personally, I 
would not care as I have never used this feature, but `unpackVarargs` is really 
an ugly name :-).


Re: Why does the this code work?

2020-02-18 Thread lscrd
As regards implicit dereferencing: when looking for the field "a", the compiler 
has to detect that "p" is a reference or a pointer, check that the referenced 
type is an object or a tuple containing a field "a", and, if all this is OK, 
generate the code for dereferencing. I don’t see that as stylistic. Other 
languages would report an error, as "p" doesn’t have the right type.

As regards the varargs, I agree that Python has more information at runtime 
than Nim. But the expansion has to be done at compile time, not at runtime, and 
Python lacks information about the expected argument type. When you write `def 
f(x):`, you don’t give the compiler a clue about the type of "x", which may be 
a list, an int, a dict and so on. So, if I call it with a list (without the 
"*"), the compiler cannot do the expansion as it doesn’t know what is expected 
(and, to complicate things, contrary to Nim, the varargs need not be of the 
same type).
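To illustrate that last parenthesis with a small Python sketch: `*args` happily collects arguments of different types, which has no direct `varargs[T]` equivalent in Nim:

```python
def f(*args):
    # args is a plain tuple; nothing forces its elements to share a type
    return [type(a).__name__ for a in args]

print(f(1, "a", [2]))  # ['int', 'str', 'list']
```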

And your example with `printStuff(@[lines])` is wrong. If you want to pass a 
list of strings as a whole to a varargs, you have to declare the procedure to 
accept sequences of strings, not strings. "printStuff" has been declared to 
accept a `varargs[string]`, not a `varargs[seq[string]]`, so, when writing 
`printStuff(lines)`, there is no way you can get a unique argument "lines", as 
it doesn’t have the expected type "string".

But this works:


proc printStuff(x: varargs[seq[string]]) =
  for element in x:
    echo element

let lines = @["abc", "def"]

printStuff(lines)



Run

Comparing Nim and Python is frequently a bad idea, as the languages are 
fundamentally different. Python does things in a way which is logical for a 
dynamically typed language. And Nim does things another way, which is 
consistent with its nature.


Re: Why does the this code work?

2020-02-18 Thread lscrd
There is no reason to use a special syntax to tell the compiler that expansion 
is needed. Thanks to static typing, the compiler is able to see if the array or 
sequence must be transmitted as a whole to the procedure or must be expanded.

In Python, you need a special syntax. Given a function _def f(*x)_ and a list 
_lst_, if you write _f(lst)_, _x_ will be a tuple whose first (and only) 
element will be _lst_. You have to use the special syntax _f(*lst)_ to transmit 
each element of _lst_ as an individual argument; in this case, _x_ will 
contain the elements of _lst_ and not the whole list.
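A minimal Python sketch of the two calls described above:

```python
def f(*x):
    return x  # x collects the positional arguments into a tuple

lst = [1, 2, 3]
print(f(lst))   # ([1, 2, 3],) -- one argument: the whole list
print(f(*lst))  # (1, 2, 3)    -- explicit unpacking: three arguments
```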

There is another case where the Nim compiler uses its knowledge of types to 
help the programmer: the automatic dereferencing of references and pointers. 
Given a _ref object_, when accessing a field _a_ via a reference _p_, you write 
_p.a_ and the compiler knows that it must dereference _p_. In C, you would have 
to write something like _(*p).a_ or _p->a_.


Re: How to export custom exception types?

2019-12-09 Thread lscrd
Exporting the type is not enough. You need to mark the fields you want to 
export with an * too. So, here, put an * after "extraData":


type myModuleException* = ref object of Exception
  extraData*: uint32


Run


Re: Advent of Nim 2019 megathread

2019-12-01 Thread lscrd
As last year, I will use Nim and only Nim.

[https://github.com/lscrd/AdventOfCode2019](https://github.com/lscrd/AdventOfCode2019)


Re: 1.0.0 is here

2019-09-23 Thread lscrd
This is a great day for Nim. Many thanks to the development team.


Re: What do you think about the programming language NIM?

2019-07-31 Thread lscrd
Thanks for your explanation.


Re: What do you think about the programming language NIM?

2019-07-31 Thread lscrd
This is interesting. Thank you.

So the documentation is wrong when it says: “Neither inline nor closure 
iterators can be recursive”. Here: 
[https://nim-lang.org/docs/manual.html#iterators-and-the-for-statement-first-class-iterators](https://nim-lang.org/docs/manual.html#iterators-and-the-for-statement-first-class-iterators)

Or is there something I did not understand? 


Re: What do you think about the programming language NIM?

2019-07-31 Thread lscrd
> Note that Nim iterators correspond to Python generators…

I would like that to be the case, but they are less powerful. Generators can be 
recursive in Python while iterators cannot. And in Python it’s easy to use a 
generator outside a _for_ loop thanks to the _next_ function.
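For example, a Python generator can be driven manually with `next`, with no `for` loop anywhere:

```python
def countdown(n):
    while n > 0:
        yield n
        n -= 1

g = countdown(3)
print(next(g))  # 3 -- values pulled one at a time
print(next(g))  # 2
```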

But, despite these limitations, iterators in Nim are very pleasant and easy to 
use, compared with the way to define them in other languages such as Julia. 
And, I agree with you, they are amazingly useful.


Re: A few questions about procs

2019-06-16 Thread lscrd
You are right, I was too restrictive regarding static typing. Provided the code 
in the loop is compatible with each field type, it works.


Re: A few questions about procs

2019-06-16 Thread lscrd
You can iterate on an _openarray_ , of course.


proc p(x: openarray[int]) =
  for val in x:
    echo val

p([1, 2, 3])
p(@[4, 5, 6])


Run


Re: Natural is not positive

2019-06-15 Thread lscrd
It depends on the authors. I said that French people generally consider 0 as 
being neither positive nor negative. For instance, a positive temperature is 
above 0°C.

For the French mathematician group Nicolas Bourbaki, which is a well-known 
reference, zero is both positive and negative. But that does not mean that all 
mathematicians (French or not) agree with this. I rather think that it is a 
non-standard definition of positive (and negative). Bourbaki certainly had good 
reasons to do this, but I think it makes things more complicated. It’s better 
when mathematical definitions are consistent with common sense. And, in fact, I 
do not know of any other mathematicians who have adopted this point of view.

For Wikipedia in English – see 
[https://en.wikipedia.org/wiki/Positive_real_numbers](https://en.wikipedia.org/wiki/Positive_real_numbers)
 – a positive number is clearly greater than zero, and this from a 
mathematical point of view. But, even in English, your video shows that there 
is still discussion on this topic.


Re: Natural is not positive

2019-06-13 Thread lscrd
Sorry, I’m French and for me positive means strictly positive (as negative 
means strictly negative), but to avoid any ambiguity, I always use « 
strictement positif » for numbers > 0 and « positif ou nul » for numbers ≥ 0.

Some authors (Bourbaki for instance) consider that zero is both a positive and 
a negative number. But these are exceptions (see Wikipedia on this topic).


Re: rant about shr change

2019-05-30 Thread lscrd
Yes, there is indeed a change in the documentation between 0.19.6 and 0.19.9.

Personally, for signed integers I would find it more consistent to use an 
arithmetic shift to the right. With a logical shift, _-2 shr 1_ is equal to 
9223372036854775807 on a 64-bit machine, which may seem odd. We would rather 
expect to get -1.

In C, the shift to the right is arithmetic for signed integers and logical for 
unsigned integers, which makes sense. And to get a logical shift for a signed 
integer, you only have to cast it to an unsigned int.

Now, as _shr_ has always been a logical shift, I agree that changing it would 
be questionable and could break code in some subtle ways.
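For comparison, Python's `>>` is an arithmetic shift on negative integers; a logical 64-bit shift can be emulated by masking to 64 bits first, which reproduces the value quoted above:

```python
# Arithmetic shift: the sign bit is replicated.
print(-2 >> 1)  # -1

# Logical 64-bit shift, emulated by masking to 64 bits before shifting.
logical = ((-2) & 0xFFFFFFFFFFFFFFFF) >> 1
print(logical)  # 9223372036854775807
```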


Re: rant about shr change

2019-05-30 Thread lscrd
I have not been able to reproduce this behavior. On my Manjaro Linux with the 
latest development version (built with _choosenim_), _shr_ is a logical right 
shift, not an arithmetic one. This is also the case with the latest stable 
version.

And there is already an arithmetic right shift in the system module. It is a 
function (not an operator) named _ashr_.


Re: Understanding performance compared to Numpy

2019-05-10 Thread lscrd
Very interesting explanation.

As regards Python floats, they are native floats (64-bit, in IEEE 754 format), 
not arbitrary-precision floats. Only integers have arbitrary precision. So, I 
think that the rounding errors are also present in the Python program.
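This can be checked from Python itself: its `float` is an IEEE 754 double, with the usual rounding behaviour:

```python
import sys

# Python's float is an IEEE 754 double: 53-bit mantissa.
print(sys.float_info.mant_dig)  # 53
# The same rounding errors as any native double.
print(0.1 + 0.2 == 0.3)  # False
```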


Re: type mismath in simple math

2019-04-22 Thread lscrd
To add to _mratsim_ answer, you can also write:


import math

echo 2.0^2 * 2.0


Run

and even:


import math

echo 2.0^2 * 2


Run

In the latter code, the compiler converts 2 to 2.0 as it knows that a float is 
expected. So, there is some flexibility though.


Re: gintro demo with two columns in a listview / gtktreeview and sortable

2019-03-13 Thread lscrd
I have used "gintro" with a TreeView. There are some things that do not work 
the way I want (for instance, the impossibility in Gtk3 to use alternate colors 
for rows if the theme doesn’t provide them), so I still use my previous version 
in Gtk2. But here is how I proceeded using _gintro_.

To allocate the _ListStore_ , I use this code:


var types = [TYPE_STRING, TYPE_STRING]
let store = newListStore(2, addr(types))


Run

The types are missing in _gintro_, so I have defined them in another file. 
Here is the beginning of this file with the types:


import gintro/[gtk, gobject, glib]
import gintro/gdk except Window

const Lib* = "libgtk-3.so.0"
{.pragma: libprag, cdecl, dynlib: Lib.}

# GType values.
const
  TYPE_NONE* = GType(1 shl TYPE_FUNDAMENTAL_SHIFT)
  TYPE_CHAR* = GType(3 shl TYPE_FUNDAMENTAL_SHIFT)
  TYPE_BOOLEAN* = GType(5 shl TYPE_FUNDAMENTAL_SHIFT)
  TYPE_INT64* = GType(10 shl TYPE_FUNDAMENTAL_SHIFT)
  TYPE_ENUM* = GType(12 shl TYPE_FUNDAMENTAL_SHIFT)
  TYPE_STRING* = GType(16 shl TYPE_FUNDAMENTAL_SHIFT)
  TYPE_OBJECT* = GType(20 shl TYPE_FUNDAMENTAL_SHIFT)


Run

I didn’t find the _typeFromName_ function, so I don’t know if there is a 
simpler way to do what you want without using my type declarations. But, 
anyway, you need to initialize an array and pass its address to _newListStore_.

I have implemented a sort too, but not triggered by clicking on the header, so 
I cannot tell you how to catch the event. To sort, you can use a selection sort 
or use the reorder function of the Gtk3 API. I use the latter method. The 
selection sort method is simpler but uses an algorithm which is not very 
efficient (though it will be fine if you don’t have a lot of values to sort).

So, I build a list which contains tuples (value, position). In my case, the 
values are always strings, so that is easy. Then, you sort the list and use 
this sorted list to build a mapping from the new position to the old position. 
Finally, you have to call the method _reorder_ of the _ListStore_ with the 
address of the first element of the mapping as argument (not the address of the 
sequence, of course).

Here is an excerpt of the code I use:


let store = app.store
let count = store.iterNChildren()
var sortlist = newSeqOfCap[tuple[val: string, pos: int32]](count)
var iter: TreeIter

# Build a list of tuples (value, position).
discard store.getIterFirst(iter)
var idx = 0'i32
while true:
  var val: Value
  store.getValue(iter, 0, val)
  sortlist.add((val.getString().toUpper, idx))
  if not store.iterNext(iter):
    break   # No more row.
  idx += 1

# Sort the list.
sortlist.sort(system.cmp)

# Build the mapping new position -> old position.
var mapping = newSeq[int32](count)
var changed = false
for idx, item in sortlist:
  changed = changed or idx != item.pos
  mapping[idx] = item.pos
if not changed: return  # Already sorted.

# Apply the reorder mapping.
app.store.reorder(addr(mapping[0]))


Run

When you click in a cell, you have the possibility to execute a callback which 
gives you a path and a renderer. I think that it is possible to find the row 
and column numbers from the path. In my case, as I only needed the column 
number, I find it by searching the index of the renderer in a list of renderers 
I keep (I think that you need a list of renderers anyway). The path is a 
string, so there is still the possibility to parse it.

As I wrote this program several months ago, I don’t remember all the details 
but, for sure, _TreeView_ in Gtk is not a simple thing. Initially, the program 
was written in PyQt. I converted it to Nim with Gtk2, then converted it to Gtk3 
using _gintro_. And, as I said, I don’t use the Gtk3 version despite the 
_gintro_ code being cleaner. 


Re: How to immutably initialize parent object with private fields?

2019-02-15 Thread lscrd
Yes, you are right :-(. I forgot your requirement.

I fear there is no simple solution in this case, as you need to create the 
object with the right type (to get the right size) and, at this moment, you 
don’t have access to the private field. In fact, I’m sure that my first 
solution with a cast doesn’t work if extra fields exist in _ElementA_ (it 
compiles, but will cause problems at runtime).

And I’m not sure that using a macro would change anything: you need to allocate 
in the client module to get the right size and to allocate in the _lib_ module 
to have access to the private field. These are incompatible.

But there are ways to check that _initElement_ is not called several times. For 
instance, you can add a private boolean field _initialized_ in _Element_ which 
is false by default. Then _initElement_ becomes:


proc initElement(elem: Element, id: string) =
  if elem.initialized:
    # already initialized: raise some exception
    …
  elem.id = id
  elem.initialized = true


Run

Or, if you require that _id_ cannot be the empty string, you no longer need the 
_initialized_ field:


proc initElement(elem: Element, id: string) =
  if id.len == 0:
    # id must not be empty: raise some exception.
    …
  if elem.id.len != 0:
    # already initialized: raise some other exception.
    …
  elem.id = id


Run

I agree that these are not very elegant solutions. Maybe using a macro to 
initialize could solve the problem in a more elegant way. I can’t say for sure, 
as I have not used macros so far. 


Re: How to immutably initialize parent object with private fields?

2019-02-14 Thread lscrd
In this direction, to make it work, you have to do a cast:


proc newElementA(): ElementA =
  cast[ElementA](newElement("A"))


Run

Another way consists in creating a proc _initElement_ in _lib_ :


proc initElement*(elem: Element, id: string) =
  elem.id = id


Run

then to define _newElementA_ this way:


proc newElementA(): ElementA =
  new result
  result.initElement("A")


Run


Re: Nim Advocacy & Promotion Strategies

2019-01-26 Thread lscrd
No, without `global` it works and even this way:


def f():
    print(x)

x = 4
f()


Run

But the following doesn’t work, and here you do have to specify `global` (quite 
different from Nim, where the “equivalent” would be `var x = 2 * x + 1`, which 
compiles and does the right thing):


x = 4

def f():
    x = 2 * x + 1


Run
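For completeness, the second example does work once `x` is declared `global`:

```python
x = 4

def f():
    global x       # without this line, x = ... would create a local name
    x = 2 * x + 1

f()
print(x)  # 9
```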

Python rules have their own logic, but they are not easier to understand than 
Nim’s. And for sure they are quite different, as the binding is done at run 
time, whereas the Nim compiler has to know at compile time what a name refers 
to.


Re: Nim Advocacy & Promotion Strategies

2019-01-25 Thread lscrd
> There is no var keyword that you have to use when first create a variable. My 
> "implicit var" proposal was for Nim to behave the same way for b = 2 as for 
> var b = 2. You'd still get b type-inferred as int with static checking like 
> in Crystal (or like with b := 2 in Go).

But there are cases where var b = 2 and b = 2 have different meanings in Nim, 
for instance:


proc p(b: var int) =
  var b = 2


Run

and


proc p(b: var int) =
  b = 2


Run

And here:


var a = 0
# some code.
while true:
  a = 2  #


Run

compared to:


var a = 0
# some code.
while true:
  var a = 2  #


Run

So we need var to maintain the difference between a declaration with 
initialization and an assignment. Giving the user the possibility to omit var 
and letting the compiler guess what is meant would only make Nim confusing. In 
some cases b = 2 would be a declaration, in other cases an assignment, and in 
others something ambiguous. Personally, I think that this would be a mess.

And, still, I maintain that the languages have very different semantics, and 
changing the syntax to make them look closer is a trap. b = 2 in Python is, in 
Nim, equivalent neither to var b = 2 nor to b = 2, despite appearances.

At least, when Python users use var in Nim, they know:

  * that they have reserved some memory to store a value whose size is 
determined by its type;
  * that no declaration with the same name is allowed in the same block;
  * that when exiting the block containing the declaration, the memory assigned 
to the variable will be freed.



None of these statements is true in Python.
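A quick Python sketch of the last point: names are function-scoped, not block-scoped, so nothing goes away when a block ends:

```python
def f():
    for i in range(3):
        y = i * 10
    # y and i are still alive here: Python has no block scope,
    # and nothing is freed when the loop ends.
    return y, i

print(f())  # (20, 2)
```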


Re: Nim Advocacy & Promotion Strategies

2019-01-20 Thread lscrd
I don’t understand when you say that `b = 2` means `let b = 2` in Python. 
Python has neither an equivalent of `var` nor an equivalent of `let`. When you 
write `b = 2`, you simply bind the object `2` to the name `b`. And that doesn’t 
prevent you from writing `b += 1` afterwards, so it certainly doesn’t make `b` 
read-only.
need to declare variables either with `let` or `var`, as there is no similar 
concept in Python. I don’t think we should try to match syntax of languages, 
based on supposed analogies. For me, it is certainly a recipe for a lot of 
future problems as, actually, the language are quite different.

Furthermore, even as a long-time user of Python, I appreciate the use of 
`var` and `let` in Nim. In Python, the location of the first binding of a name 
(which we could consider to be its “declaration”) is frequently lost somewhere 
in the code. This is the reason why I tried to put all these first bindings in 
a common place with adequate comments (same thing for attributes in classes). 
And this is one of the reasons, too, why Python 3.7 introduced data classes.

As regards the equivalent of Python dictionaries, the _tables_ module is 
satisfying for me, even if it is less easy to use, since Nim is statically 
typed. And I certainly won’t use JSON to simulate Python dictionaries. In fact, 
the most important thing for Python users of Nim tables is that they expect 
good performance (Python dictionaries are very efficient), and I have 
encountered some surprises when using tables (for instance, extremely slow 
performance when using _clear_, which I expected to be more efficient than 
creating a new table).

There are certainly some possible improvements to make in order to attract some 
Python users, but I’m not sure it would be worth it. I have of course searched 
equivalences when using Nim, especially when converting Python code to Nim 
code. For now, for me two things have been an annoyance (due to the nature of 
the code I was converting):

  * lack of big integers; but, fortunately, there is the _bignum_ module (and 
also _bigint_ ), so this is not really a big limitation;
  * no generators; iterators are a good substitute, but they cannot be 
recursive, so you have to use a proc which returns a list, which makes the code 
more complicated and less readable; so, I would certainly like recursive 
iterators.
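As a Python-side illustration of what is missing, here is a recursive generator, something Nim's inline and closure iterators cannot express directly:

```python
def flatten(nested):
    for item in nested:
        if isinstance(item, list):
            yield from flatten(item)  # a generator calling itself
        else:
            yield item

print(list(flatten([1, [2, [3, 4]], 5])))  # [1, 2, 3, 4, 5]
```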



Now, I doubt that all Python users writing some program in Nim have the same 
requirements as me. 


Re: Nim and Project Euler

2019-01-12 Thread lscrd
I mostly agree with you and I am also rather optimistic for the future of Nim. 
The community is tiny, but composed of truly motivated Nim users. My remark 
about the small number of Nim users in Project Euler was there only to give 
true numbers, and yes, Nim is certainly a marginal language here.

As regards AoC 2018, I’m sorry :-) but the reality is a bit different from what 
you think. There are 118 registered users on the dedicated leaderboard. Only 98 
have solved at least one problem. Others may be Nim users, but have not 
participated.

Now, if you look closely at the solutions when they are available, you will 
discover that some competitors used Nim exclusively, some used Nim and another 
language, and some did not use Nim at all. The latter may be Nim users, of 
course, but they used another language for AoC. And if you wanted to compete 
for the first places, it was better to use an agile language which you master 
well.

The 118 users who subscribed are certainly interested in Nim. And some who were 
only experimenting with Nim have made very good remarks about the language. But 
you cannot say that there are 100 truly active users.

Of course, there are certainly Nim users who have not subscribed and have used 
Nim to solve some problems. Who knows?

For sure, it is difficult to get a good idea of the number of active users of 
Nim. For now, there are private users like me, contributors who write packages 
and libraries, and the development team. Nothing surprising, I think. Nim is 
still in its incubation phase.


Re: Nim and Project Euler

2019-01-12 Thread lscrd
Yes, I can understand that you don’t appreciate Project Euler tasks. They may 
seem less fun than, for example, those of Advent of Code. Most of them have a 
strong mathematical nature and not everybody likes to deal with prime numbers 
and the totient function. So, you have to like mathematics or, more accurately, 
this kind of mathematics.

As regards the usefulness of the problems, I don't think that Advent of Code or 
Python Challenge, the two other code challenges I have practiced, are any more 
useful. If you want to learn a programming language, they are of little 
interest (even Python Challenge). They are purely recreational and that’s fine 
for me. But, with Project Euler, there is one recurring theme: if you don’t 
find a clever algorithm, you will not solve the problem. Brute force is almost 
never an option, except for the first hundred problems. And, if you don’t know 
the difference between O(n) and O(n²), for instance, you will learn it… the 
hard way.

But I don’t see why the mathematical nature of Project Euler problems could 
explain why there are so few Nim users compared to users of other programming 
languages. Unless, for some reason, Nim users hate mathematics more than, 
say, Cobol users (36), Tcl users (75), Kotlin users (152) or Rust users (697).

Now, as I have said, the preferred programming language of a user may not be 
the programming language he/she actually uses. Nim is probably underrepresented, 
like other relatively new languages which don’t have the backing of a big 
company.


Re: Nim and Project Euler

2019-01-11 Thread lscrd
> Any ideas how to make it more popular? :-)

In the Project Euler community, this is not easy. Members are mostly 
mathematicians: either they use mainstream languages such as C, C++, Java and 
Python, or they use languages such as Haskell (6863 users!) or, more logically, 
Mathematica or Matlab. Those members who are computer scientists or 
engineers use whatever language they like (and there are a lot of them) 
and don’t care much about other languages, I guess.

Nevertheless, when they have solved a problem, they probably look at other 
people’s solutions, and this is the reason why I try to publish my solutions 
with a strong emphasis on the readability, elegance and performance of Nim. And 
so, I can always imagine I have convinced some programmer to try Nim :-).

One good thing to note is that we can solve any problem of Project Euler, 
provided of course that we find a good algorithm. This was not always the 
case: without modules to deal with big numbers ( _bignum_ or _bigint_ ), it 
would be really difficult. I know how I could have managed to do this but, 
probably, I would rather have chosen to use Python. Or I would have built some 
crude binding to GMP, who knows?

Outside of Project Euler, I know that there is much brainstorming to find some 
way to improve the popularity of Nim. I would like to hope that, as with 
Python, the qualities of the language will be acknowledged at last, but it 
could take a while.

Python has gained a lot of popularity among scientists, in biology, language 
analysis, statistics (in competition with R – 1220 users in Project Euler), 
etc. But, conversely, I personally encountered a lot of resistance in my 
company to simply using it. So, I think that we cannot expect to gain 
popularity with private companies. They are too conservative, too timid, except 
for sponsors of course.

Nim probably has a place to claim as a language for numerical computations. For 
instance, _arraymancer_ is really impressive. Taking a place as a language for 
efficient computations (and scientists need performance) would be a great 
achievement, I think. It would be difficult, as Python with _numpy_ , _scipy_ 
and others is well entrenched. But Python remains slow compared to what can be 
done with a compiled language like Nim. There are other competitors such as 
Julia, but I will not bet on their success.

After that, Nim must gain popularity in the open source community. When Linux 
distributions replaced their Perl scripts with Python scripts, the battle was 
over: Python had won its fight to become a mainstream language. Of course, 
Nim has to find its own place. For instance, I think it would be the best 
language to write photographic software such as _RawTherapee_ or _darktable_ , 
provided it offers good bindings to the graphical toolkits (GTK for 
_darktable_ ). And modules such as _arraymancer_ would really shine here.


Nim and Project Euler

2019-01-11 Thread lscrd
Hi all,

In another thread, someone ( _mratsim_ ) mentioned a lot of users using Nim 
for Project Euler. Maybe members of this “project” have made a lot of noise to 
create that impression, but I want to give some more precise data regarding Nim 
and Project Euler.

For those who do not know about this “project”, this is a set of mathematical 
and programming challenges. In some cases, they can be solved by hand, but 
generally, at least starting from problem 100, you have to find a clever 
algorithm to solve them with a computer and brute force is seldom the way to go.

The most used language is now Python with 52323 members (this was not the case 
several years ago). One of the most successful languages for solving the 
problems is PARI/GP, a specialized mathematical language, which has only 117 
users. Python is well suited to solving the problems, but PARI/GP is probably 
better for some of them. I tried it, but I was not very efficient with it and 
didn’t persevere.

Nim was unknown to Project Euler until I asked the team to add it to the list 
of registered languages. This was done on 2016 October 17 and, since then, Nim 
has gained 13 members. Yes, only 13 members! And only two of them have reached 
level 1 (with 26 and 32 problems solved), one has reached level 2 (with 64 
problems solved) and one has reached level 8 (with 215 problems solved).

What does it mean? Firstly, that Nim is not popular at all. Compared to its 
direct competitors, it is far behind. For instance, Rust has 697 users and D 
has 237 users (which is not very good for a pretty old language). Even Cobol 
has 36 users. To be fair, there certainly exist users whose main language is 
another one and who use Nim. And this is more likely to happen for Nim than for 
Python. But there are no statistics on the actual use of languages and, anyway, 
you are not required to publish your solutions, even if you may (I have done it 
each time I found that my solution was worth it and could give a good image of 
Nim’s elegance).

Secondly, it shows that Nim has not found an audience among people fond of 
mathematical and programming challenges. In fact, these are not a target for 
Nim, but Python has been very successful here and it’s not due to mathematical 
modules like _numpy_ , as they are never needed to solve the problems. 
Actually, Nim is very well suited to this kind of problem, with only two 
difficulties. First, there are no big integers and you have to use _bignum_. 
Second, the copy semantics need some care. And, as performance matters a lot, 
it is more comfortable to use Nim than Python (I know, I used Python before 
using Nim), even if _pypy_ may give an impressive performance boost.

Of course, Nim’s popularity on Project Euler has no importance in itself. It is 
only an indicator, and a biased one as I have said. But I wanted to give some 
precise data to remove some false hopes: no, there is no large pool of Nim 
users among the members of Project Euler.


Re: Nim Advocacy & Promotion Strategies

2019-01-11 Thread lscrd
For Python users, the syntax of Nim is not disturbing, whereas it may be an 
obstacle for those using another language. But this is a trap. Despite their 
syntactic similarities, Python and Nim are very different languages, and those 
who are not aware of the constraints of a statically typed and non-interpreted 
language may be disappointed if they try to use Nim as they use Python.

OOP is indeed one of the difficulties they will encounter, even if Python is 
more flexible than Ruby, for which OOP is mandatory. In Python, OOP is one way 
to do things, but it exists and is heavily used. Nevertheless, I think that 
this is not the main difficulty.

I have tried to convert a complicated library (named _pdfminer_ ) from Python 
to Nim and I have given up for now. OOP is used everywhere, of course, and I 
encountered some difficulties with inheritance and dynamic dispatch, mixing 
procs and methods. But the true difficulties were:

– forward references: I was constrained to change some proc interfaces, to 
split some modules and join others; this was a laborious task. Python users 
don’t care much about that; in some cases they encounter a problem, but most 
of the time (especially if they don’t use global variables) it simply works, as 
the binding of names is done at run time;

– dynamic typing: in Python, a field may contain anything; when converting to 
Nim, you first have to find all the locations where it is “assigned” a value 
(the term is not right, but it doesn’t matter); then you have either to use 
some form of inheritance, which will raise other difficulties, or to use 
variants. In fact, I used both approaches, but the restructuring is complex.
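As a minimal sketch of the variant approach (the type and field names here are hypothetical, not taken from _pdfminer_ ): a Python field that may hold an int, a string or None becomes a Nim object variant whose discriminant replaces the dynamic type.

```nim
type
  ValueKind = enum vkNone, vkInt, vkStr
  Value = object
    case kind: ValueKind   # the discriminant replaces Python's dynamic type
    of vkNone: discard
    of vkInt: i: int
    of vkStr: s: string

proc `$`(v: Value): string =
  # Every access must dispatch on the kind, which the compiler checks.
  case v.kind
  of vkNone: result = "none"
  of vkInt: result = $v.i
  of vkStr: result = v.s

var field = Value(kind: vkStr, s: "hello")
echo field           # hello
field = Value(kind: vkInt, i: 42)   # whole-object reassignment changes the kind
echo field           # 42
```

The cost is that every place the Python code read or wrote the field must be rewritten to go through the discriminant, which is where the deep restructuring comes from.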

I have never encountered such difficulties when converting my own programs from 
Python to Nim because, for me, even in Python, a field must have a type. I 
seldom use extreme dynamic typing in Python as it hurts readability. But other 
Python users don’t have this reluctance.

So, users of Python and of other scripting languages such as Ruby or Lua may 
have unpleasant surprises when trying to use Nim, at least if they don’t know 
what a statically typed language compiling to native code is.

Moreover, I doubt that Python users will find in Nim a lot of things they don’t 
find in Python. More checks at compile time, which they often see more as an 
inconvenience than as an advantage. Better performance, yes, even if _pypy_ 
does a great job compared to _cpython_. Will this make up for the loss of 
flexibility? I’m not sure.

I cannot say for sure what could attract Python users to Nim. I’m not the 
typical Python user, even if I have used the language since its version 1.6 and 
was quite a fan of it. Indeed, before using Python, I mostly used statically 
typed and non-interpreted languages, which is the reason I feel comfortable 
with Nim.

I think that users of statically typed languages which compile to native code 
could be a better target, as they can see what the advantages of Nim are 
compared to other languages in the same category. Unfortunately, Nim’s syntax 
will repel a lot of potential users who are accustomed to languages using a 
C-like syntax. This may seem a minor aspect, but it is very important. 
Nevertheless, I don’t advocate switching Nim to a C-like syntax: I would 
hate that. But this is a point to be aware of.

As regards macros, they add a lot of flexibility to the language and, so, are 
often the way to make up for Nim’s lack of flexibility when dealing with types. 
So, some people are trying to do in Nim things which would be better done in 
another language. Again, Nim has limitations due to its nature, and even macros 
will not allow dealing with types at run time (except, in a certain way, when 
using objects and inheritance, but this has a cost). The distinction between 
what is done at compile time and what is done at run time is certainly one 
thing which may cause misunderstanding and disappointment for new users.


Re: Advent of Code 2018 megathread

2018-12-16 Thread lscrd
I encountered the same difficulties. The fact that the problem provides several 
test cases shows that the creators are aware of the difficulties. There are 
several traps to fall into. And, for part 2, there are also performance issues.

For now, this problem has been the most challenging. Day 16 is easier.


Re: Cannot prove initialization, again.

2018-12-13 Thread lscrd
It seems that I was not in great shape yesterday. There are some errors in my 
message.

Indeed, a set of _range[ '0'..'9']_ will occupy 58 bits rounded to 64, not 9 
bits rounded to 16. I will have to declare _Digit_ as a _range[0..9]_ to use 
only 10 bits (not 9!) rounded to 16 and, in this case, the warning is no longer 
emitted as _Digit_ includes the null value.

For my problem I can (and will) use a _range[0..9]_. In some cases, it may not 
be the right solution, so my question is still valid, I think, and the solution 
proposed by _mratsim_ is the way to hide the warning.
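The sizes mentioned above can be checked directly: a Nim set is sized from ordinal 0 up to the highest ordinal of its element type, then rounded up to a storage unit.

```nim
type
  CharDigit = range['0'..'9']   # highest ordinal is ord('9') = 57
  Digit = range[0..9]           # highest ordinal is 9

echo sizeof(set[CharDigit])   # 8 bytes: 58 bits rounded up to 64
echo sizeof(set[Digit])       # 2 bytes: 10 bits rounded up to 16
echo sizeof(set[char])        # 32 bytes: 256 bits
```

This is why a set over a character subrange is so much larger than one over an equivalent integer subrange: the bits below ord('0') are wasted.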


Re: Cannot prove initialization, again.

2018-12-12 Thread lscrd
Thanks, it works!

But, as this only hides the warning and doesn’t suppress the check, I hope 
that, despite what is said in the message, this warning will never become a 
compile time error.


Cannot prove initialization, again.

2018-12-12 Thread lscrd
Hi,

When trying to solve some project Euler problem, I declared something like that:


import tables
const N = 16
type Digit = range['0'..'9']
var counters: array[N, CountTable[Digit]]
var exclusion: array[N, set[Digit]]
. . .



and got the usual warning message when dealing with a range which doesn't 
include the zero value (the range '0'..'9' doesn't contain the binary zero used 
for default initialization):

_Warning: Cannot prove that 'counters' is initialized. This will become a 
compile time error in the future. [ProveInit]_

Of course, I could use _char_ instead of _Digit_ , but for the sets using a 
_char_ will be less efficient (256 bits for each set instead of 9 bits rounded 
to 16). In this case, it is not the memory cost which would be a problem, but 
the performance. And anyway, using _char_ instead of _Digit_ is ugly.

Using {.noInit.} doesn't change anything, so, for now, I have to live with the 
warning.

But, is there a better way to do this?


Re: [help] indirectly imported types not working?

2018-12-09 Thread lscrd
Marking the type as exported is not sufficient. You also have to mark the field 
pos as exported (and the field dir too, I guess):


import nico/vec

type
  GameObject* = ref object of RootObj
    pos*: Vec2i

  Player* = ref object of GameObject
    dir*: int




Re: Deprecation of "round" with two arguments

2018-11-23 Thread lscrd
Yes, I agree that this function is not generally what is needed. Most of the 
time, we want a string and "format" is what should be used. I didn’t wanted to 
discuss the decision, I was just curious to know if there exists situations 
where it actually gives a wrong result.

Now, in my case, this is not a big deal. I need only rounding to 0, 1 and 2 
decimals at three places, so I changed to use explicit code: `round(x)`, 
`round(10 * x) / 10`, `round(100 * x) / 100`.
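For reference, the two-decimal replacement as a small sketch (the helper name round2 is mine, not from the standard library):

```nim
import math

# Explicit replacement for the deprecated round(x, places), here for
# two decimal places: scale up, round to integer, scale down.
proc round2(x: float): float = round(100 * x) / 100

assert round2(3.14159) == 3.14
echo round2(3.14159)   # 3.14
```

As discussed elsewhere in this thread, the result is still the nearest representable float, not the exact decimal value.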


Deprecation of "round" with two arguments

2018-11-23 Thread lscrd
Hello all,

When compiling (with development version of Nim) a module which uses the 
"round" function from the "math" module, more precisely the "round" function 
with two arguments (the second one being the number of positions after decimal 
point), I got a deprecation warning. The recommended way to round to some 
decimal position is now to use "format".

I looked at "math.nim" to better understand the reason for this deprecation, 
and it is said that the function is not reliable because there is no way to 
represent the rounded value exactly as a float. I was aware of this, of course, 
but I don’t see how using "format" could be better. As I don’t want a string 
but a float, I would have to convert the exact string representation back to a 
float, losing precision in the operation.

I have done some comparisons to check whether, for some value of "x", I could 
get a difference between _x.round(2)_ , _x.formatFloat(ffDecimal, 2).parseFloat()_ 
and the expected result (as a float). I failed to find one but, of course, I 
cannot be sure that a difference will never exist.

So, I would like to know if there is an example or a theoretical proof which 
shows that the way "round" works (multiplying, rounding, dividing) may give 
less precise results than using "format", then parsing the string back to a 
float (which will need several floating point operations). Because, to round a 
float to two digits after the decimal point for instance, it seems rather 
overkill to convert to a string (with rounding) then convert back to a float, 
when one can simply multiply by 100, round to integer then divide by 100.
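The comparison described above can be run directly; this sketch just checks a few values side by side (which is, of course, not a proof):

```nim
import math, strutils

# Compare multiply/round/divide against format-then-parse for two
# decimal places.
for x in [3.14159, 2.675, 10.0 / 3.0]:
  let viaRound = round(100 * x) / 100
  let viaFormat = x.formatFloat(ffDecimal, 2).parseFloat()
  echo x, ": ", viaRound, " vs ", viaFormat
```

Values such as 2.675, which sit just below the decimal midpoint of their binary representation, are the interesting candidates for a divergence between the two methods.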


Re: Should we get rid of style insensitivity?

2018-11-23 Thread lscrd
Thanks for the precision. But is it really style insensitivity? That’s what I 
thought until I understood that it is only a consequence of ignoring the 
underscores in identifiers, just as underscores are ignored in numbers.

So Nim’s style insensitivity may be seen only as a consequence of Nim’s case 
insensitivity and of its special usage of underscores to improve readability in 
numbers and identifiers.

Still, one can dislike it, but it’s not as arbitrary as it might seem.


Re: Should we get rid of style insensitivity?

2018-11-22 Thread lscrd
> Stating that the same group of people dislike style insensitivity, GC, etc is 
> a bold claim.

That’s not what I meant. I suppose I should have been more precise :-).

What I mean is that some people will find any reason to reject a language they 
don’t like. And why don’t they like this language? Simply because it isn’t 
their favorite language and everything that’s different is bad. Programming 
languages have always been a very controversial topic.

For me, there is a big difference between people who don’t like the case 
insensitivity in Nim but use the language, and those who pretend that they 
would use Nim if it were case sensitive. Nim has a lot of attractive features 
and I don’t believe that case insensitivity could prevent anyone from using it. 
There are so many more important things.

In fact, anything which is somewhat new has always been criticized. A new 
feature is a differentiation factor and, so, it must be fought. Python has, for 
a long time, been criticized for its syntax, which truly is a differentiation 
factor. Guido van Rossum has not changed anything and he was right. Now, 
languages with syntactically significant indentation, as Nim is, are accepted. 
And, for me, this is an important feature (but, again, not one that could 
prevent me from using languages with a traditional syntax).


Re: Should we get rid of style insensitivity?

2018-11-22 Thread lscrd
I think that changing Nim to get more users is exactly what should not be done. 
A language should not be adapted to suit the opinion of a majority of potential 
users. Rather, users have to learn other ways to work.

For me, many of those who definitively rejected Nim for its case insensitivity 
would have rejected Nim anyway for other reasons: the fact that it is not 
object oriented, its GC, the exceptions, its syntax, the usage of keywords 
rather than symbols ("and" rather than "&"), etc.

And I don’t buy the idea that there are a lot of users who would prefer case 
sensitivity. Those who are satisfied with the current situation do not say 
anything. They use Nim, that’s all. It’s easier to find people who complain 
than people who agree.

Look at other languages. Go, for instance, has chosen to use return codes 
rather than exceptions. This is an important choice, much more important than 
case sensitivity/insensitivity. A lot of people have complained. It was their 
right, but the Go designers have not changed anything. They have lost potential 
users, me for instance, but have considered that a language should follow a 
philosophy. That’s fine and I respect this.

Python is another example. When it was a young language, its syntactically 
significant indentation was a blocking point for a lot of users. At that 
moment, Guido van Rossum could have gained a lot of users by changing this. He 
didn’t. Slowly, the language received more and more attention, until even 
the most reluctant finally learned it. For me, Python is the exact example 
of what should be done: no concessions on major points. Whether case 
sensitivity/insensitivity is a major point is somewhat disputable though.


Re: Should we get rid of style insensitivity?

2018-11-21 Thread lscrd
For me, this idea of voting is strange and even makes me uncomfortable. Does 
the creator of the language agree with this procedure? If this is not the case, 
we are following a wrong path here.

Moreover, are we going to vote on each presumably controversial topic, for 
instance the syntactically significant indentation? It would be the logical 
next step, but I’m not sure it would do any good to the language, which would 
quickly look like one of these committee languages with lots of compromises to 
please a majority of users.

In fact, this is the first time I have seen such a fuss about case 
insensitivity. Before C, most languages were case insensitive. Now, if you 
choose insensitivity, you have to suffer recriminations from some users and 
have to justify why you don’t do as C does. It looks like programmers are 
getting more conservative nowadays.

Personally, I have used case sensitive and case insensitive languages without 
any problems. In both cases, I write code in an insensitive way, following the 
style guide if it exists. Using case to differentiate identifiers is really the 
wrong thing to do. So, I can switch between Python and Nim without any 
difficulty, despite Python being case sensitive (which, I think, is not the 
best choice for a language with implicit declaration).

Of course, case sensitivity is simpler for the user, but it may cause a lot of 
problems later if someone misuses it. And we can be sure it will happen. So 
case insensitivity is a better choice to avoid any problem. The only drawback 
is possible variations in the spelling of identifiers, but that can be solved 
with appropriate tools.

So, no vote for me. There are more important topics, and this case sensitivity 
vs case insensitivity debate tends to become a little too much of a religious 
affair.


Re: How does one convert from string to integer using an arbitrary radix?

2018-10-21 Thread lscrd
The message is misleading: "result" is the one at line 325 in system.nim, i.e. 
the HSlice which is returned by the proc.

When instantiating the slice (".."), the fields "a" and "b" of "result" (a 
slice) should be initialized with 0. But "b" is of type "range[8..10]" and 
cannot be initialized with this value. So, the compiler emits a warning.

To avoid this warning, you can simply write "assert c in DIGITS[0 ..< 
radix.int]".
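A minimal sketch of the idea (DIGITS and radix here are my own hypothetical reconstruction of the original context, not the poster's actual declarations): slicing the string with plain int bounds yields a substring, so no HSlice typed with the narrow integer range has to be constructed.

```nim
import strutils   # provides `contains` (and thus `in`) for char in string

const DIGITS = "0123456789"
let radix = 9    # suppose the radix is constrained elsewhere
let c = '5'

# "c in DIGITS[0 ..< radix]" checks membership in a plain string slice,
# avoiding a slice typed with a constrained range like range[8..10].
assert c in DIGITS[0 ..< radix]
```

The slice `0 ..< radix` is an HSlice[int, int], and int of course includes 0, so the ProveInit warning does not apply.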


Re: How does one declare byte array constants?

2018-10-19 Thread lscrd
In any case, you have to indicate at least that the first value is a byte. For 
instance:


const LOOKUP_TABLE: array[4, byte] = [1'u8, 2, 3, 4]



or, more elegantly:


const LOOKUP_TABLE: array[4, byte] = [byte 1, 2, 3, 4]




Re: Some OOP problems

2018-09-24 Thread lscrd
I have already encountered this error. As far as I remember, it occurs when you 
use "var" in a proc whereas the type is a "ref object". If you change this in 
your example, the error disappears.


type
  Animal* = ref object of RootObj
    age*: int

proc init(x: Animal) =
  x.age = 0

method cry(x: Animal) {.base.} =
  echo "Animal is silent..."

type Mammal* = ref object of Animal
  legs: int

proc init(x: Mammal) =
  x.Animal.init()
  x.legs = 4

method cry(x: Mammal) =
  echo "Mammal is crying..."

type
  Dog* = ref object of Mammal
    tail: int
    ears: int

proc init(x: Dog) =
  x.Mammal.init()
  x.tail = 1
  x.ears = 2

proc newDog(): Dog =
  new result
  result.init
  return result

method cry(x: Dog) =
  echo "Dog is barking..."

proc birthsday(x: Dog) =
  inc(x.age)

type
  Cat* = ref object of Mammal
    tail: int
    ears: int
    angry: bool

proc init(x: Cat) =
  x.Mammal.init()
  x.tail = 1
  x.ears = 2
  x.angry = false

proc newCat(): Cat =
  new result
  result.init
  return result

method cry(x: Cat) =
  if x.angry:
    echo "Cat says: Grrr..."
  else:
    echo "Cat says: Mmm.."

proc main =
  var s = newSeq[Animal]()
  let c = newCat()
  c.angry = true
  s.add(c)
  s.add(newDog())
  Dog(s[1]).birthsday()

  for a in s:
    a.cry()
    echo "Age: ", a.age
    if a of Dog:
      echo "A dog with ", Dog(a).tail, " tail(s)"

main()



I find this error very annoying, as it is not easy to find the cause. And, even 
if "var" is not needed here, it is nevertheless a valid construct.


Re: How to call a proc of Base class from a derived class ?

2018-09-18 Thread lscrd
Here is your example translated to Nim using a method for "cry".


type
  Animal* = ref object of RootObj
    weightOfAnimal*: int

method cry(x: Animal) {.base.} =
  echo "Animal is crying..."

type Dog* = ref object of Animal
  name*: string

proc newDog*(sName: string = ""): Dog =
  new(result)
  result.name = sName

method cry*(x: Dog) =
  procCall x.Animal.cry()
  echo "But Dog is barking..."

let myAnimal1 = newDog()
let myAnimal2 = Dog(name: "Médor")

myAnimal1.cry
myAnimal2.cry



Some explanations: I have used ref objects, but I could have used plain objects 
instead. The only reason is convenience, as I can use a proc "newDog" to create 
Dog objects.

There is no use for a proc "newAnimal", as the standard way to create Animal 
objects is sufficient (the weight is initialized to 0 anyway). There is no real 
need for a proc "newDog" here either, but I have kept it for demonstration 
purposes.

To call the Animal "cry", you need to use procCall to bypass the dynamic 
binding. And you need a conversion to Animal of course. This way, you have full 
control over what you want to call in the class hierarchy.

To create a Dog object, we can use the "newDog" proc or we can simply provide 
the values of the fields to initialize.

As for the name "Médor": this is supposed to be a common dog name in French 
(though I have never encountered a dog with this name :-)). I don’t know the 
common dog name in English.
Hope this helps.


Re: string literals should not allocate; should allocate in ROM an extra `\0`

2018-07-20 Thread lscrd
The copy doesn’t worry me if the behavior is consistent. But, there is 
something I don’t understand.

In this program


proc p() =
  let x = "abc"



there is a string copy (and an allocation) for "x".

But in this one 


proc p() =
  var a = "abc"
  let x = a



there is no string copy for "x", which may cause problems if "a" is modified 
later. "x" is indeed an alias for "a" (with all the problems of aliases which 
Nim normally avoids).

There is no risk in directly assigning the pointer in the first case. So why is 
there a questionable optimization in the second case and not in the first case, 
where it would be safe?


Re: Arbitrary Precision Integer Math Operators

2018-03-22 Thread lscrd
To add to Stephan's answer, I have tried both packages when solving puzzles 
from Project Euler:

  * "bigints" is pure Nim and has the best API in my opinion, but it is about 
twice as slow as "bignum" and has some issues (see the comments in its source); 
it is still a work in progress.
  * "bignum" uses the "gmp" wrapper and frequently uses types like "clong" in 
its API, which I don't like. But it works fine with good performance.



Curiously, I have found Python (using the "pypy" interpreter) to perform better 
when dealing with long integers than Nim with "bignum" (for instance when doing 
Rabin-Miller primality tests). So, it seems there is still room for some 
improvement.


Re: Wrong copy of sequences?

2018-03-20 Thread lscrd
Yes, I understand. Indeed, it seemed to me that this issue was quite difficult 
to solve. But it is something very unlikely to happen in real programs and, so 
far, nobody has encountered these problems. I built these examples when I 
suspected that something might be wrong in some cases; this is not actual 
code from some program I have written.


Re: Wrong copy of sequences?

2018-03-18 Thread lscrd
Yes, I think the last example is the most annoying one as, to solve it, we have 
to do a copy, which is just something we don't want to do for obvious 
performance reasons. I have tried to change the parameter type from sequence to 
openarray, with the same result. And with an array instead of a sequence, we 
get the same behavior too. So, changing the semantics for sequences would not 
be enough; we would have to change the semantics for arrays too, and kill the 
whole copy semantics of the language. Not the right way to go, I think.

Maybe a clear distinction between mutable and immutable would indeed solve the 
issue. The difficulty is to find a balance between the increased complexity of 
the language and the performance.


Re: Wrong copy of sequences?

2018-03-17 Thread lscrd
Yes, I know it works with **var**. I once wrote a statement  let b = a 
in a program, **a** being a sequence, with a comment  # No copy. Reading this 
some months later, I was not sure that there was actually no copy (and I 
thought that, in fact, a copy was needed). So I did some checks and found these 
strange behaviors.


Re: Wrong copy of sequences?

2018-03-17 Thread lscrd
I use 0.18.0, so the results may differ, of course. When running in the 
browser, I get the same results as with version 0.18.0.

Maybe the compiler does some optimization, but it cannot consider that _a_ and 
_b_ are not used in another place: they are used by _echo_.

Assigning with **var** works, of course, so it's clearly an optimization when 
assigning to a read-only object. I suppose this has been done for performance 
reasons.

I have tried with version 0.17.2. Indeed, I get a strange result in the first 
case, i.e.


@[0, 1, 2, 3]
@[54014246935360, 1]


So it seems that some bug has been fixed in version 0.18.0. For the other 
tests, the results are indeed the same.


Wrong copy of sequences?

2018-03-17 Thread lscrd
I would like to discuss a problem I have encountered and for which I have 
submitted a report on the bug tracker with a different version using 
_newSeqOfCap_.

Here is a simple program:


proc p() =
  var a = @[0, 1, 2]
  let b = a
  a.add(3)
  echo a  # @[0, 1, 2, 3]
  echo b  # @[0, 1, 2]

p()


The result is logical: _a_ and _b_ are different sequences and modifying _a_ 
doesn't change _b_.

Now, a somewhat different program.


proc p() =
  var a = @[0, 1, 2, 3]
  discard a.pop
  let b = a
  a.add(5)
  echo a  # @[0, 1, 2, 5]
  echo b  # @[0, 1, 2, 5]

p()


It seems that now _a_ and _b_ share some memory. Looking at the generated 
C code, it appears that in the first case, when adding an element, there is a 
reallocation. This is not the case in the second program, as there is enough 
room to receive the new value; but it is not clear to me why the length of _b_ 
is modified.

But, what if we don't change the length at all?


proc p() =
  var a = @[0, 1, 2, 3]
  let b = a
  a[^1] = 4
  echo a  # @[0, 1, 2, 4]
  echo b  # @[0, 1, 2, 4]

p()


This seems clearly wrong to me. Now if we replace the sequence by an array.


proc p() =
  var a = [0, 1, 2, 3]
  let b = a
  a[^1] = 4
  echo a  # [0, 1, 2, 4]
  echo b  # [0, 1, 2, 3]

p()


And looking at the C code, there is clearly a copy, which was expected.

We can also get some odd behavior with parameters.


var a = @[0, 1, 2, 3]

proc p(s: seq[int]) =
  echo s  # @[0, 1, 2, 3]
  a[^1] = 4
  echo s  # @[0, 1, 2, 4]

p(a)


I think this problem is not likely to happen frequently, but it may cause some 
trouble. What do you think of it? And how could this be solved?


Re: Big integer litterals

2018-03-08 Thread lscrd
You have not convinced me either. I have shown that on a 32-bit platform, when 
a variable is initialized without an explicit type, the current rule will cause 
problems, as the variable's size will vary according to the initialization 
value. I think this is a bigger problem than the one that concerns you, i.e. 
that, **int** being a different type on 32-bit and 64-bit platforms, some 
assignments to **int** will inevitably fail on a 32-bit machine.

This is the case with var x = 10 * 1_000_000_000 as, here, the compiler doesn't 
magically evaluate the expression as an **int64** (as it does for a literal), 
and so you will encounter exactly the situation you want to avoid (different 
behavior on different platforms). What is really ugly is that the way you write 
the initialization value changes the semantics of the program, to the point 
that, on a 32-bit machine, it compiles with the literal and fails to compile 
with the expression. From a pure logic point of view, when no size is 
specified, 10_000_000_000 should be equal to 10 * 1_000_000_000 (same value, 
same type). The rule for big literals is too _ad hoc_ and will produce this 
kind of inconsistency.
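To make the inconsistency concrete, here is a minimal sketch. The comments state what the 0.18 rules imply; I cannot verify the 32-bit behavior myself, so treat the 32-bit claims as assumptions drawn from the manual.

```nim
# As of Nim 0.18, a literal outside the int32 range gets type int64
# (special rule for big literals):
let lit = 10_000_000_000          # int64 per the special rule

# The same value written as an expression is evaluated with type int,
# so it works on a 64-bit machine but should overflow at compile time
# on a 32-bit build:
let expr = 10 * 1_000_000_000     # type int on a 64-bit machine

# Same value, different types: the way the value is written changes
# the program's semantics.
doAssert lit == expr
```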

* * *

But the language is defined this way and I don't think it will change – not 
before we have to manage **int128**, I suppose. I only asked a question; I got 
answers which explain why it is done this way, and I still think it would have 
been better to manage literals without a suffix in a more uniform way (as 
stated in the first rule). But I have not made a request for a change (which 
would have little chance of being adopted anyway). It's too minor a problem and 
I think there are more important things to do.


Re: Big integer literals

2018-03-07 Thread lscrd
@StasB

> Not sure what you mean. Can you state what you think the general rule is?

The general rule is written in the manual: "Literals without a type suffix are 
of the type **int**, unless the literal contains a dot or **E|e**, in which 
case it is of type **float**." This is the reason why I asked the question. 
Later in the manual, another rule I missed states that for **int**: "An integer 
literal that has no type suffix is of this type if it is in the range 
low(int32)..high(int32), otherwise the literal's type is **int64**.", which 
contradicts the previous rule. If I had known this second rule, I would not 
have asked the question, but probably issued a report about the inconsistency.

> Because the idea that your code can either pass or fail type checking 
> depending on where it's being compiled is absolutely bonkers.

But this is already what happens when you use _when_ conditions: you compile 
code depending on these conditions. You cannot expect to execute exactly the 
same code on all platforms, especially if it depends heavily on the **int** 
size.

Now, if you write var x = 10_000_000_000, the type of _x_ is not explicitly 
defined. It is logical to consider that it is an **int**. Adding a special rule 
specifying that, as it cannot fit in a 32-bit signed integer, it has type 
**int64** has the disadvantage of changing its type according to its value. So, 
you have to make sure that changing the value to a smaller integer (such as 
1_000_000_000) doesn't break the code. And the situation becomes really 
complicated on 32-bit platforms, as with 10_000_000_000 you get a 64-bit value 
whereas with 1_000_000_000 you get a 32-bit value. It will be very difficult to 
manage this.

But the right way to write portable code here is var x = 10_000_000_000'i64, 
var x: int64 = 10_000_000_000 or var x: int64 = 10_000_000_000'i64. Even if you 
change the value to 1_000_000_000, the code will continue to compile and 
execute on both platforms. There is no need then for a special rule which, as 
we have seen, may be dangerous on 32-bit machines. And, in the future, if we 
have to manage 128-bit integers, we will not have to add another special rule 
to give type **int128** to literals which don't fit in a 64-bit signed integer.
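The three portable forms mentioned above, as a runnable sketch (the expected behavior is the same on 32-bit and 64-bit builds, though I have only checked 64-bit):

```nim
# Each declaration pins the variable to 64 bits regardless of the
# platform's int size:
var x1 = 10_000_000_000'i64          # type suffix on the literal
var x2: int64 = 10_000_000_000       # explicit type annotation
var x3: int64 = 10_000_000_000'i64   # both

# Shrinking the value later keeps the type, hence the behavior, unchanged:
x1 = 1_000_000_000'i64
echo x1 is int64  # true
```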

@mashingan

According to the second rule, a big literal on a 32 bits machine will be of 
type **int64** , so it will be impossible to assign it to an **int** whatever 
its size. So there is no risk and this is an advantage of the rule.

Without this rule (keeping only the first one, which gives type **int** to 
literals without a suffix), if people used to working on 64-bit machines forget 
to specify the suffix, they will see an error at the first compilation on a 
32-bit machine.

The only problem is when porting a program written without care on a 64 bits 
machine to a 32 bits machine. But I think that, in this case, other problems, 
not related to this one, will occur. It's unlikely that a program depending on 
integer size will compile and execute without error on another platform, if not 
designed carefully (and tested on this platform). For this reason, this problem 
with big literals may not be so important.


Re: Big integer literals

2018-03-07 Thread lscrd
I don't see why 10_000_000_000 could not be an **int** on a 64-bit platform and 
produce an error on a 32-bit platform. As there is always the possibility to 
write 10_000_000_000'i64, this is not a restriction.

Furthermore, it would make things consistent, as const c = 10 * 1_000_000_000 
would give an **int** on a 64-bit machine and an error on a 32-bit machine (or 
would it give an **int64**? I don't have a 32-bit machine to check this, but it 
seems unlikely).

But I will not fight about this point, which is minor. As long as it is clearly 
stated that big literals are **int64**, I think I can live with that and use a 
conversion to get an **int** on 64-bit platforms. The real issue is the small 
inconsistency in the manual which misled me.


Re: Big integer literals

2018-03-06 Thread lscrd
Thank you all for your answers. It seems that my first replies are still 
blocked.

@Stefan_Salewski

I didn't remember this paragraph, but indeed the compiler does what is written 
in the manual. However, in "Numerical constants" it is said that "Literals 
without a type suffix are of the type **int**, unless the literal contains a 
dot or **E|e**, in which case it is of type **float**." So, there is a 
contradiction or, at least, an imprecision here. Not a big deal, but it has 
confused me.

Now, to define an **int** literal greater than high(int32) or less than 
low(int32), we have to apply a conversion, whereas it is possible to directly 
define a literal with the right type for other integer types. So, it would be 
logical to add a suffix **'i** for this purpose. Not that I think this should 
be a high priority.

@Stefan_Salewski again.

To answer your first question, I have used **gintro** when converting a program 
from **gtk2** to **gtk3** (running on my 64-bit Manjaro Linux). I have been one 
of your first users and I have issued many reports about bugs or wishes (in 
fact, I issued the majority of the reports at that time). Incidentally, it is 
great work you have done.


Re: Unable to reply?

2018-03-06 Thread lscrd
Seems OK now.


Re: Unable to reply?

2018-03-06 Thread lscrd
Thank you. I will wait until my messages appear in the thread.


Unable to reply?

2018-03-06 Thread lscrd
Hi. I submitted my first post some time ago without any problem, but my replies 
do not appear in the thread. To reply, I click on "reply", type my message, 
click on "preview" (to check that everything is OK), then on "submit". The 
window closes, but that's all. I have tried three times: first with the initial 
reply, second with a new reply some time later, and third with the same reply 
after deactivating "NoScript". Without success.

What am I doing wrong?


Big integer literals

2018-03-03 Thread lscrd
Hi. This is my first post on this forum, but I have used Nim for some time now 
(in particular, the Gtk3 bindings).

Compiling one of my old programs with the new 0.18 version, I have encountered 
an error which can be summed up in a single statement:


var x: int = 10_000_000_000


The error is _type mismatch: got 'int64' but expected 'int'_

As my computer is a 64-bit machine and **int** is represented by 8 bytes, I 
expected 10_000_000_000 to be of type **int**, not **int64** (as described in 
the user manual).

Is there an explanation for this, or is it a bug (which is also present in 
0.17.2)?