raise error using zig as cross compile

2020-05-20 Thread zhouhaiming

# nim c -f --cc:zig --zig.options.linker:"-target arm-linux-musleabihf" 
--zig.options.always:"-target arm-linux-musleabihf" hello.nim
command line(1, 2) Error: unknown C compiler: 'zig'. Available options are: 
gcc, switch_gcc, llvm_gcc, clang, bcc, vcc, tcc, env, icl, icc, clang_cl


Run


# nim --version
Nim Compiler Version 1.3.5 [Linux: amd64]
Compiled at 2020-05-20
Copyright (c) 2006-2020 by Andreas Rumpf

git hash: 6969a468ceaab7384d7448bfd88e47a5b24c3a97
active boot switches: -d:release


Run


# zig version
0.6.0


Run


# cat hello.nim
echo "Hello World"


Run
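
For reference, one possible workaround until a dedicated zig profile exists (an untested sketch, not an official recipe) is to drive zig through the existing clang profile. It assumes a small wrapper script named `zigcc` on the PATH that simply runs `zig cc "$@"`:

# config.nims (untested sketch): reuse the clang profile to drive zig
# through a hypothetical "zigcc" wrapper script
switch("cpu", "arm")
switch("os", "linux")
switch("cc", "clang")
switch("clang.exe", "zigcc")
switch("clang.linkerexe", "zigcc")
switch("passC", "-target arm-linux-musleabihf")
switch("passL", "-target arm-linux-musleabihf")

Run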


Re: closure procs with the javascript backend

2020-05-20 Thread dawkot
Did you try adding the closure annotation to the suspect procs yourself? If it works, 
you could create a pull request.


closure procs with the javascript backend

2020-05-20 Thread mildred
Hi,

I would like to use Nim for JavaScript-related work, and I wonder how I can use 
event handlers in Nim when the proc I want to register is a closure. Often, the 
event handler needs to access external data and therefore gets the `{.closure.}` 
annotation, and then I get the following error from the compiler:


Error: type mismatch: got <proc (event: Event){.closure.}> but expected 'proc (event: Event)'


Run

In JavaScript, it's easy to convert a closure to a function since functions are 
already first-class objects, but how do I tell the compiler that? Otherwise, can I 
work around this?

I also have a related side question: it seems that the JavaScript backend does 
not support closure iterators, and I was just wondering whether there is a 
technical reason for that or whether it simply has not been implemented in the 
compiler yet. I end up having to create iterator objects manually, which produces 
boilerplate that is strange for a language where first-class iterators are 
supported:


type
  ProcIter*[D2] = proc(): tuple[ok: bool, data: D2]
  ProcIterator*[D1,D2]  = proc(d: D1): ProcIter[D2]

proc seqIterator*[D](arr: seq[D]): ProcIter[D] =
  #mixin items
  #var it = items # Instantiate iterator
  var it = 0
  var empty: D
  
  proc next(): tuple[ok: bool, data: D] =
    #let item: D = it(data)
    #if finished(it): return (false, item)
    #else: return (true, data)
    if it >= len(arr): return (false, empty)
    result = (true, arr[it])
    it = it + 1
  
  return next



Run

And instead of a nice for loop, I have to write:


var itf = match.iterate(val)
while true:
  var it = itf()
  if it[0] == false: break
  var item = it[1]
  # ...


Run
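
One way to get the for loop back (an untested sketch reusing the ProcIter type above) is to wrap the protocol in an inline iterator, which the JS backend does support:

iterator items*[D](it: ProcIter[D]): D =
  while true:
    let (ok, data) = it()
    if not ok: break
    yield data

for item in seqIterator(@[1, 2, 3]):
  echo item

Run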


Re: How to implement observer (publish/subscribe) pattern?

2020-05-20 Thread dawkot
That should be enough; you can capture whatever type you want inside a closure. 


type
  Publisher = ref object
subs: seq[proc(data: int)]


Run
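
A minimal sketch expanding on that idea (illustrative names, not from the original post): each subscriber is a closure, so the publisher only ever deals with one proc type.

type
  Publisher = ref object
    subs: seq[proc (data: int)]

proc subscribe(p: Publisher, cb: proc (data: int)) =
  p.subs.add(cb)

proc publish(p: Publisher, data: int) =
  for cb in p.subs:
    cb(data)

proc main() =
  let pub = Publisher()
  # each subscriber captures whatever state it needs, so the publisher
  # never has to know the subscriber's type
  var total = 0
  pub.subscribe(proc (data: int) = total += data)
  var log: seq[string]
  pub.subscribe(proc (data: int) = log.add("got " & $data))
  pub.publish(3)
  pub.publish(4)
  echo total   # 7
  echo log     # @["got 3", "got 4"]

main()

Run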


How to implement observer (publish/subscribe) pattern?

2020-05-20 Thread Kosteg
The publisher has to maintain the list of subscribers.

It's easy to implement if all the subscribers are of the same type (or of 
multiple pre-defined types).

But is it possible to avoid specifying all possible subscriber types? 


Re: How can I pass shared memory between threads?

2020-05-20 Thread mratsim
You can implement atomic refcounting with destructors. See what I do in my 
[multithreading 
runtime](https://github.com/mratsim/weave/blob/33a446ca4ac6294e664d26693702e3eb1d9af326/weave/cross_thread_com/flow_events.nim#L176-L201):


type
  FlowEvent* = object
    e: EventPtr

  EventPtr = ptr object
    refCount: Atomic[int32]
    kind: EventKind
    union: EventUnion

# Refcounting is started from 0 and we avoid fetchSub with release semantics
# in the common case of only one reference being live.

proc `=destroy`*(event: var FlowEvent) =
  if event.e.isNil:
    return

  let count = event.e.refCount.load(moRelaxed)
  fence(moAcquire)
  if count == 0:
    # We have the last reference
    if not event.e.isNil:
      if event.e.kind == Iteration:
        wv_free(event.e.union.iter.singles)
      # Return memory to the memory pool
      recycle(event.e)
  else:
    discard fetchSub(event.e.refCount, 1, moRelease)
  event.e = nil

proc `=sink`*(dst: var FlowEvent, src: FlowEvent) {.inline.} =
  # Don't pay for atomic refcounting when compiler can prove there is no ref change
  `=destroy`(dst)
  system.`=sink`(dst.e, src.e)

proc `=`*(dst: var FlowEvent, src: FlowEvent) {.inline.} =
  `=destroy`(dst)
  discard fetchAdd(src.e.refCount, 1, moRelaxed)
  dst.e = src.e


Run

Multithreaded garbage collection is a very hard problem; it took Java years to 
nail it.

The `parallel` statement has a proof-of-concept array bound checking algorithm 
that can prove at compile time that array/seq accesses are safe to perform 
concurrently because the same cells are never updated by different threads. It's 
mostly a proof of concept though, and as far as I know the planned Z3 integration 
(an SMT solver) is partly motivated by extending this capability.
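
For illustration, here is the kind of code the experimental `parallel` statement accepts, roughly following the pi example in the Nim manual; compile with `--threads:on`:

import threadpool, math
{.experimental: "parallel".}

proc term(k: float): float = 4 * pow(-1, k) / (2*k + 1)

proc pi(n: int): float =
  var ch = newSeq[float](n + 1)
  parallel:
    for k in 0 .. ch.high:
      # each spawn writes a distinct cell, which the bound checker can
      # prove, so the concurrent access to the shared seq is accepted
      ch[k] = spawn term(float(k))
  for k in 0 .. ch.high:
    result += ch[k]

echo pi(5000)

Run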

If you compile with `--gc:boehm` or `--gc:arc`, memory is allocated on a shared 
heap. Though I'm currently having trouble with `--gc:arc` + `--threads:on`, its 
main motivation is easing multithreading.

Note that for data structures shared between threads you will still need to 
handle them via destructors + atomic refcounting; there is no plan to add 
atomic overhead to all types managed by the GC.

Furthermore, Nim encourages message passing (i.e. share memory by communicating 
instead of communicating by sharing memory). 


Re: Using generic objects and inheritance at the same time.

2020-05-20 Thread mratsim
I've been fighting with inheritance + generics in the past to implement layers 
of neural networks.

I think your example wouldn't work if a `Dog[Domestic]` and a `Cat[Domestic]` 
are stored in the same data structure, or if you use the value through a proc 
declared in another module.

In the end, what I do is have the generic inherited objects carry their 
"handler" in a proc field, to avoid `methods`.


Re: Using generic objects and inheritance at the same time.

2020-05-20 Thread JohnAD
Off-topic: is there an equivalent of a "when case"?

I like that Nim checks for missing branches when writing "case" statements.

I like that Nim allows for compile-time code decisions with "when" directives.

Putting those together would be spiffy. Not a top-tier feature by any means, 
but it would be nice.


Using generic objects and inheritance at the same time.

2020-05-20 Thread JohnAD
After many, many hours of trying to make this work, I finally got it to go. I 
suspect I'm not the only person to struggle with this, so I figured I'd 
document the answer here.

Rather than getting into theory, let's just jump into the code.

Nim supports inheritance for objects. An example:


# Parent object

type
  Animal = ref object of RootObj
    id: int

method sayFeet(a: Animal): string =
  result = "[" & $a.id & "] has an unknown number of feet"

# Child object

type
  Dog = ref object of Animal
    hairLength: float
    legCount: int

method sayFeet(d: Dog): string =
  result = "Dog [" & $d.id & "] has " & $d.legCount & " feet."

# Child object

type
  Cat = ref object of Animal
    whiskerCount: int
    pawCount: int

method sayFeet(c: Cat): string =
  result = "Cat [" & $c.id & "] has " & $c.pawCount & " feet."

# generic proc

proc describe(foo: Animal) =
  echo "animal detail: " & sayFeet(foo)

# use it

var
  sparky = Dog(id: 1, legCount: 4)
  mittens = Cat(id: 2, pawCount: 4)

sparky.describe()   # "animal detail: Dog [1] has 4 feet."
mittens.describe()  # "animal detail: Cat [2] has 4 feet."


Run

Yes, this and all the examples are silly. This is more about proof of concept.

Nim also supports generics in object types. Among other things, this allows for 
compile-time optimization.


type
  Domestication = enum
    Wild
    Feral
    Domestic
type
  Animal[wildness: static[Domestication]] = ref object of RootObj
    id: int

proc sayFeet(a: Animal): string =
  result = "[" & $a.id & "] has an unknown number of feet"

proc describe(foo: Animal) =
  when foo.wildness == Wild:
    echo "Wild animal detail: " & sayFeet(foo)
  elif foo.wildness == Feral:
    echo "Feral animal detail: " & sayFeet(foo)
  else:
    echo "Tame animal detail: " & sayFeet(foo)

var
  sparky = Animal[Feral](id: 1)
  mittens = Animal[Domestic](id: 2)

sparky.describe()   # "Feral animal detail: [1] has an unknown number of feet."
mittens.describe()  # "Tame animal detail: [2] has an unknown number of feet."


Run

But, can you put both ideas together? Yes, but there are apparently a few 
things to keep in mind.

An example combining both:


# Parent object

type
  Domestication = enum
    Wild
    Feral
    Domestic

type
  Animal[wildness: static[Domestication]] = ref object of RootObj
    id: int

proc sayFeet(a: Animal): string =
  result = "[" & $a.id & "] has an unknown number of feet"

# Child object

type
  Dog[wildness: static[Domestication]] = ref object of Animal[wildness]
    hairLength: float
    legCount: int

proc sayFeet(d: Dog): string =
  result = "Dog [" & $d.id & "] has " & $d.legCount & " feet."

# Child object

type
  Cat[wildness: static[Domestication]] = ref object of Animal[wildness]
    whiskerCount: int
    pawCount: int

proc sayFeet(c: Cat): string =
  result = "Cat [" & $c.id & "] has " & $c.pawCount & " feet."

# generic proc

proc describe(foo: Animal) =
  when foo.wildness == Wild:
    echo "Wild animal detail: " & sayFeet(foo)
  elif foo.wildness == Feral:
    echo "Feral animal detail: " & sayFeet(foo)
  else:
    echo "Tame animal detail: " & sayFeet(foo)

# use it

var
  sparky = Dog[Feral](id: 1, legCount: 4)
  mittens = Cat[Domestic](id: 2, pawCount: 4)

sparky.describe()   # "Feral animal detail: Dog [1] has 4 feet."
mittens.describe()  # "Tame animal detail: Cat [2] has 4 feet."


Run

Some things to watch out for:

  * For inheritance in general, you really need "ref object" rather than plain 
"object"; otherwise the child's fields get lost to object slicing when values 
are copied around.
  * Notice the `ref object of Animal[wildness]` on the child object definitions. 
The parent object reference must include the generic parameter.
  * Inheritance WITHOUT generics: use "method", not "proc". WITH generics, use 
"proc", not "method". And no, I don't know why.



Hopefully this is helpful to somebody. :-)


Re: Checking the gcc/g++ versions used to compile nim program vs the dyn linked .so

2020-05-20 Thread kaushalmodi
Turns out that this issue was because of a mixup in LD_LIBRARY_PATH. Once that 
was fixed, the nim program built even with g++ 9.1.0 links fine with the 
external_lib_64.so built with g++ 6.3.0.


How can I pass shared memory between threads?

2020-05-20 Thread Keithcat1
Hi, does Nim have some good ways to handle shared memory? The only way I found 
was manual memory management, but that can't handle inner references and, 
hello, that's the reason I'm using a garbage collected language in the first 
place :D The experimental manual says that the parallel statement supports 
shared memory, but I couldn't figure out how. If Nim doesn't have this now, are 
there future plans to improve this situation? Will there be a garbage 
collection algorithm that allocates all ref T on a shared heap, supports 
passing them between threads, and frees the memory when possible? 


Checking the gcc/g++ versions used to compile nim program vs the dyn linked .so

2020-05-20 Thread kaushalmodi
Hello,

Today I came across this runtime error when running a nim-compiled program 
that is dynamically linked with an external C++ library .so (let's call it 
`external_lib_64.so`):


could not load: external_lib_64.so
compile with -d:nimDebugDlOpen for more information


Run

So I compiled and ran the nim program with `-d:nimDebugDlOpen`, but that still 
did not give any useful information.

I knew that the `external_lib_64.so` was compiled using g++ 6.3.0. By a stroke 
of luck, it occurred to me that maybe I should check the g++ version in the 
environment where I was compiling the nim program .. and indeed it was 
different; it was g++ 8.4.0.

Once I switched to g++ 6.3.0, recompiled the nim program and ran it, that "could 
not load: external_lib_64.so" error went away and things once again ran fine as 
before.

**Questions to the community:**

  * Do we need to match the g++ versions this way all the time? I thought 
that the compiled external_lib_64.so was "bulletproof" once it was built. But 
apparently not?
  * @Araq Is it possible for the Nim compiler to detect this g++ version 
mismatch and throw a better error?



This was using the latest stable nim 1.2.0.


Re: NIM Integration Problems with Server Side Postgres

2020-05-20 Thread panchove
Good news:

It worked perfectly !!

Thanks to both!


Re: NIM Integration Problems with Server Side Postgres

2020-05-20 Thread JPLRouge
hello: -d:useMalloc

I feel like it's faster, it's true


Re: NIM Integration Problems with Server Side Postgres

2020-05-20 Thread panchove
Hello, thanks for answering

I compile and link in this way 


$(MY_LIB_NIM): *.nim Makefile
	nim c -d:release --passC:-fPIC --noMain --opt:speed --app:staticlib --outdir:$(NIM_OUTDIR) --nimcache:$(NIM_CACHEDIR) --out:$(basename $<).a --header:$(basename $<).h $<

%.o: %.c Makefile
	gcc -std=c11 -fPIC -O2 -D_GNU_SOURCE -c $< -I$(PG_SERVER_INC) -I$(NIM_INCLUDE_BASE) -I$(NIM_INCLUDE_LOCAL)

$(MY_LIB_GCC): $(MY_OBJS)
	gcc -shared -fPIC -O2 $^ $(MY_LIB_NIM) -o $@



Run

I will test with the parameters you indicate


Re: NIM Integration Problems with Server Side Postgres

2020-05-20 Thread cdome
Do you compile to a shared library? Things to check: do you call NimMain or 
setupForeignThreadGC?

Another thing I recommend trying is to compile with `--gc:arc -d:useMalloc`; 
this gives you a garbage-collector-free environment that will ease integration 
with Postgres. 


NIM Integration Problems with Server Side Postgres

2020-05-20 Thread panchove
Hello everyone

I am trying to integrate two server-side test functions with Postgres. The first 
one, nimAddOne(int): int, works fine. I integrate it like this:


# AddOne adds one to the given arg and returns it
proc nimAddOne(i: cint): cint {.exportc.} =
  result = i + 1


Run

I made the call from C this way:


#include "postgres.h"
#include "fmgr.h"
#undef HAVE_STDINT_H
#include "converter.h"

PG_FUNCTION_INFO_V1 (c_addone);

Datum
c_addone (PG_FUNCTION_ARGS)
{
int32 arg = PG_GETARG_INT32 (0);
int32 result = nimAddOne (arg);
PG_RETURN_INT32 (result);
}


Run

But when I pass strings to another function like this:


# ConcatName concatenates the name and last name
proc nimConcatName(name, last: cstring): cstring {.exportc.} =
  var r: cstring
  r = $name & "" & $last
  result = r


Run

And calling it from C like this


#include "postgres.h"
#include "fmgr.h"
#include "utilities.h"
#undef HAVE_STDINT_H
#include "converter.h"

PG_FUNCTION_INFO_V1 (c_fullname);

Datum
c_fullname (PG_FUNCTION_ARGS)
{

text * ptrName = PG_GETARG_TEXT_PP (0);
text * ptrLast = PG_GETARG_TEXT_PP (1);

const char * name = GetStringFromText (ptrName);
const char * last = GetStringFromText (ptrLast);

char * result = nimConcatName ((char *) name, (char *) last);

elog (INFO, "% s", result);

text * ptrResult = (text *) GetTextFromString (result);

PG_RETURN_TEXT_P (ptrResult);
}


Run

The Postgres server goes down hopelessly.

Could you tell me what I am doing wrong in the second function?

Thank you!

NOTES: "converter.h" contains the base NIM types after compiling the 
"converter.nim" file that contains both functions


Re: Terminal keyboard and mouse

2020-05-20 Thread JPLRouge
Change of terminal keys:

> ATTN, PROC and CALL are keyboard simulations generated by the program, not by the keyboard.

  * ATTN keyboard simulation: passes the program name to the application server
  * PROC keyboard simulation: transmits the name of the procedure to be executed internally
  * CALL keyboard simulation: transmits the name of the procedure to be executed externally

Example, if you import termcurs.nim:


var callQuery: Table[string, proc(fld : var FIELD)]

var combo  = new(GRIDSFL)
#===
proc callRefTyp(fld : var FIELD) =
  var g_pos : int = -1
  combo  = newGRID("COMBO01",2,2,20)
  
  var g_type  = defCell("Ref.Type",19,ALPHA)
  
  setHeaders(combo, @[g_type])
  addRows(combo, @["TEXT_FREE"])
  addRows(combo, @["ALPHA"])
  addRows(combo, @["ALPHA_UPPER"])
  addRows(combo, @["ALPHA_NUMERIC"])
  addRows(combo, @["ALPHA_NUMERIC_UPPER"])
  addRows(combo, @["TEXT_FULL"])
  addRows(combo, @["PASSWORD"])
  addRows(combo, @["DIGIT"])
  addRows(combo, @["DIGIT_SIGNED"])
  addRows(combo, @["DECIMAL"])
  addRows(combo, @["DECIMAL_SIGNED"])
  addRows(combo, @["DATE_ISO"])
  addRows(combo, @["DATE_FR"])
  addRows(combo, @["DATE_US"])
  addRows(combo, @["TELEPHONE"])
  addRows(combo, @["MAIL_ISO"])
  addRows(combo, @["YES_NO"])
  addRows(combo, @["SWITCH"])
  addRows(combo, @["FPROC"])
  addRows(combo, @["FCALL"])
  printGridHeader(combo)
  
  case fld.text
    of "TEXT_FREE": g_pos = 0
    of "ALPHA": g_pos = 1
    of "ALPHA_UPPER"  : g_pos = 2
    of "ALPHA_NUMERIC": g_pos = 3
    of "ALPHA_NUMERIC_UPPER"  : g_pos = 4
    of "TEXT_FULL": g_pos = 5
    of "PASSWORD" : g_pos = 6
    of "DIGIT": g_pos = 7
    of "DIGIT_SIGNED" : g_pos = 8
    of "DECIMAL"  : g_pos = 9
    of "DECIMAL_SIGNED"   : g_pos = 10
    of "DATE_ISO" : g_pos = 11
    of "DATE_FR"  : g_pos = 12
    of "DATE_US"  : g_pos = 13
    of "TELEPHONE" : g_pos = 14
    of "MAIL_ISO" : g_pos = 15
    of "YES_NO"   : g_pos = 16
    of "SWITCH"   : g_pos = 17
    of "FPROC": g_pos = 18
    of "FCALL": g_pos = 19
    else : discard
  
  while true :
    let (keys, val) = ioGrid(combo,g_pos)
    
    case keys
      of Key.Enter :
        fld.text  = $val[0]
        break
      else: discard

callQuery["callRefTyp"] = callRefTyp

#===


...
  of Key.PROC:
    if isVoidF(pnFx,Index(pnFx)) :
      callQuery[getVoidF(pnFx,Index(pnFx))] (pnFx.field[Index(pnFx)])
      fldF.reftyp = parseEnum[REFTYP](pnFx.getTextF(F_F1[Freftyp]))
      setTerminal()
      printPanel(base)


Run


Re: How mature is async/threading in Nim?

2020-05-20 Thread mratsim
Regarding multithreading, it really depends on your workload, but here are 2 
kinds of code architectures that allow mixing async and threads:

1. If the part of your code that you want threaded is stateless and only lets 
the following types cross thread boundaries:

  * plain old data types/variants
  * ref, seq and string types, but only if they are created and destroyed within 
the task and are not sent across threads
  * pointers to buffers (raw or Nim sequences) that outlive the task (for 
example pointers to matrices)



You can use a threadpool and just `spawn myFunctionCall(a, b, c)` (or Weave if 
dynamic load balancing, a parallel for loop, or producer-consumer task 
dependencies are needed).
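
A minimal sketch of that first approach with the stdlib threadpool (illustrative proc; compile with `--threads:on`):

import threadpool

proc partialSum(a, b: int): int =
  # stateless task: only plain data crosses the thread boundary
  for i in a ..< b:
    result += i

let lower = spawn partialSum(0, 500_000)
let upper = spawn partialSum(500_000, 1_000_000)
let total = ^lower + ^upper   # ^ blocks until the FlowVar is ready
echo total

Run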

2. Your threads are long-lived, maintain (independent) state, and work as 
services that need to communicate with other services. Then use channels to 
communicate between them. Nim channels support sending seqs and strings via deep 
copy.
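
A minimal sketch of that second architecture, with one long-lived worker fed through a channel (illustrative names; compile with `--threads:on`):

var chan: Channel[string]

proc worker() {.thread.} =
  while true:
    let msg = chan.recv()   # strings arrive as deep copies
    if msg == "quit": break
    echo "worker got: ", msg

var t: Thread[void]
chan.open()
createThread(t, worker)
chan.send("hello")
chan.send("quit")
joinThread(t)
chan.close()

Run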

3. You need shared state, for example a shared hash table. Now you have a 
problem :P. If it has infinite lifetime, you can allocate it on the heap and 
pass a pointer to it; if not, you need to implement atomic refcounting, which is 
not too difficult with destructors. See for example my [refcounted events in 
Weave](https://github.com/mratsim/weave/blob/17257c2f95594b566abaa7b5c1a875f1a77f3536/weave/cross_thread_com/flow_events.nim#L176-L201):


type
  FlowEvent* = object
    e: EventPtr

  EventPtr = ptr object
    refCount: Atomic[int32]
    kind: EventKind
    union: EventUnion

# Internal
#
# Refcounting is started from 0 and we avoid fetchSub with release semantics
# in the common case of only one reference being live.

proc `=destroy`*(event: var FlowEvent) =
  if event.e.isNil:
    return

  let count = event.e.refCount.load(moRelaxed)
  fence(moAcquire)
  if count == 0:
    # We have the last reference
    if not event.e.isNil:
      if event.e.kind == Iteration:
        wv_free(event.e.union.iter.singles)
      # Return memory to the memory pool
      recycle(event.e)
  else:
    discard fetchSub(event.e.refCount, 1, moRelease)
  event.e = nil

proc `=sink`*(dst: var FlowEvent, src: FlowEvent) {.inline.} =
  # Don't pay for atomic refcounting when compiler can prove there is no ref change
  `=destroy`(dst)
  system.`=sink`(dst.e, src.e)

proc `=`*(dst: var FlowEvent, src: FlowEvent) {.inline.} =
  `=destroy`(dst)
  discard fetchAdd(src.e.refCount, 1, moRelaxed)
  dst.e = src.e


Run

So for the first 2 architectures it's easy to "mix": threading and async live 
in separate domains.

I've also added facilities this weekend for Weave to run as a background 
service, so that [long-lived threads can also submit jobs to 
Weave](https://github.com/mratsim/weave#foreign-thread--background-service-experimental)
 to ease interaction with async.


Re: Change Nim colour on GitHub

2020-05-20 Thread gemath
> So along with being a neat and clean logo it also encapsulates the roots and 
> initial goal of Nim.

Agreed. Unfortunately, most people -- especially outside of the community -- 
don't know that. It doesn't come across as what it is supposed to mean, and 
that's not good communication.