Re: Resolve: Error: parallel 'fields' iterator does not work for 'case' objects

2018-03-17 Thread monster
In my "full modules", I have something like 22 types supported now. avAnySeq is 
the _last_ one to be checked, and the only one to fail (except for the 
float/float64 case). If I understand your reply properly, it's just "luck" that 
it only fails for avAnySeq? Maybe I shouldn't use "42" everywhere as a test 
value...


Re: Case type and float: compiler seems to confuse float and float64

2018-03-17 Thread monster
So, it's a "feature" and not a bug? Is there any known rationale for treating 
float differently from int and uint?


Resolve: Error: parallel 'fields' iterator does not work for 'case' objects

2018-03-17 Thread monster
As I said in the previous post, I'm trying to create an "Any Value" case type. 
Another problem I have is supporting "seq[AnyValue]". A seq of anything else 
seems to work. Maybe it's a type recursion problem? Is there a work-around?

Here's the example:


type
  AnyValueType* = enum
    avBool,
    # ...
    avBoolSeq,
    # ...
    avAnySeq

  AnyValue* = object
    case kind*: AnyValueType
    of avBool: boolValue*: bool
    # ...
    of avBoolSeq: boolSeqValue*: seq[bool]
    # ...
    of avAnySeq: anyseqValue*: seq[AnyValue]

converter toBoolAnyValue*(v: bool): AnyValue {.inline, noSideEffect.} =
  result.kind = avBool
  result.boolValue = v

converter toBoolSeqAnyValue*(v: seq[bool]): AnyValue {.inline, noSideEffect.} =
  result.kind = avBoolSeq
  result.boolSeqValue = v

converter toSeqAnyValueAnyValue*(v: seq[AnyValue]): AnyValue {.inline, noSideEffect.} =
  result.kind = avAnySeq
  result.anyseqValue = v

when isMainModule:
  let boolValue: bool = true
  let boolValueAV: AnyValue = boolValue
  let boolSeqValue: seq[bool] = @[boolValue]
  let boolSeqValueAV: AnyValue = boolSeqValue
  let anyseqValue: seq[AnyValue] = @[boolValueAV]
  let anyseqValueAV: AnyValue = anyseqValue
  
  assert(boolValueAV.boolValue == boolValue)
  assert(boolSeqValueAV.boolSeqValue == boolSeqValue)
  assert(anyseqValueAV.anyseqValue == anyseqValue) # Compiler bug :(
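
One possible workaround (an untested sketch, not a confirmed fix) is to define `==` for the case object by hand, so the compiler never needs its generated field-wise comparison. The comparison checks the discriminator first and then only the active branch:

```nim
type
  AnyValueType = enum
    avBool, avBoolSeq, avAnySeq

  AnyValue = object
    case kind: AnyValueType
    of avBool: boolValue: bool
    of avBoolSeq: boolSeqValue: seq[bool]
    of avAnySeq: anyseqValue: seq[AnyValue]

proc `==`(a, b: AnyValue): bool =
  ## Hand-written comparison: discriminator first, then only the
  ## active branch (recurses through seq[AnyValue] via the seq `==`).
  if a.kind != b.kind:
    return false
  case a.kind
  of avBool: a.boolValue == b.boolValue
  of avBoolSeq: a.boolSeqValue == b.boolSeqValue
  of avAnySeq: a.anyseqValue == b.anyseqValue

when isMainModule:
  let x = AnyValue(kind: avBool, boolValue: true)
  let y = AnyValue(kind: avAnySeq, anyseqValue: @[x])
  let z = AnyValue(kind: avAnySeq, anyseqValue: @[x])
  assert y == z
```

This avoids the `parallel 'fields' iterator` error because the generated comparison is never instantiated for AnyValue.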



Case type and float: compiler seems to confuse float and float64

2018-03-17 Thread monster
I'm trying to write an "Any Value" type, so I can create a type-safe 
string-to-any-value table. First I noticed the compiler seems to "confuse" byte 
and uint8, but I think that's normal, since I've read byte is just an alias for 
uint8. Unfortunately, it also has problems with float vs float64 (on 64-bit 
architectures), which is strange, because it has no problem with int vs int64 
and uint vs uint64. Therefore, I think it's a bug.

Here's the code that compiles but fails to run:


type
  AnyValueType* = enum
    avBool,
    # ...
    avFloat,
    avFloat64

  AnyValue* = object
    case kind*: AnyValueType
    of avBool: boolValue*: bool
    # ...
    of avFloat: floatValue*: float
    of avFloat64: float64Value*: float64

converter toBoolAnyValue*(v: bool): AnyValue {.inline, noSideEffect.} =
  result.kind = avBool
  result.boolValue = v

converter toFloatAnyValue*(v: float): AnyValue {.inline, noSideEffect.} =
  result.kind = avFloat
  result.floatValue = v

converter toFloat64AnyValue*(v: float64): AnyValue {.inline, noSideEffect.} =
  result.kind = avFloat64
  result.float64Value = v

when isMainModule:
  let boolValue: bool = true
  let floatValue: float = 42.0
  let float64Value: float64 = 42.0'f64
  
  let boolValueAV: AnyValue = boolValue
  let floatValueAV: AnyValue = floatValue
  let float64ValueAV: AnyValue = float64Value
  
  assert(boolValueAV.boolValue == boolValue)
  assert(floatValueAV.floatValue == floatValue)
  assert(float64ValueAV.float64Value == float64Value) # Compiler bug :(


And returns:


float64value.nim(38) float64value
system.nim(2839) sysFatal
Error: unhandled exception: float64Value is not accessible [FieldError]
Error: execution of an external program failed: 'bin/float64value '
The terminal process terminated with exit code: 1
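
As far as I know, Nim defines float as a 64-bit float, so float and float64 are the same type (unlike int vs int64, which are distinct types of the same size on 64-bit targets). Both converters therefore match any float value, and whichever one the compiler picks determines the kind stored, which would explain the FieldError above. A minimal sketch (my reading of the situation, not a confirmed diagnosis) that sidesteps the issue by keeping a single branch:

```nim
type
  AnyValueType = enum
    avBool, avFloat

  AnyValue = object
    case kind: AnyValueType
    of avBool: boolValue: bool
    of avFloat: floatValue: float  # covers float64 too, since float == float64

converter toFloatAnyValue(v: float): AnyValue =
  result = AnyValue(kind: avFloat, floatValue: v)

when isMainModule:
  doAssert float is float64       # same type, not merely the same size
  let av: AnyValue = 42.0'f64     # the float64 literal uses the float converter
  doAssert av.floatValue == 42.0
```

With only one branch and one converter there is no ambiguity about which kind gets set.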



Re: Need help with async client/server

2018-03-12 Thread monster
I tried a few things, like "newAsyncSocket(buffered = false)", but it was no 
use. All I can say is, it doesn't work for me either. 


Re: Need help with async client/server

2018-03-12 Thread monster
I haven't tried your code, but one thing I would check is that the problem is 
not caused by the threadvars. I'm "pretty sure" I observed on Windows that 
asyncdispatch would use multiple different threads to run the code. So you 
might set the threadvar in one thread, but the code that reads it runs in a 
different thread... Not sure if that applies to non-Windows OSes. There is some 
getThreadId() or similar proc you can use to identify your threads.


Nimble package structure

2018-03-11 Thread monster
Hi,

I "published" a new package to Nimble today. Based on the nimble doc, under the 
src directory, I had created a  directory. And Nimble complained 
today that it had to be called  instead (which I found strange), 
so I just renamed the directory, and Nimble was happy and I could then publish 
the module.

I've since decided to add a new sub-module, and now that I'm done and want to 
check if it works, Nimble complains I have a  directory, and I 
should use a  directory instead!

I'm confused... what is going on? And which structure is correct?


C:\Code\moduleinit>nim --version
Nim Compiler Version 0.18.0 [Windows: amd64]
Copyright (c) 2006-2018 by Andreas Rumpf

git hash: 5ee9e86c87d831d32441db658046fc989a197ac9
active boot switches: -d:release

C:\Code\moduleinit>nimble --version
nimble v0.8.10 compiled at 2018-03-01 23:04:46
git hash: 6a2da5627c029460f6f714a44f5275762102fa4b



Re: times.now() not found?

2018-03-10 Thread monster
Since the doc has "Deprecated since v...", "Available since v..." would be 
useful too; I assumed "now()" had existed forever.

Anyway, I found the local doc folder, and it seems the equivalent in 0.17.2 is 
"getLocalTime(getTime())". Unfortunately, "getLocalTime()" is deprecated. How 
do I write this so it works without deprecation in 0.17.2 AND 0.18.0?


import times

proc nows(): string =
  when false:
    $now()
  else:
    $getLocalTime(getTime())

proc info(msg: string): void {.nimcall, gcsafe.} =
  echo(nows() & " INFO " & msg)

info("Hello World!")
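
One hedged possibility (untested on 0.17.2) is to select the branch with compiles() instead of a hard-coded "when false:", so the same source builds cleanly on both versions; branches of a `when` that are not taken are only parsed, not semantically checked:

```nim
import times

proc nows(): string =
  ## Version-portable current-time string (a sketch, not verified on 0.17.2).
  when compiles(now()):
    $now()                    # 0.18.0+: now() exists
  else:
    $getLocalTime(getTime())  # 0.17.2 fallback

echo(nows() & " INFO Hello World!")
```

Comparing NimVersion against "0.18.0" would be another way to pick the branch.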



times.now() not found?

2018-03-10 Thread monster
Hi,

Nim told me it can't find times.now(), which is documented here:

[now](https://nim-lang.org/docs/times.html#now,)

This small examples fails to compile for me:


import times

proc info(msg: string): void {.nimcall, gcsafe.} =
  echo($now() & " INFO " & msg)

info("Hello World!")


with error:


testnow.nim(4, 9) Error: undeclared identifier: 'now'


And my Nim version:


C:>nim --version
Nim Compiler Version 0.17.2 (2017-09-07) [Windows: amd64]
Copyright (c) 2006-2017 by Andreas Rumpf


What am I doing wrong?


Re: Introducing moduleinit

2018-03-05 Thread monster
I coded in Delphi, like 20 years ago. If it was my inspiration, then it must 
have been "unconscious"...


Introducing moduleinit

2018-03-04 Thread monster
I’ve come to a standstill in what I initially set out to code in Nim, but 
discussions about the Nim logging system brought me to write a small library to 
help control initialisation order of dependencies in modules, and in threads (I 
started with threads, and realised I wanted a "full" solution). Before (trying 
to) publish it on the Nimble package list, I’d like to ask for some feedback on 
it.

In short, moduleinit is a Nim package that provides module/thread 
initialisation ordering. Here is the URL: 
[moduleinit](https://github.com/skunkiferous/moduleinit)


Re: Compiler crashes while compiling module with

2018-02-25 Thread monster
@Stefan_Salewski It compiles with int instead of Natural.

@mratsim I just updated my Nim sources (I normally use the binary 
distribution), and rebuilt it, and still get the crash. So I'll post the issue.


Re: Compiler crashes while compiling module with

2018-02-25 Thread monster
@Stefan_Salewski I managed to reduce the crash to this (no distinct required in 
my setup):


type
  StringValue*[LEN: static[Natural]] = array[LEN+Natural(2),char]
  StringValue16* = StringValue[14]


@mashingan I've installed WinCrashReport as you suggested, and got a report. 
But TBH, the only thing I can decipher from it is that there was a 
stack-overflow (I can't interpret assembler code, and don't really want to 
learn it either). Is there any point in posting that report together with a 
github issue?


Compiler crashes while compiling module with "StringValue[LEN: static[Natural]]"

2018-02-24 Thread monster
I'm trying to create a small module that represents "strings" as "values" 
rather than "ref objects". I had this working for a "fixed size" type (254 
chars + terminating '\0' + length field), and now tried to define the size as 
a generic parameter.

First, I had issues trying to get this to compile:


type
  StringValue*[LEN: Natural] = distinct array[LEN+2,char]


But the compiler refuses to do "LEN+2". I even tried to define a compile-time 
proc to add Naturals, but it still would not compile (it did not seem to see my 
"+" proc). So what I then tried was to replace "[LEN: Natural]" with "[LEN: 
static[Natural]]", and now the compiler crashes.

Here is the stringvalue module, with no test code. It _seems_ to compile.


 # Module: stringvalue

## This module can be used to pass small "strings" across threads, or store
## them globally, without having to worry about the local GC (as Nim strings
## are local GCed objects). The "strings" are meant to be used as "values",
## rather than "reference objects", but can also be allocated on the shared
## heap.

when isMainModule:
  echo("COMPILING StringValue ...")


import hashes

export hash, `==`

proc c_strcmp(a, b: cstring): cint {.
  importc: "strcmp", header: "<string.h>", noSideEffect.}

type
  StringValue*[LEN: static[Natural]] = distinct array[LEN+Natural(2),char]
    ## Represents a "string value" of up to 254 characters (excluding
    ## terminating '\0' and length).

proc cstr*[LEN: static[Natural]](sv: var StringValue[LEN]): cstring {.inline, noSideEffect.} =
  ## Returns the 'raw' cstring of the StringValue
  result = cast[cstring](addr sv)

proc `[]`*[LEN: static[Natural],I: Ordinal](sv: var StringValue[LEN]; i: I): char {.inline, noSideEffect.} =
  ## Returns a char of the StringValue
  cast[ptr char](cast[ByteAddress](addr sv) +% i * sizeof(char))[]

proc `[]`*[LEN: static[Natural],I: Ordinal](sv: StringValue[LEN]; i: I): char {.inline, noSideEffect.} =
  ## Returns a char of the StringValue
  cast[ptr char](cast[ByteAddress](unsafeAddr sv) +% i * sizeof(char))[]

proc `[]=`*[LEN: static[Natural],I: Ordinal](sv: var StringValue[LEN]; i: I; c: char) {.inline, noSideEffect.} =
  ## Sets a char of the StringValue
  cast[ptr char](cast[ByteAddress](addr sv) +% i * sizeof(char))[] = c

proc len*[LEN: static[Natural]](sv: var StringValue[LEN]): int {.inline, noSideEffect.} =
  ## Returns the len of the StringValue
  int(uint8(sv[LEN+1]))

proc len*[LEN: static[Natural]](sv: StringValue[LEN]): int {.inline, noSideEffect.} =
  ## Returns the len of the StringValue
  int(uint8(sv[LEN+1]))

proc `$`*[LEN: static[Natural]](sv: var StringValue[LEN]): string {.inline.} =
  ## Returns the string representation of the StringValue
  result = $sv.cstr

proc `$`*[LEN: static[Natural]](sv: StringValue[LEN]): string {.inline.} =
  ## Returns the string representation of the StringValue
  result = $sv.cstr

proc hash*[LEN: static[Natural]](sv: var StringValue[LEN]): Hash {.inline, noSideEffect.} =
  ## Returns the hash of the StringValue
  result = hash($sv)

proc hash*[LEN: static[Natural]](sv: StringValue[LEN]): Hash {.inline, noSideEffect.} =
  ## Returns the hash of the StringValue
  result = hash($sv)

proc `==`*[LEN: static[Natural]](a, b: var StringValue[LEN]): bool {.inline, noSideEffect.} =
  ## Compares StringValues
  (a.len == b.len) and (c_strcmp(a.cstr, b.cstr) == 0)

proc `==`*[LEN: static[Natural]](a, b: StringValue[LEN]): bool {.inline, noSideEffect.} =
  ## Compares StringValues
  result = false
  if a.len == b.len:
    result = (c_strcmp(a.cstr, b.cstr) == 0)

proc `==`*[LEN: static[Natural]](sv: var StringValue[LEN], cs: cstring): bool {.inline, noSideEffect.} =
  ## Compares a StringValue to a cstring
  result = (cast[pointer](cs) != nil) and (c_strcmp(sv.cstr, cs) == 0)

proc `==`*[LEN: static[Natural]](sv: StringValue[LEN], cs: cstring): bool {.inline, noSideEffect.} =
  ## Compares a StringValue to a cstring
  result = (cast[pointer](cs) != nil) and (c_strcmp(sv.cstr, cs) == 0)

proc `==`*[LEN: static[Natural]](cs: cstring, sv: var StringValue[LEN]): bool {.inline, noSideEffect.} =
  ## Compares a cstring to a StringValue
  result = (cast[pointer](cs) != nil) and (c_strcmp(sv.cstr, cs) == 0)

proc `==`*[LEN: static[Natural]](cs: cstring, sv: StringValue[LEN]): bool {.inline, noSideEffect.} =
  ## Compares a cstring to a StringValue
  result = (cast[pointer](cs) != nil) and (c_strcmp(sv.cstr, cs) == 0)

proc `<`*[LEN: static[Natural]](a, b: var StringValue[LEN]): bool {.inline, noSideEffect.} =
  ## Compares StringValues
  (c_strcmp(a.cstr, b.cstr) < 0)

proc 

Re: How do you get an "IpAddress" from a hostname?

2018-02-18 Thread monster
In case anyone cares, this is how I eventually did it:


let first = hostent.addrList[0]
let a = uint8(first[0])
let b = uint8(first[1])
let c = uint8(first[2])
let d = uint8(first[3])
let ipstr = $a & "." & $b & "." & $c & "." & $d
try:
  ip = parseIpAddress(ipstr)
except:
  raise newException(Exception, "Bad URL: '" & hostport & "': failed to parse IP of address: " & ipstr)


Rather ugly, but seems to work.
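
The same byte-twiddling can be written a bit more compactly with sequtils/strutils (a sketch under the same assumption that the first 4 bytes of the entry are a raw IPv4 address):

```nim
import sequtils, strutils

# Hypothetical stand-in for the real data; in the original code
# `first` comes from hostent.addrList[0].
let first = "\x01\x02\x03\x04"

# Take the first 4 bytes, render each as a decimal number, join with dots.
let ipstr = first[0..3].mapIt($uint8(it)).join(".")
echo ipstr  # 1.2.3.4
```

This keeps the "interpret the bytes manually" approach but avoids the four temporaries.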


Re: Module logging: how to create the right Logger(s) in a library?

2018-02-13 Thread monster
@andrea Coming from the JVM myself, I know exactly how it is. But, IMHO, the 
only way around that is to have a super powerful/flexible logging system as 
part of the system library, so that no one gets the idea of implementing their 
own instead. By the time Java added their own standard logging library, it was 
way too late; the damage was done, and everyone was doing their own thing, in 
an incompatible way. Basically, Java's "native" logging just makes things worse 
by adding yet-one-more logging system devs have to deal with.


Is there a way to create a Java-style thread-local in Nim?

2018-02-12 Thread monster
I've been wondering if something like Java's ThreadLocal (which I use 
massively in (Java) server-side code) could be defined in Nim.

I tried the two approaches below, but neither is valid code. It seems to me it 
isn't possible.

Attempt #1: 


type
  ThreadLocal*[T] = object
    ## Represents a thread-local, similar to ThreadLocals in Java.
    initialised*: bool
      ## Is this thread local already initialised?
    value*: T
      ## The thread local value.

  InitThreadLocalProc*[T] = proc(): T {.nimcall, gcsafe.}
    ## Type of a proc that lazy initialises a ThreadLocal.

proc getValue*[T](init: InitThreadLocalProc[T]): var T =
  ## Returns the value of a thread-local.
  var myThreadVar {.threadvar.}: ThreadLocal[T]
  if not myThreadVar.initialised:
    myThreadVar.value = init()
    myThreadVar.initialised = true
  return myThreadVar.value

proc Returns42(): int =
  42

echo("TL: ",getValue(Returns42))


Attempt #2 (would be preferable to #1, because you are not limited to 1 
ThreadLocal per type): 


type
  InitThreadLocalProc*[T] = proc(): T {.nimcall, gcsafe.}
    ## Type of a proc that lazy initialises a ThreadLocal.

  ThreadLocal*[T] = object
    ## Represents a thread-local, similar to ThreadLocals in Java.
    lazyInit: InitThreadLocalProc[T]
      ## The lazy initialisation proc.
    initialised {.threadvar.}: bool
      ## Is this thread local already initialised?
    value {.threadvar.}: T
      ## The thread local value.

proc initThreadLocal*[T](init: InitThreadLocalProc[T]): ThreadLocal[T] =
  ## Initialises a ThreadLocal.
  result.lazyInit = init

proc getValue*[T](tl: ThreadLocal[T]): var T =
  ## Returns the value of a thread-local.
  if not tl.initialised:
    tl.value = tl.lazyInit()
    tl.initialised = true
  return tl.value

proc Returns42(): int =
  42

var testThreadLocal = initThreadLocal[int](Returns42)

echo("TL: ",getValue(testThreadLocal))
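
As far as I can tell, the snag in both attempts is that {.threadvar.} is only allowed on module-level variables, not on proc locals or object fields. A minimal non-generic sketch of the same lazy-init idea that does compile (assuming module-level threadvars, compiled with --threads:on):

```nim
var tlValue {.threadvar.}: int
var tlInitialised {.threadvar.}: bool

proc getValue(init: proc(): int {.nimcall.}): int =
  ## Lazy per-thread initialisation: threadvars start zeroed in every
  ## thread, so each thread runs `init` exactly once on first access.
  if not tlInitialised:
    tlValue = init()
    tlInitialised = true
  tlValue

proc returns42(): int = 42

echo("TL: ", getValue(returns42))
```

The price is one threadvar pair per thread-local, which is exactly the "1 ThreadLocal per type" limitation attempt #2 was trying to avoid.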



Re: Module logging: how to create the right Logger(s) in a library?

2018-02-12 Thread monster
I think it boils down to me being used to using "binary" libraries in Java. 
When something goes wrong in that case, you can't really tell what happened 
without logging (unless you can reproduce it in a debugger). OTOH, Nim's 
"libraries" are (AFAIK) mostly available in source form, so one can easily add 
"echo()" where appropriate as needed, and so I see there is less need for 
built-in logging with Nim.

The other reason I was considering adding logging to my "libraries" is that 
bugs in multi-threaded apps are generally hard to track down, and given the 
limited amount of time I have to work on them, they are likely to be buggy, so 
it would be useful to just crank the logging up to "debug" when needed.

I think a little utility type that works like a Java ThreadLocal, takes a proc 
pointer/lambda as parameter, and does lazy-init for you would be a nice 
addition to the standard library. I'll try to prototype that one day.


Module logging: how to create the right Logger(s) in a library?

2018-02-11 Thread monster
I've just had my first look at the logging module. The "Warning" immediately 
caught my eye: "The global list of handlers is a thread var, this means that 
the handlers must be re-added in each thread."

If I'm writing a "library" meant to be used by others (eventually), that 
creates its own threads (I've already got several of those), then how is the 
library meant to initialise the logging according to the user's wishes in those 
threads? Do I have to add an "initLogging()" proc pointer as parameter to each 
library "setup()" proc?


What is the content/format of nativesockets.Hostent.addrList?

2018-02-11 Thread monster
I'm trying to write a (very) primitive "service discovery" module, and it seems 
"nativesockets.Hostent.addrList" is not what I would have expected.

Here is the code I used:


import nativesockets
import net
import strutils

proc parseURL(hostport: string): tuple[ip: string, port: Port, url: string] =
  let idx = hostport.find(':')
  if (idx < 1) or (idx == high(hostport)):
    raise newException(Exception, "Bad URL: '" & hostport & "': port missing")
  let host = hostport.substr(0,idx-1)
  let portstr = hostport.substr(idx+1)
  var port = -1
  try:
    port = parseInt(portstr)
  except:
    discard
  if (port < 1) or (port > 65535):
    raise newException(Exception, "Bad URL: '" & hostport & "': bad port")
  var ip: IpAddress
  if isIpAddress(host):
    try:
      ip = parseIpAddress(host)
    except:
      raise newException(Exception, "Bad URL: '" & hostport & "': bad IP address")
  else:
    var hostent: Hostent
    try:
      hostent = getHostByName(host)
    except:
      raise newException(Exception, "Bad URL: '" & hostport & "': bad hostname")
    if hostent.addrList.len == 0:
      raise newException(Exception, "Bad URL: '" & hostport & "': no IP for hostname")
    let a = hostent.addrList[0]
    try:
      ip = parseIpAddress(a)
    except:
      raise newException(Exception, "Bad URL: '" & hostport & "': failed to parse IP of address: " & $len(a) & " " & a)
  result = ($ip, Port(port), hostport)

echo("RESULT: " & $parseURL("google.com:80"))


This fails with the last "raise" line, like this:


testgethostbyname.nim(40) testgethostbyname
testgethostbyname.nim(37) parseURL
Error: unhandled exception: Bad URL: 'google.com:80': failed to parse IP of 
address: 14 Ï:ð.google.com [Exception]
Error: execution of an external program failed: '../bin/testgethostbyname '


Either there is something terribly wrong with my code, or "hostent.addrList[0]" 
is not something like "1.2.3.4", but something else entirely. In that case, 
since the type is "string", how am I meant to "interpret" it, to get something 
like "1.2.3.4"? Or is there an alternative call to getHostByName() that 
actually returns an IP in a "usable format"?

I wanted to use the "ip" to pass it to Socket.connect(ip, port), so I save the 
DNS query on repeated calls.


Re: asyncdispatch and "closing server socket"

2018-02-05 Thread monster
@Araq I think my example didn't clearly show that the issue was that the 
decision to close the server socket happens "independently" from the code that 
accepts incoming connections. I've now moved the "close code" into the same 
proc as the "open code" (as you suggested), and use regular polling on a 
volatile flag to tell the proc to terminate.

I don't remember seeing that last "{.gcsafe.}:" syntax before. I think that 
might solve some of my problems. 


asyncdispatch and "closing server socket"

2018-02-04 Thread monster
I used asyncdispatch/asyncnet to implement my networking (currently).

I've got a problem "closing the server socket".

Initially I wrote something like this:


proc initCluster(port: Port, address: string) {.async.} =
  ## Initiates the cluster networking.
  echo("Opening server socket...", getThreadId())
  var server = newAsyncSocket()
  server.setSockOpt(OptReuseAddr, true)
  server.bindAddr(port, address)
  server.listen()

  while true:
    let peer = await server.accept()
    asyncCheck processPeer(peer)


Idk if it worked, because I haven't tried it yet, but it ran without error.

Then I decided to add code to gracefully terminate the process, like this:


var server: AsyncSocket
  ## The server socket

proc initCluster(port: Port, address: string) {.async.} =
  ## Initiates the cluster networking.
  echo("Opening server socket...")
  server = newAsyncSocket()
  server.setSockOpt(OptReuseAddr, true)
  server.bindAddr(port, address)
  server.listen()

  while (server != nil):
    let peer = await server.accept()
    asyncCheck processPeer(peer)

proc deinitCluster() {.async.} =
  ## De-initiates the cluster networking.
  if server != nil:
    echo("Closing server socket...")
    try:
      server.close()
    except:
      discard
    server = nil
  else:
    echo("Server socket not open.")


But was told by the compiler that "initCluster()" is now not GC-safe, as I use 
a global ref to a GCed object.

So my thought was, well, with asyncdispatch, "everything" is meant to run in 
the same thread (which I was obviously wrong about), somehow without blocking, 
so I can just change "server" to a "{.threadvar.}", to make the compiler happy. 
If I do, the compiler is happy, but "deinitCluster()" does nothing, because it 
doesn't actually run in the same thread as "initCluster()" (verified by using 
getThreadId() in the output; "initCluster()" runs in thread #2940, and 
"deinitCluster()" runs in thread #4860).

So, what is the correct way to close my async server socket?

And, more generally, does that mean I still need to use shared-heap to pass 
messages/signals across async tasks? Is there a way to say "run this async task 
in the same thread as that other async task"?


Re: Moving "top level" code to a proc affects initialisation of threadvar

2018-02-04 Thread monster
@Stefan_Salewski You're totally right! This would probably go horribly wrong. 
But I don't think this explains the error with the uninitialised HashSet. The 
HashSet itself just contains pointers; it doesn't care if they are still 
"valid" or not.

I'll change the code so that the HashSet is accessed with a "getter" that does 
"lazy" initialisation, per thread. If the error goes away, then that will 
"prove" the HashSet (and presumably every other complex threadvar object) needs 
to be initialised explicitly in each thread. If that is the case, it would be 
nice to have some syntax (something like the opposite of "static:") that takes 
care of that for you.


Moving "top level" code to a proc affects initialisation of threadvar

2018-02-03 Thread monster
In my test, I have this code:


type
  TstMsg = Msg[int]
  TstRep = Msg[float]

var dst: QueueID

initThreadID(ThreadID(0))

var m: TstMsg
m.content = 42

when USE_TOPICS:
  initTopicID(TopicID(33))
  let pid = myProcessID()
  dst = queueID(pid, ThreadID(1), TopicID(99))
else:
  dst = queueID(myProcessID(), ThreadID(1))

assert(not pendingMsg())
sendMsgNow(dst, addr m)

echo("First msg sent from Thread 0.")


which in sendMsgNow() indirectly accesses this threadvar:


var myPendingRequests {.threadvar.}: HashSet[ptr MsgBase]
myPendingRequests.init()


And it works. However, if I refactor my code like this (moving all code after 
"var dst: QueueID" to a proc, and calling it):


type
  TstMsg = Msg[int]
  TstRep = Msg[float]

var dst: QueueID

proc startTest() =
  initThreadID(ThreadID(0))

  var m: TstMsg
  m.content = 42

  when USE_TOPICS:
    initTopicID(TopicID(33))
    let pid = myProcessID()
    dst = queueID(pid, ThreadID(1), TopicID(99))
  else:
    dst = queueID(myProcessID(), ThreadID(1))

  assert(not pendingMsg())
  sendMsgNow(dst, addr m)

  echo("First msg sent from Thread 0.")

startTest()


Then I get this error:


Traceback (most recent call last)
test_kueues.nim(128) receiver
kueues.nim(830)  recvMsgs
kueues.nim(785)  collectReceived
kueues.nim(771)  validateRequest
sets.nim(204)contains
system.nim(3613) failedAssertImpl
system.nim(3605) raiseAssert
system.nim(2724) sysFatal
Error: unhandled exception: isValid(s) The set needs to be initialized. 
[AssertionError]
Error: execution of an external program failed: '../bin/test_kueues '
The terminal process terminated with exit code: 1


kueues.nim(771) is accessing myPendingRequests.

All the rest of the code is unchanged.

How could myPendingRequests ever _not_ be initialized, and why is that only so 
if I move some top level code to a proc, which I then call at the exact same 
place the top level code was before?

Even if myPendingRequests.init() ran only once, and not once per thread (does 
it? and if yes, how do you say "do this in every thread"?), I still don't think 
it would explain that the error only comes when the code is not "top level".


Re: {.global.} and generics

2018-02-03 Thread monster
Hi, I'm also using 0.17.2. Maybe I interpreted the problem incorrectly... I'll 
use your example to check if it behaves differently in my code.


{.global.} and generics

2018-02-03 Thread monster
To make sure the messages in my message-queue are type-safe, I have to record 
the message type in each message.

This is how I went about it:


proc idOfType*(T: typedesc): MsgTypeID {.inline.} =
  let tid = 0 # Accesses "type register here" ...
  MsgTypeID(tid)

proc sendMsg*[T](q: QueueID, m: ptr Msg[T]): void {.inline.} =
  let typeidOfT {.global.} = idOfType(T)
  m.mybase.mytypeid = typeidOfT
  fillAndSendMsg[T](q, m, true)


It looked like it worked, but then today I realised I was using a single type 
in the test messages, so I added a second type to my test, and then it failed. 
To get it to work again, I have to change the code to this:


proc idOfType*(T: typedesc): MsgTypeID {.inline.} =
  let tid = 0 # Accesses "type register here" ...
  MsgTypeID(tid)

proc sendMsg*[T](q: QueueID, m: ptr Msg[T]): void {.inline.} =
  let typeidOfT = idOfType(T)
  m.mybase.mytypeid = typeidOfT
  fillAndSendMsg[T](q, m, true)


I was under the impression that since sendMsg() is generic, the compiler would 
generate one typeidOfT _per type T_. Since my test is now failing with two 
types, I assume the compiler generates a single typeidOfT for all sendMsg() 
"instantiations".

Is there a way to work around that? It's rather expensive to compute the ID of 
a type (it involves locks and creating shared heap objects), so I don't want to 
do that for each message, but rather only once for each type.

I've also tried the following, which doesn't compile:


proc sendMsg*[T](q: QueueID, m: ptr Msg[T]): void {.inline.} =
  static:
    let typeidOfT {.global.} = idOfType(T)
  m.mybase.mytypeid = typeidOfT
  fillAndSendMsg[T](q, m, true)



proc sendMsg*[T](q: QueueID, m: ptr Msg[T]): void {.inline.} =
  static:
    const typeidOfT {.global.} = idOfType(T)
  m.mybase.mytypeid = typeidOfT
  fillAndSendMsg[T](q, m, true)



Re: How to call runForever()?

2018-01-30 Thread monster
OK, I've got it! It has _nothing_ to do with runForever() or asyncdispatch.

Here another example that crashes:


import os

proc whatever() {.thread, nimcall.} =
  echo("TEST")

proc initProcessX(): void =
  echo("In initProcess()")
  var thread: Thread[void]
  createThread(thread, whatever)
  echo("initProcess() done")

proc doStuff(): void =
  echo("In doStuff()")
  # ...
  initProcessX()
  sleep(500)
  # ...
  echo("Crashes before getting here!")

doStuff()


And here, one that doesn't crash:


import os

var thread: Thread[void]

proc whatever() {.thread, nimcall.} =
  echo("TEST")

proc initProcessX(): void =
  echo("In initProcess()")
  #var thread: Thread[void]
  createThread(thread, whatever)
  echo("initProcess() done")

proc doStuff(): void =
  echo("In doStuff()")
  # ...
  initProcessX()
  sleep(500)
  # ...
  echo("Crashes before getting here!")

doStuff()


It looks like the issue is that I don't have a "hard reference" to the thread 
itself, so I assume it gets garbage-collected. Should I always keep a 
hard-reference to threads, until they are done? Or is that a bug?

EDIT: My "real" code is still crashing after the fix I discovered, but now I'm 
getting a stack-trace, so I think I should eventually work it out too:


First msg sent from Thread 0.
Thread 1 initialised.
Traceback (most recent call last)
kueues.nim(1032) runForeverThread
asyncdispatch.nim(278)   runForever
asyncdispatch.nim(283)   poll
Error: unhandled exception: No handles or timers registered in dispatcher. 
[ValueError]
Message received after 0 'timestamp units'
Error: execution of an external program failed: '../bin/test_kueues '
The terminal process terminated with exit code: 1



Re: How to call runForever()?

2018-01-28 Thread monster
Unfortunately, no. All I get is what I wrote under "Result:" in the original 
post. I'll try changing compiler parameters, to see if I can get more info.


"U64: static[int]" as type parameter => "cannot generate code for: U64"

2018-01-28 Thread monster
I've been playing with the idea of a type that would allow quick 
cross-architecture conversion for serialisation.

I'm getting a compile error, and I'd like to know if I'm doing anything wrong, 
or if this is a current limitation of the Nim compiler.


type
  Uint8Array* {.unchecked.} [SIZE: static[int]] = array[SIZE, uint8]
  Uint16Array* {.unchecked.} [SIZE: static[int]] = array[SIZE, uint16]
  Uint32Array* {.unchecked.} [SIZE: static[int]] = array[SIZE, uint32]
  Uint64Array* {.unchecked.} [SIZE: static[int]] = array[SIZE, uint64]

  SavableValue* = SomeNumber|char|bool
    ## Things that can be put in Savable

  Savable*[U8,U16,U32,U64: static[int]] = object
    ## Type for "savable" objects.
    ## Allows quick endian conversion for cross-platform compatibility.
    when U64 > 0:
      u64fields*: Uint64Array[U64]
        ## All uint64 (bytes) values
    when U32 > 0:
      u32fields*: Uint32Array[U32]
        ## All uint32 (bytes) values
    when U16 > 0:
      u16fields*: Uint16Array[U16]
        ## All uint16 (bytes) values
    when U8 > 0:
      u8fields*: Uint8Array[U8]
        ## All uint8 (bytes) values

proc getAt*[T: SavableValue, U8,U16,U32,U64: static[int]](
    s: var Savable[U8,U16,U32,U64], N: static[int]): T {.inline.} =
  ## Returns a field from Savable
  static:
    assert(N >= 0, "negative index not allowed: " & $N)
    assert((T != int) and (T != uint) and (T != float), "Only fixed-size types allowed!")
  when (T == char) or (T == bool):
    # We make sure char and bool are treated as 1 byte everywhere.
    static:
      assert(N < U8, "index too big: " & $N & " >= " & $U8)
    cast[T](s.u8fields[N])
  elif sizeof(T) == 8:
    static:
      assert(N < U64, "index too big: " & $N & " >= " & $U64)
    cast[T](s.u64fields[N])
  elif sizeof(T) == 4:
    static:
      assert(N < U32, "index too big: " & $N & " >= " & $U32)
    cast[T](s.u32fields[N])
  elif sizeof(T) == 2:
    static:
      assert(N < U16, "index too big: " & $N & " >= " & $U16)
    cast[T](s.u16fields[N])
  elif sizeof(T) == 1:
    static:
      assert(N < U8, "index too big: " & $N & " >= " & $U8)
    cast[T](s.u8fields[N])
  else:
    static: assert(false, "invalid parameter size: " & $sizeof(T))

proc setAt*[T: SavableValue, U8,U16,U32,U64: static[int]](
s: var Savable[U8,U16,U32,U64], N: static[int], v: T) {.inline.} =
  ## Sets a field from Savable
  static:
assert(N >= 0, "negative index not allowed!")
assert((T != int) and (T != uint) and (T != float), "Only fixed-size types allowed!")
  when (T == char) or (T == bool):
# We make sure char and bool are treated as 1 byte everywhere.
static:
  assert(N < U8, "index too big: " & $N & " >= " & $U8)
s.u8fields[N] = cast[uint8](v)
  elif sizeof(T) == 8:
static:
  assert(N < U64, "index too big: " & $N & " >= " & $U64)
s.u64fields[N] = cast[uint64](v)
  elif sizeof(T) == 4:
static:
  assert(N < U32, "index too big: " & $N & " >= " & $U32)
s.u32fields[N] = cast[uint32](v)
  elif sizeof(T) == 2:
static:
  assert(N < U16, "index too big: " & $N & " >= " & $U16)
s.u16fields[N] = cast[uint16](v)
  elif sizeof(T) == 1:
static:
  assert(N < U8, "index too big: " & $N & " >= " & $U8)
s.u8fields[N] = cast[uint8](v)
  else:
static: assert(false, "invalid parameter size: " & $sizeof(T))

type
  TestSaveable* = Savable[4,3,2,1]

var ts: TestSaveable

ts.setAt(2, true)
var b: bool = ts.getAt[TestSaveable,4,3,2,1](2)
echo($b)


And I'm getting something like this (line numbers are probably wrong): 


savable.nim(97, 3) template/generic instantiation from here
savable.nim(19, 10) Error: cannot generate code for: U64
The terminal process terminated with exit code: 1



Re: How to call runForever()?

2018-01-28 Thread monster
Hmm. I added echo(".") in "doNothing()", to be sure it's not "optimized away", 
but it still crashes. Shouldn't "doNothing()" in my edited example count as a 
"pending operation"?

Also, in my real code, when the error first appeared, I have something like 
this:


proc initCluster(port: Port, address: string) {.async.} =
  ## Initiates the cluster networking.
  var server = newAsyncSocket()
  server.setSockOpt(OptReuseAddr, true)
  server.bindAddr(port, address)
  server.listen()
  while true:
let peer = await server.accept()
asyncCheck processPeer(peer)

proc initProcess*(pid: ProcessID, processMapper: ProcessIDToAddress): void =
  # ...
  asyncCheck initCluster(port, address)


Surely, _that_ should count as a "pending operation".


Best way to define proc that works on ref X, ptr X and var X

2018-01-28 Thread monster
If I have some object type X, and some proc that does something complex with X, 
and I want the proc(s) to be defined for "var X", "ptr X" and for "ref X", what 
is the best/cleanest way to do that, so that the "logic" is defined only once?


Re: How to call runForever()?

2018-01-28 Thread monster
Interesting! Does anyone know what "No handles or timers registered in 
dispatcher" actually means? That is, why is it an error?

Regarding the compiler, I hadn't thought about trying different compilers to 
work around crashes. But unfortunately, I want to use this in UE4, and UE4 
pretty much requires VC under Windows, so I'll need a workaround. I guess I 
could just add a dummy timer, for a start.

EDIT: OK, I added a "timer" (based on [this 
thread](https://forum.nim-lang.org/t/1195)); it made no difference:


import asyncdispatch
import os

proc doNothing() {.async.} =
  while true:
await sleepAsync(100)

proc runForeverThread() {.thread.} =
  ## Executes runForever() in a separate thread.
  runForever()

proc initProcess(): void =
  debugEcho("In initProcess()")
  asyncCheck doNothing()
  var thread: Thread[void]
  createThread(thread, runForeverThread)

proc doStuff(): void =
  debugEcho("In doStuff()")
  # ...
  initProcess()
  sleep(500)
  # ...
  debugEcho("After initProcess()")

doStuff()



Re: How to call runForever()?

2018-01-27 Thread monster
Windows 10, x64


Microsoft Windows [Version 10.0.16299.192]
(c) 2017 Microsoft Corporation. All rights reserved.

C:\Users\Sebastien Diot>nim --version
Nim Compiler Version 0.17.2 (2017-09-07) [Windows: amd64]
Copyright (c) 2006-2017 by Andreas Rumpf

git hash: 811fbdafd958443ddac98ad58c77245860b38620
active boot switches: -d:release

Visual Studio 2017 Developer Command Prompt v15.0.26228.13
Copyright (c) 2017 Microsoft Corporation

[vcvarsall.bat] Environment initialized for: 'x64'



Re: A

2018-01-27 Thread monster
@jlp765 I believe it is an issue with how I call "runForever()". I can even 
reproduce the problem, as defined in my next forum thread. I now tried 
debugEcho() too. It makes no difference.


Re: How to call runForever()?

2018-01-27 Thread monster
I think I can. I'm not sure if it is the same problem as in my main module, but 
this crashes for me:


import asyncdispatch
import os


proc runForeverThread() {.thread.} =
  ## Executes runForever() in a separate thread.
  runForever()

proc initProcess(): void =
  echo("In initProcess()")
  #
  var thread: Thread[void]
  createThread(thread, runForeverThread)

proc doStuff(): void =
  echo("In doStuff()")
  # ...
  initProcess()
  sleep(500)
  # ...
  echo("After initProcess()")

doStuff()



Re: How to call runForever()?

2018-01-27 Thread monster
@Hlaaftana Thanks. That was rather "obscure"; I wouldn't have thought there 
were two versions of createThread(). Unfortunately, I changed the code as you 
suggested, and it does not make the crash go away.

I think there is a single example of calling runForever() in the documentation. 
It is just called at the end of a module. If this is the only correct way to 
use it, it would be good to document this, as it could also just be 
"incidental" in this example.

But the bigger question is, what if the whole "async" thing is an optional 
feature of my module? I'm creating an alternative API to allow threads to send 
messages to each other. Sending messages transparently (as much as possible) 
across the cluster is just an "optional feature" for my API, so it doesn't make 
sense to force the user into "locking the main thread" with runForever(), 
unless the user wants the application to become part of a cluster, which could 
very well be decided through a run-time parameter, or even interactively (off 
then on then off ...) through the user-interface.
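For what it's worth, one way to keep async optional (a sketch, not necessarily what the stdlib authors intend; `poll` and `hasPendingOperations` are from asyncdispatch) is to let the host application pump the dispatcher from its own main loop instead of blocking in runForever():

```nim
import asyncdispatch

proc tick() {.async.} =
  await sleepAsync(1)

asyncCheck tick()

# Drive the dispatcher incrementally: process ready events, waiting at
# most 10 ms per iteration, and stop once nothing is pending.
while hasPendingOperations():
  poll(10)
```

This way the "cluster" feature can be switched on or off at run time without ever locking the main thread.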


How to call runForever()?

2018-01-27 Thread monster
In a previous post today, I mentioned I got "unexplainable" crashes that were 
not always reproducible. It turns out the crashes seem to be caused by 
runForever().

I have async code in a single module, the same one where I call runForever(). 
If I comment out the code calling runForever(), the crashes go away. If I 
comment out all the async code, but just keep the call to runForever(), it 
still crashes.

As I added asyncdispatch code in my module as an "afterthought", I used this 
code to run the event loop:


# ...
  
  proc runForeverThread() {.thread.} =
## Executes runForever() in a separate thread.
runForever()
  
  proc initProcess*(pid: ProcessID, processMapper: ProcessIDToAddress): void =
# ...
#asyncCheck initCluster(port, address)
var thread: Thread[void]
createThread[void](thread, runForeverThread)


I'm assuming this is somehow wrong. The description of "runForever()" in 
asyncdispatch doesn't actually say how to call it; maybe that should be 
explicitly documented.


Re: A

2018-01-27 Thread monster
Thanks for the reply. I've made some progress. Compiling and running the same 
code multiple times doesn't always result in the same output. I now suspect it 
might be a "timing" thing; depending on how fast the program crashes, the 
stdout might be flushed, or not, to the terminal. Looks like I'll have to find 
an alternative way to follow the code, or learn to debug Nim...

EDIT: OK, just checked the echo() doc, and it flushes stdout. I guess that 
makes memory/stack corruption the more likely suspect.


Error: 'XXX' is not GC-safe as it accesses 'YYY' which is a global using GC'ed memory

2018-01-24 Thread monster
Hi,

I can't seem to get around this error:


Error: 'mapPID2Address' is not GC-safe as it accesses 'mapper' which is a 
global using GC'ed memory


with code that basically does this:


import asyncdispatch
import asyncnet

type
  ProcessID* = uint32
  NotAString = array[256,char]
  ProcessIDToAddress* = proc (pid: ProcessID, port: var Port, address: var NotAString): void {.nimcall.}

var myProcessMapper: pointer
var myClusterPort: uint16

proc mapPID2Address(pid: ProcessID): (string, Port) {.gcSafe.} =
  let mapper = cast[ProcessIDToAddress](myProcessMapper)
  var port = Port(myClusterPort)
  var address: NotAString
  mapper(pid, port, address)
  let alen = address.find(char(0)) + 1
  var address2 = newStringOfCap(alen)
  for i in 0..alen:
address2.add(address[i])
  result = (address2, port)

proc openPeerConnection(pid: ProcessID): Future[AsyncSocket] {.async.} =
  ## Opens a connection to another peer.
  let (address, port) = mapPID2Address(pid)
  result = await asyncnet.dial(address, port)

asyncCheck openPeerConnection(ProcessID(0))


Unfortunately, this example doesn't produce the error; I can't see where the 
difference is, but "mapper(pid, port, address)" is what bothers the compiler in 
my real code.

Initially, I wanted ProcessIDToAddress to return a string, but after getting 
the error, I decided to use an array as var parameter. My understanding is that 
the array is a stack object, and not a GC object, and so mapper doesn't 
actually access any GCed memory, and should not produce the error.
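For reference, one workaround I'm aware of (a sketch; use it only when you can guarantee the access really is thread-safe, e.g. the global is written exactly once before any thread runs) is a `{.gcsafe.}:` pragma block, which locally overrides the compiler's GC-safety analysis:

```nim
type Mapper = proc (pid: int): string {.nimcall.}

var myMapper: Mapper   # global the compiler flags as not GC-safe to touch

proc mapPid(pid: int): string {.gcsafe.} =
  # We promise the compiler this is safe: myMapper is assigned exactly
  # once, before any thread ever calls mapPid.
  {.gcsafe.}:
    result = myMapper(pid)

myMapper = proc (pid: int): string {.nimcall.} = "127.0.0." & $pid
```

The block silences the error at the call site rather than fixing the underlying analysis, so it shifts the safety burden onto the programmer.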


Re: lib\pure\asyncmacro.nim(43, 27) Error: attempting to call undeclared routine: 'callback='

2018-01-22 Thread monster
@dom96 I tried your idea, and it didn't work. Then I changed the forward 
declaration to be a "normal" proc, and implemented it lower down to just 
delegate to the async proc (asyncCheck sendMsgExternally(...)), and it didn't 
work either. I'll try to build an example now. If it's an async macro bug, we 
need an example anyway, to open an issue about it.


lib\pure\asyncmacro.nim(43, 27) Error: attempting to call undeclared routine: 'callback='

2018-01-21 Thread monster
Hi,

I've just finished coding some async "cluster networking" prototype (using 
asyncnet), and I'm getting a compiler error that I don't understand (don't know 
how to fix), while writing my first test module for it:


import asyncdispatch
import asyncnet

# ...

type
  QueueID* = distinct uint32
  Msg*[T: not (ref|string|seq)] = object
# ...

# ...

# Forward declaration:
proc sendMsgExternally[T](q: QueueID, m: ptr[Msg[T]], size: uint32) {.async.}

proc fillAndSendMsg[T](q: QueueID, m: ptr[Msg[T]], buffered: bool): void =
  # ...
  let size = uint32(sizeof(Msg[T]))
  asyncCheck sendMsgExternally(q, m, size)


and on that last line, I am getting this error:


lib\pure\asyncmacro.nim(43, 27) Error: attempting to call undeclared 
routine: 'callback='


The content of sendMsgExternally() makes no difference; I replaced it with 
"discard", and the error stays. This is the only error I'm getting, so it's not 
a consequence of a previous error. Google was no help for this one.


Windows: "import posix" => 'arpa/inet.h': No such file or directory

2018-01-19 Thread monster
Hi,

I'm trying to use the posix module, because I need htonl() and ntohl(), but if 
I import posix (Windows, vcc), I get this error message:


asyncnetcodetest.c(11): fatal error C1083: Cannot open include file: 
'arpa/inet.h': No such file or directory


Can I fix this somehow?
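As a possible alternative (a sketch; the exact integer types of these procs have varied between Nim versions, so check your stdlib), the cross-platform `nativesockets` module exposes htonl/ntohl directly and is backed by winsock on Windows, so importing posix shouldn't be needed at all:

```nim
import nativesockets

let host = 0x12345678'i32
let net = htonl(host)         # identity on big-endian machines
doAssert ntohl(net) == host   # round-trips on every platform
```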


Re: IOError when compiling from VSCode's internal terminal

2018-01-19 Thread monster
I've had this all the time. I can confirm that the fix of @dandev works for me. 


Re: Is anyone using the libuv wrappers?

2018-01-14 Thread monster
@mashingan I'll just "rephrase" to be sure I get it. On Linux (and maybe other 
OSs), there are several OS APIs to do non-blocking IO. On Windows, there is 
only one. asyncdispatch/asyncnet (as well as libuv) are wrappers on top of 
(some of) those. asyncdispatch/asyncnet essentially give you the same high 
functionality on all platforms. And the "as long as you only care about 
non-Windows platforms" comment only relates to "you should be able to work with 
any of these layers[epoll/kqueue] interchangeably". Beyond "Warning: The Peek 
socket flag is not supported on Windows." in asyncdispatch, I haven't seen an 
actual limitation of the API behaviour on Windows.

So, assuming I got the above correctly, since I _don't care_ about which API 
implements the low-level non-blocking IO magic under the hood, I can just use 
asyncdispatch/asyncnet everywhere, and call it a day?


Re: Is anyone using the libuv wrappers?

2018-01-14 Thread monster
@mashingan I think what I'm currently working on (an 'actor' system) would be 
an 'alternative' to 'reactor'. So building on top of reactor makes little sense 
for me. Reactor says 'code looks very similar to a code written in a blocking 
style'. That's kind of the opposite of what I want (no use of "await" 
anywhere). Also, 'reactor' is the name I wanted to use (because my actors are 
"passive"; they mostly "react" to messages, therefore 'reactor' seems like the 
perfect name for it), and so far all other names I thought of aren't as good a 
match for what I'm working on.

@yglukhov Unless otherwise informed, I always read "async" as "runs in some 
other thread". And since the easiest way to do this is just to queue the tasks 
in a thread-pool, that is also what I assumed so far. Apparently, I was wrong. 
This changes everything; I'll have a second look at AsyncDispatch.

EDIT: The "second look" was rather short... "as long as you only care about 
non-Windows platforms" basically mean asyncnet (networking over AsyncDispatch) 
is useless for me. 

I'll also note that I'm currently assuming my code will be CPU-bound, so I'm 
more worried about threads being created in the background, outside of my own 
code, than worried about "ultimate IO efficiency". 


Re: Is anyone using the libuv wrappers?

2018-01-13 Thread monster
@yglukhov Maybe, but if I understand asyncdispatch properly, it's really not 
comparable to libuv. I haven't checked the implementation of asyncdispatch, but 
it looks to me as if, under the hood, it's still 
one-thread-per-blocking-IO-call. So if I was listening to 100 sockets, it would 
need 100 threads? If that is the case, then asyncdispatch simply isn't an 
option for me, as it won't _scale_ well.

@2vg Well, if the libuv wrapper was deleted from devel, then Araq couldn't 
possibly have been implying that SSL support for libuv should belong to stdlib. 
We'll just have to wait for Araq to reply.


Re: importc: multiple headers?

2018-01-13 Thread monster
Nice! I asked that in December, and got no reply...


Re: Is anyone using the libuv wrappers?

2018-01-11 Thread monster
@Araq Sorry if I'm a bit slow on the uptake, but I'm not sure about what you 
meant. libuv is wrapped in the stdlib, which is great, but if it's really based 
on a 3+ year-old version (is it?), AND libuv is updated regularly, which it is, then 
I do see the need for a non-stdlib that is more recent. If what you meant is 
that we should stick to the stdlib wrapper, and it sounds like that to me, then 
I guess you would like a pull-request instead of a third-party wrapper? And if 
you meant something else, then you lost me.


Re: Is anyone using the libuv wrappers?

2018-01-07 Thread monster
@2vg Nice!  I'm going to give it a try. I do need to support Windows; that is 
my primary target for the client (the server will be primarily Linux).

I won't need encryption until I release the first alpha version, and I'm a long 
way away. If you plan to have a look at SSL support, that would be great. 


Is anyone using the libuv wrappers?

2018-01-06 Thread monster
TL;DR:

  1. Does (one of the) libuv wrappers use a recent libuv version (< 6 months 
ago)?
  2. Does anyone know how to add SSL (in Nim) on top of a libuv wrapper?



I'm still trying to decide what to use for networking. I've noticed there was 
Nim support for [libuv](https://github.com/libuv/libuv), which I hadn't heard 
of before (thanks to not wanting to have anything to do with JS and node.js), 
but I'm not sure if it's used at all. The _design_ of libuv sounds perfect for 
my use-case.

First, there is [libuv](https://nim-lang.org/docs/libuv.html) from the Standard 
Library. Based on the comment in the header, which says it was built using 
[this repo](https://github.com/joyent/libuv), which was "moved" 3 years ago, I 
have to assume that that module has not been updated for at least 3 years 
(unless it was, but no one updated the doc for it, and corrected the header). 
So I guess it's not actually used, otherwise someone would have updated it.

Then there is [libuv](https://github.com/lcrees/libuv) from the "Unofficial 
packages", which was updated (created?) "2 months ago", but I can't see any 
hint about which version of libuv it was based on; it just says "Extracted from 
Nim standard library", which might mean it is also based on an (at least) 3 
years old version of libuv.

When it comes to SSL, it seems libuv team is against integration, because they 
don't want to "take sides" (choose a specific SSL impl). A bit of research 
resulted in 2 possible solutions:

  1. [uv_ssl_t](https://github.com/indutny/uv_ssl_t), which at the same time is 
described as "HIGHLY UNSTABLE" and has not been updated for 2 years.
  2. [evt-tls](https://github.com/deleisha/evt-tls), which was updated "8 days 
ago", and used to be called "libuv-tls", but is now a "generic wrapper" over 
OpenSSL, which, if I understand properly, means I have to build my own wrapper 
on top, to use with libuv.



In a world where "everyone" gets hacked all the time, I can't relate to 
"networking libraries/frameworks" which treat "security" as an "optional 
feature".


Re: ASM on Windows basically dead?

2018-01-05 Thread monster
@conven92 Well, since it said it was only for ARM, I hadn't tried those. So now 
I did, and the same result comes out:

> atomiks.obj : error LNK2019: unresolved external symbol 
> InterlockedExchangeAdd64_nf

(with or without a "_" as prefix).

While I'd like to know what is going on, it is basically an "optimisation" 
problem, because the code will still work "with" a fence. So I'm (for now) 
leaving this as an "unsolved mistery", and moving on to add cluster support to 
my code, which is more important.


Re: ASM on Windows basically dead?

2018-01-02 Thread monster
@jackmott I'm confused by your question. I thought that calls like 
_InterlockedExchangeAddNoFence64() _were_ intrinsics, therefore, isn't that 
what I'm already trying to do? (But failing for some unclear reason, since the 
failure is at link time, where they should already have been "replaced") Until 
now, I assumed that "compiler intrinsics" are "things that look like functions 
to the caller, but get replaced directly with assembler by the compiler". Is 
that wrong?


Re: Getting ambiguous call on $(range) and idk why

2017-12-31 Thread monster
@cdome The distinct helped, but was not enough. It seems Nim has problems 
deciding how to convert a "uint" to a string, so I also needed to specify 
"proc $(uint)":


const MAX_THREADS* = 64
const MAX_TOPICS* = 1048576

const MAX_PROCESSES* = int(4294967296'i64 div int64(MAX_THREADS * MAX_TOPICS))

type
  ThreadID* = distinct range[0'u8..uint8(MAX_THREADS-1)]
## Thread ID, within process.
  ProcessID* = distinct range[0'u16..uint16(MAX_PROCESSES-1)]
## Process ID of thread.
  TopicID* = distinct range[0'u32..uint32(MAX_TOPICS-1)]
## TopicID, within thread.

proc `$` *(i: uint): string {.inline.} =
  $uint64(i)

proc `==` *(a, b: ThreadID): bool {.borrow.}
proc `$` *(id: ThreadID): string {.borrow.}
proc `==` *(a, b: ProcessID): bool {.borrow.}
proc `$` *(id: ProcessID): string {.borrow.}
proc `==` *(a, b: TopicID): bool {.borrow.}
proc `$` *(id: TopicID): string {.borrow.}

type
  QueueID* = distinct uint32
## The complete queue ID, containing the process ID, thread ID and topic ID.

proc tid*(queue: QueueID): ThreadID {.inline, noSideEffect.} =
  ## Dummy simplified impl!
  ThreadID(uint8(queue))

proc pid*(queue: QueueID): ProcessID {.inline, noSideEffect.} =
  ## Dummy simplified impl!
  ProcessID(uint16(queue))

proc cid*(queue: QueueID): TopicID {.inline, noSideEffect.} =
  ## Dummy simplified impl!
  TopicID(uint32(queue))

proc `$` *(queue: QueueID): string =
  ## String representation of a QueueID.
  $pid(queue) & "." & $tid(queue) & "." & $cid(queue)



Getting ambiguous call on $(range) and idk why

2017-12-31 Thread monster
I've got a problem calling "$" on some range type. It all looks perfectly fine 
to me. Atm I would tend to think it's a compiler error. Here is the simplified 
code:


const MAX_THREADS* = 64
const MAX_TOPICS* = 1048576

const MAX_PROCESSES* = int(4294967296'i64 div int64(MAX_THREADS * MAX_TOPICS))

type
  ThreadID* = range[0'u8..uint8(MAX_THREADS-1)]
## Thread ID, within process.
  ProcessID* = range[0'u16..uint16(MAX_PROCESSES-1)]
## Process ID of thread.
  TopicID* = range[0'u32..uint32(MAX_TOPICS-1)]
## TopicID, within thread.

proc `$` *(id: ThreadID): string {.inline.} =
  ## Somehow, default `$` is ambiguous.
  $uint8(id)

proc `$` *(id: ProcessID): string {.inline.} =
  ## Somehow, default `$` is ambiguous.
  $uint16(id)

proc `$` *(id: TopicID): string {.inline.} =
  ## Somehow, default `$` is ambiguous.
  $uint32(id)

type
  QueueID* = distinct uint32
## The complete queue ID, containing the process ID, thread ID and topic ID.

proc tid*(queue: QueueID): ThreadID {.inline, noSideEffect.} =
  ## Dummy simplified impl!
  ThreadID(uint8(queue))

proc pid*(queue: QueueID): ProcessID {.inline, noSideEffect.} =
  ## Dummy simplified impl!
  ProcessID(uint16(queue))

proc cid*(queue: QueueID): TopicID {.inline, noSideEffect.} =
  ## Dummy simplified impl!
  TopicID(uint32(queue))

proc `$` *(queue: QueueID): string =
  ## String representation of a QueueID.
  let p = $pid(queue)
  let tt: ThreadID = tid(queue)
  let t = $tt # ERROR LINE
  let c = $cid(queue)
  p & "." & t & "." & c


And it fails to compile with:


kueues.nim(356, 13) Error: ambiguous call; both kueues.$(id: 
ThreadID)[declared in kueues.nim(130, 5)]
and kueues.$(id: ProcessID)[declared in kueues.nim(134, 5)] match for: 
(ThreadID)


ThreadID is a range of uint8, and ProcessID is a range of uint16. I do not see 
how I could possibly be more explicit than I already am, which is IMO already 
"too" explicit. The original (working) code, used to look like this:


const MAX_THREADS* = 64
const MAX_TOPICS* = 1048576

# ...

const MAX_PROCESSES* = int(4294967296'i64 div int64(MAX_THREADS * MAX_TOPICS))

type
  ThreadID* = range[0..MAX_THREADS-1]
## Thread ID, within process.
  ProcessID* = range[0..MAX_PROCESSES-1]
## Process ID of thread.
  TopicID* = range[0..MAX_TOPICS-1]
## TopicID, within thread.

type
  QueueID* = distinct uint32
## The complete queue ID, containing the process ID, thread ID and topic ID.

proc tid*(queue: QueueID): ThreadID {.inline, noSideEffect.} =
  ## Dummy simplified impl!
  ThreadID(uint8(queue))

proc pid*(queue: QueueID): ProcessID {.inline, noSideEffect.} =
  ## Dummy simplified impl!
  ProcessID(uint16(queue))

proc cid*(queue: QueueID): TopicID {.inline, noSideEffect.} =
  ## Dummy simplified impl!
  TopicID(uint32(queue))

proc `$` *(queue: QueueID): string =
  ## String representation of a QueueID.
  $pid(queue) & "." & $tid(queue) & "." & $cid(queue)


Then I decided to change ThreadID, ProcessID and TopicID from range of int, to 
range of appropriate specific size (uint8/uint16/uint32). That is when I got 
the error. First I added the explicit $(ThreadID), $(ProcessID) and $(TopicID) 
procs and then tried to break the $(QueueID) proc into multiple steps. But I 
cannot get rid of the error.


Re: I'm getting a type mismatch despite using a type cast

2017-12-30 Thread monster
@Udiknedormin I can't say 100% for sure, because now I get some error that it 
cannot find the required "Interlocked" function (maybe wrong version of Visual 
Studio installed?), but at least the original error message is gone, so I think 
your solution worked. 


Re: ASM on Windows basically dead?

2017-12-30 Thread monster
@cheatfate So, I _could_ still code something like 
interlockedExchangeAddNoFence64() in ASM, and use it both in 32-bits and 
64-bits with MSVC, but I have to put it in a separate ASM file? And maybe 
explicitly "compile" (not sure if this is the right term) it with some ASM 
tool? I currently have no clue how to do that, but at least that sounds like a 
workable solution. Thanks.


I'm getting a "type mismatch" despite using a type cast

2017-12-29 Thread monster
I’m getting a "type mismatch" when compiling my code, and I cannot see what I’m 
doing wrong. Since the code is somewhat large, here is a summarized version:


# MODULE atomiks

type
  VolatilePtr*[T] = distinct ptr T

template declVolatile*(name: untyped, T: typedesc, init: T) {.dirty.} =
  ## Defines a hidden volatile value, with an initial value, and a pointer to it
  var `volatile XXX name XXX value` {.global,volatile.} = init
  let `name` {.global.} = cast[VolatilePtr[T]](addr `volatile XXX name XXX value`)

# ...

proc atomicIncRelaxed*[T: AtomType](p: VolatilePtr[T], x: T = 1): T =
  ## Increments a volatile (32/64 bit) value, and returns the new value.
  ## Performed in RELAXED/No-Fence memory-model.
  ## Will only compile on Windows 8+!
  let pp = cast[pointer](p)
  when sizeof(T) == 8:
cast[T](interlockedExchangeAddNoFence64(pp, cast[int64](x)))
  elif sizeof(T) == 4:
cast[T](interlockedExchangeAddNoFence32(pp, cast[int32](x)))
  else:
static: assert(false, "invalid parameter size: " & $sizeof(T))

# MODULE kueues

type
  MsgSeqID* = int32

declVolatile(myProcessGlobalSeqID, MsgSeqID, MsgSeqID(0))
# Current (global) per Process message sequence ID.

proc seqidProvider(): MsgSeqID =
  ## Generates a message sequence ID for the Thread.
  result = atomicIncRelaxed(myProcessGlobalSeqID) # LINE NO 213


The line with atomicIncRelaxed() doesn’t compile with this message:


kueues.nim(213, 14) Error: type mismatch: got (int) but expected 'MsgSeqID 
= int32'


I think that the type of “myProcessGlobalSeqID” should be 
“VolatilePtr[MsgSeqID]”, therefore the call to 
“atomicIncRelaxed(myProcessGlobalSeqID)” should receive a 
“VolatilePtr[MsgSeqID]” as input parameter, and return a “MsgSeqID” as output, 
since I cast the result of "interlockedExchangeAddNoFenceXX()" to "T". Since 
“result” is of type “MsgSeqID”, everything should be fine, IMO.


ASM on Windows basically dead?

2017-12-28 Thread monster
Hi.

As I mentioned in a previous post, I would have liked to have a "relaxed" 
fetch_and_add, which also works on Windows. Such a method only exists starting 
with Windows 8 (interlockedExchangeAddNoFence64), which I thought was too 
limiting; there are still people out there with Windows 7 that I might want to 
"target" (anything below that is uninteresting). It seems that to be able to 
use interlockedExchangeAddNoFence64() "conditionally" on Windows 8+, I would 
either have to build a "Windows 8" version, and a "Windows 7" version, or put 
the code in a dynamically loaded DLL. This felt like an overkill for just one 
single function. Also, it would have meant using interlockedExchangeAdd64() 
(with fence) instead on Windows 7, which was also stupid, since it's a CPU 
feature, not an OS one.

So I searched and found out that it's basically trivial to implement under 
Intel, as defined in 
[Wikipedia](https://en.wikipedia.org/wiki/Fetch-and-add#x86_implementation) All 
you need is a "lock; xaddl %0, %1". So I set out to try to do the same in Nim, 
but it failed to compile. So I tried the Nim example for ASM usage, and it also 
failed to compile:


{.push stackTrace:off.}
proc addInt(a, b: int): int =
  # a in eax, and b in edx
  asm """
  mov eax, `a`
  add eax, `b`
  jno theEnd
  call `raiseOverflow`
theEnd:
  """
{.pop.}


Which produces:


cl.exe /c /nologo /Z7 /IC:\nim-0.17.2\lib /Fo...atomiks.obj ...atomiks.c
geth_atomiks.c
...atomiks.c(44): error C4235: nonstandard extension used: '__asm' keyword 
not supported on this architecture
...atomiks.c(45): error C2065: 'mov': undeclared identifier
...atomiks.c(45): error C2146: syntax error: missing ';' before identifier 
'eax'


Searching for "error C4235: nonstandard extension used: '__asm' keyword not 
supported on this architecture", I read in several places things like "Inline 
asm on 64bit development is not a supported scenario ...".

So, basically, since x86 apps are dying out, and (almost) nobody owns a 32-bit 
Windows anymore, ASM on Windows is dead? Or am I missing something?

By now, I think that giving up on Windows 7 is probably the rational thing to 
do; anything else is overkill. 
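An aside: one untested possibility (an assumption on my part, not something verified here) is to skip inline asm entirely and bind the MSVC compiler intrinsic by name; `_InterlockedExchangeAdd64` is a documented intrinsic declared in `<intrin.h>` and needs no separate asm file on x64:

```nim
# Hypothetical binding sketch for MSVC only; the pragma spelling and the
# availability of the intrinsic should be double-checked against the
# installed toolchain before relying on it.
when defined(vcc):
  proc interlockedExchangeAdd64(p: ptr int64, v: int64): int64
    {.importc: "_InterlockedExchangeAdd64", header: "<intrin.h>".}
```

Since the compiler expands the intrinsic inline, there is no symbol for the linker to miss, which would sidestep the LNK2019 problem above.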


importc: multiple headers?

2017-12-28 Thread monster
Hi,

if I want to use the importc pragma, but the header containing the function I 
want only works properly if prefixed by one or more other headers, how do I do 
this?

This is the example in the doc:


proc printf(formatstr: cstring) {.header: "<stdio.h>", importc: "printf", varargs.}


So, what if "" needed to be imported before ""?
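One workaround I've seen for this kind of ordering problem (a sketch; `wrapper.h`, `first.h`, `second.h` and `some_func` are all hypothetical names) is a tiny hand-written wrapper header that performs the includes in the required order, and then pointing `header:` at the wrapper:

```nim
# wrapper.h (a file you create next to the .nim source) would contain:
#   #include <first.h>    /* the prerequisite header */
#   #include <second.h>   /* the header that declares some_func */
proc someFunc(x: cint): cint {.importc: "some_func", header: "wrapper.h".}
```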


Re: Can I somehow emulate Java's System.currentTimeMillis() in Nim?

2017-12-28 Thread monster
Hi. I haven't looked at the patch, but from the pull request comments, I think 
it would do just fine. 


Wrapping BOOL WINAPI IsWindows8OrGreater(void) ...

2017-12-17 Thread monster
I've just discovered that there are "no fence" versions of atomic-add on 
Windows, but they are only available starting with Windows 8/Windows Server 
2012. A quick search told me that I should use 
[IsWindows8OrGreater()](https://msdn.microsoft.com/en-us/library/windows/desktop/dn424961\(v=vs.85\).aspx)
 from VersionHelpers.h.

I've tried something like this:


const
  hasThreadSupport = compileOption("threads") and not defined(nimscript)

if defined(vcc) and hasThreadSupport:
  when defined(cpp):
    proc isWindows8OrGreater(): bool
      {.importcpp: "_IsWindows8OrGreater(void)", header: "<VersionHelpers.h>".}
  else:
    proc isWindows8OrGreater(): bool
      {.importc: "_IsWindows8OrGreater", header: "<VersionHelpers.h>".}


but trying to use it fails with such errors:


cl.exe /c /nologo /Z7 /IC:\nim-0.17.2\lib ...
atomiks.c
C:\Program Files (x86)\Windows 
Kits\10\include\10.0.14393.0\um\VersionHelpers.h(39): error C2061: syntax 
error: identifier 'BOOL'
C:\Program Files (x86)\Windows 
Kits\10\include\10.0.14393.0\um\VersionHelpers.h(39): error C2059: syntax 
error: ';'
C:\Program Files (x86)\Windows 
Kits\10\include\10.0.14393.0\um\VersionHelpers.h(40): error C2146: syntax 
error: missing ')' before identifier 'wMajorVersion'
C:\Program Files (x86)\Windows 
Kits\10\include\10.0.14393.0\um\VersionHelpers.h(40): error C2061: syntax 
error: identifier 'wMajorVersion'
C:\Program Files (x86)\Windows 
Kits\10\include\10.0.14393.0\um\VersionHelpers.h(40): error C2059: syntax 
error: ';'
C:\Program Files (x86)\Windows 
Kits\10\include\10.0.14393.0\um\VersionHelpers.h(40): error C2059: syntax 
error: ','
C:\Program Files (x86)\Windows 
Kits\10\include\10.0.14393.0\um\VersionHelpers.h(40): error C2059: syntax 
error: ')'
C:\Program Files (x86)\Windows 
Kits\10\include\10.0.14393.0\um\VersionHelpers.h(57): error C2061: syntax 
error: identifier 'BOOL'


I would like to know how to correctly call IsWindows8OrGreater() OR how to 
otherwise check the Windows version in some other "Nim way", if available. This 
would only need to be done once at the start of the application, so "speed" is 
not relevant in this case.

Another related question is, once I can do:


let WIN8_OR_GREATER = isWindows8OrGreater()


Can I write something like this:


proc atomicIncRelaxed*[T: AtomType](p: VolatilePtr[T], x: T = 1): T =
  ## Increments a volatile (32/64 bit) value, and returns the new value.
  ## Performed in RELAXED/No-Fence memory-model.
  let pp = cast[pointer](p)
  when sizeof(T) == 8:
    if WIN8_OR_GREATER:
      cast[T](interlockedExchangeAddNoFence64(pp, cast[int64](x)))
    else:
      cast[T](interlockedExchangeAdd64(pp, cast[int64](x)))
  elif sizeof(T) == 4:
    if WIN8_OR_GREATER:
      cast[T](interlockedExchangeAddNoFence32(pp, cast[int32](x)))
    else:
      cast[T](interlockedExchangeAdd32(pp, cast[int32](x)))
  else:
    static: assert(false, "invalid parameter size: " & $sizeof(T))


and expect it to work on Windows 7? Or will it fail/crash because 
interlockedExchangeAddNoFence64() is not available, even if I never call it, 
due to the runtime check?


Re: Can I somehow emulate Java's System.currentTimeMillis() in Nim?

2017-12-17 Thread monster
Thanks!


Can I somehow emulate Java's System.currentTimeMillis() in Nim?

2017-12-17 Thread monster
I'd like to get the current time such that:

  * It's sub-second resolution (milliseconds sounds best).
  * It's a fixed, cross-platform size (preferably 64-bits, because I need 
sub-second resolution)
  * It's an "integer", so that computation on the same value results in the 
same result on every processor and platform.



I've looked at the module "times", but do not see how to emulate Java's 
System.currentTimeMillis() with it, which is basically what I'd need.
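A minimal sketch of such a helper, assuming the `times` module's `epochTime()` (which returns the seconds since the Unix epoch as a float, with sub-second precision) is acceptable as the underlying clock:

```nim
import times

proc currentTimeMillis(): int64 =
  ## Milliseconds since the Unix epoch, as a fixed-size integer --
  ## roughly what Java's System.currentTimeMillis() returns.
  int64(epochTime() * 1000.0)

when isMainModule:
  let ms = currentTimeMillis()
  assert ms > 1_000_000_000_000'i64  # we are well past September 2001
```

Being an int64, arithmetic on the result is the same on every platform; the float-to-int conversion only matters at the moment the timestamp is taken.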


Re: How does a "Table" find the "hash()" proc of a type?

2017-12-11 Thread monster
@stisa I still had the error after adding it to tekst and rekjister (either 
because I use VCC or maybe I did it wrong), so I added it to typerekjister and 
kueues too (maybe pointlessly?) and then it works. Many thanks! Now I actually 
start working on my actor system, which was my goal when starting with Nim. 


Re: How does a "Table" find the "hash()" proc of a type?

2017-12-10 Thread monster
@cdome Mixin is not a concept I'm familiar with; I thought that was meant to be 
something basically like a "partial template". I've looked up mixin in the doc; 
from that example I still don't know what it does (looks like some kind of 
"symbol import"?), so I need a bit more direction. I tried adding:


mixin hash
mixin `==`


In the proc that creates the Table:


proc initSharedTable*[A, B](initialSize=64): SharedTable[A, B] =
  assert isPowerOfTwo(initialSize)
  mixin hash
  mixin `==`
  result.counter = 0
  result.data = initSharedArray[KeyValuePair[A, B]](initialSize)


but it didn't seem to make any difference. Then I tried adding it directly at 
the top of the module, but the compiler says it's invalid there, then I tried 
adding it in the template where the error is ("hc = hash(key)"):


template genHashImpl(key, hc: typed) =
  mixin hash
  mixin `==`
  hc = hash(key)
  if hc == 0:   # This almost never taken branch should be very predictable.
    hc = 314159265  # Value doesn't matter; Any non-zero favorite is fine.


But the error doesn't go away. Where else should I put it?

@boia01 What I mean is, that I have a module that defines the type 
"SharedTable[A, B]", and in another module I have "SharedText", and in a third 
module (TypeRegister) I import both modules and define a type that contains a 
"SharedTable[SharedText, Whatever]", so I'm "passing SharedText as a generic 
parameter to SharedTable".
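For reference, here is a self-contained sketch of what `mixin` actually does inside a generic (the `Wrapped` type and `hashOf` proc are my illustration, not the poster's code): it marks `hash` as an open symbol, so overloads visible at the *instantiation* site are considered, not only those visible where the generic is defined.

```nim
import hashes

type Wrapped = object
  x: int

# A custom hash for our own type; in the poster's setup this would be
# hash(SharedText), defined in a separate module.
proc hash(w: Wrapped): Hash = hash(w.x)

proc hashOf[T](v: T): Hash =
  mixin hash   # open symbol: resolved again at each instantiation
  hash(v)

when isMainModule:
  assert hashOf(Wrapped(x: 42)) == hash(Wrapped(x: 42))
  assert hashOf("abc") == hash("abc")
```

The `mixin` has to sit inside the generic proc/template whose body calls `hash` (as `genHashImpl` above does), and the `hash(SharedText)` overload must be visible in the module that triggers the instantiation.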


How does a "Table" find the "hash()" proc of a type?

2017-12-10 Thread monster
So far, I have coded the following:

  * A simple "string-in-shared-heap" module (I know there is something like 
that already in the standard lib, mine caches the hashcode).
  * A seq-in-shared-heap module.
  * A Table-in-shared-heap (I know there is something like that already in the 
standard lib, but mine is on purpose NOT thread-safe).
  * A thread-safe "register", that maps a key both to a value and a unique ID 
on the shared heap, using the above "seq" and "Table".
  * A register for "types", built on the above "register". It uses the 
"string-in-shared-heap" as "key".
  * A module that exports a few "atomics" methods (load, store, exchange, cas) 
that work on both pthreads AND Windows.
  * Concurrent message-queues on the shared heap, using all of the above.



All of this seems to work (except the message-queues), at least in the "inline" 
tests written with "when isMainModule: ..."

My issue is, that when I use the "type register" in the message-queues, 
suddenly the "shared heap table" cannot find the hash function of the "shared 
heap string". The code in the message queues calls the same procs as the test 
code in the "type register", so I see no reason why the hash() proc could not 
be found.

The whole code is rather large now, mostly due to containing a clone of 
"tables.nim", so I don't want to post it here, but I'll create some github 
repo(s) for it, once it has been cleaned up. I'm only posting the (simplified) 
"string-in-heap" module for now, to show how the hash() proc is defined.

Do I need to do anything more, so that a module that receives the SharedText as 
generic parameter can find the hash(SharedText) proc?


import hashes

type
  SharedText* = object
txt*: cstring
txthash*: Hash
txtlen*: int

proc hash*(st: SharedText): Hash {.inline, noSideEffect.} =
  result = st.txthash

proc len*(st: SharedText): int {.inline, noSideEffect.} =
  result = st.txtlen

proc `$`*(st: SharedText): string {.inline.} =
  result = $st.txt

proc `==`*(a, b: SharedText): bool {.inline, noSideEffect.} =
  (a.txthash == b.txthash) and (a.len == b.len) and (cmp(a.txt, b.txt) == 0)

proc initSharedText*(s: cstring): SharedText {.inline, noSideEffect.} =
  result.txt = s
  result.txtlen = len(s)
  result.txthash = hash(s)


And the compiler error message looks something like:


# ...
mysharedtable.nim(153, 12) Error: type mismatch: got (SharedText)
but expected one of:
proc hash(sBuf: string; sPos, ePos: int): Hash
proc hash(x: uint64): Hash
# ...



Re: How does seq handle negative array index?

2017-12-07 Thread monster
Hi, mratsim! I'm actually using Nim version "whatever was available when I 
installed it", which is "0.17.2".

Thanks a lot; that sure will save me masses of time!


Re: How does seq handle negative array index?

2017-12-06 Thread monster
It seems "newSeq[T](N)" creates a seq with not only N "capacity", but also N 
"size" (which contradicts my general assumption as to how newly created "empty" 
"collections" normally work (mostly based on my Java experience), and hence my 
mistake trying to emulate seq on the shared heap).
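That behaviour is easy to check directly: `newSeq[T](n)` gives n zero-initialised elements, while `newSeqOfCap` (available in reasonably recent Nim versions) reserves capacity without creating any elements:

```nim
var a = newSeq[int](4)
assert a.len == 4        # four default-initialised items, not an empty seq
assert a[3] == 0

var b = newSeqOfCap[int](4)
assert b.len == 0        # capacity only; no items yet
b.add(7)
assert b.len == 1 and b[0] == 7
```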


How does seq handle negative array index?

2017-12-05 Thread monster
I've almost finished converting the default Table implementations to running on 
the shared heap, using my "shared seq". But I'm running into trouble because my 
type doesn't handle indexes like the seq does. Unless I'm mistaken, it seems 
that seq can accept _negative_ array indexes too.

Here are the code lines that cause an error when replacing seq with my 
"shared seq".


template maxHash(t): untyped = high(t.data)
  # ...
  var h: Hash = hc and maxHash(t)
  while isFilled(t.data[h].hcode):  # t.data is a seq in the original code.
# ...


And my "shared seq" gets called with an index of -3359640189252970303, which it 
(obviously) doesn't like. Idk where the implementation of "proc []" for seq is, 
so I cannot look it up. The closest I have found (in system.nim) is this:


# :array|openarray|string|seq|cstring|tuple
proc `[]`*[I: Ordinal;T](a: T; i: I): T {.
  noSideEffect, magic: "ArrGet".}


But idk where/how "ArrGet" is defined.
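A guess at what is going on, sketched with a plain seq: `[]` itself never accepts negative indexes; the tables code stays in range only because `hc and maxHash(t)` masks the (possibly negative) hash against `high(t.data)`, which works when the data length is a power of two. If the "shared seq" reports a wrong length (e.g. 0, so `high` is -1 and the mask is all one-bits), a negative hash falls through unmasked:

```nim
var data = newSeq[int](8)           # power-of-two length, high(data) == 7
let hc = -3359640189252970303       # the negative hash value from the post
let h = hc and high(data)           # two's-complement AND keeps the low bits
assert h >= 0 and h <= high(data)   # a masked index is always in range

let unmasked = hc and -1            # what happens if high(t.data) were -1
assert unmasked < 0                 # ...the negative index leaks through
```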


Re: How to debug a compile error in a template?

2017-12-05 Thread monster
The "proc []" of my "shared seq" had a return type of "T" instead of "var T". 
The compiler did give me all the info I needed; I was just unable to understand 
it.


Re: How to debug a compile error in a template?

2017-12-05 Thread monster
I've seen those, but I was thinking about all the bits that are "red" in 
VS-Code... Mostly because I cannot immediately see what is wrong with the 
location where the compiler stops, and hoped that it might "go away" if I 
could fix the "other locations" that VS-Code shows. The compiler gives me one 
error and stops, but VS-Code shows me many errors. I guess I could just try to 
fix the one error (with context) that the compiler gives me, and repeat until 
"nothing is red anymore".

But how does VS-Code find multiple "error locations" if the compiler stops 
after the first error? I just checked the compiler "help", and could not see 
anything that tells it to carry on compiling after the first error.


How to debug a compile error in a template?

2017-12-05 Thread monster
If I have a template, and it is used, directly or indirectly, in many places, 
and it has a compile error, is there a 'trick' to find out which call location 
causes the error? (Beyond commenting out every location until the error is gone)


What is the entire "seq" API?

2017-12-04 Thread monster
Hi.

I've come to the conclusion that what I need to build my shared-heap 
thread-safe concurrent data-structures, rather than thread-safe shared lists 
and shared tables, is _unguarded_ (thread-_unsafe_, if you will) shared lists 
and tables (and probably shared sets too), so that I can manage the lock 
myself, and not end up having to have a lock implementing a "user transaction" 
around a combination of data-structures that already have their own locks.

The implementation of "table" is "in plain sight", so that I can probably clone 
it, changing just enough so that it works on the shared heap. But table builds 
on seq, so I need a "shared seq" first. I have something like a shared seq 
already, but idk how "complete" it is. That is the point at which I realized 
that I can neither find the seq source code, which looks like this:


seq*{.magic: "Seq".}[T]


Nor some wiki page that would document the actual API of a seq. I'm 
hoping I just "missed" it, and it is actually documented somewhere... 
(considering it's _the_ core data-structure).

OTOH, if someone can point me to an _existing_ shared-heap, 
unguarded/unsynchronized implementation of seq and tables, that would save me 
even more time. A cursory glance at nimble packages didn't reveal anything like 
that.
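In lieu of a single reference page: the core seq operations live as magic procs in system.nim (`add`, `len`, `setLen`, `[]`, `@`, `&`, `pop`, `insert`, `del`, ...), with the rest in the sequtils module. A quick tour, as a sketch of the surface a "shared seq" would need to replicate:

```nim
var s = newSeq[int]()    # empty; same as: var s: seq[int] = @[]
s.add(1)
s.add(2)
assert s.len == 2 and s[0] == 1
s.setLen(3)              # grows with default-initialised values (or shrinks)
assert s == @[1, 2, 0]
s.insert(99, 0)          # insert at index 0, shifting the rest
assert s == @[99, 1, 2, 0]
assert s.pop() == 0      # removes and returns the last element
s.del(0)                 # O(1) delete: moves the last element into slot 0
assert s == @[2, 1]
```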


Re: SharedTable: missing hasKey() and len()

2017-12-04 Thread monster
@Araq Thanks! This seems to work. 

> it's almost impossible to use these without introducing subtle races

I assume you mean races in the _user code_, rather than in the SharedTable 
code itself.

But I must still ask: I've seen the "withLock" pattern used in multiple places 
(OK, mine was slightly changed).
could not interfere with each other? How is that possible? Is it because it's a 
template?


Re: SharedTable: missing hasKey() and len()

2017-12-03 Thread monster
I have been trying to work around the missing hasKey() using mget() and a 
try/finally, but this fails to compile as well.

This is basically what I'm doing:


import sharedtables
import options
import typetraits

type
  TypeKey* = int
  TypeInfo*[M] = object
myinfo: M
  TypeRegister[M] = object
table: SharedTable[TypeKey, TypeInfo[M]]

proc createTypeRegister*(M: typedesc): TypeRegister[M] =
  result.table = initSharedTable[TypeKey, TypeInfo[M]](64)

proc find[A,B](table: var SharedTable[A,B], k: A): Option[B] {.inline, noSideEffect.} =
  ## Ugly work-around for missing SharedTable.hasKey()
  try:
    some(table.mget(k))
  except:
    none(B)

proc find*[M](reg: var TypeRegister[M], tk: TypeKey): Option[TypeInfo[M]] {.inline, noSideEffect.} =
  reg.table.find(tk)

proc `[]=`*[M](reg: var TypeRegister[M], tk: TypeKey, tv: TypeInfo[M]) {.inline.} =
  reg.table[tk] = tv

when isMainModule:
  echo("TESTING...")
  var t = createTypeRegister(bool)
  var tv: TypeInfo[bool]
  tv.myinfo = true
  t[TypeKey(4)] = tv
  let find2 = t.find(TypeKey(2))
  let find4 = t.find(TypeKey(4))
  assert(isNone(find2))
  assert(isSome(find4))
  echo("DONE")


This compiles and works fine. In my _real code_, the two "find()" procs are 
_exactly_ the same as in this example. But calling "table.mget(k)" there 
results in this compilation error:


lib\pure\collections\sharedtables.nim(114, 10) Error: undeclared 
identifier: 'hasKey'


The code of SharedTable.mget() looks like this:


proc mget*[A, B](t: var SharedTable[A, B], key: A): var B =
  ## retrieves the value at ``t[key]``. The value can be modified.
  ## If `key` is not in `t`, the ``KeyError`` exception is raised.
  withLock t:
    var hc: Hash
    var index = rawGet(t, key, hc)
    let hasKey = index >= 0
    if hasKey: result = t.data[index].val
  if not hasKey:   # THIS IS LINE 114
    when compiles($key):
      raise newException(KeyError, "key not found: " & $key)
    else:
      raise newException(KeyError, "key not found")


I totally fail to see how this would produce a different result, if called by 
basically the same code.

This is the actual code, with one module (sharedarray) inserted "inline" for 
simplicity:


# A type-register maps types to IDs and back, at runtime.
# No attempt is made to return the same IDs on every run.

import locks
import typetraits
import hashes
import sharedtables
import options
import algorithm
import math
#import tables

#import sharedarray


#
# This should be in imported module: sharedarray

#
type
  # TODO Find a way to make the type restriction be recursive
  SharedArray*[T: not (ref|seq|string)] = ptr object
    ## Simple variable-length shared-heap array, with a pre-allocated capacity.
    ## Particularly useful because still compatible with open-array.
    ## It is *not* guarded by a lock, and therefore not thread-safe by default.
    size: int
      ## "first int" size, so we are compatible with openarray.
    mycapacity: int
      ## "second int" is required for seq-to-openarray compatibility, and used as "capacity" in our case.
    data: UncheckedArray[T]
      ## The data itself

proc len*[T](list: SharedArray[T]): int {.inline, noSideEffect.} =
  ## Returns the current length/size of the shared array. 0 if nil.
  if list != nil:
list.size
  else:
0

proc isNilOrEmpty*[T](list: SharedArray[T]): bool {.inline, noSideEffect.} =
  ## Returns true if the current length/size of the shared array is 0 or if 
it is nil.
  (list == nil) or (list.size == 0)

proc clear*[T](list: var SharedArray[T]): void =
  ## Sets the current length/size of the shared array to 0.
  if list != nil:
list.size = 0

proc capacity*[T](list: SharedArray[T]): int {.inline, noSideEffect.} =
  ## Returns the fixed capacity of the shared array. 0 if nil.
  if list != nil:
list.mycapacity
  else:
0

iterator items*[T](it: var SharedArray[T]): T =
  ## Iterates over all array items. it cannot be nil.
  assert(it != nil)
  for i in 0..it.size-1:
yield it.data[i]

proc initSharedArray*[T](capacity: int = 64): SharedArray[T] =
  ## Returns a new shared-heap array with a given maximum capacity, which must be power-of-two.
  assert(capacity >= 0)
  assert(isPowerOfTwo(capacity))
  result = cast[SharedArray[T]](allocShared0(2*sizeof(int)+capacity*sizeof(T)))
  result.mycapacity = capacity
  

Re: Highest uint64

2017-12-03 Thread monster
Idk why "high(uint64)" doesn't work, but you could try 
"18446744073709551615.0'f64" instead of "float64(high(uint64))".
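In recent Nim versions `high(uint64)` works directly, and it is worth noting that the two spellings above end up as the same float64 anyway, because 2^64 - 1 has more significant bits than a float64 mantissa can hold and rounds up to exactly 2^64:

```nim
assert high(uint64) == 18446744073709551615'u64
# Both round to the same float64 (exactly 2^64 = 18446744073709551616.0):
assert float64(high(uint64)) == 18446744073709551615.0'f64
assert float64(high(uint64)) == 18446744073709551616.0'f64
```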


SharedTable: missing hasKey() and len()

2017-12-02 Thread monster
Hi.

I tried to re-write my "type register" to use SharedTable instead. I did not 
find a documentation page specifying the SharedTable API, so I read the code, 
but it's not all that obvious to me (in particular, the withXXX pattern). 
Mainly, atm, it seems my main problem is that SharedTable doesn't seem to have 
a "hasKey" or a "len" proc, and idk how to work around that. I tried to create 
a (very) simplified example of what I'm trying to do, but _the compiler crashes 
on it_, so I hope you at least get what I'm trying to do, even if this doesn't 
compile.


import locks
import typetraits
import sharedtables

type
  TypeRegister = object
table: SharedTable[char, int]
lock: Lock

proc register*(reg: ptr TypeRegister, T: typedesc): int =
  let tk = T.name[0] # In real code I have a "shared string" kind of thing as key
  if reg.table.hasKey(tk): # No "hasKey()"   :(
    result = reg.table.mget(tk) # I only need read access, but no "get(k)" or "[k]" AFAIK
  else:
    acquire(reg.lock)
    try:
      if reg.table.hasKey(tk):
        result = reg.table.mget(tk)
      else:
        let value: int = reg.table.len # No len() either. :(
        reg.table[tk] = value
        result = value
    finally:
      release(reg.lock)

when isMainModule:
  echo("Starting ...")
  let tr = createShared(TypeRegister)
  tr.table = initSharedTable[char, int]()
  initLock tr.lock
  echo("Registering bool ...")
  let id = register(tr, bool)
  echo("Done: " & $id)


On Windows 10 x64 I get a: 


SIGSEGV: Illegal storage access. (Attempt to read from nil?)


when I try to compile this (but I don't get that in my real implementation).

Possible (ugly) work-around for missing:

  * hasKey(): just call mget(), and catch the exception.
  * len(): store the "theoretical" size of table in TypeRegister itself (size 
should only change in register() anyway).




How to define a global proc pointer/value, used by threads?

2017-11-28 Thread monster
I'm trying to define a "configurable proc", to be called by threads. Although I 
would assume procs cannot be garbage collected(?), I get told calling it is not 
GC-safe. I have tried defining the proc "value" as gcsafe through its type, 
but that doesn't seem to work either.

I guess the "user" could just define the "now" proc directly as a normal proc, 
but I'm still wondering what I'm doing wrong here. Also, I might want to 
instead have now as a hidden global var, setup at runtime, before creating any 
thread. In that case, defining it as a "simple proc" in the "configuration 
module" would not work.


# This part in "configuration module":
type
  Timestamp* = float
  
  TimestampProvider* = proc (): Timestamp {.gcsafe.}

import times
let now*: TimestampProvider = cpuTime

# This part in "implementation module":
proc caller() {.thread.} =
  echo("NOW: " & $now())

var thread: Thread[void]
createThread[void](thread, caller)

joinThread(thread)



Re: atomics: Why is interlockedCompareExchange8 "safe"?

2017-11-27 Thread monster
The number of message types is not fixed. I'm trying something else now, using 
composition instead of inheritance.


Re: local-heap/multi-threading not working as I expect

2017-11-26 Thread monster
Things are clearer now, but that leaves me confused about a few things still.

  * Is there some invisible synchronisation going on every time I write 
top-level variables, or is it unsafe? If it _is_ unsafe, I would have expected 
a warning.
  * If "every top-level variable ... is global" and "{.global.} ... promote 
locally-declared variables to the global scope", then why is adding {.global.} 
to a top-level variable legal? While it obviously doesn't do harm, if the 
compiler had told me it was "pointless" as a warning, I would have probably 
realised that I got something wrong...
  * If default allocation (new) is thread-local, why make top-level variables 
global? That feels "counter-intuitive" somehow; I would have expected either 
everything to be global by default, or thread-local by default (which is 
obviously why I was confused). And if writing to top-level variables is not 
synchronized, and therefore unsafe, doesn't that make a good case for having 
them thread-local by default? Then the required {.global.} pragma would be a 
strong hint that access from multiple threads should be guarded.



I'm not saying it's a _bad_ design, but I'm saying it's a _surprising_ one. 
Top-level variables are normally global in the languages I know. But that 
obviously works together with allocation also being global as well. So I guess 
I was imagining that if allocation was local (by default), then top-level 
should be local too, so that it "fits together".


local-heap/multi-threading not working as I expect

2017-11-26 Thread monster
Hi. I thought I understood how the local heap in Nim works, but this simple 
example shows that I don't (it throws an exception). What am I doing/thinking 
wrong?


var myLocalThreadID = -1

proc initThreadID*(tid: int): void =
  ## Initialises the ThreadID. Can only be called once!
  if myLocalThreadID != -1:
raise newException(Exception, "ThreadID already initialised: " & 
$myLocalThreadID)
  myLocalThreadID = tid

when isMainModule:
  initThreadID(0)
  # ...
  
  proc receiver() {.thread.} =
initThreadID(1)
# ...
echo("receiver done!")
  
  var thread: Thread[void]
  createThread[void](thread, receiver)
  
  joinThread(thread)
  
  echo("main done!")


My assumption was that every thread, including the main thread, should get its 
_own copy_ of "myLocalThreadID" (because it's not _{.global.}_), safely 
initialized to "-1" for each thread. This test proves me wrong. So either 
"myLocalThreadID" is not local, or it is not initialised to "-1" (or something 
else even stranger?).
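For the record, a sketch of the pragma that gives the per-thread behaviour the post expects: a plain top-level `var` is one global shared by all threads, while a `{.threadvar.}` variable gets a fresh zero-initialised copy per thread (an initialiser like `= -1` is not allowed on it, so "unset" has to be encoded as 0; the `+ 1` shift below is my workaround for that, not the poster's code):

```nim
var globalId = -1              # one copy, shared by every thread

var localId {.threadvar.}: int # one zero-initialised copy per thread;
                               # "= -1" is rejected on threadvars

proc initThreadID(tid: int) =
  if localId != 0:
    raise newException(ValueError, "ThreadID already initialised")
  localId = tid + 1            # store tid + 1 so that 0 can mean "unset"

when isMainModule:
  initThreadID(0)
  assert localId == 1
  assert globalId == -1
```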


Re: Is nimue4 still maintained/used by anyone?

2017-11-26 Thread monster
In case anyone cares, a project maintainer confirmed that nimue4 last update 
was 4.16 and that it is no longer maintained.


Re: atomics: Why is interlockedCompareExchange8 "safe"?

2017-11-24 Thread monster
> No. It means that the object gains a type header that allows for dynamic 
> dispatch, RTTI, and such.

@Jehan Thank you for pointing that out! I thought the {.inheritable.} pragma 
was the solution to my question, but if it adds a "dynamic dispatch" pointer to 
the object, then I cannot use it. Sounds like it basically does the same thing 
as "object of RootObj"; my "messages" are going to be passed to other threads, 
serialized over the network, ... This just won't do. So I guess I'm back to 
"previous: pointer"


atomics: Why is interlockedCompareExchange8 "safe"?

2017-11-24 Thread monster
@coffeepot I _think_ what you pointed out is _by design_. This is also how I 
would expect it to work. OTOH, I guess a compiler warning in the first case 
would be good.


Pointer to generic type with unspecified generic parameter?

2017-11-22 Thread monster
I'm trying to do something like this:


type
  ThreadID* = distinct uint8
  
  Msg*[T] = object
sender*: ThreadID
receiver*: ThreadID
previous: pointer#ptr[Msg[any]]
content*: T

let space = alloc0(sizeof(Msg[int]))
let m = cast[ptr[Msg[int]]](space)
#let p:ptr[Msg[any]] = m.previous


I would like to define "previous" as something meaning ptr[Msg[any]], so that a 
ptr to any concrete Msg[?] can be assigned to it. I'm sure this has been asked 
before, but no matter how I phrase it, nothing came up in the search.

The "ugly workaround" is to cast the pointer to some concrete version of Msg, 
for example Msg[int], and access the "non generic fields", but not content. 
OTOH, I'm not even sure if this is safe, or if the compiler could possibly 
choose a different "byte offset" for the "non generic fields", based on T.
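One workaround sketch (the `MsgHeader` split is my naming, not the poster's code): keep the non-generic fields in a header object placed first, and let `previous` point at that header type. Every `Msg[T]` can then be cast to and from `ptr MsgHeader` safely, because the header's offset is the same for every T:

```nim
type
  ThreadID = distinct uint8
  MsgHeader = object
    sender, receiver: ThreadID
    previous: ptr MsgHeader     # can reference *any* message's header
  Msg[T] = object
    header: MsgHeader           # must stay the first field
    content: T

var a = Msg[int](content: 42)
var b = Msg[float](content: 3.14)
b.header.previous = cast[ptr MsgHeader](addr a)   # link b -> a

# Following the link back, the concrete message is recovered with a cast:
assert cast[ptr Msg[int]](b.header.previous).content == 42
```

The cast back to a concrete `ptr Msg[T]` is only valid if some out-of-band information (e.g. a kind field in the header) says which T the message actually carries.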


Re: Can I tell Nim to NOT use *reference* for a var parameter?

2017-11-20 Thread monster
@Araq Here is a minimal example. This is compiled with "vcc". It compiles to C, 
but not to C++.


type
  VolatilePtr*[T] = distinct ptr T

proc toVolatilePtr*[T](t: var T): VolatilePtr[T] =
  cast[VolatilePtr[T]](addr t)
  # Pretend we're actually checking that it's volatile, which is apparently
  # not possible to do.

var my_vbyte {.volatile.}: byte = 42'u8
var my_vbyte_ptr = toVolatilePtr[byte](my_vbyte)

type
  AtomType* = SomeNumber|pointer|ptr|char|bool
when defined(cpp):
  proc interlockedOr8(p: pointer; value: int8): int8
    {.importcpp: "_InterlockedOr8(static_cast<volatile char *>(#), #)", header: "<intrin.h>".}
else:
  proc interlockedOr8(p: pointer; value: int8): int8
    {.importc: "_InterlockedOr8", header: "<intrin.h>".}

proc atomicLoadFull*[T: AtomType](p: VolatilePtr[T]): T {.inline.} =
  let pp = cast[pointer](p)
  when sizeof(T) == 1:
cast[T](interlockedOr8(pp, 0'i8))
  else: # TODO, sizeof(T) == (2|4|8)
static: assert(false, "invalid parameter size: " & $sizeof(T))

assert(atomicLoadFull(my_vbyte_ptr) == 42'u8)



src\nimcache\abc_volatile2.cpp(306): error C2664: 'NU8 
*toVolatilePtr_EP8VL8ilqAQ1Zz9bJAA4rwQ(NU8 &)':
 cannot convert argument 1 from 'volatile NU8' to 'NU8 &'
src\nimcache\abc_volatile2.cpp(306): note: Conversion loses qualifiers


I've had the issue that the error doesn't always come, if I don't delete the 
nimcache, but I think it's caused by VS Code compiling automatically as well, 
using a different configuration.


Re: Can I somehow show context-specific information in an {.error.} ?

2017-11-20 Thread monster
@stisa Actually, I meant "when sizeof(T) == 1" rather than "if sizeof(T) == 1" 
(now corrected). But the problem is that in the real code I want to support 
multiple parameter sizes, and get a compile-time error if the size is 
unexpected. The example you gave me compiles, but fails at runtime. OTOH, this 
led me to find that I can have static asserts (static: assert(false, "invalid 
parameter size: " & $sizeof(T))), and that does the trick. Still, it would be 
nice if the error pragma supported that too.
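The pattern that emerged here, as a runnable sketch: put the size dispatch in `when` branches and end with `static: assert`, which only fires for instantiations that actually reach the fallback branch, and, unlike a bare `{.error.}` pragma, lets the message be built from T at compile time:

```nim
proc test[T](t: T): bool =
  when sizeof(T) == 1:
    true
  elif sizeof(T) == 4:
    true
  else:
    # Only triggers for instantiations with an unexpected size; the
    # assertion message can include compile-time info about T.
    static: assert(false, "invalid parameter size: " & $sizeof(T))

let b: byte = 42'u8
assert test(b)       # sizeof(byte) == 1: first branch
assert test(1'i32)   # sizeof(int32) == 4: second branch
# test(3.0'f64)      # would fail at compile time in the else branch
```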


Can I somehow show context-specific information in an {.error.} ?

2017-11-20 Thread monster
I'm trying to debug a compilation problem: the generic parameter of a proc 
doesn't seem to match my expectation, so I'd like to write something like this:


let b: byte = 42'u8

proc test*[T](t: T): bool =
  if sizeof(T) == 1:
result = true # ...
  else:
{.error: "invalid parameter size: " & $sizeof(T).}

test(b)


But the compiler returns this:


Error: invalid pragma: error: "invalid parameter size: " & $ sizeof(T)


I tried searching in the Nim codebase, but found no example. So is it 
impossible?


find . -name "*.nim" | xargs fgrep "{.error"



Re: Can I tell Nim to NOT use *reference* for a var parameter?

2017-11-19 Thread monster
@Araq I have read it, several times. But I do not see how this helps me, if I 
don't want to directly read or write a volatile, but rather I have to pass a 
pointer to a volatile location to an OS method that expects one.


Can I tell Nim to NOT use *reference* for a var parameter?

2017-11-19 Thread monster
I'm (still) trying to put the "missing bits" of atomic methods together into a 
cross-platform/cross-backend module. I've now got to the point that the VCC C 
version compiles and run (I only test with one thread, so idk if it really 
gives the expected atomic behaviour yet). When I try to build and run the VCC 
C++ version, I have an issue with C++ references. It seems "(t: var T)" is 
translated to a pointer in C, and a reference in C++. And this doesn't seem to 
play well with volatiles.

How do I make this work in C++?


type
  VolatilePtr*[T] = distinct ptr T

proc toVolatilePtr*[T](t: var T): VolatilePtr[T] =
  cast[VolatilePtr[T]](addr t)

var my_vbyte {.volatile.}: byte = 0'u8
var my_vbyte_ptr: VolatilePtr[byte] = toVolatilePtr[byte](my_vbyte)

echo(cast[pointer](my_vbyte_ptr) == nil)


When trying to use this, I get this kind of error:


error C2664: 'NU8 *toVolatilePtr_WWtk4tL3VWMmQk62xcO0Ew(NU8 &)': cannot 
convert argument 1 from 'volatile NU8' to 'NU8 &'



Re: Nim compiling using MS VS2015 (windows)

2017-11-18 Thread monster
@Araq I did not even try the default config, after reading this post; I 
assumed it didn't work... I'll try it asap.


Re: Is nimue4 still maintained/used by anyone?

2017-11-18 Thread monster
@Araq Many thanks!


Re: Nim compiling using MS VS2015 (windows)

2017-11-18 Thread monster
In case someone wants to use Visual-Studio-Code tasks to compile nim code with 
vcc, it seems to work with this:

  1. Create a vccnim.cmd where your nim.exe is.
  2. Use this line in vccnim.cmd (adjust path to match your installation):



> @call "C:\Program Files (x86)\Microsoft Visual 
> Studio\2017\Community\VC\Auxiliary\Build\vcvars64.bat"
> 
> nim.exe %*

  3. Replace nim.exe in tasks.json with vccnim.cmd

I've read that %* doesn't handle quoted parameters, so if someone has a better 
solution, please tell us.


Is nimue4 still maintained/used by anyone?

2017-11-18 Thread monster
I tried to build the [nimue4](https://github.com/pragmagic/nimue4) 
[sample](https://github.com/pragmagic/NimPlatformerGame), to see how it works, 
but had many problems, so I posted an 
[issue](https://github.com/pragmagic/nimue4/issues/28) on Github about it. That 
was over one month ago, and still no reply. The README of nimue4 mentions 
posting here (or on Nim Gitter) for questions; I tried a Github issue first, as 
I thought it wasn't relevant to most Nim users, but as I got no response there, 
I have no choice but to post here. Also, if the original maintainer(s) have 
abandoned the project, maybe someone else here is still using/maintaining it 
for themselves? I'd like to know that.


Re: Comment causing compiler to fail?

2017-11-18 Thread monster
I was about to post an issue to github for this, as I have a simple 
reproducible case, but a search in the issues turned this out: [Manual should 
specify where documentation comments are 
allowed](https://github.com/nim-lang/Nim/issues/4163) If I replace "## REMOVE 
ME!" with "# REMOVE ME!", it works.


Re: atomics: Why is interlockedCompareExchange8

2017-11-15 Thread monster
@mikra I can't say yet if my code will work, as I have to get Nim to use vcc 
first (found [this thread](https://forum.nim-lang.org/t/2770)), but my approach 
is somewhat different. Firstly, I tried to always use the "right size" call, by 
delegating to the appropriate Windows method using "when sizeof(T) == 8: ..." 
style code. Secondly, I also used "exchange" to replace "store" like you; I 
could not find anything better, but I have seen on stack-overflow people saying 
you should just set it non-atomically, and call a fence afterward. Maybe it 
works, but I didn't like that solution. Thirdly, I think "load" is better 
replaced by using "_InterlockedOr"; (x | 0) makes more sense to me than (x & 
F...).

What I still haven't understood yet, is why there seems to exist both 
"_InterlockedOr64_acq" and "InterlockedOr64Acquire" (for example), doing the 
same thing.


Comment causing compiler to fail?

2017-11-15 Thread monster
I was trying to understand why my code fails to compile, and eventually reduced 
it enough to find out it is caused by a comment. Is that _normal_?


# PART 1 : atomics_test1.nim
type
  VolatilePtr*[T] = distinct ptr T

proc toVolatilePtr*[T](t: var T): VolatilePtr[T] =
  cast[VolatilePtr[T]](addr t)

when declared(atomicLoadN):
  proc atomicLoadNSeqCST*[T: AtomType](p: VolatilePtr[T]): T {.inline.} =
    atomicLoadN(cast[ptr[T]](p), ATOMIC_SEQ_CST)
    ## REMOVE ME!



# PART 2 : atomics_test2.nim
import atomics_test1

var my_vbyte {.volatile.}: byte = 42'u8
var my_vbyte_ptr: VolatilePtr[byte] = toVolatilePtr[byte](my_vbyte)

assert(atomicLoadNSeqCST(my_vbyte_ptr) == 42'u8)


The assert in PART 2 fails to compile, with message:

> atomics_test2.nim(7, 25) template/generic instantiation from here
> 
> atomics_test1.nim(10, 16) Error: expression 'atomicLoadN(cast[ptr [T]](p), 
> ATOMIC_SEQ_CST)' is of type 'byte' and has to be discarded
> 
> The terminal process terminated with exit code: 1

If I remove the comment "## REMOVE ME!" in PART 1, it works again! 


Re: atomics: Why is interlockedCompareExchange8

2017-11-15 Thread monster
@mikra That is basically what I was trying to do; have a single API for all 
platforms. I could share it, once it works, but having a unified API in the 
standard library would surely be beneficial to many users.

On second thought, I'm not sure I want anyone to think I have any clue about 
the Windows atomics APIs; pthreads makes perfect sense to me, but the Windows 
calls are just weird. I'm just guessing what does what based on the online M$ 
docs.


Re: atomics: Why is interlockedCompareExchange8 "safe"?

2017-11-13 Thread monster
@cdome Great!

@Araq So, is/was (I'm assuming the '8' version is going to be used in the 
future), the call safe? I cannot imagine it would have been used at all, if it 
caused random memory overwrite. Maybe the 'compare' part of the call makes this 
safe? But then, why even bother offering 8/16/32 versions in the Windows API? 
Just because it's faster to only access as much memory as you need?


How do you check the presence of an annotation?

2017-11-12 Thread monster
I'd like to check that some "memory location" is annotated with {.volatile.}, 
to make sure my code only compiles if I use {.volatile.} in the right place. I 
searched the lib code but didn't really find much, except this (asyncmacro.nim):


proc asyncSingleProc(prc: NimNode): NimNode {.compileTime.} =
  # ...
  # LINE NO: 385
  # If proc has an explicit gcsafe pragma, we add it to iterator as well.
  if prc.pragma.findChild(it.kind in {nnkSym, nnkIdent} and $it == "gcsafe") != nil:
    closureIterator.addPragma(newIdentNode("gcsafe"))


I gave it a try, but did not get very far, as I'm still "practicing" the 
language itself, and haven't learned (or, more precisely, have already 
forgotten) how the meta-programming works.

Here is what I tried: 


import macros, strutils

type
  VolatilePtr*[T] = distinct ptr T
  # Means it's a pointer to some volatile value

proc hasAnnotation(stuff: NimNode, annotation: static[string]): bool {.compileTime.} =
  (stuff.pragma.findChild(it.kind in {nnkSym, nnkIdent} and $it == annotation) != nil)

template toVolatilePtr*[T](t: untyped) =
  when hasAnnotation(t, "volatile"):
VolatilePtr[T](addr t)
  else:
{.error: "t is not volatile!".}


when isMainModule:
  var tua = 42'i32
  let p: VolatilePtr[int32] = toVolatilePtr[int32](tua)
  let tua2: int32 = 42'i32 #atomicLoadNSeqCST[int32](p)
  assert(tua2 == tua)


