Re: Most efficient way to convert a uint64 to a seq of bytes

2019-12-21 Thread drkameleon
Well, I guess I won't... pollute this place with my questions anymore, in case 
anyone else discovers how many self-entitled, pretentious people are in here. 
(With the sole exception of @mratsim)

The sure thing is that with that type of mindset nothing will ever advance.

So... Have a great day from me! 


Re: What’s your favorite programming language and why?

2019-12-19 Thread drkameleon
> But obviously you need a systems language for writing such lean, fast, reusable 
> tools, and I think Nim or D are the best candidates.

Yep, I also forgot about D. Before I discovered Nim, that was my #1 system 
programming language. Now - speed-wise - I'm definitely having a conversion...


Re: What’s your favorite programming language and why?

2019-12-19 Thread drkameleon
**Re: Nim**. I couldn't agree more. If there is ONE thing I absolutely _do not_ 
like about the language, it is the issue with the modules. Basically, to keep my 
head in some state of sanity, I try to keep everything in as few modules as 
possible and "include" the rest (for "modularity") - which is practically the 
same thing.

Yes, re-arranging your code may be good at times, for better code organization, 
but in the end... the module recursion thing will come back at you...

I do like the language (obviously), which makes this quite a pity.


Re: What’s your favorite programming language and why?

2019-12-19 Thread drkameleon
Haha. I guess it does.

Perhaps the more obscure the language, the better your chances of finding a 
project to work on (if you are an expert at it, I mean).


Re: "Selector must be of an ordinal type, float or string"

2019-12-19 Thread drkameleon
Whoa!! Super-weird. Didn't even think of that...

Thanks! :)


Re: What’s your favorite programming language and why?

2019-12-19 Thread drkameleon
The next interesting question would be how many of us _do_ actually use our 
favorite language on a daily basis (meaning: professionally)


Re: Most efficient way to convert a uint64 to a seq of bytes

2019-12-19 Thread drkameleon
I perfectly understand what you're saying. I've benchmarked seqs myself _a lot_ 
and my conclusions are pretty much the same. The adds were left there for the 
sake of the example - my main concern was to get it working, and to optimize the 
masking part first.

But I obviously see your point - I'll optimize it after the initial issues have 
been solved.


"Selector must be of an ordinal type, float or string"

2019-12-19 Thread drkameleon
I'm trying to write a "switch-case" statement like:


case MyVal
of...


Run

where MyVal is of type "Value" (which is defined as a uint64).

How do I make the compiler interpret it right (since a uint64 is obviously an 
ordinal type)?
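
Not sure what the eventual fix here was, but as a minimal sketch of one 
workaround (assuming the selector's actual values fit into a signed type), 
converting the value before the case sidesteps the error:


type Value = uint64

let myVal: Value = 3

case int64(myVal)
of 0: echo "zero"
of 3: echo "three"
else: echo "something else"


Run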


Most efficient way to convert a uint64 to a seq of bytes

2019-12-19 Thread drkameleon
Right now this is how I'm doing it, but it definitely doesn't look very... 
efficient:

(Value is just a uint64)


proc writeValue(v: Value) =
    CD.add(Byte(v shr 56))
    CD.add(Byte((v shr 48) and Value(0xff)))
    CD.add(Byte((v shr 40) and Value(0xff)))
    CD.add(Byte((v shr 32) and Value(0xff)))
    CD.add(Byte((v shr 24) and Value(0xff)))
    CD.add(Byte((v shr 16) and Value(0xff)))
    CD.add(Byte((v shr 8) and Value(0xff)))
    CD.add(Byte(v and Value(0xff)))

template readValue(): Value =
    inc(IP, 8)
    (Value(CD[IP-8]) shl 56) or (Value(CD[IP-7]) shl 48) or
        (Value(CD[IP-6]) shl 40) or (Value(CD[IP-5]) shl 32) or
        (Value(CD[IP-4]) shl 24) or (Value(CD[IP-3]) shl 16) or
        (Value(CD[IP-2]) shl 8) or Value(CD[IP-1])


Run

Any suggestions?
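
For the writing side, one alternative sketch (assuming CD is the seq[Byte] code 
buffer from the snippet above): grow the seq once and let the stdlib endians 
module emit the 8 bytes in big-endian order in a single copy instead of eight 
adds:


import endians

type Byte = byte
var CD: seq[Byte]

proc writeValueFast(v: uint64) =
    # reserve 8 bytes at the end, then write them in one go
    let start = CD.len
    CD.setLen(start + 8)
    var tmp = v
    bigEndian64(addr CD[start], addr tmp)

writeValueFast(0x0102030405060708'u64)
echo CD   # @[1, 2, 3, 4, 5, 6, 7, 8]


Run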


Re: macOS Catalina (10.15.2) - linking warnings

2019-12-19 Thread drkameleon
Hmmm, I think I've already figured it out.

There were obviously object files remaining from a previous build (that is, from 
_before_ the OS upgrade).

What I did to "solve" it (make the warnings disappear, actually) was a 
--forceBuild:on release build, which regenerated the object files.

Once I did that, everything works as usual. :-)


macOS Catalina (10.15.2) - linking warnings

2019-12-19 Thread drkameleon
OK, so after a lot of hesitation, I decided to upgrade my system to 10.15 (I 
hope I won't regret it).

Now after compiling my project as usual, I'm getting the following warnings:


ld: warning: linking module flags 'SDK Version': IDs have conflicting 
values ('[2 x i32] [i32 10, i32 15]' from 
/Users/drkameleon/Documents/Code/OpenSource/arturo-lang/arturo2/.cache/release/stdlib_system.nim.c.o
 with '[2 x i32] [i32 10, i32 14]' from ld-temp.o)
ld: warning: linking module flags 'SDK Version': IDs have conflicting 
values ('[2 x i32] [i32 10, i32 15]' from 
/Users/drkameleon/Documents/Code/OpenSource/arturo-lang/arturo2/.cache/release/stdlib_strutils.nim.c.o
 with '[2 x i32] [i32 10, i32 14]' from ld-temp.o)
ld: warning: linking module flags 'SDK Version': IDs have conflicting 
values ('[2 x i32] [i32 10, i32 15]' from 
/Users/drkameleon/Documents/Code/OpenSource/arturo-lang/arturo2/.cache/release/stdlib_tables.nim.c.o
 with '[2 x i32] [i32 10, i32 14]' from ld-temp.o)
ld: warning: linking module flags 'SDK Version': IDs have conflicting 
values ('[2 x i32] [i32 10, i32 15]' from 
/Users/drkameleon/Documents/Code/OpenSource/arturo-lang/arturo2/.cache/release/@mc...@svalue.nim.c.o
 with '[2 x i32] [i32 10, i32 14]' from ld-temp.o)
ld: warning: linking module flags 'SDK Version': IDs have conflicting 
values ('[2 x i32] [i32 10, i32 15]' from 
/Users/drkameleon/Documents/Code/OpenSource/arturo-lang/arturo2/.cache/release/@mvm.nim.c.o
 with '[2 x i32] [i32 10, i32 14]' from ld-temp.o)


Run

Any ideas what it is about?


Re: What’s your favorite programming language and why?

2019-12-18 Thread drkameleon
> Interesting. Why do you hate Python?

Well, it's a bit controversial, I know. I have a friend who swears by it. 
However, whenever I _had_ to write sth in Python, I ended up struggling either 
with the different versions, virtual environments (or 
I-don't-know-how-it's-called), missing modules that cannot get installed 
no-matter-what (or that I basically don't have the patience to learn how to), 
and last but not least... indentation issues (yes, I know... we're on a Nim 
forum, but for some reason I've never been a huge fan of the... "off-side" rule).

Now, if it were as performant for systems programming as Nim, I might have given 
it a second chance... lol


Re: What’s your favorite programming language and why?

2019-12-18 Thread drkameleon
Having been programming for over 25 years, I most definitely couldn't say just 
one. But I could definitely say a few for sure.

  *  _Ruby_ : because it's the one I'm most familiar with; it's dynamic, 
flexible, and I "get the work done" fast
  * _Haskell_ : because of its power, elegance and logic - although I'm by no 
means good at it
  * _Lisp_ : for its inner logic, S-expressions, etc
  * _Rebol_ : for the same reasons as above, only more elegant
  * _Arturo_ : (shameless plug ahead) because of all the above, and because it's 
my own brainchild
  * _Nim_ : for its clean code and great performance
  * _Pascal_ : for sentimental reasons, since it's the one that got me started 
on this wonderful journey, when I was still a kid




Re: Any way to see the generated assembly?

2019-12-18 Thread drkameleon
Great, thanks! :)


Any way to see the generated assembly?

2019-12-18 Thread drkameleon
I'm on a Mac and using clang.

I know where the generated C files are, but when trying to compile them, I see 
all sorts of missing header files, etc. I can sure make it work, but is there 
any already-available way to see just the generated assembly without messing 
around?
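
For reference, the header the nimcache C files complain about is nimbase.h, 
which lives in the Nim installation's lib directory. So one way (a sketch - the 
include path and module file name below are assumptions that depend on your 
setup) is to generate the C only and then ask clang for the assembly directly:


# generate C code only (no compilation/linking)
nim c -c -d:danger --nimcache:_cache src/main.nim

# compile one generated module to assembly, pointing clang at nimbase.h
clang -S -O2 -I"$HOME/.choosenim/toolchains/nim-1.0.4/lib" _cache/@mmain.nim.c -o main.s


Run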


Re: Arrays and Sequences in nim

2019-12-18 Thread drkameleon
> If array index is a constant, then offset is constant too. If array index is 
> a variable, than offset is product of element_size and offset. So there is a 
> runtime mul in later case.

This is a very interesting point.

Given that what I'm currently doing is building a stack-machine bytecode 
interpreter (to replace my AST-walking interpreter) - and I'm already seeing 
some serious performance upgrade - I'm trying to use every trick available, so 
these subtle details do matter.

Currently, my simple "stack" is pretty much like that:


#[##
    Constants
==]#

const
    MAX_STACK_SIZE = 10_000_000

#[##
    Types
==]#

type
    Stack[T]    = array[MAX_STACK_SIZE, T]

    Value       = uint64

#[##
    Global variables
==]#

var
    MainStack*  : Stack[Value]   # my main stack
    MSP*        : int            # pointer to the last element

#[##
    Implementation
==]#

template push*(v: Value) = inc(MSP); MainStack[MSP] = v
template pop*(): Value   = dec(MSP); MainStack[MSP+1]

template popN*(x: int)   = dec(MSP, x)

template top*(x: int = 0): Value = MainStack[MSP-x]

Run

so... normally an **ADD** instruction, in my {.computedGoto.} interpreter loop, 
would be something like that:


case OpCode
# other cases
of ADD_OP: push(pop()+pop()); inc(ip)
# inc(MSP); MainStack[MSP] = ((dec(MSP); MainStack[MSP+1]) + (dec(MSP); MainStack[MSP+1]))
# ...


Run

which I'm optimizing further (I think... lol) by doing it like: 


case OpCode
# other cases
of ADD_OP: top(1) = top(0)+top(1); dec(MSP); inc(ip)
# MainStack[MSP-1] = MainStack[MSP]+MainStack[MSP-1]; dec(MSP)
# ...


Run

So... lots of different things going on...

Any ideas to make it better (and more performant) are more than welcome! :)


Re: Arrays and Sequences in nim

2019-12-18 Thread drkameleon
Thanks a lot for the explanation!

This is pretty much what I expected - but no harm getting a second opinion to 
make sure you got things right, nope? :)

Basically, I have experimented with pretty much everything A LOT **and** 
benchmarked it **and** checked the produced C code, but wanted to make sure... 
And from what I've observed and from what you say, we're definitely on the same 
page.

My latest experiments point to the same conclusion as well:


var arr: array[10,byte]

echo cast[uint64](addr arr)
echo cast[uint64](addr arr[0])
echo cast[uint64](addr arr[1])
echo cast[uint64](addr arr[2])


Run

The first two - quite obviously - point at the beginning of the array. And the 
next ones point at the consecutive "cells" of memory, one _byte_ apart...

In any case, again: thanks ;-)


Arrays and Sequences in nim

2019-12-17 Thread drkameleon
Ok, this is a rather newbie question, but despite all my experiments I'm still 
confused.

Let's say we want an array of ints.

In Nim, I would do it either like this:


var myArray: array[200, int]


Run

or like this:


var myArray: seq[int] = newSeq[int](200)


Run

But what are the _real_ differences between the two options? I mean, obviously, 
the first one declares a fixed-size array, so if our goal is to have an array 
that grows and shrinks during runtime, I guess the only option would be to use a 
seq. But what if we actually _do_ want to set an upper limit on our array?

Long story short: in the stack-based bytecode VM I'm currently experimenting on, 
I find myself using more of the first type (that is, "simple" fixed-size 
arrays). What are the drawbacks (memory/safety/performance-wise)? What should I 
pay attention to?
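
A minimal sketch contrasting the two (nothing project-specific assumed):


var fixed: array[200, int]          # size fixed at compile time; lives inline
                                    # (stack or enclosing object), no indirection
var growable = newSeq[int](200)     # length/capacity live on the heap and can
                                    # change at runtime

fixed[0] = 1
growable.add(42)                    # only the seq can grow
echo fixed.len, " ", growable.len   # 200 201


Run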


Re: How to store an int in a register?

2019-12-17 Thread drkameleon
Thanks a lot - I had spotted it, but not sure it was the right one for this! :)


Re: How to store an int in a register?

2019-12-17 Thread drkameleon
Basically, I want to use it in one of the very very few cases where that 
keyword would - supposedly - make some difference: to store the top of my VM's 
stack.


How to store an int in a register?

2019-12-17 Thread drkameleon
Is there any way I can make Nim store a specific variable (a uint64, to be 
precise) in a specific register, for super-fast access?

Something along these lines:


register int *foo asm ("r12");


Run
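
One hedged sketch is the codegenDecl pragma, which controls how the variable is 
declared in the generated C; whether the backend C compiler actually honours a 
global register variable (and which register is safe to reserve) is entirely up 
to it, so treat this as an experiment rather than a guaranteed win:


# assumption: r12 is free to reserve on this target, as in the C snippet above
var vmTop {.codegenDecl: """register $# $# asm ("r12")""".}: uint64

vmTop = 42
echo vmTop


Run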


Any way to force a specific identifier name in C code?

2019-12-17 Thread drkameleon
I know Nim creates unique identifiers for every identifier used, in the form of 
"MyIdentifier__BreJK2peWZ1PII7tA5raZg".

Is there any way I can force a specific "unique" identifier and not have it 
auto-generated?
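
The exportc pragma does exactly that - it fixes the C-level name instead of the 
mangled one (a short sketch):


var MyIdentifier {.exportc: "MyIdentifier".}: int

proc myProc() {.exportc: "my_proc".} =
    echo "the generated C name is my_proc"


Run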


Force types from specific imported modules

2019-12-12 Thread drkameleon
I have declared a type Value in one of my modules.

It happens that in another module I'm importing, there is also another Value 
type defined, so I'm given the option to fully qualify the types, like x.Value 
vs y.Value.

Is there any way to force a specific interpretation (so that Value means 
x.Value, for example)?
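
Two common options, as a sketch (x and y stand for the two modules, as in the 
post):


import x                      # x.Value comes into scope as Value
import y except Value         # keep y's Value out of the way entirely

# or, keep both imports and pin the name explicitly with an alias:
type Value = x.Value


Run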


Most efficient way to implement a stack using Nim?

2019-12-05 Thread drkameleon
OK, so I've been playing with and benchmarking ideas for how to implement a 
(uint64) stack in Nim (seq-based, array-based, timing pushes and pops).

How would you approach this? (My main concern is speed.)


Best way to store/search/etc and an int-int data structure

2019-12-01 Thread drkameleon
I have a list of int-int pairs, which I'm storing as a _seq_ of _(int, int)_ 
tuples (the first int being a hash value, so it's not like 0-1-2-3...).

I have to test this structure for membership (whether an index - the first 
_int_ - exists), look up a value (get the second _int_ for a specific first 
_int_), update a value, etc. Pretty much like a dictionary / hash table.

Right now, this is how I'm doing it:


proc getValueForKey(ctx: Context, hs: int): Value {.inline.} =
var j = 0
while j
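
As a sketch of the obvious alternative being weighed here, the stdlib Table 
already gives hashed membership tests, lookups and updates (the key below is a 
placeholder for the post's hash value):


import tables

var ctx = initTable[int, int]()
let hs = 123456789        # a pre-computed hash used as the key

ctx[hs] = 42              # insert / update
if ctx.hasKey(hs):        # membership test
    echo ctx[hs]          # lookup


Run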

Different code produced when using a template?

2019-12-01 Thread drkameleon
OK, so, I'm a bit confused.

I have this template:


template initTopContextWith(pairs:seq[(int,Value)]) =
shallowCopy(Stack[^1],pairs)


Run

which I'm calling/using from another _proc_.

When I use the template, the produced code is:


//shallowCopy(Stack[^1],pairs)
//shallowCopy(Stack[^1],pairs)
T22_ = (tySequence__27px9cXpFaJ4yUn1cSZAKaw**)0;
T22_ = X5BX5D___q5FAoaedQCwl53m2ytrbgQsystem(Stack__WH1ut3OkOocLDQJpFMZt3w->data, (Stack__WH1ut3OkOocLDQJpFMZt3w ? Stack__WH1ut3OkOocLDQJpFMZt3w->Sup.len : 0), ((NI) 1));
//initTopContextWith(args)
unsureAsgnRef((void**) (&(*T22_)), args);

Run

But when I directly include this one line of code in the calling proc, this is 
what I'm getting:


//shallowCopy(Stack[^1],args)

X5BX5Deq___sGWc2llByIT4Ipi0sFgxegsystem(Stack__WH1ut3OkOocLDQJpFMZt3w->data, (Stack__WH1ut3OkOocLDQJpFMZt3w ? Stack__WH1ut3OkOocLDQJpFMZt3w->Sup.len : 0), ((NI) 1), args);



Run

What is even more weird is that - in the second case - whatever this change in 
the code means, it slows down my program by an average of 20%.

Could you please shed some light on the matter?


Re: Fastest way to check for int32 overflows

2019-11-28 Thread drkameleon
Answering my own question:


proc int32AddOverflow(a, b: int32, c: var int32): bool {.
    importc: "__builtin_sadd_overflow", nodecl, nosideeffect.}

var rez: int32
if int32AddOverflow(int32(2147483647), int32(1), rez):
    echo "overflows!"
else:
    echo rez

if int32AddOverflow(int32(2147483646), int32(1), rez):
    echo "overflows!"
else:
    echo "did not overflow: ", rez


Run

**Output:**


overflows!
did not overflow: 2147483647


Run


Re: setupForeignThreadGc() equivalent for Boehm GC?

2019-11-28 Thread drkameleon
The VM code above is just an experiment.

For now, the interpreter is a plain tree-walking interpreter.

It sure has its issues and I'm trying to find my way around, but things are 
getting better and better and the results so far are more than satisfying :)


Fastest way to check for int32 overflows

2019-11-28 Thread drkameleon
Basically, I want to predict when the result of an operation between two int32s 
will overflow (addition, multiplication, exponentiation).

Up to now, I was doing it by catching overflow exceptions, but exceptions are 
quite expensive.

Now, I'm trying to check it like this (for addition):


const MAX_INT = 0x7FFFFFFF

if a > MAX_INT - 1:
    # will overflow
else:
    # will not overflow, proceed with the addition


Run

I've also seen the __builtin_add_overflow commands (available for both Clang 
and GCC) and I'm wondering how I could use them.

Any recommendations?


Help with templates and injected symbols

2019-11-27 Thread drkameleon
I have the following template: 


template inPlace*(hs: int, code: untyped) =
    block:
        var i = len(Stack) - 1
        var j: int
        while i > -1:
            j = 0
            while j

Re: setupForeignThreadGc() equivalent for Boehm GC?

2019-11-27 Thread drkameleon
For the sake of comparison, here are the results (the tests are 
micro-benchmarks for isolated parts I'm trying to check, but very intensive 
ones for that matter).

**--gc:none**

[https://gist.github.com/drkameleon/099c1d373367877fc3bc8c48067ae09f](https://gist.github.com/drkameleon/099c1d373367877fc3bc8c48067ae09f)

**--gc:refc**

[https://gist.github.com/drkameleon/caa1a3d5087dcabfff8a8158d18e9011](https://gist.github.com/drkameleon/caa1a3d5087dcabfff8a8158d18e9011)


Re: setupForeignThreadGc() equivalent for Boehm GC?

2019-11-27 Thread drkameleon
Well, speaking of... I guess it's my lucky day. Also, the motto that says that 
"only when you ask a question, do you fully understand the issue" seems rather 
confirmed.

So... I'm very, very close to having it all working as before (= before 
attempting to turn the GC on). There were missing bits here and there (quite 
difficult to list them all here, since they're too project-specific), but I 
think I will manage to fine-tune it.

For now, I'm not extremely happy with the timings (I ran a set of benchmarks to 
see how it compares with other interpreted-language interpreters, and 
incorporating the GC slowed it down a bit in some of the cases) - but regarding 
memory management, I'm more than happy!

Thanks ;-)


Re: "incRef: interiorPtrTraceback" what does it mean?

2019-11-27 Thread drkameleon
Answering myself (after, hopefully, having fixed the issue):

I was trying to access and change a ref object which had obviously not been 
GC_ref-ed before.


Re: setupForeignThreadGc() equivalent for Boehm GC?

2019-11-27 Thread drkameleon
Unfortunately, I fixed that too but without much success.


"incRef: interiorPtrTraceback" what does it mean?

2019-11-27 Thread drkameleon
**Compilation options:**


 -d:debug
-d:useSysAssert
-d:useGcAssert
--debugger:native
--stackTrace:on
--gc:refc


Run

**This is what I'm getting:**


[GCASSERT] incRef: interiorPtrTraceback (most recent call last)

/Users/drkameleon/Documents/Code/OpenSource/arturo-lang/arturo/src/main.nim(81) 
main

/Users/drkameleon/Documents/Code/OpenSource/arturo-lang/arturo/src/compiler.nim(511)
 runScript

/Users/drkameleon/Documents/Code/OpenSource/arturo-lang/arturo/src/core/expression_list.nim(38)
 addExpression
/usr/local/Cellar/nim/1.0.0/nim/lib/system/gc.nim(238) asgnRef
/usr/local/Cellar/nim/1.0.0/nim/lib/system/gc.nim(113) incRef


Run


Re: setupForeignThreadGc() equivalent for Boehm GC?

2019-11-27 Thread drkameleon
Well, I guess it _is_ only the tip of the iceberg. However, I have even set up 
super-minimal test environments and I'm still struggling with the GC. Here's an 
example:


import algorithm, system/ansi_c, base64, bitops, hashes, httpClient, json, macros, math, md5, oids, os
import osproc, parsecsv, parseutils, random, re, segfaults, sequtils, sets, std/editdistance
import std/sha1, streams, strformat, strutils, sugar, unicode, tables, terminal
import times, uri

type
    #[
        C interface
    ]#

    yy_buffer_state {.importc.} = ref object
        yy_input_file       : File
        yy_ch_buf           : cstring
        yy_buf_pos          : cstring
        yy_buf_size         : clong
        yy_n_chars          : cint
        yy_is_our_buffer    : cint
        yy_is_interactive   : cint
        yy_at_bol           : cint
        yy_fill_buffer      : cint
        yy_buffer_status    : cint

# Parser C Interface

proc yyparse(): cint {.importc.}

proc yy_scan_string(str: cstring): yy_buffer_state {.importc.}
proc yy_switch_to_buffer(buff: yy_buffer_state) {.importc.}
proc yy_delete_buffer(buff: yy_buffer_state) {.importc.}

var yyfilename {.importc.}: cstring
var yyin {.importc.}: File
var yylineno {.importc.}: cint

type
    ParamKind = enum
        RegParam, NumParam
    Param = object
        case kind: ParamKind:
        of RegParam: reg: int
        of NumParam: num: int

    Statement = ref object
        cmd: int
        params: seq[Param]

    StatementList = ref object
        list: seq[Statement]

var
    MainProgram {.exportc.} : StatementList
    A,B,C,D,E,F,G,H: int
    Regs : seq[int] = newSeq[int](8)
    Stack : seq[int]

template benchmark*(benchmarkName: string, code: untyped) =
    block:
        let t0 = epochTime()
        code
        let elapsed = epochTime() - t0
        let elapsedStr = elapsed.formatFloat(format = ffDecimal, precision = 3)
        echo "CPU Time [", benchmarkName, "] ", elapsedStr, "s"

proc newStm(cmd: int): Statement {.exportc.} =
    Statement(cmd: cmd, params: @[])

proc newStmReg(cmd: int, r0: int): Statement {.exportc.} =
    Statement(cmd: cmd, params: @[Param(kind:RegParam, reg: r0)])

proc newStmNum(cmd: int, n0: int): Statement {.exportc.} =
    Statement(cmd: cmd, params: @[Param(kind:NumParam, num: n0)])

proc newStmRegReg(cmd: int, r0: int, r1: int): Statement {.exportc.} =
    Statement(cmd: cmd, params: @[Param(kind:RegParam, reg: r0),Param(kind:RegParam, reg: r1)])

proc newStmRegNum(cmd: int, r0: int, n1: int): Statement {.exportc.} =
    Statement(cmd: cmd, params: @[Param(kind:RegParam, reg: r0),Param(kind:NumParam, num: n1)])

proc newStmRegRegNum(cmd: int, r0: int, r1: int, n2: int): Statement {.exportc.} =
    Statement(cmd: cmd, params: @[Param(kind:RegParam, reg: r0),Param(kind:RegParam, reg: r1),Param(kind:NumParam, num: n2)])

proc newStmRegNumNum(cmd: int, r0: int, n1: int, n2: int): Statement {.exportc.} =
    Statement(cmd: cmd, params: @[Param(kind:RegParam, reg: r0),Param(kind:NumParam, num: n1),Param(kind:NumParam, num: n2)])

proc newStmListWithStm(stm: Statement): StatementList {.exportc.} =
    StatementList(list: @[stm])

proc addStmToList(lst: StatementList, stm: Statement) {.exportc.} =
    GC_ref(lst)
    lst.list.add(stm)

proc setStatements(stms: StatementList) {.exportc.} =
    MainProgram = stms

template R0(): untyped {.dirty.} =
    Regs[stm.params[0].reg]

template R1(): untyped {.dirty.} =
    Regs[stm.params[1].reg]

template R2(): untyped {.dirty.} =
    Regs[stm.params[2].reg]

template N0(): untyped {.dirty.} =
    stm.params[0].num

template N1(): untyped {.dirty.} =
    stm.params[1].num

template N2(): untyped {.dirty.} =
    stm.params[2].num

template IS_REG(x: int): bool {.dirty.} =
    stm.params[x].kind == RegParam

template IS_NUM(x: int): bool {.dirty.} =
    stm.params[x].kind == NumParam

when isMainModule:
    let scriptPath = commandLineParams()[0]

    yylineno = 0
    yyfilename = scriptPath

    #Reg = initTable[string,int]()

    discard open(yyin, scriptPath)
    MainProgram = StatementList(list: @[])
    #setupForeignThreadGc()
    benchmark "parse:":
        discard yyparse()
    #tearDownForeignThreadGc()

    benchmark "execute:":
        var line: int = 0

        while lineR1: line = N2-1; continue
   

Re: setupForeignThreadGc() equivalent for Boehm GC?

2019-11-26 Thread drkameleon
It's an interpreter: 
[https://github.com/arturo-lang/arturo](https://github.com/arturo-lang/arturo)

Basically, so far I had no GC whatsoever, and I'm trying to find out how to make 
the whole project work WITH a GC (and then how to tune it).


Re: setupForeignThreadGc() equivalent for Boehm GC?

2019-11-26 Thread drkameleon
So... any idea?


GC and fixed memory addresses

2019-11-26 Thread drkameleon
Let's say I have an object at some specific memory address.

Am I safe in assuming the GC won't move it around? And, if not, how could I make 
sure it won't?
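
For what it's worth, the default refc GC does not move objects; the usual 
concern is rather keeping the object alive while only foreign code holds its 
address. A hedged sketch of that pattern:


type Node = ref object
    data: int

var n = Node(data: 1)
GC_ref(n)                  # prevent collection while only C holds the address
let p = cast[pointer](n)   # address stays valid (refc is a non-moving GC)
# ... hand p over to the C side ...
GC_unref(n)                # release once the foreign side is done with it


Run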


Re: Problem with .exportc-marked variable containing proc reference

2019-11-26 Thread drkameleon
Still the same even after removing .inline.


Error: invalid type: 'SystemFunctionCall[compiler.SystemFunction, compiler.ExpressionList, compiler.Value]' in this context: 'StatementList' for var


Run


type
    StatementList {.shallow.} = ref object
        list: seq[Statement]


Run

and 


var
    MainProgram {.exportc.} : StatementList


Run


Problem with .exportc-marked variable containing proc reference

2019-11-26 Thread drkameleon
I have the following type:


SystemFunctionCall[F,X,V]   = proc(f: F, xl: X): V {.inline.}


Run

And the following type containing it:


Statement = ref object
    pos: int
    case kind: StatementKind:
    of commandStatement:
        code        : int
        #call       : SystemFunctionCall[SystemFunction,ExpressionList,Value]
        arguments   : ExpressionList
    of callStatement:
        id          : int
        expressions : ExpressionList
    of assignmentStatement:
        symbol      : int
        rValue      : Statement
    of expressionStatement:
        expression  : Expression


Run

I'm also exporting a variable of Statement type.

However, the compiler seems to have a problem with my SystemFunctionCall -type 
field.

Any workarounds? 


genSym not generating unique symbols?

2019-11-26 Thread drkameleon
I'm trying to fix 
[https://github.com/andreaferretti/memo](https://github.com/andreaferretti/memo)
 to work for me (I have already posted another thread: 
[https://forum.nim-lang.org/t/5581#34695)](https://forum.nim-lang.org/t/5581#34695\)),
but pretty much everything seems broken.

Basically, the pragma generates an "impl" identifier which should be unique:


org.name = genSym(nskProc, "impl")


Run

the thing is... it's not!

(when I go through the generated code, there are just plain "impl" instances, 
with no unique suffix appended)


Re: Seq's and string with --gc:none

2019-11-25 Thread drkameleon
I've just tried it, but the compiler emits an Error: system module needs: 
nimGCvisitSeq error.


Seq's and string with --gc:none

2019-11-25 Thread drkameleon
Is there any way I can use sequences and strings _without_ a GC?

(Basically, that's exactly what I'm doing - Nim doesn't complain at all - but I 
guess I'm leaking too much memory.)

How do I manually allocate/deallocate a seq/string - or whichever object, for 
that matter?


Re: Async loops & multi-processing

2019-11-24 Thread drkameleon
First of all, thanks for the very detailed answer! I really appreciate it!

I have been using pretty much the same benchmark code, so times should be 
rather accurate.

I tried your examples and it seems to be working (btw, on macOS, the -fopenmp 
flag seems redundant for clang, as it supposedly has built-in support for OpenMP 
- unless I'm mistaken).

The times I'm getting are pretty much these (for 1 billion repetitions):


Wall time for normal loop: 16.175 s
Wall time for parallelized OpenMP: 16.178 s
Wall time for parallelized Nim spawn: 0.0 s


Run

Note: for fewer repetitions, the OpenMP-based benchmark seems to be around 
20-30% faster.

Now, I have a question regarding the last solution: is there any way I can sync 
all the spawned tasks? (I mean... know when all of them have finished)
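
With Nim's threadpool, sync() does exactly that - it blocks until every spawned 
task has finished. A minimal sketch (needs --threads:on):


import threadpool

proc work(i: int) =
    echo "item ", i

for i in 0 ..< 10:
    spawn work(i)

sync()    # wait for all spawned tasks to finish
echo "all done"


Run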


Problem with forward declarations and pragmas

2019-11-22 Thread drkameleon
As if my passion for forward proc declarations wasn't enough, now I have an 
issue with pragmas + forward declarations.

Here is the story:

  * I'm using the Memo module 
([https://github.com/andreaferretti/memo](https://github.com/andreaferretti/memo))
 which works by assigning a memoized pragma next to the proc declaration
  * The module per-se works absolutely fine
  * The function in question (one of those that I'm trying to memoize) is also 
marked with the .exportc. pragma
  * Also, the function in question has a forward declaration (again marked with 
exportc, obviously)
  * Now, if I put the memoized pragma on my proc definition, the compiler 
complains that there are 2 different definitions of the same proc
  * Naturally, I say, let's add the memoized pragma to my forward declaration as 
well (it worked with exportc, why wouldn't it work with this one too?)
  * Surprise! Now the memo module complains about finding 2 different 
definitions of the same function



Any ideas how this can be solved?


Async loops & multi-processing

2019-11-22 Thread drkameleon
Let's say we have a loop performing something over a sequence of items (with 
each operation independent of the others); is there any way to have two or more 
of them processed at the same time (using different threads or cores)?

Is there any related example?


How to print a float's binary digits?

2019-11-21 Thread drkameleon
Is there some way to do that without fmt? Is there some different library 
method?
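
One stdlib-only sketch: reinterpret the float's bits as an integer and print 
those in base 2 via strutils:


import strutils

let f = 3.14'f64
echo cast[int64](f).toBin(64)   # the 64 bits of the IEEE-754 representation


Run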


Re: Empty sequence of specific type given problems when compiling with "cpp"

2019-11-21 Thread drkameleon
Spot on! (With one slight difference: given that Context = seq[(int,int)], it 
has to be:


DICT(newSeq[(int,int)]())


Run

Thanks! ;-)


Advantages of "from... X... import Y" over "import Y"?

2019-11-21 Thread drkameleon
Basically, this is the question:

Is there any reason I should use "from X import Y", other than avoiding 
namespace collisions?


Empty sequence of specific type given problems when compiling with "cpp"

2019-11-21 Thread drkameleon
I have this line of code:


var ret = DICT(cast[Context](@[]))


Run

DICT takes a Context as a param, and a Context is nothing but a seq[(int,int)].

Obviously, an empty sequence literal can be of any type, so I cast it to Context.

Now, the thing is: when I compile it with "c", everything goes smoothly; when I 
compile it with "cpp" (clang++ on macOS), I get the following error:


error: assigning to 'TGenericSeq *' from incompatible type 
'tySequence__77YVzYb2AOu2vP0iI0b8Dw *'
T11_ = (tySequence__77YVzYb2AOu2vP0iI0b8Dw*)0;
   ^~


Run

Any workarounds?


Performance: 2 indirections or 1?

2019-11-21 Thread drkameleon
I have been thinking of this construct, basically a _reference_ to a sequence.

Is this:


RefSeq = ref object
    list: seq[(int,Value)]


Run

the same as this?


RefSeq = ref seq[(int,Value)]


Run

I mean... performance-wise? 


Re: Get name of proc at compile time

2019-11-20 Thread drkameleon
Perfect! Thanks a lot!


Re: Get name of proc at compile time

2019-11-20 Thread drkameleon
Hmm not sure how it works, or if it is what I need.

Let me elaborate a bit.

Let's say we have this:


proc someProc(x: int) =
    let z = __proc___ ## <--- this should return the enclosing proc's name, hence 'someProc'


Run


Get name of proc at compile time

2019-11-20 Thread drkameleon
I want to get a proc's name, from inside a proc, at compile time, so as to use 
it as a string.

Is it possible?


Get nimble file directory from within nimble file hook

2019-11-19 Thread drkameleon
I've written an after-install hook.

However, I noticed that the current directory has changed (to the packages 
directory) - and is not the directory where the .nimble file is.

How do I get the original directory path?


Re: Is there any way to look something up in a Table using a pre-computed Hash value?

2019-11-19 Thread drkameleon
Thanks a lot! This sounds like a great workaround!


Reversing string Hash value

2019-11-19 Thread drkameleon
I know that if I have a string, I can get its hash value by: str.hash

Is there any way to get the string value from a hash? (like: strHash.str)


Is there any way to look something up in a Table using a pre-computed Hash value?

2019-11-18 Thread drkameleon
OK, so basically let's say we have a Table and we're either trying to retrieve 
a value using a specific key, or trying to see if our Table contains a specific 
key.

The normal way I guess would be with a call to hasKey.

Now, what if we're - possibly - going to perform the same lookup many times? 
Wouldn't it be more practical, instead of doing Table.hasKey(someKey) and having 
the hash value of someKey calculated over and over again, to be able to do a 
lookup by Hash value (provided I've already calculated it and stored it 
somewhere)?

Is it even possible? (I've been looking in the sources, and specifically into 
rawGetImpl() from lib/pure/collections/hashcommon.nim, but I'm still not sure 
how it would be possible to circumvent this.)
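
The usual workaround (a sketch; this is not an official Table API for hash-based 
lookup) is to make the key type carry its own pre-computed hash, so hashing the 
string happens exactly once:


import tables, hashes

type CachedKey = object
    s: string
    h: Hash

proc key(s: string): CachedKey = CachedKey(s: s, h: hash(s))
proc hash(k: CachedKey): Hash = k.h
proc `==`(a, b: CachedKey): bool = a.s == b.s

var t = initTable[CachedKey, int]()
let k = key("someKey")    # the string is hashed only here
t[k] = 1
echo t.hasKey(k)          # reuses k.h; the string is not re-hashed


Run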


Re: Force mutable Seq returns

2019-11-18 Thread drkameleon
What would you think of using... ?:


type
    SeqRef[T] = ref seq[T]

proc newSeq[T](s: seq[T]): SeqRef[T] =
    new(result)
    result[] = s


Run


Re: Force mutable Seq returns

2019-11-18 Thread drkameleon
I was thinking of packaging the seq in a ref object and then moving those ref 
objects around... but wouldn't that be overkill?


Re: Force mutable Seq returns

2019-11-18 Thread drkameleon
But this makes a copy of the sequence, doesn't it?

Is there any way to do the same thing using the reference? (After all, it's a 
pointer, if I'm not mistaken... all I want to do is change the value/sequence to 
which that pointer is pointing.)


Force mutable Seq returns

2019-11-18 Thread drkameleon
I have a template like this:


template ARR(v: Val): seq[Val] =
    cast[seq[Val]](bitand(v, bitnot(UNMASK)))


Run

however, I want to be able to change the result like :


a = ARR(x)
a[0] = 


Run

How can I return a var seq[Val] (or sth like that) ?


Re: Binary resulting much larger with --opt:size?

2019-11-18 Thread drkameleon
I don't think they are interfering. The -O4 option is only set for speed and 
I'm getting the exact same result (900+KB vs 600+KB) even after completely 
removing it.


Re: NaN tagging in Nim?

2019-11-18 Thread drkameleon
Hmm... I see your point. But then how would I be able to store an object's 
address? (Perhaps the solution would be to use no GC whatsoever, which is pretty 
much what I'm doing?)

Btw... here is an experimental implementation of NaN tagging I've just written 
(roughly based on this article: 
[https://nikic.github.io/2012/02/02/Pointer-magic-for-efficient-dynamic-value-representations.html](https://nikic.github.io/2012/02/02/Pointer-magic-for-efficient-dynamic-value-representations.html)). 
I believe it's working, though I haven't had time to thoroughly test it.

**Code:**


import bitops

type
    ObjType = enum
        strObj, arrObj

    Obj = ref object
        case kind: ObjType:
        of strObj: s: string
        of arrObj: a: seq[Obj]

    Val {.final,union,pure.} = object
        asDouble: float64
        asBits: int64

const MaxDouble = cast[int64](0xfff8000000000000'u64)
const Int32Tag  = cast[int64](0xfff9000000000000'u64)
const PtrTag    = cast[int64](0xfffa000000000000'u64)

proc newVal(num: int64): Val {.inline.} =
    Val(asBits: bitor(num, Int32Tag))

proc newVal(num: float64): Val {.inline.} =
    Val(asDouble: num)

proc newVal(obj: Obj): Val {.inline.} =
    Val(asBits: bitor(cast[int64](obj), PtrTag))

proc isDouble(v: Val): bool {.inline.} =
    v.asBits < MaxDouble

proc isInt32(v: Val): bool {.inline.} =
    bitand(v.asBits, Int32Tag) == Int32Tag

proc isObj(v: Val): bool {.inline.} =
    bitand(v.asBits, PtrTag) == PtrTag

proc getDouble(v: Val): float64 {.inline.} =
    v.asDouble

proc getInt32(v: Val): int32 {.inline.} =
    cast[int32](bitand(v.asBits, bitnot(Int32Tag)))

proc getObj(v: Val): Obj {.inline.} =
    result = cast[Obj](bitand(v.asBits, bitNot(PtrTag)))

let a = newVal(32)
echo a.getInt32()

let b = newVal(34.53)
echo b.getDouble()

let c = newVal(Obj(kind:strObj, s:"hello"))
echo c.getObj().s


Run

**Output:**


32
34.53
hello


Run


Re: NaN tagging in Nim?

2019-11-18 Thread drkameleon
Do you see anything wrong (particularly memory-safety-wise) with this code?


let v = Value(kind: stringValue, s: "hello")
let z = cast[int](v)

echo repr(cast[Value](z))
echo cast[Value](z).stringify()


Run


NaN tagging in Nim?

2019-11-17 Thread drkameleon
Are there any existing implementations of NaN tagging in Nim?

(I'm about to do it myself - or at least experiment with it to see if there are 
any performance benefits - but having sth for reference wouldn't be bad at all 
:))


Binary resulting much larger with --opt:size?

2019-11-17 Thread drkameleon
I've just noticed sth very weird.

I have 2 different tasks in my _nimble_ file. One builds the project with 
--opt:size and one with --opt:speed. The thing is, the resulting binary with the 
first option is over 900KB, while with --opt:speed it gets down to 
600-something(!). How is that possible?

Nim version: 1.0.3 [MacOSX: amd64] - Compiled at 2019-11-09 (git hash: 
7ea60f78b5bd90bd34c5b15e1a80d23fd41c36a8)

command options for 'mini' build:


nim c --gcc.options.speed="-O4 -Ofast -flto -march=native 
-fno-strict-aliasing -ffast-math -ldl" --gcc.options.linker="-flto -ldl" 
--clang.options.speed="-O4 -Ofast -flto -march=native -fno-strict-aliasing 
-ffast-math -ldl" --clang.options.linker="-flto -ldl" -d:release -d:mini 
-d:danger --passL:parser.a --threads:on --hints:off --opt:speed --gc:regions 
--path:src -o:arturo -f --nimcache:_cache --embedsrc --checks:off 
--overflowChecks:on src/main.nim


Run

command options for 'full' (speed) build:


nim c --gcc.options.speed="-O4 -Ofast -flto -march=native 
-fno-strict-aliasing -ffast-math -ldl" --gcc.options.linker="-flto -ldl" 
--clang.options.speed="-O4 -Ofast -flto -march=native -fno-strict-aliasing 
-ffast-math -ldl" --clang.options.linker="-flto -ldl" -d:release -d:danger 
--passL:parser.a --threads:on --hints:off --opt:speed --gc:regions --path:src 
-o:arturo -f --nimcache:_cache --embedsrc --checks:off --overflowChecks:on 
src/main.nim


Run

Any ideas?


Re: Differences between simple assignment, shallowCopy and deepCopy

2019-11-17 Thread drkameleon
Thanks a lot. Very thorough explanation!


Differences between simple assignment, shallowCopy and deepCopy

2019-11-16 Thread drkameleon
I've been doing several experiments on the matter and read a lot, but to make 
sure I got it right, I would be grateful for your input.

So...

What is the difference between:

  * a = b
  * shallowCopy(a,b)
  * deepCopy(a,b)



when:

  * a/b are objects
  * a/b are ref objects
  * a/b are strings or sequences
  * a/b is everything else (int - for example)
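
For the seq/string case specifically, here is a tiny sketch of the observable 
difference (under the refc GC that was current at the time; shallowCopy is from 
system):


var a = @[1, 2, 3]

var b = a            # value semantics: b gets its own copy
b[0] = 99
echo a[0]            # 1  - a is unaffected

var c: seq[int]
shallowCopy(c, a)    # c now shares a's backing storage
c[0] = 99
echo a[0]            # 99 - the change is visible through a as well


Run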




Re: Newbie question, ref object

2019-11-16 Thread drkameleon
Perhaps like this?


sendHeaders(req, newHttpHeaders([("Content-Type","application/json")]))


Run


Re: --gc:regions: how does it work?

2019-11-16 Thread drkameleon
re: "why you cannot interface with Flex/Bison in the way you do and pick a 
different, real GC"

I would be grateful if you could give me some... pointers in the right 
direction. Even the integration of Flex/Bison was a hit-and-miss mission.

The weird thing is that - given that I've been running tons of different 
benchmarks - most of the tests do work with _any_ GC. But _all_ of them run only 
with --gc:regions turned on. That's what I find perplexing.

Also, for the tests that _are_ running fine regardless of the GC option being 
used, how is it possible that all the different GC options result in 
more-or-less slower execution times?


--gc:regions: how does it work?

2019-11-16 Thread drkameleon
For one reason or the other, the only GC that works (and very efficiently in 
terms of speed - aside from too much average memory consumption in some cases) 
with my current project is "regions".

The thing is I decided to go with that only because it works.

However, I'm still not sure how it's working.

I'm trying to optimize my code as much as possible and I would love to know what 
I could do to make it work even better (especially reducing memory usage would 
be ideal). Are there any specific things to keep in mind?

My code makes _heavy_ use of ref objects, sequences, strings - pretty much 
everything. However, I'm pretty sure some memory isn't freed when it should be 
(while with some of the other GCs, memory usage is usually much lower).


Re: Need advice regarding using templates

2019-11-15 Thread drkameleon
I've thought about it, but will it have the same effect?


Re: Need advice regarding using templates

2019-11-15 Thread drkameleon
> you are a PhD.

LOL. If this was going for me, the "dr" part is because I was once a gonna-be 
(medical) doctor... But thank god, this never happened ;-)


Need advice regarding using templates

2019-11-15 Thread drkameleon
I have a call to a proc like that:

let v = xl.validate(f)

My **validate** proc is:


proc validate(xl: ExpressionList, f: SystemFunction): seq[Value] =
    result = xl.list.map((x) => x.evaluate())

    if unlikely(not f.req.contains(result.map((x) => x.kind))):
        validationError(f.req, result, f.name)


Run

The thing is, I want to convert the validate proc into a template (for 
performance reasons).

How can I do it?

(I tried converting it like the following example, but then - when using v after 
the call - the variable is obviously undeclared.)

template validate(xl: ExpressionList, f: SystemFunction): untyped {.dirty.} =
    let v = xl.list.map((x) => x.evaluate())

    if unlikely(not f.req.contains(v.map((x) => x.kind))):
        validationError(f.req, v, f.name)


Run

Another idea would be to first declare the var in every calling proc, like var 
v: seq[Value], and then call the _dirty_ template. But it looks a bit messy (and 
I'm not sure if it'll have any performance impact). Am I missing something?
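
One more option, as a sketch (ExpressionList, SystemFunction, validationError 
and the => / map helpers are assumed to be in scope, as in the post): make the 
template itself evaluate to the validated seq, so the caller keeps writing let v 
= xl.validate(f):


template validate(xl: ExpressionList, f: SystemFunction): seq[Value] =
    let vals = xl.list.map((x) => x.evaluate())

    if unlikely(not f.req.contains(vals.map((x) => x.kind))):
        validationError(f.req, vals, f.name)

    vals


Run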


Re: Prime factorization of Fermat numbers using BigNum/GMP

2019-11-14 Thread drkameleon
And... without the one million ugly comments:


proc pollardG*(n: var Int, m: Int) {.inline.} =
    discard mul(n,n,n)
    discard add(n,n,1)
    discard `mod`(n,n,m)

proc pollardRho*(n: Int): Int {.noSideEffect.} =
    var x = newInt(2)
    var y = newInt(2)
    var d = newInt(1)
    var z = newInt(1)

    var count = 0
    var t = newInt(0)

    while true:
        pollardG(x,n)
        pollardG(y,n)
        pollardG(y,n)

        discard abs(t,sub(t,x,y))
        discard `mod`(t,t,n)
        discard mul(z,z,t)

        inc(count)
        if count==100:
            discard gcd(d,z,n)
            if cmp(d,1)!=0:
                break
            discard set(z,1)
            count = 0

    if cmp(d,n)==0:
        return newInt(0)
    else:
        return d

proc primeFactors*(num: Int): seq[Int] {.noSideEffect.} =
    result = @[]
    var n = num

    if n.probablyPrime(10)!=0:
        result.add(n)

    let factor1 = pollardRho(num)
    if factor1==0:
        return @[]

    if factor1.probablyPrime(10)==0:
        return @[]

    let factor2 = n div factor1
    if factor2.probablyPrime(10)==0:
        return @[]

    result.add(factor1)
    result.add(factor2)


Run


Re: Prime factorization of Fermat numbers using BigNum/GMP

2019-11-14 Thread drkameleon
You're a star! So simple, but I was missing it. I did it with the in-place 
routines and it worked beautifully!

No memory issues, and SUPER fast (relatively).

Here's my updated code in case somebody needs it:


proc pollardG*(n: var Int, m: Int) {.inline.} =
    discard mul(n,n,n)
    discard add(n,n,1)
    discard `mod`(n,n,m)

proc pollardRho*(n: Int): Int {.noSideEffect.} =
    var x = newInt(2)
    var y = newInt(2)
    var d = newInt(1)
    var z = newInt(1)

    var count = 0
    var t = newInt(0)

    while true:
        pollardG(x,n) # x = (x*x + 1) mod n
        pollardG(y,n)
        pollardG(y,n) # y = (((y*y+1) mod n)*((y*y+1) mod n) + 1) mod n

        discard abs(t,sub(t,x,y)) # t = abs(x-y)
        #var t = abs(x-y)
        discard `mod`(t,t,n) # t = t mod n
        discard mul(z,z,t) # z = z * t
        inc(count)
        if count==100:
            discard gcd(d,z,n) # d = gcd(z,n)
            if cmp(d,1)!=0: # d!=1:
                break
            discard set(z,1) # z = newInt(1)
            count = 0

    if cmp(d,n)==0:
        return newInt(0)
    else:
        return d

proc primeFactors*(num: Int): seq[Int] {.noSideEffect.} =
    result = @[]
    var n = num

    if n.probablyPrime(10)!=0:
        result.add(n)

    let factor1 = pollardRho(num)
    if factor1==0:
        return @[]

    if factor1.probablyPrime(10)==0:
        return @[]

    let factor2 = n div factor1
    if factor2.probablyPrime(10)==0:
        return @[]

    result.add(factor1)
    result.add(factor2)


Run

Thanks again!


GC_ref & GC_unref - and when to use them

2019-11-14 Thread drkameleon
I've sure read the documentation, but I'm still unsure when these two commands 
are to be used.

I'm interfacing with flex/bison, meaning bison calls Nim routines (with the 
tokens, ids, etc. it finds), I create the appropriate AST objects, and then 
control goes back to Bison.

For example, Bison calls:


$$ = argumentFromIdentifier($1);


Run

and then in Nim:


proc argumentFromIdentifier(i: cstring): Argument {.exportc.} =
    Argument(kind: identifierArgument, i: $i)


Run

The thing is, I suspect I'm doing something wrong. When compiling the project 
with anything other than


--gc:regions

Run

, there are memory-related crashes.

Are cstrings the culprit?


Problems with default GC (and practically any other GC), works fine with --gc:regions

2019-11-14 Thread drkameleon
As I've mentioned in a previous post, I've been working on an interpreter 
written in Nim 
([https://github.com/arturo-lang/arturo)](https://github.com/arturo-lang/arturo\)).

The lexer/parser part is handled by Flex/Bison, so I'm basically interfacing 
with C code (the code calls back my Nim code, creates objects, turns control 
over to Bison, and so on).

The thing is - not in every case, but in most cases - this triggers SIGSEGV: 
Illegal storage access errors.

Everything is solved, though, when setting **--gc:regions**.

I guess this has to do with memory management and the Nim-C interaction?

Any suggestions are more than welcome!


Prime factorization of Fermat numbers using BigNum/GMP

2019-11-14 Thread drkameleon
I've been trying to implement Pollard's Rho algorithm 
([https://en.wikipedia.org/wiki/Pollard%27s_rho_algorithm](https://en.wikipedia.org/wiki/Pollard%27s_rho_algorithm))
 in Nim, using the BigNum module (which in turn uses GMP).

Up to the 6th Fermat number (18446744073709551617) it works fine. And it works 
fine for F(8) as well.

However, it keeps crashing for F(7). Basically, it consumes close to 100GB of 
disk space and then the process gets killed.

I know there is an issue with F(7) factorization, given that it doesn't have 
small factors.

The question is: do you see anything wrong with my code or anything that could 
be written more efficiently?


proc pollardRho*(n: Int): Int =
    var x = newInt(2)
    var y = newInt(2)
    var d = newInt(1)
    var z = newInt(1)

    var count = 0

    while true:
        x = (x*x + 1) mod n
        y = (((y*y+1) mod n)*((y*y+1) mod n) + 1) mod n
        var t = abs(x-y)
        t = t mod n
        z = z * t

        inc(count)
        if count==100:
            d = gcd(z,n)
            if d!=1:
                break
            z = newInt(1)
            count = 0

    if cmp(d,n)==0:
        return newInt(0)
    else:
        return d

proc primeFactors*(num: Int): seq[Int] =
    result = @[]
    var n = num

    if n.probablyPrime(10)!=0:
        result.add(n)

    let factor1 = pollardRho(num)
    if factor1==0:
        return @[]

    if factor1.probablyPrime(10)==0:
        return @[]

    let factor2 = n div factor1
    if factor2.probablyPrime(10)==0:
        return @[]

    result.add(factor1)
    result.add(factor2)


Run


How to package external packages into a single standalone binary

2019-11-11 Thread drkameleon
I make use of an external dependency (bignum: 
[https://github.com/FedeOmoto/bignum](https://github.com/FedeOmoto/bignum)), 
which I've installed via nimble install bignum. And everything compiles and runs 
fine - locally.

If I deploy my app to run on a different machine, the bignum library is missing 
and I have to install it again via nimble.

Is there any way to avoid that? (That is, to incorporate bignum's sources into 
my project.)

P.S. The question is not so bignum-specific, but more general, concerning any 
library installed via nimble install ...


Re: Get contents of directory at given path

2019-11-09 Thread drkameleon
Awesome! Thanks a lot!


Get contents of directory at given path

2019-11-09 Thread drkameleon
Is there a command for that?

I've been looking in the library but still haven't spotted anything.
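
It's in the os module - walkDir iterates over a directory's entries (a short 
sketch):


import os

for entry in walkDir("."):
    echo entry.kind, " ", entry.path


Run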


Re: Efficient way to validate function arguments

2019-11-08 Thread drkameleon
I've thought about using hashes, though I haven't tried it yet. I'll let you 
know!


Re: Efficient way to validate function arguments

2019-11-08 Thread drkameleon
Here is the interpreter (and language) I'm talking about: 
[https://github.com/arturo-lang/arturo](https://github.com/arturo-lang/arturo)


Efficient way to validate function arguments

2019-11-08 Thread drkameleon
_(I've also posted the following on SO, but I guess it's a more Nim-specific 
question)_

**Before saying anything, please let me make it clear that this question is 
part of my work on an interpreter - and has nothing to do with validating a Nim 
proc 's arguments.**

Let's say we have an enum of Value types. So a value can be:


SV, // stringValue
IV, // integerValue
AV, // arrayValue
etc, etc


Run

then let's say we have a function **F** which takes one of the following 
combinations of arguments:


[
   [SV],
   [SV,IV],
   [AV]
]


Run

Now, the function is called, we calculate the values passed, and get their 
types. Let's say we get [XV,YV].

**The question is:**

What is the most **efficient** way to check if the passed values are allowed?

* * *

In more specific terms, let's say we have these constraints:


@[@[AV,FV],@[DV,FV],@[BV,FV],@[IV,FV]]


Run

meaning: the first argument can be AV and the second FV - **OR** - the first 
argument DV and the second FV - **OR** - and so on...
and this is my validation function:


proc validate(xl: ExpressionList, name: string, req: FunctionConstraints): seq[Value] {.inline.} =
    ## Validate given ExpressionList against given array of constraints

    result = xl.list.map((x) => x.evaluate()) # each x.evaluate returns a Value

    if not req.contains(result.map((x) => x.kind)):  # each Value has a kind: SV, IV, AV, etc...


Run

Of course, even the .map, .contains and all that can be expensive when called 
many times...

So, I'm looking for alternatives. All ears! :)
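
Building on the hashing idea mentioned earlier in the thread, one sketch is to 
pre-build a HashSet of the allowed kind-combinations once, so the per-call check 
becomes a single hashed membership test (the ValueKind names below just mirror 
the post):


import sets, hashes

type ValueKind = enum SV, IV, AV, FV, DV, BV

let allowed = toHashSet(@[@[AV, FV], @[DV, FV], @[BV, FV], @[IV, FV]])

proc isValid(kinds: seq[ValueKind]): bool =
    kinds in allowed

echo isValid(@[AV, FV])   # true
echo isValid(@[FV, AV])   # false (order matters, as in the constraints)


Run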


Re: A super newbie git-related question

2019-11-08 Thread drkameleon
Great information. Couldn't be more grateful! ;-)


Re: Why does nim put the Export marker on the right side?

2019-11-08 Thread drkameleon
I definitely agree. Variable visibility should be secondary. Not to mention 
that having it on the left side, just for SOME object fields for example, would 
make the whole thing ugly (at least for me):


type
    MyType = ref object
        a: int32
        *exported: int32
        not_exported: int32


Run


Re: A super newbie git-related question

2019-11-08 Thread drkameleon
Guys, awesome! Thanks a lot.

I did exactly what you suggested and it seems to have worked fine - so simple...

Now, one more question (I guess until I sit down and study how this thing 
works, I'll remain confused):

The local copy looks fine, the forked repo looks fine too.

Why does github show this message: _This branch is 2 commits ahead of 
nim-lang:devel._ ?

When I go to "Compare" to see what these changes are all it shows me is my 1 
commit (which has already been merged to _nim-lang:devel_ and the merge/push I 
just did from _nim-lang:devel_. So practically, there is nothing to be done.

Is there anything I have to fix in my setup?


A super newbie git-related question

2019-11-08 Thread drkameleon
I've been writing code (compilers, operating systems, chess engines, 
you-name-it) since 1994, but I still cannot get my head around Git... (or at 
least, when I think I've got it working, I make a complete mess).

So, given that I definitely want to contribute to the project (but without 
having to set the whole thing up from scratch every single time, because I 
manage to mess it up), I... shamelessly ask you:

I have forked the Nim project here: 
[https://github.com/drkameleon/Nim](https://github.com/drkameleon/Nim) I then 
git clone-d it. Whenever I make some change, I guess the way is: commit, push 
the commit (to my fork) and then make a pull request. (I'm still not sure I 
have the terminology right but, oh well...)

**Now, here comes the question...**

Given that in the meantime there must have been changes in the original repo (@ 
[https://github.com/nim-lang/Nim](https://github.com/nim-lang/Nim)), how do I 
fetch those changes so that my forked version is up-to-date? (I guess the way 
would be to fetch the latest changes here, locally on my Mac, and then push them 
to my fork? But I'm still not sure...)

Any answer to such a beginner question would be more than welcome! :)

P.S. I mostly use SourceTree for Mac, but Terminal commands would work too, 
ofc...
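
For reference, the usual way to keep a fork in sync (a sketch; the remote name 
"upstream" is just a convention):


# once: register the original repo as a second remote
git remote add upstream https://github.com/nim-lang/Nim.git

# whenever you want to update your fork's devel branch
git fetch upstream
git checkout devel
git merge upstream/devel    # or: git rebase upstream/devel
git push origin devel


Run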


Re: Getting memory address of object into string

2019-11-07 Thread drkameleon
I still have issues...

I have some ref objects and I'm trying to get the hex address of the objects 
using repr(addr(x)), but when trying to echo the value, the output is... 
whitespace(!).

Is it possible that it has sth to do with my configuration and/or compilation 
options? I cannot explain it otherwise...
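
A sketch of a more direct route that avoids repr altogether: cast the ref to an 
unsigned integer and format that as hex with strutils:


import strutils

type Node = ref object
    data: int

let n = Node(data: 1)
echo "0x", toHex(cast[uint](n))   # the address the ref points to


Run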


Re: Get first element in Table

2019-11-07 Thread drkameleon
Given that a table is not an ordered collection, how is it possible to get any 
first element?

As an alternative you could implement it as an array of tuples for example:


var a = @[ (1,"one"), (2,"two") ]


Run

and then you could easily retrieve a[0].


Re: Function overloading based on object.kind

2019-11-06 Thread drkameleon
That's exactly what I was looking for! Thanks!


Function overloading based on object.kind

2019-11-06 Thread drkameleon
OK, let me explain what I'm trying to do. Let's say we have a function that 
takes a (ref) object as an argument. Like this:


proc myProc(x: MyObj) =
   # do sth


Run

and an object like this:


type
  MyObj* = ref object
    case kind*: ObjKind:
    of typeA  : s: string
    of typeB  : i: int


Run

One way would be to check the object's "kind" from inside the function, like if 
x.kind==typeA:, etc.

The question is: could this be done in any different way, like normal function 
overloading? 
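
One way to get overload-like behaviour (a sketch - the thread's accepted answer 
may differ) is to keep a single case on the kind and let regular proc 
overloading take over per branch:


type
  ObjKind = enum typeA, typeB
  MyObj* = ref object
    case kind*: ObjKind:
    of typeA: s: string
    of typeB: i: int

proc handle(s: string) = echo "got a string: ", s
proc handle(i: int)    = echo "got an int: ", i

proc myProc(x: MyObj) =
  case x.kind
  of typeA: handle(x.s)   # normal overloading resolves each branch
  of typeB: handle(x.i)

myProc(MyObj(kind: typeA, s: "hello"))
myProc(MyObj(kind: typeB, i: 42))


Run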


Re: undeclared identifier: 'PGenericSeq' when using '--seqsv2:on'

2019-11-06 Thread drkameleon
Thanks a lot for the thorough explanation!


undeclared identifier: 'PGenericSeq' when using '--seqsv2:on'

2019-11-05 Thread drkameleon
I'm trying different things and experimenting and that's how I ended up trying 
to use \--seqsv2:on. However, it shows the above error.

The compilation is done on macOS 10.14.6 with Nim 1.0.99.

The complete compilation command is:


nim c --gcc.options.speed="-O4 -Ofast -flto -march=native 
-fno-strict-aliasing -ffast-math " \
--gcc.options.linker="-flto" --clang.options.speed="-O4 -Ofast -flto 
-march=native -fno-strict-aliasing -ffast-math " \
--clang.options.linker="-flto" -d:release -d:danger --passL:parser.a 
--threads:on --hints:off --opt:speed \
--nilseqs:on --seqsv2:on --gc:regions --path:src -o:arturo -f 
--nimcache:_cache --embedsrc --checks:off --overflowChecks:on \
src/main.nim


Run

But I'm getting:


/Users/drkameleon/Documents/Code/Tests/Nim/lib/system/gc_regions.nim(342, 
8) Error: undeclared identifier: 'PGenericSeq'
stack trace: (most recent call last)

/private/var/folders/0k/4qyh49ss2pqg9262tw1sn23rgn/T/nimblecache/nimscriptapi.nim(165,
 16)

/Users/drkameleon/Documents/Code/OpenSource/arturo-lang/arturo/nim_98658.nims(151,
 16) releaseTask


Run

Any ideas what is going on? Is it incompatible with --gc:regions?

/Users/drkameleon/Documents/Code/Tests/Nim/lib/system/nimscript.nim(252, 7) exec


Things to watch in the produced nimcache files - for performance

2019-11-04 Thread drkameleon
In the past few days I've been examining the nimcache files a lot, trying to 
figure out what the final code looks like and what optimizations I could make to 
my original code.

For example, I discovered things that I had not known about and fixed them, 
mainly by spotting crucial points where copy-assignments were happening (e.g. 
via genericSeqAssign or copyString calls, etc.).

All in all, the optimizations I've made have made significant difference.

Are there any other things to pay attention to?


Slightly confused with how to use templates

2019-11-04 Thread drkameleon
I have a proc which I want to be directly included in my code; that's why I 
decided templates were the way to go.

For example, in my caller proc I have:


case someVar
  of 0: result = doSth()
  ...


Run

(obviously doSth() also returns a value)

Now, to make this work I converted my doSth() in this template:


template Core_Print_Templ*(xl: ExpressionList): Value =
    let v = xl.evaluate(forceArray=true).a

    echo v[0].stringify(quoted=false)
    v[0]


Run

So, basically, result (in my caller proc) gets the value of v[0] from the 
template.

Now, the question is: how could I solve this, while keeping Core_Print_Templ as 
a template, if the function has more than one exit point - meaning: if it 
returns a result from different locations?

For example, the following code (currently a proc) - how could I turn this into 
a template?


proc Core_Not*[F,X,V](f: F, xl: X): V {.inline.} =
    let v0 = f.validateOne(xl.list[0],[BV,IV])

    case v0.kind
    of BV:
        if v0.b: result = TRUE
        else: result = FALSE
    of IV:
        result = valueFromInteger(bitnot(v0.i))
    else: discard


Run

(I know templates are about code being substituted directly into the caller's 
code, but I still do not fully understand how it works - so any ideas are more 
than welcome)
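
One sketch that handles multiple "exit points": wrap the body in a block 
expression, so each branch simply yields a value and the whole template still 
evaluates to a single Value (TRUE/FALSE, BV/IV, validateOne and valueFromInteger 
are the post's own symbols; the value in the else branch is a placeholder I 
added, since an expression template has to produce something there too):


template Core_Not_Templ*(f, xl: untyped): untyped =
    block:
        let v0 = f.validateOne(xl.list[0], [BV, IV])

        case v0.kind
        of BV:
            if v0.b: TRUE else: FALSE
        of IV:
            valueFromInteger(bitnot(v0.i))
        else:
            TRUE   # placeholder for the original `discard` branch


Run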


Re: Suggestion for optimizations (for better performance/speed) for an interpreter written in Nim

2019-11-01 Thread drkameleon
Look, I'm now 33, and I have been programming since I was 7 - and working as a 
professional programmer for the past (many) years.

Back then in 1993, I was dreaming of creating languages, operating systems, and 
all this low-level stuff. Fast forward to now, my work aside (which definitely 
involves more mundane stuff), this is **exactly** what I love doing. I guess... 
you cannot escape who you are! ;) Good luck!

