Re: Nim 1.0.6 is out!

2020-01-27 Thread Araq
For ARC you can download a nightly build here: 
[https://github.com/nim-lang/nightlies/releases/tag/2020-01-27-devel-a716543](https://github.com/nim-lang/nightlies/releases/tag/2020-01-27-devel-a716543)


Re: Game unlock gui written with gintro

2020-01-27 Thread Stefan_Salewski
Nice to hear that. For memory leaks or corruption I generally test with --gc:arc, 
-d:useMalloc and valgrind, and I usually get some leak reports, not only for gintro 
but also for pure Nim programs, see 
[https://forum.nim-lang.org/t/5839](https://forum.nim-lang.org/t/5839) for an 
example. But I still have to learn how to interpret exactly what valgrind shows us; 
when there are Nim-related leaks, the Nim devs will take care of them. I will take 
care of GTK-related leaks later, for now the most important thing is that it runs 
stably. Please let me know if there are any issues on Windows, and use the GitHub 
issue tracker for that.
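
Roughly, the test commands look like this (the program name is just an example):

nim c --gc:arc -d:useMalloc myapp.nim
valgrind --leak-check=full ./myapp
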


Re: Nim macro help

2020-01-27 Thread Hlaaftana
In `createSlide` you have two arguments of type `varargs[string]`, because you put 
a comma after `title`. Also, `body.sons` is not defined; you can iterate over the 
node directly instead.


import macros

proc createSlide(title: string, body: varargs[string]): NimNode =
  result = newEmptyNode()
  echo "createSlide", title, body

proc slideDslImpl(head, body: NimNode): NimNode =
  if body.kind == nnkIdent:
    var args: seq[string]
    for n in body:
      if n.kind == nnkStrLit:
        args.add n.strVal
    result = createSlide(body.strVal, args)

macro slide*(head, body: untyped): untyped =
  result = slideDslImpl(head, body)
  echo result.treeRepr # let us inspect the result



Documentation of the macros module: 
[https://nim-lang.org/docs/macros.html](https://nim-lang.org/docs/macros.html)
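
As a side note, a quick way to see why the `nnkIdent` check never fires for a block 
body is to dump the tree of a call site; an indented block arrives as an 
`nnkStmtList`, not an ident. A minimal sketch (the `inspect` macro here is purely 
illustrative):

import macros

macro inspect(body: untyped): untyped =
  echo body.treeRepr   # printed at compile time
  result = newEmptyNode()

inspect:
  "First slide line"
  "Second slide line"

# Compile-time output:
# StmtList
#   StrLit "First slide line"
#   StrLit "Second slide line"
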


Re: Nim 1.0.6 is out!

2020-01-27 Thread miran
> I don’t understand why the next version will be 1.2 whereas the version 1.1.1 
> hasn’t been published officially.

Do you understand why the current release is 1.0.6 (the previous was 1.0.4), 
"whereas the version 1.0.5 hasn’t been published officially"?


Re: Nim 1.0.6 is out!

2020-01-27 Thread juancarlospaco
`devel` is never published as a `stable` version, by definition. 


Re: Game unlock gui written with gintro

2020-01-27 Thread Dankrad
I've built the GUI and then the whole project with --gc:arc. On Linux everything 
works fine, no memory leaks or anything else. On Windows I cannot build because of 
a dependency (winim). I think I'll have to wait until --gc:arc is more stable and 
more widely used.


Re: Nim 1.0.6 is out!

2020-01-27 Thread AMoura
I don’t understand why the next version will be 1.2 whereas the version 1.1.1 
hasn’t been published officially.


Re: Nim 1.0.6 is out!

2020-01-27 Thread miran
> I think it's headed for the 1.1 release.

Minor fix: current devel is already v1.1.1; the next official feature release will 
be 1.2.0.


Re: Nim 1.0.6 is out!

2020-01-27 Thread leorize
I think it's headed for the 1.1 release. Patch releases never get new features.


Re: Nim 1.0.6 is out!

2020-01-27 Thread didlybom
Does this include the new --gc:arc option? I guess not?


Re: Fizzbuzz game

2020-01-27 Thread mratsim
I have one in Arraymancer that uses a neural network trained on FizzBuzz with 
numbers between 101 and 1024 and then tested on 1 .. 100.

It seems to have learned division: 
[https://github.com/mratsim/Arraymancer/blob/v0.6.0/examples/ex04_fizzbuzz_interview_cheatsheet.nim](https://github.com/mratsim/Arraymancer/blob/v0.6.0/examples/ex04_fizzbuzz_interview_cheatsheet.nim)


# A port to Arraymancer of Joel Grus hilarious FizzBuzz in Tensorflow:
# http://joelgrus.com/2016/05/23/fizz-buzz-in-tensorflow/

# Interviewer: Welcome, can I get you a coffee or anything? Do you need a break?
# ...
# Interviewer: OK, so I need you to print the numbers from 1 to 100,
#              except that if the number is divisible by 3 print "fizz",
#              if it's divisible by 5 print "buzz", and if it's divisible by 15 print "fizzbuzz".

# Let's start with standard imports
import ../src/arraymancer, math, strformat

# We want to input a number and output the correct "fizzbuzz" representation
# ideally the input is represented by a vector of real values between 0 and 1
# One way to do that is by using the binary representation of number
func binary_encode(i: int, num_digits: int): Tensor[float32] =
  result = newTensor[float32](1, num_digits)
  for d in 0 ..< num_digits:
    result[0, d] = float32(i shr d and 1)

# For the input, we distinguish 4 cases: nothing, fizz, buzz and fizzbuzz.
func fizz_buzz_encode(i: int): int =
  if   i mod 15 == 0: return 3 # fizzbuzz
  elif i mod  5 == 0: return 2 # buzz
  elif i mod  3 == 0: return 1 # fizz
  else              : return 0

# Next, let's generate training data, we don't want to train on 1..100, that's our test values
# We can't tell the neural net the truth values it must discover the logic by itself.
# so we use values between 101 and 1024 (2^10)
const NumDigits = 10

var x_train = newTensor[float32](2^NumDigits - 101, NumDigits)
var y_train = newTensor[int](2^NumDigits - 101)

for i in 101 ..< 2^NumDigits:
  x_train[i - 101, _] = binary_encode(i, NumDigits)
  y_train[i - 101] = fizz_buzz_encode(i)

# How many neurons do we need to change a light bulb, sorry do a division? let's pick ...
const NumHidden = 100

# Let's setup our neural network context, variables and model
let
  ctx = newContext Tensor[float32]
  X   = ctx.variable x_train

network ctx, FizzBuzzNet:
  layers:
    hidden: Linear(NumDigits, NumHidden)
    output: Linear(NumHidden, 4)
  forward x:
    x.hidden.relu.output

let model = ctx.init(FizzBuzzNet)
let optim = model.optimizerSGD(0.05'f32)

func fizz_buzz(i: int, prediction: int): string =
  [$i, "fizz", "buzz", "fizzbuzz"][prediction]

# Phew, finally ready to train, let's pick the batch size and number of epochs
const BatchSize = 128
const Epochs    = 2500

# And let's start training the network
for epoch in 0 ..< Epochs:
  # Here I should probably shuffle the input data.
  for start_batch in countup(0, x_train.shape[0]-1, BatchSize):

    # Pick the minibatch
    let end_batch = min(x_train.shape[0]-1, start_batch + BatchSize)
    let X_batch = X[start_batch ..< end_batch, _]
    let target = y_train[start_batch ..< end_batch]

    # Go through the model
    let clf = model.forward(X_batch)

    # Go through our cost function
    let loss = clf.sparse_softmax_cross_entropy(target)

    # Backpropagate the errors and let the optimizer fix them.
    loss.backprop()
    optim.update()

  # Let's see how we fare:
  ctx.no_grad_mode:
    echo &"\nEpoch #{epoch} done. Testing accuracy"

    let y_pred = model
      .forward(X)
      .value
      .softmax
      .argmax(axis = 1)
      .squeeze

    let score = y_pred.accuracy_score(y_train)
    echo &"Accuracy: {score:.3f}%"
    echo "\n"


# Our network is trained, let's see if it's well behaved

# Now let's use what we really want to fizzbuzz, numbers from 1 to 100
var x_buzz = newTensor[float32](100, NumDigits)
for i in 1 .. 100:
  x_buzz[i - 1, _] = binary_encode(i, NumDigits)

# Wrap them for neural net
let X_buzz = ctx.variable x_buzz

# Pass it through the network
ctx.no_grad_mode:
  let y_buzz = model
    .forward(X_buzz)
    .value
    .softmax
    .argmax(axis = 1)
    .squeeze

# Extract the answer
var answer: seq[string] = @[]

for i in 1..100:
  answer.add fizz_buzz(i, y_buzz[i - 1])

echo answer
# @["1", "fizzbuzz", "fizz", "4", "buzz", 

Re: Code golfing in Nim

2020-01-27 Thread mratsim
> You can replace indentation with ; and parentheses

Now we can just admit that Nim is a syntax skin over Lisp.
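
For anyone who has not seen the trick quoted above, a rough sketch of what 
replacing indentation with parentheses and semicolons looks like (toy example, not 
from the golf thread):

# regular, indented form
for i in 1 .. 3:
  echo i
  echo i * i

# the same loop golfed onto one line with parentheses and semicolons
for i in 1 .. 3: (echo i; echo i * i)
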


Re: runnableExamples Question

2020-01-27 Thread kaushalmodi
Yeah, that's a local hack (that works); the hack is in the associated closed PR.

The real fix is pending because I don't know yet how to add the `--backend` 
switch just to the `doc` subcommand. 


Nim 1.0.6 is out!

2020-01-27 Thread federico3
Details at 
[https://nim-lang.org/blog/2020/01/24/version-106-released.html](https://nim-lang.org/blog/2020/01/24/version-106-released.html)


Re: runnableExamples Question

2020-01-27 Thread chemist69
Ah, cool. Yes, I saw that you tried to use a `-d:doccpp` switch in your std_vector 
repo, but I did not find the issue on GitHub. Thanks a lot!

KR Axel


Re: runnableExamples Question

2020-01-27 Thread kaushalmodi
I needed this feature a few weeks back and opened this issue: 
[https://github.com/nim-lang/Nim/issues/13129](https://github.com/nim-lang/Nim/issues/13129).

It's on my list to try to implement the suggested `--backend` switch for `nim 
doc`, which would use the specified backend for compiling and running the 
runnableExamples and :test: code block examples. 


Re: Game unlock gui written with gintro

2020-01-27 Thread Stefan_Salewski
I have just shipped gintro v0.7.0 with --gc:arc support, so you may test your app 
with it. See the top of the README in 
[https://github.com/StefanSalewski/gintro](https://github.com/StefanSalewski/gintro)
 for details. There are some serious changes, so it may not work for you out of 
the box, but we should be able to get it working.


Re: Nim macro help

2020-01-27 Thread juancarlospaco
`import macros`


Nim macro help

2020-01-27 Thread kidandcat
Hi, I'm a complete noob with macros and I'm trying to learn, but I'm getting this error:

undeclared field: 'kind' for type system.NimNode


proc createSlide(title, body: varargs[string]) =
  echo "createSlide", title, body

proc slideDslImpl(head, body: NimNode): NimNode =
  if body.kind == nnkIdent:
    var args: seq[string]
    for n in body.sons:
      if n.kind == nnkStrLit:
        args.add n.strVal
    result = createSlide(body.strVal, args)

macro slide*(head, body: untyped): untyped =
  result = slideDslImpl(head, body)
  echo result.treeRepr # let us inspect the result




runnableExamples Question

2020-01-27 Thread chemist69
Hi,

I would like to use `runnableExamples` in a module that needs to be compiled in 
cpp mode and that needs to be linked against several C++ shared object files. 
Currently the `nim doc` command that I implemented in the `nimble` file fails 
because it is not aware of these requirements. Is there a way to handle this?
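
For readers who have not used it: `runnableExamples` embeds small snippets in a doc 
comment that `nim doc` extracts, compiles and runs, which is why the backend and 
link requirements matter here. A minimal sketch, with a purely hypothetical proc:

proc double*(x: int): int =
  ## Doubles `x`.
  runnableExamples:
    doAssert double(21) == 42
  result = 2 * x
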

Many thanks in advance.

Kind regards, Axel