Your builder doesn't return int, it returns some proc returning int. As simple
as that.
Oh, probably. I thought that by "attach" you mean:
type Shape {.inheritable.} = ref object
  ...
  area: proc(s: Shape): float
type Square = ref object of Shape
  ...
proc newShape(...): Shape =
  ...
  result.area = areaShape
proc newSquare(...):
@Araq
Well, I don't think it's possible to use generics in this solution. As far as I
know, Rust's trait objects are implemented in a similar manner and they lack
generic methods (that's one of the reasons why it's often advised to use
generic traits rather than generic methods within non-generic traits).
@mashingan
No, it's not about number of arguments. See that:
proc fun(x,y: int) = echo x+y
proc gun(x,y: int): int = x+y
fun 1, 2 # works fine
let x = gun 1, 2 # doesn't compile
let x = gun 1: 2 # compiles (!), although it looks weird
Just like GULPF said.
@GULPF
That's very interesting, actually! I never thought I'm SO mainstream using so
many macros.
@didlybom
But strings and sequences ARE already treated a little differently than plain
reference object types, aren't they? The most trivial example being: strings
have their literals and sequences (and arrays) have `openArray` but neither
string nor sequence has an object constructor. So what no
Why not just treat `discard` like a function call which takes another function
call? Then `discard fun x, y` would parse as `discard(fun(x), y)`, just like
with, say, `echo`. Then it just makes sense.
@jzakiya
So you DID inspect C code differences and also checked another compiler, but
you still blame Nim, not gcc, for that time difference? Even to the point of
calling it a security issue?
seg->data[k] = (NU8)(seg->data[k] | ((NI) 1));
// vs:
seg->data[k] = ((NU8) 1);
Well, in the first case, you use an NI-sized `or` operation, as NU8 | NI == NI.
Int-sized operations often tend to be faster than int8-sized analogues
(especially if vectorization is involved).
@erikenglund
Will you please stop spamming with the same damn example again and again? There
are libraries with both float32 and float64 as their default, and the opposite
literal can break it in both cases. Both make it binary incompatible, and while
float64-instead-of-float32 additionally lower
Surely, why not?
Actually, Julia's main advantage isn't pure performance. It's ease of use.
Please notice Julia is NOT a true HPC language as the only type of parallelism
it provides is master-slave (as far as I know) which isn't even common in HPC.
However, it's handy to be able to use Python, C and Fortran (
I'm wondering whether it's possible to iterate over all entities available in
a module. My main idea is a wrapper module macro, which could sort-of-borrow
other type's procedures declared in another module.
Well, I know why it happened now. The declaration was injected into another
module via a macro, so the {.experimental.} should be used in the caller
module, not the macro defining one.
@planhths
Yes, I actually overlooked where it happens. Now I see what it does is
essentially partial transposition (I mentioned before that transposition can
give nice results; it also plays nicely with parallelization).
But did you really call THAT one a "naive algorithm"? Doesn't really look
naive.
I do use {.experimental.}. The message is precisely: "Warning: overloaded
'.' and '()' operators are now .experimental; () is deprecated [Deprecated]"
Cool! I didn't see the () operator coming. Last time I asked about it, there
was no proposal of it yet, if my memory serves me well.
I would like to have a function behaving like an object, so I could overload an
operator on that function. I need the following to be true:
assert 1.fun ^ 2 == fun ^ 2
I know I could use fun() with a default argument but that syntax isn't
acceptable in this context.
Ca
But that's just how the terminal works. For me, it returns:
* ESC: 27
* F1: 27, then 79, then 80 (THREE chars per one keydown)
* →: 27, then 91, then 67
The problem's not with the lib. Go ahead and try it in C; you should get the
same results.
Pretty nice game, I'll make sure to look at the sources soon.
You allocated a wholly new seq in optMatrixProduct, no wonder it's slower
than the unoptimized version. By the way: have you tried changing the
representation of the matrix before multiplication (so that for k in 0 ..< a.n:
a.data[i,k] and b.data[k,j] are both linear in memory)? As far as I know
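The representation change hinted at above might look something like this minimal sketch. Everything here (flat row-major seqs, the `matMulTransposed` name) is my own illustration, not code from the thread: `b` is transposed first so that both inner-loop accesses walk memory linearly.

```nim
# Sketch: multiply two n x n matrices stored as flat row-major seqs,
# transposing `b` first so the k-loop reads both operands sequentially.
proc matMulTransposed(a, b: seq[float], n: int): seq[float] =
  var bt = newSeq[float](n * n)      # b transposed, row-major
  for i in 0 ..< n:
    for j in 0 ..< n:
      bt[j * n + i] = b[i * n + j]
  result = newSeq[float](n * n)
  for i in 0 ..< n:
    for j in 0 ..< n:
      var s = 0.0
      for k in 0 ..< n:
        s += a[i * n + k] * bt[j * n + k]  # both linear in k
      result[i * n + j] = s
```

The transpose costs O(n²) but the multiplication is O(n³), so it usually pays off for larger matrices.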
Update: I completed the first project. Used gnuplot-nim for plotting; it's not
bad (I'll make some PR soon, though).
Hi folks!
I've been more or less active on forum for quite a long time but to tell you
the truth, I haven't really used Nim for any "serious" stuff. Now when I
started a Physical Processes Modeling course as a part of my studies, a friend
of mine asked me: "Will you use Matlab or Python?". It was q
Well, if you need gmatch3 then it won't do. But if gmatch1 and gmatch2 are
enough, use C++-like approach and make those a proc which returns a structure
with items and pairs iterators.
type MatchWrap = distinct seq[string]
proc gmatch(src, pat: string): MatchWrap =
l
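A hedged sketch of how that C++-like wrapper could continue. The substring matching via `contains` is just a stand-in for whatever pattern engine a real `gmatch` would use, and the helper names are mine:

```nim
import strutils

type MatchWrap = distinct seq[string]

# Hypothetical: collect matching lines; a real gmatch would run an
# actual pattern matcher instead of a plain `contains` check.
proc gmatch(src, pat: string): MatchWrap =
  var res: seq[string] = @[]
  for line in src.splitLines:
    if pat in line:
      res.add line
  MatchWrap(res)

# The wrapper exposes items and pairs, so it drops into for-loops.
iterator items(m: MatchWrap): string =
  for s in seq[string](m):
    yield s

iterator pairs(m: MatchWrap): (int, string) =
  for i, s in seq[string](m):
    yield (i, s)

for i, m in gmatch("foo\nbar\nfoobar", "foo"):
  echo i, ": ", m
```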
@mashingan @StasB What if a macro uses, let's say, system date? Or connects to
a database (you can't check whether it changed until you do connect)? And
still, even a noSideEffect macro is a code generator/transformer, not a textual
substituter so there is no analogy to C.
@Demos Well, I do but I consider it dirty.
@guibar Thank you, guibar, I forgot I need a static[T] to change macros'
semantics. I haven't used that for some time now.
@mratsim Thank you. I haven't read this blog post before, actually.
Thank you guys, I didn't notice custom field pragmas were possible in the last
devel. I waited for them for quite a long time.
@Araq How can I do it with macros? getImpl returns the const's default value,
not the strdefined one:
import macros
const module {.strdefine.}: string = "math"
macro importconst(name: string): untyped =
  let value = name.symbol.getImpl
  echo "variable nam
And that's one of the reasons why I like functional programming... Firstly, you
don't really need a new seq for the job you're doing. You should iterate
over sections separated by ',' but you can modify them in-place (or
even better --- directly write them to the file!).
Here is a li
> Interesting, I am also am a Rust user. (...) Nim is amazingly productive.
I prefer Rust for more complex projects but heretically use Nim for reusable
scripts (interchangeably with Python).
Well... the template is the right thing to do but without closure magic ---
just use a block:
template scope(code: untyped): auto =
  block:
    code
let a = scope:
  echo "hello"
  1
echo a
Actually, it works for ALL of the type's parameters, including types:
type Matrix[W, H: static[int], T] = object
  data: array[W * H, T]
var m: Matrix[3, 2, int]
m.data = [1, 2, 3, 4, 5, 6]
proc `[]`(m: Matrix, x, y: int): m.T {.inline.} = # here m.T is a return type
Better yet, use a single seq and iterate over it as if it was NxM. It will be
more cache-friendly.
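A minimal sketch of the single-seq idea; `N`, `M` and the `at` helper are illustrative names of my own:

```nim
# Sketch: one flat seq treated as an N x M matrix (row-major layout),
# so the whole matrix sits in one contiguous allocation.
const N = 3
const M = 4
var grid = newSeq[int](N * M)

# Row-major index helper: element (i, j) lives at i * M + j.
template at(g: seq[int], i, j: int): int = g[i * M + j]

for i in 0 ..< N:
  for j in 0 ..< M:
    grid[i * M + j] = i * 10 + j

echo grid.at(1, 2)  # 12
```

Iterating row by row then walks memory strictly sequentially, which is what makes it cache-friendly compared to a seq of seqs.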
@sendell Not necessarily. Global scope can be unique in some ways, although I
agree it could be quite confusing in Nim specifically. That's the way things
are in some languages, for example:
* in C, global variables can't be initialized in a function call (they can in
C++)
* in Fortran, glo
@Serenitor It's different in many ways:
* also changes semantics of == ("is it actually the same object" instead of
"is this (other) object the same")
* no dynamic dispatch is usable anyway unless {.inheritable.} is applied first
* poorer performance due to dynamic heap allocation (and GC)
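The first bullet can be checked with a minimal snippet (type and field names are mine): for ref objects the default `==` is reference identity, not a field-by-field comparison.

```nim
type Point = ref object
  x, y: int

let a = Point(x: 1, y: 2)
let b = Point(x: 1, y: 2)
let c = a

# Same field values, but different heap objects: not equal.
assert a != b
# Same object behind both names: equal.
assert a == c
```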
It is. The tricks I showed are just stopgaps until the introduction of =sink.
But of course, I'd be glad.
Bizarrely enough, the following seems to work:
# module A
static:
  var test = 0
proc test_proc(): void {.discardable, compileTime.} =
  var a = test
  echo a
static:
  test_proc()
# module B
import A
static:
I think so. I tried it with both typed and untyped macro arguments. What's
funny, it works as expected for untyped ones:
macro sth(code: untyped): untyped =
  echo code.repr
sth:
  let s = 5
  echo s
But not for typed:
macro sth(code: typed):
As for writing REALLY Python-like Nim code, please have a look at
[nimpylib](https://github.com/Yardanico/nimpylib). In some simple cases and
when good design-patterns are followed in the Python code, it can get almost
1:1.
You don't need macros at all.
Here is what you really wanted to do:
proc fibo(a, b: int): iterator (): int =
  iterator inner(): int {.closure.} =
    var
      a = a
      b = b
    while true:
      yield a
      (a, b) = (b, a + b)
  inner
let x = fibo
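A self-contained, runnable version of that closure-iterator idea might look like this; the {.closure.} pragma (needed so the iterator can be returned from the proc) and the seed values 0, 1 are my additions:

```nim
# A proc returning a closure iterator: each returned iterator carries
# its own (x, y) state, so independent Fibonacci streams are possible.
proc fibo(a, b: int): iterator (): int =
  iterator inner(): int {.closure.} =
    var (x, y) = (a, b)
    while true:
      yield x
      (x, y) = (y, x + y)
  inner

let next = fibo(0, 1)
echo next()  # 0
echo next()  # 1
echo next()  # 1
echo next()  # 2
```

The returned iterator is called like a plain proc; it resumes after the last `yield` on each call.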
ast = getAst(inner(body))
code = $toStrLit(ast)
is replaceable by:
code = body.repr
Other than that, it seems to be a bug as body.repr.parseStmt should be an
identity and here it's not (parse error due to invalid indentation).
Rewriting doesn't have to be bad. One of the reasons is that it may be possible
to write a more efficient implementation more easily in another language (see:
Rust's enums are faster than C++ virtual, especially for small ones). Also:
just like a wrapper with native types (uses seq instead of T
Well, it is not an issue but I don't get why it doesn't work for a block:
{.experimental.}
type MyObj = object
  a: int
proc `=destroy`(m: MyObj) =
  echo "Destruct"
block:
  let x = MyObj(a: 5)
I find it misleading as a block should create a scope.
The {.dirty.} attribute forces the template to copy-paste its code at the call
site. It's typically used to make all symbols inject but it also means all the
code inside of it is not symbol-checked either; checking is deferred until the
whole template is expanded, which effectively means it can inject its symb
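A minimal sketch of that effect (template and variable names are mine): because the template is {.dirty.}, its local `tmp` is not gensym'd and leaks into the caller's scope.

```nim
# A dirty template's locals are pasted verbatim at the call site,
# so `tmp` is visible to the caller without any {.inject.}.
template swapVals(a, b: untyped) {.dirty.} =
  var tmp = a
  a = b
  b = tmp

var x = 1
var y = 2
swapVals(x, y)
echo tmp  # compiles only because the template is dirty
```

Without {.dirty.}, `tmp` would be gensym'd and the last `echo` would fail with an undeclared-identifier error.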
You're welcome. I mostly use Nim for metaprogramming fun so I'm kind of used to
these kinds of tricks.
I think I had a similar problem with printing some time
ago... Anyway: use BiggestUInt instead of uint64. It has better semantics and
could be platform-adjusted, e.g. we could add uint128 if there was a platform
which natively supports it and BiggestUInt would then be uint128
# Replace:
proc atomicIncRelaxed*[T: AtomType](p: VolatilePtr[T], x: T = 1): T
# with:
proc atomicIncRelaxed*[T: AtomType](p: VolatilePtr[T], x: T = 1.int32): T
Note it will still work for int64 thanks to the conversion.
The reason for this problem is that int32 is int at the
Well, the whole problem is that = can't be AST-overloaded. That would be the
best and most nimish solution. However, I found three other solutions as well,
one of which I will quote. Sadly, all three require patterns so if you use
--patterns:off, the checks will be disabled. Here it comes, the
In fact, I consider it a bug in the compiler that the following doesn't work:
proc button*[W, H: UILength = UIAbsLength](self: UISystem,
    buttonLabel = none(string),
    width = W(0.0),
I just grabbed the first part of code that is very easy to understand. I didn't
know that changing the order of yielded values doesn't make any difference for
you (I did think about how greatly it would simplify transform but forgot when
posting).
I only had a glance at the rest of the functio
It should not forbid it but allow it. Please notice that generics are almost
replaceable by the () operator (it should be replaceable by the [] operator but
then it only works when called explicitly, not by operator syntax):
template Sth(t: typedesc = int): typedesc =
  type `Sth t` = o
It's quite easy to speed it up, actually. Let's take a look at your transform
iterator:
proc flip(s: seq[string]): seq[string] =
  result = s # copy
  result[0] = s[^1]
  result[^1] = s[0]
proc transpose(s: seq[string]): seq[string] =
  result = s # copy
I followed the manual as for how to use higher-kinded concepts. I was quite
surprised when the code containing genericHead actually compiled, but returned
something I don't really get...
import future, typetraits, options
type Functor[A] = concept f
  f.get is A
No, you're wrong. An iterable is ANY container that can be iterated over
(including lists, sets etc.) while openArray is anything that has an array-like
memory layout, i.e. array or seq. Your code fails for containers with a
non-linear memory layout:
import lists
var li =
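A sketch of the distinction (the proc name is mine): a generic proc that only relies on the `items` iterator accepts a linked list, while an openArray parameter could not, because a list has no array-like memory layout.

```nim
import lists

# Generic over anything with an `items` iterator: works for seqs,
# arrays AND linked lists. An openArray version would reject the list.
proc total[T](xs: T): int =
  for x in xs:
    result += x

var li = initSinglyLinkedList[int]()
li.prepend 3
li.prepend 2
li.prepend 1

echo total(li)          # 6
echo total(@[1, 2, 3])  # 6
```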
I can't see any proc to create a CountTable from any iterable so I think you're
wrong with assigning ones; it seems it's defined only for `openArray`s. Also,
it's not just like any table, as it provides the `inc` proc. Please notice this
method works even if the table does NOT contain the key yet!
{.noSideEffect, codeGenDecl: "__attribute__((pure)) $# $#$#".} still doesn't
help for my nim 0.17.2 and gcc 5.4.0.
Compiles just fine for my Nim 0.17.2.
Use object variants, they're exactly for cases like that --- the number of
variants is fixed.
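A minimal object-variant sketch (all names illustrative): one type, a fixed set of kinds, each kind with its own fields.

```nim
# An object variant: the `case kind` discriminator fixes which fields
# exist, and the compiler checks exhaustiveness of `case` over it.
type
  ShapeKind = enum skCircle, skRect
  Shape = object
    case kind: ShapeKind
    of skCircle:
      radius: float
    of skRect:
      w, h: float

proc area(s: Shape): float =
  case s.kind
  of skCircle: 3.141592653589793 * s.radius * s.radius
  of skRect: s.w * s.h

echo area(Shape(kind: skRect, w: 2.0, h: 3.0))  # 6.0
```

Adding a new kind later means extending the enum, and every non-exhaustive `case` then fails to compile, which is exactly the safety you want when the number of variants is fixed.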
I know all three languages and attended an embedded systems course during
my studies but know virtually nothing about p2p. Is it for me, too?
@mratsim Well, it seemed to me the current idea is to push the GC aside so that
Nim will become scope-based "by default" with optional GC just where it's
needed. Thanks for the explanation though.
@Araq Thank you, I didn't know that (I don't really use Nim's parallel
capabilities). That's something new for me, especially comparing to how it
works in, say, Rust.
It suggests that if Nim takes the new direction it seems to be heading in
(according to Araq's blog post), i.e. turning away from GC, it would more or
less ruin things for the guy talking.
@Araq Do you mean the slices have to be passed directly, without a local
binding?
@jzakiya Too bad there is no seq constructor from a raw pointer and size. That
way, you could just make a seq which is, in fact, a view of another seq.
Hypothetical example:
var s = @[3,1,4,1,5,9,2]
var v = ptrToSeq(s[2].addr, 3)
assert(v == @[4,1,5])
@monster I think the following is far nicer:
let yi = 2'u64 * cast[uint64](high(int64)) + 1'u64
let yf = float64(yi)
echo yi
echo yi+1'u64
echo yf
I don't remember who it was, but someone here on the forums complained about
indentation-based syntax as they preferred braces. The answer was to use
parentheses and semicolons, just like I did right now.
@mratsim As far as I know, you can do the same for Julia so it sounds like
cheating.
@adrien79 Yes, that's how it works. There is no items for types, as far as I
know.
Seems like a bug to me but it can be hacked quite easily:
const required_fields: array[0, tuple[f1: int, f2: string]] =
static(var a: array[0, tuple[f1: int, f2: string]]; a)
@adrien79 Actually, if you tried to iterate over Points as you illustrated, it
would break too:
for pt in Points:
  do_sth(pt)
You should use explicit low and high:
for pt in Points.low .. Points.high:
  do_sth(pt)
That is not true. Vtable pointers can be elements of a seq, just like any ptr
or ref type.
What I mean is a vectorized structure-of-arrays for x, y, z (and possibly
others) for a set of particles. They should be ordered according to their place
in the space grid. As I said, in numpy I can have an any-D numpy array and sort it,
no python lists involved. I imagine the same for tensors in Arr
@mratsim Vtptrs are not ready yet, as I've heard? If they were, I guess they
would solve the problem (it is how it would probably be solved in Rust,
actually).
Let's say I want to do some operations on particles. They should be vectorized
(and maybe parallelized; some calculations could also benefit from GPU) and it
would be really nice if particles from the same space grid would be in the same
place in the sequence, as they will need to access the same
@monster Is the number of different kinds of messages fixed? If it is, you can
use variant types.
@mratsim Oh, really, you don't know any example of an operation whose cost
depends on the values? Well, I easily know one: sorting.
It seems a little messy, I'd say. And not really too many examples. I think
you should make up your mind: do you want people to get interested in Nim (=>
you don't really have to explain things that much) or do you want to teach them
Nim (=> you make a "tutorial for experienced programmers").
@cblake I entirely agree, B-trees are pretty awesome. When learning about Rust
iterators I discovered that it's often faster to turn a container into a B-tree
and then collect it into the starting container type than to operate directly
on the initial container.
@mratsim No, it's not. That's why I asked whether you use dynamic scheduling.
Imagine you have a sequence of 1, 2, 4, 8, ..., 1048576. Now, map it with an
operation with O(N) complexity, where N is the value of your number. If you use
static scheduling, it's entirely possible most of the work w
@monster Why not use inheritance?
type
  ThreadID* = distinct uint8
  AnyMsg {.inheritable.} = object
    sender*: ThreadID
    receiver*: ThreadID
    previous: ptr AnyMsg
  Msg*[T] = object of AnyMsg
    content*: T
let space = all
@mratsim Oh, I forgot Arraymancer always uses OpenMP so you're talking about
threads created by it. I don't use OpenMP that often, that's probably why I
forgot about it.
Oh, by the way: the elementary operations you mentioned, addition, sum etc, can
be easily split into equal chunks. But not so
@Araq Happy to hear that!
Could you elaborate about the main thread being the only one being able to
create and destroy the objects? It sounds quite restrictive so I'd like to hear
what your motivation and the general idea was.
nodejs package? Does it mean Nim is to be compiled on JS backend for Android or
am I wrong (please say I'm wrong)?
@dawkot To put it simply, what Araq says is: in the first case, the macro
operates directly on procA at compile time so it behaves as expected but in the
second case, it actually operates on p argument (which has no implementation as
it's not an actual procedure, therefore its implementation is
@Jehan The fact that an arbitrary string is ambiguous without a context is
probably the reason a context is passed as a separate parameter in Rust macros,
I guess. I sometimes miss that possibility in Nim; it would make some tricks
unnecessary and macros would be less magical, I guess.
@mratsim Would you mind if I make a reference to your lib in my bachelor thesis
about optimization?
I don't think it's possible, actually. By using parens, you force non-standard
operator (let's assume : could be an operator) precedence. Then, it's possible
for ? to just eat an untyped block, not caring too deeply about whether : is
really an operator or not. But without parens, things are differ
@woggioni Well, I guess Nim's philosophy is different here. If the let-binding
exists, it must have a value. But you're in the middle of describing the proc's
body so the value isn't ready yet. Why is it practical? Let's say you call a
macro from within your fibo's body. A macro on a recursive c
Let's see what the compiler says to your example:
rec.nim(3, 9) Error: undeclared identifier: 'fibo'
Just what it says. In JS, variables are dynamic. When the function's body is
created, it doesn't "know" about the variable it will be bound to but it
doesn't matter. Nim is s
Personally, I think I started with CBOT. Then some JavaScript and Lua but
nothing serious, really. Just simple scripts for a website and hacking some
Battle for Wesnoth's hidden functionality (I guess it was adding a new status
icon for units). Then I learned C as a part of my studies. The rest
@Lando Thank you for pointing it out. Too bad it's not documented, I tend to
avoid using undocumented stuff. ^^" Actually, I think I even used lineInfoObj
once like a year ago so I probably forgot about it.
Is it possible to generate a docs link based on NimSym? Or, even better, based
on an argument's type?
Here comes an example:
# a.nim
type Mock = object
# b.nim
import a
macro add2docs(sym: typed, docs: string): typed =
  newCommentStmtNode(docs.format(
Update:
Thanks to Nim now handling doc comments in the AST (it wasn't there back when I
introduced the lib), contracts now generate docs.
@Araq Oh, I should have bolded it: I mean Nim optimization and inlining, not C's...
as weird as it sounds. I mean, any pattern-template or pattern-macro which
matches a noSideEffect routine would match here, despite the fact this routine
has side effects, actually. I don't know whether it's good or not
I have a problem with StmtCommentNode... When I have a Stmt of the following
structure:
Expr1
Expr2 ## doc comment
repr prints it as above. But treeRepr prints the following:
StmtList
  (Expr1 ...)
  (Expr2 ...)
What's more, s.last.repr prints
@Araq That looks nasty! Wouldn't it confuse the compiler? Also --- it prevents
inlining, I guess?