Re: pros/cons of dissecting types via getTypeInfo() vs getTypeImpl()?

2019-02-09 Thread deansher
@timotheecour Could that^ approach meet your needs?


Re: pros/cons of dissecting types via getTypeInfo() vs getTypeImpl()?

2019-02-09 Thread deansher
In response to [my PR](https://github.com/nim-lang/Nim/pull/10596), 
@timotheecour described a pre-compiled debugging plugin he has built that makes 
extensive use of RTTI including `Any`. Here are some excerpts from his comments:

> I rely on typeinfo.nim for an lldb plugin I wrote that allows using nim 
> plugins during a debugging session (and can do more stuff than the nim-gdb 
> python plugin, including printing enum names, fully customizable pretty 
> printing, etc).
> 
> I already implemented it and it works, and is 100% dependent on RTTI; I can 
> share more details if needed, but in short, it converts a (pointer, 
> type_name) (provided by the debugger, in my case lldb but should work with 
> gdb) to a (pointer, PNimType) thanks to a mapping from type_name to PNimType; 
> from there I get an Any, and from there I can do arbitrary data navigation 
> (read/write/call) using the typeinfo api
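To make that flow concrete, here is a minimal sketch of mine (not the plugin itself) of the `Any`-based data navigation the typeinfo api offers once you have a value and its runtime type information:

```nim
import typeinfo

# toAny captures (addr(x), x's PNimType); from there the typeinfo API
# allows reads without compile-time knowledge of the concrete type.
var x = @[10, 20, 30]
var a = toAny(x)
assert a.kind == akSequence
echo a[1].getInt   # reads element 1 through the RTTI view: 20
```

The plugin's extra step, mapping a debugger-supplied type_name string to a `PNimType`, is not shown here.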

So, @Araq, what are the liabilities of RTTI that lead you to say "In the longer 
run I would like to deprecate runtime type information."? And could you unpack 
why you say `macros.getTypeImpl` and similar are the preferred solution? 


Re: safe way to hold traced reference of unknown type?

2019-02-06 Thread deansher
As I write my own code, I feel as though I'd want to either avoid this (perhaps 
instead using the approach I showed at the opening of this thread) or PR a 
documentation change that blesses it. Thoughts on which is better?


Re: pros/cons of dissecting types via getTypeInfo() vs getTypeImpl()?

2019-02-06 Thread deansher
Should I submit a PR to capture what you, @Araq, just said in some appropriate 
way in the typeinfo documentation?


Re: safe way to hold traced reference of unknown type?

2019-02-05 Thread deansher
I see that @yglukhov's `variant` package stores arbitrary references this way:


elif T is ref:
  # T is already a ref, so just store it as is
  result.isRef = true
  result.refval = cast[ref RootObj](val)

If this works, I think it works for very subtle reasons that we should 
document. Here's what I think is going on. Could anyone confirm?

Assuming this works, it seems as though by copying a ref's traced reference 
into a `ref RootObj`, _even though the referenced type may not be a subtype 
of_ `RootObj`, we keep the referenced value and any reference graph below it 
safely live in any possible GC implementation.

Is that a safe assumption? If so, where and how would we articulate the 
invariants that ensure it?
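For discussion, here is a minimal self-contained sketch of the pattern in question (the `Holder`/`hold` names are mine, invented for illustration):

```nim
# Stash an arbitrary traced reference in a `ref RootObj` field, so the
# GC sees a traced slot and keeps the target alive, even though T may
# not actually be a subtype of RootObj.
type Holder = object
  refval: ref RootObj

proc hold[T](val: ref T): Holder =
  Holder(refval: cast[ref RootObj](val))

var r: ref int
new(r)
r[] = 42
let h = hold(r)   # h.refval keeps the int cell reachable
```

If the invariant holds, it presumably rests on the cast preserving the pointer bits unchanged in a slot the GC still treats as traced.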

The current documentation of `cast` is notably unhelpful in answering this sort 
of question: "Type casts are a crude mechanism to interpret the bit pattern of 
an expression as if it would be of another type. Type casts are only needed for 
low-level programming and are inherently unsafe."


pros/cons of dissecting types via getTypeInfo() vs getTypeImpl()?

2019-02-02 Thread deansher
The implementation of `Any` in the standard library package 
[typeinfo](https://nim-lang.org/docs/typeinfo.html) and the implementation of 
`Variant` in @yglukhov's lovely [variant](https://github.com/yglukhov/variant) 
package take an interestingly different approach to dissecting and using types 
during compile-time execution. `typeinfo`'s approach is based on 
`getTypeInfo()`, which yields a `PNimType`. `variant`'s approach is based on 
`getTypeImpl()`, which yields a `NimNode`.

What are the strengths and weaknesses of these two approaches? Do they have 
different limitations? Different maintainability? Are they ultimately 
complementary? I.e. if a package goes far enough in dissecting and using types, 
will it tend to use both? (And if so, is that a good thing?)

The implementation of `Any` begins by converting the type to a `PNimType`: 


proc toAny*[T](x: var T): Any {.inline.} =
  newAny(addr(x), cast[PNimType](getTypeInfo(x)))



Then it dissects and uses that `PNimType` (as `rawType`). Here's a 
representative example: 


proc invokeNewSeq*(x: Any, len: int) =
  assert x.rawType.kind == tySequence
  var z = newSeq(x.rawType, len)
  genericShallowAssign(x.value, addr(z), x.rawType)



In contrast, the implementation of `Variant` begins by obtaining the type's 
implementation as a `NimNode`: 


proc mangledName(t: NimNode): string =
  mangledNameAux(getTypeImpl(t)[1])

Then it dissects the type by understanding that `NimNode` and by navigating 
deeper with additional calls to `getTypeImpl()` or `getImpl()`. Here's a 
representative example: 


proc mangledNameAux(t: NimNode): string =
  case t.typeKind
  of ntyAlias:
    assert(t.kind == nnkSym)
    let impl = t.symbol.getImpl()
    assert(impl.kind == nnkTypeDef)
    result = mangledNameAux(impl[^1])
  # . . .
  of ntySequence:
    let impl = t.getTypeImpl()
    assert impl.kind == nnkBracketExpr
    assert impl.len == 2
    result = "seq[" & mangledNameAux(impl[^1]) & "]"
  # . . .

Thoughts on the pros and cons of these two approaches?

Dean 


Re: safe way to hold traced reference of unknown type?

2019-01-31 Thread deansher
To me, it feels cleaner to actually hold onto a reference than to increment a 
reference count that must be carefully decremented later. Is there a benefit to 
the `GC_ref` approach for this use case? Or am I just thinking about this wrong?
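For contrast, here is roughly what the `GC_ref` approach being discussed looks like (a sketch of the general pattern, not code from this thread; the `pin`/`unpin` names are mine):

```nim
type Node = ref object
  value: int

proc pin(n: Node): pointer =
  # GC_ref marks the object as externally held; the GC will not collect
  # it even once no traced reference remains.
  GC_ref(n)
  cast[pointer](n)

proc unpin(p: pointer) =
  # Every pin must be balanced by exactly one unpin, or the object leaks.
  GC_unref(cast[Node](p))

let n = Node(value: 1)
let p = pin(n)              # object now reachable via the untraced pointer p
echo cast[Node](p).value
unpin(p)                    # n itself still keeps the object alive here
```

The asymmetry you describe is visible here: a forgotten `unpin` is a silent leak, whereas a held reference is released automatically when its owner dies.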


safe way to hold traced reference of unknown type?

2019-01-31 Thread deansher
I want a value that holds onto a traced reference of an unknown type, to 
protect it from being garbage collected while I access it through an `Any`. Is 
this the best way to do that? Any other feedback on this code?


import typeinfo

type
  DynKind* = enum
    dkInt
  
  DynType* = object
    kind*: DynKind
  
  DynValue* {.inheritable.} = object
    dtype*: DynType
    anyValue: Any
  
  DynRef*[T] = object of DynValue
    refValue: ref T

proc dynHeapValue*[T](valueRef: ref T): DynRef[T] =
  DynRef[T](dtype: DynType(kind: dkInt),
            anyValue: valueRef[].toAny,
            refValue: valueRef)

proc anyKind*(value: DynValue): AnyKind =
  value.anyValue.kind


Re: proposing new doc terminology for "compile-time" and "runtime"

2019-01-29 Thread deansher
PR Submitted: [10497](https://github.com/nim-lang/Nim/pull/10497)


Re: proposing new doc terminology for "compile-time" and "runtime"

2019-01-20 Thread deansher
I'm not actually proposing changing "compile time" to "analysis". I'm still 
proposing using the term "compile time" to talk about everything that happens 
during compilation. I'm just proposing that when we talk about what happens at 
compile time, we make a careful distinction between "semantic analysis" (or 
simply "analysis") and compile-time execution.

Although I agree that "execution" is a far less common term in a browser setting 
than "runtime", I think it would be very helpful to use the same term for code 
execution at compile time and at runtime. Another available term would be 
"evaluation", but truly nobody says "evaluated in the browser". I see 320,000 
Google hits for "executed in the browser".

Here's a sentence from the first few paragraphs of [V8's 
documentation](https://v8.dev/docs):

_V8 compiles and executes JavaScript source code, handles memory allocation for 
objects, and garbage collects objects it no longer needs._


proposing new doc terminology for "compile-time" and "runtime"

2019-01-20 Thread deansher
I believe our documentation's current use of the terms "compile time" and 
"runtime" is confusing and often misleading. Here is a proposed new direction. 
Once we've discussed it a bit here, I have a meta-question: is this worth an 
RFC, or should I just PR the doc changes?

**example 1 of current confusion**

The manual says, _" The compiler must be able to evaluate the expression in a 
constant declaration at compile time."_

Now consider the following code: 


static:
  var v = 3.1415
  echo v



It echoes 3.1415 during compilation. So what would you expect the following 
code to do? 


static:
  var v = 3.1415
  const pi = v
  echo pi



Compilation fails: "Error: cannot evaluate at compile time: v"

How about this code? 


static:
  const pi = block:
    var v = 3.1415
    v
  echo pi

Sure, fine, that echoes 3.1415 during compilation.

**example 2 of current confusion**

The manual says, _" A static error is an error that the implementation detects 
before program execution."_ Contrast that with a subsequent statement, _" A 
checked runtime error is an error that the implementation detects and reports 
at runtime."_ So, how about the following code: 


static:
  var data: array[3, int]
  proc store(loc: int, value: int) =
    data[loc] = value
  for i in 0 .. 3:
    store(i, 42)

That gives the following error during compilation: 


stack trace: (most recent call last)
error_in_static_block.nim(6) error_in_static_block
error_in_static_block.nim(4) store
error_in_static_block.nim(4, 9) Error: index out of bounds



Surely it fits the definition given above of "static error"? But it isn't what 
we'd normally think of as a "static error". (Although it does occur in a 
`static` block!) Isn't it more of a runtime error? It even has a stack trace! 
But it happened during compilation!?

**proposed solution**

We use "compile time" to talk about everything that happens during compilation 
-- including compile-time execution.

We use "runtime" to talk about everything that happens when running the output 
of the compiler (such as a binary).

We distinguish between "semantic analysis" (or simply "analysis") and 
"execution".

We can unambiguously talk about "semantic analysis" (or simply "analysis"), 
since that only happens at compile time. But we have to be careful about when 
to simply say "execution", versus when to be more specific by saying 
"compile-time execution" or "runtime execution".

We say that at compile time, we interleave semantic analysis and execution. For 
example, we may do semantic analysis on a macro, semantic analysis on code that 
invokes the macro, execution of the macro, and then semantic analysis on the 
invoking code again after macro expansion.
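A tiny example of that interleaving (my illustration, not taken from the manual):

```nim
import macros

macro double(x: static[int]): untyped =
  # This body runs during compile-time execution...
  newLit(x * 2)

# ...after which the expanded code (the literal 42) goes through
# semantic analysis again in the invoking context.
const y = double(21)
echo y
```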

**example 1 with the proposed solution**

The manual could now say something like this:

_The expression in a constant declaration can only use language features that 
are supported for compile-time execution, and can only depend on literal 
values, previously declared constants, and previously declared procs, macros, 
and templates. It can only depend on procs whose bodies meet these same 
requirements._

**example 2 with proposed solution**

The manual could now say things like this:

_A static error is an error that the compiler detects during semantic analysis._

_A checked execution error is an error that is detected during code execution, 
whether at compile time or at runtime._

**my meta-question**

Is this worth an RFC, or should I just informally seek a consensus (such as 
here in the forum) and then PR the doc changes?


Re: Cannot define `(T: type) -> T` proc within a template defined in another template

2019-01-13 Thread deansher
The behavior that you describe seems like a bug, when I compare it to the 
following excerpt from the manual:

> Whether a symbol that is declared in a template is exposed to the 
> instantiation scope is controlled by the inject and gensym pragmas: gensym'ed 
> symbols are not exposed but inject'ed are.
> 
> The default for symbols of entity type, var, let and const is gensym and for 
> proc, iterator, converter, template, macro is inject. However, if the name of 
> the entity is passed as a template parameter, it is an inject'ed symbol
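A small illustration of the quoted rule, assuming the default behavior it describes (my example, not yours):

```nim
template declare() =
  var hidden = 1            # var defaults to gensym: not exposed
  proc visible(): int = 2   # proc defaults to inject: exposed

declare()
echo visible()   # compiles, because procs are inject'ed by default
# echo hidden    # would not compile: vars are gensym'ed
```

If the nested-template case you describe doesn't follow this rule, that does look like a bug worth reporting.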


Re: Associating data to types

2019-01-11 Thread deansher
:-) For this example as written, you could use (drum roll) overloaded procs:


type
  MsgAlpha = object
    id: int
    x, y: int
    url: string
  
  MsgBeta = object
    id: int
    resource: string

proc msgId(m: MsgAlpha): int = 1
proc msgId(m: MsgBeta): int = 2

proc serialize[T](msg: T) =
  var msgcopy = msg
  msgcopy.id = msg.msgId
  echo msgcopy

serialize MsgAlpha(x: 10, y: 30, url: "http://foo.bar")
serialize MsgBeta(resource: "GLES2")

I suspect you may be trying to do something more elaborate that isn't captured 
by your opening example here?


Re: "Nim needs better documentation" - share your thoughts

2019-01-05 Thread deansher
Thank you, @Araq, I'll give that a try! I'll also try some PRs against the 
macros tutorial.


Re: Nim Advocacy & Promotion Strategies

2019-01-05 Thread deansher
Although I personally am making an earnest effort to use Nim for what I expect 
to be a large, long-term project, I don't think that's where Nim is right now 
in any broad sense. Rather, I think Nim is a fascinating exploration in 
programming language design. It has lots of great ideas, occasional brilliance 
in how features synergize, lots of half-finished and/or half-documented ideas, 
and some lingering flaws. (And no, I don't claim to know which is which! :-) )

I see nothing wrong with this at all. For me personally, it's a feature not a 
bug, because I have some hope of influencing Nim's direction -- just in the 
margin -- in areas I care about most. But I think it would be dangerous rather 
than helpful to promote Nim more at this stage. One reason is that many 
would-be Nim programmers will still have bad experiences until we harden code 
and improve documentation. Another is that we risk joining the unhappy club of 
languages who get panned by lots of programmers because everyone has heard 
their names and assumes that, if they were any good, they'd be popular by now. 
(D, Haskell, and O'Caml come to mind.) Another is that we simply aren't at 1.0 
-- for most programmers, a language with no backward compatibility guarantee is 
pointless. I am _glad_ we have a low TIOBE rank at this stage!

It seems to me that a strong move toward mainstream acceptance (e.g. breaking 
into top 50 TIOBE with upward momentum) would need the following ingredients:

  * A clear statement that you will have a great experience with Nim if you are 
doing XYZ.
  * A strong hit rate (75%?) that programmers who try XYZ in Nim have a great 
experience. (The broader we make XYZ, the more work required to clear that bar.)
  * Declaring 1.0 on the above basis.
  * At least a subliminal case -- maybe an explicit case -- that if @Araq gets 
tired of this or gets hit by a bus, it doesn't all evaporate.




Re: inserting one template inside another

2019-01-05 Thread deansher
From what I can tell so far, this works with macros but does not work with 
templates. [Here is my 
testing](https://github.com/joy-prime/the-edge-of-Nim/blob/master/tests/t_consuming_macro_declared_var.nim). 
I would love more information about this!


Re: "Nim needs better documentation" - share your thoughts

2019-01-05 Thread deansher
I love this discussion and agree with most of what has been said. I am looking 
for the best way to help. @Libman's proposal is extravagant but appealing.

I have a serious worry that's a little different from "documentation not good 
enough for Nim programmers to get started, be productive, etc." I am worried 
that until we have very solid "language spec" level explanations of Nim itself, 
we won't even know all the nooks and crannies of the de facto semantics of the 
implementation. Then, if we declare 1.0 while in that state, these nooks and 
crannies become guaranteed backward-compatible semantics of the language.

As an example to illustrate what I mean, the project I'm working on in Nim 
tends to push macros and compile-time code execution pretty hard. The current 
version of the Nim Manual just hazily sketches how macros and templates work 
and hazily refers here and there to the semantics and restrictions of 
compile-time execution. From [my testing so 
far](https://github.com/joy-prime/the-edge-of-Nim), the nooks and crannies of 
these semantics and restrictions seem far more intricate than what one would 
expect from that hazy material in the Manual. I don't even know whether my 
testing is exposing bugs or features, nor do I know what terminology we'd like 
to use for talking about the semantics. So it is hard for me to respond either 
by submitting bug reports or by contributing to documentation.


Re: List of pending CT evaluation features

2019-01-05 Thread deansher
[Idris](https://www.idris-lang.org/) takes CTFE to an extreme, in what strikes 
me as very elegant ways. Going that far is, at best, experimental -- not 
something I'd recommend for Nim! -- but I see Idris as a reasonably complete 
checklist of what's worth considering.


Re: interesting exercise in Nim metaprogramming: Clojure-inspired data

2019-01-05 Thread deansher
@andrea, It's a broad topic. I went looking for a concise existing explanation, 
but couldn't find it. If you are interested enough to wade through some detail, 
[this podcast transcript](http://blog.cognitect.com/cognicast-transcripts/103) 
of an interview with Clojure's creator, Rich Hickey, is good. Here's my shot at 
excerpting enough bullet points from that talk to capture the general idea. At 
the end, I'll try to tie this back to how you asked the question.

(Quoting Rich Hickey.)

  * "The fundamental idea in terms of the problems it solves is to come up with 
a machine leverageable way to talk about how things work and then to get as 
much leverage out of it as we can."
  * "[A common problem is that] we combine the specification for an aggregate 
with the specification for its parts, which leaves us with a lot of rigidity in 
systems... when we do this, we end up with a bunch of things that are not good. 
One is, our reuse is low because now our definitions of these parts are context 
dependent, and they're tied to the aggregate. It's as if you were going to 
define what a tire is only inside defining what a car was."



  * "If you want to describe a car, you say it has a chassis and tires. But, 
you leave the description of what tires are to an independent description. When 
you do that, you get more reuse. This matters tremendously, as our systems 
become more dynamic, especially in the data space. But, even in the Web 
services space, you're combining subsets and making intersections of sets all 
the time. You'll take some data you got from here, and it had X, Y, and Z. You 
took some other data from there that had A, B, and C. Then you hand the next 
piece of the program: A and X. If the definitions of those parts are in the 
aggregates, then every single intersection and union and subset needs its own 
definition and will re-specify that same stuff again. I think that leads to 
rigidity in systems, and I think it actually doesn't work well at all in the 
fully dynamic case when I don't want to know necessarily what's flowing through 
this part of the system. "



(Me speaking again.)

Going back to how you asked the question, I'm trying to define `Attribute`s 
that have both names and types, and that can be recombined freely into 
aggregate types (in Nim, tuples or objects). Although you can't see it yet in 
the code sample, my aim is to support both run-time and compile-time 
combinations of attributes. I want to verify at compile time as much as the 
programmer chooses to nail down at compile time, but allow the programmer to 
choose to do some things dynamically at run time.

Here's a simple example for reading a row from a database, doing some 
computation, and using the result in a REST reply:

  * At run time, verify that an expected set of attributes are present in the 
row. (Specifically, verify that columns with those attribute names exist and 
have compatible types.)
  * At run time, place that set of attributes into a compile-time tuple type 
that requires that set of attributes.
  * Next, execute some statically typed logic on the tuple type. That logic can 
freely make compile-time-checked assumptions about the set of attributes being 
present, because the tuple type guarantees they are.
  * Finally, use macro-generated code to inject some resulting tuple value into 
a dynamic environment like a JSON response to a REST call.




interesting exercise in Nim metaprogramming: Clojure-inspired data

2018-12-30 Thread deansher
I am working on a macro package that will allow a user to declare an 
"attribute" in module A and then use it from module B. An attribute will have 
an identifier, a fully qualified name, and a type. The fully qualified name 
will be constructed from the module name and the identifier. This is inspired 
by Clojure's philosophy of defining namespaced keys that always represent the 
same value data type and semantics. For example, the "postalCode" attribute 
defined in the "mailingAddress" namespace would always represent a postal code 
in the same way, even if it sometimes occurs in a "contact" object and 
sometimes in a "business" object. I'd like to just export one thing from module 
A with the attribute name as its identifier, so that importing the attribute 
from module B will work as the programmer expects.

It is interestingly tricky in Nim to stash a type during compilation (such as 
in an attribute declaration macro) and then use that type in a subsequent 
declaration (such as for a tuple field). The following code does that, but it 
uses macros with `static[var Attribute]` parameters -- is that even intended to 
work!? -- and more broadly I wonder whether it can be simplified. Ideas?

As temporary simplifications while exploring the approach, this code

  * leaves out the "qualified name" idea,
  * doesn't actually export the attribute,
  * and only allows each data type to have a single attribute.




import macros, tables

type Attribute = tuple[name: string, typeAst: NimNode]

macro declareAttribute(name: untyped): untyped =
  expectKind(name, nnkIdent)
  nnkVarSection.newTree(
    nnkIdentDefs.newTree(
      nnkPragmaExpr.newTree(
        newIdentNode($name),
        nnkPragma.newTree(
          newIdentNode("compileTime")
        )
      ),
      newIdentNode("Attribute"),
      newEmptyNode()
    )
  )

macro defineAttribute(attr: static[var Attribute],
                      name, typ: untyped): untyped =
  attr = (name: $name, typeAst: typ)
  newEmptyNode()

template attribute(name, typ: untyped): untyped =
  declareAttribute(name)
  defineAttribute(name, name, typ)

macro data(tupleTypeName: untyped,
           attr: static[var Attribute]): typed =
  let nameAst = newIdentNode(attr.name)
  let typeAst = attr.typeAst
  quote:
    type `tupleTypeName` = tuple[`nameAst`: `typeAst`]

attribute(firstName, string)
data(Person, firstName)

let p: Person = (firstName: "Sybil")
echo p.firstName