[julia-users] Improved performance for anonymous functions

2014-01-07 Thread Mike Innes
Anonymous functions in Julia currently have a lot of overhead. However, 
I've come up with a temporary solution to this by writing a little macro 
which speeds up anonymous function calls - all you have to do is wrap your 
functions in @fn and they'll speed up, e.g.

x -> x^2

becomes 

@fn x -> x^2

or

@fn function(x)
  x^2
end

A quick benchmark shows that calling this wrapped function is about 15x 
faster. The macro is defined at line 74 of this gist, along with some other 
useful utils:

https://gist.github.com/one-more-minute/8299575
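
If you want to check the difference yourself, a rough benchmark might look 
something like this (just a sketch, assuming the @fn macro from the gist is 
loaded):

f_anon = x -> x^2
f_fast = @fn x -> x^2

# time a million calls of each
@time for i = 1:10^6; f_anon(i); end
@time for i = 1:10^6; f_fast(i); end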

Enjoy!


[julia-users] Re: Julia vs Mathematica (Wolfram language): high-level features

2014-01-25 Thread Mike Innes
I think what this question boils down to is this: All else being equal 
(performance, price, support), which language is ultimately nicer to 
program in?

Mathematica is a fantastic - if not the best around - DSL for symbolic 
maths. Its term-rewriting paradigm is pretty much perfectly suited to what it 
does, so if you're using it for quick equation plots or to cheat on your 
calculus homework, don't hold your breath waiting for a replacement. It 
also ends up having what looks a lot like Julia's multiple dispatch on 
steroids; you can match not just on type, but also on internal structure, 
or even the unevaluated expression or the function being called itself. 
This is enormously powerful; you can implement a fully-functioning dictionary 
type (https://gist.github.com/one-more-minute/8618047) in seven lines of 
Mathematica, and it might not be fast but it's beautifully 
declarative. So, if you're looking for Julia to have some killer feature 
that makes it more powerful than Mathematica, you probably won't find one.

*But*. But. As cool as some of Mathematica's features are, *fundamentally it's 
a language designed to let non-programmers write one-liners*. And 
unfortunately, what are great design decisions in that context fall flat 
outside of it. For example, its syntax - writing S-Expressions as function 
calls is mostly great, until you have to write control flow statements and 
variable bindings as if they were function calls, which gets old fast. The 
syntax needs to either be simpler (a la Clojure) or more helpful (a la 
Julia) IMHO. Also, holding arguments, blurring the line between code and 
expressions; all great for short-term usability, but all that magic under 
the surface ends up being a lot of conceptual overhead for anything 
complex. Symbolic programming with no syntactical overhead is a great 
feature one percent of the time, but it gets in the way the other 
ninety-nine.

Some points for Julia: firstly, interop. You can import a C library and 
call it really easily; same for Python and soon Java. Also, Julia's code is 
generally easier to reason about than Mathematica's, especially for writing 
macros; there's no magic going on where you don't expect it. What's great 
about Julia is that despite this, it's still really powerful (thanks to 
multiple dispatch etc.). Also, it's more straightforward to generate and 
evaluate code in Julia, which is often useful. For these reasons, Julia 
would be my language of choice even if Mathematica was just as 
fast/free/whatever.
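
For example, calling straight into a C library is a one-liner (assuming a 
standard libc is available to load):

t = ccall((:clock, "libc"), Int32, ())  # calls C's clock() directly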

Last point: the language's approach to types has little to do with the 
presence/absence of a compiler. Yes, the Julia implementation is compiled, 
but this isn't about implementations, right? Both Julia and Mathematica are 
dynamically, strongly typed (not untyped - everything in Mathematica has a 
type, accessible via Head[]). Anyway, there's nothing stopping you from 
using lists and maps to represent your data, just as you would in 
Mathematica, and avoiding using types altogether. Persistent.jl and Lazy.jl 
give strong support for functional programming, so you're not going to be 
missing out. And you can get rid of code repetition / boilerplate by 
generating and evaluating code as above.
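
For instance, a record can just be a Dict, with no type declarations needed 
(using the current Dict literal syntax):

point = [:x => 1.0, :y => 2.0]  # Dict{Symbol,Float64}
point[:x] + point[:y]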

Oh, and there's no typo in that definition - Expr.typ will be dynamically 
typed, or equivalently have type Any.

Hope this helps.


[julia-users] Google Summer of Code: Your Project Suggestions

2014-02-16 Thread Mike Innes
We've published a project ideas list for GSoC here:

http://julialang.org/gsoc/2014/

We'd like our ideas page to be as healthy and diverse as possible, so 
please do make your suggestions. Projects can include things like new 
packages, specific language/package features, or something more 
experimental; really, there's scope for any kind of coding project here, 
but those which fit roughly three months of work and have a clear, tangible 
benefit are best.

If you maintain or use a package which is missing key features, now would 
be a great time to ask for them!

You're welcome to add project descriptions via github, but if you want to 
suggest something more informally you can do so here - I'll continue to 
write up as many as I can.

Thanks,
Mike



[julia-users] Re: Google Summer of Code: Your Project Suggestions

2014-02-17 Thread Mike Innes
Any time, Stefan - and I wholeheartedly agree about keeping this up 
permanently. Speaking from experience, the issue tracker can be a little 
intimidating for people who want to get involved.

I didn't mean to suggest any preference for CUDA over OpenCL, so I'll add a 
note about the latter (this isn't my most knowledgeable area). Jake, that 
does sound like an interesting project - if you get a chance, it would be 
great if you could add it to the page. I don't mind looking into it myself, 
but you'll no doubt understand it better than I do.


Re: [julia-users] Re: Google Summer of Code: Your Project Suggestions

2014-02-17 Thread Mike Innes
Ok, I've added an autoformat project to the list. Jake, thanks for your 
additions.

Do we have any kind of support for R interop? I might have missed it, but 
if not I'll add it to the list.

Also, I'm thinking that it would be great for gradual adoption if there was 
a good story for calling Julia from Python (and perhaps other languages). 
Might be a good project, although perhaps too difficult?



[julia-users] Re: ANN: Konthe Convenience functions for OpenGL plotting

2014-03-05 Thread Mike Innes
Seems like that link is broken - works for me if I use raw.github.com/... 
and then save the file.

Nice work, though! Is there any way you could get this to work with 
Three.js?

On Wednesday, 5 March 2014 14:22:28 UTC, Fabian Gans wrote:

 Hi all, 

 I collected some of my convenience functions for 3D plots using OpenGL 
 into a repository. You can make surface plots from matrices and parametric 
 surfaces form functions. The plots are not rendered to screen but into an 
 Image object from the Images package which show directly in an ipython 
 notebook. If anyone is interested, here is the repo:

 https://github.com/meggart/Konthe.jl

 and an example notebook:

 https://rawgithub.com/meggart/Konthe.jl/master/examples/KontheExamples.html

 Fabian



[julia-users] Re: ANN: Konthe Convenience functions for OpenGL plotting

2014-03-07 Thread Mike Innes
Well, I don't know that much about OpenGL, but I figure if you can take what 
you've got and dump the data into Three.js, you've got the basics of 
interactive 3D plotting. This would be especially great because it could be 
integrated with IJulia / Light Table. Just a thought, really - something 
I'd quite like to see, but probably a long way off.

Mike


[julia-users] Re: Packing and unpacking parameters

2014-03-07 Thread Mike Innes
RE some kind of `@unpack` macro: have a look at this question on Stack 
Overflow and its answers:
http://stackoverflow.com/questions/9345056/in-clojure-how-to-destructure-all-the-keys-of-a-map/11182432#11182432

Interestingly enough it is possible to an extent, but for many reasons it ends 
up being a bad idea.

What you probably want instead is the Dict type - see below. Using 
it, you can write params[:a], params[:b] etc.

http://docs.julialang.org/en/release-0.2/stdlib/base/#indexable-collections
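
For example:

params = [:a => 1, :b => 2.5]  # a Dict
params[:a] + params[:b]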

That said, it would be nice to have some kind of map destructuring macro 
similar to Clojure's. Hopefully the pattern matching libraries will 
eventually include something along those lines.

Hope this helps,
Mike


[julia-users] Re: [ANN] Two packages: Lazy.jl Mathematica.jl

2014-03-08 Thread Mike Innes
So, to clarify, Iterators aren't a thing in themselves. Iteration is an 
interface, and to call something an iterator just means that you can put it 
in a for loop. Tasks and Lazy Lists are both iterators; so are arrays, 
sets, dictionaries, and a whole bunch of other things. But although you can 
use them in a similar way if you want to, they are all designed to solve 
very different problems.

Now, Tasks and Lazy Lists do look similar in that you can produce and 
consume a stream of values with both, but conceptually they are quite 
different - Tasks are a mechanism for control flow, whereas Lazy Lists are 
a data structure. Perhaps you could call them the procedural and functional 
analogies of each other. I can't tell you what's best for you, but if 
you're thinking of Tasks as representing a sequence of data, then there's a 
good chance you'll find Lazy Lists easier to reason about.

For example, consider the partition() function. In Lazy.jl terms this 
splits a single list into a list of lists - it's fairly easy to visualise 
this:

 partition(3, seq(1:9))
List:
  (1 2 3)
  (4 5 6)
  (7 8 9)

If you wanted to write partition() for Tasks, you'd end up with tasks that 
produce tasks. I don't know about you, but that gives me a headache.

You'll also notice that working with general iterators takes a lot of work; 
consider the Iterators.jl version of take(), which takes about twenty 
lines, versus the two-liner in Lazy.jl. Some things are simply impossible 
to do generically, like flatten().
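
To give a flavour of why, here's a self-contained toy version (not Lazy.jl's 
actual code) that models a lazy list as either nothing or a (head, 
thunk-returning-the-tail) pair; take over such lists is a one-liner:

lazyseq(r) = isempty(r) ? nothing : (first(r), () -> lazyseq(r[2:end]))
ltake(n, l) = (n <= 0 || l == nothing) ? nothing : (l[1], () -> ltake(n - 1, l[2]()))

# print the first three elements of a lazy sequence over 1:9
l = ltake(3, lazyseq(1:9))
while l != nothing
    println(l[1])
    l = l[2]()
end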

That's not to say that Tasks aren't useful - they're better if you want to 
do more things in terms of control flow and less in terms of manipulating 
the data itself, for example. Both Tasks and Lazy Lists are extremely 
powerful, but each within their own scope - hence it's useful to have both.

Is this roughly what you were looking for? Let me know if I've missed 
anything.


Re: [julia-users] Re: [ANN] Two packages: Lazy.jl Mathematica.jl

2014-03-08 Thread Mike Innes
Ok, fair enough - I think the confusion for me lies in the fact that I
wouldn't have said that Julia has lazy lists, tasks and iterators, in the
same way that I wouldn't say it has floats, integers and numbers, because
the former two are just types of the latter. But now I think I understand
that by "iterator" you mean "iterator implementation via a custom type" -
like the Take and Repeat types that Iterators.jl uses. Right? Also, I want
to separate the idea of tasks and generators, because tasks are just
coroutines - they can be used to make generators, as you have, but it's not
their only purpose.

I think I'm in agreement with you that iterators, in that sense, are best
reserved for when they have a specific purpose (like Ranges, for example).
I'm not convinced that the Iterators.jl style is the best idea myself, so
let's leave that alone for now. Then it comes down to generators and lazy
sequences, which as you've pointed out are two different ways to solve the
same problem.

As I've mentioned, these are both reflections of two very different styles
of programming, procedural vs. functional. In my view, the fact that
different people have different tastes is *exactly *the reason to support
both paradigms, as opposed to deciding on one true way for everyone. That
article, while it doesn't apply 1:1 to our discussion, also looks at the
idea that in many cases one style is objectively preferable to another - in
which case, it only makes sense for Julia to support both.

I'd be interested to see the tree-walking iterator mentioned in the article
implemented via a task. I could be wrong, but I imagine it would be
reasonably difficult compared to the lazy sequence version. Equally, I
don't know of anything that's harder with sequences than with generators,
so if you can think of anything I'd be interested in having a go at it.


On 8 March 2014 11:44, andrew cooke and...@acooke.org wrote:


 i realise that in julia iterators are a protocol (that they rely on start,
 done and next, and that the underlying type used to do the iteration
 depends on what is being iterated over).  but that's not true in python,
 for example, where all iterators are implemented as coroutines.  the only
 reason i can see for julia adding a separate mechanism for iterators
 separate from tasks is efficiency - it's less work to use the iterator
 protocol to effectively manage an integer than to have a task.  or maybe
 it's that consume is explicit in julia while it's not in python, so tasks
 look uglier in julia?

 to me this seems confusing.  for example, it would be nice to have
 something that takes a task and generates a new task than is the contents
 of the old task repeated?  but the repeat() function in Iterators.jl
 doesn't do that.  instead it gives you an iterator.  i don't know if this
 matters in practice - i haven't use tasks and iterators enough - but it
 seems like a mess.  why two different things?

 similarly, i understand, i think, that both lazy streams and tasks are
 implemented differently.  but a task that produces tasks doesn't give me a
 headache any more than lazy streams of lazy streams.  in fact tasks
 generally seem simpler (to me) because you don't have to worry about making
 the flow work nicely - you can just bail out with a produce.  but maybe
 it's just that i am more used to python than to scheme.  again, why two
 different things?  just because you are used to programming in scheme and i
 am used to python?  that's not a great answer in my book.

 (and the task version of take doesn't require 20 lines, for example -
 https://github.com/andrewcooke/BlockCipherSelfStudy.jl/blob/master/src/Tasks.jl#L5)

 someone else has pointed me to
 http://journal.stuffwithstuff.com/2013/01/13/iteration-inside-and-out/ which i 
 haven't read yet, saying that it explains the difference between
 iterators and tasks.  maybe that will help me.

 thinking more about this last night i did realise that my instinctive
 aversion to having lots of ways to do the same thing isn't necessarily
 reasonable in julia.  in a sense, what does it matter if julia has lazy
 streams, tasks and iterators, if they all use the same names for
 functions?  because then you can swap types out and code will still work.
 so i guess the cost to have take defined for iterators, and for tasks and
 for lazy streams is less than i imagined.

 andrew


 On Saturday, 8 March 2014 08:06:33 UTC-3, Mike Innes wrote:

 So, to clarify, Iterators aren't a thing in themselves. Iteration is an
 interface, and to call something an iterator just means that you can put it
 in a for loop. Tasks and Lazy Lists are both iterators; so are arrays,
 sets, dictionaries, and a whole bunch of other things. But although you can
 use them in a similar way if you want to, they are all designed to solve
 very different problems.

 Now, Tasks and Lazy Lists do look similar in that you can produce and
 consume a stream of values with both, but conceptually they are quite
 different

[julia-users] Re: A Case For Other Dependent Types?

2014-03-08 Thread Mike Innes
I think there's definitely a case for allowing any immutable type as 
parameters, at least. I would open an issue on github if there isn't one 
already, that way you can start a discussion.


[julia-users] Re: Example for using a PipeBuffer() ?

2014-03-08 Thread Mike Innes
names(Type) may be what you are looking for, e.g. names(Complex) = [:re, 
:im]
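
So for the original question, something along these lines should do it (as Ivar 
says, iob.data is the backing array rather than a stable interface):

iob = PipeBuffer()
write(iob, "Hello world!")
bytes2hex(iob.data)     # hex string of the buffer contents
names(typeof(iob))      # field names of the type, like Python's dir()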

On Saturday, 8 March 2014 14:41:47 UTC, Uwe Fechner wrote:

 Thanks, iob.data works.

 Is there a way in Julia to find out what are the field names of a Type, 
 like the dir function in Python?

 Best regards:

 Uwe

 On Saturday, March 8, 2014 3:33:27 PM UTC+1, Ivar Nesje wrote:

 Judging from 
 base/iobuffer.jlhttps://github.com/JuliaLang/julia/blob/master/base/iobuffer.jl,
  
 you can access iob.data to see the actual Uint8 array that backs a 
 PipeBuffer.

 Just note that type fields is not considered part of the exported and 
 (somewhat) stable interface.

 Ivar

 kl. 14:34:42 UTC+1 lørdag 8. mars 2014 skrev Uwe Fechner følgende:

 Hello,

 I would like to convert the content of a pipebuffer int an array of 
 Uint8.

 My code so far:

 iob = PipeBuffer();
 write(iob, char(14))
 writeproto(iob, pub.state)

 Now I want to print the content of iob as hex string for debugging.

 I can use bytes2hex to convert an array of Uint8 to a hex string,
 like in this example:

 s=Hello world!
 bytes2hex(convert(Array{Uint8, 1}, s))

 but I could not find a way to access the content of a PipeBuffer.

 Any idea?

 Regards:

 Uwe



Re: [julia-users] Re: [ANN] Two packages: Lazy.jl Mathematica.jl

2014-03-09 Thread Mike Innes
 enough to solve really hard
 problems well. When you're trying to solve a truly difficult problems,
 don't you want all the best and most powerful tools available - even if
 that means that there are lots of ways to solve easier problems?



 On Sat, Mar 8, 2014 at 8:36 AM, Mike Innes mike.j...@gmail.com wrote:

 Ok, fair enough - I think the confusion for me lies in the fact that I
 wouldn't have said that Julia has lazy lists, tasks and iterators, in the
 same way that I wouldn't say it has floats, integers and numbers, because
 the former two are just types of the latter. But now I think I understand
  that by "iterator" you mean "iterator implementation via a custom type" -
 like the Take and Repeat types that Iterators.jl uses. Right? Also, I want
 to separate the idea of tasks and generators, because tasks are just
 coroutines - they can be used to make generators, as you have, but it's not
 their only purpose.

 I think I'm in agreement with you that iterators, in that sense, are
 best reserved for when they have a specific purpose (like Ranges, for
 example). I'm not convinced that the Iterators.jl style is the best idea
  myself, so let's leave that alone for now. Then it comes down to generators
 and lazy sequences, which as you've pointed out are two different ways to
 solve the same problem.

 As I've mentioned, these are both reflections of two very different
 styles of programming, procedural vs. functional. In my view, the fact that
 different people have different tastes is *exactly *the reason to
 support both paradigms, as opposed to deciding on one true way for
 everyone. That article, while it doesn't apply 1:1 to our discussion, also
 looks at the idea that in many cases one style is objectively preferable to
  another - in which case, it only makes sense for Julia to support both.

 I'd be interested to see the tree-walking iterator mentioned in the
 article implemented via a task. I could be wrong, but I imagine it would be
 reasonably difficult compared to the lazy sequence version. Equally, I
 don't know of anything that's harder with sequences than with generators,
 so if you can think of anything I'd be interested in having a go at it.


 On 8 March 2014 11:44, andrew cooke and...@acooke.org wrote:


 i realise that in julia iterators are a protocol (that they rely on
 start, done and next, and that the underlying type used to do the
 iteration depends on what is being iterated over).  but that's not true in
 python, for example, where all iterators are implemented as coroutines.
 the only reason i can see for julia adding a separate mechanism for
 iterators separate from tasks is efficiency - it's less work to use the
 iterator protocol to effectively manage an integer than to have a task.  or
 maybe it's that consume is explicit in julia while it's not in python, so
 tasks look uglier in julia?

 to me this seems confusing.  for example, it would be nice to have
 something that takes a task and generates a new task than is the contents
 of the old task repeated?  but the repeat() function in Iterators.jl
 doesn't do that.  instead it gives you an iterator.  i don't know if this
 matters in practice - i haven't use tasks and iterators enough - but it
 seems like a mess.  why two different things?

 similarly, i understand, i think, that both lazy streams and tasks are
 implemented differently.  but a task that produces tasks doesn't give me a
 headache any more than lazy streams of lazy streams.  in fact tasks
 generally seem simpler (to me) because you don't have to worry about making
 the flow work nicely - you can just bail out with a produce.  but maybe
 it's just that i am more used to python than to scheme.  again, why two
 different things?  just because you are used to programming in scheme and i
 am used to python?  that's not a great answer in my book.

 (and the task version of take doesn't require 20 lines, for example -
  https://github.com/andrewcooke/BlockCipherSelfStudy.jl/blob/master/src/Tasks.jl#L5 )

  someone else has pointed me to
  http://journal.stuffwithstuff.com/2013/01/13/iteration-inside-and-out/ which i haven't read yet,
 saying that it explains the difference between iterators and tasks.  maybe
 that will help me.

 thinking more about this last night i did realise that my instinctive
 aversion to having lots of ways to do the same thing isn't necessarily
 reasonable in julia.  in a sense, what does it matter if julia has lazy
 streams, tasks and iterators, if they all use the same names for
 functions?  because then you can swap types out and code will still work.
 so i guess the cost to have take defined for iterators, and for tasks and
 for lazy streams is less than i imagined.

 andrew


 On Saturday, 8 March 2014 08:06:33 UTC-3, Mike Innes wrote:

 So, to clarify, Iterators aren't a thing in themselves. Iteration is
 an interface, and to call something an iterator just means that you can 
 put
 it in a for loop. Tasks and Lazy Lists are both iterators; so

Re: [julia-users] Re: [ANN] Two packages: Lazy.jl Mathematica.jl

2014-03-09 Thread Mike Innes
I'm sorry if that came off as though it was targeted at you - I meant it as a
general statement about the philosophy of having zero duplication. Of
course, you're right, duplication has a cost too, and it doesn't work to
just throw everything together either, so like everything else in life it's
about compromise.

For what it's worth, I think you asked a good question, made a valid point,
and an interesting discussion came out of it. Regardless of one's approach,
it's always right to question whether you're really doing the right thing -
so I didn't take your question as being demanding at all, far from it.
Thanks for taking the time to join the discussion, and again, apologies if
you've felt that there's any hostility involved.


On 9 March 2014 11:23, andrew cooke and...@acooke.org wrote:



 On Sunday, 9 March 2014 07:48:25 UTC-3, Mike Innes wrote:

 Stefan: "you want all the best and most powerful tools available" Yes,
 yes, yes - this, a thousand times.

 The problem with demanding that every problem has a single solution is
 that you end up with every solution fits a single problem as well. You
 can't possibly foresee all of the solutions and determine which is best,
 and you can't foresee all of the problems a tool can solve either. So
 instead of trying to create a one-size-fits all solution to each class of
 problem, let's make powerful tools that work seamlessly together and trust
 our users to do what they do best - problem solving.



 Go read the Julia issues.  They're full of tradeoffs between simplicity
 and functionality.

 Life, and language and library design, aren't as simple as you are making
 out.  There are costs to duplication and alternatives.

 And painting my posts as demanding a single solution is plain wrong.  I
 asked a question.  I didn't demand anything.

 Andrew




[julia-users] Re: GSOC 2014: Project Syntax Checker

2014-03-11 Thread Mike Innes
Yes, as Stefan has pointed out, making your own lexer/parser is not likely 
to be the best way to approach this project (and perhaps we should make 
that more explicit).

Your best bet here is to have a look at the existing Julia parser - get a 
feel for things, then think about improvements you could make. I can't give 
you a specific warm-up task, but making some improvement to the existing 
code would be a great start.

Hope this helps!


Re: [julia-users] Packing and unpacking parameters

2014-03-14 Thread Mike Innes

 I'm trying to write an unpack macro, just to learn a bit about
 meta-programming (I get it's probably not the best idea,
 but sometimes you learn a lot doing stupid things)


 Absolutely agreed! But bear in mind that this may be a hard macro to write
in the general case. Nevertheless, I can give you some pointers.

Firstly, @unpack n p isn't going to work in the way you want, because the
value of n will not be known (in principle) until runtime. It is possible -
the way to do it is to construct and evaluate a let binding, but doing this
every time the code runs will be *extremely* slow. Interpreted languages
like R sometimes offer these kinds of features because they avoid the
compiler overhead. You could also call eval(n) within the macro but that's
very brittle.

On the other hand, as long as you aren't going to change the value of n at
run time, you can reference it from within the macro itself - something
like (not tested)

const n = [:a, :b, :c]
macro unpack(p)
  Expr(:block, [:($(n[i]) = $p[$i]) for i = 1:length(n)])
end

@unpack [1,2,3] # Now a = 1, b = 2, etc.

The problem with this, then, is generalising it; you don't want to define
this macro for every set of parameters, after all. But if you want to avoid
that, you'll need to write a macro that writes a macro - I'll leave that as
an exercise for the reader.


Re: [julia-users] Packing and unpacking parameters

2014-03-14 Thread Mike Innes
macro unpack(p)
  Expr(:block, [:($(n[i]) = $p[$i]) for i = 1:length(n)]...) |> esc
end

Ok, that should work better. To avoid the hygiene pass you can wrap
everything in esc; for example

macro unpack()
  quote
a = p[1]
b = p[2]
  end |> esc
end
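
Usage of the first version would then look like this (again untested, and 
assuming the same `const n = [:a, :b, :c]` as before):

p = [1, 2, 3]
@unpack p   # expands to a = p[1]; b = p[2]; c = p[3]
a, b, c     # (1, 2, 3)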


On 14 March 2014 18:30, John Myles White johnmyleswh...@gmail.com wrote:

 "Stupid" seems a little harsh. Maybe just a bit vague. :)

  -- John

 On Mar 14, 2014, at 11:27 AM, Yuuki Soho yu...@vivaldi.net wrote:

 That was a slightly stupid question John, I should have thought about it for 2
 minutes :)
 I was hoping to do something like that, but it doesn't work because of
 the hygiene I guess.

 macro unpack()

 quote

 a = p[1]

 b = p[2]

 end

 end

 macroexpand(:(@unpack))
 #184#a = p[1] # line 1: #185#b = p[2]

 This is what I get with yours Mike:

 @unpack [1,2,3]
 3-element Array{Expr,1}: :(a = [1,2,3][1]) :(b = [1,2,3][2]) :(c =
 [1,2,3][3])

 Thanks for the answers!





[julia-users] Re: [First post] What IDE everyone uses?

2014-03-20 Thread Mike Innes


 On that note is somebody working on a LightTable integration?


As it happens I'm having a go at it. It's probably buggy and definitely 
missing features at the moment, but it's already a nice way to develop 
packages (no more pesky reloading files).

https://github.com/one-more-minute/Jewel

I'm hoping we'll have this as a GSoC project and we can get something 
really good going.


[julia-users] Re: Remaining GSoC projects

2014-03-21 Thread Mike Innes
Absolutely - the plan is to keep the ideas list around so that people who 
want to get involved have a nice set of projects to pick from. Aside from 
anything else it's probably a good idea to separate actual issues from 
"would be cool" projects.

That said, if we're keeping the ideas list around we could do with making 
it a bit nicer. I don't know what the best way to format it is but a table 
of contents would be really useful.


[julia-users] [ANN] Julia + Light Table!

2014-03-30 Thread Mike Innes
Hey all,

TL;DR: New Light Table integration available here: 
https://github.com/one-more-minute/Jewel

I just wanted to share the milestone I recently passed with my Light Table 
plugin. Thanks to the recent support for modules, I can now use the Julia 
client to develop itself; I make a change, Ctrl+Enter, and everything 
updates seamlessly, no rebooting Julia or reloading files required. This, I 
think, is pretty neat, and also means that Light Table is ready to become 
my go-to environment for playing with Julia and maintaining packages.

As well as module support, you get all the standard Light Table features: 
Evaluate lines and definitions and see the results inline; `Ctrl+d/m` for 
quick access to docs and methods; great autocompletion (including for 
modules and even for packages when you type `using` or `Pkg.add()`).

There's still a long way to go – I want to make this a really polished 
experience, and rich, interactive, graphical output and input are also high 
priorities – but I hope you'll give it a shot and let me know what you 
think so far. Feel free to use the gitter chat or ask here if you have any 
questions or problems.

https://github.com/one-more-minute/Jewel

Enjoy!

— Mike


[julia-users] Re: [ANN] Julia + Light Table!

2014-03-31 Thread Mike Innes
Thanks for the tipoff, Andrew. Would you mind opening an issue to remind me 
to look into Winston?

I've tried to make the readme a bit clearer, and also pushed a new update 
which will install the Jewel.jl package for you and start a client on 
startup. Hopefully this will make things easier and feel a bit snappier.

On Sunday, 30 March 2014 20:25:23 UTC+1, Andrew Dabrowski wrote:


 That's great, I've been meaning to have a serious look at LT for a long 
 time.  Just got it going and seems very nice.

 Two hiccups so far:

 1. Apparently you must restart LT after adding a new plugin.

 2. When run in terminal Winston redraws to the same window, but in LT it 
 seems to create a new window every time it's invoked. The code I'm working 
 on updates the window often so my desktop was quickly and completely overrun 
 by Winstons.  Apparently graphing is no-go in LT for now.




Re: [julia-users] Re: [ANN] Julia + Light Table!

2014-03-31 Thread Mike Innes
Thanks!

I felt that the cloning suggestion was polluting the instructions, but I'll
add it back in its own section when I can. All you have to do is clone the
Jewel repo into one of the following folders:

OS X: ~/Library/Application Support/LightTable/plugins/
Linux: ~/.config/LightTable/plugins/
Windows: %APPDATALOCAL%/LightTable/plugins/


On 31 March 2014 15:50, Andrew Dabrowski unhandya...@gmail.com wrote:

 Issue added.

 I notice you removed the suggestion that Jewel could be used from a git
 clone.  Is there no way to make that work in LT?





[julia-users] Re: Delay Macro evaluation, or how to load OpenGL function Pointers

2014-04-01 Thread Mike Innes
There are probably better ways to resolve this problem – as Isaiah points 
out, if getFuncPointer is fast enough, you needn't do anything at all. If 
it's slow, I would probably just memoize it.

That said, if for some reason you particularly needed to inline a runtime 
constant, you could do so by defining a function that redefines itself. For 
example, as a proof of concept:

function f(x)
  @eval begin
function f(x)
  return x*$(hard_calculation())
end
f($x)
  end
end
 
hard_calculation() = (sleep(1); 5)
 
# Will take a second the first time, but is instant thereafter.
@time f(10)


And of course, as there are many functions that follow this format you'd 
want to use a macro to generate them. With all the quoting and splicing 
that might take some decent macro-fu, though. Happy to help with this if 
you want.


[julia-users] Re: Delay Macro evaluation, or how to load OpenGL function Pointers

2014-04-01 Thread Mike Innes
Although, getProcAddress is defined by an independent package, right? In 
which case you probably want something more like:

module OpenGL
 
function init_opengl_fns(getProcAddr)
  @eval begin
export glGetString, ...
glGetString(name::GLenum) = ccall($(getProcAddr("glGetString")), Ptr{Cchar}, (GLenum,), name)
 
# further definitions
  end
end
 
end
 
# then...
 
using OpenGL, SomeContextCreator
SomeContextCreator.init_context()
OpenGL.init_opengl_fns(SomeContextCreator.getProcAddr)


Forgive me if I'm not understanding your problem correctly.


[julia-users] Re: Delay Macro evaluation, or how to load OpenGL function Pointers

2014-04-02 Thread Mike Innes
Ok, if you're using an include then you don't need those @evals. Your @eval 
include("gl4_3.jl") is definitely redundant, since include happens at run 
time anyway.

You should benchmark it yourself to be sure (or perhaps check the llvm 
output) but I'm pretty sure Isaiah is right here – you can also get rid of 
the @eval in gl4_3.jl without any performance penalty, and that definitely 
cleans things up.

In which case you end up with:

#OpenGL.jl:
function load(getProc::Function)
global const getProcAddress = getProc
include("gl4_3.jl")
end
 
#gl4_3.jl
glGetString(name::Uint16) = ccall(getProcAddress("glGetString"), Ptr{Cchar}, 
(Uint16,), name)
export glGetString


On Wednesday, 2 April 2014 13:15:45 UTC+1, Simon Danisch wrote:

 Hm... just the pointer to getProcAddress will be inlined, but not the 
 pointer to glGetString, right?

 with Mike's solution the lowered code looks like I want to have it:

 $(Expr(:lambda, {:name}, {{},{{:name,Uint16,0}},{}}, :(begin  # 
 /home/s/load.jl, line 2:
 return top(ccall)(Ptr{Void} 
 @0x7f402e6dadc0,Ptr{Int8},(Uint16,),name::Uint16,0)::Ptr{Int8}
 end::Ptr{Int8})))


 I've one more question, though!
 I use this code to include the definitions from another file, but this 
 doesn't look very elegant...
 Are there better options?

 #OpenGL.jl:
 function load(getProc::Function)
   global const getProcAddress = getProc
   @eval include("gl4_3.jl")

 end
  
 #gl4_3.jl
 @eval begin
  glGetString(name::Uint16) = ccall($(getProcAddress("glGetString")), 
 Ptr{Cchar}, (Uint16,), name)
   export glGetString
 end



 Thank you very much!

 Am Dienstag, 1. April 2014 14:30:12 UTC+2 schrieb Simon Danisch:

 Hi,
 I’m working on the OpenGL package and I want to make it finally usable in 
 a nice and clean way on all platforms.
 The problem is, that one needs pointer for the GL functions, which you 
 can only get, after initialization of the OpenGL context.
 But initializing the context and creating a window shouldn’t be part of 
 the OpenGL package.

 So I tried two different approaches, which both seem to have their 
 downsides:

 1.
 Initialize OpenGL context when including the OpenGL package
 This is bad, because this makes the OpenGL package dependent on some 
 third party OpenGL context creation library.

 2.
 Load the functions later with a loading Function.
 Bad, because the function definitions are not visible for any other 
 module, that relies on the OpenGL package.

 My ideal solution would be, to evaluate a macro when the function is 
 called and not when the module is included.
 Like this, I can define all the OpenGL functions already in the OpenGL 
 module, and when you call them the first time,
 the right function ptr gets inserted into the ccall, or an error is 
 raised, when OpenGL context is not initialized.

 this could look like this:


 module OpenGL

 macro getFuncPointer(name::ASCIIString)
return getProcAddress(name)
 end

 glGetString(name::GLenum) = ccall(@getFuncPointer("glGetString"), Ptr{Cchar}, (GLenum,), name)
 export glGetString
 end


 using OpenGL
 ...create OpenGL context
 #define getProcAddress
 global const getProcAddress = glutGetProcAddress # If using GLUT for GL 
 context creation
 #call gl Functions
 glGetString(GL_VERSION)

 Any ideas how to do this in a clean way?


 Cheers,

 Simon



[julia-users] Re: Delay Macro evaluation, or how to load OpenGL function Pointers

2014-04-02 Thread Mike Innes
Although, it strikes me as odd that you would need runtime evaluation to 
implement a library like this in the first place. Java must have OpenGL 
bindings, for example – how do those libraries solve this problem? Are they 
just really slow?


Re: [julia-users] Re: Delay Macro evaluation, or how to load OpenGL function Pointers

2014-04-03 Thread Mike Innes
Ok, that's interesting. This is obviously a pretty clever macro, because
it's not at all obvious that it should run as quickly as it does.

But, I'm still not sure I understand why that solution is necessarily the
least brittle and most correct. As I see it, we have three potential
solutions here, which in order of most elegant / least performant are (1)
getFuncPointerM memoization, (2) @getFunctionPointer memoization, (3) @eval
inlining. (1) and (2) are exactly the same amount of typing, (3) a few
characters more. (3), I would argue, is more clear about what it does and
why it's fast, but perhaps your opinion will differ.

Without clear reasoning, drawing a line of correctness between (2) and
(3) seems pretty arbitrary. I can see why you might argue that only (1) is
correct, since it's evidently easier to understand and maintain. But what
exactly makes (3) so much worse than (2), objectively speaking?


On 3 April 2014 09:21, Simon Danisch sdani...@gmail.com wrote:

 Can someone explain me what exactly is brittle about it?
 I don't like the solution too much myself, as it disguises things a little.

 But I don't know Julia ( or macros and programming concepts in general)
 well enough to see the bad side effects.
 At least it gets the job done, which is to generate a function body that
 only consists of a ccall with an inlined function ptr.

 Or is this not desirable?
 On Apr 3, 2014 5:19 AM, Jameson Nash vtjn...@gmail.com wrote:

 As I indicated, I used the macro to make the code look neater. I could
 have expanded it (see below). But just as functions help DRY code,
 macros help DRY typing. Memoizing using a dict isn't much better than
 calling dlsym, which is also some form of dict (and may even benefit
 from better memory localization). I am doing almost the same (using
 the module global namespace as a dictionary), but I am cheat because I
 can inform the compiler that the result of this dictionary lookup is
 static, which lets it emit a direct entry into the module global
 lookup table, at essentially negligible runtime performance impact
 (one mov and one jmp instruction more than directly ccall'ing a
 pointer).

 Using eval to do inlining is a brittle usage of both eval and
 inlining. it is nice to have the computer run things fast, but it is
 better have it do them right :)

 const glGetString_func_pointer = C_NULL
 function glGetString(name::Uint16)
 global glGetString_func_pointer
 if glGetString_func_pointer::Ptr{Void} == C_NULL
 glGetString_func_pointer::Ptr{Void} =
  getFuncPointer("glGetString")
 end
 ccall(glGetString_func_pointer::Ptr{Void}, Ptr{Cchar}, (Uint16,),
 name)
 end

 On Wed, Apr 2, 2014 at 9:59 PM, Mike Innes mike.j.in...@gmail.com
 wrote:
  I agree entirely that macros and eval should be avoided if possible -
 so why
  have you used them for memoization?
 
 
   const func_pointers = Dict{String, Ptr{Void}}()
  
   getFuncPointerM(name) =
     haskey(func_pointers, name) ?
       func_pointers[name] :
       (func_pointers[name] = getFuncPointer(name))
 
 
  Perhaps I'm missing something, but that seems equivalent and a lot
 neater to
  me. The benefit to your macro is that it's faster, which is exactly the
  reason to use eval in Simon's case.
 
  If retrieving a pointer carries a significant overhead compared to
 inlining
  it (which may well be a concern in an OpenGL library), then using eval
 to do
  that inlining is perfectly justified.
 
 
  On 3 April 2014 02:13, Jameson Nash vtjn...@gmail.com wrote:
 
  A delayed macro is just a function call. Don't make you life
  complicated by trying to use one in place of the other.
 
  Using macros and eval can get you into a lot of trouble by helping you
  write brittle code. They allow you to confuse compile and run time,
  even though Julia does make a strong distinction between them. Avoid
  using them.
 
  The correct way to write this is using memoization. We can use a macro
  to make this look neater. This one is copied from PyCall:
 
  macro getFuncPointer(func)
  z = gensym(string(func))
  @eval global $z = C_NULL
  quote begin
  global $z
  if $z::Ptr{Void} == C_NULL
  $z::Ptr{Void} = $(getFuncPointer(esc(func
  end
  $z::Ptr{Void}
  end end
  end
 
  glGetString(name::GLenum) = ccall(@getFuncPointer("glGetString"), Ptr{Cchar}, (GLenum,), name)
 
 
  A good macro should be equivalent to a pure function: regardless of
  when or how it is run, or how the program state changes, it must
  return the same result given the same inputs. In this case, it returns
  a static variable and a method of retrieving the actual function
  pointer. Note that it does not actually look up the function pointer
  or return it -- that cannot be done until runtime.
 
  eval is actually just a macro call in disguise, so the same guidelines
  apply. This also happens to demonstrate an appropriate (safe) use of
  eval.
 
 
  PS. this code should be causing a compile-time

Re: [julia-users] Re: Delay Macro evaluation, or how to load OpenGL function Pointers

2014-04-07 Thread Mike Innes


 (1) and (2) do equivalent operations at runtime, they only differ in
 how the memoization is implemented (dict vs. global variable). Neither
 requires an OpenGL context at compile time, making them roughly
 equivalent

 

 (3) caches information derived from the OpenGL context, which may be
 different from the OpenGL context that is used at runtime.


Ok, there seems to be some confusion about how (3) actually works, perhaps 
due to the misapprehension that Julia has strictly separate compile- and 
run-time phases.

To be clear, solution (3) is *exactly* equivalent to the others insofar as 
it caches the function pointers at runtime (and therefore does not require 
a context at compile time). The only difference is that they are loaded in 
one go rather than as each function is called. Yes, there is compilation 
involved in the caching, but that is only *triggered* by init_opengl_fns(), 
which is called *at runtime*. init_opengl_fns() will only be called after 
the relevant OpenGL context is loaded (again, at run time), meaning that 
the problem of different compile/run time contexts simply does not apply.

Also, Simon, I know you've gone with Jameson's solution for now, but just 
in case you're interested I thought I'd spitball how multiple 
contexts might work. Basically, the idea is to dispatch on the type of the 
context, dynamically generating the methods via init_opengl_fns().

module OpenGL
 
function init_opengl_fns(getProcAddr, ctx_type)
  @eval begin
export glGetString, ...
glGetString(ctx::$ctx_type, name::GLenum) = 
ccall($(getProcAddr("glGetString")), Ptr{Cchar}, (GLenum,), name)
 
# further definitions
  end
end
 
end
 
# then...
 
using OpenGL, CUDA, GLUT
# You can call this as many times as you want, generating specialised methods
# for each type.
# (The function pointers are cached and methods generated when these lines are
# run, not compiled.)
OpenGL.init_opengl_fns(CUDA.getProcAddr, CUDAContext)
OpenGL.init_opengl_fns(GLUT.getProcAddr, GLUTContext)
 
ctx1 = CUDA.new_context()
glGetString(ctx1, ...) # calls correct C function


I don't know if this is necessarily applicable, just an idea.


Re: [julia-users] I noticed there is no do while loop

2014-04-08 Thread Mike Innes
If while true loops are idiomatic, could we perhaps make the true optional?
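
For what it's worth, it's also easy to wrap the pattern up in a little macro 
(a quick sketch, not anything in Base):

macro dowhile(body, cond)
  quote
    while true
      $(esc(body))
      $(esc(cond)) || break
    end
  end
end

i = 0
@dowhile (i += 1) (i < 5)
i  # == 5; the body ran once before the condition was first checked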

On Tuesday, 8 April 2014 18:57:53 UTC+1, Stefan Karpinski wrote:

 Good idea. For what it's worth, these days I would be perfectly happy to 
 only program with while true loops and explicit breaks.

 On Apr 8, 2014, at 1:41 PM, Kevin Squire kevin@gmail.com 
 wrote:

 Although not too frequent, this would be a good FAQ entry. 

 Cheers, Kevin 

 On Tuesday, April 8, 2014, Pierre-Yves Gérardy 
 pyg...@gmail.com 
 wrote:

 This does the trick:

 while true
 # ...
 condition || break
 end



 On Tuesday, April 8, 2014 5:26:00 AM UTC+2, Freddy Chua wrote:

 as stated in question..



[julia-users] Re: Online Database + IDE for Julia

2014-04-10 Thread Mike Innes
It would definitely be good to have something like clojuredocs.org. You 
could include the docs for functions in all packages and Base, make 
everything searchable. If you could build meaningful connections between 
packages, even better.

But – and please forgive me if I've missed the point here – it does 
sound like you're trying to reinvent a lot of wheels here. I'm not saying 
that projects like the OpenGL IDE are a bad thing, but they seem pretty 
tangential to this idea of having a knowledge-graph of sorts for Julia's 
ecosystem. If you think you can solve all of the issues around that, that's 
exactly what you should be doing, you know?

What stops you from implementing automated testing and benchmarking for 
GitHub, or providing access to documentation within an existing IDE? It 
doesn't preclude them working well together, since they're pretty 
orthogonal. And if nothing else, rebuilding all these things is several 
lifetimes worth of work.

Anyway, nothing wrong with being ambitious. Good luck to you both!


[julia-users] Re: bit-twiddling micro benchmark

2014-04-10 Thread Mike Innes
To be fair, the simplest implementation being the fastest isn't 
*necessarily* to Julia's credit, since it may also mean that Julia can only 
optimise the simplest code. Not saying that's the case here, but it's worth 
looking at it from that angle.

Chris' point gives me an idea for an @unroll macro – it could unroll a 
generic for loop to do n at a time, or even take the loop out entirely when 
the iterator is a compile-time constant.
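
Something like this might be a starting point (a very rough sketch: it only 
handles a literal for i = a:b loop with integer bounds, and there's no support 
for break/continue or non-literal ranges):

macro unroll(loop)
  itr, body = loop.args[1], loop.args[2]  # :(i = a:b) and the loop body
  var, rng = itr.args[1], itr.args[2]     # rng is Expr(:call, :(:), a, b)
  lo, hi = rng.args[2], rng.args[3]       # assumed to be integer literals
  esc(Expr(:block, [:(let $var = $i; $body; end) for i = lo:hi]...))
end

@unroll for i = 1:4
  println(i^2)
end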


Re: [julia-users] Re: bit-twiddling micro benchmark

2014-04-10 Thread Mike Innes
Ah, nice
On 10 Apr 2014 15:39, Isaiah Norton isaiah.nor...@gmail.com wrote:

 See @simd: https://github.com/JuliaLang/julia/pull/5355


 On Thu, Apr 10, 2014 at 10:26 AM, Mike Innes mike.j.in...@gmail.comwrote:

 To be fair, the simplest implementation being the fastest isn't
 *necessarily* to Julia's credit, since it may also mean that Julia can
 only optimise the simplest code. Not saying that's the case here, but it's
 worth looking at it from that angle.

 Chris' point gives me an idea for an @unroll macro - it could unroll a
 generic for loop to do n at a time, or even take the loop out entirely
 when the iterator is a compile-time constant.





[julia-users] Re: Is it wrong to think of Julia as a Lisp that uses m-expressions?

2014-04-11 Thread Mike Innes
Lisp has certainly had a strong influence on Julia. First class functions, 
homoiconicity, running code at compile time and compiling code at runtime, 
the interactive REPL, dynamic typing and GC, everything you could really 
want from a Lisp is there.

But, syntax does make a difference, I think. For example, deeply nesting 
expressions concisely is far less natural in Julia than Lisp. Macros, while 
just as powerful, are a little more awkward thanks to Julia's more complex 
syntax and the fact that you often have to write @foo begin ... end rather than 
just (foo ...). Don't get me wrong, Julia's syntax is great, and it's entirely 
the right choice for the technical space, but for that reason I think I'd 
say that Julia isn't *a* Lisp as such. Yes, they're similar in terms of raw 
language power, but each for different problems.

I've often thought that a Lisp which compiles to Julia would be a cool 
project, though.

If you're interested, this Paul Graham essay (http://www.paulgraham.com/icad.html) is 
an interesting read. It seems he was spot on about languages with 
algol-like syntax adopting more Lisp features.


Re: [julia-users] Create formatted string

2014-04-12 Thread Mike Innes
The fact that @sprintf is a macro sort of explains why using a run-time value 
doesn't work in the same way, but it isn't really *the* reason, since 
@sprintf(fmt, val) could work in principle – it would just have to delegate to 
a function if its argument isn't a compile-time string.

If using a run-time string is particularly useful to you, I'd suggest 
opening an issue about this, since it appears to be missing functionality.

On Saturday, 12 April 2014 07:56:54 UTC+1, John Myles White wrote:

 @sprintf is a macro, not a function. It doesn't evaluate its inputs: it 
 just rewrites the inputs into something else (usually less readable) that 
 carries out the actual computation. You can see what it does using the 
 macroexpand function: 

 julia> macroexpand(quote @sprintf("%8.1e", 3.1415) end) 
 :(begin  # none, line 1: 
 Base.Printf.sprint(#63#io->begin  # printf.jl, line 783: 
 begin 
 #59#out = #63#io 
 #60###x#3463 = 3.1415 
 local #61#neg, #57#pt, #58#len, #62#exp 
 if Base.Printf.isfinite(#60###x#3463) 
 Base.Printf.ini_dec(#60###x#3463,2) 
 #61#neg = Base.Printf.NEG[1] 
 #62#exp = 
 Base.Printf.-(Base.Printf.POINT[1],1) 
 
 (Base.Printf.-(Base.Printf.-(1,Base.Printf.|((#62#exp Base.Printf.= 
 -100),(100 Base.Printf.= #62#exp))),#61#neg) Base.Printf. 0)  
 Base.Printf.write(#59#out,' ') 
 #61#neg  Base.Printf.write(#59#out,'-') 
 
 Base.Printf.write(#59#out,Base.Printf.DIGITS[1]) 
 Base.Printf.write(#59#out,'.') 
 
 Base.Printf.write(#59#out,Base.Printf.+(Base.Printf.pointer(Base.Printf.DIGITS),1),1)
  

 Base.Printf.write(#59#out,'e') 
 Base.Printf.print_exp(#59#out,#62#exp) 
 else 
 Base.Printf.write(#59#out,begin  # printf.jl, 
 line 141: 
 if Base.Printf.isnan(#60###x#3463) 
  NaN 
 else 
 if (#60###x#3463 Base.Printf. 0) 
 -Inf 
 else 
  Inf 
 end 
 end 
 end) 
 end 
 Base.Printf.nothing 
 end 
 end) 
 end) 

  -- John 

 On Apr 11, 2014, at 11:46 PM, Dominique Orban 
  dominiq...@gmail.com 
 wrote: 

  As a follow-up question, why is the following not allowed? 
  
  julia> fmt = "%8.1e"; 
  
  julia> @sprintf(fmt, 3.1415) 
  ERROR: first or second argument must be a format string 
  
  I don't see how it's different from 
  
  julia> @sprintf("%8.1e", 3.1415) 
  
  What's the appropriate syntax? 
  
  Thanks. 
  
  
  On Friday, April 11, 2014 11:24:50 PM UTC-7, Dominique Orban wrote: 
  Thank you! Such a basic operation could feature a bit more prominently 
 in the documentation. 
  
  
  On Friday, April 11, 2014 11:21:28 PM UTC-7, John Myles White wrote: 
  @sprintf 
  
  On Apr 11, 2014, at 11:18 PM, Dominique Orban dominiq...@gmail.com 
 wrote: 
  
  Sorry if this is a RTFM, but I can't find the answer in the 
 documentation or on the web. I may have missed it. I come from Python where 
 I can build strings with formatted data using a syntax like 
  
  s = "pi=%7.1e" % acos(-1) 
  
  How do I accomplish that in Julia? @printf doesn't do the job because 
 it doesn't return anything: 
  
  julia> s = @printf("%7.1e", 3.14) 
  3.1e+00 
  julia> s 
  
  
  
  
  Thanks. 
  
  



Re: [julia-users] Function naming idioms in Julia

2014-04-12 Thread Mike Innes
This is something of an unsolved problem at the moment, but one way to 
explore is methodswith(AType).
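
For example (the output depends on what you have loaded):

methodswith(Complex)  # every method with a Complex argument in its signature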

If you're looking for more flexible function chaining, I recommend having a 
look at the threading macros in Lazy.jl: 
https://github.com/one-more-minute/Lazy.jl#macros

On Saturday, 12 April 2014 06:45:43 UTC+1, jason-sage wrote:

 On 4/11/14, 17:49, Ben Racine wrote: 
  But, when a function really does logically belong to its first argument, 
  I (sometimes) find myself missing the function namespacing inherent to 
  those systems. I find myself wanting to do what one can do in R and 
  inject a '.' into the function name just for the illusion of 
 namespacing. 

 Thanks for asking this.  I've also been thinking about how things like 
 tab completion and other discoverability interfaces work in Julia.  In 
 python, for example, having that namespacing that you bring up above 
 makes tab completion on an object a very natural way to explore a 
 vocabulary of functions associated with the object.  How does Julia 
 usually address discoverability and interactive exploration of functions 
 (like with tab completion, etc.)? 

 Thanks, 

 Jason 



Re: [julia-users] Re: Create formatted string

2014-04-13 Thread Mike Innes
It occurs to me that, if you really need this, you can define

sprintf(args...) = eval(:@sprintf($(args...)))

It's not pretty or ideal in terms of performance, but it will do the job.

fmt = "%8.1e"
sprintf(fmt, 3.141) #=> " 3.1e+00"

On Sunday, 13 April 2014 22:47:12 UTC+1, Dominique Orban wrote:

 So what's the preferred Julia syntax to achieve what I meant here:

 julia> fmt = "%8.1e";
 julia> @sprintf(fmt, 3.1415)
 ERROR: first or second argument must be a format string



 On Sunday, April 13, 2014 1:31:57 PM UTC-7, John Myles White wrote:

 As far as the macro is concerned, the splat isn’t executed: it’s just 
 additional syntax that gets taken in as a whole expression. 

 The contrast between how a function with splatting works and how a macro 
 with splatting works might be helpful: 

 julia> function splat(a, b...) 
println(a) 
println(b) 
return 
end 
 splat (generic function with 2 methods) 

 julia> splat(1, 2, 3) 
 1 
 (2,3) 

 julia> splat(1, [2, 3]...) 
 1 
 (2,3) 

 julia> macro splat(a, b...) 
   println(a) 
   println(b) 
   :() 
   end 

 julia> @splat(1, 2, 3) 
 1 
 (2,3) 
 () 

 julia> @splat(1, [2, 3]...) 
 1 
 (:([2,3]...),) 
 () 


  — John 

 On Apr 13, 2014, at 1:20 PM, Jeff Waller trut...@gmail.com wrote: 

  Likewise I am having problems with @sprintf 
  
  Is this because @sprinf is macro?  The shorthand of expanding a printf 
 with format the contents of an array is desirable.  I would have expected 
 the ... operator to take an array of length 2 and turn it into 2 arguments. 
  
  julia> X=[1 2] 
 1x2 Array{Int64,2}: 
  1  2 
  
  julia> @sprintf("%d%d",1,2) 
  "12" 
  
  julia> @sprintf("%d%d",X...) 
  ERROR: @sprintf: wrong number of arguments 
  
  julia> @sprintf("%d%d",(1,2)...) 
  ERROR: @sprintf: wrong number of arguments 
  
  julia> @sprintf("%d",X...) 
  ERROR: error compiling anonymous: unsupported or misplaced 
 expression ... in function anonymous 
  in sprint at io.jl:460 
  in sprint at io.jl:464 
  
  julia> macroexpand(quote @sprintf("%d%d",X...) end) 
  :($(Expr(:error, ErrorException(@sprintf: wrong number of 
 arguments 
  



[julia-users] Re: A project for compact expressiveness

2014-04-14 Thread Mike Innes
People often talk about code concision as if it's a simple case of number 
of characters or lines per idea / function / project / whatever. And sure, 
SLOC is an indicator of how expressive a language is, but a fundamentally 
flawed one because it conflates two distinct issues: (1), the number of 
distinct concepts you need to express an idea, and (2), the number of 
characters you need to express those concepts.

map() isn't better than a for loop *because* it's short; it's better 
because it lets you express the same idea with only three concepts 
(functional + function + data). Shortness (in characters) is just a side 
effect.
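
To illustrate (using the current array constructor syntax):

squares = map(x -> x^2, 1:10)

# versus spelling the same idea out element by element:
squares = Array(Int, 10)
for i = 1:10
  squares[i] = i^2
end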

You couldn't improve Java by making every method and class name shorter, 
because then not only would you still need an insane number of concepts to 
write a simple hello world, but those concepts would be needlessly 
obfuscated. Take APL/J for contrast: these languages are concise both 
because of implicit function composition (cool) and because every function 
is one or two characters long (not so much). Julia hits the sweet spot 
here, I think, because it's expressive enough to be succinct without 
worrying about concision for its own sake.

In the real world, typing is never the bottleneck – it might be if function 
names were hundreds of characters, but for one, three, even ten, choose 
clarity over concision every time.


Re: [julia-users] Anyone created MIME type for julia files?

2014-04-29 Thread Mike Innes
I'm using text/julia at the moment. That's following CodeMirror's 
standard of using e.g. text/x-python, but with the x dropped because it's 
been deprecated.


[julia-users] Re: Bundling a command line utility with a package

2014-04-29 Thread Mike Innes
I'm not sure that packages should export main() functions – that's going to 
complicate things when a user wants to require CRC and define their own 
main().

How about leaving it unexported and using julia -e "using PKG; 
PKG.main(ARGS)"?

On Sunday, 27 April 2014 16:12:41 UTC+1, andrew cooke wrote:

 Just to follow-up on this.  The simplest way to bundle a command line 
 utility with a package seems to be:

  - use ArgParse as normal and define a main(ARGS) function
  - export the main() function from the package
  - tell the user to call or alias  julia -e "using PKG; main(ARGS)"

 This works because julia itself only interprets options *before* -e.  
 Options *after* -e are passed in ARGS.  I guess this is obvious in 
 retrospect - it couldn't really be any other way.

 Doing it this way means that only an alias or similar needs to be 
 defined.  The user doesn't need to know where any Julia-related files are 
 installed.

 Andrew



[julia-users] Re: Julian way to write longer if/elseif/else clauses?

2014-05-07 Thread Mike Innes
If you need deeper pattern matching Match.jl is a great option, but you may 
also be interested in the @switch macro that lives in Lazy.jl – it will 
have zero overhead for cases like your example.

https://github.com/one-more-minute/Lazy.jl#macros
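
(Separately – and only if the branch can be factored out into its own 
function, which isn't quite the case described below – the example also maps 
directly onto dispatch. A sketch, with a made-up describe helper:)

# Dispatch on the argument's type instead of branching on ndims
describe(wt::AbstractMatrix) = println("Matrix stuff")
describe(wt::AbstractVector) = println("Vector stuff")
describe(wt)                 = println("Scalar stuff")

describe(rand(2, 2))   # Matrix stuff
describe(rand(3))      # Vector stuff
describe(1.0)          # Scalar stuff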

On Friday, 2 May 2014 15:11:08 UTC+1, Cameron McBride wrote:

 I'm still trying to settle into proper syntax and style, and all comments 
 are welcome!

 For potentially longer if / elseif / else clauses, e.g. 

  # an overly simplistic example
  if ndims(wt) == 2
     println("Matrix stuff")
  elseif ndims(wt) == 1
     println("Vector stuff")
  else
     println("Scalar stuff")
  end

 Multiple dispatch doesn't seem to help the specific case I have in mind, 
 as this is only a small part of the logic.  

 This is perhaps as easy application of a case / switch statement, which I 
 don't think Julia has (I have ruby's case / when / else in mind).  

 Even more general, is there any mechanism to do pattern matching (e.g. 
 'match' in OCaml)?

 Cameron



Re: [julia-users] Re: Julia: First Month

2014-05-07 Thread Mike Innes
You could certainly define homoiconicity as zero syntactical distinction 
between data and code – Clojure would pass this test because writing code 
and writing a list are exactly the same, whereas Julia would fail it.

This definition is perfectly valid, so the problem isn't that it's 
incorrect so much as that it's a really uninteresting criterion. I for one 
am not nearly as interested in syntax as the ability to produce and 
manipulate expressions, which is something that both languages obviously 
share.

Perhaps a better definition would be the ability to quote language 
expressions, e.g. '(foo x y) or :(foo(x,y)). Clojure and Julia pass, 
JavaScript does not – and that actually tells you something about the 
abilities of the language.
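
To make the distinction concrete, here's a small sketch (plain Julia, with 
illustrative names) of quoting an expression and then treating it as data:

# Quote an expression rather than evaluating it
ex = :(foo(x, y))

# The result is ordinary data: a head plus args you can inspect and rewrite
ex.head            # :call
ex.args            # [:foo, :x, :y]
ex.args[1] = :bar  # the expression now stands for bar(x, y)

# ...and it can be handed back to the language to run
bar(a, b) = a * b
x, y = 2, 3
eval(ex)           # 6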



On Wednesday, 7 May 2014 12:48:03 UTC+1, Isaiah wrote:

 The word homoiconic was removed from the official docs because it was 
 not very informative and was occasionally contentious.


 On Wed, May 7, 2014 at 4:03 AM, Ivar Nesje iva...@gmail.com javascript:
  wrote:

 Great!
 Finally I got a definition of homoiconic that Julia does not fulfill. 
 According to Jason, homoiconicity has nothing to do with language 
 features/standard library features or how you are encouraged to structure 
 your code to get an idiomatic program, but is solely a vague measure of the 
 complexity of the parser for syntactic code-AST transformation.

 By that definition Julia fails, because the syntax is designed to be 
 simple for humans to write and read to express ideas, and the AST is designed 
 to be easy for a human to work with through a machine. The transformation 
 between the two is complex and not one-to-one.

 Ivar


 kl. 09:12:44 UTC+2 onsdag 7. mai 2014 skrev Jason Merrill følgende:

 Glad to hear that you've been enjoying picking up Julia. I've felt the 
 same way about Julia having a more gradual on-ramp than some other cool 
 languages. You don't have to wrap your head around a new paradigm to get 
 started, but there are lots of nice advanced features waiting for you when 
 you're ready.

 Re homoiconicity: I know a lot of people have been saying that Julia is 
 homoiconic, but I don't really buy it. Is the claim that any language that 
 gives you access to the parser and eval at runtime is homoiconic?

 Let's consider Javascript for a second. 5 years ago, I don't think 
 anyone would have said that Javascript is homoiconic. But now there are 
 javascript parsers like Esprima and Acorn written in javascript, so you can 
 take a string of JS code, parse it into an AST at runtime, traverse and 
 manipulate the AST the same way you traverse and manipulate other objects, 
 and then emit and eval new code. Did JS become homoiconic when someone 
 wrote Esprima? I don't really think so.

 In my mind, syntax is relevant to homoiconicity. For a language to be 
 homoiconic, the map from syntax to AST should be very simple; or in other 
 words, the parser should be fairly trivial. This is true of Lisp, but not 
 really true of Julia. Incidentally, the parsers of concatenative languages 
 like Forth are even simpler than for Lisp, and more modern concatenative 
 languages like Joy and Factor take advantage of this in really cool ways.

 All that said, I think the people who say Julia is homoiconic would be 
 right if they instead said that Julia gives you the power to do most of the 
 things that people take advantage of homoiconicity to do in homoiconic 
 languages. Meta-programming Julia really isn't too bad at all.

 On Tuesday, May 6, 2014 8:15:36 PM UTC-7, Abram Demski wrote:

 Hi all!

 I've been using Julia for a little over a month now, and I thought it 
 would be fun/informative to write a little post about my experience.

 I'm mostly a Common Lisp guy. I write AI code in an academic setting. 
 However, I recently became enthusiastic about Clojure and was trying to 
 start a project in that.

 Indecisive as usual, I continued poking around looking for the best 
 programming language to do the project in, even as I became fairly 
 committed to doing it in Clojure.

 I had heard about Julia some time ago (looked at it very briefly), but 
 looked at it a second time when a friend mentioned it on twitter earlier 
 this year.

 Looking at it again, I realized that:

 1) Julia is essentially a Lisp, IE, it is homoiconic (despite not 
 appearing so) and has the metaprogramming capabilities I am used to. (I 
 don't need these often, but if a language lacks it, I feel like I'm 
 missing 
 an essential tool.)
 2) Julia's type system and multiple dispatch capabilities give me much 
 of what I liked about Clojure's multimethods and protocols.
 3) Julia is significantly faster (at least for many things).

 I decided to start hacking out my project in Julia, abandoning the 
 Clojure code I had started.

 After using it for a bit, I feel like it's been much easier to pick up 
 what I need to know than it was with Clojure. Both Julia and Clojure have 
 the deep, elegant stuff I like in a programming 

Re: [julia-users] Re: Julia: First Month

2014-05-07 Thread Mike Innes
I see what you're saying – in principle any Turing complete language can
have metaprogramming capabilities, in the limiting case by implementing an
interpreter for the language within itself.

But the logical extension of this attitude is that all languages are the
same, which clearly isn't right – or we'd all be using assembly. So I'm not
thinking about what's possible so much as what's accessible, which
admittedly isn't a wholly objective thing. For example: there are libraries
designed to make functional programming possible in Java, but you couldn't
really argue that Java is a functional programming language.

I actually think JavaScript is a pretty nice language for metaprogramming,
but the techniques you'd naturally use in JavaScript and Julia are very
different from each other. You don't need expressions and eval when you can
inject any closure into any object (although JavaScript still sorely misses
macros).


On 7 May 2014 15:24, Jason Merrill jwmerr...@gmail.com wrote:

 On Wednesday, May 7, 2014 6:37:20 AM UTC-7, Mike Innes wrote:

 You could certainly define homoiconicity as zero syntactical distinction
 between data and code – Clojure would pass this test because writing code
 and writing a list are exactly the same, whereas Julia would fail it.

 This definition is perfectly valid, so the problem isn't that it's
 incorrect so much as that it's a really uninteresting criterion. I for one
 am not nearly as interested in syntax as the ability to produce and
 manipulate expressions, which is something that both languages obviously
 share.


 I think being able to produce and manipulate expressions means being able
 to parse the language into a tree, and then manipulate the tree. Are there
 any languages that make it actually impossible to implement a parser for
 themselves? Pretty much every language can manipulate trees. Of course once
 you've manipulated a tree, you want to be able to execute the new tree, and
 not every language has eval, so that at least seems like a useful
 distinction.


 Perhaps a better definition would be the ability to quote language
 expressions, e.g. '(foo x y) or :(foo(x,y)). Clojure and Julia pass,
 JavaScript does not – and that actually tells you something about the
 abilities of the language.


 Do you see a fundamental distinction between :(foo(x,y)), and
 parse(foo(x, y))? I agree the first one is aesthetically preferable, but
 they both get the same job done, right? And it's possible to make the
 second one in JS, even though JS doesn't have the first one as a
 syntactical construct.

 Another nice thing that Julia has going for it is that it's very easy to
 traverse the AST. Everything is either a leaf, or is an expression with a
 head and args, so you can walk the whole tree by walking all the args of
 every expression you encounter.
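
(A concrete sketch of that head/args walk – an illustrative symbols helper 
that collects every symbol in an expression by recursing on args:)

symbols(ex::Symbol) = [ex]                            # a leaf we care about
symbols(ex::Expr)   = vcat(map(symbols, ex.args)...)  # walk every arg
symbols(ex)         = Symbol[]                        # literals, other leaves

symbols(:(a*x^2 + b*x + c))   # [:+, :*, :a, :^, :x, :*, :b, :x, :c]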

 Some ASTs aren't like this: e.g. the JS AST that people are settling on
 has different names for the args of expressions with different heads,
 so you need a lot more logic to make sure you traverse everything. For
 example, their IfStatement has the properties: type (the head), test,
 consequent, and alternate (the three args) [1]. You have to know all these
 names when you encounter an IfStatement in order to keep walking all of its
 children. But that isn't really a feature of the language, it's just a
 feature of a particular AST representation that some people are starting to
 use. It would certainly be possible to create a more Julia-like AST for JS
 without changing the language.

 It's hard to talk about things like the definition of homoiconic without
 appearing to be trolling, but it actually is useful (to me anyway) to get
 to the bottom of which features are critical for effective
 meta-programming, and which features aren't.

 [1]
 https://developer.mozilla.org/en-US/docs/Mozilla/Projects/SpiderMonkey/Parser_API#Statements


 On Wednesday, 7 May 2014 12:48:03 UTC+1, Isaiah wrote:

 The word homoiconic was removed from the official docs because it was
 not very informative and was occasionally contentious.


 On Wed, May 7, 2014 at 4:03 AM, Ivar Nesje iva...@gmail.com wrote:

 Great!
 Finally I got a definition of homoiconic that Julia does not fulfill.
 According to Jason, homoiconicity has nothing to do with language
 features/standard library features or how you are encouraged to structure
 your code to get an idiomatic program, but is solely a vague measure of the
 complexity of the parser for syntactic code-AST transformation.

 By that definition Julia fails, because the syntax is designed to be
 simple for humans to write and read to express ideas, and the AST is designed
 to be easy for a human to work with through a machine. The transformation
 between the two is complex and not one-to-one.

 Ivar


 kl. 09:12:44 UTC+2 onsdag 7. mai 2014 skrev Jason Merrill følgende:

 Glad to hear that you've been enjoying picking up Julia. I've felt the
 same way about Julia having a more gradual on-ramp than some other cool

[julia-users] Re: [newb] objects with state

2014-05-07 Thread Mike Innes
As a rule, you generally want to translate

object.func(args...)

into

func(object, args...)

State can be kept in the fields of the object, and accessed using the 
dot-notation as above. Julia doesn't have class-base inheritance but you 
can still do objects just fine, albeit with slightly different notation.

I'm not sure if I've understood you correctly – does that help?
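
A minimal sketch of that translation (the Counter type and bump! function are 
made-up names, not anything from Base):

# State lives in the fields of a (mutable) composite type...
type Counter
    n::Int
end

# ...and "methods" are just functions that dispatch on that type
bump!(c::Counter)    = (c.n += 1; c)
count_of(c::Counter) = c.n

c = Counter(0)
bump!(c); bump!(c)
count_of(c)   # 2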

On Wednesday, 7 May 2014 16:46:19 UTC+1, Neal Becker wrote:

 I've always used the following rule in languages such as python and c++ 

 If an object has state, use a class.  Otherwise use a function. 

 In languages lacking classes (and objects) e.g., FORTRAN, state must be 
 maintained outside of the object.  This is ugly and error prone. 

 How is this addressed in julia? 



Re: [julia-users] Julia: First Month

2014-05-08 Thread Mike Innes
Am I right in thinking that the issues you have with function call
overloading are equivalent to those involved with multiple inheritance? In
the sense that overloading function application effectively makes an array
(for example) a member of both AbstractArray and Callable, which will
create ambiguities in functions which dispatch on both.

In that case, it may be best to separate these two issues. So you can
overload x(y) and it will work when you directly call x, but x::Callable
will fail. Then you declare x::Callable, and the multiple inheritance part of
the type system kicks in to help you resolve ambiguities.

Another option (in terms of multiple inheritance) might be to have some
kind of priority for parents: x is an AbstractArray first, and a Callable
second, i.e. if there's any ambiguity dispatch as AbstractArray. In a sense
it's the same as the way children are already prioritised over parents. This
would make multiple inheritance very low-effort at the user level, but the
implicit-ness might end up being a bad idea if it makes things hard to reason
about (I don't know if it will).


On 8 May 2014 07:47, Stefan Karpinski stefan.karpin...@gmail.com wrote:

 Thanks for reporting these first impressions – I find this sort of thing
 really helpful and interesting. We did try really hard to make the learning
 curve for Julia very gentle, the idea being that people who don't care
 about fancy language features should be able to use it productively too and
 learn about new feature only when they become useful. It's nice that this
 comes across.

 The business of function application and array indexing being different
 has come up a number of times recently – someone mentioned it to me in
 comparison with J/APL programming. And obviously in Matlab they're the same
 syntax, but since neither functions nor arrays can be used in a higher
 order fashion, that hardly matters. It makes me wonder about a design where
 they're the same thing and what problems that would solve and introduce. It
 would require some degree of overloading of function application syntax,
 which we've contemplated but don't do. In the meantime, we can pretty
 easily leverage multiple dispatch to make map(a, v) work as desired when a
 is an array. Which highlights one obvious issue: in a language with
 multiple dispatch, if lots of things are callable as functions, it makes a
 lot of higher order programming signatures pretty ambiguous. Map is easy –
 the first argument is function-like – but other functions that have higher
 order and non-higher-order variants require more thought. Is it better to
 make function application and array indexing the same thing and have that
 similarity fall out everywhere but also force users to deal with that
 ambiguity when writing method signatures, or is it better to go through the
 standard library and make sure that every place where a function could be
 used in a way that an array would also make sense, we allow it? I'm
 honestly not sure.

 On May 6, 2014, at 11:15 PM, Abram Demski abramdem...@gmail.com wrote:

 Hi all!

 I've been using Julia for a little over a month now, and I thought it
 would be fun/informative to write a little post about my experience.

 I'm mostly a Common Lisp guy. I write AI code in an academic setting.
 However, I recently became enthusiastic about Clojure and was trying to
 start a project in that.

 Indecisive as usual, I continued poking around looking for the best
 programming language to do the project in, even as I became fairly
 committed to doing it in Clojure.

 I had heard about Julia some time ago (looked at it very briefly), but
 looked at it a second time when a friend mentioned it on twitter earlier
 this year.

 Looking at it again, I realized that:

 1) Julia is essentially a Lisp, IE, it is homoiconic (despite not
 appearing so) and has the metaprogramming capabilities I am used to. (I
 don't need these often, but if a language lacks it, I feel like I'm missing
 an essential tool.)
 2) Julia's type system and multiple dispatch capabilities give me much of
 what I liked about Clojure's multimethods and protocols.
 3) Julia is significantly faster (at least for many things).

 I decided to start hacking out my project in Julia, abandoning the Clojure
 code I had started.

 After using it for a bit, I feel like it's been much easier to pick up
 what I need to know than it was with Clojure. Both Julia and Clojure have
 the deep, elegant stuff I like in a programming language; however, it seems
 like Clojure creates a rich, interlocking set of concepts which you *must* 
 learn
 in order to write very much code, whereas Julia has a gentle learning
 curve, facilitating normal programming and allowing the user to learn the
 deeper features as they become useful. At least, that's been my feeling.

 Monkeying around with the metaprogramming *has* taught me that it's a
 *bit* less convenient than Lisp. Thinking about expr.head and expr.args
 is not as 

[julia-users] [ANN] - Markdown.jl

2014-05-11 Thread Mike Innes
Hey all,

Markdown.jl (https://github.com/one-more-minute/Markdown.jl) is available
to try with Pkg.clone("Markdown"). It's very much in beta but it's still
pretty useful if (for example) you want to quickly look at a package's
readme.

Parsed markdown should display nicely (ish) in the terminal as well as in
IJulia, Light Table etc. Contributions are welcome, of course – once things
are more feature-complete I'll try to get the Markdown test suite running.

Enjoy!

– Mike


[julia-users] Re: Translate Julia into another language

2014-05-11 Thread Mike Innes
This is certainly possible, but I imagine it's not going to happen except 
as an outside project.

Making a compiler like this isn't exactly easy, either – you'd probably 
have to do a lot of reaching into Julia's C internals to get to the 
type-inferred code.

You're welcome to try this, and I'm sure that you'd be able to find help on 
the mailing list for the trickier parts, but it might be best to wait until 
we have static compilation and link against Julia code as a library – you 
can avoid the run time that way.
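
As a starting point, the type-inferred and compiled forms are already visible 
from the Julia side – a quick illustration (f here is just an example 
function):

f(x) = 2x + 1

# Inferred AST for the method matching these argument types – roughly the
# representation a Julia-to-C/Fortran translator would want to start from
code_typed(f, (Float64,))

# Lower-level views, for comparison
code_llvm(f, (Float64,))
code_native(f, (Float64,))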

– Mike

On Sunday, 11 May 2014 15:43:14 UTC+1, francoi...@gmail.com wrote:

 Hi Julia users,

 As a newcomer, let me first introduce myself. I have some experience in 
 numeric code, written mostly in Fortran 2008, C++, C#, Delphi and 
 Mathematica. As a consultant, I have to write some numerical codes but 
 those need to be written in languages that are different from clients to 
 clients. Most of the time, the code can't be linked to any open source 
 library for licensing (or management) reasons. Therefore, I need to rewrite 
 some basic codes in a lot of different languages. Bug fixed in one language 
 are a pain to fix in another one.

 Therefore, it would be nice to have a unified language for prototyping. 
 Julia seems to be an amazing candidate for different reasons :
 - Execution speed in the same order as compiled languages
 - I love the type system and the multiple dispatch idea
 - The language is quite close to all the above languages when it comes to 
 number crunching
 - Its reflection capabilities

 As Julia code can be manipulated by Julia, I am wondering if translating 
 Julia code to any of those language could be mostly automated. With 
 Mathematica, it is quite easy to generate code from mathematical 
 expressions, which is an extremely powerful way of preprocessing symbolic 
 expressions before the big number crunching.

 I am thinking of a Julia function that takes as input :
 - A Julia function that use only function whose concrete output types can 
 be inferred from their concrete input types
 - Some concrete types
 - A language : Fortran, C, C#, Delphi
 and output some code in that language.

 Even if this process can't be completely automated, it there a way to make 
 have a code helping at the translation ?

 Best regards to all of you,
 François



Re: [julia-users] Re: [ANN] Julia + Light Table!

2014-05-13 Thread Mike Innes
Weird. Given that this is a Julia issue, I'd either open an issue on github 
or start a new thread on the mailing list – you'll get much better help 
than I can offer you alone.

On Tuesday, 13 May 2014 12:24:54 UTC+1, Robert Feldt wrote:

 You are right, it cannot even load the file. That is indeed very weird. I 
 run this on a MacBook Pro Retina (2013), latest Mac OS X, in a bash shell 
 inside of iTerm 2.

 Details and julia versions below:

 feldt:~$ julia /Users/feldt/Library/Application 
 Support/LightTable/plugins/Jewel/jl/init.jl
 ERROR: could not open file /Users/feldt/Library/Application
  in include at boot.jl:238
  in include_from_node1 at loading.jl:114
  in process_options at client.jl:303
  in _start at client.jl:389

 feldt:~$ julia -v
 julia version 0.2.0
 feldt:~$ julia03 /Users/feldt/Library/Application 
 Support/LightTable/plugins/Jewel/jl/init.jl
 ERROR: could not open file /Users/feldt/Library/Application
  in include at boot.jl:243
  in include_from_node1 at loading.jl:120
  in process_options at client.jl:320
  in _start at client.jl:382

 feldt:~$ julia03 -v
 julia version 0.3.0-prerelease+2157



 Den måndagen den 12:e maj 2014 kl. 19:51:46 UTC+2 skrev Mike Innes:

 So, Light Table calls a command which looks like julia 
 /Users/feldt/Library/Application 
 Support/LightTable/plugins/Jewel/jl/init.jl [port] [id].

 From the looks of it finding Julia isn't the problem – it's just that 
 Julia itself is choking on the space and can't find the init.jl file. This 
 is really odd, because it works just fine on my (also OS X) system.

 Can you verify that this command works from the terminal? (Leave off the 
 port and id; you know it's working if you get a BoundsError()). 
 Presumably you're using the latest Light Table/Jewel – how about Julia 
 itself?


 On 12 May 2014 13:17, Robert Feldt robert...@gmail.com wrote:

 I'd like to try this out but after fresh install of Lighttable and then 
 install of Julia plugin using LT's plugin manager I restart LT and get:

 Couldn't connect to Julia

 ERROR: could not open file /Users/feldt/Library/Application
  in include at boot.jl:238
  in include_from_node1 at loading.jl:114
  in process_options at client.jl:303
  in _start at client.jl:389

 julia is on my path and the instructions on the Jewel github page do 
 not help since I do not know much about LT and don't know where I should:

 Either make sure julia is on your path or set the :app behaviour 
 (:lt.objs.langs.julia/julia-path 
 /path/to/julia).

 Seems it is trying to access something in the Application support 
 directory but does not handle the space in there correctly?

 Would appreciate advice,

 Robert Feldt

 Den måndagen den 31:e mars 2014 kl. 16:55:29 UTC+2 skrev Mike Innes:

 Thanks!

 I felt that the cloning suggestion was polluting the instructions, but 
 I'll add it back in its own section when I can. All you have to do is 
 clone 
 the Jewel repo into one of the following folders:

 OS X: ~/Library/Application Support/LightTable/plugins/
 Linux: ~/.config/LightTable/plugins/
 Windows: %APPDATALOCAL%/LightTable/plugins/


 On 31 March 2014 15:50, Andrew Dabrowski unhan...@gmail.com wrote:

 Issue added.

 I notice you removed the suggestion that Jewel could be used from a 
 git clone.  Is there no way to make that work in LT?






[julia-users] Re: Macro question

2014-05-15 Thread Mike Innes
I think your second snippet must have gotten a bit muddled, since `expr` 
should end up with the value 5.

macro createVar(name, value)
  quote
$name = $value;
  end
end

expr = @createVar foo 5
# This is equivalent to `expr = (foo = 5)`, *not* `expr = :(foo = 5)`

expr == 5

If you do want `createVar` to return an expression, it should be a function 
instead of a macro. Maybe try running the example again to check it's 
behaving in the expected way?


On Thursday, 15 May 2014 12:29:13 UTC+1, Abe Schneider wrote:

 As an experiment I wrote a simple macro to set a variable:

 macro createVar(name, value)
   eval(quote
 $name = $value;
   end)
 end

 which works as expected:

  @createVar foobar 5;
  println("foobar = $foobar"); # foobar = 5 (OK)

 However, if I instead do:

 macro createVar(name, value)
   quote
 $name = $value;
   end
 end

 expr = @createVar(foobar, 5);

  println("$expr"); # foobar = 5 (OK)

 # now evaluate the expression to do the actual assignment
 eval(expr);
  println("foobar = $foobar");

 I get ERROR: foobar not defined. I would expect that if I do the eval 
 outside of the macro I should get the same result as doing the eval inside 
 the macro. Is this expected behavior?

 I should add that I'm using a version of 0.3 from the repository.



[julia-users] Re: Macro question

2014-05-15 Thread Mike Innes
Alternatively

macroexpand(:@createVar(foo, 5))

Might have the desired behaviour.


On Thursday, 15 May 2014 12:51:15 UTC+1, Mike Innes wrote:

 I think your second snippet must have gotten a bit muddled, since `expr` 
 should end up with the value 5.

 macro createVar(name, value)
   quote
 $name = $value;
   end
 end

 expr = @createVar foo 5
 # This is equivalent to `expr = (foo = 5)`, *not* `expr = :(foo = 5)`

 expr == 5

 If you do want `createVar` to return an expression, it should be a 
 function instead of a macro. Maybe try running the example again to check 
 it's behaving in the expected way?


 On Thursday, 15 May 2014 12:29:13 UTC+1, Abe Schneider wrote:

 As an experiment I wrote a simple macro to set a variable:

 macro createVar(name, value)
   eval(quote
 $name = $value;
   end)
 end

 which works as expected:

  @createVar foobar 5;
  println("foobar = $foobar"); # foobar = 5 (OK)

 However, if I instead do:

 macro createVar(name, value)
   quote
 $name = $value;
   end
 end

 expr = @createVar(foobar, 5);

  println("$expr"); # foobar = 5 (OK)

 # now evaluate the expression to do the actual assignment
 eval(expr);
  println("foobar = $foobar");

 I get ERROR: foobar not defined. I would expect that if I do the eval 
 outside of the macro I should get the same result as doing the eval inside 
 the macro. Is this expected behavior?

 I should add that I'm using a version of 0.3 from the repository.



[julia-users] Re: Macro question

2014-05-15 Thread Mike Innes
Oh, the other thing I should point out (sorry to post repeatedly) is that 
the macro hygiene pass will mean that `foo` is not actually defined by this 
macro. You probably want:

macro createVar(name, value)
  quote
$(esc(name)) = $value;
  end
end

See the hygiene section of the manual: 
http://docs.julialang.org/en/latest/manual/metaprogramming/#hygiene
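
A quick usage check of the escaped version (same macro as above, shown with a 
call at the top level):

macro createVar(name, value)
  quote
    $(esc(name)) = $value
  end
end

@createVar foo 5
foo   # 5 – the assignment now happens in the calling scope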

On Thursday, 15 May 2014 12:53:13 UTC+1, Mike Innes wrote:

 Alternatively

 macroexpand(:@createVar(foo, 5))

 Might have the desired behaviour.


 On Thursday, 15 May 2014 12:51:15 UTC+1, Mike Innes wrote:

 I think your second snippet must have gotten a bit muddled, since `expr` 
 should end up with the value 5.

 macro createVar(name, value)
   quote
 $name = $value;
   end
 end

 expr = @createVar foo 5
 # This is equivalent to `expr = (foo = 5)`, *not* `expr = :(foo = 5)`

 expr == 5

 If you do want `createVar` to return an expression, it should be a 
 function instead of a macro. Maybe try running the example again to check 
 it's behaving in the expected way?


 On Thursday, 15 May 2014 12:29:13 UTC+1, Abe Schneider wrote:

 As an experiment I wrote a simple macro to set a variable:

 macro createVar(name, value)
   eval(quote
 $name = $value;
   end)
 end

 which works as expected:

  @createVar foobar 5;
  println("foobar = $foobar"); # foobar = 5 (OK)

 However, if I instead do:

 macro createVar(name, value)
   quote
 $name = $value;
   end
 end

 expr = @createVar(foobar, 5);

  println("$expr"); # foobar = 5 (OK)

 # now evaluate the expression to do the actual assignment
 eval(expr);
  println("foobar = $foobar");

 I get ERROR: foobar not defined. I would expect that if I do the eval 
 outside of the macro I should get the same result as doing the eval inside 
 the macro. Is this expected behavior?

 I should add that I'm using a version of 0.3 from the repository.



Re: [julia-users] for loops

2014-05-15 Thread Mike Innes
You may also be interested in zip:

for (file, i) in zip(files, 1:length(files))
  println(file, " ", i)
end

# Even better:
using Lazy

for (file, i) in zip(files, range())
  println(file, " ", i)
end

Enumerate is definitely the best solution here, but zip is more general if 
you find yourself bumping against similar problems.
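
(For reference, the enumerate form mentioned above yields (index, value) 
pairs:)

files = ["a.txt", "b.txt", "c.txt"]   # stand-in for the real file list

for (i, file) in enumerate(files)
  println(i, ": ", file)
end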


On Thursday, 15 May 2014 14:00:09 UTC+1, Michele Zaffalon wrote:

 What about enumerate: 
 http://docs.julialang.org/en/latest/stdlib/base/?highlight=enumerate#Base.enumerate?


 On Thu, May 15, 2014 at 2:48 PM, Yakir Gagnon 12.y...@gmail.comjavascript:
  wrote:

 I love the 
 for file in files
 ... do something with file ...
 end

 syntax. But sometimes it's really useful to be able to have an iterator 
 accessible in the for loop, like:

 for file in files
 ... do something with file ...
 ... and with i that equals find(file == files) ...
 end


 Is there something built in like that, other than the usual way of:

 for i = 1:length(files)
 ... do something with files(i) ...
 ... and with i ...
 end

 ?




Re: [julia-users] for loops

2014-05-15 Thread Mike Innes
Well, if you want the first syntax you can easily define

Base.enumerate(f::Function, args...) = map(t->f(t...), enumerate(args...))

You could always open a pull request if you wanted to see this in Base, too.

On Thursday, 15 May 2014 21:18:31 UTC+1, Cameron McBride wrote:

 I missed enumerate() for a while,  and was happy I found it.  I find it 
 amusing how satisfying a few missing keystrokes can be.  

 On a related but different note, from a similar influence, I keep wanting 
 to pass blocks to iterators.  Any chance that will ever happen?

 I realize that do..end blocks are used currently as syntactic sugar for 
 methods that take a function as the first arg (e.g. open(), map()), and the 
 same functionality can be achieved with three letters and two braces (map), 
 but it still seems somewhat cleaner to write: 

 enumerate(a) do i,x
 ...
 end

 over
  
 map(enumerate(a)) do i,x
 ...
 end

 which are really just equivalent, as we know, to

  for (i, x) in enumerate(a) 
 ...
 end

 Are there technical reasons this is a bad idea to assume?

 Cameron

 On Thu, May 15, 2014 at 1:01 PM, John Myles White 
 johnmyl...@gmail.comjavascript:
  wrote:

 I kind of suspect Stefan, like me, would instinctively call this 
 operation `each_with_index`.

  -- John

 On May 15, 2014, at 6:33 AM, Kevin Squire kevin@gmail.comjavascript: 
 wrote:

 One nice thing about Julia is that she borrows many (though not all) good 
 ideas from other languages. In this case, enumerate came from Python 
 (although it likely has other incarnations).

 Cheers!
Kevin 

 On Thursday, May 15, 2014, Billou Bielour jonatha...@epfl.chjavascript: 
 wrote:

 I was thinking the same thing the other day, when using *for x in xs* I 
 often find myself needing an index at some point and then I have to change 
 the for loop, or write an index manually.

 Enumerate is exactly what I need in this case. 

 +1 for Julia





Re: [julia-users] for loops

2014-05-15 Thread Mike Innes
Ah, I see what you mean. I'm not sure if it's possible, though.

You'd have to determine whether f() do ... meant function with do block
or iterator with do block, which isn't possible in general. So at least
you'd need special syntax for it, and by that point it's probably easier to
stick with map.


On 15 May 2014 22:33, Cameron McBride cameron.mcbr...@gmail.com wrote:

 Sure, Mike.  But the idea is to have this for all iterator objects
 intrinsically rather than defining it for each function that returns an
 iterator.

 There is likely a way to do this automagically for all iterators, but my
 julia-fu isn't strong enough that it jumped out at me when I looked over
 some source in base/.  I expect it's simple, but I don't have time to
 figure it out today.

 Cameron



 On Thu, May 15, 2014 at 4:51 PM, Mike Innes mike.j.in...@gmail.comwrote:

 Well, if you want the first syntax you can easily define

  Base.enumerate(f::Function, args...) = map(t->f(t...), enumerate(args
  ...))

 You could always open a pull request if you wanted to see this in Base,
 too.


 On Thursday, 15 May 2014 21:18:31 UTC+1, Cameron McBride wrote:

 I missed enumerate() for a while,  and was happy I found it.  I find it
 amusing how satisfying a few missing keystrokes can be.

 On a related but different note, from a similar influence, I keep
 wanting to pass blocks to iterators.  Any chance that will ever happen?

 I realize that do..end blocks are used currently as syntactic sugar for
 methods that take a function as the first arg (e.g. open(), map()), and the
 same functionality can be achieved with three letters and two braces (map),
 but it still seems somewhat cleaner to write:

 enumerate(a) do i,x
 ...
 end

 over

 map(enumerate(a)) do i,x
 ...
 end

 which are really just equivalent, as we know, to

  for (i, x) in enumerate(a)
 ...
 end

 Are there technical reasons this is a bad idea to assume?

 Cameron

 On Thu, May 15, 2014 at 1:01 PM, John Myles White 
 johnmyl...@gmail.comwrote:

 I kind of suspect Stefan, like me, would instinctively call this
 operation `each_with_index`.

  -- John

 On May 15, 2014, at 6:33 AM, Kevin Squire kevin@gmail.com wrote:

 One nice thing about Julia is that she borrows many (though not all) good
 ideas from other languages. In this case, enumerate came from Python
 (although it likely has other incarnations).

 Cheers!
Kevin


 On Thursday, May 15, 2014, Billou Bielour jonatha...@epfl.ch wrote:

 I was thinking the same thing the other day, when using *for x in xs* I
 often find myself needing an index at some point and then I have to change
 the for loop, or write an index manually.

 Enumerate is exactly what I need in this case.

 +1 for Julia







Re: [julia-users] Re: Accessing the loop variable (and pi as float)

2014-05-17 Thread Mike Innes
Well, you could do this by defining another method on the (more specific)
Integer type:

test(x::Real) = x*x
test(x::Integer) = error()

There's also the FloatingPoint type, but that excludes pi.

I have to say, though, that it seems odd that you'd want to do this, seeing
as integers are effectively a special case of floating point numbers.
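
A quick check of how those two methods dispatch (the values are arbitrary):

test(x::Real)    = x*x
test(x::Integer) = error()

test(2.5)   # 6.25 – hits the Real method
test(pi)    # also hits the Real method: MathConst is a Real, but not an Integer
test(3)     # ERROR – caught by the more specific Integer method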


On 17 May 2014 13:40, Hans W Borchers hwborch...@gmail.com wrote:

 Thanks, Mike, for the prompt answer.
 But what if i want to explicitly exclude integers.
 I think with Real I would allow them.


 On Saturday, May 17, 2014 2:13:09 PM UTC+2, Mike Innes wrote:

 I think your first example is right, although someone may well correct me
 on that. That's how I've done similar things, anyway.

 As for your second example, this is happening because your type parameter
 is overly restrictive – if you use ::Real it works as you would expect.

 Conversions like this will certainly take place where they make sense,
 but they don't really make sense here – it has to be assumed that you've
 asked for a Float64 because you specifically need one.

 – Mike

 On Saturday, 17 May 2014 12:59:02 UTC+1, Hans W Borchers wrote:

  Yesterday I implemented a function calculating the arc length of curves (to
  the last digit) when I came across the following stumbling blocks. Imagine
  the following function, where I leave a for-loop with a 'break' statement:

 function testfun1(x::Vector{Float64})
 for i = 1:length(x)
 if x[i] == 0.0
 break
 end
 end
 return i-1
 end

  julia> testfun([1.0, 2.0, 3.0, 0.0, 1.0])
 ERROR: i not defined
  in testfun at none:7

 I understand that the scope of the loop variable is restricted to the
 loop itself. What is the best way to export  i  to the outside? For
 the moment I settled with defining i before the loop.

 function testfun2(x::Vector{Float64})
 local i::Int
 for i = 1:length(x)
 if x[i] == 0.0
 break
 end
 end
 return i-1
 end

 This works, but I must admit it runs against my gut feeling and
 experience with other scientific programming languages.


 The next shock was the following:

 function testfun(x::Float64)
 return x^2
 end

  julia> testfun(pi)
 ERROR: no method testfun(MathConst{:π})

  Again, I learned that I can use testfun(float(pi)), but my feeling 
 would be that pi should be converted to float automatically whenever
 the context requires it. On the mailing list I think I have seen other
 complaints about this. I would prefer that  pi  and MathConst{:π} (or
 even better π alone) were different objects.




Re: [julia-users] Re: Design patterns for an API

2014-05-17 Thread Mike Innes
That macro being slow at the top level isn't really a strike against the 
macro technique, because it's easily resolved:

(Although oddly enough, a let binding doesn't really help here – anyone 
know why?)

macro sumf(f, xs)
  quote
function inner(s = 0.0, x = $(esc(xs)))
  for i = 1:length(x)
s += $(esc(f))(x[i])
  end
  s
end
inner()
  end
end

Although you're right that if you call this from a function which itself 
takes f as a parameter, you'll lose the speed boost again (technically you 
could resolve this by making the calling function a macro in the same way, 
but that's equally awful).

Hopefully this kind of inlining will be automatic soon, so we won't have to 
resort to ugly language hacks to make things fast; on the other hand, it's 
kinda cool that we *can* resort to ugly language hacks to make things fast 
:)
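
(For reference, a usage sketch of the wrapped version above – sinc_plus_x 
stands in for whatever function the user supplies:)

sinc_plus_x(x) = sinc(x) + x
xs = rand(1000)

@sumf(sinc_plus_x, xs)   # expands to a specialised inner function and calls it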


On Saturday, 17 May 2014 16:18:15 UTC+1, Tim Holy wrote:

 If you want to make that fast, you need to wrap that inside a function, 
 using 
 a separate name for each user-supplied f. Example: 

 function sumf_with_sinc_plus_x(xs) 
 @sumf(sinc_plus_x, xs) 
 end 

 function sumf_with_exp(xs) 
 @sumf(exp, xs) 
 end 

 If you don't wrap it in a function, then it runs in global scope, and 
 that's 
 horribly slow. 

 My version allows you to pass a function as an argument, and mimics what 
 it 
 would be like if we could pass functions-as-arguments without a 
 performance 
 penalty. 

 --Tim 

 On Saturday, May 17, 2014 05:03:07 AM Mike Innes wrote: 
  I may be missing the point here, but wouldn't it be easier to define 
 sumf 
  as a macro? 
  
  macro sumf(f, xs) 
quote 
  s = 0.0 
  x = $(esc(xs)) 
  for i = 1:length(x) 
s += $(esc(f))(x[i]) 
  end 
  s 
end 
  end 
  
  @sumf(sinc_plus_x, x) 
  
  This is just as fast and has the advantage that it will work when f is 
 only 
  in the local scope. 
  
  On Saturday, 17 May 2014 11:50:56 UTC+1, Tim Holy wrote: 
   On Friday, May 16, 2014 02:36:03 PM francoi...@gmail.com 
 javascript:wrote: 
- The solver need to be fast and for that, inlining is of paramount 
importance. I know that there is no way to inline F for the time 
 being. 
   
   Do 
   
we expect inlining on function argument in the near future of Julia 
 ? 
   
   I can't speak for when this will happen in a nice way, but using 
 Julia's 
   metaprogramming capabilities there is a (somewhat ugly) way to get 
 what 
   you 
   want. Since I'm planning to use this trick myself shortly, I created a 
   little 
   demonstration: 
   
   https://gist.github.com/timholy/bdcee95f9b7725214d8b 
   
   If you prefer, you don't have to separate out the definition of the 
 body 
   from 
   the eval(quote...end) part, I just did it that way to better 
 illustrate 
   the 
   separate ideas. 
   
   --Tim 



[julia-users] Re: GSOC 3D Visualizations plotting API - Make a wish!

2014-05-17 Thread Mike Innes
You might want to have a look through matplotlib's 3D API – personally I'd 
really like to see basic 3D plotting working really well.

http://matplotlib.org/1.3.1/mpl_toolkits/mplot3d/tutorial.html

Looking forward to seeing what you come up with!

On Saturday, 17 May 2014 17:51:37 UTC+1, Simon Danisch wrote:

 Hi, 
 I'm currently in the planning phase for my GSOC 3D Visualization project, 
 which also means, that I need to define what the most important 
 visualization forms are.
 I must admit, that I haven't done much plotting myself, so I would have to 
 guess what the really important bits are.
  Instead of slavishly imitating Matlab's plot functions with some mix-ins 
  from my side, I thought we can do better by getting feedback from the people 
  who actually plan to use 3D plotting in Julia!
 It would help me a lot, if you could specify what you need exactly in 
 great detail.
 Just tell me what you hate about current solutions, what features you 
 really like, the format of your data, how you like to work, etc...
  That way, I can find out what needs to be done in order to visualize your 
  data, rate the importance and difficulty, and then decide in which order I 
  implement the different plotting capabilities.
 Any feedback, ideas and comments are welcome!

 Best wishes,
 Simon



Re: [julia-users] Re: Design patterns for an API

2014-05-18 Thread Mike Innes
Fair point. I timed this in a loop and it seems to be about an order of 
magnitude slower, which (considering that you're redefining the function 
every time it runs) is actually surprisingly good – it seems that doing so 
only takes a microsecond.

On Saturday, 17 May 2014 21:20:34 UTC+1, Tim Holy wrote:

 Right, but there are still issues with this approach. Are you planning on 
 always executing this function as a macro? If so, then you'll have to 
 pay 
 the compilation price each time you use it. You might not care if xs is 
 huge, 
 but if xs has 10 items then you won't be very happy. 

 The performance of the dict-based method cache is not ideal in such 
 circumstances either, but it's about three orders of magnitude faster: 

 julia y = rand(10); 

 julia @time @sumf exp y 
 elapsed time: 0.01007613 seconds (131500 bytes allocated) 
 19.427283165919906 

 julia @time @sumf exp y 
 elapsed time: 0.011217276 seconds (131500 bytes allocated) 
 19.427283165919906 

 In contrast, 

 julia @time sumf(:exp, y) 
 elapsed time: 0.014134811 seconds (94888 bytes allocated) 
 19.427283165919906 

 julia @time sumf(:exp, y) 
 elapsed time: 1.0356e-5 seconds (64 bytes allocated) 
 19.427283165919906 

 Three orders of magnitude is enough of a speed difference to notice :). 

 --Tim 


 On Saturday, May 17, 2014 09:31:29 AM Mike Innes wrote: 
  That macro being slow at the top level isn't really a strike against the 
  macro technique, because it's easily resolved: 
  
  (Although oddly enough, a let binding doesn't really help here – anyone 
  know why?) 
  
  macro sumf(f, xs) 
quote 
  function inner(s = 0.0, x = $(esc(xs))) 
for i = 1:length(x) 
  s += $(esc(f))(x[i]) 
end 
s 
  end 
  inner() 
end 
  end 
  
  Although you're right that if you call this from a function which itself 
  takes f as a parameter, you'll lose the speed boost again (technically 
 you 
  could resolve this by making the calling function a macro in the same 
 way, 
  but that's equally awful). 
  
  Hopefully this kind of inlining will be automatic soon, so we won't have 
 to 
  resort to ugly language hacks to make things fast; on the other hand, 
 it's 
  kinda cool that we *can* resort to ugly language hacks to make things 
 fast 
  
  :) 
  
  On Saturday, 17 May 2014 16:18:15 UTC+1, Tim Holy wrote: 
   If you want to make that fast, you need to wrap that inside a 
 function, 
   using 
   a separate name for each user-supplied f. Example: 
   
   function sumf_with_sinc_plus_x(xs) 
   
   @sumf(sinc_plus_x, xs) 
   
   end 
   
   function sumf_with_exp(xs) 
   
   @sumf(exp, xs) 
   
   end 
   
   If you don't wrap it in a function, then it runs in global scope, and 
   that's 
   horribly slow. 
   
   My version allows you to pass a function as an argument, and mimics 
 what 
   it 
   would be like if we could pass functions-as-arguments without a 
   performance 
   penalty. 
   
   --Tim 
   
   On Saturday, May 17, 2014 05:03:07 AM Mike Innes wrote: 
I may be missing the point here, but wouldn't it be easier to define 
   
   sumf 
   
as a macro? 

macro sumf(f, xs) 

  quote 
  
s = 0.0 
x = $(esc(xs)) 
for i = 1:length(x) 

  s += $(esc(f))(x[i]) 

end 
s 
  
  end 

end 

@sumf(sinc_plus_x, x) 

This is just as fast and has the advantage that it will work when f 
 is 
   
   only 
   
in the local scope. 

On Saturday, 17 May 2014 11:50:56 UTC+1, Tim Holy wrote: 
 On Friday, May 16, 2014 02:36:03 PM francoi...@gmail.com 
   
   javascript:wrote: 
  - The solver need to be fast and for that, inlining is of 
 paramount 
  importance. I know that there is no way to inline F for the time 
   
   being. 
   
 Do 
 
  we expect inlining on function argument in the near future of 
 Julia 
   
   ? 
   
 I can't speak for when this will happen in a nice way, but using 
   
   Julia's 
   
 metaprogramming capabilities there is a (somewhat ugly) way to get 
   
   what 
   
 you 
 want. Since I'm planning to use this trick myself shortly, I 
 created a 
 little 
 demonstration: 
 
 https://gist.github.com/timholy/bdcee95f9b7725214d8b 
 
 If you prefer, you don't have to separate out the definition of 
 the 
   
   body 
   
 from 
 the eval(quote...end) part, I just did it that way to better 
   
   illustrate 
   
 the 
 separate ideas. 
 
 --Tim 



Re: [julia-users] Re: Design patterns for an API

2014-05-18 Thread Mike Innes
Although it's worth pointing out the overhead is only present when using 
the scope workaround; if you're in an inner loop and the 1μs overhead is 
non-negligible (it seems unlikely that this would actually be a bottleneck, 
but who knows) you could just use my original macro. Overall I'm not 
convinced that the macro solution is worse unless you specifically want to 
simulate higher-order functions (which you do, of course, so fair enough).

On Sunday, 18 May 2014 07:24:38 UTC+1, Mike Innes wrote:

 Fair point. I timed this in a loop and it seems to be about an order of 
 magnitude slower, which (considering that you're redefining the function 
 every time it runs) is actually surprisingly good – it seems that doing so 
 only takes a microsecond.

 On Saturday, 17 May 2014 21:20:34 UTC+1, Tim Holy wrote:

 Right, but there are still issues with this approach. Are you planning on 
 always executing this function as a macro? If so, then you'll have to 
 pay 
 the compilation price each time you use it. You might not care if xs is 
 huge, 
 but if xs has 10 items then you won't be very happy. 

 The performance of the dict-based method cache is not ideal in such 
 circumstances either, but it's about three orders of magnitude faster: 

 julia y = rand(10); 

 julia @time @sumf exp y 
 elapsed time: 0.01007613 seconds (131500 bytes allocated) 
 19.427283165919906 

 julia @time @sumf exp y 
 elapsed time: 0.011217276 seconds (131500 bytes allocated) 
 19.427283165919906 

 In contrast, 

 julia @time sumf(:exp, y) 
 elapsed time: 0.014134811 seconds (94888 bytes allocated) 
 19.427283165919906 

 julia @time sumf(:exp, y) 
 elapsed time: 1.0356e-5 seconds (64 bytes allocated) 
 19.427283165919906 

 Three orders of magnitude is enough of a speed difference to notice :). 

 --Tim 


 On Saturday, May 17, 2014 09:31:29 AM Mike Innes wrote: 
  That macro being slow at the top level isn't really a strike against 
 the 
  macro technique, because it's easily resolved: 
  
  (Although oddly enough, a let binding doesn't really help here – anyone 
  know why?) 
  
  macro sumf(f, xs) 
quote 
  function inner(s = 0.0, x = $(esc(xs))) 
for i = 1:length(x) 
  s += $(esc(f))(x[i]) 
end 
s 
  end 
  inner() 
end 
  end 
  
  Although you're right that if you call this from a function which 
 itself 
  takes f as a parameter, you'll lose the speed boost again (technically 
 you 
  could resolve this by making the calling function a macro in the same 
 way, 
  but that's equally awful). 
  
  Hopefully this kind of inlining will be automatic soon, so we won't 
 have to 
  resort to ugly language hacks to make things fast; on the other hand, 
 it's 
  kinda cool that we *can* resort to ugly language hacks to make things 
 fast 
  
  :) 
  
  On Saturday, 17 May 2014 16:18:15 UTC+1, Tim Holy wrote: 
   If you want to make that fast, you need to wrap that inside a 
 function, 
   using 
   a separate name for each user-supplied f. Example: 
   
   function sumf_with_sinc_plus_x(xs) 
   
   @sumf(sinc_plus_x, xs) 
   
   end 
   
   function sumf_with_exp(xs) 
   
   @sumf(exp, xs) 
   
   end 
   
   If you don't wrap it in a function, then it runs in global scope, and 
   that's 
   horribly slow. 
   
   My version allows you to pass a function as an argument, and mimics 
 what 
   it 
   would be like if we could pass functions-as-arguments without a 
   performance 
   penalty. 
   
   --Tim 
   
   On Saturday, May 17, 2014 05:03:07 AM Mike Innes wrote: 
I may be missing the point here, but wouldn't it be easier to 
 define 
   
   sumf 
   
as a macro? 

macro sumf(f, xs) 

  quote 
  
s = 0.0 
x = $(esc(xs)) 
for i = 1:length(x) 

  s += $(esc(f))(x[i]) 

end 
s 
  
  end 

end 

@sumf(sinc_plus_x, x) 

This is just as fast and has the advantage that it will work when f 
 is 
   
   only 
   
in the local scope. 

On Saturday, 17 May 2014 11:50:56 UTC+1, Tim Holy wrote: 
 On Friday, May 16, 2014 02:36:03 PM francoi...@gmail.com 
   
   javascript:wrote: 
  - The solver need to be fast and for that, inlining is of 
 paramount 
  importance. I know that there is no way to inline F for the 
 time 
   
   being. 
   
 Do 
 
  we expect inlining on function argument in the near future of 
 Julia 
   
   ? 
   
 I can't speak for when this will happen in a nice way, but 
 using 
   
   Julia's 
   
 metaprogramming capabilities there is a (somewhat ugly) way to 
 get 
   
   what 
   
 you 
 want. Since I'm planning to use this trick myself shortly, I 
 created a 
 little 
 demonstration: 
 
 https://gist.github.com/timholy/bdcee95f9b7725214d8b 
 
 If you prefer, you don't have to separate out the definition of 
 the 
   
   body 
   
 from 
 the eval(quote

[julia-users] Re: Proof of concept: hydrodynamics in julia!

2014-05-18 Thread Mike Innes
This is really cool! Thanks for sharing it, especially with the video.

You might be interested in packaging this up 
(http://docs.julialang.org/en/latest/manual/packages/#package-development) – 
it doesn't have to be officially registered or anything, but just putting 
everything into a module would mean that people can download it and try out 
some fluid dynamics really easily.
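
(A sketch of the module wrapper meant here – the file and function names are 
invented:)

# src/Hydro.jl (hypothetical layout)
module Hydro

export simulate          # hypothetical entry point

include("solver.jl")     # existing source files, pulled into the module
include("visualize.jl")

end # module

With that in place, anyone who clones the repo into their package directory 
can just do using Hydro.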

On Sunday, 18 May 2014 12:41:43 UTC+1, Joonas Nättilä wrote:

 Hi all,

 After some twiddling and debugging I can finally announce the first alpha 
 version of (what I suspect to be the first) hydrodynamics code written in 
 julia:

 https://github.com/natj/hydro

 There are still quite a lot of things to do like parallelization but even 
 currently it is capable of running a 100x100 grid with reasonable speed 
 live. The original python code I based this on, was able to maintain 
 approximately the same speed in 1 dimension but we are already doing 
 computations in 2d!

 One of my design goals was to make this as modular and flexible as 
 possible so that it could be used a as basis for more complex calculations. 
 Due to this it should be relatively straightforward to upgrade it to for 
 example to magnetohydrodynamics or to shallow water equations. Also, now 
 that I have the initial frame done, I plan to begin a heavy testing and 
 optimization period so all comments, tips and improvements are welcome! 

 I also computed some eye-candy for you, which you can admire here: 
 https://vimeo.com/95607699


 Cheers,
 Joonas



Re: [julia-users] GSOC 3D Visualizations plotting API - Make a wish!

2014-05-19 Thread Mike Innes
If you're looking for a consistent API between 2D and 3D it might be worth
taking some inspiration from Gadfly – I don't know how easy it would be to
extend the Grammar of Graphics style interface to 3D but if you can it
would be a solid base to build on (alongside a more matlab-style API, I
suppose).


On 19 May 2014 14:56, Chris Foster chris...@gmail.com wrote:

 On Mon, May 19, 2014 at 11:44 PM, Chris Foster chris...@gmail.com wrote:
  I think these last two points probably generalise to 1D, 2D and 3D data.
  IMO there's a case to be made for a simple default GUI with a capable
  camera model and control over data set visibility.  This would go a long
  way toward making interactive exploration of 3D data a joy rather than a
  chore.

 Another point I forgot to mention: In my current system, shader uniform
 parameters can be connected to GUI controls via a one-line annotation in
 the shader source.  I've found this to be extremely useful for interactive
 data exploration, since it allows you to change visualization parameters
 with a simple swipe of the mouse rather than laboriously changing them as
 numeric values.  Obviously it probably wouldn't work quite this way in
 julia, but having an extremely easy way to add basic GUI controls would be
 very useful.

 ~Chris



Re: [julia-users] function isabstract(::DataType)

2014-05-21 Thread Mike Innes
A quick search turns up isleaftype, which seems to do exactly the opposite
of what you want:

isleaftype(AbstractArray)   # false
isleaftype(Array{Int, 2})   # true

Hope that helps.


On 21 May 2014 20:15, Stephen Chisholm sbchish...@gmail.com wrote:

 Is there a way to check if a given DataType is an abstract type?  I've
 come up with a crude method below but thought there should be a better way.


 function isabstract(t::DataType)
 try
 t()
 catch exception
  return (typeof(exception) == ErrorException &&
  string(exception.msg) == "type cannot be constructed")
 end
 return false
 end


 Cheers, Steve



Re: [julia-users] DataFrames: Problems with Split-Apply-Combine strategy

2014-05-22 Thread Mike Innes
Link:
http://stackoverflow.com/questions/23806758/julia-dataframes-problems-with-split-apply-combine-strategy

I definitely agree that having a greater presence on SO would be useful, so
it might be best to answer there (sorry I can't be more directly helpful,
OP)


On 22 May 2014 13:56, Paulo Castro p.oliveira.cas...@gmail.com wrote:

  *I made this question on StackOverflow, but I think I will get better
 results posting it here. We should use that platform more, so Julia is more
 exposed to R/Python/Matlab users needing something like it.*

 I have some data (from a R course assignment, but that doesn't matter)
 that I want to use split-apply-combine strategy, but I'm having some
 problems. The data is on a DataFrame, called outcome, and each line
 represents a Hospital. Each column has an information about that hospital,
 like name, location, rates, etc.

 *My objective is to obtain the Hospital with the lowest Mortality by
 Heart Attack Rate of each State.*

  I was playing around with some strategies, and got a problem using the 
  by function:

 best_heart_rate(df) = sort(df, cols = :Mortality)[end,:]

 best_hospitals = by(hospitals, :State, best_heart_rate)

  The idea was to split the hospitals DataFrame by State, sort each of the
 SubDataFrames by Mortality Rate, get the lowest one, and combine the lines
 in a new DataFrame

 But when I used this strategy, I got:

 ERROR: no method nrow(SubDataFrame{Array{Int64,1}})

  in sort at /home/paulo/.julia/v0.3/DataFrames/src/dataframe/sort.jl:311

  in sort at /home/paulo/.julia/v0.3/DataFrames/src/dataframe/sort.jl:296

  in f at none:1
  in based_on at 
 /home/paulo/.julia/v0.3/DataFrames/src/groupeddataframe/grouping.jl:144

  in by at 
 /home/paulo/.julia/v0.3/DataFrames/src/groupeddataframe/grouping.jl:202

 I suppose the nrow function is not implemented for SubDataFrames for a
 good reason, so I gave up from this strategy. Then I used a nastier code:

 best_heart_rate(df) = (df[sortperm(df[:,:Mortality] , rev=true), :])[1,:]

 best_hospitals = by(hospitals, :State, best_heart_rate)

 Seems to work. But now there is a NA problem: how can I remove the rows
 from the SubDataFrames that have NA on the Mortality column? Is there a
 better strategy to accomplish my objective?



Re: [julia-users] Re: OT: entering Unicode characters

2014-05-22 Thread Mike Innes
Great! This feature will be in Light Table soon, too – complete with fuzzy
searching, so that it's easy to browse all available symbols :)


On 22 May 2014 18:27, Steven G. Johnson stevenj@gmail.com wrote:

 A quick update for people who haven't been tracking git closely:

 The Julia REPL (#6911), IJulia, and (soon) Emacs julia-mode (#6920) now
 allow you to type many mathematical Unicode characters simply by typing
 the LaTeX symbol and hitting TAB.

 e.g. you can type \alpha<TAB> and get α, or x\hat<TAB> and get x̂.

 There are currently 736 supported symbols (though not all of them are
 valid in Julia identifiers).   This should provide a consistent,
 cross-platform Julian idiom for entering Unicode math.

 Hopefully this can also be added to other popular editors at some point,
 e.g. presumably vim can be programmed to do this, and there is a somewhat
 similar mode for Sublime (https://github.com/mvoidex/UnicodeMath).
  (Less-programmable editors might need source-level patches, but it doesn't
 seem like an unreasonable patch to suggest.)



Re: [julia-users] pager in the repl

2014-05-23 Thread Mike Innes
Why do you say it's too far fetched? I agree that it shouldn't be the
default, but I don't see any technical issues with the idea.


On 23 May 2014 13:36, Andrea Pagnani andrea.pagn...@gmail.com wrote:

 The idea of a pager that allows for efficiently browsing large data
 structures, as proposed by Stefan, is great but probably too far fetched.
 Also, I agree that having long output paged by default is not what I
 really need in my everyday data/program debugging. Suppose that
 ``myprog()`` prints a long output to the screen; what I would really fancy
 would be something like ``myprog() | less`` (the symbol ``|`` might
 not be the right one though), or ``tail``/``head``, which (I do not know
 about Windows, but at least on Mac and Linux) are already available. I
 quite often use the syntax
 command|less
 which redirects you through ``less`` to the line of julia code where
 ``command`` is defined. Is there any way to redirect STDOUT from the
 screen to, say, less, on the fly?

 Andrea




 On Friday, May 23, 2014 11:45:18 AM UTC+2, Tomas Lycken wrote:

 Personally, I love the way Julia outputs large matrices - some rows from
 the start, followed by ... and then some rows from the end. If the matrix
 is both wide and tall, it's truncated in both directions, but the central
 point is that it's truncated in the middle, rather than on either end. That
 lets me quickly inspect the entire thing - if it both starts and ends as I
 expect, it's probably OK even in the parts that I don't see. If I'm not
 sure, I can always use some more (or less) fine-tuned command (e.g. @show)
 to look at the entire thing.

 The ideal solution, to me, would be to do the same thing for all kinds of
 output: the default way of displaying it would be to truncate it in the
 middle, and if the user wants something else, they can manually request it.
 And then, of course, it would be awesome if they could also manually pipe
 it to something that works just like less, tail or whatever from the native
 terminal.

 But I think making long output in the REPL paginated by default is a bad
 idea - if I type something that results in a large amount of output just
 because I forgot to add ; at the end, suddenly I have to get out of the
 paginated output view before I can type my next command. I don't feel like
 making the read-eval-print-loop more like a read-eval-print-
 getoutofpaginatedview-loop...

 // T

 On Friday, May 23, 2014 10:06:11 AM UTC+2, Tamas Papp wrote:

 I don't think a pager is the right solution, for the following reasons:

 1. typing directly into the REPL running in a terminal is not an
 efficient way to program anything nontrivial, most users would use an
 IDE (incl Emacs) that would allow scrolling and inspection of a value,

 2. how many elements to print from large arrays etc could be controlled
 by something like Common Lisp's *PRINT-LENGTH*, eg see
 http://clhs.lisp.se/Body/v_pr_lev.htm (sorry if this already exists in
 Julia, could not find it).

 Best,

 Tamas

 On Thu, May 22 2014, Stefan Karpinski ste...@karpinski.org wrote:

  Now that we have native terminal support, it would be a reasonable
 project
  to write a pager in Julia. Why write our own pager (you ask)? Because
 it
  could allow you to do things like efficiently page around a huge array
  without having to print the whole thing. You could, e.g., instantly
 page to
  the bottom right of a massive, distributed array, without any lag at
 all.
  Of course, the thing is you want to use shared infrastructure for
 doing
  this kind of data exploration in the terminal, IJulia, and maybe your
  editor. But the pager part could be pretty decoupled from that.
 
 
  On Thu, May 22, 2014 at 3:30 PM, Kevin Squire kevin@gmail.comwrote:

 
  Thanks!
 
 
  On Thursday, May 22, 2014 11:52:12 AM UTC-7, Bob Nnamtrop wrote:
 
  OK done. See https://github.com/JuliaLang/julia/issues/6921
 
 
  On Thu, May 22, 2014 at 12:20 PM, Kevin Squire 
  kevin@gmail.comwrote:

 
  I agree that that would be nice.  Would you be willing to open up
 an
  issue for this?
 
 
  On Thu, May 22, 2014 at 11:04 AM, Bob Nnamtrop 
  bob.nn...@gmail.comwrote:

 
  I often find myself wishing for a pager in the repl when outputting
  large amounts of output. I see that there is a Base.less but it is
 only used
  on files and not for outputting other stuff in the repl. In fact,
 it would
  be great to have support for less, head, and tail like
 functionality for
  looking at arrays, hashes, etc. Thus to be able to do:
 
  arr | less
  or
  less(arr)
  or
  arr | tail
 
  In addition, I think having the output of show() automatically go
  through less if it is longer than one page would be great. I hate
 seeing 100's
  of pages of output fly by when, e.g., a huge hash gets shown at
 the
  prompt (I just cannot seem to get in the habit of typing the ; at
 the right
  time). This behavior could be configurable of course.
 
  Bob
 
 
 
 




Re: [julia-users] pager in the repl

2014-05-23 Thread Mike Innes
Ah, I see.

Well, as it happens I began some work on a pager like this as a
proof-of-concept. If there's significant interest I could flesh it out into
something usable over the summer.


On 23 May 2014 13:58, Andrea Pagnani andrea.pagn...@gmail.com wrote:

 Too far fetched for my programming skills.  As I said, I would already be
 happy with a simple workaround to redirect STDOUT to less.


 On Friday, May 23, 2014 2:45:56 PM UTC+2, Mike Innes wrote:

 Why do you say it's too far fetched? I agree that it shouldn't be the
 default, but I don't see any technical issues with the idea.


Re: [julia-users] Shameless plug for DotPlot.jl - plots in your terminal

2014-05-23 Thread Mike Innes
Incidentally, interop with other packages without a hard dependency is
something that's around the corner, so you will be able to do this soon.


On 23 May 2014 15:32, Adam Smith swiss.army.engin...@gmail.com wrote:

 Thanks all for the feedback! I have renamed it to TextPlot.jl, added
 support for plotting just about any combination of
 functions/vectors/matrix, made the API more flexible for Gadfly
 compatibility, and greatly expanded the documentation/examples. It is now
 quite a bit more powerful than ASCIIPlots:
 https://github.com/sunetos/TextPlot.jl

 Ivar: I like the idea of having this be a backend for one of the other
 plotting packages, but the dependency would need to be the other direction.
 Meaning, they would need to add support for TextPlot, not the other way
 around. Right now TextPlot has zero dependencies, so you can use it in
 basically any environment, including a console-only server connected over
 SSH. Installing Gadfly requires quite a few dependencies on other packages,
 including Cairo and other graphical packages if you want PNG charts (for
 iTerm2+IPython inline charts, a similar use case to this one). TextPlot
 would be quite useful for machines that cannot build all those other
 packages, so I don't want to make TextPlot depend on any of those packages.

 I think TextPlot is pretty capable already; please let me know if you can
 think of anything it's missing!


 On Friday, May 23, 2014 5:24:50 AM UTC-4, Ivar Nesje wrote:

 Yes, that was definitely my intention to suggest. It looks to me like
 ASCIIPlots.jl and DotPlot.jl solve the same problem in a very similar way,
 and whether to use Unicode for higher resolution seems like something I
 would expect to be an option.

 Anyway, the ultimate goal for ASCII art plots would be to implement them
 as a backend for one of the normal plotting packages.

 Ivar

 On Friday, 23 May 2014 at 10:06:42 UTC+2, Tobias Knopp wrote the following:

 I think merge was meant as: let's create one uniform package and join
 the efforts. Since ASCIIPlots is not actively maintained I think it would
 be really great if you could take the lead to make an awesome text plotting
 tool.

 I like the name TextPlot by the way.

 On Thursday, 22 May 2014 17:42:06 UTC+2, Adam Smith wrote:

 TextPlot seems like a good name.

 Thanks for the offer on merging, but again, there's really nothing to
 merge. Adding scatterplots to dotplot will be trivial; I'll do that soon
 (making dotplot's features a superset of ASCIIPlots). There is nothing
 compatible/overlapping between these two (small) codebases for merging to
 make sense.

 I would be curious what John Myles White thinks about a more complete
 terminal plotting package for Julia. ASCIIPlots clearly imitates Matlab's
 plotting functions (imagesc), and I was going for something closer to
 Mathematica or Maple (which are more symbolic-oriented than Matlab), since
 I think the syntax is prettier. However, I know a large portion of Julia's
 users are also Matlab users, so if Matlab-compatibility is a goal, you may
 want to keep the packages separate.

 On Thursday, May 22, 2014 11:25:01 AM UTC-4, Leah Hanson wrote:

 Maybe something like TextPlot would be a good merged name? It conveys
 what the package does (text plots) rather than how it does it (Braille
 characters).

 Having a more complete plotting package for the terminal would move
 towards having a way to make `plot` just work when you start up a Julia
 REPL, which I think is a goal. I'd be happy to help merge them, but
 probably won't have time for a couple weeks.

 -- Leah


 On Thu, May 22, 2014 at 7:49 AM, Adam Smith swiss.arm...@gmail.comwrote:

 I'm not totally opposed to it, but my initial reaction is not to:

1. I don't necessarily agree about the name. I personally think
dot plot has a nice ring to it, and it is a more accurate 
 description of
what it does (using Braille characters). This very specifically 
 exploits
Unicode (non-ASCII) characters, so calling it an ASCII plot would be
misleading (for those who want the restricted character set for some
reason).
2. There's not really a single line of code they have in common,
so there's nothing to merge: it would just be a rename. I didn't 
 look at
the code of ASCIIPlots before making it, and we chose completely 
 different
APIs. For example, ASCIIPlots doesn't have a way to plot functions, 
 and
DotPlot doesn't (yet) have a way to scatterplot an array.
3. They are both quite small and simple (dotplot is ~100 lines of
code, ascii is ~250); merging would probably be more work than either
originally took to create.


 On Thursday, May 22, 2014 1:31:10 AM UTC-4, Ivar Nesje wrote:

 Would it make sense to merge this functionality into ASCIIPlots? To
 me that seems like a better name, and John Myles White is likely to be
 willing to transfer the repository if you want to be the maintainer. 
 That
 package started from code posted on the mailing list, 

Re: [julia-users] pager in the repl

2014-05-23 Thread Mike Innes
Cool, I'll add this to my ever-growing list :)

Perhaps less could emit some kind of Scrollable wrapper type, which would
display itself in the REPL as a pager, or in IJulia as an HTML scroll pane?
I've been meaning to try out table display of arrays anyway, so this
would tie in neatly with that.
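
A very rough sketch of that idea (the names and markup here are invented, not
an existing API):

immutable Scrollable
    value
end

# Plain-text frontends get the usual rendering – a real pager would page this:
Base.writemime(io::IO, ::MIME"text/plain", s::Scrollable) =
    writemime(io, MIME("text/plain"), s.value)

# IJulia and other rich frontends get a scrollable HTML pane (escaping omitted):
Base.writemime(io::IO, ::MIME"text/html", s::Scrollable) =
    print(io, "<div style=\"max-height:300px;overflow-y:scroll\"><pre>",
          sprint(buf -> writemime(buf, MIME("text/plain"), s.value)),
          "</pre></div>")

A paging function could then simply return Scrollable(x) and let the frontend
decide how to show it.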


Re: [julia-users] Shameless plug for DotPlot.jl - plots in your terminal

2014-05-24 Thread Mike Innes
StreamPunkPlots.jl?


On 24 May 2014 07:44, Keno Fischer kfisc...@college.harvard.edu wrote:

 Do you mean StreamPunkPlot.jl?


 On Sat, May 24, 2014 at 6:44 AM, Isaiah Norton isaiah.nor...@gmail.comwrote:

 If the name is still up for debate, I'd like to nominate SteamPunkPlot.jl


 On Fri, May 23, 2014 at 3:56 PM, Stefan Karpinski 
 ste...@karpinski.orgwrote:

 This is so cool:

 julia> plot(cumsum(randn(1000)))

  2.73641 ⡤⢤
  ⡇⡼⠇⠀⠀⢸
  ⡇⡇⢃⠀⠀⢸
  ⡇⠀⢈⡀⠀⠀⠀⣀⡀⢸
  ⡇⠀⠐⡇⠀⠀⡀⠀⠀⠀⠠⡴⡄⠀⡀⡾⠅⢸
  ⡇⠀⠀⠚⢀⢠⢣⠀⠀⠀⠄⠀⠀⠀⣬⠋⣧⠀⣰⠁⠄⢀⠀⠀⠀⢸
  ⡇⠀⠀⢈⣃⢘⠙⠦⠀⢀⡇⠀⠀⠀⠠⠀⠀⠀⡁⠀⠈⣦⡖⠀⢃⣺⠀⠀⠀⢸
  ⡇⠀⠀⠸⢕⡏⠨⢛⢰⡸⢻⣄⢀⢀⣒⣖⠀⠄⠘⣧⡅⠐⠇⠀⠀⠮⠀⠀⠹⠃⠆⠀⠀⢸
  ⡇⠀⠀⠀⠐⠃⠀⠨⣶⠏⠀⠛⣅⠀⠀⢰⢼⣨⡻⢛⢬⠃⡅⢹⢩⠘⠀⠀⠈⠀⣄⠀⠀⢸
  ⡇⠀⠀⠀⠈⠁⠀⠀⢣⠐⣦⡄⠀⠀⠀⡊⡍⠇⠁⠐⡝⢷⠃⠀⠘⡕⢹⡿⣃⠀⣀⠀⠀⢸
  ⡇⠀⠀⠀⢈⡆⢀⠀⠘⡸⠛⣳⠄⢀⠀⠆⠀⠀⠀⡃⠁⠘⠀⠀⠀⠋⠈⠀⢸⣾⡷⠀⠀⢸
  ⡇⠀⠀⠀⠘⠁⠀⠀⠀⢠⠀⠀⢐⢇⣨⢈⡞⠏⠀⠙⣣⣺⣞⡅⠀⠀⢀⠃⠀⠋⠘⡄⡀⢸
  ⡇⢧⣴⣅⣆⣜⢆⠀⢸⠘⡳⡾⢈⡱⢱⢩⠀⠀⠸⠁⠀⠀⠀⣡⠀⢸
  ⡇⠸⠏⠿⢫⠍⢹⡄⡖⢠⠁⠁⠈⠁⠀⢨⣄⣨⡮⠙⠀⢸
  ⡇⠀⠀⠇⡆⠐⠀⠐⠋⠛⠀⠀⠀⢸
  ⡇⠀⠀⠺⠀⢸
  ⡇⠀⠀⠈⠀⢸
 -35.0016 ⠓⠚
  1
   1000




 On Fri, May 23, 2014 at 3:02 PM, Adam Smith 
 swiss.army.engin...@gmail.com wrote:

 Sigh, I had a nagging feeling it was supposed to be plural. I did one
 more (hopefully final) rename to TextPlots.jl:
 https://github.com/sunetos/TextPlots.jl and updated the source/readme
 and such.


 On Friday, May 23, 2014 2:18:54 PM UTC-4, Stefan Karpinski wrote:

 There's a convention to name packages plurally – i.e. TextPlots rather than
 TextPlot. This is nice partly because using TextPlots reads more
 naturally than using TextPlot, but more importantly because if you,
 as is likely, end up having a type called TextPlot, then you don't
 get a name collision.


Re: [julia-users] packaging parameters for functions revisited

2014-05-30 Thread Mike Innes
You don't seem to have tried keyword arguments – these are usually the best
option if you have a large set of parameters to pass to a single function.
They may not fit your use case but they might be interesting to benchmark
at least.
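
For example, rather than packing parameters into an array or a type, the
function can take them as keywords (a toy sketch, not the code from your
notebook):

function simulate(x; growth = 1.5, decay = 0.1, steps = 100)
    for i in 1:steps
        x += growth*x - decay*x^2   # some arbitrary update using the parameters
    end
    x
end

simulate(0.01)                  # use the defaults
simulate(0.01, growth = 2.0)    # override a single parameter by name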


On 30 May 2014 10:42, Jon Norberg jon.norb...@ecology.su.se wrote:

 There have been several posts about this, so I tried to compile what I
 could find to compare speed and prettiness of the code:


 http://nbviewer.ipython.org/urls/dl.dropboxusercontent.com/u/38371278/Function%20parms%20passing%20speed%20test.ipynb

 Best speed is assigning values or variables inside the function.
 Second best is reassigning each parameter from an array; the @pack/@unpack
 macro is also pretty fast and much prettier.
 Third best is separate parameters and global assignment (longer function
 call code if there are many parameters, and/or it's just a no-no to use
 globals?).
 Slowest is using an indexed array as the parameters inside the function
 (but this is ugly to read), and immutable types.

 So I am wondering, did I miss any approach that improves speed/prettiness?

 Best, Jon






Re: [julia-users] Re: Addition of (unused, uninstantiated) type slows down program by 25%

2014-05-31 Thread Mike Innes
You should definitely open an issue about this – if your timings are right
it's definitely not desirable behaviour.

https://github.com/JuliaLang/julia/issues?state=open


On 31 May 2014 11:00, mike c coolbutusel...@gmail.com wrote:

 I've narrowed down the problem.  It's not a profiling problem.  Julia
 seems to have a step-change in speed when there are too many functions of a
 similar signature.

 I've made a short example that reproduces this slowdown:
 http://pastebin.com/iHAa2Cws

 Run the code once as-is, and then uncomment the intersect() function which
 is currently disabled and run it again.   I see a 20% drop in speed. Note:
 This intersect function is NEVER actually being called. And the type it is
 related to is NEVER INSTANTIATED.

 I think this probably qualifies as a bug, but it may just be the price to
 pay for multiple dispatch when there are too many functions (in this case 5
 functions) to choose from.



Re: [julia-users] Experimental package documentation package.

2014-06-08 Thread Mike Innes
This is a great start as a way to do external documentation – I've been 
thinking a lot about this problem but hadn't even considered this kind of 
format.

I imagine that you don't do much parsing of the markdown files except to 
separate entries, but I should mention that I started the Markdown.jl 
https://github.com/one-more-minute/Markdown.jl package with exactly that 
purpose in mind. I'm going to spend some time on it over the summer – 
hopefully soon we'll be able to parse documentation formats like this and 
display nicely formatted docs in IJulia, Light Table and even (to an 
extent) the REPL. Then we can start doing things like rendering inline 
plots, which will be really nice. I also want to have editor support for 
documentation (both inline and in the form of literate-julia-style files), 
syntax highlighting, evaluation, the whole shebang.

Daniel, as far as inline documentation is concerned I've been thinking of 
using:


# FFT
The `fft` function computes an FFT.
fft(vec)

function fft(...)...

I've been using this a little in my own code and it works nicely (haven't 
heard any objections yet anyway), and it's nice in that it requires no new 
syntax. Another option would be some kind of doc-comment marker like #|, 
which would take up fewer lines but would be a bit fiddlier in editors 
without support for it.
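
For comparison, the doc-comment style might look something like this (using a
made-up `double` function as a stand-in):

#| The `double` function doubles its argument.
#| double(x)
function double(x)
  2x
end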

On Sunday, 8 June 2014 17:22:52 UTC+1, Michael Hatherly wrote:

 Agreed, it's definitely a noticeable gap at the moment. What's needed 
 currently is a standard backend to interface with the built-in help. (My 
 solution is obviously not sustainable in the long term.) The syntax used to 
 document is, as you say, going to be disliked by some. Several different 
 documentation frontends with different syntaxes would be ideal, catering to 
 everyone's needs. (Comment blocks, external files, etc.)

 I'm trying to spend as little time as possible on the syntax, keeping it to 
 a minimal usable subset of markdown at the moment. I'll get a chance later 
 next week to do some more work on this.

 On Sunday, 8 June 2014 14:44:25 UTC+2, Daniel Jones wrote:

  
 A good way of documenting packages is one of the biggest gaps in the 
 julia ecosystem right now. Part of the reason why is evinced in the issues 
 you cite: no matter what the system is, someone is going to hate it. At 
 this point, I'm sort of hoping someone will just ignore all feedback and 
 build whatever they want.
  
 That said, I think this is a pretty elegant solution. Just relying on 
 markdown h1 and h2 headers leaves open the possibility of generating html 
 documentation from the same source. That's something I appreciate, since 
 I'd also want to generate html docs with example plots rendered for gadfly.
  
 With Jake Bolewski's julia parser, I hope it will become easier to 
 extract documentation from source code, either from comments or something 
  like docstrings. Have you given any thought to that?
  
  
 On Thu, Jun 5, 2014, at 03:13 PM, Michael Hatherly wrote:

 Hi all,
  
 I've just put up a rough prototype for package documentation at 
 https://github.com/MichaelHatherly/Docile.jl. This is *not* meant to be 
 a solution to the documentation problem, but rather to start some fresh 
 discussion on the matter.
  
 Any feedback would be great. There's more details in the readme.
  
 Regards,
 Mike



Re: [julia-users] Experimental package documentation package.

2014-06-09 Thread Mike Innes
Woops – yeah, terminal_print takes a columns keyword argument.

sprint(io -> Markdown.terminal_print(io, md, columns = 80))


On 9 June 2014 11:16, Michael Hatherly michaelhathe...@gmail.com wrote:

 Great, just pushed some other changes so I'll look into this later this
 week.
 Having a quick look though, sprint(Markdown.terminal_print, ans) strips
 out the line wrapping. Is there an easy way to retain that formatting in
 the string?


 On Monday, 9 June 2014 10:49:56 UTC+2, Mike Innes wrote:

 I just fixed it up to work with n level headers – it should do everything
 you need it to now.

 Just to get you started, this will render the first docstring from
 docile.md:

 julia> Markdown.Block(Markdown.parse_file("/users/Mike/Documents/docile.md")[3:7])
 julia> sprint(Markdown.terminal_print, ans)

 On Sunday, 8 June 2014 22:07:40 UTC+1, Michael Hatherly wrote:

 So it does :) I'll have a closer look soon.

 On Sunday, 8 June 2014 22:29:13 UTC+2, Tim Holy wrote:

 On Sunday, June 08, 2014 01:16:51 PM Michael Hatherly wrote:
  Since everything in help is in Base as
  well, it doesn't seem to be a problem currently.

 Actually, the help system does take the module into account (I believe
 Carlo
 Baldassi implemented this):

  help?> Base.print
 Base.print(x)

Write (to the default output stream) a canonical (un-decorated)
text representation of a value if there is one, otherwise call
show. The representation used by print includes minimal
formatting and tries to avoid Julia-specific details.

  help?> Profile.print
 Base.Profile.print([io::IO = STDOUT], [data::Vector]; format = :tree,
 C = false, combine = true, cols = tty_cols())

Prints profiling results to io (by default, STDOUT). If you
do not supply a data vector, the internal buffer of accumulated
backtraces will be used.  format can be :tree or :flat.
If C==true, backtraces from C and Fortran code are shown.
combine==true merges instruction pointers that correspond to
the same line of code.  cols controls the width of the display.

 Base.Profile.print([io::IO = STDOUT], data::Vector, lidict::Dict;
 format = :tree, combine = true, cols = tty_cols())

Prints profiling results to io. This variant is used to examine
results exported by a previous call to Profile.retrieve().
Supply the vector data of backtraces and a dictionary
lidict of line information.


  I'll take another look
  when I get a chance.
 
  [1] https://github.com/JuliaLang/julia/blob/master/base/help.jl#L102
 
  On Sunday, 8 June 2014 21:32:36 UTC+2, Tim Holy wrote:
   I agree with Daniel. We just need _something_, and on this issue
 the
   diversity
   of tastes seems to make consensus impossible. So kudos to you. I
 really
   hope
   this keeps moving forward.
  
   What prevents it from working with functions rather than strings?
  
   --Tim
  




Re: [julia-users] Experimental package documentation package.

2014-06-09 Thread Mike Innes
This is just because Markdown.jl didn't have a release – I don't know if
there's a way to depend on such packages and/or arbitrary git repositories
(if not perhaps we should have a way?). Adding Pkg.clone("Markdown") during
the build step would work I guess.

Anyway, I just pushed 0.1.0 so it should work if you re-do the build.


On 9 June 2014 13:04, Michael Hatherly michaelhathe...@gmail.com wrote:

 I've managed to get it to store the markdown parsed docs now and display
 them correctly. Travis is complaining that it can't find Markdown though.
 Should I be doing something different in my REQUIRE file?


Re: [julia-users] Experimental package documentation package.

2014-06-09 Thread Mike Innes
writemime("text/html", md) is the best interface to use for outputting
HTML. Honestly, though, I spent about twenty minutes on HTML output so it's
going to be basic at best. Improvements are welcome, of course.
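
e.g., a quick sketch of pulling an HTML string out of a parsed document (the
path here is made up):

md = Markdown.parse_file("docs/intro.md")
html = sprint(io -> writemime(io, MIME("text/html"), md))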


On 9 June 2014 13:31, Michael Hatherly michaelhathe...@gmail.com wrote:

 Thanks.

 Is Markdown.html_inline what I should be using to produce html output in
 a similar manner to terminal_print?



[julia-users] Re: Computing colors of molecules with Julia

2014-06-09 Thread Mike Innes
This is really cool.

It looks like there are still some issues with syntax highlighting in these 
notebooks (breaking on triple-quoted strings and highlighting Unicode 
characters as errors, for example) but this situation should improve greatly 
once my CodeMirror mode is ready for IJulia.

On Monday, 9 June 2014 16:04:45 UTC+1, Jiahao Chen wrote:

 I've started a blog http://jiahao.github.io/julia-blog/ showcasing a 
 few IJulia notebooks I've been working on over the past few months. 
 Currently the only published post is one of my most recent notebooks on 
 using Color.jl to calculate colors of molecules 
 http://jiahao.github.io/julia-blog/2014/06/09/the-colors-of-chemistry.html 
 from their UV-vis spectra.


 IJulia notebooks live in a separate repo:

 https://github.com/jiahao/ijulia-notebooks

 This was a fun excursion into my former life as a chemist and hopefully 
 gives people an idea of what you can do with IJulia, Color.jl, Gadfly.jl, 
 SIUnits.jl, and Unicode characters.



Re: [julia-users] Re: Reducing algorithm obfuscation

2014-06-12 Thread Mike Innes
FWIW – putting to one side the question of whether or not this is a good 
idea – it would be possible to do this without new language syntax. 
However, you'd have to either pass a type hint or be explicit about the 
variables you want:

e.g. 

function tick(state::SvfSinOsc, coef::SvfSinOscCoef)
  @with state::SvfSinOsc, coef::SvfSinOscCoef
  # or
  @with state (ic1eq, ic2eq) coef (g0, g1)

  lv1 = g0*ic1eq - g1*ic2eq
  lv2 = g1*ic1eq + g0*ic2eq
  SvfSinOsc(2*lv1 - ic1eq, 2*lv2 - ic2eq)
end

This would work in the non-mutating case by calling names() on the type and 
making appropriate variable declarations.
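
To make that concrete, here's a rough sketch of a single-object,
explicit-block version – only an illustration, not a finished implementation;
it cheats by eval'ing the type name at expansion time, so the type must
already be defined:

macro with(ex, body)
    obj, T = ex.args                 # expects the form `object::SomeType`
    fields = names(eval(T))          # field names of the type
    decls = [:($(esc(f)) = getfield($(esc(obj)), $(QuoteNode(f)))) for f in fields]
    quote
        $(decls...)
        $(esc(body))
    end
end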

You could then go further and implement

function tick(state::SvfSinOsc, coef::SvfSinOscCoef)
  @with state (ic1eq, ic2eq) coef (g0, g1) begin
    lv1 = g0*ic1eq - g1*ic2eq
    lv2 = g1*ic1eq + g0*ic2eq
    SvfSinOsc(2*lv1 - ic1eq, 2*lv2 - ic2eq)
  end
end

Which would walk over the expression, replacing `a` with `Foo.a`. However, 
it would be tricky to implement this correctly since you'd have to be aware 
of variable scoping within the expression.

I may implement the non-mutating version of this at some point – it seems 
like it could be useful.


On Thursday, 12 June 2014 08:21:42 UTC+1, Andrew Simper wrote:

 On Thursday, June 12, 2014 2:16:30 PM UTC+8, Andrew Simper wrote:

 It seems that the local keyword is a bit of a language kludge to me, 
 since it is implied in most cases, apart from stating the new scope in the 
 form of a for loop etc. It would seem more natural and consistent to me to 
 add the local keyword in front of all variables you want to be local in 
 scope, and everything else is global. This line of reasoning I'm sure has 
 already been argued to death, and obviously having an implicit local was 
 decided to be best.


 Having the local keyword like it is makes most sense to me, but I suppose 
 it isn't a big deal to me that if you don't explicitly specify local you 
 could be referring to something outside the current scope, which is the 
 case with for loops. 



Re: [julia-users] Re: Reducing algorithm obfuscation

2014-06-12 Thread Mike Innes
Actually, the mutating case is easier than that, you'd just transform the 
code to:

function foo(obj::Foo)
  a = obj.a
  # do stuff with a
  obj.a = a
end

So long as you don't access obj.a directly this would work fine.

At some point I'm going to make a repository of Frequently Asked Macros – 
just to show off the things you can do with Julia even though you really 
shouldn't.



Re: [julia-users] Re: Reducing algorithm obfuscation

2014-06-12 Thread Mike Innes
Ok, managed to have a quick go at this – source with some examples:

https://gist.github.com/one-more-minute/668c5c7cdd8fd8b81d35

Currently it does nothing to avoid the issue Keno pointed out, but in 
principle you could throw an error when the mutating version is used 
without explicit types.

If there's any interest in having this in Base you're welcome to it, 
otherwise I'll probably just clean it up and store it in Lazy.jl.

On Thursday, 12 June 2014 09:44:30 UTC+1, Andrew Simper wrote:

 Brilliant Mike! This is exactly what I was after, I just want a way to 
 write shorthand names for things within a scope, and the @with macro does 
 just that :) In the example I posted I split the coefficients away from the 
 state so that only the state needs to be returned, I think this is good for 
 efficiency. I'll have a play with @with and see how I go. Passing in names 
 (Typename), isn't a problem, since when a new name is added there is no 
 duplication in doing this.

  Keno, sorry for not understanding that this is probably what you meant
  when you said this would be best off done by using macros; I didn't think
  of enclosing the entire algorithm in a macro.




Re: [julia-users] Experimental package documentation package.

2014-06-12 Thread Mike Innes
Is there a particular reason to document functions as opposed to just
methods? I would have thought that documentation for the most generic
method + some specific methods where necessary would be enough. It's best
if documentation is as simple as possible – we do want to encourage people
to use it.

Personally I'll be happy if documentation looks exactly as Stefan has
suggested. Custom formatting might be a useful feature, but I think it's
best if plain docs are treated as Markdown by default – having consistency
in the way code blocks etc. are formatted will be helpful even when viewing
documentation as plain text, especially now that we can view Markdown in
the terminal.


Re: [julia-users] Experimental package documentation package.

2014-06-12 Thread Mike Innes
The issue with only storing text is that we want to support interpolation
of results, including for things like plots. If you only store the text
then those results will be re-evaluated every time the help is displayed.
Really, you'd want to parse everything and evaluate results immediately,
then store the resulting tree.

I'd better get to work on support for tables and equations then :) I can
also support things like parameter lists if there's a consensus on the best
way to format them.


Re: [julia-users] Re: Reducing algorithm obfuscation

2014-06-12 Thread Mike Innes
No problem!

The non-mutating version of this is exactly equivalent to writing a = Foo.a
etc., so there should be zero overhead. The mutating version uses a let
binding, which I think has a very small additional overhead, but I would
just benchmark it to make sure.
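
A quick way to check would be something like this – note the type definitions
below are reconstructed from the field names used in this thread, since the
real ones never appeared here:

immutable SvfSinOsc
    ic1eq::Float64
    ic2eq::Float64
end

immutable SvfSinOscCoef
    g0::Float64
    g1::Float64
end

# Fully written-out version, to compare against the @with variants:
tick_full(s::SvfSinOsc, c::SvfSinOscCoef) =
    SvfSinOsc(2*(c.g0*s.ic1eq - c.g1*s.ic2eq) - s.ic1eq,
              2*(c.g1*s.ic1eq + c.g0*s.ic2eq) - s.ic2eq)

function spin(n)
    s = SvfSinOsc(1.0, 0.0)
    c = SvfSinOscCoef(0.999, 0.05)
    for i in 1:n
        s = tick_full(s, c)
    end
    s
end

@time spin(10^7)   # time the @with-based tick the same way and compare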


On 12 June 2014 15:03, Andrew Simper andrewsim...@gmail.com wrote:

 Mike, you rule!

  That is a seriously cool macro, thank you so much for taking the time to
  write this!! I like dot notation sometimes, when you have two things like
 Coordinates / Points / Complex etc it makes perfect sense to be able to see
 which one you are talking about, but for crunching numbers on a set of
 states it just makes things very ugly and obfuscates what is actually going
 on.

  Can I please double check something with you? Apart from the one-off
  overhead of parsing through the code and prefixing the names, will this run
  exactly as fast as if I had typed all the names in full? (I am 99.9% sure
  the answer is yes, but I want to be sure!)






Re: [julia-users] Experimental package documentation package.

2014-06-12 Thread Mike Innes
Sure, I don't know exactly the best way to do the caching (it probably
depends on how we handle function metadata in the first place) but there
are definitely ways to store arbitrary Julia data on disk.
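
For what it's worth, Base's serializer already covers the on-disk part – a
sketch with made-up paths, and with the caveat that the serialized format
isn't stable across Julia versions:

docs = Markdown.parse_file("docs/fft.md")   # whatever parsed docs we cache

open("docs.cache", "w") do io
    serialize(io, docs)
end

docs_cached = open(deserialize, "docs.cache")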


On 12 June 2014 15:01, Michael Hatherly michaelhathe...@gmail.com wrote:

 Yes, the tree is a much better idea. I presume that parsing and evaluation
 would only happen
 when specifically required rather than every time the package is imported?



Re: [julia-users] support for '?' suffix on functions that return boolean types.

2014-06-12 Thread Mike Innes
Indeed, Ruby has the ternary operator as well.

I imagine it's unlikely to change at this point, but +1 for trailing ? from 
me – just in case :)

On Thursday, 12 June 2014 19:30:58 UTC+1, Stefan Karpinski wrote:

 It's definitely a surmountable thing – I'd actually be rather in favor of 
 using a trailing ? instead of the is prefix for predicates. I believe Jeff 
 prefers the is prefix.


 On Thu, Jun 12, 2014 at 1:53 PM, Jameson Nash vtj...@gmail.com wrote:

  Interestingly (to me) Apple's new language, Swift, uses ? as both a 
 ternary operator and a suffix for 'nullable' values, so this isn't an 
 insurmountable obstacle. 


  On Thursday, June 12, 2014, Steven G. Johnson steve...@gmail.com wrote:



 On Thursday, June 12, 2014 1:08:37 PM UTC-4, Aerlinger wrote:

 Ruby has a useful convention where methods can end in a '?' to indicate 
  that they return a boolean value. This capability would be useful in Julia 
 as well. Much like the bang (!) suffix on functions it might look 
 something 
 like this:

 function isEven?(n::Int)
   n % 2 == 0
 end


 Yes, both the ! and ? suffixes are common conventions, possibly 
 originating in Scheme.  Note that if you have a ? suffix, then you don't 
 need the is prefix.

 However, ? is already being used for the ternary operator in Julia and 
 hence is not available for use in identifiers.  Hence we instead adopt the 
 is prefix convention.




Re: [julia-users] Re: Reducing algorithm obfuscation

2014-06-13 Thread Mike Innes
That was my first thought, too – and it's fine in principle, but remember 
for that macro to be correct you'd have to handle let bindings, quoting, 
local variable declarations and expanding any macros that might result in 
these, then test all of those things carefully to make sure it's working 
correctly.

Plus, if the overhead of unnecessary writes is an issue, so will be that of 
the let binding used by the mutating version of the macro. This again is 
probably solvable, but for all that effort you could just use the faster 
non-mutating version and store a couple of changes by hand.

That's not meant to put off anyone who wants to have a go at this, just 
warning that it wouldn't be as trivial as it sounds.

On Friday, 13 June 2014 02:01:28 UTC+1, David Moon wrote:

 Mike Innes' atsign with macro is good, but it would be better if it would 
 iterate over the AST for its last argument and replace each occurrence of 
 field with obj.field.  That way there wouldn't be any unexpected 
 assignments to fields which were not actually changed, and in general no 
 wasted motion at run time.  The macro would be a little more complex to 
 write but it should not be very difficult.

 On Thursday, June 12, 2014 6:14:15 AM UTC-4, Mike Innes wrote:

 Ok, managed to have a quick go at this – source with some examples:

 https://gist.github.com/one-more-minute/668c5c7cdd8fd8b81d35

 Currently it does nothing to avoid the issue Keno pointed out, but in 
 principle you could throw an error when the mutating version is used 
 without explicit types.

 If there's any interest in having this in Base you're welcome to it, 
 otherwise I'll probably just clean it up and store it in Lazy.jl.

 On Thursday, 12 June 2014 09:44:30 UTC+1, Andrew Simper wrote:

 Brilliant Mike! This is exactly what I was after, I just want a way to 
 write shorthand names for things within a scope, and the @with macro does 
 just that :) In the example I posted I split the coefficients away from the 
 state so that only the state needs to be returned, I think this is good for 
 efficiency. I'll have a play with @with and see how I go. Passing in names 
 (Typename), isn't a problem, since when a new name is added there is no 
 duplication in doing this.

 Keno, sorry for not understanding that this is probably what you meant 
 when you said this would be best off done by using macros, I didn't think 
 of enclosing the entire algorithm in a macro.

 On Thursday, June 12, 2014 4:21:58 PM UTC+8, Mike Innes wrote:

 FWIW – putting to one side the question of whether or not this is a 
 good idea – it would be possible to do this without new language syntax. 
 However, you'd have to either pass a type hint or be explicit about the 
 variables you want:

 e.g. 

 function tick(state::SvfSinOsc, coef::SvfSinOscCoef)
   @with state::SvfSinOsc, coef::SvfSinOscCoef
   # or
   @with state (ic1eq, ic2eq) coef (g0, g1)

   lv1 = g0*ic1eq - g1*ic2eq
   lv2 = g1*ic1eq + g0*ic2eq
   SvfSinOsc(2*lv1 - ic1eq, 2*lv2 - ic2eq)
 end

 This would work in the non-mutating case by calling names() on the type 
 and making appropriate variable declarations.

 You could then go further and implement

 function tick(state::SvfSinOsc, coef::SvfSinOscCoef)
   @with state (ic1eq, ic2eq) coef (g0, g1) begin
     lv1 = g0*ic1eq - g1*ic2eq
     lv2 = g1*ic1eq + g0*ic2eq
     SvfSinOsc(2*lv1 - ic1eq, 2*lv2 - ic2eq)
   end
 end

 Which would walk over the expression, replacing `a` with `Foo.a`. 
 However, it would be tricky to implement this correctly since you'd have 
 to 
 be aware of variable scoping within the expression.

 I may implement the non-mutating version of this at some point – it 
 seems like it could be useful.


 On Thursday, 12 June 2014 08:21:42 UTC+1, Andrew Simper wrote:

 On Thursday, June 12, 2014 2:16:30 PM UTC+8, Andrew Simper wrote:

 It seems that the local keyword is a bit of a language kludge to me, 
 since it is implied in most cases, apart from stating the new scope in 
 the 
 form of a for loop etc. It would seem more natural and consistent to me 
 to 
 add the local keyword in front of all variables you want to be local in 
 scope, and everything else is global. This line of reasoning I'm sure has 
 already been argued to death, and obviously having an implicit local was 
 decided to be best.


 Having the local keyword like it is makes most sense to me, but I 
 suppose it isn't a big deal to me that if you don't explicitly specify 
 local you could be referring to something outside the current scope, 
 which 
 is the case with for loops. 


 On Thursday, June 12, 2014 6:14:15 AM UTC-4, Mike Innes wrote:

 Ok, managed to have a quick go at this – source with some examples:

 https://gist.github.com/one-more-minute/668c5c7cdd8fd8b81d35

 Currently it does nothing to avoid the issue Keno pointed out, but in 
 principle you could throw an error when the mutating version is used 
 without explicit types.

 If there's any interest in having this in Base you're

Re: [julia-users] Re: Reducing algorithm obfuscation

2014-06-13 Thread Mike Innes
Not by default, but it should be simple enough (and correct, I think) to
just call macroexpand on macro calls.

macro test(expr)
  (expr,)
end

 (@test @foo x) == (:(@foo x),)
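
(Concretely, something like this is what I have in mind – expand_macros is
just an illustrative name, not anything that exists:)

expand_macros(x) = x
function expand_macros(ex::Expr)
  # expand any macro call we hit; otherwise keep walking the AST
  # (in current Julia this would be macroexpand(Main, ex))
  ex.head == :macrocall && return macroexpand(ex)
  Expr(ex.head, map(expand_macros, ex.args)...)
end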

All I meant about the let binding is that the mutating version expands to:

let a = Foo.a
  # code
  Foo.a = a
end

AFAIK let bindings have a small overhead (compared to a normal
declaration), so if a redundant assignment is a significant overhead in
your code then using the let binding will be prohibitive anyway. I haven't
particularly tested that, though, so the situation could have changed
recently.


On 13 June 2014 14:59, David Moon dave_m...@alum.mit.edu wrote:

 [I can't get this damned thing not to include a quote of all previous
 messages.  I guess it only works in Google Chrome; what a pain.  So sorry
 about the unnecessarily long post.]

 In the argument to a macro all nested macro calls are already expanded, I
 think.  It's certainly true that for complete correctness you would need to
 handle shadowing of the bindings introduced by atsign-with by local
 bindings of the same name.  It's even more true that Julia does not provide
 any assistance in processing Expr's and other AST objects, nor even much
 documentation, so far as I know.

 I don't understand your comment about the overhead of the let binding used
 by the mutating version of the macro.  What extra overhead is that?

 On Friday, June 13, 2014 3:57:07 AM UTC-4, Mike Innes wrote:

 That was my first thought, too – and it's fine in principle, but remember
 for that macro to be correct you'd have to handle let bindings, quoting,
 local variable declarations and expanding any macros that might result in
 these, then test all of those things carefully to make sure it's working
 correctly.

 Plus, if the overhead of unnecessary writes is an issue, so will be that
 of the let binding used by the mutating version of the macro. This again is
 probably solvable, but for all that effort you could just use the faster
 non-mutating version and store a couple of changes by hand.

 That's not meant to put off anyone who wants to have a go at this, just
 warning that it wouldn't be as trivial as it sounds.

 On Friday, 13 June 2014 02:01:28 UTC+1, David Moon wrote:

 Mike Innes' atsign with macro is good, but it would be better if it
 would iterate over the AST for its last argument and replace each
 occurrence of field with obj.field.  That way there wouldn't be any
 unexpected assignments to fields which were not actually changed, and in
 general no wasted motion at run time.  The macro would be a little more
 complex to write but it should not be very difficult.

 On Thursday, June 12, 2014 6:14:15 AM UTC-4, Mike Innes wrote:

 Ok, managed to have a quick go at this – source with some examples:

 https://gist.github.com/one-more-minute/668c5c7cdd8fd8b81d35

 Currently it does nothing to avoid the issue Keno pointed out, but in
 principle you could throw an error when the mutating version is used
 without explicit types.

 If there's any interest in having this in Base you're welcome to it,
 otherwise I'll probably just clean it up and store it in Lazy.jl.

 On Thursday, 12 June 2014 09:44:30 UTC+1, Andrew Simper wrote:

 Brilliant Mike! This is exactly what I was after, I just want a way to
 write shorthand names for things within a scope, and the @with macro does
 just that :) In the example I posted I split the coefficients away from 
 the
 state so that only the state needs to be returned, I think this is good 
 for
 efficiency. I'll have a play with @with and see how I go. Passing in names
 (Typename), isn't a problem, since when a new name is added there is no
 duplication in doing this.

 Keno, sorry for not understanding that this is probably what you meant
 when you said this would be best off done by using macros, I didn't think
 of enclosing the entire algorithm in a macro.

 On Thursday, June 12, 2014 4:21:58 PM UTC+8, Mike Innes wrote:

 FWIW – putting to one side the question of whether or not this is a
 good idea – it would be possible to do this without new language syntax.
 However, you'd have to either pass a type hint or be explicit about the
 variables you want:

 e.g.

 function tick(state::SvfSinOsc, coef::SvfSinOscCoef)
   @with state::SvfSinOsc, coef::SvfSinOscCoef
   # or
   @with state (ic1eq, ic2eq) coef (g0, g1)

   lv1 = g0*ic1eq - g1*ic2eq
   lv2 = g1*ic1eq + g0*ic2eq
   SvfSinOsc(2*lv1 - ic1eq, 2*lv2 - ic2eq)
 end

 This would work in the non-mutating case by calling names() on the
 type and making appropriate variable declarations.

 You could then go further and implement

 function tick(state::SvfSinOsc, coef::SvfSinOscCoef)
   @with state (ic1eq, ic2eq) coef (g0, g1) begin
     lv1 = g0*ic1eq - g1*ic2eq
     lv2 = g1*ic1eq + g0*ic2eq
     SvfSinOsc(2*lv1 - ic1eq, 2*lv2 - ic2eq)
   end
 end

 Which would walk over the expression, replacing `a` with `Foo.a`.
 However, it would be tricky to implement this correctly since you'd have 
 to
 be aware

Re: [julia-users] Re: Reducing algorithm obfuscation

2014-06-13 Thread Mike Innes
Absolutely – I didn't want to get into correct/incorrect when I implemented
@with but I definitely think the

@with Foo::(a, b)

syntax is preferable. I think I'll disable the type syntax, add support for
indexing and then send a PR to see if there's any chance of having it in
Base.

(Personally I think the type syntax could be acceptable if used very
sparingly and only where it has significant benefit – e.g. for a very large
config object that's used in many places. But I do agree that such a
construct wouldn't be best placed in Base.)


On 13 June 2014 15:25, Stefan Karpinski ste...@karpinski.org wrote:

 Keno's example showed how a simple error like forgetting that you had
 assigned to `a` would cause problems, but it's even worse – that's just a
 matter of making an error about the current state of the program. It's
 worse than that though: if someone adds a field to a type that is used
 *anywhere* with such a `using` construct, that code becomes incorrect. The
 fundamental problem with this construct is that it makes it locally
 impossible to reason about what it means when you access or assign a
 binding. Let's say I have this code:

 function foo(x::Bar)
   using x
   a = b + 1

  end


 What does this mean? b could be a global variable or a field of x; a could
 be a local variable or a field of x. Without knowing the structure of Bar,
 we can't know. In Julia, this is never the case: you can always tell the
 meaning of code from purely local syntactic analysis. The code may be wrong
 – you might try to access or assign a field that doesn't exist, but there
 is only one thing your code could mean. For the same exact reasons, I
 wouldn't accept a macro that simulates this into base – it has the exact
 same problems. The only way to make this acceptable is if can be locally
 disambiguated what the code means. You could, for example, do something
 like this:

 function foo(x::Bar)

   using x: a, b

   a = b + 1

 end


 Now you can immediately tell that `a` and `b` are both fields of `x` and
 not global or local variables (not by accident, this is independent of the
 definition of Bar).


 On Thu, Jun 12, 2014 at 7:40 AM, Jameson Nash vtjn...@gmail.com wrote:

  Having the local keyword like it is makes most sense to me, but I
 suppose it isn't a big deal to me that if you don't explicitly specify
 local you could be referring to something outside the current scope, which
 is the case with for loops.

 Javascript does this. It also has the using block that you describe
 (see with). They are probably the worst (mis)features in the entire
 language. In the latest version of javascript, it has finally been removed,
 to the relief of javascript programmers everywhere:
 https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions_and_function_scope/Strict_mode
 .

 I would be interested in having a language feature akin to visual basic's
 with block, since it is only a trivial source transform to annotate each
 `.` with the result of the expression in the nearest `with` block.
 function f(A)
   with A
     .c = .a + .b
   end
 end
 However, since this only saves a single character, it really isn't
 worthwhile.
 function f(A)
   Z = A
   Z.c = Z.a + Z.b
 end


 On Thu, Jun 12, 2014 at 6:14 AM, Mike Innes mike.j.in...@gmail.com
 wrote:

 Ok, managed to have a quick go at this – source with some examples:

 https://gist.github.com/one-more-minute/668c5c7cdd8fd8b81d35

 Currently it does nothing to avoid the issue Keno pointed out, but in
 principle you could throw an error when the mutating version is used
 without explicit types.

 If there's any interest in having this in Base you're welcome to it,
 otherwise I'll probably just clean it up and store it in Lazy.jl.


 On Thursday, 12 June 2014 09:44:30 UTC+1, Andrew Simper wrote:

 Brilliant Mike! This is exactly what I was after, I just want a way to
 write shorthand names for things within a scope, and the @with macro does
 just that :) In the example I posted I split the coefficients away from the
 state so that only the state needs to be returned, I think this is good for
 efficiency. I'll have a play with @with and see how I go. Passing in names
 (Typename), isn't a problem, since when a new name is added there is no
 duplication in doing this.

 Keno, sorry for not understanding that this is probably what you meant
 when you said this would be best off done by using macros, I didn't think
 of enclosing the entire algorithm in a macro.

 On Thursday, June 12, 2014 4:21:58 PM UTC+8, Mike Innes wrote:

 FWIW – putting to one side the question of whether or not this is a
 good idea – it would be possible to do this without new language syntax.
 However, you'd have to either pass a type hint or be explicit about the
 variables you want:

 e.g.

 function tick(state::SvfSinOsc, coef::SvfSinOscCoef)
   @with state::SvfSinOsc, coef::SvfSinOscCoef
   # or
   @with state (ic1eq, ic2eq) coef (g0, g1)

   lv1 = g0*ic1eq - g1*ic2eq
   lv2 = g1*ic1eq + g0*ic2eq

Re: [julia-users] Re: Reducing algorithm obfuscation

2014-06-14 Thread Mike Innes
function process8(state::CircuitModel, input::Float)
  state.a = 5
end

You're right that you can't tell whether the code above is correct without
knowing about CircuitModel, but it is obvious where all the variables are
coming from and what's happening to them – and that's really valuable
if/when things do go wrong.

You're also absolutely right that @with f::Foo is statically resolved, so
it's nowhere near as bad as the JS version – but nevertheless, needing
extra information about Foo means that the function no longer stands for
itself. That's what we mean by local reasoning.

That's why I also say that this ability might be OK for a config object –
if it's something you're only using a couple of times in the same file as
the object's definition, that IMO is local enough. But using it for
anything more than that will obfuscate your code more, not make it clearer.


On 14 June 2014 08:45, Andrew Simper andrewsim...@gmail.com wrote:



 On Saturday, June 14, 2014 3:41:16 PM UTC+8, Andrew Simper wrote:



 On Saturday, June 14, 2014 3:32:02 PM UTC+8, Andrew Simper wrote:



 On Saturday, June 14, 2014 3:06:46 PM UTC+8, Stefan Karpinski wrote:

 On Sat, Jun 14, 2014 at 1:18 AM, Andrew Simper andrew...@gmail.com
 wrote:

 function process5(state::CircuitModel, input::Float)
   @withonly state::CircuitModel, input begin
     v1 = 1      # fine since v1 is a name of CircuitModel
     v5 = input  # fine since v5 is a name of CircuitModel, and input is allowed as well
     a = 2       # error, a is not a name of CircuitModel and is not input
     local b = 3 # ok, since this is specifically a new local variable b
     gr5 = b     # ok since b is the new local variable and gr5 is a name of CircuitModel
   end
 end


 So this addresses your points about not being able to reason locally
 about what is going on, since any variable must be from one of the blocks
 arguments


 Since the meaning of the code depends on the definition of
 CircuitModel, which is defined elsewhere, that isn't reasoning locally.


 But it is! Give me the name of any variable in the above block and I can
 tell you where it comes from without ambiguity: it has to be either locally
 defined with local blah, a specifically named variable passed to @withonly,
 or a name of the one and only allowable composite type passed as the first
 argument, otherwise there will be a compiler error.


 Ok, I think I get what you mean by reason locally – if this is the only
 code you will ever see then sure, but then you won't be able to tell whether
 this is valid either:

 function process8(state::CircuitModel, input::Float)
   state.a = 5 # this should be an error but you won't know until you try and compile it
 end


 Following your logic through to completion, every time we use a composite
 type we should specify all the names it contains, otherwise we can't reason
 locally.



Re: [julia-users] Re: Reducing algorithm obfuscation

2014-06-14 Thread Mike Innes
Ok, I see what you mean – make the global scope explicit so that the local
scope can be implicit. This is actually a really interesting idea and could
make for a neat solution, but it also has problems of its own and would be
tricky to implement well. I'll definitely think about it some more when I
have time, though.


On 14 June 2014 10:46, Andrew Simper andrewsim...@gmail.com wrote:



 On Saturday, June 14, 2014 5:22:23 PM UTC+8, Mike Innes wrote:

 function process8 (state::CircuitModel, input::Float)
 state.a = 5
 end

 You're right that you can't tell whether the code above is correct
 without knowing about CircuitModel, but it is obvious where all the
 variables are coming from and what's happening to them – and that's really
 valuable if/when things do go wrong.

 You're also absolutely right that @with f::Foo is statically resolved,
 so it's nowhere near as bad as the JS version – but nevertheless, needing
 extra information about Foo means that the function no longer stands for
 itself. That's what we mean by local reasoning.

 That's why I also say that this ability might be OK for a config object –
 if it's something you're only using a couple of times in the same file as
 the object's definition, that IMO is local enough. But using it for
 anything more than that will obfuscate your code more, not make it clearer.


 But the local reasoning is identical in the case I have proposed, since
 there are no implicit local variables, and  you have to explicitly tell the
 block scope everything it is going to be accessing. So if you do this:

 function process8(state::CircuitModel, input::Float)
   @withonly state::CircuitModel, input begin
     a = 5
   end
 end

 you know that since a is not input, what you have written will
 be interpreted in only one way:
 state.a = 5

 you would need to specify:
 local b

 to introduce any new names into the scope, or specify them at the start of
 the block, otherwise they must be from inside the (one and only possible)
 composite type CircuitModel.

 Now I understand that this is possibly too ugly a proposition, but I feel
 it does hold up to the local reasoning assertion just as well as using
 explicit dot notation.



Re: [julia-users] Test for a warning

2014-06-20 Thread Mike Innes
function with_out_str(f::Function)
  orig_stdout = STDOUT
  rd, wr = redirect_stdout()
  f()
  redirect_stdout(orig_stdout)
  return readavailable(rd)
end

macro with_out_str(expr)
  :(with_out_str(()->$expr)) |> esc
end

You can use this as

@with_out_str begin
  ... code ...
end

But I think you'll need to change stdout to stderr in the above 
definition to capture warnings.
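
Something like this, say – an untested sketch that just mirrors the
definition above with stderr swapped in, so the same caveats apply:

function with_err_str(f::Function)
  # capture everything written to stderr while f runs
  orig_stderr = STDERR
  rd, wr = redirect_stderr()
  f()
  redirect_stderr(orig_stderr)
  return readavailable(rd)
end

macro with_err_str(expr)
  :(with_err_str(()->$expr)) |> esc
end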

On Friday, 20 June 2014 21:35:51 UTC+1, Laszlo Hars wrote:

 Could someone help with redirecting stderr? For example, the following 
 code does not get the error message shown in the Julia console in Windows 
 7, Julia Version 0.3.0-prerelease+3789:
 ~~~
 stderr_orig = STDERR
 rd, wr = redirect_stderr()
 1^-1
 close(wr)
 eof(rd)
 close(rd)
 out = readall(rd)
 redirect_stderr(stderr_orig)
 ~~~

 On Friday, June 20, 2014 8:36:44 AM UTC-6, Jameson wrote:

 You could redirect_stderr and test for content written 



Re: [julia-users] Test for a warning

2014-06-20 Thread Mike Innes
The macro expression itself will return the output string. So if you type
it into a repl you'll get something like

julia> @with_err_str warn("foo")
"\e[1m\e[31mWARNING: foo\n\e[0m"

If you want to capture that string you'd just assign it to a variable:

julia> err = @with_err_str warn("foo");

One thing to be careful of is that those ANSI codes may not be included
when running the tests.


On 20 June 2014 22:29, Laszlo Hars laszloh...@gmail.com wrote:

 I am confused: I can change all the outs to errs, but what variable
 will contain the error message to be inspected? The error messages seem to
 only appear in the console, nowhere else.


 On Friday, June 20, 2014 2:47:28 PM UTC-6, Mike Innes wrote:

 function with_out_str(f::Function)
   orig_stdout = STDOUT
   rd, wr = redirect_stdout()
   f()
   redirect_stdout(orig_stdout)
   return readavailable(rd)
 end

 macro with_out_str(expr)
   :(with_out_str(()->$expr)) |> esc
 end

 You can use this as

 @with_out_str begin
   ... code ...
 end

 But I think you'll need to change stdout to stderr in the above
 definition to capture warnings.

 On Friday, 20 June 2014 21:35:51 UTC+1, Laszlo Hars wrote:

 Could someone help with redirecting stderr? For example, the following
 code does not get the error message shown in the Julia console in Windows
 7, Julia Version 0.3.0-prerelease+3789:
 ~~~
 stderr_orig = STDERR
 rd, wr = redirect_stderr()
 1^-1
 close(wr)
 eof(rd)
 close(rd)
 out = readall(rd)
 redirect_stderr(stderr_orig)
 ~~~

 On Friday, June 20, 2014 8:36:44 AM UTC-6, Jameson wrote:

 You could redirect_stderr and test for content written




Re: [julia-users] Re: Incoming Gadfly changes

2014-06-24 Thread Mike Innes
Fantastic sleuthing, I never would've found that. It works like a charm for
getting the scripts out.

The only issue now is this line
https://github.com/dcjones/Compose.jl/blob/264d87d87d74857437cc05690bbfa53e70cadd96/src/svg.jl#L338.
Node.js provides its own require function, so when the script runs I get an
error about require expecting a string. You'll know better than me how to
fix that, but I figure a try/catch would do the trick if nothing else.
Beyond that we should be golden.

A dark theme for Gadfly would be awesome, especially if it plays well with June
Night https://github.com/one-more-minute/June-LT :) Theming is a bit
tricky at the moment, though, especially because I want to support both
light and dark themes well across the board.

My preferred solution would be to have both themes in CSS like so:

.gadfly { /* Default, will apply in IPython etc. */
  background: white;
}

.dark .gadfly { /* These will take over when the containing element has the dark class */
  background: black;
}

Does that seem reasonable? If you're not using CSS to style Gadfly already
I do think it would make sense – it'd be nice if people could easily
produce custom themes. Of course, if this isn't feasible that's fine too –
I haven't done much in the way of allowing custom display methods yet, but
I'm happy to do whatever's best for you.


On 23 June 2014 17:32, Daniel Jones danielcjo...@gmail.com wrote:


 Sorry to break Jewel support. I did a little digging in MDN.  Here
 https://developer.mozilla.org/en-US/docs/Web/API/HTMLScriptElement it
 says that the text field is a concatenation of contents of all the child
 text nodes. Now, text elements have a field wholeText
 https://developer.mozilla.org/en-US/docs/Web/API/Text.wholeText which
 gives the concatenation of all the text node siblings. So I'm thinking it
 should be equivalent, and standards-compliant, to do:

 script.childNodes.length > 0 ? script.childNodes[0].wholeText : ""

 And that should work for HTMLScriptElement and SVGScriptElement. Also, if
 it turns out not to be practical to eval the SVGScriptElement nodes, it
 should be completely possible to move the script tags out of the SVG.

 I would like to have a custom theme for plots when displaying on
 LightTable, to make them jive better with a dark background. I'm guessing
 that I would have to import Jewel and define a special display method?


 On Monday, June 23, 2014 9:02:05 AM UTC-7, Mike Innes wrote:

 Turns out that I can't eval the Gadfly scripts because they are now
 inside SVGScriptElement blocks (as opposed to HTMLScriptElement), which
 don't have a .text property. Any javascript wizards around who know of a
 way around this?

 On Monday, 23 June 2014 09:45:43 UTC+1, Mike Innes wrote:

 This looks great. Unfortunately it did break the interactivity on Light
 Table as predicted, but hopefully that'll be easy to fix. But the plots now
 display properly even without JS, which is nice.

 One issue is that plots don't show up well against dark backgrounds –
 you might want to consider adding a white background to them. If not I can
 always add in some custom CSS though.

 On Friday, 20 June 2014 23:24:14 UTC+1, Daniel Jones wrote:

 I've just merged branches in Gadfly and Compose that I've been working
 on for a while, and I'm going to tag a new version relatively soon.

 For the most part things should work as before. But if you are
 explicitly using the D3 backend or Compose, then I'm about to break your
 code. Sorry.

 Here's a brief explanation of the changes:
 http://nbviewer.ipython.org/gist/dcjones/4d3088d74db6a83b12d3

 If you run into problems after the update, please let me know. I
 realize I've been a little inattentive to issues lately, but with this out
 of the way, I'm going to focus on working through those.




Re: [julia-users] Re: Experimental package documentation package.

2014-06-27 Thread Mike Innes
This looks great! When I get the chance I'll look into integrating this
with Light Table, so that if you evaluate a function with a docstring it's
automatically applied.


On 27 June 2014 14:50, Michael Hatherly michaelhathe...@gmail.com wrote:

 *Update:*

 Some major changes have been made to the package. Package documentation is
 now extracted from doc strings in source files rather than external
 markdown files.

 Output formats include:

- plain text: basically markdown files that are viewable reasonably
easily directly from Github.


- html: currently with minimal formatting and everything on a single
page. Hopefully this will soon change to something a bit more modern.


- helpdb.jl files: allowing package documentation to be read from the
REPL, LightTable (Jewel), and IJulia. Perhaps from others as well, haven't
checked

 Feedback always welcome.

 -- Mike



Re: [julia-users] Re: Experimental package documentation package.

2014-06-27 Thread Mike Innes
What would be great is if Docile could have its own help database, i.e. a
global dict mapping arbitrary objects to Markdown AST objects. Then you can
patch the Base help() function to look there first and fall back to the
helpdb.

That's roughly the approach we're heading for in Base anyway, so it'd make
sense to prototype it here – and that way we can get rich docs going across
LT and the REPL.
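
As a very rough sketch of what I mean (the names here are placeholders, not
Docile's actual API):

# global table from documented objects (functions, modules, ...) to their docs
const DOC_DB = Dict{Any,Any}()

setdoc!(obj, docs) = (DOC_DB[obj] = docs; obj)

# look here first; `nothing` means fall back to the existing helpdb
getdoc(obj) = haskey(DOC_DB, obj) ? DOC_DB[obj] : nothing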


[julia-users] GSoC: Julia IDE Progress update

2014-06-29 Thread Mike Innes
Hey all,

I've released the latest version of the Julia environment 
https://github.com/one-more-minute/Jupiter-LT I'm building. There are a 
whole bunch of improvements but the main ones are:

   - Support for latex completions (\alpha etc.)
   - Support for graphics including Gadfly and Images.jl (though until my 
   patch is released you'll need to Pkg.checkout(Gadfly) to get 
   interactivity)
   - Rewritten and improved autocomplete system, which now completes 
   package names in Pkg functions and paths in include statements, and can be 
   extended to support anything else
   - Support for accessing methods and docs both while on a function and 
   within its parentheses
   - Auto-detection of the module you're working in
   - Links and highlighted lines for error messages
   - Semantic highlighting in the june night theme
   - Highlighting support for string interpolation
   - Full support for unicode
   - More documentation
   - Tabs are restored after restarting, like Sublime
   - Several new and improved rough edges

I also want to shout out to all the people who have tried this out so far, 
given feedback, and/or sent me PRs – every bit of enthusiasm really makes a 
big difference, so thank you.

– Mike


Re: [julia-users] Re: GSoC: Julia IDE Progress update

2014-06-29 Thread Mike Innes
Technically you do need the Jewel.jl package, but this should be installed
for you when the client boots the first time. If this isn't working you can
of course run Pkg.add(Jewel) by hand – let me know if there are any
issues with that and I'll see what I can do.


On 29 June 2014 18:22, Johan Sigfrids johan.sigfr...@gmail.com wrote:

 Do you need to add some package Julia side to make this work?


 On Sunday, June 29, 2014 12:46:21 PM UTC+3, Mike Innes wrote:

 Hey all,

 I've released the latest version of the Julia environment
 https://github.com/one-more-minute/Jupiter-LT I'm building. There are
 a whole bunch of improvements but the main ones are:

- Support for latex completions (\alpha etc.)
- Support for graphics including Gadfly and Images.jl (though until
my patch is released you'll need to Pkg.checkout(Gadfly) to get
interactivity)
- Rewritten and improved autocomplete system, which now completes
package names in Pkg functions and paths in include statements, and can be
extended to support anything else
- Support for accessing methods and docs both while on a function and
within its parentheses
- Auto-detection of the module you're working in
- Links and highlighted lines for error messages
- Semantic highlighting in the june night theme
- Highlighting support for string interpolation
- Full support for unicode
- More documentation
- Tabs are restored after restarting, like Sublime
- Several new and improved rough edges

 I also want to shout out to all the people who have tried this out so
 far, given feedback, and/or sent me PRs – every bit of enthusiasm really
 makes a big difference, so thank you.

 – Mike




Re: [julia-users] Re: GSoC: Julia IDE Progress update

2014-06-29 Thread Mike Innes
I'm sorry that you're having problems – unfortunately Windows has been a
big source of issues, but making the process smoother is a high priority.
At some point I'll get an installer/bundle together which should help.

J Luis, in your case I suspect that Light Table hasn't downloaded the
latest version of the plugin for some reason. If you go back onto the
installed tab of the plugin manager, can you check that Jewel is on
0.6.2? If not, clicking update should be enough, although be sure to
disconnect Julia first to avoid problems there.

Johan, this is less likely to apply to you but it might be worth checking.

I'll see if I can reproduce these errors and get back to you.


On 29 June 2014 19:02, J Luis jmfl...@gmail.com wrote:

 I cannot make it work either (local Julia build Win7)

 Just try a
 *println(blabla)*
 WARNING: Jewel: key not found: module
  in getindex at dict.jl:615
  in anonymous at C:\j\.julia\v0.3\Jewel\src\LightTable\completions.jl:4
 WARNING: Jewel: key not found: module
  in getindex at dict.jl:615
  in anonymous at C:\j\.julia\v0.3\Jewel\src\LightTable\completions.jl:4
 WARNING: Jewel: key not found: module
  in getindex at dict.jl:615
  in anonymous at C:\j\.julia\v0.3\Jewel\src\LightTable\completions.jl:4
 WARNING: Jewel: key not found: module
  in getindex at dict.jl:615
  in anonymous at C:\j\.julia\v0.3\Jewel\src\LightTable\completions.jl:4
 WARNING: Jewel: key not found: module
  in getindex at dict.jl:615
  in anonymous at C:\j\.julia\v0.3\Jewel\src\LightTable\completions.jl:4
 WARNING: Jewel: key not found: module
  in getindex at dict.jl:615
  in anonymous at C:\j\.julia\v0.3\Jewel\src\LightTable\completions.jl:4
 WARNING: Jewel: key not found: module
  in getindex at dict.jl:615
  in anonymous at C:\j\.julia\v0.3\Jewel\src\LightTable\eval.jl:32
  in anonymous at C:\j\.julia\v0.3\Jewel\src\LightTable\eval.jl:28
  in anonymous at C:\j\.julia\v0.3\Jewel\src\LightTable\eval.jl:32
  in anonymous at C:\j\.julia\v0.3\Jewel\src\LightTable\eval.jl:28



 Domingo, 29 de Junho de 2014 18:54:43 UTC+1, Johan Sigfrids escreveu:

 Well, apparently it had installed Jewel, but running Pkg.update() did
 install an update for it. It is still throwing an error though:

 ERROR: connect: address not available (EADDRNOTAVAIL)
  in connect! at socket.jl:583
  in ltconnect at C:\Users\admin\.julia\v0.3\Jewel\src\LightTable/
 LightTable.jl:30
  in server at C:\Users\admin\.julia\v0.3\Jewel\src\LightTable/
 LightTable.jl:17
  in server at C:\Users\admin\.julia\v0.3\Jewel\src\Jewel.jl:13
  in include at boot.jl:244
  in include_from_node1 at loading.jl:128
  in process_options at client.jl:285
  in _start at client.jl:354
 while loading 
 C:\Users\admin\AppData\Local\LightTable\plugins\Jewel\jl\init.jl,
 in expression starting on line 26


 On Sunday, June 29, 2014 8:35:06 PM UTC+3, Mike Innes wrote:

 Technically you do need the Jewel.jl package, but this should be
 installed for you when the client boots the first time. If this isn't
 working you can of course run Pkg.add(Jewel) by hand – let me know if
 there are any issues with that and I'll see what I can do.


 On 29 June 2014 18:22, Johan Sigfrids johan.s...@gmail.com wrote:

 Do you need to add some package Julia side to make this work?


 On Sunday, June 29, 2014 12:46:21 PM UTC+3, Mike Innes wrote:

 Hey all,

 I've released the latest version of the Julia environment
 https://github.com/one-more-minute/Jupiter-LT I'm building. There
 are a whole bunch of improvements but the main ones are:

- Support for latex completions (\alpha etc.)
- Support for graphics including Gadfly and Images.jl (though
until my patch is released you'll need to Pkg.checkout(Gadfly) to get
interactivity)
- Rewritten and improved autocomplete system, which now completes
package names in Pkg functions and paths in include statements, and 
 can be
extended to support anything else
- Support for accessing methods and docs both while on a function
and within its parentheses
- Auto-detection of the module you're working in
- Links and highlighted lines for error messages
- Semantic highlighting in the june night theme
- Highlighting support for string interpolation
- Full support for unicode
- More documentation
- Tabs are restored after restarting, like Sublime
- Several new and improved rough edges

 I also want to shout out to all the people who have tried this out so
 far, given feedback, and/or sent me PRs – every bit of enthusiasm really
 makes a big difference, so thank you.

 – Mike





Re: [julia-users] Re: GSoC: Julia IDE Progress update

2014-06-29 Thread Mike Innes
Brilliant, I'll add a note to the instructions so that others don't get
caught out by this.

Light Table automatically installs dependencies of a plugin, but seems to
only install the minimum required version as opposed to the most recent,
which is really odd behavior.


On 29 June 2014 19:52, J Luis jmfl...@gmail.com wrote:

 Mike, yes, you were right, thanks. Updated and now it works (I had the
 impression that it warned me to update some time ago so didn't bother to
 check it now)

 Joaquim

 Domingo, 29 de Junho de 2014 19:40:19 UTC+1, Johan Sigfrids escreveu:

 I updated all of Mike Innes' plugins (none of which were up to date) and
 now it works. :D

 On Sunday, June 29, 2014 9:24:45 PM UTC+3, Mike Innes wrote:

 I'm sorry that you're having problems – unfortunately Windows has been a
 big source of issues, but making the process smoother is a high priority.
 At some point I'll get an installer/bundle together which should help.

 J Luis, in your case I suspect that Light Table hasn't downloaded the
 latest version of the plugin for some reason. If you go back onto the
 installed tab of the plugin manager, can you check that Jewel is on
 0.6.2? If not, clicking update should be enough, although be sure to
 disconnect Julia first to avoid problems there.

 Johan, this is less likely to apply to you but it might be worth
 checking.

 I'll see if I can reproduce these errors and get back to you.


 On 29 June 2014 19:02, J Luis jmf...@gmail.com wrote:

 I cannot make it work either (local Julia build Win7)

 Just try a
 *println(blabla)*
 WARNING: Jewel: key not found: module
  in getindex at dict.jl:615
  in anonymous at C:\j\.julia\v0.3\Jewel\src\LightTable\completions.jl:4
 WARNING: Jewel: key not found: module
  in getindex at dict.jl:615
  in anonymous at C:\j\.julia\v0.3\Jewel\src\LightTable\completions.jl:4
 WARNING: Jewel: key not found: module
  in getindex at dict.jl:615
  in anonymous at C:\j\.julia\v0.3\Jewel\src\LightTable\completions.jl:4
 WARNING: Jewel: key not found: module
  in getindex at dict.jl:615
  in anonymous at C:\j\.julia\v0.3\Jewel\src\LightTable\completions.jl:4
 WARNING: Jewel: key not found: module
  in getindex at dict.jl:615
  in anonymous at C:\j\.julia\v0.3\Jewel\src\LightTable\completions.jl:4
 WARNING: Jewel: key not found: module
  in getindex at dict.jl:615
  in anonymous at C:\j\.julia\v0.3\Jewel\src\LightTable\completions.jl:4
 WARNING: Jewel: key not found: module
  in getindex at dict.jl:615
  in anonymous at C:\j\.julia\v0.3\Jewel\src\LightTable\eval.jl:32
  in anonymous at C:\j\.julia\v0.3\Jewel\src\LightTable\eval.jl:28
  in anonymous at C:\j\.julia\v0.3\Jewel\src\LightTable\eval.jl:32
  in anonymous at C:\j\.julia\v0.3\Jewel\src\LightTable\eval.jl:28



 Domingo, 29 de Junho de 2014 18:54:43 UTC+1, Johan Sigfrids escreveu:

 Well, apparently it had installed Jewel, but running Pkg.update() did
 install an update for it. It is still throwing an error though:

 ERROR: connect: address not available (EADDRNOTAVAIL)
  in connect! at socket.jl:583
  in ltconnect at C:\Users\admin\.julia\v0.3\Jewel\src\LightTable/
 LightTable.jl:30
  in server at C:\Users\admin\.julia\v0.3\Jewel\src\LightTable/
 LightTable.jl:17
  in server at C:\Users\admin\.julia\v0.3\Jewel\src\Jewel.jl:13
  in include at boot.jl:244
  in include_from_node1 at loading.jl:128
  in process_options at client.jl:285
  in _start at client.jl:354
 while loading C:\Users\admin\AppData\Local\L
 ightTable\plugins\Jewel\jl\init.jl, in expression starting on line 26


 On Sunday, June 29, 2014 8:35:06 PM UTC+3, Mike Innes wrote:

 Technically you do need the Jewel.jl package, but this should be
 installed for you when the client boots the first time. If this isn't
 working you can of course run Pkg.add(Jewel) by hand – let me know if
 there are any issues with that and I'll see what I can do.


 On 29 June 2014 18:22, Johan Sigfrids johan.s...@gmail.com wrote:

 Do you need to add some package Julia side to make this work?


 On Sunday, June 29, 2014 12:46:21 PM UTC+3, Mike Innes wrote:

 Hey all,

 I've released the latest version of the Julia environment
 https://github.com/one-more-minute/Jupiter-LT I'm building.
 There are a whole bunch of improvements but the main ones are:

- Support for latex completions (\alpha etc.)
- Support for graphics including Gadfly and Images.jl (though
until my patch is released you'll need to Pkg.checkout(Gadfly) to 
 get
interactivity)
- Rewritten and improved autocomplete system, which now
completes package names in Pkg functions and paths in include 
 statements,
and can be extended to support anything else
- Support for accessing methods and docs both while on a
function and within its parentheses
- Auto-detection of the module you're working in
- Links and highlighted lines for error messages
- Semantic highlighting in the june night theme
- Highlighting support for string interpolation
- Full support for unicode

Re: [julia-users] Re: GSoC: Julia IDE Progress update

2014-06-30 Thread Mike Innes
Alireza – I think you've won the weirdest error competition. This looks
like some kind of low-level graphics issue, so if it's caused by the plugin
something very strange is going on. Does it still happen when Julia is
disconnected? What about if you type in a non-Julia file?

The other Mike is spot on about the tab settings, I've added some more
detail about configuration here
https://github.com/one-more-minute/Jupiter-LT/wiki/Settings-%26-Configuration
.


On 30 June 2014 12:21, Ivo Balbaert ivo.balba...@gmail.com wrote:

 Hi Mike,

 I use : julia version 0.3.0-prerelease+2809
 on Windows 8 - 64 bit

 Ivo

 Op maandag 30 juni 2014 09:00:09 UTC+2 schreef Michael Hatherly:

 Hi Ivo,

 Which version of julia are you using, v0.2.1 or v0.3? latex_symbols looks
 like it's only a v0.3 feature.

 -- other Mike

 On Monday, 30 June 2014 08:56:03 UTC+2, Ivo Balbaert wrote:

 Hi Mike,

 It looks very nice, but I still have an issue.
 I followed all recommendations, have updated all plugins, on startup
 after Spinning up a Julia client, I always get the error:

  Couldn't connect to Julia


 ERROR: latex_symbols not defined
  in anonymous at no file


  in include at boot.jl:244
  in include_from_node1 at loading.jl:128
  in include at boot.jl:244
  in include_from_node1 at loading.jl:128
  in reload_path at loading.jl:152
  in _require at loading.jl:67
  in require at loading.jl:51
  in include at boot.jl:244


  How could I fix this?

 Thanks,
 Ivo




 Op zondag 29 juni 2014 11:46:21 UTC+2 schreef Mike Innes:

 Hey all,

 I've released the latest version of the Julia environment
 https://github.com/one-more-minute/Jupiter-LT I'm building. There
 are a whole bunch of improvements but the main ones are:

- Support for latex completions (\alpha etc.)
- Support for graphics including Gadfly and Images.jl (though until
my patch is released you'll need to Pkg.checkout(Gadfly) to get
interactivity)
- Rewritten and improved autocomplete system, which now completes
package names in Pkg functions and paths in include statements, and can 
 be
extended to support anything else
- Support for accessing methods and docs both while on a function
and within its parentheses
- Auto-detection of the module you're working in
- Links and highlighted lines for error messages
- Semantic highlighting in the june night theme
- Highlighting support for string interpolation
- Full support for unicode
- More documentation
- Tabs are restored after restarting, like Sublime
- Several new and improved rough edges

 I also want to shout out to all the people who have tried this out so
 far, given feedback, and/or sent me PRs – every bit of enthusiasm really
 makes a big difference, so thank you.

 – Mike




Re: [julia-users] Re: GSoC: Julia IDE Progress update

2014-06-30 Thread Mike Innes
PSA: I've just realised that installing the plugin doesn't change the
default theme as I had thought. I've added an instruction to the readme,
but if you installed before that you'll want to see here
https://github.com/one-more-minute/Jupiter-LT/wiki/Settings-%26-Configuration
to change the theme to June. (This is only strictly necessary if you want
plotting to work, but the error features do look better with either of the
June themes).

I've also added a few more pages of documentation which may be worth
skimming through (wiki https://github.com/one-more-minute/Jupiter-LT/wiki
).


On 30 June 2014 13:16, Mike Innes mike.j.in...@gmail.com wrote:

 Alireza – I think you've won the weirdest error competition. This looks
 like some kind of low-level graphics issue, so if it's caused by the plugin
 something very strange is going on. Does it still happen when Julia is
 disconnected? What about if you type in a non-Julia file?

 The other Mike is spot on about the tab settings, I've added some more
 detail about configuration here
 https://github.com/one-more-minute/Jupiter-LT/wiki/Settings-%26-Configuration
 .


 On 30 June 2014 12:21, Ivo Balbaert ivo.balba...@gmail.com wrote:

 Hi Mike,

 I use : julia version 0.3.0-prerelease+2809
 on Windows 8 - 64 bit

 Ivo

 Op maandag 30 juni 2014 09:00:09 UTC+2 schreef Michael Hatherly:

 Hi Ivo,

 Which version of julia are you using, v0.2.1 or v0.3? latex_symbols
 looks like it's only a v0.3 feature.

 -- other Mike

 On Monday, 30 June 2014 08:56:03 UTC+2, Ivo Balbaert wrote:

 Hi Mike,

 It looks very nice, but I still have an issue.
 I followed all recommendations, have updated all plugins, on startup
 after Spinning up a Julia client, I always get the error:

  Couldn't connect to Julia


 ERROR: latex_symbols not defined
  in anonymous at no file


  in include at boot.jl:244
  in include_from_node1 at loading.jl:128
  in include at boot.jl:244
  in include_from_node1 at loading.jl:128
  in reload_path at loading.jl:152
  in _require at loading.jl:67
  in require at loading.jl:51
  in include at boot.jl:244


  How could I fix this?

 Thanks,
 Ivo




 Op zondag 29 juni 2014 11:46:21 UTC+2 schreef Mike Innes:

 Hey all,

 I've released the latest version of the Julia environment
 https://github.com/one-more-minute/Jupiter-LT I'm building. There
 are a whole bunch of improvements but the main ones are:

- Support for latex completions (\alpha etc.)
- Support for graphics including Gadfly and Images.jl (though
until my patch is released you'll need to Pkg.checkout(Gadfly) to get
interactivity)
- Rewritten and improved autocomplete system, which now completes
package names in Pkg functions and paths in include statements, and 
 can be
extended to support anything else
- Support for accessing methods and docs both while on a function
and within its parentheses
- Auto-detection of the module you're working in
- Links and highlighted lines for error messages
- Semantic highlighting in the june night theme
- Highlighting support for string interpolation
- Full support for unicode
- More documentation
- Tabs are restored after restarting, like Sublime
- Several new and improved rough edges

 I also want to shout out to all the people who have tried this out so
 far, given feedback, and/or sent me PRs – every bit of enthusiasm really
 makes a big difference, so thank you.

 – Mike





Re: [julia-users] Re: GSoC: Julia IDE Progress update

2014-06-30 Thread Mike Innes
Ok, I need to correct my instructions: to get interactivity in Gadfly you 
must use Pkg.checkout(Compose), not Pkg.checkout(Gadfly). Checking out 
Gadfly too won't hurt, but isn't mandatory. I can't guarantee that either 
will be stable so if there are problems just call Pkg.free(Compose) (same 
for Gadfly if applicable) and it should return to a non-interactive but 
stable state.

Johan, running Pkg.checkout(Compose) should fix your issue.

On Monday, 30 June 2014 14:37:14 UTC+1, Johan Sigfrids wrote:

 I can't get Gadfly to plot anything. It gives me this error:

 WARNING: Jewel: pad_inner not defined
  in render at C:\Users\admin\.julia\v0.3\Gadfly\src\Gadfly.jl:643
  in writemime at C:\Users\admin\.julia\v0.3\Gadfly\src\Gadfly.jl:736
  in sprint at io.jl:465
  in display_result at 
 C:\Users\admin\.julia\v0.3\Jewel\src\LightTable/LightTable.jl:126
  in anonymous at C:\Users\admin\.julia\v0.3\Jewel\src\LightTable\eval.jl:51
  in handle_cmd at 
 C:\Users\admin\.julia\v0.3\Jewel\src\LightTable/LightTable.jl:65
  in server at 
 C:\Users\admin\.julia\v0.3\Jewel\src\LightTable/LightTable.jl:22
  in server at C:\Users\admin\.julia\v0.3\Jewel\src\Jewel.jl:13
  in include at boot.jl:244
  in include_from_node1 at loading.jl:128
  in process_options at client.jl:285
  in _start at client.jl:354

 On Monday, June 30, 2014 3:24:43 PM UTC+3, Mike Innes wrote:

 PSA: I've just realised that installing the plugin doesn't change the 
 default theme as I had thought. I've added an instruction to the readme, 
 but if you installed before that you'll want to see here 
 https://github.com/one-more-minute/Jupiter-LT/wiki/Settings-%26-Configuration
  
 to change the theme to June. (This is only strictly necessary if you want 
 plotting to work, but the error features do look better with either of the 
 June themes).

 I've also added a few more pages of documentation which may be worth 
 skimming through (wiki 
 https://github.com/one-more-minute/Jupiter-LT/wiki).


