11 seconds seems like an awfully long time. In the days of the slow REPL, when 
Julia compiled itself upon starting up, that's about how long startup took. 
What does your `versioninfo()` show?


> On Nov 23, 2014, at 8:37 PM, Tim Holy <[email protected]> wrote:
> 
>> On Sunday, November 23, 2014 04:23:18 PM Christian Peel wrote:
>> The 11 seconds was entirely spent in compiling the 6 functions in the file
>> that I was working on.  I had no 'using' or 'include' commands in the file;
>> I think the time was entirely compilation.
> 
> You must have used `include` or similar to load the file itself. Is the 11 
> seconds counting from after the file was loaded?
> 
> It's a little strange that it would take 11 seconds to compile 6 functions. 
> Presumably it was compiling a bunch of base code, too---the flip side of 
> Julia's support for many different types is that many functions are not 
> pre-instantiated ahead of time. If you have certain routines you use 
> heavily, you can create a file called userimg.jl in base/ and include 
> precompile statements in it. (See precompile.jl for examples.) That might 
> let your code start faster.
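(A minimal sketch of the userimg.jl approach Tim describes -- the function name and signature below are illustrative, not from this thread:)

```julia
# Hypothetical entries for base/userimg.jl.  Each `precompile(f, argtypes)`
# call compiles that method while the system image is built, so the first
# real call at runtime doesn't pay the JIT cost.
mysolve(A, b) = A \ b   # stand-in for a routine you use heavily

precompile(mysolve, (Matrix{Float64}, Vector{Float64}))
```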
> 
> That said, I myself have problems that take 30-60s to actually get started 
> doing useful computations. But those import a _lot_ of code.
> 
> --Tim
> 
> 
>> 
>> I would be happy with the mode you suggest.  I see it as a development
>> mode: ideally, in addition to quick compilation, changed files would be
>> detected and automatically reloaded, so that the 'compile' part of the
>> edit-compile-test cycle becomes very short and we essentially iterate on
>> an edit-test cycle.  Then we could have a separate compilation mode, in
>> which LLVM can do all kinds of fancy tricks and it's OK that it takes 11
>> seconds to compile a relatively small module.
>> 
>> I'm glad to hear about Jameson's caching work; I think this is important.
>> 
>> Chris
>> 
>>> On Sunday, November 23, 2014 12:21:43 PM UTC-8, Tim Holy wrote:
>>> I think Christian has a good point.
>>> 
>>> The time in Julia does depend on how many extra packages, etc., one
>>> loads. In Jameson's up-and-coming caching work, packages should get
>>> compiled the first time you use them, and thereafter be very fast to
>>> load. But if your package has a large codebase of its own, this will
>>> only get you so far.
>>> 
>>> Christian, how much of that 11 seconds was _parsing_ (statements like
>>> `using` or `include`) vs. the time to _compile_ (the delay when you run
>>> something like `runtest(A)`, where `A` is the input)? If it's mostly the
>>> latter, then I wonder if it would be helped by having a mode in which
>>> compilation is less specialized, and therefore fewer methods need to be
>>> built.
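(A hedged sketch of how one might separate those two costs -- the example function is generic, not from the thread. The first call to a function pays the JIT-compilation price; later calls with the same argument types do not:)

```julia
# Distinguish compile time from run time by timing the same call twice.
sumsq(xs) = sum(x -> x^2, xs)

@time sumsq([1.0, 2.0, 3.0])   # first call: includes JIT compilation
@time sumsq([1.0, 2.0, 3.0])   # second call: just the run time
```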
>>> 
>>> --Tim
>>> 
>>>> On Sunday, November 23, 2014 08:06:34 PM Milan Bouchet-Valat wrote:
>>>>> On Sunday, November 23, 2014 at 10:32 -0800, Christian Peel wrote:
>>>>> Stefan,
>>>>> 
>>>>> Thank you for the explanation of the issues with line numbers and
>>>>> LLVM, and how it compares with Matlab and Julia.
>>>>> 
>>>>> In your recent papers and on http://julialang.org/benchmarks/ we can
>>>>> see that Julia is among the fastest languages to *run*.  I'd be very
>>>>> interested to see the speed with which a language can detect a syntax
>>>>> error.  In an informal test a minute ago, using similar 500-line
>>>>> functions in both languages, Matlab was able to detect my syntax error
>>>>> in under a second; Julia took around 11 seconds.  So Matlab has an
>>>>> edit-test cycle, and it's very quick.  Julia has what feels very much
>>>>> like an edit-compile-test cycle; the compile step is very visible.
>>>>> Matlab allows me to make thousands of edits per day and quickly
>>>>> develop high-level code; Julia development feels slower overall, even
>>>>> with its faster runtime.  Tamas made a suggestion above which I
>>>>> interpreted as asking for a flag that would trade faster syntax-error
>>>>> checking for slower runtime.  I agree with you that it would be ideal
>>>>> to have 'always fast, always good debug info', but if for some reason
>>>>> you can't get the syntax-error checking time down, I would support a
>>>>> flag such as Tamas suggests.
>>>> 
>>>> Stefan was talking about getting backtraces when runtime errors happen.
>>>> As regards simple syntax errors, I don't think anything implies Julia
>>>> must be slow to report them: just like Matlab, it needs to parse the
>>>> code before doing anything with it. The compilation step, which involves
>>>> more work in Julia, happens later (and only if the code is syntactically
>>>> correct).
>>>> 
>>>> But most of the time your code probably won't contain syntax errors; in
>>>> that case, improving compilation times may prove trickier.
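(To illustrate the point that syntax checking is purely a parser-level operation -- a hedged sketch using current Julia names; in 2014 `Meta.parse` was the bare `parse`:)

```julia
# A syntax error is caught by the parser alone, before any compilation.
is_valid_syntax(str) = try
    Meta.parse(str)           # throws Meta.ParseError on bad syntax
    true
catch err
    err isa Meta.ParseError || rethrow()
    false
end

is_valid_syntax("x = 1 + 2")  # parses fine
is_valid_syntax("x = 1 2")    # "extra token" -- rejected by the parser
```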
>>>> 
>>>> 
>>>> Regards
>>>> 
>>>>> Thanks for your work!
>>>>> 
>>>>> Chris
>>>>> 
>>>>> On Sunday, November 23, 2014 5:44:32 AM UTC-8, Stefan Karpinski wrote:
>>>>>> I'd rather shoot for always fast, always good debug info :-). We're
>>>>>> not very far off, we just need to keep upstreaming improvements to
>>>>>> LLVM.
>>>>>> 
>>>>>> On Sun, Nov 23, 2014 at 8:16 AM, Tamas Papp <[email protected]> wrote:
>>>>>>> On Sun, Nov 23 2014, Stefan Karpinski <[email protected]> wrote:
>>>>>>>> On Sun, Nov 23, 2014 at 6:02 AM, Christian Peel
>>>>>>>> <[email protected]> wrote:
>>>>>>>>> Milan,
>>>>>>>>> 
>>>>>>>>> Thanks for the comments.  I also am confident that info about what
>>>>>>>>> line errors occur on will improve.
>>>>>>>> 
>>>>>>>> For some context, the reason traditional dynamic systems like e.g.
>>>>>>>> Matlab or Python don't have this issue is that they use interpreters
>>>>>>>> and literally walk through a fairly faithful representation of the
>>>>>>>> code you wrote, including line number annotations. Thus, when an
>>>>>>>> error occurs, it's a trivial matter to report the line number you
>>>>>>>> are currently interpreting -- the parsed code representation is the
>>>>>>>> exact thing you're operating on when the error occurs. Moreover, to
>>>>>>>> give a full stack trace, you simply need to walk back through the
>>>>>>>> stack data structure that the interpreter maintains as it executes
>>>>>>>> the program.
>>>>>>>> 
>>>>>>>> In a JIT system like Julia, things are quite different. By the time
>>>>>>>> your code is running, it no longer closely resembles the code you
>>>>>>>> wrote, there is no interpreter, and the "stack" is the actual stack.
>>>>>>>> All of this is what gives you C-like performance. When an error
>>>>>>>> occurs, the error message and stack trace have to be determined from
>>>>>>>> the machine code that is currently executing, and the stack trace is
>>>>>>>> derived from *the* stack, not some data structure. All this is
>>>>>>>> further complicated by the fact that LLVM's JIT does not tend to
>>>>>>>> produce debug data quite as well as the static compilation
>>>>>>>> infrastructure does. People don't seem to use debuggers on JITed
>>>>>>>> code or particularly care about backtraces, so we're pushing the
>>>>>>>> envelope here, but things are getting better quite steadily.
>>>>>>> 
>>>>>>> I am wondering if the approach used by Common Lisp implementations
>>>>>>> could be applicable in Julia. In Common Lisp, one can have various
>>>>>>> optimization and debug level settings, e.g.
>>>>>>> 
>>>>>>>     (optimize (speed 3) (safety 0) (debug 0))
>>>>>>> 
>>>>>>> will produce very fast code (comparable to C on modern
>>>>>>> implementations if type info is available), with the trade-off that
>>>>>>> one has very little debug information, while
>>>>>>> 
>>>>>>>     (optimize (debug 3))
>>>>>>> 
>>>>>>> does the opposite. Of course, the function needs to be recompiled for
>>>>>>> these to take effect, but given that you can always modify the image
>>>>>>> in a backtrace (AFAIK you can do the same in Julia), it works fine in
>>>>>>> practice.
>>>>>>> 
>>>>>>> Best,
>>>>>>> 
>>>>>>> Tamas
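(For comparison, a hedged Julia-side sketch of the same speed/safety trade-off: Julia expresses it with annotations on blocks rather than per-function declarations. `@inbounds` and `@fastmath` are real macros; the functions are illustrative:)

```julia
# Safe default: every v[i] access is bounds-checked.
function sumvec(v)
    s = 0.0
    for i in 1:length(v)
        s += v[i]
    end
    s
end

# Roughly analogous to (optimize (speed 3) (safety 0)): skip bounds
# checks and allow aggressive floating-point rearrangement.
function sumvec_fast(v)
    s = 0.0
    @inbounds @fastmath for i in 1:length(v)
        s += v[i]
    end
    s
end
```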
> 
