Stefan, ok. My advice was actually constructive as well. I have tried various 
open source projects over the years, and several of them were broken. 
But then I simply stopped using them when I was not satisfied.

I think it is clear that Julia's development model could be improved. But 
unless a company hires full-time developers to work on Julia, a change 
in the development model is not easily done.

Cheers

Tobias 

On Monday, December 29, 2014 at 22:14:07 UTC+1, Stefan Karpinski wrote:
>
> Let's please take Dan's comments as a constructive critique (which it is), 
> rather than an attack. I know Dan personally and happen to know that this 
> is where he's coming from.
>
> On Mon, Dec 29, 2014 at 3:58 PM, Tobias Knopp <[email protected]> wrote:
>
>> So you dislike Julia and encountered several bugs. Reading your posts, it 
>> sounds as if you want to blame someone for that. If you are not satisfied 
>> with Julia, simply do not use it.
>>
>> And seriously: you cannot compare Julia with a project that has Google 
>> behind it. It's clear that they have a "more clear" development model 
>> and more documentation. The same goes for Rust. Julia is from and for 
>> researchers. And there are several people who are very satisfied with how 
>> Julia evolves (including me).
>>
>> Tobias
>>
>>
>> On Monday, December 29, 2014 at 21:37:33 UTC+1, Dan Luu wrote:
>>>
>>> Here are a few responses to the stuff in this thread, plus Keno's 
>>> comment on HN. 
>>>
>>> The Travis stats are only for builds against master, i.e., only for 
>>> things that got merged. BTW, this is mentioned in the linked post. For 
>>> every project listed in http://danluu.com/broken-builds/, only the 
>>> "main" branch on GitHub was used, with the exception of a couple of 
>>> projects where that didn't make sense (IIRC, Scala has some weird 
>>> setup, as does D). "Using Travis less" doesn't really make sense in 
>>> that context. I heard that response a lot from people in various 
>>> communities when I wrote that post, but from checking the results, the 
>>> projects with better Travis results are generally more rigorous, and 
>>> on average the methodology is biased against the projects with the 
>>> highest uptimes. There are exceptions, of course. 
>>>
>>> I have a "stable" 0.3 build I use for all my Julia scripts, and IIRC 
>>> that's where I saw the dates issue with Gadfly. I dunno, maybe I 
>>> should only use older releases? But if I go to the Julia download 
>>> page, the options are 0.3.4, 0.2.1, and 0.1.2. This might not be true, 
>>> but I'm guessing that most packages don't work with 0.2.1 or 0.1.2. I 
>>> haven't tried with 0.3.4 since I haven't touched Julia for a while. 
>>> It's possible that the issue is now fixed, but the issue is still open 
>>> and someone else has commented that they're seeing the same problem. 
>>>
>>> Sorry, I'm not being a good open source citizen and filing bugs, but 
>>> when you run into four bugs while writing a half-hour script, filing 
>>> bugs is a drag on productivity. A comment I've gotten here and 
>>> elsewhere is basically "of course languages have bugs!". But there 
>>> have been multiple instances where I've run into more bugs in an hour 
>>> of Julia than I've ever hit with Scala and Go combined, and Scala is 
>>> even known for being buggy! Between Scala and Go, I've probably spent 
>>> 5x or 10x the time I've spent in Julia. Just because some number will 
>>> be non-zero doesn't mean that all non-zero numbers are the same. There 
>>> are various reasons that's not a fair comparison. I'm just saying that 
>>> I expect to hit maybe one bug per hour while writing Julia, and maybe 
>>> one bug per year for most languages, even pre-1.0 Go. 
>>>
>>> I don't think 40 vs. 50 really changes the argument, but of course 
>>> I've been drifting down in GitHub's ranking since I haven't done any 
>>> Julia work lately while other people are committing code. 
>>>
>>> I don't think it's inevitable that language code is inscrutable. If I 
>>> grep through the Go core code (excluding tests, but including 
>>> whitespace), 9% of lines are pure comment lines and 16% contain 
>>> comments. It could use more comments, but it has enough comments (and 
>>> descriptive function and variable names) that I can go into most files 
>>> and understand the code. 
>>>
>>> It sounds like, as things stand, there isn't a good story for writing 
>>> robust Julia programs. There are bugs, and exceptions will happen. 
>>> Putting aside that `catch` non-deterministically fails to catch, 
>>> what's a program supposed to do when some bug in Base causes a method 
>>> to throw a bounds error? You've said that the error handling strategy 
>>> is Go-like, but I basically never get a panic in Go (I actually can't 
>>> recall it ever having happened, although it's possible I've gotten one 
>>> at some point). That's not even close to being true in Julia. 
>>> Terminating is fine for scripts where I just want to get at the source 
>>> of the bug and fix it, but it's not so great for programs that 
>>> shouldn't ever crash or corrupt data. Is the answer just "don't write 
>>> stuff like that in Julia"? 
>>>
>>> On Mon, Dec 29, 2014 at 1:38 PM, Stefan Karpinski <[email protected]> 
>>> wrote: 
>>> > There are lots of things in this post that are very legitimate 
>>> > complaints, but also other things I find frustrating. 
>>> > 
>>> > On-point 
>>> > 
>>> > Testing & coverage could be much better. Some parts of Base were 
>>> > written a long time ago, before we wrote tests for new code. Those 
>>> > can have a scary lack of test coverage. Testing of Julia packages 
>>> > ranges from non-existent to excellent. This also needs a lot of 
>>> > work. I agree that the current way of measuring coverage is nearly 
>>> > useless. We need a better approach. 
>>> > 
>>> > The package manager really, really needs an overhaul. This is my 
>>> > fault and I take full responsibility for it. We've been waiting a 
>>> > frustratingly long time for libgit2 integration to be ready to use. 
>>> > Last I checked, I think there was still a Windows bug pending. 
>>> > 
>>> > Julia's uptime on Travis isn't as high as I would like it to be. 
>>> > There have been a few periods (one of which Dan unfortunately hit) 
>>> > when Travis was broken for weeks. This sucks, and it's a relief 
>>> > whenever we fix the build after a period like that. Fortunately, 
>>> > since that particularly bad couple of weeks, there hasn't been 
>>> > anything like that, even on Julia master, and we've never had Julia 
>>> > stable (release-0.3 currently) broken for any significant amount of 
>>> > time. 
>>> > 
>>> > Documentation of Julia internals. This is getting a bit better with 
>>> > the developer documentation that has recently been added, but 
>>> > Julia's internals are pretty inscrutable. I'm not convinced that 
>>> > many other programming language implementations are any better about 
>>> > this, but that doesn't mean we shouldn't improve this a lot. 
>>> > 
>>> > Frustrating 
>>> > 
>>> > Mystery Unicode bug - Dan, I've been hearing about this for months 
>>> > now. Nobody has filed any issues with UTF-8 decoding in years (I 
>>> > just checked). The suspense is killing me - what is this bug? Please 
>>> > file an issue, no matter how vague it may be. Hell, that entire 
>>> > throwaway script can just be the body of the issue, and other people 
>>> > can pick it apart for specific bugs. 
>>> > 
>>> > The REPL rewrite, among other things, added tests to the REPL. Yes, 
>>> > it was a disruptive transition, but the old REPL needed to be 
>>> > replaced. It was a massive pile of hacks around GNU readline and was 
>>> > incomprehensible and impossible to test. Complaining about the 
>>> > switch to the new REPL, which is actually tested, seems misplaced. 
>>> > 
>>> > Unlike Python, catching exceptions in Julia is not considered a 
>>> > valid way to do control flow. Julia's philosophy here is closer to 
>>> > Go's than to Python's - if an exception gets thrown, it should only 
>>> > ever be because the caller screwed up, and the program may 
>>> > reasonably panic. You can use try/catch to handle such a situation 
>>> > and recover, but any Julia API that requires you to do this is a 
>>> > broken API. So the fact that 
>>> > 
>>> >> When I grepped through Base to find instances of actually catching 
>>> >> an exception and doing something based on the particular exception, 
>>> >> I could only find a single one. 
>>> > 
>>> > actually means that the one instance is a place where we're doing it 
>>> > wrong and hacking around something we know to be broken. The next 
>>> > move is to get rid of that one instance, not to add more code like 
>>> > this. The UDP thing is a problem and needs to be fixed. 
>>> > 
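[The Go-style philosophy Stefan refers to can be sketched in Go itself. The names below are illustrative, not from the thread: an anticipated failure comes back as an ordinary error value the caller must check, while panic is reserved for programmer error.]

```go
package main

import (
	"errors"
	"fmt"
)

// parsePort returns an error value instead of throwing an exception: a bad
// input is an expected condition, so the caller checks the error rather
// than wrapping the call in a catch block.
func parsePort(s string) (int, error) {
	var p int
	if _, err := fmt.Sscanf(s, "%d", &p); err != nil || p < 1 || p > 65535 {
		return 0, errors.New("invalid port: " + s)
	}
	return p, nil
}

func main() {
	// The caller handles failure as a value, not via control-flow exceptions.
	if p, err := parsePort("8080"); err == nil {
		fmt.Println("ok:", p) // prints "ok: 8080"
	}
	if _, err := parsePort("not-a-port"); err != nil {
		fmt.Println("error:", err)
	}
}
```

In this style a library API that forced callers to use try/catch for an expected failure would be considered broken, which is the point Stefan makes about the one catch site in Base.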
>>> > The business about fixing bugs getting Dan into the top 40 is 
>>> > weird. It's not quite accurate - Dan is #47 by commits (I'm assuming 
>>> > that's the metric here) with 28 commits, so he's in the top 50 but 
>>> > not the top 40. There are 23 people who have 100 commits or more, 
>>> > and that's roughly the group I would consider to be the "core 
>>> > devs". This paragraph is frustrating because it gives the, in my 
>>> > opinion, unfair impression that not many people are working on 
>>> > Julia. Having 23+ people actively working on a programming language 
>>> > implementation is a lot. 
>>> > 
>>> > Ranking how likely Travis builds are to fail by language doesn't 
>>> > seem meaningful. A huge part of this is how aggressively each 
>>> > project uses Travis. We automatically test just about everything, 
>>> > even completely experimental branches. Julia packages can turn 
>>> > Travis testing on with the flip of a switch. So lots of things are 
>>> > broken on Travis because we've made it easy to use. We should, of 
>>> > course, fix these things, but other projects having higher uptime 
>>> > numbers doesn't imply that they're more reliable - it probably just 
>>> > means they're using Travis less. 
>>> > 
>>> > In general, a lot of Dan's issues only crop up if you are using 
>>> > Julia master. The Gadfly dates regression is probably like this, and 
>>> > the two weeks of Travis failures was only on master during a "chaos 
>>> > month" - i.e. a month where we make lots of reckless changes, 
>>> > typically right after releasing a stable version (in this case it 
>>> > was right after 0.3 came out). These days, I've seen a lot of people 
>>> > using Julia 0.3 for work and it's pretty smooth (package management 
>>> > is by far the biggest issue, and I just take care of that myself). 
>>> > If you're a normal language user, you definitely should not be 
>>> > using Julia master. 
>>>
>>
>
