Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-07-02 Thread Joseph Rushton Wakeling

On Monday, 1 July 2013 at 17:45:59 UTC, Joakim wrote:
Then they should choose a mixed license like the Mozilla Public 
License or CDDL, which keeps OSS files open while allowing 
linking with closed source files within the same application.  
If they instead chose a license that allows closing all source, 
one can only assume they're okay with it.  In any case, I could 
care less if they're okay with it or not, I was just surprised 
that they chose the BSD license and then were mad when someone 
was thinking about closing it up.


The trouble is, even very weak copyleft licenses like MPL and 
CDDL can result in licensing incompatibilities.  Only by granting 
very permissive licensing terms can you guarantee that your 
software will be usable by the full range of free software 
alternatives.


For what it's worth, I have also made the argument on many 
occasions that projects shouldn't pick permissive licenses unless 
they're happy to see their work turned into proprietary products. 
 But if a developer releases software under a liberal license, 
saying "I'm doing this so that everyone can use it but please 
keep it free," I think they have a right to be pissed off when 
someone ignores their moral request.


There's no doubt that even if they chose a permissive license 
like the MIT or BSD license, these communities work primarily 
with OSS code and tend to prefer that code be open.  I can 
understand if they then tend to rebuff attempts to keep source 
from them, purely as a social phenomenon, however irrational it 
may be.  That's why I asked Walter if he had a similar opinion, 
but he didn't care.


Yes, the conscious choice of an extremely permissive license for 
druntime and Phobos is a different situation.  It's completely 
right in this case to facilitate all forms of development and 
re-use, under all licensing scenarios.


I still think it's ridiculous to put your code under an 
extremely permissive license and then get mad when people take 
you up on it, particularly since they never publicly broadcast 
that they want everything to be open.  It is only after you 
talk to them that you realize that the BSD gang are often as 
much freetards as the GPL gang, just in their own special way. 
;)


It's a shame that you feel the need to resort to name-calling 
because someone has come to a considered moral or strategic 
position that's different from yours.  It also doesn't really 
help your position -- you're better off just getting on with 
developing software using your strategy and showing how it serves 
free software in the long run.


I wouldn't call closing source that they legally allowed to be 
closed antisocial.  I'd call their contradictory, angry 
response to what their license permits antisocial. :)


Personally speaking, I find there are a lot of things in life 
which I prefer to be legally permitted, but still nevertheless 
consider antisocial -- and I don't think there need be a 
contradiction there.



http://www.phoronix.com/scan.php?page=article&item=sprewell_licensing

Note that this article was written when Android, using a 
similar hybrid model, had less than 10% of the almost billion 
users it has today, and I was thinking up these ideas years 
before, long before I'd heard of Android.


My project was a small one, so it couldn't be a resounding 
proof of my time-limited version of the hybrid model, but it 
worked for its purpose and I'm fairly certain it will be the 
dominant model someday. :)


Thanks for the interesting read.  I think you have a point 
inasmuch as this is a model that clearly works very well from a 
business perspective where apps are concerned -- and if you're 
going to have open core, I'd rather it be one where the closed 
parts are guaranteed to eventually be opened up.  Of course, this 
is not the same as moral approval :-)


What I'd say, though, is that what works for apps isn't going to 
be what works for languages or their core development tools.  
Most apps seem to be single- or small-team developments rather 
than community projects; they target niche requirements, and 
ultimately they're being delivered to a target audience that's 
used to paying for software.


On the other hand with a language your overwhelming goal is to 
grow the user community, and (unless you're Microsoft or Apple, 
who can dictate terms to software developers) the best way by far 
to do that is to secure the language quality while keeping the 
development tools available free of charge.  You'll get more 
mileage out of monetising other things -- e.g. bug-fixing 
services, support, consultancy -- than you will out of 
restricting access to tools that enable people to use the 
language effectively.


You also have to consider the user perspective.  If I was offered 
a new language whose tools were delivered with open core terms, I 
would almost certainly refuse -- I'd feel unable to trust that I 
wouldn't at some point find all future releases locked up, 
leaving me with the 

Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-07-02 Thread John Colvin

On Tuesday, 2 July 2013 at 14:40:42 UTC, Joakim wrote:
You're really splitting hairs at this point.  If you _allow_ 
almost anything, as most permissive licenses like the BSD or 
MIT license do, nobody is going to then ask permission of the 
community for every possible thing they might do, to see who 
"wants" it, particularly since the community hasn't stated 
anything publicly.  Since the community likely has a variety of 
opinions, as you yourself just admitted, such a poll of "wants" 
would likely be meaningless anyway.


Unless the particular community puts out a public statement of 
"wants" that most of them can get behind, which very few of 
them do, it is silly to talk about what they might "want" which 
isn't in the license.  The license is essentially all that 
matters.


The difference between what people allow and what people want is 
much more significant than just "splitting hairs". However, I 
agree that there is often no coherent set of "wants" in a 
community, which makes it hard to consider them meaningfully.


However, I do believe there's a level of common courtesy that 
should be honoured when using other people's work in a 
significant project, including at the very least making them 
aware that you will be doing so (anonymously, if secrecy is 
important). I know many people will just take whatever they can 
get and give as little as they can, but that doesn't make it 
right.



I suspect we will never see eye to eye on this. You are convinced 
that the letter of the licence is all that matters; I am not.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-07-02 Thread Joakim

On Tuesday, 2 July 2013 at 09:59:19 UTC, John Colvin wrote:
This is all a bit moot as I was making a general point, not 
specifically related to BSD. However, in their case, I think it 
is perfectly fine that some don't like closed source 
personally, but as a group they decide to endorse it. A group 
where everyone is forced to agree on everything isn't an 
organisation, it's a cult.
Of course there will be a wide variety of opinions within any 
community, but the point is that those who push such 
permissively licensed software yet privately dislike the closing 
of source, then lash out at those who try to do it, are being 
silly.



I think what I'm really trying to say is this:

A license is a description of what you will *allow*, not what 
you *want*.
I personally like to take into account what people *want* me 
to do, not just what they will *allow* me to do.
You're really splitting hairs at this point.  If you _allow_ 
almost anything, as most permissive licenses like the BSD or MIT 
license do, nobody is going to then ask permission of the 
community for every possible thing they might do, to see who 
"wants" it, particularly since the community hasn't stated 
anything publicly.  Since the community likely has a variety of 
opinions, as you yourself just admitted, such a poll of "wants" 
would likely be meaningless anyway.


Unless the particular community puts out a public statement of 
"wants" that most of them can get behind, which very few of them 
do, it is silly to talk about what they might "want" which isn't 
in the license.  The license is essentially all that matters.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-07-02 Thread John Colvin

On Tuesday, 2 July 2013 at 05:21:35 UTC, Joakim wrote:

On Monday, 1 July 2013 at 21:29:21 UTC, John Colvin wrote:

On Monday, 1 July 2013 at 17:45:59 UTC, Joakim wrote:
I wouldn't call closing source that they legally allowed to 
be closed antisocial.  I'd call their contradictory, angry 
response to what their license permits antisocial. :)


Just because you're doing something legal doesn't mean you're 
not being antisocial.
Read my previous post.  Of course it's possible for a license 
to technically allow something but for the authors to 
disapprove of it; that doesn't make it antisocial to simply do 
something they disapprove of.  But, as I said earlier, the BSD 
crowd does not publicly broadcast that they disapprove of 
closing source.  In fact, they will occasionally link to press 
releases about contributions back from corporations who closed 
the source.


For people using the BSD license to then get mad when yet 
another person comes along to close source is the only 
"antisocial" behavior I'm seeing here.  It'd be one thing if 
they publicly said that while the BSD license allows closing 
source, they're against it.  Feel free to provide such a public 
statement, you won't find it.  It's only after you talk to them 
privately about closing source that you realize how many of 
them are against it.


As I've said repeatedly, I don't much care that their behavior 
is so "antisocial," :) as long as its legal to close source.  
But it is pretty funny to cast that tag on somebody else, who 
is simply doing what their license allows and what their press 
releases trumpet.


It's a pretty psychopathic attitude to conflate legality and 
morality; it's effectively saying "I have the moral right to 
do whatever I can get away with".
On the contrary, it's a pretty psychopathic attitude to make 
such claims about morality when


1. nobody was talking about morality

2. the BSD crowd doesn't publicly talk about their problems 
with closing source either, whether they think it's immoral or 
antisocial or whatever.


This is all a bit moot as I was making a general point, not 
specifically related to BSD. However, in their case, I think it 
is perfectly fine that some don't like closed source personally, 
but as a group they decide to endorse it. A group where everyone 
is forced to agree on everything isn't an organisation, it's a 
cult.


I think what I'm really trying to say is this:

A license is a description of what you will *allow*, not what you 
*want*.
I personally like to take into account what people *want* me to 
do, not just what they will *allow* me to do.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-07-02 Thread Iain Buclaw
On 1 July 2013 18:45, Joakim  wrote:
>> In other cases there may be a broad community consensus that builds up
>> around a piece of software, that this work should be shared and contributed
>> to as a common good (e.g. X.org).  Attempts to close it up violate those
>> social norms and are rightly seen as an attack on that community and the
>> valuable commons they have cultivated.
>
> There's no doubt that even if they chose a permissive license like the MIT
> or BSD license, these communities work primarily with OSS code and tend to
> prefer that code be open.  I can understand if they then tend to rebuff
> attempts to keep source from them, purely as a social phenomenon, however
> irrational it may be.  That's why I asked Walter if he had a similar
> opinion, but he didn't care.
>
> I still think it's ridiculous to put your code under an extremely permissive
> license and then get mad when people take you up on it, particularly since
> they never publicly broadcast that they want everything to be open.  It is
> only after you talk to them that you realize that the BSD gang are often as
> much freetards as the GPL gang, just in their own special way. ;)
>

To be 'retarded' is to be held back or hindered in the development or
progress of an action or process.  F/OSS comes with no such hindrance,
unlike some other model that people falsely advertise as Everything
Open and Free.  All the Time!*

--
Iain Buclaw

*except whatever I am selling.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-07-02 Thread Joseph Rushton Wakeling

On Monday, 1 July 2013 at 21:20:39 UTC, Walter Bright wrote:

On 7/1/2013 2:04 PM, Brad Roberts wrote:
Actually, Boost was specifically chosen because it didn't 
require attribution
when redistributing. If BSD hadn't had that clause we probably 
would be using it

instead.


That was indeed another important reason for it. But we were 
well aware of and approved of the idea that people could take 
it and make closed source versions.


It was always clear (and logical) to me why the core libraries 
were permissively licensed, but the 
no-need-to-give-attribution-for-non-source-distribution feature 
was a subtlety I hadn't considered before.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-07-02 Thread Joseph Rushton Wakeling

On Sunday, 30 June 2013 at 03:29:06 UTC, Walter Bright wrote:

On 6/29/2013 5:08 AM, Joseph Rushton Wakeling wrote:
True, distribution was mainly by physical mail. There was some 
via BBS's and Usenet, but these were severely limited by 
bandwidth.


I'd receive bug reports by fax, paper listings, and mailed 
floppies.



This was also the heyday of the BBC Micro in UK schools, and I 
remember well the shelves full of books of sample programs in BBC 
Basic. We had lots of fun typing them up, working out how they 
worked, and then twisting them to our more evil designs.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-07-01 Thread Joakim

On Monday, 1 July 2013 at 21:29:21 UTC, John Colvin wrote:

On Monday, 1 July 2013 at 17:45:59 UTC, Joakim wrote:
I wouldn't call closing source that they legally allowed to be 
closed antisocial.  I'd call their contradictory, angry 
response to what their license permits antisocial. :)


Just because you're doing something legal doesn't mean you're 
not being antisocial.
Read my previous post.  Of course it's possible for a license to 
technically allow something but for the authors to disapprove of 
it; that doesn't make it antisocial to simply do something they 
disapprove of.  But, as I said earlier, the BSD crowd does not 
publicly broadcast that they disapprove of closing source.  In 
fact, they will occasionally link to press releases about 
contributions back from corporations who closed the source.


For people using the BSD license to then get mad when yet another 
person comes along to close source is the only "antisocial" 
behavior I'm seeing here.  It'd be one thing if they publicly 
said that while the BSD license allows closing source, they're 
against it.  Feel free to provide such a public statement, you 
won't find it.  It's only after you talk to them privately about 
closing source that you realize how many of them are against it.


As I've said repeatedly, I don't much care that their behavior is 
so "antisocial," :) as long as its legal to close source.  But it 
is pretty funny to cast that tag on somebody else, who is simply 
doing what their license allows and what their press releases 
trumpet.


It's a pretty psychopathic attitude to conflate legality and 
morality; it's effectively saying "I have the moral right to do 
whatever I can get away with".
On the contrary, it's a pretty psychopathic attitude to make such 
claims about morality when


1. nobody was talking about morality

2. the BSD crowd doesn't publicly talk about their problems with 
closing source either, whether they think it's immoral or 
antisocial or whatever.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-07-01 Thread Walter Bright

On 7/1/2013 2:29 PM, John Colvin wrote:

On Monday, 1 July 2013 at 17:45:59 UTC, Joakim wrote:

I wouldn't call closing source that they legally allowed to be closed
antisocial.  I'd call their contradictory, angry response to what their
license permits antisocial. :)


Just because you're doing something legal doesn't mean you're not being 
antisocial.

It's a pretty psychopathic attitude to conflate legality and morality; it's
effectively saying "I have the moral right to do whatever I can get away with".


(A nit: it is not illegal to break a contract's terms. You're quite right, 
though, that morality and legality are very different things, and are too often 
in contradiction.)


Anyhow, the reason we have written contracts is because people often (and 
honestly) misremember or misinterpret verbally agreed upon terms. Having a 
written record is a lot better when the two parties find themselves in a dispute.


There are many licenses to choose from, and we chose Boost knowing full well 
what was said (and not said) in it about closed source usage.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-07-01 Thread John Colvin

On Monday, 1 July 2013 at 17:45:59 UTC, Joakim wrote:
I wouldn't call closing source that they legally allowed to be 
closed antisocial.  I'd call their contradictory, angry 
response to what their license permits antisocial. :)


Just because you're doing something legal doesn't mean you're not 
being antisocial.


It's a pretty psychopathic attitude to conflate legality and 
morality; it's effectively saying "I have the moral right to do 
whatever I can get away with".


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-07-01 Thread Walter Bright

On 7/1/2013 2:04 PM, Brad Roberts wrote:

On 7/1/13 11:42 AM, Walter Bright wrote:

On 7/1/2013 10:45 AM, Joakim wrote:

Then they should choose a mixed license like the Mozilla Public License or CDDL,
which keeps OSS files open while allowing linking with closed source files
within the same application.  If they instead chose a license that allows
closing all source, one can only assume they're okay with it.  In any case, I
could care less if they're okay with it or not, I was just surprised that they
chose the BSD license and then were mad when someone was thinking about closing
it up.


I should point out that the Boost license was chosen for Phobos specifically
because it allowed
people to copy it and use it for whatever purpose, including making closed
source versions, adapting
them for use with Go :-), whatever.


Actually, Boost was specifically chosen because it didn't require attribution
when redistributing. If BSD hadn't had that clause we probably would be using it
instead.


That was indeed another important reason for it. But we were well aware of and 
approved of the idea that people could take it and make closed source versions.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-07-01 Thread Brad Roberts

On 7/1/13 11:42 AM, Walter Bright wrote:

On 7/1/2013 10:45 AM, Joakim wrote:

Then they should choose a mixed license like the Mozilla Public License or CDDL,
which keeps OSS files open while allowing linking with closed source files
within the same application.  If they instead chose a license that allows
closing all source, one can only assume they're okay with it.  In any case, I
could care less if they're okay with it or not, I was just surprised that they
chose the BSD license and then were mad when someone was thinking about closing
it up.


I should point out that the Boost license was chosen for Phobos specifically 
because it allowed
people to copy it and use it for whatever purpose, including making closed 
source versions, adapting
them for use with Go :-), whatever.


Actually, Boost was specifically chosen because it didn't require attribution when redistributing. 
If BSD hadn't had that clause we probably would be using it instead.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-07-01 Thread Walter Bright

On 7/1/2013 10:45 AM, Joakim wrote:

Then they should choose a mixed license like the Mozilla Public License or CDDL,
which keeps OSS files open while allowing linking with closed source files
within the same application.  If they instead chose a license that allows
closing all source, one can only assume they're okay with it.  In any case, I
could care less if they're okay with it or not, I was just surprised that they
chose the BSD license and then were mad when someone was thinking about closing
it up.


I should point out that the Boost license was chosen for Phobos specifically 
because it allowed people to copy it and use it for whatever purpose, including 
making closed source versions, adapting them for use with Go :-), whatever.


It would be pretty silly of us to complain about that after the fact.

People who don't want closed source versions should use GPL licenses.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-07-01 Thread Joakim
On Monday, 1 July 2013 at 10:15:34 UTC, Joseph Rushton Wakeling 
wrote:

On Sunday, 30 June 2013 at 19:45:06 UTC, Joakim wrote:
OK, glad to hear that you wouldn't be against it.  You'd be 
surprised how many who use permissive licenses still go nuts 
when you propose to do exactly what the license allows, i.e. 
close up parts of the source.


Because people don't just care about the strict legal 
constraints, but also about the social compact around software.


Often people choose permissive licenses because they want to 
ensure other free software authors can use their software 
without encountering the licensing incompatibilities that can 
result from the various forms of copyleft.  Closing up their 
software is rightly seen as an abuse of their goodwill.
Then they should choose a mixed license like the Mozilla Public 
License or CDDL, which keeps OSS files open while allowing 
linking with closed source files within the same application.  If 
they instead chose a license that allows closing all source, one 
can only assume they're okay with it.  In any case, I could care 
less if they're okay with it or not, I was just surprised that 
they chose the BSD license and then were mad when someone was 
thinking about closing it up.


In other cases there may be a broad community consensus that 
builds up around a piece of software, that this work should be 
shared and contributed to as a common good (e.g. X.org).  
Attempts to close it up violate those social norms and are 
rightly seen as an attack on that community and the valuable 
commons they have cultivated.
There's no doubt that even if they chose a permissive license 
like the MIT or BSD license, these communities work primarily 
with OSS code and tend to prefer that code be open.  I can 
understand if they then tend to rebuff attempts to keep source 
from them, purely as a social phenomenon, however irrational it 
may be.  That's why I asked Walter if he had a similar opinion, 
but he didn't care.


I still think it's ridiculous to put your code under an extremely 
permissive license and then get mad when people take you up on 
it, particularly since they never publicly broadcast that they 
want everything to be open.  It is only after you talk to them 
that you realize that the BSD gang are often as much freetards as 
the GPL gang, just in their own special way. ;)


Community anger against legal but antisocial behaviour is 
hardly limited to software, and is a fairly important mechanism 
for ensuring that people behave well towards one another.
I wouldn't call closing source that they legally allowed to be 
closed antisocial.  I'd call their contradictory, angry response 
to what their license permits antisocial. :)


Since you have been so gracious to use such permissive 
licenses for almost all of D, I'm sure someone will try the 
closed/paid experiment someday and see which of us is 
right. :)


Good luck with that :-)

By the way, you mentioned a project of your own where you 
employed the short-term open core model you describe.  Want to 
tell us more about that?  Regardless of differences of opinion, 
it's always good to hear about someone's particular experience 
with a project.
I wrote up an article a couple of years back talking about the 
new hybrid model I used; it's up on Phoronix, and my project is 
mentioned there:


http://www.phoronix.com/scan.php?page=article&item=sprewell_licensing

Note that this article was written when Android, using a similar 
hybrid model, had less than 10% of the almost billion users it 
has today, and I was thinking up these ideas years before, long 
before I'd heard of Android.


My project was a small one, so it couldn't be a resounding proof 
of my time-limited version of the hybrid model, but it worked for 
its purpose and I'm fairly certain it will be the dominant model 
someday. :)


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-07-01 Thread Joseph Rushton Wakeling

On Sunday, 30 June 2013 at 19:45:06 UTC, Joakim wrote:
OK, glad to hear that you wouldn't be against it.  You'd be 
surprised how many who use permissive licenses still go nuts 
when you propose to do exactly what the license allows, i.e. 
close up parts of the source.


Because people don't just care about the strict legal 
constraints, but also about the social compact around software.


Often people choose permissive licenses because they want to 
ensure other free software authors can use their software without 
encountering the licensing incompatibilities that can result from 
the various forms of copyleft.  Closing up their software is 
rightly seen as an abuse of their goodwill.


In other cases there may be a broad community consensus that 
builds up around a piece of software, that this work should be 
shared and contributed to as a common good (e.g. X.org).  
Attempts to close it up violate those social norms and are 
rightly seen as an attack on that community and the valuable 
commons they have cultivated.


Community anger against legal but antisocial behaviour is hardly 
limited to software, and is a fairly important mechanism for 
ensuring that people behave well towards one another.


Since you have been so gracious to use such permissive licenses 
for almost all of D, I'm sure someone will try the closed/paid 
experiment someday and see which of us is right. :)


Good luck with that :-)

By the way, you mentioned a project of your own where you 
employed the short-term open core model you describe.  Want to 
tell us more about that?  Regardless of differences of opinion, 
it's always good to hear about someone's particular experience 
with a project.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-30 Thread CJS
Well, it is in the sense that it _is_ a deficiency of built-in 
AAs for those
who want to be able to use different implementations for 
different use cases,
but it's not something that can actually be fixed, and having 
to use a library
solution isn't exactly all that bad anyway, especially when 
most languages
don't have AAs built-in in the first place. It's just a 
convenience feature.


- Jonathan M Davis


Thanks for the discussion! I thankfully haven't run into problems 
needing highly optimized associative arrays, but it's good to 
know what the limitations are and why.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-30 Thread Jonathan M Davis
On Sunday, June 30, 2013 22:45:04 Steven Schveighoffer wrote:
> On Sun, 30 Jun 2013 22:02:11 -0400, Jonathan M Davis 
> > I know. My point was that that's an inherent problem with built-in AAs
> > that
> > can't be overcome (regardless of how well they're implemented). If you
> > want
> > that level of control, you _have_ to use a library solution.
> 
> OK. I guess I was just reading your statement like it was a "problem" :)

Well, it is in the sense that it _is_ a deficiency of built-in AAs for those 
who want to be able to use different implementations for different use cases, 
but it's not something that can actually be fixed, and having to use a library 
solution isn't exactly all that bad anyway, especially when most languages 
don't have AAs built-in in the first place. It's just a convenience feature.

- Jonathan M Davis


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-30 Thread Steven Schveighoffer
On Sun, 30 Jun 2013 22:02:11 -0400, Jonathan M Davis   
wrote:



On Sunday, June 30, 2013 21:54:08 Steven Schveighoffer wrote:
On Sun, 30 Jun 2013 21:43:53 -0400, Jonathan M Davis wrote:
> But I think that the key issue with swapping out the implementation
> is not whether you can swap out the implementation for your whole
> program but rather being able to choose different implementations
> for different parts of your program.

This would never happen.  AAs are only ever going to be one
implementation.  If you want to use another map type, you will have to
use a struct/class.  I suppose AA's could simply be polymorphic, but I
don't see the benefit.


I know. My point was that that's an inherent problem with built-in AAs
that can't be overcome (regardless of how well they're implemented). If
you want that level of control, you _have_ to use a library solution.


OK. I guess I was just reading your statement like it was a "problem" :)

-Steve


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-30 Thread Jonathan M Davis
On Sunday, June 30, 2013 21:59:45 Steven Schveighoffer wrote:
> On Sun, 30 Jun 2013 21:43:53 -0400, Jonathan M Davis 
> 
> wrote:
> > On Sunday, June 30, 2013 19:20:47 Steven Schveighoffer wrote:
> >> No, the main issue is the current one is runtime-only, and so simple
> >> function calls such as toHash and opCmp cannot be inlined.
> > 
> > Yeah. That's a big problem. We really need to templatize all that -
> > though the
> > current implementation is enough of a mess to make that difficult.
> 
> The current implementation suffers from two problems:
> 
> 1. The compiler doesn't treat T[U] as a direct instantiation of
> AssocArray!(T, U) in all cases.
> 2. The compiler has bugs in terms of pure/@safe/ctfe/etc that it
> "overlooks" when using built-in AAs.  I think those who have tried to make
> a complete library-replacement for AAs have found this out.
> 
> The best path to resolution I think is:
> 
> 1. make a complete replacement for AAs that can be instantiated and
> operate without mapping to the AA syntax.  It may not build, but it should
> be written and bugs against the compiler filed.
> 2. Fix all bugs to make 1. compile/possible
> 3. Switch compiler to use type in 1. whenever AA's are used.
> 
> I believe some have made a very good attempt to do 1 (H.S. Teoh I think?
> maybe someone else)

Yeah. He was working on it and seems to have pretty much decided that the job 
is too big for one man (and/or that he doesn't have enough time). I believe 
that there was a post on D.Learn a few months back where you pointed to where 
he had his changes thus far (so that others could look at it and potentially 
continue his work), but AFAIK, he's given up on the whole thing for now.

- Jonathan M Davis


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-30 Thread Jonathan M Davis
On Sunday, June 30, 2013 21:54:08 Steven Schveighoffer wrote:
> On Sun, 30 Jun 2013 21:43:53 -0400, Jonathan M Davis 
> 
> wrote:
> > But I think that the key issue with swapping out
> > the implementation is not whether you can swap out the implementation
> > for your
> > whole program but rather being able to choose different implementations
> > for
> > different parts of your program.
> 
> This would never happen.  AAs are only ever going to be one
> implementation.  If you want to use another map type, you will have to use
> a struct/class.  I suppose AA's could simply be polymorphic, but I don't
> see the benefit.

I know. My point was that that's an inherent problem with built-in AAs that 
can't be overcome (regardless of how well they're implemented). If you want 
that level of control, you _have_ to use a library solution.

- Jonathan M Davis


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-30 Thread Steven Schveighoffer
On Sun, 30 Jun 2013 21:43:53 -0400, Jonathan M Davis   
wrote:



On Sunday, June 30, 2013 19:20:47 Steven Schveighoffer wrote:



No, the main issue is the current one is runtime-only, and so simple
function calls such as toHash and opCmp cannot be inlined.


Yeah. That's a big problem. We really need to templatize all that -
though the current implementation is enough of a mess to make that
difficult.


The current implementation suffers from two problems:

1. The compiler doesn't treat T[U] as a direct instantiation of  
AssocArray!(T, U) in all cases.
2. The compiler has bugs in terms of pure/@safe/ctfe/etc that it  
"overlooks" when using built-in AAs.  I think those who have tried to make  
a complete library-replacement for AAs have found this out.


The best path to resolution I think is:

1. make a complete replacement for AAs that can be instantiated and  
operate without mapping to the AA syntax.  It may not build, but it should  
be written and bugs against the compiler filed.

2. Fix all bugs to make 1. compile/possible
3. Switch compiler to use type in 1. whenever AA's are used.

I believe some have made a very good attempt to do 1 (H.S. Teoh I think?  
maybe someone else)


-Steve


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-30 Thread Steven Schveighoffer
On Sun, 30 Jun 2013 21:43:53 -0400, Jonathan M Davis   
wrote:



But I think that the key issue with swapping out the implementation
is not whether you can swap out the implementation for your whole
program but rather being able to choose different implementations
for different parts of your program.


This would never happen.  AAs are only ever going to be one  
implementation.  If you want to use another map type, you will have to use  
a struct/class.  I suppose AA's could simply be polymorphic, but I don't  
see the benefit.


-Steve


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-30 Thread Jonathan M Davis
On Sunday, June 30, 2013 19:20:47 Steven Schveighoffer wrote:
> On Sun, 30 Jun 2013 15:51:32 -0400, Jonathan M Davis 
> 
> wrote:
> > On Sunday, June 30, 2013 21:05:41 CJS wrote:
> >> In the talk Andrei seems to mention that D's associative arrays
> >> are lacking in performance somehow. I'm very new to D, but it's
> >> not obvious to me what the shortcoming is. I assume it's that for
> >> some reason it's hard to specialize associative arrays to specific
> >> types to give increased performance in specific cases, but I'm
> >> unclear why that would be difficult. Could someone please
> >> elaborate?
> > 
> > There's one implementation, and you can't swap it out, whereas different
> > use
> > cases may perform better with different implementations. On top of that,
> > the
> > current implementation is rather buggy and fragile, but that's an
> > implementation issue rather than an inherent one.
> 
> No, the main issue is the current one is runtime-only, and so simple
> function calls such as toHash and opCmp cannot be inlined.

Yeah. That's a big problem. We really need to templatize all that - though the 
current implementation is enough of a mess to make that difficult.

> You absolutely can change implementations (Walter did a few years ago from
> tree-based collision resolution to linked-list based).  What you can't do
> is switch to a fully generated AA, or change the compiler-expected API.

Okay. I didn't know that. But I think that the key issue with swapping out 
the implementation is not whether you can swap out the implementation for your 
whole program but rather being able to choose different implementations for 
different parts of your program. If you really care about your containers 
enough to worry about optimizing them for your particular use cases, then 
unless you only have one use case within your program, there's a good chance 
that you're going to want different implementations for different parts of your 
program. With library containers, that's as simple as swapping which one you 
use. With the built-in stuff like AAs, you can't do that. You only get one 
(even if you can make it different across programs).

Now, if you just use library types, you don't have that problem, so in the 
long run, folks who really want to optimize their containers will probably do 
that. And if they _really_ want to optimize their containers, they're probably 
writing them themselves anyway.

Regardless, while having a built-in AA simplifies the common case, it _is_ more 
limiting, and if you really care about how your AAs function, you're going to 
have to use a library solution (even if the built-in AAs have a solid 
implementation).

- Jonathan M Davis
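

For illustration, here is a minimal, hypothetical sketch of the 
kind of library map being discussed (not actual druntime or 
Phobos code; HashMap, hashFn, bucketFor, and byLength are 
invented names). The hash policy is a compile-time alias 
parameter, so each instantiation can use a different 
implementation detail and the compiler can see, and inline, the 
calls:

import std.stdio;

// Hypothetical library hash map: the hash function is a compile-time
// alias parameter, so every use site can choose its own policy.
struct HashMap(K, V, alias hashFn)
{
    private struct Entry { K key; V value; }
    private Entry[][] buckets;   // separate chaining, kept minimal

    private ref Entry[] bucketFor(K key)
    {
        if (buckets.length == 0)
            buckets = new Entry[][](16);   // no rehashing in this sketch
        return buckets[hashFn(key) % buckets.length];
    }

    void opIndexAssign(V value, K key)
    {
        foreach (ref e; bucketFor(key))
            if (e.key == key) { e.value = value; return; }
        bucketFor(key) ~= Entry(key, value);
    }

    V opIndex(K key)
    {
        foreach (e; bucketFor(key))
            if (e.key == key) return e.value;
        assert(0, "key not found");
    }
}

void main()
{
    // Built-in AA: one implementation, fixed by the runtime for the
    // whole program.
    int[string] builtin;
    builtin["one"] = 1;

    // Library map: the hash policy is chosen per instantiation.
    HashMap!(string, int, (string s) => s.length) byLength;
    byLength["one"] = 1;

    assert(builtin["one"] == byLength["one"]);
    writeln("both maps agree");
}

The point of the sketch is only that a library type lets 
different parts of a program pick different policies, which the 
single built-in V[K] cannot do.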


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-30 Thread Steven Schveighoffer
On Sun, 30 Jun 2013 15:51:32 -0400, Jonathan M Davis   
wrote:



On Sunday, June 30, 2013 21:05:41 CJS wrote:

In the talk Andrei seems to mention that D's associative arrays
are lacking in performance somehow. I'm very new to D, but it's
not obvious to me what the shortcoming is. I assume it's that for
some reason it's hard to specialize associative arrays to specific
types to give increased performance in specific cases, but I'm
unclear why that would be difficult. Could someone please
elaborate?


There's one implementation, and you can't swap it out, whereas different
use cases may perform better with different implementations. On top of
that, the current implementation is rather buggy and fragile, but that's
an implementation issue rather than an inherent one.


No, the main issue is the current one is runtime-only, and so simple  
function calls such as toHash and opCmp cannot be inlined.


You absolutely can change implementations (Walter did a few years ago from  
tree-based collision resolution to linked-list based).  What you can't do  
is switch to a fully generated AA, or change the compiler-expected API.


-Steve
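

As a rough, hypothetical sketch of the indirection being 
described (simplified names, not the actual aaA.d 
implementation; runtimeStyleHash and templatedHash are invented 
for the example): the runtime AA only sees a TypeInfo, so 
hashing goes through a virtual call, whereas a templated map 
sees the concrete type and can call the hash function directly.

// Runtime-style path: a virtual call through TypeInfo, opaque to the
// optimizer and therefore not inlinable.
size_t runtimeStyleHash(K)(K key)
{
    return typeid(K).getHash(&key);
}

// Templated path: an ordinary call the compiler is free to inline.
size_t templatedHash(K)(K key)
{
    return hashOf(key);   // hashOf from druntime's object module
}

unittest
{
    assert(runtimeStyleHash("abc") == runtimeStyleHash("abc"));
    assert(templatedHash(42) == templatedHash(42));
}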


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-30 Thread Jonathan M Davis
On Sunday, June 30, 2013 21:05:41 CJS wrote:
> In the talk Andrei seems to mention that D's associative arrays
> are lacking in performance somehow. I'm very new to D, but it's
> not obvious to me what the shortcoming is. I assume it's that for
> some reason it's hard to specialize associative arrays to specific
> types to give increased performance in specific cases, but I'm
> unclear why that would be difficult. Could someone please
> elaborate?

There's one implementation, and you can't swap it out, whereas different use 
cases may perform better with different implementations. On top of that, the 
current implementation is rather buggy and fragile, but that's an 
implementation issue rather than an inherent one.

- Jonathan M Davis


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-30 Thread Joakim

On Sunday, 30 June 2013 at 19:24:54 UTC, Walter Bright wrote:

On 6/30/2013 2:50 AM, Joakim wrote:
I wondered if you have any opinion on such code reuse, if 
someone takes your code and closes it, even if you wouldn't try 
to block it because you have already released it under a 
permissive license.


No, I don't have an opinion on it, other than that I'd rather 
they didn't try to create an incompatible language and still 
call it "D".
OK, glad to hear that you wouldn't be against it.  You'd be 
surprised how many who use permissive licenses still go nuts when 
you propose to do exactly what the license allows, i.e. close up 
parts of the source.


Since you have been so gracious to use such permissive licenses 
for almost all of D, I'm sure someone will try the closed/paid 
experiment someday and see which of us is right. :)


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-30 Thread Walter Bright

On 6/30/2013 2:50 AM, Joakim wrote:

I wondered if you have any opinion on such code reuse, if someone takes your
code and closes it, even if you wouldn't try to block it because you have
already released it under a permissive license.


No, I don't have an opinion on it, other than that I'd rather they didn't try to 
create an incompatible language and still call it "D".




Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-30 Thread CJS
In the talk Andrei seems to mention that D's associative arrays 
are lacking in performance somehow. I'm very new to D, but it's 
not obvious to me what the shortcoming is. I assume it's that for 
some reason it's hard to specialize associative arrays to 
specific types to give increased performance in specific cases, 
but I'm unclear why that would be difficult. Could someone please 
elaborate?


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-30 Thread Joakim

On Sunday, 30 June 2013 at 09:34:14 UTC, Walter Bright wrote:

On 6/29/2013 11:39 PM, Joakim wrote:
What do you think of my idea of segmenting the market though? 
Keep providing a
free-as-in-beer dmd, like you are now, for the people who want 
it, while Remedy
and others who want performance pay for a dmd that puts out 
more performant
code, with those improvements slowly merged back into the free 
dmd over time.


It won't work. Those days are gone.

I disagree.  We'll find out.

If you are not interested in selling a paid compiler yourself, 
I've noted that there's nothing stopping someone else from doing 
this.  They can take the dmd frontend under the Artistic license, 
compile it with the BSD-licensed llvm backend and boost-licensed 
druntime and phobos, and sell a paid compiler, without any 
permission from you or any other D contributors.

You could not do anything legally to stop this, as the permissive 
OSS licenses allow it.  However, as one of the main authors of 
this code, do you have any preference for or against someone 
taking your code to do this?


Part of issuing it under a permissive license is I won't try to 
block someone from doing whatever they want to that is allowed 
by the license.

I understand, but that wasn't exactly my question.

I wondered if you have any opinion on such code reuse, if someone 
takes your code and closes it, even if you wouldn't try to block 
it because you have already released it under a permissive 
license.


Some wouldn't try to close the source if you expressed a 
preference that it not be done (I have no such compunction, if 
the license allows closing source, but others might); I'm just 
wondering if you have an opinion or preference on your source 
being closed up.


Thanks for all the great work you have done on D and the dmd 
compiler.  As much as I'd like to see a commercial 
implementation, it is amazing how much you have given away for 
free. :)


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-30 Thread Walter Bright

On 6/29/2013 11:39 PM, Joakim wrote:

What do you think of my idea of segmenting the market though? Keep providing a
free-as-in-beer dmd, like you are now, for the people who want it, while Remedy
and others who want performance pay for a dmd that puts out more performant
code, with those improvements slowly merged back into the free dmd over time.


It won't work. Those days are gone.


If you are not interested in selling a paid compiler yourself, I've noted that
there's nothing stopping someone else from doing this.  They can take the dmd
frontend under the Artistic license, compile it with the BSD-licensed llvm
backend and boost-licensed druntime and phobos, and sell a paid compiler,
without any permission from you or any other D contributors.

You could not do anything legally to stop this, as the permissive OSS licenses
allow it.  However, as one of the main authors of this code, do you have any
preference for or against someone taking your code to do this?


Part of issuing it under a permissive license is I won't try to block someone 
from doing whatever they want to that is allowed by the license.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-29 Thread Joakim

I was wondering if Walter or Andrei would respond to this thread.

On Saturday, 29 June 2013 at 08:37:48 UTC, Walter Bright wrote:
I agree with your post, I just want to make a couple of minor 
corrections.
What exactly do you agree with Luca about, considering all your 
"minor corrections" basically demolish all his points? ;)


Your C++ history was really interesting, as I first used it in 
'97, right when it was peaking.


ZTC++ was cheap as dirt, and at the time people didn't mind 
paying for compilers. Those days are over, though. People have 
different expectations today.
There's no doubt that developers have been spoiled by all the 
free and shareware tools out there these days.


What do you think of my idea of segmenting the market though?  
Keep providing a free-as-in-beer dmd, like you are now, for the 
people who want it, while Remedy and others who want performance 
pay for a dmd that puts out more performant code, with those 
improvements slowly merged back into the free dmd over time.


If you are not interested in selling a paid compiler yourself, 
I've noted that there's nothing stopping someone else from doing 
this.  They can take the dmd frontend under the Artistic license, 
compile it with the BSD-licensed llvm backend and boost-licensed 
druntime and phobos, and sell a paid compiler, without any 
permission from you or any other D contributors.


You could not do anything legally to stop this, as the permissive 
OSS licenses allow it.  However, as one of the main authors of 
this code, do you have any preference for or against someone 
taking your code to do this?


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-29 Thread Walter Bright

On 6/29/2013 7:56 PM, CJS wrote:

Wow. That's interesting reading. Thanks for the history lesson!


There are other versions of this history, none of which mention the role ZTC++ 
played in C++ attaining critical mass, so I like to repeat my version now and 
then :-)




Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-29 Thread Walter Bright

On 6/29/2013 9:10 AM, Leandro Lucarella wrote:

Even though it's extremely interesting, I think the ZTC++ history from before
open source existed or was really viable (the free software movement started
in 1983, the FSF was founded in 1985 and the open source definition was
published in 1998) is irrelevant when analyzing whether it would be valuable
right now to make the reference compiler partly closed.



Yes, I agree. Things are fundamentally different now.



Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-29 Thread Walter Bright

On 6/29/2013 5:08 AM, Joseph Rushton Wakeling wrote:

On Saturday, 29 June 2013 at 08:37:48 UTC, Walter Bright wrote:

The bottom line was the open source movement was not a very significant force
in the 1980's when C++ gained traction. Open source really exploded around
2000, along with the internet. I wonder if open source perhaps needed the
internet in order to be viable.


That's a very good point.  It's before my time really, but if I understand the
history right, the main way to get hold of copies of stuff like GCC in the early
days was to pay for a set of disks with it on -- and there was no infrastructure
for easily sharing changes.  So neither the free-as-in-beer nor the
free-as-in-freedom advantages were as readily apparent or effective as they are
today.


True, distribution was mainly by physical mail. There was some via BBS's and 
Usenet, but these were severely limited by bandwidth.


I'd receive bug reports by fax, paper listings, and mailed floppies.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-29 Thread CJS

On Saturday, 29 June 2013 at 08:37:48 UTC, Walter Bright wrote:
I agree with your post, I just want to make a couple of minor 
corrections.


On 6/27/2013 4:58 AM, Leandro Lucarella wrote:

Do you really think C++ took off because there are commercial
implementations?


I got into the C++ fray in the 1987-88 time frame. At the time, 
there was a great debate between C++ and Objective-C, and they 
were running neck-and-neck. I was casting about looking for a 
way to get a competitive edge with my C compiler, and 
investigated.


Objective-C was put out by Stepstone. They wanted royalties 
from anyone who implemented a clone, and kept a tight fist over 
the licensing.


C++ only existed in its AT&T cfront implementation. I wrote a 
letter to AT&T's lawyers, asking if I could create a C++ clone, 
and they phoned me up and were very nice. They said sure, and I 
wouldn't have to pay any license or royalties.


So I went with C++. I don't really know if cfront was open 
source at the time or not, but I never looked at its source. I 
think cfront source came with a paid license for unix, but I'm 
not positive.


Anyhow, I wound up implementing the first native C++ compiler 
for the PC. Directly afterward, C++ took off like a rocket. Was 
it because of Zortech C++? I think there's strong evidence it 
was. A lot of programmers turned up their noses at the peasants 
programming on DOS, but that's where the action was in the 
1980's, and ZTC++ had no realistic competitors.


You could also see the results in Usenet. Postings about C++ 
and O-C were neck-and-neck until ZTC++ came out, and then 
things tilted heavily in C++'s favor, and O-C disappeared into 
oblivion (later to be resurrected by Steve Jobs, but that's 
another tale).


ZTC++ was so successful that Borland and Microsoft (according 
to rumor) abandoned their efforts at making a proprietary OOP 
C, and went with C++.


ZTC++ was closed source, as were Borland's Turbo C++ and 
Microsoft C++.



Do you think being a standardized language didn't help?


C++ wasn't standardized until 1998, 10 years later. The 90's 
were pretty much the heyday of C++.


Do you think the fact that there was a free implementation 
around that supported virtually any existing platform didn't 
help? Do you think the fact that it was (almost) compatible 
with C (which was born freeish, since back then software was 
freely shared between universities) didn't help?


ZTC++ was cheap as dirt, and at the time people didn't mind 
paying for compilers. Those days are over, though. People have 
different expectations today.




No. A standard is something that was standardized by a 
standards committee which, ideally, has some credibility to do 
so. C++ is standardized by ISO. I guess Walter and Andrei can 
give you more details, since I think they both were involved in 
the standardization of C++.


I've attended a few ISO C++ meetings, but I never became a 
voting member, and have had pretty much zero influence over the 
direction C++ took after the 1980's.


The bottom line was the open source movement was not a very 
significant force in the 1980's when C++ gained traction. Open 
source really exploded around 2000, along with the internet. I 
wonder if open source perhaps needed the internet in order to 
be viable.


Wow. That's interesting reading. Thanks for the history lesson!



Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-29 Thread Leandro Lucarella
Walter Bright, el 29 de June a las 01:37 me escribiste:
> The bottom line was the open source movement was not a very
> significant force in the 1980's when C++ gained traction. Open
> source really exploded around 2000, along with the internet. I
> wonder if open source perhaps needed the internet in order to be
> viable.

Yes, I think that's the whole point. Without the Internet, open source was
extremely niche; without the resources to distribute it, it was almost
impossible for it to take off, and almost impossible to collaborate, which is
the big difference open source has vs traditional commercial software.

Even though it's extremely interesting, I think the ZTC++ history from before
open source existed or was really viable (the free software movement started
in 1983, the FSF was founded in 1985 and the open source definition was
published in 1998) is irrelevant when analyzing whether it would be valuable
right now to make the reference compiler partly closed.

-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/
--
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
--
THE FIRST LITTLE MONKEY OF THE MILLENNIUM...
-- Crónica TV


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-29 Thread Joseph Rushton Wakeling

On Saturday, 29 June 2013 at 08:37:48 UTC, Walter Bright wrote:
The bottom line was the open source movement was not a very 
significant force in the 1980's when C++ gained traction. Open 
source really exploded around 2000, along with the internet. I 
wonder if open source perhaps needed the internet in order to 
be viable.


That's a very good point.  It's before my time really, but if I 
understand the history right, the main way to get hold of copies 
of stuff like GCC in the early days was to pay for a set of disks 
with it on -- and there was no infrastructure for easily sharing 
changes.  So neither the free-as-in-beer nor the free-as-in-freedom 
advantages were as readily apparent or effective as they are 
today.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-29 Thread Walter Bright

I agree with your post, I just want to make a couple of minor corrections.

On 6/27/2013 4:58 AM, Leandro Lucarella wrote:

Do you really think C++ took off because there are commercial
implementations?


I got into the C++ fray in the 1987-88 time frame. At the time, there was a 
great debate between C++ and Objective-C, and they were running neck-and-neck. I 
was casting about looking for a way to get a competitive edge with my C 
compiler, and investigated.


Objective-C was put out by Stepstone. They wanted royalties from anyone who 
implemented a clone, and kept a tight fist over the licensing.


C++ only existed in its AT&T cfront implementation. I wrote a letter to AT&T's 
lawyers, asking if I could create a C++ clone, and they phoned me up and were 
very nice. They said sure, and I wouldn't have to pay any license or royalties.


So I went with C++. I don't really know if cfront was open source at the time or 
not, but I never looked at its source. I think cfront source came with a paid 
license for unix, but I'm not positive.


Anyhow, I wound up implementing the first native C++ compiler for the PC. 
Directly afterward, C++ took off like a rocket. Was it because of Zortech C++? I 
think there's strong evidence it was. A lot of programmers turned up their noses 
at the peasants programming on DOS, but that's where the action was in the 
1980's, and ZTC++ had no realistic competitors.


You could also see the results in Usenet. Postings about C++ and O-C were 
neck-and-neck until ZTC++ came out, and then things tilted heavily in C++'s 
favor, and O-C disappeared into oblivion (later to be resurrected by Steve Jobs, 
but that's another tale).


ZTC++ was so successful that Borland and Microsoft (according to rumor) 
abandoned their efforts at making a proprietary OOP C, and went with C++.


ZTC++ was closed source, as were Borland's Turbo C++ and Microsoft C++.


Do you think being a standardized language didn't help?


C++ wasn't standardized until 1998, 10 years later. The 90's were pretty much 
the heyday of C++.



Do you think the fact that there was a free implementation around that
supported virtually any existing platform didn't help? Do you think the fact
that it was (almost) compatible with C (which was born freeish, since back
then software was freely shared between universities) didn't help?


ZTC++ was cheap as dirt, and at the time people didn't mind paying for 
compilers. Those days are over, though. People have different expectations today.




No. A standard is something that was standardized by a standards
committee which, ideally, has some credibility to do so. C++ is
standardized by ISO. I guess Walter and Andrei can give you more
details, since I think they both were involved in the standardization of
C++.


I've attended a few ISO C++ meetings, but I never became a voting member, and 
have had pretty much zero influence over the direction C++ took after the 1980's.


The bottom line was the open source movement was not a very significant force in 
the 1980's when C++ gained traction. Open source really exploded around 2000, 
along with the internet. I wonder if open source perhaps needed the internet in 
order to be viable.




Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-27 Thread Iain Buclaw
On 27 June 2013 14:17, Joakim  wrote:
> As I said earlier, I'm done with this debate.
>
> There is no point talking to people who make blatantly ignorant statements
> like, "Binary blobs are the exception rather than the rule in Linux, and
> many hardware vendors would flat out say 'no' to doing any support on them."
> This assertion is so ignorant of the facts, it's laughable. :)

Fact: That quote you find laughable isn't my opinion.  It was what
Linus said during a Q&A after one of his talks (at least, if I
remember it correctly ;).


> I have no idea what to make of Iain's talking about gdc or that it is a 
> "one-man team"
> in response to my prediction that ldc could go closed/paid and obsolete dmd:
> there is absolutely no connection between the topics.
>

I suppose that was my ignorance there; I assumed that you at least
*knew* a little bit of the history behind the development of D1/D2.  I'm
sure people would raise their eyebrows and sigh to have the age-old
question "why don't we just drop development of DMD and move it to X?"
asked again.  :o)


--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-27 Thread Leandro Lucarella
Joakim, on 27 June at 15:17 you wrote:
> As I said earlier, I'm done with this debate.
> 
> There is no point talking to people who make blatantly ignorant
> statements like, "Binary blobs are the exception rather than the
> rule in Linux, and many hardware vendors would flat out say 'no' to
> doing any support on them."  This assertion is so ignorant of the
> facts, it's laughable. :) I have no idea what to make of Iain's
> talking about gdc or that it is a "one-man team" in response to my
> prediction that ldc could go closed/paid and obsolete dmd: there is
> absolutely no connection between the topics.
> 
> As for Luca's long response, it is filled with basic mistakes, silly
> and incorrect rehashes of material already covered, or trivial

How convenient it is to put a lot of adjectives together and not a single
fact to say someone is wrong. Almost as convenient as calling people
religious zealots when you run out of arguments. :)

And it is so funny that you keep talking about the D contributors not
participating in the thread when evidently you don't know who the
contributors are.

I'm just so glad that you are done with this debate... My eyes were
hurting from reading so much crap.

Bye, bye! Have fun with Visual C++!

-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/
--
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
--
Did you know that originally a Danish guy invented the burglar alarm?
Unfortunately, it got stolen.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-27 Thread Iain Buclaw
On 27 June 2013 14:40, Joakim  wrote:
> On Thursday, 27 June 2013 at 13:25:06 UTC, John Colvin wrote:
>>
>> On Thursday, 27 June 2013 at 13:18:01 UTC, Joakim wrote:
>>>
>>> Look, I get it, you guys are religious zealots- you tip your hand when
>>> you allude to ethical or moral reasons for using open source, a crazy idea
>>> if there ever was one- and you will come up with all kinds of silly
>>> arguments in the face of overwhelming evidence that _pure_ open source has
>>> failed.
>>
>>
>> Most replies to you have been quite measured and reasonable. I'm not sure
>> what justifies you calling people zealots.
>
> Read the rest of the sentence which you quoted, my reasons are stated.  When
> I come across so many arguments that are _factually_ wrong- "the Artistic
> license doesn't allow closing source," "most linux installs don't use binary
> blobs"- I know I'm dealing with religious zealots.

Which is quite amusing, as those quotes aren't found anywhere in this
thread. :o)

--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-27 Thread Joakim

On Thursday, 27 June 2013 at 13:25:06 UTC, John Colvin wrote:

On Thursday, 27 June 2013 at 13:18:01 UTC, Joakim wrote:
Look, I get it, you guys are religious zealots- you tip your 
hand when you allude to ethical or moral reasons for using 
open source, a crazy idea if there ever was one- and you will 
come up with all kinds of silly arguments in the face of 
overwhelming evidence that _pure_ open source has failed.


Most replies to you have been quite measured and reasonable. 
I'm not sure what justifies you calling people zealots.
Read the rest of the sentence which you quoted, my reasons are 
stated.  When I come across so many arguments that are 
_factually_ wrong- "the Artistic license doesn't allow closing 
source," "most linux installs don't use binary blobs"- I know I'm 
dealing with religious zealots.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-27 Thread Dicebot

On Thursday, 27 June 2013 at 13:18:01 UTC, Joakim wrote:
There is no point talking to people who make blatantly ignorant 
statements


Yeah, I keep wondering why someone even bothered to waste time 
explaining all this to someone who is incapable of both providing 
his own reasoning and studying his opponent's.


I hope that anyone who has followed D history is perfectly aware 
of the numbers that prove how beneficial the transition to 
community-based open development was.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-27 Thread John Colvin

On Thursday, 27 June 2013 at 13:18:01 UTC, Joakim wrote:

As I said earlier, I'm done with this debate.

There is no point talking to people who make blatantly ignorant 
statements like, "Binary blobs are the exception rather than 
the rule in Linux, and many hardware vendors would flat out say 
'no' to doing any support on them."  This assertion is so 
ignorant of the facts, it's laughable. :) I have no idea what 
to make of Iain's talking about gdc or that it is a "one-man 
team" in response to my prediction that ldc could go 
closed/paid and obsolete dmd: there is absolutely no connection 
between the topics.


As for Luca's long response, it is filled with basic mistakes, 
silly and incorrect rehashes of material already covered, or 
trivial twits, like the fact that D has a spec but isn't 
standardized by any international body.  For example, I 
originally pointed out several examples of other projects with 
existing commercial models and I was told that they're not 
"closed."  I responded that I never said that they were all 
closed, only commercial, and I'm now told that since my 
proposed model for D is closed, I'm "misstating" myself. (Slaps 
head)


These responses seem written by people who have a very tenuous 
grasp on the text I wrote.


Look, I get it, you guys are religious zealots- you tip your 
hand when you allude to ethical or moral reasons for using open 
source, a crazy idea if there ever was one- and you will come 
up with all kinds of silly arguments in the face of 
overwhelming evidence that _pure_ open source has failed.  
Instead, you claim success when hybrid models bring more open 
source into the world, then nonsensically reverse course and 
claim that either they aren't actually hybrid or that such 
hybrid models are not really "open source," that it's a lie to 
call it that. (Slaps head again)


I'm not trying to convince you zealots.  You want to keep 
banging your heads against the wall for the greater glory of 
your religion, have fun with that.


I'm simply putting forward a case for D going the route of the 
most successful projects these days, by using a hybrid model, 
with a unique variation that I came up with :) and have 
successfully used for a project of my own.


Those who aren't religious about _pure_ open source can 
consider what I've proposed and my evidence and see if it makes 
sense to them.


Most replies to you have been quite measured and reasonable. I'm 
not sure what justifies you calling people zealots.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-27 Thread Joakim

As I said earlier, I'm done with this debate.

There is no point talking to people who make blatantly ignorant 
statements like, "Binary blobs are the exception rather than the 
rule in Linux, and many hardware vendors would flat out say 'no' 
to doing any support on them."  This assertion is so ignorant of 
the facts, it's laughable. :) I have no idea what to make of 
Iain's talking about gdc or that it is a "one-man team" in 
response to my prediction that ldc could go closed/paid and 
obsolete dmd: there is absolutely no connection between the 
topics.


As for Luca's long response, it is filled with basic mistakes, 
silly and incorrect rehashes of material already covered, or 
trivial twits, like the fact that D has a spec but isn't 
standardized by any international body.  For example, I 
originally pointed out several examples of other projects with 
existing commercial models and I was told that they're not 
"closed."  I responded that I never said that they were all 
closed, only commercial, and I'm now told that since my proposed 
model for D is closed, I'm "misstating" myself. (Slaps head)


These responses seem written by people who have a very tenuous 
grasp on the text I wrote.


Look, I get it, you guys are religious zealots- you tip your hand 
when you allude to ethical or moral reasons for using open 
source, a crazy idea if there ever was one- and you will come up 
with all kinds of silly arguments in the face of overwhelming 
evidence that _pure_ open source has failed.  Instead, you claim 
success when hybrid models bring more open source into the world, 
then nonsensically reverse course and claim that either they 
aren't actually hybrid or that such hybrid models are not really 
"open source," that it's a lie to call it that. (Slaps head again)


I'm not trying to convince you zealots.  You want to keep banging 
your heads against the wall for the greater glory of your 
religion, have fun with that.


I'm simply putting forward a case for D going the route of the 
most successful projects these days, by using a hybrid model, 
with a unique variation that I came up with :) and have 
successfully used for a project of my own.


Those who aren't religious about _pure_ open source can consider 
what I've proposed and my evidence and see if it makes sense to 
them.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-27 Thread Joseph Rushton Wakeling

On Thursday, 27 June 2013 at 08:21:12 UTC, Joakim wrote:
I'm familiar with its arguments from a summary, not 
particularly interested in reading the whole thing.


You know, I think I see what your problem is ... :-)


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-27 Thread Leandro Lucarella
Joakim, on 26 June at 17:52 you wrote:
> On Wednesday, 26 June 2013 at 11:08:17 UTC, Leandro Lucarella wrote:
> >Joakim, on 25 June at 23:37 you wrote:
> >>I don't know the views of the key contributors, but I wonder if
> >>they
> >>would have such a knee-jerk reaction against any paid/closed
> >>work.
> >
> >Against being paid no, against being closed YES. Please don't even
> >think
> >about it. It was a hell of a ride trying to make D more open to
> >step back now.
> I suggest you read my original post more carefully.  I have not
> suggested closing up the entire D toolchain, as you seem to imply.

Well, I'm not. I'm sticking with what you said.

> I have suggested working on optimization patches in a closed-source
> manner and providing two versions of the D compiler: one that is
> faster, closed, and paid, with these optimization patches, another
> that is slower, open, and free, without the optimization patches.

I know, and that's what my e-mail was all about. I don't know why you
got another impression. I even ended the e-mail saying that it is also a
very bad business model to just offer a paid, better optimizer.

> >What we need is companies paying to people to improve the
> >compiler and toolchain. This is slowly starting to happen, in
> >Sociomantic we are already 2 people dedicating some time to
> >improve D as
> >part of our job (Don and me).
> Thanks for the work that you and Don have done with Sociomantic.
> Why do you think more companies don't do this?  My point is that if

Because D is a new language and isn't as polished as other programming
languages. I think Sociomantic was a bit crazy to adopt it so early,
really (my personal opinion). But it worked well (we had to put in quite
a lot of extra effort, but I guess the time it saves in daily usage
paid for it).

> there were money coming in from a paid compiler, Walter could fund
> even more such work.

Well, I think with a paid compiler you remove one of the main reasons
why early adopters can be tempted to use D: because it is free. What I'm
sure of is that Sociomantic wouldn't have picked D if they had had to pay
at that time, because it was a startup and startups usually don't have
much money at first.

> >We need more of this, and to get this, we need companies to start
> >using D, and to get this, we need professionalism (I agree 100% with
> >Andrei on this one). It's a bootstrap effort, and it's not like
> >volunteers need more time to be professional, it's just that you have
> >to want to make the jump.
> I think this ignores the decades-long history we have with open
> source software by now.  It is not merely "wanting to make the
> jump," most volunteers simply do not want to do painful tasks like
> writing documentation or cannot put as much time into development
> when no money is coming in.  Simply saying "We have to try harder to
> be professional" seems naive to me.

Well, I guess we have very different views about the decades-long
history of open source software, because I know tons of examples of
applications being free, without "commercial implementations" or "paid
modules", and very few with a more commercial model. What's more, the few
examples I know of "paid modules" are quite recent, not decades-old.

> >I think it's way better to do less stuff but with higher quality;
> >nobody is asking people for more time, it's just changing the focus
> >a bit, at least for some time. Again, this is only bootstrapping, and
> >that's always hard and painful. We need to make the jump to make
> >companies comfortable using D, then things will start rolling by
> >themselves.
> If I understand your story right, the volunteers need to put a lot
> of effort into "bootstrapping" the project to be more professional,
> companies will see this and jump in, then they fund development from
> then on out?  It's possible, but is there any example you have in
> mind?  The languages that go this completely FOSS route tend not to
> have as much adoption as those with closed implementations, like
> C++.

Are you kidding me? Python, Ruby, PHP, Perl. Do I have to say more than
that?  Do you really think C++ took off because there are commercial
implementations? Do you think being a standardized language didn't help?
Do you think the fact that there was a free implementation around that
supported virtually any existing platform didn't help? Do you think
the fact that it was (almost) compatible with C (which was born freeish,
since back then software was freely shared between universities) didn't
help?

> >First of all, your examples are completely wrong. The projects you
> >are mentioning are 100% free, with no closed components (except for
> >components done by third-party).
> You are misstating what I said: I said "commercial," not "closed,"

You said closed. Not just in the previous e-mail, but you just repeated
it in this one:
> I have suggested working on optimization patches in a CLOSED-SOURCE
> manner and providing two versions of the D compiler: one that is
> faster, CLOSED, 

Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-27 Thread Iain Buclaw
On 27 June 2013 09:53, Joakim  wrote:
> those involved with the D compiler can decide if this would be a worthwhile
> direction.  From their silence so far, I can only assume that they are not
> interested in rousing the ire of the freetards and will simply maintain the
> status quo of keeping all source public.
>

True, I tend to just ignore comments from opportunists who jump in and
shout "Hey guys! I'm new around here, have you guys tried to do
something completely radical on the off chance that it will work? I
have a good feeling about this..!!"

But as it stands, I'm taking a quick break from my usual GDC work
before I reach stage 12 of burnout. ;)



> This will lead to D's growth being slowed, compared to the alternative of
> providing a paid compiler also.  That's their choice to make.
>

In your opinion.


> If somebody stumbles across this thread later, perhaps they will close up
> optimization patches to ldc and sell a paid version.  Given that those
> behind dmd have not expressed any interest in a paid version, maybe these
> ldc vendors will not involve them with the money or feature decisions of
> their paid ldc.  It would be likely that this paid compiler becomes the
> dominant one and the original dmd project is forgotten.
>

This was said when GDC got the D2 language stable.  The reality? I
wouldn't hold my breath...  I'm still a one-man team, and there is no
contingency in place should something happen to me.

--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-27 Thread Iain Buclaw
On 27 June 2013 09:21, Joakim  wrote:
> But lets assume that you are right and the optimization patches I'm talking
> about would tend to end up only in the backend. In that case, the frontend
> would not have any closed patches and the paid version of dmd would simply
> have a slightly-closed, more-optimized backend.  There go all of Joseph's
> previous arguments about the paid version not making the same OSS frontend
> available to the free reference compiler or ldc and gdc.
>
> You are making my case for me. :)
>

Now you are just re-hashing what my initial thoughts were...  ;)


>>> Never read it but I have corresponded with the author, and I found him to
>>
>> be as religious about pure open source as Stallman is about the GPL.  I
>> suggest you try examining why D is still such a niche language even with
>> "ten fold" growth.  If you're not sure why, I suggest you look at the
>> examples and reasons I've given, as to why closed source and hybrid models
>> do much better.
>>>
>>>
>>
>> Then you should read it, as the 'cathedral' in question was GCC - a
>> project
>> started by Stallman. :)
>
> I'm familiar with its arguments from a summary, not particularly interested
> in reading the whole thing.  Insofar as he made the case for various
> benefits of open source, some of the arguments I've heard make sense and I
> have no problem with it.  Insofar as he and others believe that it is an
> argument for _pure_ open source or that _all_ source will eventually be
> open, I think history has shown that argument to be dead wrong along with
> the reasons why.
>
> It boils down to the facts that there is nowhere near as much money in pure
> OSS models and volunteers cannot possibly provide all the materials
> necessary for a full product, both of which I've mentioned before.  This is
> why hybrid models are now taking off, blending the benefits of open and
> closed source.
>

But it's not blending the benefits at all.  Open-core, however you try
to sway or pitch it, does not qualify as open source. It is closed
source. It is the opposite of open source.

Personally, it is not acceptable that you market yourself as an open
source product when in fact your business model is to sell closed
source. This is confusing; I'd say it is borderline lying. Well,
marketing often is lying, but in the open source community we call out
such lies, however subtle.

Most open core vendors still market themselves as open source leaders,
then come to you to sell closed source software. (They deserve to be
criticized, if you ask me).


>>> Not sure what point you are trying to make, as both gdc and dmd are open
>>
>> source.  I'm suggesting closing such patches, for a limited time.
>>>
>>>
>>
>> Closing patches benefits no one.  And more to the point, you can't say
>> that
>> two compilers implement the same language if both have different language
>> features.
>
> The closed patches benefit those making money from the paid compiler and
> since the free compiler would get these patches after a time limit, they
> eventually benefit the community also.  As for your purist approach to
> compiler implementation, by that rationale, no two C++ compilers implement the
> "same language," and neither do the D compilers, since there are always
> differences in the features supported by the different compilers.
>
> I'd say that some differentiation between compilers is normal and necessary.
>

This is where C/C++ went horribly wrong.  Different compilers having a
variety of macros to identify the same platform or architecture, the
question of what is valid syntax being different between compilers,
code written in a certain way working in one compiler but throwing an
error with another...

We are striving to be better than that from the start.
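
For a concrete picture of the kind of divergence meant here, below is a
minimal C sketch -- the macro spellings are the commonly seen ones, and the
ON_WINDOWS / ON_X86_64 names are just placeholders for illustration, not
taken from this thread -- of how the same platform check ends up written
against several compilers at once:

    /* Detecting Windows: MSVC and MinGW define _WIN32, while some older
       compilers and project setups relied on __WIN32__ or plain WIN32. */
    #if defined(_WIN32) || defined(__WIN32__) || defined(WIN32)
    #  define ON_WINDOWS 1   /* placeholder name for this sketch */
    #endif

    /* Detecting x86-64: MSVC spells it _M_X64, while GCC and Clang
       define __x86_64__ (and usually __amd64 as well). */
    #if defined(_M_X64) || defined(__x86_64__) || defined(__amd64)
    #  define ON_X86_64 1    /* placeholder name for this sketch */
    #endif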


> Also, many of the hybrid projects have pulled in previously purely-open,
> purely community projects like KHTML/WebKit or mostly-open projects like the
> linux kernel.  The linux kernel repo evolved over time to include many
> binary blobs and effectively become a hybrid model.
>

Binary blobs are the exception rather than the rule in Linux, and many
hardware vendors would flat out say 'no' to doing any support on them.
Moreover, the position of the Linux Foundation is that any
closed-source kernel module is harmful and undesirable, and they are
always urging for vendors to adopt a policy of supporting their
customers on Linux with open-source kernel code.

Which goes to show how useful a hybrid model has been for them...


> And not all hybrid companies are that large: MySQL before it got bought out
> was pulling in hundreds of millions of dollars in revenue using an "open
> core" model but certainly wasn't in the super-sized class of Google or
> Oracle.  There are a handful of small companies that provided closed,
> optimized versions of the PostgreSQL database (since most of the underlying
> code is open source, it can be considered a hybrid model).
>

MySQL iirc was the first to practice this model.

Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-27 Thread Joakim

On Thursday, 27 June 2013 at 03:20:37 UTC, Mathias Lang wrote:
I've read (almost), everything, so I hope I won't miss a point 
here:

a) I've heard about MSVC, Red Hat, Qt, Linux and so on. From my
understanding, none of the projects mentioned have gone from 
free (as in
free beer) to hybrid/closed. And I'm not currently able to 
think of one

successful, widespread project that did.
Then you are not paying attention.  As I've already noted several 
times, Visual Studio, which is the way most use MSVC, has both 
paid and free versions.  Red Hat contains binary blobs and 
possibly other non-OSS software and charges companies for 
consulting and support.  Qt is an "open core" project that is 
dual-licensed under both OSS and commercial licenses, the latter 
of which you pay for.  Linux contains binary blobs in the vast 
majority of installs and most people running it paid for it.


If your implied point is that the original authors aren't the 
ones taking the project hybrid or paid, it depends on the 
license.  Sometimes it is those owning the original copyright, as 
it had to be in the Qt, MySQL, and other dual-licensing cases, 
other times it isn't.


b) Thinking that being free (as in beer and/or as in freedom), 
hybrid, closed
source or whatever is a single criterion of success seems 
foolish. I'm not
asking for a complete comparison (I think my mailbox won't 
stand it ;-) ),
but please stop comparing a free operating system with a paid 
compiler,
and assuming the former has more users than the latter because 
it's free (and
vice-versa). In addition, I don't see the logic behind 
comparing something
born in the 90s with something from the 2000s. Remember the 
Dot-com bubble?
Obviously nothing is a "single criterion of success," as has been 
stated already.  In complex social fields like business or 
technology ventures, where there are many confounding factors, 
judgement and interpretation are everything.


By your rationale, we might as well do _anything_ because how 
could we possibly know that C++ wasn't immensely successful only 
because Bjarne Stroustrup is a Dane?  Obviously none of this 
discussion matters, as D has very little Danish involvement and 
therefore can never be as popular. ;)


You have to have the insight to be able to weigh all these 
competing factors and while I agree that most cannot, those who 
are successful do.


d) People pay for something they need. They don't adopt 
something because
they can pay for it. That's why a paid compiler must follow 
language

promotion, not the other way around.
These assertions are somewhat meaningless.  Those who value 
performance will pay for the optimized version of the dmd 
compiler that I've proposed.  Those who don't will use the 
slower, pure-OSS version.  There is no reason for a paid compiler 
to only follow promotion, both must be done at the same time.


In any case, I've lost interest in this debate.  I've made my 
case, those involved with the D compiler can decide if this would 
be a worthwhile direction.  From their silence so far, I can only 
assume that they are not interested in rousing the ire of the 
freetards and will simply maintain the status quo of keeping all 
source public.


This will lead to D's growth being slowed, compared to the 
alternative of providing a paid compiler also.  That's their 
choice to make.


If somebody stumbles across this thread later, perhaps they will 
close up optimization patches to ldc and sell a paid version.  
Given that those behind dmd have not expressed any interest in a 
paid version, maybe these ldc vendors will not involve them with 
the money or feature decisions of their paid ldc.  It would be 
likely that this paid compiler becomes the dominant one and the 
original dmd project is forgotten.


If you don't choose the best approach, a hybrid model, you leave 
it open for somebody else to do it and take the project in a 
different direction.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-27 Thread Joakim

On Wednesday, 26 June 2013 at 21:15:34 UTC, Iain Buclaw wrote:

On Jun 26, 2013 9:00 PM, "Joakim"  wrote:
This is flat wrong. I suggest you read the Artistic license, 
it was
chosen for a reason, ie it allows closing of source as long as 
you provide
the original, unmodified binaries with any modified binaries.  
I suspect

optimization fixes will be in both the frontend and backend.




Code generation is in the back end, so the answer to that is 
simply 'no'.
From what I understand about the kinds of optimizations that 
Walter was talking about, at least some of them would require 
work on the frontend also.


But lets assume that you are right and the optimization patches 
I'm talking about would tend to end up only in the backend. In 
that case, the frontend would not have any closed patches and the 
paid version of dmd would simply have a slightly-closed, 
more-optimized backend.  There go all of Joseph's previous 
arguments about the paid version not making the same OSS frontend 
available to the free reference compiler or ldc and gdc.


You are making my case for me. :)

Never read it but I have corresponded with the author, and I 
found him to
be as religious about pure open source as Stallman is about the 
GPL.  I
suggest you try examining why D is still such a niche language 
even with
"ten fold" growth.  If you're not sure why, I suggest you look 
at the
examples and reasons I've given, as to why closed source and 
hybrid models

do much better.




Then you should read it, as the 'cathedral' in question was GCC 
- a project

started by Stallman. :)
I'm familiar with its arguments from a summary, not particularly 
interested in reading the whole thing.  Insofar as he made the 
case for various benefits of open source, some of the arguments 
I've heard make sense and I have no problem with it.  Insofar as 
he and others believe that it is an argument for _pure_ open 
source or that _all_ source will eventually be open, I think 
history has shown that argument to be dead wrong along with the 
reasons why.


It boils down to the facts that there is nowhere near as much 
money in pure OSS models and volunteers cannot possibly provide 
all the materials necessary for a full product, both of which 
I've mentioned before.  This is why hybrid models are now taking 
off, blending the benefits of open and closed source.


Not sure what point you are trying to make, as both gdc and 
dmd are open
source.  I'm suggesting closing such patches, for a limited 
time.




Closing patches benefits no one.  And more to the point, you 
can't say that
two compilers implement the same language if both have 
different language

features.
The closed patches benefit those making money from the paid 
compiler and since the free compiler would get these patches 
after a time limit, they eventually benefit the community also.  
As for your purist approach to compiler implementation, by that 
rationale, no two C++ compilers implement the "same language," 
and neither do the D compilers, since there are always differences 
in the features supported by the different compilers.


I'd say that some differentiation between compilers is normal and 
necessary.


I see no reason why another "upcoming" project like D 
couldn't do the

same. :)



You seem to be confusing D for an Operating System, 
Smartphone, or any

general consumer product.


You seem to be confusing the dmd compiler to not be a piece of 
software,
just like the rest, or the many proprietary C++ compilers out 
there.




You seem to think when I say D I'm referring to dmd, or any 
other D

compiler out there.
I referred to the D project and have been talking about the 
compiler all along.  The fact that you decided to make a 
meaningless statement, presumably about how D is a spec and 
therefore cannot be compared with Android, is irrelevant and 
frankly laughable. :)


- The language implementation is open source. This allows 
anyone to take

the current front-end code - or even write their own clean-room
implementation from ground-up - and integrate it to their own 
backend X.


Sort of.  The dmd frontend is open source, but the backend is 
not under
an open source license.  Someone can swap out the backend and 
go completely
closed, for example, using ldc (ldc used to have one or two GPL 
files,

those would obviously have to be removed).




The backend is not part of the D language implementation / 
specification.

(for starters, it's not documented anywhere except as code).
Of course the backend is part of the language implementation.  
It's not part of the spec, but you never mentioned the spec 
originally.


- The development model of D on github has adopted a "pull, 
review and
merge" system, where any changes to the language or compiler do 
not go in
unless they go through proper coding review and testing 
(thanks to the
wonderful auto-tester).  So your suggestion of an "open core" 
model has a
slight fallacy here in that any changes to the closed off 

Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Mathias Lang
I've read (almost) everything, so I hope I won't miss a point here:
a) I've heard about MSVC, Red Hat, Qt, Linux and so on. From my
understanding, none of the projects mentioned have gone from free (as in
free beer) to hybrid/closed. And I'm not currently able to think of one
successful, widespread project that did.
b) Thinking that being free (as in beer and/or as in freedom), hybrid, closed
source or whatever is a single criterion of success seems foolish. I'm not
asking for a complete comparison (I think my mailbox won't stand it ;-) ),
but please stop comparing a free operating system with a paid compiler,
and assuming the former has more users than the latter because it's free (and
vice-versa). In addition, I don't see the logic behind comparing something
born in the 90s with something from the 2000s. Remember the Dot-com bubble?
c) There are other ways to get more people involved, for example if
dlang.org becomes a foundation (see related thread), we would be able
to apply for GSoC.
d) People pay for something they need. They don't adopt something because
they can pay for it. That's why a paid compiler must follow language
promotion, not the other way around.


2013/6/27 Joseph Rushton Wakeling 

> On Wednesday, 26 June 2013 at 21:29:12 UTC, Iain Buclaw wrote:
>
>> Don't call me Shirley...
>>
>
> Serious? :-)
>
>  By the way, I hope you didn't feel I was trying to speak on behalf of GDC
>>> -- wasn't my intention. :-)
>>>
>>
>> I did, and it hurt.  :o)
>>
>
> Oh no.  50 shades of #DD ? :-)
>


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joseph Rushton Wakeling

On Wednesday, 26 June 2013 at 21:29:12 UTC, Iain Buclaw wrote:

Don't call me Shirley...


Serious? :-)

By the way, I hope you didn't feel I was trying to speak on 
behalf of GDC -- wasn't my intention. :-)


I did, and it hurt.  :o)


Oh no.  50 shades of #DD ? :-)


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joseph Rushton Wakeling

On Wednesday, 26 June 2013 at 19:01:42 UTC, Joakim wrote:
Why are they guaranteed such patches?  They have advantages 
because they use different compiler backends.  If they think 
their backends are so great, let them implement their own 
optimizations and compete.


I could respond at greater length, but I think that substantial 
flaws of your point of view are exposed in this single paragraph. 
 GDC and LDC aren't competitors, they are valuable collaborators.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Iain Buclaw
On Jun 26, 2013 9:50 PM, "Joseph Rushton Wakeling" <
joseph.wakel...@webdrake.net> wrote:
>
> On Wednesday, 26 June 2013 at 19:26:37 UTC, Iain Buclaw wrote:
>>
>> I can't be bothered to read all points the both of you have mentioned
thus far, but I do hope to add a voice of reason to calm you down. ;)
>
>
> Quick, nurse, the screens!
>
> ... or perhaps, "Someone throw a bucket of water over them"? :-P
>
>

Don't call me Shirley...

>> From a licensing perspective, the only part of the source that can be
"closed off" is the DMD backend.  Any optimisation fixes in the DMD backend
do not affect GDC/LDC.
>
>
> To be honest, I can't see the "sales value" of optimization fixes in the
DMD backend given that GDC and LDC already have such strong performance.
 The one strong motivation to use DMD over the other two compilers is (as
you describe) access to the bleeding edge of features, but I'd have thought
this will stop being an advantage in time as/when the frontend becomes a
genuinely "plug-and-play" component.
>

Sometimes it feels like achieving this is like trying to break down a brick
barrier with a shoelace.

> By the way, I hope you didn't feel I was trying to speak on behalf of GDC
-- wasn't my intention. :-)
>

I did, and it hurt.  :o)

>> Having used closed source languages in the past, I strongly believe that
closed languages do not stimulate growth or adoption at all.  And where
adoption does occur, knowledge is kept within specialised groups.
>
>
> Last year I had the dubious privilege of having to work with MS Visual
Basic for a temporary job.  What was strikingly different from the various
open source languages was that although there was an extensive quantity of
documentation available from Microsoft, it was incredibly badly organized,
much of it was out of date, and there was no meaningful community support
that I could find.
>
> I got the job done, but I would surely have had a much easier experience
with any of the open source languages out there.  Suffice to say that the
only reason I used VB in this case was because it was an obligatory part of
the work -- I'd never use it by choice.
>

Yes, it's like trying to learn D, but the only reference you have of the
language is the grammar page, and an IDE which offers thousands of
auto-complete options for things that *sound* like what you want, but don't
compile when it comes to testing.  :o)

Regards
-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Iain Buclaw
On Jun 26, 2013 9:00 PM, "Joakim"  wrote:
>
> On Wednesday, 26 June 2013 at 19:26:37 UTC, Iain Buclaw wrote:
>>
>> From a licensing perspective, the only part of the source that can be
"closed off" is the DMD backend.  Any optimisation fixes in the DMD backend
do not affect GDC/LDC.
>
> This is flat wrong. I suggest you read the Artistic license, it was
chosen for a reason, ie it allows closing of source as long as you provide
the original, unmodified binaries with any modified binaries.  I suspect
optimization fixes will be in both the frontend and backend.
>

Code generation is in the back end, so the answer to that is simply 'no'.

>> You should try reading The Cathedral and the Bazaar if you don't
understand why an open approach to development has caused the D programming
language to grow by ten fold over the last year or so.
>>
>> If you still don't understand, read it again ad infinitum.
>
> Never read it but I have corresponded with the author, and I found him to
be as religious about pure open source as Stallman is about the GPL.  I
suggest you try examining why D is still such a niche language even with
"ten fold" growth.  If you're not sure why, I suggest you look at the
examples and reasons I've given, as to why closed source and hybrid models
do much better.
>

Then you should read it, as the 'cathedral' in question was GCC - a project
started by Stallman. :)

>> Think I might just point out that GDC had SIMD support before DMD. And
that Remedy used GDC to get their D development off the ground.  It was
features such as UDAs, along with many language bug fixes that were only
available in DMD development that caused them to switch over.
>>
>> In other words, they needed a faster turnaround for bugs at the time
they were adopting D, however the D front-end in GDC stays pretty much
stable on the current release.
>
> Not sure what point you are trying to make, as both gdc and dmd are open
source.  I'm suggesting closing such patches, for a limited time.
>

Closing patches benefits no one.  And more to the point, you can't say that
two compilers implement the same language if both have different language
features.

>>> I see no reason why another "upcoming" project like D couldn't do the
same. :)
>>
>>
>> You seem to be confusing D for an Operating System, Smartphone, or any
general consumer product.
>
> You seem to be confusing the dmd compiler to not be a piece of software,
just like the rest, or the many proprietary C++ compilers out there.
>

You seem to think when I say D I'm referring to dmd, or any other D
compiler out there.

>
>> - The language implementation is open source. This allows anyone to take
the current front-end code - or even write their own clean-room
implementation from ground-up - and integrate it to their own backend X.
>
> Sort of.  The dmd frontend is open source, but the backend is not under
an open source license.  Someone can swap out the backend and go completely
closed, for example, using ldc (ldc used to have one or two GPL files,
those would obviously have to be removed).
>

The backend is not part of the D language implementation / specification.
(for starters, it's not documented anywhere except as code).

>> - The compiler itself is not associated with the development of the
language, so those who are owners of the copyright are free to do what they
want with their binary releases.
>>
>> - The development model of D on github has adopted a "pull, review and
merge" system, where any changes to the language or compiler do not go in
unless they go through proper coding review and testing (thanks to the
wonderful auto-tester).  So your suggestion of an "open core" model has a
slight fallacy here in that any changes to the closed off compiler would
have to go through the same process to be accepted into the open one - and
it might even be rejected.
>
> I'm not sure why you think "open core" patches that are opened after a
time limit would be any more likely to be rejected from that review
process.  The only fallacy I see here is yours.
>

Where did I say that? I only invited you to speculate on what would happen
if a 'closed patch' got rejected.  This leads back to the point that you
can't call it a compiler for the D programming language if it deviates from
the specification / implementation.

>
>> DMD - as in referring to the binary releases - can be closed / paid /
whatever it likes.
>>
>> The D Programming Language - as in the D front-end implementation - is
under a dual GPL/Artistic license and cannot be used by any closed source
product without said product releasing their copy of the front-end sources
also.  This means that your "hybrid" proposal only works for code that is
not under this license - eg: the DMD backend - which is not what the vast
majority of contributors actually submit patches for.
>
> Wrong, you have clearly not read the Artistic license.
>

I'll allow you to keep on thinking that for a while longer...

>> If you strongly believe that a pro

Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joseph Rushton Wakeling

On Wednesday, 26 June 2013 at 19:26:37 UTC, Iain Buclaw wrote:
I can't be bothered to read all points the both of you have 
mentioned thus far, but I do hope to add a voice of reason to 
calm you down. ;)


Quick, nurse, the screens!

... or perhaps, "Someone throw a bucket of water over them"? :-P

From a licensing perspective, the only part of the source that 
can be "closed off" is the DMD backend.  Any optimisation fixes 
in the DMD backend do not affect GDC/LDC.


To be honest, I can't see the "sales value" of optimization fixes 
in the DMD backend given that GDC and LDC already have such 
strong performance.  The one strong motivation to use DMD over 
the other two compilers is (as you describe) access to the 
bleeding edge of features, but I'd have thought this will stop 
being an advantage in time as/when the frontend becomes a 
genuinely "plug-and-play" component.


By the way, I hope you didn't feel I was trying to speak on 
behalf of GDC -- wasn't my intention. :-)


Having used closed source languages in the past, I strongly 
believe that closed languages do not stimulate growth or 
adoption at all.  And where adoption does occur, knowledge is 
kept within specialised groups.


Last year I had the dubious privilege of having to work with MS 
Visual Basic for a temporary job.  What was strikingly different 
from the various open source languages was that although there 
was an extensive quantity of documentation available from 
Microsoft, it was incredibly badly organized, much of it was out 
of date, and there was no meaningful community support that I 
could find.


I got the job done, but I would surely have had a much easier 
experience with any of the open source languages out there.  
Suffice to say that the only reason I used VB in this case was 
because it was an obligatory part of the work -- I'd never use it 
by choice.


- The development model of D on github has adopted a "pull, 
review and merge" system, where any changes to the language or 
compiler do not go in unless they go through proper coding 
review and testing (thanks to the wonderful auto-tester).  So 
your suggestion of an "open core" model has a slight fallacy 
here in that any changes to the closed off compiler would have 
to go through the same process to be accepted into the open one 
- and it might even be rejected.


I had a similar thought but from a slightly different angle -- 
that allowing "open core" in the frontend would damage the 
effectiveness of the review process.  How can you restrict 
certain features to proprietary versions without having also a 
two-tier hierarchy of reviewers?  And would you be able to 
maintain the broader range of community review if some select, 
paid few had privileged review access?


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joakim

On Wednesday, 26 June 2013 at 19:26:37 UTC, Iain Buclaw wrote:
From a licensing perspective, the only part of the source that 
can be "closed off" is the DMD backend.  Any optimisation fixes 
in the DMD backend do not affect GDC/LDC.
This is flat wrong. I suggest you read the Artistic license, it 
was chosen for a reason, ie it allows closing of source as long 
as you provide the original, unmodified binaries with any 
modified binaries.  I suspect optimization fixes will be in both 
the frontend and backend.


You should try reading The Cathedral and the Bazaar if you 
don't understand why an open approach to development has caused 
the D programming language to grow by ten fold over the last 
year or so.


If you still don't understand, read it again ad infinitum.
Never read it but I have corresponded with the author, and I 
found him to be as religious about pure open source as Stallman 
is about the GPL.  I suggest you try examining why D is still 
such a niche language even with "ten fold" growth.  If you're not 
sure why, I suggest you look at the examples and reasons I've 
given, as to why closed source and hybrid models do much better.


Think I might just point out that GDC had SIMD support before 
DMD. And that Remedy used GDC to get their D development off 
the ground.  It was features such as UDAs, along with many 
language bug fixes that were only available in DMD development 
that caused them to switch over.


In other words, they needed a faster turnaround for bugs at the 
time they were adopting D, however the D front-end in GDC stays 
pretty much stable on the current release.
Not sure what point you are trying to make, as both gdc and dmd 
are open source.  I'm suggesting closing such patches, for a 
limited time.


I see no reason why another "upcoming" project like D couldn't 
do the same. :)


You seem to be confusing D for an Operating System, Smartphone, 
or any general consumer product.
You seem to be confusing the dmd compiler to not be a piece of 
software, just like the rest, or the many proprietary C++ 
compilers out there.


Having used closed source languages in the past, I strongly 
believe that closed languages do not stimulate growth or 
adoption at all.  And where adoption does occur, knowledge is 
kept within specialised groups.
Perhaps there is some truth to that.  But nobody is suggesting a 
purely closed-source language either.


I don't think a "purely community-run project" is a worthwhile 
goal, particularly if you are aiming for a million users and 
professionalism.  I think there is always opportunity for 
mixing of commercial implementations and community 
involvement, as very successful hybrid projects like Android 
or Chrome have shown.


Your argument seems lost on me as you seem to be taking a very 
strange angle of association with the D language and/or 
compiler, and you don't seem to understand how the development 
process of D works either.
I am associating D, an open source project, with Android and 
Chrome, two of the most successful open source projects at the 
moment, which both benefit from hybrid models.  I find it strange 
that you cannot follow.  If I don't understand how the 
development process of D works, you could point out an example, 
instead of making basic mistakes in not knowing what licenses it 
uses and what they allow. :)


- The language implementation is open source. This allows 
anyone to take the current front-end code - or even write their 
own clean-room implementation from ground-up - and integrate it 
to their own backend X.
Sort of.  The dmd frontend is open source, but the backend is not 
under an open source license.  Someone can swap out the backend 
and go completely closed, for example, using ldc (ldc used to 
have one or two GPL files, those would obviously have to be 
removed).


- The compiler itself is not associated with the development of 
the language, so those who are owners of the copyright are free 
to do what they want with their binary releases.


- The development model of D on github has adopted a "pull, 
review and merge" system, where any changes to the language or 
compiler do not go in unless they go through proper coding 
review and testing (thanks to the wonderful auto-tester).  So 
your suggestion of an "open core" model has a slight fallacy 
here in that any changes to the closed off compiler would have 
to go through the same process to be accepted into the open one 
- and it might even be rejected.
I'm not sure why you think "open core" patches that are opened 
after a time limit would be any more likely to be rejected from 
that review process.  The only fallacy I see here is yours.


- Likewise, because of licensing and copyright assignments in 
place on the D front-end implementation, any closed D compiler 
using it would have to make its front-end sources, with 
local modifications, available upon request.  So it makes no 
sense whatsoever to make language features - such as SIMD - 

Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Iain Buclaw
I can't be bothered to read all points the both of you have 
mentioned thus far, but I do hope to add a voice of reason to 
calm you down. ;)




On Wednesday, 26 June 2013 at 17:42:23 UTC, Joakim wrote:
On Wednesday, 26 June 2013 at 12:02:38 UTC, Joseph Rushton 
Wakeling wrote:
Now, in trying to drive more funding and professional effort 
towards D development, do you _really_ think that the right 
thing to do is to turn around to all those people and say: 
"Hey guys, after all the work you put in to make D so great, 
now we're going to build on that, but you'll have to wait 6 
months for the extra goodies unless you pay"?
Yes, I think it is the right thing to do.  I am only talking 
about closing off the optimization patches, all bugfixes and 
feature patches would likely be applied to both the free and 
paid compilers, certainly bugfixes.  So not _all_ the "extra 
goodies" have to be paid for, and even the optimization patches 
are eventually open-sourced.




From a licensing perspective, the only part of the source that 
can be "closed off" is the DMD backend.  Any optimisation fixes 
in the DMD backend do not affect GDC/LDC.



How do you think that will affect the motivation of all those 
volunteers -- the code contributors, the bug reporters, the 
forum participants?  What could you say to the maintainers of 
GDC or LDC, after all they've done to enable people to use the 
language, that could justify denying their compilers 
up-to-date access to the latest features?  How would it affect 
the atmosphere of discussion about language development -- 
compared to the current friendly, collegial approach?
I don't know how it will affect their motivation, as they 
probably differ in the reasons they contribute.


If D becomes much more popular because the quality of 
implementation goes up and their D skills and contributions 
become much more prized, I suspect they will be very happy. :) 
If they are religious zealots about having only a single, 
completely open-source implementation- damn the superior 
results from hybrid models- perhaps they will be unhappy.  I 
suspect the former far outnumber the latter, since D doesn't 
employ the purely-GPL approach the zealots usually insist on.




You should try reading The Cathedral and the Bazaar if you don't 
understand why an open approach to development has caused the D 
programming language to grow by ten fold over the last year or so.


If you still don't understand, read it again ad infinitum.



... and -- how do you think it would affect uptake, if it was 
announced that access to the best features would come at a 
price?
Please stop distorting my argument.  There are many different 
types of patches added to the dmd frontend every day: bugfixes, 
features, optimizations, etc.  I have only proposed closing the 
optimization patches.


However, I do think some features can also be closed this way.  
For example, Walter has added features like SIMD modifications 
only for Remedy.  He could make this type of feature closed 
initially, available only in the paid compiler.  As the feature 
matures and is paid for, it would eventually be merged into the 
free compiler.  This is usually not a problem as those who want 
that kind of performance usually make a lot of money off of it 
and are happy to pay for that performance: that is all I'm 
proposing with my optimization patches idea also.




Think I might just point out that GDC had SIMD support before 
DMD. And that Remedy used GDC to get their D development off the 
ground.  It was features such as UDAs, along with many language 
bug fixes that were only available in DMD development that caused 
them to switch over.


In other words, they needed a faster turnaround for bugs at the 
time they were adopting D, however the D front-end in GDC stays 
pretty much stable on the current release.



In another email you mentioned Microsoft's revenues from 
Visual Studio but -- leaving aside for a moment all the moral 
and strategic concerns of closing things up -- Visual Studio 
enjoys that success because it's a virtually essential tool 
for professional development on Microsoft Windows, which still 
has an effective monopoly on modern desktop computing.  
Microsoft has the market presence to be able to dictate terms 
like that -- no one else does.  Certainly no upcoming 
programming language could operate like that!
Yes, Microsoft has unusual leverage.  But Visual Studio's 
compiler is not the only paid C++ compiler in the market, hell, 
Walter still sells C and C++ compilers.


I'm not proposing D operate just like Microsoft.  I'm 
suggesting a subtle compromise, a mix of that familiar closed 
model and the open source model you prefer, a hybrid model that 
you are no doubt familiar with, since you correctly pegged the 
licensing lingo earlier, when you mentioned "open core."


These hybrid models are immensely popular these days: the two 
most popular software projects of the last decade, iOS and 
Android, a

Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joakim
On Wednesday, 26 June 2013 at 17:28:22 UTC, Joseph Rushton 
Wakeling wrote:
Perhaps you'd like to explain to the maintainers of GDC and LDC 
why, after all they've done for D, you think it would be 
acceptable to turn to them and say: "Hey guys, we're going to 
make improvements and keep them from you for 9 months so we can 
make money" ... ?
Why are they guaranteed such patches?  They have advantages 
because they use different compiler backends.  If they think 
their backends are so great, let them implement their own 
optimizations and compete.


Or doesn't the cooperative relationship between the 3 main D 
compilers mean much to you?
As I've noted in an earlier response, LDC could also provide a 
closed version and license those patches.


Leaving aside the moral issues, you might consider that any 
work paid for by revenues would be offset by a drop in 
voluntary contributions, including corporate contributors.  And 
sensible companies will avoid "open core" solutions.
Or maybe the work paid for by revenues would be far more, and even 
more people would volunteer, when D becomes a more successful 
project through funding from the paid compiler.  Considering how 
dominant "open core" and other hybrid models are these days, it 
is laughable that you suggest that anyone is avoiding it. :)



A few articles worth reading on these factors:
http://webmink.com/essays/monetisation/
http://webmink.com/essays/open-core/
http://webmink.com/essays/donating-money/
I have corresponded with the author of that blog before.  I found 
him to be a religious zealot who recounted the four freedoms of 
GNU to me like a mantra.  Perhaps that's why Sun was run into the 
ground when they followed his ideas about open-sourcing almost 
everything.  I don't look to him for worthwhile reading on these 
issues.


I think this ignores the decades-long history we have with 
open source software by now.  It is not merely "wanting to 
make the jump," most volunteers simply do not want to do 
painful tasks like writing documentation or cannot put as much 
time into development when no money is coming in.  Simply 
saying "We have to try harder to be professional" seems naive 
to me.


Odd that you talk about ignoring things, because the general 
trend we've seen in the decades-long history of free software 
is that the software business seems to be getting more and more 
open with every year.  These days there's a strong expectation 
of free licensing.
Yes, it is getting "more and more open," because hybrid models 
are being used more. :) Pure open source software, with no binary 
blobs, has almost no adoption, so it isn't your preferred purist 
approach that is doing well.  And the reasons are the ones I 
gave: volunteers can do a lot of things, but there are a lot of 
things they won't do.


It's hardly fair to compare languages without also taking into 
account their relative age.  C++ has its large market share 
substantially due to historical factors -- it was a major 
"first mover", and until the advent of D, it was arguably the 
_only_ language that had that combination of power/flexibility 
and performance.

Yes, C++ has been greatly helped by its age.

So far as compiler implementations are concerned, I'd say that 
it was the fact that there were many different implementations 
that helped C++.  On the other hand, proprietary 
implementations may in some ways have damaged adoption, as 
before standardization you'd have competing, incompatible 
proprietary versions which limited the portability of code.
But you neglect to mention that most of those "many different 
implementations" were closed.  I agree that completely closed 
implementations can also cause incompatibilities, which is why I 
have suggested a hybrid model with limited closed-source patches.


The binary blobs are nevertheless part of the vanilla kernel, 
not something "value added" that gets charged for.  They're 
irrelevant to the development model of the kernel -- they are 
an irritation that's tolerated for practical reasons, rather 
than a design feature.
They are not always charged for, but they put the lie to the 
claims that linux uses a pure open source model.  Rather, it is 
usually a different kind of hybrid model.  If it were so pure, 
there would be no blobs at all.  The blobs are certainly not 
irrelevant, as linux wouldn't run on all the hardware that needs 
those binary blobs if they weren't included.  I'm not sure what to 
make of your non sequitur about binary blobs not being a "design 
feature."


As for paying for blobs, I'll note that the vast majority of 
linux kernels installed are in Android devices, where one pays 
for the hardware _and_ the development effort to develop the 
blobs that run the hardware.  So paying for the "value added" 
from blobs seems to be a very successful model. :)


So if one looks at linux in any detail, hybrid models are more 
the norm than the exception, even with the GPL. :)


But no one is selling proprietary extensions to the

Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joakim
On Wednesday, 26 June 2013 at 12:02:38 UTC, Joseph Rushton 
Wakeling wrote:
Now, in trying to drive more funding and professional effort 
towards D development, do you _really_ think that the right 
thing to do is to turn around to all those people and say: "Hey 
guys, after all the work you put in to make D so great, now 
we're going to build on that, but you'll have to wait 6 months 
for the extra goodies unless you pay"?
Yes, I think it is the right thing to do.  I am only talking 
about closing off the optimization patches; all bugfixes and 
feature patches would likely be applied to both the free and paid 
compilers, certainly the bugfixes.  So not _all_ the "extra goodies" 
have to be paid for, and even the optimization patches are 
eventually open-sourced.


How do you think that will affect the motivation of all those 
volunteers -- the code contributors, the bug reporters, the 
forum participants?  What could you say to the maintainers of 
GDC or LDC, after all they've done to enable people to use the 
language, that could justify denying their compilers up-to-date 
access to the latest features?  How would it affect the 
atmosphere of discussion about language development -- compared 
to the current friendly, collegial approach?
I don't know how it will affect their motivation, as they 
probably differ in the reasons they contribute.


If D becomes much more popular because the quality of 
implementation goes up and their D skills and contributions 
become much more prized, I suspect they will be very happy. :) If 
they are religious zealots about having only a single, completely 
open-source implementation (damn the superior results from hybrid 
models), perhaps they will be unhappy.  I suspect the former far 
outnumber the latter, since D doesn't employ the purely-GPL 
approach the zealots usually insist on.


We could poll them and find out.  You keep talking about closed 
patches as though they can only piss off the volunteers.  But if 
I'm right and a hybrid model would lead to a lot more funding and 
adoption of D, their volunteer work places them in an ideal 
position, where their D skills and contributions are much more 
valued and they can then probably do paid work in D.  I suspect 
most will end up happier.


I have not proposed denying GDC and LDC "access to the latest 
features," only optimization patches.  LDC could do the same as 
dmd and provide a closed, paid version with the optimization 
patches, which it could license from dmd.  GDC couldn't do this, 
of course, but that is the result of their purist GPL-only 
approach.


Why do you think a hybrid model would materially "affect the 
atmosphere of discussion about language development?"  Do you 
believe that the people who work on hybrid projects like Android, 
probably the most widely-used, majority-OSS project in the world, 
are not able to collaborate effectively?


... and -- how do you think it would affect uptake, if it was 
announced that access to the best features would come at a 
price?
Please stop distorting my argument.  There are many different 
types of patches added to the dmd frontend every day: bugfixes, 
features, optimizations, etc.  I have only proposed closing the 
optimization patches.


However, I do think some features can also be closed this way.  
For example, Walter has added features like SIMD modifications 
only for Remedy.  He could make this type of feature closed 
initially, available only in the paid compiler.  As the feature 
matures and is paid for, it would eventually be merged into the 
free compiler.  This is usually not a problem as those who want 
that kind of performance usually make a lot of money off of it 
and are happy to pay for that performance: that is all I'm 
proposing with my optimization patches idea also.


As for how it would "affect uptake," I think most people know 
that free products are usually less capable than paid products.  
The people who don't need the capability use Visual Studio 
Express; those who need it pay for the full version of Visual 
Studio.  There's no reason D couldn't employ a similar segmented 
model.


 There are orders of magnitude of difference between uptake of 
free and non-free services no matter what the domain, and 
software is one where free (as in freedom and beer) is much 
more strongly desired than in many other fields.
Yes, you're right, non-free services have orders of magnitude 
more uptake. :p


I think there are advantages to both closed and open source, 
which is why hybrid open/closed source models are currently very 
popular.  Open source allows more collaboration from outside, 
while closed source allows for _much_ more funding from paying 
customers.  I see no reason to dogmatically insist that these 
source models not be mixed.


There's a big difference between introducing commercial models 
with a greater degree of paid professional work, and 
introducing closed components.  Red Hat is a good example of 
that -- I can get, legally 

Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joseph Rushton Wakeling

On Wednesday, 26 June 2013 at 15:52:33 UTC, Joakim wrote:
I suggest you read my original post more carefully.  I have not 
suggested closing up the entire D toolchain, as you seem to 
imply.  I have suggested working on optimization patches in a 
closed-source manner and providing two versions of the D 
compiler: one that is faster, closed, and paid, with these 
optimization patches, another that is slower, open, and free, 
without the optimization patches.


Over time, the optimization patches are merged back to the free 
branch, so that the funding from the closed compiler makes even 
the free compiler faster, but only after some delay so that 
users who value performance will actually pay for the closed 
compiler.  There can be a hard time limit, say nine months, so 
that you know any closed patches from nine months back will be 
opened and applied to the free compiler.  I suspect that the 
money will be good enough so that any bugfixes or features 
added by the closed developers will be added to the free 
compiler right away, with no delay.


Perhaps you'd like to explain to the maintainers of GDC and LDC 
why, after all they've done for D, you think it would be 
acceptable to turn to them and say: "Hey guys, we're going to 
make improvements and keep them from you for 9 months so we can 
make money" ... ?


Or doesn't the cooperative relationship between the 3 main D 
compilers mean much to you?


Thanks for the work that you and Don have done with 
Sociomantic.  Why do you think more companies don't do this?  
My point is that if there were money coming in from a paid 
compiler, Walter could fund even more such work.


Leaving aside the moral issues, you might consider that any work 
paid for by revenues would be offset by a drop in voluntary 
contributions, including those from corporate contributors.  And sensible 
companies will avoid "open core" solutions.


A few articles worth reading on these factors:
http://webmink.com/essays/monetisation/
http://webmink.com/essays/open-core/
http://webmink.com/essays/donating-money/

I think this ignores the decades-long history we have with open 
source software by now.  It is not merely "wanting to make the 
jump," most volunteers simply do not want to do painful tasks 
like writing documentation or cannot put as much time into 
development when no money is coming in.  Simply saying "We have 
to try harder to be professional" seems naive to me.


Odd that you talk about ignoring things, because the general 
trend we've seen in the decades-long history of free software is 
that the software business seems to be getting more and more open 
with every year.  These days there's a strong expectation of free 
licensing.


If I understand your story right, the volunteers need to put a 
lot of effort into "bootstrapping" the project to be more 
professional, companies will see this and jump in, then they 
fund development from then on out?  It's possible, but is there 
any example you have in mind?  The languages that go this 
completely FOSS route tend not to have as much adoption as 
those with closed implementations, like C++.


It's hardly fair to compare languages without also taking into 
account their relative age.  C++ has its large market share 
substantially due to historical factors -- it was a major "first 
mover", and until the advent of D, it was arguably the _only_ 
language that had that combination of power/flexibility and 
performance.


So far as compiler implementations are concerned, I'd say that it 
was the fact that there were many different implementations that 
helped C++.  On the other hand, proprietary implementations may 
in some ways have damaged adoption, as before standardization 
you'd have competing, incompatible proprietary versions which 
limited the portability of code.


And yet the linux kernel ships with many binary blobs, almost 
all the time.  I don't know how they legally do it, considering 
the GPL, yet it is much more common to run a kernel with binary 
blobs than a purely FOSS version.  The vast majority of linux 
installs are due to Android and every single one has 
significant binary blobs and closed-source modifications to the 
Android source, which is allowed since most of Android is under 
the more liberal Apache license, with only the linux kernel 
under the GPL.


The binary blobs are nevertheless part of the vanilla kernel, not 
something "value added" that gets charged for.  They're 
irrelevant to the development model of the kernel -- they are an 
irritation that's tolerated for practical reasons, rather than a 
design feature.


Again, I don't know how they get away with all the binary 
drivers in the kernel; perhaps that is a grey area with the 
GPL.  For example, even the most open source Android devices, 
the Nexus devices sold directly by Google and running stock 
Android, have many binary blobs:


https://developers.google.com/android/nexus/drivers

Other than Android, linux is really only popular on servers, 
where 

Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Jacob Carlborg

On 2013-06-26 15:18, Joseph Rushton Wakeling wrote:


They don't own them, though -- they commit resources to them because the
language's ongoing development serves their business needs.


Yes, exactly.

--
/Jacob Carlborg


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joakim
On Wednesday, 26 June 2013 at 11:08:17 UTC, Leandro Lucarella 
wrote:

Joakim, on June 25 at 23:37 you wrote:
I don't know the views of the key contributors, but I wonder 
if they
would have such a knee-jerk reaction against any paid/closed 
work.


Against being paid, no; against being closed, YES. Please don't 
even think about it. It was a hell of a ride trying to make D more 
open; we can't step back now.
I suggest you read my original post more carefully.  I have not 
suggested closing up the entire D toolchain, as you seem to 
imply.  I have suggested working on optimization patches in a 
closed-source manner and providing two versions of the D 
compiler: one that is faster, closed, and paid, with these 
optimization patches, another that is slower, open, and free, 
without the optimization patches.


Over time, the optimization patches are merged back to the free 
branch, so that the funding from the closed compiler makes even 
the free compiler faster, but only after some delay so that users 
who value performance will actually pay for the closed compiler.  
There can be a hard time limit, say nine months, so that you know 
any closed patches from nine months back will be opened and 
applied to the free compiler.  I suspect that the money will be 
good enough so that any bugfixes or features added by the closed 
developers will be added to the free compiler right away, with no 
delay.



What we need is companies paying people to improve the
compiler and toolchain. This is slowly starting to happen: at 
Sociomantic there are already 2 of us (Don and me) dedicating 
some time to improving D as part of our job.
Thanks for the work that you and Don have done with Sociomantic.  
Why do you think more companies don't do this?  My point is that 
if there were money coming in from a paid compiler, Walter could 
fund even more such work.


We need more of this, and to get this, we need companies to 
start using D, and to get this, we need professionalism (I agree 
100% with Andrei on this one). It's a bootstrap effort, and it's 
not like volunteers need more time to be professional; it's just 
that you have to want to make the jump.
I think this ignores the decades-long history we have with open 
source software by now.  It is not merely "wanting to make the 
jump," most volunteers simply do not want to do painful tasks 
like writing documentation or cannot put as much time into 
development when no money is coming in.  Simply saying "We have 
to try harder to be professional" seems naive to me.


I think it's way better to do less stuff but with higher quality; 
nobody is asking people for more time, it's just changing the 
focus a bit, at least for some time. Again, this is only 
bootstrapping, and it is always hard and painful. We need to make 
the jump to make companies comfortable using D, and then things 
will start rolling by themselves.
If I understand your story right, the volunteers need to put a 
lot of effort into "bootstrapping" the project to be more 
professional, companies will see this and jump in, then they fund 
development from then on out?  It's possible, but is there any 
example you have in mind?  The languages that go this completely 
FOSS route tend not to have as much adoption as those with closed 
implementations, like C++.


First of all, your examples are completely wrong. The projects 
you are mentioning are 100% free, with no closed components 
(except for components done by third parties).
You are misstating what I said: I said "commercial," not 
"closed," and gave different examples of commercial models.  But 
let's look at them.



Your examples are just reinforcing what
I say above. Linux is completely GPL, so it's not even only 
open source.  It is Free Software, meaning the license is more 
restrictive than, for example, phobos's. This means it is harder 
for companies to adopt, and you can't possibly change it in a 
closed way if you want to distribute a binary.
And yet the linux kernel ships with many binary blobs, almost all 
the time.  I don't know how they legally do it, considering the 
GPL, yet it is much more common to run a kernel with binary blobs 
than a purely FOSS version.  The vast majority of linux installs 
are due to Android and every single one has significant binary 
blobs and closed-source modifications to the Android source, 
which is allowed since most of Android is under the more liberal 
Apache license, with only the linux kernel under the GPL.


Again, I don't know how they get away with all the binary drivers 
in the kernel; perhaps that is a grey area with the GPL.  For 
example, even the most open source Android devices, the Nexus 
devices sold directly by Google and running stock Android, have 
many binary blobs:


https://developers.google.com/android/nexus/drivers

Other than Android, linux is really only popular on servers, 
where you can "change it in a closed way" because you are not 
"distributing a binary."  Google takes advantage of this to run 
linux on a millio

Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Iain Buclaw
On 26 June 2013 15:04, eles  wrote:
> On Tuesday, 25 June 2013 at 08:21:38 UTC, Mike Parker wrote:
>>
>> On Tuesday, 25 June 2013 at 05:57:30 UTC, Peter Williams wrote:
>> D Season of Code! Then we don't have to restrict ourselves to one time of
>> the year.
>
>
> D Seasons of Code! Why restrict to a single season? Let's code all
> year long! :)

Programmers need to hibernate too, you know. ;)

--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Leandro Lucarella
Jacob Carlborg, on June 26 at 14:39 you wrote:
> On 2013-06-26 12:16, Leandro Lucarella wrote:
> 
> >Yeah, right, probably Python and Ruby have only 5k users...
> 
> There are companies backing those languages, at least Ruby, to some
> extent.

Read my other post; I won't repeat myself :)

-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/
--
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
--
SIGNATURES AND PAW PRINTS ARE BEING COLLECTED FOR THE PUPPY SENTENCED TO DEATH...
-- Crónica TV


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread eles

On Tuesday, 25 June 2013 at 08:21:38 UTC, Mike Parker wrote:

On Tuesday, 25 June 2013 at 05:57:30 UTC, Peter Williams wrote:
D Season of Code! Then we don't have to restrict ourselves to 
one time of the year.


D Seasons of Code! Why restrict to a single season? Let's code 
all year long! :)


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joseph Rushton Wakeling

On Wednesday, 26 June 2013 at 12:39:05 UTC, Jacob Carlborg wrote:

On 2013-06-26 12:16, Leandro Lucarella wrote:


Yeah, right, probably Python and Ruby have only 5k users...


There are companies backing those languages, at least Ruby, to 
some extent.


They don't own them, though -- they commit resources to them 
because the language's ongoing development serves their business 
needs.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Jacob Carlborg

On 2013-06-26 12:16, Leandro Lucarella wrote:


Yeah, right, probably Python and Ruby have only 5k users...


There are companies backing those languages, at least Ruby, to some extent.

--
/Jacob Carlborg


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joseph Rushton Wakeling

On Tuesday, 25 June 2013 at 21:38:01 UTC, Joakim wrote:
I don't know the views of the key contributors, but I wonder if 
they would have such a knee-jerk reaction against any 
paid/closed work.  The current situation would seem much more 
of a kick in the teeth to me: spending time trying to be 
"professional," as Andrei asks, and producing a viable, stable 
product used by a million developers, corporate users included, 
but never receiving any compensation for this great tool you've 
poured effort into, that your users are presumably often making 
money with.


Obviously I can't speak for the core developers, or even for the 
community as a group.  But I can make the following observations.


D's success as a language is _entirely_ down to volunteer effort 
-- as Walter highlighted in his keynote.  Volunteer effort is 
responsible for the development of the compiler frontend, the 
runtime, and the standard library.  Volunteers have put in the 
hard work of porting these to other compiler backends.  
Volunteers have made and reviewed language improvement proposals, 
and have been vigilant in reporting and resolving bugs.  
Volunteers also contribute to vibrant discussions on these very 
forums, providing support and advice to those in need of help.  
And many of these volunteers have been doing so over the course 
of years.


Now, in trying to drive more funding and professional effort 
towards D development, do you _really_ think that the right thing 
to do is to turn around to all those people and say: "Hey guys, 
after all the work you put in to make D so great, now we're going 
to build on that, but you'll have to wait 6 months for the extra 
goodies unless you pay"?


How do you think that will affect the motivation of all those 
volunteers -- the code contributors, the bug reporters, the forum 
participants?  What could you say to the maintainers of GDC or 
LDC, after all they've done to enable people to use the language, 
that could justify denying their compilers up-to-date access to 
the latest features?  How would it affect the atmosphere of 
discussion about language development -- compared to the current 
friendly, collegial approach?


... and -- how do you think it would affect uptake, if it was 
announced that access to the best features would come at a price? 
 There are orders of magnitude of difference between uptake of 
free and non-free services no matter what the domain, and 
software is one where free (as in freedom and beer) is much more 
strongly desired than in many other fields.


I understand that such a shift from being mostly OSS to having 
some closed components can be tricky, but that depends on the 
particular community.  I don't think any OSS project has ever 
become popular without having some sort of commercial model 
attached to it.  C++ would be nowhere without commercial 
compilers; linux would be unheard of without IBM and Red Hat 
figuring out a consulting/support model around it; and Android 
would not have put the linux kernel on hundreds of millions of 
computing devices without the hybrid model that Google 
employed, where they provide an open source core, paid for 
through increased ad revenue from Android devices, and the 
hardware vendors provide closed hardware drivers and UI skins 
on top of the OSS core.


There's a big difference between introducing commercial models 
with a greater degree of paid professional work, and introducing 
closed components.  Red Hat is a good example of that -- I can 
get, legally and for free, a fully functional copy of Red Hat 
Enterprise Linux without paying a penny.  It's just missing the 
Red Hat name and logos and the support contract.


In another email you mentioned Microsoft's revenues from Visual 
Studio but -- leaving aside for a moment all the moral and 
strategic concerns of closing things up -- Visual Studio enjoys 
that success because it's a virtually essential tool for 
professional development on Microsoft Windows, which still has an 
effective monopoly on modern desktop computing.  Microsoft has 
the market presence to be able to dictate terms like that -- no 
one else does.  Certainly no upcoming programming language could 
operate like that!


This talk prominently mentioned scaling to a million users and 
being professional: going commercial is the only way to get 
there.


It's more likely that closing off parts of the offering would 
limit that uptake, for reasons already given.  On the other hand, 
with more and more organizations coming to use and rely on D, 
there are plenty of other ways professional development could be 
brought in.  Just to take one example: companies with a 
mission-critical interest in D have a corresponding interest in 
their developers giving time to the language itself.  How many 
such companies do you think there need to be before D has a 
stable of skilled professional developers being paid explicitly 
to maintain and develop the language?


Your citation of the Linux kerne

Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Dicebot
On Wednesday, 26 June 2013 at 11:08:17 UTC, Leandro Lucarella 
wrote:
Android might be the only valid case (but I'm not really 
familiar with the Android model), but the kernel, since it is 
based on Linux, has to have its source code available when 
released. Maybe the drivers are closed source.


It is perfectly open: 
http://source.android.com/source/licenses.html ;)
Drivers tend to be closed source, but drivers are not part of the 
Android project; they are private to vendors.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Leandro Lucarella
Joakim, on June 26 at 08:33 you wrote:
> It is amazing how far D has gotten with no business model: money
> certainly isn't everything.  But it is probably impossible to get to
> a million users or offer professionalism without commercial
> implementations.

Yeah, right, probably Python and Ruby have only 5k users...

This argument is BS.

-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/
--
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
--
Are you such a dreamer?
To put the world to rights?
I'll stay home forever
Where two & two always
makes up five


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Leandro Lucarella
Joakim, on June 25 at 23:37 you wrote:
> On Tuesday, 25 June 2013 at 20:58:16 UTC, Joseph Rushton Wakeling
> wrote:
> >>I wonder what the response would be to injecting some money and
> >>commercialism into the D ecosystem.
> >
> >Given how D's whole success stems from its community, I think an
> >"open core" model (even with time-lapse) would be disastrous. It'd
> >be like kicking everyone in the teeth after all the work they put
> >in.
> I don't know the views of the key contributors, but I wonder if they
> would have such a knee-jerk reaction against any paid/closed work.

Against being paid, no; against being closed, YES. Please don't even think
about it. It was a hell of a ride trying to make D more open; we can't step
back now. What we need is companies paying people to improve the
compiler and toolchain. This is slowly starting to happen: at
Sociomantic there are already 2 of us (Don and me) dedicating some time
to improving D as part of our job.

We need more of this, and to get this, we need companies to start using
D, and to get this, we need professionalism (I agree 100% with Andrei on
this one). It's a bootstrap effort, and it's not like volunteers need more
time to be professional; it's just that you have to want to make the jump.
I think it's way better to do less stuff but with higher quality; nobody
is asking people for more time, it's just changing the focus a bit, at
least for some time. Again, this is only bootstrapping, and it is always
hard and painful. We need to make the jump to make companies comfortable
using D, and then things will start rolling by themselves.

> The current situation would seem much more of a kick in the teeth to
> me: spending time trying to be "professional," as Andrei asks, and
> producing a viable, stable product used by a million developers,
> corporate users included, but never receiving any compensation for
> this great tool you've poured effort into, that your users are
> presumably often making money with.
> 
> I understand that such a shift from being mostly OSS to having some
> closed components can be tricky, but that depends on the particular
> community.  I don't think any OSS project has ever become popular
> without having some sort of commercial model attached to it.  C++
> would be nowhere without commercial compilers; linux would be
> unheard of without IBM and Red Hat figuring out a consulting/support
> model around it; and Android would not have put the linux kernel on
> hundreds of millions of computing devices without the hybrid model
> that Google employed, where they provide an open source core, paid
> for through increased ad revenue from Android devices, and the
> hardware vendors provide closed hardware drivers and UI skins on top
> of the OSS core.

First of all, your examples are completely wrong. The projects you are
mentioning are 100% free, with no closed components (except for
components done by third parties). Your examples are just reinforcing what
I say above. Linux is completely GPL, so it's not even only open source.
It is Free Software, meaning the license is more restrictive than, for
example, phobos's. This means it is harder for companies to adopt, and you
can't possibly change it in a closed way if you want to distribute
a binary. Same for C++, which is not a project but a standard; and the
most successful and widespread compiler, GCC, is not only free, it is the
battle horse of free software and of the GNU project, created by the
most extremist free software advocate ever. Android might be the only
valid case (but I'm not really familiar with the Android model), but the
kernel, since it is based on Linux, has to have its source code available when
released. Maybe the drivers are closed source.

You are missing more closely related projects, like Python, Haskell,
Ruby, Perl, and probably 90% of the newish programming languages, which
are all 100% open source. And very successful, I might say. The key is
always breaking into the corporate world and making those corporations
contribute.

There are valid examples of projects using hybrid models, but they are
usually software-as-a-service models, not very applicable to
a compiler/language, like Wordpress or other web applications. Other
valid examples are MySQL, or Qt, which I think used a hybrid model at
least once. Lots of them died and were resurrected as 100% free projects,
like StarOffice -> OpenOffice -> LibreOffice.

And finally, making the *optimizer* (or some optimizations) closed will
hardly be a good business, given that there are 2 other backends out
there that usually kick the DMD backend's ass already, so people needing
more speed will probably just switch to gdc or ldc.

> This talk prominently mentioned scaling to a million users and being
> professional: going commercial is the only way to get there.

As in breaking into the commercial world? Then agreed. If you mean
commercial == closing some parts of the source, then I think you are WAY
OFF.

-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/

Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-25 Thread Joakim

On Wednesday, 26 June 2013 at 01:25:42 UTC, Bill Baxter wrote:
On Tue, Jun 25, 2013 at 2:37 PM, Joakim  
wrote:
This talk prominently mentioned scaling to a million users and 
being

professional: going commercial is the only way to get there.



IDEs are something you can have a freemium model for.  Core 
languages are
not these days.  If you have to pay to get the optimized 
version of the
language, there are just too many other places to look that 
don't charge.
 You want the best version of the language to be in everyone's 
hands...  Hard to make much money selling things to developers.
I agree that there is a lot of competition for programming 
languages.  However, Visual Studio brought in $400 million in 
extensions alone a couple years back:


http://blogs.msdn.com/b/somasegar/archive/2011/04/12/happy-1st-birthday-visual-studio-2010.aspx

Microsoft doesn't break out numbers for Visual Studio itself, but 
it might be a billion+ dollars a year, not to mention all the 
other commercial C++ compilers out there.  If the aim is to 
displace C++ and gain a million users, it is impossible to do so 
without commercial implementations.  All the languages you are 
thinking of that do not offer a single commercial implementation 
(remember, even Perl and Python have commercial options, e.g. 
ActiveState) have almost no usage compared to C++.  
It is true that there are large companies like Apple or 
Sun/Oracle that give away a lot of tooling for free, but D 
doesn't have such corporate backing.


It is amazing how far D has gotten with no business model: money 
certainly isn't everything.  But it is probably impossible to get 
to a million users or offer professionalism without commercial 
implementations.


In any case, the fact that the D front-end is under the Artistic 
license and most of the rest of the code is released under 
similarly liberal licensing means that someone can do this on 
their own, without any other permission from the community, and I 
expect that if D is successful, someone will.


I'm simply suggesting that the original developers jump-start 
that process by doing it themselves, in the hybrid form I've 
suggested, rather than potentially getting cut out of the 
decision-making process when somebody else does it.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-25 Thread Bill Baxter
On Tue, Jun 25, 2013 at 2:37 PM, Joakim  wrote:

> On Tuesday, 25 June 2013 at 20:58:16 UTC, Joseph Rushton Wakeling wrote:
>
>> I wonder what the response would be to injecting some money and
>>> commercialism into the D ecosystem.
>>>
>>
>> Given how D's whole success stems from its community, I think an "open
>> core" model (even with time-lapse) would be disastrous. It'd be like
>> kicking everyone in the teeth after all the work they put in.
>>
> I don't know the views of the key contributors, but I wonder if they would
> have such a knee-jerk reaction against any paid/closed work.  The current
> situation would seem much more of a kick in the teeth to me: spending time
> trying to be "professional," as Andrei asks, and producing a viable, stable
> product used by a million developers, corporate users included, but never
> receiving any compensation for this great tool you've poured effort into,
> that your users are presumably often making money with.
>
> I understand that such a shift from being mostly OSS to having some closed
> components can be tricky, but that depends on the particular community.  I
> don't think any OSS project has ever become popular without having some
> sort of commercial model attached to it.  C++ would be nowhere without
> commercial compilers; linux would be unheard of without IBM and Red Hat
> figuring out a consulting/support model around it; and Android would not
> have put the linux kernel on hundreds of millions of computing devices
> without the hybrid model that Google employed, where they provide an open
> source core, paid for through increased ad revenue from Android devices,
> and the hardware vendors provide closed hardware drivers and UI skins on
> top of the OSS core.
>
> This talk prominently mentioned scaling to a million users and being
> professional: going commercial is the only way to get there.
>

IDEs are something you can have a freemium model for.  Core languages are
not these days.  If you have to pay to get the optimized version of the
language, there are just too many other places to look that don't charge.
 You want the best version of the language to be in everyone's hands.  But
there can be some tools you have to pay for.  http://www.wingware.com/ is a
good example of a commercial Python IDE that adds value to the community
with a commercial offering.  I paid for a copy back when I was doing a lot
of python development.   It is definitely not a business I would want to be
in, though.  I was surprised to see they are still alive, actually.  Hard
to make much money selling things to developers.

--bb


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-25 Thread Manu
On 26 June 2013 09:59, Peter Williams  wrote:

> On 26/06/13 06:14, Nick Sabalausky wrote:
>
>> On Tue, 25 Jun 2013 15:57:18 +1000
>> Peter Williams  wrote:
>>
>>>
>>> Can you think of a better name than "D Summer Of Code"?  It's very
>>> northern hemisphere centric and makes us southerners feel like the
>>> rest of the world doesn't know there is a southern hemisphere (or, if
>>> they do, that they don't know how the seasons work) :-).
>>>
>>>
>> I'm pretty sure the southern hemisphere has summer too...It's just a lot
>> colder ;) Nobody called it "D Warm-Summer of Code".
>>
>>
> Not all of it.  In tropical Australia, they have two seasons - the wet
> season (aka the suicide season) and the dry season :-).


I like to think of it as the soaking bloody wet season, and the slightly
less wet season ;)
Slightly more tolerable than indonesia, which has only a single 'soaking
wet at precisely 4pm every day, but otherwise lovely weather (if you like
humidity) season'...


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-25 Thread Peter Williams

On 26/06/13 06:14, Nick Sabalausky wrote:

On Tue, 25 Jun 2013 15:57:18 +1000
Peter Williams  wrote:


Can you think of a better name than "D Summer Of Code"?  It's very
northern hemisphere centric and makes us southerners feel like the
rest of the world doesn't know there is a southern hemisphere (or, if
they do, that they don't know how the seasons work) :-).



I'm pretty sure the southern hemisphere has summer too...It's just a lot
colder ;) Nobody called it "D Warm-Summer of Code".



Not all of it.  In tropical Australia, they have two seasons - the wet 
season (aka the suicide season) and the dry season :-).


Peter


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-25 Thread Nick Sabalausky
On Mon, 24 Jun 2013 09:13:48 -0700
Andrei Alexandrescu  wrote:

> reddit: 
> http://www.reddit.com/r/programming/comments/1gz40q/dconf_2013_closing_keynote_quo_vadis_by_andrei/
> 

Torrents and links up, plus a torrent now for the original MP4 of the
previous talk:

http://semitwist.com/download/misc/dconf2013/



Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-25 Thread Joakim
On Tuesday, 25 June 2013 at 20:58:16 UTC, Joseph Rushton Wakeling 
wrote:
I wonder what the response would be to injecting some money 
and commercialism into the D ecosystem.


Given how D's whole success stems from its community, I think 
an "open core" model (even with time-lapse) would be 
disastrous. It'd be like kicking everyone in the teeth after 
all the work they put in.
I don't know the views of the key contributors, but I wonder if 
they would have such a knee-jerk reaction against any paid/closed 
work.  The current situation would seem much more of a kick in 
the teeth to me: spending time trying to be "professional," as 
Andrei asks, and producing a viable, stable product used by a 
million developers, corporate users included, but never receiving 
any compensation for this great tool you've poured effort into, 
that your users are presumably often making money with.


I understand that such a shift from being mostly OSS to having 
some closed components can be tricky, but that depends on the 
particular community.  I don't think any OSS project has ever 
become popular without having some sort of commercial model 
attached to it.  C++ would be nowhere without commercial 
compilers; linux would be unheard of without IBM and Red Hat 
figuring out a consulting/support model around it; and Android 
would not have put the linux kernel on hundreds of millions of 
computing devices without the hybrid model that Google employed, 
where they provide an open source core, paid for through 
increased ad revenue from Android devices, and the hardware 
vendors provide closed hardware drivers and UI skins on top of 
the OSS core.


This talk prominently mentioned scaling to a million users and 
being professional: going commercial is the only way to get there.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-25 Thread Joseph Rushton Wakeling

On Tuesday, 25 June 2013 at 15:44:02 UTC, Joakim wrote:
Just finished watching Andrei's talk, it was up to his usual 
high standard.


I found the bits about professionalism a bit weird though: can 
we really expect that from a volunteer effort?  I'm pretty sure 
the A/V guys at the conference weren't volunteers, ie they were 
paid.


Along the line that QAston started, if you want more 
professionalism, is there any interest in producing a 
commercial D compiler?  If not, why not?  I notice that Walter 
sells C and C++ compilers and source on digitalmars.com, but 
strangely not D.
 There are interesting business/source models nowadays where 
you can be mostly open source and still sell a commercial 
product.


For example, Walter has often talked about optimizations in the 
compiler that he'd like to get to.  There could be two 
compilers: one where the source is fully publicly available, and 
another made available to paying users, with additional 
optimizations done either by Walter or by others he 
supervises.  The source for those optimizations would not be 
available publicly, though it could perhaps be made available only to the 
buyers under a non-OSS license.  After enough time has passed 
for the optimization work to be paid for, the optimization 
patches would eventually be merged into the slower, non-paid 
version.  Android uses a similar hybrid model, which has 
obviously been enormously successful.


Another possibility is a bounty system, where users pledge 
money towards needed features or bug fixes.  It'd basically be 
a more distributed version of the hybrid approach I've outlined.


I wonder what the response would be to injecting some money and 
commercialism into the D ecosystem.



Given how D's whole success stems from its community, I think an 
"open core" model (even with time-lapse) would be disastrous. 
It'd be like kicking everyone in the teeth after all the work 
they put in.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-25 Thread Jacob Carlborg

On 2013-06-25 11:42, Jonas Drewsen wrote:


I'm a Danish guy, so there is at least one Dane using D :)


Tomas Lindquist Olsen, creator of LDC (LLVMDC back then), is Danish, if I 
recall correctly.


--
/Jacob Carlborg


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-25 Thread Andrei Alexandrescu

On 6/24/13 9:13 AM, Andrei Alexandrescu wrote:

reddit:
http://www.reddit.com/r/programming/comments/1gz40q/dconf_2013_closing_keynote_quo_vadis_by_andrei/


facebook: https://www.facebook.com/dlang.org/posts/662488747098143

twitter: https://twitter.com/D_Programming/status/349197737805373441

hackernews: https://news.ycombinator.com/item?id=5933818

youtube: http://youtube.com/watch?v=4M-0LFBP9AU


Andrei


HD version available: http://archive.org/details/dconf2013-day03-talk06

Andrei


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-25 Thread Nick Sabalausky
On Tue, 25 Jun 2013 15:57:18 +1000
Peter Williams  wrote:
> 
> Can you think of a better name than "D Summer Of Code"?  It's very 
> northern hemisphere centric and makes us southerners feel like the
> rest of the world doesn't know there is a southern hemisphere (or, if
> they do, that they don't know how the seasons work) :-).
> 

I'm pretty sure the southern hemisphere has summer too...It's just a lot
colder ;) Nobody called it "D Warm-Summer of Code".



Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-25 Thread Ali Çehreli

On 06/24/2013 10:57 PM, Peter Williams wrote:

> Can you think of a better name than "D Summer Of Code"?  It's very
> northern hemisphere centric and makes us southerners feel like the rest
> of the world doesn't know there is a southern hemisphere

The only southern country is Mexico, which I am told is in the Northern 
hemisphere. :p


Ali



Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-25 Thread Joakim
Just finished watching Andrei's talk, it was up to his usual high 
standard.


I found the bits about professionalism a bit weird though: can we 
really expect that from a volunteer effort?  I'm pretty sure the 
A/V guys at the conference weren't volunteers, ie they were paid.


Along the line that QAston started, if you want more 
professionalism, is there any interest in producing a commercial 
D compiler?  If not, why not?  I notice that Walter sells C and 
C++ compilers and source on digitalmars.com, but strangely not D. 
 There are interesting business/source models nowadays where you 
can be mostly open source and still sell a commercial product.


For example, Walter has often talked about optimizations in the 
compiler that he'd like to get to.  There could be two compilers: 
one where the source is fully publicly available, and another made 
available to paying users, with additional optimizations 
done either by Walter or by others he supervises.  The source 
for those optimizations would not be available publicly, though 
it could perhaps be made available only to the buyers under a non-OSS 
license.  After enough time has passed for the optimization work 
to be paid for, the optimization patches would eventually be 
merged into the slower, non-paid version.  Android uses a similar 
hybrid model, which has obviously been enormously successful.


Another possibility is a bounty system, where users pledge money 
towards needed features or bug fixes.  It'd basically be a more 
distributed version of the hybrid approach I've outlined.


I wonder what the response would be to injecting some money and 
commercialism into the D ecosystem.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-25 Thread Leandro Lucarella
Peter Williams, on June 25 at 15:57 you wrote:
> On 25/06/13 02:13, Andrei Alexandrescu wrote:
> >reddit:
> >http://www.reddit.com/r/programming/comments/1gz40q/dconf_2013_closing_keynote_quo_vadis_by_andrei/
> >
> >
> >facebook: https://www.facebook.com/dlang.org/posts/662488747098143
> >
> >twitter: https://twitter.com/D_Programming/status/349197737805373441
> >
> >hackernews: https://news.ycombinator.com/item?id=5933818
> >
> >youtube: http://youtube.com/watch?v=4M-0LFBP9AU
> >
> >
> >Andrei
> 
> Can you think of a better name than "D Summer Of Code"?  It's very
> northern hemisphere centric and makes us southerners feel like the
> rest of the world doesn't know there is a southern hemisphere (or, if
> they do, that they don't know how the seasons work) :-).

Or they know, but they just don't give a fuck :)

-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/
--
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
--
The average person laughs 13 times a day


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-25 Thread Jonas Drewsen


I'm a Danish guy, so there is at least one Dane using D :)

/Jonas


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-25 Thread Mike Parker

On Tuesday, 25 June 2013 at 05:57:30 UTC, Peter Williams wrote:



Can you think of a better name than "D Summer Of Code"?  It's 
very northern hemisphere centric and makes us southerners feel 
like the rest of the world doesn't know there is a southern 
hemisphere (or, if they do, that they don't know how the seasons 
work) :-).


Peter


D Season of Code! Then we don't have to restrict ourselves to one 
time of the year.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-24 Thread Peter Williams

On 25/06/13 02:13, Andrei Alexandrescu wrote:

reddit:
http://www.reddit.com/r/programming/comments/1gz40q/dconf_2013_closing_keynote_quo_vadis_by_andrei/


facebook: https://www.facebook.com/dlang.org/posts/662488747098143

twitter: https://twitter.com/D_Programming/status/349197737805373441

hackernews: https://news.ycombinator.com/item?id=5933818

youtube: http://youtube.com/watch?v=4M-0LFBP9AU


Andrei


Can you think of a better name than "D Summer Of Code"?  It's very 
northern hemisphere centric and makes us southerners feel like the rest 
of the world doesn't know there is a southern hemisphere (or, if they do, 
that they don't know how the seasons work) :-).


Peter


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-24 Thread David Gileadi

On 6/24/13 9:19 AM, David Gileadi wrote:

Slides seem to be missing from
http://dconf.org/2013/talks/alexandrescu.pdf; I get a 404.


I posted too soon; they're there now.  Sorry for the noise.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-24 Thread David Gileadi
Slides seem to be missing from 
http://dconf.org/2013/talks/alexandrescu.pdf; I get a 404.


DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-24 Thread Andrei Alexandrescu
reddit: 
http://www.reddit.com/r/programming/comments/1gz40q/dconf_2013_closing_keynote_quo_vadis_by_andrei/


facebook: https://www.facebook.com/dlang.org/posts/662488747098143

twitter: https://twitter.com/D_Programming/status/349197737805373441

hackernews: https://news.ycombinator.com/item?id=5933818

youtube: http://youtube.com/watch?v=4M-0LFBP9AU


Andrei