Re: DUB 0.9.22 released

2014-10-05 Thread Sönke Ludwig via Digitalmars-d-announce

Am 02.10.2014 14:27, schrieb Ben Boeckel via Digitalmars-d-announce:

On Fri, Sep 26, 2014 at 06:29:19 +, Dragos Carp via Digitalmars-d-announce 
wrote:

1.2.3.x is an invalid version number. Only 3 group numbers are
allowed [1]. Though you could use prerelease and/or build
suffixes (1.2.3-0w / 1.2.3+0w).


How would you version a library which wraps another with 4 version
components? Enforcing semver to the point that only 3 components are
supported seems a little heavy-handed to me.

--Ben



The idea is to have an interoperable standard - modifying it in any way 
would break that, and then we could just as well invent our own 
standard entirely.


The way I see it is that the binding should be considered as 
individually versioned. It should usually start at 1.0.0 (maybe X.0.0, 
where X is the major version of the wrapped library, if that makes sense 
for the original version scheme) and be incremented purely according to 
SemVer. The version of the wrapped library can be documented as build 
metadata, but that's it.
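As a hedged illustration of that suggestion (the package name and numbers here are made up): SemVer allows dot-separated identifiers after a `+`, so a binding's own version can carry the wrapped library's four-component version as build metadata. Whether dub reads a `version` field directly or derives the version from a git tag depends on how the package is set up.

```json
{
    "name": "foolib-binding",
    "description": "D binding to the hypothetical foolib 8.2.1.4",
    "version": "1.0.0+8.2.1.4"
}
```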


To me, a big argument against supporting something non-standard, such as 
a fourth version digit, is that it facilitates blindly adopting a 
library's original version scheme, even if that scheme may work in a 
completely different way w.r.t. major, minor and patch versions.


But the idea of SemVer is that you can safely specify a version range 
such as 1.2.x and be sure to only get bugfixes, or 1.x.x and only get 
backwards compatible changes. Many other schemes don't have such 
guarantees, so directly translating them would be a step toward chaos.
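For illustration (package names hypothetical), this is how such ranges look in a dub.json dependencies section: `~>1.2.3` matches >=1.2.3 and <1.3.0 (bugfixes only), while `~>1.2` matches >=1.2.0 and <2.0.0 (backwards-compatible changes).

```json
{
    "dependencies": {
        "somepkg": "~>1.2.3",
        "otherpkg": "~>1.2"
    }
}
```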


Re: [OT Security PSA] Shellshock: Update your bash, now!

2014-10-05 Thread eles via Digitalmars-d-announce

On Thursday, 2 October 2014 at 11:12:12 UTC, Kagamin wrote:

On Thursday, 2 October 2014 at 07:43:54 UTC, eles wrote:

update-manager -d

It works.


Does it perform package upgrade? The comments are rather scary:
---
Hi, I have installed Linux mint 15 with Mint4Win as Dual boot 
with Windows 7.

Then upgraded it to Mint 16 and it was running fine.
But when I upgrade to Mint 17 (Qiana), after restarting the 
partition loop0 (or loopback0 or something like that) fails to 
load.
It shows an error like: "Press I to ignore, S to skip or M for 
manual recovery."


Hi,

A bit of news here, as I just updated my knowledge about Linux Mint 
& Linux Mint Debian Edition.


In short, from this discussion and its comments:

http://segfault.linuxmint.com/2014/08/upcoming-lmde-2-to-be-named-betsy/

Linux Mint Debian abandons its (semi-)rolling model and will 
basically become just a kind of Ubuntu, but based on Debian 
Stable (Ubuntu, AFAIK, is based on Debian Unstable). They will 
require full upgrades every 2 years, but the upgrades should be 
smooth (no reinstall required). For two years, you will not need 
to do such an upgrade, just the basic security upgrades and some 
updates (mainly browser and email clients).


Linux Mint, starting from version 17, marks a departure from 
previous releases (this is why you might have encountered 
difficulties in upgrading) by keeping the same code base (Ubuntu 
14.04 LTS) for the next 5 years. So, during this time, it will 
basically be a rolling distribution, as some software will get 
updated just as regular maintenance (security fixes etc.) happens. 
Probably, after those 5 years, they will change the code base to 
the next Ubuntu LTS, which will start a new 5-year-long cycle.


One piece of advice: Debian Testing might seem (by the name) more 
secure than Debian Unstable. The truth is that the latter is more 
up-to-date and receives security fixes first (they enter Debian 
Unstable first, then are pre-validated before going into Debian 
Testing). Moreover, Debian Unstable is not as unstable as its name 
might suggest, but, yes, it requires you to tinker occasionally 
(read: maybe once every three months) with apt-get and vim. But 
it's not such a big deal.


Re: SDC-32bit

2014-10-05 Thread Stefan Koch via Digitalmars-d-announce

I just updated my fork.

  https://github.com/UplinkCoder/sdc32-experimental

* test0037 passes now
  meaning that alias works in more cases

* I implemented foreach for Arrays
  though since array literals are currently not supported, this is 
not too helpful.
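For reference, this is the kind of plain D code the new foreach support targets (example mine, not from the SDC test suite); the array is filled element by element precisely because array literals are not supported yet.

```d
// Sum the elements of a slice with foreach; no array literals used,
// since SDC does not support them yet.
int sum(int[] a)
{
    int s = 0;
    foreach (x; a)
        s += x;
    return s;
}

void main()
{
    int[3] buf;
    buf[0] = 1;
    buf[1] = 2;
    buf[2] = 3;
    assert(sum(buf[]) == 6);
}
```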




Re: [OT Security PSA] Shellshock: Update your bash, now!

2014-10-05 Thread Paul O'Neil via Digitalmars-d-announce
On 10/01/2014 04:50 PM, Nick Sabalausky wrote:
 On 10/01/2014 01:38 PM, Iain Buclaw via Digitalmars-d-announce wrote:

 One nice thing about Ubuntu is that they even give you access to
 future kernel versions through what they call HWE.  In short, I can
 run a 14.04 LTS kernel on a 12.04 server, so that I'm able to use
 modern hardware and take advantage of software that uses features of
 Linux that are actively worked on (like LXC) on an older software
 stack.

 
 Is there anything similar in Debian?
 

Debian Backports: backports.debian.org

-- 
Paul O'Neil
Github / IRC: todayman


Re: [OT Security PSA] Shellshock: Update your bash, now!

2014-10-05 Thread Kagamin via Digitalmars-d-announce

On Friday, 3 October 2014 at 11:25:59 UTC, eles wrote:
Debian and Debian-based asks you to confirm file overwrite 
(usually, the diff is displayed too).


Isn't it the same package manager? It should be able to do the 
same on Mint. Or maybe fstab can be copied somewhere and then 
restored at some point?


On Sunday, 5 October 2014 at 08:54:46 UTC, eles wrote:
Linux Mint, starting from version 17, marks a departure from 
previous releases (this is why you might have encountered 
difficulties in upgrading) by keeping the same code base 
(Ubuntu 14.04 LTS) for the next 5 years. So, during this time, 
it will basically be a rolling-distribution, as some software 
will get updated just as regular (security fixes etc.) happens.


Truly rolling or only security updates?
Well, I'm ok with a fresh install. But can it run under the 
target Linux itself? Or rather, what to run from the disk? Since 
the mint4win installation is a virtual disk, I'm not sure the 
installer will handle it gracefully; they're usually 
partition-oriented. Not sure if this eliminates the problem with 
fstab though.


Re: [OT Security PSA] Shellshock: Update your bash, now!

2014-10-05 Thread eles via Digitalmars-d-announce

On Sunday, 5 October 2014 at 21:13:01 UTC, Kagamin wrote:

On Friday, 3 October 2014 at 11:25:59 UTC, eles wrote:
Debian and Debian-based asks you to confirm file overwrite 
(usually, the diff is displayed too).


Isn't it the same package manager? It should be able to do the 
same on Mint. Or maybe fstab can be copied somewhere and then 
back at some point?


It should be the same, but I am never sure about the homegrown 
patches that the Mint team applies (for example, they applied 
the patch that presents update packs).




Truly rolling or only security updates?


Actually, a kind of release every 6 months, but it only comes 
down to updating the Mint plug-ins and a selected handful of 
programs (probably the browser, update manager and e-mail clients). 
There is not much difference w.r.t. a rolling release, because the 
code base does not change. Basically, the releases will be 
nothing more than glorified update packs, so essentially the 
same as what LMDE does today. Call it semi-rolling. At least 
this is my understanding of it.



Well, I'm ok with a fresh install.


My advice is to wait a bit for the new LMDE to come out. 
Installing LMDE now, as the current model approaches its end of 
life, is not the best idea, since you will almost surely have to 
do it again when they change the code base (from Testing to Stable).


But can it run under the target Linux itself? Or rather, what to 
run from the disk? Since the mint4win installation is a virtual 
disk, I'm not sure the installer will handle it gracefully; 
they're usually partition-oriented. Not sure if this eliminates 
the problem with fstab though.


Sorry, I have no direct experience with Mint; I am extrapolating 
my understanding of other distributions to it, based on the 
comments. I cannot answer those questions, as they require 
first-hand experience.


Anyway, if you feel a bit adventurous, the current LMDE model is 
somewhat continued by a distribution called SolydXK (google it), 
and a newcomer on the scene is Tanglu, which I just installed in 
a VM and which looks very promising (a mix of Debian Stable, 
Testing and Unstable, release-style, but hopefully with 
non-disruptive upgrades).


Re: Program logic bugs vs input/environmental errors

2014-10-05 Thread Cliff via Digitalmars-d
On Sunday, 5 October 2014 at 05:46:56 UTC, ketmar via 
Digitalmars-d wrote:

On Sun, 05 Oct 2014 03:47:31 +
Cliff via Digitalmars-d digitalmars-d@puremagic.com wrote:

This is a great feature where we lack a really solid IDE 
experience (which would have intellisense and auto-completion 
that could be accurate and prevent such errors from occurring 
in the first place.)  Otherwise it would probably be redundant.
i'm not using IDEs for more than a decade (heh, i'm using 
mcedit to write code). yet this feature drives me mad: it trashes 
my terminal with useless garbage output. it was *never* a help: 
there was no moment when i looked at a suggested identifier and 
thought: aha, THAT is the bug! but virtually each time i see a 
suggestion i'm thinking: oh, well, i know. c'mon, why don't you 
just shut up?!

it's like colorizing the output, yet colorizing can be turned 
off, and suggestions can't.


That you even make the bug at all which triggers the error is an 
indication the developer workflow you use is fundamentally 
flawed.  This is something which should be caught much earlier - 
when you are at the point the typo was made - not after you have 
committed a change to disk and presented it to the compiler, 
where your train of thought may be significantly different.


I'd much rather energy be directed at the prevention of mistakes, 
not the suppression of help in fixing them - if I had to choose.  
But I wouldn't object to having a switch to turn off the help if 
it bothers you that much.  Seems like a very small thing to add.


Re: scope() statements and return

2014-10-05 Thread ketmar via Digitalmars-d
On Sat, 04 Oct 2014 14:48:26 -0700
Andrei Alexandrescu via Digitalmars-d digitalmars-d@puremagic.com
wrote:

 There's no interesting way to check this because functions don't list 
 the exceptions they might throw (like Java does). -- Andrei
sure there is no way to check. this 'final try' helper is required
exactly 'cause the compiler can't do the necessary checks, and the
programmer can assure the compiler that it's ok, we'll catch 'em all
here! this way 'try/catch' will not be nothrow, only 'final
try/catch'. and there will be *nothing* that can be caught outside of
'final try/catch'. i.e. if something was not caught in 'final', it's
a fatal bug. crash-boom-bang, we are dead.

but i don't want to create an ER, 'cause it will be rejected. it
implies code breakage (simple try/catch can't be used in 'nothrow'
functions anymore), and ERs with code-breaking features have no chance.
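For context (example mine): in current D, a nothrow function may already contain try/catch as long as the compiler can prove no Exception escapes; catching Exception itself is the usual way to satisfy that. The 'final try' idea above would turn this compiler deduction into an explicit, checked promise.

```d
import std.conv : to;

// A nothrow function using an ordinary try/catch; catching Exception
// (the catch-all for recoverable errors) lets the compiler accept it.
int parseOrZero(string s) nothrow
{
    try
        return s.to!int;     // to!int may throw ConvException
    catch (Exception e)      // ...which this clause is proven to catch
        return 0;
}

void main()
{
    assert(parseOrZero("42") == 42);
    assert(parseOrZero("oops") == 0);
}
```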


signature.asc
Description: PGP signature


Re: Program logic bugs vs input/environmental errors

2014-10-05 Thread Walter Bright via Digitalmars-d

On 10/4/2014 10:24 PM, Ola Fosheim Grostad wrote:

On Saturday, 4 October 2014 at 22:24:08 UTC, Nick Sabalausky wrote:

And the specification itself may have flaws as well, so again, there are NO
guarantees here whatsoever. The only thing proofs do in an engineering context
is decrease the likelihood of problems, just like any other engineering 
strategy.


Machine-validated proofs guarantee that there are no bugs in the source code,
for any reasonable definition of "guarantee". There is no reason for having
proper asserts left in the code after that.

If the specification the contract is based on is inadequate, then that is not an
issue for the contractor. You still implement according to the spec/contract
until the contract is changed by the customer.

If an architect didn't follow the requirements of the law when drawing a house,
then he cannot blame the carpenter for building the house according to the
drawings.


Carpenters can be liable for building things they know are wrong, regardless of 
what the spec says.


Re: Program logic bugs vs input/environmental errors

2014-10-05 Thread ketmar via Digitalmars-d
On Sun, 05 Oct 2014 05:55:37 +
Cliff via Digitalmars-d digitalmars-d@puremagic.com wrote:

 That you even make the bug at all which triggers the error is an 
 indication the developer workflow you use is fundamentally 
 flawed.
what are we talking about here? sorry, i'm lost.

  This is something which should be caught much earlier - 
 when you are at the point the typo was made - not after you have 
 committed a change to disk and presented it to the compiler, 
 where your train of thought may be significantly different.
i'm writing code in big pieces, and fixing typos etc. then works as a
background thread in my brain, while the main thread is still thinking
about the overall picture. no IDE helps me to improve this, and i don't
need all their tools and their bloat. and i'm not making a lot of
typos. ;-)

 But I wouldn't object to having a switch to turn off the help if 
 it bothers you that much.  Seems like a very small thing to add.
this is a spin-off ;-) of the actual discussion. what i'm talking about
originally is that the compiler should stop after the first error it
encounters, not try to parse/analyze the code further. and there i was
talking about IDEs, which help to fix a lot of typos and other grammar
bugs even before compilation starts.

and what i'm talking about is that trying to make sense of garbage
input is not a sign of robust software, if that software is not
special-case software which was designed to parse garbage.

and that's why i'm against any warnings in compilers: all warnings
should be errors.


signature.asc
Description: PGP signature


Re: What are the worst parts of D?

2014-10-05 Thread Shammah Chancellor via Digitalmars-d

On 2014-09-25 01:54:26 +, H. S. Teoh via Digitalmars-d said:

On Wed, Sep 24, 2014 at 05:37:37PM -0700, Andrei Alexandrescu via 
Digitalmars-d wrote:

On 9/24/14, 4:48 PM, H. S. Teoh via Digitalmars-d wrote:

You're misrepresenting my position. *In spite of their current flaws*,
modern build systems like SCons and Tup already far exceed make in
their basic capabilities and reliability.


Fair enough, thanks. Anyhow the point is, to paraphrase Gandhi: Be
the change you want to see in dlang's build system :o). -- Andrei


Well, Cliff & I (and whoever's interested) will see what we can do about
that. Perhaps in the not-so-distant future we may have a D build tool
that can serve as the go-to build tool for D projects.

T



Please submit PRs for dub instead of creating a new project. Dub is 
already a nice way of managing library packages. I'd rather not use 
two different tools.




Re: Program logic bugs vs input/environmental errors

2014-10-05 Thread via Digitalmars-d

On Sunday, 5 October 2014 at 06:09:44 UTC, Walter Bright wrote:
Carpenters can be liable for building things they know are 
wrong, regardless of what the spec says.


You can be made liable if you don't notify the responsible entity 
you work for that the solution is unsuitable for the purpose.


How it works in my country is that to raise a building you need a 
qualified entity that is responsible for the quality of the 
construction which could be an architect, a building engineer or 
a carpenter with some extra certification. They are responsible 
for ensuring/approving the overall quality. They are required to 
ensure the quality of the spec/work and are therefore liable vs 
the customer.


Re: Deprecations: Any reason left for warning stage?

2014-10-05 Thread Daniel Murphy via Digitalmars-d
David Nadlinger  wrote in message 
news:tsxbhkdfwilqjpqek...@forum.dlang.org...


Second, if I'm using -w, I'm typically interested in errors if I write 
fishy code, not because some third-party library I just updated made a 
small change to its API. I don't see where the advantage would be in 
conflating the two things.


You don't get 'warning' deprecations from third-party code, you get them 
from the compiler when you use a feature that is on its way to being 
deprecated.  Usually because it's fishy. 



Re: Deprecations: Any reason left for warning stage?

2014-10-05 Thread Iain Buclaw via Digitalmars-d
On 4 Oct 2014 22:30, David Nadlinger via Digitalmars-d 
digitalmars-d@puremagic.com wrote:

 On Monday, 29 September 2014 at 15:13:28 UTC, Daniel Murphy wrote:

 But that's a good thing - the people who get their code broken are the
people who are asking for it with '-w'.


 I don't think there is much merit to this argument.

 First, it's not like the ability to make diagnostics halt the build is
something specific to Warnings. Just pass -de and use of deprecated symbols
will halt compilation too. (Actually, not passing -de would lead to a funny
error (Warning) -> warning (Deprecation) -> error (Error) progression right now.)


One reason why on GDC I removed the distinction between -w and -de, there
are now three GCC-style switches: -Wall (-wi), -Wdeprecated (-d), and
-Werror (-w, -de). All of which are off by default.

Iain.


Re: Deprecations: Any reason left for warning stage?

2014-10-05 Thread Iain Buclaw via Digitalmars-d
On 5 Oct 2014 08:10, Daniel Murphy via Digitalmars-d 
digitalmars-d@puremagic.com wrote:

 David Nadlinger  wrote in message
news:tsxbhkdfwilqjpqek...@forum.dlang.org...


 Second, if I'm using -w, I'm typically interested in errors if I write
fishy code, not because some third-party library I just updated made a
small change to its API. I don't see where the advantage would be in
conflating the two things.


 You don't get 'warning' deprecations from third-party code, you get them
from the compiler when you use a feature that is on its way to being
deprecated.  Usually because it's fishy.

The 'deprecated' attribute, for all intents and purposes, might as well emit
a warning when you try to use deprecated features of third-party code.

Didn't you Phobos devs have a hardDeprecation message too that would force
an error?


Re: Deprecations: Any reason left for warning stage?

2014-10-05 Thread Jonathan M Davis via Digitalmars-d
On Friday, September 26, 2014 23:12:47 Daniel Kozák via Digitalmars-d wrote:
 The only right solution is to print a warning message by default

 - Original message -
 From: David Nadlinger via Digitalmars-d digitalmars-d@puremagic.com
 Sent: 26. 9. 2014 18:20
 To: digitalmars-d@puremagic.com
 Subject: Deprecations: Any reason left for warning stage?

 As Walter mentioned in a recent pull request discussion [1], the
 first formal deprecation protocol we came up with for language
 changes looked something like this:

 1. remove from documentation
 2. warning
 3. deprecation
 4. error

This makes no sense now. Realistically, warning is more restrictive than
deprecation at this point, because -w makes warnings errors, and the
equivalent for deprecated is probably used much less (certainly, it's much
newer and therefore less likely to be used). So, step 2 to 3 is essentially
making things _less_ restrictive. And really, warnings have nothing to do with
deprecation. Using them made sense when there was no way to print deprecation
messages without having them be an error, but now that deprecation messages
are just messages normally and do not alter compilation at all, using warnings
for deprecation makes no sense at all.

- Jonathan M Davis
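To make the current stages concrete (sketch mine, not from the post): in today's compilers the deprecation path is driven by the `deprecated` attribute plus the -d/-de/-dw switches, with no warning stage involved.

```d
// Using a deprecated symbol: by default dmd prints a deprecation
// message and keeps compiling; -de turns it into an error, and -d
// silences it entirely.
deprecated("use newApi instead") int oldApi() { return 1; }

int newApi() { return 2; }

void main()
{
    assert(oldApi() == 1); // compiles; compiler prints a deprecation message
    assert(newApi() == 2);
}
```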




Re: std.utf.decode @nogc please

2014-10-05 Thread via Digitalmars-d
On Saturday, 4 October 2014 at 22:02:14 UTC, Andrei Alexandrescu 
wrote:

On 10/4/14, 4:24 AM, Marc Schütz schue...@gmx.net wrote:
On Friday, 3 October 2014 at 19:51:40 UTC, Andrei Alexandrescu 
wrote:

On 10/3/14, 11:35 AM, Dmitry Olshansky wrote:

01-Oct-2014 14:10, Robert burner Schadek wrote:
lately when working on std.string I ran into problems making stuff 
@nogc, as std.utf.decode is not @nogc.

https://issues.dlang.org/show_bug.cgi?id=13458


Trivial to do. But before that somebody has got to make one of:

a) A policy on reuse of exceptions. Literally, we have easy TLS, 
why not put 1 copy of each possible exception there? (**ck the 
chaining, who needs it anyway?)
b) Make all exceptions ref-counted.

The benefit of A is that creating exceptions becomes MUCH 
faster.


This seems to be going in circles. Didn't we just agree we 
solve this
by making exceptions reference counted? Please advise. -- 
Andrei


Depends on who "we" is. There was a large discussion with 
several alternative suggestions and no clear conclusion.


I proposed in this forum that we use reference counting and 
there was general agreement that that would help, no killer 
counterargument, and no other better solution. Conclusion was 
pretty clear to me: we move to reference counted exceptions. -- 
Andrei


There was indeed agreement on reference counting (although 
someone suggested disallowing cycles or removing chaining 
altogether). But what I meant is that there was no agreement on a 
specific solution, and several ones were proposed, from full 
general compiler supported refcounting to library implementation.
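Option (a) from the quoted discussion could be sketched roughly like this (names mine; whether Phobos would actually do this is exactly what is being debated — note that exception chaining is lost):

```d
import std.utf : UTFException;

// Module-level variables in D are thread-local by default, so each
// thread gets its own pre-allocated instance.
private UTFException cached;

static this()
{
    cached = new UTFException(""); // one GC allocation per thread, up front
}

// The throw path then needs no allocation at all.
UTFException utfError(string msg) @nogc nothrow
{
    cached.msg = msg; // reuse the cached instance
    return cached;
}

void main()
{
    auto e = utfError("invalid sequence");
    assert(e.msg == "invalid sequence");
    assert(e is utfError("again")); // same instance reused every time
}
```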


Re: Program logic bugs vs input/environmental errors

2014-10-05 Thread Paolo Invernizzi via Digitalmars-d

On Sunday, 5 October 2014 at 09:06:45 UTC, Paolo Invernizzi wrote:

On Sunday, 5 October 2014 at 06:55:16 UTC, Ola Fosheim Grøstad

Oh, I think that here in Italy we outperform your country with 
that, as for sure we are the most bureaucratised country on the 
hearth.


s/hearth/earth/

I must stop writing while doing breakfast *sigh*

---
/P


Re: Program logic bugs vs input/environmental errors

2014-10-05 Thread Paolo Invernizzi via Digitalmars-d
On Sunday, 5 October 2014 at 06:55:16 UTC, Ola Fosheim Grøstad 
wrote:

On Sunday, 5 October 2014 at 06:09:44 UTC, Walter Bright wrote:
Carpenters can be liable for building things they know are 
wrong, regardless of what the spec says.


You can be made liable if you don't notify the responsible 
entity you work for that the solution is unsuitable for the 
purpose.


How it works in my country is that to raise a building you need 
a qualified entity that is responsible for the quality of the 
construction which could be an architect, a building engineer 
or a carpenter with some extra certification. They are 
responsible for ensuring/approving the overall quality. They 
are required to ensure the quality of the spec/work and are 
therefore liable vs the customer.


Oh, I think that here in Italy we outperform your country with 
that, as for sure we are the most bureaucratised country on the 
hearth.
So it happens that we have people responsible for quality for 
pretty much everything, from building houses to making pizzas.
And guess what, here the buildings made by the ancient Romans are 
still up and running, while we have school buildings made in the 
'90s that come down at every earthquake...


Guru meditation... ;-P

---
/Paolo


Re: Program logic bugs vs input/environmental errors

2014-10-05 Thread via Digitalmars-d

On Sunday, 5 October 2014 at 09:06:45 UTC, Paolo Invernizzi wrote:
Oh, I think that here in Italy we outperform your country with 
that, as for sure we are the most bureaucratised country on the 
hearth.


Hah! In Norway parents sign an evaluation of progress for 
6-year-old school children every 2 weeks due to a quality reform. 
And all pupils are kept at the same level of progress so that 
nobody should feel left behind, due to social democracy 
principles... In Italy you have Montessori! Consider yourself 
lucky!


In Norway bureaucracy is a life style and the way of being. In 
northern Norway the public sector accounts for 38-42% of all 
employees.


In Norway the cost of living is so high that starting a business 
is a risky proposition unless the public sector is your customer 
base. :^)


Re: Program logic bugs vs input/environmental errors

2014-10-05 Thread Tobias Müller via Digitalmars-d
Walter Bright newshou...@digitalmars.com wrote:
 On 10/4/2014 3:30 AM, Steven Schveighoffer wrote:
 On 10/4/14 4:47 AM, Walter Bright wrote:
 On 9/29/2014 8:13 AM, Steven Schveighoffer wrote:
 I can think of cases where it's programmer error, and cases where it's
 user error.
 
 More carefully design the interfaces if programmer error and input error
 are conflated.
 
 
 You mean more carefully design File's ctor? How so?
 
 You can start with deciding if random  binary data passed as a file
 name is legal input to the ctor or not.

I think it helps to see contracts as an informal extension to the type
system.
Ideally, the function signature would not allow invalid input at all. In
practice, that's not always possible and contracts are a less formal way to
specify the function signature. But conceptually they are still part of the
signature.

And of course (as with normal contract-less functions) you are always
allowed to provide convenience functions with extended input validation.
Those should then be based on the strict version.

For example take a constructor for an XML document class.
It could take the (unvalidated) file path as string parameter. Or a file
(validated that it exists) object. Or a stream (validated that it exists
and is already opened).
You can provide all three for convenience, but I think it's still good
design to provide three _different_ functions.
Contracts are not a magical tool to provide you all three variants in one
function depending somehow on the needs of the caller.
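A small sketch of that separation (types and names mine): the strict constructor states its precondition as a contract — violating it is a programmer error — while a convenience wrapper validates untrusted input and throws an Exception instead.

```d
import std.exception : enforce;

struct Fraction
{
    int num, den;

    // Strict version: the contract is part of the signature; passing
    // den == 0 is a bug in the caller, not an input error.
    this(int num, int den)
    in { assert(den != 0, "denominator must be nonzero"); }
    do
    {
        this.num = num;
        this.den = den;
    }
}

// Convenience wrapper for unvalidated input: checks, then delegates
// to the strict constructor.
Fraction makeFraction(int num, int den)
{
    enforce(den != 0, "invalid denominator");
    return Fraction(num, den);
}

void main()
{
    auto f = makeFraction(1, 2);
    assert(f.num == 1 && f.den == 2);
}
```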

Tobi


Re: Program logic bugs vs input/environmental errors

2014-10-05 Thread Tobias Müller via Digitalmars-d
Paolo Invernizzi paolo.invernizzi@no.address wrote:
 And guess it, here the buildings made by ancient romans are still up and
 running, while we have schools building made in the '90 that come down
 at every earthquake...

All the bad buildings from the ancient Romans already came down during the
last 2000 years. The best 1% survived.

Tobi


Re: scope() statements and return

2014-10-05 Thread monarch_dodra via Digitalmars-d
On Saturday, 4 October 2014 at 18:42:05 UTC, Shammah Chancellor 
wrote:
Didn't miss anything.  I was responding to Andrei such that he 
might think it's not so straightforward to evaluate that code.
 I am with you on this.  It was my original complaint months 
ago that resulted in this being disallowed behavior.  
Specifically because you could stop error propagation by 
accident even though you did not intend to prevent it.  E.g.:


int main()
{
    scope(exit) return 0;
    assert(false, "whoops!");
}

-S


Isn't this the "should scope(exit/failure) catch Error" issue 
though?


In theory, you should seldom if ever catch Errors. I don't 
understand why scope(exit) is catching them.


Re: scope() statements and return

2014-10-05 Thread ketmar via Digitalmars-d
On Sun, 05 Oct 2014 11:28:59 +
monarch_dodra via Digitalmars-d digitalmars-d@puremagic.com wrote:

 In theory, you should seldom if ever catch Errors. I don't 
 understand why scope(exit) is catching them.
'cause scope(exit) keeps the promise to execute cleanup code before
exiting code block?


signature.asc
Description: PGP signature


Re: std.utf.decode @nogc please

2014-10-05 Thread Andrei Alexandrescu via Digitalmars-d

On 10/5/14, 1:58 AM, Marc Schütz schue...@gmx.net wrote:

There was indeed agreement on reference counting (although someone
suggested disallowing cycles or removing chaining altogether). But what
I meant is that there was no agreement on a specific solution, and
several ones were proposed, from full general compiler supported
refcounting to library implementation.


Understood, thanks. I think there's no other way around than language 
support. Here's a sketch of the solution:


* Introduce interface AutoRefCounted. For practical reasons it may 
inherit IUnknown, though that's not material to the concept.


* AutoRefCounted and any interface or class inheriting it have the 
compiler insert calls to methods that increment and decrement the 
reference count (e.g. AddRef/Release).


* All descendants of AutoRefCounted must be descendants of it through 
all paths (i.e. there can be no common descendants of AutoRefCounted and 
either IUnknown or Object).


* After all this infra is in place, unhook Throwable from its current 
place and have it inherit AutoRefCounted.


* All is good and there's much rejoicing.


Andrei
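As a rough library-level illustration of that sketch (names beyond AutoRefCounted are mine, the method names mirror AddRef/Release, and the calls the compiler would insert are written out by hand here):

```d
// The proposed interface; in the actual design the compiler would
// insert the addRef/release calls automatically.
interface AutoRefCounted
{
    size_t addRef();
    size_t release();
}

class RCException : AutoRefCounted
{
    private size_t refs = 1; // the creating reference
    string msg;

    this(string msg) { this.msg = msg; }

    size_t addRef() { return ++refs; }

    size_t release()
    {
        --refs;
        // a real implementation would free the object when refs hits 0
        return refs;
    }
}

void main()
{
    auto e = new RCException("oops");
    e.addRef();               // taking a second reference
    assert(e.release() == 1); // one reference left
    assert(e.release() == 0); // last one gone; object would be freed
}
```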



Re: scope() statements and return

2014-10-05 Thread monarch_dodra via Digitalmars-d
On Sunday, 5 October 2014 at 12:36:30 UTC, ketmar via 
Digitalmars-d wrote:

On Sun, 05 Oct 2014 11:28:59 +
monarch_dodra via Digitalmars-d digitalmars-d@puremagic.com 
wrote:


In theory, you should seldom if ever catch Errors. I don't 
understand why scope(exit) is catching them.
'cause scope(exit) keeps the promise to execute cleanup code 
before

exiting code block?


Promises hold provided the precondition that your program is in a 
valid state. Having an Error in flight invalidates that 
precondition, and hence voids the promise.


RAII also makes the promise, but you don't see Errors giving much 
of a fuck about that.
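A minimal demonstration of the behaviour under discussion (example mine; catching AssertError here is for the demo only, normally one should not): scope(exit) cleanup runs even while an Error, not just an Exception, is unwinding.

```d
import core.exception : AssertError;

bool cleanupRan; // thread-local module variable, false by default

void f()
{
    scope(exit) cleanupRan = true; // lowered to try/finally
    assert(false, "boom");         // throws AssertError (without -release)
}

void main()
{
    try
        f();
    catch (AssertError e) {} // demo only: swallowing an Error is unsafe
    assert(cleanupRan);      // the scope(exit) ran during Error unwinding
}
```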


Re: What are the worst parts of D?

2014-10-05 Thread Dicebot via Digitalmars-d
I am in the mood to complain today so this feels like a good 
moment to post a bit more extended reply here.


There are three big issues that harm D development most in my 
opinion:


1) lack of vision

TDPL was an absolutely awesome book because it explained "why?" as 
opposed to "how?". Such insight into the language authors' rationale 
is incredibly helpful for long-term contribution. Unfortunately, 
it didn't cover all parts of the language, and many new things have 
been added since it came out.


Right now I have no idea where the development is headed and what 
to expect from the next few releases. I am not speaking about 
wiki.dlang.org/Agenda but about the bigger picture. The unexpected 
focus on C++ support, the thread about killing auto-decoding, the 
recent ref counting proposal - all this stuff comes from the 
language authors but does not feel like strategic additions. It 
feels like yet another random contribution, no different from the 
contributions/ideas of any other D user.


Anarchy-driven development is a pretty cool thing in general, but 
only if there is a base, a long-term vision, that all other 
contributions are built upon. And I think it is the primary 
responsibility of the language authors to define it as clearly as 
possible. It is a very difficult task, but it simply can't be 
delegated.


2) reliable release base

I think this is the most important part of the open-source 
infrastructure needed to attract more contributions, and something 
that also belongs to the core team. I understand why Walter was so 
eager to delegate it, but right now the truth is that once Andrew 
had to temporarily leave, the whole release process immediately 
stalled. And finding a replacement is not easy - this task is 
inherently thankless, as it implies spending time and resources on 
stuff you personally don't need at all.


The same applies to versioning - it got considerably better with 
the introduction of minor versions, but it is still far from 
reasonable SemVer, and the cherry-picking approach still feels 
like madness.


And the current situation, where the disappearance of one person 
has completely blocked all releases, simply tells everyone D is 
still terribly immature.


3) lack of field testing

Too many new features get added simply because they look 
theoretically sound. I think it is quite telling that the most 
robust parts of D are the ones that were designed based on the 
mistakes of other languages (primarily C++), while most 
innovations tend to fall into a collection of hacks stockpiled 
together (something that same C++ is infamous for).


I am disturbed when Andrei comes with a proposal that possibly 
affects the whole damn Phobos (memory management flags) and asks 
us to trust his experience and authority on the topic while 
rejecting patterns that are confirmed to be working well in real 
production projects. Don't get me wrong, I don't doubt Andrei's 
authority on the memory management topic (it is miles ahead of 
mine at the very least), but I simply don't believe any living 
person in this world can design such a big change from scratch 
without extended feedback from real deployed projects.


This is closely related to the SemVer topic. I'd love to see D3. 
And D4 soon after. And probably a new major version increase every 
year or two. This allows tinkering with really big changes without 
being concerned about how they will affect your code in the next 
release.


Don has been mentioning that Sociomantic is all for breaking code 
for the greater good, and I fully agree with him. But introducing 
such surprise solutions creates a huge risk of either sticking 
with an imperfect design and patching it (what we have now) or 
changing the same stuff back and forth every release (and _that_ 
is bad).


Re: Program logic bugs vs input/environmental errors (checked exceptions)

2014-10-05 Thread Marco Leise via Digitalmars-d
On Fri, 03 Oct 2014 21:35:01 +0200,
Jacob Carlborg d...@me.com wrote:

 On 2014-10-03 14:36, David Nadlinger wrote:
 
  you are saying that specific exceptions were replaced by enforce? I
  can't recall something like this happening.
 
 I have no idea about this but I know there are a lot of enforce in 
 Phobos and it seems to be encouraged to use it. Would be really sad if 
 specific exceptions were deliberately replaced with less specific 
 exceptions.

Nice, finally someone who actually wants to discern Exception
types. I'm always at a loss as to what warrants its own
exception type. E.g. when looking at network protocols, would
a 503 be a NetworkException, an HTTPException or an
HTTPInternalServerErrorException?
Where do *you* wish libraries would differentiate?
Or does it really come down to categories like "illegal
argument", "division by zero", "null pointer", "out of memory"
for you?

-- 
Marco



Re: scope() statements and return

2014-10-05 Thread ketmar via Digitalmars-d
On Sun, 05 Oct 2014 14:53:37 +
monarch_dodra via Digitalmars-d digitalmars-d@puremagic.com wrote:

 Promises hold provided the precondition your program is in a 
 valid state. Having an Error invalidates that precondition, hence 
 voids that promise.
so Error should not be catchable and should crash immediately, without
any unwinding. as long as Errors are just another kind of exception,
the promise must be kept.




Re: On exceptions, errors, and contract violations

2014-10-05 Thread Dicebot via Digitalmars-d
I agree with most parts, and this pretty much fits my unhappy 
experience of trying to use D's assert/contract system. However, I 
don't feel like contracts and plain assertions should throw 
different kinds of exceptions - it allows distinguishing some 
cases but does not solve the problem in general. And those are 
essentially the same tools, so having the same exception types 
makes sense.


Different compilation versions sound more suitable, but that 
creates the usual distribution problems with exponential version 
explosion.




Re: Program logic bugs vs input/environmental errors

2014-10-05 Thread Dicebot via Digitalmars-d

On Saturday, 4 October 2014 at 19:36:02 UTC, Walter Bright wrote:

On 10/4/2014 9:16 AM, Sean Kelly wrote:
On Saturday, 4 October 2014 at 09:18:41 UTC, Walter Bright 
wrote:


Threads are not isolated from each other. They are not. Not. 
Not.


Neither are programs that communicate in some fashion.


Operating systems typically provide methods of interprocess 
communication that are robust against corruption, such as 
pipes, message passing, etc. The receiving process should 
regard such input as user/environmental input, and must 
validate it. Corruption in it would not be regarded as a logic 
bug in the receiving process (unless it failed to check for it).


Interprocess shared memory, though, is not robust.




I'll grant that the
possibility of memory corruption doesn't exist in this case (a 
problem unique to
systems languages like D), but system corruption still does.  
And I absolutely
agree with you that if memory corruption is ever even 
suspected, the process
must immediately halt.  In that case I wouldn't even throw an 
Error, I'd call

exit(1).


System corruption is indeed a problem with this type of setup. 
We're relying here on the operating system not having such bugs 
in it, and indeed OS vendors work very hard at preventing an 
errant program from corrupting the system.


We all know, of course, that this sort of thing happens anyway. 
An even more robust system design will need a way to deal with 
that, and failure of the hardware, and failure of the data 
center, etc.


All components of a reliable system are unreliable, and a 
robust system needs to be able to recover from the inevitable 
failure of any component. This kind of thinking needs to 
pervade the initial system design from the ground up, it's hard 
to tack it on later.


This is not different from a fiber- or thread-based approach. If 
one uses only immutable data for inter-thread communication (the 
thread-level analogue of not using inter-process shared memory), 
the same guarantees and reasoning apply. And such a design allows 
many data optimizations that are hard or impossible to do with a 
process-based approach.
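
The pattern described here can be sketched with std.concurrency, 
which only lets immutable (or otherwise safely transferable) data 
cross thread boundaries - a minimal, hypothetical example:

```d
import std.concurrency;
import std.stdio : writeln;

// Worker receives an immutable payload; the type system rules out
// shared mutable state across the thread boundary.
void worker()
{
    receive((immutable(int)[] data) {
        writeln("received ", data.length, " elements");
        ownerTid.send(true); // signal completion back to the spawner
    });
}

void main()
{
    auto tid = spawn(&worker);
    immutable(int)[] payload = [1, 2, 3];
    tid.send(payload);  // ok: immutable data may be sent
    receiveOnly!bool(); // wait for the worker's acknowledgement
}
```

Mutable, unshared data would be rejected by send() at compile 
time, which is exactly the isolation guarantee being argued about.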


There is no magic solution that prevents a programmer from 
screwing up in 100% of cases. Killing the process is a pragmatic 
default but not a pragmatic silver bullet, and from a purely 
theoretical point of view it has no advantages over killing the 
thread/fiber - it is all about the chances of failure, not 
preventing it.


Same in Erlang - some failures warrant killing the runtime, some 
only a specific process. It is all about the context, and the 
programmer should decide which approach is best for any specific 
program. I am fine with the non-default being hard, but I want it 
to still be possible within legal language restrictions.


Re: What are the worst parts of D?

2014-10-05 Thread Andrei Alexandrescu via Digitalmars-d

On 10/5/14, 7:55 AM, Dicebot wrote:

1) lack of vision


The vision is to expand user base and make a compelling case for using D 
alongside existing code bases. There are two important aspects to that: 
interoperability with C++, and using D without a garbage collector.



Right now I have no idea where the development is headed and what to
expect from next few releases. I am not speaking about
wiki.dlang.org/Agenda but about bigger picture. Unexpected focus on C++
support, thread about killing auto-decoding, recent ref counting
proposal - all this stuff comes from language authors but does not feel
like strategic additions.


1. C++ support is good for attracting companies featuring large C++ 
codebases to get into D for new code without disruptions.


2. Auto-decoding is blown out of proportion and a distraction at this time.

3. Ref counting is necessary again for encouraging adoption. We've 
framed GC as a user education matter for years. We might have even been 
right for the most part, but it doesn't matter. The fact is that a large 
potential user base will simply not consider a GC language.



It feels like yet another random
contribution, no different from the contribution/idea of any other D user.

Anarchy-driven development is a pretty cool thing in general, but only if
there is a base, a long-term vision all other contributions are built
upon. And I think it is the primary responsibility of the language authors
to define it as clearly as possible. It is a very difficult task but it
simply can't be delegated.


I'm all about vision. I do agree we've been less so in the past.


2) reliable release base

I think this is the most important part of the open-source infrastructure
needed to attract more contributions and something that also belongs to
the core team. I understand why Walter was so eager to delegate it, but
right now the truth is that once Andrew had to temporarily leave, the
whole release process immediately stalled. And finding a replacement is
not easy - this task is inherently ungrateful, as it implies spending
time and resources on stuff you personally don't need at all.


We now have Martin Nowak as the point of contact.


3) lack of field testing

Too many new features get added simply because they look theoretically
sound.


What would those be?


I think it is quite telling that the most robust parts of D are the
ones that were designed based on the mistakes of other languages
(primarily C++), while most innovations tend to end up as a collection
of hacks stockpiled together (something the same C++ is infamous for).

I am disturbed when Andrei comes with a proposal that possibly affects
the whole damn Phobos (memory management flags) and asks us to trust his
experience and authority on the topic while rejecting patterns that are
confirmed to be working well in real production projects.


Policy-based design is more than one decade old, and older under other 
guises. Reference counting is many decades old. Both have been humongous 
success stories for C++.


No need to trust me or anyone, but at some point decisions will be made. 
Most decisions don't make everybody happy. To influence them it suffices 
to argue your case properly. I hope you don't have the feeling appeal to 
authority is used to counter real arguments. I _do_ trust my authority 
over someone else's, especially when I'm on hook for the decision made. 
I won't ever say "this is a disaster, but we did it because a guy on the 
forum said it'll work".



Don't get me
wrong, I don't doubt Andrei's authority on the memory management topic
(it is miles ahead of mine at the very least), but I simply don't believe
any living person in this world can design such a big change from scratch
without extended feedback from real deployed projects.


Feedback is great, thanks. But we can't test everything before actually 
doing anything. I know how PBD works and I know how RC works, both from 
having hacked with them for years. I know where this will go, and it's 
somewhere good.



This is closely related to the SemVer topic. I'd love to see D3. And D4
soon after. And probably a new major version increase every year or two.
This allows tinkering with really big changes without being concerned
about how they will affect your code in the next release.


Sorry, I'm not on board with this. I believe it does nothing but 
balkanize the community, and there's plenty of evidence from other 
languages (Perl, Python). Microsoft could afford to do it with C# only 
because they have lock-in with their user base, a monopoly on tooling, 
and a simple transition story (give us more money).



Don has been mentioning that Sociomantic is all for breaking code
for the greater good, and I fully agree with him. But introducing such
surprise solutions creates a huge risk of either sticking with an
imperfect design and patching it (what we have now) or changing the same
stuff back and forth every release (and _that_ is bad).


I don't see what is surprising about my vision. It's simple and clear. 
C++ and GC. C++ 

Re: Program logic bugs vs input/environmental errors

2014-10-05 Thread Marco Leise via Digitalmars-d
On Sat, 04 Oct 2014 13:12:43 -0700,
Walter Bright newshou...@digitalmars.com wrote:

 On 10/4/2014 3:30 AM, Steven Schveighoffer wrote:
  On 10/4/14 4:47 AM, Walter Bright wrote:
  On 9/29/2014 8:13 AM, Steven Schveighoffer wrote:
  I can think of cases where it's programmer error, and cases where it's
  user error.
 
  More carefully design the interfaces if programmer error and input error
  are conflated.
 
 
  You mean more carefully design File's ctor? How so?
 
 You can start with deciding if random  binary data passed as a file name is 
 legal input to the ctor or not.

In POSIX speak [1] a file name consisting only of A-Za-z0-9._-
is a "character string" (a portable file name), whereas anything
not representable in all locales is just a "string".
Locales' charsets are required to be able to represent
A-Za-z0-9._- but may use a different mapping than ASCII for
that. Only the slash '/' must have a fixed value of 0x2F.

From that I conclude that File() should open files by ubyte[]
exclusively to be POSIX compliant.
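
As a sketch of what that could look like (the byte values form a 
hypothetical Latin-1 name, and this goes through the POSIX C API 
directly rather than any existing Phobos interface):

```d
import core.sys.posix.fcntl : open, O_RDONLY;
import core.sys.posix.unistd : close;

// Open a file whose name is raw bytes - not necessarily valid UTF-8.
int openRaw(const(ubyte)[] name)
{
    auto buf = name ~ cast(ubyte) 0;  // NUL-terminate for the C API
    return open(cast(const(char)*) buf.ptr, O_RDONLY);
}

void main()
{
    // "caf\xE9" in Latin-1; the lone 0xE9 byte is invalid UTF-8,
    // so this name cannot be written as a plain D string literal
    immutable(ubyte)[] name = [0x63, 0x61, 0x66, 0xE9];
    int fd = openRaw(name);
    if (fd >= 0)
        close(fd);
}
```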

This is the stuff that's frustrating me much about POSIX. It
practically makes it impossible to write correct code. Even Qt
and Gtk+ settled for the system locale and UTF-8 respectively
as the assumed I/O charset for all file names, although each
file system could be mounted in a different charset. E.g.
CD-ROMs in ISO charset.
Windows does much better by offering Unicode versions on top
of the ANSI functions.
The only fix I see for POSIX is to deprecate all other locales
except UTF-8 at some point.

[1]
http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap03.html#tag_03_267

-- 
Marco



Re: Program logic bugs vs input/environmental errors (checked exceptions)

2014-10-05 Thread Andrei Alexandrescu via Digitalmars-d

On 10/5/14, 8:08 AM, Marco Leise wrote:

Nice, finally someone who actually wants to discern Exception
types. I'm always at a loss as to what warrants its own
exception type. E.g. when looking at network protocols, would
a 503 be a NetworkException, a HTTPException or a
HTTPInternalServerErrorException ?
Where do *you* wish libraries would differentiate?
Or does it really come down to categories like illegal
argument, division by zero, null pointer, out of memory
for you?


That reflects my misgivings about using exception hierarchies for error 
kinds as well. -- Andrei




Re: Program logic bugs vs input/environmental errors (checked exceptions)

2014-10-05 Thread Jacob Carlborg via Digitalmars-d

On 2014-10-05 17:08, Marco Leise wrote:


Nice, finally someone who actually wants to discern Exception
types. I'm always at a loss as to what warrants its own
exception type.


Yeah, that can be quite difficult. In the end, if you have a good 
exception hierarchy I don't think it will hurt to have many different 
exception types.



E.g. when looking at network protocols, would
a 503 be a NetworkException, a HTTPException or a
HTTPInternalServerErrorException ?


For this specific case I would probably have one general exception for 
all HTTP error codes.



Where do *you* wish libraries would differentiate?


I would like to have as specific an exception type as possible, and also 
a nice hierarchy of exceptions for when catching a specific exception is 
not interesting. Instead of just a FileException there could be 
FileNotFoundException, PermissionDeniedException and so on.



Or does it really come down to categories like illegal
argument, division by zero, null pointer, out of memory
for you?



--
/Jacob Carlborg


Re: Program logic bugs vs input/environmental errors

2014-10-05 Thread ketmar via Digitalmars-d
On Sun, 5 Oct 2014 17:44:31 +0200
Marco Leise via Digitalmars-d digitalmars-d@puremagic.com wrote:

 From that I conclude, that File() should open files by ubyte[]
 exclusively to be POSIX compliant.
and yet there is currently no way in hell to open a file with a hardcoded
non-utf8 name without ugly hacks with File(...). and ubyte will not help
here, as D has no way to represent a non-utf8 string literal without
unmaintainable shit like \xNN.

and speaking about utf-8: for strings we at least have a hack, and for
shebangs... nothing. nada. ничего. locale settings? other encodings? who
needs that, there CAN'T be non-utf8 shebangs. OS interoperability? it's
overhyped, there are only two kinds of OSes: those which are D-compatible
and bad. changing your locale to non-utf8 magically turns your OS
bad, D will not try to interoperate with it anymore.




Re: What are the worst parts of D?

2014-10-05 Thread Dicebot via Digitalmars-d
On Sunday, 5 October 2014 at 15:38:58 UTC, Andrei Alexandrescu 
wrote:

On 10/5/14, 7:55 AM, Dicebot wrote:

1) lack of vision


The vision is to expand user base and make a compelling case 
for using D alongside existing code bases. There are two 
important aspects to that: interoperability with C++, and using 
D without a garbage collector.


Right now I have no idea where the development is headed and 
what to

expect from next few releases. I am not speaking about
wiki.dlang.org/Agenda but about bigger picture. Unexpected 
focus on C++
support, thread about killing auto-decoding, recent ref 
counting
proposal - all this stuff comes from language authors but does 
not feel

like strategic additions.


1. C++ support is good for attracting companies featuring large 
C++ codebases to get into D for new code without disruptions.


2. Auto-decoding is blown out of proportion and a distraction 
at this time.


3. Ref counting is necessary again for encouraging adoption. 
We've framed GC as an user education matter for years. We might 
have even been right for the most part, but it doesn't matter. 
Fact is that a large potential user base will simply not 
consider a GC language.


No need to explain it here. When I speak about vision I mean 
something that anyone coming to the dlang.org page or GitHub repo 
sees. Something that is explained in a bit more detail, possibly 
with code examples. I know I am asking much, but seeing a quick 
reference for "imagine this stuff is implemented, this is how 
your program code will be affected and this is why it is a good 
thing" could have been a huge deal.


Right now your rationales get lost in forum discussion threads 
and it is hard to understand what really is the Next Big Thing and 
what is just a forum argument blown out of proportion. There was a 
go at properties, at eliminating destructors, at rvalue 
references and whatever else I have forgotten by now. It all 
pretty much ended with a "do nothing" outcome for one reason or 
the other.


The fact that you don't seem to have a consensus with Walter on 
some topics (auto-decoding, yeah) doesn't help either. Language 
marketing is not about posting links on reddit, it is the very 
hard work of communicating your vision so that it is clear even to 
a random passer-by.



2) reliable release base

I think this is the most important part of the open-source 
infrastructure needed to attract more contributions and something 
that also belongs to the core team. I understand why Walter was so 
eager to delegate it, but right now the truth is that once Andrew 
had to temporarily leave, the whole release process immediately 
stalled. And finding a replacement is not easy - this task is 
inherently ungrateful, as it implies spending time and resources 
on stuff you personally don't need at all.


We now have Martin Nowak as the point of contact.


And what if he gets busy too? :)


3) lack of field testing

Too many new features get added simply because they look 
theoretically

sound.


What would those be?


Consider something like `inout`. It looked like a very cool 
feature to address an issue specific to D, and it seemed perfectly 
reasonable when it was introduced. And right now there are some 
fishy hacks around it even in Phobos (like forced inout delegates 
in traits) that came from originally unexpected use cases. It is 
quite likely that re-designing it from scratch based on existing 
field experience would have yielded better results.


Policy-based design is more than one decade old, and older 
under other guises. Reference counting is many decades old. 
Both have been humongous success stories for C++.


Probably I have missed the point where a new proposal was added, 
but the original one was not using true policy-based design but a 
set of enum flags instead (no way to use a user-defined policy). 
The reference counting experience I am aware of shows that it is 
both successful in some cases and inapplicable in others. But I 
don't know of any field experience showing that choosing between 
RC and GC as a policy is a good/sufficient tool to minimize 
garbage creation in libraries - the real issue we need to solve, 
which the original proposal does not mention at all.


No need to trust me or anyone, but at some point decisions will 
be made. Most decisions don't make everybody happy. To 
influence them it suffices to argue your case properly. I hope 
you don't have the feeling appeal to authority is used to 
counter real arguments. I _do_ trust my authority over someone 
else's, especially when I'm on hook for the decision made. I 
won't ever say "this is a disaster, but we did it because a guy 
on the forum said it'll work".


I don't want to waste your time arguing about irrelevant things 
simply because I have misinterpreted how the proposed solution 
fits the big picture. It is still unclear why the proposed scheme 
is incompatible
with tweaking Phobos utilities into input/output ranges. I am 
stupid and I am asking for detailed explanations before any 
arguments can be 

Re: Program logic bugs vs input/environmental errors (checked exceptions)

2014-10-05 Thread Andrei Alexandrescu via Digitalmars-d

On 10/5/14, 8:56 AM, Jacob Carlborg wrote:

I would like to have as specific exception type as possible. Also a nice
hierarchy of exception when catching a specific exception is not
interesting. Instead of just a FileException there could be
FileNotFoundException, PermissionDeniedExcepton and so on.


Exceptions are all about centralized error handling. How, and how often, 
would you handle FileNotFoundException differently than 
PermissionDeniedException?


Andrei


Re: Program logic bugs vs input/environmental errors (checked exceptions)

2014-10-05 Thread Dicebot via Digitalmars-d
On Sunday, 5 October 2014 at 16:18:33 UTC, Andrei Alexandrescu 
wrote:

On 10/5/14, 8:56 AM, Jacob Carlborg wrote:
I would like to have as specific exception type as possible. 
Also a nice
hierarchy of exception when catching a specific exception is 
not

interesting. Instead of just a FileException there could be
FileNotFoundException, PermissionDeniedExcepton and so on.


Exceptions are all about centralized error handling. How, and 
how often, would you handle FileNotFoundException differently 
than PermissionDeniedException?


Andrei


While a precise formalization of the principle is hard, I think 
this comment nails it in general. When defining exception 
hierarchies it makes sense to think about it in terms of "what 
code is likely to catch it and why?" and not "what should this 
code throw?". A dedicated exception type only makes sense if it is 
common to catch it separately; any additional details can be 
stored as a runtime field (like the status code for HTTPStatusException)
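
A minimal D sketch of that guideline - one type per plausible 
catch site, with the detail carried as a field (the names here are 
illustrative; vibe.d's actual HTTPStatusException may differ):

```d
class HTTPStatusException : Exception
{
    immutable int status; // detail as a runtime field, not a subclass per code

    this(int status, string msg,
         string file = __FILE__, size_t line = __LINE__)
    {
        super(msg, file, line);
        this.status = status;
    }
}

unittest
{
    try
        throw new HTTPStatusException(503, "service unavailable");
    catch (HTTPStatusException e)
        assert(e.status == 503); // branch on the field at the catch site
}
```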


Re: scope() statements and return

2014-10-05 Thread monarch_dodra via Digitalmars-d
On Sunday, 5 October 2014 at 15:03:08 UTC, ketmar via 
Digitalmars-d wrote:

On Sun, 05 Oct 2014 14:53:37 +
monarch_dodra via Digitalmars-d digitalmars-d@puremagic.com 
wrote:


Promises hold provided the precondition your program is in a 
valid state. Having an Error invalidates that precondition, 
hence voids that promise.
so Error should not be catchable and should crash immidiately, 
without

any unwinding.


Don't put words in my mouth. Also, Errors do only partial stack 
unwinding, so yes, once an Error has been thrown, your program 
should terminate.



as long as Errors are just another kind of exception,
the promise must be kept.


Errors aren't Exceptions. They make no promises.


Re: scope() statements and return

2014-10-05 Thread via Digitalmars-d
On Sunday, 5 October 2014 at 15:03:08 UTC, ketmar via 
Digitalmars-d wrote:
so Error should not be catchable and should crash immidiately, 
without
any unwinding. as long as Errors are just another kind of 
exception,

the promise must be kept.


I find it strange if you cannot recover from an out-of-memory 
error. The common trick is to preallocate a chunk of memory, then 
free it when you throw the out-of-memory exception so that you can 
unwind. When destroying the out-of-memory object you reallocate 
the chunk.
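
A sketch of that ballast trick in D, assuming the handler gets 
wired up wherever the out-of-memory condition is detected (the 
names and the 1 MiB figure are illustrative):

```d
import core.memory : GC;

private void[] ballast; // rainy-day fund, reserved up front

shared static this()
{
    ballast = new void[](1024 * 1024); // 1 MiB of headroom
}

// Called when allocation fails: drop the ballast so unwinding
// (and the exception object itself) has memory to work with.
void releaseBallast()
{
    ballast = null;
    GC.collect(); // reclaim the block immediately
}
```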


I also find the D terminology confusing, one should avoid 
redefining terms.


Does D have exception chaining? The language spec seems to imply 
that finally swallows thrown exceptions if another exception A is 
in flight, and stuffs them in a bag inside the A exception. This 
is kind of dangerous, since you hide potentially serious 
exceptions this way, and it is not what I think of as exception 
chaining. To me exception chaining means preserving the exception 
chain on re-throws (like preserving the call stack).


So yep, ketmar, you are right. You should probably not be able to 
throw in finally without catching it, and you should be able to 
do a catch-all without wrapping it up in a function. The 
alternatives, such as the mechanics described in the language 
spec, will lead to unreliable exception handling and poor 
recovery strategies IMO.


Re: scope() statements and return

2014-10-05 Thread ketmar via Digitalmars-d
On Sun, 05 Oct 2014 16:33:48 +
monarch_dodra via Digitalmars-d digitalmars-d@puremagic.com wrote:

 Errors aren't Exceptions. They make no promises.
if it looks like a duck, if it quacks like a duck, if it can be caught
like a duck... it's a duck. that's not me who titled the exception base
class Throwable. it can be caught? it uses 'throw'? it does
unwinding? it is an exception.

and what does 'partial stack unwinding' mean? unwinding at random, or
only even stack frames?




Re: scope() statements and return

2014-10-05 Thread Dicebot via Digitalmars-d
On Sunday, 5 October 2014 at 16:30:47 UTC, Ola Fosheim Grøstad 
wrote:

Does D have exception chaining?


Yes. http://dlang.org/phobos/object.html#.Throwable.next
Though it seems to do more harm than good so far.
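
For reference, a small sketch of how a chain gets formed: an 
exception thrown from a finally block while another is already in 
flight is appended to the first one via .next.

```d
import std.stdio : writeln;

void main()
{
    try
    {
        try
            throw new Exception("first");
        finally
            throw new Exception("second, collateral");
    }
    catch (Exception e)
    {
        // The first exception propagates; the collateral one is chained.
        writeln(e.msg);
        for (Throwable t = e.next; t !is null; t = t.next)
            writeln("chained: ", t.msg);
    }
}
```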


Re: std.experimental.logger formal review round 3

2014-10-05 Thread Robert burner Schadek via Digitalmars-d

On Thursday, 2 October 2014 at 20:15:32 UTC, Sönke Ludwig wrote:
I still think that there should be the two predefined log 
levels debug (for developer related diagnostics) and 
diagnostic (for end user related diagnostics) between trace 
and info. This is important for interoperability of different 
libraries, so that they have predictable debug output.


But independent of that, there should really be a function for 
safely generating the user defined intermediate log levels, 
maybe like this:


LogLevel raiseLevel(LogLevel base_level, ubyte increment)
{
assert(base_level is a valid base level);
assert(base_level + increment smaller than the next 
base level);

return cast(LogLevel)(base_level + increment);
}

// ok
enum notice = raiseLevel(LogLevel.info, 16);

// error, overlaps with the next base level
enum suspicious = raiseLevel(LogLevel.info, 32);

Casting to an enum type is a pretty blunt operation, so it 
should at least be made as safe as possible.


from log4d
/// Aliases for debugX and fineX functions
alias debug1  = defaultLogFunction!(Log4DLogger.LOG_LEVEL_DEBUG1);
alias debug2  = defaultLogFunction!(Log4DLogger.LOG_LEVEL_DEBUG2);
alias debug3  = defaultLogFunction!(Log4DLogger.LOG_LEVEL_DEBUG3);

but adding a raiseLevel function should be no problem
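
A possible concrete raiseLevel, assuming the predefined 
std.experimental.logger levels stay spaced 32 apart (that spacing 
is an assumption of this sketch, not a documented guarantee):

```d
import std.experimental.logger : LogLevel;

LogLevel raiseLevel(LogLevel base, ubyte increment)
{
    assert(base >= LogLevel.trace && base <= LogLevel.fatal
        && base % 32 == 0, "base must be one of the predefined levels");
    assert(increment < 32, "increment would overlap the next predefined level");
    return cast(LogLevel)(base + increment);
}

// usable at compile time via CTFE, so the asserts fire during compilation
enum notice = raiseLevel(LogLevel.info, 16);
```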


Re: std.experimental.logger formal review round 3

2014-10-05 Thread Robert burner Schadek via Digitalmars-d

On Thursday, 2 October 2014 at 18:11:26 UTC, Marco Leise wrote:

How would I typically log an exception?
I thought of one line per exception in the chain plus a full
trace for the last exception in the chain.

So the messages would be like:
  Initialization failed.
  Could not load configuration.
  File ~/.config/app/settings.ini not found.
  Stack trace (outer to inner):
_Dmain_yadda_yadda+120
Config.__ctor(…)+312
…
File.__ctor(…)+12

So far so good, but I'm stuck at this line of code (where `e`
is a Throwable):

  error(e.msg, e.line, e.file, funcName, prettyFuncName, 
moduleName);


I don't know how to get at the function and module name where
the exception was thrown. I know this stuff is part of the
symbolic debug information, but I would think it is a common
use case of a logger to log exceptions.


I guess that should be part of the Exception. I have no idea how 
to get the __FUNCTION__ of an Exception from inside another function.
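
One workable angle (a hedged sketch, not an existing API): 
Throwable already records the file and line of the throw point, so 
a helper can log those without needing __FUNCTION__ at the throw 
site.

```d
import std.experimental.logger : errorf;

// Walk the chain; each Throwable carries its own throw location.
void logThrowable(Throwable t)
{
    for (Throwable cur = t; cur !is null; cur = cur.next)
        errorf("%s at %s:%s: %s", typeid(cur).name, cur.file, cur.line, cur.msg);
}
```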




Is it? And if so, what do we want to do about it?




Re: scope() statements and return

2014-10-05 Thread Dicebot via Digitalmars-d
On Sunday, 5 October 2014 at 16:42:24 UTC, ketmar via 
Digitalmars-d wrote:

it does
unwinding?


It is not guaranteed by the spec (I guess this was added to allow 
assert(0) to be converted into a HLT instruction), though in most 
cases it does. Neither is it guaranteed to run destructors of 
RAII entities (and it already doesn't in some cases).


Pretty much the only reason `Error` is not equivalent to a plain 
`abort` call is to allow some last-resort debugging dump and 
provide more meaningful information about the failure. Any 
application that tries to recover from an Error in any way falls 
into non-standard D territory.
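
The practical difference can be illustrated like this (the spec's 
unwinding guarantees apply to Exception only; nothing equivalent 
is promised once an Error is in flight):

```d
import std.stdio : writeln;

void main()
{
    try
    {
        scope(exit) writeln("cleanup runs"); // guaranteed for Exception unwinding
        throw new Exception("recoverable condition");
    }
    catch (Exception e)
        writeln("handled: ", e.msg);
    // For Error the spec permits skipping unwinding entirely, so
    // scope(exit)/destructors may never run - recovery is nonstandard.
}
```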


Re: scope() statements and return

2014-10-05 Thread ketmar via Digitalmars-d
On Sun, 05 Oct 2014 16:47:26 +
Dicebot via Digitalmars-d digitalmars-d@puremagic.com wrote:

 Pretty much only reason `Error` is not equivalent to plain 
 `abort` call is to allow some last resort debbugging dump and 
 provide more meaningful information about the failure.
then it was done very wrong. it's ok to have some arcane chain of
error handlers in the runtime, 'cause writing such handlers is not the
kind of task people do often. but allowing Error to be thrown like any
other exception, or caught the same way other exceptions can be
caught, is misleading. any sane person will think about Error as just
another kind of Exception, just with different naming.

there must be a separate case for Errors, something like "final throw",
which will not do ANY unwinding, cannot be caught, and just calls the
error handler chain and aborts. it's not enough to just paint a duck.




Re: scope() statements and return

2014-10-05 Thread Andrei Alexandrescu via Digitalmars-d

On 10/5/14, 9:42 AM, Dicebot wrote:

On Sunday, 5 October 2014 at 16:30:47 UTC, Ola Fosheim Grøstad wrote:

Does D have exception chaining?


Yes. http://dlang.org/phobos/object.html#.Throwable.next
Though it seems to do more harm then good so far.


What harm does it do? -- Andrei


Re: scope() statements and return

2014-10-05 Thread Dicebot via Digitalmars-d
On Sunday, 5 October 2014 at 17:03:07 UTC, Andrei Alexandrescu 
wrote:

On 10/5/14, 9:42 AM, Dicebot wrote:
On Sunday, 5 October 2014 at 16:30:47 UTC, Ola Fosheim Grøstad 
wrote:

Does D have exception chaining?


Yes. http://dlang.org/phobos/object.html#.Throwable.next
Though it seems to do more harm then good so far.


What harm does it do? -- Andrei


A good chunk of the issues with pre-allocated exceptions (and 
possible cycles in reference-counted ones) comes from the chaining 
possibility. At the same time I have yet to see it actively used 
as a feature.


That doesn't mean it is a bad thing, it is just not used widely 
enough to compensate for the trouble right now.


Re: std.experimental.logger formal review round 3

2014-10-05 Thread Sean Kelly via Digitalmars-d

On Thursday, 2 October 2014 at 10:37:02 UTC, Kevin Lamonte wrote:


Would PR 
https://github.com/D-Programming-Language/phobos/pull/1910 
provide a way given a Tid to determine: a) What underlying 
concurrency model it is using (Thread, Fiber, process, future)? 
b) Uniquely identify that structure (Thread ID string, Fiber 
address string, process ID, something else)?  c) Be capable of 
using that identifying immutable (because it needs to be 
send()able to another Tid writing to network/file/etc) 
string-representable thing to find the original Tid again?  A+B 
is necessary for using std.logger to debug concurrent 
applications, C is a very nice-to-have that comes up 
periodically.


register() is meant to provide a means of referring to a thread.  
But the relevant thing there is finding a thread by role, not by 
instance.  So if a thread doing a known job terminates, a new one 
can spawn and register under the same name so proper operation 
can continue.  Having an identifier for logging is a bit 
different.  Would using the MessageBox address be sufficient?  
I'd be happy to add a Tid.id property that returns a value like 
this.  I'd rather not try to generate a globally unique 
identifier though (that would probably mean a UUID, which is long 
and expensive to generate).
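The register-by-role idea above — a replacement worker re-registers under the same name, so clients keep finding "the logger" rather than a dead Tid — can be sketched minimally (Python for illustration; D's std.concurrency provides register/locate natively):

```python
import threading

_registry = {}
_registry_lock = threading.Lock()

def register(name, tid):
    # A restarted worker re-registers under the same role name,
    # so lookups by role keep working across restarts.
    with _registry_lock:
        _registry[name] = tid

def locate(name):
    # Find the current holder of a role, not a specific instance.
    with _registry_lock:
        return _registry.get(name)
```

The key design point is that the mapping is role → current worker, so the identity of any particular instance never leaks into client code.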


Re: scope() statements and return

2014-10-05 Thread via Digitalmars-d

On Sunday, 5 October 2014 at 16:42:38 UTC, Dicebot wrote:

Yes. http://dlang.org/phobos/object.html#.Throwable.next
Though it seems to do more harm than good so far.


Hm, so the next field is used for two different purposes? Both 
for capturing the original exception on a rethrow and for 
capturing concurrent exceptions originating in a finally block? 
That is messy.


I personally find regular exceptions overcomplicated for a system 
level language. I think I'd rather allocate resources through 
manager objects and release on runtime-registered landing pads, 
allowing the omission of frame-pointers.


Re: Who pays for all this?

2014-10-05 Thread Etienne via Digitalmars-d

On 2014-10-04 11:33 PM, Walter Bright wrote:

We're not really limited by lack of funds, but more by lack of focussed
effort. If anyone wants to contribute funds, probably the best use would
be to add bug bounties for bugzilla issues that they find to be
neglected. The bounties don't really compensate at professional rates,
but they do work as a nice thanks to those who donate their valuable
time.



Programmers cost money; it would be nice to have a D Foundation where 
companies can donate, and maybe eventually use the funds to pay for 
professional staffing rather than relying only on contributors. The D 
Foundation could eventually grow towards having engineers on the phone to 
reassure customers about development bottlenecks in low-level software. 
Examples would be the Mozilla Foundation or the Wikimedia Foundation, but with an 
Oracle or IBM type of service for support. It's an easily missed 
requirement in corporate decisions about relying on software.


Re: Program logic bugs vs input/environmental errors

2014-10-05 Thread Marco Leise via Digitalmars-d
Am Sat, 04 Oct 2014 01:43:40 -0700
schrieb Walter Bright newshou...@digitalmars.com:

 Would you agree that every time DMD reports a syntax error in user code, it 
 should also include a stack trace of the DMD source code to where in DMD it 
 reported the error?

Of course not. DMD is a compiler and it is part of its normal
operation to have sophisticated code to report syntax errors.
I distinguish between expected exceptions and unexpected
exceptions.

Expected exceptions are those that occur due to validating
user input or as part of handling documented error codes at
the interface layer with external APIs.

Unexpected exceptions are those we don't handle explicitly
because we don't expect them: maybe because we think the
code won't throw since the input has been validated, or because
of a bug in an external library, or out of laziness.

A FileNotFoundException can be the expected result of an open
file dialog with the user typing the name of a non-existent
file or the unexpected result of loading a static asset with
an index that is off-by-one (e.g. iconidx.bmp).
In the case of DMD, syntax errors are *expected* as part of
validating user input. And that's why it prints a single line
that can be parsed by IDEs to jump to the source.
Now to make things interesting, we also see *unexpected*
exceptions in DMD: internal compiler errors.

While expected exceptions are commonly handled with a simple
message, unexpected exceptions are handled depending on
application. In non-interactive applications like DMD they can
all be AssertErrors that terminate the program. In interactive
software like a video editor, someone might just have spent
hours on editing and the user should be allowed to reason
about the severity of the fault and decide to ignore the
exception or quit the program. There are other solutions like
logging and emailing the error to someone.

The example above, about the off-by-one when reading an icon
is a typical case of an exception that you would want to
investigate with an exception stack trace, but keep the program
running.

-- 
Marco
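The expected/unexpected split Marco describes maps directly onto how handlers are written; a Python sketch for illustration (file paths hypothetical):

```python
def read_user_file(path):
    # Expected failure: a user can type a non-existent name,
    # so handle it as ordinary input validation.
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return None  # caller reports "file not found" to the user

def read_static_asset(path):
    # Unexpected failure: a missing bundled asset is a bug
    # (e.g. an off-by-one in the index), so no handler here --
    # let it propagate with a stack trace for the developer.
    with open(path) as f:
        return f.read()
```

The same exception type gets two different treatments depending on whether its occurrence is part of normal operation or evidence of a bug.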



Re: Program logic bugs vs input/environmental errors

2014-10-05 Thread Marco Leise via Digitalmars-d
Am Fri, 03 Oct 2014 10:38:21 -0700
schrieb Brad Roberts via Digitalmars-d
digitalmars-d@puremagic.com:

 Just as within an airplane, to use Walter's 
 favorite analogy, the seat entertainment system is physically and 
 logically separated from flight control systems thus a fault within the 
 former has no impact on the latter.

And just like the Erlang runtime, the electrical components
could be faulty or operating beyond their limits and cause the
aircraft to shut down.
http://www.dailymail.co.uk/news/article-2519705/Blaze-scare-BA-Jumbo-Serious-electrical-sparked-planes-flight-entertainment-system.html
http://en.wikipedia.org/wiki/Swissair_Flight_111
http://www.iasa.com.au/folders/sr111/747IFE-fire.htm

-- 
Marco



Re: Program logic bugs vs input/environmental errors

2014-10-05 Thread Nick Sabalausky via Digitalmars-d
On 10/05/2014 05:35 AM, Ola Fosheim Grøstad 
ola.fosheim.grostad+dl...@gmail.com wrote:

On Sunday, 5 October 2014 at 09:06:45 UTC, Paolo Invernizzi wrote:

Oh, I think that here in Italy we outperform your country with that,
as for sure we are the most bureaucratised country on earth.


Hah! In Norway, parents sign evaluations of progress for 6-year-old
school children every 2 weeks due to a quality reform. And all
pupils are kept at the same level of progress so that nobody should feel
left behind, due to social-democracy principles...


Aside from those 2-week evals (ouch!), the US isn't a whole lot 
different. US schools are still notoriously bureaucracy-heavy (just ask 
any school employee), and "No child left behind" is a big thing (at 
least, supposedly), while any advanced kids are capped at the level of 
the rest of their age group and forbidden from advancing at their own 
level (thus boring the shit out of them and seeding quite a few 
additional problems).


Partly, that level-capping is done because there's a prevalent (but 
obviously BS) belief that kids should be kept with others of the same 
age, rather than with others of the same level of development or even a 
healthy mix. But also, they call this capping of advanced students 
"being fair to the *other* kids". Obviously US teachers have no idea 
what the word "fair" actually means. But then, in my experience, there's 
a LOT that US teachers don't know.


I blame both the teacher's unions (that's not intended as a statement on 
unions in general, BTW) and the complete and total lack of logic being 
part of the curriculum *they* were taught as kids (which is still 
inexcusably absent from modern curriculums).



In Italy you have
Montessori! Consider yourself lucky!



The US has a few of those too. They're constantly ridiculed (leave it to the 
US to blast anything that isn't group-think-compatible), but from what 
I've seen, Montessoris are at least less god-awful than US public 
schools. I almost went to one (but backed out since, by that point, it 
would have only been for one year - actually wound up with one of the 
best teachers I ever had that year, so it worked out fine in the end).




Re: Program logic bugs vs input/environmental errors

2014-10-05 Thread Walter Bright via Digitalmars-d

On 10/5/2014 8:35 AM, Dicebot wrote:

I am fine with non-default being hard but I
want it to be still possible within legal language restrictions.


D being a systems language, you can without much difficulty do whatever works 
for you.


People do look to us for guidance, however.

The levels of programming mastery:

newbie: follow the rules because you're told to
master: follow the rules because you understand them
guru: break the rules because you understand their limitations

I'd be doing our users an injustice by not making sure they understand the rules 
before trying to break them.





Re: Who pays for all this?

2014-10-05 Thread Brad Anderson via Digitalmars-d

On Sunday, 5 October 2014 at 20:45:06 UTC, Sean Kelly wrote:

On Sunday, 5 October 2014 at 18:06:49 UTC, Etienne wrote:


Programmers cost money, it would be nice to have a D 
Foundation where companies can donate and maybe eventually use 
the funds to pay for professional staffing rather than relying 
only on contributors. The D foundation can eventually grow 
towards having engineers on the phone to reassure some about 
development bottlenecks in the low-level software. Examples 
would be Mozilla foundation or Wikimedia foundation but with 
an Oracle or IBM type of service for support. It's an easily 
missed requirement in corporate decisions for reliance on 
software.


Boost consulting comes to mind as well.  Though I honestly 
couldn't say how practical this is for D today.


Well, Boost Consulting is no more, so given D's much smaller user
base, I suspect it wouldn't be very sustainable for D either.


Re: Who pays for all this?

2014-10-05 Thread Sean Kelly via Digitalmars-d

On Sunday, 5 October 2014 at 18:06:49 UTC, Etienne wrote:


Programmers cost money, it would be nice to have a D Foundation 
where companies can donate and maybe eventually use the funds 
to pay for professional staffing rather than relying only on 
contributors. The D foundation can eventually grow towards 
having engineers on the phone to reassure some about 
development bottlenecks in the low-level software. Examples 
would be Mozilla foundation or Wikimedia foundation but with an 
Oracle or IBM type of service for support. It's an easily 
missed requirement in corporate decisions for reliance on 
software.


Boost consulting comes to mind as well.  Though I honestly 
couldn't say how practical this is for D today.


Re: Program logic bugs vs input/environmental errors

2014-10-05 Thread Walter Bright via Digitalmars-d

On 10/5/2014 3:00 AM, Tobias Müller wrote:

All the bad buildings from the ancient Romans already came down over the
last 2000 years. The best 1% survived.


Yay for survivorship bias!



Re: Program logic bugs vs input/environmental errors

2014-10-05 Thread Marco Leise via Digitalmars-d
Am Sun, 5 Oct 2014 19:04:23 +0300
schrieb ketmar via Digitalmars-d digitalmars-d@puremagic.com:

 On Sun, 5 Oct 2014 17:44:31 +0200
 Marco Leise via Digitalmars-d digitalmars-d@puremagic.com wrote:
 
  From that I conclude, that File() should open files by ubyte[]
  exclusively to be POSIX compliant.
 and yet there is currently no way in hell to open file with non-utf8
 name with hardcoded name and without ugly hacks with File(...). and
 ubyte will not help here, as D has no way to represent non-utf8 string
 literal without unmaintainable shit like \xNN.
 
 and speaking about utf-8: for strings we at least have a hack, and for
shebangs... nothing. nada. ничего (nothing). locale settings? other encodings? who
 needs that, there CAN'T be non-utf8 shebangs. OS interoperability? it's
 overhyped, there is only two kinds of OSes: those which are D-compatible
 and bad. changing your locale to non-utf8 magically turns your OS to
 bad, D will not try to interoperate with it anymore.

There comes the day that you have to let Sputnik go and board
the ISS.
Still, I and others agree with you that Phobos should not
assume Unicode locales everywhere, no need to rant.

What I find difficult is to define just where in std.stdio the
locale transcoding has to happen. Java probably just wraps
a stream in a stream in a stream as usual, but in std.stdio it
is just a File struct that more or less directly writes to the
file descriptor. So my guess is, the transcoding has to happen
at an earlier stage. Next, when output is NOT a terminal you
typically want to output with no transcoding or set it up
depending on your needs.
Personally I want transcoding to the system locale when
stdout/stderr is a terminal and UTF-8 for everything else
(i.e. pipes and files). That would be my defaults.

-- 
Marco
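The default Marco describes — locale transcoding when stdout/stderr is a terminal, UTF-8 for pipes and files — boils down to a single branch at stream-setup time; a Python sketch for illustration:

```python
import locale
import sys

def pick_output_encoding(stream):
    # Terminal: honor the user's locale so text renders correctly
    # in their console.
    if stream.isatty():
        return locale.getpreferredencoding(False)
    # Pipe or file: emit UTF-8 regardless of locale.
    return "utf-8"
```

Usage would be something like `pick_output_encoding(sys.stdout)` when constructing the writer, which is exactly the "wrap a stream in a stream" step that std.stdio's File struct currently has no obvious place for.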




Re: On exceptions, errors, and contract violations

2014-10-05 Thread Marco Leise via Digitalmars-d
Am Fri, 03 Oct 2014 19:46:15 +
schrieb Ola Fosheim Grøstad
ola.fosheim.grostad+dl...@gmail.com:

 You simply turn the logic error into a cannot compute this 
 result if that is suitable for the application. And the 
 programming language should not make this hard.

I don't get this. When we say "logic error" we are talking about
bugs in the program. Why would anyone turn an outright bug into
"cannot compute this"? When a function cannot handle division
by zero it should not be fed a zero in the first place. That's
part of input validation before getting to that point.
Or do you vote for removing these validations and waiting for the
divide by zero to happen inside the callee, in order to catch
it in the caller and say in hindsight: "It seems like in one
way or another this input was not computable"?

-- 
Marco



Re: Program logic bugs vs input/environmental errors (checked exceptions)

2014-10-05 Thread Walter Bright via Digitalmars-d

On 10/5/2014 9:18 AM, Andrei Alexandrescu wrote:

Exceptions are all about centralized error handling. How, and how often, would
you handle FileNotFoundException differently than PermissionDeniedException?


You would handle it differently if there was extra data attached to that 
particular exception, specific to that sort of error.




Re: Program logic bugs vs input/environmental errors (checked exceptions)

2014-10-05 Thread Dmitry Olshansky via Digitalmars-d

05-Oct-2014 20:18, Andrei Alexandrescu wrote:

On 10/5/14, 8:56 AM, Jacob Carlborg wrote:

I would like to have as specific an exception type as possible. Also a nice
hierarchy of exceptions, for when catching a specific exception is not
interesting. Instead of just a FileException there could be
FileNotFoundException, PermissionDeniedException and so on.


Exceptions are all about centralized error handling. How, and how often,
would you handle FileNotFoundException differently than
PermissionDeniedException?



Seems like it should be possible to define multiple interfaces for 
exceptions, and then catch by that (and/or combinations of such).


Each interface would be interested in a particular property of the 
exception. Then catching by:


FileException with PermissionException

would mean an OS-level permission was violated during file access,

while

ProcessException with PermissionException would mean process 
manipulation was forbidden, etc.


Of course, some code may be interested only in PermissionException side 
of things, while other code may want to contain anything related to 
files, and the catch-all-sensible-ones inside of the main function.


--
Dmitry Olshansky
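Dmitry's idea of catching by combinations of exception "aspects" can be approximated today with multiple inheritance; a Python sketch for illustration (all class names hypothetical):

```python
class PermissionProblem(Exception):
    """Aspect: some permission was violated."""

class FileProblem(Exception):
    """Aspect: the failure happened during file access."""

class FilePermissionDenied(FileProblem, PermissionProblem):
    """Carries both aspects, so a handler for either one catches it."""

def open_protected():
    raise FilePermissionDenied("no access")

caught = None
try:
    open_protected()
except PermissionProblem as e:  # catch by the permission aspect alone
    caught = e
```

Code interested only in the permission side of things catches PermissionProblem; code that wants to contain anything file-related catches FileProblem; the concrete exception satisfies both.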


Re: On exceptions, errors, and contract violations

2014-10-05 Thread via Digitalmars-d

On Sunday, 5 October 2014 at 21:16:17 UTC, Marco Leise wrote:

I don't get this. When we say "logic error" we are talking 
about bugs in the program.


By what definition?

And what if I decide that I want my programs to recover from bugs 
in insignificant code sections and keep going?


Is a type error in a validator a bug? It makes perfect sense to 
let the runtime throw implicitly on things you cannot be bothered 
to check explicitly because they should not happen for valid 
input. If that is a bug, then it is a good bug that makes it 
easier to write code that responds properly. The less verbose a 
validator is, the easier it is to ensure that it responds in a 
desirable fashion. Why force the programmer to replicate the work 
that the compiler/runtime already do anyway?


Is an out-of-range error when processing a corrupt file a bug, or 
is it a deliberate reliance on D's range-check feature? Isn't the 
range check more useful if you don't have to do explicit checks 
for valid input? Useful as in: saves time and money with the same 
level of correctness, as long as you know what you are doing?


Is deep recursion a bug? Not really.

Is running out of memory a bug? Not really.

Is division by a very small number that is coerced to zero a bug? 
Not really.


Is hitting the worst-case running time, which causes timeouts, a 
bug? Not really, it is bad luck.


Can the compiler/library/runtime reliably determine what is a bug 
and what is not? Not in a consistent fashion.



Why would anyone turn an outright bug into
"cannot compute this"? When a function cannot handle division
by zero it should not be fed a zero in the first place. That's
part of input validation before getting to that point.


I disagree. When you want computations to be performant, it 
makes a lot of sense to do speculative computation in a SIMD-like 
manner using the less robust method, then recompute the 
computations that failed using a slower and more robust method.


Or simply ignore the results that were hard to compute: Think of 
a ray tracer that solves very complex equations using a numerical 
solver that will not always produce a meaningful result. You are 
then better off using the faster solver and simply ignore the 
rays that produce unreasonable results according to some 
heuristics. You can compensate by firing more rays per pixel with 
slightly different x/y coordinates. The alternative is to produce 
images with pixel noise or use a much slower solver.



Or do you vote for removing these validations and wait for the
divide by zero to happen inside the callee in order to catch
it in the caller and say in hindsight: "It seems like in one
way or another this input was not computable"?


There is a reason why the FP handling in ALUs lets this be 
configurable. It is up to the application to decide.
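The fast-path/robust-fallback pattern described above can be sketched as follows (Python for illustration; the two "solvers" are stand-ins for real numeric kernels):

```python
def fast_solve(x):
    # Fast but fragile: blows up on degenerate input.
    return 1.0 / x

def robust_solve(x):
    # Slower but handles the degenerate case explicitly.
    return float("inf") if x == 0 else 1.0 / x

def solve_all(xs):
    # Speculatively run the fast solver on everything, then
    # recompute only the failures with the robust one.
    results = []
    for x in xs:
        try:
            results.append(fast_solve(x))
        except ZeroDivisionError:
            results.append(robust_solve(x))
    return results
```

The design bet is that failures are rare, so the common case pays only for the cheap solver and the expensive one runs on the handful of inputs that actually need it.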


Re: Program logic bugs vs input/environmental errors

2014-10-05 Thread Dicebot via Digitalmars-d

On Sunday, 5 October 2014 at 20:41:44 UTC, Walter Bright wrote:

On 10/5/2014 8:35 AM, Dicebot wrote:

I am fine with non-default being hard but I
want it to be still possible within legal language restrictions.


D being a systems language, you can without much difficulty do 
whatever works for you.


Yes, but it shouldn't be in the undefined-behaviour domain. In other 
words, there needs to be confidence that some new compiler 
optimization will not break the application completely.


Right now the Throwable/Error docs heavily suggest that catching them is 
a "shoot yourself in the foot" thing, and a new compiler release can 
possibly change the behaviour without notice. I'd like to have a 
bit more specific documentation about what can and what can't be 
expected. Experimental observations are that one shouldn't rely 
on any cleanup code (RAII / scope(exit)) happening, but other than 
that it is OK to consume an Error if the execution context for it (a fiber 
in our case) gets terminated. As the D1 compiler does not change, this 
is a good enough observation for practical purposes. But for D2 it 
would be nice to have some official clarification.


I think this is the only important concern I have, as long as 
power-user stuff remains possible without re-implementing the whole 
exception system from scratch.


Re: Program logic bugs vs input/environmental errors

2014-10-05 Thread eles via Digitalmars-d

On Sunday, 5 October 2014 at 20:46:16 UTC, Walter Bright wrote:

On 10/5/2014 3:00 AM, Tobias Müller wrote:
All the bad buildings from the ancient Romans already came 
down over the last 2000 years. The best 1% survived.


Yay for survivorship bias!


The same happens when asking people if such or such dictatorship 
was good or bad. You hear "it wasn't so bad, after all". Yes, 
because they only ask the survivors; they don't go to ask the 
dead, too.


Sorry, my rant about politics.


Re: What are the worst parts of D?

2014-10-05 Thread via Digitalmars-d

On Sunday, 5 October 2014 at 16:14:18 UTC, Dicebot wrote:
No need to explain it here. When I speak about vision I mean 
something that anyone coming to the dlang.org page or GitHub repo 
sees. Something that is explained in a bit more detail, 
possibly with code examples. I know I am asking much, but seeing a 
quick reference for "imagine this stuff is implemented, this 
is how your program code will be affected and this is why it is a 
good thing" could have been a huge deal.


Something like this would be nice:

http://golang.org/s/go14gc



Re: What are the worst parts of D?

2014-10-05 Thread eles via Digitalmars-d

On Sunday, 5 October 2014 at 14:55:38 UTC, Dicebot wrote:

Right now I have no idea where the development is headed and 
what to expect from next few releases. I am not speaking about 
wiki.dlang.org/Agenda but about bigger picture. Unexpected 
focus on C++ support, thread about killing auto-decoding, 
recent ref counting proposal


Just to add more salt:

http://blogs.msdn.com/b/oldnewthing/archive/2010/08/09/10047586.aspx

Raymond Chen: When you ask somebody what garbage collection is, 
the answer you get is probably going to be something along the 
lines of "Garbage collection is when the operating environment 
automatically reclaims memory that is no longer being used by the 
program. It does this by tracing memory starting from roots to 
identify which objects are accessible."


This description confuses the mechanism with the goal. It's like 
saying "the job of a firefighter is driving a red truck and 
spraying water". That's a description of what a firefighter does, 
but it misses the point of the job (namely, putting out fires 
and, more generally, fire safety).


Garbage collection is simulating a computer with an infinite 
amount of memory. The rest is mechanism. And naturally, the 
mechanism is reclaiming memory that the program wouldn't notice 
went missing. It's one giant application of the as-if rule.


Interestingly, in the comments, a distinction is made 
between finalizers and destructors, even if they happen to have 
the same syntax, for example in C# and C++.


For example here: 
http://blogs.msdn.com/b/oldnewthing/archive/2010/08/09/10047586.aspx#10047776


I find it difficult to swallow the fact that D classes do not lend 
themselves to RAII. While I could accept that memory management 
could be left outside RAII, running destructors (or disposers) 
deterministically is a must.


I particularly find it bad that D recommends using structs to free 
resources because their destructors run automatically. 
Just look at this example:


http://dlang.org/cpptod.html#raii

struct File
{
Handle h;

~this()
{
h.release();
}
}

void test()
{
if (...)
{
auto f = File();
...
} // f.~this() gets run at closing brace, even if
  // scope was exited via a thrown exception
}

Even if C++ structs are almost the same as classes, the logical 
split between the two is: structs are DATA, classes are BEHAVIOR. 
I cannot get my head around the fact that I would *recommend* 
putting methods in a struct.
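For comparison, GC'd languages usually bolt a deterministic-disposal protocol onto classes rather than reaching for value types; the File example above translates to a Python context manager (illustrative only, class name hypothetical):

```python
class ManagedFile:
    """Disposal runs deterministically at scope exit, even when an
    exception is thrown, while memory reclamation stays with the GC."""

    def __init__(self, path, mode="w"):
        self.handle = open(path, mode)

    def __enter__(self):
        return self.handle

    def __exit__(self, exc_type, exc, tb):
        self.handle.close()  # the analogue of the struct destructor
        return False  # do not swallow exceptions
```

This is exactly the finalizer/destructor split from the comments eles links to: `__exit__` is the deterministic disposer, while freeing the object's memory remains the collector's job.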


Re: What are the worst parts of D?

2014-10-05 Thread eles via Digitalmars-d

On Sunday, 5 October 2014 at 22:11:38 UTC, eles wrote:
On Sunday, 5 October 2014 at 21:59:21 UTC, Ola Fosheim Grøstad 
wrote:

On Sunday, 5 October 2014 at 16:14:18 UTC, Dicebot wrote:
No need to explain it here. When I speak about vision I mean 
something that anyone coming to the dlang.org page or GitHub repo 
sees. Something that is explained in a bit more detail, 
possibly with code examples. I know I am asking much, but 
seeing a quick reference for "imagine this stuff is 
implemented, this is how your program code will be affected 
and this is why it is a good thing" could have been a huge deal.


Something like this would be nice:

http://golang.org/s/go14gc


(sorry, replying to this answer because it is shorter)

Would a strategy work where pointers are unique by default, and only 
become shared, weak or naked if explicitly declared as such?


Of course, would it be viable?


Re: What are the worst parts of D?

2014-10-05 Thread eles via Digitalmars-d
On Sunday, 5 October 2014 at 21:59:21 UTC, Ola Fosheim Grøstad 
wrote:

On Sunday, 5 October 2014 at 16:14:18 UTC, Dicebot wrote:
No need to explain it here. When I speak about vision I mean 
something that anyone coming to the dlang.org page or GitHub repo 
sees. Something that is explained in a bit more detail, 
possibly with code examples. I know I am asking much, but 
seeing a quick reference for "imagine this stuff is implemented, 
this is how your program code will be affected and this is why 
it is a good thing" could have been a huge deal.


Something like this would be nice:

http://golang.org/s/go14gc


(sorry, replying to this answer because it is shorter)

Would a strategy work where pointers are unique by default, and only 
become shared, weak or naked if explicitly declared as such?


Re: Program logic bugs vs input/environmental errors

2014-10-05 Thread Walter Bright via Digitalmars-d

On 10/5/2014 2:55 PM, eles wrote:

The same happens when asking people if such or such dictatorship was good or
bad. You hear it wasn't so bad, after all. Yes, because they only ask the
survivors, they don't go to ask the dead, too.


Yeah, I've often wondered what the dead soldiers would say about wars - whether 
it was worth it or not. Only the ones who lived get interviewed.



Sorry, my rant about politics.




Re: Program logic bugs vs input/environmental errors

2014-10-05 Thread Walter Bright via Digitalmars-d

On 10/5/2014 2:51 PM, Dicebot wrote:

On Sunday, 5 October 2014 at 20:41:44 UTC, Walter Bright wrote:

On 10/5/2014 8:35 AM, Dicebot wrote:

I am fine with non-default being hard but I
want it to be still possible within legal language restrictions.


D being a systems language, you can without much difficulty do whatever works
for you.


Yes, but it shouldn't be in the undefined-behaviour domain. In other words, there
needs to be confidence that some new compiler optimization will not break the
application completely.


Relying on program state after entering an unknown state is undefined by 
definition. I don't see how a language can make a statement like "it's probably ok".




Right now the Throwable/Error docs heavily suggest that catching them is a "shoot yourself in
the foot" thing, and a new compiler release can possibly change the behaviour
without notice. I'd like to have a bit more specific documentation about what
can and what can't be expected. Experimental observations are that one shouldn't
rely on any cleanup code (RAII / scope(exit)) happening, but other than that it
is OK to consume an Error if the execution context for it (a fiber in our case) gets
terminated. As the D1 compiler does not change, this is a good enough observation for
practical purposes. But for D2 it would be nice to have some official 
clarification.


Definitely unwinding may or may not happen from Error throws, nothrow 
functions may throw Errors, and optimizers need not account for Errors being thrown.


Attempting to unwind the stack when an Error is thrown may cause further 
corruption (if the Error was thrown because of corruption), another reason for 
the language not to try to do it.


An Error is, by definition, unrecoverable.



I think this is the only important concern I have as long as power user stuff
remains possible without re-implementing whole exception system from scratch.


You can catch an Error. But what is done from there is up to you - and to do 
more than just log the error, engage the backup, and exit, I cannot recommend.


To do more, use an Exception. But to throw an Exception when a logic bug has 
been detected, then try and continue based on it probably being ok, is 
something I cannot recommend and D certainly cannot guarantee anything. If the 
program does anything that matters, that is.




Re: Program logic bugs vs input/environmental errors

2014-10-05 Thread Sean Kelly via Digitalmars-d

On Sunday, 5 October 2014 at 23:01:48 UTC, Walter Bright wrote:


Definitely unwinding may or may not happen from Error throws, 
nothrow functions may throw Errors, and optimizers need not 
account for Errors being thrown.


This is the real concern.  If an Error is thrown out of a nothrow 
function that contains a synchronized block, for example, the 
mutex might still be locked.  So the only viable option is to 
terminate, even for something theoretically recoverable like a 
divide by zero or an OOME.


Re: std.experimental.logger formal review round 3

2014-10-05 Thread Kevin Lamonte via Digitalmars-d

On Sunday, 5 October 2014 at 17:06:06 UTC, Sean Kelly wrote:

Having an identifier for logging is a bit different.  Would 
using the MessageBox address be sufficient?  I'd be happy to 
add a Tid.id property that returns a value like this.  I'd 
rather not try to generate a globally unique identifier though 
(that would probably mean a UUID, which is long and expensive 
to generate).


I think Tid.id returning the MessageBox address would be fine for 
logging purposes.  The main value is being able to distinguish 
messages coming in at the same time from multiple threads.  Even 
if a MessageBox address was re-used by a new receive()er after a 
previous one exited, I doubt it would confuse users very much.  I 
think that a doc on Tid.id like "this value will not be the same 
as any other Tid currently existing, but might be the same as a 
Tid that has exited previously" would be sufficient.


Re: scope() statements and return

2014-10-05 Thread deadalnix via Digitalmars-d
On Sunday, 5 October 2014 at 16:30:47 UTC, Ola Fosheim Grøstad 
wrote:
On Sunday, 5 October 2014 at 15:03:08 UTC, ketmar via 
Digitalmars-d wrote:
so Error should not be catchable and should crash immediately, 
without any unwinding. as long as Errors are just another kind of 
exception, the promise must be kept.


I find it strange if you cannot recover from 
out-of-memory-error. The common trick is to preallocate a bulk 
of memory, then free it when you throw the out of memory 
exception so that you can unwind. When destroying the 
out-of-memory-object you need to reallocate the bulk of memory.




I know of several cases where this trick was used and it turned 
out horribly wrong. OOM is NOT recoverable. It may be in some 
cases, and you can use tricks to make it recoverable in more 
cases, like the one mentioned, but ultimately you have no 
guarantee and, worse, no way to know if you are in a recoverable 
situation or not.


The only valid use case I know of for catching this kind of error is 
at top level, to return various error codes. Even logging may be 
broken at this point.
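The preallocation trick from the quote — keep some ballast memory around and free it when OOM strikes so that unwinding has headroom — looks roughly like this (Python, purely illustrative; as argued above, there is no guarantee the process is still in a sane state when this handler runs):

```python
class RainyDayFund:
    def __init__(self, size=1 << 20):
        # Ballast allocated up front, while memory is still available.
        self._ballast = bytearray(size)

    def release(self):
        # Called from the OOM handler: hand the reserve back so that
        # unwinding, logging and error reporting have memory to use.
        freed = self._ballast is not None
        self._ballast = None
        return freed

fund = RainyDayFund()
```

On the first out-of-memory event the fund can be released once; after that the process is flying without a reserve, which is one concrete reason the technique degrades to "terminate at top level" in practice.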


Re: Who pays for all this?

2014-10-05 Thread Shammah Chancellor via Digitalmars-d

On 2014-10-05 03:33:36 +, Walter Bright said:

We're not really limited by lack of funds, but more by lack of focussed 
effort. If anyone wants to contribute funds, probably the best use 
would be to add bug bounties for bugzilla issues that they find to be 
neglected. The bounties don't really compensate at professional rates, 
but they do work as a nice thanks to those who donate their valuable 
time.


I've placed a couple of anonymous bounties, but I personally think it's 
a bad way to get directed focused effort.  A democracy of people trying 
to get what they individually want done through small donations?


There are many languages which have grown more quickly than D (despite 
being less interesting) because they have a foundation where people can 
donate, or some company, which provides for the core developers.   I'm 
not saying that having a non-profit will magically generate money, but 
there are a few companies who use D out there who just might be willing 
to donate non-trivial sums of money to further development if there was 
a non-profit to see that the money was put to good use.


Just to name a few:

Python: https://www.python.org/psf-landing/
Node.JS:  http://www.joyent.com/
Perl: http://www.perlfoundation.org
Linux Core Developers: http://en.wikipedia.org/wiki/Linux_Foundation
Ruby Core Developers: https://www.heroku.com (A subsidiary of Salesforce)

-S



Re: scope() statements and return

2014-10-05 Thread Shammah Chancellor via Digitalmars-d

On 2014-10-05 11:28:59 +, monarch_dodra said:


On Saturday, 4 October 2014 at 18:42:05 UTC, Shammah Chancellor wrote:
Didn't miss anything.  I was responding to Andrei such that he might 
think it's not so straightforward to evaluate that code.
I am with you on this.  It was my original complaint months ago that 
resulted in this being disallowed behavior.  Specifically because you 
could stop error propagation by accident even though you did not intend 
to prevent it.  e.g.:


int main()
{
scope(exit) return 0;
assert(false, "whoops!");
}

-S


Isn't this the "should scope(exit/failure) catch Error" issue though?

In theory, you should seldom ever catch Errors. I don't understand why 
scope(exit) blocks are catching them.


It doesn't catch the error. Propagation should continue as normal.  
However, in the case I gave, a return statement is executed in a cleanup 
block before propagation can continue.   As has been pointed out, this 
is just like a finally{} block and it behaves the same way.   Throws 
and returns should be prohibited from those as well.




Re: Program logic bugs vs input/environmental errors

2014-10-05 Thread Walter Bright via Digitalmars-d

On 10/5/2014 4:28 PM, Sean Kelly wrote:

On Sunday, 5 October 2014 at 23:01:48 UTC, Walter Bright wrote:


Definitely unwinding may or may not happen from Error throws, nothrow
functions may throw Errors, and optimizers need not account for Errors being
thrown.


This is the real concern.  If an Error is thrown out of a nothrow function that
contains a synchronized block, for example, the mutex might still be locked.  So
the only viable option is to terminate, even for something theoretically
recoverable like a divide by zero or an OOME.


Divide by zero is not recoverable since you don't know why it occurred. It could 
be the result of overflowing a buffer with 0s. Until a human debugs it and 
figures out why it happened, it is not recoverable.


Because it could be the result of corruption like buffer overflows, the less 
code that is executed between the detection of the bug and terminating the 
program, the safer the program is. Continuing execution may mess up user data, 
may execute injected malware, etc.




Re: Program logic bugs vs input/environmental errors (checked exceptions)

2014-10-05 Thread Andrei Alexandrescu via Digitalmars-d

On 10/5/14, 2:27 PM, Walter Bright wrote:

On 10/5/2014 9:18 AM, Andrei Alexandrescu wrote:

Exceptions are all about centralized error handling. How, and how
often, would
you handle FileNotFoundException differently than
PermissionDeniedException?


You would handle it differently if there was extra data attached to that
particular exception, specific to that sort of error.


Indeed. Very few in Phobos do. -- Andrei



Re: What are the worst parts of D?

2014-10-05 Thread Andrei Alexandrescu via Digitalmars-d

On 10/5/14, 3:08 PM, eles wrote:

On Sunday, 5 October 2014 at 14:55:38 UTC, Dicebot wrote:


Right now I have no idea where the development is headed and what to
expect from next few releases. I am not speaking about
wiki.dlang.org/Agenda but about bigger picture. Unexpected focus on
C++ support, thread about killing auto-decoding, recent ref counting
proposal


Just to add more salt:

http://blogs.msdn.com/b/oldnewthing/archive/2010/08/09/10047586.aspx

Raymond Chen:  When you ask somebody what garbage collection is, the
answer you get is probably going to be something along the lines of
"Garbage collection is when the operating environment automatically
reclaims memory that is no longer being used by the program. It does
this by tracing memory starting from roots to identify which objects are
accessible."

This description confuses the mechanism with the goal. It's like saying
the job of a firefighter is driving a red truck and spraying water.
That's a description of what a firefighter does, but it misses the point
of the job (namely, putting out fires and, more generally, fire safety).

Garbage collection is simulating a computer with an infinite amount of
memory. The rest is mechanism. And naturally, the mechanism is
reclaiming memory that the program wouldn't notice went missing. It's
one giant application of the as-if rule.

Interestingly, in the comments, a distinction is made between
finalizers and destructors, even if they happen to have the same syntax,
for example in C# and C++.

For example here:
http://blogs.msdn.com/b/oldnewthing/archive/2010/08/09/10047586.aspx#10047776


I find it difficult to swallow the fact that D classes do not lend
themselves to RAII. While I could accept that memory management could be
left outside RAII, running destructors (or disposers) deterministically
is a must.

I find it particularly bad that D recommends using structs to free
resources because their destructors run automatically. Just
look at this example:

http://dlang.org/cpptod.html#raii

struct File
{
 Handle h;

 ~this()
 {
 h.release();
 }
}

void test()
{
 if (...)
 {
 auto f = File();
 ...
 } // f.~this() gets run at closing brace, even if
   // scope was exited via a thrown exception
}

Even if C++ structs are almost the same as classes, the logical split
between the two is: structs are DATA, classes are BEHAVIOR. I cannot
get my head around the fact that I would *recommend* putting methods in a
struct.


The main distinction between structs and classes in D is that the former are 
monomorphic value types and the latter are polymorphic reference types. 
-- Andrei




Re: Who pays for all this?

2014-10-05 Thread Andrei Alexandrescu via Digitalmars-d

On 10/4/14, 8:33 PM, Walter Bright wrote:

Might it be time for a formation of a D Programming Language
Foundation to which
people can donate and funds some of the hosting, and possibly pay for
some time
of the various heavy contributors?


We're not really limited by lack of funds,


whaaa


but more by lack of focussed
effort. If anyone wants to contribute funds, probably the best use would
be to add bug bounties for bugzilla issues that they find to be
neglected. The bounties don't really compensate at professional rates,
but they do work as a nice thanks to those who donate their valuable
time.


A $150 monthly contribution would cover our hosting costs. $1000 per 
month would cover the yearly basic costs for DConf. $500 more per month 
would add A/V for the conference. We've had DConf partially sponsored, 
but it's good to have autonomy. A couple thousand more would buy us 
things like a web designer. $2000 or more per month would possibly get 
us a person to put on things that are urgent and important.


We're very much limited by the lack of funds.


Andrei



Re: Who pays for all this?

2014-10-05 Thread Andrei Alexandrescu via Digitalmars-d

On 10/5/14, 7:28 PM, Shammah Chancellor wrote:

There are many languages which have grown more quickly than D (despite
being less interesting) because they have a foundation where people can
donate, or some company, which provides for the core developers.   I'm
not saying that having a non-profit will magically generate money, but
there are a few companies who use D out there who just might be willing
to donate non-trivial sums of money to further development if there was
a non-profit to see that the money was put to good use.

Just to name a few:

Python: https://www.python.org/psf-landing/
Node.JS:  http://www.joyent.com/
Perl: http://www.perlfoundation.org
Linux Core Developers: http://en.wikipedia.org/wiki/Linux_Foundation
Ruby Core Developers: https://www.heroku.com (A subsidiary of Salesforce)


C++ has also had a foundation since 2012: 
http://pocoproject.org/blog/?p=671. It paid for CppCon 2014, which was 
very successful.


I believe a foundation would help D. Unfortunately, setting one up is 
very laborious, and neither Walter nor I know anything about that - from 
what I understand it takes a _lot_ of work. If anyone is able and 
willing to embark on creating a foundation for D, that would be a great 
help to the language and its community.



Andrei



GDC Pandaboard/QEMU Framebuffer

2014-10-05 Thread John A via Digitalmars-d

Not sure where to post this; it's not a question, just some info.

I have created a set of pages here:

  http://arrizza.org/wiki/index.php/Dlang

with instructions which include:
  - how to create a GDC cross-compiler using crosstool-ng
  - how to create some sample applications to test out GDC
  - how to create a sample app that writes to the framebuffer via 
GDC

  - how to set up and run these apps on QEMU
  - how to run the same apps on a Pandaboard

There is nothing new here. Others have written it already, some 
of which worked as advertised for me, some didn't. I've just 
gathered it up, tried it out and wrote down very specific 
instructions about what I needed to do to get it to work. 
Hopefully it works well for you.


Note I use Ubuntu 12.04 for everything, so these pages won't help 
Windows folks.


John


Re: Code fails with linker error. Why?

2014-10-05 Thread ketmar via Digitalmars-d-learn
On Sat, 04 Oct 2014 16:52:08 +
John Colvin via Digitalmars-d-learn digitalmars-d-learn@puremagic.com
wrote:

 It is a little noisy, but really very little. The only extra 
 noise is the call to Afoo.
that "very little" turns into a lot with a lot of classes with a lot of
methods. and code readability suffers even from one such hack.

 Namespace pollution? In what way would that cause a problem here?
all this AFoo, BFoo, CFoo, XYZFoo are useless. all they do is pollute the
namespace, making debug info bigger and processing debug info slower.
each unnecessary thing that the computer must process is... well,
unnecessary. ;-) i'm doing some tricks with debug info -- such as live
code instrumentation -- and i hate all those 'proxies'. ;-) i again must
write code to deal with things that the compiler should deal with.




Re: DUB Errors

2014-10-05 Thread Sönke Ludwig via Digitalmars-d-learn

Am 05.10.2014 02:11, schrieb David Nadlinger:

On Friday, 3 October 2014 at 23:00:53 UTC, Brian Hechinger wrote:

With my old set of packages I had no problems. I just now deleted
~/.dub and now I too get this error. Some issue with the openssl
module, yes, but what? This is a bit of an issue. :)


At first glance, this seems like a forward reference issue.
deimos.openssl.ossl_typ imports deimos.openssl.ssl, but also the other
way round.

I had a similar problem where everything worked before, but then I
deleted all DUB caches and now vibe.d doesn't build anymore (master and
0.7.21-rc1).

David


Judging by the log output it should be fixed (on vibe.d's side) with 
[1] by using a version-based dependency on the OpenSSL bindings, pinned 
to an old version*. I've tagged a new RC-2 version now (although it's 
not really an RC, with more known fixes to come).


* Since there was no reaction on the corresponding ticket [2], I've 
decided to tag my own fork instead and register it on code.dlang.org. If 
anyone has a better idea...


[1]: 
https://github.com/rejectedsoftware/vibe.d/commit/4fd45376a81423adae33092326b5be2cc69422c8

[2]: https://github.com/D-Programming-Deimos/openssl/issues/17



How to detect start of Unicode symbol and count amount of graphemes

2014-10-05 Thread Uranuz via Digitalmars-d-learn
I have a struct StringStream that I use to go through and parse an 
input string. The string can be of string, wstring or dstring type. 
I implement a function popChar that reads a code unit from the stream. 
I want to have a *debug* mode of the parser (via a CT switch), where I 
could get information about lineIndex, codeUnitIndex and 
graphemeIndex. So I don't want to use the *front* primitive because 
it autodecodes everywhere, but I want to get info about the index of 
the *user-perceived character* in debug mode (so decoding is needed 
here).


The question is how to detect that I go from one Unicode grapheme to 
another when iterating over a string, wstring or dstring by code unit? 
Is it simple, or is it an attempt to reimplement a big piece of 
existing std library code?


As a result I should just increment internal graphemeIndex.

There short version of implementation that I want follows

struct StringStream(String)
{
   String str;
   size_t index;
   size_t graphemeIndex;

   auto popChar()
   {
  index++;
  if( ??? ) //How to detect new grapheme?
  {
 graphemeIndex++;
  }
  return str[index];
   }

}

Sorry for the very simple question. I just have a mess in my head 
about Unicode and D strings.


Re: curl and proxy

2014-10-05 Thread via Digitalmars-d-learn

On Saturday, 4 October 2014 at 21:59:43 UTC, notna wrote:

Cool, thanks.
Btw., there could be more special chars to encode/replace 
besides ':'... like '/', '@' and so on... see also 
http://www.cyberciti.biz/faq/unix-linux-export-variable-http_proxy-with-special-characters/ 
for the background


Does std.uri.encode() not take care of these either? Anyway, 
according to the documentation [1], it effectively only cares about 
':'. Therefore, I removed the calls to encode().


[1] http://curl.haxx.se/libcurl/c/CURLOPT_PROXYUSERPWD.html


line numbers in linux stack traces?

2014-10-05 Thread Nick Sabalausky via Digitalmars-d-learn
I know this keeps getting asked every year or so, but I couldn't find 
recent info.


Are line numbers in linux stack traces supposed to be working at this 
point? Because I'm not getting any with 2.066.0 with either -g or -gc 
even when running under gdb. Kind of a pain, esp. compared to D dev on 
windows.


Re: How to detect start of Unicode symbol and count amount of graphemes

2014-10-05 Thread monarch_dodra via Digitalmars-d-learn

On Sunday, 5 October 2014 at 08:27:58 UTC, Uranuz wrote:
I have struct StringStream that I use to go through and parse 
input string. String could be of string, wstring or dstring 
type. I implement function popChar that reads codeUnit from 
Stream. I want to have *debug* mode of parser (via CT switch), 
where I could get information about lineIndex, codeUnitIndex, 
graphemeIndex. So I don't want to use *front* primitive because 
it autodecodes everywhere, but I want to get info about index of 
*user perceived character* in debug mode (so decoding is needed 
here).


Question is how to detect that I go from one Unicode grapheme 
to another when iterating on string, wstring, dstring by code 
unit? Is it simple or is it attempt to reimplement a big piece 
of existing std library code?


You can use std.uni.byGrapheme to iterate by graphemes:
http://dlang.org/phobos/std_uni.html#.byGrapheme

AFAIK, graphemes are not self-synchronizing, but code points 
are. You can pop code units until you reach the beginning of a 
new code point. From there, you can iterate by graphemes, though 
your first grapheme might be off.


Re: How to detect start of Unicode symbol and count amount of graphemes

2014-10-05 Thread Uranuz via Digitalmars-d-learn

You can use std.uni.byGrapheme to iterate by graphemes:
http://dlang.org/phobos/std_uni.html#.byGrapheme

AFAIK, graphemes are not self synchronizing, but codepoints 
are. You can pop code units until you reach the beginning of a 
new codepoint. From there, you can iterate by graphemes, though 
your first grapheme might be off.


Maybe there is some idea how to just detect the first code unit of a 
grapheme without the overhead of using the Grapheme struct? I just tried 
to check if ch < 128 (for UTF-8), but this doesn't work. How do I 
check if a byte is a continuation of the code for a single code point, 
or if a new sequence has started?




Re: DUB Errors

2014-10-05 Thread Nordlöw

On Sunday, 5 October 2014 at 06:39:00 UTC, Sönke Ludwig wrote:

At first glance, this seems like a forward reference issue.
deimos.openssl.ossl_typ imports deimos.openssl.ssl, but also 
the other

way round.


The only explanation I can think of is that

version = OPENSSL_NO_SSL_INTERN;

for some reason gets set during compilation, as this leads to

struct ssl_ctx_st

not becoming visible in modules that import ssl.d.

Ideas anyone?


Re: DUB Errors

2014-10-05 Thread Nordlöw

On Sunday, 5 October 2014 at 06:39:00 UTC, Sönke Ludwig wrote:
Judging by the log output it should be fixed (on vibe.d's 
side) with [1] by using a version based dependency to the 
OpenSSL bindings with an old version*. I've tagged a new RC-2 
version now (although it's not really an RC with more known 
fixes to come).


* Since there was no reaction on the corresponding ticket [2], 
I've decided to tag my own fork instead and register it on 
code.dlang.org. If anyone has a better idea...


[1]: 
https://github.com/rejectedsoftware/vibe.d/commit/4fd45376a81423adae33092326b5be2cc69422c8

[2]: https://github.com/D-Programming-Deimos/openssl/issues/17


I reset my dub.selections.json and now things work.

Thx.


std.parallelism curious results

2014-10-05 Thread flamencofantasy via Digitalmars-d-learn

Hello,

I am summing up the first 1 billion integers in parallel and in a 
single thread and I'm observing some curious results;


parallel sum : 499999999500000000, elapsed 102833 ms
single thread sum : 499999999500000000, elapsed 1667 ms

The parallel version is 60+ times slower on my i7-3770K CPU. I 
think that may be due to the CPU constantly flushing and reloading 
the caches in the parallel version, but I don't know for sure.


Here is the D code;

shared ulong sum = 0;
ulong iter = 1_000_000_000UL;

StopWatch sw;

sw.start();

foreach(i; parallel(iota(0, iter)))
{
atomicOp!"+="(sum, i);
}

sw.stop();

	writefln("parallel sum : %s, elapsed %s ms", sum, sw.peek().msecs);


sum = 0;

sw.reset();

sw.start();

for (ulong i = 0; i < iter; ++i)
{
sum += i;
}

sw.stop();

	writefln("single thread sum : %s, elapsed %s ms", sum, sw.peek().msecs);


Out of curiosity I tried the equivalent code in C# and I got this;

parallel sum : 499999999500000000, elapsed 20320 ms
single thread sum : 499999999500000000, elapsed 1901 ms

The C# parallel is about 3 times faster than the D parallel which 
is strange on the exact same CPU.


And here is the C# code;

long sum = 0;
long iter = 1000000000L;

var sw = Stopwatch.StartNew();

Parallel.For(0, iter, i =>
{
    Interlocked.Add(ref sum, i);
});

Console.WriteLine("parallel sum : {0}, elapsed {1} ms", sum, 
sw.ElapsedMilliseconds);


sum = 0;

sw = Stopwatch.StartNew();

for (long i = 0; i < iter; ++i)
{
sum += i;
}

Console.WriteLine("single thread sum : {0}, elapsed {1} ms", sum, 
sw.ElapsedMilliseconds);


Thoughts?


Re: std.parallelism curious results

2014-10-05 Thread Russel Winder via Digitalmars-d-learn

On 05/10/14 15:27, flamencofantasy via Digitalmars-d-learn wrote:
 Hello,
 
 I am summing up the first 1 billion integers in parallel and in a
 single thread and I'm observing some curious results;

I am fairly certain that your use of parallel foreach introduces quite a
lot of threads other than your master one.

 parallel sum : 499999999500000000, elapsed 102833 ms
 single thread sum : 499999999500000000, elapsed 1667 ms
 
 The parallel version is 60+ times slower on my i7-3770K CPU. I
 think that maybe due to the CPU constantly flushing and reloading
 the caches in the parallel version but I don't know for sure.

I would bet there are cache problems, but far more likely that the
core problem is all the thread activity and in particular all the
synchronization.

 Here is the D code;
 
 shared ulong sum = 0; ulong iter = 1_000_000_000UL;
 
 StopWatch sw;
 
 sw.start();
 
 foreach(i; parallel(iota(0, iter))) { atomicOp!"+="(sum, i); }

Well that will be the problem then, lots and lots of synchronization
with the billion tasks you have set up. I am highly surprised this is
only 60 times slower than sequential!

 sw.stop();
 
 writefln("parallel sum : %s, elapsed %s ms", sum,
 sw.peek().msecs);
 
 sum = 0;
 
 sw.reset();
 
 sw.start();
 
 for (ulong i = 0; i < iter; ++i) { sum += i; }
 
 sw.stop();
 
 writefln("single thread sum : %s, elapsed %s ms", sum, 
 sw.peek().msecs);
 
 Out of curiosity I tried the equivalent code in C# and I got this;
 
 parallel sum : 499999999500000000, elapsed 20320 ms
 single thread sum : 499999999500000000, elapsed 1901 ms
 
 The C# parallel is about 3 times faster than the D parallel which
 is strange on the exact same CPU.
 
 And here is the C# code;
 
 long sum = 0; long iter = 1000000000L;
 
 var sw = Stopwatch.StartNew();
 
 Parallel.For(0, iter, i => { Interlocked.Add(ref sum, i); });

Useful moral of this story is that C# synchronization in this
(somewhat perverse) context is relatively much more efficient than
that of D.

There is almost certainly a useful benchmark test that can come of
this for the std.parallelism implementation (if only I had a few
cycles to get really stuck into a review and analysis of the module :-( )

 Console.WriteLine("parallel sum : {0}, elapsed {1} ms", sum, 
 sw.ElapsedMilliseconds);
 
 sum = 0;
 
 sw = Stopwatch.StartNew();
 
 for (long i = 0; i < iter; ++i) { sum += i; }
 
 Console.WriteLine("single thread sum : {0}, elapsed {1} ms", sum, 
 sw.ElapsedMilliseconds);
 
 Thoughts?


- -- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip:
sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder


Re: std.parallelism curious results

2014-10-05 Thread Artem Tarasov via Digitalmars-d-learn

Welcome to the world of multithreading.
You have just discovered that atomic operations are performance 
killers, congratulations on this.


Re: line numbers in linux stack traces?

2014-10-05 Thread Vladimir Panteleev via Digitalmars-d-learn

On Sunday, 5 October 2014 at 09:10:06 UTC, Nick Sabalausky wrote:
I know this keeps getting asked every year or so, but I 
couldn't find recent info.


Are line numbers in linux stack traces supposed to be working 
at this point?


Not the ones that the program itself prints on an unhandled 
exception. The main problem is with licensing (GPL). See here for 
details:

https://d.puremagic.com/issues/show_bug.cgi?id=1001

Because I'm not getting any with 2.066.0 with either -g or -gc 
even when running under gdb. Kind of a pain, esp. compared to D 
dev on windows.


It should work when running under gdb. Make sure you're using a 
recent gdb and you're not stripping the binary. If you link as a 
separate step, you may need to pass -g to DMD during linking as 
well. For delegates invoked through the runtime, or to see stack 
traces of crashes inside the runtime/phobos, you may need to 
rebuild Phobos and Druntime with -gs. Don't use -gc, it is no 
longer relevant.


Re: std.parallelism curious results

2014-10-05 Thread Sativa via Digitalmars-d-learn
Two problems. One: you should create your threads outside the 
stopwatch; it is not generally a fair comparison with the real 
world, and it throws off the results for short tasks.


Second, you are creating one thread per integer; this is bad. Do 
you really want to create 1B threads when you probably only have 
4 cores?


Below there are 4 threads used. Each thread adds up 1/4 of the 
integers. So it is like 4 threads, each adding up 250M integers. 
The speed, compared to a single thread adding up 250M integers, 
shows how much the parallelism costs per thread.


import std.stdio, std.parallelism, std.datetime, std.range, 
core.atomic;


void main()
{   
StopWatch sw;
shared ulong sum1 = 0, sum2 = 0, sum3 = 0, time1, time2, time3;

auto numThreads = 4;
ulong iter = numThreads*10UL;


auto thds = parallel(iota(0, iter, iter/numThreads));

sw.start();
	foreach(i; thds) { ulong s = 0; for(ulong k = 0; k < iter/numThreads; k++) { s += k; } s += i*iter/numThreads; atomicOp!"+="(sum1, s); }

sw.stop(); time1 = sw.peek().usecs;



	sw.reset(); sw.start(); for (ulong i = 0; i < iter; ++i) { sum2 += i; } sw.stop(); time2 = sw.peek().usecs;


writefln("parallel sum : %s, elapsed %s us", sum1, time1);
writefln("single thread sum : %s, elapsed %s us", sum2, time2);
writefln("Efficiency : %s%%", 100*time2/time1);
}

http://dpaste.dzfl.pl/bfda7bb2e2b7

Some results:

parallel sum : 780, elapsed 3356 us
single thread sum : 780, elapsed 1984 us
Efficiency : 59%


(Not sure all the code is correct; the point is you were creating 
1B threads with 1B atomic operations. The worst possible 
comparison one can do between single- and multi-threaded tests.)





Re: How to detect start of Unicode symbol and count amount of graphemes

2014-10-05 Thread Jacob Carlborg via Digitalmars-d-learn

On 2014-10-05 14:09, Uranuz wrote:


Maybe there is some idea how to just detect first code unit of grapheme
without overhead for using Grapheme struct? I just tried to check if ch
< 128 (for UTF-8). But this doesn't work. How to check if byte is
continuation of code for single code point or if new sequence started?


Have a look here [1]. For example, if you have a code point between 
U+0080 and U+07FF, you know that you need two bytes to encode that 
whole code point.


[1] http://en.wikipedia.org/wiki/UTF-8#Description

--
/Jacob Carlborg


Re: std.parallelism curious results

2014-10-05 Thread Ali Çehreli via Digitalmars-d-learn

On 10/05/2014 07:27 AM, flamencofantasy wrote:

 I am summing up the first 1 billion integers in parallel and in a single
 thread and I'm observing some curious results;

 parallel sum : 499999999500000000, elapsed 102833 ms
 single thread sum : 499999999500000000, elapsed 1667 ms

 The parallel version is 60+ times slower

Reducing the number of threads is key. However, unlike what others said, 
parallel() does not use that many threads. By default, TaskPool objects 
are constructed with 'totalCPUs - 1' worker threads. All of parallel()'s 
iterations are executed on those few threads.


The main problem here is the use of atomicOp, which necessarily 
synchronizes the whole process.


Something like the following takes advantage of parallelism and reduces 
the execution time by half on my machine (4 hyperthreaded cores, 2 
physical ones).


ulong adder(ulong beg, ulong end)
{
ulong localSum = 0;

foreach (i; beg .. end) {
localSum += i;
}

return localSum;
}

enum totalTasks = 10;

foreach(i; parallel(iota(0, totalTasks)))
{
ulong beg = i * iter / totalTasks;
ulong end = beg + iter / totalTasks;

    atomicOp!"+="(sum, adder(beg, end));
}

Ali



Re: std.parallelism curious results

2014-10-05 Thread Sativa via Digitalmars-d-learn

On Sunday, 5 October 2014 at 21:25:39 UTC, Ali Çehreli wrote:
import std.stdio, std.cstream, std.parallelism, std.datetime, 
std.range, core.atomic;


void main()
{   
StopWatch sw;
	shared ulong sum1 = 0; ulong sum2 = 0, sum3 = 0, time1, time2, 
time3;


	enum numThreads = 4; // If numThreads is a variable then it 
significantly slows down the process

ulong iter = 100L;
	iter = numThreads*cast(ulong)(iter/numThreads); // Force iter to 
be a multiple of the number of threads so we can partition 
uniformly


	auto thds = parallel(iota(0, cast(uint)iter, 
cast(uint)(iter/numThreads)));


sw.reset(); sw.start();
	foreach(i; thds) { ulong s = 0; for(ulong k = 0; k < iter/numThreads; k++) { s += k; } s += i*iter/numThreads; atomicOp!"+="(sum1, s); }

sw.stop(); time1 = sw.peek().usecs;



	sw.reset(); sw.start(); for (ulong i = 0; i < iter; ++i) { sum2 += i; } sw.stop(); time2 = sw.peek().usecs;


writefln("parallel sum : %s, elapsed %s us", sum1, time1);
writefln("single thread sum : %s, elapsed %s us", sum2, time2);
if (time1 > 0) writefln("Efficiency : %s%%", 100*time2/time1);
din.getc();
}

Playing around with the code above, it seems the execution time is 
significantly affected by whether numThreads is an enum or a variable 
(efficiency goes from over 100% to under 100%).


results on a 4 core laptop with release builds:

parallel sum : 4950, elapsed 2469 us
single thread sum : 4950, elapsed 8054 us
Efficiency : 326%


when numThreads is an int:

parallel sum : 4950, elapsed 21762 us
single thread sum : 4950, elapsed 8033 us
Efficiency : 36%

