Re: XCB Bindings?

2014-04-15 Thread Marco Leise via Digitalmars-d
Am Tue, 15 Apr 2014 22:38:48 +
schrieb Jeroen Bollen jbin...@gmail.com:

 Does anyone know of any (preferably complete) XCB bindings for D?

2 of the 2 people I know who looked into this decided that D
bindings for C bindings for X is a silly exercise, since these
C bindings are 95% generated automatically from XML files and
there is no reason why that generator couldn't be adapted to
directly output D code to create an XDB.
The remaining 5% (mostly login and X server connection
handling) that would have to be written manually never got
implemented though :p

-- 
Marco



Re: DIP60: @nogc attribute

2014-04-18 Thread Marco Leise via Digitalmars-d
Am Thu, 17 Apr 2014 06:19:56 +
schrieb Ola Fosheim Grøstad
ola.fosheim.grostad+dl...@gmail.com:

 On Wednesday, 16 April 2014 at 23:14:27 UTC, Walter Bright wrote:
  On 4/16/2014 3:45 PM, Ola Fosheim Grøstad wrote: I've written 
  several myself that do not use malloc.
 
 If it is shared or can call brk() it should be annotated.
 
  Even the Linux kernel does not use malloc. Windows offers many 
  ways to allocate memory without malloc. Trying to have a core 
  language detect attempts to write a storage allocator is way, 
  way beyond the scope of what is reasonable for it to do.
 
 Library and syscalls can be marked, besides you can have dynamic 
 tracing in debug mode.

It's a bit of a grey zone. There are probably real-time
malloc() implementations out there. And syscalls like mmap()
can be used to allocate virtual memory or just to map a file
into virtual memory. If you mark all syscalls, that
distinction doesn't matter, of course.

At the end of the day you cannot really trace what a library
uses that you happen to call into. Or at least not without
significant overhead at runtime.

-- 
Marco



Re: DIP60: @nogc attribute

2014-04-19 Thread Marco Leise via Digitalmars-d
Am Wed, 16 Apr 2014 20:32:20 +
schrieb Peter Alexander peter.alexander...@gmail.com:

 On Wednesday, 16 April 2014 at 20:29:17 UTC, bearophile wrote:
  Peter Alexander:
 
  (I assume that nothrow isn't meant to be there?)
 
  In D nothrow functions can throw errors.
 
 Of course, ignore me :-)
 
 
  You could do something like this:
 
  void foo() @nogc
  {
 static err = new Error();
 if (badthing)
 {
 err.setError("badthing happened");
 throw err;
 }
  }
 
  To be mutable err also needs to be __gshared.
 
 But then it isn't thread safe. Two threads trying to set and 
 throw the same Error is a recipe for disaster.

Also: As far as I remember from disassembling C++, static
variables in functions are initialized on first access and
guarded by a bool. The first call to foo() would execute
err = new Error(); in that case. This code should not
compile under @nogc.

-- 
Marco



Re: DIP60: @nogc attribute

2014-04-21 Thread Marco Leise via Digitalmars-d
Am Sun, 20 Apr 2014 08:19:45 +
schrieb monarch_dodra monarchdo...@gmail.com:

 D static initialization doesn't work the same way. Everything is 
 initialized as the program is loaded, […]

Ah ok, it's all good then :)

 Also, just doing this is good enough:
 
 //
 void foo() @nogc
 {
 static err = new Error("badthing happened");
 if (badthing)
 throw err;
 }
 //
 
 It does require the message be known beforehand, and not custom 
 built. But then again, where were you going to allocate the 
 message, and if allocated, who would clean it up?
 
 
 
 That said, while the approach works, there could be issues with 
 re-entrance, or chaining exceptions.

Yes, we've discussed the issues with that approach in other
threads. At least this allows exceptions to be used at all.

-- 
Marco



Re: Compile-time memory footprint of std.algorithm

2014-04-23 Thread Marco Leise via Digitalmars-d
Am Wed, 23 Apr 2014 21:23:17 +0400
schrieb Dmitry Olshansky dmitry.o...@gmail.com:

 23-Apr-2014 20:56, Daniel Murphy wrote:
  Dmitry Olshansky  wrote in message 
  news:lj7mrr$1p5s$1...@digitalmars.com...
  At a times I really don't know why can't we just drop in a Boehm GC
  (the stock one, not homebrew stuff) and be done with it. Speed? There
  is no point in speed if it leaks that much.
 
  Or you know, switch to D and use druntime's GC.
 
 Good point. Can't wait to see D-only codebase.

Hmm. DMD doesn't use a known and tried imprecise GC because
it is a lot slower. How is DMD written in D using the druntime
GC going to help that? I have wondered about this ever since
there was talk about DDMD. I'm totally expecting compile times
to multiply by 1.2 or so.

-- 
Marco



Re: Discusssion on the Discussion of the Design for a new GC

2014-04-23 Thread Marco Leise via Digitalmars-d
Am Wed, 23 Apr 2014 18:35:25 +
schrieb Messenger d...@shoot.me:

 What is the state of Rainer Schütze's precise gc? Duplication of 
 effort and all that.

+1. And I hope you know what you are up to :D. Some people
may expect a magic pill to emerge from your efforts that makes
the GC approx. as fast as manual memory management for typical
uses or at least as good as the one in Java. We must not
forget that Java has just-in-time compilation and no raw
pointer access. They might have found clever ways to make use
of features/restrictions in Java that are not available to D.
Memory compaction is one off the top of my head.

I only know for sure that you are looking into using some
ideas from TCMalloc. Other than that, what are the exact
problems you are trying to solve? That would be good to know,
since different goals might require different implementations.
E.g. a precise GC is generally slowed down by checking
data types, but it doesn't keep memory alive because some
float variable happens to look like a pointer to it.

What are the limitations of garbage collection? As an example:
If someone loads some million items graph structure into
memory, can they still make any assumption about the run time
of GC.alloc()? Can generational collection be implemented?

-- 
Marco



Re: [OT] from YourNameHere via Digitalmars-d

2014-04-23 Thread Marco Leise via Digitalmars-d
Am Tue, 22 Apr 2014 20:47:13 +1000
schrieb Manu via Digitalmars-d digitalmars-d@puremagic.com:

 On 22 April 2014 19:00, Walter Bright via Digitalmars-d
 digitalmars-d@puremagic.com wrote:
  On 4/21/2014 10:49 PM, Manu via Digitalmars-d wrote:
 
  I like gmail. I've been using it for the better part of 10 years, and
  I can access it from anywhere. Installing client software to read
  email feels like I'm back on my Amiga in the 90's ;)

I feel quite the opposite. Recently my email provider lured me
into some paid option by cleverly placing a buy option right
after the login where you expect some inbox button.
With a client software you know what you have. Things don't
unexpectedly change. You can have your messages on your
computer and don't depend on internet and server uptime.
Well, I'd say that about any cloud service I guess.

  Congrats on fixing it so it doesn't send the html version! This is much
  better.
 
 Yeah sorry. I could never tell the difference. Since it's not a full
 client, it doesn't have features like raw-view or anything.

If posts actually contain something that cannot be represented
in plain text I don't mind some HTML. All posts load quickly
in Claws Mail anyways, regardless of mime-type. It's just not
so exciting to view the very same text message with a
non-default font. :p

-- 
Marco



Re: Package permission and symbol resolution

2014-04-23 Thread Marco Leise via Digitalmars-d
Am Tue, 22 Apr 2014 18:07:21 +1000
schrieb Manu via Digitalmars-d digitalmars-d@puremagic.com:

   extern (C) void func(const(char)* pString);

By the way: What about adding nothrow there?

-- 
Marco



Re: Static Analysis Tooling / Effective D

2014-04-23 Thread Marco Leise via Digitalmars-d
Am Tue, 21 Jan 2014 04:34:56 +
schrieb Brian Schott briancsch...@gmail.com:

 There's a small feature wishlist in the project's README, but I'd 
 like to get some opinions from the newsgroup: What kinds of 
 errors have you seen in your code that you think a static 
 analysis tool could help with?

Yes, this one:

size_t shiftAmount = 63;
[…]
auto x = 1 << shiftAmount;

The trouble is that auto will now resolve to int instead of
size_t as indicated by the type of shiftAmount. Sure, my fault
was to use an innocent »1« instead of »cast(size_t) 1«. So the
result is:

int x = -2147483648;

But »1 << size_t« doesn't always yield an int result! Compare to
this:

size_t x = 1 << shiftAmount;

which becomes:

size_t x = 18446744071562067968;


Two possible warnings could be:
- Shifting an »int« by a »size_t« is not the correct way to
  enforce a »size_t« result. Please use
  »cast(size_t) 1 << shiftAmount« if that was the intention.
- »auto« variable definition will resolve to »int« and may
  lose information from expression »1 << shiftAmount«. Please
  replace »auto« with »int« if that is what you want or set
  the correct data type otherwise.

In both cases an explicit mention of a data type resolves the
ambiguity.

-- 
Marco



Re: Static Analysis Tooling / Effective D

2014-04-23 Thread Marco Leise via Digitalmars-d
Am Wed, 23 Apr 2014 22:56:27 -0400
schrieb Steven Schveighoffer schvei...@yahoo.com:

 On Wed, 23 Apr 2014 22:56:54 -0400, Marco Leise marco.le...@gmx.de wrote:
 
  Am Tue, 21 Jan 2014 04:34:56 +
  schrieb Brian Schott briancsch...@gmail.com:
 
  There's a small feature wishlist in the project's README, but I'd
  like to get some opinions from the newsgroup: What kinds of
  errors have you seen in your code that you think a static
  analysis tool could help with?
 
  Yes, this one:
 
  size_t shiftAmount = 63;
  […]
  auto x = 1 << shiftAmount;
 
  The trouble is that auto will now resolve to int instead of
  size_t as indicated by the type of shiftAmount. Sure, my fault
  was to use an innocent »1« instead of »cast(size_t) 1«.
 
 You could use 1UL instead.
 
 -Steve

No, that would give you a ulong result.

-- 
Marco



Re: Static Analysis Tooling / Effective D

2014-04-24 Thread Marco Leise via Digitalmars-d
Am Thu, 24 Apr 2014 13:11:18 +0200
schrieb Artur Skawina via Digitalmars-d
digitalmars-d@puremagic.com:

 `size_t x = 1 << shiftAmount` is definitely not something that
 should be recommended, see above. Just use the correct type on
 the lhs of shift operators.

auto x = cast(size_t) 1 << shiftAmount;  // really ?! :(

In that case it is better to define ONE as a constant :)

enum ONE = cast(size_t) 1;
auto x = ONE << shiftAmount;

-- 
Marco



Re: Static Analysis Tooling / Effective D

2014-04-24 Thread Marco Leise via Digitalmars-d
Am Thu, 24 Apr 2014 10:26:48 -0400
schrieb Steven Schveighoffer schvei...@yahoo.com:

 On Wed, 23 Apr 2014 23:15:01 -0400, Marco Leise marco.le...@gmx.de wrote:
 
  Am Wed, 23 Apr 2014 22:56:27 -0400
  schrieb Steven Schveighoffer schvei...@yahoo.com:
 
  On Wed, 23 Apr 2014 22:56:54 -0400, Marco Leise marco.le...@gmx.de  
  wrote:
 
   Am Tue, 21 Jan 2014 04:34:56 +
   schrieb Brian Schott briancsch...@gmail.com:
  
   There's a small feature wishlist in the project's README, but I'd
   like to get some opinions from the newsgroup: What kinds of
   errors have you seen in your code that you think a static
   analysis tool could help with?
  
   Yes, this one:
  
   size_t shiftAmount = 63;
   […]
    auto x = 1 << shiftAmount;
  
   The trouble is that auto will now resolve to int instead of
   size_t as indicated by the type of shiftAmount. Sure, my fault
   was to use an innocent »1« instead of »cast(size_t) 1«.
 
  You could use 1UL instead.
 
  -Steve
 
  No, that would give you a ulong result.
 
 Hm... I was thinking in terms of 1 << 63, that must be a ulong, no?

Actually in _that_ case the compiler will yell at you that the
valid range to shift an »int« is [0..31].

 But I see your point that size_t may be 32 bits.

 I also think this will work:
 
 size_t(1);
 
 -Steve

Both you and Artur mentioned it. Will this generalized ctor
syntax be in 2.066? It looks much less like code smell when
you don't have to use a cast any more even if it is just a
rewrite.

-- 
Marco



Re: DIP61: Add namespaces to D

2014-04-27 Thread Marco Leise via Digitalmars-d
Am Sun, 27 Apr 2014 00:08:17 -0700
schrieb Walter Bright newshou...@digitalmars.com:

 On 4/26/2014 11:22 PM, Daniel Murphy wrote:
  I am NOT suggesting module name and namespace mangling should be tied 
  together.
  Use D modules for symbol organisation, and add a simple feature for 
  specifying a
  C++ namespace when required.
 
 The 'namespace' feature actually does work analogously to modules.

Does that mean the full name in D would become something like this?:

[D module ] [C++ namespace]
company.project.company.project.func

-- 
Marco



Re: Discusssion on the Discussion of the Design for a new GC

2014-05-04 Thread Marco Leise via Digitalmars-d
Am Wed, 30 Apr 2014 19:10:04 +
schrieb Orvid King blah38...@gmail.com:

 Just thought of a minor addition to the guidelines.
 
 While discussion of the naming of the public API should occur on 
 the newsgroup, discussion of the names of locals, or non-public 
 APIs should occur on Github.
 
 Any objections/concerns/improvements to this?

No, and I think it comes naturally that we don't need to
discuss internal variable names here. :)

-- 
Marco



Re: A few considerations on garbage collection

2014-05-04 Thread Marco Leise via Digitalmars-d
Am Wed, 30 Apr 2014 08:33:25 -0700
schrieb Andrei Alexandrescu seewebsiteforem...@erdani.org:

 I'm mulling over a couple of design considerations for allocators, and 
 was thinking of the following restriction:
 
 1. No out-of-bounds tricks and no pointer arithmetic. Consider:
 
 int[] a = new int[1000];
 int* p = a[500 .. $].ptr;
 a = a[250 .. 750];
 
 Subsequently the GC should be within its rights to deallocate any memory 
 within the first and last 250 integers allocated, even though in theory 
 the user may get to them by using pointer arithmetic.

I see that you are trying to account for allocator designs
that could reuse these memory fragments.
If this is for @safe, then maybe some memory could be released,
but you'd have to statically verify that internal pointers
don't make it into unsafe code. I wonder if any memory
would be released if I wrote:

size_t length = 100;
int* p = (new int[](length)).ptr;
GC.collect();
p[length-1] = 42;

So it is difficult to give a good answer. I'd say no until it
is clear how it would work outside of @safe.

 6. The point above brings to mind more radical possibilities, such as 
 making all arrays reference-counted and allowing compulsive deallocation 
 when the reference counter goes down to zero. That would rule out things 
 like escaping pointers to data inside arrays, which is quite extreme.

Would that affect all arrays, only arrays containing structs
or only affect arrays containing structs with dtors?

printf("hello\n".ptr);

should still work after all.

-- 
Marco



Re: FYI - mo' work on std.allocator

2014-05-04 Thread Marco Leise via Digitalmars-d
Am Thu, 01 May 2014 08:01:43 -0700
schrieb Andrei Alexandrescu seewebsiteforem...@erdani.org:

 On 5/1/14, 3:36 AM, Temtaime wrote:
  Hi Andrey. Have you even tested your allocator on different arch (32/64)
  and/or with different compiler flags ?
 
 Thanks, I'll look into that! -- Andrei

If size_t were a distinct type ... just thinking out loud.

-- 
Marco



Re: FYI - mo' work on std.allocator

2014-05-04 Thread Marco Leise via Digitalmars-d
Am Sun, 27 Apr 2014 09:01:58 -0700
schrieb Andrei Alexandrescu seewebsiteforem...@erdani.org:

 On 4/27/14, 7:51 AM, bearophile wrote:
  Andrei Alexandrescu:
 
  Destruction is as always welcome. I plan to get into tracing tomorrow
  morning.
 
  How easy is to implement a OS-portable (disk-backed) virtual memory
  scheme using std.allocator? :-) Is it a good idea to include one such
  scheme in std.allocator?
 
  Bye,
  bearophile
 
 I just added MmapAllocator:
 
 http://erdani.com/d/phobos-prerelease/std_allocator.html#.MmapAllocator
 
 If anyone would like to add a Windows implementation, that would be great.
 
 
 Andrei

Virtual memory allocators seem obvious, but there are some
details to consider.

1) You should not hard code the allocation granularity in the
   long term. It is fairly easy to get it on Windows and Posix
   systems:

On Windows:
  SYSTEM_INFO si;
  GetSystemInfo(si);
  return si.allocationGranularity;

On Posix:
  return sysconf(_SC_PAGESIZE);

2) For embedded Linux systems there is the flag
   MAP_UNINITIALIZED to break the guarantee of getting
   zeroed-out memory. So if it is desired, »zeroesAllocations«
   could be a writable property there.

In the cases where I used virtual memory, I often wanted to
exercise more of its features. As it stands now »MmapAllocator«
works as a basic allocator for 4k blocks of memory. Is that
the intended scope or are you open to supporting all of it?
Off the top of my head there is:

- committing/decommitting ranges of memory
- setting protection attributes
- remapping virtual memory pages to other physical pages

Each allows some use cases that I could expand on if you
want. So it would be beneficial in any case to have those
primitives in a portable form in Phobos. The question is,
should that place be »MmapAllocator« or some std.vm module?

-- 
Marco



Re: FYI - mo' work on std.allocator

2014-05-05 Thread Marco Leise via Digitalmars-d
Am Sun, 04 May 2014 21:05:01 -0700
schrieb Andrei Alexandrescu seewebsiteforem...@erdani.org:

 I've decided that runtime-chosen page sizes are too much of a 
 complication for the benefits.

Alright. Note however, that on Windows the allocation
granularity is larger than the page size (64 KiB). So it is a
cleaner design in my eyes to use portable wrappers around page
size and allocation granularity.

  2) For embedded Linux systems there is the flag
  MAP_UNINITIALIZED to break the guarantee of getting
  zeroed-out memory. So if it is desired, »zeroesAllocations«
  could be a writable property there.
 
 This can be easily done, but from what I gather MAP_UNINITIALIZED is strongly 
 discouraged and only implemented on small embedded systems.

Agreed.

  In the cases where I used virtual memory, I often wanted to
  exercise more of its features. As it stands now »MmapAllocator«
  works as a basic allocator for 4k blocks of memory. Is that
  the intended scope or are you open to supporting all of it?
 
 For now I just wanted to get a basic mmap-based allocator off the 
 ground. I am aware there's a bunch of things to do. The most prominent 
 is that (according to Jason Evans) Linux is pretty bad at munmap() so 
 it's actually better to advise() pages away upon deallocation but never 
 unmap them.


 Andrei

That sounds like a more complicated topic than anything I had
in mind. I think a »std.virtualmemory« module should already
implement all the primitives in a portable form, so we don't
have to do that again for the next use case. Since
cross-platform code is always hard to get right, it could also
avoid latent bugs.
That module would also offer functionality to get the page
size and allocation granularity and wrappers for common needs
like getting n KiB of writable memory. Management however
(i.e. RAII structs) would not be part of it.
It sounds like not too much work with great benefit for a
systems programming language.

-- 
Marco



Re: More radical ideas about gc and reference counting

2014-05-05 Thread Marco Leise via Digitalmars-d
Am Mon, 5 May 2014 09:39:30 -0700
schrieb H. S. Teoh via Digitalmars-d
digitalmars-d@puremagic.com:

 On Mon, May 05, 2014 at 03:55:12PM +, bearophile via Digitalmars-d wrote:
  Andrei Alexandrescu:
  
  I think the needs to support BigInt argument is not a blocker - we
  can release std.rational to only support built-in integers, and then
  adjust things later to expand support while keeping backward
  compatibility. I do think it's important that BigInt supports
  appropriate traits to be recognized as an integral-like type.
  
  Bigints support is necessary for usable rationals, but I agree this
  can't block their introduction in Phobos if the API is good and
  adaptable to the successive support of bigints.
 
 Yeah, rationals without bigints will overflow very easily, causing many
 usability problems in user code.
 
 
  If you, Joseph, or both would want to put std.rational again through
  the review process I think it should get a fair shake. I do agree
  that a lot of persistence is needed.
  
  Rationals are rather basic (important) things, so a little of
  persistence is well spent here :-)
 [...]
 
 I agree, and support pushing std.rational through the queue. So, please
 don't give up, we need it get it in somehow. :)
 
 
 T

That experimental package idea that was discussed months ago
comes to my mind again. Add that thing as exp.rational and
have people report bugs or shortcomings to the original
author. When it seems to be usable by everyone interested it
can move into Phobos proper after the formal review (that
includes code style checks, unit tests etc. that mere users
don't take as seriously).

As long as there is nothing even semi-official, it is tempting
to write such a module from scratch in a quick-and-dirty
fashion and ignore existing work.
The experimental package makes it clear that this code is
eventually going to be the official way and home-brewed stuff
won't have a future. Something in the standard library is much
less likely to be reinvented. On the other hand, once a module
is in Phobos proper, it is close to impossible to change the
API to accommodate a new use case. That's why I think the
most focused library testing and development can happen in the
experimental phase of a module. The longer it is, the more
people will have tried it in their projects before formal
review, which would greatly improve informed decisions.
The original std.rational proposal could have been in active
use now for months!

-- 
Marco



Re: FYI - mo' work on std.allocator

2014-05-05 Thread Marco Leise via Digitalmars-d
Am Mon, 05 May 2014 11:23:58 -0700
schrieb Andrei Alexandrescu seewebsiteforem...@erdani.org:

 On 5/5/14, 9:57 AM, Marco Leise wrote:
  That sounds like a more complicated topic than anything I had
  in mind. I think a »std.virtualmemory« module should already
  implement all the primitives in a portable form, so we don't
  have to do that again for the next use case. Since
  cross-platform code is always hard to get right, it could also
  avoid latent bugs.
  That module would also offer functionality to get the page
  size and allocation granularity and wrappers for common needs
  like getting n KiB of writable memory. Management however
  (i.e. RAII structs) would not be part of it.
  It sounds like not too much work with great benefit for a
  systems programming language.
 
 I think adding portable primitives to 
 http://dlang.org/phobos/std_mmfile.html (plus better yet refactoring its 
 existing code to use them) would be awesome and wouldn't need a DIP. -- 
 Andrei

I like Dmitry's core.vm better, since conceptually we are not
necessarily dealing with memory mapped files, but probably
with just-in-time compilation, circular buffers, memory
access tracing etc. Virtual memory really is a basic building
block.

-- 
Marco



Re: More radical ideas about gc and reference counting

2014-05-05 Thread Marco Leise via Digitalmars-d
Am Mon, 05 May 2014 17:24:38 +
schrieb Dicebot pub...@dicebot.lv:

  That experimental package idea that was discussed months ago
  comes to my mind again. Add that thing as exp.rational and
  have people report bugs or shortcomings to the original
  author. When it seems to be usable by everyone interested it
  can move into Phobos proper after the formal review (that
  includes code style checks, unit tests etc. that mere users
  don't take as seriously).
 
 And same objections still remain.

Sneaky didn't work this time.

-- 
Marco



Re: FYI - mo' work on std.allocator

2014-05-06 Thread Marco Leise via Digitalmars-d
Am Mon, 05 May 2014 21:13:10 +0400
schrieb Dmitry Olshansky dmitry.o...@gmail.com:

 05-May-2014 20:57, Marco Leise wrote:
 
  That sounds like a more complicated topic than anything I had
  in mind. I think a »std.virtualmemory« module should already
  implement all the primitives in a portable form, so we don't
  have to do that again for the next use case. Since
  cross-platform code is always hard to get right, it could also
  avoid latent bugs.
 
 I had an idea of core.vmm. It didn't survive the last review though, 
 plus I never got around to test OSes aside from Windows & Linux.
 Comments on initial design are welcome.
 https://github.com/D-Programming-Language/druntime/pull/653

That's exactly what I had in mind and more. :)
These are all free functions that can be used as building
blocks for more specific objects. Was there a dedicated review
thread on the news group? All I could find was a discussion
about why not to use a VMM struct with static functions as
a namespace replacement.

-- 
Marco



Re: FYI - mo' work on std.allocator

2014-05-06 Thread Marco Leise via Digitalmars-d
Am Tue, 06 May 2014 22:55:37 +0400
schrieb Dmitry Olshansky dmitry.o...@gmail.com:

 06-May-2014 10:20, Marco Leise wrote:
  Am Mon, 05 May 2014 21:13:10 +0400
  schrieb Dmitry Olshansky dmitry.o...@gmail.com:
 
  I had an idea of core.vmm. It didn't survive the last review though,
  plus I never got around to test OSes aside from Windows  Linux.
  Comments on initial design are welcome.
  https://github.com/D-Programming-Language/druntime/pull/653
 
  That's exactly what I had in mind and more. :)
 
 Cool.
 I was ambitious at the start until I realised that there were about 5-6 
 logically consistent primitives of which many OSes provided say 3 or 4 
 with little or inexact overlap. That's why I thought of focusing on 
 common recipes, and provide building blocks for them.

These subtle differences between OSs can kill every clean
design, hehe. The last time I thought about it, I came to the
conclusion that some cross-platform APIs are better designed
around use cases than by blindly mapping OS functions. E.g.
both chmod() and an opaque integer are bad abstractions for
file attributes.

  These are all free functions that can be used as building
  blocks for more specific objects. Was there a dedicated review
  thread on the news group? All I could find was a discussion
  about why not to use a VMM struct with static functions as
  a namespace replacement.
 
 I don't recall such but I think I did a tiny topic on it in general D NG.

You probably mean the same thread:
http://forum.dlang.org/thread/l4u68b$30fo$1...@digitalmars.com

-- 
Marco



Re: radical ideas about GC and ARC : need to be time driven?

2014-05-11 Thread Marco Leise via Digitalmars-d
Am Sun, 11 May 2014 14:52:50 +1000
schrieb Manu via Digitalmars-d digitalmars-d@puremagic.com:

 On 11 May 2014 05:39, H. S. Teoh via Digitalmars-d
 digitalmars-d@puremagic.com wrote:
  On Sat, May 10, 2014 at 09:16:54PM +0200, Xavier Bigand via Digitalmars-d 
  wrote:
   - Same question if D migrate to ARC?
 
  I highly doubt D will migrate to ARC. ARC will probably become
  *possible*, but some language features fundamentally rely on the GC, and
  I can't see how that will ever be changed.
 
 Which ones are incompatible with ARC?

Pass-by-value slices as 2 machine words

-- 
Marco



Re: More radical ideas about gc and reference counting

2014-05-11 Thread Marco Leise via Digitalmars-d
Am Wed, 07 May 2014 06:50:33 +
schrieb Paulo Pinto pj...@progtools.org:

 A *nix package manager is brain dead idea for software
 development as it ties the language libraries to the specific OS
 one is using.

What do you have in mind here? I don't quite get the picture.
Are you opposed to a developer writing a package only for his
pet distribution of Linux and ignoring all others?

The typical packages I know come with some configure script
and offer enough hooks to custom-tailor the installation, so
the library/application can work on any distribution.
Most of the Linux software is open source and their
packages are maintained by the community around the specific
distribution. That doesn't preclude the use of package
managers like Cabal, CPAN, Maven, you-name-it. But for a
systems language integration with the existing C/C++ is of
utmost importance. After all, D compiles to 'regular' native
binaries. Typically when an application is useful, someone
will add it to their favorite distribution as a package
including all the libraries as dependencies.

 Good luck getting packages if the author did not consider your
 OS. Specially if they are only available in binary format, as it
 is standard in the enterprise world.

Then use dub. Oh wait... the packages on code.dlang.org are
open source, too. And at least since the curl debacle we know
that there is not one binary for all *nix systems. I don't
know where you are trying to get with this argument. I think
it has nothing to do with what dub strives for and is worth a
separate topic, »Binary D library distribution«.

 With a language package manager I can produce package XPTO that
 will work on all OS, it won't conflict with the system packages,
 specially important on servers used for CI of multiple projects.
 
 --
 Paulo

What is this XPTO that will magically work on all OS? I never
heard of it, but I'm almost certain it has to do with
languages that compile at most to machine independent byte
code.
And why do you run CI on the live system instead of a chroot
environment, if you are afraid of messing it up? :) I do trust
my package manager to correctly install libraries into a
chroot. It is as simple as prepending an environment variable
override. As a bonus you can then cleanly uninstall/update
libs in the chroot environment with all the sanity checks the
package manager may offer.
A language package manager is a good idea, but there are
certain limits to it once you leave the development stage. At
that point the system package manager takes over. Both should
be considered with equal care.

-- 
Marco



Re: radical ideas about GC and ARC : need to be time driven?

2014-05-11 Thread Marco Leise via Digitalmars-d
Am Mon, 12 May 2014 03:36:34 +1000
schrieb Manu via Digitalmars-d digitalmars-d@puremagic.com:

 On 12 May 2014 02:38, Marco Leise via Digitalmars-d
 digitalmars-d@puremagic.com wrote:
  Am Sun, 11 May 2014 14:52:50 +1000
  schrieb Manu via Digitalmars-d digitalmars-d@puremagic.com:
 
  On 11 May 2014 05:39, H. S. Teoh via Digitalmars-d
  digitalmars-d@puremagic.com wrote:
   On Sat, May 10, 2014 at 09:16:54PM +0200, Xavier Bigand via 
   Digitalmars-d wrote:
- Same question if D migrate to ARC?
  
   I highly doubt D will migrate to ARC. ARC will probably become
   *possible*, but some language features fundamentally rely on the GC, and
   I can't see how that will ever be changed.
 
  Which ones are incompatible with ARC?
 
  Pass-by-value slices as 2 machine words
 
 64bit pointers are only 40-48 bits, so there's 32bits waste for an
 offset... and if the base pointer is 32byte aligned (all allocated
 memory is aligned), then you can reclaim another 5 bits there... I
 think saving an arg register would probably be worth a shift.
 32bit pointers... not so lucky :/
 video games consoles though have bugger all memory, so heaps of spare
 bits in the pointers! :P

And remember how people abused the high bit in 32-bit until
kernels were modified to support the full address space and
the Windows world got that LARGE_ADDRESS_AWARE flag to mark
executables that do not gamble with the high bit.

On the positive side the talk about Rust, in particular how
reference counted pointers decay to borrowed pointers made me
think the same could be done for our scope args. A reference
counted slice with 3 machine words could decay to a 2 machine
word scoped slice. Most of my code at least just works on the
slices and doesn't keep a reference to them. A counter example
is when you have something like an XML parser - a use case
that D traditionally (see Tango) excelled in. The GC
environment and slices make it possible to replace string
copies with cheap slices into the original XML string.

-- 
Marco



Re: More radical ideas about gc and reference counting

2014-05-11 Thread Marco Leise via Digitalmars-d
Am Sun, 11 May 2014 14:41:10 -0700
schrieb Walter Bright newshou...@digitalmars.com:

 Your proposal still relies on a GC to provide the memory safety,
 […] it is a hybrid ARC/GC system.

But I thought ARC cannot be designed without GC to resolve
cycles. Or is your comment pure rhetoric?

-- 
Marco



Re: More radical ideas about gc and reference counting

2014-05-11 Thread Marco Leise via Digitalmars-d
Am Sun, 11 May 2014 17:50:25 -0700
schrieb Walter Bright newshou...@digitalmars.com:

 As long as those pointers don't escape. Am I right in that one cannot store a 
 borrowed pointer into a global data structure?

Right, and that's the point and entirely positive-to-do™.
Your general purpose function does not know how the memory was
allocated, that it receives a pointer to. In particular it
must not assume that it is safe to keep a reference to it as
there are several memory management schemes that are
incompatible with that, like reference counting or stack
allocations.

Expanding on these two, Rust can now safely use _more_
allocation schemes with functions that take borrowed pointers
than is safely possible in D!

RC pointers:
You cannot pass them as raw pointers in D. In Rust they can
be passed as borrowed.

Stack pointers:
Not allowed in D in @safe code and inherently unsafe in
@system code. Again this is safe to do in Rust due to
borrowing.

 The similarity is that there are 
 one way conversions from one to the other, and one of the types is more 
 general. 
 I infer from your other statements about Rust that it doesn't actually have a 
 general pointer type.

Yes it does:
http://static.rust-lang.org/doc/0.10/guide-unsafe.html#raw-pointers

But the design principle in Rust is to only have them in
@system code (speaking in D terms), in particular to interface
with C.

Turning the argument back to D and assuming you wrote a
function that takes a raw pointer because you plan to store it
in a global variable. How do you make sure you get a pointer
to something with infinite life-time? Let me answer this: You
either use GC pointers exclusively or you rely on the
convention that the function takes ownership of the memory.
The former is impractical and the latter cannot be statically
enforced.

Borrowed pointers add an @safe way to deal with the situation
in all contexts where you don't need to store a reference.
But if you _do_ need that capability: ask explicitly for GC
pointers as they can guarantee unlimited life-time.
If that's still too restrictive mark it @system and use raw
pointers (in Rust: unsafe keyword).

Finally, this is not Rust vs. D, because D has had borrowed
pointer function arguments since ages as well - maybe even
longer than Rust. The semantics of in/scope were just never
fully implemented. Once this is done we can also write:

@safe void main()
{
auto stack = 42;
foo(&stack);
}

@safe void foo(scope int*);

-- 
Marco



Re: More radical ideas about gc and reference counting

2014-05-12 Thread Marco Leise via Digitalmars-d
Am Sun, 11 May 2014 22:11:28 -0700
schrieb Walter Bright newshou...@digitalmars.com:

  But I thought ARC cannot be designed without GC to resolve
  cycles.
 
 It can be, there are various schemes to deal with that, including don't 
 create 
 cycles. GC is just one of them.
 
 http://en.wikipedia.org/wiki/Reference_counting#Dealing_with_reference_cycles

Yes that article mentions:
a) avoid creating them
b) explicitly forbid reference cycles
c) Judicious use of weak references
d) manually track that data structure's lifetime
e) tracing garbage collector
f) adding to a root list all objects whose reference
   count is decremented to a non-zero value and periodically
   searching all objects reachable from those roots.

To pick up your statement again: »Your proposal still relies
on a GC to provide the memory safety, […] it is a hybrid
ARC/GC system.«

a) and b) let's assume never creating cycles is not a feasible
  option in a systems programming language
c) and d) don't provide said memory safety
e) and f) ARE tracing garbage collectors

ergo: »But I thought ARC cannot be designed without GC to
resolve cycles.«

You were arguing against Michel Fortin's proposal on the
surface, when your requirement cannot even be fulfilled
theoretically it seems. Which could mean that you don't like
the idea of replacing D's GC with an ARC solution.

»This is the best horse I could find for the price. It is
 pretty fast and ...«
»No, it still has four legs.«

-- 
Marco



Re: More radical ideas about gc and reference counting

2014-05-12 Thread Marco Leise via Digitalmars-d
Am Mon, 12 May 2014 01:54:58 -0700
schrieb Walter Bright newshou...@digitalmars.com:

 On 5/11/2014 10:57 PM, Marco Leise wrote:
  Am Sun, 11 May 2014 17:50:25 -0700
  schrieb Walter Bright newshou...@digitalmars.com:
 
  As long as those pointers don't escape. Am I right in that one cannot 
  store a
  borrowed pointer into a global data structure?
 
  Right, and that's the point and entirely positive-to-do™.
 
 This means that a global data structure in Rust has to decide what memory 
 allocation scheme its contents must use, and cannot (without tagging) mix 
 memory 
 allocation schemes.
 
 For example, let's say a compiler has internally a single hash table of 
 strings. 
 With a GC, those strings can be statically allocated, or on the GC heap, or 
 anything with a lifetime longer than the table's. But I don't see how this 
 could 
 work in Rust.

:( Good question. I have no idea.

-- 
Marco



Re: More radical ideas about gc and reference counting

2014-05-12 Thread Marco Leise via Digitalmars-d
Am Mon, 12 May 2014 09:32:58 +
schrieb Paulo Pinto pj...@progtools.org:

 On Monday, 12 May 2014 at 09:05:39 UTC, John Colvin wrote:
  On Monday, 12 May 2014 at 08:45:56 UTC, Walter Bright wrote:
  On 5/12/2014 12:12 AM, Manu via Digitalmars-d wrote:
  What? You've never offered me a practical solution.
 
  I have, you've just rejected them.
 
 
  What do I do?
 
  1. you can simply do C++ style memory management. 
  shared_ptr, etc.
 
  2. you can have the non-pausible code running in a thread that 
  is not registered with the gc, so the gc won't pause it. This 
  requires that this thread not allocate gc memory, but it can 
  use gc memory allocated by other threads, as long as those 
  other threads retain a root to it.
 
  3. D allows you to create and use any memory management scheme 
  you want. You are simply not locked into GC. For example, I 
  rewrote my Empire game into D and it did not do any allocation 
  at all - no GC, not even malloc. I know that you'll need to do 
  allocation, I'm just pointing out that GC allocations and 
  pauses are hardly inevitable.
 
  4. for my part, I have implemented @nogc so you can track down 
  gc usage in code. I have also been working towards refactoring 
  Phobos to eliminate unnecessary GC allocations and provide 
  alternatives that do not allocate GC memory. Unfortunately, 
  these PR's just sit there.
 
  5. you can divide your app into multiple processes that 
  communicate via interprocess communication. One of them 
  pausing will not pause the others. You can even do things like 
  turn off the GC collections in those processes, and when they 
  run out of memory just kill them and restart them. (This is 
  not an absurd idea, I've heard of people doing that 
  effectively.)
 
  6. If you call C++ libs, they won't be allocating memory with 
  the D GC. D code can call C++ code. If you run those C++ libs 
  in separate threads, they won't get paused, either (see (2)).
 
  7. The Warp program I wrote avoids GC pauses by allocating 
  ephemeral memory with malloc/free, and (ironically) only using 
  GC for persistent data structures that should never be free'd. 
  Then, I just turned off GC collections, because they'd never 
  free anything anyway.
 
  8. you can disable and enable collections, and you can cause 
  collections to be run at times when nothing is happening (like 
  when the user has not input anything for a while).
 
 
  The point is, the fact that D has 'new' that allocates GC 
  memory simply does not mean you are obliged to use it. The GC 
  is not going to pause your program if you don't allocate with 
  it. Nor will it ever run a collection at uncontrollable, 
  random, asynchronous times.
 
  The only solutions to the libraries problem that I can see here 
  require drastic separation of calls to said libraries from any 
  even vaguely time critical code. This is quite restrictive.
 
  Yes, calling badly optimised libraries from a hot loop is a bad 
  idea anyway, but the GC changes this from
 
  well it might take a little more time than usual, but we can 
  spare a few nano-seconds and it'll show up easily in the 
  profiler
 
  to
 
  it might, sometimes, cause the GC to run a full collection on 
  our 3.96 / 4.00 GB heap with an associated half-second pause.
 
  And here we go again, I can't use that library, it's memory 
  management scheme is incompatible with my needs, I'll have to 
  rewrite it myself...
 
 A badly placed malloc() in library code can also trigger OS
 virtualization mechanisms and make processes being swapped out to
 disk, with the respective overhead in disk access and time spent
 on kernel code.
 
 So it is just not the we can spare a few nano-seconds.
 
 --
 Paulo

Yes, it could easily extend to a longer wait. I think we all
know programs that hang while the system is swapping out.
Don't let it get to that! A PC game would typically reduce
caches or texture resolutions before running out of RAM.

Linux has a threshold of free pages it tries to keep available
at any time to satisfy occasional small allocations.
http://www.science.unitn.it/~fiorella/guidelinux/tlk/node39.html

All-in-all malloc is less likely to cause long pauses. It just
allocates and doesn't ask itself if there might be dead memory
to salvage to satisfy a request.

Time will tell if all well written D libraries will be @nogc
to move the question of allocations to the user.

-- 
Marco



Re: radical ideas about GC and ARC : need to be time driven?

2014-05-13 Thread Marco Leise via Digitalmars-d
Am Mon, 12 May 2014 08:44:51 +
schrieb Marc Schütz schue...@gmx.net:

 On Monday, 12 May 2014 at 04:22:21 UTC, Marco Leise wrote:
  On the positive side the talk about Rust, in particular how
  reference counted pointers decay to borrowed pointers made me
  think the same could be done for our scope args. A reference
  counted slice with 3 machine words could decay to a 2 machine
  word scoped slice. Most of my code at least just works on the
  slices and doesn't keep a reference to them. A counter example
  is when you have something like an XML parser - a use case
  that D traditionally (see Tango) excelled in. The GC
  environment and slices make it possible to replace string
  copies with cheap slices into the original XML string.
 
 Rust also has a solution for this: They have lifetime 
 annotations. D's scope could be extended to support something 
 similar:
 
  scope(input) string getSlice(scope string input);
 
 or with methods:
 
  struct Xml {
  scope(this) string getSlice();
  }
 
 scope(symbol) means, this value references/aliases (parts of) 
 the value referred to by symbol. The compiler can then make 
 sure it is never assigned to variables with longer lifetimes than 
 symbol.

Crazy shit, now we are getting into concepts that I have no
idea of how well they play in real code. There are no globals,
but threads all create their own call stacks with independent
lifetimes. So at that point lifetime annotations become
interesting.

-- 
Marco



Re: D Language Version 3

2014-05-29 Thread Marco Leise via Digitalmars-d
Am Thu, 29 May 2014 00:06:29 +
schrieb deadalnix deadal...@gmail.com:

 On Wednesday, 28 May 2014 at 22:48:22 UTC, Jonathan M Davis via
 Digitalmars-d wrote:
 
 
 That's interesting :D

So I figure you didn't read the text/html part with the nice
1.5x line height which makes reading easy on the eyes? No? :)

-- 
Marco



Re: The GC and performance, but not what you expect

2014-05-29 Thread Marco Leise via Digitalmars-d
Am Thu, 29 May 2014 20:01:16 +
schrieb Sean Kelly s...@invisibleduck.org:

 On Thursday, 29 May 2014 at 19:00:24 UTC, Walter Bright wrote:
 
  If it's single threaded, and the single thread is doing the 
  collecting, who is starting up a new thread?
 
 class Foo {
   ~this() {
   auto t = new Thread({ auto a = new char[100]; });
   t.start();
   }
 }

Nice try, but destructors called by the GC are currently
effectively @nogc. So don't try that at home.

-- 
Marco



Re: The GC and performance, but not what you expect

2014-05-29 Thread Marco Leise via Digitalmars-d
Am Thu, 29 May 2014 12:35:46 +
schrieb safety0ff safety0ff@gmail.com:

 On Thursday, 29 May 2014 at 10:09:18 UTC, Atila Neves wrote:
 
  The GC is preventing me from beating Java, but not because of
  collections. It's the locking it does to allocate instead! I
  don't know about the rest of you but I definitely didn't see 
  that
  one coming.
 
 I would have seen it coming if the app. was multi-threaded.
 
 Hopefully a re-write of the GC would include something akin to 
 Boehm GC's THREAD_LOCAL_ALLOC / google's TCMalloc.

Orvid has been inspired by TCMalloc. If he can realize his
idea of a GC it should have a similar thread local allocation
heap with no locking.

-- 
Marco



Re: The GC and performance, but not what you expect

2014-05-29 Thread Marco Leise via Digitalmars-d
Am Thu, 29 May 2014 18:00:13 +
schrieb Brian Schott briancsch...@gmail.com:

 On Thursday, 29 May 2014 at 10:09:18 UTC, Atila Neves wrote:
  If you're profiling binaries on Linux, this thing is a must 
  have and I have no idea how I'd never heard about it before.
 
 On the topic of perf, I found a stupid trick the other day and 
 wrote it down on the wiki: http://wiki.dlang.org/Perf

But does that beat simply attaching the debugger to the
process and pausing it?

-- 
Marco



Re: std.experimental – DConf?

2014-05-29 Thread Marco Leise via Digitalmars-d
Am Thu, 29 May 2014 18:35:49 +0200
schrieb Joseph Rushton Wakeling via Digitalmars-d
digitalmars-d@puremagic.com:

 On 29/05/14 16:47, Steven Schveighoffer via Digitalmars-d wrote:
  javax was the experimental branch for Java's experimental code. Now 
  javax.xml is
  PERMANENT.
 
 Point taken.  That said, I fear that _any_ module or package that gets widely 
 used carries such a risk.

But why didn't they change it? 

o Didn't they make it clear enough that a rename is coming?
o Was it known, but impractical to change all Java code?
  (I.e. closed source byte code files would break)
o Were both the original xml implementation and javax.xml used
  too much to replace one with the other? (Assuming the APIs
  were different.)

I remember javax being mentioned here a few times as to why
version 2 packages are proven bad, but no one ever mentioned
what ultimately stopped Sun from moving javax.* packages.
It is possible that in D we might have a different view on it.
E.g. versioned Phobos shared libraries or better communication
of what to expect from the experimental package.

-- 
Marco



Re: Hardware Traps for Integer Overflow

2014-05-30 Thread Marco Leise via Digitalmars-d
Am Thu, 29 May 2014 20:10:13 +
schrieb John Colvin john.loughran.col...@gmail.com:

 On Thursday, 29 May 2014 at 20:01:25 UTC, Tobias Pankrath wrote:
  On Thursday, 29 May 2014 at 15:32:54 UTC, Wanderer wrote:
  I don't see any valid alternatives. What should ideally happen 
  if you increment 0x..? Should the value remain the 
  same?
 
  I know at least one firmware running in cars from several 
  manufacturers where
  this is the desired behavior in dozens of places. Saturated 
  arithmetic is common.
 
  (I'm not saying it should be the default)
 
 There are dedicated instruction in more recent versions of SSE 
 for saturated arithmetic.

Actually, such instructions have existed since MMX on Intel
CPUs. The question is: can these new SSE instructions replace
integer math seamlessly?

-- 
Marco



Re: Including Dub with D

2014-05-30 Thread Marco Leise via Digitalmars-d
Am Sat, 24 May 2014 21:32:04 +0200
schrieb Sönke Ludwig slud...@rejectedsoftware.com:

 * It may also be a good step to solve the chicken-egg issue here, where 
 the argument is that because SDL isn't so common, it shouldn't be used. 
 I think it's a really nice little format that deserves to get some support.

It looks to me like what .INI files should have been defined
to be from day one. With its raw text file appearance, it
looks less like a data definition language for computers than
XML or even JSON does. Only the occasional {} or = really
reminds you that there is a formal syntax to it.
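For illustration, a hypothetical package description in SDL (assuming dub adopts the format Sönke proposes) shows that raw-text appearance:

```
name "myapp"
description "An example application"
dependency "vibe-d" version="~>0.7"
configuration "release" {
    targetType "executable"
}
```

Tags and values read almost like an INI file, with {} only appearing for nesting.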

-- 
Marco



Re: The GC and performance, but not what you expect

2014-05-30 Thread Marco Leise via Digitalmars-d
Am Fri, 30 May 2014 10:29:58 +0200
schrieb Rainer Schuetze r.sagita...@gmx.de:

 
 
 On 29.05.2014 12:09, Atila Neves wrote:
  The GC is preventing me from beating Java, but not because of
  collections. It's the locking it does to allocate instead! I
  don't know about the rest of you but I definitely didn't see that
  one coming.
 
 
 A lock should not be more than a CAS operation that always succeeds in a 
 single threaded application, but OS overhead might be considerable.
 
 Adam Sakareassen has recently announced a rewrite of the GC that has 
 lock-free allocations: 
 http://forum.dlang.org/thread/mailman.655.1399956110.2907.digitalmar...@puremagic.com
 
 Unfortunately, it isn't available yet...

I recently tried to write a single-type memory allocator and
thought I'd come out faster than TCMalloc due to its
simplicity. But as soon as I added a single CAS I was already
over the time that TCMalloc needs. That way I learned that CAS
is not as cheap as it looks and the fastest allocators work
thread local as long as possible.

-- 
Marco



Re: New opportunities for D = ASM.js

2014-05-30 Thread Marco Leise via Digitalmars-d
Am Tue, 13 May 2014 13:38:43 -0400
schrieb Etienne etci...@gmail.com:

 Also, nothing says a thread pool won't be in the works if it becomes 
 necessary.

Besides that, JavaScript is single-threaded. That could be a
bit of a show-stopper.

-- 
Marco



Re: The GC and performance, but not what you expect

2014-05-30 Thread Marco Leise via Digitalmars-d
Am Fri, 30 May 2014 15:54:57 +
schrieb Ola Fosheim Grøstad
ola.fosheim.grostad+dl...@gmail.com:

 On Friday, 30 May 2014 at 09:46:10 UTC, Marco Leise wrote:
  simplicity. But as soon as I added a single CAS I was already
  over the time that TCMalloc needs. That way I learned that CAS
  is not as cheap as it looks and the fastest allocators work
  thread local as long as possible.
 
 22 cycles latency if on a valid cacheline?
 + overhead of going to memory
 
 Did you try to add explicit prefetch, maybe that would help?
 
 Prefetch is expensive on Ivy Brigde (43 cycles throughput, 0.5 
 cycles on Haswell). You need instructions to fill the pipeline 
 between PREFETCH and LOCK CMPXCHG. So you probably need to go ASM 
 and do a lot of testing on different CPUs. Explicit prefetching, 
 lock free strategies etc are tricky to get right. Get it wrong 
 and it is worse than the naive implementation.

I'm on a Core 2 Duo. But this doesn't sound like I want to try
it. core.atomic is as low as I wanted to go. Anyway I deleted
that code when I realized just how fast allocation is with
TCMalloc already. And that's a general purpose allocator.

-- 
Marco



Re: Performance

2014-05-30 Thread Marco Leise via Digitalmars-d
Run this with: -O3 -frelease -fno-assert -fno-bounds-check -march=native
This way GCC and LLVM will recognize that you alternately add
p0 and p1 to the sum and partially unroll the loop, thereby
removing the condition. It takes 1.4 nanoseconds per step
on my not so new 2.0 Ghz notebook, so I assume your PC will
easily reach parity with your original C++ version.



import std.stdio;
import core.time;

alias ℕ = size_t;

void main()
{
    run!plus(1_000_000_000);
}

double plus(ℕ steps)
{
    enum p0 = 0.0045;
    enum p1 = 1.00045452 - p0;

    double sum = 1.346346;
    foreach (i; 0 .. steps)
        sum += i % 2 ? p1 : p0;
    return sum;
}

void run(alias func)(ℕ steps)
{
    auto t1 = TickDuration.currSystemTick;
    auto output = func(steps);
    auto t2 = TickDuration.currSystemTick;
    auto nanotime = 1_000_000_000.0 / steps * (t2 - t1).length / TickDuration.ticksPerSec;
    writefln("Last: %s", output);
    writefln("Time per op: %s", nanotime);
    writeln();
}

-- 
Marco



Re: Performance

2014-05-31 Thread Marco Leise via Digitalmars-d
Am Sat, 31 May 2014 17:44:23 +
schrieb Thomas t.leich...@arcor.de:

 Thank you for the help. Which OS is running on your notebook ? 
 For I compiled your source code with your settings with the GCC 
 compiler. The run took 3.1 nanoseconds per step. For the DMD 
 compiler the run took 5. nanoseconds. So I think the problem 
 could be specific to the linux versions of the GCC and the DMD 
 compilers.
 
 
 Thomas

Gentoo Linux 64-bit. Aside from the 64-bit maybe, I can't make
out a good reason why the runtime should depend on the OS so
much.
Are you sure you don't run on a PC from 2000 and did you use
the compiler flags I gave on top of my post? Did you disable
CPU power saving and was no other process running at the same
time?
By the way I get very similar results when using the LDC
compiler.

-- 
Marco



Re: The GC and performance, but not what you expect

2014-06-01 Thread Marco Leise via Digitalmars-d
Am Sat, 31 May 2014 08:52:34 +
schrieb Kagamin s...@here.lot:

 The design of TCMalloc seems to be compatible with Boehm GC, so D 
 GC can probably use same ideas.

Did I mention that Orvid planned on doing that for his
replacement GC?
It looks like there are now a lot of proof of concept GC
modifications by several people. I think that's ok for the
benefit of exploring possibly incompatible changes and
trade-offs. A too early joint effort could mean that some
ideas cannot be explored to their full extent with the typical
myths about how good or bad some idea would have turned out.

-- 
Marco



Re: Performance

2014-06-03 Thread Marco Leise via Digitalmars-d
Am Mon, 02 Jun 2014 10:57:24 +
schrieb Thomas t.leich...@arcor.de:

 My PC is 5 years old. Of course I used your flags. Besides, I am
 not an idiot, I have been programming for 20 years and have used
 6 different programming languages. I didn't post that just for
 fun, for I am evaluating D as a language for numerical programming.
 
 Thomas

You posted a comparing benchmark between 3 languages providing
only the source code for one and didn't even run an optimized
compile. That had me thinking. :)
Back on topic: Any chance we can see the C++ code so we can
compare more directly? It's hard to compare the numbers only
for the D version when everyone has different system specs.
Also you say your PC is 5 years old. Is your system 32-bit
then? That would certainly affect the efficiency of loading and
storing 64-bit floating point values and might be a clue in
the right direction. I don't want to believe that the OS has
an effect on a loop that doesn't make any calls to the OS.

-- 
Marco



Re: Extra Carriage Returns using write

2014-06-03 Thread Marco Leise via Digitalmars-d
Am Mon, 02 Jun 2014 14:14:11 +
schrieb Dave G dgregor...@gmail.com:

 Hello,
 
 Why does the following add an extra CR in front of the CRLF's
 
 auto outf = new File("out.txt", "w");
 outf.write("this\r\nis\r\na\r\ntest");
 outf.close;
 
 If I make the file binary
 auto outf = new File("out.txt", "wb");
 
 it works as expected.
 
 I am using 2.065 and windows 7
 
 Thanks,
 Dave G

Redirection of D's I/O through the C runtime needs to be killed
with fire. It inherits C's flaws like the various vendor
specific extensions to the mode string for important flags like
inheritance of file handles in child processes.

-- 
Marco



Re: Swift is based LLVM,what will the D's LDC do?

2014-06-06 Thread Marco Leise via Digitalmars-d
Am Thu, 05 Jun 2014 14:30:40 +
schrieb Dicebot pub...@dicebot.lv:

 On Thursday, 5 June 2014 at 14:01:43 UTC, bioinfornatics wrote:
  On Thursday, 5 June 2014 at 06:40:17 UTC, Walter Bright wrote:
  On 6/4/2014 9:25 AM, Iain Buclaw via Digitalmars-d wrote:
  This likewise gdc too.  All you need to do is look at the 
  downloads
  page on dlang.org !
 
  It still says nothing about doing:
 
sudo apt-get install gdc
 
  on Ubuntu! Why keep it a secret? :-)
 
 
  On Fedora
 sudo yum install ldc
 
  ;-)
 
 pacman -Sy dlang-ldc
 pacman -Sy dlang-gdc
 
 ;)

Late for the show! On Gentoo:

layman -a dlang
emerge dmd ldc2 gcc[d]

-- 
Marco



Re: Using up-to-date GDC [was Re: Swift is based LLVM,what will the D's LDC do?]

2014-06-06 Thread Marco Leise via Digitalmars-d
Am Thu, 5 Jun 2014 22:47:15 +0200
schrieb Johannes Pfau nos...@example.com:

 archlinux has a 'pragmatic' approach regarding licenses  patents
 anyway. They also ship libdvdcss, mesa with --enable-texture-float,
 all multimedia codec packages are in the standard repos etc.

On Gentoo, due to the compile-from-source mentality the user
has the option to enable patented algorithms for their
personal use. Enabling these flags comes with a warning, that
the resulting binaries must not be redistributed. That way the
distribution stays safe from legal issues and the end user
doesn't miss out on relevant features. (Unless their hatred
for software patents makes them unable to swallow their pride,
that is :) )

-- 
Marco



Re: nothrow function callbacks in extern(C) code - solution

2014-07-08 Thread Marco Leise via Digitalmars-d
Am Thu, 19 Jun 2014 12:59:00 -0700
schrieb Walter Bright newshou...@digitalmars.com:

 With nothrow and @nogc annotations, we've been motivated to add these 
 annotations to C system API functions, because obviously such functions 
 aren't 
 going to throw D exceptions or call the D garbage collector.
 
 But this exposed a problem - functions like C's qsort() take a pointer to a 
 callback function. The callback function, being supplied by the D programmer, 
 may throw and may call the garbage collector. By requiring the callback 
 function 
 to be also nothrow @nogc, this is an unreasonable requirement besides 
 breaking 
 most existing D code that uses qsort().
 
 This problem applies as well to the Windows APIs and the Posix APIs with 
 callbacks.

I just stumbled upon this thread now...
In general you cannot throw exceptions unless you know how the
call stack outside of your language's barrier is constructed.
What I mean is that while DMD on 64-bit Linux uses frame
pointers that druntime uses to unwind, GCC omits them. So
druntime bails out once it reaches the C part of the call
stack.

That makes for two options:
1) D callbacks passed to other languages must not throw, like
   Andrei proposes if I understood that right.
2) Druntime must adapt to the system's C compiler's ABI.
   (by the use of libunwind)

-- 
Marco



Re: Cool Stuff for D that we keep Secret

2014-07-14 Thread Marco Leise via Digitalmars-d
Am Wed, 09 Jul 2014 16:21:46 -0700
schrieb Andrei Alexandrescu seewebsiteforem...@erdani.org:

 On 7/9/14, 2:59 PM, H. S. Teoh via Digitalmars-d wrote:
  So why not link to select wiki pages from dlang.org?
 
 Where's the pull request?
 
 […]

 Walter and I are busy enough as is working on D to NOT have new work cut 
 out for us. Please steal any work you can from us.
 
 
 Andrei

I'm sure most of the NG folks worry about stepping on
someone's toes by making pull requests for the official
language website without getting an ok from whoever designed
it and from Walter and you.
From my perspective the designs of the Wiki and the front
page are wildly different, which causes friction when
navigating the website. It is possible someone writes a pull
request,
someone else who is uninvolved with the web site gives it an ok
and later the original author is frustrated because he
intentionally separated the Wiki from the static part of
dlang.org.
(Not to say there isn't more talk than action etc., but we
 shouldn't pass changes over the respective web site
 lieutenant.)

-- 
Marco



Re: Cool Stuff for D that we keep Secret

2014-07-14 Thread Marco Leise via Digitalmars-d
Am Wed, 09 Jul 2014 23:56:20 +
schrieb w0rp devw...@gmail.com:

 http://w0rp.com:8010/download
 
 The download page is the page I've changed the most thus far. I 
 started by taking the different D compilers and so on and 
 breaking them into headings with short paragraphs explaining what 
 each is. I was thinking of putting sections in there for 
 instructions for installing on popular Linux distributions.

 […]

I'm getting strange question marks on the right side bar in
Opera 12/Linux:
DMD ? Version 1
DMC ? Digital Mars C and C++ Compiler

The version on the top left is more visibly separated and
overall the design feels more light weight with all the
spacing. The list of installers is now a bit too slim for my
taste. I miss the information about the type of download. For
example that the OS X version is a DMG package, or that the
"All platforms" version is a ZIP also containing the sources.
In a way I liked those old-school HTML tables with images.
Personally I could never make friends with the Windows 8
Metro design built on two or three colors and flat
rectangles. It feels so 80s to me.

Then I wondered if the Documentation section should be
renamed Language Specifications and the links renamed to
DMD 1 and DMD 2 or if they should be merged into the
sections for DMD 1 and DMD 2 respectively, because the
7-year-old DMD 1 specs are now pretty much obsolete? Someone new to the
web site looking for (current) compiler documentation will
only get confused.

The red bottom line is great. I also prefer clear end of page
markers with a huge margin.

Concerning the instructions for different Linux versions, you
may find that they are better maintained on D Wiki or
respective Wiki pages of the different distributions. YMMV.
Just my 2¢.

-- 
Marco



Re: Google definitely biased…

2014-08-12 Thread Marco Leise via Digitalmars-d
Am Mon, 11 Aug 2014 16:23:19 +0100
schrieb Russel Winder via Digitalmars-d
digitalmars-d@puremagic.com:

 … so what's new?
 
 I was trying to search for web-based material on D ranges and Go slices
 to see if they are basically the same thing. As soon as golang is a
 query term, no other language makes it onto the front page of the query
 results, cf. dlang range slice golang
 
 Google definitely try to push Go :-)

I think the results are bad in part due to Google using 'dlang'
synonymously with 'd'. So you get dozens of false positives
which flood the first page of search results.

%d
you'd
I'd

Use the literal search instead, which disables synonyms:

https://www.google.de/search?tbs=li%3A1q=dlang+golang+range+OR+ranges+OR+slice+OR+slices

I think these results look very fair.

-- 
Marco




Re: Using D

2014-08-25 Thread Marco Leise via Digitalmars-d
Am Sat, 12 Jul 2014 11:38:08 +0100
schrieb Russel Winder via Digitalmars-d
digitalmars-d@puremagic.com:

  That's not to say that Java, the language, (as opposed to the class
  library or the marketing hype) isn't a pretty good language. In fact,
  it's quite a beautiful language -- in the idealistic, ivory tower,
  detached-from-real-life sense of being a perfect specimen suitable for a
  museum piece. Its disconnect from the messy real world, unfortunately,
  makes it rather painful to use in real-life. Well, except with the help
  of automated tools like IDEs and what-not, which makes one wonder, if we
  need a machine to help us communicate with a machine, why not just write
  assembly language instead? But I digress. :-P
 
 Now this is mis-direction. Java is a real-world language in that it is
 used in the real world. Whilst there are many using Java because they
 know no better, many are using it out of choice. Java evolves with the
 needs of the users prepared to get involved in evolving the language.

Yes, Java is verbose, but its modularity makes it very
flexible. The classic example is how you read lines of text
from a file. Instead of a special class for that, you
use simple primitives with descriptive names and assemble
something that reads lines of UTF-8 text from a buffer that
has a file as its input. It actually acknowledges quite a bit
of real-world mess when you look at it, for example different
encodings on stdin and stdout.
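A minimal Java sketch of that composition (the java.io primitives are real; the class name and the in-memory byte source are mine, chosen to keep the example self-contained):

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class LineReading {
    // Compose small primitives with descriptive names: a raw byte
    // stream, a UTF-8 decoder on top of it, and a buffering wrapper
    // that finally offers readLine().
    static List<String> readLines(InputStream bytes) {
        List<String> lines = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(bytes, StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null)
                lines.add(line);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return lines;
    }

    public static void main(String[] args) {
        byte[] data = "first\nsecond\n".getBytes(StandardCharsets.UTF_8);
        System.out.println(readLines(new ByteArrayInputStream(data)));
        // prints [first, second]
    }
}
```

Swap the ByteArrayInputStream for a FileInputStream and nothing else changes; that interchangeability is exactly the modularity described above.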
Conventions like beans, where every property is implemented as
a pair of getter/setter or naming rules like ...Adapter,
...Comparator make it easy to reflect on unknown code.
On the one hand it is limiting to only have Java OOP in the
toolbox, on the other hand it is cheap to train someone on
Java and Java libraries and actually not a horror to try and
make sense of other people's code, because it won't be
implemented in any of 5 different paradigms plus someone's
personal naming conventions.
I've never been a fan of developing in vi or emacs and as far
as I am concerned, a programming language need not be designed
like a human language. There are many graphical programming
environments as well, for example for physics.
The simpler the language the more robust the refactoring tools
can become. The more conventions are in use, the better custom
tailored tools and IDEs can emerge. I.e. in Eclipse you only
type the capital letters of a long class name and have the
auto-completion figure out which class in scope or available
import paths matches these initials. Heck, it even fills in
the parameters when you call a method using the available
variables in scope. If you were unaware that you need a third
argument, the IDE can generate a new variable with a name
based on the method parameter or place a constructor call for
the required type.
Sometimes you can just focus on the program logic and have the
IDE generate most of the code. Delphi's and C# IDEs similarly
expose properties of GUI objects in tables and generate the
code for event handlers on double-clicks. It saves time, you
cannot misspell anything... I like it.

-- 
Marco




Re: Some Notes on 'D for the Win'

2014-08-25 Thread Marco Leise via Digitalmars-d
Am Sun, 24 Aug 2014 06:39:28 +
schrieb Paulo Pinto pj...@progtools.org:

 Examples of real, working desktop OS, that people really used at 
 their work, done in system programming languages with GC.
 
 Mesa/Cedar
 https://archive.org/details/bitsavers_xeroxparcteCedarProgrammingEnvironmentAMidtermRepo_13518000
 
 Oberon and derivatives
 http://progtools.org/article.php?name=oberonsection=compilerstype=tutorial
 
 SPIN
 http://en.wikipedia.org/wiki/Modula-3
 http://en.wikipedia.org/wiki/SPIN_%28operating_system%29
 
 What is really needed for the average Joe systems programmer to 
 get over this GC in systems programming stigma, is getting the 
 likes of Apple, Google, Microsoft and such to force feed them 
 into programming languages.

Yes, but when these systems were invented, was the focus on a
fast lag free multimedia experience or on safety? How do you
get the memory for the GC heap, when you are just about to
write the kernel that manages the system's physical memory?
Do these systems manually manage memory in performance
sensitive parts or do they rely on GC as much as technically
feasible? Could they use their languages as is or did they
create a fork for their OS? What was the expected memory space
at the time of authoring the kernel? Does the language usually
allow raw pointers, unions, interfacing with C etc., or is it
more confined like Java?
I see how you can write an OS with GC already in the kernel or
whatever. However there are too many question marks to jump to
conclusions about D.

o the larger the heap the slower the collection cycles
  (how will it scale in the future with e.g. 1024 GiB RAM?)
o the less free RAM, the more often the collector is called
  (game consoles are always out of RAM)
o tracing GCs have memory overhead
  (memory, that could have been used as disk cache for example)
o some code needs to run in soft-real-time, like audio
  processing plugins; stop-the-world GC is bad here
o non-GC threads are still somewhat arcane and system
  specific
o if you accidentally store the only reference to a GC heap
  object in a non-GC thread it might get collected
  (a hypothetical or existing language may have a better
  offering here)

For programs that cannot afford garbage collection, Modula-3
provides a set of reference types that are not traced by the
garbage collector.
Someone evaluating D may come across the question "What if I
get into one of those 10% of use cases where tracing GC is not
a good option?" It might be some application developer working
for a company that sells video editing software, that has to
deal with complex projects and object graphs, playing sounds
and videos while running some background tasks like generating
preview versions of HD material or auto-saving and serializing
the project to XML.
Or someone writing an IDE auto-completion plugin that has
graphs of thousands of small objects from the source files in
import paths that are constantly modified while the user types
in the code-editor.
Plus anyone who finds him-/herself in a memory constrained
and/or (soft-)realtime environment.

Sometimes it results in the idea that D's GC is some day going
to be as fast as the primary one in Java or C#, which people
found to be acceptable for soft-real-time desktop applications.
Others start contemplating if it is worth writing their own D
runtime to remove the stop-the-world GC entirely.
Personally I mostly want to be sure Phobos is transparent about
GC allocations and that not all threads stop for the GC cycle.
That should make soft-real-time in D a lot saner :)

-- 
Marco



Re: Why does D rely on a GC?

2014-08-26 Thread Marco Leise via Digitalmars-d
Am Tue, 19 Aug 2014 07:23:33 +
schrieb Kagamin s...@here.lot:

   With GC you give up deterministic behavior, which is 
  *absolutely* not worth giving up for 1% of objects.
 
 Memory needs deterministic management only under condition of 
 memory scarcity, but it's not a common case, and D allows manual 
 memory management, but why force it on everyone because only 
 someone needs it?

Don't dismiss his point easily. The real memory cost is not
always visible. Take for example bindings to GUI libraries
where bitmap objects left to be garbage collected may have a
16 byte wrapper object, but several megabytes of memory
associated with it inside a C library. The GC won't see the
need to run a sweep, and the working set blows out of
proportion. (Happened to me a few years ago.)
Other times you may just run out of handles because the GC is
not called for a while.

In practice you then add .close/.release methods to every
resource object, and here we are back at malloc/free. Cycles
aside, reference counting does a better job here.
In other words, a GC cannot handle anything outside of the
runtime it was written for, in short: OS handles and
foreign language library data structures.
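Java ran into the same limitation and settled on the same answer: an explicit close() path alongside the GC. A sketch of the pattern (NativeBitmap and its handle are hypothetical stand-ins for a C library resource; AutoCloseable and try-with-resources are real Java features):

```java
// Hypothetical wrapper around a C library bitmap: the tiny Java object
// is all the GC can see; the megabytes behind 'handle' are invisible
// to it, so release must be explicit and deterministic.
public class NativeBitmap implements AutoCloseable {
    private final long handle;  // would come from a C library in real code
    private boolean closed;

    public NativeBitmap(long handle) { this.handle = handle; }

    @Override
    public void close() {
        if (!closed) {
            // nativeFree(handle);  // the real C release call would go here
            closed = true;
        }
    }

    public boolean isClosed() { return closed; }

    public static void main(String[] args) {
        // try-with-resources guarantees the release runs, GC or no GC
        try (NativeBitmap bmp = new NativeBitmap(0xCAFE)) {
            // ... draw on bmp ...
        }
        System.out.println("bitmap released deterministically");
    }
}
```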

-- 
Marco



Re: Voting: std.logger

2014-08-26 Thread Marco Leise via Digitalmars-d
Am Wed, 30 Jul 2014 01:42:57 +
schrieb uri em...@ether.com:

 On Wednesday, 30 July 2014 at 00:15:26 UTC, H. S. Teoh via 
 Digitalmars-d wrote:
  On Tue, Jul 29, 2014 at 04:55:04PM -0700, Andrei Alexandrescu 
  via Digitalmars-d wrote:
  On 7/29/14, 4:16 PM, H. S. Teoh via Digitalmars-d wrote:
  I propose 'stdlog'.
  
  I thought of the same but then rejected it - stdlog looks like
  offering the same interface as stdout and stderr. Nevertheless 
  it's a
  sensible choice, too. -- Andrei
 
  I don't like 'theLog'. What about 'defaultLog'?
 
 
  T
 
 +1 for !theLog. I actually like dlog because I hate typing but 
 defaultLog would be fine.
 
 
 /uri

appLog/applog



Re: Voting: std.logger

2014-08-26 Thread Marco Leise via Digitalmars-d
Am Tue, 26 Aug 2014 18:23:30 +
schrieb Dicebot pub...@dicebot.lv:

 On Tuesday, 26 August 2014 at 15:44:19 UTC, Robert burner Schadek 
 wrote:
  BTW:
* move std.logger to std.experimental.logger
* the github project has unittests for the version statements 
  (all pass)
 
  whats next?
 
 I will compare changelist against list of requirements from 
 voters this weekend and if all seems to be addressed will start a 
 new round of review/voting.

Someone else mentioned it before: Logging in destructors would
be a requirement for me, too. Thanks to the GC we usually
don't release memory in them, but foreign resources like
e.g. hardware audio playback buffers would typically be
handled in a dtor. I see two ways which both require logging:

1) Dtor calls a C function to release the resource, which may
   return an error code, that you want to log. You keep the
   program running since if all else fails you could still
   reinitialize the audio device, thereby releasing all
   buffers.

2) If waiting for the GC to eventually call the dtor is not an
   option because the resource is very limited, you require
   the user to call some .release/.close method. If in the dtor
   the resource is still open, you log something like
   WARNING: Destructor called, but audio buffer still
attached. Call .close() on the last reference.
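Pattern 2) has a direct Java analogue: a Cleaner action that warns when the GC finds the resource still open. A hedged sketch (AudioBuffer is hypothetical; java.lang.ref.Cleaner is the real Java 9+ API):

```java
import java.lang.ref.Cleaner;

public class AudioBuffer implements AutoCloseable {
    private static final Cleaner CLEANER = Cleaner.create();

    // The state object must not reference the AudioBuffer itself,
    // or the buffer would never become unreachable.
    static final class State implements Runnable {
        volatile boolean closed;

        @Override
        public void run() {  // runs on the Cleaner thread, or via clean()
            if (!closed)
                System.err.println("WARNING: audio buffer still attached."
                        + " Call close() on the last reference.");
        }
    }

    private final State state = new State();
    private final Cleaner.Cleanable cleanable = CLEANER.register(this, state);

    @Override
    public void close() {  // the deterministic release path of pattern 2)
        state.closed = true;
        cleanable.clean();
    }

    public boolean isClosed() { return state.closed; }

    public static void main(String[] args) {
        try (AudioBuffer buf = new AudioBuffer()) {
            // ... attach and play ...
        }
        System.out.println("buffer closed explicitly, no warning logged");
    }
}
```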

As much as I see this as non-negotiable (chancellor Merkel
would have said "alternativlos", i.e. without alternative), I
know it would currently require the whole log system to be
nothrow @nogc, and we may not want to wait until allocating
and throwing are allowed during GC sweeps before we get
std.log.

-- 
Marco



Re: RFC: scope and borrowing

2014-08-26 Thread Marco Leise via Digitalmars-d
Am Sun, 24 Aug 2014 13:14:43 +
schrieb Marc Schütz schue...@gmx.net:

 In the Opportunities for D thread, Walter again mentioned the 
 topics ref counting, GC, uniqueness, and borrowing, from which a 
 lively discussion developed [1]. I took this thread as an 
 opportunity to write down some ideas about these topics. The 
 result is a rather extensive proposal for the implementation of 
 borrowing, and its implementations:
 
 http://wiki.dlang.org/User:Schuetzm/scope

The amount of possible use-cases you listed for this extension is
staggering. It surely carries its own weight.
scope!(ident1, ident2, ...) was quite clever. inout could
borrow from this.

 This is not a real DIP, but before I put more work into 
 formalizing it, I'd like to hear some thoughts from the languages 
 gurus here:
 
 * Is this the general direction we want to go? Is it acceptable 
 in general?
 * Is the proposal internally consistent?

Can anyone tell without actually implementing it? :)
You could try to formalize some error messages and how the
compiler's reasoning would go. What happens when I pass
identifiers to scope!(…) return types?
- Can the compiler look up the identifiers' types and scopes
  in all cases?
- Will the compiler deduce the return type from these
  identifiers? E.g. scope!(someString) ref getString(); will
  work like in your example? I don't think so, because the
  lifetime identifiers could be structs containing the
  returned type.
- What if we used incompatible types like
  scope!(someString, someIntPtr) there?
- What about variables?

  static int someInt = 32;
  string someString;
  scope!(someString, someInt) int* x;
  x = someInt;

  Is the declaration of x in error? Strings don't contain
  integers unless unsafe casts are used, so why would they
  narrow the lifetime of an integer reference?

- Is it necessary to keep around all declared lifetime
  identifiers? In the snippet above (assuming it is valid), it
  looks like the shorter lived `someString' is enough to
  establish the semantics.

 * How big would the effort to implement it be? (I suspect it's a 
 large amount of work, but relatively straightforward.)
 
 [1] http://forum.dlang.org/thread/lphnen$1ml7$1...@digitalmars.com

-- 
Marco



Re: Voting: std.logger

2014-08-26 Thread Marco Leise via Digitalmars-d
Am Tue, 26 Aug 2014 20:59:57 +
schrieb Robert burner Schadek rburn...@gmail.com:

 nothrow I get, but nothrow in dtors is a much wider topic (please 
 open a new thread if you want to discuss this) and see my example 
 to hack around it.

You are right.

 but no nogc should be no problem as long as you use a Logger that 
 doesn't allocate for logging, as for example FileLogger. And even 
 than, what is the problem with no nogc logging in dtors?
 
 --
 class Foo {
  ~this() {
  try {
  log(Foo); // log to file
  } catch(Exception e) {}
  }
 }
 --

As far as I know, exactly this is not possible with the
current GC implementation. The exception you catch there has
just been allocated somewhere deeper in the log function. But
all GC allocations in a GC invoked dtor cause MemoryErrors and
program abortion/crashes. :(

In a perfect world I'd imagine you can set up a fallback
logger. So if the disk is full an exception is thrown by e.g.
std.stdio.File, which is passed as an error level log message
to the fallback logger, which might write to stderr:
ERROR: Could not write the following message to logXYZ:
message
The reason was: Disk full

-- 
Marco



Re: Before we implement SDL package format for DUB

2014-08-26 Thread Marco Leise via Digitalmars-d
Am Tue, 26 Aug 2014 13:47:05 +0300
schrieb ketmar via Digitalmars-d digitalmars-d@puremagic.com:

 On Tue, 26 Aug 2014 10:36:14 +
 eles via Digitalmars-d digitalmars-d@puremagic.com wrote:
 
  Not exactly that, but look here two approaches for introducing 
  comments in standard JSON:
 they both 'hacks'. and i'm pretty sure that most people who using JSON
 never bother to read specs, they just expect it to work like
 javascript. i myself wasn't aware about JSON limitations when i was
 writing my own JSON parser, so my parser allows comments and unquoted
 field names from the beginning.

It depends on the personality of the person looking into
it. Diligent people, when faced with something that looks like
something else, first drop that notion to avoid taking
incorrect shortcuts subconsciously. Then they read the official
documentation until they can't imagine any more questions
and corner cases of the kind "Is there a length limitation for
numbers? How do I deal with overflow? Are other encodings than
Unicode allowed?"

But in the end it comes down to the robustness principle:
"Be conservative in what you do,
be liberal in what you accept from others."

-- 
Marco




Re: Voting: std.logger

2014-08-27 Thread Marco Leise via Digitalmars-d
Am Wed, 27 Aug 2014 01:09:21 +
schrieb Dicebot pub...@dicebot.lv:

 On Wednesday, 27 August 2014 at 00:09:15 UTC, Marco Leise wrote:
  As far as I know, exactly this is not possible with the
  current GC implementation. The exception you catch there has
  just been allocated somewhere deeper in the log function. But
  all GC allocations in a GC invoked dtor cause MemoryErrors and
  program abortion/crashes. :(
 
  In a perfect world I'd imagine you can set up a fallback
  logger. So if the disk is full an exception is thrown by e.g.
  std.stdio.File, which is passed as an error level log message
  to the fallback logger, which might write to stderr:
  ERROR: Could not write the following message to logXYZ:
  message
  The reason was: Disk full
 
 I don't think it will help either. The very moment exception is 
 allocated inside std.stdio.File your program will crash, it won't 
 get to fallback. Only solution is to implement your logger as 
 @nothrow thing by using only C functions internally instead of 
 std.stdio - something that feels overly limited for a general use 
 case.

Exactly, I just needed someone else to speak it out. :)

 I really think this is the case where you should roll your 
 own FileNoThrowingLogger and go with it.

*Me* or everyone who needs to log something in a dtor?

 In a long term this is something much better to be fixed in GC 
 implementation than constantly hacked in stdlib modules.

Or is this maybe the other language change (besides not
generating code for unused lambdas) that should be pushed
with std.log, because otherwise it will never be solved?

I don't know, but "no logging in dtors" is a serious and
hard-to-sell limitation. Not the author's fault though.

-- 
Marco



Re: Bug or what?

2014-08-27 Thread Marco Leise via Digitalmars-d
Am Wed, 27 Aug 2014 20:30:08 +
schrieb Phil Lavoie maidenp...@hotmail.com:

 On Wednesday, 27 August 2014 at 20:28:11 UTC, Phil Lavoie wrote:
  On Wednesday, 27 August 2014 at 20:05:27 UTC, MacAsm wrote:
  On Wednesday, 27 August 2014 at 19:51:48 UTC, Phil Lavoie 
  wrote:
  Ok so me and one of my colleagues have been working on some 
  code at a distance. We both use dmd as the compiler. I am 
  under Windows, she OSX.
 
  It is not uncommon that she experiences more strictness in 
  the type system than I do. For example, something like this 
  does compile for me, but not for her:
 
  int func(size_t i)
  {
  return i;
  }
 
  It passes my compilation. She gets an error msg about 
  implicit casting of uint to int. I'm just wondering... has 
  anybody else experienced that and what is the expected 
  behavior?
 
  Thanks,
  Phil
 
  size_t is an alias for an unsigned integer type (check out 
  http://dlang.org/type.html). So this warning is correct. I 
  don't get this warning too. Maybe it's the type-checking that 
  does differ on OSX. Are you using same compiler version and 
  flags?
 
  Yeah yeah I checked it out and we both use same versions and 
  everything. Basically, to be word-size coherent I should just 
  have written this instead:
 
  ptrdiff_t func(size_t i) {return i;}
 
  Though it is still somewhat unsafe, at least it behaves the 
  same on both our machines.
 
  Phil
 
 Note that the compiler behaves the same, the code, not 
 necessarily.

In my opinion this should always give you a compiler warning,
as it is not portable to 64-bit:

uint func(size_t i) {return i;}

But that has been discussed and reported to death already :D

I'm also somewhat pedantic about assigning unsigned to signed
types and vice versa. Most of the times I'd rather change the
code so that I can keep using e.g. unsigned absolute values
instead of differences.

-- 
Marco



Re: Voting: std.logger

2014-08-31 Thread Marco Leise via Digitalmars-d
Am Sun, 31 Aug 2014 01:09:32 +
schrieb Ola Fosheim Grøstad
ola.fosheim.grostad+dl...@gmail.com:

 I've got some questions:
 
 How does logging interact with pure? You need to be able to log 
 in pure functions.

How do you come to that conclusion? Purity is a synonym for
_not_ having side effects. That said - as usual - debug
statements allow you to punch a hole into purity.

 […]
 
 Is logf() needed? Can't you somehow detect that the string is an 
 immutable string literal with string formatting characters?

1) The first argument does not need to be a literal.
2) Checking the first argument for formatting chars slows the
   system down.
3) If you want to log a regular string, e.g. an incoming HTTP
   request or something that contains formatting symbols, log()
   would throw an exception about a missing second argument.
   This in turn could become a DOS vulnerability.

Other than that, you could create an additional log function
that only accepts compile-time known formatting strings as a CT
argument and verifies the runtime argument types at the same
time.
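Point 3) is easy to demonstrate in Java, whose String.format has the same failure mode when user-supplied data is treated as a format string (the request string below is a made-up example):

```java
import java.util.MissingFormatArgumentException;

public class LogFormatDemo {
    // What a combined log()/logf() would do: interpret the message as
    // a format string. An attacker-controlled '%s' then throws at
    // runtime, which is the DOS vector described above.
    static String riskyLog(String message) {
        return String.format(message);
    }

    // Separate entry point for plain strings: user data is only ever a
    // formatting *argument*, never the format string itself.
    static String safeLog(String message) {
        return String.format("request: %s", message);
    }

    public static void main(String[] args) {
        String request = "GET /search?q=%s";  // hypothetical logged input
        try {
            riskyLog(request);
        } catch (MissingFormatArgumentException e) {
            System.out.println("riskyLog threw: missing argument for %s");
        }
        System.out.println(safeLog(request));
    }
}
```

This is why keeping log() and logf() separate is cheaper and safer than sniffing the first argument for formatting characters.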

-- 
Marco



Re: [OT] EU patents [was Microsoft filled patent applications for scoped and immutable types]

2014-08-31 Thread Marco Leise via Digitalmars-d
Am Thu, 28 Aug 2014 11:08:29 +0100
schrieb Russel Winder via Digitalmars-d
digitalmars-d@puremagic.com:

 Jérôme,
 
 On Thu, 2014-08-28 at 11:53 +0200, Jérôme M. Berger via Digitalmars-d
 wrote:
 […]
  PPS: IANAL but I have had lots of contacts with patent lawyers and I
  have taken part in several patent disputes as an expert witness.
  However, this was in France so most of my knowledge applies to
  French law and things may be different in the US.
 
 Are you tracking the new EU unitary patent and TTIP activity? We need to
 make sure the US does not impose on the EU the same insane patent framework
 the US has.

Haha :*). Don't worry, we EU citizens are more concerned about
the issues of privacy, food regulations and corporate entities
suing states for changing laws that cause them profit losses.
A rubber stamping patent system without professionals
investigating the claims has already been established years
ago.
All in all I am not too worried about TTIP anymore, seeing
that the US reps didn't move in all the years of negotiations.
With NGOs running against TTIP and an overall negative public
stance I don't see it being bent over the knee.

-- 
Marco




Re: [OT] Microsoft filled patent applications for scoped and immutable types

2014-08-31 Thread Marco Leise via Digitalmars-d
Am Thu, 28 Aug 2014 12:12:14 +0200
schrieb Daniel Kozak via Digitalmars-d
digitalmars-d@puremagic.com:

 V Thu, 28 Aug 2014 11:53:35 +0200
 Jérôme M. Berger via Digitalmars-d digitalmars-d@puremagic.com
 napsáno:
  
  I should have said that in D it is used when declaring an
  instance (i.e. at the place of the instance declaration) whereas in
  the patent it is used when declaring the type. For a patent lawyer,
  this will be enough to say that the patent is new.
  
 
 I don't agree completely
 
 // immutable is used when declaring the type IS
 immutable struct IS {
   string s;
 }
 
 IS s = IS(fff);
 s.s = d;
 writeln(s);

^ That I agree with! Prior art.

-- 
Marco



Re: Voting: std.logger

2014-08-31 Thread Marco Leise via Digitalmars-d
Am Sun, 31 Aug 2014 08:52:58 +
schrieb Ola Fosheim Grøstad
ola.fosheim.grostad+dl...@gmail.com:

 On Sunday, 31 August 2014 at 06:11:56 UTC, Marco Leise wrote:
  Am Sun, 31 Aug 2014 01:09:32 +
  schrieb Ola Fosheim Grøstad
  ola.fosheim.grostad+dl...@gmail.com:
 
  How does logging interact with pure? You need to be able to 
  log in pure functions.
 
  How do you come to that conclusion? Purity is a synonym for
  _not_ having side effects. That said - as usual - debug
  statements allow you to punch a hole into purity.
 
 1. ~90% of all functions are weakly pure, if you cannot log 
 execution in those functions then logging becomes a liability.

 2. If you define logging in a weakly pure function as tracing of 
 execution rather than logging of state, then you can allow 
 memoization too.

Ok, here is the current state: Logging is not a first-class
language feature with special semantics. Drop your pure
keywords on those 90% of functions or only log in debug.
 
 3. You don't normally read back the log in the same execution, 
 state is thus not preserved through logging within a single 
 execution. It has traits which makes it less problematic than 
 general side effects that change regular global variables.

I still don't see it flying even theoretically. The stdlog will
be an interface with an arbitrary implementation behind it. A
file logger will eventually hit a "disk full" state and throw
an exception. Since pure implies that a function call can be
elided, such a change of execution path cannot work.
It is much like discussing the strictness of transitive const:
if you need to cache values or initialize on first access, you
just have to drop const.

-- 
Marco



Re: rdmd - No output when run on WebFaction

2014-08-31 Thread Marco Leise via Digitalmars-d
Am Sun, 31 Aug 2014 09:01:23 +
schrieb David Chin dlc...@me.com:

 Hi all,
 
 On WebFaction, my Linux shared hosting server, I created a simple 
 Hello, World! script in a file named hello.d.
 
 Attempting to run the script with rdmd hello.d yielded no error 
 messages, and strangely, no output.
 
 However, the following works, and I see the output:
 
 (1) dmd hello.d
 (2) ./hello
 
 Running rdmd --chatty hello.d shows D writing some files to 
 /tmp folder and calling exec on the final temp file.
 
 I suspect the reason I'm not seeing any output using rdmd has 
 something to do with WebFaction's security policies disallowing 
 any execution on files in the /tmp folder.
 
 May I request that rdmd check, say, the environment variables tmp 
 or temp for the path to write the temporary files in?
 
 Thanks!

Sorry for hijacking. I believe any program should remove what
it placed in /tmp on exit and actively manage any cached files
under /var/lib, e.g. by age or size of the cache. In the case
of rdmd, I'd even prefer to just do what most native compilers
do and place object files alongside the sources, but allow
custom object file directories. Usually I wouldn't say
anything, because endless bike-shedding ensues, but it looks
like now there is a technical argument, too. :p

-- 
Marco



Re: DIP(?) Warning to facilitate porting to other archs

2014-08-31 Thread Marco Leise via Digitalmars-d
Am Fri, 02 May 2014 01:56:49 +
schrieb bearophile bearophileh...@lycos.com:

 Temtaime:
 
  I think it's need to have -w64(or other name, offers ?) flag 
  that warns if code may not compile on other archs.
 
 It's a problem and I'd like some way to compiler help solving 
 such problems.
 
 I suggested this, that was refused (I don't remember who reopened 
 it):
 https://issues.dlang.org/show_bug.cgi?id=5063
 
 Bye,
 bearophile

That would have been me going renegade against a
RESOLVED-WONTFIX after I found a library that wouldn't compile
on my amd64 machine because it mixed size_t and uint as if
they were the same. Later I found a little flood of bug
reports for other libraries as well. Whether through a warning
or through size_t as a distinct type like some other languages
have, the current situation is the source of statically
checkable, avoidable portability bugs.

-- 
Marco



Re: DIP(?) Warning to facilitate porting to other archs

2014-08-31 Thread Marco Leise via Digitalmars-d
Am Sat, 03 May 2014 03:17:23 +0200
schrieb Jonathan M Davis via Digitalmars-d
digitalmars-d@puremagic.com:

 […]
 
 Putting warnings in the compiler always seems to result in forcing people to
 change their code to make the compiler shut up about something that is
 perfectly fine.
 
 - Jonathan M Davis

I agree with you about warnings about clarity of operator
precedence and the like, where it is just a matter of style.
But I don't see what that has to do with this issue and code
like:

  size_t = ulong; // breaks when porting from 64 to 32 bit
  uint = size_t;  // breaks when porting from 32 to 64 bit

which is obviously broken, but accepted. I would really like
to force people to change their code to make the compiler shut
up. See some of the linked bugs for examples:
https://issues.dlang.org/show_bug.cgi?id=5063#c4

Now we have 3 bad options and no good one: :(

- add warnings to dmd, which should never have real warnings
  and dozens of flags to control them
- make size_t a distinct type, which is unfeasible to implement
  and is likely to break _something_
- keep the status quo with libraries that don't compile and
  try to educate people about the issue
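For comparison, the "distinct type" option is roughly what Java does for its own 64-to-32-bit case: implicit long-to-int narrowing is a compile error, and the checked explicit conversion is a library call (Math.toIntExact, Java 8+). A small sketch of the checked path:

```java
public class Narrowing {
    // 'int f(long i) { return i; }' does not compile in Java; the
    // explicit, overflow-checked equivalent of 'uint = size_t' is:
    static int toIntChecked(long value) {
        return Math.toIntExact(value);  // throws ArithmeticException on overflow
    }

    public static void main(String[] args) {
        System.out.println(toIntChecked(42L));  // fine on any platform
        try {
            toIntChecked(1L << 40);             // too big for a 32-bit int
        } catch (ArithmeticException e) {
            System.out.println("overflow caught: " + e.getMessage());
        }
    }
}
```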

-- 
Marco



Re: why does DMD compile hello world to about 500 _kilobytes_ on Mac OS X [x86_64]?!?

2014-09-05 Thread Marco Leise via Digitalmars-d
Am Tue, 02 Sep 2014 07:03:52 +
schrieb Dicebot pub...@dicebot.lv:

 On Tuesday, 2 September 2014 at 06:18:27 UTC, Jacob Carlborg
 wrote:
  On 01/09/14 20:33, Dicebot wrote:
 
  Any reason why it can't work for OSX in a same way? Assuming 
  LDC does
  emit ModuleInfo  Co sections the same way it does on Linux, 
  using OSX
  specific alternative to --gc-sections should just work.
 
  It does not emit these sections the same way, at least not on 
  DMD.
 
 Well I am speaking about LDC ;) --gc-sections don't work with
 Linux DMD either.

Hey, every new release I go and try it, but I find that dub
still crashes when linked with --gc-sections, so some
symbols need to be added as GC roots.

That said with -defaultlib=phobos2 (picking up the .so
version) the file size is:

   48 KiB  !!!

after `strip main` it comes down to:

   34 KiB  !!!

Only when you think about how people put 5 minutes of a
stunning 3D gfx demo into 64 KiB do you start to worry about
34 KiB for "Hello World!" again.

-- 
Marco



Re: [OT] If programming languages were weapons

2014-09-05 Thread Marco Leise via Digitalmars-d
Am Tue, 02 Sep 2014 08:29:24 +
schrieb Iain Buclaw ibuc...@gdcproject.org:

 In normal fashion, it's missing an entry for D.
 
 http://bjorn.tipling.com/if-programming-languages-were-weapons
 
 I'll let your imaginations do the work.
 
 
 Iain.

How can our beloved, can-do-anything C be a rifle that you
cannot even reload until you have shot all 8 bits of ammo?

-- 
Marco



Re: why does DMD compile hello world to about 500 _kilobytes_ on Mac OS X [x86_64]?!?

2014-09-05 Thread Marco Leise via Digitalmars-d
Am Fri, 05 Sep 2014 19:38:13 +
schrieb deadalnix deadal...@gmail.com:

 On Friday, 5 September 2014 at 11:50:37 UTC, Ola Fosheim Grøstad
 wrote:
  On Friday, 5 September 2014 at 09:27:41 UTC, Marco Leise wrote:
  Only when you think about how people put 5 minutes of a
  stunning 3D gfx demo into 64 KiB you start to worry about 34
  KiB for Hello World! again.
 
  You meant 4KiB…
 
  https://www.youtube.com/watch?v=RCh3Q08HMfslist=PLA5E2FF8E143DA58C
 
 That is beyond sanity...
 
 I love it.

Me too... This is the Matrix. Mesmerizing.
Of course printing 12 characters to stdout is still much more
involved than even 5 of those ... demos ... with their graphics
shaders, metaballs, fish eye effects, camera fly throughs and
polyphonic synthesizer music.

-- 
Marco



Re: kill the commas! (phobos code cleanup)

2014-09-06 Thread Marco Leise via Digitalmars-d
Am Thu, 4 Sep 2014 00:55:47 +0300
schrieb ketmar via Digitalmars-d digitalmars-d@puremagic.com:

 On Wed, 03 Sep 2014 21:38:55 +
 via Digitalmars-d digitalmars-d@puremagic.com wrote:
 
  That sucks! Now I had to do it myself. (I think you should 
  upgrade to a decent editor on a decent OS and save me some 
  unicode-work…;)
 no-no-no-no! utf of any size is boring. i want my strings to be
 indexable without any hidden function calls! ;-)
 
 i even did native-encoded strings patch (nhello!) and made lexer
 don't complain about bad utf in comments. i love my one-byte locale!

But there lies greatness in the unification of all locales
into just one. Think of all the encodings in HTTP that more
often than not were declared incorrectly, making browsers
guess. Or text files that looked like gibberish because they
came from DOS or were written in another language that you
even happen to speak, but whose byte mess you now cannot
decipher. Or do you remember the mess that happened to file
names with accented characters when they were copied between
file systems often enough?
I'm all for performance, but different encodings on each
computing platform and language just didn't work in the
globalized world. You are a relic :)

-- 
Marco




Re: Automated source translation of C++ to D

2014-09-06 Thread Marco Leise via Digitalmars-d
Am Thu, 21 Aug 2014 17:57:11 +
schrieb Joakim dl...@joakim.airpost.net:

 On Thursday, 21 August 2014 at 10:00:43 UTC, Daniel Murphy wrote:
 
  You might want to look at DDMD, which is automatically 
  converted.
 
 Yes, I'm aware of ddmd.  You've mentioned many times that it only 
 works because dmd is written using a very unC++-like style, to 
 the point where github's source analyzer claims that dmd is 
 written in 66.7% C, 28.4% D (presumably the tests right now), 
 4.4% C++, and 0.5% other. :)

OT: That's because the analyzer only looks at the file name
part after the last '.', and dmd uses an atypical extension
for C++.

-- 
Marco



Re: kill the commas! (phobos code cleanup)

2014-09-06 Thread Marco Leise via Digitalmars-d
Am Sat, 6 Sep 2014 14:52:19 +0300
schrieb ketmar via Digitalmars-d digitalmars-d@puremagic.com:

 On Sat, 06 Sep 2014 11:05:13 +
 monarch_dodra via Digitalmars-d digitalmars-d@puremagic.com wrote:
 
  That sounds so much better than UTF-32.
 why, in the name of hell, do you need UTF-32?! doesn't
 0x1 chars enough for everyone?!
 
   btw: are there fonts that can display all unicode?
   i doubt it (ok, maybe one).
  Fonts are encoding agnostic, your point is irrelevant.
 so where can i download font collection with fonts contains all unicode
 chars?
 
  This is all done without the need for font-display
 thank you, but i don't need any text i can't display (and therefore
 read). i bet you don't need to process Thai, for example -- 'cause this
 requires much more than just character encoding convention. and bytes
 are encoding-agnostic.
 
  which is on the burden of the final client, and their respective 
  local needs.
 hm... text processing software developed on systems which can't display
 processing text? wow! i want two!

Dude! This is handled the same way sound fonts for MIDI did
it.
experience. If your version of Arial doesn't come with
Thai symbols, you just install _any_ Asian font which includes
those and it will automatically be used in places where your
favorite font lacks symbols. Read this Wikipedia article from
2005 on it: http://en.wikipedia.org/wiki/Fallback_font
In practice it is a solved problem, as you can see in your
browser when you load a web site with mixed writing systems.

If all else fails, there is usually something like this in
place:
http://scripts.sil.org/cms/scripts/page.php?site_id=nrsi&id=UnicodeBMPFallbackFont
E.g. Missing symbols are replaced by a square with the
hexadecimal code point. So the missing symbol can at least be
identified correctly (and a matching font installed).

-- 
Marco




Re: What criteria do you take

2014-09-06 Thread Marco Leise via Digitalmars-d
Am Sat, 06 Sep 2014 02:30:49 +
schrieb Cassio Butrico cassio_butr...@ig.com.br:

 What criteria do you take into consideration for the choice of a 
 programming language.
 and why? does not mention what language would be, but what 
 criteria led them to choose.

In a start-up:

- known and been used by many developers
- low potential of running into unsolved issues
- rich eco-system with a potential solution for anything I
  planned
- lots of free/open source solutions to get started without a
  big investment first
- works as well for programmers on Windows/Linux/OS X
- minimizes internal bike-shedding

In other words Java. :)
The only bike-shedding I ever had was whether we should write
getID or getId. One says ID is the correct abbreviation of
identification, the other says the field's name is id and
camel case rules dictate getId.



Personally it just comes down to fun to work with and my bias
towards maximum efficiency and static checking through the
compiler. D ranks very high here.

+ templates, CTFE and UDAs are fun to work with; it is easy to
  do meta-programming with these tools
+ if needed also works as a better-C
+ can use efficient C libraries without conversion layer
+ dynamic linking allows for small native executables and
  memory reuse of already loaded read-only segments of
  phobos2.so.
+ lots of static checking and template constraints
+ removes friction between systems by defining long as 64-bit
  and character arrays to be in Unicode encoding

- incompatible ABI between compiler vendors requires 3
  installations of the same lib to do testing with each
- inline ASM syntax also diverges between vendors
- GC implementation and efficiency issues
- being flexible, safe and efficient at the same time is
  sometimes a liability; e.g. File.byLine buffer reuse issue,
  Java taking the freedom to offer inefficient operations on
  all containers to keep them interchangeable.

Before D I used Delphi. It is IMHO the best programming
environment for efficient, native GUI applications on Windows.
It comes with integrated compiler, linker, debugger and that
sort of stuff and the GUI designer is integral part of the
IDE. Most other GUI designers feel like or are an external
tool with no interaction with the code editor.

-- 
Marco



Re: kill the commas! (phobos code cleanup)

2014-09-06 Thread Marco Leise via Digitalmars-d
Am Sat, 6 Sep 2014 15:52:09 +0300
schrieb ketmar via Digitalmars-d digitalmars-d@puremagic.com:

 On Sat, 6 Sep 2014 14:52:50 +0200
 Marco Leise via Digitalmars-d digitalmars-d@puremagic.com wrote:
 
  In practice it is a solved problem, as you can see in your
  browser when you load a web site with mixed writing systems.
 and hurts my eyes. i have a little background in typography, and mixing
 different fonts makes my eyes bleed.

Japanese and Latin are already so far apart that the font
doesn't make much of a difference anymore, so long as it has
similar size and hinting options. As for mixing writing
systems there are of course dozens of use cases. Presenting an
English website with links to localized versions labeled with
each language's name, programs dealing with
mathematical/technical symbols can use regular text allowing
for easy copy & paste, instead of resorting to bitmaps, e.g.
for logical OR or Greek variables. And to make your eyes
bleed even more here is a Cyrillic Wikipedia article on Mao
Tse-Tung, using traditional and simplified versions of his
name in Chinese and the two transliterations to Latin according
to Pinyin and Wade-Giles:
https://ru.wikipedia.org/wiki/Мао_Цзэдун

  E.g. Missing symbols are replaced by a square with the
  hexadecimal code point. So the missing symbol can at least be
  identified correctly (and a matching font installed).
 this can't help me reading texts. really, i'm not a computer, i don't
 remember which unicode number corresponds to which symbol.

Yes, but why do you prefer garbled symbols incorrectly mapped
to your native encoding or even invalid characters silently
removed ?
Do you understand that with the symbols displayed as code
points you still have all the information even if it doesn't
look readable immediately ?
It offers you new options:
* You can copy and paste the text into an online translator to
  get an idea of what the text says.
* You can enter the code into a tool that tells you which
  script it is from and then look for a font that contains
  that script to get an acceptable display.

-- 
Marco




Re: Some notes on performance

2014-09-06 Thread Marco Leise via Digitalmars-d
Am Tue, 02 Sep 2014 10:23:57 +
schrieb po y...@no.com:

   The first link says that Chrome is a *90* meg binary!  Gawd 
 damn. Either they write some really bloated code, or modern 
 browsers require way too much shit to function.

Hmm, my installation of Lynx is 1.6 MiB in size. But
gfx and HTML 5 are kind of non-existent.

-- 
Marco



Re: kill the commas! (phobos code cleanup)

2014-09-06 Thread Marco Leise via Digitalmars-d
Am Sat, 6 Sep 2014 17:51:23 +0300
schrieb ketmar via Digitalmars-d digitalmars-d@puremagic.com:

 On Sat, 6 Sep 2014 16:38:50 +0200
 Marco Leise via Digitalmars-d digitalmars-d@puremagic.com wrote:
 
  Yes, but why do you prefer garbled symbols incorrectly mapped
  to your native encoding or even invalid characters silently
  removed ?
 i prefer to not read the text i cannot understand. there is zero
 information in Chinese, or Thai, or even Spanish for me. those texts
 looks (for me) like gibberish anyway. so i don't care if they are
 displayed correctly or not. that's why i using one-byte encoding and
 happy with it.
 
  Do you understand that with the symbols displayed as code
  points you still have all the information even if it doesn't
  look readable immediately ?
 no, i don't understand this. for me Chinese glyph and abstract painting
 is the same. and simple box, for that matter.
 
  It offers you new options:
 only one: trying to paste URL to google translate and then trying to
 make sense from GT output. and i don't care what encoding was used for
 page in this case.

So because you see no use for Unicode (which is hard to
believe considering all the places where localized strings
may be used), everyone has to keep supporting hacks to guess
text encodings or NFC normalize and convert strings to the
system locale that go to the terminal. Thanks for the extra
work :p

-- 
Marco




Re: What criteria do you take

2014-09-07 Thread Marco Leise via Digitalmars-d
Am Sat, 06 Sep 2014 21:50:05 -0400
schrieb Nick Sabalausky seewebsitetocontac...@semitwist.com:

 On 9/6/2014 9:07 PM, Cassio Butrico wrote:
  On Saturday, 6 September 2014 at 22:16:02 UTC, Cassio Butrico wrote:
  Thank you all, the reason for the question and what we can do for
  programmers that will program in D.
  Thank you again.
  If I can start over again, thousands of miles away, I will have in mind,
  I would program in D.
 
 Heh, who knew Reznor was a D fan? :)
 

Lol, I just thought the lyrics are 'If I could start
again, a million miles away'. People sing Cash's version now
and then over here in an Irish pub at Karaoke nights.

-- 
Marco



Re: kill the commas! (phobos code cleanup)

2014-09-07 Thread Marco Leise via Digitalmars-d
On Sunday, 7 September 2014 at 10:29:41 UTC, ketmar via 
Digitalmars-d wrote:
 but there is no need in extra work actually. using ASCII
 and English for program UI will work in any encoding.

I'm not so convinced that many people would be happy with
the reduction of their alphabet to ASCII. Some for aesthetics and
some for political reasons. Cyrillic, Arabic or Japanese just
wouldn't look right anymore. But I figure, your system is 100%
English anyways and you have no use for NLS ? :D

 index nth symbol! ucs-4 (aka dchar/dstring) is ok though.

Now you mentally map UCS-4 onto your 1-byte encoding and try
to see it as the same, just 4 times larger, and think that
C-style indexing solves all use cases.
But it doesn't. While Latin places letters in a sequence which
you could cut off anywhere, Korean uses blocks containing
multiple consonants and vowels. For truncation of text you
would be interested in the whole block or grapheme, not a
single vowel/consonant.

Am Sun, 07 Sep 2014 10:45:22 +
schrieb Ola Fosheim Grøstad
ola.fosheim.grostad+dl...@gmail.com:

 [...]
 
 I think the D approach to strings is unpleasant. You should not 
 have slices of strings, only slices of ubyte arrays.

Rust does that for at least OS paths.

 If you want real speedups for streams of symbols you have to move 
 into the landscape of huffman-encoding, tries, dedicated 
 datastructures…
 
 Having uniform string support in libraries (i.e. only supporting 
 utf-8) is a clear advantage IMO, that will allow for APIs that 
 are SSE backed and performant.

-- 
Marco



Re: Voting: std.logger

2014-09-08 Thread Marco Leise via Digitalmars-d
Am Mon, 08 Sep 2014 11:17:48 +
schrieb Robert burner Schadek rburn...@gmail.com:

 On Saturday, 30 August 2014 at 02:16:55 UTC, Dicebot wrote:
 
  ==
  Martin Nowak
  ==
 
  Support duck-typing for the log functions.
  Logger should be a concept and log functions should be 
  free-standing
  UFCS functions that take any `isLogger!T`.
  To support a global `defaultLog` variable, you could add a 
  Logger
  interface and a loggerObject shim. See
  http://dlang.org/phobos/std_range.html#inputRangeObject for 
  this a pattern.
 
  Neither seem to be addressed nor countered.
 
 Overly complicated IMO

This may sound surprising, but I believe if we want to make
Phobos consistent and give no incentive to roll your own
stuff, we should do this for a lot of APIs. Without going into
depth (but we could) there are good reasons to use classes and
there are good reasons to use duck typing structs.
Another API where this mixed scheme would apply is streams.
By using function templates with `if (isLogger!T)` and an
abstract class Logger, it will only get instantiated once for
all derived classes reducing template bloat, while allowing
custom instantiations for logger structs to avoid virtual
function calls or GC issues. So I agree with Martin.
It is a great way to bring the two camps together without
major casualties.
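A minimal sketch of what such a mixed scheme could look like. The names isLogger and the writeLogMsg signature are made up here for illustration and are not the proposed std.logger API:

```d
import std.stdio;

// Compile-time "concept" check: anything with a writeLogMsg
// method counts as a logger. (Made-up signature for this sketch.)
enum isLogger(T) = is(typeof((ref T t) { t.writeLogMsg("x"); }));

abstract class Logger
{
    abstract void writeLogMsg(string msg);
}

// Free-standing UFCS function with a template constraint: it is
// instantiated once for the whole class hierarchy (L = Logger)
// and separately per struct type, skipping the virtual call.
void log(L)(ref L logger, string msg) if (isLogger!L)
{
    logger.writeLogMsg(msg);
}

struct StderrLogger
{
    void writeLogMsg(string msg) { stderr.writeln(msg); }
}

void main()
{
    auto s = StderrLogger();
    s.log("direct call, no vtable");

    Logger l = new class Logger {
        override void writeLogMsg(string msg) { stderr.writeln(msg); }
    };
    l.log("one instantiation for all Logger subclasses");
}
```

This way the struct camp pays no virtual-call cost while the class camp keeps runtime polymorphism through the single Logger instantiation.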

-- 
Marco



Re: Voting: std.logger

2014-09-08 Thread Marco Leise via Digitalmars-d
Am Mon, 08 Sep 2014 11:06:42 +
schrieb Robert burner Schadek rburn...@gmail.com:

  
  Francesco Cattoglio
  
 
  As far as I undestood, there's no way right now to do logging
  without using the GC. And that means it is currently impossible
  to log inside destructor calls. That is a blocker in my book.
 
  First part partially addressed - missing @nogc @nothrow logger 
  implementation out of the box. […]
 
 at least for logf nothrow will not work because of a wrong 
 formatting string or args. log can not be nothrow because custom 
 toString for structs and class are allowed.
 
 nogc is not possible because of custom toString
 
 that won't fix, but for default types it is nogc.

It is fairly obvious that the next GC implementation needs to
allow allocations during a sweep. Maybe we should just assume
that it already works ?

-- 
Marco



Re: Voting: std.logger

2014-09-08 Thread Marco Leise via Digitalmars-d
Am Mon, 08 Sep 2014 13:37:02 +
schrieb Robert burner Schadek rburn...@gmail.com:

 On Monday, 8 September 2014 at 13:20:27 UTC, Robert burner 
 Schadek wrote:
  On Monday, 8 September 2014 at 12:36:29 UTC, Marco Leise wrote:
 
  I think the template bloat argument is invalid as __LINE__ and 
  friends are passed as template arguments to allow write and 
  writef type logging.

You are right, this benefit of classes doesn't apply here.

  Anyway I will try to make them free standing
 
 The biggest problem I have currently with this that you, or at 
 least I, can not override the free standing function.
 
 void log(L)(ref L logger) if(isLogger!L) { ... } will match always
 and if I create void log(L)(ref L logger) if(isMySpecialLogger!L) 
 { ... }
 both match and thats a nogo

Ok, no matter what the outcome is, I'll see if I can write a
simple file logger that I can use in RAII struct dtors (where
neither allocations nor throwing seem to be an issue) and that
has a fallback to writing to stderr. I wrote earlier that I
would want a fallback logger if writing via the network fails
or the disk is full, but maybe this logic can be implemented
inside a logger implementation. I haven't actually tried your
API yet!

-- 
Marco



Re: Which patches/mods exists for current versions of the DMD parser?

2014-09-08 Thread Marco Leise via Digitalmars-d
Am Mon, 08 Sep 2014 15:22:03 +
schrieb Ola Fosheim Grøstad
ola.fosheim.grostad+dl...@gmail.com:

 On Monday, 8 September 2014 at 15:09:27 UTC, Dicebot wrote:
  It is not about D community but about yourself. Do _you_ want 
  to be viewed as a valuable member of community? Do _you_ want 
  to receive on topic responses to your threads?
 
 I only want to receive a response on this thread from community 
 members who are willing to share their patches! Your contribution 
 to this thread is counter productive.
 
 Ketmar is a noble example that I'd encourage others to follow. 
 More people like him would bring D out of stagnation.
 
  If answer is yes, you will consider people expectation as much 
  as a license.
 
 No, I don't consider other people's disparage expectations on 
 this topic. I consider the orignal author's choice of license. I 
 am sure he considered the licensing-options and stands for his 
 own choice. If he does not, then an explanation from the original 
 author is in place.
 
  We add to the eco system. We don't detract from it.
 
  Bullshit. Any kind of forking wastes most valuable resource 
  open source world can possibly have - developer attention.
 
 Uhm, no. I would not use D in it's current incarnation so I need 
 to modify it. Ketmar and I are not DMD developers. We are 
 currently digging into the code base. Modifying the parser is a 
 good way to learn the AST. Maybe we one day will become DMD 
 developers, but this attitude you and others are exposing in this 
 thread and the bug-report-patch thread aint sexy. It's a turn off.
 
 What you are doing is telling prospective contributors that this 
 community is about cohesive military discipline. Totalitarian 
 regimes tend to run into trouble. I most definitely will never 
 join a cult that expose it as an ideal. I'm not one of your 
 lieutenants. Sorry.

And now we all calm down a little, ok? The D community is as
diverse as the language and even if three people yell in the
same tone, it doesn't mean everyone else believes the same.

On topic: I see no value in adding more ways to instantiate
templates. It only causes confusion for the reader.
Short syntax for declaring auto/const/immutable variables is
nice, because it probably saves typing and variable names are
all left aligned. You might want to check if you can really
fulfill the goal. E.g. sometimes your expression evaluates to
something const which you cannot store in an immutable
variable. Whereas a const variable can receive an immutable.
How do you go about pointers? I.e. Does :== declare an
immutable(char)[] or an immutable(char[])?
New Unicode operators. Personally I find them sexy, because √
is a short, well-known operator. But you may find people who
still require ASCII for source code. Also this specific
rewrite requires std.math to be imported, and like ^^ it will
cause bewildered looks when something breaks for a built-in
operator. So if you want to push this, make it an operator
that is understood by the front-end like ! or ~. Also you
might want to consider adding .opSqrt for consistency.
For array length we already have .length and .opDollar. Even
more ways to express the length? Granted it is one of the
most common properties you ask for in D code, but #arr looks
very unusual. Well, it is your fork, I'd say. If you ever make
any pull requests, be sure to propose one feature at a time; D
is already short on reviewers who understand the effects of
the code changes. And be sure to document the corner cases
you dealt with, especially with the :== operator.

-- 
Marco



Re: Which patches/mods exists for current versions of the DMD parser?

2014-09-08 Thread Marco Leise via Digitalmars-d
Am Mon, 8 Sep 2014 18:34:10 +0300
schrieb ketmar via Digitalmars-d digitalmars-d@puremagic.com:

 On Mon, 08 Sep 2014 17:25:07 +0200
 Timon Gehr via Digitalmars-d digitalmars-d@puremagic.com wrote:
 
  int square(int x)=x*x;
 noted.

To clarify: There is x^^2, but the implementation uses
pow(x,2) and presumably yields a real result instead of an
integer. So in that case the correct solution would be to
special case int^^int.

-- 
Marco




Re: Which patches/mods exists for current versions of the DMD parser?

2014-09-08 Thread Marco Leise via Digitalmars-d
Am Mon, 8 Sep 2014 20:27:41 +0300
schrieb ketmar via Digitalmars-d digitalmars-d@puremagic.com:

 On Mon, 8 Sep 2014 18:55:46 +0200
 Marco Leise via Digitalmars-d digitalmars-d@puremagic.com wrote:
 
  but #arr looks very unusual
 not for those who loves Lua. ;-)

... and Perl and Bash, yes.

-- 
Marco




Re: Which patches/mods exists for current versions of the DMD parser?

2014-09-08 Thread Marco Leise via Digitalmars-d
Am Mon, 08 Sep 2014 23:31:47 +
schrieb Dicebot pub...@dicebot.lv:

 […] fuck […] off-topic flamewar […] quite intentional.
 […] won't let you do that easily […] off-topic bullshit
 […] shooting people […] don't buy this […] attention whore
 […] troll […] retard […] You are crossing the line
 […] screw this language […] demagogue rhetorics
 […] face the reaction

*gulp*



Re: Which patches/mods exists for current versions of the DMD parser?

2014-09-08 Thread Marco Leise via Digitalmars-d
Am Mon, 08 Sep 2014 19:12:22 +0200
schrieb Timon Gehr timon.g...@gmx.ch:

 On 09/08/2014 07:00 PM, Marco Leise wrote:
  Am Mon, 8 Sep 2014 18:34:10 +0300
  schrieb ketmar via Digitalmars-d digitalmars-d@puremagic.com:
 
  On Mon, 08 Sep 2014 17:25:07 +0200
  Timon Gehr via Digitalmars-d digitalmars-d@puremagic.com wrote:
 
  int square(int x)=x*x;
  noted.
 
  To clarify:
 
 The above is not valid D 2.066 syntax.
 Your apparent confusion supports a point I made in favour of it some 
 time ago though. My post was about function declaration syntax, not 
 squaring numbers. I assume Ola will still want to support x² though. :o)

I have to say, that was clever. I really didn't notice the
wrong syntax until now. It doesn't get my vote though, so as
to keep some uniformity in function/method definitions. One-time
fire-and-forget lambdas are something different. They appear in
the middle of expressions etc.

  There is x^^2, but the implementation uses pow(x,2)
 
 Is this really still true?
 
  and presumably yields a real result
 
 No, both pow(x,2) and x^^2 yield an 'int' result.

Ok, memorized.
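A quick check of that statement, as compile-time assertions:

```d
// Both operands int, result int -- constant-folded, no real involved:
static assert(is(typeof(5 ^^ 2) == int));
static assert(5 ^^ 2 == 25);

void main() {}
```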

-- 
Marco



Re: Self-hosting D compiler -- Coming Real Soon Now(tm)

2014-09-11 Thread Marco Leise via Digitalmars-d
Am Thu, 11 Sep 2014 07:12:49 +0100
schrieb Iain Buclaw via Digitalmars-d
digitalmars-d@puremagic.com:

 By way of example, the version of D shipped with gcc-4.9 in
 Debian/Ubuntu is 2.065, if we were to switch now, then that compiler
 version will need to be able to build whatever will be the current
 when gcc-5.0 comes out.
 
 Iain.

For Gentoo I used the third version component as well. I found
it better matches the D release cycle:

4.8.1 = 2.063
4.8.2 = 2.064
4.8.3 = 2.065

-- 
Marco



Re: [Article] D's Garbage Collector Problem

2014-09-11 Thread Marco Leise via Digitalmars-d
Am Thu, 11 Sep 2014 14:30:05 +
schrieb Kagamin s...@here.lot:

 There are various api the compiler use to allocate from the GC. 
 Some do not specify if the allocated memory contains pointers or 
 not, and none do specify the type qualifier of the memory.
 
 Is it true about pointers? Which functions?
 And why type qualifiers matter?

Immutable data structures cannot have pointers changed or set
to null. Also they can only reference other immutable data.
This means that they form sort of a big blob that is kept
alive by one or more pointers to it, but the GC never needs
to check the immutable pointers inside of it.

Shared/unshared may affect implementations that provide thread
local GC. E.g. only shared data needs to be handled by a
global stop the world GC. I'm not sure though.

-- 
Marco



std.experimental.logger: practical observations

2014-09-11 Thread Marco Leise via Digitalmars-d
So I've implemented my first logger based on the abstract
logger class (colorize stderr, convert strings to the system
locale for POSIX terminals and to wstring on Windows consoles).

1. Yes, logging is slower than stderr.writeln("Hello, world!");
   It is a logging framework with timestamps, runtime
   reconfiguration, formatting etc. One has to accept that. :p

2. I noticed that as my logger implementation grew more complex
   and used functionality from other modules I wrote, that if
   these used logging as well I'd easily end up in a recursive
   logging situation.

   Can recursion checks be added somewhere
   before .writeLogMsg()?

3. Exceptions and logging don't mix.
   Logging functions expect the file and line to be the one
   where the logging function is placed. When I work with C
   functions I tend to call them through a template that will
   check the error return code. See:
   http://dlang.org/phobos/std_exception.html#.errnoEnforce
   Such templates pick up file and line numbers from where
   they are instantiated and pass them on to the exception
   ctor as runtime values.
   Now when I use error(), I see no way to pass it runtime
   file and line variables to make the log file reflect the
   actual file and line where the error occurred, instead of
   some line in the template or where ever I caught the
   exception.
   Not all errors/exceptions are fatal and we might just want
   to log an exception and continue with execution.
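A minimal sketch of what I mean, with a hypothetical logError that accepts file and line as runtime values (the name and output format are made up; Throwable already carries the throw site):

```d
import std.stdio;

// Hypothetical helper taking file/line as *runtime* values, so a
// caught exception can be logged with its throw site instead of
// the catch site. std.logger's error() resolves __FILE__/__LINE__
// at the call site instead, which is the problem described above.
void logError(string msg, string file = __FILE__, size_t line = __LINE__)
{
    stderr.writefln("%s(%s): %s", file, line, msg);
}

void main()
{
    try
        throw new Exception("disk full");
    catch (Exception e)
        logError(e.msg, e.file, e.line); // Throwable carries file/line
}
```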

-- 
Marco



Re: RFC: scope and borrowing

2014-09-11 Thread Marco Leise via Digitalmars-d
Am Thu, 11 Sep 2014 13:58:38 +
schrieb Marc Schütz schue...@gmx.net:

 PING
 
 Now that there are again several GC related topics being 
 discussed, I thought I'd bump this thread.
 
 Would be nice if Walter and/or Andrei could have a look and share 
 there opinions. Is this something worth pursuing further? Are 
 there fundamental objections against it?

I just needed this again for a stack based allocator. It would
make such idioms safer where you return a pointer into an RAII
struct and need to make sure it doesn't outlive the struct.
It got me a nasty overwritten stack. I cannot comment on the
implementation, just that I have long felt it is missing.

-- 
Marco



Re: std.experimental.logger: practical observations

2014-09-11 Thread Marco Leise via Digitalmars-d
Am Thu, 11 Sep 2014 21:32:44 +
schrieb Robert burner Schadek rburn...@gmail.com:

 On Thursday, 11 September 2014 at 16:55:32 UTC, Marco Leise wrote:
  2. I noticed that as my logger implementation grew more complex
 and used functionality from other modules I wrote, that if
 these used logging as well I'd easily end up in a recursive
 logging situation.
 
 Can recursion checks be added somewhere
 before .writeLogMsg()?
 
 I think I don't follow. Just to clear
 
 foo() {
  log(); bar();
 }
 
 bar() {
  log(); foo();
 }

Let me clarify. Here is some code from 2015:

void main()
{
    stdlog = new MyLogger();
    // This call may overflow the stack if
    // 'somethingBadHappened' in someFunc():
    error("ERROR!!!");
}

class MyLogger : Logger
{
    override void writeLogMsg(ref LogEntry payload)
    {
        auto bla = someFunc();
        useBlaToLog(bla, payload.msg);
    }
}

// This is just some helper function unrelated to logging,
// but it uses the stdlog functionality from Phobos itself
// as that is good practice in 2015.
auto someFunc()
{
    ...
    if (somethingBadHappened)
    {
        // Now I must not be used myself in a logger
        // implementation, or I overflow the stack!
        error("something bad in someFunc");
    }
    ...
}

  3. Exceptions and loggin don't mix.
 Logging functions expect the file and line to be the one
 where the logging function is placed. When I work with C
 functions I tend to call them through a template that will
 check the error return code. See:
 http://dlang.org/phobos/std_exception.html#.errnoEnforce
 Such templates pick up file and line numbers from where
 they are instantiated and pass them on to the exception
 ctor as runtime values.
 Now when I use error(), I see no way to pass it runtime
 file and line variables to make the log file reflect the
 actual file and line where the error occured, instead of
 some line in the template or where ever I caught the
 exception.
 Not all errors/exceptions are fatal and we might just want
 to log an exception and continue with execution.
 
 hm, I think adding template function as requested by dicebot 
 would solve that problem, as it would take line and friends as 
 function parameters

How do you log errors that also throw exceptions ?

-- 
Marco



Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-12 Thread Marco Leise via Digitalmars-d
Am Fri, 12 Sep 2014 13:45:45 +0200
schrieb Jacob Carlborg d...@me.com:

 On 12/09/14 08:59, Daniel Kozak via Digitalmars-d wrote:
 
  toUpperInPlace could help little, but still not perfect
 
 Converting text to uppercase doesn't work in-place in some cases. For 
 example the German double S will take two letters in uppercase form.

The German double S, I see ... Let me help you out of this.

The letter ß, named SZ, Eszett, sharp S, hunchback S, backpack
S, Dreierles-S, curly S or double S in Swiss, becomes SS in
upper case since 1967, because it is never used as the start
of a word and thus doesn't have an upper case representation
of its own. Before, from 1926 on, the translation was to SZ.
So a very old Unicode library might give you incorrect results.

The uppercase letter I on the other hand depends on the locale.
E.g. in England the lower case version is i, whereas in Turkey
it is ı, because they also have a dotted İ, which becomes i.

;)
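For illustration, a small sketch using std.uni's case mapping. Assuming full case mappings are in place, ß should expand to SS, which is exactly why an in-place conversion cannot always work:

```d
import std.stdio;
import std.uni : toUpper;

void main()
{
    // ß has no upper-case form of its own and expands to a
    // two-letter sequence, so the result may be longer than the
    // input -- which is why a strict in-place conversion fails.
    writeln("straße".toUpper);  // STRASSE, given full case mappings

    // std.uni is locale-independent: the Turkish i/İ and ı/I
    // pairs would need locale-aware handling on top of this.
}
```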

-- 
Marco



Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-12 Thread Marco Leise via Digitalmars-d
Am Thu, 11 Sep 2014 13:44:09 +
schrieb Adam D. Ruppe destructiona...@gmail.com:

 On Thursday, 11 September 2014 at 12:38:54 UTC, Andrey Lifanov 
 wrote:
  And I think of idea of complete extraction of GC from D.
 
 You could also recompile the runtime library without the GC. 
 Heck, with the new @nogc on your main, the compiler (rather than 
 the linker) should even give you nicish error messages if you try 
 to use it, but I've done it before that was an option.
 
 Generally though, GC fear is overblown. Use it in most places and 
 just don't use it where it makes things worse.

The Higgs JIT compiler running 3x faster just because you call
GC.reserve(1024*1024*1024); shows how much fear is appropriate
(with this GC implementation).

-- 
Marco



Re: std.experimental.logger: practical observations

2014-09-12 Thread Marco Leise via Digitalmars-d
Am Fri, 12 Sep 2014 09:46:18 +
schrieb Robert burner Schadek rburn...@gmail.com:

 On Thursday, 11 September 2014 at 22:10:01 UTC, Marco Leise wrote:
  Let me clarify. Here is some code from 2015:
 
  void main()
  {
  stdlog = new MyLogger();
  // This call may overflow the stack if
  // 'somethingBadHappened' in someFunc():
  error("ERROR!!!");
  }
 
  class MyLogger : Logger
  {
  override void writeLogMsg(ref LogEntry payload)
  {
  auto bla = someFunc();
  useBlaToLog(bla, payload.msg);
  }
  }
 
  // This is just some helper function unrelated to logging
  // but it uses the stdlog functionality from Phobos itself
  // as that is good practice in 2015.
  auto someFunc()
  {
  ...
  if (somethingBadHappened)
  {
  // Now I must not be used myself in a logger
  // implementation, or I overflow the stack!
  error("something bad in someFunc");
  }
  ...
  }
 
 well you could set the LogLevel to off and reset it afterwards

Remember that the stdlog is __gshared? Imagine we set the
LogLevel to off and while executing writeLogMsg ...

* a different thread wants to log a warning to stdlog
* a different thread wants to inspect/set the log level

It is your design to have loggers shared between threads.
You should go all the way to make them thread safe.

* catch recursive calls from within the same thread,
  while not affecting other threads' logging
* make Logger a shared class and work with atomicLoad/Store,
  a synchronized class or use the built-in monitor field
  through synchronized(this) blocks.
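A minimal sketch of such a per-thread recursion check (hypothetical, not part of the proposed API):

```d
import std.stdio;

// Sketch of a per-thread recursion guard: module-level variables
// are thread-local in D by default, so the flag blocks recursion
// within one thread without affecting other threads' logging.
bool inLogCall;

void guarded(scope void delegate() logCall)
{
    if (inLogCall)
        return;                  // drop the recursive attempt
    inLogCall = true;
    scope (exit) inLogCall = false;
    logCall();
}

void main()
{
    guarded({
        writeln("writeLogMsg runs");
        // a helper used inside the logger tries to log as well:
        guarded({ writeln("never printed"); });
    });
}
```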

  3. Exceptions and loggin don't mix.
  How do you log errors that also throw exceptions ?
 
 please elaborate. I think I misunderstand

I know when to throw an exception, but I never used logging
much. If some function throws, would I also log the same
message with error() one line before the throw statement?
Or would I log at the place where I catch the exception?
What to do about the stack trace when I only have one line per
log entry?
You see, I am a total newbie when it comes to logging and from
the question that arose in my head I figured exceptions and
logging don't really mix. Maybe only info() and debug() should
be used and actual problems left to exception handling alone.

-- 
Marco



Re: C++/D interface: exceptions

2014-09-12 Thread Marco Leise via Digitalmars-d
Am Fri, 12 Sep 2014 15:55:37 +
schrieb Sean Kelly s...@invisibleduck.org:

 On Friday, 12 September 2014 at 06:56:29 UTC, Jacob Carlborg 
 wrote:
  On 64bit Objective-C can catch C++ exceptions. But I don't 
  think you can do anything with the exception, i.e. it uses the 
  following catch syntax:
 
  @catch(...) {}
 
  Would that be easier?
 
 I think the trick is setting up the stack frame in such a way 
 that the C++ exception mechanism knows there's a catch block 
 available at all.  From there, we should be able to use the 
 standard interface-to-class method to call virtual functions on 
 the exception object, and hopefully the C++ runtime will handle 
 cleanup for us.

What exception object?

throw "bad things happened";

-- 
Marco



Re: C++/D interface: exceptions

2014-09-12 Thread Marco Leise via Digitalmars-d
Am Thu, 11 Sep 2014 17:35:25 -0700
schrieb Andrei Alexandrescu seewebsiteforem...@erdani.org:

 Hello,
 
 
 We are racking our brains to figure out what to do about exceptions 
 thrown from C++ functions into D code that calls them.
 
 A few levels of Nirvana would go like this:
 
 0. Undefined behavior - the only advantage to this is we're there 
 already with no work :o).
 
 1. C++ exceptions may be caught only by C++ code on the call stack; D 
 code does the stack unwinding appropriately (dtors, scope statements) 
 but can't catch stuff.
 
 2. D code can catch exceptions from C++ (e.g. via a CppException wrapper 
 class) and give some info on them, e.g. the what() string if any.
 
 Making any progress on this is likely to be hard work, so any idea that 
 structures and simplifies the design space would be welcome.
 
 
 Andrei

I would say aim for 1. I wouldn't expect any less or any more.
Exception handling seems to have a platform-wide standard on
major OSs that D should follow (e.g. libunwind helps with
this on GCC-dominated systems), but dealing with C++'s "throw
anything" seems overkill to me for the next milestone in C++
interop. After all there could be exceptions using multiple
inheritance, templated objects or basic data types thrown from
the C++ side.

-- 
Marco


