Re: Another bug in function overloading?

2014-04-26 Thread Jonathan M Davis via Digitalmars-d-learn
On Sat, 26 Apr 2014 06:55:38 +
Domain via Digitalmars-d-learn digitalmars-d-learn@puremagic.com
wrote:

 module test;
 
 public interface I
 {
  void foo();
  void foo(int);
 }
 
 public abstract class A : I
 {
  public void bar()
  {
  foo();
  }
 
  public void foo(int i)
  {
  }
 }
 
 public class C : A
 {
  public void foo()
  {
  }
 
  public void bar2()
  {
  foo(1);
  }
 }
 
 Error: function test.A.foo (int i) is not callable using argument 
 types ()
 Error: function test.C.foo () is not callable using argument 
 types (int)


No. That's expected. If you've overloaded a function from a base class,
only the functions in the derived class are in the overload set, so you
have to bring the base class' overload into the overload set by either
overriding the base class overload in the derived class or by aliasing
it in the derived class. e.g.

module test;

public interface I
{
void foo();
void foo(int);
}

public abstract class A : I
{
public void bar()
{
foo();
}

alias I.foo foo;
public void foo(int i)
{
}
}

public class C : A
{
alias A.foo foo;
public void foo()
{
}

public void bar2()
{
foo(1);
}
}

- Jonathan M Davis


Re: how to print ubyte*

2014-04-30 Thread Jonathan M Davis via Digitalmars-d-learn
On Wed, 30 Apr 2014 07:27:23 +
brad clawsie via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:

 hi, I'm back again with another openssl related question.
 
 given this program
 
 --
 
import std.stdio;
import deimos.openssl.hmac;
import deimos.openssl.evp;
 
void main() {
HMAC_CTX *ctx = new HMAC_CTX;
HMAC_CTX_init(ctx);
auto key = "123456";
auto s = "hello";
 
auto digest = HMAC(EVP_sha1(),
   cast(void *) key,
   cast(int) key.length,
   cast(ubyte*) s,
   cast(int) s.length,
   null,null);
}
 
 --
 
 digest should be of type ubyte*
 
 does anyone know how to print this out as ascii?

If you want to print a ubyte*, then you can do something like

auto str = cast(char[])digest[0 .. lengthOfDigest];
writeln(str);

Slicing the pointer results in an array, and you can cast ubyte[] to
char[], which will print as characters rather than their integral
values.
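
For illustration, here's a minimal, self-contained sketch of that idiom. The
buffer and its length are stand-ins for what the real HMAC call would give you
(a SHA-1 HMAC digest is 20 bytes long):

import std.stdio;

void main()
{
    // Stand-in for the buffer that HMAC() would return.
    ubyte[20] buffer = 65; // fill with 'A' so the output is printable
    ubyte* digest = buffer.ptr;
    size_t lengthOfDigest = buffer.length;

    // Slicing the pointer gives a ubyte[], and casting that to char[]
    // makes writeln print characters rather than integral values.
    auto str = cast(char[]) digest[0 .. lengthOfDigest];
    writeln(str);
}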

- Jonathan M Davis


Re: Strings concatenated at compile time?

2014-05-01 Thread Jonathan M Davis via Digitalmars-d-learn
On Thu, 01 May 2014 11:12:41 +
anonymous via Digitalmars-d-learn digitalmars-d-learn@puremagic.com
wrote:

 On Thursday, 1 May 2014 at 10:42:36 UTC, Unwise wrote:
  In the following example from the documentation, are strings 
  concatenated at compile time?
 
  template foo(string s) {
  string bar() { return s ~ " betty"; }
  }
 
  void main() {
  writefln("%s", foo!("hello").bar()); // prints: hello betty
  }
 
 I guess it's not guaranteed, but constant folding should take 
 care of it, yes.

If you want it to be guaranteed, you'd do something like

template foo(string s)
{
    enum foo = s ~ " betty";
}

void main()
{
    writeln(foo!"hello");
}

I would hope that the optimizer would have optimized out the
concatenation in your example though.
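
For example, this minimal, compilable version of the above guarantees the
result at compile time - the static assert wouldn't pass otherwise:

import std.stdio;

template foo(string s)
{
    // An enum is a manifest constant, so the concatenation is
    // guaranteed to happen at compile time.
    enum foo = s ~ " betty";
}

// static assert only passes if the value is known at compile time.
static assert(foo!"hello" == "hello betty");

void main()
{
    writeln(foo!"hello"); // prints: hello betty
}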

- Jonathan M Davis


Re: map!(char)(string) problem

2014-05-03 Thread Jonathan M Davis via Digitalmars-d-learn
On Sat, 03 May 2014 14:47:56 -0700
David Held via Digitalmars-d-learn digitalmars-d-learn@puremagic.com
wrote:

 import std.algorithm;
 
 int toInt(char c) { return 1; }
 
 void main()
 {
 map!(a => toInt(a))("hello");
 }
 
 Can someone please explain why I get this:
 
 Bug.d(10): Error: function Bug.toInt (char c) is not callable using 
 argument types (dchar)
 ^^^
 D:\D\dmd2\windows\bin\..\..\src\phobos\std\algorithm.d(425): Error: 
 template instance Bug.main.__lambda1!dchar error instantiating
 D:\D\dmd2\windows\bin\..\..\src\phobos\std\algorithm.d(411): 
 instantiated from here: MapResult!(__lambda1, string)
 Bug.d(10):instantiated from here: map!string
 D:\D\dmd2\windows\bin\..\..\src\phobos\std\algorithm.d(411): Error: 
 template instance Bug.main.MapResult!(__lambda1, string) error
 instantiating Bug.d(10):instantiated from here: map!string
 Bug.d(10): Error: template instance Bug.main.map!((a) => 
 toInt(a)).map!string error instantiating
 
 I thought that string == immutable char[], but this implies that it
 is getting inferred as dchar[], I guess.

All strings are treated as ranges of dchar by Phobos.

http://stackoverflow.com/questions/12288465

If you really want to operate on strings as ranges of code units rather
than code points, then you need to use std.string.representation and
convert them to the equivalent integral types (e.g. immutable(ubyte)[]).
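
As a rough sketch of what that looks like with the example above (the lambda
and the printed result are just for illustration):

import std.algorithm : map;
import std.array : array;
import std.stdio : writeln;
import std.string : representation;

int toInt(char c) { return 1; }

void main()
{
    // Over a string, map sees dchar (code points), so toInt(char) doesn't match.
    // Over the representation, map sees immutable(ubyte) (code units).
    auto perCodeUnit = "hello".representation.map!(a => toInt(cast(char) a));
    writeln(perCodeUnit.array); // [1, 1, 1, 1, 1]
}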


- Jonathan M Davis


Re: const ref parameters and r-value references

2014-05-04 Thread Jonathan M Davis via Digitalmars-d-learn
On Fri, 02 May 2014 08:17:06 +
Mark Isaacson via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:

 I'm in the process of learning/practicing D and I noticed
 something that seems peculiar coming from a C++ background:

 If I compile and run:

 void fun(const ref int x) {
//Stuff
 }

 unittest {
fun(5); //Error! Does not compile
 }

 I get the specified error in my unit test. I understand that the
 cause is that I've attempted to bind ref to an r-value, what's
 curious is that in C++, the compiler realizes that this is a
 non-issue because of 'const' and just 'makes it work'. Is there a
 rationale behind why D does not do this? Is there a way to write
 'fun' such that it avoids copies but still pledges
 const-correctness while also allowing r-values to be passed in?

By design, in D, ref only accepts lvalues. Unlike in C++, constness has no
effect on that. IIRC, the reasons have something to do with being able to tell
whether the argument is indeed an lvalue or not as well as there being
implementation issues with the fact that const T& foo in C++ doesn't necessarily
have a variable for foo to refer to. I don't think that I've ever entirely
understood the rationale behind it, but Andrei is quite adamant on the matter,
and it's been argued over quite a few times. I don't know whether D's choice
on the matter is right or not, but it's not changing at this point.
Regardless, the problem that it generates is the fact that we don't have a
construct which does what C++'s const T& does - i.e. indicate that you want
to accept both lvalues and rvalues without making a copy.

Andrei suggested auto ref to fix this problem, and Walter implemented it, but
he misunderstood what Andrei had meant, so the result was a template-only
solution. If you declare

auto foo(T)(auto ref T bar) {...}

then when you call foo with an lvalue, foo will be instantiated with bar being
a ref T, whereas if foo is called with an rvalue, it will be instantiated with
bar being a T. In either case, no copy will take place. However, it requires
that foo be templated, and it results in a combinatorial explosion of template
instantiations as more auto ref parameters are added, and the function is used
with various combinations of lvalues and rvalues.
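
As a quick sketch of that behavior (the writeln and the __traits(isRef, ...)
check are just there to show which instantiation you get):

import std.stdio;

auto foo(T)(auto ref T bar)
{
    // true when foo was called with an lvalue (bar is ref),
    // false when it was called with an rvalue (bar holds the temporary).
    writeln(__traits(isRef, bar) ? "called with an lvalue" : "called with an rvalue");
}

void main()
{
    int x = 42;
    foo(x);  // instantiated with bar as ref int
    foo(42); // instantiated with bar as int
}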

The alternative is to declare each overload yourself:

auto foo(ref T bar) {...}
auto foo(T bar) {...}

That doesn't require the function to be templated, and it works for one, maybe
two function parameters, but you have the same combinatorial explosion of
function declarations as you had with template instantiations with auto ref -
except now you're declaring them all explicitly yourself instead of the
compiler generating them for you.

What has been suggested is that we have a way to mark a non-templated function
as accepting both lvalues and rvalues - e.g.

auto foo(NewRefThingy T bar) {...}

and what it would do is make it so that underneath the hood, foo would
actually be

auto foo(ref T bar) {...}

but instead of giving an error when you pass it an rvalue, it would do the
equivalent of

auto temp = returnsRValue();
foo(temp);

so that foo would have an rvalue. You still wouldn't get any copying
happening, you'd only have to declare one function, and you wouldn't get a
combinatorial explosion of function declarations or template instantiations.
I believe that that is essentially what Andrei originally intended for auto
ref to be.

The problem is that we don't want to introduce yet another attribute to do
this. We could reuse auto ref for it, so you'd do

auto foo(auto ref T bar) {...}

but that either means that we redefine what auto ref does with templated
functions (which would be a problem, because it's actually useful there for
other purposes, because it's the closest thing that we have to perfect
forwarding at this point), or we make it so that auto ref does something
different with normal functions than it does with template functions, which
could be confusing, and you might actually like to be able to use the
non-templated auto ref solution with templated functions. It should be
possible _some_ of the time for the compiler to determine that it can optimize
the templated version into the non-templated one (i.e. when it can determine
that the forwarding capabilities of the templated auto ref aren't used), but
it's not clear how well that will work, or whether it's an acceptable
solution.

Walter has suggested that we just redefine ref itself to do what I just
described rather than using auto ref or defining a new attribute. However,
both Andrei and I argued with him quite a bit over that, because that makes it
so that you can't tell whether a ref argument is intended to mutate what's
being passed in, or whether it's just an optimization (and you can't just use
const in all of the situations where you don't want the mutation, because D's
const is far more restrictive than C++'s const). Others agree with us, and
there are probably some that agree with Walter, but I don't 

Re: const ref parameters and r-value references

2014-05-04 Thread Jonathan M Davis via Digitalmars-d-learn
On Sun, 04 May 2014 19:08:27 +
Mark Isaacson via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:

 Thanks for the insights! I suppose we'll get a chance to see
 where things stand at this year's dconf.

 It's quite interesting that D's concept of r-values seems less
 developed than C++. Here's hoping that that only results in a
 better thought out solution.

Well, IIRC, rvalue references are exactly what Andrei wants to avoid due to the
large number of complications that they introduce to the language. Ultimately,
we want a solution in D that is simpler but still does the job. We have
"simpler", but haven't quite sorted out the "still does the job" part. I expect
that we'll get there eventually, but we really should have
gotten there long before now and haven't.

- Jonathan M Davis


Re: Implicit static-dynamic arr and modifying

2014-05-06 Thread Jonathan M Davis via Digitalmars-d-learn
On Mon, 05 May 2014 22:16:58 -0400
Nick Sabalausky via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:

 On 5/5/2014 10:11 PM, Nick Sabalausky wrote:
 Is this kind of stuff a sane thing to do, or does it just work by
  accident?:
 
  void modify(ubyte[] dynamicArr)
  {
   dynamicArr[$-1] = 5;
  }
 
  void main()
  {
   ubyte[4] staticArr = [1,1,1,1];
   modify(staticArr);
   assert(staticArr == [1,1,1,5]);
  }

 Duh, it's just using a normal slice of the static array...

 // Roughly:
 dynamicArr.ptr = staticArr;
 dynamicArr.length = typeof(staticArr).sizeof;

 So all is well, and deliberately so. Pardon the noise.

It's definitely deliberate, though I think that it's a flaw in the language's
design. IMHO, static arrays should never be automatically sliced, but
unfortunately, changing that would break too much code at this point. The
biggest problem is the fact that it's inherently unsafe, though unfortunately,
the compiler currently considers it @safe:

https://issues.dlang.org/show_bug.cgi?id=8838
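
To make the problem concrete, here's a minimal sketch (not from the original
post) of the kind of code that currently compiles even though the slice
outlives the static array:

ubyte[] escape(ubyte[] arr)
{
    return arr; // keeps a reference to whatever was sliced
}

ubyte[] oops()
{
    ubyte[4] local = [1, 2, 3, 4];
    // local is implicitly sliced here, so the returned dynamic array
    // ends up pointing at stack memory that's gone once oops() returns.
    return escape(local);
}

void main()
{
    auto bad = oops(); // compiles, but bad now refers to dead stack memory
}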

- Jonathan M Davis


Re: [Rosettacode] D code line length limit

2014-05-07 Thread Jonathan M Davis via Digitalmars-d-learn
On Wed, 07 May 2014 13:39:55 +
Meta via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote:
 Maybe D programmers need to adopt a new convention for
 annotations in the long term. Instead of:

 void doSomething(int n) pure @safe @nogc nothrow
 {
 }

 We should write:

 pure @safe @nogc nothrow
 void doSomething(int n)
 {
 }

My eyes... Oh, how that hurts readability. Obviously, folks are free to format
their code how they like, but I very much hope that that style never becomes
prevalent. About the only place in a signature that I like to break it up
across lines is with the parameters, and I generally will let the signature
line get to the limit before I will break it up (which in the case of Phobos,
would be 120 characters, since it has a hard limit of 120, and a soft limit of
80 - and functions are one place where I will definitely go past the soft
limit).

Regardless, formatting code is one area that's always going to be highly
subjective.

- Jonathan M Davis


Re: [Rosettacode] D code line length limit

2014-05-08 Thread Jonathan M Davis via Digitalmars-d-learn
On Wed, 07 May 2014 18:51:58 +
Meta via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote:

 On Wednesday, 7 May 2014 at 14:40:37 UTC, Jonathan M Davis via
 Digitalmars-d-learn wrote:
  My eyes... Oh, how that hurts readability.

 While I agree that

 pure @safe @nogc nothrow
 void doSomething(int n)
 {
 }

 is quite ugly, it is really not much worse than

 void doSomething(int n) pure @safe @nogc nothrow
 {
 }

 I would argue that the latter hurts readability more, as parsing
 meaning from long lines is difficult for humans. Also, we can
 always go deeper ;-)

Actually, I find the second version perfectly readable, and I think that it is
by far the best way for that function signature to be written, whereas I find
the first one to be much, much harder to read. But ultimately, this sort of
thing pretty much always ends up being highly subjective.

- Jonathan M Davis


Re: [Rosettacode] D code line length limit

2014-05-08 Thread Jonathan M Davis via Digitalmars-d-learn
On Thu, 08 May 2014 07:29:08 +
bearophile via Digitalmars-d-learn digitalmars-d-learn@puremagic.com
wrote:

 Jonathan M Davis:

  ultimately, this sort of
  thing pretty much always ends up being highly subjective.

 But please put the const/immutable of methods on the right:

 struct Foo {
  void bar1() const {} // Good
  const void bar2() {} // Bad
 }

Well, that's one case where there's actually an objective reason to put it on
the right due to one of the flaws in the language - that it's the one place
that const inconsistently does not apply to the type immediately to its right
(though it's consistent with how attributes are applied to the function itself
- just not consistent with variables).

It's also because of this that I favor putting most attributes on the right
(though that's subjective, unlike with const). I only put attributes on the
left if they're on the left in C++ or Java (e.g. static, public, or final).
Everything else goes on the right.

Unfortunately, making this consistent by doing something like enforcing that
all function attributes go on the right would then be inconsistent with other
languages with regards to the attributes that they have which go on the left,
so I don't know how we could have gotten it completely right. No matter which
way you go, it's inconsistent in one way or another. If it were up to me, I'd
probably enforce that all attributes which could be ambiguous go on the right
but that all others could go on either side, but Walter has never liked that
idea. So, we're stuck with arguing that everyone should put them on the right
by convention in order to avoid the ambiguity.
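
A short sketch of the ambiguity in question (the struct and its members are
made up for illustration):

struct S
{
    int* p;

    // Looks like it returns const(int*), but the const actually applies
    // to the member function (i.e. to the implicit this), not to the
    // return type.
    const int* f() { return null; }

    // To get a const return type while keeping the method non-const,
    // you have to parenthesize it.
    const(int)* g() { return p; }

    // Unambiguous: const on the right always marks a const method.
    int* h() const { return null; }
}

void main() {}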

- Jonathan M Davis


Re: throws Exception in method

2014-05-08 Thread Jonathan M Davis via Digitalmars-d-learn
On Thu, 08 May 2014 09:15:13 +
amehat via Digitalmars-d-learn digitalmars-d-learn@puremagic.com
wrote:

 Hello everyone,

 in java, you can have exceptions on methods.
 Thus we can write:
 public static void control (String string) throws
 MyException {}

 Is that possible in D and if so how does it work? If I write this
 D:

 public void testMe () throws MyException {}

 The compiler refuses to compile.

 What is the proper behavior for this D?

 thank you

At this point, the programming community at large seems to have decided that
while checked exceptions seem like a good idea, they're ultimately a bad one.
This article has a good explanation from one of the creators of C# as to why:

http://www.artima.com/intv/handcuffs.html

At this point, Java is the only language I'm aware of which has checked
exceptions (though there may be a few others somewhere), and newer languages
have learned from Java's mistake and chosen not to have them.

What D has instead is the attribute nothrow. Any function marked with nothrow
cannot throw an exception. e.g.

auto func(int bar) nothrow {...}

It's similar to C++11's noexcept except that it's checked at compile time
(like Java's checked exceptions), whereas noexcept introduces a runtime check.

If a function is not marked with nothrow, then the only ways to know what it
can throw are to read the documentation (which may or may not say) or to read
the code. There are obviously downsides to that in comparison to checked
exceptions, but the consensus at this point is that it's ultimately better.
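
For example, here's a minimal sketch of what that looks like in practice - a
nothrow function has to handle anything that might throw, and the compiler
verifies it:

import std.conv : to;

// nothrow is verified at compile time: this function may not let an
// exception escape.
int parseOrDefault(string s) nothrow
{
    try
    {
        return to!int(s); // to!int can throw, so it must be wrapped
    }
    catch (Exception e)
    {
        return 0;
    }
}

void main()
{
    assert(parseOrDefault("42") == 42);
    assert(parseOrDefault("oops") == 0);
}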

- Jonathan M Davis


Re: [Rosettacode] D code line length limit

2014-05-08 Thread Jonathan M Davis via Digitalmars-d-learn
On Thu, 08 May 2014 09:30:38 +
bearophile via Digitalmars-d-learn digitalmars-d-learn@puremagic.com
wrote:

 Jonathan M Davis:

  Unfortunately, making this consistent by doing something like
  enforcing that
  all function attributes go on the right would then be
  inconsistent with other
  languages with regards to the attributes that they have which
  go on the left,

 This is a job for a Lint. like DScanner :-)

Sure, that could point out that putting const on the left is bug-prone and
warn you that you should change it, but while it's not really possible to
have a fully consistent design with regards to function attributes, I still
think that allowing const on the left is simply a bad design decision. A
linter is then just a way to help you work around that bad design decision.

- Jonathan M Davis



Re: [Rosettacode] D code line length limit

2014-05-08 Thread Jonathan M Davis via Digitalmars-d-learn
On Thu, 08 May 2014 10:27:17 +
bearophile via Digitalmars-d-learn digitalmars-d-learn@puremagic.com
wrote:

 Jonathan M Davis:

  I still think that allowing const on the left is simply a
  bad design decision.

 I opened a request on this, and it was closed down :-)

I know. Walter doesn't agree that it was a bad decision. He thinks that being
consistent with the other function attributes and letting them all be on both
sides of the function is more important, but given the fact that that's
highly error-prone for const and immutable, I definitely think that it was a
bad decision. Unfortunately, we're stuck with it, because Walter doesn't
agree, and I don't expect that anyone is going to be able to convince him.

- Jonathan M Davis


Re: [Rosettacode] D code line length limit

2014-05-08 Thread Jonathan M Davis via Digitalmars-d-learn
On Thu, 8 May 2014 07:32:52 -0700
H. S. Teoh via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:

 On Thu, May 08, 2014 at 01:59:58AM -0700, Jonathan M Davis via
 Digitalmars-d-learn wrote:
  On Thu, 08 May 2014 07:29:08 +
  bearophile via Digitalmars-d-learn
  digitalmars-d-learn@puremagic.com wrote:
 
   Jonathan M Davis:
  
ultimately, this sort of thing pretty much always ends up being
highly subjective.

 FWIW, for very long function signatures I write it this way:

   const(T)[] myVeryLongFunction(T)(const(T)[] arr,
                                    int x,
                                    int y,
                                    int z,
                                    ExtraArgs extraArgs)
   pure @safe nothrow
   if (is(T : int) &&
       someOtherLongCriteria!T &&
       yetMoreVeryLongCriteria!T)
   {
       ...
   }

That's what I do with the params, but I'd still put the attributes to the
right of the last param rather than on their own line. I don't think that I'd
_ever_ put attributes on their own line. But as always, it's all quite
subjective, and everyone seems to prefer something at least slightly
different.

- Jonathan M Davis


Re: question about passing associative array to a function

2014-05-11 Thread Jonathan M Davis via Digitalmars-d-learn
On Sun, 11 May 2014 17:00:13 +
 Remind me again why we can't just change this to a sensible
 initial state? Or at least add a .initialize()?

All reference types have a null init value. Arrays and classes have the exact
same issue as AAs. Anything else would require not only allocating memory but
would require that that state persist from compile time to runtime, because
the init value must be known at compile time, and there are many cases, where
a variable exists at compile time (e.g. a module-level or static variable),
making delayed initialization problematic. Previously, it was impossible to
allocate anything other than arrays at compile time and have it's state
persist through to runtime, though it's not possible to do that with classes
(I don't know about AAs).

So, it _might_ now be possible to make it so that AAs had an init value other
than null, but because there's only one init value per type, even if the init
value for AAs wasn't null, it wouldn't solve the problem. It would just result
in all AAs of the same type sharing the same value unless they were directly
initialized rather than having their init value used.

Essentially, the way that default-initialization works in D makes it so that a
default-initialized AA can't be its own value like you're looking for. For
that, we'd need default construction (like C++ has), but then we'd lose out on
the benefits of having a known init value for all types and would have the
problems that that was meant to solve. It causes us problems with structs too
for similar reasons (the lack of default construction there also gets
complained about fairly frequently).

Ultimately, it's a set of tradeoffs, and you're running into the negative
side of this particular one.
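
To illustrate the current behavior concretely (a minimal example, not from the
original thread):

void main()
{
    int[string] aa;          // default-initialized: aa is null
    assert(aa is null);
    assert(aa.length == 0);  // reading length on a null AA is fine

    aa["answer"] = 42;       // the first insertion allocates the AA
    assert(aa !is null);

    int[string] bb = aa;     // bb now refers to the same AA as aa
    bb["other"] = 1;
    assert(aa.length == 2);  // reference semantics, like arrays and classes
}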

- Jonathan M Davis


Re: Templating everything? One module per function/struct/class/etc, grouped by package?

2014-05-12 Thread Jonathan M Davis via Digitalmars-d-learn
On Mon, 12 May 2014 08:37:42 +
JR via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote:

 Given that...

 1. importing a module makes it compile the entirety of it, as
 well as whatever it may be importing in turn
 2. templates are only compiled if instantiated
 3. the new package.d functionality

 ...is there a reason *not* to make every single
 function/struct/class separate submodules in a package, and make
 *all* of those templates? Unused functionality would never be
 imported nor instantiated, and as such never be compiled, so my
 binary would only include what it actually uses.


 std/stdio/package.d:
  module std.stdio;
  // still allows for importing the entirety of std.stdio

  public import std.stdio.foo;
  public import std.stdio.writefln;
  __EOF__


 std/stdio/foo.d:
  module std.stdio.foo;

  void fooify(Args...)(Args args)
  if (Args.length > 0)
  {
  // ...
  }

  void fooify()()
  {
  // don't need this, won't compile this
  }
  __EOF__


 std/stdio/writefln.d;
  module std.stdio.writefln;

  // nevermind the incompatible signature
  auto writefln(string pattern, Args...)(Args args)
  if (!pattern.canFind(PatternIdentifier.json))
  {
  // code that doesn't need std.json -- it is never
 imported
  }
  __EOF__


 What am I missing?

Well, that would be a lot of extraneous files, which would be very messy IMHO.
It also makes it much harder to share private functionality, because
everything is scattered across modules - you'd be forced to use package for
that. It also wouldn't surprise me if it cost more to compile the code that
way if you were actually using most of it (though it may very well save
compilation time if you're using a relatively small number of the functions
and types). So, from a purely organization perspective, I think that it's a
very bad idea, though others may think that it's a good one. And since
package.d imports all of those modules anyway, separating them out into
separate files didn't even help you any.

Also, templates cost more to compile, so while you may avoid having to compile
some functions, because you're not using them, everything that _does_ get
compiled will take longer to compile. And if you templatize them in a way that
would result in more template instantiations (e.g. you actually templatize the
parameters rather than just giving the function an empty template parameter
list), then not only will the functions have to be compiled more frequently
(due to multiple instantiations), but they'll take up more space in the final
binary. Also, while D does a _much_ better job with template errors than C++
does, template-related errors in D are still far worse than with functions
that aren't templated, so you're likely going to cost yourself more time
debugging template-related compilation errors than you ever would gain in
reduced compilation times.

In addition, if a function is templatized, it's harder to use it with function
prototypes, which can be a definite problem for some code. It's also a
horrible idea for libraries to have functions templatized just to be
templatized, because that means that a function has to be compiled _every_
time that a program uses it rather than having it compiled once when the
library is compiled (the function would still have to be parsed unless it just
had its signature in a .di file, but that's still faster than full
compilation - and if a .di file is used, then all that has to be parsed is the
signature). So, while it's often valuable to templatize functions,
templatizing them to save compilation times is questionable at best.

D already does a _very_ good job at compiling quickly. Often, the linking step
costs more than the actual compilation does (though obviously, as programs
grow larger, the compilation time does definitely exceed the link time).
Unless you're running into problems with compilation speed, I'd strongly
advise against trying to work around the compiler to speed up code
compilation. Templatize functions when it makes sense to do so, but don't
templatize them just in an attempt to avoid having them be compiled.

If you're looking to speed up compilation times, it makes far more sense to
look at doing things like reducing how much CTFE you use and how many
templates you use. CTFE in particular is ridiculously expensive thanks to it
effectively just being hacked into the compiler originally. Don has been doing
work to improve that, and I expect it to improve over time, but I don't know
how far along he is, and I don't know that it'll ever be exactly cheap to use
CTFE.

Keep in mind that lexing and parsing are the _cheap_ part of the compiler. So,
importing stuff really doesn't cost you much. Already, the compiler won't
fully compile all of the symbols within a module except when it's compiling
that module. Simply importing it just causes it to process the module as much
as required to 

Re: Why std.algorithm.sort can't be applied to char[]?

2014-05-12 Thread Jonathan M Davis via Digitalmars-d-learn
On Mon, 12 May 2014 14:49:52 +
hane via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote:

 and is there any way to sort char array with algorithm.sort?
 ---
 import std.algorithm;
 import std.range;

 void main()
 {
int[] arr = [5, 3, 7];
sort(arr); // OK

char[] arr2 = ['z', 'g', 'c'];
sort(arr2); // error
 sort!q{ a[0] < b[0] }(zip(arr, arr2)); // error
 }
 ---
 I don't know what's difference between int[] and char[] in D, but
 it's very unnatural.

All strings in D are treated as ranges of dchar, not their element type. This
has to do with the fact that a char or wchar may be only part of a character.
If you want to sort arrays of characters, you need to use dchar[].

http://stackoverflow.com/questions/12288465
http://stackoverflow.com/questions/16590650
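
e.g. something like this compiles and sorts as expected, since dchar[] is a
random-access range:

import std.algorithm : sort;

void main()
{
    dchar[] arr2 = ['z', 'g', 'c'];
    sort(arr2); // fine: each dchar is a full code point
    assert(arr2 == "cgz"d);
}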

- Jonathan M Davis


Re: Why std.algorithm.sort can't be applied to char[]?

2014-05-12 Thread Jonathan M Davis via Digitalmars-d-learn
On Mon, 12 May 2014 11:08:47 -0700
Charles Hixson via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:

 On 05/12/2014 09:29 AM, Jonathan M Davis via Digitalmars-d-learn
 wrote:
  On Mon, 12 May 2014 14:49:52 +
  hane via Digitalmars-d-learn digitalmars-d-learn@puremagic.com
  wrote:
 
  and is there any way to sort char array with algorithm.sort?
  ---
  import std.algorithm;
  import std.range;
 
  void main()
  {
  int[] arr = [5, 3, 7];
  sort(arr); // OK
 
  char[] arr2 = ['z', 'g', 'c'];
  sort(arr2); // error
  sort!q{ a[0] < b[0] }(zip(arr, arr2)); // error
  }
  ---
  I don't know what's difference between int[] and char[] in D, but
  it's very unnatural.
  All strings in D are treated as ranges of dchar, not their element
  type. This has to do with the fact that a char or wchar may be only part
  of a character. If you want to sort arrays of characters, you need
  to use dchar[].
 
  http://stackoverflow.com/questions/12288465
  http://stackoverflow.com/questions/16590650
 
  - Jonathan M Davis
 
 Given that he was working with pure ASCII, he should be able to cast
 the array to byte[] and sort it, but I haven't tried.

Sure, you can cast char[] to ubyte[] and sort that if you know that the array
only holds pure ASCII. In fact, you can use std.string.representation to do it
- e.g.

auto ascii = str.representation;

and if str were mutable, then you could sort it. But that will only work if
the string only contains ASCII characters. Regardless, he wanted to know why
he couldn't sort char[], and I explained why - all strings are treated as
ranges of dchar, so if their element type is char or wchar, they're not random
access and thus can't be sorted.

 Also char[] isn't string.

Yes, string is aliased to immutable(char)[], but char[] is still a string
type, and what I said applies to all string types. In particular, arrays of
char or wchar are called "narrow strings", because it's not guaranteed that
one of their code units is a full code point, unlike arrays of dchar. But all
arrays of char, wchar, or dchar are strings.

 Strings are immutable, and thus cannot be sorted in place.

string is immutable and thus couldn't be sorted regardless of whether narrow
strings were treated as ranges of dchar or not, but char[] is a string just as
much as immutable(char)[] is. It's just not string.
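
Putting that together, here's a minimal sketch of sorting a mutable char[]
through its representation - again, only meaningful if the contents are pure
ASCII:

import std.algorithm : sort;
import std.string : representation;

void main()
{
    char[] arr = ['z', 'g', 'c'];
    // representation gives a ubyte[] view of a mutable char[]; sorting
    // the code units is only correct if the contents are pure ASCII.
    sort(arr.representation);
    assert(arr == "cgz");
}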

- Jonathan M Davis


Re: newbie question about variables in slices..

2014-05-12 Thread Jonathan M Davis via Digitalmars-d-learn
On Mon, 12 May 2014 20:12:41 +
Kai via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote:

 Hi I am trying to iterate over a mmfile (ubyte[]) and convert it
 to uint

 void main(){
   MmFile inn = new MmFile("mmData.dat");
   ubyte[] arr = cast(ubyte[])inn[];
   for(ulong index = 0; index < arr.length; index += 4){
   ulong stop = index+4;
   uint num  =
 littleEndianToNative!uint(arr[index..stop]); }
 if i try to compile this i get the following error:
 Error: template std.bitmanip.littleEndianToNative cannot deduce
 function from argument types !(uint)(ubyte[])

 but if change the last line to:
 uint num  = littleEndianToNative!uint(arr[30..34]);

 then it compiles and runs...

 Am I doing something wrong with my variables index and stop?
 cheers

The problem is that the compiler isn't smart enough to realize that
arr[index .. stop] is guaranteed to result in a array with a length of 4.

auto num = littleEndianToNative!uint(cast(ubyte[4])arr[index..stop]);

would work. On a side note, if you wanted to be safer, you should probably use
uint.sizeof everywhere instead of 4. That would also make it easier to
convert it to a different integral type. Also, you should be using size_t, not
ulong for the indices. Array indices are size_t, and while that's ulong on a
64-bit system, it's uint on a 32-bit system, so your code won't compile on a
32-bit system.
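
Here's a rough sketch of the loop with those changes applied (the MmFile part
is replaced with a made-up ubyte[] so that it's self-contained):

import std.bitmanip : littleEndianToNative;
import std.stdio : writeln;

void main()
{
    // Stand-in for the bytes read from the memory-mapped file.
    ubyte[] arr = [1, 0, 0, 0, 2, 0, 0, 0];

    // size_t for the index, uint.sizeof instead of a magic 4.
    for (size_t index = 0; index + uint.sizeof <= arr.length; index += uint.sizeof)
    {
        // Copying the slice into a fixed-size buffer gives the compiler
        // the ubyte[4] that littleEndianToNative requires.
        ubyte[uint.sizeof] chunk;
        chunk[] = arr[index .. index + uint.sizeof];
        uint num = littleEndianToNative!uint(chunk);
        writeln(num); // prints 1, then 2
    }
}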

- Jonathan M Davis


Re: Why std.algorithm.sort can't be applied to char[]?

2014-05-14 Thread Jonathan M Davis via Digitalmars-d-learn
On Wed, 14 May 2014 08:27:45 +
monarch_dodra via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:

 On Monday, 12 May 2014 at 18:44:22 UTC, Jonathan M Davis via
 Digitalmars-d-learn wrote:
  Sure, you can cast char[] to ubyte[] and sort that if you know
  that the array
  only holds pure ASCII. In fact, you can use
  std.string.representation to do it
  - e.g.
 
  auto ascii = str.representation;
 
  and if str were mutable, then you could sort it. But that will
  only work if
  the string only contains ASCII characters. Regardless, he
  wanted to know why
  he couldn't sort char[], and I explained why - all strings are
  treated as
  ranges of dchar, making it so that if their element type is
  char or wchar, so
  they're not random access and thus can't be sorted.

 Arguably, a smart enough implementation should know how to sort a
 char[], while still preserving codepoint integrity.

I don't think that that can be done at the same algorithmic complexity though.
So, I don't know if that would be acceptable or not from the standpoint of
std.algorithm.sort. But even if it's a good idea, someone would have to
special case sort for char[], and no one has done that.

 As a matter of fact, the built in sort property does it.

 void main()
 {
  char[] s = "éöeèûà".dup;
  s.sort;
  writeln(s);
 }
 //prints:
 eàèéöû

I'm surprised. I thought that one of Bearophile's favorite complaints was that
it didn't sort unicode properly (and hence one of the reasons that it should
be removed). Regardless, I do think that it should be removed.

- Jonathan M Davis



Re: Array!T and find are slow

2014-05-14 Thread Jonathan M Davis via Digitalmars-d-learn
On Wed, 14 May 2014 20:54:19 +
monarch_dodra via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:

 I'm usually reluctant to add extra code when the generic case
 works, but I feel we should make an exception for find.

We should avoid it where it doesn't gain us much, but the standard library
needs to be fast, so if there's a definite performance gain to be made in
adding overloads, we should. That being said, I agree that we should be trying
to make ranges in general faster rather than optimizing for individual range
types such as Array's range type. Still, if there's a significant performance
gain to be made in adding an overload for find to Array's range type, we
should at least consider it. UFCS should then make it just work. But that
solution should be reserved for if there's a fundamental reason why it would
be faster to create a specialization for Array's range type rather than just
because ranges aren't optimized well enough by the compiler or because
std.algorithm.find itself needs improvement. If we can speed up ranges in
general, then we gain everywhere, and if we can speed up std.algorithm.find,
then it's sped up for many types rather than just making it faster for Array's
range type.

- Jonathan M Davis


Re: Array!T and find are slow

2014-05-14 Thread Jonathan M Davis via Digitalmars-d-learn
On Wed, 14 May 2014 21:20:05 +
Kapps via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote:

 That pull shows that the previous behaviour was to use enforce?
 Isn't this very expensive, particularly considering that enforce
 uses lazy non-scope arguments?

Yeah, much as Andrei would hate to hear it (enforce was his idea, and he quite
likes the idiom), the fact that lazy is so inefficient makes it so that it's
arguably bad practice to use it in high performance code. We really need to
find a way to make it so that lazy is optimized properly so that we _can_
safely use enforce, but for now, it's not a good idea unless the code that
you're working on can afford the performance hit.

Honestly, in general, I'd avoid most anything which uses lazy (e.g. that's why
I'd use explicit try-catch blocks rather than use
std.exception.assumeWontThrow - like enforce, it's a nice idea, but it's too
expensive at this point).

- Jonathan M Davis


Re: Array!T and find are slow

2014-05-14 Thread Jonathan M Davis via Digitalmars-d-learn
On Thu, 15 May 2014 01:29:23 +
Kapps via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote:

 On Wednesday, 14 May 2014 at 23:50:34 UTC, Meta wrote:
  On the topic of lazy, why *is* it so slow, exactly? I thought
  it was just shorthand for taking a function that evaluates the
  expression, and wrapping said expression in that function at
  the call site. That is, I thought that:
 
  int doSomething(lazy int n)
  {
  return n();
  }
 
  Was more or less equivalent to:
 
  int doSomething(int function(int) n)
  {
  return n();
  }

 It's more equivalent to:

 int doSomething(int delegate(int) n)
 {
  return n();
 }

 And (I could be very likely wrong here), I believe that it's
 expensive because it's not scope and possibly requires a closure.
 Again, very likely may be wrong.

Yeah. It generates a delegate. You even use the value internally as a
delegate. So, that's definitely part of the problem, though IIRC, there were
other issues with it. However, I don't remember at the moment. The big one
IIRC (which may be due to its nature as a delegate) is simply that it can't be
inlined, and in many cases, you very much want the code to be inlined (enforce
would be a prime example of that).

enforce(cond, "failure");

really should just translate to something close to

if(!cond) throw new Exception("failure");

but it doesn't do anything close to that. And as long as it doesn't, enforce
is of questionable value in any code that cares about efficiency.
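
To make that concrete, a minimal comparison sketch (the condition and message
are made up):

import std.exception : enforce;

void withEnforce(int x)
{
    // The message argument is lazy: it's wrapped in a delegate, which
    // currently prevents inlining and adds overhead even on success.
    enforce(x > 0, "x must be positive");
}

void withExplicitCheck(int x)
{
    // What enforce should ideally boil down to: no delegate, nothing
    // evaluated unless the condition actually fails.
    if (x <= 0)
        throw new Exception("x must be positive");
}

void main()
{
    withEnforce(1);
    withExplicitCheck(1);
}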

- Jonathan M Davis



Re: Array!T and find are slow

2014-05-15 Thread Jonathan M Davis via Digitalmars-d-learn
On Thu, 15 May 2014 05:53:45 +
monarch_dodra via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:

 As a workaround, I'm sure we could specialize enforce without
 lazy for built-in types?

No. I don't think that that would work. The problem is that you'd have to be
able to overload between stuff like "error message" and
format("error message: %s", foo), because you don't want the first one to be
lazy, whereas you do want the second one to be lazy.

 BTW: Why *is* enforce lazy again? I don't really see it. It makes
 more sense for things like collectException I guess, but I
 don't see it for enforce.

It's lazy so that the second argument isn't evaluated. That might not be
obvious in the example

enforce(cond, "failure");

but if you have

enforce(cond, format("failure: %s", foo));

or

enforce(cond, new BarException("blah"));

then it would definitely be costing you something if the second parameter
weren't lazy - and in theory, the cost of it not being lazy could be
arbitrarily large, because you can pass it arbitrarily complex expressions
just so long as their result is the correct type. Now, given how slow lazy is,
maybe it would actually be faster to just have it be strict instead of lazy,
but in theory, it's saving you something. And if lazy were actually properly
efficient, then it would definitely be saving you something. Regardless, using
an explicit if statement is definitely faster, because then you don't have the
lazy parameter to worry about _or_ the potentially expensive expression, since
the expression would only be encountered if the condition failed.
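
A small sketch showing what the laziness buys you (the counter is only there
to show when the message expression actually runs):

import std.exception : enforce;
import std.string : format;

int calls;

string expensiveMessage()
{
    ++calls;
    return format("failure: %s", 42);
}

void main()
{
    enforce(true, expensiveMessage()); // condition holds: message never built
    assert(calls == 0);

    try
    {
        enforce(false, expensiveMessage()); // only now is the message built
    }
    catch (Exception e)
    {
        assert(calls == 1);
    }
}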

- Jonathan M Davis


Re: Array!T and find are slow

2014-05-16 Thread Jonathan M Davis via Digitalmars-d-learn
On Thu, 15 May 2014 08:04:59 -0300
Ary Borenszweig via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:

 Isn't there a way in D to just expand:

 enforce(cond, "failure");

 (or something with a similar syntax) to this, at compile-time:

 if(!cond) throw new Exception("failure");

 I thought D could do this, so enforce should do this instead of using
 lazy arguments.

No. enforce is a function, and the only other things that it could be with
that syntax would be other callables (e.g. a lambda, delegate, or functor).
Mixins are the only constructs that can completely replace themselves with
another construct. So, I suppose that you could have a function called enforce
that returned a string and be able to do something like

mixin(enforce(cond, `failure`));

and you could probably do something similar with a template mixin.

mixin(enforce!(cond, "failure"));

might be possible if both of the template parameters were alias parameters.
But there's no way to take one piece of code and completely translate it into
another piece of code without mixins. To do that would probably have required
some kind of macro system, and that was specifically avoided in D's design.

And conceptually, using lazy for enforce is a perfect fit. The problem is with
the current implementation of lazy.

- Jonathan M Davis


Re: Array!T and find are slow

2014-05-17 Thread Jonathan M Davis via Digitalmars-d-learn
On Fri, 16 May 2014 11:51:31 -0400
Steven Schveighoffer via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:

 On Fri, 16 May 2014 11:36:44 -0400, Jonathan M Davis via
 Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote:

  On Thu, 15 May 2014 08:04:59 -0300
  Ary Borenszweig via Digitalmars-d-learn
  digitalmars-d-learn@puremagic.com wrote:
 
  Isn't there a way in D to just expand:
 
  enforce(cond, "failure");
 
  (or something with a similar syntax) to this, at compile-time:
 
  if(!cond) throw new Exception("failure");
 
  I thought D could do this, so enforce should do this instead of
  using lazy arguments.
 
  No. enforce is a function, and the only other things that it could
  be with
  that syntax would be other callables (e.g. a lambda, delegate, or
  functor).

 I think it *could* optimize properly, and that would be an amazing
 improvement to the compiler, if someone wants to implement that.

 Essentially, you need to be able to inline enforce (not a problem
 since it's a template), and then deduce that the lazy calls can just
 be moved to where they are used, in this case, only once.

 This would make a logging library even better too.

Sure, the compiler could be improved to optimize enforce such that

enforce(cond, "failure");

becomes

if(!cond) throw new Exception("failure");

And I think that that's what needs to be done. However, what I understood the
question to be was whether there was a construct in the language that we could
use where

enforce(cond, "failure");

would be automatically converted to

if(!cond) throw new Exception("failure");

without the compiler having to do optimizations to make it happen. The closest
to that would be mixins, but they can't do it with the same syntax. You'd be
required to write something more complex to use mixins to do the job.

But I think that the correct solution is to improve the compiler with regards
to lazy. The fact that lazy is so slow is a serious problem, and enforce is
just one manifestation of it (albeit the worst because of how often it's
used).

- Jonathan M Davis


Re: Building 32bit program with MSVC?

2014-05-29 Thread Jonathan M Davis via Digitalmars-d-learn
On Thu, 29 May 2014 20:12:52 +
Remo via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote:

 On Thursday, 29 May 2014 at 18:25:19 UTC, Jeremy DeHaan wrote:
  I know that we can use MSVC to build a 64 bit program, but is
  it also possible to use it to build a 32 bit program as well?

 Yes of course it is possible.
 It you are talking about Visual-D then it is possible there too.

If you are talking about building a 32-bit program with dmd and linking with
Microsoft's linker, it's my understanding that that will not work, because dmd
always produces OMF object files in 32-bit (which is what optlink uses),
whereas dmd produces COFF object files in 64-bit (which is what Microsoft's
linker uses). Walter went to the trouble of getting dmd to produce COFF
object files to link with Microsoft's linker when he added 64-bit Windows
support but did not want to go to the trouble of adding COFF support to
32-bit.

- Jonathan M Davis


Re: Building 32bit program with MSVC?

2014-05-31 Thread Jonathan M Davis via Digitalmars-d-learn
On Sat, 31 May 2014 06:38:46 +
Kagamin via Digitalmars-d-learn digitalmars-d-learn@puremagic.com
wrote:

 They may use different debugging formats, but just linking should
 be possible, especially with import libraries.

_Dynamic_ linking is possible. Static linking is not.

- Jonathan M Davis


Re: Building 32bit program with MSVC?

2014-05-31 Thread Jonathan M Davis via Digitalmars-d-learn
On Sat, 31 May 2014 07:53:40 +
Kagamin via Digitalmars-d-learn digitalmars-d-learn@puremagic.com
wrote:

 By dynamic linking do you mean LoadLibrary or linking with import
 library?

Both will work, otherwise we couldn't use Microsoft's libraries - e.g.
std.windows.registry uses advapi32.dll to talk to the registry. But static
linking requires that the library formats match. However, I'm afraid that I
don't know enough about how linking works to know why that's a problem for
static linking and not for dynamic linking.

- Jonathan M Davis


Re: Indicating incompatible modules

2014-06-01 Thread Jonathan M Davis via Digitalmars-d-learn
On Sat, 31 May 2014 18:26:53 +0200
Joseph Rushton Wakeling via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:

 Hello all,

 Is there a straightforward way to indicate that two modules should
 not be used together in the same program?  Preferably one that does
 not require editing both of the modules?

 The application I have in mind is when one is making available an
 experimental module which is planned to replace one that already
 exists; it's useful for the experimental module to be able to say,
 "Hey, use me _or_ the standard module, but not both of us."

 Any thoughts ... ?

I wouldn't really worry about it. I'd just document the new module as a
replacement for the other and that they're not intended to work together. I
don't see much point in trying to get the compiler to complain about code that
uses both.

- Jonathan M Davis


Re: Forward reference to nested function not allowed?

2014-06-01 Thread Jonathan M Davis via Digitalmars-d-learn
On Sat, 31 May 2014 16:18:33 +
DLearner via Digitalmars-d-learn digitalmars-d-learn@puremagic.com
wrote:

 Hi,

 import std.stdio;
 void main() {

 writefln("Entered");

 sub1();
 sub1();
 sub1();

 writefln("Returning");

 void sub1() {
    static int i2 = 6;

    i2 = i2 + 1;
    writefln("%s", i2);
 };
 }

 does not compile, but

 import std.stdio;
 void main() {
 void sub1() {
    static int i2 = 6;

    i2 = i2 + 1;
    writefln("%s", i2);
 };
 writefln("Entered");

 sub1();
 sub1();
 sub1();

 writefln("Returning");


 }

 compiles and runs as expected.

 Is this intended?

Currently, you cannot forward reference a nested function. Kenji was looking
into it recently, so maybe we'll be able to at some point in the future, but
right now, we definitely can't. But even if we can in the future, I'd expect
that you'd have to declare a prototype for the function to be able to use it
before it's declared. In general though, I think that the only reason that it
would really be useful would be to have two nested functions refer to each
other, since otherwise, you just declare the nested function earlier in the
function.

- Jonathan M Davis


Re: Are tests interruptible/concurrent? Is use of a (thread local) global safe in tests?

2014-06-01 Thread Jonathan M Davis via Digitalmars-d-learn
On Fri, 30 May 2014 20:13:19 +
Mark Isaacson via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:

 I'm having fun running some unittests. I set up a simple homemade
 mock of std.net.curl's functions that essentially just consists
 of a global queue that I can add strings to and get back in a
 predictable order when calling std.net.curl.get/post/etc.

 I use this mock in a couple of different modules for unit
 testing, namely the module I have the mock in and one other one.
 When I run my unit tests, it seems to enqueue all of the
 responses from both of my unit tests (from different modules)
 before finishing those tests and removing things from the global
 queue. This is problematic in that I cannot anticipate the state
 of the global queue for my tests. Does this sound more like a bug
 or a feature. My understanding is that, at least for now, tests
 are not run concurrently on the latest official dmd release;
 since my queue is not qualified with shared, things should be
 thread local anyway.

 TLDR:
 Executation of tests A and B is as follows:
 A pushes to global queue
 B pushes to global queue
 B pops on global queue -- program crashes

 Expected order:
 A pushes to global queue
 A pops from global queue
 B pushes to global queue
 B pops from global queue

 Or switch the order in which A and B execute, doesn't really
 matter.


Well, the behavior you're seeing would make sense if you're using static
destructors for the popping part, since they aren't going to be run until the
program is shut down, but if you're using the unittest blocks to do the
popping, that's definitely a bit odd. I would have expected each module's unit
tests to be run sequentially. Certainly, within a module, the tests are
currently run sequentially. However, there has been recent discussion of
changing it so that unittest blocks will be run in parallel (at lest by
default), so in the future, they may very well run in parallel (probably
requiring an attribute of some kind to make them run sequentially).

It's generally considered good practice to make it so that your unittest
blocks don't rely on each other and that they don't change the global state,
in which case, it doesn't matter what order they're in (though that still
allows for setting stuff up in static constructors and shutting it down in
static destructors so long as that state doesn't change after a unittest block
is run).
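
As a sketch of what that looks like in practice (the queue and helpers here
are hypothetical stand-ins for the curl mock):

// Hypothetical stand-in for the global queue used by the curl mock.
// Build with -unittest to run the tests, e.g. dmd -unittest -main -run file.d
string[] responseQueue;

void enqueue(string s) { responseQueue ~= s; }

string dequeue()
{
    auto front = responseQueue[0];
    responseQueue = responseQueue[1 .. $];
    return front;
}

unittest
{
    // Reset the shared state up front so this test doesn't depend on what
    // other tests may have already pushed.
    responseQueue = null;
    enqueue("a");
    assert(dequeue() == "a");
}

unittest
{
    responseQueue = null;
    enqueue("b");
    assert(dequeue() == "b");
}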

- Jonathan M Davis


Re: floating point conversion

2014-06-01 Thread Jonathan M Davis via Digitalmars-d-learn
On Sun, 01 Jun 2014 14:42:34 +
Famous via Digitalmars-d-learn digitalmars-d-learn@puremagic.com
wrote:

 On Sunday, 1 June 2014 at 12:45:26 UTC, bearophile wrote:
  It's a bad question.

 Actually, Martin's question is a good one.

 Initializing a variable of type float via a literal or as
 conversion from string should be the same, exactly, always.
 Casting a float to double should be deterministic as well.

Not necessarily, particularly because any floating point operations done at
compile time are generally done at much higher precision than those done at
runtime. So, it's pretty trivial for very similar floating point operations to
end up with slightly different results. In general, expecting any kind of
exactness from floating point values is asking for trouble. Sure, they follow
the rules that they have consistently, but there are so many numbers that
aren't actually representable by a floating point value, the precisions vary
just enough, and slightly different code paths can result in slightly
different results that depending on floating point operations resulting in
any kind of specific values except under very controlled circumstances just
isn't going to work.
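
For example (a small sketch of why exact comparisons across precisions are a
bad idea):

import std.math : approxEqual;

void main()
{
    float  f = 0.1;
    double d = 0.1;

    // 0.1 isn't exactly representable; the float and double roundings
    // differ, so an exact comparison across precisions fails.
    assert(f != d);

    // Compare with a tolerance instead of expecting exact values.
    assert(approxEqual(f, d));
}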

- Jonathan M Davis


Re: On ref return in DMD

2014-06-03 Thread Jonathan M Davis via Digitalmars-d-learn
On Tue, 03 Jun 2014 10:14:21 +
Nordlöw via Digitalmars-d-learn digitalmars-d-learn@puremagic.com
wrote:

 The title

 More ref return fixes in std.datetime now that the compiler
 allows them

 of

 https://github.com/D-Programming-Language/phobos/pull/2227/files

 made me curious to what is meant by ref return. Is this a recent
 improvement in DMD?

No. We've been able to return by ref for ages. However, when std.datetime was
originally written back in 2010, there were bugs that prevented it from
working for a number of the functions in std.datetime. Unfortunately, I can't
remember what the bugs were at this point, but they've long since been fixed,
and I just finally got around to making it so that those functions in
std.datetime return by ref instead of void.

- Jonathan M Davis



Re: DateTime custom string format

2014-06-03 Thread Jonathan M Davis via Digitalmars-d-learn
On Tue, 03 Jun 2014 17:07:02 +0200
Robert Schadek via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:

 Is there a function in phobos that lets me do something like
 DateTime.format(MM:DD: ) with a DateTime instance?

Not currently. It's on my todo list. I intend to get back to it after I've
finished with splitting std.datetime (which I should get to fairly soon but
have been doing some cleanup in std.datetime first), but I don't know when it
will actually be ready. So, for now, you'd have to use std.string.format and
the getters on DateTime.
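
As a stopgap, a sketch along these lines works (the format pattern and the
helper name are made up):

import std.datetime : DateTime;
import std.string : format;

string myFormat(DateTime dt)
{
    // Manual formatting via the getters until a custom-format
    // function exists in std.datetime.
    return format("%02d/%02d %02d:%02d",
                  cast(int) dt.month, dt.day, dt.hour, dt.minute);
}

void main()
{
    auto dt = DateTime(2014, 6, 3, 17, 7, 2);
    assert(myFormat(dt) == "06/03 17:07");
}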

- Jonathan M Davis


Re: DateTime custom string format

2014-06-03 Thread Jonathan M Davis via Digitalmars-d-learn
On Tue, 03 Jun 2014 19:39:14 +0200
Robert Schadek via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:

 On 06/03/2014 07:12 PM, Jonathan M Davis via Digitalmars-d-learn
 wrote:
  On Tue, 03 Jun 2014 17:07:02 +0200
  Robert Schadek via Digitalmars-d-learn
  digitalmars-d-learn@puremagic.com wrote:
 
  Is there a function in phobos that lets me do something like
  DateTime.format(MM:DD: ) with a DateTime instance?
  Not currently. It's on my todo list. I intend to get back to it
  after I've finished with splitting std.datetime (which I should get
  to fairly soon but have been doing some cleanup in std.datetime
  first), but I don't know when it will actually be ready. So, for
  now, you'd have to use std.string.format and the getters on
  DateTime.
 
  - Jonathan M Davis
 Ok, I had people asking me for this because of my std.logger default
 output format.

 Do you accept PRs for that?

Well, I would prefer to do it myself, but I obviously can't say that I
wouldn't accept it if someone else did it and did a good job of it. The main
problem however is that we need to come up with a good formatting scheme -
that is the format of the custom time strings. What C has doesn't cut it, and
what I proposed a while back turned out to be too complicated.  There's this
3rd party library which had some interesting ideas:

http://pr.stewartsplace.org.uk/d/sutil/doc/datetimeformat.html

but I'm convinced that what's there is too simplistic. I'll need to dig up my
old proposal and look at how I can simplify it (possibly making it more like
what's at that link) without removing too much power from it.

Once I have the custom time format strings sorted out, I intend to create both
functions which take the format string as a runtime argument and those which
take them as compile-time arguments and their corresponding from functions
as well (e.g. toCustomString and fromCustomString). We'll end up with
functions for SysTime, DateTime, Date, and TimeOfDay (possibly by templatizing
functions or creating specifically named functions for each of them). My
intention is to put them in std.datetime.format once std.datetime has been
split (and they've been implemented) rather than sticking them on the types
directly.

I'll probably also look at adding a way to get at the exact pieces of
information you want efficiently without having to have flags for everything
in the custom time format strings, making it easier to create strings that are
more exotic but still do so without having to call all of the individual
getters (which isn't necessarily efficient). But by doing something like that
I should be able to simplify the custom time format strings somewhat and avoid
some of the more complicated constructs that I had in my previous proposal.

- Jonathan M Davis


Re: DateTime custom string format

2014-06-03 Thread Jonathan M Davis via Digitalmars-d-learn
On Tue, 03 Jun 2014 22:54:11 +
Brad Anderson via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:

 On Tuesday, 3 June 2014 at 18:22:59 UTC, Jonathan M Davis via
 Digitalmars-d-learn wrote:
  Well, I would prefer to do it myself, but I obviously can't say
  that I
  wouldn't accept it if someone else did it and did a good job of
  it. The main
  problem however is that we need to come up with a good
  formatting scheme -
  that is the format of the custom time strings. What C has
  doesn't cut it, and
  what I proposed a while back turned out to be too complicated.

 Just for reference to Robert and others reading, here's
 Jonathan's old proposal:

 http://forum.dlang.org/post/mailman.1806.1324525352.24802.digitalmar...@puremagic.com

The link to the docs in that post is no longer valid, but you can download a
tarball which includes them here:

https://drive.google.com/file/d/0B-tyih3w2oDZaUkyU0pJZl9TeEk/edit?usp=sharing

After untarring the file (which should then create the folder datetime_format),
just open datetime_format/phobos-prerelease/std_datetime.html in your browser,
and you should see the documentation for toCustomString and fromCustomString.

But as I said, the proposal was overly complicated (particularly since it
tried to accept functions for processing parts of the string). Ideally, we'd
probably have something that's somewhere in between what Stewart did and what
I was proposing - something simpler than what I proposed but more powerful
than what Stewart has. However, I'm going to have to look over both proposals
again and mull over it for a bit before I can come up with a better proposal.
I'll get back to it, but I'm putting it off until I've finished splitting
std.datetime.

- Jonathan M Davis


Re: override toString() for a tuple?

2014-06-04 Thread Jonathan M Davis via Digitalmars-d-learn
On Wed, 04 Jun 2014 05:35:18 +
Steve D via Digitalmars-d-learn digitalmars-d-learn@puremagic.com
wrote:

 Is it possible to override std tuple's toString format?

 so that
  auto a = tuple("hello", 1, 2, 3);
  writeln(a);

 prints
  ("hello", 1, 2, 3)
 and not
  Tuple!(string, int, int, int)("hello", 1, 2, 3)


 I'm aware I could write a custom formatter function, but it would
 be nice not to
 have to use such a function for every tuple printed by the
 program.
 Overriding toString() one time in program (if possible) would
 give the ideal default behaviour. (I would duplicate the current
 typecons.d toString() and strip off the prefix)

 thanks for any help

toString is a member of Tuple, and there's no way to override that externally.
You could create a wrapper struct for a Tuple whose toString method did what
you want, and you could just create a function which generated the string that
you wanted that you used whenever printing out a Tuple, but there is no way to
globally override Tuple's toString.
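
For example, a small helper along those lines might look like this (the name
plainTuple is just illustrative):

import std.conv : to;
import std.typecons : Tuple, tuple;

// Formats any Tuple as (a, b, c), without the type prefix.
string plainTuple(T...)(Tuple!T t)
{
    string result = "(";
    foreach (i, member; t.expand)
    {
        if (i != 0)
            result ~= ", ";
        result ~= to!string(member);
    }
    return result ~ ")";
}

void main()
{
    import std.stdio : writeln;
    writeln(plainTuple(tuple("hello", 1, 2, 3))); // prints: (hello, 1, 2, 3)
}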

The closest that you could do to overriding Tuple's toString in one place
would be to write your own wrappers for whatever printing functions you want
to use, have them detect when they're given a Tuple, and then print them the
way that you want and pass everything else directly on to writeln or whatever
it is you're wrapping. Then, the print functions would take care of it for
you, but writing such a function wouldn't exactly be fun.

If you're really determined to print tuples differently, you _could_ simply
copy std.typecons.Tuple to your own code and alter it to do what you want.

- Jonathan M Davis


Re: override toString() for a tuple?

2014-06-04 Thread Jonathan M Davis via Digitalmars-d-learn
On Wed, 04 Jun 2014 06:25:53 +
Steve D via Digitalmars-d-learn digitalmars-d-learn@puremagic.com
wrote:

 You would think the promise of OO and Inheritance
 would make it easy and free us from hacks like this ;)

That would require using OO and inheritance, which has nothing to do with
Tuple. ;)

And actually, I find that I very rarely need inheritance. It's definitely the
right solution for some problems, but in the vast majority of cases, I find
that structs are a better solution - especially because they're far more
composable. OO is actually very bad for code reuse, because it's not
particularly composable at all.

- Jonathan M Davis


Re: Weird behaviour when using -release in dmd

2014-06-06 Thread Jonathan M Davis via Digitalmars-d-learn
On Fri, 06 Jun 2014 10:10:24 +
Mikko Aarnos via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:

 Hello all, I have a program which works perfectly when compiled
 without -release:

 parser a implies b equivalent not b implies not a
 Input: ((a implies b) equivalent ((not b) implies (not a)))
 CNF: (not a) or b) or (not b)) and (((not a) or b) or a)) and
 (((b or (not a
 )) or a) and ((b or (not a)) or (not b
 Time(in ms) = 0

 However, when I add -release, this happens:

 parser a implies b equivalent not b implies not a
 object.Error: assert(0) or HLT instruction
 
 0x0040220C
 0x0040208F
 0x0040206B
 0x00403AAF
 0x004084C8
 0x0040849B
 0x004083B4
 0x0040454B
 0x74F0338A in BaseThreadInitThunk
 0x774A9F72 in RtlInitializeExceptionChain
 0x774A9F45 in RtlInitializeExceptionChain

 Does anybody here have any idea what could cause this? The error
 seems to happen in a completely innocent part of code without any
 reason so I haven't the faintest idea what could cause it. I can
 post the source code if necessary, it's not long (under 1000
 lines including unit tests) but it's probably pretty hard to read
 on some parts, mainly the parser.

I don't think that we can help you much without more code. If you hit an
assert(0), then it's usually because of one of two reasons.

1. You have an assert(0) in your code (or an assertion where the condition is
statically known to be equivalent to 0).

2. You hit the end of a function without hitting a proper return statement.
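
A minimal illustration of the second case (nothing to do with your actual code,
just the general idea):

int sign(int x)
{
    if (x > 0) return 1;
    if (x < 0) return -1;
    // Falling off the end here: the compiler inserts assert(0), which
    // with -release typically ends up as a halt/HLT instruction.
}

void main()
{
    sign(0); // without -release: AssertError; with -release: a halt
}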

There are probably a few others, but those are the two main ones that come to
mind at the moment. Now, normally, I'd expect to see an AssertError thrown
without -release rather than your program working fine, so that makes me
wonder whether you have any variables which aren't being initialized properly
- though usually, that can only happen if you initialize a variable to void,
so I wouldn't have thought that that would be the problem, since most people
don't do that. And of course it's always possible that you found a compiler
bug, but without code that we can compile ourselves, I don't think that we can
help much.

You could use https://github.com/CyberShadow/DustMite to reduce the code to
something much smaller which still exhibits the problem (and is what you
really should do if you need to report a compiler bug anyway, which you might
in this case).

- Jonathan M Davis


Re: Don't Understand why Phobos Auto-Tester fails for PR #3606

2014-06-07 Thread Jonathan M Davis via Digitalmars-d-learn
On Sat, 07 Jun 2014 08:56:37 +
Nordlöw via Digitalmars-d-learn digitalmars-d-learn@puremagic.com
wrote:

 My recent

 https://github.com/D-Programming-Language/dmd/pull/3606

 fails in all the Auto-Testers but I don't understand why.

 Running make unittest locally in phobos using my locally built
 branch of dmd passes all tests.

The first thing that I would check would be to make sure that you're using the
latest code for dmd, druntime, and phobos. If you're missing an update for any
of them, then you could get different results.

Also, you're probably going to need to use DMD= to set dmd to the one that you
built in order to use the one that you built when building druntime and Phobos
instead of the one you installed normally and is in your PATH. e.g. on my box,
it would be something like

DMD=../dmd/src/dmd make -f posix.make MODEL=64

So, if you weren't aware of needing to do that, then that would easily explain
why you're seeing different results. But if you are doing that, and everything
is up-to-date, then I don't know what could be going wrong. Based on the
error, my guess would be that it's a compiler problem (and thus probably that
you're not testing with your updated compiler), but I don't know.

- Jonathan M Davis



Re: array as parameter

2014-06-07 Thread Jonathan M Davis via Digitalmars-d-learn
On Sat, 07 Jun 2014 20:56:13 +
Paul via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote:

 Dynamic array is really reference. Right? But why modification of
 parameter in this case does not work:

 void some_func(string[] s) {
   s ~= "xxx"; s ~= "yyy";
 }

 but this works:

 void some_fun(ref string[] s) {
   s ~= "xxx"; s ~= "yyy";
 }

 In the 1st case s is reference too, is not it?

The first case just slices the array, so it refers to the same data, but the
slice itself is a different slice, so if you append to it, it doesn't affect
the original slice, and it could then result in a reallocation so that the two
slices don't even refer to the same data anymore.
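
To make the difference concrete, here's a minimal sketch:

void appendByValue(string[] s) { s ~= "xxx"; }
void appendByRef(ref string[] s) { s ~= "xxx"; }

void main()
{
    string[] a = ["abc"];
    appendByValue(a);
    assert(a == ["abc"]);          // the caller's slice is unchanged
    appendByRef(a);
    assert(a == ["abc", "xxx"]);   // with ref, the caller sees the append
}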

You should read this: http://dlang.org/d-array-article.html

- Jonathan M Davis


Re: Conversion string-int

2014-06-07 Thread Jonathan M Davis via Digitalmars-d-learn
On Sat, 07 Jun 2014 20:53:02 +
Paul via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote:

 I can not understand, why this code works:

  char s[2] = ['0', 'A'];
  string ss = to!string(s);
  writeln(parse!uint(ss, 16));

 but this can't deduce the template:

  char s[2] = ['0', 'A'];
  writeln(parse!uint(to!string(s), 16));

 What's the reason? And what is the right way to parse
 char[2]-int with radix?

std.conv.to converts the entire string at once. std.conv.parse takes the
beginning of the string until it finds whitespace, and converts that first
part of the string. And because it does that, it takes the string by ref so
that it's able to actually pop the elements that it's converting off of the
front of the string, leaving the rest of the string behind to potentially be
parsed as something else, whereas because std.conv.to converts the whole
string, it doesn't need to take its argument by ref.

So, what's tripping you up is the ref, because if a parameter is a
ref parameter, then it only accepts lvalues, so you have to pass it an actual
variable, not the result of to!string. Also,

string s = ['A', '0'];

will compile, so you don't need to use to!string in this case.
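
e.g. a minimal sketch of using parse with an lvalue:

import std.conv : parse;
import std.stdio : writeln;

void main()
{
    string s = "0A";
    // parse takes s by ref, so it needs an lvalue, and it consumes
    // the characters that it converts.
    writeln(parse!uint(s, 16)); // prints: 10
    assert(s.length == 0);
}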

- Jonathan M Davis


Re: Basic dynamic array question. Use of new versus no new.

2014-06-11 Thread Jonathan M Davis via Digitalmars-d-learn
On Wed, 11 Jun 2014 02:30:00 +
WhatMeWorry via Digitalmars-d-learn digitalmars-d-learn@puremagic.com
wrote:

 In Mr. Cehreli's book it says

 Additionally, the length of dynamic arrays can be changed by
 assigning a value to this property:

 int[] array; // initially empty
 array.length = 5; // now has 5 elements

 while in Mr. Alexandrescu's book, it says

 To create a dynamic array, use a new expression (§ 2.3.6.1 on
 page 51) as follows:

 int[] array = new int[20]; // Create an array of 20 integers


 Could someone please compare and contrast the two syntaxes. I
 presume the new command places the 2nd array in heap memory.

They do essentially the same thing, but the first one does it in two steps
instead of one. To better understand arrays in D, I'd advise reading this
article:

http://dlang.org/d-array-article.html
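
As a quick illustration that the two end up equivalent (a minimal sketch):

void main()
{
    int[] a;               // empty to start with
    a.length = 5;          // GC allocation; elements are set to int.init (0)

    int[] b = new int[5];  // allocation and length in a single step

    assert(a == b);
    assert(a.length == 5 && b.length == 5);
}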

- Jonathan M Davis



Re: Multiple alias this failed workaround...obscure error message

2014-06-11 Thread Jonathan M Davis via Digitalmars-d-learn
 Sent: Wednesday, June 11, 2014 at 8:07 PM
 From: matovitch via Digitalmars-d-learn digitalmars-d-learn@puremagic.com
 To: digitalmars-d-learn@puremagic.com
 Subject: Multiple alias this failed workaround...obscure error message

 I was looking for a workaround to multiple alias this (or 
 opImplicitCast) the following trick doesn't work (why shouldn't 
 it ?). The error message is quite obscure to me.
 
 
 import std.stdio;
 
 class A(Derived)
 {
  alias cast(ref Derived)(this).x this;
 }
 
 class B : A!B
 {
  float x;
 }
 
 class C : A!C
 {
  int x;
 }
 
 void main()
 {
  B b;
  b.x = 0.5;
 
  float f;
  f = b;
 }
 
 
 output :
 
 source/app.d(5): Error: basic type expected, not cast
 source/app.d(5): Error: no identifier for declarator int
 source/app.d(5): Error: semicolon expected to close alias 
 declaration
 source/app.d(5): Error: Declaration expected, not 'cast'
 Error: DMD compile run failed with exit code 1

I don't believe that it's legal to use a cast in an alias declaration, and
that's certainly what the error seems to be indicating. Also, using ref in a
cast is definitely illegal regardless of where the cast is. ref is not part
of a type. The only places that you can use it are function parameters, return
types, and foreach variables. If you want to do anything like you seem to be
trying to do, you're going to have to alias a function which does the cast for
you rather than try and alias the variable itself.

I also don't see anything here that would work as any kind of multiple alias
this.

- Jonathan M Davis


Re: Multiple alias this failed workaround...obscure error message

2014-06-11 Thread Jonathan M Davis via Digitalmars-d-learn
On Wed, 11 Jun 2014 23:01:31 +
matovitch via Digitalmars-d-learn digitalmars-d-learn@puremagic.com
wrote:

 About alias working with identifier but not with (runtime)
 expression. Alias should work with compile time expression like
 map!(x=>2*x) right ? So a static cast should work isn't it ?
 (though static_cast doesn't exist in D :/)

alias is for _symbols_, not expressions.

http://dlang.org/declaration.html#alias

I really don't think that what you're trying to do is going to work.

- Jonathan M Davis


Re: Does __gshared have shared semantics?

2014-06-14 Thread Jonathan M Davis via Digitalmars-d-learn
On Sat, 14 Jun 2014 01:24:03 +
Mike Franklin via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:

 In other words, is 'shared __gshared' redundant?

Redundant? Not exactly.

__gshared makes it so that the variable is treated like a C variable - it's
not in TLS - but its _type_ is still considered to be thread-local by D. So,
you get no protection from the type system when using a  __gshared variable.
It'll treat it like a normal, TLS variable. So, you need to be very careful
when using __gshared.

shared on the other hand _is_ treated differently by the type system. Like
__gshared, it's not in TLS, but that fact is then embedded in its type, so the
compiler won't make any optimizations based on TLS for a shared variable, and
it will at least partially protect you against using it in contexts which are
guaranteed to be wrong for shared. e.g with the current version of the
compiler in git, this code

shared int i;

void main()
{
++i;
}

produces this error

q.d(5): Deprecation: Read-modify-write operations are not allowed for shared
variables. Use core.atomic.atomicOp!"+="(i, 1) instead.

whereas if it were __gshared, it wouldn't complain at all.
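
And the fix that the deprecation message suggests looks like this (a minimal
sketch):

import core.atomic : atomicOp;

shared int i;

void main()
{
    atomicOp!"+="(i, 1); // the read-modify-write goes through core.atomic
    assert(i == 1);
}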

So, when marking a variable both shared and __gshared, the __gshared is kind
of pointless, since shared essentially indicates everything that __gshared
does plus more. However, the compiler will give you an error message if you
try (at least with the current git master anyway - I'm not sure what it did
with 2.065, since I don't have it installed at the moment), so there really
isn't much reason to worry about what it would actually do if it compiled.

- Jonathan M Davis


Re: Subclass of Exception

2014-06-14 Thread Jonathan M Davis via Digitalmars-d-learn
On Sat, 14 Jun 2014 11:59:52 +
Paul via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote:

 One stupid question: in Python subclassing of Exception looks
 like:
class MyError(Exception): pass
 but in D, if I'm right, we should write more code:
class MyError : Exception {
  this(string msg) { super(msg); }
}
 (without constructor we get error: ...Cannot implicitly generate
 a default ctor when base class BASECLASS is missing a default
 ctor...)

 Is any shorter D way?

If you're creating an exception that doesn't take any new arguments (so it's
just its type that's important rather than it having any new members), then
the typical declaration would be

/++
My exception type.
  +/
class MyException : Exception
{
/++
Params:
msg  = The message for the exception.
file = The file where the exception occurred.
line = The line number where the exception occurred.
next = The previous exception in the chain of exceptions, if any.
  +/
    this(string msg, string file = __FILE__, size_t line = __LINE__,
         Throwable next = null) @safe pure nothrow
{
super(msg, file, line, next);
}

/++
Params:
msg  = The message for the exception.
next = The previous exception in the chain of exceptions.
file = The file where the exception occurred.
line = The line number where the exception occurred.
  +/
    this(string msg, Throwable next, string file = __FILE__,
         size_t line = __LINE__) @safe pure nothrow
{
super(msg, file, line, next);
}
}

There have been attempts to write mixins or templates which generate this for
you - e.g.

mixin(genExceptionType("MyException"));

but then you can't have documentation on it, because mixed-in code is not
examined when the documentation is generated, and you can't document the mixin
itself. So, at this point, it just makes the most sense to take my example,
change its name and documentation, and then use that rather than trying to
generate it - though if you don't care about documentation at all (which is
usually a bad idea but might make sense on small projects), then it would be
simple enough to create a function which will generate the string to mix in
for you.
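
For example, a bare-bones sketch of such a generator (genExceptionType here is
just illustrative, not something in Phobos, and it skips the documentation
entirely):

string genExceptionType(string name)
{
    return "class " ~ name ~ " : Exception\n" ~
           "{\n" ~
           "    this(string msg, string file = __FILE__, size_t line = __LINE__,\n" ~
           "         Throwable next = null) @safe pure nothrow\n" ~
           "    {\n" ~
           "        super(msg, file, line, next);\n" ~
           "    }\n" ~
           "}\n";
}

mixin(genExceptionType("MyException"));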

- Jonathan M Davis


Re: '!' and naming conventions

2014-06-18 Thread Jonathan M Davis via Digitalmars-d-learn
 Sent: Wednesday, June 18, 2014 at 11:02 PM
 From: Brad Anderson via Digitalmars-d-learn 
 digitalmars-d-learn@puremagic.com
 To: digitalmars-d-learn@puremagic.com
 Subject: Re: '!' and naming conventions

 There is a style guide on the website: 
 http://dlang.org/dstyle.html
 
 Personally I just consider this a Phobos contributor style guide 
 and not like a PEP8 style guideline.

It was written with the hope that it would be generally followed by the D
community, and that's part of the reason that it specifically focuses on the
API and not the formatting of the code itself. So, ideally, most D projects
would follow it (particularly if they're being distributed publicly) so that
we have consistency across the community (particularly with regards to how
things are captitalized and whatnot), but by no means is it required that
every D project follow it. It's up to every developer to choose how they want
to go about writing their APIs. We're not fascists and don't require that all
code out there be formatted in a specific way or that all APIs follow exact
naming rules (we couldn't enforce that anyway). But still, I would hope that
most public D libraries would follow the naming guidelines in the D style
guide.

Now, for Phobos, it's required, and there are even a couple of formatting
rules added to the end specifically for Phobos, but outside of official D
projects, it's up to the developers of those projects to choose what they want
to do.

- Jonathan M Davis


Re: how to resolve matches more than one template declaration?

2014-06-20 Thread Jonathan M Davis via Digitalmars-d-learn
On Fri, 20 Jun 2014 20:40:49 +
Juanjo Alvarez via Digitalmars-d-learn digitalmars-d-learn@puremagic.com
wrote:

 Hi,

 Newbie question:

 void foo(S)(S oneparam){}
 void foo(S)(S oneparam, int anotherParam){}
 alias fooStr = foo!string;

 Gives:

 Error: template test.foo matches more than one template
 declaration:

 How could I fix it so the alias aliases one of the two versions?
 Also, why can't fooStr be an alias for both templates?
 Considering than the generic parameter is not what changes
 between both, I thought I could do fooStr(somestring) or
 fooStr(somestring, someint) with this alias.

void foo(S)(S oneparam) {}

is equivalent to

template foo(S)
{
void foo(S oneparam) {}
}

so do this:

template foo(S)
{
void foo(S oneparam){}
void foo(S oneparam, int anotherParam){}
}

alias fooStr = foo!string;
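
With that, both overloads resolve through the alias (a quick check, assuming
the definitions just above):

void main()
{
    fooStr("hello");     // picks the one-parameter overload
    fooStr("hello", 42); // picks the two-parameter overload
}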

- Jonathan M Davis


Re: How to free memory of an associative array

2014-06-24 Thread Jonathan M Davis via Digitalmars-d-learn


On Tue, 24 Jun 2014 18:12:06 +
Mark Isaacson via Digitalmars-d-learn digitalmars-d-learn@puremagic.com
wrote:

 How can I free the memory used by an associative array?

 I need to be able to reuse the same array, but set it to an empty
 state and free up the memory it used previously.

 I do not believe that setting the associative array to null is
 sufficient to free the memory, as it is possible that someone
 still has a reference to an element inside, and so the garbage
 collector must be conservative.

Well, if something still has references to its internals, freeing the memory
would be a bug in your program. Also, manually freeing GC memory is pretty
much always a bad idea. If you really want to do that, use malloc and free
(though that would require writing your own AA implementation - the built-in
one is designed to manage itself and is not particularly tweakable).
Regardless, the memory of the AA is managed entirely by the GC, and I don't
believe that there is any way that you can force it to be freed. The best that
you can do is make sure that no references to the AA currently exist and then
explicitly run a collection.

http://dlang.org/phobos/core_memory.html#.GC.collect

Regardless, if you want to be managing memory, I'd strongly suggest that you
not do it with GC-allocated memory. It's just begging for trouble. If you want
to use the GC, let it do its job. Anything along the lines of forcibly freeing
GC memory will be incredibly bug-prone.

- Jonathan M Davis


Re: Assosiative array pop

2014-06-25 Thread Jonathan M Davis via Digitalmars-d-learn
On Wednesday, June 25, 2014 09:30:48 seany via Digitalmars-d-learn wrote:
 Given an assosiative array : int[string] k, is there a way
 (either phobos or tango) to pop the first element of this array
 and append it to another array?

 I can come up with a primitive soluiton:

 int[string] k;
 // populate k here

 int[string] j;


 foreach(string key, int val; k)
 {

 j[key] = val;
 break;
 }

 but could it be better? it is wroth noting that the keys are not
 known beforehand.

There's no such thing as the first element of an AA. An associative array is
a hash table and has no order to it. The order you get when iterating with
foreach is undefined. If you just want to get _a_ key from an AA, then you
need to either iterate over it with foreach and then break like you're doing
or use byKey to get a range. e.g. something like

auto key = k.byKey().front;
j[key] = k[key];

should work. But there is no first key, so I don't really understand
what you're really trying to do here and can't provide a better answer without
more information.
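
Put together, a rough sketch of what I think you're after:

void main()
{
    int[string] k = ["one" : 1, "two" : 2, "three" : 3];
    int[string] j;

    // There is no "first" key - this just grabs whichever key comes up.
    auto key = k.byKey().front;
    j[key] = k[key];
    k.remove(key); // drop it from the source AA if you really want a "pop"

    assert(j.length == 1 && k.length == 2);
}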

- Jonathan M Davis



Re: integer out of range

2014-07-03 Thread Jonathan M Davis via Digitalmars-d-learn
On Thu, 03 Jul 2014 10:24:26 +
pgtkda via Digitalmars-d-learn digitalmars-d-learn@puremagic.com
wrote:

 On Thursday, 3 July 2014 at 10:22:14 UTC, ponce wrote:
  On Thursday, 3 July 2014 at 10:15:25 UTC, pgtkda wrote:
  why is this possible?
 
  int count = 50_000_000;
 
  int is always 4 bytes, it can contains from -2_147_483_648 to
  2_147_483_647.

 oh, ok. I thought it only contains numbers to 2_000_000, but 2 to
 the power of 32 is your result, thanks.

If you want to know the min and max of the numeric types, just use their min
and max properties - e.g. int.min or long.max.
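
e.g.

import std.stdio : writeln;

void main()
{
    writeln(int.min, " ", int.max);   // -2147483648 2147483647
    writeln(long.min, " ", long.max); // -9223372036854775808 9223372036854775807
}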

- Jonathan M Davis


Re: Setting dates

2014-07-13 Thread Jonathan M Davis via Digitalmars-d-learn
On Friday, July 11, 2014 04:01:24 Joel via Digitalmars-d-learn wrote:
 I've been trying to set a date for my program (a small struct):

 import std.datetime;

 auto date = cast(DateTime)Clock.currTime();
 setDate(date.day, date.month, date.year);

 Problem is that day  month are not integers. And date.day.to!int
 doesn't work either.

You're going to need to provide more details. setDate is not a standard
function, so it must be yours, and we don't know anything about it - not even
its signature, which makes it awfully hard to help you.

That being said, date.day returns ubyte, date.month returns
std.datetime.Month, and date.year returns ushort, all of which implicitly
convert to int. So, I don't see why you would be having any problems converting
them to int. This compiles just fine

int year = date.year;
int month = date.month;
int day = date.day;

And date.day.to!int or date.day.to!int() both compile just fine as long as you
import std.conv. But calling to!int() is completely unnecessary, because the
conversion is implicit, as shown above.

So, without more details, we can't help you.

- Jonathan M Davis



Re: get number of items in DList

2014-07-15 Thread Jonathan M Davis via Digitalmars-d-learn
On Fri, 11 Jul 2014 07:46:37 -0700
H. S. Teoh via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:
 On Fri, Jul 11, 2014 at 10:23:58AM -0300, Ary Borenszweig via
 Digitalmars-d-learn wrote:
  On 7/11/14, 4:46 AM, bearophile wrote:
  pgtkda:
  
  How can i get the number of items which are currently hold in a
  DList?
  
  Try (walkLength is from std.range):
  
  mydList[].walkLength
  
  Bye,
  bearophile
 
  So the doubly linked list doesn't know it's length? That seems a bit
  inefficient...

 It should be relatively simple to write a wrapper that *does* keep
 track of length.

 The main problem, though, comes from list splicing: given two
 arbitrary points in the list, if you splice out the section of the
 list in between, there's no easy way to know how many items lie in
 between, so you'll have to walk the list to recompute the length
 then. Which sorta defeats the purpose of having a linked list. :)

You can either make a doubly-linked list which is efficient for splicing or
efficient for getting the length, not both. It's trivial to build a linked
list type which knows its length efficiently around one which splices
efficiently, but not the other way round.

C++98 went with one which spliced efficiently but then made the mistake of
providing size(), and people kept using it, thinking that it was O(1), when it
was O(n). C++11 silently switched it so that size() is O(1) and splicing is
inefficient (which I object to primarily on the grounds that it was silent). D
went the route of making splicing efficient but not providing length/size so
that folks don't accidentally think that it's O(1), when it's actually O(n).
Having to use walkLength makes it more explicit.

That being said, we should probably consider making a wrapper type for
std.container which wraps DList and has O(1) length, allowing folks to choose
which they want based on the requirements of their program. That way, we get
the best of both worlds.
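
For reference, getting the length today looks like this (a minimal sketch, and
deliberately spelled out so that the O(n) cost is visible at the call site):

import std.container : DList;
import std.range : walkLength;

void main()
{
    auto list = DList!int([1, 2, 3]);
    assert(list[].walkLength == 3);
}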

- Jonathan M Davis


Re: get os thread handles

2014-07-20 Thread Jonathan M Davis via Digitalmars-d-learn

On Sunday, 20 July 2014 at 09:34:46 UTC, Sean Campbell wrote:

How do i get an os thread handle from a thread object.
or are d thread not wrapped os threads.


They do wrap OS threads, but they encapsulate them in a 
cross-platform manner, and looking over Thread, it doesn't look 
like anything along the lines of an OS thread handle is exposed 
in the API.


What do you need the OS thread handle for?


Re: get os thread handles

2014-07-20 Thread Jonathan M Davis via Digitalmars-d-learn

On Sunday, 20 July 2014 at 10:03:47 UTC, Sean Campbell wrote:

On Sunday, 20 July 2014 at 09:53:52 UTC, Jonathan M Davis wrote:

On Sunday, 20 July 2014 at 09:34:46 UTC, Sean Campbell wrote:

How do i get an os thread handle from a thread object.
or are d thread not wrapped os threads.


They do wrap OS threads, but they encapsulate them in a 
cross-platform manner, and looking over Thread, it doesn't 
look like anything along the lines of an OS thread handle is 
exposed in the API.


What do you need the OS thread handle for?


sonce the standard so i can get pause/resume support for d 
threads


I'd suggest opening up an enhancement request. Assuming that that 
functionality exists across all of the various OSes, it can 
probably be added:


https://issues.dlang.org

You can also open an enhancement request for getting access to 
the OS thread handles, but my guess is that that wouldn't happen, 
because it makes it so that the Thread class no longer has full 
control, which would make it impossible to have any kind of 
@safety for Thread (though it doesn't seem to currently have any 
such annotations).


But if what you're looking for is thread functionality that is 
common across OSes, then there's a good chance that it's 
reasonable to add it to Thread, making it unnecessary to provide 
access to its innards.


In the meantime, I expect that you'll have to either use the C 
APIs directly or create your own class which is a copy of Thread 
and tweak it to do what you need.


Re: myrange.at(i) for myrange.dropExactly(i).front

2014-07-25 Thread Jonathan M Davis via Digitalmars-d-learn
On Friday, 25 July 2014 at 21:33:23 UTC, Timothee Cour via 
Digitalmars-d-learn wrote:

Is there a function for doing this?
myrange.at(i)
(with meaning of myrange.dropExactly(i).front)
it's a common enough operation (analog to myrange[i]; the 
naming is from

C++'s std::vector<T>::at)


That would require a random access range, in which case you can 
just index directly. For a non-random access range, what you're 
doing would be the most direct way of doing it.


- Jonathan M Davis


Re: myrange.at(i) for myrange.dropExactly(i).front

2014-07-25 Thread Jonathan M Davis via Digitalmars-d-learn

On Saturday, 26 July 2014 at 00:28:32 UTC, Ary Borenszweig wrote:

On 7/25/14, 6:39 PM, Jonathan M Davis wrote:

On Friday, 25 July 2014 at 21:33:23 UTC, Timothee Cour via
Digitalmars-d-learn wrote:

Is there a function for doing this?
myrange.at(i)
(with meaning of myrange.dropExactly(i).front)
it's a common enough operation (analog to myrange[i]; the 
naming is from

C++'s std::vector<T>::at)


That would require a random access range, in which case you 
can just
index directly. For a non-random access range, what you're doing would
be the most direct way of doing it.

- Jonathan M Davis


No, the OP said the meaning was `myrange.dropExactly(i).front`, 
which is not a random access.


Sometimes you *do* want the n-th element of a range even if the 
range is not a random access.


That is an inherently expensive operation, so it would be a very 
bad idea IMHO to support it. The OP referenced vector, which has 
random access, and that's a completely different ballgame.


In general, when operating on ranges, you should be trying to 
iterate over them only once and to backtrack as little as 
possible if you have to backtrack. It's true that's not always 
possible, but if at() were O(n), then it would make inefficient 
code less obvious.


I'd argue against at() working on non-random access ranges for 
the same reason that std.container doesn't support containers 
with a length property of O(n) - because it's a function that 
looks like it's O(1), and programmers will consistently think 
that it's O(1) and misuse it. C++ has had that problem with 
std::list's size function, which is O(n). at() looks like it would 
be O(1) (and it always is in C++), so it would be inappropriate 
to have it in cases where it would need to be O(n), and since we 
already have [], why add at()? It exists on vector in addition to 
[] to give it range-checked random access. We already have that 
in D with [].


myrange.dropExactly(i).front makes it much more obvious what 
you're doing and that it's inefficient. It might be necessary in 
some cases, but we don't want to give the impression that it's 
cheap, which at() would do.
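
To make the distinction concrete (a minimal sketch):

import std.algorithm : filter;
import std.range : dropExactly, iota;

void main()
{
    auto r = iota(0, 10).filter!(x => x % 2 == 1); // not random access
    // The O(n) walk is spelled out at the call site.
    assert(r.dropExactly(2).front == 5);

    auto ra = iota(0, 10); // random access, so just index it
    assert(ra[2] == 2);
}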


- Jonathan M Davis


Re: why does isForwardRange work like this?

2014-07-31 Thread Jonathan M Davis via Digitalmars-d-learn

On Thursday, 31 July 2014 at 20:34:42 UTC, Vlad Levenfeld wrote:
What's the rationale behind stating the condition this way as 
opposed to, say,


is(typeof(R.init.save) == R) || is(typeof(R.init.save()) == R)


so that member fields as well as @property and non-@property 
methods will match


save should never have been a property, since it doesn't really 
emulate a variable, but because it was decided that it was a 
property, it is required by the API that it be a property. And 
the reason why it's required to be a property once it was decided 
that it should be one is quite simple. What would happen if you a 
function did this


auto s = range.save();

and save was a property? The code would fail to compile. Because 
it was decided that save should be a property, _every_ time that 
save is used, it must be used as a property, or it won't work 
with any range that did define save as a property. As such, there 
is no reason to allow save to be a non-property function. 
Allowing that would just make it easier to write code which 
called save incorrectly but worked with the ranges that it was 
tested with (because they defined save as a function instead of a 
property). In addition, if it works with your range, it's 
perfectly legal to define save as a member variable (though that 
would be a rather bizarre thing to do), and allowing save to be 
called as a function by the range API would break that.


So, once it's been decided that it's legal for something in a 
templated API to be a property, it _must_ be a property, a 
variable, or an enum, or there are going to be problems, because 
it has to be used without parens.
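
e.g. a skeletal forward range would declare it like this (a minimal sketch):

import std.range : isForwardRange;

struct Counter
{
    int n;
    enum empty = false; // an infinite range, to keep the sketch short
    @property int front() { return n; }
    void popFront() { ++n; }

    // Declared as a property, because that's how the range API uses it.
    @property Counter save() { return this; }
}

static assert(isForwardRange!Counter);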


- Jonathan M Davis


Re: why does isForwardRange work like this?

2014-07-31 Thread Jonathan M Davis via Digitalmars-d-learn

On Thursday, 31 July 2014 at 22:21:10 UTC, Vlad Levenfeld wrote:
Yes, I see the problem now. I can't think of any reason why I'd 
want to make save anything but a function (especially since 
`save` is a verb) but I guess someone out there might have a 
good one.


It's Andrei's fault. I'm not quite sure what he was thinking. But 
unfortunately, we're stuck with it. So, it's just become one of 
D's little quirks that we have to learn and live with.


So, what is gained by (inout int = 0) over ()? I wasn't even 
aware that giving a default value for an unlabeled parameter 
would compile. What does it do?


I've wondered that myself but never taken the time to look into 
it. However, according to this post:


http://forum.dlang.org/post/mailman.102.1396007039.25518.digitalmars-d-le...@puremagic.com

it looks like it convinces the compiler to make the function an 
inout function so that the range variable that's declared can be 
treated as inout and therefore be able to have ranges with inout 
in their type work.


Re: why does isForwardRange work like this?

2014-08-01 Thread Jonathan M Davis via Digitalmars-d-learn

On Friday, 1 August 2014 at 11:51:55 UTC, Marc Schütz wrote:
On Friday, 1 August 2014 at 04:52:35 UTC, Jonathan M Davis 
wrote:
On Thursday, 31 July 2014 at 22:21:10 UTC, Vlad Levenfeld 
wrote:
Yes, I see the problem now. I can't think of any reason why 
I'd want to make save anything but a function (especially 
since `save` is a verb) but I guess someone out there might 
have a good one.


It's Andrei's fault. I'm not quite sure what he was thinking. 
But unfortunately, we're stuck with it. So, it's just become 
one of D's little quirks that we have to learn and live with.


Can we not at least deprecate it? And while we're at it, the 
same for `dup` and `idup`?


It would break too much code to change save at this point. 
There's no way that you're going to talk Andrei or Walter into 
changing something like that over whether it makes sense for it 
to be a property or not. That's not the kind of thing that they 
think is important, and you're more likely to get Andrei to try 
and kill off @property again rather than anything useful.


As for dup and idup, they were replaced with functions recently 
(maybe for 2.066 but not 2.065 - I'm not sure when the changes 
were made), so they might actually work with parens now. I'm not 
sure. But since dup and idup aren't being implemented by lots of 
different people like the range API is, changing those doesn't 
risk breaking code where folks made it a variable.


- Jonathan M Davis


Re: why does isForwardRange work like this?

2014-08-01 Thread Jonathan M Davis via Digitalmars-d-learn

On Friday, 1 August 2014 at 19:59:16 UTC, Jonathan M Davis wrote:
But since dup and idup aren't being implemented by lots of 
different people like the range API is, changing those doesn't 
risk breaking code where folks made it a variable.


Well, I probably shouldn't put it quite that way, since that's 
not the only problem with changing save (which I guess that that 
statement implies). The real problem with changing save is that 
we'd have to change the template constraint to use save() to make 
sure that no one declared it as a variable, and that would break 
everyone's code who declared save as a property - so, everyone. 
And _that_ is why save isn't going to change.


- Jonathan M Davis


Re: unittest affects next unittest

2014-08-02 Thread Jonathan M Davis via Digitalmars-d-learn
On Fri, 01 Aug 2014 23:09:37 +
sigod via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote:

 Code: http://dpaste.dzfl.pl/51bd62138854
 (It was reduced by DustMite.)

 Have I missed something about structs? Or this simply a bug?

Don't do this with a member variable:

private Node * _root = new Node();

Directly initializing it like that sets the init value for that struct, and
that means that every struct of that type will have exactly the same value for
_root, so they will all share the same root rather than having different
copies. You need to initialize _root in the constructor.
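
e.g. a minimal sketch of the difference (Node and Tree here are just
placeholders, not your actual types):

struct Node { int value; }

struct Tree
{
    private Node* _root;

    // A constructor runs at runtime, so every instance gets its own Node.
    this(int rootValue)
    {
        _root = new Node(rootValue);
    }
}

void main()
{
    auto a = Tree(1);
    auto b = Tree(2);
    assert(a._root !is b._root); // separate allocations, unlike the init-value case
}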

- Jonathan M Davis


Re: unittest affects next unittest

2014-08-05 Thread Jonathan M Davis via Digitalmars-d-learn

On Tuesday, 5 August 2014 at 17:41:06 UTC, Marc Schütz wrote:

On Tuesday, 5 August 2014 at 15:39:55 UTC, sigod wrote:
On Saturday, 2 August 2014 at 06:46:04 UTC, Jonathan M Davis 
via Digitalmars-d-learn wrote:

On Fri, 01 Aug 2014 23:09:37 +
sigod via Digitalmars-d-learn 
digitalmars-d-learn@puremagic.com wrote:



Code: http://dpaste.dzfl.pl/51bd62138854
(It was reduced by DustMite.)

Have I missed something about structs? Or this simply a bug?


Don't do this with a member variable:

private Node * _root = new Node();

Directly initializing it like that sets the init value for 
that struct, and
that means that every struct of that type will have exactly 
the same value for
_root, so they will all share the same root rather than 
having different

copies. You need to initialize _root in the constructor.

- Jonathan M Davis


So, it's a static initialization? Documentation didn't mention 
it. (In class' section only 2 sentences about it and none in 
struct's section.)


This is different from many languages (C#, Java... don't know 
about C and C++). What was the reason to make this 
initialization static?


It's a consequence of the fact that every type in D has a 
default initializer which is known at compile time.


That and it solves a lot of problems with undefined behavior 
(this is particularly true when talking about module-level 
variables). Static initialization ordering problems are hell in 
other languages (especially in C++). By making it so that all 
direct initializations of variables other than local variables 
are done statically, all kinds of nasty, subtle bugs go away. The 
one nasty, subtle issue that it causes that I'm aware of is that 
if you directly initialize any member variables which are 
reference types, then all instances of that type end up referring 
to the same object - and that's what you ran into. But 
fortunately, that's easy to fix, whereas the static 
initialization problems that were fixed by making all of those 
variables have to be initialized at compile time are much harder 
to fix.


- Jonathan M Davis


Re: unittest affects next unittest

2014-08-06 Thread Jonathan M Davis via Digitalmars-d-learn

On Wednesday, 6 August 2014 at 02:12:16 UTC, Era Scarecrow wrote:

On Tuesday, 5 August 2014 at 17:41:06 UTC, Marc Schütz wrote:
It's a consequence of the fact that every type in D has a 
default initializer which is known at compile time.


 Then doesn't this mean it should pop out a warning in case 
that's the behavior you wanted, perhaps a reference to the D 
specs?


 Beyond that it would be easy to forget it does that, since 
class initializes things different than structs because of the 
'known at compile time' logic.


It wouldn't make sense to warn about that, because it could 
very legitimately be what the programmer wants to do. We can't 
warn about anything that would be legitimate to have, because it 
would force programmers to change their code to get rid of the 
warning, even when the code was valid. So, while in most cases, 
it might be a problem, we can't warn about it. But I do think 
that the spec should be clearer about it.


- Jonathan M Davis


Re: private selective imports

2014-08-06 Thread Jonathan M Davis via Digitalmars-d-learn

On Wednesday, 6 August 2014 at 18:33:23 UTC, Dicebot wrote:
Most voted DMD bug : 
https://issues.dlang.org/show_bug.cgi?id=314


Yeah, it's why I'd suggest that folks not use selective imports 
right now. But people seem to really love the feature, so they 
use it and keep running into this problem.


- Jonathan M Davis


Re: private selective imports

2014-08-06 Thread Jonathan M Davis via Digitalmars-d-learn

On Wednesday, 6 August 2014 at 19:35:02 UTC, Dicebot wrote:
On Wednesday, 6 August 2014 at 19:31:04 UTC, Jonathan M Davis 
wrote:

On Wednesday, 6 August 2014 at 18:33:23 UTC, Dicebot wrote:
Most voted DMD bug : 
https://issues.dlang.org/show_bug.cgi?id=314


Yeah, it's why I'd suggest that folks not use selective 
imports right now. But people seem to really love the feature, 
so they use it and keep running into this problem.


- Jonathan M Davis


scope-local selective imports are not affected


Sure, but people keep using them at the module-level, which 
really shouldn't be done until the bug is fixed. IMHO, we'd be 
better off making it illegal to use selective imports at the 
module-level rather than keeping it as-is.


- Jonathan M Davis


Re: Removing an element from a list or array

2014-08-06 Thread Jonathan M Davis via Digitalmars-d-learn

On Wednesday, 6 August 2014 at 19:01:26 UTC, Patrick wrote:
I feel dumb.  I've been searching for how to do this, and each 
page or forum entry I read makes me more confused.


Let's say I have a list of values (Monday, Tuesday, Wednesday, 
Thursday, Friday).  I can store this list in an Slist, Dlist, 
Array etc -- any collection is fine.


I decide I want to remove Thursday from the list.

How?  I see that linearRemove is meant to do this, but that 
takes a range.  How do I get a range of 'Thursday'?


Slicing a container gives you a range for that container, and 
it's that type that needs to be used to remove elements (either 
that, or that type wrapped with std.range.Take), since otherwise, 
the container wouldn't know which elements you were trying to 
remove - just their values.


You need to use std.algorithm.find to find the element that you 
want to remove, in which case, you have a range starting at that 
element (but it contains everything after it too). So, you use 
std.range.take to take the number of elements that you want from 
the range, and then you pass that result to linearRemove. e.g.


import std.algorithm;
import std.container;
import std.range;

void main()
{
    auto arr = Array!string("Monday", "Tuesday", "Wednesday",
                            "Thursday", "Friday");
    auto range = arr[];
    assert(equal(range, ["Monday", "Tuesday", "Wednesday",
                         "Thursday", "Friday"]));
    auto found = range.find("Thursday");
    assert(equal(found, ["Thursday", "Friday"]));
    arr.linearRemove(found.take(1));
    assert(equal(arr[], ["Monday", "Tuesday", "Wednesday", "Friday"]));

}

C++ does it basically the same way that D does, but it's actually 
one place where iterators are cleaner, because you can just pass 
the iterator to erase, whereas with a range, that would remove 
all of the elements after that element, which is why you need 
take, which makes it that much more complicated.


Using opSlice like that along with range-based functions like 
find which don't return a new range type will always be what 
you'll need to do in the general case, but it would definitely be 
nice if we added functions like removeFirst to remove elements 
which matched a specific value so that the simple use cases 
didn't require using find.
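
e.g. something along these lines would do it (removeFirst is hypothetical, not
a Phobos function):

import std.algorithm : find;
import std.container : Array;
import std.range : take;

// Removes the first element equal to value, if there is one.
void removeFirst(T)(ref Array!T arr, T value)
{
    auto found = arr[].find(value);
    if (!found.empty)
        arr.linearRemove(found.take(1));
}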


- Jonathan M Davis


Re: opApply outside of struct/class scope

2014-08-10 Thread Jonathan M Davis via Digitalmars-d-learn

On Sunday, 10 August 2014 at 18:45:00 UTC, Freddy wrote:

Is there any why i can put a opApply outside of a struct scope


No overloaded operators in D can be put outside of a struct or 
class. They have to be member functions.
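
e.g. a minimal sketch of what opApply as a member looks like:

struct Evens
{
    int limit;

    // opApply must be a member of the type being iterated over.
    int opApply(int delegate(int) dg)
    {
        for (int i = 0; i < limit; i += 2)
        {
            if (auto result = dg(i))
                return result;
        }
        return 0;
    }
}

void main()
{
    foreach (i; Evens(6))
    {
        // i takes the values 0, 2, 4
    }
}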


- Jonathan M Davis


Re: opApply outside of struct/class scope

2014-08-10 Thread Jonathan M Davis via Digitalmars-d-learn

On Sunday, 10 August 2014 at 19:01:18 UTC, Era Scarecrow wrote:
On Sunday, 10 August 2014 at 18:58:50 UTC, Jonathan M Davis 
wrote:
No overloaded operators in D can be put outside of a struct or 
class. They have to be member functions.


 If I remember right, opApply was somewhat broken and only 
worked correctly in a few cases. But that was 18 months ago, a 
lot could have happened...


I'm not aware of opApply being broken, but I never use it, since 
in most cases where you might use opApply, you can use ranges, 
and they're far more flexible. But regardless, it's not legal to 
declare an overloaded operator outside of the type that it's for, 
so whether you're talking about opApply, opBinary, opAssign, or 
any other overloaded operator, declaring it as a free function 
like the OP is trying to do isn't going to work.


- Jonathan M Davis


Re: opApply outside of struct/class scope

2014-08-10 Thread Jonathan M Davis via Digitalmars-d-learn

On Sunday, 10 August 2014 at 22:03:28 UTC, Era Scarecrow wrote:
On Sunday, 10 August 2014 at 21:57:29 UTC, Jonathan M Davis 
wrote:

I'm not aware of opApply being broken, but I never use it,


 I remember very specifically it was brought up, that opApply 
was not working correctly and you could only use it with a very 
specific cases because... the implementation was incomplete?


 I think the discussion was partially on removing opApply from 
BitArray and going for a range or array approach instead 
because the problem would go away, but a lot of this is fuzzy 
and from memory. I have so much to catch up on :(


IIRC, opApply doesn't play well with various attributes, but I 
don't remember the details. That's the only issue with opApply 
that I'm aware of. It looks like you'll have to go digging into 
those other threads if you want to know what was supposed to be 
wrong with it.


- Jonathan M Davis


Re: implicit conversion

2014-08-12 Thread Jonathan M Davis via Digitalmars-d-learn
On Tue, 12 Aug 2014 06:21:17 +
uri via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote:

 Hi,

 I'm trying to allow implicit conversions for my own type
 happening. I have the following:

 
 import std.math;
 import std.traits;

 struct S(T)
 if(isFloatingPoint!T)
 {
  T val;
  alias val this;
 }
 void main()
 {

  auto s = S!float();
  assert(isNaN(s));
  s = 10.0;
  assert(!isNaN(s));
 }
 

 But I get a compile time error:

 
 Error: template std.math.isNaN cannot deduce function from
 argument types !()(S!float), candidates are:

 std/math.d(4171):std.math.isNaN(X)(X x) if
 (isFloatingPoint!X)
 

 Is there a way I can to do this, maybe opCall/opCast (I tried
 these but failed)?

The problem is that isNaN is now templatized, and its constraint uses
isFloatingPoint, which requires that the type _be_ a floating point type, not
that it implicitly convert to one. So, as it stands, isNaN cannot work with
any type which implicitly converts to a floating point value. Either it will
have to be instantiated with the floating point type - e.g. isNaN!float(s) -
or you're going to have to explicitly cast s to a floating point type.

You can open a bug report - https://issues.dlang.org - and mark it as a
regression, and it might get changed, but the reality of the matter is that
templates don't tend to play well with implicit conversions. It's _far_ too
easy to allow something in due to an implicit conversion and then have it not
actually work, because the value is never actually converted. In general, I
would strongly advise against attempting to give types implicit conversions
precisely because they tend to not play nicely with templates.
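
e.g. with your example, either of these works (a minimal sketch):

import std.math : isNaN;
import std.traits : isFloatingPoint;

struct S(T)
    if (isFloatingPoint!T)
{
    T val;
    alias val this;
}

void main()
{
    auto s = S!float();
    assert(isNaN!float(s));       // instantiate with float explicitly
    assert(isNaN(cast(float) s)); // or convert before the call
}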

- Jonathan M Davis


Re: implicit conversion

2014-08-12 Thread Jonathan M Davis via Digitalmars-d-learn
On Tue, 12 Aug 2014 13:17:37 +
Meta via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote:

 On Tuesday, 12 August 2014 at 06:37:45 UTC, Jonathan M Davis via
 Digitalmars-d-learn wrote:
  The problem is that isNaN is now templatized, and its
  constraint uses
  isFloatingPoint, which requires that the type _be_ a floating
  point type, not
  that it implicitly convert to one. So, as it stands, isNaN
  cannot work with
  any type which implicitly converts to a floating point value.
  Either it will
  have to be instantiated with the floating point type - e.g.
  isNaN!float(s) -
  or you're going to have to explicitly cast s to a floating
  point type.
 
  You can open a bug report - https://issues.dlang.org - and mark
  it as a
  regression, and it might get changed, but the reality of the
  matter is that
  templates don't tend to play well with implicit conversions.
  It's _far_ too
  easy to allow something in due to an implicit conversion and
  then have it not
  actually work, because the value is never actually converted.
  In general, I
  would strongly advise against attempting to give types implicit
  conversions
  precisely because they tend to not play nicely with templates.
 
  - Jonathan M Davis

 I think this should be considered a bug. A type with alias this
 should work in all cases that the aliased type would. If the
 function fails to be instantiated with S!float, then it should be
 forwarded to the S's val member.

AFAIK, the only time that the implicit conversion would take place is when the
type is being used in a situation where it doesn't work directly but where the
aliased type is used. In that case, the compiler sees the accepted types and
sees that the type can implicitly convert to one of the accepted types and
thus does the conversion. So, it knows that the conversion will work before it
even does it. The compiler never attempts to do the conversion just to see
whether it will work, which is essentially what it would have to do when
attempting to use the type with a templated function. You can certainly create
an enhancement request for such behavior, but I have no idea how likely it is
get implemented. There are currently _no_ cases where the compiler does
anything with template instantiations to try and make them pass if simply
trying to instantiate them with the given type failed.

- Jonathan M Davis


Re: implicit conversion

2014-08-12 Thread Jonathan M Davis via Digitalmars-d-learn

On Tuesday, 12 August 2014 at 15:39:09 UTC, Meta wrote:

What I mean is that this breaks the Liskov Substitution
Principle, which alias this should obey, as it denotes a 
subtype.

Since S!float has an alias this to float, it should behave as a
float in all circumstances where a float is expected; otherwise,
we've got a big problem with alias this on our hands.


IMHO, it was a mistake to add alias this to the language. It's
occasionally useful, but it's too dangerous. Implicit conversions
wreak havoc with templates, because inevitably what happens is
that a type is tested for whether it implicitly converts to a
particular type, but then the template is instantiated with the
original type, not the implicitly converted one, and then the
template frequently fails to compile - or if it does compile, it
may do weird things, because you're dealing with a type that
doesn't act as expected.

If you're dealing with a template which doesn't accept implicit
conversions (e.g. isNaN), and the implicit conversion were tested
after the actual type failed, and the template was then
instantiated with the implicitly converted type, then maybe that
could work, but that's not how it works now, and in general, I
think alias this is just too dangerous to use.

- Jonathan M Davis


Re: implicit conversion

2014-08-12 Thread Jonathan M Davis via Digitalmars-d-learn
On Tuesday, 12 August 2014 at 19:03:58 UTC, H. S. Teoh via 
Digitalmars-d-learn wrote:
tl;dr: there are so many ways template code can go wrong, that 
I don't think
it justifies blaming alias this for problems.


Allowing implicit conversions makes the problem much worse IMHO. 
It makes it far too easy to write a template constraint which 
passes due to the implicit conversion (even if an implicit 
conversion wasn't explicitly checked for) but then have the 
function fail to work properly because the implicit conversion 
never actually takes place within the function (and if the 
template constraint doesn't explicitly test for an implicit 
conversion, then the argument that the value should have been 
explicitly converted doesn't hold). Fortunately, in many cases, 
the result is a compilation error rather than weird behavior, but 
in some cases, it will be weird behavior - especially when the 
code involved is highly templatized and generic.


I'm not even vaguely convinced that allowing implicit conversions 
is worth the pain that they cause with templated code. Sure, they 
have their uses, and it would be a loss to have nothing like 
alias this, but on the whole, I'd much rather pay the cost of 
having no implicit conversions than having to deal with the havoc 
they wreak on templated code.


- Jonathan M Davis


Re: drop* and take* only for specific element values

2014-08-13 Thread Jonathan M Davis via Digitalmars-d-learn
On Wed, 13 Aug 2014 14:28:29 +
Meta via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote:

 On Wednesday, 13 August 2014 at 12:37:34 UTC, Nordlöw wrote:
   Are there variants of drop* and take* that only drop an element if
   it's equal to a value, kind of like strip does?
 
  If not I believe they should be added.

 No, but it'd probably be useful. Maybe call them dropIf/takeIf,
 or just add an overload that takes a predicate... I'll look into
 making a pull request sometime this week. How do you envision
 these working?

They're called find and until. You just have to give them the opposite
predicate that you'd give a function called dropIf or takeIf.
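
e.g. (a quick sketch):

import std.algorithm : equal, find, until;

void main()
{
    auto a = [1, 1, 2, 3];
    // "drop the leading 1s" is find with the negated predicate
    assert(a.find!(x => x != 1).equal([2, 3]));
    // "take the leading 1s" is until with the negated predicate
    assert(a.until!(x => x != 1).equal([1, 1]));
}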

- Jonathan M Davis



Re: drop* and take* only for specific element values

2014-08-13 Thread Jonathan M Davis via Digitalmars-d-learn
On Wed, 13 Aug 2014 07:45:17 -0700
Jonathan M Davis via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:

 On Wed, 13 Aug 2014 14:28:29 +
 Meta via Digitalmars-d-learn digitalmars-d-learn@puremagic.com
 wrote:

  On Wednesday, 13 August 2014 at 12:37:34 UTC, Nordlöw wrote:
    Are there variants of drop* and take* that only drop an element if
    it's equal to a value, kind of like strip does?
  
   If not I believe they should be added.
 
  No, but it'd probably be useful. Maybe call them dropIf/takeIf,
  or just add an overload that takes a predicate... I'll look into
  making a pull request sometime this week. How do you envision
  these working?

 They're called find and until. You just have to give them the opposite
 predicate that you'd give a function called dropIf or takeIf.

I should probably have pointed out that we attempted to put dropWhile and takeWhile
into Phobos quite some time ago (which would basically be dropIf and takeIf),
but Andrei refused to let them in, because all they did was reverse the
predicate, so arguments that they should be added based on the fact that find
and until take the opposite predicate aren't going to fly.

- Jonathan M Davis



Re: String Prefix Predicate

2014-08-14 Thread Jonathan M Davis via Digitalmars-d-learn

On Thursday, 14 August 2014 at 17:41:08 UTC, Nordlöw wrote:

On Thursday, 14 August 2014 at 17:33:41 UTC, Justin Whear wrote:

std.algorithm.startsWith?  Should auto-decode, so it'll do a


What about 
https://github.com/D-Programming-Language/phobos/pull/2043


Auto-decoding should be avoided when possible.

I guess something like

whole.byDchar().startsWith(part.byDchar())

is preferred right?

If so is this what we will live with until Phobos has been 
upgraded to using pull 2043 in a few years?


Except that you _have_ to decode in this case. Unless the string 
types match, there's no way around it. And startsWith won't 
decode if the string types match. So, I really see no issue in 
just straight-up using startsWith.
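
e.g. (a minimal sketch):

import std.algorithm : startsWith;

void main()
{
    string whole = "hello world";
    string part = "hello";
    // Both sides are strings of the same type, so no decoding is needed.
    assert(whole.startsWith(part));
}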


Where you run into problems with auto-decoding in Phobos 
functions is when a function results in a new range type. That 
forces you into a range of dchar, whether you wanted it or not. 
But beyond that, Phobos is actually pretty good about avoiding 
unnecessary decoding (though there probably are places where it 
could be improved). The big problem is that that requires 
special-casing a lot of functions, whereas that wouldn't be 
required with a range of char or wchar.


So, the biggest problems with automatic decoding are when a 
function returns a range of dchar when you wanted to operate on 
code units or when you write a function and then have to special 
case it for strings if you want to avoid the auto-decoding, 
whereas that's already been done for you with most Phobos 
functions.


- Jonathan M Davis


Re: Appender is ... slow

2014-08-14 Thread Jonathan M Davis via Digitalmars-d-learn
On Thursday, 14 August 2014 at 17:16:42 UTC, Philippe Sigaud 
wrote:
From time to time, I try to speed up some array-heavy code by 
using std.array.Appender, reserving some capacity and so on.


It never works. Never. It gives me executables that are maybe 
30-50% slower than bog-standard array code.


I don't do anything fancy: I just replace the types, use 
clear() instead of = null...


Do people here get good results from Appender? And if yes, how 
are you using it?


I've never really tried to benchmark it, but it was my 
understanding that the idea behind Appender was to use it to 
create the array when you do that via a lot of appending, and 
then you use it as a normal array and stop using Appender. It 
sounds like you're trying to use it as a way to manage reusing 
the array, and I have no idea how it works for that. But then 
again, I've never actually benchmarked it for just creating 
arrays via appending. I'd just assumed that it was faster than 
just using ~=, because that's what it's supposedly for. But maybe 
I just completely misunderstood what the point of Appender was.
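
i.e. I'd expect it to be used along these lines (a minimal sketch):

import std.array : appender;

void main()
{
    auto app = appender!string();
    foreach(i; 0 .. 3)
        app.put("abc");       // build the result up via the Appender...

    string result = app.data; // ...then take the array and stop using app
    assert(result == "abcabcabc");
}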


- Jonathan M Davis


Re: Appender is ... slow

2014-08-14 Thread Jonathan M Davis via Digitalmars-d-learn
On Thursday, 14 August 2014 at 19:29:28 UTC, Philippe Sigaud via 
Digitalmars-d-learn wrote:
It sounds like you're trying to use it as a way to manage 
reusing

the array, and I have no idea how it works for that.


There is a misunderstanding there: I'm using clear only to 
flush the
state at the beginning of the computation. The Appender is a 
class
field, used by the class methods to calculate. If I do not 
clear it at
the beginning of the methods, I keep appending new results to 
old
computations, which is not what I want. But really, calling 
clear is a

minor point: I'm interested in Appender's effect on *one* (long,
concatenation-intensive) computation.


Then it sounds like you're reusing the Appender. I've never done 
that. In fact, I would have assumed that that would mean that you 
were attempting to fill in the same array again, and I wouldn't 
have even thought that that was safe, because I would have 
assumed that Appender used assumeSafeAppend, which would mean that 
reusing the Appender would be highly unsafe unless you weren't 
using the array that you got from it anymore.


I always use Appender to construct an array, and then I get rid 
of the Appender. I don't think that I've ever had a member 
variable which was an Appender. I only ever use it for local 
variables or function arguments.



I've
never actually benchmarked it for just creating arrays via 
appending. I'd
just assumed that it was faster than just using ~=, because 
that's what it's
supposedly for. But maybe I just completely misunderstood what 
the point of

Appender was.


I don't know. People here keep telling newcomers to use it, but 
I'm
not convinced by its results. Maybe I'm seeing worse results 
because
my arrays do not have millions of elements and Appender 
shines for

long arrays?


I have no idea. It was my understanding that it was faster to 
create an array via appending using Appender than ~=, but I've 
never benchmarked it or actually looked into how it works. It's 
quite possible that while it's _supposed_ to be faster, it's 
actually flawed somehow and is actually slower, and no one has 
noticed previously, simply assuming that it was faster because 
it's supposed to be.


- Jonathan M Davis


Re: Appender is ... slow

2014-08-14 Thread Jonathan M Davis via Digitalmars-d-learn

On Thursday, 14 August 2014 at 19:47:33 UTC, Brad Anderson wrote:
On Thursday, 14 August 2014 at 19:10:18 UTC, Jonathan M Davis 
wrote:
I've never really tried to benchmark it, but it was my 
understanding that the idea behind Appender was to use it to 
create the array when you do that via a lot of appending, and 
then you use it as a normal array and stop using Appender. It 
sounds like you're trying to use it as a way to manage reusing 
the array, and I have no idea how it works for that. But then 
again, I've never actually benchmarked it for just creating 
arrays via appending. I'd just assumed that it was faster than 
just using ~=, because that's what it's supposedly for. But 
maybe I just completely misunderstood what the point of 
Appender was.


- Jonathan M Davis


I too have trouble understanding what Appender does that 
supposedly makes it faster (at least from the documentation). 
My old, naive thought was that it was something like a linked 
list of fixed size arrays so that appends didn't have to move 
existing elements until you were done appending, at which point 
it would bake it into a regular dynamic array, moving each 
element only once. Looking at the code, it appeared to be nothing 
like that (an std::deque with a copy into a vector, in C++ 
terms).


Skimming the code it appears to be more focused on the much 
more basic "~= always reallocates" performance problem. It 
seems it boils down to doing essentially this (someone feel 
free to correct me) in the form of an output range:


auto a = /* some array */;
auto b = a;
a = a.array();
for(...)
  b.assumeSafeAppend() ~= /* element */;


It was my understanding that that was essentially what it did, 
but I've never really studied its code, and I don't know if there 
are any other tricks that it's able to pull. It may be that it 
really doesn't do anything more than be a wrapper which handles 
assumeSafeAppend for you correctly. But if that's the case, I 
wouldn't expect operating on arrays directly to be any faster. 
Maybe it would be _slightly_ faster, because there are no wrapper 
functions to inline away, but it wouldn't be much faster, and the 
wrapper would at least ensure that you used assumeSafeAppend 
correctly. If it's really as much slower as Philippe is finding, 
then I have no idea what's going on. Certainly, it merits a bug 
report and further investigation.


(assumeSafeAppend's documentation doesn't say whether or not 
it'll reallocate when capacity is exhausted, I assume it does).


All assumeSafeAppend does is tell the runtime that this 
particular array is the array the farthest into the memory block 
(i.e. that any of the slices which referred to farther in the 
memory block don't exist anymore). So, the value in the runtime 
that keeps track of the farthest point into the memory block 
which has been referred to by an array is then set to the end of 
the array that you passed to assumeSafeAppend. And because it's 
now the last array in that block, it's safe to append to it 
without reallocating. But the appending then works the same way 
that it always does, and it'll reallocate when there's no more 
capacity. The whole idea is to just make it so that the runtime 
doesn't think that the memory after that array is unavailable for 
that array to expand into.
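
To illustrate the intended use (a minimal sketch - assumeSafeAppend lives in 
druntime's object module, so no import is required):

void main()
{
    auto arr = [1, 2, 3, 4, 5];
    arr = arr[0 .. 3]; // shrink the slice; the block still ends at the old length

    // Tell the runtime that no other slice refers to the elements past arr's
    // end, so appending can reuse the memory block instead of reallocating.
    arr.assumeSafeAppend();

    arr ~= 42; // appends in place until the block's capacity is exhausted
    assert(arr == [1, 2, 3, 42]);
}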


- Jonathan M Davis


Re: Appender is ... slow

2014-08-14 Thread Jonathan M Davis via Digitalmars-d-learn
On Thursday, 14 August 2014 at 21:00:55 UTC, Jonathan M Davis 
wrote:
On Thursday, 14 August 2014 at 19:47:33 UTC, Brad Anderson 
wrote:
On Thursday, 14 August 2014 at 19:10:18 UTC, Jonathan M Davis 
wrote:
I've never really tried to benchmark it, but it was my 
understanding that the idea behind Appender was to use it to 
create the array when you do that via a lot of appending, and 
then you use it as a normal array and stop using Appender. It 
sounds like you're trying to use it as a way to manage 
reusing the array, and I have no idea how it works for that. 
But then again, I've never actually benchmarked it for just 
creating arrays via appending. I'd just assumed that it was 
faster than just using ~=, because that's what it's 
supposedly for. But maybe I just completely misunderstood 
what the point of Appender was.


- Jonathan M Davis


I too have trouble understanding what Appender does that 
supposedly makes it faster (at least from the documentation). 
My old, naive thought was that it was something like a linked 
list of fixed size arrays so that appends didn't have to move 
existing elements until you were done appending, at which 
point it would bake it into a regular dynamic array moving 
each element only once looking at the code it appeared to be 
nothing like that (an std::deque with a copy into a vector in 
c++ terms).


Skimming the code it appears to be more focused on the much 
more basic ~= always reallocates performance problem. It 
seems it boils down to doing essentially this (someone feel 
free to correct me) in the form of an output range:


auto a = /* some array */;
auto b = a;
a = a.array();
for(...)
 b.assumeSafeAppend() ~= /* element */;


It was my understanding that that was essentially what it did, 
but I've never really studied its code, and I don't know if 
there are any other tricks that it's able to pull. It may be 
that it really doesn't do anything more than be  wrapper which 
handles assumeSafeAppend for you correctly. But if that's the 
case, I wouldn't expect operating on arrays directly to be any 
faster. Maybe it would be _slightly_ faster, because there are 
no wrapper functions to inline away, but it wouldn't be much 
faster, it would ensure that you used assumeSafeAppend 
correctly. If it's really as much slower as Phillippe is 
finding, then I have no idea what's going on. Certainly, it 
merits a bug report and further investigation.


Okay. This makes no sense actually. You call assumeSafeAppend 
after you _shrink_ an array and then want to append to it, not 
when you're just appending to it. So, assumeSafeAppend wouldn't 
help with something like


auto app = appender!string();
// Append a bunch of stuff to app
auto str = app.data;

So, it must be doing something else (though it may be using 
assumeSafeAppend in other functions). I'll clearly have to look 
over the actual code to have any clue about what it's actually 
doing.


Though in reference to your idea of using a linked list of 
arrays, IIRC, someone was looking at changing it to do something 
like that at some point, but it would drastically change what 
Appender's data property would do, so I don't know if it would be 
a good idea to update Appender that way rather than creating a 
new type. But I don't recall what became of that proposal.


- Jonathan M Davis


Re: Appender is ... slow

2014-08-14 Thread Jonathan M Davis via Digitalmars-d-learn

On Thursday, 14 August 2014 at 21:11:51 UTC, safety0ff wrote:
IIRC it manages the capacity information manually instead of 
calling the runtime which reduces appending overhead.


That would make some sense, though it must be completely avoiding 
~= then and probably is even GC-mallocing the array itself. 
Regardless, I clearly need to study the code if I want to know 
what it's actually doing.


- Jonathan M Davis


Re: Appender is ... slow

2014-08-14 Thread Jonathan M Davis via Digitalmars-d-learn
On Thursday, 14 August 2014 at 21:34:04 UTC, Jonathan M Davis 
wrote:

On Thursday, 14 August 2014 at 21:11:51 UTC, safety0ff wrote:
IIRC it manages the capacity information manually instead of 
calling the runtime which reduces appending overhead.


That would make some sense, though it must be completely 
avoiding ~= then and probably is even GC-mallocing the array 
itself. Regardless, I clearly need to study the code if I want 
to know what it's actually doing.


It looks like what it does is essentially to set the array's 
length to the capacity that the GC gave it and then manage the 
capacity itself (so, basically what you were suggesting), which 
avoids the runtime overhead of ~= by reimplementing ~=. Whether 
it does that in a more efficient manner is an open question, and 
it also raises the question of why it would be cheaper to do it 
this way rather than in the GC. That's not at all obvious to me 
at the moment, especially because ensureAddable and put in 
Appender are both fairly complicated.


So, I really have no idea how Appender fares in comparison to 
just using ~=, and I have to wonder why something similar can't 
be done in the runtime itself if Appender actually is faster. I'd 
have to spend a lot more time looking into that to figure it out.


- Jonathan M Davis


Re: Appender is ... slow

2014-08-14 Thread Jonathan M Davis via Digitalmars-d-learn
On Thursday, 14 August 2014 at 17:16:42 UTC, Philippe Sigaud 
wrote:
From time to time, I try to speed up some array-heavy code by 
using std.array.Appender, reserving some capacity and so on.


It never works. Never. It gives me executables that are maybe 
30-50% slower than bog-standard array code.


I don't do anything fancy: I just replace the types, use 
clear() instead of = null...


Do people here get good results from Appender? And if yes, how 
are you using it?


Quick test...


import std.array;
import std.datetime;
import std.stdio;

enum size = 1000;

void test1()
{
    auto arr = appender!(int[])();
    foreach(n; 0 .. size)
        arr.put(n);
}

void test2()
{
    int[] arr;
    foreach(n; 0 .. size)
        arr ~= n;
}

void test3()
{
    auto arr = new int[](size);
    foreach(n; 0 .. size)
        arr[n] = n;
}

void test4()
{
    auto arr = uninitializedArray!(int[])(size);
    foreach(n; 0 .. size)
        arr[n] = n;
}

void main()
{
    auto result = benchmark!(test1, test2, test3, test4)(10_000);
    writeln(cast(Duration)result[0]);
    writeln(cast(Duration)result[1]);
    writeln(cast(Duration)result[2]);
    writeln(cast(Duration)result[3]);
}



With size being 1000, I get

178 ms, 229 μs, and 4 hnsecs
321 ms, 272 μs, and 8 hnsecs
27 ms, 138 μs, and 7 hnsecs
24 ms, 970 μs, and 9 hnsecs

With size being 100,000, I get

15 secs, 731 ms, 499 μs, and 1 hnsec
29 secs, 339 ms, 553 μs, and 8 hnsecs
2 secs, 602 ms, 385 μs, and 2 hnsecs
2 secs, 409 ms, 448 μs, and 9 hnsecs

So, it looks like using Appender to create an array of ints is 
about twice as fast as appending to the array directly, and 
unsurprisingly, creating the array at the correct size up front 
and filling in the values is far faster than either, whether the 
array's elements are default-initialized or not. And the numbers 
are about the same if it's an array of char rather than an array 
of int.


Also, curiously, if I add a test which is the same as test2 (so 
it's just appending to the array) except that it calls reserve on 
the array with size, the result is almost the same as not 
reserving. It's a smidgeon faster but probably not enough to 
matter. So, it looks like reserve doesn't buy you much for some 
reason. Maybe the extra work for checking whether it needs to 
reallocate or whatever fancy logic it has to do in ~= dwarfs the 
extra cost of the reallocations? That certainly seems odd to me, 
but bizarrely, the evidence does seem to say that reserving 
doesn't help much. That should probably be looked into.
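
For reference, that extra test was essentially the following (a quick sketch, 
reusing size from the benchmark above):

void test5()
{
    int[] arr;
    arr.reserve(size); // pre-allocate capacity for the appends below
    foreach(n; 0 .. size)
        arr ~= n;
}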


In any case, from what I can see, if all you're doing is creating 
an array and then throwing away the Appender, it's faster to use 
Appender than to directly append to the array. Maybe that changes 
with structs? I don't know. It would be interesting to know 
what's different about your code that's causing Appender to be 
slower, but from what I can see, it's definitely faster to use 
Appender unless you know the target size of the array up front.


- Jonathan M Davis


Re: @safe, pure and nothrow at the beginning of a module

2014-08-15 Thread Jonathan M Davis via Digitalmars-d-learn

On Friday, 15 August 2014 at 16:54:54 UTC, Philippe Sigaud wrote:

So I'm trying to use @safe, pure and nothrow.

If I understand correctly Adam Ruppe's Cookbook, by putting

@safe:
pure:
nothrow:

at the beginning of a module, I distribute it on all 
definitions, right? Even methods, inner classes, and so on?


Because I did just that on half a dozen of modules and the 
compiler did not complain. Does that mean my code is clean(?) 
or that what I did has no effect?


Hmmm... It _should_ apply to everything, but maybe it only 
applies to the outer-level declarations. Certainly, in most 
cases, I'd be surprised if marking everything in a module with 
those attributes would work on the first go. It's _possible_, 
depending on what you're doing, but in my experience, odds are 
that you're doing _something_ that violates one or all of those 
in several places.


- Jonathan M Davis


Re: Appender is ... slow

2014-08-15 Thread Jonathan M Davis via Digitalmars-d-learn

On Friday, 15 August 2014 at 16:48:10 UTC, monarch_dodra wrote:
If you are using raw GC arrays, then the raw append 
operation will outweigh the relocation cost on extension. So 
pre-allocation wouldn't really help in this situation (though 
the use of Appender *should*).


Is that because it's able to grab memory from the GC without 
actually having to allocate anything? Normally, I would have 
expected allocations to far outweigh the cost on extension and 
that preallocating would help a lot. But that would be with 
malloc or C++'s new rather than the GC, which has already 
allocated memory to reuse after it collects garbage.


- Jonathan M Davis


Re: @safe, pure and nothrow at the beginning of a module

2014-08-16 Thread Jonathan M Davis via Digitalmars-d-learn
On Sat, 16 Aug 2014 14:39:00 +0200
Artur Skawina via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:

 On 08/16/14 13:58, Philippe Sigaud via Digitalmars-d-learn wrote:
  On Sat, Aug 16, 2014 at 1:30 PM, Artur Skawina via
  Digitalmars-d-learn

  http://forum.dlang.org/post/mailman.125.1397731134.2763.digitalmar...@puremagic.com
 
  Okay...
 
  So @safe includes child scopes. I suppose @trusted and @system work
  in the same way.
 
  *but*
 
  nothrow, @nogc and UDA's do not include child scopes. Putting them
  at the beginning of a module will not affect methods in
  aggregates...
 
  What's the situation for pure? (I don't have a D compiler handy
  right now, or I would test it myself).

 @safe, @trusted, @system, shared, immutable, const, inout and `extern
 (...)` affect child scopes. `synchronized` does too, but in a rather
 unintuitive way; hopefully nobody uses this. ;)

 Other attributes, including 'pure' and 'nothrow' only affect symbols
 in the current scope.

It sounds like a bug to me if they're not consistent.
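
In code, the inconsistency described above looks something like this (a sketch
based on Artur's list; I haven't checked every attribute myself):

module example;

@safe:   // applies to child scopes, so S.foo below is @safe
nothrow: // per the above, applies only to module-level symbols, not to S.foo

void bar() { } // both @safe and nothrow

struct S
{
    // @safe, but reportedly *not* nothrow unless it's marked explicitly.
    void foo() { }
}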

- Jonathan M Davis


Re: @safe, pure and nothrow at the beginning of a module

2014-08-16 Thread Jonathan M Davis via Digitalmars-d-learn

On Saturday, 16 August 2014 at 20:48:25 UTC, monarch_dodra wrote:
On Saturday, 16 August 2014 at 19:30:16 UTC, Jonathan M Davis 
via Digitalmars-d-learn wrote:

On Sat, 16 Aug 2014 14:39:00 +0200
Artur Skawina via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:


@safe, @trusted, @system, shared, immutable, const, inout and 
`extern
(...)` affect child scopes. `synchronized` does too, but in a 
rather

unintuitive way; hopefully nobody uses this. ;)

Other attributes, including 'pure' and 'nothrow' only affect 
symbols

in the current scope.


It sounds like a bug to me if they're not consistent.

- Jonathan M Davis


Well, you got  @system to override @safe, but no @impure or 
@throws. So the behavior can kind of make sense in a way. Maybe.


Except that attributes like const, immutable, shared, and inout
can't be reversed either (in fact, @system, @trusted, and @safe -
and maybe extern - are the only ones from that list that can be),
so while I could see making that separation, that's not what's
actually happening.

- Jonathan M Davis


Re: Static function at module level

2014-08-17 Thread Jonathan M Davis via Digitalmars-d-learn
On Mon, 18 Aug 2014 01:32:40 +
Phil Lavoie via Digitalmars-d-learn digitalmars-d-learn@puremagic.com
wrote:

 Ok, so after years of D usage I just noticed that this is valid D
 (compiles anyways):

 static void myFunc() {}

 What is a static function at module level exactly? In C, that
 means private, in D, that means ___?

I'm pretty sure that it means nothing. It's just one of those cases where an
attribute is ignored because it doesn't apply, rather than resulting in an
error.

- Jonathan M Davis


Re: In the new D release why use free functions instead of properties?

2014-08-18 Thread Jonathan M Davis via Digitalmars-d-learn

On Monday, 18 August 2014 at 21:02:09 UTC, Gary Willoughby wrote:
In the new D release there have been some changes regarding 
built-in types.


http://dlang.org/changelog.html?2.066#array_and_aa_changes

I would like to learn why this has been done like this and why 
it is desired to be free functions rather than properties?


Probably because they never should have been properties in the 
first place. Properties are supposed to emulate variables, 
whereas something like dup is clearly an action. So, it's clearly 
not supposed to be a property. However, because D doesn't require 
parens on a function with no arguments, you can still call it 
without parens. Some of the changes probably also help with 
cleaning up the AA internals, which is sorely needed.
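
Nothing changes at the call site either way - e.g. this still compiles (a 
quick sketch):

void main()
{
    int[] a = [1, 2, 3];
    auto b = a.dup; // still callable without parens, now as a free function via UFCS
    assert(b == a && b.ptr !is a.ptr);
}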


- Jonathan M Davis


Re: In the new D release why use free functions instead of properties?

2014-08-19 Thread Jonathan M Davis via Digitalmars-d-learn

On Tuesday, 19 August 2014 at 16:28:54 UTC, monarch_dodra wrote:
Actually, the new free functions *are* properties. All that you 
just declared is valid, but we never got around to doing it. 
Walter (If I remember correctly) was opposed.


So right now, even if dup is a free function, myArray.dup() 
is still invalid.


Yuck. It shouldn't even be a breaking change to make them 
functions unless code is specifically testing dup vs dup(). We 
really should make that change IMHO.


- Jonathan M Davis


Re: shared and idup

2014-08-19 Thread Jonathan M Davis via Digitalmars-d-learn

On Tuesday, 19 August 2014 at 19:00:49 UTC, Marc Schütz wrote:
On Tuesday, 19 August 2014 at 17:56:31 UTC, Low Functioning 
wrote:

shared int[] foo;
auto bar() {
foo ~= 42;
return foo.idup;
}
Error: cannot implicitly convert element type shared(int) to 
immutable in foo.idup


Is this not correct? If I instead dup'd an array of ints (or 
some other non-reference elements) and cast to immutable, 
would I be in danger of undefined behavior?


Try upgrading your compiler to the just released 2.067, it works 
for me with that version.


Actually, it's 2.066, but regardless, dup and idup were turned 
into free functions, so that will probably fix some bugs where 
they didn't work with shared or weren't nothrow or somesuch (or 
if it doesn't, it puts them one step closer to it).


- Jonathan M Davis


Re: Auto attributes for functions

2014-08-20 Thread Jonathan M Davis via Digitalmars-d-learn
On Wed, 20 Aug 2014 01:38:52 +
uri via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote:

 Hi all,

 Bit new to D so this might be a very naive question...

 Can the compiler auto infer function attributes?

 I am often adding as many attributes as possible and use the
 compiler to show me where they're not applicable and take them
 away. It would be great if this could be achieved like so:

 auto function() @auto
 {}

 instead of manually writing:

 auto function() pure @safe nothrow @nogc const
 {}

Currently, just templated functions get their attributes inferred. The biggest
problem with inferring them for all functions is that you can declare a
function without defining it in the same place (e.g. if you're using .di
files), in which case the compiler has no function body to use for attribute
inference.
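
For example (a minimal sketch), no attributes are written on the template
below, but they're inferred for each instantiation from its body:

// The compiler infers pure, nothrow, and @safe for add!int here.
auto add(T)(T a, T b)
{
    return a + b;
}

void caller() pure nothrow @safe
{
    auto x = add(1, 2); // OK: the inferred attributes satisfy caller's explicit ones
}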

There have been discussions on ways to reasonably infer attributes under more
circumstances, but nothing has come of them yet. However, I'd expect that
there will be at least some improvements to the situation at some point given
that there is a general consensus that while the attributes are quite useful,
it's also rather annoying to have to keep typing them all.

- Jonathan M Davis


Re: Can you explain this?

2014-08-20 Thread Jonathan M Davis via Digitalmars-d-learn

On Wednesday, 20 August 2014 at 20:12:58 UTC, Justin Whear wrote:

On Wed, 20 Aug 2014 20:01:03 +, Colin wrote:


It looks veryhacky.

I see 3 distinct parts playing a role in my confusion:
A) The 'is' keyword. What does it do when you have 
is(expression);
B) typeof( expression ); whats this doing? Particularly when 
the
expression its acting on is a closure that returns nothing? 
(at least as

far as I can see)
C) The closure expression:
(inout int = 0) {
// Check to see if I can do InputRangy stuff...
}
Why is there a need for that inout int = 0 clause at the start 
of it?


Sorry for the long question!

Thanks,
Colin


Before the introduction of __traits(compiles, ...), 
`is(typeof(...))` was used.


is(typeof(foo)) and __traits(compiles, foo) are not the same. The 
first tests for the existence of the symbol, whereas the second 
checks whether the code will actually compile. In most cases, 
there's no real difference, but if you're trying to use a symbol 
in a context where it's not legal (e.g. using a private variable 
that you don't have access to), then the is expression will pass, 
whereas the __traits(compiles, ...) will fail. At least in Phobos, 
is(typeof(...)) is used far more frequently than __traits(compiles, ...). 
The trait is used almost exclusively in unit tests, not in 
template constraints or in user-defined traits such as 
isInputRange.


- Jonathan M Davis


Re: Can you explain this?

2014-08-20 Thread Jonathan M Davis via Digitalmars-d-learn

On Wednesday, 20 August 2014 at 21:06:49 UTC, monarch_dodra wrote:
On Wednesday, 20 August 2014 at 20:39:42 UTC, Jonathan M Davis 
wrote:
is(typeof(foo)) and __traits(compiles, foo) are not the same. 
The first tests for the existence of the symbol, whereas the 
second checks whether the code will actually compile.


Is that even true? I mean, are you just repeating something 
you've heard, or have you checked? is(typeof(foo)) has always 
failed for me merely if foo fails to compile. foo being an 
existing (but private) symbol is enough.


Test case:
//
module foo;

struct S
{
    private int i;
}
//
import foo;
import std.stdio;

void main(string[] args)
{
    S s;
    writeln(is(typeof(s.i)));
    writeln(__traits(compiles, s.i));
}
//

This says false, false.

I've yet to find a usecase where is(typeof(...)) and 
__traits(compiles, ...) aren't interchangeable.


I mean, I may have missed something, but it seems the whole 
private thing is just misinformation.


Well, here's an example of them not being the same:

---
import std.stdio;

struct S
{
    static void foo()
    {
        writeln(is(typeof(this)));
        writeln(__traits(compiles, this));
    }
}

void main()
{
    S.foo();
}
---

I originally found out about it from Don here: 
https://issues.dlang.org/show_bug.cgi?id=8339


I don't know why your example doesn't show them as different. But 
we should probably change it so that they _are_ the same - either 
that or document their differences explicitly and clearly, but I 
don't know why the is(typeof(...)) behavior here would be desirable. 
Maybe so that we could do typeof(this) to declare a local variable 
of the class type generically? I don't know. They're _almost_ the 
same but not quite, and I don't know what the exact differences 
are. Pretty much all I have to go on is Don's explanation in that 
bug report.


- Jonathan M Davis


Re: RAII limitations in D?

2014-08-21 Thread Jonathan M Davis via Digitalmars-d-learn
On Friday, 22 August 2014 at 03:00:01 UTC, Timothee Cour via 
Digitalmars-d-learn wrote:
On Thu, Aug 21, 2014 at 7:26 PM, Dicebot via 
Digitalmars-d-learn 

digitalmars-d-learn@puremagic.com wrote:


http://dlang.org/phobos/std_typecons.html#.RefCounted



That doesn't work with classes though; is there any way to get 
a ref

counted class?

(and btw RefCounted should definitely appear in
http://dlang.org/cpptod.html#raii)


It can be made to work with classes and probably should be. 
There's no fundamental reason why it can't. It's probably just 
more complicated.
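
For structs, it already works along these lines (a quick sketch):

import std.typecons : RefCounted;

struct Resource
{
    int handle;
    ~this() { /* release the handle here */ }
}

void main()
{
    auto r = RefCounted!Resource(42); // constructs the payload with handle = 42
    assert(r.handle == 42);           // payload members are reachable via alias this
} // last copy goes away, so the payload's destructor runs deterministically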


- Jonathan M Davis


Re: Why no multiple-dispatch?

2014-08-24 Thread Jonathan M Davis via Digitalmars-d-learn

On Sunday, 24 August 2014 at 23:42:51 UTC, Aerolite wrote:

Hey all,

I was surprised to learn yesterday that D does not actually
support Multiple-Dispatch, also known as Multimethods. Why is
this? Support for this feature is already present in Scala, C#
4.0, Groovy, Clojure, etc... Would it not make sense for D to
remain competitive in this regard?

While I think many of us are aware that problems of the nature
that require Multiple-Dispatch can be approached with the 
Visitor

Pattern, there seems to be a general consensus that the Visitor
Pattern is pretty cumbersome and boilerplate-heavy, and thus
should be avoided.

The common response from my searching around seems to be that a
template-based, static implementation of Multiple-Dispatch is 
the

go-to solution in D, but considering the existing template-bloat
issues we have, I can't help but wonder if language support for
this feature might be a better path to go down. Seems like it
wouldn't be too difficult to implement, although I've not looked
very deeply into dmd's source-code.

So what seems to be the situation here?


At this point, if something can be implemented in a library 
rather than in the language, the odds are low that it will be 
added to the language. The language is very powerful and already 
a bit complicated, so usually the response for questions like 
this is that we'll take advantage of D's existing features to 
implement the new feature rather than complicating the language 
further. If you could come up with a very good reason why it had 
to be in the language, then maybe it would happen, but my guess 
is that that's not likely to happen.


- Jonathan M Davis


Re: Error with constraints on a templated fuction

2014-08-25 Thread Jonathan M Davis via Digitalmars-d-learn
On Mon, 25 Aug 2014 15:48:10 +
Jeremy DeHaan via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:

 I've done things like this before with traits and I figured that
 this way should work as well, but it gives me errors instead.
 Perhaps someone can point out my flaws.

 immutable(T)[] toString(T)(const(T)* str)
   if(typeof(T) is dchar)//this is where the error is
 {
   return str[0..strlen(str)].idup; //I have strlen defined for
 each *string
 }

 I was going to add some || sections for the other string types,
 but this one won't even compile.

 src/dsfml/system/string.d(34): Error: found ')' when expecting
 '.' following dchar
 src/dsfml/system/string.d(35): Error: found '{' when expecting
 identifier following 'dchar.'
 src/dsfml/system/string.d(36): Error: found 'return' when
 expecting ')'
 src/dsfml/system/string.d(36): Error: semicolon expected
 following function declaration
 src/dsfml/system/string.d(36): Error: no identifier for
 declarator str[0 .. strlen(str)]
 src/dsfml/system/string.d(36): Error: no identifier for
 declarator .idup
 src/dsfml/system/string.d(37): Error: unrecognized declaration


 It compiles if I remove the 'if(typeof(T) is dchar)' section. Any
 thoughts?

As the others have pointed out, you need to do is(T == dchar). The way that you
used is, it's the is operator, which checks for bitwise equality (most
frequently used for comparing pointers), and it's a runtime operation, whereas
the is that you need to use in a template constraint is an is expression, which
is a compile time operation:

http://dlang.org/expression.html#IsExpression

is expressions actually get pretty complicated in their various forms, but the
most basic two are probably is(T == dchar), which checks that the two types are
the same, and is(T : dchar), which checks that T implicitly converts to dchar.
Another commonly used one is is(typeof(foo)). typeof(foo) gets the type of foo
and will result in void if foo doesn't exist, and is(void) is false, whereas
is(someOtherType) is true, so it's frequently used to check whether something
is valid. The Phobos source code is littered with examples (especially in
std.algorithm, std.range, and std.traits), since is expressions are frequently
used in template constraints.
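
To give a concrete sketch of the corrected constraint (the strlen here is just
a stand-in for whatever the original code defines for each string type):

import std.stdio;

// Hypothetical strlen for any zero-terminated string type.
size_t strlen(T)(const(T)* str)
{
    size_t len = 0;
    while(str[len] != 0)
        ++len;
    return len;
}

// The constraint uses an is expression, evaluated at compile time.
immutable(T)[] toString(T)(const(T)* str)
    if(is(T == char) || is(T == wchar) || is(T == dchar))
{
    return str[0 .. strlen(str)].idup;
}

void main()
{
    // String literals in D are zero-terminated, so taking .ptr is okay here.
    writeln(toString("hello"d.ptr));
}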

- Jonathan M Davis


  1   2   3   4   5   6   7   8   9   10   >