Re: foreach() behavior on ranges

2021-08-26 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Wednesday, 25 August 2021 at 19:51:36 UTC, H. S. Teoh wrote:
What I understand from what Andrei has said in the past is 
that a range is merely a "view" into some underlying storage; 
it is not responsible for the contents of that storage.  My 
interpretation of this is that .save will only save the 
*position* of the range, but it will not save the contents it 
points to, so it will not (should not) deep-copy.


That definition is potentially misleading if we take into account 
that a range is not necessarily iterating over some underlying 
storage: ranges can also be defined by algorithmic processes.  
(Think e.g. iota, or pseudo-RNGs, or a range that iterates over 
the Fibonacci numbers.)
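
To make the "algorithmic range" idea concrete, here is a minimal 
sketch in D (the type name is just illustrative) of an infinite 
forward range over the Fibonacci numbers: all of its state is 
iteration state, so a by-value copy and `save` are trivially 
equivalent.

```d
import std.range.primitives : isForwardRange;

// Sketch of an algorithmic forward range: its entire state is
// iteration state (two integers), so copying the struct by value
// and calling .save do exactly the same thing.
struct Fibonacci
{
    ulong a = 0, b = 1;

    enum bool empty = false;  // infinite range
    @property ulong front() const { return a; }
    void popFront() { immutable next = a + b; a = b; b = next; }
    @property Fibonacci save() const { return this; }
}

static assert(isForwardRange!Fibonacci);

void main()
{
    import std.algorithm.comparison : equal;
    import std.range : take;

    assert(Fibonacci().take(7).equal([0, 1, 1, 2, 3, 5, 8]));
}
```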


However, if the range is implemented by a struct that contains 
a reference to its iteration state, then yes, to satisfy the 
definition of .save it should deep-copy this state.


Right.  And in the case of algorithmic ranges (rather than 
container-derived ranges), the state is always and only the 
iteration state.  Then there are also ranges that iterate over 
external IO, which in most cases can't be treated as forward 
ranges but in a few cases might be (e.g. saving the cursor 
position when iterating over a file's contents).


Arguably, a lot of problems in the range design derive from not 
thinking those distinctions through in detail 
(external-IO-based vs. algorithmic vs. container-based), even 
though superficially they seem to map well to the input vs. 
forward vs. bidirectional vs. random-access range distinctions.


That's also not taking into account edge cases, e.g. stuff like 
RandomShuffle or RandomSample: here one can in theory copy the 
"head" of the range but one arguably wants to avoid correlations 
in the output of the different copies (which can arise from at 
least 2 different sources: copying under-the-hood pseudo-random 
state of the sampling/shuffling algorithm itself, or copying the 
underlying pseudo-random number generator).  Except perhaps in 
the case where one wants to take advantage of the pseudo-random 
feature to reproduce those sequences ... but then one wants that 
to be a conscious programmer decision, not happening by accident 
under the hood of some library function.


(Rabbit hole, here we come.)

Andrei has mentioned before that in retrospect, .save was a 
design mistake.  The difference between an input range and a 
forward range should have been keyed on whether the range type 
has reference semantics (input range) or by-value semantics 
(forward range).  But for various reasons, including the state 
of the language at the time the range API was designed, the 
.save route was chosen, and we're stuck with it unless Phobos 
2.0 comes into existence.


Either way, though, the semantics of a forward range pretty 
much dictates that whatever type a range has, if it claims to 
be a forward range then .save must preserve whatever iteration 
state it has at that point in time. If this requires 
deep-copying some state referenced from a struct, then that's 
what it takes to satisfy the API.  This may take the form of a 
.save method that copies state, or a copy ctor that does the 
same, or simply storing iteration state as PODs in the range 
struct so that copying the struct equates to preserving the 
iteration state.


Yes.  FWIW I agree that when _implementing_ a forward range one 
should probably make sure that copying by value and the `save` 
method produce the same results.


But as a _user_ of code implemented using the current range API, 
it might be a bad idea to assume that a 3rd party forward range 
implementation will necessarily guarantee that.


Re: foreach() behavior on ranges

2021-08-25 Thread Joseph Rushton Wakeling via Digitalmars-d-learn
On Wednesday, 25 August 2021 at 17:01:54 UTC, Steven 
Schveighoffer wrote:
In a world where copyability means it's a forward range? Yes. 
We aren't in that world, it's a hypothetical "if we could go 
back and redesign".


OK, that makes sense.

Technically this is true. In practice, it rarely happens. The 
flaw of `save` isn't that it's an unsound API, the flaw is that 
people get away with just copying, and it works 99.9% of the 
time. So code is simply untested with ranges where `save` is 
important.


This is very true, and makes it quite reasonable to try to pursue 
"the obvious/lazy thing == the thing you're supposed to do" 
w.r.t. how ranges are defined.


I'd be willing to bet $10 there is a function in phobos right 
now, that takes forward ranges, and forgets to call `save` when 
iterating with foreach. It's just so easy to do, and works with 
most ranges in existence.


I'm sure you'd win that bet!


The idea is to make the meaning of a range copy not ambiguous.


Yes, this feels reasonable.  And then one can reserve the idea of 
a magic deep-copy method for special cases like pseudo-RNGs where 
one wants them to be copyable on user request, but without code 
assuming it can copy them.


Re: foreach() behavior on ranges

2021-08-25 Thread Joseph Rushton Wakeling via Digitalmars-d-learn
On Wednesday, 25 August 2021 at 10:59:44 UTC, Steven 
Schveighoffer wrote:
structs still provide a mechanism (postblit/copy ctor) to 
properly save a forward range when copying, even if the guts 
need copying (unlike classes). In general, I think it was a 
mistake to use `.save` as the mechanism, as generally `.save` 
is equivalent to copying, so nobody does it, and code works 
fine for most ranges.


Consider a struct whose internal fields are just a pointer to its 
"true" internal state.  Does one have any right to assume that 
the postblit/copy ctor would necessarily deep-copy that?


If that struct implements a forward range, though, and that 
pointed-to state is mutated by iteration of the range, then it 
would be reasonable to assume that the `save` method MUST 
deep-copy it, because otherwise the forward-range property would 
not be respected.
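
As a concrete sketch of that point (the type and names here are 
hypothetical, not from any library): a range whose iteration state 
lives behind a pointer, where a plain struct copy aliases the state 
but `save` deep-copies it, as the forward-range contract requires.

```d
import std.range.primitives : isForwardRange;

// Hypothetical example: the "true" iteration state lives behind a
// pointer, so copying the struct merely aliases it; only .save
// produces an independent copy.
struct Counter
{
    private int* current;  // heap-allocated iteration state
    private int limit;

    static Counter upTo(int limit)
    {
        Counter c;
        c.current = new int;
        c.limit = limit;
        return c;
    }

    @property bool empty() const { return *current >= limit; }
    @property int front() const { return *current; }
    void popFront() { ++*current; }

    // Deep-copies the pointed-to state, satisfying the contract.
    @property Counter save() const
    {
        auto c = Counter.upTo(limit);
        *c.current = *current;
        return c;
    }
}

static assert(isForwardRange!Counter);

void main()
{
    auto r = Counter.upTo(3);
    auto aliased = r;     // shares state with r!
    auto saved = r.save;  // independent
    r.popFront();
    assert(aliased.front == 1);  // moved along with r
    assert(saved.front == 0);    // unaffected
}
```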


With that in mind, I am not sure it's reasonable to assume that 
just because a struct implements a forward-range API, that 
copying the struct instance is necessarily the same as saving the 
range.


Indeed, IIRC quite a few Phobos library functions program 
defensively against that difference by taking a `.save` copy of 
their input before iterating over it.


What should have happened is that input-only ranges should not 
have been copyable, and copying should have been the save 
mechanism. Then it becomes way way more obvious what is 
happening. Yes, this means forgoing classes as ranges.


I think there's a benefit of a method whose definition is 
explicitly "If you call this, you will get a copy of the range 
which will replay exactly the same results when iterating over 
it".  Just because the meaning of "copy" can be ambiguous, 
whereas a promise about how iteration can be used is not.


Re: foreach() behavior on ranges

2021-08-25 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Tuesday, 24 August 2021 at 09:15:23 UTC, bauss wrote:
A range should be a struct always and thus its state is copied 
when the foreach loop is created.


That's quite a strong assumption, because its state might be a 
reference type, or it might not _have_ state in a meaningful 
sense -- consider an input range that wraps reading from a 
socket, or that just reads from `/dev/urandom`, for two examples.


Deterministic copying per foreach loop is only guaranteed for 
forward ranges.
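
A sketch of the IO-backed case (names here are illustrative, not 
from Phobos): an input range whose real state is an OS file handle, 
so copying the struct cannot give an independent iteration, and it 
must not claim a `save` method.

```d
import std.stdio : File;

// Sketch of an input range over external IO: the state is a File
// handle (reference semantics), so two "copies" would drain the
// same underlying stream -- hence no .save is provided.
struct ByByte
{
    private File source;
    private ubyte[1] buf;
    private bool done;

    this(File source)
    {
        this.source = source;
        popFront();  // prime the first element
    }

    @property bool empty() const { return done; }
    @property ubyte front() const { return buf[0]; }
    void popFront()
    {
        done = source.rawRead(buf[]).length == 0;
    }
}

void main()
{
    import std.file : remove, tempDir, write;
    import std.path : buildPath;

    auto path = buildPath(tempDir, "bybyte_demo.bin");
    write(path, "abc");
    scope (exit) remove(path);

    ubyte[] seen;
    foreach (b; ByByte(File(path, "rb")))
        seen ~= b;
    assert(seen == cast(const(ubyte)[]) "abc");
}
```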


Re: Question about: ("1.1").to!int;

2020-10-23 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Friday, 23 October 2020 at 14:16:50 UTC, user1234 wrote:
The third case is just like `cast(int) 1.1`; it's not _at 
programmer request_ from my point of view


If the programmer explicitly writes a `to!int` or the 
`cast(int)`, then it's pretty clearly at their request.  And it's 
unambiguous what they are asking for.


But if the input to the conversion is a string, it's important 
that the conversion fail unless the string is an unambiguous 
representation of the intended destination type.  Otherwise, 
there is more than one data conversion going on, and one of them 
is being hidden from the programmer.


Re: Question about: ("1.1").to!int;

2020-10-23 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Wednesday, 21 October 2020 at 22:50:27 UTC, matheus wrote:
Since (1.1).to!int = 1, shouldn't the string value 
("1.1").to!int at least try to convert to float/double and then 
to int?


The thing is, that's a great way for hard-to-identify bugs to 
creep into code.  In these cases:


auto a = (1).to!int; // this works
auto b = ("1").to!int;   // this works
auto c = (1.1).to!int;   // this works and c = 1

... then what the programmer wants is unambiguous.  In the first 
case it's just converting int => int.  In the second, it's 
converting from a string that unambiguously represents an integer 
value, to an int.  And in the third, it's converting _at 
programmer request_ from a double to an int (which has a 
well-defined behaviour).


However, if ("1.1").to!int were to work, this would be the `to` 
function making a judgement call on how to handle something 
ambiguous.  And while that judgement call may be acceptable for 
your current use-case, it won't be for others.


In particular, if `to` just accepted any string numerical 
representation for conversion to int, how could the caller 
explicitly _exclude_ non-integer input, if that is their use-case?


So it's far better to require you, as the programmer, to make 
what you want unambiguous and explicitly write code that will (i) 
deserialize any numerical string that is acceptable to you and 
(ii) convert to integer.
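
That two-step approach can be sketched like this, using the same 
cases discussed above:

```d
import std.conv : ConvException, to;
import std.exception : assertThrown;

void main()
{
    // Unambiguous conversions succeed:
    assert("1".to!int == 1);
    assert((1.1).to!int == 1);  // explicit double -> int truncation

    // A string that does not unambiguously represent an int fails:
    assertThrown!ConvException("1.1".to!int);

    // If truncating numeric strings really is what you want, say so
    // explicitly: deserialize to double, then convert to int.
    assert("1.1".to!double.to!int == 1);
}
```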


Re: UDA inheritance

2020-09-10 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Thursday, 10 September 2020 at 13:14:47 UTC, drug wrote:
Just a thought - couldn't you use classes for this? Get an UDA 
and check if it is a descendant of the specific class.


Yes, I did wonder about that, but it doesn't allow all the 
inference that I'm looking for.  For example:


class First
{
    enum A;
    enum B;
}

class Second : First
{
}


@(Second.A)
struct MySecond
{
}


import std.traits : hasUDA;

static assert(hasUDA!(MySecond, Second.A));  // OK!
static assert(hasUDA!(MySecond, First.A));   // OK!
static assert(hasUDA!(MySecond, Second));    // fails
static assert(hasUDA!(MySecond, First));     // fails

It's obvious _why_, of course, given how I set the above up.  And 
this contrasts with what one can do with enums:


enum Something { A, B }

@(Something.A)
struct MySomething { }

import std.traits : hasUDA;

static assert(hasUDA!(MySomething, Something.A));  // OK!
static assert(hasUDA!(MySomething, Something));    // OK!

... where I can check for the enum specialization _or_ the 
general enum type and have things just work.


I'm looking, ideally, to be able to do both.  I did try something 
like:


enum First { A, B }

enum Second : First { A = First.A, B = First.B }

... but that doesn't work.  Hence why I thought it might be worth 
making a forum post.


UDA inheritance

2020-09-10 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

Hello folks,

Is there any way to define UDAs such that they automatically 
inherit other UDA definitions?


For example, suppose I define:

enum BaseUDA { A, B }

Is there a way to define `AnotherUDA` such that if `hasUDA!(T, 
AnotherUDA)` then it is a given that `hasUDA!(T, BaseUDA)` will 
also be true?  (And similarly for the `A`, `B` specializations?)


The use-case here is to create a UDA that defines some general 
distinction of code properties, and to allow downstream code to 
define its own more specialized cases of that distinction.


Thanks in advance for any thoughts or advice!

Thanks and best wishes,

  -- Joe


Re: Fetching licensing info for all dependencies of a DUB project

2020-05-12 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Tuesday, 12 May 2020 at 12:59:14 UTC, Paul Backus wrote:
You should be able to get this information from the JSON output 
of `dub describe`.


Cool, thanks.  Much appreciated :-)

Has anyone created any tools to condense that into a licensing 
report?  No worries if not, just curious.


Fetching licensing info for all dependencies of a DUB project

2020-05-12 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

Hello folks,

Are there any tools that exist to help prepare a report of all 
the different software licenses used by dependencies of a DUB 
project?  (This should cover all pulled in dependencies, not just 
direct dependencies.)


Thanks and best wishes,

-- Joe


Detecting unneeded imports in CI

2020-02-06 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

Hello folks,

Are there any well-established CI patterns/tools for detecting 
unneeded imports in D code?  Ideally including detecting unused 
symbols from selective imports?


Thanks and best wishes,

 -- Joe


Re: Thin UTF8 string wrapper

2019-12-07 Thread Joseph Rushton Wakeling via Digitalmars-d-learn
On Saturday, 7 December 2019 at 15:57:14 UTC, Jonathan M Davis 
wrote:
There may have been some tweaks to std.encoding here and there, 
but for the most part, it's pretty ancient. Looking at the 
history, it's Seb who marked some of it as being a replacement 
for std.utf, which is just plain wrong.


Ouch!  I must say it was a surprise to read, precisely because 
std.encoding seemed weird and clunky.  Good to know that it's 
misleading.


Unfortunately that adds to the list I have of weirdly misleading 
docs that seem to have crept in over the last months/years :-(


std.utf.validate does need a replacement, but doing so gets 
pretty complicated. And looking at std.encoding.isValid, I'm 
not sure that what it does is any better than simply wrapping 
std.utf.validate and returning a bool based on whether an 
exception was thrown.


Unfortunately I'm dealing with a use case where exception 
throwing (and indeed, anything that generates garbage) is best 
avoided.  That's why I was looking for a function that returned 
a bool ;-)
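
For reference, the wrapping approach described above can be sketched 
as follows (the function name is just illustrative).  Note that the 
internal throw/catch still generates garbage on invalid input, which 
is exactly the concern here:

```d
import std.utf : UTFException, validate;

// Sketch: a bool-returning validity check built by wrapping
// std.utf.validate.  Invalid input still triggers an internal
// exception, so this is unsuitable for allocation-free code paths.
bool isValidUTF8(scope const(char)[] s)
{
    try
    {
        validate(s);
        return true;
    }
    catch (UTFException)
    {
        return false;
    }
}

void main()
{
    assert(isValidUTF8("héllo"));
    assert(!isValidUTF8([char(0xFF)]));  // invalid UTF-8 lead byte
}
```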


Depending on the string, it would actually be faster to use 
validate, because std.encoding.isValid iterates through the 
entire string regardless. The way it checks validity is also 
completely different from what std.utf does. Either way, some 
of the std.encoding internals do seem to be an alternate 
implementation of what std.utf has, but outside of std.encoding 
itself, std.utf is what Phobos uses for UTF-8, UTF-16, and 
UTF-32, not std.encoding.


Thanks -- good to know.

I did do a PR at one point to add isValidUTF to std.utf so that 
we could replace std.utf.validate, but Andrei didn't like the 
implementation, so it didn't get merged, and I haven't gotten 
around to figuring out how to implement it more cleanly.


Thanks for the attempt, at least!  While I get the reasons it was 
rejected, it feels a bit of a shame -- surely it's easier to do a 
more major under-the-hood rewrite with the public API (and tests) 
already in place ... :-\


Re: Thin UTF8 string wrapper

2019-12-07 Thread Joseph Rushton Wakeling via Digitalmars-d-learn
On Saturday, 7 December 2019 at 03:23:00 UTC, Jonathan M Davis 
wrote:

The module to look at here is std.utf, not std.encoding.


Hmmm, docs may need updating then -- several functions in 
`std.encoding` explicitly state they are replacements for 
`std.utf`.  Did you mean `std.uni`?


It is honestly a bit confusing which of these 3 modules to use, 
especially as they each offer different (and useful) tools.  For 
example, `std.utf.validate` is less useful than 
`std.encoding.isValid`, because it throws rather than returning a 
bool and giving the user the choice of behaviour.  `std.uni` 
doesn't seem to have any equivalent for either.


Thanks in any case for the as-ever characteristically detailed 
and useful advice :-)


Thin UTF8 string wrapper

2019-12-06 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

Hello folks,

I have a use-case that involves wanting to create a thin struct 
wrapper of underlying string data (the idea is to have a type 
that guarantees that the string has certain desirable properties).


The string is required to be valid UTF-8.  The question is what 
the most useful API is to expose from the wrapper: a sliceable 
random-access range?  A getter plus `alias this` to just treat it 
like a normal string from the reader's point of view?


One factor that I'm not sure how to address w.r.t. a full range 
API is how to handle iterating over elements: presumably they 
should be iterated over as `dchar`, but how to implement a 
`front` given that `std.encoding` gives no way to decode the 
initial element of the string that doesn't also pop it off the 
front?


I'm also slightly disturbed to see that `std.encoding.codePoints` 
requires `immutable(char)[]` input: surely it should operate on 
any range of `char`?


I'm inclining towards the "getter + `alias this`" approach, but I 
thought I'd throw the problem out here to see if anyone has any 
good experience and/or advice.


Thanks in advance for any thoughts!

All the best,

 -- Joe


Re: Why must a bidirectional range also be a forward range?

2019-09-20 Thread Joseph Rushton Wakeling via Digitalmars-d-learn
On Thursday, 19 September 2019 at 22:55:55 UTC, Jonathan M Davis 
wrote:
For better or worse, ranges were more or less set up as a 
linear hierarchy, and it's unlikely that use cases for 
bidirectional ranges which aren't forward ranges are common. I 
expect that it's a bit like infinite, bidirectional ranges. In 
theory, they could be a thing, but the use cases for them are 
uncommon enough that we don't really support them. Also, I 
expect that most range-based algorithms which operate on 
bidirectional ranges would require save anyway. A lot of 
algorithms do, to the point that basic input ranges can be 
incredibly frustrating to deal with.


[ ... ]


Thanks for the characteristically thorough description of both 
the design considerations and the history involved.


On reflection it occurs to me that the problem in my thinking may 
be the idea that `save` should result in a full deep copy.  If 
instead we go by how `save` is implemented for dynamic arrays, 
it's only ever a shallow copy: it's not possible to make valid 
assumptions of reproducible behaviour if the original copy is 
modified in any way.


If instead we assume that `save` is only suitable for temporary 
shallow-copies that are made under the hood of algorithms, then 
my problems go away.


Assuming we were redesigning the range API (which may happen if 
we do indeed end up doing a Phobos v2), then maybe we could 
make it so that bidirectional ranges don't have to be forward 
ranges, but honestly _any_ ranges which aren't forward ranges 
are a bit of a problem. We do need to support them on some 
level for exactly the kind of reasons that you're looking to 
avoid save with a bidirectional range, but the semantic 
differences between what makes sense for a basic input range 
and a forward range really aren't the same (in particular, it 
works far better for basic input ranges to be reference types, 
whereas it works better for forward ranges to be value types).


It occurs to me that the distinction we're missing here might be 
between "true" input ranges (i.e. which really come from IO of 
some kind), which indeed must be reference types, versus "pure" 
input ranges (which are deterministic, but which don't 
necessarily allow algorithms to rely on the ability to save and 
replay them).


As it stands, I don't think that we can change 
isBidirectionalRange, because it's likely that most code using 
it relies on its check for isForwardRange. So, I think that 
we're stuck for the moment, but it is food for thought in a 
possible range API redesign. I'll add it to my notes on the 
topic. Some aspects of what a range API redesign should look 
like are pretty clear at this point, whereas others are very 
much an 
open question.


Oh, I wasn't asking for any changes to the existing definition 
(at least not without much thought from everyone!).  I was just 
wanting to understand the reasons for the current situation.  But 
thanks for putting it on the list of things to consider.


I may have some follow-up to your other remarks but I think at 
least now I have a way forward with my code.  Thanks!


Why must a bidirectional range also be a forward range?

2019-09-19 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

Hello folks,

A question that occurred to me while implementing a new data 
structure recently, which I'm not sure I've ever seen a reason 
for.


Why must bidirectional ranges also be forward ranges (as opposed 
to just input ranges)?


It doesn't seem to me that the `save` property is inherently 
required to iterate backwards over a range -- just the `back` and 
`popBack` methods.


It makes sense that, for bidirectionality, the range needs to be 
deterministic, so that iterating backward gives the exact same 
elements as iterating forward, just in reverse order.  But it 
seems strange to require the `save` property in order to 
automatically assume deterministic behaviour.


For context, the use-case I have is a data structure which stores 
an internal buffer as an array.  A robust `save` method would 
therefore have to duplicate the array (or at least the active 
subset of its contents).  This means a fresh heap allocation per 
`save`, which has some nasty implications for phobos algorithms 
that eagerly `.save` when they can.


So, I'd rather not implement `save` in this case.  But there is 
nothing that blocks implementing `back` and `popBack`; yet I 
can't use these with any of the functionality that requires 
bidirectionality, because the current `isBidirectionalRange` 
check requires `save`.


So what gives?  Are there some reasons for the `save` requirement 
on bidirectional ranges that I'm missing?  And regardless, any 
advice on how to handle my particular use-case?


Thanks & best wishes,

  -- Joe


Re: Using output-range overloads of SysTime.toISO{Ext}String with formatting code

2019-07-09 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Tuesday, 9 July 2019 at 08:51:59 UTC, Mitacha wrote:
I've managed to make it work using 'alias this' and wrapper 
struct.

https://run.dlang.io/is/3SMEFZ
It's not an elegant solution, there could be a better way to do 
this.


Yea, a wrapper struct with custom `toString` seems the most 
obvious way forward.  No need to bother with `alias this`, 
though, since one only needs the wrapper struct at the point 
where one wants to format the datetime info.  A wrapper struct 
like this:


import std.datetime.systime : SysTime;
import std.range.primitives : isOutputRange;

struct ISOExtSysTime
{
    private SysTime systime;

    public void toString (Output) (ref Output output)
        if (isOutputRange!(Output, char))
    {
        this.systime.toISOExtString(output);
    }
}

allows easy usage along the lines of:

writefln!"%s"(ISOExtSysTime(sys_time));

... and one could easily write a small factory function to 
shorten the code if needed.


Re: Using output-range overloads of SysTime.toISO{Ext}String with formatting code

2019-07-09 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Monday, 8 July 2019 at 12:53:18 UTC, Digital Mars wrote:
I guess that there is no way to have `writeln` automatically 
use the output range overload instead of allocating one. You 
need somehow to provide the output range to `toISOExtString` 
explicitly, because `writeln` outputs the return of 
`toISOExtString` and has no ability to use a specific overload. 
That is, the compiler calls `toISOExtString` and then passes 
its return to `writeln`. Probably a library solution isn't 
possible in this case. The workaround is using your own wrapper 
to provide an output range to `toISOExtString`.


This is pretty much what I'd concluded myself, but I wanted to 
check to make sure there wasn't some clever option I didn't know 
about.  A helper function with wrapper struct seems the obvious 
way forward.


Re: Using output-range overloads of SysTime.toISO{Ext}String with formatting code

2019-07-08 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Sunday, 7 July 2019 at 20:12:30 UTC, drug wrote:

07.07.2019 17:49, Joseph Rushton Wakeling wrote:
it's possible to do something like 
`writefln!"%s"(now.toISOExtString)` and have it automatically 
use the output range overload rather than allocating a new 
string instance.


This is exactly how it is intended to work: 
https://run.dlang.io/is/ATjAkx


Thanks for taking the time to answer, but I don't think this 
really addresses my question.


Your example shows a struct with `toString` overloads.  However, 
SysTime.toISOExtString does not work like this: it is a method 
with two explicit overloads, one of which just returns a newly 
allocated `string`, the other of which returns nothing but 
accepts an output range as input:

https://dlang.org/phobos/std_datetime_systime.html#.SysTime.toISOExtString

I want to know if there's an easy way to work with that in 
`format` and `writefln` statements.


Note that while SysTime does also have `toString` methods, these 
give no control over the kind of datetime string that results:

https://dlang.org/phobos/std_datetime_systime.html#.SysTime.toString

Since I explicitly need the extended ISO format, I need to use 
`toISOExtString` directly.


Using output-range overloads of SysTime.toISO{Ext}String with formatting code

2019-07-07 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

Hello folks,

Is there an idiomatic/intended way to use the output-range taking 
overloads of SysTime.toISOString and toISOExtString with stuff 
like `writeln` and `format`, as opposed to explicitly generating 
an output range to stdout or a string, and passing that to these 
methods?


I'm a bit unfamiliar with exactly all the ins and outs of the 
more recent output-range-based formatting design, but what I'm 
interested in whether it's possible to do something like 
`writefln!"%s"(now.toISOExtString)` and have it automatically use 
the output range overload rather than allocating a new string 
instance.


If that exact syntax doesn't work, is there anything almost as 
convenient?


Thanks & best wishes,

 -- Joe


Re: First time user of LDC and getting newbie problems.

2017-06-13 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Tuesday, 13 June 2017 at 20:04:46 UTC, WhatMeWorry wrote:
Sorry I didn't reply sooner. I just reinstalled everything and 
it's all good.  Something was really screwed up.


"Screwed up" is also a fairly good way to describe my responses 
too, since I also missed your clear statement that you were using 
Xubuntu.  I think I was having One Of Those Days ... :-)


The thought in my mind was that maybe you'd somehow installed a 
32-bit LDC (which would probably still run all right on your 
64-bit Xubuntu), and this could be the reason for the x86_64 
build issues.  But I honestly couldn't say just on the basis of 
what's in your post.


Anyway, glad to hear you got things sorted.


Re: First time user of LDC and getting newbie problems.

2017-06-13 Thread Joseph Rushton Wakeling via Digitalmars-d-learn
On Tuesday, 13 June 2017 at 12:38:03 UTC, Joseph Rushton Wakeling 
wrote:

On Sunday, 11 June 2017 at 21:58:27 UTC, WhatMeForget wrote:

Just trying to compile a "Hello World" using dub and ldc2.


I presume from your command line you're running Windows?


... I don't know where I got that idea from.  Must be having a 
distracted day.


What OS/distro are you running, in any case?




Re: First time user of LDC and getting newbie problems.

2017-06-13 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Sunday, 11 June 2017 at 21:58:27 UTC, WhatMeForget wrote:

Just trying to compile a "Hello World" using dub and ldc2.


Let's start from the beginning: how did you install LDC?  I 
presume from your command line you're running Windows?


Re: -fPIC and 32-bit dmd.conf settings

2017-04-09 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Sunday, 9 April 2017 at 07:12:51 UTC, Patrick Schluter wrote:
32-bit mode support for PIC is painful because the RIP-relative 
addressing mode does not exist. AMD introduced it when 
they added 64-bit support in the Opteron processors. In 32-bit 
mode it was possible to generate PIC code but it required some 
ugly hacks (CALL 0) which broke some hardware optimization (the 
return address target cache) and added a layer of indirection 
for shared libraries (GOT), which had the consequence of 
breaking the sharing, i.e. .so and .dll weren't really shared 
in 32-bit mode.
Here is an in-depth article on the subject that I haven't yet 
read but which seems interesting: 
https://www.technovelty.org/c/position-independent-code-and-x86-64-libraries.html

Of course, this has nothing to do with D per se.


Thanks for the explanation.  TBH I find myself wondering whether 
`-fPIC` should be in the flags defined in dmd.conf _at all_ (even 
for 64-bit environments); surely that should be on request of 
individual project builds ... ?


-fPIC and 32-bit dmd.conf settings

2017-04-08 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

Hello folks,

The default dmd.conf settings for 64-bit environments include the 
-fPIC flag (for good reason), but the settings for 32-bit 
environments do not.  Any particular reason for this?


Thanks & best wishes,

-- Joe


Re: How to enforce compile time evaluation (and test if it was done at compile time)

2017-02-28 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Tuesday, 28 February 2017 at 00:22:28 UTC, sarn wrote:
If you ever have doubts, you can always use something like 
this to check:


assert (__ctfe);


Sorry, "enforce" would be more appropriate if you're really 
checking.


if (!__ctfe) assert(false);

... might be the best option.  That shouldn't be compiled out 
even in -release builds.
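
A small sketch of how that guard behaves (the function name is just 
illustrative): `assert(false)` is special-cased by the compiler and 
is never elided, so an accidental runtime call always halts, while 
assigning the result to an `enum` forces CTFE.

```d
// Illustrative sketch: a function meant to run only during CTFE.
int mustBeCompileTime()
{
    // assert(false) is kept even with -release, so a runtime
    // call to this function always halts.
    if (!__ctfe)
        assert(false, "must be evaluated at compile time");
    return 42;
}

// `enum` (like static assert or a static initializer) forces CTFE:
enum answer = mustBeCompileTime();
static assert(answer == 42);

void main()
{
    // auto bad = mustBeCompileTime();  // would halt at runtime
}
```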


Re: CTFE difference between dmd and ldc2

2017-01-08 Thread Joseph Rushton Wakeling via Digitalmars-d-learn
On Saturday, 7 January 2017 at 22:55:55 UTC, Joseph Rushton 
Wakeling wrote:
I should probably also create a formal issue for this.  Any 
thoughts on how best to break it down into a minimal example?  
It does not appear easy to do so at first glance :-\


Turned out to be easier than I anticipated.  It was not a CTFE 
problem but one of default initialization of struct fields:

https://issues.dlang.org/show_bug.cgi?id=17073

In short, `void` default initialization seems to take priority 
with dmd regardless of anything else.


Re: CTFE difference between dmd and ldc2

2017-01-07 Thread Joseph Rushton Wakeling via Digitalmars-d-learn
On Thursday, 29 December 2016 at 09:57:25 UTC, Joseph Rushton 
Wakeling wrote:
On Thursday, 29 December 2016 at 09:24:23 UTC, Joseph Rushton 
Wakeling wrote:
Sorry for delay in following up on this.  Yes, the same 
problem occurs with dmd 2.071 (as installed from the deb 
package downloaded from dlang.org).


Specifically, I tested with 2.071.2, which I understand is the 
exact same frontend version as LDC 1.1.0-beta6.


So, looks like the issue could be backend-related?


Just to re-raise the issue: it's a blocker for what would 
otherwise be quite a nice and useful PR for Phobos: 
https://github.com/dlang/phobos/pull/5011


Assuming a fix is not on the cards any time soon, if anyone could 
suggest an alternative way to achieve the desired result, I'd be 
very grateful.


I should probably also create a formal issue for this.  Any 
thoughts on how best to break it down into a minimal example?  It 
does not appear easy to do so at first glance :-\


Re: CTFE difference between dmd and ldc2

2016-12-29 Thread Joseph Rushton Wakeling via Digitalmars-d-learn
On Thursday, 29 December 2016 at 09:24:23 UTC, Joseph Rushton 
Wakeling wrote:
On Tuesday, 27 December 2016 at 22:34:50 UTC, Johan Engelen 
wrote:
Do you see the same with dmd 2.071? (that's the same front-end 
code as the LDC version tested)


Sorry for delay in following up on this.  Yes, the same problem 
occurs with dmd 2.071 (as installed from the deb package 
downloaded from dlang.org).


Specifically, I tested with 2.071.2, which I understand is the 
exact same frontend version as LDC 1.1.0-beta6.


So, looks like the issue could be backend-related?


Re: CTFE difference between dmd and ldc2

2016-12-29 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Tuesday, 27 December 2016 at 22:34:50 UTC, Johan Engelen wrote:

On Tuesday, 27 December 2016 at 17:56:07 UTC, Stefan Koch wrote:
I doubt that this is a CTFE bug since there should be little 
difference in the ctfe code between ldc and dmd.

That said, it is of course a possibility.


Do you see the same with dmd 2.071? (that's the same front-end 
code as the LDC version tested)


Sorry for delay in following up on this.  Yes, the same problem 
occurs with dmd 2.071 (as installed from the deb package 
downloaded from dlang.org).


Re: CTFE difference between dmd and ldc2

2016-12-27 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Tuesday, 27 December 2016 at 17:56:07 UTC, Stefan Koch wrote:
I doubt that this is a CTFE bug since there should be little 
difference in the ctfe code between ldc and dmd.

That said, it is of course a possibility.

I'll have a look.


Thanks!  It's very weird how, of the values in the `state` 
variable, one winds up being set correctly and the others are all 
zero.  That might suggest that the `state` variable _is_ being 
set correctly and then something else is happening that zeroes 
out most of the values ... ?


CTFE difference between dmd and ldc2

2016-12-27 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

Hello all,

I've recently been working on a port of the mir.random Mersenne 
Twister implementation to Phobos, and I've run into a bit of 
weirdness with CTFE.


Specifically, I use some CTFE functionality to generate a default 
starting state for the generator:

https://github.com/WebDrake/mersenne-twister-range/blob/dda0bb006ee4633ae4fa227f9bad1453ba0e9885/src/mersenne_twister_range.d#L126
https://github.com/WebDrake/mersenne-twister-range/blob/dda0bb006ee4633ae4fa227f9bad1453ba0e9885/src/mersenne_twister_range.d#L166

However, it turns out that while this works while compiling with 
the latest ldc2 beta (v1.1.0-beta6, which uses the dmd 2.071.2 
frontend), it doesn't when compiled with the latest dmd master or 
with dmd 2.072.1 (as installed from the deb package downloaded 
from dlang.org).


Specifically, if we look at the unittests where the generator is 
initialized with implicit construction, versus where the default 
seed is explicitly supplied:

https://github.com/WebDrake/mersenne-twister-range/blob/dda0bb006ee4633ae4fa227f9bad1453ba0e9885/src/mersenne_twister_range.d#L399-L407

... different results emerge when compiled with dmd; it turns out 
that in the implicit-construction case, only the `state.z` 
variable is set (the rest are all zeroes), whereas with the 
explicit seeding, all elements of the `state` variable are set 
correctly.


When ldc2 is used, this doesn't happen and in both cases the 
`state` variable is set correctly.


Can anyone advise what could be going wrong here?  This looks 
like a nasty CTFE bug to me :-(


Thanks & best wishes,

-- Joe


Re: std.net.curl and libcurl.so

2016-09-24 Thread Joseph Rushton Wakeling via Digitalmars-d-learn
On Saturday, 24 September 2016 at 19:42:11 UTC, Joseph Rushton 
Wakeling wrote:
On Saturday, 24 September 2016 at 19:27:31 UTC, Joseph Rushton 
Wakeling wrote:
Further to earlier remarks: I now think this may be a general 
problem of LDC 1.0.0 and not a problem of the snap package.


I tried building my simple curl-using program using an LDC 
1.0.0 built and installed from source in the standard cmake && 
make && make install fashion.  The same segfault occurs.


More on this: ldc 0.17.1 (the version packaged with Ubuntu 
16.04) is based on dmd 2.068.2, which still linked against 
libcurl.  It's only from v2.069.0+ that libcurl is loaded 
dynamically:

https://dlang.org/changelog/2.069.0.html#curl-dynamic-loading

ldc 1.0.0 is based on 2.070.2.


Downloaded a pre-built copy of ldc 1.0.0 from here (I used the 
Linux x86_64 tarball):

https://github.com/ldc-developers/ldc/releases/tag/v1.0.0

... and this runs my little curl-based program without any 
trouble.  Maybe something about how the build was carried out?


Re: std.net.curl and libcurl.so

2016-09-24 Thread Joseph Rushton Wakeling via Digitalmars-d-learn
On Saturday, 24 September 2016 at 19:27:31 UTC, Joseph Rushton 
Wakeling wrote:
Further to earlier remarks: I now think this may be a general 
problem of LDC 1.0.0 and not a problem of the snap package.


I tried building my simple curl-using program using an LDC 
1.0.0 built and installed from source in the standard cmake && 
make && make install fashion.  The same segfault occurs.


More on this: ldc 0.17.1 (the version packaged with Ubuntu 16.04) 
is based on dmd 2.068.2, which still linked against libcurl.  
It's only from v2.069.0+ that libcurl is loaded dynamically:

https://dlang.org/changelog/2.069.0.html#curl-dynamic-loading

ldc 1.0.0 is based on 2.070.2.


Re: std.net.curl and libcurl.so

2016-09-24 Thread Joseph Rushton Wakeling via Digitalmars-d-learn
On Saturday, 24 September 2016 at 19:11:52 UTC, Joseph Rushton 
Wakeling wrote:

On Friday, 23 September 2016 at 00:55:43 UTC, Stefan Koch wrote:

This suggests that libcurl is loaded.
could you compile with -g ?
and then post the output ?


Further to earlier remarks: I now think this may be a general 
problem of LDC 1.0.0 and not a problem of the snap package.


I tried building my simple curl-using program using an LDC 1.0.0 
built and installed from source in the standard cmake && make && 
make install fashion.  The same segfault occurs.


Re: std.net.curl and libcurl.so

2016-09-24 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Friday, 23 September 2016 at 00:55:43 UTC, Stefan Koch wrote:

This suggests that libcurl is loaded.
could you compile with -g ?
and then post the output ?


Thanks Stefan!  It was compiled with -g, but I was missing the 
libcurl3-dbg package.  Here's the results:


#0  0x0046b048 in gc.gc.Gcx.smallAlloc(ubyte, ref ulong, 
uint) ()
#1  0x0046918a in gc.gc.GC.malloc(ulong, uint, ulong*, 
const(TypeInfo)) ()

#2  0x004682bc in gc_qalloc ()
#3  0x0046293b in core.memory.GC.qalloc(ulong, uint, 
const(TypeInfo)) ()
#4  0x004534f2 in 
std.array.Appender!(char[]).Appender.ensureAddable(ulong) ()
#5  0x004530c5 in 
std.uni.toCase!(std.uni.toLowerIndex(dchar), 1043, 
std.uni.toLowerTab(ulong), 
char[]).toCase(char[]).__foreachbody2(ref ulong, ref dchar) ()

#6  0x0046dc96 in _aApplycd2 ()
#7  0x0044d997 in 
std.net.curl.HTTP.Impl.onReceiveHeader(void(const(char[]), 
const(char[])) delegate).__lambda2(const(char[])) ()
#8  0x00451d78 in 
std.net.curl.Curl._receiveHeaderCallback(const(char*), ulong, 
ulong, void*) ()
#9  0x76f75a1d in Curl_client_chop_write 
(conn=conn@entry=0x741fd0, type=type@entry=2,
ptr=0x72d3a0 "Server: Apache/2.4.17 (FreeBSD) OpenSSL/1.0.2d 
PHP/5.6.16\r\n", len=59) at sendf.c:454
#10 0x76f75b9d in Curl_client_write 
(conn=conn@entry=0x741fd0, type=type@entry=2,

ptr=<optimized out>, len=<optimized out>) at sendf.c:511
#11 0x76f73f90 in Curl_http_readwrite_headers 
(data=data@entry=0x72fb60,

conn=conn@entry=0x741fd0, nread=nread@entry=0x7fffd370,
stop_reading=stop_reading@entry=0x7fffd36f) at http.c:3739
#12 0x76f8be30 in readwrite_data (done=0x7fffd40b, 
didwhat=<optimized out>, k=0x72fbd8,

conn=0x741fd0, data=0x72fb60) at transfer.c:492
#13 Curl_readwrite (conn=0x741fd0, data=data@entry=0x72fb60, 
done=done@entry=0x7fffd40b)

at transfer.c:1074
#14 0x76f95f42 in multi_runsingle 
(multi=multi@entry=0x7388b0, now=...,

data=data@entry=0x72fb60) at multi.c:1544
#15 0x76f96ddd in curl_multi_perform 
(multi_handle=multi_handle@entry=0x7388b0,
running_handles=running_handles@entry=0x7fffd5a4) at 
multi.c:1821
#16 0x76f8d84b in easy_transfer (multi=0x7388b0) at 
easy.c:724

#17 easy_perform (events=false, data=0x72fb60) at easy.c:812
#18 curl_easy_perform (easy=0x72fb60) at easy.c:831
#19 0x0044f971 in 
std.net.curl.HTTP.perform(std.typecons.Flag!("throwOnError").Flag) ()
#20 0x0040db01 in 
std.net.curl._basicHTTP!(char)._basicHTTP(const(char)[], 
const(void)[], std.net.curl.HTTP) ()
#21 0x004031ff in std.net.curl.get!(std.net.curl.HTTP, 
char).get(const(char)[], std.net.curl.HTTP) ()
#22 0x00403020 in 
std.net.curl.get!(std.net.curl.AutoProtocol, 
char).get(const(char)[], std.net.curl.AutoProtocol) ()

#23 0x00402f98 in D main ()



Also the call seems to fail during allocation.


Yup.  And that allocation seems to occur during use of an 
Appender while converting unicode text to lowercase ... ?


std.net.curl and libcurl.so

2016-09-22 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

Hello all,

As some have you may have followed, I've been working on 
snap-packaging LDC.  However, I've run into an issue when it 
comes to programs that use std.net.curl.


Here's a simple example:


void main ()
{
    import std.net.curl : get;
    auto website = "http://dlang.org/".get;
}


When I compile with my snap-packaged LDC, the program builds, but 
segfaults when I run it.  Here's the gdb backtrace:


Thread 1 "curlget" received signal SIGSEGV, Segmentation fault.
0x0046b048 in gc.gc.Gcx.smallAlloc(ubyte, ref ulong, 
uint) ()

(gdb) bt
#0  0x0046b048 in gc.gc.Gcx.smallAlloc(ubyte, ref ulong, 
uint) ()
#1  0x0046918a in gc.gc.GC.malloc(ulong, uint, ulong*, 
const(TypeInfo)) ()

#2  0x004682bc in gc_qalloc ()
#3  0x0046293b in core.memory.GC.qalloc(ulong, uint, 
const(TypeInfo)) ()
#4  0x004534f2 in 
std.array.Appender!(char[]).Appender.ensureAddable(ulong) ()
#5  0x004530c5 in 
std.uni.toCase!(std.uni.toLowerIndex(dchar), 1043, 
std.uni.toLowerTab(ulong), 
char[]).toCase(char[]).__foreachbody2(ref ulong, ref dchar) ()

#6  0x0046dc96 in _aApplycd2 ()
#7  0x0044d997 in 
std.net.curl.HTTP.Impl.onReceiveHeader(void(const(char[]), 
const(char[])) delegate).__lambda2(const(char[])) ()
#8  0x00451d78 in 
std.net.curl.Curl._receiveHeaderCallback(const(char*), ulong, 
ulong, void*) ()
#9  0x76f7695d in ?? () from 
/usr/lib/x86_64-linux-gnu/libcurl.so
#10 0x76f74ed0 in ?? () from 
/usr/lib/x86_64-linux-gnu/libcurl.so
#11 0x76f8cce0 in ?? () from 
/usr/lib/x86_64-linux-gnu/libcurl.so
#12 0x76f96b22 in ?? () from 
/usr/lib/x86_64-linux-gnu/libcurl.so
#13 0x76f97986 in curl_multi_perform () from 
/usr/lib/x86_64-linux-gnu/libcurl.so
#14 0x76f8e65b in curl_easy_perform () from 
/usr/lib/x86_64-linux-gnu/libcurl.so
#15 0x0044f971 in 
std.net.curl.HTTP.perform(std.typecons.Flag!("throwOnError").Flag) ()
#16 0x0040db01 in 
std.net.curl._basicHTTP!(char)._basicHTTP(const(char)[], 
const(void)[], std.net.curl.HTTP) ()
#17 0x004031ff in std.net.curl.get!(std.net.curl.HTTP, 
char).get(const(char)[], std.net.curl.HTTP) ()
#18 0x00403020 in 
std.net.curl.get!(std.net.curl.AutoProtocol, 
char).get(const(char)[], std.net.curl.AutoProtocol) ()

#19 0x00402f98 in D main ()

Since I'm not super-familiar with libcurl, it's not obvious to me 
what's going on here.  If I understand right, libcurl is lazily 
loaded at runtime rather than explicitly linked.  
libcurl4-gnutls-dev is installed on my machine, so it can't just 
be lack of availability.  (I've also tried uninstalling 
libcurl4-gnutls-dev and installing libcurl4-openssl-dev instead, 
with no difference in the result.)


The segfault would suggest to me that either the loading of the 
library fails or that there's some resource phobos expects to 
find which it can't access.  Can anyone advise what could be 
going on here?


In particular, is there anything that happens at compile-time 
which could affect the ability of the resulting executable to 
locate libcurl.so or any other resources it needs?  My suspicion 
is that the snap-package containerization of LDC could mean that 
expectations about system-path locations are interfered with.


Can anyone advise?

Thanks & best wishes,

-- Joe


Re: Specifying content-type for a POST request using std.net.curl

2016-08-09 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Tuesday, 9 August 2016 at 14:30:21 UTC, Seb wrote:

There is also

https://github.com/ikod/dlang-requests

Which I find in general more intuitive to use ;-)


Interesting, I'd not come across that before.  Thanks -- I'll 
give it a glance some time ...


Re: Specifying content-type for a POST request using std.net.curl

2016-08-09 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Tuesday, 9 August 2016 at 14:21:09 UTC, ketmar wrote:

http://dpldocs.info/experimental-docs/std.net.curl.HTTP.setPostData.html

https://dlang.org/phobos/std_net_curl.html#.HTTP.setPostData

reading documentation rox!


Yea, mea culpa.  I had actually glanced at that but was asking on 
the assumption that I might have misunderstood something about 
the simpler functions -- it was a bit odd that (for example) a 
`post` call whose input data was `void[]` or `ubyte[]` would 
still be treated as text content-type.


In any case, I now have a working solution using 
HTTP.setPostData, so all's well ;-)  Thanks!
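
For anyone landing here later, the working solution looks roughly like this (URL and payload are placeholders; the key point is that `setPostData` takes the raw body together with an explicit content-type):

```d
import std.net.curl;

void main()
{
    auto http = HTTP("http://example.com/endpoint");
    http.method = HTTP.Method.post;

    // Binary body with an explicit content-type, instead of the
    // text/plain that the plain post() convenience function assumes
    ubyte[] payload = [0xde, 0xad, 0xbe, 0xef];
    http.setPostData(cast(void[]) payload, "application/octet-stream");

    // Collect (and here discard) the response body
    http.onReceive = (ubyte[] data) { return data.length; };
    http.perform();
}
```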


Specifying content-type for a POST request using std.net.curl

2016-08-09 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

Hello all,

I'm currently writing a little client app whose job is to make a 
POST request to a vibe.d webserver and output the response.  
However, vibe.d is picky about the content-type of the request 
body, and so far as I can see there is no way to specify this via 
the `std.net.curl.post` API.


So far as I can see from the vibe.d log output, `post` is using 
the `text/plain` content type, to which vibe.d objects (rightly) 
because the content is not UTF8.  It's a little bizarre that 
`text/plain` is chosen, because the input data is `void[]` or 
`ubyte[]` data.


Can anyone advise (if it's possible at all) how to specify the 
content type for the post request body using std.net.curl (or an 
alternative)?


Thanks & best wishes,

-- Joe


Re: Request assistance converting C's #ifndef to D

2016-05-16 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Thursday, 12 May 2016 at 22:51:17 UTC, Andrew Edwards wrote:
The following preprocessor directives are frequently 
encountered in C code, providing a default constant value where 
the user of the code has not specified one:


#ifndef MIN
#define MIN 99
#endif

#ifndef MAX
#define MAX 999
#endif


One option here is that the programmer is trying to avoid 
multiple definitions of MIN and MAX if for some reason this 
header is included together with another header that also defines 
a MIN and MAX.


So, you might start by checking if any other header/source file 
does so.  It's entirely possible the programmer is just going 
overkill with the kind of stuff one does to guard against 
multiple #include's of the same header, and that this header is 
the _only_ place where MIN and MAX are defined, and that it's 
ALWAYS valid for them to be 99 and 999 respectively.


The other thought is that the programmer might have in mind to be 
able to choose alternative MIN and MAX at compile time via 
environment variables (perhaps the project's build scripts make 
use of this?).  If you think so, is that something you want to 
support?  There are probably better ways of achieving the same 
result.


I suspect it'll probably turn out to be fine to just use

enum MIN = 99;
enum MAX = 999;

... but H. S. Teoh's suggestion looks sane as a more cautious 
alternative.
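
If compile-time override really is wanted, one hedged way to keep the "only define if not already defined" behaviour in D is a version block (the `CustomLimits` identifier and `config` module here are illustrative, not part of the original code):

```d
// Build with -version=CustomLimits and supply MIN/MAX in a
// user-provided module; otherwise fall back to the defaults.
version (CustomLimits)
{
    import config : MIN, MAX;
}
else
{
    enum MIN = 99;
    enum MAX = 999;
}
```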


Re: `return ref`, DIP25, and struct/class lifetimes

2016-05-16 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Monday, 16 May 2016 at 15:33:09 UTC, Dicebot wrote:
tl; dr: DIP25 is so heavily under-implemented in its current 
shape it can be considered 100% broken and experimenting will 
uncover even more glaring holes.


Well, it's always fun to find the holes in things ... :-)

To be more precise, judging by experimental observation, 
currently dip25 only works when there is explicitly a `ref` 
return value in function or method. Any escaping of reference 
into a pointer confuses it completely:


To be fair, this is all in line with the DIP25 spec that I 
re-read after running into these issues with my wrapper struct.  
AFAICS pretty much the only case where it really relates to 
structs is when a struct method is returning a reference to an 
internal variable.


It's just frustrating there _isn't_ any thought for the kind of 
wrapper I have in mind, because as you say,


But there isn't any way to write such wrapper struct without 
using pointers AFAIK.


As for your point:

In your actual example putting `return` on `get` method 
annotation is additionally very misleading because it only 
implies ensuring result does not outlive struct instance itself 
- but it gets passed around by value anyway.


I thought much the same, but thought I'd try it on the off chance 
it would make a difference to detection of the problem.


Worst part of all this is that even an invariant to 
assert(this.data !is null) won't protect against issues: the 
pointer doesn't get reset to 0 after the data it points to goes 
out of scope, it just now points to potentially garbage data.


In fact, it's only with compiler optimizations enabled that the 
example I posted even generates the wrong result in its 
`writeln()` call :-P


Basically, it sounds to me like there _is_ no way to guarantee 
the safety/validity of wrapping data via pointer in this way ... 
? :-(


`return ref`, DIP25, and struct/class lifetimes

2016-05-16 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

Hello all,

Consider a struct that wraps a pointer to some other piece of 
data as follows:


struct MyWrapper(T)
{
    private T* data;

    public this(ref T input)
    {
        this.data = &input;
    }

    ... other methods that use `this.data` ...
}

Is there any way to guarantee at compile time that the input data 
will outlive the wrapper struct?


I'd assumed (dangerous thing to do...) that DIP25 would allow 
this to be guaranteed by `return ref`, but compiling/running the 
following program, with or without the --dip25 flag, would appear 
to suggest otherwise:




struct MyWrapper(T)
{
    private T* data;

    public this(return ref T input)
    {
        this.data = &input;
    }

    public T get() return
    {
        return *(this.data);
    }

    invariant()
    {
        assert(this.data !is null);
    }
}

auto badWrapper()
{
    double x = 5.0;
    return MyWrapper!double(x);
}

void main()
{
    import std.stdio;
    auto badWrap = badWrapper();
    writeln(badWrap.get());
}



Is there any current way to achieve what I'm looking for here, or 
is this all on a hiding to nothing? :-(


N.B. for motivation behind this request, see:
https://github.com/WebDrake/dxorshift/pull/1


Re: Why is Linux the only OS in version identifier list that has a lowercase name?

2016-04-12 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Tuesday, 12 April 2016 at 18:23:25 UTC, Jonathan M Davis wrote:
Well, work has been done to make it so that different runtimes 
will work - e.g. there's a CRuntime_Glibc and a CRuntime_Bionic.


That's pretty cool.  Was that a result of the recent Android 
porting work, or was it a longer-standing division?


 So, druntime is a lot better off than just 
linux vs FreeBSD vs whatever. But it wouldn't surprise me in 
the least if someone using something like Debian GNU/kFreeBSD 
would find some problems with what we currently have.


Ah, fun stuff :-)

BTW, glad to see you will be making it to DConf -- looking 
forward to catching up with you!


Re: Why is Linux the only OS in version identifier list that has a lowercase name?

2016-04-12 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Tuesday, 12 April 2016 at 01:32:02 UTC, Brian Schott wrote:

On Monday, 11 April 2016 at 23:01:08 UTC, marcpmichel wrote:

Is it because Linux is not an OS ? :p


I gnu somebody would bring that up.


There's actually a serious point here, though -- as D is ported 
to other platforms and architectures, it's going to wind up being 
used to target environments that use a Linux kernel but are not 
GNU.  Conversely, there may also be systems that are GNU but not 
Linux (e.g. the recent proposal for an Ubuntu flavour based on 
the FreeBSD kernel).


Are druntime and phobos ready to deal with those kinds of 
eventuality?


Re: Extensible struct type via variadic template arguments

2016-04-11 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Monday, 11 April 2016 at 22:06:00 UTC, ag0aep6g wrote:


mixin template multipleEdgeProperties (EdgePropertyList...)
{
    static if (EdgePropertyList.length >= 1)
    {
        mixin singleEdgeProperty!(EdgePropertyList[0]);
        mixin multipleEdgeProperties!(EdgePropertyList[1 .. $]);
    }
}

mixin template singleEdgeProperty (alias property)
{
    mixin(`public property.Type[] ` ~ property.name ~ `;`);
}


Ah, nice!  I knew I was missing a trick somewhere with mixin 
templates, but I haven't used them for sufficiently long that my 
brain wasn't in the right place to find it.  Your solution is 
_much_ more elegant than what I have now.


The string mixin is hidden away in singleEdgeProperty, but it's 
still there.


To get rid of it, you'd have take a different approach, I think.


Yea, makes sense.  I'm probably going to hang onto the broader 
details of the approach (a string mixin or two isn't _that_ 
horrendous...) because there are some other aspects to it that I 
would like to play with.  But your suggested alternative is 
interesting to consider.


Thanks very much for the useful ideas! :-)


Extensible struct type via variadic template arguments

2016-04-10 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

Hello all,

I've been playing around recently with some new ideas for a 
reworking of my Dgraph library:

https://github.com/WebDrake/Dgraph

... and particularly, considering how to use D's metaprogramming 
techniques to best allow the fundamental graph data structures to 
be extended arbitrarily.


One very basic idea is to allow graph edges to have arbitrary 
properties, and to do that, I've tried to come up with a basic 
data structure that can be extended via variadic template 
arguments.  However, I'm not entirely satisfied with the design, 
as I suspect it can be simplified/made nicer.  Here's what I have 
at present:



//

/**
 * Basic data structure containing arrays all of the same
 * length, such that fieldname[i] is the corresponding
 * property of edge i
 */
struct ExtensibleEdgeList (EdgePropertyList...)
{
    public size_t[] tail;
    public size_t[] head;

    mixin(multipleEdgeProperties!(EdgePropertyList));
}

/**
 * Custom edge properties can be specified in terms of
 * the type to be used and the name to be used
 */
struct EdgeProperty
{
    string type;
    string name;
}

/**
 * Template that evaluates to the string of code
 * describing the custom edge property fields
 */
template multipleEdgeProperties (EdgePropertyList...)
{
    static if (EdgePropertyList.length == 0)
    {
        const multipleEdgeProperties = ``;
    }
    else static if (EdgePropertyList.length == 1)
    {
        const multipleEdgeProperties = singleEdgeProperty!(EdgePropertyList[0]);
    }
    else
    {
        // note: "\n" (double-quoted) rather than `\n` (wysiwyg), so the
        // separator is an actual newline and not a literal backslash-n
        const multipleEdgeProperties = singleEdgeProperty!(EdgePropertyList[0])
            ~ "\n" ~ multipleEdgeProperties!(EdgePropertyList[1 .. $]);
    }
}

/**
 * Template that evaluates to the string of code
 * for a single edge property field
 */
template singleEdgeProperty (EdgeProperty property)
{
    const singleEdgeProperty = `public ` ~ property.type ~ `[] `
        ~ property.name ~ `;`;
}

//


There are two things I'm wondering about the above design:

  * is it possible to redesign EdgeProperty as a template that 
would take
a type parameter directly rather than as a string, so one 
could write

e.g. EdgeProperty!("weight", double) ... ?

  * is it possible to rework things to avoid the string mixins 
...?
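
A rough sketch of one possible answer to the first question (illustrative and untested): make EdgeProperty itself a template carrying a real type, and have the field generator recover the type name via `.stringof`.

```d
// EdgeProperty as a template: the name is a value parameter,
// the type is a genuine type parameter.
struct EdgeProperty (string name_, T)
{
    enum name = name_;
    alias Type = T;
}

// The field-generating template then reads both members:
template singleEdgeProperty (alias property)
{
    const singleEdgeProperty =
        `public ` ~ property.Type.stringof ~ `[] ` ~ property.name ~ `;`;
}

// Usage would look like:
//     ExtensibleEdgeList!(EdgeProperty!("weight", double))
```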


My metaprogramming-fu is a bit rusty these days (one reason for 
pursuing this project is to try and start exercising it 
again...), so I'd be very grateful for any thoughts or advice 
anyone could offer (on the above questions or the design in 
general).


Thanks & best wishes,

 -- Joe


Re: dstep problem: "fatal error: 'limits.h' file not found"

2015-11-26 Thread Joseph Rushton Wakeling via Digitalmars-d-learn
On Thursday, 26 November 2015 at 07:28:37 UTC, Jacob Carlborg 
wrote:
Hmm, I was pretty sure I fixed this, but perhaps not for that 
file. Please report an issue. In the meantime there's a 
workaround in the documentation [1], second paragraph, perhaps 
not very clear though.


[1] https://github.com/jacob-carlborg/dstep#libclang


OK, I'll do that this evening once I've had an opportunity to 
check the workaround etc.  Thanks!


Re: Creating D bindings for a C library

2015-11-25 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Tuesday, 24 November 2015 at 23:49:26 UTC, cym13 wrote:
There are some binding generator the most two famous being htod 
and dstep: 
http://wiki.dlang.org/List_of_Bindings#Binding_generators


Is htod maintained any more?  I had the impression it had kind of 
fallen by the wayside.


I'll give dstep a go, though. Thanks!


Re: Creating D bindings for a C library

2015-11-25 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Wednesday, 25 November 2015 at 17:45:48 UTC, ponce wrote:
If doing it by hand, some tips here: 
http://p0nce.github.io/d-idioms/#Porting-from-C-gotchas


Cool, thanks.  The stuff about using c_long and c_ulong is 
particularly useful/relevant to my use-case, so it's good to be 
reminded of that.


dstep problem: "fatal error: 'limits.h' file not found"

2015-11-25 Thread Joseph Rushton Wakeling via Digitalmars-d-learn
I've just built and installed dstep (on Ubuntu 15.10, using 
libclang-3.7) but whenever I try to run it on a header file, I 
run into the error message:


File(8AC8E0, "")/usr/include/limits.h:123:16: fatal error: 
'limits.h' file not found


I suspect this is a libclang problem, but does anyone have any 
advice how to address it?


It appears to be occurring in these lines in 
/usr/include/limits.h:


#if defined __GNUC__ && !defined _GCC_LIMITS_H_
/* `_GCC_LIMITS_H_' is what GCC's file defines.  */
# include_next <limits.h>
#endif



Creating D bindings for a C library

2015-11-24 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

Hello all,

I'm considering creating some D bindings for a C library.  Before 
I start, I was wondering if anyone could advise me on the current 
state-of-the-art in automatically converting C headers to .d or 
.di files; it's a long time since I've looked at anything to do 
with this and the interfacing-with-C page in the D reference 
doesn't mention any tools.


Thanks in advance for any advice,

Best wishes,

-- Joe


Re: Creating D bindings for a C library

2015-11-24 Thread Joseph Rushton Wakeling via Digitalmars-d-learn
On Tuesday, 24 November 2015 at 23:14:14 UTC, Joseph Rushton 
Wakeling wrote:

I'm considering creating some D bindings for a C library.


I should probably clarify that I know what to do assuming I have 
to write all the bindings manually.  However, as the library has 
quite a lot of header files, I'd really much rather auto-convert 
as much as possible, hence the questions above :-)


Re: [sdc] linker problems when building programs with sdc

2015-10-22 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

... or, it turns out, sdc doesn't like it when you forget to rewrite
`void main()` as `int main()`, and its error messages are still in the cryptic 
stage :-P


Onwards and upwards, then ... :-)

On 18/10/15 19:58, Joseph Rushton Wakeling via Digitalmars-d-learn wrote:

Turns out even `return 42;` is a bit heavy-duty.  I do really like the way that
sdc is obviously hooking into llvm's error reporting mechanism, though:

$ ./sdc nohello.d
; ModuleID = 'nohello.d'

define i32 @_Dmain() {
   br label %body

body: ; preds = %0
   call void @_D7nohello4mainFMZv()
   ret i32 0
}

define void @_D7nohello4mainFMZv() {

body: ; No predecessors!
}
nohello.d:3:4: error: d.ir.error.ErrorExpression is not supported
 return 42;
 ^~
d.exception.CompileException@libd/src/d/exception.d(15):
d.ir.error.ErrorExpression is not supported

./sdc(llvm.c.core.__LLVMOpaqueValue*
util.visitor.dispatchImpl!(_D1d4llvm10expression13ExpressionGen5visitMFC1d2ir10expression10ExpressionZ14__funcliteral2FC1d2ir10expression10ExpressionZPS4llvm1c4core17__LLVMOpaqueValue,
d.llvm.expression.ExpressionGen, d.ir.expression.Expression).dispatchImpl(ref
d.llvm.expression.ExpressionGen, d.ir.expression.Expression)+0x538) [0x138fae0]
./sdc(llvm.c.core.__LLVMOpaqueValue*
util.visitor.dispatch!(_D1d4llvm10expression13ExpressionGen5visitMFC1d2ir10expression10ExpressionZ14__funcliteral2FC1d2ir10expression10ExpressionZPS4llvm1c4core17__LLVMOpaqueValue,
d.llvm.expression.ExpressionGen, d.ir.expression.Expression).dispatch(ref
d.llvm.expression.ExpressionGen, d.ir.expression.Expression)+0x1d) [0x138f5a5]
./sdc(llvm.c.core.__LLVMOpaqueValue*
d.llvm.expression.ExpressionGen.visit(d.ir.expression.Expression)+0x55) 
[0x137ec55]
./sdc(llvm.c.core.__LLVMOpaqueValue*
d.llvm.statement.StatementGen.genExpression(d.ir.expression.Expression)+0x3f)
[0x138a627]
./sdc(void
d.llvm.statement.StatementGen.visit(d.ast.statement.ReturnStatement!(d.ir.expression.Expression,
d.ir.statement.Statement).ReturnStatement)+0x75) [0x138b1bd]
./sdc(void
util.visitor.__T12dispatchImplS294util7visitor14__funcliteral6TS1d4llvm9statement12StatementGenTC1d2ir9statement9StatementZ.dispatchImpl(ref
d.llvm.statement.StatementGen, d.ir.statement.Statement)+0x274) [0x13907ac]
./sdc(void
util.visitor.__T8dispatchS294util7visitor14__funcliteral6TS1d4llvm9statement12StatementGenTC1d2ir9statement9StatementZ.dispatch(ref
d.llvm.statement.StatementGen, d.ir.statement.Statement)+0x1d) [0x1390535]
./sdc(void d.llvm.statement.StatementGen.visit(d.ir.statement.Statement)+0x55)
[0x138a4ad]
./sdc(void
d.llvm.statement.StatementGen.visit(d.ast.statement.BlockStatement!(d.ir.statement.Statement).BlockStatement)+0xc2)
[0x138ad3a]
./sdc(void d.llvm.local.LocalGen.genBody(d.ir.symbol.Function,
llvm.c.core.__LLVMOpaqueValue*)+0xbb8) [0x1388730]
./sdc(bool d.llvm.local.LocalGen.maybeDefine(d.ir.symbol.Function,
llvm.c.core.__LLVMOpaqueValue*)+0x10e) [0x1387b56]
./sdc(llvm.c.core.__LLVMOpaqueValue*
d.llvm.local.LocalGen.define(d.ir.symbol.Function)+0x8e) [0x138798e]
./sdc(llvm.c.core.__LLVMOpaqueValue*
d.llvm.global.GlobalGen.define(d.ir.symbol.Function)+0x15b) [0x13860cb]
./sdc(void d.llvm.global.GlobalGen.define(d.ir.symbol.Symbol)+0xd0) [0x1385db8]
./sdc(d.ir.symbol.Module
d.llvm.codegen.CodeGenPass.visit(d.ir.symbol.Module)+0xe2) [0x132c0f2]
./sdc(void d.llvm.backend.LLVMBackend.emitObject(d.ir.symbol.Module[],
immutable(char)[])+0xd1) [0x1328c79]
./sdc(void sdc.sdc.SDC.codeGen(immutable(char)[])+0x87) [0x12c7d17]
./sdc(void sdc.sdc.SDC.codeGen(immutable(char)[], immutable(char)[])+0x74)
[0x12c7d94]
./sdc(_Dmain+0x59a) [0x12be462]
/opt/dmd/lib64/libphobos2.so.0.68(_D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ6runAllMFZ9__lambda1MFZv+0x28)
[0x7f7c02d54ee8]
/opt/dmd/lib64/libphobos2.so.0.68(void rt.dmain2._d_run_main(int, char**, extern
(C) int function(char[][])*).tryExec(scope void delegate())+0x2d) 
[0x7f7c02d54e2d]
/opt/dmd/lib64/libphobos2.so.0.68(void rt.dmain2._d_run_main(int, char**, extern
(C) int function(char[][])*).runAll()+0x2d) [0x7f7c02d54e8d]
/opt/dmd/lib64/libphobos2.so.0.68(void rt.dmain2._d_run_main(int, char**, extern
(C) int function(char[][])*).tryExec(scope void delegate())+0x2d) 
[0x7f7c02d54e2d]
/opt/dmd/lib64/libphobos2.so.0.68(_d_run_main+0x1e7) [0x7f7c02d54da7]
./sdc(main+0x20) [0x12c7708]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0) [0x7f7c0173fa40]






[sdc] linker problems when building programs with sdc

2015-10-18 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

Hello all,

I recently decided to have another play with sdc to see how it's doing.  Since 
my dmd is installed in /opt/dmd/ I had to do a couple of tricks to get sdc 
itself to build:


(i) put a dlang.conf in /etc/ld.so.conf.d/ containing the /opt/dmd/lib64 path;

(ii) call 'make LD_PATH=/opt/dmd/lib64' when building sdc

sdc itself then builds successfully, and I wind up with a bin/ directory 
containing sdc and sdc.conf (which contains includePath and libPath options) and 
a lib/ directory containing libd.a, libd-llvm.a, libphobos.a and libsdrt.a.


However, when I try to build any program, even a simple hello-world, I get a 
linker error:


$ ./sdc hello.d
hello.o: In function `_D5hello4mainFMZv':
hello.d:(.text+0x1c): undefined reference to `_D3std5stdio7writelnFMAyaZv'
collect2: error: ld returned 1 exit status

To solve this, I tried adding in a library-path flag, but this simply resulted 
in an exception being thrown by sdc's options parsing:


$ ./sdc -L$MYHOMEDIR/code/D/sdc/lib hello.d
std.getopt.GetOptException@/opt/dmd/bin/../import/std/getopt.d(604): 
Unrecognized option -L$MYHOMEDIR/code/D/sdc/lib


[cut great big backtrace]

Can anyone advise what's missing in my setup?  I did also try adding 
$MYHOMEDIR/code/D/sdc/lib to the /etc/ld.so.conf.d/dlang.conf file, and 
re-running ldconfig, but that didn't seem to make any difference.


Thanks & best wishes,

-- Joe


Re: [sdc] linker problems when building programs with sdc

2015-10-18 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 18/10/15 19:43, Marco Leise via Digitalmars-d-learn wrote:

Maybe you should have started with `return 42;`? :D
writeln is not a light-weight in terms of exercised compiler
features. I didn't even know that it compiles yet. Last time I
heard it was not usable.


Hahahahahahahaha :-D

Turns out even `return 42;` is a bit heavy-duty.  I do really like the way that 
sdc is obviously hooking into llvm's error reporting mechanism, though:


$ ./sdc nohello.d
; ModuleID = 'nohello.d'

define i32 @_Dmain() {
  br label %body

body: ; preds = %0
  call void @_D7nohello4mainFMZv()
  ret i32 0
}

define void @_D7nohello4mainFMZv() {

body: ; No predecessors!
}
nohello.d:3:4: error: d.ir.error.ErrorExpression is not supported
return 42;
^~
d.exception.CompileException@libd/src/d/exception.d(15): 
d.ir.error.ErrorExpression is not supported


./sdc(llvm.c.core.__LLVMOpaqueValue* 
util.visitor.dispatchImpl!(_D1d4llvm10expression13ExpressionGen5visitMFC1d2ir10expression10ExpressionZ14__funcliteral2FC1d2ir10expression10ExpressionZPS4llvm1c4core17__LLVMOpaqueValue, 
d.llvm.expression.ExpressionGen, d.ir.expression.Expression).dispatchImpl(ref 
d.llvm.expression.ExpressionGen, d.ir.expression.Expression)+0x538) [0x138fae0]
./sdc(llvm.c.core.__LLVMOpaqueValue* 
util.visitor.dispatch!(_D1d4llvm10expression13ExpressionGen5visitMFC1d2ir10expression10ExpressionZ14__funcliteral2FC1d2ir10expression10ExpressionZPS4llvm1c4core17__LLVMOpaqueValue, 
d.llvm.expression.ExpressionGen, d.ir.expression.Expression).dispatch(ref 
d.llvm.expression.ExpressionGen, d.ir.expression.Expression)+0x1d) [0x138f5a5]
./sdc(llvm.c.core.__LLVMOpaqueValue* 
d.llvm.expression.ExpressionGen.visit(d.ir.expression.Expression)+0x55) [0x137ec55]
./sdc(llvm.c.core.__LLVMOpaqueValue* 
d.llvm.statement.StatementGen.genExpression(d.ir.expression.Expression)+0x3f) 
[0x138a627]
./sdc(void 
d.llvm.statement.StatementGen.visit(d.ast.statement.ReturnStatement!(d.ir.expression.Expression, 
d.ir.statement.Statement).ReturnStatement)+0x75) [0x138b1bd]
./sdc(void 
util.visitor.__T12dispatchImplS294util7visitor14__funcliteral6TS1d4llvm9statement12StatementGenTC1d2ir9statement9StatementZ.dispatchImpl(ref 
d.llvm.statement.StatementGen, d.ir.statement.Statement)+0x274) [0x13907ac]
./sdc(void 
util.visitor.__T8dispatchS294util7visitor14__funcliteral6TS1d4llvm9statement12StatementGenTC1d2ir9statement9StatementZ.dispatch(ref 
d.llvm.statement.StatementGen, d.ir.statement.Statement)+0x1d) [0x1390535]
./sdc(void d.llvm.statement.StatementGen.visit(d.ir.statement.Statement)+0x55) 
[0x138a4ad]
./sdc(void 
d.llvm.statement.StatementGen.visit(d.ast.statement.BlockStatement!(d.ir.statement.Statement).BlockStatement)+0xc2) 
[0x138ad3a]
./sdc(void d.llvm.local.LocalGen.genBody(d.ir.symbol.Function, 
llvm.c.core.__LLVMOpaqueValue*)+0xbb8) [0x1388730]
./sdc(bool d.llvm.local.LocalGen.maybeDefine(d.ir.symbol.Function, 
llvm.c.core.__LLVMOpaqueValue*)+0x10e) [0x1387b56]
./sdc(llvm.c.core.__LLVMOpaqueValue* 
d.llvm.local.LocalGen.define(d.ir.symbol.Function)+0x8e) [0x138798e]
./sdc(llvm.c.core.__LLVMOpaqueValue* 
d.llvm.global.GlobalGen.define(d.ir.symbol.Function)+0x15b) [0x13860cb]

./sdc(void d.llvm.global.GlobalGen.define(d.ir.symbol.Symbol)+0xd0) [0x1385db8]
./sdc(d.ir.symbol.Module 
d.llvm.codegen.CodeGenPass.visit(d.ir.symbol.Module)+0xe2) [0x132c0f2]
./sdc(void d.llvm.backend.LLVMBackend.emitObject(d.ir.symbol.Module[], 
immutable(char)[])+0xd1) [0x1328c79]

./sdc(void sdc.sdc.SDC.codeGen(immutable(char)[])+0x87) [0x12c7d17]
./sdc(void sdc.sdc.SDC.codeGen(immutable(char)[], immutable(char)[])+0x74) 
[0x12c7d94]

./sdc(_Dmain+0x59a) [0x12be462]
/opt/dmd/lib64/libphobos2.so.0.68(_D2rt6dmain211_d_run_mainUiPPaPUAAaZiZ6runAllMFZ9__lambda1MFZv+0x28) 
[0x7f7c02d54ee8]
/opt/dmd/lib64/libphobos2.so.0.68(void rt.dmain2._d_run_main(int, char**, extern 
(C) int function(char[][])*).tryExec(scope void delegate())+0x2d) [0x7f7c02d54e2d]
/opt/dmd/lib64/libphobos2.so.0.68(void rt.dmain2._d_run_main(int, char**, extern 
(C) int function(char[][])*).runAll()+0x2d) [0x7f7c02d54e8d]
/opt/dmd/lib64/libphobos2.so.0.68(void rt.dmain2._d_run_main(int, char**, extern 
(C) int function(char[][])*).tryExec(scope void delegate())+0x2d) [0x7f7c02d54e2d]

/opt/dmd/lib64/libphobos2.so.0.68(_d_run_main+0x1e7) [0x7f7c02d54da7]
./sdc(main+0x20) [0x12c7708]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0) [0x7f7c0173fa40]



Re: Struct that destroys its original handle on copy-by-value

2015-08-03 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Monday, 3 August 2015 at 09:01:51 UTC, Dicebot wrote:

It is now verified as safe by `return ref`.


Yes, until you pointed this out to me I'd been convinced that 
classes were the way forward for RNGs.  I think that `return ref` 
is going to be a _very_ powerful tool for facilitating 
stack-allocated RNG functionality.


Re: Struct that destroys its original handle on copy-by-value

2015-08-02 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 02/08/15 03:38, Dicebot via Digitalmars-d-learn wrote:

On Saturday, 1 August 2015 at 17:50:28 UTC, John Colvin wrote:

I'm not sure how good an idea it is to totally enforce a range to be
non-copyable, even if you could deal with the function call chain problem.
Even in totally save-aware code, there can still be valid assignment of a
range type. I'm pretty sure a lot of phobos ranges/algorithms would be unusable.


This is exactly why I proposed to Joe design with destructive copy originally -
that would work with any algorithms expecting implicit pass by value but prevent
from actual double usage.

Sadly, this does not seem to be implementable in D in any reasonable way.


Yup.  This work is follow-up on a really creative bunch of suggestions Dicebot 
made to me on our flight back from DConf, and which we followed up on at the 
recent Berlin meetup.


The design principle of destructive copy is great -- it really cuts through a 
bunch of potential nastinesses around random number generation -- but it really 
doesn't look like it's straightforwardly possible :-(




Re: Struct that destroys its original handle on copy-by-value

2015-08-01 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 31/07/15 19:21, Ali Çehreli via Digitalmars-d-learn wrote:

On 07/26/2015 04:29 AM, Joseph Rushton Wakeling via Digitalmars-d-learn wrote:

  is this design idea even feasible in principle, or just a bad
  idea from the get-go?

As I understand it, it is against one of fundamental D principles: structs are
value types where any copy can be used in place of any other.

I expect there are examples where even Phobos violates it but the struct
documentation still says so: A struct is defined to not have an identity; that
is, the implementation is free to make bit copies of the struct as convenient.

   http://dlang.org/struct.html


That really feels very bad for the problem domain I have in mind -- random 
number generation.  No implementation should be free to make copies of a random 
number generator as convenient, that should be 100% in the hands of the 
programmer!




  And if feasible -- how would I go about it?

Disallowing automatic copying and providing a function comes to mind.


Yes, I considered that, but I don't think it really delivers what's needed :-(

Let me give a concrete example of why I was thinking in this direction. 
Consider RandomSample in std.random.  This is a struct (a value type, 
instantiated on the stack).  However, it also wraps a random number generator. 
It needs to be consumed once and once only, because otherwise there will be 
unintended statistical correlations in the program.  Copy-by-value leads to a 
situation where you can accidentally consume the same sequence twice (or 
possibly, only _part_ of the sequence).


Now, indeed, one way is to just @disable this(this) which prevents 
copy-by-value.  But then you can't do something natural and desirable like:


iota(100).randomSample(10, gen).take(5).writeln;

... because you would no longer be able to pass the RandomSample instance into 
`take`.


On the other hand, what you want to disallow is this:

   auto sample = iota(100).randomSample(10, gen);

   sample.take(5).writeln;
   sample.take(5).writeln;   // statistical correlations result,
                             // probably unwanted

The first situation is still possible, and the second disallowed (or at least, 
guarded against), _if_ a copy-by-value is finalized by tweaking the source to 
render it an empty range.


I would happily hear alternative solutions to the problem, but that's why I was 
interested in a struct with the properties I outlined in my original post.




Re: Struct that destroys its original handle on copy-by-value

2015-08-01 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 31/07/15 13:40, Kagamin via Digitalmars-d-learn wrote:

On Sunday, 26 July 2015 at 12:16:30 UTC, Joseph Rushton Wakeling wrote:

Example:

Unique!Random rng = new Random(unpredictableSeed);
rng.take(10).writeln;
My aim by contrast is to _allow_ that kind of use, but render the original
handle empty when it's done.


`take` stores the range, you can try to use some sort of a weak reference.


Yea, but that's not what I'm trying to achieve.  I know how I can pass something 
to `take` so as to e.g. obtain reference semantics or whatever; what I'm trying 
to achieve is a range that _doesn't rely on the user knowing the right way to 
handle it_.


I'll expand on this more responding to Ali, so as to clarify the context of what 
I'm aiming for and why.




Re: Struct that destroys its original handle on copy-by-value

2015-07-29 Thread Joseph Rushton Wakeling via Digitalmars-d-learn
On Sunday, 26 July 2015 at 11:30:16 UTC, Joseph Rushton Wakeling 
wrote:

Hello all,

A design question that came up during the hackathon held during 
the last Berlin D Meetup.


[...]


Ping on the above -- nobody has any insight...?


Re: Why hide a trusted function as safe?

2015-07-26 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 26/07/15 14:24, Adam D. Ruppe via Digitalmars-d-learn wrote:

If the whole function is marked @trusted, the compiler doesn't try to check it
at all - it just takes your word for it.

There was a bit of argument about this a while ago in bugzilla, not everyone
agrees it is a good idea. I don't remember where though.


If I recall right, the argument wasn't over the principle of trying to isolate 
the @trusted bits, but that @trusted was being used only to wrap up the 
particular function call that was technically triggering @system stuff, rather 
than the entirety of what actually needed to be trusted.


I can't remember a good example off the top of my head, but no doubt the stuff 
is there in the bugzilla reports ;-)




Struct that destroys its original handle on copy-by-value

2015-07-26 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

Hello all,

A design question that came up during the hackathon held during the last Berlin 
D Meetup.


I was trying to come up with a range that can be copied by value, but when this 
is done, destroys the original handle.  The idea would be behaviour something 
like this:


auto originalRange = myWeirdRange(whatever);
originalRange.take(10).writeln;
assert(originalRange.empty,
   "The original handle points to an empty range after copy-by-value.");

A very minimal prototype (missing out the details of front and popFront as 
irrelevant) would be something like:


struct MyWeirdRange (R)
if (isInputRange!R)
{
  private:
R* input_;   // Assumed to never be empty, FWIW.  I'm missing out
 // a template check on that for brevity's sake.

  public:
this (return ref R input)
{
/* return ref should guarantee the pointer
 * is safe for the lifetime of the struct,
 * right ... ?
 */
        this.input_ = &input;
}

bool empty () @property
{
return this.input_ is null;
}

auto front () @property { ... }

void popFront () { ... }

void opAssign (ref typeof(this) that)
{
/* copy the internal pointer, then
 * set that of the original to null
 */
this.input_ = that.input_;
that.input_ = null;
}
}

Basically, this is a range that would actively enforce the principle that its 
use is a one-shot.  You copy it by value (whether by direct assignment or by 
passing it to another function), you leave the original handle an empty range.


However, the above doesn't work; even in the event of a direct assignment, i.e.

newRange = originalRange;

... the opAssign is never called.  I presume this is because of the 
ref-parameter input, but it's not obvious to me according to the description 
here why this should be: http://dlang.org/operatoroverloading.html#assignment 
Can anyone clarify what's going on here?


Anyway, my main question is: is this design idea even feasible in principle, or 
just a bad idea from the get-go?  And if feasible -- how would I go about it?


Thanks & best wishes,

-- Joe


Re: Struct that destroys its original handle on copy-by-value

2015-07-26 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 26/07/15 13:45, Martijn Pot via Digitalmars-d-learn wrote:

Sounds like unique_ptr (so UniqueRange might be a nice name). Maybe you can get
some ideas from that.


There is already a Unique in std.typecons.  However, I'm not sure that it's 
doing what I require.


Example:

Unique!Random rng = new Random(unpredictableSeed);
rng.take(10).writeln;

... will fail with an error:

Error: struct std.typecons.Unique!(MersenneTwisterEngine!(uint, 32LU, 
624LU, 397LU, 31LU, 2567483615u, 11LU, 7LU, 2636928640u, 15LU, 4022730752u, 
18LU)).Unique is not copyable because it is annotated with @disable


My aim by contrast is to _allow_ that kind of use, but render the original 
handle empty when it's done.




Re: partialShuffle only shuffles subset.

2015-05-20 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On Tuesday, 19 May 2015 at 14:31:21 UTC, Ivan Kazmenko wrote:

On Tuesday, 19 May 2015 at 10:00:33 UTC, BlackEdder wrote:
The documentation seems to indicate that partialShuffle 
"Partially shuffles the elements of r such that upon returning 
r[0..n] is a random subset of r" (which is what I want), but 
it seems that partialShuffle actually only shuffles the first 
subset of the range (which you could probably also do by 
r[0..n].randomShuffle).


This different behaviour has been a problem since: 
https://issues.dlang.org/show_bug.cgi?id=11738. Does anyone 
know what the intended behaviour is/was?


Reading the current documentation and unittests, I now also 
believe the fix was a mistake.  Reopened the issue for now with 
a comment: https://issues.dlang.org/show_bug.cgi?id=11738#c2


I hope Joseph Rushton Wakeling looks into it soon.


Reading the documentation it does appear that the function 
behaviour is at odds with what is described.  I don't know how I 
came to that misunderstanding.


In the short term, if you want a randomly-shuffled random subset 
of a range, you could get it via something like,


original_range.randomSample(n).array.randomShuffle;

or maybe better

original_range.randomShuffle.randomSample(n);


Re: idiomatic D: what to use instead of pointers in constructing a tree data structure?

2015-01-08 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 07/01/15 16:02, Laeeth Isharc via Digitalmars-d-learn wrote:

class node
{
 string name;
 node ref;
}


Small recommendation (apart from the reserved word issue which you fixed): it's 
generally considered good D style to give structs and classes names that start 
with capital letters, JustLikeThis.  So, I suggest Node rather than node.


Very minor point, and of course, your code is yours to style as you wish, but it 
can be helpful to meet the standard style conventions in order to make it as 
easy as possible for everyone else to understand.


See also: http://dlang.org/dstyle.html



opDollar and length

2014-12-28 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

A question that suddenly occurred to me, and I realized I didn't know the 
answer.

Why is it necessary/desirable to define separate .length and .opDollar methods 
for custom types?


Re: opDollar and length

2014-12-28 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 28/12/14 19:21, Tobias Pankrath via Digitalmars-d-learn wrote:

To allow slicing for types that don't have a length property but
are terminated by a sentinel value, like null terminated strings
or single linked lists.

It's useful for multi-dimensional containers as well.


Ah, clear.  Thanks very much.  (Must remember to look into the multi-dimensional 
container issue in more detail, that looks like something worth understanding in 
depth.)
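
For the archives, a minimal sketch (my own illustration, not from the 
thread) of a sentinel-terminated type where opDollar is meaningful even 
though computing .length would be costly:

```d
import core.stdc.string : strlen;

// A wrapper over a zero-terminated C string: no stored length, but
// opDollar can still give `$` a meaning by returning a tag type that
// opSlice knows how to interpret.
struct CString
{
    const(char)* ptr;

    static struct End {}                  // sentinel marker for `$`
    End opDollar() const { return End.init; }

    // str[lo .. $] needs no length computation at all ...
    const(char)* opSlice(size_t lo, End) const { return ptr + lo; }

    // ... whereas .length forces an O(n) scan for the terminator.
    size_t length() const { return strlen(ptr); }
}

void main()
{
    auto s = CString("hello".ptr);   // D string literals are 0-terminated
    auto tail = s[1 .. $];           // uses opDollar + opSlice
    assert(tail[0] == 'e');
    assert(s.length == 5);           // the separate, costlier query
}
```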


Re: Inheritance and in-contracts

2014-12-22 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 22/12/14 19:06, aldanor via Digitalmars-d-learn wrote:

https://github.com/D-Programming-Language/dmd/pull/4200


Yes, I saw that PR with some joy -- thanks for the link! :-)


Re: Inheritance and in-contracts

2014-12-22 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 22/12/14 20:12, aldanor via Digitalmars-d-learn wrote:

On Monday, 22 December 2014 at 19:11:13 UTC, Ali Çehreli wrote:

On 12/22/2014 10:06 AM, aldanor wrote:

https://github.com/D-Programming-Language/dmd/pull/4200


Thank you! This fixes a big problem with the contracts in D.

Ali


It's not my PR but I just thought this thread would be happy to know :)


Actually, the author is a friend of mine, and an all-round wonderful guy. :-)




Re: @property usage

2014-12-10 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 09/12/14 08:31, Nicholas Londey via Digitalmars-d-learn wrote:

Does @property ever make sense for a free floating function?


http://dlang.org/phobos/std_random.html#.rndGen :-)



Inheritance and in-contracts

2014-12-05 Thread Joseph Rushton Wakeling via Digitalmars-d-learn
Suppose I have a base class where one of the methods has an in-contract, and a 
derived class that overrides it:


/
import std.stdio;

abstract class Base
{
abstract void foo(int n)
in
{
        assert(n < 5);
}
body
{
        assert(false, "Shouldn't get here");
}
}

class Deriv : Base
{
override void foo(int n)
{
        writeln("n = ", n);
}
}


void main()
{
Base b = new Deriv;

b.foo(7);
b.foo(3);
}
/

This outputs,

n = 7
n = 3

In other words, the lack of explicit in-contract on Deriv.foo is being taken as 
an _empty_ in-contract, which is being interpreted as per the rule that a 
derived class can have a less restrictive contract than its base (cf. TDPL 
pp.329-331).
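
To make that rule concrete, here's a small sketch of my own (not from the 
thread): an explicit in-contract on the override is OR-combined with the 
base's, so a looser derived contract widens what the call will accept:

```d
import std.stdio;

class Base
{
    void foo(int n)
    in { assert(n < 5); }
    body { writeln("Base.foo: ", n); }
}

class Deriv : Base
{
    // An explicit, wider in-contract: the call passes if the base
    // contract OR this one is satisfied.  Omitting the `in` block
    // entirely acts like an always-true contract, as described above.
    override void foo(int n)
    in { assert(n < 10); }
    body { writeln("Deriv.foo: ", n); }
}

void main()
{
    Base b = new Deriv;
    b.foo(7);   // accepted: fails Base's n < 5, passes Deriv's n < 10
    b.foo(3);   // accepted: passes both
}
```

(Contracts are only checked in non-release builds, of course.)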


Question: is there any way of indicating that Deriv.foo should inherit the 
in-contract from the base method, without actually calling super.foo ... ?


Following the example on p.331, I did try calling super.__in_contract_foo(n) 
(... or this.Base.__in_contract_foo(n) or other variants), but that doesn't 
seem to work:


Error: no property '__in_contract_foo' for type 'incontract.Base'

... so can anyone advise if there is a reasonable way of achieving this?

Thanks,

 -- Joe


Re: Inheritance and in-contracts

2014-12-05 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 05/12/14 23:45, Ali Çehreli via Digitalmars-d-learn wrote:

This is a known problem with contract inheritance. The following bug report
mentions the ugly hack of defining assert(0) as the derived's 'in' contract:

   https://issues.dlang.org/show_bug.cgi?id=6856


Thanks for the clarification.  This is a not-nice situation; FWIW I would second 
Don's proposal that the absence of an explicit in-contract on the derived-class 
method ought to indicate inheritance of the base contract.


I guess the assert(false) method will do, but I find it as ugly as you do :-(

One further annoyance, pointed out to me by a colleague earlier today: given 
that base and derived in-contracts basically come down to,


   try
   {
       Base.in()
   }
   catch (Throwable)
   {
       Derived.in()
   }

... aren't there some nasty consequences here for memory allocation and the 
generation of garbage?




Re: Inheritance and in-contracts

2014-12-05 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 06/12/14 00:24, bearophile via Digitalmars-d-learn wrote:

Is this a strong need?


Let's put it this way: I don't mind copy-pasting the same in-contract into 
derived class methods.  I'd just rather avoid it, and was hoping there was a way 
to do so which was trivial.


It's disappointing that the lack of an explicitly empty in-contract doesn't 
imply inheritance of the base contract, but I could live with it much more 
easily if I could explicitly indicate that desired inheritance.


std.algorithm.map with side-effects

2014-12-05 Thread Joseph Rushton Wakeling via Digitalmars-d-learn
Here's a little experiment I was trying out earlier today in order to try and 
convert foreach-style code to using UFCS of ranges:


//
import std.algorithm, std.range, std.stdio;

void main()
{
size_t s = 0;

void essify(size_t n)
{
        writeln("n = ", n);
++s;
}

    auto filteredRange = iota(0, 10).filter!(a => (a % 2));

    filteredRange.map!(a => essify(a));

writeln(s);

foreach (n; filteredRange)
{
essify(n);
}

writeln(s);
}
//

I'd assumed the two uses of filteredRange would produce equivalent results, but 
in fact the transformation using map does nothing -- the writeln statement 
inside the essify() function never gets triggered, suggesting the function body 
is never executed.


Can anyone advise why, and whether there's a nice range iteration option to 
ensure that this function gets called using each element of the filteredRange?


Re: std.algorithm.map with side-effects

2014-12-05 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 06/12/14 00:58, bearophile via Digitalmars-d-learn wrote:

Joseph Rushton Wakeling:


Can anyone advise why,


map is lazy, like most other ranges.


Ah, I see.  That function would only be called on consumption of the results of 
the map.
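
A minimal sketch of my own illustrating the point: the side effect fires 
only when the chain is actually consumed (here via .array):

```d
import std.algorithm : filter, map;
import std.array : array;
import std.range : iota;

void main()
{
    size_t s = 0;

    auto lazyChain = iota(0, 10)
        .filter!(a => a % 2)
        .map!((a) { ++s; return a; });   // side effect in the map body

    assert(s == 0);                      // nothing has run yet: map is lazy

    auto result = lazyChain.array;       // consumption forces evaluation
    assert(s == 5);                      // the lambda ran once per element
    assert(result == [1, 3, 5, 7, 9]);
}
```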



Lazy higher order functions like map/filter should be used only with pure
functions. There are bugs/troubles in using them on impure code.


Yes, I did wonder about that.  I'll post up the actual code tomorrow -- I was 
having some fun playing with one of the metrics in my Dgraph library and trying 
to see to what extent I could simplify it (reading-wise) with a range-based 
approach.



There was a proposal for a each function to terminate a range chain with
something effectful, but I think it has gone nowhere. This means you have to use
a foreach on a range.


Yes, I remember you requesting that.  Were there ever any PRs, or was it just 
spec?


Re: Problem interfacing with GSL

2014-11-30 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 30/11/14 13:21, Arjan via Digitalmars-d-learn wrote:

Hi!
D noob here.
I'm trying to call this function from the GSL lib:


Out of curiosity (since your question has already been answered), what 
functionality is it that is making you want to use GSL?  I ask because I want to 
be sure we're not missing something we ought to have in Phobos.


Re: Appender is ... slow

2014-08-14 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 14/08/14 19:16, Philippe Sigaud via Digitalmars-d-learn wrote:

Do people here get good results from Appender? And if yes, how are you using it?


An example where it worked for me:
http://braingam.es/2013/09/betweenness-centrality-in-dgraph/

(You will have to scroll down a bit to get to the point where appenders get 
introduced.)




Re: Appender is ... slow

2014-08-14 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 14/08/14 23:33, Joseph Rushton Wakeling via Digitalmars-d-learn wrote:

An example where it worked for me:
http://braingam.es/2013/09/betweenness-centrality-in-dgraph/


I should add that I don't think I ever explored the case of just using a regular 
array with assumeSafeAppend.  Now that you've raised the question, I think I 
ought to try it :)
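
For anyone curious, a rough sketch of what I mean by the plain-array 
alternative (my own illustration, untested against the benchmark in the 
post):

```d
void main()
{
    int[] buf;
    buf.reserve(1_000);          // one up-front allocation

    foreach (i; 0 .. 1_000)
        buf ~= i;                // grows in place while capacity lasts

    buf.length = 0;              // logically empty, memory retained
    buf.assumeSafeAppend();      // promise: no other slice still refers
                                 // to the old elements, so appending may
                                 // overwrite them in place
    foreach (i; 0 .. 1_000)
        buf ~= i * 2;            // reuses the same block, no reallocation

    assert(buf.length == 1_000);
    assert(buf[999] == 1998);
}
```

(reserve and assumeSafeAppend both live in druntime's object module, so no 
imports are needed.)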




Building SDC from source

2014-07-27 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

Hi all,

I'm running into a little trouble trying to build SDC.  The first problem was 
that the makefile does not auto-detect the appropriate llvm-config, or the 
libphobos2 location, but that's simply enough fixed by doing (in my case):


make LLVM_CONFIG=llvm-config-3.4 LD_PATH=/opt/dmd/lib64

It also needs libncurses5-dev to be installed (Debian/Ubuntu package name, may 
vary on other distros).


However, even allowing for this, the build still falls over when it reaches this 
stage:


gcc -o bin/sdc obj/sdc.o -m64  -L/opt/dmd/lib64 -lphobos2 -Llib -ld-llvm -ld 
`llvm-config-3.4 --ldflags` `llvm-config-3.4 --libs` -lstdc++ -export-dynamic 
-ldl -lffi -lpthread -lm -lncurses

bin/sdc -c -o obj/rt/dmain.o libsdrt/src/d/rt/dmain.d -Ilibsdrt/src
bin/sdc: error while loading shared libraries: libphobos2.so.0.66: cannot open 
shared object file: No such file or directory


This is a bit mysterious, as libphobos2.so.0.66 does indeed exist in the 
/opt/dmd/lib64 directory.  Or is this just an incompatible library version, with 
SDC expecting 2.065  ?


Can anyone advise?

Thanks & best wishes,

  -- Joe


Re: Building SDC from source

2014-07-27 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 27/07/14 14:22, Stefan Koch via Digitalmars-d-learn wrote:

your LD_PATH seems to not have the lib in it


From the Makefile, I'd understood that LD_PATH was meant to point to the 
_directory_ where the libraries are contained, not the actual library itself ... 
?  After all, it's just mapped to a -L flag.


Re: Building SDC from source

2014-07-27 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 27/07/14 14:16, Stefan Koch via Digitalmars-d-learn wrote:

sdc itself should not use phobos at all as far as I can tell.
libsdrt should be self-contained.


Yes, it's obviously being used to build the compiler, and supplying the flag 
directly is clearly only necessary in this case for the gcc call that brings 
together the sdc frontend and the LLVM backend.




Re: Building SDC from source

2014-07-27 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 27/07/14 15:10, Dicebot via Digitalmars-d-learn wrote:

Is shared Phobos library in /opt/dmd known do ldconfig?


No, but isn't that what the -L flag should be passing to gcc?


Re: Building SDC from source

2014-07-27 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 27/07/14 16:20, Stefan Koch via Digitalmars-d-learn wrote:

if gcc knows where the lib is it can compile against it.
But the library being _shared_ will not be linked with the executable but the
library will be loaded from the paths supplied as LD_PATH or in ldconfig.
The -L flag does not change the library loader in any way (I think).


Damn, I just realized I was misreading the messages.  The gcc build is now 
succeeding -- it's the calls to bin/sdc that are failing.


Adding a dlang.conf file in /etc/ld.so.conf.d/ which adds the /opt/dmd/lib64 
path solves things.


Re: Generating Strings with Random Contents

2014-07-16 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

Alternative:

randomSample(lowercase, 10, lowercase.length).writeln;


No, I don't think that's appropriate, because it will pick 10 individual 
characters from a, b, c, ... , z (i.e. no character will appear more than once), 
and the characters picked will appear in alphabetical order.


Incidentally, if lowercase has the .length property, there's no need to pass the 
length separately to randomSample.  Just passing lowercase itself and the number 
of sample points desired is sufficient.
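
That is (a sketch of my own, following the pattern in the post):

```d
import std.ascii : lowercase;    // "abcdefghijklmnopqrstuvwxyz"
import std.random : randomSample;
import std.stdio : writeln;

void main()
{
    // lowercase has a known length, so the two-argument overload
    // suffices; no need to pass lowercase.length separately.
    lowercase.randomSample(10).writeln;
    // Prints 10 *distinct* letters, in alphabetical order --
    // which is exactly why it's a sample, not a shuffle.
}
```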


Re: DStyle: Braces on same line

2014-07-13 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 12/07/14 21:01, Danyal Zia via Digitalmars-d-learn wrote:

I noticed that in Andrei's talks and his book, he used braces on the same line
of delcaration, however Phobos and other D libraries I know use braces on their
own line. Now I'm in a position where I need to take decision on coding style of
my library and I get accustomed to use braces on same line but I'm worried if
that would make my library less readable to other D users.

Should I worry about it? Or is that just a debatable style that won't really
matter if it's consistent throughout the library?


As long as your coding style is self-consistent, then it really doesn't matter a 
lot.  In particular this choice of bracing style is a very minor issue that 
people are well used to having to deal with.


However, I do think there's value in deliberately matching the code style of the 
standard library, as it extends the volume of public D code with a common style. 
 So unless you have a strong personal preference, I'd go with that.


Re: DStyle: Braces on same line

2014-07-13 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 13/07/14 16:52, H. S. Teoh via Digitalmars-d-learn wrote:

I had my own style before, but after I started contributing to Phobos, I
found it a pain to keep switching back and forth between styles (and to
convert styles before submitting PR's), so eventually I decided to just
adopt Phobos style for all my D code, including my personal projects.
That way I never have to worry again about which project is in what
style, but everything is consistently the same style.


Same here. :-)


It also helps that my previous supervisor at my work also used a similar
style, which was different from my own, so I already had to adapt my
style to his in the past. That was what convinced me that other inferior
styles than my own had any merit at all. ;-)


Two consequences of adapting myself to Phobos style were that I realized (i) how 
little most of these things really matter, and (ii) pretty much any stylistic 
choice carries both benefits and drawbacks.


E.g. in this case, Egyptian-style braces definitely make your code more 
compact, but separate-line opening braces definitely make it easier to see where 
scopes begin and end.




Re: DStyle: Braces on same line

2014-07-13 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 13/07/14 14:23, Danyal Zia via Digitalmars-d-learn wrote:

I'm going with Andrei's style of preference on his talks ;)


Andrei can no doubt speak for himself about his preferences, but I'd be wary of 
assuming that the style he uses in his talks necessarily reflects his actual 
stylistic preference.  As has been pointed out by others, same-line opening 
brace is a common style for slides and books because it saves vertical space; 
people will use it there even if in their actual code they prefer something 
different.




Re: DStyle: Braces on same line

2014-07-13 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 13/07/14 19:24, Timon Gehr via Digitalmars-d-learn wrote:

Wrong. There are things which are simply bad ideas.


I think that we can take it as read that I meant, "Any reasonable stylistic 
choice."  Of course, YMMV about what counts as reasonable, but most of the 
things that people fuss over are fairly minimal differences in practice.



I.e. you see where everything is.


Compactness can also be a disadvantage.  Some people have a preference for a 
hyper-compact style where there are minimal blank lines in the code; I accept 
their goal as valid, and I think there are cases where it can surely help, but 
it's not one that I personally find very helpful.


In fact, one reason that I've come to appreciate standard D style is the way in 
which separate opening braces actually help to space out the code into more 
obvious paragraphs.



This is the only argument I have heard in favour of doing this, but it is not
actually valid. This critique might apply to Lisp style.


Well, I personally find that separate-line opening braces do make it easier to 
line up the opening and ending of scopes.  If it doesn't do anything for you, 
that's a shame; but it doesn't make the argument invalid.


Re: DStyle: Braces on same line

2014-07-13 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 13/07/14 19:51, Brian Rogoff via Digitalmars-d-learn wrote:

Yes, the same argument for books and slides is also applicable to all other
media.


Not really.  In a book or a slide you have an unavoidable constraint on how much 
vertical space you can take up.  On a screen, you are unavoidably going to have 
to scroll up or down at some point; it's just a question of how often.


Scrolling media (not just code) allow you to make a tradeoff between less 
vertically compact styles that better highlight different semantic blocks, 
versus more compact styles that pack more data into one screen's worth of 
lines, while usually making it less easy to highlight the semantics of what's 
being displayed.


You may lean towards favouring compact code over other factors, but at the end 
of the day this is a preference based on your personal priorities, not a hard 
and fast rule.


Re: Basics of calling C from D

2014-06-11 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 11/06/14 16:22, Adam D. Ruppe via Digitalmars-d-learn wrote:

On Wednesday, 11 June 2014 at 14:11:04 UTC, simendsjo wrote:

I believe the correct answer should be "Buy my book!".


ah, of course! I should just make a .sig file lol

http://www.packtpub.com/discover-advantages-of-programming-in-d-cookbook/book

chapter 4 talks about this kind of thing :P


My copy arrived today.

Life is clearly going to be more fun. :-)



Interesting bug with std.random.uniform and dchar

2014-06-08 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

Hello all,

Here's an interesting little bug that arises when std.random.uniform is called 
using dchar as the variable type:


//
import std.conv, std.random, std.stdio, std.string, std.typetuple;

void main()
{
    foreach (C; TypeTuple!(char, wchar, dchar))
    {
        writefln("Testing with %s: [%s, %s]", C.stringof,
                 to!ulong(C.min), to!ulong(C.max));

        foreach (immutable _; 0 .. 100)
        {
            auto u = uniform!"[]"(C.min, C.max);

            assert(C.min <= u, format("%s.min = %s, u = %s", C.stringof,
                                      to!ulong(C.min), to!ulong(u)));
            assert(u <= C.max, format("%s.max = %s, u = %s", C.stringof,
                                      to!ulong(C.max), to!ulong(u)));
        }
    }
}
//

When closed boundaries ("[]") are used with uniform, and the min and max of the 
distribution are equal to T.min and T.max (where T is the variable type), the 
integral/char-type uniform() makes use of an optimization, and returns


std.random.uniform!ResultType(rng);

That is, it uses a specialization of uniform() for the case where one wants a 
random number drawn from all the possible bits of a given integral type.


With char and wchar (8- and 16-bit) this works fine.  However, dchar (32-bit) 
has a .max value that is less than the corresponding number of bits used to 
represent it: dchar.max is 1114111, while its 32 bits are theoretically capable 
of handling values of up to 4294967295.


The first consequence is that the returned value can exceed dchar.max, 
violating the requested [dchar.min, dchar.max] bounds (which is what trips the 
assertions above).  A second consequence is that uniform!dchar (the 
all-the-bits specialization) will return invalid code points.

I take it this is a consequence of dchar being something of an oddity as a data 
type -- while stored as an integral-like value, it doesn't actually make use 
of the full range of values available in its 32 bits (unlike char and wchar 
which make full use of their 8-bit and 16-bit range).


I think it should suffice to forbid uniform!T from accepting dchar parameters 
and to tweak the integral-type uniform()'s internal check to avoid calling that 
specialization with dchar.
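To sketch the kind of tweak I have in mind (illustrative names only, not the actual Phobos patch): the all-the-bits shortcut should only fire for types where every bit pattern is a valid value, which excludes dchar:

```d
import std.traits : isSomeChar;

// Hypothetical helper: true iff every bit pattern of T is a valid value,
// so drawing all of T's bits at random is a safe shortcut.
// char and wchar qualify; dchar does not, since dchar.max (0x10FFFF)
// is far below what its 32 bits can hold (0xFFFFFFFF).
enum allBitPatternsValid(T) = isSomeChar!T && T.sizeof < 4;

static assert( allBitPatternsValid!char);
static assert( allBitPatternsValid!wchar);
static assert(!allBitPatternsValid!dchar);
```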


Thoughts ... ?

Thanks & best wishes,

-- Joe


Re: Interesting bug with std.random.uniform and dchar

2014-06-08 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 08/06/14 16:25, monarch_dodra via Digitalmars-d-learn wrote:

Arguably, the issue is the difference between invalid and downright illegal
values. The thing about dchar is that while it *can* have values higher than
dchar max, it's (AFAIK) illegal to have them, and the compiler (if it can) will
flag you for it:

dchar c1 = 0x_D800; //Invalid, but fine.
dchar c2 = 0x_; //Illegal, nope.


Yup.  If you use an invalid wchar (say, via writeln), you'll get a nonsense 
symbol on your screen, but it will work.  Try and writeln a dchar whose value is 
greater than dchar.max and you'll get an exception/error thrown.


Re: Interesting bug with std.random.uniform and dchar

2014-06-08 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 08/06/14 18:41, Joseph Rushton Wakeling via Digitalmars-d-learn wrote:

On 08/06/14 10:47, Joseph Rushton Wakeling via Digitalmars-d-learn wrote:

Here's an interesting little bug that arises when std.random.uniform is called
using dchar as the variable type:


Issue report submitted: https://issues.dlang.org/show_bug.cgi?id=12877


Fix: https://github.com/D-Programming-Language/phobos/pull/2235



Compiler support for T(n) notation for initialization of variables

2014-06-07 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

Hello all,

Just a quick check -- which version of the dmd frontend introduces support for 
the notation T(n) to initialize a variable as type T?  Is it 2.065 or the 
upcoming 2.066?  I ask because as I'm always running git-HEAD DMD, I'm never 
entirely on top of what's in which version ... :-)
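(For concreteness, the notation in question looks like this:)

```d
// T(n) construction syntax for basic types:
auto a = ubyte(1);    // a is ubyte, without a cast
auto b = size_t(0);   // b is size_t

static assert(is(typeof(a) == ubyte));
static assert(is(typeof(b) == size_t));
```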


Thanks & best wishes,

-- Joe


Re: Compiler support for T(n) notation for initialization of variables

2014-06-07 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 07/06/14 19:57, Peter Alexander via Digitalmars-d-learn wrote:

Well, it doesn't work in 2.065, so must be 2.066 :-)

P.S. thanks for letting me know about this feature. I had no idea it was going 
in!


I only discovered it when I was advised to use it in a recent Phobos PR of 
mine. :-)

The reason I wanted to know is to check whether or not it was safe to use it 
in library code that might conceivably be used with earlier compilers.  I think 
requiring 2.065+ is reasonable, but requiring 2.066 obviously is not.


Oh well, (cast(T) 1) it is then. :-(

(I think that to!T will also work with a compile time constant, but it feels a 
bit dodgy to import std.conv for something like this.)
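The three alternatives side by side (a sketch; which one is usable depends on your minimum supported compiler):

```d
import std.conv : to;

void alternatives()
{
    auto a = cast(ubyte) 1; // works everywhere, but casts are blunt
    auto b = to!ubyte(1);   // works with a compile-time constant, needs std.conv
    auto c = ubyte(1);      // cleanest, but requires a 2.066+ frontend

    static assert(is(typeof(a) == ubyte) &&
                  is(typeof(b) == ubyte) &&
                  is(typeof(c) == ubyte));
}
```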


Re: Compiler support for T(n) notation for initialization of variables

2014-06-07 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 07/06/14 19:57, bearophile via Digitalmars-d-learn wrote:

Joseph Rushton Wakeling:


which version of the dmd frontend introduces support for the notation T(n) to
initialize a variable as type T?  Is it 2.065 or the upcoming 2.066?


In 2.066.


Thanks for the confirmation :-)



Re: Different random shuffles generated when compiled with gdc than with dmd

2014-06-02 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 02/06/14 08:57, Chris Cain via Digitalmars-d-learn wrote:

On Sunday, 1 June 2014 at 14:22:31 UTC, Joseph Rushton Wakeling via
Digitalmars-d-learn wrote:

I missed the debate at the time, but actually, I'm slightly more concerned
over the remark in that discussion that the new uniform was ported from
java.util.Random.  Isn't OpenJDK GPL-licensed ... ?


To clarify, no, the uniform was not ported from Java's Random.


I'm really sorry, Chris, I was obviously mixing things up: on rereading, the 
person in the earlier forum discussion (not PR thread) who talks about porting 
from Java wasn't you.  I'm very glad to be corrected on that; your PR was a nice 
submission and I'm glad we have it in Phobos.


text() vs. format() for formatting assert/exception messages

2014-06-01 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

Hello all,

What's the current state of preference (if any) for using std.conv.text vs. 
std.string.format (or others?) when formatting error messages for 
assert/enforce/exceptions?


I seem to recall a deprecation message pushing the use of std.string.format at 
some point in the past, but text() now seems to pass without comment.


Thanks & best wishes,

-- Joe


Re: Different random shuffles generated when compiled with gdc than with dmd

2014-06-01 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 01/06/14 14:11, Ivan Kazmenko via Digitalmars-d-learn wrote:

I second the thought that reproducibility across different versions is an
important feature of any random generation library.  Sadly, I didn't use a
language yet which supported such a flavor of reproducibility for a significant
period of time in its default random library, so I have to use my own randomness
routines when it matters.  I've reported my concern [1] at the moment of
breakage, but apparently it didn't convince people. Perhaps I should make a more
significant effort next time (like a pull request) for the things that matter to
me.  Well, now I know it does matter for others, at least.


Yes, there probably should be a high bar for changes that break reproducibility 
in this way (although there certainly shouldn't be a ban: we shouldn't 
artificially constrain ourselves to avoid significant improvements to the module).


I missed the debate at the time, but actually, I'm slightly more concerned over 
the remark in that discussion that the new uniform was ported from 
java.util.Random.  Isn't OpenJDK GPL-licensed ... ?

