Re: (u)byte calling char overload instead of int

2018-09-01 Thread Peter Alexander via Digitalmars-d-learn

On Saturday, 1 September 2018 at 17:17:37 UTC, puffi wrote:

Hi,
Is it by design that when calling functions with either ubyte 
or byte variables the char overload is called instead of the 
int (or generic) one?


It seems this is by design.

"If two or more functions have the same match level, then partial 
ordering is used to try to find the best match. Partial ordering 
finds the most specialized function."


char is more specialized than int, and since the implicit 
conversion byte->char exists, it is called. Even f(1UL) will call 
f(char) rather than f(long).
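For illustration, a minimal sketch of the resolution described above (the overload set and messages are mine, per the quoted partial-ordering rule):

```d
import std.stdio;

string which;

void f(char c) { which = "char"; }
void f(int i)  { which = "int"; }
void f(long l) { which = "long"; }

void main()
{
    ubyte b = 1;
    f(b);   // char is more specialized than int, and ubyte
            // implicitly converts to char, so f(char) wins
    assert(which == "char");

    f(1UL); // even an unsigned long literal resolves to f(char)
    assert(which == "char");

    writeln("both calls picked the char overload");
}
```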


Re: Is this a good idea?

2018-09-01 Thread Peter Alexander via Digitalmars-d-learn

On Saturday, 1 September 2018 at 16:20:11 UTC, Dr.No wrote:

why move flush to outside the synchronized block?


flush should be thread-safe. In general, you want as little code 
as possible to run under the lock. Not that important though.


Trying out this approach, I found it to be OK except in some 
cases, where the output looks like this:


...

also there's that extra ♪◙ character. That sounds like a memory 
violation somewhere.
This only happens when using parallel. Any guess what's 
possibly happening?


Hard to say without seeing code. Agree it looks like a race.



Re: Is this a good idea?

2018-08-30 Thread Peter Alexander via Digitalmars-d-learn

On Thursday, 30 August 2018 at 19:59:17 UTC, Dr.No wrote:
I would like to process the current block in parallel, but 
printing needs to be thread-safe, so I'm using



foreach(x; parallel(arr)) {
    auto a = f(x);
    auto res = g(a);
    synchronized {
        stdout.writeln(res);
        stdout.flush();
    }
}



Since f() and g() are heavy functions, I'd like to process them 
in parallel; the printing doesn't need to respect order, but it 
must be thread-safe, hence the synchronized block. Is this 
counter-productive in any way?


I don't see any problem with that assuming f and g are 
significantly more expensive than writeln. The flush can be moved 
outside the synchronized block.
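Concretely, the suggestion amounts to something like this sketch (with a cheap stand-in for the f/g pipeline; flush's thread-safety is assumed, as stated above):

```d
import std.parallelism : parallel;
import std.stdio : stdout;

void main()
{
    auto arr = [1, 2, 3, 4];
    foreach (x; parallel(arr))
    {
        auto res = x * x; // stand-in for the expensive f/g pipeline
        synchronized
        {
            stdout.writeln(res); // only the write needs the lock
        }
        stdout.flush(); // assumed thread-safe, so no need to hold the lock
    }
}
```

The output order is unspecified, but each line is written atomically.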


Re: Parallelizing factorial computation

2018-08-24 Thread Peter Alexander via Digitalmars-d-learn

On Friday, 24 August 2018 at 13:04:47 UTC, Uknown wrote:
I was quite surprised by the fact that parallel ran so much 
slower than recursive and loop implementations. Does anyone 
know why?


n = 100 is too small to see parallelism gains.

Try n = 1

https://run.dlang.io/is/XDZTSd
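A minimal sketch of one way to parallelize the product with std.parallelism (my example, not the linked one; ulong overflows past n = 20, so larger n would need BigInt):

```d
import std.parallelism : taskPool;
import std.range : iota;
import std.stdio : writeln;

ulong parallelFactorial(ulong n)
{
    // Multiply 1..n in parallel; multiplication is associative,
    // so the reduction can be split across worker threads.
    return taskPool.reduce!"a * b"(1UL, iota(1UL, n + 1));
}

void main()
{
    assert(parallelFactorial(5) == 120);
    writeln(parallelFactorial(20)); // 20! is the largest factorial fitting in ulong
}
```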


Patterns to avoid GC with capturing closures?

2018-08-24 Thread Peter Alexander via Digitalmars-d-learn

Consider this code, which is used as an example only:

auto scaleAll(int[] xs, int m) {
  return xs.map!(x => m * x);
}

As m is captured, the delegate for map will rightly allocate the 
closure in the GC heap.


In C++, you would write the lambda to capture m by value, but 
this is not a facility in D.


I can write scaleAll like this:

auto scaleAll(int[] xs, int m) @nogc {
  return repeat(m).zip(xs).map!(mx => mx[0] * mx[1]);
}

So that repeat(m) stores m, but it is quite hacky to work like 
this.


I could write my own range that does this, but this is also not 
desirable.
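For reference, such a hand-written range might look like this (my sketch of the "own range" option, not an established pattern):

```d
auto scaleAll(int[] xs, int m) @nogc
{
    // The struct stores m by value, so no GC closure is needed.
    static struct Scaled
    {
        int[] xs;
        int m;
        bool empty() const { return xs.length == 0; }
        int front() const { return m * xs[0]; }
        void popFront() { xs = xs[1 .. $]; }
    }
    return Scaled(xs, m);
}

void main()
{
    import std.algorithm.comparison : equal;
    assert(scaleAll([1, 2, 3], 2).equal([2, 4, 6]));
}
```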


Are there any established patterns, libraries, or language 
features that can help avoid the GC allocation in a principled 
way here?


Re: Fun with floating point

2015-02-07 Thread Peter Alexander via Digitalmars-d-learn

On Saturday, 7 February 2015 at 21:33:51 UTC, Kenny wrote:
The above code snippet works correctly when I use LDC compiler 
(it finds expected 'f' value and prints it to console). I'm 
wondering is it a bug in DMD?


p.s. the final code used by both compilers:

import std.stdio;
import std.conv;

int main(string[] argv)
{
const float eps = 1.0f;
float f = 0.0f;
while (f + eps != f)
f += 1.0f;

writeln("eps = ", eps, ", max_f = ", f);
return 0;
}


Intermediate calculations may be performed at higher precision 
than the precision of the values themselves. In particular, the f 
+ eps may be performed with 80 bits of precision, even though 
both values are 32-bit. The comparison will then fail.


The reason for the difference between DMD and LDC is that DMD 
tends to use the FPU more with 80 bits of precision, whereas LDC 
and GDC will use the SSE2 instructions, which only support 32-bit 
and 64-bit precision.
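One way to see this (my sketch): round the sum back to 32-bit explicitly, which the language requires an explicit float cast to do, so the extra x87 precision can no longer keep the loop alive:

```d
import std.stdio : writeln;

void main()
{
    const float eps = 1.0f;
    float f = 0.0f;
    // cast(float) forces the sum to be rounded to 32-bit precision
    // before the comparison, regardless of intermediate precision.
    while (cast(float)(f + eps) != f)
        f += 1.0f;
    assert(f == 16_777_216.0f); // 2^24: the first float where f + 1 == f
    writeln("max_f = ", f);
}
```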


Re: Fun with floating point

2015-02-07 Thread Peter Alexander via Digitalmars-d-learn

On Saturday, 7 February 2015 at 23:06:15 UTC, anonymous wrote:

On Saturday, 7 February 2015 at 22:46:56 UTC, Ali Çehreli wrote:

1.0 is famously not representable exactly.


1.0 is representable exactly, though.


I think he meant 0.1 :-)


Re: Shared and GC

2015-01-15 Thread Peter Alexander via Digitalmars-d-learn
On Thursday, 15 January 2015 at 17:05:32 UTC, Ola Fosheim Grøstad 
wrote:
On Thursday, 15 January 2015 at 15:31:17 UTC, Peter Alexander 
wrote:
On Thursday, 15 January 2015 at 15:24:55 UTC, Ola Fosheim 
Grøstad wrote:
That would be nice, because then a precise garbage collector 
could choose between local collection scans and global 
collection scans.


I think something like this is part of the plan, but shared 
semantics are still up in the air.


That sounds like a very important aspect of a plan to get fast 
GC without completely changing the language and non-gc 
performance.


I've looked a bit at how to do a fast stop-the-thread GC. 
Estimates on what the hardware supports (bandwidth and cache 
performance) suggest that it is possible to get acceptable 
rates for not-densely-linked heaps with some tweaks to 
semantics:


- shared-awareness in new-expressions to support local 
collection


- removing class-destructors

- locating traceable pointers to the same cachelines in class 
instances (negative offsets is the easy solution)



Then you can use a cache-optimized collector using batched 
non-caching queues with prefetching to get bitmaps that fit in 
first-level cache, without wrecking the cache for other threads 
or having collection dominated by cache misses.


Yah, this was all discussed at length not that long ago, although 
I can't find the thread just now.


Re: Shared and GC

2015-01-15 Thread Peter Alexander via Digitalmars-d-learn
On Thursday, 15 January 2015 at 15:24:55 UTC, Ola Fosheim Grøstad 
wrote:

I am trying to understand the idea behind shared typing fully.

If I am only allowed to share objects with another thread if it 
is typed shared, doesn't that imply that it should be 
allocated as shared too and only be allowed to contain pointers 
to shared?


Yes, shared is transitive.

struct S { int* p; }
void main() {
S s1;
shared S s2 = s1;  // error, but ok if p is int.
}


That would be nice, because then a precise garbage collector 
could choose between local collection scans and global 
collection scans.


I think something like this is part of the plan, but shared 
semantics are still up in the air.


Re: Copy only frame pointer between objects of nested struct

2015-01-07 Thread Peter Alexander via Digitalmars-d-learn

On Tuesday, 6 January 2015 at 23:32:25 UTC, Artur Skawina via Digitalmars-d-learn wrote:
That shows a static struct, so I'm not sure it's the same 
problem.


static structs with template alias parameters to local symbols 
count as nested structs.


Your solution would likely work, but yes, I'm looking for 
something less hacky :-)




Copy only frame pointer between objects of nested struct

2015-01-06 Thread Peter Alexander via Digitalmars-d-learn

Consider:

auto foo(T)(T a) {
T b;  // Error: cannot access frame pointer of main.X
b.data[] = 1;
return b;
}

void main() {
struct X {
this(int) {}
int[4096] data;
}
foo(X());   
}

Note the error is because you cannot construct the main.X object 
without a frame pointer.


You could do `T b = a` here to get a's frame pointer, but it 
would also copy all of a's data, which is expensive and 
unnecessary.


Is there a way to only copy a's frame pointer into b?

(Note: this is just an illustrative example, real problem here: 
https://issues.dlang.org/show_bug.cgi?id=13935)


Re: What exactly shared means?

2015-01-02 Thread Peter Alexander via Digitalmars-d-learn

On Friday, 2 January 2015 at 23:51:05 UTC, John Colvin wrote:
The rule (in C(++) at least) is that all data is assumed to be 
visible and mutable from multiple other threads unless proved 
otherwise. However, given that you do not write a race, the 
compiler will provide full sequential consistency. If you do 
write a race though, all bets are off.


The memory is visible and mutable, but that's pretty much the 
only guarantee you get. Without synchronization, there's no 
guarantee a write made by thread A will ever be seen by thread B, 
and vice versa.


Analogously in D, if a thread modifies a __gshared variable, 
there's no guarantee another thread will ever see that 
modification. The variable isn't thread-local, but it's almost 
as if the compiler treats it that way.


These relaxed guarantees allow the compiler to keep variables in 
registers, and re-order memory writes. These optimizations are 
crucial to performance.
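To illustrate the contrast (my sketch; the join supplies the synchronization that makes the final values visible in main):

```d
import core.atomic : atomicLoad, atomicOp;
import core.thread : Thread;

shared int counter;   // shared: compiler pushes you towards atomics/locks
__gshared int plain;  // __gshared: no guarantees without manual sync

void main()
{
    auto t = new Thread({
        atomicOp!"+="(counter, 1); // well-defined cross-thread update
        plain = 1;                 // only visible below because of the join
    });
    t.start();
    t.join(); // synchronizes with the thread's completion
    assert(atomicLoad(counter) == 1);
    assert(plain == 1);
}
```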


Re: Can the order in associative array change when keys are not modified?

2015-01-01 Thread Peter Alexander via Digitalmars-d-learn
On Thursday, 1 January 2015 at 13:13:10 UTC, Andrej Mitrovic via 
Digitalmars-d-learn wrote:

On 1/1/15, Idan Arye via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:

If I have an associative array and I only modify its values,
without changing the keys, can I assume that the order won't
change?


Associative arrays are not ordered at all.

See the first note here: http://dlang.org/hash-map.html


The order is unspecified, but an iteration must iterate in *some* 
order. The question (if I've understood it correctly), is whether 
that order of iteration changes when the keys aren't changed.


The spec doesn't say anything about this, although I would expect 
in practice that the order will not change.


I've added a bug to track this omission from the spec: 
https://issues.dlang.org/show_bug.cgi?id=13923


Re: Initialization of nested struct fields

2015-01-01 Thread Peter Alexander via Digitalmars-d-learn

On Friday, 2 January 2015 at 00:08:02 UTC, anonymous wrote:
Apparently dmd thinks that the result of f must be a nested 
struct. I.e. it needs a context pointer. And I guess hell would 
break loose if you'd use a nested struct with a null context 
pointer. At least when the context pointer is actually used, 
unlike here.


Ah, I see. So the problem is that the nested struct doesn't 
really have a sensible default value, meaning you must initialize 
it explicitly in the constructor.


Thanks for the clarification.


Initialization of nested struct fields

2015-01-01 Thread Peter Alexander via Digitalmars-d-learn
Can someone please explain this behaviour? I find it totally 
bizarre.


auto f(T)(T x) {
struct S {
T y;
this(int) { }
}
return S(0);
}


void main() {
f(f(0));
}

Error: constructor f376.f!(S).f.S.this field y must be 
initialized in constructor, because it is nested struct


Why must y be initialized in the constructor? It isn't const. Why 
isn't it default initialized?


Is this explained anywhere in the docs? I can't see anything in 
the nested struct section, or in any constructor section.


Re: readln with buffer fails

2014-10-29 Thread Peter Alexander via Digitalmars-d-learn

You need to take a slice of the buffer:

char[] buf = Input[];
readln(buf);
// line now in buf

The reason is that you need to know where the string ends. If 
you just passed in Input, how would you know how long the read 
line was?
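Put together as a complete program (the buffer name Input and its size are assumed from the question):

```d
import std.stdio : readln, writeln;

void main()
{
    char[256] Input;          // fixed-size buffer
    char[] buf = Input[];     // slice covering the whole buffer
    auto len = readln(buf);   // readln shrinks buf to the line it read
    writeln("read ", len, " chars: ", buf);
}
```

Note that readln returns the number of characters read (including the terminating newline) and rebinds buf to just that line.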


Re: Reflections on isPalindrome

2014-10-24 Thread Peter Alexander via Digitalmars-d-learn

On Friday, 24 October 2014 at 21:56:20 UTC, Nordlöw wrote:

bool isPalindrome(R)(in R range) @safe pure


Aside: for templates, just let the compiler infer @safe and pure. 
You don't know whether the range operations on R are pure or not.


As for the actual algorithm, there's no need for the random 
access version, and your bidirectional version does twice as 
much work as necessary:


Just do:

while (!range.empty)
{
  if (range.front != range.back) return false;
  range.popFront();
  if (range.empty) break;
  range.popBack();
}
return true;

This automatically handles narrow strings.
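Wrapped into a complete function (the template constraint and the tests are my additions):

```d
import std.range.primitives : isBidirectionalRange, empty, front, back,
    popFront, popBack;

bool isPalindrome(R)(R range)
if (isBidirectionalRange!R)
{
    while (!range.empty)
    {
        if (range.front != range.back) return false;
        range.popFront();
        if (range.empty) break; // odd length: middle element matches itself
        range.popBack();
    }
    return true;
}

void main()
{
    assert(isPalindrome("racecar"));
    assert(isPalindrome("åå"));   // narrow string: front/back decode to dchar
    assert(!isPalindrome("ab"));
    assert(isPalindrome(""));
}
```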


Further, I would like to extend isPalindrome() with a minimum 
length argument minLength that for string and wstring does


import std.uni: byDchar;
range.byDchar.array.length >= minLength.

AFAIK this will however prevent my algorithm from being 
single-pass right?


I'm not sure what you are saying here, but hopefully the above 
code obviates this anyway.




Re: Are there desktop appications being developed in D currently?

2014-08-09 Thread Peter Alexander via Digitalmars-d-learn

On Saturday, 9 August 2014 at 00:34:43 UTC, Puming wrote:
Yes, Rust is a more infantile language compared to D, but 
people are already using it to create complicated applications 
like a browser!


Rust was designed to build Servo. The people building Servo are 
the people building Rust. With all due respect to Rust, I don't 
think that counts as endorsement of the language.


Re: spawnProcess command-line arguments help

2014-08-04 Thread Peter Alexander via Digitalmars-d-learn

On Sunday, 3 August 2014 at 23:48:09 UTC, Martin wrote:
When I use the spawnProcess function in std.process, the 
command line arguments that I provide to the function seem to 
get quoted.


I can't reproduce this on OS X with 2.066rc1 (args are unquoted).

Can someone else check Windows? Sounds like a bug to me.
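For reference, a minimal call to check with (my sketch; it assumes echo exists as an executable, which holds on POSIX but not on Windows, where echo is a shell builtin):

```d
import std.process : spawnProcess, wait;

void main()
{
    // Each argument is a separate array element; std.process handles
    // any platform-specific quoting internally, so the child should
    // receive the strings unquoted.
    auto pid = spawnProcess(["echo", "hello world"]);
    wait(pid);
}
```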


Re: How to test templates for equality?

2014-06-30 Thread Peter Alexander via Digitalmars-d-learn

template Foo(T...) {}
template Bar(T...) {}

template isFoo(alias F)
{
enum isFoo = __traits(isSame, F, Foo);
}

pragma(msg, isFoo!Foo); // true
pragma(msg, isFoo!Bar); // false


Re: Compiler support for T(n) notation for initialization of variables

2014-06-07 Thread Peter Alexander via Digitalmars-d-learn

Well, it doesn't work in 2.065, so it must be 2.066 :-)

P.S. thanks for letting me know about this feature. I had no idea 
it was going in!