Re: Preventing implicit conversion
On Thursday, 5 November 2015 at 09:33:40 UTC, ixid wrote: In C++ I can add two shorts together without having to use a cast to assign the result to one of the two shorts. It just seems super clunky not to be able to do basic operations on basic types without casts everywhere. +1 If automatic narrowing is dropped from the C legacy behaviour, then integer promotion should also be dropped (or changed to promote no further than the actual size of the type). D has a far better type system; throw away the bad old C habits! -> This would also make the defective comparison of signed to unsigned types visible for small types, and hopefully force the introduction of a correct comparison!
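For readers following the thread, a minimal illustration of the D behaviour being discussed (my own snippet, not code from either poster): the sum of two shorts is promoted to int, so assigning it back to a short needs an explicit cast.

void main()
{
    short a = 1, b = 2;
    // short c = a + b;           // error: cannot implicitly convert int to short
    short c = cast(short)(a + b); // compiles, with the cast ixid is complaining about
    assert(c == 3);
}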
Re: Associative array with duplicated keys?
On Thursday, 5 November 2015 at 09:27:35 UTC, tcak wrote: On Thursday, 5 November 2015 at 08:55:10 UTC, Andrea Fontana wrote: Check this: http://dpaste.dzfl.pl/ebbb3ebac60e It doesn't give any error or warning. And writeln seems confused (do you see that "," at the end?) I am sure the author of writeln was too lazy to avoid putting ", " after the last item. You can report it as a bug, I guess. Not really: it appears only if the array has duplicated keys. Anyway: are duplicated keys on declaration allowed?
Associative array with duplicated keys?
Check this: http://dpaste.dzfl.pl/ebbb3ebac60e It doesn't give any error or warning. And writeln seems confused (do you see that "," at the end?)
Re: Preventing implicit conversion
On Thursday, 5 November 2015 at 05:41:46 UTC, Jonathan M Davis wrote: On Wednesday, November 04, 2015 21:22:02 ixid via Digitalmars-d-learn wrote: On Wednesday, 4 November 2015 at 19:09:42 UTC, Maxim Fomin wrote: > On Wednesday, 4 November 2015 at 14:27:49 UTC, ixid wrote: >> Is there an elegant way of avoiding implicit conversion to >> int when you're using shorter types? > > Only with library solution. Implicit conversions are built > into language. Doesn't that seem rather limiting and unnecessary? Why? You can't affect what conversions do and don't work for the built-in types in _any_ language that I've ever used, and I've never heard of a language that allowed anything like that. If you want different conversion rules, you need to create a user-defined type that defines the conversions you want. That's pretty normal. And AFAIK, there aren't very many folks trying to avoid the built-in implicit conversions in D, particularly since D eliminated the various implicit narrowing conversions that you get in C/C++. - Jonathan M Davis In C++ I can add two shorts together without having to use a cast to assign the result to one of the two shorts. It just seems super clunky not to be able to do basic operations on basic types without casts everywhere.
Re: Associative array with duplicated keys?
On Thursday, 5 November 2015 at 08:55:10 UTC, Andrea Fontana wrote: Check this: http://dpaste.dzfl.pl/ebbb3ebac60e It doesn't give any error or warning. And writeln seems confused (do you see that "," at the end?) I am sure the author of writeln was too lazy to avoid putting ", " after the last item. You can report it as a bug, I guess.
Re: looking for sdl2 based application skeleton
On Thursday, 5 November 2015 at 07:26:08 UTC, drug wrote: It seems to me I saw a project like this somewhere. I don't want to make another one if there is something like that. Were you thinking of Derelict? https://github.com/DerelictOrg/DerelictSDL2 or perhaps Dgame: https://github.com/Dgame/Dgame Craig
Re: Help with Concurrency
On Tuesday, 3 November 2015 at 23:16:59 UTC, bertg wrote: I am having trouble with a simple use of concurrency. Running the following code I get 3 different tid's, multiple "sock in" messages printed, but no receives. I am supposed to get a "received!" for each "sock in", but I am getting hung up on "receiving...". [...] I had a similarly odd experience with std.concurrency - my receive would not work unless I also received on Variant, although the Variant receiver was a no-op:

receive(
    (Event event) {
        // handle event
    },
    (Variant v) {}
);
Re: Associative array with duplicated keys?
On Thursday, 5 November 2015 at 10:04:02 UTC, Andrea Fontana wrote: Anyway: are duplicated keys on declaration allowed? They shouldn't be...
Re: Align a variable on the stack.
On Thursday, 5 November 2015 at 03:52:47 UTC, TheFlyingFiddle wrote: I don't really know where to go from here to figure out the underlying cause. Does anyone have any suggestions? Can you publish two compilable and runnable versions of the code that exhibit the difference? Then we can have a look at the generated assembly. If there's really different code being generated depending on whether the .init value is explicitly set to float.nan or not, then this suggests there is a bug in DMD.
Re: Preventing implicit conversion
And I want to have small number literals automatically choosing the smallest fitting type. If I write

ubyte b = 1u;
auto c = b + 1u;

I expect the 1u to be of type ubyte - and also c.
Re: Associative array with duplicated keys?
https://issues.dlang.org/show_bug.cgi?id=15290
Re: Associative array with duplicated keys?
On Thursday, 5 November 2015 at 10:04:02 UTC, Andrea Fontana wrote: Anyway: are duplicated keys on declaration allowed? IMHO this should at least be a warning.
Re: Associative array with duplicated keys?
On Thursday, 5 November 2015 at 08:55:10 UTC, Andrea Fontana wrote: Check this: http://dpaste.dzfl.pl/ebbb3ebac60e It doesn't give any error or warning. And writeln seems confused (do you see that "," at the end?) This is an outright bug, please report on issues.dlang.org:

void main()
{
    import std.stdio : writeln;
    ["key": 10, "key": 20, "key": 30].length.writeln;
    ["key": 30].length.writeln;
}

Prints:
3
1
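Until that is fixed, a possible workaround sketch (my own illustration; strictAA is a made-up helper, not part of Phobos) is to build the table from explicit key/value lists and fail loudly on duplicates instead of silently keeping the last value:

// Illustrative helper: builds an AA and rejects duplicate keys.
V[K] strictAA(K, V)(K[] keys, V[] values)
{
    assert(keys.length == values.length, "keys and values must pair up");
    V[K] result;
    foreach (i, key; keys)
    {
        assert(key !in result, "duplicate key in associative array literal");
        result[key] = values[i];
    }
    return result;
}

void main()
{
    auto aa = strictAA(["a", "b"], [1, 2]);
    assert(aa.length == 2);
    // strictAA(["key", "key"], [10, 20]); // would trip the duplicate-key assert
}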
Re: conver BigInt to string
On Thursday, 5 November 2015 at 16:29:30 UTC, Namal wrote: Hello I am trying to convert BigInt to string like that while trying to sort it: string s1 = to!string(a).dup.sort; and get an error cannot implicitly convert expression (_adSortChar(dup(to(a of type char[] to string what do I do wrong? Try this instead: string s1 = to!string(a).idup.sort() I suspect there are two things happening here. First, calling dup on a string yields char[], not string (the difference being that string is immutable and char[] is not). A char[] cannot implicitly convert to string due to the difference in immutability, although the compiler *should* realize that the result of dup is a "unique" expression and be able to implicitly convert it... It may be because you then pass it to sort, which must keep it as a char[]. The second issue is that using .sort instead of .sort() (note the parentheses) calls the built-in sort instead of std.algorithm.sort, which is strongly discouraged. You should always use std.algorithm.sort over the built-in sort, which you can do just by appending those parentheses.
Re: conver BigInt to string
On Thursday, 5 November 2015 at 16:35:01 UTC, BBasile wrote: On Thursday, 5 November 2015 at 16:29:30 UTC, Namal wrote: Hello I am trying to convert BigInt to string like that while trying to sort it: string s1 = to!string(a).dup.sort; and get an error cannot implicitly convert expression (_adSortChar(dup(to(a of type char[] to string what do I do wrong? try ".idup" otherwise "auto s1 = " auto did it, but idup leads to Error: can only sort a mutable array
Re: conver BigInt to string
On Thursday, 5 November 2015 at 17:13:07 UTC, Ilya Yaroshenko wrote: string s1 = to!string(a).dup.sort.idup; well, but I was just told not to use sort ??
Re: conver BigInt to string
On Thursday, 5 November 2015 at 16:45:10 UTC, Meta wrote: On Thursday, 5 November 2015 at 16:29:30 UTC, Namal wrote: Hello I am trying to convert BigInt to string like that while trying to sort it: string s1 = to!string(a).dup.sort; and get an error cannot implicitly convert expression (_adSortChar(dup(to(a of type char[] to string what do I do wrong? Try this instead: string s1 = to!string(a).idup.sort() If I try it like that i get: Error: template std.algorithm.sorting.sort cannot deduce function from argument types !()(char[]), candidates are: /../src/phobos/std/algorithm/sorting.d(996):
conver BigInt to string
Hello I am trying to convert BigInt to string like that while trying to sort it: string s1 = to!string(a).dup.sort; and get an error cannot implicitly convert expression (_adSortChar(dup(to(a of type char[] to string what do I do wrong?
Re: conver BigInt to string
On Thursday, 5 November 2015 at 16:39:03 UTC, Namal wrote: On Thursday, 5 November 2015 at 16:35:01 UTC, BBasile wrote: On Thursday, 5 November 2015 at 16:29:30 UTC, Namal wrote: Hello I am trying to convert BigInt to string like that while trying to sort it: string s1 = to!string(a).dup.sort; and get an error cannot implicitly convert expression (_adSortChar(dup(to(a of type char[] to string what do I do wrong? try ".idup" otherwise "auto s1 = " auto did it, but idup leads to Error: can only sort a mutable array sorry, I feel embarrassed now...good luck i wish you.
Re: conver BigInt to string
On Thursday, 5 November 2015 at 16:53:50 UTC, Namal wrote: On Thursday, 5 November 2015 at 16:45:10 UTC, Meta wrote: On Thursday, 5 November 2015 at 16:29:30 UTC, Namal wrote: Hello I am trying to convert BigInt to string like that while trying to sort it: string s1 = to!string(a).dup.sort; and get an error cannot implicitly convert expression (_adSortChar(dup(to(a of type char[] to string what do I do wrong? Try this instead: string s1 = to!string(a).idup.sort() If I try it like that i get: Error: template std.algorithm.sorting.sort cannot deduce function from argument types !()(char[]), candidates are: /../src/phobos/std/algorithm/sorting.d(996): string s1 = to!string(a).dup.sort.idup;
cast(T) documentation
Hello, Is cast(T) documented all in one place somewhere? I'd like to understand exactly what it does and does not do.
Re: good reasons not to use D?
On Monday, 2 November 2015 at 17:07:33 UTC, Laeeth Isharc wrote: On Sunday, 1 November 2015 at 09:07:56 UTC, Ilya Yaroshenko wrote: On Friday, 30 October 2015 at 10:35:03 UTC, Laeeth Isharc wrote: Any other thoughts? Floating point operations can be extended automatically (even without some kind of 'fastmath' flag) up to 80-bit FP on 32-bit Intel processors. This is the worst solution for a language that wants to be used in accounting or math. Thoughts like "larger precision entails a more accurate answer" are naive and wrong. --Ilya Thanks, Ilya. An important but subtle point. Funnily enough, I haven't done so much specifically heavily numerical work so far, but I trust that in the medium term the dlangscience project by John Colvin and others will bear fruit. What would you suggest is a better option to address the concerns you raised? The main goal is that we need to rewrite std.math & std.mathspecial to:
1. Support all FP types (float/double)
2. Be portable (no assembler when possible! We _can_ use CEPHES code)
3. Produce equal results on different machines with different compilers for float/double.
The problem is that many math functions use compensatory hacks to make the result more accurate. But these hacks have _very_ bad behaviour if we allow 80-bit math for double and float. Q: What do we need to change? A: Only the dmd backend for old Intel/AMD 32-bit processors without SSE support. Other compilers and platforms are not affected. So D will have the same FP behaviour as C. See also the comment: https://github.com/D-Programming-Language/phobos/pull/2991#issuecomment-74506203 -- Ilya
Re: conver BigInt to string
On Thursday, 5 November 2015 at 16:29:30 UTC, Namal wrote: Hello I am trying to convert BigInt to string like that while trying to sort it: string s1 = to!string(a).dup.sort; and get an error cannot implicitly convert expression (_adSortChar(dup(to(a of type char[] to string what do I do wrong? try ".idup" otherwise "auto s1 = "
Re: conver BigInt to string
Namal: Hello I am trying to convert BigInt to string like that while trying to sort it:

void main()
{
    import std.stdio, std.algorithm, std.conv, std.bigint, std.string;
    auto n = 17.BigInt ^^ 179;
    n.text.dup.representation.sort().release.assumeUTF.writeln;
}

Bye, bearophile
Re: conver BigInt to string
void main()
{
    import std.stdio, std.algorithm, std.conv, std.bigint, std.string;
    auto n = 17.BigInt ^^ 179;
    n.text.dup.representation.sort().release.assumeUTF.writeln;
}

Better:

n.to!(char[]).representation.sort().release.assumeUTF.writeln;

Bye, bearophile
Re: bearophile is back! :) was: Re: conver BigInt to string
On Thursday, 5 November 2015 at 19:38:23 UTC, Ali Çehreli wrote: Good one! ;) I'm really happy that he is still around. Ali So am I! The more, the merrier!
Re: cast(T) documentation
On 11/05/2015 07:51 AM, Carl Sturtivant wrote: Hello, Is cast(T) documented all in one place somewhere? I'd like to understand exactly what it does and does not do. This is what I can find: http://dlang.org/expression.html#CastExpression Ali
bearophile is back! :) was: Re: conver BigInt to string
On 11/05/2015 09:40 AM, bearophile wrote: Bye, bearophile Were you immersed in another language? Rust? Ali
Re: bearophile is back! :) was: Re: conver BigInt to string
On 11/05/2015 11:35 AM, Chris wrote: On Thursday, 5 November 2015 at 19:30:02 UTC, Ali Çehreli wrote: On 11/05/2015 09:40 AM, bearophile wrote: Bye, bearophile Were you immersed in another language? Rust? Ali His D doesn't seem to be Rusty though! Good one! ;) I'm really happy that he is still around. Ali
Re: bearophile is back! :) was: Re: conver BigInt to string
On Thursday, 5 November 2015 at 19:30:02 UTC, Ali Çehreli wrote: On 11/05/2015 09:40 AM, bearophile wrote: Bye, bearophile Were you immersed in another language? Rust? Ali His D doesn't seem to be Rusty though!
Re: Operator implicit conversion difference
On 11/05/2015 05:20 AM, ixid wrote: > This seems very inconsistent, does a += b not lower to a = a + b? Apparently not: http://dlang.org/expression.html#AssignExpression It says "The right operand is implicitly converted to the type of the left operand". So, the rules are different. Ali
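To illustrate the practical consequence (my own example, not from the thread): because the op-assign result is converted back implicitly, the overflow that a = a + b would reject just wraps around silently.

void main()
{
    ushort a = ushort.max;
    ushort b = 1;
    a += b;         // compiles: the int result is implicitly converted back to ushort
    assert(a == 0); // ...and silently wraps around
    // a = a + b;   // error: cannot implicitly convert expression of type int to ushort
}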
Re: parallel
On 05.11.2015 21:30, Handyman wrote: Seems that 4 cores go all out on first 4 dishes, then 1 core deals with the last dish. With 4 cores I expect diner is ready after 5/4 = 1.25 secs though. What did I do wrong? You describe the situation correctly. The unit of work is a dish. That is, the work for a single dish is not split between cores. So one of your four cores has to make two dishes. That takes two seconds.
Re: parallel
On 11/05/2015 12:58 PM, Handyman wrote: > On Thursday, 5 November 2015 at 20:54:37 UTC, anonymous wrote: >> There is not attempt to split the `prepare` action up and run parts of >> it in parallel. > > So 1.25 secs is impossible? For the given example, yes, impossible. However, as mentioned elsewhere in this thread, if prepare() is parallelizable itself, then it would be possible. Ali
Re: Align a variable on the stack.
On Thursday, 5 November 2015 at 21:24:03 UTC, TheFlyingFiddle wrote: On Thursday, 5 November 2015 at 21:22:18 UTC, TheFlyingFiddle wrote: On Thursday, 5 November 2015 at 11:14:50 UTC, Marc Schütz wrote: ~10x slowdown... I forgot to mention this but I am using DMD 2.069.0-rc2 for x86 windows. I reduced it further:

struct A { float x, y, z, w; }
struct B
{
    float x = float.nan;
    float y = float.nan;
    float z = float.nan;
    float w = float.nan;
}

void initVal(T)(ref T t, ref float k)
{
    pragma(inline, false);
}

void benchA()
{
    foreach(float f; 0 .. 1000_000)
    {
        A val = A.init;
        initVal(val, f);
    }
}

void benchB()
{
    foreach(float f; 0 .. 1000_000)
    {
        B val = B.init;
        initVal(val, f);
    }
}

int main(string[] argv)
{
    import std.datetime;
    import std.stdio;
    auto res = benchmark!(benchA, benchB)(1);
    writeln("Default: ", res[0]);
    writeln("Explicit: ", res[1]);
    readln;
    return 0;
}

Also, I am using dmd -release -boundscheck=off -inline. The pragma(inline, false) is there to prevent it from removing the assignment in the loop.
Re: Align a variable on the stack.
On Thursday, 5 November 2015 at 23:37:45 UTC, TheFlyingFiddle wrote: On Thursday, 5 November 2015 at 21:24:03 UTC, TheFlyingFiddle wrote: [...] I reduced it further: [...] These run at the exact same speed for me and produce identical assembly output, from a quick glance. dmd 2.069, -O -release -inline
Re: conver BigInt to string
On Thursday, 5 November 2015 at 16:45:10 UTC, Meta wrote: The second issue is that using .sort instead of .sort() (note the parentheses) calls the built-in sort instead of std.algorithm.sort, which is strongly discouraged. You should always use std.algorithm.sort over the built-in sort, which you can do just by appending those parentheses. Whoa whoa whoa... This is the first time I've heard about this difference, and I've used .sort plenty of times... That seems like really, REALLY bad design, especially considering the language allows functions to be called without parentheses. I thought I was using std.algorithm's version the whole time. What's the difference between the implementations of arrays' .sort property and std.algorithm.sort()? And does sort() throw out that "unable to deduce function argument" error for a character array of all things?
Re: parallel
On 11/05/2015 12:43 PM, Handyman wrote: On Thursday, 5 November 2015 at 20:40:00 UTC, anonymous wrote: So one of your four cores has to make two dishes. That takes two seconds. So make fine-grained? foreach (i; 0..50) Thread.sleep(20.msecs); But then my program still says: '2 secs'. Please enlighten me. That's still 1 second per task. The function prepare() cannot be executed by more than one core. Ali
Re: parallel
On Thursday, 5 November 2015 at 20:30:05 UTC, Handyman wrote: Seems that 4 cores go all out on first 4 dishes, then 1 core deals with the last dish. With 4 cores I expect diner is ready after 5/4 = 1.25 secs though. What did I do wrong? The first four dishes get scheduled, all of them sleep for 1 second in parallel, then complete at roughly the same time. One second has passed. Now there's one dish left. It gets scheduled, sleeps for 1 second, and finishes (the other threads remain idle). Two seconds have passed.
Re: parallel
On Thursday, 5 November 2015 at 20:40:00 UTC, anonymous wrote: So one of your four cores has to make two dishes. That takes two seconds. So make it fine-grained?

foreach (i; 0..50)
    Thread.sleep(20.msecs);

But then my program still says: '2 secs'. Please enlighten me.
Re: parallel
On 05.11.2015 21:43, Handyman wrote:

foreach (i; 0..50)
    Thread.sleep(20.msecs);

But then my program still says: '2 secs'. Please enlighten me.

Let's look at the line that does the `parallel` call:

foreach (dish; parallel(dishes, 1))
    dish.prepare();

This means that `dishes` is processed in parallel. Multiple threads are started to execute `prepare()` on multiple elements of `dishes` at the same time. Each of those `dish.prepare()` calls is done on only one thread, though. There is no attempt to split the `prepare` action up and run parts of it in parallel.
Re: parallel
On Thursday, 5 November 2015 at 20:45:25 UTC, Ali Çehreli wrote: That's still 1 second per task. The function prepare() cannot be executed by more than one core. Thanks. OK. So 'prepare' is atomic? Then let's turn it around: how can I make the cores prepare a meal of 5 dishes in 1.25 secs? Should I rewrite, or split, 'prepare'?
Re: parallel
On Thursday, 5 November 2015 at 20:54:37 UTC, anonymous wrote: There is not attempt to split the `prepare` action up and run parts of it in parallel. So 1.25 secs is impossible?
Re: parallel
On 05.11.2015 21:52, Handyman wrote: On Thursday, 5 November 2015 at 20:45:25 UTC, Ali Çehreli wrote: That's still 1 second per task. The function prepare() cannot be executed by more than one core. Thanks. OK. So 'prepare' is atomic? Then let's turn it around: how can I make the cores prepare a meal of 5 dishes in 1.25 secs? Should I rewrite, or split, 'prepare'? You'd have to split `prepare` further into parallelizable parts. In a real world scenario that may or may not be possible. When the goal is just sleeping we can do it, of course. Just do another `parallel` loop in `prepare`:

import std.range : iota;
foreach (i; parallel(iota(50)))
    Thread.sleep(20.msecs);

This won't get you down to exactly 1.25 seconds, because the start/finish print outs still take some time, and because of parallelization overhead.
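Putting the replies together, a rough sketch of how the original program could be reworked (my own assembly of the suggestions above; sleeping still stands in for real work, and as noted the print-outs and parallelization overhead mean it will land somewhat above 1.25 seconds):

import std.stdio;
import core.thread;
import std.datetime;
import std.parallelism;
import std.range : iota;

struct Dish
{
    string name;
    void prepare()
    {
        writeln("Start with the ", name, ".");
        // 50 independent 20 ms chunks instead of one 1 s sleep,
        // so the inner loop can also be spread over several cores.
        foreach (i; parallel(iota(50)))
            Thread.sleep(20.msecs);
        writeln("Finished the ", name, ".");
    }
}

void main()
{
    auto dishes = [Dish("soup"), Dish("sauce"), Dish("fries"), Dish("fish"), Dish("ice")];
    auto sw = StopWatch(AutoStart.yes);
    foreach (dish; parallel(dishes, 1))
        dish.prepare();
    sw.stop;
    writefln("Diner is ready. Cooking took %.3f seconds.", cast(float) sw.peek.msecs / 1000);
}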
Re: Align a variable on the stack.
On Thursday, 5 November 2015 at 11:14:50 UTC, Marc Schütz wrote: On Thursday, 5 November 2015 at 03:52:47 UTC, TheFlyingFiddle wrote: Can you publish two compilable and runnable versions of the code that exhibit the difference? Then we can have a look at the generated assembly. If there's really different code being generated depending on whether the .init value is explicitly set to float.nan or not, then this suggests there is a bug in DMD. I created a simple example here:

struct A { float x, y, z, w; }
struct B
{
    float x = float.nan;
    float y = float.nan;
    float z = float.nan;
    float w = float.nan;
}

void initVal(T)(ref T t, ref float k)
{
    pragma(inline, false);
    t.x = k;
    t.y = k * 2;
    t.z = k / 2;
    t.w = k^^3;
}

__gshared A[] a;
void benchA()
{
    A val;
    foreach(float f; 0 .. 1000_000)
    {
        val = A.init;
        initVal(val, f);
        a ~= val;
    }
}

__gshared B[] b;
void benchB()
{
    B val;
    foreach(float f; 0 .. 1000_000)
    {
        val = B.init;
        initVal(val, f);
        b ~= val;
    }
}

int main(string[] argv)
{
    import std.datetime;
    import std.stdio;
    auto res = benchmark!(benchA, benchB)(1);
    writeln("Default: ", res[0]);
    writeln("Explicit: ", res[1]);
    return 0;
}

output:
Default: TickDuration(1637842)
Explicit: TickDuration(167088)

~10x slowdown...
Re: parallel
On Thursday, 5 November 2015 at 21:10:16 UTC, anonymous wrote: parallel(iota(50))) Wow. I have dealt with ranges and 'iota' (and with parallel), but I admit I have to think hard about this example. Thanks a bunch all for your patience.
parallel
import std.stdio;
import core.thread;
import std.datetime; // for StopWatch
import std.parallelism;

void say(string s)
{
    // write and flush
    writeln(s);
    stdout.flush();
}

struct Dish
{
    string name;
    void prepare()
    {
        say("Start with the " ~ name ~ ".");
        Thread.sleep(1.seconds); // artificially consume time
        say("Finished the " ~ name ~ ".");
    }
}

void main()
{
    auto dishes = [ Dish("soup"), Dish("sauce"), Dish("fries"), Dish("fish"), Dish("ice") ];
    auto sw = StopWatch(AutoStart.yes);
    foreach (dish; parallel(dishes, 1))
        dish.prepare();
    sw.stop;
    writefln("Diner is ready. Cooking took %.3f seconds.", cast(float) sw.peek.msecs / 1000);
}

gives:

Start with the soup.
Start with the sauce.
Start with the fries.
Start with the fish.
Finished the sauce.
Finished the fries.
Start with the ice.
Finished the soup.
Finished the fish.
Finished the ice.
Diner is ready. Cooking took 1.999 seconds.

Seems that 4 cores go all out on the first 4 dishes, then 1 core deals with the last dish. With 4 cores I expect dinner to be ready after 5/4 = 1.25 secs though. What did I do wrong?
Re: Preventing implicit conversion
On Thursday, 5 November 2015 at 13:23:34 UTC, Adam D. Ruppe wrote: On Thursday, 5 November 2015 at 10:07:30 UTC, Dominikus Dittes Scherkl wrote: ubyte b = 1u; auto c = b + 1u; I expect the 1u to be of type ubyte - and also c. This won't work because of the one-expression rule. In the second line, it doesn't know for sure what b is, it just knows it is somewhere between 0 and 255. So it assumes the worst, that it is 255, and you add one, giving 256... which doesn't fit in a byte. That would be fine - but c is not ushort (which the worst-case 256 would fit in), not even uint, but int! A signed type! Just because of the crazy C integer promotion rules! And, ok, one needs to accept that auto may not do exactly what I wish for, but if I give an exact type that is likely to fit (and has to, if all operands are of the same type), I expect it to work without extra casts:

ubyte d = b + 1u;       // doesn't compile
ubyte d = b + (ubyte)1; // works - and overflows to 0 if b is 255
Re: Align a variable on the stack.
On Thursday, 5 November 2015 at 21:22:18 UTC, TheFlyingFiddle wrote: On Thursday, 5 November 2015 at 11:14:50 UTC, Marc Schütz wrote: ~10x slowdown... I forgot to mention this but I am using DMD 2.069.0-rc2 for x86 windows.
Re: Preventing implicit conversion
On Thursday, 5 November 2015 at 22:15:46 UTC, Dominikus Dittes Scherkl wrote: On Thursday, 5 November 2015 at 13:23:34 UTC, Adam D. Ruppe wrote: On Thursday, 5 November 2015 at 10:07:30 UTC, Dominikus Dittes Scherkl wrote: ubyte d = b + (ubyte)1; Sorry, should of course be: ubyte d = b + ubyte(1); Too much C lately :-/
Re: conver BigInt to string
On Thursday, 5 November 2015 at 20:45:45 UTC, TheGag96 wrote: Whoa whoa whoa... This is the first time I've heard about this difference, and I've used .sort plenty of times... That seems like really, REALLY bad design, especially considering the language allows functions to be called without parentheses. I thought I was using std.algorithm's version the whole time. It's a legacy issue that will hopefully be fixed someday. The issue is that ever since D1, arrays have had a .sort property that uses a built-in sorting function. The compiler will only recognize it as the std.algorithm version of sort if you use parentheses. What's the difference between the implementations of arrays' .sort property and std.algorithm.sort()? And does sort() throw out that "unable to deduce function argument" error for a character array of all things? The built-in sort is buggy and slow and should never be used. As far as I know it does not produce errors of its own, so any error messages you see like that are coming from the std.algorithm sort.
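To make the difference concrete, a small illustration of my own (assuming a compiler of that era, where the legacy built-in .sort array property still exists): the parentheses, plus the std.algorithm import, are what select the library sort.

void main()
{
    import std.algorithm : sort;
    import std.stdio : writeln;

    int[] a = [3, 1, 2];
    a.sort();   // std.algorithm.sort via UFCS - the parentheses matter
    writeln(a); // [1, 2, 3]

    // a.sort;  // without parentheses this would hit the legacy built-in array property
}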
Re: Unittest in a library
On Friday, 6 November 2015 at 03:59:07 UTC, Charles wrote: Is it possible to have unittest blocks if I'm compiling a library? I've tried having this: test.d: class Classy { unittest { assert(0, "failed test"); } } and then build it with `dmd test.d -lib -unittest` and it doesn't fail the unittest. You can test the unittests by using the -main switch. http://dlang.org/dmd-linux.html#switch-main
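For example (a sketch of the workflow, assuming a POSIX shell and the test.d shown in the question; -main makes the compiler supply an empty main so the resulting executable actually runs the unittest blocks):

dmd -unittest -main test.d   # build a throwaway test runner instead of a library
./test                       # the assert(0, "failed test") unittest now fails at runtime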
Re: cast(T) documentation
On Thursday, 5 November 2015 at 18:52:05 UTC, Ali Çehreli wrote: On 11/05/2015 07:51 AM, Carl Sturtivant wrote: Hello, Is cast(T) documented all in one place somewhere? I'd like to understand exactly what it does and does not do. This is what I can find: http://dlang.org/expression.html#CastExpression Ali Hello Ali, I probably should have found that! Thanks.
Unittest in a library
Is it possible to have unittest blocks if I'm compiling a library? I've tried having this:

test.d:

class Classy
{
    unittest
    {
        assert(0, "failed test");
    }
}

and then build it with `dmd test.d -lib -unittest` and it doesn't fail the unittest.
Re: Align a variable on the stack.
On Friday, 6 November 2015 at 00:43:49 UTC, rsw0x wrote: On Thursday, 5 November 2015 at 23:37:45 UTC, TheFlyingFiddle wrote: On Thursday, 5 November 2015 at 21:24:03 UTC, TheFlyingFiddle wrote: [...] I reduced it further: [...] these run at the exact same speed for me and produce identical assembly output from a quick glance dmd 2.069, -O -release -inline Are you running on windows? I tested on windows x64 and there I also get the exact same speed for both functions.
Re: Align a variable on the stack.
On Friday, 6 November 2015 at 01:17:20 UTC, TheFlyingFiddle wrote: On Friday, 6 November 2015 at 00:43:49 UTC, rsw0x wrote: On Thursday, 5 November 2015 at 23:37:45 UTC, TheFlyingFiddle wrote: On Thursday, 5 November 2015 at 21:24:03 UTC, TheFlyingFiddle wrote: [...] I reduced it further: [...] these run at the exact same speed for me and produce identical assembly output from a quick glance dmd 2.069, -O -release -inline Are you running on windows? I tested on windows x64 and there I also get the exact same speed for both functions. linux x86-64
Re: Preventing implicit conversion
On Thursday, November 05, 2015 09:33:39 ixid via Digitalmars-d-learn wrote:
> In C++ I can add two shorts together without having to use a cast
> to assign the result to one of the two shorts. It just seems
> super clunky not to be able to do basic operations on basic types
> without casts everywhere.

That's why we have value range propagation - so that when the compiler can prove that the result will fit in the smaller type, it'll let you assign to it. Perhaps the compiler should do more with that than it currently does, but it definitely helps reduce the number of casts that are required for narrowing conversions. But allowing implicit narrowing conversions is a source of bugs, which is why languages like D, C#, and Java have all made narrowing conversions illegal without a cast. Yes, that can be annoying when you need to do math on a byte or short, and you want the result to end up in a byte or short, but it prevents bugs. It's a tradeoff. Fortunately, VRP improves the situation, but we're not going to be able to prevent narrowing bugs while still allowing implicit narrowing conversions. C/C++ went the route that requires fewer casts but more easily introduces bugs, whereas D, Java, and C# went the route where it's harder to introduce bugs but doing arithmetic on types smaller than int gets a bit annoying. Personally, I think that the route that D has taken is the better one, but it is a matter of opinion and priorities. But if it's important enough to you to not need to cast for arithmetic operations on small integer types, you can always create a wrapper type that does all of the casts for you so that you get the implicit conversions. - Jonathan M Davis
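A bare-bones sketch of such a wrapper (my own illustration, not code Jonathan posted; the name WrapShort and the +-only support are just for the example):

struct WrapShort
{
    short value;

    WrapShort opBinary(string op : "+")(WrapShort rhs) const
    {
        // The cast lives here once, instead of at every call site.
        return WrapShort(cast(short)(value + rhs.value));
    }

    alias value this; // lets the wrapper be read as a plain short
}

void main()
{
    auto a = WrapShort(1);
    auto b = WrapShort(2);
    WrapShort c = a + b; // no cast needed at the use site
    assert(c == 3);
}

Note that this quietly wraps around on overflow - exactly the class of bug the no-implicit-narrowing rule is meant to catch - so it trades safety for convenience.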
Operator implicit conversion difference
This may have been overlooked in my other thread so I wanted to ask again: This seems very inconsistent, does a += b not lower to a = a + b? I guess not based on the below:

ushort a = ushort.max, b = ushort.max;
a += b;    // Compiles fine
a = a + b; // Error: cannot implicitly convert expression (cast(int)a + cast(int)b) of type int to ushort
Re: Align a variable on the stack.
On Thursday, 5 November 2015 at 03:52:47 UTC, TheFlyingFiddle wrote: [...] I solved the problem by changing the struct to look like this:

align(16) struct Pos
{
    float x = float.nan;
    float y = float.nan;
    float z = float.nan;
    float w = float.nan;
}

Wow, that's quite strange. FP members should be initialized to NaN even without an explicit initializer! E.g. you should get the same with

align(16) struct Pos
{
    float x, y, z, w;
}
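A quick check of that claim (my own snippet; PosE is just a made-up name for the explicitly initialized variant): float fields default to float.nan whether or not the initializer is spelled out, so both declarations should yield the same .init value.

import std.math : isNaN;

struct Pos  { float x, y, z, w; }
struct PosE { float x = float.nan, y = float.nan, z = float.nan, w = float.nan; }

void main()
{
    assert(Pos.init.x.isNaN && PosE.init.x.isNaN); // both default to NaN
}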
Re: Preventing implicit conversion
On Thursday, 5 November 2015 at 10:07:30 UTC, Dominikus Dittes Scherkl wrote: And I want to have small number literals automatically choosing the smallest fitting type. It does, that's value range propagation at work. Inside one expression, if the compiler can prove it fits in a smaller type, the explicit cast is not necessary.

ubyte a = 255;       // allowed, despite 255 being an int literal
ubyte b = 253L + 2L; // allowed, though I used longs there
ubyte c = 255 + 1;   // disallowed, 256 doesn't fit

However, the key there was "in a single expression". If you break it into multiple lines with runtime values, the compiler assumes the worst:

int i = 254;
int i2 = 1;
ubyte a2 = i + i2; // won't work because it doesn't realize the values

But, adding some constant operation can narrow it back down:

ubyte a3 = (i + i2) & 0xff; // but this does, because it knows anything & 0xff will always fit in a byte

ubyte b = 1u; auto c = b + 1u; I expect the 1u to be of type ubyte - and also c.

This won't work because of the one-expression rule. In the second line, it doesn't know for sure what b is, it just knows it is somewhere between 0 and 255. So it assumes the worst, that it is 255, and you add one, giving 256... which doesn't fit in a byte. It requires the explicit cast or a & 0xff or something like that to make the bit truncation explicit. I agree this can be kinda obnoxious (and I think kinda pointless if you're dealing with explicitly typed smaller things throughout) but knowing what it is actually doing can help a little.