Re: String Comparison Operator

2017-04-30 Thread Xinok via Digitalmars-d-learn

On Sunday, 30 April 2017 at 19:05:18 UTC, bauss wrote:

On Sunday, 30 April 2017 at 16:15:41 UTC, Xinok wrote:

On Sunday, 30 April 2017 at 15:31:39 UTC, Jolly James wrote:

Is there a String Comparison Operator in D?


Yeah, just the usual comparison operators:

"abc" == "abc"
"abc" != "ABC"


~ is for string concatenation, i.e.:

"abc" ~ "def" == "abcdef"


Just to clarify.

It's not actually a string concatenation operator, it's an 
array appending operator.


Strings are just an alias for immutable(char)[] and not 
actually a distinct type, unlike other languages such as C# and 
Java where strings are objects.


In fact, strings don't have any operators that don't work with 
other types of arrays. Likewise, functions such as replace 
aren't really string functions; they work with any type of 
array.


Regarding concatenation vs appending, it's kind of both depending 
on the type of the operands. What I mean is all of the following 
are valid:


[10, 20] ~ [30, 40] == [10, 20, 30, 40]  // Concatenation
[10, 20] ~ 30   == [10, 20, 30]  // Appending
10 ~ [20, 30]   == [10, 20, 30]  // Prepending
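
To make the point concrete, here's a small runnable example 
(mine, not from the thread) showing that ~, ==, and array 
functions like std.array.replace treat strings and other arrays 
uniformly:

import std.array : replace;

void main()
{
    // ~ and == behave the same for strings and for other arrays
    assert("abc" ~ "def" == "abcdef");
    assert([10, 20] ~ 30 == [10, 20, 30]);
    assert(10 ~ [20, 30] == [10, 20, 30]);

    // replace is an array function, not a string-specific one
    assert([1, 2, 3, 2].replace([2], [9]) == [1, 9, 3, 9]);
    assert("abc".replace("b", "X") == "aXc");
}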


Re: String Comparison Operator

2017-04-30 Thread Xinok via Digitalmars-d-learn

On Sunday, 30 April 2017 at 15:31:39 UTC, Jolly James wrote:

Is there a String Comparison Operator in D?


Yeah, just the usual comparison operators:

"abc" == "abc"
"abc" != "ABC"


~ is for string concatenation, i.e.:

"abc" ~ "def" == "abcdef"


Re: Interpolated strings

2017-04-15 Thread Xinok via Digitalmars-d

On Saturday, 15 April 2017 at 20:04:13 UTC, Jonas Drewsen wrote:

Hi all




I shared my thoughts on such a feature just a couple weeks ago:

https://forum.dlang.org/post/oedeijdewmhazaqaz...@forum.dlang.org


Re: [OT] ISO C++ 17 changes

2017-04-04 Thread Xinok via Digitalmars-d

On Tuesday, 4 April 2017 at 02:43:26 UTC, evilrat wrote:
String interpolation would be nice too, it would really help 
with readability!


This really isn't in the spirit of D and is better left to 
library functions which give the user far more power and 
flexibility. Incorporating such a feature into the language 
raises many questions and concerns:


* It becomes a language feature thereby further bloating the 
language and runtime

* What if you want it to be @nogc?
* What if you want to combine it with allocators?
* What if you want to store the result in a particular buffer?
* What if you want the result to be lazily evaluated?
* What if you want an input range of chars? ... that is lazily 
evaluated?

* What if you want to format the arguments in a specific way?

Given the ease with which such a feature can be implemented and 
used as a library function, I don't see interpolated strings as 
being a necessary feature in D.
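
To illustrate the library route, here's a minimal sketch (a 
hypothetical helper, not a Phobos API): std.conv.text already 
covers the eager case, and lazy, @nogc, or buffer-based variants 
could be layered on ranges in the same spirit.

import std.conv : text;
import std.stdio : writeln;

// hypothetical helper, not part of Phobos: converts each argument
// to a string and concatenates them eagerly, much like an
// interpolated string would
string interp(Args...)(Args args)
{
    return text(args);
}

void main()
{
    string name = "world";
    int count = 3;
    writeln(interp("Hello, ", name, "! You have ", count, " messages."));
}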


Re: Why double not? (!!)

2016-11-18 Thread Xinok via Digitalmars-d-learn

On Saturday, 19 November 2016 at 03:52:02 UTC, Ryan wrote:
Why do I see double `not` operators sometimes in D code? An 
example is the last post of this thread.


http://forum.dlang.org/thread/ktlpnikvdwgbvfaam...@forum.dlang.org


import core.sys.windows.windows : GetConsoleCP;
bool hasConsole = !!GetConsoleCP();


Thanks.


It's a more concise way of writing:
GetConsoleCP() != 0

You can do this in C/C++ as well (and presumably some other 
languages).
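
A small illustration (mine, not from the linked thread): the 
first ! converts the value to bool and negates it, and the 
second ! flips it back, so !!x is just (x != 0) expressed as a 
bool.

void main()
{
    int x = 42;
    bool hasValue = !!x;    // same as (x != 0)
    assert(hasValue == true);
    assert(!!0 == false);

    int* p = null;
    assert(!!p == false);   // works for pointers too
}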


Re: Fallback 'catch-all' template functions

2016-09-02 Thread Xinok via Digitalmars-d

On Thursday, 1 September 2016 at 05:37:50 UTC, Manu wrote:

So, consider a set of overloads:

  void f(T)(T t) if(isSomething!T) {}
  void f(T)(T t) if(isSomethingElse!T) {}
  void f(T)(T t) {}

I have a recurring problem where I need a fallback function 
like the bottom one, which should be used in lieu of a more 
precise match. This is obviously an ambiguous call, but this is 
a pattern that comes up an awful lot. How to do it in D?


I've asked this before, and people say:

  void f(T)(T t) if(!isSomething!T && !isSomethingElse!T) {}

Consider that more overloads are being introduced by users 
spread out across many modules that define their own kind of T; 
this solution is no good.


In the past, I have suggested using the "default" keyword to 
specify a fallback function of this kind. I think it's a useful 
pattern for generic algorithms that have optimized variants on 
specific types for performance.


   void f(T)(T t) if(isSomething!T) {}
   void f(T)(T t) if(isSomethingElse!T) {}
   void f(T)(T t) default {}
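
For comparison, here's a minimal sketch of what's possible today 
without new syntax: fold everything into one template and 
dispatch with static if, so the last branch acts as the 
fallback. It avoids negating every constraint, but it only works 
when all the cases live in one place, which is exactly the 
scaling problem described above.

import std.traits : isIntegral, isFloatingPoint;
import std.stdio : writeln;

// one template, dispatching internally; the final branch is the fallback
void f(T)(T t)
{
    static if (isIntegral!T)
        writeln("integral: ", t);
    else static if (isFloatingPoint!T)
        writeln("floating point: ", t);
    else
        writeln("fallback: ", t);
}

void main()
{
    f(42);       // integral
    f(3.14);     // floating point
    f("hello");  // fallback
}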


Re: IPFS

2016-08-14 Thread Xinok via Digitalmars-d

On Sunday, 14 August 2016 at 21:21:25 UTC, Nordlöw wrote:

I advice you all to read about IPFS at

https://ipfs.io/
...


It sounds interesting but what concerns me is the lack of 
attention given to privacy and anonymity, two things that should 
be core values and top priorities for any protocol that intends 
to supersede the HTTP protocol. If that wasn't bad enough, this 
protocol would archive everything on the web and the idea of an 
internet that "never forgets" is very disconcerting.


Re: pure D JPEG decoder, with progressive JPEG support, public domain

2016-06-17 Thread Xinok via Digitalmars-d-announce

On Friday, 17 June 2016 at 22:15:47 UTC, ketmar wrote:
i put it under unlicense[1], as some other works of the same 
author is using it, and it is basically the same PD.


[1] http://unlicense.org/


Unfortunately, using unlicense is just as problematic as using 
public domain:


https://programmers.stackexchange.com/questions/147111/what-is-wrong-with-the-unlicense

The next best thing is the CC0 license (Creative Commons Zero) 
which is better written than unlicense but it's currently not 
recommended for software / source code.


http://copyfree.org/content/standard/licenses/cc0/license.txt

After that, the most-open licenses with good legal standing would 
be Boost and MIT but then you run into the same issues again with 
incompatible licenses.


I don't have any recommendations but I thought it was worth 
pointing out that unlicense isn't the solution here.


Re: Idea: swap with multiple arguments

2016-05-24 Thread Xinok via Digitalmars-d
On Tuesday, 24 May 2016 at 18:51:32 UTC, Andrei Alexandrescu 
wrote:

On 05/24/2016 02:48 PM, Xinok wrote:
BTW, Phobos already has a function called bringToFront which 
can rotate/roll ranges but the interface is a bit different 
compared to other languages.

https://dlang.org/phobos/std_algorithm_mutation.html#.bringToFront


This may be a misunderstanding. The original discussion was 
about rotating a fixed and compile-time-known number of values, 
not a range. -- Andrei


No misunderstanding; I realize that what you're proposing is a 
variadic function which takes some arbitrary set of 
variables/l-values passed by reference. The message was directed 
at those discussing rotate/roll/etc from other languages. It 
seemed nobody was aware this functionality was already available 
in Phobos though understandable since bringToFront is an unusual 
name for this function.


Re: Idea: swap with multiple arguments

2016-05-24 Thread Xinok via Digitalmars-d

On Monday, 23 May 2016 at 20:01:08 UTC, Andrei Alexandrescu wrote:

...


I wish to make a different point which is more general regarding 
ideas like these. I see a lot of proposals to add this or that to 
the standard library and a lot of debate pursues. One point I've 
never seen mentioned though is that these ideas would probably 
have limited usage in the real world. I think a good rule of 
thumb to consider is that the standard library should be *general 
purpose* and mostly contain common functionality, with some 
exceptions of course. We should consider whether these things 
will actually see common usage or just add bloat to the standard 
library. Also consider that once it's incorporated into Phobos, 
the responsibility of maintaining that code falls on the 
community rather than those few individuals who actually desire 
that functionality.


I'm sure this feature has some useful application in some 
specific domain but I question its value in the community at 
large. I personally see no value in having a swap which can 
rotate three or more arguments to the left, or however we may 
choose to incorporate this functionality. On the other hand, I 
use swap() with two arguments all the time; it's very common 
functionality that most of us have likely used at least a few 
times.




BTW, Phobos already has a function called bringToFront which can 
rotate/roll ranges but the interface is a bit different compared 
to other languages.



https://dlang.org/phobos/std_algorithm_mutation.html#.bringToFront
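
For anyone unfamiliar with it, a minimal example of using 
bringToFront as a rotate (mine, not from the thread):

import std.algorithm : bringToFront;

void main()
{
    int[] a = [1, 2, 3, 4, 5];
    // rotate left by two: bring the tail a[2 .. $] to the front
    bringToFront(a[0 .. 2], a[2 .. $]);
    assert(a == [3, 4, 5, 1, 2]);
}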


Re: Idea: swap with multiple arguments

2016-05-23 Thread Xinok via Digitalmars-d

On Monday, 23 May 2016 at 20:01:08 UTC, Andrei Alexandrescu wrote:
So swap(a, b) swaps the contents of a and b. This could be 
easily generalized to multiple arguments such that swap(a1, a2, 
..., an) arranges things such that a1 gets an, a2 gets a1, a3 
gets a2, etc. I do know applications for three arguments. 
Thoughts? -- Andrei


It doesn't make sense for "swap" to take more than two arguments 
and it's unintuitive how it should rearrange the elements when 
you write swap(a1, a2, a3, ...). Who's to say that it should 
shift the elements to the left?


I'm not saying this is useless but it would really need a better 
name. "swap" only makes intuitive sense when it takes two 
arguments, no more and no less.


While this is technically a rotation, a "rotate" function 
generally takes an extra argument which is an element/index to 
rotate on. See: https://www.sgi.com/tech/stl/rotate.html


Re: Project: better partition

2016-05-22 Thread Xinok via Digitalmars-d
On Wednesday, 18 May 2016 at 19:54:19 UTC, Andrei Alexandrescu 
wrote:

...
No worries. Please take anything you need from there for your 
code, make it better, and contribute it back to the stdlib! -- 
Andrei


As it turns out, easier said than done.  I've been thinking about 
it for a few days now but I don't see a simple way to optimally 
merge the two techniques. The way that I alternate between 
iterating "lo" and "hi" (or lef/rig in my code) doesn't really 
work when you need to keep the iterator stationary until 
something fills the vacancy.


This is the best solution I have so far and it doesn't feel like 
a good solution at that:


for (;;)
{
    ++lo;
    for (;;)
    {
        if(r[lo] < p)  ++lo; else break;
        if(r[lo] <= p) ++lo; else break;
    }

    if(lo > hi) lo = hi;
    r[hi] = r[lo];

    --hi;
    for (;;)
    {
        if(p < r[hi])  --hi; else break;
        if(p <= r[hi]) --hi; else break;
    }
    if(lo >= hi) break;
    r[lo] = r[hi];
}

The idea is simple: alternate the check for equality in hopes of 
skipping some equal elements. Unfortunately, this modification 
requires a little more work and TWO sentinels at either end of 
the range because it may skip over the first.


In most real-world data, there's only marginal gains to be made 
in skipping over equal elements, too small to justify 
compromising the gains achieved by using sentinels and vacancies. 
So unless an optimal solution exists, it's just not worth it.


Re: Project: better partition

2016-05-18 Thread Xinok via Digitalmars-d

On Tuesday, 17 May 2016 at 19:27:22 UTC, Xinok wrote:
On Tuesday, 17 May 2016 at 17:31:47 UTC, Andrei Alexandrescu 
wrote:
We should take advantage of the improved partition code I 
discussed at ACCU. Also there's a person on 
https://www.reddit.com/r/programming/comments/4jlkhv/accu_2016_keynote_by_andrei_alexandrescu/ discussing a simpler algorithm based on a couple of additional assumptions.

...


Interesting optimization, I hope you don't mind if I use it for 
my implementation of Quicksort. However, I would like to 
suggest another improvement that I devised a while back.

...


I realize that I may have written this post a bit prematurely. I 
only looked at the code in your slides before writing this and 
didn't realize that you had mentioned the same point I made here 
in your live talk (about equal elements). So I may have come off 
a bit condescending and that wasn't my intention. Great talk 
though, interesting topic and always entertaining to watch you 
speak.


Re: Project: better partition

2016-05-17 Thread Xinok via Digitalmars-d
On Tuesday, 17 May 2016 at 17:31:47 UTC, Andrei Alexandrescu 
wrote:
We should take advantage of the improved partition code I 
discussed at ACCU. Also there's a person on 
https://www.reddit.com/r/programming/comments/4jlkhv/accu_2016_keynote_by_andrei_alexandrescu/ discussing a simpler algorithm based on a couple of additional assumptions.

...


Interesting optimization, I hope you don't mind if I use it for 
my implementation of Quicksort. However, I would like to suggest 
another improvement that I devised a while back.


One shortcoming I find in most implementations of partition is 
the unnecessary swapping of elements equal to the pivot resulting 
in much unneeded work. The code in your slides has this same 
shortcoming. Imagine, for some reason, you partition an array 
full of zeroes. It's going to move lots of elements around for 
no good reason.


The obvious solution is to simply skip over equal elements but 
that is not enough. Reconsider the array full of zeroes; if you 
simply skip over all equal elements on the first pass, then the 
pivot will end up at the very front or end of the array. Ideally, 
at least when sorting, you want the pivot to occur as close to 
the center as possible.


My solution is to alternate between incrementing "lo" and "hi" 
only one step at a time, skipping over equal elements in the 
process. This way, with an array full of zeroes, the pivot ends 
up in the center. Only once you find an element that belongs in 
the other partition do you fall back to the Hoare partition 
scheme and increment the other iterator until you find another 
element to swap with, but do not skip over equal elements in this 
case! Otherwise, you can trigger the same behavior as before with 
quadratic running time.


Anyways, my solution can be found at the link below. It can be 
over twice as fast in an ideal case, but when applied to real 
world data with lots of duplicate elements, maybe 5-10% faster.


https://github.com/Xinok/XSort/blob/master/xsort/introsort.d#L171

I don't claim credit for this technique. Admittedly I haven't 
really tried looking around to see if anybody else has come up 
with the same solution but I'm probably not the first.


Re: D mentioned and criticized

2016-05-16 Thread Xinok via Digitalmars-d

On Monday, 16 May 2016 at 18:25:23 UTC, Chris wrote:

I had a look at Loci, more specifically the language goals[1]:
...


I've been skimming through the docs and found one mention of D:

http://loci-lang.org/Exceptions.html#scope-exit-block

It seems to me that Loci is not a very "inspired" language 
meaning that it doesn't take inspiration from many other 
languages. The docs only ever refer to some classic mainstream 
languages including C++, Obj-C, Java, C#, with a hint of D. There 
are a few nice aspects such as modules, algebraic types, tuples, 
and concepts. But then I feel like it takes too much from C++ and 
mimics Java-esque OOP and concepts. I'm not saying these are bad 
things but there isn't much that makes Loci stand out as a 
programming language.


I'll add this to my list of PL bookmarks. It's a new language 
with its first stable build released just two years ago, so maybe 
the best is yet to come.


Re: Always false float comparisons

2016-05-09 Thread Xinok via Digitalmars-d

On Monday, 9 May 2016 at 20:14:36 UTC, Walter Bright wrote:

On 5/9/2016 11:37 AM, Xinok wrote:
All of these scenarios are capable of producing "incorrect" 
results, are a
source of discrete bugs (often corner cases that we failed to 
consider and
test), and can be hard to detect. It's about time we stopped 
being stubborn and
flagged these things as warnings. Even if they require a 
special compiler flag

and are disabled by default, that's better than nothing.


I've used a B+D language that does as you suggest (Wirth 
Pascal). It was highly unpleasant to use, as the code became 
littered with casts. Casts introduce their own set of bugs.


Maybe it's a bad idea to enable these warnings by default but 
what's wrong with providing a compiler flag to perform these 
checks anyways? For example, GCC has a compiler flag to yield 
warnings for signed+unsigned comparisons but it's not even 
enabled with the -Wall flag, only by specifying -Wextra or 
-Wsign-compare.


Re: Always false float comparisons

2016-05-09 Thread Xinok via Digitalmars-d

On Monday, 9 May 2016 at 18:51:58 UTC, tsbockman wrote:

On Monday, 9 May 2016 at 18:37:10 UTC, Xinok wrote:

...
(3) Generalize it to all comparisons as well, including < and 
...
(3) Makes no sense though; inequalities with mixed 
floating-point types are perfectly safe. (Well, as safe as any 
floating-point code can be, anyway.)

...


Not necessarily. Reusing 1.3 from the original case, the 
following assertion passes:


float f = 1.3;
assert(f < 1.3);

And considering we also have <= and >=, we may as well check all 
of the comparison operators. It's a complex issue because there 
isn't necessarily right or wrong behavior, only things that 
*might* be wrong. But if we want to truly detect all possible 
cases of incorrect behavior, then we have to be exhaustive in our 
checks.


Re: Always false float comparisons

2016-05-09 Thread Xinok via Digitalmars-d

On Monday, 9 May 2016 at 09:10:19 UTC, Walter Bright wrote:

Don Clugston pointed out in his DConf 2016 talk that:

float f = 1.30;
assert(f == 1.30);

will always be false since 1.30 is not representable as a 
float. However,


float f = 1.30;
assert(f == cast(float)1.30);

will be true.

So, should the compiler emit a warning for the former case?


(1) Yes, emit a warning for this case.
(2) Generalize it to all variables, like Nordlöw suggested.
(3) Generalize it to all comparisons as well, including < and > .
(4) While we're at it, let's also emit a warning when comparing 
signed and unsigned types.
(5) Dare I say it... warn against implicit conversions of double 
to float.

(6) The same applies to "real" as well.

All of these scenarios are capable of producing "incorrect" 
results, are a source of discrete bugs (often corner cases that 
we failed to consider and test), and can be hard to detect. It's 
about time we stopped being stubborn and flagged these things as 
warnings. Even if they require a special compiler flag and are 
disabled by default, that's better than nothing.


Discrete semantics of lambda expressions

2016-05-02 Thread Xinok via Digitalmars-d
D has a few ways of writing lambda expressions / anonymous 
functions:


x => doSomething()
{ doSomething(); }
(){ doSomething(); }

While the flexibility is great, there's a hidden issue for those 
programmers who come from different languages and are used to 
writing:


x => { doSomething(); doSomethingElse(); }

At first glance, this may seem okay but what's actually happening 
is that this is a lambda returning a lambda. The correct way 
would be to rewrite this as one of:


x => { doSomething(); doSomethingElse(); }()
(x){ doSomething(); doSomethingElse(); }

This particular issue has popped up twice in the last couple days 
alone and presumably many more times in the past:


http://forum.dlang.org/thread/qsayoktyffczskrnm...@forum.dlang.org
http://forum.dlang.org/thread/thgyqyarccinzuqhc...@forum.dlang.org

I'm proposing that we add a warning to the compiler for this 
particular case. If the programmer intended to return a lambda, 
then rewrite the expression as one of:


x => (){ doSomething(); doSomethingElse(); }
x => ({ doSomething(); doSomethingElse(); })
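
A small, runnable demonstration of the pitfall (mine, not from 
the linked threads):

import std.stdio : writeln;

void main()
{
    auto f = (int x) => { writeln(x); };  // lambda that RETURNS a lambda
    auto g = (int x) { writeln(x); };     // lambda that prints x

    f(1);     // prints nothing: the inner lambda is never called
    f(2)();   // prints 2: call the lambda that f returned
    g(3);     // prints 3
}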


Re: Checking if an Integer is an Exact Binary Power

2016-04-26 Thread Xinok via Digitalmars-d
On Monday, 25 April 2016 at 15:35:14 UTC, Dominikus Dittes 
Scherkl wrote:

On Monday, 25 April 2016 at 15:27:02 UTC, Xinok wrote:

Brute force.

http://dpaste.dzfl.pl/882d7cdc5f74

Yeah. And your test spares the failing case int.min (0x8000_0000), 
because in this case x & -x is negative, but of course x>>>1 is 
positive, so it returns false even though -(2^^31) is in fact a 
power of two. So this requires an extra cast to work correctly 
(in fact no big difference in the assembly):


return cast(Unsigned!T)(x & -x) > cast(Unsigned!T)(x >>> 1);


How is it wrong? Negative numbers are NOT powers of two (unless 
you have an imaginary/complex exponent), so it's still correct to 
return false for -int.min. Should it not?


http://www.wolframalpha.com/input/?i=solve+2^n+%3D+-%282^31%29+for+n


Re: Checking if an Integer is an Exact Binary Power

2016-04-25 Thread Xinok via Digitalmars-d
On Monday, 25 April 2016 at 12:56:27 UTC, Andrei Alexandrescu 
wrote:

On 4/25/16 6:42 AM, Solomon E wrote:
On Monday, 25 April 2016 at 05:35:12 UTC, Andrei Alexandrescu 
wrote:


With gdc https://godbolt.org/g/jcU4np isPow2B is the winner 
(shortest

code, simplest instructions). -- Andrei


I generalized function isPow2B to work for negative numbers in 
case of a
signed type, while keeping the assembly the same or better. 
(That also

incidentally made it correctly return false for zero.)


bool isPow2B(T)(T x)
{
    return (x & -x) > (x - 1);
}

bool isPow2F(T)(T x)
{
    return (x & -x) > (x >>> 1);
}

assembly diff:
--    subl    $1, %edi
++    shrl    %edi



That's smaller and faster, thanks. But how do you show it is 
still correct? -- Andrei


Brute force.

http://dpaste.dzfl.pl/882d7cdc5f74
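
In case the paste goes away, here's a minimal sketch of the same 
idea (not the original paste): compare isPow2F against a naive 
definition over a large range of values.

bool isPow2F(T)(T x)
{
    return (x & -x) > (x >>> 1);
}

// the naive definition used as the reference
bool isPow2Naive(ulong x)
{
    for (ulong p = 1; p != 0 && p <= x; p <<= 1)
        if (p == x)
            return true;
    return false;
}

void main()
{
    // checking all of uint takes a while; a prefix is enough for a smoke test
    foreach (uint x; 0 .. 1_000_000)
        assert(isPow2F(x) == isPow2Naive(x));
}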


Re: Checking if an Integer is an Exact Binary Power

2016-04-24 Thread Xinok via Digitalmars-d

On Sunday, 24 April 2016 at 23:17:53 UTC, David Nadlinger wrote:

On Sunday, 24 April 2016 at 23:00:56 UTC, Temtaime wrote:

Please no cmp.
Just
bool isPowerOf2(uint x) { return x && !(x & (x - 1)); }


You do realise that this will (typically) emit a branch?

 — David


I compiled using dmd -O and looked at the disassembly using 
objdump -D and there are indeed a couple branches in the 
disassembly:


 <_D4test8powerOf2FkZk>:
   0:   50                      push   %rax
   1:   48 89 f9                mov    %rdi,%rcx
   4:   85 c9                   test   %ecx,%ecx
   6:   74 0a                   je     12 <_D4test8powerOf2FkZk+0x12>
   8:   8d 81 ff ff ff ff       lea    -0x1(%rcx),%eax
   e:   85 c1                   test   %eax,%ecx
  10:   74 04                   je     16 <_D4test8powerOf2FkZk+0x16>
  12:   31 c0                   xor    %eax,%eax
  14:   eb 05                   jmp    1b <_D4test8powerOf2FkZk+0x1b>
  16:   b8 01 00 00 00          mov    $0x1,%eax
  1b:   59                      pop    %rcx
  1c:   c3                      retq
  1d:   0f 1f 00                nopl   (%rax)


I modified David's solution a bit to (hopefully) eliminate the 
branch:

bool powerOf2(uint x){ return !x ^ !(x & (x - 1)); }

 <_D4test8powerOf2FkZb>:
   0:   50                      push   %rax
   1:   53                      push   %rbx
   2:   48 89 fa                mov    %rdi,%rdx
   5:   85 d2                   test   %edx,%edx
   7:   0f 94 c0                sete   %al
   a:   8d 8a ff ff ff ff       lea    -0x1(%rdx),%ecx
  10:   85 ca                   test   %ecx,%edx
  12:   0f 94 c3                sete   %bl
  15:   32 c3                   xor    %bl,%al
  17:   5b                      pop    %rbx
  18:   59                      pop    %rcx
  19:   c3                      retq
  1a:   66 0f 1f 44 00 00       nopw   0x0(%rax,%rax,1)

The branches are gone but I'm really not sure if this is better 
or worse.


Re: Checking if an Integer is an Exact Binary Power

2016-04-24 Thread Xinok via Digitalmars-d

On Monday, 25 April 2016 at 01:17:48 UTC, Xinok wrote:

...


Sorry, didn't mean to say David's solution. Too many edits. >_<


Re: @nogc inconsistent for array comparison depending on mutability of elements

2016-04-08 Thread Xinok via Digitalmars-d-learn

On Friday, 8 April 2016 at 10:15:10 UTC, Dicebot wrote:

On Friday, 8 April 2016 at 09:56:41 UTC, Nick Treleaven wrote:
Semantically, array literals are always allocated on the 
heap. In this case, the optimizer can obviously place the 
array on the stack or even make it static/global.


So @nogc is enforced by the optimizer?


Yes, sadly.


What? O_o  I stated that it's not and explained why doing so 
would be a bad idea.


To make it sane one would need to make a list of all 
optimizations DMD makes in that regard and put them into spec as 
mandatory for all compilers.


Re: @nogc inconsistent for array comparison depending on mutability of elements

2016-04-07 Thread Xinok via Digitalmars-d-learn

On Friday, 8 April 2016 at 01:36:18 UTC, rcorre wrote:

@nogc unittest {
    int[2] a = [1, 2];
    assert(a == [1, 2]); // OK

    immutable(int)[2] b = [1, 2];
    assert(b == [1, 2]); // fail: array literal may cause allocation
}

Is there any logic behind allowing the comparison with `a` but 
not `b`, or is this a compiler bug?


Semantically, array literals are always allocated on the heap. In 
this case, the optimizer can obviously place the array on the 
stack or even make it static/global. However, it would be poor 
design to rely on the optimizer to satisfy @nogc. It comes down 
to portability; if the code compiles successfully with one 
compiler, then it should compile successfully in any other 
compiler.


Also, the way that compilers are typically built is that semantic 
analysis is done in the frontend and optimization is done in the 
backend. Trying to build a bridge between these two would be 
incredibly difficult and make implementing the compiler much more 
complex with little gain.


Re: Better Phobos contribution guide

2016-03-25 Thread Xinok via Digitalmars-d

On Friday, 25 March 2016 at 07:14:52 UTC, Seb wrote:
- dont add @nogc, @pure, @safe attributes yourself - let the 
compiler infer it!


I'd argue this is only applicable to some functions, not all. In 
particular, for non-templated functions, the compiler won't infer 
these attributes, so they still have to be written out explicitly.



- avoid to use `auto` as return type


I understand the intent: "auto" is bad for documentation. 
However, auto is used a lot for type inference. So I would 
rephrase this to be clear about that, or simply add a couple 
words, "... wherever possible".



- check: did you add enough tests?


I would extend this to emphasize "complete code coverage" in unit 
tests.


As a last word, a couple of these things could be enforced - 
especially the style guide. Are there already plans to check 
for this automatically?


I believe the autotester already checks whitespace (are you using 
spaces or tabs?) though I could be wrong about this.



Overall, I think these would be good additions to the 
contribution guide.


Re: Some crazy ideas from a high level perspective

2016-03-22 Thread Xinok via Digitalmars-d

On Tuesday, 22 March 2016 at 14:08:55 UTC, _d0s_ wrote:

Idea1: a general interface to describe n-dimensional matrices


Perhaps this is what you're looking for?

https://dlang.org/phobos/std_experimental_ndslice.html


Re: Defer in D

2016-03-21 Thread Xinok via Digitalmars-d

On Monday, 21 March 2016 at 17:46:05 UTC, Dmitry Olshansky wrote:

...
The main use case in Go that needs it specifically as a 
function level primitive is  this:


files := []File{}
for i := range paths {
    files[i], err := os.Open(paths[i])
    if err != nil {
        return errors.Errorf("Failed to open %s", paths[i])
    }
    defer files[i].Close()
}
... // lots of code using files


So in a nutshell - lack of RAII while operating on collections 
of resources.


D and Go have different mechanisms for handling 
errors/exceptions. I'll refrain from debating "A is better than 
B" but I haven't seen any use case which isn't already adequately 
covered by the existing mechanisms in D.


However, one thing I do find inferior is the inability of D 
lambdas to capture by value. Thus, the simple example of 
"writeln(i)" may produce unexpected results.



@Nick: https://github.com/Xinok/scrapheap/blob/master/defer.d


Defer in D

2016-03-19 Thread Xinok via Digitalmars-d
I stumbled upon an example demonstrating defer in Go which I 
thought was interesting. Defer is similar to scope guards in D 
except deferred calls run at the end of the function rather than 
the end of the scope; you 
can queue multiple defer calls by writing them inside of a loop. 
This implies that it internally builds a stack of delegates which 
are then executed LIFO once the function returns (or panics).


https://tour.golang.org/flowcontrol/13

I took a quick look through Phobos but didn't see anything 
similar so I wrote this as a proof of concept and to elicit 
discussion. This also works should the function throw rather than 
return gracefully.


http://dpaste.dzfl.pl/23c665bdae9e

I think a library solution is elegant enough that it doesn't need 
to be built into the language but that doesn't mean it needs to 
be in Phobos either. Does anybody have a use case for "defer" 
that isn't already adequately covered by scope statements?
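
For reference, a much-simplified sketch of the idea (not the 
dpaste code above): collect delegates in a struct and run them 
LIFO from its destructor, which also fires during stack 
unwinding if the function throws.

import std.stdio : writeln;

struct Deferrer
{
    private void delegate()[] calls;

    void defer(void delegate() dg) { calls ~= dg; }

    ~this()
    {
        foreach_reverse (dg; calls)   // LIFO, like Go's defer
            dg();
    }
}

void main()
{
    Deferrer d;
    d.defer({ writeln("registered first, runs last"); });
    d.defer({ writeln("registered second, runs first"); });
    writeln("function body done");
    // d's destructor runs here and executes the deferred calls
}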


Re: Suggested Change to Contract Syntax

2016-03-10 Thread Xinok via Digitalmars-d

On Thursday, 10 March 2016 at 21:07:09 UTC, FatalCatharsis wrote:
I am very new to D so I apologize if I'm very uninformed. I'm 
learning D by porting a big (awful) c++ project to D so that I 
may take advantage of all the lovely testing and QA features. I 
am currently moving a class over that I have which represents a 
2 float vector and I've run into a problem with contracts. Here 
is a short excerpt:

...


The most "elegant" solution I can think of is to move the 
contracts into the body of the function itself and wrap them in 
version(unittest) or version(assert). The pre-contract would be 
placed at the very start of the function and the post-contract 
would be wrapped in a scope(exit) or scope(success).


Regarding your proposal, I don't think it's necessary to 
introduce new syntax; a simple change in semantics would suffice. 
If we simply preserved the body of the pre-contract and made it 
accessible in the post-contract, then your example would work as 
is.


I'm not sure how the compilers translate the contracts into code, 
but it's definitely feasible. Code of the form:


auto foo()
in{ ... }
out(result){ ... }
body{ ... }

Could simply be rewritten as:

auto foo()
{
    // paste pre-contract here

    auto bodyOfFoo()
    {
        // paste body here
    }

    auto result = bodyOfFoo();

    // paste post-contract here

    return result;
}


Re: How to sort a range

2016-03-09 Thread Xinok via Digitalmars-d-learn

On Wednesday, 9 March 2016 at 15:39:55 UTC, rcorre wrote:
Still curious as to why it fails; maybe the range is getting 
copied at some point? I guess I need to step through it.


That's my suspicion as well. It seems that OnlyResult is 
pass-by-value so every time it gets passed to another function, 
it creates a copy of all the elements. A simple solution is to 
provide a wrapper type which refers to the elements in the 
original container.


However, I'm going to argue that the sort function is fine but 
the modifications you made to OnlyResult are incorrect. I tried 
running your example of only(...).sort but got a compilation 
error. Similarly, trying to sort a static array also gives a 
compilation error. However, if I slice the static array before 
passing it to sort (thus passing by reference), then it works 
just fine.


Re: Uniform Function Call Syntax?

2016-03-06 Thread Xinok via Digitalmars-d
On Sunday, 6 March 2016 at 07:45:58 UTC, Ola Fosheim Grøstad 
wrote:
I think it would be better idea to just add the ability to add 
unicode operators, and to avoid precedence issues one could 
just require them to use parentheses. That way you could define 
opCustom"•" and use it as:


( point1 • point2 )


Please no. If I need to open up Character Map just to write code, 
something has gone horribly wrong.


Re: [OT] Some neat ideas from the Kotlin language

2016-02-23 Thread Xinok via Digitalmars-d

On Tuesday, 23 February 2016 at 19:43:43 UTC, rsw0x wrote:

...
How does this differ from the example I gave where the branch 
is only taken if the pointer is non-null?


D doesn't prevent you from dereferencing a null pointer whereas 
these scenarios should be impossible in Kotlin as well as Rust. 
Case in point, this code compiles without issue but crashes at 
runtime:


int* foo()
{
    return null;
}

void main()
{
    if(int* ptr = new int)
    {
        ptr = foo();  // Whoops...
        *ptr = 35;    // Crash
    }
}

In D, pointers and reference types can spontaneously become null 
under almost any context. That's the difference.


Re: [OT] Some neat ideas from the Kotlin language

2016-02-23 Thread Xinok via Digitalmars-d

On Tuesday, 23 February 2016 at 07:18:09 UTC, rsw0x wrote:
On Tuesday, 23 February 2016 at 06:49:46 UTC, Tobias Müller 
wrote:
OTOH in the examples in Kotlin/Rust the variable 'var' changes 
its type

from 'int?' to plain 'int'.
In Kotlin this is done with static analysis, in Rust with 
rebinding of the

name.

Tobi


Rust's Option checks are not done at compile-time in most 
cases unless something changed drastically in the past ~18 
months.


Option is an enum type in Rust (i.e. algebraic data type). The 
Rust compiler forces you to check all possible cases so you can't 
use it improperly. So you would have to use something like 
pattern matching or "if let" to check the state.


https://doc.rust-lang.org/book/match.html#matching-on-enums


Re: [OT] Some neat ideas from the Kotlin language

2016-02-20 Thread Xinok via Digitalmars-d
On Saturday, 20 February 2016 at 09:40:40 UTC, Tobias Müller 
wrote:

...
It's not much more verbose but more explicit.
Changing the type of a variable based on static analysis is 
just advanced
obfuscation. It hurts readability and the gain is questionable. 
At least it

only works for nullable types.


I consider this feature similar to requiring all control paths to 
initialize a variable or return a value. Imagine requiring some 
explicit or verbose syntax to enforce this behavior. I don't see 
this being an issue as long as the behavior is consistent between 
compilers (if code works in one compiler, it works in all 
compilers).


Re: Official compiler

2016-02-17 Thread Xinok via Digitalmars-d
On Wednesday, 17 February 2016 at 22:57:20 UTC, Márcio Martins 
wrote:
I was reading the other thread "Speed kills" and was wondering 
if there is any practical reason why DMD is the official 
compiler?


...


I pretty much asked this same question a little over a year ago. 
Thread is here:


http://forum.dlang.org/thread/mjwitvqmaqlwvoudj...@forum.dlang.org


Re: How to force evaluation of range?

2016-02-12 Thread Xinok via Digitalmars-d-learn
On Friday, 12 February 2016 at 20:43:24 UTC, Taylor Hillegeist 
wrote:

So I have this code and I have to add the element
.each!(a => a.each!("a"));
to the end in order for it to evaluate the range completely and 
act like I expect it to. Is there a better thing to put in 
the place of

.each!(a => a.each!("a"));?

...


The following combination might work:

.joiner.each;

http://dlang.org/phobos/std_algorithm_iteration.html#.joiner

http://dlang.org/phobos/std_algorithm_iteration.html#.each


Re: How to force evaluation of range?

2016-02-12 Thread Xinok via Digitalmars-d-learn
On Saturday, 13 February 2016 at 01:11:53 UTC, Nicholas Wilson 
wrote:

...

If you just want the range evaluated use walkLength


It might work in this case, but in general this won't work for 
any range which defines .length as a member. In that case, 
walkLength will simply return .length of that range.


Re: How to force evaluation of range?

2016-02-12 Thread Xinok via Digitalmars-d-learn

On Saturday, 13 February 2016 at 03:16:09 UTC, cym13 wrote:

On Saturday, 13 February 2016 at 02:17:17 UTC, Xinok wrote:
On Friday, 12 February 2016 at 20:43:24 UTC, Taylor Hillegeist 
wrote:

So I have this code and I have to add the element
.each!(a => a.each!("a"));
to the end in order for it to evaluate the range completely 
and act like I expect it to. Is there a better thing to put 
in the place of

.each!(a => a.each!("a"));?

...


The following combination might work:

.joiner.each;

http://dlang.org/phobos/std_algorithm_iteration.html#.joiner

http://dlang.org/phobos/std_algorithm_iteration.html#.each


Why not just  .each;   ?


The thing he's trying to iterate over is a range of ranges. A 
single .each will only iterate over the outermost range so you 
need .joiner first to "flatten" the range, then you can use .each 
on that result.


Re: Challenge: fair partition function

2016-02-08 Thread Xinok via Digitalmars-d
On Monday, 8 February 2016 at 23:25:00 UTC, Andrei Alexandrescu 
wrote:
Consider defining a function that partitions a range around a 
given index like this:


size_t pivotPartition(alias less = (a, b) => a < b, Range)
(Range r, size_t pivot);

Returns x, one of the indexes that r[pivot] would have if r 
were sorted. Also, r[0 .. x] contains stuff no greater than 
r[x] and r[x + 1 .. $] contains stuff no less than r[x].


The challenge is to implement such a function with fairness: if 
several elements are equal to r[pivot], return the index 
closest to r.length / 2.


The function should be efficient, minimize calls to less and 
swap, etc. A variant that is efficient but does not implement 
fairness is below.


...


Ultimately, I think you're going to have to perform two 
comparisons on *some* elements to check for equality. I thought 
of a few tricks which may or may not help.




(1) Keep your code as is and add a final pass to count the 
elements equal to the pivot. However, you only need to scan the 
larger partition 
since it contains the index (r.length / 2) and you're trying to 
move closer to that point. The elements in the smaller partition 
can simply be ignored.




(2) You may have code which looks something like this:

if(less(e, pivot)){ /+ less than +/ }
else if(less(pivot, e)){ /+ greater than +/ }
else{ /+ equal to +/ }

This is a big win if most of the elements are less than the 
pivot, but also a big loss if most of the elements are greater 
than the pivot. A simple trick is to repeat the code twice but 
swap the order of comparisons so you get:


if(less(pivot, e)){ /+ greater than +/ }
else if(less(e, pivot)){ /+ less than +/ }
else{ /+ equal to +/ }

This will place us closer to an average 1.5 comparisons per 
element which is better than (1).




(3) Similar to (2) except you only compare each element once so 
you're not checking for equality. You're simply alternating 
between checking if (e < pivot) or if (pivot < e). So the code 
may look something like this:


if(less(pivot, e)){ /+ greater +/ }
else{ /+ less or equal +/ }

if(less(e, pivot)){ /+ less +/ }
else{ /+ greater or equal +/ }

From this, you can group the elements into four partitions:

[LE L G GE]

So you have elements which are definitely less than or greater 
than the pivot but also some elements which *may* be equal to the 
pivot. Combine this with the first trick (1) and you only have to 
scan LE or GE for equality but not both. This is putting us 
closer to an average 1.25 comparisons per element but a 4-way 
partition would also require much more work.


Re: [suggestion] Automated one-stop compiler version chart

2016-02-07 Thread Xinok via Digitalmars-d

On Sunday, 7 February 2016 at 18:46:48 UTC, Nick Sabalausky wrote:
I was just updating a project's .travis.yml file and noticed: 
It doesn't seem we have any one-stop-shop location to check all 
the versions of DMD/LDC/GDC currently available on travis-ci.


It'd be really nice if we had some auto-updated chart like that 
which ALSO listed the DMDFE, LLVM and GCC versions each LDC/GDC 
version is based on.


That info seems especially difficult to find for GDC. It's a 
little easier for LDC, since I found this page ( 
https://github.com/ldc-developers/ldc/releases ), but it'd be 
really nice to have just a simple chart somewhere.


The GDC downloads page has this info:
http://gdcproject.org/downloads

Perhaps it would be good to add this info to the main downloads 
page:

http://dlang.org/download.html


Re: Idempotent partition around median of 5?

2016-02-05 Thread Xinok via Digitalmars-d

On Friday, 5 February 2016 at 17:33:55 UTC, Era Scarecrow wrote:

On Friday, 5 February 2016 at 15:41:11 UTC, Fool wrote:

One swap usually decomposes into three moves.


 Recently from reading some interesting hacks and super code, a 
good swap can also be done via 3 xor operations (and avoiding 
an intermediate storage unit).


http://aggregate.ee.engr.uky.edu/MAGIC/#Swap%20Values%20Without%20a%20Temporary

x ^= y;
y ^= x;
x ^= y;

 Since these can be used directly as registers it might be 
faster (assuming all 5 are mapped to registers until the final 
write out, which might not work).


It's a cool trick but I would be surprised if this were actually 
faster. Modern x64 processors have 16 "general purpose" registers 
so saving a single register for a simple set of instructions is 
likely to not have any benefit. My other thought is that the 
compiler may not recognize this pattern, making it more difficult 
to optimize. On the other hand, compilers are very good at 
rearranging simple reads and writes to avoid stalling the 
pipeline in the CPU.


Re: Idempotent partition around median of 5?

2016-02-05 Thread Xinok via Digitalmars-d

On Friday, 5 February 2016 at 15:13:56 UTC, tn wrote:

On Thursday, 4 February 2016 at 20:30:57 UTC, Timon Gehr wrote:
At most 6 comparisons, <=3 swaps, idempotent (optimal number 
of swaps):


...



Inspired by this, I made four other versions of the function 
that are shorter but make more swaps (at most 4, 6, 7 and 8 
swaps respectively). Each makes 6 comparisons and should be 
idempotent.


http://dpaste.dzfl.pl/1c53d8f432f7

...


Very nice! I'm curious, to both you and Timon, how did you go 
about writing these and coming up with the solutions? I'm not 
sure if I could come up with these results myself and so quickly 
at that.


Re: Idempotent partition around median of 5?

2016-02-04 Thread Xinok via Digitalmars-d

On Thursday, 4 February 2016 at 20:30:57 UTC, Timon Gehr wrote:
At most 6 comparisons, <=3 swaps, idempotent (optimal number of 
swaps):


void partition5(ref int[5] a){
  if(a[0]

Re: Idempotent partition around median of 5?

2016-02-03 Thread Xinok via Digitalmars-d

On Thursday, 4 February 2016 at 01:33:54 UTC, Era Scarecrow wrote:
On Thursday, 4 February 2016 at 01:24:15 UTC, Andrei 
Alexandrescu wrote:
This appears a simple problem: given numbers a, b, c, d, e, 
swap them around so as to place the median in c and partition 
the others around it. I.e. the postcondition is: a <= c && b 
<= c && c <= d && c <= e.




So there's got to be a better solution. Your challenge - 
should you choose to accept it :o) - is an algorithm that does 
the partitioning in 6 comparisons and <= 9 swaps, which is 
idempotent: when applied twice, it always does 0 swaps.


 Well if we look at it, the max compares for optimal sorting 
algorithm is the log2 of the max combinations in how the 
elements could be arranged; 5! = 120, so 7 compares.


The order of a,b and d,e doesn't matter because we're partitioning, 
not sorting. So this problem can be solved in six comparisons.


 I'm not sure of the max swaps, seems like it could be 4 if 
done right, but that might be impractical.


I believe that's correct. Each swap can move at least one element 
into its final position. After three swaps, there may be two 
elements which are still out of place, and swapping them will 
move them both into their final position. Thus, four swaps max. 
:-)



A solution to this problem does exist: Build a tree of if-else 
branches for all the different possibilities (2^6=64) and 
hard-code the optimal sequence of swaps. However, this function 
would be huge so this isn't very practical. Ideally, we want to 
minimize the size of the function as much as possible, adding in 
a few extra swaps if necessary.


I'll take a crack at it this weekend. It sounds like an 
interesting problem.


Stable Partition3 Redux

2016-02-01 Thread Xinok via Digitalmars-d
I'm not sure what the convention is for resurrecting old threads 
but I felt it would be best to start fresh. For reference, the 
older thread is here:


http://forum.dlang.org/thread/pyduqwmskkkoicsxi...@forum.dlang.org

tl;dr - I studied a paper with the goal of implementing a stable 
3-way partitioning algorithm which runs in linear time without 
allocating any memory on the heap. I spent over a month studying 
that paper and only managed to understand about 80% of the 
content. However, that was enough for me to derive a partitioning 
algorithm which runs in O(n) time using O(log n) space.


The good news is that the O(log n) space can easily be allocated 
on the stack so this achieves the original goal of avoiding 
memory allocations. The bad news is that, despite running in 
linear time, it comes at the cost of a huge constant factor which 
makes it too slow for any practical purposes. It's a complex 
algorithm which I never fully implemented but preliminary 
benchmarks were discouraging.


With that said, I've been sitting on this for several months and 
Phobos is still lacking a stable partition3 implementation so 
it's about time I did something about that. I implemented a few 
"potential candidates" with various pros and cons, two of which 
can accept a variable-length buffer of a specific type. DPaste 
isn't the best platform for running benchmarks so I suggest 
compiling and running on your own machine.


http://dpaste.dzfl.pl/2a22af60baf9

Notes:
* "Assignments" requires the range have assignable elements; 
"Swaps" and "Insertions" works on both swappable and assignable 
elements.


* "Assignments Large Buffer" is how you would classically 
implement a stable partitioning algorithm using O(n) space. 
Clearly, it's by far the fastest.


* "Swaps Large Buffer" vigorously thrashes the cache so it gets 
much slower on larger inputs.


* The recursive code is identical in all three implementations; 
all that changes is the method for partitioning small sublists. 
So it's possible to merge all these techniques into a single 
function or even make them external functions separate from the 
recursive code.




Any feedback is appreciated. I'll make a pull request once we 
have consensus on what is the best solution.


Re: D vs Rust

2016-01-31 Thread Xinok via Digitalmars-d
On Sunday, 31 January 2016 at 16:30:47 UTC, Ola Fosheim Grøstad 
wrote:

On Sunday, 31 January 2016 at 16:18:21 UTC, bearophile wrote:

Regarding the code reliability, D is better than C++11


That's a bold claim. What do you mean by "code reliability"? 
You get as strong typing as you want with C++.


(And no, Laeeth, the fact that Manu has a nice boss that allows 
him to use an unfinished product in production is not a good 
reason for other people to do so. Try to be reasonable, just 
try...)


He likely means that, in general, D code has fewer bugs than 
C++11 code. There are numerous reasons for this but I'll just 
give a few examples:


* Certain types of implicit casts are not allowed in D where the 
original value may be lost, e.g. int to short.

* Functions MUST return a value in D or at least throw an error.
* D has far less "undefined behavior" than C++.
* @safe, pure, transitive const/immutable, thread-local by 
default, etc.


This and much more makes D far more "reliable" than C++.


Re: reduce -> fold?

2016-01-30 Thread Xinok via Digitalmars-d
On Saturday, 30 January 2016 at 12:11:37 UTC, Ola Fosheim Grøstad 
wrote:

Currying is pointless, I believe Swift is removing it. ...


It might be fairly useless in D but it's definitely not useless 
in general. It' a different design pattern and functional 
languages make great use of it. OTOH, D substitutes it with other 
design patterns like UFCS and template arguments, e.g. 
arr.sort!"a > b".


Phobos even has a function which mimics currying via 
std.functional.partial:


https://dlang.org/phobos/std_functional.html#partial
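
A minimal example of partial:

import std.functional : partial;

int add(int a, int b) { return a + b; }

// fix the first argument of add to 5, in the spirit of currying
alias add5 = partial!(add, 5);

void main()
{
    assert(add5(3) == 8);
}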


Re: Collapsing n-dimensional array to linear (1 dimensional)

2016-01-22 Thread Xinok via Digitalmars-d-learn

On Friday, 22 January 2016 at 12:07:11 UTC, abad wrote:

Let's say I have an array like this:

int[][][] array;

And I want to generate a linear int[] based on its data. Is 
there a standard library method for achieving this, or must I 
iterate over the array manually?


What I'm thinking of is something like this:

int[] onedim = std.array.collapse(array);


You can use std.algorithm.joiner but you have to call it twice to 
flatten a 3D array to a 1D array.


import std.algorithm, std.stdio, std.array;

void main()
{
    int[][][] arr = [[[1], [2, 3]], [[4, 5], [6, 7]], [[8], [9, 10], [11]]];

    int[] arr2 = arr.joiner.joiner.array;
    writeln(arr);
    writeln(arr2);
}

http://dlang.org/phobos/std_algorithm_iteration.html#.joiner


Re: Distributed Memory implementation

2016-01-18 Thread Xinok via Digitalmars-d

On Monday, 18 January 2016 at 09:56:17 UTC, Adrian Matoga wrote:

...
Your idea seems interesting, but IMHO a compacting GC should be 
the preferred solution for heap fragmentation.


Implementing a compacting GC in D would be exceedingly difficult, 
if not impossible, because of raw pointers, unions, non-GC 
memory, linked C/C++ code, and possibly other reasons. Other 
languages, such as Java, have a much simpler model that can be 
dealt with. There's simply too many caveats in D and any such 
benefits would be minimal at best.


Re: Distributed Memory implementation

2016-01-18 Thread Xinok via Digitalmars-d

On Monday, 18 January 2016 at 11:46:36 UTC, Nemanja Boric wrote:

On Monday, 18 January 2016 at 09:28:59 UTC, tcak wrote:
On Monday, 18 January 2016 at 08:12:03 UTC, Nemanja Boric 
wrote:

Check https://dlang.org/phobos/std_experimental_allocator.html


Which part of this module provide the functionality of using 
non-consecutive memory(distributed) blocks like they are 
consecutive?


IIRC, none of them, sorry, but if you're going to implement it, 
my guess it that

it should be compatible with `std.experimental.allocator`.


Allocators work with void[] arrays which require memory to be 
contiguous. This idea of fragmented memory simply isn't 
compatible with allocators. Such a library could provide a range 
interface but it would have to be wrapped in a container of some 
sort. It could potentially be a candidate for std.container 
though.


Re: topN using a heap

2016-01-18 Thread Xinok via Digitalmars-d
On Tuesday, 19 January 2016 at 00:11:40 UTC, Andrei Alexandrescu 
wrote:
Of course not. I think this back-and-forth takes away from the 
gist of things. So let me summarize what has happened:


1. topN was reportedly slow. It was using a random pivot. I 
made it use getPivot (deterministic) instead of a random pivot 
in https://github.com/D-Programming-Language/phobos/pull/3921. 
getPivot is also what sort uses.


2. Now that both functions use getPivot, I set out to improve 
it in 
https://github.com/D-Programming-Language/phobos/pull/3922. The 
problem is I couldn't make getPivot impure; pure functions 
already call sort, so making it impure would have been a 
regression.


3. So I made getPivot use just a bit of randomness taken from 
the length of the range.


4. sort was and is attackable before all of these changes

5. So now we have pure topN and sort (subject to the range 
offering pure primitives) but they are both attackable.


6. PRNGs don't have any other downside than making functions 
impure.


The way I see it we have these solutions:

(a) make topN still use a random pivot. That way there's no 
regression. Then proceed and make sort() avoid quadratic cases 
in other ways, e.g. switch to heapsort if performance degrades.


(b) Improve sort() first, then apply a similar strategy to 
improving topN. Do not use RNGs at all.


(c) Seek randomness in other places, e.g. address of elements, 
data hashes etc. Come with a solution that may still be 
attacked in narrow cases but make that unlikely enough to be a 
real concern.



Andrei


To be clear, sort is NOT attackable. Introsort is used for 
unstable sorting which begins with quick sort but falls back to 
heap sort after too many recursions to maintain O(n log n) 
running time. Timsort is used for stable sorting which is a 
variant of merge sort but still runs in O(n log n) time. In 
either case, you're guaranteed to have O(n log n) running time in 
the worst case.


On the other hand, someone can improve upon quick sort by 
choosing better pivots but be careful not to add too much 
overhead. A couple years ago, I tried choosing the pivot from a 
median of five but the result was as much as 10% slower. One idea 
is to begin initially choosing better pivots, but once the 
sublists fall below a certain length, simply choose the pivot 
from a median of three to avoid the extra overhead.
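
To make the trade-off concrete, here's a minimal sketch of 
median-of-three pivot selection (an illustrative helper, not 
Phobos's getPivot):

// returns the index of the median of r[0], r[$ / 2] and r[$ - 1]
size_t medianOfThree(alias less = (a, b) => a < b, Range)(Range r)
{
    immutable size_t lo = 0, mid = r.length / 2, hi = r.length - 1;
    if (less(r[mid], r[lo]))
        return less(r[hi], r[mid]) ? mid : (less(r[hi], r[lo]) ? hi : lo);
    else
        return less(r[hi], r[mid]) ? (less(r[hi], r[lo]) ? lo : hi) : mid;
}

void main()
{
    int[] a = [9, 1, 5, 3, 7];
    assert(a[medianOfThree(a)] == 7);   // median of 9, 5 and 7
}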


As for topN, there are two approaches I'm aware of that are 
deterministic without resorting to impurities or RNGs. The first 
is introselect which is similar to introsort in that it has a 
fall back algorithm. The other approach is to choose the pivot 
from a median of medians. My idea is a variant of the latter 
approach: Begin by choosing the pivot from a median of five. If 
you continuously choose bad pivots, take a larger sample of 
elements to choose the pivot from. Keep growing the sample as 
necessary.


Speaking of RNGs, they're technically pure as long as you always 
use the same seed. So what if we generated a random seed at the 
start of the process, but then reused that same seed over and 
over in pure functions for the duration of the process?


Re: Balanced match with std.regex

2015-12-14 Thread Xinok via Digitalmars-d-learn

On Tuesday, 15 December 2015 at 01:29:39 UTC, Jakob Ovrum wrote:

Thanks, that makes sense.

String manipulation in D without regex is pretty nice anyway, 
so it's not a big loss.


There is a library named Pegged which can match against balanced 
parens:


https://github.com/PhilippeSigaud/Pegged


Re: D programming video tutorial

2015-12-13 Thread Xinok via Digitalmars-d-learn

On Sunday, 13 December 2015 at 20:29:47 UTC, Pederator wrote:
Hi. Would anybody who is familiar with D consider making a 
comprehensive D programming video tutorial / training / course? 
This could be encouraging and helpful for people to start with 
D. It could also help in promoting D programming language. This 
is a question for all the community, please comment what do you 
think about this idea, we will know if there is an interest in 
such a training video, or is it just me.


It's almost two years old now but I found this series of videos 
on D:


https://www.youtube.com/playlist?list=PL-zvF7DyWePQad_E6Y9J9kbYvM7AjnBD1


Re: functional way doing array stuff/ lambda functions

2015-12-12 Thread Xinok via Digitalmars-d-learn

On Saturday, 12 December 2015 at 23:36:43 UTC, cym13 wrote:

...
So, in your example:

int product(const ref int[] arr) {
    import std.array: array;
    import std.algorithm: reduce;

    arr = arr.reduce!((p, i) => p*i).array;
}


A good post overall but you got reduce wrong. In D, reduce 
computes and returns the result immediately, not a lazy range. 
The following code is correct:


int product(const ref int[] arr) {
    import std.array: array;
    import std.algorithm: reduce;

    return arr.reduce!((p, i) => p*i)();
}

Example: http://dpaste.dzfl.pl/fc2c2eab2d02


Re: Microsoft to contribute to Clang and LLVM project

2015-12-09 Thread Xinok via Digitalmars-d

On Monday, 7 December 2015 at 11:26:27 UTC, Bruno Medeiros wrote:
With these developments, one asks again, is it wise to spend 
any more time working and using the Digital Mars backend for 
D?...


I asked this very question about a year ago. The thread is here:

http://forum.dlang.org/post/mjwitvqmaqlwvoudj...@forum.dlang.org


Re: Complexity nomenclature

2015-12-03 Thread Xinok via Digitalmars-d

On Friday, 4 December 2015 at 01:55:15 UTC, Timon Gehr wrote:
Regarding nomenclature, it would be awesome if we could call 
this "running time" instead of "complexity". Algorithms have 
running times and memory usages. Problems have (time and space) 
complexities. I know that calling running time 'complexity' is 
awfully popular outside of actual complexity theory papers. I 
don't really understand what calling it 'complexity' actually 
buys though.


I've seen you mention this before, and while you may be correct, 
the term "complexity" is so widely applied to algorithms that 
it's not worth going against the norm. For the vast majority of 
computer scientists, when they hear the term "time complexity", 
they immediately understand it as running time. In fact, what 
I've seen most often is that algorithms have complexities and 
problems fall into a *complexity class*. For example, just take a 
look at the Wikipedia page on time complexity:


https://en.wikipedia.org/wiki/Time_complexity


Re: Will std.allocator make it easy to bypass the GC?

2015-11-14 Thread Xinok via Digitalmars-d

On Saturday, 14 November 2015 at 14:56:57 UTC, maik klein wrote:

http://dlang.org/phobos/std_experimental_allocator.html

I am not sure what the plan is but it seems to me that the 
standard library could make use of the "theAllocator" for 
allocation and also allow you to swap in different allocators.


So could it then be possible to completely bypass the GC once 
"allocators" are used by the std?


I imagine that we could set "theAllocator = Malloc" and every 
allocation will use malloc? Or maybe some function allocates by 
default but you know that the allocation is small enough to be 
allocated on the stack which would allow us to call the 
function like this "someFunction!(StackAlloactor)(foo);" ?


Allocators provide mechanisms for allocating memory but don't do 
anything for memory management. Unlike the GC which automatically 
frees memory, you must explicitly free memory in allocators. In 
fact, some allocators aren't even required to implement free().


Allocators and containers should go a long way in eliminating 
much use of the GC in the std library. However, some language 
features still rely on the GC and we'll need alternate mechanisms 
to deal with those, such as RC or lifetime semantics (a la Rust).
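
As a minimal sketch of what explicit management looks like with 
std.experimental.allocator (treat the details as illustrative; 
the module is experimental and its API may shift):

import std.experimental.allocator : allocatorObject, dispose, make, theAllocator;
import std.experimental.allocator.mallocator : Mallocator;

void main()
{
    // Route theAllocator through malloc instead of the GC.
    theAllocator = allocatorObject(Mallocator.instance);

    // Unlike a GC allocation, this one must be freed explicitly.
    auto p = theAllocator.make!int(42);
    scope(exit) theAllocator.dispose(p);

    assert(*p == 42);
}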


std.stdio.writeln should not accept infinite ranges?

2015-11-12 Thread Xinok via Digitalmars-d

The following code compiles and runs:

import std.stdio, std.random;

void main()
{
writeln(rndGen);
}

Since rndGen is an infinite range, this code runs forever. It 
seems to be that we need to add a check for isInfinite on writeln 
and other related functions.


Does anybody have a use case where this is actually practical? I 
don't see a reason for allowing infinite ranges here, considering 
how easy it is to write "range.take(n)".
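
For example, the explicit version is barely longer:

import std.random : rndGen;
import std.range : take;
import std.stdio : writeln;

void main()
{
    writeln(rndGen.take(10)); // prints ten values instead of looping forever
}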


Re: std.stdio.writeln should not accept infinite ranges?

2015-11-12 Thread Xinok via Digitalmars-d
On Friday, 13 November 2015 at 01:19:22 UTC, Andrei Alexandrescu 
wrote:

On 11/12/2015 08:18 PM, Xinok wrote:

The following code compiles and runs:

import std.stdio, std.random;

void main()
{
 writeln(rndGen);
}

Since rndGen is an infinite range, this code runs forever. It 
seems to
be that we need to add a check for isInfinite on writeln and 
other

related functions.

Does anybody have a use case where this is actually practical? 
I don't
see a reason for allowing infinite ranges here, considering 
how easy it

is to write "range.take(n)".


Piping a program that produces infinite output into less is 
practical. -- Andrei


Fair enough. I'm thinking more on the side of safety though and I 
assume that, more often than not, printing an infinite range is 
unintentional. Should it be this easy to shoot ourselves in the 
foot? I'm not saying it should be impossible, but just make it 
explicit that we intended to print an infinite range.


Re: Purity of std.conv.to!string

2015-09-26 Thread Xinok via Digitalmars-d-learn

On Saturday, 26 September 2015 at 17:08:00 UTC, Nordlöw wrote:

Why is the following code not pure:

float x = 3.14;
import std.conv : to;
auto y = x.to!string;


???

Is there a reason for it not being pure? If not, this is a 
serious problem as this is such a fundamental function.


I don't know the exact reason but I found a couple of functions 
which could be marked pure which are not, including 
strippedOctalLiteral and isOctalLiteralString. It's possible that 
some function was simply overlooked and needs to be marked pure 
or infer purity.


The larger issue at hand is that the compiler doesn't tell you 
where an impurity lies, merely that it exists. I mentioned this 
issue not too long ago while experiencing my own difficulties 
respecting purity.


Re: Floating point in the type system

2015-09-12 Thread Xinok via Digitalmars-d

On Saturday, 12 September 2015 at 16:08:31 UTC, Robert wrote:
On Saturday, 12 September 2015 at 15:49:23 UTC, Atila Neves 
wrote:

What do think is unusual?

Atila


It's unusual, because `float.nan != float.nan`, so one might 
expect that `typeof(Foo!(float.nan) != Foo!(float.nan))`, 
whereas this is clearly not the case, even with both the static 
assert and runtime assert failing. I'm just curious to 
understand the reasoning behind this, whether it's intentional, 
and whether it matters at all.


(1) f = f2; // This is assignment, not comparison
(2) alias VAL = f; // This is not a data member so is not 
involved in comparisons or assignments


change "alias VAL" to "float VAL" and then you might see the 
behavior you expect.


Re: Error reporting is terrible

2015-09-03 Thread Xinok via Digitalmars-d

On Thursday, 3 September 2015 at 23:56:53 UTC, Prudence wrote:
After being away from D and deciding to test the idea of 
writing a commercial app in it, There were 2 big things that 
have jumped out at me:


1. The setup is a mess compared to most modern day compilers 
and software. While VS is huge, obviously has a ton of money 
behind it, it installs without much fanfare. When installing VS 
you know that once it's done after a few mins you can jump into 
programming and actually get something done.


Can you elaborate? These days, I find setting up D on Windows or 
Linux quite easy.


2. The error messages in D are horrendous. They tend to be 
terse, point to places where the error actually doesn't occur, 
and almost always require one to look up the error, if it's not 
obvious. This is a waste of time for the programmer. Usually 
the more complex the code, the more cryptic the errors are.


Again, specific examples would help. Often, when newcomers detail 
the trouble they encountered during their first experience with 
D, one or more people will get to work fixing or alleviating the 
specific issues they mention.


3. Since I use VS, I installed VD. It works up to a point. But 
it is so ill-integrated into VS that it makes me want to just jump 
back into .NET.


Unfortunately, some features of D make tooling support difficult. 
In particular, CTFE combined with string mixins are extremely 
powerful but basically require a fully featured D interpreter to 
make any sense of them.


Re: A Small Enhancement Idea

2015-09-02 Thread Xinok via Digitalmars-d
On Wednesday, 2 September 2015 at 19:34:53 UTC, Jack Stouffer 
wrote:

On Wednesday, 2 September 2015 at 19:15:08 UTC, jmh530 wrote:
I wasn't familiar with version(release). Would you need to 
compile with -version=release then?


No, it seems to work with just the -release flag.

The Python programmer in me hates the inconsistency between 
these two, but it's fine.


I think it's for the better. The trouble with including keywords 
in the language is that it encourages their use. For example, 
there was once a suggestion to add a "namespace" keyword to the D 
language simply for the sake of linking with C++. There was a 
huge backlash because of the reason I stated before. I think the 
same philosophy applies here; we shouldn't add a "release" 
keyword because it would encourage its use.


But why not encourage it? Because debug builds are for testing 
and, well, debugging your code. However, if there is 
"release-only" segments of code, then these are excluded from 
debug builds and so don't receive the same extensive testing and 
error checking as all other code. On the other hand, there are 
those corner cases when you actually need this which are 
adequately covered by the "version(release)" statement.


In summary, from a software engineering perspective, this is a 
bad idea in general and should be avoided wherever possible.


Re: Dazz new description for D

2015-09-01 Thread Xinok via Digitalmars-d

On Monday, 31 August 2015 at 07:38:18 UTC, Enamex wrote:
Some posters on reddit remarked that D's sales pitch on the 
website is unnecessarily long and unindicative...


I used to suffer from long compile times, poor move semantics, 
and undefined behavior. Then my doctor told me about D...


Side effects may include satisfaction, enjoyment, too much free 
time, ...


D is not for everyone. Do not use D if you are an incessant 
fanboy, ideological bigot, ...


Re: How many bytes in a real ?

2015-08-24 Thread Xinok via Digitalmars-d
On Monday, 24 August 2015 at 21:55:40 UTC, Guillaume Chatelet 
wrote:
On linux x86_64 : real.sizeof == 16 but it looks like only the 
first 10 bytes are used (ie. 80bits)


Is there a way to know the real size of a real ?


The best I can think of is to use the mant_dig property which 
returns the number of bits in the mantissa.


http://dpaste.dzfl.pl/9889b3d0bd5b
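
In case the paste expires, a quick sketch of the same idea (the 
comments assume x87 80-bit reals on linux x86_64):

import std.stdio : writefln;

void main()
{
    writefln("real.sizeof   = %s", real.sizeof);   // 16, includes padding
    writefln("real.mant_dig = %s", real.mant_dig); // 64, i.e. the 80-bit x87 format
}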


Re: foreach multiple loop sugar

2015-08-18 Thread Xinok via Digitalmars-d-learn

On Tuesday, 18 August 2015 at 15:51:55 UTC, ixid wrote:
Though sugar seems to be somewhat looked down upon I thought 
I'd suggest this- having seen the cartesianProduct function 
from std.algorithm in another thread I thought it would be an 
excellent piece of sugar in the language. It's not an earth 
shattering change but it makes something very common more 
elegant and reduces indentation significantly for multiple 
nested loops. Braces make nested loops very messy and any 
significant quantity of code in the loop body benefits from not 
being in a messy nesting.


...


What's wrong with just putting all the foreach statements on a 
single line?


foreach(i; 0..10) foreach(j; 0..10) foreach(k; 0..10)
{
writeln(i, j, k);
}


Re: Bug in std.container: SList

2015-08-14 Thread Xinok via Digitalmars-d

On Friday, 14 August 2015 at 18:12:42 UTC, anonymous wrote:
Other insert*  functions call the private function 
SList.initialize() which does the null-check for _root. I am 
working on a PR adding the missing call in insertAfter - that's 
a 1 line change. I am not a phobos dev.


I would do the null check in the insert* functions themselves. 
Otherwise, you have the overhead of a function call, assuming the 
function isn't inlined.


Re: Bug in std.container: SList

2015-08-14 Thread Xinok via Digitalmars-d

On Friday, 14 August 2015 at 06:44:53 UTC, ted wrote:

Hi,

(earlier posting on D.learn, but better test included here)...


dmd.2.067, dmd.2068 (64bit linux)


import std.container: SList;

void main()
{
SList!int tmp = SList!int( 4 );
SList!int tmp1;

// Ensure tmp is empty
tmp.removeAny();
assert( tmp.empty == true );

// Both tmp and tmp1 are empty
// (however tmp1 is not internally initialised)
tmp.insertAfter( tmp[], 3 );
//tmp1.insertAfter( tmp1[], 3 );   // <== asserts here.

}

core.exception.AssertError@std/container/slist.d(57): Assertion 
failure


Changes in phobos mean that you cannot append to an 
_uninitialised_ empty singly linked list (you can, however, 
append to an _initialised_ empty linked list).


bug ?

--ted


I can confirm that this is a bug but I'm not sure what the 
correct way is to fix it. SList creates a dummy node for the 
root of the list, but because structs don't allow default 
constructors, this dummy node is never allocated in the first 
place. Either we can add lots of null checks to initialize the 
list, or we can add this line:


@disable this();

Thoughts?


Re: Does D have syntax for adding subscopes to classes?

2015-08-12 Thread Xinok via Digitalmars-d-learn

On Wednesday, 12 August 2015 at 15:47:37 UTC, Adam D. Ruppe wrote:

Example with them:
...


That's the idea I had except I would use a struct instead because 
using a class requires a second allocation.


Re: Interpreting the D grammar

2015-08-02 Thread Xinok via Digitalmars-d

On Sunday, 2 August 2015 at 14:50:35 UTC, Jacob Carlborg wrote:
I'm trying to read the D grammar [1] to enhance the D TextMate 
bundle. If we take the add expression as an example. It's 
defined like this in the grammar:


AddExpression:
MulExpression
AddExpression + MulExpression
AddExpression - MulExpression
CatExpression

And like this in the grammar made by Brian [2]:

addExpression:
  mulExpression
| addExpression ('+' | '-' | '~') mulExpression
;

I'm not so familiar with grammars but this looks like it's 
recursive. Is it possible to translate this piece of grammar to 
a regular expression? TextMate uses regular expressions and a 
couple of enhancements/extensions to define a grammar for a 
language.


[1] http://dlang.org/grammar.html
[2] https://rawgit.com/Hackerpilot/DGrammar/master/grammar.html


I guess you're not familiar with the theoretical aspect of 
formal languages. The D grammar is a context-free grammar which 
cannot be reduced to a regular expression. As cym13 stated, there 
are some simple context-free grammars which can be rewritten as 
regular expressions, but the D grammar cannot be. Take a look at 
the Chomsky Hierarchy [1] for a better understanding.


The classic example of a context-free language is the set of 
balanced parentheses, i.e. (()) is balanced and (() is not. 
This language is not regular meaning you cannot write a regular 
expression for it, but you can write a context-free grammar for 
it.


[1] https://en.wikipedia.org/wiki/Chomsky_hierarchy#The_hierarchy
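
For reference, one standard context-free grammar for that 
language, written in the same style as the D grammar quoted 
above, is:

Balanced:
    ( Balanced ) Balanced
    [empty]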


Re: Interpreting the D grammar

2015-08-02 Thread Xinok via Digitalmars-d

On Sunday, 2 August 2015 at 17:33:35 UTC, Jacob Carlborg wrote:

On 02/08/15 18:37, MakersF wrote:

Of course it's recursive! Do you want the grammar to be able 
to only

define a finite number of programs?


I don't know how this works, that's why I'm asking. But I read 
something about left recursion needing to be removed to be able 
to parse a grammar, at least for some parsers.


There's lots of videos online that show you how to do this. I 
suppose some parsers are smart enough to rewrite the grammar to 
remove left recursion. Otherwise, for a simple parser which does 
nothing more than a breadth-first search, it may require 
exponential time to parse a string.
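
As a sketch, the usual transformation applied to the add 
expression quoted earlier replaces the left-recursive rule with 
a right-recursive "rest" rule (the name addExpression2 is mine):

addExpression:
    mulExpression addExpression2

addExpression2:
    ('+' | '-' | '~') mulExpression addExpression2
    [empty]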


Is Key Type?

2015-08-02 Thread Xinok via Digitalmars-d-learn
Is there a trait in D or Phobos which will tell you if a type can 
be used as a key for an associative array? For example, where T 
is some type:


static assert(isKeyType!T)
int[T] hashTable = ...
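
To my knowledge there is no such trait in Phobos; a hand-rolled 
sketch using __traits(compiles) might look like this (the name 
isKeyType is the hypothetical one from above):

enum isKeyType(T) = __traits(compiles, (T key) { int[T] aa; aa[key] = 0; });

static assert(isKeyType!string);
static assert(!isKeyType!void);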


Re: Stable partition3 implementation

2015-07-22 Thread Xinok via Digitalmars-d

On Saturday, 11 July 2015 at 02:56:55 UTC, Timon Gehr wrote:

...
The algorithm runs in O(n*log*(n)). It's not n log(n).
...


I understand now. I had never heard of an iterated logarithm 
before. I took the asterisk to mean some constant, like a wild 
card if you will. Sorry for the confusion.


I wonder if the true O(n) algorithm is still worth pursuing. 
Although log*(n) may only be a factor of 2 or 3, the O(n) 
algorithm may have a small constant factor which makes it 2-3x 
faster.



That paper appears to be available here:
https://github.com/lispc/lispc.github.com/raw/master/assets/files/situ.pdf


No idea what the constant is though, I have not read the 
paper (yet).


I don't know if there is anything relevant but that paper 
focuses on a

different problem involving sorting.


No, it focuses on a few closely related problems, and the 
O(n*log*(n))-algorithms solves a problem that is a 
straightforward generalization of the problem you are looking 
at. Stable three way partition is stable sorting where there 
are only three distinct values (smaller, equal, greater). The 
paper you provided builds on top of this algorithm. It is the 
main reference and part (ii) of the Algorithm B part of the 
O(n)/O(1) algorithm does not occur in the paper you provided, 
but only in that other paper, so yes, it is relevant. :-)


I get that much now, but this paper is still way above my head. 
There's some notation in the sample code that I don't understand. 
In this statement:


e <- #{ j : L[j] <= x, 1 <= j <= n }

I'm not sure what the pound # signifies or what exactly is being 
assigned to e. I'm assuming it performs some operation on the 
set, like max, but I'm not sure what.


Re: What about GDC and LDC after DMD becomes D?

2015-07-22 Thread Xinok via Digitalmars-d
On Wednesday, 22 July 2015 at 11:42:24 UTC, Shriramana Sharma 
wrote:
Once the front end of DMD becomes fully D, I read that the 
backend will also become D, but then what will happen to GDC 
and LDC whose backends are C++ IIUC?


The Rust compiler is written in Rust but uses LLVM as a backend, 
like LDC. I don't know the complexities of linking a D frontend 
with a C++ backend, but if the Rust team can do it, I'm sure that 
means GDC and LDC will continue to live on.


Re: Stable partition3 implementation

2015-07-10 Thread Xinok via Digitalmars-d

On Friday, 10 July 2015 at 21:26:50 UTC, Timon Gehr wrote:
I think this method is likely to be less practically relevant 
than the one they improve upon. (log* n really is constant for 
all practical purposes, it is the number of times one needs to 
iteratively take the logarithm until a number below 1 is 
obtained.)


log* n grows fast for small inputs, so we can't simply ignore it. 
For example, if n = 2^16 = 65536, then n*lg(n) = 16*2^16 = 2^20 = 
1048576.


That paper appears to be available here: 
https://github.com/lispc/lispc.github.com/raw/master/assets/files/situ.pdf


No idea what the constant is though, I have not read the paper 
(yet).


I don't know if there is anything relevant but that paper focuses 
on a different problem involving sorting.


Re: Stable partition3 implementation

2015-07-10 Thread Xinok via Digitalmars-d
On Saturday, 11 July 2015 at 00:00:47 UTC, Andrei Alexandrescu 
wrote:

On 7/9/15 5:57 PM, Xinok wrote:

...

Any thoughts?


I'd say both would be nice (not to mention the one in the 
paper) so how about both are present selectable with a policy 
ala Yes.tightMemory or something. -- Andrei


I'm hesitant to add yet another template parameter; IMHO, it's 
bad enough that we have to manually write the predicate just to 
reach the swap strategy.


There's a fourth option I didn't think to mention. It's easy to 
utilize a small buffer which would be used to partition small 
sublists instead of insertions. partition3 would not allocate 
its own memory so would be in-place by default, but the user 
could provide their own buffer and pass it as an extra function 
argument. For example:


int[] arr = iota(0, 10).array;
int[] buf = new int[100];
partition3!(...)(arr, 1000, buf);

Interestingly, for some constant k, if you implement this 
algorithm to use O(n/k) space, then it runs in O(n lg n) time 
because it performs at most O(lg k) rotations.


Regarding the algorithm in the paper, I don't have it completely 
figured out yet because it refers to algorithms in other papers 
which I can't find.


Re: Stable partition3 implementation

2015-07-09 Thread Xinok via Digitalmars-d

On Thursday, 9 July 2015 at 21:57:39 UTC, Xinok wrote:
I found this paper which describes an in-place algorithm with 
O(n) time complexity but it's over my head at the moment.


http://link.springer.com/article/10.1007%2FBF01994842


I apologize, I didn't realize this link was behind a paywall. The 
paper is freely available here:


http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.25.5554&rep=rep1&type=pdf


Stable partition3 implementation

2015-07-09 Thread Xinok via Digitalmars-d
I wanted to get some feedback on a stable partition3 
implementation for Phobos before I work on a pull request. I 
wrote this implementation which is in-place (zero memory 
allocations) but runs in O(n lg n) time.


http://dpaste.dzfl.pl/e12f50ad947d

I found this paper which describes an in-place algorithm with 
O(n) time complexity but it's over my head at the moment.


http://link.springer.com/article/10.1007%2FBF01994842

It is trivial to implement an algorithm with O(n) time complexity 
and O(n) space complexity, but that requires allocating memory. 
Furthermore, using a buffer requires the range to have assignable 
elements.
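
For comparison, a minimal sketch of that trivial buffered 
version (it allocates three temporary arrays rather than using 
the in-place approach from the paste above):

import std.algorithm.iteration : filter;
import std.array : array;
import std.range : chain;

// Stable three-way partition around `pivot`: less, equal, greater,
// each group keeping its original relative order. O(n) time, O(n) space.
T[] stablePartition3(T)(T[] arr, T pivot)
{
    auto less    = arr.filter!(a => a < pivot).array;
    auto equal   = arr.filter!(a => a == pivot).array;
    auto greater = arr.filter!(a => a > pivot).array;
    return chain(less, equal, greater).array;
}

unittest
{
    assert(stablePartition3([3, 1, 2, 3, 0, 5], 3) == [1, 2, 0, 3, 3, 5]);
}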



Any thoughts?


Re: Map Purity

2015-06-30 Thread Xinok via Digitalmars-d-learn

On Sunday, 28 June 2015 at 15:55:51 UTC, jmh530 wrote:
My understanding of pure is that a function labeled pure can 
only include pure functions. I've been confused by the fact 
that a function calling map (like below) can be labeled pure 
without any problems. The only way I can rationalize it is that 
map really isn't a function, it's (if I'm understanding it 
correctly) a template that creates a template function that 
calls a struct template.


auto test_map(T)(T x) pure
{
return x.map!(a => a + a);
}

This is related to
http://forum.dlang.org/thread/ppmokalxdgtszzllz...@forum.dlang.org?page=1
but maybe my question is more basic.


Map isn't explicitly marked as pure. D can infer purity for 
templated functions, so as long as you give it a pure predicate, 
map can be inferred as pure. This only works for templated 
functions; non-templated functions must be explicitly marked as 
pure.
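
A small illustration of how the inference flips with the 
predicate (the global variable exists only to introduce 
impurity):

import std.algorithm.iteration : map;

int counter; // module-level mutable state

auto pureUse(int[] xs) pure
{
    return xs.map!(a => a + a);       // fine: the lambda is pure
}

/+
auto impureUse(int[] xs) pure
{
    return xs.map!(a => a + counter); // error: lambda reads mutable global state
}
+/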


Re: A Vision on Improved DMD Template Instantiation Diagonostics

2015-06-12 Thread Xinok via Digitalmars-d

On Friday, 12 June 2015 at 15:51:02 UTC, Per Nordlöw wrote:

On Friday, 12 June 2015 at 15:35:55 UTC, Xinok wrote:
However, template constraints in D are much more powerful and 
flexible than C++ concepts. As such, there are bound to be 
several cases in which the compiler can't provide any useful 
information, no matter how smart it may be.


Can you give an example of such a case where the compiler isn't 
smart enough?


The satisfiability problem, i.e. it may be there is no set of 
inputs that satisfies the template constraints.


https://en.wikipedia.org/wiki/Boolean_satisfiability_problem


Re: A Vision on Improved DMD Template Instantiation Diagonostics

2015-06-12 Thread Xinok via Digitalmars-d

On Friday, 12 June 2015 at 12:20:58 UTC, Per Nordlöw wrote:

...

Destroy.


Improving compiler diagnostics for failed template instantiations 
would be a great start. However, template constraints in D are 
much more powerful and flexible than C++ concepts. As such, there 
are bound to be several cases in which the compiler can't provide 
any useful information, no matter how smart it may be.


In general, I think it needs to be easier for us programmers to 
produce our own error messages specifying why the template failed 
to instantiate. One trick that I use, which others have mentioned 
in other threads, is to use static asserts to find the exact 
condition that failed and print an informative error message.


Another idea I've mentioned before is to make it easy to define a 
default instance when none of the template constraints are 
satisfied. This would be a good place to insert some diagnostics 
to determine why the template instantiation failed and provide a 
meaningful error message.


A crude example:

int foo(T)(T arg) if(...){ }
int foo(T)(T arg) if(...){ }
int foo(T)(T arg) default // Used when all other instances 
fail

{
// Insert diagnostics here
static assert(isRandomAccessRange!T, ...);
...
}


Re: Associative array on the heap

2015-05-18 Thread Xinok via Digitalmars-d-learn

On Tuesday, 19 May 2015 at 00:31:50 UTC, Freddy wrote:

Sorry mis-phrased my question,
 How do you allocate a pointer to an associative 
array (int[string]*)?


Ignoring the why for a moment, one trick is to place it in an 
array literal so it's heap allocated. This requires writing an 
associative array literal with a single key-element pair though.


int[string]* a = [["zero":0]].ptr;


Another trick is to initially define the associative array in a 
class. Since classes are heap allocated, you can allocate an 
instance of the class and grab a pointer to the associative array.


class HeapAA
{
int[string] a;
}

int[string]* b = &(new HeapAA).a;


Inferring Purity Woes

2015-05-15 Thread Xinok via Digitalmars-d
I've been avoiding purity in D for a while but I decided to take 
a stab at it today. I encountered two issues which make inferring 
purity difficult.



(1) While the compiler infers purity for instantiated functions, 
it's unable to infer purity for functions within templates. 
Consider the following three statements:


void foo()(){ }
template T(){  void foo()(){ }  }
template T(){  void foo(){ }  }

The compiler can infer purity for the first two statements but 
not the third. So even though the functions are within a 
template, you still have to make each individual function an 
instantiated function in order to infer purity for them.


I think the compiler should infer purity for all functions, but 
adding support for functions within templates would be good 
enough. Otherwise, you end up with a lot of syntactic noise by 
adding empty parenthesis to all of these function declarations 
without any clear reason why they're even there.



(2) When the compiler fails to infer purity, it prints an error 
"pure function [] cannot call impure function []". However, it 
fails to tell you where the impurity occurs. So if you have a 
series of instantiated functions spanning thousands of lines of 
code, you have to manually sift through all of that code to find 
the impurity.



Are there any bug reports for these two issues? I tried searching 
but couldn't find anything.


Re: Coding for solid state drives

2015-04-25 Thread Xinok via Digitalmars-d

On Saturday, 25 April 2015 at 20:12:55 UTC, Walter Bright wrote:
Hard disks are dead today for anyone who cares about 
performance.


I still use them, but only for secondary storage.


For anybody who wants to buy 4TB of storage for $100, hard drives 
are still very much alive. Not to mention USB flash drives and SD 
cards which don't have the performance characteristics of SSDs.


Let's not be so hasty. Until SSDs truly replace all other forms 
of storage, it's best that we don't optimize D and Phobos for one 
type of storage only.


Re: const as default for variables

2015-03-14 Thread Xinok via Digitalmars-d

On Saturday, 14 March 2015 at 20:15:30 UTC, Walter Bright wrote:
I've often thought, as do many others here, that immutability 
should be the default for variables.


[This is a long term issue. Just thought maybe it's time for a 
conversation about it.]


Because immutable is transitive, declaring variables as 
immutable by default would be problematic. A more practical way 
would be to make them const.


As it is now:

1. int x = 1;       // mutable
2. auto x = 1;      // mutable
3. const x = 1;     // const
4. immutable x = 1; // immutable

Case (1) is what I'm talking about here. If it is made const, 
then there are a couple ways forward in declaring a mutable 
variable:


a) Introduce a new storage class, called 'var' or 'mut'. 
(Please, no bikeshedding on names at the moment. Let's stay on 
topic.)


b) Use 'auto' as meaning 'mutable' if the initializer is also 
mutable. Extend 'auto' to allow an optional type,


auto T t = initializer;

There may be some ambiguity issues with 'auto ref', haven't 
thought it through.



Once there is a non-default way to declare variables as 
mutable, a compiler switch can be added to change the default 
to be const. Eventually, the language can default to them being 
const, with a legacy switch to support the mutable default.


My two and a half cents, I think this is going to lead to all 
sorts of complications and simply wouldn't be worth the hassle. 
While I believe that immutable by default is a fine choice for 
any new language, D was designed with mutable by default in 
mind and it's simply too late to try and change that now.


For example, this could be an issue for generic functions:

void foo(T)(T[] arr){
    T value = arr[0]; // Mutable or immutable?
}

Also, you forgot a fifth case where only part of the type is 
const/immutable:


5. const(T)[] x = ...


Purity not enforced for default arguments?

2015-03-10 Thread Xinok via Digitalmars-d-learn
The following code fails to compile because unpredictableSeed is 
impure:


void main()
{
foreach(i; 0..10) writeln(pureRandom);
}

pure uint pureRandom()
{
auto r = Random(unpredictableSeed);
return r.front;
}

However, make unpredictableSeed a default argument, wrap the call 
in another pure function, and it compiles:


void main()
{
foreach(i; 0..10) writeln(pureRandom2);
}

pure uint pureRandom2()
{
return pureRandom;
}

pure uint pureRandom(uint seed = unpredictableSeed)
{
auto r = Random(seed);
return r.front;
}

I'm inclined to believe this is a bug. While pureRandom could be 
considered weakly pure, pureRandom2 has no arguments so it should 
be strongly pure. Yet, it yields a different value on every call.


Re: Purity not enforced for default arguments?

2015-03-10 Thread Xinok via Digitalmars-d-learn

On Tuesday, 10 March 2015 at 22:00:29 UTC, safety0ff wrote:

On Tuesday, 10 March 2015 at 21:56:39 UTC, Xinok wrote:


I'm inclined to believe this is a bug.


https://issues.dlang.org/show_bug.cgi?id=11048


Thanks for the link, and I didn't mean to post this in D.learn. 


Re: Parallel Merge Sort

2015-03-03 Thread Xinok via Digitalmars-d

On Tuesday, 3 March 2015 at 22:55:05 UTC, Josh wrote:
How can I make my parallel code more efficient? Currently, I am 
getting destroyed by the serial merge sort.


http://pastebin.com/M0GKfTTX

Thank you.


I've implemented a concurrent merge sort if you want to take a 
look at my code. It's much faster than serial merge sort.


https://github.com/Xinok/XSort/blob/master/mergesort.d#L88
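
In case that link goes stale, here is a minimal sketch of the 
same idea using std.parallelism (this is not the XSort code; the 
cutoff of 1024 is arbitrary and the merge step allocates a new 
array instead of merging in place):

import std.algorithm.setops : merge;
import std.algorithm.sorting : sort;
import std.array : array;
import std.parallelism : task, taskPool;

T[] parallelMergeSort(T)(T[] arr, size_t cutoff = 1024)
{
    if (arr.length <= cutoff)
    {
        sort(arr);          // small pieces: plain serial sort
        return arr;
    }
    immutable mid = arr.length / 2;
    // Sort the left half in another task while this thread handles the right half.
    auto left = task!(parallelMergeSort!T)(arr[0 .. mid], cutoff);
    taskPool.put(left);
    auto right = parallelMergeSort(arr[mid .. $], cutoff);
    return merge(left.yieldForce, right).array;
}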


Re: @inverse

2015-02-25 Thread Xinok via Digitalmars-d
On Thursday, 26 February 2015 at 01:40:32 UTC, Andrei 
Alexandrescu wrote:
Since you're here, do you plan to fix stable sort as recently 
discussed? -- Andrei


While the fix seems straightforward, I haven't taken the time to 
study the problem. I plan to do so and if I feel confident that I 
can fix the issue and provide a test case, then I'll make a pull 
request. That's the best I can say for now.


Re: @inverse

2015-02-25 Thread Xinok via Digitalmars-d

On Thursday, 26 February 2015 at 02:05:25 UTC, Sativa wrote:

On Thursday, 26 February 2015 at 00:56:02 UTC, Xinok wrote:
I like the idea but feel that its application is too narrow. 
I prefer features which are more general and offer greater 
flexibility. I believe I've read somewhere that some 
[functional] languages define common patterns and equivalent 
substitutions for optimization purposes.


inc(dec(x)) -> x
dec(inc(x)) -> x
cos(x)^^2 + sin(x)^^2 -> 1


Which would exactly be the result of what Daniel is talking 
about except you are adding invariance which is a harder 
problem yet can be taken care of by the programmer(you would 
never intentionally write cos(x)^2 + sin(x)^2 for anything 
since it is equal to 1 and 1 is more efficient to compute).


Not intentionally, but consider that D is a very generative 
language. Templates and string mixins are used heavily. However, 
the generated code is usually pretty generic so little things 
like trig identities may appear in the code which the compiler 
doesn't know how to optimize away. Some C++ compilers make 
assumptions about standard library functions so techniques 
similar to this are not unheard of.


The problem is one of composition and it is difficult in real 
circumstances since compositions may not be simply ordered.


e.g., what if you have

inc(foo(dec(x)))

?

In this case one can't simplify because one doesn't know what 
foo does.


Hence, to do it properly one would have to create a whole 
compositional system. e.g., @linear, @nonlinear, @additive, 
@commutative, etc...


e.g., if we knew foo was linear then we could simplify the above 
to foo(x).


...and, as you hinted at, most functions are non-linear and 
therefore will make @inverse nearly useless.


That almost seems too complicated, like it would require a 
complete CAS built into the compiler. Some functions don't have 
clearly defined inverses as well, such as x^^3. Then x^^2 is only 
invertible if you limit the domain and range to x >= 0.




I suppose, though, one might be able to do something like setup 
@inverse functions for actions.


e.g., user clicks on button X. The inverse then is sort of an 
undo of that.


In an undo system one expects every action to be linear(have 
an inverse)... Hence it might be useful in such circumstances.


I wouldn't use the term linear to mean invertible. A linear 
function is one such that:

f(c*a + b) = c*f(a) + f(b)


Re: @inverse

2015-02-25 Thread Xinok via Digitalmars-d

On Wednesday, 25 February 2015 at 21:25:49 UTC, Daniel N wrote:
Just throwing an idea out there... How about using annotations 
to teach the compiler which functions are inverses of 
each-other, in order to facilitate optimizing away certain 
redundant operations even if they are located inside a 
library(i.e. no source).


A little pseudo-code for illustrational purposes, in case my 
above text is incomprehensible:


void inc() pure nothrow @inverse(dec)
void dec() pure nothrow @inverse(inc)

void swap(T)(ref T lhs, ref T rhs) pure nothrow @inverse(swap!T)


I like the idea but feel that its application is too narrow. I 
prefer features which are more general and offer greater 
flexibility. I believe I've read somewhere that some [functional] 
languages define common patterns and equivalent substitutions for 
optimization purposes.


inc(dec(x)) -> x
dec(inc(x)) -> x
cos(x)^^2 + sin(x)^^2 -> 1


Re: Consistency

2015-02-15 Thread Xinok via Digitalmars-d

On Sunday, 15 February 2015 at 17:18:08 UTC, Steve D wrote:

Python (built-in)

dict1 = {"a":1, "b":2}
tup1  = (0,1,2,3)
arr1  = [0,1,2,3]  # list type
str1  = "abcd"

print "b" in dict1    # True
print 3 in tup1       # True
print 3 in arr1       # True
print "c" in str1     # True

print tup1.index(2)   # 2
print arr1.index(2)   # 2
print str1.index("c") # 2

There is some sort of consistency of use, though they are 
different sequence/collection types.


Now try the same in D

auto dict1 = ["a":1, "b":2];
auto tup1  = tuple(0,1,2,3);
auto arr1  = [0,1,2,3];
auto str1  = "abcd";

Having some consistency involving 
sequences/arrays/strings/collections etc, which are the 
foundations of any language makes programming much easier, 
intuitive and pleasant. It shouldn't be too difficult for a 
super bare-metal language like D.
I'm honestly not knocking D (I love it), just some thoughts 
(maybe provoke some discussion?, for D3?)


Part of the issue is that "in" means to search by key, not by 
value. In that respect, it only makes sense for dictionaries (and 
perhaps sets). I guess the idea is that this generic code should 
be valid for any container which implements the in operator:


if(a in r) writeln(r[a]);

On a different note, I do wish D had tuple literals like Python 
has.


Re: Consistency

2015-02-15 Thread Xinok via Digitalmars-d

On Sunday, 15 February 2015 at 18:15:13 UTC, Meta wrote:

On Sunday, 15 February 2015 at 17:18:08 UTC, Steve D wrote:

Python (built-in)

dict1 = {"a":1, "b":2}
tup1  = (0,1,2,3)
arr1  = [0,1,2,3]  # list type
str1  = "abcd"

print "b" in dict1    # True

O(1) lookup


A small nitpick, but I'm sure that should be O(log n). 
Dictionaries don't have constant lookup time.


Re: Template constraints

2015-02-14 Thread Xinok via Digitalmars-d
On Saturday, 14 February 2015 at 17:00:33 UTC, Andrei 
Alexandrescu wrote:
There's been recurring discussion about failing constraints not 
generating nice error messages.


void fun(T)(T x) if (complicated_condition) { ... }
struct Type(T)(T x) if (complicated_condition) { ... }

If complicated_condition is not met, the symbol simply 
disappears and the compiler error message just lists it as a 
possible, but not viable, candidate.


I think one simple step toward improving things is pushing the 
condition in a static_assert inside type definitions:


void fun(T)(T x) if (complicated_condition) { ... } // no change
struct Type(T)(T x)
{
  static assert(complicated_condition, "Informative message.");
  ...
}

This should improve error messages for types (only). The 
rationale is that it's okay for types to refuse compilation 
because types, unlike functions, don't overload. The major 
reason for template constraints in functions is allowing for 
good overloading.



Andrei


I've done this myself for the same reason. I may split up the 
condition into multiple 'static assert' statements which gives a 
more specific error message. However, I wonder if there's a 
better solution that we can incorporate in to D. The trouble with 
template constraints is that, if you have complex conditions, 
there's no easy way to fall back to a default state. You would 
have to duplicate all the conditions and write "not this and not 
this and not this and ...".


Template specializations can fall back to a default state, but 
not template constraints. So what if we were to add a feature 
that would allow us to do just that?


void test(Range)(Range r) if(isRandomAccessRange!Range)
{
...
}

void test(Range)(Range r) default
{
static assert(false, "descriptive error message");
}

I'm not claiming this is a better solution; I'm simply putting 
the idea out there.


Re: Suggestion: array slice through arr[base, size]

2015-02-08 Thread Xinok via Digitalmars-d

On Sunday, 8 February 2015 at 15:20:17 UTC, karl wrote:

Hi, it's a bit unwieldy to write/read this:
result = src[base .. base+size];
instead of:
result = src[base, size];

Maybe such syntax would be a welcome addition to D? I don't see 
it conflicting with the existing grammar, and only the 2 
slicing-operators need further extending to support it.


If this is a common pattern in your code, you could write your 
own function to do this and call it using UFCS:


Range slice(Range)(Range r, size_t base, size_t size)
{
return r[base .. base+size];
}

auto arr = [0,1,2,3,4,5,6,7,8,9];
assert(arr.slice(4,2) == [4,5]);


Re: sortUniq

2015-01-22 Thread Xinok via Digitalmars-d
On Thursday, 22 January 2015 at 21:40:57 UTC, Andrei Alexandrescu 
wrote:
There's this classic pattern on Unix: |sort|uniq, i.e. sort some 
data and only display the unique elements.


What would be a better integrated version - one that does 
sorting and uniq in one shot? I suspect the combination could 
be quite a bit better than doing the two in sequence.


A few google searches didn't yield much. Ideas?


Thanks,

Andrei


My thought is that uniq on a sorted list is only an O(n) 
operation, so it's not an expensive operation by any means. If 
there's to be any benefit to a sortUniq, it has to be capable of 
reducing work incurred during the sorting process; else you're 
just going to end up with something less efficient.


One solution may be RedBlackTree, which has an option to disallow 
duplicate elements. This container has three useful properties:


(1) The tree grows one element at a time. This is in opposition 
to other algorithms like quicksort or heapsort in which you must 
operate on the entire set of elements at once.


(2) Removing duplicates only requires a single comparison per 
element, thus retaining the worst-case of |sort|uniq.


(3) Duplicate elements are removed immediately. Inserting an 
element into a balanced tree with N elements is an O(lg n) 
operation, so the smaller n is, the less work that is required.


The shortcoming of RedBlackTree is that it's not very fast. 
However, I don't know any other O(n lg n) sorting algorithm which 
has these properties.
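
A rough sketch of that approach (redBlackTree rejects duplicate 
elements by default, so insertion and deduplication happen in 
one pass):

import std.array : array;
import std.container.rbtree : redBlackTree;

auto sortUniq(T)(T[] input)
{
    auto tree = redBlackTree(input); // duplicates are dropped on insertion
    return tree[].array;             // in-order traversal: sorted, unique elements
}

unittest
{
    assert(sortUniq([3, 1, 2, 3, 1]) == [1, 2, 3]);
}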


Re: dlang.org redesign n+1

2015-01-21 Thread Xinok via Digitalmars-d
On Wednesday, 21 January 2015 at 14:46:22 UTC, Sebastiaan Koppe 
wrote:
Just for fun and proof-of-concept I went ahead and forked the 
dlang.org site. I basically took the 
`do-what-everybody-else-is-doing` approach:


http://dlang.skoppe.eu



This gets a big thumbs up from me. The layout is great, 
everything flows nicely, and there's a good blend of colors with the 
unofficial D red.


The only issue is that some elements aren't laid out correctly in 
Firefox.


Re: auto return for some recursive functions in C++11

2015-01-14 Thread Xinok via Digitalmars-d
On Wednesday, 14 January 2015 at 22:37:22 UTC, ketmar via 
Digitalmars-d wrote:

On Wed, 14 Jan 2015 22:29:08 +
bearophile via Digitalmars-d digitalmars-d@puremagic.com 
wrote:



According to Wikipedia:
http://en.wikipedia.org/wiki/C%2B%2B14#Function_return_type_deduction

C++14 is able to compile code like this:


auto correct(int i) {
 if (i == 1)
 return i;
 else
 return correct(i - 1) + i;
}


But not like this:

auto correct(int i) {
 if (i != 1)
 return correct(i - 1) + i;
 else
 return i;
}


D isn't able to compile both. Is it a good idea to allow the 
first function in D too?


Bye,
bearophile
i'm pretty sure that D *SHOULD* compile the first sample. the 
spec says
that return type for `auto` function is taken from the first 
`return`
operator parser met. so for the second `return` the return type 
is

already known.

i think that this is not just let it compile, but a bug in 
compiler.

something should be fixed: either compiler or specs.


Well, the error the compiler prints is:

Error: forward reference to inferred return type of function call 
'correct'


I played with it a bit and it seems to deduce a common type from 
all the return statements, which would be more in the style of D 
anyways (take type deduction for arrays). For example, the 
following code prints float rather than int:


import std.stdio;

auto one(int i)
{
if(i > 0)
return cast(int)i;
else
return cast(float)i;
}

void main()
{
writeln(typeid(typeof(one(10))));
}


Re: CTFE pow()

2015-01-01 Thread Xinok via Digitalmars-d
On Thursday, 1 January 2015 at 16:56:24 UTC, Manu via 
Digitalmars-d wrote:
Does anyone know how to fix this? Can we please do so? It's 
been a

problem for like 5 years it seems.
It's a bit insane that we can't resolve any non-linear 
functions at

compile time.


I was looking for a relevant issue in the bug tracker. I stumbled 
upon issue [1] and pull request [2] which was closed for the 
following reasons:


1. if(__ctfe) kills inlining in DMD.
2. Other CTFE math can not be implemented until CTFE-able unions.


[1] https://issues.dlang.org/show_bug.cgi?id=5227
[2] 
https://github.com/D-Programming-Language/phobos/pull/2521#issuecomment-56265251
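
As a stopgap for the integer case, a small hand-written power 
function is CTFE-able today (a sketch, not a substitute for 
fixing std.math):

long ipow(long base, uint exp) pure nothrow @nogc @safe
{
    long result = 1;
    while (exp)
    {
        if (exp & 1) result *= base;
        base *= base;
        exp >>= 1;
    }
    return result;
}

static assert(ipow(2, 10) == 1024); // evaluated at compile time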

