Re: Template method and type resolution of return type

2014-04-20 Thread matovitch via Digitalmars-d-learn

On Sunday, 20 April 2014 at 00:55:31 UTC, David Held wrote:
On 4/19/2014 3:31 PM, Andrej Mitrovic via Digitalmars-d-learn 
wrote:

[...]
struct S
{
int get()  { return 0; }
T get(T)() { return T.init; }
}

void main()
{
S s;
float x = s.get();  // which overload? (currently int 
get())

}


Isn't this just because concrete methods are better overload 
candidates than method templates?


Dave


struct S
{
   ubyte get()  { return 0 ; }
   float get()  { return 0.; }
}

void main()
{
   S s;
   float x = s.get();  // doesn't know which overload; doesn't compile.

}


Re: Template method and type resolution of return type

2014-04-20 Thread monarch_dodra via Digitalmars-d-learn

On Sunday, 20 April 2014 at 07:52:08 UTC, matovitch wrote:


struct S
{
   ubyte get()  { return 0 ; }
   float get()  { return 0.; }
}

void main()
{
   S s;
   float x = s.get();  // doesn't know which overload; doesn't compile.

}


What I do find interesting though, is that you are allowed to 
write the overload, whereas C++ would outright block you for 
ambiguity at the source.


This means that with proper meta magic, e.g. 
`__traits(getOverloadSet, S, get)`, you could *manually* 
resolve the ambiguity yourself.
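
Not from the original post: here is a minimal sketch of the kind of 
manual resolution hinted at above. It uses __traits(getOverloads) (as 
matovitch points out just below, that is the trait's actual name) 
together with __traits(child), which only exists in compilers much 
newer than the ones in this 2014 thread; getReturning is a hypothetical 
helper, not a Phobos function.

import std.traits : ReturnType;

struct S
{
    int get()  { return 0; }
    T get(T)() { return T.init; }
}

// Hypothetical helper: call the non-template overload of `get`
// whose return type is exactly R.
R getReturning(R, T)(ref T obj)
{
    foreach (overload; __traits(getOverloads, T, "get"))
    {
        static if (is(ReturnType!overload == R))
            return __traits(child, obj, overload)();
    }
    assert(0, "no overload of get() returns " ~ R.stringof);
}

void main()
{
    S s;
    int a = getReturning!int(s); // explicitly the non-template int get()
    float b = s.get!float();     // the template overload, instantiated by hand (b is float.init)
    assert(a == 0);
}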


Re: Template method and type resolution of return type

2014-04-20 Thread matovitch via Digitalmars-d-learn

On Sunday, 20 April 2014 at 08:28:07 UTC, monarch_dodra wrote:

On Sunday, 20 April 2014 at 07:52:08 UTC, matovitch wrote:


struct S
{
  ubyte get()  { return 0 ; }
  float get()  { return 0.; }
}

void main()
{
  S s;
  float x = s.get();  // doesn't know which overload; doesn't compile.

}


What I do find interesting though, is that you are allowed to 
write the overload, whereas C++ would outright block you for 
ambiguity at the source.


This means that with proper meta magic, e.g. 
`__traits(getOverloadSet, S, get)`, you could *manually* 
resolve the ambiguity yourself.


You mean getOverloads? (Yes, it's interesting.)

How about this:

class S
{
public
{
this() {}

union other {
SFloat u_float;
SUbyte u_ubyte;
}

alias other this;
}
}

struct SFloat
{
float data;
alias data this;
}

struct SUbyte
{
ubyte data;
alias data this;
}

void main()
{
   S s;
   s.other.u_float.data = 0.5;
   //float x = s;
}

This gives:

main.d(31): Error: need 'this' for 'data' of type 'float'
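
A guess at what is going on here (my note, not from the thread): 
`union other { ... }` inside the class declares a nested union *type* 
named `other` rather than a field, so `s.other.u_float.data` reaches 
the union's non-static members through the type with no instance behind 
them, hence the "need 'this'" error. Declaring an actual member of that 
union type makes the access compile; a minimal sketch:

struct SFloat { float data; alias data this; }
struct SUbyte { ubyte data; alias data this; }

class S
{
    union Other
    {
        SFloat u_float;
        SUbyte u_ubyte;
    }
    Other other;      // an instance of the union, not just the type
    alias other this;
}

void main()
{
    S s = new S;
    s.other.u_float.data = 0.5;
}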



Re: std.file.read returns void[] why?

2014-04-20 Thread Andrej Mitrovic via Digitalmars-d-learn
On 4/18/14, monarch_dodra via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:
 Yeah... static assert(void.sizeof == 1); passes :/

Note that you can even have static void arrays. E.g.:

https://issues.dlang.org/show_bug.cgi?id=9691

I'm not sure whether this is an oversight (accepts-invalid) or
something else. But it needs to be properly documented.


Re: number formatting

2014-04-20 Thread steven kladitis via Digitalmars-d-learn

Not sure if you can edit messages once sent.

 $13,456.67
 245,678,541


On Sunday, 20 April 2014 at 12:50:52 UTC, steven kladitis wrote:

How do you format numbers to have things like
 leading $ or , or CR, with or without leading zeros?
  For example: $56.00
  $056.00
  $1,3456.67
  345.89CR




number formatting

2014-04-20 Thread steven kladitis via Digitalmars-d-learn

How do you format numbers to have things like
 leading $ or , or CR, with or without leading zeros?
  For example: $56.00
  $056.00
  $1,3456.67
  345.89CR



Re: number formatting

2014-04-20 Thread monarch_dodra via Digitalmars-d-learn

On Sunday, 20 April 2014 at 12:53:11 UTC, steven kladitis wrote:

Not sure if you can edit messages once sent.

 $13,456.67
 245,678,541


On Sunday, 20 April 2014 at 12:50:52 UTC, steven kladitis wrote:

How do you format numbers to have things like
leading $ or , or CR, with or without leading zeros?
 For example: $56.00
 $056.00
 $1,3456.67
 345.89CR


Simply add what you want in the format string. For example:

double d = 56.55;
writefln("$%06.2f", d);
writefln("%.2fCR", d);

will print
$056.55
56.55CR

I don't know of any built-in way to do number grouping.

Also, when dealing with monetary amounts, you shouldn't be using 
doubles (I'm not saying you are), but some other structure 
specifically designed to track cents. Ideally, such a structure 
would have built-in toString formatting.
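
A rough sketch (mine, not from this thread) of the kind of structure 
meant above: keep the amount as an integer count of cents and do the 
grouping and dollar-sign formatting in toString. The Money name and the 
grouping loop are purely illustrative, and negative amounts are ignored 
for brevity.

import std.conv : to;
import std.format : format;
import std.stdio : writeln;

struct Money
{
    long cents;   // exact cent count, no floating-point rounding

    string toString() const
    {
        long dollars = cents / 100;
        long rem     = cents % 100;

        // group the dollar digits by thousands, right to left
        string digits = dollars.to!string;
        string grouped;
        int n = 0;
        foreach_reverse (c; digits)
        {
            if (n != 0 && n % 3 == 0)
                grouped = "," ~ grouped;
            grouped = c ~ grouped;
            ++n;
        }
        return format("$%s.%02d", grouped, rem);
    }
}

void main()
{
    writeln(Money(1_345_667)); // $13,456.67
    writeln(Money(5_600));     // $56.00
}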


Structs instead of classes for Performance

2014-04-20 Thread Frustrated via Digitalmars-d-learn
I know the difference between a struct and a class but I remember 
seeing somewhere that structs are much faster than classes in D 
for some strange reason.


I'm not worried too much about class allocation performance 
because I will try to use classes when they will not be created 
frequently, and structs or class reuse when they will be.


So, is the only argument really about performance when creating 
structs vs creating classes or was there some finer detail I 
missed?


Basically that boils down to stack allocation vs heap allocation 
speed? Which, while allocation on the heap shouldn't be too much 
slower than stack, the GC makes it worse?




Re: number formatting

2014-04-20 Thread JR via Digitalmars-d-learn

On Sunday, 20 April 2014 at 12:53:11 UTC, steven kladitis wrote:

Not sure if you can edit messages once sent.

 $13,456.67
 245,678,541


On Sunday, 20 April 2014 at 12:50:52 UTC, steven kladitis wrote:

How do you format numbers to have things like
leading $ or , or CR, with or without leading zeros?
 For example: $56.00
 $056.00
 $1,3456.67
 345.89CR


As for grouping by thousands, http://dpaste.dzfl.pl/bddb71eb75bb 
*does* work, but I'm not particularly happy with the approach.


As monarch_dodra said, representing money via a struct or similar 
would be wiser than dealing with raw doubles/reals (as that paste 
does).


Re: Get and set terminal size

2014-04-20 Thread Denis Mezhov via Digitalmars-d-learn

On Saturday, 19 April 2014 at 12:06:58 UTC, FreeSlave wrote:

I use
ldc2 main.d -L-lcurses
or
dmd main.d -L-lcurses

and the following source code:

import std.stdio;

extern(C) int tgetnum(const(char) *id);

int main()
{
    writeln(tgetnum("li"));
    return 0;
}

Note that you don't need to apply toStringz to string literals 
since they implicitly cast to const char*.


It works :) Thanks.


Re: Structs instead of classes for Performance

2014-04-20 Thread Ali Çehreli via Digitalmars-d-learn
My understanding is not perfect. There may be compiler and CPU 
optimizations that I am not aware of.


On 04/20/2014 08:03 AM, Frustrated wrote:

 is the only argument really about performance when creating
 structs vs creating classes

Not only creating but also when using. A class variable is a reference 
to the actual object, implemented by the compiler as a pointer. So, 
there is that extra indirection overhead to access member variables of a 
class object.


When the class variable and the object are far apart in memory, they may 
fall outside of CPU caches.


Further, unless they are defined as final or static, class member 
functions are virtual. Virtual member functions are dispatched through 
the virtual function table (vtbl) pointer. So, a call like o.foo() must 
first hit the class vtbl in memory, read the value of the function 
pointer off that table and then jump to the function.


Related to the above, class objects are larger than struct objects 
because they have the extra vtbl pointer, as well as another pointer 
(monitor) that allows every class object to be used as a synchronization 
item in concurrency.


Larger objects are more expensive because fewer of them can fit in CPU 
caches.
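
A small illustration of the size point (my example, not Ali's); the 
exact numbers depend on the target, but on a typical 64-bit build:

struct PointS { int x, y; }
class  PointC { int x, y; }

void main()
{
    import std.stdio : writeln;

    writeln(PointS.sizeof);                        // 8: just the two ints
    writeln(__traits(classInstanceSize, PointC));  // typically 24: data + vtbl pointer + monitor
    writeln(PointC.sizeof);                        // 8: the size of the reference itself
}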


 Basically that boils down to stack allocation vs heap allocation speed?

Not to forget, class objects can be allocated on the stack as well, with 
std.typecons.scoped.
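
For the scoped point, a minimal sketch (mine) of placing a class 
instance in stack storage:

import std.typecons : scoped;

class C
{
    int x = 42;
}

void main()
{
    auto c = scoped!C();   // instance lives in a stack buffer, not on the GC heap
    assert(c.x == 42);
}   // destroyed deterministically when the scope ends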


 Which, while allocation on the heap shouldn't be too much slower than
 stack, the GC makes it worse?

Stack allocation costs almost nothing, as some location on the stack is 
simply reserved for a given object. There is no allocation or deallocation 
cost at runtime beyond decisions the compiler already made at compile 
time.


On the other hand, any dynamic allocation and deallocation scheme must 
do some work to find room for the object at runtime.


Ali



string -> string literal

2014-04-20 Thread Ellery Newcomer via Digitalmars-d-learn
is there a function in phobos anywhere that takes a string and 
escapes it into a string literal suitable for string mixins? 
something like


assert (f("abc\ndef") == "\"abc\\ndef\"");


Re: Structs instead of classes for Performance

2014-04-20 Thread Frustrated via Digitalmars-d-learn

On Sunday, 20 April 2014 at 16:56:59 UTC, Ali Çehreli wrote:
My understanding is not perfect. There may be compiler and CPU 
optimizations that I am not aware of.


On 04/20/2014 08:03 AM, Frustrated wrote:

 is the only argument really about performance when creating
 structs vs creating classes

Not only creating but also when using. A class variable is a 
reference to the actual object, implemented by the compiler as 
a pointer. So, there is that extra indirection overhead to 
access member variables of a class object.


When the class variable and the object are far apart in memory, 
they may fall outside of CPU caches.


Further, unless they are defined as final or static, class 
member functions are virtual. Virtual member functions are 
dispatched through the virtual function table (vtbl) pointer. 
So, a call like o.foo() must first hit the class vtbl in 
memory, read the value of the function pointer off that table 
and then jump to the function.


Related to the above, class objects are larger than struct 
objects because they have the extra vtbl pointer, as well as 
another pointer (monitor) that allows every class object to be 
used as a synchronization item in concurrency.


Larger objects are more expensive because fewer of them can fit 
in CPU caches.


Yes, but this is the standard argument between structs and 
classes. Obviously the additional benefits of classes cost... 
else no one would use structs. If structs had inheritance, there 
would be no real reason for classes.


I don't mind the cost of classes because I will try to use them 
where appropriate. Also, these problems are not language-specific 
but simply stem from classes being heavier.


The article I read was about D's specific issues and that using 
structs GREATLY sped up certain things... I'm sure it had to do 
with the GC and all that but can't remember.



 Basically that boils down to stack allocation vs heap
allocation speed?

Not to forget, class objects can be allocated on the stack as 
well, with std.typecons.scoped.


 Which, while allocation on the heap shouldn't be too much
slower than
 stack, the GC makes it worse?

Stack allocation costs almost nothing, as some location on the 
stack is simply reserved for a given object. There is no allocation or 
deallocation cost at runtime beyond decisions the compiler already 
made at compile time.


On the other hand, any dynamic allocation and deallocation 
scheme must do some work to find room for the object at runtime.


Ali



Again, all those arguments are about the specific difference 
between a struct and class and apply to all languages that use 
those types of structures.


In D though, I guess because of the GC (which is why I am 
asking, because I don't know specifically), classes could be much 
slower due to all the references causing the GC to take longer to 
scan the heap and all that. If you allocate or free a lot of classes 
in a short period of time, it can also cause issues IIRC.


I just can't remember if there were some other weird reasons why 
D's classes are, in general, not as performant as they should be. 
If I remember correctly, I came across a page that compared a few 
test cases with the GC on and off, and there was a huge factor 
involved showing that the GC had a huge impact on performance.






Re: string -> string literal

2014-04-20 Thread monarch_dodra via Digitalmars-d-learn

On Sunday, 20 April 2014 at 17:55:25 UTC, Ellery Newcomer wrote:
is there a function in phobos anywhere that takes a string and 
escapes it into a string literal suitable for string mixins? 
something like


assert (f("abc\ndef") == "\"abc\\ndef\"");


It's a bit hackish, but it avoids deploying code and reinventing 
anything. You can use `format`'s string-range formatting to print 
the string escaped. Catch that, and then do it again:


import std.format : format;
import std.stdio : writefln;

string s = "abc\ndef";
writefln("[%s]\n", s); //raw

s = format("%(%s%)", [s]);
writefln("[%s]\n", s); //escaped

s = format("%(%s%)", [s]);
writefln("[%s]\n", s); //escapes are escaped

As you can see from the output, after two iterations:

[abc
def]

["abc\ndef"]

["\"abc\\ndef\""]

I seem to recall that printing strings escaped has been 
requested before, but, AFAIK, this is the best we are currently 
providing.


Unless you call std.format's formatElement directly. However, 
this is an internal and undocumented function, and the fact it 
isn't private is probably an oversight.


Re: Structs instead of classes for Performance

2014-04-20 Thread anonymous via Digitalmars-d-learn

On Sunday, 20 April 2014 at 18:08:19 UTC, Frustrated wrote:
In D though, I guess because of the GC (which is why I am 
asking, because I don't know specifically), classes could be 
much slower due to all the references causing the GC to take 
longer to scan the heap and all that. If you allocate or free a lot of 
classes in a short period of time, it can also cause issues IIRC.


(You probably know this, but just so that we're on the same page:)
Structs on the stack are not GC'ed. They don't add garbage, they
don't trigger collections. When you `new` a struct the GC is in
charge again.
Class instances on the heap are GC'ed. Putting them on the stack
isn't typical, and somewhat for experts. After all, if you want
to put it on the stack, you can probably use a (more
light-weight) struct instead.
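
The distinction in a few lines of code (my sketch, not from the post):

struct P { int x; }
class  C { int x; }

void main()
{
    P a;           // stack storage, no GC involvement
    P* b = new P;  // the same struct, but now on the GC heap
    C c = new C;   // a class instance, on the GC heap (the usual case)
}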

I'd expect many short-lived objects to not perform very well.
There would be much garbage, it would have to be collected often.
D's GC isn't the best in town. Advice regarding GC performance
often comes down to avoiding collections.
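
One concrete form that advice often takes (a sketch of mine, not 
something stated in this thread): hold automatic collections back 
around an allocation-heavy section with core.memory.GC, then collect 
once at a point you choose.

import core.memory : GC;

void allocationHeavyWork()
{
    GC.disable();                // no automatic collections in here
    foreach (i; 0 .. 100_000)
        cast(void) new Object(); // still allocates, just never collects mid-loop
    GC.enable();

    GC.collect();                // one collection, when it suits us
}

void main()
{
    allocationHeavyWork();
}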

I just can't remember if there were some other weird reasons why 
D's classes are, in general, not as performant as they should 
be. If I remember correctly, I came across a page that compared 
a few test cases with the GC on and off, and there was a huge 
factor involved showing that the GC had a huge impact on 
performance.


I guess that shows that it's the collections that are slow. So,
D's classes probably /could/ perform better with a better GC.
That's mostly an issue with the GC implementation, I think, not
so much a consequence of D's design.

Coming back to your original questions:

On Sunday, 20 April 2014 at 15:03:34 UTC, Frustrated wrote:
So, is the only argument really about performance when creating 
structs vs creating classes or was there some finer detail I 
missed?


Yes, it's all about performance, I think. You can write correct
programs using classes for everything.

Basically that boils down to stack allocation vs heap 
allocation speed? Which, while allocation on the heap shouldn't 
be too much slower than stack, the GC makes it worse?


Stack vs heap does make a difference. It's an indirection and
smarter people than me can think about caches and stuff. The GC
does make it worse, especially in its current incarnation.

But then, if you care enough about performance, and you need to
use the heap, then D does allow you to manage your memory
yourself, without going through the GC.
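
One hedged example of what that can look like: construct the object in 
memory obtained from C's malloc with std.conv.emplace, so the GC never 
sees it. This is my own sketch, not something proposed in the thread.

import core.stdc.stdlib : malloc, free;
import std.conv : emplace;

class C
{
    int x;
    this(int x) { this.x = x; }
}

void main()
{
    auto size = __traits(classInstanceSize, C);
    void[] mem = malloc(size)[0 .. size];   // raw, non-GC storage
    scope (exit) free(mem.ptr);

    C c = emplace!C(mem, 7);                // construct in place
    assert(c.x == 7);
    destroy(c);                             // run the destructor manually
}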


async socket programming in D?

2014-04-20 Thread Bauss via Digitalmars-d-learn
I know the socket has the nonblocking setting, but how would I 
actually go about using it in D? Is there a specific procedure 
for it to work correctly, etc.?



I've taken a look at splat.d but it seems to be very outdated, so 
that's why I went ahead and asked here as I'd probably have to 
end up writing my own wrapper.


Re: Function to print a diamond shape

2014-04-20 Thread Jay Norwood via Digitalmars-d-learn

On Tuesday, 25 March 2014 at 08:42:30 UTC, monarch_dodra wrote:


Interesting. I'd have thought the extra copy would be an 
overall slowdown, but I guess that's not the case.




I installed Ubuntu 14.04 64-bit and measured some of these 
examples using gdc, ldc and dmd on a Core i3 box.  The examples 
that wouldn't build had something to do with array.replicate and 
range.replicate conflicting in the libraries for the gdc and ldc 
builds, which were based on 2.064.2.



This is the ldc2 (0.13.0 alpha)(2.064.2) result:
jay@jay-ubuntu:~/ec_ddt/workspace/diamond/source$ ./main 1>/dev/null

brad: time: 2107[ms]
sergei: time: 2441[ms]
jay2: time: 26[ms]
diamondShape: time: 679[ms]
printDiamond: time: 19[ms]
printDiamonde2a: time: 9[ms]
printDiamonde2b: time: 8[ms]
printDiamond3: time: 14[ms]

This is the gdc(2.064.2) result:
jay@jay-ubuntu:~/ec_ddt/workspace/diamond/source$ ./a.out 1>/dev/null

brad: time: 3216[ms]
sergei: time: 2828[ms]
jay2: time: 26[ms]
diamondShape: time: 776[ms]
printDiamond: time: 19[ms]
printDiamonde2a: time: 13[ms]
printDiamonde2b: time: 13[ms]
printDiamond3: time: 51[ms]

This is the dmd(2.065) result:
jay@jay-ubuntu:~/ec_ddt/workspace/diamond/source$ ./main 1>/dev/null

brad: time: 10830[ms]
sergei: time: 3480[ms]
jay2: time: 29[ms]
diamondShape: time: 2462[ms]
printDiamond: time: 23[ms]
printDiamonde2a: time: 13[ms]
printDiamonde2b: time: 10[ms]
printDiamond3: time: 23[ms]


So this printDiamonde2b example had the fastest time of the 
solutions, and had similar times on all three builds. The ldc2 
compiler build performs best in most examples on Ubuntu.


import std.array : appender, uninitializedArray;
import std.stdio : write;

void printDiamonde2b(in uint N)
{
    uint N2 = N/2;

    char[] pSpace = uninitializedArray!(char[])(N2);
    pSpace[] = ' ';

    char[] pStars = uninitializedArray!(char[])(N+1);
    pStars[] = '*';
    pStars[$-1] = '\n';

    auto w = appender!(char[])();
    w.reserve(N*3);

    // top half, including the middle row
    foreach (n; 0 .. N2 + 1){
        w.put(pSpace[0 .. N2 - n]);
        w.put(pStars[$-2*n-2 .. $]);
    }

    // bottom half
    foreach_reverse (n; 0 .. N2){
        w.put(pSpace[0 .. N2 - n]);
        w.put(pStars[$-2*n-2 .. $]);
    }

    write(w.data);
}






Re: async socket programming in D?

2014-04-20 Thread Tolga Cakiroglu via Digitalmars-d-learn

On Sunday, 20 April 2014 at 22:44:28 UTC, Bauss wrote:
I know the socket has the nonblocking setting, but how would I 
actually go about using it in D? Is there a specific procedure 
for it to work correctly, etc.?



I've taken a look at splat.d but it seems to be very outdated, 
so that's why I went ahead and asked here as I'd probably have 
to end up writing my own wrapper.


The Socket class has a `blocking` property that is boolean. 
Before starting to listen as a server socket, or connecting to 
a server as a client, if you set it to false, then you can use 
the socket (waiting for a client with `accept`, or receiving messages) 
asynchronously.


As you may already know, a client socket created by the `accept` 
method of a non-blocking server socket is non-blocking as well. So 
you don't have to do anything about it.
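
A minimal sketch of that non-blocking pattern with std.socket (my own 
example, not code from this thread). It polls with Socket.select rather 
than spinning on accept(); the port, the buffer size and the 100 ms 
timeout are arbitrary, and error handling plus removal of closed clients 
is left out.

import std.socket;
import core.time : msecs;

void main()
{
    auto listener = new TcpSocket();
    listener.blocking = false;                       // non-blocking listener
    listener.bind(new InternetAddress("127.0.0.1", 4000));
    listener.listen(10);

    auto readSet = new SocketSet();
    Socket[] clients;

    while (true)
    {
        readSet.reset();
        readSet.add(listener);
        foreach (c; clients)
            readSet.add(c);

        // wait up to 100 ms for anything readable instead of blocking forever
        if (Socket.select(readSet, null, null, 100.msecs) <= 0)
            continue;

        if (readSet.isSet(listener))
            clients ~= listener.accept();            // accepted socket is non-blocking too

        foreach (c; clients)
        {
            if (!readSet.isSet(c))
                continue;
            ubyte[1024] buf;
            auto got = c.receive(buf[]);
            if (got > 0)
            {
                // handle buf[0 .. got] here
            }
        }
    }
}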


Re: On Concurrency

2014-04-20 Thread Etienne Cimon via Digitalmars-d-learn

On 2014-04-18 13:20, Nordlöw wrote:

Could someone please give some references to thorough explanations of
these latest concurrency mechanisms

- Go: Goroutines
- Coroutines (Boost):
   - https://en.wikipedia.org/wiki/Coroutine
   -
http://www.boost.org/doc/libs/1_55_0/libs/coroutine/doc/html/coroutine/intro.html

- D: core.thread.Fiber: http://dlang.org/library/core/thread/Fiber.html
- D: vibe.d

and how they relate to the following questions:

1. Is D's Fiber the same as a coroutine? If not, how do they differ?

2. Typical usecases when Fibers are superior to threads/coroutines?

3. What mechanism does/should D's builtin Threadpool ideally use to
package and manage computations?

4. I've read that vibe.d has a more lightweight mechanism than what
core.thread.Fiber provides. Could someone explain to me the difference?
When will this be introduced and will this be a breaking change?

5. And finally how does data sharing/immutability relate to the above
questions?


I'll admit that I'm not the expert you may be expecting for this, but I 
can answer 1, 2, and 5 somewhat. Coroutines, fibers, threads, 
multi-threading and all of this task-management stuff is a very complex 
science, and most kernels actually rely on it to do their magic. Keeping 
stack frames around with their contexts is the core idea. Working with it 
made me feel like it's much more complex than meta-programming, but I've 
been reading and getting the hang of it over the last 7 months.


Coroutines give you control over what exactly you'd like to keep around 
once the yield returns. You make a callback with 
boost::asio::yield_context or something of the like, and it will contain 
exactly what you're expecting, but you receive it in another function 
that takes it as a parameter. That makes it asynchronous, but it can't 
just resume within the same function, because it relies on a callback 
function, much like JavaScript.


D's fibers are much simpler (we can argue whether that makes them more or 
less powerful): you launch them like a thread ( Fiber fib = new Fiber( 
delegate ) ) and just move from fiber to fiber with fiber.call() and 
Fiber.yield(). The yield function, called within a fiber-run function, 
will stop in the middle of that function's work if you want, and it will 
just return as if the function had ended; but you can rest assured that 
once that fiber instance is called again, it will resume with all the 
stack info restored. Fibers are made possible through some very low-level 
assembly magic; you can look through the library, it's really impressive. 
The guy who wrote it must be some kind of wizard.
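
A tiny illustration (mine, not from this reply) of that call/yield dance 
with core.thread.Fiber:

import core.thread : Fiber;
import std.stdio : writeln;

void main()
{
    auto fib = new Fiber({
        writeln("step 1");
        Fiber.yield();       // suspend here; the stack is kept alive
        writeln("step 2");
    });

    fib.call();              // runs until the yield, prints "step 1"
    writeln("back in main");
    fib.call();              // resumes after the yield, prints "step 2"
}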


Vibe.d's fibers are built right on top of this core.thread.Fiber 
(explained above), with the slight difference that they're given more 
power by being put on top of a kernel-powered event loop rotating 
infinitely in epoll or Windows message queues to resume them (the 
libevent driver for vibe.d is the best-developed event loop for this). So 
basically, when a new Task is started (it has the Fiber class as a 
private member), you can yield it with yield() until the kernel wakes it 
up again with a timer, socket event, signal, etc., and it will resume 
right after the yield() call. This is what lets vibe.d have async I/O 
while remaining procedural, without having to juggle mutexes: the fiber 
is yielded every time it needs to wait on the network sockets, and woken 
again when packets are received, until the expected buffer length is met!


I believe this answer is very mediocre, and you could go on reading about 
all I said for months; it's a very wide subject. You can have Task 
message queues and Task concurrency with Task semaphores: it's like 
multi-threading in a single thread!


Re: async socket programming in D?

2014-04-20 Thread Etienne Cimon via Digitalmars-d-learn

On 2014-04-20 18:44, Bauss wrote:

I know the socket has the nonblocking setting, but how would I actually
go about using it in D? Is there a specific procedure for it to work
correctly, etc.?


I've taken a look at splat.d but it seems to be very outdated, so that's
why I went ahead and asked here as I'd probably have to end up writing
my own wrapper.


I was actually working on this in a new event loop for vibe.d here: 
https://github.com/globecsys/vibe.d/tree/native-events/source/vibe/core/events


I've left it without activity for a week b/c I'm currently busy making a 
(closed source) SSL library to replace openSSL in my projects, but I'll 
return to this one project here within a couple weeks at most.


It doesn't build yet, but you can probably use some of it at least as a 
reference; it took me a while to harvest the info on the Windows and 
Linux kernels for async I/O. Some interesting parts, like what you 
wanted, are found here:

https://github.com/globecsys/vibe.d/blob/native-events/source/vibe/core/events/epoll.d#L403

I think I was at handling a new connection or incoming data though, so 
you won't find accept() or read callbacks, but with it I think it was 
pretty much ready for async TCP.


Re: async socket programming in D?

2014-04-20 Thread Etienne Cimon via Digitalmars-d-learn

On 2014-04-21 00:32, Etienne Cimon wrote:

On 2014-04-20 18:44, Bauss wrote:

I know the socket has the nonblocking setting, but how would I actually
go about using it in D? Is there a specific procedure for it to work
correctly, etc.?


I've taken a look at splat.d but it seems to be very outdated, so that's
why I went ahead and asked here as I'd probably have to end up writing
my own wrapper.


I was actually working on this in a new event loop for vibe.d here:
https://github.com/globecsys/vibe.d/tree/native-events/source/vibe/core/events


I've left it without activity for a week b/c I'm currently busy making a
(closed source) SSL library to replace openSSL in my projects, but I'll
return to this one project here within a couple weeks at most.

It doesn't build yet, but you can probably use some of it at least as a
reference; it took me a while to harvest the info on the Windows and Linux
kernels for async I/O. Some interesting parts, like what you wanted,
are found here:
https://github.com/globecsys/vibe.d/blob/native-events/source/vibe/core/events/epoll.d#L403


I think I was at handling a new connection or incoming data though, so
you won't find accept() or read callbacks, but with it I think it was
pretty much ready for async TCP.


But of course, nothing stops you from using vibe.d with libevent ;)