Re: Thin UTF8 string wrapper

2019-12-06 Thread Jonathan Marler via Digitalmars-d-learn
On Friday, 6 December 2019 at 16:48:21 UTC, Joseph Rushton 
Wakeling wrote:

Hello folks,

I have a use-case that involves wanting to create a thin struct 
wrapper of underlying string data (the idea is to have a type 
that guarantees that the string has certain desirable 
properties).


The string is required to be valid UTF-8.  The question is what 
the most useful API is to expose from the wrapper: a sliceable 
random-access range?  A getter plus `alias this` to just treat 
it like a normal string from the reader's point of view?


One factor that I'm not sure how to address w.r.t. a full range 
API is how to handle iterating over elements: presumably they 
should be iterated over as `dchar`, but how to implement a 
`front` given that `std.encoding` gives no way to decode the 
initial element of the string that doesn't also pop it off the 
front?


I'm also slightly disturbed to see that 
`std.encoding.codePoints` requires `immutable(char)[]` input: 
surely it should operate on any range of `char`?


I'm inclining towards the "getter + `alias this`" approach, but 
I thought I'd throw the problem out here to see if anyone has 
any good experience and/or advice.


Thanks in advance for any thoughts!

All the best,

 -- Joe


Good questions. I don't have answers to them all but I hope this 
information is helpful.


I use wrapper structs to represent properties in this way as 
well.  For example, my "mar" library has the SentinelPtr and 
SentinelArray types, which guarantee that the underlying pointer 
and/or array is terminated by some value (i.e. like a 
null-terminated C string).


If I'm creating and using these wrapper types inside a 
self-contained program then I don't really care about API 
compatibility, so I would use a simple, powerful mechanism like 
"alias this".  For libraries where the API boundary is important, 
I implement the most limited API I can.  The reason for this is 
that it allows you to see all possible interactions with the type.  
That way, when you need to change the API, you know all the 
existing ways it can be interacted with and can iterate on the API 
design appropriately.  This is the case for SentinelPtr and 
SentinelArray: for those I only implement the operations I 
know are being used, and I made this easy by creating a simple 
module I call "wrap.d" 
(https://github.com/dragon-lang/mar/blob/master/src/mar/wrap.d).


If you have a struct that wraps a string and guarantees it's UTF-8 
encoded, wrap.d lets you declare that it's a wrapper type and 
lets you mix in the operations you want to expose, like this:


struct Utf8String
{
    private string str;
    import mar.wrap;

    // This verifies that the size of the wrapper struct and the
    // underlying field are the same, and creates the wrappedValueRef
    // method that the other wrapper mixins use to access the
    // underlying wrapped value.
    mixin WrapperFor!"str";

    // Now you can mix in different operations, for example:
    mixin WrapOpCast;
    mixin WrapOpIndex;
    mixin WrapOpSlice;
}
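On the question of how to implement `front` without popping: std.utf.decode (as opposed to std.encoding) takes the index by ref and leaves the string itself untouched, so the wrapper can peek at the first code point. A minimal sketch, independent of mar (`Utf8Range` and its members are illustrative names, not an existing API):

```d
import std.utf : decode;

struct Utf8Range
{
    private string str;

    @property bool empty() const { return str.length == 0; }

    // decode reads the code point at the given index without
    // consuming it, so front can peek...
    @property dchar front()
    {
        size_t index = 0;
        return decode(str, index);
    }

    // ...and popFront advances by re-decoding and slicing past it.
    void popFront()
    {
        size_t index = 0;
        decode(str, index); // index now points just past the first code point
        str = str[index .. $];
    }
}

unittest
{
    import std.algorithm : equal;
    auto r = Utf8Range("héllo");
    assert(r.front == 'h'); // peeking does not consume
    assert(r.front == 'h');
    assert(r.equal("héllo"d));
}
```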


On the topic of immutable(char)[] vs const(char)[]: if a function 
takes const data, I take it to mean that the function won't 
change the data.  If it takes immutable data, I take it to mean 
that the function won't change it AND the caller must ensure the data 
won't change while the function has it.  However, in practice, 
functions that require immutable data still declare their data as 
"const" instead of "immutable".  I think this is because 
declaring it as immutable would require extra boilerplate all 
over your code to cast data to immutable all the time.  So most 
functions end up using const even though they require immutable.
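The conversion rules behind that boilerplate can be seen in a few lines (the function names here are illustrative):

```d
void takesConst(const(char)[] s) { }
void takesImmutable(immutable(char)[] s) { }

void main()
{
    char[] mutableBuf = "hello".dup;
    string immutableStr = "hello";

    // Both mutable and immutable data implicitly convert to const,
    // so a const parameter accepts everything with no ceremony.
    takesConst(mutableBuf);
    takesConst(immutableStr);

    // An immutable parameter only accepts immutable data; callers
    // holding mutable data must copy (.idup) or cast, which is the
    // boilerplate that pushes APIs toward const.
    takesImmutable(immutableStr);
    //takesImmutable(mutableBuf);    // error: char[] is not immutable
    takesImmutable(mutableBuf.idup); // ok, at the cost of a copy
}
```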




Re: Any 3D Game or Engine with examples/demos which just work (compile) out of the box on linux ?

2019-10-20 Thread Jonathan Marler via Digitalmars-d-learn
On Friday, 18 October 2019 at 06:11:37 UTC, Ferhat Kurtulmuş 
wrote:

On Friday, 18 October 2019 at 05:52:19 UTC, Prokop Hapala wrote:
Already >1 year I consider to move from C++ to Dlang or to 
Rust in my hobby game development (mostly based on physical 
simulations 
https://github.com/ProkopHapala/SimpleSimulationEngine). I 
probably prefer Dlang because it compiles much faster, and I 
can copy C/C++ code to it without much changes.


[...]


I cannot make any comment for others. But Dagon should work. I 
wrote a very little demo game some time ago 
https://github.com/aferust/dagon-shooter. I didn't try to 
compile and run it on Linux. I think you need to have a 
nuklear.so in your path, since Bindbc loaders try to load 
dynamic libraries by default.


This is what I get when I clone dagon-shooter and build it with 
"dub":


WARNING: A deprecated branch based version specification is used 
for the dependency dagon. Please use numbered versions instead. 
Also note that you can still use the dub.selections.json file to 
override a certain dependency to use a branch instead.
WARNING: A deprecated branch based version specification is used 
for the dependency bindbc-soloud. Please use numbered versions 
instead. Also note that you can still use the dub.selections.json 
file to override a certain dependency to use a branch instead.
Performing "debug" build using 
C:\tools\dmd.2.088.1.windows\dmd2\windows\bin\dmd.exe for x86_64.
bindbc-loader 0.2.1: target for configuration "noBC" is up to 
date.
bindbc-soloud ~master: target for configuration "library" is up 
to date.
bindbc-opengl 0.8.0: target for configuration "dynamic" is up to 
date.
bindbc-sdl 0.8.0: target for configuration "dynamic" is up to 
date.
dlib 0.17.0-beta1: target for configuration "library" is up to 
date.

dagon ~master: target for configuration "library" is up to date.
dagon-shooter ~master: building configuration "application"...
source\enemy.d(10,1): Error: undefined identifier EntityController
source\enemy.d(16,19): Error: function 
enemyctrl.EnemyController.update does not override any function

source\enemy.d(48,1): Error: undefined identifier EntityController
source\enemy.d(77,19): Error: function 
enemyctrl.BoomController.update does not override any function

source\mainscene.d(80,17): Error: undefined identifier LightSource
source\mainscene.d(82,21): Error: undefined identifier 
FirstPersonView

source\mainscene.d(93,16): Error: undefined identifier NuklearGUI
source\mainscene.d(95,15): Error: undefined identifier FontAsset
source\mainscene.d(102,5): Error: undefined identifier 
SceneManager
source\mainscene.d(128,19): Error: function 
mainscene.MainScene.onAssetsRequest does not override any function
source\mainscene.d(198,19): Error: function 
mainscene.MainScene.onAllocate does not override any function
source\mainscene.d(469,19): Error: function void 
mainscene.MainScene.onUpdate(double dt) does not override any 
function, did you mean to override void 
dagon.resource.scene.Scene.onUpdate(Time t)?
source\mainscene.d(541,1): Error: undefined identifier 
SceneApplication
C:\tools\dmd.2.088.1.windows\dmd2\windows\bin\dmd.exe failed with 
exit code 1.


Re: Is betterC affect to compile time?

2019-07-25 Thread Jonathan Marler via Digitalmars-d-learn

On Thursday, 25 July 2019 at 12:46:48 UTC, Oleg B wrote:
On Thursday, 25 July 2019 at 12:34:15 UTC, rikki cattermole 
wrote:

Those restrictions don't stop at runtime.


It's vary sad.

What reason for such restrictions? It's fundamental idea or 
temporary implementation?


Yes, it is very sad.  It's an implementation thing.  I can guess 
at a couple of reasons why it doesn't work, but I think there are a 
few big ones that contribute to not being able to use certain 
features at compile time without having them introduce things at 
runtime.




Re: Mixin mangled name

2019-07-01 Thread Jonathan Marler via Digitalmars-d-learn

On Monday, 1 July 2019 at 19:40:09 UTC, Andrey wrote:

Hello,
Is it possible to mixin in code a mangled name of some entity 
so that compiler didn't emit undefined symbol error? For 
example mangled function name or template parameter?


If you've got undefined symbol "foo", you could just add this to 
one of your modules:


extern (C) void foo() { }
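If the undefined symbol is a D symbol rather than a C one, `.mangleof` is the usual way to get at mangled names at compile time; a small sketch (the function here is illustrative):

```d
import std.stdio;

void someFunc(int x) { }

void main()
{
    // .mangleof yields the symbol's mangled name as a compile-time
    // string; it can be printed, mixed into generated code, or used
    // when you need to match a specific linker-level name.
    pragma(msg, someFunc.mangleof); // printed at compile time
    writeln(someFunc.mangleof);     // available at runtime too
}
```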



Re: make C is scriptable like D

2019-06-20 Thread Jonathan Marler via Digitalmars-d-learn

On Thursday, 20 June 2019 at 06:20:17 UTC, dangbinghoo wrote:

hi there,

a funny thing:


$ cat rgcc
#!/bin/sh
cf=$@
mycf=__`echo $cf|xargs basename`
cat $cf | sed '1d'  > ${mycf}
gcc ${mycf} -o a.out
rm ${mycf}
./a.out

$ cat test.c
#!/home/user/rgcc
#include <stdio.h>
int main()
{
printf("hello\n");
}


And then,


chmod +x test.c
./test.c


output hello.

is rdmd implemented similarly?

thanks!


binghoo


rdmd adds a few different features as well, but the bigger thing 
it does is cache the results in a global temporary directory.  So 
if you run rdmd on the same file with the same options twice, the 
second time it won't compile anything; it will detect that it was 
already compiled and just run it.
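A quick way to see both behaviors, the shebang trick plus the cache, might look like this (paths and output illustrative):

```
$ cat hello.d
#!/usr/bin/env rdmd
import std.stdio;
void main() { writeln("hello"); }

$ chmod +x hello.d
$ ./hello.d    # first run: rdmd compiles into its cache directory, then runs
hello
$ ./hello.d    # second run: sources and options unchanged, so no recompile
hello
```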


dmd -nodefaultlibs?

2019-05-21 Thread Jonathan Marler via Digitalmars-d-learn
Is there a way to prevent dmd from adding any default libraries 
to its linker command?


Something equivalent to "-nodefaultlibs" from gcc? 
https://gcc.gnu.org/onlinedocs/gcc/Link-Options.html


I'd still like to use the dmd.conf file, so I don't want to use 
"-conf="




DMD Test Suite Windows

2017-12-18 Thread Jonathan Marler via Digitalmars-d-learn
Trying to run the dmd test suite on Windows, it looks like Digital 
Mars "make" doesn't work with the Makefile; I tried GNU Make 3.81 
but had no luck with that either.  Anyone know which version of make 
it is supposed to work with on Windows?  Is it supposed to work 
on Windows at all?


Re: range of ranges into one range?

2017-06-26 Thread Jonathan Marler via Digitalmars-d-learn

On Monday, 26 June 2017 at 06:19:07 UTC, rikki cattermole wrote:

Perhaps?
http://dlang.org/phobos/std_algorithm_iteration.html#.joiner


Thank you.


range of ranges into one range?

2017-06-26 Thread Jonathan Marler via Digitalmars-d-learn
I'm using the phobos "chain" function to iterate over a set of 
string arrays.  However, one of the variables is actually an 
array of structs that each contain a string array.  So I use 
"map" to get a range of string arrays, but "chain" expects each 
variable to be a string array, not a range of string arrays.  The 
solution I came up with was to find a way to convert a range of 
ranges into one range.  I couldn't find this transformation in 
phobos.  Does anyone know if it already exists?  It's actually 
similar to chain itself but chains a range of ranges, not a tuple 
of ranges.  I implemented something that works below, but would 
love to use something that already exists in phobos.


import std.stdio, std.range, std.algorithm;

struct MyStrings
{
    string[] strings;
}

void main()
{
    auto normalStringArray1 = ["a", "b", "c"];
    auto normalStringArray2 = ["d", "e", "f"];
    auto myStringArrays = [
        MyStrings(["g", "h", "i"]),
        MyStrings(["j", "k", "l"]),
    ];

    foreach(str; chain(
        normalStringArray1,
        normalStringArray2,
        splice(map!(a => a.strings)(myStringArrays))))
    {
        writeln(str);
    }
}

auto splice(Range)(Range inputRange)
{
    static struct SplicedRanges(K)
    {
        static assert(K.init.empty);

        Range inputRange;
        K current;
        this(Range inputRange)
        {
            this.inputRange = inputRange;
            if(!this.inputRange.empty)
            {
                current = this.inputRange.front;
                if(current.empty)
                {
                    setCurrentToNext();
                }
            }
        }

        private void setCurrentToNext()
        {
            while(!inputRange.empty)
            {
                inputRange.popFront;
                if(inputRange.empty)
                {
                    break;
                }
                current = inputRange.front;
                if(!current.empty)
                {
                    break;
                }
            }
        }

        @property bool empty()
        {
            return current.empty;
        }
        @property auto front()
        {
            return current.front;
        }
        void popFront()
        {
            if(!current.empty)
            {
                current.popFront;
                if(!current.empty)
                {
                    return;
                }
            }
            setCurrentToNext();
        }
    }

    return SplicedRanges!(typeof(inputRange.front))(inputRange);
}
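For reference, Phobos does ship this transformation: std.algorithm.iteration.joiner (pointed out in the reply above) lazily flattens a range of ranges, so splice can be replaced with:

```d
import std.algorithm : equal, joiner, map;

struct MyStrings
{
    string[] strings;
}

void main()
{
    auto myStringArrays = [
        MyStrings(["g", "h", "i"]),
        MyStrings(["j", "k", "l"]),
    ];

    // joiner flattens the range of string[] into one range of
    // strings, which is what splice above implements by hand.
    auto flat = myStringArrays.map!(a => a.strings).joiner;
    assert(flat.equal(["g", "h", "i", "j", "k", "l"]));
}
```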


Re: Binding a udp socket to a port(on the local machine)

2017-04-23 Thread Jonathan Marler via Digitalmars-d-learn

On Saturday, 22 April 2017 at 21:24:33 UTC, Chainingsolid wrote:
I couldn't figure out how to make a udp socket bound to a port 
of my choosing on the local machine, to use for listening for 
incoming connections.


I assume you meant "incoming datagrams" and not "incoming 
connections".


import std.stdio, std.socket;

void main()
{
    bool ipv6;
    ushort port = 1000;

    Address bindAddress;
    if(ipv6)
    {
        bindAddress = new Internet6Address(Internet6Address.ADDR_ANY, port);
    }
    else
    {
        bindAddress = new InternetAddress(InternetAddress.ADDR_ANY, port);
    }

    Socket udpSocket = new Socket(bindAddress.addressFamily,
        SocketType.DGRAM, ProtocolType.UDP);
    udpSocket.bind(bindAddress);
    writefln("listening for udp datagrams on port %s", port);

    ubyte[3000] buffer;
    while(true)
    {
        Address from;
        auto length = udpSocket.receiveFrom(buffer, from);
        writefln("received %s byte datagram from %s", length, from);
    }
}


Re: Output-Range and char

2017-04-23 Thread Jonathan Marler via Digitalmars-d-learn

On Sunday, 23 April 2017 at 11:17:37 UTC, Mafi wrote:

Hi there,
every time I want to use output-ranges again they seem to be 
broken in a different way (e.g. value/reference semantics). 
This time it is char types and encoding.


[...]


Use sformat:

import std.format, std.stdio;

void main() {
    char[40] buffer;
    auto result = sformat(buffer, "Long string %s\n", "more more more");
    write(result);
}

Note: sformat returns the slice of the buffer that was actually 
written, so write that rather than the whole buffer.  Also, in 
your example a char[20] buffer is not large enough to hold the 
entire formatted string, so sformat will fail at runtime; size 
the buffer accordingly.


Re: std.socket classes

2017-04-10 Thread Jonathan Marler via Digitalmars-d-learn

On Monday, 10 April 2017 at 18:57:13 UTC, Adam D. Ruppe wrote:

On Monday, 10 April 2017 at 16:18:20 UTC, Jonathan Marler wrote:
An interesting benefit. However, I don't think this is the 
ideal way to support such a use case.


If I was doing it myself, I'd probably do an interface / final 
class split too (which also opens up a wee bit of additional 
easy optimization), but the Socket class isn't *bad* for this.


Yeah I agree, not perfect but not *bad*.




My first thought is that since the interface you are using 
(the Socket class) wasn't really designed to be overridden, 
you probably had to do some interesting hacks to make it work.



No, the code is very straight-forward, regular class method 
overrides with appropriate forwards to the base class 
implementation where needed.


 For example, when you accept a new socket, you probably had 
to delete the Socket object you got and create a new SSLSocket 
object passing the handle from one to the other, and make sure


I didn't implement the server, but if I did, it would be 
another simple case of


override SslSocket accept() {
   return new SslSocket();
}

or better yet, `override SslSocket accepting() { ...}`, since 
there IS a method specifically designed for this:


http://dpldocs.info/experimental-docs/std.socket.Socket.accepting.html

That'd work fine too.


Ah, it seems someone already ran into this accept problem, hence 
why the new "accepting" function was added. Funny timing that 
this was added near the time I asked the question.


Since my last post I've been looking through std.socket and I see 
quite a bit of inefficiency especially when it comes to GC 
memory.  I think moving forward the best solution for me is to 
use my version of std.socket. It would also be great if Walter's 
new ref-counted exceptions proposal gets implemented soon because 
then I could make it @nogc.  Anyway, thanks for the feedback.





Re: std.socket classes

2017-04-10 Thread Jonathan Marler via Digitalmars-d-learn

On Monday, 10 April 2017 at 04:32:20 UTC, Adam D. Ruppe wrote:

On Sunday, 9 April 2017 at 14:47:39 UTC, Jonathan Marler wrote:
Does anyone know why Socket and Address in phobos were created 
as classes instead of structs?


It is probably just the historical evolution, but I find it 
pretty handy: I did a subclass of Socket to do SSL, then as is 
nice with classes, I can pass that to other code using the 
Socket interface and have it work.


So I'd be sad if they changed it now too much, the class is 
legitimately useful here and actually not very expensive.


An interesting benefit. However, I don't think this is the ideal 
way to support such a use case.  I think it would have been 
better if there was a shared stream/socket-like interface that 
you could override to use raw sockets or SSL. I'll explain why.


My first thought is that since the interface you are using (the 
Socket class) wasn't really designed to be overridden, you 
probably had to do some interesting hacks to make it work.  For 
example, when you accept a new socket, you probably had to delete 
the Socket object you got and create a new SSLSocket object 
passing the handle from one to the other, and make sure that the 
original Socket object didn't close it. I'm guessing that to 
prevent this close you probably set the socket handle on the 
accepted Socket object to null/invalid?  Or maybe you just called 
the raw accept function and created a new object.  But if the 
library is the one calling accept, then you would obviously have 
to override the accept function and do something that I would 
call "hacky".


My other thought is that by separating both the virtual interface 
and the raw socket functions, you have provided both a low-level 
and high-level API that each application can choose to use.  The 
tradeoff is "control" vs "extensibility": the high-level API is 
more extensible (it can be overridden to support things like 
SSL), while the low-level API is less abstracted and therefore 
provides more control over, and access to, the underlying 
implementation.  This low-level access is more useful for code 
that needs to use socket-specific features.


I will say that one disadvantage with this approach is that by 
separating both the virtual interface and the direct socket 
interface, you open up the door for library writers to make the 
mistake of using the wrong level of the API.  If a library used 
the lower-level API and you wanted to override it with say, an 
SSL implementation, then you are out of luck unless you update 
the library to use the higher-level interface.  Of course this is 
more of a "practical real world" disadvantage that, in theory, 
can be prevented with good libraries.


---
DISCLAIMER
---
I would like to say something to anyone who wants to contribute 
to this thread. These comments are meant to discuss the pros/cons 
of the std.socket design and DO NOT serve as justification for 
changing phobos. Such a change would require much more 
discussion.  The problem I've seen is that people will 
immediately halt a conversation by jumping to the end and arguing 
that such ideas will never be implemented because the 
benefit-to-risk ratio is too low.  The benefit-to-risk ratio is a 
very good and necessary discussion to have; however, it's not good 
to stop a conversation early by jumping to this stage before 
people have even had a chance to discuss the merits of the design 
and ideas on their own.  Any discussion of the ideas/design with 
your thoughts/feedback/experience is welcome.  If you want to 
discuss whether ideas/changes should be carried out, I would hold 
off on those comments since they derail good discussion.  Thanks.


Re: std.socket classes

2017-04-09 Thread Jonathan Marler via Digitalmars-d-learn

On Sunday, 9 April 2017 at 15:04:29 UTC, rikki cattermole wrote:

On 09/04/2017 3:56 PM, Jonathan Marler wrote:
On Sunday, 9 April 2017 at 14:49:14 UTC, rikki cattermole 
wrote:
Don't think too hard, times have changed since std.socket was 
written.
It certainly isn't designed for high performance hence e.g. 
libasync.


What an odd response... You don't think I should ask questions 
about why
decisions were made?  If I took that approach how would I 
learn?  And if
you discourage other people from asking questions by telling 
them they
are "thinking too hard" what kind of effect does that have on 
the

community?

As for "high performance", my questions have less to do with 
performance
than they do with an API that makes sense and doesn't feel 
"kludgy".


Oh sorry, I had a brain derp and thought at the end there you 
had that you thought about it and it didn't make sense. Hence 
the slightly weirder response.


Ah ok.  That response was surprising to me coming from you (based 
on what I've read from you in the past) but I see it was a 
misunderstanding.




What I meant is that, for common use cases it works well enough 
and it does use reasonably sound API even if a bit cludgy.


When asking about classes, one of the big things is the vtable. 
They are slow (compared to final classes and structs). This is 
the main reason people want to switch over to structs instead 
of classes. However if you take a look at the more performance 
aware libraries like libasync you will see classes used 
extensively still.


Here is my suggestion, its a little harder to toy with ideas 
without real code to show for it. All our more existing API's 
are mostly class based for high performance sockets, timers 
ext. So, would you like to have a go and explore this area 
since we are playing it a bit too safe for your liking?


What I've found myself having to do is use the lower-level 
platform-specific APIs that use socket_t and sockaddr, but then I 
get platform dependencies and can't access a lot of the library 
because it requires the higher-level Socket and Address objects.  
I would be willing to explore this area, but before I do work in 
an area I research what's already been done, hence why I'm 
asking the questions about why it was done this way in the first 
place.  For all I know there are very good reasons it was done 
this way that I just don't know about.




Re: std.socket classes

2017-04-09 Thread Jonathan Marler via Digitalmars-d-learn

On Sunday, 9 April 2017 at 14:49:14 UTC, rikki cattermole wrote:
Don't think too hard, times have changed since std.socket was 
written.
It certainly isn't designed for high performance hence e.g. 
libasync.


What an odd response... You don't think I should ask questions 
about why decisions were made?  If I took that approach how would 
I learn?  And if you discourage other people from asking 
questions by telling them they are "thinking too hard" what kind 
of effect does that have on the community?


As for "high performance", my questions have less to do with 
performance than they do with an API that makes sense and doesn't 
feel "kludgy".


std.socket classes

2017-04-09 Thread Jonathan Marler via Digitalmars-d-learn
Does anyone know why Socket and Address in phobos were created as 
classes instead of structs?


My guess is that making Socket a class prevents socket handle 
leaks because you can clean up the handle in the destructor when 
the memory gets freed if no one closes it.  Is this the reason it 
is a class and are there any other reasons?
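The handle-cleanup pattern being guessed at could be sketched like this (the names are illustrative; the real Socket class wraps a socket_t rather than a plain int):

```d
// If the owner forgets to call close(), the destructor still
// releases the OS handle when the GC finalizes the object,
// preventing a handle leak.
class OwnedSocket
{
    private int handle = -1;

    this(int handle) { this.handle = handle; }

    void close()
    {
        if (handle != -1)
        {
            // e.g. core.sys.posix.unistd.close(handle) on posix
            handle = -1;
        }
    }

    ~this() { close(); }
}
```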


As for Address, I can't think of a reason why this one is a 
class.  It doesn't have to free any underlying OS resources, it's 
just a chunk of memory that can be passed to and from the socket 
API.  Using sockaddr in C/C++ is more flexible because it allows 
the application to decide where the memory lives (which will 
almost always be on the stack).  It feels like whoever made 
Address a class probably wasn't familiar with sockaddr.  Is that 
the case or are there reasons why it was made a class?


If I was implementing sockaddr in D, I would have chosen to use 
addressFamily as a sort of "makeshift Vptr", which is really how 
it is used in C (even though C doesn't support classes). Using 
this technique, I believe you could expose pretty much the same 
API without the overhead of wrapping it inside a D object.  Does 
anyone know if this solution was considered?
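What that "makeshift vptr" might look like as a struct, sketched with illustrative names and sizes (not a real phobos or OS layout):

```d
enum ushort AF_INET_VALUE = 2; // AF_INET is 2 on most platforms

struct SockAddrAny
{
    ushort family;   // tags which concrete layout `data` holds,
                     // playing the role a vptr would in a class design
    ubyte[26] data;  // room for either address form's payload

    // dispatch on the tag instead of a virtual call
    @property bool isIPv4() const { return family == AF_INET_VALUE; }
}

void main()
{
    SockAddrAny addr;        // lives on the stack, like sockaddr in C
    addr.family = AF_INET_VALUE;
    assert(addr.isIPv4);
}
```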


Re: Big Oversight with readln?

2017-02-23 Thread Jonathan Marler via Digitalmars-d-learn
On Friday, 24 February 2017 at 03:45:35 UTC, Nick Sabalausky 
(Abscissa) wrote:

On 02/23/2017 09:43 PM, Jonathan Marler wrote:
I can't figure out how to make use of the full capacity of 
buffers that
are allocated by readln.  Take the example code from the 
documentation:


 // Read lines from $(D stdin) and count words

 void main()
 {
 char[] buf;
 size_t words = 0;

 while (!stdin.eof)
 {
 char[] line = buf;
 stdin.readln(line);
 if (line.length > buf.length)
 buf = line;

 words += line.split.length;
 }

 writeln(words);
 }

When buf is not large enough to hold the line, readln will 
allocate a
new buffer to accomodate and this example shows how you can 
save that
new buffer to reuse the next time.  The problem is that the 
capacity of
the new buffer is nowhere to be found. readln only returns the 
line that
was read which is only a slice of the buffer that was 
allocated.  The
next time that readln is called, it will not read past the 
slice even if
the capacity of the buffer it allocated was larger.  This will 
cause a
new allocation/copy every time you read a line that was larger 
than all
the previous lines, even if a previous allocation was already 
large
enough. This seems like a big oversight to me, I must be 
missing

something right?


I don't think that problem is actually occurring:

Let's step through the code, and suppose you're reading the 
following four lines of text:


12345
123456789
123
1234567

Starting out, buf.length is 0. When reading the first line, the 
buffer isn't big enough for 5, so readln allocates and returns a 
new buffer of length 5. That is more than buf.length (0), so the 
new buffer becomes the new buf.

Second line, again, buf (length 5) isn't big enough for 9, so 
readln allocates a new buffer length 9. That's more than the 
old one (5), so again your code sets buf to the larger new 
buffer (length 9).


Third line: buf (length 9) can definitely hold length 3, so 
readln does not allocate. The new slice returned (length 3) is 
NOT longer than buf (still length 9), so buf is NOT set to the 
slice returned by readln. So buf REMAINS length 9.


Fourth line: buf (still length 9) can definitely hold length 7, 
so readln does not allocate.


You're looking at this from the app's perspective and forgetting 
about what readln is doing under the hood.  It can't know how big 
the next line is going to be before it reads it, so it has to 
guess how much to allocate.  If you look at the implementation in 
(http://github.com/dlang/phobos/blob/master/std/stdio.d), you can 
see it doubles the size of the current buffer and adds some more 
for good measure (on line 4479 as of this writing).


So in your example, after it reads the first line it's going to 
allocate an initial buffer of some size, maybe 200 or so, then 
eventually return a slice of the first 5 characters into that 
buffer.  When it reads the second line it can't use the rest of 
that initial buffer because the buffer's full size has been lost, 
so it has to allocate a new buffer. At least that's what I thought 
until I found what I was missing!


I discovered the .capacity property of arrays.  I don't know why 
I've never seen this, but it looks like this is how readln 
recovers this seemingly lost piece of data.  This does have an 
odd consequence though: if you pass a slice into readln, it will 
read past the end of it if the underlying buffer is larger.  This 
might be something worth adding to the documentation.


Also, I'm not completely sure how .capacity works; I assume it has 
to look up this information in the memory-management metadata.  
Any enlightenment on this subject is appreciated. It also says 
it's an O(log(n)) operation, so I'm guessing it's looking it up in 
some sort of binary tree data structure.
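A small experiment with .capacity and reserve shows the documented behavior for GC-backed slices:

```d
void main()
{
    int[] a = [1, 2, 3];

    // capacity asks the GC how many elements the underlying block
    // can hold before an append must reallocate; the lookup in the
    // GC's block metadata is why it isn't free (the O(log n) note).
    assert(a.capacity >= a.length);

    a.reserve(100);          // grow the block (or relocate) up front
    assert(a.capacity >= 100);

    auto before = a.capacity;
    a ~= 4;                  // fits in place, so capacity is unchanged
    assert(a.capacity == before);
}
```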





Big Oversight with readln?

2017-02-23 Thread Jonathan Marler via Digitalmars-d-learn
I can't figure out how to make use of the full capacity of 
buffers that are allocated by readln.  Take the example code from 
the documentation:


// Read lines from $(D stdin) and count words

import std.array, std.stdio;

void main()
{
    char[] buf;
    size_t words = 0;

    while (!stdin.eof)
    {
        char[] line = buf;
        stdin.readln(line);
        if (line.length > buf.length)
            buf = line;

        words += line.split.length;
    }

    writeln(words);
}

When buf is not large enough to hold the line, readln will 
allocate a new buffer to accommodate it, and this example shows how 
you can save that new buffer to reuse the next time.  The problem 
is that the capacity of the new buffer is nowhere to be found: 
readln only returns the line that was read, which is only a slice 
of the buffer that was allocated.  The next time that readln is 
called, it will not read past the slice even if the capacity of 
the buffer it allocated was larger.  This will cause a new 
allocation/copy every time you read a line that was larger than 
all the previous lines, even if a previous allocation was already 
large enough. This seems like a big oversight to me; I must be 
missing something, right?


Re: Module Clarification

2016-09-27 Thread Jonathan Marler via Digitalmars-d-learn
On Tuesday, 27 September 2016 at 13:48:39 UTC, Steven 
Schveighoffer wrote:

On 9/22/16 4:16 PM, Jonathan Marler wrote:
On Thursday, 22 September 2016 at 20:09:41 UTC, Steven 
Schveighoffer wrote:


Before package.d support, you could not do any importing of 
packages.
You could only import modules. package.d was how the compiler 
allowed

importing packages.

I don't know that there is a fundamental difference between
foo/package.d and foo.d, but this is just the solution that 
was
chosen. Is it a mistake? I don't think so, it's just a 
preference.


Prior to this, it was common to put "package" imports into an 
"all.d"

file:

foo/all.d // import fooPart1.d fooPart2.d
foo/fooPart1.d



Ok, do you know why is this not allowed?


I'm sure if you search the forums, you can find discussions of 
this. Walter probably had a reason. I'm not sure if the reason 
is valid anymore now that package.d is supported.


-Steve


foo.d
foo/bar.d

I would think the reason for not supporting this is that you wouldn't 
want something to be a "module" and a "package" at the same time, 
but the introduction of the "package.d" semantics has broken that 
rule.


From what I can see, it seems like the concept of "packages" 
doesn't have any useful meaning anymore.  Before adding 
"package.d" support, a "package" was a directory/node you could 
find modules underneath, but now that it can also be a module 
itself, saying something is a "package" doesn't really have any 
meaning. Take the following 2 cases:


Case 1: foo.d
Case 2: foo/package.d

In case 1, foo is a "module", and in case 2, foo is a "package".  
The problem is that foo can behave EXACTLY THE SAME in both 
cases.  foo could contain typical module code, or publicly import 
other modules like a typical "package.d" file, in both cases.  
Saying that foo is a "package" doesn't tell you anything about 
how "foo" behaves.  The "package" concept seems pretty 
meaningless now.
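For concreteness, the package.d mechanism under discussion is just a module named after its directory, which conventionally re-exports the package's modules (the module names here are illustrative):

```d
// foo/package.d -- makes "import foo;" compile when foo is a directory
module foo;

// publicly re-export the submodules so "import foo;" brings them in
public import foo.bar; // foo/bar.d
public import foo.baz; // foo/baz.d
```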





Re: Module Clarification

2016-09-22 Thread Jonathan Marler via Digitalmars-d-learn
On Thursday, 22 September 2016 at 20:09:41 UTC, Steven 
Schveighoffer wrote:


Before package.d support, you could not do any importing of 
packages. You could only import modules. package.d was how the 
compiler allowed importing packages.


I don't know that there is a fundamental difference between 
foo/package.d and foo.d, but this is just the solution that was 
chosen. Is it a mistake? I don't think so, it's just a 
preference.


Prior to this, it was common to put "package" imports into an 
"all.d" file:


foo/all.d // import fooPart1.d fooPart2.d
foo/fooPart1.d

-Steve


Ok, do you know why this is not allowed?

foo.d
foo/bar.d



Re: D code optimization

2016-09-22 Thread Jonathan Marler via Digitalmars-d-learn

On Thursday, 22 September 2016 at 16:09:49 UTC, Sandu wrote:

It is often being claimed that D is at least as fast as C++.
Now, I am fairly new to D. But, here is an example where I want 
to see how can this be made possible.


So far my C++ code compiles in ~850 ms.
While my D code runs in about 2.1 seconds.


Can you include the C++ source code, the C++ compiler command 
line, and the D compiler command line?





Re: Module Clarification

2016-09-22 Thread Jonathan Marler via Digitalmars-d-learn
On Thursday, 22 September 2016 at 15:02:01 UTC, Lodovico Giaretta 
wrote:


I think that having package.d provides a better layout. Look at 
the difference between this:



ls std/experimental

drw-rw-rw- allocator
drw-rw-rw- logger
drw-rw-rw- ndslice
-rw-rw-rw- typecons.d

and this:


ls std/experimental

drw-rw-rw- allocator
-rw-rw-rw- allocator.d
drw-rw-rw- logger
-rw-rw-rw- logger.d
drw-rw-rw- ndslice
-rw-rw-rw- ndslice.d
-rw-rw-rw- typecons.d

Having to put part of a package outside the package folder is 
ugly to see and a bit more difficult to manage.


Yes that does seem like a nice benefit.  What do you think about 
hierarchical modules?  Do you think we should have supported 
modules that also have modules underneath them? i.e.


foo.d
foo/bar.d
foo/bar/baz.d

Or do you think it's fine to require the higher level modules to 
exist in package.d files?


foo/package.d
foo/bar/package.d
foo/bar/baz.d

It just seems odd because the modules aren't packages.  I suppose 
I would understand if hierarchical modules are discouraged; is 
that the case?  I ran into this problem because I'm working on a 
.NET-to-D transpiler that puts all the symbols in a .NET 
namespace into the same D module.  So currently I have to do this:


System/package.d
System/Net/package.d
System/Net/Sockets.d

but I think it would make more sense to have this:

System.d
System/Net.d
System/Net/Sockets.d


Re: Module Clarification

2016-09-22 Thread Jonathan Marler via Digitalmars-d-learn
On Thursday, 22 September 2016 at 11:40:17 UTC, Steven 
Schveighoffer wrote:

This should be fine. x/package.d is equivalent to module x.


Ok, it looks like no one thought what I was doing was off-base. I 
guess this brings up another question.  Why doesn't the compiler 
support modules in a hierarchy?


foo.d
foo/bar.d

The only reason I can see is that you would have to setup some 
rules on how to handle it when you have both a module file, and a 
package.d file in a directory with the same name:


foo.d
foo/package.d // huh? error?

Actually, the more I think about it, I'm not sure there's a good 
reason for the "package.d" semantics to exist.  I guess it 
establishes a pattern for people who would like to combine 
smaller modules into one public module, but it doesn't have to be 
used that way.  Conversely, you could use a normal module (not a 
package.d module) to publicly import smaller modules:


Instead of:
foo/package.d // publicly imports fooPart1 and fooPart2
foo/fooPart1.d
foo/fooPart2.d

What was wrong with:
foo.d // still publicly imports fooPart1 and fooPart2
foo/fooPart1.d
foo/fooPart2.d
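
For illustration, a sketch of what that hypothetical foo.d would 
contain (module names taken from the layout above; note dmd 
currently rejects a foo.d sitting next to a foo/ directory, which 
is the whole point of the question):

```d
// foo.d -- a plain module doing exactly what foo/package.d would do:
// re-exporting the sub-modules so "import foo;" pulls them all in.
module foo;

public import foo.fooPart1;
public import foo.fooPart2;
```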

If the package.d file didn't exist, then I don't think there 
would be any problem with hierarchical modules.  Is this the 
right conclusion?  Was package.d a mistake?  Maybe the reasoning 
is that D doesn't really like hierarchical modules, so creating 
them should look a bit odd?


foo/package.d
foo/bar/package.d
foo/bar/baz/package.d



Re: Stacktrace on Null Pointer Derefence

2016-09-21 Thread Jonathan Marler via Digitalmars-d-learn

On Wednesday, 21 September 2016 at 23:36:08 UTC, Nordlöw wrote:

Doing a null deref such as

int* y = null;
*y = 42; // boom

[...]


Can you include the compiler command line?  I use -g -gs -debug 
to get stack traces on Windows.


Module Clarification

2016-09-21 Thread Jonathan Marler via Digitalmars-d-learn
I'm working on a code generation tool and wanted to make sure my 
module approach was correct.  The generated code has a module 
hierarchy, where modules can appear at any level of the hierarchy.


module foo;
module foo.bar;

In this case, module foo and foo.bar are independent modules.  
The foo module does not publicly import foo.bar, like a typical 
package.d module would do. At first I organized the modules like 
this:


foo.d (module foo)
foo/bar.d (module foo.bar)

But this doesn't work because the module file foo.d cannot have 
the same name as the directory foo.  So now I organize it like 
this:


foo/package.d (module foo)
foo/bar.d (module foo.bar)

This is not the typical usage for the "package.d" file.  
Normally, package.d would publicly import other modules, however, 
in this case, package.d is an independent module.  This also 
means that if another module was added, say foo.bar.baz, the new 
file system would have to look like this:


foo/package.d (module foo)
foo/bar/package.d (module foo.bar)
foo/bar/baz.d (module foo.bar.baz)

This technique seems a bit odd, but it works.  I'm just wondering 
if there's a better way to achieve these semantics, or if this is 
the appropriate solution?


Mutable class reference to immutable class

2016-09-10 Thread Jonathan Marler via Digitalmars-d-learn
This is been bugging me for a while. Is it possible to have a 
mutable reference to an immutable class?  In other words, can you 
set a class variable to an immutable class, and then set that 
variable to another immutable class later?


Mutable "slices" to immutable data are easy:

immutable(char[]) x = "string for x"; // immutable data x
immutable(char[]) y = "string for y"; // immutable data y

immutable(char)[] mutableRef = x; // mutable slice to 
immutable data
mutableRef = y; // OK, you can set mutableRef to another 
immutable slice


But I can't figure out how to make a mutable "class" to immutable 
classes:


immutable(SomeClass) x = new immutable SomeClass(); // 
immutable class x
immutable(SomeClass) y = new immutable SomeClass(); // 
immutable class y


immutable(SomeClass) mutableRef = x;
mutableRef = y; // Error: cannot modify mutable expression 
mutableRef



A workaround would be to make mutableRef an 
immutable(SomeClass)*, but this adds an extra level of 
indirection.  Since all classes in D are pointers, x is already a 
pointer, so making a pointer to x would be making a pointer to a 
pointer that points to a class.


It's obvious this issue is a result of the fact that all class 
variables are pointers.  I don't suppose there is a way to 
represent a class's value type that I don't know about, is there?


SomeClass.ValueType?  // Is there semantics for this I don't 
know about?


If so, you could solve the problem by declaring mutableRef as:

immutable(SomeClass.ValueType)* mutableRef = x;

I haven't encountered semantics for this anywhere in the 
language, but maybe someone else can enlighten me?  If not, is 
there another way to get a mutable reference to an immutable 
class?  Thanks in advance for the help.
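
For the record, Phobos's std.typecons has Rebindable, which was 
made for exactly this case.  A rough sketch (SomeClass is given a 
hypothetical field and constructor for illustration):

```d
import std.typecons : Rebindable;

class SomeClass
{
    int value;
    this(int value) pure { this.value = value; }  // pure, so usable for immutable construction
}

void main()
{
    immutable SomeClass x = new immutable SomeClass(1);
    immutable SomeClass y = new immutable SomeClass(2);

    // A mutable reference to immutable class data, no extra indirection:
    Rebindable!(immutable SomeClass) mutableRef = x;
    mutableRef = y;               // OK: rebinding the reference, not mutating the object
    assert(mutableRef.value == 2);
}
```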


Re: Command Line Utility Library

2016-08-15 Thread Jonathan Marler via Digitalmars-d-learn

On Monday, 15 August 2016 at 10:48:11 UTC, Seb wrote:


Are you trying to parse arguments?
There's a lot of good stuff for it already:

https://dlang.org/phobos/std_getopt.html
https://code.dlang.org/packages/darg
https://blog.thecybershadow.net/2014/08/05/ae-utils-funopt/


For configuration files:

https://code.dlang.org/packages/onyx-config
https://code.dlang.org/packages/inid
https://code.dlang.org/packages/yamkeys
https://code.dlang.org/packages/runtimer
https://code.dlang.org/packages/variantconfig


Seb how in the heck do you know about all these libraries, geeze.


Re: full path to source file __FILE__

2016-07-27 Thread Jonathan Marler via Digitalmars-d-learn

On Thursday, 21 July 2016 at 19:54:34 UTC, Jonathan Marler wrote:
Is there a way to get the full path of the current source file? 
Something like:


__FILE_FULL_PATH__

I'm asking because I'm rewriting a batch script in D, meant to 
be run with rdmd.  However, the script needs to know its own 
path.  The original batch script uses the %~dp0 variable for 
this, but I'm at a loss on how to do this in D.  Since rdmd 
compiles the executable to the %TEMP% directory, thisExePath 
won't work.


BATCH
-
echo "Directory of this script is " %~dp0


DLANG
-
import std.stdio;
int main(string[] args) {
writeln("Directory of this script is ", ???);
}


For others who may see this thread, the __FILE_FULL_PATH__ 
special trait was added to the dmd compiler with this PR: 
https://github.com/dlang/dmd/pull/5959


At the time of this post, the latest released version of D is 
2.071.1, so this trait should be available on any release after 
that.
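
As a sketch of how the quoted batch-script use case could then be 
written (the /somedir paths and "build" directory are the 
hypothetical ones from the example later in this thread):

```d
// clean.d -- removes the "build" directory that lives next to this
// script, regardless of which directory it is run from via rdmd.
import std.file : exists, rmdirRecurse;
import std.path : buildPath, dirName;
import std.stdio : writeln;

void main()
{
    // Full path of this source file, baked in at compile time,
    // so it survives rdmd building the executable in %TEMP%.
    auto scriptDir = dirName(__FILE_FULL_PATH__);
    auto buildDir = buildPath(scriptDir, "build");
    writeln("Removing ", buildDir, "...");
    if (exists(buildDir))
        rmdirRecurse(buildDir);
}
```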


Re: Cannot compare object.opEquals is not nogc

2016-07-24 Thread Jonathan Marler via Digitalmars-d-learn

On Sunday, 24 July 2016 at 15:41:55 UTC, Lodovico Giaretta wrote:

On Sunday, 24 July 2016 at 15:28:53 UTC, Jonathan Marler wrote:
Whoa wait a second...I didn't know you could do this.  I 
thought everything had to inherit from the object class.  Can 
you share the syntax to define a class that doesn't derive 
from object?


Currently, you cannot. Everything inherits from Object. I 
personally think this is not the best idea. But it's not that 
horrible either, so probably not worth a big change.


But you can just ignore it. You can put on your opCmp all the 
attributes you want and forget about it inheriting from Object. 
You can decide to never write a method that takes Object. 
Always take the root of your sub-hierarchy, so that you know 
what attributes you have.
Whether it derives from Object or not, nobody cares as long as your 
sub-root overrides all opXXX with new (even abstract) 
declarations that have @nogc.


This is one of those problems that are going to have pros and 
cons either way you go. It's the balance between generality which 
yields facilities for sharing code, and specificity which 
inhibits shared code.  Templates provide an interesting middle 
ground for this by allowing you to instantiate an infinite number 
of implementations that will fit wherever you want on this 
spectrum.  But templates don't work with virtual methods :(


Just spitballing here, but why weren't the methods on the Object 
class defined in interfaces instead?

  interface Hashable;
  interface Comparable;
  interface Stringable;

I'm sure there's some big drawback to designing it this way, but 
the reason is escaping me at the moment.  Can someone enlighten 
me?


(Note: if a feature like this 
(http://forum.dlang.org/post/mrtgipukmwrxbpayu...@forum.dlang.org) was implemented, the interfaces could still provide default implementations)
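
A rough sketch of what those per-capability interfaces might look 
like (signatures are hypothetical, not actual druntime 
declarations, and this ignores the clash with the opCmp that 
Object already defines):

```d
// Hypothetical capability interfaces instead of methods on Object:
interface Hashable   { size_t toHash() const nothrow; }
interface Comparable { int opCmp(Object rhs) const; }
interface Stringable { string toString() const; }

// A class opts in to only the capabilities it can actually support:
class Temperature : Comparable
{
    double degrees;
    this(double degrees) { this.degrees = degrees; }

    int opCmp(Object rhs) const
    {
        auto other = cast(const Temperature) rhs;
        assert(other !is null);  // sketch: real code would handle mismatched types
        return degrees < other.degrees ? -1 : (degrees > other.degrees ? 1 : 0);
    }
}
```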




Re: Cannot compare object.opEquals is not nogc

2016-07-24 Thread Jonathan Marler via Digitalmars-d-learn

On Sunday, 24 July 2016 at 15:09:53 UTC, Lodovico Giaretta wrote:
Remember that comparison of complex objects may require 
normalization, which may change the objects themselves and 
allocate memory.


Sure, but this case will be the exception.  If an application 
really needs this, it can implement its own normalizedEquals.  
It wouldn't work with the comparison operators, but I don't 
really like to use those comparison operators for classes anyway, 
since they do way too much in most cases:


https://github.com/dlang/druntime/blob/master/src/object.d#L136

auto opEquals(Object lhs, Object rhs)
{
    // If aliased to the same object or both null => equal
    if (lhs is rhs) return true;

    // If either is null => non-equal
    if (lhs is null || rhs is null) return false;

    // If same exact type => one call to method opEquals
    if (typeid(lhs) is typeid(rhs) ||
        !__ctfe && typeid(lhs).opEquals(typeid(rhs)))
        /* CTFE doesn't like typeid much. 'is' works, but opEquals doesn't
           (issue 7147). But CTFE also guarantees that equal TypeInfos are
           always identical. So, no opEquals needed during CTFE. */
    {
        return lhs.opEquals(rhs);
    }

    // General case => symmetric calls to method opEquals
    return lhs.opEquals(rhs) && rhs.opEquals(lhs);
}

...Also, comparisons may throw exceptions that need the GC (see 
above). So I'm personally against making those methods @nogc.



Definitely true. One thing to note is that toHash is nothrow. 
Whether or not this is too restrictive is definitely up for 
debate, but making opCmp/opEquals nothrow as well wouldn't be the 
worst thing in the world.  Of course, at this point it would 
likely break a lot of code, so it's probably not worth it 
pragmatically.




But I'm also against a singly-rooted hierarchy. Removing Object 
and having multiple class hierarchies would entirely solve the 
issue. But please note that you can already "do" that: if you 
never use Object, but always subclasses, the fact that Object 
isn't @nogc is no longer an issue.


Whoa wait a second...I didn't know you could do this.  I thought 
everything had to inherit from the object class.  Can you share 
the syntax to define a class that doesn't derive from object?


P.S.

Talking about throwing exceptions in @nogc is preaching to the 
choir :)


https://forum.dlang.org/post/ubtlemuqisxluxfts...@forum.dlang.org

I've explored this issue as well.  I came up with a way to throw 
exceptions allocated on the non-GC heap, but to clean up the 
memory the catcher needs to do something to dereference the 
exception so it gets cleaned up.  There is a DIP for natively 
supporting reference-counted memory (I don't remember which one) 
that would allow such things to be safe to use.






Re: Cannot compare object.opEquals is not nogc

2016-07-24 Thread Jonathan Marler via Digitalmars-d-learn

On Sunday, 24 July 2016 at 09:03:04 UTC, Lodovico Giaretta wrote:

On Sunday, 24 July 2016 at 02:17:27 UTC, Rufus Smith wrote:

[...]


Now you are telling me to "program by trust", because there's 
nothing ensuring that I remember to free everything I allocated 
with malloc/free, while a GC would guarantee no memory leaks. 
Again there's nothing stopping me from returning pointers to 
things allocated on the stack. And now there are lots...
Before you told me that programming by trust is a wrong 
attitude, and now you propose me to use it, risking memory 
leakage in a function that may be executed hundreds of times 
per second.



[...]


No. If you put a big @nogc attribute on Object.opXXX, then 
nobody can write GC code in his classes. So if everything is 
@nogc, you cannot write GC code, because it wouldn't interact 
with Phobos. Example: if you mark an algorithm that takes a 
delegate @nogc, then you cannot pass GC delegates to it. So you 
cannot use it in GC code.



[...]


Yes. All building blocks must be as much @nogc as possible. But 
customization points (virtual functions, delegate arguments, 
...) must not be @nogc, otherwise it is not possible to have 
classes that use the GC or callbacks that use the GC.



[...]


I still don't understand why you want Object.opXXX @nogc. As I 
already said, you can still make your functions @nogc, just 
accepting parameters of @nogc types. It's obvious. If I wrote a 
wonderful library that uses the GC, you will not use it. If I 
have a class that uses the GC in opXXX (and I am free to have 
it, because maybe I need it, and maybe it's the most efficient 
way for my use case), you will not use it. The same applies 
here. You'll have your algorithms work only on classes that 
declare opXXX as @nogc.


Not all memory allocation patterns are good for malloc/free. 
Not all of them are good for stack allocations. Some of them 
are not even good for reference counting. Every class shall use 
the best solution for its job. And everybody must still be able 
to extend the base class.
If you want to use a method specific to a subclass, you 
downcast. If you want to use the @nogc opXXX when the base does 
not enforce it, you downcast. It's the same principle: more 
advanced functionalities require more derived types (and @nogc 
is more derived, because it is covariant to not-@nogc). Basic 
OOP.


I believe Rufus was only referring to the virtual methods defined 
in the Object class.  That would be:


toHash (Note: this is already nothrow; that's interesting and 
quite restrictive)

opCmp
opEquals

I think all 3 of these are good candidates for @nogc.  However, 
AFAIK, making them @nogc would break any code that implements 
them because they would have to add the @nogc attribute to their 
implementations (unless I am mistaken?  Do subclass overrides 
need to explicitly have @nogc if the parent class does?).  If 
adding @nogc is not required in the implementation, then the only 
code that would break would be implementations that actually do 
allocate GC memory.


Some would think that restricting GC usage inside these virtual 
methods is a good thing because it has the benefit of 
discouraging memory allocation for these types of operations.  
Really, you probably shouldn't be allocating memory to perform a 
comparison.  If you really need to, you can either allocate 
non-GC memory, or use a different mechanism than the 
opCmp/opEquals methods.


Re: Modules

2016-07-24 Thread Jonathan Marler via Digitalmars-d-learn

On Sunday, 24 July 2016 at 02:45:57 UTC, rikki cattermole wrote:

On 24/07/2016 2:28 PM, Rufus Smith wrote:
NM, ignore. Seems it was something else going on. Although, if 
you know
how how dmd resolves this stuff exactly, it would be nice to 
know. Does
it just use the module names regardless of path or does the 
path where
the module is located have any play(assuming they are properly 
passed to

the compiler).


My understanding is this:

1. For each file passed, use supplied module name
2. If an import is unknown look at each of the directories 
passed via -I and find it based upon a/b/c.d for module a.b.c;
3. For each file passed, if an import is unknown try to guess 
based upon paths


Of course rdmd adds another level of behavior on top and is 
mostly based upon the third one.


If in doubt, try using dub. It can show you all this without 
looking at code ;)


The thing I remember being confused about was that you have to 
use the -I option to specify the root-level module directories 
you want to import from, AND the file being imported has to 
explicitly declare its module name at the top.  If you get either 
wrong, the error message usually doesn't make it obvious what 
you've done wrong.  I think if you forget to put the module name 
at the top of the file you'll end up with a very generic message 
like "can't import module y".
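
Concretely, the two pieces have to agree: the module declaration 
and the path relative to -I.  A minimal sketch using the 
hypothetical y.coollib module from the example below:

```d
// File: ../baz/y/coollib.d  (compiled with: dmd main.d -I../baz)
// The declared module name must match the import path relative
// to the -I directory; without this line, "import y.coollib;"
// fails with a confusing error.
module y.coollib;

void coolFunction() {}
```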


Note that this only takes care of compilation, and doesn't 
include how to make sure linking works.  If you need more info on 
that let me know.


For your example:

foo
   bar
      x
   baz
      y
      baz.d (package)

If you had a module inside the y directory, you would need to 
include the root level path for the y package like this:


foo/bar/x> dmd main.d -I../baz

Then each module you want from y, should be imported explicitly 
like this:

import y.coollib;
import y.awesomeutil;

If you want to import multiple files from y using "import y;", 
then there needs to be a package.d file inside the y directory:


foo/baz/y/package.d:
public import y.coollib;
public import y.awesomeutil;

The library may or may not have a package.d file.  If it does 
not, then each module is probably meant to be imported 
independently.


Also if you really need to know what's going on, you can find the 
source code that finds imports in dmd here(I just happen to know 
this because I just made a PR modifying this code): 
https://github.com/dlang/dmd/blob/master/src/dmodule.d#L48


Hope this helps.  I do remember being confused about how all this 
worked a few years ago but now it all makes sense.  Not sure if 
this information is easy to find or not, if it's not, it should 
be added somewhere.


Re: Default implementations in inherited interfaces

2016-07-24 Thread Jonathan Marler via Digitalmars-d-learn

On Thursday, 21 July 2016 at 13:37:30 UTC, Saurabh Das wrote:

On Thursday, 21 July 2016 at 12:42:14 UTC, Adam D. Ruppe wrote:

On Thursday, 21 July 2016 at 09:41:27 UTC, Saurabh Das wrote:
Java 8 has a 'default' keyword that allows interfaces to 
provide a default implementation and sub-classes can optionally 
override it if needed. The rationale behind it was extending 
interfaces without causing old code to faill. (called "virtual 
extension methods" or "defender methods"). The use case is 
similar to above.


Is there a way to achieve an equivalent functionality in D?

Thanks,
Saurabh


What an interesting technique. I've never seen this before. Maybe 
a DIP is in order? I think it would be low priority relative to 
the current work being done, but this technique seems like a good 
thing to support in the language.


Re: Cannot compare object.opEquals is not nogc

2016-07-24 Thread Jonathan Marler via Digitalmars-d-learn

On Sunday, 24 July 2016 at 02:17:27 UTC, Rufus Smith wrote:
On Saturday, 23 July 2016 at 22:48:07 UTC, Lodovico Giaretta 
wrote:

[...]


This just isn't right. What you're saying is that because someone 
screwed up, we must live with the screw up and build everyone 
around the screw up. This mentality is why everyone is so 
screwed up in the first place, do you not see that?


[...]


I pretty much agree and had the same thoughts you've expressed 
here, Rufus. Your arguments are logical and make sense. However, 
I can already tell you this kind of change is going to elicit a 
lot of negative feedback. I think you're going to find yourself 
frustrated in a losing battle trying to get the community to see 
reason. I hope I'm wrong, but know you've got me on your side.


Re: Cannot compare object.opEquals is not nogc

2016-07-23 Thread Jonathan Marler via Digitalmars-d-learn

On Saturday, 23 July 2016 at 16:46:20 UTC, Jonathan Marler wrote:

[...]


Actually, I'm going to disagree with myself. This technique 
actually wouldn't work with virtual methods :)


Re: Cannot compare object.opEquals is not nogc

2016-07-23 Thread Jonathan Marler via Digitalmars-d-learn
On Saturday, 23 July 2016 at 15:25:02 UTC, Steven Schveighoffer 
wrote:

On 7/23/16 10:53 AM, Rufus Smith wrote:
On Saturday, 23 July 2016 at 14:15:03 UTC, Lodovico Giaretta 
wrote:

On Saturday, 23 July 2016 at 13:18:03 UTC, Rufus Smith wrote:
Trying to compare a *ptr value with a value in nogc code 
results in

the error:

Error: @nogc function '...' cannot call non-@nogc function
'object.opEquals'

Shouldn't object opEquals be marked?


If object.opEquals is marked @nogc, than all D classes must 
implement
it as @nogc, because (of course) you cannot override a @nogc 
method
with a not-@nogc one (while the opposite is possible, of 
course).
So marking it @nogc is not only a big breaking change, but 
also very

limiting.


Um, this isn't right. GC code can always call non-gc code.


The issue is that for *classes*, the proper way to add an 
opEquals is to override the base version. The base version 
existed LONG before @nogc did, and so it's not appropriately 
marked.


Not only that, but @nogc is too limiting (if I want to use GC 
in opEquals, I should be able to).


The real problem here is that there is a base method at all. We 
have been striving to remove it at some point, but it is very 
difficult due to all the legacy code which is written.


Almost all the Object base methods need to be removed IMO. You 
can add them at a higher level if you need them, and then 
specify your requirements for derived classes.


Including opHash, opCmp, toString, etc.

If you mark opEquals nogc, it breaks nothing except 
implementations of
opEquals that use the GC. GC code can still call it nogc 
opequals, it
only enforces opEquals code to avoid the GC itself, which 
isn't a

terrible thing.


It breaks all classes which use GC in opEquals. Note that you 
can't really compare two immutable or const objects either! 
(well, actually you can, but that's because the runtime just 
casts away const and lets you have undefined behavior).



What is terrible is that nogc code can never have any equality
comparisons! It is impossible unless one manually tests them, 
but how?
Every method would be brittle. Do a memory test? compare 
element by

element? One can't predict what to do.


It is unfortunate. I hope we can fix it. I'd rather not add 
another exception like we have for comparing const objects.


So, you are trying off laziness to break nogc. As it stands, 
if nogc
code can't compare equality, it is broken and useless. Why put 
it in the

language then?


@nogc is not useless, it just cannot handle Objects at the 
moment.


Broke! Even if opEquals of T does not use any GC we can't 
write test to
be nogc, which means we can't have S be nogc or anything that 
depends on
S that is nogc. This must be a dirty trick played by the 
implementors of

nogc to keep everyone on the gc nipple?


I assure you, it's not a trick. It's legacy. It needs fixing, 
but the fix isn't painless or easy.


-Steve


I've seen this type of problem many times before when using the 
@nogc attribute.  With a lot of work, and breaking changes, you 
could fix it in the case of opEquals, and in the end you still 
end up with the restriction that you can't use the GC in 
opEquals, which may be a good thing, but some would disagree.  
But this is a common problem, and I think a more exhaustive 
solution would be to allow @nogc code to call any code that is 
either marked @nogc or inferred to be @nogc.  Instead of treating 
@nogc as a special compiler directive to check for GC code, the 
compiler could check for GC code in all cases and infer the 
attribute for all functions.  Then @nogc would simply be a way 
for the developer to tell the compiler to make sure they aren't 
using the GC where they don't intend to.  It takes effort away 
from the developer and moves it to the compiler, and it allows 
@nogc to work with existing code.
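
For what it's worth, D already performs exactly this kind of 
inference for template functions, which hints at what the 
proposal would look like applied everywhere.  A small sketch 
(function names are made up for illustration):

```d
// No attributes are written on this template function...
auto square(T)(T x) { return x * x; }

@nogc void caller()
{
    // ...but because square is a template, the compiler inspects its
    // body, sees no GC use, and infers @nogc, so this call compiles.
    auto y = square(3);
}
```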


Maybe this would result in a big performance hit on the compiler, 
because it would always have to check for GC code instead of just 
when it's specified with @nogc... not sure.


Anyway, this is just a general overview.  There's obviously a lot 
of details and specifics that were glossed over, but I think the 
general idea could be a good solution.


Re: full path to source file __FILE__

2016-07-22 Thread Jonathan Marler via Digitalmars-d-learn
On Friday, 22 July 2016 at 19:23:30 UTC, Steven Schveighoffer 
wrote:

On 7/22/16 2:43 PM, Kagamin wrote:

On Friday, 22 July 2016 at 13:50:55 UTC, Jonathan Marler wrote:

shell/anypath> rdmd /somedir/clean.d
Removing /somedir/build...


So for command rdmd /somedir/clean.d what __FILE__ contains? 
LDC tells
me the same path as specified on the command line, and that is 
specified

relative to current directory, where the compiler is called, so
absolutePath(__FILE__) should give the right result.


The issue which is not being expressed completely by Jonathan, 
is that rdmd caches the build.


So if I run the script from one directory, then cd elsewhere, 
it has the same __FILE__ as before, but the cwd has moved. So 
it won't work.


I had assumed rdmd would rebuild, but it doesn't.

-Steve


Thanks for pointing this out, somehow I overlooked this use case.


Re: full path to source file __FILE__

2016-07-22 Thread Jonathan Marler via Digitalmars-d-learn

On Friday, 22 July 2016 at 19:13:31 UTC, sdhdfhed wrote:

On Friday, 22 July 2016 at 14:02:03 UTC, Jonathan Marler wrote:
The __FILE__ trait seems to be most useful for error messages.


Another usage is for testing parsers or string functions 
directly on the source. E.g in "devel" mode the main function


void main(string[] args)
{
version(devel)
{
// dont mess with params, use the text in source to 
catch most simple bugs.

File f = File(__FILE__, "r");
}
else
{
// load using args
}
}

I could see him wanting it to be a relative path sometimes and 
an absolute one other times.  Redefining it to always be 
absolute would solve this problem,


I'm for this, always absolute. Eventually forced by a new 
switch: default behavior is not changed.


Actually, I realized that if __FILE__ were always absolute, then 
all your exception messages would contain the full path of the 
file they were thrown from on the machine the program was 
compiled on. This would be quite odd. Both relative and absolute 
versions are useful in different cases.


Re: full path to source file __FILE__

2016-07-22 Thread Jonathan Marler via Digitalmars-d-learn

On Friday, 22 July 2016 at 09:37:24 UTC, sdhdfhed wrote:

On Friday, 22 July 2016 at 08:36:37 UTC, Jonathan Marler wrote:

On Friday, 22 July 2016 at 07:57:35 UTC, sdhdfhed wrote:
On Friday, 22 July 2016 at 07:47:14 UTC, Jonathan Marler 
wrote:

On Friday, 22 July 2016 at 05:41:00 UTC, fdgdsgf wrote:

What's wrong with __FILE__.dirName ?


It's kinda weird, sometimes I've noticed that the __FILE__ 
keyword is an absolute path, and sometimes it isn't.  If it 
was always an absolute path, that would work.  I decided to 
take a stab at implementing this in the dmd compiler:


https://github.com/dlang/dmd/pull/5959

It adds a __FILE_FULL_PATH__ trait which would solve the 
issue.


Personally I've never seen a relative __FILE__. Is this an 
issue that's confirmed ?


I mean that it would be better to fix __FILE__ so that its 
result is always absolute, then. I think that such a "PPR" 
(punk-pull-request) has 0% chance of being accepted, 
especially since it adds a special keyword!


It's definitely confirmed.  And now that I've walked through 
the source code, I see that it wasn't implemented to be an 
absolute path; it just happens to be one some of the time, 
depending on how the file is found.  I'm sure Walter will have 
an opinion as to which solution he prefers: either redefining 
the __FILE__ trait or adding a new one. He's communicating 
fixes to the PR on github, so that's a good sign.  We'll see.


Yes, I've seen he's started to review.

I don't know if you've seen my other suggestion, but another 
solution would be to force relative filenames passed to the 
compiler to be translated to absolute ones. This is also why 
I've never seen a relative __FILE__: the build tool I use 
always does the expansion internally before calling the compiler.


Again, that's Walter's call.  The __FILE__ trait seems to be most 
useful for error messages.  I could see him wanting it to be a 
relative path sometimes and an absolute one other times.  
Redefining it to always be absolute would solve this problem, but 
might make other things harder.  I'm not particularly for or 
against either solution (not sure why you're trying to convince 
me of this one); that would be up to the owners of the language :)


Re: full path to source file __FILE__

2016-07-22 Thread Jonathan Marler via Digitalmars-d-learn
On Friday, 22 July 2016 at 13:30:10 UTC, Steven Schveighoffer 
wrote:

On 7/22/16 3:47 AM, Jonathan Marler wrote:

What's wrong with __FILE__.dirName ?


It's kinda weird, sometimes I've noticed that the __FILE__ 
keyword is an

absolute path, and sometimes it isn't.


If you combine it with current working directory, this should 
give you the full path.


Looks like std.path gives you a mechanism, I think this should 
work:


import std.path;
auto p = __FILE__.absolutePath;

http://dlang.org/phobos/std_path.html#.absolutePath

-Steve


That doesn't work in the example I provided:

/somedir/clean.d
/somedir/build

Say clean.d is meant to remove the build directory that lives in 
the same path as the clean.d script itself.


shell/anypath> rdmd /somedir/clean.d
Removing /somedir/build...

Since you are running the script from "anypath", the information 
that clean.d exists at /somedir is lost.  The last component to 
know where the file was found is the compiler itself.


Re: full path to source file __FILE__

2016-07-22 Thread Jonathan Marler via Digitalmars-d-learn

On Friday, 22 July 2016 at 07:57:35 UTC, sdhdfhed wrote:

On Friday, 22 July 2016 at 07:47:14 UTC, Jonathan Marler wrote:

On Friday, 22 July 2016 at 05:41:00 UTC, fdgdsgf wrote:

What's wrong with __FILE__.dirName ?


It's kinda weird, sometimes I've noticed that the __FILE__ 
keyword is an absolute path, and sometimes it isn't.  If it 
was always an absolute path, that would work.  I decided to 
take a stab at implementing this in the dmd compiler:


https://github.com/dlang/dmd/pull/5959

It adds a __FILE_FULL_PATH__ trait which would solve the issue.


Personally I've never seen a relative __FILE__. Is this an 
issue that's confirmed ?


I mean  that it would be better to fix __FILE__ so that its 
result is always absolute then. I think that such a "PPR" 
(punk-pull-request) has 0% chance of being accepted, especially 
since it adds a special keyword!


It's definitely confirmed.  And now that I've walked through the 
source code, I see that it wasn't implemented to be an absolute 
path, it just happens to be some of the time depending on how the 
file is found.  I'm sure Walter will have an opinion as to what 
solution he prefers.  Either redefining the __FILE__ trait or 
adding a new one. He's communicating fixes to the PR on github, so 
that's a good sign.  We'll see.


Re: full path to source file __FILE__

2016-07-22 Thread Jonathan Marler via Digitalmars-d-learn

On Friday, 22 July 2016 at 06:45:58 UTC, Jacob Carlborg wrote:

On 2016-07-22 04:24, Jonathan Marler wrote:

The script depends on other files relative to where it exists 
on the file system.  I couldn't think of a better design to find 
these files than knowing where the script exists, can you?


What kind of files are we talking about. Resource files, config 
files? Are they static? For static resource files you can 
bundle them in the executable with a string import. For config 
files it might be better to store it in a completely different 
directory, like the user's home directory. This actually 
depends on what kind of config files and the operating system.


I suppose I should have been more specific.  The script actually 
operates on the filesystem relative to where it lives.  It copies 
files, modifies directories, etc.  It is meant to be run from any 
directory, but is only meant to modify the filesystem relative to 
where it lives.  Take a simple example of a clean script:


/somedir/clean.d
/somedir/build

Say clean.d is meant to remove the build directory that lives in 
the same path as the clean.d script itself.


shell/anypath> rdmd /somedir/clean.d
Removing /somedir/build...

It's important to remember that the clean.d script is run with 
rdmd, and that it is meant to be called from any directory.  
Since it's run with rdmd, thisExePath won't give you the 
right directory, and since you can call it from any directory, 
you also can't use the current directory.  As you can see, what 
you really want to know is where the script itself lives.








Re: full path to source file __FILE__

2016-07-22 Thread Jonathan Marler via Digitalmars-d-learn

On Friday, 22 July 2016 at 05:41:00 UTC, fdgdsgf wrote:
On Thursday, 21 July 2016 at 19:54:34 UTC, Jonathan Marler 
wrote:
Is there a way to get the full path of the current source 
file? Something like:


__FILE_FULL_PATH__

I'm asking because I'm rewriting a batch script in D, meant to 
be run with rdmd.  However, the script needs to know its own 
path.  The original batch script uses the %~dp0 variable for 
this, but I'm at a loss on how to do this in D.  Since rdmd 
compiles the executable to the %TEMP% directory, thisExePath 
won't work.


BATCH
-
echo "Directory of this script is " %~dp0


DLANG
-
import std.stdio;
int main(string[] args) {
writeln("Directory of this script is ", ???);
}


What's wrong with __FILE__.dirName ?


It's kinda weird, sometimes I've noticed that the __FILE__ 
keyword is an absolute path, and sometimes it isn't.  If it was 
always an absolute path, that would work.  I decided to take a 
stab at implementing this in the dmd compiler:


https://github.com/dlang/dmd/pull/5959

It adds a __FILE_FULL_PATH__ trait which would solve the issue.
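For reference, with such a trait the original %~dp0 behavior falls out directly. A minimal sketch, assuming the __FILE_FULL_PATH__ special token from the PR is available (it did later ship in the compiler):

```d
import std.path : dirName;
import std.stdio : writeln;

void main()
{
    // Unlike __FILE__, this token always expands to the absolute path of
    // the source file, no matter which directory rdmd was invoked from.
    writeln("Directory of this script is ", __FILE_FULL_PATH__.dirName);
}
```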


Re: full path to source file __FILE__

2016-07-21 Thread Jonathan Marler via Digitalmars-d-learn

On Thursday, 21 July 2016 at 22:57:06 UTC, Jonathan M Davis wrote:
On Thursday, July 21, 2016 18:39:45 Steven Schveighoffer via 
Digitalmars-d-learn wrote:

[...]


It would be pretty terrible actually to put the executable in 
the source path, and in many cases, the user wouldn't even have 
the permissions for it. For instance, what if the script were 
in /usr/local/bin? They won't have the permissions for the 
executable to end up there, and it would just cause a bunch of 
clutter in /usr/local/bin, since you'd get a new executable 
every time it decided that it needed to rebuild it (and you 
wouldn't want it to delete the executable every time, otherwise 
it would have to rebuild it every time, making it so that it 
would _always_ have to compile your script when it runs instead 
of just sometimes). Right now, the executable ends up in a temp 
directory, which makes a lot of sense.


Maybe it would make sense to have such a flag for very rare 
cases, but in general, it seems like a terrible idea.


- Jonathan M Davis


I agree this isn't a very good solution for the problem at hand.  
Putting the executable in a temporary directory makes sense in 
any cases I can think of. I posted an idea for another potential 
solution 
(http://forum.dlang.org/thread/cmydxneeghtjqjrox...@forum.dlang.org), please let me know your thoughts.  Thanks.


Re: full path to source file __FILE__

2016-07-21 Thread Jonathan Marler via Digitalmars-d-learn

On Friday, 22 July 2016 at 01:52:57 UTC, Adam D. Ruppe wrote:
On Thursday, 21 July 2016 at 22:47:42 UTC, Jonathan Marler 
wrote:
I explain in the original post. Any ideas Adam? Thanks in 
advance.


But why does the batch script use it? Since you are rewriting 
anyway, maybe you can find an easier/better way to achieve the 
goal.


The script depends on other files relative to where it exists on 
the file system.  I couldn't think of a better design to find 
these files than knowing where the script exists, can you?




Re: full path to source file __FILE__

2016-07-21 Thread Jonathan Marler via Digitalmars-d-learn
On Thursday, 21 July 2016 at 22:39:45 UTC, Steven Schveighoffer 
wrote:

On 7/21/16 3:54 PM, Jonathan Marler wrote:

Is there a way to get the full path of the current source file?
Something like:

__FILE_FULL_PATH__

I'm asking because I'm rewriting a batch script in D, meant to 
be run with rdmd.  However, the script needs to know its own 
path.  The original batch script uses the %~dp0 variable for 
this, but I'm at a loss on how to do this in D.  Since rdmd 
compiles the executable to the %TEMP% directory, thisExePath 
won't work.

BATCH
-
echo "Directory of this script is " %~dp0


DLANG
-
import std.stdio;
int main(string[] args) {
writeln("Directory of this script is ", ???);
}


Sure seems like an unwanted limitation.

rdmd does forward all dmd options, but there isn't really an 
option to say "put the exe in the source path".


You should file an enhancement.

-Steve


An option for rdmd would be good, but that requires the user to 
call rdmd in a particular way. It doesn't allow the script itself 
to know where it lives, which is needed in my case.


Re: full path to source file __FILE__

2016-07-21 Thread Jonathan Marler via Digitalmars-d-learn

On Thursday, 21 July 2016 at 22:33:39 UTC, Adam D. Ruppe wrote:

On Thursday, 21 July 2016 at 22:28:39 UTC, zabruk70 wrote:

won't? what this means?


That gives the path to the .exe but he wants the path to the .d.

But why? I would think the current working directory is 
probably adequate and that's easy to get...


I explain in the original post. Any ideas Adam? Thanks in advance.


full path to source file __FILE__

2016-07-21 Thread Jonathan Marler via Digitalmars-d-learn
Is there a way to get the full path of the current source file? 
Something like:


__FILE_FULL_PATH__

I'm asking because I'm rewriting a batch script in D, meant to be 
run with rdmd.  However, the script needs to know its own path.  
The original batch script uses the %~dp0 variable for this, but 
I'm at a loss on how to do this in D.  Since rdmd compiles the 
executable to the %TEMP% directory, thisExePath won't work.


BATCH
-
echo "Directory of this script is " %~dp0


DLANG
-
import std.stdio;
int main(string[] args) {
writeln("Directory of this script is ", ???);
}


Re: Casting classes

2016-07-01 Thread Jonathan Marler via Digitalmars-d-learn

On Friday, 1 July 2016 at 17:34:25 UTC, Basile B. wrote:

On Friday, 1 July 2016 at 17:32:26 UTC, Basile B. wrote:

On Friday, 1 July 2016 at 15:45:35 UTC, Jonathan Marler wrote:
How do casts work under the hood?  I'm mostly interested in 
what needs to be done in order to cast a class to a subclass.
 I'd like to know what is being done to determine whether the 
object is a valid instance of the cast type.  If the code is 
implemented in the druntime, a pointer to where it lives 
would suffice.  Thanks in advance for any help.


https://github.com/dlang/druntime/blob/master/src/rt/cast_.d#L62

nitpick


damn, where is the "edit" button ^^


You pointed me right to it! Thanks this is exactly what I was 
looking for.  I figured it was doing something like this but now 
I have confirmation.
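For readers who don't want to dig through druntime: a rough, simplified model of what that runtime helper does for a class downcast (names here are made up for illustration; the real code in rt/cast_.d also has to handle interfaces and pointer adjustment):

```d
// Sketch: cast(Player)obj is roughly a walk up the object's ClassInfo
// chain looking for the target class; null means the cast fails.
Object downcast(Object o, TypeInfo_Class target)
{
    if (o is null)
        return null;
    // typeid(o) yields the *dynamic* class info of the object.
    for (auto ci = typeid(o); ci !is null; ci = ci.base)
        if (ci is target)
            return o; // o is an instance of target (or a subclass)
    return null; // not an instance: the cast expression evaluates to null
}
```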


Casting classes

2016-07-01 Thread Jonathan Marler via Digitalmars-d-learn
How do casts work under the hood?  I'm mostly interested in what 
needs to be done in order to cast a class to a subclass.  I'd 
like to know what is being done to determine whether the object 
is a valid instance of the cast type.  If the code is implemented 
in the druntime, a pointer to where it lives would suffice.  
Thanks in advance for any help.


Re: Associative array of const items

2016-07-01 Thread Jonathan Marler via Digitalmars-d-learn

On Friday, 1 July 2016 at 06:57:59 UTC, QAston wrote:
On Thursday, 30 June 2016 at 17:08:45 UTC, Jonathan Marler 
wrote:
Is there a way to have an associative array of const values? I 
thought it would have been:


const(T)[K] map;
map[x] = y;

but the second line gives Error: cannot modify const 
expression.  I would think that the const(T)[K] would behave 
similarly to const(T)[], where you can modify the array, just 
not the individual elements, but associative arrays don't seem 
to have the same semantics.  Is there a way to achieve these 
semantics with an associative array?


You can use std.typecons.Rebindable!(const T)[K].


I think this may have been exactly what I was looking for.  Cool 
that the language supports adding this kind of semantics through 
a library.  I'll try it out and see if it works.  Thanks for the 
tip.
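A minimal sketch of the suggestion: Rebindable lets the AA slot itself be reassigned while the referenced object stays const (class `T` and key `"x"` here are placeholders):

```d
import std.typecons : Rebindable, rebindable;

class T { int value; }

void main()
{
    Rebindable!(const T)[string] map;
    map["x"] = rebindable(cast(const T) new T()); // OK: insert
    map["x"] = rebindable(cast(const T) new T()); // OK: rebind the slot
    // map["x"].value = 1; // still an error: the T itself is const
}
```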


Associative array of const items

2016-06-30 Thread Jonathan Marler via Digitalmars-d-learn
Is there a way to have an associative array of const values? I 
thought it would have been:


const(T)[K] map;
map[x] = y;

but the second line gives Error: cannot modify const expression.  
I would think that the const(T)[K] would behave similarly to 
const(T)[], where you can modify the array, just not the 
individual elements, but associative arrays don't seem to have 
the same semantics.  Is there a way to achieve these semantics 
with an associative array?








Re: Cast vs Virtual Method vs TypeId?

2016-06-30 Thread Jonathan Marler via Digitalmars-d-learn

On Thursday, 30 June 2016 at 00:27:57 UTC, rikki cattermole wrote:

On 30/06/2016 12:25 PM, Jonathan Marler wrote:
Assume you have a function that accepts a GameObject but does 
something special if that GameObject happens to be an instance 
of the Player class. How would you go about determining this? 
(Note: assume you need to make the distinction at runtime, so 
you can't use a static if with an 'is' expression inside a 
template.)


void func(GameObject o) {
if (Player player = cast(Player)o) {
// something special
}
}


Thanks for the response.  Do you also have any information on how 
cast works under the hood?




Cast vs Virtual Method vs TypeId?

2016-06-29 Thread Jonathan Marler via Digitalmars-d-learn
I'd like to hear people's thoughts on the various solutions for 
the following problem.  Say you have some hierarchy of classes 
like:


class GameObject {
  // ...
}
class Entity : GameObject {
  // ...
}
class Player : Entity {
  // ...
}
class Enemy : Entity {
  // ...
}
// ...

Assume you have a function that accepts a GameObject but does 
something special if that GameObject happens to be an instance of 
the Player class. How would you go about determining this? (Note: 
assume you need to make the distinction at runtime, so you can't 
use a static if with an 'is' expression inside a template.)


I'd like to hear what people think in 2 cases
  1) Only need to know if it's an instance of the Player class.
  2) Need to know if it's an instance of the Player class AND 
need an instance of the Player class.


The potential solutions I thought of were:

1) typeid (Only handles case 1)

if(typeid(obj) == typeid(Player)) {
  // treat as player object
}

If you don't need an instance of the Player class, maybe this one 
is good? I don't know in terms of efficiency if this is better, 
or casting is better.  Maybe cast uses the typeid under the hood 
to determine if the cast can be performed?
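One behavioral difference worth noting between the two (a small illustration; the class names mirror the hierarchy above, plus a hypothetical SuperPlayer subclass): typeid matches only the exact dynamic type, while a cast also accepts subclasses.

```d
class Entity { }
class Player : Entity { }
class SuperPlayer : Player { }

void main()
{
    Entity obj = new SuperPlayer();

    // typeid compares the exact dynamic type, so a subclass of
    // Player does NOT match:
    assert(typeid(obj) != typeid(Player));

    // cast succeeds for Player and any subclass of it:
    assert(cast(Player)obj !is null);
}
```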


2) Custom Type Enum (Only handles case 1)

enum GameObjectType {
  gameObject, player, ...
}
class GameObject {
  GameObjectType type;
}

if(obj.type == GameObjectType.player) {
  // treat it as a player object
  // Note: if you need to use Player class specific fields
  //   then you'll need to use the cast or virtual function
  //   design, which kinda defeats the purpose of this design
  //   in the case where it is actually a Player object.
}

This method may be similar to the typeid method, not sure since I 
don't know how typeid works under the hood.  If it's similar then 
this would just be a waste of memory and should not be used in 
favor of the typeid method.


3) Cast (Handles case 1 and 2)

auto player = cast(Player)obj;
if(player) {
  // treat it as a player object
}

I don't know how cast works under the hood so it's hard to 
compare it to other methods.  Any information on how cast works 
under the hood would be great.


4) Virtual Method (Handles case 1 and 2)

class GameObject {
  Player asPlayer() { return null; }
}
class Player : GameObject {
  override Player asPlayer() { return this; }
}

auto player = obj.asPlayer;
if(player) {
  // treat it as a player object
} else {
  // treat it as any other game object
}

This solution handles the same cases as regular casting, but I 
can't compare them since I don't know how casting works under the 
hood.  One thing to consider is that this method scales linearly 
since you need to add a new virtual method for every type you 
want to support, so the vtable gets larger as you add more types.



Any other solutions? Thoughts?  Thanks in advance for the 
feedback.




Forward References

2016-06-27 Thread Jonathan Marler via Digitalmars-d-learn
Do the various D compilers use multiple passes to handle forward 
references or some other technique?


Re: Bug in Rdmd?

2016-06-13 Thread Jonathan Marler via Digitalmars-d-learn

On Tuesday, 14 June 2016 at 03:40:01 UTC, Adam D. Ruppe wrote:

On Tuesday, 14 June 2016 at 03:15:04 UTC, Jonathan Marler wrote:

It actually is a free function


no, it isn't, it is on File.

Your code doesn't compile on my dmd (and indeed it shouldn't on 
yours either unless you have a version mismatch. rdmd just 
calls dmd, it doesn't produce its own errors)


Shoot stupid mistake.  You were right Jeremy and Adam.  Thanks 
for replying and showing me my silly error.  I could have sworn 
byLine was a template and calling it with file was just 
UFCS...and I don't know how I was able to compile that with 
DMD...must have made a mistake somewhere.  Thanks again.
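For completeness, the working form: byLine is a member function of std.stdio.File, so import the module (or just the stdin symbol) rather than trying to import byLine itself.

```d
import std.stdio; // File and stdin; byLine is a File member, not a free function

int main(string[] args)
{
    foreach (line; stdin.byLine)
    {
        // process line (a reused char[] buffer without the newline)
    }
    return 0;
}
```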


Re: Bug in Rdmd?

2016-06-13 Thread Jonathan Marler via Digitalmars-d-learn

On Tuesday, 14 June 2016 at 01:35:32 UTC, Jeremy DeHaan wrote:

On Tuesday, 14 June 2016 at 01:05:46 UTC, Jonathan Marler wrote:

This code doesn't seem to work with rdmd.  Is this a bug?

  import std.stdio : byLine;
  int main(string[] args)
  {
foreach(line; stdin.byLine) {
}
return 0;
  }

Compiler Output:
  Error: module std.stdio import 'byLine' not found


Try removing the 'byLine' from the import statement. The error 
message looks like it can't find the function 'byLine' in the 
std.stdio module. It isn't a free function, but one of File's 
methods.


It actually is a free function (not a method on the File object). 
This works if you compile it with dmd, just not with rdmd.




Bug in Rdmd?

2016-06-13 Thread Jonathan Marler via Digitalmars-d-learn

This code doesn't seem to work with rdmd.  Is this a bug?

  import std.stdio : byLine;
  int main(string[] args)
  {
foreach(line; stdin.byLine) {
}
return 0;
  }

Compiler Output:
  Error: module std.stdio import 'byLine' not found


Re: Fibers under the hood

2016-06-09 Thread Jonathan Marler via Digitalmars-d-learn

On Thursday, 9 June 2016 at 11:45:01 UTC, Andrew Edwards wrote:

On 6/9/16 2:15 PM, Jonathan Marler wrote:

On Thursday, 9 June 2016 at 05:07:33 UTC, Nikolay wrote:
On Thursday, 9 June 2016 at 04:57:30 UTC, Jonathan Marler 
wrote:
I've googled and searched through the forums but haven't found 
too much on how fibers are implemented.  How does yield return 
execution to the caller but then resume execution in the same 
place on the next call?  Also some information on how the fiber 
call stack works would be nice.  I'm assuming it allocates the 
stack on the GC heap.  If so, what is the default size and is 
that configurable?  Any information or pointers to resources 
that provide this information would be helpful.  Thanks.


See "Documentation of Fiber internals" inside
https://github.com/dlang/druntime/blob/master/src/core/thread.d


Exactly what I was looking for, thanks.  Would be nice if this 
documentation was published on the website somewhere (probably 
in the Fiber library documentation).


Might be wrong but did you mean this?

https://dlang.org/phobos/core_thread.html#.Fiber


I don't see that documentation anywhere on that page.  That's 
where I looked first actually.  It may or may not make sense to 
include that doc in the api documentation, but I think it would 
definitely make sense to include it on its own page that talks 
about how fibers are implemented.  This information is more about 
learning about fibers as opposed to how to use them (which is all 
that most people want to know).


Re: Fibers under the hood

2016-06-08 Thread Jonathan Marler via Digitalmars-d-learn

On Thursday, 9 June 2016 at 05:07:33 UTC, Nikolay wrote:

On Thursday, 9 June 2016 at 04:57:30 UTC, Jonathan Marler wrote:
I've googled and searched through the forums but haven't found 
too much on how fibers are implemented.  How does yield return 
execution to the caller but then resume execution in the same 
place on the next call?  Also some information on how the 
fiber call stack works would be nice.  I'm assuming it 
allocates the stack on the GC heap.  If so, what is the 
default size and is that configurable?  Any information or 
pointers to resources that provide this information would be 
helpful.  Thanks.


See "Documentation of Fiber internals" inside
https://github.com/dlang/druntime/blob/master/src/core/thread.d


Exactly what I was looking for, thanks.  Would be nice if this 
documentation was published on the website somewhere (probably in 
the Fiber library documentation).


Fibers under the hood

2016-06-08 Thread Jonathan Marler via Digitalmars-d-learn
I've googled and searched through the forums but haven't found 
too much on how fibers are implemented.  How does yield return 
execution to the caller but then resume execution in the same 
place on the next call?  Also some information on how the fiber 
call stack works would be nice.  I'm assuming it allocates the 
stack on the GC heap.  If so, what is the default size and is 
that configurable?  Any information or pointers to resources that 
provide this information would be helpful.  Thanks.
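For a quick feel of the mechanics, here is a minimal example (the second constructor argument is the stack size; 64 KiB here is an arbitrary illustration, and the default is implementation-defined):

```d
import core.thread : Fiber;
import std.stdio : writeln;

void main()
{
    auto fib = new Fiber({
        writeln("step 1");
        Fiber.yield();   // save this fiber's context, resume the caller
        writeln("step 2");
    }, 64 * 1024);       // explicit stack size for illustration

    fib.call();          // runs until the yield, prints "step 1"
    fib.call();          // resumes after the yield, prints "step 2"
}
```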


Re: dlang.org using apache?

2016-06-08 Thread Jonathan Marler via Digitalmars-d-learn

On Wednesday, 8 June 2016 at 17:43:03 UTC, Seb wrote:
On Wednesday, 8 June 2016 at 17:05:42 UTC, Jonathan Marler 
wrote:

On Wednesday, 8 June 2016 at 15:51:58 UTC, Adam D. Ruppe wrote:
On Wednesday, 8 June 2016 at 15:05:54 UTC, Ola Fosheim 
Grøstad wrote:

The forum-index http header report:

Server:nginx/1.4.6 (Ubuntu)

People check out stuff like that.


Yeah, and that's an industry-standard production deployment.

But perhaps we should just change the server line for the 
people who do look at it. No need to change the deployment, 
just the apache/nginx config to spit out something different.


I can picture the article now:

The D programming language maintains its own web framework 
called vibe.d, but the official website dlang.org doesn't use 
it.  Instead they use the Apache framework written in C.  
They also decided to modify Apache to make it look like their 
own vibe.d framework.  Apparently tricking people into 
thinking they use their own code was easier than actually 
using it.


Mike's call for help was about actively _improving_ dlang.org 
by pointing out


- what is (or could be) confusing for newcomers
- badly written texts
- missing examples
- not user-friendly parts of the documentation
- missing info
- ...

Basically everything that could stop someone from having 
awesome first five minutes with D!


Great points to make dlang.org more welcoming.  Where it is now 
is much further ahead than in years past.


I read your post but I don't think we're in any disagreement.  I 
think everyone can agree that it would look better to others if 
dlang.org used its own web framework.  Whether or not it makes 
sense to actually implement it is another question.  Since I'm 
not intimately familiar with the internals of dlang.org, or the 
consequences of switching, I don't assert that either way would 
be better.  I am, however, pointing out that there are going to 
be people trying to portray the D language in a negative light, and 
dlang.org not using vibe is exactly the kind of thing these 
people will feed off of and possibly use to turn off others from 
the language.


Re: dlang.org using apache?

2016-06-08 Thread Jonathan Marler via Digitalmars-d-learn

On Wednesday, 8 June 2016 at 15:51:58 UTC, Adam D. Ruppe wrote:
On Wednesday, 8 June 2016 at 15:05:54 UTC, Ola Fosheim Grøstad 
wrote:

The forum-index http header report:

Server:nginx/1.4.6 (Ubuntu)

People check out stuff like that.


Yeah, and that's an industry-standard production deployment.

But perhaps we should just change the server line for the 
people who do look at it. No need to change the deployment, 
just the apache/nginx config to spit out something different.


I can picture the article now:

The D programming language maintains its own web framework 
called vibe.d, but the official website dlang.org doesn't use 
it.  Instead they use the Apache framework written in C.  They 
also decided to modify Apache to make it look like their own 
vibe.d framework.  Apparently tricking people into thinking 
they use their own code was easier than actually using it.




Re: dlang.org using apache?

2016-06-08 Thread Jonathan Marler via Digitalmars-d-learn

On Wednesday, 8 June 2016 at 14:43:35 UTC, Mike Parker wrote:
Really? I just don't see it as that big of a deal. Again, three 
subdomains are using D right now. So it's not like it's not 
being used at all. Moving the website to D just hasn't been a 
priority (nor should it be, IMO). Anyone in the community who 
*does* feel it's important is certainly free to put together a 
prototype and pitch it to the core team. I would ask their 
thoughts about it first, though, before embarking on such a 
project.


I can definitely see and relate to your points.  You're using 
sound arguments when making decisions about software in general.  
However, I think you have to consider the emotional impact of 
this.  If you walked into a printer company and found out they 
didn't use their own printers, what would that say to you?  Since 
dlang.org is the face of the D programming language, it's going 
to be the first thing people use to judge it.  IMO that makes it 
a big deal.


Re: dlang.org using apache?

2016-06-08 Thread Jonathan Marler via Digitalmars-d-learn

On Wednesday, 8 June 2016 at 14:30:53 UTC, Adam D. Ruppe wrote:
These servers tend to be very efficient at front end tasks like 
load balancing, static file serving and cache management, 
standards compliance (including automatically up/down grading 
HTTP versions or TLS requirements), management, security 
(including handling horribly malformed requests) - stuff that 
can take megabytes of code to get right and is typically 
outside the scope of an application server.


That's actually the reason I would think dlang.org should use 
vibe.  Those features are critical to the success and viability 
of vibe.  By making dlang.org dependent on vibe, those features 
are much more likely to be fleshed out and maintained at a high 
standard.




BTW ironically, a lot of people complain that D DOES use its 
own web technology on the website: it is mostly statically 
generated ddoc!


I saw some discussion on that in the forums when I was searching 
for info on why dlang.org doesn't use vibe.  I personally like 
that dlang uses ddoc, but I don't know too much about the 
realistic pros and cons.  I do like the concept though.




Re: dlang.org using apache?

2016-06-08 Thread Jonathan Marler via Digitalmars-d-learn

On Wednesday, 8 June 2016 at 13:32:00 UTC, Mike Parker wrote:
Why would we change over when Apache is working quite happily 
to serve up static content?


I've heard that same argument as the reason people don't use the 
D language.  Why would I change over to D when C/C++ is working 
quite happily?


If the official D website doesn't feel like migrating its own 
infrastructure to use D, why would anyone else?  Of course apache 
works (so does C++), but choosing not to put in the time to 
switch says a lot to the rest of the world.




dlang.org using apache?

2016-06-08 Thread Jonathan Marler via Digitalmars-d-learn
I've decided to write a web application using vibe and was 
shocked to see that dlang.org was using apache.


Should I be scared that even after this long, the official D 
website doesn't rely on its own web tools?


Re: core.sys.windows so lean?

2016-06-06 Thread Jonathan Marler via Digitalmars-d-learn

On Monday, 6 June 2016 at 17:11:44 UTC, Vladimir Panteleev wrote:

On Monday, 6 June 2016 at 16:51:20 UTC, Jonathan Marler wrote:

Hmmm...it seems to be missing quite a lot though.


You could've mentioned you meant just the winsock modules.

They have not been brought over because they were not 
explicitly under an open-source license.


:(

(Sorry I didn't realize they were all only in the winsock module)

Weird that you could include the standard windows module but not 
winsock?  Also seems odd that a language's bindings to an 
operating system API would pose a licensing issue.


Re: core.sys.windows so lean?

2016-06-06 Thread Jonathan Marler via Digitalmars-d-learn

On Monday, 6 June 2016 at 16:13:48 UTC, Vladimir Panteleev wrote:

On Monday, 6 June 2016 at 16:04:30 UTC, Jonathan Marler wrote:
I'm writing some platform specific D code and I've found that 
what the druntime exposes for the windows platform is pretty 
lean.  I'm guessing that the purpose of the druntime version 
of the windows api is to implement the minimum required to 
support the windows platform and not meant to be a 
full-featured interface to windows.  Is this the case?


Erm, not since 2.070:



Hmmm...it seems to be missing quite a lot though.  Especially the 
winsock api.  Over the weekend I was writing some code that uses 
a windows IOCompletionPort and had to add a fair amount of code 
that was missing:


  import core.sys.windows.windows;
  import core.sys.windows.winsock2;

  // NOTE: not sure if this stuff should be included in 
core.sys.windows.winsock2
  alias u_long = uint; // NOTE: not sure if uint is the best 
alias for u_long


  // The actual sockaddr_in structure in windows is 24 bytes long,
  // but the sockaddr_in defined in core.sys.windows.winsock2 is 
only 20.
  // This caused an error when calling AcceptEx that indicated 
the buffer

  // size for sockaddr_in was too small.
  union real_sockaddr_in {
sockaddr_in addr;
ubyte[24] padding; // make sure the sockaddr_in takes 24 bytes
  }

  struct WSABUF {
u_long len;
char* buf;
  }
  alias LPWSABUF = WSABUF*;

  // NOTE: WSAOVERLAPPED is supposed to be castable to and from 
OVERLAPPED.
  //   Maybe this doesn't need to be defined, maybe I could 
just always use OVERLAPPED?

  struct WSAOVERLAPPED {
ULONG* Internal;
ULONG* InternalHigh;
union {
  struct {
DWORD Offset;
DWORD OffsetHigh;
  }
  PVOID Pointer;
}
HANDLE hEvent;
  }
  alias LPWSAOVERLAPPED = WSAOVERLAPPED*;
  alias LPWSAOVERLAPPED_COMPLETION_ROUTINE = void function(uint, 
uint, LPWSAOVERLAPPED, uint);


  enum : int {
SIO_GET_EXTENSION_FUNCTION_POINTER = 0xc806,
  }

  enum GUID WSAID_ACCEPTEX = 
{0xb5367df1,0xcbac,0x11cf,[0x95,0xca,0x00,0x80,0x5f,0x48,0xa1,0x92]};
  alias LPFN_ACCEPTEX = extern(Windows) bool function(SOCKET 
listenSocket,

  SOCKET acceptSocket,
  PVOID outputBuffer,
  DWORD receiveDataLength,
  DWORD localAddressLength,
  DWORD remoteAddressLength,
  DWORD* bytesReceived,
  OVERLAPPED* overlapped) 
nothrow @nogc;


  extern(Windows) int WSAIoctl(SOCKET s, uint dwIoControlCode,
   void* lpvInBuffer, uint cbInBuffer,
   void* lpvOutBuffer, uint cbOutBuffer,
   uint* lpcbBytesReturned,
			   LPWSAOVERLAPPED lpOverlapped, 
LPWSAOVERLAPPED_COMPLETION_ROUTINE lpCompletionRoutine) nothrow 
@nogc;
  extern(Windows) int WSARecv(SOCKET s, LPWSABUF lpBuffer, DWORD 
bufferCount,

  LPDWORD numberOfBytesReceived, LPDWORD flags,
			  LPWSAOVERLAPPED overlapped, 
LPWSAOVERLAPPED_COMPLETION_ROUTINE completionRoutine);


core.sys.windows so lean?

2016-06-06 Thread Jonathan Marler via Digitalmars-d-learn
I'm writing some platform specific D code and I've found that 
what the druntime exposes for the windows platform is pretty 
lean.  I'm guessing that the purpose of the druntime version of 
the windows api is to implement the minimum required to support 
the windows platform and not meant to be a full-featured 
interface to windows.  Is this the case?


If so, is there a good library that someone has implemented to 
support the full windows API? I think I remember there was one a 
few years ago, but now I'm unable to find it.  I think I remember 
that the windows .lib files installed with dmd were missing 
symbols/functions I needed so I had to use the ones in the 
system32 directory installed with windows.  I also had to convert 
them to OMF (since optlink doesn't support COFF).  I'm just 
wondering if someone can shed some light on this, it's just been 
a while and google didn't seem to be much help so pointers in the 
right direction would be much appreciated.  Thanks.


Is this possible in D?

2015-02-19 Thread Jonathan Marler via Digitalmars-d-learn
I am having a heck of a time trying to figure out how to do this. 
 How do I change the attributes of a function based on the 
version without copying the function body?  For example:


version(StaticVersion) {
static void myLongFunction()
{
// long body ...
}
} else {
void myLongFunction()
{
// same long body copied...
}
}

In one version I want the function to be static and in another I 
don't want it to be static.  I cannot figure out how to do this 
without copy/pasting the entire function body to both versions.


Re: Is this possible in D?

2015-02-19 Thread Jonathan Marler via Digitalmars-d-learn

On Thursday, 19 February 2015 at 17:23:47 UTC, Mike Parker wrote:
I agree that string mixins can kill readability. I encountered 
that when I used them to support both D1 and D2 in Derelict 2 
years ago. But I think that when they are kept small and local 
as in cases like this, they aren't bad at all.


Thanks for your example.  It's ugly but it's the only solution 
I've seen that gives me what I'm looking for.  I hadn't thought 
about putting the function body inside a q{ string }
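A sketch of that approach (the identifier names are mine; the point is that the body lives once in a token string and only the attribute differs between the two mixins):

```d
// The body is written once as a q{} token string...
enum myLongFunctionBody = q{
    // long body ...
};

// ...and mixed into whichever declaration the version selects.
version (StaticVersion)
    mixin("static void myLongFunction() {" ~ myLongFunctionBody ~ "}");
else
    mixin("void myLongFunction() {" ~ myLongFunctionBody ~ "}");
```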


Quick help on version function parameter

2015-02-18 Thread Jonathan Marler via Digitalmars-d-learn
Does anyone know a good way to support versioned function 
parameters?  Say, in one version I want a variable to be a global 
and in another I want it to be a parameter.


version(GlobalVersion)
{
    int x;
    void foo()
    {
        // A huge function that uses x
    }
} else {
    void foo(int x)
    {
        // A huge function that uses x (same code as GlobalVersion)
    }
}

The problem with this is that the code is duplicated.  Is there a 
way to do this versioning without having 2 copies of the same 
function body?  The following definitely does not work:


version(GlobalVersion)
{
    int x;
    void foo()
} else {
    void foo(int x)
}
{
    // A huge function that uses x
}


Re: Quick help on version function parameter

2015-02-18 Thread Jonathan Marler via Digitalmars-d-learn
On Wednesday, 18 February 2015 at 23:49:26 UTC, Adam D. Ruppe 
wrote:
I'd write a foo_impl which always takes a parameter. Then do 
the versioned foo() functions which just forward to it:


void foo_impl(int x) { /* long function using x here */ }

version(globals) {
   int x;
   void foo() {
  foo_impl(x);
   }
} else {
   void foo(int x) { foo_impl(x); }
}

Minimal duplication with both interfaces.


That kinda defeats the purpose of why I want this.  It's for 
performance reasons.  I need one version to have NO arguments and 
one version to have one ref argument.
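A hedged sketch of one way to get both signatures without duplicating the body or paying for a forwarding call (names are illustrative, untested): keep the body in a q{ } token string and mix it in behind a different signature per version, letting `x` resolve to either the global or the parameter.

```d
int x; // global used by the no-argument version

// The body is written once; `x` binds to the global or to the
// parameter depending on which signature it is mixed into.
enum fooBody = q{
    {
        // huge function that uses x ...
        x += 1;
    }
};

version (GlobalVersion)
{
    mixin("void foo() " ~ fooBody);         // reads the global x
}
else
{
    mixin("void foo(ref int x) " ~ fooBody); // parameter shadows the global
}
```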


Re: @nogc with assoc array

2015-02-16 Thread Jonathan Marler via Digitalmars-d-learn

On Monday, 16 February 2015 at 17:58:10 UTC, Benjamin Thaut wrote:
Because the index operator throws a OutOfRange exception and 
throwing exceptions allocates, maybe?


Oh...I hadn't thought of that!  Thanks for the quick response.


@nogc with assoc array

2015-02-16 Thread Jonathan Marler via Digitalmars-d-learn

Why is the 'in' operator nogc but the index operator is not?

void main() @nogc
{
int[int] a;
auto v = 0 in a; // OK
auto w = a[0];   // Error: indexing an associative
 // array in @nogc function main may
 // cause GC allocation
}
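A minimal sketch of the usual @nogc-friendly workaround: use `in` to get a pointer and handle the missing key yourself, instead of letting the index operator throw (and thus allocate).

```d
// Returns the value for `key`, or `fallback` if the key is absent.
int lookup(const int[int] aa, int key, int fallback) @nogc
{
    auto p = key in aa;          // @nogc: yields a pointer or null
    return p is null ? fallback : *p;
}

void main() @nogc
{
    // An AA literal would allocate, so assume `aa` was populated
    // outside the @nogc code.
    int[int] aa;
    auto v = lookup(aa, 0, -1);  // -1 here, since key 0 is absent
}
```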


Re: @nogc with assoc array

2015-02-16 Thread Jonathan Marler via Digitalmars-d-learn

On Monday, 16 February 2015 at 19:12:45 UTC, FG wrote:
Range violation is an Error, but never mind that. The real 
question is: given all the work related to @nogc, wouldn't it 
be better for such common Errors to be preallocated and only 
have file and line updated when they are thrown?


These are @nogc already, because they simply cast 
typeid(OutOfMemoryError).init or 
typeid(InvalidMemoryOperationError).init:
extern (C) void onOutOfMemoryError(void* pretend_sideffect = 
null) @trusted pure nothrow @nogc
extern (C) void onInvalidMemoryOperationError(void* 
pretend_sideffect = null) @trusted pure nothrow @nogc


These could be made @nogc with one object of each kind preallocated:
extern (C) void onAssertError( string file = __FILE__, size_t 
line = __LINE__ ) nothrow
extern (C) void onRangeError( string file = __FILE__, size_t 
line = __LINE__ ) @safe pure nothrow
extern (C) void onSwitchError( string file = __FILE__, size_t 
line = __LINE__ ) @safe pure nothrow


This could be a good idea for some types of exceptions.  I 
believe OutOfMemoryError is already pre-allocated (it has to be, 
since you can't allocate it once you are out of memory).  The 
problem with your suggestion is that if you allow the exception 
to be updated with the line number/filename (it isn't 
immutable), then you have to store it in TLS memory.  That may 
be an acceptable tradeoff, but you have to take it into 
consideration.  Also, if you have a chain of exceptions, you 
wouldn't be able to include the same exception more than once in 
the chain.


The problem D has with exceptions and GC memory is complex and 
will have different optimal solutions in different cases.  In 
some cases, it would be better for D to support non-GC heap 
allocated exceptions.  Maybe these types of exceptions could be 
derived from another class so the user code will know that the 
memory needs to be freed.  There are also other ideas but my 
point is we should make a plan about what solutions we think 
would be good to implement and determine which ones we want to 
tackle first.


Re: @nogc with assoc array

2015-02-16 Thread Jonathan Marler via Digitalmars-d-learn

On Tuesday, 17 February 2015 at 00:00:54 UTC, FG wrote:
Yes, they would be in TLS. I know exceptions in general are a 
complex problem, therefore I limited the comment only to 
errors, because forbidding the use of `aa[key]` in @nogc seemed 
odd (although I do think that `aa.get(key, default)` and `key 
in aa` are superior to `aa[key]`). I have seen a few examples 
of Exception chaining, but not Error chaining, and since Error 
trumps Exception, whatever else was raised was of less 
importance to me, so I didn't give much thought to that.


I'm not sure what you mean by Errors.  Are you talking about 
asserts?


I can has @nogc and throw Exceptions?

2015-02-13 Thread Jonathan Marler via Digitalmars-d-learn
This question comes from wanting to be able to throw an exception 
in code that is @nogc.


I don't know if it's possible, but I'd like to be able to throw 
an exception without allocating memory on the GC heap.  You can 
do it in C++, so I think you should be able to in D.  One idea I 
had was to allocate the memory for the Exception beforehand and 
construct the Exception class in the pre-allocated memory.  I 
came up with the following code:


T construct(T,A...)(void* buffer, A args)
{
  return (cast(T)buffer).__ctor(args);
}

Now to test it:

void main()
{
    ubyte[__traits(classInstanceSize, Exception)] exceptionBuffer;
    throw construct!(Exception)(exceptionBuffer.ptr, "My Exception Allocated on the STACK!");
}

I got an assertion error. I'm not sure why, but when I print out 
the contents of the buffer of my stack exception it differs from 
an exception created for the garbage collector with new.  It 
looks like it has some accounting information embedded in the 
class instance. I figured as much but I didn't think the code 
that performs the throw would be dependent on this.


Also, this doesn't look like a very safe option because the 
initial values for the class members don't get set using this 
construct template.


If anyone has any other ideas or a way to fix mine let me know, 
thanks.


Re: I can has @nogc and throw Exceptions?

2015-02-13 Thread Jonathan Marler via Digitalmars-d-learn
On Friday, 13 February 2015 at 19:13:02 UTC, Steven Schveighoffer 
wrote:
You need to actually allocate the memory on the heap. Your data 
lives on the stack frame of main, which goes away as soon as 
main exits, and your exception is caught outside main.


-Steve


Yes, I am aware of this.  That doesn't mean you have to allocate 
on the GC heap.  You can:


1. Make sure the exception is caught before the function that 
allocated the memory for it on the stack returns (not the safest 
thing to do, but it works)

2. Allocate the memory on the non-GC heap
3. Allocate the memory in a global


Re: I can has @nogc and throw Exceptions?

2015-02-13 Thread Jonathan Marler via Digitalmars-d-learn

On Friday, 13 February 2015 at 19:10:00 UTC, Adam D. Ruppe wrote:
On Friday, 13 February 2015 at 19:03:10 UTC, Jonathan Marler 
wrote:

T construct(T,A...)(void* buffer, A args)
{
 return (cast(T)buffer).__ctor(args);
}


This is wrong, you need to initialize the memory first to the 
proper values for the class, gotten via typeid(T).init. 
std.conv.emplace does this correctly, either use it or look at 
its source to see how to do it.


That's what I was looking for! Thanks.



 ubyte[ __traits(classInstanceSize, Exception)] 
exceptionBuffer;


When the stack unwinds, this will be invalidated... I don't 
think stack allocated exceptions are ever a good idea. I don't 
think malloc exceptions are a good idea either, the catcher 
would  need to know to free it.


You might preallocate a pool of GC'd exceptions though, then 
throw the next one in the list instead of making a new one each 
time.


Yes, I am aware of these things.  Stack-allocated exceptions are 
dangerous if you let them get thrown above the function they 
were allocated in, but this is easy to prevent by simply making 
sure you catch the exception in the function you allocate it in. 
 And yes, malloc'd exceptions would be odd, since the convention 
is to allocate them on the GC heap, so no one would think they 
had to free them.  You could also use a global, but I'm aware of 
the caveats there too, between excessive TLS memory and the 
dangers of using shared or __gshared memory.  A pool of 
pre-allocated exceptions is an idea I was throwing around, but 
with this new emplace function it opens up my options.  Thanks.
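For the record, a small sketch of the emplace-based version suggested above, catching inside the same function so the stack buffer stays valid (illustrative, not a definitive pattern):

```d
import std.conv : emplace;
import std.stdio;

void main()
{
    // Buffer must be large enough and suitably aligned for a class instance.
    align(16) ubyte[__traits(classInstanceSize, Exception)] buf;

    try
    {
        // emplace copies typeid(Exception).init into the buffer
        // before running the constructor.
        throw emplace!Exception(buf[], "allocated on the stack");
    }
    catch (Exception e)
    {
        writeln(e.msg); // safe: still inside the owning stack frame
    }
}
```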





Tuples not working?

2015-01-09 Thread Jonathan Marler via Digitalmars-d-learn

import std.stdio;
import std.typecons;
void main()
{
    alias TL = Tuple!(int, long, float);
    foreach (i, T; TL)
        writefln("TL[%d] = %s", i, typeid(T));
}

Why is this not working?

D:\dmd2\windows\bin\..\..\src\phobos\std\typecons.d(419): Error: 
need 'this' for '_expand_field_0' of type 'int'
D:\dmd2\windows\bin\..\..\src\phobos\std\typecons.d(419): Error: 
need 'this' for '_expand_field_1' of type 'long'
D:\dmd2\windows\bin\..\..\src\phobos\std\typecons.d(419): Error: 
need 'this' for '_expand_field_2' of type 'float'


Tried to compile using dmd 2.066 and dmd 2.067.  Code taken 
directly from dlang website here (http://dlang.org/tuple.html). 
Thanks.


Re: Conditional Compilation for Specific Windows

2015-01-07 Thread Jonathan Marler via Digitalmars-d-learn
On Wednesday, 7 January 2015 at 18:50:40 UTC, Jacob Carlborg 
wrote:

On 2015-01-07 19:27, Jonathan Marler wrote:
I'm looking at the Windows multicast API.  It has different 
socket
options depending on if you are on Windows XP or Windows Vista 
(and
later).  Is there a way to tell at runtime which version of 
windows you
are on? Note: I'm specifically talking about runtime because I 
want the
same binary to run on all windows versions so I have to 
support both and

determine which one I am running on at runtime.


Use the regular system API's as you would in C. Should be easy 
to find if you search the web.


I've looked up the windows version helper functions 
(http://msdn.microsoft.com/en-us/library/windows/desktop/dn424972(v=vs.85).aspx). 
 The problem is that these functions are not defined in DMD's 
user32.lib.  I could use the operating system's user32.lib but it 
is in COFF format, so I would have to convert my D object files 
to COFF and then compile using MSVC or GNU GCC for windows (or I 
could try converting the OS user32.lib to OMF).  Or, I could add 
the functions to DMD's user32.lib but as far as I know this is a 
private binary managed by Digital Mars that I can't contribute 
to?  Am I wrong?  Does anyone else have a solution or an idea on 
this?


Note: I've wanted to use other windows function in the past that 
were missing from DMD's user32.lib file.  A solution to solve 
this for multiple functions would be ideal, thanks.


Conditional Compilation for Specific Windows

2015-01-07 Thread Jonathan Marler via Digitalmars-d-learn
I'm looking at the Windows multicast API.  It has different 
socket options depending on if you are on Windows XP or Windows 
Vista (and later).  Is there a way to tell at runtime which 
version of windows you are on? Note: I'm specifically talking 
about runtime because I want the same binary to run on all 
windows versions so I have to support both and determine which 
one I am running on at runtime.


Re: Conditional Compilation for Specific Windows

2015-01-07 Thread Jonathan Marler via Digitalmars-d-learn
On Wednesday, 7 January 2015 at 18:50:40 UTC, Jacob Carlborg 
wrote:

On 2015-01-07 19:27, Jonathan Marler wrote:
I'm looking at the Windows multicast API.  It has different 
socket
options depending on if you are on Windows XP or Windows Vista 
(and
later).  Is there a way to tell at runtime which version of 
windows you
are on? Note: I'm specifically talking about runtime because I 
want the
same binary to run on all windows versions so I have to 
support both and

determine which one I am running on at runtime.


Use the regular system API's as you would in C. Should be easy 
to find if you search the web.


They are the regular system APIs.  They change depending on which 
version of windows you are on 
(http://msdn.microsoft.com/en-us/library/windows/desktop/ms738558(v=vs.85).aspx). 
 Again, how do I determine which version of windows I am on?  My 
code will default to using the new API (because it is the most 
efficient), but then will fall back to using the old API if it 
can detect that the current version of Windows does not support 
the new API.
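A rough sketch of the runtime check being described, assuming the core.sys.windows bindings expose GetVersionExW (note that on Windows 8.1 and later this API reports a capped version unless the application is manifested, so treat it as illustrative only):

```d
version (Windows)
{
    import core.sys.windows.windows;

    /// True on Windows Vista (6.0) or later, where the newer
    /// multicast socket options are available.
    bool isVistaOrLater()
    {
        OSVERSIONINFOW info;
        info.dwOSVersionInfoSize = OSVERSIONINFOW.sizeof;
        if (!GetVersionExW(&info))
            return false; // on failure, fall back to the XP-era API
        return info.dwMajorVersion >= 6;
    }
}
```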


Re: Conditional Compilation for Specific Windows

2015-01-07 Thread Jonathan Marler via Digitalmars-d-learn
On Wednesday, 7 January 2015 at 18:50:40 UTC, Jacob Carlborg 
wrote:

On 2015-01-07 19:27, Jonathan Marler wrote:
I'm looking at the Windows multicast API.  It has different 
socket
options depending on if you are on Windows XP or Windows Vista 
(and
later).  Is there a way to tell at runtime which version of 
windows you
are on? Note: I'm specifically talking about runtime because I 
want the
same binary to run on all windows versions so I have to 
support both and

determine which one I am running on at runtime.


Use the regular system API's as you would in C. Should be easy 
to find if you search the web.


Oh, wait a second, I misunderstood you.  You were talking about 
using the regular Windows APIs to determine which version of 
Windows I am running on.  I was going to do that, but I wanted 
to check whether D has created a wrapper for that or uses a 
particular convention.


Re: Delegate returning itself

2014-12-08 Thread Jonathan Marler via Digitalmars-d-learn

On Saturday, 6 December 2014 at 15:46:16 UTC, Adam D. Ruppe wrote:
The problem is the recursive *alias* rather than the delegate. 
Just don't use the alias name inside itself so like


alias MyDelegate = void delegate() delegate();

will work. The first void delegate() is the return value of the 
MyDelegate type.


Yes I tried that as well.  It still doesn't solve the issue.  The 
delegate being returned doesn't return a delegate, it returns the 
void type.  You would need to write delegate() delegate() 
delegate() delegate() ...FOREVER.  I can't figure out a way to 
write this in the language even though the machine code it 
generates should be quite trivial.


Re: Delegate returning itself

2014-12-08 Thread Jonathan Marler via Digitalmars-d-learn

On Monday, 8 December 2014 at 14:08:33 UTC, Jonathan Marler wrote:
On Saturday, 6 December 2014 at 15:46:16 UTC, Adam D. Ruppe 
wrote:
The problem is the recursive *alias* rather than the delegate. 
Just don't use the alias name inside itself so like


alias MyDelegate = void delegate() delegate();

will work. The first void delegate() is the return value of 
the MyDelegate type.


Yes I tried that as well.  It still doesn't solve the issue.  
The delegate being returned doesn't return a delegate, it 
returns the void type.  You would need to write delegate() 
delegate() delegate() delegate() ...FOREVER.  I can't figure 
out a way to write this in the language even though the machine 
code it generates should be quite trivial.


I did some digging and realized that C/C++ have the same problem. 
 I found a nice post on it with 2 potential solutions 
(http://c-faq.com/decl/recurfuncp.html).  I liked the second 
solution so I wrote up an example in D.  If anyone has any other 
ideas or can think of a way to improve my example feel free to 
post and let me know, thanks.

import std.stdio;

struct StateFunc
{
    StateFunc function() func;
}
StateFunc state1()
{
    writeln("state1");
    return StateFunc(&state2);
}
StateFunc state2()
{
    writeln("state2");
    return StateFunc(&state3);
}
StateFunc state3()
{
    writeln("state3");
    return StateFunc(null);
}
void main(string[] args)
{
    StateFunc state = StateFunc(&state1);

    while (state.func !is null) {
        state = state.func();
    }
}


Re: Delegate returning itself

2014-12-08 Thread Jonathan Marler via Digitalmars-d-learn

On Monday, 8 December 2014 at 14:38:37 UTC, Marc Schütz wrote:
On Monday, 8 December 2014 at 14:31:53 UTC, Jonathan Marler 
wrote:
On Monday, 8 December 2014 at 14:08:33 UTC, Jonathan Marler 
wrote:
On Saturday, 6 December 2014 at 15:46:16 UTC, Adam D. Ruppe 
wrote:
The problem is the recursive *alias* rather than the 
delegate. Just don't use the alias name inside itself so like


alias MyDelegate = void delegate() delegate();

will work. The first void delegate() is the return value of 
the MyDelegate type.


Yes I tried that as well.  It still doesn't solve the issue.  
The delegate being returned doesn't return a delegate, it 
returns the void type.  You would need to write delegate() 
delegate() delegate() delegate() ...FOREVER.  I can't figure 
out a way to write this in the language even though the 
machine code it generates should be quite trivial.


I did some digging and realized that C/C++ have the same 
problem.
I found a nice post on it with 2 potential solutions 
(http://c-faq.com/decl/recurfuncp.html).  I liked the second 
solution so I wrote up an example in D.  If anyone has any 
other ideas or can think of a way to improve my example feel 
free to post and let me know, thanks.

import std.stdio;

struct StateFunc
{
 StateFunc function() func;
}
StateFunc state1()
{
 writeln("state1");
 return StateFunc(&state2);
}
StateFunc state2()
{
 writeln("state2");
 return StateFunc(&state3);
}
StateFunc state3()
{
 writeln("state3");
 return StateFunc(null);
}
void main(string[] args)
{
 StateFunc state = StateFunc(&state1);

 while (state.func !is null) {
   state = state.func();
 }
}


Nice! Using alias this, you can call the struct directly:

struct StateFunc
{
  StateFunc function() func;
  alias func this;
}
state = state();

Now there still needs to be a way to just `return state2;` 
instead of `return StateFunc(state2);`...


Nice addition! I can't think of a way to solve the implicit 
conversion from function pointer to struct, but not a big deal.  
I'm mostly glad I found a way to do this with no overhead and no 
awkward casting.  Adding the implicit conversion would be icing 
on the cake.
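Putting the two posts together, an untested but complete sketch of the alias-this variant (state names are illustrative):

```d
import std.stdio;

struct StateFunc
{
    StateFunc function() func;
    alias func this; // lets the struct be called like the pointer
}

StateFunc state2() { writeln("state2"); return StateFunc(null); }
StateFunc state1() { writeln("state1"); return StateFunc(&state2); }

void main()
{
    auto state = StateFunc(&state1);
    while (state.func !is null)
        state = state(); // forwards to state.func() through alias this
}
```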


Delegate returning itself

2014-12-06 Thread Jonathan Marler via Digitalmars-d-learn

Is there a way to create a delegate that returns itself?

alias MyDelegate delegate() MyDelegate;
// OR
alias MyDelegate = MyDelegate delegate();

When I compile this I get:

Error: alias MyDelegate recursive alias declaration

The error makes sense but I still feel like there should be a way 
to create a delegate that returns itself.  Maybe there's 
something I haven't thought of.  Does anyone have an idea on how 
to do this?


Reason for mypackage/package.d instead of mypackage.d

2014-11-10 Thread Jonathan Marler via Digitalmars-d-learn
I was perusing a PR for phobos where std/range.d was split into 
submodules and std/range.d was moved to std/range/package.d


I was wondering why a package module had to be called package.d 
instead of just being the package name.  For example, instead of 
moving std/range.d to std/range/package.d, why doesn't modifying 
std/range.d to contain the public imports of the submodules work 
just as well?


My first thought was that maybe it lets the compiler know that 
all symbols marked `package` are visible to all the modules in 
the same package, and the only way the compiler knows what 
modules are in a package is if they are imported in the 
package.d file... is this one of the reasons?  Are there other 
reasons as well?  Thanks in advance.
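For reference, the shape being discussed: a minimal sketch of how a std/range/package.d forwards to its submodules (module names here are illustrative of the PR, not a full listing).

```d
// std/range/package.d -- compiled as module std.range
module std.range;

// Publicly re-export the submodules so that existing
// `import std.range;` client code keeps working unchanged.
public import std.range.primitives;
public import std.range.interfaces;
```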