version assignment and module scope
I'm a bit baffled why something like this isn't recognized by imported modules: version = Unicode; I use it to try to alter the way the win32 API headers resolve certain symbols, but currently the only way to get that version identifier recognized correctly is to define it on the compiler command line. Is there some alternative? I thought this was one way to emulate C's "#define" style of conditional compilation, but it appears that anything outside of the current module is unaffected by version assignments like the one above. Could this be considered a design defect? Thanks!
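A minimal single-module sketch of the behavior being asked about (module and identifier names are mine, chosen for illustration): a `version = X;` at module level takes effect for the rest of that module only, while imported modules never see it. To affect every module you would pass `-version=Unicode` on the compiler command line instead.

```d
// version_demo.d -- hypothetical module, illustrative only
module version_demo;

version = Unicode;   // visible only within version_demo, from this point on

version (Unicode)
    enum buildMode = "Unicode";   // this branch is taken here...
else
    enum buildMode = "ANSI";      // ...but a module importing version_demo
                                  // would still take its own else-branch
                                  // unless -version=Unicode is passed.
```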
Re: version assignment and module scope
No, it works the way it's intended to work. Given that the order of module imports is not defined (i.e. reordering imports should not affect the resulting code), making 'version=' propagate to other modules would create disastrous side effects, even weirder than C macro abuse. Okay, thanks for the explanation. I hadn't considered the effect of putting a version assignment between imports, but I see how that could become an issue.
dsource and WinAPI
I've tried to create an account on dsource.org without success, and am wondering if anyone here knows whether they're no longer accepting new users. I've added a new winhttp.d source file for the win32 folder and would like to contribute it somehow. Thanks!
Re: dsource and WinAPI
Which project are you looking at? Bindings for the Windows API: http://www.dsource.org/projects/bindings/wiki/WindowsApi This is a pretty important project, especially for getting more Windows programmers on board. Thanks for your help!
Re: dsource and WinAPI
Vladimir, thanks for looking at the pull request. It'd be great if the whole project was moved to GitHub to allow more people to contribute.
DirEntry on Windows - wstring variant?
As a Windows programmer using D, I find a number of questionable things in D's focus on using string everywhere. It's not a huge deal to add UTF-8 to UTF-16 mapping in certain areas, but when it comes to working with a lot of data and Windows API calls, the fewer needless conversions the better. I like the dirEntries (std.file) approach to traversing files and folders in a directory (almost as nice as C++14's), but I think it's a bit odd that native-OS strings aren't used in D here. Sure, I get that a fairly consistent programming interface may make the language easier to use for certain programmers, but if you're using D on Windows, you will be made well aware of the incompatibilities between D's strings and the Windows API (unless you always use ASCII, I suppose). Anyway, I'm curious whether proposing changes to those interfaces is worthwhile, or if I should just modify it for my own purposes and leave the standard library be. P.S. It's a shame to keep running into Unicode issues with D and Windows, and sometimes it's a bit discouraging. Right before I peeked into DirEntry, I worked a bit on a workaround for std.stdio.File's Unicode problems (a documented bug that's 2+ years old). I remember trying D a while back and giving up because OPTLINK was choking on paths. And just yesterday it choked on what the %PATH% environment variable was set to, so I had to clear that before running it.
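For readers unfamiliar with the conversion cost being described, here is a small sketch of the UTF-8/UTF-16 transcoding that std.utf performs (the function name toWindowsString is my own, for illustration; the wide "W" Windows APIs additionally want a zero-terminated pointer, which std.utf.toUTF16z produces):

```d
import std.utf : toUTF16, toUTF8;

// Every call from a UTF-8 D string into a wide Windows API
// pays for a transcoding pass like this one.
wstring toWindowsString(string s)
{
    return s.toUTF16;   // allocates and re-encodes the whole string
}
```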
Re: DirEntry on Windows - wstring variant?
On Friday, 24 October 2014 at 22:53:15 UTC, Jonathan M Davis via Digitalmars-d-learn wrote: Also, given how DirEntry works internally, I'd definitely be inclined to argue that it would be too much of a mess to support wstring unless it's by simply converting the name to a wstring when requested (which is kind of pointless, since you can just do to!wstring on the name if that's what you want). Making it support wstring directly would involve a lot of code duplication, and it would increase the memory footprint, because the structs involved would then have to hold the path and whatnot as both a string and a wstring. So, I question that it's at all worth it to try to make dirEntries support wstring. I would suggest that the string be kept as a wstring inside the DirEntry structure, rather than converting twice as you suggest. Then a decision can be made as to whether .name returns a string or a wstring. If backwards compatibility is a concern, it could be converted to a string on that call, though it would break the nothrow promise that way. Adding something like .wname would work here for getting the native wstring, I suppose. Another alternative is to have a union of string and wstring, plus a bool indicating how the string is stored internally. Of course, the .name and .wname properties would need to check it and convert depending on how it is stored. It's not pretty, but it's just another possibility. The whole point is that there is a lot of wasted time doing the UTF-16/UTF-8 conversions when using these library functions. And we definitely don't want to encourage the use of wstring. It's there for when you need it (which is great), but programs really should be using string if they don't actually need to use wstring or dstring. I get that wstring as a whole is ugly, but it's the native Unicode string type on Windows. If someone is doing serious work on Windows, wstring will eventually need to be used.
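The union-plus-flag alternative floated above can be sketched roughly like this. To be clear, NativeName and everything in it are hypothetical names of mine, not Phobos code; this only illustrates the shape of the idea:

```d
import std.conv : to;

// Hypothetical: store the path in whichever encoding the OS handed us,
// and convert lazily only when the other encoding is requested.
struct NativeName
{
    private union
    {
        string  utf8;
        wstring utf16;
    }
    private bool isWide;   // which union member is live

    this(string  s) { utf8  = s; isWide = false; }
    this(wstring w) { utf16 = w; isWide = true;  }

    // Converts only when the stored encoding doesn't match.
    @property string name() const
    {
        return isWide ? utf16.to!string : utf8;
    }

    @property wstring wname() const
    {
        return isWide ? utf16 : utf8.to!wstring;
    }
}
```

The union keeps the footprint at one array plus a bool, at the cost of the flag check on every access.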
It'd be nice to keep the abstraction of string at every level of a program, but on Windows it's impossible. The standard library, even if it were comprehensive enough, will never cover every corner case where strings are needed. Whether you're using the Windows API, COM, or interfacing with other Windows libraries, wstring will still rear its ugly head. But, idealism aside, there are good reasons for keeping the pathname in its native format on Windows:
- If a program is processing lots of files, there will be a lot of wasted cycles doing those wstring->string conversions.
- Doing anything more with the files, besides listing them, will probably result in a string->wstring conversion during a call to Windows to open or query information about the file, i.e. more cycles wasted.
- Additionally, Windows has a peculiar way of handling long pathnames that requires a "\\?\" prefix and only works with the Unicode versions of its functions. This also makes the pathname uniquely OS-specific.
Anyway, some things to think about.
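The long-pathname point can be sketched as follows. The function name toLongPath is my invention, not a Phobos or Windows API function, and real code would also need to handle UNC paths (\\server\share becomes \\?\UNC\server\share) and already-prefixed paths:

```d
// Paths longer than MAX_PATH need the \\?\ prefix and must be passed
// to the wide ("W") Windows API entry points. Only plain drive-letter
// paths (C:\...) are covered by this sketch.
wstring toLongPath(wstring absPath)
{
    enum prefix = `\\?\`w;
    return prefix ~ absPath;
}
```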
readln with buffer fails
I have this simple code: int main() { import std.stdio; char[4096] Input; readln(Input); //readln!(char)(Input); // also fails return 0; } I get these messages during compilation: test.d(39): Error: template std.stdio.readln cannot deduce function from argument types !()(char[4096]), candidates are: src\phobos\std\stdio.d(2818): std.stdio.readln(S = string)(dchar terminator = '\x0a') if (isSomeString!S) src\phobos\std\stdio.d(2851): std.stdio.readln(C)(ref C[] buf, dchar terminator = '\x0a') if (isSomeChar!C && is(Unqual!C == C) && !is(C == enum)) src\phobos\std\stdio.d(2858): std.stdio.readln(C, R)(ref C[] buf, R terminator) if (isSomeChar!C && is(Unqual!C == C) && !is(C == enum) && isBidirectionalRange!R && is(typeof(terminator.front == (dchar).init))) Now, I'm used to 'buffer' meaning one thing, but here it seems that buffer means something more akin to a 'sink' object, or a forced dynamic array type? Is there some way I can avoid dynamic allocations? Thanks!
Re: readln with buffer fails
On Wednesday, 29 October 2014 at 21:19:25 UTC, Peter Alexander wrote: You need to take a slice of the buffer: char[] buf = Input[]; readln(buf); // line now in buf The reason for this is because you need to know where the string ends. If you just passed in Input, how would you know how long the line read was? Thanks, that solves the problem. I guess what confuses me is that Input isn't a slice, or at least not implicitly convertible to one. Also, I've tried using Input[] directly at the callsite but apparently that would be an rValue, and D doesn't do rValues yet. So here's a simple solution to reading a line using a fixed stack array: char[4096] Input; char[] InputSlice; // actual slice of input'd text (instead of full 4K) size_t NumChars; while (NumChars == 0) { // readln(buf) requires a slice. Input isn't converted to one, // and readln() requires an rvalue for a buffer: char[] buf = Input[]; NumChars = readln(buf); // Set InputSlice to range of text that was input, minus linefeed: InputSlice = chomp(buf[0 .. NumChars]); // Empty line? if (InputSlice == "") NumChars = 0; } Thanks all for your help
Re: readln with buffer fails
err, I meant rvalue *reference* above
Re: readln with buffer fails
lol, if only I could edit my posts. The comment preceding the readln() call was wrong too. This is what I have now: // readln(buf) requires a slice *Reference*. // rvalue references aren't supported by D, so readln(Input[]) fails
Re: readln with buffer fails
On Wednesday, 29 October 2014 at 23:28:07 UTC, Justin Whear wrote: Part of what readln does is *modify* the slice itself, not just the pointed-to characters. In particular it alters the length member so that you know how much input was actually read. This is also why the rvalue reference shouldn't work. Remember, D chose not to repeat C's mistake of relying on null terminators. Nice, thanks for that. I wasn't aware the .length member was changed, but I just verified it myself by surrounding the call with some debug output. Sure enough, its length is 4096 before the call, and a different length after (depending on what was input).
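Pulling the whole thread together, here is a consolidated sketch of the fixed-buffer readln pattern with the corrected comments and the imports spelled out (readNonEmptyLine is my own wrapper name, not a Phobos function):

```d
import std.stdio : File;
import std.string : chomp;

// Read lines from `input` into the caller's stack buffer until a
// non-empty line (after stripping the newline) is found, or EOF.
char[] readNonEmptyLine(File input, ref char[4096] storage)
{
    char[] line;
    while (line.length == 0 && !input.eof)
    {
        // readln needs an lvalue slice: it rewrites the slice's
        // .length to the number of characters actually read, which
        // is why readln(storage[]) -- an rvalue -- fails to compile.
        char[] buf = storage[];
        input.readln(buf);
        line = chomp(buf);   // strip the trailing newline
    }
    return line;
}
```

No heap allocation happens for the line data itself; the returned slice points into the caller's 4 KiB stack array.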