Re: Ranges

2011-03-18 Thread Peter Alexander

On 13/03/11 12:05 AM, Jonathan M Davis wrote:

So, when you're using a range of char[] or wchar[], you're really using a range
of dchar. These ranges are bi-directional. They can't be sliced, and they can't
be indexed (since doing so would likely be invalid). This generally works very
well. It's exactly what you want in most cases. The problem is that that means
that the range that you're iterating over is effectively of a different type than the original char[] or wchar[].


This has to be the worst language design decision /ever/.

You can't just mess around with fundamental principles like "the first 
element in an array of T has type T" for the sake of a minor 
convenience. How are we supposed to do generic programming if 
common-sense reasoning about types doesn't hold?


This is just std::vector<bool> from C++ all over again. Can we not learn 
from the mistakes of the past?


Re: Ranges

2011-03-18 Thread Jonathan M Davis
On Friday 18 March 2011 02:29:51 Peter Alexander wrote:
 On 13/03/11 12:05 AM, Jonathan M Davis wrote:
  So, when you're using a range of char[] or wchar[], you're really using a
  range of dchar. These ranges are bi-directional. They can't be sliced,
  and they can't be indexed (since doing so would likely be invalid). This
  generally works very well. It's exactly what you want in most cases. The
  problem is that that means that the range that you're iterating over is
  effectively of a different type than the original char[] or wchar[].
 
 This has to be the worst language design decision /ever/.
 
 You can't just mess around with fundamental principles like the first
 element in an array of T has type T for the sake of a minor
 convenience. How are we supposed to do generic programming if common
 sense reasoning about types doesn't hold?
 
 This is just std::vector<bool> from C++ all over again. Can we not learn
 from mistakes of the past?

It really isn't a problem for the most part. You just need to understand that 
when using range-based functions, char[] and wchar[] are effectively _not_ 
arrays. They are ranges of dchar. And given the fact that it really wouldn't 
make sense to treat them as arrays in this case anyway (due to the fact that a 
single element is a code unit but _not_ a code point), the current solution 
makes a lot of sense. Generally, you just can't treat char[] and wchar[] as 
arrays when you're dealing with characters/code points rather than code units. 
So, yes it's a bit weird, but it makes a lot of sense given how unicode is 
designed. And it works.

If you really don't want to deal with it, then just use dchar[] and dstring 
everywhere.
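
To make the distinction concrete, here is a minimal sketch (using std.range's introspection templates; not code from this thread) of how Phobos sees a narrow string: indexing yields code units, while the range interface yields dchar.

```d
import std.range;

void main()
{
    string s = "résumé";

    // As an array, indexing yields UTF-8 code units:
    static assert(is(typeof(s[0]) == immutable(char)));

    // As a range, the element type is dchar (a code point):
    static assert(is(ElementType!string == dchar));

    // Range-based code sees narrow strings as bidirectional,
    // not random-access:
    static assert( isBidirectionalRange!string);
    static assert(!isRandomAccessRange!string);

    // foreach with an explicit dchar loop variable decodes as it goes:
    foreach (dchar c; s) { /* each c is a full code point */ }
}
```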

- Jonathan M Davis


Re: Ranges

2011-03-18 Thread spir

On 03/18/2011 10:29 AM, Peter Alexander wrote:

On 13/03/11 12:05 AM, Jonathan M Davis wrote:

So, when you're using a range of char[] or wchar[], you're really using a range
of dchar. These ranges are bi-directional. They can't be sliced, and they can't
be indexed (since doing so would likely be invalid). This generally works very
well. It's exactly what you want in most cases. The problem is that that means
that the range that you're iterating over is effectively of a different type than
the original char[] or wchar[].


This has to be the worst language design decision /ever/.

You can't just mess around with fundamental principles like the first element
in an array of T has type T for the sake of a minor convenience. How are we
supposed to do generic programming if common sense reasoning about types
doesn't hold?

This is just std::vector<bool> from C++ all over again. Can we not learn from
mistakes of the past?


I partially agree, but compare with simple ASCII text: you could iterate 
over its chars (= codes = bytes), words, lines... or according to schemes 
specific to your app (e.g. reverse order, every number in it, every word at 
the start of a line...). A piece of text is not only a stream of codes.


The problem is that there is no good decision in the case of char[] or wchar[]. 
We would have to choose some natural sense of what it means to iterate 
over a text, but there is no such thing. What does it *mean*? What is the natural 
unit of a text?
Bytes or words are code units, which mean nothing on their own. Code points (-> dchars) are 
not guaranteed to mean anything either (as shown by past discussion: one code 
point may be the base 'a' and the following one the combining '^', both forming 'â'). 
Code points do not represent characters in the common sense. So it is very 
clear that implicitly iterating over dchars is a wrong choice. But what else?
I would rather get rid of wchar and dchar and deal with a plain stream of bytes 
assumed to represent UTF-8, until we get a good solution for operating at the 
level of human characters.
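
The 'â' example above can be made concrete. The following sketch (assuming a D2 compiler and std.range.walkLength; not code from this thread) shows that the same visible character can be one code point or two, with a different number of UTF-8 code units in each case:

```d
import std.range : walkLength;

void main()
{
    string composed   = "\u00E2";   // 'â' as a single precomposed code point
    string decomposed = "a\u0302";  // 'a' followed by a combining circumflex

    // The same character to a human reader, but different code point counts:
    assert(walkLength(composed)   == 1);
    assert(walkLength(decomposed) == 2);

    // ...and different UTF-8 code unit (byte) counts:
    assert(composed.length   == 2);  // U+00E2 encodes as two bytes
    assert(decomposed.length == 3);  // 'a' is one byte, U+0302 is two
}
```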


Denis
--
_
vita es estrany
spir.wikidot.com



How do I read data with ByChunk?

2011-03-18 Thread Craig Dillabaugh
Hi,
I have two binary files containing image data, and I want to go through them 
pixel by
pixel and read the image data into arrays and compare the pixel values (images 
have the
exact same dimensions but one is unsigned 1 byte per pixel, and the other 
signed 2 bytes
per pixel).

I am trying to read the data in using the following loop:

for(int i = 0; i < num_blocks; i++) {
auto resampbytes = resampFile.byChunk( resamp_buffer_size );
auto normbytes = normalFile.byChunk( normal_buffer_size );
ubyte[] resamp = cast(ubyte[]) resampbytes;
short[] normaldata = cast(short[]) normbytes;

//Process the data ...
}

However, when I attempt to compile this I get the following errors:

Error: cannot cast from ByChunk to ubyte[]
Error: cannot cast from ByChunk to short[]

Oddly, the example for ByChunk in the documentation seems to do exactly what I 
think I
am trying here, but apparently I am missing something.

Any help would be appreciated (also if there is a better way of doing what I am 
trying
to do any pointers on that would be appreciated too!)

Regards,

Craig



Re: How do I read data with ByChunk?

2011-03-18 Thread Zirneklis

On 18/03/2011 14:35, Craig Dillabaugh wrote:

Hi,
I have two binary files containing image data, and I want to go through them 
pixel by
pixel and read the image data into arrays and compare the pixel values (images 
have the
exact same dimensions but one is unsigned 1 byte per pixel, and the other 
signed 2 bytes
per pixel).

I am trying to read the data in using the following loop:

for(int i = 0; i < num_blocks; i++) {
auto resampbytes = resampFile.byChunk( resamp_buffer_size );
auto normbytes = normalFile.byChunk( normal_buffer_size );
ubyte[] resamp = cast(ubyte[]) resampbytes;
short[] normaldata = cast(short[]) normbytes;

 //Process the data ...
}

However, when I attempt to compile this I get the following errors:

Error: cannot cast from ByChunk to ubyte[]
Error: cannot cast from ByChunk to short[]

Oddly, the example for ByChunk in the documentation seems to do exactly what I 
think I
am trying here, but apparently I am missing something.

Any help would be appreciated (also if there is a better way of doing what I am 
trying
to do any pointers on that would be appreciated too!)

Regards,

Craig



File.byChunk is designed to be used with a foreach loop. You could try:

ubyte[] resamp = new ubyte[resamp_buffer_size];
short[] normaldata = new short[normal_buffer_size];
for(int i = 0; i < num_blocks; i++)
{
resampFile.rawRead(resamp);
normalFile.rawRead(normaldata);

//Process the data ...
}
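
For completeness, a sketch of how byChunk is meant to be used, with foreach driving the reads (the file name here is hypothetical):

```d
import std.stdio;

void main()
{
    auto f = File("input.bin");  // hypothetical input file

    // byChunk yields successive ubyte[] slices of an internal buffer;
    // each slice is only valid until the next iteration.
    foreach (ubyte[] chunk; f.byChunk(4096))
    {
        // process chunk here ...
    }
}
```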

--
Aku MoD.


Re: How do I read data with ByChunk?

2011-03-18 Thread Craig Dillabaugh
== Quote from Zirneklis (a...@dingspam.cc)'s article
 On 18/03/2011 14:35, Craig Dillabaugh wrote:
  Hi,
  I have two binary files containing image data, and I want to go
through them pixel by
  pixel and read the image data into arrays and compare the pixel
values (images have the
  exact same dimensions but one is unsigned 1 byte per pixel, and
the other signed 2 bytes
  per pixel).
 
  I am trying to read the data in using the following loop:
 
  for(int i = 0; i < num_blocks; i++) {
  auto resampbytes = resampFile.byChunk( resamp_buffer_size );
  auto normbytes = normalFile.byChunk( normal_buffer_size );
  ubyte[] resamp = cast(ubyte[]) resampbytes;
  short[] normaldata = cast(short[]) normbytes;
 
   //Process the data ...
  }
 
  However, when I attempt to compile this I get the following
errors:
 
  Error: cannot cast from ByChunk to ubyte[]
  Error: cannot cast from ByChunk to short[]
 
  Oddly, the example for ByChunk in the documentation seems to do
exactly what I think I
  am trying here, but apparently I am missing something.
 
  Any help would be appreciated (also if there is a better way of
doing what I am trying
  to do any pointers on that would be appreciated too!)
 
  Regards,
 
  Craig
 
 File.byChunk is designed to be used with a foreach loop, you could
try:
 ubyte[] resamp = new ubyte[resamp_buffer_size];
 short[] normaldata = new short[normal_buffer_size];
 for(int i = 0; i < num_blocks; i++)
 {
  resampFile.rawRead(resamp);
  normalFile.rawRead(normaldata);
  //Process the data ...
 }

Thanks, that did the trick!
Craig


Re: Ranges

2011-03-18 Thread Jonathan M Davis
On Friday, March 18, 2011 03:32:35 spir wrote:
 On 03/18/2011 10:29 AM, Peter Alexander wrote:
  On 13/03/11 12:05 AM, Jonathan M Davis wrote:
  So, when you're using a range of char[] or wchar[], you're really using
  a range of dchar. These ranges are bi-directional. They can't be
  sliced, and they can't be indexed (since doing so would likely be
  invalid). This generally works very well. It's exactly what you want in
  most cases. The problem is that that means that the range that you're
  iterating over is effectively of a different type than
  the original char[] or wchar[].
  
  This has to be the worst language design decision /ever/.
  
  You can't just mess around with fundamental principles like the first
  element in an array of T has type T for the sake of a minor
  convenience. How are we supposed to do generic programming if common
  sense reasoning about types doesn't hold?
  
  This is just std::vector<bool> from C++ all over again. Can we not learn
  from mistakes of the past?
 
 I partially agree, but. Compare with a simple ascii text: you could iterate
 over it chars (=codes=bytes), words, lines... Or according to specific
 schemes for your app (eg reverse order, every number in it, every word at
 start of line...). A piece of is not only a stream of codes.
 
 The problem is there is no good decision, in the case of char[] or wchar[].
 We should have to choose a kind of natural sense of what it means to
 iterate over a text, but there no such thing. What does it *mean*? What is
 the natural unit of a text?
 Bytes or words are code units which mean nothing. Code units (- dchars)
 are not guaranteed to mean anything neither (as shown by past discussion:
 a code unit may be the base 'a', the following one be the composite '^',
 both in â). Code unit do not represent characters in the common sense.
 So, it is very clear that implicitely iterating over dchars is a wrong
 choice. But what else? I would rather get rid of wchar and dchar and deal
 with plain stream of bytes supposed to represent utf8. Until we get a good
 solution to operate at the level of human characters.

Iterating over dchars works in _most_ cases. Iterating over chars only works for 
pure ASCII. The additional overhead for dealing with graphemes instead of code 
points is almost certainly prohibitive, it _usually_ isn't necessary, and we 
don't have an actual grapheme solution yet. So, treating char[] and wchar[] as 
if their elements were valid on their own is _not_ going to work. Treating them 
along with dchar[] as ranges of dchar _mostly_ works. We definitely should have 
a way to handle them as ranges of graphemes for those who need to, but the code 
point vs grapheme issue is nowhere near as critical as the code unit vs code 
point issue.

I don't really want to get into the whole unicode discussion again. It has been 
discussed quite a bit on the D list already. There is no perfect solution. The 
current solution _mostly_ works, and, for the most part IMHO, is the correct 
solution. We _do_ need a full-on grapheme handling solution, but a lot of stuff 
doesn't need that, and the overhead of dealing with it would be prohibitive. 
The main problem with using code points rather than graphemes is the lack of 
normalization, and a _lot_ of string code can get by just fine without that.

So, we have a really good 90% solution and we still need a 100% solution, but 
using the 100% all of the time would almost certainly not be acceptable due to 
performance issues, and doing stuff by code unit instead of code point would be 
_really_ bad. So, what we have is good and will likely stay as is. We just need 
a proper grapheme solution for those who need it.

- Jonathan M Davis


P.S. Unicode is just plain ugly :(


In-source way to call any C Library

2011-03-18 Thread Adrian Iliescu
Is there a way to call a C function without having to screw around with the
linker on the command line?  In C#, for example, this is all you have to do:

[DllImport( @"..\Debug\CLibTest.dll" )] //location
internal static extern int MyTestResult(); //name of function

void CSUsingCLib()
{
int result = MyTestResult(); //use it
}


Re: In-source way to call any C Library

2011-03-18 Thread Jacob Carlborg

On 2011-03-18 19:54, Adrian Iliescu wrote:

Is there a way to call a C function without having to screw around with the
linker on the command line?  In C#, for example, this is all you have to do:

 [DllImport( @"..\Debug\CLibTest.dll" )] //location
 internal static extern int MyTestResult(); //name of function

 void CSUsingCLib()
 {
 int result = MyTestResult(); //use it
 }


With pragma(lib, "lib"); you can link to a library: 
http://www.digitalmars.com/d/2.0/pragma.html


And then using extern(C) as usual to declare the function. You can also 
use dlopen and friends on Posix and whatever the equivalent is for 
Windows.
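
Put together, a minimal sketch of what this describes (the library and function names are hypothetical, mirroring the earlier C# example):

```d
// Ask the compiler to embed a linker directive for the library
// (support for this pragma is implementation-defined):
pragma(lib, "CLibTest");

// Declare the C function with C linkage:
extern(C) int MyTestResult();

void main()
{
    int result = MyTestResult();  // calls into the C library
}
```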


--
/Jacob Carlborg


Re: In-source way to call any C Library

2011-03-18 Thread Andrej Mitrovic
For runtime linking with DLLs, you're looking for LoadLibrary,
GetProcAddress and friends. They're in core.sys.windows.windows.

The static constructor is useful if you want to have C functions in
module scope. Personally, I wrap C libraries in classes and hide all
the loading details there.

However the following is what I think you're after:
module testDllLoad;

import std.stdio;
import std.string;
import std.path : join, curdir;
import core.sys.windows.windows;
import std.exception;

extern(C) int function() MyTestResult;

static this()
{
string dllFileName = join(r"..\Debug\", "CLibTest.dll");
HMODULE dllModule;
enforce(dllModule = LoadLibraryA(toStringz(dllFileName)));
enforce(MyTestResult = cast(typeof(MyTestResult)) GetProcAddress(dllModule, "_MyTestResult"));
}

void CSUsingCLib()
{
int result = MyTestResult();  // use it
writeln(result);
}

void main()
{
CSUsingCLib();
}

I've tested this with a C DLL which exports the function
_MyTestResult. The DLL was located in the previous directory, under
Debug, just like your example. It worked fine.

Btw, in case you don't know, it's very important to specify the calling
convention, and the /correct/ calling convention, for a function. For a
C DLL this is always extern(C).


Re: In-source way to call any C Library

2011-03-18 Thread Jesse Phillips
Adrian Iliescu Wrote:

 Is there a way to call a C function without having to screw around with the
 linker on the command line?  In C#, for example, this is all you have to do:
 
 [DllImport( @"..\Debug\CLibTest.dll" )] //location
 internal static extern int MyTestResult(); //name of function
 
 void CSUsingCLib()
 {
 int result = MyTestResult(); //use it
 }

You may find the Library referenced here useful:
 http://stackoverflow.com/questions/3818229/loading-plugins-dlls-on-the-fly


Re: Ranges

2011-03-18 Thread Peter Alexander

On 18/03/11 5:53 PM, Jonathan M Davis wrote:

On Friday, March 18, 2011 03:32:35 spir wrote:

On 03/18/2011 10:29 AM, Peter Alexander wrote:

On 13/03/11 12:05 AM, Jonathan M Davis wrote:

So, when you're using a range of char[] or wchar[], you're really using
a range of dchar. These ranges are bi-directional. They can't be
sliced, and they can't be indexed (since doing so would likely be
invalid). This generally works very well. It's exactly what you want in
most cases. The problem is that that means that the range that you're
iterating over is effectively of a different type than
the original char[] or wchar[].


This has to be the worst language design decision /ever/.

You can't just mess around with fundamental principles like the first
element in an array of T has type T for the sake of a minor
convenience. How are we supposed to do generic programming if common
sense reasoning about types doesn't hold?

This is just std::vector<bool> from C++ all over again. Can we not learn
from mistakes of the past?


I partially agree, but. Compare with a simple ascii text: you could iterate
over it chars (=codes=bytes), words, lines... Or according to specific
schemes for your app (eg reverse order, every number in it, every word at
start of line...). A piece of is not only a stream of codes.

The problem is there is no good decision, in the case of char[] or wchar[].
We should have to choose a kind of natural sense of what it means to
iterate over a text, but there no such thing. What does it *mean*? What is
the natural unit of a text?
Bytes or words are code units which mean nothing. Code units (-  dchars)
are not guaranteed to mean anything neither (as shown by past discussion:
a code unit may be the base 'a', the following one be the composite '^',
both in â). Code unit do not represent characters in the common sense.
So, it is very clear that implicitely iterating over dchars is a wrong
choice. But what else? I would rather get rid of wchar and dchar and deal
with plain stream of bytes supposed to represent utf8. Until we get a good
solution to operate at the level of human characters.


Iterating over dchars works in _most_ cases. Iterating over chars only works for
pure ASCII. The additional overhead for dealing with graphemes instead of code
points is almost certainly prohibitive, it _usually_ isn't necessary, and we
don't have an actual grapheme solution yet. So, treating char[] and wchar[] as
if their elements were valid on their own is _not_ going to work. Treating them
along with dchar[] as ranges of dchar _mostly_ works. We definitely should have a
way to handle them as ranges of graphemes for those who need to, but the code
point vs grapheme issue is nowhere near as critical as the code unit vs code
point issue.

I don't really want to get into the whole unicode discussion again. It has been
discussed quite a bit on the D list already. There is no perfect solution. The
current solution _mostly_ works, and, for the most part IMHO, is the correct
solution. We _do_ need a full-on grapheme handling solution, but a lot of stuff
doesn't need that and the overhead for dealing with it would be prohibitive. The
main problem with using code points rather than graphemes is the lack of
normalization, and a _lot_ of string code can get by just fine without that.

So, we have a really good 90% solution and we still need a 100% solution, but
using the 100% all of the time would almost certainly not be acceptable due to
performance issues, and doing stuff by code unit instead of code point would be
_really_ bad. So, what we have is good and will likely stay as is. We just need
a proper grapheme solution for those who need it.

- Jonathan M Davis


P.S. Unicode is just plain ugly :(


I must be missing something, because the solution seems obvious to me:

char[], wchar[], and dchar[] should be simple arrays like int[] with no 
unicode semantics.


string, wstring, and dstring should not be aliases to arrays, but 
instead should be separate types that behave the way char[], wchar[], 
and dchar[] do currently.


Is there any problem with this approach?
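
For what it's worth, a rough sketch of what such a separate string type might look like: a struct wrapping immutable(char)[] that exposes only a forward range of decoded code points. All names here are hypothetical; this is an illustration, not a concrete proposal from the thread.

```d
import std.utf : decode;

struct String
{
    private immutable(char)[] data;  // raw UTF-8 code units

    @property bool empty() const { return data.length == 0; }

    @property dchar front() const
    {
        size_t i = 0;
        return decode(data, i);  // decode the next code point
    }

    void popFront()
    {
        size_t i = 0;
        decode(data, i);         // advance i past one code point
        data = data[i .. $];
    }
}

void main()
{
    auto s = String("héllo");
    size_t n;
    for (; !s.empty; s.popFront())
        ++n;
    assert(n == 5);  // five code points, though "héllo" is six UTF-8 bytes
}
```

Meanwhile char[] itself would stay a plain array of char, like int[] is of int.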


Re: Ranges

2011-03-18 Thread Jonathan M Davis
On Friday, March 18, 2011 14:08:48 Peter Alexander wrote:
 On 18/03/11 5:53 PM, Jonathan M Davis wrote:
  On Friday, March 18, 2011 03:32:35 spir wrote:
  On 03/18/2011 10:29 AM, Peter Alexander wrote:
  On 13/03/11 12:05 AM, Jonathan M Davis wrote:
  So, when you're using a range of char[] or wchar[], you're really
  using a range of dchar. These ranges are bi-directional. They can't
  be sliced, and they can't be indexed (since doing so would likely be
  invalid). This generally works very well. It's exactly what you want
  in most cases. The problem is that that means that the range that
  you're iterating over is effectively of a different type than
  the original char[] or wchar[].
  
  This has to be the worst language design decision /ever/.
  
  You can't just mess around with fundamental principles like the first
  element in an array of T has type T for the sake of a minor
  convenience. How are we supposed to do generic programming if common
  sense reasoning about types doesn't hold?
  
  This is just std::vector<bool> from C++ all over again. Can we not
  learn from mistakes of the past?
  
  I partially agree, but. Compare with a simple ascii text: you could
  iterate over it chars (=codes=bytes), words, lines... Or according to
  specific schemes for your app (eg reverse order, every number in it,
  every word at start of line...). A piece of is not only a stream of
  codes.
  
  The problem is there is no good decision, in the case of char[] or
  wchar[]. We should have to choose a kind of natural sense of what it
  means to iterate over a text, but there no such thing. What does it
  *mean*? What is the natural unit of a text?
  Bytes or words are code units which mean nothing. Code units (- 
  dchars) are not guaranteed to mean anything neither (as shown by past
  discussion: a code unit may be the base 'a', the following one be the
  composite '^', both in â). Code unit do not represent characters in
  the common sense. So, it is very clear that implicitely iterating over
  dchars is a wrong choice. But what else? I would rather get rid of
  wchar and dchar and deal with plain stream of bytes supposed to
  represent utf8. Until we get a good solution to operate at the level of
  human characters.
  
  Iterating over dchars works in _most_ cases. Iterating over chars only
  works for pure ASCII. The additional overhead for dealing with graphemes
  instead of code points is almost certainly prohibitive, it _usually_
  isn't necessary, and we don't have an actual grapheme solution yet. So,
  treating char[] and wchar[] as if their elements were valid on their own
  is _not_ going to work. Treating them along with dchar[] as ranges of
  dchar _mostly_ works. We definitely should have a way to handle them as
  ranges of graphemes for those who need to, but the code point vs
  grapheme issue is nowhere near as critical as the code unit vs code
  point issue.
  
  I don't really want to get into the whole unicode discussion again. It
  has been discussed quite a bit on the D list already. There is no
  perfect solution. The current solution _mostly_ works, and, for the most
  part IMHO, is the correct solution. We _do_ need a full-on grapheme
  handling solution, but a lot of stuff doesn't need that and the overhead
  for dealing with it would be prohibitive. The main problem with using
  code points rather than graphemes is the lack of normalization, and a
  _lot_ of string code can get by just fine without that.
  
  So, we have a really good 90% solution and we still need a 100% solution,
  but using the 100% all of the time would almost certainly not be
  acceptable due to performance issues, and doing stuff by code unit
  instead of code point would be _really_ bad. So, what we have is good
  and will likely stay as is. We just need a proper grapheme solution for
  those who need it.
  
  - Jonathan M Davis
  
  
  P.S. Unicode is just plain ugly :(
 
 I must be missing something, because the solution seems obvious to me:
 
 char[], wchar[], and dchar[] should be simple arrays like int[] with no
 unicode semantics.
 
 string, wstring, and dstring should not be aliases to arrays, but
 instead should be separate types that behave the way char[], wchar[],
 and dchar[] do currently.
 
 Is there any problem with this approach?

There has been a fair bit of debate about it in the past. No one has been able 
to come up with an alternate solution which is generally considered better than 
what we have.

char is defined to be a UTF-8 code unit. wchar is defined to be a UTF-16 code 
unit. dchar is defined to be a UTF-32 code unit (which is also guaranteed to be 
a code point). So, manipulating char[] and wchar[] as arrays of characters 
doesn't generally make any sense. They _aren't_ characters. They're code units. 
Having a range of char or wchar generally makes no sense.

When you don't care about the contents of a string, treating it as an array is 
very useful. When you _do_ care, you 

Re: Unicode - Windows 1252

2011-03-18 Thread Stewart Gordon

On 16/03/2011 22:17, Tom wrote:

I have D2 code that writes some stuff to the screen (usually running in a cmd.exe
pseudo-console). When I print Spanish characters they show up wrong (gibberish
symbols and such, which corresponds to CP-1252 encoding).

Is there a way to convert all output streams to CP-1252 without having to wrap the
writeln function (and replace all its calls)?


My utility library has a console I/O module that converts to/from the console codepage 
under Windows:

http://pr.stewartsplace.org.uk/d/sutil/
See if it's useful to you.  I'm not sure whether it works under D2, but it's probably 
quite straightforward to tweak it so that it does.
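
Another approach (a sketch of a common Win32 technique, not something from Stewart's library): instead of converting your output, switch the console itself to the UTF-8 codepage, assuming the console font can display the characters:

```d
import std.stdio;

version (Windows)
    // Win32 API from kernel32; declared manually here in case the
    // druntime bindings in use don't expose it.
    extern (Windows) int SetConsoleOutputCP(uint codePageID);

void main()
{
    version (Windows)
        SetConsoleOutputCP(65001);  // 65001 = CP_UTF8

    writeln("año");  // D strings are UTF-8, so this now displays correctly
}
```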


Stewart.


Building DSFML2? (64-bit Linux)

2011-03-18 Thread Sean Eskapp
I've been trying for weeks to build the D bindings of SFML2, but with little
success. The main issue is that I get a myriad of linker errors (documented at
http://www.sfml-dev.org/forum/viewtopic.php?p=28345#28345), but I can't figure
out what linking options would solve them.

Can anybody shed some light on this?


Re: Building DSFML2? (64-bit Linux)

2011-03-18 Thread Jonathan M Davis
On Friday, March 18, 2011 17:56:44 Sean Eskapp wrote:
 I've been trying for weeks to build the D bindings of SFML2, but with
 little success. The main issue is that I get a myriad of linker errors
 (documented at http://www.sfml-dev.org/forum/viewtopic.php?p=28345#28345),
 but I can't figure out what linking options would solve them.
 
 Can anybody shed some light on this?

Just glancing at it, it looks like you might be missing pthreads, though that 
would be pretty weird. You don't normally need to specify -lpthread. But those 
symbols sure look like they're likely pthread-related.

- Jonathan M Davis


Re: Building DSFML2? (64-bit Linux) (New info)

2011-03-18 Thread Sean Eskapp
== Quote from Jonathan M Davis (jmdavisp...@gmx.com)'s article
 On Friday, March 18, 2011 17:56:44 Sean Eskapp wrote:
  I've been trying for weeks to build the D bindings of SFML2, but
with
  little success. The main issue is that I get a myriad of linker
errors
  (documented at http://www.sfml-dev.org/forum/viewtopic.php?
p=28345#28345),
  but I can't figure out what linking options would solve them.
 
  Can anybody shed some light on this?
 Just glancing at it, it looks like you might be missing pthreads,
though that
 would be pretty weird. You don't normally need to specify -
lpthread. But those
 symbols sure look like they're likely pthread-related.
 - Jonathan M Davis

I've tried -lpthread and -lm, and neither seemed to help. Is it
possible there are platform issues, since D (to my knowledge) is
32-bit, and I'm on 64-bit?


Re: Building DSFML2? (64-bit Linux) (New info)

2011-03-18 Thread Jonathan M Davis
On Friday, March 18, 2011 18:58:49 Sean Eskapp wrote:
 == Quote from Jonathan M Davis (jmdavisp...@gmx.com)'s article
 
  On Friday, March 18, 2011 17:56:44 Sean Eskapp wrote:
   I've been trying for weeks to build the D bindings of SFML2, but
 
 with
 
   little success. The main issue is that I get a myriad of linker
 
 errors
 
   (documented at http://www.sfml-dev.org/forum/viewtopic.php?
 
 p=28345#28345),
 
   but I can't figure out what linking options would solve them.
   
   Can anybody shed some light on this?
  
  Just glancing at it, it looks like you might be missing pthreads,
 
 though that
 
  would be pretty weird. You don't normally need to specify -
 
 lpthread. But those
 
  symbols sure look like they're likely pthread-related.
  - Jonathan M Davis
 
 I've tried -lpthread and -lm, and neither seemed to help. Is it
 possible there are platform issues, since D (to my knowledge) is 32-
 bit, and I'm 64-bit?

Well, as of dmd 2.052, if you pass -m64 to dmd, it'll compile in 64-bit on 
Linux, but if you don't pass it -m64 (or if you explicitly pass it -m32), it 
will compile in 32-bit. And if it's compiling in 32-bit, then you need the 
32-bit versions of whatever libraries you're using. pthread is one of them. 
So, if you don't have a 32-bit version of pthread installed, then that would 
explain it.

- Jonathan M Davis


Re: GDC with D2?

2011-03-18 Thread Jonathan M Davis
On Friday, March 18, 2011 19:00:40 Sean Eskapp wrote:
 Does GDC support D2?

Yes. It's also fairly up-to-date now too, I believe (though it is still a bit 
behind dmd as I understand it - at least as far as Phobos goes). I don't use 
anything other than dmd though, so I'm not sure of gdc's exact state. There 
_is_ a fairly up-to-date D2 version though.

- Jonathan M Davis


Re: GDC with D2?

2011-03-18 Thread Sean Eskapp
== Quote from Jonathan M Davis (jmdavisp...@gmx.com)'s article
 On Friday, March 18, 2011 19:00:40 Sean Eskapp wrote:
  Does GDC support D2?
 Yes. It's also fairly up-to-date now too, I believe (though it is
still a bit
 behind dmd as I understand it - at least as far as Phobos goes). I
don't use
 anything other than dmd though, so I'm not sure of gdc's exact
state. There _is_
 a fairly up-to-date D2 version though.
 - Jonathan M Davis

Great, thanks!


Re: Building DSFML2? (64-bit Linux)

2011-03-18 Thread Sean Eskapp
== Quote from Jonathan M Davis (jmdavisp...@gmx.com)'s article
 On Friday, March 18, 2011 18:58:49 Sean Eskapp wrote:
  == Quote from Jonathan M Davis (jmdavisp...@gmx.com)'s article
 
   On Friday, March 18, 2011 17:56:44 Sean Eskapp wrote:
I've been trying for weeks to build the D bindings of SFML2,
but
 
  with
 
little success. The main issue is that I get a myriad of
linker
 
  errors
 
(documented at http://www.sfml-dev.org/forum/viewtopic.php?
 
  p=28345#28345),
 
but I can't figure out what linking options would solve
them.
   
Can anybody shed some light on this?
  
   Just glancing at it, it looks like you might be missing
pthreads,
 
  though that
 
   would be pretty weird. You don't normally need to specify -
 
  lpthread. But those
 
   symbols sure look like they're likely pthread-related.
   - Jonathan M Davis
 
  I've tried -lpthread and -lm, and neither seemed to help. Is it
  possible there are platform issues, since D (to my knowledge) is
32-
  bit, and I'm 64-bit?
 Well, as of dmd 2.052, if you pass -m64 to dmd, it'll compile in
64-bit on
 Linux, but if you don't pass it -m64 (or if you explicitly pass it
-m32), it
 will compile in 32-bit. And if it's compiling in 32-bit, then you
need the 32-
 bit versions of whatever libraries that you're using. pthread is
one of them.
 So, if you don't have a 32-bit version of pthread installed, then
that would
 explain it.
 - Jonathan M Davis

Perfect, thanks!


DMD2 - compiling and linking in separate steps (64-bit)

2011-03-18 Thread Sean Eskapp
I'm trying to use DMD through an IDE, but I'm getting stumped trying to
create 64-bit executables under Linux. I can get everything compiled fine,
using the -m64 compiler flag, but I can't get it to link. Here's the error
list:

/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/libphobos2.a when searching for -lphobos2
/usr/bin/ld: skipping incompatible /usr/lib/../lib/libphobos2.a when searching for -lphobos2
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../libphobos2.a when searching for -lphobos2
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/libm.so when searching for -lm
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/libm.a when searching for -lm
/usr/bin/ld: skipping incompatible /usr/lib/../lib/libm.so when searching for -lm
/usr/bin/ld: skipping incompatible /usr/lib/../lib/libm.a when searching for -lm
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../libm.so when searching for -lm
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../libm.a when searching for -lm
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/libpthread.so when searching for -lpthread
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/libpthread.a when searching for -lpthread
/usr/bin/ld: skipping incompatible /usr/lib/../lib/libpthread.so when searching for -lpthread
/usr/bin/ld: skipping incompatible /usr/lib/../lib/libpthread.a when searching for -lpthread
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../libpthread.so when searching for -lpthread
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../libpthread.a when searching for -lpthread
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/libgcc.a when searching for -lgcc
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/libgcc.a when searching for -lgcc
/usr/bin/ld: cannot find -lgcc
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/librt.so when searching for -lrt
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/librt.a when searching for -lrt
/usr/bin/ld: skipping incompatible /usr/lib/../lib/librt.so when searching for -lrt
/usr/bin/ld: skipping incompatible /usr/lib/../lib/librt.a when searching for -lrt
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../librt.so when searching for -lrt
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../librt.a when searching for -lrt
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/libc.so when searching for -lc
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/libc.a when searching for -lc
/usr/bin/ld: skipping incompatible /usr/lib/../lib/libc.so when searching for -lc
/usr/bin/ld: skipping incompatible /usr/lib/../lib/libc.a when searching for -lc
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../libc.so when searching for -lc
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../libc.a when searching for -lc
collect2: ld returned 1 exit status

How do I use ld to link 64-bit D executables?


Re: DMD2 - compiling and linking in separate steps (64-bit)

2011-03-18 Thread Jonathan M Davis
On Friday 18 March 2011 20:49:58 Sean Eskapp wrote:
 incompatible /usr/lib/../lib/librt.so when searching for -lrt
 /usr/bin/ld: skipping incompatible /usr/lib/../lib/librt.a when searching for -lrt
 /usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../librt.so when searching for -lrt
 /usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../librt.a when searching for -lrt
 /usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/libc.so

Look at dmd.conf. It includes several flags which are supposed to be passed
to the linker - either that, or you can use dmd to link rather than using
gcc on its own. Personally, I wouldn't bother compiling and linking as
separate steps, but if you do, you need to make sure that you either use
the flags in dmd.conf or you link with dmd rather than gcc. For the most
part, there's no reason to link with gcc, even if you want to link
separately.

- Jonathan M Davis
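[Editor's sketch of the advice above: compile and link in separate steps,
but let dmd drive the link so it applies its own dmd.conf flags itself.
The module names main.d/util.d and the output name are hypothetical:]

```shell
# Step 1: compile only (-c), producing 64-bit object files.
dmd -c -m64 main.d util.d

# Step 2: link with dmd rather than invoking gcc/ld directly.
# dmd reads dmd.conf, adds the matching libphobos2 and the -L flags
# the link needs, and forwards -m64 to gcc.
dmd -m64 main.o util.o -ofapp
```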