Re: Timer

2016-06-21 Thread Joerg Joergonson via Digitalmars-d-learn

On Wednesday, 22 June 2016 at 01:31:17 UTC, Adam D. Ruppe wrote:
On Tuesday, 21 June 2016 at 23:15:58 UTC, Joerg Joergonson 
wrote:

Does D have a timer?


You could make one with threads or timeouts, or event loop and 
GUI libraries have one.


Like simpledisplay.d has a Timer class 
http://dpldocs.info/experimental-docs/arsd.simpledisplay.Timer.html and i'm sure the others do too.


I guess I'll just use that then ;)

Thanks again.



Timer

2016-06-21 Thread Joerg Joergonson via Digitalmars-d-learn
Does D have a timer? I've tried some user code and it doesn't 
work. I need to be able to have a delegate called periodically. 
(fiber or thread, doesn't matter)




https://github.com/Dav1dde/BraLa/blob/master/brala/utils/thread.d




module lib.mTimer;

private
{
    import std.traits : ParameterTypeTuple, isCallable;
    import std.string : format;
    import core.time : Duration;
    import core.sync.mutex : Mutex;
    import core.sync.condition : Condition;

    import std.stdio : stderr;
}

public import core.thread;


private enum CATCH_DELEGATE = `
    delegate void() {
        try {
            fun();
        } catch(Throwable t) {
            // writeln, not writefln: the string is already formatted,
            // and a stray % in the thread name would trip writefln
            stderr.writeln("--- Exception in Thread: \"%s\" ---".format(this.name));
            stderr.writeln(t.toString());
            stderr.writeln("--- End Exception in Thread \"%s\" ---".format(this.name));
            throw t;
        }
    }
`;

class VerboseThread : Thread
{
    this(void function() fun, size_t sz = 0)
    {
        super(mixin(CATCH_DELEGATE), sz);
    }

    this(void delegate() fun, size_t sz = 0)
    {
        super(mixin(CATCH_DELEGATE), sz);
    }
}

class TTimer(T...) : VerboseThread
{
    static assert(T.length <= 1);
    static if(T.length == 1)
    {
        static assert(isCallable!(T[0]));
        alias ParameterTypeTuple!(T[0]) Args;
    } else
    {
        alias T Args;
    }

    public Duration interval;
    protected Args args;
    protected void delegate(Args) func;

    protected Event finished;
    @property bool is_finished() { return finished.is_set; }

    this(Duration interval, void delegate(Args) func, Args args)
    {
        super(&run); // VerboseThread has no default constructor

        finished = new Event();

        this.interval = interval;
        this.func = func;

        static if(Args.length) {
            this.args = args;
        }
    }

    final void cancel()
    {
        finished.set();
    }

    protected void run()
    {
        finished.wait(interval);

        if(!finished.is_set)
        {
            func(args);
        }

        finished.set();
    }
}




class Event
{
    protected Mutex mutex;
    protected Condition cond;

    protected bool flag;
    @property bool is_set() { return flag; }

    this()
    {
        mutex = new Mutex();
        cond = new Condition(mutex);

        flag = false;
    }

    void set()
    {
        mutex.lock();
        scope(exit) mutex.unlock();

        flag = true;
        cond.notifyAll();
    }

    void clear()
    {
        mutex.lock();
        scope(exit) mutex.unlock();

        flag = false;
    }

    bool wait(T...)(T timeout) if(T.length == 0 || (T.length == 1 && is(T[0] : Duration)))
    {
        mutex.lock();
        scope(exit) mutex.unlock();

        bool notified = flag;
        if(!notified)
        {
            static if(T.length == 0)
            {
                cond.wait();
                notified = true;
            } else
            {
                notified = cond.wait(timeout);
            }
        }
        return notified;
    }
}
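Separately from the quoted module, the same idea can be sketched self-contained with just core.thread and core.sync. This is a hedged illustration (PeriodicTimer is an invented name, not BraLa's API): a repeating timer that calls a delegate each interval until cancelled.

```d
import core.thread;
import core.time : Duration, dur;
import core.sync.mutex : Mutex;
import core.sync.condition : Condition;

// A minimal cancellable repeating timer: calls dg once per interval
// until cancel() is called.
class PeriodicTimer : Thread
{
    private Duration interval;
    private void delegate() dg;
    private Mutex mtx;
    private Condition cond;
    private bool cancelled;

    this(Duration interval, void delegate() dg)
    {
        super(&run);
        this.interval = interval;
        this.dg = dg;
        mtx = new Mutex();
        cond = new Condition(mtx);
    }

    void cancel()
    {
        synchronized (mtx)
        {
            cancelled = true;
            cond.notifyAll();
        }
    }

    private void run()
    {
        while (true)
        {
            synchronized (mtx)
            {
                if (!cancelled)
                    cond.wait(interval); // wakes early if cancel() notifies
                if (cancelled)
                    return;
            }
            dg();
        }
    }
}

void main()
{
    import core.atomic : atomicOp, atomicLoad;

    shared int ticks;
    auto t = new PeriodicTimer(dur!"msecs"(10),
                               () { atomicOp!"+="(ticks, 1); });
    t.start();
    Thread.sleep(dur!"msecs"(100));
    t.cancel();
    t.join();
    assert(atomicLoad(ticks) > 0); // fired at least once
}
```

Because cancel() notifies the condition variable, a sleeping timer shuts down promptly instead of waiting out the rest of its interval.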


TickDuration deprecated, yet StopWatch not?

2016-06-21 Thread Joerg Joergonson via Digitalmars-d-learn
StopWatch depends on TickDuration, and TickDuration is deprecated, 
yet StopWatch isn't and hasn't been converted to MonoTime... 
makes sense?
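For what it's worth, measuring elapsed time doesn't need StopWatch at all; here is a minimal MonoTime-based sketch (MonoStopWatch is an illustrative name, not a Phobos type):

```d
import core.time : MonoTime, Duration;

// Minimal MonoTime-based stopwatch, avoiding the deprecated
// TickDuration entirely.
struct MonoStopWatch
{
    private MonoTime started;
    private Duration accumulated;
    private bool running;

    void start()
    {
        started = MonoTime.currTime;
        running = true;
    }

    void stop()
    {
        accumulated += MonoTime.currTime - started;
        running = false;
    }

    // Elapsed time so far, whether running or stopped.
    Duration peek() const
    {
        return running ? accumulated + (MonoTime.currTime - started)
                       : accumulated;
    }
}
```

MonoTime.currTime is monotonic, so this is immune to wall-clock adjustments, which is the whole point of moving off TickDuration.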





Re: D casting broke?

2016-06-21 Thread Joerg Joergonson via Digitalmars-d-learn

On Tuesday, 21 June 2016 at 02:39:25 UTC, H. S. Teoh wrote:
On Tue, Jun 21, 2016 at 02:09:50AM +, Joerg Joergonson via 
Digitalmars-d-learn wrote: [...]
Lets suppose A -> B means B is derived from A. That is, any 
object of B can be cast to A because the memory layout of A is 
contained in B and any object of B can be accessed as if it 
were an A.


Correct.


Template parameters also can have this property since they are 
types.

[...]

The template *parameters* can also have this property. But the 
*template* itself may not.  Contrived example:


class A { int x; }
class B : A { int y; }

struct Templ(C) if (is(C == class)) {
    int[C.sizeof] data;
}

Templ!A a;
Templ!B b;

a = b; // should this be allowed?

It should be clear that allowing this would cause problems, 
because in spite of the relationship between A and B, and hence 
the relationship between the template arguments of a and b, the 
same relationship does not hold between Templ!A and Templ!B 
(note that .data is an array of ints, not ubytes, and may not 
contain data in any layout that corresponds to the relationship 
between A and B).


Another contrived example:

class A { int x; }
class B : A { int y; }

struct Templ(C) if (is(C == class)) {
    static if (C.sizeof > 4) {
        string x;
    } else {
        float y;
    }
}

Allowing implicit casting from Templ!B to Templ!A would not 
make sense, because even though the respective template 
arguments have an inheritance relationship, the Templ 
instantiation made from these classes has a completely 
unrelated and incompatible implementation.




Well, I never mentioned any implicit casting. Obviously explicit 
casting wouldn't make sense either. It is a good example, as it 
shows that Templ!A is completely different from Templ!B and no 
conversion is ever possible even if A and B are related.


But I still think these are different examples.

You are talking about A!a vs A!b while I am talking about A!a vs 
B!b. I believe, but could be mistaken, that there is a subtle 
difference. I know it seems like B!b can be reduced to A!b, and 
the type system allows this... but if it never happens then all 
these cases explaining the problem of going from A!b to A!a are 
moot.


Now granted, these are contrived examples, and in real-life we 
may not have any real application that requires such strange 
code. However, the language itself allows such constructs, and 
therefore the compiler cannot blindly assume any relationship 
between Templ!A and Templ!B even though there is a clear 
relationship between A and B themselves.


I agree. I am not asking for blind assumptions. When I inform the 
compiler I want to cast to an object that I know should 
succeed(it was designed to do so) I don't expect a null. (As has 
been mentioned, I can do this using the void* trick, so there is 
a way)


What should be done if we wish to allow converting Templ!B to 
Templ!A, though?  One way (though this still does not allow 
implicit casting) is to use opCast:


struct Templ(C) if (is(C == class)) {
    ... // implementation here

    auto opCast(D)()
        if (is(C : D)) // D must be a base class of C
    {
        ...
        // do something here to make the conversion
        // valid. Maybe something as simple as:
        return cast(Templ!D) this;

        // (provided that there are no memory layout
        // problems in Templ's implementation, of
        // course).
    }
}

Implementing this using opCast actually gives us an even more 
powerful tool: provided it is actually possible to convert 
between potentially incompatible binary layouts of Templ!A and 
Templ!B, the opCast method can be written in such a way as to 
construct Templ!A from Templ!B in a consistent way, e.g., by 
treating B as a subclass of A and calling the ctor of Templ!A, 
or, in the case of my contrived examples, do something 
non-trivial with the .data member so that the returned Templ!A 
makes sense for whatever purpose it's designed for.  It allows 
the implementor of the template to specify exactly how to 
convert between the two types when the compiler can't possibly 
divine this on its own.




While this is nice, the problem was how to convert. Even in 
opCast I would get a null, and I wouldn't want to reconstruct A!a 
from B!b because that would essentially entail duplication. Of 
course, now I know I can just cast to void* and back and 
essentially bypass the compiler's type-system check.
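The void* round-trip mentioned above can be demonstrated on the thread's minimal class setup. This is a sketch (class names are illustrative); the reinterpreting cast bypasses the runtime class check and is only sound when the two instantiations happen to share a layout:

```d
class a { int x; }
class b : a { }

class A(T : a) { T item; }

void main()
{
    auto objB = new A!b;
    objB.item = new b;

    // A normal class cast checks the actual inheritance chain,
    // and A!b does not derive from A!a, so this yields null:
    assert(cast(A!a) objB is null);

    // The void* round-trip reinterprets the reference unchecked:
    auto p = cast(A!a) cast(void*) objB;
    assert(p !is null);
    assert(p.item.x == 0); // the field is reachable through the base view
}
```

Here both instantiations hold a single class reference, so the layouts line up; add a static if that changes the fields and the same trick corrupts memory silently.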


This is kind of bringing a nuclear warhead to an anthill, 
though.  In my own code where I have a template wrapping around 
types that need to convert to a common base type, 

Re: D casting broke?

2016-06-20 Thread Joerg Joergonson via Digitalmars-d-learn
On Monday, 20 June 2016 at 23:35:28 UTC, Steven Schveighoffer 
wrote:

On 6/19/16 5:19 PM, Joerg Joergonson wrote:

On Sunday, 19 June 2016 at 20:21:35 UTC, ag0aep6g wrote:

On 06/19/2016 09:59 PM, Joerg Joergonson wrote:
This should be completely valid since B!T' obviously derives 
from A!T

directly


ok


and we see that T' derives from b which derives from a
directly.


ok


So B!b is entirely derived from A!a


No. B!b is derived from A!b, not from A!a. `b` being derived 
from `a`

does not make A!b derived from A!a.


why not? This doesn't seem logical!


Because:

class A(T : a)
{
  static if(is(T == a))
 int oops;
  ...
}

Now A!b and A!a have different layouts. They cannot be related, 
even if the template arguments are related. I could introduce 
another virtual function inside the static if, same result -- 
vtable is messed up.


In general, an instantiation of a template aggregate (class or 
struct) is not castable implicitly to another instantiation of 
the same aggregate unless explicitly declared.
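Steve's layout point can be checked directly; here is a small self-contained sketch using the thread's class names, where the static assert compares the two instance sizes:

```d
class a { }
class b : a { }

class A(T : a)
{
    static if (is(T == a))
        int oops; // present only in A!a
    int common;
}

void main()
{
    // The two instantiations have different field layouts, so
    // neither can possibly serve as a base class of the other.
    static assert(__traits(classInstanceSize, A!a)
               != __traits(classInstanceSize, A!b));
}
```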


And note that D does not allow multiple inheritance. I don't 
think you can solve this problem in D.


-Steve


Yes, but all you guys are doing is leaving out what I'm actually 
doing and creating a different problem that may not have the same 
issues.


I have this:

(Button!ButtonItem) : (Slider!SliderItem)


In my code/design that is different than

Button!ButtonItem : Button!SliderItem : Slider!SliderItem

The middle case, the case you guys are reducing to, never occurs. 
I never have that problem because I never *mix* a Button with a 
SliderItem. It makes no sense in my design. Hence I don't have to 
worry about that case, but that is the case you guys keep 
bringing up.


A SliderItem adds info to a ButtonItem that makes it "slider 
like": in my case, a slide amount. This field is useless in a 
Button; it is only used by the Slider class (in fact, only by 
SliderItem).


I realize that if one did a cast of a Button!SliderItem down to a 
Button!ButtonItem, things can become problematic. This doesn't 
occur in my design.


I see it more like


[Button!ButtonItem]
   |
   v
[Slider!SliderItem]


rather than


Button!ButtonItem
   |
   v
Button!SliderItem
   |
   v
Slider!SliderItem


or

Button!ButtonItem
   |
   v
Slider!ButtonItem
   |
   v
Slider!SliderItem


The first has the problem casting Button!SliderItem to 
Button!ButtonItem. (Same as List!mystring to List!string)


The second has the problem Slider!SliderItem to Slider!ButtonItem 
(It's the same problem in both)



There seem to be three relationships going on, and D handles only one.

Let != mean "not related" (not derived from).
For a and b:

a != b. D's assumption, never safe
a -> b works some of the time depending on usage
a = b works all the time

But it is more complex with A!a and B!b

A != B and a != b. never safe to cast in any combination
A != B and a = b. never safe to cast
A != B and a -> b. never safe to cast
A -> B and a != b. never safe to cast
A = B and a != b. never safe to cast
A -> B and a -> b. Sometimes safe to cast
A -> B and a = b. Sometimes safe to cast
A = B and a = b. Always safe to cast

Things get "safer" to cast as the relationships between the types 
becomes more "exact".  D always assumes worst case for the 
template parameters.


Some designs, though, work in the A -> B and a -> b 'region' with 
the fact that A!b never occurs, which, as has been shown 
throughout this thread, is problematic (but it is really just an 
extension of the first case, because both are derivations from A, 
and it really adds nothing to the complexity).



Re: arsd png bug

2016-06-20 Thread Joerg Joergonson via Digitalmars-d-learn

On Tuesday, 21 June 2016 at 00:31:51 UTC, Adam D. Ruppe wrote:

On Monday, 20 June 2016 at 21:39:45 UTC, Joerg Joergonson wrote:

adding
if (i >= previousLine.length) break;

prevents some crashes and seems to work.


So previousLine should be either the right length or null, so I 
put in one test.


Can you try it on your test image?

BTW I do a few unnecessary duplications in here too. I think. 
But there's surely some potential for more memory/performance 
improvements here.


I'll update, but I can't do any tests since it's random. Seems to 
be something different with the png encoding. They are 
auto-generated and I've already overwritten the ones that created 
the bug. (So if it's fixed it will just never occur again; if 
not, it will happen sometime in the future and I'll let you know.)


Every time I've checked, it's been previousLine being null, and 
simply putting in that check fixed it, so it is probably just 
some strange edge case.






Re: D casting broke?

2016-06-20 Thread Joerg Joergonson via Digitalmars-d-learn

On Monday, 20 June 2016 at 23:10:14 UTC, ag0aep6g wrote:

On 06/20/2016 11:33 PM, Joerg Joergonson wrote:

On Monday, 20 June 2016 at 10:38:12 UTC, ag0aep6g wrote:

[...]
Is your position that Button!SliderItem should derive/inherit 
from Button!ButtonItem, enabling the cast, or do you suppose the 
cast should succeed because the fields are compatible?

I.e., should this work?

class A {int x;}
class B {int x;}
A a;
B b = cast(B) a;



No, not at all. First, A and B are not related, so casting 
makes no sense unless there is a conversion (opCast) or 
whatever, but that is done by the user.

This is exactly opposite of what I am talking about.


Ok, so you suppose there should be a inheritance implied when 
there's an inheritance relation between template arguments. 
I.e., A!a should be a superclass of A!b when a is a superclass 
of b.



Well, now I don't think this is possible in all circumstances, 
but I really fail to see how it is any different from any normal 
cast (the examples given use arrays and store junk in them). If 
one is consistent, then I think it is valid and works... but it 
might require the type system to be too restrictive to be of much 
use.


But yes, the last line was what I have been stating. I don't 
think treating A!a and A!b as completely different when a is 
related to b is right.


Lets suppose A -> B means B is derived from A. That is, any 
object of B can be cast to A because the memory layout of A is 
contained in B and any object of B can be accessed as if it were 
an A.


Template parameters also can have this property since they are 
types.


Hence

We have two scenarios:

class A(T);
class a;
class b;
A!a -> A!b   // false, because, while both sides contain an A, 
there is only partial overlap (a partial relationship covering 
everything that is identical in both types... that is, all the 
stuff that doesn't depend on a and b).



AND

class A(T);
class a;
class b : a;

A!a -> A!b  // ? This seems like an obvious logical consequence 
of inheritance and the ->.


This is the way I am thinking about it.

in A!b, everything that depends on b also depends on a because b 
is basically `more than` a.


So, if we cast A!b down to A!a, and IF A!b never uses the "extra" 
part of b that makes it different than a, then the cast should 
pass.


That is, if all b's in A!b could be cast to a and the code work, 
then A!b should be cast-able to A!a.


Obviously, if A!b uses b objects in a way that can't be treated 
like a's, then casting will break if we use a's (but only if we 
use a's where b's are expected).


The problem is more complex than just the one-size-fits-all rule 
which the D type system uses.


A simple example is basically my problem:

class A(T);
class a { stuff using a...}
class b : a { }

In this case, down casting works because b doesn't do anything 
different than a. Effectively, b is exactly an a, so there is no 
possible way to have any problems.


So, cast(A!a)A!b should pass.  Surely the compiler can be smart 
enough to figure this out? It's as if b is just an alias for a.


This whole Button/Slider/ButtonItem/SliderItem/etc setup may be 
too complex for me.


This is what I understand you have right now, basically:

class ButtonItem {}
class SliderItem : ButtonItem {}
class Widget {}
class Button(T : ButtonItem) : Widget { T[] items; }
class Slider(T : SliderItem) : Button!T {}

And I guess the point of having Button templated is so that 
Slider gets a `SliderItem[] items`, which is more restricted 
and nicer to use than a `ButtonItem[] items` would be.


Yes. And it makes sense to do that, right? Because, while we 
could use a ButtonItem for Sliders, we would expect, since a 
ButtonItem goes with Buttons, that SliderItems should go with 
Sliders?


E.g., our SliderItems might need to have somewhat different 
behavior than ButtonItems... the thing that makes them "go with 
the slider".


Sure, we can include that info in slider but we shouldn't have to.

For example, in my code I have a Moved value for SliderItems. 
This tells you how far they have moved (been "slid"). 
ButtonItems shouldn't have this, because they can't move. The 
Slider type doesn't really care about how much they have been 
slid, but it fits nicely in SliderItem.


When I downcast to cast(Button!ButtonItem)Slider, I never use 
anything from Slider/SliderItem because I'm dealing with the 
Button!ButtonItem portion of Slider(the things that makes it a 
Button).


Of course, I could, in that part of the code, end up adding a 
ButtonItem to Slider, and that wouldn't be logical. That just 
never happens in my code because of the design. Items are 
statically added to the classes they are part of, and items are 
never added or removed... Since it's a GUI there is no need for 
dynamic creation (at least in my apps). Even if I want to do 
some dynamic creation, it's not a huge deal, because I'll just 
use a Factory to create the objects properly.





Maybe un-templatizing Button and Slider 

arsd png bug

2016-06-20 Thread Joerg Joergonson via Digitalmars-d-learn

1810:
case 3:
    auto arr = data.dup;
    foreach(i; 0 .. arr.length) {
        auto prev = i < bpp ? 0 : arr[i - bpp];
        if (i >= previousLine.length) break;
        arr[i] += cast(ubyte)
            /*std.math.floor*/( cast(int) (prev + previousLine[i]) / 2);
    }


adding
if (i >= previousLine.length) break;

prevents some crashes and seems to work.








Re: Unmanaged drop in replacemet for [] and length -= 1

2016-06-20 Thread Joerg Joergonson via Digitalmars-d-learn
On Monday, 20 June 2016 at 16:27:29 UTC, Steven Schveighoffer 
wrote:

On 6/18/16 5:55 PM, Joerg Joergonson wrote:
I wanted to switch to std.container.Array but it doesn't seem 
to mimic
[] for some odd ball reason. I threw this class together and 
it seems to

work.

The only problem is that I can't do

carray.length -= 1;

I can't override `-=` because that is on the class. can I 
override it
for length somehow or do I have to create a length wrapper 
class that

has it overridden in it? Or is there a way to do it in cArray?


length wrapper *struct*:

struct AdjustableLength
{
    cArray t;
    auto get() { return t.data.length; }
    void opOpAssign(string s: "+", Addend)(Addend x)
    {
        //... your code here that does += using t
        t.data.length = get() + x;
    }
    alias get this;
}

@property auto length()
{
    return AdjustableLength(this);
}

D does not have any direct support for modification of 
properties. It has been talked about, but has never been 
implemented.


-Steve


Thanks, this is what I was thinking I'd have to do. I'm probably 
going to manually manage the array items. I just need simple 
append and remove, but this will still help with the drop-in 
replacement (for my cases).
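A compilable variant of Steve's wrapper idea, handling -= as in the original question. cArray here is a minimal stand-in container, not the poster's actual class:

```d
// A container whose length property supports += and -= through a
// wrapper struct, since D has no direct property op-assign support.
class cArray(T)
{
    T[] data;

    static struct AdjustableLength
    {
        cArray arr;
        size_t get() { return arr.data.length; }
        alias get this; // plain reads behave like a size_t

        // Supports carray.length += n and carray.length -= n.
        void opOpAssign(string op)(size_t x)
            if (op == "+" || op == "-")
        {
            mixin("arr.data.length = get() " ~ op ~ " x;");
        }
    }

    @property AdjustableLength length() { return AdjustableLength(this); }
}

void main()
{
    auto a = new cArray!int;
    a.data = [1, 2, 3];
    a.length -= 1;          // goes through opOpAssign!"-"
    assert(a.length == 2);  // reads go through alias get this
}
```

The wrapper works because `a.length -= 1` is rewritten to call opOpAssign on the returned struct, which still holds a reference to the class and so can resize the real array.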










Re: D casting broke?

2016-06-20 Thread Joerg Joergonson via Digitalmars-d-learn

On Monday, 20 June 2016 at 10:38:12 UTC, ag0aep6g wrote:

On 06/20/2016 01:40 AM, Joerg Joergonson wrote:

public class Button(T : ButtonItem) : Widget { ... }
public class ButtonItem : Item
{
    void Do() { auto parent = (cast(Button!ButtonItem)this.Parent); }

    ...
}

All this works great! As long as Do is not being called from a 
derived class

public class Slider(T : SliderItem) : Button!T { }
public class SliderItem : ButtonItem { }


The last two classes are truly empty. Now, when I use a Slider 
object, things go to shit because the cast is invalid. 
this.Parent is of type Slider!SliderItem.


It's the same setup as with the A and B things, right?

Parent is a Widget that holds a Slider!SliderItem. That's fine 
because Slider!SliderItem is derived from Button!SliderItem 
which is derived from Widget.


But Button!SliderItem does not derive from Button!ButtonItem. 
They both derive from Widget. So the cast fails.


But you think it should succeed, of course.

Is your position that Button!SliderItem should derive/inherit 
from Button!ButtonItem, enabling the cast, or do you suppose 
the cast should succeed because the fields are compatible?


I.e., should this work?

class A {int x;}
class B {int x;}
A a;
B b = cast(B) a;



No, not at all. First, A and B are not related, so casting makes 
no sense unless there is a conversion (opCast) or whatever, but 
that is done by the user.


This is exactly opposite of what I am talking about.

SliderItem only sets the array type. So in Slider, I end up with 
a SliderItem[] type; then in ButtonItem's Do (which gets called 
since SliderItem doesn't override), it tries to cast that down 
to a ButtonItem. It should work. There is no reason it shouldn't 
logically.

There is no up casting.


Some terminology clarification: Casting from SliderItem to 
ButtonItem is upcasting. The other direction would be 
downcasting. Upcasting a single object is trivial and can be 
done implicitly. Downcasting must be done explicitly and may 
yield null.


You say that you cast from SliderItem to ButtonItem. But that's 
not what's done in your snippet above. You try to cast from 
Button!SliderItem to Button!ButtonItem. Completely different 
operation.


Ok, I might have used terminology backwards.

The problem is more complex than maybe I demonstrated or anyone 
has mentioned. Yes, there might be an issue with 
downcasting/contravariance and all that. I think those problems, 
though, are general issues.


The real issue is that Slider!SliderItem doesn't override a 
method that is called when a Slider!SliderItem object is used. 
The method, in Button!ButtonItem casts a Widget to 
Button!ButtonItem just fine because inside Button!ButtonItem, the 
Widget is of type Button!ButtonItem.


When we are inside a Slider!SliderItem, though, the same code is 
executed with the same cast (using Button!ButtonItem) and this 
fails, because if it succeeded we could potentially store 
ButtonItems as SliderItems (being a "downcast", or similar to the 
example you gave).


This is the code that has the problem.

It is used inside ButtonItem

auto parent = (cast(cButton!cButtonItem)this.Parent);

and not overridden in SliderItem, but still executed in there at 
some point.


this.Parent is a Slider!SliderItem and I need the cast to work so 
I can access the Item array.


But in Slider the array is of type SliderItem, not ButtonItem as 
I initially thought, because I particularized it.


Hence there is a "hidden" downcast going on. Now, in my case, it 
doesn't matter, because I never store items in the wrong type. 
The code is automatically generated and creates the correct type 
for the correct storage class. I realize now, though, that it 
can go wrong: if I just appended a ButtonItem to the array in 
ButtonItem, then when SliderItem is called, the "non-overridden" 
method will store a ButtonItem in the SliderItem array.



So, this isn't really a problem with casting so much as with the 
complexity of the inheritance. By doing it the way I did, to try 
to keep the types and parameters synced, and because they 
inherit from each other, there can be problems.


To get what I want, which is probably impossible, I'd need the 
cast to automatically cast to the correct type depending on 
where it is being executed:


auto parent = (cast(typeof(parent)!this)this.Parent);

Which, of course, is impossible to do at compile time.

I only need parent to check if it's items exist in the array

if (parent.HoveredItems.canFind(this))


That is all it is used for, so there is no problem with it; but 
if I don't cast, I obviously can't access the HoveredItems... 
and then the cast breaks for derived classes and parent is null.


To make it work I'd have to add, say, something like 
containsHovered to Widget. Then I wouldn't need the cast, but 
this doesn't make a lot of sense, since Widget doesn't contain an 
array of HoveredItems.


Alternatively I could add an interface to 
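The interface idea the message breaks off on might look like this. This is a hedged sketch with names adapted from the thread (the real code surely differs): a non-templated interface exposes only what Do() needs, so the cast no longer has to name a particular instantiation.

```d
import std.algorithm.searching : canFind;

class Widget { Widget Parent; }
class Item : Widget { }

// Non-templated interface exposing only what Do() needs.
interface HasHoveredItems
{
    bool containsHovered(Item i);
}

class ButtonItem : Item
{
    void Do()
    {
        // No instantiation is named in the cast, so this works for
        // Button!ButtonItem and Slider!SliderItem parents alike.
        if (auto parent = cast(HasHoveredItems) this.Parent)
        {
            if (parent.containsHovered(this)) { /* hovered */ }
        }
    }
}

class Button(T : ButtonItem) : Widget, HasHoveredItems
{
    T[] HoveredItems;
    bool containsHovered(Item i) { return HoveredItems.canFind(i); }
}

class SliderItem : ButtonItem { }
class Slider(T : SliderItem) : Button!T { }

void main()
{
    auto s = new Slider!SliderItem;
    auto it = new SliderItem;
    it.Parent = s;
    s.HoveredItems ~= it;
    it.Do(); // the interface cast succeeds where Button!ButtonItem failed
}
```

Every Button instantiation implements the interface, so the runtime cast succeeds regardless of the template argument, sidestepping the whole A!a-versus-A!b issue.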

Re: D casting broke?

2016-06-19 Thread Joerg Joergonson via Digitalmars-d-learn

On Sunday, 19 June 2016 at 23:00:03 UTC, ag0aep6g wrote:

On 06/19/2016 11:19 PM, Joerg Joergonson wrote:

On Sunday, 19 June 2016 at 20:21:35 UTC, ag0aep6g wrote:

[...]
No. B!b is derived from A!b, not from A!a. `b` being derived 
from `a`

does not make A!b derived from A!a.


why not? This doesn't seem logical!


Template parameters simply don't work like that. A template can 
result in completely unrelated types based on template 
parameters.


For example:

template C(T)
{
static if (is(T == A)) class C {}
else static if(is(T == B)) alias C = int;
else struct C {int x;}
}

As you see there can't be any inheritance relation between the 
different instantiations of the C template here. Having such a 
relation for different instantiation that result in classes 
would be a weird special case.


There's probably a really simple and obvious reason why that 
special case would be a bad idea in itself, but I'm not able to 
point it out. Maybe make a thread in the General group; I think 
the language people tend to focus their attention there.


Criticism and improvement proposals are also better directed to 
the General group.



Here is the full inheritance tree:

X
├─x
│ └─a
│   └─b
├─A!a
└─A!b
  └─B!b



But b is derived from a.


Yeah, that's right there in the middle.


Your tree completely ignores the inheritance under A.


Clearly, there's B!b under A!b. That's it. Nothing exists below 
B!b. B!a doesn't exist either. A!a is on a different branch. I 
don't think I've missed anything.


[...]
Just because D doesn't understand this logical consistency 
between
inheritance doesn't mean D is right. (Hence, why D's type 
system is broke)



In fact, the tree should look like this:


X
├─x
│ └─a
│   └─b
└─A!x
   │  \
   └─A!a
      │  \
      └─A!b
         │  \
         └─B!b


I'm having trouble reading this. A!x isn't valid, as the 
constraint on A says `T : a`, but x doesn't satisfy that.




No, my point was that a is derived from x, and b from a; hence 
we have a derivation chain x -> a -> b. So, similarly, 
A!x -> A!a -> A!b.


I also don't understand what the backslashes mean. They just 
repeat the other lines, don't they? Or do they connect x, a, 
and b? That's already expressed in the upper section.


Yes, they connect them. Yes, exactly. But this time they connect 
in terms of A. The compiler doesn't seem to use the fact that 
x -> a -> b to infer anything about A!x -> A!a -> A!b, and it 
should.




As for A!b being below A!a, I can only repeat that this 
inheritance is not implied. You would have to spell it out 
explicitly for the compiler to pick it up.




Maybe so. But that is kinda my point.


Basically you are treating A!a and A!b as if a and b have no
relationship. BUT THEY DO!


Well, to the compiler they don't.


Yes, exactly.


Basically I am doing a cast(A!a)this because all I care about is 
this in terms of A!a. Whether it's a B!b or B!a or A!b is 
immaterial to me, since casting to A!a gets me what I need. 
(It's no different than if I were doing simpler inheritance.) D 
doesn't understand this, and there is no simple fix that anyone 
has presented.


The casting is the only problem, and there is no reason it 
should fail, because the object I am casting can be cast to its 
base class.


If we assume that I'm wrong or D can't do this because of a bug 
or shortsightedness... the issue remains on how to make it work.


public class Button(T : ButtonItem) : Widget { ... }
public class ButtonItem : Item
{
    void Do() { auto parent = (cast(Button!ButtonItem)this.Parent); }
    ...
}

All this works great! As long as Do is not being called from a 
derived class


public class Slider(T : SliderItem) : Button!T { }
public class SliderItem : ButtonItem { }


The last two classes are truly empty. Now, when I use a Slider 
object, things go to shit because the cast is invalid. 
this.Parent is of type Slider!SliderItem.


SliderItem only sets the array type. So in Slider, I end up with 
a SliderItem[] type; then in ButtonItem's Do (which gets called 
since SliderItem doesn't override), it tries to cast that down 
to a ButtonItem. It should work. There is no reason it shouldn't 
logically. There is no up casting.


If I duplicate Do() and put it in SliderItem and change the cast 
to use Slider/SliderItem, it works. The cast is the only problem. 
Not the objects themselves.


If this wasn't a parameterized class, everything would work. If 
I made Slider use ButtonItems, everything would work. It's only 
because I derived a new type SliderItem from ButtonItem that it 
breaks, and only at the cast.



I'm almost 100% sure this should work and haven't seen anyone 
actually show why it would not (the examples given are simply 
wrong and do not understand the problem... or there is more 
going on than anyone has said).


Again, do we not expect derived types to be able to be down cast? 
Just because they are parameterized on other types doesn't change 
this fact? It just makes it more 

Re: D casting broke?

2016-06-19 Thread Joerg Joergonson via Digitalmars-d-learn

On Sunday, 19 June 2016 at 20:21:35 UTC, ag0aep6g wrote:

On 06/19/2016 09:59 PM, Joerg Joergonson wrote:
This should be completely valid since B!T' obviously derives 
from A!T

directly


ok


and we see that T' derives from b which derives from a
directly.


ok


So B!b is entirely derived from A!a


No. B!b is derived from A!b, not from A!a. `b` being derived 
from `a` does not make A!b derived from A!a.


why not? This doesn't seem logical!


Here is the full inheritance tree:

X
├─x
│ └─a
│   └─b
├─A!a
└─A!b
  └─B!b



But b is derived from a. Your tree completely ignores the inheritance under A.


X
├─x
│ └─a
│   └─b
├─A!a
│  \
└─A!b
  └─B!b

Just because D doesn't understand this logical consistency 
between inheritance doesn't mean D is right. (Hence, why D's type 
system is broke)



In fact, the tree should look like this:


X
├─x
│ └─a
│   └─b
└─A!x
   │  \
   └─A!a
      │  \
      └─A!b
         │  \
         └─B!b


Basically you are treating A!a and A!b as if a and b have no 
relationship. BUT THEY DO! If you don't take that into account, 
then you're wrong.


Simply stating how D behaves is not proof of why it is right or 
wrong.


This is very easy to see, check my other post using a Widget 
Example and you will see that it is a logical extension.


D doesn't check parameter inheritance relationships properly. A!b 
is a derivation of A!a.


import std.stdio;

class a { }
class b : a { }

class A(T : a)
{
    T x;
}

void main(string[] argv)
{
    auto _A = new A!a();
    auto _C = new A!b();

    auto p = cast(A!a)_C;
}

p is null. My example with B is irrelevant. The issue is with the 
parameter.


As you can see, D thinks that A!b and A!a are completely 
unrelated... as do you and arsd.


Do you seriously think this is the case? That

class b : a { }

and

class b { }

effectively mean the same with regards to A?

The whole problem comes about at this line:

auto p = cast(A!a)_C;

We are trying to cast the `T x` in _C, which is effectively 
`b x`, to `a x`.


Is that not possible to do? We do it all the time, right?

That is my point. D doesn't see that it can do this but it can, 
if not, prove me wrong.




Re: D casting broke?

2016-06-19 Thread Joerg Joergonson via Digitalmars-d-learn

On Sunday, 19 June 2016 at 20:18:14 UTC, Adam D. Ruppe wrote:

On Sunday, 19 June 2016 at 19:59:28 UTC, Joerg Joergonson wrote:
This should be completely valid since B!T' obviously derives 
from A!T directly, and we see that T' derives from b, which 
derives from a directly. So B!b is entirely derived from 
A!a and hence the cast should be successful


I don't see how you think that. Here's the parent chain:

B!b -> A!b -> X -> x -> Object

There's no A!a in there, the cast is failing correctly.

Just because `b` is a child of `a` doesn't mean that `A!b` is 
the same as `A!a`. Consider an array:


MyClass[] arr;
arr ~= new MyClass(); // ok cool

Object[] obj_arr = arr; // won't work! because...
obj_arr[0] = new RandomClass(); // this would compile...

// but obj_arr and arr reference the same data, so now:
// arr[0] is typed MyClass... but is actually RandomClass! It'd 
// crash horribly.




Array is just one example of where converting A!b to A!a is 
problematic. The same principle can apply anywhere, so it won't 
implicitly cast them.
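The soundness hole the quoted array example describes can be shown directly with the classes from this thread. This is a sketch; the `onlyB` method is invented here purely for illustration:

```d
import std.stdio;

class a { }
class b : a { void onlyB() { } }   // onlyB: invented for illustration

class A(T) { T x; }

void main()
{
    auto ab = new A!b;
    ab.x = new b();

    // If D let A!b convert to A!a, this would become legal:
    //     A!a aa = ab;
    //     aa.x = new a();   // writes a plain `a` through the A!a view
    //     ab.x.onlyB();     // but ab.x is no longer a `b` at all
    // That is the same soundness hole as the array example, so the
    // two instantiations stay unrelated and the dynamic cast fails:
    assert(cast(A!a) ab is null);
    writeln("cast(A!a) on an A!b yields null");
}
```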




I'm not saying they are the same! They don't have to be the same. 
That is the whole point of inheritance and casting. A!b is 
derived from A!a if b is derived from a, is it not? If not, then 
I am wrong, if so then D casting has a bug.








The obvious question: Is there a simple way around this?


What are you actually trying to do?


Do you really want to know? It's very simple and logical and 
might blow your mind and show you it's more complex than the 
example you have


I have a widget class

class Widget { Widget Parent; }

I have a button item class

class ButtonItem : Widget;

I have a button class

class Button : Widget { ButtonItem[] items; }

Make sense so far? Very logical and all that?

NOW, suppose I want to create a derived type from button? Say, a 
slider that effectively is a button that can move around:


class Slider : Button { }

So far so good, right?

WRONG! Slider shouldn't contain button items but slider items! 
How to get around this?



class SliderItem : ButtonItem; (since sliders are buttons slider 
items should be button items, right?)


So, to make this work we have to parameterize Button.

class Button(T : ButtonItem) : Widget { T[] items; }


So far so good!

and

class SliderItem : ButtonItem;

Very logical, Spock would be proud!

Now

class Slider(T : SliderItem) : Button!T;


Very logical still, right? Because T is of type SliderItem which 
is of type ButtonItem and therefore Button!SliderItem is of type 
Button!ButtonItem.


Everything works, right? Of course, I have a working example!

Slider!T is a type of Button!T, Slider!SliderItem is a type of 
Button!ButtonItem. Surely items in Button can hold SliderItems? 
(since they are derived from ButtonItems and ButtonItems work)
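Put together as one compilable unit, the hierarchy described above looks like this (a sketch using the names from the post; the commented lines show where D disagrees with the argument):

```d
class Widget { Widget Parent; }
class ButtonItem : Widget { }
class SliderItem : ButtonItem { }

class Button(T : ButtonItem) : Widget { T[] items; }
class Slider(T : SliderItem) : Button!T { }

void main()
{
    auto s = new Slider!SliderItem;

    // Fine: Button!SliderItem is Slider!SliderItem's direct base class.
    Button!SliderItem bs = s;
    assert(bs is s);

    // Button!ButtonItem bb = s;            // does not compile, and...
    // auto p = cast(Button!ButtonItem) s;  // ...this cast yields null:
    // D treats the two instantiations as distinct, unrelated types.
}
```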


Ok, everything works!

Now what?

Well, in ButtonItem, I have to get the parent items to do some 
work. i.e.,


Work(Parent.items);

But this can't work because Parent is a Widget, so we must cast 
to a Button.


Work((cast(Button)Parent).items);

But this doesn't work because Button is parameterized. so

Work((cast(Button!T)Parent).items);

But this doesn't work because there is no T in ButtonItem, which 
is where we are, so let's cast to a ButtonItem.


Work((cast(Button!ButtonItem)Parent).items);

This works!! At least as long as we are in ButtonItems!

When our parent is a Slider then the cast fails and everything 
goes to shit.


I have to duplicate the code AND only change the cast to 
cast(Slider!SliderItem)Parent and then everything works.
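Not from the thread, but one conventional way out of this duplication is to let every Button expose its items through a non-templated interface, so ButtonItem never needs to name the concrete Button!T type at all. `IButton` and `baseItems` are invented names for this sketch:

```d
import std.stdio;

class Widget { Widget Parent; }

// Non-templated view onto any Button!T's items.
interface IButton { ButtonItem[] baseItems(); }

class ButtonItem : Widget
{
    void Do()
    {
        // Works whether Parent is a Button!T or a Slider!T:
        if (auto btn = cast(IButton) Parent)
            writeln(btn.baseItems().length, " sibling items");
    }
}

class SliderItem : ButtonItem { }

class Button(T : ButtonItem) : Widget, IButton
{
    T[] items;

    ButtonItem[] baseItems()
    {
        ButtonItem[] r;
        foreach (it; items) r ~= it;  // every T is a ButtonItem
        return r;
    }
}

class Slider(T : SliderItem) : Button!T { }

void main()
{
    auto s = new Slider!SliderItem;
    s.items ~= new SliderItem();
    s.items[0].Parent = s;
    s.items[0].Do();   // prints "1 sibling items"
}
```

The cast in `Do` is now to the interface rather than to a particular instantiation, so the same code serves Button and Slider without duplication.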



But, you might think that Slider!SliderItem is somehow not 
derived from Button!ButtonItem but it is, it was created to be 
that way by god himself.



Widget -> Button -> Slider
Widget -> ButtonItem -> SliderItem


First, for one, everything is a Widget, let's get that clear.

Second, Slider!SliderItem is just a wrapper around 
Button!ButtonItem. This allows us to add additional slider-based 
code to a button to make it act like a slider (which is more than 
a button, but still a button).



This is just a 2D case of the 1D inheritance Slider is a Button. 
Just because we add a parameterization to it DOESN'T NECESSARILY 
change that. If the parameter also has an inheritance 
relationship then we have a fully valid inheritance relationship.



e.g., Slider!Pocahontas has only a partial inheritance to 
Button!ButtonItem because Pocahontas is not in any way derived 
from ButtonItem. But if Pocahontas is fully derived from 
ButtonItem then the partial inheritance is full inheritance.


Do you understand that?

Else, if you were correct, something like Slider!Widget and 
Button!Widget would never be relatable. Yet it's obvious that it 
is trivially relatable because Widget = Widget. In my case the 
only difference is SliderItem derives from ButtonItem.


We can always cast to a super class. ALWAYS! And Button!ButtonItem 
is a super class of Slider!SliderItem.


In fact, if we had some way to 

D casting broke?

2016-06-19 Thread Joerg Joergonson via Digitalmars-d-learn

import std.stdio;

class X { X Parent; }

class x : X { }

class a : x
{
void Do()
{
        auto p = cast(A!a)(this.Parent);  // works as long as we are in A

assert(p !is null);
}
}


class A(T : a) : X
{
X Parent = new X();
T _y = new T();
}

class b : a { }
class B(T : b) : A!T { }

void main(string[] argv)
{
auto _A = new A!a();
auto _B = new B!b();
_A.Parent = _A;
_A._y.Parent = _A;
    _B.Parent = _B; // works if _A; since _B is of type _A it should still work

_B._y.Parent = _B; // ...

_A._y.Do();
_B._y.Do();

}


This should be completely valid since B!T' obviously derives from 
A!T directly and we see that T' derives from b which derives from 
a directly. So B!b is entirely derived from A!a and hence the 
cast should be successful


So, my code crashes because when I do the cast in the base class 
A!T, and it is used in the derived class (where the cast should be 
valid), a null pointer is created which is then used in the base 
class. (Basically, B!T doesn't have to have any code in it; just 
create the object, and the code in A!T will crash if such a cast 
exists.)



The obvious question: Is there a simple way around this? I'd 
ask how long it would take to fix, but that might be months/years. 
I can override in b and duplicate code, but why? That makes life 
more difficult than having things work as they should (having to 
maintain twice as much code is not a solution).











Re: ARSD PNG memory usage

2016-06-19 Thread Joerg Joergonson via Digitalmars-d-learn
Also, for some reason one image has a weird horizontal line at 
the bottom of the image that is not part of the original. This is 
as if the height were 1 pixel too much and it's reading "junk". I 
have basically a few duplicate images that were generated from 
the same base image. None of the others have this problem.


If I reduce the image dimensions it doesn't have this problem. My 
guess is that there is probably a bug with a > vs >= or 
something. When the image dimensions are "just right" an extra 
line is added that may be non-zero.


The image dimensions are 124x123.

This is all speculation but it seems like it is a png.d or 
opengltexture issue. I cannot see this added line in any image 
editor I've tried (PS, IrfanView), and changing the dimensions of 
the image fixes it.


Since it's a hard one to debug without test case I will work on 
it... Hoping you have some possible points of attack though.









Re: ARSD PNG memory usage

2016-06-19 Thread Joerg Joergonson via Digitalmars-d-learn

On Saturday, 18 June 2016 at 02:17:01 UTC, Adam D. Ruppe wrote:

I have an auto generator for pngs and 99% of the time it works, 
but every once in a while I get an error when loading the png's. 
Usually re-running the generator "fixes the problem" so it might 
be on my end. Regardless of where the problem stems, it would be 
nice to have more info why instead of a range violation. 
previousLine is null in the break.


All the png's generated are loadable by external app like 
ifranview, so they are not completely corrupt but possibly could 
have some thing that is screwing png.d up.



The code where the error happens is:

case 3:
    auto arr = data.dup;
    foreach(i; 0 .. arr.length) {
        auto prev = i < bpp ? 0 : arr[i - bpp];
        arr[i] += cast(ubyte)
            /*std.math.floor*/( cast(int) (prev + previousLine[i]) / 2);
    }

Range violation at png.d(1815)

Any ideas?
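Not a confirmed fix, but since `previousLine` is null at the break, a defensive sketch of the same filter step (assuming `data`, `bpp`, and `previousLine` mean what they do in png.d) would treat a missing previous scanline as all zeroes, which is what the PNG specification prescribes for the first row:

```d
case 3: // PNG "average" filter
    auto arr = data.dup;
    foreach(i; 0 .. arr.length) {
        auto prev = i < bpp ? 0 : arr[i - bpp];
        // First scanline (or a short buffer): per the PNG spec the
        // pixel above counts as 0, so don't index a null/short line.
        int up = (previousLine is null || i >= previousLine.length)
            ? 0 : previousLine[i];
        arr[i] += cast(ubyte)((cast(int) prev + up) / 2);
    }
```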



Re: Unmanaged drop in replacemet for [] and length -= 1

2016-06-19 Thread Joerg Joergonson via Digitalmars-d-learn

On Sunday, 19 June 2016 at 10:10:54 UTC, Jonathan M Davis wrote:
On Saturday, June 18, 2016 21:55:31 Joerg Joergonson via 
Digitalmars-d-learn wrote:
I wanted to switch to std.container.Array but it doesn't seem 
to mimic [] for some odd ball reason.


D's dynamic arrays are really quite weird in that they're sort 
of containers and sort of not. So, pretty much nothing is ever 
going to act quite like a dynamic array. But when dynamic 
arrays are used as ranges, their semantics definitely are not 
that of containers. Having a container which is treated as a 
range is just begging for trouble, and I would strongly advise 
against attempting it. When it comes to containers, ranges are 
intended to be a view into a container, just like an iterator 
is intended to be a pointer into a container. Neither ranges 
nor iterators are intended to _be_ containers. Treating a 
container as a range is going to get you weird behavior like 
foreach removing every element from the container.


- Jonathan M Davis


Thanks for your 2c... but I think I can handle it. I'm a big boy 
and I wear big boy pants and I'm not afraid of a few little 
scrapes.



If foreach removes all/any of the elements of a container then 
something is broken.





Re: Unmanaged drop in replacemet for [] and length -= 1

2016-06-18 Thread Joerg Joergonson via Digitalmars-d-learn

Also, how to handle foreach(i, x; w) (use index + value)?





Unmanaged drop in replacemet for [] and length -= 1

2016-06-18 Thread Joerg Joergonson via Digitalmars-d-learn
I wanted to switch to std.container.Array but it doesn't seem to 
mimic [] for some odd ball reason. I threw this class together 
and it seems to work.


The only problem is that I can't do

carray.length -= 1;

I can't override `-=` because that is on the class. Can I 
override it for length somehow, or do I have to create a length 
wrapper class that has it overridden in it? Or is there a way to 
do it in cArray?


Basically I want to support code that does something like

auto x = [];
x.length -= 1;

and not have to rewrite that to x.length = x.length - 1;
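One way to make `carray.length -= 1` itself compile is to have the `length` getter return a small proxy struct that overloads `opOpAssign` and reads as a `size_t` via `alias this`. This is a sketch, not std.container API; `LengthProxy` is an invented name:

```d
import std.container.array : Array;

struct LengthProxy(T)
{
    Array!T* data;

    // carr.length -= n  lowers to  carr.length().opOpAssign!"-"(n)
    void opOpAssign(string op : "-")(size_t n)
    {
        foreach (_; 0 .. n)
            data.removeBack();
    }

    size_t get() const { return data.length; }
    alias get this;  // lets the proxy compare/convert as a size_t
}

class cArray(T)
{
    Array!T data;

    @property LengthProxy!T length()
    {
        return LengthProxy!T(&data);
    }
}

void main()
{
    auto c = new cArray!int;
    c.data ~= 1; c.data ~= 2; c.data ~= 3;
    c.length -= 1;            // removes the last element
    assert(c.length == 2);
}
```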




public class cArray(T)
{
    Array!T data;

    public void assumeSafeAppend() { }

    public @property size_t length()
    {
        return data.length;
    }

    public @property size_t length(size_t len)
    {
        // Shrink to the requested length (the original removed `len`
        // elements instead, which is not what length assignment means).
        while (data.length > len)
            data.removeBack();
        return data.length;
    }

    ref T opIndex(size_t i) { return data[i]; }
    @property size_t opDollar(size_t dim : 0)() { return data.length; }

    this() { data = Array!T(); }

    int opApply(int delegate(ref T) dg)
    {
        int result = 0;

        for (size_t i = 0; i < data.length; i++)
        {
            result = dg(data[i]);
            if (result)
                break;
        }
        return result;
    }

    int opApplyReverse(int delegate(ref T) dg)
    {
        int result = 0;

        // Iterate back to front (the original walked forward here too).
        for (size_t i = data.length; i > 0; i--)
        {
            result = dg(data[i - 1]);
            if (result)
                break;
        }
        return result;
    }

    void opOpAssign(string op : "~")(T d)
    {
        data ~= d;
    }

    bool canFind(T d)  // don't shadow the class's T with a new parameter
    {
        for (size_t i = 0; i < data.length; i++)
        {
            if (data[i] == d)
                return true;
        }
        return false;
    }
}


Re: canFind doesn't work on Array, replacing [] with array doesn't work, etc...

2016-06-18 Thread Joerg Joergonson via Digitalmars-d-learn

On Saturday, 18 June 2016 at 17:46:26 UTC, ag0aep6g wrote:
On Saturday, 18 June 2016 at 17:02:40 UTC, Joerg Joergonson 
wrote:
3. can't use canFind from algorithm. Complains it can't find a 
matching case. I tried many variations to get this to work.


canFind takes a range. Array isn't a range itself, but you can 
get one by slicing it with []:



import std.container.array;
import std.algorithm;
void main()
{
Array!int a = [1, 2, 3];
assert(a[].canFind(2));
assert(!a[].canFind(0));
}



Thanks. I've decided to implement a wrapper around Array so it is 
a drop in replacement for [].







Re: Templated class defaults and inheritence

2016-06-18 Thread Joerg Joergonson via Digitalmars-d-learn


This is solved through simple inheritance constraints and 
aliasing with qualification.



class X;
class subfoo : X;
class subbaz : subfoo;
class foo(T) if (is(T : subfoo)) : X;
class baz(T) if (is(T : subbaz)) : foo!T;

then when we need foo with "default",

alias foo = qualified.foo!subfoo;

With the qualification, which I guess requires having this 
stuff in a separate module, there is no conflict. Without it, 
alias foo = foo!subfoo; fails circularly.










canFind doesn't work on Array, replacing [] with array doesn't work, etc...

2016-06-18 Thread Joerg Joergonson via Digitalmars-d-learn

Have working code that uses []. Trying to replace with Array!

1. Can't use make in field initialization. Complains about malloc 
in static context.


2. can't decrement the length. So Darr.length = Darr.length - 1; 
fails.
This means we can't remove an element easily. I see a removeBack 
but since I can't get the code to compile I don't know if it is 
a replacement. If so, why not allow setting the length?


can't set the length to 0. Why not? Why not just allow length to 
be set?


3. can't use canFind from algorithm. Complains it can't find a 
matching case. I tried many variations to get this to work.





Re: ARSD PNG memory usage

2016-06-18 Thread Joerg Joergonson via Digitalmars-d-learn

On Saturday, 18 June 2016 at 02:01:29 UTC, Adam D. Ruppe wrote:
On Saturday, 18 June 2016 at 01:20:16 UTC, Joerg Joergonson 
wrote:
Error: undefined identifier 'sleep', did you mean function 
'Sleep'?		


"import core.thread; sleep(10);"


It is `Thread.sleep(10.msecs)` or whatever time - `sleep` is a 
static member of the Thread class.



They mention to use PeekMessage and I don't see you doing 
that, not sure if it would change things though?


I am using MsgWaitForMultipleObjectsEx which blocks until 
something happens. That something can be a timer, input event, 
other message, or an I/O thing... it doesn't eat CPU unless 
*something* is happening.


Yeah, I don't know what though. Adding Sleep(5); reduces it's 
consumption to 0% so it is probably just spinning. It might be 
the nvidia issue that creates some weird messages to the app.


I'm not too concerned about it as it's now down to 0; it is 
minimal wait time for my app (maybe not acceptable for performance 
apps but ok for mine... at least for now).


As I continue to work on it, I might stumble on the problem or it 
might disappear spontaneously.




Re: Templated class defaults and inheritence

2016-06-18 Thread Joerg Joergonson via Digitalmars-d-learn

On Saturday, 18 June 2016 at 12:15:56 UTC, Klaus Kalsesh wrote:
On Saturday, 18 June 2016 at 02:11:23 UTC, Joerg Joergonson 
wrote:

I have something like

class X;
class subfoo : X;
class subbaz : X;

class foo : X
{
subfoo bar;
}

class baz : X;


which I have modified so that

class subbaz : subfoo;
class baz : foo;

(essentially baz is now a derivation of foo while before it 
was of X)


the problem is that subbaz uses subfoo bar; when it also needs 
to use a derived type. (so it is a full derivation of foo and 
subfoo)



To accomplish that I parameterized foo so I can do

class foo!T : X
{
T bar;
}


and I can now do

class baz : foo!subbaz;


There are two problems with this though:


1. How can I create a default foo!(T = subfoo) so I can just 
instantiate classes like new foo() and it is the same as 
foo!subfoo()? I tried creating a class like class foo : 
foo!subfoo; but I get a collision. I guess an alias will work 
here just fine though?(just thought of it)


You must declare an alias:

alias FooSubfoo = foo!subfoo;
FooSubfoo fsf = new FooSubfoo;



No, this is not what I'm asking

I would want something like

alias foo = foo!subfoo;

Not sure, though, if a when I instantiate like

new foo();

If the compiler will understand it is new foo!subfoo(); I see no 
problem here but haven't tested it.




2. The real problem is that baz isn't really a true derivation 
of foo like it should be. foo!subfoo and foo!subbaz are 
different types. I want the compiler to realize that 
foo!subbaz(and hence baz) is really a derived foo!subfoo and 
ultimately X.


For multiple inheritence in classes, the standard way of doing 
is with interfaces.




This is not multiple inheritance and alias this won't work.

Let me explain better:

X -> foo%subfoo
  -> baz%subbaz

by -> I mean inherits and by %, I mean "uses"(say, as a field or 
method parameter or whatever)


This then says that foo uses subfoo and is derived from X. 
Similarly for baz.


These are two distinct types(hence the two different lines)

Now, if baz inherits from foo instead of X, we have

X -> foo%subfoo -> baz%subfoo

But if subfoo -> subbaz, then we should be able to do

X -> foo%subfoo -> (baz%subfoo) -> baz%subbaz.

Note the are now on the same line. baz%subbaz is a derived type 
of foo%subfoo, not just X as in the first two line case.


This is an important distinction in the type system.


It's sort of multiple inheritance in that multiple types are used 
but each type only inherits once.


In C# we have the "where" keyword that lets us tell the compiler 
something like "where subbaz inherits from subfoo".




Dlang once had a page that had ways to express stuff like this 
but I can no longer find it ;/


It might be a syntax like

class baz!T : foo!(T : subfoo);

which may say "T must be derived from subfoo". Then baz!subbaz 
would work and it would be a derived type of foo!subfoo(rather 
than just X).
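D does have syntax close to this: the specialization `(T : subfoo)` constrains the parameter to types derived from subfoo, though it still does not make foo!subbaz a subtype of foo!subfoo. A sketch using the names from this thread:

```d
class X { }
class subfoo : X { }
class subbaz : subfoo { }

class foo(T : subfoo) : X          // T must derive from subfoo
{
    T bar;
}

class baz(T : subbaz) : foo!T { }  // baz!T really does inherit foo!T

// Equivalent constraint form:
// class foo(T) if (is(T : subfoo)) : X { T bar; }

void main()
{
    auto b = new baz!subbaz;
    foo!subbaz f = b;      // fine: foo!subbaz is baz!subbaz's base
    assert(f is b);
    // foo!subfoo g = b;   // still an error: foo!subbaz and
    //                     // foo!subfoo remain unrelated types
}
```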









Templated class defaults and inheritence

2016-06-17 Thread Joerg Joergonson via Digitalmars-d-learn

I have something like

class X;
class subfoo : X;
class subbaz : X;

class foo : X
{
subfoo bar;
}

class baz : X;


which I have modified so that

class subbaz : subfoo;
class baz : foo;

(essentially baz is now a derivation of foo while before it was 
of X)


the problem is that subbaz uses subfoo bar; when it also needs to 
use a derived type. (so it is a full derivation of foo and subfoo)



To accomplish that I parameterized foo so I can do

class foo!T : X
{
T bar;
}


and I can now do

class baz : foo!subbaz;


There are two problems with this though:


1. How can I create a default foo!(T = subfoo) so I can just 
instantiate classes like new foo() and it is the same as 
foo!subfoo()? I tried creating a class like class foo : 
foo!subfoo; but I get a collision. I guess an alias will work 
here just fine though?(just thought of it)


2. The real problem is that baz isn't really a true derivation of 
foo like it should be. foo!subfoo and foo!subbaz are different 
types. I want the compiler to realize that foo!subbaz(and hence 
baz) is really a derived foo!subfoo and ultimately X.


I'm pretty sure D can do this, just haven't figure out how.

Thanks.












Re: ARSD PNG memory usage

2016-06-17 Thread Joerg Joergonson via Digitalmars-d-learn

On Saturday, 18 June 2016 at 01:46:32 UTC, Adam D. Ruppe wrote:
On Saturday, 18 June 2016 at 01:44:28 UTC, Joerg Joergonson 
wrote:
I simply removed your nextpowerof2 code (so the width and 
height wasn't being enlarged) and saw no memory change. 
Obviously because they are temporary buffers, I guess?


right, the new code free() them right at scope exit.

If this is the case, then maybe there is one odd temporary 
still hanging around in png?


Could be, though the png itself has relatively small overhead, 
and the opengl texture adds to it still. I'm not sure if video 
memory is counted by task manager or not... but it could be 
loading up the whole ogl driver that accounts for some of it. I 
don't know.


Ok. Also, maybe the GC hasn't freed some of those temporaries 
yet. What's strange is that when the app is run, it seems to do a 
lot of small allocations around 64kB or something for about 10 
seconds (I watch the memory increase in TM), then it stabilizes. 
Not a big deal, just seems a bit weird (maybe some type of lazy 
allocation going on).



Anyways, I'm much happier now ;) Thanks!


Re: ARSD PNG memory usage

2016-06-17 Thread Joerg Joergonson via Digitalmars-d-learn

On Friday, 17 June 2016 at 14:39:32 UTC, kinke wrote:

On Friday, 17 June 2016 at 04:54:27 UTC, Joerg Joergonson wrote:

LDC x64 uses about 250MB and 13% cpu.

I couldn't check on x86 because of the error

phobos2-ldc.lib(gzlib.c.obj) : fatal error LNK1112: module 
machine type 'x64' conflicts with target machine type 'X86'


not sure what that means with gzlib.c.obj. Must be another bug 
in ldc alpha ;/


It looks like you're trying to link 32-bit objects to a 64-bit 
Phobos.
The only pre-built LDC for Windows capable of linking both 
32-bit and 64-bit code is the multilib CI release, see 
https://github.com/ldc-developers/ldc/releases/tag/LDC-Win64-master.



Yes, it looks that way but it's not the case I believe(I did 
check when this error first came up). I'm using the phobo's libs 
from ldc that are x86.


I could be mistaken but

phobos2-ldc.lib(gzlib.c.obj)

suggests that the problem isn't with the entire phobos lib but 
gzlib.c.obj and that that is the only one marked incorrectly, 
since it's not for all the other imports, it seems something got 
marked wrong in that specific case?








Re: ARSD PNG memory usage

2016-06-17 Thread Joerg Joergonson via Digitalmars-d-learn

On Saturday, 18 June 2016 at 00:56:57 UTC, Joerg Joergonson wrote:

On Friday, 17 June 2016 at 14:48:22 UTC, Adam D. Ruppe wrote:

[...]


Yes, same here! Great! It runs around 122MB in x86 and 107MB 
x64. Much better!



[...]


Yeah, strange but good catch! It now works in x64! I modified 
it to to!wstring(title).dup simply to have the same title and 
classname.



[...]


I have the opposite on memory but not a big deal.



[...]


I will investigate this soon and report back anything. It 
probably is something straightforward.



[...]


I found this on non-power of 2 textures:

https://www.opengl.org/wiki/NPOT_Texture


https://www.opengl.org/registry/specs/ARB/texture_non_power_of_two.txt

It seems like it's probably a quick and easy add-on and you 
already have the padding code; it could easily be optional (set 
a flag or pass a bool or whatever).

It could definitely save some serious memory for large textures.

e.g., a 3000x3000x4 texture takes about 36MB or 2^25.1 bytes. 
Since this has to be rounded up to 2^26 = 67MB, we have almost 
doubled the amount of wasted space.


Hence, allowing for non-power of two would probably reduce the 
memory footprint of my code to near 50MB(around 40MB being the 
minimum using uncompressed textures).


I might try to get a working version of that at some point. 
Going to deal with the cpu thing now though.


Thanks again.


Never mind about this. I wasn't keeping in mind that these 
textures are ultimately going to end up in the video card memory.


I simply removed your nextpowerof2 code (so the width and height 
wasn't being enlarged) and saw no memory change. Obviously 
because they are temporary buffers, I guess?


If this is the case, then maybe there is one odd temporary still 
hanging around in png?





Re: ARSD PNG memory usage

2016-06-17 Thread Joerg Joergonson via Digitalmars-d-learn
The CPU usage is consistently very low on my computer. I still 
don't know what could be causing it for you, but maybe it is 
the temporary garbage... let us know if the new patches make a 
difference there.




Ok, I tried the breaking at random method and I always ended up 
in system code and no stack trace to... seems it was an alternate 
thread(maybe GC?). I did a sampling profile and got this:


Function Name         Inclusive  Exclusive  Inclusive %  Exclusive %
_DispatchMessageW@4      10,361          5        88.32         0.04
[nvoglv32.dll]            7,874        745        67.12         6.35
_GetExitCodeThread@8      5,745      5,745        48.97        48.97
_SwitchToThread@0         2,166      2,166        18.46        18.46

So possibly it is simply my system and graphics card. For some 
reason NVidia might be using a lot of cpu here for no apparent 
reason?


DispatchMessage is still taking quite a bit of that though?


Seems like someone else has a similar issue:

https://devtalk.nvidia.com/default/topic/832506/opengl/nvoglv32-consuming-a-ton-of-cpu/


https://github.com/mpv-player/mpv/issues/152


BTW, trying sleep in the MSG loop

Error: undefined identifier 'sleep', did you mean function 
'Sleep'?		


"import core.thread; sleep(10);"

;)

Adding a Sleep(10); to the loop dropped the cpu usage down to 
0-1% cpu!


http://stackoverflow.com/questions/33948837/win32-application-with-high-cpu-usage/33948865

Not sure if that's the best approach though but it does work.

They mention to use PeekMessage and I don't see you doing that, 
not sure if it would change things though?
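For comparison, the classic fully blocking pump looks like this. This is a sketch of the standard Win32 pattern, not simpledisplay's actual loop: GetMessageW blocks until a message arrives, so an idle window sits at ~0% CPU with no Sleep at all, whereas a PeekMessage-style loop spins unless it sleeps or waits explicitly.

```d
version (Windows)
{
    import core.sys.windows.windows;

    void pumpMessages()
    {
        MSG msg;
        // Blocks until a message is available; returns 0 on WM_QUIT
        // and -1 on error, so loop while the result is positive.
        while (GetMessageW(&msg, null, 0, 0) > 0)
        {
            TranslateMessage(&msg);
            DispatchMessageW(&msg);
        }
    }
}
```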






Re: ARSD PNG memory usage

2016-06-17 Thread Joerg Joergonson via Digitalmars-d-learn

On Friday, 17 June 2016 at 14:48:22 UTC, Adam D. Ruppe wrote:

On Friday, 17 June 2016 at 04:54:27 UTC, Joerg Joergonson wrote:
ok, then it's somewhere in TrueColorImage or the loading of 
the png.


So, opengltexture actually does reallocate if the size isn't 
right for the texture... and your image was one of those sizes.


The texture pixel size needs to be a power of two, so 3000 gets 
rounded up to 4096, which means an internal allocation.


But it can be a temporary one! So ketmar tackled png.d's 
loaders' temporaries and I took care of gamehelper.d's...


And the test program went down about to 1/3 of its memory 
usage. Try grabbing the new ones from github now and see if it 
works for you too.




Yes, same here! Great! It runs around 122MB in x86 and 107MB x64. 
Much better!




Well, it works on LDC x64! again ;) This seems like an issue 
with DMD x64? I was thinking maybe it has to do with the layout 
of the struct or something, but not sure.


I have a fix for this too, though I don't understand why it 
works


I just .dup'd the string literal before passing it to Windows. 
I think dmd is putting the literal in a bad place for these 
functions (they do bit tests to see if it is a pointer or an 
atom, so maybe it is in an address where the wrong bits are set)




Yeah, strange but good catch! It now works in x64! I modified it 
to to!wstring(title).dup simply to have the same title and 
classname.


In any case, the .dup seems to fix it, so all should work on 32 
or 64 bit now. In my tests, now that the big temporary arrays 
are manually freed, the memory usage is actually slightly lower 
on 32 bit, but it isn't bad on 64 bit either.


I have the opposite on memory but not a big deal.


The CPU usage is consistently very low on my computer. I still 
don't know what could be causing it for you, but maybe it is 
the temporary garbage... let us know if the new patches make a 
difference there.


I will investigate this soon and report back anything. It 
probably is something straightforward.


Anyways, We'll figure it all out at some point ;) I'm really 
liking your lib by the way. It's let me build a gui and get a 
lot done and just "work". Not sure if it will work on X11 with 
just a recompile, but I hope ;)



It often will! If you aren't using any of the native event 
handler functions or any of the impl.* members, most things 
just work (exception being the windows hotkey functions, but 
those are marked Windows anyway!). The basic opengl stuff is 
all done for both platforms. Advanced opengl isn't implemented 
on Windows yet though (I don't know it; my opengl knowledge 
stops in like 1998 with opengl 1.1 so yeah, I depend on 
people's contributions for that and someone did Linux for me, 
but not Windows yet. I think.)


I found this on non-power of 2 textures:

https://www.opengl.org/wiki/NPOT_Texture


https://www.opengl.org/registry/specs/ARB/texture_non_power_of_two.txt

It seems like it's probably a quick and easy add-on and you 
already have the padding code; it could easily be optional (set a 
flag or pass a bool or whatever).

It could definitely save some serious memory for large textures.

e.g., a 3000x3000x4 texture takes about 36MB or 2^25.1 bytes. 
Since this has to be rounded up to 2^26 = 67MB, we have almost 
doubled the amount of wasted space.


Hence, allowing for non-power of two would probably reduce the 
memory footprint of my code to near 50MB(around 40MB being the 
minimum using uncompressed textures).


I might try to get a working version of that at some point. Going 
to deal with the cpu thing now though.


Thanks again.




Re: ARSD PNG memory usage

2016-06-17 Thread Joerg Joergonson via Digitalmars-d-learn

On Friday, 17 June 2016 at 14:48:22 UTC, Adam D. Ruppe wrote:

On Friday, 17 June 2016 at 04:54:27 UTC, Joerg Joergonson wrote:

[...]


So, opengltexture actually does reallocate if the size isn't 
right for the texture... and your image was one of those sizes.


[...]



Cool, I'll check all this out and report back. I'll look into the 
cpu issue too.


Thanks!


Re: ARSD PNG memory usage

2016-06-16 Thread Joerg Joergonson via Digitalmars-d-learn

On Friday, 17 June 2016 at 04:32:02 UTC, Adam D. Ruppe wrote:

On Friday, 17 June 2016 at 01:51:41 UTC, Joerg Joergonson wrote:
Are you keeping multiple buffers of the image around? A 
trueimage, a memoryimage, an opengl texture


MemoryImage and TrueImage are the same thing, memory is just 
the interface, true image is the implementation.


OpenGL texture is separate, but it references the same memory 
as a TrueColorImage, so it wouldn't be adding.




ok, then it's somewhere in TrueColorImage or the loading of the 
png.




You might have pinned temporary buffers though. That shouldn't 
happen on 64 bit, but on 32 bit I have seen it happen a lot.




Ok, IIRC LDC both x64 and x86 had high memory usage too, so if it 
shouldn't happen on 64-bit(if it applies to ldc), this then is 
not the problem. I'll run a -vgc on it and see if it shows up 
anything interesting.


When I do a bare loop minimum project(create2dwindow + event 
handler) I get 13% cpu(on 8-core skylake 4ghz) and 14MB memory.


I haven't seen that here but I have a theory now: you have 
some pinned temporary buffer on 32 bit (on 64 bit, the GC would 
actually clean it up) that keeps memory usage near the 
collection boundary.


Again, it might be true but I'm pretty sure I saw the problem 
with ldc x64.


Then, a small allocation in the loop - which shouldn't be 
happening, I don't see any in here... - but if there is a small 
allocation I'm missing, it could be triggering a GC collection 
cycle each time, eating CPU to scan all that wasted memory 
without being able to free anything.




Ok, Maybe... -vgc might show that.

If you can run it in the debugger and just see where it is by 
breaking at random, you might be able to prove it.




Good idea, not thought about doing that ;) Might be a crap shoot 
but who knows...


That's a possible theory. I can reproduce the memory usage 
here, but not the CPU usage though. Sitting idle, it is always 
<1% here (0 if doing nothing, like 0.5% if I move the mouse in 
the window to generate some activity)


 I need to get to bed though, we'll have to check this out in 
more detail later.


me too ;) I'll try to test stuff out a little more when I get a 
chance.




Thanks!  Also, when I try to run the app in 64-bit windows, 
RegisterClassW throws for some reason ;/ I haven't been able 
to figure that one out yet ;/


err this is a mystery to me too... a hello world on 64 bit 
seems to work fine, but your program tells me error 998 
(invalid memory access) when I run it. WTF, both register class 
the same way.


I'm kinda lost on that.


Well, it works on LDC x64! again ;) This seems like an issue with 
DMD x64? I was thinking maybe it has to do with the layout of the 
struct or something, but not sure.


---

I just run a quick test:

LDC x64 uses about 250MB and 13% cpu.

I couldn't check on x86 because of the error

phobos2-ldc.lib(gzlib.c.obj) : fatal error LNK1112: module 
machine type 'x64' conflicts with target machine type 'X86'


not sure what that means with gzlib.c.obj. Must be another bug in 
ldc alpha ;/



Anyways, We'll figure it all out at some point ;) I'm really 
liking your lib by the way. It's let me build a gui and get a lot 
done and just "work". Not sure if it will work on X11 with just a 
recompile, but I hope ;)




ARSD PNG memory usage

2016-06-16 Thread Joerg Joergonson via Digitalmars-d-learn
Hi, so, do you have any idea why when I load an image with png.d 
it takes a ton of memory?


I have a 3360x2100 image that should take around 26MB of memory 
uncompressed, and a bunch of other smaller png files.


Are you keeping multiple buffers of the image around? A 
trueimage, a memoryimage, an opengl texture thing that might be 
in main memory, etc?

Total file space of all the images is only 
about 3MB compressed and 40MB uncompressed. So it's using around 
10x more memory than it should! I tried a GC collect and all that.


I don't think my program will have a chance in hell using that 
much memory. That's just a few images for gui work. I'll be 
loading full page png's later on that might have many pages(100+) 
that I would want to pre-cache. This would probably cause the 
program to use TB's of space.


I don't know where to begin diagnosing the problem. I am using 
openGL but I imagine that shouldn't really allocate anything new?


I have embedded the images using `import` but that shouldn't 
really add much size(since it is compressed) or change things.


You could try it out yourself on a test case to see? (might be a 
windows thing too) Create a high res image(3000x3000, say) and 
load it like


auto eImage = cast(ubyte[])import("mylargepng.png");

TrueColorImage image = imageFromPng(readPng(eImage)).getAsTrueColorImage;
OpenGlTexture oGLimage = new OpenGlTexture(image); // Will crash without create2dwindow

//oGLimage.draw(0,0,3000,3000);


When I do a bare loop minimum project(create2dwindow + event 
handler) I get 13% cpu(on 8-core skylake 4ghz) and 14MB memory.


When I add the code above I get 291MB of memory (for one image).

Here's the full D code source:


module winmain;

import arsd.simpledisplay;
import arsd.png;
import arsd.gamehelpers;

void main()
{
    auto window = create2dWindow(1680, 1050, "Test");

    auto eImage = cast(ubyte[])import("Mock.png");

    TrueColorImage image = imageFromPng(readPng(eImage)).getAsTrueColorImage;   // 178MB

    OpenGlTexture oGLimage = new OpenGlTexture(image);   // 291MB
    //oGLimage.draw(0,0,3000,3000);

    window.eventLoop(50,
        delegate ()
        {
            window.redrawOpenGlSceneNow();
        },
    );
}

Note that I have modified create2dWindow to take the viewport and 
set it to 2x as large in my own code (removed here). It 
shouldn't matter though, as it's the png and OpenGlTexture that 
seem to have the issue.


Surely once the image is loaded by OpenGL we could potentially 
disregard the other copies and virtually no extra memory would be 
required? I do use getPixel though; not sure if that can be 
used on OpenGlTextures. I don't mind keeping a main-memory copy, 
but I just need it to have a realistic size ;)
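For what it's worth, one way to attack the 10x figure is to make sure the intermediate decode buffers can actually be collected once the pixels are on the GPU. A minimal, untested sketch (it assumes OpenGlTexture keeps whatever copy it needs after construction, which I haven't verified in the arsd source):

```
// Sketch: drop the CPU-side buffers once the texture is uploaded.
// Assumption: OpenGlTexture holds its own data after construction.
import arsd.png;
import arsd.gamehelpers;
import core.memory : GC;

OpenGlTexture loadTexture(ubyte[] pngBytes)
{
    auto img = imageFromPng(readPng(pngBytes)).getAsTrueColorImage;
    auto tex = new OpenGlTexture(img);
    img = null;      // release the last reference to the decoded pixels
    GC.collect();    // optionally force a collection to reclaim them
    GC.minimize();   // and try to return pages to the OS
    return tex;
}
```

If the usage stays high even after that, the copies are being held somewhere else (or the GC just isn't returning pages).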


So two problems: 1 is the CPU usage, which I'll try to get more 
info on from my side when I can profile, and 2 is the 10x memory 
usage. If it doesn't happen on your machine, can you try the 
alternate platform (if on *nix, go for Windows, or vice versa)? 
This way we can get an idea where the problem might be.


Thanks!  Also, when I try to run the app in 64-bit windows, 
RegisterClassW throws for some reason ;/ I haven't been able to 
figure that one out yet ;/















Out of order execution

2016-06-15 Thread Joerg Joergonson via Digitalmars-d-learn

Suppose I have a loop where I execute two functions:

for(...)
{
   if (x) Do1(x);
   if (y) Do2(y);
}

The problem is, I really always want to execute all the Do2's 
first then the Do1's. As is, we could get any order of calls.


Suppose I can't run the loop twice for performance reasons (there 
is other stuff in it), and I don't want to store the state and 
call info and then sort it all out afterwards.


Is there an efficient lazy way to make this happen?
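One single-pass sketch: run the Do2's immediately and queue the Do1's as delegates, draining the queue after the loop. It does store a little state (one closure per deferred call), so it's not quite what was asked for, but it avoids a second pass. Do1/Do2 here are stand-ins that just print:

```
import std.stdio;

void main()
{
    void delegate()[] deferred; // queued Do1 calls

    // helper forces a fresh closure frame per call; closures created
    // directly in a loop body would all share one frame (issue 2043)
    void delegate() defer(int x)
    {
        return { writefln("Do1(%s)", x); };
    }

    foreach (i; 1 .. 4)
    {
        int x = i, y = i;
        if (y) writefln("Do2(%s)", y); // all Do2's run immediately
        if (x) deferred ~= defer(x);   // Do1's are queued for later
    }

    foreach (d; deferred)
        d(); // every Do1 runs after every Do2
}
```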




Re: Accessing COM Objects

2016-06-15 Thread Joerg Joergonson via Digitalmars-d-learn

On Wednesday, 15 June 2016 at 19:21:51 UTC, John wrote:
On Wednesday, 15 June 2016 at 18:32:28 UTC, Joerg Joergonson 
wrote:

  import core.sys.windows.com, core.sys.windows.oaidl;


Thanks. Should these not be added to the generated file?


The problem is that other type libraries will probably require 
other headers to be imported, and there's no way to work out 
which, so I've left that up to the user for now.




Also, could you add to it the following:

const static GUID iid = 
Guid!("5DE90358-4D0B-4FA1-BA3E-C91BBA863F32");


inside the interface (Replace the string with the correct 
guid)?




This allows it to work with ComPtr which looks for the iid 
inside the interface, shouldn't hurt anything.


I could add that as an option.



In any case, I haven't got ComPtr to work so...


GUID Guid(string str)()
{
    static assert(str.length == 36, "Guid string must be 36 chars long");
    enum GUIDstring = "GUID(0x" ~ str[0..8] ~ ", 0x" ~ str[9..13] ~ ", 0x" ~ str[14..18] ~
        ", [0x" ~ str[19..21] ~ ", 0x" ~ str[21..23] ~ ", 0x" ~ str[24..26] ~ ", 0x" ~ str[26..28] ~
        ", 0x" ~ str[28..30] ~ ", 0x" ~ str[30..32] ~ ", 0x" ~ str[32..34] ~ ", 0x" ~ str[34..36] ~ "])";

    return mixin(GUIDstring);
}
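Usage is just (the GUID string here is the illustrative one from above; GUID itself comes from the Windows headers in druntime):

```
import core.sys.windows.basetyps : GUID;

// enum forces compile-time evaluation of the mixin
enum GUID iid = Guid!("5DE90358-4D0B-4FA1-BA3E-C91BBA863F32");
```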




also tried CoCreateInstance and getting error 80040154

Not sure if it works.



Changed the GUID to another one found in the registry(not the 
one at the top of the generated file) and it works. Both load 
photoshop


Oops. The one at the top of the file is the type library's ID, 
not the class ID. I should just omit it if it causes confusion.





int main(string[] argv)
{
    //auto ps = ComPtr!_Application(CLSID_PS).require;

    //const auto CLSID_PS = Guid!("6DECC242-87EF-11cf-86B4-44455354"); // PS 90.1 fails because of interface issue
    const auto CLSID_PS = Guid!("c09f153e-dff7-4eff-a570-af82c1a5a2a8"); // PS 90.0 works.

    auto hr = CoInitialize(null);
    auto iid = IID__Application;

    _Application* pUnk;

    hr = CoCreateInstance(&CLSID_PS, null, CLSCTX_ALL, &iid, cast(void**)&pUnk);

    if (FAILED(hr))
        throw new Exception("ASDF");

    return 0;
}

The photoshop.d file
http://www.filedropper.com/photoshop_1


So, I guess it works but how to access the methods? The 
photoshop file looks to have them listed but they are all 
commented out.


They're commented out because Photoshop seems to have only 
provided a late-binding interface and you have to call them by 
name through IDispatch.Invoke. It's possible to wrap all that 
in normal D methods, and I'm working on it, but it won't be 
ready for a while.
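For reference, a late-bound call goes through GetIDsOfNames and then Invoke. An untested sketch of a property read by name (the dispatch interface comes from core.sys.windows.oaidl; the member name is whatever the type library exposes, e.g. "Version"):

```
// Untested sketch: read a property through IDispatch late binding.
import core.sys.windows.windows;
import core.sys.windows.oaidl;

VARIANT getPropByName(IDispatch disp, const(wchar)[] name)
{
    enum ushort DISPATCH_PROPERTYGET = 2; // value from the Windows SDK

    DISPID dispId;
    auto namez = cast(wchar*)(name ~ "\0"w).ptr;
    IID nullIID; // IID_NULL (all zeros)
    if (FAILED(disp.GetIDsOfNames(&nullIID, &namez, 1, 0, &dispId)))
        throw new Exception("member name not found");

    DISPPARAMS params; // zero-initialized: no arguments for a get
    VARIANT result;
    EXCEPINFO excep;
    uint argErr;
    if (FAILED(disp.Invoke(dispId, &nullIID, 0, DISPATCH_PROPERTYGET,
                           &params, &result, &excep, &argErr)))
        throw new Exception("Invoke failed");
    return result;
}
```

Wrapping each commented-out method in something like this is essentially what the generator would have to emit.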



Ok, I've tried things like uncommenting

Document Open(BSTR Document, VARIANT As, VARIANT AsSmartObject);
void Load(BSTR Document);

/*[id(0x70537673)]*/ BSTR get_ScriptingVersion();
/*[id(0x70464D4D)]*/ double get_FreeMemory();
/*[id(0x76657273)]*/ BSTR get_Version();

and everything crashes with bad reference.



If I try ComPtr, same thing



const auto CLSID_PS = Guid!("c09f153e-dff7-4eff-a570-af82c1a5a2a8");   // PS 90.0 works.

auto hr = CoInitialize(null);
auto iid = IID__Application;

auto ps = cast(_Application)(ComPtr!_Application(CLSID_PS).require);

_Application pUnk;

hr = CoCreateInstance(&CLSID_PS, null, CLSCTX_ALL, &iid, cast(void**)&pUnk);

if (FAILED(hr))
    throw new Exception("ASDF");

auto ptr = cast(wchar*)alloca(wchar.sizeof * 1000);

auto fn = `ps.psd`;
for(auto i = 0; i < fn.length; i++)
{
    ptr[i] = fn[i];
}

writeln(ps.get_FreeMemory());

pUnk.Load(ptr);



My thinking is that CoCreateInstance is supposed to give us a 
pointer to the interface so we can use it. If all this stuff is 
crashing, does that mean the interface is invalid or not being 
assigned properly, or is there far more to it than this?






Re: Accessing COM Objects

2016-06-15 Thread Joerg Joergonson via Digitalmars-d-learn

On Wednesday, 15 June 2016 at 06:09:33 UTC, thedeemon wrote:

On Monday, 13 June 2016 at 17:38:41 UTC, Incognito wrote:


[...]


There are ready tools idl2d:
https://github.com/dlang/visuald/tree/master/c2d

[...]


I can't seem to get ComPtr to work.


auto ps = ComPtr!_Application(CLSID_PS).require;

Where CLSID_PS is the Guid from the registry that seems to work 
with CoCreate. _Application was generated from tbl2d.



See my other post for a more (though not much more) complete 
description of the issues and files.






Re: Accessing COM Objects

2016-06-15 Thread Joerg Joergonson via Digitalmars-d-learn

On Wednesday, 15 June 2016 at 17:20:31 UTC, John wrote:
On Wednesday, 15 June 2016 at 16:45:39 UTC, Joerg Joergonson 
wrote:
Thanks. When I ran it I got a d file! when I tried to use that 
d file I get undefined IID and IDispatch. I imagine these 
interfaces come from somewhere, probably built in?


Any ideas?


Add the following after the module name:

  import core.sys.windows.com, core.sys.windows.oaidl;


Thanks. Should these not be added to the generated file?

Also, could you add to it the following:

const static GUID iid = 
Guid!("5DE90358-4D0B-4FA1-BA3E-C91BBA863F32");


inside the interface (Replace the string with the correct guid)?



This allows it to work with ComPtr which looks for the iid inside 
the interface, shouldn't hurt anything.


In any case, I haven't got ComPtr to work so...


GUID Guid(string str)()
{
    static assert(str.length == 36, "Guid string must be 36 chars long");
    enum GUIDstring = "GUID(0x" ~ str[0..8] ~ ", 0x" ~ str[9..13] ~ ", 0x" ~ str[14..18] ~
        ", [0x" ~ str[19..21] ~ ", 0x" ~ str[21..23] ~ ", 0x" ~ str[24..26] ~ ", 0x" ~ str[26..28] ~
        ", 0x" ~ str[28..30] ~ ", 0x" ~ str[30..32] ~ ", 0x" ~ str[32..34] ~ ", 0x" ~ str[34..36] ~ "])";

    return mixin(GUIDstring);
}




also tried CoCreateInstance and getting error 80040154

Not sure if it works.



Changed the GUID to another one found in the registry(not the one 
at the top of the generated file) and it works. Both load 
photoshop



int main(string[] argv)
{
    //auto ps = ComPtr!_Application(CLSID_PS).require;

    //const auto CLSID_PS = Guid!("6DECC242-87EF-11cf-86B4-44455354"); // PS 90.1 fails because of interface issue
    const auto CLSID_PS = Guid!("c09f153e-dff7-4eff-a570-af82c1a5a2a8"); // PS 90.0 works.

    auto hr = CoInitialize(null);
    auto iid = IID__Application;

    _Application* pUnk;

    hr = CoCreateInstance(&CLSID_PS, null, CLSCTX_ALL, &iid, cast(void**)&pUnk);

    if (FAILED(hr))
        throw new Exception("ASDF");

    return 0;
}

The photoshop.d file
http://www.filedropper.com/photoshop_1


So, I guess it works but how to access the methods? The photoshop 
file looks to have them listed but they are all commented out.


I suppose this is what ComPtr and other methods are used to help 
create the interface but none seem to work.









Re: Accessing COM Objects

2016-06-15 Thread Joerg Joergonson via Digitalmars-d-learn

On Wednesday, 15 June 2016 at 15:12:06 UTC, thedeemon wrote:
On Wednesday, 15 June 2016 at 07:01:30 UTC, Joerg Joergonson 
wrote:



It  seems idl2d from VD is not easily compilable?


I don't remember problems with that, anyway here's the binary I 
used:

http://stuff.thedeemon.com/idl2d.exe


It crashes when I use it ;/

core.exception.UnicodeException@src\rt\util\utf.d(290): invalid 
UTF-8 sequence


tbl2d did work and gave me a d file but I need to figure out what 
IID and IDispatch are.




Re: Accessing COM Objects

2016-06-15 Thread Joerg Joergonson via Digitalmars-d-learn

On Wednesday, 15 June 2016 at 08:24:41 UTC, John wrote:

On Wednesday, 15 June 2016 at 08:21:06 UTC, John wrote:
OK, adding the return type to the signature should fix that. 
So:


  private static Parameter getParameters(MethodImpl method)


Sorry, I meant the getParameter methods should be:

  private static Parameter[] getParameters(MethodImpl method)

and

  private static Parameter[] getParameters(MethodImpl method, 
out Parameter returnParameter, bool getReturnParameter)


Thanks. When I ran it I got a d file! when I tried to use that d 
file I get undefined IID and IDispatch. I imagine these 
interfaces come from somewhere, probably built in?


Any ideas?


Re: Accessing COM Objects

2016-06-15 Thread Joerg Joergonson via Digitalmars-d-learn

On Wednesday, 15 June 2016 at 16:03:04 UTC, Jesse Phillips wrote:

On Monday, 13 June 2016 at 01:22:33 UTC, Incognito wrote:

[...]


There is also:
https://github.com/JesseKPhillips/Juno-Windows-Class-Library

It kind of provides similar highlevel options as the "Modern 
COM Programming in D."


But I don't use it and have only been somewhat keeping it alive 
(I had some hiccups in supporting 64-bit), so I haven't been 
working to improve the simplicity of interfacing to COM objects.


It also includes definitions for accessing Windows COM objects 
which aren't needed when interfacing with your own or other COM 
objects. I'd like to have two libraries, Juno Library and Juno 
Windows Class Library.


I'll check it out...


Re: Accessing COM Objects

2016-06-15 Thread Joerg Joergonson via Digitalmars-d-learn

On Wednesday, 15 June 2016 at 06:09:33 UTC, thedeemon wrote:

On Monday, 13 June 2016 at 17:38:41 UTC, Incognito wrote:

Cool. Oleview gives me the idl files. How to convert the idl 
files to d or possibly c?


There are ready tools idl2d:
https://github.com/dlang/visuald/tree/master/c2d

and tlb2idl:
https://github.com/dlang/visuald/tree/master/tools

I've used this idl2d and it works pretty well (although not 
perfect, sometimes manual editing still required).


Example of real-world DirectShow interfaces translated:
https://gist.github.com/thedeemon/46748f91afdbcf339f55da9b355a6b56

Would I just use them in place of IUnknown once I have the 
interface?


If you have the interface defined AND you know its IID, you can 
request it from CoCreateInstance and then use as ordinary D 
object.


You might want to look at this wrapper that takes most of COM 
machinery:

https://gist.github.com/thedeemon/3c2989b76004fafe9aa0

Then you just write almost as in C#, something like

  auto pGraph = ComPtr!IGraphBuilder(CLSID_FilterGraph, 
"pGraph").require;


  ComPtr!ICaptureGraphBuilder2 pBuilder = 
ComPtr!ICaptureGraphBuilder2(CLSID_CaptureGraphBuilder2).require;

  pBuilder.SetFiltergraph(pGraph);
  ...
  auto CLSID_NullRenderer = 
Guid!("C1F400A4-3F08-11D3-9F0B-006008039E37"); //qedit.dll
  auto pNullRendr = ComPtr!IBaseFilter(CLSID_NullRenderer, 
"nulrend");

  pGraph.AddFilter(pNullRendr, "Null Renderer"w.ptr);
  ...
  auto imf = ComPtr!IMediaFilter(pGraph);
  imf.SetSyncSource(null);

All the CreateInstance, QueryInterface, AddRef/Release etc. is 
taken care of. And even HRESULT return codes are automatically 
checked.


Thanks, if I can get the idl converted I'll test it out. It seems 
idl2d from VD is not easily compilable?




Re: Accessing COM Objects P3

2016-06-15 Thread Joerg Joergonson via Digitalmars-d-learn

[in] long index,
[out] long* value);
[id(0x60020017)]
HRESULT PutClass([in] long value);
[id(0x60020018)]
HRESULT GetGlobalClass(
[in] long index,
[out] long* value);
[id(0x60020019)]
HRESULT PutGlobalClass([in] long value);
[id(0x6002001a)]
HRESULT GetPath(
[in] long index,
[out] BSTR* pathString);
[id(0x6002001b)]
HRESULT PutPath([in] BSTR pathString);
[id(0x6002001c)]
HRESULT GetDataLength(
[in] long index,
[out] long* value);
[id(0x6002001d)]
HRESULT GetData(
[in] long index,
[out] BSTR* value);
[id(0x6002001e)]
HRESULT PutData(
[in] long length,
[in] BSTR value);
};

[
  odl,
  uuid(7CA9DE40-9EB3-11D1-B033-00C04FD7EC47),
  helpstring("Container class for actions system 
parameters."),

  dual,
  oleautomation
]
interface IActionDescriptor : IDispatch {
[id(0x6002)]
HRESULT GetType(
[in] long key,
[out] long* type);
[id(0x60020001)]
HRESULT GetKey(
[in] long index,
[out] long* key);
[id(0x60020002)]
HRESULT HasKey(
[in] long key,
[out] long* HasKey);
[id(0x60020003)]
HRESULT GetCount([out] long* count);
[id(0x60020004)]
HRESULT IsEqual(
[in] IActionDescriptor* otherDesc,
[out] long* IsEqual);
[id(0x60020005)]
HRESULT Erase([in] long key);
[id(0x60020006)]
HRESULT Clear();
[id(0x60020007)]
HRESULT GetInteger(
[in] long key,
[out] long* retval);
[id(0x60020008)]
HRESULT PutInteger(
[in] long key,
[in] long value);
[id(0x60020009)]
HRESULT GetDouble(
[in] long key,
[out] double* retval);
[id(0x6002000a)]
HRESULT PutDouble(
[in] long key,
[in] double value);
[id(0x6002000b)]
HRESULT GetUnitDouble(
[in] long key,
[out] long* unitID,
[out] double* retval);
[id(0x6002000c)]
HRESULT PutUnitDouble(
[in] long key,
[in] long unitID,
[in] double value);
[id(0x6002000d)]
HRESULT GetString(
[in] long key,
[out] BSTR* retval);
[id(0x6002000e)]
HRESULT PutString(
[in] long key,
[in] BSTR value);
[id(0x6002000f)]
HRESULT GetBoolean(
[in] long key,
[out] long* retval);
[id(0x60020010)]
HRESULT PutBoolean(
[in] long key,
[in] long value);
[id(0x60020011)]
HRESULT GetList(
[in] long key,
[out] IActionList** list);
[id(0x60020012)]
HRESULT PutList(
[in] long key,
[in] IActionList* list);
[id(0x60020013)]
HRESULT GetObject(
[in] long key,
[out] long* classID,
[out] IActionDescriptor** retval);
[id(0x60020014)]
HRESULT PutObject(
[in] long key,
[in] long classID,
[in] IActionDescriptor* value);
[id(0x60020015)]
HRESULT GetGlobalObject(
[in] long key,
[out] long* classID,
[out] IActionDescriptor** retval);
[id(0x60020016)]
HRESULT PutGlobalObject(
[in] long key,
[in] long classID,
[in] IActionDescriptor* value);
[id(0x60020017)]
HRESULT GetEnumerated(
[in] long key,
[out] long* enumType,
[out] long* value);
[id(0x60020018)]
HRESULT PutEnumerated(
[in] long key,
[in] long enumType,
[in] long value);
[id(0x60020019)]
HRESULT GetReference(
[in] long key,
[out] IActionReference** reference);

Re: Strange Issues regarding aliases

2016-06-14 Thread Joerg Joergonson via Digitalmars-d-learn

On Tuesday, 14 June 2016 at 17:34:42 UTC, Joerg Joergonson wrote:
This is how derelict does it, I simply moved them in to the 
class for simplicity.


I mean glad: http://glad.dav1d.de/


It seems that a loader is required for some reason and that 
possibly could be one or both of the problems.




Re: Strange Issues regarding aliases

2016-06-14 Thread Joerg Joergonson via Digitalmars-d-learn
This is how derelict does it, I simply moved them in to the 
class for simplicity.


I mean glad: http://glad.dav1d.de/





Re: Strange Issues regarding aliases

2016-06-14 Thread Joerg Joergonson via Digitalmars-d-learn
On Tuesday, 14 June 2016 at 16:08:03 UTC, Steven Schveighoffer 
wrote:

On 6/14/16 11:29 AM, Joerg Joergonson wrote:

[...]


Your aliases are a bunch of function pointer types. This isn't 
what you likely want.


I'm assuming you want to bring the existing functions into more 
categorized namespaces? What you need is to do this:


public static struct fGL
{
   alias CullFace = .CullFace; // or whatever the fully 
qualified name is.


I don't have much experience with the opengl modules, so I 
can't be more specific.


-Steve


This is how derelict does it, I simply moved them in to the class 
for simplicity.





Strange Issues regarding aliases

2016-06-14 Thread Joerg Joergonson via Digitalmars-d-learn

I have stuff like


public static class fGL
{
nothrow @nogc extern(System)
{
alias CullFace = void function(tGL.Enum);
alias FrontFace = void function(tGL.Enum);
alias HInt = void function(tGL.Enum, tGL.Enum);
alias LineWidth = void function(tGL.Float);
alias PoIntSize = void function(tGL.Float);
...
}
}


public static struct tGL
{
alias Void = void;
alias IntPtr = ptrdiff_t;
alias SizeInt = int;
alias Char = char;
alias CharARB = byte;
alias uShort = ushort;
alias Int64EXT = long;
alias Short = short;
alias uInt64 = ulong;
alias HalfARB = ushort;
alias BlendFunc = void function(tGL.Enum, tGL.Enum);
...

__gshared {
CopyTexImage1D glCopyTexImage1D;
TextureParameterf glTextureParameterf;
VertexAttribI3ui glVertexAttribI3ui;
VertexArrayElementBuffer glVertexArrayElementBuffer;
BlendFunc glBlendFunc;
...
}


public struct eGL
{
enum ubyte FALSE = 0;
enum ubyte TRUE = 1;
enum uint NO_ERROR = 0;
enum uint NONE = 0;
enum uint ZERO = 0;
enum uint ONE = 1;
...
}


And when I use this stuff(which is clearly openGL being cleaned 
up a bit(I know I can use modules to essentially do this too but 
that is irrelevant):


I get errors like


Error: null has no effect in expression (null)  

Error: more than one argument for construction of extern 
(Windows) void function(uint, uint) nothrow @nogc	



This is when using it like

fGL.BlendFunc(eGL.SRC_ALPHA, eGL.ONE_MINUS_SRC_ALPHA);

What's going on? Please, no responses like "You are going to run 
into a lot of problems since all the openGL code uses the flat 
accessing"... The whole point is to get away from that ancient, 
backwards way.
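For what it's worth, both errors make sense once you notice the fGL aliases name function pointer *types*, not callable values: `fGL.BlendFunc(a, b)` is parsed as constructing a function pointer from two uints. A tiny illustration (all names hypothetical stand-ins):

```
module aliasdemo;

void glBlendFuncImpl(uint a, uint b) { } // stand-in for the loaded GL function

struct fGLBroken
{
    // a *type*: calling fGLBroken.BlendFunc(1, 2) tries to construct a
    // function pointer from two uints -> "more than one argument for construction"
    alias BlendFunc = void function(uint, uint);
}

struct fGLFixed
{
    // an alias to a *symbol*: calls resolve to the actual function
    alias BlendFunc = glBlendFuncImpl;
}

void main()
{
    fGLFixed.BlendFunc(1, 2);     // works
    // fGLBroken.BlendFunc(1, 2); // error, as in the post
}
```

So the namespacing struct needs aliases to the loaded function symbols (or __gshared pointer variables of those types), as Steve's reply suggests.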





Re: What's up with GDC?

2016-06-13 Thread Joerg Joergonson via Digitalmars-d-learn

On Monday, 13 June 2016 at 16:46:38 UTC, Adam D. Ruppe wrote:

On Sunday, 12 June 2016 at 14:22:54 UTC, Joerg Joergonson wrote:
Error: undefined identifier 'Sleep' in module 'core.thread', 
did you mean function 'Sleep'?


It is supposed to be `Thread.sleep(1.seconds);`

I'm pretty sure the capital Sleep() is supposed to be private 
(that is the OS-specific Windows api call).




Ok, I tried both and Sleep was the one that worked for some 
oddball reason. Then it seemed to stop working, I think (I tried 
it in a different spot)... maybe user error.


Basically keeping the event loop uses around 12% cpu and 12MB 
of memory.


That's weird, it just sleeps until a message comes in from the 
OS. On my computer, programs sit at 0% like you'd expect, and 
my guitest program (which has text areas, buttons, menu, etc) 
eats ~1.7 MB, both 32 and 64 bit versions.


Are you running some other program that might be sending a lot 
of broadcast messages?




Not that I know of. I haven't tried running it outside VS though 
so it might be doing something weird. I'll investigate further 
when I get a chance and get further down the road.


About the WM size thing, I haven't had a problem with it except 
for the weird vertical shifting. It doesn't use any more cpu when 
constantly resizing.




Re: What's up with GDC?

2016-06-12 Thread Joerg Joergonson via Digitalmars-d-learn

On Sunday, 12 June 2016 at 13:23:26 UTC, Adam D. Ruppe wrote:

On Sunday, 12 June 2016 at 13:05:48 UTC, Joerg Joergonson wrote:
BTW, when I compile a simple project with your simpledisplay 
it takes up around 300MB(for ldc, 400 for dmd) and uses about 
15% cpu.


What's your code? The library itself does fairly little so the 
time probably depends on your draw loop or timer settings 
(though it did have a memory leak until recently, it wasn't 
apparent until something had been running for a really long 
time - I use it in my day-to-day terminal emulator, so I have 
like 40 copies of the process running for months on end here...)



I'm leaving in 2 minutes for church btw so I might not answer 
you for about 5 hours when I'm back at the computer.


Well, it's about the same when I comment all my code out! It 
drops about 30 megs and a percent or two but still quite large.


When I remove all code, it is 2MB in size and 0 cpu(I just use 
Sleep to keep it from terminating).


When I try to use a sleep before the event loop and just call 
create2dWindow, I get


Error: undefined identifier 'Sleep' in module 'core.thread', did 
you mean function 'Sleep'?		


Which is godly in its descriptive powers, right?

The code added is

import core.thread;
core.thread.Sleep(1);

which is the same code I use in main() which works(worked)

import core.thread : Sleep;
Sleep(1);

works though. Basically keeping the event loop uses around 12% 
cpu and 12MB of memory. Adding in my code, which simply uses your 
png to load some images and display them balloons it to 400MB. 
The exe is only 7MB in size.


So, I believe it is your code. The event loop is using quite a 
bit of cpu even when not "doing" anything(haven't look at it 
behind the scenes though).


The memory is probably from loading the images, possibly doubling 
all the images to powers of 2 might explain some of the bloat. I 
have a few large images that when uncompressed might be 20-40MB 
total and several smaller ones, probably insignificant. Shouldn't 
add up to 300MB though.


Once I get further in I'll try to see whats going on. I haven't 
noticed it leaking memory though.


Do you know if there is a way to get the largest used memory 
chunks and what is using them? That might tell the story!









Re: What's up with GDC?

2016-06-12 Thread Joerg Joergonson via Digitalmars-d-learn

On Sunday, 12 June 2016 at 09:11:09 UTC, Johan Engelen wrote:

On Sunday, 12 June 2016 at 04:19:33 UTC, Joerg Joergonson wrote:


Here are the versions

The one that isn't working:
LDC - the LLVM D compiler (30b1ed):
  based on DMD v2.071.1 and LLVM 3.9.0git-d06ea8a
  built with LDC - the LLVM D compiler (1.0.0)
  Default target: x86_64-pc-windows-msvc
  Host CPU: skylake
  http://dlang.org - http://wiki.dlang.org/LDC

The one that is:
B:\DLang\ldc2\bin>ldc2 -version
LDC - the LLVM D compiler (1.0.0):
  based on DMD v2.070.2 and LLVM 3.9.0git
  built with LDC - the LLVM D compiler (1.0.0)
  Default target: i686-pc-windows-msvc
  Host CPU: skylake
  http://dlang.org - http://wiki.dlang.org/LDC


The first one is a pre-alpha (!) version.
The second one is our latest released version, which is the one 
I recommend you to use.


If you want your bugs to be noted and fixed, you should try 
that test version and report bugs. That's kind of what you are 
doing now, so thanks ;)  Of course, clarity in reporting is 
important to get bugs fixed...


Ok. Well, I didn't know I was using an alpha version. Two bugs I 
have:


1. Paths(imports to subdirs as explained in my other posts) are 
not correct. It seems to be dropping the last '\'. Probably a 
simple substring range bug. (e.g., 1..$-2 instead of 1..$-1). 
Since it works in the previous version, it shouldn't be too hard 
to diff on the modules path resolution code.


2. There seems to be an issue with the x86 libs not being 
correctly resolved. The default is to choose x64. When I compile 
for x86 it still uses them. Maybe a compiler switch is needed, 
but regardless this isn't good practice. -m32 is being passed, 
maybe this should select the 32-bit configuration?


If these are regressions then someone needs to be fired!!! If 
not, someone still needs to be fired! Or at least be forced to 
buy me a ham burger or something!




Re: What's up with GDC?

2016-06-12 Thread Joerg Joergonson via Digitalmars-d-learn

On Sunday, 12 June 2016 at 12:38:25 UTC, Adam D. Ruppe wrote:

On Sunday, 12 June 2016 at 04:19:33 UTC, Joerg Joergonson wrote:

2. I got an error that I don't get with dmd:

Error: incompatible types for ((ScreenPainter) !is (null)): 
cannot use '!is' with types		


and I have defined ScreenPainter in my code. It is also in 
arsd's simpledisplay. I do not import simpledisplay directly:


It's probably the difference of the recent import rules bug 
fix. gamehelpers public imports simpledisplay, so its 
ScreenPainter will be visible there too.


You can probably just exclude it from the import by doing 
selective:


import arsd.gamehelpers : create2dWindow;

and comma list anything else you use from there (probably just 
OpenGlTexture, it is a small module).



But yeah, weird that it is different, but this was REALLY 
recently changed so if the release differs by just one month in 
version it could account for the difference.


Although, If I set the subsystem to windows I then get the 
error


There's another MSFT library needed there; passing 
`-L/entry:mainCRTStartup` to the build should do it.


dmd 32 bit has its own library so it isn't needed there, but 
dmd 64 bit and ldc both I believe need the entry point.



Thanks. It worked! BTW, when I compile a simple project with your 
simpledisplay it takes up around 300MB(for ldc, 400 for dmd) and 
uses about 15% cpu. Basically just a modified example that draws 
some images which maybe take up 20MB total. Any ideas why it's 
taking up so much space and cpu? What's it doing on your machine?





Re: What's up with GDC?

2016-06-12 Thread Joerg Joergonson via Digitalmars-d-learn

On Sunday, 12 June 2016 at 05:08:12 UTC, Mike Parker wrote:

On Sunday, 12 June 2016 at 04:19:33 UTC, Joerg Joergonson wrote:



1. I had an older distro(I think) of ldc. The ldc2.exe is 18MB 
while the "new" one is 36MB. I copied the old ldc bin dir to 
the new one and didn't change anything and everything compiled 
EXCEPT


That's just asking for problems. You may get lucky and find 
that it works, but in general you don't want to go around 
swapping compiler executables like that.





2. I got an error that I don't get with dmd:

Error: incompatible types for ((ScreenPainter) !is (null)): 
cannot use '!is' with types		


and I have defined ScreenPainter in my code. It is also in 
arsd's simpledisplay. I do not import simpledisplay directly:


The code is

import arsd.gamehelpers;
	auto window = create2dWindow(cast(int)width, cast(int)height, 
cast(int)ViewportWidth, cast(int)ViewportHeight, title);


// Let the gui handle painting the screen
window.redrawOpenGlScene = {
if (ScreenPainter !is null || !ScreenPainter()) <--- error
...
};

I have defined ScreenPainter elsewhere in the module.


So ScreenPainter is a *type* and not an instance of a type? 
That *should* be an error, no matter which compiler you use. 
You can't compare a type against null. What does that even 
mean? And if SimpleDisplay already defines the type, why have 
you redefined it?


Assuming ScreenPainter is a class, then:

if(ScreenPainter !is null) <-- Invalid

auto sp = new ScreenPainter;
if(sp !is null) <-- valid




I can fix this by avoiding the import... but the point is that 
it's different than dmd.


If ScreenPainter is defined as a type, it shouldn't ever 
compile. How have you defined it?




So ldc parses things differently than dmd... I imagine this is 
a bug!


LDC and DMD share the same front end, meaning they parse code 
the same. I really would like to see your code, particularly 
your definition of ScreenPainter.


Although, If I set the subsystem to windows I then get the 
error


error LNK2019: unresolved external symbol WinMain referenced 
in function "int __cdecl __scrt_common_main_seh(void)" 
(?__scrt_common_main_seh@@YAHXZ)

Which looks like it's not linking to runtime and/or phobos?


No, this has nothing to do with DRuntime, Phobos, or even D. 
It's a Windows thing. By default, when you compile a D program, 
you are creating a console system executable so your primary 
entry point is an extern(C) main function that is generated by 
the compiler. This initializes DRuntime, which in turn calls 
your D main function.


When using OPTLINK with DMD, the linker will ensure that the 
generated extern(C) main remains the entry point when you set 
the subsystem to Windows. When using the MS linker, this is not 
the case. The linker will expect WinMain to be the entry point 
when the subsystem is set to Windows. If you do not have a 
WinMain, you will see the error above. In order to keep the 
extern(C) main as your entry point, you have to pass the /ENTRY 
option to the linker (see[1]). LDC should provide a means for 
passing linker options. You can probably set in VisualD's 
project settings, but if there's no field for it then ldc2 
--help may tell you what you need.


Alternatively, you can create a WinMain, but then you need to 
initialize DRuntime yourself. See [2], but ignore the .def file 
requirement since you are already setting the subsystem.
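The manual-WinMain alternative looks roughly like this (a sketch based on the usual pattern from [2]; `myMain` is a hypothetical name for your real entry point):

```
// Minimal WinMain that initializes DRuntime by hand (alternative to /ENTRY).
import core.runtime;
import core.sys.windows.windows;

extern (Windows)
int WinMain(HINSTANCE hInstance, HINSTANCE hPrev, LPSTR cmdLine, int cmdShow)
{
    int result;
    try
    {
        Runtime.initialize();  // set up the GC, run module constructors, etc.
        result = myMain();     // hypothetical: your real D entry point
        Runtime.terminate();
    }
    catch (Throwable t)
    {
        result = 1;            // runtime not usable; report failure
    }
    return result;
}

int myMain()
{
    // ... application code ...
    return 0;
}
```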





I seriously don't know how anyone gets anything done with all 
these problems ;/ The D community can't expect to get people 
interested in things don't work. If it wasn't because the D 
language was so awesome I wouldn't stick around! It's as if no 
one does any real testing on this stuff before it's released ;/


Then I would ask you to stop and think. There are a number of 
people using D without problems every day, including several 
companies. Obviously, they aren't having the same difficulties 
you are, else they wouldn't be successfully using D. You seem 
to be very quick to blame the tools rather than considering you 
might not fully understand how to use them. I don't mean that 
in a disparaging way. I've been there myself, trying to get 
something I wanted to use working and failing, then getting 
frustrated and blaming the tools. These days, I always blame 
myself first. Sure, the tools sometimes have bugs and other 
issues, but more often than not it's because I'm using them the 
wrong way.


Right now, documentation on getting up to speed with LDC is 
sorely lacking. That's a valid criticism to make. For people 
who aren't familiar with it, or who aren't well versed in 
working with ahead of time compilers, whichever the case may 
be, it may not be the best choice for getting started with D. 
Since you seem to be having difficulties using LDC and since 
you've already told me that DMD is working for you, I strongly 
recommend that you use DMD instead for now. Once you are 

Re: Parse File at compile time, but not embedded

2016-06-11 Thread Joerg Joergonson via Digitalmars-d-learn

On Friday, 10 June 2016 at 07:03:21 UTC, ketmar wrote:
On Thursday, 9 June 2016 at 22:02:44 UTC, Joerg Joergonson 
wrote:
Lol, who says you have access to my software? You know, the 
problem with assumptions is that they generally make no sense 
when you actually think about them.


oh, yeah. it suddenly reminds me about some obscure thing. 
other people told me that they were able to solve the same 
problems with something they called "build system"...


Mine's not a build system...

In any case LDC does drop the data so it is ok.

The problem with people is that they are idiots! They make 
assumptions about other people's stuff without having any clue 
what actually is going on rather than addressing the real issue. 
In fact, the thing I'm doing has nothing to do with SQL, 
security, etc. It was only an example. I just don't want crap in 
my EXE that shouldn't be there, simple as that. Also, since I'm 
the sole designer and the software is simple, I have every right 
to do it how I want.


What's strange, though, is my little ole app takes 300MB's and 
constantly uses 13% of the cpu... even though all it does is 
display a few images. This is with LDC release. Doesn't seem very 
efficient. I imagine a similar app in C would take about 1% and 
20MB. Hopefully profiling in D isn't as much a nightmare as 
setting it up.


BTW, I'm using simpledisplay... I saw that you made a commit or 
something on github. Are you noticing similar cpu and memory 
usage?









Re: What's up with GDC?

2016-06-11 Thread Joerg Joergonson via Digitalmars-d-learn

On Sunday, 12 June 2016 at 03:22:06 UTC, Mike Parker wrote:

On Sunday, 12 June 2016 at 02:09:24 UTC, Joerg Joergonson wrote:
Ok, So I started an empty project and I found all the libs 
that are required from all of VS, SDK, LDC, DMD, etc and put 
them in 4 folders:


Libs\COFF\x86
Libs\COFF\x64
Libs\OMF\x86
Libs\OMF\x64


There's no need for OMF\x64. OPTLINK is 32-bit only. That's why 
DMD uses the MS tools for 64-bit. And you never need OMF for 
LDC.


Yeah, I know. That's why it's empty. I made it anyways though to 
be consistent and not knowing if there was possibly any use. I 
wanted to make sure to cover all cases.




fixed up sc.ini and VD to use them and worked on stuff until I 
had no lib errors with the test project. I could compile with 
all versions now(DMD x86/64, LDC x86/64)


You said in your previous post that DMD was working fine for 
you. I would recommend against editing sc.ini except in the 
case where you do manual installs of DMD and need to configure 
it to work with Visual Studio. It's a pain to have to do it 
every time you update. Much better to use the installer and let 
it configure the VS paths for you.


Well, it's never really worked before. I've always had to 
manually edit it and add the VS sdk paths to get DMD working. The 
problem is, when you have many SDK's and kits, nothing plays nice 
together.


What I have now is at least something that is consistent and I 
can simply archive it all and it should work in all future cases. 
Uninstalling a kit and reinstalling one isn't going to fubar dmd.


I'll keep it this way because it works and the only issue is 
keeping it up to date. At least I don't have to go looking in 
some crazy long path for VS libs like C:\program files 
(x86)\Visual Studio\VC\Lib\um\arm\x86\amd\aldlf\crapola\doesn't 
contain everything\1.0534.4303020453414159265.




So, ldc is essentially working... gdc probably is the same if 
I can figure out how to get the proper binaries(not that 
arm-unknown-linux crap) that are not so far out of date. At 
this point I still need to get ldc to work though.


I would recommend against GDC for now. Until someone steps up 
and starts packaging regular MinGW-based releases, it's 
probably not worth it.




Ok. I'll stick with LDC if it works since at least there is a 
something that can be used for properly releasing software.


I probably just need to figure out how to properly include the 
library files mentioned in my other post.


I did try to include the path to the files in VD's LDC 
settings section but it did nothing.


Did you also include the libraries in the project settings? You 
can:


A) Add the path to 'Library Search Path' and add the library 
names to 'Library Files' or,
B) Add the full path and filename for each library to 'Library 
Files'.


I strongly recommend against doing this for system libraries 
like OpenGL. If LDC is configured to know where your VS 
installation is (the only version of LDC I've ever used was an 
old MinGW-based one, so I don't know how the VS version finds 
the MS libraries), then you should only need to include the 
file name and it will use the one from the MS SDK.


Well, I don't know. I monitored LDC2 and it is actually searching 
for the modules without the slash:


B:\Software\App\Test\libmBase.d

When it should be

B:\Software\App\Test\lib\mBase.d

I created a test project that mimics the main project and it 
works and does not have that issue... So possibly my project is 
"corrupt". I will try and mess with it later or move the project 
over into a new one incrementally until the issue happens.


---

Ok, tried a simple copy and paste type of thing and same issue. 
This seems to be a bug in ldc or visual d.


--

Ok, this is a bug in ldc.

1. I had an older distro(I think) of ldc. The ldc2.exe is 18MB 
while the "new" one is 36MB. I copied the old ldc bin dir to the 
new one and didn't change anything and everything compiled EXCEPT


2. I got an error that I don't get with dmd:

Error: incompatible types for ((ScreenPainter) !is (null)): 
cannot use '!is' with types		


and I have defined ScreenPainter in my code. It is also in arsd's 
simpledisplay. I do not import simpledisplay directly:


The code is

import arsd.gamehelpers;
	auto window = create2dWindow(cast(int)width, cast(int)height, 
cast(int)ViewportWidth, cast(int)ViewportHeight, title);


// Let the gui handle painting the screen
window.redrawOpenGlScene = {
if (ScreenPainter !is null || !ScreenPainter()) <--- error
...
};

I have defined ScreenPainter elsewhere in the module.


I can fix this by avoiding the import... but the point is that 
it's different than dmd.


So ldc parses things differently than dmd... I imagine this is a 
bug!


Fixing it though does produce an executable!

Although, if I set the subsystem to Windows I then get the error

 error LNK2019: unresolved external symbol WinMain referenced in 
function 

Re: No triangle with OpenGL (DerelictGLFW and DerelictGL3)

2016-06-11 Thread Joerg Joergonson via Digitalmars-d-learn

On Sunday, 12 June 2016 at 02:16:52 UTC, Peter Lewis wrote:

Hi all.
I am trying to create a basic OpenGL triangle in a GLFW 
instance. The window works, I can change the background colour 
and everything but for the life of me I can't get the triangle 
to show up. Instead of trying to put everything in the post, I 
have put it on github. (https://github.com/werl/d_glfw_tests) I 
am currently following a tutorial I found 
(http://learnopengl.com/#!Getting-started/Hello-Triangle).

Any help is appreciated, Thanks!


I had similar issues when I tried it and never got it to work 
either.


Try Adam D. Ruppe's stuff on github. It works and he has some 
support for images and stuff. It's not perfect but at least it 
works out of the box compared to a lot of the other stuff 
floating around, most of which is outdated.




Re: What's up with GDC?

2016-06-11 Thread Joerg Joergonson via Digitalmars-d-learn
Ok, So I started an empty project and I found all the libs that 
are required from all of VS, SDK, LDC, DMD, etc and put them in 4 
folders:


Libs\COFF\x86
Libs\COFF\x64
Libs\OMF\x86
Libs\OMF\x64

fixed up sc.ini and VD to use them and worked on stuff until I 
had no lib errors with the test project. I could compile with all 
versions now(DMD x86/64, LDC x86/64)


So, ldc is essentially working... gdc probably is the same if I 
can figure out how to get the proper binaries(not that 
arm-unknown-linux crap) that are not so far out of date. At this 
point I still need to get ldc to work though.


I probably just need to figure out how to properly include the 
library files mentioned in my other post.


I did try to include the path to the files in VD's LDC settings 
section but it did nothing.










Re: What's up with GDC?

2016-06-11 Thread Joerg Joergonson via Digitalmars-d-learn

On Saturday, 11 June 2016 at 08:48:42 UTC, Mike Parker wrote:
On Saturday, 11 June 2016 at 06:22:27 UTC, Joerg Joergonson 
wrote:

[...]


That's not true unless I'm not supposed to import them 
directly. When I switch to 64-bit builds I get the same errors. 
Basically only dmd x86 works.


It's true if everything is configured properly.




1. How to tell the difference between different libs and what 
kind of libs are required for what kind of compiler and build.


        x86    x64
DMD      .      .
LDC      .      .
GDC      .      .

Please fill in the table for windows, linux, etc.


On Windows/Mac/FreeBSD, if you know how to configure libraries 
for gcc, then you know how to configure them with DMD. You 
install the packages you need from your package manager, they 
are installed by default in the appropriate system paths, and 
when DMD invokes the system linker they will be found.


Windows is only problematic because there's no such thing as a 
system linker, nor are there any standard system paths for 
libraries. Given the following command line:


dmd foo.d user32.lib

The compiler will invoke OPTLINK, which it ships with. This 
guarantees that you always have a linker if DMD is installed. 
You will find several Win32 libraries in the dmd2/windows/lib 
directory. They are all in OMF format for OPTLINK and they 
are all a bit outdated. With the above command line, the 
user32.lib in that directory will be used. If you need a Win32 
system library that does not ship with DMD (such as 
OpenGL32.lib) you will need to provide it yourself in the OMF 
format to use it with OPTLINK.


Add either the -m32mscoff or -m64 command line switch and DMD 
will invoke whichever Microsoft linker it is configured to 
call, meaning it will no longer use the libraries that ship 
with DMD. If you've installed one of paid versions of VS, or 
one of the Community Editions, all of the Win32 libraries will 
be installed as well. If you're using one of the VS Express 
versions, you'll need a Windows SDK installed to have the Win32 
libraries.
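
The three toolchain paths above can be made concrete with the 
actual DMD switches (these flags are real; the library resolution 
for each is as described):

```shell
dmd foo.d              # 32-bit OMF: OPTLINK + dmd2/windows/lib libraries
dmd -m32mscoff foo.d   # 32-bit COFF: MS linker + VS/Windows SDK libraries
dmd -m64 foo.d         # 64-bit COFF: MS linker + VS/Windows SDK libraries
```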



2. How does the D build system fetch the libs? It uses the 
sc.ini path? What about Visual D?


Yes, on Windows, sc.ini is where the library paths are 
configured. If you installed DMD manually, or used the 
installer but installed Visual Studio after, then you will need 
to edit sc.ini to make sure the paths point to the proper VS 
locations. If you run the DMD installer *after* installing Visual 
Studio, the installer will configure sc.ini for you. I 
recommend you take that approach. It just works.


On other systems, dmd.conf is used to configure the location of 
libphobos2, but system libraries are on the system path (just 
as when using gcc).




Where does D look for these? I assume in the "libs" directory?
   pragma(lib, "opengl32");
   pragma(lib, "glu32");



On Windows, it looks on the path configured in sc.ini or 
whatever import path you have provided on the command line with 
-I, just as it does with any libraries you pass on the command 
line. So that means that compiling without -m32mscoff or -m64, 
it looks in the dmd2/windows/lib directory. Since opengl32 and 
glu32 do not ship with DMD, it will not find them there. So you 
either need to put COFF format libs there or tell the compiler 
where to find them with -I. With -m32mscoff or -m64, it looks 
for the Microsoft version of those libraries in the Visual 
Studio or Windows SDK installation, which you should have 
configured in sc.ini as I explained above.


On other systems, the OpenGL libraries are found on the system 
library path.


3. How to get different compilers and versions to play along? 
I would eventually like to build for win/lin/osx for both x64 
and x86.


On Windows, Just make sure you provide any third-party 
libraries you need, along with any system libraries you need 
that DMD does not ship with, in the OMF format when using 
OPTLINK on Windows and tell DMD where to find them. When using 
the MS toolchain, the system libraries are all there, so you 
need not provide any. You'll just need to make sure any 
third-party libraries are in the COFF format (preferably 
compiled with the same version of Visual Studio you are using 
to link your D programs).


On Linux and OSX, just make sure the dev packages of any 
libraries you need are installed. You do need to account for 
different library names (e.g. OpenGL32.lib on Windows vs. 
libopengl elsewhere, so your pragmas above should include '32' 
in the name only on Windows).
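
That naming difference can be handled in code with a version 
block (a sketch; the non-Windows link names shown are typical but 
should be adjusted to whatever your system's GL libraries are 
actually called):

```d
version (Windows)
{
    pragma(lib, "OpenGL32");
    pragma(lib, "GLu32");
}
else
{
    // Link names without the '32' suffix on non-Windows systems
    pragma(lib, "GL");
    pragma(lib, "GLU");
}
```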


Alternatively, you might try one of the dynamic bindings[1] to 
a library you need, such as DerelictGL3. Then there is no link 
time dependency, as the shared libraries are loaded at runtime. 
On Windows, that completely eliminates the COFF vs. OMF issues. 
As long as the DLLs match your executable's architecture 
(32-bit vs. 64-bit), then it doesn't matter what compiler was 
used to create them when loading them at runtime.
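
The runtime-loading approach looks roughly like this with 
DerelictGL3 (a sketch based on its documented load/reload API; 
window and context creation are elided and depend on your 
windowing library):

```d
import derelict.opengl3.gl3;

void main()
{
    // Load the base OpenGL functions from the system shared library
    // (opengl32.dll on Windows, libGL elsewhere) at run time.
    DerelictGL3.load();

    // ... create a window and an OpenGL context here ...

    // Once a context exists, load the extended/core profile functions.
    DerelictGL3.reload();
}
```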



I am using Visual D, BTW. It seems to have a lot of stuff 

Re: Parse File at compile time, but not embedded

2016-06-11 Thread Joerg Joergonson via Digitalmars-d-learn

On Saturday, 11 June 2016 at 13:03:47 UTC, ketmar wrote:

On Friday, 10 June 2016 at 18:47:59 UTC, Joerg Joergonson wrote:
In any case, this is impossible. D has no such concept as 
"compile-time-only" values, so any usage of a value risks 
embedding it into the binary.

sure, it has.

template ParseData (string text) {
  private static enum Key = "XXXyyyZZZ33322211\n";
  private static enum TRet = "int data = 3;";
  private static enum FRet = "int data = 4;";
  static if (text.length >= Key.length) {
static if (text[0..Key.length] == Key)
  enum ParseData = TRet;
else
  enum ParseData = FRet;
  } else {
enum ParseData = FRet;
  }
}

void main () {
  mixin(ParseData!(import("a")));
}


look, ma, no traces of our secret key in binary! and no traces 
of `int data` declaration too!


This doesn't seem to be the case though in more complex examples 
;/ enums seem to be compile time only in certain conditions. My 
code is almost identical do what you have written except 
ParseData generates a more complex string and I do reference 
parts of the "Key" in the generation of the code. It's possible 
DMD keeps the full code around because of this.






Re: What's up with GDC?

2016-06-11 Thread Joerg Joergonson via Digitalmars-d-learn
On Saturday, 11 June 2016 at 16:04:45 UTC, Christophe Meessen 
wrote:
Real professionals won't have difficulties to find binaries for 
ldc: https://github.com/ldc-developers/ldc/releases




They also don't waste their time posting asinine comments.


Re: What's up with GDC?

2016-06-11 Thread Joerg Joergonson via Digitalmars-d-learn

On Saturday, 11 June 2016 at 05:12:56 UTC, Mike Parker wrote:
On Saturday, 11 June 2016 at 04:20:38 UTC, Joerg Joergonson 
wrote:

On Saturday, 11 June 2016 at 01:43:21 UTC, Adam D. Ruppe wrote:


What's the exact message and what did you do? The 
opengl32.lib I have on my github is for dmd 32 bit, ldc uses 
the Microsoft one I think so you shouldn't need anything else.


It just says the format is invalid. I used the one you 
supplied in the package and never worried about it. I'll try 
some other libs when I get a chance.




OpenGL32.lib and glu32.lib are part of the Windows SDK. 
Assuming you've got VS 2015 installed, they should be part of 
the installation and should be available out of the box. Adam's 
lib is solely for use with OPTLINK when compiling with DMD 
using the default -m32 on Windows, since DMD does not ship with 
the opengl lib. When compiling with -m32mscoff or -m64, it will 
use Visual Studio's libraries.


That's not true unless I'm not supposed to import them directly. 
When I switch to 64-bit builds I get the same errors. Basically 
only dmd x86 works.


It could be my setup.

This is EXACTLY what anyone who is doing this sort of stuff needs 
to know:


1. How to tell the difference between different libs and what 
kind of libs are required for what kind of compiler and build.


        x86    x64
DMD      .      .
LDC      .      .
GDC      .      .

Please fill in the table for windows, linux, etc.

If I know this information then it is at least easy to make sure 
I have matching socks. Else it's kinda pointless to put on shoes?



2. How does the D build system fetch the libs? It uses the 
sc.ini path? What about Visual D?


Just because I have the correct stuff from 1 doesn't mean it is 
in the "correct" place.


Where does D look for these? I assume in the "libs" directory?
   pragma(lib, "opengl32");
   pragma(lib, "glu32");

3. How to get different compilers and versions to play along? I 
would eventually like to build for win/lin/osx for both x64 and 
x86.


Without these 3 ingredients, everything is futile!


I am using Visual D, BTW. It seems to have a lot of stuff setup 
by default but I haven't gone in and messed with the settings 
much.











Re: What's up with GDC?

2016-06-10 Thread Joerg Joergonson via Digitalmars-d-learn

On Saturday, 11 June 2016 at 01:43:21 UTC, Adam D. Ruppe wrote:

On Friday, 10 June 2016 at 22:01:15 UTC, Joerg Joergonson wrote:
The problem I'm getting with ldc, using your simpledisplay, is 
that the libs aren't loading due to the wrong format.



What's the exact message and what did you do? The opengl32.lib 
I have on my github is for dmd 32 bit, ldc uses the Microsoft 
one I think so you shouldn't need anything else.


It just says the format is invalid. I used the one you supplied 
in the package and never worried about it. I'll try some other 
libs when I get a chance.


BTW I make your code a bit better with resizing


case WM_SIZING:
goto size_changed;
break;
case WM_ERASEBKGND: 
break;


Handling the erase background message prevents Windows from 
clearing the window, even with OpenGL. This allows resizing the 
window while still showing the content. The only problem is that 
the opengl content shifts up and down as the resize happens... It 
is probably an issue with the size change updates. It's pretty 
minor though and better than what it was (when I'd resize, the 
window would go white and stay white as long as the mouse was 
moving).


The specific errors are

opengl32.lib : warning LNK4003: invalid library format; library 
ignored
glu32.lib : warning LNK4003: invalid library format; library 
ignored
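
One way to check what format a .lib actually is: OPTLINK wants 
OMF, the MS toolchain produces COFF, and dumpbin (which ships 
with Visual Studio; run it from a VS developer prompt) only 
parses the latter:

```shell
# Shows the target machine (x86/x64) for a COFF library;
# errors out or prints nothing useful if the .lib is OMF.
dumpbin /headers opengl32.lib | findstr machine
```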









Re: Asio Bindings?

2016-06-10 Thread Joerg Joergonson via Digitalmars-d-learn

On Friday, 10 June 2016 at 20:52:46 UTC, Andrej Mitrovic wrote:
On 6/9/16, Joerg Joergonson via Digitalmars-d-learn 
<digitalmars-d-learn@puremagic.com> wrote:

[...]


Just to show that I'm not full of shit, here's the e-mail chain:

On 6/3/11, Andrej Mitrovic <andrej.mitrov...@gmail.com> wrote:

[...]




On 6/7/11, Yvan Grabit <y.gra...@steinberg.de> wrote:

[...]




On 6/23/11, Andrej Mitrovic <andrej.mitrov...@gmail.com> wrote:

[...]



On 6/23/11, Yvan Grabit <y.gra...@steinberg.de> wrote:

[...]




On 6/23/11, Andrej Mitrovic <andrej.mitrov...@gmail.com> wrote:

[...]




Unfortunately this is the last I've heard from them at the 
time..


Well, I definitely didn't think you were full of shit! But I see 
no negative statements against doing what you did. They simply 
said that you can't include their code directly... they want 
people who download your code to have to download the sdk from 
Steinberg.


You own your code, no one else, not even Steinberg... but you 
don't own or have the right, as they mention, to publish their 
code with yours.


Creating bindings is not a license infraction.


[...]



Are you using any of their source code from the vst sdk? If you 
hand re-write any of their source code, it is yours.


You say that the only two files from the sdk are aeffect.h and 
aeffectx.h, right?


Is it these:

https://github.com/falkTX/dssi-vst/blob/master/vestige/aeffectx.h

https://sourceforge.net/u/geyan123d/freepiano/ci/54f876d52c6f49925495f7ed880bd2434bda0504/tree/src/vst/aeffect.h?format=raw

http://www.dith.it/listing/vst_stuff/vstsdk2.4/doc/html/aeffect_8h.html

https://source.openmpt.org/svn/openmpt/branches/devBranch_1_17_03/include/AEffect.h

etc...

If so, as you can see, people already do what you're wanting to 
do, and there is probably a reason they stopped responding: it's 
because they can't do much as long as you don't include anything 
from them.


Of course, don't take my word... maybe ask on stack overflow or 
something? If you are simply worried about being sued then that 
problem is relatively easily fixed! Just ask Walter or Andrei to 
fund your defense ;)


Don't let your hard work go to waste! ;) I think you should be 
more afraid of all the questions I'll be asking you on how to use 
it rather than being sued by Steinberg! ;)






Re: What's up with GDC?

2016-06-10 Thread Joerg Joergonson via Digitalmars-d-learn

On Friday, 10 June 2016 at 21:33:58 UTC, Adam D. Ruppe wrote:

On Friday, 10 June 2016 at 20:30:36 UTC, Joerg Joergonson wrote:
Why aren't there proper binaries for ldc and gdc that work 
out of the box like dmd?  There used to be. What's up with all 
this arm-linux-genuabi crap?


Those are proper binaries that work out of the box on different 
platforms.


Alas Windows doesn't have prebuilt binaries for gdc targeting 
Windows. Weird, the Windows binaries there generate ARM 
binaries (for phones and stuff).


LDC has Win32 builds though:
https://github.com/ldc-developers/ldc/releases/download/v1.0.0/ldc2-1.0.0-win32-msvc.zip


The problem I'm getting with ldc, using your simpledisplay, is 
that the libs aren't loading due to the wrong format. It's the 
omf vs coff thing or whatever, I guess... I didn't feel like 
messing with it right now(later I'll need both x86 and x64 libs 
that work with dmd, ldc, and gdc). But yea, ldc basically worked 
similar to dmd, but not gdc.









Re: What's up with GDC?

2016-06-10 Thread Joerg Joergonson via Digitalmars-d-learn

On Friday, 10 June 2016 at 19:51:19 UTC, Johan Engelen wrote:

On Friday, 10 June 2016 at 19:37:13 UTC, Joerg Joergonson wrote:


 arm-linux-genuabi? arm-linux-gnueableihfqueridsofeyfh?  
aifh-fkeif-f-fdsskjhfkjfafaa?


Rofl!

and ldc requires building from sources(actually I didn't have 
too much trouble with installing it but it doesn't work with 
my libs because of the crappy coff issues that D has had since 
birth(it's like a tumor)).


Why do you have to build from sources? Any details about the 
problems you see?


Thanks,
  Johan


Well, the post was a bit incoherent because getting all this 
stuff working is. I was searching for ldc and ran across some web 
site that had only the sources(same for gdc).


The point of it all is that things seem to be a bit 
discombobulated and make D look bad.  Professionals won't use D 
if it can't be used professionally (not that I'm a pro, just 
saying).


Why aren't there proper binaries for ldc and gdc that work out 
of the box like dmd?  There used to be. What's up with all this 
arm-linux-genuabi crap? When one opens up the archive all the 
files are named that way too.  There is no explanation of what 
that means. Did some kid write this stuff in his basement or is 
this supposed to be serious? Do people think about the end user 
when creating this stuff or is it just a eureka moment 
"Lightbulb: Let's create some spaghetti!".


I would have thought things would have gotten easier and more 
logical but that doesn't seem to be the case.





What's up with GDC?

2016-06-10 Thread Joerg Joergonson via Digitalmars-d-learn

version 2.062? 2.066.1?

arm-linux-gnueabi   
arm-linux-gnueabihf

?

I remember a year ago when I tried D for the first time I 
downloaded both gdc and ldc and everything just worked and each 
install was just like dmd!  Now it seems like a step backwards 
and I'm not sure what is going on.  arm-linux-genuabi? 
arm-linux-gnueableihfqueridsofeyfh?  
aifh-fkeif-f-fdsskjhfkjfafaa?


Is D dead?  If DMD is the reference compiler and can't be used 
for production code... and gdc is about 1000 versions out of 
date... and ldc requires building from sources(actually I didn't 
have too much trouble with installing it but it doesn't work with 
my libs because of the crappy coff issues that D has had since 
birth(it's like a tumor)).


`Windows users should replace arm-gdcproject-linux-gnueabi-gdc 
with arm-gdcproject-linux-gnueabi-gdc.exe.`


Why so many hoops to jump through? D is making me feel like a 
poodle!







Re: Parse File at compile time, but not embedded

2016-06-10 Thread Joerg Joergonson via Digitalmars-d-learn

On Friday, 10 June 2016 at 12:48:43 UTC, Alex Parrill wrote:
On Thursday, 9 June 2016 at 22:02:44 UTC, Joerg Joergonson 
wrote:

On Tuesday, 7 June 2016 at 22:09:58 UTC, Alex Parrill wrote:
Accessing a SQL server at compile time seems like a huge 
abuse of CTFE (and I'm pretty sure it's impossible at the 
moment). Why do I need to install and set up a MySQL database 
in order to build your software?


Lol, who says you have access to my software? You know, the 
problem with assumptions is that they generally make no sense 
when you actually think about them.


By "I" I meant "someone new coming into the project", such as a 
new hire or someone that will be maintaining your program while 
you work on other things.


In any case, this is impossible. D has no such concept as 
"compile-time-only" values, so any usage of a value risks 
embedding it into the binary.


It seems that dmd does not remove the data if it is used in any 
way. When I started using the code, the data then appeared in the 
binary.


The access to the code is through the following

auto SetupData(string filename)()
{
   enum d = ParseData!(filename);
   //pragma(msg, d);
   mixin(d);
   return data;
}

The enum d does not have the data in it, as shown by the pragma. 
ParseData simply determines how to build data depending on 
external state and uses import(filename) to get the data.


Since the code compiles, obviously d is a CT constant. But after 
actually using "data" and doing some work with it, the imported 
file showed up in the binary.


Of course, if I just copy the pragma output and paste it in place 
of the first 3 lines, the external file isn't added to the 
binary (since there are obviously then no references to it).


So, at least for DMD, it doesn't do a good job at removing 
"dangling" references.  I haven't tried GDC or LDC.


You could probably use something like

string ParseData(string filename)()
{
    import std.string : splitLines;
    auto lines = splitLines(import(filename));
    if (lines[0] == "XXXyyyZZZ33322211")
        return "int data = 3;";
    return "int data = 4;";
}

So the idea is if the external file contains XXXyyyZZZ33322211 we 
create an int with value 3 and if not then with 4.


The point is, though, that `XXXyyyZZZ33322211` should never be in 
the binary since ParseData is never called at run time. At 
compile time, the compiler executes ParseData via CTFE and is 
able to generate the mixin string as if "int data = 3;" or 
"int data = 4;" had been typed directly instead.


The only difference between my code and the above is the 
generated string that is returned.


I'm going to assume it's a dmd thing for now until I'm able check 
it out with another compiler.












Re: Parse File at compile time, but not embedded

2016-06-09 Thread Joerg Joergonson via Digitalmars-d-learn

On Tuesday, 7 June 2016 at 22:09:58 UTC, Alex Parrill wrote:

On Monday, 6 June 2016 at 21:57:20 UTC, Pie? wrote:

On Monday, 6 June 2016 at 21:31:32 UTC, Alex Parrill wrote:

[...]


Not necessarily, you chased that rabbit quite far! The data 
you're reading could contain sensitive information only used at 
compile time and not meant to be embedded. For example, the file 
could contain the login and password to an SQL database that you 
then connect to at compile time, retrieving that information and 
then disregarding the password (it is not needed at run time).


Accessing a SQL server at compile time seems like a huge abuse 
of CTFE (and I'm pretty sure it's impossible at the moment). 
Why do I need to install and set up a MySQL database in order 
to build your software?


Lol, who says you have access to my software? You know, the 
problem with assumptions is that they generally make no sense 
when you actually think about them.


Re: Asio Bindings?

2016-06-08 Thread Joerg Joergonson via Digitalmars-d-learn

On Wednesday, 8 June 2016 at 23:19:13 UTC, Andrej Mitrovic wrote:

I do have (Steinberg) ASIO binding in D.

The problem is I couldn't release the bindings. I've asked 
Steinberg if it was OK to release D bindings and they were 
strongly against it unfortunately (and this was over 3 years 
ago..).


Any kind of direct use of ASIO requires their approval first.. 
meaning you had to register on their website.


I would recommend using third party libs that abstract the 
underlying engine, like PortAudio or RtAudio (the latter of 
which I'm going to release a port of soon!).


I had a binding to PortAudio but the devs of that library 
insisted on only supporting interleaved audio, RtAudio supports 
both interleaved and non-interleaved audio, and the library is 
easy to port.




Why would bindings have any issues with licensing? People release 
VST source code all the time. Sure they will be against it 
because they are muddlefudgers! They can't officially endorse it 
without having to deal with the user end to some degree, and 
Steinberg is known for that kind of behavior (just look at all 
the hoops one has to jump through to get ASIO in the first place).


Of course, I can't convince you, and I'll probably have to 
re-create your work, but hosting something like that on git 
shouldn't cause any problems. At most, SB will send you a cease 
and desist type of letter, in which case you take it down. Think 
of Mono: it essentially duplicated .NET and MS hasn't done a 
thing about it (they can't do much but flex their big muscles, 
and in this case they didn't).


I would appreciate it though if you thought about it again, it 
would save me a bunch of work!


If the problem is that you have included SB source code, then 
that can easily be remedied by removing it and placing an 
abstraction in its place where others can plug in the source 
when they d/l it from SB.


I haven't got into writing any audio stuff yet but when I look 
into it more I'll check out the options. I don't need anything 
overly complex but I do need low latency IO.