Re: A ready to use Vulkan triangle example for D

2016-05-30 Thread maik klein via Digitalmars-d-announce

On Monday, 30 May 2016 at 11:30:24 UTC, Manuel König wrote:

On Fri, 27 May 2016 18:40:24 +, maik klein wrote:


[...]


Nice, runs without errors for me. I have a triangle example 
project too, but weird stuff happens when I resize my window. I 
see your window has a fixed size; maybe I'll have more luck adding 
window resizing to your example. Will tell you when I get it to 
work.


Does anyone here have a working Vulkan setup with a resizable 
window?

I think it's more of an xcb issue than a Vulkan issue in my code, 
because even when I do
- create an xcb window with dimensions (w1, h1)
- resize it to dimensions (w2, h2) (no Vulkan interaction yet)
- create a Vulkan surface from that window
- render
the rendered image still has the original size (w1, h1), and I 
lose my Vulkan device when (w2, h2) deviates too much from the 
original size.


You probably have to update a lot of code:

https://github.com/MaikKlein/VulkanTriangleD/blob/master/source/app.d

Do a ctrl+f for vkcontext.width and you will see all the code that 
needs to be updated.
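
Roughly, the resize path boils down to waiting for the device to 
go idle, recreating the swapchain with the new extent, and 
rebuilding everything derived from it. A minimal sketch only, 
assuming ErupteD-style bindings (which keep the C names) and a 
hypothetical VkContext/enforceVk along the lines of what the repo 
uses:

// Sketch of a resize handler; VkContext, swapchainCreateInfo and
// enforceVk are placeholders, not the repo's actual API.
void onResize(ref VkContext ctx, uint newWidth, uint newHeight)
{
    // Never destroy resources the GPU might still be using.
    vkDeviceWaitIdle(ctx.device);

    ctx.width  = newWidth;
    ctx.height = newHeight;

    VkSwapchainCreateInfoKHR info = ctx.swapchainCreateInfo; // cached at startup
    info.imageExtent  = VkExtent2D(newWidth, newHeight);
    info.oldSwapchain = ctx.swapchain;

    VkSwapchainKHR newSwapchain;
    enforceVk(vkCreateSwapchainKHR(ctx.device, &info, null, &newSwapchain));
    vkDestroySwapchainKHR(ctx.device, ctx.swapchain, null);
    ctx.swapchain = newSwapchain;

    // Image views, framebuffers and any pipeline with a non-dynamic
    // viewport/scissor have to be rebuilt with the new extent as well.
}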


Re: A ready to use Vulkan triangle example for D

2016-05-28 Thread maik klein via Digitalmars-d-announce

On Sunday, 29 May 2016 at 00:37:54 UTC, Alex Parrill wrote:

On Saturday, 28 May 2016 at 19:32:58 UTC, maik klein wrote:
Btw does this even work? I think the struct initializers have to 
be

Foo foo = { someVar: 1 };

`:` instead of `=`.

I didn't do this because I actually got autocompletion for 
`vertexInputStateCreateInfo.` and that meant less typing for me.


No, it's equals. In C it's a colon, which is a tad confusing.


https://dpaste.dzfl.pl/bd29c970050a


Re: A ready to use Vulkan triangle example for D

2016-05-28 Thread maik klein via Digitalmars-d-announce

On Saturday, 28 May 2016 at 17:50:30 UTC, Alex Parrill wrote:

On Saturday, 28 May 2016 at 10:58:05 UTC, maik klein wrote:


derelict-vulcan only works on Windows; dvulkan doesn't have the 
platform-dependent surface extensions for xlib, xcb, win32 and 
wayland. Without them Vulkan is unusable for me.


I really don't care what I use, I just wanted something that 
works.


Platform extension support will be in the next release of 
d-vulkan. It doesn't include platform extensions now because I 
wanted to find a way to implement it without tying d-vulkan to a 
specific set of bindings, though I can't seem to find a good 
solution, unfortunately... I personally use the git version of 
GLFW, which takes care of the platform-dependent surface handling 
for me.


As for the demo itself... It might help explain things more if 
the separate stages (instance creation, device creation, 
setting up shaders, etc) were split into their own functions, 
instead of stuffing everything into `main`.


Struct initializers are also useful when dealing with Vulkan 
info structs, since you don't have to repeat the variable name 
each time. For example, this:


VkPipelineVertexInputStateCreateInfo vertexInputStateCreateInfo = {};
vertexInputStateCreateInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO;

vertexInputStateCreateInfo.vertexBindingDescriptionCount = 1;
vertexInputStateCreateInfo.pVertexBindingDescriptions = &bindingDescription;   // placeholder: address of your VkVertexInputBindingDescription

vertexInputStateCreateInfo.vertexAttributeDescriptionCount = 1;
vertexInputStateCreateInfo.pVertexAttributeDescriptions = &attributeDescription; // placeholder: address of your VkVertexInputAttributeDescription


Can become:

VkPipelineVertexInputStateCreateInfo vertexInputStateCreateInfo = {
    sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO, // also sType is pre-set with erupted or d-derelict

    vertexBindingDescriptionCount = 1,
    pVertexBindingDescriptions = &bindingDescription,     // placeholder, as above
    vertexAttributeDescriptionCount = 1,
    pVertexAttributeDescriptions = &attributeDescription, // placeholder, as above
};


I think it's personal preference; I like tutorials more when 
everything is just in main instead of creating their own 
"architecture". Though I could probably group things with 
comments.


I saw that sType was a default value after a few hours and that 
is when I started using it. But in the end I was so annoyed by 
typing all the enums by hand that I mostly copied stuff from 
other people and translated it to D.


This was mostly caused by my current vim D setup with vim-dutyl 
and dcd; it is really unreliable and I didn't get any sane 
autocompletion. (I have to investigate that at some point.)


Btw does this even work? I think the struct initializers have to 
be

Foo foo = { someVar: 1 };

`:` instead of `=`.

I didn't do this because I actually got autocompletion for 
`vertexInputStateCreateInfo.` and that meant less typing for me.


Re: A ready to use Vulkan triangle example for D

2016-05-28 Thread maik klein via Digitalmars-d-announce

On Saturday, 28 May 2016 at 03:02:23 UTC, WhatMeWorry wrote:

On Friday, 27 May 2016 at 18:40:24 UTC, maik klein wrote:

https://github.com/MaikKlein/VulkanTriangleD





Another dependency is ErupteD which I have forked myself 
because there is currently an issue with xlib-d and xcb-d with 
their versioning.




Nice work. As a person still trying to understand modern 
OpenGL, I admire your jump into Vulkan. Just a quick question 
if I may: why did you use ErupteD over, say, d-vulkan or 
derelict-vulcan? From my brief perusal of all three, they all 
seem kind of the same.


Thanks.


derelict-vulcan only works on Windows; dvulkan doesn't have the 
platform-dependent surface extensions for xlib, xcb, win32 and 
wayland. Without them Vulkan is unusable for me.


I really don't care what I use, I just wanted something that 
works.


A ready to use Vulkan triangle example for D

2016-05-27 Thread maik klein via Digitalmars-d-announce

https://github.com/MaikKlein/VulkanTriangleD

Currently only Linux is supported, but it should be fairly easy to 
add Windows support as well; only the surface extensions have to 
be changed.
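
The platform-specific part is essentially just which surface 
extension gets requested at instance creation. A hedged sketch of 
how that selection could look in D (the extension name strings are 
the standard Vulkan ones; the function itself is illustrative):

// Pick the instance-level surface extensions per platform.
const(char)*[] surfaceExtensions()
{
    version (Windows)
        return ["VK_KHR_surface".ptr, "VK_KHR_win32_surface".ptr];
    else version (linux)
        // or VK_KHR_xlib_surface / VK_KHR_wayland_surface, depending on the window system
        return ["VK_KHR_surface".ptr, "VK_KHR_xcb_surface".ptr];
    else
        static assert(0, "unsupported platform");
}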


The example requires Vulkan-ready hardware + driver + the LunarG 
SDK with validation layers + SDL2.


Another dependency is ErupteD which I have forked myself because 
there is currently an issue with xlib-d and xcb-d with their 
versioning.


The example is also not yet 100% correct, but it should run on 
most hardware.


I don't get any validation errors but I am sure I have made a few 
mistakes along the way.


It took me around 15 hours to get to a working triangle and I 
hope this might help someone who is interested in Vulkan.


Re: Battle-plan for CTFE

2016-05-11 Thread maik klein via Digitalmars-d-announce

On Monday, 9 May 2016 at 16:57:39 UTC, Stefan Koch wrote:

Hi Guys,

I have been looking into DMD now to see what I can do about 
CTFE.

Unfortunately it is a pretty big mess to untangle.
Code responsible for CTFE is in at least 3 files.
[dinterpret.d, ctfeexpr.d, constfold.d]
I was shocked to discover that the PowExpression actually 
depends on Phobos! (depending on the exact code path it may or 
may not compile...)
which led to me prematurely stating that it worked at CTFE 
[http://forum.dlang.org/thread/ukcoibejffinknrbz...@forum.dlang.org]


My plan is as follows.

Add a new file for my CTFE interpreter and update it gradually 
to take over more and more of the cases the code in the files 
mentioned above was used for.

Do data-flow analysis on the code that is to be CTFE'd so we can 
tell beforehand if we need to store state on the CTFE stack or 
not.

Or, barring proper data-flow analysis: refcounting the variables 
on the CTFE stack could also be a solution.


I will post more details as soon as I dive deeper into the code.


What is the current problem with CTFE?

Before I switched from C++ to D a few months ago I was heavily 
using Boost.Hana in C++. I tried to emulate Hana in D, which 
worked quite well, but the compile-time performance was absolutely 
horrific:


https://maikklein.github.io/2016/03/01/metaprogramming-typeobject/

After that I tried a few other things and I compared the compile 
times with


https://github.com/boostorg/hana/tree/master/benchmark

which I could never beat. The fastest thing, if I remember 
correctly, was string mixins but they used up too much memory.
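
For readers who haven't seen the technique: a toy sketch of 
compile-time code generation via string mixins below (nothing to 
do with hana itself). The code string is built with CTFE and 
pasted back in with mixin; in my experience this compiles fast, 
but every generated string has to be kept around by the compiler, 
which is presumably where the memory consumption comes from.

import std.conv : to;

// Generate the declarations for fields x0 .. x(n-1).
string makeFields(size_t n)
{
    string code;
    foreach (i; 0 .. n)
        code ~= "int x" ~ i.to!string ~ ";\n";
    return code;
}

struct ManyFields(size_t n)
{
    mixin(makeFields(n)); // evaluated at compile time (CTFE)
}

unittest
{
    ManyFields!3 m;
    m.x0 = 1;
    m.x2 = 3;
}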


But I have to say that I don't know much about D's internals and 
therefore don't know how I would optimize compile-time code 
execution.




Re: C#7 features

2016-05-09 Thread maik klein via Digitalmars-d-announce

On Monday, 9 May 2016 at 13:09:24 UTC, Jacob Carlborg wrote:

On 2016-05-09 14:46, John wrote:

C# 7's tuples are something different though. They don't even 
map to

System.Tuple. The syntax is:

    (int x, int y) GetPoint() {
        return (500, 400);
    }

    var p = GetPoint();
    Console.WriteLine($"{p.x}, {p.y}");


Would be nice to have in D. Both with and without named fields.


I mean, it is not much shorter than in D:

import std.typecons : Tuple;

alias Point = Tuple!(int, "x", int, "y");
Point getPoint(){
    return Point(500, 400);
}

What would be nice, though, is if tuples were implicitly 
convertible to named tuples if the types match.


Tuple!(int, "x", int, "y") getPoint(){
return tuple(500, 400);
}


Re: [Blog post] Why and when you should use SoA

2016-03-27 Thread maik klein via Digitalmars-d-announce

On Sunday, 27 March 2016 at 16:18:18 UTC, ZombineDev wrote:

On Saturday, 26 March 2016 at 20:55:17 UTC, maik klein wrote:

[snip]

Thanks, yes that is simpler.

But I am not sure that I want to have pluggable containers in 
SOA, mostly because every field would have overhead from the 
container.


For example, an array has size, length, etc. as overhead, but 
that is not much and probably won't matter anyway.


But I have also thought about it; maybe sometimes I want to use a 
map instead of an array for some fields. So I need a way of 
telling which field should get which container.


Maybe something like this:

SOA!(Foo, Array, HashMap, DList);

The current implementation is mostly for experimentation.


Never mind. Anything with memory representation different from 
an array would ruin cache locality. My thinking was that using 
a container defined somewhere else would simplify the code.


I tried a couple of approaches and came up with the following, 
which I think is the most efficient implementation possible in 
terms of space overhead and number of allocations (while still 
being generic):

http://dpaste.dzfl.pl/3de1e18756f8

It took me a couple of tries, but overall I'm satisfied with my 
code, although it's more low-level and more meta-heavy than 
yours.


I also thought about doing it this way but I wasn't sure that it 
would be better overall.


I am not sure that one big buffer is better than several smaller 
ones overall.


I mean, it is definitely more space-efficient because you only 
have one pointer, and reallocation is one big reallocation instead 
of several smaller ones.


But it seems to me that smaller reallocations might be cheaper 
because you should have a higher chance of growing without 
reallocating.


Then again, your approach will have no fragmented memory at all, 
which might also be a good thing.


I just don't have enough knowledge to know exactly what is better. 
Maybe we could maintain our implementations side by side and 
benchmark them for certain scenarios.
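
Even something crude would already tell us something. A throwaway 
sketch (made-up types, not either of our implementations) that 
only measures append-heavy growth of one interleaved buffer versus 
one array per field:

import core.time : MonoTime;
import std.stdio : writeln;

struct Foo { int a; float b; double c; }

void main()
{
    enum n = 1_000_000;

    // One big interleaved buffer: a single growing allocation.
    auto t0 = MonoTime.currTime;
    Foo[] interleaved;
    foreach (i; 0 .. n)
        interleaved ~= Foo(i, i, i);
    auto tInterleaved = MonoTime.currTime - t0;

    // One array per field: several smaller growing allocations.
    t0 = MonoTime.currTime;
    int[] as; float[] bs; double[] cs;
    foreach (i; 0 .. n)
    {
        as ~= i;
        bs ~= i;
        cs ~= i;
    }
    auto tSplit = MonoTime.currTime - t0;

    writeln("interleaved: ", tInterleaved, "  per-field: ", tSplit);
}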


A lot of functionality is still missing in my implementation.


Re: futures and related asynchronous combinators

2016-03-27 Thread maik klein via Digitalmars-d-announce

On Sunday, 27 March 2016 at 07:16:53 UTC, Vlad Levenfeld wrote:

https://github.com/evenex/future/

I've been having to do a lot of complicated async work lately 
(sometimes multithreaded, sometimes not), and I decided to 
abstract some patterns out and unify them with a little bit 
of formalism borrowed from functional languages. I've aimed to 
keep things as simple as possible while providing a full spread 
of functionality. This has worked well for me under a variety 
of use-cases, but YMMV of course.


[...]


What happens when you spawn a future inside a future and call 
await? Will the 'outer' future be rescheduled?




Re: [Blog post] Why and when you should use SoA

2016-03-26 Thread maik klein via Digitalmars-d-announce

On Sunday, 27 March 2016 at 02:20:09 UTC, Alex Parrill wrote:
Also I forgot to mention: Your "Isn’t SoA premature 
optimization?" section is a textbook YAGNI violation. I might 
have to refactor my web app to support running across multiple 
servers and internationalization when it becomes the Next Big 
Thing, but it more than likely will not become the Next Big 
Thing, so it's not productive for me to add additional 
complexity to "make sure my code scales" (and yes, SoA does add 
complexity, even if you hide it with templates and methods).


Personally I don't think it adds any complexity but everyone has 
to decide that for him or herself. But it is quite annoying to 
refactor from AoS to SoA (at least with my implementation).


And you are right, for webdev that probably doesn't matter as 
much, but for games you hit that level pretty fast.


It is just too easy for game developers to push the limits. "Oh 
hey, let's see how many AIs I can spawn." Maybe you can have 10 
AIs or 100 AIs running around. Or maybe you have a game server 
for a competitive game that runs at 100 updates per second; 
wouldn't it be nice to actually have it run at 500 on the same 
hardware?


Basically as a gamedev you are always resource bound.


Re: [Blog post] Why and when you should use SoA

2016-03-26 Thread maik klein via Digitalmars-d-announce

On Sunday, 27 March 2016 at 01:39:44 UTC, Simen Kjaeraas wrote:

On Friday, 25 March 2016 at 01:07:16 UTC, maik klein wrote:

Link to the blog post: https://maikklein.github.io/post/soa-d/
Link to the reddit discussion: 
https://www.reddit.com/r/programming/comments/4buivf/why_and_when_you_should_use_soa/


Neat. I've actually thought about writing exactly this kind of 
template for the fun of it. Thank you for showing how it'd work.


Btw, your use of Tuple!ArrayTypes for the 'containers' field 
strikes me as unnecessary, as ArrayTypes on its own would cover 
all your use cases.


--
  Simen


Yeah, you are right; initially I thought I would use a "named" 
tuple, like tuple(5, "field1", 1.0f, "field2"), but it was just 
unnecessary.




Re: [Blog post] Why and when you should use SoA

2016-03-26 Thread maik klein via Digitalmars-d-announce

On Saturday, 26 March 2016 at 23:31:23 UTC, Alex Parrill wrote:

On Friday, 25 March 2016 at 01:07:16 UTC, maik klein wrote:

Link to the blog post: https://maikklein.github.io/post/soa-d/
Link to the reddit discussion: 
https://www.reddit.com/r/programming/comments/4buivf/why_and_when_you_should_use_soa/


I think structs-of-arrays are a lot more situational than you 
make them out to be.


You say, at the end of your article, that "SoA scales much 
better because you can partially access your data without 
needlessly loading unrelevant data into your cache". But most 
of the time, programs access struct fields close together in 
time (i.e. accessing one field of a struct usually means that 
you will access another field shortly). In that case, you've 
now split your data across multiple cache lines; not good.


Your ENetPeer example works against you here; the 
packetThrottle* variables would be split up into different 
arrays, but they will likely be checked together when 
throttling packets. Though admittedly, it's easy to fix; put 
fields likely to be accessed together in their own struct.


The SoA approach also makes random access more inefficient and 
makes it harder for objects to have identity. Again, your 
ENetPeer example works against you; it's common for servers to 
need to send packets to individual clients rather than 
broadcasting them. With the SoA approach, you end up accessing 
a tiny part of multiple arrays, and load several cache lines 
containing data for ENetPeers that you don't care about (i.e. 
loading irrelevant data).


I think SoA can be faster if you are commonly iterating over a 
section of a dataset, but I don't think that's a common 
occurrence. I definitely think it's unwarranted to conclude 
that SoAs "scale much better" without noting when they scale 
better, especially without benchmarks.


I will admit, though, that the template for making the 
struct-of-arrays is a nice demonstration of D's templates.


The next blog post that I am writing will contain a few 
benchmarks for SoA vs AoS.



But most of the time, programs access struct fields close 
together in time (i.e. accessing one field of a struct usually 
means that you will access another field shortly). In that 
case, you've now split your data across multiple cache lines; 
not good.


You can still group the data together if you always access it 
together. What you wrote is actually not true for arrays, at 
least not the way you wrote it.


Array!Foo arr

Iterating over 'arr', you will always load the complete Foo 
struct into memory, unless you hide stuff behind pointers.


The SoA approach also makes random access more inefficient and 
makes it harder for objects to have identity.


No, it actually makes it much better, because you only have to 
load the relevant stuff into memory.


But you usually don't look at your objects in isolation.

AoS makes sense if you always care about all fields, like for 
example Array!Vector3; you usually access all components of a 
vector.


What you lose is the general feel of OOP.

Vector add(Vector a, Vector b);

Array!Vector vectors;

add(vectors[index1], vectors[index2]);

This really just won't work with SoA, especially if you want to 
mutate the underlying data through a reference. For that you 
would just use AoS.


Btw, I have done a lot of benchmarks, and SoA in the worst case 
was always as fast as AoS.


But once you actually only access partial data, SoA can 
potentially be much faster.


This is what I mean by scaling:

You start with

struct Test{
  int i;
  int j;
}
Array!Test tests;

and you have absolutely no performance problem for 'tests' 
because it is just so small.


But after a few years Test will have grown much bigger.

struct Test{
  int i;
  int j;
  int[100] junk;
}

If you use SoA you can always add stuff without any performance 
penalty; that is why I said that it "scales" better.


But as I have said in the blog post, you will not always replace 
AoS with SoA, but you should replace AoS with SoA where it makes 
sense.


I think SoA can be faster if you are commonly iterating over a 
section of a dataset, but I don't think that's a common 
occurrence.


This happens very often in games when you use inheritance; your 
objects will just grow really big the more functionality you add.

For example, say you just want to move all objects based on 
velocity, so you only care about Position and Velocity. You don't 
have to load anything else into memory.
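
To make that concrete, a small hand-rolled sketch (not the 
template from the blog post) where the movement update only ever 
touches the two arrays it needs:

struct Vec3 { float x, y, z; }

// AoS: iterating positions also drags hp, name, etc. through the cache.
struct EntityAoS { Vec3 position; Vec3 velocity; int hp; string name; }

// SoA: one array per field; the move update reads exactly two of them.
struct EntitiesSoA
{
    Vec3[]   position;
    Vec3[]   velocity;
    int[]    hp;
    string[] name;

    void move(float dt)
    {
        foreach (i; 0 .. position.length)
        {
            position[i].x += velocity[i].x * dt;
            position[i].y += velocity[i].y * dt;
            position[i].z += velocity[i].z * dt;
        }
    }
}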


An entity component system really is just SoA at its core.


[Blog post] Why and when you should use SoA

2016-03-24 Thread maik klein via Digitalmars-d-announce

Link to the blog post: https://maikklein.github.io/post/soa-d/
Link to the reddit discussion: 
https://www.reddit.com/r/programming/comments/4buivf/why_and_when_you_should_use_soa/


Metaprogramming with type objects in D

2016-02-29 Thread maik klein via Digitalmars-d-announce

Discussion:
https://www.reddit.com/r/programming/comments/48dssq/metaprogramming_with_type_objects_in_d/

Direct link:
https://maikklein.github.io/2016/03/01/metaprogramming-typeobject/



Re: Vision for the first semester of 2016

2016-01-25 Thread maik klein via Digitalmars-d-announce

On Monday, 25 January 2016 at 13:08:18 UTC, Rory McGuire wrote:
On Mon, Jan 25, 2016 at 2:46 PM, Andrei Alexandrescu via 
Digitalmars-d-announce  
wrote:


On 01/25/2016 04:17 AM, Rory McGuire via 
Digitalmars-d-announce wrote:




Looking at the way we have things now, it would actually be 
quite simple to make two downloads, one with everything and 
one with the bare minimum.


If we changed phobos to compile like the recent vibe.d 
version does then we can even pick and choose sections of 
phobos. I suppose "he who has the vision" can do either type 
of release with our current tools.




What would be the benefits of this? My knee-jerk reaction is 
this is a large and disruptive project with no palpable 
benefits. -- Andrei



Yep, that's kind of what I was saying in the end. If someone 
wanted to, they could make such a release independently.

I'm trying to hack on the compiler. Personally, I wish all those 
with the know-how would put their efforts into documenting how 
the compiler works and what the different parts do; that way we 
could have more contributors.


+1 on lifetime management and tooling. I would like to see a lot 
of improvements for DCD; tools for refactoring would also be 
extremely useful.


As for splitting everything up into small packages, I don't think 
D is there yet. I am still new, but I have already found several 
libraries that I wanted to use that do not follow the "official" D 
style guide.


I would not want to include N different libraries that use N 
different coding styles. Look at Rust, for example: you will find 
that pretty much every library uses the "official" style guide.


I think that is because it is mostly "enforced" by the compiler 
as a warning.


I really don't care how I write my code, but I care deeply about 
consistency.


Another point is that I couldn't find any metaprogramming library 
for D yet. Yes, there is std.meta, but it is extremely lacking; 
this is quite obvious if you look into the standard library.


For example in "zip"

return mixin (q{ElementType(%(.moveAt(ranges[%s], n)%|, 
%))}.format(iota(0, R.length)));


This could easily be expressed as a general metafunction. Also, 
std.meta mostly focuses on compile-time stuff, but I don't think 
there is anything for a "Tuple".
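
Something like the following is what I mean by a general 
metafunction for tuples. A rough sketch (mapTuple is a made-up 
name, not something in std.meta or std.typecons):

import std.meta : staticMap;
import std.typecons : Tuple, tuple;

// Map the result type of `fun` over a sequence of types.
template MapTupleType(alias fun, T...)
{
    alias ApplyOne(U) = typeof(fun(U.init));
    alias MapTupleType = Tuple!(staticMap!(ApplyOne, T));
}

// Apply `fun` to every field of a Tuple and return a new Tuple.
auto mapTuple(alias fun, T...)(Tuple!T t)
{
    MapTupleType!(fun, T) result;
    foreach (i, field; t.expand) // unrolled at compile time
        result[i] = fun(field);
    return result;
}

unittest
{
    auto r = mapTuple!(x => x * 2)(tuple(1, 2.5));
    assert(r == tuple(2, 5.0));
}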


Some inspiration could be found here 
https://github.com/boostorg/hana





Re: GDC Explorer Site Update

2016-01-25 Thread maik klein via Digitalmars-d-announce

On Monday, 25 January 2016 at 23:08:32 UTC, Iain Buclaw wrote:

Hi,

After a much-needed rebuild of the server running various 
GDC-related hosted services 
[http://forum.dlang.org/post/zrnqcfhvyhlfjajtq...@forum.dlang.org], I've gotten round to updating the compiler disassembler.


http://explore.dgnu.org/

Now supports 12 different architectures from ARM to SystemZ! 
(not including -m32 or any -march options)


Enjoy.
Iain.


This is awesome; I think I am going to use this to finally learn 
some assembly. But I am not quite sure what the output is: is it 
x86 or x64?