Re: DIP: Tail call optimization

2016-07-11 Thread Tofu Ninja via Digitalmars-d-announce

On Sunday, 10 July 2016 at 13:15:38 UTC, Andrew Godfrey wrote:
Btw here's a thread from 2014 that touches on the idea of a 
"tailrec" annotation. At the time, Walter viewed the 
optimization as the compiler's business and not something he'd 
elevate to a language feature:



http://forum.dlang.org/post/lqp6pu$1kkv$1...@digitalmars.com


I find it odd that tail recursion, which actually makes certain 
things possible that wouldn't be otherwise, is seen as only an 
optimization, while inlining, which really is only an 
optimization, is seen as important enough to have a pragma.
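
For illustration, a minimal sketch (whether it completes for a large n depends entirely on the compiler choosing to perform the optimization; D does not guarantee TCO):

```d
// Sums 1..n via tail recursion. Without TCO every call pushes a
// stack frame, so a large n overflows the stack; with TCO the
// recursive call becomes a jump and runs in constant stack space.
ulong sumTo(ulong n, ulong acc = 0)
{
    if (n == 0)
        return acc;
    return sumTo(n - 1, acc + n); // tail position: nothing happens after the call
}

void main()
{
    import std.stdio : writeln;
    writeln(sumTo(100_000_000)); // only safe if the tail call was optimized
}
```

This is exactly the "makes certain things possible" case: the program's correctness, not just its speed, hinges on whether the optimization fires.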


Re: DIP: Tail call optimization

2016-07-10 Thread Tofu Ninja via Digitalmars-d-announce

On Sunday, 10 July 2016 at 06:39:06 UTC, ketmar wrote:

On Sunday, 10 July 2016 at 06:37:18 UTC, ketmar wrote:

On Sunday, 10 July 2016 at 06:20:59 UTC, Seb wrote:
... guys, please stay friendly, constructive and polite! I 
thought we are all grown-ups here!


i do. someone who is not able to understand when and how TCO 
works is clearly brain-damaged. if he isn't, why did he become a 
programmer in the first place? it is clear that he is not able 
to program.


note that i didn't say this about the OP, in no way. so no 
personal attacks here.


You're joking, right? No personal attacks?


Re: Vision for the first semester of 2016

2016-02-03 Thread Tofu Ninja via Digitalmars-d-announce
On Wednesday, 3 February 2016 at 11:22:50 UTC, Márcio Martins 
wrote:


How would you select the package version you want to use? 
Obviously it would be fine for small scripts to pick the latest 
version, but not so much for non-trivial projects.


Somewhat related: I would love to be able to install packages 
with dub "globally", and then have them automatically added to 
the compiler's search paths and working with import with rdmd 
and dmd.


I install version 0.1 of package X globally and now all my 
programs pick that up with import. If the package is a library, 
dub would (re)compile them upon installation and put the 
resulting libs in the correct places so that all my programs 
can simply link to them.
I should also be able to override the global import with a 
different version at the project level if I wish to do so, 
similar to what dub.selections.json currently does.


Having dub fully integrated with the compiler and its 
configuration would be a major quality-of-life improvement, and 
a nice step towards the "it just works" state. Much like C/C++ 
libraries get installed by Linux's package managers and just 
work, but for D.


Right now the easiest way to boot up a new project is to copy 
one that already exists, rename it, and edit the dub.json file 
to add/remove dependencies. This gets old really quickly, and 
it is the only reason D doesn't really feel like a scripting 
language for small projects: setting up the environment and the 
frequently used dependencies takes longer than writing the 
script, and you need a project directory instead of a single .d 
file that just uses all your common imports.


There are a few problems with it. For instance, dub packages 
carry no information about the files in them: you can't ask dub 
for derelict.opengl3.gl3.d; you ask it for the package 
derelict-gl3. So for something like this to work, there would 
need to be some kind of syntax to import the package.


Probably something simple could be done, like pragma(dub, 
"derelict-gl3", "==1.0.12");. As far as I can tell, a dub pragma 
could be 100% ignored by the compiler unless a flag gets passed 
to print the dub dependencies. Then a tool like rdmd, which gets 
all the imports for a file, could also get all the dub 
dependencies, call out to dub to download them, and pass the 
import paths for the dub dependencies to the final invocation of 
dmd. Otherwise the dub pragma would do nothing other than be a 
signifier to outside tools. Tools like DCD could even use the 
pragmas to automatically call out to dub, find the paths of the 
dependencies, and start indexing them for auto-completion.
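
A sketch of what such a file could look like (the pragma and the compiler flag are hypothetical; nothing like them exists in dmd today, and derelict-gl3 is just the example package from above):

```d
// test.d -- the dub pragma would be purely informational: the
// compiler would skip it unless asked to print dependencies,
// while outside tools (rdmd, DCD) would read it to know which
// packages to fetch and resolve via dub before the real build.
pragma(dub, "derelict-gl3", "==1.0.12");

import derelict.opengl3.gl3;

void main()
{
    // ... use the package as usual ...
}
```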


It would really be a great day to have all of that work 
automatically. Also, dub could be packaged with dmd, which would 
make all of this simpler.


Right now the easiest way to use dub is to make a .json for your 
own project and build with dub, but honestly that sucks a lot, 
because really the only thing people want to use dub for is the 
dependencies; the rest of it kinda gets in the way and is 
annoying as hell to use. Trying to figure out how to build 
projects for x64 or unittests or whatever with dub is a pain. 
It's not really what people want to use dub for, but it tries to 
pass itself off as a build system, which it sucks at.





Re: Vision for the first semester of 2016

2016-02-03 Thread Tofu Ninja via Digitalmars-d-announce

On Wednesday, 3 February 2016 at 23:35:21 UTC, Tofu Ninja wrote:

...


Actually, now that I think about it, you can do without the 
pragma and just define something like this...


mixin template DubDependency(string dependency, string vers)
{
    // Does nothing but print a dependency
    version(Dub_Dependency)
    {
        pragma(msg, "DubDependency: \"" ~ dependency ~ "\" \"" ~ vers ~ "\"");
    }
}


Then whenever you need it you just put:
mixin DubDependency!("derelict-gl3", "==1.0.12");

And to get the dependencies for a file you can just do:
dmd test.d -o- -version=Dub_Dependency


Re: Vision for the first semester of 2016

2016-02-03 Thread Tofu Ninja via Digitalmars-d-announce

On Thursday, 4 February 2016 at 00:50:43 UTC, Tofu Ninja wrote:

On Wednesday, 3 February 2016 at 23:35:21 UTC, Tofu Ninja wrote:

...


Actually, now that I think about it, you can do without the 
pragma and just define something like this...


mixin template DubDependency(string dependency, string vers)
{
    // Does nothing but print a dependency
    version(Dub_Dependency)
    {
        pragma(msg, "DubDependency: \"" ~ dependency ~ "\" \"" ~ vers ~ "\"");
    }
}


Then whenever you need it you just put:
mixin DubDependency!("derelict-gl3", "==1.0.12");

And to get the dependencies for a file you can just do:
dmd test.d -o- -version=Dub_Dependency


Actually, nvm, it wouldn't work, because as soon as you add an 
import derelict.opengl3.gl3; it would error out because it 
can't find the file, and it wouldn't print the dependencies.


Re: Vision for the first semester of 2016

2016-01-29 Thread Tofu Ninja via Digitalmars-d-announce
On Monday, 25 January 2016 at 02:37:40 UTC, Andrei Alexandrescu 
wrote:

Hot off the press! http://wiki.dlang.org/Vision/2016H1 -- Andrei


Just out of curiosity, is getting the different compilers in sync 
still a priority? Right now we have dmd at 2.070, ldc at 2.068, 
and gdc at 2.066.


Re: Vision for the first semester of 2016

2016-01-29 Thread Tofu Ninja via Digitalmars-d-announce

On Friday, 29 January 2016 at 20:30:35 UTC, Iain Buclaw wrote:
How much of it actually depends on the compiler though?  I'd be 
a little surprised if we couldn't backport at least 80% of 
phobos to 2.067/2.068 with zero changes.


I have no idea; I think you are probably right. But having the 
compiler and Phobos out of sync sounds even worse than the way it 
is now. A better solution for me would be to just stick with a 
version and wait for gdc to catch up, but honestly it seems like 
as soon as a new version comes out I hit some bug that is only 
fixed in the latest version, forcing me to upgrade.


For example, this literally happened days ago: I am currently on 
2.069, and the other day I needed to call some WinAPI stuff, only 
to realize the WinAPI bindings are way outdated. Well, guess 
what: they are updated in 2.070. It's amazing how often I hit a 
problem and then find out it's fixed in the next version.


Re: Vision for the first semester of 2016

2016-01-29 Thread Tofu Ninja via Digitalmars-d-announce

On Friday, 29 January 2016 at 18:13:21 UTC, Iain Buclaw wrote:
On 29 Jan 2016 6:55 pm, "Tofu Ninja via Digitalmars-d-announce" 
<digitalmars-d-announce@puremagic.com> wrote:


On Monday, 25 January 2016 at 02:37:40 UTC, Andrei 
Alexandrescu wrote:


Hot off the press! http://wiki.dlang.org/Vision/2016H1 -- 
Andrei



Just out of curiosity, is getting the different compilers in 
sync still a priority? Right now we have dmd at 2.070, ldc at 
2.068, and gdc at 2.066.


If anyone wants to help out...

I have to also juggle working on GCC and GDB. :-)

When gdc reaches 2.068 (GCC 7.1 is the target release next 
year) - expect it to stay there for a while...


It would be nice if keeping them in sync were a priority. I would 
love to use GDC so I could use GDB, but not having the latest 
fixes is simply not worth it. Even simple things don't work when 
you go back just a few versions: in 2.066, isForwardRange is not 
correct and does not work properly; something as simple as that 
does not work. Not to mention using the new stuff like 
allocators.


Re: Sublime Text 3 Gets Better D Support

2016-01-27 Thread Tofu Ninja via Digitalmars-d-announce

On Wednesday, 27 January 2016 at 19:15:02 UTC, Damian wrote:

Thank you, this is very much welcome!

Wishlist:
Will we see some dub support integration for building? I find 
when using rust the cargo build support is excellent, I wish we 
had this for D in sublime :)


I think the Sublime D-Kit plugin has dub building support.

Also, thanks Jack. I have been using Sublime for D for a while 
now; it's great, but the highlighting kinda sucks. My hope is 
that sometime in the future DCD will have support for semantic 
highlighting along with the great auto-completion it already 
provides.


Re: 2015 H1 Vision

2015-01-31 Thread Tofu Ninja via Digitalmars-d-announce

On Sunday, 1 February 2015 at 03:46:25 UTC, data man wrote:

Vision/2015H1 wrote:
We believe safety is an important aspect of language design, 
and we plan to continue building on the @safe/@trusted/@system 
troika.


I like the troika :-)


I had to look up what it means :/


Re: Deadcode: A code editor in D

2015-01-16 Thread Tofu Ninja via Digitalmars-d-announce

On Friday, 16 January 2015 at 21:19:08 UTC, Jonas Drewsen wrote:
I have been working on an editor written in D for use with D 
for some time now and have made a blog post about it.


Any feedback or suggestions are welcome.

http://deadcodedev.steamwinter.com

Thanks
Jonas


This is pretty sweet dude, keep us posted on the development.


Re: CUDA bindings

2014-10-26 Thread Tofu Ninja via Digitalmars-d-announce

On Sunday, 26 October 2014 at 05:31:52 UTC, Dmitri Nesteruk wrote:
This is great! I know that C++ uses <<< and >>> to enclose 
kernel calls and thus create the link between CPU and GPU code 
when NVCC rips things apart. How is this done in your bindings?


It's just the driver API; it's not CUDA code in D.

Also, I think you are mistaken about where the <<< >>> are 
actually used. The <<< >>> are used in CUDA code, not in C++ 
code. While CUDA is a variation on C++, it is still not C++ and 
has to pass through a special parser that splits out the host 
code and the GPU code to be compiled.


Re: Mono-D v2.5 - dub buildTypes

2014-10-17 Thread Tofu Ninja via Digitalmars-d-announce
On Thursday, 16 October 2014 at 23:32:22 UTC, Alexander Bothe 
wrote:

Hi everyone,

just gave the second drop down box in Xamarin Studio a use: 
Selection of build types for dub projects.



Furthermore, please don't rage silently somewhere - tell me 
about issues with Mono-D on github or in #d.mono-d on freenode!

http://wiki.dlang.org/Mono-D
https://github.com/aBothe/Mono-D/issues



Cheers & thanks to everyone,
Alex


Sweet


Re: CUDA bindings

2014-10-17 Thread Tofu Ninja via Digitalmars-d-announce

On Thursday, 16 October 2014 at 21:18:15 UTC, ponce wrote:
More APIs could be implemented if the interest happens to be 
non-null.


Interest non-null, this is awesome.


Re: [Mono-D] v2.1.18 Parser/Completion/General fixes & improvements

2014-08-13 Thread Tofu Ninja via Digitalmars-d-announce

On Wednesday, 13 August 2014 at 21:35:26 UTC, Brian Schott wrote:
I'm not sure you'd want to do that. The DParser completion 
engine has a few features that DCD doesn't have. (I'm not sure 
if this is true the other way around)



I'm particularly interested in dscanner integration myself :)


Are you talking about displaying static analysis hints in the 
editor window, or something else?


It would be nice if we could have one unified auto complete 
engine instead of several different engines of varying quality 
and feature sets. But it is only a dream.


Re: [Mono-D] v2.1.18 Parser/Completion/General fixes & improvements

2014-08-13 Thread Tofu Ninja via Digitalmars-d-announce

On Wednesday, 13 August 2014 at 21:48:57 UTC, Tofu Ninja wrote:
It would be nice if we could have one unified auto complete 
engine instead of several different engines of varying quality 
and feature sets. But it is only a dream.


For that matter... it would be nice to have just one unified
parser.


Re: DConf 2014 Day 2 Talk 3: Designing an Aurora: A Glimpse at the Graphical Future of D by Adam Wilson

2014-07-10 Thread Tofu Ninja via Digitalmars-d-announce
On Thursday, 10 July 2014 at 10:43:45 UTC, Ola Fosheim Grøstad 
wrote:
Depends on what Aurora is meant to target. The video says it is 
meant to be more of a playful environment that allows pac-man 
mockups and possibly GUIs in the long run, but not sure who 
would want that? There are so many much better IDE/REPL 
environments for that: Swift, FlashCo, HTML5/WebGL/Dart/PNaCL, 
Python and lots of  advanced frameworks with engines for cross 
platform mobile development at all kinds of proficiency levels.


Seems to me that what a language like D needs is two separate 
frameworks:


1. A barebones 3D high performance library with multiple 
backends that follow the hardware trends (kind of what you are 
suggesting). Suitable for creating games and HPC stuff.


2. A stable high level API with geometric libraries for dealing 
with established abstractions: font files, vector primitives, 
PDF generation and parsing with canvas abstraction for both 
screen/gui, print, file… Suitable for applications/web.


3. An engine layering of 2. on top of 1. for portable 
interactive graphics but a higher abstraction level than 1.


YES (I am so glad someone else sees this)! This is basically what 
I have been saying all along. I hoped the immediate mode could be 
(1) and the retained mode could be (2/3), so that we could have 
both and not be limited, but that does not seem to be the 
direction it is going. It is not even clear what the immediate 
mode 'is' right now in the current designs of Aurora. It seems to 
be more of an implementation detail rather than something usable 
on its own.


As it stands now, the direction that Aurora is taking seems to be 
an odd one IMHO. It is trying to be something in between (1) and 
(2/3), but I don't think that is useful to anyone except maybe 
GUI writers. That is what prompted me to post.


Re: DConf 2014 Day 2 Talk 3: Designing an Aurora: A Glimpse at the Graphical Future of D by Adam Wilson

2014-07-10 Thread Tofu Ninja via Digitalmars-d-announce

On Thursday, 10 July 2014 at 15:38:07 UTC, ponce wrote:
You might want to look at what bgfx does: 
https://github.com/bkaradzic/bgfx

It provides a shader compiler to various supported backends.


I have seen it but I have never used it. I don't actually know if
it's any good or not, but it is in the same vein as what I am
talking about.

Abstracting shaders more or less mandates having a shader 
compiler from your language to graphics API out there. It does 
make an additional build step.


It would be complicated yes but certainly doable.

Compiling such a shader abstraction language at compile-time 
seems a bit optimistic to me.


Maybe a little now, but in the future maybe not; things are
always improving. It is also one of the reasons I wish we could
call out to precompiled code at compile time, making it possible
to have inline shaders passed out of the compiler at compile time
and compiled by some other app.


Re: DConf 2014 Day 2 Talk 3: Designing an Aurora: A Glimpse at the Graphical Future of D by Adam Wilson

2014-07-09 Thread Tofu Ninja via Digitalmars-d-announce
On Wednesday, 9 July 2014 at 05:30:21 UTC, Ola Fosheim Grøstad 
wrote:
That's true, but OpenGL is being left behind now that there is 
a push to match the low level of how GPU drivers work.


As I said, ALL APIs are converging on low level access; this 
includes OpenGL. This means that all major APIs are moving to a 
buffer+shader model, because this is what the hardware likes 
(there are some more interesting things happening with command 
buffers as well).


Apple's Metal is oriented towards the tiled PowerVR and 
scenegraphs,


I am not exactly sure where you got that idea; Metal is the 
same, buffers+shaders. The major difference is the command buffer 
that is being explicitly exposed; this is actually what is meant 
when they say that the API is getting closer to the hardware. 
In current APIs (DX/OGL) the command buffers are hidden from the 
user and constructed behind the scenes: in DX it is done by 
Microsoft, and in OGL it is done by the driver (Nvidia/AMD/Intel). 
There has been a push recently for this to be exposed to the user 
in some form; this is what Metal does, and I believe Mantle does 
something similar, but I can't be sure because they have not 
released any documentation.



probably also with some expectations of supporting the upcoming 
raytracing accelerators.


I doubt it.

AMD is in talks with Intel (rumour) with the intent of 
cooperating on Mantle.


I don't know anything about that but I also doubt it.

Direct-X is going lower level… So, there is really no stability 
in the API at the lower level.


On the contrary, all this movement towards low level APIs is 
actually causing the APIs to all look very similar.




But yes, OpenGL is not particularly suitable for rendering a 
scene graph without an optimizing engine to reduce context 
switches.


I was not talking explicitly about OGL; I am just talking about 
video cards in general.


Actually, modern 2D APIs like Apple's Quartz are backend 
independent and render to PDF. Native PDF support is 
important if you want to have an advantage in the web space and 
in the application space in general.


This does not really have anything to do with what I am talking 
about. I am talking about hardware accelerated graphics; once it 
gets into the hardware (GPU), there is no real difference between 
2D and 3D.


There is almost no chance anyone wanting to do 3D would use 
something like Aurora… If you can handle 3D math you also can 
do OpenGL, Mantle, Metal?


As it stands now, that may be the case, but I honestly don't see 
a reason it must be so.


But then again, the official status for Aurora is kind of 
unclear.


This is true.


Re: DConf 2014 Day 2 Talk 3: Designing an Aurora: A Glimpse at the Graphical Future of D by Adam Wilson

2014-07-09 Thread Tofu Ninja via Digitalmars-d-announce

On Wednesday, 9 July 2014 at 15:03:13 UTC, Tofu Ninja wrote:

Also I should note, dx and ogl are both also moving towards 
exposing the command buffer.


Re: DConf 2014 Day 2 Talk 3: Designing an Aurora: A Glimpse at the Graphical Future of D by Adam Wilson

2014-07-09 Thread Tofu Ninja via Digitalmars-d-announce

On Wednesday, 9 July 2014 at 15:22:35 UTC, Tofu Ninja wrote:

On Wednesday, 9 July 2014 at 15:03:13 UTC, Tofu Ninja wrote:

Also I should note, dx and ogl are both also moving towards 
exposing the command buffer.


I should say that it looks like they are moving in that 
direction; both OpenGL and DirectX support indirect draws, which 
is nearly all the way to a command buffer. It is only a matter of 
time before it becomes a reality (explicitly so with Metal and 
Mantle).


Re: DConf 2014 Day 2 Talk 3: Designing an Aurora: A Glimpse at the Graphical Future of D by Adam Wilson

2014-07-09 Thread Tofu Ninja via Digitalmars-d-announce

On Wednesday, 9 July 2014 at 16:21:55 UTC, Ola Fosheim Grøstad
wrote:
My point was that the current move is from heavy graphic 
contexts with few API calls to explicit command buffers with 
many API calls. I would think it fits better to tiling where 
you defer rendering and sort polygons and therefore get context 
switches anyway (classical PowerVR on iDevices). It fits better 
to rendering a display graph directly, or UI etc.





Actually it seems to be moving to fewer and fewer API calls where
possible (see AZDO), with lightweight contexts.


Yes, this is what they do. It is closer to what you want for 
general computing on the GPU. So there is probably a long term 
strategy for unifying computation and graphics in there 
somewhere. IIRC Apple claims Metal can be used for general 
computing as well as 3D.



Yeah, it seems like that is where everything is going, and fast;
that is why I wish Aurora could try to follow it.



Why?

Imagination Technologies (PowerVR) purchased the raytracing 
accelerator (hardware design/patents) that three former Apple 
employees designed and has just completed the design for mobile 
devices so it is close to production. The RTU (ray tracing 
unit) has supposedly been worked into the same series of GPUs 
that is used in the iPhone. Speculation, sure, but not unlikely 
either.


http://www.imgtec.com/powervr/raytracing.asp



This is actually really cool; I just don't see real-time ray
tracing being usable (for games and the like) for at least
another 5-10 years, though I will certainly be very happy to be
wrong.



Why?

Intel has always been willing to cooperate when AMD holds the 
strong cards (ATI is stronger than Intel's 3D division).


http://www.phoronix.com/scan.php?page=news_item&px=MTcyODY



You may be right, I don't know; it just doesn't seem to me like
something they would do. Just a gut feeling, no real basis to
back it up.



I doubt it. ;-)

Apple wants unique AAA titles on their iDevices to keep 
Android/Winphone at bay and to defend the high profit margins. 
They have no interest in portable low level access and will 
just point at OpenGL ES 2 for that.




They will all be incompatible of course, no way we could get a
decent standard... nooo. All I am saying is that as they get
closer and closer to the hardware, they will all start looking
relatively similar. After all, if they are all trying to get
close to the same thing (the hardware), then by association they
are getting closer to each other. There will be stupid little
nuances that make them incompatible, but they will still be doing
basically the same thing. Hardware-specific APIs (Mantle)
complicate this a little bit, but not by much; all the GPU
hardware out there (excluding niche stuff like hardware ray
tracers :P) has basically the same interface.

True, but that is not a very stable abstraction level. Display 
Postscript/PDF etc is much more stable. It is also a very 
useful abstraction level since it means you can use the same 
graphics API for sending a drawing to the screen, to the 
printer or to a file.




I think it's a fine abstraction level; buffers and shaders are
not hard concepts at all. All the APIs that Aurora is going to be
based on offer them, and all modern GPUs support them. If shaders
were written in a DSL, then in the case where Aurora needs to
fall back to software rendering they could be translated to D
code and mixed right in. When they need to be turned into some
API-specific shader, they could be translated at compile time
(the differences should mostly just be syntax). If the DSL were a
subset of D, that would simplify it even further as well as make
the learning curve much smaller. It's a perfectly fine level of
abstraction for any sort of graphics that also happens to be
supported very well by modern GPUs. I don't see the problem.


Well, having the abstractions for opening a drawing context, 
input devices etc would be useful, but not really a language 
level task IMO. Solid cross platform behaviour on that level 
will never happen (just think about what you have to wrap up on 
Android).


Well, then in that case Aurora should be designed as a software
renderer with hardware support as a possible addition later on.
But that comes back to the point that it is a little iffy what
Aurora is actually trying to be. Personally I would be
disappointed if it went down that route.


Re: DConf 2014 Day 2 Talk 3: Designing an Aurora: A Glimpse at the Graphical Future of D by Adam Wilson

2014-07-08 Thread Tofu Ninja via Digitalmars-d-announce

On Tuesday, 8 July 2014 at 16:03:36 UTC, Andrei Alexandrescu
wrote:

http://www.reddit.com/r/programming/comments/2a5ia9/dconf_2014_day_2_talk_3_designing_an_aurora_a/


Great talk, but I have some reservations about the design. What I
am most concerned about is the design of the immediate mode
layer. I was one of the few who initially pushed for the
immediate mode, but I think you missed the point.

There are several points that I want to address, so I will go
through them one at a time. Also, I apologize for the wall of text.

*Scene Graph
Personally I find it odd that the immediate mode knows anything
about a scene graph at all. Scene graphs are not an end-all
be-all; they do not make everything simpler to deal with. They
are one way to solve the problem, but not always the best. D is
supposed to be multi-paradigm; locking the users into a scene
graph design is against that goal. I personally think that the
immediate mode should be designed for performance, and the less
performant but 'simpler' modes should be built on top of it.

*Performance vs Simplicity
I know that you have stated quite clearly that you do not believe
performance should be a main goal of Aurora, and that simplicity
is a more important goal. I propose that there is NO reason at
all that Aurora can't have both, in the same way that D itself
has both. I think it is just a matter of properly defining the
goals of each layer. The retained mode should be designed with
simplicity in mind whilst still trying to be performant where
possible. On the other hand, the immediate mode should be
designed with performance in mind whilst still trying to be
simple where possible. The simple mode(s?) should be built on top
of the single performance mode.

*Design
Modern graphics hardware has a very well defined interface, and
all modern graphics APIs are converging on matching the hardware
as closely as possible. Modern graphics is done by sending
buffers of data to the card and having programmable shaders
operate on the data, period. I believe that the immediate mode
layer should match this as closely as possible. If that involves
having some DSL for shaders that gets translated into all the
various other shader languages, then so be it; the differences
between them are minimal. If the DSL were a subset of D, then all
the better.

*2D vs 3D
I think the difference you are making between 2D and 3D is
largely artificial. In modern graphics APIs the difference
between 2D and 3D is merely a matrix multiply. If the immediate
mode were designed how I suggest above, then 2D vs 3D is a
non-issue.
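
To illustrate the point (an illustrative sketch, not part of any real Aurora API): with a programmable pipeline, "2D" is just the 3D pipeline fed an orthographic projection, with the same buffers and shaders.

```d
// Builds a column-major orthographic projection that maps pixel
// coordinates (0..width, 0..height, top-left origin) to clip
// space. Swap this matrix for a perspective one and the same
// vertex buffers and shaders render "3D" instead of "2D".
float[16] ortho2D(float width, float height)
{
    return [
        2.0f / width,  0.0f,            0.0f, 0.0f,
        0.0f,         -2.0f / height,   0.0f, 0.0f,
        0.0f,          0.0f,            1.0f, 0.0f,
       -1.0f,          1.0f,            0.0f, 1.0f,
    ];
}
```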

*Games
D is already appealing to game developers, and with the work on
@nogc and Andrei's work on allocators, it is becoming even more
appealing. The outcome of Aurora could land D a VERY strong spot
in games that would secure it a very good future, but only if it
is done right. I think there is a certain level of responsibility
in the way Aurora gets designed that needs to be taken into
account.


I know that most of my points are not in line with what you said
Aurora would and wouldn't be. I just don't think there is any
reason Aurora couldn't achieve the above points whilst still
maintaining its goal of simplicity.

Also, I am willing to help; I just don't really know what needs
working on. I have a lot of experience writing high performance
graphics with OpenGL on Windows.


Re: DConf Day 1 Talk 6: Case Studies in Simplifying Code with Compile-Time Reflection by Atila Neves

2014-06-21 Thread Tofu Ninja via Digitalmars-d-announce

On Saturday, 21 June 2014 at 20:02:56 UTC, Walter Bright wrote:

On 6/21/2014 6:15 AM, Jacob Carlborg wrote:
Youtube supports 4k resolution, is that good enough :). All 
videos from RailsConf 2014 were uploaded to youtube in 1080p 
resolution.


For presentation videos, I don't see any point to hi res. DVD 
quality is more than good enough.


See, that's the great thing about YouTube: even if you upload in 
high res, users can select the quality they want, so really 
there is no reason not to just upload the best and let the user 
decide.


Re: DConf Day 1 Talk 6: Case Studies in Simplifying Code with Compile-Time Reflection by Atila Neves

2014-06-17 Thread Tofu Ninja via Digitalmars-d-announce
On Monday, 16 June 2014 at 17:26:51 UTC, Andrei Alexandrescu 
wrote:

https://news.ycombinator.com/newest

https://www.facebook.com/dlang.org/posts/867399893273693

https://twitter.com/D_Programming/status/478588866321203200

http://www.reddit.com/r/programming/comments/28am0x/case_studies_in_simplifying_code_with_compiletime/


Andrei


This is the most interesting talk I have seen so far from 
DConf14, very good.


-tofu


Re: DConf Day 1 Talk 6: Case Studies in Simplifying Code with Compile-Time Reflection by Atila Neves

2014-06-17 Thread Tofu Ninja via Digitalmars-d-announce

On Tuesday, 17 June 2014 at 17:10:16 UTC, Mengu wrote:
and also the genius idea to post each talk separately instead 
of having a nice talks page on dconf.org and providing a link 
for that. i'd understand the keynotes, but for the rest of the 
talks this is / was not a good idea.


I think the hope was that it would attract more views overall. I 
think what was not taken into account was the way Reddit posts 
get viewed: having their upvotes spread out among the different 
posts is much worse than pooling them, as posts with low upvote 
counts are far less likely to be viewed. Also, it's annoying for 
those of us who just want to watch the talks.


A much better strategy would have been a full release of all the 
talks, followed by a Reddit post of all of them to get the large 
burst up front, then afterwards individual posts for each video 
to get the staggering as well. It would have effectively doubled 
each video's exposure (Reddit is all reposts anyway, so it's all 
the better :P).