ddoc latex/formulas?

2016-09-13 Thread Manu via Digitalmars-d
Can we produce formulas, or latex in ddoc? Are there any examples in
phobos I can refer to?


Re: Release D 2.071.0

2016-09-13 Thread Martin Nowak via Digitalmars-d-announce

On Monday, 2 May 2016 at 16:47:13 UTC, Márcio Martins wrote:
Since this release is not critical for us, despite including 
many great changes, we will also stick to DMD 2.070.2 and hope 
for a fix in a future release, if at all possible.


There is an issue filed now, 
https://issues.dlang.org/show_bug.cgi?id=15988, but I can't 
reproduce any serious slowdowns with vibe-d, Higgs, or dcd 
when comparing 2.069.2, 2.070.2, and 2.071.2-b4.


Re: iPhone vs Android

2016-09-13 Thread Shachar Shemesh via Digitalmars-d

On 14/09/16 02:59, Walter Bright wrote:


Memory allocated with malloc() is unknown to the GC. This works fine
unless a reference to the GC memory is inserted into malloc'd data,
which is why there's an API to the GC to let it know about such things.



But if you do want to allow it, then my original problem comes back. You 
have to scan the malloced memory because you cannot be sure whether that 
memory contains pointers to GC managed memory.


And the hybrid approach (i.e. - only some of the memory allocated 
by malloc is scanned) is a wasp nest of potential bugs and problems.


Shachar


[Issue 16243] wrong C++ argument passing with empty struct and 6 integers

2016-09-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16243

--- Comment #10 from Joakim  ---
I ran into this on Android/ARM too, with ldc.  As the linked ldc comment and
Jacob note, this is an incompatibility in the way clang and gcc work with empty
structs on every platform, whether Linux/x86 or Android/ARM.  This test and the
auto-tester simply use gcc on every other tested platform, while clang is the
system compiler on OS X now.

--


[Issue 16031] [REG2.071] stale DW.ref.name EH symbol used with -lib and -fPIC

2016-09-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16031

Martin Nowak  changed:

   What|Removed |Added

   Keywords||EH, pull
 CC||c...@dawg.eu
   Assignee|nob...@puremagic.com|c...@dawg.eu
Summary|[REG2.071] dmd internal |[REG2.071] stale
   |error when compiling|DW.ref.name EH symbol used
   |druntime with PIC=1 |with -lib and -fPIC

--- Comment #3 from Martin Nowak  ---
Easy enough to guess that this is an issue with PIC and the new DWARF EH code.
Seems like it was introduced by
https://github.com/dlang/dmd/commit/58910e74854b7f7b86e5cec50a6d73943fc29f87,
b/c Sdw_ref_idx gets cached but not reset before generating further objects.

https://github.com/dlang/dmd/pull/6129

--


Re: Can vibe d leverage existing web technologies?

2016-09-13 Thread Brad Anderson via Digitalmars-d-learn

On Tuesday, 13 September 2016 at 23:45:18 UTC, Intersteller wrote:
vibe.d does not have as much lateral support as the most common 
web technologies do. Can vibe.d leverage pre-existing techs 
such as php, ruby/rails, etc? Starting from scratch and having 
to build a robust and secure framework is really not the way to 
go.


Sure. Just use res.writeBody(execute(["php", "-f", 
"somephpscript.php"]).output); or something like that.


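More concretely, a minimal sketch of shelling out from a vibe.d handler 
(assuming vibe.d's listenHTTP/writeBody and std.process.execute; the 
script path is just a placeholder):

import std.process : execute;
import vibe.d;

void handler(HTTPServerRequest req, HTTPServerResponse res)
{
    // Run the PHP CLI and return whatever it prints.
    auto php = execute(["php", "-f", "somephpscript.php"]);
    res.writeBody(php.output, "text/html; charset=utf-8");
}

void main()
{
    auto settings = new HTTPServerSettings;
    settings.port = 8080;
    listenHTTP(settings, &handler);
    runEventLoop();
}
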
But seriously, you probably don't want to do that. It's like 
asking if ruby on rails can leverage php. Sure, they can 
communicate over HTTP or whatever else they support but trying to 
execute PHP from within Rails or vice versa just isn't really all 
that beneficial.


Re: Critque of Rust's collection types

2016-09-13 Thread Basile B. via Digitalmars-d
On Wednesday, 14 September 2016 at 00:36:39 UTC, Walter Bright 
wrote:

On 9/13/2016 4:47 PM, Walter Bright wrote:

http://ticki.github.io/blog/horrible/

Some worthwhile insights into what makes a good collection 
type.


https://news.ycombinator.com/item?id=12488233

Of particular interest is the advocacy of collision attack 
resistance. Is anyone interested in exploring this w.r.t. D's 
builtin hashes?


https://www.reddit.com/r/rust/comments/52grcl/rusts_stdcollections_is_absolutely_horrible/

Of interest are the comments by the implementer of the hash.


There's a benchmark of languages builtin hashmaps here:

http://vaskir.blogspot.fr/2016/05/hash-maps-rust-vs-f.html

It includes D and Rust. The author found that D wins for the 
lookups but was a bit behind for the insertions (due to the GC, 
maybe?).


Rust's results didn't seem that bad, despite the cryptographic 
hash function it uses.


Re: Should debug{} allow GC?

2016-09-13 Thread Manu via Digitalmars-d
On 14 September 2016 at 10:37, Walter Bright via Digitalmars-d
 wrote:
> On 9/12/2016 6:26 PM, Manu via Digitalmars-d wrote:
>>
>> I'm concerned this would undermine @nogc... If this is supplied in the
>> std library, people will use it, and then you get to a place where you
>> can't rely on @nogc anymore.
>> debug{} blocks sound much safer to me.
>
>
>
> Yeah, I agree. Do you wanna submit an Enhancement Request to Bugzilla on
> this?

https://issues.dlang.org/show_bug.cgi?id=16492


[Issue 16492] New: support @nogc in debug{} blocks

2016-09-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16492

  Issue ID: 16492
   Summary: support @nogc in debug{} blocks
   Product: D
   Version: D2
  Hardware: All
OS: All
Status: NEW
  Severity: enhancement
  Priority: P1
 Component: dmd
  Assignee: nob...@puremagic.com
  Reporter: turkey...@gmail.com

I'm having a lot of trouble debugging @nogc functions. I have a number
of debug functions that use GC, but I can't call them from @nogc
code... should debug{} allow @nogc calls, the same as impure calls?

Does this problem also extend to @safe? (I haven't  encountered it, but I
expect the same problem exists?)
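
For illustration, the kind of code this request would allow (dumpState is a
hypothetical GC-using debug helper; today the debug call below is rejected in
@nogc code):

import std.stdio : writeln;

void dumpState(int x)          // impure and GC-using, fine for debugging
{
    writeln("state = ", x);
}

void hotPath(int x) pure @nogc
{
    // Impure calls are already tolerated inside debug {} for pure code;
    // the request is to extend the same escape hatch to @nogc.
    debug dumpState(x);

    // ... the real @nogc work ...
}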

--


Re: colour lib needs reviewers

2016-09-13 Thread Manu via Digitalmars-d
On 14 September 2016 at 04:10, Random D user via Digitalmars-d
 wrote:
>
> In general, I think basics should be dead simple (even over simplified for
> the very basic case)

I think they are:

  import std.experimental.color;
  auto color = RGB8("red");

I don't think it's possible to make it simpler than that?


> but the API and the docs should provide me layers of
> increasing detail and sophistication, which I can study if (or when) I
> really need control and care about the details.

I'll work on the docs some more.


> Deprecated by who? Shouldn't phobos grade lib include all reasonable
> platforms?

They're all comprehensively supported, I just didn't pre-instantiate
every possible format imaginable. That would be devastating to compile
times, and have absolutely no value.

BGR really just exists because of WINAPI, and Microsoft hasn't recommended
writing GDI-based software for decades.
Look at DX10 (and 11, 12); BGR isn't even supported in modern DirectX.
Microsoft left BGR to die years ago (and good riddance).
That said, there is an instantiation for BGR in package.d... the
reversed versions (ie, alpha first) are quite unusual; just some
big-endian games consoles.

Modern formats have the channels in RGB order; this is true for all
data types: ubyte, short, float, etc.


> I agree that you probably don't see too much variation within windows APIs,
> images (BMP etc.) or with D3D GPU textures (they're compressed anyway and
> asset pipelines to do the conversions beforehand), but I was more of
> thinking image libraries and the numerous, sometimes old and quirky, image
> formats. Or perhaps someone wants to implement a software blitter for their
> favorite toy device/embedded project.

"favorite toy device/embedded project" <- this is the definition of
specialist task. I don't mind if that person has to type 'alias' at
the top of their app.


>>> For 16 bits fairly common are:
>>> RGB565 and RGBA5551, also sometimes you see one of RGBA permutations
>>> (like RGBA8 above).
>>
>>
>> Nobody uses 16bit textures anymore. Compressed textures are both MUCH
>> smaller, and generally higher quality.
>
>
> Sure, agreed. These are not too useful as GPU textures these days (even on
> mobile), but if you do software 2d, load older image formats (image viewer
> etc.) or something even more specialized, these are pretty useful.
>
> I guess I was going for comprehensive (within reason), where as you were
> going for "90% of 2016 colors/image data", which is fine.

There is comprehensive support for all these formats... I just didn't
pre-instantiate the templates.
It's not hard to do at the top of your program:
  alias MyRGBType = RGB!(...details...);


> Anyways, it would be nice to see color (and hopefully image + simple
> blitter) in phobos.

I think that'll come. This is a pre-requisite.


> There's no need to write the same convert/copy/stretch/blend loop over and
> over again. And cpu-side code is nice to have for quick prototyping and
> small-scale work.

Sure.


>>> Just as a completely random idea - How about aliases for HDR formats like
>>> HDR10 and Dolby Vision? Kind of looks like they're just combinations what
>>> you already have.
>>
>>
>> This is very specialist. Instantiations are expensive. If they're not
>> used, it's a waste of compiler effort.
>
>
> HDR TVs and displays are coming fast and with that I'm sure HDR editing and
> photography will be common, which of course means HDR color formats and
> apps. But these can be added later.

There is already support for HDR.
Use a floating point RGB with UHDTV colour space: alias HDR =
RGB!("rgb", float, linear(?), RGBColorSpace.UHDTV);
CompressedRGB also has comprehensive support for all the HDR formats
I've ever encountered.

My point was, I don't want to provide instantiations for these
types... most people don't need it. People writing HDR rendering
software will instantiate the types they need themselves. It's very
specialist work.


Status of static control flow? static foreach?

2016-09-13 Thread Tofu Ninja via Digitalmars-d
Sorry, I have not kept up with the forums for a while. Was just 
curious about the current status of static control flow in D.


The only one I have heard about was static foreach. What happened? 
Are we going to get it? We have static if; are we going to get 
other static control flow like static for, static while, or static 
switch (if any of those make sense)?
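
(For context, the closest thing available today is an ordinary foreach over 
an AliasSeq, which the compiler unrolls at compile time; a minimal sketch:)

import std.meta : AliasSeq;
import std.stdio : writeln;

void main()
{
    // This foreach is unrolled at compile time, one body per element,
    // which covers many of the use cases people want static foreach for.
    foreach (T; AliasSeq!(int, long, double))
    {
        T value = 2;
        writeln(T.stringof, ": ", value * value);
    }
}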


Thanks..


Re: iPhone vs Android

2016-09-13 Thread Walter Bright via Digitalmars-d

On 9/13/2016 6:09 PM, Andrei Alexandrescu wrote:

On 09/13/2016 05:43 PM, Walter Bright wrote:

There's currently nothing that would prevent any handler code from
saving a reference to the thrown exception. Statically allocating
them would break that.


Didn't we reach the agreement to closely investigate a reference counted
exceptions hierarchy?


Yes we did. The design of Phobos would be better and more flexible if exceptions 
didn't rely on the GC.


[Issue 16466] Alignment of reals inside structs on 32 bit OSX should be 16, not 8

2016-09-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16466

github-bugzi...@puremagic.com changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution|--- |FIXED

--


[Issue 16466] Alignment of reals inside structs on 32 bit OSX should be 16, not 8

2016-09-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16466

--- Comment #2 from github-bugzi...@puremagic.com ---
Commits pushed to master at https://github.com/dlang/dmd

https://github.com/dlang/dmd/commit/e4935fb9f3b779792ee8c0d300db8e83358b27a6
fix Issue 16466 - Alignment of reals inside structs on 32 bit OSX should be 16,
not 8

https://github.com/dlang/dmd/commit/9f5998759917629d2aa941d48fd32b5c1445b2fc
Merge pull request #6109 from WalterBright/fix16466

fix Issue 16466 - Alignment of reals inside structs on 32 bit OSX sho…

--


Re: colour lib needs reviewers

2016-09-13 Thread Manu via Digitalmars-d
On 14 September 2016 at 04:25, Random D user via Digitalmars-d
 wrote:
> On Tuesday, 13 September 2016 at 02:00:44 UTC, Manu wrote:
>>
>> On 13 September 2016 at 07:00, Marco Leise via Digitalmars-d
>>  wrote:
>>>
>>> Am Tue, 13 Sep 2016 00:37:22 +1000
>>> schrieb Manu via Digitalmars-d :
>>> Alright, but hybrid gamma is really not something that can be googled. Or
>>> rather I end up at Toyota's Gamma Hybrid product page. :D
>>
>>
>> True. I'm not even sure what the technical term for this sort of gamma
>> function is... I just made that up! :/
>> As Walter and others have asked, I'll have to start adding links to
>> reference material I guess, although that still feels really weird to
>> me for some reason.
>
>
> FWIW I stumbled into this while double-checking HDR standards for my
> previous post. (I'm not a HDR expert, only somewhat interested because it's
> the future of displays/graphics)
>
> https://en.wikipedia.org/wiki/Hybrid_Log-Gamma

Perfect, thanks!


Re: colour lib needs reviewers

2016-09-13 Thread Manu via Digitalmars-d
On 14 September 2016 at 04:34, Marco Leise via Digitalmars-d
 wrote:
> Am Tue, 13 Sep 2016 12:00:44 +1000
> schrieb Manu via Digitalmars-d :
>
>> What is the worth of storing alpha data if it's uniform 0xFF anyway?
>> It sounds like you mean rgbx, not rgba (ie, 32bit, but no alpha).
>> There should only be an alpha channel if there's actually alpha data... 
>> right?
>
> I don't mean RGBX. JavaScript's canvas works that way for
> example. I.e. the only pixel format is RGBA for simplicity's
> sake and I'm not surprised it actually draws something if I
> load it with a 24-bit graphic. ;)

Given this example, isn't it the job of the image loader to populate
the image with data?
If you ask for an RGBA image loaded from a 24bit BMP or something,
sure, it should put 0xFF in there, but I don't know if it's right that
that logic belongs here...?

>> > […] An additive one may be:
>> >
>> >   color = color_dst + color_src * alpha_src
>> >   alpha = alpha_dst
>>
>> I thought about adding blend's, but I stopped short on that. I think
>> that's firmly entering image processing territory, and I felt that was
>> one step beyond the MO of this particular lib... no?
>> Blending opens up a whole world.
>
> I agree with that decision, and that it entails that
> arithmetics are undefined for alpha channels. :-( Yeah bummer.
> The idea that basic (saturating) arithmetics work on colors is
> a great simplification that works for the most part, but let's
> be fair, multiplying two HSV colors isn't exactly going to
> yield a well defined hue either,

You'll notice I didn't add arithmetic operators to the HSx type ;)
If you have HSx colours and want to do arithmetic, cast them to RGB first.


> just as multiplying two
> angles doesn't give you a new angle. I.e.:
> http://math.stackexchange.com/a/47880

I went through a brief phase where I thought about adding an angle
type (beside normint), but I felt it was overkill.
I still wonder if it's the right thing to do though... some type that
understands a circle, and making angle arithmetic work as expected.
I like my solution of not providing arithmetic operators for HSx and
LCh for now though ;)


>> > […]
>> […]
>> From which functions? Link me?
>> I'd love to see more precedents.
>
> Yep, that's better than arguing :)
> So here are all graphics APIs I know and what they do when
> they encounter colors without alpha and need a default value:
>
> SDL:
> https://wiki.libsdl.org/SDL_MapRGB
> "If the specified pixel format has an alpha component it will be returned as 
> all 1 bits (fully opaque)."
>
> Allegro:
> https://github.com/liballeg/allegro5/blob/master/include/allegro5/internal/aintern_pixels.h#L59
> (No docs, just source code that defaults to 255 for alpha when
> converting a color from a bitmap with non-alpha pixel format
> to an ALLEGRO_COLOR.)
>
> Cairo:
> https://www.cairographics.org/manual/cairo-cairo-t.html#cairo-set-source-rgb
> "Sets the source pattern within cr to an opaque color."
>
> Microsoft GDI+:
> https://msdn.microsoft.com/de-de/library/windows/desktop/ms536255%28v=vs.85%29.aspx
> "The default value of the alpha component for this Color
> object is 255."
>
> Gnome GDK:
> https://developer.gnome.org/gdk-pixbuf/2.33/gdk-pixbuf-Utilities.html#gdk-pixbuf-add-alpha
> "[…] the alpha channel is initialized to 255 (full opacity)."
>
> Qt:
> http://doc.qt.io/qt-4.8/qcolor.html#alpha-blended-drawing
> "By default, the alpha-channel is set to 255 (opaque)."
>
> OpenGL:
> https://www.opengl.org/sdk/docs/man2/xhtml/glColor.xml
> "glColor3 variants specify new red, green, and blue values
> explicitly and set the current alpha value to 1.0 (full
> intensity) implicitly."
> (Note: The color can be queried and shows a=1.0 without
> blending operations setting it internally if needed.)
>
> Java (AWT):
> https://docs.oracle.com/javase/7/docs/api/java/awt/Color.html#Color%28int,%20boolean%29
> "If the hasalpha argument is false, alpha is defaulted to 255."
>
> Apple's Quartz does not seem to provide color space
> conversions and always requires the user to give the alpha
> value for new colors, so there is no default:
> https://developer.apple.com/library/tvos/documentation/GraphicsImaging/Reference/CGColor/index.html#//apple_ref/c/func/CGColorCreate

Touche! ;)


> One thing I noticed is that many of those strictly separate
> color spaces from alpha as concepts. For example in Quartz
> *all* color spaces have alpha. In Allegro color space
> conversions are ignorant of alpha. That begs the question
> what should happen when you convert RGBA to a HLx
> color space and back to RGBA. Can you retain the alpha value?
> CSS3 for example has HSLA colors that raise the bar a bit.

I've actually had the thought that I should remove alpha from RGB, and
instead provide an aggregate colour type where you could make RGBA by
RGB +  alpha, and each element in the aggregate would follow its own
natural arithmetic rules... again, it feels 

zip: why isn't requireSameLength the default?

2016-09-13 Thread Timothee Cour via Digitalmars-d-learn
in zip: why isn't requireSameLength the default?
This is the most common case and would fit with the goal of being safe by
default.
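
(For reference, the current default is StoppingPolicy.shortest; the strict 
check has to be requested explicitly. A minimal sketch:)

import std.range : zip, StoppingPolicy;
import std.stdio : writeln;

void main()
{
    auto a = [1, 2, 3];
    auto b = ["x", "y"];

    // Default policy: stops at the shorter range, silently dropping a[2].
    foreach (n, s; zip(a, b))
        writeln(n, " ", s);

    // Opt-in strict policy: throws once the length mismatch is detected.
    try
        foreach (n, s; zip(StoppingPolicy.requireSameLength, a, b))
            writeln(n, " ", s);
    catch (Exception e)
        writeln("caught: ", e.msg);
}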


Re: iPhone vs Android

2016-09-13 Thread Andrei Alexandrescu via Digitalmars-d

On 09/13/2016 05:43 PM, Walter Bright wrote:

The all-or-nothing approach to using the GC is as wrong as any
programming methodology is.


That's a bit much.


There's currently nothing that would prevent any handler code from
saving a reference to the thrown exception. Statically allocating
them would break that.


Didn't we reach the agreement to closely investigate a reference counted 
exceptions hierarchy?



Andrei



Re: Critque of Rust's collection types

2016-09-13 Thread Walter Bright via Digitalmars-d

On 9/13/2016 4:47 PM, Walter Bright wrote:

http://ticki.github.io/blog/horrible/

Some worthwhile insights into what makes a good collection type.


https://news.ycombinator.com/item?id=12488233

Of particular interest is the advocacy of collision attack resistance. Is anyone 
interested in exploring this w.r.t. D's builtin hashes?


https://www.reddit.com/r/rust/comments/52grcl/rusts_stdcollections_is_absolutely_horrible/

Of interest are the comments by the implementer of the hash.


Re: Should debug{} allow GC?

2016-09-13 Thread Walter Bright via Digitalmars-d

On 9/12/2016 6:26 PM, Manu via Digitalmars-d wrote:

I'm concerned this would undermine @nogc... If this is supplied in the
std library, people will use it, and then you get to a place where you
can't rely on @nogc anymore.
debug{} blocks sound much safer to me.



Yeah, I agree. Do you wanna submit an Enhancement Request to Bugzilla on this?


Re: colour lib needs reviewers

2016-09-13 Thread Manu via Digitalmars-d
On 14 September 2016 at 00:56, John Colvin via Digitalmars-d
 wrote:
> On Tuesday, 13 September 2016 at 09:31:53 UTC, Manu wrote:
>>
>> On 13 September 2016 at 17:47, John Colvin via Digitalmars-d
>>  wrote:
>>>
>>> On Tuesday, 13 September 2016 at 01:05:56 UTC, Manu wrote:
>
>
> Also can I swizzle channels directly?



 I could add something like:
   auto brg =  c.swizzle!"brg";

 The result would be strongly typed to the new arrangement.
>>>
>>>
>>>
>>> Perfect use-case for opDispatch like in gl3n.
>>
>>
>> One trouble with arbitrary swizzling is that people often want to
>> write expressions like "c.xyzy", or whatever, but repeating
>> components... and that's not something my colours can do. It's useful
>> in realtime code, but it doesn't mean anything, and you can't
>> interpret that value as a colour anymore after you do that.
>> This sort of thing is used when you're not actually storing colours in
>> textures, but instead just some arbitrary data. Don't use a colour
>> type for that, use a vector type instead.
>> What that sort of swizzling really is, are vector operations, not
>> colour operations. Maybe I could add an API to populate a vector from
>> the components of colours, in-order... but then we don't actually have
>> linear algebra functions in phobos either! So colour->vectors and
>> arbitrary swizzling is probably not something we need immediately.
>>
>> In my lib, colours are colours. If you have `BGR8 x` and `RGB8 y`, and add
>> them, you don't get x.b+y.r, x.g+y.g, x.r+y.b... that's not a colour
>> operation, that's an element-wise vector operation.
>
>
> Fair enough, you know much better than me how useful it would be. I was just
> suggesting that if you do support some sorts of swizzling then opDispatch
> would allow you to avoid users having to use strings. It would be as simple
> to implement as `alias opDispatch = swizzle;` given the swizzle function you
> were using before.

Oh, it would be super-useful! Just that it's not a colour operation,
it's linear algebra... I think it's a job for another lib. Getting a
colour as a vector is all that's needed.
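
(For readers unfamiliar with the opDispatch trick being referred to, a toy 
sketch of string-based swizzling on a plain vector type, independent of the 
colour lib:)

import std.algorithm.searching : all;
import std.stdio : writeln;

struct Vec3
{
    float x = 0, y = 0, z = 0;

    // swizzle!"zxy" builds a new Vec3 from the components named in the string.
    Vec3 swizzle(string order)() const
        if (order.length == 3 && order.all!(c => c == 'x' || c == 'y' || c == 'z'))
    {
        Vec3 r;
        r.x = mixin("this." ~ order[0]);
        r.y = mixin("this." ~ order[1]);
        r.z = mixin("this." ~ order[2]);
        return r;
    }

    // v.zxy forwards to swizzle!"zxy", so no string literal at the call site.
    alias opDispatch = swizzle;
}

void main()
{
    auto v = Vec3(1, 2, 3);
    writeln(v.zxy);   // Vec3(3, 1, 2)
}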


Re: iPhone vs Android

2016-09-13 Thread Walter Bright via Digitalmars-d

On 9/13/2016 11:24 AM, deadalnix wrote:

No you don't, as how often the GC kicks in depends on the rate at which you
produce garbage, which is going to be very low with a hybrid approach.



Also, if you only use GC for exceptions, there isn't going to be much memory it 
needs to scan.


Re: iPhone vs Android

2016-09-13 Thread Walter Bright via Digitalmars-d

On 9/13/2016 4:59 AM, Shachar Shemesh wrote:

Here are my worries about the hybrid approach. The GC run time is proportional not
to the amount of memory you manage with the GC, but to the amount of memory that
might hold a pointer to GC managed memory. In other words, if most of my
memory is RC managed, but some of it is GC, I pay the price of both memory
managers on most of my memory.


Memory allocated with malloc() is unknown to the GC. This works fine unless a 
reference to the GC memory is inserted into malloc'd data, which is why there's 
an API to the GC to let it know about such things.
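
For reference, that API is core.memory.GC.addRange / GC.removeRange; a 
minimal sketch of registering a malloc'd block that stores a GC reference:

import core.memory : GC;
import core.stdc.stdlib : free, malloc;

struct Node
{
    int[] data;   // a GC-allocated slice stored inside malloc'd memory
}

void example()
{
    auto n = cast(Node*) malloc(Node.sizeof);

    // Without this the GC cannot see the slice stored in *n and might
    // collect it while it is still reachable through the malloc'd block.
    GC.addRange(n, Node.sizeof);

    n.data = new int[](16);

    // ... use n ...

    GC.removeRange(n);   // un-register before freeing
    free(n);
}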




Re: iPhone vs Android

2016-09-13 Thread Walter Bright via Digitalmars-d

On 9/13/2016 4:13 PM, H. S. Teoh via Digitalmars-d wrote:

There's nothing about the 'throw' keyword that requires GC allocation.
It's just that `throw new Exception(...)` has become a standard
incantation. The exception object itself can, for example, be emplaced
onto a static buffer as I propose above.


There's currently nothing that would prevent any handler code from saving a 
reference to the thrown exception. Statically allocating them would break that.


Critque of Rust's collection types

2016-09-13 Thread Walter Bright via Digitalmars-d

http://ticki.github.io/blog/horrible/

Some worthwhile insights into what makes a good collection type.


Can vibe d leverage existing web technologies?

2016-09-13 Thread Intersteller via Digitalmars-d-learn
vibe.d does not have as much lateral support as the most common web 
technologies do. Can vibe.d leverage pre-existing techs such as 
php, ruby/rails, etc? Starting from scratch and having to build a 
robust and secure framework is really not the way to go.




Re: iPhone vs Android

2016-09-13 Thread Laeeth Isharc via Digitalmars-d

On Tuesday, 13 September 2016 at 19:30:16 UTC, Marco Leise wrote:

Am Tue, 13 Sep 2016 18:16:27 +
schrieb Laeeth Isharc :

Thank you for the clear explanation.   So if you don't have 
GC allocations within RC structures and pick one or the other,

 then the concern does not apply?


That's right. Often such structures contain collections of
things, not just plain fields. And a list or a hash map working
in a @nogc environment typically checks its contained type for
any pointers with hasIndirections!T and if so adds its storage
area to the GC scanned memory to be on the safe side.
That means every collection needs a way to exempt its contents
from GC scanning and the user needs to remember to tell it so.

A practical example of that are the EMSI containers, but other 
containers, e.g. in my own private code, look similar.


https://github.com/economicmodeling/containers

  struct DynamicArray(T, Allocator = Mallocator, bool supportGC = shouldAddGCRange!T)

  {
  ...
  }

Here, when you use a dynamic array you need to specify the type 
and allocator before you get to the point of opting out of GC 
scanning. Many will prefer concise code, go with GC scanning to 
be "better safe than sorry" or don't want to fiddle with the 
options as long as the program works. This is no complaint, I'm 
just trying to draw a picture of how people end up with more GC 
scanned memory than necessary. :)


Thanks,  Marco.

So to a certain extent it's a problem of perception and of 
cognitive load when one is coming to the language - none of the 
steps taken individually is necessarily difficult, but cumulatively 
it's a lot to take in or figure out yourself - then, as you say, 
people do what's easy or manageable, and then those habits come 
to constitute one's sense of what's implied by the language and 
implementation when that's not necessarily right.


There's a great blog post to be written on getting along with and 
without the GC for fun and profit - showing how to do the things 
you discussed, comparing the amount of garbage D generates with eg 
Java to make vivid and concrete just how the situation is different, 
illustrating how one can use the allocator for significant 
allocations alongside the GC for trivial ones, illustrating how 
to pre-allocate buffers, and finally demonstrating how in 
practice to use the GC profiling instrumentation we have.


Or perhaps rather a series of blog posts.   It would be helpful 
there to get across how the situation has improved.   Because the 
topic is almost guaranteed to come up in social media discussions 
(which matter, as when people Google for it that is often what 
they will find), and we live in an age where the complexity means 
people use heuristic thinking, and you can hardly blame them 
when there is no one place to point them to (as we have for 
ranges, slices etc).


I would write it myself if I had time and understood better not 
just garbage collection techniques,  but also how other languages 
are in practice.   But it's not something for me at this point,  
and so someone else will have to do so.   I will see when it is a 
bit quieter in a year or two if someone that works with me wants 
to do it,  but in the meantime it's a great opportunity to 
improve the messaging.


The wiki could also be a bit clearer, last I looked, on low-GC 
solutions. Eg you won't easily find the EMSI containers from a 
casual browse.


Walter has mentioned the value to a technology professional of 
blogging and having a personal page, and I think that is right.



Laeeth


Re: iPhone vs Android

2016-09-13 Thread Lurker via Digitalmars-d
On Tuesday, 13 September 2016 at 22:19:54 UTC, Jonathan M Davis 
wrote:
So, I really think that we need to find a way to make it so 
that exceptions aren't GC allocated normally anymore - or at 
least have a way to reasonably and easily not be GC allocated - 
but the problem is @nogc, not the actual memory management or 
its cost.


- Jonathan M Davis


Introduce RefCountedException?

Also that the pattern almost always is "throw new" and *very* 
rarely it is "Exception e = new ...; throw e;". I think we might 
be able to take advantage of that (e.g. "throw new E" could be a 
special case of "new" that allocates on some sort of "Exception" 
heap that is manually managed, or recognize RefCountedException).


Re: iPhone vs Android

2016-09-13 Thread H. S. Teoh via Digitalmars-d
On Tue, Sep 13, 2016 at 03:19:54PM -0700, Jonathan M Davis via Digitalmars-d 
wrote:
[...]
> But none of the code that's marked @nogc can throw an exception unless
> you're either dealing with pre-allocated exceptions (in which case,
> they're less informative),

I don't see why pre-allocated exceptions would be less informative. You
can always modify the exception object before throwing it, after all.
In fact, I've always wondered about the feasibility of a @nogc exception
handling system where the exception is emplaced onto a fixed static
buffer, so that no allocation (except at the start of the program) is
actually necessary. Of course, chained exceptions throw(!) a monkey
wrench into the works, but assuming we forego chained exceptions,
wouldn't this work around the problem of being unable to allocate
exceptions in @nogc code? (Albeit with its own limitations, obviously.
But it would be better than being unable to use exceptions at all in
@nogc code.)


[...]
> So, I really think that we need to find a way to make it so that
> exceptions aren't GC allocated normally anymore - or at least have a
> way to reasonably and easily not be GC allocated - but the problem is
> @nogc, not the actual memory management or its cost.
[...]

There's nothing about the 'throw' keyword that requires GC allocation.
It's just that `throw new Exception(...)` has become a standard
incantation. The exception object itself can, for example, be emplaced
onto a static buffer as I propose above.
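
A minimal sketch of that emplacement idea (one static buffer, so no chaining 
and not thread-safe; staticException is a hypothetical helper):

import std.conv : emplace;

align(16) __gshared ubyte[__traits(classInstanceSize, Exception)] excBuffer;

Exception staticException(string msg)
{
    // Construct the Exception in the static buffer instead of on the GC heap.
    return emplace!Exception(excBuffer[], msg);
}

void mayFail(int x)
{
    if (x < 0)
        throw staticException("x must be non-negative");
}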


T

-- 
I think Debian's doing something wrong, `apt-get install pesticide', doesn't 
seem to remove the bugs on my system! -- Mike Dresser


Re: iPhone vs Android

2016-09-13 Thread deadalnix via Digitalmars-d
On Tuesday, 13 September 2016 at 22:19:54 UTC, Jonathan M Davis 
wrote:
The big problem with exceptions being allocated by the GC isn't 
really the GC but @nogc.


No, the problem IS @nogc. Allocating with the GC is absolutely 
not a problem if you deallocate properly. What is a problem is 
when you leak (ie, when the ownership is transferred to the GC). 
If you don't leak, the GC does not kick in.




Re: iPhone vs Android

2016-09-13 Thread Jonathan M Davis via Digitalmars-d
On Tuesday, September 13, 2016 14:43:09 Walter Bright via Digitalmars-d wrote:
> Case in point, exceptions. Currently exceptions are fairly wedded to being
> GC allocated. Some people have opined that this is a major problem, and it
> is if the app is throwing a lot of exceptions. But exceptions should be
> exceptional.
>
> There is still a place for GC, even in a high performance app. The
> all-or-nothing approach to using the GC is as wrong as any programming
> methodology is.

The big problem with exceptions being allocated by the GC isn't really the
GC but @nogc. Obviously, a program that does not use the GC at all can't
allocate an exception with the GC, but in general, I would fully expect that
even a program that allows the GC but uses it minimally would have no
problem with the garbage created by exceptions precisely because they should
be rare. But none of the code that's marked @nogc can throw an exception
unless you're either dealing with pre-allocated exceptions (in which case,
they're less informative), or you are _very_ careful with how you write that
code so that you can get away with malloc-ing the exception (but that
approach won't work in the general case unless you don't care about leaking
the memory from the exception, since most code would assume that the
exception was allocated by the GC and wouldn't know how to free it). So,
@nogc code is going to have a tendency to not use exceptions just because
exceptions don't work well without the GC. And those who want to use
exceptions and are willing to have their code not be @nogc will forgo @nogc
even when the code could have been @nogc otherwise.

So, I really think that we need to find a way to make it so that exceptions
aren't GC allocated normally anymore - or at least have a way to reasonably
and easily not be GC allocated - but the problem is @nogc, not the actual
memory management or its cost.

- Jonathan M Davis



[Issue 15088] timer_t should be void*, not int

2016-09-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=15088

--- Comment #3 from github-bugzi...@puremagic.com ---
Commits pushed to master at https://github.com/dlang/druntime

https://github.com/dlang/druntime/commit/2c910837f8f3f6a247e252780be2bf2d67df7e03
Bug 15088: timer_t should be void* on glibc

https://issues.dlang.org/show_bug.cgi?id=15088

https://github.com/dlang/druntime/commit/561e520f041fe96e29247f81d6e5013ada10092b
Merge pull request #1645 from tomerfiliba/patch-3

Bug 15088: timer_t should be void* on glibc

--


[Issue 15088] timer_t should be void*, not int

2016-09-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=15088

David Nadlinger  changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 CC||c...@klickverbot.at
 Resolution|--- |FIXED

--


[Issue 16486] Compiler see template alias like a separate type in template function definition

2016-09-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16486

--- Comment #6 from Sky Thirteenth  ---
(In reply to ag0aep6g from comment #5)
> (In reply to Sky Thirteenth from comment #4)
> > it can be pretty useful feature.
> 
> Sure. I don't think anyone disputes that it could be a useful feature. So we
> keep the issue open as an enhancement request.
> 
> It's not a bug, because, as far as I know, there has been no intention to
> support the feature, yet. That is, the language specification doesn't say
> that it should work, and there's no (buggy) code in the compiler that
> attempts to do it.

Then, OK. 

So, at least now I will know about it. 
And also know that this feature may appear later, in the future.

Thank you for your time spent on this. =)

--


[Issue 16486] Compiler see template alias like a separate type in template function definition

2016-09-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16486

--- Comment #5 from ag0ae...@gmail.com ---
(In reply to Sky Thirteenth from comment #4)
> it can be pretty useful feature.

Sure. I don't think anyone disputes that it could be a useful feature. So we
keep the issue open as an enhancement request.

It's not a bug, because, as far as I know, there has been no intention to
support the feature, yet. That is, the language specification doesn't say that
it should work, and there's no (buggy) code in the compiler that attempts to do
it.

--


Re: iPhone vs Android

2016-09-13 Thread jmh530 via Digitalmars-d
On Tuesday, 13 September 2016 at 20:19:40 UTC, Jonathan M Davis 
wrote:


LOL. It's on my unfinished projects list, so I intend to 
complete it at some point, but unfortunately, it's been on the 
backburner for a while. Still, it's funny that you say that 
considering how many typos were in that post, since I neglected 
to reread it before sending it. :)


- Jonathan M Davis


Typos are for spellcheckers.


Re: iPhone vs Android

2016-09-13 Thread Walter Bright via Digitalmars-d

On 9/13/2016 10:44 AM, Jonathan M Davis via Digitalmars-d wrote:

Folks have posted here before about taking that approach with games and the
like that they've written. In a number of cases, simply being careful about
specific pieces of code and avoiding the GC in those cases was enough to get
the required performance. In some cases, simply disabling the GC during a
critical piece of code and re-enabling it afterwards fixes the performance
problems triggered by the GC without even needing manual memory management or
RC. In others, making sure that the critical thread (e.g. the rendering
thread) was not GC-managed while letting the rest of the app use the GC
takes care of the problem.

We need reference counting to solve certain problems (e.g. cases where
deterministic destruction of stuff on the heap is required), but other stuff
(like array slices) work far better with a GC. So going either full-on RC or
full-on GC is not going to be good move for most programs. I don't think
that there's any question that we'll be best off by having both as solid
options, and best practices can develop as to when to use one or the other.


Case in point, exceptions. Currently exceptions are fairly wedded to being GC 
allocated. Some people have opined that this is a major problem, and it is if 
the app is throwing a lot of exceptions. But exceptions should be exceptional.


There is still a place for GC, even in a high performance app. The 
all-or-nothing approach to using the GC is as wrong as any programming 
methodology is.




Re: Copy a struct and its context

2016-09-13 Thread Steven Schveighoffer via Digitalmars-d-learn

On 9/13/16 5:01 PM, Yuxuan Shui wrote:


For example, a common use case might be I want to capture everything by
value. Instead of adding all the fields by hand and passing them to the
constructor, I want the compiler to do it for me.

i.e. I wish I could (borrowing C++ syntax):

struct A[=] {
   ...
}

Then the context will be captured by value instead of reference.


This is a valid enhancement. Why not try and ask for it?

I don't know if the specific syntax would work for D, but the feature 
seems useful in some respects.


-Steve


Re: Virtual Methods hurting performance in the DMD frontend

2016-09-13 Thread Stefan Koch via Digitalmars-d

On Monday, 12 September 2016 at 08:03:45 UTC, Stefan Koch wrote:

On Sunday, 11 September 2016 at 21:48:56 UTC, Stefan Koch wrote:

Those are indirect class

I meant indirect calls!

@Jacob

Yes, that is my intended solution.
Having a type-field in root-object.


A small update on this.
When dmd is compiled with ldc this problem seems to lessen.

However, much of dmd's code, especially in dtemplate, could be 
simplified if it could just switch on a value instead of doing 
method calls and null checks.
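
A toy sketch of the tag-switch idea (the names are made up, not dmd's actual 
hierarchy):

enum NodeKind : ubyte { add, mul, leaf }

class Node
{
    NodeKind kind;              // type tag stored once, in the root class
    this(NodeKind k) { kind = k; }
}

// Dispatch on the tag instead of a virtual call plus null checks.
int cost(Node n)
{
    final switch (n.kind)
    {
        case NodeKind.add:  return 2;
        case NodeKind.mul:  return 3;
        case NodeKind.leaf: return 1;
    }
}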


Re: Copy a struct and its context

2016-09-13 Thread Yuxuan Shui via Digitalmars-d-learn
On Tuesday, 13 September 2016 at 20:36:22 UTC, Steven 
Schveighoffer wrote:

On 9/13/16 4:11 PM, Yuxuan Shui wrote:
On Tuesday, 13 September 2016 at 20:00:40 UTC, Steven 
Schveighoffer wrote:
Not familiar with C++ lambda. You can always "specify" how to 
capture

the data by directly declaring it:

auto foo()
{
int x;
static struct S
{
int x;
}
return S(x);
}


It just feels a bit tedious to do something manually while the 
compiler has enough information to do it for me.


Do what for you? How does it know that you don't want to use a 
closure and a reference to that instead?


Note that all the internals for this are implementation 
defined. Given sufficient conditions, the compiler could 
"cheat" and allocate the data inside the struct itself instead. 
For example, if all referenced data was immutable.


-Steve


For example, a common use case might be I want to capture 
everything by value. Instead of adding all the fields by hand 
and passing them to the constructor, I want the compiler to do it 
for me.


i.e. I wish I could (borrowing C++ syntax):

struct A[=] {
   ...
}

Then the context will be captured by value instead of reference.


[Issue 16486] Compiler see template alias like a separate type in template function definition

2016-09-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16486

--- Comment #4 from Sky Thirteenth  ---
(In reply to Steven Schveighoffer from comment #3)
> The issue is that the compiler doesn't replace the alias until AFTER
> instantiation.
> 
> It should be able to see that:
> 
> testFunctionC(T)(TestAliasB!T arg)
> 
> is the same as 
> 
> testFunctionC(T)(TestType!(T, 3) arg)
> 
> and reduce basically to the same thing as testFunctionA
> 
> Still definitely an enhancement, as there is no definition of how the
> compiler will work backwards through template instantiations.
> 
> It may be a duplicate, I distinctly remember wishing for this kind of thing
> back when D2 was in infancy.

Actually, in my opinion, the way it acts now looks wrong, because this feature is
not used to its full power.

We can use templates and template aliases when declaring variables, but can't do
the same thing when describing arguments for other templates (no matter whether it
is a type or a function). Doesn't that feel wrong to you?

The point of these features is to simplify things. But in this case we need
to write something like this:
```
auto foo( That )( What.TheHell!(is(That : Thing) && How!(IShould!(Use!That)))
arg )
{
//...
}
```
Rather than just writing this:
```
auto foo( That )( CatInABag!That arg )
{
//...
}
```

Of course, now I'm exaggerating things, but it can still be a pretty useful
feature.

--


Re: pure functions

2016-09-13 Thread Jonathan M Davis via Digitalmars-d-learn
On Tuesday, September 13, 2016 20:08:22 Patrick Schluter via Digitalmars-d-
learn wrote:
> On Tuesday, 13 September 2016 at 06:59:10 UTC, Jonathan M Davis
>
> wrote:
> > On Tuesday, September 13, 2016 03:33:04 Ivy Encarnacion via
> >
> > Digitalmars-d- learn wrote:
> >  A pure function cannot call any function that is not pure [...]
>
> I've read that a lot but it's not true. A pure function can call an
> impure function. The restriction is that the impure function
> called within the pure function does not depend on or mutate
> state existing outside of that function. If the called function
> changes a local variable, it has no bearing outside the scope.
> Here a contrived example in C.
>
> size_t strlen(const char *str); is pure. If I define it this way
> for example:
>
> size_t my_strlen(const char *str)
> {
>char temp[50];
>
>strlcpy(temp, str, sizeof temp);  /* strlcpy is not pure as it
> mutates something outside of its "scope" */
>
>return strlen(str);
> }
>
> That function is pure. There is no visible change outside of it.
>
> my_strlen and strlen have exactly the same properties.

That's because you're talking about functional purity and _not_ D's pure. If
you want to talk about D's pure, you pretty much need to forget about
functional purity. While D's pure allows you to get functional purity, it's
actually something different. It would be far more accurate at this point if
it were called something like @noglobal. Actual, functional purity really
only comes into play when a pure function's parameters are immutable or
implicitly convertible to immutable, at which point the compiler can
guarantee that the same arguments to the function will result in the
function returning the same result. Sometimes, pure functions whose
parameters are immutable or implicitly convertible to immutable are referred
to as strongly pure functions, whereas those whose parameters aren't are
called weakly pure. So-called strongly pure functions are what you need if
you want to talk about functional purity, whereas so-called weakly pure
functions are still pure as far as D is concerned, because they don't
violate the functional purity of a strongly pure function if they're called
by it.
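
A minimal illustration of the distinction:

// Weakly pure: no access to mutable global state, but it takes a mutable
// reference, so the compiler cannot cache or reorder calls to it.
void fill(int[] buf, int value) pure
{
    foreach (ref b; buf)
        b = value;
}

// Strongly pure: the parameter is immutable, so the same argument always
// produces the same result, enabling the functional-purity optimizations.
int square(immutable int x) pure
{
    return x * x;
}

int caller() pure
{
    int[4] buf;
    fill(buf[], 7);          // weakly pure call, still allowed in pure code
    return square(3) + buf[0];
}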

Originally, for a function to be pure in D, it _had_ to have parameters
which were immutable or implicitly convertible to immutable, which actually
was functionally pure, but it was too restrictive to be useful, so pure was
expanded to what it is now, which makes it so that it's not really about
functional purity anymore even though it enables functional purity in a way
that the compiler can detect it and optimize for it.

But when a D programmer talks about a function that's impure, they're
generally not talking about functional purity but about whether it's marked
with pure or inferred as pure by the compiler, and per that definition, the
code that you have above _is_ pure. In a way, what you're trying to prove
about impure in D is both right and wrong, because we're dealing with
conflicting definitions of purity here. When discussing pure in D, if you
want to talk about whether a function is functionally pure in the sense that
one would normally talk about pure functions outside of D, then you need to
clearly state that you're talking about functional purity and not about D's
pure. When simply saying that something is pure or not, folks here are going
to expect you to be talking about D's definition of purity.

By the way, the only ways to get around pure are:

1. use debug blocks (where pure is ignored in order to facilitate debugging).

2. use an extern(*) other than extern(D) so that the body of the function
   doesn't have to be marked pure like the prototype does (since other types
   of linkage don't actually use pure), allowing you to lie to the compiler.

3. use function pointers and cast an impure function pointer to a pure one
   and thereby lie to the compiler.

So, there are a couple of ways to fool the compiler, but there's only one
way that's sanctioned by it, and it's only intended for debugging purposes.

- Jonathan M Davis



Re: Copy a struct and its context

2016-09-13 Thread Steven Schveighoffer via Digitalmars-d-learn

On 9/13/16 4:11 PM, Yuxuan Shui wrote:

On Tuesday, 13 September 2016 at 20:00:40 UTC, Steven Schveighoffer wrote:

Not familiar with C++ lambda. You can always "specify" how to capture
the data by directly declaring it:

auto foo()
{
int x;
static struct S
{
int x;
}
return S(x);
}


It just feels a bit tedious to do something manually while the compiler
has enough information to do it for me.


Do what for you? How does it know that you don't want to use a closure 
and a reference to that instead?


Note that all the internals for this are implementation defined. Given 
sufficient conditions, the compiler could "cheat" and allocate the data 
inside the struct itself instead. For example, if all referenced data 
was immutable.


-Steve


Re: pure functions

2016-09-13 Thread Steven Schveighoffer via Digitalmars-d-learn

On 9/13/16 4:08 PM, Patrick Schluter wrote:

On Tuesday, 13 September 2016 at 06:59:10 UTC, Jonathan M Davis wrote:

On Tuesday, September 13, 2016 03:33:04 Ivy Encarnacion via
Digitalmars-d- learn wrote:

 A pure function cannot call any function that is not pure [...]


I've read that a lot but it's not true. A pure function can call an impure
function. The restriction is that the impure function called within the
pure function does not depend on or mutate state existing outside of
that function. If the called function changes a local variable, it has no
bearing outside the scope.


D defines pure differently.

You are allowed to declare a function is pure if it takes (and possibly 
changes) mutable references. It can be called by pure functions, but 
will not be optimized in the same way one would expect traditional pure 
functions to be optimized.


We call it "weak-pure".

In D, an "impure" function is defined exactly as one that accesses 
mutable global state. Everything else can be declared pure. The one 
exception is memory allocation.


-Steve


Re: iPhone vs Android

2016-09-13 Thread Jonathan M Davis via Digitalmars-d
On Tuesday, September 13, 2016 18:50:20 jmh530 via Digitalmars-d wrote:
> On Tuesday, 13 September 2016 at 18:04:19 UTC, Jonathan M Davis
>
> wrote:
> > As I understand it, [snnip]
>
> If you ever write a book, I would pre-order it.

LOL. It's on my unfinished projects list, so I intend to complete it at some
point, but unfortunately, it's been on the backburner for a while. Still,
it's funny that you say that considering how many typos were in that post,
since I neglected to reread it before sending it. :)

- Jonathan M Davis



Re: Discarding all forum drafts at once

2016-09-13 Thread Basile B. via Digitalmars-d-learn

On Tuesday, 13 September 2016 at 18:15:56 UTC, Nordlöw wrote:
I have lots of unsent drafts I would like to discard all at 
once. Is this possible somehow?


Delete the cookies.


Re: Copy a struct and its context

2016-09-13 Thread Yuxuan Shui via Digitalmars-d-learn
On Tuesday, 13 September 2016 at 20:00:40 UTC, Steven 
Schveighoffer wrote:

On 9/13/16 3:42 PM, Yuxuan Shui wrote:

[...]


There's nothing in the language to prevent this optimization.


[...]


Again, could be clearer. But the fact that both the function 
and the struct affect the same data kind of dictates it needs 
to be a reference.



[...]


Not familiar with C++ lambda. You can always "specify" how to 
capture the data by directly declaring it:


auto foo()
{
int x;
static struct S
{
int x;
}
return S(x);
}


It just feels a bit tedious to do something manually while the 
compiler has enough information to do it for me.




In D, if you have a closure, it's going to be heap allocated. 
Just the way it is. If you don't want that, you have to avoid 
them.


-Steve




Re: pure functions

2016-09-13 Thread Patrick Schluter via Digitalmars-d-learn
On Tuesday, 13 September 2016 at 06:59:10 UTC, Jonathan M Davis 
wrote:
On Tuesday, September 13, 2016 03:33:04 Ivy Encarnacion via 
Digitalmars-d- learn wrote:


 A pure function cannot call any function that is not pure [...]


I've read that a lot but it's not true. A pure function can call an 
impure function. The restriction is that the impure function 
called within the pure function does not depend on or mutate 
state existing outside of that function. If the called function 
changes a local variable, it has no bearing outside the scope.

Here a contrived example in C.

size_t strlen(const char *str); is pure. If I define it this way 
for example:


size_t my_strlen(const char *str)
{
  char temp[50];

  strlcpy(temp, str, sizeof temp);  /* strlcpy is not pure as it 
mutates something outside of its "scope" */


  return strlen(str);
}

That function is pure. There is no visible change outside of it.

my_strlen and strlen have exactly the same properties.



Re: Copy a struct and its context

2016-09-13 Thread Basile B. via Digitalmars-d-learn
On Tuesday, 13 September 2016 at 01:32:19 UTC, Steven 
Schveighoffer wrote:

On 9/12/16 4:11 PM, Ali Çehreli wrote:

On 09/10/2016 10:44 PM, Yuxuan Shui wrote:
I recently noticed nested struct capture its context by 
reference

(which, BTW, is not mentioned at all here:
https://dlang.org/spec/struct.html#nested).


He wants to deep-copy the struct, meaning copy the context 
pointer data. Meaning if you change 'i' in s, then s_copy's foo 
still returns 42.


I don't think it is or should be doable.

-Steve


With stack stomping there's no way that any hack could work.


Re: colour lib needs reviewers

2016-09-13 Thread Basile B. via Digitalmars-d

On Tuesday, 13 September 2016 at 02:00:44 UTC, Manu wrote:
What is the worth of storing alpha data if it's uniform 0xFF 
anyway?


He he, that's the big problem with classic image formats. When a 
graphic is described in terms of vectors, fills, strokes, gradients, 
layers, etc (not to mention SVG), there's no more waste.


Re: Copy a struct and its context

2016-09-13 Thread Steven Schveighoffer via Digitalmars-d-learn

On 9/13/16 3:42 PM, Yuxuan Shui wrote:

On Tuesday, 13 September 2016 at 01:32:19 UTC, Steven Schveighoffer wrote:

On 9/12/16 4:11 PM, Ali Çehreli wrote:

On 09/10/2016 10:44 PM, Yuxuan Shui wrote:

I recently noticed nested struct capture its context by reference
(which, BTW, is not mentioned at all here:
https://dlang.org/spec/struct.html#nested).


" It has access to the context of its enclosing scope (via an added
hidden field)."

It needs to be a reference. Otherwise, you store the entire stack
frame in the struct? That wouldn't be a "field". It also has write
access to the context:


Why not just capture the variables that are actually being referenced?


There's nothing in the language to prevent this optimization.


Also being a field doesn't put limits on the size of the "field".


Again, could be clearer. But the fact that both the function and the 
struct affect the same data kind of dictates it needs to be a reference.



I like how C++ lambda lets you choose which variables to capture, and how
they are captured. I'm a little disappointed that D doesn't let me do the
same.


Not familiar with C++ lambda. You can always "specify" how to capture 
the data by directly declaring it:


auto foo()
{
int x;
static struct S
{
int x;
}
return S(x);
}

In D, if you have a closure, it's going to be heap allocated. Just the 
way it is. If you don't want that, you have to avoid them.


-Steve


Re: Copy a struct and its context

2016-09-13 Thread Yuxuan Shui via Digitalmars-d-learn
On Tuesday, 13 September 2016 at 01:32:19 UTC, Steven 
Schveighoffer wrote:

On 9/12/16 4:11 PM, Ali Çehreli wrote:

On 09/10/2016 10:44 PM, Yuxuan Shui wrote:
I recently noticed nested struct capture its context by 
reference

(which, BTW, is not mentioned at all here:
https://dlang.org/spec/struct.html#nested).


" It has access to the context of its enclosing scope (via an 
added hidden field)."


It needs to be a reference. Otherwise, you store the entire 
stack frame in the struct? That wouldn't be a "field". It also 
has write access to the context:


Why not just capture the variables that are actually being 
referenced? Also being a field doesn't put limits on the size of 
the "field".


I like how C++ lambda lets you choose which variables to capture, 
and how they are captured. I'm a little disappointed that D doesn't 
let me do the same.





UFCS for local symbols

2016-09-13 Thread Yuxuan Shui via Digitalmars-d
When I tried to use UFCS to call a nested function, I was 
surprised that it didn't work. Then I looked up the documentation, and 
found out this is actually by design. However, I found the reason 
listed in the documentation very unconvincing.


I will copy the example here:

int front(int[] arr) { return arr[0]; }

void main()
{
int[] a = [1,2,3];
auto x = a.front();   // call .front by UFCS

auto front = x;   // front is now a variable
auto y = a.front();   // Error, front is not a function
}

class C
{
int[] arr;
int front()
{
return arr.front(); // Error, C.front is not callable
// using argument types (int[])
}
}

Looks like all of these examples can be resolved by just letting 
the compiler keep trying the next symbol after the first one 
failed, just like doing an overload resolution. So what's the 
problem with doing that?


Also can't the compiler simply lower a.name to name(a) if 'name' 
is not found in 'a', then compile that?


Re: iPhone vs Android

2016-09-13 Thread Marco Leise via Digitalmars-d
Am Tue, 13 Sep 2016 18:16:27 +
schrieb Laeeth Isharc :

> Thank you for the clear explanation.   So if you don't have GC 
> allocations within RC structures and pick one or the other,  then 
> the concern does not apply?
 
That's right. Often such structures contain collections of
things, not just plain fields. And a list or a hash map working
in a @nogc environment typically checks its contained type for
any pointers with hasIndirections!T and if so adds its storage
area to the GC scanned memory to be on the safe side.
That means every collection needs a way to exempt its contents
from GC scanning and the user needs to remember to tell it so.

A practical example of that are the EMSI containers, but
other containers, e.g. in my own private code, look similar.

https://github.com/economicmodeling/containers

  struct DynamicArray(T, Allocator = Mallocator, bool supportGC = 
shouldAddGCRange!T)
  {
  ...
  }

Here, when you use a dynamic array you need to specify the
type and allocator before you get to the point of opting out
of GC scanning. Many will prefer concise code, go with GC
scanning to be "better safe than sorry" or don't want to fiddle
with the options as long as the program works. This is no
complaint, I'm just trying to draw a picture of how people end
up with more GC scanned memory than necessary. :)

-- 
Marco



Flycheck DMD Coverage and Dscanner Support

2016-09-13 Thread Nordlöw via Digitalmars-d-announce
I've added experimental support in Flycheck for highlighting all 
lines that have zero coverage at


https://github.com/nordlow/elisp/blob/master/mine/flycheck-d-all.el

Source is an extension of unittest add-ons in 
https://github.com/flycheck/flycheck-d-unittest.


I've also had Flycheck-support for Dscanner hanging around for 
some time at


https://github.com/nordlow/elisp/blob/master/mine/flycheck-d-dscanner.el

I haven't figured out how to have these active at once, though. 
So pick one or the other but not both at the same time. Something 
wrong with my settings of the `:next-checkers` property, I 
presume. Ideas?


Feedback is much appreciated. A key question is how the coverage 
line count could be visualized as well.


Destroy!


Re: iPhone vs Android

2016-09-13 Thread jmh530 via Digitalmars-d
On Tuesday, 13 September 2016 at 18:04:19 UTC, Jonathan M Davis 
wrote:


As I understand it, [snip]


If you ever write a book, I would pre-order it.


Re: iPhone vs Android

2016-09-13 Thread Steven Schveighoffer via Digitalmars-d

On 9/13/16 2:24 PM, deadalnix wrote:

This is why ObjC exception handling and ARC never worked well together.
This is why C++ exceptions are dog slow and this is why Swift is nothrow
by default.


Swift doesn't support exceptions AFAIK. It supports weird error handling 
that looks similar to exceptions, but is really just a standard return.


i.e. this:

do
{
   try someFunctionThatErrors(arg)
}
catch(Exception ex)
{
   // handle ex
}

really compiles like this:

var _err : Error?
someFunctionThatErrors(arg, &_err)
if(_err != nil)
{
   let ex = _err!.exception
}

-Steve


Re: GC of const/immutable reference graphs

2016-09-13 Thread Steven Schveighoffer via Digitalmars-d

On 9/13/16 11:27 AM, John Colvin wrote:

For the following, lifetimeEnd(x) is the time of freeing of x.

Given a reference graph and an const/immutable node n, all nodes
reachable via n (let's call them Q(n)) must also be const/immutable, as
per the rules of D's type system.

In order to avoid dangling pointers:
For all q in Q(n), lifetimeEnd(q) >= lifetimeEnd(n)

Therefore, there is never any need for the GC to mark or sweep anything
in Q(n) during the lifetime of n.


This only applies to immutable, because const can point at data that 
still has mutable references.


However, the GC could "know" that the actual data is immutable, so even 
a const reference could be known to point at immutable, and therefore 
unchanging, data.



Does the GC take advantage of this in some way to reduce collection times?


No. Arrays would have to be treated specially though, since you can 
append into an immutable block (would have to clear the "already 
checked" flag).


It also might wreak havoc with Andrei's idea to have AffixAllocator 
create mutable islands of data in an immutable/const memory block.


I'm a bit skeptical that this would provide a lot of benefit. There 
would have to be a lot of immutable data for this to be worthwhile. 
Perhaps when paired with precise scanning?


-Steve


Re: colour lib needs reviewers

2016-09-13 Thread Marco Leise via Digitalmars-d
Am Tue, 13 Sep 2016 12:00:44 +1000
schrieb Manu via Digitalmars-d :

> What is the worth of storing alpha data if it's uniform 0xFF anyway?
> It sounds like you mean rgbx, not rgba (ie, 32bit, but no alpha).
> There should only be an alpha channel if there's actually alpha data... right?

I don't mean RGBX. JavaScript's canvas works that way for
example. I.e. the only pixel format is RGBA for simplicity's
sake and I'm not surprised it actually draws something if I
load it with a 24-bit graphic. ;)

> > […] An additive one may be:
> >
> >   color = color_dst + color_src * alpha_src
> >   alpha = alpha_dst  
> 
> I thought about adding blend's, but I stopped short on that. I think
> that's firmly entering image processing territory, and I felt that was
> one step beyond the MO of this particular lib... no?
> Blending opens up a whole world.

I agree with that decision, and that it entails that
arithmetic is undefined for alpha channels. :-( Yeah bummer.
The idea that basic (saturating) arithmetic works on colors is
a great simplification that works for the most part, but let's
be fair, multiplying two HSV colors isn't exactly going to
yield a well-defined hue either, just as multiplying two
angles doesn't give you a new angle. I.e.:
http://math.stackexchange.com/a/47880

> > […]
> […]
> From which functions? Link me?  
> I'd love to see more precedents.

Yep, that's better than arguing :)
So here are all graphics APIs I know and what they do when
they encounter colors without alpha and need a default value:

SDL:
https://wiki.libsdl.org/SDL_MapRGB
"If the specified pixel format has an alpha component it will be returned as 
all 1 bits (fully opaque)."

Allegro:
https://github.com/liballeg/allegro5/blob/master/include/allegro5/internal/aintern_pixels.h#L59
(No docs, just source code that defaults to 255 for alpha when
converting a color from a bitmap with non-alpha pixel format
to an ALLEGRO_COLOR.)

Cairo:
https://www.cairographics.org/manual/cairo-cairo-t.html#cairo-set-source-rgb
"Sets the source pattern within cr to an opaque color."

Microsoft GDI+:
https://msdn.microsoft.com/de-de/library/windows/desktop/ms536255%28v=vs.85%29.aspx
"The default value of the alpha component for this Color
object is 255."

Gnome GDK:
https://developer.gnome.org/gdk-pixbuf/2.33/gdk-pixbuf-Utilities.html#gdk-pixbuf-add-alpha
"[…] the alpha channel is initialized to 255 (full opacity)."

Qt:
http://doc.qt.io/qt-4.8/qcolor.html#alpha-blended-drawing
"By default, the alpha-channel is set to 255 (opaque)."

OpenGL:
https://www.opengl.org/sdk/docs/man2/xhtml/glColor.xml
"glColor3 variants specify new red, green, and blue values
explicitly and set the current alpha value to 1.0 (full
intensity) implicitly."
(Note: The color can be queried and shows a=1.0 without
blending operations setting it internally if needed.)

Java (AWT):
https://docs.oracle.com/javase/7/docs/api/java/awt/Color.html#Color%28int,%20boolean%29
"If the hasalpha argument is false, alpha is defaulted to 255."

Apple's Quartz does not seem to provide color space
conversions and always requires the user to give the alpha
value for new colors, so there is no default:
https://developer.apple.com/library/tvos/documentation/GraphicsImaging/Reference/CGColor/index.html#//apple_ref/c/func/CGColorCreate


One thing I noticed is that many of those strictly separate
color spaces from alpha as concepts. For example in Quartz
*all* color spaces have alpha. In Allegro color space
conversions are ignorant of alpha. That raises the question of
what should happen when you convert RGBA to an HLx
color space and back to RGBA. Can you retain the alpha value?
CSS3 for example has HSLA colors that raise the bar a bit.

-- 
Marco



Re: colour lib needs reviewers

2016-09-13 Thread Random D user via Digitalmars-d

On Tuesday, 13 September 2016 at 02:00:44 UTC, Manu wrote:
On 13 September 2016 at 07:00, Marco Leise via Digitalmars-d 
 wrote:

Am Tue, 13 Sep 2016 00:37:22 +1000
schrieb Manu via Digitalmars-d :
Alright, but hybrid gamma is really not something that can be 
googled. Or rather I end up at Toyota's Gamma Hybrid product 
page. :D


True. I'm not even sure what the technical term for this sort of gamma
function is... I just made that up! :/
As Walter and others have asked, I'll have to start adding links to
reference material I guess, although that still feels really weird to
me for some reason.


FWIW I stumbled into this while double-checking HDR standards for 
my previous post. (I'm not a HDR expert, only somewhat interested 
because it's the future of displays/graphics)


https://en.wikipedia.org/wiki/Hybrid_Log-Gamma




Re: iPhone vs Android

2016-09-13 Thread deadalnix via Digitalmars-d
On Tuesday, 13 September 2016 at 11:59:46 UTC, Shachar Shemesh 
wrote:

On 13/09/16 02:21, deadalnix wrote:


RC itself is not a panacea: it doesn't work well with 
exceptions, generates a huge amount of code bloat,


I will need an explanation for those two. Assuming we have RAII, 
why doesn't RC work well with exceptions?




With RC, the runtime needs to resume every frame. That makes 
exceptions very slow. Plus you need to generate a bunch of 
unwinding code + LSDA info, and it clusters like crazy when you 
have destructors that can throw.


This is why ObjC exception handling and ARC never worked well 
together. This is why C++ exceptions are dog slow and this is why 
Swift is nothrow by default.



But first and foremost, it is a disaster for shared data.


Again, please elaborate.



For shared data, you need synchronized reference counting, which 
is prohibitively expensive.


Here's my worries about the hybrid approach. The GC run time is 
proportional not to the amount of memory you manage with the 
GC, but to the amount of memory that might hold a pointer to a 
GC managed memory. In other words, if most of my memory is RC 
managed, but some of it is GC, I pay the price of both memory 
manager on most of my memory.


Shachar


No you don't, as how often the GC kicks in depends on the rate at 
which you produce garbage, which is going to be very low with a 
hybrid approach.




Re: iPhone vs Android

2016-09-13 Thread Laeeth Isharc via Digitalmars-d
On Tuesday, 13 September 2016 at 18:04:19 UTC, Jonathan M Davis 
wrote:
On Tuesday, September 13, 2016 17:48:38 Laeeth Isharc via 
Digitalmars-d wrote:

On Tuesday, 13 September 2016 at 11:59:46 UTC, Shachar Shemesh

wrote:
> On 13/09/16 02:21, deadalnix wrote:
> >> I stay convinced that an hybrid approach is inevitable and am
> >> surprised why few are going there (hello PHP, here right).
>
> Here's my worries about the hybrid approach. The GC run time 
> is proportional not to the amount of memory you manage with 
> the GC, but to the amount of memory that might hold a 
> pointer to a GC managed memory. In other words, if most of 
> my memory is RC managed, but some of it is GC, I pay the 
> price of both memory manager on most of my memory.

>
> Shachar

Hi Shachar.

I hope you're well.

Would you mind elaborating a bit on why the cost of GC managed 
memory is as high as you imply when combined with other 
approaches, at least on a 64 bit machine and presuming you 
have a degree of hygiene and don't directly use a pointer 
allowed to point to either.   Eg if you use GC for long lived 
allocations and RC for short lived ones (and the RC 
constructor makes sure the thing is not registered with the GC 
so that takes care of short lived parts of long lived 
structures),  how in practice would this be a problem  ? I am 
no GC expert,  but keen to update my mental model.


As I understand it, the problem is that the length of time that 
the GC scan takes - and therefore that the world is stopped - 
depends on how much memory it has to scan, not on how much 
memory has been allocated by the GC. In many cases, that's just 
the GC heap plus the stack, but if you're malloc-ing memory, 
and that memory can hold references to GC-allocated memory, 
then you have to tell the GC to scan that memory too in order 
to avoid having anything it references being prematurely 
collected. So, ultimately, how expensive the GC is in terms of 
performance generally depends on how much memory can hold 
references to GC-allocated objects and not how many such objects 
there are, meaning that avoiding allocating with the GC in a 
lot of your code doesn't necessarily save you from the 
performance cost of the GC. To avoid that cost, you'd need to 
either not have many places in malloc-ed memory which could 
refer to GC-allocated memory, or you'd need to write your code 
in a way that it was guaranteed that the GC-allocated 
objects that were referenced in malloc-ed memory would have 
other references in memory that was scanned that lived longer 
than the malloc-ed memory so that the malloc-ed memory wouldn't 
need to be scanned (which is quite possible in some 
circumstances but potentially risky).


Using a 64-bit system significantly reduces the risk of false
pointers, but it doesn't reduce the amount of memory that 
actually needs to be scanned. And whether using the GC for 
long-lived allocations and RC for short-lived ones would help 
would depend primarily on  how many such objects would be 
around at any one time - and of course, whether they refer to 
GC-allocated memory and would thus need to be scanned. But 
reducing the amount of memory that the GC needs to scan and 
reduce how much is GC-allocated are two separate - albeit 
related - problems.


- Jonathan M Davis


Thank you for the clear explanation. So if you don't have GC 
allocations within RC structures and pick one or the other, then 
the concern does not apply?




Discarding all forum drafts at once

2016-09-13 Thread Nordlöw via Digitalmars-d-learn
I have lots of unsent drafts I would like to discard all at once. 
Is this possible somehow?


Re: colour lib needs reviewers

2016-09-13 Thread Random D user via Digitalmars-d

On Tuesday, 13 September 2016 at 01:05:56 UTC, Manu wrote:


Can you describe what  you  perceive to be hard?


Well, I just skimmed through the docs and I didn't look at the 
code, so that sense it was an "honest" view for phobos proposal. 
Also I was trying to convey that based on the docs it "looks 
hard", although I was suspecting that it isn't really.


So the list of things was a list of examples that I would've 
wanted to see in the docs. Maybe in an overview or "here's how 
you use it" page. I can read the details if I really need them, 
but I think it's important to have cookbook examples for a quick 
start.


In general, I think basics should be dead simple (even over 
simplified for the very basic case), but the API and the docs 
should provide me layers of increasing detail and sophistication, 
which I can study if (or when) I really need control and care 
about the details.



Few basic things I'd be interested in (examples):
1. How to pack/unpack color into uint/rgba channels.


auto c = RGBA8(r, g, b, a);
uint r = c.r.value;       // 'r' is a 'NormalizedInt', so you need '.value'
                          // to get the raw integer value
float g = cast(float)c.g; // cast to float knows about the normalized 0-1 range


Let's put these in the docs.




Also can I swizzle channels directly?


I could add something like:
  auto brg =  c.swizzle!"brg";


It's just a nicety. I guess I could just define exact color 
formats and convert (let the lib do the rest), or just unpack and 
repack.


Also the docs might want to mention which way is the 
preferred/designed way.


2. Can I just cast an array of bytes into a RGBA array and 
operate on that?


Yes. Of course, if the image library returned an array of 
colors, you wouldn't have an array of raw bytes in the first 
place.


Cool. I just wanted to make sure something like this is possible.

Also should be in the docs as an example, so that people don't 
have to try this out themselves.

Or search through github for details.



3. How to scale color channels e.g. 5 bits to 8 bits and vice 
versa. How to convert color formats? Can I combine 
scale/conversion with single channel or channel swizzling?


alias CustomPackedColor = PackedRGB!("BGR_5_6_5", ubyte); // <- 
ordered bgr

CustomPackedColor[] image = loadPackedImage();

auto unpacked = cast(RGB8)image[0]; // unpack and swizzle

I'm not sure what you mean by "with single channel"?


For example if you first unpack RGBA_uint32 into R,G,B,A then you 
want to take G (or just swizzle color.g) and scale that to 5 bits 
or 16 bits for whatever use case you might have.


Looks to me like this can be handled with something like 
PackedRGB!("G_5") and then do color conversion into that.




4. Which way are RGBA format names defined ("byte array order" 
r is first byte or "uint order" r is in high bits)?


I describe formats in byte (bit) order. It's the only reasonable way
that works outside of a uint.


Agreed. This should be in the docs, so that it's clear to users. 
(especially, since RGBA is the common case and it does fit 
into uint)




I'm showing that parsing colors from strings follows the web
convention. I can't redefine how colours have been expressed on the
web for decades.
I think you've just highlighted that I need to remove the '0x'
notation though, that is definitely misleading! Use '#', that should
be clear.


Right. I was interpreting 0x as a C hex literal. You should 
document that it's following the web convention.


6. Perhaps you should define all the usual color formats, since everybody
and their cat has their own preference for rgba channel orders.
In my experience all these 4 variants for RGBA are pretty common:

RGBA
BGRA
ARGB
ABGR
and their X and 24bit variants.


I don't think they're common at all. Only 2 are common (one of which
is deprecated), the other 2 are very niche.


Deprecated by whom? Shouldn't a phobos-grade lib include all 
reasonable platforms?


I agree that you probably don't see too much variation within 
windows APIs, images (BMP etc.) or with D3D GPU textures (they're 
compressed anyway and asset pipelines do the conversions 
beforehand), but I was thinking more of image libraries and the 
numerous, sometimes old and quirky, image formats. Or perhaps 
someone wants to implement a software blitter for their favorite 
toy device/embedded project.


I was trying to balance a tasteful amount of instantiations. I don't
really want to provide an instantiation of every single realtime
format I've ever encountered out of the box.

Seems reasonable.




For 16 bits fairly common are:
RGB565 and RGBA5551; also sometimes you see one of the RGBA permutations
(like RGBA8 above).


Nobody uses 16bit textures anymore. Compressed textures are 
both MUCH smaller, and generally higher quality.


Sure, agreed. These are not too useful as GPU textures these days 
(even on mobile), but if you do software 2d, load older image 
formats (image viewer etc.) or something even 

Re: How to check member function for being @disable?

2016-09-13 Thread Jonathan M Davis via Digitalmars-d-learn
On Tuesday, September 13, 2016 17:59:09 Adam D. Ruppe via Digitalmars-d-learn 
wrote:
> On Tuesday, 13 September 2016 at 17:52:48 UTC, Jonathan M Davis
>
> wrote:
> > It's really intended for disabling features that would normally
> > be there. I don't know why it would ever make sense to @disable
> > a normal function.
>
> Consider the case of `alias this` or a mixin template. You might
> make a wrapper type that disables a particular operation by
> writing `@disable void opBinary(op)` so it won't forward to the
> underlying thing.

Ah. That makes sense. Thanks for pointing out that use case.

And actually, I think that that use case further supports the idea that what
code should be testing for is whether an operation works and not whether
it's @disabled. In the general case, you don't even have any guarantee that
the type being aliased has an operation that would need to be @disabled. And
from the caller's perspective, it shouldn't matter whether the + operator
doesn't work because it wasn't declared or because it was @disabled.

- Jonathan M Davis



Re: iPhone vs Android

2016-09-13 Thread Andrei Alexandrescu via Digitalmars-d

On 9/13/16 1:51 PM, deadalnix wrote:

On Tuesday, 13 September 2016 at 10:04:43 UTC, Andrei Alexandrescu wrote:

On 9/13/16 3:53 AM, Kagamin wrote:

The rule of thumb is that for efficient operation an advanced GC
consumes twice the used memory.


Do you have a citation? The number I know is 3x from Emery Berger's
(old) work. -- Andrei


I assume it is going to depend on the rate at which the application
produces garbage.


That's why Berger uses a relatively large battery of benchmarks from 
multiple domains. See 
http://www-cs.canisius.edu/~hertzm/gcmalloc-oopsla-2005.pdf. -- Andrei


Re: iPhone vs Android

2016-09-13 Thread Jonathan M Davis via Digitalmars-d
On Tuesday, September 13, 2016 17:48:38 Laeeth Isharc via Digitalmars-d wrote:
> On Tuesday, 13 September 2016 at 11:59:46 UTC, Shachar Shemesh
>
> wrote:
> > On 13/09/16 02:21, deadalnix wrote:
> >> I stay convinced that an hybrid approach is inevitable and am
> >> surprised
> >> why few are going there (hello PHP, here right).
> >
> > Here's my worries about the hybrid approach. The GC run time is
> > proportional not to the amount of memory you manage with the
> > GC, but to the amount of memory that might hold a pointer to a
> > GC managed memory. In other words, if most of my memory is RC
> > managed, but some of it is GC, I pay the price of both memory
> > manager on most of my memory.
> >
> > Shachar
>
> Hi Shachar.
>
> I hope you're well.
>
> Would you mind elaborating a bit on why the cost of GC managed
> memory is as high as you imply when combined with other
> approaches, at least on a 64 bit machine and presuming you have a
> degree of hygiene and don't directly use a pointer allowed to
> point to either.   Eg if you use GC for long lived allocations
> and RC for short lived ones (and the RC constructor makes sure
> the thing is not registered with the GC so that takes care of
> short lived parts of long lived structures),  how in practice
> would this be a problem  ? I am no GC expert,  but keen to update
> my mental model.

As I understand it, the problem is that the length of time that the GC scan
takes - and therefore that the world is stopped - depends on how much memory
it has to scan, not on how much memory has been allocated by the GC. In many
cases, that's just the GC heap plus the stack, but if you're malloc-ing
memory, and that memory can hold references to GC-allocated memory, then you
have to tell the GC to scan that memory too in order to avoid having
anything it references being prematurely collected. So, ultimately, how
expensive the GC is in terms of performance generally depends on how much
memory can hold references to GC-allocated objects and not how many such
objects there are, meaning that avoiding allocating with the GC in a lot of
your code doesn't necessarily save you from the performance cost of the GC.
To avoid that cost, you'd need to either not have many places in malloc-ed
memory which could refer to GC-allocated memory, or you'd need to write your
code in a way that it was guaranteed that the GC-allocated objects that
were referenced in malloc-ed memory would have other references in memory
that was scanned that lived longer than the malloc-ed memory so that the
malloc-ed memory wouldn't need to be scanned (which is quite possible in
some circumstances but potentially risky).

Using a 64-bit system significantly reduces the risk of false pointers, but
it doesn't reduce the amount of memory that actually needs to be scanned.
And whether using the GC for long-lived allocations and RC for short-lived
ones would help would depend primarily on  how many such objects would be
around at any one time - and of course, whether they refer to GC-allocated
memory and would thus need to be scanned. But reducing the amount of memory
that the GC needs to scan and reduce how much is GC-allocated are two
separate - albeit related - problems.

- Jonathan M Davis



Re: iPhone vs Android

2016-09-13 Thread Andrei Alexandrescu via Digitalmars-d

On 9/13/16 11:58 AM, Kagamin wrote:

On Tuesday, 13 September 2016 at 10:04:43 UTC, Andrei Alexandrescu wrote:

Do you have a citation? The number I know is 3x from Emery Berger's
(old) work. -- Andrei


http://sealedabstract.com/rants/why-mobile-web-apps-are-slow/


This suggests the ratio is 4x not 2x:


What this chart says is “As long as you have about 6 times as much 
memory as you really need, you’re fine.  But woe betide you if you have 
less than 4x the required memory.”




http://www-cs.canisius.edu/~hertzm/gcmalloc-oopsla-2005.pdf
Probably these.


This is Berger's paper I was referring to. It concludes the ratio is 3x 
not 2x:



With only three times as much memory, the collector runs on average 17% 
slower than explicit memory management. However, with only twice as much 
memory, garbage collection degrades performance by nearly 70%.



So do you agree you were wrong in positing 2x as the rule of thumb?


Andrei



Re: How to check member function for being @disable?

2016-09-13 Thread Adam D. Ruppe via Digitalmars-d-learn
On Tuesday, 13 September 2016 at 17:52:48 UTC, Jonathan M Davis 
wrote:
It's really intended for disabling features that would normally 
be there. I don't know why it would ever make sense to @disable 
a normal function.


Consider the case of `alias this` or a mixin template. You might 
make a wrapper type that disables a particular operation by 
writing `@disable void opBinary(op)` so it won't forward to the 
underlying thing.
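
As a minimal sketch of that wrapper pattern (NoAdd is just a hypothetical 
example type):

struct NoAdd
{
    int value;
    alias value this;   // members and operators normally forward to value

    // Declaring (and @disable-ing) the operator stops the forwarding for '+'.
    @disable int opBinary(string op : "+")(int rhs) { return 0; }
}

void example()
{
    NoAdd x = NoAdd(1);
    int ok = x.value + 2;   // the wrapped int itself still works
    // int bad = x + 2;     // Error: opBinary is @disable
}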


Re: iPhone vs Android

2016-09-13 Thread deadalnix via Digitalmars-d
On Tuesday, 13 September 2016 at 10:04:43 UTC, Andrei 
Alexandrescu wrote:

On 9/13/16 3:53 AM, Kagamin wrote:
The rule of thumb is that for efficient operation an advanced GC
consumes twice the used memory.


Do you have a citation? The number I know is 3x from Emery 
Berger's (old) work. -- Andrei


I assume it is going to depend on the rate at which the 
application produces garbage.




Re: How to check member function for being @disable?

2016-09-13 Thread Jonathan M Davis via Digitalmars-d-learn
On Tuesday, September 13, 2016 17:29:26 Uranuz via Digitalmars-d-learn wrote:
> OK. It seems that there is nothing more I could do about my
> example code. So the best way to be sure that something is an
> assignable property is to try to assign to it and test whether it
> compiles. The question arose because until this moment I somehow
> was living without __traits(compiles, ...). It seems that my use cases
> just were not complicated enough... Thanks for the answers.
>
> It could be a good idea to have __traits( isDisable ... ) or
> something for it. I admit that not only '@disable this();' but
> regular methods could be marked @disable too.

The main places that I can think of at the moment where @disable makes sense
is for disabling default initialization - @disable this(); - and disabling
copying - @disable this(this);. It's really intended for disabling features
that would normally be there. I don't know why it would ever make sense to
@disable a normal function. Why would it even exist if it were @disabled?
So, for the compiler to allow @disable on normal functions sounds like a bug
to me - or at least an oversight in the design and implementation of
@disable - but maybe there's a legitimate reason that I'm not thinking of at
the moment. Regardless, testing for it is as simple as testing whether it
can be called or not, and you have to worry about that in a number of cases
anyway, because the access level of the function may be such that you can't
call it (e.g. it's private, and the code in question is not in the module
trying to call it). So, I don't really see what testing for @disable
specifically would buy you.

- Jonathan M Davis



Re: iPhone vs Android

2016-09-13 Thread Laeeth Isharc via Digitalmars-d
On Tuesday, 13 September 2016 at 11:59:46 UTC, Shachar Shemesh 
wrote:

On 13/09/16 02:21, deadalnix wrote:


I stay convinced that an hybrid approach is inevitable and am surprised
why few are going there (hello PHP, here right).



Here's my worries about the hybrid approach. The GC run time is 
proportional not to the amount of memory you manage with the 
GC, but to the amount of memory that might hold a pointer to a 
GC managed memory. In other words, if most of my memory is RC 
managed, but some of it is GC, I pay the price of both memory 
manager on most of my memory.


Shachar


Hi Shachar.

I hope you're well.

Would you mind elaborating a bit on why the cost of GC managed 
memory is as high as you imply when combined with other 
approaches, at least on a 64 bit machine and presuming you have a 
degree of hygiene and don't directly use a pointer allowed to 
point to either.   Eg if you use GC for long lived allocations 
and RC for short lived ones (and the RC constructor makes sure 
the thing is not registered with the GC so that takes care of 
short lived parts of long lived structures),  how in practice 
would this be a problem  ? I am no GC expert,  but keen to update 
my mental model.



Laeeth



Re: iPhone vs Android

2016-09-13 Thread Jonathan M Davis via Digitalmars-d
On Tuesday, September 13, 2016 03:58:50 Walter Bright via Digitalmars-d wrote:
> On 9/12/2016 4:21 PM, deadalnix wrote:
> > I stay convinced that an hybrid approach is inevitable and am surprised
> > why few are going there (hello PHP, here is a thing you get right).
>
> Interestingly, Warp (the C preprocessor I developed in D) used a hybrid
> approach. The performance critical code was all hand-managed, while the rest
> was GC'd.

Folks have posted here before about taking that approach with games and the
like that they've written. In a number of cases, simply being careful about
specific pieces of code and avoiding the GC in those cases was enough to get
the required performance. In some cases, simply disabling the GC during a
critical piece of code and re-enabling it afterwards fixes the performance
problems triggered by the GC without even needing manual memory management or
RC. In others, making sure that the critical thread (e.g. the rendering
thread) was not GC-managed while letting the rest of the app use the GC
takes care of the problem.
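
A minimal sketch of the "disable the GC around a critical section" idea, 
using the core.memory API (renderFrame is just a hypothetical example):

import core.memory : GC;

void renderFrame()
{
    GC.disable();           // suppress collections during the critical section
    scope (exit)
    {
        GC.enable();
        GC.collect();       // optionally catch up once the critical part is done
    }

    // ... time-critical work that may still allocate on the GC heap ...
}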

We need reference counting to solve certain problems (e.g. cases where
deterministic destruction of stuff on the heap is required), but other stuff
(like array slices) works far better with a GC. So going either full-on RC or
full-on GC is not going to be a good move for most programs. I don't think
that there's any question that we'll be best off by having both as solid
options, and best practices can develop as to when to use one or the other.

- Jonathan M Davis



Re: How to check member function for being @disable?

2016-09-13 Thread Uranuz via Digitalmars-d-learn
On Tuesday, 13 September 2016 at 15:32:57 UTC, Jonathan M Davis 
wrote:
On Tuesday, September 13, 2016 08:28:10 Jonathan M Davis via 
Digitalmars-d- learn wrote:
On Tuesday, September 13, 2016 04:58:38 Uranuz via 
Digitalmars-d-learn

wrote:
> In my code I iterate in CT over class methods marked as 
> @property and I have a problem that one of the methods is 
> @disable. So I just want to skip @disable members. I found a 
> possible solution, but it's interesting to me whether we have a 
> more clear and obvious way to test for @disable without 
> using __traits( compiles ) for it? @disable "looks" like an 
> attribute but it seems that I can't get it through __traits( 
> getAttributes ) or __traits( getFunctionAttributes ). Maybe 
> we could add something to test for @disable if it doesn't 
> already exist?


I really don't think that it's going to scale properly to 
check whether something is marked with @disable. The problem 
is that it propagates. For instance, if a struct has a member 
variable that has default initialization disabled via @disable 
this(); then that struct effectively has @disable this(); too 
even though it doesn't have it explicitly. So, ultimately what 
needs to be tested for is the behavior and not the presence of 
@disable, and that means testing with __traits(compiles, ...). 
And I would point out that most traits test via 
__traits(compiles, ...) or is(typeof(...)) rather than 
checking for something like an attribute. So, if you don't like 
using __traits(compiles, ...) in metaprogramming, you're going 
to get frustrated quickly. A large portion of the time, it's 
exactly the solution to the problem.


What would make sense would be creating a trait to test for the 
@disabled functionality in question - e.g. there could be an 
eponymous template named something like hasDefaultInitializer 
(though that name is a bit long) which indicated whether a type 
had @disabled this(); or not. Then you can use that trait in 
your code rather than using __traits(compiles, ...) all over 
the place.


- Jonathan M Davis


OK. It seems that there is nothing more I could do about my 
example code. So the best way to be sure that something is an 
assignable property is to try to assign to it and test whether it 
compiles. The question arose because until this moment I somehow 
was living without __traits(compiles, ...). It seems that my use cases 
just were not complicated enough... Thanks for the answers.


It could be a good idea to have __traits( isDisable ... ) or 
something for it. I admit that not only '@disable this();' but 
regular methods could be marked @disable too.


Re: Beta D 2.071.2-b4

2016-09-13 Thread Johan Engelen via Digitalmars-d-announce

On Tuesday, 13 September 2016 at 14:00:26 UTC, Martin Nowak wrote:
On Monday, 12 September 2016 at 07:47:19 UTC, Martin Nowak 
wrote:
This comes with a different fix for Issue 15907 than 
2.071.2-b3.


There will be another beta tomorrow or so to include at least 
one more fix (for Issue 16460) and we'll soon release 2.071.2.


I have merged 2.071.2-b4+the fix for issue 16460 into LDC master.
As far as deprecations go, it's looking good on Weka's codebase.

cheers,
  Johan




[Issue 16491] Forward referencing and static/shared static module constructors break initialisation

2016-09-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16491

--- Comment #2 from Ethan Watson  ---
It is purely for illustrative/example purposes. The data I'm using cannot be
immutable.

--


[Issue 16491] Forward referencing and static/shared static module constructors break initialisation

2016-09-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16491

--- Comment #1 from Sobirari Muhomori  ---
Can't the name be immutable?

immutable string someOtherClassName = SomeOtherClass.stringof;

--


Re: small promotion for Dlang and Mir

2016-09-13 Thread Ilya Yaroshenko via Digitalmars-d-announce

On Tuesday, 13 September 2016 at 15:19:07 UTC, jmh530 wrote:
On Tuesday, 13 September 2016 at 14:14:01 UTC, Ilya Yaroshenko 
wrote:


1. findRoot. The D implementation is significantly better than 98% 
of others for this problem because the problem behaves like 
a pathological one. Thanks to ieeeMean

2. logmdigamma
3. logmdigammaInverse


Damn, I didn't even realize that std.numeric had a root 
function!


The next DMD release will also have findLocalMin


Re: iPhone vs Android

2016-09-13 Thread Kagamin via Digitalmars-d
On Tuesday, 13 September 2016 at 10:04:43 UTC, Andrei 
Alexandrescu wrote:
Do you have a citation? The number I know is 3x from Emery 
Berger's (old) work. -- Andrei


http://sealedabstract.com/rants/why-mobile-web-apps-are-slow/
http://www-cs.canisius.edu/~hertzm/gcmalloc-oopsla-2005.pdf
Probably these.


Re: GC of const/immutable reference graphs

2016-09-13 Thread Stefan Koch via Digitalmars-d

On Tuesday, 13 September 2016 at 15:27:23 UTC, John Colvin wrote:

For the following, lifetimeEnd(x) is the time of freeing of x.

Given a reference graph and an const/immutable node n, all 
nodes reachable via n (let's call them Q(n)) must also be 
const/immutable, as per the rules of D's type system.


In order to avoid dangling pointers:
For all q in Q(n), lifetimeEnd(q) >= lifetimeEnd(n)

Therefore, there is never any need for the GC to mark or sweep 
anything in Q(n) during the lifetime of n.


Does the GC take advantage of this in some way to reduce 
collection times?


I am pretty sure it does not.


Re: Confusion over what types have value vs reference semantics

2016-09-13 Thread Jonathan M Davis via Digitalmars-d-learn
On Tuesday, September 13, 2016 15:27:07 Neurone via Digitalmars-d-learn wrote:
> On Sunday, 11 September 2016 at 16:14:59 UTC, Mike Parker wrote:
> > On Sunday, 11 September 2016 at 16:10:04 UTC, Mike Parker wrote:
> >> And here, no memory is allocated. barSlice.ptr is the same as
> >> bar.ptr and barSlice.length is the same as bar.length.
> >> However, if you append a new element:
> >>
> >> barSlice ~= 10;
> >>
> >> The GC will allocate memory for a new array and barSlice will
> >> no longer point to bar. It will now have four elements.
> >
> > I should clarify that this holds true for all slices, not just
> > slices of static arrays. The key point is that appending to a
> > slice will only allocate if the .capacity property of the
> > slice is 0. Slices of static arrays will always have a capacity
> > of 0. Slices of slices might not, i.e. there may be room in the
> > memory block for more elements.
>
> Thanks for the detailed answer. I still don't get the advantage
> of passing slices into functions by value allowing modification
> to elements of the original array.  Is there a way to specify
> that a true independent copy of an array should be passed into
> the function? E.g, in c++ func(Vector v) causes a copy of
> the argument to be passed in.

Slices are a huge performance boost (e.g. the fact that we have slicing like
this for strings makes parsing code _way_ more efficient by default than
would ever be the case for something like std::string). If you're worried
about a function mutating the elements of an array that it's given, then you
can always mark them with const. e.g.

auto foo(const(int)[] arr) {...}

But there is no way to force a naked dynamic array to do a deeper copy when
passed. If you're worried about it, you can explicitly call dup to create a
copy of the array rather than slice it - e.g. foo(arr.dup) - but the
function itself can't enforce that behavior. The closest that it could do
would be to explicitly dup the parameter itself - though if you were going
to do that, you'd want to make it clear in the documentation, since that's
not a typical thing to do, and if someone wanted to ensure that an array
that they were passing to the function didn't get mutated, they'd dup it
themselves, which would result in two dups if your function did the dup.

If you want a dynamic array to be duped every time it's passed to a function
or otherwise copied, you'd need to create a wrapper struct with a postblit
constructor that called dup. That would generally make for unnecessarily
inefficient code though. The few containers in std.container are all
reference types for the same reason - containers which copy by default make
it way too easy to accidentally copy them and are arguably a bad default
(though obviously, there are cases where that would be the best behavior).

So, the typical thing to do with dynamic arrays is to use const or immutable
elements when you want to ensure that they don't get mutated when passing
them around and duping or iduping a dynamic array when you want to ensure
that you have a copy of the array rather than a slice.
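
A minimal sketch of those two idioms, i.e. const elements to prevent 
mutation and .dup for a genuinely independent copy:

void readOnly(const(int)[] arr)
{
    // arr[0] = 42;       // Error: cannot modify const elements
}

int[] withCopy(int[] arr)
{
    auto copy = arr.dup;  // new allocation, independent of the caller's array
    copy[0] = 42;
    return copy;
}

void example()
{
    int[] data = [1, 2, 3];
    readOnly(data);
    auto changed = withCopy(data);
    assert(data[0] == 1 && changed[0] == 42);
}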

- Jonathan M Davis



Re: How to check member function for being @disable?

2016-09-13 Thread Jonathan M Davis via Digitalmars-d-learn
On Tuesday, September 13, 2016 08:28:10 Jonathan M Davis via Digitalmars-d-
learn wrote:
> On Tuesday, September 13, 2016 04:58:38 Uranuz via Digitalmars-d-learn 
wrote:
> > In my code I iterate in CT over class methods marked as @property
> > and I have a problem that one of the methods is @disable. So I just
> > want to skip @disable members. I found a possible solution, but
> > it's interesting to me whether we have a more clear and obvious way to
> > test for @disable without using __traits( compiles ) for it?
> > @disable "looks" like an attribute but it seems that I can't get it
> > through __traits( getAttributes ) or __traits(
> > getFunctionAttributes ). Maybe we could add something to test for
> > @disable if it doesn't already exist?
>
> I really don't think that it's going to scale properly to check whether
> something is marked with @disable. The problem is that it propagates. For
> instance, if a struct has a member variable that has default initialization
> disabled via @disable this(); then that struct effectively has @disable
> this(); too even though it doesn't have it explicitly. So, ultimately what
> needs to be tested for is the behavior and not the presence of @disable, and
> that means testing with __traits(compiles, ...). And I would point out that
> most traits test via __traits(compiles, ...) or is(typeof(...)) rather than
> checking for something like an attribute. So, if you don't like using
> __traits(compiles, ...) in metaprogramming, you're going to get frustrated
> quickly. A large portion of the time, it's exactly the solution to the
> problem.

What would make sense would be creating a trait to test for the @disabled
functionality in question - e.g. there could be an eponymous template named
something like hasDefaultInitializer (though that name is a bit long) which
indicated whether a type had @disabled this(); or not. Then you can use that
trait in your code rather than using __traits(compiles, ...) all over the
place.
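
A minimal sketch of such a trait (the name hasDefaultInitializer is just the
suggestion above), built on __traits(compiles, ...); note that it also picks
up the propagation case mentioned earlier:

enum hasDefaultInitializer(T) = __traits(compiles, { T t; });

struct NoDefault { @disable this(); }
struct Holder    { NoDefault member; }  // @disable this(); propagates here

static assert( hasDefaultInitializer!int);
static assert(!hasDefaultInitializer!NoDefault);
static assert(!hasDefaultInitializer!Holder);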

- Jonathan M Davis



GC of const/immutable reference graphs

2016-09-13 Thread John Colvin via Digitalmars-d

For the following, lifetimeEnd(x) is the time of freeing of x.

Given a reference graph and an const/immutable node n, all nodes 
reachable via n (let's call them Q(n)) must also be 
const/immutable, as per the rules of D's type system.


In order to avoid dangling pointers:
For all q in Q(n), lifetimeEnd(q) >= lifetimeEnd(n)

Therefore, there is never any need for the GC to mark or sweep 
anything in Q(n) during the lifetime of n.


Does the GC take advantage of this in some way to reduce 
collection times?


Re: Confusion over what types have value vs reference semantics

2016-09-13 Thread Neurone via Digitalmars-d-learn

On Sunday, 11 September 2016 at 16:14:59 UTC, Mike Parker wrote:

On Sunday, 11 September 2016 at 16:10:04 UTC, Mike Parker wrote:

And here, no memory is allocated. barSlice.ptr is the same as 
bar.ptr and barSlice.length is the same as bar.length. 
However, if you append a new element:


barSlice ~= 10;

The GC will allocate memory for a new array and barSlice will 
no longer point to bar. It will now have four elements.


I should clarify that this holds true for all slices, not just 
slices of static arrays. The key point is that appending to a 
slice will only allocate if the .capacity property of the 
slice is 0. Slices of static arrays will always have a capacity 
of 0. Slices of slices might not, i.e. there may be room in the 
memory block for more elements.


Thanks for the detailed answer. I still don't get the advantage 
of passing slices into functions by value allowing modification 
to elements of the original array.  Is there a way to specify 
that a true independent copy of an array should be passed into 
the function? E.g, in c++ func(Vector v) causes a copy of 
the argument to be passed in.


Re: How to check member function for being @disable?

2016-09-13 Thread Jonathan M Davis via Digitalmars-d-learn
On Tuesday, September 13, 2016 04:58:38 Uranuz via Digitalmars-d-learn wrote:
> In my code I iterate in CT over class methods marked as @property
> and I have a problem that one of the methods is @disable. So I just
> want to skip @disable members. I found a possible solution, but
> it's interesting to me whether we have a more clear and obvious way to
> test for @disable without using __traits( compiles ) for it?
> @disable "looks" like an attribute but it seems that I can't get it
> through __traits( getAttributes ) or __traits(
> getFunctionAttributes ). Maybe we could add something to test for
> @disable if it doesn't already exist?

I really don't think that it's going to scale properly to check whether
something is marked with @disable. The problem is that it propagates. For
instance, if a struct has a member variable that has default initialization
disabled via @disable this(); then that struct effectively has @disable
this(); too even though it doesn't have it explicitly. So, ultimately what
needs to be tested for is the behavior and not the presence of @disable, and
that means testing with __traits(compiles, ...). And I would point out that
most traits test via __traits(compiles, ...) or is(typeof(...)) rather than
checking for something like an attribute. So, if you don't like using
__traits(compiles, ...) in metaprogramming, you're going to get frustrated
quickly. A large portion of the time, it's exactly the solution to the
problem.

- Jonathan M Davis



Re: iPhone vs Android

2016-09-13 Thread Kagamin via Digitalmars-d
On Tuesday, 13 September 2016 at 10:04:43 UTC, Andrei 
Alexandrescu wrote:
Do you have a citation? The number I know is 3x from Emery 
Berger's (old) work. -- Andrei


IIRC there was a thread about GC here where somebody posted a 
bunch of links to resources and benchmarks.


Re: small promotion for Dlang and Mir

2016-09-13 Thread jmh530 via Digitalmars-d-announce
On Tuesday, 13 September 2016 at 14:14:01 UTC, Ilya Yaroshenko 
wrote:


1. findRoot. The D implementation is significantly better than 98% 
of others for this problem because the problem behaves like 
a pathological one. Thanks to ieeeMean

2. logmdigamma
3. logmdigammaInverse


Damn, I didn't even realize that std.numeric had a root function!


Re: workspace-d 2.7.2 & code-d 0.10.14

2016-09-13 Thread WebFreak001 via Digitalmars-d-announce

On Sunday, 11 September 2016 at 23:46:18 UTC, Joel wrote:

On Sunday, 11 September 2016 at 08:43:53 UTC, WebFreak001 wrote:

On Sunday, 11 September 2016 at 06:01:45 UTC, Joel wrote:
I just get this: Debug adapter process has terminated 
unexpectedly


can you run `gdb --interpreter=mi2` from the console? Or if 
you use lldb, can you run `lldb-mi` from the console? If not 
then vscode won't be able to. To be sure that it isn't 
anything because of the PATH, run vscode from the console 
where gdb and lldb-mi work and try again. If it's crashing 
unexpectedly it didn't even run gdb or lldb. It might also be 
the unix domain sockets, but I don't think they should be the 
issue. Also check the debug console (console icon in debug 
menu) if there is any output at all


It says they're not found. How do I get them?


lldb-mi is bundled with Xcode, there is a command to get it in 
the code-debug README: https://github.com/WebFreak001/code-debug


I don't know how to get gdb on OSX, you would need to find that 
out yourself


Re: small promotion for Dlang and Mir

2016-09-13 Thread Ilya Yaroshenko via Digitalmars-d-announce

On Tuesday, 13 September 2016 at 14:43:11 UTC, bachmeier wrote:
On Tuesday, 13 September 2016 at 14:14:01 UTC, Ilya Yaroshenko 
wrote:



[...]


How stable is Mir? I have recently stripped my library for 
embedding R inside D down to the minimum amount and created an 
R package to do the installation. Therefore it is trivial to 
install and get started on Linux.* I would like to test how it 
works to mix R and Mir code. However, I don't want to dig into 
that until Mir is in a stable state.


* Also Windows and Mac, but since I don't have either of those 
machines, I cannot do any work with them.


Recent release v0.15.3 is stable.
v0.17.0-alpha3 may have API changes in mir.glas and mir.random.

mir.ndslice will be removed in favor of std.experimental.ndslice, 
but redirection imports will work during a long deprecation period 
(like in Phobos).


Thank you for the star)


Re: colour lib needs reviewers

2016-09-13 Thread John Colvin via Digitalmars-d

On Tuesday, 13 September 2016 at 09:31:53 UTC, Manu wrote:
On 13 September 2016 at 17:47, John Colvin via Digitalmars-d 
 wrote:

On Tuesday, 13 September 2016 at 01:05:56 UTC, Manu wrote:


Also can I swizzle channels directly?



I could add something like:
  auto brg =  c.swizzle!"brg";

The result would be strongly typed to the new arrangement.



Perfect use-case for opDispatch like in gl3n.


One trouble with arbitrary swizzling is that people often want to
write expressions like "c.xyzy", or whatever, but repeating
components... and that's not something my colours can do. It's useful
in realtime code, but it doesn't mean anything, and you can't
interpret that value as a colour anymore after you do that.
This sort of thing is used when you're not actually storing colours in
textures, but instead just some arbitrary data. Don't use a colour
type for that, use a vector type instead.
What that sort of swizzling really is, are vector operations, not
colour operations. Maybe I could add an API to populate a vector from
the components of colours, in-order... but then we don't actually have
linear algebra functions in phobos either! So colour->vectors and
arbitrary swizzling is probably not something we need immediately.


In my lib, colours are colours. If you have `BGR8 x` and `RGB8 
y`, and add them, you don't get x.b+y.r, x.g+y.g, x.r+y.b... 
that's not a colour operation, that's an element-wise vector 
operation.


Fair enough, you know much better than me how useful it would be. 
I was just suggesting that if you do support some sorts of 
swizzling then opDispatch would allow you to avoid users having 
to use strings. It would be as simple to implement as `alias 
opDispatch = swizzle;` given the swizzle function you were using 
before.
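
For illustration, a minimal sketch of that approach (MyRGB here is a 
stand-in type, not the one from the colour lib):

struct MyRGB
{
    ubyte r, g, b;

    // c.brg, c.gbr, etc.: each character must name an existing channel.
    auto opDispatch(string s)() const
        if (s.length == 3)
    {
        MyRGB result;
        result.r = mixin("this." ~ s[0]);
        result.g = mixin("this." ~ s[1]);
        result.b = mixin("this." ~ s[2]);
        return result;
    }
}

unittest
{
    auto c = MyRGB(10, 20, 30);
    assert(c.brg == MyRGB(30, 10, 20));
}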


Re: small promotion for Dlang and Mir

2016-09-13 Thread bachmeier via Digitalmars-d-announce
On Tuesday, 13 September 2016 at 14:14:01 UTC, Ilya Yaroshenko 
wrote:



Also you can help Mir in
 - 5 seconds: star the project https://github.com/libmir/mir
 - 1 hour+:
 - opt1: Write an article about ndslice or mir.glas [6] 
(upcoming BLAS implementation in D)

 - opt2: Add a small enhancement you want, see also [4]
 - opt3: Include a new chapter about ndslice and Mir in the 
Dlang Tour [7]

 - 1 day+: Become an author for a new package, see also [5].

Companies can order numerical, statistical, and data mining 
algorithms and services. We work with web and big data.


[1] http://rdcu.be/kiKR -  "On Robust Algorithm for Finding 
Maximum Likelihood Estimation of the Generalized Inverse 
Gaussian Distribution"
[2] https://github.com/9il/atmosphere - library, which contains 
the source code for the article

[3] https://github.com/libmir/mir
[4] 
https://github.com/libmir/mir/issues?q=is%3Aissue+is%3Aopen+label%3Aenhancement
[5] 
https://github.com/libmir/mir/issues?q=is%3Aissue+is%3Aopen+label%3A%22New+Package%22

[6] http://docs.mir.dlang.io/latest/mir_glas_l3.html
[7] http://tour.dlang.org/

Best regards,
Ilya


How stable is Mir? I have recently stripped my library for 
embedding R inside D down to the minimum amount and created an R 
package to do the installation. Therefore it is trivial to 
install and get started on Linux.* I would like to test how it 
works to mix R and Mir code. However, I don't want to dig into 
that until Mir is in a stable state.


* Also Windows and Mac, but since I don't have either of those 
machines, I cannot do any work with them.


[Issue 15988] [REG v2.070] Massive Compile Time Slowdown

2016-09-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=15988

b2.t...@gmx.com changed:

   What|Removed |Added

 CC|b2.t...@gmx.com |

--


[Issue 16487] Add function to obtain the available disk space

2016-09-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16487

b2.t...@gmx.com changed:

   What|Removed |Added

 CC|b2.t...@gmx.com |

--- Comment #8 from b2.t...@gmx.com ---
In both cases I'd return 0.

statvfs_t stats;
int err = statvfs(path.toStringz(), &stats);
return err ?  0 : stats.f_bavail * stats.f_frsize;

> "either the disk is full or inaccessible"

Because in both cases you can do nothing. By the way, HDDs are never full.
Only removable media can be full (bootable USB key, DVD/CD ROM, etc).

--


small promotion for Dlang and Mir

2016-09-13 Thread Ilya Yaroshenko via Digitalmars-d-announce
My article [1] in the Journal of Mathematical Sciences (Springer) 
will be released this October. It mentions the D standard library 3 
times:


1. findRoot. The D implementation is significantly better than 98% of 
others for this problem because the problem behaves like 
a pathological one. Thanks to ieeeMean

2. logmdigamma
3. logmdigammaInverse

The article is already available online [1].
Of course the source code for the article was written in D [2].

If you want to use D for Science or Machine Learning go forward 
with Mir project [3].


Also you can help Mir in
 - 5 seconds: star the project https://github.com/libmir/mir
 - 1 hour+:
 - opt1: Write an article about ndslice or mir.glas [6] 
(upcoming BLAS implementation in D)

 - opt2: Add a small enhancement you want, see also [4]
 - opt3: Include a new chapter about ndslice and Mir in the 
Dlang Tour [7]

 - 1 day+: Become an author for a new package, see also [5].

Companies can order numerical, statistical, and data mining 
algorithms and services. We work with web and big data.


[1] http://rdcu.be/kiKR -  "On Robust Algorithm for Finding 
Maximum Likelihood Estimation of the Generalized Inverse Gaussian 
Distribution"
[2] https://github.com/9il/atmosphere - library, which contains 
the source code for the article

[3] https://github.com/libmir/mir
[4] 
https://github.com/libmir/mir/issues?q=is%3Aissue+is%3Aopen+label%3Aenhancement
[5] 
https://github.com/libmir/mir/issues?q=is%3Aissue+is%3Aopen+label%3A%22New+Package%22

[6] http://docs.mir.dlang.io/latest/mir_glas_l3.html
[7] http://tour.dlang.org/

Best regards,
Ilya



Re: Beta D 2.071.2-b4

2016-09-13 Thread Martin Nowak via Digitalmars-d-announce

On Monday, 12 September 2016 at 07:47:19 UTC, Martin Nowak wrote:

This comes with a different fix for Issue 15907 than 2.071.2-b3.


There will be another beta tomorrow or so to include at least one 
more fix (for Issue 16460) and we'll soon release 2.071.2.
This is a good moment to double check whether all the deprecation 
warnings for your project caused by the import changes are 
justified.


-Martin


[Issue 15988] [REG v2.070] Massive Compile Time Slowdown

2016-09-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=15988

--- Comment #4 from Jack Stouffer  ---
(In reply to Martin Nowak from comment #3)
> Any example project to reproduce the issue?

You can try to reach out to the people in the linked news group threads

--


[Issue 15988] [REG v2.070] Massive Compile Time Slowdown

2016-09-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=15988

--- Comment #3 from Martin Nowak  ---
Any example project to reproduce the issue?

--


[Issue 15988] [REG v2.070] Massive Compile Time Slowdown

2016-09-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=15988

Martin Nowak  changed:

   What|Removed |Added

 CC||c...@dawg.eu

--- Comment #2 from Martin Nowak  ---
I think we fixed a major performance issue with
https://github.com/dlang/dmd/pull/6024 but that problem never made it to any
release.

--


Re: DlangUI 0.9.0: Console backend added

2016-09-13 Thread ketmar via Digitalmars-d-announce
On Tuesday, 13 September 2016 at 12:29:47 UTC, Vadim Lopatin 
wrote:

Screenshots on imgur: http://imgur.com/a/eaRiT


btw. please note that on most GNU/Linux terminals you can use 
simple RGB colors (with each component in [0..5] range). IRL if 
$TERM != "Linux", it is safe to assume that terminal supports 256 
colors (with rare exclustions like "screen" -- those can be 
safely ignored, screen is fubared anyway ;-).


[Issue 16487] Add function to obtain the available disk space

2016-09-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16487

Jack Stouffer  changed:

   What|Removed |Added

 CC||j...@jackstouffer.com

--- Comment #7 from Jack Stouffer  ---
(In reply to b2.temp from comment #6)
> Anyway...Wouldn't 0 be a possible return type in case of error

No. What if there's no more space in the drive? Would you return something
other than zero?

--

