Re: IDE - Coedit 2 rc1

2016-02-07 Thread Dominikus Dittes Scherkl via Digitalmars-d-announce

On Monday, 8 February 2016 at 07:05:15 UTC, Suliman wrote:
Cool! Thanks! But do you have any plans to reimplement it from 
Pascal to В to get it's more native...


B?

What is B?


Re: Odd Destructor Behavior

2016-02-07 Thread Daniel Kozak via Digitalmars-d-learn
V Sun, 07 Feb 2016 23:47:39 +
Matt Elkins via Digitalmars-d-learn 
napsáno:

> On Sunday, 7 February 2016 at 23:11:34 UTC, anonymous wrote:
> > On 07.02.2016 23:49, Matt Elkins wrote:  
> >> Oi. Yes, I can, but it is quite a lot of code even if you 
> >> don't count
> >> that it is dependent on OpenGL, GLFW, and gl3n to run to this 
> >> point.
> >> This is why I was disappointed that simpler reproducing cases 
> >> weren't
> >> appearing. I should probably spend more time trying to reduce 
> >> the case
> >> some...  
> >
> > Minimal test cases are great, but if you're not able to get it 
> > down in size, or not willing to, then a larger test case is ok, 
> > too. The problem is clear, and I'd expect reducing it to be 
> > relatively straight-forward (but possibly time-consuming). Just 
> > don't forget about it completely, that would be bad.
> >
> > Also be aware of DustMite, a tool for automatic reduction:
> >
> > https://github.com/CyberShadow/DustMite  
> 
> Turns out it was less hard to reduce than I thought. Maybe it 
> could be taken down some more, too, but this is reasonably small:
> 
> [code]
> import std.stdio;
> 
> struct TextureHandle
> {
>  ~this() {}
> }
> 
> TextureHandle create() {return TextureHandle();}
> 
>   struct TileView
>   {
>   @disable this();
>   @disable this(this);
>   this(TextureHandle a, TextureHandle b) {}
>   ~this() {writeln("HERE2");}
>   }
> 
>   struct View
>   {
>   this(int)
>   {
>   writeln("HERE1a");
>   m_tileView = TileView(create(), create());
>   writeln("HERE1b");
>   }
> 
>   private TileView m_tileView;
> }
> 
> unittest
> {
>  auto v = View(5);
> }
> [/code]
> 
> This yields the following:
> 
> [output]
> HERE1a
> HERE2
> HERE1b
> HERE2
> [/output]
> 
> I would have expected only one "HERE2", the last one. Any of a 
> number of changes cause it to behave in the expected way, 
> including (but probably not limited to):
> * Creating the TextureHandles directly rather than calling 
> create()
> * Using only one argument to TileView's constructor
> * Removing TextureHandle's empty destructor
> 
> That last one especially seems to indicate a bug to me...
Seems to me too, please report it on issues.dlang.org



Re: IDE - Coedit 2 rc1

2016-02-07 Thread Suliman via Digitalmars-d-announce
On Monday, 8 February 2016 at 07:25:49 UTC, Dominikus Dittes 
Scherkl wrote:

On Monday, 8 February 2016 at 07:05:15 UTC, Suliman wrote:
Cool! Thanks! But do you have any plans to reimplement it from 
Pascal to В to get it's more native...


B?

What is B?


Sorry, D


Re: An IO Streams Library

2016-02-07 Thread Jakob Ovrum via Digitalmars-d

On Sunday, 7 February 2016 at 00:48:54 UTC, Jason White wrote:
I'm interested in feedback on this library. What is it missing? 
How can it be better?


I like what I've seen so far, but I'd just like to note that it's 
easier to give feedback on the API when there is web 
documentation. GitHub Pages would be a natural place to host it.


A lot of D libraries on GitHub do this and not everyone uses the 
same tools, but for one example, here's LuaD[1] with reference 
documentation on GitHub pages, automatically generated and pushed 
by Travis-CI for the master branch.


https://github.com/JakobOvrum/LuaD



Re: Things that keep D from evolving?

2016-02-07 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Sunday, 7 February 2016 at 02:46:39 UTC, Marco Leise wrote:

My code would not see much ref counting in performance critical
loops. There is no point in ref counting every single point in
a complex 3D scene.
I could imagine it used on bigger items. Textures for example
since they may be used by several objects. Or - a prime
example - any outside resource that is potentially scarce and
benefits from deterministic release: file handles, audio
buffers, widgets, ...


In my experience most such resources don't need reference 
counting. Yes, Textures if you load dynamically, but if you load 
Textures before entering the render loop... not so much and it 
should really be a caching system so you don't have to reload the 
texture right after freeing it. File handles are better done as 
single borrow from owner, or pass by reference, you don't want 
multiple locations to write to the same file handle, audio 
buffers should be preallocated as they cannot be deallocated 
cheaply on the real time thread, widgets benefit more from 
weak-pointers/back-pointers-to-borrowers as they tend to have a 
single owning parent...


What would be better is to build better generic static analysis 
and optimization into the compiler. So that the compiler can 
deduce that an integer is never read except when decremented, and 
therefore can elide inc/dec pairs.
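As a purely schematic illustration of that last point (not from the original 
post), consider a minimal ref-counted wrapper where the count is only ever 
touched on copy and destruction:

struct RC
{
    int* count;
    this(this) { if (count) ++*count; }   // increment the shared count on copy
    ~this()    { if (count) --*count; }   // decrement it on destruction
}

void use(RC rc) { /* never reads *rc.count */ }

Passing an RC by value to use() creates an increment for the copy and a 
matching decrement when the parameter is destroyed; since the count is never 
otherwise observed, a sufficiently smart optimizer could elide the pair.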




[Issue 15651] New: filter: only parameters or stack based variables can be inout

2016-02-07 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=15651

  Issue ID: 15651
   Summary: filter: only parameters or stack based variables can
be inout
   Product: D
   Version: D2
  Hardware: All
OS: All
Status: NEW
  Severity: normal
  Priority: P1
 Component: dmd
  Assignee: nob...@puremagic.com
  Reporter: lt.infiltra...@gmail.com

import std.algorithm.iteration : filter;
import std.array : array;

class D { int x; this(int n) { x = n; } }

class C {
   @property auto fun() inout nothrow pure @safe {
  return members.filter!(m => m.x == 0).array;
   }
   D[] members;
}

void main() {
   auto c = new C;
   c.members = [ new D(0), new D(1) ];
   auto foo = c.fun;
}

Compilation output:

/opt/compilers/dmd2/include/std/algorithm/iteration.d(980): Error: variable
f534.C.fun.FilterResult!(__lambda1, inout(D)[]).FilterResult._input only
parameters or stack based variables can be inout
/opt/compilers/dmd2/include/std/algorithm/iteration.d(944): Error: template
instance f534.C.fun.FilterResult!(__lambda1, inout(D)[]) error instantiating
/d456/f534.d(8):instantiated from here: filter!(inout(D)[])
/d456/f534.d(8): Error: template std.algorithm.iteration.filter cannot deduce
function from argument types !((m) => m.x == 0)(inout(D[])), candidates are:
/opt/compilers/dmd2/include/std/algorithm/iteration.d(940):   
std.algorithm.iteration.filter(alias predicate) if
(is(typeof(unaryFun!predicate)))


--


Re: Detecting exception unwinding

2016-02-07 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Saturday, 6 February 2016 at 22:51:45 UTC, cy wrote:

auto e = somethingThatFails()
scope(failure) cleanup(e);

makes more sense to me, since it's blatantly obvious that the 
construction (and entering) process isn't covered by the 
cleanup routine.


Not sure what you mean by that. Destructors shall not be called 
if constructors fail, constructors should clean up themselves.


"scope(failure)" is useful for C-like APIs, but not a good 
replacement for RAII.
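A small sketch of the C-like-API case (the c_* names are made up purely for 
illustration, standing in for some C binding's functions):

void useResource()
{
    auto h = c_open();            // C-like API: no destructor attached to h
    scope(failure) c_close(h);    // runs only if something below throws
    c_process(h);
    c_close(h);                   // normal release on the success path
}

With RAII the release lives in a destructor instead, so there is nothing left 
for scope(failure) to do.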




Re: Does anyone care if Tuple.slice() returns by ref?

2016-02-07 Thread ZombineDev via Digitalmars-d

On Sunday, 7 February 2016 at 07:31:51 UTC, tsbockman wrote:

On Sunday, 7 February 2016 at 07:00:04 UTC, Saurabh Das wrote:

On Sunday, 7 February 2016 at 03:16:48 UTC, tsbockman wrote:
(If we go with Saurabh Das' approach, we'll deprecate the old 
slice() by ref method, so it there won't be any *silent* 
breakage either way.)


Can we keep the old by ref slice() method, but add guards to 
disallow cases where the alignment is off?


Honestly, I think it will just confuse people if the 
slice-by-ref method only works on some seemingly-random subset 
of Tuple instantiations - especially since the subset that 
worked would be platform dependent.


We should either fix it, or deprecate it.


Contrary to my expectations, slicing builtin tuples returns a 
copy. (http://dpaste.dzfl.pl/fd96b17e735d)
Maybe we need to fix this in the compiler. That way we can reuse 
the language feature for std.typecons : Tuple.slice().


[Issue 15629] [REG] wrong code with "-O -inline" but correct with "-O"

2016-02-07 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=15629

--- Comment #3 from Kenji Hara  ---
Dustmited case code:

void main()
{
int[] a = [3];
int value = abs(a[0]);
assert(a[0] == 3);
writeln(value, " ", a);
}

Num abs(Num)(Num x)
{
return x >= 0 ? x : -x;
}

struct File
{
struct LockingTextWriter
{
this(File) {}

~this() @trusted {}
}

auto lockingTextWriter()
{
return LockingTextWriter();
}
}

File stdout;

void writeln(T...)(T args)
{
stdout.lockingTextWriter;
}

--


IDE - Coedit 2 rc1

2016-02-07 Thread Basile Burg via Digitalmars-d-announce

See https://github.com/BBasile/Coedit/releases/tag/2_rc1


[idea] Mutable pointee/ RCString

2016-02-07 Thread Iakh via Digitalmars-d

Is it hard to make pointee data mutable?
E.g. if we have:
--
struct RCString
{
private char[] data;
private @mutable int* counter;
}
--
So to the optimiser (in the immutable case) this looks like
--
struct RCString
{
private char[] data;
private @mutable void* counter; // pointer to garbage
}
--


Re: Cannot get Derelict to work

2016-02-07 Thread Mike Parker via Digitalmars-d-learn

On Sunday, 7 February 2016 at 12:55:30 UTC, Whirlpool wrote:


Is it the same kind of problem as before ? If my understanding 
is correct [1], I need to link with the OpenGL DLL, don't I ? I 
found that I have an opengl32.dll file in C:\Windows\System32, 
and tried adding the path to it in the linker configuration of


No, you do not need to link with OpenGL. As the documentation 
link you provided explains, Derelict bindings are all dynamic, so 
there is no link time dependency on the bound library. It's all 
runtime, which is why you have to call DerelictFoo.load.


I am aware that this example from GLFW uses deprecated 
functions from OpenGL 2, is that the problem ? (I will try 
OpenGL 4 with the tutorials at [2] soon)




The problem is right there in the exception you posted. That's 
what exceptions are for.


First-chance exception: 
derelict.util.exception.SymbolLoadException Failed to load 
OpenGL symbol [glGetnTexImage] at 
..\AppData\Roaming\dub\packages\derelict-util-2.0.4\source\derelict\util\exception.d(35)


You can see the exception thrown is a 
derelict.util.exception.SymbolLoadException. As explained in the 
Derelict documentation on load failures [1], this is thrown when 
a symbol cannot be found in a DLL. It means that the loader found 
the DLL just fine, but a function it expected to find wasn't 
there. Here, that function is glGetnTexImage. So it has nothing 
to do with your code, but is a problem in the DLL.


Normally, this means that you're trying to load a version of a DLL 
that is older than the binding you are using. However, OpenGL is 
a special case, which is why it has the reload method. It will 
only attempt to load the version of OpenGL supported by your 
driver. This means that if it doesn't find a function it expects, 
then either your driver is lying or DerelictGL3 is broken.


glGetnTexImage is part of the OpenGL 4.5 specification. I'm going 
to make a guess that you are using an AMD (formerly ATI) card. 
There is an open issue in the Derelict repo regarding their 
latest drivers and 4.5 support [2]. It's not the first time it's 
been reported to me. It seems the driver claims to support 4.5, 
but some of the 4.5 functions fail to load. The way to work 
around this is to specify a maximum OpenGL version as the second 
of two arguments in the call to reload.


```
DerelictGL3.reload(GLVersion.None, GLVersion.GL44);
```

The first argument is the minimum version to load, which always 
defaults to None. In effect, this says load everything from 1.2 
(1.1 is loaded in the call to DerelictGL3.load) up to the highest 
version the driver supports or up to 4.4, whichever is lower. 
This line should be enough to work around the AMD driver issue.
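Putting that together, the usual sequence looks roughly like this (a sketch; 
the context-creation step depends on whether you use GLFW, SDL, etc.):

```
import derelict.opengl3.gl3;

void initGL()
{
    DerelictGL3.load();   // loads the OpenGL 1.1 symbols
    // ... create an OpenGL context here (GLFW, SDL, ...) ...
    DerelictGL3.reload(GLVersion.None, GLVersion.GL44);   // cap at 4.4
}
```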


Another point to make is that if you need deprecated functions, 
DerelictGL3 is not what you want. You should import 
derelict.opengl3.gl and use DerelictGL.load/reload instead. It 
includes all of the deprecated functions. Just make sure you have 
created a context that allows you to access the deprecated stuff. 
As far as I know, 4.0+ (perhaps even 3.3) are core only. Note 
that you *do not* need to load both DerelictGL3 and DerelictGL, 
as the latter handles everything for you.


[1] http://derelictorg.github.io/using/fail.html
[2] https://github.com/DerelictOrg/DerelictGL3/issues/43
[3] 
https://github.com/DerelictOrg/DerelictGL3/blob/master/source/derelict/opengl3/gl3.d#L80


Re: Github woes

2016-02-07 Thread Rikki Cattermole via Digitalmars-d

On 07/02/16 11:22 PM, Wobbles wrote:

Just curious, is there a backup plan for D if github.com goes by the
wayside?

Now that there seems to be community back-lash against it (at least on
reddit) maybe a contingency plan would be useful.

Obviously not today or tomorrow, but you never know what's down the road.


The only thing that we have hosted on Github is code.
So excluding integrations, we could move over to Bitbucket without too 
many problems.


I really wouldn't worry about it. Sure, it would be upsetting and add a lot of 
extra work, but we certainly won't be the only ones in that position.


Re: Github woes

2016-02-07 Thread ZombineDev via Digitalmars-d

On Sunday, 7 February 2016 at 10:22:35 UTC, Wobbles wrote:
Just curious, is there a backup plan for D if github.com goes 
by the wayside?


Now that there seems to be community back-lash against it (at 
least on reddit) maybe a contingency plan would be useful.


Obviously not today or tomorrow, but you never know what's down 
the road.


If Github goes down we would still continue to use git like 
nothing has ever happened. We would just need to switch to a 
different pull request code review system, like GitLab for 
example. Or we can use forum.dlang.org like LKML.


Re: Cannot get Derelict to work

2016-02-07 Thread Mike Parker via Digitalmars-d-learn

On Sunday, 7 February 2016 at 14:04:49 UTC, Mike Parker wrote:



Another point to make is that if you need deprecated functions, 
DerelictGL3 is not what you want. You should import 
derelict.opengl3.gl and use DerelictGL.load/reload instead. It 
includes all of the deprecated functions. Just make sure you 
have created a context that allows you to access the deprecated 
stuff. As far as I know, 4.0+ (perhaps even 3.3) are core only. 
Note that you *do not* need to load both DerelictGL3 and 
DerelictGL, as the latter handles everything for you.


Which I see you are already doing! I should have looked at your 
code first.


Re: voldemort stack traces (and bloat)

2016-02-07 Thread Iakh via Digitalmars-d
On Sunday, 7 February 2016 at 05:18:39 UTC, Steven Schveighoffer 
wrote:
4   testexpansion   0x00010fb5dbec pure 
@safe void 
testexpansion.s!(testexpansion.s!(testexpansion.s!(testexpansion.s!(testexpansion.s!


Why "bad" foo is void?


Is there a better way we should be doing this? I'm wondering if


Yeah, it would be nice to auto-replace it with 
testexpansion.S!(...)(...).Result.foo

or even with ...Result.foo


Re: Does anyone care if Tuple.slice() returns by ref?

2016-02-07 Thread Ola Fosheim Grøstad via Digitalmars-d

On Sunday, 7 February 2016 at 12:28:07 UTC, tsbockman wrote:
That is surprising indeed, but I don't see how fixing it would 
solve the Tuple.slice() memory alignment issues.


Why won't a reinterpret cast work?

struct tupleX {
  T0 _0;
  T1 _1;
}

struct tupleX_slice_1_2 {
  T0 _dummy0;
  T1 _0;
}



Re: Does anyone care if Tuple.slice() returns by ref?

2016-02-07 Thread tsbockman via Digitalmars-d
On Sunday, 7 February 2016 at 12:51:07 UTC, Ola Fosheim Grøstad 
wrote:

On Sunday, 7 February 2016 at 12:28:07 UTC, tsbockman wrote:
That is surprising indeed, but I don't see how fixing it would 
solve the Tuple.slice() memory alignment issues.


Why won't a reinterpret cast work?

struct tupleX {
  T0 _0;
  T1 _1;
}

struct tupleX_slice_1_2 {
  T0 _dummy0;
  T1 _0;
}


That is essentially what my PR does. But, some people are unhappy 
with the thought of a slice's type not matching the type of the 
equivalent standard Tuple:


Tuple!(int, bool, string) foo;
const bar = foo.slice!(1, 3)();

static assert(! is(typeof(bar) == Tuple!(bool, string)));


Github woes

2016-02-07 Thread Wobbles via Digitalmars-d
Just curious, is there a backup plan for D if github.com goes by 
the wayside?


Now that there seems to be community back-lash against it (at 
least on reddit) maybe a contingency plan would be useful.


Obviously not today or tomorrow, but you never know what's down 
the road.


Re: An IO Streams Library

2016-02-07 Thread Johannes Pfau via Digitalmars-d
Am Sun, 07 Feb 2016 00:48:54 +
schrieb Jason White <54f9byee3...@gmail.com>:

> I see the subject of IO streams brought up here occasionally. The 
> general consensus seems to be that we need something better than 
> what Phobos provides.
> 
> I wrote a library "io" that can work as a replacement for 
> std.stdio, std.mmfile, std.cstream, and parts of std.stream:
> 
>  GitHub:  https://github.com/jasonwhite/io
>  Package: https://code.dlang.org/packages/io
> 
> This library provides an input and output range interface for 
> streams (which is more efficient if the stream is buffered). 
> Thus, many of the wonderful range operations from std.range and 
> std.algorithm can be used with this.
> 
> I'm interested in feedback on this library. What is it missing? 
> How can it be better?
> 
> I'm also interested in a discussion of what IO-related 
> functionality people are missing in Phobos.
> 
> Please destroy!

I saw this on code.dlang.org some time ago and had a quick look. First
of all this would have to go into phobos to make sure it's used as some
kind of a standard. Conflicting stream libraries would only cause more
trouble.

Then if you want to go for phobos inclusion I'd recommend looking at
other stream implementations and learning from their mistakes ;-)
There's
https://github.com/schveiguy/phobos/tree/babe9fe338f03cafc0fb50fc0d37ea96505da3e3/std/io
which was supposed to be a stream replacement for phobos. Then there
are also vibe.d streams*.


Your Stream interfaces looks like standard stream implementations (which
is a good thing) which also work for unbuffered streams. I think it's a
good idea to support partial reads and writes. For an explanation why
partial reads, see the vibe.d rant below. Partial writes are useful
as a write syscall can be interrupted by posix signals to stop the
write. I'm not sure if the API should expose this feature (e.g. by
returning a partial write on EINTR) but it can sometimes be useful.
Still, readExactly / writeAll helper functions are useful. I would try
to implement these as UFCS functions instead of as a struct wrapper.
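For instance, a minimal sketch of such a UFCS helper, assuming a stream type 
whose read() primitive returns the number of bytes actually read (names are 
illustrative):

void readExactly(Stream)(auto ref Stream stream, ubyte[] buf)
{
    while (buf.length > 0)
    {
        // read() may return fewer bytes than requested (a partial read).
        immutable n = stream.read(buf);
        if (n == 0)
            throw new Exception("unexpected end of stream");
        buf = buf[n .. $];
    }
}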

For some streams you'll need a TimeoutException. An interesting
question is whether users should be able to recover from
TimeoutExceptions. This essentially means if a read/write function
internally calls read/write posix calls more than once and only the
last one timed out, we already processed some data and it's not
possible to recover from a TimeoutException if the amount of already
processed data is unknown.
The simplest solution is using only one syscall internally. Then
TimeoutException => no data was processed. But this doesn't work for
read/writeExactly (another reason why read/writeExactly shouldn't be
the default. vibe.d...)

Regarding buffers / sliding windows I'd have a look at
https://github.com/schveiguy/phobos/blob/babe9fe338f03cafc0fb50fc0d37ea96505da3e3/std/io/buffer.d

Another design question is whether there should be an interface for
such buffered streams or whether it's OK to have only unbuffered
streams + one buffer struct / class. Basically the question is whether
there might be streams that can offer a buffer interface but can't  use
the standard implementation.




* vibe.d stream rant ahead:

vibe.d streams get some things right and some things very wrong. For
example their leastSize/empty/read combo means you might actually
have to implement reading data in any of these functions. Users have to
handle timeouts or other errors for any of these as well.

Then the API requires a buffered stream, it simply won't work for
unbuffered IO (leastSize, empty). And the fact that read reads exactly
n bytes makes stream implementations more complicated (re-reading until
enough data has been read should be done by a generic function, not
reimplemented in every stream). It even makes some user code more
complicated: I've implemented a serial port library for vibe-d.
If I don't know how many bytes will arrive with the next packet, the
read posix function usually returns the expected/available amount of
data. But now vibe.d requires me to specify a fixed length when calling
the stream read method. This leads to ugly code using peek...

Then vibe.d also mixes the sliding window / buffer concept into the
stream class, but does so in a bad way. A sliding window should expose
the internal buffer so that it's possible to consume bytes from the
buffer, skip bytes, refill... In vibe.d you can peek at the buffer. But
you can't discard data. You'll have to call read instead which copies
from the internal buffer to an external buffer, even if you only want
to skip data. Even worse, your external buffer size is limited. So you
have to implement some loop logic if you want to skip more data than
fits your buffer. And all you need is a discard(size_t n) function which
does _buffer = _buffer[n .. $] in the stream class...
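Something like this minimal sketch of that discard primitive (illustrative 
only, not vibe.d's actual API):

struct BufferedStream
{
    private ubyte[] _buffer;

    // Drop n already-buffered bytes without copying them anywhere.
    void discard(size_t n)
    {
        _buffer = _buffer[n .. $];
    }
}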

TLDR: API design is very important.


Re: Cannot get Derelict to work

2016-02-07 Thread Whirlpool via Digitalmars-d-learn

Hi,

Sorry, I have a problem again :)

I tried to compile this example :
http://www.glfw.org/docs/latest/quick.html#quick_example
which required adding derelict-gl3

My code is currently this : http://pastebin.com/A5seZmX6

It compiles without errors, but crashes immediately with again 
exceptions of the form :
First-chance exception: 
derelict.util.exception.SymbolLoadException Failed to load OpenGL 
symbol [glGetnTexImage] at 
..\AppData\Roaming\dub\packages\derelict-util-2.0.4\source\derelict\util\exception.d(35)



The output of the Visual Studio console is this :

C:\Windows\SysWOW64\winmmbase.dll unloaded.
C:\Windows\SysWOW64\winmmbase.dll unloaded.
C:\Windows\SysWOW64\winmmbase.dll unloaded.
C:\Windows\SysWOW64\atigktxx.dll unloaded.
First-chance exception: 
derelict.util.exception.SymbolLoadException Failed to load OpenGL 
symbol [glGetnTexImage] at 
..\AppData\Roaming\dub\packages\derelict-util-2.0.4\source\derelict\util\exception.d(35)


Unhandled exception: derelict.util.exception.SymbolLoadException 
Failed to load OpenGL symbol [glGetnTexImage] at 
..\AppData\Roaming\dub\packages\derelict-util-2.0.4\source\derelict\util\exception.d(35)


First-chance exception: 
derelict.util.exception.SymbolLoadException Failed to load OpenGL 
symbol [glGetnCompressedTexImage] at 
..\AppData\Roaming\dub\packages\derelict-util-2.0.4\source\derelict\util\exception.d(35)


Unhandled exception: derelict.util.exception.SymbolLoadException 
Failed to load OpenGL symbol [glGetnCompressedTexImage] at 
..\AppData\Roaming\dub\packages\derelict-util-2.0.4\source\derelict\util\exception.d(35)


First-chance exception: 0xc005: Access violation
Unhandled exception: 0xc005: Access violation


Is it the same kind of problem as before ? If my understanding is 
correct [1], I need to link with the OpenGL DLL, don't I ? I 
found that I have an opengl32.dll file in C:\Windows\System32, 
and tried adding the path to it in the linker configuration of 
Visual Studio. I've also tried copying it to my bin target 
folder, but it still doesn't work. I also tried compiling in 
64-bit mode, but I get different errors :

C:\Windows\System32\winmmbase.dll unloaded.
C:\Windows\System32\winmmbase.dll unloaded.
C:\Windows\System32\winmmbase.dll unloaded.
C:\Windows\System32\atig6txx.dll unloaded.
First-chance exception: 0xc096: Privileged instruction
Unhandled exception: 0xc096: Privileged instruction
The thread 0x11cc has exited with code 255 (0xff).
The thread 0x3d0 has exited with code 255 (0xff).
The thread 0x12c0 has exited with code 255 (0xff).
The program '[8112] derelicttest.exe' has exited with code 255 
(0xff).


I am aware that this example from GLFW uses deprecated functions 
from OpenGL 2, is that the problem ? (I will try OpenGL 4 with 
the tutorials at [2] soon)


Please can you help me, thanks in advance

[1] http://derelictorg.github.io/dynstat.html
[2] http://antongerdelan.net/opengl/




Re: Overloading free functions & run-time dispatch based on parameter types

2016-02-07 Thread Robert M. Münch via Digitalmars-d-learn

On 2016-02-06 14:33:57 +, Marc Schütz said:


I don't see why this wouldn't work, if you've in fact covered all combinations.


It works, the problem was that castSwitch returns something and I 
didn't "catch" it.


It's similar to how castSwitch is implemented, though the double casts 
are inefficient. You could use:


if(auto inta = cast(IntV) a) {
 if(auto intb = cast(IntV) b) {
 return new IntV(inta.num + intb.num);
 }
}


Yes, thanks. Was on my list.


(Again, this can be automated.)


How? Do you mean by castSwitch?


I read this here: 
https://github.com/D-Programming-Language/phobos/pull/1266#issuecomment-53507509 
(functional pattern matching) but it seems it won't be implemented... 
at the end of the day what I simulate is poor man's multimethods.


As I read the discussion, it was just decided to defer the more complex 
version of castSwitch for later, but it wasn't rejected.


Well... yes, "won't be implemented in the near future". Anyway, it's not 
available at the moment, so I'm looking at other ways.


--
Robert M. Münch
http://www.saphirion.com
smarter | better | faster

Re: voldemort stack traces (and bloat)

2016-02-07 Thread deadalnix via Digitalmars-d
On Sunday, 7 February 2016 at 05:18:39 UTC, Steven Schveighoffer 
wrote:

Thoughts?


And no line number. But hey, these are conveniences for 
youngsters. We real programmers, who type on the keyboard using our 
balls, don't need such distractions.




Re: Does anyone care if Tuple.slice() returns by ref?

2016-02-07 Thread tsbockman via Digitalmars-d

On Sunday, 7 February 2016 at 08:54:08 UTC, ZombineDev wrote:
Contrary to my expectations, slicing builtin tuples returns a 
copy. (http://dpaste.dzfl.pl/fd96b17e735d)
Maybe we need to fix this in the compiler. That way we can 
reuse the language feature for std.typecons : Tuple.slice().


That is surprising indeed, but I don't see how fixing it would 
solve the Tuple.slice() memory alignment issues.


Re: Bug or intended?

2016-02-07 Thread Marc Schütz via Digitalmars-d-learn
The specification doesn't list (non-static) members as valid 
template alias parameters:

http://dlang.org/spec/template.html#TemplateAliasParameter


Re: Does anyone care if Tuple.slice() returns by ref?

2016-02-07 Thread Ola Fosheim Grøstad via Digitalmars-d

On Sunday, 7 February 2016 at 13:01:14 UTC, tsbockman wrote:
That is essentially what my PR does. But, some people are 
unhappy with the thought of a slice's type not matching the 
type of the equivalent standard Tuple:


Well, Tuple is flawed by design for more than one reason. IMO it 
should be replaced wholesale with a clean design with consistent 
semantics.





Re: [idea] Mutable pointee/ RCString

2016-02-07 Thread Iakh via Digitalmars-d

On Sunday, 7 February 2016 at 14:00:24 UTC, Iakh wrote:

Explanations:
Since "immutable" is transitive:
--
immutable RCString str;
*str.counter++; // Impossible/error/undefined behavior (even with a const cast)
--
The language defines immutable so that the optimizer can rely on the true
constness of str, its fields, and the variables its fields point to. But if
the pointer were treated by the optimizer (not by the GC) as a void* (or a
size_t), the pointee's "true constness" would not matter.

The only drawback is that an immutable function which reads a @mutable
field can't be pure, because it effectively reads a "global variable".


Re: Github woes

2016-02-07 Thread Joakim via Digitalmars-d
On Sunday, 7 February 2016 at 10:27:02 UTC, Rikki Cattermole 
wrote:

On 07/02/16 11:22 PM, Wobbles wrote:
Just curious, is there a backup plan for D if github.com goes 
by the

wayside?

Now that there seems to be community back-lash against it (at 
least on

reddit) maybe a contingency plan would be useful.

Obviously not today or tomorrow, but you never know what's 
down the road.


The only thing that we have hosted on Github is code.
So excluding integrations, we could move over to Bitbucket 
without too many problems.


I really wouldn't worry about it. Sure, it would be upsetting and add a 
lot of extra work, but we certainly won't be the only ones in that 
position.


Unfortunately, there's a lot of valuable info in the PR comments, 
that would be lost if github.com went down.  Since D never 
switched from bugzilla to github for bugs, that wouldn't be an 
issue.  Hopefully, we could pull that github PR discussion from a 
backup at archive.org or someplace.


Re: Google Summer of Code 2016

2016-02-07 Thread Dragos Carp via Digitalmars-d
On Saturday, 6 February 2016 at 20:18:57 UTC, Craig Dillabaugh 
wrote:
Anyone interested in, and capable of, mentoring a student interested in 
doing FlatBuffers for D?


I could do that. Currently, as a side project, I'm working on 
adding D support for Protocol Buffers v3 [1].


Main goals of the new design:
- integration in the upstream project
- simple readable generated code
- range based solution

Of course, the same can be applied for the FlatBuffers.

[1] https://github.com/dcarp/protobuf/tree/dlang_support


Re: Github woes

2016-02-07 Thread rsw0x via Digitalmars-d

On Sunday, 7 February 2016 at 10:48:49 UTC, Joakim wrote:
On Sunday, 7 February 2016 at 10:27:02 UTC, Rikki Cattermole 
wrote:

On 07/02/16 11:22 PM, Wobbles wrote:

[...]


The only thing that we have hosted on Github is code.
So excluding integrations, we could move over to Bitbucket 
without too many problems.


I really wouldn't worry about it. Sure, it would be upsetting and add 
a lot of extra work, but we certainly won't be the only ones in that 
position.


Unfortunately, there's a lot of valuable info in the PR 
comments, that would be lost if github.com went down.  Since D 
never switched from bugzilla to github for bugs, that wouldn't 
be an issue.  Hopefully, we could pull that github PR 
discussion from a backup at archive.org or someplace.


IIRC GitLab (FOSS, can self-host) is capable of keeping a perfect 
in-sync mirror of all GitHub data; I'd have to go review whether it's 
still capable of this, so don't take my word on it.


Bye.


Re: is increment on shared ulong atomic operation?

2016-02-07 Thread Minas Mina via Digitalmars-d-learn

On Sunday, 7 February 2016 at 19:43:23 UTC, rsw0x wrote:

On Sunday, 7 February 2016 at 19:39:27 UTC, rsw0x wrote:
On Sunday, 7 February 2016 at 19:27:19 UTC, Charles Hixson 
wrote:
If I define a shared ulong variable, is increment an atomic 
operation?

E.g.

shared ulong t;

...

t++;

It seems as if it ought to be, but it could be split into 
read, increment, store.


I started off defining a shared struct, but that seems silly, 
as if the operations defined within a shared struct are 
synced, then the operation on a shared variable should be 
synced, but "+=" is clearly stated not to be synchronized, so 
I'm uncertain.


https://dlang.org/phobos/core_atomic.html#.atomicOp


Just noticed that there's no example.
It's used like

shared(ulong) a;
atomicOp!"+="(a, 1);


Wow, that syntax sucks a lot.


dpaste and the wayback machine

2016-02-07 Thread Andrei Alexandrescu via Digitalmars-d
Dpaste currently does not expire pastes by default. I was thinking it 
would be nice if it saved them in the Wayback Machine such that they are 
archived redundantly.


I'm not sure what the right way to do it is - probably linking the 
newly-generated paste URLs from a page that the Wayback Machine already 
knows of.


I just saved this by hand: http://dpaste.dzfl.pl/2012caf872ec (when the 
WM does not find a link that is searched for, it offers the option to 
archive it) obtaining 
https://web.archive.org/web/20160207215546/http://dpaste.dzfl.pl/2012caf872ec.



Thoughts?

Andrei


Re: Odd Destructor Behavior

2016-02-07 Thread Márcio Martins via Digitalmars-d-learn

On Sunday, 7 February 2016 at 21:49:24 UTC, Matt Elkins wrote:
I've been experiencing some odd behavior, where it would appear 
that a struct's destructor is being called before the object's 
lifetime expires. More likely I am misunderstanding something 
about the lifetime rules for structs. I haven't been able to 
reproduce with a particularly minimal example, so I will try to 
explain with my current code:


[...]


The destructor you are seeing is from the assignment:

m_tileView = TileView(...);

This creates a temporary TileView, copies it to m_tileView, and 
then destroys it. I suppose you want to move it instead. You need 
to copy the handles from the temporary into the destination, and 
then clear them out from the temporary to prevent them from being 
released.


std.algorithm has a couple of move() overloads that might be 
useful here.
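For reference, a rough sketch of that suggestion using std.algorithm's move 
(whether a move is actually needed here is disputed later in the thread):

import std.algorithm : move;

auto tmp = TileView(/* constructor arguments */);
move(tmp, m_tileView);   // transfers the contents; tmp is left in its .init state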


Re: is increment on shared ulong atomic operation?

2016-02-07 Thread rsw0x via Digitalmars-d-learn

On Sunday, 7 February 2016 at 19:27:19 UTC, Charles Hixson wrote:
If I define a shared ulong variable, is increment an atomic 
operation?

E.g.

shared ulong t;

...

t++;

It seems as if it ought to be, but it could be split into read, 
increment, store.


I started off defining a shared struct, but that seems silly, 
as if the operations defined within a shared struct are synced, 
then the operation on a shared variable should be synced, but 
"+=" is clearly stated not to be synchronized, so I'm uncertain.


https://dlang.org/phobos/core_atomic.html#.atomicOp


[Issue 15652] New: Alias this exceptions cannot be caught, but shadow others

2016-02-07 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=15652

  Issue ID: 15652
   Summary: Alias this exceptions cannot be caught, but shadow
others
   Product: D
   Version: D2
  Hardware: All
OS: All
Status: NEW
  Severity: normal
  Priority: P1
 Component: dmd
  Assignee: nob...@puremagic.com
  Reporter: bugzi...@kyllingen.net

Say we have two exception classes, Foo and Bar, where Bar is a subtype of Foo
via "alias this":

class Foo : Exception
{
this() { super("Foo"); }
}

class Bar : Exception
{
this()
{
super("Bar");
foo = new Foo;
}
Foo foo;
alias foo this;
}

Now, we try to throw a Bar and catch it as a Foo:

try { throw new Bar; }
catch (Foo) { }

This compiles and runs, but does not work as expected; the exception is not
caught.  But then, that ought to mean that the following code is perfectly
fine:

try { throw new Bar; }
catch (Foo) { /* ... */ }  // A
catch (Bar) { /* ... */ }  // B

However, this doesn't even compile.

Error: catch at [line A] hides catch at [line B]

I don't know what is supposed to be the correct behaviour here, but one of
these cases should work.

--


Re: is increment on shared ulong atomic operation?

2016-02-07 Thread rsw0x via Digitalmars-d-learn

On Sunday, 7 February 2016 at 20:25:44 UTC, Minas Mina wrote:

On Sunday, 7 February 2016 at 19:43:23 UTC, rsw0x wrote:

On Sunday, 7 February 2016 at 19:39:27 UTC, rsw0x wrote:
On Sunday, 7 February 2016 at 19:27:19 UTC, Charles Hixson 
wrote:

[...]


https://dlang.org/phobos/core_atomic.html#.atomicOp


Just noticed that there's no example.
It's used like

shared(ulong) a;
atomicOp!"+="(a, 1);


Wow, that syntax sucks a lot.


how so?
It's meant to be very explicit


Odd Destructor Behavior

2016-02-07 Thread Matt Elkins via Digitalmars-d-learn
I've been experiencing some odd behavior, where it would appear 
that a struct's destructor is being called before the object's 
lifetime expires. More likely I am misunderstanding something 
about the lifetime rules for structs. I haven't been able to 
reproduce with a particularly minimal example, so I will try to 
explain with my current code:


I have a struct called "TileView", with the relevant parts 
looking like so:

[code]
struct TileView
{
this(Texture.Handle wallsTexture, Texture.Handle topTexture)
{
// Work happens here, but it doesn't seem to matter to 
reproducing the condition

}

// Destructor added for debugging after seeing odd behavior
~this()
{
import std.stdio;
writeln("HERE2");
}
// ...more implementation that doesn't seem to affect the 
condition...

}
[/code]

An instance of this is stored in another struct called "View", 
with the relevant parts looking like so:

[code]
struct View
{
this(/* irrelevant args here */)
{
writeln("HERE1a");
m_tileView = 
TileView(Texture.create(loadTGA(makeInputStream!FileInputStream("resources/images/grass-topped-clay.tga").handle)), Texture.create(loadTGA(makeInputStream!FileInputStream("resources/images/grass-outlined.tga").handle)));//, Texture.create(loadTGA(makeInputStream!FileInputStream("resources/images/grass-outlined.tga").handle)));

writeln("HERE1b");
}

TileView m_tileView;
// ...more irrelevant implementation...
}
[/code]

The output from the two writelns in View and the one in TileView 
is:

[output]
HERE1a
HERE2
HERE1b
[/output]

So the destructor of TileView is being called during its 
construction. Flow proceeds normally (e.g., no exception is 
thrown), as demonstrated by "HERE1b" being printed. Interestingly 
enough, it all seems to hinge on the second argument to 
TileView's constructor; if I make it on a separate line 
beforehand and pass it in, or if I don't pass in a second 
argument at all, I don't see this behavior. In fact, almost any 
attempt I've made to reduce the problem for illustration causes 
it to vanish, which is unfortunate.


From this non-reduced situation, does anything jump out? Am I 
missing something about struct lifetimes? This is the only place 
I instantiate a TileView.


Thanks!




Re: Dconf 2015 talks...

2016-02-07 Thread Joseph Rushton Wakeling via Digitalmars-d

On Monday, 25 January 2016 at 22:06:31 UTC, Era Scarecrow wrote:
On Monday, 25 January 2016 at 21:22:13 UTC, Joseph Rushton 
Wakeling wrote:
I have been wondering about how allocators could help to deal 
with these problems.  Could you put forward a minimal example 
of how you would see it working?


 Most likely alloca would have to be built into the compiler. 
Here's a crash course in how the stack memory management works. 
sp=stack pointer, bp=base pointer (more relevant pre 386).


Apologies for the delay in writing back about this.

What you describe makes sense, but I don't quite follow what you 
mean in one particular case:


 Technically alloca simply returns the current sp, then adds to 
it the number of bytes you requested. This means you have to 
run it in the stack frame of the function where you want to use it (and not 
in a called function, otherwise you get corruption). So inlined 
functions, where alloca's data would remain valid, would be a must.


I don't quite follow your remark about inlined functions; do you 
mean that the function where the RNG instance is generated must 
be inlined?  (That would make sense in order to avoid the 
internal state being deallocated immediately.)


I think there might be more complications here than just 
allocating individual RNG instances, though (which can happen 
quite early on in the program); what about stuff like random 
algorithms (RandomCover, RandomSample) which might be generated 
deep in internal loops, passed to other functionality as rvalues, 
etc. etc.?



Then it comes down to a simple:

[code]
import core.stdc.stdlib : alloca;

struct RNG {
  int* state; // add asserts to member functions that this isn't null
  this(int seed) {
    state = cast(int*) alloca(int.sizeof); // alloca returns void*
    *state = seed;
  }
}
[/code]


Yes, leaving aside the details of how it would 
have to happen, this is pretty much what I had in mind for random 
ranges; a pointer to internal state that is nevertheless allocated on the 
stack.  But the already-mentioned ways that stack memory could be 
deallocated remain a concern.


It might be simpler, in practice, to just have the state 
refcounted.


 I suppose the alternative is an option to skip/throw away some 
numbers that should've been consumed (assuming you want to 
keep using the same seed), or seeding each per use.


I'm not sure I follow what you mean here or why you think this 
would work?  Could you give a concrete example?


certainly.

[code]
struct RNG {
  ...

  void skip(int x) {assert(x>0); while(x--) popfront();}
}

RNG rnd = RNG(seed);
rnd.take(10).writeln;  //loosely based on talk examples
rnd.skip(10);  //skips the 'consumed' numbers.
rnd.take(10).writeln;  //won't have identical output

[/code]


I'm afraid that's not really viable :-(  In the best case, it's 
just working around the fundamental design problem via programmer 
virtue.  But the problem is, in the general case, you can't 
anticipate how many random variates may be popped from your 
random number generator inside a function (it might depend on 
other input factors over which you as programmer have no control).


Re: Odd Destructor Behavior

2016-02-07 Thread Matt Elkins via Digitalmars-d-learn

On Sunday, 7 February 2016 at 22:35:57 UTC, anonymous wrote:

On 07.02.2016 23:07, Márcio Martins wrote:

The destructor you are seeing is from the assignment:

m_tileView = TileView(...);

This creates a temporary TileView, copies it to m_tileView, 
and then
destroys it. I suppose you want to move it instead. You need 
to copy the
handles from the temporary into the destination, and then 
clear them out

from the temporary to prevent them from being released.


I think you're mistaken here. The result of a struct literal is 
usually moved implicitly.


Code:

import std.stdio;

struct S
{
~this() {writeln("dtor");}
}

void main()
{
auto s = S();
writeln("end of main");
}


Output:

end of main
dtor


If there was a copy that's destroyed after the assignment, 
there should be another "dtor" before "end of main".


Yeah...and I just stuck this into TileView:
@disable this();
@disable this(this);

and it compiled just fine. If it created a copy I assume the 
compiler would have choked on that.


Re: is increment on shared ulong atomic operation?

2016-02-07 Thread rsw0x via Digitalmars-d-learn

On Sunday, 7 February 2016 at 19:39:27 UTC, rsw0x wrote:
On Sunday, 7 February 2016 at 19:27:19 UTC, Charles Hixson 
wrote:
If I define a shared ulong variable, is increment an atomic 
operation?

E.g.

shared ulong t;

...

t++;

It seems as if it ought to be, but it could be split into 
read, increment, store.


I started off defining a shared struct, but that seems silly, 
as if the operations defined within a shared struct are 
synced, then the operation on a shared variable should be 
synced, but "+=" is clearly stated not to be synchronized, so 
I'm uncertain.


https://dlang.org/phobos/core_atomic.html#.atomicOp


Just noticed that there's no example.
It's used like

shared(ulong) a;
atomicOp!"+="(a, 1);


Re: [suggestion] Automated one-stop compiler version chart

2016-02-07 Thread Xinok via Digitalmars-d

On Sunday, 7 February 2016 at 18:46:48 UTC, Nick Sabalausky wrote:
I was just updating a project's .travis.yml file and noticed: 
It doesn't seem we have any one-stop-shop location to check all 
the versions of DMD/LDC/GDC currently available on travis-ci.


It'd be really nice if we had some auto-updated chart like that 
which ALSO listed the DMDFE, LLVM and GCC versions each LDC/GDC 
version is based on.


That info seems especially difficult to find for GDC. It's a 
little easier for LDC, since I found this page ( 
https://github.com/ldc-developers/ldc/releases ), but it'd be 
really nice to have just a simple chart somewhere.


The GDC downloads page has this info:
http://gdcproject.org/downloads

Perhaps it would be good to add this info to the main downloads 
page:

http://dlang.org/download.html


Re: Cannot get Derelict to work

2016-02-07 Thread Whirlpool via Digitalmars-d-learn
Thank you very much for your explanations and patience :) I 
indeed have an AMD Radeon HD 7870 card, and using 4.4 as the max 
version fixes my problem !


Re: Odd Destructor Behavior

2016-02-07 Thread anonymous via Digitalmars-d-learn

On 07.02.2016 22:49, Matt Elkins wrote:

 From this non-reduced situation, does anything jump out? Am I missing
something about struct lifetimes? This is the only place I instantiate a
TileView.


Looks weird. I presume this doesn't happen with simpler constructor 
parameters/arguments, like int instead of Texture.Handle? I don't see 
how the parameter types would make a destructor call appear. Might be a bug.


Can you post the code for Texture, makeInputStream, etc, so that we have 
a full, reproducible test case?


Re: Odd Destructor Behavior

2016-02-07 Thread anonymous via Digitalmars-d-learn

On 07.02.2016 23:07, Márcio Martins wrote:

The destructor you are seeing is from the assignment:

m_tileView = TileView(...);

This creates a temporary TileView, copies it to m_tileView, and then
destroys it. I suppose you want to move it instead. You need to copy the
handles from the temporary into the destination, and then clear them out
from the temporary to prevent them from being released.


I think you're mistaken here. The result of a struct literal is usually 
moved implicitly.


Code:

import std.stdio;

struct S
{
~this() {writeln("dtor");}
}

void main()
{
auto s = S();
writeln("end of main");
}


Output:

end of main
dtor


If there was a copy that's destroyed after the assignment, there should 
be another "dtor" before "end of main".


[Issue 15653] New: IFTI fails for immutable parameter

2016-02-07 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=15653

  Issue ID: 15653
   Summary: IFTI fails for immutable parameter
   Product: D
   Version: D2
  Hardware: All
OS: All
Status: NEW
  Severity: major
  Priority: P1
 Component: dmd
  Assignee: nob...@puremagic.com
  Reporter: john.loughran.col...@gmail.com

% cat test.d
void foo(T)(const T x) {}
void bar(T)(immutable T x) {}

void main()
{
foo(4); // OK
bar(4); // Error
}
% dmd test.d
test.d(8): Error: template test.bar cannot deduce function from argument types
!()(int), candidates are:
test.d(3):test.bar(T)(immutable T x)

--


Re: Odd Destructor Behavior

2016-02-07 Thread Matt Elkins via Digitalmars-d-learn

On Sunday, 7 February 2016 at 22:04:27 UTC, anonymous wrote:

On 07.02.2016 22:49, Matt Elkins wrote:
 From this non-reduced situation, does anything jump out? Am I 
missing
something about struct lifetimes? This is the only place I 
instantiate a

TileView.


Looks weird. I presume this doesn't happen with simpler 
constructor parameters/arguments, like int instead of 
Texture.Handle? I don't see how the parameter types would make 
a destructor call appear. Might be a bug.


Correct; if I switch the second Texture.Handle to an int it 
doesn't happen. Nor if I remove it altogether. Nor if I create 
the Texture.Handle on the line immediately above TileView's 
construction, and then pass in the created Texture.Handle. I also 
didn't understand how the parameters would cause this.


Can you post the code for Texture, makeInputStream, etc, so 
that we have a full, reproducible test case?


Oi. Yes, I can, but it is quite a lot of code even if you don't 
count that it is dependent on OpenGL, GLFW, and gl3n to run to 
this point. This is why I was disappointed that simpler 
reproducing cases weren't appearing. I should probably spend more 
time trying to reduce the case some...


Re: Does anyone care if Tuple.slice() returns by ref?

2016-02-07 Thread Saurabh Das via Digitalmars-d
On Sunday, 7 February 2016 at 13:13:21 UTC, Ola Fosheim Grøstad 
wrote:

On Sunday, 7 February 2016 at 13:01:14 UTC, tsbockman wrote:
That is essentially what my PR does. But, some people are 
unhappy with the thought of a slice's type not matching the 
type of the equivalent standard Tuple:


Well, Tuple is flawed by design for more than one reason. IMO 
it should be replaced wholesale with a clean design with 
consistent semantics.


Why is the design flawed?



Re: Github woes

2016-02-07 Thread Nick Sabalausky via Digitalmars-d

On 02/07/2016 05:48 AM, Joakim wrote:


Unfortunately, there's a lot of valuable info in the PR comments, that
would be lost if github.com went down.  Since D never switched from
bugzilla to github for bugs, that wouldn't be an issue.  Hopefully, we
could pull that github PR discussion from a backup at archive.org or
someplace.


It'd be nice if we'd migrate to gitlabs. Not only is it really nice (I 
like it slightly better than github) but it doesn't have github's 
problems with being such a walled-garden.


There's also a github/gitlabs-like tool on sandstorm.io that sounds 
good, although I haven't tried it.




Re: Does anyone care if Tuple.slice() returns by ref?

2016-02-07 Thread Ola Fosheim Grøstad via Digitalmars-d

On Sunday, 7 February 2016 at 16:27:32 UTC, Saurabh Das wrote:

Why is the design flawed?


Because it breaks expectations.

Tuples should be builtin and the primary use case is supporting 
multiple return values with heavy duty register optimization.




[suggestion] Automated one-stop compiler version chart

2016-02-07 Thread Nick Sabalausky via Digitalmars-d
I was just updating a project's .travis.yml file and noticed: It doesn't 
seem we have any one-stop-shop location to check all the versions of 
DMD/LDC/GDC currently available on travis-ci.


It'd be really nice if we had some auto-updated chart like that which 
ALSO listed the DMDFE, LLVM and GCC versions each LDC/GDC version is 
based on.


That info seems especially difficult to find for GDC. It's a little 
easier for LDC, since I found this page ( 
https://github.com/ldc-developers/ldc/releases ), but it'd be really 
nice to have just a simple chart somewhere.


is increment on shared ulong atomic operation?

2016-02-07 Thread Charles Hixson via Digitalmars-d-learn

If I define a shared ulong variable, is increment an atomic operation?
E.g.

shared ulong t;

...

t++;

It seems as if it ought to be, but it could be split into read, 
increment, store.


I started off defining a shared struct, but that seems silly, as if the 
operations defined within a shared struct are synced, then the operation 
on a shared variable should be synced, but "+=" is clearly stated not to 
be synchronized, so I'm uncertain.




Re: Odd Destructor Behavior

2016-02-07 Thread anonymous via Digitalmars-d-learn

On 07.02.2016 23:49, Matt Elkins wrote:

Oi. Yes, I can, but it is quite a lot of code even if you don't count
that it is dependent on OpenGL, GLFW, and gl3n to run to this point.
This is why I was disappointed that simpler reproducing cases weren't
appearing. I should probably spend more time trying to reduce the case
some...


Minimal test cases are great, but if you're not able to get it down in 
size, or not willing to, then a larger test case is ok, too. The problem 
is clear, and I'd expect reducing it to be relatively straight-forward 
(but possibly time-consuming). Just don't forget about it completely, 
that would be bad.


Also be aware of DustMite, a tool for automatic reduction:

https://github.com/CyberShadow/DustMite


Re: Just because it's a slow Thursday on this forum

2016-02-07 Thread Andrei Alexandrescu via Digitalmars-d

On 02/04/2016 09:46 PM, Tofu Ninja wrote:

On Thursday, 4 February 2016 at 15:33:41 UTC, Andrei Alexandrescu wrote:

https://github.com/D-Programming-Language/phobos/pull/3971 -- Andrei


People on github were asking for a dump function so they could do
  int a = 5;
  dump!("a"); // prints "a = 5"


Here's a working version if anyone wants it but you have to use it like
  mixin dump!("a");


//

mixin template dump(Names ... )
{
 auto _unused_dump = {
 import std.stdio : writeln, write;
 foreach(i,name; Names)
 {
 write(name, " = ", mixin(name), (i

Re: Does anyone care if Tuple.slice() returns by ref?

2016-02-07 Thread tsbockman via Digitalmars-d
On Sunday, 7 February 2016 at 16:49:16 UTC, Ola Fosheim Grøstad 
wrote:

On Sunday, 7 February 2016 at 16:27:32 UTC, Saurabh Das wrote:

Why is the design flawed?


Because it breaks expectations.

Tuples should be builtin and the primary use case is 
supporting multiple return values with heavy duty register 
optimization.


The fact that `Tuple` cannot implement `opIndex` or `opSlice`, 
but instead must use the non-standard `slice` name is a good 
indicator that the current design is less than ideal.


Re: Odd Destructor Behavior

2016-02-07 Thread Matt Elkins via Digitalmars-d-learn

On Sunday, 7 February 2016 at 23:11:34 UTC, anonymous wrote:

On 07.02.2016 23:49, Matt Elkins wrote:
Oi. Yes, I can, but it is quite a lot of code even if you 
don't count
that it is dependent on OpenGL, GLFW, and gl3n to run to this 
point.
This is why I was disappointed that simpler reproducing cases 
weren't
appearing. I should probably spend more time trying to reduce 
the case

some...


Minimal test cases are great, but if you're not able to get it 
down in size, or not willing to, then a larger test case is ok, 
too. The problem is clear, and I'd expect reducing it to be 
relatively straight-forward (but possibly time-consuming). Just 
don't forget about it completely, that would be bad.


Also be aware of DustMite, a tool for automatic reduction:

https://github.com/CyberShadow/DustMite


Turns out it was less hard to reduce than I thought. Maybe it 
could be taken down some more, too, but this is reasonably small:


[code]
import std.stdio;

struct TextureHandle
{
    ~this() {}
}

TextureHandle create() {return TextureHandle();}

struct TileView
{
    @disable this();
    @disable this(this);
    this(TextureHandle a, TextureHandle b) {}
    ~this() {writeln("HERE2");}
}

struct View
{
    this(int)
    {
        writeln("HERE1a");
        m_tileView = TileView(create(), create());
        writeln("HERE1b");
    }

    private TileView m_tileView;
}

unittest
{
    auto v = View(5);
}
[/code]

This yields the following:

[output]
HERE1a
HERE2
HERE1b
HERE2
[/output]

I would have expected only one "HERE2", the last one. Any of a number of 
changes causes it to behave in the expected way, including (but probably 
not limited to):

* Creating the TextureHandles directly rather than calling create()
* Using only one argument to TileView's constructor
* Removing TextureHandle's empty destructor

That last one especially seems to indicate a bug to me...


Re: Odd Destructor Behavior

2016-02-07 Thread Matt Elkins via Digitalmars-d-learn

Some environment information:
DMD 2.070 32-bit
Windows 7 (64-bit)


Re: Just because it's a slow Thursday on this forum

2016-02-07 Thread John Colvin via Digitalmars-d
On Sunday, 7 February 2016 at 23:26:05 UTC, Andrei Alexandrescu 
wrote:

On 02/04/2016 09:46 PM, Tofu Ninja wrote:
On Thursday, 4 February 2016 at 15:33:41 UTC, Andrei 
Alexandrescu wrote:
https://github.com/D-Programming-Language/phobos/pull/3971 -- 
Andrei


People on GitHub were asking for a dump function so they 
could do

  int a = 5;
  dump!("a"); // prints "a = 5"


Here's a working version if anyone wants it but you have to 
use it like

  mixin dump!("a");


//

mixin template dump(Names ... )
{
 auto _unused_dump = {
 import std.stdio : writeln, write;
 foreach(i,name; Names)
 {
 write(name, " = ", mixin(name), 
(i

What is a short, fast way of testing whether x in [a, b]?

2016-02-07 Thread Enjoys Math via Digitalmars-d-learn

Right now I'm using a logical ||:

if (!(2*PI - EPS!float <= t1-t0 || t1-t0 <= 2*PI + EPS!float)) {

But I'll be doing this a lot, so was wondering if there's a D 
native way of doing it.


Thanks.


[Issue 15376] The time zone name conversions should not be compiled into Phobos

2016-02-07 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=15376

Jonathan M Davis  changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution|--- |FIXED

--- Comment #3 from Jonathan M Davis  ---
Technically, removing the functions with the hard-coded conversions would be
what's required to fix this bug per its title. However, the new functionality is
in place now, and the old functionality will be deprecated after the new
functionality has been out for a release and then removed at the end of the
deprecation cycle, so I'm going to mark this as fixed.

--


Re: [suggestion] Automated one-stop compiler version chart

2016-02-07 Thread David Nadlinger via Digitalmars-d

On Sunday, 7 February 2016 at 21:26:36 UTC, Xinok wrote:
On Sunday, 7 February 2016 at 18:46:48 UTC, Nick Sabalausky 
wrote:
I was just updating a project's .travis.yml file and noticed: 
It doesn't seem we have any one-stop-shop location to check 
all the versions of DMD/LDC/GDC currently available on 
travis-ci.

[…]


The GDC downloads page has this info:
http://gdcproject.org/downloads


The page doesn't have the history of versions available at Travis 
though, which is what Nick asked for.


I agree, by the way, something like that would definitely be nice 
to have. Somebody just needs to write a little scraper set up to 
gather everything into a pretty table…


 — David


Re: voldemort stack traces (and bloat)

2016-02-07 Thread Steven Schveighoffer via Digitalmars-d

On 2/7/16 10:42 AM, Iakh wrote:

On Sunday, 7 February 2016 at 05:18:39 UTC, Steven Schveighoffer wrote:

4   testexpansion   0x00010fb5dbec pure @safe
void
testexpansion.s!(testexpansion.s!(testexpansion.s!(testexpansion.s!(testexpansion.s!



Why "bad" foo is void?


Huh? foo returns void in both instances.


Yeah, would be nice to auto-replace with
testexpansion.S!(...)(...).Result.foo
or even with ...Result.foo


A possible fix for the stack printing is to use the template parameter 
placeholders:


testexpansion.s!(T = testexpansion.s!...)(T).Result.foo

But this doesn't fix the object-file bloat.

-Steve


Re: is increment on shared ulong atomic operation?

2016-02-07 Thread Charles Hixson via Digitalmars-d-learn

Thanks, that's what I needed to know.

I'm still going to do it as a class, but now only the inc routine needs 
to be handled specially.
(The class is so that other places where the value is used don't even 
need to know that it's special.  And so that instances are easy to share 
between threads.)


On 02/07/2016 11:43 AM, rsw0x via Digitalmars-d-learn wrote:

On Sunday, 7 February 2016 at 19:39:27 UTC, rsw0x wrote:

On Sunday, 7 February 2016 at 19:27:19 UTC, Charles Hixson wrote:

If I define a shared ulong variable, is increment an atomic operation?
E.g.

shared ulong t;

...

t++;

It seems as if it ought to be, but it could be split into read, 
increment, store.


I started off defining a shared struct, but that seems silly: if 
the operations defined within a shared struct are synchronized, then an 
operation on a shared variable should be synchronized too. But "+=" is clearly 
stated not to be synchronized, so I'm uncertain.


https://dlang.org/phobos/core_atomic.html#.atomicOp


Just noticed that there's no example.
It's used like

shared(ulong) a;
atomicOp!"+="(a, 1);





[Issue 15655] New: SysTime.from*String incorrectly accept single digit time zones and minutes > 59

2016-02-07 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=15655

  Issue ID: 15655
   Summary: SysTime.from*String incorrectly accept single digit
time zones and minutes > 59
   Product: D
   Version: D2
  Hardware: All
OS: All
Status: NEW
  Severity: normal
  Priority: P1
 Component: phobos
  Assignee: nob...@puremagic.com
  Reporter: issues.dl...@jmdavisprog.com

"+08" and "-08" would be valid for a time zone per ISO 8601, as should be
"+08:00" and "-08:00" for the extended format and "+0800" and "-0800" for the
non-extended format. However, strings like "+8" or "-8" are not legal. It must
be +hhmm, -hhmm, +hh, or -hh for non-extended and +hh:mm, -hh:mm, +hh, or -hh
for extended. And currently, SysTime.from*String is not properly strict about
the time zone, allowing +h and -h when there are no minutes (even though it is
properly strict when there are minutes) and is not strict about the number of
digits in the minutes. In addition, it is not strict about whether the number
of minutes is actually valid (e.g. "+07:60" should not be valid).

--


Re: Dconf 2015 talks...

2016-02-07 Thread Era Scarecrow via Digitalmars-d
On Sunday, 7 February 2016 at 22:27:40 UTC, Joseph Rushton 
Wakeling wrote:

On Monday, 25 January 2016 at 22:06:31 UTC, Era Scarecrow wrote:
What you describe makes sense, but I don't quite follow what 
you mean in one particular case:


 Technically alloca simply returns the current sp, then adds to it the 
number of bytes you requested. This means you have to run it in the stack 
frame of the function where you want to use it (and not in a called 
function, otherwise you get corruption). So inlining the functions in which 
alloca's data would remain would be a must.


 I'm referring to the low-level assembly language that's generated; at 
least based on what I've read and seen of code output from a compiler to 
a .s file or similar.


I don't quite follow your remark about inlined functions; do 
you mean that the function where the RNG instance is generated 
must be inlined?  (That would make sense in order to avoid the 
internal state being deallocated immediately.)


Assuming the alloca call moves into the inlined function. Although I had 
another idea in my head where the memory would be pre-allocated and you 
could just point to it when requested via an allocator. So assume


@alloca(sizeof(int)) struct RNG {

 During instantiation it would know the size ahead of time and 
just append that to the end of the structure. That extra padding 
space could be handled manually instead.


 this(int seed) {
   state = cast(void*)(this+1);
 }

 But this forced type breaking is quite clunky (and obviously the 
wrong way to write it).


 I recall in C there was supposed to be a way to attach an array 
(of unknown size) immediately following a struct by making the 
length of the array 0, then accessing it directly. But you'd 
still need to guarantee somehow that the access rights are in 
place and not referencing other data which you could screw up 
(via optimizations or something).


I think there might be more complications here than just 
allocating individual RNG instances, though (which can happen 
quite early on in the program); what about stuff like random 
algorithms (RandomCover, RandomSample) which might be generated 
deep in internal loops, passed to other functionality as 
rvalues, etc. etc.?


 Either they use more stack space, or they act normally after 
their call is done and are deallocated normally (automatically, 
unless they are passed outside of the scope where they were 
generated).


It might be simpler, in practice, to just have the state 
refcounted.


 I suppose the alternative is an option to skip/throw away some numbers 
that should've been consumed

I'm not sure I follow what you mean here or why you think this would 
work?  Could you give a concrete example?



  void skip(int x) {assert(x>0); while(x--) popfront();}



rnd.take(10).writeln;  //loosely based on talk examples
rnd.skip(10);  //skips the 'consumed' numbers.
rnd.take(10).writeln;  //won't have identical output
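
As a concrete illustration of the same idea, a minimal sketch using
std.random's Mt19937 (popFrontN from std.range plays the role of the
hand-rolled skip above):

import std.random : Mt19937, unpredictableSeed;
import std.range : take, popFrontN;
import std.stdio : writeln;

void main()
{
    auto rnd = Mt19937(unpredictableSeed);
    rnd.take(10).writeln; // take copies the generator, so rnd itself is not advanced
    rnd.popFrontN(10);    // skip the numbers that were "consumed" by the copy
    rnd.take(10).writeln; // different output from the first line
}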




I'm afraid that's not really viable :-(
But the problem is, in the general case, you can't anticipate 
how many random variates may be popped from your random number 
generator inside a function.


 True; perhaps have one RNG for seeding and one RNG for passing, then 
reseed after handing the function off, although how deep some of this 
could go with its deeper copying, I don't know.


 Perhaps RNG should be a class outright, which probably removes a 
lot of these problems.


Re: An IO Streams Library

2016-02-07 Thread Jason White via Digitalmars-d

On Sunday, 7 February 2016 at 10:50:24 UTC, Johannes Pfau wrote:
I saw this on code.dlang.org some time ago and had a quick 
look. First of all this would have to go into phobos to make 
sure it's used as some kind of a standard. Conflicting stream 
libraries would only cause more trouble.


Then if you want to go for phobos inclusion I'd recommend 
looking at
other stream implementations and learning from their mistakes 
;-)

There's
https://github.com/schveiguy/phobos/tree/babe9fe338f03cafc0fb50fc0d37ea96505da3e3/std/io
which was supposed to be a stream replacement for phobos. Then 
there

are also vibe.d streams*.


I saw Steven's stream implementation quite some time ago and I 
had a look at vibe's stream implementation just now. I think it 
is a mistake to use classes over structs for this sort of thing. 
I briefly tried implementing it with classes, but ran into 
problems. The non-deterministic destruction of classes is 
probably the biggest issue. One has to be careful about calling 
f.close() in order to avoid accumulating too many open file 
descriptors in programs that open a lot of files. Reference 
counting takes care of this problem nicely and has less overhead. 
This is one area where classes relying on the GC is not ideal. 
Rust's ownership system solves this problem quite well. Python 
also solves this with "with" statements.


Your Stream interfaces look like standard stream 
implementations (which
is a good thing) which also work for unbuffered streams. I 
think it's a
good idea to support partial reads and writes. For an 
explanation why
partial reads, see the vibe.d rant below. Partial writes are 
useful
as a write syscall can be interrupted by posix signals to stop 
the
write. I'm not sure if the API should expose this feature (e.g. 
by
returning a partial write on EINTR) but it can sometimes be 
useful.


I don't want to assume what the user wants to do in the event of 
an EINTR unless a certain behavior is desired 100% of the time. I 
don't think that is the case here. Thus, that is probably 
something the user should handle manually, if needed.
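
A rough, POSIX-only sketch of what "handle it manually" could look like at
the syscall layer (this is not the library's API, just core.sys.posix and
core.stdc.errno):

import core.sys.posix.unistd : read;
import core.stdc.errno : errno, EINTR;

// One syscall per call: a partial read or -1/EINTR is reported as-is.
ptrdiff_t readOnce(int fd, ubyte[] buf)
{
    return read(fd, buf.ptr, buf.length);
}

// Caller-side policy: retry interrupted reads only if that is what the user wants.
ptrdiff_t readRetryingEintr(int fd, ubyte[] buf)
{
    ptrdiff_t n;
    do
        n = readOnce(fd, buf);
    while (n == -1 && errno == EINTR);
    return n;
}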


Still, readExactly / writeAll helper functions are useful. I 
would try
to implement these as UFCS functions instead of as a struct 
wrapper.


I agree. I went ahead and made that change.


For some streams you'll need a TimeoutException. An interesting
question is whether users should be able to recover from
TimeoutExceptions. This essentially means if a read/write 
function
internally calls read/write posix calls more than once and only 
the

last one timed out, we already processed some data and it's not
possible to recover from a TimeoutException if the amount of 
already

processed data is unknown.
The simplest solution is using only one syscall internally. Then
TimeoutException => no data was processed. But this doesn't 
work for
read/writeExactly (Another reason why read/writeExactly 
shouldn't be

the default. vibe.d...)


In the current implementation of readExactly/writeExactly, one 
cannot assume how much was read or written in the event of an 
exception anyway. The only way around this I can see is to return 
the number of bytes read/written in the exception itself. In 
fact, that might solve the TimeoutException problem, too. Hmm...
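
One way that could look, as a sketch only (PartialTransferException and a
read(ubyte[]) primitive returning the byte count are assumptions here, not
the library's actual API):

class PartialTransferException : Exception
{
    size_t bytesTransferred; // how much was read/written before the failure

    this(string msg, size_t bytesTransferred,
         string file = __FILE__, size_t line = __LINE__)
    {
        super(msg, file, line);
        this.bytesTransferred = bytesTransferred;
    }
}

// readExactly built on a partial-read primitive; on failure the exception
// carries the number of bytes that were successfully read.
void readExactly(Stream)(ref Stream s, ubyte[] buf)
{
    size_t total;
    while (total < buf.length)
    {
        immutable n = s.read(buf[total .. $]); // may be a partial read
        if (n == 0)
            throw new PartialTransferException("unexpected end of stream", total);
        total += n;
    }
}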


I'd like to keep the fundamental read/write functions at just one 
system call each in order to guarantee that they are atomic in 
relation to each other.


Regarding buffers / sliding windows I'd have a look at 
https://github.com/schveiguy/phobos/blob/babe9fe338f03cafc0fb50fc0d37ea96505da3e3/std/io/buffer.d


Another design question is whether there should be an interface 
for such buffered streams or whether it's OK to have only 
unbuffered streams + one buffer struct / class. Basically the 
question is whether there might be streams that can offer a 
buffer interface but can't  use the standard implementation.


I think it's OK to re-implement buffering for different types of 
streams where it is more efficient to do so. For example, there 
is no need to implement buffering for an in-memory stream 
because, by definition, it is already buffered.


I'm not sure if having multiple buffering strategies would be 
useful. Right now, there is only the fixed-sized sliding window. 
If multiple buffering strategies are useful, then it makes sense 
to have all streams unbuffered by default and have separate 
buffering implementations.


There is an interesting buffering approach here that is mainly 
geared towards parsing: 
https://github.com/DmitryOlshansky/datapicked/blob/master/dpick/buffer/buffer.d



* vibe.d stream rant ahead:

vibe.d streams get some things right and some things very 
wrong. For
example their leastSize/empty/read combo means you might 
actually
have to implement reading data in any of these functions. Users 
have to

handle timeouts or other errors for any of these as well.

Then the API requires a buffered stream; it simply won't work for 
unbuffered IO (leastSize, empty). 

Re: voldemort stack traces (and bloat)

2016-02-07 Thread Steven Schveighoffer via Digitalmars-d

On 2/7/16 5:20 AM, deadalnix wrote:

On Sunday, 7 February 2016 at 05:18:39 UTC, Steven Schveighoffer wrote:

Thoughts?


And no line number. But hey, these are conveniences for youngsters. We
real programmers, who type on the keyboard using our balls, don't need such
distractions.



Remind me never to borrow your laptop.

But really, if you can't figure out what +144 is, *eyeroll*.

-Steve


[Issue 15654] New: SysTime.toISOString formats the time zones incorrectly

2016-02-07 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=15654

  Issue ID: 15654
   Summary: SysTime.toISOString formats the time zones incorrectly
   Product: D
   Version: D2
  Hardware: All
OS: All
Status: NEW
  Severity: normal
  Priority: P1
 Component: phobos
  Assignee: nob...@puremagic.com
  Reporter: issues.dl...@jmdavisprog.com

The ISO Extended format for time zones other than UTC is +hh:mm, -hh:mm, +hh,
or -hh. ISO non-Extended is the same but without the colons. However, currently
SysTime uses colons for non-Extended (it correctly doesn't put the dashes or
colons in the date or time portion, but it still puts the colons in the time
zone portion).

So, we need to fix it so that toISOString does not use colons in the time zone,
and fromISOString should not accept them either, though I think that we're
going to have to accept them for a while in case existing code has been writing
them out with toISOString and saving them to later read in with fromISOString.

--


Re: What is a short, fast way of testing whether x in [a, b]?

2016-02-07 Thread Enjoys Math via Digitalmars-d-learn

On Monday, 8 February 2016 at 02:47:24 UTC, Enjoys Math wrote:

Right now I'm using a logical ||:

if (!(2*PI - EPS!float <= t1-t0 || t1-t0 <= 2*PI + EPS!float)) {

But I'll be doing this a lot, so was wondering if there's a D 
native way of doing it.


Thanks.


Currently I have:

@property T EPS(T)() {
static if (is(T == double)) {
return 0.000_000_001;   
}
static if (is(T == float)) {
return 0.000_001;   
}
static if (is(T == int)) {
return 1;
}
}

alias EPS!float EPSF;
alias EPS!double EPSD;

bool epsEq(T)(T x, T y) {
return x >= y - EPS!T && x <= y + EPS!T;
}
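
For the "is there a D native way" part: std.math.approxEqual, with explicit
relative/absolute tolerances, covers the epsEq case. A small sketch:

import std.math : approxEqual, PI;
import std.stdio : writeln;

void main()
{
    float t0 = 0.0f;
    float t1 = 2 * PI + 1e-7;
    // true if t1 - t0 is within the given relative/absolute tolerance of 2*PI
    writeln(approxEqual(t1 - t0, 2 * PI, 1e-5, 1e-6));
}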



Re: What is a short, fast way of testing whether x in [a, b]?

2016-02-07 Thread cy via Digitalmars-d-learn

On Monday, 8 February 2016 at 03:09:53 UTC, Enjoys Math wrote:

was wondering if there's a D native way of doing it.


That is the D native way of doing it, but you could clean up a 
lot of the boilerplate with some more templates. Also, || tests 
for exclusion, as in whether something is NOT in the range [a,b] 
(it's either above or below).


Also not sure why you have EPS=1 for integers. 1 ∈ [1,1] but 1-1 
is outside of [1,1].


template EPS(T) {
static if (is(T == double)) {
T EPS = 0.000_000_001;
}
static if (is(T == float)) {
T EPS = 0.000_001;
}
static if (is(T == int)) {
T EPS = 0;
}
}

struct DerpRange(T) {
T low;
T hi;
}
// because the constructor Range(1,2) can't POSSIBLY be deduced
// to be Range!(int)(1,2)
DerpRange!T Range(T) (T low, T hi) {
return DerpRange!T(low, hi);
}


bool in_range(T) (T what, DerpRange!T range) {
return
what - EPS!T >= range.low &&
what + EPS!T <= range.hi;
}

bool in_range(T) (T what, T low, T hi) {
return in_range(what,DerpRange!T(low,hi));
}

void main() {
import std.stdio: writeln;

void check(bool success) {
if(success) {
writeln("yay!");
} else {
throw new Exception("fail...");
}
}
check(!in_range(3,4,5));
check(in_range(3,3,5));
check(in_range(3,2,5));
check(in_range(3,2,3));
check(!in_range(3,2,2));
check(in_range(3,Range(0,99)));
auto r = Range(0,99);
check(in_range(42,r));

for(int i=0;i<10;++i) {
import std.random: uniform;
int what = uniform!"[]"(0,99);
check(in_range(what,r));
check(in_range(what,0,99));
check(in_range(what,Range(0,99)));
}
}



Re: voldemort stack traces (and bloat)

2016-02-07 Thread Nicholas Wilson via Digitalmars-d
On Monday, 8 February 2016 at 01:48:32 UTC, Steven Schveighoffer 
wrote:

On 2/7/16 10:42 AM, Iakh wrote:
On Sunday, 7 February 2016 at 05:18:39 UTC, Steven 
Schveighoffer wrote:
4   testexpansion   0x00010fb5dbec 
pure @safe

void
testexpansion.s!(testexpansion.s!(testexpansion.s!(testexpansion.s!(testexpansion.s!



Why "bad" foo is void?


Huh? foo returns void in both instances.


Yeah, would be nice to auto-replace with
testexpansion.S!(...)(...).Result.foo
or even with ...Result.foo


A possible fix for the stack printing is to use the template 
parameter placeholders:


testexpansion.s!(T = testexpansion.s!...)(T).Result.foo

But this doesn't fix the object-file bloat.

-Steve


I think it was Manu who was complaining about symbol length some 
time ago and we ended up discussing symbol compression as a 
possible solution. Did anything ever come of that? If so this 
seems like an obvious candidate for recursive compression.


Nic




How do you pass in a static array by reference?

2016-02-07 Thread Enjoys Math via Digitalmars-d-learn



I have several class members:

Arc[4] arcs;
Arc[4] arcs_2;

and I'd like to initialize them with the same function, so how do 
I "pass them in" by reference?


Re: How do you pass in a static array by reference?

2016-02-07 Thread Jakob Ovrum via Digitalmars-d-learn

On Monday, 8 February 2016 at 05:59:43 UTC, Enjoys Math wrote:



I have several class members:

Arc[4] arcs;
Arc[4] arcs_2;

and I'd like to initialize them with the same function, so how 
do I "pass them in" by reference?


void foo(ref Arc[4] arr)
{
…
}

The dimension can of course be templatized:

void foo(size_t n)(ref Arc[n] arr)
{
…
}


Re: How do you pass in a static array by reference?

2016-02-07 Thread Jakob Ovrum via Digitalmars-d-learn

On Monday, 8 February 2016 at 06:01:24 UTC, Jakob Ovrum wrote:

On Monday, 8 February 2016 at 05:59:43 UTC, Enjoys Math wrote:



I have several class members:

Arc[4] arcs;
Arc[4] arcs_2;

and I'd like to initialize them with the same function, so how 
do I "pass them in" by reference?


void foo(ref Arc[4] arr)
{
…
}

The dimension can of course be templatized:

void foo(size_t n)(ref Arc[n] arr)
{
…
}


Alternatively you can take a slice of the fixed-length array:

void foo(Arc[] arr)
{
…
}

foo(arcs[]);
foo(arcs_2[]);
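
Putting the two suggestions together, a minimal runnable example (the Arc
type here is made up for illustration):

import std.stdio : writeln;

struct Arc { double start, end; }

// works for any fixed length thanks to the template parameter
void initArcs(size_t n)(ref Arc[n] arr)
{
    foreach (i, ref a; arr)
        a = Arc(i * 1.0, i * 1.0 + 0.5);
}

class Thing
{
    Arc[4] arcs;
    Arc[4] arcs_2;

    this()
    {
        initArcs(arcs);   // static arrays bind directly to ref parameters
        initArcs(arcs_2);
    }
}

void main()
{
    auto t = new Thing;
    writeln(t.arcs);
    writeln(t.arcs_2);
}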


Re: IDE - Coedit 2 rc1

2016-02-07 Thread Suliman via Digitalmars-d-announce

On Sunday, 7 February 2016 at 13:18:44 UTC, Basile Burg wrote:

See https://github.com/BBasile/Coedit/releases/tag/2_rc1


Cool! Thanks! But do you have any plans to reimplement it from 
Pascal to В to get it's more native... 
https://github.com/filcuc/DOtherSide maybe helpful


Re: Dynamic Ctors ?

2016-02-07 Thread Voitech via Digitalmars-d-learn

On Saturday, 6 February 2016 at 23:35:17 UTC, Ali Çehreli wrote:

On 02/06/2016 10:05 AM, Voitech wrote:

> [...]

You can use string mixins (makeCtor and makeCtors):

string makeCtor(T)() {
import std.string : format;

[...]


Thank you very much for answering.
Cheers