Re: how to assign to shared obj.systime?

2020-07-10 Thread Kagamin via Digitalmars-d-learn

On Friday, 10 July 2020 at 17:18:25 UTC, mw wrote:

On Friday, 10 July 2020 at 08:48:38 UTC, Kagamin wrote:

On Friday, 10 July 2020 at 05:12:06 UTC, mw wrote:

looks like we still have to cast:
as of 2020, sigh.


Why not?


Because cast is ugly.


Implicitly escaping thread-local data into a shared context is much
uglier than a cast. D disallows such implicit sharing, and thus
guarantees at the language level that thread-local data stays
thread-local. SysTime wasn't designed to be shared, and is therefore
incompatible with sharing by default; this enforces the promise that
a SysTime must remain thread-local.


synchronized setTime(ref SysTime t) shared {
    (cast()this).time = t;
}
Steven's solution isn't good in the general case, because it still
puts thread-local data in a shared context. That is a problem in
itself: it makes the thread-local data implicitly shared, and when
you work with such implicitly shared data you can no longer assume
it's thread-local, because it might have escaped into a shared
context. In this case the language prevented implicit sharing of
thread-local data (this is what shared does, and it does it well,
contrary to the popular myth that shared is broken).
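To illustrate, a minimal sketch (hypothetical names, not from the thread) of the implicit escape the compiler rejects:

```d
// Hypothetical sketch: thread-local data cannot implicitly escape
// into shared context.
shared(int)* global;

void main()
{
    int local;           // thread-local by default
    // global = &local;  // Error: `int*` does not implicitly
                         // convert to `shared(int)*`
}
```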


Re: GDC and DMD incompatibility, can both be used?

2020-07-10 Thread H. S. Teoh via Digitalmars-d-learn
On Sat, Jul 11, 2020 at 04:28:32AM +, cy via Digitalmars-d-learn wrote:
> hunt/source/hunt/serialization/JsonSerializer.d:125:20: error: basic
> type expected, not foreach
>   125 | static foreach (string member; FieldNameTuple!T) {
> 
> I'm having a little trouble using the hunt library with gdc. Does gdc
> not support static foreach at all? Is there some way to write code
> that it can understand, which does the same thing?

Which version of GDC are you using?  Static foreach is a relatively new
feature, and GDC may not have picked it up yet.

GDC uses exactly the same front end as DMD (and LDC), so there is no
fundamental incompatibility. The only issue is that GDC is tied to the
GCC release cycle, so it may be a few releases behind DMD, and therefore
lack some newer features.  Eventually it will catch up, though.
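If the installed GDC predates static foreach (front end 2.076), one possible workaround is a plain foreach over a compile-time tuple, which older front ends already unroll at compile time. A hedged sketch, not tested against any particular GDC release:

```d
import std.stdio : writeln;
import std.traits : FieldNameTuple;

struct Point { int x; int y; }

void main()
{
    // Unlike `static foreach`, a plain foreach over a type tuple is
    // supported by older front ends and is unrolled at compile time.
    foreach (member; FieldNameTuple!Point)
        writeln(member); // prints "x" then "y"
}
```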


T

-- 
When solving a problem, take care that you do not become part of the problem.


Re: GDC and DMD incompatibility, can both be used?

2020-07-10 Thread cy via Digitalmars-d-learn
And OK yes I see gdc definitely does not support static foreach, 
but instead there's some sort of D compiler written in D compiled 
by GDC? That's just dmd, isn't it?


https://github.com/D-Programming-GDC/GDC/pull/550

He calls it "DDMD."


GDC and DMD incompatibility, can both be used?

2020-07-10 Thread cy via Digitalmars-d-learn
hunt/source/hunt/serialization/JsonSerializer.d:125:20: error: 
basic type expected, not foreach
  125 | static foreach (string member; 
FieldNameTuple!T) {


I'm having a little trouble using the hunt library with gdc. Does 
gdc not support static foreach at all? Is there some way to write 
code that it can understand, which does the same thing?


Re: how to assign to shared obj.systime?

2020-07-10 Thread Jonathan M Davis via Digitalmars-d-learn
On Friday, July 10, 2020 12:30:16 PM MDT mw via Digitalmars-d-learn wrote:
> On Friday, 10 July 2020 at 17:35:56 UTC, Steven Schveighoffer
>
> wrote:
> > Mark your setTime as shared, then cast away shared (as you
> > don't need atomics once it's locked), and assign:
> >
> > synchronized setTime(ref SysTime t) shared {
> >
> > (cast()this).time = t;
> >
> > }
>
> I know I can make it work by casting, my question is:
>
> we had a lock on the owning shared object already, so WHY do we
> still need the cast to make it compile?

Because the type system has no way of knowing that access to that shared
object is currently protected, and baking that into the type system is
actually very difficult - especially if you don't want to be super
restrictive about what is allowed.

The only scheme that anyone has come up with thus far which would work is
TDPL's synchronized classes (which have never been implemented), but in
order for them to work, they would have to be restrictive about what you do
with the member variables, and ultimately, the compiler would still only be
able to implicitly remove the outer layer of shared (i.e. the layer sitting
directly in the class object itself), since that's the only layer that the
compiler could prove hadn't had any references to it escape. So, you'd have
to create a class just to be able to avoid casting, and it wouldn't
implicitly remove enough of shared to be useful in anything but simple
cases.

Sure, it would be great if we could have shared be implicitly removed when
the object in question is protected by a mutex, but the type system would
have to know that that mutex was associated with that object and be able to
prove not only that that mutex was locked but that no other piece of code
could possibly access that shared object without locking that mutex. It
would also have to be able to prove that no thread-local references escaped
from the code where shared was implicitly removed. It's incredibly difficult
to bake the required information into the type system even while being very
restrictive about what's allowed, let alone while allowing code to be as
flexible as code generally needs to be - especially in a systems language
like D.

If someone actually manages to come up with an appropriate scheme that lets
us implicitly remove shared under some set of circumstances, then we may
very well get that ability at some point in the future, but it seems very
unlikely as things stand, and even if someone did manage it, it's even less
likely that it would work outside of a limited set of use cases, since there
are a variety of ways of dealing with safely accessing data across threads.

So, for the foreseeable future, explicit casts are generally going to be
required when dealing with shared.

- Jonathan M Davis





Re: What's the point of static arrays ?

2020-07-10 Thread Ali Çehreli via Digitalmars-d-learn

On 7/10/20 8:03 AM, wjoe wrote:

> What I'm saying is even if this allocation is slow let's say 5ms, but it
> only happens once, that wouldn't matter to overall performance at all.

Yes, you are correct and there are dynamic arrays that are allocated 
once in many programs.


I haven't read the rest of your post but you've said elsewhere that a 
static array is on the stack. Yes, there are such static arrays but the 
issue is not that simple.


struct S {
  float[3] rgb;  // Can be on the stack or dynamic memory
}

The member of that struct can be anywhere:

void foo() {
  S s;// On the stack
  auto arr = [ S() ]; // On dynamically allocated memory
}

Additionally, as is common and understandable in D, we are conflating
dynamic arrays and slices. The way I see it, a dynamic array is owned
by the D runtime. Although a slice is an interface to such dynamic
arrays, a slice can start its life with a non-dynamic array and may
or may not move on to accessing dynamic arrays.


struct S {
  float[] arr;  // A slice can use dynamic or static memory
}

void foo() {
  float[10] storage;
  auto a = S(storage[1..7]);  // Slice is referring to the stack space
  auto b = S();
  b.arr ~= 1.5;   // Slice is referring to dynamic memory
}

What is important is overhead:

1) Allocation: Only sometimes an issue.

2) Cost of the slice object (1 pointer and 1 size_t): The cost of this 
may be enormous. (Compare the 12-byte rgb above to a 16-byte slice 
overhead.)


3) Cost of accessing the elements: The access through that extra
level of indirection has a cost, but the CPU can alleviate it by
pre-fetching or caching, though only for some access patterns.


4) Bounds checking: Some bounds checks for static arrays can be
elided because the length is known at compile time.
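For point 4, a small sketch: with a static array, a constant out-of-range index is rejected at compile time, while the dynamic array access is checked at run time:

```d
void main()
{
    int[4] s;
    s[3] = 1;     // in range; index and length known at compile time
    // s[5] = 1;  // compile error: array index 5 is out of bounds

    int[] d = new int[4];
    d[3] = 1;     // bounds-checked at run time (unless checks are disabled)
}
```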


So, there are cases where a dynamic array is better (or a must),
there are cases where there is no winner, and there are cases where
a static array is a huge win.


Ali



Re: So how do I find and remove an element from DList?

2020-07-10 Thread Ogi via Digitalmars-d-learn
On Friday, 10 July 2020 at 19:23:57 UTC, Steven Schveighoffer 
wrote:
It's not linear over the size of the list, it's linear over the 
size of the range.


If you are always removing 1 element, it's effectively O(1).

-Steve


I see. Thanks!


Re: So how do I find and remove an element from DList?

2020-07-10 Thread Steven Schveighoffer via Digitalmars-d-learn

On 7/10/20 3:08 PM, Ogi wrote:

auto list = DList!int([1, 2, 3, 4]);
list.remove(list[].find(2).take(1));

Error: function std.container.dlist.DList!int.DList.remove(Range r) is 
not callable using argument types (Take!(Range))


It works if I replace `remove` with `linearRemove`, but that defeats the 
whole purpose of using a doubly linked list.


It's not linear over the size of the list, it's linear over the size of 
the range.


If you are always removing 1 element, it's effectively O(1).
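Putting the thread's pieces together as a complete sketch (using `linearRemove`, which the question found to compile):

```d
import std.algorithm.comparison : equal;
import std.algorithm.searching : find;
import std.container.dlist : DList;
import std.range : take;

void main()
{
    auto list = DList!int([1, 2, 3, 4]);
    // find() returns the range starting at the found element;
    // take(1) narrows it to just that one element.
    list.linearRemove(list[].find(2).take(1));
    assert(equal(list[], [1, 3, 4]));
}
```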

-Steve


Re: how to assign to shared obj.systime?

2020-07-10 Thread Steven Schveighoffer via Digitalmars-d-learn

On 7/10/20 2:30 PM, mw wrote:

On Friday, 10 July 2020 at 17:35:56 UTC, Steven Schveighoffer wrote:
Mark your setTime as shared, then cast away shared (as you don't need 
atomics once it's locked), and assign:


synchronized setTime(ref SysTime t) shared {
    (cast()this).time = t;
}


I know I can make it work by casting, my question is:

we had a lock on the owning shared object already, so WHY do we still need
the cast to make it compile?




Because locking isn't that simple. There is no "one size fits all" 
locking scheme that can be enforced by the language. So the best option 
is to make sure if you shoot yourself in the foot, it's your fault, and 
not D's.


-Steve


So how do I find and remove an element from DList?

2020-07-10 Thread Ogi via Digitalmars-d-learn

auto list = DList!int([1, 2, 3, 4]);
list.remove(list[].find(2).take(1));

Error: function 
std.container.dlist.DList!int.DList.remove(Range r) is not 
callable using argument types (Take!(Range))


It works if I replace `remove` with `linearRemove`, but that 
defeats the whole purpose of using a doubly linked list.


Re: how to assign to shared obj.systime?

2020-07-10 Thread mw via Digitalmars-d-learn
On Friday, 10 July 2020 at 17:35:56 UTC, Steven Schveighoffer 
wrote:
Mark your setTime as shared, then cast away shared (as you 
don't need atomics once it's locked), and assign:


synchronized setTime(ref SysTime t) shared {
    (cast()this).time = t;
}


I know I can make it work by casting, my question is:

we had a lock on the owning shared object already, so WHY do we
still need the cast to make it compile?




Re: how to assign to shared obj.systime?

2020-07-10 Thread Steven Schveighoffer via Digitalmars-d-learn

On 7/10/20 1:18 PM, mw wrote:

On Friday, 10 July 2020 at 08:48:38 UTC, Kagamin wrote:

On Friday, 10 July 2020 at 05:12:06 UTC, mw wrote:

looks like we still have to cast:
as of 2020, sigh.


Why not?


Because cast is ugly.

I've also tried this:
```
import std.datetime.systime : SysTime;

class A {
    SysTime time;
    synchronized setTime(ref SysTime t) {
        time = t;
    }
}

void main() {
    shared A a = new A();
    SysTime time;
    a.setTime(time);
}
```

Same Error: template std.datetime.systime.SysTime.opAssign cannot deduce 
function from argument types !()(SysTime) shared, candidates are:
/usr/include/dmd/phobos/std/datetime/systime.d(659,17): opAssign()(auto 
ref const(SysTime) rhs)


However, we have a lock on the owning shared object; still, we need a
cast to make it compile:

```
   cast()time = t;
```



Mark your setTime as shared, then cast away shared (as you don't need 
atomics once it's locked), and assign:


synchronized setTime(ref SysTime t) shared {
    (cast()this).time = t;
}

-Steve


Re: how to assign to shared obj.systime?

2020-07-10 Thread mw via Digitalmars-d-learn

On Friday, 10 July 2020 at 17:18:25 UTC, mw wrote:

On Friday, 10 July 2020 at 08:48:38 UTC, Kagamin wrote:

On Friday, 10 July 2020 at 05:12:06 UTC, mw wrote:

looks like we still have to cast:
as of 2020, sigh.


Why not?


Because cast is ugly.

I've also tried this:
```
import std.datetime.systime : SysTime;

class A {
    SysTime time;
    synchronized setTime(ref SysTime t) {
        time = t;
    }
}

void main() {
    shared A a = new A();
    SysTime time;
    a.setTime(time);
}
```

Same Error: template std.datetime.systime.SysTime.opAssign 
cannot deduce function from argument types !()(SysTime) shared, 
candidates are:
/usr/include/dmd/phobos/std/datetime/systime.d(659,17):
opAssign()(auto ref const(SysTime) rhs)


However, we have a lock on the owning shared object; still, we need
a cast to make it compile:

```
  cast()time = t;
```


Shall I log an enhancement bug for this?


Re: how to assign to shared obj.systime?

2020-07-10 Thread mw via Digitalmars-d-learn

On Friday, 10 July 2020 at 08:48:38 UTC, Kagamin wrote:

On Friday, 10 July 2020 at 05:12:06 UTC, mw wrote:

looks like we still have to cast:
as of 2020, sigh.


Why not?


Because cast is ugly.

I've also tried this:
```
import std.datetime.systime : SysTime;

class A {
    SysTime time;
    synchronized setTime(ref SysTime t) {
        time = t;
    }
}

void main() {
    shared A a = new A();
    SysTime time;
    a.setTime(time);
}
```

Same Error: template std.datetime.systime.SysTime.opAssign cannot 
deduce function from argument types !()(SysTime) shared, 
candidates are:
/usr/include/dmd/phobos/std/datetime/systime.d(659,17):
opAssign()(auto ref const(SysTime) rhs)


However, we have a lock on the owning shared object; still, we need
a cast to make it compile:

```
  cast()time = t;
```




D and Raku

2020-07-10 Thread RaycatWhoDat via Digitalmars-d-learn

Hello!

I was on my way to post a new topic when I did a search and 
found this one.


https://forum.dlang.org/post/ozubrkqquguyplwon...@forum.dlang.org
On Thursday, 22 November 2018 at 09:03:19 UTC, Gary Willoughby 
wrote:

On Monday, 19 November 2018 at 06:46:55 UTC, dangbinghoo wrote:
So, can you experts give a more comprehensive compare with 
perl6 and D?


Sure!

1). You can actually read and understand D code.


I disagree with this dismissive line of reasoning. I've noticed 
some interesting parallels between D and Raku (formerly Perl 6), 
specifically regarding the modeling power of the two. For 
example, here is a script to create a `|`-delimited CSV to import 
into Google Sheets.


This is my D implementation:

void main() {
  import std.stdio, std.range, std.algorithm;

  auto numbers = "formatted_numbers.txt".File.byLine;
  auto texts = "formatted_text.txt".File.byLine;
  auto types = "formatted_types.txt".File.byLine;

  auto output = File("final_conversion.csv", "a");
  scope(exit) output.close;

  zip(numbers, texts, types)
  .each!(line => output.writefln("%s|%s|%s", line.expand));
}


This is my Raku implementation:

use v6;

sub MAIN() {
my @numbers = "formatted_numbers.txt".IO.lines;
my @text = "formatted_text.txt".IO.lines;
my @types = "formatted_types.txt".IO.lines;

my $fileHandle = open "final_conversion.csv", :a;

for @numbers Z @text Z @types -> [$number, $text, $type] {
$fileHandle.sprintf("%s|%s|%s", $number, $text, $type);
}

$fileHandle.close;
}

Both approaches are quite similar, the main differences being the
Raku version's use of `Z` as the zip operator and its destructuring
assignment in place of `Tuple.expand`. Unfortunately, this example
is a bit too small to really show all of the parallels.


To the people who have used both for less-contrived applications, 
what is your experience with the two? What features have you 
liked from both?


Re: What's the point of static arrays ?

2020-07-10 Thread wjoe via Digitalmars-d-learn

On Friday, 10 July 2020 at 14:20:15 UTC, wjoe wrote:

On Friday, 10 July 2020 at 10:47:49 UTC, psycha0s wrote:

On Friday, 10 July 2020 at 10:13:23 UTC, wjoe wrote:
A static array resides in this memory (with a fixed length, so 
a length needn't be stored because bounds can be checked at 
compile time) if declared as a variable and is accessed via 
stack pointer and offset.


resides is the wrong word again, sorry.

A static array is mapped into stack memory...




Re: What's the point of static arrays ?

2020-07-10 Thread wjoe via Digitalmars-d-learn

On Friday, 10 July 2020 at 11:13:51 UTC, Stanislav Blinov wrote:

On Friday, 10 July 2020 at 10:13:23 UTC, wjoe wrote:

So many awesome answers, thank you very much everyone!

Less overhead,
Using/needing it to interface with something else, and
Efficiency are very good points.

However stack memory needs to be allocated at program start. I 
don't see a huge benefit in allocation speed vs. heap 
pre-allocation, or is there?


Stack is allocated by the OS for the process when it's started. 
Reserving space for stack variables, including arrays, is 
effectively free, since the compiler assigns offsets statically 
at compile time.


I mean 1 allocation vs 2 isn't going to noticeably improve 
overall performance.


A GC allocation is way more complex than a mere 
bump-the-pointer. If your program is trivial enough you may 
actually find that one extra GC allocation is significant in 
its runtime. Of course, if you only ever allocate once and your 
program runs for ages, you won't really notice that allocation.




I think that memory for the program must be allocated when the
process is created - this includes the stack, etc. - but it is an
allocation in computer RAM nonetheless.


Pre-allocating memory for a dynamic array once is one additional
allocation throughout the entire run time of the program.
Now I can't make any guess about how this allocation takes place -
it might be allocated via malloc, a pool, new, etc. - but it is one
additional allocation.
What I'm saying is: even if this allocation is slow - let's say
5 ms - it only happens once, so it wouldn't matter to overall
performance at all.


But performance isn't the focus of this question.

No. A slice is just a pointer/length pair - a contiguous view 
into *some* memory, regardless of where that memory came from:


Metadata is data that describes other data - a pointer/length pair,
which describes a contiguous view into memory, matches that
criterion, doesn't it?
An array is, by definition, a number of elements of the same kind
mapped into contiguous memory, no?


A dynamic array is a pointer/length pair, and a slice is a
pointer/length pair, too.
Can a function that is passed both a dynamic array and a slice tell
the two apart, as in distinguish between the two?
Also, can this function distinguish a slice of a dynamic array from
a slice of a static array?


void takeASlice(scope void[] data) // can take any slice since 
any slice converts to void[]

{
import std.stdio;
writefln("%x %d", data.ptr, data.length);
}

int[10] a;
takeASlice(a); // a[]
takeASlice(a[1 .. $-1]); // a[1 .. 9]

struct S
{
float x, y, z;
float dx, dy, dz;
}

S s;
takeASlice((&s)[0 .. 1]); // Slicing a pointer, not @safe but 
can be done.

takeASlice(new int[10]); // Array, GC allocation
takeASlice([1, 2, 3, 4]); // Array literal, may or may not be 
GC-allocated


`takeASlice` has no knowledge of where the memory came from.

Dynamic arrays only ever come into the picture if you try to 
manipulate the slice itself: resize it, append to it, etc.


that it's not possible to slice a static array because the 
slice would technically be akin to a dynamic array and hence 
be incompatible.


Incompatible to what?


void foo(int[4] a){}

int[4] x;
auto slice = x[];
foo(slice); // that's incompatible, isn't it?

Thank you for your explanations :)


Re: What's the point of static arrays ?

2020-07-10 Thread wjoe via Digitalmars-d-learn

On Friday, 10 July 2020 at 10:47:49 UTC, psycha0s wrote:

On Friday, 10 July 2020 at 10:13:23 UTC, wjoe wrote:
However stack memory needs to be allocated at program start. I 
don't see a huge benefit in allocation speed vs. heap 
pre-allocation, or is there?
I mean 1 allocation vs 2 isn't going to noticeably improve 
overall performance.


Allocation on the stack is basically just a single processor 
instruction that moves the stack pointer (well, ofc you also 
need to initialize array elements). Meanwhile, allocation on 
the heap involves much more complex logic of the memory 
allocator. Moreover, in D dynamic arrays use GC, thus the 
memory allocation may involve a garbage-collection step.


On Friday, 10 July 2020 at 11:20:17 UTC, Simen Kjærås wrote:
You seem to still be thinking of static arrays as the same kind 
of "thing" as a dynamic array. They're (usually) more like ints 
or structs than containers: they're generally small, they're 
often parts of other structures or classes, and they're fairly 
often the element type of a larger dynamic array. For instance, 
a bitmap image could be a byte[4][][], with dynamic dimensions 
3840x2160. If instead of byte[4] we used byte[], not only would 
things grind to a halt immediately, we'd also be using 
massively more memory.


No, not quite. My idea of the stack is like a pre-allocated 
amount of memory in computer RAM which is pointed to by the stack 
pointer.
And further that at program start, when the process is created, 
this memory needs to be allocated by the OS, just like any other 
memory allocation in protected mode, but only once for the entire 
run time of the process.
(And since there is an allocation when a process/thread is 
created, the act of creating a thread is considered slow.)
A static array resides in this memory (with a fixed length, so a 
length needn't be stored because bounds can be checked at compile 
time) if declared as a variable and is accessed via stack pointer 
and offset.


As for dynamic arrays, that's an allocated amount 
(length/capacity) of memory in computer RAM which is not part of 
the stack (the heap).
Whichever creation process is chosen (malloc, GC, Pool that 
doesn't allocate but just returns a pointer/length, etc.) you end 
up with a pointer to the allocated memory in computer RAM and a 
length variable.


But when the static array is part of a data structure which 
itself is stored in a dynamic array, this memory is accessed 
through the pointer of the dynamic array.


Is that not correct ?


Thanks for the answers :)


Re: Send empty assoc array to function

2020-07-10 Thread Steven Schveighoffer via Digitalmars-d-learn

On 7/10/20 4:15 AM, Max Samukha wrote:

On Thursday, 9 July 2020 at 21:04:57 UTC, Steven Schveighoffer wrote:



Why isn't [] accepted as an empty AA literal?


Because it's an empty dynamic array literal.

If D were to accept an empty AA literal, I'd expect it to be [:].



Just as typeof(null) is a subtype of all nullable types, you could make 
typeof([]) a subtype of both AAs and dynamic arrays. [:] could still be 
made a specifically AA literal.


Sure it's possible. But I don't see it happening.



BTW, writeln((int[int]).init) prints "[]" (to!string((V[K]).init) ==
"[]"), but pragma(msg, (int[int]).init) prints the more general
'null' ((V[K]).init.stringof == "null"), which is an unfortunate
inconsistency.


to!string is from the library, pragma(msg) is the compiler. The
latter is authoritative where the compiler is concerned.


to!string probably should be changed. [] should be printed for 
initialized but empty AAs, null should be printed for .init.
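A short sketch of the inconsistency under discussion, with the behaviour as reported above:

```d
import std.conv : to;

void main()
{
    int[int] aa;                     // .init, i.e. a null AA
    assert(to!string(aa) == "[]");   // the library prints "[]"
    // pragma(msg, (int[int]).init); // while the compiler prints `null`
}
```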


-Steve


Re: Unexpected copy constructor behavior

2020-07-10 Thread Steven Schveighoffer via Digitalmars-d-learn

On 7/10/20 3:31 AM, psycha0s wrote:

On Thursday, 9 July 2020 at 22:18:59 UTC, Steven Schveighoffer wrote:
Looking at the generated AST, it's because the compiler is adding an 
auto-generated opAssign, which accepts a Foo by value. It is that 
object that is being created and destroyed.


Is there a reason the autogenerated opAssign accepts its argument by 
value? Honestly, it looks like a premature pessimisation to me.


If it accepts the value by ref, then it will not bind to rvalues. 
Accepting by value accepts anything.


-Steve


Re: Send empty assoc array to function

2020-07-10 Thread JN via Digitalmars-d-learn

On Friday, 10 July 2020 at 03:59:37 UTC, Mike Parker wrote:
Meh. You could say the same about foo(int[]), or 
foo(SomeClass). AAs are reference types. Reference type 
instances can be null.


Oh, that actually makes sense. I always thought assoc arrays were 
value types.


Anyway, even if they are reference types, I still would consider []
and null different kinds of values. [] conveys to me that the object
exists, but is empty. null conveys to me that the object doesn't
exist and cannot be used.


int[int] a = null;
a[5] = 6;

This kind of code just looks weird... yes, I know the " = null " 
part is excessive, but still.
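A sketch of the reference semantics in question: the null AA is still usable as an lvalue, and storage is allocated on the first insertion:

```d
void main()
{
    int[int] a = null;  // redundant: AAs are null by default
    assert(a is null);  // no storage allocated yet
    a[5] = 6;           // the AA is allocated here, on first insertion
    assert(a.length == 1 && a[5] == 6);
}
```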


Re: What's the point of static arrays ?

2020-07-10 Thread Simen Kjærås via Digitalmars-d-learn

On Friday, 10 July 2020 at 10:13:23 UTC, wjoe wrote:
However stack memory needs to be allocated at program start. I 
don't see a huge benefit in allocation speed vs. heap 
pre-allocation, or is there?
I mean 1 allocation vs 2 isn't going to noticeably improve 
overall performance.


You seem to still be thinking of static arrays as the same kind 
of "thing" as a dynamic array. They're (usually) more like ints 
or structs than containers: they're generally small, they're 
often parts of other structures or classes, and they're fairly 
often the element type of a larger dynamic array. For instance, a 
bitmap image could be a byte[4][][], with dynamic dimensions 
3840x2160. If instead of byte[4] we used byte[], not only would 
things grind to a halt immediately, we'd also be using massively 
more memory.


When you're using a static array on the stack, it's usually just 
because it's more convenient to say `int[16] buffer;` than `auto 
buffer = new int[16];`. The fact it may be faster is mostly a 
side benefit. Also, even if you did preallocate such a buffer, 
there's the overhead of remembering how to get to it, the work of 
setting it up, probably a function call on use, etc. The 
alternative is terser, built-in, more obvious to maintainers, 
pretty unlikely to overflow the stack, and very unlikely to be 
slower. Allocating a multi-MiB static array on the stack is a 
sign that you're using your screwdriver as a hammer, and there 
are probably better ways to do what you're trying to do.




a[]

What happens here exactly ?


It creates a dynamic array that points to the data in the static 
array. It's just shorthand for a[0..$]:


unittest {
int[4] a = [1,2,3,4];

auto b = a[];
assert(b.length == 4);
assert(b.ptr == &a[0]);

auto c = a[0..$];
assert(b is c);
}

--
  Simen


Re: What's the point of static arrays ?

2020-07-10 Thread Stanislav Blinov via Digitalmars-d-learn

On Friday, 10 July 2020 at 10:13:23 UTC, wjoe wrote:

So many awesome answers, thank you very much everyone!

Less overhead,
Using/needing it to interface with something else, and
Efficiency are very good points.

However stack memory needs to be allocated at program start. I 
don't see a huge benefit in allocation speed vs. heap 
pre-allocation, or is there?


Stack is allocated by the OS for the process when it's started. 
Reserving space for stack variables, including arrays, is 
effectively free, since the compiler assigns offsets statically 
at compile time.


I mean 1 allocation vs 2 isn't going to noticeably improve 
overall performance.


A GC allocation is way more complex than a mere bump-the-pointer. 
If your program is trivial enough you may actually find that one 
extra GC allocation is significant in its runtime. Of course, if 
you only ever allocate once and your program runs for ages, you 
won't really notice that allocation.



a[]

What happens here exactly ?


This:

int[10] a;
int[] slice = a[];
assert(slice.ptr == &a[0]);
assert(slice.length == 10);
assert(a.sizeof == 10 * int.sizeof);// 40
assert(slice.sizeof == (int[]).sizeof); // 16 on 64 bit

I read the chapters in Ali's book (thank you very much for such 
a great book, Ali) on arrays and slicing prior to asking this 
question and I came to the following conclusion:


Because a static array is pre-allocated on the stack,
doesn't have a pointer/length pair,
is addressed via the stack pointer, and
due to the fact that a slice is a pointer/length pair
  and because a slice is technically the meta data of a dynamic 
array, a view into (part) of a dynamic array,


No. A slice is just a pointer/length pair - a contiguous view 
into *some* memory, regardless of where that memory came from:


void takeASlice(scope void[] data) // can take any slice since 
any slice converts to void[]

{
import std.stdio;
writefln("%x %d", data.ptr, data.length);
}

int[10] a;
takeASlice(a); // a[]
takeASlice(a[1 .. $-1]); // a[1 .. 9]

struct S
{
float x, y, z;
float dx, dy, dz;
}

S s;
takeASlice((&s)[0 .. 1]); // Slicing a pointer, not @safe but can 
be done.

takeASlice(new int[10]); // Array, GC allocation
takeASlice([1, 2, 3, 4]); // Array literal, may or may not be 
GC-allocated


`takeASlice` has no knowledge of where the memory came from.

Dynamic arrays only ever come into the picture if you try to 
manipulate the slice itself: resize it, append to it, etc.


that it's not possible to slice a static array because the 
slice would technically be akin to a dynamic array and hence be 
incompatible.


Incompatible to what?

int[10] a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
a[0 .. 2] = a[2 .. 4];
assert(a[0] == 3);
assert(a[1] == 4);
int[10] b = void;
b[] = a[];
assert(b == [3, 4, 3, 4, 5, 6, 7, 8, 9, 10]);


struct SuperSpecializedArray(T, size_t S) if (S > 0)
{
   T[S] elements;

   struct SuperSpecializedArrayRange
   {
  typeof(elements) e;

  this(SuperSpecializedArray a)
  {
 e = a.elements; // copies
  }

  // ...
   }
}

Upon creation of a SuperSpecializedArrayRange, the array is 
copied, but more importantly, data which may not ever be needed 
is copied and that's supposed to be a big selling point for 
ranges - only ever touching the data when it's requested - am I 
wrong ?


Ranges need not be lazy. They can be, and most of them should be 
indeed, but they need not be. And, as you yourself point out, in 
your case `e` can just be a slice, and your range becomes lazy.
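A hedged sketch of that suggestion, reusing the names from the example above: storing a slice instead of copying the static array makes the range lazy:

```d
struct SuperSpecializedArray(T, size_t S) if (S > 0)
{
    T[S] elements;

    struct SuperSpecializedArrayRange
    {
        T[] e;  // a slice: a view into the array, nothing is copied

        this(ref SuperSpecializedArray a)
        {
            e = a.elements[];  // slice the static array in place
        }

        // ...
    }
}
```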


Re: What's the point of static arrays ?

2020-07-10 Thread psycha0s via Digitalmars-d-learn

On Friday, 10 July 2020 at 10:13:23 UTC, wjoe wrote:
However stack memory needs to be allocated at program start. I 
don't see a huge benefit in allocation speed vs. heap 
pre-allocation, or is there?
I mean 1 allocation vs 2 isn't going to noticeably improve 
overall performance.


Allocation on the stack is basically just a single processor 
instruction that moves the stack pointer (well, ofc you also need 
to initialize array elements). Meanwhile, allocation on the heap 
involves much more complex logic of the memory allocator. 
Moreover, in D dynamic arrays use GC, thus the memory allocation 
may involve a garbage-collection step.




Re: What's the point of static arrays ?

2020-07-10 Thread wjoe via Digitalmars-d-learn

On Thursday, 9 July 2020 at 17:15:26 UTC, Jonathan M Davis wrote:
On Thursday, July 9, 2020 10:21:41 AM MDT H. S. Teoh via 
Digitalmars-d-learn wrote:
> - Assignment copies the whole array, as in int[5] a; auto b 
> = a;


Sometimes this is desirable.  Consider the 3D game example.  
Suppose you're given a vector and need to perform some 
computation on it. If it were a dynamic array, you'd need to 
allocate a new array on the heap in order to work on it 
without changing the original vector. With a static array, 
it's passed by value to your function, so you just do what you 
need to do with it, and when you're done, either discard it 
(== no work because it's allocated on the stack) or return it 
(== return on the stack, no allocations).


I recall that at one point, I wrote a brute-force sudoku solver, 
and initially, I'd used dynamic arrays to represent the board. 
When I switched them to static arrays, it was _way_ faster - 
presumably, because all of those heap allocations were gone. 
And of course, since the sudoku board is always the same size, 
the ability to resize the array was unnecessary.


In most programs that I've written, it hasn't made sense to use 
static arrays anywhere, but sometimes, they're exactly what you 
need.


- Jonathan M Davis


Even though dynamic arrays can be resized, they don't have to be 
if the need never arises - did you measure performance with 
pre-allocated dynamic arrays and disabled bounds checks, too?


I wouldn't expect that big of a performance difference in that 
scenario, but what you expect and what you get can be vastly 
different. I'm really curious to know.


Re: What's the point of static arrays ?

2020-07-10 Thread wjoe via Digitalmars-d-learn

So many awesome answers, thank you very much everyone!

Less overhead,
Using/needing it to interface with something else, and
Efficiency are very good points.

However stack memory needs to be allocated at program start. I 
don't see a huge benefit in allocation speed vs. heap 
pre-allocation, or is there?
I mean 1 allocation vs 2 isn't going to noticeably improve 
overall performance.



On Thursday, 9 July 2020 at 12:48:26 UTC, Simen Kjærås wrote:

[...]

- Can't slice them
- Can't range them


Sure you can:

unittest {
import std.stdio;
import std.algorithm;
int[10] a = [1,2,3,4,5,6,7,8,9,10];

// Note I'm slicing the static array to use in range 
algorithms:

writeln(a[].map!(b => b+2));

[...]


Cool.

a[]

What happens here exactly?

I read the chapters in Ali's book (thank you very much for such a 
great book, Ali) on arrays and slicing prior to asking this 
question and I came to the following conclusion:


Because a static array is pre-allocated on the stack, has no 
pointer/length pair, and is addressed via the stack pointer, 
while a slice is a pointer/length pair (technically the metadata 
of a dynamic array, a view into part of one), I concluded that 
it's not possible to slice a static array: the slice would 
technically be akin to a dynamic array and hence be incompatible.



As for `can't range them`, Ali's chapter on ranges emphatically 
stresses the fact that ranges are lazy.
But since static arrays are copied, I couldn't see how to 
satisfy the lazy property.


Consider this:

struct SuperSpecializedArray(T, size_t S) if (S > 0)
{
    T[S] elements;

    struct SuperSpecializedArrayRange
    {
        typeof(elements) e; // a static array member, same size as the source

        this(SuperSpecializedArray a)
        {
            e = a.elements; // copies the whole array up front
        }

        // ...
    }
}

Upon creation of a SuperSpecializedArrayRange, the array is 
copied, but more importantly, data which may never be needed is 
copied, and that's supposed to be a big selling point for ranges 
(only ever touching the data when it's requested). Am I wrong?


But considering Simen's answer, I now know that a slice of 
elements can be used, so that's that.
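For illustration, here is a sketch (not from the thread) of that 
fix: if the range stores a slice of the static array instead of 
a copy, construction copies nothing and the range stays lazy.

struct SuperSpecializedArray(T, size_t S) if (S > 0)
{
    T[S] elements;

    // Sketch: the range holds a slice (pointer + length) into
    // `elements`, so nothing is copied until an element is read.
    auto range() return // `return` ties the slice's lifetime to `this`
    {
        static struct Range
        {
            T[] e;
            bool empty() const { return e.length == 0; }
            ref T front() { return e[0]; }
            void popFront() { e = e[1 .. $]; }
        }
        return Range(elements[]);
    }
}

The `elements[]` slice is the same trick as `a[]` in Simen's 
unittest: a view over the stack-allocated data, valid as long as 
the array itself is alive.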


Re: how to assign to shared obj.systime?

2020-07-10 Thread Kagamin via Digitalmars-d-learn

On Friday, 10 July 2020 at 05:12:06 UTC, mw wrote:

looks like we still have to cast:
as of 2020, sigh.


Why not?


Re: Send empty assoc array to function

2020-07-10 Thread Max Samukha via Digitalmars-d-learn

On Friday, 10 July 2020 at 08:15:24 UTC, Max Samukha wrote:


a unfortunate

an unfortunate





Re: App/lib config system

2020-07-10 Thread Kagamin via Digitalmars-d-learn

Without contradictions the solution is trivial:

module config;

version(LogEnabled) enum isEnabled=true;
else enum isEnabled=false;

shared int level;
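A hedged usage sketch (the module name and symbols are from the 
post above; the `log` function and atomic access are my own 
illustration):

import config;

void log(string msg)
{
    // Compile-time switch: the whole body disappears when the
    // LogEnabled version is not set.
    static if (config.isEnabled)
    {
        import core.atomic : atomicLoad;
        // `level` is shared, so read it atomically.
        if (atomicLoad(config.level) > 0)
        {
            import std.stdio : writeln;
            writeln(msg);
        }
    }
}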


Re: Send empty assoc array to function

2020-07-10 Thread Max Samukha via Digitalmars-d-learn
On Thursday, 9 July 2020 at 21:04:57 UTC, Steven Schveighoffer 
wrote:




Why isn't [] accepted as an empty AA literal?


Because it's an empty dynamic array literal.

If D were to accept an empty AA literal, I'd expect it to be 
[:].


-Steve


Just as typeof(null) is a subtype of all nullable types, you 
could make typeof([]) a subtype of both AAs and dynamic arrays. 
[:] could still be made a specifically AA literal.


BTW, writeln((int[int]).init) prints "[]" (to!string((V[K]).init) 
== "[]"), but pragma(msg, (int[int]).init) prints the more 
general 'null' ((V[K]).init.stringof == "null"), which is a 
unfortunate inconsistency.


Re: App/lib config system

2020-07-10 Thread Kagamin via Digitalmars-d-learn

Contradictions that don't let you glue it all together.


Re: Unexpected copy constructor behavior

2020-07-10 Thread psycha0s via Digitalmars-d-learn
On Thursday, 9 July 2020 at 22:18:59 UTC, Steven Schveighoffer 
wrote:
Looking at the generated AST, it's because the compiler is 
adding an auto-generated opAssign, which accepts a Foo by 
value. It is that object that is being created and destroyed.


Is there a reason the autogenerated opAssign accepts its argument 
by value? Honestly, it looks like a premature pessimisation to me.


Re: how to assign to shared obj.systime?

2020-07-10 Thread Jonathan M Davis via Digitalmars-d-learn
On Thursday, July 9, 2020 9:01:20 PM MDT mw via Digitalmars-d-learn wrote:
> On Friday, 10 July 2020 at 02:59:56 UTC, mw wrote:
> > Error: template std.datetime.systime.SysTime.opAssign cannot
> > deduce function from argument types !()(SysTime) shared,
> > candidates are:
> > /usr/include/dmd/phobos/std/datetime/systime.d(659,17):
> > opAssign()(auto ref const(SysTime) rhs)
>
> of course, better without casting.

Unless you're dealing with a primitive type that works with atomics, you
pretty much always have to cast when using shared (the only real exception
being objects that are specifically designed to work as shared and do the
atomics or casting internally for you). In general, when operating on a
shared object, you need to protect the section of code that's operating on
it with a mutex and then temporarily cast away shared to operate on the
object as thread-local. It's then up to you to ensure that no thread-local
references to the shared data escape the section of code protected by the
mutex (though scope may help with that if used in conjunction with
-dip1000).
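A sketch of that pattern (the names here are illustrative, not 
from the original question): take a mutex, then temporarily cast 
away shared inside the locked section, making sure no 
thread-local reference escapes it.

import core.sync.mutex : Mutex;
import std.datetime.systime : SysTime;

shared SysTime lastUpdate;
shared Mutex timeLock;

shared static this()
{
    timeLock = new shared Mutex;
}

void setLastUpdate(SysTime t)
{
    timeLock.lock_nothrow();
    scope (exit) timeLock.unlock_nothrow();
    // Within the locked section the data is effectively
    // thread-local, so casting away shared is safe here,
    // as long as the unshared pointer does not escape.
    *cast(SysTime*) &lastUpdate = t;
}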

- Jonathan M Davis