Re: Need Advice: Union or Variant?

2022-11-17 Thread Petar via Digitalmars-d-learn
On Thursday, 17 November 2022 at 20:54:46 UTC, jwatson-CO-edu 
wrote:
I have an implementation of the "[Little 
Scheme](https://mitpress.mit.edu/9780262560993/the-little-schemer/)" educational programming language written in D, [here](https://github.com/jwatson-CO-edu/SPARROW).


It has many problems, but the one I want to solve first is the 
size of the "atoms" (units of data).


`Atom` is a struct that has fields for every possible type of 
data that the language supports. This means that a bool `Atom` 
unnecessarily takes up space in memory with fields for number, 
string, structure, etc.


Here is the 
[definition](https://github.com/jwatson-CO-edu/SPARROW/blob/main/lil_schemer.d#L55):


```d
enum F_Type{
    CONS, // Cons pair
    STRN, // String/Symbol
    NMBR, // Number
    EROR, // Error object
    BOOL, // Boolean value
    FUNC, // Function
}

struct Atom{
    F_Type  kind; //  What kind of atom this is
    Atom*   car;  // - Left  `Atom` Pointer
    Atom*   cdr;  // - Right `Atom` Pointer
    double  num;  // - Number value
    string  str;  // - String value, D-string underlies
    bool    bul;  // - Boolean value
    F_Error err = F_Error.NOVALUE; // Error code
}

```
Question:
**Where do I begin my consolidation of space within `Atom`?  Do 
I use unions or variants?**


In general, I recommend 
[`std.sumtype`](https://dlang.org/phobos/std_sumtype), as it is 
one of the best D libraries for this purpose. It is implemented 
as a struct containing two fields: the `kind` and a `union` of 
all the possible types.
That said, one difficulty you are likely to face is with 
refactoring your code to use the 
[`match`](https://dlang.org/phobos/std_sumtype#.match) and 
[`tryMatch`](https://dlang.org/phobos/std_sumtype#.tryMatch) 
functions, as `std.sumtype.SumType` does not expose the 
underlying kind field.
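
For illustration, here is a rough sketch of what an `Atom` built on `std.sumtype` could look like. This is not the actual SPARROW code; the member set and names are assumptions, and the recursive cons case uses `SumType`'s `This` placeholder:

```d
import std.sumtype;
import std.typecons : Tuple;

enum F_Error { NOVALUE } // stand-in for the real error enum

// Only the largest member plus a small tag is stored,
// instead of one field per possible kind.
alias Atom = SumType!(
    bool,                               // Boolean value
    double,                             // Number
    string,                             // String/Symbol
    F_Error,                            // Error code
    Tuple!(This*, "car", This*, "cdr")  // Cons pair
);

// Instead of checking a `kind` field, you pattern-match:
double asNumber(Atom a)
{
    return a.match!(
        (double n) => n,
        _ => double.nan
    );
}

unittest
{
    Atom x = 3.5;
    assert(asNumber(x) == 3.5);
}
```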


Other notable alternatives are:
* [`mir-core`](https://code.dlang.org/packages/mir-core)'s 
`mir.algebraic`: http://mir-core.libmir.org/mir_algebraic.html
* 
[`taggedalgebraic`](https://code.dlang.org/packages/taggedalgebraic): https://vibed.org/api/taggedalgebraic.taggedalgebraic/


Re: T... args!

2021-12-09 Thread Petar via Digitalmars-d-learn

On Thursday, 9 December 2021 at 00:36:29 UTC, Salih Dincer wrote:

On Wednesday, 8 December 2021 at 23:47:07 UTC, Adam Ruppe wrote:
On Wednesday, 8 December 2021 at 23:43:48 UTC, Salih Dincer 
wrote:


I think you meant to say

void foo(string[] args...) {}


Not exactly...

```d
alias str = immutable(char)[];

void foo(str...)(str args) {
  foreach(ref a; args) {
a.writeln('\t', typeof(a).stringof);
  }
  str s; // "Amazing! ---v";
  s.writeln(": ", typeof(s).stringof);
}
```


Unlike [value template parameters][0] (which consist of an 
existing type + identifier), all other template parameter forms 
introduce a brand-new identifier in the template scope that is 
completely unrelated to whatever other types you may have outside 
in your program (including the ones implicitly imported from 
`object.d`, like `string`).


The `str...` in your `foo` function introduces a [template 
sequence parameter][1] which shadows the `str` `alias` you have 
above. The `str s;` line declares a variable of the `str` type 
sequence, so it's essentially a tuple (*).

See:

```d
import std.stdio : writeln;

alias str = immutable(char)[];

void foo(str...)(str args) {
  foreach(ref a; args) {
a.writeln('\t', typeof(a).stringof);
  }
  str s; // "Amazing! ---v";
  s.writeln(": ", typeof(s).stringof);
}

void main()
{
foo(1, true, 3.5);
}
```

```
1	int
true	bool
3.5	double
0falsenan: (int, bool, double)
```

(*) Technically, you can attempt to explicitly instantiate `foo` 
with non type template arguments, but it will fail to compile, 
since:
* The `args` function parameter demands `str` to be a type (or 
type sequence)
* The `s` function local variable demands `str` to be a type (or 
type sequence)


You can either remove `args` and `s`, or filter the sequence to 
keep only the types:


```d
import std.meta : Filter;
enum bool isType(alias x) = is(x);
alias TypesOnly(args...) = Filter!(isType, args);

void foo(str...)(TypesOnly!str args)
{
static foreach(s; str)
pragma (msg, s);
}

void main()
{
static immutable int a = 42;
foo!(int, double, string)(3, 4.5, "asd");
pragma (msg, ``);
foo!(a, "asd", bool, foo, int[])(true, []);
}
```

```
int
double
string

42
asd
bool
foo(str...)(TypesOnly!str args)
int[]
```

[0]: https://dlang.org/spec/template.html#template_value_parameter
[1]: https://dlang.org/spec/template.html#variadic-templates


Re: Any workaround for "closures are not yet supported in CTFE"?

2021-12-08 Thread Petar via Digitalmars-d-learn

On Wednesday, 8 December 2021 at 17:05:49 UTC, Timon Gehr wrote:

On 12/8/21 9:07 AM, Petar Kirov [ZombineDev] wrote:

[...]


Nice, so the error message is lying.


Closure support deserves way more love in the compiler. I'm quite 
surprised that that hack worked, given that various very similar 
rearrangements that I tried before didn't.



This is a bit more complete:

```d
import std.stdio, std.traits, core.lifetime;
auto partiallyApply(alias fun,C...)(C context){
    return &new class(move(context)){
        C context;
        this(C context) { foreach(i,ref c;this.context) c=move(context[i]); }
        auto opCall(ParameterTypeTuple!fun[context.length..$] args) {
            return fun(context,forward!args);
        }
    }.opCall;
}

// [snip]

```


Thanks, I was struggling to find a good name for this building 
block. `partiallyApply` is a natural fit. Also thanks for the 
move / forwarding icing.


Re: Any workaround for "closures are not yet supported in CTFE"?

2021-12-08 Thread Petar via Digitalmars-d-learn
On Wednesday, 8 December 2021 at 12:17:42 UTC, Stanislav Blinov 
wrote:
On Wednesday, 8 December 2021 at 08:07:59 UTC, Petar Kirov 
[ZombineDev] wrote:



```d
interface ICallable
{
void opCall() const;
}

alias Action = void delegate();

struct A
{
Action[] dg;
}
```


At this point why not just call a spade a spade and store an 
array of ICallables directly? :) I mean, why store fat pointers 
to fat pointers?


Initially that's exactly what I tried, and it worked if the 
result was stored as `static const` / `static immutable`, but it 
didn't when using the `enum`:



```
onlineapp.d(39): Error: variable `onlineapp.main.a` : Unable to 
initialize enum with class or pointer to struct. Use static const 
variable instead.

```

```d
interface ICallable
{
void opCall() const;
}

auto makeDelegate(alias fun, Args...)(auto ref Args args)
{
return new class(args) ICallable
{
Args m_args;
this(Args p_args) { m_args = p_args; }
void opCall() const { fun(m_args); }
};
}

alias Action = void delegate();

ICallable createDelegate(string s)
{
import std.stdio;
return makeDelegate!((string str) => writeln(str))(s);
}

struct A
{
ICallable[] dg;
}

A create()
{
A a;
a.dg ~= createDelegate("hello");
a.dg ~= createDelegate("buy");
return a;
}

void main()
{
enum a = create();
foreach(dg; a.dg)
dg();
}
```


I didn't have time to fully investigate the issue and report this 
compiler limitation.


Re: Any workaround for "closures are not yet supported in CTFE"?

2021-12-08 Thread Petar via Digitalmars-d-learn

On Wednesday, 8 December 2021 at 07:55:55 UTC, Timon Gehr wrote:

On 08.12.21 03:05, Andrey Zherikov wrote:

On Tuesday, 7 December 2021 at 18:50:04 UTC, Ali Çehreli wrote:
I don't know whether the workaround works with your program 
but that delegate is the equivalent of the following struct 
(the struct should be faster because there is no dynamic 
context allocation). Note the type of 'dg' is changed 
accordingly:


The problem with struct-based solution is that I will likely 
be stuck with only one implementation of delegate (i.e. opCall 
implementation). Or I'll have to implement dispatching inside 
opCall based on some "enum" by myself which seems weird to me. 
Do I miss anything?


This seems to work, maybe it is closer to what you are looking 
for.


```d
import std.stdio, std.traits, core.lifetime;

struct CtDelegate(R,T...){
    void* ctx;
    R function(T,void*) fp;
    R delegate(T) get(){
        R delegate(T) dg;
        dg.ptr=ctx;
        dg.funcptr=cast(typeof(dg.funcptr))fp;
        return dg;
    }
    alias get this;
    this(void* ctx,R function(T,void*) fp){ this.ctx=ctx; this.fp=fp; }
    R opCall(T args){ return fp(args,ctx); }
}

auto makeCtDelegate(alias f,C)(C ctx){
    static struct Ctx{ C ctx; }
    return CtDelegate!(ReturnType!(typeof(f)),ParameterTypeTuple!f[0..$-1])(
        new Ctx(forward!ctx),
        (ParameterTypeTuple!f[0..$-1] args,void* ctx){
            auto r=cast(Ctx*)ctx;
            return f(r.ctx,forward!args);
        });
}

struct A{
    CtDelegate!void[] dg;
}

auto createDelegate(string s){
    return makeCtDelegate!((string s){ s.writeln; })(s);
}

A create(){
    A a;
    a.dg ~= createDelegate("hello");
    a.dg ~= createDelegate("buy");
    return a;
}

void main(){
    static a = create();
    foreach(dg; a.dg)
        dg();
}

```


Incidentally, yesterday I played with a very similar solution. 
Here's my version:


https://run.dlang.io/gist/PetarKirov/f347e59552dd87c4c02d0ce87d0e9cdc?compiler=dmd


```d
interface ICallable
{
void opCall() const;
}

auto makeDelegate(alias fun, Args...)(auto ref Args args)
{
return new class(args) ICallable
{
Args m_args;
this(Args p_args) { m_args = p_args; }
void opCall() const { fun(m_args); }
};
}

alias Action = void delegate();

Action createDelegate(string s)
{
import std.stdio;
return &makeDelegate!((string str) => writeln(str))(s).opCall;
}

struct A
{
Action[] dg;
}

A create()
{
A a;
a.dg ~= createDelegate("hello");
a.dg ~= createDelegate("buy");
return a;
}

void main()
{
enum a = create();
foreach(dg; a.dg)
dg();
}
```


Re: Was this supposed to be allowed?

2021-09-15 Thread Petar via Digitalmars-d-learn

On Wednesday, 15 September 2021 at 13:52:40 UTC, z wrote:

```D
float[2] somevalue = somefloat3value[] + cast(Unqual!float[2]) 
[somesharedfloatarray1[i],somesharedfloatarray2[ii]];

```
Older LDC/DMD releases never complained, but now that I upgraded 
DMD, DMD-compiled builds suffer from the runtime assert error 
`core.internal.array.operations.arrayOp!(float[], float[], 
float[], "+", "=").arrayOp at 
.\src\druntime\import\core\internal\array\operations.d(45) 
: Mismatched array lengths for vector operation `


Explicitly specifying `somefloat3value[0..2]` now works, and it 
seems that this assert check is an addition to a recent DMD 
version's `druntime`. Does it mean that this was a recent 
change in the language+runtime, or just a retroactive 
enforcement of language rules that didn't use to be enforced?

Big thanks.


The history is roughly as follows:

* between dmd 2.065 and 2.076 (inclusive), this used to fail at 
runtime with the message "Array lengths don't match for vector 
operation: 2 != 3"
* dmd 2.077 included [druntime PR 1891][1], which was a ground-up 
re-implementation of array operations and in general a very 
welcome improvement. Unfortunately that PR didn't include checks 
to ensure that all arrays have equal length (or perhaps it had 
insufficient checks, I didn't dig into the details).
* 2020-08-04 The issue was reported: 
https://issues.dlang.org/show_bug.cgi?id=21110
* 2021-08-09 A PR that fixes the issue was merged: 
https://github.com/dlang/druntime/pull/3267

* 2021-08-09 The fix was released in 2.097.2

In summary, the validation was always supposed to be there, but 
between 2.077.0 and 2.097.1 it wasn't.
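
For reference, here is a minimal sketch (not code from the original post) of the kind of mismatch the check guards against:

```d
void main()
{
    float[3] a = [1, 2, 3];
    float[2] b = [4, 5];
    float[2] c;

    // The source slices have lengths 3 and 2. On 2.065-2.076 and again
    // since 2.097.2 this fails at runtime with a mismatched-lengths error;
    // on 2.077.0-2.097.1 the mismatch went undetected.
    c[] = a[] + b[];
}
```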


[1]: https://github.com/dlang/druntime/pull/1891


Re: github copilot and dlang

2021-07-05 Thread Petar via Digitalmars-d-learn

On Monday, 5 July 2021 at 15:56:38 UTC, Antonio wrote:
Has someone tried github copilot (https://copilot.github.com/) 
with dlang? Access to the preview could be requested and, I 
think, main dlang team members could bypass the waitlist easily.


I suspect that the "low" volume of D code (used to train 
OpenAI's model) compared to other languages could impact the 
support (if there is any). Anyway, it would be really interesting 
to see how Copilot handles templates, traits, ...


I was wondering the same, but I haven't gotten the chance to try 
it - still on wait list since last week. On the topic of GH 
waitlists, I'm still waiting for access to their Codespaces 
feature since December of last year. I'd love to build a 
codespace template for D, along the lines of the setup-dlang GH 
action.


Re: Checking for manifest constants

2021-03-05 Thread Petar via Digitalmars-d-learn

On Friday, 5 March 2021 at 08:23:09 UTC, Bogdan wrote:
I was using a trick with dmd to check for manifest constants 
which worked until dmd v2.094. Yesterday I tried it on the 
latest compiler and it failed with:



source/introspection/manifestConstant.d(37,28): Error: need 
this for name of type string
source/introspection/type.d(156,13): Error: value of this is 
not known at compile time


any ideas how to fix it? or, is it a bug with dmd?

```

/// Check if a member is manifest constant
bool isManifestConstant(T, string name)() {
  mixin(`return is(typeof(T.init.` ~ name ~ `)) && !is(typeof(` ~ name ~ `));`);
}

/// ditto
bool isManifestConstant(alias T)() {
  return is(typeof(T)) && !is(typeof(&T));
}

enum globalConfig = 32;
int globalValue = 22;

unittest {
  struct Test {
enum config = 3;
int value = 2;
  }

  static assert(isManifestConstant!(Test.config));
  static assert(isManifestConstant!(Test, "config"));
  static assert(isManifestConstant!(globalConfig));

  static assert(!isManifestConstant!(Test.value));
  static assert(!isManifestConstant!(Test, "value"));
  static assert(!isManifestConstant!(globalValue));
}

void main() {}


```


I suggest this:

```d
enum globalConfig = 32;
int globalValue = 22;
immutable globaImmutablelValue = 22;

enum isManifestConstant(alias symbol) =
  __traits(compiles, { enum e = symbol; }) &&
  !__traits(compiles, { const ptr = &symbol; });

unittest {
  struct Test {
    enum config = 3;
    int value = 2;
  }

  static assert(isManifestConstant!(Test.config));
  static assert(isManifestConstant!(mixin("Test.config")));

  static assert(isManifestConstant!(globalConfig));
  static assert(isManifestConstant!(mixin("globalConfig")));

  static assert(!isManifestConstant!(Test.value));
  static assert(!isManifestConstant!(mixin("Test.value")));

  static assert(!isManifestConstant!(globalValue));
  static assert(!isManifestConstant!(mixin("globalValue")));

  static assert(!isManifestConstant!(globaImmutablelValue));
  static assert(!isManifestConstant!(mixin("globaImmutablelValue")));
}
```


Re: GC.addRange in pure function

2021-02-12 Thread Petar via Digitalmars-d-learn

On Friday, 12 February 2021 at 12:17:13 UTC, Per Nordlöw wrote:

On Tuesday, 9 February 2021 at 03:05:10 UTC, frame wrote:

On Sunday, 7 February 2021 at 14:13:18 UTC, vitamin wrote:
Why using 'new' is allowed in pure functions but calling 
GC.addRange or GC.removeRange isn't allowed?


Would making

`new T[]` inject a call to `GC.addRange` based on `T` (and 
maybe also T's attributes)


be a step forward?


`GC.addRange` is only used for memory allocated outside of the GC 
that can hold references to GC-allocated objects. Since `new T[]` 
uses the GC, all the necessary information from the typeinfo is 
already there (*), so `GC.addRange` is unnecessary and even wrong: 
when the GC collects the memory, it won't call `GC.removeRange` on it.


Implementation-wise, metadata about GC-allocated memory is held 
in the GC internal data structures, whereas the GC roots and 
ranges are stored in separate malloc/free-managed containers.


(*) Currently `new T[]` is lowered to an `extern (C)` runtime 
hook and the compiler passes `typeid(T)` to it. After that, the call 
chain is: `_d_newarray{T,iT,mTX,miTX}` -> `_d_newarrayU` 
-> `__arrayAlloc` -> `GC.qalloc` -> `ConservativeGC.mallocNoSync` -> 
`Gcx.alloc` -> `{small,big}Alloc` -> `setBits`
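
For completeness, here is a minimal sketch (not from the original thread) of the case where `GC.addRange` *is* needed, i.e. memory allocated outside the GC that stores references to GC-managed objects:

```d
import core.memory : GC;
import core.stdc.stdlib : malloc, free;

void main()
{
    enum bytes = 4 * (void*).sizeof;
    void* block = malloc(bytes);

    // The GC knows nothing about this C-heap block, so register it
    // as a range to be scanned for pointers into GC memory...
    GC.addRange(block, bytes);

    auto slot = cast(int**) block;
    *slot = new int(42); // this GC object is now reachable only via the C heap

    // ...and unregister it before freeing, since free() won't do that for us.
    GC.removeRange(block);
    free(block);
}
```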


Re: GC.addRange in pure function

2021-02-12 Thread Petar via Digitalmars-d-learn

On Friday, 12 February 2021 at 19:48:01 UTC, vitamin wrote:
On Wednesday, 10 February 2021 at 16:25:44 UTC, Petar Kirov 
[ZombineDev] wrote:

On Wednesday, 10 February 2021 at 13:44:53 UTC, vit wrote:

[...]


TL;DR Yes, you can, but it depends on what "without problem" 
means for you :P


[...]


Thanks,

Yes, I am implementing a container (ref-counted pointer). When 
the allocator is Mallocator (pure allocate and deallocate) and 
the type inside the rc pointer has a pure constructor and 
destructor, the only impure calls were GC.addRange and 
GC.removeRange.

Now they are marked as pure.


Great, that's the exact idea!


Re: GC.addRange in pure function

2021-02-10 Thread Petar via Digitalmars-d-learn
On Wednesday, 10 February 2021 at 16:25:44 UTC, Petar Kirov 
[ZombineDev] wrote:

[..]


A few practical examples:

Here it is deemed that the only observable side-effect of 
`malloc` and friends is the setting of `errno` in case of 
failure, so these wrappers ensure that this is not observed. 
Surely there are low-level ways to observe it (and also the act 
of allocating / deallocating memory on the C heap), but this is 
the definition of purity that the standard library has decided is 
reasonable:

https://github.com/dlang/druntime/blob/master/src/core/memory.d#L1082-L1150

These two function calls in Array.~this() can be marked as 
`pure`, as the Array type as a whole implements the RAII design 
pattern and offers at least basic exception-safety guarantees:

https://github.com/dlang/phobos/blob/81a968dee68728f7ea245b6983eb7236fb3b2981/std/container/array.d#L296-L298

(The whole function is not marked pure, as the purity depends on 
the purity of the destructor of the template type parameter `T`.)




Re: GC.addRange in pure function

2021-02-10 Thread Petar via Digitalmars-d-learn

On Wednesday, 10 February 2021 at 13:44:53 UTC, vit wrote:

On Wednesday, 10 February 2021 at 12:17:43 UTC, rm wrote:

On 09/02/2021 5:05, frame wrote:

On Sunday, 7 February 2021 at 14:13:18 UTC, vitamin wrote:
Why using 'new' is allowed in pure functions but calling 
GC.addRange or GC.removeRange isn't allowed?


Does 'new' violate the 'pure' paradigm? Pure functions can 
only call pure functions and GC.addRange or GC.removeRange is 
only 'nothrow @nogc'.


new allocates memory via the GC and the GC knows to scan this 
location. Seems like implicit GC.addRange.


Yes, this is my problem, if `new` can create object in pure 
function, then GC.addRange and GC.removeRange is may be pure 
too.


Can I call GC.addRange and GC.removeRange from pure function 
without problem? (using assumePure(...)() ).


TL;DR Yes, you can, but it depends on what "without problem" 
means for you :P



# The Dark Arts of practical D code
===

According to D's general approach to purity, malloc/free/GC.* are 
indeed impure as they read and write global **mutable** state, 
but are still allowed in pure functions **if encapsulated 
properly**. The encapsulation is done by @trusted wrappers which 
must be carefully audited by humans - the compiler can't help you 
with that.


The general rule that you must follow for such 
*callable-from-pure* code (technically it is labeled as `pure`, 
e.g.:


pragma(mangle, "malloc") pure @system @nogc nothrow
void* fakePureMalloc(size_t);

but I prefer to make the conceptual distinction) is that the 
effect of calling the @trusted wrapper must not drastically leak 
/ be observed.
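
For reference, the `assumePure` mentioned in the question can be built from `std.traits.SetFunctionAttributes` (this sketch follows the example in its documentation). Whether a given use of it is justified is exactly the judgment call described above:

```d
import std.traits : functionAttributes, functionLinkage, FunctionAttribute,
    SetFunctionAttributes, isFunctionPointer, isDelegate;

auto assumePure(T)(T t)
if (isFunctionPointer!T || isDelegate!T)
{
    enum attrs = functionAttributes!T | FunctionAttribute.pure_;
    return cast(SetFunctionAttributes!(T, functionLinkage!T, attrs)) t;
}

// Hypothetical wrapper: we promise that this use of addRange has no effect
// the rest of the pure code can observe. The compiler cannot check that
// promise - it is entirely on the author of this function.
void registerRange(void* p, size_t len) pure nothrow @nogc
{
    import core.memory : GC;
    (assumePure(&GC.addRange))(p, len, null);
}
```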


What "drastically" means depends on what you want `pure` to mean 
in your application. Which side-effects do you want to protect 
against by using `pure`? That is really a high-level concern that 
you as a developer must decide on when writing or using @trusted 
pure code in your program. For example, generally everyone will 
agree that network calls are impure. But what about logging? It's 
impure by definition, since it mutates a global log stream. But 
is this effect worth caring about? In some specific situations it 
may be OK to ignore. This is why in D you can call `writeln` in 
`pure` functions, as long as it's inside a `debug` block. And 
given that you as a developer decide whether to pass the `-debug` 
option to the compiler, you're essentially in control of what 
`pure` means for your codebase, at least to some extent.


100% mathematical purity is impossible even in the most strict 
functional programming language implementations, since our 
programs run on actual hardware and not on an idealized 
mathematical machine. For example, even the act of reading 
immutable data can be globally observed by measuring memory 
access times - see Spectre [1] and the other microarchitectural 
side-channel [2] vulnerabilities.


[1]: 
https://en.wikipedia.org/wiki/Spectre_(security_vulnerability)

[2]: https://en.wikipedia.org/wiki/Side-channel_attack

That said, function purity is not useless at all, quite the 
contrary. It is about making your programs more deterministic and 
easy to reason about. We all want fewer bugs in our code and less 
time spent chasing hard-to-reproduce crashes, right?


`pure` is really about limiting, containing / compartmentalizing 
and controlling the (non-deterministic) global effects in your 
program. Ideally you should strive to structure your programs as 
a pure core, driven by an imperative, impure shell. E.g. if 
you're working on an accounting application, the core is the part 
that implements the main domain / business logic and should be 
100% deterministic and pure. The imperative shell is the part 
that reads spreadsheet files, exports to PDF, etc. (only the 
actual file I/O needs to be impure - the decoding / encoding of 
data structures can be perfectly pure).



Now, back to practice and the question of memory management.

Of course, allocating memory is a globally observable effect, and 
even locally one can compare pointers, as Paul Backus mentioned, 
since D is a systems language. However, as a practical concession, 
D's concept of purity is about ensuring high-level invariants 
and so such low-level concerns can be ignored, as long as the 
codebase doesn't observe them. What does it mean to observe them? 
Here's an example:


---
void main()
{
import std.stdio : writeln;
observingLowLevelSideEffects.writeln; // `false`, but could 
be `true`

notObservingSideEffects.writeln; // always `true`
}

// BAD:
bool observingLowLevelSideEffects() pure
{
immutable a = [2];
immutable b = [2];
return a.ptr == b.ptr;
}

// OK
bool notObservingSideEffects() pure
{
immutable a = [2];
immutable b = [2];
return a == b;
}
---

`observingLowLevelSideEffects` is bad: according to the 
language rules, the compiler is free to make `a` and `b` point to 
the same immutable array, so the result of the function could be 
either `false` or `true`, depending on the compiler and the 
optimization level.

Re: Dimensions in compile time

2021-02-08 Thread Petar via Digitalmars-d-learn

On Monday, 8 February 2021 at 13:09:53 UTC, Rumbu wrote:

On Monday, 8 February 2021 at 12:19:26 UTC, Basile B. wrote:

On Monday, 8 February 2021 at 11:42:45 UTC, Vindex wrote:

size_t ndim(A)(A arr) {
return std.algorithm.count(typeid(A).to!string, '[');
}

Is there a way to find out the number of dimensions in an 
array at compile time?


yeah.

---
template dimensionCount(T)
{
static if (isArray!T)
{
static if (isMultiDimensionalArray!T)
{
alias DT = typeof(T.init[0]);
enum dimensionCount = dimensionCount!DT + 1;
}
else enum dimensionCount = 1;
}
else enum dimensionCount = 0;
}
///
unittest
{
static assert(dimensionCount!char == 0);
static assert(dimensionCount!(string[]) == 1);
static assert(dimensionCount!(int[]) == 1);
static assert(dimensionCount!(int[][]) == 2);
static assert(dimensionCount!(int[][][]) == 3);
}
---

that can be rewritten using some phobos traits too I think, 
but this piece of code is very old now, more like learner 
template.


dimensionCount!string should be 2.

My take without std.traits:

template rank(T: U[], U)
{
   enum rank = 1 + rank!U;
}

template rank(T: U[n], size_t n)
{
enum rank = 1 + rank!U;
}

template rank(T)
{
enum rank = 0;
}


Here's the version I actually wanted to write:

---
enum rank(T) = is(T : U[], U) ? 1 + rank!U : 0;
---

But it's not possible, because of 2 language limitations:
1. Ternary operator doesn't allow the different branches to be 
specialized like `static if` even if the condition is a 
compile-time constant.
2. `is()` expressions can only introduce an identifier if inside 
a `static if`.


Otherwise, I'd consider this the "idiomatic" / "typical" D 
solution, since unlike C++, D code rarely (*) overloads and 
specializes templates.


(*) Modern Phobos(-like) code.

---
template rank(T)
{
static if (is(T : U[], U))
enum rank = 1 + rank!U;
else
enum rank = 0;
}

unittest
{
static assert( rank!(char) == 0);
static assert( rank!(char[]) == 1);
static assert( rank!(string) == 1);
static assert( rank!(string[]) == 2);
static assert( rank!(string[][]) == 3);
static assert( rank!(string[][][]) == 4);
}
---


Otherwise, the shortest and cleanest solution IMO is this one:

---
enum rank(T : U[], U) = is(T : U[], U) ? 1 + rank!U : 0;
enum rank(T) = 0;

unittest
{
static assert( rank!(char) == 0);
static assert( rank!(char[]) == 1);
static assert( rank!(string) == 1);
static assert( rank!(string[]) == 2);
static assert( rank!(string[][]) == 3);
static assert( rank!(string[][][]) == 4);

static assert( rank!(char) == 0);
static assert( rank!(char[1]) == 1);
static assert( rank!(char[1][2]) == 2);
static assert( rank!(char[1][2][3]) == 3);
static assert( rank!(char[1][2][3][4]) == 4);
}
---

- Use eponymous template syntax shorthand
- Static arrays are implicitly convertible to dynamic arrays, so 
we can merge the two implementations.


Re: std.expreimantal.allocator deallocate

2021-01-24 Thread Petar via Digitalmars-d-learn

On Sunday, 24 January 2021 at 14:56:25 UTC, Paul Backus wrote:

On Sunday, 24 January 2021 at 11:00:17 UTC, vitamin wrote:
It is Ok when I call deallocate with smaller slice or I need 
track exact lengtht?


It depends on the specific allocator, but in general, it is 
only guaranteed to work correctly if the slice you pass to 
deallocate is exactly the same as the one you got from allocate.


To add to that, if an allocator defines `resolveInternalPointer` 
[0][1], you may be able to recover the original slice that was 
allocated (and then pass that to `deallocate`). However, not all 
allocators define `resolveInternalPointer`, and even those that do 
are not required to maintain complete book-keeping, as doing so 
could have bad performance implications (e.g. calling, say, 
`a.resolveInternalPointer(a.allocate(10)[3 .. 6].ptr, result)` 
can return `Ternary.unknown`).


[0]: 
https://dlang.org/phobos/std_experimental_allocator.html#.IAllocator.resolveInternalPointer
[1]: 
https://dlang.org/phobos/std_experimental_allocator_building_blocks.html


Re: std.expreimantal.allocator deallocate

2021-01-24 Thread Petar via Digitalmars-d-learn

On Sunday, 24 January 2021 at 16:16:12 UTC, vitamin wrote:

On Sunday, 24 January 2021 at 14:56:25 UTC, Paul Backus wrote:

On Sunday, 24 January 2021 at 11:00:17 UTC, vitamin wrote:
It is Ok when I call deallocate with smaller slice or I need 
track exact lengtht?


It depends on the specific allocator, but in general, it is 
only guaranteed to work correctly if the slice you pass to 
deallocate is exactly the same as the one you got from 
allocate.


thanks,
is guaranteed this:

void[] data = Allocator.allocate(data_size);
assert(data.length == data_size)


or can be data.length >= data_size ?


Yes, it is guaranteed [0]. Even though some allocator 
implementations will allocate a larger block internally to back 
your requested allocation size, `allocate` [1] must return the 
same number of bytes as you requested, or a `null` slice.
If an allocator has a non-trivial `goodAllocSize(s)` [2] function 
(i.e. one that is not the identity function `s => s`) and you 
allocate, say, N bytes while `allocator.goodAllocSize(N)` returns 
M (M > N), it means that calling `expand` [3] will most likely 
succeed - meaning it will give you the excess memory that it holds 
internally for free. I say "most likely" because this is the 
intention of the allocator building blocks spec, even though it's 
not explicitly specified. In theory, `expand` could fail in such a 
situation, either because of an allocator implementation 
deficiency (which would technically not be a bug), or because 
`allocate` was called concurrently by another thread and the 
allocator decided to give the excess space to someone else.
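
To make that concrete, here is a small sketch (the helper name is made up, and the checks for the optional primitives are deliberately defensive) of how `allocate`, `goodAllocSize` and `expand` can be combined:

```d
import std.experimental.allocator.gc_allocator : GCAllocator;
import std.traits : hasMember;

// Allocate n bytes and, if the allocator exposes the optional goodAllocSize
// and expand primitives, try to claim the internal slack in place.
void[] allocateSnug(Allocator)(ref Allocator a, size_t n)
{
    void[] data = a.allocate(n);
    if (data is null)
        return null;
    assert(data.length == n); // allocate returns exactly n bytes, never more

    static if (hasMember!(Allocator, "goodAllocSize") &&
               hasMember!(Allocator, "expand"))
    {
        const good = a.goodAllocSize(n);
        // expand may legitimately fail (e.g. another thread grabbed the
        // slack first); on success, data.length becomes `good`.
        if (good > n)
            a.expand(data, good - n);
    }
    return data;
}

unittest
{
    auto buf = allocateSnug(GCAllocator.instance, 100);
    assert(buf.length >= 100);
    GCAllocator.instance.deallocate(buf);
}
```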


[0]: 
https://dlang.org/phobos/std_experimental_allocator_building_blocks.html
[1]: 
https://dlang.org/phobos/std_experimental_allocator.html#.IAllocator.allocate
[2]: 
https://dlang.org/phobos/std_experimental_allocator.html#.IAllocator.goodAllocSize
[3]: 
https://dlang.org/phobos/std_experimental_allocator.html#.IAllocator.expand




Re: Value of type enum members

2021-01-19 Thread Petar via Digitalmars-d-learn
On Tuesday, 19 January 2021 at 20:27:30 UTC, Andrey Zherikov 
wrote:
Could someone please explain why there is a difference in 
values between compile-time and run-time?



struct T
{
int i;
this(int a) {i=a;}
}

enum TENUM : T
{
foo = T(2),
bar = T(3),
}

void main()
{
pragma(msg, TENUM.foo);// T(2)
pragma(msg, TENUM.bar);// T(3)
writeln(TENUM.foo);// foo
writeln(TENUM.bar);// bar
}


TL;DR `pragma(msg, x)` prints the value of `x`, usually as its 
base-type value cast to the enumeration type, while 
`std.conv.to!string(x)` prints the name of the enum member 
corresponding to the value of `x`.


Both `pragma(msg, ...)` and `std.conv.to!string(..)` (which is 
what `writeln(..)` uses under the hood) make somewhat arbitrary 
decisions about formatting enum members, neither of which is the 
"right" one, as there's no rule saying which is better.


In general, `std.conv.to!string(..)` tries to use a format that is 
meant to be friendly to the end-users of your program, while 
`pragma(msg, ...)` is a CT debugging tool and tries to stay 
close to the compiler's understanding of your program.


For example:

void main()
{
import std.stdio;

enum E1 { a = 1, b, c }
enum E2 { x = "4" }
enum E3 : string { y = "5" }

// 1.0 2L cast(E1)3 4 5
pragma(msg, 1.0, " ", long(2), " ", E1.c, " ", E2.x, " ", 
E3.y);


// 1 2 c x y
writeln(1.0, " ", long(2), " ", E1.c, " ", E2.x, " ", E3.y);
}


End-users generally don't care about the specific representations 
of numbers in your program, while on the other hand that's a 
crucial detail for the compiler and you can see this bias in the 
output.





Re: Git-repo-root relative path

2020-11-18 Thread Petar via Digitalmars-d-learn

On Monday, 16 November 2020 at 10:21:27 UTC, Per Nordlöw wrote:
I need a function that gets the relative path of a file in a 
Git-repo and preferrably also its status.


I'm not sure I understand the question. I have written two 
programs, hopefully one of them does what you want :D


Either via an external call to `git` or optionally via `libgit` 
(if available).


Which DUB packages do you prefer?


For such small tasks, the easiest is to just use the shell.

1st answer:

Initially I thought that you wanted to convert the current working 
directory to a path relative to the git repo root (I don't know 
why - apparently I didn't read the question well :D). 
Here's my solution to that problem:


```d
import std.exception : enforce;
import std.format : format;
import std.file : getcwd;
import std.path : asRelativePath;
import std.process : executeShell;
import std.stdio : writeln;
import std.string : stripRight;

void main()
{
    auto cwd = getcwd();
    const gitRootPathResult = executeShell("git rev-parse --show-toplevel");

    enforce(
        gitRootPathResult.status == 0,
        "`git` is not installed, or '%s' is not a git repo".format(cwd)
    );

    // Trim trailing whitespace from the shell invocation
    const gitRoot = gitRootPathResult.output.stripRight;
    debug writeln("Git root path: ", gitRoot);

    gitRoot
        .asRelativePath(getcwd())
        .writeln;
}
```

Example usage:

```
$ cd ~/code/repos/dlang/dlang/dmd/src/dmd/backend/

$ dmd -run ~/code/cwd_to_git_relative_path.d
../../..

# Sanity check:

$ dmd -debug -run ~/code/cwd_to_git_relative_path.d
Git root path: /home/zlx/code/repos/dlang/dlang/dmd
../../..

$ cd '../../..' && pwd
/home/zlx/code/repos/dlang/dlang/dmd
```

2nd answer:

Reading it a second time, I'm not sure what you meant by "gets 
the relative path of a file in a Git-repo". Did you mean that it 
receives a path to a file (absolute, or relative to the current 
working directory) and converts it to a path relative to a git 
repo? If so, here's my solution, which also covers determining 
the status of a file:


https://gist.github.com/PetarKirov/b4c8b64e7fc9bb7391901bcb541ddf3a


Re: Why is time_t defined as a 32-bit type on Windows?

2020-08-07 Thread Petar via Digitalmars-d-learn

On Friday, 7 August 2020 at 05:37:32 UTC, Andrej Mitrovic wrote:
On Wednesday, 5 August 2020 at 16:13:19 UTC, Andrej Mitrovic 
wrote:

```
C:\dev> rdmd -m64 --eval="import core.stdc.time; 
writeln(time_t.sizeof);"

4
```

According to MSDN this should not be the case:

https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/time-time32-time64?view=vs-2019

time is a wrapper for _time64 and **time_t is, by default, 
equivalent to __time64_t**.


But in Druntime it's defined as a 32-bit type: 
https://github.com/dlang/druntime/blob/349d63750d55d078426d4f433cba512625f8a3a3/src/core/sys/windows/stdc/time.d#L42


I filed it as an issue to get more eyes / feedback: 
https://issues.dlang.org/show_bug.cgi?id=21134



As far as I gather, this was changed with MSVC 2005 [0], so 
perhaps the relevant change wasn't applied to the druntime 
Windows bindings. Also keep in mind that we revamped a large 
portion of the Windows bindings in 2015 [1], whose code was based 
on MinGW IIRC.


In versions of Visual C++ and Microsoft C/C++ before Visual 
Studio 2005, time_t was a long int (32 bits) and hence could 
not be used for dates past 3:14:07 January 19, 2038, UTC. 
time_t is now equivalent to __time64_t by default, but defining 
_USE_32BIT_TIME_T changes time_t to __time32_t and forces many 
time functions to call versions that take the 32-bit time_t. 
For more information, see Standard Types and comments in the 
documentation for the individual time functions.


(^ Source [0])

[0]:
https://docs.microsoft.com/en-us/cpp/c-runtime-library/time-management?view=vs-2019
[1]: https://github.com/dlang/druntime/pull/1402

Edit: I see you're discussing core.stdc.time, which actually 
wasn't part of the changes in [1]. In any case, druntime should 
offer time_t, __time32_t, and __time64_t, and have time_t 
time() default to 64-bit. I do wonder what exactly is exported 
from UCRT as time(), as from the docs it looks like it should be 
just a macro, but if anyone has used time() on Windows (from D) 
and didn't get linker errors or memory corruption, then I suppose 
they're still defaulting it to 32-bit to avoid ABI breakage.


Re: core.thread vs std.concurrency - which of them to use?

2020-08-06 Thread Petar via Digitalmars-d-learn

On Thursday, 6 August 2020 at 01:13:28 UTC, Victor L Porton wrote:
When to use core.thread and when std.concurrency for 
multithreading in applications? Is one of them a preferred way?


Druntime's core.thread sets the foundation for D's 
multi-threading (or at least the non-betterC foundation). On top 
of it, Phobos' std.concurrency and std.parallelism provide 
higher-level abstractions.


Which ones you should use depends on your application:

* If you want to build a web application, an event loop is the way 
to go -> look at libraries like vibe-d / vibe-core / eventcore 
(ordered from high-level to low-level).


* If you want to speed up a computation, then you're likely 
looking for data-parallelism -> look into std.parallelism (see 
the sketch after this list), and only if you need more control 
should you consider core.thread


* If you need concurrency, either logical (representing different 
"processes" like web requests, AI agents in a simulation, or, say, 
simply remembering different states of a graph iteration) or 
physical (using multiple cores to do things concurrently, but not 
necessarily based on data-parallelism), look into std.concurrency.


* If you want to build a library (e.g. an event loop, a task 
system / futures / promises, reactive extensions, the actor model, 
CSP, etc.), then you need to understand how things work under the 
hood, so I'd say that reading core.thread's source code would be 
valuable.
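
Here is the minimal data-parallelism sketch referred to above (illustrative only):

```d
import std.math : sqrt;
import std.parallelism : taskPool;

void main()
{
    auto numbers = new double[](1_000_000);
    foreach (i, ref n; numbers)
        n = i;

    // The loop body is distributed across the task pool's worker threads.
    foreach (ref n; taskPool.parallel(numbers))
        n = sqrt(n);
}
```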


Cheers,
Petar


Re: Strong typing and physical units

2020-07-28 Thread Petar via Digitalmars-d-learn

On Tuesday, 28 July 2020 at 04:40:33 UTC, Cecil Ward wrote:

[snip]


By the way, I found two implementations of units of measurement in D:
https://code.dlang.org/packages/units-d
https://code.dlang.org/packages/quantities


Re: String switch is odd using betterC

2020-02-26 Thread Petar via Digitalmars-d-learn

On Wednesday, 26 February 2020 at 08:32:50 UTC, Abby wrote:

On Wednesday, 26 February 2020 at 08:25:00 UTC, Abby wrote:

Any idea why?


Ok, so this is enough to produce the same result; it seems that 
there is a problem in string switch when there are more than 6 
cases.


extern(C) void main()
{
    auto s = "F";
    final switch(s)
    {
        case "A": break;
        case "B": break;
        case "C": break;
        case "D": break;
        case "E": break;
        case "F": break;
        case "G": break;
    }
}


This looks like a possible cause:
https://github.com/dlang/druntime/blob/e018a72084e54ecc7466e97c96e4557b96b7f905/src/core/internal/switch_.d#L34



Re: What's opIndexAssign supposed to return ?

2020-02-25 Thread Petar via Digitalmars-d-learn

On Tuesday, 25 February 2020 at 11:02:40 UTC, wjoe wrote:

Lets say I've got 3 overloads of opIndexAssign:

auto opIndexAssign(T t);
auto opIndexAssign(T t, size_t i); and
auto opIndexAssign(T t, size_t[2] i);

I would assume to return what I would return with opIndex but 
I'd rather not act upon assumptions.
But if yes is it supposed to be the newly assigned values or 
the pre-assignment ones ? By value or by reference ? And if 
it's the new stuff can I just return t ?


The language manual on operator overloading didn't answer that 
question and neither did an internet search which didn't find 
any useful information. Something unrelated and a heads up 
about introducing opIndexAssign from 2004.


opIndexAssign is the operator used in the following code:

arr[1] = 8;

It returns the element at index 1 (so 8 in this case) by 
reference.

This allows you to do:

(arr[1] = 8)++;
assert(arr[1] == 9);

Whether or not you want to support this behavior in your custom 
data structure is up to you. It's perfectly valid to return the 
element by value or even return void.


Returning void from any custom assignment operator is always a 
safe choice. It's possible that some algorithms (e.g. in Phobos 
or third-party libraries) may need op*Assign to return something, 
but in that unlikely case you'll get a compile-time error, so it 
will be an easy fix.
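
For example, here is a minimal sketch of a container whose `opIndexAssign` returns the freshly assigned element by reference, mirroring the built-in behavior shown above:

```d
struct IntArray
{
    int[] data;

    ref int opIndex(size_t i) { return data[i]; }

    // Returning by reference enables the `(a[i] = v)++` pattern;
    // returning the value, or void, would be equally valid choices.
    ref int opIndexAssign(int value, size_t i)
    {
        data[i] = value;
        return data[i];
    }
}

unittest
{
    auto arr = IntArray(new int[](4));
    (arr[1] = 8)++;
    assert(arr[1] == 9);
}
```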




Re: Two problems with json and lcd

2020-02-19 Thread Petar via Digitalmars-d-learn

On Wednesday, 19 February 2020 at 08:14:34 UTC, AlphaPurned wrote:


The first is std.json. It is broke. Doesn't work with tuples. 
The change above fixes it by treating tuple as an array(same 
code). It works fine.


Can you post a minimal, but complete program that shows the 
problems with std.json regarding tuples?


If you do we could open a pull request that fixes the problem and 
also uses the code of your program as a unit test, to both 
showcase the support for tuples and also prevent regressions in 
the future.


Re: State of MIPS

2020-02-19 Thread Petar via Digitalmars-d-learn

On Wednesday, 19 February 2020 at 07:09:02 UTC, April wrote:
What's the current state of MIPS compiling for bare metal? 
Especially the R4300i processor. I see MIPS on both GDC and LDC 
"partial support/bare metal" lists but them being somewhat 
vague about it I'm not quite sure which it means and I'm sure 
by now the processors and instruction sets are different from 
what they were in 1995.


Thanks,
April.


Unfortunately, the current state is objectively unknown, as MIPS 
is not part of the architectures that we do continuous 
integration testing on. I suggest trying to run the 
compiler/druntime/phobos tests on MIPS (either real hardware, or 
emulator) to see what works at this moment. It is likely that for 
bare metal enough of the language would be stable and working 
correctly, but we can't know for sure.


You can follow the instructions to cross-compile with LDC:
https://wiki.dlang.org/Building_LDC_runtime_libraries

And for GDC:
https://wiki.dlang.org/GDC_Cross_Compiler

If you need specific help with any of those compilers, I suggest 
asking in their respective sections of the forum/newsgroup.


Re: Two problems with json and lcd

2020-02-18 Thread Petar via Digitalmars-d-learn

On Tuesday, 18 February 2020 at 18:05:43 UTC, AlphaPurned wrote:

json has two issues, it doesn't work with tuple:

(isArray!T)

goes to

(isArray!T || (T.stringof.length > 4 && T.stringof[0..5] == 
"Tuple"))


and right below

else
{
static assert(false, text(`unable to convert type 
"`, T.Stringof, `" to json`));

}

and it used Stringof.



This fixes json to work with tuples.

Second, LCD gives me the error:

error : function `Test.main.rate!(d, "", "").rate` cannot 
access frame of function `Test.main.__foreachbody1`


Not sure the problem, works fine with DMD. I'm simply accessing 
a variable outside a templated function.


I didn't understand your first point, but if I got the gist of 
your second one, the difference may be due to LDC not yet having 
implemented this:

https://github.com/ldc-developers/ldc/issues/3125


Re: How to declare a virtual member (not a function) in a class

2020-02-18 Thread Petar via Digitalmars-d-learn

On Tuesday, 18 February 2020 at 12:37:45 UTC, Adnan wrote:
I have a base class that has a couple of constant member 
variables. These variables are abstract, they will only get 
defined when the derived class gets constructed.


class Person {
const string name;
const int id;
}

class Male : Person {
this(string name = "Unnamed Male") {
static int nextID = 0;
this.id = nextID++;
this.name = name;
}
}

The compiler restricts me from assigning those two members. 
How can I get around this?


`const` members must be initialized by the same class that 
declares them. What you could do is have the abstract Person 
class declare a constructor (which would initialize the `const` 
members) and call it from derived class (such as `Male`) 
constructors by the `super(arg1, arg2)` syntax.


Alternatively, you could define `abstract` accessor functions in 
the base class and have the derived classes implement them. In D 
you can use the same syntax to call functions as if they were 
fields. (Before you had to put the @property attribute on such 
functions, but for the most part it is not necessary now.)
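
A minimal sketch of the first approach (a base-class constructor that the derived class calls via `super`):

```d
class Person
{
    const string name;
    const int id;

    // The class that declares the const members is the one that initializes them.
    this(string name, int id)
    {
        this.name = name;
        this.id = id;
    }
}

class Male : Person
{
    this(string name = "Unnamed Male")
    {
        static int nextID = 0;
        super(name, nextID++);
    }
}

unittest
{
    auto m = new Male("Bob");
    assert(m.name == "Bob" && m.id == 0);
}
```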


Re: Alternative to friend functions?

2020-02-18 Thread Petar via Digitalmars-d-learn

On Tuesday, 18 February 2020 at 12:43:22 UTC, Adnan wrote:

What is the alternative to C++'s friend functions in D?

module stable_matching;

alias FemaleID = int;
alias MaleID = int;

class Person {
string name;
int id;
}

class Male : Person {
this(string name = "Unnamed Male") {
static int nextID = 0;
this.id = nextID++;
this.name = name;
}
}

class Female : Person {
this(string name = "Unnamed Female") {
static int nextID = 0;
this.id = nextID++;
this.name = name;
}
}

class Husband(uint N) : Male {
FemaleID engagedTo = -1;
const FemaleID[N] preferences;

this(FemaleID[N] preferences) {
this.preferences = preferences;
}
}

class Wife(uint N) : Female {
FemaleID engagedTo = -1;
const MaleID[N] preferences;

this(MaleID[N] preferences) {
this.preferences = preferences;
}
}

void engage(N)(ref Wife!N, wife, ref Husband!N husband) {
// Here, I want to access both husband and wife's engaged_to
}

class MatchPool(uint N) {
Husband!N[N] husbands;
Wife!N[N] wives;
}


In D the unit of encapsulation is not the class but the module, so 
`private` only restricts access from other modules. If `engage` 
is declared in the same module as the classes, you should have no 
problems accessing their private members.


If you want to put `engage` in a different module, then you can 
use the `package` access modifier to allow all modules in a given 
package to access `package` members.
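
A minimal sketch of the point above (simplified, assumed types rather than the full example from the question):

```d
module stable_matching;

class Husband
{
    private int engagedTo = -1;
}

class Wife
{
    private int engagedTo = -1;
}

// A free function in the same module: no `friend` declaration is needed,
// because `private` in D means private to the module, not to the class.
void engage(Wife wife, Husband husband, int wifeId, int husbandId)
{
    wife.engagedTo = husbandId;
    husband.engagedTo = wifeId;
}

unittest
{
    auto w = new Wife;
    auto h = new Husband;
    engage(w, h, 0, 1);
    assert(w.engagedTo == 1 && h.engagedTo == 0);
}
```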


Re: DPP: Linker issue with functions implemented in C header files

2020-02-18 Thread Petar via Digitalmars-d-learn

On Tuesday, 18 February 2020 at 09:20:08 UTC, Andre Pany wrote:


Hi Petar,


Hi Andre, I'm happy to help :)


thank you very much for the explanation and the code sample.
Filling the az_span anonymous member was the tricky part,
I thought it would be not possible to do so, but you showed me
the trick.


I wouldn't call it a trick, I was using standard struct literal 
initialization (the very syntax that DIP1031 proposes to 
deprecate).


For example:

struct Inner { int x, y; }
struct Outer { Inner inner; }

// You can initialize Outer in various ways:

// 1)
auto o1 = Outer(Inner(1, 2));

// 2)
Outer o2 = { inner: Inner(1, 2) };

// 3)
Outer o3 = { Inner(1, 2) };

// 4)
Outer o4 = { inner: { x: 1, y: 2} };

// 5)
Outer o5 = { { x: 1, y: 2} };

// 6)
Outer o6;
o6.inner.x = 1;
o6.inner.y = 1;

For a POD (plain old data) struct like that, all six variants are 
equivalent (and of course there are more possible variations).


Since there's no `private` protection modifier in C, the only 
thing C library authors can do is make it inconvenient to access 
struct fields (by prefixing them with underscores), but they 
can't really prevent it.


For example, without this syntax, in pure C you can initialize a 
span like this:

char my_string[] = "Hey";
az_span span;
span._internal.ptr = my_string;
span._internal.length = sizeof(my_string) - 1;
span._internal.capacity = sizeof(my_string) - 1;

And with almost the same syntax you can do this in D:

string my_string = "Hey";
az_span span;
span._internal.ptr = cast(ubyte*)my_string.ptr; // note: I think 
this should be safe, because of [1]

span._internal.length = my_string.length;
span._internal.capacity = my_string.length;

It's just that the author wanted to prevent accidental bugs by 
pushing you to use the inline helper functions or macros (which 
are technically not needed).


[1]: 
https://github.com/Azure/azure-sdk-for-c/blob/25f8a0228e5f250c02e389f19d88c064c93959c1/sdk/core/core/inc/az_span.h#L22




I will do it like you have proposed but had also already created
a ticket for the Azure SDK developer:
https://github.com/Azure/azure-sdk-for-c/issues/359
There should be a more convenient way to fill a az_span 
structure.


To be honest, I don't think the authors will agree to change 
this, as putting inline functions in header files is a pretty 
common practice in both C and C++.

There are two benefits to that:
1) Potentially better performance, because the code is easier to 
inline
2) It's possible to provide header-only libraries (not the case 
here) that don't require a separate build step.


For reference, here is my dockerfile which does the DPP call 
and linking:


Cool, I'll check it later!


``` dockerfile
FROM dlang2/ldc-ubuntu:1.20.0 as ldc

RUN apt-get install -y git libssl-dev uuid-dev 
libcurl4-openssl-dev curl


RUN curl -OL 
https://cmake.org/files/v3.12/cmake-3.12.4-Linux-x86_64.sh \

&& mkdir /opt/cmake \
&& sh /cmake-3.12.4-Linux-x86_64.sh --prefix=/opt/cmake 
--skip-license \

&& ln -s /opt/cmake/bin/cmake /usr/local/bin/cmake

RUN git clone https://github.com/Azure/azure-sdk-for-c.git \
&& cd azure-sdk-for-c \
&& git submodule update --init --recursive

RUN cd azure-sdk-for-c \
&& mkdir build \
&& cd build \
&& cmake ../ \
&& make

RUN apt-get install -y clang-9 libclang-9-dev
RUN ln -s /usr/bin/clang-9 /usr/bin/clang
COPY az_storage_blobs.dpp /tmp/

RUN DFLAGS="-L=-L/usr/lib/llvm-9/lib/" dub run dpp -- --help

RUN DFLAGS="-L=-L/usr/lib/llvm-9/lib/" dub run dpp -- 
/tmp/az_storage_blobs.dpp \

--include-path /azure-sdk-for-c/sdk/core/core/inc \
--include-path /azure-sdk-for-c/sdk/core/core/internal \
--include-path /azure-sdk-for-c/sdk/storage/blobs/inc \
--include-path 
/azure-sdk-for-c/sdk/transport_policies/curl/inc \

--preprocess-only

ADD blobs_client_example.d /tmp/blobs_client_example.d
RUN  ldc2 /tmp/blobs_client_example.d /tmp/az_storage_blobs.d \
/azure-sdk-for-c/build/sdk/core/core/libaz_core.a \

/azure-sdk-for-c/build/sdk/storage/blobs/libaz_storage_blobs.a \

/azure-sdk-for-c/build/sdk/transport_policies/curl/libaz_curl.a 
\

-of=/tmp/app
```

Kind regards
André


Cheers,
Petar


Re: DPP: Linker issue with functions implemented in C header files

2020-02-18 Thread Petar via Digitalmars-d-learn

On Tuesday, 18 February 2020 at 05:41:38 UTC, Andre Pany wrote:

Hi,

I try to get wrap the "Azure SDK for C" using DPP and have 
following issue.
Functions, which are actually implemented in C header files 
will cause

linker errors:

https://github.com/Azure/azure-sdk-for-c/blob/master/sdk/core/core/inc/az_span.h#L91

Example:
AZ_NODISCARD AZ_INLINE az_span az_span_init(uint8_t * ptr, 
int32_t length, int32_t capacity) {
  return (az_span){ ._internal = { .ptr = ptr, .length = 
length, .capacity = capacity, }, };

}

Error message:
/tmp/app.o:az_storage_blobs.d:function 
_D20blobs_client_example__T19AZ_SPAN_FROM_BUFFERTG4096hZQBdFNbQnZS16az_storage_blobs7az_span: error: undefined reference to 'az_span_init'


I do not know C well, is this the expected behavior and should 
I ask the Azure SDK developers to not implement functions 
within C header files?


Kind regards
André


I think the problem is that you haven't actually linked in the 
Azure SDK C library.


Dpp translates the header declarations from C to D, but the 
actual definitions (function bodies) are not part of the process. 
The executable code for the function definitions should be inside 
either a static or dynamic library provided by the SDK.



From the project's readme file, it looks like they're using 
CMake as the build system generator (afterwards, both make and 
ninja should be valid choices for building):


mkdir build
cd build
cmake ../
make

In cases like this, it's best to check out the CMakeLists.txt 
files of the individual sub-projects, like this one:


https://github.com/Azure/azure-sdk-for-c/blob/master/sdk/core/core/CMakeLists.txt

As you can see, there are several outputs of the build process, 
among which:


- add_library(az_core ...)
This defines a library named az_core which can produce either a 
static (.a on Linux, .lib on Windows) or dynamic library file 
(.so on Linux, .dll on Windows). (If no configuration is 
specified, I think it's static by default).

So the final file name would be libaz_core.{a,so} on Linux.
For the .c files to be built, a list of include directories must 
be specified, where the various .h files are located (containing 
function and type declarations). This is done like so:

target_include_directories (az_core PUBLIC ...)
The 'PUBLIC' argument to target_include_directories specifies 
that if you want to use the library, you need to use the same 
include directories as those needed for building it.


- add_executable(az_core_test ..)
This defines an executable build output, which looks like it is 
only used for testing, so it's not interesting to us, except that 
it can serve as an example app using the az_core library.


---

So in summary, if you want to use the az_core library, you need 
to:

1. Build it
2. Run Dpp like so:

d++ \
  --include-path <the include dirs from az_core's target_include_directories> \
  <your .dpp file>


You will need to repeat the same steps for any other part of the 
Azure C SDK.





TL;DR
After I went through all those steps, I got a similar linker error 
for az_http_response_init. After looking for where the actual 
function definition is, it turned out that it's not defined in a 
.c file, but is an inline function that is part of a header file. 
Searching for az_span_init revealed the same (I could have saved 
myself some time by reading your message more carefully :D).


So, to answer your original question, the problem is that dpp 
translates only declarations, not function definitions (such as 
inline functions like that).


For now, your best course of action is to translate all inline 
function definitions by hand. Since in C inline functions are 
mostly short and simple (a better alternative to macros), 
hopefully that won't be too much work.


Also, looking at macros like AZ_SPAN_FROM_STR, there's really 
very little chance that they could be correctly translated 
automatically. The things they do are likely not valid even in 
@system D code (without additional casts), so it's better to 
write your own D functions by hand anyway.



Here's what I tried:

test.dpp:

#include 
#include 

import std.stdio;

void main()
{
char[] resp =
"HTTP/1.2 404 We removed the\tpage!\r\n" ~
"\r\n" ~
"But there is somebody. :-)".dup;
az_span response_span =
{{
ptr: cast(ubyte*)resp.ptr,
length: cast(int)resp.length,
capacity: cast(int)resp.length
}};
az_http_response response;
az_result result = az_http_response_init(&response, response_span);

writeln(result);
}

d++ --compiler ldmd2 --include-path ./inc test.dpp 
./build/libaz_core.a




Re: How to get Code.dlang.org to update the package?

2020-02-12 Thread Petar via Digitalmars-d-learn

On Wednesday, 12 February 2020 at 12:42:32 UTC, Dukc wrote:
I have pushed a new release tag in Github around two weeks ago, 
and ordered a manual update at DUB, yet DUB has still not 
aknowledged the new tag. Is there some requirement for the 
release tag for it to be recognized?


Hi Dukc,

I'm not sure which dub package you're referring to, but I'm gonna 
guess that it's this one: 
http://code.dlang.org/packages/nuklearbrowser, which corresponds 
to this github repo: https://github.com/dukc/nuklearbrowser. I 
think the problem is that your latest tag is 0.0.2, instead of 
v0.0.2 (https://github.com/dukc/nuklearbrowser/tags).


I hope this helps!

Cheers,
Petar


Re: Building for multiple platforms

2020-02-12 Thread Petar via Digitalmars-d-learn
On Wednesday, 12 February 2020 at 12:46:23 UTC, Petar Kirov 
[ZombineDev] wrote:

On Wednesday, 12 February 2020 at 08:41:25 UTC, Neils wrote:

[...]


Since your project is already on GitHub, I think the easiest 
solution would be to use GitHub Actions [1] + setup-dlang 
action [2] + upload-release-asset action [3]  to automate the 
whole process.


[1]: https://help.github.com/en/actions
[2]: https://github.com/mihails-strasuns/setup-dlang
[3]: https://github.com/actions/upload-release-asset


P.S. Your project looks quite interesting! Best of luck!


Re: Building for multiple platforms

2020-02-12 Thread Petar via Digitalmars-d-learn

On Wednesday, 12 February 2020 at 08:41:25 UTC, Neils wrote:
I maintain an open-source project written in D and I use DUB 
for building and my compiler backend is DMD. My dub.json file 
is rather simple: 
https://github.com/neilsf/XC-BASIC/blob/master/dub.json


I offer pre-built binaries for Linux x86, Linux x86_64, Windows 
and Mac OS.


I'm only doing this for a year so I am still quite a beginner 
in D and my workflow is the following when building the project:


1. Launch a VM using VirtualBox
2. dub build
3. Repeat for each platforms

The above is a painfully slow process. Is there any way to make 
it simpler and faster?


Any suggestions are warmly appreciated.


Since your project is already on GitHub, I think the easiest 
solution would be to use GitHub Actions [1] + setup-dlang action 
[2] + upload-release-asset action [3]  to automate the whole 
process.


[1]: https://help.github.com/en/actions
[2]: https://github.com/mihails-strasuns/setup-dlang
[3]: https://github.com/actions/upload-release-asset


Re: How do I fix my failed PRs?

2020-02-03 Thread Petar via Digitalmars-d-learn

On Sunday, 2 February 2020 at 08:54:02 UTC, mark wrote:
I've done quite a few small corrections/improvements to the 
D-tour's English. Almost all have been accepted.


However, four have not been accepted, apparently for technical 
reasons. But I don't understand what's wrong or what I need to 
do to fix them. (I'm not very knowledgeable about github.)


These are the ones that are held up:

https://github.com/dlang-tour/english/pull/336
https://github.com/dlang-tour/english/pull/335
https://github.com/dlang-tour/english/pull/328
https://github.com/dlang-tour/english/pull/316


Hi Mark,

I will take care of reviewing and merging all of the rest of your 
pull requests later this week.


For the most part, my process is:
1. Git checkout the pull request locally
2. Rebase its branch on top of the upstream master one
3. Fix any whitespace issues (quite easy thanks to .editorconfig)
4. Review your changes
5. Review the paragraphs as a whole
6. Change the commit message to something more descriptive, for 
example:


[chapter-name]: Change being made

Longer description...

See also: https://chris.beams.io/posts/git-commit/

7. Force-push and auto-merge.

Last week I got stuck on step 5 (reviewing the paragraphs as a 
whole) for two of the chapters (classes.md - pr #329 and 
templates.md - pr #331), as I decided that small fixes wouldn't be 
sufficient and I started rewriting a few paragraphs from scratch. 
However, I ran out of time to finish both the rewriting and 
reviewing the rest of your changes. This time I'll try to 
prioritize merging the easier PRs before going down the rabbit 
hole of rewriting chapters.


Anyway, thanks a lot for your help! I'll try to speed up the 
process on my side.


Cheers,
Petar


Re: lambda alias import

2020-01-17 Thread Petar via Digitalmars-d-learn
On Friday, 17 January 2020 at 23:04:57 UTC, Petar Kirov 
[ZombineDev] wrote:

[..]


*If* you compile both modules ..


[..]





Re: lambda alias import

2020-01-17 Thread Petar via Digitalmars-d-learn

On Friday, 17 January 2020 at 21:40:05 UTC, JN wrote:

stuff.d:

alias doStuff = () {};

main.d:

import stuff;

void main()
{
  doStuff();
}


DMD throws compile error:

 Error 42: Symbol Undefined __D5stuff9__lambda3FNaNbNiNfZv

Is this expected behavior? It tripped me while trying to use 
DerelictVulkan :(


I think the problem comes from the way you compile and link your 
code. If you compile both modules together like this, it should 
work out:


dmd -ofresult main.d stuff.d

(I'm on the phone, so I can't verify if it works atm)


Re: Information about the 'magic' field in object.Object class

2020-01-16 Thread Petar via Digitalmars-d-learn

On Thursday, 16 January 2020 at 14:32:24 UTC, Adam D. Ruppe wrote:

On Thursday, 16 January 2020 at 14:30:04 UTC, realhet wrote:

Is there a documentation about that 'magic' field?


I'm pretty sure the only fields in there are pointer to vtable 
and pointer to monitor object...


I have a really small object, only 32 bytes. At this point if 
I want to add a flag bit I have 3 choices:


Do you need virtual functions? If not, you could probably just 
make a struct instead.


Alternatively, the class can be marked as extern(C++).
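
As a rough illustration (a sketch, not code from the original 
post) of why that helps:

```d
import std.stdio : writeln;

class DClass { int flag; }                 // __vptr + __monitor + payload
extern (C++) class CppClass { int flag; }  // __vptr + payload only

void main()
{
    // On a 64-bit target the second size is typically one pointer
    // (8 bytes) smaller, because extern(C++) classes have no monitor.
    writeln(__traits(classInstanceSize, DClass));
    writeln(__traits(classInstanceSize, CppClass));
}
```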


Re: Mapping float to ulong in CTFE

2019-12-12 Thread Petar via Digitalmars-d-learn

On Thursday, 12 December 2019 at 19:21:22 UTC, berni44 wrote:
Is it possible to get to the bits of a float in CTFE? I tried 
the following, but this doesn't work:


```
import std.stdio;

union FloatBits
{
float floatValue;
ulong ulongValue;
}

ulong test(float f)
{
FloatBits fb;
fb.floatValue = f;
return fb.ulongValue;
}

void main()
{
static assert(test(3.0) == 1077936128);
}
```

test.d(13): Error: reinterpretation through overlapped field 
ulongValue is not allowed in CTFE

test.d(18):called from here: test(3.0F)
test.d(18):while evaluating: static 
assert(test(3.0F) == 1077936128LU)


You can use a C-style pointer reinterpret cast like this:

uint test(float f) { return *cast(uint*) &f; }

Make sure that source and destination types have the same size.

Or more generally:

IntegerOfSize!T bitRepresentation(T)(T f)
{
return *cast(IntegerOfSize!T*) &f;
}

pragma (msg, bitRepresentation(3.0f)); // 1077936128u
pragma (msg, bitRepresentation(3.0));  // 4613937818241073152LU

void main()
{
static assert(bitRepresentation(3.0f) == 1077936128);
}

template IntegerOfSize(T)
{
static if (T.sizeof == 1)
alias IntegerOfSize = ubyte;
else static if (T.sizeof == 2)
alias IntegerOfSize = ushort;
else static if (T.sizeof == 4)
alias IntegerOfSize = uint;
else static if (T.sizeof == 8)
alias IntegerOfSize = ulong;
else
static assert(0);
}


Re: std.format range with compound format specifiers?

2019-11-19 Thread Petar via Digitalmars-d-learn
On Tuesday, 19 November 2019 at 21:50:08 UTC, Steven 
Schveighoffer wrote:
I know I can format a range with a format string that contains 
%(%s, %). And this results in a nice comma separated list for 
each item.


But what about an item that has a not-so-cookie-cutter format? 
Like for instance a name/value field:


struct NV
{
  string name;
  int value;
}

If I want to print one of these, I can do:

format("%s: %s", nv.name, nv.value);

If I wanted to print a range of these, let's say:

auto arr = [NV("Steve", 1), NV("George", 500), NV("Adam", -5)];

How can I have it come out like:

Steve: 1, George: 500, Adam: -5

Do I have to define a toString method in the NV struct? Is 
there not another way besides doing this?


-Steve


In cases where I have some aggregate data, but I don't feel like 
writing a custom toString method, I often wrap the data in a 
Tuple and use its [1] %(inner%) or %(inner%|sep%) format 
specifiers. Here's an example:


import std;
void main()
{
{
alias NV = tuple;
auto arr = [NV("Steve", 1), NV("George", 500), NV("Adam", 
-5)];

writefln("%(%(%s: %s%), %)", arr);
}

{
static struct NV
{
string name;
int value;
}
auto arr = [NV("Steve", 1), NV("George", 500), NV("Adam", 
-5)];
writefln("%(%(%s: %s%), %)", arr.map!(obj => 
obj.tupleof.tuple));

}
}

In this case, from outside to inside, I am first formatting the 
range and then for each tuple I am formatting its fields one by 
one.


If, for example, I want to format a tuple with 3 doubles, each 
one of them with a different number of digits after the decimal 
point, I could do:

"%(%.1f %.2f %.3f%)".writefln(tuple(1.5, 1.25, 1.125));

If on the other hand I want to format all tuple elements the 
same, I would use this scheme:

"%(%.1f%| %)".writefln(tuple(1.5, 1.25, 1.125));

I think we should extend std.format with support for using the 
same tuple formatting specifier as std.typecons.Tuple, but for 
structs and possibly classes, as I find it quite useful.


[1]: https://dlang.org/phobos/std_typecons#.Tuple.toString


Re: CI: Why Travis & Circle

2019-11-16 Thread Petar via Digitalmars-d-learn

On Thursday, 14 November 2019 at 17:32:27 UTC, jmh530 wrote:

On Thursday, 14 November 2019 at 17:06:36 UTC, Andre Pany wrote:

[snip]

With the public availability of Github Actions I highly 
recommend it if you have open source project on Github. If is 
free and works well with D and Dub.


Kind regards
Andre


I'm not that familiar with Github Actions, but I should get 
more familiar with it.


But my broader question is why both? Don't they both do largely 
the same things?


I was motivated to ask this by looking at the mir repositories, 
which have both.

https://github.com/libmir/mir


Most likely the reason is parallelism. Every CI service offers a 
limited number of agents that can run in parallel, which limits 
the number of test matrix combinations that you can run in a 
reasonable amount of time. For example, many of the major D 
projects are tested across different OSes and several versions of 
D compilers. Additionally some CIs are faster than others. In my 
experience CircleCI is faster than TravisCI by a large margin.


Another reason is the different types of CI agents. Traditionally 
Travis CI was the first CI service that offered macOS agents for 
free for open-source projects. Now they have experimental Windows 
support as well. On the other hand Travis uses VMs which are 
often noticeably slower to boot than containers. (I think they 
also offered containers for a while, but for some reason they 
deprecated this offering).
CircleCI, on the other hand, has really good support for 
containers, and in my experience they start in about 1-2s after 
a commit is pushed (whereas VMs can take anywhere between 
20-60s).


As Andre mentioned, GitHub Actions is a new player in the space, 
which offers an extremely generous amount of compute for public 
repos, courtesy of Microsoft Azure.
(Actually I think they offer exactly the same for public repos as 
what Azure Pipelines offers). However, GitHub Actions has the 
most catching up to do, if you exclude their raw compute power. 
Their UI is extremely bare-bones and its configurability lags 
behind CircleCI, SemaphoreCI and even Azure Pipelines. That 
said, for most D projects what they offer should be 
a perfect fit.


As for Mir specifically, just check what commands are run by 
circle.yml and .travis.yml and you'll be able to understand 
whether there is actually a need for using multiple CI services:

https://github.com/libmir/mir/blob/master/circle.yml
https://github.com/libmir/mir/blob/master/.travis.yml

I may be wrong, but it seems that they should be able to 
consolidate their testing into just one CI service. However, I 
would recommend holding off on GitHub Actions for now and going 
with Azure Pipelines instead, as GitHub's offering is not very 
stable yet. For example, two days ago, due to some change on 
their side, their CI completely stopped working on a public repo 
of mine. After researching, I found that other people had 
similar issues (e.g. [1]), but fortunately, after contacting 
their support, the issue was resolved a day later. In 
comparison, Circle CI, Semaphore CI and Azure Pipelines have 
been much more rock-solid in my experience.


[1]: 
https://github.community/t5/GitHub-Actions/GitHub-Actions-workflows-can-t-be-executed-on-this-repository/td-p/38153


Re: Meson build system user new to D.

2019-05-07 Thread Petar via Digitalmars-d-learn

On Monday, 6 May 2019 at 19:52:23 UTC, Mike Brockus wrote:
Hello everyone I am a Meson build system user and I am new to 
the D language, just wondering if there are compiler flags that 
I should add, unit testing frameworks, any good practices I can 
follow and or anything like that also some resources would be 
helpful thanks. (:


Perhaps a good place to start is examining existing D projects 
that use Meson - for example mir-algorithm: 
https://github.com/libmir/mir-algorithm/blob/master/meson.build.


Re: Design by Introspection - Looking for examples

2019-01-14 Thread Petar via Digitalmars-d-learn

On Tuesday, 15 January 2019 at 00:42:37 UTC, Tony A wrote:
Hi, I just watched Andrei's talk about Design by Introspection 
and from what I see this is used in D.


Could anyone point out some good Github examples that I can see 
this in action and the benefits?


Thanks.


Basically, look for `static if`s in Phobos. A couple of rich 
modules/packages:

https://github.com/dlang/phobos/tree/master/std/algorithm
https://github.com/dlang/phobos/blob/master/std/range/package.d
https://github.com/dlang/phobos/blob/master/std/typecons.d
https://github.com/dlang/phobos/tree/master/std/experimental/allocator
https://github.com/dlang/phobos/blob/master/std/experimental/checkedint.d

In particular, (just from the top of my head) I think that 
`std.range.retro` is nice canonical example:

https://github.com/dlang/phobos/blob/23f600ac78591391f7009beb1367fb97bf65496c/std/range/package.d#L256

* The `retro` function takes a range argument and returns the 
elements of the range in reverse order. For example [1, 2, 3] 
would become [3, 2, 1].


* The main point of all range algorithms in `std.range` is that 
they're lazy, which means that when possible they will attempt to 
process elements one by one, instead of eagerly performing the 
whole algorithm at once. The way this is achieved is by wrapping 
the input in a struct that encapsulates the traversal and allows 
the caller to call the range primitives on the object returned by 
the range algorithm whenever they want.


* So almost all range functions return structs that have somewhat 
different capabilities. For example the range returned by `[1, 2, 
3].map!(x => x * 2)` has random access (you can immediately 
access any element, without needing to evaluate `x => x * 2` for 
any previous element), while `someArray.filter!(x => x % 2)` 
doesn't have random access, as in general you don't know which 
elements of `someArray` satisfy the `x => x % 2` condition.


So the point of Design by Introspection in the `retro` example 
is that it allows it to return a range that best matches the 
capabilities of its argument.
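
For illustration, here's a rough sketch of the same idea (my own 
toy example, not the actual Phobos implementation of `retro`): a 
lazy wrapper that doubles each element and uses `static if` to 
expose only the capabilities its source range actually has.

```d
import std.range.primitives : isInputRange, isBidirectionalRange, hasLength;

auto doubled(Range)(Range r)
    if (isInputRange!Range)
{
    static struct Doubled
    {
        Range source;

        // Input-range primitives are always available.
        bool empty() { return source.empty; }
        auto front() { return source.front * 2; }
        void popFront() { source.popFront(); }

        // Expose back/popBack only if the source range supports them.
        static if (isBidirectionalRange!Range)
        {
            auto back() { return source.back * 2; }
            void popBack() { source.popBack(); }
        }

        // Expose length only if the source range knows its length.
        static if (hasLength!Range)
            auto length() { return source.length; }
    }
    return Doubled(r);
}

void main()
{
    import std.algorithm.comparison : equal;

    auto d = doubled([1, 2, 3]);
    assert(d.length == 3);        // arrays have a length
    assert(d.back == 6);          // arrays are bidirectional
    assert(equal(d, [2, 4, 6]));
}
```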


Re: Dub and it's build directory

2018-08-21 Thread Petar via Digitalmars-d-learn

On Monday, 20 August 2018 at 18:59:23 UTC, Russel Winder wrote:

Hi,

[...]


Hi Russel,


So the questions are:

1. How does Dub find the compiler version number, in this case 
2.081, given that neither DMD nor LDC seems to have a way of 
delivering only the version number?


The __VERSION__ [1] special token gives you an integer like 2081 
at compile-time. For DMD, LDC and GDC this is the version of the 
DMD frontend they are based on. Obviously, for other compilers 
(not based on the DMD frontend, like SDC) there would be no 
equivalent integer, but since there are no other mature 
alternative D implementations this is not a problem in practice. 
Since the __VERSION__ token is replaced with an integer literal 
at compile-time, it can be used for things like conditional 
compilation (e.g. when you want to support multiple versions of 
a dmd+druntime+phobos release, for example: [2]).
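
For instance (a small sketch of the idea, not code from dub or 
from the projects linked above):

```d
import std.stdio : writeln;

void main()
{
    // __VERSION__ is substituted with an integer literal at compile
    // time, so it can drive conditional compilation across releases.
    static if (__VERSION__ >= 2081)
        writeln("built with a 2.081 or newer front end");
    else
        writeln("built with an older front end");
}
```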


What Dub does is what it calls "platform probing" [3]. It creates 
a temporary .d file containing various `pragma (msg, ..)` 
statements that output information to stderr during compilation. 
Of course the question is then: which compiler is used to compile 
the platform probe file? AFAICS, it uses either the one 
requested on the command line (via --compiler=...) or one chosen 
via an internal heuristic, which is a bit involved [4].
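
Conceptually, such a probe file boils down to something like 
this (a toy sketch, not the file dub actually generates):

```d
// Each pragma(msg, ...) is printed to stderr while compiling, so the
// caller can parse the compiler's output to learn about the target.
pragma(msg, "front-end: ", __VERSION__);

version (Windows)    pragma(msg, "platform: windows");
else version (linux) pragma(msg, "platform: linux");
else version (OSX)   pragma(msg, "platform: osx");
else                 pragma(msg, "platform: other");

version (X86_64)     pragma(msg, "arch: x86_64");
else version (X86)   pragma(msg, "arch: x86");
else                 pragma(msg, "arch: other");

void main() {}
```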



2. What is the pseudo-random number, and how is it calculated?


It's an MD5 hash of all the various parameters of a dub build 
(aka the build settings). It's calculated here: [5].


Perhaps some of the information is documented somewhere (if not 
it should be), but I found it easier to search through the code.


Petar

[1]: https://dlang.org/spec/lex.html#specialtokens
[2]: 
https://github.com/dlang/dub/blob/520d527fb11811c8f60b29a0ad15e6f48cf9f9d0/source/dub/internal/utils.d#L263
[3]: 
https://github.com/dlang/dub/blob/1ca0ad263bb364f66f71642152420dd1dce43ce2/source/dub/compilers/compiler.d#L119
[4]: 
https://github.com/dlang/dub/blob/1ca0ad263bb364f66f71642152420dd1dce43ce2/source/dub/dub.d#L1296
[5]: 
https://github.com/dlang/dub/blob/1ca0ad263bb364f66f71642152420dd1dce43ce2/source/dub/generators/build.d#L320


Re: is this a betterC bug ?

2018-08-15 Thread Petar via Digitalmars-d-learn
On Wednesday, 15 August 2018 at 08:14:53 UTC, Petar Kirov 
[ZombineDev] wrote:


https://run.dlang.io/is/iD9ydu


I made a typo in one of the comments. Here's the fixed version:
https://run.dlang.io/is/HRqYcZ


Re: is this a betterC bug ?

2018-08-15 Thread Petar via Digitalmars-d-learn

On Tuesday, 14 August 2018 at 17:49:32 UTC, Mike Franklin wrote:

On Tuesday, 14 August 2018 at 17:22:42 UTC, Seb wrote:

FYI: staticArray will be part of 2.082, it already works with 
dmd-nightly:


That just seems wrong.  Isn't the fact that `staticArray` is 
needed a bug in the compiler?  I think the compiler could have 
lowered to something like that automatically to avoid the 
library workaround.


Mike


It's not a bug, it's all about how the type system is set up. The 
type of an array literal expression like `[1, 2, 3]` is `int[]` 
(a slice of an array of ints), so no matter if you do:


auto readonly(T)(const(T)[] x) { return x; }

auto arr1 = [1, 2, 3];
auto arr2 = [1, 2, 3].readonly;
const  arr3 = [1, 2, 3];
enum arr4 = [1, 2, 3];
static immutable arr5 = [1, 2, 3];
scope  arr6 = [1, 2, 3];

In all instances the type will be `int[]` modulo type qualifiers.

Static arrays are completely different types that just happen to 
accept assignments from slices. Their two defining properties are:
1. Their length is fixed at compile-time, meaning that you can do:

import std.array, std.meta;
auto x = [1, 2, 3, 4, 5].staticArray;
enum length = x.length;
pragma (msg, length);
alias seq = AliasSeq!(0, 42, length);
static foreach (i; 0 .. length) { }
static foreach (i; seq) { }


2. Where slices are reference types, static arrays are value 
types which means that each assignment will copy an entire array.


Basically they behave like a `struct { int _arr_0 = 0, _arr_1 = 
1, _arr_2 = 2; }`.
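
A tiny illustration of the difference in assignment semantics 
(my own example):

```d
void main()
{
    int[] slice = [1, 2, 3];   // refers to GC-allocated storage
    int[] view  = slice;       // copies only the (length, pointer) pair
    view[0] = 42;
    assert(slice[0] == 42);    // both slices see the same data

    int[3] fixed = [1, 2, 3];  // the three ints live inside `fixed` itself
    int[3] copy  = fixed;      // copies all three elements
    copy[0] = 42;
    assert(fixed[0] == 1);     // the original is untouched
}
```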


https://run.dlang.io/is/iD9ydu


Re: dtoh

2018-08-12 Thread Petar via Digitalmars-d-learn
On Tuesday, 7 August 2018 at 12:46:31 UTC, Steven Schveighoffer 
wrote:

On 8/7/18 6:08 AM, bauss wrote:

On Monday, 6 August 2018 at 13:28:05 UTC, Laeeth Isharc wrote:

Hi Walter.

Can dtoh be open-sourced now that dmd is?




https://github.com/adamdruppe/dtoh

I might be confused, but it seems like it is there.


I think he meant htod: https://dlang.org/htod.html

Which I believe uses some of the dmc source.

-Steve


For the record, I think this is the source code for htod:
https://github.com/DigitalMars/Compiler/blob/master/dm/src/dmc/htod.d


Re: Storing Formatted Array Value

2017-11-28 Thread Petar via Digitalmars-d-learn

On Wednesday, 29 November 2017 at 07:08:12 UTC, Vino wrote:

Hi All,

Request your help: with the below code I am able to print the 
value of the array without brackets, but can someone help me 
on how to store this output in a variable?


Program:
import std.stdio;
import std.container;

void main()
{
   auto test = Array!string("Test1", "Test2");
   writefln("%(%s, %)",test[]); // output : "Test1", "Test2"

   // Similar like this
   auto res = `writefln("%(%s, %)",test[])`;
   writeln(res);
}

Output of res should look like: "Test1", "Test2" (without [] 
brackets).



From,
Vino.B


https://dlang.org/phobos/std_format is your friend:

// https://run.dlang.io/is/sUkOX0

import std.container;
import std.format;
import std.stdio;

void main()
{
   auto test = Array!string("Test1", "Test2");
   writefln("%(%s, %)",test[]);

   // Similar like this
   auto res = "%(%s, %)".format(test[]);
   writeln(res);  // "Test1", "Test2"  (elements quoted)

   auto res2 = "%-(%s, %)".format(test[]);
   writeln(res2); // Test1, Test2  (the - flag drops the quotes)
}


Re: dmd/ldc failed with exit code -11

2017-11-22 Thread Petar via Digitalmars-d-learn

On Wednesday, 22 November 2017 at 15:33:46 UTC, Anonymouse wrote:

On Tuesday, 21 November 2017 at 19:22:47 UTC, Anonymouse wrote:
Compiling a debug dmd and running the build command in gdb, it 
seems to be a stack overflow at ddmd/dtemplate.d:6241, 
TemplateInstance::needsCodegen().


After a lot of trial and error I managed to find /a/ line which 
lets it compile under -b plain and release.


void colour(Sink, Codes...)(auto ref Sink sink, const Codes codes)
{
// Sink is a LockingTextWriter or an Appender!string
// Codes is a tuple of named enum members

foreach (const code; codes)
{
import std.conv : to;

if (++numCodes > 1) sink.put(';');

sink.put((cast(size_t)code).to!string);  // <--
}

Change size_t to uint and it compiles, keep it size_t and the 
compiler segfaults. Tested on two machines, both running 
up-to-date Arch linux, both with dmd and ldc.


The bug is too ephemeral to reduce well, if a thing like order 
of arguments matters.


If this is an emergent property of the rest of the program, and 
the size_t merely fells the house of cards, is it even worth 
reporting when I can't reduce it?


You did a good investigation and I still think it's important to 
report it.


I managed to find a few other cases where people were having 
issues with needsCodegen:


https://github.com/ldc-developers/ldc/issues/2168#issuecomment-312709632
https://github.com/ldc-developers/ldc/issues/2336
https://github.com/ldc-developers/ldc/issues/2022#issuecomment-288481397
https://github.com/ldc-developers/ldc/issues/1297#issuecomment-184770787

So there's enough evidence that there's a bug somewhere around 
that part of the compiler and we should gather good test cases to 
narrow down the problem.


Re: dmd/ldc failed with exit code -11

2017-11-21 Thread Petar via Digitalmars-d-learn
On Tuesday, 21 November 2017 at 10:10:59 UTC, Petar Kirov 
[ZombineDev] wrote:

On Tuesday, 21 November 2017 at 00:15:04 UTC, Anonymouse wrote:
I have a large named enum (currently 645 members) of IRC event 
types. It's big by necessity[1].


I'm using dub, and both dmd and ldc successfully build it in 
test and debug modes, but choke and die on plain and release. 
I bisected it down to when I did a big addition to the enum to 
encompass virtually all event types there are.




Try using https://github.com/CyberShadow/DustMite/wiki to 
obtain a minimal test case which reproduces the issue and file 
bug report(s).


This tool can actually be used straight from dub itself:
http://code.dlang.org/docs/commandline#dustmite


Re: dmd/ldc failed with exit code -11

2017-11-21 Thread Petar via Digitalmars-d-learn

On Tuesday, 21 November 2017 at 00:15:04 UTC, Anonymouse wrote:
I have a large named enum (currently 645 members) of IRC event 
types. It's big by necessity[1].


I'm using dub, and both dmd and ldc successfully build it in 
test and debug modes, but choke and die on plain and release. I 
bisected it down to when I did a big addition to the enum to 
encompass virtually all event types there are.




Try using https://github.com/CyberShadow/DustMite/wiki to obtain 
a minimal test case which reproduces the issue and file bug 
report(s).


Re: DMD test suite assertion failure in test_cdvecfill.d

2017-11-21 Thread Petar via Digitalmars-d-learn
On Tuesday, 21 November 2017 at 07:02:20 UTC, Michael V. Franklin 
wrote:

I'm getting this error when I try to run the DMD test suite.

cd dmd
make -C test -f Makefile

[...]


I don't think you're doing anything wrong, though you shouldn't 
be getting that error. File a bug report and try contacting 
Martin Nowak, as he's the author of this test, IIRC.


Re: Turn a float into a value between 0 and 1 (inclusive)?

2017-11-21 Thread Petar via Digitalmars-d-learn
On Tuesday, 21 November 2017 at 09:53:33 UTC, Petar Kirov 
[ZombineDev] wrote:
On Tuesday, 21 November 2017 at 09:21:29 UTC, Chirs Forest 
wrote:
I'm interpolating some values and I need to make an 
(elapsed_time/duration) value a float between 0 and 1 
(inclusive of 0 and 1). The elapsed_time might be more than 
the duration, and in some cases might be 0 or less. What's the 
most efficient way to cap out of bounds values to 0 and 1? I 
can do a check and cap them manually, but if I'm doing a lot 
of these operations I'd like to pick the least resource 
intensive way.


Also, if I wanted out of bounds values to wrap (2.5 becomes 
0.5) how would I get that value and also have 1.0 not give me 
0.0?


Is this what you're looking for:
https://www.desmos.com/calculator/8iytgsr3y3

In which case that would be simply:
T zeroToOne(T)(T val) { return val % 1; }


The problem, as you can see in the plot, is that 0.9(9) remains 
0.9(9); however, 1.0 and 1.001 become 0.0 and 0.001 
respectively, and there's no way around this due to the inherent 
discontinuity of the function.
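
For completeness, a quick check of that behaviour (same 
one-liner as above, illustrative only):

```d
import std.stdio : writeln;

T zeroToOne(T)(T val) { return val % 1; }

void main()
{
    writeln(zeroToOne(2.5));   // 0.5   -- wraps as requested
    writeln(zeroToOne(0.75));  // 0.75  -- already in range
    writeln(zeroToOne(1.0));   // 0     -- the discontinuity at 1.0
    writeln(zeroToOne(-0.25)); // -0.25 -- negative inputs stay negative
}
```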


Re: Turn a float into a value between 0 and 1 (inclusive)?

2017-11-21 Thread Petar via Digitalmars-d-learn

On Tuesday, 21 November 2017 at 09:21:29 UTC, Chirs Forest wrote:
I'm interpolating some values and I need to make an 
(elapsed_time/duration) value a float between 0 and 1 
(inclusive of 0 and 1). The elapsed_time might be more than the 
duration, and in some cases might be 0 or less. What's the most 
efficient way to cap out of bounds values to 0 and 1? I can do 
a check and cap them manually, but if I'm doing a lot of these 
operations I'd like to pick the least resource intensive way.


Also, if I wanted out of bounds values to wrap (2.5 becomes 
0.5) how would I get that value and also have 1.0 not give me 
0.0?


Is this what you're looking for:
https://www.desmos.com/calculator/8iytgsr3y3

In which case that would be simply:
T zeroToOne(T)(T val) { return val % 1; }


Re: Inference of GC allocation scope

2017-11-16 Thread Petar via Digitalmars-d-learn

On Thursday, 16 November 2017 at 17:29:56 UTC, Nordlöw wrote:
Are there any plans on a compiler pass that finds scoped 
GC-allocations and makes their destructors deterministic 
similar to D's struct scope behaviour?


https://github.com/ldc-developers/ldc/blob/master/gen/passes/GarbageCollect2Stack.cpp


Re: How you guys go about -BetterC Multithreading?

2017-11-10 Thread Petar via Digitalmars-d-learn
On Thursday, 9 November 2017 at 19:42:55 UTC, Jacob Carlborg 
wrote:

On 2017-11-09 17:52, Petar Kirov [ZombineDev] wrote:

Thanks for reminding me, I keep forgetting that it should just 
work (minus initialization?).


What do you mean "initialization"?


static constructors


Re: How you guys go about -BetterC Multithreading?

2017-11-10 Thread Petar via Digitalmars-d-learn
On Friday, 10 November 2017 at 11:55:57 UTC, Guillaume Piolat 
wrote:
On Thursday, 9 November 2017 at 16:00:36 UTC, Petar Kirov 
[ZombineDev] wrote:


In short, the cost / benefit of going all the way 
version(D_BetterC) is incredibly poor for regular 
applications, as you end up a bit more limited than with 
modern C++ (> 11) for prototyping. For example, even writers 
of D real-time audio plugins don't go as far.


For now we do have some @nogc alternatives for mutex, condition 
variables, thread-pool, file reading, etc... (dplug:core 
package) for use with the runtime disabled - the middle ground 
that's way more usable than -betterC. They may, or not, be 
applicable to -betterC.


Interesting, thanks for the info. It looks like you made good 
progress since the last time I read about your project (quite a 
while ago). I'll be sure to check what you have in dplug:core ;)


Re: How you guys go about -BetterC Multithreading?

2017-11-09 Thread Petar via Digitalmars-d-learn
On Thursday, 9 November 2017 at 16:08:20 UTC, Jacob Carlborg 
wrote:

On 2017-11-09 13:19, Petar Kirov [ZombineDev] wrote:

Though you need to be extra careful not to use thread-local 
storage


I think TLS should work, it's the OS that handles TLS, not 
druntime.


Thanks for reminding me, I keep forgetting that it should just 
work (minus initialization?).


Re: How you guys go about -BetterC Multithreading?

2017-11-09 Thread Petar via Digitalmars-d-learn

On Thursday, 9 November 2017 at 13:00:15 UTC, ParticlePeter wrote:
On Thursday, 9 November 2017 at 12:19:00 UTC, Petar Kirov 
[ZombineDev] wrote:
On Thursday, 9 November 2017 at 11:08:21 UTC, ParticlePeter 
wrote:

Any experience reports or general suggestions?
I've used only D threads so far.


It would be far easier if you use druntime + @nogc and/or 
de-register latency-sensitive threads from druntime [1], so 
they're not interrupted even if some other thread calls the 
GC. Probably the path of least resistance is to call [2] and 
queue @nogc tasks on [3].


If you really want to pursue the version(D_BetterC) route, 
then you're essentially on your own to use the threading 
facilities provided by your target OS, e.g.:


https://linux.die.net/man/3/pthread_create
https://msdn.microsoft.com/en-us/library/windows/desktop/ms682516(v=vs.85).aspx

Though you need to be extra careful not to use thread-local 
storage (e.g. only shared static and __gshared) and not to 
rely on (shared) static {con|de}structors, dynamic arrays, 
associative arrays, exceptions, classes, RAII, etc., which is 
really not worth it, unless you're writing very low-level code 
(e.g. OS kernels and drivers).


[1]: https://dlang.org/phobos/core_thread#.thread_detachThis
[2]: https://dlang.org/phobos/core_memory#.GC.disable
[3]: https://dlang.org/phobos/std_parallelism#.taskPool


Forgot to mention, I'll try this first, I think it's a good 
first step towards -BetterC usage. But in the end I want to see 
how far I can get with the -BetterC feature.


In short, the cost / benefit of going all the way 
version(D_BetterC) is incredibly poor for regular applications, 
as you end up a bit more limited than with modern C++ (> 11) for 
prototyping. For example, even writers of D real-time audio 
plugins don't go as far.
If you're writing libraries, especially math-heavy template code, 
CTFE and generic code in general, then version(D_BetterC) is a 
useful tool for verifying that your library doesn't need 
unnecessary dependencies preventing it from being trivially 
integrated in foreign language environments.


Well if you like generic code as much as I do, you can surely do 
great with version(D_BetterC) even for application code, but you 
would have to make almost every non-builtin type that you use in 
your code a template parameter (or alternatively an extern 
(C++/COM) interface if that works in -betterC), so you can easily 
swap druntime/phobos-based implementations for your custom ones, 
one by one, but I guess few people would be interested in 
following this path.


Re: How you guys go about -BetterC Multithreading?

2017-11-09 Thread Petar via Digitalmars-d-learn
On Thursday, 9 November 2017 at 12:30:49 UTC, rikki cattermole 
wrote:

On 09/11/2017 12:19 PM, Petar Kirov [ZombineDev] wrote:
On Thursday, 9 November 2017 at 11:08:21 UTC, ParticlePeter 
wrote:

Any experience reports or general suggestions?
I've used only D threads so far.


It would be far easier if you use druntime + @nogc and/or 
de-register latency-sensitive threads from druntime [1], so 
they're not interrupted even if some other thread calls the 
GC. Probably the path of least resistance is to call [2] and 
queue @nogc tasks on [3].


If you really want to pursue the version(D_BetterC) route, 
then you're essentially on your own to use the threading 
facilities provided by your target OS, e.g.:


https://linux.die.net/man/3/pthread_create
https://msdn.microsoft.com/en-us/library/windows/desktop/ms682516(v=vs.85).aspx


You can use a library like libuv to handle threads 
(non-language based TLS too, not sure that it can be tied in 
unfortunately).


Yeah, any cross-platform thread-pool / event-loop library with a 
C interface should obviously be preferred over manual use of raw 
thread primitives.


Essentially, try to follow Sean Parent's advice on "No 
Raw/Incidental *":

https://www.youtube.com/watch?v=zULU6Hhp42w


Re: How you guys go about -BetterC Multithreading?

2017-11-09 Thread Petar via Digitalmars-d-learn

On Thursday, 9 November 2017 at 11:08:21 UTC, ParticlePeter wrote:

Any experience reports or general suggestions?
I've used only D threads so far.


It would be far easier if you use druntime + @nogc and/or 
de-register latency-sensitive threads from druntime [1], so 
they're not interrupted even if some other thread calls the GC. 
Probably the path of least resistance is to call [2] and queue 
@nogc tasks on [3].


If you really want to pursue the version(D_BetterC) route, then 
you're essentially on your own to use the threading facilities 
provided by your target OS, e.g.:


https://linux.die.net/man/3/pthread_create
https://msdn.microsoft.com/en-us/library/windows/desktop/ms682516(v=vs.85).aspx

Though you need to be extra careful not to use thread-local 
storage (e.g. only shared static and __gshared) and not to rely 
on (shared) static {con|de}structors, dynamic arrays, associative 
arrays, exceptions, classes, RAII, etc., which is really not 
worth it, unless you're writing very low-level code (e.g. OS 
kernels and drivers).


[1]: https://dlang.org/phobos/core_thread#.thread_detachThis
[2]: https://dlang.org/phobos/core_memory#.GC.disable
[3]: https://dlang.org/phobos/std_parallelism#.taskPool
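
A rough sketch of the first route (druntime + @nogc); the names 
are made up for illustration:

```d
import core.memory : GC;
import core.thread : Thread;

// Keep the latency-sensitive path free of GC allocations.
void latencySensitiveWork() @nogc nothrow
{
    // ... real-time work that never touches the GC heap ...
}

void main()
{
    GC.disable();            // [2]: no automatic collection cycles
    scope (exit) GC.enable();

    // A thread that must never be suspended by the GC could additionally
    // call core.thread.thread_detachThis (see [1]) before its hot loop.
    auto worker = new Thread(&latencySensitiveWork);
    worker.start();
    worker.join();
}
```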


Re: Request Assistance Calling D from C++: weird visibility issue inside struct and namespace

2017-11-08 Thread Petar via Digitalmars-d-learn
On Wednesday, 8 November 2017 at 07:55:02 UTC, Andrew Edwards 
wrote:

On Wednesday, 8 November 2017 at 07:30:34 UTC, evilrat wrote:

On Wednesday, 8 November 2017 at 06:34:27 UTC, Andrew Edwards

just using fully qualified name didn't make it?

void call_cpp() {
::foo("do great things"); // calling global foo
return;
}



No, it did not.


Are you sure you put it in a namespace in C++ too?


Yes. That wasn't the issue

otherwise there might be some name mangling incompatibility 
that is probably worth filing a bug report for


That's the one. Thanks to the hint from Nicholas Wilson, I was 
able to track it down and resolve it with a call to 
pragma(mangle).


-Andrew


Walter has recently been working on improving the C++ mangling, 
so be sure to test the latest dmd nightly build and if that 
doesn't work be sure to file bug report(s).


https://github.com/dlang/dmd/pull/7250
https://github.com/dlang/dmd/pull/7259
https://github.com/dlang/dmd/pull/7272


Re: CTFE static array error: cannot modify read-only constant

2017-09-23 Thread Petar via Digitalmars-d-learn

On Friday, 22 September 2017 at 14:43:28 UTC, Johan wrote:

Hi all,
```
  auto foo(const int[3] x)
  {
      int[3] y = x;
      y[0] = 1; // line 4
      return y;
  }
  immutable int[3] a = [0, 1, 2];
  immutable int[3] b = foo(a); // line 8
```
compiles with an error:
```
4: Error: cannot modify read-only constant [0, 1, 2]
8:called from here: foo(a)
```

What am I doing wrong?

Thanks,
  Johan


Looks like a compiler bug to me. Please file a bug report on 
https://issues.dlang.org/enter_bug.cgi