Re: D is nice whats really wrong with gc??

2023-12-23 Thread IGotD- via Digitalmars-d-learn

On Monday, 18 December 2023 at 16:44:11 UTC, Bkoie wrote:
just look at this i know this is overdesigned im just trying to 
get a visual on how an api can be designed im still new though but 
the fact you can build an api like this and it not break is 
amazing.


but what is with these ppl and the gc?
just dont allocate new memory or invoke,
you can use scopes to temporarily do stuff on immutable slices 
that will auto clean up

the list goes on

and you dont need to use pointers at all...!!

i honestly see nothing wrong with gc,



I don't think there is anything wrong with having a GC in a 
language either, and upcoming languages also show that, as a 
majority of them have some form of GC. GC is here to stay 
regardless.


So what is the problem with D? The problem with D is that it is 
limited in what type of GC it can support. Right now D only 
supports a stop-the-world GC, which is quickly becoming 
unacceptable on modern systems. Sure, it was fine when we had 
dual-core CPUs, but today desktop PCs can have 32 execution units 
(server CPUs can have an insane number of them, like 128). 
Stopping 32 execution units (potentially even more if you have 
more threads) is just unacceptable: it not only takes a lot of 
time, it is a very clumsy approach on modern systems.


What GC should D then support? In my opinion, all of them. Memory 
management is a moving target and I don't know what it will look 
like in 10 years. Will cache snooping be viable, for example? 
Will the cores be clustered so that snoops are only possible 
within a cluster? D needs a more future-proof language design 
when it comes to memory management.


Because of this it is important that D can, as seamlessly as 
possible, support different types of GC. Exposing raw pointers in 
the language for GC-allocated types was a big mistake in the D 
language design which I think should be rectified. Almost all 
other new languages have opaque pointer/reference types in order 
to hide the GC mechanism, so that other GC algorithms like 
reference counting can be used.
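To illustrate the idea (this is purely a hypothetical sketch; `Ref` and `makeRef` are made-up names, not anything D or druntime actually provides), an opaque reference type would never leak the raw pointer, so the runtime could swap tracing GC for reference counting behind it:

```d
// Hypothetical sketch: an opaque reference type that hides the raw
// pointer from user code, so the GC strategy could change underneath.
struct Ref(T)
{
    private T* impl;   // never exposed to user code

    ref T get() { return *impl; }
    alias get this;    // convenient access without exposing impl
}

Ref!T makeRef(T)(T value)
{
    auto p = new T;
    *p = value;
    return Ref!T(p);
}
```

A reference-counting backend could then hide a counter next to `impl` without changing user code.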


This is an F- in language design.



Re: Request help on allocator.

2023-12-02 Thread IGotD- via Digitalmars-d-learn

On Saturday, 2 December 2023 at 19:13:18 UTC, Vino B wrote:

Hi All,

  Request your help in understanding the below program. With the 
below program I can allocate 8589934592 (8GB) and it prints the 
length 8589934592 (8GB), whereas my laptop has only 4 GB. The 
confusion is: how can this program allocate 8GB of RAM when I 
have only 4GB of RAM installed?






From,
Vino


Welcome to the wonderful world of virtual memory.

Virtual memory size != physical memory size

This has nothing to do with the D programming language, but with 
how the OS manages memory.
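A small illustration of the point (assumes a 64-bit Linux/POSIX system; on other platforms the constants differ): you can reserve far more address space than the machine has RAM, because nothing is backed by physical memory until the pages are actually touched.

```d
// Reserve address space without committing any physical memory.
// PROT_NONE pages carry no commit charge, so this succeeds even
// when the request is far larger than installed RAM.
import core.sys.posix.sys.mman;

void* reserve(size_t bytes)
{
    auto p = mmap(null, bytes, PROT_NONE, MAP_PRIVATE | MAP_ANON, -1, 0);
    return p == MAP_FAILED ? null : p;
}
```

The 8GB array in the question works the same way: the OS hands out virtual pages and only backs the ones that get written.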




Re: Single-thread processes

2023-06-29 Thread IGotD- via Digitalmars-d-learn

On Wednesday, 28 June 2023 at 23:46:17 UTC, Cecil Ward wrote:
If your program is such that one process will never ever 
involve multiple threads, because it simply doesn’t apply in 
your situation, then would it be worthwhile to have a "version 
(D_SingleThread)" which would get rid of the all-statics-in-TLS 
thing and make __gshared into a respectable, fully-ok storage 
class modifier that is fine in @safe code? What do you think? 
That way, you would be documenting what you’re doing and would 
not need any TLS-statics overhead.


In general, use globals as little as possible and definitely not 
TLS. TLS is a wart in computer technology that shouldn't have 
been invented in the first place. They should have stopped with 
per-thread key/value storage, which was available around the 
2000s, and let language runtimes handle it instead.


In general I do not agree with D making globals TLS by default. 
It should be the other way around: TLS must be explicitly 
declared, while traditional globals should be declared just like 
in C.
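For readers unfamiliar with how the defaults fall out, a small check of current D behavior: a plain module-level variable is thread-local, and `__gshared` gives the classic C behavior.

```d
// A plain module-level variable in D is TLS by default;
// __gshared opts back into a single process-wide copy.
import core.thread;

int counter;           // thread-local: each thread gets its own copy
__gshared int global;  // one copy for the whole process, like a C global

void demonstrate()
{
    counter = 1;
    global  = 1;
    auto t = new Thread({
        assert(counter == 0); // fresh zero-initialized copy in this thread
        counter = 99;         // invisible to the main thread
        global  = 2;          // visible to every thread (unsynchronized!)
    });
    t.start();
    t.join();
    assert(counter == 1);     // untouched by the other thread
    assert(global == 2);
}
```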


Re: Why are globals set to tls by default? and why is fast code ugly by default?

2023-04-01 Thread IGotD- via Digitalmars-d-learn

On Saturday, 1 April 2023 at 15:02:12 UTC, Ali Çehreli wrote:


Does anyone have documentation on why Rust and Zig do not do 
thread-local by default? I wonder what experience it was based 
on.




I think it would be hard to get documentation on the rationale 
for that decision. Maybe you can get an answer in their forums, 
but I doubt it. For Rust, I think they based it on the idea that 
globals should have some kind of synchronization which is 
enforced at compile time. Therefore TLS becomes a second-class 
citizen.


Speaking of experience, I used to be a C++ programmer. We made 
use of thread-local storage precisely zero times. I think it's 
because the luminaries of the time did not even talk about it.




Yes, that's "normal" programming: you more or less never use 
TLS.


With D, I take good advantage of thread-local storage. 
Interestingly, I do that *only* for fast code.


void foo(int arg) {
    static int[] workArea;

    if (workArea.length < neededFor(arg)) {
        // increase length
        workArea.length = neededFor(arg);
    }

    // Use workArea
}

Now I can use any number of threads using foo and they will 
have their independent work areas. The work area grows in an 
amortized fashion for each thread.


I find the code above to be clean and beautiful. It is very 
fast because no synchronization primitives are needed, since no 
work area is shared between threads.




There is nothing beautiful about it other than the clean syntax. 
Why not just use a stack variable, which is thread-local as well? 
TLS is often allocated on the stack in many systems anyway. 
Accessing TLS variables can be slower than accessing stack 
variables. The complexity of TLS doesn't pay for its usefulness.




> It's common knowledge that accessing tls global is slow
> 
http://david-grs.github.io/tls_performance_overhead_cost_linux/


"TLS globals are slow" would be misleading, because even the 
article you linked explains right at the top, in the TL;DR, that 
"TLS may be slow".


This depends on how it is implemented. TLS is really a forest and 
can be implemented in many ways, and it also depends on where it 
is being accessed (shared libraries, the executable etc.). In 
general, TLS on x86 is accessed via fs:[-offset_to_variable]; 
this isn't that slow, but the complexity of getting there is 
high. Keep in mind that the TLS area must be initialized on every 
thread creation, which isn't ideal. fs:[] isn't always possible, 
and then a function call is required, similar to a DLL symbol 
lookup. TLS is a turd which shouldn't have been created. They 
should have stopped with key/value pairs, which languages could 
then have built on if they wanted. Now TLS is in the executable 
standards and it is a mess. x86 now has two ways of doing TLS 
(normal and TLSDESC) just to make things even more complicated. 
An application programmer never sees this mess, but as a systems 
programmer I see it, and it is horrible.





Re: Why are globals set to tls by default? and why is fast code ugly by default?

2023-04-01 Thread IGotD- via Digitalmars-d-learn
On Sunday, 26 March 2023 at 18:25:54 UTC, Richard (Rikki) Andrew 
Cattermole wrote:
Having TLS by default is actually quite desirable if you like 
your code to be safe without having to do anything extra.


As soon as you go into memory global to the process, you are 
responsible for synchronization: ensuring that the state is 
what you want it to be.


Keep in mind that threads didn't exist when C was created. They 
could not change their approach without breaking everyone's 
code. So what they do is totally irrelevant unless it's 1980.


I think it's the correct way around. You can't accidentally 
cause memory safety issues. You must explicitly opt into the 
ability to mess up your program's state.


I think the "safe" BS is going too far. Normally you don't use 
global variables at all, but if you do, the most usual approach 
is to use normal global variables, perhaps with some kind of 
synchronization primitive. TLS is quite unusual, and having TLS 
by default might even introduce bugs, as the programmer believes 
the value can be set by all threads while the copies are actually 
independent.


Regardless, __gshared in front of the variable isn't a huge deal, 
but it shows that the memory model in D is a leaky bucket. Some 
compilers enforce synchronization primitives for global variables 
and are "safe" that way. However, sometimes you don't need them, 
like in small systems that only have one thread, and then it just 
gets in the way.


TLS by default is a mistake in my opinion and it doesn't really 
help. TLS should be discouraged as much as possible, as it is 
complicated and slows down thread creation.






Re: Is there such concept of a list in D?

2022-12-19 Thread IGotD- via Digitalmars-d-learn

On Saturday, 10 December 2022 at 15:59:07 UTC, Ali Çehreli wrote:


There isn't a single point in favor of linked lists.


Yes there is; there are still special cases where a linked list 
can be the better alternative, especially a version with 
intrusive members (with next/prev pointers as members of your 
object).


The intrusive linked list doesn't need any extra allocation apart 
from the object itself, which means less fragmentation and no 
small container-node allocations.


The doubly linked list has O(1) insert and delete; arrays have 
not.

The singly linked list offers completely lockless variants, which 
are also completely unbounded.


The intrusive linked list has better performance for everything 
except random access.


You can move/splice entire lists without copying.

The linked list performs equally well regardless of the number of 
objects or the object size. The performance of arrays depends on 
both.
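The intrusive variant described above can be sketched in a few lines of D (a minimal sketch; `Node` and `IntrusiveList` are illustrative names, and here the list is singly anchored for brevity):

```d
// Intrusive doubly linked list: the links live inside the object
// itself, so insertion and removal allocate nothing extra.
struct Node
{
    Node* prev;
    Node* next;
    int payload;   // the object's own data sits alongside the links
}

struct IntrusiveList
{
    Node* head;

    void pushFront(Node* n)   // O(1), no allocation
    {
        n.prev = null;
        n.next = head;
        if (head) head.prev = n;
        head = n;
    }

    void remove(Node* n)      // O(1) given the node itself
    {
        if (n.prev) n.prev.next = n.next; else head = n.next;
        if (n.next) n.next.prev = n.prev;
    }
}
```

Because the node is the object, removal needs no search and no deallocation of a separate container cell.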




As CPUs have progressed, the array has become more favorable than 
the linked list type offered by most standard libraries (the kind 
that must allocate container nodes, not the intrusive kind). For 
most programming practices the array is usually the best choice. 
However, there are occasions where the linked list is worth 
considering.






Re: Passing a string by reference

2022-11-09 Thread IGotD- via Digitalmars-d-learn

On Tuesday, 8 November 2022 at 12:43:47 UTC, Adam D Ruppe wrote:


In fact, ref in general in D is a lot more rare than in 
languages like C++. The main reason to use it for arrays is 
when you need changes to the length to be visible to the 
caller... which is fairly rare.




In general many parameters are passed as const, and the called 
function never changes the parameter. Which brings up the 
question: will the compiler optimize the parameter so that the 
string is only passed as a pointer underneath when it knows it is 
const?




Re: Programs in D are huge

2022-08-19 Thread IGotD- via Digitalmars-d-learn

On Friday, 19 August 2022 at 11:18:48 UTC, bauss wrote:


It's one thing D really misses, but is really hard to implement 
when it wasn't thought of to begin with. It should have been 
implemented alongside functions that may change between 
languages and cultures.


I guess we have another task for Phobos v2.


Re: Programs in D are huge

2022-08-19 Thread IGotD- via Digitalmars-d-learn
On Thursday, 18 August 2022 at 17:15:12 UTC, rikki cattermole 
wrote:


Unicode support in Full D isn't complete.

There is nothing in phobos to even change case correctly!

Both are limited if you care about certain stuff like non-latin 
based languages like Turkic.


I think full D is fine for terminal programs. What you commonly 
do is tokenize text, remove white space, convert text to numbers, 
etc. The D library is good for these tasks. Rewriting all of this 
for betterC would be a tedious task.


Re: Programs in D are huge

2022-08-18 Thread IGotD- via Digitalmars-d-learn

On Wednesday, 17 August 2022 at 17:25:51 UTC, Ali Çehreli wrote:

On 8/17/22 09:28, Diego wrote:

> I'm writing a little terminal tool, so i think `-betterC` is
the best
> and simple solution in my case.

It depends on what you mean by terminal tool, but in general, no, 
the full feature set of D is the most useful option.


I've written a family of programs that would normally be run on 
the terminal; I had no issues that would warrant -betterC.


Ali


BetterC means no array or string library, and in terminal tools 
you usually need to process text. Full D is wonderful for such 
tasks, but betterC would be limited unless you want to write 
your own array and string functionality.


Re: vectorization of a simple loop -- not in DMD?

2022-07-11 Thread IGotD- via Digitalmars-d-learn

On Monday, 11 July 2022 at 18:19:41 UTC, max haughton wrote:


The dmd backend is ancient, it isn't really capable of these 
kinds of loop optimizations.


I've said it several times before: just deprecate the DMD 
backend, it's just not up to the task anymore. This is not 
criticism of its original purpose; back in the 90s and early 
2000s it made sense to create your own backend. Time has moved 
on, and we have the LLVM and GCC backends with an amount of CPU 
support that the D project could never achieve by itself. The D 
project should just can the DMD backend in order to free up 
resources for more important tasks.


Some people say they like it because it is fast. Yes, it is fast 
because it doesn't do much.


Re: Why are structs and classes so different?

2022-05-16 Thread IGotD- via Digitalmars-d-learn

On Sunday, 15 May 2022 at 16:08:01 UTC, Mike Parker wrote:


`scope` in a class variable declaration will cause the class to 
be allocated on the stack.




Common practice is that a class has class members itself. So 
where are they allocated? Most likely only the top-level class is 
on the stack; the class members are allocated on the heap, 
because the constructor is already compiled that way.


That makes scope not all that useful, unless you have it like 
C++, which expands class members inside the parent class.
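A small sketch of that point (illustrative names; behavior as in current D):

```d
// With `scope`, the Outer instance itself can live on the stack,
// but its class-typed member still refers to a separate heap
// allocation made in the constructor.
class Inner { int x = 1; }

class Outer
{
    Inner inner;              // a reference, not embedded storage
    this() { inner = new Inner; }
}

void demo()
{
    scope o = new Outer;      // Outer placed on the stack
    assert(o.inner.x == 1);   // inner still points to the heap
}
```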






Re: decimal type in d

2022-05-16 Thread IGotD- via Digitalmars-d-learn

On Sunday, 15 May 2022 at 13:26:30 UTC, vit wrote:
Hello, I want to read a decimal type from an SQL DB, do some 
arithmetic operations inside a D program and write it back to 
the DB. The result needs to be close to the result these 
operations would give if performed in the SQL DB. Something like 
C# decimal. Does this kind of library exist in D? (ideally `pure 
@safe @nogc nothrow`).


This is also something I have wondered about; it should be 
standard in the D library. Implementing it can be done 
straightforwardly with existing D language primitives, 
essentially as a struct.


For those who don't know, decimal in C# is like a floating point 
value, but the exponent is a power of 10 (internally 16 bytes in 
total). This means that for "simple" mathematics, rational 
decimal values remain rational decimal values, and not some 
rounded value as would happen if you used normal binary floating 
point. The decimal type is essential for financial calculations.


I think D can more or less copy the C# solution.
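To show the principle only (this is a deliberately tiny sketch with a fixed scale, not C#'s actual 128-bit decimal with a variable power-of-ten exponent; `Cents` is an illustrative name):

```d
// Fixed-scale decimal sketch: store an exact integer count of
// hundredths, so 0.10 + 0.20 is exactly 0.30 with no binary
// rounding error.
struct Cents
{
    long hundredths;  // 1234 represents 12.34

    Cents opBinary(string op : "+")(Cents rhs) const pure @safe @nogc nothrow
    {
        return Cents(hundredths + rhs.hundredths);
    }
}
```

A full implementation would carry the exponent in the value and handle multiplication, division and rounding modes, which is where the real work is.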




Re: How to get an IP address from network interfaces

2022-04-22 Thread IGotD- via Digitalmars-d-learn

On Friday, 22 April 2022 at 12:58:24 UTC, H. S. Teoh wrote:


Why would you not want to use OS APIs?



1. Portability

2. Language APIs are usually much better to use than the OS APIs, 
like Berkeley sockets for example.




Re: How to get an IP address from network interfaces

2022-04-22 Thread IGotD- via Digitalmars-d-learn

On Thursday, 21 April 2022 at 07:20:30 UTC, dangbinghoo wrote:


[... Berkley sockets network code ...]



It really makes me sad when I see this. D has some native 
networking API, but unfortunately you have to go to the OS API to 
get this basic functionality. D should really expand its own API 
so that we don't have to use OS APIs.





Re: Why do immutable variables need reference counting?

2022-04-11 Thread IGotD- via Digitalmars-d-learn

On Sunday, 10 April 2022 at 23:19:47 UTC, rikki cattermole wrote:


immutable isn't tied to lifetime semantics.

It only says that this memory will never be modified by anyone 
during its lifetime.


Anyway, the real problem is with const. Both mutable and 
immutable become it automatically.


I was thinking about that. Often you use const when passing 
parameters to functions; this is essentially borrowing. The 
situation is similar in C++ with unique_ptr and shared_ptr. C++ 
interfaces often use a raw const pointer, not the smart pointer 
counterpart, so essentially the ownership stays put while 
"borrowing" uses raw pointers. A C++ interface only accepts the 
smart pointers when you want to change ownership. D could use a 
similar approach: when using pointers/references you shouldn't 
alter the internal data, including the reference count.


What I would be interested in is whether D could have 
move-by-default depending on the type. In that case the RC 
pointer wrapper could be move-by-default, and the count would 
only increase when calling "clone" (similar to Rust). This way RC 
increments are optimized away naturally. What I don't want from 
Rust is the runtime aliasing check (RefCell) being on at all 
times. I would rather have the compiler assume no aliasing and 
make the programmer responsible for it. You could have a runtime 
aliasing/borrowing check in debug mode and remove it in release 
builds. This is similar to bounds checking, where you can choose 
to have it or not.
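The move-by-default idea above can already be approximated in today's D (a hypothetical sketch, not a proposal for the language; `Rc`, `make` and `clone` are made-up names, and actual freeing is elided since the GC backs the allocations here):

```d
// Move-only RC wrapper: implicit copies are disabled, so passing it
// around transfers ownership, and the count only changes on an
// explicit clone().
struct Rc(T)
{
    private T* payload;
    private size_t* count;

    @disable this(this);        // no implicit copies: move-only

    static Rc make(T value)
    {
        Rc r;
        r.payload = new T;
        *r.payload = value;
        r.count = new size_t;
        *r.count = 1;
        return r;
    }

    Rc clone()                  // the only place the count goes up
    {
        Rc r;
        r.payload = payload;
        r.count = count;
        ++*count;
        return r;
    }

    ~this()                     // real freeing elided in this sketch
    {
        if (count !is null)
            --*count;
    }
}
```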


Re: I like dlang but i don't like dub

2022-03-22 Thread IGotD- via Digitalmars-d-learn

On Friday, 18 March 2022 at 18:16:51 UTC, Ali Çehreli wrote:


The first time I learned about pulling in dependencies 
terrified me. (This is the part I realize I am very different 
from most other programmers.) I am still terrified that my 
dependency system will pull in a tree of code that I have no 
idea doing. Has it been modified to be malicious overnight? I 
thought it was possible. The following story is an example of 
what I was exactly terrified about:



https://medium.com/hackernoon/im-harvesting-credit-card-numbers-and-passwords-from-your-site-here-s-how-9a8cb347c5b5

Despite such risks many projects just pull in code. (?) What am 
I missing?




This is an interesting observation, and something of an oddity in 
modern SW engineering. I have been on several projects where they 
just download versions of libraries from some random server. For 
personal projects I guess this is ok, but for commercial software 
it would be a big no-no for me. Still, the trend goes in this 
direction. Now, several build systems and package managers have 
the possibility to change the server to a local one. Pointing to 
a local one is unusual though, which is strange.


First, as you mentioned, you increase your vulnerability through 
the possibility of someone injecting a modified version of a 
library with back doors. Then you also become dependent on 
outside servers, which is bad if they are down.


In all, for commercial software, just avoid dub. If you want to 
use a build system, go for Meson, as it has D support out of the 
box today. For commercial projects, pull libraries in manually, 
as you want full control over where you get them, the version and 
so on.




Re: Is there an equivavlent to C# boxing in D?

2022-02-12 Thread IGotD- via Digitalmars-d-learn

On Saturday, 12 February 2022 at 00:41:22 UTC, H. S. Teoh wrote:


How about this?


final class Boxed(T) {
    T payload;
    alias payload this; // caveat: probably not a good idea in general

    this(T val) { payload = val; }
}

Boxed!int i = new Boxed!int(123);
int j = i; // hooray, implicit unboxing!
i = 321;   // even this works




Pretty neat solution; you need an extra type, but that's not 
much. If alias this were removed from D, would tricks like these 
suddenly become impossible?





Is there an equivavlent to C# boxing in D?

2022-02-11 Thread IGotD- via Digitalmars-d-learn
If you want to store a value type on the heap in D, you just use 
"new" and a pointer to the type. The same thing in C# would be to 
wrap the value type in an object. However, when you do that, 
automatic conversion without a cast seems not to be possible (C# 
also has a dynamic type that might solve this, but it is more 
heavyweight).


https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/types/boxing-and-unboxing

Is there a possibility in D to wrap a value type in the base 
Object class that is otherwise used as the class reference type? 
Would this be a way to use D without raw pointers for 
heap-allocated value types?


Re: How to loop through characters of a string in D language?

2021-12-10 Thread IGotD- via Digitalmars-d-learn

On Friday, 10 December 2021 at 06:24:27 UTC, Rumbu wrote:



Since it seems there is a contest here:

```d
"abc;def;ghi".split(';').join();
```

:)


Would that become two for loops or not?


Re: How to test if a string is pointing into read-only memory?

2021-10-12 Thread IGotD- via Digitalmars-d-learn

On Tuesday, 12 October 2021 at 09:20:42 UTC, Elronnd wrote:


There is no good way.


Can't it be done using function overloading?


Re: Reference Counted Class

2021-07-14 Thread IGotD- via Digitalmars-d-learn

On Wednesday, 14 July 2021 at 17:52:16 UTC, sclytrack wrote:
Would reference counted classes by default be too much of a 
change? Is it a bad idea? Currently there are changes in the 
language where you can avoid the reference count, right?
Combining both the RC and the stop-the-world GC, for the 
cycles.


Since classes are reference types, I think it can be done. If I'm 
wrong, please explain why. Now, I think there might be problems 
with existing code, as it takes no care of the RC when classes 
are passed around. However, for new code this can be mitigated.


In practice this would lead to RC classes while other structures 
would still be garbage collected. This would reduce the amount of 
garbage-collected memory, which could be beneficial.


What I would find interesting is if this would enable 
deterministic destruction of classes.


It's an interesting subject and could be a halfway step to a more 
versatile memory management.


Re: Are D classes proper reference types?

2021-06-27 Thread IGotD- via Digitalmars-d-learn
On Sunday, 27 June 2021 at 07:48:22 UTC, Ola Fosheim Grøstad 
wrote:



Which languages use fat pointers? C++ may use it (but is not 
required to).


Probably about all managed languages. One common method is that 
the reference is actually an identifier used in a hash table, 
where you can find all sorts of metadata. My original point was 
that this consumes more memory in managed languages, but nobody 
seems to mind.




Not sure what you mean by expanding? I never liked `alias this` 
for structs, inheritance would be simpler. Is this what you 
mean by expanding?




When you use a struct as a member variable in another struct, the 
data is expanded into the host struct. If the member struct is 16 
bytes, then the host struct grows by 16 bytes to accommodate it.


This is not the case with classes in D, as classes are always 
allocated on the heap with dynamic allocation. This leads to more 
fragmentation and memory consumption.
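The layout difference can be made concrete (illustrative type names; behavior as in current D):

```d
// Struct members are embedded into the host; class members are
// only a reference to a separate heap object.
struct Point { double x, y; }     // 16 bytes of payload

struct HostStruct
{
    Point p;                      // embedded: grows the host by 16 bytes
}

class Member { double x = 0, y = 0; }

class HostClass
{
    Member m;                     // just a pointer-sized reference; the
                                  // Member object is a second allocation
}

static assert(HostStruct.sizeof == Point.sizeof);
```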


Re: Are D classes proper reference types?

2021-06-25 Thread IGotD- via Digitalmars-d-learn
On Friday, 25 June 2021 at 20:22:24 UTC, Ola Fosheim Grøstad 
wrote:


Hm. Not sure if I follow, I think we are talking about stuffing 
bits into the counter and not the address?


Then I misunderstood. If it's a counter it should be fine.



But fat pointers are 16 bytes, so quite expensive.


Yes, that's a tradeoff, but one I'm willing to take. I'm thinking 
of even bigger managed pointers, of perhaps 32 bytes, which carry 
more metadata, like the allocated size. Managed languages in 
general have fat pointers, we see them everywhere, and it is not 
a big deal.


If you are littering pointers, you should perhaps refactor your 
code, e.g. use an array if you have loads of objects of the same 
type. Another thing I'm not satisfied with in D is that there is 
no built-in method of expanding member classes into the host 
class like C++ has, which creates pointer littering and memory 
fragmentation.


Re: Are D classes proper reference types?

2021-06-25 Thread IGotD- via Digitalmars-d-learn

On Friday, 25 June 2021 at 17:37:13 UTC, IGotD- wrote:


You cannot use the most significant bit as it will not work 
with some 32-bit systems. Linux with a 3G kernel position for 
example. Better to use the least significant bit as all 
allocated memory is guaranteed to be aligned. Regardless this 
requires compiler support for masking off this bit.


Now we're going into halfway fat pointer support. Then we can 
just use fat pointers instead and have full freedom.


One thing I have found out over the years is that if you want 
full versatility, have a pointer to your free function in your 
fat pointer. With this you have a generic method to free your 
object when it goes out of scope. You get the ability to use 
custom allocators and even to change allocators on the fly. If 
for some reason you don't want to free your object automatically, 
just put zero in that field.
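A hypothetical sketch of that layout (`FatPtr` and `demoFree` are made-up names, and the "deallocator" here only sets a flag for demonstration):

```d
// Fat pointer carrying its own free function: any allocator can be
// plugged in per object, and null means "never free automatically".
struct FatPtr(T)
{
    T* data;
    void function(T*) freeFn;   // null: do not free automatically

    void release()
    {
        if (freeFn !is null)
            freeFn(data);
        data = null;
    }
}

__gshared bool freedFlag;       // just for the demonstration below

void demoFree(int* p)
{
    // a real deallocator would return p to its allocator here
    freedFlag = true;
}
```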


Re: Are D classes proper reference types?

2021-06-25 Thread IGotD- via Digitalmars-d-learn

On Friday, 25 June 2021 at 07:17:20 UTC, kinke wrote:
Wrt. manual non-heap allocations (stack/data segment/emplace 
etc.), you could e.g. reserve the most significant bit of the 
counter to denote such instances and prevent them from being 
free'd (and possibly finalization/destruction too; this would 
need some more thought I suppose).


You cannot use the most significant bit as it will not work with 
some 32-bit systems. Linux with a 3G kernel position for example. 
Better to use the least significant bit as all allocated memory 
is guaranteed to be aligned. Regardless this requires compiler 
support for masking off this bit.


Now we're going into halfway fat pointer support. Then we can 
just use fat pointers instead and have full freedom.


Re: wanting to try a GUI toolkit: needing some advice on which one to choose

2021-06-01 Thread IGotD- via Digitalmars-d-learn
On Tuesday, 1 June 2021 at 16:20:19 UTC, Ola Fosheim Grøstad 
wrote:


I don't really agree with this, most of the interesting things 
for specifying UIs are happening in 
web-frameworks/web-standards nowadays. But it doesn't matter...


If I were to make a desktop application in D today then I would 
have chosen to either embed a browser-view and use that for 
interface, or electron. Not sure if has been mentioned already, 
but it is an option.


This is also my observation. Browser UIs are on the way to taking 
over. One advantage is that they are easy to use remotely: if you 
have a network connection, you can run the UI on another device. 
Some mobile phones have a "hidden" web UI if you connect to them 
using a browser. Another advantage is that a web UI scales well 
with different resolutions and aspect ratios. You can also more 
easily find people who know how to make cool-looking web content, 
compared to people who know how to make a custom UI in Qt or C#.


Car infotainment systems often run their graphics in a browser, 
without many people knowing about it. They work similarly to 
Firefox OS: you boot directly into the browser.


Re: How long does the context of a delegate exist?

2021-05-27 Thread IGotD- via Digitalmars-d-learn

On Thursday, 27 May 2021 at 18:13:17 UTC, Adam D. Ruppe wrote:


If the delegate is created by the GC and stored it will still 
be managed by the GC, along with its captured vars.


As long as the GC can see the delegate in your example you 
should be OK. But if it is held on to by a C or OS lib, the GC 
might not see it and you'd have to addRoot or something to 
ensure it stays in.


With lambdas in C++ you can cherry-pick the captures the way you 
want; this can then be converted to a std::function, and that is 
where it goes on the heap as well.


In D, how does the compiler know which captures it is supposed to 
store, or does it take the whole lot?
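To the best of my understanding (a small probe, not an authoritative spec quote): D does not cherry-pick per lambda the way C++ does; locals that a delegate references are hoisted into a GC-allocated closure for the enclosing function, and they are captured by reference, so every call sees the same variable.

```d
// D closures capture by reference via the enclosing function's
// frame: n below lives in a heap closure, not on the stack.
int delegate() makeCounter()
{
    int n = 0;          // hoisted into the GC-allocated closure
    return () => ++n;   // every call updates the same n
}
```

Calling the returned delegate twice yields 1 then 2, showing the shared, persistent capture.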


Re: Remove own post from forum

2021-05-10 Thread IGotD- via Digitalmars-d-learn

On Monday, 10 May 2021 at 03:36:02 UTC, Виталий Фадеев wrote:
I have a misformatted post in this thread: 
https://forum.dlang.org/thread/kwpqyzwgczdpzgsvo...@forum.dlang.org


Please tell me,
how do I remove my own post from this forum?


Welcome to the 90s: this forum is essentially a front end to a 
news group. The modern luxuries, like forum software and the nice 
graphical interfaces you see everywhere else, are absent here.


Re: Initializing D runtime and executing module and TLS ctors for D libraries

2021-01-30 Thread IGotD- via Digitalmars-d-learn

On Saturday, 30 January 2021 at 12:28:16 UTC, Ali Çehreli wrote:


I wonder whether doing something in the runtime is possible. 
For example, it may be more resilient and not crash when 
suspending a thread fails because the thread may be dead 
already.


However, studying the runtime code around thread_detachThis 
three years ago, I had realized that like many things in 
computing, the whole stop-the-world is wishful thinking because 
there is no guarantee that your "please suspend this thread" 
request to the OS has succeeded. You get a success return code 
back, but it only means your request was accepted, not that the 
thread was or will be suspended. (I may be misremembering this 
point, but I know that the runtime requests things that the OS 
does not give full guarantees for.)




OT: a thread that suspends itself will always be suspended (not 
taking fall-through cases into account); if not, throw the OS 
away. If a thread suspends another thread, then you don't really 
know when that thread will actually be suspended. I would 
discourage threads suspending other threads, because that opens 
up a new world of race conditions. Some systems don't even allow 
it, and its benefits are very limited.


Back to the topic. I think the generic solution, even if it 
doesn't help with your current implementation, is to ban TLS 
altogether. There have already been requests to remove TLS from 
druntime/phobos entirely, and I think this should definitely be 
done sooner rather than later. Also, if you write a shared 
library in D, simply don't use TLS at all. This way it will not 
matter whether a thread is registered with druntime or not. TLS 
is, in my opinion, a wart in computer science.




Re: is core Mutex lock "fast"?

2021-01-26 Thread IGotD- via Digitalmars-d-learn
On Tuesday, 26 January 2021 at 21:09:34 UTC, Steven Schveighoffer 
wrote:


The only item that is read without being locked is owner. If 
you change that to an atomic read and write, it should be fine 
(and is likely fine on x86* without atomics anyway).


All the other data is protected by the actual mutex, and so 
should be synchronous.


However, I think this is all moot, druntime is the same as 
Tango.


-Steve


Yes, I didn't see that the lock would block subsequent threads.

Both pthread_mutex_lock and EnterCriticalSection do exactly the 
same thing as FastLock; the only difference is that the check is 
"closer" to the running code. The performance increase should be 
small.


As I was wrong about the thread safety, I will not write here any 
further.


Re: is core Mutex lock "fast"?

2021-01-26 Thread IGotD- via Digitalmars-d-learn

On Tuesday, 26 January 2021 at 18:07:06 UTC, ludo wrote:

Hi guys,

still working on old D1 code, to be updated to D2. At some 
point the previous dev wrote a FastLock class. The top comment 
is from the dev himself, not me. My question is after the code.


---

class FastLock
{
    protected Mutex mutex;
    protected int lockCount;
    protected Thread owner;

    ///
    this()
    {
        mutex = new Mutex();
    }

    /**
     * This works the same as Tango's Mutex's lock()/unlock() except it
     * provides extra performance in the special case where a thread calls
     * lock()/unlock() multiple times while it already has ownership from a
     * previous call to lock(). This is a common case in Yage.
     *
     * For convenience, lock() and unlock() calls may be nested. Subsequent
     * lock() calls will still maintain the lock, but unlocking will only
     * occur after unlock() has been called an equal number of times.
     *
     * On Windows, Tango's lock() is always faster than D's synchronized
     * statement. */
    void lock()
    {
        auto self = Thread.getThis();
        if (self !is owner)
        {
            mutex.lock();
            owner = self;
        }
        lockCount++;
    }

    void unlock() /// ditto
    {
        assert(Thread.getThis() is owner);
        lockCount--;
        if (!lockCount)
        {
            owner = null;
            mutex.unlock();
        }
    }
}

---

Now if I look at the doc , in particular Class 
core.sync.mutex.Mutex, I see:

---
 lock () 	If this lock is not already held by the caller, the 
lock is acquired, then the internal counter is incremented by 
one.

--
Which looks exactly like the behavior of "fastLock". Is it the 
case that the old Tango mutex lock was not keeping count and 
would lock the same object several times? Do we agree that the 
FastLock class is obsolete considering the current D core?


cheers


That code isn't thread safe at all (assuming FastLock is used 
from several threads). lockCount isn't atomic, which means the 
code will not work with several threads. The assignment of the 
owner variable isn't thread safe either. As soon as you include 
more than one supposedly atomic assignment in a synchronization 
primitive, things quickly get out of hand.


The normal D Mutex uses pthread_mutex on Linux and the usual 
CriticalSection machinery on Windows. Neither is particularly 
fast. A futex on Linux isn't exactly super fast either. 
Synchronization primitives aren't fast on any system; that's just 
how it is. There are ways to make things faster, but add features 
like timeouts and the complexity explodes. There are so many 
pitfalls with synchronization primitives that it is hardly worth 
rolling your own.
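Worth noting for the original question: today's core.sync.mutex.Mutex is documented as a recursive mutex on both platforms, so the re-entrancy FastLock adds should already be covered. A quick check:

```d
import core.sync.mutex : Mutex;

void main()
{
    auto m = new Mutex();

    // core.sync.mutex.Mutex is recursive: the same thread may take the
    // lock repeatedly, and it is released once unlock() has been called
    // the same number of times.
    m.lock();
    m.lock();
    m.unlock();
    m.unlock();
}
```

So replacing FastLock with a plain Mutex keeps the nested lock()/unlock() behavior without the unsynchronized owner/lockCount bookkeeping.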


Re: Initializing D runtime and executing module and TLS ctors for D libraries

2021-01-24 Thread IGotD- via Digitalmars-d-learn

On Sunday, 24 January 2021 at 03:59:26 UTC, Ali Çehreli wrote:


That must be the case for threads started by D runtime, right? 
It sounds like I must call rt_moduleTlsCtor explicitly for 
foreign threads. It's still not clear to me which modules' TLS 
variables are initialized (copied over). Only this module's or 
all modules that are in the program? I don't know whether it's 
possible to initialize one module; rt_moduleTlsCtor does not 
take any parameter.




Any thread started by druntime is properly initialized, of 
course. Any thread started by a module written in another 
language will not get the D thread initialization.


All TLS variables in all loaded modules are initialized (only 
copying and zeroing) by the OS system code for each thread that 
the OS knows about. After that it is up to each language's 
runtime library to do further initialization. The next time 
__tls_get_addr is called after loading a library, the TLS 
variables of any new module will be found and initialized.


It is a mystery to me why the TLS standard never included a 
ctor/dtor vector for TLS variables. It would have been possible 
in practice, but they didn't do it. The whole TLS design is like 
a Swiss cheese.




Did you mean a generic API, which makes calls to D? That's how 
I have it: an extern(C) API function calling proper D code.




I have a lot of system code written in C++ which also includes 
callbacks from that code. In order to support D, a layer is 
necessary to catch all callbacks in a trampoline and invoke D 
delegates. Calling D code directly with extern(C) should be 
avoided because 1. D delegates are much more versatile, and 2. 
you must use a trampoline to do the D-specific thread 
initialization anyway. Since std::function cannot be used in a 
generic interface, I actually use something like this: 
http://blog.coldflake.com/posts/C++-delegates-on-steroids/. It is 
more versatile than plain extern(C) but simple enough to be 
usable from any language. In the case of D, the "this pointer" 
slot can hold a pointer to a D delegate.
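A hedged sketch of that trampoline idea on the D side (the callback signature and all names are illustrative, not the actual C++ interface): the generic layer carries a plain function pointer plus an opaque context pointer, and the context smuggles a D delegate through.

```d
// The generic, language-agnostic callback shape: a plain function
// pointer plus an opaque context, callable from C or C++.
alias CCallback = extern (C) void function(void* context, int value);

// The D-side trampoline: recover the delegate from the opaque context
// pointer and invoke it, with full closure support.
extern (C) void trampoline(void* context, int value)
{
    auto dg = *cast(void delegate(int)*) context;
    dg(value);
}

void main()
{
    int seen;
    void delegate(int) dg = (int v) { seen = v; };

    // What the foreign side would store and later call back into:
    CCallback fn = &trampoline;
    void* ctx = cast(void*) &dg;

    fn(ctx, 42);
    assert(seen == 42);
}
```

In real use the trampoline is also where the thread registration described elsewhere in this thread (thread_attachThis and friends) would go, before the delegate is invoked.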


Creating language-agnostic interfaces requires more attention 
than usual, as I have experienced. Strings, for example, 
complicate things further, as they are different in every 
language.




Re: Initializing D runtime and executing module and TLS ctors for D libraries

2021-01-23 Thread IGotD- via Digitalmars-d-learn

On Sunday, 24 January 2021 at 00:24:55 UTC, Ali Çehreli wrote:


One question I have is, does rt_init already do 
thread_attachThis? I ask because I have a library that is 
loaded by Python and things work even *without* calling 
thread_attachThis.




During rt_init in the main thread, thread_attachThis is 
performed, from what I have seen.




Another question: Are TLS ctors executed when I do loadLibrary?

And when they are executed, which modules are involved? The 
module that is calling rt_moduleTlsCtor or all modules? What 
are "all modules"?


The TLS standard (at least the ELF standard) does not have ctors. 
Only simple initialization is allowed, meaning the initial data 
is stored as .tdata, which is copied into the specific memory 
area of each thread. There is also .tbss, which is zeroed memory 
just like the .bss section. Actual ctor code that runs for each 
TLS thread is language specific and not part of the ELF standard, 
so no such TLS ctor code is run by the lower-level API. The 
initialization (only copying and zeroing) of TLS data is done 
when each thread starts. This can even be done lazily, when the 
first TLS variable is accessed.




I have questions regarding thread_attachThis and 
thread_detachThis: When should they be called? Should the 
library expose a function that the users must call from *each 
thread* that they will be using? This may not be easy because a 
user may not know what thread they are running on. For example, 
the user of our library may be on a framework where threads may 
come and go, where the user may not have an opportunity to call 
thread_detachThis when a thread goes away. For example, the 
user may provide callback functions (which call us) to a 
framework that is running on a thread pool.




I call thread_attachThis as soon as a thread is about to call a D 
function, for example a callback from a thread in a thread pool. 
This usually happens where a function pointer or delegate is 
involved, as any jump into D code goes through them. I have to 
make a generic API and then a D API on top of that. In practice 
this means there is a trampoline function involved, and that is 
where thread_attachThis and thread_detachThis are called. It is 
also where I call the TLS ctors/dtors. Delegates are language 
specific, so it falls out naturally that way. Avoid extern(C) 
calls directly into D code.


In practice you can do this for any thread, even if there are 
several delegate invocations during the thread's lifetime. You 
can simply have a TLS bool variable recording whether 
thread_attachThis and rt_moduleTlsCtor have already been run.
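A minimal sketch of that TLS-flag guard, assuming the usual druntime entry points (rt_moduleTlsCtor is an extern(C) hook, so it has to be declared by hand):

```d
import core.thread : thread_attachThis;

extern (C) void rt_moduleTlsCtor();  // provided by druntime

// Thread-local by default in D, so each foreign thread gets its own flag.
bool dRuntimeAttached;

// Call this at the top of every trampoline before entering D code.
// Intended only for threads druntime did not create or attach itself.
void ensureAttached()
{
    if (!dRuntimeAttached)
    {
        thread_attachThis();   // register the foreign thread with the GC
        rt_moduleTlsCtor();    // run the TLS ctors for this thread
        dRuntimeAttached = true;
    }
}
```

The flag makes the attach idempotent, so it does not matter how many delegates run on the same foreign thread.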




More questions: Can I thread_detachThis the thread that called 
rt_init? Can I call rt_moduleTlsCtor more than once? I guess it 
depends on each module. It will be troubling if a TLS ctor 
reinitializes an module state. :/




I have brought up this question before, because as it stands I 
haven't seen any "rt_uninit" or "rt_close" function. This is a 
bit limiting for me, as the main thread can exit while the 
process lives on. In general, the thread that enters main must 
also be the last one returning through the entire chain of 
functions called when the process started. Otherwise what can 
happen is that you do a thread_detachThis twice.


The short answer is to just park the main thread while the bulk 
of the work is done by other threads. Unfortunately, that's how 
many libraries work today.





Re: opIndex negative index?

2021-01-21 Thread IGotD- via Digitalmars-d-learn

On Thursday, 21 January 2021 at 15:49:02 UTC, IGotD- wrote:

On Thursday, 21 January 2021 at 14:47:38 UTC, cerjones wrote:


ohreally?


I thought you were talking about the built in arrays.


Since you create your own implementation, as you showed, you are 
basically free to do anything. That includes being free to create 
your own iterator as well.

Re: opIndex negative index?

2021-01-21 Thread IGotD- via Digitalmars-d-learn

On Thursday, 21 January 2021 at 14:47:38 UTC, cerjones wrote:


ohreally?


I thought you were talking about the built in arrays.


Re: opIndex negative index?

2021-01-21 Thread IGotD- via Digitalmars-d-learn

On Thursday, 21 January 2021 at 14:00:28 UTC, cerjones wrote:
I have an iterator that steps along a 2D vector path command by 
command and uses opIndex to give access to the points for the 
current command. The issue is that there's a shared point 
between commands, so when the iterator is on a given command, 
it also needs to allow access to the end point of the 
previous command. Currently the iterator juggles the indexes so 
that for a cubic you use 0,1,2,3, and 0 gives the previous end 
point. But it would simplify a few things if [-1] were for 
accessing the previous end point, so a cubic would allow 
indexes -1,0,1,2.


I'm a bit unsure if this is reasonable or not.

Thoughts?


Not possible with built-in arrays: indexes are of type size_t, 
which is unsigned. If you want negative indexes, you need to wrap 
the array in your own implementation and offset 0 so that 
negative indexes can be used.
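A minimal sketch of such a wrapper with a signed opIndex, offsetting so that -1 reaches the previous command's shared end point (the storage layout here is assumed for illustration):

```d
// Wraps a slice whose first element is the previous command's end
// point, exposing it at index -1 via a signed opIndex.
struct CommandView
{
    double[] points;  // [prevEnd, p0, p1, ...]

    ref double opIndex(ptrdiff_t i)
    {
        return points[i + 1];  // -1 -> 0, 0 -> 1, ...
    }
}

void main()
{
    auto v = CommandView([9.0, 1.0, 2.0, 3.0]);
    assert(v[-1] == 9.0);  // previous command's end point
    assert(v[0] == 1.0);
    assert(v[2] == 3.0);
}
```

Since opIndex takes a ptrdiff_t rather than size_t, -1,0,1,2 for a cubic works as asked, at the cost of one index translation.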


Re: Why many programmers don't like GC?

2021-01-15 Thread IGotD- via Digitalmars-d-learn

On Friday, 15 January 2021 at 15:50:50 UTC, H. S. Teoh wrote:


DMD *never* frees anything.  *That's* part of why it's so fast; 
it completely drops the complexity of tracking free lists and 
all of that jazz.


That's also why it's a gigantic memory hog that can be a big 
embarrassment when run on a low-memory system. :-D


This strategy only works for DMD because a compiler is, by its 
very nature, a transient process: you read in source files, 
process them, spit out object files and executables, then you 
exit.  Add to that the assumption that most PCs these days have 
gobs of memory to spare, and this allocation scheme completely 
eliminates memory management overhead. It doesn't matter that 
memory is never freed, because once the process exits, the OS 
reclaims everything anyway.


But such an allocation strategy would not work on anything that 
has to be long-running, or that recycles a lot of memory such 
that you wouldn't be able to fit it all in memory if you didn't 
free any of it.



T


Are we talking about the same thing here? You mentioned DMD, but 
I was talking about programs compiled with DMD (or GDC, LDC), not 
the nature of the DMD compiler in particular.


Bump the pointer and never returning any memory might be 
acceptable for short-lived programs, but it is totally 
unacceptable for long-running programs, like the browser you are 
using right now.


Just to clarify: in a program made in D with the default options, 
will there be absolutely no memory reclamation?




Re: Why many programmers don't like GC?

2021-01-15 Thread IGotD- via Digitalmars-d-learn

On Friday, 15 January 2021 at 14:24:40 UTC, welkam wrote:


No, and it never will. Currently DMD uses a custom allocator 
for almost everything. It works as follows. Allocate a big 
chunk (1MB) of memory using malloc. Keep an internal pointer that 
points to the beginning of unallocated memory. When someone asks 
for memory, return that pointer and increment the internal 
pointer by the 16-byte-aligned size of the allocation. The new 
pointer then points to unused memory, and everything behind the 
pointer has been allocated. This simple allocation strategy is 
called bump the pointer, and it improved DMD performance by ~70%.


You can use the GC with the D compiler by passing the -lowmem 
flag. I didn't measure, but I heard it can increase compilation 
time by 3x.


https://github.com/dlang/dmd/blob/master/src/dmd/root/rmem.d#L153


Actually, druntime uses mmap (Linux) and VirtualAlloc (Windows) 
to obtain more memory. C-library malloc is an option, but it is 
not used on most platforms, and it is also quite wasteful in 
terms of memory because of alignment requirements.


Bump the pointer is a very fast way to allocate memory, but what 
is more interesting is what happens when you return memory. What 
does the allocator do with chunks of free memory? Does it put 
them in a free list, does it merge chunks? I have a feeling that 
bump the pointer is not the complete algorithm D uses, because if 
it were the only one, D would waste a lot of memory.


As far as I can see, it is simply very difficult to create a 
completely lockless allocator. Somewhere down the line there will 
be a lock, even if you don't add one in druntime (the lock will 
be in the kernel instead, when requesting more memory). Merging 
chunks is also difficult without locks.
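The bump-the-pointer scheme quoted above can be sketched in a few lines. This is a toy, single-chunk, non-thread-safe version; the real allocator in rmem.d grabs fresh 1MB chunks from malloc as needed:

```d
// Bump-the-pointer: carve allocations out of one chunk by advancing a
// single offset. Nothing is ever freed individually.
struct BumpAllocator
{
    ubyte[] chunk;
    size_t next;

    void[] allocate(size_t size)
    {
        // 16-byte alignment, as in the description of DMD's allocator.
        size_t aligned = (size + 15) & ~cast(size_t) 15;
        if (next + aligned > chunk.length)
            return null;  // real code would malloc another chunk here
        auto p = chunk[next .. next + size];
        next += aligned;
        return p;
    }
}

void main()
{
    auto a = BumpAllocator(new ubyte[1024]);
    auto x = a.allocate(10);
    auto y = a.allocate(10);
    assert(x !is null && y !is null);
    assert(a.next == 32);  // two 16-byte-aligned slots consumed
}
```

The speed comes from allocation being one add and one compare; the cost, as discussed above, is that returned memory is never reused.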




Re: Why many programmers don't like GC?

2021-01-14 Thread IGotD- via Digitalmars-d-learn

On Thursday, 14 January 2021 at 15:18:28 UTC, ddcovery wrote:


I understand perfectly the D community people that needs to 
work without GC:  **it is not snobbish**:  it is a real need.  
But not only a "need"... sometimes it is basically the way a 
team wants to work:  explicit memory management vs GC.


D already supports manual memory management, so that escape hatch 
was always there. My main criticism of D is the inability to 
freely exchange GC algorithms, as one type of GC might not be the 
best fit for everyone. The problem, of course, is that there is 
no differentiation between raw and fat pointers. With fat 
pointers, the community would have a better opportunity to 
experiment with different GC designs, which would lead to a 
larger palette of GC algorithms.


Re: properly passing strings to functions? (C++ vs D)

2021-01-12 Thread IGotD- via Digitalmars-d-learn

On Tuesday, 12 January 2021 at 18:12:14 UTC, Q. Schroll wrote:


Did you consider `in`? It will do that in some time and do it 
now with -preview=in.
If you're using `const`, in almost all cases, `in` will work, 
too, and be better (and shorter).


Has the redesignation of "in" as in the preview been formally 
accepted as part of the language? I know it was suggested to make 
"in" the optimized parameter passing for const, which I like. 
However, if I'm going to use it I need to know whether it will be 
accepted, as I don't want to go around changing all the 
parameters back again if it is not.
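For reference, the usage under discussion looks like this (the by-value-or-by-reference optimization requires compiling with -preview=in; whether those semantics become permanent is exactly the open question above):

```d
// Under -preview=in, `in` means const plus whatever passing convention
// the compiler judges cheapest for the type (by value or by reference).
void myPrint(in string text)
{
    // text is const here; mutation is rejected at compile time.
}

void main()
{
    myPrint("test");  // literals bind fine, unlike with const ref
}
```

Without the preview switch the code still compiles, since `in` has long been accepted as a parameter storage class; only the optimized calling convention is new.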


Re: properly passing strings to functions? (C++ vs D)

2021-01-11 Thread IGotD- via Digitalmars-d-learn

On Monday, 11 January 2021 at 14:12:57 UTC, zack wrote:


D:
void myPrint(string text){ ... }
void myPrintRef(ref string text) { ... }



In D, strings are immutable, so there will be no copying when 
passing them as function parameters. Strings are essentially 
slices when passed.


I usually use "const string text" because D has no implicit 
creation of temporaries. So using "ref" will not create a 
variable for you. This is contrary to C++, where passing as 
"const std::string&" has a performance benefit and the compiler 
creates an unnamed temporary for you.


ex.

void myFunction1(const string text);
void myFunction2(const ref string text);

myFunction1("test");
myFunction2("test"); // ---> error: cannot create an implicit reference

Then you have to do it like this:

string t = "test";
myFunction2(t);

This will work, but you have to do the extra step of declaring t. 
It is annoying that you cannot write string literals directly as 
function arguments, which is why I do not use "ref"; it doesn't 
have a big benefit in D either.




Re: Avoid deallocate empty arrays?

2020-12-17 Thread IGotD- via Digitalmars-d-learn

On Thursday, 17 December 2020 at 18:42:54 UTC, H. S. Teoh wrote:


Are you sure?

My understanding is that capacity is always set to 0 when you 
shrink an array, in order to force reallocation when you append 
a new element. The reason is this:


int[] data = [ 1, 2, 3, 4, 5 ];
int[] slice = data[0 .. 4];

writeln(slice.capacity); // 0
writeln(data.capacity);  // 7  <--- N.B.
slice ~= 10;

writeln(slice); // [1, 2, 3, 4, 10]
writeln(data);  // [1, 2, 3, 4, 5]

Notice that slice.capacity is 0, but data.capacity is *not* 0, 
even after taking the slice.  Meaning the array was *not* 
deallocated.


Why is slice.capacity set to 0?  So that when you append 10 to 
it, it does not overwrite the original array (cf. last line of 
code above), but instead allocates a new array and appends to 
that.  This is the default behaviour because it's the least 
surprising -- you don't want people to be able to modify 
elements of your array outside the slice you've handed to them. 
If they want to append, they get a copy of the data instead.


In order to suppress this behaviour, use .assumeSafeAppend.


T


How does this connect to the array with zero elements? According 
to your explanation, if I have understood it correctly, capacity 
also indicates whether the pointer has been "borrowed" from 
another array, forcing an allocation whenever the array is 
modified. However, if capacity is zero when the array length is 
zero, then you would get an allocation as well, regardless of 
whether there was a previously allocated array.
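The assumeSafeAppend escape hatch from the quoted explanation can be seen directly: a shrunk slice reports capacity 0, and assumeSafeAppend hands the tail back so appending stays in place. This is only safe if no other slice still references that tail:

```d
void main()
{
    int[] data = [1, 2, 3, 4, 5];

    data.length = 2;              // shrink: the tail may still be aliased
    assert(data.capacity == 0);   // so appending would reallocate

    data.assumeSafeAppend();      // declare: nobody else uses the tail
    assert(data.capacity >= 2);   // in-place appending possible again

    data ~= 99;                   // reuses the original allocation
    assert(data == [1, 2, 99]);
}
```

assumeSafeAppend lives in object.d, so no import is needed; it answers the zero-length case too, since it works regardless of how far the slice was shrunk.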




Re: Avoid deallocate empty arrays?

2020-12-17 Thread IGotD- via Digitalmars-d-learn
On Thursday, 17 December 2020 at 17:46:59 UTC, Steven 
Schveighoffer wrote:


This isn’t correct. Can you post the code that led you to 
believe this?


-Steve


Sure.


import std.algorithm;
import std.typecons;
import std.stdio;

struct Buffer
{
    this(size_t size)
    {
        m_buffer.reserve = size;
    }

    void add(const void[] arr)
    {
        m_buffer ~= cast(ubyte[]) arr;
    }

    string getSome()
    {
        if (m_buffer.length > 0)
            return cast(string) m_buffer[0 .. $];
        else
            return "";
    }

    void remove(size_t size)
    {
        m_buffer = m_buffer.remove(tuple(0, size));
    }

    ubyte[] m_buffer;
}

void main()
{
    Buffer b = Buffer(16);

    b.add("aa");

    writeln("b.m_buffer.length ", b.m_buffer.length,
            ", b.m_buffer.capacity ", b.m_buffer.capacity);

    string s = b.getSome();
    assert(s == "aa");
    b.remove(s.length);

    writeln("b.m_buffer.length ", b.m_buffer.length,
            ", b.m_buffer.capacity ", b.m_buffer.capacity);
}

This will print

b.m_buffer.length 2, b.m_buffer.capacity 31
b.m_buffer.length 0, b.m_buffer.capacity 0

A capacity of 0 suggests that the array has been deallocated.


Re: Avoid deallocate empty arrays?

2020-12-17 Thread IGotD- via Digitalmars-d-learn

On Thursday, 17 December 2020 at 16:46:47 UTC, Q. Schroll wrote:

On Thursday, 17 December 2020 at 16:11:37 UTC, IGotD- wrote:

It's common using arrays for buffering


Outside of CTFE, use an Appender.¹ Unless you're having a 
const/immutable element type, Appender can shrink and reuse 
space.² If you have, reallocation is necessary anyway not to 
break const/immutable' guarantees.


¹ https://dlang.org/phobos/std_array.html#appender
² https://dlang.org/phobos/std_array.html#.Appender.clear


Thank you, I will try this one out.
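For reference, a minimal sketch of the suggested Appender approach for this buffering pattern: clear() resets the length but keeps the allocation (for mutable element types), so repeated fill/drain cycles stop reallocating.

```d
import std.array : appender;

void main()
{
    auto buf = appender!(ubyte[])();
    buf.reserve(16);                       // pre-allocate capacity

    buf ~= cast(const(ubyte)[]) "aa";      // fill
    assert(buf[] == cast(const(ubyte)[]) "aa");

    buf.clear();                           // drain: length 0, storage kept
    assert(buf[].length == 0);
    assert(buf.capacity >= 16);            // no deallocation happened
}
```

This matches the use case in the original question more closely than hand-tracking a position in a raw array, since Appender does the capacity bookkeeping itself.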


Avoid deallocate empty arrays?

2020-12-17 Thread IGotD- via Digitalmars-d-learn
It's common to use arrays for buffering: constantly adding 
elements and then draining them. I have seen that when the number 
of elements reaches zero, the array implementation deallocates 
the array, which shows up as a capacity of zero. This of course 
leads to constant allocation and deallocation of the array.


One workaround is to allocate the array once and then track the 
position yourself, rather than shrinking the array so that the 
length field always tracks the number of elements. This is 
possible, but if you want dynamically increasing capacity you 
have to track that yourself too.


However, is there a way to tell the array not to deallocate, and 
just to grow when necessary?


Re: low-latency GC

2020-12-06 Thread IGotD- via Digitalmars-d-learn
On Sunday, 6 December 2020 at 15:44:32 UTC, Ola Fosheim Grøstad 
wrote:


It was more a hypothetical, as read barriers are too expensive. 
But write barriers should be ok, so a single-threaded 
incremental collector could work well if D takes a principled 
stance on objects not being 'shared' not being handed over to 
other threads without pinning them in the GC.


Maybe a better option for D than ARC, as it is closer to what 
people are used to.


In kernel programming there are plenty of atomically reference 
counted objects. The reason is that if you have a kernel that 
supports SMP, you must have them, because you don't really know 
which CPU is working with a structure at any given time. These 
are often manually reference counted objects, which can lead to 
memory-leak bugs, but those are not that hard to find.


Is automatic atomic reference counting a contender for kernels? 
In kernels you want to reduce the number of count 
increments/decrements. Therefore the Rust approach using 'clone' 
is better, unless there is some optimizer that can figure it out. 
Performance is important in kernels; you don't want the kernel to 
steal useful CPU time that should otherwise go to programs.


In general I think that reference counting should be supported in 
D, not only implicitly but also under the hood with fat pointers. 
This would make D more attractive for performance-sensitive 
applications. Another advantage is that reference counting can 
use malloc/free directly if needed, without a complicated GC 
layer and its associated metadata.


Also, a tracing GC in a kernel is in my opinion not desirable, 
for the reasons I previously mentioned: you want to reduce 
metadata, CPU time, and fragmentation. Special allocators for 
particular structures are often used.




Re: low-latency GC

2020-12-06 Thread IGotD- via Digitalmars-d-learn
On Sunday, 6 December 2020 at 11:07:50 UTC, Ola Fosheim Grostad 
wrote:


ARC can be done incrementally, we can do it as a library first 
and use a modified version existing GC for detecting failed 
borrows at runtime during testing.


But all libraries that use owning pointers need ownership to be 
made explicit.


A static borrow checker an ARC optimizer needs a high level IR 
though. A lot of work though.


The Rust approach is interesting as it doesn't need an ARC 
optimizer. Everything is a move, so no increment/decrement is 
done for that. The count is increased only when the programmer 
decides to 'clone' the reference. This is inherently optimized 
without any compiler support. However, it requires that the 
programmer inserts 'clone' where necessary, so it isn't really 
automatic.


I was thinking about how to deal with this in D, and the question 
is whether it would be better to be able to make move the default 
on a per-type basis. That way we could implement Rust-style 
reference counting without intruding too much on the rest of the 
language. The question is whether we want this, or whether we 
should go for a fully automated approach where the programmer 
doesn't need to worry about 'clone'.


Re: converting D's string to use with C API with unicode

2020-12-05 Thread IGotD- via Digitalmars-d-learn

On Saturday, 5 December 2020 at 20:12:52 UTC, IGotD- wrote:

On Saturday, 5 December 2020 at 19:51:14 UTC, Jack wrote:

So in D I have a struct like this:


struct ProcessResult
{
string[] output;
bool ok;
}


in order to use output from C WINAPI with unicode, I need to 
convert each string to wchar* so that i can acess it from C 
with wchar_t*. Is that right or am I missing anything?




struct ProcessResult
{
string[] output;
bool ok;

C_ProcessResult toCResult()
{
auto r = C_ProcessResult();
r.ok = this.ok; // just copy, no conversion needed
foreach(s; this.output)
r.output ~= cast(wchar*)s.ptr;
return r;
}
}



version(Windows) extern(C) export
struct C_ProcessResult
{
wchar*[] output;
bool ok;
}


I would just use std.encoding

https://dlang.org/phobos/std_encoding.html

and use transcode

https://dlang.org/phobos/std_encoding.html#transcode


Forget my previous post, I didn't see the arrays.

extern(C) has no knowledge of D arrays; I think you need to use 
wchar** instead of []. Keep in mind you need to store the lengths 
as well, unless you use zero-terminated strings.
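For the conversion itself, one option (a sketch; error handling omitted) is std.utf.toUTF16z, which produces the zero-terminated UTF-16 copy that a wchar_t* WINAPI expects:

```d
import std.utf : toUTF16z;

void main()
{
    string s = "hello";                 // UTF-8 on the D side
    const(wchar)* p = s.toUTF16z();     // NUL-terminated UTF-16 copy

    // The C side sees an ordinary zero-terminated wide string.
    assert(p[0] == 'h');
    assert(p[5] == 0);
}
```

This avoids the bug of casting a char* to wchar*: the bytes are actually re-encoded, not merely reinterpreted.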




Re: converting D's string to use with C API with unicode

2020-12-05 Thread IGotD- via Digitalmars-d-learn

On Saturday, 5 December 2020 at 19:51:14 UTC, Jack wrote:

So in D I have a struct like this:


struct ProcessResult
{
string[] output;
bool ok;
}


in order to use output from C WINAPI with unicode, I need to 
convert each string to wchar* so that i can acess it from C 
with wchar_t*. Is that right or am I missing anything?




struct ProcessResult
{
string[] output;
bool ok;

C_ProcessResult toCResult()
{
auto r = C_ProcessResult();
r.ok = this.ok; // just copy, no conversion needed
foreach(s; this.output)
r.output ~= cast(wchar*)s.ptr;
return r;
}
}



version(Windows) extern(C) export
struct C_ProcessResult
{
wchar*[] output;
bool ok;
}


I would just use std.encoding

https://dlang.org/phobos/std_encoding.html

and use transcode

https://dlang.org/phobos/std_encoding.html#transcode



Re: Development: Work vs Lazy Programmers... How do you keep sanity?

2020-12-03 Thread IGotD- via Digitalmars-d-learn

On Thursday, 3 December 2020 at 15:18:31 UTC, matheus wrote:

Hi,

I didn't know where to post this and I hope this is a good 
place.


I'm a lurker in this community and I read a lot of discussions 
on this forum and I think there a lot of smart people around 
here.


So I'd like to know if any of you work with Lazy or even Dumb 
programmers, and If yes how do you keep your sanity?


Matheus.

PS: Really I'm almost losing mine.


When you are out with your horse it often gets dirty, sand and 
small stones get stuck in the fur. Small stones can also grind 
between the saddle and the horse which can cause wounds. 
Therefore it is important to groom the horse properly before 
getting out. Many people don't take this seriously and skip this 
important step. If you groom your horse properly each time I 
think you will have a much better time and your horse will 
perform better.


Re: Druntime undefined references

2020-11-02 Thread IGotD- via Digitalmars-d-learn

On Monday, 2 November 2020 at 10:50:06 UTC, Severin Teona wrote:

Hi guys!

I built the druntime for an ARM Cortex-M based microcontroller, 
and I'm trying to create an application and link it with the 
druntime. I am also using TockOS[1], which does not implement 
POSIX thread calls and other OS-dependent functionality. As I 
was looking through the errors I got, I found some functions 
and I don't know what they do, or how I should solve the errors.


The first one is:
libdruntime-ldc.a(dwarfeh.o): in function 
`_d_eh_personality_common': 
dwarfeh.d:(.text._d_eh_personality_common[_d_eh_personality_common]+0x2c): undefined reference to `_d_eh_GetIPInfo'


and the second one is:
dl.d:(.text._D4core8internal3elf2dl12SharedObject14thisExecutableFNbNiZSQCgQCeQByQBxQBx[_D4core8internal3elf2dl12SharedObject14thisExecutableFNbNiZSQCgQCeQByQBxQBx]+0x1a):
 undefined reference to `dl_iterate_phdr'
(the druntime was build as a static library, because I can’t 
use dynamic libraries on a microcontroller)


Does anyone know what these exactly do and how 
important/essential are they? Is there any way I could solve 
them?


Thank a lot.

[1]: https://www.tockos.org


https://man7.org/linux/man-pages/man3/dl_iterate_phdr.3.html

dl_iterate_phdr is used to iterate over the ELF program headers 
and is used, among other things, to produce a textual backtrace. 
These calls are typically found on Linux systems; if you don't 
have them, you have to replace them with your own or just remove 
the calls altogether.


Much of the ELF stuff can be removed. The only thing that might 
be essential is that druntime needs to know where each thread's 
TLS data is, for scanning.




Re: Looking for a Simple Doubly Linked List Implementation

2020-10-29 Thread IGotD- via Digitalmars-d-learn

On Thursday, 29 October 2020 at 22:02:52 UTC, Paul Backus wrote:


I'm pretty sure the post you replied to is spam.


Yes, when I read the post again it is kind of hollow.


Re: Looking for a Simple Doubly Linked List Implementation

2020-10-29 Thread IGotD- via Digitalmars-d-learn

On Thursday, 29 October 2020 at 18:06:55 UTC, xpaceeight wrote:

https://forum.dlang.org/post/bpixuevxzzltiybdr...@forum.dlang.org

It contains the data and a pointer to the next and previous 
linked list node. This is given as follows. struct Node { int 
data; struct Node *prev; struct Node *next; }; The function 
insert() inserts the data into the beginning of the doubly 
linked list. https://jiofilocalhtml.run https://forpc.onl


Is this what you are looking for?
https://dlang.org/phobos/std_container_dlist.html


Re: What is the difference between enum and shared immutable?

2020-10-29 Thread IGotD- via Digitalmars-d-learn

On Thursday, 29 October 2020 at 16:45:51 UTC, Ali Çehreli wrote:


import std;

immutable string p;

shared static this() {
  p = environment["PATH"];  // <-- Run time
}



Just to clarify: immutable is allowed to be initialized in ctors, 
but not any later than that? Moving p = environment["PATH"] to 
main would generate an error.




Re: What is the difference between enum and shared immutable?

2020-10-29 Thread IGotD- via Digitalmars-d-learn
On Thursday, 29 October 2020 at 12:21:19 UTC, Ola Fosheim Grøstad 
wrote:


You can test this with is(TYPE1==TYPE2)

is(shared(immutable(int))==immutable(int))


So I got true, which means that shared immutable is exactly the 
same as immutable. Shared is implicit for immutable, which makes 
sense.


That means that the documentation
https://dlang.org/articles/migrate-to-shared.html#immutable

is correct at least this one time.
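The observation above compiles down to a one-line compile-time check:

```d
// immutable implies shared, so the shared qualifier is absorbed and
// shared(immutable T) is the same type as immutable T.
static assert(is(shared(immutable(int)) == immutable(int)));

void main() {}
```

Putting it in a static assert means any future change to the qualifier-combination rules would be caught at compile time.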


Re: What is the difference between enum and shared immutable?

2020-10-28 Thread IGotD- via Digitalmars-d-learn

On Wednesday, 28 October 2020 at 21:54:19 UTC, Jan Hönig wrote:


shared immutable x = 1;



Is there a point in adding shared to an immutable? Isn't 
immutable implicitly also shared?


Unexpected behaviour using remove on char[]

2020-10-25 Thread IGotD- via Digitalmars-d-learn

I have a part in my code that use remove

buffer.remove(tuple(0, size));

with

char[] buffer

What I discovered is that remove doesn't remove size number of 
bytes; it removes entire multibyte characters, counting each as 
one step. The result was of course that I got out-of-bounds 
exceptions as it went past the end.


When I changed char[] to ubyte[] my code started to work 
correctly again.


According to the documentation a char is an "unsigned 8 bit 
(UTF-8 code unit)", so you would believe you are working on 
bytes. I presume that under the hood there are range iterators at 
work, and those operate on multibyte characters. However, you can 
optionally iterate over single code units as well, and you don't 
know what happens underneath.


I'm a bit confused, when should I expect that the primitives work 
with single versus multibyte chars in array operations?
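The mismatch can be seen directly with walkLength, which iterates a range the same way the algorithms do (the string literal here is just an illustration):

```d
import std.range : walkLength;

void main()
{
    char[] s = "héllo".dup;    // 'é' is two UTF-8 code units
    assert(s.length == 6);     // .length counts code units (bytes)
    assert(s.walkLength == 5); // ranges auto-decode char[] into code points

    ubyte[] b = cast(ubyte[]) s;
    assert(b.walkLength == 6); // ubyte[] iterates plain bytes, no decoding
}
```

This is why index-based bookkeeping in code units goes out of bounds when the algorithm is secretly stepping in code points.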


Re: More elaborate asserts in unittests?

2020-10-21 Thread IGotD- via Digitalmars-d-learn

On Wednesday, 21 October 2020 at 23:54:41 UTC, bachmeier wrote:


Click the "Improve this page" link in the upper right corner 
and add what you think needs to be there. Those PRs usually get 
a fast response.


Will do, thank you for the direction.


Re: More elaborate asserts in unittests?

2020-10-21 Thread IGotD- via Digitalmars-d-learn
On Wednesday, 21 October 2020 at 22:41:42 UTC, Adam D. Ruppe 
wrote:


try compiling with dmd -checkaction=context


Thanks, that was simple and it worked. Speaking of this, 
shouldn't this be documented here, for example:


https://dlang.org/spec/unittest.html

Just adding a friendly tip that -checkaction=context gives the 
user more information. I couldn't find this information anywhere.






More elaborate asserts in unittests?

2020-10-21 Thread IGotD- via Digitalmars-d-learn
When an assert fails in a unittest, I only get which line that 
failed. However, it would be very useful to see what the values 
are on either side of the unary boolean expression. Is this 
possible?


Re: Druntime without pthreads?

2020-10-20 Thread IGotD- via Digitalmars-d-learn

On Tuesday, 20 October 2020 at 16:58:12 UTC, Severin Teona wrote:

Hi guys.

I have a curiosity, regarding [1] - I had encountered some 
"undefined reference" errors when trying to link the druntime 
(compiled for an embedded architecture) without some 
implementation of the POSIX thread calls (and other stuff too).


My curiosity is what would change if I removed from the 
druntime everything that has to do with mutexes or threads. 
Would it be possible for the druntime to run and work properly 
on a microcontroller - where those concepts are not necessary? 
Could I just remove everything about synchronisation from the 
druntime, and classes or Garbage Collector to still work 
properly?


[1]: 
https://forum.dlang.org/post/erwfgtigvcciohllv...@forum.dlang.org


Yes, you can stub the thread and mutex interfaces. You will of 
course not be able to use threads, or any part of the library 
that uses them, but you will be able to get to the main entry 
point and run a single-threaded D program.


This is something we would want with the druntime API change: a 
stub that people can use when they don't need the functionality, 
or as a starting point for implementing their own version.




Re: Undefined references in Druntime for microcontrollers

2020-10-19 Thread IGotD- via Digitalmars-d-learn

On Monday, 19 October 2020 at 06:25:17 UTC, Severin Teona wrote:


- 'munmap'
- 'clock_gettime'
- `pthread_mutex_trylock'
etc.



These are typically calls found in a Unix system, Linux for 
example. In a microcontroller you will likely not support these 
at all except clock_gettime.


You need to dissect druntime a bit further to remove these and/or 
find a replacement that does the same thing.
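A hedged sketch of the stubbing approach for a single-threaded bare-metal target. The void* parameters stand in for the real pthread types, and the exact set of functions druntime calls must be checked against your port; always "succeeding" is only sound when there is exactly one thread.

```d
// Hypothetical single-threaded stubs; real signatures use pthread_mutex_t*.
extern (C)
{
    int pthread_mutex_lock(void* m)    { return 0; } // lock "always succeeds"
    int pthread_mutex_unlock(void* m)  { return 0; }
    int pthread_mutex_trylock(void* m) { return 0; }
}
```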




Re: why do i need an extern(C): here?

2020-10-15 Thread IGotD- via Digitalmars-d-learn

On Thursday, 15 October 2020 at 21:29:59 UTC, WhatMeWorry wrote:


I've go a small DLL and a test module both written in D. Why do 
I need to use the extern(C)?  Shouldn't both sides be using D 
name wrangling?




You have answered your own question. Without extern(C), D, just 
like C++, mangles the name, so the exported symbol is not 
"addSeven" but contains extra characters as well. GetProcAddress, 
which is a Windows call, has no idea about D name mangling and 
therefore will not find the function.
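A sketch of the difference (function names are from the question; the comments describe the symbol names only loosely):

```d
// D linkage: the exported symbol is D-mangled (module, name, and parameter
// types all encoded), so GetProcAddress("addSeven") cannot find it.
export int addSevenD(int x) { return x + 7; }

// C linkage: the exported symbol is plain "addSeven" (modulo the usual
// platform decoration), which GetProcAddress can look up directly.
extern (C) export int addSeven(int x) { return x + 7; }
```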


Re: Link Time Optimization Bitcode File Format

2020-10-06 Thread IGotD- via Digitalmars-d-learn

On Tuesday, 6 October 2020 at 16:46:28 UTC, Severin Teona wrote:

Hi all,

I am trying to build the druntime with the 'ldc-build-runtime' 
tool for microcontrollers (using the arm-none-eabi-gcc 
compiler) and therefore the size of the druntime should be as 
little as possible. One solution I had was to use Link Time 
Optimization (LTO) to reduce the size.


The problem now is the fact that when I compile the druntime 
with -flto=full or -flto=thin (as arguments for LDC2), the 
resulted object files (and also a big part of the runtime as it 
is a static library) have a different file format - LLVM IR 
bitcode - than I need, which is ELF 32-bit. Also, when I try to 
link the druntime with the application I want to write on the 
microcontroller, there are some link errors due to the file 
format.
I also tried using a different archiver - llvm-ar - but I had 
no luck.


Could you give me some advice about how should I fix this?

Thank you!


I have run into the same problem when using GNU ld. The problem 
is that my version of GNU ld, 2.30.0.20180329 (which is ancient), 
cannot deal with the object file format when LTO is enabled. One 
way is to try LLD (LLVM's own linker) and see if that one can 
handle it.





Re: Taking arguments by value or by reference

2020-10-04 Thread IGotD- via Digitalmars-d-learn

On Saturday, 3 October 2020 at 23:00:46 UTC, Anonymouse wrote:
I'm passing structs around (collections of strings) whose 
.sizeof returns 432.


The readme for 2.094.0 includes the following:

This release reworks the meaning of in to properly support all 
those use cases. in parameters will now be passed by reference 
when optimal, [...]


* Otherwise, if the type's size requires it, it will be passed 
by reference. Currently, types which are over twice the machine 
word size will be passed by reference; however, this is 
controlled by the backend and can be changed based on the 
platform's ABI.


However, I asked in #d a while ago and was told to always pass 
by value until it breaks, and only then resort to ref.


[18:32:16]  at what point should I start passing my 
structs by ref rather than by value? some are nested in 
others, so sizeofs range between 120 and 620UL

[18:33:43]  when you start getting stack overflows
[18:39:09]  so if I don't need ref for the references, 
there's no inherent merit to it unless I get in trouble 
without it?

[18:39:20]  pretty much
[18:40:16]  in many cases the copying is merely 
theoretical and doesn't actually happen when optimized


I've so far just been using const parameters. What should I be 
using?


I don't agree with this, especially if the struct is 432 bytes. 
It takes time and memory to copy such a structure. I always use 
"const ref" when I pass structures, because that passes only a 
pointer. Classes are reference types by themselves, so it's not 
applicable there. I use plain "ref" only when I want to modify 
the contents.


However, there are some exceptions to this rule in D, as D 
supports slice parameters. In this case you do want a copy of the 
slice itself, often because the slice was cast from something 
else. Basically, the array slice parameter becomes an lvalue.
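A sketch of the two cases (the 432-byte size comes from the question; the function names are hypothetical):

```d
struct Big { ubyte[432] data; }

// const ref: only a pointer crosses the call, no 432-byte copy is made
size_t sum(const ref Big b)
{
    size_t s;
    foreach (x; b.data) s += x;
    return s;
}

// a slice is already just (pointer, length), so passing it by value is
// cheap, and the callee can re-slice its copy without affecting the caller
ubyte first(ubyte[] s) { return s[0]; }
```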


This copying of parameters to the stack is an abomination in 
computer science, useful in some cases but mostly not. The best 
would be if the compiler itself could determine what is most 
efficient. Nim does this, and not long ago it was suggested that 
the "in" keyword should get a new life as exactly such an 
optimization. Is that the change that entered in 2.094.0? Why 
wasn't this a DIP?


I even see this in some C++ code where strings are passed by 
value, which means the string is copied, including a possible 
memory allocation, which certainly slows things down.


Do not listen to people who say "pass everything by value", 
because that is in general not ideal in imperative languages.




Re: What classification should shared objects in queued thread pools have?

2020-10-01 Thread IGotD- via Digitalmars-d-learn
On Thursday, 1 October 2020 at 14:12:24 UTC, Ola Fosheim Grøstad 
wrote:


Also, atomic operations on members do not ensure the integrity 
of the struct. For that you need something more powerful 
(complicated static analysis or transactional memory).


I'm very wary of being able to cast away shared, it might 
completely negate all the advertised (memory management) 
optimization opportunities for shared.


For that to work you need some kind of "unshared" or "borrowed" 
like concept.


Making all variables atomic in a shared struct is as intelligent 
as putting all the hands at 4:00 on an analogue alarm clock if 
you want to wake up at 4:00.


Atomic operations in themselves do not ensure thread safety; you 
can still have races, and the lockless algorithm might not be 
watertight. They can be very difficult to design. Sometimes this 
shows up months after a product has gone live.


Furthermore, there is also the possibility of using locking 
primitives (mutexes, read-write locks) inside a shared struct to 
ensure thread safety. In that case you really don't need all the 
data member operations to be atomic.


In order to have an "everything allowed" struct like in C++, 
shouldn't __gshared also work, so that the allocator can 
successfully do its operations from several threads?




Re: What classification should shared objects in queued thread pools have?

2020-09-30 Thread IGotD- via Digitalmars-d-learn

On Thursday, 1 October 2020 at 00:00:06 UTC, mw wrote:



I think using `shared` is the D's encouraged way.

If there is a better way do this in D, I'd want to know it too.


I think that the shared in shared structs should not be 
transitive to members of the struct. The compiler should not 
enforce this as we don't really know what the programmer will do 
inside the struct to ensure the thread safety.


For example completely lockless algorithms can often be a 
combination of atomic operations and also non-atomic operations 
on data members.


I originally thought that DIP 1024 only applied for integer types 
alone (not inside structs). I don't really understand the 
rationale why a shared struct should all have atomic integers, it 
doesn't make any sense.


What classification should shared objects in queued thread pools have?

2020-09-30 Thread IGotD- via Digitalmars-d-learn
I have a system that heavily relies on thread pools. Typically 
this is used with items that are put on a queue, and a 
thread-pool system processes this queue. The thread pool can be 
configured to process the items in whatever parallel fashion it 
wants, but usually it is set to one, which means no special 
concurrency protection is needed, as serialization is guaranteed. 
One item is processed at a time.


So imagine that one item is being processed and then put on some 
kind of dynamic data structure. During this operation, 
allocations and deallocations can happen. Because we have a 
thread pool, these memory operations can happen in different 
threads.


This is where the memory model of D starts to become confusing 
for me. By default, memory allocations/deallocations are not 
allowed between threads; however, marking the object shared 
circumvents this. This seems to work, as there are no more aborts 
from the D memory management. However, it has a weird side 
effect: now the compiler wants all my integer member variables to 
be operated on with atomic primitives. I don't need this; I know 
that this object will be used sequentially.


Is shared the wrong way to go here and is there another way to 
solve this?





Re: Memory management

2020-09-29 Thread IGotD- via Digitalmars-d-learn

On Tuesday, 29 September 2020 at 15:47:09 UTC, Ali Çehreli wrote:


I am not a language expert but I can't imagine how the compiler 
knows whether an event will happen at runtime. Imagine a server 
program allocates memory for a client. Let's say, that memory 
will be deallocated when the client logs out or the server 
times that client out. The compiler cannot know either of that 
will ever happen, right?


Ali


It doesn't need to know when a certain event happens; it works 
based on ownership and program flow. Let's think about Rust for a 
moment.


You get a connection and you allocate some metadata about it. 
Then you put that metadata on a list and the list owns that 
object. It will stay there as long there is a connection and the 
program can go and do other things, the list still owns the 
metadata.


After a while the client disconnects, and the program finds the 
metadata in the list, removes it, and puts it in a local 
variable. Some cleanup is done, but the metadata is still owned 
by the local variable, and when that variable goes out of scope 
(basically the end of a {} block), it will be deallocated.


It's really a combination of ownership and scopes.

That simple example shows that you don't necessarily need to know 
things in advance in order to have static lifetimes. However, 
there are cases where the compiler has no possibility of 
inferring when an object goes out of scope. Multiple ownership is 
an obvious example of that.
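D can express the scope half of this with deterministic struct destruction; a small sketch (the Metadata type is made up for illustration):

```d
import std.stdio : writeln;

struct Metadata
{
    int id;
    ~this() { writeln("deallocated ", id); } // deterministic cleanup
}

void main()
{
    {
        auto m = Metadata(42); // the local variable owns the metadata
    } // end of the {} block: the destructor runs, the resource is reclaimed
    writeln("after scope");
}
```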





Cmake dependency scanning

2020-09-27 Thread IGotD- via Digitalmars-d-learn
Do we have any dependency scanning available for D in CMake, just 
like the built-in C/C++ dependency scanner (which is handy), or 
do you have to use the option to compile everything as one module 
(--deps=full)?


I have some problems when there is a mix of inlining and calling 
the separately compiled version from the same module, obviously.


Re: Methods for expanding class in another class/struct

2020-09-27 Thread IGotD- via Digitalmars-d-learn

On Saturday, 26 September 2020 at 11:30:23 UTC, k2aj wrote:


It does work, the problem is that scoped returns a Voldemort 
type, so you have to use 
typeof(scoped!SomeClass(someConstructorArgs)) to declare a 
field. Gets really annoying when doing it with any class that 
doesn't have a zero-argument constructor, especially in generic 
code.


class Foo {}

class Bar {
typeof(scoped!Foo()) foo;
this() {
foo = scoped!Foo();
}
}


Thanks, so what we really need is a new scoped-like template that 
declares the variables in the class, and possibly a template for 
running the constructor?


Re: Methods for expanding class in another class/struct

2020-09-26 Thread IGotD- via Digitalmars-d-learn
One thing that struck me looking at the source code of scoped, 
would scoped work inside a class and not only for stack 
allocations?





Methods for expanding class in another class/struct

2020-09-26 Thread IGotD- via Digitalmars-d-learn
We know that classes are reference types, so classes must be 
allocated on the heap. However, this memory could be taken from 
anywhere, so basically it could be a static array inside another 
class. This is pretty much what the scoped template does when 
allocating a class on the stack. In practice, allocating a class 
inside another class could be done in a similar fashion.


Do we have a good example showing how to expand a class inside 
another class?
Shouldn't we have a standard template, similar to scoped, that 
statically allocates space inside the class? This template should 
be available in the standard library.


Re: Building LDC runtime for a microcontroller

2020-09-19 Thread IGotD- via Digitalmars-d-learn

On Friday, 18 September 2020 at 07:44:50 UTC, Dylan Graham wrote:


I use D in an automotive environment (it controls parts of the 
powertrain, so yeah there are cars running around on D) on 
various types of ARM Cortex M CPUs, I think this will be the 
best way to extend D to those platforms.




Do I dare to ask what brand of cars are running D code? Maybe 
you're a supplier that sells products to several car brands.




Re: vibe.d: How to get the conent of a file upload ?

2020-09-19 Thread IGotD- via Digitalmars-d-learn
On Saturday, 19 September 2020 at 19:27:40 UTC, Steven 
Schveighoffer wrote:


I used Kai's book, and yeah, you have to do things the vibe 
way. But most web frameworks are that way I think.




Do you have a reference to this book (web link, ISBN)?




Re: Proper way to exit with specific exit code?

2020-09-18 Thread IGotD- via Digitalmars-d-learn

On Friday, 18 September 2020 at 05:02:21 UTC, H. S. Teoh wrote:


That's the obvious solution, except that actually implementing 
it is not so simple.  When you have multiple threads listening 
for each other and/or doing work, there is no 100% guaranteed 
way of cleanly shutting all of them down at the same time.  You 
can't just clean up the calling thread and leave the others 
running, because the other threads might hold references to 
your data, etc..  But there's no universal protocol for 
shutting down the other threads too -- they could be in a busy 
loop with some long-running computation, or they may not be 
checking for thread messages, or they could be in a server loop 
that is designed to keep running, etc..  It's one of those 
annoying things that reduce to the halting problem in the 
general case.


Unless we adopt some kind of exit protocol that will apply to 
*all* threads in *all* D programs, I don't see any way to 
implement something that will work in the general case.



T


I think a pragmatic solution is just to mutex-protect the D exit 
function in case several threads try to use it simultaneously. 
Then if more threads call exit, it will do nothing, as the first 
one that called exit actually does the tear-down.


Also, it should be the responsibility of the program to ensure 
that its tear-down code runs before calling the D exit function. 
That's the only way I can think of, because waiting for all other 
threads to release their resources and exit isn't really 
realistic either, as that might mean the program exit never 
happens. Whatever you do, you have to resort to some "manual" 
solution.


I suggest keeping it simple and stupid.



Re: Proper way to exit with specific exit code?

2020-09-17 Thread IGotD- via Digitalmars-d-learn

On Thursday, 17 September 2020 at 14:58:48 UTC, drathier wrote:

What's the proper way to exit with a specific exit code?

I found a bunch of old threads discussing this, making sure 
destructors run and the runtime terminates properly, all of 
which seemingly concluding that it's sad that there isn't a way 
to do this easily, but hopefully things have changed in the 
last 5-10 years and I'm just missing the obvious solution.


The only way is to return from main. The thing is that druntime 
runs initialization before main, and on returning from main it 
runs all the tear-down code, including cleaning up the GC. This 
means there is no equivalent of the C library's exit function. 
Calling exit from D means there will be no cleanup of the D 
environment.


This is a bit limiting for my needs, for example. I would like 
exiting from main not to tear down the D runtime, because my 
system is message driven: main just sets up the program and then 
returns, but the program continues to react to messages. Many 
libraries like Qt circumvent this by parking the main thread as 
an event handler, but this doesn't fit my system and would waste 
one thread resource. Finally, to exit the program I have an 
equivalent of the C library exit function. Creating a similar 
exit function in D would be trivial, really.


Re: Building LDC runtime for a microcontroller

2020-09-07 Thread IGotD- via Digitalmars-d-learn

On Monday, 7 September 2020 at 19:12:59 UTC, aberba wrote:


How about an alternative runtime + standard library for 
embedded systems...with a least bare minimum. I've seen a 
number of efforts to get D to run in those environments but 
almost none of them is packaged for others to consume.


Adam D. Ruppe's book "D Cookbook" describes another way, by 
augmenting object.d in order to get "full D" to compile. I guess 
this was written before betterC existed. It is similar to 
betterC: a very naked system without druntime. To be honest, I 
like this approach better, as it opens up gradually adding 
functionality.


A small runtime + standard library is about the only possibility 
in order to fit into those microcontroller systems. 
Alternatively, it might be better to just start from scratch and 
implement the often limited functionality they require. The 
problem, as I mentioned before, is that OS-dependent stuff is 
mixed with OS-independent. I think the OS-independent parts 
should be broken out so that they can more easily be used for 
embedded programming.


Memory management can be a problem too. OS-independent library 
code might expect full GC support, and there seems to be no 
documentation of which functions do and which don't. I was 
thinking GC might not be that much of a problem for medium-sized 
microcontroller systems: in practice you can have a fixed pool 
that is initialized from the beginning, non-expandable. Still, 
there is a GC memory-overhead penalty.



In the C/C++ world, generic standard C libraries for embedded 
systems are rare, often unfinished, limited, or GPL-soiled, so 
there are difficulties there as well. Often there is a lot of 
POSIX filth in them, as they are assumed to run on some kind of 
Linux system.




Re: Building LDC runtime for a microcontroller

2020-09-07 Thread IGotD- via Digitalmars-d-learn

On Monday, 7 September 2020 at 15:23:28 UTC, Severin Teona wrote:


I would also appreciate any advice regarding ways to build or 
create a small runtime for microcontrollers (runtime that can 
fit in the memory of a microcontroller).

Thank you very much,
Teona

[1]: https://wiki.dlang.org/Building_LDC_runtime_libraries
[2]: 
https://github.com/golang/go/issues/36633#issuecomment-576411479


Use betterC, which is much better suited for microcontrollers 
than the full D. The disadvantage is that many great features are 
disabled in betterC.


I have ported druntime/phobos to my system. This is a pretty 
large job, because the structure of druntime/phobos is not very 
good for porting to new systems: it's a cascade of version(this) 
{} else version(that) {}. Some functionality must be ported; some 
can just be stubbed.


Keep in mind that you in general have to port phobos as well 
because it contains many useful functions like parsing and 
conversion. The OS dependent stuff is mixed together with OS 
independent.


For an ARM target I get a compiled size of about 500KB for a 
simple Hello World program when linked statically. This isn't 
really microcontroller size to begin with, and the size quickly 
increases as you start to use more modules from druntime/phobos.


Another interesting observation is that druntime has an option to 
use a C library's malloc/free rather than mmap/VirtualAlloc for 
GC allocations. What druntime then does is over-allocate, because 
it requires page-aligned memory. The result is that this 
workaround wastes a lot of memory.


The conclusion is that D as it is isn't really suitable for 
systems that are memory limited or lack an MMU (one reason being 
that shared libraries don't work). D is like C++ with full STL 
support, which is also very large. Embedded programmers who use 
C++ almost never use the STL because of this, among other things.




Re: miscellaneous array questions...

2020-07-21 Thread IGotD- via Digitalmars-d-learn

On Tuesday, 21 July 2020 at 13:23:32 UTC, Adam D. Ruppe wrote:


But the array isn't initialized in the justification scenario. 
It is accessed through a null pointer and the type system 
thinks it is fine because it is still inside the static limit.


At run time, the cpu just sees access to memory address 0 + x, 
and if x is sufficient large, it can bypass those guard pages.


I'm not that convinced. This totally depends on what the virtual 
memory for the process looks like. Some operating systems might 
have a gap between 0 and 16MB, but some others don't. This can 
also change between versions of the OS, and it becomes even more 
uncertain as address space layout randomization becomes popular. 
Safety based on assumptions isn't really worth it.


I don't personally care about the 16MB limit, as I would never 
hit it in any foreseeable future, but the motivation for it is 
kind of vague.


Re: miscellaneous array questions...

2020-07-21 Thread IGotD- via Digitalmars-d-learn

On Tuesday, 21 July 2020 at 12:34:14 UTC, Adam D. Ruppe wrote:


With the null `a`, the offset to the static array is just 0 + 
whatever and the @safe mechanism can't trace that.


So the arbitrary limit was put in place to make it more likely 
that such a situation will hit a protected page and segfault 
instead of carrying on. (most low addresses are not actually 
allocated by the OS... though there's no reason why they 
couldn't, it just usually doesn't, so that 16 MB limit makes 
the odds of something like this actually happening a lot lower)


I don't recall exactly when this was discussed but it came up 
in the earlier days of @safe, I'm pretty sure it worked before 
then.


If that's the case, I would consider this 16MB limit unnecessary. 
Most operating systems put a guard page at the very bottom of the 
stack (which is usually 1MB - 4MB; usually 1MB on Linux). Either 
the array will hit that page during initialization, or something 
else will during execution.


Let's say someone puts a 15MB array on the stack: then we will 
have a page fault for sure, and this artificial limit is there 
for nothing. With 64 bits or more, some future operating system 
might support large stack sizes like 256MB. This is a little like 
the 640kB limit.


Re: miscellaneous array questions...

2020-07-21 Thread IGotD- via Digitalmars-d-learn

On Monday, 20 July 2020 at 22:05:35 UTC, WhatMeWorry wrote:


2) "The total size of a static array cannot exceed 16Mb" What 
limits this? And with modern systems of 16GB and 32GB, isn't 
16Mb excessively small?   (an aside: shouldn't that be 16MB in 
the reference instead of 16Mb? that is, Doesn't b = bits and B 
= bytes)




I didn't know this, but it makes sense, and I guess this is a 
constraint of the D language itself. In practice, 16MB should be 
well enough for most cases. I'm not sure where 16MB is taken 
from: whether there is any OS out there with this limitation, or 
whether it was just picked as an adequate limit.


Let's say you have a program with 4 threads; then suddenly the 
TLS area is 4 * 16MB = 64MB. This size rapidly increases with the 
number of threads and the TLS area size. Say a TLS area of 128MB 
and 8 threads, which gives you a memory consumption of 1GB. 
That's how quickly it starts to consume memory if you don't limit 
the TLS variables.


If you want global variables like in good old C/C++, then use 
__gshared. Of course, you then have to take care of any 
concurrent accesses from several threads.
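A small sketch of the difference (variable names are made up):

```d
import core.thread : Thread;

__gshared int globalCounter; // one instance shared by all threads, C/C++ style
int tlsCounter;              // D default: one copy per thread (TLS)

void main()
{
    tlsCounter = 1;
    new Thread({
        assert(tlsCounter == 0); // the new thread gets a fresh TLS copy
        globalCounter++;         // unsynchronized! guard real code with a lock
    }).start().join();
    assert(globalCounter == 1);
}
```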




Re: Good way to send/receive UDP packets?

2020-07-18 Thread IGotD- via Digitalmars-d-learn

On Saturday, 18 July 2020 at 16:00:09 UTC, Dukc wrote:
I have a project where I need to take and send UDP packets over 
the Internet. Only raw UDP - my application uses packets 
directly, with their starting `[0x5a, packet.length.to!ubyte]` 
included. And only communication with a single address, no need 
to communicate with multiple clients concurrently.


I understand that I could do it either with the Curl library 
bundled with Phobos, or use Vibe.D or Hunt instead. But it's 
the first time I'm dealing with low-level networking like this, 
and my knowledge about it is lacking. So seek opinions about 
what library I should use, and more importantly, why.


Other advice about projects like this is also welcome, should 
anyone wish to share it.


D has socket wrapper interfaces just as many other languages.

https://dlang.org/phobos/std_socket.html


Re: What's the point of static arrays ?

2020-07-09 Thread IGotD- via Digitalmars-d-learn

On Thursday, 9 July 2020 at 18:51:47 UTC, Paul Backus wrote:


Note that using VLAs in C is widely considered to be bad 
practice, and that they were made optional in the C11 standard.


If you want to allocate an array on the stack, the best way is 
to use a static array for size below a predetermined limit, and 
fall back to heap allocation if that limit is exceeded. An easy 
way to do this in D is with 
`std.experimental.allocator.showcase.StackFront`.


I know, but I really like them because they solve a few obstacles 
for me. If you have a maximum allowed size, then they should be 
alright, but opinions differ here.


I do not recommend allocating a 'max' static size if you only use 
a fraction of it. The reason is that it will possibly fault in 
more stack pages than necessary, leading to unnecessary memory 
consumption, especially if you initialize the entire array. This 
is another reason I like VLAs.


Re: What's the point of static arrays ?

2020-07-09 Thread IGotD- via Digitalmars-d-learn

On Thursday, 9 July 2020 at 12:12:06 UTC, wjoe wrote:

...


Static arrays are great because, as already mentioned, they are 
allocated on the stack (unless it is a global variable or 
something, in which case it ends up in the data segment or the 
TLS area).

As C (and some C++ compilers, as an extension) now allow 
dynamically sized stack arrays (VLAs), shouldn't D allow this as 
well?


Now you have to do.

import core.stdc.stdlib : alloca;

void myFunc(size_t arraySize)
{
    // alloca must be called in the frame that uses the memory
    void* ptr = alloca(arraySize);
    ubyte[] arr = cast(ubyte[]) ptr[0 .. arraySize];
}

it would be nice if we could just write

ubyte[arraySize] just like in C.

Now this operation is not safe, but could be allowed in @system 
code.


Is this the reason that ubyte[arraySize] doesn't create a dynamic 
array with size arraySize?


Now you need to do:

ubyte[] arr;
arr.length = arraySize;

It's as if ubyte[arraySize] is reserved for the same usage as in 
C.





How to ensure template function can be processed during compile time

2020-07-08 Thread IGotD- via Digitalmars-d-learn

I have the following functions in C++

template <typename T>
inline constexpr size_t mySize(const T &v)
{
    return sizeof(v) + 42;
}

template <typename T>
inline constexpr size_t mySize()
{
    return sizeof(T) + 42;
}

The constexpr ensures that it will be calculated as a 
compile-time constant, otherwise the build will fail. In this 
case C++ can handle me feeding these functions either a type or a 
variable, which it can resolve at compile time.


int v;

constexpr size_t sz = mySize(v);       // works, returns 46
constexpr size_t sz2 = mySize<int>();  // works, returns 46


Doing the same in D, would with my lack of knowledge look like 
this.



size_t mySize(T)()
{
return T.sizeof + 42;
}


size_t mySize(T)(const T t)
{
 return T.sizeof + 42;
}



int v;

enum sz = mySize!int;  // works, returns 46
enum sz2 = mySize(v);  // doesn't work. Error: variable v cannot be 
read at compile time


Here we have a difference between C++ and D, as C++ was able to 
infer the size of v at compile time.

Now, since mySize is a template, shouldn't mySize!v work? But it 
doesn't. What essential understanding have I missed here?
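One common workaround is to lift the variable's type, rather than its value, into the template argument: function parameters are never compile-time values in D, but typeof(v) is always known at compile time. A sketch based on the declarations above:

```d
size_t mySize(T)()
{
    return T.sizeof + 42;
}

void main()
{
    int v;
    enum sz = mySize!(typeof(v)); // only the type of v is needed
    static assert(sz == 46);      // int.sizeof (4) + 42
}
```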




Re: Template function specialization doesn't work

2020-07-07 Thread IGotD- via Digitalmars-d-learn

On Tuesday, 7 July 2020 at 20:14:19 UTC, IGotD- wrote:


Thank you, that worked, and now it picks the correct overloaded 
function. I don't understand why; it is a bit counter-intuitive. 
Why two template arguments, as I'm not even using U?


If you look at the article

https://dlang.org/articles/templates-revisited.html#specialization

Then it mentioned that (T : T*) would work. Intuitively, then 
you would think (T : T[]) would work.


Here (T : T[]) is even shown in an example, with the correct 
double[] type given as an example as well.

https://dlang.org/spec/template.html#parameters_specialization

I'm confused.


Re: Template function specialization doesn't work

2020-07-07 Thread IGotD- via Digitalmars-d-learn
On Tuesday, 7 July 2020 at 20:05:37 UTC, Steven Schveighoffer 
wrote:

On 7/7/20 4:04 PM, Steven Schveighoffer wrote:


Have you tried (T: U[], U)(ref T[] s) ?


Ugh... (T: U[], U)(ref T s)

-Steve


Thank you, that worked and now it picked the correct overloaded 
function. I don't understand why; it is a bit counter-intuitive. 
Why two template arguments, when I'm not even using U?


If you look at the article

https://dlang.org/articles/templates-revisited.html#specialization

Then it mentioned that (T : T*) would work. Intuitively, then you 
would think (T : T[]) would work.
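As a self-contained sketch of the accepted fix (the string return values are my own addition, for illustration): the extra parameter U is deduced as the element type, which makes `(T : U[], U)` a valid, more specialized match for slices than the fully generic overload.

```d
import std.stdio;

// Generic overload: matches anything.
string overloadedFunction(T)(ref T val)
{
    return "scalar " ~ T.stringof;
}

// Slice specialization: U is deduced as the element type.
string overloadedFunction(T : U[], U)(ref T s)
{
    return "slice of " ~ U.stringof;
}

void main()
{
    ubyte[3] ar = [1, 2, 3];
    ubyte[] arSlice = ar[];
    int x;
    writeln(overloadedFunction(x));       // scalar int
    writeln(overloadedFunction(arSlice)); // slice of ubyte
}
```

The `(T : T[])` spelling fails because T would have to be simultaneously the slice and its own element type; introducing U breaks that circularity.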


Re: Template function specialization doesn't work

2020-07-07 Thread IGotD- via Digitalmars-d-learn

On Tuesday, 7 July 2020 at 19:53:30 UTC, IGotD- wrote:


...



I also forgot to mention that the overloadedFunction is used in a 
variadic template function.


void processAll(T...)(ref T t)
{
    foreach (ref v; t)
    {
        overloadedFunction(v);
    }
}





Template function specialization doesn't work

2020-07-07 Thread IGotD- via Digitalmars-d-learn

I have two template functions

void overloadedFunction(T)(ref T val)
{
...
}


void overloadedFunction(T : T[])(ref T[] s)
{
...
}

Obviously the second should be used when the parameter is a slice 
of any type, and the first should be used in all other cases. 
However this doesn't happen: the compiler always picks the first 
function, regardless of whether the parameter is a slice or not.


So

ubyte[3] ar = [ 1, 2, 3 ];
ubyte[] arSlice = ar;

overloadedFunction(arSlice);

The first function will be used. Shouldn't the template argument 
(T : T[]) make the compiler pick the second one?




Delegates and C++ FFI lifetimes

2020-07-02 Thread IGotD- via Digitalmars-d-learn
I have this runtime written in C++ that allows callbacks for 
various functionality. In C++ the callbacks are stored as a 
function pointer together with a void* that is passed as first 
argument. The void* can be a lot of things, for example the class 
pointer in C++. However, this is a bit limited for D and the 
question is if you can use the void* in order to store a pointer 
to a D delegate instead. A trampoline function just casts the 
void* to a delegate and you go from there.


Basically:

class TestClass
{
    void test() { writeln("test"); }
}

void Setup()
{
    auto t = new TestClass;
    MyCallbackSystem s = CreateCallbackSystem(); // C++ FFI interface
    s.SetCallback(&t.test); // SetCallback, a helper function for MyCallbackSystem
}

Where SetCallback accepts a D delegate. Now this delegate is 
completely scoped and will likely disappear when SetCallback 
ends. The void* where the delegate is stored is of course 
completely outside the knowledge of the GC. I assume this means 
the C++ runtime will eventually read an invalid pointer.


The question is how you can best solve this so that the delegate 
lives as long as the object (s) associated with SetCallback. Can 
you actually dynamically allocate a delegate in D? I've never 
seen such an idiom other than in this thread


https://forum.dlang.org/thread/20080201213313.gb9...@homero.springfield.home

where the workaround seems to be having a delegate in a struct.
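One possible approach, sketched below under heavy assumptions: the names `setCallbackC`, `DelegateBox`, and `trampoline` are all made up, and the C++ runtime's stored (function, void*) pair is simulated with two `__gshared` variables. The idea is to box the delegate on the GC heap and pin the box with `GC.addRoot`, since the only reference to it will live in memory the GC never scans.

```d
import core.memory : GC;
import std.stdio;

alias CCallback = extern (C) void function(void* ctx);

// Stand-in for the C++ runtime's stored callback + context.
__gshared CCallback storedFn;
__gshared void* storedCtx;

extern (C) void setCallbackC(CCallback cb, void* ctx)
{
    storedFn = cb;
    storedCtx = ctx;
}

void fireCallback() { storedFn(storedCtx); }

// GC-heap box that owns the delegate, keeping its context reachable.
struct DelegateBox { void delegate() dg; }

extern (C) void trampoline(void* ctx)
{
    (cast(DelegateBox*) ctx).dg();
}

void setCallback(void delegate() dg)
{
    auto box = new DelegateBox(dg);
    // The only pointer to box now lives in C-managed memory, which
    // the GC never scans, so pin it explicitly.
    GC.addRoot(box);
    setCallbackC(&trampoline, cast(void*) box);
}

void main()
{
    class TestClass { void test() { writeln("test"); } }
    auto t = new TestClass;
    setCallback(&t.test);
    fireCallback();
}
```

Note that the box holds the delegate, and the delegate's context pointer holds the object, so the whole chain stays alive; a matching `GC.removeRoot` would be needed when the C++ side unregisters the callback, or the box leaks.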





Re: Generating struct .init at run time?

2020-07-02 Thread IGotD- via Digitalmars-d-learn

On Thursday, 2 July 2020 at 07:51:29 UTC, Ali Çehreli wrote:


Both asserts pass: S.init is 800M and is embedded into the 
compiled program.




Not an answer to your problem, but what on earth are those extra 
800 MB? The array size is 8 MB, so if the program just copied the 
data it would only take 8 MB. Does the binary have this size even 
with the debugging info stripped?


Also, there is an obvious optimization that could be implemented: 
when the array is above a certain size and all elements have the 
same value, the program could run an initialization loop instead 
of putting the data in the data segment.




Re: What would be the advantage of using D to port some games?

2020-06-24 Thread IGotD- via Digitalmars-d-learn

On Wednesday, 24 June 2020 at 19:28:15 UTC, matheus wrote:


To see how the game could fit/run in D, like people are porting 
some of those games to Rust/Go and so on.



When you mention "advantage", advantage compared to what?


To the original language the game was written. For example 
taking DOOM (Written in C), in D what would be the features 
that someone would use to port this game, like using CTFE to 
generate SIN/COS table would be reasonable?


I'm just looking for a roughly idea of this matter.



A previous game implementation ported to D would be interesting, 
and if you do it you are welcome to write about your experiences 
here. It's hard to say which features you would take advantage of 
in D, as I haven't seen the code in C/C++. However, one thing is 
clear: D would be an easy port because it is so close to C/C++. 
Every algorithm can be ported directly without major changes. If 
it were Rust, you would probably have to rethink some of the data 
structures and it would be more difficult. Another thing that is 
clear is that productivity would be high. With today's fast 
machines and old games you don't have to worry about GC pauses, 
as there is plenty of time for them. You can basically use the 
slow "scripting language" features of D. I would expect D to be 
in the C# ballpark in productivity for such a task.
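As an illustration of the CTFE point raised in the question (my own sketch, not from any actual DOOM port): a sine table can be computed entirely at compile time by initializing a static immutable, so it lands in the data segment with no startup cost.

```d
import std.math : PI, sin;

enum tableSize = 256;

// Runs under CTFE because it initializes a static immutable below.
float[tableSize] makeSinTable()
{
    float[tableSize] t;
    foreach (i; 0 .. tableSize)
        t[i] = cast(float) sin(2.0 * PI * i / tableSize);
    return t;
}

static immutable float[tableSize] sinTable = makeSinTable();

// Checked at compile time; no runtime cost at all.
static assert(sinTable[0] == 0.0f);
static assert(sinTable[64] > 0.999f); // sin(pi/2)
```

The same plain loop would have been a build-step code generator or a macro trick in the original C.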


Re: What would be the advantage of using D to port some games?

2020-06-24 Thread IGotD- via Digitalmars-d-learn

On Wednesday, 24 June 2020 at 18:53:34 UTC, matheus wrote:



What I'd like to know from the experts is: What would be the 
advantage of using D to port such games?




Can you elaborate on your question a little more? Why would you 
want to port existing game code to another language to begin 
with? When you mention "advantage", advantage compared to what: 
porting to another language at all, or porting to a language 
other than D? In that case, which other languages do you have in 
mind?





Re: Read to stdout doesn't trigger correct action

2020-06-22 Thread IGotD- via Digitalmars-d-learn
On Monday, 22 June 2020 at 14:27:18 UTC, Steven Schveighoffer 
wrote:


I'm sure if there is a clib that doesn't work with this, it is 
a bug with druntime, and should be addressed. I don't know 
enough about the exact functionality to be able to write such a 
bug report, but you probably should if it's not working for you.


-Steve


I don't really have a good solution for this, and the current 
code seems to work for the already supported C libraries. One 
solution would be to make the makeGlobal implementation 
OS/C-library specific, so each platform would have a suitable 
implementation, but that kind of contradicts having as little 
platform-specific code as possible.


Another thing that is a bit sketchy here is the use of raw 
atomics as a spinlock, which can have nasty side effects under 
contention (a CPU could spin for several milliseconds if the 
lock-holding CPU is preempted and does something else). I guess 
it is used in this particular case because creating a mutex would 
itself be a race, since we want lazy initialization. While I kind 
of like lazy initialization myself, it creates situations like 
these. I don't quite follow the exact reason behind avoiding 
static ctors; lazy initialization is nice for many things, but 
ctors have their place as well. Using C stdio without druntime 
would read the OS-specific FILE* directly, or what am I missing 
here?


Read to stdout doesn't trigger correct action

2020-06-22 Thread IGotD- via Digitalmars-d-learn
I've done some adaptations to druntime for another C library that 
isn't currently supported. Obtaining the FILE* structure of the 
clib is done via a function call rather than global variables. 
However this function call is never triggered when issuing a 
writeln function call. The FILE* structure is a pointer that 
points to a completely wrong location.


Reading core.stdc.stdio.stdin and core.stdc.stdio.stdout 
explicitly will trigger the function call and the correct 
addresses can be read.


The function through which writeln obtains the std FILE* 
structures is "@property ref File makeGlobal(StdFileHandle 
_iob)()" in std/stdio.d.



// Undocumented but public because the std* handles are aliasing 
it.

@property ref File makeGlobal(StdFileHandle _iob)()
{
    __gshared File.Impl impl;
    __gshared File result;

    // Use an inline spinlock to make sure the initializer is only run once.
    // We assume there will be at most uint.max / 2 threads trying to initialize
    // `handle` at once and steal the high bit to indicate that the globals have
    // been initialized.
    static shared uint spinlock;
    import core.atomic : atomicLoad, atomicOp, MemoryOrder;
    if (atomicLoad!(MemoryOrder.acq)(spinlock) <= uint.max / 2)
    {
        for (;;)
        {
            if (atomicLoad!(MemoryOrder.acq)(spinlock) > uint.max / 2)
                break;
            if (atomicOp!"+="(spinlock, 1) == 1)
            {
                with (StdFileHandle)
                    assert(_iob == stdin || _iob == stdout || _iob == stderr);
                impl.handle = mixin(_iob);
                result._p = &impl;
                atomicOp!"+="(spinlock, uint.max / 2);
                break;
            }
            atomicOp!"-="(spinlock, 1);
        }
    }
    return result;
}


This does some atomic operations to prevent the D File struct for 
stdio from being initialized several times. I'm not quite sure if 
this is global or per thread, but I guess it is for the entire 
process. For some reason the std File structs are never 
initialized at all. Another problem is that the function called 
to obtain the clib stdin/out uses a structure that is lazily 
initialized per thread, so it must be called at least once in 
each thread in order to get the correct stdin/out. If I remove 
the atomic operations so that the File initialization is done 
every time, then it works.


The question is whether "File makeGlobal(StdFileHandle _iob)()" 
is correct when it comes to compatibility with all the clib 
versions out there. Not infrequently, clib "global variables" are 
really functions; errno, for example, is often a function rather 
than a variable. The internal code of the clib in question 
(Newlib) does not have this "optimization" but always gets 
stdin/out through the function call.





Re: Does std.net.curl: download have support for callbacks?

2020-06-21 Thread IGotD- via Digitalmars-d-learn

On Thursday, 18 June 2020 at 01:15:00 UTC, dangbinghoo wrote:


Don't worry, almost ALL GUI FRAMEWORKS in the world ARE NOT 
THREAD SAFE: the well-known Qt and Gtk, and even modern Android 
and Java Swing.




binghoo dang


You can certainly download in another thread in Qt. However, you 
are only allowed to touch the widget classes from the main 
thread. Qt is a large library that also spans non-GUI 
functionality, and those parts can often be moved to separate 
threads, IO for example.


In this case you could download in a separate thread, then send 
signals to the progress bar object to update it. Is it possible 
to implement a similar approach in Gtk?


Re: Garbage Collection Issue

2020-06-01 Thread IGotD- via Digitalmars-d-learn
On Monday, 1 June 2020 at 12:37:05 UTC, Steven Schveighoffer 
wrote:


I was under the impression that TLS works by altering a global 
pointer during the context switch. I didn't think accessing a 
variable involved a system call.


For sure they are slower than "normal" variables, but how much 
slower? I'm not sure.




It depends; there are several different optimizations possible. 
This is essentially the difference between the -fPIC and -fpie 
flags in GNU compilers. -fpie can optimize TLS so that it is an 
offset from a certain register (fs or gs on x86). Otherwise the 
compiler inserts a call to __tls_get_addr. Typically shared 
objects get this call, but the executable can be optimized. So if 
druntime is a shared object, it will use __tls_get_addr. TLS 
variables will not be a major hit if used moderately; used in a 
loop, you will certainly see a performance hit.
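To make the "used in a loop" point concrete (my own sketch, not from the post): hoisting a thread-local variable into a local before a hot loop reduces the TLS traffic to a single read and write, which matters most in shared-library builds where each access can be a __tls_get_addr call.

```d
// Module-level variables are thread-local by default in D.
int counter;

int sumSlow(int n)
{
    counter = 0;
    foreach (i; 0 .. n)
        counter += i;     // potential TLS access on every iteration
    return counter;
}

int sumFast(int n)
{
    int local = 0;        // hoisted into a stack slot / register
    foreach (i; 0 .. n)
        local += i;
    counter = local;      // a single TLS write at the end
    return counter;
}

void main()
{
    assert(sumSlow(1000) == sumFast(1000)); // same result, fewer TLS touches
}
```

An optimizing compiler may do this hoisting itself in the -fpie case, but it cannot always prove it safe when the variable might be reached from another thread via a pointer.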




This can only take you so far, when the language uses TLS by 
default. The GC has to support scanning TLS and so it uses TLS 
to track thread-specific data.




Yes, this was some of the annoyance I had when porting druntime. 
The thread startup code needed to use the link library (like 
elf/link.h for Linux) in order to obtain the entire TLS area 
(areas, really, because there are several of them). This includes 
scanning sections during startup, and it becomes even more 
complicated with runtime-loaded modules. Basically there is a lot 
of boilerplate in druntime just for reading the executable 
format; druntime has tons of ELF stuff in it just to load a 
program, something I'm not too keen on, because it's a lot of 
code and you need to support all the quirks of different CPU 
archs and operating systems. You'd want druntime to be more 
OS-agnostic and let the OS services deal with the TLS stuff. The 
only upside is that you can have a full symbolic stack trace 
during aborts when poking into the executable formats.


Well, that's how it is because of the GC, and there is not really 
any way around it. A non-tracing GC would not have this 
requirement, though. When you dig into these details you realize 
how heavy the D language really is, and some solutions get 
negative beauty points.









