Re: Mir vs. Numpy: Reworked!
On Thursday, 10 December 2020 at 14:49:08 UTC, jmh530 wrote: On Thursday, 10 December 2020 at 11:07:06 UTC, Igor Shirkalin wrote: On Monday, 7 December 2020 at 13:17:47 UTC, jmh530 wrote: "no need to calculate inverse matrix" What? Since when? Since highly optimized algorithms became required. This does not mean that you should not know the algorithms for calculating the inverse matrix. I still find myself inverting large matrices from time to time. Maybe there are ways to reduce the number of times I do it, but it still needs to get done for some types of problems. That is what I do for science purposes too, from time to time.
Re: Mir vs. Numpy: Reworked!
On Monday, 7 December 2020 at 13:07:23 UTC, 9il wrote: On Monday, 7 December 2020 at 12:28:39 UTC, data pulverizer wrote: On Monday, 7 December 2020 at 02:14:41 UTC, 9il wrote: I don't know. Tensors aren't so complex. The complex part is a design that allows Mir to construct and iterate various kinds of lazy tensors of any complexity and have quite a universal API, and all of this is boosted by the fact that the user-provided kernel (lambda) function is optimized by the compiler without overhead. I agree that a basic tensor is not hard to implement, but the specific design to choose is not always obvious. Your benchmarks show that design choices have a large impact on performance, and performance is certainly a very important consideration in tensor design. For example, I had no idea that your ndslice variant was using more than one array internally to achieve its performance - it wasn't obvious to me. The ndslice tensor type uses exactly one iterator. However, the iterator is generic, and lazy iterators may contain any number of other iterators and pointers. How does the iterator in Mir differ from the usual concept of an iterator in D, and from designing your own tensors and the operations performed on them, in terms of execution speed, if we know how to achieve it?
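A toy sketch of the single-generic-iterator idea (my own illustration, not Mir's actual code): the tensor holds one iterator type, but that iterator may itself wrap another iterator, and the user's kernel is a template alias the compiler can inline:

```d
import std.range : iota;
import std.algorithm.comparison : equal;

// A lazy wrapper: one generic iterator that may itself wrap another
// iterator; the kernel `fun` is a compile-time alias, so it inlines.
struct Map(alias fun, It)
{
    It it;
    auto front() { return fun(it.front); }
    void popFront() { it.popFront(); }
    bool empty() { return it.empty; }
}

auto mapLazy(alias fun, It)(It it) { return Map!(fun, It)(it); }

void main()
{
    auto lazySquares = iota(4).mapLazy!(x => x * x);
    assert(lazySquares.equal([0, 1, 4, 9])); // nothing computed until iterated
}
```

Mir's ndslice iterators follow the same pattern at a much larger scale, which is why one iterator field is enough.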
Re: Mir vs. Numpy: Reworked!
On Monday, 7 December 2020 at 13:54:26 UTC, Ola Fosheim Grostad wrote: On Monday, 7 December 2020 at 13:48:51 UTC, jmh530 wrote: On Monday, 7 December 2020 at 13:41:17 UTC, Ola Fosheim Grostad wrote: On Monday, 7 December 2020 at 13:17:47 UTC, jmh530 wrote: [snip] "no need to calculate inverse matrix" What? Since when? I don't know what he meant in this context, but a common technique in computer graphics is to build the inverse as you apply computations. Ah, well if you have a small matrix, then it's not so hard to calculate the inverse anyway. It is an optimization, maybe also for accuracy, dunno. So, instead of ending up with a transform from coordinate system A to B, you also get the transform from B to A for cheap. This may matter when the next step is to go from B to C... And so on... A good example is the Simplex method for linear programming. It can be implemented so that you calculate the inverse [m x m] matrix at every step. Better, transform one inverse matrix into the next, which speeds the algorithm up from O(m^3) to O(m^2) per step, and even more. You don't even need to calculate the first inverse matrix if the algorithm is built in such a way that it is trivial. It is just one example.
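A minimal sketch of that simplex basis update, assuming a dense row-major double[][] representation (the names are mine, not from any library): when one column of the basis matrix B is replaced, the new inverse comes from the old one in O(m^2) instead of a fresh O(m^3) inversion:

```d
// Update invB = B^-1 in place after column p of B is replaced by `a`.
// Standard product-form (eta) update, O(m^2) per simplex step.
void updateInverse(double[][] invB, const double[] a, size_t p)
{
    immutable m = invB.length;
    auto d = new double[m];         // d = invB * a
    foreach (i; 0 .. m)
    {
        d[i] = 0;
        foreach (j; 0 .. m)
            d[i] += invB[i][j] * a[j];
    }
    immutable pivot = d[p];         // must be nonzero for a valid pivot
    foreach (j; 0 .. m)
        invB[p][j] /= pivot;
    foreach (i; 0 .. m)
    {
        if (i == p) continue;
        foreach (j; 0 .. m)
            invB[i][j] -= d[i] * invB[p][j];
    }
}
```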
Re: Mir vs. Numpy: Reworked!
On Monday, 7 December 2020 at 13:41:17 UTC, Ola Fosheim Grostad wrote: On Monday, 7 December 2020 at 13:17:47 UTC, jmh530 wrote: On Monday, 7 December 2020 at 11:21:16 UTC, Igor Shirkalin wrote: [snip] Agreed. As a matter of fact the simplest convolutions of tensors are out of date. It is like saying there is no need to calculate the inverse matrix. Mir is useful work for its author, of course, but in practice almost unused. Everyone who needs something fast for his own tasks has to make the same things again in D. "no need to calculate inverse matrix" What? Since when? I don't know what he meant in this context, but a common technique in computer graphics is to build the inverse as you apply computations. It makes sense in the simplest cases, when matrices are small (2x2, 3x3, or even 4x4) and you have to multiply them at least thousands of times.
Re: Mir vs. Numpy: Reworked!
On Monday, 7 December 2020 at 13:54:26 UTC, Ola Fosheim Grostad wrote: On Monday, 7 December 2020 at 13:48:51 UTC, jmh530 wrote: On Monday, 7 December 2020 at 13:41:17 UTC, Ola Fosheim Grostad wrote: On Monday, 7 December 2020 at 13:17:47 UTC, jmh530 wrote: [snip] "no need to calculate inverse matrix" What? Since when? I don't know what he meant in this context, but a common technique in computer graphics is to build the inverse as you apply computations. Ah, well if you have a small matrix, then it's not so hard to calculate the inverse anyway. It is an optimization, maybe also for accuracy, dunno. Exactly. Optimization plus accuracy.
Re: Mir vs. Numpy: Reworked!
On Monday, 7 December 2020 at 13:17:47 UTC, jmh530 wrote: On Monday, 7 December 2020 at 11:21:16 UTC, Igor Shirkalin wrote: [snip] Agreed. As a matter of fact the simplest convolutions of tensors are out of date. It is like saying there is no need to calculate the inverse matrix. Mir is useful work for its author, of course, but in practice almost unused. Everyone who needs something fast for his own tasks has to make the same things again in D. "no need to calculate inverse matrix" What? Since when? Since highly optimized algorithms became required. This does not mean that you should not know the algorithms for calculating the inverse matrix.
Re: Mir vs. Numpy: Reworked!
On Monday, 7 December 2020 at 02:14:41 UTC, 9il wrote: On Sunday, 6 December 2020 at 17:30:13 UTC, data pulverizer wrote: On Saturday, 5 December 2020 at 07:44:33 UTC, 9il wrote: sweep_ndslice uses (2*N - 1) arrays to index U, this allows LDC to unroll the loop. I don't know. Tensors aren't so complex. The complex part is a design that allows Mir to construct and iterate various kinds of lazy tensors of any complexity and have quite a universal API, and all of this is boosted by the fact that the user-provided kernel (lambda) function is optimized by the compiler without overhead. Agreed. As a matter of fact the simplest convolutions of tensors are out of date. It is like saying there is no need to calculate the inverse matrix. Mir is useful work for its author, of course, but in practice almost unused. Everyone who needs something fast for his own tasks has to make the same things again in D.
Re: Development: Work vs Lazy Programmers... How do you keep sanity?
On Thursday, 3 December 2020 at 15:18:31 UTC, matheus wrote: Hi, I didn't know where to post this and I hope this is a good place. I'm a lurker in this community and I read a lot of discussions on this forum, and I think there are a lot of smart people around here. So I'd like to know if any of you work with lazy or even dumb programmers, and if yes, how do you keep your sanity? Matheus. PS: Really, I'm almost losing mine. Just skip them all and do your job.
Re: How to imporve D-translation of these Python list comprehensions ?
On Monday, 15 January 2018 at 19:05:52 UTC, xenon325 wrote: A workmate has recently shown this piece of code to show how nice Python is (we are mostly C and growing C++ shop): dd = [dict(_name=k, **{a + str(i): aget(d, k, a) for a in aa for i, d in enumerate([srv1, srv2])}) for k in sorted(kk)] This is the most terrible Python code I have ever seen. If you know Python, could you please unroll it into a more readable form? Suggestions are welcome! --- Alexander
Re: Efficient way to pass struct as parameter
On Tuesday, 2 January 2018 at 18:45:48 UTC, Jonathan M Davis wrote: [...] A smart optimizer should think for you, without any special keywords like "auto", if the function is inlined. I mean the LDC compiler first of all.
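For illustration, a sketch with a hypothetical Big struct: `ref` avoids the copy explicitly, while an inlining optimizer such as LDC's can often elide the copy even for by-value parameters:

```d
struct Big { double[64] data; } // 512 bytes, costly to copy

// Explicitly no copy; in templates, `auto ref` picks ref for lvalues.
double sum(ref const Big b)
{
    double s = 0;
    foreach (x; b.data)
        s += x;
    return s;
}
```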
Re: Why 2 ^^ 1 ^^ 2 = 2?
On Sunday, 22 October 2017 at 14:20:20 UTC, Ilya Yaroshenko wrote: .. I thought it should be (2 ^^ 1) ^^ 2 = 4 Imagine 2^^10^^10^^7. It's a big number, isn't it? (up, up, and up) Where would you start from?
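The underlying rule: `^^` is right-associative in D, like written mathematical exponentiation, so a power tower is evaluated from the top down:

```d
// 2 ^^ 1 ^^ 2 parses as 2 ^^ (1 ^^ 2), not (2 ^^ 1) ^^ 2.
static assert(2 ^^ 1 ^^ 2 == 2);
static assert((2 ^^ 1) ^^ 2 == 4);
```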
Re: TLS + LDC + Android (ARM) = FAIL
On Wednesday, 1 November 2017 at 18:33:45 UTC, Iain Buclaw wrote: On 1 November 2017 at 19:28, Johannes Pfau via Digitalmars-d wrote: Am Wed, 01 Nov 2017 17:42:22 + schrieb David Nadlinger : [...] ARM: Fine. Android: probably won't work well. AFAIK we're only missing emulated TLS / GC integration, so most tests will pass but real apps will crash because of GC memory corruption. I guess I should finally get back to fixing that problem ;-) OTOH Android doesn't even support GCC anymore, so I don't really see much benefit in maintaining GDC Android support. -- Johannes What's the thread model on Android? You could perhaps get away with --enable-threads=single. That option won't do for us. We need multithreading.
Re: TLS + LDC + Android (ARM) = FAIL
On Wednesday, 1 November 2017 at 18:30:28 UTC, Johannes Pfau wrote: Am Wed, 01 Nov 2017 18:06:29 + schrieb Joakim: On Wednesday, 1 November 2017 at 17:24:32 UTC, Igor Shirkalin wrote: > [...] If you're having problems with the emulated TLS I put together for Android, it is most likely because I didn't document well what needs to be done when linking for Android. Specifically, there are three rules that _must_ be followed: 1. You must use the ld.bfd linker, ld.gold won't do. 2. You must have a D main function, even for a shared library (which can be put next to android_main, if you're using the default Android wrapper from my D android library). 3. The ELF object with the D main function must be passed to the linker first. If you look at my examples on the wiki, you'll see that they all follow these rules: https://wiki.dlang.org/Build_D_for_Android I should have called these rules out separately though, like I'm doing here, a documentation oversight. Also, when mixing D and C code, you can't access extern TLS variables across the language boundary. Maybe the OP tries to do that, as he mixes D/C code? -- Johannes Everything was beautiful for Win/Linux/iOS/Android. At some point we had to use LDC (for some obvious reasons). I marked every function with @nogc and pure (it helped me to optimize the code). The problem is TLS. Android doesn't support it. At all. If you understand, I am talking about the -betterC feature. - Igor
Re: TLS + LDC + Android (ARM) = FAIL
On Wednesday, 1 November 2017 at 18:28:12 UTC, Johannes Pfau wrote: Am Wed, 01 Nov 2017 17:42:22 + schrieb David Nadlinger: On Wednesday, 1 November 2017 at 17:30:05 UTC, Iain Buclaw wrote: > [...] Or quite possibly fewer, depending on what one understands "platform" and "support" to mean. ;) What is the state of GDC on Android/ARM – has anyone been using it recently? — David ARM: Fine. Android: probably won't work well. AFAIK we're only missing emulated TLS / GC integration, so most tests will pass but real apps will crash because of GC memory corruption. I guess I should finally get back to fixing that problem ;-) OTOH Android doesn't even support GCC anymore, so I don't really see much benefit in maintaining GDC Android support. -- Johannes That's too bad. I'd do it here for food. - Igor
Re: TLS + LDC + Android (ARM) = FAIL
On Wednesday, 1 November 2017 at 17:46:29 UTC, David Nadlinger wrote: On Wednesday, 1 November 2017 at 17:24:32 UTC, Igor Shirkalin wrote: Does the new "-betterC" mean we may use parallelism with a separate linker? `-betterC` does not add any emulation of missing platform features — on the contrary, it *removes* language runtime functionality! Thus, if TLS doesn't work for you (IIRC, emulated TLS should work on Android following Joakim's work!), adding `-betterC` won't improve the situation. I understand that. Could you please open a ticket on the LDC GitHub tracker with details on the issue? I'd try, with some help. — David
Re: TLS + LDC + Android (ARM) = FAIL
On Wednesday, 1 November 2017 at 17:44:11 UTC, Joakim wrote: On Wednesday, 1 November 2017 at 17:24:32 UTC, Igor Shirkalin wrote: We solved the subject by modifying the druntime source related to TLS. Imagine: we have lost a lot of D's features. You'd have been better off opening an issue for ldc: https://github.com/ldc-developers/ldc/issues TLS should work fine, though it's emulated, as Android doesn't support native TLS. You have to be careful how you link the emulated TLS, and you have to include a main function even for a shared library: https://wiki.dlang.org/Build_LDC_for_Android#Directions_for_future_work You'd be better off talking to the ldc devs - this is the first I'm hearing about this - rather than going in and making changes to druntime. As far as I know, DMD and GDC are not available for the ARM architecture. So we need LDC. A short story: we have a big C/C++ project that links D (LDC) code for different platforms. Does the new "-betterC" mean we may use parallelism with a separate linker? You'll need to expand on this: you want to use std.parallelism, which isn't working for you now, or something else to do with the linker going wrong? We use external D libraries in a C project. Using parallelism means we have to initialize druntime. But TLS stops it (that is why we changed it). Sure, we have tried to build the pure D part alone and it didn't work. Of course the linker was working properly.
Re: TLS + LDC + Android (ARM) = FAIL
On Wednesday, 1 November 2017 at 17:30:05 UTC, Iain Buclaw wrote: On 1 November 2017 at 18:24, Igor Shirkalin via Digitalmars-d <digitalmars-d@puremagic.com> wrote: We solved the subject by modifying the druntime source related to TLS. Imagine: we have lost a lot of D's features. As far as I know, DMD and GDC are not available for the ARM architecture. So we need LDC. GDC supports the same or maybe more platforms than LDC. :-) Could you please give a reference on how to build it on Linux or Windows?
TLS + LDC + Android (ARM) = FAIL
We solved the subject by modifying the druntime source related to TLS. Imagine: we have lost a lot of D's features. As far as I know, DMD and GDC are not available for the ARM architecture. So we need LDC. A short story: we have a big C/C++ project that links D (LDC) code for different platforms. Does the new "-betterC" mean we may use parallelism with a separate linker? - IS
Re: "version" private word
On Tuesday, 31 October 2017 at 15:19:49 UTC, Steven Schveighoffer wrote: On 10/31/17 10:47 AM, Igor Shirkalin wrote: [...] Sorry, I hate writing code on mobile. You can create an arbitrary version by assigning a symbol to it, use that symbol to describe a feature, assign that symbol for each architecture that supports it. Then write code in a version block of that symbol. The question was not about mobile platforms. I think he meant he didn't like writing code in a forum post on his mobile, so he wrote something more abstract :) Ah. :) Sometimes we need to mix some combinations of code in one big project, with or without some libraries, algorithms, etc. I see what you mean and practically agree with you. But not everything depends on you (us). The above response has been the standard D answer for as long as this question has been asked (and it has been asked a lot). Walter is dead-set against allowing boolean expressions in version statements. Now I understand the irritation about my question. I'm sorry. The anointed way is to divide your code by feature support, and then version those features in/out based on the platform you are on. For example, instead of "X86_or_X64", you would do "TryUsingSSE" or something (not sure what your specific use case is). This doesn't solve the case of combinations of different versions. Four different versions produce nine (+4) different variants. It's silly to define 9 additional version constants. However, enums and static if can be far more powerful. Version statements do not extend across modules, so you may have to repeat the entire scaffolding to establish versions in multiple modules. Enums are accessible across modules. Yes, it's now clear to me what to do. Thanks!
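The enum pattern then scales to arbitrary boolean combinations without extra version constants; a sketch (the isX86/hasSSE names are mine):

```d
// Translate version identifiers into enums once...
version (X86)    enum isX86 = true;    else enum isX86 = false;
version (X86_64) enum isX86_64 = true; else enum isX86_64 = false;

// ...then combine them with the boolean logic `version` forbids.
enum hasSSE = isX86 || isX86_64;

static if (hasSSE)
{
    // SSE-specific code path
}
else
{
    // portable fallback
}
```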
Re: "version" private word
On Tuesday, 31 October 2017 at 14:54:27 UTC, Dr. Assembly wrote: On Tuesday, 31 October 2017 at 13:53:54 UTC, Jacob Carlborg wrote: On 2017-10-31 14:46, Igor Shirkalin wrote: [...] The only alternative is to do something like this: version (X86) enum x86 = true; else enum x86 = false; version (X86_64) enum x86_64 = true; else enum x86_64 = false; static if (x86 || x86_64) {} Why is that keyword called enum? Is this related to the fact that an enumeration's fields are const values? Should it be called invariable or something? You're right. An enum defines a constant or a group of constants at compile time. The full description of enum can be found here: https://dlang.org/spec/enum.html
Re: "version" private word
On Tuesday, 31 October 2017 at 14:31:17 UTC, Jesse Phillips wrote: On Tuesday, 31 October 2017 at 14:25:19 UTC, Igor Shirkalin wrote: On Tuesday, 31 October 2017 at 14:22:37 UTC, Jesse Phillips wrote: On Tuesday, 31 October 2017 at 13:46:40 UTC, Igor Shirkalin wrote: Hello! Your goal should be to describe features. Version x86 ... Version = I can stand on my head ... pardon? Sorry, I hate writing code on mobile. You can create an arbitrary version by assigning a symbol to it, use that symbol to describe a feature, assign that symbol for each architecture that supports it. Then write code in a version block of that symbol. The question was not about mobile platforms. Sometimes we need to mix some combinations of code in one big project, with or without some libraries, algorithms, etc. I see what you mean and practically agree with you. But not everything depends on you (us).
Re: "version" private word
On Tuesday, 31 October 2017 at 14:22:37 UTC, Jesse Phillips wrote: On Tuesday, 31 October 2017 at 13:46:40 UTC, Igor Shirkalin wrote: Hello! Your goal should be to describe features. Version x86 ... Version = I can stand on my head ... pardon?
Re: "version" private word
On Tuesday, 31 October 2017 at 13:53:54 UTC, Jacob Carlborg wrote: On 2017-10-31 14:46, Igor Shirkalin wrote: Hello! We need some conditional compilation using 'version'. Say we have some code to be compiled for X86 and X86_64. How can we do that using predefined (or other) versions? Examples: version(X86 || X86_64) // failed version(X86) || version(X86_64) // failed The following works but it is too verbose: version(X86) { version = X86_or_64; } version(X86_64) { version = X86_or_64; } The only alternative is to do something like this: version (X86) enum x86 = true; else enum x86 = false; version (X86_64) enum x86_64 = true; else enum x86_64 = false; static if (x86 || x86_64) {} Got it. Thank you!
"version" private word
Hello! We need some conditional compilation using 'version'. Say we have some code to be compiled for X86 and X86_64. How can we do that using predefined (or other) versions? Examples: version(X86 || X86_64) // failed version(X86) || version(X86_64) // failed The following works but it is too verbose: version(X86) { version = X86_or_64; } version(X86_64) { version = X86_or_64; } - IS
Re: the best language I have ever met(?)
On Friday, 25 November 2016 at 19:16:43 UTC, ketmar wrote: yeah. but i'm not Andrei, i don't believe that the only compiler task is to resolve templated code. ;-) i.e. Andrei believes that everything (and more) should be moved out of compiler core and done with library templates. Andrei is genius, for sure, but he is living somewhere in future, where our PCs are not bound by memory, CPU, and other silly restrictions. ;-) tl;dr: using template for this sux. Nice to meet you, Andrei! Yes, in mathematics we are more servants than gentlemen (Charles Hermite).
Re: "This week in D" state
On Tuesday, 10 October 2017 at 15:52:03 UTC, Adam D. Ruppe wrote: Of course, if someone wants to email me something I can copy/paste in, that's cool... but you could also send that to the blog... I'd like to, but it would be nice to know where to write to without a required pile of formalities.
Re: Why do I have to cast arguments from int to byte?
On Tuesday, 10 October 2017 at 19:55:36 UTC, Chirs Forest wrote: I keep having to make casts like the following and it's really rubbing me the wrong way: void foo(T)(T bar){...} byte bar = 9; foo!byte(bar + 1); //Error: function foo!byte.foo (byte bar) is not callable using argument types (int) foo!byte(cast(byte)(bar + 1)); It wouldn't be so bad if I didn't have to use the word cast before each cast, but since I have to specify both the word cast and the cast type and then wrap both the cast type and the value in brackets... it just explodes my code into multiple lines of unreadable mess. void foo(T)(T bar, T bar2, T bar3){...} byte foobar = 12; foo!byte(foobar + 1, foobar + 22, foobar + 333); vs. foo!byte(cast(byte)(foobar + 1), cast(byte)(foobar + 22), cast(byte)(foobar + 333)); Why? Because of integer promotion: byte + int yields int, and the int result does not implicitly convert back to byte.
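To see the promotion, and one way to tame the noise (the `narrow` helper is hypothetical, just for illustration):

```d
byte b = 9;
static assert(is(typeof(b + 1) == int)); // arithmetic promotes byte to int

// Hypothetical helper that hides the cast:
T narrow(T, U)(U value) { return cast(T) value; }

void foo(T)(T x, T y, T z) {}

void main()
{
    byte foobar = 12;
    foo!byte(narrow!byte(foobar + 1),
             narrow!byte(foobar + 22),
             narrow!byte(foobar + 333));
}
```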
Re: Create uninitialized dynamic array
On Thursday, 5 October 2017 at 21:04:30 UTC, Adam D. Ruppe wrote: On Thursday, 5 October 2017 at 19:59:48 UTC, Igor Shirkalin wrote: Is there a pure way to make what I want? oh i almost forgot about this function too: http://dpldocs.info/experimental-docs/std.array.uninitializedArray.1.html import std.array; double[] arr = uninitializedArray!(double[])(100); Ha! I saw it some day and forgot too! And GC.malloc is, I think, what I need. Thank you!
Re: Create uninitialized dynamic array
On Thursday, 5 October 2017 at 20:19:15 UTC, Adam D. Ruppe wrote: On Thursday, 5 October 2017 at 19:59:48 UTC, Igor Shirkalin wrote: I want to quickly fill it with my own data and I do not want to waste CPU time to fill it with zeros (or some other value). You could always just allocate it yourself. Something that large is liable to be accidentally pinned by the GC anyway, so I suggest: int[] data; int* dataptr = cast(int*) malloc(SOMETHING * int.sizeof); if(dataptr is null) throw new Exception("malloc failed"); scope(exit) free(dataptr); data = dataptr[0 .. SOMETHING]; // work with data normally here Just keep in mind it is freed at scope exit there, so don't escape slices into it. Thank you, Adam, for the pinpoint answer. Doesn't it mean we have to avoid the GC for such large blocks? And what if we need a lot of blocks of smaller sizes? I'm from the C++ world but... I like GC. Usually the blocks are scoped in some more complex way, so it is good to pass them to the GC for management. Maybe the compiler (say LDC) can reject this useless initialization in the simplest cases. Shortly, I'm still in doubt.
Re: Looking for a mentor in D
On Tuesday, 3 October 2017 at 06:54:01 UTC, eastanon wrote: I have been reading the D forums for a while and following on its amazing progress for a long time. Over time I have even written some basic D programs for myself, nothing major or earth shuttering. I have downloaded and read Ali's excellent book. [...] Let's try to combine D, Python and programming at isems...@gmail.com
Create uninitialized dynamic array
Hello! Preface: I need a 1G array of ints (or anything else). Problem: I want to quickly fill it with my own data and I do not want to waste CPU time to fill it with zeros (or some other value). I do like this: void main() { int[] data; // key code: data.length = SOMETHING; // how to create an array and leave it uninitialized? // fill 'data' with some data foreach(i, ref v; data) v = cast(int) i; // an example } Is there a pure way to make what I want?
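Two possibilities, as a sketch (both leave the memory uninitialized, so every element must be written before it is read):

```d
import core.memory : GC;
import std.array : uninitializedArray;

enum SOMETHING = 1024 * 1024; // stand-in size for the example

void main()
{
    // Phobos helper: allocates without the default int.init fill.
    auto data = uninitializedArray!(int[])(SOMETHING);

    // Or allocate GC memory directly; NO_SCAN also tells the GC there are
    // no pointers inside, which matters for 1G-sized blocks.
    auto data2 = (cast(int*) GC.malloc(SOMETHING * int.sizeof,
                                       GC.BlkAttr.NO_SCAN))[0 .. SOMETHING];

    foreach (i, ref v; data) v = cast(int) i;
}
```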
Re: LDC 1.4.0-beta1
On Saturday, 26 August 2017 at 22:35:11 UTC, kinke wrote: Hi everyone, on behalf of the LDC team, I'm glad to announce LDC 1.4.0-beta1. The highlights of version 1.4 in a nutshell: * Based on D 2.074.1. Hello! Is it possible to build LDC based on D 2.076 with the latest -betterC feature?
Re: void init of out variables
On Saturday, 19 August 2017 at 06:20:28 UTC, Nicholas Wilson wrote: I have a function that takes a large matrix as an out parameter. Is there a way to do `=void` for an out parameter like there is for a plain declaration? enum M = 2600; void f() { float[M] mean = void; // works as expected, mean is left uninitialised } void g(out float[M][M] corr) // works but assigns twice { corr[] = float.init; // compiler inserted // assign to each value of corr } //Error: found ')' when expecting '.' following void void h(out float[M][M] corr = void) { } is there a way to not assign to out variables? Try 'ref' instead of 'out'.
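A sketch of the difference (small M for brevity): `out` parameters are default-initialized on entry, while `ref` passes the caller's storage through untouched, so an `= void` declaration stays uninitialized:

```d
enum M = 4;

// With `ref` there is no compiler-inserted corr[] = float.init on entry.
void g(ref float[M][M] corr)
{
    foreach (i; 0 .. M)
        foreach (j; 0 .. M)
            corr[i][j] = (i == j) ? 1.0f : 0.0f;
}

void main()
{
    float[M][M] corr = void; // left uninitialized, filled by g
    g(corr);
    assert(corr[0][0] == 1.0f);
}
```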
Re: ASCII-ART mandelbrot running under newCTFE
On Friday, 4 August 2017 at 22:50:03 UTC, Stefan Koch wrote: Hey Guys, I just trans-compiled a brainfuck mandelbrot into ctfeable D. newCTFE is able to execute it correctly (although it takes 3.5 minutes to do so). The code is here https://gist.github.com/UplinkCoder/d4e4426e6adf9434e34529e8e1f8cb47 The gist evaluates the function at runtime, since the newCTFE version capable of running this is not yet available as a preview release. If you want a laugh you can compile the code with ldc and the -Oz flag set. LLVM will detect that the function is pure and will try to constant-fold it. I do not know how long this takes though, since my patience is limited. Cheers, Stefan I have an interest in the Mandelbrot set. Some day, a long time ago, when Turbo Pascal was part of the world and EGA monitors had 320x240 with 1 byte per pixel... At that time it was interesting to write an algorithm that walks around the boundary of the Mandelbrot set, because of a theorem that every point of the Mandelbrot set has a continuous connection with any other point. If I have time I'd love to write it in D.
Re: Remove instance from array
On Wednesday, 5 July 2017 at 16:04:16 UTC, Jolly James wrote: On Wednesday, 5 July 2017 at 15:56:45 UTC, Igor Shirkalin wrote: On Wednesday, 5 July 2017 at 15:48:14 UTC, Jolly James wrote: On Wednesday, 5 July 2017 at 15:44:47 UTC, Igor Shirkalin wrote: On Wednesday, 5 July 2017 at 15:30:08 UTC, Jolly James wrote: WhatEver[] q = []; [...] auto i = new WhatEver(); q[] = i; How does one remove that instance 'i'? What exactly do you want to remove? After q[] = i your array contains a lot of references to 'i'. I would like to know how it works: removing the first, and all, references to 'i' inside 'q'. Perhaps, for all references to i it should look like: a = a.filter!(a => a !is i).array; Thank you! :) But why are containers so complicated in D? In C# I would go for a generic List, which would support structs and classes, where I simply could call '.Remove(T item)' or '.RemoveAt(int index)'. I would know how this works, because the method names make sense and the docs are straightforward. Here in D everything looks like climbing Mount Everest. When you ask how to use D's containers you are recommended to use dynamic arrays instead. When you look at the docs for std.algorithm, e.g. the .remove section, you get bombed with things like 'SwapStrategy.unstable', asserts and tuples, but you aren't told how to simply remove 1 specific element. I don't know C#, but I can tell you everything about C++ and Python. To climb an Everest in Python you have to know almost nothing; in C++ you have to know almost everything. In D you have to be smarter: you do not need to climb an Everest, but you have to know a minimum to do that. Spend a year learning and get the best results in minutes.
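Putting both removals together, a sketch using only Phobos:

```d
import std.algorithm.mutation : remove;
import std.algorithm.searching : countUntil;
import std.algorithm.iteration : filter;
import std.array : array;

class WhatEver {}

void main()
{
    auto i = new WhatEver;
    WhatEver[] q = [new WhatEver, i, new WhatEver, i];

    // Remove only the first reference to i:
    auto idx = q.countUntil!(e => e is i);
    if (idx >= 0)
        q = q.remove(idx);
    assert(q.length == 3);

    // Remove all remaining references to i:
    q = q.filter!(e => e !is i).array;
    assert(q.length == 2);
}
```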
Re: New Garbage Collector?
On Saturday, 22 July 2017 at 10:17:49 UTC, Temtaime wrote: On Saturday, 22 July 2017 at 04:53:17 UTC, aedt wrote: In the forum, I saw a thread about someone working on a new GC. Just wanted to know if there's any update on that, and what issues it is going to fix. Personally, I would be greatly delighted if it acknowledges the Stop-The-World problem[1]. Going around it is shown not to be impossible[2]. I do understand it's not a trivial task, but how is the community/core devs supporting this? [1] https://dlang.org/spec/garbage.html [2] https://hub.docker.com/r/nimlang/nim/ The new precise GC will never be added to druntime. It is dead, man. Are you a real developer of the new GC?
Re: Foreign threads in D code.
On Wednesday, 12 July 2017 at 09:49:32 UTC, Guillaume Piolat wrote: On Tuesday, 11 July 2017 at 22:59:42 UTC, Igor Shirkalin wrote: [...] -- Biotronic Thanks for the very useful information! Just one small note. If you don't know the foreign thread's lifetime, it's cleaner to detach it from the runtime upon exit. Else you may fall into the following scenario: 1. you register thread A 2. thread A is destroyed later on, in the C++ code 3. another thread B comes into your callback and allocates. The GC triggers and tries to pause a non-existing thread A. This is an important note. Yes, usually the lifetime of a foreign thread is unknown. You, guys, helped me a lot.
Re: Foreign threads in D code.
On Tuesday, 11 July 2017 at 06:18:44 UTC, Biotronic wrote: On Monday, 10 July 2017 at 20:03:32 UTC, Igor Shirkalin wrote: [...] If DRuntime is not made aware of the thread's existence, the thread will not be stopped by the GC, and the GC might collect memory that the thread is referencing on the stack or in non-GC memory. Anything allocated by the GC would still be scanned. To inform DRuntime about your thread, you should call thread_attachThis: https://dlang.org/phobos/core_thread.html#.thread_attachThis As pointed out in the documentation of thread_attachThis, you might also want to call rt_moduleTlsCtor, to run thread local static constructors. Depending on your usage, this might not be necessary. -- Biotronic Thanks for very useful information!
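A sketch of how a foreign (C/C++-created) thread can be made safe for D code (the entry-point name is hypothetical; the rt_ functions are declared manually, as druntime does not export them in a module):

```d
import core.thread : thread_attachThis, thread_detachThis;

// Run thread-local static constructors/destructors; declared in druntime.
extern (C) void rt_moduleTlsCtor();
extern (C) void rt_moduleTlsDtor();

// Called from a thread that DRuntime did not create.
extern (C) void calledFromForeignThread()
{
    thread_attachThis();     // make the GC aware of this thread's stack
    rt_moduleTlsCtor();      // if TLS static constructors are needed
    scope (exit)
    {
        rt_moduleTlsDtor();
        thread_detachThis(); // essential when the thread's lifetime is unknown
    }

    // ... D code that may allocate with the GC ...
}
```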
Foreign threads in D code.
Hello! I have written some D code that I need to link into a huge C++ project. Let it be just one function that uses the GC. The question is: if the C++ code creates several threads and runs this D function simultaneously, will the GC work correctly? P.S. Of course the druntime is initialized before it. Igor Shirkalin
Re: Remove instance from array
On Wednesday, 5 July 2017 at 15:48:14 UTC, Jolly James wrote: On Wednesday, 5 July 2017 at 15:44:47 UTC, Igor Shirkalin wrote: On Wednesday, 5 July 2017 at 15:30:08 UTC, Jolly James wrote: WhatEver[] q = []; [...] auto i = new WhatEver(); q[] = i; How does one remove that instance 'i'? What exactly do you want to remove? After q[] = i your array contains a lot of references to 'i'. I would like to know how it works: removing the first, and all, references to 'i' inside 'q'. Perhaps, for all references to i it should look like: a = a.filter!(a => a !is i).array;
Re: Remove instance from array
On Wednesday, 5 July 2017 at 15:30:08 UTC, Jolly James wrote: WhatEver[] q = []; [...] auto i = new WhatEver(); q[] = i; How does one remove that instance 'i'? What exactly do you want to remove? After q[] = i your array contains a lot of references to 'i'.
Re: casting to structure
On Saturday, 24 June 2017 at 21:41:22 UTC, Moritz Maxeiner wrote: Hi, unfortunately not: - Operator overloading is supported via member functions only [1]. - Corollary: You cannot overload operators for builtin types (i.e. where the cast gets rewritten to `e.opOverloaded` where `e` is a builtin type) - opCast needs to be defined for the type that you are casting from [2], not the one you are casting to => You cannot overload opCast for casting from builtin types [...] You cannot. As any such cast operation would have to create a new A object anyway (and would as such be a wrapper around a (possibly default) constructor), I suggest using elaborate constructors: Thank you for your detailed explanation! Word of warning, though: What you're doing is highly unsafe. I guess it is unsafe, but the core.sys.windows module works this way. That *is* how the Windows C API works AFAIK. I have no choice except to use it as it is. Define a struct for HWND as shown above for A and use constructors. Add an `alias this` to the wrapped pointer value. That was the first thing I did, and it led me to my question. Thank you!
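A minimal sketch of the constructor-plus-`alias this` approach suggested above (Handle is a hypothetical name standing in for HWND and friends; the -3 case mirrors HWND_MESSAGE from the question):

```d
struct Handle
{
    void* ptr;
    this(int v) { ptr = cast(void*) v; }
    alias ptr this; // exposes the wrapped pointer, so C APIs still accept it
}

void main()
{
    // cast(Handle) 23 is rewritten by the compiler to the call Handle(23)
    auto h = cast(Handle) 23;
    assert(h.ptr == cast(void*) 23);

    // the HWND_MESSAGE = cast(HWND) -3 case from the original question
    auto hwndMessage = cast(Handle) -3;
    assert(hwndMessage.ptr == cast(void*) -3);
}
```

As the thread warns, reinterpreting integers as pointers like this is inherently unsafe; the struct only gives each handle kind a distinct type.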
Re: casting to structure
On Saturday, 24 June 2017 at 20:43:48 UTC, Igor Shirkalin wrote: struct A { void * data; // void * type is just for example // no matter what is here } I know that if I add a constructor this(int) struct A { void * p; this(int k) { p = cast(void*)k; } } auto a = cast(A) 23; // it works Is it possible without such a constructor?
casting to structure
Hello! I'm in trouble with the opCast function. Is it possible to cast some (integral) type to a user-defined structure? We have a structure: struct A { void * data; // void * type is just for example // no matter what is here } How can we define the opCast operator to make the following expression work properly? auto a = cast(A) 12; The task arose from the core.sys.windows module. This module defines a lot of Windows-specific types based on HANDLE: these are HWND, HDC, HBITMAP, etc. The problem is that all these types are identical, which is bad. When I redefine these types like: alias Typedef!(void*) HDC; I have another problem: HWND_MESSAGE = cast(HWND) -3; // Error: cannot cast expression -3 of type int to Typedef!(void*, null, null) So, is there any solution to this?
Re: DirectX bindings
On Thursday, 22 June 2017 at 09:09:40 UTC, evilrat wrote: On Sunday, 3 November 2013 at 05:27:24 UTC, evilrat wrote: https://github.com/evilrat666/directx-d I'm sorry to say that, but I have to quit the post of DirectX bindings maintainer. I haven't yet decided what to do with the dub package[1], but I'm in favor of completely deleting it so the next maintainer or the D foundation can step in; that way there should be little to no code breakage, and most users won't even notice the change. My current estimate is to drop it by the end of the month. I will probably continue to maintain the GitHub repository for a few months for my personal needs, though. [1] http://code.dlang.org/packages/directx-d Thanks for your DirectX bindings! I'm currently in the process of learning it.
Re: Simple c header => Dlang constants using mixins in compile time
On Sunday, 18 June 2017 at 16:02:38 UTC, Seb wrote: On Saturday, 17 June 2017 at 11:27:40 UTC, Igor Shirkalin wrote: On Saturday, 17 June 2017 at 11:23:52 UTC, Cym13 wrote: On Saturday, 17 June 2017 at 11:20:53 UTC, Igor Shirkalin wrote: [...] I'm sure others will have cleaner solutions, but as a quick hack you can read the file at compile time, modify it, and compile the D code on the go: [...] Thanks a lot! That's what I was looking for. FWIW there are tools for this as well, e.g. https://github.com/jacob-carlborg/dstep Thank you. I'm trying to do it myself for learning purposes.
Re: Simple c header => Dlang constants using mixins in compile time
On Saturday, 17 June 2017 at 11:23:52 UTC, Cym13 wrote: On Saturday, 17 June 2017 at 11:20:53 UTC, Igor Shirkalin wrote: [...] I'm sure others will have cleaner solutions, but as a quick hack you can read the file at compile time, modify it, and compile the D code on the go: [...] Thanks a lot! That's what I was looking for.
Re: Simple c header => Dlang constants using mixins in compile time
On Saturday, 17 June 2017 at 11:10:47 UTC, Igor wrote: On Saturday, 17 June 2017 at 10:56:52 UTC, Igor Shirkalin wrote: Hello! I have a simple C header file that looks like: #define Name1 101 #define Name2 122 #define NameN 157 It comes from the resource compiler, and I need all these constants to be available in my D program at compile time. It seems to me it is possible. I know I can simply write an external program (in Python, for example) that does it, but that means I would have to run it after every change, before each D compilation. Please, can anyone help direct me on how to realize this? Thank you in advance! Igor Shirkalin Maybe I am not quite understanding what you are asking, but can't you just use: enum Name1 = 101; enum Name2 = 122; ... No, I need the original header file to be used in other applications (say, the resource compiler). Therefore this file is primary. I think some pretty short code can be written in D that uses such a file to generate the constants (enum Name1 = 101) at compile time.
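The read-modify-compile idea from this thread can be sketched as a CTFE function plus a mixin. The header text is inlined below for a self-contained demo; in a real build you would pass -J&lt;dir&gt; to the compiler and use import("resource.h") instead (the file name is an assumption):

```d
import std.algorithm : splitter, startsWith;
import std.array : split;
import std.string : strip;

/// Turns "#define Name value" lines into "enum Name = value;" declarations.
string definesToEnums(string header)
{
    string result;
    foreach (line; header.splitter('\n'))
    {
        auto s = line.strip;
        if (!s.startsWith("#define "))
            continue;
        auto parts = s["#define ".length .. $].split; // ["Name", "value"]
        if (parts.length == 2)
            result ~= "enum " ~ parts[0] ~ " = " ~ parts[1] ~ ";\n";
    }
    return result;
}

// With a string import path (-J), this would be:
//   mixin(definesToEnums(import("resource.h")));
enum header = "#define Name1 101\n#define Name2 122\n#define NameN 157\n";
mixin(definesToEnums(header));

static assert(Name1 == 101 && Name2 == 122 && NameN == 157);

void main() {}
```

This only handles the simple "#define NAME integer" shape from the question; real headers with expressions or macros need a real tool like dstep.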
Simple c header => Dlang constants using mixins in compile time
Hello! I have a simple C header file that looks like: #define Name1 101 #define Name2 122 #define NameN 157 It comes from the resource compiler, and I need all these constants to be available in my D program at compile time. It seems to me it is possible. I know I can simply write an external program (in Python, for example) that does it, but that means I would have to run it after every change, before each D compilation. Please, can anyone help direct me on how to realize this? Thank you in advance! Igor Shirkalin
Re: Strange expression found in std.variant
On Saturday, 3 June 2017 at 16:22:33 UTC, Francis Nixon wrote: When looking at std.variant I found the following line: return q{ static if (allowed!%1$s && T.allowed!%1$s) if (convertsTo!%1$s && other.convertsTo!%1$s) return VariantN(get!%1$s %2$s other.get!%1$s); }.format(tp, op); I was wondering what exactly the % signs were doing/what they are for? https://tour.dlang.org/tour/en/basics/alias-strings
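For the % parts specifically: these are std.format positional specifiers. %1$s formats the first argument (and may be reused any number of times), %2$s the second. A small illustration with made-up arguments standing in for tp and op:

```d
import std.format : format;

void main()
{
    // %1$s appears three times, so argument 1 is inserted three times;
    // %2$s inserts argument 2 once.
    auto s = "get!%1$s %2$s other.get!%1$s".format("int", "+");
    assert(s == "get!int + other.get!int");
}
```

The q{ ... } wrapper around the original snippet is a token string (what the linked tour page covers); the .format call then substitutes the type and operator into it to build code for a mixin.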
Re: Default class template parameter
On Saturday, 27 May 2017 at 19:30:40 UTC, Stanislav Blinov wrote: On Saturday, 27 May 2017 at 19:23:59 UTC, Igor Shirkalin wrote: [...] No, you'd have to at least write auto a = new ClassName!()(1.2); Or you could define a make function: auto makeClassName(T = double)(T value) { return new ClassName!T(value); } auto a = makeClassName(1.2); Thank you!
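Putting the two suggestions from this answer together into a runnable sketch:

```d
class ClassName(T = double)
{
    T value;
    this(T value) { this.value = value; }
}

/// IFTI deduces T from the argument, so no '!' is needed at the call site.
auto makeClassName(T = double)(T value)
{
    return new ClassName!T(value);
}

void main()
{
    auto a = makeClassName(1.2);     // T deduced as double
    auto b = new ClassName!()(1.2);  // empty instantiation picks the default
    assert(a.value == 1.2);
    assert(b.value == 1.2);
}
```

The make-function works because implicit function template instantiation (IFTI) applies to functions but not to class templates, which is exactly why `new ClassName(1.2)` alone does not compile.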
Default class template parameter
Hi, I'm trying to make a class template with a single template argument defaulted to some type. Is it possible to use the name of the class without specifying the template argument (no '!' operator)? Example: class ClassName(T=double) { this(T value) { /// do some stuff here } /// some other stuff.. } void main() { auto a = new ClassName(1.2); /// error: cannot deduce function from argument types !()(int) auto b = new ClassName!double(1.2); /// OK } It seems the compiler treats 'ClassName' as a function, but it is obvious that it should treat it as 'ClassName!double'.
Re: Destructor attribute inheritance, yea or nay?
On Friday, 26 May 2017 at 17:48:24 UTC, Stanislav Blinov wrote: I'm sorry, I meant explicitly. I hope it is not possible. It is very possible, and it should be possible, otherwise we couldn't even think about deterministic destruction. Hm, you've said it is the decision of the GC (see below), so how can it be deterministic? Another side, clearly demonstrated by my second post, is that non-deterministic destruction cannot be @safe, period. Because when the GC collects and calls destructors, it calls all of them, regardless of their @safe status, even when the collection is triggered inside a @safe function. Doesn't that mean that if the compiler can't call the inherited destructor, regardless of the GC, it must be an error? 1) Destructors are not "inherited" in D. Each derived class has its own independent destructor. That's why they don't inherit any attributes either. 2) The compiler doesn't call destructors for classes. It is done either manually (by calling destroy()) or by the GC. Look at the example in the second post: I'm in a @safe function (important()), and I need some memory. I ask for it, and the GC decides to do a collection before giving me memory. During that collection it calls a @system destructor. So the language and runtime are effectively in disagreement: the language says "no @system calls in a @safe context", the runtime says "whatever, I need to call those destructors". Your example is very interesting, and it raises some questions. First, why does the 'oblivious' function not free the Malicious object (GC or no GC)? What if the 'important' function needs some "external and not safe" resource used by 'oblivious'? Is it all about @safe stopping that from being allowed? If so, @safe is a really important feature of D. Second, same as the first; it looks like I got it.
Re: Destructor attribute inheritance, yea or nay?
On Friday, 26 May 2017 at 17:17:39 UTC, Stanislav Blinov wrote: Destructors of derived classes are called implicitly on finalization. The net effect is that such finalization adopts the weakest set of attributes among all the destructors it calls. I'm sorry, I meant explicitly. I hope it is not possible. There are two sides to this problem: one is that we cannot have deterministic destruction (i.e. manually allocate/free classes) while keeping attribute inference: under current rules, finalization has to be @system. This one can be tackled if the language provided strict rules for attribute inheritance in destructors. The other side, clearly demonstrated by my second post, is that non-deterministic destruction cannot be @safe, period. Because when the GC collects and calls destructors, it calls all of them, regardless of their @safe status, even when the collection is triggered inside a @safe function. Doesn't that mean that if the compiler can't call the inherited destructor, regardless of the GC, it must be an error?
Re: Destructor attribute inheritance, yea or nay?
On Monday, 22 May 2017 at 17:05:06 UTC, Stanislav Blinov wrote: I'd like to hear what you guys think about this issue: https://issues.dlang.org/show_bug.cgi?id=15246 [...] If your destructor is not @safe and @nogc, why not make it the same, or call the inherited destructor implicitly?
Re: [OT] I found a bug in Excel 2016
On Friday, 26 May 2017 at 14:38:41 UTC, Steven Schveighoffer wrote: Here is my story: http://www.schveiguy.com/blog/2017/05/how-to-report-a-bug-to-microsoft/ -Steve A fundamental story. You have achieved the 8th level :) Sh.Igor
Re: Questionnaire
On Wednesday, 8 February 2017 at 18:27:57 UTC, Ilya Yaroshenko wrote: 1. a + d + new projects 2. C++ + Python 3. Because of D's strength with LDC 4. Have you used one of the following Mir projects in production: No. The lack of numeric methods. To Ilya personally: if you try to implement a primitive and fast Gaussian elimination, you will see that you don't need anything from Mir. It is good work for a diploma thesis. ... 5. What does D miss to be a commercially successful language? An IDE only. But I have found Visual D good enough. D reminds me of Watcom C++. Debugging tools are for losers :) ... Igor Shirkalin
Re: [Semi-OT] I don't want to leave this language!
On Monday, 5 December 2016 at 20:25:00 UTC, Ilya Yaroshenko wrote: Hi e-y-e, The main problem with D for production is its runtime. The GC, DRuntime, and Phobos are a big constraint for real-world software production. The almost only thing I do is real-world software production (basically math and optimized math methods). D with its GC, DRuntime, and Phobos makes it what I really like and need. I write my own libs for my own needs. Perhaps some day I will use Mir, but I don't care whether it is with or without D's standard libraries. Igor Shirkalin.
Re: the best language I have ever met(?)
On Monday, 5 December 2016 at 17:27:21 UTC, Igor Shirkalin wrote: On Monday, 5 December 2016 at 16:39:33 UTC, eugene wrote: On Monday, 5 December 2016 at 16:07:41 UTC, Igor Shirkalin wrote: I didn't count, but it's about ten thousand a year, i.e. nothing. if you earned nothing using the D language, why do you recommend it?))) People usually earn money using programming languages. Some people have nothing to do with science. Some of them are God-addicted. I don't think D is here. We are out of here. We should go to Facebook and keep it there, if you take it. That's it. I think we have to stop.
Re: the best language I have ever met(?)
On Monday, 5 December 2016 at 16:39:33 UTC, eugene wrote: On Monday, 5 December 2016 at 16:07:41 UTC, Igor Shirkalin wrote: I didn't count, but it's about ten thousand a year, i.e. nothing. if you earned nothing using the D language, why do you recommend it?))) People usually earn money using programming languages. Some people have nothing to do with science. Some of them are God-addicted. I don't think D is here. We are out of here. We should go to Facebook and keep it there, if you take it. That's it.
Re: the best language I have ever met(?)
On Saturday, 3 December 2016 at 15:02:35 UTC, eugene wrote: On Friday, 18 November 2016 at 17:54:52 UTC, Igor Shirkalin wrote: That was the preface. Now I have a server written in D for a pretty ancient C++ client. Most things are three times shorter in size and clearer (@clear? suffix). All programming paradigms were used. I have the same text in Russian, but who cares about Russian(s)? The meaning of all of that is: a powerful, attractive language with sufficient infrastructure and a future. Just use it. p.s. Excuse my primitive English. how much money did you earn using the D language? I didn't count, but it's about ten thousand a year, i.e. nothing. You earn ten times more than me. Ask me anything more.
Re: the best language I have ever met(?)
On Monday, 28 November 2016 at 16:15:23 UTC, Jonathan M Davis wrote: That's what pragma(inline, true) is for. And if someone wants a different solution that's completely compile-time and doesn't work with variables, then fine. I'm talking about adding something to the standard library, and for that, I think that a solution that is as close as possible to being identical to simply declaring the static array with the length is what would be appropriate. ... I'm not married to the syntax. I tried that syntax, but I couldn't figure out how to get it to work with runtime values. Can I insert my own opinion about static arrays whose size is unknown to the programmer? Being a practical developer, I can't imagine a situation where it is really needed. I hope D is not for theoretical goals only, but for practical ones first. If we know the size of our future array, we tell that to the compiler. If we don't know the size of our future static array, we write an external generator to produce it. I really don't know of places where I want a static array of unknown size inline (perhaps except for debugging purposes). Concluding: the designer knows he has achieved perfection when there is nothing left to remove, not when there is nothing left to add. Igor Shirkalin
Re: the best language I have ever met(?)
On Friday, 25 November 2016 at 07:17:18 UTC, MGW wrote: On Wednesday, 23 November 2016 at 18:54:35 UTC, Igor Shirkalin wrote: Igor, there is a good Russian-speaking forum: https://vk.com/vk_dlang. There are articles on GUI and other aspects of dlang. Thank you, I'll take a look at it for sure.
Re: the best language I have ever met(?)
On Friday, 25 November 2016 at 14:51:52 UTC, Jonathan M Davis wrote: I think you may write it (I mean actual D) with using some template like this: auto array = static_array!uint(1, 2, 3, 4) Could you please help to write down this template in the best and clear manner? That's easy. The problem is if you want it to have the same semantics as uint[4] arr = [1, 2, 3, 4]; In particular, VRP (Value Range Propagation) is a problem. This compiles ubyte[4] arr = [1, 2, 3, 4]; because each of the arguments is known to fit in a ubyte. However, making auto arr = staticArray!ubyte(1, 2, 3, 4); do the same without forcing a cast is difficult. And if you force the cast, then it's not equivalent anymore, because something like ubyte[4] arr = [1, 2, 3, 900]; would not compile. And surprisingly, having the function take a dynamic array doesn't fix that problem (though maybe that's something that we could talk the dmd devs into improving), e.g. auto arr = staticArray!ubyte([1, 2, 3, 4]); doesn't compile either. To my mind it is not a problem, because when you write, you think about what you write. Moreover, the compiler will always tell you that you are wrong if you try to make 256 a ubyte implicitly. Why doesn't it compile? Is it because [1, 2, 3, 4] is int[] by default? The most straightforward implementations are something like T[n] staticArray(T, size_t n)(auto ref T[n] arr) { return arr; } or auto staticArray(Args...)(Args args) { CommonType!Args[Args.length] arr = [args]; return arr; } Great! Thank you. I should take a more precise look at std.traits. but I don't know if the VRP problem is solvable or not without some compiler improvements. If there's a clever enough implementation to get VRP with a function like this with the current language, I haven't figured it out yet. - Jonathan M Davis As I noted, to my mind it is not a hindrance to write things deliberately. - Igor Shirkalin
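The two straightforward implementations from this post, assembled into a compilable sketch (the VRP caveat described above still applies: arguments must already fit the deduced element type):

```d
import std.traits : CommonType;

/// Deduces the element type and length from a static-array argument.
T[n] staticArray(T, size_t n)(auto ref T[n] arr)
{
    return arr;
}

/// Builds a static array from a list of values of a common type.
auto staticArray(Args...)(Args args)
{
    CommonType!Args[Args.length] arr = [args];
    return arr;
}

void main()
{
    auto a = staticArray(1, 2, 3, 4); // variadic overload, int[4]
    static assert(is(typeof(a) == int[4]));
    assert(a == [1, 2, 3, 4]);

    auto b = staticArray!(uint, 3)([1u, 2, 3]); // explicit overload, uint[3]
    static assert(is(typeof(b) == uint[3]));
}
```

As noted in the thread, `staticArray!ubyte(1, 2, 3, 4)` does not get the VRP behavior of `ubyte[4] arr = [1, 2, 3, 4];` with either overload.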
Re: the best language I have ever met(?)
On Wednesday, 23 November 2016 at 18:58:55 UTC, ketmar wrote: We can define a static array without counting the elements as follows: enum array_ = [1u,2,3,4]; uint[array_.length] static_array = array_; there are workarounds, of course. yet i'll take mine `uint[$] a = [1u,2,3,4];` over that quoted mess at any time, without a second thought. ;-) I think you can write it (I mean in actual D) using some template like this: auto array = static_array!uint(1, 2, 3, 4) Could you please help write down this template in the best and clearest manner?
Re: the best language I have ever met(?)
On Tuesday, 22 November 2016 at 00:08:05 UTC, ketmar wrote: On Monday, 21 November 2016 at 23:49:27 UTC, burjui wrote: Though I would argue that it's better to use '_' instead of '$' to denote a deduced fixed size, it seems more obvious to me: int[_] array = [ 1, 2, 3 ]; alas, `_` is a valid identifier, so `enum _ = 42; int[_] a;` is perfectly valid. the dollar is simply the most logical non-identifier character. We can define a static array without counting the elements as follows: enum array_ = [1u,2,3,4]; uint[array_.length] static_array = array_;
Re: the best language I have ever met(?)
On Saturday, 19 November 2016 at 20:54:32 UTC, ketmar wrote: On Saturday, 19 November 2016 at 17:12:13 UTC, Igor Shirkalin wrote: string s = "%(%s, %)".format(a); writefln(s); } Accepted. Is it really necessary to call 'writefln'? I mean the 'f'. no. it's a leftover from the code without format. it originally was `writefln("%(%s, %)", a);`, but i wanted to show the `format` function too, and forgot to remove the `f`. actually, it is a BUG to call `writefln` here, 'cause who knows, `s` may contain '%', and then boom! all hell breaks loose. ;-) Got it! Thanks.
Re: the best language I have ever met(?)
On Saturday, 19 November 2016 at 00:28:36 UTC, Stefan Koch wrote: import std.stdio; import std.format; void main () { uint[$] a = [42, 69]; string s = "%(%s, %)".format(a); writefln(s); } Please don't post non-D. People might use it and then complain that it does not work. Let these people complain. ;)
Re: the best language I have ever met(?)
On Friday, 18 November 2016 at 21:28:44 UTC, ketmar wrote: On Friday, 18 November 2016 at 20:31:57 UTC, Igor Shirkalin wrote: After 2 hours of brain-breaking (as a D newbie) I came to: uint_array.map!(v=>"%x".format(v)).join(", ") Why 2 hours? Because I started with the 'joiner' function and afterwards found 'join'. To my mind there should be a simpler form for this task in D (regarding the formatting). sure ;-) import std.stdio; import std.format; void main () { uint[$] a = [42, 69]; string s = "%(%s, %)".format(a); writefln(s); } Accepted. Is it really necessary to call 'writefln'? I mean the 'f'.
Re: the best language I have ever met(?)
On Friday, 18 November 2016 at 19:47:17 UTC, H. S. Teoh wrote: On Fri, Nov 18, 2016 at 11:43:49AM -0800, H. S. Teoh via Digitalmars-d-learn wrote: [...] Yes, I meant 'sentiments' as in опыта (experience), not as in сентметальность. :-) [...] Sorry, typo. I meant сентиментальности (sentimentality). But I think you understand what I mean. :-) T I think there's a bug: when I'm answering a message, if my recipient sends me a message before I press the 'send' button, my message is duplicated. A simple bug to repair.
Re: the best language I have ever met(?)
On Friday, 18 November 2016 at 19:47:17 UTC, H. S. Teoh wrote: Yes, I meant 'sentiments' as in опыта (experience), not as in сентметальность. :-) [...] Sorry, typo. I meant сентиментальности (sentimentality). But I think you understand what I mean. :-) Oh, I think you understand what you think I mean :)
Re: the best language I have ever met(?)
On Friday, 18 November 2016 at 19:43:49 UTC, H. S. Teoh wrote: I was a little bit afraid of my misunderstanding in terms of sentiments. You've got me right (I don't quite feel the meaning of that word in these non-Cyrillic letters :). But what I understand is the path you have walked and what I have in my mind. Yes, I meant 'sentiments' as in опыта (experience), not as in сентиментальность (sentimentality). :-) I used to take 'sentiments' to mean "сентиментальность", but "опыт - сын ошибок трудных" (Пушкин) ("experience is the son of difficult mistakes" - Pushkin) is what is really behind it :) Simple example about D: I spent two hours writing one line (borrowed from Python), related to lazy calculations, but finally I got it after deep thinking, and it was like understanding the Moon's alienation from the Earth. Great! Would you like to share the code snippet? Sure. Let's say we have uint_array, an array of values, and we need to get a string of these values in hex, separated by ', '. In Python it looks like ', '.join(map(hex, uint_array)) After 2 hours of brain-breaking (as a D newbie) I came to: uint_array.map!(v=>"%x".format(v)).join(", ") Why 2 hours? Because I started with the 'joiner' function and afterwards found 'join'. To my mind there should be a simpler form for this task in D (regarding the formatting). What is your use of D? Sadly, I have not been able to use D in a professional capacity. My coworkers are very much invested in C/C++ and have a very high level of skepticism toward anything else, in addition to resistance to adding new toolchains (much less languages) to the current projects. So my use of D has mainly been in personal projects. Same here. But my coworkers are not addicted to programming at all :) I do contribute to Phobos (the standard library) every now and then, though. It's my way of "contributing to the cause" in the hope that one day D may be more widespread and accepted by the general programming community. I don't hope for "D some day"; I'm sure of it (5 to 30 years). The idea is "I D", not "I C++" :)
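Both variants discussed in this thread, side by side, as a runnable snippet:

```d
import std.algorithm : map;
import std.array : join;
import std.format : format;

void main()
{
    uint[] uint_array = [255, 16, 42];

    // The map + join version worked out in the thread:
    auto viaMap = uint_array.map!(v => "%x".format(v)).join(", ");
    assert(viaMap == "ff, 10, 2a");

    // The compound format specifier joins in a single step:
    auto viaCompound = "%(%x, %)".format(uint_array);
    assert(viaCompound == "ff, 10, 2a");
}
```

The compound specifier "%(...%)" applies the inner format to each element of the range and uses the trailing text as a separator, which is the "simpler form" asked about above.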
Re: the best language I have ever met(?)
On Friday, 18 November 2016 at 19:43:49 UTC, H. S. Teoh wrote: I was a little bit afraid of my misunderstanding in terms of sentiments. You've got me right (I don't quite feel the meaning of that word in these non-Cyrillic letters :). But what I understand is the path you have walked and what I have in my mind. Yes, I meant 'sentiments' as in опыта (experience), not as in сентиментальность (sentimentality). :-) I used to take 'sentiments' to mean "сентиментальность", but "опыт - сын ошибок трудных" (Пушкин) ("experience is the son of difficult mistakes" - Pushkin) is what is really behind it :) Simple example about D: I spent two hours writing one line (borrowed from Python), related to lazy calculations, but finally I got it after deep thinking, and it was like understanding the Moon's alienation from the Earth. Great! Would you like to share the code snippet? Sure. We have an array of uint, and we need to get a string of these values in hex, separated by ', '. In Python it looks like ', '.join(map(hex, array)); in D: array.map!(v=>"%x".format(v)).join(", ") [...] What is your use of D? For me it is a tool to develop other tools. [...] Sadly, I have not been able to use D in a professional capacity. My coworkers are very much invested in C/C++ and have a very high level of skepticism toward anything else, in addition to resistance to adding new toolchains (much less languages) to the current projects. So my use of D has mainly been in personal projects. I do contribute to Phobos (the standard library) every now and then, though. It's my way of "contributing to the cause" in the hope that one day D may be more widespread and accepted by the general programming community. T
Re: the best language I have ever met(?)
On Friday, 18 November 2016 at 18:14:41 UTC, H. S. Teoh wrote: Welcome, Igor! Hello, Teoh! Your sentiments reflect mine years ago when I first discovered D. I came from a C/C++/Perl background. It was also Andrei's book that got me started; in those early days documentation was scant and I didn't know how to write idiomatic D code. But after I found TDPL, the rest was history. :-) I was a little bit afraid of my misunderstanding in terms of sentiments. You've got me right (I don't quite feel the meaning of that word in these non-Cyrillic letters :). But what I understand is the path you have walked and what I have in my mind. Simple example about D: I spent two hours writing one line (borrowed from Python), related to lazy calculations, but finally I got it after deep thinking, and it was like understanding the Moon's alienation from the Earth. We have a few Russians on this forum, and I can understand some Russian too. Though on this mailing list English is the language to use. Sure, I don't have any doubt of it. I hope to be one of the understandable Russians here :) Your English is understandable. That's good enough, I think. :-) Thank you, Teoh. That is very important for me to hear. What is your use of D? For me it is a tool to develop other tools. Igor
the best language I have ever met(?)
The simpler, the better. After reading "The D Programming Language" by A. Alexandrescu two years ago, I found my past dream. It's the theory to start with. That book should be read at least twice, especially if you have an asm/C/C++/Python3/math/physics background and dealt with the Watcom/Symantec C/C++ compilers (best regards to Walter Bright) with a very high optimization goal. No stupid questions, just doing things. That was the preface. Now I have a server written in D for a pretty ancient C++ client. Most things are three times shorter in size and clearer (@clear? suffix). All programming paradigms were used. I have the same text in Russian, but who cares about Russian(s)? The meaning of all of that is: a powerful, attractive language with sufficient infrastructure and a future. Just use it. p.s. Excuse my primitive English.