Re: Help using lubeck on Windows
On Friday, 23 February 2018 at 12:13:11 UTC, Arredondo wrote: Help using lubeck on Windows I'd like to experiment with linear algebra in D, and it looks like lubeck is the way to do it right now. However, I'm having a hard time dealing with the CBLAS and LAPACK dependencies. I downloaded the OpenBLAS binaries for Windows (libopenblas.dll), but I am clueless as to what to do with them. I can't find an example of how to link them/what commands to pass to dmd. Any help deeply appreciated. openblas.net contains a precompiled OpenBLAS library for Windows. It may not be optimised well for exactly your CPU, but it is fast enough to start. Put the library files into your project and add the openblas library to your project's dub configuration. The .dll files are dynamic; you also need a .lib/.a to link with. OpenBLAS contains both the CBLAS and LAPACK APIs by default. We definitely need to add an example for Windows. Best, Ilya
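As a sketch of the dub side (the package name, lubeck version, and library location are illustrative assumptions, not taken from the thread), a dub.json that links against openblas and points the linker at the directory holding the import library might look like:

```json
{
    "name": "lubeck-example",
    "dependencies": {
        "lubeck": "~>0.1.0"
    },
    "libs": ["openblas"],
    "lflags-windows": ["/LIBPATH:$PACKAGE_DIR"]
}
```

Here `libs` adds `openblas.lib` to the link step, and the `-windows` suffix restricts the linker flag to Windows builds; the `.dll` still has to be next to the executable (or on `PATH`) at run time.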
Re: Difference in reduce in std and mir
On Tuesday, 26 December 2017 at 16:12:07 UTC, Seb wrote: On Tuesday, 26 December 2017 at 15:56:19 UTC, Vino wrote: Hi All, What is the difference between std.algorithm.reduce and mir.ndslice.algorithm.reduce. From, Vino.B Mir's reduce works on Slices whereas Phobos's reduce works on Arrays/Ranges. See also: http://docs.algorithm.dlang.io/latest/mir_ndslice_slice.html Mir's reduce works for arrays and ranges too. The difference is how it works for Slices: it reduces all dimensions of the top dimension pack. -- Ilya
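A minimal sketch of the difference in calling convention (the module layout and the seed-first signature are assumed from the mir-algorithm docs of that era and may differ between releases):

```d
import mir.ndslice;
import mir.ndslice.algorithm : reduce; // assumed module path

void main()
{
    auto matrix = iota(2, 3).slice; // [[0, 1, 2], [3, 4, 5]]
    // Mir's reduce takes the seed first, then the slice; it folds every
    // element of the top dimension pack, not just the rows.
    auto sum = reduce!"a + b"(size_t(0), matrix);
    assert(sum == 15);
}
```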
Why 2 ^^ 1 ^^ 2 = 2?
... I thought it should be (2 ^^ 1) ^^ 2 = 4
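For reference, `^^` is right-associative in D (like mathematical exponentiation), so the expression groups from the right:

```d
void main()
{
    static assert(2 ^^ 1 ^^ 2 == 2 ^^ (1 ^^ 2)); // 1 ^^ 2 == 1, so the result is 2
    static assert((2 ^^ 1) ^^ 2 == 4);           // explicit parentheses change the grouping
}
```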
Re: @nogc formattedWrite
On Saturday, 7 October 2017 at 18:14:00 UTC, Nordlöw wrote: Is it currently possible to somehow do @nogc formatted output to string? I'm currently using my `pure @nogc nothrow` array-container `CopyableArray` as @safe pure /*TODO nothrow @nogc*/ unittest { import std.format : formattedWrite; const x = "42"; alias A = CopyableArray!(char); A a; a.formattedWrite!("x : %s")(x); assert(a == "x : 42"); } but I can't tag the unittest as `nothrow @nogc` because of the call to `formattedWrite`. Is this because `formattedWrite` internally uses the GC for buffer allocations or because it may throw? It would be nice to be able to do formatted output in -betterC... Phobos code is not @nogc (that can be fixed), but it will never be nothrow. PRs with @nogc formatting functionality are welcome in Mir Algorithm. Cheers, Ilya
Re: Double ended arrays?
On Saturday, 7 October 2017 at 07:38:47 UTC, Chirs Forest wrote: I have some data that I want to store in a dynamic 2D array... I'd like to be able to add elements to the front of the array and access those elements with negative integers, as well as add numbers to the back that I'd access normally with positive integers. Is this something I can do, or do I have to build a container to handle what I want? Mir Algorithm [1] has 2D arrays. Elements can be added to the front/back of each dimension using the `concatenation` routine [2]. At the same time, it does not support negative indexes. [1] https://github.com/libmir/mir-algorithm [2] http://docs.algorithm.dlang.io/latest/mir_ndslice_concatenation.html#.concatenation Best regards, Ilya Yaroshenko
Re: how to use unknown size of array at compile time for further processing
On Sunday, 1 October 2017 at 14:23:54 UTC, thorstein wrote: [...] Sorry, I'm still really confused with the results from my function: [...] Replace arrayT ~= rowT; with arrayT ~= rowT.dup; Also, you may want to look into the ndslice package [1]. [1] https://github.com/libmir/mir-algorithm Best regards, Ilya Yaroshenko
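A minimal sketch of why the `.dup` matters (variable names borrowed from the thread; the buffer type is an assumption): if `rowT` is a buffer that gets reused between iterations, appending it without copying makes every row of `arrayT` alias the same memory:

```d
void main()
{
    double[3] rowT;        // reused row buffer
    double[][] arrayT;
    foreach (i; 0 .. 2)
    {
        rowT[] = i;
        // arrayT ~= rowT[];  // every appended row would alias rowT
        arrayT ~= rowT.dup;   // independent copy per row
    }
    assert(arrayT == [[0.0, 0, 0], [1.0, 1, 1]]);
}
```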
Re: problem with opIndex
On Friday, 29 September 2017 at 19:31:14 UTC, Joseph wrote: I am trying to have a multi-dimensional array and opIndex has to have both an arbitrary number of parameters and allow for slicing. You may want to look into ndslice package source code [1] --Ilya [1] https://github.com/libmir/mir-algorithm/blob/master/source/mir/ndslice/slice.d
Re: 24-bit int
On Saturday, 2 September 2017 at 03:29:20 UTC, EntangledQuanta wrote: On Saturday, 2 September 2017 at 02:49:41 UTC, Ilya Yaroshenko wrote: On Friday, 1 September 2017 at 19:39:14 UTC, EntangledQuanta wrote: Is there a way to create a 24-bit int? One that for all practical purposes acts as such? This is for 24-bit stuff like audio. It would respect endianness, allow for arrays int24[] that work properly, etc. Hi, Probably you are looking for the bitpack ndslice topology: http://docs.algorithm.dlang.io/latest/mir_ndslice_topology.html#.bitpack sizediff_t[] data; // creates a packed signed integer slice with max allowed value equal to `2^^24 - 1`. auto packs = data[].sliced.bitpack!24; `packs` has the same API as D arrays. The package is Mir Algorithm http://code.dlang.org/packages/mir-algorithm Best, Ilya Thanks. Seems useful. Just added the `bytegroup` topology. Released in v0.6.12 (will be available in DUB after a few minutes). http://docs.algorithm.dlang.io/latest/mir_ndslice_topology.html#bytegroup It is faster for your task than `bitpack`. Best regards, Ilya
Re: 24-bit int
On Friday, 1 September 2017 at 19:39:14 UTC, EntangledQuanta wrote: Is there a way to create a 24-bit int? One that for all practical purposes acts as such? This is for 24-bit stuff like audio. It would respect endianness, allow for arrays int24[] that work properly, etc. Hi, Probably you are looking for the bitpack ndslice topology: http://docs.algorithm.dlang.io/latest/mir_ndslice_topology.html#.bitpack sizediff_t[] data; // creates a packed signed integer slice with max allowed value equal to `2^^24 - 1`. auto packs = data[].sliced.bitpack!24; `packs` has the same API as D arrays. The package is Mir Algorithm http://code.dlang.org/packages/mir-algorithm Best, Ilya
Re: Best syntax for a diagonal and vertical slice
On Saturday, 22 July 2017 at 20:55:06 UTC, kerdemdemir wrote: We have an awesome way of creating slices like: a = new int[5]; int[] b = a[0..2]; But what about if I have a 2D array and I don't want to go vertical? Something like: int[3][3] matrix = [ [ 1, 2, 3 ], [ 4, 5, 6 ], [ 7, 8, 9 ] ]; I believe I can use the std.range function "roundRobin" (or maybe it won't work with a 2D array directly) for having a good-looking vertical slice which will have 1,4,7 or 2,5,8 or 3,6,9 in my example above. And what if I want to go diagonal, like 1,5,9 or 3,5,7 in the example above? Is there a good solution in std without using for loops? I have one more requirement for fulfilling the task that I am working on. These slices do not have to be the same size as the array. For example, in the example above the slice size could be 2 instead of 3. In this case I need to have slices like 1,5; 2,6; 4,8; 5,9 ... and so on for the diagonal case. Erdem Ps: Converting the 2D array to a 1D array is possible in my case. Hello Erdem, You may want to use the mir-algorithm DUB package. It is a D tensor library. https://github.com/libmir/mir-algorithm import mir.ndslice; auto slice = matrix[0].ptr.sliced(3, 3); auto row = slice[0]; auto col = slice[0 .. $, 0]; A lot of examples with diagonal and sub-diagonals can be found here http://docs.algorithm.dlang.io/latest/mir_ndslice_topology.html#.diagonal Best, Ilya
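A hedged sketch of the diagonal routines the docs link describes, applied to the 3x3 example from the question (the API is assumed from mir.ndslice.topology and mir.ndslice.dynamic and may vary between releases):

```d
import mir.ndslice;
import mir.ndslice.dynamic : reversed;

void main()
{
    auto m = iota([3, 3], 1).slice; // [[1,2,3],[4,5,6],[7,8,9]]
    assert(m.diagonal == [1, 5, 9]);                  // main diagonal
    assert(m.reversed!1.diagonal == [3, 5, 7]);       // anti-diagonal
    assert(m[0 .. $ - 1, 1 .. $].diagonal == [2, 6]); // shorter sub-diagonal
}
```

The last line shows the length-2 case from the question: slicing a sub-matrix first and taking its diagonal yields the shorter diagonals like 2,6.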
Re: ndslice summary please
On Thursday, 13 April 2017 at 15:00:16 UTC, Dejan Lekic wrote: On Thursday, 13 April 2017 at 10:00:43 UTC, 9il wrote: On Thursday, 13 April 2017 at 08:47:16 UTC, Ali Çehreli wrote: [...] The reasons to use mir-algorithm instead of std.range, std.algorithm, std.functional (when applicable): 1. It allows you to easily construct one- and multidimensional random access ranges. You may compare the `bitwise` implementation in mir-algorithm and Phobos. Mir's version is a few times smaller and does not have Phobos bugs like the non-mutable `front`. See also `bitpack`. 2. Mir devs care a lot about BetterC. 3. Slice is a universal, full-featured, multidimensional random access range. All RARs can be expressed through the generic Slice struct. 4. It is faster to compile and generates less template bloat. For example: slice.map!fun1.map!fun2 is the same as slice.map!(pipe!(fun1, fun2)). `map` and `pipe` are from mir-algorithm. It is all good, but I am sure many D programmers, myself included, would appreciate it if the shortcomings of Phobos were fixed instead of having a completely separate package with a set of features that overlap... I understand ndslice was at some point in the `experimental` package, but again, it would be good if you improved the existing Phobos stuff instead of providing a separate library with better implementation(s). Work on Phobos is useless for me because when I need something and I am ready to write/fix it, I have a few days, not the few months until an LDC release. DUB is more flexible; it allows overriding a version with a local path, for example. Finally, I think Phobos should be deprecated and split into dub packages.
Re: DIP-1000 and return
On Monday, 2 January 2017 at 15:28:57 UTC, Nordlöw wrote: Should I file a bug report? Yes please
Re: Why doesn't this chain of ndslices work?
On Sunday, 15 May 2016 at 12:30:03 UTC, Stiff wrote: On Sunday, 15 May 2016 at 08:31:17 UTC, 9il wrote: On Saturday, 14 May 2016 at 21:59:48 UTC, Stiff wrote: Here's the code that doesn't compile: import std.stdio, std.experimental.ndslice, std.range, std.algorithm; [...] Coming soon https://github.com/libmir/mir/issues/213#issuecomment-219271447 --Ilya Awesome, thanks to both of you. I think I can get what I want from mir.sparse. I was aware of it, but hadn't made the connection to what I'm trying to do for some reason. More generally, I'm eagerly looking forward to a more complete Mir. I've been using bindings to GSL for non-uniform (and quasi-) random number generation, and a cleaner interface would be great. I'll be watching on github. Done: http://docs.algorithm.dlang.io/latest/mir_ndslice_stack.html#.stack Sorry for the huge delay!
Re: Pointers vs functional or array semantics
On Saturday, 25 February 2017 at 11:06:28 UTC, data pulverizer wrote: I have noticed that some numerical packages written in D use pointer semantics heavily (not referring to packages that link to C libraries). I am in the process of writing code for a numerical computing library and would like to know whether there are times when addressing an array using pointers conveys performance benefits over using D's array or functional semantics? Pointers can behave like iterators, while to have an iterator on top of an array you need to store the array and an index. This is slower and may disable vectorisation. Iterator semantics are required to build multidimensional random access ranges (ndslice). ndslice uses iterators heavily in its internals. Iterator semantics helped to create many multidimensional Phobos analogs in a few dozen LOC, while Phobos has a few hundred LOC for the same functionality. Phobos functional semantics disable vectorisation and other optimisations. mir.ndslice.topology [1], mir.ndslice.algorithm [1] and mir.functional [1] in combination with other Mir modules and packages can be used instead of the Phobos functional utilities from std.algorithm, std.range, std.functional. You may want to use the new ndslice [1]. Slice!(Contiguous, [1], T*) can replace T[]. [1] https://github.com/libmir/mir-algorithm
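A hypothetical minimal sketch of the difference described above: an iterator-style range carries just a pointer that it bumps, whereas an array-backed range must carry the whole array plus an index and re-check bounds on each access:

```d
// Iterator-style range: the state is a pointer and a remaining count.
struct PtrRange(T)
{
    T* ptr;
    size_t len;
    bool empty() const { return len == 0; }
    ref T front() { return *ptr; }    // direct dereference, no bounds check
    void popFront() { ++ptr; --len; } // a single pointer bump
}

void main()
{
    auto data = [1, 2, 3, 4];
    auto r = PtrRange!int(data.ptr, data.length);
    int sum;
    foreach (x; r)
        sum += x;
    assert(sum == 10);
}
```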
Re: multi-dimensional arrays, not arrays of arrays
On Saturday, 18 February 2017 at 10:37:21 UTC, XavierAP wrote: Does D provide anything like this? Otherwise, was this ever considered and were reasons found not to have it? They are implemented as part of the Mir project. We call them ndslices. https://github.com/libmir/mir-algorithm Docs: http://docs.algorithm.dlang.io/ See also other Mir projects at https://github.com/libmir. std.experimental.ndslice is a deprecated version of mir.ndslice. std.experimental.ndslice provides only numpy-like tensors; mir.ndslice provides all kinds of tensors. Sparse tensors can be found at https://github.com/libmir/mir Best, Ilya
Re: Initialization of dynamic multidimensional array
On Sunday, 5 February 2017 at 20:33:06 UTC, berni wrote: With X not known at compile time: auto arr = new int[][](X,X); for (int i=0;i<X;i++) arr[i][] = -1; Is there anything better for this? I mean, the program will fill the array with zeroes, just to overwrite all of them with -1. That's wasted execution time and doesn't feel D-ish to me. You may want to use the new ndslice for multidimensional algorithms http://docs.algorithm.dlang.io/latest/mir_ndslice.html
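One Phobos-only way to avoid the double initialization, sketched with `std.array.uninitializedArray` (which skips the zero-fill entirely):

```d
import std.array : uninitializedArray;

void main()
{
    size_t X = 4; // stands in for a value not known at compile time
    auto arr = uninitializedArray!(int[][])(X, X); // no zero-fill
    foreach (row; arr)
        row[] = -1; // the only initialization pass
    assert(arr[0][0] == -1 && arr[X - 1][X - 1] == -1);
}
```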
Re: BitArray Slicing
On Wednesday, 21 December 2016 at 12:00:57 UTC, Ezneh wrote: On Wednesday, 21 December 2016 at 11:49:06 UTC, Ilya Yaroshenko wrote: [...] Thanks, I'll check that solution to see if it fits my needs. As an off-topic question, is there any plan in Mir to implement the Tiny Mersenne Twister [1] algorithm (or a wrapper for it)? [1] http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/TINYMT/index.html You are the first who is interested in TinyMT; feel free to open a PR in Mir Random https://github.com/libmir/mir-random TinyMT should not be big: only the engine itself is required (without the floating point stuff and array generators).
Re: BitArray Slicing
On Wednesday, 21 December 2016 at 09:08:51 UTC, Ezneh wrote: Hi, in one of my projects I have to get a slice from a BitArray. I am trying to achieve that like this: void foo(BitArray ba) { auto slice = ba[0..3]; // Assuming it has more than 4 elements } The problem is that I get an error: "no operator [] overload for type BitArray". Is there any other way to get a slice from a BitArray? Thanks, Ezneh. Mir allows you to define a simple alternative to BitArray: https://github.com/libmir/mir

struct BitMap
{
    size_t* ptr;
    import core.bitop;
    bool opIndex(size_t index) const { return bt(ptr, index) != 0; }
    void opIndexAssign(bool val, size_t index)
    {
        if (val) bts(ptr, index);
        else btr(ptr, index);
    }
}

import mir.ndslice;

void main()
{
    auto arr = new size_t[3];
    auto sl = BitMap(arr.ptr).sliced(size_t.sizeof * 8 * arr.length);
    sl[4] = true;
    sl[100] = true;
    sl.popFrontN(3);
    assert(sl[1]);
    assert(sl[97]);
    auto sl2 = sl[1 .. 3]; // slicing
}
Re: BetterC classes
On Friday, 16 December 2016 at 16:24:18 UTC, Daniel N wrote: On Friday, 16 December 2016 at 15:17:15 UTC, Ilya Yaroshenko wrote: Hi, is it possible to use classes which do not have a monitor and other DRuntime stuff? Objects can be allocated/deallocated using allocators, but they are very complex for betterC mode (monitor, mutex, object.d dependency). Can we have something more primitive? Ilya extern(C++) class works pretty well in my experience, at least it gets rid of the monitor, yay! Thanks, DRuntime is required anyway, but this is a step forward.

ldmd2 -betterC -defaultlib= -O -inline -release -run cppclass.d

extern(C++) class Hw
{
    import core.stdc.stdio;
    this() { }
    void hw() { printf("hey\n"); }
}

import std.traits;

extern(C) int main()
{
    enum classSize = __traits(classInstanceSize, Hw);
    align(16) ubyte[classSize] payload = void;
    auto init = cast(const(ubyte)[]) typeid(Hw).initializer;
    foreach (i; 0 .. classSize)
        payload[i] = init[i];
    auto object = cast(Hw) payload.ptr;
    object.__ctor();
    object.hw();
    return 0;
}

Undefined symbols for architecture x86_64: "__D14TypeInfo_Class6__vtblZ", referenced from: __D8cppclass2Hw7__ClassZ in cppclass-7ed89bd.o
Re: extern(C++) struct - what is it?
On Friday, 16 December 2016 at 13:02:11 UTC, Nicholas Wilson wrote: On Friday, 16 December 2016 at 12:40:19 UTC, Ilya Yaroshenko wrote: [...] Like any other struct. [...] Thank you Nicholas
BetterC classes
Hi, is it possible to use classes which do not have a monitor and other DRuntime stuff? Objects can be allocated/deallocated using allocators, but they are very complex for betterC mode (monitor, mutex, object.d dependency). Can we have something more primitive? Ilya
extern(C++) struct - what is it?
It was in the DMD sources. How can it be used? Are methods virtual? How does multiple inheritance work? Can this be used in betterC mode? What is different from classes in C++? Thanks, Ilya
Re: [Semi-OT] I don't want to leave this language!
On Thursday, 8 December 2016 at 09:57:21 UTC, Guillaume Piolat wrote: On Wednesday, 7 December 2016 at 12:12:56 UTC, Ilya Yaroshenko wrote: R, Matlab, Python, Mathematica, Gauss, and Julia use C libs. --Ilya As a C lib, you have the possibility of not initializing the runtime, which leaves a part of phobos+druntime usable, and it's only a matter of avoiding TLS/globals and global ctors/dtors. No need to rewrite druntime this way. https://www.auburnsounds.com/blog/2016-11-10_Running-D-without-its-runtime.html -betterC is cleaner (link errors) but takes more work. The link requirement is a problem too. A numeric library for the languages listed above will never be accepted if it depends on a huge non-portable runtime like D's. --Ilya
Re: [Semi-OT] I don't want to leave this language!
On Wednesday, 7 December 2016 at 13:14:52 UTC, Kagamin wrote: On Monday, 5 December 2016 at 20:25:00 UTC, Ilya Yaroshenko wrote: Good D code should be nothrow, @nogc, and betterC. BetterC means that it must not require DRuntime to link and to start. Without a runtime you won't have asserts (C has them), bounds checking, array casts, string switch. Doesn't sound good to me. All of this can be done without a runtime. It is weird that we currently need the runtime for these features. And why is it a requirement at all? C and C++ already depend on their own quite large runtimes. Why shouldn't D? Exactly, C already has a runtime. We can reuse it instead of maintaining our own. I never said we must delete DRuntime. I just need an infrastructure without the runtime. And I am working on it. Ilya
Re: [Semi-OT] I don't want to leave this language!
On Wednesday, 7 December 2016 at 12:36:49 UTC, Dejan Lekic wrote: On Monday, 5 December 2016 at 20:25:00 UTC, Ilya Yaroshenko wrote: Good D code should be nothrow, @nogc, and betterC. BetterC means that it must not require DRuntime to link and to start. I started Mir as a scientific/numeric project, but it is going to be a replacement for Phobos to use D instead of / together with C/C++. Yes, perhaps it is so in your world... In my world I have absolutely no need for this. In fact we are perfectly happy with the Java runtime, which is many times bigger than druntime! Exactly, this is why D will never beat Java and Go. BTW, both languages have commercial support. Current D users are here because they are OK with the current D runtime. The number of users is small. I don't expect/want everyone to agree. I am targeting the ocean where we have no competitors except C/C++. GC for D is fine as a dub package or a compiler option. Ilya
Re: [Semi-OT] I don't want to leave this language!
On Wednesday, 7 December 2016 at 11:48:32 UTC, bachmeier wrote: On Wednesday, 7 December 2016 at 06:17:17 UTC, Picaud Vincent wrote: Considering scientific/numerical applications, I do agree with Ilya: it is mandatory to have zero overhead and a straightforward/direct interoperability with C. I am impressed by the Mir lib results and I think "BetterC" is very attractive/important. As always, it depends on what you are doing. It is mandatory for some numerical applications. R, Matlab, Python, Mathematica, Gauss, and Julia are used all the time and they are not zero overhead. A fast way to kill their usage would be to force their users to think about those issues. What matters is the available libraries, first and foremost, and whatever is second most important, it is a distant second. I write D code all the time for my research. I want to write correct code quickly. My time is too valuable to spend weeks writing code to cut the running time by a few minutes. That might be fun for some people, but it doesn't pay the bills. It's close enough to optimized C performance out of the box. But ultimately I need a tool that provides fast code, has libraries to do what I want, and allows me to write a correct program with a limited budget. This is, of course, not universal, but zero overhead is not important for most of the numerical code that is written. R, Matlab, Python, Mathematica, Gauss, and Julia use C libs. --Ilya
Re: [Semi-OT] I don't want to leave this language!
On Tuesday, 6 December 2016 at 17:00:35 UTC, Jonathan M Davis wrote: So, while there are certainly folks who would prefer using D as a better C without druntime or Phobos, I think that you're seriously overestimating how many folks would be interested in that. Certainly, all of the C++ programmers that I've worked with professionally would have _zero_ interest in D as a better C. - Jonathan M Davis My experience is completely orthogonal. --Ilya
Re: [Semi-OT] I don't want to leave this language!
On Tuesday, 6 December 2016 at 13:02:16 UTC, Andrei Alexandrescu wrote: On 12/6/16 3:28 AM, Ilya Yaroshenko wrote: On Tuesday, 6 December 2016 at 08:14:17 UTC, Andrea Fontana wrote: On Monday, 5 December 2016 at 20:25:00 UTC, Ilya Yaroshenko Phobos/Druntime are pretty good for a lot of projects. In theory And what seem to be the issues in practice with code that is not highly specialized? -- Andrei If the code is not highly specialized, there is no reason to spend resources on C/C++/D. A company will be happy with Python, Java, C#, Go or Swift. If one needs C/C++-level programming, one cannot use D because of DRuntime. Only a subset of D can be used. And the current problem is that we have no BetterC paradigm in the D specification. So, only crazy companies will consider D for large projects. Current D is successful in small console text routines. If a systems PL cannot be used like C for highly specialized code, it is not a real systems PL. With DRuntime and Phobos, D is going to compete with Java and Go. That is suicide for D, IMHO. On the other hand, BetterC is a direction where D can become popular among professionals and replace C/C++. Ilya
Re: [Semi-OT] I don't want to leave this language!
On Monday, 5 December 2016 at 20:49:50 UTC, e-y-e wrote: On Monday, 5 December 2016 at 20:25:00 UTC, Ilya Yaroshenko wrote: [...] You know, from the 15th of December I will have a month of free time, and I would love to get myself up to speed with Mir to contribute to it. If you don't mind me saying, I think Mir could be one of the best things for the future of D (along with LDC) and I'd be glad to help it on its way. Awesome! The main directions are: 1. std C++ analogs implemented in the betterC Dlang subset. 2. betterC analogs of existing Phobos modules. We can reuse DRuntime/Phobos code for initial commits. 3. Various numeric/sci software. 4. GPU algorithms. This requires dcompute to be a part of LDC. Thank you, Ilya
Re: [Semi-OT] I don't want to leave this language!
On Tuesday, 6 December 2016 at 08:14:17 UTC, Andrea Fontana wrote: On Monday, 5 December 2016 at 20:25:00 UTC, Ilya Yaroshenko Phobos/Druntime are pretty good for a lot of projects. In theory
Re: [Semi-OT] I don't want to leave this language!
Hi e-y-e, The main problem with D for production is its runtime. GC, DRuntime, and Phobos are a big constraint for real-world software production. Good D code should be nothrow, @nogc, and betterC. BetterC means that it must not require DRuntime to link and to start. I started Mir as a scientific/numeric project, but it is going to be a replacement for Phobos to use D instead of / together with C/C++. For example, Mir CPUID, Mir GLAS, and Mir Random are nothrow @nogc and do not need DRuntime to start/link. (Mir Random is not tested for BetterC, so a few dependencies may exist.) Mir Random covers C++11 random number generation, for example. If D code can be compiled into common C libraries like the Mir libs, then you can include it in an existing ecosystem. Currently this is possible only with LDC (and requires some programming techniques for now). I will be happy to see more Mir contributors [1]. Currently there are 5 Mir devs (not all are visible publicly). [1] https://github.com/libmir Cheers, Ilya
Re: Making floating point deterministic cross diffrent platforms/hardware
On Sunday, 20 November 2016 at 21:42:30 UTC, ketmar wrote: On Sunday, 20 November 2016 at 21:31:09 UTC, Guillaume Piolat wrote: I think you can roughly have that with ldc, always using SSE and the same rounding-mode. ARM. oops. No problem with ARM + x86 for double and float.
Re: Complex numbers are harder to use than in C
On Saturday, 19 November 2016 at 19:42:27 UTC, Marduk wrote: On Saturday, 19 November 2016 at 16:17:08 UTC, Meta wrote: On Saturday, 19 November 2016 at 09:38:38 UTC, Marduk wrote: [...] D used to support complex numbers in the language (actually it still does, they're just deprecated). This code should compile with any D compiler: cdouble[2][2] a = [[0 + 1i, 0], [0, 0 + 1i]]; Thank you! However, I am concerned that if this is deprecated, then I should not use it (it is not future-proof). I wonder why D dropped this syntax for complex numbers. It is very handy. You can use builtin complex numbers (cfloat/cdouble/creal). The idea of std.complex is wrong. Mir GLAS uses builtin complex numbers and I don't think they will really be deprecated. --Ilya
Re: Floating-point Modulus math.fmod
On Sunday, 6 November 2016 at 21:45:28 UTC, Fendercaster wrote: I'm not quite sure if this is the right forum to ask this question: I've been trying to implement the "floating-point modulus" function from the math library. Equivalently that's what I've tried in Python too. Problem is - the results are different and not even close. Please have a look at both snippets. Maybe someone can help me out: [...] Python uses 64-bit doubles. You may want to try with `double` and `core.stdc.tgmath` -- Ilya
Re: Neural Networks / ML Libraries for D
On Tuesday, 25 October 2016 at 13:56:45 UTC, Saurabh Das wrote: On Tuesday, 25 October 2016 at 11:55:27 UTC, maarten van damme wrote: There is mir https://github.com/libmir/mir which is geared towards machine learning, I don't know if it has anything about neural networks, I've yet to use it. If you're only interested in neural networks, I've used FANN (a C library) together with D and it worked very well. 2016-10-25 13:17 GMT+02:00 Saurabh Das via Digitalmars-d-learn < digitalmars-d-learn@puremagic.com>: Hello, Are there any good ML libraries for D? In particular, looking for a neural network library currently. Any leads would be appreciated. Thanks, Saurabh I saw mir but it didn't seem to have anything for NNs. I'll give FANN a try. https://github.com/ljubobratovicrelja/mir.experimental.model.rbf
Re: How to debug (potential) GC bugs?
On Sunday, 25 September 2016 at 16:23:11 UTC, Matthias Klumpp wrote: Hello! I am working together with others on the D-based appstream-generator[1] project, which is generating software metadata for "software centers" and other package-manager functionality on Linux distributions, and is used by default on Debian, Ubuntu and Arch Linux. [...] Probably related issue: https://issues.dlang.org/show_bug.cgi?id=15939
Re: ndslice and RC containers
On Thursday, 22 September 2016 at 20:23:57 UTC, Nordlöw wrote: On Thursday, 22 September 2016 at 13:30:28 UTC, ZombineDev wrote: ndslice (i.e. Slice(size_t N, Range) ) is a generalization of D's built-in slices (i.e. T[]) to N dimensions. Just like them, ... Please note that the support for creating ndslices via custom memory allocators (i.e. makeSlice) was added after dmd-2.071 was branched, so you either have to wait for dmd-2.072, or use a nightly build (which I recommend). How will this interact with DIP-1000, specifically, will ref-returning members of Slice() be scope-qualified? Thx! No, the `Slice` structure is not a data owner.
Re: ndslice and RC containers
On Thursday, 22 September 2016 at 13:30:28 UTC, ZombineDev wrote: On Thursday, 22 September 2016 at 12:38:57 UTC, Nordlöw wrote: [...] ndslice (i.e. Slice(size_t N, Range) ) is a generalization of D's built-in slices (i.e. T[]) to N dimensions. Just like them, it doesn't handle memory ownership - it just provides a view over existing data. Usually, there are two use cases: 1) Wrapping an existing array: 1.1) You allocate an array however you want (GC, RC, plain malloc) or just use a slice to an array returned from a third-party library 1.2) Wrap it via sliced [1] 1.3) Use it 1.4) Free it: 1.4.a) Let the GC reclaim it 1.4.b) Let the RC reclaim it 1.4.c) Free it manually. [...] Thank you ZombineDev, good answer!
Re: Fast multidimensional Arrays
On Monday, 29 August 2016 at 15:46:26 UTC, Steinhagelvoll wrote: On Monday, 29 August 2016 at 14:55:50 UTC, Seb wrote: On Monday, 29 August 2016 at 14:43:08 UTC, Steinhagelvoll wrote: It is quite surprising that there is this much of a difference, even when all run sequentially. I believe this might be specific to this small problem. You should definitely have a look at this benchmark for matrix multiplication across many languages: https://github.com/kostya/benchmarks#matmul With the recent generic GLAS kernel in mir, matrix multiplication in D is blazingly fast (it improved the existing results by at least 8x). Please note that this requires the latest LDC beta, which includes the fastMath pragma, and GLAS is still under development at mir: https://github.com/libmir/mir It's not really about multiplying matrices. I wanted to see how D compares for different tasks. If I actually want to do matrix multiplication I will use LAPACK or something of that nature. In this task the difference was much bigger compared to e.g. prime testing, which was about even. ndslice is an analog of numpy. It is more flexible compared with Fortran arrays. At the same time, if you want fast iteration please use Mir, which includes the upcoming ndslice.algorithm with the @fastmath attribute and a `vectorized` flag for `ndReduce`. Note that the in-memory representation is important for vectorization, e.g. for a dot product both slices should have strides equal to 1. Add also the -mcpu=native flag for LDC. http://docs.mir.dlang.io/latest/mir_ndslice_algorithm.html#ndReduce Best regards, Ilya
Re: MurmurHash3 behaviour
On Saturday, 20 August 2016 at 09:15:00 UTC, Ilya Yaroshenko wrote: On Friday, 19 August 2016 at 20:28:13 UTC, Cauterite wrote: Regarding the MurmurHash3 implementation in core.internal.hash, it is my understanding that: // assuming a and b are uints bytesHash([a, b], 0) == bytesHash([b], bytesHash([a], 0)) Is this correct? I'm just not quite certain of this property when I try to read the code myself, and I don't know much about hash algorithms. DRuntime has Murmurhash2, not 3. Oh, maybe I am wrong. Anyway, the 128-bit MurmurHash3 hash from std.digest would be much faster on 64-bit CPUs.
Re: MurmurHash3 behaviour
On Friday, 19 August 2016 at 20:28:13 UTC, Cauterite wrote: Regarding the MurmurHash3 implementation in core.internal.hash, it is my understanding that: // assuming a and b are uints bytesHash([a, b], 0) == bytesHash([b], bytesHash([a], 0)) Is this correct? I'm just not quite certain of this property when I try to read the code myself, and I don't know much about hash algorithms. DRuntime has Murmurhash2, not 3.
Re: I need a @nogc version of hashOf(). What are the options?
On Sunday, 7 August 2016 at 16:42:47 UTC, Gary Willoughby wrote: I need a @nogc version of hashOf(). Here's one i'm currently using but it's not marked as @nogc. https://github.com/dlang/druntime/blob/master/src/object.d#L3170 What are the options now? Is there anything D offers that I could use? I need a function that takes a variable of any type and returns a numeric hash. Current DMD master has MurmurHash3 digest.
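A sketch of that MurmurHash3 digest in use (module paths are assumed from the std.digest package of the 2.072 era and may differ between releases; attribute inference is assumed to give @nogc nothrow here):

```d
import std.digest.murmurhash : MurmurHash3;
import std.digest.digest : digest; // `std.digest` in newer releases

void hashIt() @nogc nothrow
{
    // 128-bit MurmurHash3, 64-bit-optimised variant. The result is a
    // static array, so no GC allocation is involved.
    ubyte[16] h = digest!(MurmurHash3!(128, 64))("some key");
    // use h ...
}
```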
Re: Library for serialization of data (with cycles) to JSON and binary
On Saturday, 6 August 2016 at 16:11:03 UTC, Neurone wrote: Is there a library that can serialize data (which may contain cycles) into JSON and a binary format that is portable across operating systems? JSON: http://code.dlang.org/packages/asdf Binary: http://code.dlang.org/packages/cerealed
Re: Expression template
On Tuesday, 26 July 2016 at 10:35:12 UTC, Etranger wrote: I'll have time 2 months from now as I'm getting married in 2 weeks :) Congratulations!
Re: Expression template
On Saturday, 23 July 2016 at 11:05:57 UTC, Etranger wrote: 1- Is there a cleaner way to do it? I had to use structs because I want everything to happen at compile time and on the stack (without GC). And I had to use string mixins because template mixins do not work the way I tried to use them (see the error on the last line). Yes, but it is more complicated in terms of multidimensional and generic abstraction. First we need to finish and test general matrix multiplication [2]. 2- Is there a safer way to do it (without using pointers)? Yes, but it is slower than pointers. See the Internal Binary Representation for the Slice structure [1]. Mir's BLAS will use pointers. 3- Do you think I'll hit a wall with this approach? No idea. At the same time I think that the C++ approach is not as flexible as what we can build for D. Currently we need a std.algorithm analog for the multidimensional case. It would be more flexible than expression templates, but it requires more complex architecture analysis. 4- Do you know any D libs that use expression templates for linear algebra? I don't know. ndslice provides operations like `a[] += b` [1]. I just opened a PR to optimise them using SIMD instructions [4]. Mir BLAS will have a more low-level API for matrix multiplication than Eigen. See the PR for gemm [2]. An expression-like API can be built on top of Mir's BLAS. I'll be very happy if I could contribute something useful for the D community :) That would be great! See Mir's issues [3]. Feel free to open a new one and start a discussion, or open a PR. Mir requires a lot of benchmarks, for example versus Eigen. Just an Eigen benchmark of Level 3 BLAS functionality (like matrix multiplication) with proper CSV console output would be very helpful. See the current charts [5] and [6]. [1] http://dlang.org/phobos/std_experimental_ndslice_slice.html#.Slice [2] https://github.com/libmir/mir/pull/255 [3] https://github.com/libmir/mir/issues [4] https://github.com/dlang/phobos/pull/4647 [5] https://s3.amazonaws.com/media-p.slid.es/uploads/546207/images/2854632/Untitled_2.004.png [6] https://s3.amazonaws.com/media-p.slid.es/uploads/546207/images/2854640/Untitled_2.006.png Best regards, Ilya
Re: get number of columns and rows in an ndarray.
On Wednesday, 15 June 2016 at 21:51:25 UTC, learner wrote: Hi, How can I get the number of cols and rows in an ndarray that has already been created? learner Also `sl.length!0`, `sl.length!1`, etc. --Ilya
Re: ndslice: convert a sliced object to T[]
On Wednesday, 15 June 2016 at 14:14:23 UTC, Seb wrote: ``` T[] a = slice.ptr[0.. slice.elementsCount]; ``` This would work only for slices with a contiguous memory representation and positive strides. -- Ilya
Re: Wrap array into a range.
On Saturday, 5 March 2016 at 16:28:51 UTC, Alexandru Ermicioi wrote: I have to pass an array to a function that accepts an input range. Therefore I need to transform somehow array into an input range. Is there a range that wraps an array in standard library? You just need to import std.array. --Ilya
Re: foreach( i, e; a) vs ndslice
On Monday, 18 January 2016 at 23:33:53 UTC, Jay Norwood wrote: I'm playing with the example below. I noticed a few things. 1. The ndslice didn't support the extra index, i, in the foreach, so had to add extra i,j. 2. I couldn't figure out a way to use sliced on the original 'a' array. Is slicing only available on 1 dim arrays? 3. Sliced parameter order is different than multi-dimension array dimension declaration.

import std.stdio;
import std.experimental.ndslice.slice;

void main() {
    int[4][5] a = new int[20];
    foreach(i, ref r; a) {
        foreach(j, ref c; r) {
            c = i+j;
            writefln("a(%d,%d)=%s", i, j, c);
        }
    }
    writefln("a=%s", a);

    auto b = new int[20].sliced(5,4);
    int i = 0;
    foreach(ref r; b) {
        int j = 0;
        foreach(ref c; r) {
            c = i+j;
            writefln("b(%d,%d)=%s", i, j, c);
            j++;
        }
        i++;
    }
    writefln("b=%s", b);
}

Hi, 1. You can use std.range.enumerate or just use a normal foreach:

foreach(i; 0..slice.length!0) {
    /// use slice[i] ...
}

2. Yes (a 2D D array is an array of arrays, so a slice of it would be a slice composed of arrays). 3. The order is correct:

void main() {
    auto a = new long[][](5, 4);
    auto b = new long[20].sliced(5, 4);
    foreach(i, ref r; a) {
        foreach(j, ref c; r) {
            c = i+j;
            b[i, j] = c;
            writefln("a(%d,%d)=%s", i, j, c);
            writefln("b(%d,%d)=%s", i, j, b[i, j]);
        }
    }
    writefln("a=%s", a);
    writefln("b=%s", b);
}

Ilya
Re: c style casts
On Friday, 15 January 2016 at 10:16:41 UTC, Warwick wrote: I thought C style casts were not supported? But when I accidentally did int i; if (uint(i) < length) it compiled and worked fine. Why's that? This is not a cast; it is a call to the constructor `uint(int x)`. At the same time, `uint(somethingTypeOfLong)` would not work: an explicit cast would be required. --Ilya
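A small self-contained sketch of this distinction (the variable names are illustrative):

```d
import std.stdio;

void main()
{
    int i = -1;

    // Constructor-style conversion: int implicitly converts to uint,
    // so this compiles; the bit pattern is simply reinterpreted.
    uint u = uint(i);
    assert(u == uint.max);

    long l = 42;
    // uint u2 = uint(l); // would not compile: long does not implicitly
    //                    // convert to uint (narrowing conversion)
    uint u2 = cast(uint) l; // an explicit cast is required instead
    assert(u2 == 42);

    writeln("ok");
}
```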
Re: Scale-Hierarchy on ndslice
On Wednesday, 13 January 2016 at 18:09:06 UTC, Per Nordlöw wrote: On Wednesday, 13 January 2016 at 08:31:58 UTC, Ilya Yaroshenko wrote: If you describe additional required features for ndslice, I will think how they can be implemented in the Mir/Phobos. --Ilya Ok, great. BTW: What is Mir? https://github.com/DlangScience/mir --Ilya
Re: Scale-Hierarchy on ndslice
On Tuesday, 12 January 2016 at 21:48:39 UTC, Per Nordlöw wrote: Has anybody been thinking about adding a scale-hierarchy structure on top of ndslice? I need this to implement some cool signal/image processing algorithms in D. When processing an image this structure is called a mipmap. If you describe the additional features required of ndslice, I will think about how they can be implemented in Mir/Phobos. --Ilya
Re: ndslice, using a slice in place of T[] in template parameters
On Monday, 11 January 2016 at 00:39:04 UTC, Jay Norwood wrote: On Sunday, 10 January 2016 at 23:31:47 UTC, Ilya Yaroshenko wrote: Just use normal arrays for the buffer (median accepts an array as its second argument for optimisation reasons). ok, I think I see. I created a slice(numTasks, bigd) over an allocated double[] dbuf, but slb[task] will be returning some struct instead of the double[] that I need in this case. If I add .array to the Slice, it does compile and execute, but slower than using the buffer directly. medians[i] = median(vec, slb[task].array); parallel time medians msec:113 original version using the computed slice of the original allocated dbuf: medians[i] = median(vec, dbuf[j .. k]); parallel time medians msec:85 The .array appears to make a copy. Is there some other call in ndslice to return the double[] slice of the original array? I will add such a function, but it is not safe in general (a Slice can have strides not equal to 1), so it is something of a hack: (&ret[0, 0, 0])[0 .. ret.elementsCount]. Have you made a comparison between my parallel version and yours? https://github.com/9il/examples/blob/parallel/image_processing/median-filter/source/app.d -- Ilya
Re: ndslice, using a slice in place of T[] in template parameters
On Sunday, 10 January 2016 at 23:24:24 UTC, Jay Norwood wrote: On Sunday, 10 January 2016 at 22:23:18 UTC, Ilya Yaroshenko wrote: Could you please provide the full code and error (git gists)? -- Ilya ok, thanks. I'm building with DMD32 D Compiler v2.069.2 on Win32. The dub.json is included. https://gist.github.com/jnorwood/affd05b69795c20989a3 I have created a parallel test too (it requires mir v0.10.0-beta): https://github.com/9il/examples/blob/parallel/image_processing/median-filter/source/app.d Could you please create a benchmark with default values of nc & nc for the single-threaded app, your parallel version, and mine? My version has some additional overhead and I am interested in whether it is significant. -- Ilya
Re: ndslice, using a slice in place of T[] in template parameters
On Sunday, 10 January 2016 at 23:24:24 UTC, Jay Norwood wrote: On Sunday, 10 January 2016 at 22:23:18 UTC, Ilya Yaroshenko wrote: Could you please provide full code and error (git gists)? -- Ilya ok, thanks. I'm building with DMD32 D Compiler v2.069.2 on Win32. The dub.json is included. https://gist.github.com/jnorwood/affd05b69795c20989a3 Just use normal arrays for buffer (median accepts array on second argument for optimisation reasons). BTW, dip80-ndslice moved to http://code.dlang.org/packages/mir -- Ilya
Re: ndslice, using a slice in place of T[] in template parameters
On Sunday, 10 January 2016 at 22:00:20 UTC, Jay Norwood wrote: I cut this median template from Jack Stouffer's article and was attempting to use it in a parallel function. As shown, it builds and execute correctly, but it failed to compile if I attempting to use medians[i] = median(vec,slb[task]); [...] Could you please provide full code and error (git gists)? -- Ilya
Re: sliced().array compatibility with parallel?
On Saturday, 9 January 2016 at 23:20:00 UTC, Jay Norwood wrote: I'm playing around with win32, v2.069.2 dmd and "dip80-ndslice": "~>0.8.8". If I convert the 2D slice with .array(), should that first dimension then be compatible with parallel foreach? [...] Oh... there is no bug. means must be shared =) : shared double[1000] means;
Re: sliced().array compatibility with parallel?
On Saturday, 9 January 2016 at 23:20:00 UTC, Jay Norwood wrote: I'm playing around with win32, v2.069.2 dmd and "dip80-ndslice": "~>0.8.8". If I convert the 2D slice with .array(), should that first dimension then be compatible with parallel foreach? I find that without using parallel, all the means get computed, but with parallel, only about half of them are computed in this example. The others remain NaN, examined in the debugger in Visual D.

import std.range : iota;
import std.array : array;
import std.algorithm;
import std.datetime;
import std.conv : to;
import std.stdio;
import std.experimental.ndslice;

enum testCount = 1;
double[1000] means;
double[] data;

void f1() {
    import std.parallelism;
    auto sl = data.sliced(1000, 100_000);
    auto sla = sl.array();
    foreach(i, vec; parallel(sla)) {
        double v = vec.sum(0.0);
        means[i] = v / 100_000;
    }
}

void main() {
    data = new double[100_000_000];
    for(int i = 0; i < 100_000_000; i++) { data[i] = i/100_000_000.0; }
    auto r = benchmark!(f1)(testCount);
    auto f0Result = to!Duration(r[0] / testCount);
    f0Result.writeln;
    writeln(means[0]);
}

This is a bug in std.parallelism :-) Proof:

import std.range : iota;
import std.array : array;
import std.algorithm;
import std.datetime;
import std.conv : to;
import std.stdio;
import mir.ndslice;
import std.parallelism;

enum testCount = 1;
double[1000] means;
double[] data;

void f1() {
    //auto sl = data.sliced(1000, 100_000);
    //auto sla = sl.array();
    auto sla = new double[][1000];
    foreach(i, ref e; sla) {
        e = data[i * 100_000 ..
(i+1) * 100_000]; } foreach(i,vec; parallel(sla)) { double v = vec.sum; means[i] = v / vec.length; } } void main() { data = new double[100_000_000]; foreach(i, ref e; data){ e = i / 100_000_000.0; } auto r = benchmark!(f1)(testCount); auto f0Result = to!Duration(r[0] / testCount); f0Result.writeln; writeln(means); } Prints: [0.00045, 0.0015, 0.0025, 0.0035, 0.0044, 0.0054, 0.0064, 0.0074, 0.0084, 0.0094, 0.0105, 0.0115, 0.0125, 0.0135, 0.0145, 0.0155, 0.0165, 0.0175, 0.0185, 0.0195, 0.0205, 0.0215, 0.0225, 0.0235, 0.0245, 0.0255, 0.0265, 0.0275, 0.0285, 0.0295, 0.0305, 0.0315, 0.0325, 0.0335, 0.0345, 0.0355, 0.0365, 0.0375, 0.0385, 0.0395, 0.0405, 0.0415, 0.0425, 0.0435, 0.0445, 0.0455, 0.0465, 0.0475, 0.0485, 0.0495, 0.0505, 0.0515, 0.0525, 0.0535, 0.0545, 0.0555, 0.0565, 0.0575, 0.0585, 0.0595, 0.0605, 0.0615, 0.0625, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan
Re: sliced().array compatibility with parallel?
On Saturday, 9 January 2016 at 23:20:00 UTC, Jay Norwood wrote: I'm playing around with win32, v2.069.2 dmd and "dip80-ndslice": "~>0.8.8". If I convert the 2D slice with .array(), should that first dimension then be compatible with parallel foreach? [...] It is a bug (in Slice or in parallel?). Please file an issue. Slice should work with parallel, and an array of slices should work with parallel.
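The workaround pattern from this thread (splitting a flat buffer into per-row slices and having each parallel task write to a distinct index) can be sketched with plain arrays and std.parallelism; the sizes here are illustrative, much smaller than in the original post:

```d
import std.parallelism : parallel;
import std.algorithm.iteration : sum;

void main()
{
    // A flat buffer of 1000 rows of 100 values each.
    auto data = new double[](1000 * 100);
    foreach (i, ref e; data) e = i;

    // Slice the flat buffer into non-overlapping rows (no copying).
    auto chunks = new double[][](1000);
    foreach (i, ref c; chunks)
        c = data[i * 100 .. (i + 1) * 100];

    // Each task writes to its own index of `means`, so the writes
    // never race with each other.
    auto means = new double[](1000);
    foreach (i, row; parallel(chunks))
        means[i] = row.sum / row.length;

    assert(means[0] == 49.5);   // mean of 0 .. 99
    assert(means[1] == 149.5);  // mean of 100 .. 199
}
```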
Re: ndslice help.
On Wednesday, 30 December 2015 at 18:53:15 UTC, Zz wrote: Hi, Just playing with ndslice and I couldn't figure out how to get the following transformations. Given:

auto slicea = sliced(iota(6), 2, 3, 1);
foreach (item; slicea) {
    writeln(item);
}

which gives:

[[0][1][2]]
[[3][4][5]]

What transformation should I do to get the following from slicea?

[[0][2][4]]
[[1][3][5]]

[[4][2][0]]
[[5][3][1]]

Zz

Hi,

void main() {
    auto slicea = sliced(iota(6), 2, 3, 1);
    auto sliceb = slicea.reshape(3, 2, 1).transposed!1;
    auto slicec = sliceb.reversed!1;
    writefln("%(%(%(%s%)\n%)\n\n%)", [slicea, sliceb, slicec]);
}

Output:

[0][1][2]
[3][4][5]

[0][2][4]
[1][3][5]

[4][2][0]
[5][3][1]

Ilya
Re: std.experimental.allocator optlink error
On Friday, 20 November 2015 at 12:31:37 UTC, Tofu Ninja wrote: On Tuesday, 10 November 2015 at 11:39:56 UTC, Rikki Cattermole wrote: One already exists. I've confirmed it in the build scripts. It only effects Windows. Any fix for this right now? No, https://issues.dlang.org/show_bug.cgi?id=15281
Re: conver BigInt to string
On Thursday, 5 November 2015 at 16:53:50 UTC, Namal wrote: On Thursday, 5 November 2015 at 16:45:10 UTC, Meta wrote: On Thursday, 5 November 2015 at 16:29:30 UTC, Namal wrote: Hello I am trying to convert BigInt to string like that while trying to sort it: string s1 = to!string(a).dup.sort; and get an error cannot implicitly convert expression (_adSortChar(dup(to(a of type char[] to string what do I do wrong? Try this instead: string s1 = to!string(a).idup.sort() If I try it like that i get: Error: template std.algorithm.sorting.sort cannot deduce function from argument types !()(char[]), candidates are: /../src/phobos/std/algorithm/sorting.d(996): string s1 = to!string(a).dup.sort.idup;
Re: good reasons not to use D?
On Monday, 2 November 2015 at 17:07:33 UTC, Laeeth Isharc wrote: On Sunday, 1 November 2015 at 09:07:56 UTC, Ilya Yaroshenko wrote: On Friday, 30 October 2015 at 10:35:03 UTC, Laeeth Isharc wrote: Any other thoughts? Floating point operations can be extended automatically (without any kind of 'fastmath' flag) up to 80-bit fp on 32-bit Intel processors. This is the worst possible solution for a language that wants to be used in accounting or math. Thoughts like "larger precision entails a more accurate answer" are naive and wrong. --Ilya Thanks, Ilya. An important but subtle point. Funnily enough, I haven't done so much specifically heavily numerical work so far, but trust that in the medium term the dlangscience project by John Colvin and others will bear fruit. What would you suggest is a better option to address the concerns you raised? The main goal is that we need to rewrite std.math & std.mathspecial to: 1. Support all FP types (float/double). 2. Be portable (no assembler when possible! we _can_ use the CEPHES code). 3. Produce equal results on different machines with different compilers for float/double. The problem is that many math functions use compensatory hacks to make the result more accurate, but these hacks have _very_ bad behaviour if we allow 80-bit math for double and float. Q: What do we need to change? A: Only the dmd backend for old Intel/AMD 32-bit processors without SSE support. Other compilers and platforms are not affected. So D would have the same FP behaviour as C. See also the comment: https://github.com/D-Programming-Language/phobos/pull/2991#issuecomment-74506203 -- Ilya
Re: good reasons not to use D?
On Friday, 30 October 2015 at 10:35:03 UTC, Laeeth Isharc wrote: Any other thoughts? Floating point operations can be extended automatically (without any kind of 'fastmath' flag) up to 80-bit fp on 32-bit Intel processors. This is the worst possible solution for a language that wants to be used in accounting or math. Thoughts like "larger precision entails a more accurate answer" are naive and wrong. --Ilya
GC and MMM
Hi All! Does the GC scan manually allocated memory? I want to use huge manually allocated hash tables, and I don't want the GC to scan them for performance reasons. Best regards, Ilya
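For context, memory obtained from the C allocator is normally invisible to D's GC: it is neither scanned nor collected. A block only needs to be registered with GC.addRange if it may contain pointers to GC-owned objects. A minimal sketch:

```d
import core.memory : GC;
import core.stdc.stdlib : malloc, free;

void main()
{
    // Memory from malloc is neither scanned nor collected by the GC,
    // which is exactly what you want for a huge hash table whose
    // entries hold no pointers into GC memory.
    enum n = 1024;
    auto buf = cast(ulong*) malloc(n * ulong.sizeof);
    scope(exit) free(buf);

    buf[0] = 42;
    assert(buf[0] == 42);

    // Only if the block may contain pointers to GC-owned objects must
    // it be registered, so the GC keeps those objects alive:
    // GC.addRange(buf, n * ulong.sizeof);
    // scope(exit) GC.removeRange(buf);
}
```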
new pragma(inline) syntax
Are `foo` and `bar` always inlined? struct S { pragma(inline, true): void foo(T)(T t) {} void bar(T)(T t) {} }
Re: new pragma(inline) syntax
On Saturday, 13 June 2015 at 19:13:20 UTC, Ilya Yaroshenko wrote: Are `foo` and `bar` always inlined? struct S { pragma(inline, true): void foo(T)(T t) {} void bar(T)(T t) {} } I am confused because they are templates.
Re: Utf8 to Utf32 cast cost
On Monday, 8 June 2015 at 10:42:00 UTC, Kadir Erdem Demir wrote: I want to use my char array with the awesome, cool std.algorithm functions. Since many of these algorithms require slicing etc., I prefer to create my string with UTF-32 chars. But by default all string literals are UTF-8 for performance. With my current knowledge I use to!dchar to convert UTF-8 (char[]) to UTF-32 (dchar[]): dchar[] range = to!dchar("erdem".dup) How costly is this? Is there a way to get a UTF-32 string directly without a cast?

1. dstring range = to!dstring("erdem"); // without dup
2. dchar[] range = to!(dchar[])("erdem"); // mutable
3. dstring range = "erdem"d; // directly
4. dchar[] range = "erdem"d.dup; // mutable
Re: stdx.data.json - enhancement suggestions
You can use std.json or create a TrustedInputRangeShell template with @trusted methods:

struct TrustedInputRangeShell(Range) {
    Range* data;
    auto front() @property @trusted { return (*data).front; }
    // etc
}

But I am not sure about other parseJSONStream bugs.
Re: Merging one Array with Another
fix:

completeSort(x.assumeSorted, y);
x = x.chain(y).uniq.array;

or (was fixed):

y = y.sort().uniq.array;
completeSort(x.assumeSorted, y);
x ~= y;
Re: stdx.data.json - enhancement suggestions
This line can be removed: .map!(ch => ch.idup) On Friday, 1 May 2015 at 20:02:46 UTC, Ilya Yaroshenko wrote: Current std.stdio is deprecated. This ad-hoc solution should work:

auto json = File("fileName")
    .byChunk(1024 * 1024) // 1 MB. Data cluster equals 1024 * 4
    .map!(ch => ch.idup)
    .joiner
    .map!(b => cast(char)b)
    .parseJSON;
Re: stdx.data.json - enhancement suggestions
Current std.stdio is deprecated. This ad-hoc solution should work:

auto json = File("fileName")
    .byChunk(1024 * 1024) // 1 MB. Data cluster equals 1024 * 4
    .map!(ch => ch.idup)
    .joiner
    .map!(b => cast(char)b)
    .parseJSON;

On Friday, 1 May 2015 at 14:19:26 UTC, Laeeth Isharc wrote: On Wednesday, 29 April 2015 at 18:48:22 UTC, Laeeth Isharc wrote: Hi. What's the best way to pass the contents of a file to the stream parser without reading the whole thing into memory first? I get an error if using byLine because the kind of range this function returns is not what the stream parser is expecting. There is an optional filename argument to parseJSONStream, but I am not sure what this is for, since it still requires the first argument for the input if the filename is passed. Thanks. Laeeth. some more suggestions for enhancements here: https://github.com/s-ludwig/std_data_json/issues/12 a big advantage of languages like python is that there are end-to-end examples of using the language+libraries to accomplish real work. (they don't need to be more than a screenful of code, but for the newcomer having it all set out makes a big difference). since I believe a significant reason for people to switch to D will be a realisation that actually CPU time isn't free, it makes sense to be accommodating to people who previously used scripting types of languages for these jobs. Laeeth.
Re: Merging one Array with Another
fix:

completeSort(x.assumeSorted, y);
x = x.chain(y).uniq.array;

or:

completeSort(x.assumeSorted, y.uniq.array);
x ~= y;

On Friday, 1 May 2015 at 19:34:20 UTC, Ilya Yaroshenko wrote: If x is already sorted x = x.completeSort(y).uniq.array; On Friday, 1 May 2015 at 19:30:08 UTC, Ilya Yaroshenko wrote: Both variants are wrong because uniq needs sorted ranges. Probably you need something like that: x = x.chain(y).sort.uniq.array; On Friday, 1 May 2015 at 19:08:51 UTC, Per Nordlöw wrote: What's the fastest Phobos-way of doing either x ~= y; // append x = x.uniq; // remove duplicates or x = (x ~ y).uniq; // append and remove duplicates in one go provided that T[] x, y; ?
Re: Merging one Array with Another
If x is already sorted x = x.completeSort(y).uniq.array; On Friday, 1 May 2015 at 19:30:08 UTC, Ilya Yaroshenko wrote: Both variants are wrong because uniq needs sorted ranges. Probably you need something like that: x = x.chain(y).sort.uniq.array; On Friday, 1 May 2015 at 19:08:51 UTC, Per Nordlöw wrote: What's the fastest Phobos-way of doing either x ~= y; // append x = x.uniq; // remove duplicates or x = (x ~ y).uniq; // append and remove duplicates in one go provided that T[] x, y; ?
Re: Merging one Array with Another
Both variants are wrong because uniq needs sorted ranges. Probably you need something like that: x = x.chain(y).sort.uniq.array; On Friday, 1 May 2015 at 19:08:51 UTC, Per Nordlöw wrote: What's the fastest Phobos-way of doing either x ~= y; // append x = x.uniq; // remove duplicates or x = (x ~ y).uniq; // append and remove duplicates in one go provided that T[] x, y; ?
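A complete sketch of the last suggestion: sort first, then uniq, since uniq only removes *adjacent* duplicates.

```d
import std.algorithm.sorting : sort;
import std.algorithm.iteration : uniq;
import std.range : chain;
import std.array : array;

void main()
{
    int[] x = [1, 3, 2];
    int[] y = [2, 4, 3];

    // Merge both arrays, sort the result, and drop the duplicates.
    // chain avoids an intermediate `x ~ y` concatenation allocation;
    // .array is still needed before sort to get a mutable buffer here.
    x = x.chain(y).array.sort().uniq.array;
    assert(x == [1, 2, 3, 4]);
}
```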
Re: DMD Zip for Mac OS X
On Saturday, 28 February 2015 at 06:45:58 UTC, Mike Parker wrote: I'm not a Mac user and I'm fairly clueless about it. The DMD zip for OS X contains one executable. I assume it's a 64-bit binary. Is that true? Hi! Probably, yes. Anyway dmd compiles 64-bit binaries on OS X. Ilya
Re: dub can't read files from cache
On Thursday, 18 September 2014 at 16:05:15 UTC, ketmar via Digitalmars-d-learn wrote: On Thu, 18 Sep 2014 15:53:02 + Ilya Yaroshenko via Digitalmars-d-learn wrote: Seriously, console application (in Russian lang. Windows) is not unicode-ready. that's 'cause authors tend to ignore W-functions. but GNU/Linux is not better, 'cause authors tend to ignore any encodings except latin1 and utf-8. koi? what is koi? it's broken utf-8, we don't know about koi! and we don't care what your locale says, it's utf-8! bwah, D compiler does just that. "one ring to rule them all" UTF-8 = Lord of the encodings.
Re: dub can't read files from cache
On Thursday, 18 September 2014 at 16:05:15 UTC, ketmar via Digitalmars-d-learn wrote: On Thu, 18 Sep 2014 15:53:02 + Ilya Yaroshenko via Digitalmars-d-learn wrote: Seriously, console application (in Russian lang. Windows) is not unicode-ready. that's 'cause authors tend to ignore W-functions. but GNU/Linux is not better, 'cause authors tend to ignore any encodings except latin1 and utf-8. koi? what is koi? it's broken utf-8, we don't know about koi! and we don't care what your locale says, it's utf-8! bwah, D compiler does just that. You can choose the encoding for the console in Linux and edit the encoding of other files with various utilities. https://plus.google.com/u/0/photos/110121926948523695249/albums/6060444524608761937/6060444528078004690?pid=6060444528078004690&oid=110121926948523695249
Re: dub can't read files from cache
On Thursday, 18 September 2014 at 10:45:24 UTC, Kagamin wrote: Windows has full support for Unicode, since it's an OS based on Unicode. It's old C code which is not Unicode-ready, and it remains not Unicode-ready without changing behavior. Modern code like Phobos usually tries to be Unicode-ready. Windows 9 will be based on Windows 98 =) Seriously, the console (in a Russian-language Windows) is not Unicode-ready.
Re: dub can't read files from cache
On Tuesday, 16 September 2014 at 17:11:14 UTC, Ruslan wrote: Note. ╨а╤Г╤Б╨╗╨░╨╜ is a cyrillic word. That should not matter, because dub only displays it that way. If you program (and not only in D), you should switch to a Latin-script user account: a Cyrillic one causes many problems, especially because of Windows itself with its legacy encodings. On Linux and OS X, which use only UTF-8, everything usually works. Ilya
Re: Why if(__ctfe)?
On Tuesday, 16 September 2014 at 13:28:17 UTC, Rene Zwanenburg wrote: On Tuesday, 16 September 2014 at 13:17:28 UTC, Adam D. Ruppe wrote: On Tuesday, 16 September 2014 at 13:11:50 UTC, Ilya Yaroshenko wrote: Why not "static if(__ctfe)" ? __ctfe is a runtime condition. The function has the same code when run at compile time; it is just being run in a different environment. Note that if(__ctfe) does not incur a runtime performance penalty; even in debug builds the branch will be removed. It is a kind of magic ;)
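A minimal sketch of the pattern: the branch is an ordinary runtime if, but __ctfe is true during compile-time function evaluation, and the compiler drops the dead branch from the generated code:

```d
int square(int x)
{
    if (__ctfe)
    {
        // Path used during compile-time function evaluation, where
        // things like inline asm or intrinsics would not be available.
        return x * x;
    }
    else
    {
        // Runtime path; identical here, but it could use asm/intrinsics.
        return x * x;
    }
}

// Forcing compile-time evaluation takes the __ctfe branch.
enum e = square(4);

void main()
{
    static assert(e == 16);
    assert(square(4) == 16); // runtime call: __ctfe is false here
}
```

`static if (__ctfe)` cannot work because the function body is compiled once and must serve both environments; only the runtime-looking `if` allows that.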
Why if(__ctfe)?
Why not "static if(__ctfe)" ?
Docs for PR
How do I generate the documentation for a Phobos PR?
Re: Error: array operation d1[] + d2[] without assignment not implemented
Operations like "ar1[] op= ar2[] op ar3[]" are defined only for built-in arrays. There is no operator overloading for this stuff.
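A short sketch of what the compiler does and does not accept here:

```d
void main()
{
    double[] a = new double[](3);
    double[] b = [1.0, 2.0, 3.0];
    double[] c = [10.0, 20.0, 30.0];

    // Array operations are only valid as part of an assignment
    // to a built-in array slice:
    a[] = b[] + c[];          // ok
    assert(a == [11.0, 22.0, 33.0]);

    a[] += b[] * 2.0;         // ok: op-assign form
    assert(a == [13.0, 26.0, 39.0]);

    // auto d = b[] + c[];    // error: array operation without assignment
}
```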
Re: Error: array operation d1[] + d2[] without assignment not implemented
On Saturday, 13 September 2014 at 14:18:57 UTC, deed wrote:

struct Vector (T) {
    T[] arr;
    void opSliceAssign (T[] a) { arr[] = a[]; }
}

unittest {
    auto v = Vector!double([1, 2]);
    double[] d1 = [11, 12];
    double[] d2 = [21, 22];
    double[] d3 = new double[](2);

    d3[] = d1[] + d2[];
    assert (d3 == [11.+21., 12.+22.]);
    assert (is(typeof(d1[] + d2[]) == double[]));

    v[] = d1[];         // Fine
    v[] = d1[] + d2[];  // Error: array operation d1[] + d2[] without assignment not implemented
}

How can opSliceAssign be defined to make this work? Hi!

struct Vector (T) {
    T[] arr;
    T[] opSlice() { return arr; }
}

Vector!double v;
double[] d;
v[][] = d[] + d[]; // the first [] calls opSlice, the second [] is array-operation syntax

Best Regards, Ilya