Re: automem v0.0.7 - C++ style smart pointers using std.experimental.allocator
On Monday, 17 April 2017 at 13:21:50 UTC, Kagamin wrote:

> If we can control memory layout, we can do what shared_ptr does and couple the reference counter with the object; then we can have just one pointer:
>
>     struct RefCounted(T) {
>         struct Wrapper {
>             int count;
>             T payload;
>         }
>         Wrapper* payload;
>     }

I'm not sure I follow your comment. Indeed, that is how shared_ptr, or in this case RefCounted, is implemented. My point was that there is no practical sense in having a shared(RefCounted).
Re: automem v0.0.7 - C++ style smart pointers using std.experimental.allocator
On Wednesday, 12 April 2017 at 13:32:36 UTC, Stanislav Blinov wrote:

> Syntax is not the core of the issue; it's not about just marking a destructor as shared. Making RefCounted itself shared would require implementing some form of synchronization of all the 'dereference' operations, including assignments. I.e. if we have some shared(RefCounted!T) ptr, what should happen when two threads simultaneously attempt to do ptr = shared(RefCounted!T)(someNewValue)? Should a library implementation even consider this? Or should such synchronization be left to the client's care? It seems that in this regard shared(RefCounted!T) would be no different from shared(T*), which brings me to the next point.

If we can control memory layout, we can do what shared_ptr does and couple the reference counter with the object; then we can have just one pointer:

    struct RefCounted(T) {
        struct Wrapper {
            int count;
            T payload;
        }
        Wrapper* payload;
    }
msgpack-ll: Low level @nogc, nothrow, @safe, pure, betterC MessagePack (de)serializer
Hello list,

msgpack-ll is a new low-level @nogc, nothrow, @safe, pure and betterC compatible MessagePack serializer and deserializer. The library was designed to avoid external dependencies and to handle only the low-level protocol details: it depends only on the bigEndianToNative and nativeToBigEndian templates from Phobos' std.bitmanip. It uses an optimized API to avoid any runtime bounds checks while still being 100% memory safe. The library doesn't have to do any error handling or buffer management and never dynamically allocates memory. It's meant as a building block for higher-level serializers (e.g. vibeD data.serialization) or as a high-speed serialization library. The GitHub README shows a quick overview of the generated ASM for serialization and deserialization.

dub: http://code.dlang.org/packages/msgpack-ll
github: https://github.com/jpf91/msgpack-ll
api: https://jpf91.github.io/msgpack-ll/msgpack_ll.html

-- Johannes
Re: Call for arms: Arch Linux D package maintenance
On Thursday, 13 April 2017 at 09:34:00 UTC, Atila Neves wrote:

> On Tuesday, 11 April 2017 at 16:17:32 UTC, John Colvin wrote:
>> On Thursday, 16 February 2017 at 19:58:47 UTC, Rory McGuire wrote:
>> [...]
>> Any news on this? The arch packages are listed as orphaned.
> Same question, and adding that I volunteer to take over. Atila

Are you involved in Arch? The update is that the Arch TUs are not likely to accept one of us who is not already very involved in Arch. I've been watching the Arch mailing lists a bit: two users who tried to get involved were rejected, while one user who is _very_ involved, and has been for years, was accepted. I have tried to contact the last person who built the packages to see if I could help with the dlang packages, but have not received a reply yet.
Re: Article: Interfacing D with C and Fortran
On Friday, 14 April 2017 at 17:55:54 UTC, jmh530 wrote:

> On Thursday, 13 April 2017 at 11:23:32 UTC, jmh530 wrote:
> Just an FYI, I was looking at another post http://www.active-analytics.com/blog/fitting-glm-with-large-datasets/ and the top part is a little confusing because the code below switches it up to do CC=BB*AA instead of CC=AA*BB. If I'm understanding it correctly, you originally have an mXn matrix times an nXp matrix, then you partition the left-hand side to be mXk and the right-hand side to be kXp and loop through and add them up. However, at the top you say that A (which at the top is the left-hand variable) is split up by rows, while the code clearly splits the left-hand side (B here) by columns (BB is 5X100 and B is a 10-element list of 5X10 matrices).

Sorry, I didn't see your question until now. That article was something I worked on years earlier. The main principle is that you split and aggregate over the repeated index; the code is intended to be illustrative of that principle. Don't get too hung up on equating the code symbols with the equation: the principle is the main thing. I wrote an R package where the important bits are written in C++, using this principle for GLMs: https://cran.r-project.org/web/packages/bigReg/index.html

More importantly, however, that algorithm is not efficient! At least not as efficient as gradient descent, or even better, stochastic gradient descent, or their respective modifications.
Bitbucket Pipelines
Hi,

Thought some of us might be interested in this: it is now really simple to set up testing within Bitbucket using "Bitbucket Pipelines", Docker-based testing with what seems to be a standard bash script in the YAML file. The config below is for a dub package.

bitbucket-pipelines.yml
=======================

# This is a sample build configuration for D.
# Check our guides at https://confluence.atlassian.com/x/5Q4SMw for more examples.
# Only use spaces to indent your .yml configuration.
# -
# You can specify a custom docker image from Docker Hub as your build environment.
image: base/archlinux

pipelines:
  default:
    - step:
        script: # Modify the commands below to build your repository.
          - pacman -Sy --noconfirm dub dmd
          - dub build
          #- dub test

=======================

Regards,
Rory McGuire