Re: Web page listing all D compilers (and DMDFE version!) on travis-ci
On 04/28/2016 04:03 PM, Seb wrote: FYI you miss the available ldc alphas and betas. On purpose? Didn't initially occur to me, but I'd say that's a "possible future enhancement". It will take more work, and some extra thought, to figure out how to handle it: Right now, my tool relies on the ability to say in .travis.yml "just give me the latest available version" (by specifying, e.g., "dmd" instead of "dmd-2.070.0"). Then my tool checks which version that turned out to be and posts it. I'm not aware of an equivalent that includes alpha/beta releases. I wonder if it would be easy enough for the folks handling D on travis to add? That would be the easiest way for me. I *could* manually trigger it for each alpha/beta *already* available (like I did to initially populate the database with all the earlier versions of everything). I guess that would be information worth archiving. But that still doesn't give me a way to capture new alphas/betas automatically. Hmm, I do have an idea for how to do that, but it may be a little convoluted, so I'd have to get to it later. I guess...we'll see ;)
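For context, the version-selection mechanism described above might look like this in a .travis.yml (a hypothetical project config; only the `d:` entries matter here):

```yaml
# Hypothetical .travis.yml sketch. An entry with no version suffix
# ("dmd", "ldc") asks Travis for the latest available release of that
# compiler; a pinned entry ("dmd-2.070.0") requests that exact version.
language: d
d:
  - dmd          # latest available DMD release
  - dmd-2.070.0  # a specific pinned release
  - ldc          # latest available LDC release
script:
  - dub test
```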
IDE - Coedit 2, update 5 released
See https://github.com/BBasile/Coedit/releases
Re: Proposed: start DConf days one hour later
On Wednesday, 27 April 2016 at 18:36:54 UTC, Andrei Alexandrescu wrote: The folks at Sociomantic suggested to start at 10:00 AM instead of 9:00 AM, therefore shifting the end time by one as well. Please reply with thoughts on this! We're particularly concerned about folks who need to take off early on Friday. -- Andrei That would be great, at least on the first day. My flight lands at 8:00 AM. Two other guys are in the same situation. Dragos
Re: LZ4 decompression at CTFE
On Thursday, 28 April 2016 at 20:12:58 UTC, Stefan Koch wrote: On Wednesday, 27 April 2016 at 06:55:46 UTC, Walter Bright wrote: Sounds nice. I'm curious how it would compare to: https://www.digitalmars.com/sargon/lz77.html https://github.com/DigitalMars/sargon/blob/master/src/sargon/lz77.d lz77 took 176 hnsecs uncompressing, lz4 took 92 hnsecs uncompressing. And another test in reversed order using the same data: lz4 took 162 hnsecs uncompressing, lz77 took 245 hnsecs uncompressing. The compression ratio is worse, though. But that is partially fixable.
Re: LZ4 decompression at CTFE
On Wednesday, 27 April 2016 at 06:55:46 UTC, Walter Bright wrote: Sounds nice. I'm curious how it would compare to: https://www.digitalmars.com/sargon/lz77.html https://github.com/DigitalMars/sargon/blob/master/src/sargon/lz77.d lz77 took 176 hnsecs uncompressing, lz4 took 92 hnsecs uncompressing. And another test in reversed order using the same data: lz4 took 162 hnsecs uncompressing, lz77 took 245 hnsecs uncompressing.
Re: Web page listing all D compilers (and DMDFE version!) on travis-ci
On Wednesday, 27 April 2016 at 18:57:29 UTC, Nick Sabalausky wrote: On 04/26/2016 02:42 AM, Nick Sabalausky wrote: https://semitwist.com/travis-d-compilers ... - Auto-trigger an update check on a regular basis (I'm thinking once daily?) so I don't have to stay on top of new compiler versions and trigger an update manually. (I can use Travis's API to do this.) The page is now set to automatically check for updates every 24 hours. So it should always be automatically up-to-date now. No intervention needed by myself or any of the DMD/LDC/GDC developers. FYI you miss the available ldc alpha and betas. On purpose?
Re: LZ4 decompression at CTFE
On Thursday, 28 April 2016 at 17:29:05 UTC, Dmitry Olshansky wrote: Compression on the other hand might be helpful to avoid precompressing everything beforehand. I fear that is going to be pretty slow and will eat at least 1.5x the memory of the file you are trying to store, if you want a good compression ratio. Then again... it might be fast enough to still be useful.
Re: Proposed: start DConf days one hour later
On Wednesday, 27 April 2016 at 18:36:54 UTC, Andrei Alexandrescu wrote: The folks at Sociomantic suggested to start at 10:00 AM instead of 9:00 AM, therefore shifting the end time by one as well. Please reply with thoughts on this! We're particularly concerned about folks who need to take off early on Friday. -- Andrei Yay, more sleep time! Also, you could always leave Friday's schedule unshifted, or shift it by only half an hour. By Friday people will probably be better acclimated to the timezone anyway.
Re: Proposed: start DConf days one hour later
On Wednesday, 27 April 2016 at 18:36:54 UTC, Andrei Alexandrescu wrote: The folks at Sociomantic suggested to start at 10:00 AM instead of 9:00 AM, therefore shifting the end time by one as well. Please reply with thoughts on this! We're particularly concerned about folks who need to take off early on Friday. -- Andrei +1
Re: LZ4 decompression at CTFE
On Thursday, 28 April 2016 at 18:31:25 UTC, deadalnix wrote: Also, the damn thing is allocating in a loop. I would like to have an allocation primitive for CTFE use. But that would not help too much, as I don't know the size I need in advance. Storing that in the header is optional, and unfortunately lz4c does not store it by default. Decompressing the lz family never takes more space than the uncompressed size of the data. The working set is often bounded: in the case of lz4 it's restricted to 4k in the frame format, and to 64k by design.
Re: LZ4 decompression at CTFE
On 28-Apr-2016 21:31, deadalnix wrote: On Thursday, 28 April 2016 at 17:58:50 UTC, Stefan Koch wrote: On Thursday, 28 April 2016 at 17:29:05 UTC, Dmitry Olshansky wrote: What's the benefit? I mean after CTFE-decompression they are going to add as much weight to the binary as the decompressed files. Compression on the other hand might be helpful to avoid precompressing everything beforehand. The compiler can load files faster that are used by CTFE only, which would be stripped out by the linker later. And keep in mind that it also works at runtime. Memory is scarce at compile time and this can help reduce the memory requirements, once a bit of structure is added on top. Considering the speed and memory consumption of CTFE, I'd bet on the exact reverse. Yeah, the whole "CTFE to save compile-time memory" idea sounds like a bad joke to me ;) Also, the damn thing is allocating in a loop. -- Dmitry Olshansky
Re: LZ4 decompression at CTFE
On Thursday, 28 April 2016 at 17:58:50 UTC, Stefan Koch wrote: On Thursday, 28 April 2016 at 17:29:05 UTC, Dmitry Olshansky wrote: What's the benefit? I mean after CTFE-decompression they are going to add as much weight to the binary as the decompressed files. Compression on the other hand might be helpful to avoid precompressing everything beforehand. The compiler can load files faster that are used by CTFE only, which would be stripped out by the linker later. And keep in mind that it also works at runtime. Memory is scarce at compile time and this can help reduce the memory requirements, once a bit of structure is added on top. Considering the speed and memory consumption of CTFE, I'd bet on the exact reverse. Also, the damn thing is allocating in a loop.
Re: Computer Vision Library in D
On Thursday, 28 April 2016 at 11:50:55 UTC, Edwin van Leeuwen wrote: On Thursday, 28 April 2016 at 11:32:25 UTC, Michael wrote: And I would also like to see some more scientific libraries make it into D. Though I understand that including it in the standard library can cause issues, it would be nice to at least get some Linear Algebra libraries in experimental or over with the rest of the science libraries. As I understand it that is part of the goal of mir: https://code.dlang.org/packages/mir Not sure if you were aware, but there is also a group with the aim to promote scientific dlang work: https://gitter.im/DlangScience/public I've seen the mir project and it looks promising. I'm also aware of Dlang science and I hope that it gains some support.
Re: Commercial video processing app in D (experience report)
On Wednesday, 27 April 2016 at 12:42:05 UTC, thedeemon wrote: Cerealed This compile-time-introspection-based serialization lib is really great: powerful and easy to use. We're probably using an old version, haven't updated for some time, and the version we use sometimes had problems serializing certain types (like bool[], IIRC), so sometimes we had to tweak our message types to make it compile, but most of the time it just works. Thanks for the kind words! Can you let me know what was wrong with serialising bool[] and whatever other types you had problems with, please? I'd like to fix them. Thanks! Atila
Re: Commercial video processing app in D (experience report)
On 2016-04-28 01:53, Walter Bright wrote: Wonderful, thanks for taking the time to write this up. I'm especially pleased that you found great uses for a couple of features that were a bit speculative because they are unusual - the user defined attributes, and the file binary data imports. I'm using the string import feature in DStep to bundle Clang internal header files. It's a great feature that makes distribution a lot easier, since a single executable is all that is needed. -- /Jacob Carlborg
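For anyone unfamiliar with the string import feature mentioned above, a minimal sketch (file and module names are illustrative):

```d
// Minimal sketch of D's string/binary import feature.
// Compile with:  dmd -J=assets app.d
// so the compiler knows where to find files passed to import().
module app;

import std.stdio;

// The file's contents become a compile-time string literal,
// embedded directly into the executable - no runtime file I/O.
enum bundledHeader = import("helper.h"); // assumes assets/helper.h exists

void main()
{
    writeln(bundledHeader);
}
```

This is exactly what makes single-executable distribution easy: the data ships inside the binary itself.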
Re: Computer Vision Library in D
On Thursday, 28 April 2016 at 11:32:25 UTC, Michael wrote: And I would also like to see some more scientific libraries make it into D. Though I understand that including it in the standard library can cause issues, it would be nice to at least get some Linear Algebra libraries in experimental or over with the rest of the science libraries. As I understand it that is part of the goal of mir: https://code.dlang.org/packages/mir Not sure if you were aware, but there is also a group with the aim to promote scientific dlang work: https://gitter.im/DlangScience/public
Re: LZ4 decompression at CTFE
On Thursday, 28 April 2016 at 06:03:46 UTC, Marco Leise wrote: There exist some comparisons for the C++ implementations (zlib's DEFLATE being a variation of lz77): http://catchchallenger.first-world.info//wiki/Quick_Benchmark:_Gzip_vs_Bzip2_vs_LZMA_vs_XZ_vs_LZ4_vs_LZO https://pdfs.semanticscholar.org/9b69/86f2fff8db7e080ef8b02aa19f3941a61a91.pdf (pg. 9) The high compression variant of lz4 is basically like gzip with 9x faster decompression. That makes it well suited for use cases where you compress once, decompress often, and I/O sequential reads are fast (e.g. 200 MB/s), or the program does other computations meanwhile and one doesn't want decompression to use a lot of CPU time. Thanks for the second link you posted. It made me aware of a few things I hadn't been aware of before.
Re: Proposed: start DConf days one hour later
On Wednesday, 27 April 2016 at 18:36:54 UTC, Andrei Alexandrescu wrote: The folks at Sociomantic suggested to start at 10:00 AM instead of 9:00 AM, therefore shifting the end time by one as well. Please reply with thoughts on this! We're particularly concerned about folks who need to take off early on Friday. -- Andrei I'll take off early on Friday and will miss the last talk; with this one-hour shift I'll probably miss the last two talks... But hey, they'll be recorded, right?
Re: Commercial video processing app in D (experience report)
On Wednesday, 27 April 2016 at 12:42:05 UTC, thedeemon wrote: Hi, I just wanted to share some experience of using D in industry. Recently my little company released version 2.0 of our flagship product Video Enhancer, a video processing application for Windows, and this time it's written in D. http://www.infognition.com/VideoEnhancer/ [snip] DLangUI Very nice library. Documentation is very sparse though, so learning to use DLangUI often means reading the source code of the examples and the lib itself, and sometimes even that's not enough and you need to learn some Android basics, since it originates from the Android world. But once you learn how to use it, how to encode what you need in DML (a QML counterpart) or add required functionality by overriding some method of its classes, it's really great and pleasant to use. Many times I was so happy the source code is available, first for learning, then for tweaking and fixing bugs. I've found a few minor bugs and sent a few trivial fixes that were merged quickly. DLangUI is cross-platform and has several backends for drawing and font rendering. We're using a minimal build of it targeted at the Win32 API (had to tweak dub.json a bit). We don't use OpenGL, as it's not really guaranteed to work well on every Windows box. Using just WinAPI makes our app smaller and more stable and avoids dependencies. [snip] Another reason to embrace DLangUI. One starting point would be to improve the documentation and write a few tutorials (covering DML, themes, etc.)
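For readers who haven't seen DML, here is a rough sketch of what a DLangUI app with inline DML markup looks like; this is reconstructed from memory of the DLangUI examples, so the exact widget and property names should be checked against the library's own docs:

```d
// ROUGH SKETCH of DLangUI usage with DML markup (QML-style).
// Widget/property names here are assumptions, not verified API.
import dlangui;

mixin APP_ENTRY_POINT;

extern (C) int UIAppMain(string[] args)
{
    Window window = Platform.instance.createWindow("Demo", null);

    // DML describes the widget tree declaratively; parseML builds it
    // at runtime from the markup string.
    window.mainWidget = parseML(q{
        VerticalLayout {
            TextWidget { text: "Hello from DML" }
            Button { id: okBtn; text: "OK" }
        }
    });

    window.show();
    return Platform.instance.enterMessageLoop();
}
```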
Re: Commercial video processing app in D (experience report)
Awesome! Thanks so much for such detailed explanation! Btw, if you're interested in an image processing app in pure D, I've got one too: http://www.infognition.com/blogsort/ (sources: https://bitbucket.org/infognition/bsort ) Great, I'll check it out - Thanks!
Re: Commercial video processing app in D (experience report)
On Thursday, 28 April 2016 at 06:22:18 UTC, Relja Ljubobratovic wrote: Can you share with us some of your experience working on the image and video processing modules in the app, such as the filters here: http://www.infognition.com/VideoEnhancer/filters.html If I may ask, was that part implemented in D, C++, or was some 3rd party library used? Thanks! The filters listed there are third-party plugins originally created for VirtualDub ( http://virtualdub.org/ ) by different people, in C++. We made just 2-3 of them, like the motion-based temporal denoiser (Film Dirt Cleaner) and the Intelligent Brightness filter for automatic brightness/contrast correction. Our most interesting and distinctive piece of tech is our Super Resolution engine for video upsizing; it's not in that list, it's built into the app (and also available separately as plugins for some other hosts). All this image processing stuff is written in C++ and works directly with raw image bytes, no special libraries involved. When video processing starts, our filters usually launch a bunch of worker threads, and these threads work in parallel, each on its own part of the video frame (usually divided into horizontal stripes). Inside, they often work block-wise, and we have a bunch of template classes for different blocks (RGB or monochrome) parameterized by pixel data type and often block size, so the size is often known at compile time and the compiler can unroll the loops properly. When doing motion search we're using our vector class parameterized by precision, so we have vectors of different precision (low-res pixel, high-res pixel, half-pixel, quarter-pixel etc.), and the type system makes sure I don't add or mix vectors of different precision and don't pass a half-pixel-precise vector to a block reading routine that expects quarter-pixel-precise coordinates. Where it makes sense and is possible, we use SIMD classes like F32vec4 and/or SIMD intrinsics for pixel operations.
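The precision-tagged vector idea above can be sketched in D (the app's actual code is C++; all names here are made up for illustration):

```d
// Sketch of precision-tagged motion vectors: the precision (number of
// fractional-pixel bits) is part of the type, so mixing precisions is
// a compile-time error. Names and layout are illustrative only.
struct MVec(int fracBits)
{
    int x, y;

    // Only vectors of the same precision may be added.
    MVec opBinary(string op : "+")(MVec rhs) const
    {
        return MVec(x + rhs.x, y + rhs.y);
    }
}

alias PixelVec   = MVec!0; // whole-pixel precision
alias HalfVec    = MVec!1; // half-pixel precision
alias QuarterVec = MVec!2; // quarter-pixel precision

// A block-reading routine that demands quarter-pixel coordinates:
void readBlock(QuarterVec pos) { /* ... */ }

void example()
{
    auto a = QuarterVec(4, 8);
    auto b = QuarterVec(1, 2);
    readBlock(a + b);             // fine: precisions match
    // readBlock(HalfVec(1, 2));  // error: caught at compile time
}
```

The point is that a wrong-precision vector never survives to runtime; the compiler rejects it.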
Video Enhancer allows chaining several VD filters and our SR rescaler instances into a pipeline, and that's also parallelized: when the first filter finishes with frame X it can immediately start working on frame X+1 while the next filter is still working on frame X. Previously it was organized as a chain of DirectShow filters with a special Parallelizer filter inserted between the video processing ones; this Parallelizer had a frame queue inside and separate receiving and sending threads, allowing the connected filters to work in parallel. In version 2 it's trickier, since we need to be able to seek to different positions in the video, and some filters may request a few frames before and after the current one, so a sequential pipeline doesn't suffice anymore; now we build a virtual chain inside one big DirectShow filter, and each node in that chain has its own worker thread, and they do message passing to communicate. In the end, we now have a big DirectShow filter of 11K lines of C++ that does both Super Resolution resizing and invoking VirtualDub plugins (imitating VirtualDub for them), doing colorspace conversions where necessary, and organizing them all into a pipeline that is pull-based inside but behaves as a push-based DirectShow filter outside. So the D part uses COM to build and run a DirectShow graph with all the readers, splitters, codecs and of course our big video processing DirectShow filter; it talks to it via COM and some callbacks but doesn't do much with video frames apart from copying. Btw, if you're interested in an image processing app in pure D, I've got one too: http://www.infognition.com/blogsort/ (sources: https://bitbucket.org/infognition/bsort )
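The per-node worker threads with message passing could be sketched in D like this (a toy illustration using std.concurrency, not the app's actual C++/DirectShow implementation; all names are made up):

```d
// Toy sketch of a two-stage pipeline where each stage runs in its own
// thread and frames travel between stages as messages.
import std.concurrency;

struct Frame { int index; } // stand-in for real frame data

void stage(Tid downstream)
{
    // Receive frames, "process" them, forward them downstream.
    for (;;)
    {
        auto f = receiveOnly!Frame;
        downstream.send(f); // real processing would happen before this
        if (f.index < 0)
            break; // negative index = shutdown marker
    }
}

void sink()
{
    for (;;)
    {
        auto f = receiveOnly!Frame;
        if (f.index < 0)
            break;
        // consume the processed frame here
    }
}

void main()
{
    auto last  = spawn(&sink);
    auto first = spawn(&stage, last);
    foreach (i; 0 .. 3)
        first.send(Frame(i));
    first.send(Frame(-1)); // shut the pipeline down
}
```

Because the stages run concurrently, stage 1 can start on frame X+1 while the sink is still consuming frame X, which is the parallelism described above.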
Re: Proposed: start DConf days one hour later
On Thursday, 28 April 2016 at 05:10:25 UTC, Mithun Hunsur wrote: On Thursday, 28 April 2016 at 04:47:38 UTC, Rory McGuire wrote: [...] Aha - if Google Maps is accurate, I have nothing to worry about :) For reference, the number it gives me is 37 minutes. In that case, +1 to starting late; that being said, I feel like announcing a sweeping change like this just days before the conference is likely to end poorly. +1 from me. I wouldn't make the 17:45 train home anyway.
Re: Proposed: start DConf days one hour later
On 28 April 2016 at 07:10, Mithun Hunsur via Digitalmars-d-announce wrote: > On Thursday, 28 April 2016 at 04:47:38 UTC, Rory McGuire wrote: >> >> On 28 Apr 2016 6:30 AM, "Mithun Hunsur via Digitalmars-d-announce" < >> digitalmars-d-announce@puremagic.com> wrote: >>> >>> Hmm; my talk's at 3:30pm on Friday (4:30pm after this change), which >> >> means I'd leave at 5:30pm. My flight out of Berlin is at 9:30pm; how long >> does it take to get from the venue to the airport? (I'll probably have to >> skip the last talk of Friday, which is a shame.) >> >> Check Google Maps. Google Maps can even guess how long it will take at the >> time you specify. > > > Aha - if Google Maps is accurate, I have nothing to worry about :) For > reference, the number it gives me is 37 minutes. > Yep, as a rule of thumb, you should set off at least 1 hour before the gate closes, maybe 1h30m just to be safe. For all the times I've departed from Tegel, it's always taken about 5-10 minutes to walk from the bus stop outside to *any* departure gate. The airport really is that small. http://www.berlin-airport.de/de/_dokumente/reisende/2015-09-28-txl-terminal-de-en.pdf To give you a small idea, the length of Terminal C (left to right) is about 100 metres. :-)
Re: Commercial video processing app in D (experience report)
On Wednesday, 27 April 2016 at 12:42:05 UTC, thedeemon wrote: Hi, I just wanted to share some experience of using D in industry. Recently my little company released version 2.0 of our flagship product Video Enhancer, a video processing application for Windows, and this time it's written in D. http://www.infognition.com/VideoEnhancer/ Awesome work, congratulations! Can you share with us some of your experience working on image and video processing modules in the app, such as are filters here: http://www.infognition.com/VideoEnhancer/filters.html If I may ask, was that part implemented in D, C++, or was some 3rd party library used? Thanks, and again - big congrats! Relja
Re: LZ4 decompression at CTFE
On Tue, 26 Apr 2016 23:55:46 -0700, Walter Bright wrote: > On 4/26/2016 3:05 PM, Stefan Koch wrote: > > Hello, > > > > originally I wanted to wait with this announcement until DConf. > > But since I'm working on another toy, I can release this info early. > > > > So as per title: you can decompress .lz4 files created by the standard lz4hc > > command-line tool at compile time. > > > > No github link yet as there is a little bit of cleanup to do :) > > > > Please comment. > > Sounds nice. I'm curious how it would compare to: > > https://www.digitalmars.com/sargon/lz77.html > > https://github.com/DigitalMars/sargon/blob/master/src/sargon/lz77.d There exist some comparisons for the C++ implementations (zlib's DEFLATE being a variation of lz77): http://catchchallenger.first-world.info//wiki/Quick_Benchmark:_Gzip_vs_Bzip2_vs_LZMA_vs_XZ_vs_LZ4_vs_LZO https://pdfs.semanticscholar.org/9b69/86f2fff8db7e080ef8b02aa19f3941a61a91.pdf (pg. 9) The high compression variant of lz4 is basically like gzip with 9x faster decompression. That makes it well suited for use cases where you compress once, decompress often, and I/O sequential reads are fast (e.g. 200 MB/s), or the program does other computations meanwhile and one doesn't want decompression to use a lot of CPU time. -- Marco