It would be good if the std.file operations used the D multi-thread
features, since you've done such a nice job of making them easy. I
hacked up your std.file recursive remove and got a 4x speed-up on a
Win7 system with a Core i7, using the examples from the D programming
language book. Code is
== Quote from Andrei Alexandrescu
Suppose all the cores but one are already preoccupied with other
stuff, or maybe you're even running on a single core. Does the
threading add enough overhead that it would actually go slower than
the original single-threaded version?
If not, then this
Andrei Alexandrescu Wrote:
That's why I'm saying - let's leave the decision to the user. Take a
uint parameter for the number of threads to be used, where 0 means leave
it to Phobos, and default to 0.
Andrei
OK, here is another version. I was reading about the std.parallelism
library. I placed the two parallel file operations, rmdir and copy, on
GitHub at
https://github.com/jnorwood/file_parallel
These combine the std.parallelism operations with the std.file
operations to speed up the processing on Windows.
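The approach can be sketched roughly like this (a minimal illustration
under my own assumptions about the repo's shape; parallelRmdir is an
illustrative name, not the actual API in file_parallel):

```d
// Sketch: delete regular files in parallel with std.parallelism,
// then remove the emptied directories deepest-first.
import std.file : dirEntries, SpanMode, remove, rmdir;
import std.parallelism : taskPool;

void parallelRmdir(string root)
{
    string[] files, dirs;
    // SpanMode.depth yields children before their parents,
    // so dirs[] is already ordered deepest-first.
    foreach (e; dirEntries(root, SpanMode.depth)) {
        if (e.isDir) dirs ~= e.name;
        else files ~= e.name;
    }
    // The parallel part: file deletions are independent of each other.
    foreach (f; taskPool.parallel(files, 16))
        remove(f);
    // Directory removal stays single-threaded and ordered.
    foreach (d; dirs)
        rmdir(d);
    rmdir(root);
}
```

The win comes from issuing many independent file deletions at once;
the directory removals still have to run deepest-first.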
---
I also put a useful function that does argv pathname
On Monday, 5 March 2012 at 12:48:54 UTC, Andrei Alexandrescu
wrote:
Sounds great! Next step, should you be interested, is to create
a pull request for phobos so we can integrate your code within.
Andrei
I considered that. I suppose the wildArgv code could go in
std.path, and the file
On Monday, 5 March 2012 at 16:35:09 UTC, dennis luehring wrote:
Do you compare single-threaded robocopy with your implementation, or
multithreaded?
You can tell robocopy to use multiple threads with /MT[:n]
Yes, I tested against multithreaded robocopy. As someone pointed out,
robocopy has
So here is the output of a batch file I just ran on the SSD drive
for the 1.5GB copy. Robocopy reports that it took around 14 secs,
while the release build of the D command-line cpd utility took around
12 secs. That's a pretty consistent result on the SSD drive, which is
more sensitive to
On Tuesday, 13 March 2012 at 05:25:38 UTC, Jay Norwood wrote:
Admittedly I have not heard of PEGs before, so I'm curious: Is
this powerful enough to parse a language such as C?
I've just read a few articles referenced from this page, and
the second link was by someone who had done Java
On Sunday, 11 March 2012 at 21:45:02 UTC, SiegeLord wrote:
Anyway, the repository for it is here:
https://github.com/SiegeLord/DGnuplot
It requires TangoD2 to build and gnuplot 4.4.3 to run (unless
you're saving commands to a file as described above).
It works on Linux, and maybe on Windows
On Wednesday, 14 March 2012 at 07:16:39 UTC, Jay Norwood wrote:
I just tried this on Win7 64 bit using the latest TangoD2 and
the gnuplot from this link
http://sourceforge.net/projects/gnuplot/files/
I had to substitute pgnuplot.exe, which is one of the Windows
gnuplot exe versions
On Wednesday, 14 March 2012 at 07:33:52 UTC, Jay Norwood wrote:
On Wednesday, 14 March 2012 at 07:16:39 UTC, Jay Norwood wrote:
I just tried this on Win7 64 bit using the latest TangoD2 and
the gnuplot from this link
http://sourceforge.net/projects/gnuplot/files/
I had to substitute
I uploaded a parallel unzip here, with the main in the examples
folder. Testing on my SSD drive, it unzips a 2GB directory
structure in 17.5 secs. 7zip took 55 secs on the same file.
This restores timestamps on the regular files. There is also a
loop which will restore timestamps on folders.
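A rough sketch of the idea, using std.zip plus std.parallelism (an
illustration under my own assumptions, not the uploaded code;
parallelUnzip is an illustrative name, and std.zip's expand is not
documented as thread-safe, so the real code differs in detail):

```d
// Sketch: expand and write archive members concurrently.
import std.zip : ZipArchive, ArchiveMember;
import std.file : read, write, mkdirRecurse;
import std.path : dirName, buildPath;
import std.parallelism : taskPool;
import std.array : array;

void parallelUnzip(string zipName, string destDir)
{
    auto zip = new ZipArchive(read(zipName));
    auto members = zip.directory.values.array;

    // Create the directory tree first, single-threaded.
    foreach (m; members)
        mkdirRecurse(buildPath(destDir, dirName(m.name)));

    // Expand and write regular members concurrently.
    foreach (m; taskPool.parallel(members, 1)) {
        if (m.name.length && m.name[$ - 1] != '/')
            write(buildPath(destDir, m.name), zip.expand(m));
    }
}
```

Note the same caveat as std.zip itself: the whole archive is read into
memory up front.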
On Thursday, 5 April 2012 at 14:04:57 UTC, Jay Norwood wrote:
I uploaded a parallel unzip here, with the main in the examples
folder.
So, below is a demo of how to use the example app on Windows,
where I unzipped a 2GB directory structure from a 1GB zip file,
tzip.zip.
02/18/2012 03:23 PM
On Thursday, 5 April 2012 at 15:07:47 UTC, Jay Norwood wrote:
so, a few comments about std.zip...
I attempted to use it and found that its way of unzipping is a
memory hog, keeping the full original and all the unzipped data
in memory. It quickly ran out of memory on my test case
I think he is talking about 7zip the standalone software, not
7zip the compression algorithm.
7zip took 55 secs _on the same file_.
Yes, that's right, both 7zip and this uzp program are using the
same deflate standard format of zip for this test. It is the
only expand format that is
On Friday, 6 April 2012 at 14:55:14 UTC, Sean Cavanaugh wrote:
If you delete a directory containing several hundred thousand
directories (each with 4-5 files inside, don't ask), you can
see Windows freeze for long periods (10+ seconds) until it is
finished, which affects everything up
On Saturday, 7 April 2012 at 05:02:04 UTC, dennis luehring wrote:
7zip took 55 secs _on the same file_.
That is OK, but he still compares different implementations.
7zip is the program. It unzips many formats, with the standard
zip format being one of them. The parallel D program is three
On Saturday, 7 April 2012 at 11:41:41 UTC, Rainer Schuetze wrote:
Maybe it is the trim command being executed on the sectors
previously occupied by the file.
No, perhaps I didn't make it clear that the rmdir slowness is
only an issue on hard drives. I can unzip the 2GB archive in
about
On Saturday, 7 April 2012 at 17:08:33 UTC, Jay Norwood wrote:
The MyDefrag program uses the NTFS defrag API. There is an
article at the following link showing how to access it to get
the Logical Cluster Numbers on disk for a file. I suppose you
could sort your file operations by start LCN
On Sunday, 8 April 2012 at 13:55:21 UTC, Marco Leise wrote:
Maybe the kernel caches writes, but synchronizes deletes? (So
the seek times become apparent there, and not in the writes)
Also check the file creation flags; maybe you can hint Windows
to the final file size and they won't be
I hacked up one of the file.d functions to create a function that
returns the first Logical Cluster Number for a regular file.
I've tested it on the 2GB layout that has been defragged with the
myDefrag sortByName() operation, and it works as expected.
Values of 0 mean the file was small
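A sketch of what such a query involves on Windows (my reconstruction
using the documented FSCTL_GET_RETRIEVAL_POINTERS ioctl, not the
hacked-up file.d code; firstLcn and the struct declarations here are
assumptions, and this is Windows-only):

```d
import core.sys.windows.windows;
import std.utf : toUTF16z;

// FSCTL_GET_RETRIEVAL_POINTERS, as defined in winioctl.h
enum FSCTL_GET_RETRIEVAL_POINTERS = 0x90073;

struct STARTING_VCN_INPUT_BUFFER { long StartingVcn; }
struct RETRIEVAL_POINTERS_BUFFER {
    uint ExtentCount;
    long StartingVcn;
    struct Extent { long NextVcn; long Lcn; }
    Extent[1] Extents;   // room for the first extent only
}

// Returns the first LCN, 0 for small (MFT-resident) files, -1 on error.
long firstLcn(string path)
{
    auto h = CreateFileW(path.toUTF16z, GENERIC_READ, FILE_SHARE_READ,
                         null, OPEN_EXISTING, 0, null);
    if (h == INVALID_HANDLE_VALUE)
        return -1;
    scope(exit) CloseHandle(h);

    STARTING_VCN_INPUT_BUFFER inBuf;   // start at VCN 0
    RETRIEVAL_POINTERS_BUFFER outBuf;
    DWORD bytes;
    auto ok = DeviceIoControl(h, FSCTL_GET_RETRIEVAL_POINTERS,
                              &inBuf, inBuf.sizeof,
                              &outBuf, outBuf.sizeof, &bytes, null);
    // ERROR_MORE_DATA still fills in the first extent.
    if (!ok && GetLastError() != ERROR_MORE_DATA)
        return 0;   // no extents: the data is resident in the MFT
    return outBuf.Extents[0].Lcn;
}
```

Sorting file operations by the returned LCN is then a plain sort on
the result of firstLcn for each path.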
On Tuesday, 3 April 2012 at 14:10:32 UTC, Jesse Phillips wrote:
Most of his code isn't available, as it was kind of under
Microsoft. However, I revived Juno for D2 a while ago (still need
to play with it myself). Juno provides some nice tools and APIs.
On Wednesday, 11 April 2012 at 22:17:16 UTC, Eldar Insafutdinov
wrote:
example http://eldar.me/candydoc/algorithm.html . Among new
The outline panel links work fine on Google Chrome, but not on
IE8.
http://www.reddit.com/r/programming/comments/1yts5n/facebook_open_sources_flint_a_c_linter_written_in/
Somewhere in that thread was a mention of Facebook moving away
from Git because it was too slow. I thought it was interesting
and found this info on the topic ... They rewrote some sections
I ran into this Kepler bug trying to update. The work-around is
stated, which involves renaming your eclipse.exe. Worked for me.
https://bugs.eclipse.org/bugs/show_bug.cgi?id=55
On Friday, 14 March 2014 at 15:44:24 UTC, Bruno Medeiros wrote:
A new version of DDT - D Development tools is out.
This has really nice source browsing... much better than
VisualD. I end up using both because the debugging support is
still better in VisualD.
One browsing issue I noticed
On Sunday, 23 March 2014 at 20:33:15 UTC, Daniel Murphy wrote:
It still needs a lot of work, but it's functional.
Is there a test suite that you have to pass to declare it fully
functional?
On Monday, 24 November 2014 at 15:27:19 UTC, Gary Willoughby
wrote:
Just browsing reddit and found this article posted about D.
Written by Andrew Pascoe of AdRoll.
From the article:
The D programming language has quickly become our language of
choice on the Data Science team for any task that
On Monday, 24 November 2014 at 23:32:14 UTC, Jay Norwood wrote:
Is this related?
https://github.com/dscience-developers/dscience
This seems good too. Why the comments in the discussion about
lack of libraries?
https://github.com/kyllingstad/scid/wiki
Very nice.
I wonder about representation of references, and perhaps
replication, inheritance. Does SDL just punt on those?
On Monday, 27 June 2016 at 06:31:49 UTC, Ola Fosheim Grøstad
wrote:
Besides there are plenty of other advantages to using a
terminating sentinel depending on the use scenario. E.g. if you
want many versions of the same tail or if you are splitting a
string at white space (overwrite a white
On Tuesday, 28 June 2016 at 03:11:26 UTC, Jay Norwood wrote:
On Tuesday, 28 June 2016 at 01:53:22 UTC, deadalnix wrote:
If we were in an interview, I'd ask you "what does this return
if you pass it an empty string?"
oops. I see ... need to test for empty string.
nothrow pure size
On Tuesday, 28 June 2016 at 01:53:22 UTC, deadalnix wrote:
If we were in an interview, I'd ask you "what does this return if
you pass it an empty string?"
I'd say use this one instead, to avoid negative size_t. It is
also a little faster for the same measurement.
nothrow pure size_t
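The declaration above is cut off; a sketch of what an empty-string-safe
variant could look like, under the same 4-zero-padding assumption
(strlen3 is an illustrative name, not the code from the post):

```d
// Same 4-byte stride, but the back-up loop never steps before the
// start of the string, so the empty string returns 0 instead of
// wrapping size_t.
nothrow pure size_t strlen3(const(char)* c) {
    if (c is null)
        return 0;
    const(char)* s = c;
    while (*c)              // safe: the string ends in 4 zeros
        c += 4;
    while (c > s && !*(c - 1))
        c--;
    return c - s;
}
```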
On Monday, 27 June 2016 at 16:38:58 UTC, Ola Fosheim Grøstad
wrote:
Yes, and the idea of speeding up strings by padding out with
zeros is not new. ;-) I recall suggesting it back in 1999 when
discussing the benefits of having a big-endian CPU when sorting
strings. If it is big-endian you can
After watching Andrei's sentinel idea, I'm playing with strlen on
char strings with 4 terminating 0s instead of a single one.
It seems to work and is 4x faster than the runtime version.
nothrow pure size_t strlen2(const(char)* c) {
    if (c is null)
        return 0;
    size_t l=0;
    // stride by 4: only safe because the string ends in 4 zeros
    while (*c){ c+=4; l+=4;}
    // back up through the trailing zeros to the last character
    while (!*c){ c--; l--;}
    return l+1;
}
On Sunday, 26 June 2016 at 16:59:54 UTC, David Nadlinger wrote:
Please keep general discussions like this off the announce
list, which would e.g. be suitable for announcing a fleshed out
collection of high-performance string handling routines.
A couple of quick hints:
- This is not a correct
On Monday, 27 June 2016 at 20:43:40 UTC, Ola Fosheim Grøstad
wrote:
Just keep in mind that the major bottleneck now is loading 64
bytes from memory into cache. So if you test performance you
have to make sure to invalidate the caches before you test and
test with spurious reads over a very
On Tuesday, 28 June 2016 at 09:18:34 UTC, qznc wrote:
Did you also compare to strlen from libc? I'd guess GNU libc
uses a lot more tricks like vector instructions.
I did test with the libc strlen, although the D libraries did not
have a strlen for dchar or wchar. I'm currently using this for
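The code being referred to is cut off above; a minimal wchar strlen,
absent from druntime, could look like this (wstrlen is an illustrative
name, and no padding trick is assumed here, just a single terminating
wchar 0):

```d
// Plain wchar strlen, since druntime's strlen only covers char*.
nothrow pure size_t wstrlen(const(wchar)* c) {
    if (c is null)
        return 0;
    size_t l = 0;
    while (c[l])
        ++l;
    return l;
}
```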
On Tuesday, 28 June 2016 at 09:31:46 UTC, Sebastiaan Koppe wrote:
If we were in an interview, I'd ask you "what does this return
if you pass it an empty string?"
Since no one is answering:
It depends on the memory right before c. But if there is at
least one 0 right before it - which is quite
On Tuesday, 18 April 2017 at 18:09:54 UTC, Thomas Brix Larsen
wrote:
"Cap’n Proto is an insanely fast data interchange format and
capability-based RPC system. Think JSON, except binary. Or
think Protocol Buffers, except faster."
The features below, from the capnproto.org description, interest
On Wednesday, 19 April 2017 at 16:52:14 UTC, Thomas Brix Larsen
wrote:
Take a look at FileDescriptor[1]. It is a class I've added to
support read/write using File from std.stdio. You can create a
similar streamer using std.mmfile. I believe that this would be
enough for memory mapped reading.
On Sunday, 2 July 2017 at 13:34:25 UTC, Szabo Bogdan wrote:
Any feedback is appreciated.
Thanks,
Bogdan
Hi, if you're just looking for other ideas, you might want to
look at adding capabilities like those in the Java Hamcrest matchers.
You might also want to support regular expression matches
On Tuesday, 18 July 2017 at 00:47:04 UTC, Jean-Louis Leroy wrote:
I don't know R but after a trip to Wikipedia it looks like it.
J-L
R is listed as one of the languages with built-in support in this
wiki link. I searched for multiple dispatch because I was
familiar with the similar