Re: xUnit Testing Framework for D

2013-06-12 Thread Juan Manuel Cabo
On 06/12/2013 07:15 PM, Mario Kroeplin wrote:
> Here is the 'dunit' mentioned in the talk by Stefan Rohe:
> https://github.com/linkrope/dunit
> 
> D-stroy ;-)

I'm glad that dunit was of use to you and that the fork
went well.
I'm sorry I couldn't follow up on it.

--jm







Re: DConf 2013 Day 2 Talk 5: A Precise Garbage Collector for D by Rainer Schütze

2013-06-12 Thread Juan Manuel Cabo
On 06/05/2013 10:23 AM, Andrei Alexandrescu wrote:
> Reddit: 
> http://www.reddit.com/r/programming/comments/1fpw2r/dconf_2013_day_2_talk_5_a_precise_garbage/
> 
> Hackernews: https://news.ycombinator.com/item?id=5825320
> 
> Twitter: https://twitter.com/D_Programming/status/342269600689430529
> 
> Facebook: https://www.facebook.com/dlang.org/posts/651801198166898
> 
> Youtube: http://youtube.com/watch?v=LQY1m_eT37c
> 
> Please drive discussions on the social channels, they help D a lot.
> 
> 
> Andrei


Loved this talk.

Would structs have an extra field in memory pointing to the
needed type info? If all of this is implemented, will it
mean that an array of structs no longer has its data contiguous
in memory?

Thanks for the talk!

--jm






Re: DConf 2013 Day 3 Talk 1: Metaprogramming in the Real World by Don Clugston

2013-06-12 Thread Juan Manuel Cabo
On 06/11/2013 09:33 AM, Andrei Alexandrescu wrote:
> Reddit: 
> http://www.reddit.com/r/programming/comments/1g47df/dconf_2013_metaprogramming_in_the_real_world_by/
> 
> Hackernews: https://news.ycombinator.com/item?id=5861237
> 
> Twitter: https://twitter.com/D_Programming/status/344431490257526785
> 
> Facebook: https://www.facebook.com/dlang.org/posts/655271701153181
> 
> Youtube: http://youtube.com/watch?v=pmwKRYrfEyY
> 
> Please drive discussions on the social channels, they help D a lot.
> 
> 
> Andrei


Great talk!!!

Can't wait for faster CTFE; the new orange serialization
library would benefit from it.

--jm




Re: DConf 2013 Day 2 Talk 3: C# to D by Adam Wilson

2013-05-31 Thread Juan Manuel Cabo
On 05/31/2013 03:42 PM, Steven Schveighoffer wrote:
> [..]
> I would love to say that I have set aside enough time to do it, but it's very 
> difficult to find the time :(
> 
> I hate to commit to a certain time frame, I have done that here in the past 
> and have been very wrong with my expectations.
> 
> That being said, my lack of effort on D stuff is really pissing me off, and I 
> want to spend more time on it.  Dconf
> really has yanked me back into D, and I want to finish all the loose ends 
> I've started, including dcollections, this
> streaming stuff, and some other little bits.
> 
> -Steve

I'm very happy to read this.

It would be awesome to have the power of dcollections in phobos!!
I would definitely appreciate it, and so would a lot of other people!!!
Streams and collections are very important building blocks.

--jm




Re: DConf 2013 Day 2 Talk 3: C# to D by Adam Wilson

2013-05-31 Thread Juan Manuel Cabo
On 05/31/2013 05:18 PM, Nick Sabalausky wrote:
> On Fri, 31 May 2013 15:29:40 +0100
> "Regan Heath"  wrote:
> 
>>
>> I have old SHA etc hashing routines in old style D, this makes me
>> want to spend some time bringing them up to date...
>>
> 
> http://dlang.org/phobos/std_digest_sha.html
> 
> Since 2.061, IIRC.
> 

The SHA digest in phobos is SHA1 only.
SHA256 and SHA512 are still missing.

--jm




Re: DConf 2013 Day 2 Talk 3: C# to D by Adam Wilson

2013-05-31 Thread Juan Manuel Cabo
On 05/31/2013 09:33 AM, Andrei Alexandrescu wrote:
> http://www.reddit.com/r/programming/comments/1feem1/dconf_2013_day_2_talk_3_from_c_to_d_by_adam_wilson/
> 
> {Enj,Destr}oy!
> 
> Andrei

Just watched it over lunch and I liked this talk very much.

For transforming pieces of code I very often write a Vim regex
(multiline matching is supported with a flag), and when that is
not enough, writing a Vim function does the trick.

About streams: there is some phobos support for streams, though
it doesn't seem finalized.

I wish something were done about the containers. Note that it
is very easy to write C#-style containers in an OOP fashion,
based on T[] and T[K] internally (though a concurrent hash map
with read/write locking would need to be written from scratch,
without using AAs).
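
As a rough illustration of how thin such a wrapper can be, here is
a minimal sketch (a hypothetical List!T of mine, not a phobos
proposal):

    import std.stdio;

    // Minimal C#-like List!T built on a plain T[] slice internally.
    class List(T) {
        private T[] items;

        void add(T item)         { items ~= item; }
        T opIndex(size_t i)      { return items[i]; }
        @property size_t count() { return items.length; }
    }

    void main() {
        auto list = new List!int;
        list.add(1);
        list.add(2);
        writeln(list.count, " ", list[0]);  // prints: 2 1
    }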

It is not true that Array!T is equivalent to C#'s List<T>.
Array!T wants to own its items (because it manages its own
memory), so it is only practically usable with structs.
Even duplicating the array is unsafe if the element type
is a class:

import std.stdio, std.container;

class A {
    int val;
    this(int v) {
        val = v;
    }
    ~this() {
        writeln("A destroyed");
    }
}

void func(Array!A list) {
}

void main() {
    A a = new A(3);
    Array!A list;
    list ~= a;
    writeln(a.val);  // prints 3
    func(list.dup);  // prints "A destroyed"
    // <-- The object cannot be used anymore, though it
    //     is still present in 'list'
    writeln(a.val);  // prints 0
}

And one cannot use RefCounted!A, because RefCounted doesn't
work with classes.
I guess RedBlackTree suffers from the same problem.

--jm






Re: DConf 2013 Day 1 Talk 6: Concurrent Garbage Collection for D by Leandro Lucarella

2013-05-24 Thread Juan Manuel Cabo

On Friday, 24 May 2013 at 19:44:19 UTC, Jonathan M Davis wrote:

On Friday, May 24, 2013 20:30:54 Juan Manuel Cabo wrote:


I know that this is slightly offtopic, since the topic lately
seems to be how to make the GC run generationally or with small
footprint (don't stop the world, etc.).

I'd like to know if there is interest in a precise garbage
collector.


There is interest in it, and Rainer Schütze did a talk on it at
DConf. At the current pace (assuming that Andrei actually posts
one on Monday even though it's a federal holiday in the US),
it'll be posted on June 3rd (and if he skips Monday, then it'll
probably be June 5th). And actually, the precise GC changes stand
a much better chance of making it into druntime in the short term
than any concurrency changes do.

- Jonathan M Davis


Thanks, that is great news!

--jm


Re: DConf 2013 Day 1 Talk 6: Concurrent Garbage Collection for D by Leandro Lucarella

2013-05-24 Thread Juan Manuel Cabo

On Monday, 20 May 2013 at 12:50:23 UTC, Andrei Alexandrescu wrote:

On reddit:

http://www.reddit.com/r/programming/comments/1eovfu/dconf_2013_day_1_talk_6_concurrent_garbage/


Enjoy! Discuss!! Vote!!!

Andrei


I know that this is slightly offtopic, since the topic lately 
seems to be how to make the GC run generationally or with small 
footprint (don't stop the world, etc.).


I'd like to know if there is interest in a precise garbage 
collector. Anyways, here is how .NET does it:


http://blogs.msdn.com/b/abhinaba/archive/2009/03/03/back-to-basics-how-does-the-gc-find-object-references.aspx

It uses a mask stored in the type information of a class. D
doesn't have this kind of type info at runtime I guess, but since
D is on the verge of supporting multiple dlls/so's, now is the
time for a small modification to the ABI to support this (if it
is ever going to be made).
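
To make the idea concrete, here is a tiny illustrative sketch of a
per-type pointer mask (a hypothetical layout of mine, nothing like
druntime's actual data structures):

    // Illustrative only: a per-type bitmask marking which words of
    // an instance may hold pointers, so a precise GC can skip data.
    struct TypePointerMap {
        size_t sizeInWords;  // instance size, in machine words
        size_t[] mask;       // bit i set => word i may hold a pointer
    }

    // For a type like  class C { size_t id; C next; double x; }
    // only the word of 'next' would have its bit set, so the
    // collector never mistakes 'id' or 'x' for (false) pointers.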


I know that in 64 bits there is less of a problem with data
acting as false pointers, but having a precise garbage collector
would make two things possible:

  1) Defragmenting the heap, by being able to move references.
  2) Making a generational GC easier.

The following link explains (in the first comment) how .NET 
distinguishes its own stack frames from non-managed stack frames, 
by adding a "cookie":


http://stackoverflow.com/questions/10669173/how-does-the-gc-update-references-after-compaction-occurs


--jm



Re: avgtime - Small D util for your everyday benchmarking needs

2012-03-26 Thread Juan Manuel Cabo

On Tuesday, 27 March 2012 at 03:39:56 UTC, Ary Manzana wrote:
On Tuesday, 27 March 2012 at 01:19:22 UTC, Juan Manuel Cabo 
wrote:

On Tuesday, 27 March 2012 at 00:58:26 UTC, Ary Manzana wrote:

On 3/23/12 4:11 PM, Juan Manuel Cabo wrote:

On Friday, 23 March 2012 at 06:51:48 UTC, James Miller wrote:

Dude, this is awesome. I tend to just use time, but if I was
doing anything more complicated, I'd use this. I would suggest
changing the name while you still can. avgtime is not that
informative a name given that it now does more than just
"Average" times.

--
James Miller




Dude, this is awesome.


Thanks!! I appreciate your feedback!


I would suggest changing the name while you still can.


Suggestions welcome!!

--jm



give_me_d_average



Hahahah, naahh, I prefer avgtime or timestats, because 'times'
would autocomplete to timestats.

How have you been all this time? Thanks for mentioning D to me
years ago. It stuck in my head, and last year, when I started a
new job, I had the chance to get into D.

Cheers Ary, I hope you're doing well!!
--jm


I said the name as a joke :-P

[...]
ahhaha, I know you said it as a joke!
--jm





Re: avgtime - Small D util for your everyday benchmarking needs

2012-03-26 Thread Juan Manuel Cabo

On Thursday, 22 March 2012 at 17:13:58 UTC, Manfred Nowak wrote:

Juan Manuel Cabo wrote:


like the unix 'time' command


`version linux' is missing.

-manfred



Done! It works in windows now too
(release 0.5 on github).

--jm




Re: avgtime - Small D util for your everyday benchmarking needs

2012-03-26 Thread Juan Manuel Cabo

On Tuesday, 27 March 2012 at 00:58:26 UTC, Ary Manzana wrote:

On 3/23/12 4:11 PM, Juan Manuel Cabo wrote:

On Friday, 23 March 2012 at 06:51:48 UTC, James Miller wrote:

Dude, this is awesome. I tend to just use time, but if I was
doing anything more complicated, I'd use this. I would suggest
changing the name while you still can. avgtime is not that
informative a name given that it now does more than just
"Average" times.

--
James Miller




Dude, this is awesome.


Thanks!! I appreciate your feedback!


I would suggest changing the name while you still can.


Suggestions welcome!!

--jm



give_me_d_average



Hahahah, naahh, I prefer avgtime or timestats, because 'times'
would autocomplete to timestats.

How have you been all this time? Thanks for mentioning D to me
years ago. It stuck in my head, and last year, when I started a
new job, I had the chance to get into D.

Cheers Ary, I hope you're doing well!!
--jm





Re: avgtime - Small D util for your everyday benchmarking needs

2012-03-23 Thread Juan Manuel Cabo

On Friday, 23 March 2012 at 05:26:54 UTC, Nick Sabalausky wrote:


Wow, that's just fantastic! Really, this should be a standard 
system tool.


I think this guy would be proud:
http://zedshaw.com/essays/programmer_stats.html


Thanks for the good vibes!

Hahahhah, that article is so hilarious!
I love the Maddox tone.

--jm




Re: avgtime - Small D util for your everyday benchmarking needs

2012-03-23 Thread Juan Manuel Cabo

On Friday, 23 March 2012 at 10:51:37 UTC, Don Clugston wrote:


No, it's easy. Student t is in std.mathspecial.


Aargh, I didn't get around to copying it in. But this should
do it.


/** Inverse of Student's t distribution
 *
 [...]


Great!!! Thank you soo much Don!!!
--jm




Re: avgtime - Small D util for your everyday benchmarking needs

2012-03-23 Thread Juan Manuel Cabo
On Friday, 23 March 2012 at 15:33:18 UTC, Andrei Alexandrescu 
wrote:

On 3/23/12 3:02 AM, Juan Manuel Cabo wrote:
On Friday, 23 March 2012 at 05:16:20 UTC, Andrei Alexandrescu 
wrote:

[...]

(man, the gaussian curve is everywhere, it never ceases to
perplex me).


I'm actually surprised. I'm working on benchmarking lately and
the distributions I get are very concentrated around the minimum.

Andrei



Well, the shape of the curve depends a lot on
how the random noise gets inside the measurement.

[snip]

Hmm, well the way I see it, the observed measurements have the 
following composition:


X = T + Q + N

where T > 0 (a constant) is the "real" time taken by the 
processing, Q > 0 is the quantization noise caused by the 
limited resolution of the clock (can be considered 0 if the 
resolution is much smaller than the actual time), and N is 
noise caused by a variety of factors (other processes, 
throttling, interrupts, networking, memory hierarchy effects, 
and many more). The challenge is estimating T given a bunch of 
X samples.


N can probably be approximated by a Gaussian, although for 
short timings I noticed it's more like bursts that just cause 
outliers. But note that N is always positive (therefore not 
100% Gaussian), i.e. there's no way to insert some noise that 
makes the code seem artificially faster. It's all additive.


Taking the mode of the distribution will estimate T + mode(N), 
which is informative because after all there's no way to 
eliminate noise. However, if the focus is improving T, we want 
an estimate as close to T as possible. In the limit, taking the 
minimum over infinitely many measurements of X would yield T.



Andrei


In general, I agree with your reasoning. And I appreciate you
taking the time to put it so eloquently!!

But I think that considering T a constant, and preferring the
minimum, misses something. This might work very well for
benchmarking mostly CPU-bound processes, but all those other
things that you consider noise (disk I/O, network, memory
hierarchy, etc.) are part of what makes one algorithm or program
faster than another, and I would consider them inside T for
some applications.

Consider the case depicted in this wonderful (ranty) article
that was posted elsewhere in this thread:
http://zedshaw.com/essays/programmer_stats.html
In one part of the article, the guy talks about a system that
worked fast most of the time, but would halt for a good 1 or 2
minutes sometimes.

The minimum time for such a system might be a few ms, but the
standard deviation would be big. This properly shifts the
average time away from the minimum.

If programA does the same task as programB with less I/O, or
with better memory layout, etc., its average will be better,
and maybe its timings won't be so spread out. But the minimum
will be the same.

So, in the end, I'm just happy that I could share this little
avgtime with you all, and as usual there is no one-size-fits-all
answer. For some applications, the minimum will be enough. For
others, it's essential to look at how spread out the sample is.
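
For what it's worth, a few lines of D are enough to see the
difference between the estimators being discussed (a sketch with
made-up numbers, not avgtime's actual code):

    import std.stdio, std.algorithm, std.math;

    void main() {
        // Timings in ms; the last one simulates a rare big stall.
        double[] t = [6.9, 7.0, 6.8, 6.9, 7.1, 6.9, 95.0];

        double minimum = t.minElement;
        double mean = t.sum / t.length;
        double sd = sqrt(t.map!(x => (x - mean) ^^ 2).sum
                         / (t.length - 1));

        // The stall barely moves the minimum but drags the mean up
        // and blows up the standard deviation -- the point above.
        writefln("min=%.1f  mean=%.1f  stddev=%.1f", minimum, mean, sd);
    }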


On the symmetry/asymmetry of the distribution topic:
I realize as you said that T never gets faster than
a certain point.
But, depending on the nature of the program under test,
the good utilization of disk I/O, network, memory,
motherboard buses, etc. is what you want inside the
test too, and those come with gaussian like noises
which might dominate over T or not.

A program that avoids that other big noise is a better
program (all else the same), so I would tend to consider
the whole.

Thanks for the eloquency/insightfulness in your post!
I'll consider adding chi-squared confidence intervals
in the future. (and open to more info or if another
distribution might be better).

--jm





Re: avgtime - Small D util for your everyday benchmarking needs

2012-03-23 Thread Juan Manuel Cabo

On Friday, 23 March 2012 at 05:51:40 UTC, Manfred Nowak wrote:


| For samples, if it is known that they are drawn from a 
symmetric
| distribution, the sample mean can be used as an estimate of 
the

| population mode.


I'm not printing the population mode, I'm printing the 'sample
mode'. It has a very clear meaning: the most frequent value. To
get frequencies, I group the values into 'bins' by precision:
12.345 and 12.3111 will both go to the 12.3 bin.
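
A tiny sketch of that binning (illustrative; not avgtime's exact
code):

    import std.stdio, std.math;

    // Keep one decimal digit so that nearby values share a bin and
    // can be counted for the sample mode.
    double bin(double x, double precision = 0.1) {
        return floor(x / precision) * precision;
    }

    void main() {
        writeln(bin(12.345));   // 12.3
        writeln(bin(12.3111));  // 12.3
    }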



and the program computes the variance as if the values of the
sample follow a normal distribution, which is symmetric.


This program doesn't compute the variance. Maybe you are talking
about another program. This program computes the standard
deviation of the sample. The sample doesn't need to be of any
particular distribution to have a standard deviation. It is not
a distribution parameter, it is a statistic.


Therefore the mode of the sample is of interest only when the
variance is calculated wrongly.


???

The 'sample mode', 'median' and 'average' can quickly tell you
something about the shape of the histogram without looking at it.
If the three coincide, then maybe you are in normal distribution
land.

The only place where I assume a normal distribution is in the
confidence intervals. And it's stated in the usage help.

If you want to support estimating the parameters of weird
probability distributions, forks and pull requests are welcome.
Rewrites too. Good luck detecting distribution shapes  ;-)




-manfred


PS: I should use Student's t to make the confidence intervals,
and for computing that I should use the sample standard
deviation (dividing by n-1), but that is a completely different
story. The z normal approximation with n>30 is quite good.
(I would have to embed a table of Student's t tail factors,
pull reqs welcome).
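
For reference, the z-based interval mentioned above boils down to
a few lines (a sketch, not avgtime's exact code):

    import std.stdio, std.algorithm, std.math;

    void main() {
        double[] x = [6.9, 7.0, 6.8, 7.1, 6.9, 7.0, 6.8, 7.2];
        double n = x.length;
        double mean = x.sum / n;
        // Sample standard deviation (dividing by n-1).
        double sd = sqrt(x.map!(v => (v - mean) ^^ 2).sum / (n - 1));
        double e = 1.96 * sd / sqrt(n);  // 1.96 = z for 95%
        writefln("95%% conf.int.: [%.4f, %.4f]  e = %.4f",
                 mean - e, mean + e, e);
    }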

PS2: I have now fixed the confusion between the confidence
interval of the variable and the confidence interval of the mu
average; I simply now show both (release 0.4).

PS3: Statistics estimate distribution parameters.

--jm





Re: avgtime - Small D util for your everyday benchmarking needs

2012-03-23 Thread Juan Manuel Cabo

On Friday, 23 March 2012 at 06:51:48 UTC, James Miller wrote:

Dude, this is awesome. I tend to just use time, but if I was
doing anything more complicated, I'd use this. I would suggest
changing the name while you still can. avgtime is not that
informative a name given that it now does more than just
"Average" times.

--
James Miller




Dude, this is awesome.


Thanks!! I appreciate your feedback!


I would suggest changing the name while you still can.


Suggestions welcome!!

--jm



Re: avgtime - Small D util for your everyday benchmarking needs

2012-03-23 Thread Juan Manuel Cabo

On Thursday, 22 March 2012 at 17:13:58 UTC, Manfred Nowak wrote:

Juan Manuel Cabo wrote:


like the unix 'time' command


`version linux' is missing.

-manfred



Linux only for now. Will make it work in windows this weekend.

I hope that's what you meant.

--jm




Re: avgtime - Small D util for your everyday benchmarking needs

2012-03-23 Thread Juan Manuel Cabo
On Friday, 23 March 2012 at 05:16:20 UTC, Andrei Alexandrescu 
wrote:

[...]

(man, the gaussian curve is everywhere, it never ceases to
perplex me).


I'm actually surprised. I'm working on benchmarking lately and 
the distributions I get are very concentrated around the 
minimum.


Andrei



Well, the shape of the curve depends a lot on how the random
noise gets inside the measurement.

I like  'ls -lR'  because the randomness comes from everywhere,
and it's quite bell shaped. I guess there is a lot of I/O mess
(even if the I/O is all cached, there are lots of opportunities
for kernel mutexes to mess everything up, I guess).

When testing "/bin/sleep 0.5", it will be quite a boring
histogram.

And I guess that when testing something that's only CPU bound
and doesn't make too many syscalls, the shape is more
concentrated around a few values.


On the other hand, I'm getting some weird bimodal (two peaks)
curves sometimes, like the one I put in the README.md.
It's definitely because of my laptop's CPU throttling, because
it went away when I disabled it. (For the curious ones, on
64-bit ubuntu, here is a way to disable throttling (WARNING:
might get hot until you undo it or reboot):

echo 1600000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
echo 1600000 > /sys/devices/system/cpu/cpu1/cpufreq/scaling_min_freq

(yes, my cpu is 1.6GHz, but it rocks).


--jm





Re: avgtime - Small D util for your everyday benchmarking needs

2012-03-22 Thread Juan Manuel Cabo
On Thursday, 22 March 2012 at 22:22:31 UTC, Andrei Alexandrescu 
wrote:


Sweet! You may want to also print the mode of the distribution, 
which is the time of the maximum sample density. 
http://en.wikipedia.org/wiki/Mode_(statistics) (Warning: 
nontrivial but informative.)



Andrei


Thanks for your feedback!

Sweet! You may want to also print the mode of the distribution, 
[...]


Done! Just pushed it to github. I made a histogram too!!
(man, the gaussian curve is everywhere, it never ceases to
perplex me). The histogram bins are the most significant digits
(three "automatic" levels of precision, with rounding and
casting tricks).

But I think the most important change is that I'm now showing
the 95% and 99% confidence intervals. (For the confidence
intervals to mean anything, please everyone remember to control
your variables (don't defrag and benchmark :-) !!) so that
apples are still apples and don't become oranges, and make sure
N>30.)

More info on histogram and confidence intervals in the
usage help.


avgtime -q -h -r400 ls /etc


Total time (ms): 2751.96
Repetitions: 400
Sample mode: 6.9 (79 ocurrences)
Median time: 6.945
Avg time   : 6.8799
Std dev.   : 0.93927
Minimum: 3.7
Maximum: 16.36
95% conf.int.  : [6.78786, 6.97195]  e = 0.0920468
99% conf.int.  : [6.75893, 7.00087]  e = 0.12097
Histogram  :
msecs: count  normalized bar
  3.7: 2  #
  3.8: 4  ##
  3.9: 1
  4.0: 1
  4.2: 4  ##
  4.3: 1
  4.4: 1
  4.5: 2  #
  4.6: 3  #
  4.7: 2  #
  4.8: 3  #
  4.9: 3  #
  5.2: 1
  5.3: 2  #
  6.1: 1
  6.2: 1
  6.3: 4  ##
  6.4: 6  ###
  6.5:14  #######
  6.6:21  ##########
  6.7:31  ###############
  6.8:50  #########################
  6.9:79  #######################################
  7.0:48  ########################
  7.1:29  ##############
  7.2:22  ###########
  7.3:13  ######
  7.4: 8  ####
  7.5: 7  ###
  7.6:12  ######
  7.7: 6  ###
  7.8: 6  ###
  7.9: 2  #
  8.0: 3  #
  8.1: 1
  8.2: 1
  8.7: 1
  8.8: 1
  9.1: 1
 11.5: 1
 16.3: 1


--jm





Re: avgtime - Small D util for your everyday benchmarking needs

2012-03-21 Thread Juan Manuel Cabo

On Thursday, 22 March 2012 at 01:37:19 UTC, Tove wrote:


Awesome, I do have a tiny feature request for the next 
version... a commandline switch to enable automatically 
discarding the first run as an outlier.


/Tove


Done, I just put it in github. (-d switch).
But maybe you should be looking at the median
to ignore outliers.


I also added a -p switch to print all the times:

./avgtime -d -q -p -r10  ls -lR /usr/share/doc


Total time (ms): 3986.69
Repetitions: 10
Median time: 397.62
Avg time   : 398.669
Std dev.   : 2.95832
Minimum: 395.633
Maximum: 406.274
Sorted times   :
[395.633, 396.261, 396.273, 397.413, 397.425, 397.815,
399.321, 399.719, 400.551, 406.274]


--jm




avgtime - Small D util for your everyday benchmarking needs

2012-03-21 Thread Juan Manuel Cabo

This is a small util I wrote in D which is like the unix
'time' command, but can repeat the command N times and show the
median, average, standard deviation, minimum and maximum.

As you all know, it is not proper to conclude that
a program is faster than another program by running
them just once.

It's Boost licensed and is on github:

https://github.com/jmcabo/avgtime

Example:


avgtime -r 10 -q ls -lR /etc


Total time (ms): 933.742
Repetitions: 10
Median time: 90.505
Avg time   : 93.3742
Std dev.   : 4.66808
Minimum: 88.732
Maximum: 101.225

The -q argument redirects the stderr and stdout of the program
under test to /dev/null.

I put more info in the github page.


HAVE FUN!!

--jm




Re: DUnit - class MyTest { mixin TestMixin; void testMethod1() {} void testMethod2() {}}

2012-03-21 Thread Juan Manuel Cabo

On Wednesday, 21 March 2012 at 20:04:39 UTC, Rio wrote:

On Wednesday, 21 March 2012 at 17:29:59 UTC, Juan Manuel Cabo
wrote:

Btw, here is the whole list:
   
http://www.junit.org/junit/javadoc/3.8.1/junit/framework/Assert.html



Do you have any thoughts?


Be careful: for JUnit 4 there is a separation of concerns.

The assertions are now the responsibility of the Hamcrest
library: http://code.google.com/p/hamcrest/
(Wouldn't it be nice to have a port to D?)


Not true.
Regular asserts are still regular asserts in JUnit 4.
Hamcrest is supported for other kinds of asserts.

The only care that must be taken is to treat an assert as
something that throws core.exception.AssertError, which is
already done. You can create your own kinds of asserts.

Also, for mock objects, take a look at BlackHole and WhiteHole
in std.typecons. Those, together with anonymous classes, are
pretty much all you need for basic mock objects for testing.
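
For instance, a minimal sketch (the Service interface is a made-up
example of mine; BlackHole and anonymous classes are the phobos/D
features mentioned above):

    import std.typecons : BlackHole;

    interface Service {
        int fetchCount();
        void log(string msg);
    }

    void main() {
        // BlackHole!Service implements every abstract method as a
        // do-nothing stub returning the type's .init value; an
        // anonymous class then overrides only what the test needs.
        auto mock = new class BlackHole!Service {
            override int fetchCount() { return 42; }
        };
        assert(mock.fetchCount() == 42);
        mock.log("ignored");  // auto-generated empty stub
    }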




On the other hand, D/JUnit "just" provides the frame for the
test cases.


Exactly.



Here are some ideas for this DUnit frame (from the previous
DUnit):
- provide means to let a test case pass iff an expected
  exception is thrown


You already have it: std.exception.assertThrown().
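
For example (plain phobos, no DUnit needed):

    import std.exception : assertThrown;
    import std.conv : to, ConvException;

    unittest {
        // Passes iff the expression throws; here to!int throws
        // a ConvException on the bad input.
        assertThrown!ConvException(to!int("not a number"));
    }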



- use command-line args to filter test cases to be executed


Nice, I'll implement it in the DUnitMain mixin.



- add an XML test report for inspection by machines


Not high on the priority list, but yes, some kind of automation
for Ant and IDEs would be cool. The reason I tried to stay
compatible with java-style test output is that some existing
tools might already be able to parse it.
   For XML output, I would lay the output out in some form that
is already recognized by tools, and that would take me some time.
And this DUnit is still too green (but 100% solid and usable in
what it provides).
   I first want to do a graphical test runner.
I also want to have the runner precompiled outside,
so that I can stay in the test runner window and just click
'retest' after recompiling. I got very used to this rhythm
of work, and I think it'd be nice to have it in D. For
that to work, I have to solve a few issues.

Also, I don't want to make a graphical test runner that
only works on windows or only works on unix. And I love DWT,
so I first want a DWT that everyone can depend on.
I already made DWT compile on 64-bit linux, where it didn't work
before (though not as a library, which had some issues of its
own; instead, I have to list aaalll the DWT files used on the
dmd command line).

So my priorities are (for when I have time!!):

 -Merge to latest DWT (I diverged last year when dmd 2.0.54
  was newest) and make a pull request for DWT.

 -Add more assert* to DUnit, finalizing its API so to speak.
  Any other future DUnit change will be source compatible.
  (at least until D gets user defined attributes!).

 -Solve the 'test runner compiled outside' issues.

 -Make a graphical test runner, the nicest, slickest,
  coolest one of the *Unit bunch!!

--jm





Re: DUnit - class MyTest { mixin TestMixin; void testMethod1() {} void testMethod2() {}}

2012-03-21 Thread Juan Manuel Cabo

On Sunday, 18 March 2012 at 11:05:59 UTC, Marc P. Michel wrote:
Oh and also, changing "version(linux)" with "version(Posix)" 
for the color output management would be great. ( I'm on 
FreeBSD and was wondering why I had no colors as advertised :} 
).


Yeahp, will fix it. Sorry!

Thanks for finding that!

Still haven't had the time to get back to dunit,
but I will eventually, and also make a proper documentation.

  I'm also going to make the whole family of assert functions
(assertNotNull, assertNotEquals, etc.), but I'm not sure where
to put the optional 'message' argument (first or last).
To be compatible with the java style, 'message' would go first.
But that would have two issues:

  - It's different from D's assert, where 'message' goes last.

  - "assertNotNull(someStr, "message");" would compile and you
    wouldn't have a clue that "message" was supposed to go as
    the first argument.
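
A sketch of the message-last option, which sidesteps that second
issue (a hypothetical signature, not DUnit's final API):

    import core.exception : AssertError;

    // Hypothetical message-last variant, matching D's
    // assert(cond, msg) argument order. T must be a nullable
    // type (class or pointer).
    void assertNotNull(T)(T value, string msg = null,
                          string file = __FILE__,
                          size_t line = __LINE__)
    {
        if (value is null)
            throw new AssertError(msg !is null ? msg : "value is null",
                                  file, line);
    }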

Btw, here is the whole list:

http://www.junit.org/junit/javadoc/3.8.1/junit/framework/Assert.html



Do you have any thoughts?

--jm





Re: DUnit - class MyTest { mixin TestMixin; void testMethod1() {} void testMethod2() {}}

2012-03-21 Thread Juan Manuel Cabo

On Saturday, 17 March 2012 at 12:30:49 UTC, Marc P. Michel wrote:
On Monday, 20 February 2012 at 01:49:04 UTC, Juan Manuel Cabo 
wrote:
I thought I could do a better effort to describe why DUnit is 
so extraordinary,
for a native language, especially for those unfamiliar with 
xUnit frameworks


This is great stuff, thanks !


You're welcome! I'm glad that you tried it!!



Anyway, I'm not fond of your examples; so here is a silly one 
from me :


http://lanael.free.fr/summertest.d.html


It's nice and clear indeed. If you put a BOOST licence header
and your name in your 3 example files, I'll add it to github.

Yeah, the example.d is a bit rough because I wanted to quickly
show all that you can do with dunit in a single file.

--jm




Re: GoingNative 6: The D Episode with Walter Bright and Andrei Alexandrescu

2012-02-21 Thread Juan Manuel Cabo
> We want to have many users.

dUsers ~= (juanManuel);

:-) :-) :-)

--jm

On 02/21/2012 09:39 PM, Walter Bright wrote:
> http://channel9.msdn.com/Shows/C9-GoingNative/GoingNative-6-The-D-Episode-with-Walter-Bright-and-Andrei-Alexandrescu



Re: Please try rdmd on large projects

2012-02-20 Thread Juan Manuel Cabo
I said:

>> Is it possible to have an option to skip rechecking inside phobos
>> dependencies each time? That would be the thing that brings
>> it down to < 5ms.

but I did a:

ltrace -e __xstat64 ./rdmd bla.d

and saw that rdmd doesn't recheck phobos, so SORRY nevermind what I 
said!!
And saw the inALibrary() function later.


But, even though rdmd now doesn't recheck phobos, "dmd --deps" does.


By the way, the .deps caching has two problems as it is right now in github:

1) BUG: the .deps should be rebuilt if the root file changes.
   This misses the  isNewer(root, exe)  part (see the sketch after this list):

// See if the deps file is still in good shape
auto deps = readDepsFile();
bool mustRebuildDeps = anyNewerThan(deps.keys, depsFilename);
if (!mustRebuildDeps)

   So if I add an import to a dependency, the .deps get rebuilt,
   but if I add an import to the D file itself, the .deps don't get rebuilt,
   and the only solution is to delete the .deps file.
   And if the .deps file goes to the /tmp dir, some users will miss that
   they have to delete the .deps file and will get stuck.

2) The .deps file is thrown in the same directory as the D root file, instead
   of at /tmp. One might not have write access to the D script directory,
   just read access.

--jm


On 02/21/2012 02:17 AM, Juan Manuel Cabo wrote:
> Doing:
> 
>   ltrace -e open dmd -deps=outdeps.txt example.d
> 
> and:
> 
>   ltrace -e read dmd -deps=outdeps.txt example.d
> 
> shows that dmd opens and reads a lot of phobos and druntime
> to generate the dependencies of:
> 
>  import std.stdio;
>  void main() { writeln("something");}
> 
> --jm
> 
> 
>>




Re: Please try rdmd on large projects

2012-02-20 Thread Juan Manuel Cabo
Doing:

  ltrace -e open dmd -deps=outdeps.txt example.d

and:

  ltrace -e read dmd -deps=outdeps.txt example.d

shows that dmd opens and reads a lot of phobos and druntime
to generate the dependencies of:

 import std.stdio;
 void main() { writeln("something");}

--jm


On 02/21/2012 02:02 AM, Juan Manuel Cabo wrote:
> GOOD!
> 
> Is the missing chmod problem fixable? So that the
> binary has the same permissions as the D file?
> If my D file is not readable or runnable by 'other',
> the binary shouldn't be either. (the cached .deps should
> have the same readability as the D file too perhaps).
> 
> 
> I think that this is the big timesaver:
> 
>  rdmd: cache dependency file to improve startup time
> 
> So: Big Thanks!! I was using a wrapper for rdmd that only
> called rdmd if the file was modified (which worked great for
> small one-file scripts; those 300ms to 1000ms startup
> delays were unbearable).
> With the .deps caching, rerun time went down to 20ms.
> It was 300ms ~ 1000ms before (depending on how many imports).
> 
> I think that 20ms is still too slow (for certain applications,
> it is just too much).
> 
> When rdmd asks dmd to generate the dependencies of my_file.d,
> dmd goes beyond and parses phobos files, opening
> all the module files in the path of dependency. I think
> that was the major slow part.
> 
> Is it possible to have an option to skip rechecking inside phobos
> dependencies each time? That would be the thing that brings
> it down to < 5ms.
> 
> --jm
> 
> 
> On 02/20/2012 07:17 PM, Andrei Alexandrescu wrote:
>> Hello,
>>
>>
>> I just submitted 
>> (https://github.com/D-Programming-Language/tools/commit/c77b870fdc5674d7434b03d1767ba831eaac25b1)
>>  a
>> change to rdmd that runs one thread per stat when comparing file dates, 
>> using David's excellent std.parallelism.
>>
>> In my experiment the change introduces no additional lag on small projects 
>> and works 10-15% faster on moderate projects
>> (couple dozen deps).
>>
>> Could someone try rdmd against some larger projects and assess its behavior 
>> and speed?
>>
>>
>> Thanks,
>>
>> Andrei
> 



Re: Please try rdmd on large projects

2012-02-20 Thread Juan Manuel Cabo
GOOD!

Is the missing chmod problem fixable? So that the
binary has the same permissions as the D file?
If my D file is not readable or runnable by 'other',
the binary shouldn't be either. (the cached .deps should
have the same readability as the D file too perhaps).


I think that this is the big timesaver:

 rdmd: cache dependency file to improve startup time

So: Big Thanks!! I was using a wrapper for rdmd that only
called rdmd if the file was modified (which worked great for
small one-file scripts; those 300ms to 1000ms startup
delays were unbearable).
With the .deps caching, rerun time went down to 20ms.
It was 300ms ~ 1000ms before (depending on how many imports).

I think that 20ms is still too slow (for certain applications,
it is just too much).

When rdmd asks dmd to generate the dependencies of my_file.d,
dmd goes beyond and parses phobos files, opening
all the module files in the path of dependency. I think
that was the major slow part.

Is it possible to have an option to skip rechecking inside phobos
dependencies each time? That would be the thing that brings
it down to < 5ms.

--jm


On 02/20/2012 07:17 PM, Andrei Alexandrescu wrote:
> Hello,
> 
> 
> I just submitted 
> (https://github.com/D-Programming-Language/tools/commit/c77b870fdc5674d7434b03d1767ba831eaac25b1)
>  a
> change to rdmd that runs one thread per stat when comparing file dates, using 
> David's excellent std.parallelism.
> 
> In my experiment the change introduces no additional lag on small projects 
> and works 10-15% faster on moderate projects
> (couple dozen deps).
> 
> Could someone try rdmd against some larger projects and assess its behavior 
> and speed?
> 
> 
> Thanks,
> 
> Andrei



Re: DUnit - class MyTest { mixin TestMixin; void testMethod1() {} void testMethod2() {}}

2012-02-19 Thread Juan Manuel Cabo
I thought I could make a better effort to describe why DUnit is so
extraordinary for a native language, especially for those unfamiliar
with xUnit frameworks or TDD. So here it goes:


*What is a unit test*

Unit tests, ideally, test a specific functionality in isolation, so that
if the test fails, you can assume that it's because the functionality
under test, and only that functionality, is broken.

Testing a program or system is complex. This is not the only kind of test
that should be done on a system. But it's one kind of test that lets you
split the testing effort into less complex units, especially if your
system is in fact assembled from smaller units.

Unit testing frameworks provide mechanisms to isolate and reset the
program state after one test, so that when the next test begins,
you get a clean slate.
They also give you mechanisms that make it easier to reuse the
effort you put into setting up the test environment for a test,
because in fact many tests will have the same setup. Those tests
are then made part of the same "TestCase".
A "TestCase" contains tests, and contains a setup method common
to all of them.


*What is an xUnit framework?*

It's a framework that allows you to write unit tests in a fashion that
goes well with TestDrivenDevelopment and TestFirstDesign.

It's also a kind of "meme" that got spread to most programming languages.
We owe the existence of this powerful meme to Kent Beck and his
SUnit framework.
Arguably, the most popular incarnation nowadays is JUnit
(https://en.wikipedia.org/wiki/JUnit).

This "meme" consists of at least the following elements:

   * TestCases as classes (with setup() and teardown() methods).

   * Tests as methods of TestCases.

   * Convenience assert functions.

   * A green progress bar that turns to red when one of the Tests fails.

   * Pluggable console or GUI "test runners".

So, the user writes a class... that contains methods... which in
turn contain asserts.
The user then compiles the program or library and starts the
"test runner", which then runs all the tests. Some runners display
the list of available TestCases, and allow picking which ones to
run (or re-run):

NUNIT: http://nunit.org/docs/2.5/img/gui-screenshot.jpg

NUNIT CONSOLE: http://nunit.org/docs/2.6/img/console-mock.jpg

JUNIT: http://www.eclipse.org/screenshots/images/JavaPerspective-WinXP.png
(bottom part of the IDE screenshot)

CPPUNIT: 
http://sourceforge.net/apps/mediawiki/cppunit/nfs/project/c/cp/cppunit/8/81/Mfctestrunner.png

CPPUNIT CONSOLE: 
http://sourceforge.net/apps/mediawiki/cppunit/index.php?title=File:Cursetr_RunningTests.png

Note the presence of the typical green progress bar, which turns
red when one test fails, giving fast feedback to the user and
acting as a reinforcement when green. It only stays green when
*all* tests pass (no failing asserts).


*But how does the 'test runner' know which methods are tests to run?*

Each programming language has its own best way of marking a class as a TestCase,
and of marking a method in it as a Test to be run.

   With JUnit3, one had to inherit the class from a specific base
class. A test runner would have to use reflection to get the list
of all classes that were TestCases to present them to the user,
and then use reflection to search for and list the methods whose
names start with "test".

   With JUnit4, the user only has to mark methods with the @Test
attribute. Through reflection, test runners can find all the
classes which contain methods marked with the @Test attribute, and
then call those methods. This still has some overhead (hopefully
not on the order of the number of methods in the program), and we
are talking about late binding too.

   With C++, there is absolutely no real reflection capability
(like getting all the names of the methods of a TestCase class),
so one has to manually register each test method using macros.
Each time you add a new test method, you have to type its name at
least three times. The result is not beautiful:
cppunit.sourceforge.net/doc/lastest/money_example.html
(but it's efficient though).


*How does DUnit do it?*

In DUnit, classes are marked as a TestCase by declaring a "mixin TestMixin"
once in any part of their body. You don't need to type the name of a method
more than once, or the name of the class more than once. The mixin gets
it all from the context in which it was instantiated (thanks to "typeof(this)").
It creates a static constructor for the class, with:

. a compile time generated immutable list of the names of the Test methods
  of the class whose names begin with "test" and can be called without
  arguments
  (thanks to __traits(allMembers,), __traits(compiles,) and recursive
  template declarations).

. a compile time generated function with a switch-case statement that takes
  the name of a Test and calls it
  (thanks to __traits(allMembers,), __traits(compiles,) and recursive
  template declarations).

Re: DUnit - class MyTest { mixin TestMixin; void testMethod1() {} void testMethod2() {}}

2012-02-19 Thread Juan Manuel Cabo


Interesting, congrats. A common question that will come up is 
comparing, contrasting, and integrating your work with the 
existing unittest language feature. You may want to address 
these issues directly in the documentation.


Thanks!! I'll put it in the doc (and also clean up my crude
documentation formatting). I plan to keep improving dunit too.


To answer quickly:

* The convenience couple of assertEquals() that I defined
  work by throwing core.exception.AssertError. This means that
  they can be used inside the classic unittest{} blocks, if you
  import dunit.

* The classic assert() statement can be used inside DUnit tests.

* One could run DUnit tests when passing -unittest to dmd.
  Just define once somewhere in your project:

     unittest {
         import dunit;
         dunit.runTests();
     }

  And all the classes in your project that are marked as DUnit
  tests with mixin TestMixin; will get run, no matter which
  module they are in.

* DUnit also shows exceptions which are not asserts. Those are
  displayed as 'Errors' instead of 'Failures' (as in the java
  style).

* DUnit will not halt the execution of the program if a test
  fails. It will keep going with the next test. This is very
  useful when combined with:

     unittest {
         import dunit;
         dunit.runTests();
     }

  On the other hand, D's classic unittests would halt the
  execution at the first failure.

* D's classic unittest{} blocks are more straightforward and
  instantaneous to learn, and support a brief style of writing
  unit tests that DUnit is not meant for.

* DUnit provides a style of unit testing popularized in the OOP
  crowd, which began with SUnit by Kent Beck (smalltalk), later
  JUnit by Kent Beck and Erich Gamma (java), and then also NUnit
  for .NET (there also exist FlexUnit for Flex, FireUnit for
  Javascript, CppUnit for C++, etc., but those deviate a little
  from the originals).
  So DUnit brings D to the family of languages that support
  certain testing idioms familiar to many OOP developers: test
  fixtures (grouped by classes) with common initialization,
  green bars ;-), decoupled test runners, convenience assert
  functions, etc.
  (I'm thinking of writing a quick DWT GUI test runner too.)

# (OFFTOPIC: I made a patched DWT that works in linux 64bits
  (by fixing a few bugs and commenting out some impossible XPCOM
  code), and I'll try to sync to Jacob Carlborg's github repo
  when I have more time. I also fixed missing 'double vars = 0'
  inits (instead of NaN) that produced slowdowns and prevented
  certain drawing functions from working in the Graphics
  Context.)

--jm



On Sunday, 19 February 2012 at 16:36:53 UTC, Andrei Alexandrescu 
wrote:

On 2/19/12 9:30 AM, Juan Manuel Cabo wrote:
People of the D world.. I give you DUnit (not to be confused 
with an old
tango DUnit, this one is for >= D2.057, and doesn't really 
require
phobos or tango (just you version the few writeln's of the 
runner, and

maybe
something else)).

https://github.com/jmcabo/dunit


Interesting, congrats. A common question that will come up is 
comparing, contrasting, and integrating your work with the 
existing unittest language feature. You may want to address 
these issues directly in the documentation.



Thanks,

Andrei





Re: DUnit - class MyTest { mixin TestMixin; void testMethod1() {} void testMethod2() {}}

2012-02-19 Thread Juan Manuel Cabo
I forgot to mention. All of this works flawlessly with D2.057 and 
D2.058. But with previous versions, you might need to declare the:


   mixin TestMixin;

at the bottom of the class. Otherwise, the test* methods were not 
seen.



And excuse me for all the bad formatting in my post and all the 
excitement!


--jm




Re: DUnit - class MyTest { mixin TestMixin; void testMethod1() {} void testMethod2() {}}

2012-02-19 Thread Juan Manuel Cabo

Unit testing framework ('dunit')

Allows defining unittests simply as methods whose names start
with 'test'.
The only thing necessary to create a unit test class is to
declare the mixin TestMixin inside the class. This will register
the class and its test methods for the test runner.

License:   Boost License 1.0 (http://www.boost.org/LICENSE_1_0.txt)
Authors:   Juan Manuel Cabo
Version:   0.3
Source:    dunit.d
Last update: 2012-02-19

         Copyright Juan Manuel Cabo 2012.
Distributed under the Boost Software License, Version 1.0.
   (See accompanying file LICENSE_1_0.txt or copy at
         http://www.boost.org/LICENSE_1_0.txt)
-

module ExampleTests;
import std.stdio, std.string;
import dunit;


//Minimal example:
class ATestClass {
    mixin TestMixin;

    void testExample() {
        assertEquals("bla", "b"~"la");
    }
}


/**
 * Look!! no test base class needed!!
 */
class AbcTest {
    //This declaration is the only thing needed to mark a class
    //as a unit test class.
    mixin TestMixin;

    //Variable members that start with 'test' are allowed.
    public int testN = 3;
    public int testM = 4;

    //Any method whose name starts with 'test' is run as a unit test.
    //(NOTE: this is bound at compile time, there is no overhead.)
    public void test1() {
        assert(true);
    }

    public void test2() {
        //You can use D's assert() function:
        assert(1 == 2 / 2);
        //Or dunit convenience asserts (just edit dunit.d to add more):
        assertEquals(1, 2/2);
        //The expected and actual values will be shown in the output:
        assertEquals("my string looks dazzling", "my dtring looks sazzling");
    }

    //Test methods with default arguments work, as long as they can
    //be called without arguments, ie: as testDefaultArguments():
    public void testDefaultArguments(int a=4, int b=3) {
    }

    //Even if the method is private to the unit test class, it is
    //still run.
    private void test5(int a=4) {
    }

    //This test was disabled just by adding an underscore to the name:
    public void _testAnother() {
        assert(false, "fails");
    }

    //Optional initialization and de-initialization.
    //  setUp() and tearDown() are called around each individual test.
    //  setUpClass() and tearDownClass() are called once around the
    //  whole unit test.
    public void setUp() {
    }
    public void tearDown() {
    }
    public void setUpClass() {
    }
    public void tearDownClass() {
    }
}


class DerivedTest : AbcTest {
    mixin TestMixin;

    //Base class tests will be run!!
    //You can for instance override setUpClass() and change the
    //target implementation of a family of classes that you are
    //testing.
}


version = DUnit;

version(DUnit) {

    //-All you need to run the tests is to declare
    //
    //      mixin DUnitMain;
    //
    //-You can alternatively call
    //
    //      dunit.runTests_Progress();  for java style results output
    //                                  (SHOWS COLORS IF IN UNIX !!!)
    // or   dunit.runTests_Tree();      for a more verbose output
    //
    //from your main function.

    //mixin DUnitMain;
    void main() { dunit.runTests_Tree(); }

} else {
    int main(string[] args) {
        writeln("production");
        return 0;
    }
}


/*

Run this file with (works in Windows/Linux):

    dmd exampleTests.d dunit.d
    ./exampleTests

The output will be (java style):

..FF..
There were 2 failures:
1) test2(AbcTest)core.exception.AssertError@exampleTests.d(60): Expected: 'my string looks dazzling', but was: 'my dtring looks sazzling'
2) test2(DerivedTest)core.exception.AssertError@exampleTests.d(60): Expected: 'my string looks dazzling', but was: 'my dtring looks sazzling'

FAILURES!!!
Tests run: 8,  Failures: 2,  Errors: 0

If you use the more verbose method dunit.runTests_Tree(), then the
output is:

Unit tests:
  AbcTest
    OK: test1()
    FAILED: test2(): core.exception.AssertError@exampleTests.d(60): Expected: 'my string looks dazzling', but was: 'my dtring looks sazzling'
    OK: testDefaultArguments()
    OK: test5()
  DerivedTest
    OK: test1()
    FAILED: test2(): core.exception.AssertError@exampleTests.d(60): Expected: 'my string looks dazzling', but was: 'my dtring looks sazzling'
    OK: testDefaultArguments()
    OK: test5()

HAVE FUN!

*/



DUnit - class MyTest { mixin TestMixin; void testMethod1() {} void testMethod2() {}}

2012-02-19 Thread Juan Manuel Cabo
People of the D world... I give you DUnit (not to be confused with
an old tango DUnit; this one is for >= D2.057, and doesn't really
require phobos or tango (you just version out the few writeln's of
the runner, and maybe something else)).

  https://github.com/jmcabo/dunit

I've been developing it for the past few weeks, and since I saw a
post of another unit testing framework just a few minutes ago, I
thought I'd rush it to github.

So, here is how you define a test:

import dunit;

class Something {
    mixin TestMixin;

    void testOne() {
        assert(1 == 1, "this works");
    }

    void testTwo() {
        assertEquals(1, 2/2);
        assertEquals("a string", "a"~" string");
    }
}

... and that's all there is to it. Put the mixin TestMixin, and
name your tests starting with 'test'. The results output shows all
of them even if some fail, and... guess what, it tells you the name
of the unit tests that failed!! Isn't this awesome!! (all thanks to
mixins, recursive template declarations, __traits, and a little bit
of CTFE)... isn't D like, so incredibly awesome or what!?!?

There is absolutely no overhead in registering the tests for the
test runner... it's all at compile time!

Your tests are inherited through derived classes, and can be
private in the unit test class (they will still run).



I made two test runners:

* One that shows the results in java style (but WITH COLORS!!
  (fineprint: colors only on the unix console; the windows console
  is colorless for now)).

* Another one, more verbose, that shows the tree of tests as it
  runs them.

It is very easy to make your own.

This is all Boost licenced, so please tweak it away!

FINEPRINT yes shouting fineprint ;-) haha:
THIS IS NOT A unittest{} REPLACEMENT, JUST AN ITCH EVERY OOP
PERSON WANTED TO SCRATCH: named, easy, xUnit style unit tests...
AND NOW YOU'VE GOT THEM.