On 2012-08-30 08:29, Behnaz Bostanipour wrote:
> Hello Tom,
>
> Thanks for your reply. I put some comments on your email below:
>
>
> On Aug 30, 2012, at 16:13, Tom Henderson wrote:
>
>>
>> On 08/29/2012 06:50 AM, behnaz.bostanip...@unil.ch wrote:
>>>> I'm not able to reproduce that error, so I would like to
>>>> see some  examples of the diffed output.  Would you mind
>>>> collecting all of the  *.test output files for the tests
>>>> that failed and send them to me in a  tarball, such as:
>>>>
>>>> cd tcl/tests
>>>> find . -name "*.test" -type f | xargs tar cvfj
>>>> ns-2-diffs.tbz2
>>>>
>>>> and send me the ns-2-diffs.tbz2 file?
>>>>
>>>> Thanks,
>>>> Tom
>>>
>>>
>>> Here you are. I executed your command exactly, but I am not sure
>>> whether it contains all the outputs we want (i.e., for the tests
>>> "test-all-tcpLinux", "test-all-tcpHighspeed", "test-all-red",
>>> and "test-all-cbq").
>>>
>>
>> There seem to be a few things going on here.
>>
>> For the test-all-cbq and test-all-red files, the data seems correct
>> but the formatting is slightly off: there are commas instead of
>> periods in some of the outputs, e.g.
>>
>> test-output-red/flows-combined.test
>>> ==> flows_combined.test <==
>>> TitleText: test_flows_combined
>>> Device: Postscript
>>>
>>> "flow  1
>>> 84,8786 74,1902
>>> 48,3181 46,8733
>>
>> vs. test-output-red/flows-combined (good output)
>>> TitleText: test_flows_combined
>>> Device: Postscript
>>>
>>> "flow  1
>>> 84.8786 74.1902
>>> 48.3181 46.8733
>>
>>
>> This may have something to do with the version of xgraph on the system.
>
> Yes, if you look at the validation output, whenever a test fails, it
> says:
>
> "couldn't execute "xgraph": no such file or directory"
>
>
> So maybe I should do something about my xgraph (e.g., reinstall it
> or …). Do you have any suggestions?

xgraph is an optional component, so I think you could safely ignore that warning.
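
If you want to confirm that the message only means xgraph is missing
from the PATH, a quick check along these lines should do (the echoed
fallback text is just illustrative):

# if only the fallback message prints, xgraph is simply not installed,
# and only the plot-generation steps of the tests are affected
command -v xgraph || echo "xgraph not found"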


>
>
>>
>> For the tcpLinux and tcpHighspeed tests, there are lines missing from
>> the output (either truncated or interleaved in the test output) when
>> compared to the reference output. I don't know whether this is again
>> a post-processing error or whether the simulation is not producing
>> the same data.
>>
>> The test-output-xcp data is different:
>>
>> 0.12186 4200000
>> 0.12186 4200000
>> 0.12186 4200000
>> 0.12228 8
>> 0.12228 8
>>
>> vs
>>
>> 0.12186 4200000
>> 0.12186 4200000
>> 0.12186 4200000
>> 0.12228 8190000
>> 0.12228 8190000
>>
>> In summary, I would be suspicious of the tcpLinux, tcpHighspeed, and
>> xcp models on this platform. Debugging this probably requires stepping
>> through the code at the points where the outputs diverge, alongside a
>> platform such as Linux that produces the reference output.
>>
>> I don't have ns-2 running on Mountain Lion yet, but I'll check
>> whether similar issues arise there.
>>
>> - Tom
>>
>>
>
> As I explained in my last email, I would like to run some simulations
> of the IEEE 802.11 MAC layer and also use some routing protocols. I
> don't think the validation test failures for TCP will be a problem;
> my only concern is that these tests fail:
>
> ./test-all-red ./test-all-cbq
>
> Do you think it will cause a problem for my simulations if these
> models do not work properly on my machine?
>

Based on what you sent, the models appear to be working properly and 
the difference is due to the post-processing for the regression tests.
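
If you want to double-check the red and cbq results yourself, one quick
spot check (using the file names from your tarball) is to normalize the
decimal commas and then diff against the reference output; an empty
diff means only the number formatting differs, not the simulation data:

# convert decimal commas to periods, then compare with the good output
sed 's/,/./g' test-output-red/flows-combined.test | \
    diff - test-output-red/flows-combined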

- Tom
