Re: [R] optim & .C / Crashing on run

2012-11-06 Thread Paul Browne
Ah, I hadn't looked closely enough at that line then!

The parameter with value ~434 is most likely the cause of the problem,
since in actual usage that parameter never takes values much outside the
range ~1e-5 to ~1.0.

This too-large computed step must be what then causes the external code
library to throw a memory error & crash. I'll need to look into how the
step size can either be reduced or tuned on a parameter-by-parameter
basis, perhaps with the dfoptim package.


Re: [R] optim & .C / Crashing on run

2012-11-05 Thread William Dunlap
You said the R traceback was not very informative, but it did include the line

  4: function (par) fn(par, ...)(c(4334.99, 53, 4.57, 0.277, 433.50033, 
2.158, 0.288))

and if I try to run your fsbl_chi2 with those parameters, outside of any 
optimizer, R crashes:

fsbl_chi2(c(4334.99, 53, 4.57, 0.277, 433.50033, 2.158, 0.288))
   
*** caught segfault ***
   address 0x40, cause 'memory not mapped'
   
   Traceback:
1: .C("a_fsbl_wrapper", as.integer(length(t)), as.double(model_par[6]), 
as.double(model_par[7]), as.double(model_par[1]), as.double(model_par[2]), 
as.double(t), as.double(model_par[3]), as.double(model_par[4]), 
as.double(model_par[5]), as.double(prec), as.double(vector("double", 
length(t
2: fsbl_mag(subset(data$hjd, data$site_n == i), model_par)
3: fsbl_chi2(c(4334.99, 53, 4.57, 0.277, 433.50033, 2.158, 0.288))
   
   Possible actions:
   1: abort (with core dump, if enabled)
   2: normal R exit
   3: exit R without saving workspace
   4: exit R saving workspace

valgrind will tell you the line number in the C++ code where the function
first misused memory.

If you use 'R --debugger=valgrind' I think it helps to also set gctorture(TRUE)
in R so that valgrind can do more checking of memory misuse.

Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com



Re: [R] optim & .C / Crashing on run

2012-11-05 Thread Ravi Varadhan
You might also want to try the Nelder-Mead algorithm, nmk(), in the dfoptim 
package.  It is a better algorithm than the Nelder-Mead in optim.  It is all R 
code, so you might be able to modify it to fit your needs.

Ravi

Ravi Varadhan, Ph.D.
Assistant Professor
The Center on Aging and Health
Division of Geriatric Medicine & Gerontology
Johns Hopkins University
rvarad...@jhmi.edu
410-502-2619



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] optim & .C / Crashing on run

2012-11-04 Thread Patrick Burns

That is a symptom of the C/C++ code doing
something like using memory beyond the proper
range.  It's entirely possible to have crashes
in some contexts but not others.

If you can run the C code under valgrind,
that would be the easiest way to find the
problem.

Pat




--
Patrick Burns
pbu...@pburns.seanet.com
twitter: @portfolioprobe
http://www.portfolioprobe.com/blog
http://www.burns-stat.com
(home of 'Some hints for the R beginner'
and 'The R Inferno')



Re: [R] optim & .C / Crashing on run

2012-11-04 Thread Paul Browne
It looks like my attached files didn't go through, so I'll put them in a
public Dropbox folder instead;
optim_rhelp.tar.gz: http://dl.dropbox.com/u/1113102/optim_rhelp.tar.gz

Thanks, I'll run a compiled binary of the C++ code through Valgrind & see
what it reports, then perhaps I'll try an Rscript execution of the R code
calling the C++ in optim (not sure if Valgrind can process that, though!).

It does seem to be a memory error of some kind, since occasionally the OS
pops up a crash report referencing a segmentation fault after optim crashes
the R session. Though it is strange that the code has never crashed from a
straight .C call in R, or when run from a compiled C++ binary.

- Paul






Re: [R] optim & .C / Crashing on run

2012-11-04 Thread Patrick Burns

When invoking R, you can add

 -d valgrind

to run it under valgrind.


--
Patrick Burns
pbu...@pburns.seanet.com
twitter: @portfolioprobe
http://www.portfolioprobe.com/blog
http://www.burns-stat.com
(home of 'Some hints for the R beginner'
and 'The R Inferno')



Re: [R] optim & .C / Crashing on run

2012-11-04 Thread Paul Browne
Hi,

Thanks for your help. Invoking valgrind under R for the test script I
attached produces the following crash report;


 Rscript optim_rhelp.R -d valgrind
   Nelder-Mead direct search function minimizer
 function value for initial parameters = 1267.562555
   Scaled convergence tolerance is 1.2e-05
 Stepsize computed as 433.499000
  *** caught segfault ***
 address 0x40, cause 'memory not mapped'
 Traceback:
 1: .C("a_fsbl_wrapper", as.integer(length(t)), as.double(model_par[6]),
   as.double(model_par[7]), as.double(model_par[1]),
 as.double(model_par[2]), as.double(t), as.double(model_par[3]),
 as.double(model_par[4]), as.double(model_par[5]), as.double(prec),
 as.double(vector("double", length(t
  2: fsbl_mag(subset(data$hjd, data$site_n == i), model_par)
  3: fn(par, ...)
  4: function (par) fn(par, ...)(c(4334.99, 53, 4.57, 0.277, 433.50033,
 2.158, 0.288))
  5: optim(par = model_par, fn = fsbl_chi2, method = c("Nelder-Mead"),
 control = list(trace = 6, maxit = 2000))
 aborting ...
 Segmentation fault (core dumped)


So definitely a memory problem then, but the traceback doesn't seem very
informative as to its cause.

Running a valgrind memcheck & leak check just on a test of the C++ code,
without it being called from R, reports no issues;


 ==6670== Memcheck, a memory error detector
 ==6670== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al.
 ==6670== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info
 ==6670== Command: ./fsbl_y_test
 ==6670== Parent PID: 2614
 ==6670==
 ==6670==
 ==6670== HEAP SUMMARY:
 ==6670== in use at exit: 0 bytes in 0 blocks
 ==6670==   total heap usage: 6,022,561 allocs, 6,022,561 frees,
 408,670,648 bytes allocated
 ==6670==
 ==6670== All heap blocks were freed -- no leaks are possible
 ==6670==
 ==6670== For counts of detected and suppressed errors, rerun with: -v
 ==6670== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 2 from 2)


Perhaps it has something to do with how I've written the two wrapper
functions in the C/C++ code that pass input & results back & forth between
R & the rest of the external code?

These are the two functions;

//*
//a_fsbl_wrapper - R wrapper function for FSBL magnification
//*
extern "C"
{
void a_fsbl_wrapper(int *k, double *a, double *q, double *t0, double *tE,
                    double *t,
                    double *alpha, double *u0, double *Rs, double *prec,
                    double *result)
{
    int i;
    for(i=0;i<*k;i++){
        result[i] = a_fsbl(*a,*q,*t0,*tE,t[i],*alpha,*u0,*Rs,*prec);
    }
}
}
//*
//a_fsbl - FSBL magnification, model parameters, no parallax
//*
double a_fsbl(double a, double q, double t0, double tE, double t,
              double alpha, double u0, double Rs, double prec)
{
    double y1,y2;
    y1 = (-1)*u0*sin(alpha) + ((t-t0)/tE)*cos(alpha);
    y2 = u0*cos(alpha) + ((t-t0)/tE)*sin(alpha);
    return(BinaryLightCurve(a,q,y2,0.0,y1,Rs,prec));
}


a_fsbl_wrapper takes the input model parameters & an input vector of times
t, then returns an output vector, result. The elements of result are
calculated in a_fsbl, via a call to the rest of the external C++ code for
each element.

As I mentioned, this works fine from a straight .C call in R; it only
crashes when invoked by optim.
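One thing worth guarding against in a wrapper like this is the loop bound itself: .C passes every argument by pointer, so a corrupted or non-positive *k would silently walk result[] out of bounds. A minimal sketch of that guard follows; the stub function and the name a_fsbl_wrapper_guarded are my inventions for illustration, standing in for a_fsbl and the real wrapper (which call into the external library):

```cpp
#include <cassert>
#include <cmath>

// Stub standing in for the real a_fsbl(), which calls BinaryLightCurve()
// in the external library; here it just returns a smooth function of t.
static double a_fsbl_stub(double t0, double tE, double t) {
    return 1.0 + std::exp(-std::fabs((t - t0) / tE));
}

// Hypothetical bounds-guarded wrapper in the style of a_fsbl_wrapper:
// validate *k before using it as the loop bound that indexes result[].
extern "C" void a_fsbl_wrapper_guarded(int *k, double *t0, double *tE,
                                       double *t, double *result) {
    if (k == nullptr || *k <= 0) return;  // refuse a missing/bad length
    for (int i = 0; i < *k; i++) {
        result[i] = a_fsbl_stub(*t0, *tE, t[i]);
    }
}
```

With a sane *k this fills result[] exactly as the unguarded loop would; with *k zero or negative it is a no-op instead of undefined behaviour.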

- Paul


Re: [R] optim & .C / Crashing on run

2012-11-04 Thread Paul Browne
Running this valgrind command on the test optim_rhelp.R script

R -d "valgrind --tool=memcheck --leak-check=full
 --log-file=optim_rhelp.valgrind.log" --vanilla < optim_rhelp.R


yields this report:
http://dl.dropbox.com/u/1113102/optim_rhelp.valgrind.log

Ignoring everything in there to do with R & other libraries, it seems like
the problem in my external code is occurring here;

==8176== Invalid read of size 8
 ==8176== at 0xCD8F0D3: _curve::~_curve() (VBBinaryLensing.cpp:257)
 ==8176== by 0xCD8F806: _sols::~_sols() (VBBinaryLensing.cpp:494)
 ==8176== by 0xCD95F20: BinaryMag(double, double, double, double, double,
 double) (VBBinaryLensing.cpp:816)
 ==8176== by 0xCD9659C: BinaryLightCurve(double, double, double, double,
 double, double, double) (VBBinaryLensing.cpp:636)
 ==8176== by 0xCD8D47C: a_fsbl_wrapper (fsbl.c:24)
 ==8176== by 0x4EEDCF2: ??? (in /usr/lib/R/lib/libR.so)
 ==8176== by 0x4F25B1C: Rf_eval (in /usr/lib/R/lib/libR.so)
 ==8176== by 0x4F2B092: ??? (in /usr/lib/R/lib/libR.so)
 ==8176== by 0x5009A01: ??? (in /usr/lib/R/lib/libR.so)
 ==8176== by 0x4F258FE: Rf_eval (in /usr/lib/R/lib/libR.so)
 ==8176== by 0x4F276AF: ??? (in /usr/lib/R/lib/libR.so)
 ==8176== by 0x4F258FE: Rf_eval (in /usr/lib/R/lib/libR.so)
 ==8176== Address 0x40 is not stack'd, malloc'd or (recently) free'd
 ==8176==
 ==8176==
 ==8176== Process terminating with default action of signal 11 (SIGSEGV)
 ==8176== General Protection Fault
 ==8176== at 0x571BC60: __snprintf_chk (snprintf_chk.c:31)
 ==8176== by 0x4FCEA81: Rf_EncodeReal (in /usr/lib/R/lib/libR.so)
 ==8176== by 0x4FCFEC7: Rf_EncodeElement (in /usr/lib/R/lib/libR.so)
 ==8176== by 0x4ED895D: ??? (in /usr/lib/R/lib/libR.so)
 ==8176== by 0x4EDAB4A: ??? (in /usr/lib/R/lib/libR.so)
 ==8176== by 0x4ED976D: ??? (in /usr/lib/R/lib/libR.so)
 ==8176== by 0x4EDAB4A: ??? (in /usr/lib/R/lib/libR.so)
 ==8176== by 0x4ED945B: ??? (in /usr/lib/R/lib/libR.so)
 ==8176== by 0x4EDAB4A: ??? (in /usr/lib/R/lib/libR.so)
 ==8176== by 0x4ED945B: ??? (in /usr/lib/R/lib/libR.so)
 ==8176== by 0x4EDA48E: ??? (in /usr/lib/R/lib/libR.so)
 ==8176== by 0x4F0ED54: R_GetTraceback (in /usr/lib/R/lib/libR.so)


in the function ~_curve(void) or ~_sols(void) of the external C++ library.

Unfortunately I didn't write this code library, nor do I have much
experience with C++, so this problem may well be unsolvable for me.

If anyone could see anything wrong in the C++ code fragments comprising the
problem functions, I'd be extremely grateful!

_curve::~_curve(void){
    _point *scan1,*scan2;
    scan1=first;
    for(int i=0;i<length;i++){
        scan2=scan1->next;
        delete scan1;
        scan1=scan2;
    }
}


_sols::~_sols(void){
    _curve *scan1,*scan2;
    scan1=first;
    while(scan1){
        scan2=scan1->next;
        delete scan1;
        scan1=scan2;
    }
}
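Comparing the two destructors: ~_sols walks until the next pointer is NULL, while ~_curve trusts a separately stored length counter. If length ever disagrees with the actual list (a guess on my part, not verified against the library), the for loop dereferences a pointer past the real end, which would fit the invalid read at the near-NULL address 0x40. A sketch of the NULL-terminated deletion style, on a stand-in node type of my own rather than the library's _point:

```cpp
#include <cassert>
#include <cstddef>

// Stand-in node playing the role of the library's _point/_curve nodes;
// only the next pointer matters for the deletion pattern.
struct node {
    node *next;
    explicit node(node *n) : next(n) {}
};

// Free a singly linked list by following next pointers until NULL
// (the ~_sols style) instead of trusting a stored length counter
// (the ~_curve style); returns how many nodes were freed.
static int free_list(node *first) {
    int freed = 0;
    node *scan1 = first;
    while (scan1 != nullptr) {
        node *scan2 = scan1->next;
        delete scan1;
        scan1 = scan2;
        ++freed;
    }
    return freed;
}
```

This frees exactly the nodes that exist, however many there are, and is a no-op on an empty list.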



- Paul


Re: [R] optim & .C / Crashing on run

2012-11-04 Thread paulfjbrowne
Playing around with alternate optimizers, I've found that both nlminb & the
nls.lm Levenberg-Marquardt optimizer in minpack.lm work with my objective
function without crashing, and minimize it as I'd expect them to.

Using optim for amoeba sampling would be nice, but I think I'll just have to
chalk up its crashing with my external code library as a problem I won't be
able to solve for now. I'll use nlminb or nls.lm for optimization & a
hand-coded MCMC algorithm for characterization of local minima.






[R] optim & .C / Crashing on run

2012-11-03 Thread Paul Browne
Hello,

I am attempting to use optim under the default Nelder-Mead algorithm for
model fitting, minimizing a Chi^2 statistic whose value is determined by a
.C call to an external shared library compiled from C & C++ code.

My problem has been that the R session will immediately crash upon starting
the simplex run, without it taking a single step.

This is strange, as the .C call itself works, is error-free (as far as I
can tell!) & does not return NaN or Inf under any initial starting
parameters that I have tested it with in R. It only ever crashes the R
session when the Chi^2 function to be minimized is called from optim, not
under any other circumstances.

In the interests of reproducibility, I attach R code that reads attached
data files & attempts an N-M optim run. The required shared library
containing the external code (compiled in Ubuntu 12.04 x64 with g++ 4.6.3)
is also attached. Calculating an initial Chi^2 value for a starting set of
model parameters works, then the R session crashes when the optim call is
made.

Is there something I'm perhaps doing wrong in the specification of the
optim run? Is it inadvisable to use external code with optim? There doesn't
seem to be a problem with the external code itself, so I'm very stumped as
to the source of the crashes.