Many thanks Alexios!!

1. In my TGARCH setup nlminb doesn't converge even at smaller GARCH orders, so I will stick with solnp.

2. Unfortunately I couldn't find the option that limits the number of iterations via solver.control. The details section of the manual mentions n.sim and n.restarts, but these seem to control other parameters. For the nloptr solver the option maxeval is mentioned, but I don't work with that solver, and trying that option with solnp led to no success. Other packages inspired me to try "maxiter", "iter.max" and "n.iter", but they didn't work either.

E.g.:

ugarchfit(spec = spec, data = tempdata, solver = "solnp",
          solver.control = list(maxeval = 20, rseed = 9876))
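One untested guess: as far as I understand, rugarch's solnp solver wraps Rsolnp::solnp, so the control names documented there (outer.iter and inner.iter) might be what solver.control expects:

ugarchfit(spec = spec, data = tempdata, solver = "solnp",
          solver.control = list(outer.iter = 20,   # Rsolnp: max major iterations
                                inner.iter = 100,  # Rsolnp: max minor iterations
                                trace = 0))        # suppress solver output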

3. You`re surely right. The whole study should actually investigate this issue empirically.
E.g. in one case there was a surprising result in a sample of size 1200:
An ARMA(0,0) eGARCH(5,5) model with a skewed normal for the innovations yielded very good results: no sign biases, good goodness-of-fit, no autocorrelation in the standardized or squared standardized residuals up to order p+q+10, nice AIC and BIC, and highly significant coefficients (although 6 out of 18 were not significant according to the robust standard errors). I will compare this model to a more parsimonious one and also investigate parameter uncertainty.

Best, Johannes



On 10.05.2014 11:34, alexios ghalanos wrote:
Johannes,

I suggest the following:

1. Don't use hybrid; use solnp or nlminb instead.

2. You can control a number of solver convergence criteria (e.g. number
of iterations) using the solver.control argument.

3. Before running the code, do consider a little more how reasonable it
is to be modelling a TGARCH(7,8) model. Investigate the model first
(don't just return the AIC or BIC). Are any of the higher order
ARCH/GARCH parameters different from zero or even significant? I have
not seen a single study which shows that such very high order GARCH
models have better performance than more parsimonious alternatives.
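For example, assuming a converged fit object, the robust coefficient
matrix can be inspected directly (a quick sketch, with 'tempdata'
standing in for your series):

fit <- ugarchfit(spec = spec, data = tempdata, solver = "solnp")
round(fit@fit$robust.matcoef, 4)  # estimates, robust std. errors, t-values, p-values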

4. At the best of times it takes a considerable amount of data to
estimate the GARCH persistence. Try running a simulation exercise using
for example the ugarchdistribution function to obtain some insight into
higher order GARCH models.
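For example (a minimal sketch; the parameter values are purely
illustrative, and a spec passed to ugarchdistribution must carry
fixed.pars):

spec11 <- ugarchspec(
    variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
    mean.model = list(armaOrder = c(0, 0)),
    fixed.pars = list(mu = 0, omega = 0.05, alpha1 = 0.05, beta1 = 0.9))
gd <- ugarchdistribution(spec11, n.sim = 1200, m.sim = 100, rseed = 10)
show(gd)  # summary of the simulated parameter distributions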

5. Finally, as mentioned numerous times on this forum, the fGARCH model
is a highly parameterized omnibus model. Imposing stationarity during
the optimization, particularly for non-symmetric distributions such as
the sged, is a costly exercise. Consider using the GJR model instead,
with a distribution which is a little faster to evaluate, such as the
JSU. Alternatively, consider using the normal distribution to estimate
the GARCH parameters for the purpose of model comparison.
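For example, a minimal sketch of such a specification (the (1,1) order
is just a starting point):

spec.gjr <- ugarchspec(
    variance.model = list(model = "gjrGARCH", garchOrder = c(1, 1)),
    mean.model = list(armaOrder = c(0, 0)),
    distribution.model = "jsu")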

-Alexios

On 10/05/2014 08:23, Johannes Moser wrote:
I guess that the problem is due to the processing in C as part of the
ugarchfit routine.

Is there any way to time out a ugarchfit command or to constrain the
number of iterations?

At one point the loop seemed to be stuck completely.
I waited for several hours for a single ugarchfit step which just didn't
complete. Then I manually stopped the process.

Estimating the respective model on its own also seems to get "stuck"
(the CPU keeps working, but the "hybrid" algorithm apparently finds no
solution and just keeps running).

As I want to set up a GARCH model pre-selection battery, I hope there
is a way to handle such problems.

Best, Johannes





On 09.05.2014 13:58, Johannes Moser wrote:
Dear all,

I've set up a double loop which loops through different GARCH and ARMA
orders in a rugarch estimation (over several models and error
distributions) and each time writes the AIC and other information into
a data frame.
The resulting data frame is to be used for the pre-selection of a
model, which will then be examined manually.

A small part of the model estimation steps using "ugarchfit" take a
very long time, so I implemented a timeout using "evalWithTimeout"
which stops the current estimation step and proceeds with the next
model in the loop.

The timeout function is wrapped in a "tryCatch" call, which ensures
that the loop keeps running after e.g. convergence problems.

A small toy model works fine:


#######################################################################
require('R.utils')
abc <- matrix(NA, 10, 3)  # columns: result, completion flag, elapsed time

foo <- function() {
  print("Tic")
  for (kk in 1:50) {
    print(kk)
    Sys.sleep(0.1)
  }
  print("Tac")
}

for (i in 1:10) {
  ptm <- proc.time()
  tryCatch({
    abc[i, 1] <- evalWithTimeout(foo(), timeout = (4 + i * 0.2),
                                 onTimeout = "silent")
    abc[i, 2] <- 1
  }, error = function(x) x)
  tt <- proc.time() - ptm
  abc[i, 3] <- tt[3]  # elapsed seconds
}

abc
#####################################################################


However, in the rugarch setup "evalWithTimeout" doesn't seem to stop
the "ugarchfit" estimation reliably. E.g. in one instance the recorded
time for a step was 1388.03 seconds even though the limit was set to
300 seconds. The next example illustrates my setup in a simplified
version (unfortunately my results depend on the data I used, so you
will not be able to reproduce them):


#####################################################################
require('rugarch')
quiet1 <- read.table("dax_quiet1.txt", header = TRUE)
tempdata <- quiet1$logreturns

# GARCH orders to try
g_order <- matrix(NA, 5, 2)
g_order[1, ] <- c(1, 1)
g_order[2, ] <- c(1, 8)
g_order[3, ] <- c(9, 6)
g_order[4, ] <- c(9, 8)
g_order[5, ] <- c(3, 10)

overview <- data.frame(matrix(NA, 5, 2))  # columns: AIC, elapsed time

for (i in 1:5) {
  ptm <- proc.time()

  spec <- ugarchspec(
    variance.model = list(model = "fGARCH", garchOrder = g_order[i, ],
                          submodel = "TGARCH", external.regressors = NULL,
                          variance.targeting = FALSE),
    mean.model = list(armaOrder = c(0, 0), external.regressors = NULL),
    distribution.model = "sged")

  tryCatch({
    tempgarch <- evalWithTimeout(
      ugarchfit(spec = spec, data = tempdata, solver = "hybrid"),
      timeout = 20, onTimeout = "silent")
    overview[i, 1] <- infocriteria(tempgarch)[1]  # AIC
  }, error = function(x) x)

  tt <- proc.time() - ptm
  overview[i, 2] <- tt[3]  # elapsed seconds
}

overview

# With the timeout set to 20, this setup leads to:
# 2.87 sec.
# 6.95 sec.
# 125 sec.     ... here, the timeout interrupted the process
# 51.73 sec.
# 27.11 sec.
# for the 5 different estimation steps.

# With the timeout set to 300:
# 2.81 sec.
# 6.85 sec.
# 743.58 sec.
# 41.70 sec.
# 26.85 sec.
# no process was interrupted
#######################################################################


As can be seen even from this simplified example, when the timeout was
set to 20 there was still a step that took 125 seconds (more than 5
times the limit!).
I would be very thankful for any ideas or comments!
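One idea I am considering (untested, and Unix-only since it relies on
forking; the assumption is that the stall happens inside compiled code,
which setTimeLimit / evalWithTimeout apparently cannot interrupt): run
each fit in a forked child process via the parallel package, which can
be killed from outside:

library(parallel)
# fork the fit; the parent process stays responsive
job <- mcparallel(ugarchfit(spec = spec, data = tempdata, solver = "hybrid"))
res <- mccollect(job, wait = FALSE, timeout = 300)  # NULL if still running
if (is.null(res)) {
  tools::pskill(job$pid)  # kill the stuck estimation and move on
} else {
  tempgarch <- res[[1]]
}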

Best, Johannes
--

_______________________________________________
R-SIG-Finance@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-sig-finance
-- Subscriber-posting only. If you want to post, subscribe first.
-- Also note that this is not the r-help list where general R questions
should go.