I have found a way to approach the question that I posed, namely getting
deriv and deriv3 to deal with an indexed term in the formula.  Unless there
is a more direct approach (and my minimal investigation of the code involved
suggested that it would not be simple to implement, for me at least), this
should be tractable as long as the number of indexed elements is not too
large.  In short, I reparameterize the problem so that each level of the
index corresponds to a separate column in the data.frame.  Then I can apply
both deriv and deriv3; the only post-editing of the generated functions is
to handle the derivative at 0, because a log appears.
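
To make the reparameterization concrete, here is a minimal sketch (it just
restates what the model.matrix() call below does, using ifelse() by hand
instead).  The indexed model

    Lum ~ Blev + beta[Gun] * GL^gamm

becomes

    Lum ~ Blev + Br * Rgun^gamm + Bg * Ggun^gamm + Bb * Bgun^gamm

where each new column equals GL for its own gun and 0 elsewhere:

    ## equivalent, by hand, to the model.matrix() construction below
    ddsub$Rgun <- with(ddsub, ifelse(Gun == "rouge", GL, 0))
    ddsub$Ggun <- with(ddsub, ifelse(Gun == "vert",  GL, 0))
    ddsub$Bgun <- with(ddsub, ifelse(Gun == "bleu",  GL, 0))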

For anyone interested, here is the example:

### keep the three Gun:GL interaction columns of the model matrix, one per gun
gg <- model.matrix(~-1 + Gun/GL, ddsub)[1:51, c(4:6)]
ddsub$Rgun <- gg[, 1]
ddsub$Ggun <- gg[, 2]
ddsub$Bgun <- gg[, 3]
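
### f (used in the deriv3 call below) is not reproduced here; judging from
### the .value line of the generated function, it should be the
### reparameterized model expression, i.e. something like
f <- ~ Blev + Br * Rgun^gamm + Bg * Ggun^gamm + Bb * Bgun^gamm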

dd.der3 <- deriv3(f, c("Blev", "Br", "Bg", "Bb", "gamm"),
        function(Blev, Br, Bg, Bb, gamm, Rgun, Ggun, Bgun) {})

### the generated function, hand-edited to deal with the log(0) case
### (rows where a gun value is 0)

dd.der3 <- function (Blev, Br, Bg, Bb, gamm, Rgun, Ggun, Bgun)
{
    .expr1 <- Rgun^gamm
    .expr4 <- Ggun^gamm
    .expr7 <- Bgun^gamm
    ## hand edits: x^gamm * log(x) -> 0 as x -> 0 (for gamm > 0), so replace
    ## log(0) by 0 to keep the gradient and hessian finite at GL = 0
    .expr10 <- ifelse(Rgun == 0, 0, log(Rgun))
    .expr11 <- .expr1 * .expr10
    .expr12 <- ifelse(Ggun == 0, 0, log(Ggun))
    .expr13 <- .expr4 * .expr12
    .expr14 <- ifelse(Bgun == 0, 0, log(Bgun))
    .expr15 <- .expr7 * .expr14
    .value <- Blev + Br * .expr1 + Bg * .expr4 + Bb * .expr7
    .grad <- array(0, c(length(.value), 5), list(NULL, c("Blev",
        "Br", "Bg", "Bb", "gamm")))
    .hessian <- array(0, c(length(.value), 5, 5), list(NULL,
        c("Blev", "Br", "Bg", "Bb", "gamm"), c("Blev", "Br",
            "Bg", "Bb", "gamm")))
    .grad[, "Blev"] <- 1
    .grad[, "Br"] <- .expr1
    .hessian[, "Br", "Br"] <- 0
    .hessian[, "Br", "Bg"] <- .hessian[, "Bg", "Br"] <- 0
    .hessian[, "Br", "Bb"] <- .hessian[, "Bb", "Br"] <- 0
    .hessian[, "Br", "gamm"] <- .hessian[, "gamm", "Br"] <- .expr11
    .grad[, "Bg"] <- .expr4
    .hessian[, "Bg", "Bg"] <- 0
    .hessian[, "Bg", "Bb"] <- .hessian[, "Bb", "Bg"] <- 0
    .hessian[, "Bg", "gamm"] <- .hessian[, "gamm", "Bg"] <- .expr13
    .grad[, "Bb"] <- .expr7
    .hessian[, "Bb", "Bb"] <- 0
    .hessian[, "Bb", "gamm"] <- .hessian[, "gamm", "Bb"] <- .expr15
    .grad[, "gamm"] <- Br * .expr11 + Bg * .expr13 + Bb * .expr15
    .hessian[, "gamm", "gamm"] <- Br * (.expr11 * .expr10) +
        Bg * (.expr13 * .expr12) + Bb * (.expr15 * .expr14)
    attr(.value, "gradient") <- .grad
    attr(.value, "hessian") <- .hessian
    .value
}
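
### A quick sanity check on the hand-edited "gamm" derivative (just a
### sketch, using values near the estimates reported below rather than the
### actual starting values): compare it with a forward finite difference.
v0  <- with(ddsub, dd.der3(-0.066, 16.74, 50.52, 8.12, 2.512,
                           Rgun, Ggun, Bgun))
eps <- 1e-6
v1  <- with(ddsub, dd.der3(-0.066, 16.74, 50.52, 8.12, 2.512 + eps,
                           Rgun, Ggun, Bgun))
max(abs(attr(v0, "gradient")[, "gamm"] - (as.vector(v1) - as.vector(v0))/eps))
### should be very small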

dd.nls.d4 <- nls(Lum ~ dd.der3(Blev, Br, Bg, Bb, gamm, Rgun, Ggun, Bgun),
                 data = ddsub,
                 start = list(Blev = B[2], Br = B[3], Bg = B[4], Bb = B[5],
                              gamm = B[1])
                 )

summary(dd.nls.d4)

Formula: Lum ~ dd.der3(Blev, Br, Bg, Bb, gamm, Rgun, Ggun, Bgun)

Parameters:
     Estimate Std. Error t value Pr(>|t|)
Blev -0.06552    0.04766  -1.375    0.176
Br   16.74186    0.13245 126.402   <2e-16 ***
Bg   50.51605    0.14802 341.281   <2e-16 ***
Bb    8.12063    0.13452  60.368   <2e-16 ***
gamm  2.51187    0.01561 160.916   <2e-16 ***

confint(dd.nls.d4)
Waiting for profiling to be done...
           2.5%       97.5%
Blev -0.1612492  0.02988387
Br   16.4752317 17.00843251
Bg   50.2193167 50.81325840
Bb    7.8496610  8.39135292
gamm  2.4811018  2.54296647

rms.curv(dd.nls.d4)
Parameter effects: c^theta x sqrt(F) = 0.0123
        Intrinsic: c^iota  x sqrt(F) = 0.0068
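
rms.curv comes from MASS and, as I understand it, needs the second-derivative
information that deriv3 supplies via the "hessian" attribute (deriv alone
would not be enough).  In case MASS is not already attached, the call is just:

    library(MASS)        # provides rms.curv()
    rms.curv(dd.nls.d4)  # relative curvature measures of nonlinearity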

Here is the data.frame, ddsub, after the new columns have been added:

Lum     GL      Gun     Rgun    Ggun    Bgun
0.15    0       rouge   0       0       0
0.07    0.0588  rouge   0.0588  0       0
0.1     0.1216  rouge   0.1216  0       0
0.19    0.1843  rouge   0.1843  0       0
0.4     0.2471  rouge   0.2471  0       0
0.73    0.3098  rouge   0.3098  0       0
1.2     0.3725  rouge   0.3725  0       0
1.85    0.4353  rouge   0.4353  0       0
2.91    0.498   rouge   0.498   0       0
3.74    0.5608  rouge   0.5608  0       0
5.08    0.6235  rouge   0.6235  0       0
6.43    0.6863  rouge   0.6863  0       0
8.06    0.749   rouge   0.749   0       0
9.84    0.8118  rouge   0.8118  0       0
12      0.8745  rouge   0.8745  0       0
14.2    0.9373  rouge   0.9373  0       0
16.6    1       rouge   1       0       0
0.1     0       vert    0       0       0
0.1     0.0588  vert    0       0.0588  0
0.17    0.1216  vert    0       0.1216  0
0.46    0.1843  vert    0       0.1843  0
1.08    0.2471  vert    0       0.2471  0
2.22    0.3098  vert    0       0.3098  0
3.74    0.3725  vert    0       0.3725  0
5.79    0.4353  vert    0       0.4353  0
8.36    0.498   vert    0       0.498   0
11.6    0.5608  vert    0       0.5608  0
15.4    0.6235  vert    0       0.6235  0
19.9    0.6863  vert    0       0.6863  0
24.6    0.749   vert    0       0.749   0
30.4    0.8118  vert    0       0.8118  0
36.1    0.8745  vert    0       0.8745  0
43      0.9373  vert    0       0.9373  0
49.9    1       vert    0       1       0
0.06    0       bleu    0       0       0
0.06    0.0588  bleu    0       0       0.0588
0.08    0.1216  bleu    0       0       0.1216
0.13    0.1843  bleu    0       0       0.1843
0.25    0.2471  bleu    0       0       0.2471
0.43    0.3098  bleu    0       0       0.3098
0.66    0.3725  bleu    0       0       0.3725
1.02    0.4353  bleu    0       0       0.4353
1.46    0.498   bleu    0       0       0.498
1.93    0.5608  bleu    0       0       0.5608
2.49    0.6235  bleu    0       0       0.6235
3.2     0.6863  bleu    0       0       0.6863
3.96    0.749   bleu    0       0       0.749
4.9     0.8118  bleu    0       0       0.8118
5.68    0.8745  bleu    0       0       0.8745
6.71    0.9373  bleu    0       0       0.9373
7.93    1       bleu    0       0       1
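
For anyone who wants to reproduce this, the listing above can be read back
in with something like the following (a sketch; the file name is made up and
assumes the listing has been saved as whitespace-separated text with the
header line):

    ddsub <- read.table("ddsub.txt", header = TRUE,
                        colClasses = c("numeric", "numeric", "factor",
                                       "numeric", "numeric", "numeric"))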

I still do not know why confint does not work for me on this problem
without the derivatives, while it does for Gabor.  I'm on a Mac Pro and,
judging by the "# XP" comment below, he is not.  If anyone else on a Mac
sees this problem, it would be useful to know.

Thanks.

ken


Gabor Grothendieck wrote:
> I mixed up examples.  Here it is again.  As with the last one
> confint(dd.plin) gives an error (which I assume is a problem with
> confint that needs to be fixed) but other than that it works without
> issuing errors and I assume you don't need the confint(dd.plin)
> in any case since dd.plin is just being used to get starting values.
>
>> gg <- model.matrix(~ Gun/GL - Gun, dd)
>> dd.plin <- nls(Lum ~ gg^gamm, dd, start = list(gamm = 2.4),
> +                                alg = "plinear"
> +                                )
>> confint(dd.plin)
> Waiting for profiling to be done...
> Error in eval(expr, envir, enclos) : (subscript) logical subscript too
> long
>>
>>
>> B <- as.vector(coef(dd.plin))
>> st <-list(Blev = B[2], beta = B[3:5], gamm = B[1])
>>
>>
>>
>> dd.nls <- nls(Lum ~ Blev + beta[Gun] * GL^gamm,
> +                        data = dd, start = st
> +                        )
>>
>> confint(dd.nls)
> Waiting for profiling to be done...
>                2.5%        97.5%
> Blev  -1.612492e-01 2.988387e-02
> beta1  6.108282e-06 8.762679e-06
> beta2  1.269000e-05 1.792914e-05
> beta3  3.844042e-05 5.388546e-05
> gamm   2.481102e+00 2.542966e+00
>>
>> dd.deriv2 <- function (Blev, beta, gamm, GL)
> + {
> +    .expr1 <- GL^gamm
> +    .value <- Blev + rep(beta, each = 17) * .expr1
> +    .grad <- array(0, c(length(.value), 5), list(NULL, c("Blev",
> +        "beta.rouge", "beta.vert", "beta.bleu", "gamm")))
> +    .grad[, "Blev"] <- 1
> +    .grad[1:17, "beta.rouge"] <- .expr1[1:17]
> +    .grad[18:34, "beta.vert"] <- .expr1[1:17]
> +    .grad[35:51, "beta.bleu"] <- .expr1[1:17]
> +    .grad[, "gamm"] <- ifelse(GL == 0, 0,
> +        rep(beta, each = 17) * (.expr1 * log(GL)))
> +    attr(.value, "gradient") <- .grad
> +    .value
> + }
>>
>>
>> dd.nls.d2 <- nls(Lum ~ dd.deriv2(Blev, beta, gamm, GL), data = dd,
> +                        start = list(Blev = B[2], beta = B[3:5], gamm = B[1]))
>>
>> confint(dd.nls.d2)
> Waiting for profiling to be done...
>                2.5%        97.5%
> Blev  -1.612492e-01 2.988391e-02
> beta1  1.269000e-05 1.792914e-05
> beta2  3.844041e-05 5.388546e-05
> beta3  6.108281e-06 8.762678e-06
> gamm   2.481102e+00 2.542966e+00
>>
>> R.version.string # XP
> [1] "R version 2.4.0 Patched (2006-10-24 r39722)"
>>
>
>
> On 11/18/06, Gabor Grothendieck <[EMAIL PROTECTED]> wrote:
>> This works for me in terms of giving results without error messages
>> except for the confint(dd.plin) which I assume you don't really need
>> anyways.
>>
>> > gg <- model.matrix(~ Gun/GL - Gun, dd)
>> > dd.plin <- nls(Lum ~ gg^gamm, dd, start = list(gamm = 2.4),
>> +                                alg = "plinear"
>> +                                )
>> > confint(dd.plin)
>> Waiting for profiling to be done...
>> Error in eval(expr, envir, enclos) : (subscript) logical subscript too
>> long
>> >
>> >
>> > B <- as.vector(coef(dd.nls0))
>> > st <-list(Blev = B[2], beta = B[3:5], gamm = B[1])
>> >
>> >
>> >
>> > dd.nls <- nls(Lum ~ Blev + beta[Gun] * GL^gamm,
>> +                        data = dd, start = st
>> +                        )
>> >
>> > confint(dd.nls)
>> Waiting for profiling to be done...
>>               2.5%        97.5%
>> Blev  -1.612492e-01 2.988386e-02
>> beta1  6.108282e-06 8.762679e-06
>> beta2  1.269000e-05 1.792914e-05
>> beta3  3.844042e-05 5.388546e-05
>> gamm   2.481102e+00 2.542966e+00
>> >
>> > dd.deriv2 <- function (Blev, beta, gamm, GL)
>> + {
>> +    .expr1 <- GL^gamm
>> +    .value <- Blev + rep(beta, each = 17) * .expr1
>> +    .grad <- array(0, c(length(.value), 5), list(NULL, c("Blev",
>> +        "beta.rouge", "beta.vert", "beta.bleu", "gamm")))
>> +    .grad[, "Blev"] <- 1
>> +    .grad[1:17, "beta.rouge"] <- .expr1[1:17]
>> +    .grad[18:34, "beta.vert"] <- .expr1[1:17]
>> +    .grad[35:51, "beta.bleu"] <- .expr1[1:17]
>> +    .grad[, "gamm"] <- ifelse(GL == 0, 0,
>> +        rep(beta, each = 17) * (.expr1 * log(GL)))
>> +    attr(.value, "gradient") <- .grad
>> +    .value
>> + }
>> >
>> >
>> > dd.nls.d2 <- nls(Lum ~ dd.deriv2(Blev, beta, gamm, GL), data = dd,
>> +                        start = list(Blev = B[2], beta = B[3:5], gamm = B[1]))
>> >
>> > confint(dd.nls.d2)
>> Waiting for profiling to be done...
>>               2.5%        97.5%
>> Blev  -1.612492e-01 2.988391e-02
>> beta1  1.269000e-05 1.792914e-05
>> beta2  3.844041e-05 5.388546e-05
>> beta3  6.108281e-06 8.762678e-06
>> gamm   2.481102e+00 2.542966e+00
>> >
>> > R.version.string # XP
>> [1] "R version 2.4.0 Patched (2006-10-24 r39722)"
>> >
>>
>>
>> On 11/18/06, Ken Knoblauch <[EMAIL PROTECTED]> wrote:
>> > Thank you for your rapid response.
>> >
>> > This is reproducible on my system.  Here it is again, with, I hope,
>> > sufficient detail to properly document what does not work and what
>> > does on my system,
>> >
>> > But my original question, properly motivated or not, concerns whether
>> > there is a way to use or adapt deriv() to work if a term is indexed,
>> > as here my term beta is indexed by Gun?
>> >
>> > In any case, it is puzzling that the error is not reproducible and so
>> > I would be curious to track that down, if it is specific to my system.
>> >
>> > Thank you. Before retrying, I upgraded to the latest patched version
>> > (details at the end).
>> >
>> > ####The data, again
>> > dd
>> > Lum     GL      Gun     mBl
>> > 0.15    0       rouge   0.09
>> > 0.07    15      rouge   0.01
>> > 0.1     31      rouge   0.04
>> > 0.19    47      rouge   0.13
>> > 0.4     63      rouge   0.34
>> > 0.73    79      rouge   0.67
>> > 1.2     95      rouge   1.14
>> > 1.85    111     rouge   1.79
>> > 2.91    127     rouge   2.85
>> > 3.74    143     rouge   3.68
>> > 5.08    159     rouge   5.02
>> > 6.43    175     rouge   6.37
>> > 8.06    191     rouge   8
>> > 9.84    207     rouge   9.78
>> > 12      223     rouge   11.94
>> > 14.2    239     rouge   14.14
>> > 16.6    255     rouge   16.54
>> > 0.1     0       vert    0.04
>> > 0.1     15      vert    0.04
>> > 0.17    31      vert    0.11
>> > 0.46    47      vert    0.4
>> > 1.08    63      vert    1.02
>> > 2.22    79      vert    2.16
>> > 3.74    95      vert    3.68
>> > 5.79    111     vert    5.73
>> > 8.36    127     vert    8.3
>> > 11.6    143     vert    11.54
>> > 15.4    159     vert    15.34
>> > 19.9    175     vert    19.84
>> > 24.6    191     vert    24.54
>> > 30.4    207     vert    30.34
>> > 36.1    223     vert    36.04
>> > 43      239     vert    42.94
>> > 49.9    255     vert    49.84
>> > 0.06    0       bleu    0
>> > 0.06    15      bleu    0
>> > 0.08    31      bleu    0.02
>> > 0.13    47      bleu    0.07
>> > 0.25    63      bleu    0.19
>> > 0.43    79      bleu    0.37
>> > 0.66    95      bleu    0.6
>> > 1.02    111     bleu    0.96
>> > 1.46    127     bleu    1.4
>> > 1.93    143     bleu    1.87
>> > 2.49    159     bleu    2.43
>> > 3.2     175     bleu    3.14
>> > 3.96    191     bleu    3.9
>> > 4.9     207     bleu    4.84
>> > 5.68    223     bleu    5.62
>> > 6.71    239     bleu    6.65
>> > 7.93    255     bleu    7.87
>> >
>> > ### For initial values - this time using the plinear algorithm
>> > ### instead of optim
>> > gg <- model.matrix(~-1 + Gun/GL, dd)[ , c(4:6)]
>> > dd.plin <- nls(Lum ~ cbind(rep(1, 51), gg^gamm), data = dd,
>> >                                start = list(gamm = 2.4),
>> >                                alg = "plinear"
>> >                                )
>> > B <- as.vector(coef(dd.plin))
>> > st <-list(Blev = B[2], beta = B[3:5], gamm = B[1])
>> >
>> >
>> > dd.nls <- nls(Lum ~ Blev + beta[Gun] * GL^gamm,
>> >                        data = dd, start = st
>> >                        )
>> > confint(dd.plin)
>> >
>> > Waiting for profiling to be done...
>> > Error in eval(expr, envir, enclos) : (subscript) logical subscript too
>> long
>> > ###  Here is the error that I observe
>> > confint(dd.nls)
>> > Waiting for profiling to be done...
>> > Error in prof$getProfile() : step factor 0.000488281 reduced below
>> > 'minFactor' of 0.000976562
>> >
>> > dd.deriv2 <- function (Blev, beta, gamm, GL)
>> > {
>> >    .expr1 <- GL^gamm
>> >    .value <- Blev + rep(beta, each = 17) * .expr1
>> >    .grad <- array(0, c(length(.value), 5), list(NULL, c("Blev",
>> >        "beta.rouge", "beta.vert", "beta.bleu", "gamm")))
>> >    .grad[, "Blev"] <- 1
>> >    .grad[1:17, "beta.rouge"] <- .expr1[1:17]
>> >    .grad[18:34, "beta.vert"] <- .expr1[1:17]
>> >    .grad[35:51, "beta.bleu"] <- .expr1[1:17]
>> >    .grad[, "gamm"] <- ifelse(GL == 0, 0,
>> >        rep(beta, each = 17) * (.expr1 * log(GL)))
>> >    attr(.value, "gradient") <- .grad
>> >    .value
>> > }
>> >
>> > dd.nls.d2 <- nls(Lum ~ dd.deriv2(Blev, beta, gamm, GL),
>> >                        subset(dd, Gun != "gris"),
>> >                        start = list(Blev = B[2], beta = B[3:5], gamm = B[1]))
>> >
>> >
>> > #####But not here
>> > confint(dd.nls.d2)
>> > Waiting for profiling to be done...
>> >               2.5%        97.5%
>> > Blev  -1.612492e-01 2.988387e-02
>> > beta1  1.269000e-05 1.792914e-05
>> > beta2  3.844042e-05 5.388546e-05
>> > beta3  6.108282e-06 8.762679e-06
>> > gamm   2.481102e+00 2.542966e+00
>> >
>> > R version 2.4.0 Patched (2006-11-16 r39921)
>> > i386-apple-darwin8.8.1
>> >
>> > locale:
>> > C
>> >
>> > attached base packages:
>> > [1] "stats"     "graphics"  "grDevices" "utils"     "datasets"
>> > [6] "methods"   "base"
>> >
>> > other attached packages:
>> >     MASS   lattice
>> >  "7.2-29" "0.14-13"
>> >
>> > best,
>> >
>> > ken
>> >
>> >
>> > Gabor Grothendieck wrote:
>> > > Please provide reproducible code which shows the error.
>> > >
>> > >>
>> > >> st <- list(Blev = -0.06551802, beta = c(1.509686e-05, 4.55525e-05,
>> > > + 7.32272e-06), gamm = 2.51187)
>> > >>
>> > >> dd.nls <- nls(Lum ~ Blev + beta[Gun] * GL^gamm,
>> > > +                        data = dd, start = st)
>> > >>
>> > >> confint(dd.nls)
>> > > Waiting for profiling to be done...
>> > >                2.5%        97.5%
>> > > Blev  -1.612492e-01 2.988388e-02
>> > > beta1  6.108282e-06 8.762679e-06
>> > > beta2  1.269000e-05 1.792914e-05
>> > > beta3  3.844042e-05 5.388546e-05
>> > > gamm   2.481102e+00 2.542966e+00
>> > >> R.version.string
>> > > [1] "R version 2.4.0 Patched (2006-10-24 r39722)"
>> > >
>> > >
>> > >
>> > >
>> > > On 11/17/06, Ken Knoblauch <[EMAIL PROTECTED]> wrote:
>> > >> Hi,
>> > >>
>> > >> I'm fitting a standard nonlinear model to the luminances measured
>> > >> from the red, green and blue guns of a TV display, using nls.
>> > >>
>> > >> The call is:
>> > >>
>> > >> dd.nls <- nls(Lum ~ Blev + beta[Gun] * GL^gamm,
>> > >>                        data = dd, start = st)
>> > >> where st was initally estimated using optim()
>> > >>
>> > >> st
>> > >> $Blev
>> > >> [1] -0.06551802
>> > >>
>> > >> $beta
>> > >> [1] 1.509686e-05 4.555250e-05 7.322720e-06
>> > >>
>> > >> $gamm
>> > >> [1] 2.511870
>> > >>
>> > >> This works fine but I received an error message when I tried to
>> > >> use confint().  I thought that getting derivatives with deriv might
>> > >> help but found that deriv does not automatically handle the
>> > >> indexing of the beta parameter.  I modified the output of deriv
>> > >> from the case when the term beta is not indexed to give:
>> > >>
>> > >> dd.deriv2 <- function (Blev, beta, gamm, GL)
>> > >> {
>> > >>    .expr1 <- GL^gamm
>> > >>    .value <- Blev + rep(beta, each = 17) * .expr1
>> > >>    .grad <- array(0, c(length(.value), 5), list(NULL, c("Blev",
>> > >>        "beta.rouge", "beta.vert", "beta.bleu", "gamm"  )))
>> > >>    .grad[, "Blev"] <- 1
>> > >>    .grad[1:17, "beta.rouge"] <- .expr1[1:17]
>> > >>    .grad[18:34, "beta.vert"] <- .expr1[1:17]
>> > >>    .grad[35:51, "beta.bleu"] <- .expr1[1:17]
>> > >>    .grad[, "gamm"] <- ifelse(GL == 0, 0,
>> > >>        rep(beta, each = 17) * (.expr1 * log(GL)))
>> > >>    attr(.value, "gradient") <- .grad
>> > >>    .value
>> > >> }
>> > >>
>> > >> which is not general but
>> > >> now I can get a result from confint. My question:  Can deriv()
>> > >> be made to handle an indexed term more automatically (elegantly)?
>> > >> I think that this would become more urgent (or require more work
>> > >> on my part) if I were to want the Hessian, too, for example, in
>> > >> anticipation of using rms.curv as described in the on-line
>> > >> complements for MASS.
>> > >>
>> > >> The plinear algorithm can be used for this model (it is similar in
>> > >> some respects to the example given on p 218 of MASS, but the intercept
>> > >> terms are indexed instead, there).
>> > >>
>> > >> Thanks for any suggestions.
>> > >>
>> > >> best,
>> > >>
>> > >> Ken
>> > >>
>> > >>
>> > >> --
>> > >> Ken Knoblauch
>> > >> Inserm U371
>> > >> Institut Cellule Souche et Cerveau
>> > >> Département Neurosciences Intégratives
>> > >> 18 avenue du Doyen Lépine
>> > >> 69500 Bron
>> > >> France
>> > >> tel: +33 (0)4 72 91 34 77
>> > >> fax: +33 (0)4 72 91 34 61
>> > >> portable: +33 (0)6 84 10 64 10
>> > >> http://www.lyon.inserm.fr/371/
>> > >>
>> > >
>> >
>> >
>> > --
>> > Ken Knoblauch
>> > Inserm U371
>> > Institut Cellule Souche et Cerveau
>> > Département Neurosciences Intégratives
>> > 18 avenue du Doyen Lépine
>> > 69500 Bron
>> > France
>> > tel: +33 (0)4 72 91 34 77
>> > fax: +33 (0)4 72 91 34 61
>> > portable: +33 (0)6 84 10 64 10
>> > http://www.lyon.inserm.fr/371/
>> >
>> >
>>
>


-- 
Ken Knoblauch
Inserm U371
Institut Cellule Souche et Cerveau
Département Neurosciences Intégratives
18 avenue du Doyen Lépine
69500 Bron
France
tel: +33 (0)4 72 91 34 77
fax: +33 (0)4 72 91 34 61
portable: +33 (0)6 84 10 64 10
http://www.lyon.inserm.fr/371/

______________________________________________
[email protected] mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
