Hi Toufigh,

Just a suggestion that you may already be using: do you use the SORT option for 
estimation?
I think it is helpful when the informativeness of individuals varies 
considerably.
It might help stabilise the full data set.
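For reference, in the control stream it would sit on the $ESTIMATION record, 
something like the sketch below (the other settings shown are illustrative 
assumptions, not a recommendation; check that your NONMEM version accepts 
SORT):

    $ESTIMATION METHOD=1 INTERACTION SORT MAXEVAL=9999 PRINT=5
    $COVARIANCE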

Douglas Eleveld


________________________________________
From: [email protected] [[email protected]] on behalf of 
Toufigh Gordi [[email protected]]
Sent: Tuesday, January 24, 2012 5:23 AM
To: [email protected]
Subject: [NMusers] Choice of models

Dear all,

I have a general question on the choice of model in a population analysis. I 
have a data set that includes a large number of studies, with about ¾ of the 
data from extensive sampling schemes (phase 1, 2, and 3 studies) and the rest 
from sparse sampling (phase 3 clinical studies). When developing the PK model, 
a model on the extensive samples only fits the data well and I get quite 
reasonable parameter estimates, including covariate effects, and a successful 
$COV step (NONMEM). When all data are used, the model becomes somewhat 
unstable: the same covariates are identified, but the model becomes quite 
sensitive to the initial estimates and the $COV step won't go through. I could, 
of course, perform a bootstrap to get around this issue. In general, the fit of 
the model based on the full data set is not as good as that of the 
extensive-data model, although the two models are rather similar with regard to 
the parameter estimates. However, the range of estimated parameters is wider 
when all data are used, and KA and V2 in particular are skewed toward much 
larger values.

Moving forward, I could either use the full-data model and simulate 
steady-state profiles for the phase 3 (sparsely sampled) data, or I could use 
the model based on the extensive samples only, add the sparse data, and 
generate post-hoc estimates for the sparsely sampled individuals. The advantage 
of the first option is that all the available data have been used in the 
modeling process; the disadvantage is that the model is not as good as the 
other one, with the sparse data distorting the parameter estimates. The 
advantage of the second option is that the model performs better, and there is 
really no reason why the underlying PK model for the sparsely sampled subjects 
should be different, which means one should be able to use that model to 
generate post-hoc estimates. The disadvantage is that not all the available 
data have been used in the model-building process.
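For the second option, the usual NONMEM mechanics would be to fix the 
population parameters at the final estimates of the rich-data model and run an 
evaluation-only step on the sparse data, so only the individual (post-hoc) 
estimates are produced. A minimal sketch (values and table items are 
placeholders, not from my actual model):

    ; THETA/OMEGA/SIGMA fixed at the final estimates of the rich-data model
    $THETA  (1.2 FIX)                    ; e.g. KA (placeholder value)
    $ESTIMATION METHOD=1 INTERACTION MAXEVAL=0 POSTHOC
    $TABLE ID CL V2 KA ETA1 ETA2 NOPRINT FILE=posthoc.tab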

It would be interesting to hear other people’s thoughts and ideas on this.

Toufigh
________________________________

The contents of this message are confidential and only intended for the eyes of 
the addressee(s). Others than the addressee(s) are not allowed to use this 
message, to make it public or to distribute or multiply this message in any 
way. The UMCG cannot be held responsible for incomplete reception or delay of 
this transferred message.
