# [NMusers] RE: Standard errors of estimates for strictly positive parameters

From: Chaouch Aziz <Aziz.Chaouch_at_chuv.ch>
Date: Thu, 12 Feb 2015 11:01:51 +0000

Dear Douglas, dear Pascal,

Thanks a lot for your answers. I guess the main point here is constrained vs unconstrained optimization, as the asymptotic covariance matrix of estimates (as returned by $COV) is "well defined" only in the latter case. When fitting model 1, one would normally constrain THETA(1) to be positive by using something like:

$THETA
(0, 15, 50) ; TVCL

In this situation I wonder whether it makes sense at all to consider the output of $COV. It seems model 2 would be preferable here (unconstrained optimization). If model 1 is fitted without boundary constraints on THETA(1), the covariance matrix of estimates may have "some" meaning, but the optimization in NONMEM is then likely to crash if it encounters a negative value at some point, which again speaks in favor of model 2 (unless one is not interested in the output of $COV).
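
For what it's worth, the relationship between the two scales can be sketched with the delta method; the numbers below are purely illustrative and not taken from any real $COV output:

```python
import math

# Hypothetical numbers, standing in for a $COV output (not from any real study):
# THETA(1) = TVCL point estimate and its asymptotic SE on the natural scale.
tvcl = 15.0
se_tvcl = 9.0

# Delta method: SE(log TVCL) ~ SE(TVCL) / TVCL. A 95% CI built on the log
# scale and then back-transformed can never cross zero.
se_log = se_tvcl / tvcl
lo = math.exp(math.log(tvcl) - 1.96 * se_log)
hi = math.exp(math.log(tvcl) + 1.96 * se_log)

# The naive normal-scale CI, by contrast, can have a negative lower bound.
lo_naive = tvcl - 1.96 * se_tvcl
hi_naive = tvcl + 1.96 * se_tvcl

print((lo, hi), (lo_naive, hi_naive))
```

The log-scale interval is asymmetric around the estimate but respects positivity, which is essentially what the model 2 parametrization buys you.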

Now what about $OMEGA? Here NONMEM knows that these are variances and therefore we do not need to explicitly (i.e. manually) place boundary constraints on the diagonal elements of the omega matrix. However, something must account for this internally. The covariance matrix of estimates returned by $COV also contains elements that refer to omega, so I'm unsure how these are treated. For diagonal elements of the omega matrix, does NONMEM optimize log(omega) or omega? Or does it use a Cholesky decomposition of the omega matrix and optimize elements on that scale? Again, unless the optimization on omega is unconstrained, can we really trust the output of $COV? Basically, the question here is: how would you construct an asymptotic 95% confidence interval for a diagonal element of omega (i.e. a variance) based on the information from the covariance matrix of estimates?
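
One common answer (a sketch only, with made-up numbers) is to build the interval on the log scale via the delta method, so that it cannot cross zero:

```python
import math

# Hypothetical values (illustrative only): a diagonal OMEGA element (a
# variance) and its SE as reported by $COV on the variance scale.
omega = 0.09
se_omega = 0.05

# Delta method on the log scale: SE(log omega) ~ SE(omega) / omega, so the
# back-transformed 95% CI stays strictly positive.
se_log = se_omega / omega
lo = omega * math.exp(-1.96 * se_log)
hi = omega * math.exp(+1.96 * se_log)

# A symmetric CI built directly on the variance scale can dip below zero.
lo_naive = omega - 1.96 * se_omega

print((lo, hi), lo_naive)
```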

The covariance matrix of estimates is of importance to me because I'm considering published studies and I do not have access to the data, so I cannot refit the model with an alternative parametrization. The output of $COV (in the lst file, when available from the authors) is then the only available piece of information about the uncertainty of the estimation process.

Kind regards,

Aziz Chaouch
________________________________________
From: Eleveld, DJ [mailto:d.j.eleveld_at_umcg.nl]
Sent: Wednesday, 11 February 2015 22:26
To: Chaouch Aziz; nmusers_at_globomaxnm.com
Subject: RE: Standard errors of estimates for strictly positive parameters

Hi Aziz,

Just some comments off the top of my head, in a quite informal way: I'm not really sure that these are the same problem, because they don't start with the same information in the form of parameter constraints. In model 1 you are asking the optimizer for the unconstrained maximum likelihood solution for TVCL. OK, this is reasonable in a lot of situations, but not necessarily in all situations.

In model 2 you add information by forcing TVCL and CL to be positive. If you think of the optimal solution as some point in N-dimensional space which has to be searched for, in model 2 you are saying "don't even look in the space where TVCL or CL is negative". Even stronger, in model 2 you are also saying "don't even get close to zero", because the log-normal distribution vanishes towards zero.

Which of these solutions is best for some particular application depends on a lot of things. One of the things I would think about in this situation is whether or not my a priori beliefs match the structural constraints of the model. Do I really think that the "true" CL could be zero? If yes, then model 2 is hard to defend in that case.

Your description of your situation regarding standard errors is part of the same thing. When you extrapolate standard errors into low-probability areas you are checking the boundaries of the probability area. It should not be surprising that model 1 might tell you that CL is negative, since this was part of the solution space which you allowed. With model 2 your model structure says "don't even look there".

In short, although these two models might look similar, I think they are really quite different. This becomes most clear when you consider the low-probability space.

Sorry for the vague language.

Warm regards,

Douglas

________________________________________
From: pascal.girard_at_merckgroup.com [mailto:pascal.girard_at_merckgroup.com]
Sent: Wednesday, 11 February 2015 18:30
To: Chaouch Aziz; nmusers_at_globomaxnm.com
Subject: RE: Standard errors of estimates for strictly positive parameters

Dear Aziz,

NM does not return the asymptotic SE of THETA(1) in model 1 on the log scale. So I would use model 2.

With best regards / Mit freundlichen Grüßen / Cordialement

Pascal
________________________________________
From: owner-nmusers_at_globomaxnm.com [owner-nmusers_at_globomaxnm.com] on behalf of Chaouch Aziz [Aziz.Chaouch_at_chuv.ch]
Sent: Wednesday, February 11, 2015 5:21 PM
To: nmusers_at_globomaxnm.com
Subject: [NMusers] Standard errors of estimates for strictly positive parameters

Hi,

I'm interested in generating samples from the asymptotic sampling distribution of population parameter estimates from a published PKPOP model fitted with NONMEM. Asymptotically, parameter estimates are (multivariate) normally distributed (under unconstrained optimization) with mean M and covariance C, where M is the vector of parameter estimates and C is the covariance matrix of estimates (returned by $COV and available in the lst file).
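
The sampling step itself can be sketched as follows, with a made-up M and C standing in for the published estimates and the $COV output:

```python
import numpy as np

# Made-up stand-ins for a published model: M is the vector of final
# estimates (say TVCL and OMEGA(1,1)) and C the covariance matrix of
# estimates as it would appear in the lst file after $COV.
M = np.array([15.0, 0.09])
C = np.array([[81.0, 0.10],
              [0.10, 0.0025]])

rng = np.random.default_rng(42)
samples = rng.multivariate_normal(M, C, size=10_000)

# Under the normal approximation some TVCL draws come out negative,
# which is the problem discussed below.
frac_negative = (samples[:, 0] < 0).mean()
print(frac_negative)
```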

Consider the 2 models below:

Model 1:

TVCL = THETA(1)

CL = TVCL*EXP(ETA(1))

Model 2:

TVCL = EXP(THETA(1))

CL = TVCL*EXP(ETA(1))

It is clear that model 1 and model 2 will provide exactly the same fit. However, although in both cases the standard error of estimates (SE) will refer to THETA(1), the asymptotic sampling distribution of TVCL will be normal in model 1 while it will be lognormal in model 2. Therefore, if one is interested in generating random samples from the asymptotic distribution of TVCL, some of these samples might be negative in model 1 while they'll remain nicely positive in model 2. The same would happen with the bounds of (asymptotic) confidence intervals: in model 1 the lower bound of a 95% confidence interval for TVCL might be negative (unrealistic) while it would remain positive in model 2.
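
A minimal sketch of this contrast, with illustrative numbers only (the log-scale SE is obtained via the delta method):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Model 1: THETA(1) = TVCL, so asymptotic draws of TVCL are normal.
# Estimate 15 with SE 9 (hypothetical numbers, for illustration only).
tvcl_m1 = rng.normal(15.0, 9.0, size=n)

# Model 2: THETA(1) = log(TVCL); the same uncertainty expressed on the
# log scale (delta method: SE ~ 9/15 = 0.6) makes TVCL log-normal.
tvcl_m2 = np.exp(rng.normal(np.log(15.0), 0.6, size=n))

# Model 1 produces some negative TVCL draws; model 2 never does.
print((tvcl_m1 < 0).mean(), (tvcl_m2 < 0).mean())
```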

This obviously has no impact on point estimates or even confidence intervals constructed via non-parametric bootstrap, since boundary constraints can be placed on parameters in NONMEM. But what if one is interested in the asymptotic covariance matrix of estimates returned by $COV? The asymptotic sampling distribution of parameter estimates is (multivariate) normal only if the optimization is unconstrained! Doesn't this then speak in favour of model 2 over model 1? Or does NONMEM take care of it and return the asymptotic SE of THETA(1) in model 1 on the log scale (when boundary constraints are placed on the parameter)?

Thanks,

Aziz Chaouch

________________________________
The contents of this message are confidential and only intended for the eyes of the addressee(s). Others than the addressee(s) are not allowed to use this message, to make it public or to distribute or multiply this message in any way. The UMCG cannot be held responsible for incomplete reception or delay of this transferred message.