Re: [NMusers] Fwd: Should we generate VPCs with or without uncertainty?

From: Devin Pastoor <devin.pastoor_at_gmail.com>
Date: Mon, 08 Jun 2015 16:29:31 +0000

Matts,

The way I see it, the CIs around the point estimates provided in the VPC
can give a useful indication of model robustness, especially with regard
to the impact of the random-effects components, in that portion of your
model. Especially for heterogeneous data (or even for all-rich data, for
that matter) there are a number of binning strategies that can be used,
and the choice of binning can affect those intervals.

At the end of the day, we must use our judgement about how the model is
being used to support decisions, and about whether information on
uncertainty adds anything to the evaluation of the key questions you are
trying to address. E.g., if you are dealing with a narrow-therapeutic-index
drug, a 'feel' for how robustly your model describes the tails may be
valuable, even as a purely qualitative indication. On the other hand, if
you are deciding on a dose adjustment between populations, aiming to
normalize large differences while constrained to certain oral dosage
options, uncertainty in the point estimates will likely provide very
little support to an argument one way or the other.

Finally, in my opinion, inclusion/exclusion also depends on what the plot
is trying to communicate. If you are personally evaluating model adequacy,
sure, include it; but if the plot is meant to convince non-modelers or
non-quantitative people that your model describes the data, include a
visualization of uncertainty at your own peril :-)

So, for better or worse, I would say: it depends. Though in most cases I
would be highly concerned if major decisions rode on the inclusion or
exclusion of parameter uncertainty.


Devin Pastoor
Center for Translational Medicine
University of Maryland, Baltimore



On Mon, Jun 8, 2015 at 11:57 AM Matts Kågedal <mattskagedal_at_gmail.com>
wrote:

> Hi all,
>
> Creation of VPCs is a way to assess if simulated data generated by the
> model is compatible with observed data.
> VPCs are usually based on parameter point estimates of the model.
> Sometimes parameter uncertainty is also accounted for in the generation of
> VPCs (PPCs), where each simulated replicate of the data set is based on a
> new set of parameter values representing the uncertainty of the estimates
> (e.g. based on a bootstrap).
>
> I wonder if inclusion of uncertainty in this way is really appropriate or
> if it just makes the confidence intervals wider and hence easier to qualify
> the model. Is it possible, based on such an approach, that a model might
> look good when in fact no likely combination of parameter values (based on
> parameter uncertainty) would generate data that are compatible with the
> observations?
>
> To illustrate my question:
> I could generate 100 sets of parameters reflecting parameter uncertainty
> (e.g. from a bootstrap). Based on each set of parameters I could then
> generate a separate VPC (e.g. showing the median and the 5th and 95th
> percentiles) to see if any of the parameter sets are compatible with the
> data. I would then have
> 100 VPCs, each based on a separate set of parameter values reflecting the
> parameter correlations and uncertainty.
>
> If the VPC based on point estimates looks bad, I would (generally) expect
> that the other VPCs would be worse (they all have lower likelihood), so
> that we have 101 VPCs that do not look good. Some might overpredict and
> some underpredict, and some might describe parts of the relation better than
> the VPC based on the point estimates.
>
> By putting the VPCs together from all parameter vectors, the CI becomes
> wider, and perhaps now includes the observed data. So, based on a set of 100
> parameter vectors, none of which is individually compatible with the observed
> data, I have now generated a VPC (PPC) whose confidence interval
> actually includes the observed metric (e.g. the median). It seems to me that,
> based on such an approach, it is possible for a model to look good when
> in fact no likely individual set of parameter values would generate data
> that are compatible with the observations.
>
> Simulation based on parameter uncertainty is useful when we want to make
> inference, but I am unsure of its use for model qualification. In any case,
> it is confusing that we sometimes simulate based on point estimates and
> sometimes based on parameter uncertainty without any particular rationale
> as far as I understand.
>
> I would be interested if someone could shed some light on the inclusion of
> uncertainty in simulations for model qualification (VPCs).
>
> Best regards,
> Matts Kagedal
>
> Pharmacometrician, Genentech
>
>
>
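
A minimal Python sketch of the two generation schemes contrasted in the
message above (point-estimate VPC vs. uncertainty-propagating PPC). This
is only an illustration under an assumed normal simulation model;
simulate_trial, theta_hat, and the bootstrap array are invented stand-ins,
not any specific tool's API:

    import numpy as np

    rng = np.random.default_rng(0)
    n_reps = 1000  # number of simulated replicates of the study

    # theta_hat: final parameter point estimates (illustrative values)
    theta_hat = np.array([1.0, 0.5])
    # boot: one row per bootstrap run, i.e. one plausible parameter vector
    boot = theta_hat + rng.normal(0.0, 0.1, size=(500, 2))

    def simulate_trial(theta, rng):
        # Stand-in for simulating one replicate of the study design under
        # parameter vector theta (random effects and residual error implied)
        return rng.normal(theta[0], theta[1], size=100)

    # VPC: every replicate is simulated from the same point estimates
    vpc_reps = [simulate_trial(theta_hat, rng) for _ in range(n_reps)]

    # PPC: each replicate first draws a fresh parameter vector from the
    # uncertainty distribution (here: a randomly chosen bootstrap row)
    ppc_reps = [simulate_trial(boot[rng.integers(len(boot))], rng)
                for _ in range(n_reps)]

    # The VPC/PPC percentile bands are then summaries over these replicates,
    # e.g. the 5th/50th/95th percentiles per bin of the independent variable.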

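And a toy numerical sketch of the pooling concern Matts raises: 100
parameter sets, each individually incompatible with the observed median,
can still yield a pooled interval that covers it. The biases and noise
levels here are made up purely for illustration:

    import numpy as np

    rng = np.random.default_rng(1)
    obs_median = 0.0  # the observed metric

    # 100 invented parameter sets, half biased low and half biased high,
    # so that none of them describes the observed median well
    biases = np.concatenate([np.full(50, -1.0), np.full(50, 1.0)])

    n_covered = 0
    all_medians = []
    for b in biases:
        # 1000 simulated trials of n=50 under this parameter set
        sim_medians = np.median(rng.normal(b, 0.2, size=(1000, 50)), axis=1)
        lo, hi = np.percentile(sim_medians, [2.5, 97.5])
        n_covered += int(lo <= obs_median <= hi)
        all_medians.append(sim_medians)

    # Pooling across all parameter sets widens the interval enough to cover
    # the observed median, even though no individual set came close
    pooled_lo, pooled_hi = np.percentile(np.concatenate(all_medians),
                                         [2.5, 97.5])
    print(f"{n_covered} of 100 individual intervals cover the observed median")
    print(f"pooled interval: [{pooled_lo:.2f}, {pooled_hi:.2f}]")
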
Received on Mon Jun 08 2015 - 12:29:31 EDT
