CellML Discussion List



[cellml-discussion] Summary of simulation metadata meeting


  • From: david.nickerson at nus.edu.sg (David Nickerson)
  • Subject: [cellml-discussion] Summary of simulation metadata meeting
  • Date: Wed, 23 Aug 2006 10:07:06 +0800

Hi Shane,

The problem with the approach you discuss below is illustrated by the
following example: a cell naturally sits in some rest state, doing
basically nothing, until a stimulus occurs, at which point it suddenly
goes from rest to a highly active state.

If you are only ever applying a single stimulus, and that stimulus occurs
at (or near) the beginning of your simulation interval, then the
integrator will quite likely find it and simulate the expected response.
Every adaptive stepping integrator I have tried, however, will then
settle back into the rest state following that initial response, and
provided your next stimulus is more than a few time steps away, the
integrator will step to the end point in very few steps, completely
missing any further stimuli (sometimes you randomly hit a stimulus and
get another response, but not at all reliably). Of course, that first
response is very nicely represented, with varying intervals between your
data points: you get lots of data where things are happening rapidly and
not so much where things are occurring at a much slower rate. This is
ideal, but how do you then have an integrator that captures all the
responses without specifically coding your stimulus protocol(s) into the
integrator? And it's not just your main stimulus protocol that causes
these responses: there can be all sorts of processes in your model that
are essentially doing nothing until some threshold is reached (whether
that is a membrane potential, an ionic concentration, etc.).
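To make this concrete, here is a toy Python sketch (invented purely for
illustration - the pulse model and the step-doubling rule are not taken
from any real integrator or from this thread) of how an adaptive stepper
that grows its step through a quiet rest state can jump straight over a
later stimulus unless its step size is capped:

```python
def stimulus(t):
    # A brief stimulus current: nonzero only on the interval [50.0, 50.2).
    return 10.0 if 50.0 <= t < 50.2 else 0.0

def integrate(t_end, max_step=float("inf")):
    """Toy adaptive Euler stepper: the step doubles while the derivative
    is (near) zero and shrinks when something is happening, mimicking how
    an adaptive integrator accelerates through a rest state.
    Returns the final value of the state variable."""
    t, y, h = 0.0, 0.0, 0.01
    while t < t_end:
        dydt = stimulus(t)   # the state is flat except during the pulse
        y += h * dydt        # forward Euler update
        t += h
        if abs(dydt) < 1e-3:
            h = h * 2.0              # quiet region: grow the step
        else:
            h = max(h * 0.5, 1e-4)   # activity: shrink the step
        h = min(h, max_step)
    return y

missed = integrate(100.0)                # unlimited steps: pulse never sampled
caught = integrate(100.0, max_step=0.1)  # capped steps: pulse is captured
```

With no cap the internal steps double past the pulse (the grid lands near
t = 41 and then t = 82), so `missed` stays at 0.0, while the capped run
samples inside the pulse and records a response.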

In order to accurately capture the above behaviour, the typical approach
is to limit the maximum internal step size your integrator is allowed to
take. You try to make that limit as large as you can for your model and
stimulus protocol, but it's still a limit. And people seem happy to
include this maximum step size in the simulation metadata specification.
So given that you have already limited your integration stepping, how is
it bad to ask for the model results at some multiple of that maximum
step?
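As a sketch of what asking for results on such a grid might look like,
here is a hypothetical tabulation helper (the name and interface are
invented for illustration, not part of any specification) that linearly
interpolates an integrator's irregular internal steps onto a regular
output grid with spacing h_out:

```python
def tabulate(times, values, h_out):
    """Linearly interpolate irregular integrator output (times, values)
    onto a regular grid with spacing h_out, e.g. some multiple of the
    maximum internal step."""
    out_t, out_y = [], []
    t, i = times[0], 0
    while t <= times[-1]:
        # advance to the pair of internal steps bracketing t
        while times[i + 1] < t:
            i += 1
        frac = (t - times[i]) / (times[i + 1] - times[i])
        out_t.append(t)
        out_y.append(values[i] + frac * (values[i + 1] - values[i]))
        t += h_out
    return out_t, out_y

# Internal steps at t = 0, 1, 3 resampled onto a unit grid:
grid_t, grid_y = tabulate([0.0, 1.0, 3.0], [0.0, 2.0, 6.0], 1.0)
# -> grid_t = [0.0, 1.0, 2.0, 3.0], grid_y = [0.0, 2.0, 4.0, 6.0]
```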

As for interpolating values: if we just talk about cardiac
electrophysiological models, you can get quite different action potential
shapes depending on the sampling frequency of your simulation results.
If you are trying to reproduce a given set of results, but using a
different method to determine when you output data, you can end up with
essentially a different answer. Sure, the general characteristics will
probably be the same and the differences might be quite subtle - but
they will be different, and I don't see how that can work for model
curation, unless you start curating models based on some analysis of the
simulation results rather than on the results directly.
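A toy illustration of that sampling effect (the triangular "action
potential" below is an idealisation invented for this sketch, not a real
model): sampling the same sharp upstroke on two different grids gives
different apparent peak values, which is exactly the kind of difference
that would trip up curation by direct comparison of results:

```python
def action_potential(t):
    # Idealized sharp spike: a narrow triangular peak of height 1 at t = 1.0.
    return max(0.0, 1.0 - 10.0 * abs(t - 1.0))

def sample(dt, t_end=2.0):
    # Evaluate the waveform on a regular grid with spacing dt.
    n = round(t_end / dt) + 1
    return [action_potential(k * dt) for k in range(n)]

fine = sample(0.01)    # grid hits t = 1.0 exactly; apparent peak is 1.0
coarse = sample(0.03)  # grid straddles t = 1.0; apparent peak is lower
```

Both traces come from the identical underlying waveform, yet the coarse
grid reports a noticeably smaller peak simply because no sample lands on
the upstroke's maximum.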

And I think we have to firmly keep model curation in mind as we think
about the specification of simulations.

To summarise: I agree. It will always be best, especially when using any
of the really smart adaptive integrators available today, to simply hand
your model over to the integrator and let it go for it. But the fact
that we need to ensure that all responses at all time scales are
captured means we have to limit the flexibility of the integrator - and
even then, adaptive stepping with certain limits is still more efficient
than a simple fixed-step Euler-like method for any reasonably complex
biophysical model. Maybe we (I?) just use these integrators incorrectly,
or maybe the way we're applying stimuli to our models is wrong? Maybe
it's just that I haven't had coffee yet and completely misinterpreted
your email :-)

And if that convinces you that we need to apply fixed limits to the
integration of a model, then it might just follow that it is OK to also
specify that you want to output results from the model integration at
some multiple of this limiting interval.

Andre.

Shane Blackett wrote:
> Hi,
>
> RE: Tabulation Data.
>
> I was very surprised about six months ago to hear that we are running
> adaptive step size integrators and then requiring them to output results
> as if they were integrating at fixed intervals. When I queried this,
> I was told that this is the way it is done (so I let my concern die).
> Poul queried the same thing yesterday at this little meeting.
>
> It seems to me that an integrator should not be required to do this
> (which is what is implied by including it in the specification). If you
> have an efficient integrator (or a very smooth part of your function), it
> should be able to make large steps, and you should not force it to
> produce hundreds of numbers containing no information (or, even worse,
> limit its step size to some grid).
> I had previously assumed that any integrator would just tell you the x
> and y values to draw on a graph; however, this doesn't appear to be what
> is happening in the implementation we are currently using.
>
> If, for comparison purposes, you want to check the results at a
> particular time value, then I guess something needs to interpolate to
> that time; however, I think Poul would argue that this is not a
> parameter of the simulation but just of the presentation of the results,
> similar to what colour to make the curve in the graph (which we haven't
> yet included).
>
> Hope this helps to explain what is going on.
>
> Shane
> _______________________________________________
> cellml-discussion mailing list
> cellml-discussion at cellml.org
> http://www.cellml.org/mailman/listinfo/cellml-discussion

--
David Nickerson, PhD
Research Fellow
Division of Bioengineering
Faculty of Engineering
National University of Singapore



