CellML Discussion List



[cellml-discussion] pcenv development priorities


  • From: david.nickerson at nus.edu.sg (David Nickerson)
  • Subject: [cellml-discussion] pcenv development priorities
  • Date: Mon, 30 Oct 2006 18:49:36 +0800

Alan Garny wrote:
> As you know, we have a CellML 1.0 version of the epicardial variant of
> the ten Tusscher model. I computed that model for one second's worth of
> cardiac activity, plotting the trans-membrane potential. From there, we
> can extrapolate to 70 minutes by assuming that the frequency of the
> stimulus is 1 Hz. Like David, I have set the maximum time step to
> 0.1 ms. Here are some rough figures:
>
> Simulation time: 1037.4 s (i.e. ~17 min 17 sec)
> Computation time: 684.6 s (i.e. ~11 min 24 sec)

so is the total (predicted) wall clock run time 17 minutes, or 28 minutes?
Any chance you could run something longer than a 1 s simulation and
extrapolate from there, or even a whole 70 minutes' worth? If not, I
guess I can run it next time I'm on a Windows machine ;)
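
(To spell out where those two numbers come from: the first figure on its
own is 1037.4 s ~= 17 min 17 s, while the two figures added together give
1037.4 s + 684.6 s = 1722.0 s ~= 28 min 42 s.)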

> The computer on which I have run this is an IBM ThinkPad T42p (2 GHz
> processor with 2 GB of RAM). These figures can obviously vary quite a
> bit, since they're based on a one-second simulation and are subject to
> whatever my system does at the time... Still, that should give you a
> rough idea...
>
> Alan.
>
>> -----Original Message-----
>> From: cellml-discussion-bounces at cellml.org
>> [mailto:cellml-discussion-bounces at cellml.org] On Behalf Of
>> David Nickerson
>> Sent: 30 October 2006 10:14
>> To: For those interested in contributing to the development of CellML.
>> Subject: Re: [cellml-discussion] pcenv development priorities
>>
>> Just thought it might be useful to establish some benchmarks
>> for comparison of performance amongst the various tools.
>>
>> I have attached my version of the ten Tusscher et al. (2004)
>> human ventricular cell electrophysiology model; it's in CellML
>> 1.1 and uses relative URIs for all the imports. If there were
>> an easy way to flatten the model, Alan could try it in COR and
>> we could also give it a go in JSim... and others.
>>
>> If you extract the attached file, there is a CellML model,
>> model/2004_tenTusscher/experiments/increasing-frequency-epicardial.xml,
>> which describes the boundary conditions etc. and the simulation
>> metadata for a simulation running the epicardial variant of the
>> ten Tusscher model for 70 minutes with a periodic stimulus
>> protocol whose frequency varies from 0.25 to 3 Hz.
>>
>> I ran this simulation as specified in the simulation metadata,
>> using the BDF multistep method with Newton iteration from the
>> CVODES integrator (with the dense direct linear solver). The
>> full solution (every time-varying variable) is saved every
>> 1 ms and the integration has a maximum time step of 0.1 ms. My
>> resulting HDF5 data file is 2.9 GB, and the timing, memory
>> usage, and integrator stats were:
>>
>> Wall clock time : 1.57464269e+03 s
>> CPU time : 1.54678467e+03 s (user 1.52703543e+03 / system 1.97492340e+01)
>> Total allocated space : 22786048 bytes
>>                in use : 22362384 bytes
>>                  free : 423664 bytes
>>
>> Final integrator statistics for this run:
>> (MM: BDF; IM: Newton; LS: Dense; max-step: 1.0000e-01)
>> CVode real workspace length = 266
>> CVode integer workspace length = 62
>> Number of steps = 44973141
>> Number of f-s = 47068790
>> Number of setups = 3852394
>> Number of nonlinear iterations = 47068785
>> Number of nonlinear convergence failures = 19131
>> Number of error test failures = 283742
>>
>> Linear solver real workspace length = 578
>> Linear solver integer workspace length = 17
>> Number of Jacobian evaluations = 814797
>> Number of f evals. in linear solver = 13851549
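
For anyone wanting to set up something comparable outside PCEnv, a rough
sketch of the CVODES configuration described above (BDF, Newton iteration,
dense direct solver, 0.1 ms maximum step, output every 1 ms) and of how
the counters above can be queried is given below. It is illustrative only:
NEQ, the tolerances and the rhs stub are placeholders rather than anything
taken from the CCGS-generated code, and header paths and some call names
(e.g. CVodeMalloc versus the later CVodeInit) differ between SUNDIALS
releases.

#include <stdio.h>
#include <cvodes/cvodes.h>           /* CVodeCreate, CVodeMalloc, CVode  */
#include <cvodes/cvodes_dense.h>     /* CVDense, CVDenseGet* counters    */
#include <nvector/nvector_serial.h>  /* serial N_Vector                  */

#define NEQ 17  /* placeholder state count, not the real model's */

/* Right-hand side dy/dt = f(t, y); in PCEnv this would be the generated
   model code.  The body is elided here. */
static int rhs(realtype t, N_Vector y, N_Vector ydot, void *f_data)
{
    /* ... evaluate the model equations into ydot ... */
    return 0;
}

int main(void)
{
    realtype reltol = 1e-6, abstol = 1e-8;   /* placeholder tolerances */
    realtype t = 0.0, tout;
    long int nst, nfe, nsetups, nni, ncfn, netf, nje, nfeLS;
    void *cvode_mem;
    N_Vector y = N_VNew_Serial(NEQ);
    /* ... set the initial conditions in y ... */

    /* BDF multistep method with Newton iteration */
    cvode_mem = CVodeCreate(CV_BDF, CV_NEWTON);
    CVodeMalloc(cvode_mem, rhs, 0.0, y, CV_SS, reltol, &abstol);

    CVDense(cvode_mem, NEQ);          /* dense direct linear solver      */
    CVodeSetMaxStep(cvode_mem, 0.1);  /* maximum internal step of 0.1 ms */

    /* 70 minutes = 4.2e6 ms, sampling the full solution every 1 ms */
    for (tout = 1.0; tout <= 70.0 * 60.0 * 1000.0; tout += 1.0) {
        if (CVode(cvode_mem, tout, y, &t, CV_NORMAL) < 0) break;
        /* ... write every time-varying variable to the data file ... */
    }

    /* counters corresponding to the statistics reported above */
    CVodeGetNumSteps(cvode_mem, &nst);
    CVodeGetNumRhsEvals(cvode_mem, &nfe);
    CVodeGetNumLinSolvSetups(cvode_mem, &nsetups);
    CVodeGetNumNonlinSolvIters(cvode_mem, &nni);
    CVodeGetNumNonlinSolvConvFails(cvode_mem, &ncfn);
    CVodeGetNumErrTestFails(cvode_mem, &netf);
    CVDenseGetNumJacEvals(cvode_mem, &nje);
    CVDenseGetNumRhsEvals(cvode_mem, &nfeLS);

    printf("steps %ld, f-s %ld, setups %ld, nonlinear iters %ld\n",
           nst, nfe, nsetups, nni);
    printf("conv failures %ld, error test failures %ld\n", ncfn, netf);
    printf("Jacobian evals %ld, f evals in linear solver %ld\n", nje, nfeLS);

    N_VDestroy_Serial(y);
    CVodeFree(&cvode_mem);
    return 0;
}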
>>
>> I also used the Unix time command to get the total execution time:
>> 1530.279u 19.817s 26:18.03 98.2% 0+0k 0+0io 0pf+0w
>>
>> So it took almost 26.5 minutes to run this simulation (the
>> difference between the time command's output and the times
>> reported above is the setup time prior to the integration
>> loop, i.e. generating and compiling the C code).
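
The wall clock / CPU split above is measured around the integration loop
itself, which is why it comes out slightly smaller than the figure from
time. A minimal sketch of that kind of measurement, using POSIX
gettimeofday and getrusage (variable names here are just for illustration,
not the actual benchmark harness):

#include <stdio.h>
#include <sys/time.h>      /* gettimeofday             */
#include <sys/resource.h>  /* getrusage, struct rusage */

/* Convert a struct timeval to seconds as a double. */
static double tv_seconds(struct timeval tv)
{
    return (double)tv.tv_sec + (double)tv.tv_usec * 1e-6;
}

int main(void)
{
    struct timeval wall0, wall1;
    struct rusage ru0, ru1;
    double wall, user, sys;

    /* ... setup: parse the model, generate and compile the C code
       (not timed here, which is why the `time' command reports a
       slightly larger total) ... */

    gettimeofday(&wall0, NULL);
    getrusage(RUSAGE_SELF, &ru0);

    /* ... integration loop ... */

    gettimeofday(&wall1, NULL);
    getrusage(RUSAGE_SELF, &ru1);

    wall = tv_seconds(wall1) - tv_seconds(wall0);
    user = tv_seconds(ru1.ru_utime) - tv_seconds(ru0.ru_utime);
    sys  = tv_seconds(ru1.ru_stime) - tv_seconds(ru0.ru_stime);

    printf("Wall clock time : %.8e s\n", wall);
    printf("CPU time : %.8e s (user %.8e/system %.8e)\n",
           user + sys, user, sys);
    return 0;
}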
>>
>> Hopefully this proves useful in establishing some idea of the
>> relative performance of various tools/integrators.
>>
>>
>> David.
>>
>> PS - the attached model seems to work for me, but it's possible I have
>> either missed some relative links or left out some required files...
>>
>>
>> Andrew Miller wrote:
>>> David Nickerson wrote:
>>>> Hi all,
>>>>
>>>> From today's meeting minutes, the following priorities were set
>>>> for the development of pcenv:
>>>>
>>>> 1. Make an official release of what we have now, instead of just
>>>>    snapshot releases.
>>>> 2. Try to improve integration performance, by using CVODE from
>>>>    the SUNDIALS project.
>>>> 3. Investigate the possibility of getting Mac OSX support - Intel
>>>>    only to start with.
>>>> 4. Get editing support for MathML and the CellML structure
>>>>    working.
>>>> 5. Add CellML Metadata support to the backend, and editing
>>>>    support for this to the UI.
>>>>
>>>>
>>>> I'm just wondering if 2 is more important than 1?
>>>>
>>>> From the feedback so far, the performance of pcenv is very poor
>>>> compared to other tools. There is currently (to my knowledge) no
>>>> firm idea whether this is due to the underlying technology being
>>>> used by pcenv, or simply to the numerical integrators being used
>>>> not being as good as those most people are currently using.
>>>>
>>> Please refer to my messages on the 27th of this month, where I
>>> discuss the results of profiling it in callgrind.
>>>
>>> The major performance bottleneck is the evaluation of the Jacobian
>>> function (I use the standard O(n^2) method for generating a dense
>>> Jacobian in an array, and most of the time is spent evaluating the
>>> variables). COR is closed source, so I cannot see exactly what it
>>> is doing; but given that COR apparently isn't doing any
>>> optimisation here, it must be taking a comparable amount of time
>>> per Jacobian computation, so the difference must be in the number
>>> of calls to compute the Jacobian.
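
For anyone not familiar with it, the generic form of an O(n^2) dense
Jacobian fill is sketched below, using simple finite differences. This is
only an illustration of why each Jacobian evaluation costs on the order of
n extra right-hand-side evaluations -- it is not the actual CCGS/PCEnv
code, which may well evaluate the entries differently -- but it shows why
the number of Jacobian evaluations, rather than the cost of a single one,
is the thing to compare between tools.

#include <math.h>
#include <stdlib.h>

/* Model right-hand side: fills dydt[0..n-1] given t and y[0..n-1]. */
typedef void (*rhs_fn)(double t, const double *y, double *dydt, int n);

/*
 * Generic finite-difference dense Jacobian: J[i][j] ~ df_i/dy_j, stored
 * row-major in an n*n array.  One extra RHS evaluation per column, so a
 * single Jacobian costs roughly n times as much as one RHS call.
 */
void dense_jacobian(rhs_fn f, double t, const double *y,
                    const double *f0, double *J, int n)
{
    double *ytmp = malloc(n * sizeof *ytmp);
    double *ftmp = malloc(n * sizeof *ftmp);
    int i, j;

    for (j = 0; j < n; j++) {
        /* perturb one state variable at a time */
        double h = 1e-8 * fmax(fabs(y[j]), 1.0);
        for (i = 0; i < n; i++) ytmp[i] = y[i];
        ytmp[j] += h;

        f(t, ytmp, ftmp, n);

        for (i = 0; i < n; i++)
            J[i * n + j] = (ftmp[i] - f0[i]) / h;   /* column j */
    }

    free(ytmp);
    free(ftmp);
}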
>>>> It seems it would be good to address this question now, because
>>>> if using something like CVODE still results in the same poor
>>>> performance, then I think some serious thinking needs to be done
>>>> about the underlying technology before an official release of
>>>> pcenv is made.
>>> I understand that you already have CVODE working with CCGS, so
>>> perhaps you can give some indication of how well CCGS-generated
>>> code works with CVODE?
>>>
>>> Best regards,
>>> Andrew
>>>
>> --
>> David Nickerson, PhD
>> Research Fellow
>> Division of Bioengineering
>> Faculty of Engineering
>> National University of Singapore
>> Email: david.nickerson at nus.edu.sg
>>
>
> _______________________________________________
> cellml-discussion mailing list
> cellml-discussion at cellml.org
> http://www.cellml.org/mailman/listinfo/cellml-discussion

--
David Nickerson, PhD
Research Fellow
Division of Bioengineering
Faculty of Engineering
National University of Singapore
Email: david.nickerson at nus.edu.sg



