CellML Discussion List



[cellml-discussion] pcenv development priorities


  • From: david.nickerson at nus.edu.sg (David Nickerson)
  • Subject: [cellml-discussion] pcenv development priorities
  • Date: Tue, 31 Oct 2006 09:36:24 +0800

That's good to see, Andrew. Have you tried running the same experiment
with one of the GSL integrators? The next step will be to see how the
simulation goes when run within PCEnv (which is essentially a GUI
around CIS, right? Or is CIS not using the cellml_corba_bridge?)

The simulation time with CIS is pretty close to that of my code, with
the small difference probably due to the slightly tighter tolerances I
used (1.0e-6 relative and 1.0e-8 absolute) and the fact that I'm
running on a 3 GHz P4.
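For anyone comparing the tolerance settings in this thread: a CVODE-style integrator folds the relative and absolute tolerances into a single per-variable error weight and accepts a step when the weighted RMS norm of the local error estimate is at most 1. The sketch below is illustrative pure Python (the function names are mine, not CIS's or CVODE's), just to show why a tighter rtol/atol pair can reject steps that a looser pair accepts:

```python
# Illustrative sketch of a CVODE-style tolerance test; not the CIS/CVODE code.

def error_weight(y, rtol=1.0e-6, atol=1.0e-8):
    """Per-variable weight: local error err_i is tested as err_i * w_i."""
    return [1.0 / (rtol * abs(yi) + atol) for yi in y]

def step_accepted(err_estimate, y, rtol=1.0e-6, atol=1.0e-8):
    """Accept a step when the weighted RMS norm of the error estimate <= 1."""
    w = error_weight(y, rtol, atol)
    norm = (sum((e * wi) ** 2 for e, wi in zip(err_estimate, w)) / len(y)) ** 0.5
    return norm <= 1.0

# Hypothetical two-variable state: a membrane potential (mV) and a gate.
y = [-85.0, 0.1]
err = [2.0e-4, 1.0e-7]
print(step_accepted(err, y, rtol=1.0e-6, atol=1.0e-8))  # -> False (rejected)
print(step_accepted(err, y, rtol=1.0e-3, atol=1.0e-3))  # -> True (accepted)
```

A rejected step forces the integrator to retry with a smaller step size, which is one mechanism by which tighter tolerances lengthen a run.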

I suspect the difference in time with COR might be due to what is being
done with the results. At least in my code, I am streaming the full
results set (just over 90 variables) to disk every 1 ms, which probably
takes as much time as the numerical integration (I should benchmark
that too, I guess). And you're probably doing the same thing using CIS,
right? The question is: what is COR doing with those results, Alan?
(This is assuming that the two different versions of the TNNP model are
actually equivalent :)
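To make the I/O cost concrete: here is a sketch of what streaming a full results set at a fixed 1 ms output interval amounts to. This is illustrative Python (not the actual simulation code); the 90-variable count and the 1 ms interval come from the paragraph above, while the in-memory buffer and the placeholder dynamics are stand-ins:

```python
# Sketch of streaming one full results row per 1 ms output point.
import csv
import io
import math

N_VARS = 90          # roughly the size of the TNNP results set (from above)
T_END_MS = 1000.0    # 1 s of cardiac activity
OUT_DT_MS = 1.0      # output every 1 ms

def fake_state(t):
    # Placeholder dynamics; a real run would query the integrator here.
    return [math.sin(0.01 * t + k) for k in range(N_VARS)]

buf = io.StringIO()          # stands in for the results file on disk
writer = csv.writer(buf)
t, rows = 0.0, 0
while t <= T_END_MS:
    writer.writerow([t] + fake_state(t))   # one full 91-column row per point
    rows += 1
    t += OUT_DT_MS

print(rows)   # -> 1001 output points for 0..1000 ms inclusive
```

Formatting and writing ~1001 rows of 91 floating-point values per simulated second is a non-trivial fraction of the work, so a tool that only plots one variable (or buffers in memory) could plausibly look faster for that reason alone.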


David.

Andrew Miller wrote:
> David Nickerson wrote:
>> Alan Garny wrote:
>>
>>> As you know, there is a CellML 1.0 version of the epicardial version of
>>> the
>>> ten Tusscher model, which we have. I computed that model for one second's
>>> worth of cardiac activity, plotting the trans-membrane potential. From
>>> there, we could extrapolate to 70 minutes by saying that the frequency of
>>> the stimulus is 1 Hz. Like David, I have set the maximum time step to 0.1
>>> ms. Here are some rough figures:
>>>
>>> Simulation time: 1037.4 s (i.e. ~17 min 17 sec)
>>> Computation time: 684.6 s (i.e. ~11 min 24 sec)
>>>
>>
> Hi Andre,
>
> I have now added support for CVODE into CIS, and have run your benchmark
> using the command-line test program RunCellML. I note that you didn't
> specify error control parameters, so I chose 1E-6 for both absolute and
> relative error control (with a maximum step size of 0.1).
> The command line used was...
> time ../../../../CellML_DOM_API/RunCellML
> file:///people/amil082/code/Andres_Benchmark/model/2004_tenTusscher/experiments/increasing-frequency-epicardial.xml
>
> step_type BDF15SIMP range 0,4200000,1 step_size_control 1E-6,1E-6,1,0.1
>
> This was run on the following processor...
> processor : 0
> vendor_id : GenuineIntel
> cpu family : 15
> model : 6
> model name : Intel(R) Pentium(R) D CPU 3.20GHz
> stepping : 2
> cpu MHz : 3192.180
> cache size : 2048 KB
> physical id : 0
> siblings : 1
> core id : 255
> cpu cores : 1
> fdiv_bug : no
> hlt_bug : no
> f00f_bug : no
> coma_bug : no
> fpu : yes
> fpu_exception : yes
> cpuid level : 6
> wp : yes
> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
> mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm
> pni monitor ds_cpl vmx cid cx16 xtpr lahf_lm
> bogomips : 6390.23
> The processor has hyperthreading turned on, but the integrator is
> entirely serial, so it only used one hyper-thread, which ran at 100%
> CPU.
>
> Note that RunCellML times include model loading and parsing, code
> generation, and compilation as well as the model run (but that isn't
> really a big deal, because the overhead is insignificant compared to
> the time it took to run the model).
>
> I get:
> real 25m49.238s
> user 25m46.581s
> sys 0m0.684s
>
> This is with the main program / SUNDIALS compiled with -O2, and the
> generated code compiled with -O3 -ffast-math. I have disassembled the
> computation functions in the -O3 -ffast-math code, and it looks
> reasonable: there are no CALL instructions any more (the built-in exp and
> log from gcc get inlined). I therefore doubt that differences in the
> quality of the generated code are the cause of the discrepancy. It is
> possible that Alan has managed to get the better benchmark by compiling
> CVODE with -O3 -ffast-math or other optimisations. Another possibility
> is that his CellML 1.0 ten Tusscher model behaves differently. Yet
> another is that the differences arise from the structure of the CVODE
> stepping loop, or from some parameters given to the solver. My stepping
> loop looks like this:
> https://svn.physiomeproject.org/svn/physiome/CellML_DOM_API/trunk/CIS/sources/CISSolve.cxx,
>
> see function SolveODEProblemCVODE.
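[For anyone following along without the CIS sources: a step-until-output loop of the kind Andrew describes looks roughly like the sketch below. This is illustrative pure Python with a forward-Euler stand-in for the actual BDF stepper, not the SolveODEProblemCVODE code; a real CVODE loop would interpolate to the output time rather than clip the internal step.]

```python
# Illustrative step-until-output loop; not the actual CISSolve.cxx code.
def integrate(f, y0, t0, t_end, out_dt, h_max=0.1):
    """Advance with internal steps capped at h_max; record at out_dt intervals."""
    t, y = t0, list(y0)
    results = [(t, list(y))]
    t_out = t0 + out_dt
    while t < t_end:
        # Internal step: never overshoot the next output time or t_end.
        # (CVODE would interpolate instead; clipping keeps the sketch simple.)
        h = min(h_max, t_out - t, t_end - t)
        y = [yi + h * fi for yi, fi in zip(y, f(t, y))]   # Euler stand-in
        t += h
        if t >= t_out - 1e-12:          # reached an output point: record it
            results.append((t, list(y)))
            t_out += out_dt
    return results

# Exponential decay dy/dt = -y, reported every 1 time unit up to t = 5.
res = integrate(lambda t, y: [-yi for yi in y], [1.0], 0.0, 5.0, 1.0)
print(len(res))   # -> 6: the initial point plus 5 output points
```

The point of interest for the benchmark is that the ratio of internal steps to output points, and whether the loop clips steps or interpolates, can both change the total work done for the same tolerances.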
>> So the total (predicted) wall clock run time is 17 minutes? Or 28
>> minutes? Any chance you could run something longer than a 1 s
>> simulation and extrapolate from there, if not a whole 70 minutes'
>> worth? If not, I guess I can run it next time I'm on a Windows
>> machine ;)
>>
> BTW, it would be useful if I could also have Alan's CellML 1.0 ten
> Tusscher model (with the embedded stimulus protocol), to see if this
> makes any difference.
>
> Best regards,
> Andrew
>>
>>> The computer on which I ran this is an IBM ThinkPad T42p (2 GHz
>>> processor with 2 GB of RAM). These figures can obviously vary quite a
>>> bit, since they're based on a one-second simulation and are subject
>>> to whatever my system does at the time... Still, that should give you
>>> a rough idea indeed...
>>>
>>> Alan.
>>>
>
> _______________________________________________
> cellml-discussion mailing list
> cellml-discussion at cellml.org
> http://www.cellml.org/mailman/listinfo/cellml-discussion

--
David Nickerson, PhD
Research Fellow
Division of Bioengineering
Faculty of Engineering
National University of Singapore
Email: david.nickerson at nus.edu.sg



