CellML Discussion List



[cellml-discussion] Binary and source snapshots for PCEnv on Win32 and Linux


  • From: ak.miller at auckland.ac.nz (Andrew Miller)
  • Subject: [cellml-discussion] Binary and source snapshots for PCEnv on Win32 and Linux
  • Date: Wed, 25 Oct 2006 12:17:20 +1300

Alan Garny wrote:
> Hi Andrew,
>
>
>>> I ran one of my CellML files using the default (i.e. Implicit
>>> Runge-Kutta (2) with solve) and it was *extremely* slow. After a
>>> couple of minutes I had to kill PCEnv, as it was slowing down my
>>> computer to the point where I could barely use it (and couldn't
>>> cancel the integration!).
>>>
>> It seems that cancel integration never got implemented. I
>> have opened a bug on this at http://www.cellml.org/tools/pcenv/bugs/18
>>
>> I'm not sure what is causing your speed issue without looking
>> at your model. What process is using 100% CPU, PCEnv or
>> cellml_corba_server? You mention below you tried this on
>> Linux, did you get the same performance issues there?
>>
>
> Please find attached a ZIP file that contains my model. Otherwise, the
> process that was taking 100% of the CPU was cellml_corba_server. Regarding
> Linux, I got exactly the same behaviour, but as I said I only tried with one
> integrator (Implicit Gear (M=1)).
Hi Alan,

A few questions about your code. Profiling reveals where my code spends
its time, and I am surprised that your code does not need to spend
similar time in the same places:

1) 74% of the model evaluation time is spent computing exponentials and
powers. Are you using the standard library functions from Delphi for
this, or did you implement some sort of interpolated lookup table or
similar? Do you pull out fragments of equations which can be computed
early? (my code could do better if I added support for this).
2) 62% of model evaluation time is spent computing Jacobians (this
overlaps with the time above, since I compute Jacobians numerically,
which requires recomputing the other variables after I perturb the rate
variables so that I can compute the correct rates). Do you compute your
Jacobians analytically or numerically? Do you keep a Jacobian cache, and
try to predict when you can reuse a Jacobian? Do you unroll your
Jacobian function across variables, and then add logic so that you only
have to recompute the variables needed to work out each rate? Any
similar optimisations like this?
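
For reference, the interpolated lookup-table idea in question 1 could
look roughly like the following Python sketch. The grid bounds and step
size are illustrative assumptions of mine, not values from either code
base; the point is only that exp() on a bounded argument range can be
replaced by a table lookup plus linear interpolation:

```python
import math

# Hypothetical sketch: tabulate exp() on a fixed grid and interpolate
# linearly between samples, trading a little accuracy for speed.
# X_MIN, X_MAX, and STEP are illustrative choices.
X_MIN, X_MAX, STEP = -50.0, 50.0, 0.01
TABLE = [math.exp(X_MIN + i * STEP)
         for i in range(int((X_MAX - X_MIN) / STEP) + 2)]

def fast_exp(x):
    """Linear interpolation into the precomputed exp table."""
    if x <= X_MIN or x >= X_MAX:
        return math.exp(x)          # fall back outside the table range
    pos = (x - X_MIN) / STEP
    i = int(pos)
    frac = pos - i
    return TABLE[i] * (1.0 - frac) + TABLE[i + 1] * frac
```

With a 0.01 step the interpolation error is far below what most model
runs would notice, but the accuracy/speed trade-off is exactly the kind
of thing the question above is probing.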
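
The numerical Jacobian scheme described in question 2 amounts to
something like the sketch below. Here `rates` is a hypothetical
stand-in for the generated model-evaluation code: it maps the state
vector to the rate vector, recomputing any intermediate variables
internally, which is why each perturbed evaluation is expensive:

```python
def numeric_jacobian(rates, y, eps=1e-8):
    """Forward-difference Jacobian J[i][j] = d(rates_i)/d(y_j).

    `rates` is a hypothetical callable standing in for generated model
    code; each call recomputes all intermediate variables, so an n-state
    model costs n+1 full model evaluations per Jacobian.
    """
    n = len(y)
    f0 = rates(y)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        y_pert = list(y)
        h = eps * max(abs(y[j]), 1.0)   # scale step to the variable
        y_pert[j] += h
        f1 = rates(y_pert)              # full re-evaluation per column
        for i in range(n):
            J[i][j] = (f1[i] - f0[i]) / h
    return J
```

The n+1 evaluations per Jacobian are what a cache, an analytic
Jacobian, or per-column recomputation logic would avoid.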

Best regards,
Andrew





Archive powered by MHonArc 2.6.18.
