On Jul 22, 8:25 pm, Richard <[email protected]> wrote:
> Sorry for spamming,
>
> but I figured my initial code didn't compute mu correctly, so I
> changed it to the following. The error message stays the same,
> though.
>
> ====================
> t = var ('t')
>
> S_0 = 1.5
> X_0 = 0.05
> Y_XS = 0.5
> K_S = 0.007
> mu_max = 0.8
>
> X = function ('X', t)
> S = function ('S', X)
>
> def mu(S):
>     return (mu_max * S) / (K_S + S)
>
> dXdt = diff (X, t) == mu(S) * X
> dSdt = diff (S, t) == -mu(S) * X / Y_XS
>
> desolve_system ([dXdt, dSdt], [X, S], ics = [0, X_0, S_0])
> ====================
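For what it's worth, the system itself (Monod-style growth) integrates cleanly numerically. Below is a minimal pure-Python sketch (no Sage, no Maxima) using the same constants, with both X and S treated as functions of t. Note that the quoted code declares S as a function of X while the ODE takes diff(S, t), which is probably also worth fixing. The RK4 integrator here is a stand-in for illustration, not Sage's solver.

```python
# Numeric sanity check of the system
#   dX/dt = mu(S) * X,   dS/dt = -mu(S) * X / Y_XS
# Since d(X + Y_XS*S)/dt = mu*X - mu*X = 0, the quantity
# X + Y_XS*S is conserved, which gives us something to verify.

S_0, X_0 = 1.5, 0.05
Y_XS, K_S, mu_max = 0.5, 0.007, 0.8

def mu(S):
    return (mu_max * S) / (K_S + S)

def rhs(X, S):
    m = mu(S)
    return m * X, -m * X / Y_XS

def rk4_step(X, S, h):
    # One classical Runge-Kutta step for the two-variable system.
    k1x, k1s = rhs(X, S)
    k2x, k2s = rhs(X + h/2 * k1x, S + h/2 * k1s)
    k3x, k3s = rhs(X + h/2 * k2x, S + h/2 * k2s)
    k4x, k4s = rhs(X + h * k3x, S + h * k3s)
    return (X + h/6 * (k1x + 2*k2x + 2*k3x + k4x),
            S + h/6 * (k1s + 2*k2s + 2*k3s + k4s))

X, S = X_0, S_0
h = 0.001
for _ in range(10000):   # integrate from t = 0 to t = 10
    X, S = rk4_step(X, S, h)

print(X, S)              # biomass grows, substrate is consumed
print(X + Y_XS * S)      # conserved; equals X_0 + Y_XS*S_0 = 0.8
```

Qualitatively: X climbs toward the carrying value X_0 + Y_XS*S_0 = 0.8 as S is driven toward zero, and the linear invariant X + Y_XS*S is preserved by RK4 up to roundoff.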
Thanks for this explicit example. I think what is happening is that
we are passing a list to the Maxima function "atvalue" (the 'X(t) is
just an unevaluated function X(t)), which Maxima probably doesn't
like.
626 ivar_ic = ics[0]
627 for dvar, ic in zip(dvars, ics[1:]):
--> 628 dvar.atvalue(ivar==ivar_ic, ic)
629 soln = dvars[0].parent().desolve(des, dvars)
630 if str(soln).strip() == 'false':
But I'm not sure why this is happening; doing the atvalue 'by hand'
seems to give the right thing. Does anyone else have an idea why this
happens?
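To make the ics bookkeeping concrete, here is a tiny pure-Python sketch of the unpacking the traceback shows at lines 626-628. The names are stand-ins for illustration, not Sage internals; the point is just how the flat ics list [t0, X(t0), S(t0)] is split into one (variable, value) pair per dependent variable before atvalue is called:

```python
# Stand-in sketch of the ics unpacking seen in the traceback.
ics = [0, 0.05, 1.5]       # [t0, X(t0), S(t0)], matching the example
dvars = ['X(t)', 'S(t)']   # placeholders for the unevaluated functions

ivar_ic = ics[0]                      # line 626: initial value of t
pairs = list(zip(dvars, ics[1:]))     # line 627: one ic per dvar
# line 628 then does dvar.atvalue(ivar == ivar_ic, ic) for each pair,
# so each dvar must be something Maxima's atvalue accepts on its own.
print(ivar_ic, pairs)
```

If any element of dvars reaching that loop is a list (or otherwise not a single unevaluated function expression), the per-pair atvalue call would be handed something Maxima rejects, which matches the symptom described above.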
- kcrisman
--
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to
[email protected]
For more options, visit this group at
http://groups.google.com/group/sage-support
URL: http://www.sagemath.org