Exported functions and types
Index
- SeeToDee.AdaptiveStep
- SeeToDee.BackwardEuler
- SeeToDee.ForwardEuler
- SeeToDee.Heun
- SeeToDee.RKC2
- SeeToDee.Rk3
- SeeToDee.Rk4
- SeeToDee.SimpleColloc
- SeeToDee.SuperSampler
- SeeToDee.SwitchingIntegrator
- SeeToDee.Trapezoidal
- SeeToDee.initialize
- SeeToDee.linearize
Docstrings
Integrators
SeeToDee.ForwardEuler — Type
f_discrete = ForwardEuler(f, Ts; supersample = 1)

Discretize a continuous-time dynamics function f using forward Euler with sample time Tₛ. f is assumed to have the signature f : (x,u,p,t)->ẋ and the returned function f_discrete : (x,u,p,t)->x(t+Tₛ).
supersample determines the number of internal steps; it can be increased to make the integration more accurate, but it might be favorable to choose a higher-order method instead. u is assumed constant during all steps.
If called with StaticArrays, this integrator is allocation free.
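For illustration, a minimal usage sketch; the dynamics function f and the numerical values are invented for this example, not part of the package:

using SeeToDee, StaticArrays
# Continuous-time dynamics ẋ = -x + u (invented example system)
f(x, u, p, t) = -x + u
f_discrete = SeeToDee.ForwardEuler(f, 0.01)      # sample time Ts = 0.01
x1 = f_discrete(SA[1.0], SA[0.0], 0, 0.0)        # advances one step to x(t+Ts)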
SeeToDee.Heun — Type
f_discrete = Heun(f, Ts; supersample = 1)

Discretize a continuous-time dynamics function f using Heun's method with sample time Tₛ. f is assumed to have the signature f : (x,u,p,t)->ẋ and the returned function f_discrete : (x,u,p,t)->x(t+Tₛ).
supersample determines the number of internal steps; it can be increased to make the integration more accurate, but it might be favorable to choose a higher-order method instead. u is assumed constant during all steps.
If called with StaticArrays, this integrator is allocation free.
SeeToDee.Rk3 — Type
f_discrete = Rk3(f, Ts; supersample = 1)

Discretize a continuous-time dynamics function f using RK3 with sample time Tₛ. f is assumed to have the signature f : (x,u,p,t)->ẋ and the returned function f_discrete : (x,u,p,t)->x(t+Tₛ).
supersample determines the number of internal steps; 1 is often sufficient, but it can be increased to make the integration more accurate. u is assumed constant during all steps.
If called with StaticArrays, this integrator is allocation free.
SeeToDee.Rk4 — Type
f_discrete = Rk4(f, Ts; supersample = 1)

Discretize a continuous-time dynamics function f using RK4 with sample time Tₛ. f is assumed to have the signature f : (x,u,p,t)->ẋ and the returned function f_discrete : (x,u,p,t)->x(t+Tₛ).
supersample determines the number of internal steps; 1 is often sufficient, but it can be increased to make the integration more accurate. u is assumed constant during all steps.
If called with StaticArrays, this integrator is allocation free.
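As a hedged sketch, discretizing an invented pendulum-like system with supersampling (the dynamics and constants are illustrative only):

using SeeToDee, StaticArrays
# Invented pendulum dynamics: x = [θ, θ̇], u is a torque input
function pendulum(x, u, p, t)
    g, L = 9.81, 1.0
    SA[x[2], -g / L * sin(x[1]) + u[1]]
end
f_discrete = SeeToDee.Rk4(pendulum, 0.05; supersample = 2)  # two internal RK4 steps per sample
x_next = f_discrete(SA[0.1, 0.0], SA[0.0], 0, 0.0)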
SeeToDee.RKC2 — Type
f_discrete = RKC2(f, Ts; supersample=1, stages=nothing, L_est=nothing, eta=0.05)

Discretize a continuous-time dynamics function f using a second-order stabilized explicit Runge–Kutta–Chebyshev (RKC2) method with sample time Tₛ.
- f must have the signature f(x, u, p, t) -> ẋ.
- The returned callable f_discrete(x, u, p, t; Ts=Ts_override) advances one step to x(t+Tₛ).
- supersample: number of internal substeps per call (keeps u constant inside the step).
- stages: number of Chebyshev stages m. If not provided, it is chosen from L_est.
- L_est: estimate of the spectral radius of the Jacobian (or diffusion operator) over the step; if given (and stages not given), we pick m large enough so the step is stable.
- eta: damping parameter in (0,1); 0.05–0.1 are common. Larger eta slightly shortens the stability interval but improves internal stability.
Notes
• This is an explicit stabilized method: very effective when stiffness is mostly dissipative (eigenvalues with large negative real parts, e.g., diffusion/semi-discrete parabolic PDEs). It is not a general cure for oscillatory stiffness (strong imaginary spectrum).
• If called with StaticArrays, the method is allocation free.
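As an illustrative sketch of the stabilized-method use case: the semi-discretized heat equation below and its spectral-radius estimate L_est = 4/dx² are assumptions made for this example, not from the package documentation:

using SeeToDee
# Semi-discrete 1-D heat equation ẋᵢ = (xᵢ₋₁ - 2xᵢ + xᵢ₊₁)/dx² with zero Dirichlet boundaries
function heat(x, u, p, t)
    dx = p
    ẋ = similar(x)
    for i in eachindex(x)
        left  = i == 1         ? 0.0 : x[i-1]
        right = i == length(x) ? 0.0 : x[i+1]
        ẋ[i] = (left - 2x[i] + right) / dx^2
    end
    return ẋ
end
N  = 50
dx = 1 / (N + 1)
L_est = 4 / dx^2                                  # rough bound on the Jacobian spectral radius
f_discrete = SeeToDee.RKC2(heat, 1e-3; L_est = L_est, eta = 0.05)
x0 = [sin(pi * i * dx) for i in 1:N]
x1 = f_discrete(x0, 0, dx, 0.0)                   # one step of length Ts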
SeeToDee.SimpleColloc — Type
SimpleColloc(dyn, Ts, nx, na, nu; n = 5, abstol = 1.0e-8, solver=SimpleNewtonRaphson(), residual=false)
SimpleColloc(dyn, Ts, x_inds, a_inds, nu; n = 5, abstol = 1.0e-8, solver=SimpleNewtonRaphson(), residual=false)

A simple direct-collocation integrator that can be stepped manually, similar to the function returned by SeeToDee.Rk4.
This integrator supports differential-algebraic equations (DAE); the dynamics is expected to be on either of the following forms:
- nx, na provided: (xz,u,p,t)->[ẋ; res] where xz is a vector [x; z] containing the differential state x and the algebraic variables z in this order. res is the algebraic residuals, and u is the control input. The algebraic residuals are thus assumed to be the last na elements of the arrays returned by the dynamics (the convention used by ModelingToolkit).
- x_inds, a_inds provided: (xz,u,p,t)->xzd where xzd[x_inds] = ẋ and xzd[a_inds] = res.
The returned function has the signature f_discrete : (x,u,p,t)->x(t+Tₛ).
This integrator also supports a fully implicit form of the dynamics
\[0 = F(ẋ, x, u, p, t)\]
When using this interface, the dynamics is called using an additional input ẋ as the first argument, and the return value is expected to be the residual of the entire state descriptor. To use the implicit form, pass residual = true.
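For example, a hedged sketch of the residual signature for an invented mass-spring system (the system and parameter values are illustrative only):

using SeeToDee
# Residual form 0 = F(ẋ, x, u, p, t) of a mass-spring system: ẋ₁ = x₂, m·ẋ₂ = -k·x₁ + u
function msd_residual(ẋ, x, u, p, t)
    m, k = p
    [ẋ[1] - x[2],
     m * ẋ[2] + k * x[1] - u[1]]
end
integ = SeeToDee.SimpleColloc(msd_residual, 0.1, 2, 0, 1; residual = true)
x1 = integ([1.0, 0.0], [0.0], (1.0, 2.0), 0.0)    # advance one sample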
A Gauss-Radau collocation method is used to discretize the dynamics. The resulting nonlinear problem is solved using (by default) a Newton-Raphson method. This method handles stiff dynamics.
Arguments:
- dyn: Dynamics function (continuous time)
- Ts: Sample time
- nx: Number of differential state variables
- na: Number of algebraic variables
- x_inds, a_inds: If indices are provided instead of nx and na, the mass matrix is assumed to be diagonal, with ones located at x_inds and zeros at a_inds. For maximum efficiency, provide these indices as unit ranges or static arrays.
- nu: Number of inputs
- n: Number of collocation points. n=2 corresponds to trapezoidal integration.
- abstol: Tolerance for the root-finding algorithm
- residual: If true the dynamics function is assumed to return the residual of the entire state descriptor and have the signature (ẋ, x, u, p, t) -> res. This is sometimes called "fully implicit form".
- solver: Any compatible SciML nonlinear solver to use for the root-finding problem
- scale_x: If provided, the state variables are scaled by this vector before being passed to the nonlinear solver. This can improve convergence for states with very different magnitudes. The scaling is applied as res .= res ./ scale_x before being passed to the solver.
Extended help
- Super-sampling is not supported by this integrator, see SeeToDee.SuperSampler.
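For concreteness, a sketch of stepping a small made-up DAE on the [ẋ; res] form (the system, initial values, and input are illustrative assumptions):

using SeeToDee
# Invented DAE with differential state x, algebraic variable z and input u:
#   ẋ = -x + z
#   0 = z - u[1]^2        (algebraic residual, last na elements)
function dae_dynamics(xz, u, p, t)
    x, z = xz[1], xz[2]
    [-x + z,              # ẋ
     z - u[1]^2]          # res
end
integ = SeeToDee.SimpleColloc(dae_dynamics, 0.1, 1, 1, 1; n = 5)
xz1 = integ([0.0, 1.0], [1.0], 0, 0.0)            # returns [x; z] at t + Ts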
SeeToDee.Trapezoidal — Type
Trapezoidal(dyn, Ts, nx, na, nu; abstol = 1.0e-8, solver=SimpleNewtonRaphson(), residual=false)
Trapezoidal(dyn, Ts, x_inds, a_inds, nu; abstol = 1.0e-8, solver=SimpleNewtonRaphson(), residual=false)

A simple trapezoidal integrator that can be stepped manually, similar to the function returned by SeeToDee.Rk4.
This integrator supports differential-algebraic equations (DAE); the dynamics is expected to be on either of the following forms:
- nx, na provided: (xz,u,p,t)->[ẋ; res] where xz is a vector [x; z] containing the differential state x and the algebraic variables z in this order. res is the algebraic residuals, and u is the control input. The algebraic residuals are thus assumed to be the last na elements of the arrays returned by the dynamics (the convention used by ModelingToolkit).
- x_inds, a_inds provided: (xz,u,p,t)->xzd where xzd[x_inds] = ẋ and xzd[a_inds] = res.
The returned function has the signature f_discrete : (x,u,p,t)->x(t+Tₛ).
Arguments:
- dyn: Dynamics function (continuous time)
- Ts: Sample time
- nx: Number of differential state variables
- na: Number of algebraic variables
- x_inds, a_inds: If indices are provided instead of nx and na, the mass matrix is assumed to be diagonal, with ones located at x_inds and zeros at a_inds. For maximum efficiency, provide these indices as unit ranges or static arrays.
- nu: Number of inputs
- abstol: Tolerance for the root-finding algorithm
- residual: If true the dynamics function is assumed to return the residual of the entire state descriptor and have the signature (ẋ, x, u, p, t) -> res. This is sometimes called "fully implicit form".
- solver: Any compatible SciML nonlinear solver to use for the root-finding problem
- scale_x: If provided, the residual is scaled by this vector before being passed to the nonlinear solver, res ./ scale_x. This can help with convergence if the state variables have very different magnitudes.
Extended help
- Super-sampling is not supported by this integrator, see SeeToDee.SuperSampler.
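A minimal construction sketch, assuming an invented scalar ODE with no algebraic variables:

using SeeToDee
stiff(x, u, p, t) = [-100.0 * x[1] + u[1]]        # invented stiff scalar ODE
integ = SeeToDee.Trapezoidal(stiff, 0.1, 1, 0, 1) # nx = 1, na = 0, nu = 1
x1 = integ([1.0], [0.0], 0, 0.0)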
SeeToDee.BackwardEuler — Type
BackwardEuler(dyn, Ts, nx, na, nu; abstol = 1.0e-8, solver=SimpleNewtonRaphson(), residual=false)
BackwardEuler(dyn, Ts, x_inds, a_inds, nu; abstol = 1.0e-8, solver=SimpleNewtonRaphson(), residual=false)

A simple backward Euler integrator that can be stepped manually.
This integrator supports differential-algebraic equations (DAE); the dynamics is expected to be on either of the following forms:
- nx, na provided: (xz,u,p,t)->[ẋ; res] where xz is a vector [x; z] containing the differential state x and the algebraic variables z in this order. res is the algebraic residuals, and u is the control input. The algebraic residuals are thus assumed to be the last na elements of the arrays returned by the dynamics (the convention used by ModelingToolkit).
- x_inds, a_inds provided: (xz,u,p,t)->xzd where xzd[x_inds] = ẋ and xzd[a_inds] = res.
The returned function has the signature f_discrete : (x,u,p,t)->x(t+Tₛ).
This integrator also supports a fully implicit form of the dynamics
\[0 = F(ẋ, x, u, p, t)\]
When using this interface, the dynamics is called using an additional input ẋ as the first argument, and the return value is expected to be the residual of the entire state descriptor. To use the implicit form, pass residual = true.
Arguments:
- dyn: Dynamics function (continuous time)
- Ts: Sample time
- nx: Number of differential state variables
- na: Number of algebraic variables
- x_inds, a_inds: If indices are provided instead of nx and na, the mass matrix is assumed to be diagonal, with ones located at x_inds and zeros at a_inds. For maximum efficiency, provide these indices as unit ranges or static arrays.
- nu: Number of inputs
- abstol: Tolerance for the root-finding algorithm
- residual: If true the dynamics function is assumed to return the residual of the entire state descriptor and have the signature (ẋ, x, u, p, t) -> res. This is sometimes called "fully implicit form".
- solver: Any compatible SciML nonlinear solver to use for the root-finding problem
- scale_x: If provided, the residual is scaled by this vector before being passed to the nonlinear solver, res ./ scale_x. This can help with convergence if the state variables have very different magnitudes.
Notes
The backward Euler method is a first-order implicit method that is unconditionally stable (A-stable), making it suitable for stiff problems. The method solves:
\[x(t+Tₛ) = x(t) + Tₛ \cdot f(x(t+Tₛ), u, p, t+Tₛ)\]
This requires solving a nonlinear system at each step, but provides better stability properties than explicit methods like ForwardEuler, especially for stiff dynamics. It is simpler and cheaper per iteration than Trapezoidal (only one dynamics evaluation per solve vs. two), but is only first-order accurate compared to Trapezoidal's second-order accuracy.
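To make the trade-off concrete, a hedged sketch on an invented stiff scalar test equation; the sample time and rate constant are illustrative only:

using SeeToDee
decay(x, u, p, t) = [-1000.0 * x[1]]              # invented stiff test equation ẋ = -1000x
be   = SeeToDee.BackwardEuler(decay, 0.01, 1, 0, 1)
trap = SeeToDee.Trapezoidal(decay, 0.01, 1, 0, 1)
x_be   = be([1.0], [0.0], 0, 0.0)                 # stable, first-order accurate
x_trap = trap([1.0], [0.0], 0, 0.0)               # stable, second-order accurate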
Extended help
- Super-sampling is not supported by this integrator, see SeeToDee.SuperSampler.
SeeToDee.AdaptiveStep — Type
AdaptiveStep(integrator)

A wrapper that enables automatic step subdivision for taking arbitrary-length steps with any integrator.
When the requested step size Ts is larger than the integrator's effective step size (largest_Ts), AdaptiveStep automatically subdivides the step using the integrator's internal supersample mechanism (for explicit integrators) or manual stepping (for implicit integrators).
Fields
- integ: The wrapped integrator
- largest_Ts: The largest step size the integrator can take in a single call (Ts / supersample)
Usage
# Wrap any integrator to enable automatic step subdivision
base_integrator = Rk4(dynamics, 0.1; supersample=2) # largest_Ts = 0.05
adaptive_integrator = AdaptiveStep(base_integrator)
# Take arbitrary step sizes - automatically subdivides when needed
x_next = adaptive_integrator(x, u, p, t; Ts=0.3) # Uses supersample=6 internally

Notes
- This wrapper does NOT use error control - it only ensures step sizes never exceed largest_Ts
- For explicit integrators (Rk4, Rk3, ForwardEuler, Heun), uses the built-in supersample mechanism
- For implicit integrators (SimpleColloc, Trapezoidal), performs manual step subdivision
- When Ts ≤ largest_Ts, calls the integrator directly without subdivision
Examples
using SeeToDee, StaticArrays
# Define dynamics
function simple_dynamics(x, u, p, t)
return -x + u
end
# Create base integrator with supersample=3
base = SeeToDee.Rk4(simple_dynamics, 0.1; supersample=3) # largest_Ts = 0.1/3 ≈ 0.033
# Wrap with AdaptiveStep
adaptive = SeeToDee.AdaptiveStep(base)
x0 = SA[1.0]
u = SA[0.5]
# Small step - no subdivision needed
x1 = adaptive(x0, u, 0, 0; Ts=0.02) # Direct call
# Large step - automatic subdivision
x2 = adaptive(x0, u, 0, 0; Ts=0.5) # Uses supersample=15 internally

SeeToDee.SuperSampler — Type
SuperSampler(integ, supersample)

A wrapper that enables supersampling for any integrator by manually stepping multiple times.
When an integrator doesn't have built-in supersample support (like Trapezoidal, BackwardEuler, or SimpleColloc), this wrapper allows you to take supersample internal steps to produce one effective step of duration Ts.
Fields
- integ: The wrapped integrator
- supersample: Number of internal steps to take per call
Usage
# Wrap an implicit integrator to add supersampling
base_integrator = Trapezoidal(dynamics, 0.1, 4, 0, 1)
supersampled = SuperSampler(base_integrator, 5) # Takes 5 steps of 0.02s each
# Each call advances by Ts=0.1 using 5 internal steps
x_next = supersampled(x, u, p, t) # Equivalent to 5 steps of 0.02s

Notes
- The input u is held constant during all internal steps
- Each internal step uses time Ts / supersample
- Time t is advanced appropriately for each internal step
- All args and kwargs are forwarded to the wrapped integrator
- Type-stable when used with StaticArrays
Comparison with AdaptiveStep
- AdaptiveStep: Handles arbitrary step sizes by automatic subdivision (for variable Ts)
- SuperSampler: Fixed supersampling for improved accuracy (for constant Ts with more substeps)
Examples
using SeeToDee, StaticArrays
# Define dynamics
function dynamics(x, u, p, t)
return -x + u
end
# Create implicit integrator without supersample support
base = Trapezoidal(dynamics, 0.1, 1, 0, 1)
# Add supersampling for better accuracy
supersampled = SuperSampler(base, 10) # 10 internal steps
x0 = SA[1.0]
u = SA[0.5]
# Single step with 10 internal substeps
x1 = supersampled(x0, u, 0, 0.0)
# Can also override Ts at call time (if integrator supports it)
x2 = supersampled(x0, u, 0, 0.0; Ts=0.05) # Uses 10 steps of 0.005s

SeeToDee.SwitchingIntegrator — Type
SwitchingIntegrator(int_true, int_false, cond)

Create an integrator that switches between two different integrators based on a condition.
- int_true: Integrator to use when cond(...) is true
- int_false: Integrator to use when cond(...) is false
- cond(x,u,p,t,args...): A function that takes the same arguments as the integrator and returns a Bool
This can be used to, e.g., use a faster integrator when the state is in a certain region and a more accurate (but slower) integrator otherwise.
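A hedged sketch of switching on a state-dependent condition; the dynamics, threshold, and choice of integrators are invented for illustration:

using SeeToDee, StaticArrays
f(x, u, p, t) = SA[x[2], -sin(x[1]) + u[1]]            # invented pendulum-like dynamics
fast     = SeeToDee.Rk4(f, 0.05)                       # cheap integrator
accurate = SeeToDee.Rk4(f, 0.05; supersample = 10)     # more internal steps, more accurate
cond(x, u, p, t, args...) = abs(x[2]) > 2.0            # use `accurate` when the velocity is large
integ = SeeToDee.SwitchingIntegrator(accurate, fast, cond)
x1 = integ(SA[0.1, 3.0], SA[0.0], 0, 0.0)              # cond is true here ⇒ uses `accurate`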
Utilities
SeeToDee.linearize — Function
A,B = linearize(f, x0, u0, p, t)

Linearize the dynamics function f(x, u, p, t) w.r.t. the state x and the input u. Returns the Jacobians A, B in
\[ẋ = A\, Δx + B\, Δu\]
Works for both continuous and discrete-time dynamics.
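A small sketch; the dynamics function and operating point below are illustrative assumptions:

using SeeToDee, StaticArrays
f(x, u, p, t) = SA[x[2], -sin(x[1]) + u[1]]       # invented continuous-time dynamics
A, B = SeeToDee.linearize(f, SA[0.0, 0.0], SA[0.0], 0, 0.0)
# At this operating point A ≈ [0 1; -1 0] and B is the 2×1 matrix [0; 1]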
SeeToDee.initialize — Function
initialize(integ, x0, u, p, t = 0.0; solver=integ.solver, abstol=integ.abstol)

Given the differential state variables in x0, initialize the algebraic variables by solving the nonlinear problem f(x,u,p,t) = 0 using the provided solver.
Arguments:
- integ: An integrator like SeeToDee.SimpleColloc
- x0: Initial state descriptor (differential and algebraic variables, where the algebraic variables come last)
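For example, a hedged sketch using an invented DAE similar to the SimpleColloc example above; initialize solves for the algebraic variable z from an inconsistent initial guess:

using SeeToDee
# Invented DAE: ẋ = -x + z, 0 = z - u[1]^2
dae_dynamics(xz, u, p, t) = [-xz[1] + xz[2], xz[2] - u[1]^2]
integ = SeeToDee.SimpleColloc(dae_dynamics, 0.1, 1, 1, 1)
xz0 = SeeToDee.initialize(integ, [0.5, 0.0], [1.0], 0)  # solves z - u[1]^2 = 0 for the algebraic z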