Exported functions and types

Docstrings

Integrators

SeeToDee.ForwardEuler - Type
f_discrete = ForwardEuler(f, Ts; supersample = 1)

Discretize a continuous-time dynamics function f using forward Euler with sample time Tₛ. f is assumed to have the signature f : (x,u,p,t)->ẋ, and the returned function has the signature f_discrete : (x,u,p,t)->x(t+Tₛ).

supersample determines the number of internal steps; it can be increased to make the integration more accurate, but it might be favorable to choose a higher-order method instead. u is assumed constant during all internal steps.

If called with StaticArrays, this integrator is allocation free.
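
A minimal sketch of constructing and stepping the discretized function; the dynamics below is a hypothetical first-order system, not part of the package:

using SeeToDee, StaticArrays

# Hypothetical continuous-time dynamics ẋ = -x + u
dynamics(x, u, p, t) = -x + u

f_discrete = SeeToDee.ForwardEuler(dynamics, 0.01; supersample = 4)
x1 = f_discrete(SA[1.0], SA[0.5], 0, 0.0)  # state at t + 0.01, using 4 internal Euler steps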

source
SeeToDee.Heun - Type
f_discrete = Heun(f, Ts; supersample = 1)

Discretize a continuous-time dynamics function f using Heun's method with sample time Tₛ. f is assumed to have the signature f : (x,u,p,t)->ẋ, and the returned function has the signature f_discrete : (x,u,p,t)->x(t+Tₛ).

supersample determines the number of internal steps; it can be increased to make the integration more accurate, but it might be favorable to choose a higher-order method instead. u is assumed constant during all internal steps.

If called with StaticArrays, this integrator is allocation free.

source
SeeToDee.Rk3 - Type
f_discrete = Rk3(f, Ts; supersample = 1)

Discretize a continuous-time dynamics function f using RK3 with sample time Tₛ. f is assumed to have the signature f : (x,u,p,t)->ẋ, and the returned function has the signature f_discrete : (x,u,p,t)->x(t+Tₛ).

supersample determines the number of internal steps; 1 is often sufficient, but it can be increased to make the integration more accurate. u is assumed constant during all internal steps.

If called with StaticArrays, this integrator is allocation free.

source
SeeToDee.Rk4 - Type
f_discrete = Rk4(f, Ts; supersample = 1)

Discretize a continuous-time dynamics function f using RK4 with sample time Tₛ. f is assumed to have the signature f : (x,u,p,t)->ẋ, and the returned function has the signature f_discrete : (x,u,p,t)->x(t+Tₛ).

supersample determines the number of internal steps; 1 is often sufficient, but it can be increased to make the integration more accurate. u is assumed constant during all internal steps.

If called with StaticArrays, this integrator is allocation free.
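
A minimal simulation sketch, assuming a hypothetical pendulum dynamics function (not part of the package):

using SeeToDee, StaticArrays

# Hypothetical pendulum dynamics: x = [θ, ω], ẋ = [ω, -g sin(θ) + u]
function pendulum(x, u, p, t)
    θ, ω = x
    SA[ω, -9.81 * sin(θ) + u[1]]
end

f_discrete = SeeToDee.Rk4(pendulum, 0.01)   # 100 Hz sample rate
x0 = SA[0.1, 0.0]
u  = SA[0.0]
x1 = f_discrete(x0, u, 0, 0.0)              # state at t = 0.01
x2 = f_discrete(x1, u, 0, 0.01)             # state at t = 0.02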

source
SeeToDee.RKC2 - Type
f_discrete = RKC2(f, Ts; supersample=1, stages=nothing, L_est=nothing, eta=0.05)

Discretize a continuous-time dynamics function f using a second-order stabilized explicit Runge–Kutta–Chebyshev (RKC2) method with sample time Tₛ.

  • f must have the signature f(x, u, p, t) -> ẋ.
  • The returned callable f_discrete(x,u,p,t; Ts=Ts_override) advances one step to x(t+Tₛ).
  • supersample = number of internal substeps per call (keeps u constant inside the step).
  • stages = number of Chebyshev stages m. If not provided, it is chosen from L_est.
  • L_est = estimate of the spectral radius of the Jacobian (or diffusion operator) over the step; if given (and stages is not given), the number of stages m is chosen large enough that the step is stable.
  • eta = damping parameter in (0,1); 0.05–0.1 are common. Larger eta slightly shortens the stability interval but improves internal stability.

Notes

• This is an explicit stabilized method: very effective when stiffness is mostly dissipative (eigenvalues with large negative real parts, e.g., diffusion/semi-discrete parabolic PDEs). It is not a general cure for oscillatory stiffness (strong imaginary spectrum).

• If called with StaticArrays, the method is allocation free.
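
A minimal sketch, assuming a hypothetical dissipative linear system and a rough spectral-radius bound supplied through L_est:

using SeeToDee

# Hypothetical stiff, dissipative linear system (eigenvalues ≈ -38 and -262)
A = [-100.0 100.0; 100.0 -200.0]
diffusion(x, u, p, t) = A * x .+ u

# With L_est given (and stages not given), the number of Chebyshev stages is
# chosen so that the step is stable for this spectral-radius estimate.
f_discrete = SeeToDee.RKC2(diffusion, 0.01; L_est = 300.0, eta = 0.05)
x1 = f_discrete([1.0, 0.0], [0.0, 0.0], 0, 0.0)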

source
SeeToDee.SimpleColloc - Type
SimpleColloc(dyn, Ts, nx, na, nu; n = 5, abstol = 1.0e-8, solver=SimpleNewtonRaphson(), residual=false)
SimpleColloc(dyn, Ts, x_inds, a_inds, nu; n = 5, abstol = 1.0e-8, solver=SimpleNewtonRaphson(), residual=false)

A simple direct-collocation integrator that can be stepped manually, similar to the function returned by SeeToDee.Rk4.

This integrator supports differential-algebraic equations (DAEs); the dynamics is expected to be on either of the following forms:

  • nx, na provided: (xz,u,p,t)->[ẋ; res] where xz is a vector [x; z] containing the differential state x and the algebraic variables z, in that order. res is the vector of algebraic residuals and u is the control input. The algebraic residuals are thus assumed to be the last na elements of the array returned by the dynamics (the convention used by ModelingToolkit).
  • x_inds, a_inds provided: (xz,u,p,t)->xzd where xzd[x_inds] = ẋ and xzd[a_inds] = res.

The returned function has the signature f_discrete : (x,u,p,t)->x(t+Tₛ).

This integrator also supports a fully implicit form of the dynamics

\[0 = F(ẋ, x, u, p, t)\]

When using this interface, the dynamics function is called with the state derivative ẋ as an additional first argument, and the return value is expected to be the residual of the entire state descriptor. To use the implicit form, pass residual = true.

A Gauss-Radau collocation method is used to discretize the dynamics. The resulting nonlinear problem is solved using (by default) a Newton-Raphson method. This method handles stiff dynamics.

Arguments:

  • dyn: Dynamics function (continuous time)
  • Ts: Sample time
  • nx: Number of differential state variables
  • na: Number of algebraic variables
  • x_inds, a_inds: If indices are provided instead of nx and na, the mass matrix is assumed to be diagonal, with ones located at x_inds and zeros at a_inds. For maximum efficiency, provide these indices as unit ranges or static arrays.
  • nu: Number of inputs
  • n: Number of collocation points. n=2 corresponds to trapezoidal integration.
  • abstol: Tolerance for the root finding algorithm
  • residual: If true the dynamics function is assumed to return the residual of the entire state descriptor and have the signature (ẋ, x, u, p, t) -> res. This is sometimes called "fully implicit form".
  • solver: Any compatible SciML Nonlinear solver to use for the root finding problem
  • scale_x: If provided, the residual is scaled by this vector (res .= res ./ scale_x) before being passed to the nonlinear solver. This can improve convergence when the state variables have very different magnitudes.
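
A minimal sketch of stepping a DAE, assuming a hypothetical index-1 system with one differential and one algebraic variable (dae_dynamics and the numbers below are illustrative only):

using SeeToDee

# Hypothetical index-1 DAE: ẋ = -x + z, algebraic residual 0 = z - u², with xz = [x; z]
function dae_dynamics(xz, u, p, t)
    x, z = xz
    [-x + z, z - u[1]^2]   # the last na elements are the algebraic residuals
end

discrete_dyn = SeeToDee.SimpleColloc(dae_dynamics, 0.01, 1, 1, 1; n = 5, abstol = 1e-8)
xz0 = [1.0, 1.0]                    # [x; z], chosen consistent with u below (z = u²)
u   = [1.0]
xz1 = discrete_dyn(xz0, u, 0, 0.0)  # advance one sample of 0.01 s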

Extended help

source
SeeToDee.Trapezoidal - Type
Trapezoidal(dyn, Ts, nx, na, nu; abstol = 1.0e-8, solver=SimpleNewtonRaphson(), residual=false)
Trapezoidal(dyn, Ts, x_inds, a_inds, nu; abstol = 1.0e-8, solver=SimpleNewtonRaphson(), residual=false)

A simple trapezoidal integrator that can be stepped manually, similar to the function returned by SeeToDee.Rk4.

This integrator supports differential-algebraic equations (DAEs); the dynamics is expected to be on either of the following forms:

  • nx, na provided: (xz,u,p,t)->[ẋ; res] where xz is a vector [x; z] containing the differential state x and the algebraic variables z, in that order. res is the vector of algebraic residuals and u is the control input. The algebraic residuals are thus assumed to be the last na elements of the array returned by the dynamics (the convention used by ModelingToolkit).
  • x_inds, a_inds provided: (xz,u,p,t)->xzd where xzd[x_inds] = ẋ and xzd[a_inds] = res.

The returned function has the signature f_discrete : (x,u,p,t)->x(t+Tₛ).

Arguments:

  • dyn: Dynamics function (continuous time)
  • Ts: Sample time
  • nx: Number of differential state variables
  • na: Number of algebraic variables
  • x_inds, a_inds: If indices are provided instead of nx and na, the mass matrix is assumed to be diagonal, with ones located at x_inds and zeros at a_inds. For maximum efficiency, provide these indices as unit ranges or static arrays.
  • nu: Number of inputs
  • abstol: Tolerance for the root finding algorithm
  • residual: If true the dynamics function is assumed to return the residual of the entire state descriptor and have the signature (ẋ, x, u, p, t) -> res. This is sometimes called "fully implicit form".
  • solver: Any compatible SciML Nonlinear solver to use for the root finding problem
  • scale_x: If provided, the residual is scaled by this vector before being passed to the nonlinear solver, res ./ scale_x. This can help with convergence if the state variables have very different magnitudes.
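
A sketch of the fully implicit (residual) interface enabled by residual = true, assuming a hypothetical scalar dynamics written as a residual (implicit_dyn is illustrative only):

using SeeToDee

# Hypothetical dynamics in residual form: 0 = ẋ + x - u
implicit_dyn(ẋ, x, u, p, t) = ẋ .+ x .- u

f_discrete = SeeToDee.Trapezoidal(implicit_dyn, 0.1, 1, 0, 1; residual = true)
x1 = f_discrete([1.0], [0.5], 0, 0.0)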

Extended help

source
SeeToDee.BackwardEuler - Type
BackwardEuler(dyn, Ts, nx, na, nu; abstol = 1.0e-8, solver=SimpleNewtonRaphson(), residual=false)
BackwardEuler(dyn, Ts, x_inds, a_inds, nu; abstol = 1.0e-8, solver=SimpleNewtonRaphson(), residual=false)

A simple backward Euler integrator that can be stepped manually.

This integrator supports differential-algebraic equations (DAEs); the dynamics is expected to be on either of the following forms:

  • nx, na provided: (xz,u,p,t)->[ẋ; res] where xz is a vector [x; z] containing the differential state x and the algebraic variables z, in that order. res is the vector of algebraic residuals and u is the control input. The algebraic residuals are thus assumed to be the last na elements of the array returned by the dynamics (the convention used by ModelingToolkit).
  • x_inds, a_inds provided: (xz,u,p,t)->xzd where xzd[x_inds] = ẋ and xzd[a_inds] = res.

The returned function has the signature f_discrete : (x,u,p,t)->x(t+Tₛ).

This integrator also supports a fully implicit form of the dynamics

\[0 = F(ẋ, x, u, p, t)\]

When using this interface, the dynamics function is called with the state derivative ẋ as an additional first argument, and the return value is expected to be the residual of the entire state descriptor. To use the implicit form, pass residual = true.

Arguments:

  • dyn: Dynamics function (continuous time)
  • Ts: Sample time
  • nx: Number of differential state variables
  • na: Number of algebraic variables
  • x_inds, a_inds: If indices are provided instead of nx and na, the mass matrix is assumed to be diagonal, with ones located at x_inds and zeros at a_inds. For maximum efficiency, provide these indices as unit ranges or static arrays.
  • nu: Number of inputs
  • abstol: Tolerance for the root finding algorithm
  • residual: If true the dynamics function is assumed to return the residual of the entire state descriptor and have the signature (ẋ, x, u, p, t) -> res. This is sometimes called "fully implicit form".
  • solver: Any compatible SciML Nonlinear solver to use for the root finding problem
  • scale_x: If provided, the residual is scaled by this vector before being passed to the nonlinear solver, res ./ scale_x. This can help with convergence if the state variables have very different magnitudes.

Notes

The backward Euler method is a first-order implicit method that is unconditionally stable (A-stable), making it suitable for stiff problems. The method solves:

\[x(t+Tₛ) = x(t) + Tₛ \cdot f(x(t+Tₛ), u, p, t+Tₛ)\]

This requires solving a nonlinear system at each step, but provides better stability properties than explicit methods like ForwardEuler, especially for stiff dynamics. It is simpler and cheaper per iteration than Trapezoidal (only one dynamics evaluation per solve vs. two), but is only first-order accurate compared to Trapezoidal's second-order accuracy.
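
A minimal sketch with a hypothetical stiff scalar ODE (no algebraic variables), illustrating that the implicit step remains stable even when Ts is much larger than the fastest time constant:

using SeeToDee

# Hypothetical stiff scalar ODE ẋ = -1000 x + u  (nx = 1, na = 0, nu = 1)
stiff(x, u, p, t) = [-1000.0 * x[1] + u[1]]

f_discrete = SeeToDee.BackwardEuler(stiff, 0.01, 1, 0, 1)
x1 = f_discrete([1.0], [0.0], 0, 0.0)   # stable despite Ts ≫ 1/1000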

Extended help

source
SeeToDee.AdaptiveStep - Type
AdaptiveStep(integrator)

A wrapper that enables automatic step subdivision for taking arbitrary-length steps with any integrator.

When the requested step size Ts is larger than the integrator's effective step size (largest_Ts), AdaptiveStep automatically subdivides the step using the integrator's internal supersample mechanism (for explicit integrators) or manual stepping (for implicit integrators).

Fields

  • integ: The wrapped integrator
  • largest_Ts: The largest step size the integrator can take in a single call (Ts / supersample)

Usage

# Wrap any integrator to enable automatic step subdivision
base_integrator = Rk4(dynamics, 0.1; supersample=2)  # largest_Ts = 0.05
adaptive_integrator = AdaptiveStep(base_integrator)

# Take arbitrary step sizes - automatically subdivides when needed
x_next = adaptive_integrator(x, u, p, t; Ts=0.3)  # Uses supersample=6 internally

Notes

  • This wrapper does NOT use error control - it only ensures step sizes never exceed largest_Ts
  • For explicit integrators (Rk4, Rk3, ForwardEuler, Heun), uses built-in supersample mechanism
  • For implicit integrators (SimpleColloc, Trapezoidal), performs manual step subdivision
  • When Ts ≤ largest_Ts, calls the integrator directly without subdivision

Examples

using SeeToDee, StaticArrays

# Define dynamics
function simple_dynamics(x, u, p, t)
    return -x + u
end

# Create base integrator with supersample=3
base = SeeToDee.Rk4(simple_dynamics, 0.1; supersample=3)  # largest_Ts = 0.1/3 ≈ 0.033

# Wrap with AdaptiveStep
adaptive = SeeToDee.AdaptiveStep(base)

x0 = SA[1.0]
u = SA[0.5]

# Small step - no subdivision needed
x1 = adaptive(x0, u, 0, 0; Ts=0.02)  # Direct call

# Large step - automatic subdivision
x2 = adaptive(x0, u, 0, 0; Ts=0.5)   # Uses supersample=15 internally
source
SeeToDee.SuperSampler - Type
SuperSampler(integ, supersample)

A wrapper that enables supersampling for any integrator by manually stepping multiple times.

When an integrator doesn't have built-in supersample support (like Trapezoidal, BackwardEuler, or SimpleColloc), this wrapper allows you to take supersample internal steps to produce one effective step of duration Ts.

Fields

  • integ: The wrapped integrator
  • supersample: Number of internal steps to take per call

Usage

# Wrap an implicit integrator to add supersampling
base_integrator = Trapezoidal(dynamics, 0.1, 4, 0, 1)
supersampled = SuperSampler(base_integrator, 5)  # Takes 5 steps of 0.02s each

# Each call advances by Ts=0.1 using 5 internal steps
x_next = supersampled(x, u, p, t)  # Equivalent to 5 steps of 0.02s

Notes

  • The input u is held constant during all internal steps
  • Each internal step uses time Ts / supersample
  • Time t is advanced appropriately for each internal step
  • All args and kwargs are forwarded to the wrapped integrator
  • Type-stable when used with StaticArrays

Comparison with AdaptiveStep

  • AdaptiveStep: Handles arbitrary step sizes by automatic subdivision (for variable Ts)
  • SuperSampler: Fixed supersampling for improved accuracy (for constant Ts with more substeps)

Examples

using SeeToDee, StaticArrays

# Define dynamics
function dynamics(x, u, p, t)
    return -x + u
end

# Create implicit integrator without supersample support
base = Trapezoidal(dynamics, 0.1, 1, 0, 1)

# Add supersampling for better accuracy
supersampled = SuperSampler(base, 10)  # 10 internal steps

x0 = SA[1.0]
u = SA[0.5]

# Single step with 10 internal substeps
x1 = supersampled(x0, u, 0, 0.0)

# Can also override Ts at call time (if integrator supports it)
x2 = supersampled(x0, u, 0, 0.0; Ts=0.05)  # Uses 10 steps of 0.005s
source
SeeToDee.SwitchingIntegrator - Type
SwitchingIntegrator(int_true, int_false, cond)

Create an integrator that switches between two different integrators based on a condition.

  • int_true: Integrator to use when cond(...) is true
  • int_false: Integrator to use when cond(...) is false
  • cond(x,u,p,t,args...): A function that takes the same arguments as the integrator and returns a Bool

This can be used to, e.g., use a faster integrator when the state is in a certain region and a more accurate (but slower) integrator otherwise.
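
A minimal sketch, assuming hypothetical dynamics and switching condition (the threshold and the two integrators below are illustrative only):

using SeeToDee, StaticArrays

dynamics(x, u, p, t) = SA[-5.0 * x[1] + u[1]]

cheap    = SeeToDee.ForwardEuler(dynamics, 0.01)
accurate = SeeToDee.Rk4(dynamics, 0.01; supersample = 4)

# Use the cheap integrator close to the origin, the accurate one elsewhere
cond(x, u, p, t, args...) = abs(x[1]) < 0.1

f_discrete = SeeToDee.SwitchingIntegrator(cheap, accurate, cond)
x1 = f_discrete(SA[1.0], SA[0.0], 0, 0.0)   # cond is false here, so Rk4 is used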

source

Utilities

SeeToDee.linearize - Function
A,B = linearize(f, x0, u0, p, t)

Linearize the dynamics function f(x, u, p, t) w.r.t. the state x and input u. Returns the Jacobians A, B in

\[ẋ = A\, Δx + B\, Δu\]

Works for both continuous and discrete-time dynamics.
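
A minimal sketch, assuming a hypothetical pendulum dynamics function:

using SeeToDee, StaticArrays

# Hypothetical pendulum dynamics: x = [θ, ω]
pendulum(x, u, p, t) = SA[x[2], -9.81 * sin(x[1]) + u[1]]

x0 = SA[0.0, 0.0]   # linearize about the downward equilibrium
u0 = SA[0.0]
A, B = SeeToDee.linearize(pendulum, x0, u0, 0, 0.0)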

source
SeeToDee.initialize - Function
initialize(integ, x0, u, p, t = 0.0; solver=integ.solver, abstol=integ.abstol)

Given the differential state variables in x0, initialize the algebraic variables by solving the nonlinear problem f(x,u,p,t) = 0 using the provided solver.

Arguments:

  • integ: An integrator like SeeToDee.SimpleColloc
  • x0: Initial state descriptor (differential and algebraic variables, where the algebraic variables come last)
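
A minimal sketch, assuming the same hypothetical DAE as in the SimpleColloc example above; the algebraic entry of the guess is intentionally inconsistent and is solved for:

using SeeToDee

# Hypothetical index-1 DAE: ẋ = -x + z, algebraic residual 0 = z - u²
dae_dynamics(xz, u, p, t) = [-xz[1] + xz[2], xz[2] - u[1]^2]

integ = SeeToDee.SimpleColloc(dae_dynamics, 0.01, 1, 1, 1)
xz_guess = [1.0, 0.0]                              # [x; z] with a poor guess for z
u = [1.0]
xz0 = SeeToDee.initialize(integ, xz_guess, u, 0)   # solves for z, giving xz0[2] ≈ 1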
source