LTI state-space models

This page documents the facilities available for estimating state-space models of the form

\[\begin{aligned} x^+ &= Ax + Bu + Ke\\ y &= Cx + Du + e \end{aligned}\]

Several methods are available for identification of state-space models: subspaceid, n4sid, newpem and era. subspaceid is the most comprehensive algorithm for subspace-based identification, whereas n4sid is an older implementation. newpem solves the prediction-error problem using an iterative optimization method (from Optim.jl) and is generally slightly more accurate, but also more computationally expensive. If unsure which method to use, try subspaceid first, unless the data comes from closed-loop operation, in which case use newpem.

Subspace-based identification using n4sid and subspaceid

In this example we will estimate a state-space model using the n4sid method. This function returns an object of type N4SIDStateSpace, where the model is accessed as sys.sys.

using ControlSystemIdentification, ControlSystems
Ts = 0.1
G  = c2d(DemoSystems.resonant(), Ts)
u  = randn(1,1000)
y  = lsim(G,u).y
y .+= 0.01 .* randn.() # add measurement noise
d  = iddata(y,u,Ts)
sys = n4sid(d, :auto; verbose=false, zeroD=true)
# or use a robust version of svd if y has outliers or missing values
# using TotalLeastSquares
# sys = n4sid(d, :auto; verbose=false, svd=x->rpca(x)[3])
bodeplot([G, sys.sys], lab=["True" "" "n4sid" ""])

N4SIDStateSpace is a subtype of AbstractPredictionStateSpace, a state-space object that contains an observer gain matrix sys.K (Kalman filter) as well as estimated covariance matrices.
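The estimated observer can be used directly for prediction. A minimal sketch, using the model sys and data d from the example above, and assuming predict and predplot accept the prediction model directly, as they do for the model returned by newpem further down:

sys.K                 # estimated Kalman (observer) gain
sys.sys               # the underlying StateSpace model
yh = predict(sys, d)  # one-step-ahead predictions using the estimated observer
predplot(sys, d)      # plot predictions together with the measured output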

Using the function subspaceid instead, we have

sys2 = subspaceid(d, :auto; verbose=false, zeroD=true)
bodeplot!(sys2.sys, lab=["subspace" ""])

subspaceid allows you to choose the weighting among :MOESP, :CVA, :N4SID, and :IVM, and is generally preferred over n4sid.

Both functions also let you choose which functions are used for the least-squares estimates and for computing the SVD, which makes it possible to use, e.g., robust estimators for resistance against outliers; see the sketch below.
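A hedged sketch of these options, combining a different weighting with a robust SVD (rpca assumes TotalLeastSquares.jl, as in the comment above; the anonymous Bestimator simply wraps the default backslash solve and stands in for a robust alternative):

using TotalLeastSquares
sys_cva = subspaceid(d, :auto;
    W          = :CVA,             # weighting: one of :MOESP, :CVA, :N4SID, :IVM
    svd        = x -> rpca(x)[3],  # robust SVD for resistance against outliers
    Bestimator = (A, b) -> A \ b,  # default least-squares solve, shown for illustration
)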

ERA and OKID

The "Eigenvalue realization algorithm" and "Observer Kalman identification" algorithms are available as era and okid. If era is called with a data object, okid is automatically used internally to produce the Markov parameters to the ERA algorithm.

sys3 = era(d, 2)
bodeplot!(sys3, lab=["ERA" ""])
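The Markov parameters can also be computed explicitly with okid and passed to the array method of era. A sketch following the docstrings reproduced further down, with Hankel dimensions m and n chosen small enough for the number of estimated Markov parameters:

H    = okid(d, 2, 20)           # Markov parameters, size ny×nu×(20+1)
sys4 = era(H, d.Ts, 2, 10, 10)  # ERA on the Markov parameters, model order 2
bodeplot!(sys4, lab=["ERA/OKID" ""])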

PEM (Prediction-error method)

Note

The old function pem is "soft deprecated" in favor of newpem, which is more general and much more performant.

A simple algorithm for identification of discrete-time LTI systems in state-space form:

\[\begin{aligned} x^+ &= Ax + Bu + Ke \\ y &= Cx + Du + e \end{aligned}\]

is provided. The user can choose to minimize either prediction errors or simulation errors, with arbitrary metrics, i.e., not limited to squared errors.

The result of the identification with newpem is a custom type with extra fields for the identified Kalman gain and noise covariance matrices.

Usage example

Below, we generate a system and simulate it forward in time. We then try to estimate a model based on the input and output sequences using the function newpem.

using ControlSystemIdentification, ControlSystems, Random, LinearAlgebra
using ControlSystemIdentification: newpem
sys = c2d(tf(1, [1, 0.5, 1]) * tf(1, [1, 1]), 0.1)

Random.seed!(1)
T   = 1000                      # Number of time steps
nx  = 3                         # Number of poles in the true system
nu  = 1                         # Number of inputs
x0  = randn(nx)                 # Initial state
sim(sys,u,x0=x0) = lsim(ss(sys), u, x0=x0).y # Helper function
u   = randn(nu,T)               # Generate random input
y   = sim(sys, u, x0)           # Simulate system
y .+= 0.01 .* randn.()          # Add some measurement noise
d   = iddata(y,u,0.1)

sysh, x0h, opt = newpem(d, nx, focus=:prediction) # Estimate model

yh = predict(sysh, d) # Predict using estimated model
predplot(sysh, d)     # Plot prediction and true output
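To instead minimize simulation errors, or to use a more outlier-robust metric than squared errors, pass the corresponding keyword arguments (a sketch using the focus and metric keywords from the newpem docstring below):

sysh2, x0h2, opt2 = newpem(d, nx, focus=:simulation, metric=abs) # Robust simulation-error minimization
simplot(sysh2, d)  # Plot simulation and true output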

See the example notebooks for more plots, as well as the examples in the example section of this documentation.

Internals

Internally, Optim.jl is used to optimize the system parameters, using automatic differentiation to calculate gradients (and Hessians where applicable). Optim solver options can be controlled by passing keyword arguments to newpem, or by passing a manually constructed solver object. The default solver is BFGS with a backtracking line search.
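For example, a manually constructed optimizer together with a few Optim options can be passed like this (a sketch; the keyword names mirror those in the newpem docstring below):

using Optim, LineSearches
sysh3, x0h3, opt3 = newpem(d, nx;
    optimizer  = BFGS(linesearch = LineSearches.BackTracking()),
    iterations = 2000,   # forwarded to Optim.Options
    show_every = 100,
)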

Filtering, prediction and simulation

Models can be simulated using lsim from ControlSystems.jl or using simulate. You may also convert the model to a KalmanFilter from LowLevelParticleFilters.jl by calling KalmanFilter(sys), after which you can perform filtering and smoothing etc. with the utilities provided for a KalmanFilter.
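A hedged sketch of the conversion, assuming LowLevelParticleFilters.jl is loaded and that its forward_trajectory function, which expects vectors of sample vectors, is used to run the filter over the data:

using LowLevelParticleFilters
kf  = KalmanFilter(sysh)              # Kalman filter built from the estimated model
uv  = collect.(eachcol(d.u))          # convert signal matrices to vectors of vectors
yv  = collect.(eachcol(d.y))
sol = forward_trajectory(kf, uv, yv)  # filtered state estimates along the data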

Furthermore, we have the utility functions documented below:

ControlSystemIdentification.subspaceid — Function
subspaceid(
    data::InputOutputData,
    nx = :auto;
    verbose = false,
    r = nx === :auto ? min(length(data) ÷ 20, 20) : nx + 10, # the maximal prediction horizon used
    s1 = r, # number of past outputs
    s2 = r, # number of past inputs
    W = :MOESP,
    zeroD = false,
    stable = true, 
    focus = :prediction,
    svd::F1 = svd!,
    scaleU = true,
    Aestimator::F2 = \,
    Bestimator::F3 = \,
    weights = nothing,
)

Estimate a state-space model using subspace-based identification.

Ref: Ljung, System Identification: Theory for the User.

Arguments:

  • data: Identification data iddata
  • nx: Rank of the model (model order)
  • verbose: Print stuff?
  • r: Prediction horizon. The model may perform better on simulation if this is made longer, at the expense of more computation time.
  • s1: past horizon of outputs
  • s2: past horizon of inputs
  • W: Weight type, choose between :MOESP, :CVA, :N4SID, :IVM
  • zeroD: Force the D matrix to be zero.
  • stable: Stabilize unstable system using eigenvalue reflection.
  • focus: :prediction or :simulation
  • svd: The function to use for svd
  • scaleU: Rescale the input channels to have the same energy.
  • Aestimator: Estimator function used to estimate A,C.
  • Bestimator: Estimator function used to estimate B,D.
  • weights: A vector of weights can be provided if the Bestimator is wls.

Extended help

A more accurate prediction model can sometimes be obtained using newpem, which is also unbiased for closed-loop data (subspaceid is biased for closed-loop data, see the example in the docs). The prediction-error method is iterative and generally more expensive than subspaceid, and uses this function (by default) to form the initial guess for the optimization.

source
subspaceid(frd::FRD, args...; estimate_x0 = false, bilinear_transform = false, kwargs...)

If a frequency-response data object is supplied:

  • The FRD will be automatically converted to an InputOutputFreqData
  • estimate_x0 is by default set to false.
  • bilinear_transform transforms the frequency vector to discrete time, see the note below.

Note: if the frequency-response data comes from a frequency-response analysis, a bilinear transform of the data is required before estimation. This transform will be applied if bilinear_transform = true.

source
subspaceid(data::InputOutputFreqData,
    Ts = data.Ts,
    nx = :auto;
    cont = false,
    verbose = false,
    r = nx === :auto ? min(length(data) ÷ 20, 20) : 2nx, # Internal model order
    zeroD = false,
    estimate_x0 = true,
    stable = true, 
    svd = svd!,
    Aestimator = \,
    Bestimator = \,
    weights = nothing
)

Estimate a state-space model using subspace-based identification in the frequency domain.

See the docs for an example.

Arguments:

  • data: A frequency-domain identification data object.
  • Ts: Sample time at which the data was collected
  • nx: Desired model order, an integer or :auto.
  • cont: Return a continuous-time model? A bilinear transformation is used to convert the estimated discrete-time model, see function d2c.
  • verbose: Print stuff?
  • r: Internal model order, must be ≥ nx.
  • zeroD: Force the D matrix to be zero.
  • estimate_x0: Estimate extra parameters to account for initial conditions. This may be required if the data comes from the FFT of time-domain data, but may not be required if the data is collected using frequency-response analysis with exactly periodic input and proper handling of transients.
  • stable: Force the model to be stable (uses schur_stab).
  • svd: The svd function to use.
  • Aestimator: The estimator of the A matrix (and initial C-matrix).
  • Bestimator: The estimator of B/D and C/D matrices.
  • weights: An optional vector of frequency weights of the same length as the number of frequencies in data.
source
ControlSystemIdentification.n4sid — Function
res = n4sid(data, r=:auto; verbose=false)

Estimate a state-space model using the n4sid method. Returns an object of type N4SIDStateSpace where the model is accessed as res.sys.

Implements the simplified algorithm (alg. 2) from "N4SID: Subspace Algorithms for the Identification of Combined Deterministic-Stochastic Systems" by Peter Van Overschee and Bart De Moor.

The frequency weighting borrows ideas from "Frequency Weighted Subspace Based System Identification in the Frequency Domain", Tomas McKelvey, 1996. In particular, we apply the output frequency weight matrix (Fy) as it appears in eqs. (16)-(18).

Arguments:

  • data: Identification data data = iddata(y,u)
  • r: Rank of the model (model order)
  • verbose: Print stuff?
  • Wf: A frequency-domain model of measurement disturbances. To focus the attention of the model on a narrow frequency band, try something like Wf = Bandstop(lower, upper, fs=1/Ts) to indicate that there are disturbances outside this band.
  • i: Algorithm parameter, generally no need to tune this
  • γ: Set this to a value in the interval (0,1) to stabilize unstable models such that the largest eigenvalue has magnitude γ.
  • zeroD: defaults to false

See also the newer implementation subspaceid, which allows you to choose between different weightings (n4sid being one of them). A more accurate prediction model can sometimes be obtained using newpem, which is also unbiased for closed-loop data.

source
ControlSystemIdentification.newpem — Function
sys, x0, res = newpem(
    d,
    nx;
    zeroD  = true,
    focus  = :prediction,
    stable = true,
    sys0   = subspaceid(d, nx; zeroD, focus, stable),
    metric = abs2,
    regularizer = (p, P) -> 0,
    optimizer = BFGS(
        linesearch = LineSearches.BackTracking(),
    ),
    store_trace = true,
    show_trace  = true,
    show_every  = 50,
    iterations  = 10000,
    time_limit  = 100,
    x_tol       = 0,
    f_abstol    = 0,
    g_tol       = 1e-12,
    f_calls_limit = 0,
    g_calls_limit = 0,
    allow_f_increases = false,
)

A new implementation of the prediction-error method (PEM). Note that this is an experimental implementation and subject to breaking changes not respecting semver.

The prediction-error method solves an iterative, gradient-based optimization problem; as such, it can be extra sensitive to signal scaling. It is therefore recommended to scale d before estimation, e.g., by pre- and post-multiplying with diagonal matrices, d̃ = Dy*d*Du, and to apply the inverse scaling to the resulting system. In this case, we have

\[D_y y = \tilde{G} D_u u \Leftrightarrow y = D_y^{-1} \tilde{G} D_u u\]

hence G = Dy \ G̃ * Du, where $\tilde{G}$ is the plant estimated for the scaled iddata.
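A minimal sketch of this scaling recipe, assuming per-channel scaling to unit standard deviation (Statistics and LinearAlgebra are assumed loaded; nx denotes the desired model order):

using Statistics, LinearAlgebra
Dy = diagm(1 ./ vec(std(d.y, dims=2)))  # output scaling
Du = diagm(1 ./ vec(std(d.u, dims=2)))  # input scaling
d̃  = iddata(Dy * d.y, Du * d.u, d.Ts)   # scaled data
G̃, x0s, ress = newpem(d̃, nx)
G  = Dy \ G̃ * Du                        # undo the scaling on the estimated plant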

Arguments:

  • d: iddata
  • nx: Model order
  • zeroD: Force zero D matrix
  • stable: If true, stability of the estimated system will be enforced by eigenvalue reflection using schur_stab with ϵ = 1/100 (default). If stable is a real value, that value is used instead of the default ϵ.
  • sys0: Initial guess. If none is provided, subspaceid is used to form the initial guess.
  • focus: :prediction or :simulation. If :simulation, the K matrix will be zero.
  • optimizer: One of Optim's optimizers
  • metric: The metric used to measure residuals. Try, e.g., abs for better resistance to outliers.

The rest of the arguments are related to Optim.Options.

  • regularizer: A function of the parameter vector and the corresponding PredictionStateSpace/StateSpace system that can be used to regularize the estimate.

Example

using ControlSystemIdentification, ControlSystems, Plots
G = DemoSystems.doylesat()
T = 1000  # Number of time steps
Ts = 0.01 # Sample time
sys = c2d(G, Ts)
nx = sys.nx
nu = sys.nu
ny = sys.ny
x0 = zeros(nx) # actual initial state
sim(sys, u, x0 = x0) = lsim(sys, u; x0)[1]

σy = 1e-1 # Noise covariance

u  = randn(nu, T)
y  = sim(sys, u, x0)
yn = y .+ σy .* randn.() # Add measurement noise
d  = iddata(yn, u, Ts)

sysh, x0h, opt = ControlSystemIdentification.newpem(d, nx, show_every=10)

plot(
    bodeplot([sys, sysh]),
    predplot(sysh, d, x0h), # Include the estimated initial state in the prediction
)

Extended help

This implementation uses a tridiagonal parametrization of the A-matrix, which has been shown to be favourable from an optimization perspective.¹ The initial guess sys0 is automatically transformed to a special tridiagonal modal form.

[1]: McKelvey, Tomas & Helmersson, Anders. (1997). State-space parametrizations of multivariable linear systems using tridiagonal matrix forms.

The parameter vector used in the optimization takes the following form

p = [trivec(A); vec(B); vec(C); vec(D); vec(K); vec(x0)]

Where ControlSystemIdentification.trivec vectorizes the -1,0,1 diagonals of A. If focus = :simulation, K is omitted, and if zeroD = true, D is omitted.

source
ControlSystemIdentification.era — Function
era(YY::AbstractArray{<:Any, 3}, Ts, r::Int, m::Int, n::Int)

Eigenvalue realization algorithm.

Arguments:

  • YY: Markov parameters (impulse response) size n_out×n_in×n_time
  • Ts: Sample time
  • r: Model order
  • m: Number of rows in Hankel matrix
  • n: Number of columns in Hankel matrix
source
era(d::AbstractIdData, r, m = 2r, n = 2r, l = 5r; p = l, λ=0)

Eigenvalue realization algorithm. Uses okid to find the Markov parameters as an initial step.

Arguments:

  • r: Model order
  • l: Number of Markov parameters to estimate.
  • λ: Regularization parameter
  • p: Optionally, delete the first p columns in the internal Hankel matrices to account for initial conditions != 0. If x0 != 0, for era, p defaults to l, whereas when calling okid directly, p defaults to 0.
source
ControlSystemIdentification.okid — Function
H = okid(d::AbstractIdData, nx, l = 5nx; p = 1, λ=0, estimator = /)

Observer Kalman filter identification. Returns the Markov parameters H of size n_out×n_in×(l+1).

Arguments:

  • nx: Model order
  • l: Number of Markov parameters to estimate.
  • λ: Regularization parameter
  • p: Optionally, delete the first p columns in the internal Hankel matrices to account for initial conditions != 0. If x0 != 0, try setting p around the same value as l.
source