LTI state-space models
Several methods are available for identification of state-space models: subspaceid, n4sid and pem. subspaceid is the most comprehensive algorithm for subspace-based identification, whereas n4sid is an older implementation. pem solves the prediction-error problem using an iterative optimization method (from Optim.jl). If unsure which method to use, try subspaceid first.
Subspace-based identification using n4sid
d = iddata(y,u,sampletime)
sys = n4sid(d, :auto; verbose=false)
# or use a robust version of svd if y has outliers or missing values
using TotalLeastSquares
sys = n4sid(d, :auto; verbose=false, svd=x->rpca(x)[3])
Estimate a state-space model using the n4sid method. Returns an object of type N4SIDStateSpace where the model is accessed as sys.sys. The frequency-weighting functionality borrows ideas from "Frequency Weighted Subspace Based System Identification in the Frequency Domain", Tomas McKelvey 1996. In particular, we apply the output frequency weight matrix (Fy) as it appears in eqs. (16)-(18).
ERA and OKID
The eigenvalue realization algorithm (ERA) and observer Kalman filter identification (OKID) are available through the functions era and okid, documented further down on this page.
PEM
A simple algorithm for identification of discrete-time LTI systems in state-space form,

x' = Ax + Bu + Ke
y  = Cx + Du + e

is provided. The user can choose to minimize either prediction errors or simulation errors, with arbitrary metrics, i.e., not limited to squared errors.
The result of the identification with pem is a custom type StateSpaceNoise <: ControlSystems.LTISystem, with fields A, B, K, representing the dynamics matrix, input matrix and Kalman gain matrix, respectively. The observation matrix C is not stored, as this is always given by [I 0] (you can still access it through sys.C thanks to getproperty).
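For instance (a minimal sketch, assuming sys is a model of this type returned by pem):
sys.C       # observation matrix, reconstructed as [I 0] by getproperty
P = ss(sys) # convert to an ordinary StateSpace; note that this discards the Kalman gain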
Usage example
Below, we generate a system and simulate it forward in time. We then try to estimate a model based on the input and output sequences.
using ControlSystemIdentification, ControlSystems, Random, LinearAlgebra
using ControlSystemIdentification: newpem
sys = c2d(tf(1, [1, 0.5, 1]) * tf(1, [1, 1]), 0.1)
Random.seed!(1)
T = 1000 # Number of time steps
nx = 3 # Number of poles in the true system
nu = 1 # Number of inputs
x0 = randn(nx) # Initial state
sim(sys,u,x0=x0) = lsim(ss(sys), u, x0=x0)[1] # Helper function
u = randn(nu,T) # Generate random input
y = sim(sys, u, x0) # Simulate system
d = iddata(y,u,0.1)
sysh,opt = newpem(d, nx, focus=:prediction) # Estimate model
yh = predict(sysh, d) # Predict using estimated model
predplot(sysh, d) # Plot prediction and true output
See the example notebooks for more plots.
Internals
Internally, Optim.jl is used to optimize the system parameters, using automatic differentiation to calculate gradients (and Hessians where applicable). Optim solver options can be controlled by passing keyword arguments to pem, or by passing a manually constructed solver object. The default solver is BFGS().
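For example (a sketch only; d and nx refer to the usage example above, and the solver choice is merely illustrative):
using Optim
sys, x0, opt = pem(d; nx = nx, solver = NelderMead(), iterations = 500, show_trace = false)
# iterations and show_trace are forwarded to Optim.Options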
Filtering and simulation
Models can be simulated using lsim from ControlSystems.jl and using simulate. You may also convert the model to a KalmanFilter from LowLevelParticleFilters.jl by calling KalmanFilter(sys), after which you can perform filtering and smoothing etc. with the utilities provided for a KalmanFilter.
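For example (a minimal sketch, assuming the estimated model sysh from the usage example above supports this conversion):
using LowLevelParticleFilters
kf = KalmanFilter(sysh) # the resulting Kalman filter can be used for filtering and smoothing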
Furthermore, we have the utility functions below; a short usage sketch follows the list.
- predict(sys, d, x0=zeros): Form predictions using the estimated sys; this essentially runs a stationary Kalman filter.
- simulate(sys, u, x0=zeros): Simulate the system using input u. The noise model and Kalman gain have no influence on the simulated output.
- observer_predictor: Extract the predictor model from the estimated system (ss(A-KC, [B K], C, D)).
- observer_controller
- prediction_error
- predictiondata
- noise_model
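A short sketch of the first three utilities, reusing sysh and d from the usage example above (d.u is assumed to hold the input signal):
yh = predict(sysh, d)          # one-step-ahead predictions (stationary Kalman filter)
ys = simulate(sysh, d.u)       # pure simulation; the Kalman gain has no influence here
pe = observer_predictor(sysh)  # predictor system ss(A-KC, [B K], C, D)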
ControlSystemIdentification.subspaceid — Function
subspaceid(
data::InputOutputData,
nx = :auto;
verbose = false,
r = nx === :auto ? min(length(data) ÷ 20, 20) : nx + 10, # the maximal prediction horizon used
s1 = r, # number of past outputs
s2 = r, # number of past inputs
W = :MOESP,
zeroD = false,
stable = true,
focus = :prediction,
svd::F1 = svd,
scaleU = true,
Aestimator::F2 = \,
Bestimator::F3 = \,
weights = nothing,
)
Estimate a state-space model using subspace-based identification.
Ref: Ljung, System Identification: Theory for the User.
Arguments:
- data: Identification data iddata
- nx: Rank of the model (model order)
- verbose: Print stuff?
- r: Prediction horizon. The model may perform better on simulation if this is made longer, at the expense of more computation time.
- s1: Past horizon of outputs
- s2: Past horizon of inputs
- W: Weight type, choose between :MOESP, :CVA, :N4SID, :IVM
- zeroD: Force the D matrix to be zero.
- stable: Stabilize unstable system using eigenvalue reflection.
- focus: :prediction or :simulation
- svd: The function to use for svd
- scaleU: Rescale the input channels to have the same energy.
- Aestimator: Estimator function used to estimate A, C.
- Bestimator: Estimator function used to estimate B, D.
- weights: A vector of weights can be provided if the Bestimator is wls.
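A minimal usage sketch (reusing the iddata object d from the usage example above; the keyword values are only illustrative):
sys  = subspaceid(d, 3)                                              # 3rd-order model with default settings
sys2 = subspaceid(d, 3; focus = :simulation, zeroD = true, W = :CVA) # simulation focus, no D matrix, CVA weighting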
ControlSystemIdentification.n4sid — Function
res = n4sid(data, r=:auto; verbose=false)
Estimate a state-space model using the n4sid method. Returns an object of type N4SIDStateSpace where the model is accessed as res.sys.
Implements the simplified algorithm (alg. 2) from "N4SID: Subspace Algorithms for the Identification of Combined Deterministic-Stochastic Systems", Peter Van Overschee and Bart De Moor.
The frequency weighting borrows ideas from "Frequency Weighted Subspace Based System Identification in the Frequency Domain", Tomas McKelvey 1996. In particular, we apply the output frequency weight matrix (Fy) as it appears in eqs. (16)-(18).
Arguments:
- data: Identification data data = iddata(y,u)
- y: Measurements N×ny
- u: Control signal N×nu
- r: Rank of the model (model order)
- verbose: Print stuff?
- Wf: A frequency-domain model of measurement disturbances. To focus the attention of the model on a narrow frequency band, try something like Wf = Bandstop(lower, upper, fs=1/Ts) to indicate that there are disturbances outside this band.
- i: Algorithm parameter, generally no need to tune this
- γ: Set this to a value between (0,1) to stabilize unstable models such that the largest eigenvalue has magnitude γ.
- zeroD: Defaults to false
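A sketch of frequency-weighted estimation, assuming the Bandstop specification comes from DSP.jl and using arbitrary band limits:
using DSP
Wf = Bandstop(0.5, 2.0, fs = 10) # hypothetical band; fs = 1/Ts for the data above (Ts = 0.1)
sys = n4sid(d, :auto; Wf = Wf)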
ControlSystemIdentification.pem — Function
sys, x0, opt = pem(data; nx, kwargs...)
System identification using the prediction-error method.
Arguments:
- data: iddata object containing y and u.
- y: Measurements, either a matrix with time along dim 2, or a vector of vectors
- u: Control signals, same structure as y
- nx: Number of poles in the estimated system. This number should be chosen as the number of system poles plus the number of poles in the noise models for measurement noise and load disturbances.
- focus: Either :prediction or :simulation. If :simulation is chosen, a two-stage problem is solved with prediction focus first, followed by a refinement for simulation focus.
- metric: A Function determining how the size of the residuals is measured, defaults to sse (e'e), but any Function such as norm, e->sum(abs,e) or e -> e'Q*e could be used.
- regularizer(p)=0: Function for regularization. The structure of p is detailed below.
- solver: Defaults to Optim.BFGS()
- stabilize_predictor=true: Modifies the estimated Kalman gain K in case A-KC is not stable by moving all unstable eigenvalues to the unit circle.
- difficult=false: If the identification problem appears to be difficult and ends up in a local minimum, set this flag to true to solve an initial global optimization problem to supply a good initial guess. This is expected to take some time.
- kwargs: Additional keyword arguments are sent to Optim.Options.
Return values
- sys::StateSpaceNoise: Identified system. Can be converted to StateSpace by convert(StateSpace, sys) or ss(sys), but this will discard the Kalman gain matrix; see innovation_form to obtain a predictor system.
- x0: Estimated initial state
- opt: Optimization problem structure. Contains info of the result of the optimization problem.
Structure of parameter vector p
The parameter vector is of type ComponentVector and the fields A, B, K, x0 can be accessed as p.A etc. The internal storage is according to
A = size(nx,nx)
B = size(nx,nu)
K = size(nx,ny)
x0 = size(nx)
p = [A[:]; B[:]; K[:]; x0]
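As a sketch, a regularizer that penalizes large entries of the Kalman gain could be written as follows (the weight 0.1 is arbitrary, and d, nx refer to the usage example above):
regularizer(p) = 0.1 * sum(abs2, p.K) # ridge penalty on the Kalman gain
sys, x0, opt = pem(d; nx = nx, regularizer = regularizer)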
ControlSystemIdentification.newpem — Function
newpem(
d,
nx;
zeroD = true,
sys0 = subspaceid(d, nx; zeroD),
focus = :prediction,
optimizer = BFGS(
alphaguess = LineSearches.InitialStatic(alpha = 1),
linesearch = LineSearches.HagerZhang(),
),
zerox0 = false,
initx0 = false,
store_trace = true,
show_trace = true,
show_every = 50,
iterations = 10000,
allow_f_increases = false,
time_limit = 100,
x_tol = 0,
f_abstol = 0,
g_tol = 1e-12,
f_calls_limit = 0,
g_calls_limit = 0,
)
A new implementation of the prediction-error method (PEM). Note that this is an experimental implementation and subject to breaking changes not respecting semver.
Arguments:
- d: iddata
- nx: Model order
- zeroD: Force zero D matrix
- sys0: Initial guess; if none is provided, subspaceid is used to form the initial guess.
- focus: :prediction or :simulation. If :simulation, the K matrix will be zero.
- optimizer: One of Optim's optimizers
- zerox0: Force initial state to zero.
- initx0: Estimate initial state once, otherwise at each iteration
The rest of the arguments are related to Optim.Options.
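A sketch with a few of the options adjusted (d and nx as in the usage example above; the values are only illustrative):
sysh, opt = newpem(d, nx; focus = :simulation, show_trace = false, iterations = 2000)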
ControlSystemIdentification.era — Function
era(YY::AbstractArray{<:Any, 3}, Ts, r::Int, m::Int, n::Int)
Eigenvalue realization algorithm.
Arguments:
- YY: Markov parameters (impulse response) of size n_out×n_in×n_time
- Ts: Sample time
- r: Model order
- m: Number of rows in Hankel matrix
- n: Number of columns in Hankel matrix
era(d::AbstractIdData, r, m = 2r, n = 2r, l = 5r; λ=0)
Eigenvalue realization algorithm. Uses okid to find the Markov parameters as an initial step.
Arguments:
- r: Model order
- l: Number of Markov parameters to estimate.
- λ: Regularization parameter
ControlSystemIdentification.okid — Function
H = okid(d::AbstractIdData, r, l = 5r; λ=0)
Observer Kalman filter identification. Returns the Markov parameters H of size n_out×n_in×(l+1).
Arguments:
- r: Model order
- l: Number of Markov parameters to estimate.
- λ: Regularization parameter
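A minimal sketch (reusing the iddata object d from the usage example above):
H   = okid(d, 3) # Markov parameters for a 3rd-order model
sys = era(d, 3)  # state-space model via ERA, which uses okid internally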