LowLevelParticleFilters
This is a library for state estimation, that is, given measurements $y(t)$ from a dynamical system, estimate the state vector $x(t)$. Throughout, we assume dynamics of the form
\[\begin{aligned} x(t+1) &= f(x(t), u(t), p, t, w(t))\\ y(t) &= g(x(t), u(t), p, t, e(t)) \end{aligned}\]
or the linear version
\[\begin{aligned} x(t+1) &= Ax(t) + Bu(t) + w(t)\\ y(t) &= Cx(t) + Du(t) + e(t) \end{aligned}\]
where $x$ is the state vector, $u$ an input, $p$ some form of parameters, $t$ is the time and $w,e$ are disturbances (noise). Throughout the documentation, we often call the function $f$ the `dynamics` and the function $g$ the `measurement` function.
The dynamics above describe a discrete-time system, i.e., the function $f$ takes the current state and produces the next state. This is in contrast to a continuous-time system, where $f$ takes the current state but produces the time derivative of the state. A continuous-time system can be discretized, described in detail in Discretization.
The parameters $p$ can be anything, or left out. You may write the dynamics functions such that they depend on $p$ and include parameters when you create a filter object. You may also override the parameters stored in the filter object when you call any function on the filter object. This behavior is modeled after the SciML ecosystem.
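As a minimal sketch of this pattern (the parameter container and names here are illustrative, not part of the package API), a dynamics function can read everything it needs from `p`, and a different `p` can be supplied at call time:

```julia
# Hypothetical example: parameters stored in a NamedTuple (any container works)
p = (a = 0.97, b = 0.1)

# The dynamics reads its coefficients from p instead of hard-coding them
dynamics(x, u, p, t) = p.a .* x .+ p.b .* u

x, u = [1.0], [2.0]
x1 = dynamics(x, u, p, 0)                   # uses p as given: 0.97*1 + 0.1*2
x2 = dynamics(x, u, (a = 0.5, b = 0.1), 0)  # override with other parameters
```

A filter constructed with this `dynamics` and a stored `p` would use the stored value by default, while passing a `p` argument to an operation overrides it for that call.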
Depending on the nature of $f$ and $g$, the best method of estimating the state may vary. If $f,g$ are linear and the disturbances are additive and Gaussian, the `KalmanFilter` is an optimal state estimator. If any of the above assumptions fail to hold, we may need to resort to more advanced estimators. This package provides several filter types, outlined below.
Estimator types
We provide a number of filter types:
- `KalmanFilter`. A standard Kalman filter. It is restricted to linear dynamics (possibly time varying) and Gaussian noise.
- `SqKalmanFilter`. A standard Kalman filter in square-root form (slightly slower, but more numerically stable with ill-conditioned covariance).
- `ExtendedKalmanFilter`: For nonlinear systems, the EKF runs a regular Kalman filter on linearized dynamics. Uses ForwardDiff.jl for linearization (or user-provided Jacobians). The noise model must be Gaussian.
- `UnscentedKalmanFilter`: The unscented Kalman filter often performs slightly better than the extended Kalman filter, but may be slightly more computationally expensive. The UKF handles nonlinear dynamics and measurement models, but still requires a Gaussian noise model (which may be non-additive) and assumes all posterior distributions are Gaussian, i.e., it cannot handle multi-modal posteriors.
- `ParticleFilter`: The particle filter is a nonlinear estimator. This version of the particle filter is simple to use and assumes that both dynamics noise and measurement noise are additive. Particle filters handle multi-modal posteriors.
- `AdvancedParticleFilter`: This filter gives you more flexibility, at the expense of having to define a few more functions. This filter does not require the noise to be additive and is thus the most flexible filter type.
- `AuxiliaryParticleFilter`: This filter is identical to `ParticleFilter`, but uses a slightly different proposal mechanism for new particles.
Functionality
This package provides
- Filtering, estimating $x(t)$ given measurements up to and including time $t$. We call the filtered estimate $x(t|t)$ (read as $x$ at $t$ given $t$).
- Smoothing, estimating $x(t)$ given data up to $T > t$, i.e., $x(t|T)$.
- Parameter estimation.
All filters work in two distinct steps.
- The prediction step (`predict!`). During prediction, we use the dynamics model to form $x(t|t-1) = f(x(t-1), ...)$.
- The correction step (`correct!`). In this step, we adjust the predicted state $x(t|t-1)$ using the measurement $y(t)$ to form $x(t|t)$.
In general, all filters represent not only a point estimate of $x(t)$, but a representation of the complete posterior probability distribution over $x$ given all the data available up to time $t$. One major difference between different filter types is how they represent these probability distributions.
Particle filter
A particle filter represents the probability distribution over the state as a collection of samples; each sample is propagated through the dynamics function $f$ individually. When a measurement becomes available, the samples, called particles, are given a weight based on how likely the particle is given the measurement. Each particle can thus be seen as representing a hypothesis about the current state of the system. After a few time steps, most weights are inevitably going to be extremely small, a manifestation of the curse of dimensionality, and a resampling step is incorporated to refresh the particle distribution and focus the particles on areas of the state space with high posterior probability.
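The weighting step described above can be sketched in a few lines of plain Julia (an illustration of the principle, not the package's internal implementation):

```julia
# Toy 1-D illustration of the weighting step:
# N particles, measurement model y = x + e with e ~ N(0, σ²)
N, σ = 1000, 0.2
particles = randn(N)          # current particle cloud (hypotheses about x)
y = 0.5                       # incoming measurement

# Log-likelihood of y under each particle's hypothesis (up to a constant)
logw = [-0.5 * ((y - x) / σ)^2 for x in particles]
w = exp.(logw .- maximum(logw))   # shift by the max for numerical stability
w ./= sum(w)                      # normalize to a probability distribution

x̂ = sum(w .* particles)           # weighted state estimate
```

Working with log-weights and subtracting the maximum before exponentiating avoids underflow when most particles are very unlikely, which is exactly the degenerate situation that triggers resampling.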
Defining a particle filter is straightforward: one must define the distribution of the noise `df` in the dynamics function `dynamics(x,u,p,t)`, and the noise distribution `dg` in the measurement function `measurement(x,u,p,t)`. Both of these noise sources are assumed to be additive, but can have any distribution. The distribution of the initial state `d0` must also be provided. In the example below, we use linear Gaussian dynamics so that we can easily compare both particle and Kalman filters. (If we have something close to linear Gaussian dynamics in practice, we should of course use a Kalman filter and not a particle filter.)
using LowLevelParticleFilters, LinearAlgebra, StaticArrays, Distributions, Plots
Define problem
nx = 2 # Dimension of state
nu = 1 # Dimension of input
ny = 1 # Dimension of measurements
N = 500 # Number of particles
const dg = MvNormal(ny,0.2) # Measurement noise Distribution
const df = MvNormal(nx,0.1) # Dynamics noise Distribution
const d0 = MvNormal(randn(nx),2.0) # Initial state Distribution
Define linear state-space system (using StaticArrays for maximum performance)
const A = SA[0.97043 -0.097368
0.09736 0.970437]
const B = SA[0.1; 0;;]
const C = SA[0 1.0]
Next, we define the dynamics and measurement equations; they both take the signature (x,u,p,t) = (state, input, parameters, time)
dynamics(x,u,p,t) = A*x .+ B*u
measurement(x,u,p,t) = C*x
vecvec_to_mat(x) = copy(reduce(hcat, x)') # Helper function
The parameter `p` can be anything, and is often optional. If `p` is not provided when performing operations on filters, any `p` stored in the filter object (if supported) is used. The default, if none is provided and none is stored in the filter, is `p = LowLevelParticleFilters.NullParameters()`.
We are now ready to define and use a filter
pf = ParticleFilter(N, dynamics, measurement, df, dg, d0)
ParticleFilter{PFstate{StaticArraysCore.SVector{2, Float64}, Float64}, typeof(Main.dynamics), typeof(Main.measurement), Distributions.ZeroMeanIsoNormal{Tuple{Base.OneTo{Int64}}}, Distributions.ZeroMeanIsoNormal{Tuple{Base.OneTo{Int64}}}, Distributions.IsoNormal, DataType, Random.Xoshiro, LowLevelParticleFilters.NullParameters}
state: PFstate{StaticArraysCore.SVector{2, Float64}, Float64}
dynamics: dynamics (function of type typeof(Main.dynamics))
measurement: measurement (function of type typeof(Main.measurement))
dynamics_density: Distributions.ZeroMeanIsoNormal{Tuple{Base.OneTo{Int64}}}
measurement_density: Distributions.ZeroMeanIsoNormal{Tuple{Base.OneTo{Int64}}}
initial_density: Distributions.IsoNormal
resample_threshold: Float64 0.1
resampling_strategy: LowLevelParticleFilters.ResampleSystematic <: LowLevelParticleFilters.ResamplingStrategy
rng: Random.Xoshiro
p: LowLevelParticleFilters.NullParameters LowLevelParticleFilters.NullParameters()
threads: Bool false
Ts: Float64 1.0
With the filter in hand, we can simulate from its dynamics and query some properties
du = MvNormal(nu,1.0) # Random input distribution for simulation
xs,u,y = simulate(pf,200,du) # We can simulate the model that the pf represents
pf(u[1], y[1]) # Perform one filtering step using input u and measurement y
particles(pf) # Query the filter for particles, try weights(pf) or expweights(pf) as well
x̂ = weighted_mean(pf) # Weighted mean over the current particles
2-element Vector{Float64}:
0.5596856030406465
2.557662255165061
If you want to perform filtering using vectors of inputs and measurements, try any of the functions
sol = forward_trajectory(pf, u, y) # Filter whole vectors of signals
x̂,ll = mean_trajectory(pf, u, y)
plot(sol, xreal=xs, markersize=2)
`u` and `y` are then assumed to be vectors of vectors. StaticArrays is recommended for maximum performance.
If MonteCarloMeasurements.jl is loaded, you may transform the output particles to `Matrix{MonteCarloMeasurements.Particles}` with the layout `T × n_state` using `Particles(x,we)`. Internally, the particles are then resampled such that they all have unit weight. This is convenient for making use of the plotting facilities of MonteCarloMeasurements.jl.
For a full usage example, see the benchmark section below or example_lineargaussian.jl
Resampling
The particle filter will perform a resampling step whenever the distribution of the weights has become degenerate. The resampling is triggered when the effective number of samples is smaller than `pf.resample_threshold` $\in [0, 1]$; this value can be set when constructing the filter. How the resampling is done is governed by `pf.resampling_strategy`; we currently provide `ResampleSystematic <: ResamplingStrategy` as the only implemented strategy. See https://en.wikipedia.org/wiki/Particle_filter for more info.
Particle Smoothing
Smoothing is the process of finding the best state estimate given both past and future data. Smoothing is thus only possible in an offline setting. This package provides a particle smoother based on forward filtering, backward simulation (FFBS). Example usage follows:
N = 2000 # Number of particles
T = 80 # Number of time steps
M = 100 # Number of smoothed backwards trajectories
pf = ParticleFilter(N, dynamics, measurement, df, dg, d0)
du = MvNormal(nu,1) # Control input distribution
x,u,y = simulate(pf,T,du) # Simulate trajectory using the model in the filter
tosvec(y) = reinterpret(SVector{length(y[1]),Float64}, reduce(hcat,y))[:] |> copy
x,u,y = tosvec.((x,u,y)) # It's good for performance to use StaticArrays to the extent possible
xb,ll = smooth(pf, M, u, y) # Sample smoothing particles
xbm = smoothed_mean(xb) # Calculate the mean of smoothing trajectories
xbc = smoothed_cov(xb) # And covariance
xbt = smoothed_trajs(xb) # Get smoothing trajectories
xbs = [diag(xbc) for xbc in xbc] |> vecvec_to_mat .|> sqrt
plot(xbm', ribbon=2xbs, lab="PF smooth")
plot!(vecvec_to_mat(x), l=:dash, lab="True")
We can plot the particles themselves as well
downsample = 5
plot(vecvec_to_mat(x), l=(4,), layout=(2,1), show=false)
scatter!(xbt[1, 1:downsample:end, :]', subplot=1, show=false, m=(1,:black, 0.5), lab="")
scatter!(xbt[2, 1:downsample:end, :]', subplot=2, m=(1,:black, 0.5), lab="")
Kalman filter
The `KalmanFilter` (wiki) assumes that $f$ and $g$ are linear functions, i.e., that they can be written in the form
\[\begin{aligned} x(t+1) &= Ax(t) + Bu(t) + w(t)\\ y(t) &= Cx(t) + Du(t) + e(t) \end{aligned}\]
for some matrices $A,B,C,D$ where $w \sim N(0, R_1)$ and $e \sim N(0, R_2)$ are zero mean and Gaussian. The Kalman filter represents the posterior distributions over $x$ by the mean and a covariance matrix. The magic behind the Kalman filter is that linear transformations of Gaussian distributions remain Gaussian, and we thus have a very efficient way of representing them.
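The two steps of the filter can be written out in a few lines of plain Julia (a sketch of the textbook equations, not the package implementation):

```julia
using LinearAlgebra

# One correct-then-predict iteration of the textbook Kalman filter
function kf_step(x, R, u, y, A, B, C, R1, R2)
    # Correct: fuse the measurement y(t) into x(t|t-1)
    S = C * R * C' + R2        # innovation covariance
    K = (R * C') / S           # Kalman gain
    x = x + K * (y - C * x)    # filtered mean x(t|t)
    R = (I - K * C) * R        # filtered covariance
    # Predict: propagate to the next time step
    x = A * x + B * u          # predicted mean x(t+1|t)
    R = A * R * A' + R1        # predicted covariance
    return x, R
end
```

Since every operation above is a linear map or an addition of Gaussian quantities, the posterior stays Gaussian, which is why the mean and covariance pair is a complete representation.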
A Kalman filter is easily created using the constructor. Many of the functions defined for particle filters are also defined for Kalman filters, e.g.:
R1 = cov(df)
R2 = cov(dg)
kf = KalmanFilter(A, B, C, 0, R1, R2, d0)
sol = forward_trajectory(kf, u, y) # sol contains filtered state, predictions, pred cov, filter cov, loglik
It can also be called in a loop like the `pf` above
for t = 1:T
kf(u,y) # Performs both correct! and predict!
# alternatively
ll, e = correct!(kf, y, nothing, t) # Returns loglikelihood and prediction error
x = state(kf)
R = covariance(kf)
predict!(kf, u, nothing, t)
end
The matrices in the Kalman filter may be time varying, such that `A[:, :, t]` is $A(t)$. They may also be provided as functions of the form $A(t) = A(x, u, p, t)$. This works for both dynamics and covariance matrices.
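For example, a time-varying $A$ could be provided in either form (the particular matrices below are made up for illustration):

```julia
# As a 3-D array: A[:, :, t] is A(t)
Ats = cat([0.9 0.1; 0.0 0.9], [0.8 0.2; 0.0 0.8]; dims=3)
Ats[:, :, 2]                           # A(2)

# Or as a function of (x, u, p, t)
Afun(x, u, p, t) = [exp(-0.1t) 0.1; 0.0 exp(-0.1t)]
Afun(nothing, nothing, nothing, 0.0)   # A(0) = [1.0 0.1; 0.0 1.0]
```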
The numeric type used in the Kalman filter is determined from the mean of the initial state distribution, so make sure that this has the correct type if you intend to use, e.g., `Float32` or `ForwardDiff.Dual` for automatic differentiation.
Smoothing using KF
Kalman filters can also be used for smoothing
kf = KalmanFilter(A, B, C, 0, cov(df), cov(dg), d0)
xT,R,lls = smooth(kf, u, y) # Returns smoothed state, smoothed cov, loglik
Plot and compare PF and KF
plot(vecvec_to_mat(xT), lab="Kalman smooth", layout=2)
plot!(xbm', lab="pf smooth")
plot!(vecvec_to_mat(x), lab="true")
Kalman filter tuning tutorial
The tutorial "How to tune a Kalman filter" details how to figure out appropriate covariance matrices for the Kalman filter, as well as how to add disturbance models to the system model.
Unscented Kalman Filter
The `UnscentedKalmanFilter` represents posterior distributions over $x$ as Gaussian distributions, but propagates them through a nonlinear function $f$ by a deterministic sampling of a small number of particles called sigma points (this is referred to as the unscented transform). The UKF thus handles nonlinear functions $f,g$, but only Gaussian disturbances and unimodal posteriors. The UKF will by default treat the noise as additive, but by using the augmented UKF form, non-additive noise may be handled as well. See the docstring of `UnscentedKalmanFilter` for more details.
The UKF takes the same arguments as a regular `KalmanFilter`, but the matrices defining the dynamics are replaced by two functions, `dynamics` and `measurement`, working in the same way as for the `ParticleFilter` above (unless the augmented form is used).
ukf = UnscentedKalmanFilter(dynamics, measurement, cov(df), cov(dg), MvNormal(SA[1.,1.]); nu=nu, ny=ny)
UnscentedKalmanFilter{false, false, false, false, typeof(Main.dynamics), typeof(Main.measurement), Matrix{Float64}, Matrix{Float64}, Distributions.MvNormal{Float64, PDMats.PDiagMat{Float64, StaticArraysCore.SVector{2, Float64}}, FillArrays.Zeros{Float64, 1, Tuple{Base.OneTo{Int64}}}}, Vector{StaticArraysCore.SVector{2, Float64}}, Vector{StaticArraysCore.SVector{2, Float64}}, Vector{StaticArraysCore.SVector{2, Float64}}, Vector{StaticArraysCore.SVector{1, Float64}}, Vector{Float64}, Matrix{Float64}, LowLevelParticleFilters.NullParameters, Nothing, typeof(LowLevelParticleFilters.safe_mean), typeof(LowLevelParticleFilters.safe_cov), Base.Broadcast.BroadcastFunction{typeof(-)}}
dynamics: dynamics (function of type typeof(Main.dynamics))
measurement: measurement (function of type typeof(Main.measurement))
R1: Array{Float64}((2, 2)) [0.010000000000000002 0.0; 0.0 0.010000000000000002]
R2: Array{Float64}((1, 1)) [0.04000000000000001;;]
d0: Distributions.MvNormal{Float64, PDMats.PDiagMat{Float64, StaticArraysCore.SVector{2, Float64}}, FillArrays.Zeros{Float64, 1, Tuple{Base.OneTo{Int64}}}}
xsd: Array{StaticArraysCore.SVector{2, Float64}}((5,))
xsd0: Array{StaticArraysCore.SVector{2, Float64}}((5,))
xsm: Array{StaticArraysCore.SVector{2, Float64}}((5,))
ys: Array{StaticArraysCore.SVector{1, Float64}}((5,))
x: Array{Float64}((2,)) [0.0, 0.0]
R: Array{Float64}((2, 2)) [1.0 0.0; 0.0 1.0]
t: Int64 0
Ts: Float64 1.0
ny: Int64 1
nu: Int64 1
p: LowLevelParticleFilters.NullParameters LowLevelParticleFilters.NullParameters()
reject: Nothing nothing
mean: safe_mean (function of type typeof(LowLevelParticleFilters.safe_mean))
cov: safe_cov (function of type typeof(LowLevelParticleFilters.safe_cov))
innovation: Base.Broadcast.BroadcastFunction(-) (function of type Base.Broadcast.BroadcastFunction{typeof(-)})
If your function `dynamics` describes a continuous-time ODE, do not forget to discretize it before passing it to the UKF. See Discretization for more information.
Extended Kalman Filter
The `ExtendedKalmanFilter` (EKF) is similar to the UKF, but propagates Gaussian distributions by linearizing the dynamics and using the formulas for linear systems, similar to the standard Kalman filter. This can be slightly faster than the UKF (not always), but also less accurate for strongly nonlinear systems. The linearization is performed automatically using ForwardDiff.jl unless the user provides Jacobian functions that compute $A$ and $C$. In general, the UKF is recommended over the EKF unless the EKF is faster and computational performance is the top priority.
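The linearization step can be illustrated in plain Julia. The package uses ForwardDiff.jl for this; a forward-difference sketch conveys the idea:

```julia
# Forward-difference Jacobian (illustration; the package uses ForwardDiff.jl)
function jacobian_fd(f, x; h = 1e-6)
    y0 = f(x)
    J = zeros(length(y0), length(x))
    for j in eachindex(x)
        xp = copy(x)
        xp[j] += h                    # perturb one coordinate at a time
        J[:, j] = (f(xp) - y0) ./ h
    end
    return J
end

# Linearizing nonlinear dynamics around a point recovers the local A matrix,
# which the EKF then uses in the standard covariance update R ← A*R*A' + R1
f(x) = [x[1] + 0.1 * sin(x[2]), 0.9 * x[2]]
A = jacobian_fd(f, [1.0, 0.0])        # ≈ [1.0 0.1; 0.0 0.9] since cos(0) = 1
```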
The EKF constructor has the following two signatures
ExtendedKalmanFilter(dynamics, measurement, R1, R2, d0=MvNormal(R1); nu::Int, p = LowLevelParticleFilters.NullParameters(), α = 1.0, check = true, Ajac = nothing, Cjac = nothing)
ExtendedKalmanFilter(kf, dynamics, measurement; Ajac = nothing, Cjac = nothing)
The first constructor takes all the arguments required to initialize the extended Kalman filter, while the second one takes an already defined standard `KalmanFilter`, `kf`, from which the covariance properties are taken. When using the first constructor, the user must provide the number of inputs to the system, `nu`.
If your function `dynamics` describes a continuous-time ODE, do not forget to discretize it before passing it to the EKF. See Discretization for more information.
AdvancedParticleFilter
The `AdvancedParticleFilter` works very much like the `ParticleFilter`, but admits more flexibility in its noise models.

The `AdvancedParticleFilter` type requires you to implement the same functions as the regular `ParticleFilter`, but in this case you also need to handle sampling from the noise distributions yourself. The function `dynamics` must have a method with the signature shown below: it must accept the state vector, control vector, parameters, time and a `noise::Bool` argument that indicates whether or not to add noise to the state. If noise should be added, this should be done inside `dynamics`. An example is given below.
using Random
const rng = Random.Xoshiro()
function dynamics(x, u, p, t, noise=false) # It's important that this defaults to false
x = A*x .+ B*u # A simple linear dynamics model in discrete time
if noise
x += rand(rng, df) # it's faster to supply your own rng
end
x
end
The `measurement_likelihood` function must have a method accepting state, input, measurement, parameters and time, and returning the log-likelihood of the measurement given the state. A simple example below:
function measurement_likelihood(x, u, y, p, t)
logpdf(dg, C*x-y) # A simple linear measurement model with normal additive noise
end
This gives you very high flexibility. The noise model in either function can, for instance, be a function of the state, something that is not possible for the simple `ParticleFilter`. To be able to simulate the `AdvancedParticleFilter` like we did with the simple filter above, a `measurement` method with the signature `measurement(x,u,p,t,noise=false)` must be available and return a sample measurement given the state (and possibly time). For our example measurement model above, this would look like this
measurement(x, u, p, t, noise=false) = C*x + noise*rand(rng, dg)
We now create the `AdvancedParticleFilter` and use it in the same way as the other filters:
apf = AdvancedParticleFilter(N, dynamics, measurement, measurement_likelihood, df, d0)
sol = forward_trajectory(apf, u, y, ny)
LowLevelParticleFilters.ParticleFilteringSolution (the lengthy printout is truncated: the solution object contains the filter, the input and output signals, the particle trajectories, their log-weights and weights, and the log-likelihood)
plot(sol, xreal=x)
We can even use this type as an AuxiliaryParticleFilter
apfa = AuxiliaryParticleFilter(apf)
sol = forward_trajectory(apfa, u, y, ny)
plot(sol, dim=1, xreal=x) # Same as above, but only plots a single dimension
See the tutorials section for more advanced examples, including state estimation for DAE (Differential-Algebraic Equation) systems.
Troubleshooting and tuning
Tuning a particle filter can be quite the challenge. To assist with this, we provide some visualization tools
debugplot(pf,u[1:20],y[1:20], runall=true, xreal=x[1:20])
Time Surviving Effective nbr of particles
--------------------------------------------------------------
t: 1 1.000 2000.0
t: 2 1.000 302.2
t: 3 0.156 2000.0
t: 4 1.000 1302.4
t: 5 1.000 904.4
t: 6 1.000 467.4
t: 7 1.000 203.1
t: 8 0.225 2000.0
t: 9 1.000 1764.4
t: 10 1.000 1365.9
t: 11 1.000 1029.4
t: 12 1.000 767.8
t: 13 1.000 521.4
t: 14 1.000 448.1
t: 15 1.000 257.8
t: 16 1.000 246.9
t: 17 0.195 2000.0
t: 18 1.000 1467.8
t: 19 1.000 742.5
t: 20 1.000 750.8
The plot displays all states and all measurements. The heatmap in the background represents the weighted particle distributions per time step. For the measurement sequences, the heatmap represents the distributions of predicted measurements. The blue dots correspond to measured values. In this case we simulated the data, so we had access to the true states as well; if you do not have that, just omit `xreal`. You can also manually step through the time series using
commandplot(pf,u,y; kwargs...)
For options to the debug plots, see `?pplot`.
Tuning noise parameters through optimization
See examples in Parameter Estimation.
Tuning through simulation
It is possible to sample from the Bayesian model implied by a filter and its parameters by calling the function `simulate`. A simple tuning strategy is to adjust the noise parameters such that a simulation looks "similar" to the data, i.e., the data must not be too unlikely under the model.
Videos
Several video tutorials using this package are available in the playlists. Some examples featuring this package in particular are:
- Using an optimizer to optimize the likelihood of an `UnscentedKalmanFilter`
- Estimation of time-varying parameters
- Adaptive control by means of estimation of time-varying parameters