Exported functions and types
Index
LowLevelParticleFilters.AdvancedParticleFilter
LowLevelParticleFilters.AuxiliaryParticleFilter
LowLevelParticleFilters.ExtendedKalmanFilter
LowLevelParticleFilters.KalmanFilter
LowLevelParticleFilters.ParticleFilter
LowLevelParticleFilters.SqKalmanFilter
LowLevelParticleFilters.UnscentedKalmanFilter
LowLevelParticleFilters.commandplot
LowLevelParticleFilters.correct!
LowLevelParticleFilters.correct!
LowLevelParticleFilters.correct!
LowLevelParticleFilters.debugplot
LowLevelParticleFilters.densityplot
LowLevelParticleFilters.forward_trajectory
LowLevelParticleFilters.forward_trajectory
LowLevelParticleFilters.log_likelihood_fun
LowLevelParticleFilters.loglik
LowLevelParticleFilters.logsumexp!
LowLevelParticleFilters.mean_trajectory
LowLevelParticleFilters.mean_trajectory
LowLevelParticleFilters.metropolis
LowLevelParticleFilters.prediction_errors!
LowLevelParticleFilters.reset!
LowLevelParticleFilters.reset!
LowLevelParticleFilters.reset!
LowLevelParticleFilters.simulate
LowLevelParticleFilters.smooth
LowLevelParticleFilters.smooth
LowLevelParticleFilters.smoothed_cov
LowLevelParticleFilters.smoothed_mean
LowLevelParticleFilters.smoothed_trajs
LowLevelParticleFilters.update!
LowLevelParticleFilters.weighted_cov
LowLevelParticleFilters.weighted_mean
LowLevelParticleFilters.weighted_mean
StatsAPI.predict!
StatsAPI.predict!
StatsAPI.predict!
LowLevelParticleFilters.AdvancedParticleFilter
— Method
AdvancedParticleFilter(N::Integer, dynamics::Function, measurement::Function, measurement_likelihood, dynamics_density, initial_density; p = NullParameters(), threads = false, kwargs...)
This type represents a standard particle filter but affords extra flexibility compared to the ParticleFilter
type, e.g., non-additive noise in the dynamics and measurement functions.
See the docs for more information: https://baggepinnen.github.io/LowLevelParticleFilters.jl/stable/#AdvancedParticleFilter-1
Arguments:
- N: Number of particles
- dynamics: A discrete-time dynamics function (x, u, p, t, noise=false) -> x⁺. It's important that the noise argument defaults to false.
- measurement: A measurement function (x, u, p, t, noise=false) -> y. It's important that the noise argument defaults to false.
- measurement_likelihood: A function (x, u, y, p, t) -> logl to evaluate the log-likelihood of a measurement.
- dynamics_density: This field is not used by the advanced filter and can be set to nothing.
- initial_density: The distribution of the initial state.
- threads: Use threads to propagate particles in parallel. Only activate this if your dynamics is thread-safe. SeeToDee.SimpleColloc is not thread-safe by default due to the use of internal caches, but SeeToDee.Rk4 is.
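Below is a minimal construction sketch. The two-state system, noise levels, and particle count are assumptions chosen for illustration, not values prescribed by the package:
using LowLevelParticleFilters, Distributions
# Hypothetical two-state system; noise is injected only when noise=true
function dynamics(x, u, p, t, noise=false)
    xp = [x[1] + 0.1x[2], 0.9x[2] + u[1]]
    noise ? xp .+ 0.1 .* randn(2) : xp
end
measurement(x, u, p, t, noise=false) = noise ? [x[1] + 0.1randn()] : [x[1]]
measurement_likelihood(x, u, y, p, t) = logpdf(Normal(x[1], 0.1), y[1])
d0  = MvNormal(zeros(2), [1.0 0.0; 0.0 1.0])   # initial state distribution
apf = AdvancedParticleFilter(200, dynamics, measurement, measurement_likelihood, nothing, d0)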
LowLevelParticleFilters.AuxiliaryParticleFilter
— Method
AuxiliaryParticleFilter(args...; kwargs...)
Takes exactly the same arguments as ParticleFilter, or an instance of ParticleFilter.
LowLevelParticleFilters.ExtendedKalmanFilter
— Type
ExtendedKalmanFilter(kf, dynamics, measurement; Ajac, Cjac)
ExtendedKalmanFilter(dynamics, measurement, R1, R2, d0=MvNormal(Matrix(R1)); nu::Int, p = NullParameters(), α = 1.0, check = true)
A nonlinear state estimator propagating uncertainty using linearization.
The constructor to the extended Kalman filter takes dynamics and measurement functions, and either covariance matrices or a KalmanFilter. If the former constructor is used, the number of inputs to the system dynamics, nu, must be explicitly provided with a keyword argument.
By default, the filter will internally linearize the dynamics using ForwardDiff. User-provided Jacobian functions can be supplied as keyword arguments Ajac and Cjac. These functions should have the signature (x,u,p,t)::AbstractMatrix where x is the state, u is the input, p is the parameters, and t is the time.
The dynamics and measurement functions are on the following form
x(t+1) = dynamics(x, u, p, t) + w
y = measurement(x, u, p, t) + e
where w ~ N(0, R1), e ~ N(0, R2) and x(0) ~ d0.
See also UnscentedKalmanFilter, which is typically more accurate than ExtendedKalmanFilter. See KalmanFilter for detailed instructions on how to set up a Kalman filter kf.
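Below is a minimal construction sketch; the pendulum-like system and covariance values are assumptions chosen for illustration:
using LowLevelParticleFilters, Distributions
dynamics(x, u, p, t) = [x[1] + 0.1x[2], x[2] + 0.1*(-sin(x[1]) + u[1])]  # hypothetical system
measurement(x, u, p, t) = [x[1]]                                          # measure the first state only
R1 = [1e-4 0.0; 0.0 1e-3]                     # process-noise covariance (assumed)
R2 = fill(1e-2, 1, 1)                         # measurement-noise covariance (assumed)
d0 = MvNormal(zeros(2), [1.0 0.0; 0.0 1.0])   # initial state distribution
ekf = ExtendedKalmanFilter(dynamics, measurement, R1, R2, d0; nu = 1)    # nu is required here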
LowLevelParticleFilters.KalmanFilter
— Type
KalmanFilter(A,B,C,D,R1,R2,d0=MvNormal(R1); p = NullParameters(), α=1, check=true)
The matrices A, B, C, D define the dynamics
x' = Ax + Bu + w
y = Cx + Du + e
where w ~ N(0, R1), e ~ N(0, R2) and x(0) ~ d0.
The matrices can be time varying such that, e.g., A[:, :, t] contains the $A$ matrix at time index t. They can also be given as functions on the form
Afun(x, u, p, t) -> A
For maximum performance, provide statically sized matrices from StaticArrays.jl.
α is an optional "forgetting factor": if this is set to a value > 1, such as 1.01-1.2, the filter will, in addition to the covariance inflation due to $R_1$, exhibit "exponential forgetting" similar to a Recursive Least-Squares (RLS) estimator. It is thus possible to get an RLS-like algorithm by setting $R_1=0, R_2 = 1/α$ and $α > 1$ ($α$ is the inverse of the traditional RLS parameter $α = 1/λ$). The exact form of the covariance update is
\[R(t+1|t) = α AR(t)A^T + R_1\]
If check = true (default), the function will check that the eigenvalues of A are less than 2 in absolute value. Large eigenvalues may be an indication that the system matrices represent a continuous-time system and the user has forgotten to discretize it. Turn off this check by setting check = false.
Tutorials on Kalman filtering
The tutorial "How to tune a Kalman filter" details how to figure out appropriate covariance matrices for the Kalman filter, as well as how to add disturbance models to the system model. See also the tutorial in the documentation.
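Below is a minimal construction sketch; the system matrices and covariances are assumptions chosen for illustration:
using LowLevelParticleFilters, Distributions
A  = [1.0 0.1; 0.0 1.0]                       # hypothetical discrete-time system
B  = reshape([0.005, 0.1], 2, 1)
C  = [1.0 0.0]
D  = zeros(1, 1)
R1 = [1e-4 0.0; 0.0 1e-3]                     # process-noise covariance (assumed)
R2 = fill(1e-2, 1, 1)                         # measurement-noise covariance (assumed)
d0 = MvNormal(zeros(2), [1.0 0.0; 0.0 1.0])   # initial state distribution
kf = KalmanFilter(A, B, C, D, R1, R2, d0)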
LowLevelParticleFilters.ParticleFilter
— Method
ParticleFilter(N::Integer, dynamics, measurement, dynamics_density, measurement_density, initial_density; threads = false, p = NullParameters(), kwargs...)
See the docs for more information: https://baggepinnen.github.io/LowLevelParticleFilters.jl/stable/#Particle-filter-1
Arguments:
- N: Number of particles
- dynamics: A discrete-time dynamics function (x, u, p, t) -> x⁺
- measurement: A measurement function (x, u, p, t) -> y
- dynamics_density: A probability-density function for additive noise in the dynamics. Use AdvancedParticleFilter for non-additive noise.
- measurement_density: A probability-density function for additive measurement noise. Use AdvancedParticleFilter for non-additive noise.
- initial_density: Distribution of the initial state.
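Below is a minimal construction sketch; the system and noise densities are assumptions chosen for illustration:
using LowLevelParticleFilters, Distributions
dynamics(x, u, p, t) = [x[1] + 0.1x[2], 0.9x[2] + u[1]]   # hypothetical system
measurement(x, u, p, t) = [x[1]]
df = MvNormal(zeros(2), [1e-2 0.0; 0.0 1e-2])   # additive dynamics-noise density (assumed)
dg = MvNormal(zeros(1), fill(1e-2, 1, 1))       # additive measurement-noise density (assumed)
d0 = MvNormal(zeros(2), [1.0 0.0; 0.0 1.0])     # initial state distribution
pf = ParticleFilter(500, dynamics, measurement, df, dg, d0)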
LowLevelParticleFilters.SqKalmanFilter
— Type
SqKalmanFilter(A,B,C,D,R1,R2,d0=MvNormal(R1); p = NullParameters(), α=1)
A standard Kalman filter on square-root form. This filter may have better numerical performance when the covariance matrices are ill-conditioned.
The matrices A, B, C, D define the dynamics
x' = Ax + Bu + w
y = Cx + Du + e
where w ~ N(0, R1), e ~ N(0, R2) and x(0) ~ d0.
The matrices can be time varying such that, e.g., A[:, :, t] contains the $A$ matrix at time index t. They can also be given as functions on the form
Afun(x, u, p, t) -> A
For this filter, the internal fields storing covariance matrices hold the upper-triangular Cholesky factor.
α is an optional "forgetting factor": if this is set to a value > 1, such as 1.01-1.2, the filter will, in addition to the covariance inflation due to $R_1$, exhibit "exponential forgetting" similar to a Recursive Least-Squares (RLS) estimator. It is thus possible to get an RLS-like algorithm by setting $R_1=0, R_2 = 1/α$ and $α > 1$ ($α$ is the inverse of the traditional RLS parameter $α = 1/λ$). The form of the covariance update is
\[R(t+1|t) = α AR(t)A^T + R_1\]
Ref: "A Square-Root Kalman Filter Using Only QR Decompositions", Kevin Tracy https://arxiv.org/abs/2208.06452
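Construction takes the same arguments as KalmanFilter; a one-line sketch reusing the (assumed) matrices from the KalmanFilter example above:
sqkf = SqKalmanFilter(A, B, C, D, R1, R2, d0)   # same arguments as KalmanFilter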
LowLevelParticleFilters.UnscentedKalmanFilter
— Method
UnscentedKalmanFilter(dynamics, measurement, R1, R2, d0=MvNormal(Matrix(R1)); p = NullParameters(), ny, nu)
A nonlinear state estimator propagating uncertainty using the unscented transform.
The dynamics and measurement functions are on either of the following forms
x' = dynamics(x, u, p, t) + w
y = measurement(x, u, p, t) + e
x' = dynamics(x, u, p, t, w)
y = measurement(x, u, p, t, e)
where w ~ N(0, R1), e ~ N(0, R2) and x(0) ~ d0. The former (default) assumes that the noise is additive and added after the dynamics and measurement updates, while the latter assumes that the dynamics functions take an additional argument corresponding to the noise term. The latter form (sometimes referred to as the "augmented" form) is useful when the noise is multiplicative or when the noise is added before the dynamics and measurement updates. See "Augmented UKF" below for more details on how to use this form.
The matrices R1, R2 can be time varying such that, e.g., R1[:, :, t] contains the $R_1$ matrix at time index t. They can also be given as functions on the form
Rfun(x, u, p, t) -> R
For maximum performance, provide statically sized matrices from StaticArrays.jl.
ny, nu indicate the number of outputs and inputs.
Custom type of u
The input u may be of any type, e.g., a named tuple or a custom struct. The u provided in the input data is passed directly to the dynamics and measurement functions, so as long as the type is compatible with the dynamics it will work out. The one exception where this will not work is when calling simulate, which assumes that u is an array.
Augmented UKF
If the noise is not additive, one may use the augmented form of the UKF. In this form, the dynamics functions take additional input arguments that correspond to the noise terms. To enable this form, the typed constructor
UnscentedKalmanFilter{inplace_dynamics,inplace_measurement,augmented_dynamics,augmented_measurement}(...)
is used, where the Boolean type parameters have the following meaning
- inplace_dynamics: If true, the dynamics function operates in-place, i.e., it modifies the first argument in dynamics(dx, x, u, p, t). Default is false.
- inplace_measurement: If true, the measurement function operates in-place, i.e., it modifies the first argument in measurement(y, x, u, p, t). Default is false.
- augmented_dynamics: If true, the dynamics function is augmented with an additional noise input w, i.e., dynamics(x, u, p, t, w). Default is false.
- augmented_measurement: If true, the measurement function is augmented with an additional noise input e, i.e., measurement(x, u, p, t, e). Default is false.
Use of augmented dynamics incurs extra computational cost. The number of sigma points used is 2L+1 where L is the length of the augmented state vector. Without augmentation, L = nx; with augmentation, L = nx + nw and L = nx + ne for dynamics and measurement, respectively.
Sigma-point rejection
For problems with challenging dynamics, a mechanism for rejection of sigma points after the dynamics update is provided. A function reject(x) -> Bool can be provided through the keyword argument reject that returns true if a sigma point for $x(t+1)$ should be rejected, e.g., if an instability or non-finite number is detected. A rejected point is replaced by the propagated mean point (the mean point cannot be rejected). This function may be provided either to the constructor of the UKF or passed to the predict! function.
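A minimal construction sketch for the default (additive-noise) form; the system and covariances are assumptions chosen for illustration, and note that ny and nu are required keyword arguments:
using LowLevelParticleFilters, Distributions
dynamics(x, u, p, t) = [x[1] + 0.1x[2], x[2] + 0.1*(-sin(x[1]) + u[1])]  # hypothetical system
measurement(x, u, p, t) = [x[1]]
R1 = [1e-4 0.0; 0.0 1e-3]                     # process-noise covariance (assumed)
R2 = fill(1e-2, 1, 1)                         # measurement-noise covariance (assumed)
d0 = MvNormal(zeros(2), [1.0 0.0; 0.0 1.0])   # initial state distribution
ukf = UnscentedKalmanFilter(dynamics, measurement, R1, R2, d0; ny = 1, nu = 1)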
LowLevelParticleFilters.commandplot
— Function
commandplot(pf, u, y, p=parameters(pf); kwargs...)
Produce a helpful plot. For customization options (kwargs...), see ?pplot. After each time step, a command from the user is requested.
- q: quit
- s n: step n steps
This function requires using Plots to be called before it is used.
LowLevelParticleFilters.correct!
— Function
(; ll, e, S, Sᵪ, K) = correct!(kf::AbstractKalmanFilter, u, y, p = parameters(kf), t::Integer = index(kf), R2)
The correct step for a Kalman filter returns not only the log-likelihood ll and the prediction error e, but also the covariance of the output S, its Cholesky factor Sᵪ and the Kalman gain K.
If R2 stored in kf is a function R2(x, u, p, t), this function is evaluated at the state before the correction is performed. The measurement noise covariance matrix R2 stored in the filter object can optionally be overridden by passing the argument R2; in this case R2 must be a matrix.
LowLevelParticleFilters.correct!
— Function
ll, e = correct!(f, u, y, p = parameters(f), t = index(f))
Update state/covariance/weights based on measurement y, returns log-likelihood and prediction error (the error is always 0 for particle filters).
LowLevelParticleFilters.correct!
— Function
correct!(kf::SqKalmanFilter, u, y, p = parameters(kf), t::Real = index(kf); R2 = get_mat(kf.R2, kf.x, u, p, t))
For the square-root Kalman filter, a custom provided R2 must be the upper triangular Cholesky factor of the covariance matrix of the measurement noise.
LowLevelParticleFilters.debugplot
— Function
debugplot(pf, u, y, p=parameters(pf); runall=false, kwargs...)
Produce a helpful plot. For customization options (kwargs...), see ?pplot.
- runall=false: if true, runs all time steps before displaying (faster); if false, displays the plot after each time step.
The generated plot becomes quite heavy. Initially, try limiting your input to 100 time steps to verify that it doesn't crash.
This function requires using Plots to be called before it is used.
LowLevelParticleFilters.densityplot
— Function
densityplot(x, [w])
Plot (weighted) particle densities.
LowLevelParticleFilters.forward_trajectory
— Function
sol = forward_trajectory(pf, u::AbstractVector, y::AbstractVector, p=parameters(pf))
Run the particle filter for a sequence of inputs and measurements. Return a solution with x,w,we,ll = particles, weights, expweights and loglikelihood.
If MonteCarloMeasurements.jl is loaded, you may transform the output particles to Matrix{MonteCarloMeasurements.Particles} using Particles(x,we). Internally, the particles are then resampled such that they all have unit weight. This is convenient for making use of the plotting facilities of MonteCarloMeasurements.jl.
sol can be plotted
plot(sol::ParticleFilteringSolution; nbinsy=30, xreal=nothing, dim=nothing)
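A usage sketch, assuming pf is the particle filter constructed in the ParticleFilter example above; the data here are generated with simulate (documented below) rather than taken from a real system:
using Plots, Distributions  # Plots is required for plotting the solution
x, u, y = simulate(pf, 100, MvNormal(zeros(1), fill(1.0, 1, 1)))  # assumed input distribution
sol = forward_trajectory(pf, u, y)
plot(sol)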
LowLevelParticleFilters.forward_trajectory
— Function
sol = forward_trajectory(kf::AbstractKalmanFilter, u::Vector, y::Vector, p=parameters(kf))
Run a Kalman filter forward.
Returns a KalmanFilteringSolution with the following fields:
- x: predictions $x(t|t-1)$
- xt: filtered estimates $x(t|t)$
- R: predicted covariance matrices $R(t|t-1)$
- Rt: filter covariances $R(t|t)$
- ll: loglik
sol can be plotted
plot(sol::KalmanFilteringSolution; plotx = true, plotxt=true, plotu=true, ploty=true)
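A usage sketch, assuming kf is a Kalman filter and u, y are vectors of input and measurement vectors (e.g., obtained from simulate below):
sol = forward_trajectory(kf, u, y)
sol.xt     # filtered estimates x(t|t)
sol.ll     # log-likelihood of the data
plot(sol)  # requires Plots to be loaded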
LowLevelParticleFilters.log_likelihood_fun
— Method
ll(θ) = log_likelihood_fun(filter_from_parameters(θ::Vector)::Function, priors::Vector{Distribution}, u, y, p)
Returns a function θ -> p(y|θ)p(θ).
LowLevelParticleFilters.loglik
— Function
ll = loglik(filter, u, y, p=parameters(filter))
Calculate the log-likelihood for the entire sequences u, y.
LowLevelParticleFilters.logsumexp!
— Function
ll = logsumexp!(w, we [, maxw])
Normalizes the weight vector w and returns the weighted log-likelihood.
References: https://arxiv.org/pdf/1412.8695.pdf eq. 3.8 for p(y); https://discourse.julialang.org/t/fast-logsumexp/22827/7?u=baggepinnen for a stable logsumexp.
LowLevelParticleFilters.mean_trajectory
— Method
x,ll = mean_trajectory(pf, u::Vector{Vector}, y::Vector{Vector}, p=parameters(pf))
This method resets the particle filter to the initial state distribution upon start.
LowLevelParticleFilters.mean_trajectory
— Method
mean_trajectory(sol::ParticleFilteringSolution)
mean_trajectory(x::AbstractMatrix, we::AbstractMatrix)
Compute the weighted mean along the trajectory of a particle-filter solution. Returns a matrix of size T × nx. If x and we are supplied, the weights are expected to be in the original space (not log space).
LowLevelParticleFilters.metropolis
— Function
metropolis(ll::Function(θ), R::Int, θ₀::Vector, draw::Function(θ) = naive_sampler(θ₀))
Performs MCMC sampling using the marginal Metropolis(-Hastings) algorithm. draw = θ -> θ' samples a new parameter vector given an old parameter vector. The distribution must be symmetric, e.g., a Gaussian. R is the number of iterations. See log_likelihood_fun.
Example:
filter_from_parameters(θ) = ParticleFilter(N, dynamics, measurement, MvNormal(n,exp(θ[1])), MvNormal(p,exp(θ[2])), d0)
priors = [Normal(0,0.1),Normal(0,0.1)]
ll = log_likelihood_fun(filter_from_parameters,priors,u,y,1)
θ₀ = log.([1.,1.]) # Initial point
draw = θ -> θ .+ rand(MvNormal(0.1ones(2))) # Function that proposes new parameters (has to be symmetric)
burnin = 200 # If using threaded call, provide number of burnin iterations
# @time theta, lls = metropolis(ll, 2000, θ₀, draw) # Run single threaded
# thetam = reduce(hcat, theta)'
@time thetalls = LowLevelParticleFilters.metropolis_threaded(burnin, ll, 5000, θ₀, draw) # run on all threads, will provide (5000-burnin)*nthreads() samples
histogram(exp.(thetalls[:,1:2]), layout=3)
plot!(thetalls[:,3], subplot=3) # if threaded call, log likelihoods are in the last column
LowLevelParticleFilters.reset!
— Method
reset!(kf::AbstractKalmanFilter; x0)
Reset the initial distribution of the state. Optionally, a new mean vector x0 can be provided.
LowLevelParticleFilters.reset!
— Method
Reset the filter to initial state and covariance/distribution.
LowLevelParticleFilters.reset!
— Method
reset!(kf::SqKalmanFilter; x0)
Reset the initial distribution of the state. Optionally, a new mean vector x0 can be provided.
LowLevelParticleFilters.simulate
— Function
x,u,y = simulate(f::AbstractFilter, T::Int, du::Distribution, p=parameters(f), [N]; dynamics_noise=true, measurement_noise=true)
x,u,y = simulate(f::AbstractFilter, u, p=parameters(f); dynamics_noise=true, measurement_noise=true)
Simulate the dynamical system forward in time T steps, or for the duration of u. Returns state sequence, inputs and measurements.
- u is an input-signal trajectory; alternatively, du is a distribution of random inputs.
A simulation can be considered a draw from the prior distribution over the evolution of the system implied by the selected noise models. Such a simulation is useful in order to evaluate whether or not the noise models are reasonable.
If MonteCarloMeasurements.jl is loaded, the argument N::Int can be supplied, in which case N simulations are done and the result is returned in the form of Vector{MonteCarloMeasurements.Particles}.
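A usage sketch, assuming kf from the KalmanFilter example above; the input distribution and horizon are arbitrary choices for illustration:
using Distributions
du = MvNormal(zeros(1), fill(1.0, 1, 1))   # distribution of random inputs (assumed)
x, u, y = simulate(kf, 200, du)            # 200 steps with process and measurement noise
xn, _, yn = simulate(kf, u; dynamics_noise=false, measurement_noise=false)  # noise-free rerun with the same inputs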
LowLevelParticleFilters.smooth
— Function
xb,ll = smooth(pf, M, u, y, p=parameters(pf))
xb,ll = smooth(pf, xf, wf, wef, ll, M, u, y, p=parameters(pf))
Perform particle smoothing using forward-filtering, backward simulation. Return smoothed particles and log-likelihood. See also smoothed_trajs, smoothed_mean, smoothed_cov.
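A usage sketch, assuming pf, u and y from the particle-filter examples above; the number of smoothed trajectories M is an arbitrary choice here:
M = 10                         # number of backward-simulated trajectories (assumed)
xb, ll = smooth(pf, M, u, y)   # forward-filtering, backward simulation
xbm = smoothed_mean(xb)        # mean of smoothed particle trajectories
xbc = smoothed_cov(xb)         # covariance of smoothed particle trajectories
xbt = smoothed_trajs(xb)       # trajectories as a 3D array (N, M, T)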
LowLevelParticleFilters.smooth
— Function
xT,RT,ll = smooth(kf::KalmanFilter, u::Vector, y::Vector, p=parameters(kf))
xT,RT,ll = smooth(kf::ExtendedKalmanFilter, u::Vector, y::Vector, p=parameters(kf))
Returns smoothed estimates of the state x and covariance R, given all input-output data u, y.
LowLevelParticleFilters.smoothed_cov
— Method
smoothed_cov(xb)
Helper function to calculate the covariance of smoothed particle trajectories.
LowLevelParticleFilters.smoothed_mean
— Method
smoothed_mean(xb)
Helper function to calculate the mean of smoothed particle trajectories.
LowLevelParticleFilters.smoothed_trajs
— Method
smoothed_trajs(xb)
Helper function to get particle trajectories as a three-dimensional array (N, M, T) instead of a matrix of vectors.
LowLevelParticleFilters.update!
— Function
ll, e = update!(f::AbstractFilter, u, y, p = parameters(f), t = index(f))
Perform one step of predict! and correct!, returns log-likelihood and prediction error.
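A sketch of a manual filtering loop built from update!, assuming kf, u and y from earlier sketches; the helper name run_filter! is hypothetical:
function run_filter!(kf, u, y)
    reset!(kf)                                # start from the initial state distribution
    ll_total = 0.0
    for t in eachindex(y)
        ll_t, e_t = update!(kf, u[t], y[t])   # one predict! + correct! step
        ll_total += ll_t
    end
    ll_total                                  # compare with loglik(kf, u, y)
end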
LowLevelParticleFilters.weighted_cov
— Method
weighted_cov(x,we)
Similar to weighted_mean, but returns covariances.
LowLevelParticleFilters.weighted_mean
— Method
x̂ = weighted_mean(x,we)
Calculate the weighted mean of particle trajectories. we are expweights.
LowLevelParticleFilters.weighted_mean
— Method
x̂ = weighted_mean(pf)
x̂ = weighted_mean(s::PFstate)
StatsAPI.predict!
— Function
predict!(f, u, p = parameters(f), t = index(f))
Move the filter state forward in time using the dynamics equation and input vector u.
StatsAPI.predict!
— Function
predict!(kf::AbstractKalmanFilter, u, p = parameters(kf), t::Integer = index(kf); R1, α = kf.α)
Perform the prediction step (updating the state estimate to $x(t+1|t)$). If R1 stored in kf is a function R1(x, u, p, t), this function is evaluated at the state before the prediction is performed. The dynamics noise covariance matrix R1 stored in kf can optionally be overridden by passing the argument R1; in this case R1 must be a matrix.
StatsAPI.predict!
— Function
predict!(kf::SqKalmanFilter, u, p = parameters(kf), t::Real = index(kf); R1 = get_mat(kf.R1, kf.x, u, p, t), α = kf.α)
For the square-root Kalman filter, a custom provided R1 must be the upper triangular Cholesky factor of the covariance matrix of the process noise.
LowLevelParticleFilters.prediction_errors!
— Function
prediction_errors!(res, f::AbstractFilter, u, y, p = parameters(f), λ = 1)
Calculate the prediction errors and store the result in res. Similar to sse, this function is useful for sum-of-squares optimization. In contrast to sse, this function returns the residuals themselves rather than their sum of squares. This is useful for Gauss-Newton style optimizers, such as LeastSquaresOptim.LevenbergMarquardt.
Arguments:
- res: A vector of length ny*length(y). Note, for each datapoint in u and y, there are ny outputs, and thus ny residuals.
- f: Any filter
- λ: A weighting factor to minimize dot(e, λ, e). A commonly used metric is λ = Diagonal(1 ./ (mag.^2)), where mag is a vector of the "typical magnitude" of each output. Internally, the square root W = sqrt(λ) is calculated so that the residuals stored in res are W*e.
See example in Solving using Gauss-Newton optimization.
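A minimal sketch of preparing the residual buffer; the output dimension is an assumption matching the earlier examples:
ny  = 1                           # number of outputs (assumed)
res = zeros(ny * length(y))       # pre-allocated residual vector
prediction_errors!(res, kf, u, y)
sum(abs2, res)                    # weighted sum of squared prediction errors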