X-ray Tomography#


If you are running this notebook locally, make sure you've followed the environment setup steps (the environment.yml file lists the packages required to run the notebooks).

Adapted from notebooks by Andrew Valentine & Malcolm Sambridge - Research School of Earth Sciences, The Australian National University

In this notebook, we look at a linear inverse problem based on X-ray Tomography. We will use cofi to run a linear system solver (optionally with Tikhonov regularization and noise estimation) for this problem.

As an example, we will consider performing X-Ray Tomography (XRT) to image the interior of a structure. We assume that the x-rays travel at the same speed regardless of the medium through which they are passing, and so their paths are straight lines between source and receiver. However, the medium causes the x-rays to attenuate: paths through dense objects (such as bones!) arrive at the receiver with far less energy than they had at the source. Thus, by analysing the attenuation along many different paths, we can build up a picture of the interior of an object.

Specifically, we will assume that the intensity at the receiver, \(I_{rec}\), is related to the intensity at the source, \(I_{src}\) by

\[I_{rec} = I_{src}\exp\left\{-\int_\mathrm{path} \mu(\mathbf{x})\,\mathrm{d}\mathbf{l}\right\}\]

where \(\mu(\mathbf{x})\) is a position-dependent attenuation coefficient. To obtain a linear inverse problem, we rewrite this as

\[-\log \frac{I_{rec}}{I_{src}}=\int_\mathrm{path} \mu(\mathbf{x})\,\mathrm{d}\mathbf{l}\,.\]

We know that

\[\int\left[f(x) + g(x)\right]\,\mathrm{d}x = \int f(x)\,\mathrm{d}x + \int g(x)\,\mathrm{d}x\]

so we say that integration is a linear operation, and hence we can solve the XRT problem with linear inverse theory.

We will assume that the object we are interested in is 2-dimensional, so that \(\mu(\boldsymbol{x}) = \mu(x,y)\). If we discretize this model, with \(N_x\) cells in the \(x\)-direction and \(N_y\) cells in the \(y\)-direction, we can express \(\mu(x,y)\) as a vector \(\boldsymbol{\mu}\) of length \(N_x N_y\). This is related to the data by

\[d_i = A_{ij}\mu_j\]

where \(d_i = -\log {I^{(i)}_{rec}}/{I^{(i)}_{src}}\), \(A_{ij}\) is the path length of ray \(i\) within cell \(j\) of the discretized model, and summation over the repeated index \(j\) is implied.
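
To make the discretized forward relation concrete, here is a minimal sketch on a tiny synthetic system. The matrix A_toy and model mu_toy are illustrative placeholders only; the real path-length matrix is built by the espresso forward code used below.

import numpy as np

# Tiny illustrative system: 3 ray paths through a model with 4 cells.
# A_toy[i, j] plays the role of the length of ray i inside cell j.
A_toy = np.array([[1.0, 0.5, 0.0, 0.0],
                  [0.0, 0.7, 0.7, 0.0],
                  [0.0, 0.0, 0.5, 1.0]])
mu_toy = np.array([1.0, 1.2, 0.9, 1.1])   # attenuation coefficient in each cell

d_toy = A_toy @ mu_toy                    # d_i = A_ij mu_j, i.e. -log(I_rec/I_src) per path
print(d_toy)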

0. Import modules#

The package cofi-espresso contains the forward code for this problem.

# -------------------------------------------------------- #
#                                                          #
#     Uncomment below to set up environment on "colab"     #
#                                                          #
# -------------------------------------------------------- #

# !pip install -U cofi
# !pip install -U cofi-espresso
import numpy as np
from cofi import BaseProblem, InversionOptions, Inversion
from cofi_espresso import XrayTomography

1. Define the problem#

Firstly, we get some information from the cofi-espresso module, namely the dataset and the Jacobian matrix. In this X-ray Tomography example, the entries of the Jacobian matrix are the lengths of the ray paths within each grid cell. Since the paths are fixed, the Jacobian matrix stays constant.

xrt = XrayTomography()
xrt_problem = BaseProblem()
xrt_problem.set_data(xrt.data)
xrt_problem.set_jacobian(xrt.jacobian(xrt.starting_model))
Evaluating paths: 100%|##########| 10416/10416 [00:01<00:00, 7972.45it/s]
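
As an optional sanity check (assuming the Jacobian is returned as a dense NumPy array, which is what the scipy.linalg.lstsq backend used below requires), we can inspect its shape and sparsity: each row corresponds to one ray path and each column to one model cell.

A = xrt.jacobian(xrt.starting_model)        # path-length matrix (re-evaluates the paths)
print(A.shape)                              # (number of ray paths, number of model cells)
print(np.count_nonzero(A) / A.size)         # each ray only crosses a small fraction of cells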

Next, we supply an estimate of the data noise and add a Tikhonov regularization term.

sigma = 0.002                                                # estimated noise level on the attenuation data
lamda = 50                                                   # regularization strength (epsilon^2 in Section 3)
data_cov_inv = np.identity(xrt.data_size) * (1/sigma**2)     # inverse data covariance: I / sigma^2
reg_matrix = lamda * np.identity(xrt.model_size)             # damping (Tikhonov) matrix: lamda * I
xrt_problem.set_data_covariance_inv(data_cov_inv)
xrt_problem.set_regularization(2, 1, reg_matrix)
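
For reference, this setup corresponds (up to the conventions used internally by cofi) to minimizing the noise-weighted, damped least-squares objective

\[\phi(\boldsymbol{\mu}) = \left(\mathbf{d}-\mathbf{A}\boldsymbol{\mu}\right)^T\mathbf{C}_d^{-1}\left(\mathbf{d}-\mathbf{A}\boldsymbol{\mu}\right) + \boldsymbol{\mu}^T\mathbf{W}\boldsymbol{\mu}\]

with \(\mathbf{C}_d^{-1} = \mathbf{I}/\sigma^2\) and \(\mathbf{W} = \lambda\mathbf{I}\). Minimizing this objective leads to the closed-form Tikhonov solution written out in Section 3 below.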

Review what information is included in the BaseProblem object:

xrt_problem.summary()
=====================================================================
Summary for inversion problem: BaseProblem
=====================================================================
Model shape: Unknown
---------------------------------------------------------------------
List of functions/properties set by you:
['jacobian', 'regularization', 'regularization_matrix', 'regularization_factor', 'data', 'data_covariance_inv']
---------------------------------------------------------------------
List of functions/properties created based on what you have provided:
['jacobian_times_vector']
---------------------------------------------------------------------
List of functions/properties that can be further set for the problem:
( not all of these may be relevant to your inversion workflow )
['objective', 'log_posterior', 'log_posterior_with_blobs', 'log_likelihood', 'log_prior', 'gradient', 'hessian', 'hessian_times_vector', 'residual', 'jacobian_times_vector', 'data_misfit', 'forward', 'data_covariance', 'initial_model', 'model_shape', 'blobs_dtype', 'bounds', 'constraints']

2. Define the inversion options#

my_options = InversionOptions()
my_options.set_tool("scipy.linalg.lstsq")
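
If you are unsure which backend to use, the options object can list the available solving methods and tools; these are the helper calls referenced in the summary output below.

# Optional: explore the alternatives before settling on scipy.linalg.lstsq
my_options.suggest_solving_methods()
my_options.suggest_tools()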

Review what’s been defined for the inversion we are about to run:

my_options.summary()
=============================
Summary for inversion options
=============================
Solving method: None set
Use `suggest_solving_methods()` to check available solving methods.
-----------------------------
Backend tool: `scipy.linalg.lstsq` - SciPy's wrapper function over LAPACK's linear least-squares solver, using 'gelsd', 'gelsy' (default), or 'gelss' as backend driver
References: ['https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.lstsq.html', 'https://www.netlib.org/lapack/lug/node27.html']
Use `suggest_tools()` to check available backend tools.
-----------------------------
Solver-specific parameters: None set
Use `suggest_solver_params()` to check required/optional solver-specific parameters.

3. Start an inversion#

We can now solve the inverse problem using the Tikhonov-regularized form of least-squares,

\[\mathbf{m}=\left(\mathbf{A}^T\mathbf{A}+\epsilon^2\sigma^2\mathbf{I}\right)^{-1}\mathbf{A}^T\mathbf{d}\]

where \(\sigma^2\) is the variance of the expected noise on the attenuation data.

For this dataset, we've taken \(\sigma = 0.002\) and chosen \(\epsilon^2 = 50\).
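
For intuition, here is a minimal sketch of that closed-form solution on a tiny synthetic system, reusing the sigma and lamda values defined above (the names A_toy, d_toy and m_true are illustrative only; the actual inversion is run through cofi right after).

rng = np.random.default_rng(0)
A_toy = rng.random((20, 10))                        # 20 "paths", 10 "cells"
m_true = np.ones(10)
d_toy = A_toy @ m_true + rng.normal(0, sigma, 20)   # noisy synthetic data
m_tik = np.linalg.solve(A_toy.T @ A_toy + lamda * sigma**2 * np.identity(10),
                        A_toy.T @ d_toy)            # Tikhonov-regularized estimate
print(m_tik)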

inv = Inversion(xrt_problem, my_options)
inv_result = inv.run()
inv_result.summary()
============================
Summary for inversion result
============================
SUCCESS
----------------------------
model: [1.13306453 0.86363911 1.01958229 ... 1.01319821 0.8615539  1.14691342]
sum_of_squared_residuals: []
effective_rank: 2500
singular_values: [932638.73185699 860130.56593555 860130.56593555 ...   3644.1527398
   3379.60041023   3379.60041023]
model_covariance: [[ 7.47520869e-05 -3.87965698e-05 -4.62858729e-06 ...  2.58820545e-08
  -8.37982995e-09 -8.03271846e-08]
 [-3.87965698e-05  1.21131273e-04 -2.70276186e-05 ... -1.63652129e-07
   1.37850692e-07 -8.37982995e-09]
 [-4.62858729e-06 -2.70276186e-05  8.87810002e-05 ...  1.30995411e-07
  -1.63652129e-07  2.58820545e-08]
 ...
 [ 2.58820545e-08 -1.63652129e-07  1.30995411e-07 ...  8.87810002e-05
  -2.70276186e-05 -4.62858729e-06]
 [-8.37982995e-09  1.37850692e-07 -1.63652129e-07 ... -2.70276186e-05
   1.21131273e-04 -3.87965698e-05]
 [-8.03271846e-08 -8.37982995e-09  2.58820545e-08 ... -4.62858729e-06
  -3.87965698e-05  7.47520869e-05]]

4. Plotting#

The two figures below show the inferred model and the true model, respectively.

xrt.plot_model(inv_result.model, clim=(1, 1.5));       # inferred model
xrt.plot_model(xrt.good_model, clim=(1, 1.5));          # true model
[Figures: inferred model and true model]

5. Estimated uncertainties#

We can now find the uncertainty on the recovered slowness parameters, which describes how noise in the data propagates into the model parameters for this data set. For the Tikhonov-regularised form of least-squares, the model covariance matrix is a square matrix of size \(M\times M\), where \(M\) is the number of cells in the model:

\[\mathbf{C_m}=\sigma^2\left(\mathbf{A}^T\mathbf{A}+\epsilon^2\sigma^2\mathbf{I}\right)^{-1}.\]

This matrix was calculated as part of the solver routine above. The square roots of its diagonal entries are the 1-\(\sigma\) errors in the slowness of each cell.
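
If you want to cross-check it, the same matrix can be reconstructed directly from the formula above (optional; assumes the Jacobian is a dense NumPy array, and repeats the path evaluation).

A = xrt.jacobian(xrt.starting_model)
Cm_direct = sigma**2 * np.linalg.inv(A.T @ A + lamda * sigma**2 * np.identity(xrt.model_size))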

Let's plot the slowness uncertainties as a function of position across the cellular model.

Cm = inv_result.model_covariance            # model covariance matrix from the inversion result
xrt.plot_model(np.sqrt(np.diag(Cm)));       # 1-sigma uncertainty in each cell
[Figure: slowness uncertainty across the model]

Uncertainty is low and fairly uniform across most of the model, becoming significant only near the corners, where few ray paths pass.

Similarly, we can calculate the uncertainty in the velocity parameters with a little calculus:

\[\Delta v = \left | \frac{\partial s}{\partial v} \right | \Delta s\]

and since \(s = 1/v\) we get

\[\Delta v = s^2\Delta s\]

which gives the uncertainty image for velocity; it looks very similar.

xrt.plot_model(np.sqrt(np.diag(Cm)) * inv_result.model**2);   # delta_v = s^2 * delta_s, per the formula above
[Figure: velocity uncertainty across the model]

By clipping the colour range you can see an imprint of the true image, indicating that high slowness/low velocity areas have slightly higher uncertainty.


Watermark#

watermark_list = ["cofi", "cofi_espresso", "numpy", "scipy", "matplotlib"]
for pkg in watermark_list:
    pkg_var = __import__(pkg)
    print(pkg, getattr(pkg_var, "__version__"))
cofi 0.1.2.dev22
cofi_espresso 0.0.1.dev10
numpy 1.21.6
scipy 1.9.1
matplotlib 3.5.3
