pyspecdata.matrix_math package

Submodules

pyspecdata.matrix_math.dot module

pyspecdata.matrix_math.dot.along(self, dimname, rename_redundant=None)

Specifies the dimension for the next matrix multiplication (represents the rows/columns).

Parameters:
  • dimname (str) –

    The next time matrix multiplication is called, ‘dimname’ will be summed over. That is, dimname will take the position of the columns if this is the first (left) matrix.

    If along is not called for the second matrix, dimname will also take the position of rows for that matrix!

  • rename_redundant (tuple of str or (default) None) –

    If you are multiplying two different matrices, then before the multiplication it is only sensible to give different names to the dimension representing the row space of the right matrix and the dimension representing the column space of the left matrix.

    However, sometimes (e.g. when constructing projection matrices) you may want to start with two matrices in which the row space of the right matrix and the column space of the left matrix share the same name. If so, you will want to rename the column space of the resulting matrix – to do this, pass rename_redundant=('orig name','new name') (see the sketch below).
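
As an illustration of the rename_redundant use case, here is a minimal sketch of building a projection-style matrix. It assumes that nddata can be constructed directly from a NumPy array and a list of dimension names, and that along() records its settings on the nddata in place; the dimension names 'x', 'basis', and 'x_new' are purely illustrative.

>>> import numpy as np
>>> from pyspecdata import nddata
>>> # a set of basis vectors living in the space named 'x'
>>> v = nddata(np.random.randn(10, 3), ['x', 'basis'])
>>> # the next multiplication sums over 'basis'; since both copies of v
>>> # label their surviving dimension 'x', rename the column space of the result
>>> v.along('basis', rename_redundant=('x', 'x_new'))
>>> # P is the outer-product matrix v v^T, with dimensions 'x' and 'x_new'
>>> P = v @ v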

pyspecdata.matrix_math.dot.dot(self, arg)

This will perform a dot product or a matrix multiplication. If one dimension in arg matches that in self, it will dot along that dimension (take a matrix multiplication where that dimension represents the columns of self and the rows of arg)

Note that if you have your dimensions named “rows” and “columns”, this will be very confusing, but if you have your dimensions named in terms of the vector basis they are defined/live in, this makes sense.

If there are no matching dimensions, then use along() to specify the dimensions for the matrix multiplication / dot product (see the sketch below).
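
For example (a minimal sketch, assuming nddata can be constructed from a NumPy array and a list of dimension names; the dimension names 'i', 'j', and 'k' are illustrative):

>>> import numpy as np
>>> from pyspecdata import nddata
>>> A = nddata(np.random.randn(4, 3), ['i', 'j'])
>>> B = nddata(np.random.randn(3, 5), ['j', 'k'])
>>> # 'j' is the only dimension shared by A and B, so the dot product
>>> # contracts over 'j', leaving a result with dimensions 'i' and 'k'
>>> C = A.dot(B)
>>> # along() can also be used to choose the contraction dimension
>>> # explicitly, e.g. A.along('j') before evaluating A @ B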

pyspecdata.matrix_math.dot.matmul(self, arg)

pyspecdata.matrix_math.nnls module

pyspecdata.matrix_math.nnls.G(x_vec, K)
pyspecdata.matrix_math.nnls.H(product)
pyspecdata.matrix_math.nnls.chi(x_vec, val, data_fornnls, K)
pyspecdata.matrix_math.nnls.d_chi(x_vec, val, data_fornnls, K)
pyspecdata.matrix_math.nnls.dd_chi(G, val)
pyspecdata.matrix_math.nnls.demand_real(x, addtxt='')
pyspecdata.matrix_math.nnls.mod_BRD(guess, K, factor, data_fornnls, maxiter=20)
pyspecdata.matrix_math.nnls.newton_min(input_vec, val, data_fornnls, K)
pyspecdata.matrix_math.nnls.nnls(self, dimname_list, newaxis_dict, kernel_func, l=0, default_cut=0.001, store_uncompressed_kernel=False)

Perform regularized non-negative least-squares “fit” on self.

Capable of solving for the solution in 1 or 2 dimensions.

We seek to minimize \(Q = \| Ax - b \|_2^2 + \|\lambda x\|_2^2\) in order to obtain the solution vector \(x\) subject to a non-negativity constraint, given the input matrix \(A\) (the kernel) and the input vector \(b\) (the data).

The first term assesses agreement between the fit \(Ax\) and the data \(b\), and the second term accounts for noise with the regularization parameter \(\lambda\) according to Tikhonov regularization.

To perform regularized minimization in 1 dimension, provide dimname_list, newaxis_dict, kernel_func, and the regularization parameter l. One may set l to a double giving the regularization parameter of choice (found, for instance, through L-curve analysis), or set l to the string ‘BRD’ to enable automatic selection of a regularization parameter via the BRD algorithm – namely, that described in Venkataramanan et al. 2002 (DOI: 10.1109/78.995059), adapted to the 1D case.

To perform regularized minimization in 2 dimensions, set l to the string ‘BRD’ and provide each of dimname_list, newaxis_dict, and kernel_func as a tuple. The algorithm described in Venkataramanan et al. 2002 (DOI: 10.1109/78.995059) is performed, which determines the optimal \(\lambda\) for the data. Note that setting l to a double regularization parameter is also supported in the 2-dimensional case, should an appropriate parameter be known. (A usage sketch is given at the end of this entry.)

See: Wikipedia page on NNLS, Wikipedia page on Tikhonov regularization

Parameters:
  • dimname_list (str or tuple) –

    Name of the “data” dimension that is to be replaced by a distribution (the “fit” dimension); e.g. if you are regularizing a set of functions \(\exp(-\tau*R_1)\), then this is \(\tau\)

    If you are performing 2D regularization, then this is a tuple (pair) of 2 names

  • newaxis_dict (dict or (tuple of) nddata) –

    a dictionary whose key is the name of the “fit” dimension (\(R_1\) in the example above) and whose value is an np.array with the new axis labels.

    OR

    this can be a 1D nddata – if it has an axis, the axis will be used to create the fit axis; if it has no axis, the data will be used

    OR

    if dimname_list is a tuple of 2 dimensions indicating a 2D ILT, this should also be a tuple of 2 nddata, representing the two axes

  • kernel_func (function or tuple of functions) –

    a function giving the kernel for the regularization. The first argument is the “data” variable and the second argument is the “fit” variable (in the example above, this would be something like lambda x,y: exp(-x*y))

    For 2D, this must be a tuple or dictionary of functions – the kernel is the product of the two.

  • l (double (default 0) or str) – the regularization parameter \(\lambda\) – if this is set to 0, the algorithm reverts to standard nnls. If this is set to the string ‘BRD’, then automatic parameter selection is executed according to the BRD algorithm, in either 1 or 2 dimensions depending on the presence of the tuple syntax (i.e., specifying more than 1 dimension).

Returns:

The regularized result. For future use, both the kernel (as an nddata, in a property called “nnls_kernel”) and the residual (as an nddata, in a property called “nnls_residual”) are stored as properties of the nddata. The regularized dimension is always last (innermost).

If the tuple syntax is used to input 2 dimensions and ‘BRD’ is specified, then the individual, uncompressed kernels \(K_{1}\) and \(K_{2}\) are returned as properties of the nddata named “K1” and “K2”, respectively. The number of singular values used to compress each kernel is returned in properties of the nddata called, respectively, “s1” and “s2”.

Return type:

self
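
For concreteness, here is a minimal 1D usage sketch. It assumes that nddata accepts a 1D array together with a single dimension name and that setaxis() attaches the axis coordinates; the dimension names 'tau' and 'R1', the axis ranges, and the value of l are all illustrative.

>>> import numpy as np
>>> from pyspecdata import nddata
>>> # synthetic decay data along the "data" dimension 'tau'
>>> tau = np.linspace(0.01, 5, 100)
>>> data = nddata(np.exp(-1.5 * tau) + 0.01 * np.random.randn(tau.size), 'tau')
>>> data.setaxis('tau', tau)
>>> # the "fit" dimension: a dict mapping its name to the new axis labels
>>> R1_axis = {'R1': np.linspace(0.1, 10, 50)}
>>> # kernel: first argument is the data variable, second is the fit variable
>>> solution = data.nnls('tau', R1_axis, lambda x, y: np.exp(-x * y), l=0.05)
>>> # 2D form (tuple syntax): pass tuples for the first three arguments and
>>> # l='BRD' for automatic selection of the regularization parameter, e.g.
>>> # solution2d = data2d.nnls(('tau1', 'tau2'), (ax1, ax2), (k1, k2), l='BRD')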

pyspecdata.matrix_math.nnls.optimize_alpha(input_vec, val, factor, K, tol=1e-06)
pyspecdata.matrix_math.nnls.square_heaviside(x_vec, K)

pyspecdata.matrix_math.svd module

pyspecdata.matrix_math.svd.svd(self, todim, fromdim)

Singular value decomposition. Original matrix is unmodified.

Note

Because we are planning an upgrade to axis objects, FT properties, axis errors, etc. are not transferred here. If you are using this method while this note is still around, be sure to .copy_props(

Also, errors and units are not currently propagated, but could be relatively easily!

If

>>> U, Sigma, Vh = thisinstance.svd()

then U, Sigma, and Vh are nddata such that

>>> result = U @ Sigma @ Vh

gives a result that is the same as thisinstance. Note that this relies on the fact that nddata matrix multiplication doesn’t care about the ordering of the dimensions (see the dot() method above). The vector space that contains the singular values is called ‘SV’ (see more below). A short usage sketch follows the Returns section of this entry.

Parameters:
  • fromdim (str) – This dimension corresponds to the columns of the matrix that is being analyzed by SVD. (The matrix transforms from the vector space labeled by fromdim and into the vector space labeled by todim).

  • todim (str) – This dimension corresponds to the rows of the matrix that is being analyzed by SVD.

Returns:

  • U (nddata) – Has dimensions (all other dimensions) × ‘todim’ × ‘SV’, where the dimension ‘SV’ is the vector space of the singular values.

  • Sigma (nddata) – Has dimensions (all other dimensions) × ‘SV’. Only the non-zero singular values are included.

  • Vh (nddata) – Has dimensions (all other dimensions) × ‘SV’ × ‘fromdim’.
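
The following sketch ties the return values together. It assumes that nddata can be constructed from a 2D NumPy array and a list of dimension names; the dimension names 'todim' and 'fromdim' stand in for whatever dimensions label your data.

>>> import numpy as np
>>> from pyspecdata import nddata
>>> # a matrix that maps from the space 'fromdim' (columns) into 'todim' (rows)
>>> A = nddata(np.random.randn(5, 3), ['todim', 'fromdim'])
>>> U, Sigma, Vh = A.svd('todim', 'fromdim')
>>> # U, Sigma, and Vh share the singular-value dimension 'SV';
>>> # multiplying them back together reconstructs A
>>> reconstructed = U @ Sigma @ Vh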

Module contents