pyspecdata.lmfitdata.lmfitdata

class pyspecdata.lmfitdata.lmfitdata(*args, **kwargs)

Inherits from an nddata and enables curve fitting through use of a sympy expression.

The user creates an lmfitdata class object from an existing nddata class object, and on this lmfitdata object defines the functional_form() of the curve to be fit to the data of the original nddata. This functional form must be provided as a sympy expression, with one of its variables matching the name of the dimension along which the fit is performed.
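A minimal sketch of this workflow (the synthetic decay data, the dimension name "t", and the guess values are invented for illustration; see the method descriptions below for the exact behavior of each call)::

    import numpy as np
    import sympy as sp
    from pyspecdata import nddata
    from pyspecdata.lmfitdata import lmfitdata

    # hypothetical noisy exponential decay along a dimension named "t"
    t_axis = np.linspace(0, 1, 100)
    y = 3.0 * np.exp(-t_axis / 0.2) + 0.1 * np.random.normal(size=100)
    mydata = nddata(y, [100], ["t"]).setaxis("t", t_axis)

    fit = lmfitdata(mydata)
    A, tau, t = sp.symbols("A tau t", real=True)
    fit.functional_form = A * sp.exp(-t / tau)   # the symbol t matches the dimension name
    fit.set_guess(A=dict(value=1.0, min=0.0),
                  tau=dict(value=0.5, min=0.0))  # initial guesses and bounds
    fit.fit()                                    # run the fit
    print(fit.output("tau"))                     # best-fit value of one parameter
    best_fit = fit.eval(200)                     # fitted curve evaluated on a 200-point axis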

__init__(*args, **kwargs)

initialize nddata – several options. Depending on the information available, one of several formats can be used.

3 arguments:

nddata(inputarray, shape, dimlabels)

inputarray:

np.ndarray storing the data – note that the size is ignored and the data is reshaped as needed

shape:

a list (or np.array, etc.) giving the size of each dimension, in order

dimlabels:

a list giving the names of each dimension, in order

2 arguments:

nddata(inputarray, dimlabels)

inputarray:

np.ndarray storing the data – the data is not reshaped

dimlabels:

a list giving the names of each dimension, in order

2 arguments:

nddata(inputarray, single_dimlabel)

inputarray:

np.ndarray storing the data – must be 1D; inputarray is also used to label the single axis

single_dimlabel:

a list giving the name of the single axis

1 argument:

nddata(inputarray)

inputarray:

np.ndarray storing the data – reduced to 1D. A single dimension, called “INDEX”, is set. This suppresses the printing of axis labels. This is used to store numbers and arrays that might have error and units, but aren’t gridded data.

keyword arguments:

these can be used to set the labels, etc., and are passed to __my_init__()
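A brief sketch of these argument formats (the array contents and dimension names are invented for illustration)::

    import numpy as np
    from pyspecdata import nddata

    raw = np.arange(6.0)

    # 3 arguments: the data is reshaped to the given shape
    a = nddata(raw, [2, 3], ["x", "y"])

    # 2 arguments, list of labels: the data keeps its existing shape
    b = nddata(raw.reshape(2, 3), ["x", "y"])

    # 2 arguments, single label: the 1D array also labels its own axis
    c = nddata(np.linspace(0.0, 1.0, 6), "t")

    # 1 argument: a single dimension called "INDEX" is created
    d = nddata(raw)

    # axis coordinates can then be attached, e.g. with setaxis
    a.setaxis("x", np.r_[0.0, 1.0]).setaxis("y", np.r_[0.0, 0.5, 1.0])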

Methods

__init__(*args, **kwargs)

initialize nddata -- several options.

add_noise(intensity)

Add Gaussian (Box-Muller) noise to the data.

aligndata(arg)

This is a fundamental method used by all of the arithmetic operations.

along(dimname[, rename_redundant])

Specifies the dimension for the next matrix multiplication (represents the rows/columns).

argmax(*args, **kwargs)

find the max along a particular axis, and get rid of that axis, replacing it with the index number of the max value

argmin(*axes, **kwargs)

If called as .argmin('axisname'), find the min along that axis, get rid of that axis, and replace it with the index number of the min value.

axis(axisname)

returns a 1-D axis for further manipulation

axlen(axis)

return the size (length) of an axis, by name

axn(axis)

Return the index number for the axis with the name "axis"

cdf([normalized, max_bins])

calculate the Cumulative Distribution Function for the data along axis_name

check_axis_coords_errors()

chunk(axisin, *otherargs)

"Chunking" is defined here to be the opposite of taking a direct product, increasing the number of dimensions by the inverse of the process by which taking a direct product decreases the number of dimensions. This function chunks axisin into multiple new axes arguments.: axesout -- gives the names of the output axes shapesout -- optional -- if not given, it assumes equal length -- if given, one of the values can be -1, which is assumed length.

chunk_auto(axis_name[, which_field, dimname])

assuming that axis "axis_name" is currently labeled with a structured np.array, choose one field ("which_field") of that structured np.array to generate a new dimension Note that for now, by definition, no error is allowed on the axes.

circshift(axis, amount)

contiguous(lambdafunc[, axis])

Return contiguous blocks that satisfy the condition given by lambdafunc

contour([labels])

Contour plot -- kwargs are passed to the matplotlib contour function.

convolve(axisname, filterwidth[, convfunc, ...])

Perform a convolution.

copy(**kwargs)

Return a full copy of this instance.

copy_axes(other)

copy_props(other)

Copy all properties (see get_prop()) from another nddata object -- note that these include properties pertaining to the FT status of various dimensions.

copyaxes(other)

cov_mat(along_dim)

calculate covariance matrix for a 2D experiment

cropped_log([subplot_axes, magnitude])

For the purposes of plotting, this generates a copy where I take the log, spanning "magnitude" orders of magnitude.

diff(thisaxis[, backwards])

dot(arg)

This will perform a dot product or a matrix multiplication.

eval([taxis])

Calculate the fit function along the axis taxis.

eval_poly(c, axis[, inplace, npts])

Take c output (array of polynomial coefficients in ascending order) from polyfit(), and apply it along axis axis

extend(axis, extent[, fill_with, tolerance])

Extend the (domain of the) dataset and fill with a pre-set value.

extend_for_shear(altered_axis, propto_axis, ...)

this is a helper function for .fourier.shear

fit([use_jacobian])

actually run the fit

fld(dict_in[, noscalar])

flatten dictionary -- return list

fourier_shear(altered_axis, propto_axis, ...)

the fourier shear method -- see .shear() documentation

fromaxis(*args, **kwargs)

Generate an nddata object from one of the axis labels.

ft(axes[, tolerance, cosine, verbose, unitary])

This performs a Fourier transform along the axes identified by the string or list of strings axes.
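A small sketch of a round trip through ft() and ift() (the signal and axis name are made up; keyword options such as unitary are left at their defaults)::

    import numpy as np
    from pyspecdata import nddata

    t = np.linspace(0, 1, 256)
    sig = nddata(np.exp(1j * 2 * np.pi * 10 * t - t / 0.1), [256], ["t"]).setaxis("t", t)
    sig.ft("t")   # forward transform; the "t" axis is relabeled in frequency units
    sig.ift("t")  # inverse transform restores the time domain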

ft_clear_startpoints(axis[, t, f, nearest])

Clears memory of where the origins in the time and frequency domain are.

ft_state_to_str(*axes)

Return a string that lists the FT domain for the given axes.

ftshift(axis, value)

FT-based shift.

gen_indices(this_set, set_to)

pass the this_set and set_to parameters, and it will return indices, values, mask: indices gives the indices that are forced; values gives the values they are forced to; mask means that p[mask] are actually active in the fit

get_covariance()

this returns the covariance matrix of the data

get_error(*args)

get a copy of the errors -- i.e. the errors previously set with either set_error('axisname', error_for_axis) (axis error) or set_error(error_for_data) (data error)

get_ft_prop(axis[, propname])

Gets the FT property given by propname.

get_plot_color()

get_prop([propname])

return arbitrary ND-data properties (typically acquisition parameters etc.) by name (propname)

get_range(dimname, start, stop)

get raw indices that can be used to generate a slice for the start and (non-inclusive) stop

get_units(*args)

getaxis(axisname)

getaxisshape(axisname)

gnuplot_save(filename)

guess()

Old code that we are preserving here -- provide the guess for our parameters; by default, based on pseudoinverse

hdf5_write(h5path[, directory])

Write the nddata to an HDF5 file.

human_units()

This function attempts to choose "human-readable" units for axes or y-values of the data.

ift(axes[, n, tolerance, verbose, unitary])

This performs an inverse Fourier transform along the axes identified by the string or list of strings axes.

indices(axis_name, values)

Return a string of indices that most closely match the axis labels corresponding to values.

inhomog_coords(direct_dim, indirect_dim[, ...])

Apply the "inhomogeneity transform," which rotates the data by \(45^{\circ}\), and then mirrors the portion with \(t_2<0\) in order to transform from a \((t_1,t_2)\) coordinate system to a \((t_{inh},t_{homog})\) coordinate system.

integrate(thisaxis[, backwards, cumulative])

Performs an integration -- which is similar to a sum, except that it takes the axis into account, i.e., it performs: \(\int f(x) dx\) rather than \(\sum_i f(x_i)\)
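For instance, a sketch of the difference from sum() (the data are invented; .C makes a copy so the original is untouched)::

    import numpy as np
    from pyspecdata import nddata

    x = np.linspace(0, np.pi, 500)
    d = nddata(np.sin(x), [500], ["x"]).setaxis("x", x)
    area = d.C.integrate("x")  # ≈ 2, since the integral of sin(x) from 0 to π is 2
    total = d.C.sum("x")       # plain Σ sin(x_i), roughly 2/Δx rather than 2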

interp(axis, axisvalues[, past_bounds, ...])

interpolate data values given axis values

invinterp(axis, values, **kwargs)

interpolate axis values given data values

item()

like numpy item -- returns a number when zero-dimensional

jacobian(pars[, sigma])

cache the symbolic jacobian and/or use it to compute the numeric result

labels(*args)

label the dimensions given in listofstrings with the axis labels given in listofaxes -- listofaxes must be a numpy array; you can pass either a dictionary or an axis name (string)/axis label (numpy array) pair

latex()

show the latex string for the function, with all the symbols substituted by their values

like(value)

provide "zeros_like" and "ones_like" functionality

linear_shear(along_axis, propto_axis, shear_amnt)

the linear shear -- see self.shear for documentation

matchdims(other)

add any dimensions to self that are not present in other

matrices_3d([also1d, invert, max_dimsize, ...])

returns X, Y, Z, x_axis, y_axis -- the matrices X, Y, Z are suitable for a variety of mesh plotting routines, etc.; x_axis and y_axis are the x and y axes

max()

mayavi_surf()

use the mayavi surf function, assuming that we've already loaded mlab during initialization

mean(*args, **kwargs)

Take the mean and (optionally) set the error to the standard deviation

mean_all_but(listofdims)

take the mean over all dimensions not in the list

mean_nopop(axis)

mean_weighted(axisname)

perform the weighted mean along axisname (use \(\sigma\) from self.get_error() to generate \(1/\sigma\) weights); for now, it clears the error of self, though it would be easy to calculate the new error, since everything is linear

meshplot([stride, alpha, onlycolor, light, ...])

takes both rotation and light as (elevation, azimuth); use only the light kwarg to generate a black-and-white shading display

min()

mkd(*arg, **kwargs)

make dictionary format

name(*arg)

args: .name(newname) --> name the object (for storage, etc.); .name() --> return the name

nnls(dimname_list, newaxis_dict, kernel_func)

Perform regularized non-negative least-squares "fit" on self.

normalize(axis[, first_figure])

oldtimey([alpha, ax, linewidth, ...])

output(*name)

give the fit value of a particular symbol, or a dictionary of all values.

pcolor([fig, shading, ax1, ax2, ax, ...])

generate a pcolormesh and label it with the axis coordinate available from the nddata

phdiff(axis[, return_error])

calculate the phase gradient (units: cyc/Δx) along axis, setting the error appropriately

pinvr_step([sigma])

Use regularized Pseudo-inverse to (partly) solve: \(-residual = f(\mathbf{p}+\Delta \mathbf{p})-f(\mathbf{p}) \approx \nabla f(\mathbf{p}) \cdot \Delta \mathbf{p}\)

plot_labels(labels[, fmt])

this only works for one axis now

polyfit(axis[, order, force_y_intercept])

polynomial fitting routine -- return the coefficients. (Note: previously, this returned the fit data as a second argument called formult; you very infrequently want it to be the same size as the data, though. To duplicate the old behavior, just add the line formult = mydata.eval_poly(c, 'axisname').)
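A short sketch combining polyfit() with eval_poly(), as suggested in the note above (the data and polynomial order are invented)::

    import numpy as np
    from pyspecdata import nddata

    x = np.linspace(-1, 1, 50)
    d = nddata(2.0 - x + 3 * x**2 + 0.01 * np.random.randn(50), [50], ["x"]).setaxis("x", x)
    c = d.polyfit("x", order=2)      # coefficients in ascending order
    fit_curve = d.eval_poly(c, "x")  # evaluate the polynomial along the same axis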

popdim(dimname)

random_mask(axisname[, threshold, inversion])

generate a random mask with about 'threshold' of the points thrown out

register_axis(arg[, nearest])

Interpolate the data so that the given axes are in register with a set of specified values.

rename(previous, new)

reorder(*axes, **kwargs)

Reorder the dimensions; the first arguments are a list of dimensions.

replicate_units(other)

repwlabels(axis)

residual(pars[, sigma])

calculate the residual OR if data is None, return fake data

retaxis(axisname)

run(*args)

run a standard numpy function on the nddata.

run_avg(thisaxisname[, decimation, centered])

a simple running average

run_nopop(func, axis)

runcopy(*args)

secsy_transform(direct_dim, indirect_dim[, ...])

Shift the time-domain data backwards by the echo time.

secsy_transform_manual(direct_dim, indirect_dim)

Shift the time-domain data backwards by the echo time.

set_error(*args)

set the errors: either set_error('axisname', error_for_axis) for an axis error or set_error(error_for_data) for the data error

set_ft_prop(axis[, propname, value])

Sets the FT property given by propname.

set_guess(*args, **kwargs)

set both the guess and the bounds

set_plot_color(thiscolor)

set_plot_color_next()

set_prop(*args)

set a 'property' of the nddata. This is where you can put all unstructured information (e.g. experimental parameters, etc.).

set_to(otherinst)

Set data inside the current instance to that of the other instance.

set_units(*args)

setaxis(*args)

set or alter the value of the coordinate axis

settoguess()

a debugging function, to easily plot the initial guess

shear(along_axis, propto_axis, shear_amnt[, ...])

Shear the data \(s\), shifting along_axis by an amount proportional to propto_axis.

smoosh(dimstocollapse[, dimname, noaxis])

Collapse (smoosh) multiple dimensions into one dimension.

sort(axisname[, reverse])

sort_and_xy()

spline_lambda([s_multiplier])

For 1D data, returns a lambda function to generate a Cubic Spline.

squeeze([return_dropped])

squeeze singleton dimensions

sum(axes)

calculate the sum along axes, also transforming error as needed

sum_nopop(axes)

svd(todim, fromdim)

Singular value decomposition.

to_ppm([axis, freq_param, offset_param])

Function that converts from Hz to ppm using Bruker parameters

unitify_axis(axis_name[, is_axis])

this just generates an axis label with appropriate units

units_texsafe(*args)

unset_prop(arg)

remove a 'property'

waterfall([alpha, ax, rotation, color, ...])

Attributes

C

shortcut for copy

angle

Return the angle component of the data.

function_string

A property of the lmfitdata class which stores a string output of the functional form of the desired fit expression provided in functional_form, in LaTeX format

functional_form

A property of the lmfitdata class which is set by the user and takes as input a sympy expression giving the desired fit expression

imag

Return the imag component of the data

real

Return the real component of the data

shape

transformed_data

If we do something like fit a Lorentzian or Voigt lineshape, it makes more sense to define our fit function in the time domain, but to calculate the residuals and to evaluate in the frequency domain.

want_to_prospa_decim_correct