sessions Package

Package containing modules with the session classes for all available datafile types. See the sessions.generic module for more information.

generic Module

Generic session class, parent class for all sessions. This includes command line argument processing, help text, generic file readout, OS-specific temp file handling and storage of global variables.

class plotpy.sessions.generic.GenericSession(arguments)[source]

Bases: plotpy.gtkgui.generic.GenericGUI

This is the class valid for the whole session, used to read the files and store the measurement data objects. It contains the common functions used for every type of data, plus data reading for space-separated common files.

Specific measurement sessions are children of this class!

add_data(data_list, name, append=True)[source]

Function which either adds file data to the object or replaces all data with a new dictionary.

add_file(filename, append=True)[source]

Add the data of a new file to the session. Transformations are also done here, so child classes will override this function.

Returns:A list of datasets that have been found in the file.
change_active(object_=None, name=None)[source]

Change the active data file by object or name.

create_numbers(datasets)[source]

Give the sequences numbers with leading zeros, depending on the number of sequences present in the file. Mostly useful for file naming purposes. This function also filters the datasets by the values given by “-s from to” and “-i increment”.

Returns:The filtered list with the numbers set.
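
The numbering-and-filtering behaviour can be sketched as below; make_numbers is a hypothetical stand-alone helper (not the actual method), with the “-s from to” and “-i increment” options reduced to plain slice parameters:

```python
def make_numbers(datasets, start=0, end=None, increment=1):
    # zero-pad each sequence number to the width of the largest index,
    # as needed for sortable file names
    width = len(str(len(datasets)))
    for i, dataset in enumerate(datasets):
        dataset['number'] = str(i).zfill(width)
    end = len(datasets) if end is None else end
    # keep only every `increment`-th dataset inside [start, end),
    # mimicking the '-s from to' and '-i increment' filtering
    return datasets[start:end:increment]

datasets = [{'name': 'scan%i' % i} for i in range(12)]
filtered = make_numbers(datasets, start=2, end=10, increment=2)
```
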
create_snapshot_obj(multiplots=False)[source]

Create a python object that should be pickled as snapshot. Child classes can overwrite this to save additional parts in the snapshot. The main class only stores the active_file_data list.

extract_snapshot_obj(dump_obj)[source]

Extract a python object that was pickled as snapshot to the associated objects. Child classes can overwrite this to load additional parts from the snapshot. The main class only loads the active_file_data list.

file_data = {}

dictionary for the data objects indexed by filename

get_active_file_info()[source]

Return a string with information about the active file.

import_plugins()[source]

Import plugins from the users plugin directory.

initialize_gnuplot()[source]

Start a gnuplot instance for the main plotting.

make_transformations(datasets)[source]

Make unit transformations of a list of datasets.

new_file_data_treatment(datasets)[source]

Perform common datatreatment tasks on all Datasets.

next()[source]

Function to iterate through the file_data dictionary, so the object can be used as in “for name in data:”. Also changes the active_file_data and active_file_name.
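
The iteration behaviour can be mimicked with a small dict subclass; this is a hypothetical stand-in for GenericSession, shown only to illustrate the “for name in data:” protocol and the active-entry side effect:

```python
class FileData(dict):
    """Looping over the object yields file names while switching the
    active entry as a side effect (like active_file_name/active_file_data)."""
    def __iter__(self):
        for name in sorted(self.keys()):
            self.active_name = name
            self.active_data = self[name]
            yield name

data = FileData({'b.dat': ['dataset_b'], 'a.dat': ['dataset_a']})
names = [name for name in data]
```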

os_cleanup()[source]

Delete temporary files and folders.

os_path_stuff()[source]

Create the session temp directory. Only called once when initializing the session. Has not been tested on OS X.

plot(datasets, file_name_prefix, title, names)[source]

Plot one or a list of datasets.

Returns:The stderr and stdout of gnuplot.
plot_active()[source]

Plots the active datasets.

plot_all()[source]

Plot everything selected from all files.

read_argument_add(argument, last_argument_option=[False, ''], input_file_names=[])[source]

Dummy function for child classes, which makes it possible for them to add command line options.

Returns:A sequence depending on the found parameters.
read_arguments(arguments)[source]

Function to evaluate the command line arguments. Returns a list of filenames.

Parameters:arguments – The command line arguments to evaluate.
Returns:A list of file names to import.
read_file(filename)[source]

Function which reads one datafile and returns a list of mds objects split into sequences. Every child class will override this.

Returns:A list of datasets that have been found.
reload_snapshot(name=None)[source]

Reload a snapshot created with store_snapshot.

replace_systemdependent(string)[source]

Function for path name replacements. Only active under Windows; on Linux this is just a dummy method returning the same string.

Returns:The replaced string.
single_dataset_data_treatment(dataset)[source]

Perform actions on every dataset that is imported.

store_snapshot(name=None, multiplots=False)[source]

Create a snapshot of the active measurement to reload it later. The method uses cPickle to create a file with the content of the active_file_data list and stores it in active_file_name.mdd.
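
The mechanism reduces to pickling one object into a .mdd file next to the measurement. A minimal sketch (plain pickle is used here; the package itself uses cPickle, its Python 2 equivalent):

```python
import os
import pickle
import tempfile

def store_snapshot(obj, name):
    # write obj to <name>.mdd, the snapshot extension used by the session
    file_name = name if name.endswith('.mdd') else name + '.mdd'
    with open(file_name, 'wb') as snapshot_file:
        pickle.dump(obj, snapshot_file)
    return file_name

def reload_snapshot(file_name):
    with open(file_name, 'rb') as snapshot_file:
        return pickle.load(snapshot_file)

path = store_snapshot({'active_file_data': [1, 2, 3]},
                      os.path.join(tempfile.gettempdir(), 'measurement'))
restored = reload_snapshot(path)
```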

try_import_externals()[source]

Try to import modules not part of core python. Gnuplot.py has no error reporting, so we change some settings to make it work.

class plotpy.sessions.generic.SessionProxy[source]

Bases: dict

Object keeping track of available sessions.

plotpy.sessions.generic.read_full_snapshot(name)[source]

Extract a python object that was pickled as snapshot to the associated objects. For new-style snapshots, return the data with the associated name, module and class to start a new session; old-style snapshots are read into a generic session.

gisas Module

Class for KWS2/GISANS/GISAXS data sessions

class plotpy.sessions.gisas.GISASSession(arguments)[source]

Bases: plotpy.gtkgui.kws2.KWS2GUI, plotpy.sessions.generic.GenericSession

Class to handle GISAS (KWS2/GISANS/GISAXS) data sessions

autosubtract_background(dataset, fraction=5.0)[source]

Try to estimate the background and subtract it. This is done using a threshold, which is increased logarithmically until a certain fraction of the points lies below it. After this, the threshold is increased linearly within the power of 10 found by the logarithmic step.
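
The two-stage threshold search might look roughly like this (an illustrative sketch, not the actual GISASSession code; the 5% fraction matches the default argument):

```python
import numpy as np

def estimate_background(intensities, fraction=5.0):
    # stage 1: raise the threshold by factors of 10 until at least
    # `fraction` percent of the points lie below it
    data = np.asarray(intensities, dtype=float)
    wanted = fraction / 100. * data.size
    threshold = data[data > 0.].min()
    while (data < threshold).sum() < wanted:
        threshold *= 10.
    # stage 2: step linearly through the decade found above
    step = threshold / 100.
    threshold /= 10.
    while (data < threshold).sum() < wanted:
        threshold += step
    return threshold

background = estimate_background(np.arange(1., 101.), fraction=5.0)
```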

read_argument_add(argument, last_argument_option=[False, ''], input_file_names=[])[source]

Additional command line arguments for GISAS sessions

read_file(file_name)[source]

Function to read data files.

mbe Module

Class for Oxide MBE data (logfiles, RHEED, LEED) sessions

class plotpy.sessions.mbe.MBESession(arguments)[source]

Bases: plotpy.gtkgui.mbe.MBEGUI, plotpy.sessions.generic.GenericSession

Class to handle mbe leed/rheed and other data

read_file(file_name)[source]

Function to read data files.

pnd Module

Class for DNS (Diffuse Neutron Scattering) data sessions and derived MeasurementData object.

class plotpy.sessions.pnd.DNSMeasurementData(columns=[], const=[], x=0, y=1, yerror=-1, zdata=-1, dtype=<type 'numpy.float32'>)[source]

Bases: plotpy.mds.MeasurementData

Class derived from MeasurementData to be more suitable for DNS measurements. Data treatment is done here, and additional data treatment functions should be put here, too.

Doc of MeasurementData:

The main class for the data storage. Stores the data as a list of PhysicalProperty objects. Sample name and measurement information are stored, as well as plot options and columns which have to stay constant in one sequence.

Main Attributes:
  number_of_points     Number of datapoints stored in the class
  data                 List of PhysicalProperty instances for every data column
  [x,y,z]data/.yerror  Indices of the plotted columns in the .data list; if z=-1 the plot is 2d
  log[x,y,z]           Booleans defining logarithmic-scale plotting of the columns
  crop_zdata           Boolean to crop the z data when the zrange is smaller than the data range, so that plots have no empty spots
  short_info           Second part of the plot title and name of the line in a multiplot
  sample_name          First part of the plot title
  filters              List of filters which are applied to the dataset before export for plotting
  plot_options         PlotOptions object storing the visualization options
Main Methods:
  append               Append a datapoint at the end of the dataset
  append_column        Add a new data column to the object
  dimensions           Return the dimensions of all columns
  export               Export the data to a file
  process_function     Call a function for all data of the object, e.g. square the y data
  sort                 Sort the datapoints by one column
  unit_trans           Transform units
  units                Return the units of all columns
calculate_wavevectors()[source]

Calculate the wavevectors from omega, 2Theta and lambda.

change_omega_offset(omega_offset)[source]

Recalculate omega and q_x, q_y for a new offset value.

copy_intensities(point)[source]

Just copy the raw intensity measured to another column.

correct_background(point)[source]

Subtract background from the intensity data and calculate new error for these values.

Parameters:point – List of arrays for all columns
Returns:Changed list of arrays
correct_vanadium(point)[source]

Divide the intensity by the counts measured with vanadium for the same detector bank.

Parameters:point – List of arrays for all columns
Returns:Changed list of arrays
error_propagation_quotient(xdata, ydata)[source]

Calculate the propagated error for x/y.
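
The underlying formula is standard Gaussian error propagation for a quotient; the real method takes the errors from the data objects themselves, so the free function below with explicit error arguments is only an illustration:

```python
import numpy as np

def quotient_error(x, dx, y, dy):
    # Gaussian error propagation for q = x/y:
    #   dq = |x/y| * sqrt((dx/x)**2 + (dy/y)**2)
    x, dx, y, dy = (np.asarray(v, dtype=float) for v in (x, dx, y, dy))
    return np.abs(x / y) * np.sqrt((dx / x) ** 2 + (dy / y) ** 2)

dq = quotient_error(10.0, 1.0, 5.0, 0.5)
```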

get_info()[source]

Replacement of the general get_info function.

make_combined_corrections(other)[source]

Correct two datasets for background, flipping ratio and vanadium standard. The raw data is not changed, only the I column.

make_corrections()[source]

Correct the data for background and vanadium standard. The raw data is not changed, only the I column.

make_flipping_correction(item, scattering_propability)[source]

Calculate the flipping-ratio correction for all intensity channels.

Parameters:
  • item – Sequence of NiCr data, self and the other DNSMeasurementData object
  • scattering_propability – Probability of a neutron to scatter in the sample
Returns:

Whether all corrections converged

prepare_powder_data()[source]

Change settings for the dataset to be used as powder data. This includes a 1d plot of 2Theta vs. intensity and averaging over points with the same 2Theta value.

class plotpy.sessions.pnd.DNSSession(arguments)[source]

Bases: plotpy.gtkgui.dns.DNSGUI, plotpy.sessions.generic.GenericSession

Class to handle dns data sessions

add_file(filenames, append=True)[source]

In contrast to other sessions, this is only called from the GUI to add new files. Works as if the filenames had been given via the command line.

autosplit_sequences()[source]

Go through all sequences and try to split them by temperature.

correct_flipping_ratio(scattering_propability=0.1)[source]

This function assigns the right NiCr measurements to the data sequences. The flipper and Helmholtz currents are used to identify the right data.

Parameters:scattering_propability – Probability of a neutron to get scattered inside the sample (=1-Transmission)
Returns:Whether files with the right settings could be found.
create_maps(file_name)[source]

Creates a MeasurementData object which can be used to plot color maps or line plots of the measurement. Powder data is only shown as 2Theta vs. intensity. For a single crystal it is a map in q_x,q_y (or hkl).

Parameters:file_name – Sequence with the options for the file_name import
find_background_data(dataset)[source]

Try to find background data with the right flipper and Helmholtz currents for this dataset. The background data is connected to dataset.background_data.

Parameters:dataset – a DNSMeasurementData object
find_prefixes(names)[source]

Try to find prefixes from a list of filenames. config.dns.min_prefix_length is used to split different sets of files. The prefix is stored in self.prefixes and the numbers in self.file_options.

Parameters:names – A list of file names to process
find_vanadium_data(dataset)[source]

Set vanadium data for this dataset. The vanadium data is connected to dataset.vanadium_data.

Parameters:dataset – A DNSMeasurementData object
get_active_file_info()[source]

Replacement of generic info function, more specific to dns imports.

get_temp_current_infos(options)[source]

Get the temperature and current infos from all files in one sequence defined by options.

initialize_fullauto(names)[source]

Initialize the fullauto session, which tries to set every option from the file names and the information in the data files.

Parameters:names – List of filenames to be used
plot_all()[source]

Plot everything selected from all files. This overrides the generic method to exclude the raw data from being plotted.

read_argument_add(argument, last_argument_option=[False, ''], input_file_names=[])[source]

Additional command line arguments for dns sessions

read_bg_file(bg_file, bg_data)[source]

Read one background file and add it to a dictionary. The dictionary keys correspond to the flipper and Helmholtz coil currents.

Parameters:
  • bg_file – File name to read from
  • bg_data – The dictionary to write to
read_bg_file_d7(bg_file, bg_data)[source]

Read one background file and add it to a dictionary. The dictionary keys correspond to the flipper and Helmholtz coil currents.

Parameters:
  • bg_file – File name to read from
  • bg_data – The dictionary to write to
read_files(file_name)[source]

Function to read data files for one measurement. The files are split by their prefixes.

Parameters:file_name – Sequence with the options for the file_name import
read_files_d7(file_name)[source]

Function to read data files for one D7 measurement. The files are split by their prefixes.

Parameters:file_name – Sequence with the options for the file_name import
read_vana_bg_nicr_files()[source]

Read all NiCr, background and vanadium files in the chosen directories and zip file, and correct NiCr for background.

read_vana_bg_nicr_files_d7()[source]

Read all NiCr, background and vanadium files in the chosen directories and zip file, and correct NiCr for background. This is used for D7 datasets.

separate_scattering_nonmag(af)[source]

Separate the scattering parts of sf and nsf scattering for nonmagnetic samples.

separate_scattering_xyz(af)[source]

Separate the scattering parts of powder data with xyz polarization analysis.

set_transformations()[source]

Set the transformation options from q_x to dx*,dy* from the dx,dy values given on command line.

split_sequences(length)[source]

Split the file_options and prefixes at every [length] number.

sum_same_omega_tth(scans)[source]

Find scan files with same omega and tth value and sum them up.

sum_same_tth(scans)[source]

Find scan files with the same tth value and sum them up.

plotpy.sessions.pnd.correct_flipping_ratio(flipping_ratio, pp_data, pm_data, scattering_propability=0.1, convergence_criteria=None)[source]

Calculate the self-consistent solution of the spin-flip/non-spin-flip scattering with respect to a measured flipping ratio for given measured data. As most variables are scaled to sum to 1, be sure to use floating point numbers or arrays of floating point numbers.

Parameters:
  • flipping_ratio – The flipping ratio for the solution
  • pp_data – non spin-flip data measured
  • pm_data – spin-flip data measured
  • scattering_propability – Probability for a neutron to get scattered from the sample
  • convergence_criteria – Function to test for convergence
Returns:

Non-spin-flip data, spin-flip data, and whether the algorithm converged
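
For orientation, a non-iterative first-order version of such a correction is easy to state: if each measured channel contains a 1/R admixture of the other one (mixing model m_nsf = (R·nsf + sf)/(R+1) and vice versa), the inversion has a closed form. This sketch shows only that simple closed form; the module function instead iterates to a self-consistent solution including the scattering probability:

```python
def leakage_correction(flipping_ratio, nsf_measured, sf_measured):
    # invert the assumed mixing model
    #   m_nsf = (R*nsf + sf) / (R + 1)
    #   m_sf  = (nsf + R*sf) / (R + 1)
    # which gives the closed-form first-order correction
    R = float(flipping_ratio)
    nsf = (R * nsf_measured - sf_measured) / (R - 1.)
    sf = (R * sf_measured - nsf_measured) / (R - 1.)
    return nsf, sf

# a channel pair where the measured spin-flip signal is pure leakage
nsf, sf = leakage_correction(10.0, 1.0, 0.1)
```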

pnr Module

Class for TREFF/MARIA data sessions (polarized neutron reflectivity)

class plotpy.sessions.pnr.FitList(*args)[source]

Bases: list

Class to store the fit parameters together with the list of MeasurementData objects.

class plotpy.sessions.pnr.PNRSession(arguments)[source]

Bases: plotpy.gtkgui.pnr.PNRGUI, plotpy.gtkgui.reflectometer_functions.ReflectometerFitGUI, plotpy.sessions.generic.GenericSession

Class to handle treff data sessions

add_data(data_list, name, append=True)[source]

Function which either adds file data to the object or replaces all data with a new dictionary.

call_fit_program(file_ent, force_compile=False)[source]

This function calls the fit_pnr program and, if it is not compiled with those settings, compiles it. It does not wait for the program to finish; it only starts the subprocess, which is returned.

do_extract_specular_reflectivity(file_actions, line_width, center_position_offset=(0.0, 0.0))[source]

Function to extract the true specular reflectivity from an intensity map. It is appended to the file_actions dictionary to make it usable in a macro. The specular line and the two off-specular lines next to it are extracted, and the off-specular ones are subtracted from the specular to account for the roughness scattering in the specular line. The distance of the two off-specular cuts is 2 times the line width of the specular line; this way we are quite close to the specular line without counting anything twice.

Returns:At the moment always True; should indicate whether the extraction was successful.
export_data_and_entfile(folder, file_name, datafile_prefix='fit_temp_', use_multilayer=False, use_roughness_gradient=True)[source]

Export measured data for fit program and the corresponding .ent file.

read_argument_add(argument, last_argument_option=[False, ''], input_file_names=[])[source]

Additional command line arguments for PNR sessions

read_file(file_name)[source]

Function to read data files.

smooth_dataset(dataset, kernel_size, kernel_size_y=None)[source]

Smooth a dataset using convolution with a gaussian kernel function. At the moment only for detector images (rectangular, equally spaced lattice).

class plotpy.sessions.pnr.TreffFitParameters[source]

Bases: plotpy.sessions.reflectometer_fit.parameters.FitParameters

Class to store the parameters of a simulation or fit from the fit.f90 program. Mostly just storing different variables for the layers.

append_layer(material, thickness, roughness)[source]

Append one layer at the bottom from the lookup table defined in scattering_length_densities.py.

append_multilayer(materials, thicknesses, roughnesses, repititions, name='Unnamed')[source]

Append a multilayer at the bottom from the lookup table defined in scattering_length_densities.py.

append_substrate(material, roughness)[source]

Append the substrate from the lookup table defined in scattering_length_densities.py.

copy()[source]

Create a copy of this object.

get_ent_str(use_multilayer=False, use_roughness_gradient=True)[source]

Create a .ent file for the fit.f90 script from the given parameters. Fit parameters have to be set in advance; see set_fit_parameters/set_fit_constrains.

get_errors(errors)[source]

Convert the errors dictionary from parameter indices to layer indices.

get_parameters(parameters)[source]

Set layer parameters from an existing fit.

read_params_from_X_file(name)[source]

Convert parameters from an x-ray .ent file to neutrons and import them for use with this fit.

read_params_from_file(file_name)[source]

Read data from a .ent file.

set_fit_constrains()[source]

Set fit constraints depending on (multi)layers. layer_params is a dictionary with the layer number as index.

set_fit_parameters(layer_params={}, substrate_params=[], background=False, polarizer_efficiancy=False, analyzer_efficiancy=False, flipper0_efficiancy=False, flipper1_efficiancy=False, scaling=False)[source]

Set fit parameters depending on (multi)layers. layer_params is a dictionary with the layer number as index.

class plotpy.sessions.pnr.TreffLayerParam(name='NoName', parameters_list=None)[source]

Bases: plotpy.sessions.reflectometer_fit.parameters.LayerParam

Class for one layer's data. Layer and multilayer have the same function to create the .ent file text.

copy()[source]

Create a copy of this object.

dialog_get_params(action, response, thickness, scatter_density_Nb, scatter_density_Nb2, scatter_density_Np, theta, phi, roughness)[source]

Function to get parameters from the GUI dialog.

get_ent_text(layer_index, para_index, add_roughness=0.0, use_roughness_gradient=True)[source]

Function to get the text lines for the .ent file. Returns the text string and the parameter index increased by the number of parameters for the layer.

get_fit_params(params, param_index)[source]

Return a parameter list according to params.

set_param(index, value)[source]

Set own parameters by index.

class plotpy.sessions.pnr.TreffMultilayerParam(repititions=2, name='NoName', layer_list=None)[source]

Bases: plotpy.sessions.reflectometer_fit.parameters.MultilayerParam

Class for multilayer data.

get_fit_cons(param_index)[source]

Return a list of constraint lists according to the multilayers.

get_fit_params(params, param_index)[source]

Return a parameter list according to params (list of param lists for multilayer).

plotpy.sessions.pnr.blur_image(I, n, n_y=None)[source]

Function from the scipy cookbook (www.scipy.org/Cookbook/SignalSmooth); blurs the image by convolving with a gaussian kernel of typical size n.

plotpy.sessions.pnr.calc_intensities(R, P)[source]

Calculate intensities from given reflectivity channels R and given polarizations P.

Parameters:
  • R – Dictionary of ++, --, +-, -+ reflectivities (arrays)
  • P – Dictionary of polarizer,flipper1,flipper2,analyzer polarization component efficiencies (scalars)
plotpy.sessions.pnr.calc_intensities_general(S, P)[source]

A general formalism to calculate measured intensities from simulated scattering and the polarization parameters. The matrices used can be found in:

A.R. Wildes, Review of Scientific Instruments, Vol. 70, 11 (1999)
Parameters:
  • S – Dictionary of intensities for the scattering channels '++', '--', '+-', '-+'
  • P – Dictionary of the instrumental parameters 'F1', 'F2', 'p1', 'p2'
Returns:

Dictionary of the calculated intensities

plotpy.sessions.pnr.dot_matrix(M1, M2)[source]

Calculate the matrix product of a 4x4 matrix of arrays.

plotpy.sessions.pnr.dot_vector(M, v)[source]

Calculate the 4x4 matrix·vector product of a matrix of arrays.

plotpy.sessions.pnr.gauss_kern(size, size_y=None)[source]

Function from scipy cookbook (www.scipy.org/Cookbook/SignalSmooth) Returns a normalized 2D gauss kernel array for convolutions
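
The cookbook recipe it refers to is roughly the following (reproduced from memory of the SignalSmooth cookbook entry, so treat it as a sketch):

```python
import numpy as np

def gauss_kern(size, size_y=None):
    # build a (2*size+1) x (2*size_y+1) gaussian and normalize it so the
    # kernel sums to 1 (suitable for intensity-preserving convolution)
    size = int(size)
    size_y = size if size_y is None else int(size_y)
    x, y = np.mgrid[-size:size + 1, -size_y:size_y + 1]
    g = np.exp(-(x ** 2 / float(size) + y ** 2 / float(size_y)))
    return g / g.sum()

kernel = gauss_kern(3)
```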

plotpy.sessions.pnr.seperate_scattering(datasets, P)[source]

Try to calculate the true reflectivity channels from the polarization components and the measured data.

Parameters:
  • datasets – A list of MeasurementData objects for the ++, --, +- and -+ channels
  • P – Dictionary of polarizer,flipper1,flipper2,analyzer polarization component efficiencies

sas Module

Class for small angle scattering data sessions.

class plotpy.sessions.sas.SASSession(arguments)[source]

Bases: plotpy.gtkgui.sas.SASGUI, plotpy.sessions.generic.GenericSession

Class to handle small angle scattering data sessions

read_file(file_name)[source]

Function to read data files.

shg Module

Class for SHG data sessions.

class plotpy.sessions.shg.ChiMultifit(datasets, Chis=[])[source]

Bases: plotpy.fitdata.FitFunction3D

Class to fit the SHG Chi terms for several datasets at once.

add_chi(Chi)[source]

Add a specific Chi_ijk factor.

add_domain(operations)[source]

Add a domain to the model.

delete_domain(index)[source]

Remove a domain.

fit_function(p, pol, ana)[source]

Simulate the SHG intensity dependent on the polarizer/analyzer tilt.

get_anapol()[source]

Return analyzer, polarizer and x of all datasets as one array.

get_chis(scale)[source]

Return a list of chi_ijk as (chi_ijk, i, j, k).

refine(i1, i2, dataset_yerror=None, progress_bar_update=None)[source]

Create arrays with the combined data and start a refinement.

set_simulations()[source]

Insert the simulation and components to the datasets.

simulate(x, *ign, **ignore)[source]

Simulate for all datasets.

class plotpy.sessions.shg.SHGSession(arguments)[source]

Bases: plotpy.gtkgui.shg.SHGGUI, plotpy.sessions.generic.GenericSession

Class to handle SHG data sessions

create_shg_sim()[source]

Create a simulation object.

read_file(file_name)[source]

Function to read data files.

squid Module

Class for SQUID/PPMS data sessions.

class plotpy.sessions.squid.SquidSession(arguments)[source]

Bases: plotpy.gtkgui.squid.SquidGUI, plotpy.sessions.generic.GenericSession

Class to handle squid data sessions

SPECIFIC_HELP = '\n\tSQUID-Data treatment:\n\t-para [C] [off]\tInclude paramagnetic correction factor (C/(T-off)) [emu*K/Oe]\n\t-dia [Chi]\tInclude diamagnetic correction in [10^-9 emu/Oe]\n\nData columns and unit transformations are defined in config.squid.py.\n'
-dia-calc [e] [m] Add diamagnetic correction of a sample containing elements e with complete mass m in mg. e is given for example as 'La_1-Fe_2-O_4', 'la_1-fe2+_2-o_4' or 'La-Fe_2-O_4'.
add_file(filename, append=True)[source]

Add the data of a new file to the session. In addition to GenericSession, dia- and paramagnetic corrections are performed here, too.

calc_dia_elements()[source]

Returns the diamagnetic moment of the elements in self.dia_calc[1] with the mass self.dia_calc[2]. The format for the elements string is 'La_1-Fe_2-O_4', 'la_1-fe2+_2-o_4', 'La-Fe_2-O_4' or 'LaFe2O4'.
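
A parser for the underscore/dash variant of the element string could look like this; it is a hypothetical helper covering only the 'La_1-Fe_2-O_4' style, while the real method also accepts compact forms like 'LaFe2O4' and ionic labels like 'fe2+':

```python
def parse_elements(formula):
    # split 'La_1-Fe_2-O_4' into {'La': 1.0, 'Fe': 2.0, 'O': 4.0};
    # a bare element without '_count' (as in 'La-Fe_2-O_4') counts once
    counts = {}
    for part in formula.split('-'):
        if '_' in part:
            element, number = part.split('_')
            counts[element] = float(number)
        else:
            counts[part] = 1.0
    return counts

composition = parse_elements('La_1-Fe_2-O_4')
```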

dia_para_correction(dataset, dia, para)[source]

Calculate the dia- and paramagnetic correction for the given dataset. A new column is created for the corrected data and the old data stays unchanged.
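
Per the correction factors named in SPECIFIC_HELP (dia Chi in 10^-9 emu/Oe, paramagnetic C/(T-off) in emu*K/Oe), the correction amounts to subtracting a field-linear susceptibility term. The helper below and its sign conventions are assumptions of this sketch, not the actual method:

```python
def dia_para_corrected(moment_raw, field, temperature,
                       chi_dia=0.0, para_C=0.0, para_off=0.0):
    # chi_dia is given in 1e-9 emu/Oe, para_C/(T - off) in emu*K/Oe,
    # so the combined susceptibility times the field is subtracted
    chi = chi_dia * 1e-9 + para_C / (temperature - para_off)
    return moment_raw - chi * field

corrected = dia_para_corrected(2.0e-3, field=1000.0, temperature=300.0,
                               chi_dia=-50.0, para_C=1.0e-6)
```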

do_subtract_dataset(dataset, object_)[source]

Subtract one dataset from another using interpolation.

read_argument_add(argument, last_argument_option=[False, ''], input_file_names=[])[source]

Additional command line arguments for SQUID sessions

xrd Module

Class for 4-circle data sessions

class plotpy.sessions.xrd.XRDSession(arguments)[source]

Bases: plotpy.gtkgui.circle.CircleGUI, plotpy.sessions.generic.GenericSession

Class to handle 4 circle data sessions

add_file(filename, append=True)[source]

Add the data of a new file to the session. In addition to GenericSession, the short info is set.

compare_types_p09(datasets)[source]

Check if all scans have the same scan type.

counts_to_cps(dataset)[source]

Convert counts to counts per second.

counts_to_cps_calc(input_data)[source]

Calculate counts/s for one datapoint. This function will be used in process_function() of a mds object.

cps_to_counts(dataset)[source]

Convert counts per second back to counts.

cps_to_counts_calc(input_data)[source]

Calculate counts from counts/s for one datapoint. This function will be used in process_function() of a mds object.

create_mesh(datasets)[source]

Combine a list of scans to one mesh.

filter_fast_energyscan_ue64(dataset)[source]

Remove points with wrong energy reading from the fast E-scan performed at ue64.

get_ds_hkl(dataset, round_by=1)[source]

Return the approximate hkl position of one scan.

join_sequences_p09()[source]

Combine scans that are around the same HKL position into one file_data entry.

read_argument_add(argument, last_argument_option=[False, ''], input_file_names=[])[source]

Additional command line arguments for 4-circle sessions

xrr Module

Session for reflectometer data and fits with fit.f90.

class plotpy.sessions.xrr.FitList(*args)[source]

Bases: list

Class to store the fit parameters together with the list of MeasurementData objects.

class plotpy.sessions.xrr.ReflectometerSession(arguments)[source]

Bases: plotpy.sessions.xrr.GUI, plotpy.sessions.xrr.ReflectometerFitGUI, plotpy.sessions.generic.GenericSession

Class to handle reflectometer data sessions

add_data(data_list, name, append=True)[source]

Function which either adds file data to the object or replaces all data with a new dictionary.

add_file(filename, append=True)[source]

Add the data of a new file to the session. In addition to GenericSession, counts-per-second corrections and fitting are performed here, too.

call_fit_program(file_ent, file_res, file_out, max_iter, exe=None)[source]

This function calls the fit.f90 program and, if it is not compiled with those settings, compiles it with the number of layers present in the current simulation. For this the maxint parameter in the fit.f90 code is replaced by the real number of layers. It does not wait for the program to finish; it only starts the subprocess, which is returned.

counts_to_cps(input_data)[source]

Calculate counts/s for one datapoint. This function will be used in process_function() of a mds object.

export_fit(dataset, input_file_name, export_file_prefix=None)[source]

Function to export data for fitting with fit.f90 program.

find_total_reflection(dataset)[source]

Try to find the angle of total reflection by searching for a decrease of the intensity to 1/3.
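
A sketch of that criterion (illustrative only; the real method works on a MeasurementData object and handles the plateau more robustly):

```python
import numpy as np

def find_total_reflection(angles, intensities):
    # take the plateau maximum, then return the first angle after it
    # where the intensity has dropped below 1/3 of that maximum
    angles = np.asarray(angles, dtype=float)
    intensities = np.asarray(intensities, dtype=float)
    plateau = intensities.max()
    below = np.nonzero(intensities < plateau / 3.)[0]
    below = below[below > intensities.argmax()]
    return angles[below[0]] if below.size else None

# synthetic curve: flat total-reflection plateau, then a steep falloff
angles = np.linspace(0., 1., 101)
intensities = np.where(angles <= 0.3, 1.0, (0.3 / np.maximum(angles, 0.3)) ** 4)
theta_c = find_total_reflection(angles, intensities)
```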

fourier_analysis(dataset, theta_c, lambda_x=1.54, interpolation_type='linear')[source]

Apply the fourier transform calculus found in

K. Sakurai et al., Jpn. J. Appl. Phys. Vol. 31 (1992) pp. L113-L115

to the dataset to get thickness components of the measured sample.

The dataset is expressed as a function of sqrt(Θ²-Θc²)/λ and is then normalized to an average attenuation (polynomial fit to the log of the data). The result is then Fourier transformed using the FFT algorithm.

Parameters:
  • dataset – MeasurementData object of reflectivity data
  • theta_c – Angle of total external reflection
  • lambda_x – x-ray wavelength
Returns:

new MeasurementData object of transformed data.
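
The pipeline can be sketched as follows; this is a simplified version under stated assumptions (cubic polynomial for the attenuation, linear resampling before the FFT) and a hypothetical helper, not the session method:

```python
import numpy as np

def fourier_thickness(theta, intensity, theta_c, lambda_x=1.54):
    # 1. re-express the curve on the x = sqrt(theta**2 - theta_c**2)/lambda axis
    theta = np.asarray(theta, dtype=float)
    intensity = np.asarray(intensity, dtype=float)
    mask = theta > theta_c
    x = np.sqrt(theta[mask] ** 2 - theta_c ** 2) / lambda_x
    # 2. remove the average attenuation with a polynomial fit to the log
    log_i = np.log(intensity[mask])
    oscillation = log_i - np.polyval(np.polyfit(x, log_i, 3), x)
    # 3. resample on an equidistant grid and Fourier transform
    grid = np.linspace(x[0], x[-1], x.size)
    amplitude = np.abs(np.fft.rfft(np.interp(grid, x, oscillation)))
    frequencies = np.fft.rfftfreq(grid.size, d=grid[1] - grid[0])
    return frequencies, amplitude

# synthetic reflectivity with one oscillation period on the transformed axis
theta = np.linspace(0.3, 3.0, 2000)
qx = np.sqrt(theta ** 2 - 0.2 ** 2) / 1.54
intensity = np.exp(-2.0 * qx + 0.2 * np.sin(2. * np.pi * 50. * qx))
frequencies, amplitude = fourier_thickness(theta, intensity, theta_c=0.2)
peak = frequencies[1:][amplitude[1:].argmax()]
```

With these inputs the dominant Fourier component sits at the imposed oscillation frequency, which in a real measurement corresponds to a layer thickness.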

read_argument_add(argument, last_argument_option=[False, ''], input_file_names=[])[source]

Additional command line arguments for reflectometer sessions

read_file(file_name)[source]

Function to read data files.

refine_roughnesses(dataset)[source]

Try to fit the layer roughnesses.

refine_scaling(dataset)[source]

Try to fit the scaling factor before the total reflection angle.

templates Module

Implementation of a template framework to make it possible for the user to import any ASCII column data.

class plotpy.sessions.templates.DataImportTemplate(template_name)[source]

Bases: object

Class framework to import ASCII data files with different settings defined in user templates. Can be used for any file type instead of the normal import framework.

calculate_columns_from_function(data_cols)[source]

Calculate new column values from imported columns, using the functions given in the template.

Parameters:data_cols – the columns of the already read data.
Returns:List of new columns that have been created
compile_template(template_code)[source]

Evaluate all settings in the template and add them to this object.

Parameters:template_code – Code object returned by compile function
Returns:Whether evaluation was successful.
get_sequences(lines)[source]

Extract measured sequences and inter sequence lines.

Parameters:lines – Lines from the input file
Returns:List of sequence lines containing data, List of inter sequence lines
init_new_sequence(lines)[source]

Make changes for the next read sequence according to header/intersequence lines. Defines the columns to be imported.

Parameters:lines – Header/intersequence lines to be used for the initialization
init_replacements(lines, split_patterns)[source]

Search for patterns in the input lines and add the result to the object's patterns dictionary.

new_dataset()[source]

Create a new data object from the active column and replacement settings.

Returns:MeasurementData or derived object
read_data(lines, dataset)[source]

Read data from input lines and append it to a dataset object.

Parameters:
  • lines – Lines of the file to be used as data
  • dataset – MeasurementData or derived object

read_footer(lines)[source]

Get the footer size and read needed information from this footer. Defines the region to end looking for data.

Parameters:lines – Lines from the input file
Returns:Part of lines which belongs to the footer
read_header(lines)[source]

Get the header size and read needed information from this header. Defines the region to start looking for data.

Parameters:lines – Lines from the input file
Returns:Part of lines which belongs to the header
replace(string)[source]

Replace placeholders in string by the settings read from the input file.

Returns:The changed string