parepy_toolbox.pare module#

Probabilistic Approach to Reliability Engineering (PAREPY)

deterministic_algorithm_structural_analysis(obj, tol, max_iter, random_var_settings, x0, verbose=False, args=None)[source]#

Computes the reliability index and probability of failure using FORM (First Order Reliability Method).

Parameters:
  • obj (Callable) – The objective function: obj(x, args) -> float or obj(x) -> float, where x is a list of length n and args is a tuple of fixed parameters needed to completely specify the function.

  • tol (float) – Tolerance for convergence.

  • max_iter (int) – Maximum number of iterations allowed.

  • random_var_settings (list) – List of dictionaries containing the distribution type and parameters. Example: {'type': 'normal', 'parameters': {'mean': 0, 'std': 1}}. Supported distributions: (a) 'uniform': keys 'min' and 'max', (b) 'normal': keys 'mean' and 'std', (c) 'lognormal': keys 'mean' and 'std', (d) 'gumbel max': keys 'mean' and 'std', (e) 'gumbel min': keys 'mean' and 'std', (f) 'triangular': keys 'min', 'mode' and 'max', or (g) 'gamma': keys 'mean' and 'std'.

  • x0 (list) – Initial guess.

  • verbose (bool) – If True, prints detailed information about the process. Default is False.

  • args (tuple | None) – Extra arguments to pass to the objective function (optional).

Returns:

Results of the reliability analysis. output[0] = Numerical data obtained during the MPP search, output[1] = probability of failure, output[2] = reliability index (beta).

Return type:

tuple[DataFrame, float, float]

Example

>>> # pip install -U parepy-toolbox
>>> from parepy_toolbox import deterministic_algorithm_structural_analysis
>>> def obj(x):
...     return 12.5 * x[0]**3 - x[1]
>>> d = {'type': 'normal', 'parameters': {'mean': 1., 'std': 0.1}}
>>> l = {'type': 'normal', 'parameters': {'mean': 10., 'std': 1.}}
>>> var = [d, l]
>>> x0 = [1.0, 10.0]
>>> max_iter = 100
>>> tol = 1E-8
>>> results, pf, beta = deterministic_algorithm_structural_analysis(obj, tol, max_iter, var, x0, verbose=True)
>>> print(f"Probability of failure: {pf}")
>>> print(f"Reliability index (beta): {beta}")
>>> results.head()
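For context, the two FORM outputs are linked through the standard normal CDF: pf = Φ(−β). A quick sanity check of that relation (plain scipy, independent of parepy-toolbox):

```python
from scipy.stats import norm

# FORM reports both a reliability index beta and a failure probability pf;
# they are two views of the same quantity: pf = Phi(-beta), where Phi is
# the standard normal CDF.
beta = 3.0
pf = norm.cdf(-beta)   # roughly 1.35e-3

# The inverse relation recovers beta from pf.
beta_back = -norm.ppf(pf)
```

This identity is a useful cross-check on any reliability result: the reported beta and pf should always satisfy it.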

generate_factorial_design(variable_names, levels_per_variable, verbose=False)[source]#

Generates a full factorial design based on variable names and levels.

Parameters:
  • variable_names (List[str]) – Variable names.

  • levels_per_variable (List[List[float]]) – List of lists, where each sublist contains the levels for the corresponding variable.

  • verbose (bool) – If True, prints the number of combinations and preview of the DataFrame.

Returns:

All possible combinations of the levels provided.

Return type:

DataFrame

Example:

>>> # pip install -U parepy-toolbox
>>> import numpy as np
>>> from parepy_toolbox import generate_factorial_design

>>> variable_names = ['i (mm)', 'j (mm)', 'k (mm)', 'l (mm)']
>>> levels_per_variable = [
...     np.linspace(0, 10, 3),
...     np.linspace(0, 15, 4),
...     [5, 15],
...     np.linspace(0, 20, 5)
... ]

>>> df = generate_factorial_design(variable_names, levels_per_variable, verbose=True)
>>> print(df.head())
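A full factorial design is simply the Cartesian product of the level lists, so the row count is the product of the level counts. A minimal sketch of the idea with the standard library and pandas (an illustration, not the library's implementation):

```python
from itertools import product

import pandas as pd

# Two variables with 2 and 3 levels respectively give 2 * 3 = 6 combinations.
variable_names = ["i (mm)", "j (mm)"]
levels_per_variable = [[5, 15], [0.0, 5.0, 10.0]]

# itertools.product enumerates every combination of one level per variable.
rows = list(product(*levels_per_variable))
df = pd.DataFrame(rows, columns=variable_names)
print(len(df))  # 6
```

This also makes the cost explicit: the design size grows multiplicatively with each variable added, which is why coarse level grids are usually preferred for screening.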
reprocess_sampling_results(folder_path, verbose=False)[source]#

Reprocesses sampling results from multiple .txt files in a specified folder.

Parameters:
  • folder_path (str) – Path to the folder containing sampling result files (.txt format).

  • verbose (bool) – If True, prints detailed information about the process. Default is False.

Returns:

Results of reprocessing. output[0] = Combined dataframe with all sampling data, output[1] = failure probabilities for each limit state function, output[2] = reliability index (beta) for each limit state function.

Return type:

tuple[DataFrame, DataFrame, DataFrame]

Example

>>> # pip install -U parepy-toolbox
>>> from parepy_toolbox import reprocess_sampling_results

>>> df, pf, beta = reprocess_sampling_results("path/to/your/folder", verbose=True)

>>> print("PF:", pf)
>>> print("Beta:", beta)
>>> df.head()
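In essence, the reprocessing step reads every result file in the folder and stacks the rows into one table before recomputing pf and beta. A rough sketch of that file-stacking pattern with pandas (assuming whitespace-delimited files with a header row; not the library's actual parser):

```python
import glob
import os

import pandas as pd

def combine_sampling_files(folder_path: str) -> pd.DataFrame:
    """Stack every .txt sampling file in folder_path into one DataFrame."""
    frames = []
    for path in sorted(glob.glob(os.path.join(folder_path, "*.txt"))):
        # Assumption: each file is a whitespace-delimited table with a header.
        frames.append(pd.read_csv(path, sep=r"\s+"))
    # ignore_index gives the combined table a clean 0..n-1 index.
    return pd.concat(frames, ignore_index=True)
```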

sampling_algorithm_structural_analysis(obj, random_var_settings, method, n_samples, number_of_limit_functions, parallel=True, verbose=False, random_var_settings_importance_sampling=None, args=None)[source]#

Computes the reliability index and probability of failure using sampling methods.

Parameters:
  • obj (Callable) – The objective function: obj(x, args) -> float or obj(x) -> float, where x is a list of length n and args is a tuple of fixed parameters needed to completely specify the function.

  • random_var_settings (list) – List of dictionaries containing the distribution type and parameters. Example: {'type': 'normal', 'parameters': {'mean': 0, 'std': 1}}. Supported distributions: (a) 'uniform': keys 'min' and 'max', (b) 'normal': keys 'mean' and 'std', (c) 'lognormal': keys 'mean' and 'std', (d) 'gumbel max': keys 'mean' and 'std', (e) 'gumbel min': keys 'mean' and 'std', (f) 'triangular': keys 'min', 'mode' and 'max', or (g) 'gamma': keys 'mean' and 'std'.

  • method (str) – Sampling method. Supported values: 'lhs' (Latin Hypercube Sampling), 'mcs' (Crude Monte Carlo Sampling) or 'sobol' (Sobol Sampling).

  • n_samples (int) – Number of samples. For Sobol sequences, this variable represents the exponent m (n = 2^m).

  • number_of_limit_functions (int) – Number of limit state functions or constraints.

  • parallel (bool) – Start parallel process.

  • verbose (bool) – If True, prints detailed information about the process. Default is False.

  • args (tuple | None) – Extra arguments to pass to the objective function (optional).

  • random_var_settings_importance_sampling (list | None) – Random variable settings for the importance sampling density, in the same format as random_var_settings (optional).

Returns:

Results of the reliability analysis. output[0] = Sampling results, output[1] = probability of failure values for each indicator function, output[2] = reliability index values for each indicator function.

Return type:

tuple[DataFrame, DataFrame, DataFrame]

Example

>>> # pip install -U parepy-toolbox
>>> from parepy_toolbox import sampling_algorithm_structural_analysis
>>> def obj(x):
...     return [12.5 * x[0]**3 - x[1]]
>>> d = {'type': 'normal', 'parameters': {'mean': 1., 'std': 0.1}}
>>> l = {'type': 'normal', 'parameters': {'mean': 10., 'std': 1.}}
>>> var = [d, l]
>>> df, pf, beta = sampling_algorithm_structural_analysis(obj, var, method='lhs', n_samples=10000, number_of_limit_functions=1, parallel=False, verbose=True)
>>> print(pf)
>>> print(beta)
>>> df.head()
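The underlying estimate is straightforward: pf is the fraction of samples where the limit state g(x) ≤ 0, and beta = −Φ⁻¹(pf). A self-contained crude Monte Carlo sketch for the same limit state g = 12.5·d³ − l (plain numpy/scipy, independent of parepy-toolbox):

```python
import numpy as np
from scipy.stats import norm

# Crude Monte Carlo for g(x) = 12.5*d^3 - l with d ~ N(1, 0.1), l ~ N(10, 1).
rng = np.random.default_rng(0)
n = 200_000
d = rng.normal(1.0, 0.1, n)
l = rng.normal(10.0, 1.0, n)
g = 12.5 * d**3 - l

# Failure probability: fraction of samples in the failure domain g <= 0;
# reliability index: beta = -Phi^{-1}(pf).
pf = np.mean(g <= 0)
beta = -norm.ppf(pf)
```

The standard error of pf scales as sqrt(pf·(1−pf)/n), which is why variance-reduction options such as LHS, Sobol sampling, or importance sampling exist alongside crude Monte Carlo.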

sobol_algorithm(obj, random_var_settings, n_sobol, number_of_limit_functions, parallel=False, verbose=False, args=None)[source]#

Calculates the Sobol sensitivity indices in structural reliability problems.

Parameters:
  • obj (Callable) – The objective function: obj(x, args) -> float or obj(x) -> float, where x is a list with shape n and args is a tuple of fixed parameters needed to completely specify the function.

  • random_var_settings (list) – List of dictionaries containing the distribution type and parameters. Example: {'type': 'normal', 'parameters': {'mean': 0, 'std': 1}}. Supported distributions: (a) 'uniform': keys 'min' and 'max', (b) 'normal': keys 'mean' and 'std', (c) 'lognormal': keys 'mean' and 'std', (d) 'gumbel max': keys 'mean' and 'std', (e) 'gumbel min': keys 'mean' and 'std', (f) 'triangular': keys 'min', 'mode' and 'max', or (g) 'gamma': keys 'mean' and 'std'.

  • n_sobol (int) – Exponent m used to generate the Sobol sequence sampling (n = 2^m). Must be a positive integer.

  • number_of_limit_functions (int) – Number of limit state functions or constraints.

  • parallel (bool) – If True, runs the evaluation in parallel processes.

  • verbose (bool) – If True, prints detailed information about the process.

  • args (tuple | None) – Extra arguments to pass to the objective function (optional).

Returns:

A pandas DataFrame with the first-order and total-order Sobol sensitivity indices for each input variable.

Return type:

DataFrame

Example

>>> # pip install -U parepy-toolbox
>>> from parepy_toolbox import sobol_algorithm
>>> import time
>>> def obj(x, *args):
...     g_0 = 12.5 * x[0] ** 3 - x[1]
...     time.sleep(1e-6)
...     return [g_0]
>>> d = {'type': 'normal', 'parameters': {'mean': 1.0, 'std': 0.1}}
>>> l = {'type': 'normal', 'parameters': {'mean': 10.0, 'std': 1.0}}
>>> var = [d, l]
>>> amostras = 12
>>> data_sobol = sobol_algorithm(obj=obj, random_var_settings=var, n_sobol=amostras, number_of_limit_functions=1, verbose=True)
>>> data_sobol.head()
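First-order Sobol indices can be estimated with a simple pick-freeze scheme: S_i = Cov(Y_A, Y_Ci)/Var(Y), where Y_Ci reuses column i from sample matrix A and resamples the rest from B. A self-contained sketch on the toy model Y = X1 + 2·X2 with independent standard normal inputs, whose analytic indices are S1 = 0.2 and S2 = 0.8 (an illustration of the estimator, not parepy-toolbox's implementation):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

def model(x1, x2):
    # Toy model: Var(Y) = 1 + 4 = 5, so S1 = 1/5 and S2 = 4/5 analytically.
    return x1 + 2.0 * x2

# Two independent sample matrices A and B (the pick-freeze scheme).
a1, a2 = rng.standard_normal(n), rng.standard_normal(n)
b1, b2 = rng.standard_normal(n), rng.standard_normal(n)

y_a = model(a1, a2)
var_y = y_a.var()

# First-order index for x_i: keep column i from A, resample the rest from B.
s1 = np.cov(y_a, model(a1, b2))[0, 1] / var_y
s2 = np.cov(y_a, model(b1, a2))[0, 1] / var_y
```

Because each index needs its own frozen-column evaluation, the total cost grows with the number of inputs, which is why Sobol sequence sampling (n = 2^m) is used to keep the estimates stable at moderate sample sizes.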