Concatenates the .txt files generated by the sampling_algorithm_structural_analysis algorithm and calculates failure probabilities and reliability indices from the combined data.

results_about_data, failure_prob_list, beta_list = concatenates_txt_files_sampling_algorithm_structural_analysis(setup)

Input variables

Name Description Type
setup

Dictionary containing the main settings. Keys include:

  • 'folder_path': Path to the folder containing the .txt files [String]

  • 'number of state limit functions or constraints': Number of state limit functions or constraints [Integer]

  • 'numerical model': Numerical model settings [Dictionary]. See this key in the sampling_algorithm_structural_analysis documentation

  • 'name simulation': Name of the simulation [String]

Dictionary

Output variables

Name Description Type
results_about_data A DataFrame containing the concatenated results from the .txt files. DataFrame
failure_prob_list A list containing the calculated failure probabilities for each indicator function. List
beta_list A list containing the calculated reliability indices (beta) for each indicator function. List
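For each limit state, the failure probability is the fraction of samples whose indicator column equals 1, and the reliability index follows from the inverse standard normal CDF as beta = -Φ⁻¹(pf). A minimal sketch of that relation (illustrative only, not the toolbox's internal code; the indicator values are hypothetical):

```python
from statistics import NormalDist

# Hypothetical indicator column I_0: 1 where the limit state is violated, 0 otherwise
indicator = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]

pf = sum(indicator) / len(indicator)   # failure probability estimate
beta = -NormalDist().inv_cdf(pf)       # reliability index beta = -inv_Phi(pf)
```

With 2 failures out of 10 samples, pf is 0.2 and beta is about 0.84.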

Here's an example of how to organize the directory structure with the input files:

├── concat.ipynb # or concat.py
└── results_path
    ├── result_sampling_algorithm_structural_analysis_0.txt
    ├── result_sampling_algorithm_structural_analysis_1.txt
    ├── result_sampling_algorithm_structural_analysis_2.txt
    ...
    ├── result_sampling_algorithm_structural_analysis_n-1.txt
    └── result_sampling_algorithm_structural_analysis_n.txt
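Assuming the layout above, the numbered result files can be collected in index order with a simple glob. The sketch below builds a throwaway copy of the folder (names taken from the example tree) so it is self-contained:

```python
import glob
import os
import tempfile

# Build a throwaway copy of the layout above (file names follow the example tree)
root = tempfile.mkdtemp()
folder_path = os.path.join(root, 'results_path')
os.makedirs(folder_path)
for i in range(3):
    name = f'result_sampling_algorithm_structural_analysis_{i}.txt'
    open(os.path.join(folder_path, name), 'w').close()

# Collect the result files in sorted order for the concatenation step
pattern = os.path.join(folder_path, 'result_sampling_algorithm_structural_analysis_*.txt')
files = sorted(glob.glob(pattern))
```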

The function expects to find multiple .txt files in the 'folder_path' directory. Ensure that each file follows the described structure: columns separated by tabs (\t) and the required columns prefixed X_, G_, and I_. This format ensures that the function can correctly concatenate the data and perform the expected calculations.
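The tab-separated format described above can be read and stacked with pandas. This is a minimal sketch of that idea, using two hypothetical files with columns X_0, G_0, and I_0 following the stated prefixes; it is not the toolbox's internal implementation:

```python
import os
import tempfile
import pandas as pd

# Two small tab-separated files following the described format (X_, G_, I_ columns)
folder = tempfile.mkdtemp()
for i, rows in enumerate(["1.0\t0.5\t0\n2.0\t-0.1\t1\n", "1.5\t0.2\t0\n"]):
    with open(os.path.join(folder, f'part_{i}.txt'), 'w') as f:
        f.write("X_0\tG_0\tI_0\n" + rows)

# Read each file with the tab separator and stack them into one DataFrame
frames = [pd.read_csv(os.path.join(folder, name), sep='\t')
          for name in sorted(os.listdir(folder))]
results = pd.concat(frames, ignore_index=True)
```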

Example 1

This example demonstrates how the concatenates_txt_files_sampling_algorithm_structural_analysis function processes a folder containing .txt files. Consider example 2 in sampling_algorithm_structural_analysis: the sampling code was run three times, generating 10,000 samples per run.

# Libraries
import os
import pandas as pd
from tabulate import tabulate

from parepy_toolbox import concatenates_txt_files_sampling_algorithm_structural_analysis

# Run algorithm
setup = {
    'folder_path': 'results_path',
    'number of state limit functions or constraints': 1,
    'numerical model': {'model sampling': 'mcs-time'},
    'name simulation': 'new_simulation_results',
}

results, pf, beta = concatenates_txt_files_sampling_algorithm_structural_analysis(setup)
pf

Output details.

13:10:54 - Uploading files!
13:10:59 - Finished Upload in 4.43e+00 seconds!
13:10:59 - Started evaluation beta reliability index and failure probability...
13:10:59 - Finished evaluation beta reliability index and failure probability in 1.99e-02 seconds!
13:11:14 - Voilà!!!!....simulation results saved in simulation_results_MCS-TIME_20240910-131059.txt
pf:
+-----------------------+
|          G_0          |
+-----------------------+
| 0.0018821428571428572 |
| 0.0025428571428571427 |
|        0.0031         |
| 0.0036321428571428572 |
| 0.003939285714285715  |
+-----------------------+

Suggestions

Use this function when you need to distribute your simulation across several machines. Afterwards, you can concatenate all the data into a single DataFrame and compute the probability of failure and reliability index for the full dataset.

For example, if three result files with 10,000 lines each are generated, the dataset produced by concatenates_txt_files_sampling_algorithm_structural_analysis will have 30,000 lines.
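The row-count arithmetic can be checked with a quick pandas sketch; the three batches below stand in for hypothetical result files from three machines:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Three hypothetical result batches of 10,000 samples each, as if from three machines
batches = [pd.DataFrame({'I_0': rng.integers(0, 2, 10_000)}) for _ in range(3)]

# Concatenating the batches yields one 30,000-row dataset
full = pd.concat(batches, ignore_index=True)
```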