Publikační činnost Katedry informatiky / Publications of Department of Computer Science (460)

Permanent URI for this collection: http://hdl.handle.net/10084/64750

The collection contains bibliographic records of the publication output (articles) of academic staff of the Department of Computer Science (460) in journals and in Lecture Notes in Computer Science indexed in Web of Science from 2003 to the present.
The collection includes:
a) publications whose original documents list Vysoká škola báňská - Technical University of Ostrava (VŠB-TUO) as the author's affiliation (address),
b) publications whose original documents do not list VŠB-TUO in the address, but whose authors demonstrably worked at VŠB-TUO at the time the publications were prepared and published.

The bibliographic records were originally created in the collection Publication Activity of Academic Staff of VŠB-TUO, which has tracked the publications of academic staff since 1990.


Recent Submissions

Now showing 1-20 of 567 results
  • An activity level based surrogate-assisted evolutionary algorithm for many-objective optimization
    (Elsevier, 2024) Pan, Jeng-Shyang; Zhang, An-Ning; Chu, Shu-Chuan; Zhao, Jia; Snášel, Václav
    Addressing expensive many-objective optimization problems (MaOPs) is a formidable challenge owing to their intricate objective spaces and high computational demands. Surrogate-assisted evolutionary algorithms (SAEAs) have gained prominence because of their ability to tackle MaOPs efficiently. They achieve this by using surrogate models to approximate objective functions, significantly reducing their reliance on costly evaluations. However, the effectiveness of many SAEAs is hampered by their reliance on various surrogate models and optimization strategies, which often result in suboptimal prediction accuracy and optimization performance. This study introduces a novel approach: an activity level based surrogate-assisted reference vector guided evolutionary algorithm specifically designed for expensive MaOPs. Utilizing the Kriging model and an angle penalty distance criterion, this algorithm effectively filters the solutions that require evaluation with the original function. It employs a fixed number of training sets that are updated via a two-screening strategy that leverages activity levels to refine population screening. This process ensures that the reference vectors progressively align more closely with the Pareto fronts, which is enhanced by the deployment of adjusted adaptive reference vectors, thereby improving the screening precision. The proposed algorithm was tested against six contemporary algorithms using the DTLZ, WFG, and MaF test suites. The experimental results show that the proposed method outperforms the other algorithms on most problems. Furthermore, its application to the cloud computing task scheduling problem underscores its practical value, demonstrating its notable effectiveness. The experimental outcomes attest to the robust performance of the algorithm across both test scenarios and real-world applications.
  • A brief review on quantum computing based drug design
    (Wiley, 2024) Das, Poulami; Ray, Avishek; Bhattacharyya, Siddhartha; Platoš, Jan; Snášel, Václav; Mršić, Leo; Huang, Tingwen; Zelinka, Ivan
    Design and development of new drug molecules are essential for the survival of human society. New drugs are designed for therapeutic purposes to combat new diseases. Besides treating new diseases, new drug development is also needed to treat pre-existing diseases more effectively and to reduce the side effects of existing drugs. The design of drugs involves several steps, from the discovery of the drug molecule to its commercialization in the market. One of the most critical steps in drug design is to find the molecular interactions between the target (infected) molecule and the drug molecule. Several complex chemical equations need to be solved to determine the molecular interactions. In the late 20th century, advances in computational technologies made solving chemical equations relatively easier and faster. Moreover, the design of drug molecules involves multi-criteria optimization. Classical computational methodologies have been used for drug design since the end of the 20th century. However, nowadays, more advanced computational methodologies are indispensable for designing drugs for new diseases and drugs with fewer side effects. In this context, the quantum computing paradigm has proved beneficial in drug design due to its advanced computational capabilities. This paper presents a state-of-the-art comprehensive review of the quantum computing-based methodologies involved in drug design. A comparative study is made of the different quantum-aided drug design methods, stating each methodology's merits and demerits. The review presented in this manuscript will help new researchers assess the present state of the art in quantum-based drug design. This article is categorized under: Technologies > Structure Discovery and Clustering; Technologies > Computational Intelligence; Application Areas > Health Care.
  • Association of selected adipokines with vitamin D deficiency in children with inflammatory bowel disease
    (BMC, 2024) Geryk, Miloš; Kučerová, Veronika; Velgáňová-Véghová, Mária; Foltenová, Hana; Bouchalová, Kateřina; Karásek, David; Radvansky Jr., Martin; Karásková, Eva
    Background: Adipose tissue is significantly involved in inflammatory bowel disease (IBD). Vitamin D can affect both adipogenesis and inflammation. The aim of this study was to compare the production of selected adipokines potentially involved in the pathogenesis of IBD (adiponectin, resistin, retinol binding protein 4 (RBP-4), adipocyte fatty acid binding protein and nesfatin-1) in children with IBD according to the presence of 25-hydroxyvitamin D (25(OH)D) deficiency. Methods: The study was conducted as a case-control study in pediatric patients with IBD and healthy children of the same sex and age. In addition to adipokines and 25(OH)D, anthropometric parameters and markers of inflammation and disease activity were assessed in all participants. Results: Children with IBD had significantly higher resistin levels regardless of 25(OH)D levels. Only IBD patients with 25(OH)D deficiency had significantly lower RBP-4 compared to healthy controls and to IBD patients without 25(OH)D deficiency. No other significant differences in adipokines were found in children with IBD with or without 25(OH)D deficiency. 25(OH)D levels in IBD patients correlated only with RBP-4 and did not correlate with other adipokines. Conclusions: Whether the lower RBP-4 levels in the 25(OH)D-deficient group of IBD patients directly reflect vitamin D deficiency remains uncertain. The production of other adipokines does not appear to be directly related to vitamin D deficiency.
  • Retinal image dataset of infants and retinopathy of prematurity
    (Springer Nature, 2024) Timkovič, Juraj; Nowaková, Jana; Kubíček, Jan; Hasal, Martin; Varyšová, Alice; Kolarčík, Lukáš; Maršolková, Kristýna; Augustynek, Martin; Snášel, Václav
    Retinopathy of prematurity (ROP) is a vasoproliferative disease of newborns and infants, especially premature ones, which can potentially damage vision. Despite recent advances in neonatal care and medical guidelines, ROP remains one of the leading causes of childhood blindness worldwide. The paper presents a unique dataset of 6,004 retinal images of 188 newborns, most of whom are premature infants. The dataset is accompanied by anonymized patient information from the ROP screening performed at the University Hospital Ostrava, Czech Republic. Three digital retinal imaging camera systems were used in the study: Clarity RetCam 3, Natus RetCam Envision, and Phoenix ICON. The study is enriched by the software tool ReLeSeT, which performs automatic retinal lesion segmentation and extraction from retinal images and thus enables computing geometric and intensity features of retinal lesions. We also publish a set of pre-processing tools for feature boosting of retinal lesions and retinal blood vessels for building classification and segmentation models in ROP analysis.
  • Fast bicriteria streaming algorithms for submodular cover problem under noise models
    (Elsevier, 2024) Nguyen, Bich-Ngan T.; Pham, Phuong N. H.; Pham, Canh V.; Snášel, Václav
    The Submodular Cover (SC) problem has attracted the attention of researchers because of its wide variety of applications in many domains. Previous studies of this problem have focused on solving it under the assumption of a noise-free environment or on using the greedy algorithm to solve it under noise. However, in many applications the data is large-scale and noisy, so the existing solutions are ineffective or not applicable. Motivated by this, we study the Submodular Cover under Noises (SCN) problem and propose two efficient streaming algorithms that provide solutions with theoretical bounds under two common noise models, multiplicative and additive noise. The experimental results indicate that our proposed algorithms not only provide solutions with high objective function values but also outperform the state-of-the-art algorithm in terms of both the number of queries and the running time.
  • Towards modeling conceptual graphs and transparent intensional logic
    (Springer Nature, 2024) Han, Nguyen Van; Vinh, Phan Cong; Duží, Marie
    In this paper, we introduce a graphical method for modeling and reasoning with linguistic expressions. Modeling is done with a graph called a conceptual graph, and reasoning is carried out by graph transformations. In our conceptual graphs, nodes represent linguistic concepts and edges represent links between these concepts. This model facilitates reasoning with linguistic concepts by making direct consequences easy to infer.
  • Network embedding based on DepDist contraction
    (Springer Nature, 2024) Dopater, Emanuel; Ochodková, Eliška; Kudělka, Miloš
    Networks provide an understandable and, in the case of small size, visualizable representation of data, which allows us to obtain essential information about the relationships between pairs of nodes, e.g., their distances. In visualization, networks have an alternative two-dimensional vector representation to which various machine-learning methods can be applied. More generally, networks can be transformed into a low-dimensional space using so-called embedding methods, which bridge the gap between network analysis and traditional machine learning by creating numerical representations that capture the essence of the network structure. In this article, we present a new embedding method that uses non-symmetric dependency to find the distance between nodes and applies an iterative procedure to find a satisfactory distribution of nodes in space. For dimension 2 and the visualization of the result, we demonstrate the method's effectiveness on small networks. For higher dimensions and several larger networks, we present the results of two experiments comparing our results with two well-established methods in the research community, namely node2vec and DeepWalk. The first experiment focuses on a qualitative comparison of the methods, while the second focuses on applying and comparing the classification results to embeddings in a higher dimension. Although the presented method does not outperform the two chosen methods, its results are still comparable. Therefore, we also explain the limitations of our method and a possible way to overcome them.
  • An optimal standalone wind-photovoltaic power plant system for green hydrogen generation: Case study for hydrogen refueling station
    (Elsevier, 2024) Rizk-Allah, Rizk M.; Hassan, Islam A.; Snášel, Václav; Hassanien, Aboul Ella
    Sustainability goals include the utilization of renewable energy resources to supply energy needs, in addition to wastewater treatment to satisfy water demand. Moreover, hydrogen has become a promising energy carrier and green fuel for decarbonizing the industrial and transportation sectors. In this context, this research investigates a wind-photovoltaic power plant that produces green hydrogen for a hydrogen refueling station and operates an electrocoagulation water treatment unit in Ostrava, in the northeast region of the Czech Republic. The study conducts a techno-economic analysis using HOMER Pro® software for optimal sizing of the power station components and investigates the economic indices of the plant. The power station employs photovoltaic panels and wind turbines to supply the electricity required by the electrolyzers and electrocoagulation reactors. As an off-grid system, lead-acid batteries are used to store the surplus electricity. Wind speed and solar irradiation are the key site-dependent parameters that determine the cost of hydrogen, electricity, and wastewater treatment. The simulated model considers the capital, operating, and replacement costs of the system components. In the proposed system, 240 kg of hydrogen and 720 kWh of electrical energy are required daily for the hydrogen refueling station and the electrocoagulation unit, respectively. Accordingly, the power station annually generates 6,997,990 kWh of electrical energy in addition to 85,595 kg of green hydrogen. Based on the economic analysis, the project's net present cost (NPC) is determined to be 5.49 M and the levelized cost of hydrogen (LCH) is 2.89 /kg, excluding compressor unit costs. This value proves the effectiveness of the power system and encourages the utilization of green hydrogen for fuel-cell electric vehicles (FCVs). Furthermore, emerging electrocoagulation studies produce hydrogen through wastewater treatment, increasing hydrogen production and lowering the LCH. Therefore, this study provides practicable methodological support for optimal sizing of the power station components, which is beneficial for industrialization and economic development as well as for the transition toward sustainability and autonomous energy systems.
  • Optimizing AVR system performance via a novel cascaded RPIDD2-FOPI controller and QWGBO approach
    (PLOS, 2024) Ekinci, Serdar; Snášel, Václav; Rizk-Allah, Rizk M.; Izci, Davut; Salman, Mohammad; Youssef, Ahmed A. F.
    Maintaining stable voltage levels is essential for power systems' efficiency and reliability. Voltage fluctuations during load changes can lead to equipment damage and costly disruptions. Automatic voltage regulators (AVRs) are traditionally used to address this issue, regulating generator terminal voltage. Despite progress in control methodologies, challenges persist, including robustness and response time limitations. Therefore, this study introduces a novel approach to AVR control, aiming to enhance robustness and efficiency. A custom optimizer, the quadratic wavelet-enhanced gradient-based optimization (QWGBO) algorithm, is developed. QWGBO refines the gradient-based optimization (GBO) by introducing exploration and exploitation improvements. The algorithm integrates quadratic interpolation mutation and wavelet mutation strategy to enhance search efficiency. Extensive tests using benchmark functions demonstrate the QWGBO's effectiveness in optimization. Comparative assessments against existing optimization algorithms and recent techniques confirm QWGBO's superior performance. In AVR control, QWGBO is coupled with a cascaded real proportional-integral-derivative with second order derivative (RPIDD2) and fractional-order proportional-integral (FOPI) controller, aiming for precision, stability, and quick response. The algorithm's performance is verified through rigorous simulations, emphasizing its effectiveness in optimizing complex engineering problems. Comparative analyses highlight QWGBO's superiority over existing algorithms, positioning it as a promising solution for optimizing power system control and contributing to the advancement of robust and efficient power systems.
  • Optimized long short-term memory with rough set for sustainable forecasting renewable energy generation
    (Elsevier, 2024) Sayed, Gehad Ismail; El-Latif, Eman I. Abd; Hassanien, Aboul Ella; Snášel, Václav
    Research and development in the field of renewable energy is receiving more attention as a result of the growing demand for clean, sustainable energy. This paper proposes a model for forecasting renewable energy generation. The proposed model consists of three main phases: data preparation, feature selection based on rough sets and the nutcracker optimization algorithm (NOA), and data classification and cross-validation. First, missing values are handled using the mean method. Then, data normalization and data shuffling are applied in the data preparation phase. In the second phase, a new feature selection algorithm, RSNOA, is proposed based on rough set theory and the NOA. RSNOA adopts the rough set method as the fitness function during the search mechanism to find the optimal feature subset. Finally, a custom long short-term memory architecture with the k-fold cross-validation method is used in the last phase. The experimental results revealed that the proposed model is very competitive, achieving a root mean square error of 4.2113, an R2 of 0.96, a mean absolute error of 2.835, and a mean absolute percentage error of 4.6349. The findings also show that the proposed model has great promise as a useful tool for accurately forecasting renewable energy generation across various sources.
  • Evaluation of performance enhancement in Ethereum fraud detection using oversampling techniques
    (Elsevier, 2024) Ravindranath, Vaishali; Nallakaruppan, M. K.; Shri, M. Lawanya; Balusamy, Balamurugan; Bhattacharyya, Siddhartha
    With the growing popularity of cryptocurrencies and their decentralized nature, the risk of fraudulent activities within these ecosystems has become a pressing concern. This research paper focuses on Ethereum fraud detection using a dataset specifically curated for this purpose. The methodology encompasses essential steps, including data cleaning, correlation analysis, data splitting, and exploratory data analysis to understand the data characteristics. Subsequently, self-optimized machine learning models are trained with the PyCaret library while addressing the class imbalance using the SMOTENC (Synthetic Minority Oversampling Technique for Nominal and Continuous data), ADASYN (Adaptive Synthetic sampling), and K-Means-SMOTE techniques. The performance of the various models is evaluated on test and validation datasets using metrics such as accuracy, precision, recall, and AUC (Area Under Curve). The study reveals that the ensemble models, particularly CatBoost (Categorical Boosting) and LightGBM (Light Gradient Boosting Machine), show exceptional efficiency, with accuracy ranging from 97% to 98.42% after oversampling. Moreover, these models exhibit higher F1 scores and AUC values, indicating their potential to detect fraud effectively. The validation metrics lie in the same range, demonstrating that the models do not suffer from overfitting. The experiment demonstrates the promise of ensemble models in Ethereum fraud detection, paving the way for deploying robust fraud detection systems in cryptocurrency ecosystems. The results show that the K-Means-SMOTE oversampling technique achieves the highest classification accuracy of 98.42% with an AUC of 99.82%.
  • Artificial Protozoa Optimizer (APO): A novel bio-inspired metaheuristic algorithm for engineering optimization
    (Elsevier, 2024) Wang, Xiaopeng; Snášel, Václav; Mirjalili, Seyedali; Pan, Jeng-Shyang; Kong, Lingping; Shehadeh, Hisham A.
    This study proposes a novel artificial protozoa optimizer (APO) that is inspired by protozoa in nature. The APO mimics the survival mechanisms of protozoa by simulating their foraging, dormancy, and reproductive behaviors. The APO was mathematically modeled and implemented to perform the optimization processes of metaheuristic algorithms. The performance of the APO was verified via experimental simulations and compared with 32 state-of-the-art algorithms. The Wilcoxon signed-rank test was performed for pairwise comparisons of the proposed APO with the state-of-the-art algorithms, and the Friedman test was used for multiple comparisons. First, the APO was tested using 12 functions of the 2022 IEEE Congress on Evolutionary Computation benchmark. Considering practicality, the proposed APO was then used to solve five popular engineering design problems in a continuous space with constraints. Moreover, the APO was applied to a multilevel image segmentation task in a discrete space with constraints. The experiments confirmed that the APO provides highly competitive results for optimization problems. The source codes of the Artificial Protozoa Optimizer are publicly available at https://seyedalimirjalili.com/projects and https://ww2.mathworks.cn/matlabcentral/fileexchange/162656-artificial-protozoa-optimizer.
  • Analysis on fetal phonocardiography segmentation problem by hybridized classifier
    (Elsevier, 2024) Kong, Lingping; Barnová, Kateřina; Jaroš, René; Mirjalili, Seyedali; Snášel, Václav; Pan, Jeng-Shyang; Martinek, Radek
    Fetal examinations are a significant and challenging field of healthcare. Cardiotocography is the most commonly used method for monitoring fetal heart rate and uterine contractions. Fetal phonocardiography is beginning to emerge as a promising alternative to cardiotocography: it is an entirely non-invasive, passive, and low-cost method. However, it is difficult to estimate the ideal form of the fetal sound signal in most cases due to the presence of disturbances. The disturbances originate from movements or rotations of the fetal body, making fetal heart sound processing difficult. This study presents an automatic method for segmenting the fetal heart sounds in a phonocardiographic signal loaded with different types of disturbances and analyzes which of these disturbances most affect segmentation accuracy. To provide a comprehensive investigation, we propose a hybrid classifier based on the Transformer and eXtreme Gradient Boosting (XGBoost), which improves segmentation performance through decision-making integration. 2000 segments of data from the PhysioNet Research Resource for Complex Physiologic Signals repository and synthetic data (873 recordings) were used for the experiment. For the S1 label, our proposed method ranks first among all compared algorithms in precision, recall, F1, and accuracy, tying with the Transformer in recall. It achieves accuracy increases of 5% and 1.3% over XGBoost and the Transformer, respectively. Similarly, for the S2 label, precision increases by 5.8% and 3.7% over XGBoost and the Transformer, respectively. In general, our proposed method shows effective and promising performance.
  • Knowing who occupies an office: purely contingent, necessary and impossible offices
    (Springer Nature, 2024) Duží, Marie; Číhalová, Martina
    This paper examines different kinds of definite descriptions denoting purely contingent, necessary or impossible objects. The discourse about contingent/impossible/necessary objects can be organised in terms of the rational questions to ask and answer relative to the modal profile of the entity in question. There are also limits on what it is rational to know about entities with this or that modal profile. We also examine epistemic modalities: the kind of necessity and possibility that is determined by epistemic constraints related to knowledge or rationality. Definite descriptions denote so-called offices, roles, or things to be. We explicate these alpha-offices as partial functions from possible worlds to chronologies of objects of type alpha, where alpha is mostly the type of individuals. Our starting point is Prior's distinction between a 'weak' and a 'strong' definite article 'the'. In both cases, the definite description refers to at most one object; yet, in the case of the weak 'the', the referred object can change over time, while in the case of the strong 'the', the object referred to by the definite description is the same forever, once the office has been occupied. The main result we present is a way to obtain Wh-knowledge about who or what plays a given role presented by a hyper-office, i.e. a procedure producing an office. Another no less important result concerns the epistemic necessity of the impossibility of knowing who or what occupies an impossible office presented by a hyper-office.
  • A hybrid analysis method for calculating the cogging torque of consequent pole hybrid excitation synchronous machine
    (Emerald Publishing Limited, 2024) Wu, Jie; Wang, Kang; Zhang, Ming; Guo, Leilei; Shen, Yongpeng; Wang, Mingjie; Zhang, Jitao; Snášel, Václav
    Purpose: When solving the cogging torque of complex electromagnetic structures, such as the consequent pole hybrid excitation synchronous (CPHES) machine, traditional methods have huge computational complexity. The notable feature of the CPHES machine is its symmetric range of field-strengthening and field-weakening, but this type of machine is destined to have a complex electromagnetic structure. The purpose of this paper is to propose a hybrid analysis method to quickly and accurately solve the cogging torque of a complex 3D electromagnetic structure, applicable to CPHES machines with different magnetic pole shapings.
    Design/methodology/approach: In this paper, a hybrid method for calculating the cogging torque of the CPHES machine is proposed, which considers three commonly used pole shapings. First, through magnetic field analysis, the complex 3D finite element analysis (FEA) is simplified to a 2D field computation. Second, the discretization method is used to obtain the distribution of permeance and its differential along the circumference of the air gap, taking into account the effect of slots. Finally, the cogging torque of the whole motor is obtained using modular calculation and the symmetry of the rotor structure.
    Findings: The method is applicable to different pole shapings. The experimental results show that the proposed method is consistent with 3D FEA and experimental measurements, and the average calculation time is reduced from 8 h to 4 min.
    Originality/value: This paper proposes a new concept for calculating cogging torque: a hybrid calculation combining dimension reduction and discretization modules. Based on magnetic field analysis, the 3D problem is simplified into a 2D one, reducing computational complexity. Based on the symmetry of the machine structure, a modeling method for discretized analytical models is proposed to calculate the cogging torque of the machine.
  • Adaptive energy management strategy for solar energy harvesting IoT nodes by evolutionary fuzzy rules
    (Elsevier, 2024) Prauzek, Michal; Krömer, Pavel; Mikuš, Miroslav; Konečný, Jaromír
    This study explores the integration of genetic programming (GP) and fuzzy logic to enhance control strategies for Internet of Things (IoT) nodes across varied locations. We introduce a novel methodology for designing a fuzzy-based energy management controller that autonomously determines the most suitable controller structure and inputs. This approach is evaluated using a solar harvesting IoT model that leverages historical solar irradiance data, highlighting the methodology's potential for diverse geographical applications and its compatibility with low-performance microcontrollers. The findings demonstrate that integrating GP with the designed fitness function enables dynamic learning and adaptation of control strategies, optimizing system behavior based on historical data. The experimental model showcases an ability to efficiently use historical datasets to derive optimal control strategies, with the fitness metric indicating consistent improvement throughout the learning phase. The results indicate that useful control strategies learned at one location may outperform locally trained control strategies and can be successfully re-applied at other locations.
  • Hybrid optimization algorithm for handwritten document enhancement
    (Tech Science Press, 2024) Chu, Shu-Chuan; Yang, Xiaomeng; Zhang, Li; Snášel, Václav; Pan, Jeng-Shyang
    The Gannet Optimization Algorithm (GOA) and the Whale Optimization Algorithm (WOA) demonstrate strong performance; however, there remains room for improvement in convergence and practical applications. This study introduces a hybrid optimization algorithm, named the adaptive inertia weight whale optimization algorithm and gannet optimization algorithm (AIWGOA), which addresses challenges in enhancing handwritten documents. The hybrid strategy integrates the strengths of both algorithms, significantly enhancing their capabilities, whereas the adaptive parameter strategy mitigates the need for manual parameter setting. By amalgamating the hybrid strategy and parameter-adaptive approach, the Gannet Optimization Algorithm was refined to yield the AIWGOA. Through a performance analysis of the CEC2013 benchmark, the AIWGOA demonstrates notable advantages across various metrics. Subsequently, an evaluation index was employed to assess the enhanced handwritten documents and images, affirming the superior practical application of the AIWGOA compared with other algorithms.
  • Multi-objective optimization model for railway heavy-haul traffic: Addressing carbon emissions reduction and transport efficiency improvement
    (Elsevier, 2024) Tian, Ai-Qing; Wang, Xiao-Yang; Xu, Heying; Pan, Jeng-Shyang; Snášel, Václav; Lv, Hong-Xia
    This paper establishes a multi-objective optimization model for railway heavy-haul trains, focusing on reducing carbon emissions and improving transport efficiency. The model integrates optimization of the route and the vehicle load rate, significantly reducing carbon emissions and enhancing transport efficiency. It addresses the challenges and characteristics of heavy-haul trains, introducing multi-objective optimization problems related to transport carbon emissions and efficiency. Using a pigeon-inspired optimization algorithm, the model considers joint constraints between carbon emissions and transport efficiency objectives. To overcome challenges in multi-objective transportation problems, the paper proposes a forward-learning pigeon-inspired optimization algorithm based on a surrogate-assisted model. This approach calculates the quality of the candidate solution using a surrogate model, reducing time costs. The algorithm employs a forward-learning strategy to enhance learning from non-dominant solutions. Experimental validation with benchmark functions confirms the effectiveness of the model and offers optimized solutions. The proposed method reduces carbon emissions while maintaining transport efficiency, contributing innovative ideas for the development of sustainable heavy-duty trains.
  • Novel lossy compression method of noisy time series data with anomalies: Application to partial discharge monitoring in overhead power lines
    (Elsevier, 2024) Klein, Lukáš; Dvorský, Jiří; Seidl, David; Prokop, Lukáš
    In overhead power transmission lines, particularly in regions like natural parks where establishing a safe zone is difficult, the adoption of cross-linked polyethylene insulated covered conductors (CCs) helps prevent outages due to vegetation contact. However, these CCs are susceptible to partial discharge (PD) activity, which can degrade insulation and lead to system failures. Detecting and analyzing PD are essential for maintaining power system reliability and safety. A key challenge in PD monitoring is transmitting the large volumes of PD signal data over unreliable 2G networks, as existing compression methods either compromise on data integrity or are ineffective. This paper introduces a novel lossy compression technique utilizing an autoencoder with skip connections and correction data to address this issue. Unlike previous algorithms that struggle with noisy time series data and fail to preserve crucial anomaly information, our method reconstructs the signal without anomalies, which are subsequently restored using correction data. Achieving a compression factor of about 25 (reducing data to 4.1% of its original size), this approach maintains essential PD signal features for analysis. The effectiveness of our method is validated by three classification algorithms, showing promise for future fault detection, diagnosis, and memory space reduction. This innovative compression solution marks a significant advancement in PD data processing, offering a balanced trade-off between compression efficiency and data fidelity, and paving the way for enhanced remote monitoring in power transmission systems.
  • Application of Instrumented Indentation Test and Neural Networks to determine the constitutive model of in-situ austenitic stainless steel components
    (Springer Nature, 2024) Ma, Quoc-Phu; Basterrech, Sebastian; Halama, Radim; Omacht, Daniel; Měsíček, Jakub; Hajnyš, Jiří; Platoš, Jan; Petrů, Jana
    Over the last few decades, the Instrumented Indentation Test (IIT) has evolved into a versatile and convenient method for assessing the mechanical properties of metals. Unlike conventional hardness tests, IIT allows for incremental control of the indenter based on depth or force, enabling the measurement of not only hardness but also tensile properties, fracture toughness, and welding residual stress. Two crucial measures in IIT are the reaction force (F) exerted by the tested material on the indenter and the depth of the indenter (D). Evaluation of the mentioned properties from F-D curves typically involves complex analytical formulas that restrict the application of IIT to a limited group of materials. Moreover, for soft materials with excessive pile-up/sink-in behavior, such as the austenitic stainless steel SS304L, conducting IIT becomes challenging due to improper evaluation of the imprint depth. In this work, we propose a systematic procedure to replace complex analytical evaluations of IIT and expensive physical measurements. The proposed approach is based on the well-known potential of Neural Networks (NN) for data-driven modeling. We carried out physical IIT and tensile tests on samples prepared from SS304L. In addition, we generated multiple configurations of material properties and simulated the corresponding IITs using the Finite Element Method (FEM). The information provided by the physical tests and the simulated data from FEM are integrated into an NN to produce a parametric mapping that can predict the parameters of a constitutive model from any given F-D curve. Our physical and numerical experiments successfully demonstrate the potential of the proposed approach.