Publikační činnost Katedry informatiky / Publications of Department of Computer Science (460)
Permanent URI for this collection: http://hdl.handle.net/10084/64750
The collection contains bibliographic records of the publication activity (articles) of academic staff of the Department of Computer Science (460) in journals and in Lecture Notes in Computer Science indexed in Web of Science from 2003 to the present.
The collection includes:
a) publications whose original documents list Vysoká škola báňská-Technical University of Ostrava (VŠB-TUO) as the author's affiliation (address),
b) publications whose original documents do not list VŠB-TUO in the address, but whose authors demonstrably worked at VŠB-TUO at the time of their preparation and publication.
The bibliographic records were originally created in the collection Publikační činnost akademických pracovníků VŠB-TUO (Publication Activity of Academic Staff of VŠB-TUO), which has tracked the publishing of academic staff since 1990.
Recent Submissions
Item: IndiVNet: A region adaptive semantic image segmentation for autonomous driving in unstructured environments (Springer Nature, 2025)
Authors: Chakraborty, Pritam; Bandyopadhyay, Anjan; Bhattacharyya, Siddhartha; Platoš, Jan
Autonomous navigation in developing regions is challenged by heterogeneous traffic, dynamic occlusions, and weak road structure. Existing segmentation models, largely trained on structured Western datasets, struggle to generalize under these conditions. To address this gap, we propose IndiVNet, a semantic segmentation architecture tailored for unstructured Indian driving environments. IndiVNet introduces a progressive dilation encoder that captures fine-grained details and broad contextual cues without inducing oversparsity. Evaluated on the India Driving Dataset (IDD), it achieves 69.98% mIoU, outperforming CNN and Transformer baselines, and reaches 73.2% mIoU on CamVid, demonstrating strong cross-domain generalization. By combining contextual adaptability with real-time efficiency, IndiVNet offers a scalable, region-aware solution for robust autonomous navigation in complex environments.

Item: Lexical predicates do substitute in fine-grained attitudes (Springer Nature, 2025)
Authors: Jespersen, Bjørn
Let {'is a woodchuck', 'is a groundhog'} be a pair of synonymous lexical predicates. Are they intersubstitutable within a fine-grained attitude ascription without affecting either the truth-value of the ascription or the content of the attitude? I will show that synonymy is sufficient to preserve substitutability within any non-quotational context. This, however, requires that substitution is executed within a semantics that observes semantic and epistemic transparency even in contexts such as hyperintensional belief reports. I will develop my argument within Transparent Intensional Logic.
I use my pro-substitution claim to argue against one wrong reason for fine-graining, one which introduces logical distinctions without semantic differences.

Item: Using synthetic data for pretraining partial discharge detection in overhead transmission lines (Springer Nature, 2025)
Authors: Klein, Lukáš; Fulneček, Jan; Kabot, Ondřej; Dvorský, Jiří; Prokop, Lukáš
Accurate detection of partial discharges (PDs) in medium-voltage overhead transmission lines is critical for preemptive maintenance and avoiding costly outages, yet it is challenged by scarce labeled data and pervasive electromagnetic interference. This paper investigates a hybrid simulation-and-data-driven framework in which synthetically generated PD signals are used to pretrain deep neural networks, which are subsequently fine-tuned on a limited set of real overhead-line measurements. The synthetic pipeline systematically varies PD repetition rates, amplitude distributions, vegetation-contact scenarios, and noise conditions, producing diverse time-series and spectrogram-like representations that approximate real operating environments. We conduct a comprehensive ablation study across multiple architectures (Convolutional Neural Networks (CNNs), a Vision Transformer (ViT), and a Long Short-Term Memory (LSTM) network) and analyze their sensitivity to granular sweeps of synthetic-data parameters. CNN-based models decisively outperform the ViT and LSTM counterparts on the spectrogram-based classification task, while the ViT and LSTM fail to learn meaningful representations. For the successful CNNs, pretraining on carefully parameterized synthetic datasets, particularly those reflecting higher PD activity such as our Datasets 3 and 4, consistently improves downstream performance on real data, boosting the Matthews Correlation Coefficient (MCC) on imbalanced, cost-sensitive test sets by roughly 10-20% compared with training from scratch.
At the same time, we show that poorly aligned synthetic data can degrade generalization, underscoring the need for accurate noise calibration and domain-aligned simulation. Overall, the results confirm that (i) architectural choice is pivotal for PD detection in overhead lines and (ii) well-designed synthetic data is a powerful, practical lever for achieving reliable and cost-effective PD monitoring when real labeled data are limited.

Item: From constraints fusion to manifold optimization: A new directional transport manifold metaheuristic algorithm (Elsevier, 2024)
Authors: Snášel, Václav; Kong, Lingping; Das, Swagatam
The rise of geometry-based models and methodologies, exemplified by geometric deep learning and manifold numerical optimization algorithms, has opened a new domain for applications that grapple with geometric data complexities, such as electroencephalogram signals represented on the symmetric positive definite matrix manifold, or hierarchical data represented on a hyperbolic manifold. This paradigm shift is driven by the need to capture the intricacies and richness inherent in such data, where traditional methods prove inadequate. Metaheuristic algorithms are renowned for their versatile adaptability across applications, offering practical solutions within reasonable timeframes; however, conventional metaheuristic algorithms fail on manifold applications, producing meaningless solutions. From an extrinsic optimization perspective, we treat manifold optimization problems as general optimization problems with multiple fused constraints that restrict the optimization path to the manifold. This study pioneers the proposal and implementation of a metaheuristic manifold optimizer, introducing a novel directional transport operator to rectify the previously identified issues.
Through experimentation across five sets of 25 problems, comparing against five algorithms including both gradient-free and gradient-dominant counterparts, our proposed algorithm emerges as the best performer within the gradient-free category and is competitive even against gradient-dominant algorithms. Furthermore, we applied the proposed algorithm to a robot dynamic manipulation problem, achieving a near-optimal solution that eludes gradient-dominant approaches. This paper explores the inherent capabilities of a metaheuristic algorithm and establishes its generalization to non-Euclidean functional landscapes. The source code will be available at https://github.com/lingpingfuzzy/metaheuristic-manifold-optimization.

Item: Collaborative filtering by graph convolution network in location-based recommendation system (KSII, 2024)
Authors: Tran, Tin T.; Snášel, Václav; Nguyen, Thuan Q.
Recommendation systems research is a subfield of information retrieval, as these systems recommend appropriate items to users during their visits. Appropriate recommendation results help users save time searching while increasing productivity at work, travel, or shopping. The problem becomes more difficult when the items are geographical locations on the ground, as they are associated with a wealth of contextual information, such as geographical location, opening time, and sequences of related locations. Furthermore, on social networking platforms that allow users to check in or express interest when visiting a specific location, this signal spreads to the users' friends across the network. Consideration should therefore be given to relationship data extracted from online social networking platforms, as well as its impact on the geolocation recommendation process. In this study, we compare the similarity of geographic locations based on their distance on the ground and their correlation with users who have checked in at those locations.
When calculating feature embeddings for users and locations, social relationships are also considered as attention signals. The similarity between locations and the correlation between users are exploited in the overall architecture of the recommendation model, which employs graph convolution networks to generate recommendations with high precision and recall. The proposed model is implemented and executed on popular datasets, then compared to baseline models to assess its overall effectiveness.

Item: Will dissolved hydrogen reveal the instability of the anaerobic digestion process? (MDPI, 2025)
Authors: Platošová, Daniela; Rusín, Jiří; Svoboda, Radek; Vašinková, Markéta
Dissolved hydrogen is a critical factor in maintaining the delicate balance among microbial species that drive anaerobic digestion. Since previous findings have demonstrated a correlation between dissolved hydrogen concentration and volatile fatty acid (VFA) levels, we propose to evaluate the use of dissolved hydrogen concentration in digestate as an alternative to traditional VFA measurements. The aim is to determine whether dissolved hydrogen could serve as a faster, more accurate, and more efficient indicator of process instability in anaerobic digestion. An integral part of this task also involves addressing the technical challenge of identifying a suitable sensor that meets our requirements. In this study, we evaluated the ratio of dissolved hydrogen concentration to Total Inorganic Carbon as a potential alternative to the traditional stability indicator, Volatile Fatty Acids/Total Inorganic Carbon (VFA/TIC), also referred to as Flüchtige Organische Säuren/Totales Anorganisches Carbonat (FOS/TAC). The single-stage anaerobic digestion process was carried out in a Terrafors IS rotary drum bioreactor for 150 days at an average temperature of 40 °C and an organic volatile load of 0.092 kg m⁻³ d⁻¹. Corn silage was dosed on weekdays as the substrate.
With a theoretical retention time of 45 days, a biogas production of 0.219 Nm³ kgVS⁻¹ with a CH₄ content of 31.6% was achieved. The values of the VFA/TIC stability indicator ranged from 0.22 to 5.66, with the highest values obtained when the reactor was overloaded. The dissolved hydrogen concentration ranged from 0.005 to 0.196 mg dm⁻³. The Pearson correlation coefficient was 0.337 and the Spearman correlation coefficient was 0.468. The amperometric microsensor proved unsuitable for field applications due to its lack of sensitivity and short lifetime. The proposed ratio of dissolved hydrogen concentration to TIC did not prove significantly more effective than the established VFA/TIC indicator.

Item: An activity level based surrogate-assisted evolutionary algorithm for many-objective optimization (Elsevier, 2024)
Authors: Pan, Jeng-Shyang; Zhang, An-Ning; Chu, Shu-Chu; Zhao, Jia; Snášel, Václav
Addressing expensive many-objective optimization problems (MaOPs) is a formidable challenge owing to their intricate objective spaces and high computational demands. Surrogate-assisted evolutionary algorithms (SAEAs) have gained prominence because of their ability to tackle MaOPs efficiently. They achieve this by using surrogate models to approximate objective functions, significantly reducing their reliance on costly evaluations. However, the effectiveness of many SAEAs is hampered by their reliance on various surrogate models and optimization strategies, which often result in suboptimal prediction accuracy and optimization performance. This study introduces a novel approach: an activity level based surrogate-assisted reference vector guided evolutionary algorithm specifically designed for expensive MaOPs. Utilizing the Kriging model and an angle penalty distance criterion, this algorithm effectively filters solutions that require evaluation using the original function.
It employs a fixed number of training sets that are updated via a two-screening strategy that leverages activity levels to refine population screening. This process ensures that the reference vector progressively aligns more closely with the Pareto fronts, which is enhanced by the deployment of adjusted adaptive reference vectors, thereby improving screening precision. The proposed algorithm was tested against six contemporary algorithms using the DTLZ, WFG, and MaF test suites. The experimental results show that the proposed method outperforms the other algorithms on most problems. Furthermore, its application to the cloud computing task scheduling problem underscores its practical value, demonstrating notable effectiveness. The experimental outcomes attest to the robust performance of the algorithm across both test scenarios and real-world applications.

Item: A brief review on quantum computing based drug design (Wiley, 2024)
Authors: Das, Poulami; Ray, Avishek; Bhattacharyya, Siddhartha; Platoš, Jan; Snášel, Václav; Mršić, Leo; Huang, Tingwen; Zelinka, Ivan
Design and development of new drug molecules are essential for the survival of human society. New drugs are designed for therapeutic purposes to combat new diseases. Besides treating new diseases, new drug development is also needed to treat pre-existing diseases more effectively and to reduce the side effects of existing drugs. The design of drugs involves several steps, from the discovery of the drug molecule to its commercialization in the market. One of the most critical steps in drug design is to find the molecular interactions between the target (infected) molecule and the drug molecule. Several complex chemical equations need to be solved to determine the molecular interactions. In the late 20th century, advances in computational technologies made the solution of chemical equations easier and faster. Moreover, the design of drug molecules involves multi-criteria optimization.
Classical computational methodologies have been used for drug design since the end of the 20th century. Nowadays, however, more advanced computational methodologies are indispensable for designing drugs for new diseases and drugs with fewer side effects. In this context, the quantum computing paradigm has proved beneficial in drug design due to its advanced computational capabilities. This paper presents a comprehensive, state-of-the-art review of the quantum computing-based methodologies involved in drug design. A comparative study is made of the different quantum-aided drug design methods, stating each methodology's merits and demerits. The review presented in this manuscript will help new researchers assess the present state of the art in quantum-based drug design. This article is categorized under: Technologies > Structure Discovery and Clustering; Technologies > Computational Intelligence; Application Areas > Health Care.

Item: Association of selected adipokines with vitamin D deficiency in children with inflammatory bowel disease (BMC, 2024)
Authors: Geryk, Miloš; Kučerová, Veronika; Velgáňová-Véghová, Mária; Foltenová, Hana; Bouchalová, Kateřina; Karásek, David; Radvansky Jr., Martin; Karásková, Eva
Background: Adipose tissue is significantly involved in inflammatory bowel disease (IBD). Vitamin D can affect both adipogenesis and inflammation. The aim of this study was to compare the production of selected adipokines potentially involved in the pathogenesis of IBD (adiponectin, resistin, retinol binding protein 4 (RBP-4), adipocyte fatty acid binding protein, and nesfatin-1) in children with IBD according to the presence of 25-hydroxyvitamin D (25(OH)D) deficiency. Methods: The study was conducted as a case-control study in pediatric patients with IBD and healthy children of the same sex and age. In addition to adipokines and 25(OH)D, anthropometric parameters and markers of inflammation and disease activity were assessed in all participants.
Results: Children with IBD had significantly higher resistin levels regardless of 25(OH)D levels. Only IBD patients with 25(OH)D deficiency had significantly lower RBP-4 compared to healthy controls and also compared to IBD patients without 25(OH)D deficiency. No other significant differences in adipokines were found in children with IBD with or without 25(OH)D deficiency. 25(OH)D levels in IBD patients correlated with RBP-4 only and did not correlate with the other adipokines. Conclusions: Whether the lower RBP-4 levels in the 25(OH)D-deficient group of IBD patients directly reflect vitamin D deficiency remains uncertain. The production of the other adipokines does not appear to be directly related to vitamin D deficiency.

Item: Retinal image dataset of infants and retinopathy of prematurity (Springer Nature, 2024)
Authors: Timkovič, Juraj; Nowaková, Jana; Kubíček, Jan; Hasal, Martin; Varyšová, Alice; Kolarčík, Lukáš; Maršolková, Kristýna; Augustynek, Martin; Snášel, Václav
Retinopathy of prematurity (ROP) is a vasoproliferative disease, especially of newborns and infants, which can potentially affect and damage vision. Despite recent advances in neonatal care and medical guidelines, ROP remains one of the leading causes of childhood blindness worldwide. The paper presents a unique dataset of 6,004 retinal images of 188 newborns, most of whom are premature infants. The dataset is accompanied by anonymized patient information from the ROP screening acquired at the University Hospital Ostrava, Czech Republic. Three digital retinal imaging camera systems are used in the study: Clarity RetCam 3, Natus RetCam Envision, and Phoenix ICON. The study is enriched by the software tool ReLeSeT, which is aimed at automatic retinal lesion segmentation and extraction from retinal images. Consequently, this tool enables computing geometric and intensity features of retinal lesions.
Also, we publish a set of pre-processing tools for feature boosting of retinal lesions and retinal blood vessels for building classification and segmentation models in ROP analysis.

Item: Fast bicriteria streaming algorithms for submodular cover problem under noise models (Elsevier, 2024)
Authors: Nguyen, Bich-Ngan T.; Pham, Phuong N. H.; Pham, Canh V.; Snášel, Václav
The Submodular Cover (SC) problem has attracted the attention of researchers because of its wide variety of applications in many domains. Previous studies on this problem have focused on solving it under the assumption of a noise-free environment or on using a greedy algorithm to solve it under noise. However, in some applications the data is large-scale and comes in a noisy version, so the existing solutions are ineffective or not applicable to large and noisy data. Motivated by this, we study the Submodular Cover under Noises (SCN) problem and propose two efficient streaming algorithms, which provide solutions with theoretical bounds under two common noise models, multiplicative and additive noise. The experimental results indicate that our proposed algorithms not only provide solutions with high objective function values but also outperform the state-of-the-art algorithm in terms of both the number of queries and the running time.

Item: Towards modeling conceptual graphs and transparent intensional logic (Springer Nature, 2024)
Authors: Han, Nguyen Van; Vinh, Phan Cong; Duží, Marie
In this paper, we introduce a graphical method for modeling and reasoning with linguistic expressions. The former is represented by a graph called a conceptual graph, and the latter involves graph transformations. In our conceptual graphs, nodes represent linguistic concepts and edges the links between these concepts.
This model facilitates reasoning with linguistic concepts by making direct consequences easy to infer.

Item: Network embedding based on DepDist contraction (Springer Nature, 2024)
Authors: Dopater, Emanuel; Ochodková, Eliška; Kudělka, Miloš
Networks provide an understandable and, for small sizes, visualizable representation of data, which allows us to obtain essential information about the relationships between pairs of nodes, e.g., their distances. In visualization, networks have an alternative two-dimensional vector representation to which various machine-learning methods can be applied. More generally, networks can be transformed into a low-dimensional space using so-called embedding methods, which bridge the gap between network analysis and traditional machine learning by creating numerical representations that capture the essence of the network structure. In this article, we present a new embedding method that uses non-symmetric dependency to find the distance between nodes and applies an iterative procedure to find a satisfactory distribution of nodes in space. For dimension 2 and the visualization of the result, we demonstrate the method's effectiveness on small networks. For higher dimensions and several larger networks, we present the results of two experiments comparing our results with two methods well established in the research community, namely node2vec and DeepWalk. The first experiment focuses on a qualitative comparison of the methods, while the second focuses on applying and comparing the classification results on embeddings in a higher dimension. Although the presented method does not outperform the two chosen methods, its results are still comparable.
Therefore, we also explain the limitations of our method and a possible way to overcome them.

Item: An optimal standalone wind-photovoltaic power plant system for green hydrogen generation: Case study for hydrogen refueling station (Elsevier, 2024)
Authors: Rizk-Allah, Rizk M.; Hassan, Islam A.; Snášel, Václav; Hassanien, Aboul Ella
Sustainability goals include the utilization of renewable energy resources to supply energy needs, in addition to wastewater treatment to satisfy water demand. Moreover, hydrogen has become a promising energy carrier and green fuel for decarbonizing the industrial and transportation sectors. In this context, this research investigates a wind-photovoltaic power plant to produce green hydrogen for a hydrogen refueling station and to operate an electrocoagulation water treatment unit in Ostrava, in the northeast region of the Czech Republic. The study conducts a techno-economic analysis through HOMER Pro® software for optimal sizing of the power station components and investigates the economic indices of the plant. The power station employs photovoltaic panels and wind turbines to supply the electricity required by the electrolyzers and electrocoagulation reactors. As an off-grid system, lead-acid batteries are utilized to store surplus electricity. Wind speed and solar irradiation are the key site-dependent parameters that determine the cost of hydrogen, electricity, and wastewater treatment. The simulated model considers the capital, operating, and replacement costs of the system components. In the proposed system, 240 kg of hydrogen and 720 kWh of electrical energy are required daily for the hydrogen refueling station and the electrocoagulation unit, respectively. Accordingly, the power station annually generates 6,997,990 kWh of electrical energy in addition to 85,595 kg of green hydrogen.
Based on the economic analysis, the project's net present cost (NPC) is determined to be 5.49 M and the levelized cost of hydrogen (LCH) is 2.89 /kg, excluding compressor unit costs. This value proves the effectiveness of this power system, which encourages the utilization of green hydrogen for fuel-cell electric vehicles (FCVs). Furthermore, emerging electrocoagulation studies produce hydrogen through wastewater treatment, increasing hydrogen production and lowering the LCH. Therefore, this study provides practicable methodological support for optimal sizing of the power station components, which is beneficial for industrialization and economic development as well as the transition toward sustainability and autonomous energy systems.

Item: Optimizing AVR system performance via a novel cascaded RPIDD2-FOPI controller and QWGBO approach (PLOS, 2024)
Authors: Ekinci, Serdar; Snášel, Václav; Rizk-Allah, Rizk M.; Izci, Davut; Salman, Mohammad; Youssef, Ahmed A. F.
Maintaining stable voltage levels is essential for the efficiency and reliability of power systems. Voltage fluctuations during load changes can lead to equipment damage and costly disruptions. Automatic voltage regulators (AVRs) are traditionally used to address this issue by regulating generator terminal voltage. Despite progress in control methodologies, challenges persist, including limitations in robustness and response time. Therefore, this study introduces a novel approach to AVR control, aiming to enhance robustness and efficiency. A custom optimizer, the quadratic wavelet-enhanced gradient-based optimization (QWGBO) algorithm, is developed. QWGBO refines gradient-based optimization (GBO) by introducing exploration and exploitation improvements. The algorithm integrates a quadratic interpolation mutation and a wavelet mutation strategy to enhance search efficiency. Extensive tests using benchmark functions demonstrate QWGBO's effectiveness in optimization.
Comparative assessments against existing optimization algorithms and recent techniques confirm QWGBO's superior performance. In AVR control, QWGBO is coupled with a cascaded real proportional-integral-derivative with second-order derivative (RPIDD2) and fractional-order proportional-integral (FOPI) controller, aiming for precision, stability, and quick response. The algorithm's performance is verified through rigorous simulations, emphasizing its effectiveness in optimizing complex engineering problems. Comparative analyses highlight QWGBO's superiority over existing algorithms, positioning it as a promising solution for optimizing power system control and contributing to the advancement of robust and efficient power systems.

Item: Optimized long short-term memory with rough set for sustainable forecasting renewable energy generation (Elsevier, 2024)
Authors: Sayed, Gehad Ismail; El-Latif, Eman I. Abd; Hassanien, Aboul Ella; Snášel, Václav
Research and development in the field of renewable energy is receiving more attention as a result of the growing demand for clean, sustainable energy. This paper proposes a model for forecasting renewable energy generation. The proposed model consists of three main phases: data preparation; feature selection based on rough sets and the nutcracker optimization algorithm (NOA); and data classification and cross-validation. First, missing values are tackled using the mean method. Then, data normalization and data shuffling are applied in the data preparation phase. In the second phase, a new feature selection algorithm, RSNOA, is proposed based on rough set theory and the NOA. RSNOA adopts the rough set method as the fitness function during the search mechanism to find the optimal feature subset. Finally, a custom long short-term memory architecture with k-fold cross-validation is utilized in the last phase. The experimental results revealed that the proposed model is very competitive.
It achieves a root mean square error of 4.2113, an R² of 0.96, a mean absolute error of 2.835, and a mean absolute percentage error of 4.6349. The findings also show that the proposed model holds great promise as a useful tool for accurately forecasting renewable energy generation across various sources.

Item: Evaluation of performance enhancement in Ethereum fraud detection using oversampling techniques (Elsevier, 2024)
Authors: Ravindranath, Vaishali; Nallakaruppan, M. K.; Shri, M. Lawanya; Balusamy, Balamurugan; Bhattacharyya, Siddhartha
With the growing popularity of cryptocurrencies and their decentralized nature, the risk of fraudulent activities within these ecosystems has become a pressing concern. This research paper focuses on Ethereum fraud detection using a dataset specifically curated for this purpose. The methodology encompasses essential steps, including data cleaning, correlation analysis, data splitting, and exploratory data analysis to understand the data characteristics. Subsequently, self-optimized machine learning models are trained with the PyCaret library while addressing the class imbalance using the SMOTENC (Synthetic Minority Oversampling Technique for Nominal and Continuous data), ADASYN (Adaptive Synthetic sampling), and K-Means-SMOTE techniques. The performance of the various models is evaluated on test and validation datasets using metrics such as accuracy, precision, recall, and AUC (Area Under the Curve). The study reveals that the ensemble models, particularly CatBoost (Categorical Boosting) and LGBM (Light Gradient Boosting Machine), show exceptional efficiency, with accuracy ranging from 97% to 98.42% after oversampling. Moreover, these models exhibit higher F1 scores and AUC values, indicating their potential to detect fraud effectively. The validation metrics also lie in the same range, demonstrating that the models do not suffer from overfitting.
The experiment demonstrates the promise of ensemble models in Ethereum fraud detection, paving the way for deploying robust fraud detection systems in cryptocurrency ecosystems. The results show that the K-Means-SMOTE oversampling technique yields the highest classification accuracy of 98.42%, with an AUC of 99.82%.

Item: Artificial Protozoa Optimizer (APO): A novel bio-inspired metaheuristic algorithm for engineering optimization (Elsevier, 2024)
Authors: Wang, Xiaopeng; Snášel, Václav; Mirjalili, Seyedali; Pan, Jeng-Shyang; Kong, Lingping; Shehadeh, Hisham A.
This study proposes a novel artificial protozoa optimizer (APO) that is inspired by protozoa in nature. The APO mimics the survival mechanisms of protozoa by simulating their foraging, dormancy, and reproductive behaviors. The APO was mathematically modeled and implemented to perform the optimization processes of metaheuristic algorithms. Its performance was verified via experimental simulations and compared with 32 state-of-the-art algorithms. The Wilcoxon signed-rank test was performed for pairwise comparisons of the proposed APO with the state-of-the-art algorithms, and the Friedman test was used for multiple comparisons. First, the APO was tested using 12 functions of the 2022 IEEE Congress on Evolutionary Computation benchmark. Considering practicality, the proposed APO was used to solve five popular engineering design problems in a continuous space with constraints. Moreover, the APO was applied to a multilevel image segmentation task in a discrete space with constraints. The experiments confirmed that the APO provides highly competitive results for optimization problems.
The source code of the Artificial Protozoa Optimizer is publicly available at https://seyedalimirjalili.com/projects and https://ww2.mathworks.cn/matlabcentral/fileexchange/162656-artificial-protozoa-optimizer.

Item: Analysis on fetal phonocardiography segmentation problem by hybridized classifier (Elsevier, 2024)
Authors: Kong, Lingping; Barnová, Kateřina; Jaroš, René; Mirjalili, Seyedali; Snášel, Václav; Pan, Jeng-Shyang; Martinek, Radek
Fetal examinations are a significant and challenging field of healthcare. Cardiotocography is the most commonly used method for monitoring fetal heart rate and uterine contractions. Fetal phonocardiography is beginning to emerge as a promising alternative to cardiotocography: it is an entirely non-invasive, passive, and low-cost method. However, in most cases it is difficult to estimate the ideal form of the fetal sound signal due to the presence of disturbances. The disturbances originate from movements or rotations of the fetal body, making fetal heart sound processing difficult. This study presents an automatic method for segmenting the fetal heart sounds in a phonocardiographic signal that is loaded with different types of disturbances and analyzes which of these disturbances most affect segmentation accuracy. To provide a comprehensive investigation, we propose a hybrid classifier based on a Transformer and eXtreme Gradient Boosting (XGBoost) to improve segmentation performance through decision-making integration. 2,000 segments of data from the Research Resource for Complex Physiologic Signals (PhysioNet repository) and synthetic data (873 recordings) were used for the experiment. For the S1 label, our proposed method ranks first among all compared algorithms in precision, recall, F1, and accuracy, tying with the Transformer in recall. It achieves accuracy increases of 5% and 1.3% compared to XGBoost and the Transformer, respectively.
Similarly, for the S2 label, there is a precision increase of 5.8% and 3.7% compared to XGBoost and the Transformer, respectively. In general, our proposed method shows effective and promising performance.

Item: Knowing who occupies an office: purely contingent, necessary and impossible offices (Springer Nature, 2024)
Authors: Duží, Marie; Číhalová, Martina
This paper examines different kinds of definite descriptions denoting purely contingent, necessary or impossible objects. The discourse about contingent/impossible/necessary objects can be organised in terms of the rational questions to ask and answer relative to the modal profile of the entity in question. There are also limits on what it is rational to know about entities with this or that modal profile. We also examine epistemic modalities; these are the kinds of necessity and possibility that are determined by epistemic constraints related to knowledge or rationality. Definite descriptions denote so-called offices, roles, or things to be. We explicate these alpha-offices as partial functions from possible worlds to chronologies of objects of type alpha, where alpha is mostly the type of individuals. Our starting point is Prior's distinction between a 'weak' and a 'strong' definite article 'the'. In both cases, the definite description refers to at most one object; yet in the case of the weak 'the', the referred object can change over time, while in the case of the strong 'the', the object referred to by the definite description remains the same forever, once the office has been occupied. The main result we present is how to obtain Wh-knowledge about who or what plays a given role presented by a hyper-office, i.e. a procedure producing an office. Another no less important result concerns the epistemic necessity of the impossibility of knowing who or what occupies an impossible office presented by a hyper-office.
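Several of the studies listed above report Pearson and Spearman correlation coefficients (e.g., 0.337 and 0.468 for dissolved hydrogen versus the VFA/TIC indicator). As a minimal sketch of how these two coefficients are computed from paired measurements, using purely illustrative numbers (not the published data):

```python
from statistics import mean

def pearson(x, y):
    # Pearson's r: covariance normalized by the product of standard deviations.
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def spearman(x, y):
    # Spearman's rho: Pearson's r applied to the ranks of the values
    # (simplified version without tie handling).
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1.0
        return r
    return pearson(ranks(x), ranks(y))

# Illustrative paired measurements (hypothetical, NOT the published data):
h2 = [0.005, 0.020, 0.050, 0.110, 0.196]   # dissolved H2, mg/dm^3
vfa_tic = [0.22, 0.90, 0.60, 3.00, 5.66]   # VFA/TIC stability indicator

print(round(pearson(h2, vfa_tic), 3))
print(round(spearman(h2, vfa_tic), 3))
```

Because Spearman's coefficient depends only on rank order, it is less sensitive to the nonlinear response of a sensor than Pearson's, which is one reason both are typically reported together.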