Publikační činnost Katedry informatiky / Publications of Department of Computer Science (460)
Permanent URI for this collection: http://hdl.handle.net/10084/64750
The collection contains bibliographic records of the publication activity (articles) of academic staff of the Department of Computer Science (460) in journals and in Lecture Notes in Computer Science indexed in Web of Science, from 2003 to the present.
The collection includes:
a) publications whose original documents give VŠB – Technical University of Ostrava (VŠB-TUO) as the author's affiliation (address),
b) publications whose original documents do not give VŠB-TUO in the address, but whose authors demonstrably worked at VŠB-TUO at the time the publications were prepared and published.
The bibliographic records were originally created in the collection Publikační činnost akademických pracovníků VŠB-TUO (Publication activity of VŠB-TUO academic staff), which has tracked the publishing of academic staff since 1990.
Recent Submissions
Towards modeling conceptual graphs and transparent intensional logic (Springer Nature, 2024). Han, Nguyen Van; Vinh, Phan Cong; Duží, Marie.
In this paper, we introduce a graphical method for modeling and reasoning with linguistic expressions. The former is represented by a graph called a conceptual graph, and the latter is carried out by graph transformations. In our conceptual graphs, nodes represent linguistic concepts and edges represent links between these concepts. This model facilitates reasoning with linguistic concepts by making direct consequences easy to infer.

Network embedding based on DepDist contraction (Springer Nature, 2024). Dopater, Emanuel; Ochodková, Eliška; Kudělka, Miloš.
Networks provide an understandable and, in the case of small size, visualizable representation of data, which allows us to obtain essential information about the relationships between pairs of nodes, e.g., their distances. In visualization, networks have an alternative two-dimensional vector representation to which various machine-learning methods can be applied. More generally, networks can be transformed into a low-dimensional space using so-called embedding methods, which bridge the gap between network analysis and traditional machine learning by creating numerical representations that capture the essence of the network structure. In this article, we present a new embedding method that uses non-symmetric dependency to find the distance between nodes and applies an iterative procedure to find a satisfactory distribution of nodes in space. For dimension 2 and the visualization of the result, we demonstrate the method's effectiveness on small networks. For higher dimensions and several larger networks, we present the results of two experiments comparing our results with two well-established methods in the research community, namely node2vec and DeepWalk. The first experiment focuses on a qualitative comparison of the methods, while the second focuses on applying and comparing the classification results on embeddings in a higher dimension. Although the presented method does not outperform the two chosen methods, its results are still comparable. Therefore, we also explain the limitations of our method and a possible way to overcome them.
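The DepDist item above describes the generic embedding pipeline: derive pairwise node distances, then iterate node positions until the layout matches them. Below is a minimal sketch of that pipeline only, substituting shortest-path distances and plain stress-minimizing gradient descent for the paper's non-symmetric dependency measure and its custom iteration (both substitutions are assumptions, not the published method):

```python
# Illustrative sketch: "distances -> low-dimensional coordinates".
# The paper's DepDist measure is replaced by shortest-path distances,
# and its iterative procedure by classic stress-minimising descent.
import numpy as np
import networkx as nx

def embed_by_distances(G, dim=2, iters=2000, lr=0.001, seed=0):
    nodes = list(G.nodes())
    n = len(nodes)
    sp = dict(nx.all_pairs_shortest_path_length(G))
    D = np.array([[sp[u][v] for v in nodes] for u in nodes], float)
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, dim))
    for _ in range(iters):
        diff = X[:, None, :] - X[None, :, :]          # pairwise vectors
        dist = np.linalg.norm(diff, axis=-1) + 1e-9   # current distances
        # Gradient of stress  sum_ij (dist_ij - D_ij)^2  w.r.t. X
        grad = 4 * ((dist - D) / dist)[:, :, None] * diff
        X -= lr * grad.sum(axis=1)
    return dict(zip(nodes, X))

pos = embed_by_distances(nx.karate_club_graph())   # {node: 2-D coordinates}
```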
An optimal standalone wind-photovoltaic power plant system for green hydrogen generation: Case study for hydrogen refueling station (Elsevier, 2024). Rizk-Allah, Rizk M.; Hassan, Islam A.; Snášel, Václav; Hassanien, Aboul Ella.
Sustainability goals include the utilization of renewable energy resources to supply energy needs, in addition to wastewater treatment to satisfy water demand. Moreover, hydrogen has become a promising energy carrier and green fuel to decarbonize the industrial and transportation sectors. In this context, this research investigates a wind-photovoltaic power plant to produce green hydrogen for a hydrogen refueling station and to operate an electrocoagulation water treatment unit in Ostrava, in the northeast region of the Czech Republic. The study conducts a techno-economic analysis with the HOMER Pro® software for optimal sizing of the power station components and to investigate the economic indices of the plant. The power station employs photovoltaic panels and wind turbines to supply the required electricity for electrolyzers and electrocoagulation reactors. As an off-grid system, lead-acid batteries are utilized to store the surplus electricity. Wind speed and solar irradiation are the key site-dependent parameters that determine the cost of hydrogen, electricity, and wastewater treatment. The simulated model considers the capital, operating, and replacement costs for system components. In the proposed system, 240 kg of hydrogen and 720 kWh of electrical energy are required daily for the hydrogen refueling station and the electrocoagulation unit, respectively. Accordingly, the power station annually generates 6,997,990 kWh of electrical energy in addition to 85,595 kg of green hydrogen. Based on the economic analysis, the project's net present cost (NPC) is determined to be 5.49 M and the levelized cost of hydrogen (LCH) is 2.89 /kg, excluding compressor unit costs. This value proves the effectiveness of this power system, which encourages the utilization of green hydrogen for fuel-cell electric vehicles (FCVs). Furthermore, emerging electrocoagulation studies produce hydrogen through wastewater treatment, increasing hydrogen production and lowering the LCH. Therefore, this study provides practicable methodological support for optimal sizing of the power station components, which is beneficial for industrialization and economic development as well as the transition toward sustainability and autonomous energy systems.

Optimizing AVR system performance via a novel cascaded RPIDD2-FOPI controller and QWGBO approach (PLOS, 2024). Ekinci, Serdar; Snášel, Václav; Rizk-Allah, Rizk M.; Izci, Davut; Salman, Mohammad; Youssef, Ahmed A. F.
Maintaining stable voltage levels is essential for power systems' efficiency and reliability. Voltage fluctuations during load changes can lead to equipment damage and costly disruptions. Automatic voltage regulators (AVRs) are traditionally used to address this issue, regulating generator terminal voltage. Despite progress in control methodologies, challenges persist, including robustness and response-time limitations. Therefore, this study introduces a novel approach to AVR control, aiming to enhance robustness and efficiency. A custom optimizer, the quadratic wavelet-enhanced gradient-based optimization (QWGBO) algorithm, is developed. QWGBO refines gradient-based optimization (GBO) by introducing exploration and exploitation improvements. The algorithm integrates quadratic interpolation mutation and a wavelet mutation strategy to enhance search efficiency. Extensive tests using benchmark functions demonstrate QWGBO's effectiveness in optimization. Comparative assessments against existing optimization algorithms and recent techniques confirm QWGBO's superior performance. In AVR control, QWGBO is coupled with a cascaded real proportional-integral-derivative with second-order derivative (RPIDD2) and fractional-order proportional-integral (FOPI) controller, aiming for precision, stability, and quick response. The algorithm's performance is verified through rigorous simulations, emphasizing its effectiveness in optimizing complex engineering problems. Comparative analyses highlight QWGBO's superiority over existing algorithms, positioning it as a promising solution for optimizing power system control and contributing to the advancement of robust and efficient power systems.
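The QWGBO item above follows a common pattern: a derivative-free optimizer tunes controller gains against a simulated closed-loop response. The sketch below shows only that pattern; the first-order plant, the plain PID structure, and the random-search optimizer are stand-ins, not the paper's AVR model, RPIDD2-FOPI controller, or QWGBO:

```python
# Hedged sketch: simulate a closed loop, score the step response (ITAE),
# and let a derivative-free optimizer pick the controller gains.
import numpy as np

def itae_of_gains(gains, dt=0.001, t_end=2.0):
    kp, ki, kd = gains
    y = i_term = y_prev = 0.0
    cost = 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        e = 1.0 - y                     # unit step reference
        i_term += e * dt
        d_term = -(y - y_prev) / dt     # derivative on measurement
        u = kp * e + ki * i_term + kd * d_term
        y_prev = y
        y += dt * (-y + u) / 0.1        # assumed plant 1/(0.1 s + 1), Euler step
        cost += t * abs(e) * dt         # ITAE criterion
    return cost

rng = np.random.default_rng(1)
best, best_cost = None, np.inf
for _ in range(300):                    # plain random search, not QWGBO
    cand = rng.uniform([0.1, 0.1, 0.0], [5.0, 5.0, 0.05])
    c = itae_of_gains(cand)
    if c < best_cost:
        best, best_cost = cand, c
print("gains:", best, "ITAE:", round(best_cost, 4))
```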
Optimized long short-term memory with rough set for sustainable forecasting renewable energy generation (Elsevier, 2024). Sayed, Gehad Ismail; El-Latif, Eman I. Abd; Hassanien, Aboul Ella; Snášel, Václav.
Research and development in the field of renewable energy is receiving more attention as a result of the growing demand for clean, sustainable energy. This paper proposes a model for forecasting renewable energy generation. The proposed model consists of three main phases: data preparation, feature selection based on rough sets and the nutcracker optimization algorithm (NOA), and data classification with cross-validation. First, missing values are tackled using the mean method. Then, data normalization and data shuffling are applied in the data preparation phase. In the second phase, a new feature selection algorithm, RSNOA, is proposed based on rough set theory and the NOA. RSNOA adopts the rough set method as the fitness function during the search to find the optimal feature subset. Finally, a custom long short-term memory architecture with k-fold cross-validation is utilized in the last phase. The experimental results revealed that the proposed model is very competitive: it achieves a root mean square error of 4.2113, an R² of 0.96, a mean absolute error of 2.835, and a mean absolute percentage error of 4.6349. The findings also show that the proposed model holds great promise as a tool for accurately forecasting renewable energy generation across various sources.

Evaluation of performance enhancement in Ethereum fraud detection using oversampling techniques (Elsevier, 2024). Ravindranath, Vaishali; Nallakaruppan, M. K.; Shri, M. Lawanya; Balusamy, Balamurugan; Bhattacharyya, Siddhartha.
With the growing popularity of cryptocurrencies and their decentralized nature, the risk of fraudulent activities within these ecosystems has become a pressing concern. This research paper focuses on Ethereum fraud detection using a dataset specifically curated for this purpose. The methodology encompasses essential steps, including data cleaning, correlation analysis, data splitting, and exploratory data analysis to understand the data characteristics. Subsequently, self-optimized machine learning models are trained with the PyCaret library while addressing the class imbalance using the SMOTENC (Synthetic Minority Oversampling Technique for Nominal and Continuous data), ADASYN (Adaptive Synthetic sampling), and K-Means SMOTE techniques. The performance of the various models is evaluated on test and validation datasets using metrics such as accuracy, precision, recall, and AUC (Area Under the Curve). The study reveals that the ensemble models, particularly CatBoost (Categorical Boosting) and LightGBM (Light Gradient Boosting Machine), show exceptional efficiency, with accuracy ranging from 97% to 98.42% after oversampling. Moreover, these models exhibit higher F1 scores and AUC values, indicating their potential to detect fraud effectively. The validation metrics also lie in the same range, demonstrating that the models do not suffer from overfitting. The experiment demonstrates the promise of ensemble models in Ethereum fraud detection, paving the way for deploying robust fraud detection systems in cryptocurrency ecosystems. The results show that the K-Means SMOTE oversampling technique yields the highest classification accuracy of 98.42%, with an AUC of 99.82%.
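The oversampling comparison in the Ethereum item can be illustrated with the imbalanced-learn library, which implements the named SMOTENC, ADASYN, and K-Means SMOTE variants. A small sketch using ADASYN on synthetic imbalanced data, with a scikit-learn gradient-boosting model standing in for CatBoost/LightGBM; the dataset and model choices are illustrative assumptions, not the paper's pipeline:

```python
# Sketch of the evaluation pattern: measure AUC before and after
# minority-class oversampling on an imbalanced binary problem.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import ADASYN

X, y = make_classification(n_samples=4000, weights=[0.95], flip_y=0.01,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

def auc_with(X_fit, y_fit):
    clf = GradientBoostingClassifier(random_state=0).fit(X_fit, y_fit)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

print("baseline AUC:", round(auc_with(X_tr, y_tr), 4))
X_os, y_os = ADASYN(random_state=0).fit_resample(X_tr, y_tr)  # oversample minority
print("ADASYN   AUC:", round(auc_with(X_os, y_os), 4))
```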
Artificial Protozoa Optimizer (APO): A novel bio-inspired metaheuristic algorithm for engineering optimization (Elsevier, 2024). Wang, Xiaopeng; Snášel, Václav; Mirjalili, Seyedali; Pan, Jeng-Shyang; Kong, Lingping; Shehadeh, Hisham A.
This study proposes a novel artificial protozoa optimizer (APO) that is inspired by protozoa in nature. The APO mimics the survival mechanisms of protozoa by simulating their foraging, dormancy, and reproductive behaviors. The APO was mathematically modeled and implemented to perform the optimization processes of metaheuristic algorithms. The performance of the APO was verified via experimental simulations and compared with 32 state-of-the-art algorithms. The Wilcoxon signed-rank test was performed for pairwise comparisons of the proposed APO with the state-of-the-art algorithms, and the Friedman test was used for multiple comparisons. First, the APO was tested using 12 functions of the 2022 IEEE Congress on Evolutionary Computation benchmark. Considering practicality, the proposed APO was used to solve five popular engineering design problems in a continuous space with constraints. Moreover, the APO was applied to solve a multilevel image segmentation task in a discrete space with constraints. The experiments confirmed that the APO could provide highly competitive results for optimization problems. The source code of the Artificial Protozoa Optimizer is publicly available at https://seyedalimirjalili.com/projects and https://ww2.mathworks.cn/matlabcentral/fileexchange/162656-artificial-protozoa-optimizer.

Analysis on fetal phonocardiography segmentation problem by hybridized classifier (Elsevier, 2024). Kong, Lingping; Barnová, Kateřina; Jaroš, René; Mirjalili, Seyedali; Snášel, Václav; Pan, Jeng-Shyang; Martinek, Radek.
Fetal examinations are a significant and challenging field of healthcare. Cardiotocography is the most commonly used method for monitoring fetal heart rate and uterine contractions. Fetal phonocardiography is beginning to emerge as a promising alternative to cardiotocography: it is an entirely non-invasive, passive, and low-cost method. However, it is difficult to estimate the ideal form of the fetal sound signal in most cases due to the presence of disturbances. The disturbances originate from movements or rotations of the fetal body, making fetal heart sound processing difficult. This study presents an automatic method for segmenting the fetal heart sounds in a phonocardiographic signal that is loaded with different types of disturbances, and analyzes which of these disturbances most affect segmentation accuracy. To provide a comprehensive investigation, we propose a hybrid classifier based on a Transformer and eXtreme Gradient Boosting (XGBoost) to improve segmentation performance through decision-making integration. 2000 segments of data from the Research Resource for Complex Physiologic Signals (PhysioNet) repository and synthetic data (873 recordings) were used for the experiment. On the S1 label, our proposed method ranks first among all compared algorithms in precision, recall, F1, and accuracy score, tying with the Transformer in recall score. It achieves an accuracy increase of 5% and 1.3% compared to XGBoost and the Transformer, respectively. Similarly, on the S2 label, there is a precision score increase of 5.8% and 3.7% compared to XGBoost and the Transformer, respectively. In general, our proposed method shows effective and promising performance.
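The fetal-phonocardiography item hinges on "decision-making integration" of two classifiers. A minimal sketch of that fusion idea via soft voting, with an MLP and gradient boosting as stand-ins for the paper's Transformer and XGBoost (synthetic data; not the published pipeline):

```python
# Sketch of decision-level fusion: average the predicted probabilities
# of two heterogeneous classifiers via scikit-learn's VotingClassifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1500, n_informative=10, random_state=0)
hybrid = VotingClassifier(
    estimators=[("nn", MLPClassifier(max_iter=800, random_state=0)),
                ("gbm", GradientBoostingClassifier(random_state=0))],
    voting="soft")          # soft voting = probability averaging
print("hybrid 5-fold CV accuracy:", cross_val_score(hybrid, X, y, cv=5).mean())
```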
Knowing who occupies an office: purely contingent, necessary and impossible offices (Springer Nature, 2024). Duží, Marie; Číhalová, Martina.
This paper examines different kinds of definite descriptions denoting purely contingent, necessary or impossible objects. The discourse about contingent/impossible/necessary objects can be organised in terms of rational questions to ask and answer relative to the modal profile of the entity in question. There are also limits on what it is rational to know about entities with this or that modal profile. We also examine epistemic modalities; they are the kind of necessity and possibility that is determined by epistemic constraints related to knowledge or rationality. Definite descriptions denote so-called offices, roles, or things to be. We explicate these α-offices as partial functions from possible worlds to chronologies of objects of type α, where α is mostly the type of individuals. Our starting point is Prior's distinction between a 'weak' and a 'strong' definite article 'the'. In both cases, the definite description refers to at most one object; yet in the case of the weak 'the', the referred object can change over time, while in the case of the strong 'the', the object referred to by the definite description is the same forever, once the office has been occupied. The main result we present is a way to obtain Wh-knowledge about who or what plays a given role presented by a hyper-office, i.e. a procedure producing an office. Another no less important result concerns the epistemic necessity of the impossibility of knowing who or what occupies the impossible office presented by a hyper-office.

A hybrid analysis method for calculating the cogging torque of consequent pole hybrid excitation synchronous machine (Emerald Publishing Limited, 2024). Wu, Jie; Wang, Kang; Zhang, Ming; Guo, Leilei; Shen, Yongpeng; Wang, Mingjie; Zhang, Jitao; Snášel, Václav.
Purpose: When solving the cogging torque of complex electromagnetic structures, such as the consequent pole hybrid excitation synchronous (CPHES) machine, traditional methods have a huge computational complexity. The notable feature of the CPHES machine is its symmetric range of field-strengthening and field-weakening, but this type of machine is destined to have a complex electromagnetic structure. The purpose of this paper is to propose a hybrid analysis method to quickly and accurately solve the cogging torque of complex 3D electromagnetic structures, applicable to CPHES machines with different magnetic pole shapings.
Design/methodology/approach: In this paper, a hybrid method for calculating the cogging torque of the CPHES machine is proposed, which considers three commonly used pole shapings. Firstly, through magnetic field analysis, the complex 3D finite element analysis (FEA) is simplified to a 2D field computation. Secondly, a discretization method is used to obtain the distribution of permeance and its differential along the circumference of the air gap, taking into account the effect of slots. Finally, the cogging torque of the whole motor is obtained using modular calculation and the symmetry of the rotor structure.
Findings: This method is applicable to different pole shapings. The experimental results show that the proposed method is consistent with 3D FEA and experimental measurements, and the average calculation time is reduced from 8 h to 4 min.
Originality/value: This paper proposes a new concept for calculating cogging torque: a hybrid calculation combining dimension reduction and discretization modules. Based on magnetic field analysis, the 3D problem is simplified into a 2D one, reducing computational complexity. Based on the symmetry of the machine structure, a modeling method for discretized analytical models is proposed to calculate the cogging torque of the machine.
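The cogging-torque item rests on the virtual-work principle applied to a discretized air gap: stored energy W(θ) = ½ Σ Λᵢ Fᵢ² over circumferential segments, and torque T = −dW/dθ. A numeric sketch of just that principle; the permeance and MMF profiles below are invented placeholders, not the field-analysis results for the CPHES machine:

```python
# Virtual-work sketch: discretize the air gap, sum segment energies,
# and differentiate numerically with respect to rotor angle.
import numpy as np

SEGS, SLOTS, POLES = 360, 12, 10
theta_seg = np.linspace(0, 2 * np.pi, SEGS, endpoint=False)

def airgap_energy(rotor_angle):
    perm = 1.0 + 0.2 * np.cos(SLOTS * theta_seg)           # placeholder permeance
    mmf = np.cos(POLES / 2 * (theta_seg - rotor_angle))    # placeholder MMF
    return 0.5 * np.sum(perm * mmf ** 2)                   # W = 1/2 * sum(perm * mmf^2)

angles = np.linspace(0, 2 * np.pi / SLOTS, 100)   # sweep one slot pitch
d = 1e-5
torque = [-(airgap_energy(a + d) - airgap_energy(a)) / d for a in angles]
print("peak cogging torque (arbitrary units):", max(np.abs(torque)))
```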
Adaptive energy management strategy for solar energy harvesting IoT nodes by evolutionary fuzzy rules (Elsevier, 2024). Prauzek, Michal; Krömer, Pavel; Mikuš, Miroslav; Konečný, Jaromír.
This study explores the integration of genetic programming (GP) and fuzzy logic to enhance control strategies for Internet of Things (IoT) nodes across varied locations. A novel methodology is introduced for designing a fuzzy-based energy management controller that autonomously determines the most suitable controller structure and inputs. This approach is evaluated using a solar-harvesting IoT model that leverages historical solar irradiance data, highlighting the methodology's potential for diverse geographical applications and compatibility with low-performance microcontrollers. The findings demonstrate that the integration of GP with the designed fitness function enables dynamic learning and adaptation of control strategies, optimizing system behavior based on historical data. The experimental model showcases an ability to efficiently use historical datasets to derive optimal control strategies, with the fitness metric indicating consistent improvement throughout the learning phase. The results indicate that useful control strategies learned at one location may outperform locally trained control strategies and can be successfully re-applied at other locations.

Hybrid optimization algorithm for handwritten document enhancement (Tech Science Press, 2024). Chu, Shu-Chuan; Yang, Xiaomeng; Zhang, Li; Snášel, Václav; Pan, Jeng-Shyang.
The Gannet Optimization Algorithm (GOA) and the Whale Optimization Algorithm (WOA) demonstrate strong performance; however, there remains room for improvement in convergence and practical applications. This study introduces a hybrid optimization algorithm, named the adaptive inertia weight whale optimization algorithm and gannet optimization algorithm (AIWGOA), which addresses challenges in enhancing handwritten documents. The hybrid strategy integrates the strengths of both algorithms, significantly enhancing their capabilities, whereas the adaptive parameter strategy mitigates the need for manual parameter setting. By amalgamating the hybrid strategy and the parameter-adaptive approach, the Gannet Optimization Algorithm was refined to yield the AIWGOA. Through a performance analysis on the CEC2013 benchmark, the AIWGOA demonstrates notable advantages across various metrics. Subsequently, an evaluation index was employed to assess the enhanced handwritten documents and images, affirming the superior practical applicability of the AIWGOA compared with other algorithms.
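The AIWGOA item combines two search operators under an adaptive inertia weight. A generic sketch of those ingredients (a move toward the best-known solution, a random perturbation, and a decaying weight) on a toy objective; this illustrates the pattern only, not the published hybrid of GOA and WOA:

```python
# Generic population search with an adaptive (decaying) inertia weight.
import numpy as np

def sphere(x):                                   # toy objective to minimise
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(20, 10))
fit = np.array([sphere(x) for x in pop])
for it in range(200):
    w = 0.9 - 0.5 * it / 200                     # adaptive inertia weight
    best = pop[fit.argmin()]
    for i in range(len(pop)):
        if rng.random() < 0.5:                   # exploit: move toward best
            cand = pop[i] + w * (best - pop[i]) + 0.1 * rng.normal(size=10)
        else:                                    # explore: random perturbation
            cand = pop[i] + w * rng.normal(size=10)
        c = sphere(cand)
        if c < fit[i]:                           # greedy replacement
            pop[i], fit[i] = cand, c
print("best value found:", fit.min())
```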
Multi-objective optimization model for railway heavy-haul traffic: Addressing carbon emissions reduction and transport efficiency improvement (Elsevier, 2024). Tian, Ai-Qing; Wang, Xiao-Yang; Xu, Heying; Pan, Jeng-Shyang; Snášel, Václav; Lv, Hong-Xia.
This paper establishes a multi-objective optimization model for railway heavy-haul trains, focusing on reducing carbon emissions and improving transport efficiency. The model integrates optimization of the route and the vehicle load rate, significantly reducing carbon emissions and enhancing transport efficiency. It addresses the challenges and characteristics of heavy-haul trains, introducing multi-objective optimization problems related to transport carbon emissions and efficiency. Using a pigeon-inspired optimization algorithm, the model considers joint constraints between the carbon emission and transport efficiency objectives. To overcome challenges in multi-objective transportation problems, the paper proposes a forward-learning pigeon-inspired optimization algorithm based on a surrogate-assisted model. This approach estimates the quality of candidate solutions using a surrogate model, reducing time costs. The algorithm employs a forward-learning strategy to enhance learning from non-dominated solutions. Experimental validation with benchmark functions confirms the effectiveness of the model and offers optimized solutions. The proposed method reduces carbon emissions while maintaining transport efficiency, contributing innovative ideas for the development of sustainable heavy-haul trains.

Novel lossy compression method of noisy time series data with anomalies: Application to partial discharge monitoring in overhead power lines (Elsevier, 2024). Klein, Lukáš; Dvorský, Jiří; Seidl, David; Prokop, Lukáš.
In overhead power transmission lines, particularly in regions like natural parks where establishing a safe zone is difficult, the adoption of cross-linked polyethylene insulated covered conductors (CCs) helps prevent outages due to vegetation contact. However, these CCs are susceptible to partial discharge (PD) activity, which can degrade insulation and lead to system failures. Detecting and analyzing PD are essential for maintaining power system reliability and safety. A key challenge in PD monitoring is transmitting the large volumes of PD signal data over unreliable 2G networks, as existing compression methods either compromise data integrity or are ineffective. This paper introduces a novel lossy compression technique utilizing an autoencoder with skip connections and correction data to address this issue. Unlike previous algorithms that struggle with noisy time series data and fail to preserve crucial anomaly information, our method reconstructs the signal without anomalies, which are subsequently restored using correction data. Achieving a compression factor of about 25 (reducing data to 4.1% of its original size), this approach maintains the essential PD signal features for analysis. The effectiveness of our method is validated by three classification algorithms, showing promise for future fault detection, diagnosis, and memory space reduction. This innovative compression solution marks a significant advancement in PD data processing, offering a balanced trade-off between compression efficiency and data fidelity, and paving the way for enhanced remote monitoring in power transmission systems.
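The compression item's key idea is a lossy base codec plus sparse correction data for the anomalies (PD pulses) the codec loses. A concept sketch with a crude decimate/interpolate codec standing in for the paper's skip-connection autoencoder; the threshold and sizes are arbitrary assumptions:

```python
# Concept sketch: lossy base reconstruction + exact sparse corrections
# for the samples (anomalies) the lossy codec reconstructs badly.
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(0, 0.1, 10_000)
signal[rng.choice(10_000, 20)] += 3.0             # synthetic PD-like spikes

factor = 25                                       # decimation factor
coarse = signal[::factor]                         # lossy "encoder"
recon = np.interp(np.arange(signal.size),
                  np.arange(0, signal.size, factor), coarse)
err = signal - recon
idx = np.flatnonzero(np.abs(err) > 1.0)           # anomalies the codec lost
recon[idx] = signal[idx]                          # restore anomalies exactly
stored = coarse.size + 2 * idx.size               # coarse values + (index, value) pairs
print("compression factor ~", round(signal.size / stored, 1),
      "| max abs error:", np.abs(signal - recon).max().round(3))
```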
Application of Instrumented Indentation Test and Neural Networks to determine the constitutive model of in-situ austenitic stainless steel components (Springer Nature, 2024). Ma, Quoc-Phu; Basterrech, Sebastian; Halama, Radim; Omacht, Daniel; Měsíček, Jakub; Hajnyš, Jiří; Platoš, Jan; Petrů, Jana.
Over the last few decades, the Instrumented Indentation Test (IIT) has evolved into a versatile and convenient method for assessing the mechanical properties of metals. Unlike conventional hardness tests, IIT allows for incremental control of the indenter based on depth or force, enabling the measurement of not only hardness but also tensile properties, fracture toughness, and welding residual stress. Two crucial measures in IIT are the reaction force (F) exerted by the tested material on the indenter and the depth of the indenter (D). Evaluation of the mentioned properties from F-D curves typically involves complex analytical formulas that restrict the application of IIT to a limited group of materials. Moreover, for soft materials such as austenitic stainless steel SS304L, with excessive pile-up/sink-in behavior, conducting IIT becomes challenging due to improper evaluation of the imprint depth. In this work, we propose a systematic procedure to replace complex analytical evaluations of IIT and expensive physical measurements. The proposed approach is based on the well-known potential of Neural Networks (NN) for data-driven modeling. We carried out physical IIT and tensile tests on samples prepared from SS304L. In addition, we generated multiple configurations of material properties and simulated the corresponding IITs using the Finite Element Method (FEM). The information provided by the physical tests and the simulated data from FEM are integrated into an NN to produce a parametric mapping that can predict the parameters of a constitutive model from any given F-D curve. Our physical and numerical experiments successfully demonstrate the potential of the proposed approach.

Surrogate-assisted sine Phasmatodea population evolution algorithm applied to 3D coverage of mobile nodes (Springer Nature, 2024). Chu, Shu-Chuan; Liang, LuLu; Pan, Jeng-Shyang; Kong, LingPing; Zhao, Jia.
Deploying static wireless sensor nodes is prone to network coverage gaps, resulting in poor network coverage. In this paper, an attempt is made to improve the network coverage by moving the locations of the nodes. A surrogate-assisted sine Phasmatodea population evolution algorithm (SASPPE) is used to evaluate the network coverage. A 50 × 50 hill simulation environment was tested with 30 and 40 nodes and radii of 3, 5 and 7, respectively. The results show that the SASPPE algorithm has the highest coverage, which can be up to 23.624% higher than the PPE algorithm, and up to 5.196% higher than the PPE algorithm, ceteris paribus. The SASPPE algorithm mixes the GSAM with LSAMs, which balances the computational cost of the algorithm against its ability to find optimal results. The use of hierarchical clustering enhances the stability of the LSAMs.
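The indentation item trains a network on FEM-simulated force-depth (F-D) curves to invert them into constitutive parameters. A sketch of that inversion with scikit-learn; the power-law "simulator" below is a fabricated toy relation standing in for the FEM model, and the two parameters are generic placeholders:

```python
# Sketch of data-driven inversion: simulated F-D curves -> material parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
depth = np.linspace(0.01, 1.0, 50)
params = rng.uniform([100, 0.05], [300, 0.5], size=(2000, 2))  # toy (E, n) pairs
# Toy stand-in for the FEM simulator: F(D) = E * D^(1+n) plus noise.
curves = params[:, :1] * depth ** (1 + params[:, 1:]) \
         + rng.normal(0, 0.5, (2000, 50))

X_tr, X_te, y_tr, y_te = train_test_split(curves, params, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)   # curve -> parameters
print("R^2 on held-out curves:", round(net.score(X_te, y_te), 3))
```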
In addition, LSAMs easily fall into local optima when they are modeled with local data, and the use of the sine Phasmatodea population evolution algorithm (Sine-PPE) for searching in the LSAMs reduces the time the algorithm spends trapped in local optima. The proposed algorithm was tested on 7 test functions in 30, 50, and 100 dimensions. The results show that the algorithm has significant advantages on most functions.

Refined sinh cosh optimizer tuned controller design for enhanced stability of automatic voltage regulation (Springer Nature, 2024). Izci, Davut; Rizk-Allah, Rizk M.; Snášel, Václav; Ekinci, Serdar; Migdady, Hazem; Daoud, Mohammad Sh.; Altalhi, Maryam; Abualigah, Laith.
The efficacy of automatic voltage regulator (AVR) systems is contingent on crucial parameters like voltage regulation, response time, stability, and efficiency. Integration of controllers with AVR systems facilitates centralized monitoring and regulation, enhancing voltage output efficiency. This study employs a modified sinh cosh optimizer (m-SCHO) and a modified time-domain-metrics-based objective function to fine-tune a fractional-order proportional-integral-derivative with double derivative (FOPIDD2) controller tailored for AVR system control. The m-SCHO is strengthened with an adaptive local search mechanism and an experience-based perturbed learning strategy, which enhance solution diversity and navigational efficacy, leading to improved optimization quality. This investigation illustrates the superior performance of the m-SCHO-based FOPIDD2 controller in addressing the multifaceted challenges of AVR control, surpassing other techniques in stability, speed of response, robustness, and efficiency. To validate the method's efficacy, a comparative analysis is conducted using existing controllers with various tuning algorithms. Results indicate that the proposed m-SCHO-based FOPIDD2 controller achieves superior performance metrics, showcasing its capability. The study extends its scope by considering nineteen different controllers reported in the literature for a comprehensive comparison, in which the proposed controller also exhibits the best stability, further affirming its effectiveness.

Impossibilities without impossibilia (Taylor & Francis, 2024). Jespersen, Bjørn; Duží, Marie; Carrara, Massimiliano.
Circumstantialists already have a logical semantics for impossibilities. They expand their logical space of possible worlds by adding impossible worlds. These are impossible circumstances serving as indices of evaluation, at which impossibilities are true. A variant of circumstantialism, namely modal Meinongianism (noneism), adds impossible objects as well. These are so-called incomplete objects that are necessarily non-existent. The opposite of circumstantialism, namely structuralism, has some catching-up to do. What might a structuralist logical semantics for impossibilities without impossibilia look like? This paper makes a structuralist counterproposal. We present a semantics based on a procedural interpretation of the typed lambda-calculus. The fundamental idea is that talk about impossibilities should be construed in terms of procedures: some yield as their product a condition that could not possibly have a satisfier, while the rest fail to yield a product altogether. Dispensing with a 'bottom' of impossibilia requires instead a 'top' consisting of structured hyperintensions, intensions, intensions defining other intensions, a typed universe, and dual (de dicto and de re) predication.
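The m-SCHO item above tunes a FOPIDD2 controller, whose structure the abstract does not spell out. Assuming the usual fractional-order PID extended with a second fractional derivative term, a plausible (not confirmed) general form is:

```latex
% Assumed general form of a FOPIDD2 controller: a standard fractional-order
% PID plus a second fractional derivative term.  The gains K_* and the
% orders \lambda, \mu_1, \mu_2 are generic symbols, not the paper's values.
C(s) \;=\; K_p \;+\; \frac{K_i}{s^{\lambda}} \;+\; K_{d1}\, s^{\mu_1} \;+\; K_{d2}\, s^{\mu_2},
\qquad \lambda,\ \mu_1,\ \mu_2 \in (0, 2)
```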
We explain how the theory works by going through several examples.

An Explainable AI framework for credit evaluation and analysis (Elsevier, 2024). Nallakaruppan, M. K.; Balusamy, Balamurugan; Shri, M. Lawanya; Malathi, V.; Bhattacharyya, Siddhartha.
A loan facility is a profitable venture for the banking industry and can render great financial support to the beneficiary. Global banking systems with secured private clouds make the service reachable around the world, around the clock. Loan acceptance and disbursal are governed by the protocols of the banks with the highest degree of privacy and integrity. As per the report of Experian, the loan acceptance rate of the banks has been reduced to 61%-70%, and it is further reduced to 50% post-pandemic, since there is a huge financial setback and a higher rate of defaulters. The banks are not in a position to explain the reasoning behind a rejection, since the rejection further diminishes the customer's credit score. With the parallel improvements in Industry 5.0, futuristic banking could evolve around a Non-Fungible Token (NFT) integrated Explainable AI (XAI) framework which can interact with the customer through a human-machine interface in the metaverse. For such a system, the proposed work could be a driving application, providing explanations for loan rejection with a Random Forest integrated XAI framework that gives the reasons for acceptance and rejection of the loan. The proposed Random Forest-based approach rendered the highest accuracy, sensitivity and specificity of 0.998, 0.998, and 0.997, respectively. The LIME and SHAP explainers provide explanations with local and global surrogates of various parameters on the features.

Unraveling human social behavior motivations via inverse reinforcement learning-based link prediction (Springer Nature, 2024). Jiang, Xin; Liu, Hongbo; Yang, Liping; Zhang, Bo; Ward, Tomas E.; Snášel, Václav.
Link prediction aims to capture the evolution of network structure, especially in real social networks, which is conducive to friend recommendation, human contact trajectory simulation, and more. However, the stochastic social behaviors and the unstable space-time distribution in such networks often lead to unexplainable and inaccurate link predictions. Therefore, taking inspiration from the success of imitation learning in simulating human driver behavior, we propose a dynamic network link prediction method based on inverse reinforcement learning (DN-IRL) to unravel the motivations behind social behaviors in social networks. Specifically, the historical social behaviors (link sequences) and the next behavior (a single link) are regarded as the current environmental state and the action taken by the agent, respectively. Subsequently, the reward function, which is designed to maximize the cumulative expected reward from expert behaviors in the raw data, is optimized and utilized to learn the agent's social policy. Furthermore, our approach incorporates neighborhood-structure-based node embedding and self-attention modules, enabling sensitivity to network structure and traceability of predicted links. Experimental results on real-world dynamic social networks demonstrate that DN-IRL achieves more accurate and explainable predictions compared to the baselines.
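The DN-IRL item learns link-prediction policies from behavior sequences; classical neighborhood heuristics are the kind of non-learned baselines such methods are compared against. A baseline-only sketch using NetworkX's Adamic-Adar index (illustrative, unrelated to the paper's IRL machinery):

```python
# Baseline link prediction: rank all non-edges by the Adamic-Adar index
# (a neighbourhood-overlap heuristic) and report the top candidates.
import networkx as nx

G = nx.karate_club_graph()
candidates = list(nx.non_edges(G))
scored = sorted(nx.adamic_adar_index(G, candidates),
                key=lambda t: t[2], reverse=True)
for u, v, score in scored[:5]:        # top-5 predicted future links
    print(f"predict link {u}-{v}  (Adamic-Adar = {score:.3f})")
```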