
dc.contributor.author  Ojha, Varun Kumar
dc.contributor.author  Abraham, Ajith
dc.contributor.author  Snášel, Václav
dc.date.accessioned  2017-04-28T12:12:50Z
dc.date.available  2017-04-28T12:12:50Z
dc.date.issued  2017
dc.identifier.citation  Engineering Applications of Artificial Intelligence. 2017, vol. 60, p. 97-116.
dc.identifier.issn  0952-1976
dc.identifier.issn  1873-6769
dc.identifier.uri  http://hdl.handle.net/10084/117033
dc.description.abstract  Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners across multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that emerged from FNN optimization practices, such as evolving neural networks (NN), cooperative coevolution NN, complex-valued NN, deep learning, extreme learning machines, quantum NN, etc. Additionally, it provides interesting research challenges for future research to cope with the present information-processing era.
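
The abstract contrasts gradient-based training (backpropagation) with metaheuristic weight optimization. The following minimal sketch illustrates the general idea only; it is not code from the reviewed article. It applies a simple (mu + lambda) evolution strategy to the flat weight vector of an assumed 2-4-1 feedforward network on the XOR toy problem; the network shape, population sizes, mutation step, and generation count are illustrative assumptions.

# Illustrative sketch (not from the article): metaheuristic optimization of FNN
# weights with a (mu + lambda) evolution strategy instead of backpropagation.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

N_HIDDEN = 4
N_WEIGHTS = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1  # W1, b1, W2, b2 flattened

def forward(w, x):
    """Evaluate the 2-4-1 FNN encoded by the flat weight vector w."""
    W1 = w[:2 * N_HIDDEN].reshape(2, N_HIDDEN)
    b1 = w[2 * N_HIDDEN:3 * N_HIDDEN]
    W2 = w[3 * N_HIDDEN:4 * N_HIDDEN]
    b2 = w[-1]
    h = np.tanh(x @ W1 + b1)                       # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output

def fitness(w):
    """Mean squared error over the training set (lower is better)."""
    return float(np.mean((forward(w, X) - y) ** 2))

# (mu + lambda) evolution strategy on the flat weight vector.
MU, LAMBDA, SIGMA, GENERATIONS = 10, 40, 0.3, 300
parents = [rng.normal(0, 1, N_WEIGHTS) for _ in range(MU)]

for gen in range(GENERATIONS):
    # Each offspring mutates a randomly chosen parent with Gaussian noise.
    offspring = [parents[rng.integers(MU)] + rng.normal(0, SIGMA, N_WEIGHTS)
                 for _ in range(LAMBDA)]
    # Elitist survival: keep the MU best individuals from parents + offspring.
    pool = parents + offspring
    pool.sort(key=fitness)
    parents = pool[:MU]

best = parents[0]
print("final MSE:", fitness(best))
print("predictions:", np.round(forward(best, X), 3))

Because the fitness function only requires forward passes, the same loop works for non-differentiable activations or error measures, which is one motivation for metaheuristic FNN training mentioned in the abstract.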
dc.language.iso  en
dc.publisher  Elsevier
dc.relation.ispartofseries  Engineering Applications of Artificial Intelligence
dc.relation.uri  https://doi.org/10.1016/j.engappai.2017.01.013
dc.rights  © 2017 Elsevier Ltd. All rights reserved.
dc.subject  feedforward neural network
dc.subject  metaheuristics
dc.subject  nature-inspired algorithms
dc.subject  multiobjective
dc.subject  ensemble
dc.title  Metaheuristic design of feedforward neural networks: A review of two decades of research
dc.type  article
dc.identifier.doi  10.1016/j.engappai.2017.01.013
dc.type.status  Peer-reviewed
dc.description.source  Web of Science
dc.description.volume  60
dc.description.lastpage  116
dc.description.firstpage  97
dc.identifier.wos  000396971500009


Files in this item

There are no files associated with this item.
