dc.contributor.author | Farkhodov, Khurshedjon | |
dc.contributor.author | Lee, Suk-Hwan | |
dc.contributor.author | Platoš, Jan | |
dc.contributor.author | Kwon, Ki-Ryong | |
dc.date.accessioned | 2024-04-18T10:28:28Z | |
dc.date.available | 2024-04-18T10:28:28Z | |
dc.date.issued | 2023 | |
dc.identifier.citation | IEEE Access. 2023, vol. 11, p. 124129-124138. | cs |
dc.identifier.issn | 2169-3536 | |
dc.identifier.uri | http://hdl.handle.net/10084/152534 | |
dc.description.abstract | The recent development of object-tracking frameworks has improved the performance of
many manufacturing and industrial services, such as product delivery, autonomous driving systems, security
systems, the military, transportation and retail, smart cities, healthcare systems, and agriculture.
However, achieving accurate results under real physical environments and conditions remains quite
challenging for object tracking. The process can instead be studied using simulation techniques or platforms
to evaluate a model's performance under different simulated conditions and weather changes.
This paper presents a target-tracking approach based on reinforcement learning
integrated with TensorFlow-Agents (tf-agent) to accomplish the tracking process in AirSim Blocks, a
simulation platform built on the Unreal game engine. The productivity of these platforms becomes apparent
when experimenting in virtual-reality conditions with virtual drone agents and performing fine-tuning to
achieve the desired performance. In this paper, the tf-agent drone learns to track an object through a deep
reinforcement learning process that controls actions, states, and tracking by receiving sequential frames
from a simple Blocks environment. The tf-agent model is trained in the AirSim Blocks environment to
adapt to the environment and its existing objects, and is then tested and
evaluated for tracking accuracy and speed. We tested and compared two approaches, DQN
and PPO trackers, and report results in terms of stability, rewards, and numerical performance. | cs
dc.language.iso | en | cs |
dc.publisher | IEEE | cs |
dc.relation.ispartofseries | IEEE Access | cs |
dc.relation.uri | https://doi.org/10.1109/ACCESS.2023.3325062 | cs |
dc.rights | © 2023 The Authors. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. | cs |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | cs |
dc.subject | object tracking | cs |
dc.subject | object detection | cs |
dc.subject | reinforcement learning | cs |
dc.subject | AirSim | cs |
dc.subject | virtual environment | cs |
dc.subject | virtual simulation | cs |
dc.subject | tf-agent | cs |
dc.subject | unreal game engine | cs |
dc.title | Deep reinforcement learning Tf-Agent-based object tracking with virtual autonomous drone in a game engine | cs |
dc.type | article | cs |
dc.identifier.doi | 10.1109/ACCESS.2023.3325062 | |
dc.rights.access | openAccess | cs |
dc.type.version | publishedVersion | cs |
dc.type.status | Peer-reviewed | cs |
dc.description.source | Web of Science | cs |
dc.description.volume | 11 | cs |
dc.description.lastpage | 124138 | cs |
dc.description.firstpage | 124129 | cs |
dc.identifier.wos | 001104556800001 | |