
Article

Title

A modified random network distillation algorithm and its application in USVs naval battle simulation

Authors

[ 1 ] Institute of Robotics and Machine Intelligence, Faculty of Automation, Robotics and Electrical Engineering, Poznan University of Technology | [ P ] employee

Scientific discipline (Law 2.0)

[2.2] Automation, electronics, electrical engineering and space technology

Year of publication

2022

Published in

Ocean Engineering

Journal year: 2022 | Journal volume: vol. 261

Article type

scientific article

Publication language

English

Keywords
EN
  • unmanned surface vessel
  • reinforcement learning
  • sparse reward
  • naval battle simulation

Abstract

EN Unmanned surface vessel (USV) operations will profoundly change the future form of maritime warfare, and one of the critical factors for victory is the cluster intelligence of USVs. Training USVs for combat using reinforcement learning (RL) is an important research direction. Sparse reward, one of the difficult problems in reinforcement learning, makes USV training sluggish and inefficient. Therefore, a modified random network distillation (MRND) algorithm is proposed for the sparse reward problem. This algorithm sets the weight of internal rewards by calculating the variance of the number of training steps in each training episode, so that internal and external rewards are balanced dynamically. Through self-play iterative training, the algorithm, in conjunction with the classical proximal policy optimization (PPO) algorithm, can improve USV cluster intelligence rapidly. Based on USV cluster combat training environments constructed on the Unity3D and ML-Agents Toolkit platform, three types of USV cluster combat simulations are conducted to validate the algorithm: a target pursuit combat simulation, a USV cluster maritime combat simulation, and a USV cluster base offense and defense combat simulation. Simulation experiments show that USV clusters trained with the MRND algorithm converge more quickly, acquire more rewards in fewer steps, and exhibit a higher level of intelligence than USV clusters trained with the comparison algorithms.
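The core idea described in the abstract — a random network distillation intrinsic reward whose weight is driven by the variance of per-episode step counts — can be sketched as follows. This is a minimal illustrative sketch, not the paper's published equations: the linear "networks", the function names, and the exact weighting rule (variance mapped into [0, 1)) are all assumptions made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, EMB_DIM = 8, 4

# RND uses two networks: a fixed random "target" f(s), never trained,
# and a "predictor" trained to match it. Here both are plain linear
# maps so the sketch stays self-contained.
W_target = rng.normal(size=(OBS_DIM, EMB_DIM))  # frozen
W_pred = rng.normal(size=(OBS_DIM, EMB_DIM))    # would be trained in full RND

def intrinsic_reward(obs):
    """Prediction error of the predictor against the fixed target:
    large on rarely visited states, small on familiar ones."""
    return float(np.mean((obs @ W_target - obs @ W_pred) ** 2))

def intrinsic_weight(episode_steps, base=1.0):
    """Illustrative weighting rule: scale the intrinsic term by the
    variance of recent episode lengths. High variance suggests training
    is still unstable, so the exploration bonus is kept large; the
    var/(1+var) squashing keeps the weight in [0, base)."""
    var = float(np.var(episode_steps))
    return base * var / (1.0 + var)

def combined_reward(r_ext, obs, episode_steps):
    """External (environment) reward plus the weighted intrinsic bonus."""
    return r_ext + intrinsic_weight(episode_steps) * intrinsic_reward(obs)

# Usage: steps per recent episode vary a lot, so the bonus stays active.
obs = rng.normal(size=OBS_DIM)
steps_history = [180, 240, 95, 310, 150]
r = combined_reward(r_ext=0.0, obs=obs, episode_steps=steps_history)
```

In a full setup, `combined_reward` would replace the raw environment reward fed to PPO, and the predictor would be updated by gradient descent on the same prediction error each step.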

Date of online publication

07.08.2022

Pages (from - to)

112147-1 - 112147-15

DOI

10.1016/j.oceaneng.2022.112147

URL

https://doi.org/10.1016/j.oceaneng.2022.112147

Comments

Article number: 112147

Ministry points / journal

140

Impact Factor

5
