Article

Title

Reward Function and Configuration Parameters in Machine Learning of a Four-Legged Walking Robot

Authors

[ 1 ] Instytut Technologii Mechanicznej, Wydział Inżynierii Mechanicznej, Politechnika Poznańska | [ P ] employee

Scientific discipline (Law 2.0)

[2.9] Mechanical engineering

Year of publication

2023

Published in

Applied Sciences

Journal year: 2023 | Journal volume: vol. 13 | Journal number: iss. 18

Article type

scientific article

Publication language

English

Keywords
EN
  • walking robot
  • quadruped
  • artificial neural network
  • reinforcement learning
  • robots
  • Unity
  • ML-Agents
  • ML-Agents toolkit
  • Crawler
  • reward function
  • configuration parameters
Abstract

EN Walking robots are becoming increasingly popular and are used in a growing number of industries. The ability to traverse challenging terrain is one of their advantages over other types of robots, but they also require more intricate control mechanisms. One way to simplify this problem is to apply artificial intelligence in the form of reinforcement learning. The reward function is one of the conditions that govern how learning proceeds, determining which actions the agent is willing to take based on the collected data. Another aspect to consider is the set of predetermined values in the configuration file, which describe the course of the training. Tuning them correctly is crucial for achieving satisfactory results in the teaching process. The initial phase of the investigation involved assessing the currently prevalent kinematic layouts of walking robots; based on this evaluation, the most suitable design was selected. Subsequently, the Unity3D development environment was configured with the ML-Agents toolkit, which supports machine learning. During the experiment, the impact of the values defined in the configuration file and of the form of the reward function on the course of training was examined. Movement algorithms were developed for the various learning modifications using artificial neural networks.
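The abstract refers to shaping the reward function and tuning the training configuration but does not reproduce either, and the record contains no code. As a rough illustration only, the sketch below shows one common way a per-step reward for a quadruped walker can be shaped (a velocity-matching term multiplied by a heading-alignment term, in the spirit of the ML-Agents Crawler example), together with a dictionary standing in for typical PPO-style training parameters. All function names, parameter names, and numeric values here are assumptions made for this example, not the ones used in the article.

```python
import numpy as np

# Illustrative PPO-style training parameters (hypothetical values, not the
# article's configuration file): batch size, learning rate, discount factor, etc.
TRAINING_CONFIG = {
    "batch_size": 2048,
    "buffer_size": 20480,
    "learning_rate": 3e-4,
    "gamma": 0.995,        # reward discount factor
    "max_steps": 5_000_000,
    "hidden_units": 512,
    "num_layers": 3,
}

def step_reward(body_velocity, target_velocity, body_forward, target_direction):
    """Per-step shaped reward for a walking agent (hypothetical example).

    Combines two normalized terms in [0, 1]:
      * how closely the body's velocity matches the commanded target velocity,
      * how well the body's forward axis is aligned with the target direction.
    Multiplying them forces the agent to satisfy both goals at once.
    """
    target_speed = np.linalg.norm(target_velocity) + 1e-8
    # Velocity-matching term: 1 when velocities coincide, decaying with the error.
    vel_error = np.linalg.norm(np.asarray(body_velocity) - np.asarray(target_velocity))
    matching_velocity = np.clip(1.0 - vel_error / target_speed, 0.0, 1.0)
    # Heading term: map the cosine between the forward axis and the target direction to [0, 1].
    cos_angle = float(np.dot(body_forward, target_direction) /
                      (np.linalg.norm(body_forward) * np.linalg.norm(target_direction) + 1e-8))
    look_at_target = 0.5 * (cos_angle + 1.0)
    return matching_velocity * look_at_target

# Example: moving at 0.8 m/s toward a 1 m/s target while facing it almost head-on.
reward = step_reward(body_velocity=[0.8, 0.0, 0.0],
                     target_velocity=[1.0, 0.0, 0.0],
                     body_forward=[0.95, 0.0, 0.31],
                     target_direction=[1.0, 0.0, 0.0])
print(f"per-step reward ≈ {reward:.3f}")
```

In an actual ML-Agents workflow the per-step reward would be accumulated from the C# Agent side (e.g., via AddReward) and the training parameters would live in a YAML trainer configuration; the Python dictionary above only stands in for such a file for readability.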

Pages (from - to)

10298-1 - 10298-20

DOI

10.3390/app131810298

URL

https://www.mdpi.com/2076-3417/13/18/10298

License type

CC BY (attribution alone)

Open Access Mode

open journal

Open Access Text Version

final published version

Date of Open Access to the publication

at the time of publication

Ministry points / journal

100

Impact Factor

2.5
