Performance Analysis of Reward Function Variations in the Deep Q-Network Method for Autonomous Car Simulation

Afandy, Achmad (2021) Performance Analysis of Reward Function Variations in the Deep Q-Network Method for Autonomous Car Simulation. Undergraduate thesis, Institut Teknologi Sepuluh Nopember.

Buku Ta afandy fix.pdf - Accepted Version
Restricted to Repository staff only until 1 October 2023.

Abstract

Self Driving Cars (SDC) are among the innovations most closely watched by researchers and investors worldwide today. SDC development is closely tied to Reinforcement Learning (RL) methods, and RL for SDC is largely developed through simulation. One RL parameter that plays an important role in an autonomous car's selection of optimal actions is the reward function. This study therefore aims to analyze variants of the reward function in an RL method for SDC simulation. It uses the Deep Q-Network (DQN) as the RL method because DQN centers on the Q function, which depends directly on the reward function. Specifically, the study defines a non-linear reward function to describe the car-overtaking condition more accurately. To evaluate the reward-function variants, the study simulates an autonomous car in Pygame on a toll road with high vehicle density. Simulation results show that the defined non-linear reward function successfully steers SDC driving behavior toward that of a human driver.
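
This record does not reproduce the thesis's actual reward definitions, so the sketch below is illustrative only: a hypothetical non-linear reward (an assumed exponential proximity penalty plus a speed bonus) fed into the standard DQN target y = r + gamma * max_a' Q(s', a'), which is the point where the reward function shapes the learned policy. All function names and coefficients here are assumptions, not the thesis's formulation.

    import math

    # Hypothetical non-linear reward for a highway overtaking state.
    # The exponential proximity penalty and the speed bonus are
    # illustrative assumptions, not the thesis's reward variants.
    def nonlinear_reward(speed, gap_to_lead, collided, speed_limit=30.0):
        if collided:
            return -100.0  # large terminal penalty for a crash
        speed_term = speed / speed_limit                    # rewards making progress
        proximity_penalty = math.exp(-gap_to_lead / 10.0)   # grows sharply near the lead car
        return speed_term - proximity_penalty

    # Standard DQN target y = r + gamma * max_a' Q(s', a'): the reward
    # enters the Q update directly, which is why changing the reward
    # function changes the learned driving behavior.
    def dqn_target(reward, next_q_values, gamma=0.99, terminal=False):
        if terminal:
            return reward
        return reward + gamma * max(next_q_values)

    # Example: moderate speed, 8 m gap to the lead vehicle, no collision.
    r = nonlinear_reward(speed=25.0, gap_to_lead=8.0, collided=False)
    y = dqn_target(r, next_q_values=[0.4, 1.2, 0.7])
    print(round(r, 3), round(y, 3))

Because the proximity penalty is non-linear, small gaps to the lead vehicle are punished disproportionately, which is one plausible way a reward can encode a safe-overtaking condition.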

Item Type: Thesis (Undergraduate)
Uncontrolled Keywords: Self Driving Cars, Deep Q-Network, Comparison of Reward Functions.
Subjects: Q Science > QA Mathematics > QA336 Artificial Intelligence
Q Science > QA Mathematics > QA76.87 Neural networks (Computer Science)
T Technology > TL Motor vehicles. Aeronautics. Astronautics > TL152.8 Vehicles, Remotely piloted. Autonomous vehicles.
Divisions: Faculty of Science and Data Analytics (SCIENTICS) > Mathematics > 44201-(S1) Undergraduate Thesis
Depositing User: Achmad Afandy
Date Deposited: 27 Aug 2021 02:15
Last Modified: 27 Aug 2021 02:15
URI: http://repository.its.ac.id/id/eprint/90278
