Performance Comparison of Multidimensional Measurement Methods Using a Camera and Sensor Fusion on Moving Vehicles

Dzikra, Azka Ammar (2025) Performance Comparison of Multidimensional Measurement Methods Using a Camera and Sensor Fusion on Moving Vehicles. Diploma thesis, Institut Teknologi Sepuluh Nopember.

Text: 5009211049-Undergraduate _Thesis.pdf (4MB), restricted to Repository staff only; available on request.

Abstract

Technology for measuring the dimensions of moving vehicles, especially heavy vehicles such as trailers and trucks, continues to develop rapidly alongside the growing need for transportation safety, logistics efficiency, and vehicle automation. Vehicles that exceed the permitted dimension limits are often cited as a contributing cause of accidents. Detection of moving vehicles still faces technical obstacles in the field, such as physical inspections that officers must carry out manually, and traditional measurement methods, such as manual measurement or single sensors, have many limitations. Deep learning-based methods and sensor fusion offer a more precise and adaptive approach to measuring the dimensions of moving vehicles. This study compares the two approaches, deep learning and sensor fusion, to determine the better method for measuring moving vehicles in real time. The deep learning design uses YOLOv5s, while the sensor fusion design uses two distance sensors to detect width, a LiDAR sensor to detect length, and an ultrasonic sensor to detect height. Tests were conducted on vehicles moving at speeds from 1 km/h to 5 km/h under both daytime and nighttime lighting conditions. The results show that both sensor fusion and YOLO can detect vehicle dimensions in real time. The sensor fusion readings have average errors for length, width, and height of 16.39%, 2.87%, and 1.81% respectively during the day and 14.05%, 3.63%, and 2.76% respectively at night. The YOLO readings have average errors for length, width, and height of 1.19%, 1.15%, and 0.77% respectively during the day and 1.17%, 100%, and 4.15% respectively at night.
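
The abstract does not describe the sensing geometry in detail, so the following is only a minimal sketch of how a gantry-style sensor fusion setup could combine the three readings into dimension estimates. All constants, example readings, and function names here are hypothetical assumptions rather than the thesis implementation: width is taken as the gate width minus the two side-sensor gaps, height as the overhead ultrasonic mounting height minus its measured range, and length as the vehicle speed multiplied by the time the LiDAR beam stays blocked.

    # Minimal sensor-fusion sketch (Python). All geometry constants and
    # example readings are hypothetical placeholders; the thesis abstract
    # does not specify the actual mounting layout.

    GATE_WIDTH_M = 4.0      # assumed spacing between the two side distance sensors
    MOUNT_HEIGHT_M = 4.5    # assumed height of the overhead ultrasonic sensor

    def estimate_width(left_gap_m: float, right_gap_m: float) -> float:
        """Vehicle width = gate width minus the gap measured on each side."""
        return GATE_WIDTH_M - (left_gap_m + right_gap_m)

    def estimate_height(ultrasonic_range_m: float) -> float:
        """Vehicle height = mounting height minus the downward ultrasonic range."""
        return MOUNT_HEIGHT_M - ultrasonic_range_m

    def estimate_length(speed_kmh: float, beam_blocked_s: float) -> float:
        """Vehicle length = speed in m/s times the time the LiDAR beam is blocked."""
        return (speed_kmh / 3.6) * beam_blocked_s

    if __name__ == "__main__":
        # Illustrative readings for a truck passing at 3 km/h.
        print(f"width  ~ {estimate_width(0.9, 0.6):.2f} m")    # 2.50 m
        print(f"height ~ {estimate_height(1.3):.2f} m")        # 3.20 m
        print(f"length ~ {estimate_length(3.0, 9.6):.2f} m")   # 8.00 m

For the camera branch, the abstract names YOLOv5s but not the pixel-to-metre conversion. The sketch below assumes the public ultralytics/yolov5 torch.hub interface and a hypothetical, pre-calibrated metres-per-pixel factor at the measurement plane; it simply scales the bounding box of the largest detected truck. METERS_PER_PIXEL and the frame path are placeholders, not values from the thesis.

    # Minimal bounding-box dimension sketch with a pretrained YOLOv5s model.
    import torch

    METERS_PER_PIXEL = 0.02  # assumed calibration at the measurement plane

    # Pretrained YOLOv5s from the public ultralytics/yolov5 hub repository.
    model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

    def measure_from_frame(frame_path: str):
        """Return (width_m, height_m) of the largest detected truck, or None."""
        results = model(frame_path)
        det = results.pandas().xyxy[0]          # xmin, ymin, xmax, ymax, confidence, class, name
        trucks = det[det["name"] == "truck"]
        if trucks.empty:
            return None
        # Keep the detection with the largest pixel area.
        areas = (trucks["xmax"] - trucks["xmin"]) * (trucks["ymax"] - trucks["ymin"])
        box = trucks.loc[areas.idxmax()]
        width_m = (box["xmax"] - box["xmin"]) * METERS_PER_PIXEL
        height_m = (box["ymax"] - box["ymin"]) * METERS_PER_PIXEL
        return width_m, height_m

    if __name__ == "__main__":
        print(measure_from_frame("frame.jpg"))  # placeholder frame from the test footage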

Item Type: Thesis (Diploma)
Uncontrolled Keywords: YOLO, Sensor Fusion, Dimension Measurement, Moving Vehicles.
Subjects: T Technology > TA Engineering (General). Civil engineering (General) > TA1573 Detectors. Sensors
T Technology > TA Engineering (General). Civil engineering (General) > TA1637 Image processing--Digital techniques. Image analysis--Data processing.
Divisions: Faculty of Industrial Technology and Systems Engineering (INDSYS) > Physics Engineering > 30201-(S1) Undergraduate Thesis
Depositing User: Azka Ammar Dzikra
Date Deposited: 06 Aug 2025 07:01
Last Modified: 06 Aug 2025 07:01
URI: http://repository.its.ac.id/id/eprint/127756
