This project aims to develop a conceptual prototype of a weapon-aiming system that simulates an anti-aircraft gun. Using an optical camera, the system detects moving objects and calculates their trajectories in real time. The results are then used to control a motorized laser pointer with two rotational degrees of freedom (DoF), enabling it to aim at the predicted position of the target. The system is built on the Raspberry Pi platform and employs machine-vision software; the object-tracking functionality was developed with the OpenCV library, based on color-detection algorithms. Experimental results indicate that the system successfully detects the movement of a tennis ball at 30 frames per second (fps). The current phase involves designing and integration-testing the mechanical system for precise laser-pointer position control. This project exemplifies the integration of knowledge in electronics (computer programming) and mechanical engineering (motor control).
This project stems from an interest in developing a system that combines machine vision with a two-axis degrees-of-freedom (DoF) motor control mechanism, in order to build a prototype device that can detect, track, and aim at targets with high accuracy. We hope this project will prove useful for related future applications, whether military, medical, or industrial.
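A minimal sketch of the color-detection step described above is shown below; the camera index and the HSV bounds for the tennis ball are illustrative assumptions, not the project's actual parameters.

```python
# Minimal sketch of color-based tracking with OpenCV.
# The HSV bounds and camera index are assumptions and would need tuning.
import cv2
import numpy as np

LOWER_HSV = np.array([25, 80, 80])    # assumed lower bound for tennis-ball yellow-green
UPPER_HSV = np.array([45, 255, 255])  # assumed upper bound

cap = cv2.VideoCapture(0)             # assumed camera index
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)
        (x, y), radius = cv2.minEnclosingCircle(c)
        if radius > 10:
            # (x, y) is the ball centre in pixels; a 2-DoF controller would
            # convert this to pan/tilt angles for the laser pointer.
            cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```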
Faculty of Information Technology
Facial Expression Recognition (FER) has attracted considerable attention in fields such as healthcare, customer service, and behavior analysis. However, challenges remain in building a robust system that can adapt to varied environments and dynamic situations. In this study, the researchers introduce an Ensemble Learning approach that merges the outputs of multiple models, each trained under specific conditions, allowing the system to retain previously learned information while efficiently learning new data. This technique is advantageous in terms of training time and resource usage, as it avoids retraining an entirely new model when new conditions arise; instead, new specialized models can be added to the ensemble with minimal resources. The study explores two main ensemble approaches: averaging the outputs of dedicated models trained under specific scenarios, and Mixture of Experts (MoE), a technique that combines multiple models, each specialized for a different situation. Experimental results show that MoE performs more effectively than the averaging ensemble for emotion classification in all scenarios. The MoE system achieved an average accuracy of 84.41% on the CK+ dataset, 54.20% on Oulu-CASIA, and 61.66% on RAVDESS, surpassing the 71.64%, 44.99%, and 57.60% achieved by the averaging ensemble on these datasets, respectively. These results demonstrate MoE's ability to select the model specialized for each scenario, enhancing the system's capacity to handle more complex environments.
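A minimal sketch contrasting the two ensemble strategies is given below; the linear "experts" and gating network are placeholder stand-ins for the condition-specific FER models and are assumptions made for illustration, not the study's actual architecture.

```python
# Sketch of averaging ensemble vs. (soft) mixture of experts.
# Experts and gate are placeholder linear models, not the study's networks.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
num_experts, num_classes, feat_dim = 3, 7, 128

# Placeholder "experts": one linear classifier per training condition.
expert_weights = [rng.normal(size=(feat_dim, num_classes)) for _ in range(num_experts)]
# Placeholder gating network: maps a feature vector to expert weights.
gate_weights = rng.normal(size=(feat_dim, num_experts))

def expert_outputs(x):
    """Per-expert class probabilities, shape (num_experts, num_classes)."""
    return np.stack([softmax(x @ w) for w in expert_weights])

def averaging_ensemble(x):
    # Uniform average of all experts' predictions.
    return expert_outputs(x).mean(axis=0)

def mixture_of_experts(x):
    # The gate weights each expert according to the input, so the expert
    # specialized for the current scenario dominates the combination.
    gates = softmax(x @ gate_weights)      # shape (num_experts,)
    return gates @ expert_outputs(x)       # weighted combination of experts

x = rng.normal(size=feat_dim)              # stand-in facial feature vector
print("averaging ensemble prediction:", averaging_ensemble(x).argmax())
print("mixture-of-experts prediction:", mixture_of_experts(x).argmax())
```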
Faculty of Engineering
This project uses artificial intelligence (AI) and deep learning to develop a smart police system (Smart Police) that analyzes the identity of individuals and vehicles suspected of involvement in crimes. Using CCTV cameras, the system detects people carrying concealed weapons, tracks vehicles involved in crimes, and sends alerts to the police when a crime is detected. Smart Police is a collaboration between the Faculty of Engineering, King Mongkut's Institute of Technology Ladkrabang, the Provincial Police Region 2, the Chachoengsao Foundation for Development, and the Smart City Office of Chachoengsao Province. It is designed to prevent and deter crime, increase public safety and order, and build a network of cooperation between the government, the private sector, and the community. The system is still under development, but it has the potential to become a valuable tool for law enforcement, helping to reduce crime and improve public safety in Chachoengsao Province and other parts of Thailand.
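A hypothetical sketch of the detect-then-alert flow might look like the following; the detector, stream URL, class labels, and alert mechanism are assumptions for illustration, not the actual Smart Police components.

```python
# Hypothetical detect-then-alert loop over a CCTV feed.
# The detector is a placeholder; a real system would run a trained deep model here.
import cv2

SUSPICIOUS_CLASSES = {"concealed_weapon", "wanted_vehicle"}  # illustrative labels

def detect(frame):
    """Placeholder for the deep-learning detector; returns (label, confidence) pairs."""
    return []  # a real system would run a trained object detector on the frame

def send_alert(label, confidence):
    """Placeholder for the notification step to a police dashboard."""
    print(f"ALERT: {label} detected (confidence {confidence:.2f})")

cap = cv2.VideoCapture("rtsp://cctv.example/stream")  # assumed CCTV stream URL
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for label, confidence in detect(frame):
        if label in SUSPICIOUS_CLASSES and confidence > 0.8:
            send_alert(label, confidence)
cap.release()
```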
College of Music Engineering
This work attempts to spark conversations about the technical and creative aspects of participatory concert settings. It presents the results of two interactive research concerts, analyzed on the basis of audience-participation counts, motion analysis, and log-data clustering. Ultimately, it asks how participatory work can help in teaching interactive technology for the arts and beyond.
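As an illustration of the log-data clustering mentioned above, a minimal sketch is given below; the synthetic per-audience-member features (interaction count and mean response time) are assumptions for illustration, not the concerts' actual log format.

```python
# Sketch of clustering audience-interaction logs with k-means.
# Features are synthetic stand-ins: [interaction count, mean response time in s].
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
features = np.vstack([
    rng.normal([30, 1.0], [5, 0.2], size=(40, 2)),   # highly engaged group
    rng.normal([5, 4.0], [2, 0.8], size=(40, 2)),    # passive group
])

scaled = StandardScaler().fit_transform(features)     # normalize feature scales
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
print("cluster sizes:", np.bincount(labels))
```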