In an era where information technology plays a significant role in many fields, using websites to support education has become essential. This case study focuses on the development of a website for the Object Oriented Programming (OOP) course at King Mongkut's Institute of Technology Ladkrabang (KMITL) to help instructors and teaching assistants grade and assess student work and track student progress. The developed system reduces grading errors, ensures accurate and timely assessment, and enables efficient monitoring of students' academic performance. The platform also lets students schedule project submissions and track their grades, and provides statistical data on student performance. This development aims to modernize and enhance the quality of teaching and learning.
The Faculty of Information Technology at King Mongkut's Institute of Technology Ladkrabang currently offers the Object Oriented Programming course and provides a course website to support both instructors and students. The development team observed that the process of recording students' accumulated scores is error-prone: entering assignment or project scores incorrectly can affect students' final grades. A further problem instructors encountered is that teaching assistants have difficulty walking around to check students' work, because the positions displayed in the system are unclear, which delays grading.

Faculty of Agricultural Technology
This project develops a plant care system for dormitories using the Internet of Things (IoT). The system runs on an ESP-32 board programmed to read sensor data and control automated watering, and it is operated from a smartphone app supporting both iOS and Android. The project is expected to make plant care in dormitories easier and more convenient.
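The automated-watering control described above can be sketched as a simple sensor-threshold rule. This is a minimal illustrative sketch only: the moisture thresholds and the use of hysteresis are assumptions, not details from the project, and on the real ESP-32 this function would be driven by the board's ADC reading and the smartphone commands.

```python
# Hypothetical watering-decision logic; threshold values are assumed
# for illustration and are not taken from the project itself.

DRY_THRESHOLD = 30   # % soil moisture below which watering starts (assumed)
WET_THRESHOLD = 55   # % soil moisture at which watering stops (assumed)

def watering_command(moisture_pct: float, pump_on: bool) -> bool:
    """Return the new pump state given the latest soil-moisture reading.

    Uses two thresholds (hysteresis) so the pump does not rapidly
    toggle on and off around a single cutoff value.
    """
    if moisture_pct < DRY_THRESHOLD:
        return True            # soil is dry: start watering
    if moisture_pct > WET_THRESHOLD:
        return False           # soil is wet enough: stop watering
    return pump_on             # in between: keep the current state
```

In the deployed system, the main loop on the board would map the raw sensor reading to a percentage, call this function, and drive the pump relay with the result, while the smartphone app could read or override the pump state.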

Faculty of Science
This research will begin with a review of literature and related studies to examine existing technologies and methods for hand gesture recognition and their applications in controlling electronic devices such as drones, robots, and gaming systems. Subsequently, a hand gesture recognition system will be designed and developed using machine learning and computer vision techniques, with a focus on creating an algorithm that operates quickly and accurately enough for real-time control. The developed system will be tested and refined in various simulated scenarios to evaluate its efficiency and accuracy in diverse environments. Additionally, a user-friendly interface will be developed to ensure accessibility for all user groups. The research will also incorporate qualitative studies to gather feedback from both novice users and experts, which will inform further system improvements and ensure the system effectively meets user needs. Ultimately, the findings of this research will lead to a functional prototype for gesture-based control that can be applied in industry and entertainment, contributing to advances in innovation and new technologies in the future.
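To make the recognition step concrete, the following is a minimal rule-based sketch of classifying a gesture from hand landmarks, assuming 21 (x, y) points in the layout used by common hand-tracking models such as MediaPipe Hands (fingertips at indices 8, 12, 16, 20 and the corresponding PIP joints at 6, 10, 14, 18). The gesture names and the rule itself are illustrative assumptions; the research described above would replace such rules with a trained machine-learning classifier.

```python
# Illustrative rule-based gesture classification over hand landmarks.
# Landmark indices follow the common 21-point hand topology; the
# specific gestures and thresholds here are assumptions for the sketch.

FINGERTIPS = [8, 12, 16, 20]   # index, middle, ring, pinky fingertips
KNUCKLES   = [6, 10, 14, 18]   # corresponding PIP joints

def classify_gesture(landmarks):
    """Classify an open palm vs a fist from 21 (x, y) landmark tuples.

    Uses image coordinates (y grows downward): an extended finger has
    its tip above, i.e. at smaller y than, its PIP joint.
    """
    extended = sum(
        landmarks[tip][1] < landmarks[pip][1]
        for tip, pip in zip(FINGERTIPS, KNUCKLES)
    )
    if extended >= 3:
        return "open_palm"     # could map to a command such as "ascend"
    if extended == 0:
        return "fist"          # could map to a command such as "hover"
    return "unknown"
```

In a real pipeline, the landmark tuples would come from a per-frame hand-tracking model, and the classifier output would be mapped to device commands.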

Faculty of Engineering
The Thai Sign Language Generation System aims to create a comprehensive 3D modeling and animation platform that translates Thai sentences into dynamic and accurate representations of Thai Sign Language (TSL) gestures. This project enhances communication for the Thai deaf community by leveraging a landmark-based approach using a Vector Quantized Variational Autoencoder (VQVAE) and a Large Language Model (LLM) for sign language generation. The system first trains a VQVAE encoder using landmark data extracted from sign videos, allowing it to learn compact latent representations of TSL gestures. These encoded representations are then used to generate additional landmark-based sign sequences, effectively expanding the training dataset using the BigSign ThaiPBS dataset. Once the dataset is augmented, an LLM is trained to output accurate landmark sequences from Thai text inputs, which are then used to animate a 3D model in Blender, ensuring fluid and natural TSL gestures. The project is implemented using Python, incorporating MediaPipe for landmark extraction, OpenCV for real-time image processing, and Blender’s Python API for 3D animation. By integrating AI, VQVAE-based encoding, and LLM-driven landmark generation, this system aspires to bridge the communication gap between written Thai text and expressive TSL gestures, providing the Thai deaf community with an interactive, real-time sign language animation platform.
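The core of the VQVAE encoding described above is vector quantization: each encoded landmark frame is snapped to its nearest codebook vector, and the resulting sequence of discrete codebook indices is the compact representation the LLM learns to generate from Thai text. The sketch below shows only this lookup step in isolation; the codebook contents, sizes, and dimensions are assumptions for illustration, since in the actual system the codebook is learned jointly with the encoder and decoder.

```python
# Illustrative vector-quantization step of a VQVAE: map continuous
# landmark frames to discrete codebook indices. Codebook values here
# are assumed toy data, not learned parameters from the project.

def quantize(frame, codebook):
    """Return (index, vector) of the codebook entry nearest to `frame`
    in squared Euclidean distance."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    idx = min(range(len(codebook)), key=lambda i: sq_dist(frame, codebook[i]))
    return idx, codebook[idx]

def encode_sequence(frames, codebook):
    """Map a sequence of landmark frames to a discrete index sequence,
    the form of output the LLM is trained to produce."""
    return [quantize(f, codebook)[0] for f in frames]
```

In the full pipeline, the decoder would map the index sequence back to landmark frames, which then drive the 3D model animation in Blender.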