
Layla, the hotel robot, is responsible for carrying guests’ luggage and guiding them to their accommodations. It is equipped with an internal map of the hotel, allowing it to navigate various locations efficiently. Additionally, it features an AI-powered system that enables interactive conversations in three major languages: Thai, English, and Chinese.
Because of the current shortage of hotel staff, employees must handle several duties at once, so guests may face long waits. To make operations faster and more convenient, this innovation was created to reduce the workload of staff such as the bellboy, concierge, and receptionist.

Prince of Chumphon Campus

Faculty of Engineering
The integration of intelligent robotic systems into human-centric environments, such as laboratories, hospitals, and educational institutions, has become increasingly important due to the growing demand for accessible and context-aware assistants. However, current solutions often lack scalability—for instance, relying on specialized personnel to repeatedly answer the same questions on behalf of specific departments—and adaptability to dynamic environments that require real-time situational responses. This study introduces a novel framework for an interactive robotic assistant (Beckerle et al., 2017) designed to assist during laboratory tours and mitigate the challenges posed by limited human resources in providing comprehensive information to visitors. The proposed system operates through multiple modes, including standby mode and recognition mode, to ensure seamless interaction and adaptability in various contexts. In standby mode, the robot signals readiness with a smiling-face animation while patrolling predefined paths or conserving energy when stationary; obstacle detection ensures safe navigation in dynamic environments. Recognition mode activates through gestures or wake words, using computer vision and real-time speech recognition to identify users. Facial recognition further classifies individuals as known or unknown, providing personalized greetings or context-specific guidance to enhance user engagement. The proposed robot and its 3D design are shown in Figure 1. In interactive mode, the system integrates automatic speech recognition (Whisper ASR), natural language processing (NLP), and a large language model, Llama 3.2 served via Ollama (LLM Predictor, 2025), to provide a user-friendly, context-aware, and adaptable experience.
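The standby/recognition/interactive behavior described above can be sketched as a small state machine. This is a minimal illustration, not the robot's actual implementation; the event names (`wake_word`, `face_known`, etc.) are assumptions standing in for the real wake-word, gesture, and face-recognition triggers.

```python
from enum import Enum, auto

class Mode(Enum):
    STANDBY = auto()      # smiling-face animation; patrol or conserve energy
    RECOGNITION = auto()  # camera and microphone active, identifying the user
    INTERACTIVE = auto()  # ASR/NLP/LLM dialogue pipeline engaged

# Hypothetical event labels; the real triggers are wake-word detection,
# gesture detection, and face-recognition results.
TRANSITIONS = {
    (Mode.STANDBY, "wake_word"): Mode.RECOGNITION,
    (Mode.STANDBY, "gesture"): Mode.RECOGNITION,
    (Mode.RECOGNITION, "face_known"): Mode.INTERACTIVE,
    (Mode.RECOGNITION, "face_unknown"): Mode.INTERACTIVE,
    (Mode.RECOGNITION, "timeout"): Mode.STANDBY,
    (Mode.INTERACTIVE, "conversation_done"): Mode.STANDBY,
}

def step(mode: Mode, event: str) -> Mode:
    """Return the next mode; unrecognized events leave the mode unchanged."""
    return TRANSITIONS.get((mode, event), mode)
```

A table-driven design like this keeps mode logic auditable and makes it easy to add new triggers without touching the perception code.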
Motivated by the need to engage students and promote interest in the RAI department, which receives over 1,000 visitors annually, the system addresses accessibility gaps where human staff may be unavailable. With wake-word detection, face and gesture recognition, and LiDAR-based obstacle detection, the robot ensures seamless communication in English alongside safe and efficient navigation. The Retrieval-Augmented Generation (RAG) human-interaction system communicates with the mobile robot, built on ROS1 Noetic, via the MQTT protocol over Ethernet. It publishes navigation goals to the move_base module in ROS, which autonomously handles navigation and obstacle avoidance; this architecture is illustrated in Figure 2. The framework includes a robust back-end that combines MongoDB for information storage and retrieval with a RAG mechanism (Thüs et al., 2024) to process program-curriculum information provided as PDFs, ensuring that the robot gives accurate and contextually relevant answers to user queries. Furthermore, the smiling-face animations and text-to-speech (BotNoi TTS) enhanced engagement: metrics derived from observational studies and surveys showed significant improvements in user satisfaction and accessibility. This paper also discusses the system's capability to operate in dynamic, human-centric environments, for example handling interruptions while navigating during a mission. The modular design allows easy integration of additional features, such as gesture recognition and hardware upgrades, ensuring long-term scalability. However, limitations such as high initial setup costs and dependency on specific hardware configurations are acknowledged. Future work will focus on enhancing the system's adaptability to diverse languages, expanding its use cases, and exploring collaborative interactions between multiple robots.
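The MQTT-to-move_base link described above can be sketched as follows. This is a hedged illustration, not the project's actual code: the topic name `robot/nav_goal` and the JSON payload shape are assumptions; only the quaternion math and the geometry_msgs/PoseStamped-style structure follow standard ROS conventions.

```python
import json
import math

def yaw_to_quaternion(yaw: float) -> dict:
    """Planar rotation about z as a quaternion, the form move_base expects."""
    return {"x": 0.0, "y": 0.0,
            "z": math.sin(yaw / 2.0), "w": math.cos(yaw / 2.0)}

def make_goal_payload(x: float, y: float, yaw: float) -> str:
    """JSON navigation goal mirroring a geometry_msgs/PoseStamped message."""
    goal = {
        "header": {"frame_id": "map"},
        "pose": {
            "position": {"x": x, "y": y, "z": 0.0},
            "orientation": yaw_to_quaternion(yaw),
        },
    }
    return json.dumps(goal)

# On the RAG side, a paho-mqtt client could publish this payload, e.g.:
#   client.publish("robot/nav_goal", make_goal_payload(3.5, -1.2, math.pi / 2))
# A ROS-side bridge node would then parse the JSON and forward it to
# move_base as an actual PoseStamped goal.
```

Serializing goals as JSON over MQTT keeps the interaction stack decoupled from ROS1, so the dialogue back-end can run on a separate machine connected only by Ethernet.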
In conclusion, the proposed interactive robotic assistant represents a significant step forward in bridging the gap between human needs and technological advancements. By combining cutting-edge AI technologies with practical hardware solutions, this work offers a scalable, efficient, and user-friendly system that enhances accessibility and user engagement in human-centric spaces.

Faculty of Engineering
This project aims to develop a conceptual prototype of a weapon-aiming system that simulates an anti-aircraft gun. Using an optical camera, the system detects moving objects and calculates their trajectories in real time. The results are then used to control a motorized laser pointer with two degrees of freedom (DoF) of rotation, enabling it to aim at the predicted position of the target. The system is built on the Raspberry Pi platform and employs machine-vision software; the object-motion-tracking functionality was developed with the OpenCV library, based on color-detection algorithms. Experimental results indicate that the system successfully detects the movement of a tennis ball at 30 frames per second (fps). The current phase involves designing and integration-testing the mechanical system for precise laser-pointer position control. This project exemplifies the integration of knowledge in electronics (computer programming) and mechanical engineering (motor control).
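The predict-then-aim step can be sketched as below. This is a minimal sketch under stated assumptions: it assumes the OpenCV color tracker yields one pixel centroid per frame, uses a constant-velocity model for prediction, and converts the predicted pixel position to pan/tilt angles via a pinhole camera model; the function names and the focal-length parameter are illustrative, not the project's actual API.

```python
import math

def predict_position(p0, p1, dt_frames, lead_frames):
    """Constant-velocity prediction from two consecutive centroids
    (pixel coordinates) observed dt_frames apart, projected
    lead_frames into the future."""
    vx = (p1[0] - p0[0]) / dt_frames
    vy = (p1[1] - p0[1]) / dt_frames
    return (p1[0] + vx * lead_frames, p1[1] + vy * lead_frames)

def aim_angles(target, image_center, focal_px):
    """Pan/tilt angles (radians) for a 2-DoF pointer, assuming a
    pinhole camera with focal length focal_px in pixels."""
    dx = target[0] - image_center[0]
    dy = target[1] - image_center[1]
    pan = math.atan2(dx, focal_px)
    tilt = math.atan2(-dy, focal_px)  # image y grows downward
    return pan, tilt
```

At 30 fps the inter-frame interval is about 33 ms, so `lead_frames` effectively sets how far ahead (in multiples of 33 ms) the laser pointer leads the target to compensate for motor latency.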