Nowadays, assembling a computer is something many people are likely to encounter, and it calls for two things: knowledge of the various computer components and the skills to put them together. Both are things the general public should understand at a basic level before attempting a self-build. This project therefore aims to give people who want to learn how to assemble a computer the necessary knowledge, including information about each component, presented as learning media built on virtual reality (VR) technology. The VR format reduces assembly mistakes and the resources consumed by practice, while keeping users engaged: the assembly process is simulated so that users can interact with it in a virtual world, gaining experience and knowledge before working with real hardware. The project is intended for anyone interested in computer assembly, especially people with no prior experience and those who would like the chance to try building a computer by themselves.
Assembling a computer normally requires the actual parts; without them, it simply cannot be done. A builder without the necessary knowledge may also take a long time to finish, and in practice, performing a step in the wrong order can damage the components. This project lets users try assembling a computer on their own while receiving basic instruction, delivered as teaching media based on virtual reality technology. Users interact with a simulated assembly scenario, which helps them understand the process and gain more knowledge of computer assembly before correctly applying it to real equipment.
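To make the idea concrete, the following is a minimal sketch in Python (the project itself is a VR application, so this is only an illustration) of how out-of-order steps could be detected before they damage a virtual part. The part names and prerequisite rules are hypothetical examples, not the project's actual assembly data.

```python
# Illustrative sketch of a step-order check a VR assembly simulation could use
# to warn a learner before an out-of-order step damages a (virtual) part.
# Part names and prerequisite rules below are assumptions, not project data.

PREREQUISITES = {
    "motherboard": ["case"],
    "psu": ["case"],
    "cpu": ["motherboard"],
    "cpu_cooler": ["cpu"],
    "ram": ["motherboard"],
    "gpu": ["motherboard", "psu"],
}

def check_step(installed: set, next_part: str) -> list:
    """Return the prerequisites still missing before next_part may be installed."""
    return [p for p in PREREQUISITES.get(next_part, []) if p not in installed]

# Example: trying to seat the CPU cooler before the CPU has been installed.
installed = {"case", "motherboard"}
missing = check_step(installed, "cpu_cooler")
if missing:
    print(f"Cannot install cpu_cooler yet; missing: {missing}")
else:
    installed.add("cpu_cooler")
```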

Faculty of Information Technology
This research project aims to study and develop a metaverse system for tourism in Horklong Subdistrict, Phitsanulok Province. The primary goal is to create a prototype metaverse system that showcases the cultural and historical tourist attractions of Horklong Subdistrict through virtual reality technology, helping to promote tourism in rural areas that are not yet widely known and to modernize the way the province's attractions are presented. The system uses virtual reality technology to simulate the experience of touring Horklong Subdistrict via a virtual boat ride. Users access the system through an application built on the Unity platform, a tool for developing 3D and VR applications. The system is designed so that users can board a virtual boat and visit various places that have been recreated in virtual form; these locations are designed and developed as 3D models based on real data collected from the Horklong Subdistrict area. The project is therefore an initiative to sustainably promote cultural and historical tourism in Phitsanulok Province. It has the potential to increase income for local communities and to disseminate local knowledge and wisdom to a broader audience. The metaverse system for tourism in Horklong Subdistrict is thus an important step in modernizing the province's tourism sector and attracting more visitors, both domestic and international.
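As a rough, language-agnostic illustration of the tour structure (the actual system is developed in Unity), the sketch below models the boat route as an ordered list of stops, each tied to a 3D-modeled landmark and a short narration. All stop names, coordinates, and fields are hypothetical placeholders rather than data from the project.

```python
# Hypothetical sketch of the virtual boat route as ordered tour-stop data.
# The real route and landmarks come from data collected in Horklong Subdistrict;
# everything below is a placeholder for illustration only.

from dataclasses import dataclass

@dataclass
class TourStop:
    name: str          # landmark recreated as a 3D model
    position: tuple    # (x, y, z) coordinates in the virtual scene
    narration: str     # short description played when the boat arrives

ROUTE = [
    TourStop("Temple pier", (0.0, 0.0, 0.0), "Starting point of the boat tour."),
    TourStop("Riverside market", (45.0, 0.0, 12.0), "Local crafts and food stalls."),
    TourStop("Historic wooden house", (90.0, 0.0, -8.0), "Traditional architecture."),
]

def run_tour(route: list) -> None:
    """Visit each stop in order and play its narration."""
    for stop in route:
        print(f"Arriving at {stop.name} {stop.position}: {stop.narration}")

run_tour(ROUTE)
```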

Faculty of Liberal Arts
Layla, the hotel robot, is responsible for carrying guests’ luggage and guiding them to their accommodations. It is equipped with an internal map of the hotel, allowing it to navigate various locations efficiently. Additionally, it features an AI-powered system that enables interactive conversations in three major languages: Thai, English, and Chinese.
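One possible way to support conversation in the three languages, shown purely as a hedged sketch rather than Layla's actual implementation, is to detect the guest's language and route the reply accordingly. The use of the langdetect package and the greeting table below are illustrative assumptions.

```python
# Hedged sketch: detect the guest's language and answer in kind.
# The langdetect package and canned greetings are assumptions for illustration,
# not the robot's actual dialogue system.

from langdetect import detect  # pip install langdetect

GREETINGS = {
    "th": "สวัสดีค่ะ ดิฉันเลล่า ให้ช่วยยกกระเป๋าและนำทางไปห้องพักไหมคะ",
    "en": "Hello, I am Layla. May I carry your luggage and guide you to your room?",
    "zh": "您好，我是Layla。需要我帮您拿行李并带您去房间吗？",
}

def reply(utterance: str) -> str:
    """Return a greeting in the detected language, falling back to English."""
    lang = detect(utterance)                      # e.g. 'th', 'en', 'zh-cn'
    key = "zh" if lang.startswith("zh") else lang
    return GREETINGS.get(key, GREETINGS["en"])

print(reply("Where is my room?"))
```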

Faculty of Engineering
The integration of intelligent robotic systems into human-centric environments, such as laboratories, hospitals, and educational institutions, has become increasingly important due to the growing demand for accessible and context-aware assistants. However, current solutions often lack scalability (for instance, they rely on specialized personnel to repeatedly answer the same questions on behalf of specific departments) and adaptability to dynamic environments that require real-time situational responses. This study introduces a novel framework for an interactive robotic assistant (Beckerle et al., 2017) designed to assist during laboratory tours and to mitigate the challenges posed by limited human resources in providing comprehensive information to visitors. Motivated by the need to engage students and promote interest in the RAI department, which receives over 1,000 visitors annually, the system addresses accessibility gaps where human staff may be unavailable.

The proposed system operates through multiple modes, including standby mode and recognition mode, to ensure seamless interaction and adaptability in various contexts. In standby mode, the robot signals readiness with a smiling face animation while patrolling predefined paths or conserving energy when stationary; obstacle detection ensures safe navigation in dynamic environments. Recognition mode is activated through gestures or wake words and uses computer vision and real-time speech recognition to identify users. Facial recognition further classifies individuals as known or unknown, providing personalized greetings or context-specific guidance to enhance user engagement. The proposed robot and its 3D design are shown in Figure 1. In interactive mode, the system integrates speech recognition (ASR Whisper), natural language processing (NLP), and a large language model, Ollama 3.2 (LLM Predictor, 2025), to provide a user-friendly, context-aware, and adaptable experience. With wake word detection, face and gesture recognition, and LiDAR-based obstacle detection, the robot ensures seamless communication in English alongside safe and efficient navigation.

The Retrieval-Augmented Generation (RAG) human interaction system communicates with the mobile robot, which is built on ROS1 Noetic, using the MQTT protocol over Ethernet. It publishes navigation goals to the move_base module in ROS, which autonomously handles navigation and obstacle avoidance; this communication flow is illustrated in Figure 2. The framework includes a robust back-end architecture that combines MongoDB for information storage and retrieval with a RAG mechanism (Thüs et al., 2024) to process program curriculum information provided as PDFs, ensuring that the robot gives accurate and contextually relevant answers to user queries.

Furthermore, the smiling face animations and text-to-speech (TTS BotNoi) enhanced user engagement; engagement metrics were derived from a combination of observational studies and surveys, which highlighted significant improvements in user satisfaction and accessibility. This paper also discusses the system's capability to operate in dynamic environments and human-centric spaces, for example, handling interruptions while navigating during a mission. The modular design allows for easy integration of additional features, such as gesture recognition and hardware upgrades, ensuring long-term scalability.
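As a concrete illustration of the MQTT handoff described above, the following is a minimal sketch, assuming the paho-mqtt Python package and an agreed JSON message format, of how the interaction system might publish a 2D navigation goal that a bridge node on the ROS1 robot then republishes to move_base (for example, as a geometry_msgs/PoseStamped on move_base_simple/goal). The broker address, topic name, and message fields are illustrative assumptions, not the project's actual configuration.

```python
# Hedged sketch of the interaction-system side of the MQTT handoff.
# A separate bridge node on the ROS1 Noetic robot (not shown) converts the
# JSON payload into a move_base goal. Broker address, topic, and fields are
# assumptions for illustration.

import json
import paho.mqtt.publish as publish   # pip install paho-mqtt

BROKER = "192.168.1.10"   # robot's address on the shared Ethernet link (assumed)
TOPIC = "robot/nav_goal"  # MQTT topic the ROS-side bridge subscribes to (assumed)

def send_goal(x: float, y: float, yaw: float) -> None:
    """Publish a 2D navigation goal in the map frame as JSON over MQTT."""
    payload = json.dumps({"frame_id": "map", "x": x, "y": y, "yaw": yaw})
    publish.single(TOPIC, payload, qos=1, hostname=BROKER)

# Example: after a visitor asks to see the robotics lab, drive to its entrance.
send_goal(3.5, -1.2, 1.57)
```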
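Similarly, the back-end could be sketched as follows: curriculum text extracted from the program PDFs is stored as chunks in MongoDB, the most relevant chunks are retrieved for a visitor's question, and a local Ollama model generates a grounded answer. This is a minimal sketch under stated assumptions (the collection name, the naive keyword-overlap retrieval standing in for the actual RAG retriever, and the model tag llama3.2 are all hypothetical), not the project's implementation.

```python
# Hedged sketch of a RAG-style question-answering loop over curriculum chunks.
# Assumes pre-extracted PDF text stored in MongoDB and a running Ollama server;
# the collection name, keyword-overlap retrieval, and model tag are assumptions.

from pymongo import MongoClient   # pip install pymongo
import ollama                     # pip install ollama

db = MongoClient("mongodb://localhost:27017")["rai_assistant"]

def retrieve(question: str, k: int = 3) -> list:
    """Score stored curriculum chunks by keyword overlap and return the top k."""
    words = set(question.lower().split())
    chunks = [doc["text"] for doc in db["curriculum_chunks"].find({}, {"text": 1})]
    chunks.sort(key=lambda c: len(words & set(c.lower().split())), reverse=True)
    return chunks[:k]

def answer(question: str) -> str:
    """Build a grounded prompt from retrieved chunks and query the local LLM."""
    context = "\n\n".join(retrieve(question))
    response = ollama.chat(
        model="llama3.2",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response["message"]["content"]

print(answer("What courses are in the first year of the RAI program?"))
```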
However, limitations such as high initial setup costs and dependency on specific hardware configurations are acknowledged. Future work will focus on enhancing the system's adaptability to diverse languages, expanding its use cases, and exploring collaborative interactions between multiple robots. In conclusion, the proposed interactive robotic assistant represents a significant step forward in bridging the gap between human needs and technological advancements. By combining cutting-edge AI technologies with practical hardware solutions, this work offers a scalable, efficient, and user-friendly system that enhances accessibility and user engagement in human-centric spaces.