Nowadays, assembling a computer is within reach of many people, and anyone can attempt it, provided they have basic knowledge of computer components and the skills to put them together. These are two things the general public should understand before building a computer on their own. This project therefore aims to provide that knowledge to anyone who wants to learn how to assemble a computer, including information about its components, presented as learning media built on virtual reality (VR) technology. The VR format reduces assembly errors and the resources consumed during practice, while keeping the experience engaging: users assemble a simulated computer and interact with it in a virtual world, gaining experience and knowledge before working with real equipment. The project is intended for anyone interested in computer assembly, especially people with no prior experience and those who would like the opportunity to try building a computer by themselves.
Assembling a computer requires real components; without them, it cannot be done. In addition, someone without the necessary knowledge may take a long time to complete the assembly, and in real practice, performing steps in the wrong order can in some cases damage the components. This project lets users try assembling a computer by themselves while providing fundamental knowledge, presented as teaching media based on virtual reality technology. Users can interact with and work through simulated scenarios, helping them understand and learn more about computer assembly before correctly applying that knowledge to real equipment.

Faculty of Engineering
The integration of intelligent robotic systems into human-centric environments, such as laboratories, hospitals, and educational institutions, has become increasingly important due to the growing demand for accessible and context-aware assistants. However, current solutions often lack scalability, for instance relying on specialized personnel to repeatedly answer the same questions on behalf of specific departments, and adaptability to dynamic environments that require real-time situational responses. This study introduces a novel framework for an interactive robotic assistant (Beckerle et al., 2017) designed to assist during laboratory tours and to mitigate the challenges posed by limited human resources in providing comprehensive information to visitors. Motivated by the need to engage students and promote interest in the RAI department, which receives over 1,000 visitors annually, the system addresses accessibility gaps where human staff may be unavailable.

The proposed system operates through multiple modes, including a standby mode and a recognition mode, to ensure seamless interaction and adaptability in various contexts. In standby mode, the robot signals readiness with a smiling face animation while patrolling predefined paths or conserving energy when stationary; advanced obstacle detection ensures safe navigation in dynamic environments. Recognition mode is activated through gestures or wake words and uses computer vision and real-time speech recognition to identify users. Facial recognition further classifies individuals as known or unknown, providing personalized greetings or context-specific guidance to enhance user engagement. The proposed robot and its 3D design are shown in Figure 1. In interactive mode, the system integrates advanced speech recognition (Whisper ASR), natural language processing (NLP), and the Ollama 3.2 large language model (LLM Predictor, 2025) to provide a user-friendly, context-aware, and adaptable experience. With wake word detection, face and gesture recognition, and LiDAR-based obstacle detection, the robot supports seamless communication in English alongside safe and efficient navigation.

The Retrieval-Augmented Generation (RAG) human interaction system communicates with the mobile robot, built on ROS1 Noetic, using the MQTT protocol over Ethernet. It publishes navigation goals to the move_base module in ROS, which autonomously handles navigation and obstacle avoidance; this architecture is shown in Figure 2. The framework includes a robust back-end that combines MongoDB for information storage and retrieval with a RAG mechanism (Thüs et al., 2024) to process program curriculum information provided as PDFs, ensuring that the robot gives accurate and contextually relevant answers to user queries. Furthermore, the smiling face animations and text-to-speech (BotNoi TTS) enhanced user engagement; engagement metrics derived from a combination of observational studies and surveys showed significant improvements in user satisfaction and accessibility. This paper also discusses the system's capability to operate in dynamic environments and human-centric spaces, for example handling interruptions while navigating during a mission. The modular design allows easy integration of additional features, such as gesture recognition and hardware upgrades, ensuring long-term scalability.
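To make the hand-off between the RAG interaction host and the mobile base more concrete, the sketch below shows one way a navigation goal could be published over MQTT and left to move_base on the robot. It is a minimal sketch only: the broker address, topic name, and JSON layout are assumptions for illustration and are not taken from the project's actual code.

```python
# Minimal sketch (assumed names, not the project's code): the RAG/interaction
# host publishes a tour waypoint over MQTT; a small bridge node on the robot
# republishes it to move_base in ROS1 Noetic, which plans and avoids obstacles.
import json
import paho.mqtt.publish as publish

ROBOT_BROKER = "192.168.0.10"    # assumed robot address on the shared Ethernet link
GOAL_TOPIC = "tourbot/nav_goal"  # assumed topic the on-robot bridge subscribes to

def send_tour_waypoint(x: float, y: float, yaw: float) -> None:
    """Publish a 2D goal in the map frame as a single MQTT message."""
    goal = {"frame_id": "map", "x": x, "y": y, "yaw": yaw}
    publish.single(GOAL_TOPIC, json.dumps(goal), qos=1, hostname=ROBOT_BROKER)

if __name__ == "__main__":
    # Example: send the robot to an assumed waypoint in front of a lab exhibit.
    send_tour_waypoint(3.5, -1.2, 1.57)
```

On the robot side, a lightweight ROS node would convert the JSON payload into a pose and forward it to move_base (for example via its action interface), leaving path planning and obstacle avoidance to the navigation stack, as described above.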
However, limitations such as high initial setup costs and dependence on specific hardware configurations are acknowledged. Future work will focus on enhancing the system's adaptability to diverse languages, expanding its use cases, and exploring collaborative interactions between multiple robots. In conclusion, the proposed interactive robotic assistant represents a significant step forward in bridging the gap between human needs and technological advancements. By combining cutting-edge AI technologies with practical hardware solutions, this work offers a scalable, efficient, and user-friendly system that enhances accessibility and user engagement in human-centric spaces.

Faculty of Information Technology
Traditional methods of public relations and learning often lack engagement and fail to provide users with a deep and immersive experience. Moreover, these methods struggle to reach a wide audience, especially those unable to visit the physical location. This project aims to address the issues of accessibility and awareness regarding the institution's Chalermphrakiat Hall and historical exhibition. Using metaverse technology to simulate these important locations allows users to explore the site and view key information in a virtual format, thereby increasing engagement among students, staff, alumni, and the general public. The metaverse system is developed in Unity, a powerful game engine capable of supporting the creation of metaverse environments, which allows an interactive and realistic virtual space to be built; Unity also manages physics, lighting, and sound, further enhancing realism. In addition, the system is delivered through web browsers using WebGL technology, so the project built in Unity can be accessed directly in a browser: users can visit and interact with the metaverse environment from anywhere without installing additional software. The developers have therefore created the metaverse system to provide a realistic and engaging learning experience, enhancing public relations efforts and fostering a strong connection with the institution.
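As a rough illustration of the browser delivery step, the snippet below serves an exported Unity WebGL build over plain HTTP so it can be opened locally in a browser. The folder name and port are assumptions; a production deployment would normally sit behind a proper web server with the compression headers Unity recommends.

```python
# Minimal sketch: serve a Unity WebGL export for local browser testing.
# "WebGLBuild" and the port are assumed placeholders; point BUILD_DIR at
# whatever folder Unity wrote index.html into.
import functools
import http.server
import socketserver

BUILD_DIR = "WebGLBuild"  # assumed Unity WebGL export folder (contains index.html)
PORT = 8080

Handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=BUILD_DIR)

if __name__ == "__main__":
    with socketserver.TCPServer(("", PORT), Handler) as httpd:
        print(f"Open http://localhost:{PORT}/ to view the WebGL build")
        httpd.serve_forever()
```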

Faculty of Medicine
Background: The RGL3 gene plays a role in key signal transduction pathways and has been implicated in hypertension risk through the identification of a copy number variant deletion in exon 6. Genome-wide association studies have highlighted RGL3 as associated with hypertension, providing insights into the genetic underpinnings of the condition and its protective effects on cardiovascular health. Despite these findings, there is a lack of data confirming the precise role of RGL3 in hypertension. Additionally, the functional impact of certain variants, particularly those classified as variants of uncertain significance, remains poorly understood.

Objectives: This study aims to analyze alterations in the RGL3 protein structure caused by mutations and to validate the location of the ligand binding sites.

Methods: Clinical variants of the RGL3 gene were obtained from NCBI ClinVar, and variants classified as of uncertain significance or likely benign were analyzed. Multiple sequence alignment was conducted using BioEdit v7.7.1. AlphaFold 2 was used to predict the wild-type and mutant 3D structures, followed by quality assessment with PROCHECK. Functional domain analysis of the RasGEF, RASGEF_NTER, and RA domains was performed, and BIOVIA Discovery Studio Visualizer 2024 was used to evaluate structural and physicochemical changes.

Results: The analysis of 81 RGL3 variants identified 5 likely benign variants and 76 variants of uncertain significance (VUS), all of which were missense mutations. Structural modeling using AlphaFold 2 revealed three key domains, RasGEF_NTER, RasGEF, and RA, where mutations induced conformational changes. Ramachandran plot validation confirmed 79.7% of residues in favored regions, indicating an overall reliable structure. Moreover, mutations within the RasGEF and RA domains altered polarity, charge, and stability, suggesting potential functional disruptions. These findings provide insight into the structural consequences of RGL3 mutations, contributing to further functional assessments.

Discussion & Conclusion: The identified RGL3 mutations induced physicochemical alterations in key domains, affecting charge, polarity, hydrophobicity, and flexibility. These changes likely disrupt interactions with Ras-like GTPases, impairing GDP-GTP exchange and cellular signaling. Structural analysis highlighted mutations in the RasGEF and RA domains that may interfere with activation states, potentially affecting protein function and stability. These findings suggest that mutations in RGL3 could have functional consequences, emphasizing the need for further molecular and functional studies to explore their pathogenic potential.
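For the physicochemical comparison step, a minimal sketch is given below. It swaps in Biopython's ProtParam in place of BIOVIA Discovery Studio and uses hypothetical sequences and a hypothetical substitution, so it only illustrates the kind of wild-type versus mutant comparison (hydrophobicity, a charge proxy, stability, flexibility) reported above, not the study's actual analysis.

```python
# Illustrative sketch only: the study used BIOVIA Discovery Studio Visualizer,
# but a comparable physicochemical comparison for a missense variant can be
# approximated with Biopython's ProtParam. Sequences and the R -> W substitution
# below are hypothetical placeholders, not real RGL3 data.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

def profile(seq: str) -> dict:
    """Summarize properties of the kind compared between wild type and mutant."""
    pa = ProteinAnalysis(seq)
    flex = pa.flexibility()
    return {
        "gravy": pa.gravy(),                          # hydrophobicity (GRAVY index)
        "isoelectric_point": pa.isoelectric_point(),  # proxy for net charge
        "instability_index": pa.instability_index(),  # proxy for stability
        "flexibility_mean": sum(flex) / len(flex),
    }

# Hypothetical wild-type fragment and a single residue substitution (R -> W).
wild_type = "MKTLLRAGDESVRKLQENPHT"
mutant    = "MKTLLWAGDESVRKLQENPHT"

for name, seq in [("wild-type", wild_type), ("mutant", mutant)]:
    print(name, profile(seq))
```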