
Our project seeks to create an AI-powered tarot card reader that bridges the gap between traditional fortune-telling and modern technology. By leveraging a combination of 3D modeling, natural language processing, text-to-speech (TTS), and speech-to-text (STT) systems, the service will deliver an interactive and culturally sensitive experience in Thai and English. Users will input their queries through voice, which will be processed via STT, and receive engaging AI-generated tarot readings through TTS. Additionally, a 3D animated avatar will mimic a real-life fortune teller, adding a visual dimension to the experience. Hosted on a user-friendly website, this platform will redefine fortune-telling by blending tradition with innovation, making it both accessible and engaging for modern users.
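The core of such a pipeline is the step between STT and TTS: drawing cards and assembling a prompt for the language model. The following is a minimal Python sketch of that step only; the STT and TTS stages are stubbed out, and the card list, persona wording, and function names are illustrative assumptions rather than details taken from the project.

```python
import random

# Illustrative subset of the Major Arcana; the real deck has 78 cards.
MAJOR_ARCANA = [
    "The Fool", "The Magician", "The High Priestess", "The Empress",
    "The Emperor", "The Lovers", "The Chariot", "The Star", "The Moon",
    "The Sun", "The Wheel of Fortune", "The Tower",
]

def draw_cards(n=3, rng=None):
    """Draw n distinct cards, each either upright or reversed."""
    rng = rng or random.Random()
    names = rng.sample(MAJOR_ARCANA, n)  # sample() guarantees no repeats
    return [(name, rng.choice(["upright", "reversed"])) for name in names]

def build_prompt(question, cards, language="en"):
    """Assemble the prompt that would be sent to the language model."""
    spread = "; ".join(f"{name} ({pos})" for name, pos in cards)
    persona = {
        "en": "You are a warm, culturally sensitive tarot reader.",
        "th": "You are a tarot reader who responds in Thai.",
    }[language]
    return (f"{persona}\n"
            f"Question (transcribed from the user's voice): {question}\n"
            f"Cards drawn: {spread}\n"
            f"Give an engaging reading.")

# Transcribed question in, prompt out; the LLM's reply would go to TTS.
cards = draw_cards(3, rng=random.Random(42))
prompt = build_prompt("Will my project succeed?", cards)
```

The seeded `random.Random(42)` is only there to make the sketch reproducible; a production draw would use an unseeded generator.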
Today, AI chatbots and LLMs are used across many businesses, including advisory and prediction services. However, the online fortune-telling market still lacks an AI system that can deliver tarot readings effectively, with an experience close to that of a real fortune teller. Observation of Thai social media users shows that many people already use ChatGPT and other AI tools to request predictions about their own future, which indicates real interest in this kind of service. Nevertheless, no platform has been developed specifically for fortune-telling. This project therefore aims to develop an AI that can interact with users realistically, using modern language models and technologies.

Faculty of Engineering
Converted Electric Vehicles

Faculty of Engineering
The integration of intelligent robotic systems into human-centric environments, such as laboratories, hospitals, and educational institutions, has become increasingly important due to the growing demand for accessible, context-aware assistants. However, current solutions often lack scalability (for instance, relying on specialized personnel to repeatedly answer the same questions for specific departments) and adaptability to dynamic environments that require real-time situational responses. This study introduces a novel framework for an interactive robotic assistant (Beckerle et al., 2017) designed to assist during laboratory tours and mitigate the challenges posed by limited human resources in providing comprehensive information to visitors. The proposed system operates through multiple modes, including a standby mode and a recognition mode, to ensure seamless interaction and adaptability across contexts. In standby mode, the robot signals readiness with a smiling-face animation while patrolling predefined paths or conserving energy when stationary; obstacle detection ensures safe navigation in dynamic environments. Recognition mode activates through gestures or wake words, using computer vision and real-time speech recognition to identify users. Facial recognition further classifies individuals as known or unknown, enabling personalized greetings or context-specific guidance that enhance user engagement. The proposed robot and its 3D design are shown in Figure 1. In interactive mode, the system integrates automatic speech recognition (ASR, Whisper), natural language processing (NLP), and the Llama 3.2 large language model (LLM Predictor, 2025) to provide a user-friendly, context-aware, and adaptable experience.
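The standby / recognition / interactive behavior described above can be summarized as a small state machine. The sketch below is an assumed simplification: the trigger names ("wake_word", "gesture", "known_face", and so on) are hypothetical labels standing in for the Whisper ASR, vision, and face-recognition modules that would actually emit these events.

```python
from enum import Enum, auto

class Mode(Enum):
    STANDBY = auto()      # smiling-face animation; patrol or conserve energy
    RECOGNITION = auto()  # identify the user via vision and speech
    INTERACTIVE = auto()  # ASR + NLP + LLM dialogue

# (current mode, event) -> next mode
TRANSITIONS = {
    (Mode.STANDBY, "wake_word"): Mode.RECOGNITION,
    (Mode.STANDBY, "gesture"): Mode.RECOGNITION,
    (Mode.RECOGNITION, "known_face"): Mode.INTERACTIVE,   # personalized greeting
    (Mode.RECOGNITION, "unknown_face"): Mode.INTERACTIVE, # generic guidance
    (Mode.RECOGNITION, "timeout"): Mode.STANDBY,
    (Mode.INTERACTIVE, "session_end"): Mode.STANDBY,
}

def step(mode, event):
    """Return the next mode; unhandled events leave the mode unchanged."""
    return TRANSITIONS.get((mode, event), mode)

# A typical tour interaction: wake word, face identified, session ends.
mode = Mode.STANDBY
for event in ["wake_word", "known_face", "session_end"]:
    mode = step(mode, event)
```

Keeping the transition table as data rather than branching logic makes it easy to add modes or triggers later, which matches the modular-design goal discussed below.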
Motivated by the need to engage students and promote interest in the RAI department, which receives over 1,000 visitors annually, the system addresses accessibility gaps where human staff may be unavailable. With wake-word detection, face and gesture recognition, and LiDAR-based obstacle detection, the robot ensures seamless communication in English alongside safe and efficient navigation. The Retrieval-Augmented Generation (RAG) human-interaction system communicates with the mobile robot, built on ROS1 Noetic, using the MQTT protocol over Ethernet. It publishes navigation goals to the move_base module in ROS, which autonomously handles navigation and obstacle avoidance; this architecture is shown in Figure 2. The framework includes a robust back-end that combines MongoDB for information storage and retrieval with a RAG mechanism (Thüs et al., 2024) to process program curriculum information in the form of PDFs, ensuring that the robot provides accurate and contextually relevant answers to user queries. Furthermore, smiling-face animations and text-to-speech (TTS, BotNoi) enhance user engagement; engagement metrics derived from observational studies and surveys showed significant improvements in user satisfaction and accessibility. This paper also discusses the system's capability to operate in dynamic environments and human-centric spaces, for example handling interruptions while navigating during a mission. The modular design allows easy integration of additional features, such as gesture recognition and hardware upgrades, ensuring long-term scalability. However, limitations such as high initial setup costs and dependency on specific hardware configurations are acknowledged. Future work will focus on enhancing the system's adaptability to diverse languages, expanding its use cases, and exploring collaborative interactions between multiple robots.
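To illustrate the MQTT-to-ROS bridge described above, the sketch below builds the kind of JSON payload the RAG interaction system could publish as a navigation goal. The topic name and field layout are assumptions modeled on ROS1's geometry_msgs/PoseStamped; the real bridge on the robot side would parse this and republish it to move_base. Only the serialization is shown, so no MQTT broker is needed to run it.

```python
import json

GOAL_TOPIC = "robot/nav_goal"  # hypothetical MQTT topic name

def make_goal(x, y, quat_z=0.0, quat_w=1.0, frame="map"):
    """Serialize a 2D navigation goal as JSON for MQTT transport.

    Orientation is a planar quaternion (only z and w vary for yaw),
    mirroring the geometry_msgs/PoseStamped layout used by move_base.
    """
    return json.dumps({
        "header": {"frame_id": frame},
        "pose": {
            "position": {"x": x, "y": y, "z": 0.0},
            "orientation": {"x": 0.0, "y": 0.0,
                            "z": quat_z, "w": quat_w},
        },
    })

# The interaction system would publish `payload` on GOAL_TOPIC; a ROS
# node subscribed via the MQTT bridge would decode it and forward a
# PoseStamped to move_base for autonomous navigation.
payload = make_goal(2.5, -1.0)
decoded = json.loads(payload)
```

Sending plain JSON keeps the interaction system decoupled from ROS message definitions, which is one reason MQTT-over-Ethernet bridges of this kind are common between ROS and non-ROS components.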
In conclusion, the proposed interactive robotic assistant represents a significant step forward in bridging the gap between human needs and technological advancements. By combining cutting-edge AI technologies with practical hardware solutions, this work offers a scalable, efficient, and user-friendly system that enhances accessibility and user engagement in human-centric spaces.

Faculty of Information Technology
Traditional methods of public relations and learning often lack engagement and fail to provide users with a deep, immersive experience. They also struggle to reach a wide audience, especially those unable to visit the physical location. This project aims to solve the issues of accessibility and awareness surrounding the institution's Chalermphrakiat Hall and historical exhibition. Using metaverse technology to simulate important locations allows users to explore the site and view key information in a virtual format, thereby increasing engagement among students, staff, alumni, and the general public. The metaverse system is developed in Unity, a powerful game engine well suited to building metaverse environments, which enables the creation of an interactive and realistic virtual space; Unity also handles physics, lighting, and sound, further enhancing realism. Additionally, the system is integrated with web browsers using WebGL technology, so the project developed in Unity can be accessed directly through a browser: users can visit and interact with the metaverse environment from anywhere without installing additional software. The developers have thus created a metaverse system that provides a realistic and engaging learning experience, strengthening public relations efforts and efficiently fostering a strong connection with the institution.