KMITL Innovation Expo 2025

Productivity Improvement in Warehouse Using Power BI and Power Automate


Abstract

This cooperative education project aims to speed up and simplify the verification of stock issuance, transfers, distributions, and receipts in the warehouse, addressing the wasted time and delays in current operational processes. Analysis found that SAP, the system in use, involves complex procedures requiring specialized expertise, and although the company developed the iWarehouse system to improve efficiency, delays and procedural complexity persist. To resolve these challenges, Power BI was used to visualize stock issuance, transfer, distribution, and receipt data, allowing warehouse staff to work more efficiently by minimizing waste and accelerating processes. Additionally, Power Automate was integrated to automatically extract received stock numbers from incoming emails, reducing the errors and delays caused by manual data entry. The results indicate a significant increase in employee efficiency and a noticeable reduction in wasted time. Upon project completion, the findings and development approach will be handed over to the company for further enhancement.
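Power Automate flows are built in a visual designer rather than in code, so no flow definition from the project is reproduced here; the parsing step the flow automates can, however, be sketched in Python. The email layout and the 10-digit document-number format below are illustrative assumptions, not details taken from the project.

```python
import re

# Hypothetical email body; the real notification format used by the company
# is not shown in the project, so this layout and the 10-digit SAP-style
# document-number format are illustrative assumptions only.
EMAIL_BODY = """\
Subject: Goods Receipt Notification
Material document 5000123456 has been posted.
Material document 5000123457 has been posted.
"""

def extract_document_numbers(body: str) -> list[str]:
    """Pull SAP-style material document numbers out of an email body."""
    # Assumed pattern: a 10-digit number beginning with 5; adjust to the
    # actual numbering scheme in use.
    return re.findall(r"\b5\d{9}\b", body)

print(extract_document_numbers(EMAIL_BODY))
# In Power Automate the equivalent steps would be a "When a new email
# arrives" trigger, an expression or "Apply to each" loop to capture the
# numbers, and an action appending them to the dataset that Power BI reads.
```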

Objective

Warehouse operations in the Procurement and Materials Management Division (case study: PTT Public Company Limited, Chonburi Operations Center) involve a large amount of wasted time and complex work processes, and still lack modern tools and technologies.

Other Innovations

J-LEARN Website Extension for Reservation of Assignment and Project Submission

Faculty of Information Technology


In an era where information technology plays a significant role in many fields, the use of websites to support education has become essential. This case study focuses on the development of a website for the Object Oriented Programming (OOP) course at King Mongkut's Institute of Technology Ladkrabang (KMITL) to help instructors and teaching assistants grade and assess student work and track student progress. The developed system reduces grading errors, ensures accurate and timely assessments, and enables efficient monitoring of students' academic performance. Additionally, the platform allows students to schedule project submissions and track their grades, while providing statistical data on student performance. This development aims to modernize and enhance the quality of teaching and learning.

A Human-engaging Robotic Interactive Assistant

Faculty of Engineering


The integration of intelligent robotic systems into human-centric environments, such as laboratories, hospitals, and educational institutions, has become increasingly important due to the growing demand for accessible and context-aware assistants. However, current solutions often lack scalability (for instance, relying on specialized personnel to repeatedly answer the same questions as administrators for specific departments) and adaptability to dynamic environments that require real-time situational responses. This study introduces a novel framework for an interactive robotic assistant (Beckerle et al., 2017) designed to assist during laboratory tours and mitigate the challenges posed by limited human resources in providing comprehensive information to visitors. The proposed system operates through multiple modes, including standby mode and recognition mode, to ensure seamless interaction and adaptability in various contexts. In standby mode, the robot signals readiness with a smiling-face animation while patrolling predefined paths or conserving energy when stationary. Advanced obstacle detection ensures safe navigation in dynamic environments. Recognition mode activates through gestures or wake words, using advanced computer vision and real-time speech recognition to identify users. Facial recognition further classifies individuals as known or unknown, providing personalized greetings or context-specific guidance to enhance user engagement. The proposed robot and its 3D design are shown in Figure 1. In interactive mode, the system integrates advanced technologies, including automatic speech recognition (ASR, Whisper), natural language processing (NLP), and a large language model, Ollama 3.2 (LLM Predictor, 2025), to provide a user-friendly, context-aware, and adaptable experience.
Motivated by the need to engage students and promote interest in the RAI department, which receives over 1,000 visitors annually, the robot addresses accessibility gaps where human staff may be unavailable. With wake-word detection, face and gesture recognition, and LiDAR-based obstacle detection, the robot ensures seamless communication in English, alongside safe and efficient navigation. The Retrieval-Augmented Generation (RAG) human interaction system communicates with the mobile robot, built on ROS1 Noetic, using the MQTT protocol over Ethernet. It publishes navigation goals to the move_base module in ROS, which autonomously handles navigation and obstacle avoidance. A diagram is shown in Figure 2. The framework includes a robust back-end architecture that combines MongoDB for information storage and retrieval with a RAG mechanism (Thüs et al., 2024) to process program curriculum information in the form of PDFs. This ensures that the robot provides accurate and contextually relevant answers to user queries. Furthermore, smiling-face animations and text-to-speech (TTS, BotNoi) enhanced user engagement; engagement metrics derived from a combination of observational studies and surveys highlighted significant improvements in user satisfaction and accessibility. This paper also discusses the system's capability to operate in dynamic environments and human-centric spaces, for example handling interruptions while navigating during a mission. The modular design allows for easy integration of additional features, such as gesture recognition and hardware upgrades, ensuring long-term scalability. However, limitations such as high initial setup costs and dependency on specific hardware configurations are acknowledged. Future work will focus on enhancing the system's adaptability to diverse languages, expanding its use cases, and exploring collaborative interactions between multiple robots.
In conclusion, the proposed interactive robotic assistant represents a significant step forward in bridging the gap between human needs and technological advancements. By combining cutting-edge AI technologies with practical hardware solutions, this work offers a scalable, efficient, and user-friendly system that enhances accessibility and user engagement in human-centric spaces.
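The abstract describes publishing navigation goals over MQTT to the move_base module in ROS. A minimal sketch of the sending side is shown below; the JSON field layout and the `robot/nav_goal` topic name are assumptions for illustration, not details from the project, and the bridge on the robot is assumed to republish the payload as a `geometry_msgs/PoseStamped` on `move_base_simple/goal`.

```python
import json
import math

def make_goal_payload(x: float, y: float, yaw: float) -> str:
    """Encode a 2D navigation goal as JSON for an MQTT-to-ROS bridge.

    The field layout here is an illustrative assumption; the bridge on the
    robot side would translate it into a geometry_msgs/PoseStamped for
    move_base.
    """
    # move_base expects orientation as a quaternion; for a planar robot,
    # only the yaw component is non-trivial.
    return json.dumps({
        "frame_id": "map",
        "position": {"x": x, "y": y, "z": 0.0},
        "orientation": {
            "x": 0.0,
            "y": 0.0,
            "z": math.sin(yaw / 2.0),
            "w": math.cos(yaw / 2.0),
        },
    })

payload = make_goal_payload(2.0, 1.5, math.pi / 2)
print(payload)
# Publishing would use an MQTT client over Ethernet, e.g. (not executed here):
#   import paho.mqtt.client as mqtt
#   client = mqtt.Client(); client.connect("robot.local")
#   client.publish("robot/nav_goal", payload)
```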

SignGen: An LLM-Based Thai Sign Language Generator

Faculty of Engineering


The Thai Sign Language Generation System aims to create a comprehensive 3D modeling and animation platform that translates Thai sentences into dynamic and accurate representations of Thai Sign Language (TSL) gestures. This project enhances communication for the Thai deaf community by leveraging a landmark-based approach using a Vector Quantized Variational Autoencoder (VQVAE) and a Large Language Model (LLM) for sign language generation. The system first trains a VQVAE encoder using landmark data extracted from sign videos, allowing it to learn compact latent representations of TSL gestures. These encoded representations are then used to generate additional landmark-based sign sequences, effectively expanding the training dataset using the BigSign ThaiPBS dataset. Once the dataset is augmented, an LLM is trained to output accurate landmark sequences from Thai text inputs, which are then used to animate a 3D model in Blender, ensuring fluid and natural TSL gestures. The project is implemented using Python, incorporating MediaPipe for landmark extraction, OpenCV for real-time image processing, and Blender’s Python API for 3D animation. By integrating AI, VQVAE-based encoding, and LLM-driven landmark generation, this system aspires to bridge the communication gap between written Thai text and expressive TSL gestures, providing the Thai deaf community with an interactive, real-time sign language animation platform.
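The pipeline above extracts landmarks with MediaPipe and feeds them to a VQVAE encoder. The sketch below shows only the intermediate step of flattening one frame's landmarks into a fixed-length vector of the kind an encoder would consume; the landmark counts match MediaPipe's pose (33) and hand (21) models, but the zero-filling scheme is an illustrative assumption. The MediaPipe call itself appears only as a comment since it requires the library and a video frame.

```python
# Flattening per-frame landmarks into a fixed-length feature vector, the form
# a VQVAE encoder typically consumes. The landmark source would be MediaPipe
# Holistic (pose + both hands), e.g.:
#   import mediapipe as mp
#   holistic = mp.solutions.holistic.Holistic()
#   results = holistic.process(rgb_frame)
# The counts below (33 pose + 21 per hand) match MediaPipe's models, but the
# zero-filling normalization is an illustrative assumption.

def flatten_landmarks(pose, left_hand, right_hand):
    """Concatenate (x, y, z) landmark tuples into one flat list.

    Missing parts (e.g. a hand out of frame) are zero-filled so every frame
    yields the same vector length: (33 + 21 + 21) * 3 = 225 values.
    """
    def coords(points, expected):
        pts = points if points is not None else [(0.0, 0.0, 0.0)] * expected
        return [c for p in pts for c in p]
    return coords(pose, 33) + coords(left_hand, 21) + coords(right_hand, 21)

# Dummy frame: pose and left-hand landmarks present, right hand missing.
pose = [(0.5, 0.5, 0.0)] * 33
left = [(0.4, 0.6, 0.0)] * 21
vec = flatten_landmarks(pose, left, None)
print(len(vec))  # 225
```

Stacking these per-frame vectors over time yields the gesture sequences the VQVAE compresses into latent codes, which the LLM then learns to generate from Thai text.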
