
The capture of a target spacecraft by a chaser is an on-orbit docking operation that requires an accurate, reliable, and robust object recognition algorithm. Vision-based guidance of spacecraft relative motion during close-proximity maneuvers has been applied successively, using dynamic modeling, in spacecraft on-orbit servicing systems. This research constructs a vision-based pose estimation model that performs image processing via a deep convolutional neural network. The pose estimation model was constructed by repurposing a modified pretrained GoogLeNet model with an available Unreal Engine 4 rendered dataset of the Soyuz spacecraft. In the implementation, the convolutional neural network learns from the data samples to create correlations between the images and the spacecraft’s six degrees-of-freedom parameters. The experiment compared an exponential-based loss function and a weighted Euclidean-based loss function. Using the weighted Euclidean-based loss function, the implemented pose estimation model achieved moderately high performance, with a position accuracy of 92.53 percent and an error of 1.2 m. The attitude prediction accuracy reaches 87.93 percent, and the errors in the three Euler angles do not exceed 7.6 degrees. This research can contribute to spacecraft detection and tracking problems. Although the finished vision-based model is specific to the environment of the synthetic dataset, the model could be trained further to address actual docking operations in the future.
Docking has been defined as “when one incoming spacecraft rendezvous with another spacecraft and flies a controlled collision trajectory in such a manner to align and mesh the interface mechanisms”, and, alternatively, as an on-orbit service that connects two free-flying man-made space objects. The service should be supported by an accurate, reliable, and robust position and orientation (pose) estimation system. Therefore, pose estimation is an essential process in an on-orbit spacecraft docking operation. The position estimate can be obtained by the most well-known cooperative measurement, a Global Positioning System (GPS), while the spacecraft attitude can be measured by an installed Inertial Measurement Unit (IMU). However, these methods are not applicable to non-cooperative targets. Many studies and missions have focused on mutually cooperative satellites, but the demand for handling non-cooperative satellites may increase in the future. Therefore, determining the attitude of non-cooperative spacecraft is a challenging technological research problem that can improve spacecraft docking operations. One traditional method, based on spacecraft control principles, is to estimate the position and attitude of a spacecraft using the equations of motion, which are functions of time. However, prediction using a spacecraft equation of motion requires support from sensor fusion to achieve the highest accuracy of the state estimation algorithm. For non-cooperative spacecraft, vision-based pose estimators are currently being developed for space applications with faster and more powerful computational resources.
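The abstract above compares loss functions for regressing a six degrees-of-freedom pose. As an illustration of the general idea (the paper's exact formulation and weighting are not given here), a weighted Euclidean pose loss can combine the position error in meters and the Euler-angle error in degrees through a balancing weight; the function name and the weight value `beta` below are hypothetical:

```python
import numpy as np

def weighted_euclidean_pose_loss(pred, target, beta=10.0):
    """Weighted Euclidean loss over a 6-DOF pose vector.

    pred, target: arrays of shape (..., 6) holding (x, y, z) position in
    meters followed by three Euler angles in degrees.
    beta: hypothetical weight balancing attitude error against position error.
    """
    pos_err = np.linalg.norm(pred[..., :3] - target[..., :3], axis=-1)
    att_err = np.linalg.norm(pred[..., 3:] - target[..., 3:], axis=-1)
    return pos_err + beta * att_err
```

In training, such a loss would be minimized over batches of rendered images so the network jointly learns both pose components rather than optimizing position and attitude separately.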

Faculty of Engineering
The Thai Sign Language Generation System aims to create a comprehensive 3D modeling and animation platform that translates Thai sentences into dynamic and accurate representations of Thai Sign Language (TSL) gestures. This project enhances communication for the Thai deaf community by leveraging a landmark-based approach using a Vector Quantized Variational Autoencoder (VQVAE) and a Large Language Model (LLM) for sign language generation. The system first trains a VQVAE encoder on landmark data extracted from sign videos, allowing it to learn compact latent representations of TSL gestures. These encoded representations are then used to generate additional landmark-based sign sequences, effectively augmenting the BigSign ThaiPBS training dataset. Once the dataset is augmented, an LLM is trained to output accurate landmark sequences from Thai text inputs, which are then used to animate a 3D model in Blender, ensuring fluid and natural TSL gestures. The project is implemented in Python, incorporating MediaPipe for landmark extraction, OpenCV for real-time image processing, and Blender’s Python API for 3D animation. By integrating AI, VQVAE-based encoding, and LLM-driven landmark generation, this system aspires to bridge the communication gap between written Thai text and expressive TSL gestures, providing the Thai deaf community with an interactive, real-time sign language animation platform.
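The core of the VQVAE stage described above is vector quantization: each latent vector produced by the encoder is snapped to its nearest entry in a learned codebook, giving the discrete tokens the LLM can later predict. A minimal sketch of that lookup step, with hypothetical array shapes and no claim about the project's actual codebook size:

```python
import numpy as np

def quantize(latents, codebook):
    """Map each latent vector to its nearest codebook entry (L2 distance).

    latents: (N, D) encoder outputs for N landmark frames.
    codebook: (K, D) learned codebook of K discrete embeddings.
    Returns (indices, quantized): token indices (N,) and the snapped vectors (N, D).
    """
    # Pairwise distances between every latent and every codebook entry.
    d = np.linalg.norm(latents[:, None, :] - codebook[None, :, :], axis=-1)
    idx = d.argmin(axis=1)
    return idx, codebook[idx]
```

The resulting index sequence is what turns continuous landmark motion into a discrete vocabulary, which is what makes training a text-to-sign LLM tractable.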

College of Advanced Manufacturing Innovation
This research on improving the strength of solid electrolytes aims to enhance the properties of solid electrolyte materials produced from cement, together with additives that develop the cement structure so that it can generate electricity. The main components are sodium chloride (NaCl) and graphite, which give the material the ability to generate a weak electrical current. The objective is to develop an electricity-generating flooring material. The study involves preparing a mixture of cement, water, sodium chloride (NaCl), and graphite to enhance the material’s electrical conductivity. This research is expected to lead to concrete flooring capable of generating electricity, which could be extended to further applications in the future.

Faculty of Science
With the urgent need for rapid screening of Aflatoxin B1 (AFB1) due to its association with increased liver cirrhosis and hepatocellular carcinoma cases from contaminated agricultural foods, we propose a novel electrochemical aptasensor. This aptasensor is based on trimetallic AuPt-Ru nanoparticles supported by reduced graphene oxide (AuPt-Ru/RGO) modified on a low-cost, disposable gold-leaf electrode (GLEAuPt-Ru/RGO) for detection of AFB1. The trimetallic AuPt-Ru nanoparticles were synthesized using an ultrasonic-driven chemical reduction method. The synthesized AuPt-Ru exhibited a waxberry-like appearance, with an AuPt core-shell structure and ruthenium dispersed over the particles; the average particle size was 57.35 ± 8.24 nm. The AuPt-Ru was integrated into RGO sheets (inner diameter of 0.5 to 1.6 µm) to enhance electron transfer efficiency and increase the specific surface area for immobilizing the thiol-5’-terminated modified aptamer (Apt) that targets AFB1. With a large electrochemical surface area and low electrochemical impedance, GLEAuPt-Ru/RGO displays ultra-high sensitivity for AFB1 detection. Differential pulse voltammetry (DPV) measurements revealed a linear range for AFB1 detection from 0.3 to 30.0 pg mL-1 (R2 = 0.9972), with a limit of detection (LOD, S/N = 3) of 0.009 pg mL-1 and a limit of quantification (LOQ, S/N = 10) of 0.031 pg mL-1. The developed aptasensor also demonstrated excellent accuracy in real agricultural products, including dried red chili, garlic, peanut, pepper, and Thai jasmine rice, achieving recovery rates between 94.6 and 107.9%. The performance of the fabricated aptamer-based GLEAuPt-Ru/RGO is comparable to that of a modified commercial electrode, suggesting great potential for detecting AFB1 in agricultural products.
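The LOD and LOQ figures above follow the standard S/N = 3 and S/N = 10 conventions, which are commonly computed from a linear calibration curve as 3s/m and 10s/m, where m is the slope and s a noise estimate (here taken as the residual standard deviation of the fit). A minimal sketch with made-up calibration data, not the paper's measurements:

```python
import numpy as np

def lod_loq(conc, signal):
    """Estimate LOD (S/N = 3) and LOQ (S/N = 10) from a linear calibration.

    conc: analyte concentrations (e.g., pg/mL); signal: DPV peak currents.
    Uses LOD = 3*s/m and LOQ = 10*s/m, where m is the calibration slope and
    s is the residual standard deviation of the linear fit.
    """
    m, b = np.polyfit(conc, signal, 1)          # least-squares line
    resid = signal - (m * conc + b)             # fit residuals
    s = resid.std(ddof=2)                       # 2 fitted parameters
    return 3 * s / m, 10 * s / m
```

With this definition the LOQ is always (10/3) times the LOD, so reported LOD/LOQ pairs that deviate from that ratio (as in many papers) typically use a separate blank-based noise estimate instead of fit residuals.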