
The capture of a target spacecraft by a chaser is an on-orbit docking operation that requires an accurate, reliable, and robust object recognition algorithm. Vision-based guidance of spacecraft relative motion during close-proximity maneuvers has been applied successively, together with dynamic modeling, in spacecraft on-orbit service systems. This research constructs a vision-based pose estimation model that performs image processing via a deep convolutional neural network. The pose estimation model was built by repurposing a modified pretrained GoogLeNet model with the available Unreal Engine 4 rendered dataset of the Soyuz spacecraft. In the implementation, the convolutional neural network learns from the data samples to create correlations between the images and the spacecraft’s six degrees-of-freedom parameters. The experiment compared an exponential-based loss function and a weighted Euclidean-based loss function. Using the weighted Euclidean-based loss function, the implemented pose estimation model achieved moderately high performance, with a position accuracy of 92.53 percent and an error of 1.2 m. The attitude prediction accuracy reached 87.93 percent, and the errors in the three Euler angles did not exceed 7.6 degrees. This research can contribute to spacecraft detection and tracking problems. Although the finished vision-based model is specific to the synthetic dataset environment, it could be trained further to address actual docking operations in the future.
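The abstract does not specify the exact form of the weighted Euclidean-based loss. A minimal sketch of one plausible formulation, assuming the 6-DOF pose is a position vector plus three Euler angles and that a hypothetical weight `beta` balances the attitude term against the position term, is:

```python
import numpy as np

def weighted_euclidean_pose_loss(pred, target, beta=10.0):
    """Weighted Euclidean loss over a 6-DOF pose vector.

    pred, target : arrays of shape (6,) -> [x, y, z, roll, pitch, yaw]
    beta         : hypothetical weight trading off attitude vs. position error
    """
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    pos_err = np.linalg.norm(pred[:3] - target[:3])   # position error (m)
    att_err = np.linalg.norm(pred[3:] - target[3:])   # attitude error (deg or rad)
    return pos_err + beta * att_err
```

In training, `beta` would typically be tuned so that neither the translation nor the rotation term dominates the gradient.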
In one definition, docking occurs “when one incoming spacecraft rendezvous with another spacecraft and flies a controlled collision trajectory in such a manner to align and mesh the interface mechanisms”; another defines docking as an on-orbit service that connects two free-flying man-made space objects. The service should be supported by an accurate, reliable, and robust position and orientation (pose) estimation system. Therefore, pose estimation is an essential process in an on-orbit spacecraft docking operation. Position estimates can be obtained from the best-known cooperative measurement, the Global Positioning System (GPS), while spacecraft attitude can be measured by an installed Inertial Measurement Unit (IMU). However, these methods are not applicable to non-cooperative targets. Many studies and missions have focused on mutually cooperative satellites, but the demand for servicing non-cooperative satellites may increase in the future. Determining the attitude of non-cooperative spacecraft is therefore a challenging technological research problem whose solution can improve spacecraft docking operations. One traditional method, based on spacecraft control principles, is to estimate the position and attitude of a spacecraft using the equations of motion, which are a function of time. However, prediction from the spacecraft equations of motion requires support from sensor fusion to achieve the highest accuracy in the state estimation algorithm. For non-cooperative spacecraft, vision-based pose estimators are currently being developed for space applications, aided by faster and more powerful computational resources.

Faculty of Business Administration
BrushXchange is a toothbrush brand dedicated to reducing plastic waste in Thailand by offering toothbrushes made from recycled plastic with replaceable bristles. These products help minimize the waste generated by traditional toothbrushes. The design is modern and user-friendly, emphasizing durability, comfort, and affordability, making it appropriate for health-conscious and environmentally aware consumers. The brand aims to drive change in the oral care industry by providing high-quality products at accessible prices. Its marketing strategy focuses on social media platforms such as Instagram and TikTok and on collaborating with organizations that promote sustainability. The product is distributed through retail stores such as Lotus’s and Tops. BrushXchange also prioritizes environmental responsibility by using recycled paper packaging and organizing sustainability campaigns. The brand’s long-term goal is to become a widely recognized name in the eco-friendly toothbrush market in Thailand while encouraging sustainable living habits within society.

Faculty of Information Technology
This research presents a deep learning method for generating automatic captions from the segmentation of car part damage. It analyzes car images with a unified framework to identify and describe damage accurately and quickly. The development builds on the research “GRiT: A Generative Region-to-text Transformer for Object Understanding,” which has been adapted for car image analysis so that the model generates precise descriptions for different areas of the car, from damaged parts to individual components. The researchers focus on developing deep learning techniques for automatic caption generation and damage segmentation in car damage analysis. The aim is to enable precise identification and description of damage on vehicles, thereby increasing speed and reducing the workload of experts in damage assessment. Traditionally, damage assessment relies solely on expert evaluations, which are costly and time-consuming. To address this issue, we propose utilizing data generation for training, automatic caption creation, and damage segmentation within an integrated framework. The researchers created a new dataset from CarDD, which is specifically designed for car damage detection. This dataset includes labeled damage on vehicles, and the researchers used it to train models for segmenting car parts and accurately labeling each part and damage category. Preliminary results show that the model’s capability in automatic caption generation and damage segmentation for car damage analysis is satisfactory. With these results, the model serves as an essential foundation for future development, aimed not only at enhancing performance in damage segmentation and caption generation but also at improving the model’s adaptability to the diversity of damage occurring on various surfaces and parts of vehicles. This will allow the system to be applied more broadly to different vehicle types and damage conditions in the future.
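The abstract describes training samples that pair segmented car regions with labeled damage categories and free-text captions. A minimal sketch of what one such region-to-text annotation record might look like (all names and fields are hypothetical, not taken from the CarDD or GRiT releases) is:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class RegionAnnotation:
    """One annotated region: a car part or damage area plus its caption."""
    bbox: Tuple[int, int, int, int]          # (x, y, width, height) in pixels
    polygon: List[Tuple[int, int]]           # segmentation mask vertices
    category: str                            # e.g. "dent", "scratch", "door"
    caption: str                             # e.g. "dent on the front left door"

@dataclass
class CarDamageSample:
    """One training image with all of its region annotations."""
    image_path: str
    regions: List[RegionAnnotation] = field(default_factory=list)
```

A region-to-text model would then be trained to emit each region's `caption` conditioned on the image crop defined by its `bbox` or `polygon`.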

Faculty of Agricultural Technology
This experiment aimed to identify polymer types suitable for coating with chlorophyll extract and to assess the quality of cucumber seeds after coating. The experiment was planned as a Completely Randomized Design (CRD) with four replications and five treatments: seeds coated with one of four polymers, Polyvinylpyrrolidone, Sodium Alginate, Carboxymethyl Cellulose, or Hydroxypropyl Methylcellulose (HPMC), each combined with chlorophyll, and uncoated seeds serving as the control. The coating substance was prepared by extracting chlorophyll from mango leaves and mixing it with each polymer at a 1% concentration, using an 8% concentration of chlorophyll extract. The properties of each coating formulation, such as pH and viscosity, were examined before the cucumber seeds were coated with a rotary disk coater (model RRC150) at a rate of 1,100 milliliters per kilogram of seeds. The seeds were then dried back to their initial moisture level with a hot air blower, and seed quality was assessed in several respects: seed moisture, germination rate under laboratory conditions, germination index, and seed fluorescence under a portable ultraviolet light illuminator, as well as light emission spectrum analysis with a spectrophotometer. The experiment found that every polymer type could form a film together with chlorophyll, with pH and viscosity appropriate for coating, without affecting seed quality, and produced fluorescence on the seed surface both under portable ultraviolet light and in spectral emission analysis with a spectrophotometer. Using HPMC as the film-forming agent with chlorophyll was the most suitable method, giving the greatest enhancement of seed fluorescence.
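The abstract reports germination rate and germination index without giving their formulas. A sketch of one common formulation of these seed-quality metrics (function names are hypothetical; the study may have used a different index definition) is:

```python
def germination_percentage(germinated, total):
    """Final germination as a percentage of seeds sown."""
    return 100.0 * germinated / total

def germination_index(daily_counts):
    """Germination index under one common formulation:
    GI = sum over days of (seeds newly germinated that day / day number),
    so earlier germination contributes more to the score.

    daily_counts : list of seeds newly germinated on day 1, day 2, ...
    """
    return sum(n / day for day, n in enumerate(daily_counts, start=1))
```

For example, a lot in which 10 seeds germinate on day 1 and 5 on day 2 scores a higher index than one reaching the same total later, reflecting seed vigor as well as viability.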