
The capture of a target spacecraft by a chaser is an on-orbit docking operation that requires an accurate, reliable, and robust object recognition algorithm. Vision-based guidance of spacecraft relative motion during close-proximity maneuvers has been applied repeatedly, using dynamic modeling, as part of on-orbit servicing systems. This research constructs a vision-based pose estimation model that performs image processing with a deep convolutional neural network. The pose estimation model was built by repurposing a modified pretrained GoogLeNet model with an available Unreal Engine 4-rendered dataset of the Soyuz spacecraft. In the implementation, the convolutional neural network learns from the data samples to correlate the images with the spacecraft's six degrees-of-freedom parameters. The experiment compared an exponential-based loss function with a weighted Euclidean-based loss function. Using the weighted Euclidean-based loss function, the implemented pose estimation model achieved moderately high performance, with a position accuracy of 92.53 percent and an error of 1.2 m. The attitude prediction accuracy reached 87.93 percent, and the errors in the three Euler angles did not exceed 7.6 degrees. This research can contribute to spacecraft detection and tracking problems. Although the finished vision-based model is specific to the synthetic dataset environment, it could be trained further to address actual docking operations in the future.
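The weighted Euclidean-based loss mentioned above can be sketched as follows. This is a minimal illustration, not the study's actual implementation: the pose layout `[x, y, z, roll, pitch, yaw]` and the weight `beta` balancing attitude error against position error are assumptions, since the abstract does not specify them.

```python
import numpy as np

def weighted_euclidean_pose_loss(pred, target, beta=10.0):
    """Weighted Euclidean loss over 6-DoF pose vectors.

    pred, target: arrays of shape (N, 6) holding [x, y, z, roll, pitch, yaw]
    (layout assumed for illustration).
    beta: hypothetical weight trading off attitude error vs. position error.
    """
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    # Euclidean norm of the position residual (first three components)
    pos_err = np.linalg.norm(pred[:, :3] - target[:, :3], axis=1)
    # Euclidean norm of the Euler-angle residual (last three components)
    att_err = np.linalg.norm(pred[:, 3:] - target[:, 3:], axis=1)
    return float(np.mean(pos_err + beta * att_err))
```

In PoseNet-style regressors, `beta` is tuned so that the position and orientation terms contribute errors of comparable scale during training.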
Docking has been defined as "when one incoming spacecraft rendezvous with another spacecraft and flies a controlled collision trajectory in such a manner as to align and mesh the interface mechanisms", and also as an on-orbit service that connects two free-flying man-made space objects. The service must be supported by an accurate, reliable, and robust position and orientation (pose) estimation system; pose estimation is therefore an essential process in an on-orbit spacecraft docking operation. Position can be obtained through the best-known cooperative measurement, the Global Positioning System (GPS), while spacecraft attitude can be measured by an installed Inertial Measurement Unit (IMU). However, these methods are not applicable to non-cooperative targets. Many studies and missions have focused on mutually cooperative satellites, but the demand for servicing non-cooperative satellites may increase in the future. Determining the attitude of non-cooperative spacecraft is therefore a challenging research problem whose solution could improve spacecraft docking operations. One traditional method, based on spacecraft control principles, is to estimate the position and attitude using the equations of motion as functions of time. However, prediction with the equations of motion requires support from sensor fusion to achieve high accuracy in the state estimation algorithm. For non-cooperative spacecraft, vision-based pose estimators are currently being developed for space applications, enabled by faster and more powerful computational resources.

Faculty of Agricultural Technology
Durian is a crucial economic crop of Thailand and one of the world's most exported agricultural products. Producing high-quality durian, however, requires maintaining the health of durian trees, keeping them strong and disease-free to optimize productivity and minimize damage to both the tree and its fruit. Among the diseases affecting durian, foliar diseases are some of the most common and fastest spreading, directly impacting tree growth and fruit quality. Monitoring and controlling leaf diseases is therefore essential for preserving durian quality. This study applies image analysis technology combined with artificial intelligence (AI) to classify diseases in durian leaves, enabling farmers to diagnose diseases independently without relying on experts. The classification covers three categories: healthy leaves (H), leaves infected with anthracnose (A), and leaves affected by algal spot (S). To develop the classification model, convolutional neural network (CNN) architectures ResNet-50, GoogLeNet, and AlexNet were employed. Experimental results indicate classification accuracies of 93.57%, 93.95%, and 68.69% for ResNet-50, GoogLeNet, and AlexNet, respectively.
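As an illustrative sketch (not the study's reported code), the three-class accuracies above could be tallied from model predictions with a confusion matrix. The integer encoding 0 = healthy (H), 1 = anthracnose (A), 2 = algal spot (S) is an assumption for this example:

```python
import numpy as np

CLASSES = ["H", "A", "S"]  # healthy, anthracnose, algal spot (encoding assumed)

def confusion_matrix(true_idx, pred_idx, n_classes=len(CLASSES)):
    """Build a confusion matrix: rows are true classes, columns are predictions."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(true_idx, pred_idx):
        cm[t, p] += 1
    return cm

def overall_accuracy(cm):
    """Percentage of correctly classified samples (diagonal over total)."""
    return float(np.trace(cm) / cm.sum() * 100.0)
```

The per-class rows also expose which disease pairs a weaker model (such as AlexNet here) confuses most often, which a single accuracy figure hides.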

Prince of Chumphon Campus (Chumphon Khet Udomsak)
This project aims to design and develop a propulsion system for agricultural equipment using RFID technology and to evaluate its movement performance on different surfaces, namely concrete and grass. The experiment examines the tag detection range at transmission power levels of 20 dBm, 23 dBm, and 26 dBm, as well as the impact of antenna angle on detection efficiency. The system was also tested in three movement scenarios (straight path, left turn, and right turn) at distances of 2, 4, and 6 meters. The results indicate that the system achieved its highest average speed, 0.4736 m/s, with an average turning angle of 91.6°, when moving in a straight path on concrete at a distance of 4 meters. On grass at the same distance, the average speed was 0.4483 m/s, with an average turning angle of 91.1°. For left and right turns, movement on concrete generally exhibited a higher average speed than on grass, particularly at 4 meters, where differences in turning angle were also observed. This study provides insight into the factors affecting the movement of agricultural mowing equipment and serves as a foundation for enhancing the efficiency of propulsion systems in future developments.

Faculty of Agricultural Technology
This experiment aimed to identify polymer types suitable for coating with chlorophyll extract and to assess the quality of cucumber seeds after coating. The experiment followed a Completely Randomized Design (CRD) with four replications and five treatments: seeds coated with one of four polymers (Polyvinylpyrrolidone, Sodium Alginate, Carboxymethyl Cellulose, or Hydroxypropyl Methylcellulose (HPMC)), each combined with chlorophyll, plus uncoated seeds as the control. The coating substance was prepared by extracting chlorophyll from mango leaves and mixing it with each polymer at a 1% concentration, using an 8% concentration of chlorophyll extract. The pH and viscosity of each coating formulation were examined before the cucumber seeds were coated with a rotary disk coater (model RRC150) at a rate of 1,100 milliliters per kilogram of seeds. The seeds were then dried back to their initial moisture level with a hot air blower, and seed quality was assessed for moisture content, germination rate under laboratory conditions, germination index, and fluorescence under a portable ultraviolet illuminator, together with light emission spectrum analysis using a spectrophotometer. Every polymer tested formed a film with chlorophyll whose pH and viscosity were appropriate for coating without affecting seed quality, and the coated seeds showed surface fluorescence both under portable ultraviolet light and in the spectral emission analysis. Using HPMC as the film-forming agent with chlorophyll was the most suitable treatment, enhancing seed fluorescence efficiency the most.