
The capture of a target spacecraft by a chaser is an on-orbit docking operation that requires an accurate, reliable, and robust object recognition algorithm. Vision-based guidance of spacecraft relative motion during close-proximity maneuvers has been successively applied, together with dynamic modeling, as part of spacecraft on-orbit servicing systems. This research constructs a vision-based pose estimation model that performs image processing via a deep convolutional neural network. The model was built by repurposing a modified pretrained GoogLeNet on an available Unreal Engine 4 rendered dataset of the Soyuz spacecraft. In the implementation, the convolutional neural network learns from the data samples to create correlations between the images and the spacecraft's six degrees-of-freedom pose parameters. The experiment compared an exponential-based loss function with a weighted Euclidean-based loss function. Using the weighted Euclidean-based loss function, the implemented pose estimation model achieved moderately high performance, with a position accuracy of 92.53 percent and a position error of 1.2 m. The attitude prediction accuracy reached 87.93 percent, and the errors in the three Euler angles did not exceed 7.6 degrees. This research can contribute to spacecraft detection and tracking problems. Although the resulting vision-based model is specific to the environment of the synthetic dataset, it could be trained further to address actual docking operations in the future.
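To make the weighted Euclidean-based loss concrete, the sketch below shows one plausible formulation in PyTorch, assuming the network regresses a 6-element vector (three position components and three Euler angles) and that a scalar weight `beta` balances the attitude term against the position term. The variable names and the weighting scheme are illustrative assumptions, not the exact formulation used in the study.

```python
import torch

def weighted_euclidean_pose_loss(pred, target, beta=10.0):
    """Weighted Euclidean loss for 6-DoF pose regression (a sketch).

    pred, target: tensors of shape (batch, 6) holding
    [x, y, z, roll, pitch, yaw]. `beta` is an assumed hyperparameter
    that balances the attitude error against the position error.
    """
    pos_err = torch.norm(pred[:, :3] - target[:, :3], dim=1)  # position term
    att_err = torch.norm(pred[:, 3:] - target[:, 3:], dim=1)  # attitude term
    return (pos_err + beta * att_err).mean()

# Minimal usage example with random stand-in data.
pred = torch.randn(8, 6, requires_grad=True)
target = torch.randn(8, 6)
loss = weighted_euclidean_pose_loss(pred, target)
loss.backward()
```

In practice, the network output layer would replace the random `pred` tensor, and `beta` would be tuned so that neither the position nor the attitude term dominates training.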
In one definition, docking occurs "when one incoming spacecraft rendezvous with another spacecraft and flies a controlled collision trajectory in such a manner to align and mesh the interface mechanisms"; docking has also been defined as an on-orbit service that connects two free-flying man-made space objects. The service should be supported by an accurate, reliable, and robust position and orientation (pose) estimation system. Pose estimation is therefore an essential process in an on-orbit spacecraft docking operation. The position estimate can be obtained from the most widely used cooperative measurement, the Global Positioning System (GPS), while the spacecraft attitude can be measured by an onboard Inertial Measurement Unit (IMU). However, these methods are not applicable to non-cooperative targets. Many studies and missions have focused on mutually cooperative satellites, but the demand for handling non-cooperative satellites may increase in the future. Determining the attitude of a non-cooperative spacecraft is therefore a challenging technological research problem whose solution can improve spacecraft docking operations. One traditional method, based on spacecraft control principles, is to estimate the position and attitude of a spacecraft using the equations of motion, which are functions of time. However, prediction from the equations of motion alone requires support from sensor fusion to achieve the highest accuracy in the state estimation algorithm. For non-cooperative spacecraft, vision-based pose estimators are currently being developed for space applications, enabled by faster and more powerful computational resources.
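To illustrate the point about propagating the equations of motion and fusing them with sensor measurements, the sketch below implements a minimal linear Kalman filter for a single translational axis with a constant-velocity model. The noise levels, time step, and measurement setup are illustrative assumptions, not parameters of any particular mission or the method used in this work.

```python
import numpy as np

def kalman_step(x, P, z, dt, q=1e-3, r=0.5):
    """One predict/update cycle for a 1-D constant-velocity model.

    x : state [position, velocity]; P : 2x2 covariance;
    z : noisy range measurement; dt : time step.
    q, r : assumed process and measurement noise levels.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])          # discretized equations of motion
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])
    H = np.array([[1.0, 0.0]])                      # only position is measured

    # Predict: propagate the state with the dynamics model.
    x = F @ x
    P = F @ P @ F.T + Q

    # Update: fuse the sensor measurement.
    y = z - H @ x                                   # innovation
    S = H @ P @ H.T + r
    K = P @ H.T / S                                 # Kalman gain
    x = x + (K * y).flatten()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Usage: track a target closing at -0.1 m/s from noisy range readings.
x, P = np.array([100.0, 0.0]), np.eye(2)
for k in range(50):
    truth = 100.0 - 0.1 * k
    z = truth + np.random.normal(0.0, 0.5)
    x, P = kalman_step(x, P, z, dt=1.0)
```

The same predict/update structure generalizes to full 6-DoF relative motion, which is where fused measurements from cameras, GPS, or an IMU would enter the update step.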

Faculty of Engineering
This project aims to develop a conceptual prototype of a weapon aiming system that simulates an anti-aircraft gun. Using an optical camera, the system detects moving objects and calculates their trajectories in real time. The results are then used to control a motorized laser pointer with two degrees of freedom (DoF) of rotation, enabling it to aim at the predicted position of the target. The system is built on the Raspberry Pi platform using machine vision software. The object motion tracking functionality was developed with the OpenCV library, based on color detection algorithms. Experimental results indicate that the system successfully detects the movement of a tennis ball at a rate of 30 frames per second (fps). The current phase involves designing and integration testing of the mechanical system for precise laser pointer position control. This project exemplifies the integration of knowledge in electronics (computer programming) and mechanical engineering (motor control).
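As an illustration of the color-detection tracking stage described above, the sketch below outlines an HSV-threshold approach with OpenCV, assuming a webcam-style capture on the Raspberry Pi. The HSV bounds for a tennis ball, the minimum contour area, and the one-step linear prediction are placeholder assumptions rather than the tuned parameters or the trajectory model of the actual system.

```python
import cv2
import numpy as np

# Assumed HSV range for a yellow-green tennis ball; real values require tuning.
LOWER_HSV = np.array([25, 80, 80])
UPPER_HSV = np.array([45, 255, 255])

cap = cv2.VideoCapture(0)          # default camera on the Raspberry Pi
prev_center = None

while True:
    ok, frame = cap.read()
    if not ok:
        break

    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)
        if cv2.contourArea(c) > 200:            # ignore small noise blobs
            (x, y), _ = cv2.minEnclosingCircle(c)
            center = np.array([x, y])
            if prev_center is not None:
                # Crude one-step linear prediction of the next position,
                # which could drive the 2-DoF laser pointer mount.
                predicted = center + (center - prev_center)
                cv2.circle(frame, (int(predicted[0]), int(predicted[1])),
                           5, (0, 0, 255), -1)
            prev_center = center
            cv2.circle(frame, (int(x), int(y)), 10, (0, 255, 0), 2)

    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

The predicted pixel coordinates would then be converted into pan and tilt angles for the two motors, a mapping that depends on the camera calibration and the mechanical layout.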

Faculty of Information Technology
This game development project is intended to promote learning about ancient Egyptian civilization. Conventional learning materials can feel boring and fail to hold learners' attention, so the content is presented as a game built on virtual reality technology that embeds historical knowledge, with the aim of making learning more engaging. The developers chose Unreal Engine 5.1 and the Oculus Quest 2 for development. Within the game, players must find a way to escape from a room within a specified time by solving puzzles in various forms, such as statues and traps. To solve the puzzles and leave the room, players must collect all of the information inside it. The game also displays the player's life, which is reduced when the player fails to solve a puzzle correctly.

Faculty of Architecture, Art and Design
This conceptual model, inspired by the Rose Window in Gothic architecture, embodies intricate geometric patterns that reflect divine harmony and balance. Its symmetrical structure and the interplay of light passing through stained glass create a sense of movement, enhancing the sacred and mystical atmosphere. The composition evokes a celestial presence, like a window to heaven.