This project presents the development of a "Smart Cat House" that uses Internet of Things (IoT) and image-processing technology to make cat care safer and more convenient for owners. The infrastructure consists of an ESP8266 board connected to an ESP32-CAM camera for monitoring the cat, and an Arduino board that controls the sensors: a motion sensor in the litter box, a DHT22 temperature and humidity sensor, ultrasonic sensors for the water and food levels, and an IR sensor that detects when the cat enters the litter box. The Arduino also drives a water supply system, an automatic feeding system, a ventilation system whose DC fan adjusts its operation to the measured temperature to maintain a suitable environment, and an automatic litter-changing system actuated by a servo motor. All subsystems are connected to and controlled through the Blynk mobile application, allowing owners to monitor and care for their pets remotely. Cat detection and identification use image processing on the ESP32-CAM feed together with YOLO (You Only Look Once), a high-performance object-detection algorithm, to detect and distinguish between cats and people. Data from the sensors are sent to the Arduino board, which controls the devices in the smart cat house: turning lights on and off, changing the litter automatically, adjusting temperature and humidity, dispensing food and water at scheduled times, and ventilating the enclosure. Because the system is connected through the ESP8266 and the Blynk application, the devices are easy and convenient to control, and owners can monitor and operate the entire system from anywhere with internet access.
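For illustration, the following minimal Python sketch shows one way the cat-versus-person detection described above could be run on frames from the ESP32-CAM using a pretrained YOLO model. The camera address, the `/capture` endpoint, the `ultralytics` package, and the COCO class indices are assumptions made for the sketch, not details taken from the project.

```python
# Minimal sketch: pull a still image from an ESP32-CAM and run a
# pretrained YOLO detector to tell cats and people apart.
# Assumptions: the camera serves JPEG stills at /capture (a common
# ESP32-CAM firmware default) and the `ultralytics` package is installed.
import cv2
import numpy as np
import requests
from ultralytics import YOLO

CAM_URL = "http://192.168.1.50/capture"   # hypothetical ESP32-CAM address
CAT_CLASS, PERSON_CLASS = 15, 0           # COCO class indices used by YOLO

model = YOLO("yolov8n.pt")                # small pretrained COCO model

def grab_frame(url: str) -> np.ndarray:
    """Fetch one JPEG still from the camera and decode it to a BGR image."""
    raw = requests.get(url, timeout=5).content
    return cv2.imdecode(np.frombuffer(raw, dtype=np.uint8), cv2.IMREAD_COLOR)

def detect(frame: np.ndarray) -> dict:
    """Return how many cats and people YOLO sees in the frame."""
    result = model(frame, verbose=False)[0]
    classes = result.boxes.cls.int().tolist() if result.boxes is not None else []
    return {"cats": classes.count(CAT_CLASS),
            "people": classes.count(PERSON_CLASS)}

if __name__ == "__main__":
    counts = detect(grab_frame(CAM_URL))
    print(f"cats: {counts['cats']}, people: {counts['people']}")
```

In the actual system the detection result would be passed on to the ESP8266/Blynk side to trigger the corresponding devices; that integration is omitted from this sketch.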
Nowadays, many people keep cats as companions, but work, study, and personal obligations often prevent owners from looking after their cats closely at all times, leaving them worried about the pets at home. These problems inspired the idea of developing a Smart Cat House to provide convenience and meet the needs of today's cat owners. The Smart Cat House is a system designed to let owners monitor and care for their cats remotely through a mobile application. Inside, it provides subsystems that make life more comfortable for both owner and cat, such as food and water dispensing, litter-box management, temperature control, and a camera system. This makes the Smart Cat House a practical solution for owners who have little time but still want their pets to be happy and healthy.
Faculty of Engineering
This project uses artificial intelligence (AI) and deep learning to develop a Smart Police system that analyzes the identity of individuals and vehicles suspected of involvement in crimes. Using CCTV cameras, the system detects people carrying concealed weapons, tracks vehicles involved in crimes, and sends alerts to the police when a crime is detected. Smart Police is a collaboration between the Faculty of Engineering, King Mongkut's Institute of Technology Ladkrabang, the Provincial Police Region 2, the Chachoengsao Foundation for Development, and the Smart City Office of Chachoengsao Province. It is designed to prevent and deter crime, increase public safety and order, and build a network of cooperation between the government, the private sector, and the community. The system is still under development, but it has the potential to become a valuable tool for law enforcement, helping to reduce crime and improve public safety in Chachoengsao Province and other parts of Thailand.
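As a rough illustration only, the sketch below shows the general detect-and-alert loop implied by the description: frames are read from a CCTV stream, passed through a detector, and an alert is posted when a detection exceeds a threshold. The stream URL, the custom weapon-detector weights, the alert endpoint, and the threshold are all hypothetical placeholders; the abstract does not describe the actual Smart Police pipeline.

```python
# Minimal sketch of the alerting pipeline described above: read CCTV
# frames, run a detector, and notify an operator endpoint on a hit.
# The RTSP URL, the custom model weights, and the alert endpoint are
# hypothetical placeholders, not artifacts of the actual Smart Police system.
import time

import cv2
import requests
from ultralytics import YOLO

STREAM_URL = "rtsp://cctv.example.local/stream1"      # hypothetical CCTV feed
ALERT_URL = "https://police-ops.example.local/alert"  # hypothetical webhook
model = YOLO("weapon_detector.pt")                    # hypothetical custom weights

def send_alert(label: str, confidence: float) -> None:
    """POST a simple JSON alert so an operator can review the detection."""
    requests.post(ALERT_URL,
                  json={"label": label, "confidence": confidence,
                        "timestamp": time.time()},
                  timeout=5)

cap = cv2.VideoCapture(STREAM_URL)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]
    for box in result.boxes:
        conf = float(box.conf)
        if conf >= 0.6:                               # arbitrary alert threshold
            send_alert(model.names[int(box.cls)], conf)
cap.release()
```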
Faculty of Architecture, Art and Design
This research studies guidelines for, and develops a prototype of, an application that helps public transport users plan their journeys and travel more safely on the different types of public transport serving King Mongkut's Institute of Technology Ladkrabang. The objectives are: 1) to study the user experience (UX) and user interface (UI) design factors that affect users of a public transport application; 2) to study the needs of public transport users who must travel to King Mongkut's Institute of Technology Ladkrabang; and 3) to propose UX and UI design guidelines and produce a prototype application for traveling to the institute by public transport. The research reviews the literature on UX and UI design and examines existing public transport applications and pick-up points near the institute. The study is qualitative: participants tried relevant existing applications during interviews, and the target group, students aged 18 to 35, provided input that shaped the prototype so that it delivers information genuinely useful to users. The results show that the public transport the target group used most was, in order, the songthaew (shared pick-up truck), train, Airport Rail Link, motorcycle taxi, taxi, and bus. Users were concerned about safety and wanted features that increase safety and confidence when students use public transport, such as sending their location to the relevant officials in an emergency or when assistance is needed. They also wanted essential travel information: fare calculation, travel-time estimates, bus schedules, official and clearly marked pick-up and drop-off points, bus routes, driver registration details, route suggestions, and the arrival time of vehicles at the user's waiting point. From the analysis of the target group's data, the proposed UX design guidelines prioritize a menu for saving frequently used routes, a menu showing nearby pick-up points, a route-search menu that filters by user constraints such as fare or travel time, and a settings menu for fonts and color modes to support a wide range of users, because the study found roughly equal demand for looped and loopless Thai fonts and for light and dark color schemes. The UI design uses symbols that let users reach the information they need quickly and without confusion.
International Academy of Aviation Industry
The capture of a target spacecraft by a chaser is an on-orbit docking operation that requires an accurate, reliable, and robust object recognition algorithm. Vision-based guidance of spacecraft relative motion during close-proximity maneuvers has repeatedly been applied, using dynamic modeling, as part of spacecraft on-orbit servicing systems. This research constructs a vision-based pose estimation model that performs image processing with a deep convolutional neural network. The model was built by repurposing a modified pretrained GoogLeNet on an available dataset of the Soyuz spacecraft rendered in Unreal Engine 4. During training, the convolutional neural network learns correlations between the images and the spacecraft's six degrees-of-freedom parameters. The experiment compared an exponential-based loss function with a weighted Euclidean-based loss function. Using the weighted Euclidean-based loss function, the pose estimation model achieved moderately high performance: a position accuracy of 92.53 percent with an error of 1.2 m, and an attitude prediction accuracy of 87.93 percent with errors in the three Euler angles not exceeding 7.6 degrees. This research can contribute to spacecraft detection and tracking problems. Although the finished vision-based model is specific to the environment of the synthetic dataset, it could be trained further to address actual docking operations in the future.
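The abstract does not reproduce the network or loss definitions, but the approach can be sketched in PyTorch as below: a pretrained GoogLeNet whose classifier is replaced by a pose regressor, trained with a weighted Euclidean loss that balances position and attitude errors. The six-value output (three position components plus three Euler angles), the weighting factor beta, and the training details are illustrative assumptions rather than the paper's exact configuration.

```python
# Illustrative sketch: repurpose a pretrained GoogLeNet to regress a
# 6-DoF pose (3 position components + 3 Euler angles) and train it with
# a weighted Euclidean loss. The 6-vector output and the weight beta are
# assumptions for illustration; the paper's exact head and loss may differ.
import torch
import torch.nn as nn
from torchvision import models

class PoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT)
        # Replace the 1000-class ImageNet classifier with a 6-value regressor.
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 6)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (batch, 6): [tx, ty, tz, roll, pitch, yaw]
        return self.backbone(x)

def weighted_euclidean_loss(pred: torch.Tensor, target: torch.Tensor,
                            beta: float = 10.0) -> torch.Tensor:
    """Position error plus beta-weighted attitude error."""
    pos_err = torch.norm(pred[:, :3] - target[:, :3], dim=1)
    att_err = torch.norm(pred[:, 3:] - target[:, 3:], dim=1)
    return (pos_err + beta * att_err).mean()

if __name__ == "__main__":
    model = PoseNet()
    images = torch.randn(4, 3, 224, 224)   # dummy batch standing in for rendered frames
    labels = torch.randn(4, 6)             # dummy ground-truth poses
    loss = weighted_euclidean_loss(model(images), labels)
    loss.backward()
    print(f"loss: {loss.item():.4f}")
```

In such a loss, the weighting factor trades off translational against rotational accuracy and would typically be tuned on a validation split of the rendered dataset.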