Research Objective
To propose a vision-oriented method for product tracking and interaction in smart factories using detection and tracking along with a cloud-based platform.
Research Findings
The proposed vision-based product tracking method achieves reliable product detection and cloud-platform interaction, contributing to the transformation towards Industry 4.0. However, challenges remain in handling changes in product orientation and in the latency introduced by the data upload interval.
Research Limitations
Difficulty in handling changes in the product's orientation while on the conveyor belt, and a time difference between the actual record and the cloud-uploaded data caused by the upload interval.
1. Experimental Design and Method Selection:
The study employs the Viola-Jones algorithm for product detection and the Kanade-Lucas-Tomasi (KLT) algorithm for tracking. The cloud-based platform ThingSpeak is used for data upload and interaction.
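The detect-then-track pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `detect_product` and `track_features` are hypothetical stand-ins for the Viola-Jones detector and the KLT tracker, frames are plain dictionaries instead of images, and the re-detection threshold is an assumption.

```python
# Sketch of a detect-then-track loop: detect the product, seed feature
# points, track them between frames, and re-detect when too few survive.
# detect_product / track_features are hypothetical stand-ins for the
# Viola-Jones detector and the KLT tracker used in the paper.

MIN_TRACKED_POINTS = 10  # assumed re-detection threshold, not from the paper


def detect_product(frame):
    """Stand-in for Viola-Jones detection: return a bounding box or None."""
    return frame.get("product_bbox")


def track_features(frame, points):
    """Stand-in for KLT tracking: return the feature points still tracked."""
    return [p for p in points if p in frame.get("visible_points", [])]


def process_stream(frames):
    """Return one estimated product position per frame where tracking holds."""
    points, positions = [], []
    for frame in frames:
        if len(points) < MIN_TRACKED_POINTS:
            bbox = detect_product(frame)               # (re-)detect the product
            if bbox is None:
                continue                               # product not in view yet
            points = frame.get("visible_points", [])   # re-seed tracked features
        else:
            points = track_features(frame, points)     # track between detections
        if points:
            # The centroid of the tracked points approximates the position.
            xs, ys = zip(*points)
            positions.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return positions
```

In a real implementation the detector would run on the first frames (and whenever tracking degrades), since cascade detection is far costlier per frame than propagating KLT feature points.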
2. Sample Selection and Data Sources:
Video data from a real-scale educational CIM system under various environmental conditions.
3. List of Experimental Equipment and Materials:
Allied Vision Prosilica GE 1660 camera; MATLAB R2013b; laptop with a 2.5 GHz quad-core Intel Core i7 processor and 16 GB RAM.
4. Experimental Procedures and Operational Workflow:
The camera detects and tracks the product, uploading its position to the cloud every 25 seconds for further analysis and interaction with other CPSs.
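The periodic upload step can be sketched with ThingSpeak's REST channel-update endpoint. The endpoint and its `api_key`/`fieldN` query parameters are ThingSpeak's public API; mapping `field1`/`field2` to the x/y position is an assumption made for illustration, and no network call is made here.

```python
from urllib.parse import urlencode

UPLOAD_INTERVAL_S = 25  # upload period stated in the paper


def thingspeak_update_url(api_key, x, y):
    """Build a ThingSpeak channel-update URL carrying the product position.

    Uses ThingSpeak's REST update endpoint; assigning field1 = x and
    field2 = y is an assumed convention for this sketch.
    """
    params = urlencode({"api_key": api_key, "field1": x, "field2": y})
    return "https://api.thingspeak.com/update?" + params


def due_for_upload(now_s, last_upload_s):
    """True once at least UPLOAD_INTERVAL_S seconds have elapsed."""
    return now_s - last_upload_s >= UPLOAD_INTERVAL_S
```

This interval gating is also the source of the limitation noted above: a position recorded just after an upload can wait up to 25 seconds before it appears on the cloud platform.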
5. Data Analysis Methods:
The uploaded position data are analyzed and visualized on the ThingSpeak platform to support decision-making.