Research Purpose
To present an open-source modeling environment, oriented towards the academic community, for processing photo and video images transmitted over open channels, with support for user-created extensions.
Research Results
The modeling environment automates the development and modeling of photo and video processing algorithms for transmission over open networks, enabling step-by-step analysis and plugin extensions. It is useful for applications such as video surveillance, monitoring of distributed objects, and secure transmission, and has proven convenient for debugging masking algorithms and tracking visual transformations.
Research Limitations
The current implementation includes only four plugins and one input module; it may not cover all image processing methods and requires further development for broader applicability.
1:Experimental Design and Method Selection:
The modeling environment was developed with a technology stack comprising the C++ programming language, the OpenCV computer vision library, and the Qt cross-platform framework. It employs a plugin-based architecture for implementing and testing image processing algorithms such as filtering, binarization, and masking.
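A minimal sketch of what such a plugin-based processing unit could look like in C++ with OpenCV is shown below; the interface name, method signatures, and the median-filter example are illustrative assumptions, not the environment's actual API.

```cpp
// Hypothetical plugin interface: each processing unit transforms one frame.
// Names (IProcessingUnit, MedianUnit) are illustrative, not from the environment.
#include <opencv2/opencv.hpp>
#include <string>

class IProcessingUnit {
public:
    virtual ~IProcessingUnit() = default;
    virtual std::string name() const = 0;            // label shown in the chain, e.g. "Median"
    virtual cv::Mat process(const cv::Mat& in) = 0;  // transform a single frame
};

// Example unit: median filtering, one of the algorithm classes mentioned above.
class MedianUnit : public IProcessingUnit {
    int ksize_ = 5;  // aperture size, assumed to be configurable via the unit's widget
public:
    std::string name() const override { return "Median"; }
    cv::Mat process(const cv::Mat& in) override {
        cv::Mat out;
        cv::medianBlur(in, out, ksize_);  // OpenCV median filter
        return out;
    }
};
```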
2:Sample Selection and Data Sources:
Input data consist of still images and video sequences, with support for capture from web-cameras.
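For reference, a minimal sketch of frame acquisition with OpenCV's VideoCapture, covering both a web-camera and a video file, might look as follows; the device index and window title are assumptions.

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);              // 0 = default web-camera; a file path works too
    if (!cap.isOpened()) return 1;

    cv::Mat frame;
    while (cap.read(frame)) {             // grab frames until the stream ends
        cv::imshow("Image In", frame);    // here the frame would enter the processing chain
        if (cv::waitKey(30) == 27) break; // Esc stops capture
    }
    return 0;
}
```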
3:List of Experimental Equipment and Materials:
Software tools include a C++ compiler, the OpenCV library with additional modules from opencv_contrib, and the Qt framework.
4:Experimental Procedures and Operational Workflow:
Users create chains of processing units (e.g., 'Image In', 'Noise(G)', 'Delay', 'Median'), configure each unit via its widget, and observe the processing effects in real time, as sketched below.
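The fragment below sketches the effect of the example chain 'Image In' -> 'Noise(G)' -> 'Median' using plain OpenCV calls; the file name, noise sigma, and kernel size are illustrative values, and the 'Delay' unit is omitted since it only postpones frames.

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat src = cv::imread("input.png");    // 'Image In' unit (path is hypothetical)
    if (src.empty()) return 1;

    // 'Noise(G)': additive zero-mean Gaussian noise, sigma = 15 (illustrative value)
    cv::Mat srcF, noise(src.size(), CV_32FC3), noisy;
    src.convertTo(srcF, CV_32FC3);
    cv::randn(noise, cv::Scalar::all(0), cv::Scalar::all(15));
    cv::add(srcF, noise, noisy);
    noisy.convertTo(noisy, CV_8UC3);           // back to 8-bit with saturation

    // 'Median': 5x5 median filtering to suppress the injected noise
    cv::Mat filtered;
    cv::medianBlur(noisy, filtered, 5);

    cv::imshow("Noise(G)", noisy);
    cv::imshow("Median", filtered);
    cv::waitKey(0);
    return 0;
}
```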
5:Data Analysis Methods:
Step-by-step analysis of image transformations is performed, with visual tracking of changes at each stage.
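A small sketch of such stage-by-stage tracking is given below: intermediate results are stored together with their stage labels so each transformation can be inspected in its own window. The helper name and labels are assumptions, not part of the environment's API.

```cpp
#include <opencv2/opencv.hpp>
#include <string>
#include <utility>
#include <vector>

// Collect each intermediate result with its stage label, then display them all,
// one window per stage, so the effect of every unit can be compared visually.
void showStages(const std::vector<std::pair<std::string, cv::Mat>>& stages) {
    for (const auto& s : stages)
        cv::imshow(s.first, s.second);
    cv::waitKey(0);
}
```

In the chain sketched earlier this could be called with, for example, {{"Image In", src}, {"Noise(G)", noisy}, {"Median", filtered}} to compare all three stages side by side.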