Video repeaters

Previous tutorial: Controlling overloaded video processing graph

 

Starting with version 1.2.2, Computer Vision Sandbox provides a Video Repeater plug-in, a very useful addition to the collection of plug-ins shipped with the application. The plug-in is of the video source type, which means it is meant to provide/generate video. However, it needs some help from the Video Repeater Push plug-in to do so. Neither plug-in does anything useful on its own, but used as a pair they enable a number of interesting configurations which were not possible before.

The basic idea of the Video Repeater video source plug-in is to provide the video frames which are pushed into it by the Video Repeater Push plug-in. When a video source object based on Video Repeater is opened in Computer Vision Sandbox, it provides no video on its own. However, when the Video Repeater Push plug-in is used as a video processing step of another video source, it pushes video frames into the repeater, which then rebroadcasts them. This makes it possible to implement things like splitting a video source, branching a video processing graph, and so on. Some of these use cases are described below.
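To give an idea of what the push side looks like in script form, below is a minimal sketch of a Lua video processing script attached to a camera, which simply forwards every frame into a repeater. Note that the 'VideoRepeaterPush' plug-in name, its 'id' property and the way a video processing plug-in is invoked from a script are assumptions made for illustration - check the plug-in descriptions and the scripting documentation for the exact names.

-- minimal sketch (assumed plug-in/property names): forward every camera frame into a video repeater
repeaterPush = Host.CreatePluginInstance( 'VideoRepeaterPush' )
-- the ID must match the ID configured for the Video Repeater video source object
repeaterPush:SetProperty( 'id', 'repeater1' )

function Main( )
    local image = Host.GetImage( )
    -- push the current frame into the repeater, which rebroadcasts it to whoever displays/processes it
    repeaterPush:ProcessImage( image )
end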

The first example demonstrating video repeaters is aimed at splitting a single video source into multiple ones and processing each independently. Suppose we have a video source and a sandbox with a 2x2 view, and we want this video source playing in all four cells. Why would we want to play the same video in multiple cells? Well, we may want to apply different image/video processing routines to each played video, for example. For illustration purposes, below is a screen shot of the same video played in 4 cells, but processed differently (for this demonstration just a simple image mirroring plug-in is used).

So how do we get the same video source playing in multiple cells of a sandbox view? By putting the same camera object into multiple cells? We can do that in the sandbox view configuration, and it will allow the same video to play in more than one cell. But it will not allow processing the cells independently, since video processing graphs are configured per camera in the sandbox wizard, not per cell/view. Another thing to try is to configure 4 camera objects with the same settings (same physical camera). But this will fail as well. First of all, many video sources (like USB cameras) can be accessed by only a single client - no second client is allowed to access the camera. IP cameras are different - they do allow multiple connections. However, many IP cameras limit the number of simultaneous connections. And even if that limit is not hit, multiple connections to the same IP camera only generate more network traffic. So running multiple video sources targeting the same physical camera is not an option.

To get the view shown above, we need to configure only one video source object for the actual camera we have and 3 video repeaters (adding repeaters is done in the same way as adding any other video source). When configuring the video repeaters, it is important to assign them different IDs, so they can be distinguished. Then a sandbox is created with a 2x2 view, where one cell shows the camera and the other cells show the video repeaters. Opening this sandbox right now will show video in only one cell, not in four. As mentioned before, video repeaters will not do the job on their own without somebody pushing video into them. To complete the configuration, we need to add 3 instances of the Video Repeater Push plug-in into the video processing graph of our camera (using the same IDs that were used for the repeaters). As a result all 4 cells will show video, and each can be processed independently.
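If preferred, the three push steps can also be done from a Lua script attached to the camera instead of three individual graph steps - the effect is the same. The sketch below is illustrative only; the 'VideoRepeaterPush' plug-in name, its 'id' property and the repeater IDs 'repeater1'..'repeater3' are assumptions and must match whatever IDs were actually assigned to the repeater objects.

-- illustrative sketch: one pusher per repeater, each configured with its own repeater ID
pushers = { }
for i, id in ipairs( { 'repeater1', 'repeater2', 'repeater3' } ) do
    local pusher = Host.CreatePluginInstance( 'VideoRepeaterPush' )
    pusher:SetProperty( 'id', id )
    pushers[i] = pusher
end

function Main( )
    local image = Host.GetImage( )
    -- the camera's own cell shows this frame anyway; the other three cells get it via the repeaters
    for i = 1, #pushers do
        pushers[i]:ProcessImage( image )
    end
end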

Another use of video repeaters is to help debug video processing graphs and/or scripts. As the screen shot below demonstrates, we may have a video processing routine which consists of many steps and want to see some or all of the intermediate steps. That particular example demonstrates the process of finding a red ball in a video feed. The sandbox configuration uses one camera and eight video repeaters. A Lua script then performs the video processing, using a variety of image processing plug-ins to achieve the goal. While applying those plug-ins, it also pushes the intermediate results into the video repeaters. This allows seeing the entire detection process on a single view.
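The actual detection script is not reproduced here, but its structure follows roughly the pattern sketched below: apply a plug-in, push the intermediate image into a dedicated repeater, continue with the next step. The plug-in names ('ColorFilter', 'VideoRepeaterPush'), property names and repeater IDs in this sketch are placeholders/assumptions standing in for whatever the real detection steps are.

-- sketch of the debugging pattern only: plug-in/property names and repeater IDs are placeholders
colorFilterPlugin = Host.CreatePluginInstance( 'ColorFilter' )
-- ... configure the filter to keep only reddish pixels (properties omitted in this sketch) ...

filteredPush = Host.CreatePluginInstance( 'VideoRepeaterPush' )
filteredPush:SetProperty( 'id', 'debugFiltered' )

function Main( )
    local image = Host.GetImage( )

    -- step 1: suppress everything which does not look red, then push this intermediate
    -- result into its repeater, so it shows up in a dedicated cell of the view
    colorFilterPlugin:ProcessImageInPlace( image )
    filteredPush:ProcessImage( image )

    -- ... the remaining detection steps would follow here, each one pushed
    -- into its own repeater in the same way ...
end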

By configuring the sandbox to use multiple views, the same debugging output can be moved out of the main view. As the screen shot below demonstrates, it is possible to show only the final result on the main view. And if the need to examine intermediate steps arises, those can be found on the additional views of the sandbox.

Yet another use case for video repeaters is spreading video processing across multiple threads. Suppose we created a video processing graph like the one shown below. It takes about 44 ms to process an image with the plug-ins added to the graph. However, the camera used for that graph generates 30 frames per second, i.e. a new frame arrives roughly every 33 ms. This means that with the performance we have, there is no way we can save video at the desired frame rate. To make sure the video source is not delayed, we can drop the frames we cannot handle (as shown in the previous tutorial). This will bring the graph's performance down to 15 frames per second. But we really want to do better than that.
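To see where these numbers come from, here is the rough arithmetic (taking the 44 ms processing time from the graph above):

    frame interval at 30 frames/second : 1000 ms / 30 = ~33.3 ms
    time to process one frame          : ~44 ms, which is longer than 33.3 ms, so the graph cannot keep up
    with frame dropping                : processing one frame spans two frame intervals (44 ms < 2 x 33.3 ms),
                                         so roughly every second frame is dropped, giving ~15 frames per second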

The key thing about video repeaters is that, like any other video source, they get their own background thread for video processing. This means we can split the slow video graph above into two graphs, each running in its own thread. All we need to do is reconfigure the sandbox so that it has 2 video sources - a camera and a video repeater. Keep the single cell view, but display the repeater in it instead of the camera. Finally, in the sandbox wizard we keep only a few image processing steps on the camera, while the rest of the image processing moves to the repeater. And of course, don't forget to add the push plug-in at the end of the camera's graph to complete the processing chain.
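If the processing happened to be done with Lua scripts rather than individual graph steps, the same split would look roughly like the sketch below - one script attached to the camera, another attached to the repeater, each running in its own video source thread. All plug-in names here ('BrightnessCorrection', 'Pixellate', 'VideoRepeaterPush'), the 'id' property and the repeater ID are placeholders/assumptions.

-- script attached to the camera (runs in the camera's background thread)
prePlugin = Host.CreatePluginInstance( 'BrightnessCorrection' )   -- placeholder for the cheap pre-processing
push      = Host.CreatePluginInstance( 'VideoRepeaterPush' )
push:SetProperty( 'id', 'heavyProcessing' )

function Main( )
    local image = Host.GetImage( )
    prePlugin:ProcessImageInPlace( image )   -- the cheap part of the work
    push:ProcessImage( image )               -- hand the frame over to the repeater's thread
end

-- script attached to the repeater (runs in the repeater's own background thread)
heavyPlugin = Host.CreatePluginInstance( 'Pixellate' )            -- placeholder for the expensive steps

function Main( )
    local image = Host.GetImage( )
    heavyPlugin:ProcessImageInPlace( image ) -- the expensive part, no longer delaying the camera
end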

So now, instead of one slow video processing graph, we have two smaller and faster ones. First an image arrives from the camera and is pre-processed in the background thread dedicated to the camera. Then the image is pushed into the repeater, where its final processing steps are done in the background thread dedicated to the repeater. As a result of spreading the processing across two threads, the video source is no longer delayed - it keeps providing new video frames while previous frames may still be processed/saved by the repeater's thread. And, more importantly, this new configuration allows us to process and save video at the rate of 30 frames per second.

 

Next tutorial: Blobs processing