movis.contrib.segmentation.RobustVideoMatting#

class movis.contrib.segmentation.RobustVideoMatting(onnx_file: str | PathLike | None = None, downsample_ratio: float = 0.25, recurrent_state: bool = True)[source]#

An effect that extracts the foreground using RobustVideoMatting [Lin2021].

It uses a deep learning model to automatically identify person regions in a given frame and extract them as the foreground.

Note

This effect requires onnxruntime to be installed.

Note

Although no green screen or other special imaging environment is required, the output quality is not at a production level. This effect is useful in applications that do not demand high-quality foreground extraction, such as presentation videos.

Args:
onnx_file:

The path to the ONNX file of the model. Download it from PeterL1n/RobustVideoMatting and place it in an appropriate location. If None, the default model is downloaded and cached in ~/.cache/movis. The default model is rvm_mobilenetv3_fp16.onnx.

downsample_ratio:

The ratio by which frames are downsampled before inference to accelerate it. The default value is 0.25.

recurrent_state:

The flag to indicate whether the model uses a recurrent state. Enabling this flag tends to improve quality, but may produce unstable results in some cases. The default value is True.
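
Example

The following is a minimal sketch of attaching this effect to a video layer. It assumes the usual movis workflow (Composition, add_layer, add_effect, write_video); the file names, layer name, size, and duration are hypothetical.

import movis as mv
from movis.contrib.segmentation import RobustVideoMatting

# Create a composition and add a video layer of a person speaking.
# The size, duration, and "input.mp4" are placeholders for illustration.
scene = mv.layer.Composition(size=(1280, 720), duration=10.0)
scene.add_layer(mv.layer.Video("input.mp4"), name="speaker")

# Attach the matting effect to the layer; with onnx_file=None,
# the default model is downloaded and cached on first use.
scene["speaker"].add_effect(
    RobustVideoMatting(downsample_ratio=0.25, recurrent_state=True))

scene.write_video("output.mp4")

A background layer added before "speaker" would show through the extracted regions, since layers added later are composited on top.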

Methods

get_key(time: float) → float[source]#

Return the key for caching.

Attributes

default_md5_digest

default_model_url

recurrent_state

Whether the model uses a recurrent state.