movis.layer.composition.Composition#
- class movis.layer.composition.Composition(size: tuple[int, int] = (1920, 1080), duration: float = 1.0)[source]#
A base layer that integrates multiple layers into one video.
Users create a composition by specifying its duration and resolution. Multiple layers can then be added to the composition through Composition.add_layer(). During this process, additional information such as the layer's name, start time, position, opacity, and blending mode can be specified. Finally, the composition integrates the layers in the order they were added to produce a single video.
Another composition can also be added as a layer within a composition. By nesting compositions in this way, more complex motions can be created.
- Examples:
>>> import movis as mv
>>> composition = mv.layer.Composition(size=(640, 480), duration=5.0)
>>> composition.add_layer(
...     mv.layer.Rectangle(size=(640, 480), color=(127, 127, 127), duration=5.0),
...     name='bg')
>>> len(composition)
1
>>> composition['bg'].opacity.enable_motion().extend(
...     keyframes=[0, 1, 2, 3, 4],
...     values=[1.0, 0.0, 1.0, 0.0, 1.0],
...     easings=['ease_out5'] * 5)
>>> composition.write_video('output.mp4')
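Since a composition is itself a layer, one composition can be nested inside another. The following is a rough sketch of this pattern, continuing the doctest above (the layer names, sizes, and colors are illustrative):
>>> inner = mv.layer.Composition(size=(320, 240), duration=5.0)
>>> inner.add_layer(
...     mv.layer.Rectangle(size=(320, 240), color=(0, 0, 255), duration=5.0),
...     name='blue_bg')
>>> outer = mv.layer.Composition(size=(640, 480), duration=5.0)
>>> outer.add_layer(inner, name='inner', position=(320, 240))
>>> outer.write_video('nested.mp4')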
- Args:
- size:
A tuple representing the size of the composition in the form of (width, height).
- duration:
The duration along the time axis for the composition.
Methods
- add_layer(layer: Layer, name: str | None = None, position: tuple[float, float] | None = None, scale: float | tuple[float, float] | ndarray = (1.0, 1.0), rotation: float = 0.0, opacity: float = 1.0, blending_mode: BlendingMode | str = BlendingMode.NORMAL, anchor_point: float | tuple[float, float] | ndarray = (0.0, 0.0), origin_point: Direction | str = Direction.CENTER, transform: Transform | None = None, offset: float = 0.0, start_time: float = 0.0, end_time: float | None = None, audio_level: float = 0.0, visible: bool = True, audio: bool = True) → LayerItem [source]#
Add a layer to the composition.
This method appends the target layer to the composition, along with details about the layer such as position, scale, opacity, and rendering mode. The composition registers layers wrapped within a LayerItem object, which consolidates these related details. Users can also add the layer with a unique name to the composition. In this case, the LayerItem can be accessed using composition['layer_name']. To access the layer directly, users can reference it as composition['layer_name'].layer, or ideally, retain it in a separate variable before registering it with add_layer().
A composition can also be treated as a layer. This means that users can embed one composition within another using the add_layer() method. This allows for more intricate image compositions and animations.
- Args:
- layer:
An instance or function of the layer to be added to the composition, conforming to the Layer protocol.
- name:
The unique name for the layer within the composition.
- position:
The position of the layer. If unspecified, the layer is placed at the center of the composition by default.
- scale:
Scale (sx, sy) of the layer. Defaults to (1.0, 1.0).
- rotation:
Clockwise rotation angle (in degrees) of the layer. Default is 0.0.
- opacity:
Opacity of the layer. Default is 1.0.
- blending_mode:
Rendering mode of the layer. Can be specified as an Enum from BlendingMode or as a string. Defaults to BlendingMode.NORMAL.
- anchor_point:
Defines the origin of the layer's coordinate system. The origin is determined by the sum of origin_point and anchor_point. If origin_point is Direction.CENTER and anchor_point is (0, 0), the origin is the center of the layer. Default is (0, 0).
- origin_point:
Initial reference point for the layer's coordinate system. The final origin is determined by the sum of origin_point and anchor_point. Defaults to Direction.CENTER (the center of the layer).
- transform:
A Transform object managing the geometric properties and rendering mode of the layer. If specified, the arguments for position, scale, rotation, anchor_point, origin_point, and blending_mode in add_layer() are ignored in favor of the values in transform.
- offset:
The time offset of the layer within the composition. For example, if start_time=0.0 and offset=1.0, the layer will appear one second into the composition.
- start_time:
The start time of the layer itself. This variable is used to clip the layer in the time axis direction. For example, if start_time=1.0 and offset=0.0, the layer appears immediately but its first second is skipped.
- end_time:
The end time of the layer. This variable is used to clip the layer in the time axis direction. For example, if start_time=0.0, end_time=1.0, and offset=0.0, the layer will disappear after one second. If not specified, the layer's duration is used for end_time.
- audio_level:
The audio level of the layer in decibels (dB). If the layer has no audio, this value is ignored.
- visible:
A flag specifying whether the layer is visible; if visible=False, the layer is not rendered in the composition.
- audio:
A flag specifying whether the layer's audio is enabled; if audio=False, the audio of the given layer is not used.
- Returns:
A LayerItem object that wraps the layer and its corresponding information.
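A brief usage sketch (reusing the Rectangle layer from the class-level example; all names and values are illustrative):
>>> import movis as mv
>>> scene = mv.layer.Composition(size=(640, 480), duration=5.0)
>>> item = scene.add_layer(
...     mv.layer.Rectangle(size=(320, 240), color=(255, 0, 0), duration=5.0),
...     name='rect', position=(160, 120), offset=1.0, end_time=3.0)
>>> layer_item = scene['rect']   # the registered LayerItem
>>> rect = scene['rect'].layer   # the underlying Rectangle layer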
- get_audio(start_time: float, end_time: float) → ndarray | None [source]#
Returns the audio of the composition as a numpy array.
- Args:
- start_time:
The start time of the audio. This variable is used to clip the audio in the time axis direction.
- end_time:
The end time of the audio. This variable is used to clip the audio in the time axis direction. If not specified, the composition's duration is used for end_time.
- Returns:
The audio of the composition as a numpy array. If no audio is found, None is returned.
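As a minimal sketch (assuming the composition contains at least one layer that produces audio):
>>> audio = composition.get_audio(start_time=0.0, end_time=5.0)
>>> if audio is not None:
...     print(audio.shape)  # the exact array layout depends on the composition's audio layers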
- get_key(time: float) → tuple[Hashable, ...] | None [source]#
Returns a tuple of hashable keys representing the state for each layer at the given time.
- items() → list[tuple[str, LayerItem]] [source]#
Returns a list of tuples, each consisting of a layer name and its corresponding item.
- Returns:
A list of tuples, where each tuple contains a layer name and its layer item.
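For example, the name/item pairs returned by items() can be iterated directly (a small illustrative sketch):
>>> for name, item in composition.items():
...     print(name, type(item.layer).__name__)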
- keys() → list[str] [source]#
Returns a list of layer names.
Note
The keys are sorted in the order in which they will be rendered.
- Returns:
A list of layer names sorted in the rendering order.
- pop_layer(name: str) → LayerItem [source]#
Removes a layer item from the composition and returns it.
- Args:
name: The name of the layer to be removed.
- Returns:
The layer item that was removed.
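A short sketch continuing the class-level example, where a single layer named 'bg' was added:
>>> composition.keys()
['bg']
>>> bg_item = composition.pop_layer('bg')   # remove 'bg' and get its LayerItem back
>>> len(composition)
0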
- preview(level: int = 2) → Iterator[None] [source]#
Context manager method to temporarily change the preview_level using the with syntax. For example, with self.preview(level=2): would change the preview_level to 2 in that scope.
- Args:
- level:
The resolution reduction factor for rendering the composition. For example, if level=2 is set, the composition is rendered at (W / 2, H / 2).
- Examples:
>>> import movis as mv
>>> composition = mv.layer.Composition(size=(640, 480), duration=5.0)
>>> with composition.preview(level=2):
...     image = composition(0.0)
>>> image.shape
(240, 320, 4)
>>> with composition.preview(level=4):
...     image = composition(0.0)
>>> image.shape
(120, 160, 4)
- render_and_play(start_time: float = 0.0, end_time: float | None = None, fps: float = 30.0, preview_level: int = 2) → None [source]#
Renders the composition and plays it in a Jupyter notebook.
Note
This method requires ipywidgets and IPython to be installed. These are usually installed automatically when users install Jupyter notebook.
- Args:
- start_time:
The start time of the video. This variable is used to clip the video in the time axis direction.
- end_time:
The end time of the video. This variable is used to clip the video in the time axis direction.
- fps:
The frame rate of the video. Default is 30.0.
- preview_level:
The resolution reduction factor for rendering the composition. For example, if preview_level=2 is set, the resolution of the output is (W / 2, H / 2). Default is 2.
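A minimal sketch of typical use in a Jupyter notebook cell (times and preview level are illustrative):
>>> composition.render_and_play(start_time=0.0, end_time=2.0, preview_level=4)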
- values() → list[LayerItem] [source]#
Returns a list of LayerItem objects.
Note
The elements of the list are not the layers themselves, but LayerItem objects containing information about the layers.
- Returns:
A list of LayerItem objects.
- write_audio(dst_file: str | PathLike, start_time: float = 0.0, end_time: float | None = None, format: str | None = None, subtype: str | None = None) → None [source]#
Writes the audio of the composition to a file.
The currently supported audio formats are limited to those supported by the soundfile module (wav, flac, ogg, etc.). For more details, please refer to the soundfile documentation.
- Args:
- dst_file:
The path to the destination audio file.
- start_time:
The start time of the audio. This variable is used to clip the audio in the time axis direction.
- end_time:
The end time of the audio. This variable is used to clip the audio in the time axis direction. If not specified, the composition's duration is used for end_time.
- format:
The format of the audio file. If not specified, the format is inferred from the file extension.
- subtype:
The subtype of the audio file. If not specified, the subtype is inferred from the file extension.
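A brief sketch (file names are illustrative; the format is inferred from the extension when not given):
>>> composition.write_audio('output.wav')
>>> composition.write_audio('clip.flac', start_time=1.0, end_time=4.0)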
- write_video(dst_file: str | PathLike, start_time: float = 0.0, end_time: float | None = None, codec: str = 'libx264', pixelformat: str = 'yuv420p', input_params: list[str] | None = None, output_params: list[str] | None = None, fps: float = 30.0, audio: bool = True, audio_codec: str | None = None) → None [source]#
Writes the composition’s contents to a video file.
- Args:
- dst_file:
The path to the destination video file.
- start_time:
The start time of the video. This variable is used to clip the video in the time axis direction.
- end_time:
The end time of the video. This variable is used to clip the video in the time axis direction. If not specified, the composition's duration is used for end_time.
- codec:
The codec used to encode the video. Default is libx264.
- pixelformat:
The pixel format of the video. Default is yuv420p.
- input_params:
A list of parameters to be passed to the ffmpeg input.
- output_params:
A list of parameters to be passed to the ffmpeg output. For example, ["-preset", "medium"] or ["-crf", "18"].
- fps:
The frame rate of the video. Default is 30.0.
- audio:
A flag specifying whether to include audio in the video. If audio=False, the audio of the composition is not rendered. Note that if the composition has no audio, this value is simply ignored and the video is rendered without audio.
- audio_codec:
The codec used to encode the audio. If not specified, the default codec determined by codec is used. For example, if codec="libx264", the default value of audio_codec is aac.
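A usage sketch (the destination file and encoder settings are illustrative):
>>> composition.write_video(
...     'output.mp4',
...     fps=30.0,
...     codec='libx264',
...     pixelformat='yuv420p',
...     output_params=['-crf', '18'],
...     audio=True)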
Attributes
duration
The duration of the composition.
layers
Returns a list of LayerItem objects.
preview_level
The resolution reduction factor for rendering the composition.
size
The size of the composition in the form of (width, height).