Starting from the beginning, audio and video have been with us for a long time. We’ve been using this medium forever, once in analog form and more recently in digital form (TikTok, Reels), that is, on smartphones and tablets. And when it comes to mobile devices, this is our backyard as Flutter developers. I would single out three challenges that video and audio bring when we work on such features:

  1. Creating new videos. Sometimes we need to trim the video or apply some effects, change the audio volume, and other such operations.
  2. Playback of video and audio. Our users want to see what we have to show them, and they will be playing the audio or video on their devices.
  3. Data transport. This includes buffering and other aspects of video transmission, so that our user doesn’t burn through too many megabytes of their data plan and can play the video comfortably. You also need to make sure that playback can start right away.

This guide focuses on the first challenge. I will show you how to work with FFmpeg in Flutter as well as other video processing tools you may use. The following solutions will work on Android and iOS without using Kotlin or Swift.

Introduction to video editing in Flutter

Most often when we talk about video editing, we associate it with a video editor program. We have a video track, a preview, and files on which we work. It’s very important to note that such an application doesn’t create a media file, but collects a set of instructions, which later is used only in the rendering process to make a new encoded video or audio file.

How does this transfer to the Flutter world? The solution that can perform the rendering process for us is, for example, FFmpeg-Kit for Flutter. It’s available for both desktop and mobile platforms. Luckily for us, we can also use a Flutter wrapper and avoid either Kotlin or Swift.

What is FFmpeg?

FFmpeg is primarily a command-line tool that we run in the terminal, and it provides a complete solution for creating and processing audio and video.

$ ffmpeg -i input.mp4 -af "volume=0.5" output.mp4

After installing FFmpeg, we start the command with the program’s name so that the terminal knows which program we are invoking.

Then we pass the -i parameter, which is simply an abbreviation for input, in this case a file called input.mp4.

Next, we specify the -af parameter, which applies an audio filter, and pass it the value volume=0.5. We can expect that the output.mp4 file (our result) will have 50% of its initial volume.

Using FFmpeg in Flutter development

Let’s assume that we have already installed FFmpeg to the Flutter project (to do this you need to add a dependency to the pubspec file, then some changes are required in build.gradle and Xcode – all of this is described in the documentation).

final ffmpeg = FlutterFFmpeg();
ffmpeg.execute('-i input.mp4 -af "volume=0.5" output.mp4');

The first thing we do is create an object of the FlutterFFmpeg class. This object provides us with an execute method, which takes the same command we would write in the terminal as a single string parameter. We can test our solutions on the desktop and then implement them in the mobile application.

However, in a mobile app environment, we have to point to the exact location of the input and output files. Neither FFmpeg nor Flutter knows what the input and output are – we have to specify specific paths from the file systems on our phones.

To do this, we need to use the getApplicationDocumentsDirectory method from the path_provider package. This method gives us access to the documents directory on a given device. Then we can create two File objects. The input file already exists in the user’s local storage. The output is a File object that doesn’t exist yet. The File object holds a reference to the file’s path, not the file itself. Then we can replace the input and output with new paths using string interpolation.

final documents = await getApplicationDocumentsDirectory();
final input = File('${documents.path}/input.mp4');
final output = File('${documents.path}/output.mp4');

ffmpeg.execute('-i ${input.path} -af "volume=0.5" ${output.path}');

The execute method returns an int. FFmpeg reports the result of an operation in the way traditional for terminal applications: a result of 0 means the operation was executed successfully; otherwise we get an int error code.

int result = await ffmpeg.execute('-i ${input.path} -af "volume=0.5" ${output.path}');

if (result == 0) {
	return output;
} else {
	throw Exception('FFmpeg Failed!');
}

What can FFmpeg do in Flutter?

Cutting the video

One of the most common operations on video and audio files is trimming. In the example, we pass the input along with the ss parameter, which marks the start of the cut, and the to parameter, which marks its end.

	'-i ${input.path} '
	'-ss 00:01:00 '
	'-to 00:02:00 '

As a result of this operation, we will get an output file covering the span from the first minute to the second minute. If the input were shorter than that, FFmpeg would report an error. By removing one of these parameters, the video will be trimmed from the very beginning, or from the indicated place to the end of the video.
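Putting the fragments together, a complete trim call might look like this (a sketch reusing the ffmpeg object and File paths from the earlier volume example; adjacent string literals in Dart are concatenated automatically):

```dart
// Trim input.mp4 to the segment between 00:01:00 and 00:02:00.
// The adjacent string literals form one command string.
final result = await ffmpeg.execute(
  '-i ${input.path} '
  '-ss 00:01:00 '
  '-to 00:02:00 '
  '${output.path}',
);

if (result != 0) {
  throw Exception('FFmpeg failed with code $result');
}
```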

Merging multiple files

With FFmpeg, there are plenty of options for combining video and audio into one large file. For readability, I present a more straightforward example here; for more options, see the FFmpeg documentation on concatenation.

	'-i "concat:${input.path}|${input2.path}" '

So we still give our input, but instead of one file we pass multiple files, joined with the concat protocol. We can expect the final file to be one large file in which the videos play one after the other. Note that the concat protocol joins files at the byte level, so it only works for container formats that support it (such as MPEG-TS); for MP4 files, the concat demuxer or the concat filter is the usual choice.
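For formats like MP4, a sketch using the concat demuxer instead could look like this (the list.txt file and its location are hypothetical; each line of the list file names one input, and -safe 0 allows absolute paths):

```dart
// Write a list file naming the inputs, one per line, then let the
// concat demuxer stitch them together without re-encoding (-c copy).
final list = File('${documents.path}/list.txt');
await list.writeAsString(
  "file '${input.path}'\n"
  "file '${input2.path}'\n",
);

final result = await ffmpeg.execute(
  '-f concat -safe 0 -i ${list.path} -c copy ${output.path}',
);
```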

Replacing audio in video

Let’s imagine a situation where we have a video file with audio recorded in it, but we would like to put music over the recording instead. To replace the audio in the video, we use a trick: the video track is copied as-is, while only the audio track is swapped out.

	'-i ${video.path} -i ${audio.path} '
	'-c:v copy -map 0:v:0 -map 1:a:0 '

To do this, we use the c:v parameter (codec:video) and set it to copy, so the video stream is passed through untouched. The first map parameter (0:v:0) tells FFmpeg to take the video track from the first input, and the second (1:a:0) tells it to take the audio track from the second input, discarding the original audio. In the final file, you will get the video with the music instead of the original audio track.

Changing resolution and quality

This is something we would want to do when we want to change the quality or size of a file.

	'-i ${input.path} '
	'-vf scale=1280:720 '
	'-crf 18 ' // Constant Rate Factor 0 - 51
	'-preset slow ' // [ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, veryslow]

By default, we specify the input, then use the vf (video filter) parameter, setting a scale value (in this case 1280x720, i.e. 720p video). Instead of one of these values, we can insert -1; FFmpeg then keeps the other dimension as given and computes the -1 dimension automatically so that the aspect ratio is preserved.

Next, we have the crf parameter, which is responsible for the quality of our video. The rule of thumb is that the higher the quality we want, the lower crf must be, at the expense of file size. So if we want the smallest possible file at the expense of quality, we can set crf up to 51. On the other hand, we have the preset parameter, which defines the speed of rendering. If we care about compression, we should set it as slow as possible; if we care about time, we can choose ultrafast rendering, but the file will then be larger for the same quality.

Visualizing the soundtrack – Waveform

We can also use FFmpeg to render a so-called waveform, a visualization of the soundtrack. To do this, we specify an input and then the filter_complex parameter with the showwavespic filter, where we can set values such as the size of the image (s=640x120); this will create output in .png format.

	'-i ${audio.path} '
	'-filter_complex "showwavespic=s=640x120" -frames:v 1 '
	'${output.path}', // .png

Checking the render progress

Rendering video files is a complicated operation, and how long it takes varies from one mobile device to another, so it is worth showing the user its progress. For this, the package provides the FlutterFFmpegConfig class.

With it, we can register a callback via enableStatisticsCallback, which hands us a Statistics object with several interesting parameters to read. We will most likely be interested in time, because it tells us how much of the file has already been rendered; from that, we can calculate a percentage and show the user how the process is going.
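As a sketch (assuming the flutter_ffmpeg API, where the Statistics object’s time field reports the rendered position in milliseconds, and where totalDurationMs is a hypothetical duration we obtained beforehand, e.g. from FFprobe):

```dart
final config = FlutterFFmpegConfig();

// Hypothetical total duration of the input, obtained beforehand
// (for example via FFprobe), used to turn time into a percentage.
const totalDurationMs = 120000;

config.enableStatisticsCallback((Statistics statistics) {
  // statistics.time is the position FFmpeg has rendered so far, in ms.
  final percent = (statistics.time / totalDurationMs * 100).clamp(0, 100);
  print('Rendering: ${percent.toStringAsFixed(0)}%');
});
```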

Gathering information from multimedia – FFprobe

Speaking of checking the length of a video file: a quick Google search for “how to check the length of a video file in Flutter” mostly turns up answers telling us to create a VideoPlayerController from the video_player package and use the controller to check the video’s length.

FFmpeg, on the other hand, provides us with a tool called FFprobe. Thanks to it, when we don’t have a Flutter video player in our project, we don’t have to add one just for that; we can read information about a video or audio file using FFprobe.

final probe = FlutterFFprobe();

final mediaInformation = await probe.getMediaInformation(video.path);
final map = mediaInformation.getMediaProperties();
final duration = map?['duration'].toString();

The example shows how we can extract the duration: getMediaProperties returns a map that we query by key. The map also contains information such as the codec, the number of audio and video tracks, and so on.

A note about the LGPL license

I must point out that FFmpeg is licensed under the LGPL (GNU Lesser General Public License), with some optional components under the GPL. This means that you should read the rules for using FFmpegKit beforehand on the tool’s website. When you add a package to a Flutter project, always pay attention to what license it is under.

Other video processing tools in Flutter


The most standard case we face is when a user records a video and we later want to upload it to a server. A video recorded with the camera package is either not compressed at all or compressed very lightly. Uncompressed video has advantages when it comes to recording quality, but when uploading from a mobile device to a server, we want to make sure the user doesn’t burn through their entire mobile data allowance. In such a situation, we need to compress the video, and the video_compress package helps with that.

final mediaInfo = await VideoCompress.compressVideo(
	video.path, // path to the input video
	quality: VideoQuality.Res1920x1080Quality,
	deleteOrigin: false,
	includeAudio: false,
	frameRate: 60,
);

We can set parameters such as the path to the video and the quality; we can also have the original file deleted after compression, include or exclude audio in the output, or even set the frame rate.

video_compress also gives us access to the mediaInfo object, which returns various statistics and information about the newly created video file. In particular, we get the file path, height, width, duration, and size in bytes, which we can use further.
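For illustration, reading a few of those fields might look like this (a sketch; the field names follow the video_compress MediaInfo class, and video.path is a placeholder input):

```dart
final mediaInfo = await VideoCompress.compressVideo(video.path);

// MediaInfo exposes details about the newly created file.
print(mediaInfo?.path);     // where the compressed file was written
print(mediaInfo?.duration); // length of the video
print(mediaInfo?.filesize); // size in bytes
```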


Another situation we run into very often when working on Flutter apps with video is needing to cut a single frame from a video file to use as a preview or thumbnail. The user usually expects that tapping a tile simply starts playing the video, like on YouTube. In such a situation, it is useful to have a tool that can extract one frame from a video, and such a tool is video_thumbnail.

final file = await VideoThumbnail.thumbnailFile(
	video: 'url',
	thumbnailPath: image.path,
	imageFormat: ImageFormat.PNG,
	timeMs: 1000, // milliseconds
	maxHeight: 64,
	maxWidth: 64,
	quality: 75,
);

The method returns an object, in this case an image file. First we specify our video (it can be a web address or a local storage path) and where we would like the file to appear. We can also specify the format (jpg/png) and the time, in milliseconds, at which the frame should be cut (here it will be taken from the first second). Further, we can set the quality, width, and height of the generated image.


The video_editor_sdk library is commercial and you have to pay for it, but it has very good Flutter support. We can launch a new screen by passing the video file we want to edit as an argument; the library opens a new page on the device, giving us advanced video editing capabilities, including stickers and drawing over the video.

	configuration: Configuration(
		export: ExportOptions(
			filename: 'My video',
			video: VideoOptions(codec: VideoCodec.h264, quality: 0.5),
		), // ExportOptions
		adjustment: AdjustmentOptions(items: [
		]), // AdjustmentOptions
		overlay: OverlayOptions(blendModes: [
		]), // OverlayOptions
		/* … */
	), // Configuration

The implementation is very simple and can be done in a day, as all we need to do is call the openEditor method. We specify the URL or local path to the file, optionally pass the configuration, and many different options can be enabled or disabled through this package.
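A minimal sketch of such a call might look like this (assuming the video_editor_sdk package’s VESDK.openEditor entry point; check the exact names against the package’s documentation):

```dart
import 'package:video_editor_sdk/video_editor_sdk.dart';

// Opens the full-screen editor for a video file; the returned result
// (if the user exports rather than cancels) describes the edited video.
Future<void> editVideo(String path) async {
  final result = await VESDK.openEditor(Video(path));
  print(result?.video); // location of the exported video, if any
}
```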

More to come on Flutter video and audio

Audio and video editing is an inseparable part of developers’ work as multimedia becomes more and more popular. Trimming, merging, and creating can all be accomplished with FFmpeg’s powerful framework, and when you need to process videos within Flutter apps, packages like video_compress, video_thumbnail, and video_editor_sdk come to the rescue. As this blog post focuses on creating videos, there are also many challenges with playing audio and video itself, but that’s a topic for another time. Also, feel free to check out the recent articles by Michał on animations and UI as well as the Mason tool.