Filtering STDERR

I got fed up today with a GStreamer warning that clutters my output so much that I cannot see my own debugging printouts anymore.

GStreamer-WARNING **: gstpad.c:3923:gst_pad_push_data: Got data flow before segment event

I tried to set GST_DEBUG=0, but it did not help. What did help, though, was a simple intervention on the bash side of things:

python cameraPipelineSwapTest.py 3>&1 1>&2 2>&3 3>&- | grep DEBUG

The 3>&1 1>&2 2>&3 3>&- part swaps STDOUT and STDERR via a temporary file descriptor 3, so STDERR goes through the pipe. grep then keeps only the lines containing DEBUG, which I print out with the Python logging module, and throws everything else away. So all other warnings are suppressed. (Of course I won't see any GStreamer errors either, but in this case I do not really care.)

Casting QString to C string

There are several questions in forums out there asking how to convert a QString to the C string that is needed, for instance, for g_object_set in GLib and therefore in GStreamer. This can be done by converting the QString to a standard library string and that one to a C string:

#include <QString>
#include <string>

QString qtString("Passing me to a C function won't work");
/* note the parentheses: toStdString() is a method, and g_object_set()
   is a varargs function that must be terminated with NULL */
g_object_set(dummy, "string-property", qtString.toStdString().c_str(), NULL);
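
If you want to avoid the detour through std::string, Qt can also give you the C string directly. A minimal sketch, assuming the same hypothetical dummy object and property name; the temporary QByteArray lives until the end of the statement, and g_object_set copies the string anyway:

#include <QString>
#include <glib-object.h>

QString qtString("Passing me to a C function");
/* toUtf8() returns a temporary QByteArray; constData() is its
   null-terminated C string, valid until the end of this statement */
g_object_set(dummy, "string-property", qtString.toUtf8().constData(), NULL);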

GStreamer: Stream H264 webcam data to a series of files

After a long time without any post, now something new. Today I am going to sketch how I save an H264 stream from my Logitech C920 to disk, in chunks of one minute each. I started with literally no knowledge of GStreamer. I decided to go for it because the uvch264_src element was published recently. This is a source that grabs the H264 stream directly from a UVC webcam and therefore allows comfortable grabbing of the encoded stream.

Introduction to GStreamer via the example of a simple H264-to-disk grabbing pipeline

You can get the 0.10 development version that I used from the git repository. There is a great tutorial available; just follow their instructions to install the recent 0.10 version of GStreamer. Don't forget to change to the 0.10 branches in the git repositories before compiling. GStreamer is a great streaming framework that allows you to manipulate streams, which are sent as chunks (GstBuffers) through a pipeline. For basic applications (simple pipelines) one can create and start a pipeline from the command line. If you have a UVC H264 webcam, you can try typing the following into a terminal after installing GStreamer (this is the test pipeline from uvch264_src, just adjusted to save to disk). You will have to change "device=/dev/video1" to your device (thanks to S4nshi for giving that pointer):

gst-launch -e uvch264_src device=/dev/video1 name=src auto-start=true src.vfsrc ! queue ! video/x-raw-yuv,width=320,height=240,framerate=30/1 ! xvimagesink src.vidsrc ! queue ! video/x-h264,width=1920,height=1080,framerate=30/1,profile=constrained-baseline ! h264parse ! mp4mux ! filesink location=test.mp4

I will explain this pipeline to give you an idea of GStreamer and the basic concepts that I am going to exploit later. In this pipeline we use the elements uvch264_src, queue, xvimagesink, h264parse, mp4mux, and filesink. The exclamation marks ("!") are like pipes in the Unix terminal and link these elements together: each one connects the source pad of an element to the sink pad of the next element downstream in the pipeline. uvch264_src also gets a name, because it has two source pads. In my example, the "vfsrc" source pad of uvch264_src is connected via a queue to an xvimagesink. This is the preview sink that uvch264_src uses in its example. The other source pad of uvch264_src is called "vidsrc" and gives us the proper video signal that we are saving down. The video signal goes through a queue and an h264parse to an mp4mux and is saved by the filesink at the given location.

The queues act as FIFO buffers and also open new threads. The mp4mux builds the "header" information (in mp4, actually header and footer information) about the video content around the "raw" video signal of the camera. The h264parse provides the mp4mux with basic codec information that is essential for building complete headers. We will have a lot of trouble with that later on. The only things left to explain are the caps (video/x-raw-yuv,width=320,height=240,framerate=30/1), which specify basic properties of the video signal: its encoding, size, and framerate. Oh, and the -e flag: this sends an EOS event down the pipeline when it is cancelled with CTRL-C. That is important to instruct the mp4mux to add the footer of the mp4 file and close the video file properly. It is also annoying, because the same event closes the camera.

Ok, that works so far. For many people this might already be enough to record compressed data from the camera. But what I want is just a little more. I am recording very long video sequences, and to avoid trouble if the system crashes for some reason, I want to split my long video into chunks of one minute each. And there should be a gapless transition between successive chunks: I do not want to restart the pipeline, because that takes about 30 seconds.

List of Hurdles

There are several major hurdles to overcome before we get such a construction. It took me several weeks to go from zero to this stage, trying many ways of solving this problem, and many of those hurdles popped up one after another. But in retrospect there are three major problems to overcome:

  1. seamlessly changing the file location while the pipeline is running
  2. closing the mp4 file properly
  3. opening another mp4 file with complete headers

Problem no. 1 is easily solved if we have a real raw byte stream of data that we just dump onto the disk. One has to leave the command line and implement the pipeline with the SDK, but using gst_parse_launch() this is basically copy and paste of our pipeline above, leaving out the h264parse ! mp4mux part. Then one just needs to give the filesink a name and retrieve it with gst_bin_get_by_name(). If you want to change the location, just do this:

GstStateChangeReturn ret = gst_element_set_state(this->file_sink2, GST_STATE_NULL);
g_object_set(this->file_sink2, "location", "./new_location", NULL);
ret = gst_element_set_state(this->file_sink2, GST_STATE_PLAYING);
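
For completeness, here is a minimal sketch of how this reduced pipeline could be built with gst_parse_launch(). The device path, the fakesink for the preview pad, and the chunk file name are my assumptions; only the structure follows the test pipeline from above:

#include <gst/gst.h>

GError* error = NULL;
/* reduced pipeline: no h264parse/mp4mux, the raw H264 goes straight to disk;
   the filesink gets a name so that we can retrieve it later */
GstElement* pipeline = gst_parse_launch(
    "uvch264_src device=/dev/video1 auto-start=true name=src "
    "src.vfsrc ! queue ! fakesink "
    "src.vidsrc ! queue ! video/x-h264,width=1920,height=1080,framerate=30/1 "
    "! filesink name=file_sink2 location=./chunk_000.h264",
    &error);
if (error != NULL)
    g_printerr("Failed to build pipeline: %s\n", error->message);

/* retrieve the filesink by the name given in the description */
GstElement* file_sink2 = gst_bin_get_by_name(GST_BIN(pipeline), "file_sink2");
gst_element_set_state(pipeline, GST_STATE_PLAYING);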

That works because the change of the file location happens very fast, and the queue before the filesink buffers in case of tiny delays. It does not work like that if you are dealing with mp4 data, that is, with the mp4mux, which leads to problem 2. And somehow to problem 3. To tell mp4mux to close a file, you need to send the EOS signal. But that also closes the camera, which we do not want, since we would effectively lose frames. And if we manage to close an mp4mux-managed file, we still need to be able to open another file with mp4mux, so one needs to renegotiate the pipeline. I don't want to spoil it here, but figuring that out was a chapter in itself.

Final Pipeline Design

The final structure actually consists of two pipelines that run in parallel: one pipeline similar to the one above, and another one through which we will send the EOS signal, separately from the camera, to close the file. You can get an idea from the diagram below; for now, just look at stage A.

[Figure: H264gstreamer pipeline diagram]

The whole structure is started at the beginning in a reduced state, with the connected mainPipeline and binRec1 set to GST_STATE_PLAYING, and with the catchPipeline and binRec2 unconnected in GST_STATE_NULL. That is essentially the pipeline from above, just with an additional queue. (I am not sure if this queue is technically necessary.)
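
I am not showing how the recording bins are built, but here is a minimal sketch of what binRec1 could look like: the h264parse ! mp4mux ! filesink chain wrapped into a bin with a ghost pad, so the bin itself can be linked to the queue upstream (variable names are my assumptions):

GstElement* binRec1 = gst_bin_new("binRec1");
GstElement* parse = gst_element_factory_make("h264parse", NULL);
GstElement* mux   = gst_element_factory_make("mp4mux", NULL);
GstElement* sink  = gst_element_factory_make("filesink", NULL);
g_object_set(sink, "location", "./chunk_000.mp4", NULL);
gst_bin_add_many(GST_BIN(binRec1), parse, mux, sink, NULL);
gst_element_link_many(parse, mux, sink, NULL);
/* expose the parser's sink pad as a ghost pad of the bin */
GstPad* parseSink = gst_element_get_static_pad(parse, "sink");
gst_element_add_pad(binRec1, gst_ghost_pad_new("sink", parseSink));
gst_object_unref(parseSink);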

Stage B – Part 1: Closing mp4mux1

That is problem 2 from above. What I do here is:

  1. disconnect binRec1 from the mainPipeline,
  2. connect it to the catchPipeline, and
  3. send an EOS through the catchPipeline

How to do this is described in this page of the GStreamer manual. The only additional thing one needs to take care of is to remove binRec1 from the mainPipeline and to add it to the catchPipeline:

/* gst_bin_remove() drops a ref on binRec1, so take one to keep the bin alive */
gst_object_ref(binRec1);
gst_bin_remove(GST_BIN(mainPipeline), binRec1);
gst_bin_add(GST_BIN(catchPipeline), binRec1); /* the catchPipeline refs it again */
gst_object_unref(binRec1);
GstPad* srcPad = gst_element_get_static_pad(queue_catch, "src");
gst_pad_push_event(srcPad, gst_event_new_eos());
gst_object_unref(srcPad);

As soon as both are connected, the EOS signal is sent through the source pad of queue_catch, which closes up the file properly.
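
A minimal sketch of one way to make sure the file is really finished before going on, assuming the EOS message reaches the catchPipeline's bus (gst_bus_timed_pop_filtered needs GStreamer >= 0.10.15):

/* wait (up to 5 s) until the EOS reaches the catchPipeline's bus,
   i.e. until mp4mux has written the mp4 footer */
GstBus* bus = gst_element_get_bus(catchPipeline);
GstMessage* msg = gst_bus_timed_pop_filtered(bus, 5 * GST_SECOND, GST_MESSAGE_EOS);
if (msg != NULL)
    gst_message_unref(msg);
gst_object_unref(bus);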

Ok, now we can close the first one-minute chunk and play it with a standard player. The camera is still running. Great. You would expect that all that is left is to connect binRec2 to the mainPipeline and continue recording at the file location of filesink2? Well, then the remainder of this article would not be necessary.

Stage B – Part 2: Renegotiation of the new mainPipeline

If you just connected binRec2 to the mainPipeline, you would get an error: not negotiated (-4). Or something like that.

To keep it short, the h264parse and seemingly also the mp4mux element expect a kind of header in the first frame they process. If you save the H264-encoded frames of your camera without h264parse and mp4mux twice or thrice and compare the first few bytes of the files in a hex editor, you will notice that they are the same, followed by an "mdat" string. In the case of the C920 from Logitech, there are 56 bytes before the "mdat" tag. These bytes are what h264parse and mp4mux expect when they are initialized/reinitialized. Do not just copy them from the file, as the bytes change with different camera options. What I do instead is copy the first 56 bytes of the first frame and save them:

/* copy the first 56 bytes with Qt (byteHeader is a QByteArray member) */
byteHeader = QByteArray((char*)GST_BUFFER_DATA(buffer), 56);

Every time I swap my binRecs, I insert those bytes into the next keyframe:

/* insert the saved bytes in front of the keyframe */
GstBuffer* newBuffer = gst_buffer_try_new_and_alloc(byteHeader.size());
memcpy(GST_BUFFER_DATA(newBuffer), byteHeader.constData(), byteHeader.size());
buffer = gst_buffer_merge(newBuffer, buffer); /* returns a newly allocated buffer */
gst_buffer_unref(newBuffer);

Both snippets are called inside pad buffer probes at queue_0 (see gst_pad_add_buffer_probe).
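
For readers who have not used pad probes before, a minimal sketch of how such a probe could be installed in 0.10 (handler and variable names are mine):

/* buffer probe handler: called for every buffer passing the pad */
static gboolean on_buffer(GstPad* pad, GstBuffer* buffer, gpointer user_data)
{
    /* keyframes have the DELTA_UNIT flag cleared */
    if (!GST_BUFFER_FLAG_IS_SET(buffer, GST_BUFFER_FLAG_DELTA_UNIT)) {
        /* ... copy or prepend the 56-byte header here ... */
    }
    return TRUE; /* TRUE passes the buffer on, FALSE drops it */
}

GstPad* srcPad = gst_element_get_static_pad(queue_0, "src");
gst_pad_add_buffer_probe(srcPad, G_CALLBACK(on_buffer), NULL);
gst_object_unref(srcPad);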

In my implementation, however, I am not able to restructure the pipeline at the exact frame I want to. My approach, and there might be a better way, is to install a buffer probe at the queue_0 src pad and wait for a keyframe (if (!GST_BUFFER_FLAG_IS_SET(buffer, GST_BUFFER_FLAG_DELTA_UNIT))). I add the byte header to it and call gst_pad_set_blocked_async() on this very queue_0 src pad. In my understanding, the buffer probe handler should get called once the previous buffer has left the pad and was sent to binRec1. I receive my keyframe, modify it, and change my pipeline structure. After the restructuring of my pipeline, the modified keyframe should arrive as the first frame in binRec2. For some reason, however, this keyframe is usually the second frame that arrives at binRec2. I am saying usually, because I am not sure. So I compare the first 56 bytes of each incoming frame against the template I saved at the very beginning and drop all frames arriving before my keyframe.
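
To make the blocking step concrete, a minimal sketch of the gst_pad_set_blocked_async() part, assuming the restructuring happens in the block callback (names are mine):

/* called once the pad is actually blocked (and again when unblocked) */
static void on_blocked(GstPad* pad, gboolean blocked, gpointer user_data)
{
    if (blocked) {
        /* safe spot: swap binRec1 for binRec2, relink, set states ... */
        gst_pad_set_blocked_async(pad, FALSE, on_blocked, NULL); /* unblock again */
    }
}

/* inside the buffer probe, after a keyframe was seen: */
gst_pad_set_blocked_async(queue0SrcPad, TRUE, on_blocked, NULL);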

Unsolved Issues

My code now runs stably and records hours of data. Nonetheless, I could not find a way to solve the frame-drop problem. If anyone has any suggestions, please let me know. The best place to do so would be the gst-devel mailing list [link].