Revision 281617
Author [email protected]
Date 2021-08-26 02:29:59 -0700 (Thu, 26 Aug 2021)
Log Message
[GStreamer][MSE] Implement the "sequence" mode in SourceBuffer for the GStreamer ports
https://bugs.webkit.org/show_bug.cgi?id=210341
Reviewed by Alicia Boya Garcia.
LayoutTests/imported/w3c:
* web-platform-tests/media-source/mediasource-sourcebuffer-mode-timestamps-expected.txt: Added extra blank line in expectations, like in many other expectations files.
Source/WebCore:
Declare audio/mpeg support and handle it properly in AppendPipeline. This media type
has no container format, so an identity element is used in place of the demuxer
(a dummy demuxer of sorts). Since this element isn't an actual demuxer, the usual
no-more-pads signal is never emitted; its behaviour is simulated when the first buffer
is emitted by the element (detected by a pad probe).
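The one-shot probe trick can be modelled in isolation. This is a standalone sketch, not GStreamer API: Pad, ProbeReturn, and pushBuffer() are toy stand-ins for GstPad, GstPadProbeReturn, and buffer flow. A probe that returns Remove fires exactly once, which is what lets it emulate a one-time no-more-pads notification.

```cpp
#include <functional>
#include <vector>

// Toy model of a GStreamer pad probe: callbacks attached to a pad run when a
// buffer passes; returning Remove detaches the probe after its first firing.
enum class ProbeReturn { Ok, Remove };

struct Pad {
    std::vector<std::function<ProbeReturn()>> probes;

    void pushBuffer() {
        for (std::size_t i = 0; i < probes.size();) {
            if (probes[i]() == ProbeReturn::Remove)
                probes.erase(probes.begin() + i); // one-shot: probe removes itself
            else
                ++i;
        }
    }
};
```

A probe registered this way sees only the first buffer, matching the "simulate no-more-pads once" behaviour described above.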
A parser element is needed to process the data (mpegaudioparse for MPEG audio v1,
aacparse for MPEG audio v2 & v4). The existing createOptionalParserForFormat()
infrastructure is used for that after analyzing the caps, and an identity element is
now used in the cases where a parser isn't needed. This simplifies the code that
links the pipeline elements: there are no longer optional (possibly missing)
elements in place, just GstIdentity instances.
Still, selecting the proper MPEG/AAC parser requires precise caps, so a typefind
element is now placed after the appsrc when needed (and another identity element
when not).
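The parser-selection rule reduces to a small mapping. Here is a standalone sketch: the real createOptionalParserForFormat() reads the media type and mpegversion field out of GstCaps and returns a GstElement, whereas this hypothetical helper takes them as plain arguments and returns the element factory name.

```cpp
#include <cstring>
#include <string>

// Standalone sketch of the parser-selection rule: map a caps media type (and,
// for audio/mpeg, the mpegversion field) to a GStreamer parser element name.
std::string parserElementFor(const char* mediaType, int mpegversion)
{
    if (!std::strcmp(mediaType, "audio/x-opus"))
        return "opusparse";
    if (!std::strcmp(mediaType, "video/x-h264"))
        return "h264parse";
    if (!std::strcmp(mediaType, "audio/mpeg")) {
        switch (mpegversion) {
        case 1:
            return "mpegaudioparse"; // MPEG-1 audio (MP3 and friends)
        case 2:
        case 4:
            return "aacparse"; // AAC
        }
    }
    // No parser needed: identity keeps the element-linking code uniform.
    return "identity";
}
```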
Return TypeError from SourceBuffer::setMode() instead of InvalidAccessError. The MSE
spec changed this behaviour at some point before June 2016 and the WebKit code never
reflected the change, still returning the deprecated InvalidAccessError when the
generate timestamps flag equals true and the new mode equals "segments".
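The affected spec step can be sketched on its own; checkSetMode() below is a hypothetical standalone analog of step 2 of the mode setter, not WebCore code, and the remaining steps are elided.

```cpp
#include <optional>

enum class ExceptionCode { TypeError, InvalidAccessError, InvalidStateError };
enum class AppendMode { Segments, Sequence };

// Step 2 of the SourceBuffer mode setter: if the generate timestamps flag is
// true and the new mode is "segments", throw TypeError (formerly, and
// incorrectly after the spec change, InvalidAccessError).
std::optional<ExceptionCode> checkSetMode(bool generateTimestampsFlag, AppendMode newMode)
{
    if (generateTimestampsFlag && newMode == AppendMode::Segments)
        return ExceptionCode::TypeError;
    return std::nullopt; // later steps (InvalidStateError checks, etc.) elided
}
```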
Finally, the MediaSample::setTimestamps() method has been implemented for the GStreamer
port. It previously had an empty implementation; no problem had been detected before
because the "sequence" SourceBuffer mode had never been used until now.
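What the new implementation does can be sketched without GStreamer. MediaTime, Buffer, and toClockTime() here are simplified stand-ins for WebCore's MediaTime, GstBuffer, and toGstClockTime(); a nanosecond clock time and round-to-nearest conversion are assumed.

```cpp
#include <cstdint>

using ClockTime = std::uint64_t; // stand-in for GstClockTime (nanoseconds)

struct MediaTime { std::int64_t value; std::int32_t timeScale; }; // value/timeScale seconds
struct Buffer { ClockTime pts; ClockTime dts; }; // stand-in for GstBuffer

// Convert a rational MediaTime to nanoseconds, rounding to nearest.
ClockTime toClockTime(const MediaTime& t)
{
    return (t.value * 1000000000ull + t.timeScale / 2) / t.timeScale;
}

// Keep the cached sample times and the underlying buffer in sync, mirroring
// the GST_BUFFER_PTS/GST_BUFFER_DTS assignments in the patch.
void setTimestamps(Buffer& buffer, const MediaTime& pts, const MediaTime& dts)
{
    buffer.pts = toClockTime(pts);
    buffer.dts = toClockTime(dts);
}
```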
Covered by existing tests.
* Modules/mediasource/SourceBuffer.cpp:
(WebCore::SourceBuffer::setMode): Return TypeError.
* platform/graphics/gstreamer/GStreamerRegistryScanner.cpp:
(WebCore::GStreamerRegistryScanner::initializeDecoders): Declare audio/mpeg support.
* platform/graphics/gstreamer/MediaSampleGStreamer.cpp:
(WebCore::MediaSampleGStreamer::setTimestamps): Implement method.
* platform/graphics/gstreamer/MediaSampleGStreamer.h: Removed setTimestamps() empty implementation.
* platform/graphics/gstreamer/mse/AppendPipeline.cpp:
(WebCore::AppendPipeline::AppendPipeline): Support audio/mpeg by instantiating a typefind element when needed (or identity when not), using an identity element instead of a demuxer, and simulating what would be done in the no-more-pads signal handler by using a probe to detect the first buffer traversing the parser.
(WebCore::createOptionalParserForFormat): Use the mpegversion caps field to instantiate the right kind of parser when detecting the audio/mpeg media type in the caps. An identity element is now returned in case no parser is needed.
(WebCore::AppendPipeline::Track::initializeElements): Simplify element linking now that the parser (or identity) is always guaranteed to exist.
(WebCore::AppendPipeline::streamTypeToString): Added a default branch to the switch to fix a build warning.
* platform/graphics/gstreamer/mse/AppendPipeline.h: Added m_typefind. Reordered some attributes and added comments to improve readability.
LayoutTests:
* platform/glib/TestExpectations: Unskipped test
Diff
Modified: trunk/LayoutTests/ChangeLog (281616 => 281617)
--- trunk/LayoutTests/ChangeLog 2021-08-26 07:57:35 UTC (rev 281616)
+++ trunk/LayoutTests/ChangeLog 2021-08-26 09:29:59 UTC (rev 281617)
@@ -1,3 +1,12 @@
+2021-08-26 Enrique Ocaña González <[email protected]>
+
+ [GStreamer][MSE] Implement the "sequence" mode in SourceBuffer for the GStreamer ports
+ https://bugs.webkit.org/show_bug.cgi?id=210341
+
+ Reviewed by Alicia Boya Garcia.
+
+ * platform/glib/TestExpectations: Unskipped test
+
2021-08-26 Youenn Fablet <[email protected]>
Signaling state check when applying a local or remote description is no longer aligned with the WebRTC spec
Modified: trunk/LayoutTests/imported/w3c/ChangeLog (281616 => 281617)
--- trunk/LayoutTests/imported/w3c/ChangeLog 2021-08-26 07:57:35 UTC (rev 281616)
+++ trunk/LayoutTests/imported/w3c/ChangeLog 2021-08-26 09:29:59 UTC (rev 281617)
@@ -1,3 +1,12 @@
+2021-08-26 Enrique Ocaña González <[email protected]>
+
+ [GStreamer][MSE] Implement the "sequence" mode in SourceBuffer for the GStreamer ports
+ https://bugs.webkit.org/show_bug.cgi?id=210341
+
+ Reviewed by Alicia Boya Garcia.
+
+ * web-platform-tests/media-source/mediasource-sourcebuffer-mode-timestamps-expected.txt: Added extra blank line in expectations, like in many other expectations files.
+
2021-08-26 Youenn Fablet <[email protected]>
Signaling state check when applying a local or remote description is no longer aligned with the WebRTC spec
Modified: trunk/LayoutTests/imported/w3c/web-platform-tests/media-source/mediasource-sourcebuffer-mode-timestamps-expected.txt (281616 => 281617)
--- trunk/LayoutTests/imported/w3c/web-platform-tests/media-source/mediasource-sourcebuffer-mode-timestamps-expected.txt 2021-08-26 07:57:35 UTC (rev 281616)
+++ trunk/LayoutTests/imported/w3c/web-platform-tests/media-source/mediasource-sourcebuffer-mode-timestamps-expected.txt 2021-08-26 09:29:59 UTC (rev 281617)
@@ -1,3 +1,4 @@
PASS audio/aac : If generate timestamps flag equals true and new mode equals "segments", then throw a TypeError exception and abort these steps.
PASS audio/mpeg : If generate timestamps flag equals true and new mode equals "segments", then throw a TypeError exception and abort these steps.
+
Modified: trunk/LayoutTests/platform/glib/TestExpectations (281616 => 281617)
--- trunk/LayoutTests/platform/glib/TestExpectations 2021-08-26 07:57:35 UTC (rev 281616)
+++ trunk/LayoutTests/platform/glib/TestExpectations 2021-08-26 09:29:59 UTC (rev 281617)
@@ -666,7 +666,6 @@
webkit.org/b/226804 imported/w3c/web-platform-tests/media-source/mediasource-changetype-play.html [ Failure ]
webkit.org/b/167108 imported/w3c/web-platform-tests/media-source/mediasource-trackdefaultlist.html [ Failure ]
-webkit.org/b/210341 imported/w3c/web-platform-tests/media-source/mediasource-sourcebuffer-mode-timestamps.html [ Failure ]
webkit.org/b/210341 media/media-source/media-mp4-h264-sequence-mode.html [ Failure ]
webkit.org/b/210342 imported/w3c/web-platform-tests/media-source/mediasource-sourcebuffer-trackdefaults.html [ Failure ]
webkit.org/b/167108 imported/w3c/web-platform-tests/media-source/mediasource-trackdefault.html [ Failure ]
Modified: trunk/Source/WebCore/ChangeLog (281616 => 281617)
--- trunk/Source/WebCore/ChangeLog 2021-08-26 07:57:35 UTC (rev 281616)
+++ trunk/Source/WebCore/ChangeLog 2021-08-26 09:29:59 UTC (rev 281617)
@@ -1,3 +1,52 @@
+2021-08-26 Enrique Ocaña González <[email protected]>
+
+ [GStreamer][MSE] Implement the "sequence" mode in SourceBuffer for the GStreamer ports
+ https://bugs.webkit.org/show_bug.cgi?id=210341
+
+ Reviewed by Alicia Boya Garcia.
+
+ Declare audio/mpeg support and handle it properly in AppendPipeline. This media type
+ doesn't have any container format, so an identity element is used in place of the
+ demuxer (dummy demuxer). As this element isn't an actual demuxer, the standard
+ no-more-pads signal isn't present, so its behaviour is simulated when the first buffer
+ is emitted by the element (detected by a probe).
+
+ A parser element is needed to process the data (mpegaudioparse for mpeg audio v1,
+ aacparse for mpeg audio v2 & v4). The existing createOptionalParserForFormat()
+ infrastructure is used for that after analyzing the caps, and an identity element is
+ now also used instead in the cases when a parser isn't needed. This simplifies the code
+ that links the pipeline elements, as now there aren't any optional (non existing)
+ elements in place, just GstIdentity instances there.
+
+ Still, the selection of the proper mpeg/aac parser requires precise caps. A new typefind
+ element is now used after the appsrc (when needed, and another identity element when
+ not).
+
+ Return TypeError in SourceBuffer::setMode(), instead of InvalidAccessError. The MSE spec
+ changed this behaviour at some point before June 2016 and the WebKit code never reflected
+ the change, still returning the deprecated InvalidAccessError when generate timestamps
+ flag equals true and new mode equals "segments".
+
+ Finally, the MediaSample::setTimestamps() method has been implemented for the GStreamer
+ port. It had an empty implementation and no problem had been detected before because
+ the "sequence" SourceBuffer mode had never been used until now.
+
+ Covered by existing tests.
+
+ * Modules/mediasource/SourceBuffer.cpp:
+ (WebCore::SourceBuffer::setMode): Return TypeError.
+ * platform/graphics/gstreamer/GStreamerRegistryScanner.cpp:
+ (WebCore::GStreamerRegistryScanner::initializeDecoders): Declare audio/mpeg support.
+ * platform/graphics/gstreamer/MediaSampleGStreamer.cpp:
+ (WebCore::MediaSampleGStreamer::setTimestamps): Implement method.
+ * platform/graphics/gstreamer/MediaSampleGStreamer.h: Removed setTimestamps() empty implementation.
+ * platform/graphics/gstreamer/mse/AppendPipeline.cpp:
+ (WebCore::AppendPipeline::AppendPipeline): Support audio/mpeg by instancing a typefind element when needed (or identity when not), using an identity element instead of a demuxer and simulating what would be done in the no-more-pads signal handlers, but using a probe to detect the first buffer traversing the parser.
+ (WebCore::createOptionalParserForFormat): Use the mpegversion caps field to instance the right kind of parser when detecting the audio/mpeg media type in the caps. An identity element is now returned in case no parser is needed.
+ (WebCore::AppendPipeline::Track::initializeElements): Simplify element linking now that the parser (or identity) is always guaranteed to exist.
+ (WebCore::AppendPipeline::streamTypeToString): Added default branch to switch to fix build warning.
+ * platform/graphics/gstreamer/mse/AppendPipeline.h: Added m_typefind. Reordered some attributes and added comments to improve readability.
+
2021-08-26 Youenn Fablet <[email protected]>
Signaling state check when applying a local or remote description is no longer aligned with the WebRTC spec
Modified: trunk/Source/WebCore/Modules/mediasource/SourceBuffer.cpp (281616 => 281617)
--- trunk/Source/WebCore/Modules/mediasource/SourceBuffer.cpp 2021-08-26 07:57:35 UTC (rev 281616)
+++ trunk/Source/WebCore/Modules/mediasource/SourceBuffer.cpp 2021-08-26 09:29:59 UTC (rev 281617)
@@ -1237,7 +1237,7 @@
// 1. Let new mode equal the new value being assigned to this attribute.
// 2. If generate timestamps flag equals true and new mode equals "segments", then throw an InvalidAccessError exception and abort these steps.
if (m_shouldGenerateTimestamps && newMode == AppendMode::Segments)
- return Exception { InvalidAccessError };
+ return Exception { TypeError };
// 3. If this object has been removed from the sourceBuffers attribute of the parent media source, then throw an InvalidStateError exception and abort these steps.
// 4. If the updating attribute equals true, then throw an InvalidStateError exception and abort these steps.
Modified: trunk/Source/WebCore/platform/graphics/gstreamer/GStreamerRegistryScanner.cpp (281616 => 281617)
--- trunk/Source/WebCore/platform/graphics/gstreamer/GStreamerRegistryScanner.cpp 2021-08-26 07:57:35 UTC (rev 281616)
+++ trunk/Source/WebCore/platform/graphics/gstreamer/GStreamerRegistryScanner.cpp 2021-08-26 09:29:59 UTC (rev 281617)
@@ -255,6 +255,8 @@
m_decoderMimeTypeSet.add(AtomString("audio/aac"));
m_decoderMimeTypeSet.add(AtomString("audio/mp4"));
m_decoderMimeTypeSet.add(AtomString("audio/x-m4a"));
+ m_decoderMimeTypeSet.add(AtomString("audio/mpeg"));
+ m_decoderMimeTypeSet.add(AtomString("audio/x-mpeg"));
m_decoderCodecMap.add(AtomString("mpeg"), false);
m_decoderCodecMap.add(AtomString("mp4a*"), false);
}
Modified: trunk/Source/WebCore/platform/graphics/gstreamer/MediaSampleGStreamer.cpp (281616 => 281617)
--- trunk/Source/WebCore/platform/graphics/gstreamer/MediaSampleGStreamer.cpp 2021-08-26 07:57:35 UTC (rev 281616)
+++ trunk/Source/WebCore/platform/graphics/gstreamer/MediaSampleGStreamer.cpp 2021-08-26 09:29:59 UTC (rev 281617)
@@ -192,6 +192,16 @@
m_pts = MediaTime::zeroTime();
}
+void MediaSampleGStreamer::setTimestamps(const MediaTime& presentationTime, const MediaTime& decodeTime)
+{
+ m_pts = presentationTime;
+ m_dts = decodeTime;
+ if (auto* buffer = gst_sample_get_buffer(m_sample.get())) {
+ GST_BUFFER_PTS(buffer) = toGstClockTime(m_pts);
+ GST_BUFFER_DTS(buffer) = toGstClockTime(m_dts);
+ }
+}
+
void MediaSampleGStreamer::offsetTimestampsBy(const MediaTime& timestampOffset)
{
if (!timestampOffset)
Modified: trunk/Source/WebCore/platform/graphics/gstreamer/MediaSampleGStreamer.h (281616 => 281617)
--- trunk/Source/WebCore/platform/graphics/gstreamer/MediaSampleGStreamer.h 2021-08-26 07:57:35 UTC (rev 281616)
+++ trunk/Source/WebCore/platform/graphics/gstreamer/MediaSampleGStreamer.h 2021-08-26 09:29:59 UTC (rev 281617)
@@ -51,7 +51,7 @@
size_t sizeInBytes() const override { return m_size; }
FloatSize presentationSize() const override { return m_presentationSize; }
void offsetTimestampsBy(const MediaTime&) override;
- void setTimestamps(const MediaTime&, const MediaTime&) override { }
+ void setTimestamps(const MediaTime&, const MediaTime&) override;
bool isDivisable() const override { return false; }
std::pair<RefPtr<MediaSample>, RefPtr<MediaSample>> divide(const MediaTime&, UseEndTime) override { return { nullptr, nullptr }; }
Ref<MediaSample> createNonDisplayingCopy() const override;
Modified: trunk/Source/WebCore/platform/graphics/gstreamer/mse/AppendPipeline.cpp (281616 => 281617)
--- trunk/Source/WebCore/platform/graphics/gstreamer/mse/AppendPipeline.cpp 2021-08-26 07:57:35 UTC (rev 281616)
+++ trunk/Source/WebCore/platform/graphics/gstreamer/mse/AppendPipeline.cpp 2021-08-26 09:29:59 UTC (rev 281617)
@@ -78,10 +78,9 @@
#if ENABLE(ENCRYPTED_MEDIA)
static GstPadProbeReturn appendPipelineAppsinkPadEventProbe(GstPad*, GstPadProbeInfo*, struct PadProbeInformation*);
#endif
-
static GstPadProbeReturn appendPipelineDemuxerBlackHolePadProbe(GstPad*, GstPadProbeInfo*, gpointer);
-
static GstPadProbeReturn matroskademuxForceSegmentStartToEqualZero(GstPad*, GstPadProbeInfo*, void*);
+static GRefPtr<GstElement> createOptionalParserForFormat(const AtomString&, const GstCaps*);
// Wrapper for gst_element_set_state() that emits a critical if the state change fails or is not synchronous.
static void assertedElementSetState(GstElement* element, GstState desiredState)
@@ -143,11 +142,18 @@
const String& type = m_sourceBufferPrivate.type().containerType();
GST_DEBUG("SourceBuffer containerType: %s", type.utf8().data());
- if (type.endsWith("mp4") || type.endsWith("aac"))
+ bool hasDemuxer = true;
+ if (type.endsWith("mp4") || type.endsWith("aac")) {
m_demux = makeGStreamerElement("qtdemux", nullptr);
- else if (type.endsWith("webm"))
+ m_typefind = makeGStreamerElement("identity", nullptr);
+ } else if (type.endsWith("webm")) {
m_demux = makeGStreamerElement("matroskademux", nullptr);
- else
+ m_typefind = makeGStreamerElement("identity", nullptr);
+ } else if (type == "audio/mpeg") {
+ m_demux = makeGStreamerElement("identity", nullptr);
+ m_typefind = makeGStreamerElement("typefind", nullptr);
+ hasDemuxer = false;
+ } else
ASSERT_NOT_REACHED();
#if !LOG_DISABLED
@@ -157,19 +163,35 @@
m_demuxerDataEnteringPadProbeInformation.probeId = gst_pad_add_probe(demuxerPad.get(), GST_PAD_PROBE_TYPE_BUFFER, reinterpret_cast<GstPadProbeCallback>(appendPipelinePadProbeDebugInformation), &m_demuxerDataEnteringPadProbeInformation, nullptr);
#endif
- // These signals won't outlive the lifetime of `this`.
- g_signal_connect(m_demux.get(), "no-more-pads", G_CALLBACK(+[](GstElement*, AppendPipeline* appendPipeline) {
- ASSERT(!isMainThread());
- GST_DEBUG("Posting no-more-pads task to main thread");
- appendPipeline->m_taskQueue.enqueueTaskAndWait<AbortableTaskQueue::Void>([appendPipeline]() {
- appendPipeline->didReceiveInitializationSegment();
- return AbortableTaskQueue::Void();
- });
- }), this);
+ if (hasDemuxer) {
+ // These signals won't outlive the lifetime of `this`.
+ g_signal_connect(m_demux.get(), "no-more-pads", G_CALLBACK(+[](GstElement*, AppendPipeline* appendPipeline) {
+ ASSERT(!isMainThread());
+ GST_DEBUG("Posting no-more-pads task to main thread");
+ appendPipeline->m_taskQueue.enqueueTaskAndWait<AbortableTaskQueue::Void>([appendPipeline]() {
+ appendPipeline->didReceiveInitializationSegment();
+ return AbortableTaskQueue::Void();
+ });
+ }), this);
+ } else {
+ GRefPtr<GstPad> identitySrcPad = adoptGRef(gst_element_get_static_pad(m_demux.get(), "src"));
+ gst_pad_add_probe(identitySrcPad.get(), GST_PAD_PROBE_TYPE_BUFFER, reinterpret_cast<GstPadProbeCallback>(
+ +[](GstPad *pad, GstPadProbeInfo*, AppendPipeline* appendPipeline) {
+ GRefPtr<GstCaps> caps = gst_pad_get_current_caps(pad);
+ if (!caps)
+ return GST_PAD_PROBE_DROP;
+ appendPipeline->m_taskQueue.enqueueTaskAndWait<AbortableTaskQueue::Void>([appendPipeline]() {
+ appendPipeline->didReceiveInitializationSegment();
+ return AbortableTaskQueue::Void();
+ });
+ return GST_PAD_PROBE_REMOVE;
+ }
+ ), this, nullptr);
+ }
// Add_many will take ownership of a reference. That's why we used an assignment before.
- gst_bin_add_many(GST_BIN(m_pipeline.get()), m_appsrc.get(), m_demux.get(), nullptr);
- gst_element_link(m_appsrc.get(), m_demux.get());
+ gst_bin_add_many(GST_BIN(m_pipeline.get()), m_appsrc.get(), m_typefind.get(), m_demux.get(), nullptr);
+ gst_element_link_many(m_appsrc.get(), m_typefind.get(), m_demux.get(), nullptr);
assertedElementSetState(m_pipeline.get(), GST_STATE_PLAYING);
}
@@ -605,19 +627,31 @@
{
GstStructure* structure = gst_caps_get_structure(caps, 0);
const char* mediaType = gst_structure_get_name(structure);
-
GUniquePtr<char> parserName(g_strdup_printf("%s_parser", trackId.string().utf8().data()));
+ const gchar* elementClass = "identity";
- if (!g_strcmp0(mediaType, "audio/x-opus")) {
- GstElement* opusparse = makeGStreamerElement("opusparse", parserName.get());
- return GRefPtr<GstElement>(opusparse);
+ if (!g_strcmp0(mediaType, "audio/x-opus"))
+ elementClass = "opusparse";
+ else if (!g_strcmp0(mediaType, "video/x-h264"))
+ elementClass = "h264parse";
+ else if (!g_strcmp0(mediaType, "audio/mpeg")) {
+ int mpegversion = 0;
+ gst_structure_get_int(structure, "mpegversion", &mpegversion);
+ switch (mpegversion) {
+ case 1:
+ elementClass = "mpegaudioparse";
+ break;
+ case 2:
+ case 4:
+ elementClass = "aacparse";
+ break;
+ default: {
+ GUniquePtr<char> capsString(gst_caps_to_string(caps));
+ GST_WARNING("Unsupported audio mpeg caps: %s", capsString.get());
+ }
+ }
}
- if (!g_strcmp0(mediaType, "video/x-h264")) {
- GstElement* h264parse = makeGStreamerElement("h264parse", parserName.get());
- return GRefPtr<GstElement>(h264parse);
- }
-
- return nullptr;
+ return GRefPtr<GstElement>(makeGStreamerElement(elementClass, parserName.get()));
}
AtomString AppendPipeline::generateTrackId(StreamType streamType, int padIndex)
@@ -757,7 +791,7 @@
gst_bin_add(GST_BIN(appendPipeline->pipeline()), appsink.get());
gst_element_sync_state_with_parent(appsink.get());
- entryPad = appsinkPad = adoptGRef(gst_element_get_static_pad(appsink.get(), "sink"));
+ appsinkPad = adoptGRef(gst_element_get_static_pad(appsink.get(), "sink"));
#if !LOG_DISABLED
appsinkDataEnteringPadProbeInformation.appendPipeline = appendPipeline;
@@ -774,13 +808,13 @@
// Some audio files unhelpfully omit the duration of frames in the container. We need to parse
// the contained audio streams in order to know the duration of the frames.
// This is known to be an issue with YouTube WebM files containing Opus audio as of YTTV2018.
- if ((parser = createOptionalParserForFormat(trackId, caps.get()))) {
- gst_bin_add(bin, parser.get());
- gst_element_sync_state_with_parent(parser.get());
- gst_element_link(parser.get(), appsink.get());
- ASSERT(GST_PAD_IS_LINKED(appsinkPad.get()));
- entryPad = adoptGRef(gst_element_get_static_pad(parser.get(), "sink"));
- }
+ // If no parser is needed, a GstIdentity element will be created instead.
+ parser = createOptionalParserForFormat(trackId, caps.get());
+ gst_bin_add(bin, parser.get());
+ gst_element_sync_state_with_parent(parser.get());
+ gst_element_link(parser.get(), appsink.get());
+ ASSERT(GST_PAD_IS_LINKED(appsinkPad.get()));
+ entryPad = adoptGRef(gst_element_get_static_pad(parser.get(), "sink"));
}
void AppendPipeline::hookTrackEvents(Track& track)
@@ -841,6 +875,8 @@
return "Invalid";
case StreamType::Unknown:
return "Unknown";
+ default:
+ return "(Unsupported stream type)";
}
}
#endif
Modified: trunk/Source/WebCore/platform/graphics/gstreamer/mse/AppendPipeline.h (281616 => 281617)
--- trunk/Source/WebCore/platform/graphics/gstreamer/mse/AppendPipeline.h 2021-08-26 07:57:35 UTC (rev 281616)
+++ trunk/Source/WebCore/platform/graphics/gstreamer/mse/AppendPipeline.h 2021-08-26 09:29:59 UTC (rev 281617)
@@ -81,9 +81,10 @@
GRefPtr<GstCaps> caps;
FloatSize presentationSize;
- GRefPtr<GstPad> entryPad; // Sink pad of the parser (if any) or the appsink.
- GRefPtr<GstElement> parser; // Optional (needed by some formats).
+ // Needed by some formats. To simplify the code, parser can be a GstIdentity when not needed.
+ GRefPtr<GstElement> parser;
GRefPtr<GstElement> appsink;
+ GRefPtr<GstPad> entryPad; // Sink pad of the parser/GstIdentity.
GRefPtr<GstPad> appsinkPad;
RefPtr<WebCore::TrackPrivateBase> webKitTrack;
@@ -149,9 +150,11 @@
MediaPlayerPrivateGStreamerMSE* m_playerPrivate;
MediaTime m_initialDuration;
- GRefPtr<GstElement> m_appsrc;
GRefPtr<GstElement> m_pipeline;
GRefPtr<GstBus> m_bus;
+ GRefPtr<GstElement> m_appsrc;
+ // To simplify the code, m_typefind and m_demux can be a GstIdentity when not needed.
+ GRefPtr<GstElement> m_typefind;
GRefPtr<GstElement> m_demux;
Vector<std::unique_ptr<Track>> m_tracks;