Re: [webkit-dev] Compile times and class-scoped enums

2023-01-23 Thread Jer Noble via webkit-dev


> On Jan 23, 2023, at 11:05 AM, Geoffrey Garen  wrote:
> 
>>> However, this change requires moving class-scoped enums out into the 
>>> namespace scope.
>> 
>> Seems worthwhile. Doesn’t seem to me like it would have far reaching effects.
> 
> I agree.
> 
>> +using Type = DOMAudioSessionType;
> 
> Did you do this to make the patch smaller, or do you prefer this style?

Yes to both. IMO, there’s nothing inherently wrong with having an enum scoped 
to a class, apart from the fact that it makes the enum impossible to forward 
declare. If there were some language-supported mechanism to forward-declare 
class-scoped enums, we’d probably just do that instead. So, this pattern of 
declaring the enum outside of a class and pulling it into the class with a 
using declaration is a bit ugly, insofar as it pollutes the enclosing 
namespace, but it does allow you to continue to use the type as if it were 
class scoped.

-Jer
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


[webkit-dev] Compile times and class-scoped enums

2023-01-20 Thread Jer Noble via webkit-dev
Hi all!

I’ve noticed that compile times for the WebKit project have started creeping up 
again, so I and a few other WebKit contributors started looking into possible 
compile time improvements using the tools listed in 
. One cause of long 
compile times is when a commonly-included header (like Document.h) includes 
other headers (which include other headers, ad nauseam). While much of the 
time those includes can be avoided by forward-declaring types, that is 
impossible for types (specifically enums) scoped within classes.

I attempted to address this in , 
which (on this machine) reduces the total compile time of Document.h in the 
WebCore project from about 1000s to about 200s.

However, this change requires moving class-scoped enums out into the namespace 
scope. E.g.:

> diff --git a/Source/WebCore/Modules/audiosession/DOMAudioSession.h 
> b/Source/WebCore/Modules/audiosession/DOMAudioSession.h
> index 01bf6960d3a4..d84e1eae78d5 100644
> --- a/Source/WebCore/Modules/audiosession/DOMAudioSession.h
> +++ b/Source/WebCore/Modules/audiosession/DOMAudioSession.h
> @@ -36,14 +36,17 @@
>  
>  namespace WebCore {
>  
> +enum class DOMAudioSessionType : uint8_t { Auto, Playback, Transient, 
> TransientSolo, Ambient, PlayAndRecord };
> +enum class DOMAudioSessionState : uint8_t { Inactive, Active, Interrupted };
> +
>  class DOMAudioSession final : public RefCounted<DOMAudioSession>, public 
> ActiveDOMObject, public EventTarget, public 
> AudioSession::InterruptionObserver {
>  WTF_MAKE_ISO_ALLOCATED(DOMAudioSession);
>  public:
>  static Ref<DOMAudioSession> create(ScriptExecutionContext*);
>  ~DOMAudioSession();
>  
> -enum class Type : uint8_t { Auto, Playback, Transient, TransientSolo, 
> Ambient, PlayAndRecord };
> -enum class State : uint8_t { Inactive, Active, Interrupted };
> +using Type = DOMAudioSessionType;
> +using State = DOMAudioSessionState;
>  
>  ExceptionOr<void> setType(Type);
>  Type type() const;

So that these enums can be forward declared in Document.h, rather than 
including the header wholesale.
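In a consumer such as Document.h, a forward declaration then replaces the 
include. A sketch, again with hypothetical simplified types:

```cpp
#include <cstdint>

namespace WebCore {

// Opaque enum declaration: because the underlying type is fixed, this is a
// complete type, so it can be stored and passed by value without ever
// including DOMAudioSession.h.
enum class DOMAudioSessionType : uint8_t;

class Document {
public:
    void setAudioSessionType(DOMAudioSessionType type) { m_audioSessionType = type; }
    DOMAudioSessionType audioSessionType() const { return m_audioSessionType; }

private:
    DOMAudioSessionType m_audioSessionType { };
};

} // namespace WebCore
```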

However, this requires a significant coding style change, both to existing code 
and new code, and as such, it should probably be discussed here. So, what do 
people think? Is the change in coding style (moving class-scoped enums out into 
namespace scope) worth doing if it means a significant increase in compile 
speeds?

-Jer


Re: [webkit-dev] Use of Swift (for bridging) in the WebKit project

2021-06-09 Thread Jer Noble via webkit-dev


> On Jun 9, 2021, at 11:31 AM, Geoff Garen  wrote:
> 
> In this specific case
> 
>   What is the API we’re trying to call into?

This is calling into the GroupActivities.framework API which was announced at 
WWDC this week.

>   Is using Swift the only way to call into it?

This was true at the time we wrote it, as GroupActivities required using the 
Combine framework, which does not have an Objective-C API. I'm learning that 
this may not be the case in the future. If GroupActivities is modified to not 
require Combine, I will gladly remove the .swift.

>   Is there any way to reduce the use of Swift to only the calls into it, 
> and not the surrounding objects (which all seem to be marked @objc anyway)?

The surrounding objects have to be written in Swift (in order to call Swift 
APIs), but callable from Objective-C/++, which is why they're marked as @objc. 
I believe that they are as minimal as I can make them.

-Jer

> Thanks,
> Geoff
> 
>> On Jun 8, 2021, at 4:27 PM, Sam Weinig via webkit-dev 
>>  wrote:
>> 
>> Hi Jer,
>> 
>> I think it sounds like a reasonable rule to allow Swift for bridging 
>> purposes only, with the caveat that we should prefer Objective-C/C where it 
>> can be used.
>> 
>> The one other place that Swift seems reasonable for WebKit is in the 
>> definition and refinement of Swift bindings to WebKit’s public API.
>> 
>> That is to say, for the time being, we should avoid Swift in tools and core 
>> functionality.
>> 
>> Thanks for bringing this up on the list.
>> 
>> - Sam
>> 
>>> On Jun 8, 2021, at 3:57 PM, Jer Noble via webkit-dev 
>>>  wrote:
>>> 
>>> Hi all!
>>> 
>>> We're working on some new features that currently use APIs exposed through 
>>> Swift. We have not yet approved writing and committing WebKit code in 
>>> Swift, given runtime, library, and just plain mental overhead that comes 
>>> with adding a new language to the project. But I'd argue that doing so for 
>>> the purpose of allowing existing C++ code to call into Swift APIs is 
>>> probably not terrible.
>>> 
>>> Should we relax our "no new language" policy, only for the purposes of 
>>> allowing our core language code to call into APIs in Swift?
>>> 
>>> (ref: https://bugs.webkit.org/show_bug.cgi?id=226757)
>>> 
>>> Thanks, and look forward to hearing from everyone,
>>> 
>>> -Jer
> 



[webkit-dev] Use of Swift (for bridging) in the WebKit project

2021-06-08 Thread Jer Noble via webkit-dev
Hi all!

We're working on some new features that currently use APIs exposed through 
Swift. We have not yet approved writing and committing WebKit code in Swift, 
given runtime, library, and just plain mental overhead that comes with adding a 
new language to the project. But I'd argue that doing so for the purpose of 
allowing existing C++ code to call into Swift APIs is probably not terrible.

Should we relax our "no new language" policy, only for the purposes of allowing 
our core language code to call into APIs in Swift?

(ref: https://bugs.webkit.org/show_bug.cgi?id=226757)

Thanks, and look forward to hearing from everyone,

-Jer


Re: [webkit-dev] Waiting for an event in layout test...

2019-05-31 Thread Jer Noble

> On May 30, 2019, at 11:01 PM, Ryosuke Niwa  wrote:
> 
> I’m gonna give you a game changing function:
> 
> function listenForEventOnce(target, name, timeout) {
> return new Promise((resolve, reject) => {
> const timer = timeout ? setTimeout(reject, timeout) : null;
> target.addEventListener(name, () => {
> if (timer)
> clearTimeout(timer);
> resolve();
> }, {once: true});
> });
> }
> 
> You can then write a test like this:
> await listenForEventOnce(document.body, 'load');
> // do stuff after load event.
> 
> await listenForEventOnce(document.querySelector('input'), 'focus');
> await listenForEventOnce(visualViewport, 'scroll', 5000);
> // After the input element is focused, then the visual viewport scrolled or 5 
> seconds has passed.

Ryosuke++.

Just FYI, if you’re writing LayoutTests, we’ve got something very similar in 
LayoutTests/media/video-test.js:

function waitFor(element, type) {
return new Promise(resolve => {
element.addEventListener(type, event => {
consoleWrite(`EVENT(${event.type})`);
resolve(event);
}, { once: true });
});
}

And:

function sleepFor(duration) {
return new Promise(resolve => {
setTimeout(resolve, duration);
});
}

And also:

function shouldReject(promise) {
return new Promise((resolve, reject) => {
promise.then(result => {
logResult(Failed, 'Promise resolved incorrectly');
reject(result);
}).catch((error) => {
logResult(Success, 'Promise rejected correctly');
resolve(error);
});
});
}

So you could also do:

await Promise.race([waitFor(document.body, 'load'), 
shouldReject(sleepFor(5000))]);

Although we’d probably want a new function, “rejectIn(duration)”, and then it’d 
be:

await shouldResolve(Promise.race([waitFor(document.body, 'load'), 
rejectIn(5000)]));
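A rejectIn() along those lines is a one-liner. A sketch of the hypothetical 
helper (it is not in video-test.js today):

```javascript
// Hypothetical rejectIn() helper: a promise that never resolves, and
// rejects with an Error once `duration` milliseconds have elapsed.
function rejectIn(duration) {
    return new Promise((resolve, reject) => {
        setTimeout(() => reject(new Error(`Timed out after ${duration}ms`)), duration);
    });
}
```

Racing it against waitFor() turns a hung test into a fast, explicit failure 
rather than a timeout of the whole harness.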

But all that said, I agree that wrapping events in Promises makes it very easy 
to write readable, single `async` function test cases. A+++, would write test 
again.

-Jer


> - R. Niwa
> 


Re: [webkit-dev] [MSE] Range ends inclusion is deleting wanted MediaSample's

2017-11-14 Thread Jer Noble


> On Nov 14, 2017, at 1:36 PM, Alicia Boya García  wrote:
> 
> Hi, Jer.
> 
> Sorry for the late reply, I've been busy investigating all of this.
> 
> I'm replying to your points in reverse order...
> 
> On 09/11/17 23:23, Jer Noble wrote:
>>> As I understand it, the point of that part of the algorithm is to delete
>>> old samples that are -- even partially -- in the presentation time range
>>> of the newly appended one, but using (beginTime, endTime] fails to
>>> accomplish that in two cases:
>>> 
>>> a) If there is an existing MediaSample with PTS=9 and DUR=1 it will not
>>> be removed because beginTime (=9) is exclusive.
>> 
>> Eh, not really. The search space is the entire duration of the sample,
>> so an exclusive beginTime of 9 _should_ match PTS=9, DUR=1 because 9 <
>> presentationEndTime=10.  I’m pretty sure this is correct, but a mock
>> test case should clear this up pretty quickly.
> 
> That's totally not the case, though the first time I gave a look at this
> code I thought that too... presentationRange() is a std::map whose
> key type is a single MediaTime specifying the PTS of the MediaSample,
> not the duration or the full presentation time range.
> 
> Also, neither findSamplesWithinPresentationRange() or its siblings take
> durations into account, as they search only with the key (PTS) in
> mind... Well, actually findSamplesWithinPresentationRangeFromEnd() looks
> at the values, but it only checks .presentationTimestamp() so it's
> functionally the same.

Aha, okay.  There’s also SampleIsLessThanMediaTimeComparator, 
SampleIsGreaterThanMediaTimeComparator, which does take duration into account, 
but those look to be unused.

> Adding a sample with same PTS as an existing one does not remove the old
> one with this check. There are two possible behaviors that can mask the
> issue:
> 
> a) The previous frame removal step (see step 1.13 in the spec or 1.14 in
> WebKit) removed the overlapping frame. This is only the case if the
> frame started a new Coded Frame Group.
> 
> b) The existing overlapping frame was not removed, but later on when the
> new MediaSample was passed to std::map::insert() it was ignored because
> it was a duplicated key (same PTS as the existing overlapping frame).
> 
> You can confirm that it's the old MediaSample the one persisted by
> setting different durations for the old and the new one but the same
> PTS. After the append of the second frame, the old duration persists and
> there are no new frames in the TrackBuffer.
> 
> On 09/11/17 23:23, Jer Noble wrote:
>>> My question is... shouldn't the range ends inclusivity be the other way
>>> around i.e. [beginTime, endTime)?
>> 
>> beginTime was intended to be exclusive so that, given a [0, 9] range, 
>> searching for (9, 10] wouldn’t match the last sample in the [0, 9] range.
>> 
>> But now that you mention it, perhaps the real problem is that both beginTime 
>> _and_ endTime should be exclusive.
> 
> This is not an issue because as explained before we are considering just
> the frame start PTS, not its entire presentation interval.

I think we should consider the entire interval, though. See below:

> beginTime must be inclusive so that old frames with the same PTS are
> deleted. endTime must be exclusive so that non overlapping consecutive
> frames are not deleted.
> 
> I have checked this with the spec:
> https://www.w3.org/TR/media-source/#sourcebuffer-coded-frame-processing
> 
> 1.14. Remove existing coded frames in track buffer:
> -> If highest end timestamp for track buffer is not set:
>   [...]
> -> If highest end timestamp for track buffer is set and less than or
>equal to presentation timestamp:
>   Remove all coded frames from track buffer that have a
>   presentation timestamp greater than or equal to highest end
>   timestamp and less than frame end timestamp.

Yeah, the spec is not the best here.  If you have, e.g. a sample with a PTS of 
0 and a duration of 100, and then insert a sample with a PTS of 50 and a 
duration of 100, you’d expect that to cause the first sample to be removed. But 
a strict reading of the spec says that sample stays.  Now you have two 
overlapping samples.  It can get even weirder if you insert a sample with a PTS 
of 25 and a duration of 50.  Now, strictly implementing the spec, you have a 
sample that overlaps on both ends of another sample.  What does that even mean 
for a 
decoder?  It’s almost guaranteed to generate a decode error, unless both of the 
overlapping samples are I-frames.
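With half-open presentation intervals [pts, pts + duration), the overlap check 
the algorithm needs can be sketched like this (hypothetical simplified types, 
not WebKit's MediaSample):

```cpp
// Hypothetical simplified sample type: a frame occupies the half-open
// presentation interval [pts, pts + duration).
struct Sample {
    double pts;
    double duration;
    double end() const { return pts + duration; }
};

// Two half-open intervals overlap iff each sample starts strictly before the
// other ends; samples that are merely adjacent do not overlap.
bool presentationOverlaps(const Sample& a, const Sample& b)
{
    return a.pts < b.end() && b.pts < a.end();
}
```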

I think the intent of the spec is clear: if any part of a previous sample 
overlaps the new one, it has to be removed, and a

Re: [webkit-dev] [MSE] Range ends inclusion is deleting wanted MediaSample's

2017-11-09 Thread Jer Noble
Hi Alicia,

It should be possible to make a “mock” testcase which doesn’t rely on any 
particular media engine and which can demonstrate this bug.

Continued:

> On Nov 9, 2017, at 2:03 PM, Alicia Boya García  wrote:
> 
> Hi, WebKittens!
> 
> In the YouTube Media Source Extensions conformance tests there is one
> called 36.AppendOpusAudioOutOfOrder where two audio media segments are
> appended out of order to a SourceBuffer: First, a segment with the PTS
> ranges [10, 20) is added. Then, another one with [0, 10) is added.
> 
> (I have rounded the actual timestamps to near integers for easier
> understanding).
> 
> Almost at the very end of the process the buffered ranges are like this:
> 
> [ 0,  9)
> [10, 20)
> 
> At this point, SourceBuffer::sourceBufferPrivateDidReceiveSample() is
> called with the last audio frame, that has PTS=9 and DUR=1.
> 
> The execution reaches this conditional block:
> 
> ```
> if (trackBuffer.highestPresentationTimestamp.isValid() &&
> trackBuffer.highestPresentationTimestamp <= presentationTimestamp) {
> ```
> 
> trackBuffer.highestPresentationTimestamp contains the highest PTS so far
> within the current segment. The condition is true (9 <= 9) as expected
> for sequentially appended frames.
> 
> Inside there is this block of code:
> 
> ```
> MediaTime highestBufferedTime = trackBuffer.buffered.maximumBufferedTime();
> 
> PresentationOrderSampleMap::iterator_range range;
> if (highestBufferedTime - trackBuffer.highestPresentationTimestamp <
> trackBuffer.lastFrameDuration)
>range =
> trackBuffer.samples.presentationOrder().findSamplesWithinPresentationRangeFromEnd(trackBuffer.highestPresentationTimestamp,
> frameEndTimestamp);
> else
>range =
> trackBuffer.samples.presentationOrder().findSamplesWithinPresentationRange(trackBuffer.highestPresentationTimestamp,
> frameEndTimestamp);
> 
> if (range.first != trackBuffer.samples.presentationOrder().end())
>erasedSamples.addRange(range.first, range.second);
> ```
> 
> The first if block there is an optimization, it decides whether to do a
> binary search in the entire collection of MediaSample's or do a linear
> search starting with the MediaSample with the highest PTS (which is
> faster when appends always occur at the end), but the result is the same
> in both cases.
> 
> presentationOrder() is a std::map.
> 
> findSamplesWithinPresentationRange(beginTime, endTime) and its *FromEnd
> counterpart both return a pair of STL-style iterators which cover a
> range of MediaSample objects whose presentation timestamps sit in the
> range (beginTime, endTime] (beginTime is exclusive, endTime is inclusive).
> 
> Then, it marks those MediaSample objects (frames) for deletion.
> 
> My question is... shouldn't the range ends inclusivity be the other way
> around i.e. [beginTime, endTime)?

beginTime was intended to be exclusive so that, given a [0, 9] range, searching 
for (9, 10] wouldn’t match the last sample in the [0, 9] range.

But now that you mention it, perhaps the real problem is that both beginTime 
_and_ endTime should be exclusive.
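Over a std::map keyed by PTS, the two range-end conventions differ only in 
which of lower_bound/upper_bound is used at each end. A toy sketch 
(hypothetical simplified code, not WebKit's SampleMap):

```cpp
#include <map>
#include <utility>

using SampleMap = std::map<int, int>; // PTS -> duration

// (beginTime, endTime]: begin exclusive, end inclusive.
std::pair<SampleMap::iterator, SampleMap::iterator>
findExclusiveInclusive(SampleMap& samples, int beginTime, int endTime)
{
    return { samples.upper_bound(beginTime), samples.upper_bound(endTime) };
}

// [beginTime, endTime): begin inclusive, end exclusive.
std::pair<SampleMap::iterator, SampleMap::iterator>
findInclusiveExclusive(SampleMap& samples, int beginTime, int endTime)
{
    return { samples.lower_bound(beginTime), samples.lower_bound(endTime) };
}
```

With samples at PTS 9 and PTS 10, querying (9, 10] selects only the 
non-overlapping sample at PTS 10, while [9, 10) selects only the overlapping 
sample at PTS 9, which is exactly the difference between cases (a) and (b) 
below.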

> As I understand it, the point of that part of the algorithm is to delete
> old samples that are -- even partially -- in the presentation time range
> of the newly appended one, but using (beginTime, endTime] fails to
> accomplish that in two cases:
> 
> a) If there is an existing MediaSample with PTS=9 and DUR=1 it will not
> be removed because beginTime (=9) is exclusive.

Eh, not really. The search space is the entire duration of the sample, so an 
exclusive beginTime of 9 _should_ match PTS=9, DUR=1 because 9 < 
presentationEndTime=10.  I’m pretty sure this is correct, but a mock test case 
should clear this up pretty quickly.

> b) If there is an existing MediaSample with PTS=10 and DUR=1 it WILL be
> removed even though there is no overlap with the sample being appended
> (PTS=9 DUR=1) because endTime (=10) is inclusive. This is exactly what
> is making the YTTV test fail in my case.

This may be true because endTime is inclusive (and shouldn’t be). 

>Before:
> 
>[ 0,  9)
>[10, 20)
> 
>Expected result after adding [9, 10):
> 
>[0, 20)
> 
>Actual result in WebKit:
> 
>[ 0, 10)
>[11, 20)

It looks like findSamplesWithinPresentationRange() is only ever used in that 
specific part of sourceBufferPrivateDidReceiveSample(), so this should be very 
safe to change.

-Jer



Re: [webkit-dev] minimum version of MacOSX to have the VTB?

2017-01-27 Thread Jer Noble

> On Jan 25, 2017, at 10:41 PM, Alexandre GOUAILLARD  
> wrote:
> 
> Dear all,
> 
> I'm trying to port https://trac.webkit.org/changeset/210974 
>  upstream.
> 
> I would need to know what was the first version of MacOs X to support the VTB 
> to be sure we're not breaking upstream build first.
> 
> I could not find any definitive answer online, even though I saw hints that 
> 10.8 might be it.

The APIs are marked up to indicate they’re available as far back as 10.8. 
However, they may have been private in that release as well as 10.9. They’re 
definitely public in 10.10.  So builds made against the 10.10 (or later) SDK 
can still run against OS’s as early as 10.8.

-Jer

> Does anybody know?
> 
> thanks in advance.
> 
> Alex.
> 
> -- 
> Alex. Gouaillard, PhD, PhD, MBA
> 
> President - CoSMo Software Consulting, Singapore
> 
> sg.linkedin.com/agouaillard 
> 


Re: [webkit-dev] Making MockMediaPlayerMediaSource and other MediaPlayerPrivateInterface subclasses work together

2016-11-28 Thread Jer Noble

> On Nov 24, 2016, at 10:00 AM, Enrique Ocaña González  
> wrote:
> 
> Hi,
> 
> These days I've been working on improving the layout test passrate of our new 
> Media Source Extensions GStreamer platform implementation 
> (MediaPlayerPrivateGStreamerMSE), but I'm having problems with the selection 
> of the right MediaPlayerPrivateInterface implementation for each use case.
> 
> As I understand it (please correct me if I'm wrong), under normal 
> circumstances (no MEDIA_SOURCE enabled) MediaPlayer tries to find the best 
> media engine (ie: MediaPlayerPrivateInterface implementation) available to 
> play a content. It does so by asking each engine if they support (yes/no/
> maybe) the particular mime type of the video. For the "maybe" case, the 
> engine 
> is instantiated, loading goes forward and the networkState is set to 
> FormatError in case something goes wrong. In that case MediaPlayer tries the 
> next available engine.
> 
> Things work differently for MSE. No matter what support the engine reports, all 
> the engines are tried and the content loading is attempted. Setting 
> FormatError in networkState is the only way in which an engine can reject 
> being selected. Unfortunately, it's impossible for 
> MediaPlayerPrivateGStreamerMSE to take that decision at loading time, because 
> usually the load happens before the MediaSource has been configured with 
> SourceBuffers (the ones specifying a mime type). Therefore, the MSE player 
> private must always succeed blindly on loading if it wants to have any 
> opportunity. This works fine for real world use cases.
> 
> My issue is related to MockMediaPlayerMediaSource, the test engine which 
> should take care of the "video/mock" content used in some layout tests. Our 
> MSE player private gets selected and performs the loading (of an empty 
> MediaSource). However, when a video/mock SourceBuffer is added, it's too late 
> for the MSE player private to reject being in charge. Both MediaSource and 
> SourceBuffer are already using the GStreamer-related subclasses as their 
> MediaSourcePrivate and SourceBufferPrivate counterparts and they can't be hot-
> swapped with the Mock-related subclasses. Returning NotSupported in 
> MediaSourceGStreamer::addSourceBuffer() only makes things worse. The 
> JavaScript code triggering that call expects addSourceBuffer() to be handled 
> by the mock engine and to succeed on the first attempt. The JS code isn't 
> supposed to retry the call to addSourceBuffer().
> 
> I wonder what's the right way to manage the competitive selection between the 
> platform player private and the mock player private to make real world use 
> cases and test use cases work together. In particular, I wonder how it can 
> successfully work in the Mac implementation.

It’s a bit of a hack, the way this works in macOS.  When a LayoutTest asks to 
install the mock MSE player in Internals::initializeMockMediaSource(), we 
uninstall all the AVFoundation-based MediaPlayerPrivates, including 
MediaPlayerPrivateAVFoundationObjC, MediaPlayerPrivateMediaSourceAVFObjC, and 
MediaPlayerPrivateMediaStreamAVFObjC.  This leaves the 
MockMediaPlayerMediaSource player as the only remaining installed player.

I’m very open to ideas about how to clean this area up, since, as you noted, we 
don’t even get a MIME type until long after the MediaPlayerPrivate is created.  
But for now, you can add some calls to disable the standard GTK 
MediaPlayerPrivates in Internals::initializeMockMediaSource(), which should 
allow you to run tests correctly.
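The selection scheme described above (yes/no/maybe support answers, with a 
load-time FormatError acting as the only rejection) can be modeled in a few 
lines. This is a hypothetical simplified registry, not WebKit's actual 
MediaPlayer code:

```cpp
#include <string>
#include <vector>

// Hypothetical simplified model of media-engine selection: each engine
// reports whether it supports a MIME type, and "maybe" engines can still
// fail at load time, in which case the next engine is tried.
enum class Supports { No, Maybe, Yes };

struct Engine {
    std::string name;
    Supports (*supportsType)(const std::string& mimeType);
    bool (*load)(const std::string& url);
};

// Returns the name of the first engine that accepts the load, or "".
std::string selectEngine(const std::vector<Engine>& engines,
                         const std::string& mimeType, const std::string& url)
{
    for (const Engine& engine : engines) {
        if (engine.supportsType(mimeType) == Supports::No)
            continue;
        if (engine.load(url))
            return engine.name;
        // Load failed (networkState == FormatError); try the next engine.
    }
    return { };
}
```

Uninstalling every non-mock engine, as Internals::initializeMockMediaSource() 
does on macOS, amounts to shrinking the `engines` list to a single entry so 
the mock always wins.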

-Jer

> The MediaPlayerPrivateMediaSourceAVFObjC implementation declares that it 
> doesn't support and empty mimetype (irrelevant here, as it'll always be tried 
> by MediaPlayer when MEDIA_SOURCE is enabled). Then, on load, it doesn't check 
> anything so apparently succeeds.
> 
> I would be really grateful if Jer Noble or anybody else with knowledge on the 
> matter could devote some minutes to shed some light about the right way to 
> make the mock player private and the MSE player private live together.
> 
> Thank you.
> 
> -- 
> Enrique Ocaña González


Re: [webkit-dev] Using JavaScriptCore in an audio context

2015-09-22 Thread Jer Noble

> On Sep 22, 2015, at 11:45 AM, Stéphane Letz  wrote:
> 
>> Can you file a bug with exact repro steps?  I played with these and did not 
>> hear glitches.  Maybe I used a different version of WebKit or different 
>> hardware than you.  Filing a bug with specifics can help make these things 
>> clear.
> 
> What version of WebKit/Safari are you using ?
> 
>> 
>> Theoretically, this kind of real-time workload is probably the best argument 
>> for an execution environment that doesn’t have a warm-up.  On the other 
>> hand, it could just be a silly bug in our engine.  We usually do pretty well 
>> at cold code execution.
> 
> I'll try again, but if it is with an older version of Webkit/Safari and you 
> dont' hear the problem then...
> 
>> 
>>> 
>>> So my understanding is that we would be in a very similar case if we 
>>> directly use JavaScriptCore. Is there any safe way to be sure the JS code 
>>> is actually compiled before executing it?
>> 
>> No.
>> 
>>> By calling the "compute" code thousand of time outside the audio callback? 
>>> Or is there any other more reliable trick to do that?
>> 
>> That’s the most reliable.
>> 
>> But I just want to be clear.  The fact that the act of compiling code causes 
>> memory allocation is just he tip of the iceberg.  There’s no way to prevent 
>> JS execution from allocating, and JS has no guarantees about what operations 
>> may lead to memory allocation.  Even “a+b” could allocate even if a and b 
>> are both numbers, provided the right conditions are met.
> 
> Even in pure asm.js code? So then we would need to go for a more reliable  
> Audio callback ==> double buffer ==> JS (asm.js) code called in a high 
> priority thread scheme (but not real-time)? (so similar design to what 
> WebAudio worker is supposed to achieve : 
> https://developer.mozilla.org/fr/docs/Web/API/Web_Audio_API#Audio_Workers)

The Audio Worker thread will not run at a higher-than-normal priority (see 
, paragraph 
2).  The only performance benefit of the Audio Worker is that it will continue 
to be serviced while the main thread is blocked.

-Jer

> 
> Stephane


Re: [webkit-dev] WebCore/platform standalone library

2015-03-20 Thread Jer Noble

> On Mar 20, 2015, at 11:40 AM, Antti Koivisto  wrote:
> 
> Reusable Os Fitting Layer

…Containing Opensource Platform Types, Events, and Resources.

-Jer

> 
> 
>antti
> 
> On Fri, Mar 20, 2015 at 11:26 AM, Simon Fraser  wrote:
> 
>> On Mar 20, 2015, at 11:03 AM, Edward O'Connor  wrote:
>> 
>>> >> This almost makes me want to suggest a jokey name for Platform. I can’t 
>>> >> off the top of my head think of a good expansion of OMG, though. Or BBQ.
>>> >
>>> > I am not a pro at this, but here are a few tries: Lower-level Object 
>>> > Library. Algorithm Reuse Framework. New Framework for WebCore, New System 
>>> > Framework for WebCore.
>>> 
>>> Platform Obfuscation Source.
>>> 
>>> Platform Interface and Testing Abstraction.
>> 
>> General Independent Framework (pronounced "jiff," of course).
>> 
>> Low-Level Abstract Platform would also be a logical choice.
> 
> Low-level Object Library.
> 
> Simon
> 
> 





Re: [webkit-dev] Safari browser on Mac OSX complains "AudioContext.createMediaStreamSource" is undefined !

2015-03-19 Thread Jer Noble

> On Mar 19, 2015, at 11:47 AM, Sasi San  wrote:
> 
> Thanks Chris. Is there any other way I can turn that flag?  or Do you any 
> idea whether it will be supported  in the near furture?

Sasi, that is a compile-time flag, so you’d have to build WebKit from source 
with that flag enabled. There’s no guarantee that the code will compile on Mac, 
nor that it will work once compiled.

Apple does not comment on future releases of Safari, their timing, nor their 
contents.

-Jer



Re: [webkit-dev] Question regarding video on canvas

2015-03-13 Thread Jer Noble
On Mar 13, 2015, at 12:54 AM, Paul Preibisch  wrote:
> 
> Hi there,  I am working on a mobile web app which
> requires videos to be displayed within an iframe or on a canvas, and NOT 
> displayed full-screen
> on the iphone when played.
> 
> I've searched the web, but am unclear whether this is actually possible or 
> not.
> 
> I assume this feature is currently NOT available, as every browser video I 
> have come across always plays in full screen mode.
> 
> 1) I'd be grateful If you could let me know IF playing a video within an 
> iframe or on canvas IS in fact possible (without going full screen) ON an 
> iphone.
> 
> 2) If it is NOT possible, is this feature on the roadmap? If so, what is the 
> release date?
> 
> Thanks a lot for your information,
> 
> Kind regards

Paul,

This is the wrong venue; webkit-dev is for the development of the WebKit engine 
itself. The appropriate venue for this question is webkit-h...@lists.webkit.org 
.

-Jer



Re: [webkit-dev] “createMediaStreamSource” of “AudioContext” is undefined using Safari browser !

2015-03-06 Thread Jer Noble

> On Mar 5, 2015, at 3:24 PM, Sasi San  wrote:
> 
> Hi-
> 
> I am using Temasys’ Free WebRTC Plugin in my project with web audio APIs to 
> get the audio stream. It works on all the browsers except Safari which is 
> complaining about “createMediaStreamSource” of “AudioContext” is undefined.
> 
> Please let me know when this API will be available for Safari. or Is there 
> any other API available to capture the audio stream from "onAudioProcess” 
> function.
> 


This mailing list is intended for the development of WebKit itself. Your 
question would be better addressed in the webkit-help mailing list, as listed 
here:

-Jer





Re: [webkit-dev] CDMi interface for EME?

2014-06-10 Thread Jer Noble

> On Jun 9, 2014, at 3:56 PM, Brendan Long  wrote:
> 
> I'm looking into EME and something I've been asked to investigate is 
> Microsoft's proposed CDMi interface, and if we could use it in WebKit. The 
> idea is that we would expose a few common interfaces (see page 12 of the PDF 
> at that link), like "Cdm_MediaKeys(wchar_t *keySystem)" constructor, 
> "createSession(wchar_t* type, const unsigned char *initData, const unsigned 
> char *AppData)" to create a CDM session, etc. The advantage of this would be 
> that we could create CDMs for each platform and then re-use them in multiple 
> browsers.
> 
> Is there any interest in this in WebKit? I'm starting to look through the 
> code to see how it could be done (presumably implement CDMPrivate / 
> CDMPrivateInterface to use CDMs matching this interface), but I figured I'd 
> go to the source and see if anyone has looked into doing this before.

I haven’t seen this document before, but after a brief look it seems that 
implementing a new CDMPrivate and CDMSession would be the best way to add 
support for this CDMi interface.

It does look like the CDMi module implements a lot of the platform-independent 
parts of the EME spec (whereas CDMPrivate and CDMSession are the 
platform-specific parts), so there may be some redundant logic.  And we may 
need to expose some new methods from CDMSession -> MediaKeySession, e.g. to 
change the MediaKeySession’s readyState.  Apart from that, this looks doable.

-Jer

> Thanks,
> Brendan Long
> CableLabs, Inc.





Re: [webkit-dev] Reference count leak with InBandTextTracks?

2013-10-01 Thread Jer Noble

On Oct 1, 2013, at 2:48 AM, Benjamin Dupont (bedupont)  
wrote:

> Hi all,
>  
> I am currently working on the InbandTextTracks in webkit and I am trying to 
> understand how the memory is released.
>  
> When we launch the track-in-band.html layout test, two in-band text tracks 
> have been created and added, the corresponding RefPtr has a refCount equal 
> to three.
> 1. Why are there 3 owners for each in-band text track? Is there an hidden 
> cache mechanism?

The tracks are being referenced by the JavaScript running in the test page.

> After this test, if we load another page, the player is destroyed and the 
> clearTextTracks method is called.
> In my understanding, the player should be the only owner of in-band text 
> tracks and thus after the clearTextTracks method is called, the ref count 
> should be decreased to 0 and the in-band text track object should be deleted.

Your understanding is incorrect.  Each track is also referenced by the 
JSTextTrack wrapper created when referencing videoElement.textTracks[].  
Furthermore, those wrappers are stored off as variables inbandTrack{1,2,3,4} in 
the global context, so they won't be destroyed by GC until the window object is 
destroyed.

> In fact, after the clearTextTracks method the ref count isn’t equal to 0 
> thus the in-band text track object isn’t deleted.
> This text track object is deleted when the clear memory cache is called.
>  
> 2. Is this normal behavior? If so, what is the benefit of using a smart pointer?

Yes.

> 3. How does the clear memory cache know that this ref pointer (with a ref 
> count != 0) can be released? 

This is precisely the point of using a smart pointer; since the track is still 
being referenced, it won't be deleted until that refcount drops to 0.

-Jer





Re: [webkit-dev] Is there a plan for supporting multi-process and WebCL in webkit

2013-04-09 Thread Jer Noble

On Apr 9, 2013, at 10:47 AM, Benjamin Poulain  wrote:

> I am very curious about the source of interest in OpenCL on browser. While 
> OpenCL is a great technology, I have the feeling it is not ready for the web. 
> What kind of applications do you foresee being powered by OpenCL on the Web?
> 
> I can imagine some use of CPU based kernels for the web (for image 
> manipulation for example). But I have a hard time seeing how adding full 
> support of OpenCL would not be shooting ourself in the foot at this point. 
> That may change in the future when GPU hardware converges…

There has also been interest in the WebAudio WG about using OpenCL/WebCL for 
custom audio processing.  There are significant performance issues involved 
with doing custom audio processing in JavaScript, even in a Worker thread, but 
WebCL may offer performance and memory characteristics which would couple well 
with the requirements of realtime audio threads.

-Jer



Re: [webkit-dev] New web-facing CSS feature: -webkit-cursor-visibility: auto-hide

2013-03-04 Thread Jer Noble

On Mar 4, 2013, at 5:17 PM, Silvia Pfeiffer  wrote:

> Is  -webkit-cursor-visibility just for video ? If so, I am personally very 
> much in favor of such a selector. It's a common feature of full-screen video 
> players, see e.g. 
> http://www.longtailvideo.com/support/forums/jw-player/bug-reports/31053/mouse-cursor-does-not-disappear-in-fullscreen-mode/
> or http://forum.brightcove.com/t5/General-Development/Hide-Mouse-Pointer-in-Fullscreen/td-p/7128 .

Not necessarily, though its first use case is for the video element in full 
screen mode.  I can imagine that full screen apps could use it as well.

> Here's a discussion by Chrome users: 
> http://productforums.google.com/forum/#!topic/chrome/Hd7AZRWejpk .
> 
> It would be nice to get this through the CSS group quickly and implement 
> without the prefix.

Agreed!

-Jer



Re: [webkit-dev] New web-facing CSS feature: -webkit-cursor-visibility: auto-hide

2013-03-04 Thread Jer Noble

On Mar 4, 2013, at 5:00 PM, James Robinson  wrote:

> On Mon, Mar 4, 2013 at 4:52 PM, Jer Noble  wrote:
> 
> On Mar 4, 2013, at 4:46 PM, Ryosuke Niwa  wrote:
> 
>> Could you add either build or runtime flag?
> 
> I most definitely could.  But are there any ports who would disable the flag? 
>  (Honestly asking, here.)  If not, adding a feature flag may be more trouble 
> than it's worth.
> 
> In chromium we would like the ability to monitor and, when appropriate, 
> disable vendor-prefixed non-standard CSS properties.  I think it's a bad idea 
> to assume that by default all ports will want to expose non-standard API to 
> the web platform without at least considering the situation and having a plan 
> to remove at least the prefixed version.  Please add a flag and, for bonus 
> points, hook up FeatureObserver so we can monitor usage of this property.

Sure thing.  I'll ask around on IRC about FeatureObserver.

-Jer



Re: [webkit-dev] New web-facing CSS feature: -webkit-cursor-visibility: auto-hide

2013-03-04 Thread Jer Noble

On Mar 4, 2013, at 4:46 PM, Ryosuke Niwa  wrote:

> Could you add either build or runtime flag?

I most definitely could.  But are there any ports who would disable the flag?  
(Honestly asking, here.)  If not, adding a feature flag may be more trouble 
than it's worth.

-Jer


[webkit-dev] New web-facing CSS feature: -webkit-cursor-visibility: auto-hide

2013-03-04 Thread Jer Noble
In an effort to improve the user experience of watching videos in full screen 
mode, I have created a patch which adds a new CSS property: 
-webkit-cursor-visibility. When set to "auto-hide", this property changes the 
cursor type to "none" after a few seconds of inactivity. UAs could then add 
this rule to their UA full screen style sheets for full-screened video 
elements, but site authors can override it to handle hiding the cursor (or 
not) themselves.  Sites which do not hide the cursor during playback in full 
screen mode (e.g. YouTube) would get this behavior for free, and sites which 
do (e.g. Vimeo) can continue to explicitly hide the mouse cursor when hiding 
their custom controls.

This new property is not currently hidden behind a feature flag.

We are at the very initial stages of proposing this attribute to the CSSWG, but 
have already incorporated feedback from some of the WG members.

Please take a look at the associated bug, if you're interested: 
 Default mouse cursor behavior 
should be auto-hide for full screen video with custom controls

-Jer
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] webkit-resource: referring to resources in User Agent stylesheet

2011-04-26 Thread Jer Noble

On Apr 26, 2011, at 1:18 PM, Dimitri Glazkov wrote:

> SOLUTION: Looking at the current media controls implementations, most
> of the -webkit-appearance states are kind of like background images,
> each reflecting appearance of an element at a particular state. Thus,
> it seems we should be able to solve this by just using CSS
> backgrounds:
> 
> video:playing::-webkit-media-controls-play-button:hover {
>   background: url(/media-controls/play-button-hover.png);
> }
> 
> That is how the authors would style the media controls. However, at
> the UA level, we shouldn't probably be loading resources from random
> sites. Instead, we need a way to bake these images into the WebKit
> runtime, and then a way to refer to them from the stylesheet.
> 
> This is where a vendor-specific URL scheme comes in:
> 
> video:playing::-webkit-media-controls-play-button:hover {
>   background: url(webkit-resource:/media-controls/play-button-hover.png);
> }
> 
> A quick poll of smart people (abarth and smfr) seems to indicate it's
> not a completely horrid idea.
> 
> WDYT? Thoughts? Comments?

FWIW, WebKit/mac generates these images programmatically, so there's not really 
a URL for "play button hover state" which can be targeted.  That said, if the 
URL scheme could be overloaded to handle generated content, I guess this could 
still work.  WebKit/mac could either parse the URL path and special case 
"media-controls/play-button-hover.png", or define our own URL scheme, e.g. 
"background: url(webkit-generated:media-play-button-playing-hover);"  But then 
we're just back to something functionally identical to -webkit-appearance.

I guess what I'm getting at is, if support for webkit-resource: is added, 
great.  However, at least one port will still need the old behavior.

-Jer
