Hi all,
I recently started updating the WTF::Vector API to use unsigned types instead
of size_t [1][2], because:
- WTF::Vector already uses an unsigned type for size/capacity internally to
save memory on 64-bit, causing a mismatch between the API and the internal
representation [3]
- Some
Whatever we do, the clients of Vector should be consistent about what type they
use. I'm actually fine with Vector internally using unsigned even if the API
uses size_t, but right now we have lots of code that uses a mix of size_t and
unsigned when indexing into Vectors. That's confusing.
This is not exactly about Vector, but if one uses
FileReader.prototype.readAsArrayBuffer() on a large file, I think that it
overflows ArrayBuffer. WebKit actually crashes when uploading multi-gigabyte
files to YouTube, Google Drive and other similar services, although I haven't
checked.
ArrayBuffers are very special because they are part of ECMAScript.
We use unsigned for the length, because once upon a time, that would have been
the right type; these days the right type would be a 53-bit integer. So,
size_t would be wrong. I believe that ArrayBuffers should be changed to
I agree with Phil, we should just use unsigned consistently. For very large
memory allocations Vector is probably not the right type to use.
antti
On Wed, Nov 19, 2014 at 10:32 PM, Filip Pizlo fpi...@apple.com wrote:
Whatever we do, the clients of Vector should be consistent about what type
they use.
I agree with Filip and Antti.
The internal type of Vector is an orthogonal issue.
Keeping the old size_t API with the unsigned internal representation has
caused a mess of types in the codebase and some confusion for contributors.
We should fix the API and clean up our code.
There are cases where we can
That makes good sense for internal implementation, do you think that class API
should also use a custom type, or should it use size_t?
With Vector though, I don't know how we would differentiate code paths that
need large allocations from ones that don't. Nearly anything that is exposed
as a JS API or deals with external world can hit sizes over 4Gb.
I don't have a specific opinion on size_t vs unsigned, but I have a related
point:
If Vector uses unsigned, then it must RELEASE_ASSERT that it does not overflow
unsigned when growing.
Even if most normal content doesn't allocate 4GB, exploit content surely will
try to allocate 4GB.
Chris,
On Nov 19, 2014, at 1:50 PM, Alexey Proskuryakov a...@webkit.org wrote:
That makes good sense for internal implementation, do you think that class
API should also use a custom type, or should it use size_t?
I would propose having an Int53 type in JSC, because that is now a thing in
Hi,
I believe the Vector class is already checking for overflows. I looked quickly
and it seems that when trying to allocate / reallocate the Vector buffer, we
will call CRASH() if:
(newCapacity > std::numeric_limits<unsigned>::max() / sizeof(T))
Kr,
--
Chris Dumez - Apple Inc.
Cupertino, CA
On a related note, I've also got a patch up for review that adds bounds
checking to Vector::iterator, which vigorously performs bounds and overflow
checking.
—Oliver
On Nov 19, 2014, at 2:04 PM, Chris Dumez cdu...@apple.com wrote:
Hi,
I believe the Vector class is already checking for overflows.
On Nov 19, 2014, at 1:58 PM, Filip Pizlo fpi...@apple.com wrote:
With Vector though, I don't know how we would differentiate code paths that
need large allocations from ones that don't. Nearly anything that is exposed
as a JS API or deals with external world can hit sizes over 4Gb.
If we don't want to crash on overflow, the callers can use the try*() API, I
believe (e.g. tryAppend()). This returns false (and does not resize the
vector) instead of crashing when we reach the size limit.
Kr,
--
Chris Dumez - Apple Inc.
Cupertino, CA
On Nov 19, 2014, at 2:58 PM, Alexey