Re: [whatwg] Drag-and-drop folders/files support with directory structure using DirectoryEntry

2012-04-09 Thread Eric U
On Thu, Apr 5, 2012 at 8:52 PM, Glenn Maynard gl...@zewt.org wrote:
 On Wed, Apr 4, 2012 at 11:36 PM, Kinuko Yasuda kin...@chromium.org wrote:

 A follow up about this proposal:

 Based on the feedback we got on this list we've implemented the following
 API to do experiments in Chrome:
  DataTransferItem.getAsEntry(in EntryCallback callback)


 Does this actually need to be async?  The only information you need to
 create the Entry are the filename and the file type (file or directory),
 which the browser can load before performing the drop, so no file I/O is
 needed here.

 which takes a callback that returns a FileEntry or DirectoryEntry if it's for
 a drop event and the item's kind is 'file'.
 Right now it's prefixed, so its actual name in Chrome is
 'webkitGetAsEntry'.
 We use kind=='file' in a broader sense here (i.e. a file path which
 can be either a regular file or a directory) and didn't add a specific
 kind for directories.
 (Btw we've also implemented DataTransferItem.getAsFile(), so apps can call
 either getAsFile or webkitGetAsEntry for a kind=='file' item)


 If getAsEntry is synchronous, a separate getAsFile method isn't needed.
 You can just say transfer.getAsEntry().file(), and reduce the API surface
 area a bit.
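
For illustration, a minimal sketch of the two shapes under discussion, where
item is a kind=='file' DataTransferItem (the synchronous form is hypothetical;
only the prefixed callback form described above shipped in Chrome):

  // Callback form, as described above (prefixed in Chrome):
  item.webkitGetAsEntry(function(entry) {
    if (entry.isFile) {
      entry.file(function(file) { console.log(file.name, file.size); });
    }
  });

  // Hypothetical synchronous form suggested here:
  // var entry = item.getAsEntry();
  // if (entry.isFile) entry.file(function(file) { /* ... */ });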

 As for the lifetime and toURL() issue, which was the biggest concern in the
 past discussion, we decided not to support toURL/resolveURL on Entries for
 drag-and-drop, so that it won't leak references or expose the GC period.  A
 dragged file can be accessed only while the script has the Entry instance
 (as we already do for File objects).


 I agree with this.  toURL makes some sense within the sandboxed filesystem,
 but it just doesn't for non-sandboxed use.

 We eventually aim to support structured cloning of Entries but it's not
 there yet.

 This is sort of a separate issue, but it would be nice to eventually get
 full structured cloning support, including support for storing File/Entry in
 IndexedDB.  That is, let me store an Entry in IndexedDB, so I can later
 restore it and regain access to the file.  For example, if a user grants my
 music player web app access to his MP3 collection, I can store the
 resulting Entry in IndexedDB (or History), and the user can load my web app
 later and start playing music, without having to re-open the directory
 every time.  This needs further thought around user expectations of how
 long access grants last, but hopefully it can be worked out eventually.

 (We don't need to go into this here; just mentioning it again while it's on
 my mind, so people can be thinking about it.)
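
A hypothetical sketch of that pattern, assuming Entries ever become
structured-cloneable (they are not today); the database and key names are
illustrative, and dirEntry stands for an Entry obtained from a drop:

  var req = indexedDB.open('music-player', 1);
  req.onupgradeneeded = function(e) {
    e.target.result.createObjectStore('grants');
  };
  req.onsuccess = function(e) {
    var db = e.target.result;
    var tx = db.transaction('grants', 'readwrite');
    // Hypothetical: relies on Entry participating in structured clone.
    tx.objectStore('grants').put(dirEntry, 'mp3-root');
  };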

As you point out, persistent access permissions are a big issue, which
I'll leave for another time.
I just wanted to mention that I think that storing an Entry in a
database is a bit odd, since you're not actually storing the file that
it represents, so the data can effectively change or disappear outside
of a transaction.

It might make more sense to store a URL or other locator for the file,
to make it clear that you're storing a reference, not the data itself.
 I suppose that storing an Entry could be like storing a capability
[permission to access the file], but that's for the other discussion.

 As for <input type=file> support, I am thinking about adding an AsEntries
 attribute (so that we do not need to do the automatic recursive
 files/directories retrieval when the attribute is specified) and an entries
 field, but haven't done anything yet.  (Open to further suggestions)


 This sounds right, too.  This would make File access from <input>
 obsolete.  (File would still avoid at least one asynchronous call for
 non-recursive use cases, though, so people will still use it.)

  I hope we can get valuable user feedback (as well as feedback from you) based on
 the implementation.


 This sounds good.  Once we've played around with this for a while, we can
 start thinking about how to safely expose write access.

 --
 Glenn Maynard


Re: [whatwg] Drag-and-drop folders/files support with directory structure using DirectoryEntry

2012-04-05 Thread Eric U
On Wed, Apr 4, 2012 at 9:36 PM, Kinuko Yasuda kin...@chromium.org wrote:
 A follow up about this proposal:

 Based on the feedback we got on this list we've implemented the following
 API to do experiments in Chrome:
  DataTransferItem.getAsEntry(in EntryCallback callback)

 which takes a callback that returns a FileEntry or DirectoryEntry if it's for
 a drop event and the item's kind is 'file'.
 Right now it's prefixed, so its actual name in Chrome is
 'webkitGetAsEntry'.
 We use kind=='file' in a broader sense here (i.e. a file path which
 can be either a regular file or a directory) and didn't add a specific
 kind for directories.
 (Btw we've also implemented DataTransferItem.getAsFile(), so apps can call
 either getAsFile or webkitGetAsEntry for a kind=='file' item)

 As for the lifetime and toURL() issue, which was the biggest concern in the
 past discussion, we decided not to support toURL/resolveURL on Entries for
 drag-and-drop, so that it won't leak references or expose the GC period.  A
 dragged file can be accessed only while the script has the Entry instance
 (as we already do for File objects).  We eventually aim to support
 structured cloning of Entries but it's not there yet.

 Each Entry returned by this API has the following properties:
 * is read-only.
 * has the dropped file/directory name (not a full path) in its
 Entry.name, which must also match the basename of Entry.fullPath.
 * should not expose the actual platform path, but how exactly its fullPath
 should look is implementation-dependent. (In our implementation it always
 appears as a top-level path, e.g. '/foo' for a file/directory 'foo')

 Example:
 If we drop multiple files/directories like the following:
  /User/kinuko/Photos/travel/thailand/
  /User/kinuko/Photos/holiday2012/
  /User/kinuko/Photos/photos.txt

 We'll get three kind=='file' items in dataTransfer.items, and
 calling getAsEntry (webkitGetAsEntry) on each item allows us to get
 a FileEntry or DirectoryEntry and to recursively traverse its child
 files/subdirectories with full control if it's a directory.

full control still doesn't include modification, though, right?
It's read-only all the way down?

  var items = e.dataTransfer.items;
  for (var i = 0; i < items.length; ++i) {
    if (items[i].kind == 'file') {
      items[i].webkitGetAsEntry(function(entry) {
        displayEntry(entry.name + (entry.isDirectory ? ' [dir]' : ''));
        ...
      });
    }
  }
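
A sketch of the recursive traversal described above, using
DirectoryEntry.createReader(); readEntries() returns results in batches and
must be called again until it returns an empty array (displayEntry is the
same placeholder as in the snippet above):

  function traverse(entry, path) {
    if (!entry.isDirectory) return;
    var reader = entry.createReader();
    (function readBatch() {
      reader.readEntries(function(entries) {
        if (entries.length === 0) return;   // no more batches
        for (var i = 0; i < entries.length; ++i) {
          var child = entries[i];
          displayEntry(path + child.name + (child.isDirectory ? ' [dir]' : ''));
          traverse(child, path + child.name + '/');
        }
        readBatch();                        // keep reading until empty
      });
    })();
  }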

 As for <input type=file> support, I am thinking about adding an AsEntries
 attribute (so that we do not need to do the automatic recursive
 files/directories retrieval when the attribute is specified) and an entries
 field, but haven't done anything yet.  (Open to further suggestions)

 I hope we can get valuable user feedback (as well as feedback from you) based on
 the implementation.


 On Sat, Nov 19, 2011 at 7:37 AM, Glenn Maynard gl...@zewt.org wrote:

 On Fri, Nov 18, 2011 at 1:36 AM, Kinuko Yasuda kin...@chromium.orgwrote:

 I would say the approach has a bloating per-page bookkeeping problem but
 not a 'leak'.


 It's a reference leak: an object which remains referenced after it's no
 longer needed.  I'm not aware of anything standardized in the platform with
 this problem.  Also, a lot of toURL use cases would simply not work with
 drag-and-dropped files (being able to modify the URL to access neighboring
 files; storing the URL for access in a future session).

 Anyway, do you still agree that having Entry structured clonable is a good
 idea?  I'm only really worried about toURL if it causes structured cloning
 of Entry to not happen, since I think the latter is a much more solid and
 useful approach, and more consistent with what we already have.
 (Half-solutions make me nervous, because they have a tendency to delay full
 solutions.)

 --
 Glenn Maynard




Re: [whatwg] creating a new file via the File API

2011-12-19 Thread Eric U
On Mon, Dec 19, 2011 at 2:06 PM, Bronislav Klučka
bronislav.klu...@bauglir.com wrote:


 On 19.12.2011 17:05, Glenn Maynard wrote:

 2011/12/19 Bronislav Klučka bronislav.klu...@bauglir.com:

 I agree, an additional API for this would be better.
 FileSaver is not exactly all you would need, because FileSaver is already
 implemented, e.g. in Chrome,
 to save files to the browser file system (requestFileSystem).

 requestFileSystem is not FileSaver.


 hi,
 I know, that was not what I was saying; it's an API to store data in a File.
 The point is that you can get a File from the FileSystem API as well;
 the FileSaver API is already implemented and that did not solve anything.

The FileSaver object has not been implemented by any browser that I know of.

The FileSystem API has been implemented by Chrome.
It does not address this use case.

 The FileSaver API is insufficient to solve this issue; what we need is a file
 entry that actually represents exactly that file on the
 user's disk, not in some pseudo file system.

The FileSystem API does in fact let you create a FileEntry that
represents exactly a file on disk.  That doesn't seem to be what you
want, though; I believe you actually want FileSaver, which may be
implemented some day, but hasn't been yet.
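
For concreteness, a minimal sketch of creating a FileEntry in the sandboxed
filesystem with the (Chrome-prefixed) FileSystem API; note this writes into
the browser-managed sandbox, not to a user-chosen path, which is exactly the
gap FileSaver is meant to fill (the file name and contents are illustrative):

  var requestFS = window.requestFileSystem || window.webkitRequestFileSystem;
  requestFS(TEMPORARY, 1024 * 1024, function(fs) {
    fs.root.getFile('notes.txt', { create: true }, function(fileEntry) {
      fileEntry.createWriter(function(writer) {
        writer.write(new Blob(['hello'], { type: 'text/plain' }));
      });
    });
  }, function(err) { console.error(err); });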

The conversation about it isn't moving very quickly, but it's not
dead.  It just hasn't been a high priority for anyone yet.


 B.


Re: [whatwg] Drag-and-drop folders/files support with directory structure using DirectoryEntry

2011-11-16 Thread Eric U
On Tue, Nov 15, 2011 at 6:06 PM, Glenn Maynard gl...@zewt.org wrote:
 On Tue, Nov 15, 2011 at 8:38 PM, Kinuko Yasuda kin...@chromium.org wrote:

 The async nature of DirectoryEntry makes the code longer,
 but webapps can work on the files incrementally and can show
 progress UI while enumerating.  For apps that may deal with
 potentially huge folders, providing such a scalable (but slightly
 more cumbersome) way sounds reasonable to me.


 Entry (and subclasses) should also be supported by structured clone.  That
 would allow a DirectoryEntry received from file inputs to be passed
 to a worker.  This is something for later, of course, but combined with an
 API to convert between Entry and EntrySync (and DE/DESync), this would
 allow using the much more convenient sync API in a worker, even if the only
 way to retrieve the Entry in the first place is in the UI thread.

As Kinuko mentions, toURL obviates the need for structured clone for Entry.
I'd rather not add support for that if we can avoid it, and this seems
like an acceptable workaround.

While the URL format for non-sandboxed files has yet to be worked out,
I think we need toURL to work no matter where the file comes from.
It's already the case that an Entry can expire if the underlying file
is deleted or moved; I think it's OK for the URL to expire under
similar circumstances.  In the case of a drag, we can just say that it
expires when either the page goes away or the underlying file does
[analogous to the expiry of the data in a normal drag event, I think,
which should last as long as the page does].

 I think this is a better solution to the inconvenience of async APIs than
 falling back to exposing unscalable sync interfaces in the main thread.
 This is one of the reasons we have workers.

 --
 Glenn Maynard



Re: [whatwg] Drag-and-drop folders/files support with directory structure using DirectoryEntry

2011-11-16 Thread Eric U
On Tue, Nov 15, 2011 at 9:16 PM, Glenn Maynard gl...@zewt.org wrote:
 On Tue, Nov 15, 2011 at 10:58 PM, Kinuko Yasuda kin...@chromium.org wrote:

 Good point, we could do this synchronously in workers!
 I think we already have one way to convert Entry to EntrySync:
 we can get a URL from Entry (Entry.toURL()), send the URL to
 the worker and get the EntrySync via resolveLocalFileSystemSyncURL.

 http://www.w3.org/TR/file-system-api/#widl-Entry-toURL
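
A rough sketch of that round trip for a sandboxed entry (spec names shown;
Chrome's implementations were webkit-prefixed, and worker is assumed to be an
already-created Worker):

  // Main thread:
  worker.postMessage({ url: entry.toURL() });

  // Worker:
  onmessage = function(e) {
    var entrySync = resolveLocalFileSystemSyncURL(e.data.url);
    // ... use the synchronous Entry API from here ...
  };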

 That might be tricky, since toURL looks designed with origin-specific
 sandboxed storage in mind.  Files and directories supplied in this way are
 outside of the sandbox; that makes securely creating persistent, un-expiring
 URLs for arbitrary files a lot harder.

 Note that there's another (unrelated) issue: there are unsolved issues with
 filenames when giving access to unsandboxed storage.  They're not
 unsolvable, they've just been punted on so far.  It's been worked around so
 far by splitting apart the rules for sandboxed filesystems from those for
 unsandboxed filesystems, so sandboxed filesystems (those that don't actually
 store filenames on real files) can use simple, interoperable rules that
 wouldn't work for unsandboxed access to real files.

 Off-hand, the main issue that directly affects reading is that most
 non-Windows filesystems can store filenames which can't be represented by a
 DOMString, such as invalid codepoints (most commonly mismatched encodings).
 There are more issues with writing: each platform has its own length
 limitations on both filenames and full path lengths; they're not always even
 in the same units, with Linux in bytes and Windows in UTF-16 codepoints; and
 Windows filenames are case-folding (in practice).

 The writing issues might be ignorable to implement reading, but they're all
 related issues so it's probably good to try to look at them as a whole.
 (+CC Eric)

Yup; I'm paying attention to those issues.  In previous drafts of the
spec, we handled reading of awkward paths [barring some handling of
invalid code points, where one could enumerate/read files, but perhaps
not really read their exact names], but restricted writing.  I expect
we'll do something similar when we get around to the write cases.  We
can either restrict path creation to what's valid UTF-8, or we could
allow users to try to create arbitrary paths using ArrayBuffers of
bytes.  At any rate, I don't think we're doing anything here that
would limit our options in the future.

 --
 Glenn Maynard




Re: [whatwg] Drag-and-drop folders/files support with directory structure using DirectoryEntry

2011-11-16 Thread Eric U
).  For limited read-only drag-and-drop cases we
  wouldn't need to think about remapping and the mapping could just go
  away when the page goes away, so hopefully implementing such mapping
  wouldn't be that hard.
 

 There are probably some cases that we'll just have to accept will never
 work perfectly, and design with that in mind.

 To take a common case, suppose a script does the following, a commonplace
 method for safe file overwriting (relatively; the needed flush operations
 don't exist here):

 1. Create a file with the name filename + ".new".
 2. Write the new file contents to the file.
 3. Rename filename + ".new" to filename, overwriting the original file.
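
A sketch of those steps in FileSystem API terms (purely illustrative: write
access to dropped entries doesn't exist, the flush problem is ignored as noted
above, and dir, name, and blob are assumed inputs):

  function safeOverwrite(dir, name, blob, done) {
    // Step 1: create a temporary file named name + '.new'.
    dir.getFile(name + '.new', { create: true }, function(tmpEntry) {
      tmpEntry.createWriter(function(writer) {
        writer.onwriteend = function() {
          // Step 3: rename over the original, replacing it.
          tmpEntry.moveTo(dir, name, done);
        };
        writer.write(blob);                // Step 2: write the new contents
      });
    });
  }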

 This is a useful case: it's real-world--I've done this countless times--and
 it's a case where unrepresentable filenames affect both reading and
 writing, plus the auxiliary operation of renaming.

 I suppose the mapping approach could work here.  Associate the mapping with
 the DirectoryEntry containing it, from invalid filenames to generated
 filenames.  Then, if the invalid filename is "X", and the DOMString mapping
 is "MAPPING1", this would first create the literal filename
 "MAPPING1.new", followed by renaming it to the original invalid filename
 "X".

 (In particular, though, I think it should not be possible to create *new*
 garbage filenames on people's systems, that didn't exist to begin with.
 That is, it should map to the filenames that really exist, not just string
 escaping.)

 This is complex, though, and leads to new questions, like how long the
 mappings last if the underlying file is deleted.  As a data point, note
 that most Windows applications are unable to access files whose filenames
 can't be represented in the current ANSI codepage.  That is, if you're on a
 US English system, you can't access filenames with Japanese in them.
 (Unicode applications can, but tons of applications in Windows aren't
 Unicode; Windows has never made it simple to support Unicode.)  If users
 find that reasonable, it might not be worth all this for the even rarer
 case of illegal codepoints in Linux.

 Yup, the writing side would have tougher issues, and that's why I started
  this proposal only with read-only scenarios.  (I agree that it'd be
  good to give more thought to unsandboxed writing cases though)
 

 For what it's worth, I think the only sane approach here is an isolated
 break from attempting to make everything interoperable, and allow the
 platform's limitations to be visible.  (That is, fail file creation if the
 path depth or filename length is too long on the platform; succeed with
 file creation even if it would fail on a different platform, and so on.)  I
 think this is just inherent to allowing this sort of access to real
 filesystems, and trying to avoid it just causes other, stranger problems.

 (For example, if you prevent creating filenames in Linux which are illegal
 in Windows, then things get strange if an illegal filename already exists
 on a filesystem where it's not actually disallowed.)



 On Wed, Nov 16, 2011 at 12:01 PM, Eric U er...@google.com wrote:

   While the URL format for non-sandboxed files has yet to be worked out,
  I think we need toURL to work no matter where the file comes from.
  It's already the case that an Entry can expire if the underlying file
  is deleted or moved;


 But there's no revocation mechanism for toURL URLs.

 Also, if toURL URLs to non-sandboxed storage expire with the context they
 were created in (which they would have to, I think), they lose a whole category
 of use cases covered by structured clone: the ability to persist an access
 token.  For example, the spec allows storing a File within a History
 state.  That allows history navigation to restore its state properly: if
 the user opened a local picture into an image viewer app, navigating
 through history can correctly show the files in older history states, and
 even restore correctly through browser restarts and session restores.  The
 same should apply to Entry and DirectoryEntry.
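
A sketch of the History case (the spec permits a File in a history state via
structured clone, though as noted below nobody implemented it at the time;
Entry would work the same way if it became cloneable, and showImage is an
app-specific placeholder):

  // file came from a drop or <input>; '#photo' is just an illustrative URL.
  history.pushState({ photo: file }, '', '#photo');

  window.onpopstate = function(e) {
    if (e.state && e.state.photo) {
      showImage(URL.createObjectURL(e.state.photo));
    }
  };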

 (Nobody implements this yet, as far as I know, but I hope it'll happen
 eventually.  It's a limitation today, and it'll become a more annoying one
 as local file access mechanisms like this one are fleshed out.)

 Also, if non-sandboxed toURL URLs are same-origin only, then that also
 loses functionality that structured cloning allows: using Web Messaging to
 pass an access token to a page with a different origin.  (This is much
 safer than allowing cross-origin use of the URLs, since it's far easier to
 accidentally expose a URL string than to accidentally transfer an object.)

 File API has already solved all of this by using structured clone.  I think
 it makes a lot of sense to follow its lead.

 --
 Glenn Maynard




Re: [whatwg] Drag-and-drop folders/files support with directory structure using DirectoryEntry

2011-11-16 Thread Eric U
On Wed, Nov 16, 2011 at 3:55 PM, Daniel Cheng dch...@chromium.org wrote:
 On Wed, Nov 16, 2011 at 15:31, Glenn Maynard gl...@zewt.org wrote:

 On Wed, Nov 16, 2011 at 5:33 PM, Daniel Cheng dch...@chromium.org wrote:

 I'm trying to better understand the use case for DataTransfer.entries.
 Using the example you listed in your first post, if I dragged those folders
 into a browser, I'd expect to see File objects with the following names in
 DataTransfer.files:
     trip/1.jpg
     trip/2.jpg
     trip/3.jpg
     halloween/a.jpg
     halloween/b.jpg
     tokyo/1.jpg
     tokyo/2.jpg
 It seems like with that, a web app could implement a progress meter and
 handle subdirectories easily while using workers. What does the FileSystem
 API provide on top of that?


 The issue isn't when you have seven files; it's when you have seven
 thousand.  File trees can be very large.  In order to implement the above
 API, you need to traverse the entire tree in advance to discover what files
 exist.  The DirectoryEntry API lets you traverse the directory explicitly,
 without having to read the entire tree into memory first, so you don't
 waste time reading file metadata that you don't care about.

 For example, you might drag an SVN working copy into a page, which allows
 viewing logs and other data about the repository.  It might easily contain
 tens of thousands of files, but you rarely need to enumerate all of them in
 advance to do useful things with it.

 (If the trees are on a slow medium, like a DVD drive or a high-latency
 network drive, even a much smaller number of files can take a long time.)

 Even when you do want to traverse it all, there are many other advantages:
 the traversal can be done asynchronously without blocking the page; the
 page can have a cancel button to abort the operation; the page can show
 other information about what it's doing (eg. number of new files, number of
 unrecognized filenames); the page can allow dragging more directories to be
 queued up for processing without having to wait for the first set to
 complete; and so on.


 I see. I personally feel it's a little confusing to have two different ways
 to read files in DataTransfer, and now we're adding a third.



 Also, if a page caches a DirectoryEntry from entries, does that mean it
 can continuously poll the DirectoryEntry to see if the contents have
 changed to contain something interesting? That seems undesirable.


 Nothing needs to be cached.  The DirectoryEntry just represents the
 directory that was dragged; you don't have to look inside the directory at
 all until the page uses it.


 Let's say I drag my pictures directory to a web app uploader. If this
 uploader passes the DirectoryEntry to my pictures directory to a worker,
 will it be able to read files I create a long time after the original drag?
 It sounds like the approach being advocated would allow that type of attack.

I think it's a bit of an exaggeration to call that an attack, but
yes, we'll have to make sure we set expectations appropriately.



 --
 Glenn Maynard



 Daniel



Re: [whatwg] creating a new file via the File API

2011-09-07 Thread Eric U
On Mon, Aug 15, 2011 at 3:40 AM, David Karger kar...@mit.edu wrote:
 Apologies if I'm revisiting old territory.  I've been doing work on pure
 html/javascript applications that work entirely clientside
 (http://projects.csail.mit.edu/exhibit/Dido).  For persistence, they read
and write local files.  There's already an <input type=file> interface for
 letting the user specify a file to be read.  And I can use the same
 interface, inappropriately, to let the user overwrite a preexisting file.
  But things get much messier if I want to let the user specify a _new_ file
 to be written, because the file-open dialog doesn't offer users a way to
specify a new filename.  What I'd like to be able to do is specify a tag, or
invoke some JavaScript method, that will produce the save file dialog
 typical of most systems, with a graphical directory browser but including
 the option to specify a new filename.  This problem isn't unique to me; a
 discussion on stackoverflow appears at
 http://stackoverflow.com/questions/2897619/using-html5-javascript-to-generate-and-save-a-file
 where the proposed solution is to use flash---and that would be an
 unfortunate loss of html5 purity.  They also suggest the hack of using a
 data: url but that has size limitations.

 Perhaps <input type=file> could be given an attribute specifying whether a
 new filename is permitted?

 -David Karger

This sounds like a job for the FileSaver interface [1].  Currently no
browser implements it, but we at Chrome have been considering it.  At
TPAC last year we discussed it a bit in the WebApps WG meeting; IIRC
we talked about letting it take a URL instead of or in addition to
just a Blob, for more general utility.

I suggest you bring it up on public-webapps@, where that spec lives.

  Eric

[1] http://dev.w3.org/2009/dap/file-system/file-writer.html#idl-def-FileSaver
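
Purely as a hypothetical sketch of what using such an interface might look
like (no browser implemented it, and the draft's exact construction and
trigger details varied, so the shape below is an assumption):

  // Hypothetical; FileSaver was never shipped, and this constructor shape
  // is an assumption based on the draft.
  var saver = new FileSaver(someBlob);     // would prompt the user's save dialog
  saver.onwriteend = function() { console.log('saved'); };
  saver.onerror = function(e) { console.error(e); };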


Re: [whatwg] File API Streaming Blobs

2011-08-08 Thread Eric U
Sorry about the very slow response; I've been on leave, and am now
catching up on my email.

On Wed, Jun 22, 2011 at 11:54 AM, Arun Ranganathan a...@mozilla.com wrote:
 Greetings Adam,

 Ian, I wish I had known that earlier when I originally posted the idea;
 there was lots of discussion and good ideas but then it suddenly
 dropped off the face of the earth. Essentially I am forwarding this
 suggestion to public-weba...@w3.org on the basis that apparently most
 discussion of the File API specs happens there, and would like to know how
 to move forward with this suggestion.

 The original suggestion and following comments are on the whatwg list
 archive, starting with

 http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2011-January/029973.html

 Summing up, the problem with the current implementation of Blobs is
 that once a URI has been generated for them, by design changes are no
 longer reflected in the object URL. In a streaming scenario, this is
 not what is needed; rather, a long-lived Blob that can be appended to is
 needed and 'streamed' to other parts of the browser, e.g. the <video>
 or <audio> element.
 The original use case was: make an application which will download
 media files from a server and cache them locally, as well as play
 them without making the user wait for the entire file to be
 downloaded, converted to a blob, then saved and played.  However, such
 an API covers many other use cases, such as on-the-fly on-device
 decryption of streamed media content (i.e. live streams either without
 end, or large static files that would be wasteful to download completely
 when only the first couple of seconds need to be buffered and
 decrypted before playback can begin).

 Some suggestions were to modify Blob or create a new type, a
 StreamingBlob, which can be changed without its object URL changing and
 appended to as new data is downloaded or decoded, using a similar
 process to how large files may start to be decoded/played by a browser
 before they are fully downloaded. Others suggested using a
 pull API on the Blob so browsers can request new data
 asynchronously, such as in

 http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2011-January/029998.html

 Some problems a browser may face, however, are what to do with URLs
 which are opened twice, and whether the object URL should start from
 the beginning (which would be needed for decoding encrypted, on-demand
 audio) or start from the end (similar to `tail`, for live streaming
 events that need decryption, etc.).

 Thanks,
 P.S. Sorry if I've not done this the right way by forwarding like
 this, I'm not usually active on mailing lists.



 I actually think moving to a streaming mode for file reads in general is
 desirable, but I'm not entirely sure extending Blobs is the way to go for
 *that* use case, which honestly is the main use case I'm interested in.  We
 may improve upon ideas after this API goes to Last Call for streaming file
 reads; hopefully we'll do a better job than other non-JavaScript APIs out
 there :) [1].  Blob objects as they are currently specified live in memory
 and represent in-memory File objects as well.  A change to the underlying
 file isn't captured in the Blob snapshot; moreover, if the file moves or is
 no longer present at time of read, an error event is fired while processing
 a read operation.  The object URL may be dereferenced, but will result in a
 404.

 The Streaming API explored by WHATWG uses the Object URL scheme for
 videoconferencing use cases [2], and so the scheme itself is suitable for
 resources that are more dynamic than memory-resident Blob objects.
  Segment-plays/segment dereferencing in general can be handled through media
 fragments; the scheme can naturally be accompanied by fragment identifiers.

 I agree that it may be desirable to extend Blobs to do a few other things in
 general, maybe independent of better file reads.  You've Cc'd the right
 listserv :)  I'd be interested in what Eric has to say, since BlobBuilder
 evolves under his watch.

Having reviewed the threads, I'm not absolutely sure that we want to
add this stuff to Blob.  It seems like streaming is quite a bit
different than a lot of the problems people want to solve with Blobs,
and we may end up with a bit of a mess if we mash them together.
BlobBuilder does seem a decent match as a StreamBuilder, though.
Since Blobs are specifically non-mutable, it sounds like what you're
looking for is more like createObjectURL(blobBuilder) than
createObjectURL(blobBuilder.getBlob()).
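
A purely hypothetical sketch of that shape from a page's point of view (the
createObjectURL(builder) call never existed and is the speculative part;
video and socket are assumed to be a media element and a WebSocket):

  var builder = new WebKitBlobBuilder();      // prefixed in Chrome at the time
  video.src = URL.createObjectURL(builder);   // hypothetical: URL would reflect future appends

  socket.onmessage = function(e) {
    builder.append(e.data);                   // stream grows as chunks arrive
  };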

From the threads and from my head, here are some questions:

1) Would reading from a stream always start at the beginning, or would
it start at the current point [e.g. in a live video stream]?
2) Would this have to support infinite streams?
3) Would we be expected to keep around data from the very beginning of
a stream, even if e.g. it's a live broadcast and you're now watching
hour 7?  If not, who controls the buffer size and what's the API for

Re: [whatwg] Fixing undo on the Web - UndoManager and Transaction

2011-08-03 Thread Eric U
On Tue, Aug 2, 2011 at 2:43 PM, Ryosuke Niwa rn...@webkit.org wrote:
 On Tue, Aug 2, 2011 at 2:32 PM, Eric U er...@google.com wrote:

 Could you add an example of the user typing e.g. h
 ... e ... l ... l ... o, via an app that's doing the DOM
 modifications, using managed transactions, such that a browser
 undo/redo will act on the whole word hello?  It looks like you'd
 have an open transaction for a while, adding a letter at a time, and
 then you'd close it at some point?

 For example,
 myEditor.undoManager.transact(insertChar('h'), removeChar,
 reinsertChar('h'));
 myEditor.undoManager.transact(insertChar('e'), removeChar,
 reinsertChar('e'), true);
 myEditor.undoManager.transact(insertChar('l'), removeChar,
 reinsertChar('l'), true);
 myEditor.undoManager.transact(insertChar('l'), removeChar,
 reinsertChar('l'), true);
 myEditor.undoManager.transact(insertChar('o'), removeChar,
 reinsertChar('o'), true);
 (where insertChar, removeChar, and reinsertChar are sensible DOM mutation
 functions) will insert 5 manual transactions in one transaction group.  The
 idea is that you decide whether you want the new transaction to be a part of
 the last transaction or not.  If you want it to be, then pass merge=true, and
 merge=false otherwise.
 Another example:
 myEditor.undoManager.transact(insertChar('o'), removeChar,
 reinsertChar('o'));
 myEditor.undoManager.transact(insertChar('k'), removeChar,
 reinsertChar('k'), true);
 myEditor.undoManager.transact(insertBR, removeBR, reinsertBR);
 myEditor.undoManager.transact(insertChar('h'), removeChar,
 reinsertChar('h'), true);
 myEditor.undoManager.transact(insertChar('i'), removeChar,
 reinsertChar('i'), true);
 will insert two transactions that insert 'o' and 'k' as one transaction
 group, and then three transactions that insert a br, 'h', and 'i' as another
 transaction group.  So when the first undo is executed, the br and 'hi' will be
 removed (i.e. the last three transactions are unapplied), and the second
 undo will remove 'ok' (the first two transactions are unapplied).
 - Ryosuke


These both look like manual transactions again, given that you're
supplying unapply and reapply functions.  Is that just a typo in the
email, or am I misunderstanding?

The transaction group parameter is clear now, though.


Re: [whatwg] Fixing undo on the Web - UndoManager and Transaction

2011-08-02 Thread Eric U
On Tue, Aug 2, 2011 at 2:17 PM, Ryosuke Niwa rn...@webkit.org wrote:
 On Tue, Aug 2, 2011 at 1:51 PM, Eric U er...@google.com wrote:

 I think the manual transaction is what I'd want to make undo/redo in
 the edit menu work with jV
 [https://addons.mozilla.org/en-US/firefox/addon/jv/]*.

 That's great to hear!  I've spent so much time reconciling the way managed
 transactions and manual transactions interact, so it's good to know my work
 wasn't in vain.

 It looks like using manual transactions would be the straightforward
 way to make this work...I assume it could also be made to work with
 managed transactions, but I'm having trouble picturing how that would
 look from this early spec.  Perhaps you could add a little sample code
 of an app making a number of small changes and merging them into a
 single undo record after each?

 Sure. The following example will add two transactions, one inserting 'hello'
 and one inserting a br before the selection anchor, and group them into one
 transaction group:
 myEditor.undoManager.transact(
   new ManualTransaction(
     function () {
       this.text = document.createTextNode('hello');
       this.nodeBefore = window.getSelection().anchorNode;
       this.nodeBefore.parentNode.insertBefore(this.text, this.nodeBefore);
     },
     function () { this.text.parentNode.removeChild(this.text); },
     function () { this.nodeBefore.parentNode.insertBefore(this.text,
 this.nodeBefore); })
   );
 myEditor.undoManager.transact(
   new ManualTransaction(
     function () {
       this.br = document.createElement('br');
       this.nodeBefore = window.getSelection().anchorNode;
       this.nodeBefore.parentNode.insertBefore(this.br, this.nodeBefore);
     },
     function () { this.br.parentNode.removeChild(this.br); },
     function () { this.nodeBefore.parentNode.insertBefore(this.br,
 this.nodeBefore); }
   ), true);

Ah, sorry--I wasn't clear.  How to do it with manual transactions was
pretty obvious.  That's one of the things I like about the API--it's
very straightforward.

Could you add an example of the user typing e.g. h
... e ... l ... l ... o, via an app that's doing the DOM
modifications, using managed transactions, such that a browser
undo/redo will act on the whole word hello?  It looks like you'd
have an open transaction for a while, adding a letter at a time, and
then you'd close it at some point?

Thanks,

 Eric