The method of checking progress on a separate URL, similar to the
example I sent, does result in repeated requests during upload. But
they're trivial by comparison -- easily in the hundreds of requests
per second for response time and throughput. A bit goofy, but over a
single keep-alive socket for ...
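A minimal sketch of such a progress-check URL in AOLserver Tcl, assuming an upload filter elsewhere keeps per-upload byte counts in an nsv array. The array name "upload_progress", the "id" query parameter, and the handler name are illustrative assumptions, not something shown in the thread:

```tcl
# Hypothetical polling endpoint: the client repeats GET
# /upload-progress?id=... over a keep-alive connection while
# its POST is in flight.
proc upload_progress_handler {} {
    set id [ns_queryget id]
    if {$id ne "" && [nsv_exists upload_progress $id]} {
        # stored as a two-element list: bytes received, bytes expected
        foreach {received expected} [nsv_get upload_progress $id] break
        ns_return 200 text/plain "$received $expected"
    } else {
        ns_return 404 text/plain "unknown upload id"
    }
}
ns_register_proc GET /upload-progress upload_progress_handler
```

These requests are cheap because the handler only reads a shared nsv slot; it never touches the upload itself.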
Ah -- an old message I didn't see at first; replies in-line below
On Dec 9, 2009, at 4:06 PM, Tom Jackson wrote:
> Jim,
>
> Looks like a lot of really cool stuff.
>
> One question about the ns_quewait: the only events are NS_SOCK_READ
> and NS_SOCK_WRITE, which matches up with the tcl le ...
On Fri, Jan 22, 2010 at 3:03 PM, Jeff Rogers wrote:
> The YUI upload control looks like a good place to start for the flash
> client-upload feature. I haven't looked into it too deeply tho, so I don't
> know what the server side looks like.
>
> YUI Uploader widget: http://developer.yahoo.com/yui/
I don't have any problem with this solution. It is superior to using a
forward proxy which uploads the entire file then reports progress to
the final server (this was the original model proposed in this thread,
by example).
In fact, I pointed out that the server thread is a proxy, handling
upload ...
The YUI upload control looks like a good place to start for the flash
client-upload feature. I haven't looked into it too deeply tho, so I
don't know what the server side looks like.
YUI Uploader widget: http://developer.yahoo.com/yui/uploader/
Other than that, I was pondering the plain uploa ...
This method could also have the advantage of recovery in case of a
failed upload. A client would look much like a udp application which
tracks packets at the application level.
Once the server side API is set, the client could be javascript, java,
flash or tcl.
The client-side solution also has t ...
Hi,
I think we were talking about this about a month ago. I updated the source to
enable upload-progress checking with a combination of ns_register_filter and
nsv -- there's an example at the latest ns_register_filter man page (pasted
below). This may work for you although it would require co ...
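The nsv half of that combination might look like the following. The array name and the hooks that call these procs are assumptions; the thread doesn't reproduce the man-page example in full:

```tcl
# Called from whatever filter/callback observes body data arriving,
# with bytes received so far and the Content-Length (if known).
proc upload_progress_update {id received total} {
    nsv_set upload_progress $id [list $received $total]
}

# Called by the progress-check URL that the browser polls.
proc upload_progress_read {id} {
    if {[nsv_exists upload_progress $id]} {
        return [nsv_get upload_progress $id]
    }
    return {}
}

# Cleanup once the upload completes or aborts, so stale
# entries don't accumulate in the shared array.
proc upload_progress_done {id} {
    catch {nsv_unset upload_progress $id}
}
```

nsv arrays are shared across all server threads, which is what lets a filter running in one thread publish state that a separate progress-check request can read.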
> On 11/24/09 5:13 PM, John Buckman wrote:
>> Is there any access (in C or Tcl) to an upload-in-progress in aolserver?
>
> It'd be nice if we extended ns_info with [ns_info driver ...] that could
> give you connection-level info. from the driver thread. In its simplest
> form, all we need is to e ...
On 11/24/09 5:13 PM, John Buckman wrote:
> Is there any access (in C or Tcl) to an upload-in-progress in aolserver?
It'd be nice if we extended ns_info with [ns_info driver ...] that could
give you connection-level info. from the driver thread. In its simplest
form, all we need is to expose the t ...
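If such a command existed, usage might look like this. It is entirely hypothetical -- ns_info has no "driver" subcommand, and the subcommand name, result shape, and fields are all invented for illustration:

```tcl
# Hypothetical: ask the driver thread for per-connection read state,
# one {sockid url received expected} tuple per pending connection.
foreach conn [ns_info driver conns] {
    foreach {sockid url received expected} $conn break
    ns_log notice "sock $sockid $url: $received of $expected bytes read"
}
```

The appeal of this shape is that it needs no filters or per-upload bookkeeping; the driver thread already knows how much it has read.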
Jim,
Looks like a lot of really cool stuff.
One question about the ns_quewait: the only events are NS_SOCK_READ
and NS_SOCK_WRITE, which matches up with the tcl level, and also match
up with Ns_QueueWait. Are the other possible file events handled
somewhere else? Note that tcl includes exceptiona ...
Hi,
I just checked in some changes to hopefully fix the pre-queue interp leak muck
(and other bugs). I also added read and write filter callbacks -- the read
callbacks can be used to report file upload progress somewhere. And, I added
new ns_cls and ns_quewait commands to work with the curiou ...
On Tue, Dec 1, 2009 at 3:28 PM, Jim Davidson wrote:
> On Dec 1, 2009, at 4:45 PM, Jeff Rogers wrote:
>>
>> I also don't understand why there can be multiple interps per server+thread
>> combo in the first place (PopInterp/PushInterp); I'd expect that only one
>> conn can be in a thread at a time ...
On 02/12/2009, at 9:31 AM, Tom Jackson wrote:
> One problem is if the upload is ever "upgraded" to HTTP/1.1, which
> allows chunked transfer (why?). You could still track a total, but you
> have no idea the expected size.
even if the server is only capable of saying "I've received X bytes" that c ...
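A bare byte count is indeed enough for an indeterminate progress display. A sketch of a reporting proc that degrades gracefully when the total is unknown (the nsv array and the convention that total is 0 for chunked uploads are assumptions):

```tcl
# Assumes [nsv_get upload_progress $id] holds {received total},
# with total stored as 0 when the upload is chunked (HTTP/1.1)
# and no Content-Length was sent.
proc report_progress {id} {
    foreach {received total} [nsv_get upload_progress $id] break
    if {$total > 0} {
        set pct [expr {100 * $received / $total}]
        ns_return 200 text/plain "$received of $total bytes ($pct%)"
    } else {
        # chunked transfer: no expected size, report bytes only
        ns_return 200 text/plain "$received bytes so far"
    }
}
```

A client seeing the bytes-only form would switch from a percentage bar to a spinner or running counter.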
When I tested using the prequeue filter, it didn't crash the server.
The server just ran out of physical memory, which might be even worse.
But it just happened because I was doing load testing. I wanted to try
logging incoming connections before they got dumped to conn threads.
It worked, for a wh ...
Jeff,
Interps are confined to a specific thread. You can transfer the sock
around, but not the interp. But the big reason for different interps
is that they are or can be specialized. The prequeue interp could be
very simple. Conn interps tend to be big and expensive so you don't
want to use them ...
On Dec 1, 2009, at 4:45 PM, Jeff Rogers wrote:
> Jim Davidson wrote:
>> Right -- the pre-queue thing operates within the driver thread only,
>> after all content is read, before it's dispatched to a connection.
>> The idea is that you may want to use the API to fetch using
>> event-style network I/O ...
Jim Davidson wrote:
> Right -- the pre-queue thing operates within the driver thread only,
> after all content is read, before it's dispatched to a connection.
> The idea is that you may want to use the API to fetch using
> event-style network I/O some other bit of context to attach to the
> connection usi ...
Right -- the pre-queue thing operates within the driver thread only, after all
content is read, before it's dispatched to a connection. The idea is that you
may want to use the API to fetch using event-style network I/O some other bit
of context to attach to the connection using the "connection ...
It looks like the pre-queue filters are run after the message body has
been read, but before it is passed off to the Conn thread, so no help
there. However it looks like it would not be hard to add in a new
callback to the middle of the read loop, tho it's debatable if that's a
good idea or not.
Gustaf,
Oops, accidentally hit send.
I just started work on an event driven http client (called htclient).
It can monitor downloads just by using a variable trace. I haven't
reversed the idea for uploads yet, but it would be easy. Not so easy
is guessing the length of the encoded file prior to se ...
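The variable-trace idea is plain Tcl: if the client updates a counter variable as it reads each chunk, a write trace fires on every update. A standalone sketch (htclient's actual variable and proc names aren't shown in the thread, so these are made up):

```tcl
# Fires on every write to ::bytes_received and prints running progress.
proc progress_trace {name1 name2 op} {
    upvar #0 $name1 n
    puts "received $n bytes"
}
trace add variable ::bytes_received write progress_trace

# Simulated download loop updating the traced counter; the initial
# set also fires the trace.
set ::bytes_received 0
foreach chunk {1024 1024 512} {
    incr ::bytes_received $chunk
}
# prints: received 0 bytes, then 1024, 2048, and finally 2560
```

The same pattern works for uploads: trace the bytes-sent counter instead, with the caveat from above that the encoded length must be known up front to report a percentage.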
Gustaf,
I've seen these working, although I'm never sure where exactly the
magic happens. It looks like the Nginx idea is to work as a proxy:
"It works because Nginx acts as an accelerator of an upstream server,
storing uploaded POST content on disk, before transmitting it to the
upstream server."
> the typical mechanism for upload progress bars is that separate (ajax) queries
> are used to query the state of a running upload, which is identified by some
> unique ID (e.g. X-Progress-ID in
> http://wiki.nginx.org/NginxHttpUploadProgressModule,
> or some other heuristics, e.g. based on URL a ...
Tom Jackson wrote:
> John,
> I'm just going to venture a guess. I hope that Jim D. or someone else
> more familiar with the internals will set me straight.
> The problem with upload progress monitoring is that uploads are
> finished before a conn thread is allocated.
> Uploads are done in the driver thr ...
John,
I'm just going to venture a guess. I hope that Jim D. or someone else
more familiar with the internals will set me straight.
The problem with upload progress monitoring is that uploads are
finished before a conn thread is allocated.
Uploads are done in the driver thread, or a worker thread ...
Naviserver has a very nice feature that allows (via JavaScript) showing a user
the percent upload progress of a file. I tried porting their progress.c file to
aolserver, but it's a significant effort, as it depends on other changes
Naviserver has implemented in the aolserver code.
However, I wa ...