I have implemented file upload facilities (predating JHP)
for a couple of domains I host. The original file name is
indeed passed along: the upload is accumulated into /tmp
files (with munged names) and then zipped into a file named
with a munged field name plus a full date/time stamp. The
zip file contains one or more "original" files that were
uploaded, retaining their names (which often include
inconvenient things like blanks...). These documents are
stored, added to an index used to retrieve them later, and
an email notification is sent to specified addresses.
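The flow described above could be sketched roughly as
follows (Python used for illustration; the munging scheme
and names here are assumptions, not the actual code):

```python
import os
import tempfile
import time
import zipfile

def archive_upload(field_name, original_name, data, out_dir):
    """Spool an upload into a munged temp file, then zip it under
    a name built from the munged field name plus a date/time
    stamp, keeping the original file name inside the archive."""
    # Munge: keep only safe characters (an assumption; the real
    # munging scheme is not shown in the original post).
    munge = lambda s: "".join(c if c.isalnum() else "_" for c in s)
    # Accumulate the upload body into a uniquely named temp file.
    fd, tmp_path = tempfile.mkstemp(prefix=munge(original_name) + "_")
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    stamp = time.strftime("%Y%m%d_%H%M%S")
    zip_path = os.path.join(out_dir, munge(field_name) + "_" + stamp + ".zip")
    with zipfile.ZipFile(zip_path, "w") as z:
        # Inside the archive the file keeps its original name,
        # blanks and all.
        z.write(tmp_path, arcname=original_name)
    os.remove(tmp_path)
    return zip_path
```

The returned zip path is what would then be indexed and
referenced in the notification email.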

Thanks for putting your thoughts etc. into the wiki - it
should be a useful reference and study document. I will
think about similarly documenting the stuff I did.


At 18:34  +1200 2007/05/30, Sherlock, Ric wrote:
 > [mailto:[EMAIL PROTECTED] On Behalf Of Oleg Kobchenko

 This looks great. Before interning, I would like to provide
 separation between parsing and storage.
 Please elaborate on the use cases, current and
 hypothetical, that would require writing files directly to
 temp folders.

 The pattern that I am familiar with is
  - use designated system temp folder
  - use automatic mangled unique temp name
  - create file with flag to automatically delete upon process exit
    (like J break files)

 So it would require two file names: original and physical.
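The three-step pattern above (system temp folder, automatic
unique name, delete on exit) maps directly onto standard
temp-file APIs; a minimal sketch in Python, with the
function name being illustrative only:

```python
import tempfile

def receive_upload(original_name, data):
    """Write the upload body to an automatically named file in
    the system temp folder. delete=True removes the file when
    the handle is closed, approximating "delete upon process
    exit" (like J break files)."""
    f = tempfile.NamedTemporaryFile(delete=True)
    f.write(data)
    f.flush()
    # Two names result: the original (from the form) and the
    # physical temp path.
    return original_name, f.name, f
```

The caller keeps the handle open for as long as the physical
file is needed; closing it cleans up automatically.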


I agree that it makes sense to separate the writing of any file(s) from
the parsing of the upload.
As you say, two file names are then required. One option would be for
qmparse to return the original file name of any uploaded files in dat,
return the physical (mangled unique) filename in fnmes, and return an
additional vector of expanded fdat.

I have made some changes to the code on my wiki page comment to
illustrate these ideas.
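To make the proposed split concrete, here is a hypothetical
sketch in Python of the parse result shape (this is not
qmparse's actual interface; the mapping of field name to
fdat is my assumption):

```python
import tempfile

def parse_multipart(parts):
    """Hypothetical parser returning parallel vectors: original
    upload names (dat), physical temp-file names (fnmes), and
    expanded field data (fdat). Each part is a tuple of
    (field, original_name, body)."""
    dat, fnmes, fdat = [], [], []
    for field, original_name, body in parts:
        # The parser's only side effect: spool the body to a
        # uniquely named temp file.
        f = tempfile.NamedTemporaryFile(delete=False)
        f.write(body)
        f.close()
        dat.append(original_name)  # name as sent by the browser
        fnmes.append(f.name)       # physical (mangled unique) name
        fdat.append(field)         # expanded field data
    return dat, fnmes, fdat
```

Storage (moving or renaming the physical files) then becomes
a separate step the caller controls.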

My current need for this functionality is as follows:
I am providing a breeding population simulation for students in which
they download selection lists (csv) of potential parents. Based on their
selection strategy they then upload lists (csv) of chosen sires and
dams. Each user has their own folder which contains their population and
from which they download and upload their csv files. Users log on and
their session is maintained using a userid cookie. The upload location
is determined by looking up their userid.
This is currently implemented using an APL webserver, but I'm keen to
replace that with IIS & JHP.
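The userid-cookie lookup described above amounts to a small
mapping step; a sketch (names are illustrative, not from the
actual APL or IIS implementation):

```python
def upload_dir_for(cookies, user_dirs):
    """Resolve the upload location from the session's userid
    cookie: each user has their own folder, found by looking
    up their userid in a table of per-user directories."""
    userid = cookies.get("userid")
    if userid is None or userid not in user_dirs:
        raise KeyError("unknown or missing userid cookie")
    return user_dirs[userid]
```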

Ric
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
