Yes, you can use a Perl handler to do that.  (I do this to send 
images stored in PostgreSQL that are retrieved with DBI instead of 
reading from files on the file system outside of the document root.)

        Perl can read a fixed number of bytes into a local variable, then send 
the contents of that variable.  (Flushing the output is optional, as 
Apache HTTPd should handle this efficiently and automatically.)

        Make sure you set the "Content-type" header appropriately so that 
your users' web browsers know how to handle the download:

                $r->content_type($MIME_TYPE); # --- Set MIME type

        Within a loop you'll keep reading the next portion of your data into 
a variable such as $chunk, and simply send it in the standard way:

                $r->print($chunk); # --- Send data
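
        For instance, a minimal sketch of such a handler (the package 
name, file path, and chunk size here are all illustrative, not 
anything from this thread):

```perl
package My::Download;  # hypothetical handler package

use strict;
use warnings;

use Apache2::RequestRec ();   # for $r->content_type
use Apache2::RequestIO  ();   # for $r->print
use Apache2::Const -compile => qw(OK NOT_FOUND);

sub handler {
    my ($r) = @_;

    # A file stored outside the document root (illustrative path)
    my $path = '/var/data/downloads/example.bin';
    open(my $fh, '<:raw', $path)
        or return Apache2::Const::NOT_FOUND;

    $r->content_type('application/octet-stream');

    # Read and send 64 KiB at a time -- only one chunk is ever in memory
    my $chunk;
    while (read($fh, $chunk, 65536)) {
        $r->print($chunk);
    }
    close($fh);

    return Apache2::Const::OK;
}

1;
```

        You'd map it in with something like "SetHandler modperl" plus 
"PerlResponseHandler My::Download" inside a <Location> block.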

        You can also send other headers (I usually set the content-type 
after sending the other headers), such as "Content-Disposition" to 
communicate whatever filename you prefer, and "Content-Length" to 
specify the file size so that your users can see "percent complete" 
download indicators in their web browsers.  (If you'd like a fuller 
example, feel free to ask.)
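
        As a quick sketch, setting those extra headers before any body 
output might look like this (the filename and MIME type are 
placeholders):

```perl
my $size = -s $path;  # size of the file being streamed, in bytes

$r->headers_out->set('Content-Length'      => $size);
$r->headers_out->set('Content-Disposition' =>
    'attachment; filename="example.pdf"');  # placeholder filename
$r->content_type('application/pdf');        # set last, as mentioned above
```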

> Using a perl handler as the download link would be ideal. So, if I am
> understanding you correctly, so long as I load chunks of the file and call
> the perl print function on each chunk individually instead of loading all
> of them into a variable and THEN calling print, I should avoid having to
> load the complete file into memory?
> 
> On Fri, Aug 16, 2019 at 2:36 PM Randolf Richardson <rand...@modperl.pl>
> wrote:
> 
> >         One fairly straight-forward approach would be to write a script
> > that
> > serves as the path for downloads, then have it parse the filename to
> > use as a key in determining which content to send to the user.  (The
> > AcceptPathInfo directive could be helpful for this, especially if you
> > want the script to appear as a subdirectory to the user.)
> >
> >         That script could perform all the necessary security checks that
> > you
> > need, and read portions of the file to be streamed as needed (the
> > streaming protocol implementation details would also need to be
> > handled by your script, and you might find it helpful to look at
> > what's available in CPAN for the streaming protocol you want to use).
> >
> > > In Java servlets, I can stream a file back to the browser one chunk at a
> > > time. This has 2 benefits which interest me.
> > > 1) Files can be stored outside the web root so users cannot download them
> > > unless they are logged in, even if they know the path.
> > > 2) Large files can be streamed back to the client without having the
> > entire
> > > file loaded into memory at the same time
> > >
> > > How would you recommend achieving similar functionality in mod_perl?
> > >
> > > --
> > > John Dunlap
> > > *CTO | Lariat *
> > >
> > > *Direct:*
> > > *j...@lariat.co <j...@lariat.co>*
> > >
> > > *Customer Service:*
> > > 877.268.6667
> > > supp...@lariat.co
> > >
> >
> >
> > Randolf Richardson - rand...@inter-corporate.com
> > Inter-Corporate Computer & Network Services, Inc.
> > Beautiful British Columbia, Canada
> > https://www.inter-corporate.com/
> >
> >
> >
> 
> -- 
> John Dunlap
> *CTO | Lariat *
> 
> *Direct:*
> *j...@lariat.co <j...@lariat.co>*
> 
> *Customer Service:*
> 877.268.6667
> supp...@lariat.co
> 


Randolf Richardson - rand...@inter-corporate.com
Inter-Corporate Computer & Network Services, Inc.
Beautiful British Columbia, Canada
https://www.inter-corporate.com/

