php-general Digest 2 Mar 2009 06:06:02 -0000 Issue 5987

Topics (messages 289019 through 289029):

Re: file locking...
        289019 by: bruce
        289020 by: Robert Cummings
        289021 by: Stuart

Re: www.soongy.com
        289022 by: German Geek
        289023 by: mike
        289025 by: tedd
        289028 by: mike
        289029 by: Gevorg Harutyunyan

500 Internal Server Error
        289024 by: VamVan
        289026 by: 9el

A Foolproof Legit Online Jobs Program
        289027 by: Foster Diegel

Administrivia:

To subscribe to the digest, e-mail:
        [email protected]

To unsubscribe from the digest, e-mail:
        [email protected]

To post to the list, e-mail:
        [email protected]


----------------------------------------------------------------------
--- Begin Message ---
hi rob...

what you have written is similar to my initial approach... my question, and
the reason for posting this to a few different groups.. is to see if someone
has pointers/thoughts for something much quicker...

this is going to handle processing requests from client apps to a
webservice.. the backend of the service has to quickly process the files in
the dir as fast as possible to return the data to the web client query...

thanks



-----Original Message-----
From: Robert Cummings [mailto:[email protected]]
Sent: Sunday, March 01, 2009 9:54 AM
To: bruce
Cc: [email protected]
Subject: RE: [PHP] file locking...


On Sun, 2009-03-01 at 09:09 -0800, bruce wrote:
> hi rob...
>
> here's the issue in more detail..
>
> i have multiple processes that are generated/created and run in a
> simultaneous manner. each process wants to get XX number of files from
> the same batch of files... assume i have a batch of 50,000 files. my
> issue is how do i allow each of the processes to get their batch of
> unique files as fast as possible. (the 50K number is an arbitrary
> number.. my project will shrink/expand over time...)
>
> if i dump all the 50K files in the same dir, i can have a lock file
> that would allow each process to sequentially read/write the lock
> file, and then access the dir to get the XX files the process is
> needing. (each process is just looking to get the next batch of files
> for processing. there's no searching based on text in the name of the
> files. it's a kind of fifo queuing system) this approach could work,
> but it's basically sequential, and could in theory get into race
> conditions regarding the lockfile.
>
> i could also have the process that creates the files throw the files
> into some kind of multiple-directory scheme, where i split the 50K
> files into separate dirs and somehow implement logic to allow the
> client process to fetch the files from the unique/separate dirs.. but
> this could get ugly.
>
> so my issue is essentially how can i allow as close to simultaneous
> access by client/child processes to a kind of FIFO of files...
>
> whatever logic i create for this process will also be used for the
> next iteration of the project, where i get rid of the files.. and i
> use some sort of database as the informational storage.
>
> hopefully this provides a little more clarity.

Would I be right in assuming that a process grabs the X oldest
available files and then begins to work on them? Then the next process
would essentially grab the next X oldest files, and so on and so forth,
over and over again? Also, is each file discarded once processed? Would
I be correct in presuming that processing the files takes longer than
grabbing them? If so, then I would have a single lock upon which all
processes wait. Each process grabs the lock when it can and then moves
the X oldest files to a working directory where it can then process
them.

So... directory structure:

    /ROOT
    /ROOT/queue
    /ROOT/work

Locks...

    /ROOT/lock

So let's say you have 500 files:

    /ROOT/queue/file_001.dat
    /ROOT/queue/file_002.dat
    /ROOT/queue/file_003.dat
    ...
    /ROOT/queue/file_499.dat
    /ROOT/queue/file_500.dat

And you have 5 processes...

    /proc/1
    /proc/2
    /proc/3
    /proc/4
    /proc/5

Now, to start, all processes try to grab the lock at the same time; by
virtue of lock mechanics only one process gets the lock... let's say,
for instance, 4. While 4 has the lock, all the other processes go to
sleep for, say... 10000 usecs... upon failing to get the lock.
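
A minimal PHP sketch of that grab-or-sleep loop (the lock path and
function name are just for illustration):

    <?php
    // Try for an exclusive, non-blocking lock; nap 10000 usecs between
    // attempts. The handle must stay open as long as we hold the lock.
    function grab_lock($path = '/ROOT/lock')
    {
        $fp = fopen($path, 'c'); // 'c' creates the lock file if missing
        while (!flock($fp, LOCK_EX | LOCK_NB)) {
            usleep(10000);       // someone else has it; sleep and retry
        }
        return $fp;
    }
    ?>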

So process 4 transfers file_001.dat through to file_050.dat
into /ROOT/work.

    /ROOT/work/file_001.dat
    /ROOT/work/file_002.dat
    /ROOT/work/file_003.dat
    ...
    /ROOT/work/file_049.dat
    /ROOT/work/file_050.dat

Then it releases the lock and begins processing.... meanwhile the other
processes wake up and try to grab the lock again... this time PID 2 gets
it. It does the same...

    /ROOT/work/file_051.dat
    /ROOT/work/file_052.dat
    /ROOT/work/file_053.dat
    ...
    /ROOT/work/file_099.dat
    /ROOT/work/file_100.dat

    /ROOT/queue/file_101.dat
    /ROOT/queue/file_102.dat
    /ROOT/queue/file_103.dat
    ...
    /ROOT/queue/file_499.dat
    /ROOT/queue/file_500.dat

Now while it was doing that, PID 4 finished and all its files are now
deleted. The first thing it does is try to get the lock so it can get
more... but the lock is still owned by PID 2, so PID 4 goes to sleep.
Once PID 2 gets its files it releases the lock and off it goes, and the
cycle continues. Now there's still an issue with respect to incoming,
partially written files. During the incoming process those should be
written elsewhere... let's say /ROOT/incoming. Once writing of a file
is complete it can be moved to /ROOT/queue. Also, if you don't want
processes to delete the files, you can have yet another directory,
/ROOT/processed. So with everything considered, here's your directory
structure:

    /ROOT
    /ROOT/incoming
    /ROOT/processed
    /ROOT/queue
    /ROOT/work

One last thing to consider is that if there are no available files on
which to work then you might have your processes sleep a little longer.
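
To make that concrete, here's a rough, untested sketch of the claim
step under the structure above (claim_oldest is an invented helper, and
it assumes the grab_lock() sketch from earlier):

    <?php
    // Move the $count oldest files from the queue to the work directory
    // while holding the lock, then release the lock before processing.
    function claim_oldest($count, $queue = '/ROOT/queue', $work = '/ROOT/work')
    {
        $mtimes = array();
        foreach (glob($queue . '/*.dat') as $file) {
            $mtimes[$file] = filemtime($file);
        }
        asort($mtimes); // oldest first

        $claimed = array();
        foreach (array_slice(array_keys($mtimes), 0, $count) as $file) {
            $dest = $work . '/' . basename($file);
            if (rename($file, $dest)) { // atomic on the same filesystem
                $claimed[] = $dest;
            }
        }
        return $claimed;
    }

    $lock  = grab_lock();
    $batch = claim_oldest(50);
    flock($lock, LOCK_UN);  // release before the slow processing begins
    // ... process $batch, then delete or move files to /ROOT/processed ...
    ?>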

Cheers,
Rob.
--
http://www.interjinn.com
Application and Templating Framework for PHP


--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php


--- End Message ---
--- Begin Message ---
On Sun, 2009-03-01 at 10:05 -0800, bruce wrote:
> hi rob...
> 
> what you have written is similar to my initial approach... my question, and
> the reason for posting this to a few different groups.. is to see if someone
> has pointers/thoughts for something much quicker...
> 
> this is going to handle processing requests from client apps to a
> webservice.. the backend of the service has to quickly process the files in
> the dir as fast as possible to return the data to the web client query...

Then use a database to decide who gets what. DB queries will queue up
while a lock is in place, so batches will occur on a first come, first
served basis. I had thought this was for a background script. This will
save your script from having to browse the filesystem, sort files by
age, etc. Instead, put an index on the ID of the file and grab the X
lowest IDs.
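
As a rough sketch of that idea (the table and column names here are
invented), the primary key index makes the grab a cheap range read, and
FOR UPDATE makes competing workers queue up behind whoever claims
first:

    <?php
    // Hypothetical schema: file_queue (id INT PRIMARY KEY, filename VARCHAR).
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
    $pdo->beginTransaction();
    $batch = $pdo->query(
        'SELECT id, filename FROM file_queue
          ORDER BY id ASC LIMIT 50 FOR UPDATE'
    )->fetchAll(PDO::FETCH_ASSOC);
    // ... mark or delete the claimed rows before committing ...
    $pdo->commit();
    ?>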

Cheers,
Rob.
-- 
http://www.interjinn.com
Application and Templating Framework for PHP


--- End Message ---
--- Begin Message ---
2009/3/1 Robert Cummings <[email protected]>

> On Sun, 2009-03-01 at 10:05 -0800, bruce wrote:
> > hi rob...
> >
> > what you have written is similar to my initial approach... my
> > question, and the reason for posting this to a few different
> > groups.. is to see if someone has pointers/thoughts for something
> > much quicker...
> >
> > this is going to handle processing requests from client apps to a
> > webservice.. the backend of the service has to quickly process the
> > files in the dir as fast as possible to return the data to the web
> > client query...
>
> Then use a database to process who gets what. DB queries will queue up
> while a lock is in place so batches will occur on first come first
> served basis. I had thought this was for a background script. This will
> save your script from having to browse the filesystem files, sort by
> age, etc. Instead put an index on the ID of the file and grab the X
> lowest IDs.


A database would be the best way to do this, but I've needed to handle
this situation with files in the past, and this is the solution I came
up with (sketched in code below)...

1) Get the next filename to process
2) Try to move it to /tmp/whatever.<pid>
3) Check whether /tmp/whatever.<pid> exists; if it does, process it and
   then delete it or move it to an archive directory
4) Repeat until there are no files left to process

I have this running on a server that processes several million files a day
without any issues.
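
A rough PHP sketch of steps 1-4 (the paths and the processing function
are invented here); the trick is that rename() within one filesystem is
atomic, so only one process can win any given file:

    <?php
    // Claim files by renaming them to a pid-stamped temp name; whichever
    // process's rename() succeeds owns that file.
    $pid = getmypid();
    foreach (glob('/queue/*.dat') as $file) {        // 1) next filename
        $claimed = '/tmp/' . basename($file) . '.' . $pid;
        @rename($file, $claimed);                    // 2) try to move it
        if (file_exists($claimed)) {                 // 3) did we win it?
            process_file($claimed);                  //    (hypothetical)
            unlink($claimed);                        //    or archive it
        }
    }                                                // 4) until none left
    ?>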

For database-based queues I use a similar system, but the move is
replaced by an update which sets the pid field of a single row. I then
do a select where that pid is my pid and process whatever comes back. I
have several queues that use this system, and combined they're handling
tens of millions of queue items per day without any problems, with the
advantage that I can scale across servers as well as processes.
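
That update-then-select claim might look roughly like this (schema
invented for the example; MySQL's UPDATE ... ORDER BY ... LIMIT keeps
the claim to a single row, as described):

    <?php
    // Stamp one unowned row with our pid, then read back whatever we won.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
    $pid = getmypid();

    $pdo->exec("UPDATE work_queue SET pid = $pid
                 WHERE pid IS NULL ORDER BY id ASC LIMIT 1");

    $stmt = $pdo->prepare('SELECT * FROM work_queue WHERE pid = ?');
    $stmt->execute(array($pid));
    foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $item) {
        // ... process $item, then delete the row or mark it done ...
    }
    ?>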

-Stuart

-- 
http://stut.net/

--- End Message ---
--- Begin Message ---
Also check this one out: google uses it in gmail:
http://code.google.com/p/jquery-multifile-plugin/downloads/detail?name=multiple-file-upload.zip&can=2&q=

Cheers,
Tim
Tim-Hinnerk Heuer

http://www.ihostnz.com
Groucho Marx  - "I have had a perfectly wonderful evening, but this wasn't
it."

2009/3/2 tedd <[email protected]>

> At 4:17 PM +0200 3/1/09, Nitsan Bin-Nun wrote:
>
>> There is no need to go that far, try to google a bit about swfupload.
>>
>> In short, this is a flash & javascript component that gives you the
>> ability to maintain the upload, get the current speed, get the current
>> amount of uploaded data, etc. It is very simple and works like a charm on a
>> dedi server. There are some issues on shared servers sometimes, but even
>> these things are not that complicated and can be easily solved.
>>
>> HTH,
>>
>
>
> Nitsan
>
> Oh yeah, try this:
>
> http://swfupload.org/documentation/demonstration
>
> and go through the "up" link -- and then try the "See it in action!" link
> and also try the "Demonstration" link. You can even use the
> demo.swfupload.org link, which will provide you with this:
>
> http://demo.swfupload.org/v220beta5/index.htm
>
> All of which is well worth the effort if you're trying to waste your time.
> If their code is as good as their web site, no thanks -- I'll pass.
>
> But if I was to seriously investigate it, I would go directly to Google:
>
> http://code.google.com/p/swfupload/
>
> However, I haven't a clue as to if it works or not.
>
> Cheers,
>
> tedd
>
>
> --
> -------
> http://sperling.com  http://ancientstones.com  http://earthstones.com
>
> --
> PHP General Mailing List (http://www.php.net/)
> To unsubscribe, visit: http://www.php.net/unsub.php
>
>

--- End Message ---
--- Begin Message ---
you can use gears pretty easily to make a seamless multiple file
upload now. it's all javascript too so you can make it look how you
want, behave how you want, etc. without having to buy/tweak flash
code.


On Sun, Mar 1, 2009 at 12:07 PM, German Geek <[email protected]> wrote:
> Also check this one out: google uses it in gmail:
> http://code.google.com/p/jquery-multifile-plugin/downloads/detail?name=multiple-file-upload.zip&can=2&q=
>
> Cheers,
> Tim
> Tim-Hinnerk Heuer
>
> http://www.ihostnz.com
> Groucho Marx  - "I have had a perfectly wonderful evening, but this wasn't
> it."
>
> 2009/3/2 tedd <[email protected]>
>
>> At 4:17 PM +0200 3/1/09, Nitsan Bin-Nun wrote:
>>
>>> There is no need to go that far, try to google a bit about swfupload.
>>>
>>> In short, this is a flash & javascript component that gives you the
>>> ability to maintain the upload, get the current speed, get the current
>>> amount of uploaded data, etc. It is very simple and works like a charm on a
>>> dedi server. There are some issues on shared servers sometimes, but even
>>> these things are not that complicated and can be easily solved.
>>>
>>> HTH,
>>>
>>
>>
>> Nitsan
>>
>> Oh yeah, try this:
>>
>> http://swfupload.org/documentation/demonstration
>>
>> and go through the "up" link -- and then try the "See it in action!" link
>> and also try the "Demonstration" link. You can even use the
>> demo.swfupload.org link, which will provide you with this:
>>
>> http://demo.swfupload.org/v220beta5/index.htm
>>
>> All of which is well worth the effort if you're trying to waste your time.
>> If their code is as good as their web site, no thanks -- I'll pass.
>>
>> But if I was to seriously investigate it, I would go directly to Google:
>>
>> http://code.google.com/p/swfupload/
>>
>> However, I haven't a clue as to if it works or not.
>>
>> Cheers,
>>
>> tedd
>>
>>
>> --
>> -------
>> http://sperling.com  http://ancientstones.com  http://earthstones.com
>>
>> --
>> PHP General Mailing List (http://www.php.net/)
>> To unsubscribe, visit: http://www.php.net/unsub.php
>>
>>
>

--- End Message ---
--- Begin Message ---
At 12:13 PM -0800 3/1/09, mike wrote:
> you can use gears pretty easily to make a seamless multiple file
> upload now. it's all javascript too so you can make it look how you
> want, behave how you want, etc. without having to buy/tweak flash
> code.
>
> On Sun, Mar 1, 2009 at 12:07 PM, German Geek <[email protected]> wrote:
>> Also check this one out: google uses it in gmail:
>> http://code.google.com/p/jquery-multifile-plugin/downloads/detail?name=multiple-file-upload.zip&can=2&q=

Understood, but I don't think either of these does what the OP wanted, which was a real-time file upload progress bar.

Cheers,

tedd
--
-------
http://sperling.com  http://ancientstones.com  http://earthstones.com

--- End Message ---
--- Begin Message ---
gears will allow you to do that, more or less. i have it going...

On Sun, Mar 1, 2009 at 12:34 PM, tedd <[email protected]> wrote:
> At 12:13 PM -0800 3/1/09, mike wrote:
>>
>> you can use gears pretty easily to make a seamless multiple file
>> upload now. it's all javascript too so you can make it look how you
>> want, behave how you want, etc. without having to buy/tweak flash
>> code.
>>
>>
>> On Sun, Mar 1, 2009 at 12:07 PM, German Geek <[email protected]> wrote:
>>> Also check this one out: google uses it in gmail:
>>> http://code.google.com/p/jquery-multifile-plugin/downloads/detail?name=multiple-file-upload.zip&can=2&q=
>
> Understood, but I don't think either of these does what the OP wanted,
> which was a real-time file upload progress bar.
>
> Cheers,
>
> tedd
> --
> -------
> http://sperling.com  http://ancientstones.com  http://earthstones.com
>

--- End Message ---
--- Begin Message ---
Thank you all for your help!

I tried to do it without depending on Flash, but as far as I can see
there are only two ways to do this: "Flash" and "uploading without
tracking the percentage". I chose the second way :)

On Mon, Mar 2, 2009 at 1:58 AM, mike <[email protected]> wrote:

> gears will allow you to do that, more or less. i have it going...
>
> On Sun, Mar 1, 2009 at 12:34 PM, tedd <[email protected]> wrote:
> > At 12:13 PM -0800 3/1/09, mike wrote:
> >>
> >> you can use gears pretty easily to make a seamless multiple file
> >> upload now. it's all javascript too so you can make it look how you
> >> want, behave how you want, etc. without having to buy/tweak flash
> >> code.
> >>
> >>
> >> On Sun, Mar 1, 2009 at 12:07 PM, German Geek <[email protected]> wrote:
> >>> Also check this one out: google uses it in gmail:
> >>> http://code.google.com/p/jquery-multifile-plugin/downloads/detail?name=multiple-file-upload.zip&can=2&q=
> >
> > Understood, but I don't think either of these does what the OP wanted,
> > which was a real-time file upload progress bar.
> >
> > Cheers,
> >
> > tedd
> > --
> > -------
> > http://sperling.com  http://ancientstones.com  http://earthstones.com
> >
>
> --
> PHP General Mailing List (http://www.php.net/)
> To unsubscribe, visit: http://www.php.net/unsub.php
>
>


-- 
Best Regards,
Gevorg Harutyunyan

--- End Message ---
--- Begin Message ---
Hello All,

In what situations do we get an internal server error 500 on PHP pages?

From the Internet I got some info saying you get it when:

1) Friendly URLs are not supported by Apache (mod_rewrite).
2) max_execution_time maxes out.
and some more related to Apache.

I ran into a weird situation where I do a PHP eval(), and if the parsed
string is wrong, I get an internal server error 500.

So the question is: I was under the impression that PHP code errors
would never result in HTTP response errors. Am I completely wrong here?
Can anyone tell me from their experience in which specific scenarios we
get this error?

Thanks,
V

--- End Message ---
--- Begin Message ---
-----------------------------------------------------------------------
Use FreeOpenSourceSoftwares, Stop piracy, Let the developers live. Get
a Free CD of Ubuntu mailed to your door without any cost. Visit :
www.ubuntu.com
----------------------------------------------------------------------


On Mon, Mar 2, 2009 at 2:18 AM, VamVan <[email protected]> wrote:

> Hello All,
>
> In what situations do we get an internal server error 500 on PHP pages?
>
> From the Internet I got some info saying you get it when:
>
> 1) Friendly URLs are not supported by Apache (mod_rewrite).
> 2) max_execution_time maxes out.
> and some more related to Apache.
>
> I ran into a weird situation where I do a PHP eval(), and if the parsed
> string is wrong, I get an internal server error 500.
>
> So the question is: I was under the impression that PHP code errors
> would never result in HTTP response errors. Am I completely wrong here?
> Can anyone tell me from their experience in which specific scenarios we
> get this error?

Understandably, 2) for the eval.
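
For what it's worth, a parse error in an eval()'d string is its own
scenario: before PHP 7, eval() simply returned false on a parse error,
while an uncaught fatal error with display_errors off typically
surfaces as a bare 500. In PHP 7 and later the parse error became
catchable, so it can be guarded; a minimal sketch:

    <?php
    // PHP 7+: a syntax error inside eval() throws ParseError, so it can
    // be handled instead of ending as a fatal error / blank 500 page.
    $code = 'this is not valid php'; // deliberately malformed input
    try {
        eval($code);
    } catch (ParseError $e) {
        error_log('eval() parse error: ' . $e->getMessage());
        http_response_code(400); // send a controlled response instead
    }
    ?>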




>
>
> Thanks,
> V
>

--- End Message ---
--- Begin Message ---
What if we could show you a real system you can use to put $500 - $1,500 per 
day into your account, working from the comfort of your home? Would you be 
interested?

Online advertising has skyrocketed over the past few years. In 2008, companies 
spent close to $50 billion in online advertising. That figure is expected to 
increase substantially in 2009. What does this mean for you? - Money is going 
into your bank account a lot faster this year. And next year. And the year 
after. That's only IF you follow our methods!

You see, companies worldwide are desperately searching for people just like you 
to type up their ads and post them online, and they'll pay you nicely in 
return. It's a win-win situation. They get more customers, you get paid! It's 
as simple as that. These companies have cash, LOTS of it, and they're eager to 
share it with you. It's time for you to get a piece of the pie!

Please send us an email back to [email protected] if you are
interested in participating, so we can proceed further - we will send
you all the required information you need in order to join these
programs.

With respect,
HR, Real Jobs At Home

--- End Message ---
