Re: [PHP] Meta HTTP refresh problem

2005-03-25 Thread Josh Whiting
 Recently, during a rewrite of my login page, the Meta Refresh I am using 
 suddenly stopped working. I've tried using it by echoing it with php and 
 by just using it in HTML. I've also tried using a javascript redirect 
 with no luck either. Any thoughts on how to debug and what might be 
 holding me up?
[snip]
 //Test to see if you actually got to this part
 echo "You should be going somewhere!";
 echo '<META HTTP-EQUIV="REFRESH" CONTENT="0;URL=main_page.php">';
 } else {
 printLogin();
 echo '<br><center>Invalid username/password</center>';
 }

META tags have to go in the HEAD section of the HTML document.

this is an optional suggestion but, overall, one thing that would be
better is to determine (before you send any output) whether the login was
correct or not, and if not, use an HTTP redirect:

if (bad login) {
  header("Location: http://mydomain.com/login.php?tryAgain=1");
} else {
  header("Location: http://mydomain.com/mainpage.php");
}

(the tryAgain=1 is to tell the script that you want to show a
"login failed, please try again" message)
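
for illustration, the receiving side might look roughly like this (just a
sketch -- the message text and the printLogin() call mirror your existing
code, the rest is assumption):

<?php
// login.php -- show the login form, plus an error note when we were
// redirected back with tryAgain=1
if (!empty($_GET['tryAgain'])) {
    echo '<br><center>Invalid username/password, please try again</center>';
}
printLogin();
?>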

using a redirect instead of a meta refresh is more user friendly because
(1) it's faster, since some browsers force a waiting period before the
refresh happens (as a usability feature), and (2) the user can use
back/forward without running into the "this page contains POST data, do
you want to resend it?" alert, which I avoid in my designs like the
plague (I always redirect instead of showing output as the direct result
of a POST action).

HTH
/josh w

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] header(Location: page.php target=_parent)?????

2005-03-25 Thread Josh Whiting
 How should I formulate the header function to replace the current frameset 
 page with a new one?  I have tried a combination of header("Location: 
 page.php target=_parent"); but I get an error message saying the page does 
 not exist.

 Also, can I save a frameset page with a .php extension?  I have tried it out 
 and it seems to work.  Maybe perhaps I should not do this as there  may be 
 some implications later on???

you cannot replace the main frameset using HTTP headers in the
individual frame pages.  HTTP headers do not allow for this because
frames are a feature of HTML, not HTTP. therefore you must use HTML or
javascript techniques to get a single frame to change the overall
frameset.
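
for example, a frame page can bump the whole browser window to a new
frameset by emitting a scrap of javascript (just a sketch; the target
page name is made up):

<?php
// from inside one frame, retarget the top-level window
echo '<script type="text/javascript">';
echo "top.location.href = 'new_frameset.php';";
echo '</script>';
?>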

however, for what it's worth, you can replace the main frameset using a
header in the main frameset page itself, if you make that a php script
and use the header() function. (which means the answer to your second
question is yes)

/jw

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] OR statement

2005-03-24 Thread Josh Whiting
 This work fine, however, I would like to add to the criteria above. I would
 like to say:
 
  if ($audio == "Cool" or "junk" or "funky"){
 
 ...
 

if (in_array($audio, array("Cool", "junk", "funky"))) {
...
}

not the most elegant looking but it gets the job done.

/josh w

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] Re: Getting the process ID

2005-03-24 Thread Josh Whiting
 The problem I am having is that people are double-submitting certain 
 transactions.  My first attempt to prevent this was to store a flag in the 
 session record indicating whether or not certain transactions had been 
 completed, but this turned out to be insufficient at times because users 
 could try and initiate a second transaction before the first transaction had 
 finished (and thus the system had not yet flagged the transaction completed 
 in the session record).  They then both completed in parallel, and voila, 
 duplicate transactions again.
 
 I realized that this sort of problem would always exist unless I had some 
 sort of semaphore mechanism.  Once a user has *started* a transaction, they 
 need to be prevented from initiating a second transaction until the first 
 transaction has been completed.
 
 I am open to suggestions on how to do this.  What is the best way?

do you have access to a database? why not just manage the transaction 
on the database level?  transactions, locking, etc. are a core part of 
what databases do for a living. it's not a problem best solved with PHP.
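
for example, a rough sketch of what the row-lock approach looks like
(this assumes InnoDB tables and uses made-up table/column names; error
handling and the actual business logic are left out):

<?php
// $user_id assumed to come from the logged-in session
mysql_query("START TRANSACTION");
// take a write lock on this user's row; a second submit from the same
// user blocks right here until the first transaction commits
mysql_query("SELECT * FROM accounts WHERE user_id = $user_id FOR UPDATE");
// ... check whether the transaction was already completed, do the work ...
mysql_query("COMMIT");
?>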

/josh w

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] Passing Arrays between pages

2005-03-22 Thread Josh Whiting
 Please can someone tell me how you pass arrays between PHP pages.
 
 $var = serialize($testArray);
 echo "<INPUT NAME = \"kcompany[]\" TYPE = \"hidden\" VALUE=\"$var\">";
 
 Then unserialize the variable on the receiving page.

To this you might also add an MD5 hash to check for authenticity,
depending on what you're doing with the incoming data that you're
unserializing (consider a client who sends you a serialized array that
you didn't intend). You'll also want to encode the serialized data with 
htmlentities:

$serialized = serialize($testArray);
$hash = md5($serialized . 'my secret phrase');
echo '<input type="hidden" name="serialized" value="'.htmlentities($serialized).'">';
echo '<input type="hidden" name="hash" value="'.$hash.'">';

then on the receiving end:

if ($_POST['hash'] != md5($_POST['serialized'] . 'my secret phrase')) {
  // the hash doesn't match up, consider the data unusable
} else { 
  $testArray = unserialize($_POST['serialized']);
}

HTH
/jw

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] Different approach?

2005-03-21 Thread Josh Whiting
On Thu, Mar 17, 2005 at 11:01:44AM -0500, John Taylor-Johnston wrote:
 Hi,

 I've read:

  http://dev.mysql.com/doc/mysql/en/create-table.html

 Would anyone code/approach this differently?
[...]
 $sql = "INSERT INTO $table
 (StudentNumber,Exercise1,Exercise2) values
 ('$StudentNumber','$Exercise1','$Exercise2')";

 mysql_select_db($db,$myconnection);
 mysql_query($sql) or die(print mysql_error());


your example looks pretty solid, but the code above does not escape the
$StudentNumber, $Exercise1, and $Exercise2 variables.  If any of these
variables contain data that, when placed into the SQL string, interferes
with the SQL itself, you'll have unexpected failures and also a security
hole if untrusted users can populate those variables.  The solution is
to wrap any strings or untrusted input like that in a call to
mysql_escape_string(), like so:

$sql = "INSERT INTO $table
(StudentNumber,Exercise1,Exercise2) values ('".
mysql_escape_string($StudentNumber)."','".
mysql_escape_string($Exercise1)."','".
mysql_escape_string($Exercise2)."')";

-jw

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] warning question about mysql sessions concurrency

2005-03-16 Thread Josh Whiting
On Wed, Mar 16, 2005 at 06:59:43PM +0100, Marek Kilimajer wrote:
 SO, does anyone have some code that uses MySQL to replace PHP's native
 session storage that also correctly handles this concurrency problem? 
 Ideally I'd like to see just a set of functions that can be used with 
 sessions_set_save_handler() to transparently shift PHP's sessions to a 
 database, but I'm not going to use the stuff I've found on the web or 
 even in books (appendix D of O'Reilly's Web Database Applications with 
 PHP  MySQL publishes code with this problem).

 
 MySQL's InnoDB supports row-level locking. So lock the right row in 
 session_start and release it in the session_close function.

But mysql does not support nested transactions.  Locking the row means
that your entire script has to happen inside a single transaction. 
Since my application logic requires transactions as well, it would mean
using two separate connections to MySQL, one for the session transaction
and one for business transactions, and that means twice the connection
overhead.

 In MyISAM you can use application-level locking: GET_LOCK() and 
 RELEASE_LOCK() in session_start and session_close, respectively. 
 Parameter would be session id with some prefix.

AH!  GET_LOCK() - that would do the trick!  I didn't realize MySQL
supported locking mechanisms independent of transactions/tables etc.

 The problem is when you need to create session id - you must lock the 
 whole table to find unused session id and insert it into table.

Hmm... couldn't I just do an 'insert ignore' with the session id every
time to kick off the process? Then MySQL would handle checking for an
existing row and create an empty one if it didn't exist. A very stripped
down example:

To open a session:
1. insert ignore into sessions (id,content) values ('$sess_id','')
2. select get_lock('my_prefix_$sess_id', 15)
3. if NULL (or 0) is returned then abort, otherwise the lock was acquired OK
4. select * from sessions where id='$sess_id'

Do stuff ...

To close it:
1. update sessions set content='...' where id='$sess_id'
2. select release_lock('my_prefix_$sess_id')

I don't have all the details covered yet but I think that is just 
what I needed.  Thanks!!
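
for the archives, here's roughly how I picture the save handler functions
shaping up (just a sketch, not tested: the table name, the 'sess_' lock
prefix and the 15-second timeout are arbitrary, and gc/expiry is left out):

<?php
function sess_open($save_path, $sess_name) { return true; }
function sess_close() { return true; }

function sess_read($sess_id) {
    $id = mysql_escape_string($sess_id);
    // make sure a row exists, then serialize access with a named lock
    mysql_query("INSERT IGNORE INTO sessions (id, content) VALUES ('$id', '')");
    $res = mysql_query("SELECT GET_LOCK('sess_$id', 15)");
    if (mysql_result($res, 0) != 1) return '';   // timed out or error
    $res = mysql_query("SELECT content FROM sessions WHERE id = '$id'");
    return mysql_result($res, 0);
}

function sess_write($sess_id, $data) {
    $id = mysql_escape_string($sess_id);
    $content = mysql_escape_string($data);
    mysql_query("UPDATE sessions SET content = '$content' WHERE id = '$id'");
    mysql_query("SELECT RELEASE_LOCK('sess_$id')");
    return true;
}

function sess_destroy($sess_id) {
    mysql_query("DELETE FROM sessions WHERE id = '" .
                mysql_escape_string($sess_id) . "'");
    return true;
}

function sess_gc($max_lifetime) { return true; }  // expiry handling omitted

session_set_save_handler('sess_open', 'sess_close', 'sess_read',
                         'sess_write', 'sess_destroy', 'sess_gc');
?>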

/Josh W

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] warning question about mysql sessions concurrency

2005-03-12 Thread Josh Whiting
On Fri, Mar 11, 2005 at 09:57:46AM -0800, Richard Lynch wrote:
  well the trouble is not in the writing at the end of the request, which
  would likely only be a single query and therefore not need a transaction
  in itself. the trouble is the lack of locking out other requests from
  reading the data when you're using it during the main body of
  the request.
[...]
 Short Version:  Either your frames/tabs are gonna be so clunky as to be
 useless, or you need to lock them into only one, or you need business
 logic, not session-logic.  Solving the race condition at session layer
 will only make your application un-responsive.
 
 This may go a long way to explaining why you're not finding any readily
 available solutions out there at the session layer.

You raise good points.  However, my goal is simply to have a
transparent, database-driven replacement for PHP's native session
handler.  I don't want to change my application logic because I'm
switching session handlers - my application shouldn't care!  And so,
transparently reproducing PHP's session layer functionality is going to
mean locking concurrent requests out, because that is what PHP's session
handler already does (for good reason IMHO).

your argument about slow frames is valid, but is ALSO applicable to
PHP's native session handler, i.e. i'm not introducing a new problem. 
the trouble can be mostly avoided in either case by not starting the
session in frames that don't need it (most of them), and by doing any
session stuff right away and calling session_write_close() ASAP to free
up the lock so other frames can run.
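
something like this is what i mean by freeing the lock early (a sketch;
the slow work below stands in for whatever the frame really does):

<?php
session_start();
$user_id = $_SESSION['user_id'];   // grab what the frame needs...
session_write_close();             // ...then release the lock right away
// the slow stuff below no longer blocks the user's other frames/tabs
sleep(5);
?>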

you're right that i could change my application logic to deal safely
with concurrent requests from the same session, but PHP already nicely
defeats the issue so you don't have to worry about it, and at a more or
less negligible performance loss if you design the scripts to reduce
session open/locked time to the bare minimum...

i would speculate that the reason i haven't found a readily available
solution is because most folks either (1) don't understand the problem,
or (2) don't think such a race condition is likely enough to warrant a
fix, or (3) are using a different backend (oracle, etc.) that makes the
problem easy to solve. the PHP team, however, clearly thought it was
important enough as is evident by their session handler design.

...have i convinced you yet of the worthiness of this endeavor? :)

/jw

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] warning question about mysql sessions concurrency

2005-03-10 Thread Josh Whiting
On Wed, Mar 09, 2005 at 02:52:52PM -0800, Richard Lynch wrote:
  Agreed, initially I thought of that but I also need to use transactions
  in my business logic and MySQL doesn't support nested transactions, so
  I'd have to open a separate connection to the DB to handle the session
  transaction while another connection handles the business
  transaction(s).  I'm hoping to find a solution that uses locking in the
  application level instead of the database.  Were I using a DB that
  supported nested transactions, it would be a different story.  maybe
  it's time to switch databases.
 
 Since the data only changes when you write it, at the end of the script,
 you could maybe get away with the transaction only being in the
 session_save handler, and be sure to rollback or commit your business
 logic before that.
 
 That would for sure take a lot of discipline, and might even be downright
 impossible for what you need, but it's worth pondering.

well the trouble is not in the writing at the end of the request, which
would likely only be a single query and therefore not need a transaction
in itself. the trouble is the lack of locking out other requests from
reading the data when you're using it during the main body of
the request.

so... no luck finding a concurrency-aware database session handler?

i'm going to try to roll my own, and i'll certainly share what i come up
with on the list.

thanks though, for the help up to this point!

/jw

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] warning question about mysql sessions concurrency

2005-03-08 Thread Josh Whiting
On Tue, Mar 08, 2005 at 10:38:28AM -0800, Richard Lynch wrote:
 Josh Whiting wrote:
  SO, does anyone have some code that uses MySQL to replace PHP's native
  session storage that also correctly handles this concurrency problem?
 
 Create your MySQL session tables using ENGINE=innoDB (in older MySQL, use
 TYPE=innoDB)
 http://mysql.com can tell you lots more about innoDB
 
 Then just wrap the contents of each function from the books in a BEGIN
 query to make them be transactions...  Err, no, I mean, start the
 transaction in the one function and COMMIT in the save function.
 
 That should take care of all the concurrency issues, I think.

Agreed, initially I thought of that but I also need to use transactions
in my business logic and MySQL doesn't support nested transactions, so
I'd have to open a separate connection to the DB to handle the session
transaction while another connection handles the business
transaction(s).  I'm hoping to find a solution that uses locking in the
application level instead of the database.  Were I using a DB that
supported nested transactions, it would be a different story.  maybe 
it's time to switch databases.

 I guess I'm just saying that in the real world, the race condition for a
 single user/session just doesn't occur that often, and when it does, the
 user generally recognizes the problem/error and accepts that they caused
 it by being impatient or running two windows at once or whatever they did
 that made it happen.
 
 That doesn't make it Right, but it does make it Practical.

Point taken.  I guess it's almost more of a psychological thing for me 
as a programmer - the idea of writing code vulnerable to race conditions 
just doesn't sit well with me.

on the other hand, with the growing popularity of tabbed browsers, and 
of course the frames issue, i think it is reasonable to demand proper 
behavior during concurrent requests, and while you may not be using 
frames, lots of sites do, and that's a setup for a real headache.

thanks
/josh

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] warning question about mysql sessions concurrency

2005-03-06 Thread Josh Whiting
On Sun, Mar 06, 2005 at 02:27:53PM +, Chris Smith wrote:
 Josh Whiting wrote:
 I've been trying to switch to MySQL-based session storage instead of the
 native PHP session storage.  In doing so, I've run into a lot of code on
 the web that exhibits a serious flaw regarding concurrent requests from
 the same client. All the code I've seen has glossed over the need to
 lock the session for the duration of the request, i.e. not allow 
 concurrent requests to use it until it has finished.  PHP's native
 session handler correctly does this, but lots of MySQL-driven session
 code does not.

 Neither do.

PHP absolutely does.  Take the following code:
<?php
session_start();
if (!isset($_SESSION['count'])) $_SESSION['count'] = 0;
$_SESSION['count'] = $_SESSION['count'] + 1;
sleep(5);
echo '<html><body><h1>Current count: ';
echo $_SESSION['count'];
echo '</h1></body></html>';
?>

Open up two browser windows/tabs and load this page in both, try to make
both pages load at the same time.  Notice the second request takes 10
seconds while the first takes 5.  That's because the second one is
really, actually, indeed WAITING for the first one to finish.  (You may
have to make an initial request before trying this to get the cookie
set.)  This also is not a side effect of web server processes,
threading, etc., it's really waiting until the session lock is released.

 No request should be blocking (i.e. wait for concurrent processes to
 execute).  This implies a poor understanding of transactional processing
 from both yourself and PHP!

On the contrary, when dealing with a session store (in PHP, which AFAIK
always writes the session at the end of the request), it's very
important that access be serial instead of concurrent.

Take the following sql from two MySQL clients:

client 1: start transaction;
client 2: start transaction;
client 1: select * from mytable where id=1 FOR UPDATE;
# i.e. give me the data from the row with a write lock
client 2: select * from mytable where id=1 FOR UPDATE;
...

client 2 is going to have to sit there and wait until client 1 commits
or rolls back.  that's how it should work with the session store.  each
PHP request should acquire a write lock on the session so no other
request can even *read* the data until the original request is done.

 - You usually only store some kind of identification for a user in the
 session - no other data.  doing otherwise is dangerous as there are
 multiple copies of data floating around uncontrolled.  A session-id is
 enough information to store.  Don't use the session for storing data
 willy nilly - it is for identifying the session only - nothing else.
 Can't say that enough.  Don't use it for shortcutting code.

there is no point to storing only a session id in a session store. the
session id is already in the cookie or querystring.  what's the point of
a session, then?  tell me how you store a user's login status, a user's
shopping cart contents, etc. - that is the place i call the session
store and that is the thing that needs to block concurrent
(write) requests, whatever you want to call it...

 - If you want a proper transactional system, there are two ways to
 handle concurrency:
[snip]
 Personally, no-one in the PHP/MySQL arena tends to understand these
 concepts as the two technologies are rarely used for pushing data around
 on big systems (this is generally Java/Postgres' domain).

i understand your explanations.  in the case of session concurrency, if
you're using a "fail commit on change" approach and are also using frames
with PHP's session design, your site just isn't going to work.
one frame will appear and the rest will say "sorry, some other process
got to the data first".  instead, each frame request has to wait for
the others to finish, which is how PHP's native session handler does it.

 I ONLY use PHP/MySQL for knocking up quick web apps to fart out content
 - nothing serious because it's simply not suited.

Have you checked out InnoDB?  Row level locking, transactions, etc etc.
Not as fully featured, agreed, but all the critical stuff is there.
MySQL isn't the same as it was a few years ago.

/jw

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



[PHP] warning question about mysql sessions concurrency

2005-03-05 Thread Josh Whiting
I've been trying to switch to MySQL-based session storage instead of the 
native PHP session storage.  In doing so, I've run into a lot of code on 
the web that exhibits a serious flaw regarding concurrent requests from 
the same client. All the code I've seen has glossed over the need to 
lock the session for the duration of the request, i.e. not allow 
concurrent requests to use it until it has finished.  PHP's native 
session handler correctly does this, but lots of MySQL-driven session 
code does not.

Example timeline:
1. client sends a request
2. session data is loaded from the db 
3. same client sends a request before the first one is finished (e.g. 
frames, tabbed browsing, slow server response times, etc)
4. session data is again loaded from the db
5. the first request changes some values in $_SESSION
6. the second request changes some values in $_SESSION
7. the first request writes the data to the db and exits
8. the second request writes its data (over that written by the first 
request) and exits

PHP's native handler solves this problem by forcing concurrent requests
to wait for each other. The same thing needs to happen in a
database-driven session handler.

SO, does anyone have some code that uses MySQL to replace PHP's native
session storage that also correctly handles this concurrency problem? 
Ideally I'd like to see just a set of functions that can be used with 
sessions_set_save_handler() to transparently shift PHP's sessions to a 
database, but I'm not going to use the stuff I've found on the web or 
even in books (appendix D of O'Reilly's Web Database Applications with 
PHP  MySQL publishes code with this problem).

Folks using database sessions who do not deal with this scenario be
warned! I'm surprised so much bad code is going around for this task...

Many thanks in advance,
Josh Whiting

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] Preventing execution without inclusion

2005-01-14 Thread Josh Whiting
 as per PHP5 example
 
 1 (the preferred way): user accesses 
 http://www.example.org/index.php?function=Join, this loads the class 
 NewUser and begins its implementation. Because of the __autoload, it 
 includes class.join.php, in order to utilize the class.
 
 2 (the wrong way): user accesses 
 http://www.example.org/includes/class.join.php without going through 
 index.php.
 
 I am trying to prevent 2 from even occurring, utilizing a piece of code 
 that would check if index.php had included it, or not. This code would 
 be in the beginning of all the class files, at the top, before any other 
  code was to be executed.
 
 As of yet, it has eluded me...

Put the include file outside the web directory tree.
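
If moving them out of the document root isn't an option, the usual
fallback is a guard constant that only index.php defines before loading
anything else.  A rough sketch (the constant name is arbitrary):

<?php
// index.php
define('IN_APP', true);
// ... __autoload / dispatch as before ...

// includes/class.join.php
if (!defined('IN_APP')) {
    die('Direct access not allowed.');
}
// ... class definition follows ...
?>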

/jw

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] Persistent PHP web application?

2005-01-08 Thread Josh Whiting
On Thu, Jan 06, 2005 at 08:41:57PM -0500, Jason Barnett wrote:
 Does "not up to date" mean the code isn't working with current releases
 of php 4 or 5? I'd be interested in giving it a try.

 I believe this is the case.  AFAIK the APC library doesn't support PHP5
 or at least it didn't when I looked at it.  If you want to pitch in
 support for APC you should just download it (or PEAR INSTALL it from the
 command line to try it :) )

I don't think Rasmus was talking about APC. AFAIK he mentioned some
extension code that used the Apache SAPI to run a PHP script to achieve
what we're talking about (persistent global vars/functions/objects etc). 

(quoting Rasmus):
 The apache-hooks SAPI can do this, although when I initially wrote it
 nobody seemed interested.  George fixed it/rewrote it to work much better,
 but nobody was interested in his version either, so the current
 implementation isn't up to date anymore.  The problem was likely that
 people really didn't understand how to use it and it is a rather unsafe
 thing to let PHP scripts fiddle with all the various Apache request hooks.
 You can really toast your server that way.  A simple generic data loading
 extension is something people will be able to understand and they are
 unlikely to destroy their server with it.

Rasmus, can you say more about what this is and its current state of
functionality (or lack thereof)?

(back to quoting Jason):
 I'm picturing an extension that would simply not allow an uncautious or
 unknowing scripter to ruin the server, but would only allow him to store
 all his app defs and data model startup stuff in a one-shot performance
 booster.  It could even do it in a separate (CLI?) interpreter and copy
 the data over (but only once!) to the Apache-embedded interpreter using
 a shared memory resource... hmmm.

 So your process is something like (correct me if I misunderstand):
 php_script

[snip...]

 The above suggestion is quite messy and probably not thread-safe, but
 it's a start.  ;)

Well, bravo for venturing a possible implementation, but I'm not versed
enough in Apache or PHP internals to propose or evaluate one in that kind
of detail :(.  I don't know what is possible or how PHP internally stores
variables, functions, objects, etc. that would make them able to persist 
or be shuffled around.

/jw

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] On large application organization [long and possibly boring]

2005-01-08 Thread Josh Whiting
 Josh, I am interested in what you mean by but there may be a better
 overall approach. 

(which was in reference to my original question which was: why are you
using a big switch?)

The reason I say that is because, though I'm no expert on application
design, in my own code I've found that whenever I'm tempted to use a big
switch, it's probably a better idea for me to make a class hierarchy.  I
just have a hard time imagining a case where a big switch accurately
represents the problem to be solved, since in real life solving problems
is more complicated than a flat set of totally isolated operations.

For example, a while ago I wrote some form validation code for a
classifieds application that used a 50+ case switch to validate incoming
form data, one case for each possible kind of field (make & model,
price, location, color, etc).  There was a large amount of overlapping
functionality for similar field types and it was also hard for me to
understand exactly what the start and end goal was for all the cases. 
If some common thing needed to be changed, most or all the cases had to
be updated and kept consistent, and by the 45th case my brain is going
to miss something.

Now I'm not suggesting this is what you're doing - this is just bad
design on my part.  Currently I'm in the process of implementing a class
hierarchy of fields, where for example the make_and_model class
inherits from the drop_down parent class, which inherits from the
basic_field class, each one extending or overriding its parent's
validate() method.  If I need to update the way drop downs validate, I
make a change in the drop_down class and it propagates naturally to all
the instances and subclasses.  A much better method than my earlier
switch!
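
In skeletal form the hierarchy looks something like this (class and
method bodies are made up to match the description; the real code has
much more going on):

<?php
class basic_field {
    function validate($value) { return strlen($value) > 0; }
}

class drop_down extends basic_field {
    var $options = array();
    function validate($value) { return in_array($value, $this->options); }
}

class make_and_model extends drop_down {
    function validate($value) {
        // extend rather than replace the parent's check
        return parent::validate($value) && $this->known_make($value);
    }
    function known_make($value) { return true; }   // placeholder
}
?>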

So, because I feel like I've learned a bit personally about avoiding big 
switches I was curious about your approach, which I say again could be 
perfectly elegant :)

/jw

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] On large application organization [long and possibly boring]

2005-01-07 Thread Josh Whiting
 If I have a large app what is the difference, other than having a very
 large file, of doing this
 
 switch($action){
   /* several dozen cases to follow */
   case foo:
   writing out all of the code
   break;
 }
 
 and this
 
 switch($action){
   /* several dozen cases to follow */
   case foo:
   include that will handle processing this case
   break;
 }
 
 Correct me if I am wrong, but includes (and/or requires) will get all of
 the code in all of the cases regardless if the case is being processed.
 That being the case the code would essentially be the same length. Given
 that, would there be an efficieny issue?
 
 This would (the second way) also make the project easier to work on by
 multiple programmers at the same time, similar to modular work being
 done by programmers on C++ projects.

Correction: include() statements are executed at *run time*, which means
that if the case is not executed, the include will not be executed and
the file is not even read!  The include() method is not only more
efficient but also easier to maintain as you suggest.
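
A rough sketch of what that dispatch looks like in practice (file names
hypothetical):

<?php
switch ($action) {
    case 'foo':
        include 'actions/foo.php';   // only read and compiled when this case runs
        break;
    case 'bar':
        include 'actions/bar.php';
        break;
    default:
        include 'actions/not_found.php';
        break;
}
?>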

Otherwise, I'm curious as to why you're using a large switch, not that 
it's bad inherently IMHO, but there may be a better overall approach. 

/jw

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] Persistent PHP web application?

2005-01-07 Thread Josh Whiting
  Call me crazy or ignorant, i'm both, but would it be possible to build
  an extension that, in its MINIT hook as you suggest, actually runs a
  separate PHP script that contains global definitions, then makes those
  definitions available to later scripts?  this is basically my original
  desire of having a one-time, persistent global include for each apache
  process.  i realize you suggest pulling array data from a source like a
  DB or XML file, which would be a 90% solution for me, but the next step
  (a PHP script) just seemed logical/exciting...
  
  i realize i'm reaching with this, and it implies a conundrum (how does
  an extension run a PHP script if PHP hasn't fully started yet) but this
  kind of thing is just what makes me go and learn new things (C, php/zend
  internals) and I just might try to do it if it was feasible, for the fun
  of it, because it could be useful to lots of people, even if it took me
  a year or two to pull off.
  
  alas, this is a question for the pecl-dev list.
  
  (the mental gears are churning...)
 
 The apache-hooks SAPI can do this, although when I initially wrote it 
 nobody seemed interested.  George fixed it/rewrote it to work much better, 
 but nobody was interested in his version either, so the current 
 implementation isn't up to date anymore.  The problem was likely that 
 people really didn't understand how to use it and it is a rather unsafe 
 thing to let PHP scripts fiddle with all the various Apache request hooks.  
 You can really toast your server that way.  A simple generic data loading 
 extension is something people will be able to understand and they are 
 unlikely to destroy their server with it.
 
 -Rasmus

Does "not up to date" mean the code isn't working with current releases
of php 4 or 5? I'd be interested in giving it a try.

Forgive my ignorance of Apache and PHP internals, but I'm not
understanding the concept of your implementation. I'm not envisioning
allowing a PHP script to meddle with the Apache request
process.  

I'm picturing an extension that would simply not allow an uncautious or
unknowing scripter to ruin the server, but would only allow him to store
all his app defs and data model startup stuff in a one-shot performance
booster.  It could even do it in a separate (CLI?) interpreter and copy
the data over (but only once!) to the Apache-embedded interpreter using 
a shared memory resource... hmmm.

I'm not versed enough to suggest a feasible implementation, but if 
the overall goal is feasible then I'm willing to climb the learning 
curve.

I'm also quite surprised there wasn't much interest in these concepts; I
admit the benefits would be small for most apps, but complex apps with
large data setup could save valuable time, and having persistent
namespaces/modules is plugged as a worthy feature of mod_perl and
mod_python, for what it's worth.

/jw

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] Persistent PHP web application?

2005-01-06 Thread Josh Whiting
 Anything you do in the MINIT hook is basically free, so it would be 
 trivial to load the data for the array from somewhere.  Like a database, 
 an xml file, etc.  So you wouldn't need to hardcode a complex array 
 structure in your MINIT hook, just have this generic little extension 
 that creates an array (or object) from some external source.  To change 
 the data you would change that external source and restart your server, 
 or you could write a PHP function in your extension that forced a reload 
 with the caveat that each running httpd process would need to have that 
 function be called since your array/object lives in each process separately.
 
 If you ask really nicely one of the folks on the pecl-dev list might 
 just write this thing for you.  Especially if you spec it out nicely and 
 think through how it should work.
 
 -Rasmus

Call me crazy or ignorant, i'm both, but would it be possible to build
an extension that, in its MINIT hook as you suggest, actually runs a
separate PHP script that contains global definitions, then makes those
definitions available to later scripts?  this is basically my original
desire of having a one-time, persistent global include for each apache
process.  i realize you suggest pulling array data from a source like a
DB or XML file, which would be a 90% solution for me, but the next step
(a PHP script) just seemed logical/exciting...

i realize i'm reaching with this, and it implies a conundrum (how does
an extension run a PHP script if PHP hasn't fully started yet) but this
kind of thing is just what makes me go and learn new things (C, php/zend
internals) and I just might try to do it if it was feasible, for the fun
of it, because it could be useful to lots of people, even if it took me
a year or two to pull off.

alas, this is a question for the pecl-dev list.

(the mental gears are churning...)

/jw

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] Persistent PHP web application?

2005-01-06 Thread Josh Whiting
 I think I understand where you're coming from. I've had a similar 
 problem and the best solution I've found is eAccelerator (previously 
 known as Turck MMCache). What EA does is keep the bytecodes PHP compiles 
 inshared memory so next time you need that script PHP doesn't need to 
 recompile, EA returns the bytecode from SHM. Now, since PHP scripts are 
 compiled and saved in SHM all I need to do is /save/ the data that does 
 not change often but requires a lot of queries as code (an array inside 
 a script) and include it whenever I need the data. No recompiling, no 
 need to touch the DB again, pure speed. I hope all this helps you. I 
 personally don't need extra processes and stuff like that but if you 
 really want all that you can take a look at phpbeans.
 
 Hope it helps,
 
 
 Adrian Madrid
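
(For what it's worth, the cache-the-data-as-code trick Adrian describes
can be as simple as dumping the array with var_export() into an include
file -- a rough sketch, with the builder function and file name made up:)

<?php
// rebuild the cache file whenever the source data changes:
$categories = build_categories_from_db();   // the expensive part
$fp = fopen('categories.cache.php', 'w');
fwrite($fp, '<?php $categories = ' . var_export($categories, true) . '; ?>');
fclose($fp);

// ...and on a normal request just include it; the accelerator keeps the
// compiled version in shared memory:
include 'categories.cache.php';
?>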

phpBeans looks interesting. after browsing the site a bit, looking at
the introductory material and so on, i couldn't discern the way the
phpbean server persists the beans, and, hence, if the solution would be
fitting for what i'm looking for.  what i'm looking for is a solution
that will keep compiled zend variables and functions (the end result of
executed php code that defines the variables and functions), in memory,
without serialization, across multiple requests.  if this is how the
beans are kept persistent, i'm interested :)

i'm going to try to find someone at phpbeans who can answer that. if 
others are interested, email me and i'll let you know what i find 
(if/when i find it)...

/jw

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] Re: [suspicious - maybe spam] [PHP] [suspicious - maybe spam] SELECT probrem

2005-01-06 Thread Josh Whiting
 Hello Phpu,
 
 Thursday, January 6, 2005, 10:42:15 AM, you wrote:
 
 P I have an array, for ex: $products=array(1, 2,  5 , 7)
 P I want to select all products from the database that has the ids of 
 products.
 P I use this but doesn't work:
 
 $product_ids = implode(',', $products);
 $sql = "SELECT product_name FROM accessories WHERE product_id IN 
 ($product_ids)";
 
 Best regards,
 
 Richard Davey

Slightly off topic but worth mentioning:

If your $products array is generated dynamically, make sure it isn't
empty before running the query, MySQL does NOT like:

SELECT product_name FROM accessories WHERE product_id IN ()

The empty () will cause a MySQL error (it won't just return an empty
result set.)  Plus, why run the query if the array is empty anyway?
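
The guard is trivial (a sketch, reusing the variable names from the
example above):

<?php
if (count($products) > 0) {
    $product_ids = implode(',', $products);
    $result = mysql_query("SELECT product_name FROM accessories
                           WHERE product_id IN ($product_ids)");
} else {
    $result = false;   // nothing to look up, skip the query entirely
}
?>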

/jw

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] apache 1 vs 2 w/php

2005-01-05 Thread Josh Whiting
 I am undecided whether to upgrade to apache 2 (currently running 1.3.33)
 I've heard some bad stuff (some good maybe) about using apache 2 with php..
 does anyone have an opinions?

a somewhat interesting discussion on the subject was recently on
slashdot, i suggest reading at least the blog entries linked in the main
story blurb: http://slashdot.org/apache/04/12/21/1837209.shtml

the bad stuff you're referring to is the threading problem, which is
completely solved if you just use the prefork MPM in apache 2, which
AFAIK is the default on linux.  not sure about Win32.

as for a performance improvement... it isn't hard to setup apache 2 on
the same server as apache 1 listening on a different port, then do
benchmarks and see for yourself.

/jw

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] Persistent PHP web application?

2005-01-04 Thread Josh Whiting
Wow thanks for the helpful breakdown.

 PHP's model is to be completely sandboxed such that every request is 
 completely separate from every other.  Having a persistent interpreter 
 as you describe would break that rule and break the infinite horizontal 
 scalability model of PHP.

Understood.  A persistent interpreter is a whole different approach. I
appreciate your perspective on that, it helps me to reconsider overall
what I'm doing and why I want to do it :)

 Of course, there is nothing that prevents you from storing persistent 
 data somewhere more permanent.  If it is just simple read-only data you 
 have a lot of options.  For example, you could put them in a .ini file 
 that is only loaded on Apache startup and use get_cfg_var() to fetch 
 them.  If you compile PHP with the --with-config-file-scan-dir switch to 
 configure a configuration scan directory you can just drop your own ini 
 file in that directory and it will be read on startup.  This is just 
 key=value pairs and not PHP code, of course.

I'm dealing with large multidimensional arrays, like this:

$categories = array (
1 => array ( 
'name' => 'Autos',
'depth' => 1,
...

:(

 If you need to do something fancier you can stick things in shared 
 memory.  Many of the accelerators give you access to their shared memory 
 segments.  For example, the CVS version of pecl/apc provides apc_store() 
 and apc_fetch() which lets you store PHP datatypes in shared memory 
 directly without needing to serialize/unserialize them.

That pecl/apc feature sounds like a great, cheap solution to my giant
global variable definition problem, which takes the biggest single chunk
of parsing time. The key AFAICS is avoiding the (un)serialization time.
I'd love to see an example if you have one, just to show me a target to
aim for. I'm unfamiliar with C / reading C source code and with shared
memory so I'm having a tough time figuring out how to use that feature.
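
Just so I'm aiming at the right target, is it something along these lines
(a sketch, assuming apc_store()/apc_fetch() behave as you describe; the
key name and build_categories() are placeholders)?

<?php
$categories = apc_fetch('categories');
if ($categories === false) {
    // first hit after a restart/flush: build the array the expensive way
    $categories = build_categories();
    apc_store('categories', $categories);   // kept in shared memory
}
?>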

(It doesn't solve the function definition problem but that is OK - the
functions take FAR less time to parse than the global variable
definitions.)
 
 And finally, the big hammer is to write your own PHP extension.  This is 
 a lot easier than people think and for people who are really looking for 
 performance this is the only way to go.  You can write whatever code you 
 want in your MINIT hook which only gets called on server startup and in 
 that hook you can create whatever persistent variables you need, pull 
 stuff from a DB, etc.  At the same time you will likely want to pull 
 some of your heavier business logic identified by your profiling into 
 the extension as well.  This combination of C and PHP is extremely hard 
 to beat performance-wise regardless of what you compare it to.

This is something I'm VERY interested in. It is encouraging to hear that
it is easier than I expect, and I will look into it further. Based on
the responses from the list I've gotten, this seems like the most
promising total solution. Any outstanding books/articles on the topic,
considering I'm not a C programmer?

Thanks again
/josh w.

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] Re: Persistent PHP web application?

2005-01-04 Thread Josh Whiting
 Why don't you just create a daemon started from the command line 
 (shell/DOS) and have it accept socket connections from your Web server 
 PHP scripts and provide a SOA (Services Oriented API) to the code that 
 accesses your data structures pre-loaded in memory?

Setting up a separate persistent daemon PHP script is appealing to me
because it implements exactly what I want, which is a persistent PHP
application server, and also would allow me to abstract the
implementation behind an API, move it to other servers when load
increases, etc.

However, would a single process PHP server daemon be able to
appropriately handle the incoming load from Apache, which will be
running multiple processes handling concurrent incoming requests?
Implementing a multi-process forking SOAP server in pure PHP seems
inefficient, but maybe I'm not understanding the basic
premise/implementation of the suggestion?

Any good links/tutorials/examples/books for this?

Thanks!
/josh w.

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] Persistent PHP web application?

2005-01-04 Thread Josh Whiting
Thanks for taking the time for your comprehensive response!

 However, given your programming philosophy so far, and the fact that you
 are worried about 7ms and that you specifically requested some kind of
 shared memory space in PHP, you should Read This:
 http://us4.php.net/manual/en/ref.sem.php

 That pretty much is exactly what you asked for.

 Be forewarned that few PHP scripters have needed this stuff, and it's not
 anywhere near as hammered on (read: debugged) as the other options above.

I had the thought of using shared memory but I've found a scarce supply
of introductory material and examples on the subject. The documentation
assumes a level of familiarity with shared memory that I don't have, so
I'm struggling. Examples of how to store and retrieve a set of large
multidimensional arrays to/from shared memory would be wonderful to look
at, if you have some pointers to articles/books/etc
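
For what it's worth, this is the kind of thing I've been attempting (a
rough sketch using the sysvshm functions from ref.sem.php; the segment
key, size, variable key and build_categories() are all arbitrary):

<?php
$shm = shm_attach(0x5e551041, 1048576);     // ~1MB shared segment
$categories = @shm_get_var($shm, 1);        // warning if nothing stored yet
if ($categories === false) {
    $categories = build_categories();       // the expensive one-time setup
    shm_put_var($shm, 1, $categories);      // note: serializes internally
}
shm_detach($shm);
?>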

I am also curious if the the shared memory solution would be a
significant performance improvement, since, AFAIK, the data is
serialized/unserialized for storage/retreival. Since the Zend
Accelerator already uses a shared memory cache for the intermediate
code, would using my own shared memory be any different? Consider that
my global variable definition scripts are nothing more than giant
unserialize statements.

(Incidentally, I've benchmarked the difference between big unserialize
statements and big PHP code array definition statements and found almost
no difference, only a marginal improvement in performance using PHP code
instead of unserialize on long strings.)

  Additionally, there are
  a large number of function definitions (more than 13,000 lines of code
  in all just for these global definitions).

 You should look into breaking these up into groups of functionality -- I'm
 willing to bet you could segment these and have many pages that only call
 in a few functions.

That is good practice, I know i've been a bit lazy in that respect :)
However, it is the variable definitions (big arrays) that take the most
significant time, executing all the function definition code is actually
only a small fraction of the time taken to parse the arrays, so it's a
secondary concern, though you are right I should sit down and rework my
includes so I'm only bringing in what I need.  I'll set aside a week
for that :)

 So if what your application mostly does is load in all this data and
 respond to requests, you could write a *SINGLE* PHP application which
 listened on port 12345 (or whatever port you like) and responded with the
 data requested.  Like writing your own web-server, only it's a
 _-server where you get to fill in the blank with whatever your
 application does.

Please see my response to Manuel Lemos and his suggestion to run a SOAP
server. Basically my concern is the lack of having a
multi-process/forking server to handle concurrent incoming requests
(which may or may not be a problem - not sure).  We're talking about a
persistent PHP server (SOAP or otherwise), and I'm having trouble
grokking how that would work in an environment with many concurrent
requests.  (Other than, of course, running a PHP SOAP server inside
Apache which brings me back to square one :)

 Actually, Zend and others could be interested in your comparisons of
 with and without cache...

If folks are interested in my benchmarks I could spend some time and put
together a summary. Incidentally I did end up doing some comparison of
cached code and uncached code because I unwittingly had the accelerator
turned off for while until I realized it :)

 Hope that helps...  Almost feel like I ought to invoice you at this point :-)

:) Thanks for the pro bono consulting, you have indeed helped!

Regards,
/josh w.

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] Re: Persistent PHP web application?

2005-01-04 Thread Josh Whiting
  However, would a single process PHP server daemon be able to
  appropriately handle the incoming load from Apache, which will be
  running multiple processes handling concurrent incoming requests?

 I don't think you've quite got the right picture here...

 When you write your single process PHP server daemon, Apache's not even in
 the same picture frame any more.

 The requests aren't coming from Apache -- PHP is listening to a port you
 select, exactly in the same way that Apache listens to port 80, MySQL
 listens to 3306, your SSH server listens to 22, your Mail server listens
 to 25, your FTP server listens to [I forgot]...

 In other words, you are giving PHP a promotion from a Module of Apache,
 to being its own web server, only it won't be a web server, it will be
 a Whatever You Want server.  I'll call it WYW (Whatever You Want) for
 the rest of this post.

 Just don't ask me how to pronounce WYW. :-)

pronounce it like "wha-u" as in "wha-u say?" :)

What I envision is an apache server running mod_php, opening sockets to
a standalone WIW (whatever I want) server (a long-running PHP script)
on some other port.  Apache/mod_php handles the remote HTTP clients, the
WIW server treats the php scripts running from apache as clients.  This
is what I meant by the incoming load from Apache.

 I'm betting the do-nothing PHP socket-server will handle VERY heavy load.
 That code is all thinly-disguised wrappers around the C socket library on
 your server -- The same library Apache, FTP servers, and so on are all
 using to handle their load, almost for sure.

 Whether or not what you need to *DO* with the incoming data and your
 calculations needed to compose a valid response will be fast enough really
 depends on what you want your WYW server to *DO*...

 You might even want to have multiple PHP processes going, just like
 Apache, if you need to handle really heavy load.

Ok I'm getting the picture.
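
Something like this bare-bones listen loop is what I now have in mind for
the WIW daemon (purely a sketch: the port, the one-line-in /
serialized-response-out protocol, and the two helper functions are all
made up):

<?php
$defs = build_global_defs();   // the expensive one-time setup
$server = stream_socket_server('tcp://127.0.0.1:12345', $errno, $errstr);
while ($conn = stream_socket_accept($server, -1)) {
    $request = trim(fgets($conn));
    // hand the request plus the persistent definitions to the app code
    fwrite($conn, serialize(handle_request($request, $defs)));
    fclose($conn);
}
?>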

Am I barking up the wrong tree with this whole concern for persistent
PHP native variables and code? It seems like the implementation of a
standalone PHP server is overkill and I ought to just throw more
load-balanced servers at the application rather than cut the application
init time with a solution like this... or at least that is how I'm
starting to see it.

Not to say your suggestions are in vain, if anything they're helping
me to see the bigger picture.

Regards,
/josh w.

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



[PHP] Persistent PHP web application?

2005-01-03 Thread Josh Whiting
Dear list,

My web application (an online classifieds server) requires a set of
fairly large global arrays which contain vital information that most all
the page scripts rely upon for information such as the category list,
which fields belong to each category, and so on. Additionally, there are
a large number of function definitions (more than 13,000 lines of code
in all just for these global definitions).

These global arrays and functions never change between requests.
However, the PHP engine destroys and recreates them every time. After
having spent some serious time doing benchmarking (using Apache Bench),
I have found that this code takes at least 7ms to parse per request on
my dual Xeon 2.4ghz server (Zend Accelerator in use*). This seriously
cuts into my server's peak capacity, reducing it by more than half.

My question is: is there a way to define a global set of variables and
functions ONCE per Apache process, allowing each incoming hit to run a
handler function that runs within a persistent namespace? OR, is it
possible to create some form of shared variable and function namespace
that each script can tap?

AFAIK, mod_python, mod_perl, Java, etc. all allow you to create a
persistent, long-running application with hooks/handlers for individual
Apache requests. I'm surprised I haven't found a similar solution for
PHP.

In fact, according to my work in the past few days, if an application
has a large set of global functions and variable definitions, mod_python
FAR exceeds the performance of mod_php, even though Python code runs
significantly slower than PHP code (because in mod_python you can put
all these definitions in a module that is loaded only once per Apache
process).

The most promising prospect I've come across is FastCGI, which for Perl
and other languages, allows you to run a while loop that sits and
receives incoming requests (e.g. while (FCGI::accept() >= 0) {..}).
However, the PHP/FastCGI modality seems to basically compare to mod_php:
every request still creates and destroys the entire application
(although the PHP interpreter itself does persist).

Essentially I want to go beyond a persistent PHP *interpreter* (mod_php,
PHP/FastCGI) and create a persistent PHP *application*... any
suggestions?

Thanks in advance for any help!
Regards,
J. Whiting

* - Please note that I am using the Zend Accelerator (on Redhat
Enterprise with Apache 1.3) to cache the intermediate compiled PHP code.
My benchmarks (7ms+) are after the dramatic speedup provided by the
accelerator. I wouldn't even bother benchmarking this without the
compiler cache, but it is clear that a compiler cache does not prevent
PHP from still having to run the (albeit precompiled) array and function
definition code itself.

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



Re: [PHP] Total Server Sessions

2005-01-03 Thread Josh Whiting
 $num_sessions = count(glob(session_save_path() . '/sess_*'));
 echo "There are about {$num_sessions} active sessions.";
 
 It will be fairly accurate so long as your garbage collection is triggered 
 fairly often.

it is worth noting that this doesn't work if you are using the recursive
directory structure method to store sessions, or any other session
storage method other than the default files (in-memory sessions, database
sessions, etc).  and on a shared host, you may be counting the total
number of sessions for all the virtual hosts on the server. also, on a
shared host, it is usually up to the sysadmins how PHP stores sessions, so
your mileage may vary.

/jw

-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php



[PHP] Re: Cache Question

2003-08-18 Thread Josh Whiting
this is off the topic of caching, but is related and could have an 
impact on the issue: using a .php instead of a .mp3 would seem like a 
good idea, but this ties into a problem i'm having right now with 
streaming mp3s using, in my case, the flash player plugin to stream and 
play the file.

it would seem that there is a *distinct difference* between making a 
direct file request and making a request for a php file that sends the 
same data. this perhaps has something to do with the way apache/php 
handles the request? i can't come up with any other explanation, and i'm 
not an expert on the behind-the-scenes end of the server-side 
applications.

here's the unexplained difference: when directly requesting an .mp3 
file, the app (flash plugin) works fine. when requesting a .php file 
that reads and sends the same data from a file, things still work but 
the plugin/browser cannot interrupt the operation until the full file is 
downloaded - i.e., you can't click a link in the page and go somewhere 
else until the stream is completely downloaded and the php script is 
complete.  the browser just sits there with its icon spinning until all 
is finished, no links work until then.  it's very weird.

i am doing this just by using a fread() on a file handle and echoing the 
result.  any explanations, or enlightened thoughts on 1) why this 
happens, and 2) how it could impact the difference between a direct 
.mp3 embed and a .php embed?
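
for reference, the guts of the php script are essentially the following
(a sketch using fpassthru() in place of my fread()-and-echo loop; the
file path is made up and error handling is left out):

<?php
$file = '/path/to/audio/file.mp3';
header('Content-Type: audio/mpeg');
header('Content-Length: ' . filesize($file));
$fp = fopen($file, 'rb');
fpassthru($fp);   // send the raw bytes straight through to the client
?>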

josh whiting

Ivo Fokkema wrote:
Hi Tony,

Chris explained a lot about this... I'm not an expert on this, but you might
want to give a try to embed something like :
<embed src='mp3.php?mp3=filename' autostart=true>

And then let mp3.php send all of the no-cache headers together with the
contents of the filename.mp3.
It might work, I would give it a try...

HTH,

--
Ivo


--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php