php-general Digest 29 Mar 2010 01:35:32 -0000 Issue 6664

Topics (messages 303548 through 303568):

Re: bug tracking system
        303548 by: Ashley Sheridan
        303549 by: Nilesh Govindarajan
        303550 by: Nathan Rixham
        303551 by: ebhakt
        303552 by: Nilesh Govindarajan
        303556 by: shiplu
        303557 by: Nathan Rixham

Re: Server-side postscript-to-PDF on-the-fly conversion
        303553 by: Paul M Foster

Re: optimizing PHP for microseconds
        303554 by: Nathan Rixham

Re: Allowing multiple, simultaneous, non-blocking queries.
        303555 by: Nathan Rixham
        303558 by: Per Jessen
        303560 by: Adam Richardson
        303562 by: Nathan Rixham
        303566 by: Phpster
        303568 by: Nathan Rixham

Re: Web Design
        303559 by: Nathan Rixham
        303561 by: Andre Polykanine

how to provide download of files mow in documentroot
        303563 by: ebhakt

Converting funky characters
        303564 by: Skip Evans
        303567 by: Nilesh Govindarajan

Re: Please guide in selection of Framework: according to your experience
        303565 by: Vishal Rewari

Administrivia:

To subscribe to the digest, e-mail:
        php-general-digest-subscr...@lists.php.net

To unsubscribe from the digest, e-mail:
        php-general-digest-unsubscr...@lists.php.net

To post to the list, e-mail:
        php-gene...@lists.php.net


----------------------------------------------------------------------
--- Begin Message ---
On Sun, 2010-03-28 at 16:28 +0300, Andre Polykanine wrote:

> Hello everyone,
> 
> Can you recommend a bug tracking system to be installed on the site?
> Requirements: written in PHP (or maybe Perl); tickets system; e-mail
> notifications.
> -- 
> With best regards from Ukraine,
> Andre
> Http://oire.org/ - The Fantasy blogs of Oire
> Skype: Francophile; Wlm&MSN: arthaelon @ yandex.ru; Jabber: arthaelon @ 
> jabber.org
> Yahoo! messenger: andre.polykanine; ICQ: 191749952
> Twitter: http://twitter.com/m_elensule
> 
> 


I think your best option is Mantis. It's written in PHP, does all that
you've asked, and it's pretty easy to use really.

Thanks,
Ash
http://www.ashleysheridan.co.uk



--- End Message ---
--- Begin Message ---
On 03/28/2010 06:58 PM, Andre Polykanine wrote:
Hello everyone,

Can you recommend a bug tracking system to be installed on the site?
Requirements: written in PHP (or maybe Perl); tickets system; e-mail
notifications.

http://www.google.co.in/search?aq=0&oq=php+bug&sourceid=chrome&ie=UTF-8&q=php+bug+tracker

--
Nilesh Govindarajan
Site & Server Administrator
www.itech7.com
मेरा भारत महान !
मम भारत: महत्तम भवतु !

--- End Message ---
--- Begin Message ---
Ashley Sheridan wrote:
> On Sun, 2010-03-28 at 16:28 +0300, Andre Polykanine wrote:
>>
>> Can you recommend a bug tracking system to be installed on the site?
>> Requirements: written in PHP (or maybe Perl); tickets system; e-mail
>> notifications.
>> 
> 
> I think your best option is Mantis. It's written in PHP, does all that
> you've asked, and it's pretty easy to use really.
> 

agreed, unless IDE integration and familiarity is the name of the game,
in which case the usual suspects of trac, jira and bugzilla are worth
considering.

regards!

--- End Message ---
--- Begin Message ---
Use drupal with the bug tracking system
http://drupal.org/project/project_issue

On Sun, Mar 28, 2010 at 6:58 PM, Andre Polykanine <an...@oire.org> wrote:

> Hello everyone,
>
> Can you recommend a bug tracking system to be installed on the site?
> Requirements: written in PHP (or maybe Perl); tickets system; e-mail
> notifications.
> --
> With best regards from Ukraine,
> Andre
> Http://oire.org/ - The Fantasy blogs of Oire
> Skype: Francophile; Wlm&MSN: arthaelon @ yandex.ru; Jabber: arthaelon @
> jabber.org
> Yahoo! messenger: andre.polykanine; ICQ: 191749952
> Twitter: http://twitter.com/m_elensule
>
>
> --
> PHP General Mailing List (http://www.php.net/)
> To unsubscribe, visit: http://www.php.net/unsub.php
>
>


-- 
Bhaskar Tiwari
GTSE Generalist
Directory Services
Microsoft

____________________________________________________________
All we have to decide is what to do with the time that has been given to us


http://www.ebhakt.com/
http://fytclub.net/
http://ebhakt.info/

--- End Message ---
--- Begin Message ---
On 03/28/2010 07:25 PM, ebhakt wrote:
Use drupal with the bug tracking system
http://drupal.org/project/project_issue
<http://drupal.org/project/project_issue>

On Sun, Mar 28, 2010 at 7:22 PM, Nilesh Govindarajan <li...@itech7.com
<mailto:li...@itech7.com>> wrote:

    On 03/28/2010 06:58 PM, Andre Polykanine wrote:

        Hello everyone,

        Can you recommend a bug tracking system to be installed on the site?
        Requirements: written in PHP (or maybe Perl); tickets system; e-mail
        notifications.


    
http://www.google.co.in/search?aq=0&oq=php+bug&sourceid=chrome&ie=UTF-8&q=php+bug+tracker
    
<http://www.google.co.in/search?aq=0&oq=php+bug&sourceid=chrome&ie=UTF-8&q=php+bug+tracker>

    --
    Nilesh Govindarajan
    Site & Server Administrator
    www.itech7.com <http://www.itech7.com>
    मेरा भारत महान !
    मम भारत: महत्तम भवतु !


    --
    PHP General Mailing List (http://www.php.net/)
    To unsubscribe, visit: http://www.php.net/unsub.php




--
Bhaskar Tiwari
GTSE Generalist
Directory Services
Microsoft

____________________________________________________________
All we have to decide is what to do with the time that has been given to us


http://www.ebhakt.com/
http://fytclub.net/
http://ebhakt.info/



Hey, I wasn't the one who put up the question.

--
Nilesh Govindarajan
Site & Server Administrator
www.itech7.com
मेरा भारत महान !
मम भारत: महत्तम भवतु !

--- End Message ---
--- Begin Message ---
I want to add with Andre.
I am looking for a free hosted bug tracking solution. I can not afford
to host it in my web server.
So is there any free one??
It should not be public. Only me and my clients will be able to see it.

Thanks

-- 
Shiplu Mokaddim
My talks, http://talk.cmyweb.net
Follow me, http://twitter.com/shiplu
SUST Programmers, http://groups.google.com/group/p2psust
Innovation distinguishes bet ... ... (ask Steve Jobs the rest)

--- End Message ---
--- Begin Message ---
shiplu wrote:
> I want to add with Andre.
> I am looking for a free hosted bug tracking solution. I can not afford
> to host it in my web server.
> So is there any free one??
> It should not be public. Only me and my clients will be able to see it.
> 
> Thanks
> 

yes, for all cases, commercial or not; self hosted or remote; indefero
[1] is a fantastic quick to use solution:

[1] http://www.indefero.net/

Many Regards,

Nathan

--- End Message ---
--- Begin Message ---
On Sat, Mar 27, 2010 at 08:57:02PM +0100, Frank Arensmeier wrote:


<snip>

>
> If your webserver runs on MacOSX, look out for a binary called
> 'pstopdf'. From the man page:
>
> [...]
> pstopdf is a tool to convert PostScript input data into a PDF
> document. The input data may come from a file
>      or may be read from stdin. The PDF document is always written to
> a file. The name of the output PDF file is
>      derived from the name of the input file or may be explicitly
> named using the -o option.
> [...]

This is a fairly standard application on all *nix boxes. That's why
MacOS includes it. And if your hosting platform is Linux, it's probably
there as well.

Paul

-- 
Paul M. Foster

--- End Message ---
--- Begin Message ---
mngghh, okay, consider me baited.

Daevid Vincent wrote:
>> Per Jessen wrote:
>>> Tommy Pham wrote:
>>>
>>>> (I remember a list member, not mentioning his name, does optimization
>>>> of PHP coding for just microseconds.  Do you think how much more he'd
>>>> benefit from this?)
>>> Anyone who optimizes PHP for microseconds has lost touch with reality -
>>> or at least forgotten that he or she is using an interpreted language.
>> But sometimes it's just plain fun to do it here on the list with 
>> everyone further optimizing the last optimized snippet :)
>>
>> Cheers,
>> Rob.
> 
> Was that someone me? I do that. And if you don't, then you're the kind of
> person I would not hire (not saying that to sound mean). I use single
> quotes instead of double where applicable. I use -- instead of ++. I use
> $boolean = !$boolean to alternate (instead of mod() or other incrementing
> solutions). I use "LIMIT 1" on select, update, delete where appropriate. I
> use the session to cache the user and even query results. I don't use
> bloated frameworks (like Symfony or Zend or Cake or whatever else tries to
> be one-size-fits-all). The list goes on.

That's not optimization, at best it's just an awareness of PHP syntax
and a vague awareness of how the syntax will ultimately be interpreted.

Using "LIMIT 1" is not optimizing it's just saying you only want one
result returned, the SQL query could still take five hours to run if no
indexes, a poorly normalised database, wrong datatypes, and joins all
over the place.

Using the session to cache "the user" is the only thing that comes
anywhere near to application optimisation in all you've said; and
frankly I would take to be pretty obvious and basic stuff (yet pointless
in most scenario's where you have to cater for possible bans and
de-authorisations) - storing query results in a session cache is only
ever useful in one distinct scenario, when the results of that query are
only valid for the owner of the session, and only for the duration of
that session, nothing more, nothing less. This is a one in a million
scenario.

Bloated frameworks, most of the time they are not bloated, especially
when you use them properly and only include what you need on a need to
use basis; then the big framework can only be considered a class or two.
Sure the codebase seems more bloated, but at runtime it's easily
negated. You can use these frameworks for any size project, enterprise
included, provided you appreciated the strengths and weaknesses of the
full tech stack at your disposal. Further, especially on enterprise
projects it makes sense to drop development time by using a common
framework, and far more importantly, to have a code base developers know
well and can "hit the ground running" with.

Generally unless you have unlimited learning time and practically zero
budget constraints frameworks like the ones you mentioned should always
be used for large team enterprise applications, although perhaps
something more modular like Zend is suited. They also cover your own
back when you are the lead developer, because on the day when a more
experienced developer than yourself joins the project and points out all
your mistakes, you're going to feel pretty shite and odds are very high
that the project will go sour, get fully re-written or you'll have to
leave due to "stress" (of being wrong).

> I would counter and say that if you are NOT optimizing every little drop of
> performance from your scripts, then you're either not running a site
> sufficiently large enough to matter, or you're doing your customers a
> disservice.

Or you have no grasp of the tech stack available and certainly aren't
utilizing it properly; I'm not suggesting that knowing how to use your
language of choice well is a bad thing, it's great; knock yourself out.
However, suggesting that optimising a php script for microseconds will
boost performance in large sites (nay, any site) shows such a loss of
focus that it's hard to comprehend.

By also considering other posts from yourself (in reply to this and
other threads) I can firmly say the above is true of you.

Optimisation comes down to running the least amount of code possible,
and only when really needed. If you are running a script / query /
process which provides the same output more than once then you are not
optimising. This will be illustrated further down this reply perfectly.

The web itself is the ultimate scalable distributed application known to
man, and has been guided and created by those far more knowledgeable
than you or I (Berners-Lee, Fielding, Godel, Turing et al), everything
you need is right there (and specifically in HTTP). Failing to leverage
this is where a lack of focus and scope comes in to play, especially
with large scale sites, and means you are doing your customers a disservice.

For anything where the output can be used more than once, (at a granular
level), the output should be cached.

For example, if you run SELECT / UPDATE/INSERT queries at a ratio any
higher than 1 SELECT per UPDATE/INSERT then you *will* get a sizeable
performance upgrade by caching the output. Another less granular example
would be a simple "blog", you can generated the page every time, or you
can only "publish" the page every time the post is updated or a comment
is added; and thus you can leverage file system cache's which most
operating systems have now, and http server caching, and HTTP caching
itself by utilizing last-modified; etags and having 304 not modified
returned for any repeat requests.

> I come from the video game world where gaining a frame or two of animation
> per second matters. It makes your game feel less choppy and more fluid and
> therefore more fun to play.

Many lessons can be learned from the video game (and flash) worlds, but
these are generally just how to code well; most of the real
optimizations come from how you serialize data, minimise the amount of
output data + frequency at which it is sent; and moreover by compiler or
bytecode optimisations - some of this can cross over in to PHP world,
but not much since it's interpreted rather than compiled, and even less
since the same code isn't run hundreds of time per second - and if it
is; you are normally doing something wrong (in all but the most specific
of cases).

> If I have to wait 3 seconds for a page to render, that wait is noticeable.
> Dumb users will click refresh, and since (unbelievably in this day and age)
> PHP and mySQL don't know the user clicked 'stop' or 'refresh', and
> therefore mySQL will execute the same query a second time. That's an
> entirely different thread I've already ranted on about.

Render time is a totally different subject, since css/images/javascript
and more come in to play, not to mention the users browser and machine
spec. This is usually improved by including image width and height in
your html (negate this and the user agent has to "sniff" all images to
get their dimensions before layout can be calculated and later
rendered), using static shared stylesheets which can be returned as 304
not modified; and including client-side scripts as deferred or after the
main body of content (hence why google analytics specifies the placing
of their javascript just before the </body> tag).

Now if there was one sentence in all of the recent posts which conveys
the amount of misunderstanding at play here, it's this one: "Dumb users
will click refresh, and since (unbelievably in this day and age) PHP and
mySQL don't know the user clicked 'stop' or 'refresh', and therefore
mySQL will execute the same query a second time."

No no no no no! Unbelievably in this day and age developers are still
creating systems where the "same queries" (implying the same output) can
be executed by something as foreign a second time (and indeed multiple
times).

If you learn anything from this, learn that this is the crux of the
failings, the output of that query, at the very least, should be cache'd
- thankfully your rdmbs is partially saving your ass half the time by
using it's own cache.

PHP and MySQL are not being dumb here, you are in *full* control of what
happens in your application, and if you have it set up so that the same
things, producing the same results, are being run time after time, then
more fool you. That output should be saved, in memory or file, and used
the second time; ideally that full view (if accessed generally more than
once) should be persisted so that it can be served statically until part
of the view needs updated; then regenerate and repeat.

> If you can shave off 0.1s from each row of a query result, after only 10
> rows, you've saved the user 1 full second. But realistically, you are most
> likely displaying hundreds (or in my case, thousands) of rows. Now I've
> just saved this user 10s to 100s (that's a minute and a half!)

<start-negativity>
O.M.G. am I reading these numbers correctly? shave off 0.1 seconds from
each row? saving the user 10-100 seconds? Just how are you coding these
applications!
</end-negativity>

In my world, if a "heavy" script is taking any more than 0.1 seconds to
run in it's entirety we have a problem; honestly, I'm unsure what to
write here - the only constructive thought I have is, why don't we have
a "PHP week" on the list; where a standard application is created; then
we optimise the hell out of it and catalogue what was done for all to see.

We'd need:
2 temporary servers (one web, one db : any spec)
1 donated "application" w/ data

I'd be up for it; and would be interested to see who just quick we can
make the thing between us all.

Would suggest a few test scripts where made to call a series of
operations, user paths as it were, then run it through ab and get some
numbers.

> I'm dealing with TB databases with billions of rows and complex queries
> that would make you (and often times me too) cringe in fright. Sure, if
> you're dealing with your who-gives-a-shit "blog" website and all 20 entries
> of crap-nobody-cares-about, then do whatever you want. But if you're doing
> professional, enterprise level work, or have real customers who expect
> performance, then you sure as hell better be considering all the ways to
> speed up your page. They don't run in a vacuume. They don't just have a
> single query.

no comment; I'm doing the same and have done for years; and the words
you are coming out with just don't add up - if you are on TB datasets
why the hell are you using RDBMS and php/mysql?? you need to be on to
non relational databases; and considering the hadoops of the world.

Suffice to say, if you have a complex query - something is vastly wrong
with the full architecure and system design.

all from experience.

Finally, reading through the list posts from the last week or two I've
become rather concerned about just how much disinformation and lack of
understanding is floating about. Many of the long time posters on this
list who do know better have either kept quiet or not covered the points
properly, whilst many more have been baited in to discussing questions
and points which have no answer, because they are the wrong questions to
be asking in the first place.

Times like this call for a smart-ass, and today I'll be that smart-ass;
not because I want to be labelled as such, but so that the other
knowledgeable people on the list can hook up on anything I've got wrong
and challenge it; and hopefully, ultimately, we'll have a full positive
thread that all can read and gain positive insight from as to how to use
PHP and leverage the full stack of technologies we have available to
address most (if not all) the points raised recently.

And Daevid, specifically, don't think for a minute these aren't learning
curves many of us have taken - skip back a couple of years, look through
the posts, and you'll find another developer banging on about threads in
php and optimising for micro-seconds ;)

Many Regards,

Nathan

--- End Message ---
--- Begin Message ---
Richard Quadling wrote:
> Hi.
> 
> As I understand things, one of the main issues in the "When will PHP
> grow up" thread was the ability to issue multiple queries in parallel
> via some sort of threading mechanism.
> 
> Due to the complete overhaul required of the core and extensions to
> support userland threading, the general consensus was a big fat "No!".
> 
> 
> As I understand things, it is possible, in userland, to use multiple,
> non-blocking sockets for file I/O (something I don't seem to be able
> to achieve on Windows http://bugs.php.net/bug.php?id=47918).
> 
> Can this process be "leveraged" to allow for non-blocking queries?
> 
> Being able to throw out multiple non-blocking queries would allow for
> the "queries in parallel" issue.
> 
> My understanding is that at the base level, all queries are running on
> a socket in some way, so isn't this facility nearly already there in
> some way?

Yes.

"Threading" is only realistically needed when you have to get data from
multiple sources; you may as well get it all in parallel rather than
sequentially to limit the amount of time your application / script is
sitting stale and not doing any processing.

In the CLI you can leverage forking to the process to cover this.

When working in the http layer / through a web server you can leverage
http itself by giving each query its own url and sending out every
request in a single http session; allowing the web server to do the
heavy lifting and multi-threading; then you get all responses back in
the order you requested.

In both environments you can use non-blocking sockets to do your
communications with other services and 3rd parties; whilst you can only
process the the returned data sequentially, at least all the foreign
services are doing their work at the same time. Which cuts down user
perceived runtime and the "real" time (since your own php code can
ultimately only run X fast).

A short example would be to consider using the non blocking mysql query
function against multiple connections; this way mysql is doing the heavy
lifting in parallel and you are processing results sequentially.

In all scenarios /all/ of the contributing aspects have to be considered
though; the number of open connections, how much extra weight that puts
on the server (having a knock on effect on other processes), what
happens when one of the "threads" fails and so forth.

Normally there are many different ways to handle the same problem
though; such as views at the rdbms level, publishing / caching output,
or considering if you are still in the right language - sometimes
factoring bits which require multi threading in to different languages
and services lends to a nicer solution.

And finally, more often than not, the same problem can be addressed by
taking the final output, then working out how to produce it in reverse;
many queries can be turned in to one, data can be normalised higher up
the chain, sorting can occur in php rather than in the rdbms and many
more solutions. Always many ways to skin the cat :)

Regards!

--- End Message ---
--- Begin Message ---
Richard Quadling wrote:

> Hi.
> 
> As I understand things, one of the main issues in the "When will PHP
> grow up" thread was the ability to issue multiple queries in parallel
> via some sort of threading mechanism.
> 
> Due to the complete overhaul required of the core and extensions to
> support userland threading, the general consensus was a big fat "No!".

Maybe a "Thanks, but no thanks".

> As I understand things, it is possible, in userland, to use multiple,
> non-blocking sockets for file I/O (something I don't seem to be able
> to achieve on Windows http://bugs.php.net/bug.php?id=47918).
> 
> Can this process be "leveraged" to allow for non-blocking queries?
> 
> Being able to throw out multiple non-blocking queries would allow for
> the "queries in parallel" issue.
> 
> My understanding is that at the base level, all queries are running on
> a socket in some way, so isn't this facility nearly already there in
> some way?

AFAICT (i.e. without having tried it), myqlnd enables you to do
asynchronous queries on mysql - the docuementation is a little lacking.  
Personally speaking, that would be my first avenue of attack. 


-- 
Per Jessen, Zürich (8.2°C)


--- End Message ---
--- Begin Message ---
>
> "Threading" is only realistically needed when you have to get data from

multiple sources; you may as well get it all in parallel rather than

sequentially to limit the amount of time your application / script is

sitting stale and not doing any processing.


> In the CLI you can leverage forking to the process to cover this.


> When working in the http layer / through a web server you can leverage

http itself by giving each query its own url and sending out every

request in a single http session; allowing the web server to do the

heavy lifting and multi-threading; then you get all responses back in

the order you requested.


Regarding leveraging http to achieve multi-threading-like capabilities, I've
tried this using my own framework (each individual dynamic region of a page
is automatically available as REST-ful call to the same page to facilitate
ajax capabilities, and I tried using curl to parallel process each of the
regions to see if the pseudo threading would by an advantage.)

In my tests, the overhead of the additional http requests killed any
advantage that might have been gained by generating the dynamic regions in a
parallel fashion.  Do you know of any examples where this actually improved
performance?  If so, I'd like to see them so I could experiment more with
the ideas.

Thanks,

Adam

-- 
Nephtali:  PHP web framework that functions beautifully
http://nephtaliproject.com

--- End Message ---
--- Begin Message ---
Adam Richardson wrote:
>> "Threading" is only realistically needed when you have to get data from
> 
> multiple sources; you may as well get it all in parallel rather than
> 
> sequentially to limit the amount of time your application / script is
> 
> sitting stale and not doing any processing.
> 
> 
>> In the CLI you can leverage forking to the process to cover this.
> 
> 
>> When working in the http layer / through a web server you can leverage
> 
> http itself by giving each query its own url and sending out every
> 
> request in a single http session; allowing the web server to do the
> 
> heavy lifting and multi-threading; then you get all responses back in
> 
> the order you requested.
> 
> 
> Regarding leveraging http to achieve multi-threading-like capabilities, I've
> tried this using my own framework (each individual dynamic region of a page
> is automatically available as REST-ful call to the same page to facilitate
> ajax capabilities, and I tried using curl to parallel process each of the
> regions to see if the pseudo threading would by an advantage.)
> 
> In my tests, the overhead of the additional http requests killed any
> advantage that might have been gained by generating the dynamic regions in a
> parallel fashion.  Do you know of any examples where this actually improved
> performance?  If so, I'd like to see them so I could experiment more with
> the ideas.

Hi Adam,

Good question, and you picked up on something I negated to mention.

With HTTP/1.1 came a little used addition which allows you to send
multiple requests through a single connection - which means you can load
up multiple requests and receive all the responses in sequence through
in a single "call".

Thus rather than the usual chain of:

open connection
send request
receive response
close connection
repeat

you can actually do:

open connection
send requests 1-10
receive responses 1-10
close connection

The caveat is one connection per server; but it's also interesting to
note that due to the "host" header you can call different "sites" on the
same physical machine.

I do have "an old class" which covers this; and I'll send you it
off-list so you can have a play.

In the context of this; it is also well worth noting some additional
bonuses.

By factoring each data providing source (which could even be a single
sql query) in to scripts of their own, with their own URIs - it allows
you to implement static caching of results via the web server on a case
by case basis.

A simple example I often used to use would be as follows:

uri: http://example.com/get-comments?post=123
source:
<?php
// if the static comments query results cache needs updated
// or doesn't exist then generate it
if(
    file_exists('/query-results/update-comments-123')
    || !file_exists('/query-results/comments-123')
  ) {
  if( $results = $db->query($query) ) {
    // only save the results if they are good
    file_put_contents(
      '/query-results/comments-123',
      json_encode($results)
    );
  }
}
echo file_get_contents('/query-results/comments-123');
exit();
?>

I say "used to" because I've since adopted a more restful & lighter way
of doing things;

uri: http://example.com/article/123/comments
and my webserver simply returns the static file using os file cache and
its own cache to keep it nice and speedy.

On the generation side; everytime a comment is posted the script which
saves the comment simply regenerates the said file containing the static
query results.

For anybody wondering why.. I'll let ab to the talking:

Server Software:        Apache/2.2
Server Hostname:        10.12.153.70
Server Port:            80

Document Path:          /users/NMR/70583/forum_post
Document Length:        10828 bytes

Concurrency Level:      250
Time taken for tests:   1.432020 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      110924352 bytes
HTML transferred:       108323312 bytes
Requests per second:    6983.14 [#/sec] (mean)
Time per request:       35.800 [ms] (mean)
Time per request:       0.143 [ms] (mean, across all concurrent requests)
Transfer rate:          75644.20 [Kbytes/sec] received


Yes, that's 6983 requests per second completed on a bog standard lamp
box; one dual-core and 2gb ram.

reason enough?

Regards!

--- End Message ---
--- Begin Message ---


On Mar 28, 2010, at 2:45 PM, Nathan Rixham <nrix...@gmail.com> wrote:

Adam Richardson wrote:
"Threading" is only realistically needed when you have to get data from

multiple sources; you may as well get it all in parallel rather than

sequentially to limit the amount of time your application / script is

sitting stale and not doing any processing.


In the CLI you can leverage forking to the process to cover this.


When working in the http layer / through a web server you can leverage

http itself by giving each query its own url and sending out every

request in a single http session; allowing the web server to do the

heavy lifting and multi-threading; then you get all responses back in

the order you requested.


Regarding leveraging http to achieve multi-threading-like capabilities, I've tried this using my own framework (each individual dynamic region of a page is automatically available as REST-ful call to the same page to facilitate ajax capabilities, and I tried using curl to parallel process each of the
regions to see if the pseudo threading would by an advantage.)

In my tests, the overhead of the additional http requests killed any
advantage that might have been gained by generating the dynamic regions in a parallel fashion. Do you know of any examples where this actually improved performance? If so, I'd like to see them so I could experiment more with
the ideas.

Hi Adam,

Good question, and you picked up on something I negated to mention.

With HTTP/1.1 came a little used addition which allows you to send
multiple requests through a single connection - which means you can load
up multiple requests and receive all the responses in sequence through
in a single "call".

Thus rather than the usual chain of:

open connection
send request
receive response
close connection
repeat

you can actually do:

open connection
send requests 1-10
receive responses 1-10
close connection

The caveat is one connection per server; but it's also interesting to
note that due to the "host" header you can call different "sites" on the
same physical machine.

I do have "an old class" which covers this; and I'll send you it
off-list so you can have a play.

In the context of this; it is also well worth noting some additional
bonuses.

By factoring each data providing source (which could even be a single
sql query) in to scripts of their own, with their own URIs - it allows
you to implement static caching of results via the web server on a case
by case basis.

A simple example I often used to use would be as follows:

uri: http://example.com/get-comments?post=123
source:
<?php
// if the static comments query results cache needs updated
// or doesn't exist then generate it
if(
   file_exists('/query-results/update-comments-123')
   || !file_exists('/query-results/comments-123')
 ) {
 if( $results = $db->query($query) ) {
   // only save the results if they are good
   file_put_contents(
     '/query-results/comments-123',
     json_encode($results)
   );
 }
}
echo file_get_contents('/query-results/comments-123');
exit();
?>

I say "used to" because I've since adopted a more restful & lighter way
of doing things;

uri: http://example.com/article/123/comments
and my webserver simply returns the static file using os file cache and
its own cache to keep it nice and speedy.

On the generation side; everytime a comment is posted the script which
saves the comment simply regenerates the said file containing the static
query results.

For anybody wondering why.. I'll let ab to the talking:

Server Software:        Apache/2.2
Server Hostname:        10.12.153.70
Server Port:            80

Document Path:          /users/NMR/70583/forum_post
Document Length:        10828 bytes

Concurrency Level:      250
Time taken for tests:   1.432020 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      110924352 bytes
HTML transferred:       108323312 bytes
Requests per second:    6983.14 [#/sec] (mean)
Time per request:       35.800 [ms] (mean)
Time per request: 0.143 [ms] (mean, across all concurrent requests)
Transfer rate:          75644.20 [Kbytes/sec] received


Yes, that's 6983 requests per second completed on a bog standard lamp
box; one dual-core and 2gb ram.

reason enough?

Regards!

--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php


I am interested in how you are handling security in this process. How are you managing sessions with the restful interface? This is the one thing that really interests me with the whole restful approach.

Bastien

Sent from my iPod

--- End Message ---
--- Begin Message ---
Phpster wrote:
> I am interested in how you are handling security in this process. How
> are you managing sessions with the restful interface? This is the one
> thing that really interests me with the whole restful approach.

one doesn't do sessions with rest :)

http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm

95% of the time the uri's don't need any security or "session" type
stuff as it's all public data (think about it, if it's on a page, it's
naturally public)

with regards security; personally I use client side ssl certificates and
call through https (and further foaf+ssl) however any old
basic/digest/whatever authentication will do.

the major point of rest is to expose everything needed via GET on URIs,
(hypermedia as the engine of application state); URIs are not GETable at
a later date if they require session data. Hence why you pass or prompt
for any needed credentials; and further abstract the security in to the
transport layer (or tunnel, in the case of https).

regards!

--- End Message ---
--- Begin Message ---
Hi Parham,

Long term, I would suggest viewing this as an ideal opportunity to move
to strictly back end PHP only; thus negating and skirting around the
visual issues; it is often more fun this way any way!

Jobs such as web service integrations; payment service provider
integrations and similar provide a broad scope and some enjoyable jobs.

Additionally, there are often many good bug fixing and optimisation jobs
that stay well clear of the visual side of output.

I'm sure many will be keen to work with you, I know that I for one do
not view your blindness as a disability when it comes to coding, and I'm
hoping that I'll have the chance to work with you next time a suitable
project arises.

Many Regards,

Nathan

Parham Doustdar wrote:
> Hello there,
> I am a blind, Iranian web developer. I have worked with PHP for about a 
> year, and have created websites for clients who, due to their close bonds 
> with me (E.G. as a friend, relative, etc) has been able to walk through 
> every step of the website design stages with me (E.G. placement of 
> navigation menu, footers, headers, content, etc). I considered myself a 
> beginner, stepping towards the stage marked "intermediate". Today, though, 
> my idea of where I stand made a jarring change.
> 
> I took a look at websites offering freelance jobs, namely 
> http://www.freelancer.com and http://www.project4hire.com. I saw instantly 
> that what a client expects of me is to just "get it right", meaning, don't 
> call them and ask where should everything be, what color it should be, etc, 
> etc. Realizing that I am far behind due to my blindness, and dismayed, I 
> contacted my other blind friends that have done web development. I have been 
> offered one solution that has always worked: find partner(s).
> 
> I thought long and hard on this, trying to find out a place I could reach to 
> other fellow web developers, and in the end of the day, I came up with the 
> PHP.general mailing list that has always helped me in the past.
> 
> So, in the end, here is my question. Is anyone willing to work with an 
> Iranian PHP developer that has experience with cooperative development 
> tools, enthusiasm towards groupwork, is a fast-learner (meaning is willing 
> to read documentations, ask questions, etc), and has good 
> writing/reading/speaking knowledge, and is completely blind? If so, would 
> you be kind as to email me off-list?
> 
> Thank you,
> Parham Doustdar
> 
> P.S.: Please, if this is off-topic, do not shout at me. I tried going to 
> http://news.php.net to find any rules regarding what is and isn't allowed on 
> the list, but found none. This is of course my shortcoming, but I prefer 
> being contacted off-list, rather than being bombarded by messages that read, 
> "read the rules" or "just fucking google it"; believe me, I have tried.
> Thank you once again, and sorry for the long email. 
> 
> 


--- End Message ---
--- Begin Message ---
Hello Nathan, Parham and all,

                            Actually, I confirm that you can code if
                            you're blind. Parham, you have probably
                            seen me on the other accessibility-related
                            lists, so no need to say that I'm blind
                            myself, too). I have a chance to have a
                            wife who is also a programmer and
                            designer, so we make our projects always
                            together and I don't bother myself with
                            the visual things.
By the way, I have looked at Mantis bugtracking system. It seems to me
rather accessible except of the visual CAPTCHA needed during the
signup process. Will think what to do: either to search for another
solution or to modify the code implementing a logical CAPTCHA
developed by ourselves).
Anyway, glad that there are people like you, Nathan, here around.

-- 
With best regards from Ukraine,
Andre
Skype: Francophile; Wlm&MSN: arthaelon @ yandex.ru; Jabber: arthaelon @ 
jabber.org
Yahoo! messenger: andre.polykanine; ICQ: 191749952
Twitter: m_elensule

----- Original message -----
From: Nathan Rixham <nrix...@gmail.com>
To: Parham Doustdar <parha...@gmail.com>
Date: Sunday, March 28, 2010, 9:06:57 PM
Subject: [PHP] Re: Web Design

Hi Parham,

Long term, I would suggest viewing this as an ideal opportunity to move
to strictly back end PHP only; thus negating and skirting around the
visual issues; it is often more fun this way any way!

Jobs such as web service integrations; payment service provider
integrations and similar provide a broad scope and some enjoyable jobs.

Additionally, there are often many good bug fixing and optimisation jobs
that stay well clear of the visual side of output.

I'm sure many will be keen to work with you, I know that I for one do
not view your blindness as a disability when it comes to coding, and I'm
hoping that I'll have the chance to work with you next time a suitable
project arises.

Many Regards,

Nathan

Parham Doustdar wrote:
> Hello there,
> I am a blind, Iranian web developer. I have worked with PHP for about a 
> year, and have created websites for clients who, due to their close bonds 
> with me (E.G. as a friend, relative, etc) has been able to walk through 
> every step of the website design stages with me (E.G. placement of 
> navigation menu, footers, headers, content, etc). I considered myself a 
> beginner, stepping towards the stage marked "intermediate". Today, though, 
> my idea of where I stand made a jarring change.
> 
> I took a look at websites offering freelance jobs, namely 
> http://www.freelancer.com and http://www.project4hire.com. I saw instantly 
> that what a client expects of me is to just "get it right", meaning, don't 
> call them and ask where should everything be, what color it should be, etc, 
> etc. Realizing that I am far behind due to my blindness, and dismayed, I 
> contacted my other blind friends that have done web development. I have been 
> offered one solution that has always worked: find partner(s).
> 
> I thought long and hard on this, trying to find out a place I could reach to 
> other fellow web developers, and in the end of the day, I came up with the 
> PHP.general mailing list that has always helped me in the past.
> 
> So, in the end, here is my question. Is anyone willing to work with an 
> Iranian PHP developer that has experience with cooperative development 
> tools, enthusiasm towards groupwork, is a fast-learner (meaning is willing 
> to read documentations, ask questions, etc), and has good 
> writing/reading/speaking knowledge, and is completely blind? If so, would 
> you be kind as to email me off-list?
> 
> Thank you,
> Parham Doustdar
> 
> P.S.: Please, if this is off-topic, do not shout at me. I tried going to 
> http://news.php.net to find any rules regarding what is and isn't allowed on 
> the list, but found none. This is of course my shortcoming, but I prefer 
> being contacted off-list, rather than being bombarded by messages that read, 
> "read the rules" or "just fucking google it"; believe me, I have tried.
> Thank you once again, and sorry for the long email. 
> 
> 


-- 
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php


--- End Message ---
--- Begin Message ---
Hi
i am writing a web application in php
this webapp primarily focuses on file uploads and downloads
the uploaded files will be saved in a folder which is not in document root
and my query is how will i be able to provide download to such files not
located in document root via php


-- 
Bhaskar Tiwari
GTSE Generalist
Directory Services
Microsoft

____________________________________________________________
All we have to decide is what to do with the time that has been given to us


http://www.ebhakt.com/
http://fytclub.net/
http://ebhakt.info/

--- End Message ---
--- Begin Message ---
Hey all,

What's the best way to filter/convert characters that don't
translate properly from say news stories to HTML?

For example, I have a form that people cut and paste the lead
in paragraph from news stories they want to link to from their
sites to the original. And of course things like long dashes,
double quotes, single quotes, etc, always translate is wacky
unprintables when they are rendered, and the user needs to
edit them to replace them with standard characters.

Is there way to filter this text through a function that will
convert them to web friendly chars?

Thanks,
Skip

--
====================================
Skip Evans
PenguinSites.com, LLC
503 S Baldwin St, #1
Madison WI 53703
608.250.2720
http://penguinsites.com
------------------------------------
Those of you who believe in
telekinesis, raise my hand.
 -- Kurt Vonnegut

--- End Message ---
--- Begin Message ---
On 03/29/2010 05:35 AM, Skip Evans wrote:
Hey all,

What's the best way to filter/convert characters that don't
translate properly from say news stories to HTML?

For example, I have a form that people cut and paste the lead
in paragraph from news stories they want to link to from their
sites to the original. And of course things like long dashes,
double quotes, single quotes, etc, always translate is wacky
unprintables when they are rendered, and the user needs to
edit them to replace them with standard characters.

Is there way to filter this text through a function that will
convert them to web friendly chars?

Thanks,
Skip


PCRE is your best friend for such problems.

--
Nilesh Govindarajan
Site & Server Administrator
www.itech7.com
मेरा भारत महान !
मम भारत: महत्तम भवतु !

--- End Message ---
--- Begin Message ---
Thank you, I will go forward with the way you guys suggested.

See you around



On Sat, Mar 27, 2010 at 4:54 PM, Andre Polykanine <an...@oire.org> wrote:

> Hello Vishal,
>
> Why don't you want to write raw code, without any frameworks?) It's
> good for lots of objectives...
> --
> With best regards from Ukraine,
> Andre
> Skype: Francophile; Wlm&MSN: arthaelon @ yandex.ru; Jabber: arthaelon @
> jabber.org
> Yahoo! messenger: andre.polykanine; ICQ: 191749952
> Twitter: m_elensule
>
> ----- Original message -----
> From: Vishal Rewari <rewari.vis...@gmail.com>
> To: php-gene...@lists.php.net <php-gene...@lists.php.net>
> Date: Saturday, March 27, 2010, 6:28:52 AM
> Subject: [PHP] Please guide in selection of Framework: according to your
> experience
>
> Dear PHP community,
>
> I am vishal, I have recently started development in PHP
>
> I have come across these PHP frameworks:
>
>
>    1. Codeigniter
>   2. Symphony
>   3. CakePHP
>    4. PEAR
>
>
>
> Please guide me which one of them is *good in performance ? available
> functionality ? Easy to use and configure* or the one you would recommend
> according to your experience.
>
>
> My DB is MySQl, or should I stick to native call from PHP?
>
> --
> --
> Warm Regards
>
> Vishal Rewari
> Student - LD College of Engineering,
> AIESEC - Ahmedabad.
>
>
> ---------------------------------------------------
>
> AIESEC - vishal.rew...@aiesec.net
> Gmail     - rewari.vis...@gmail.com
> Skype    - vishal.rewari
> Mobile    - +919428104319
>
> Website : http://rewarivishal.blogspot.com/
>
> -----------------------------------------------------------
> 19
>
>


-- 
-- 
Warm Regards

Vishal Rewari
Student - LD College of Engineering,
AIESEC - Ahmedabad.


---------------------------------------------------

AIESEC - vishal.rew...@aiesec.net
Gmail     - rewari.vis...@gmail.com
Skype    - vishal.rewari
Mobile    - +919428104319

Website : http://rewarivishal.blogspot.com/

-----------------------------------------------------------
19

--- End Message ---

Reply via email to