php-general Digest 19 Sep 2013 11:35:54 -0000 Issue 8367

Topics (messages 322083 through 322092):

Re: assign database result to iinput text box
        322083 by: Maciek Sokolewicz
        322091 by: ITN Network

Re: high traffic websites
        322084 by: Negin Nickparsa
        322086 by: Sebastian Krebs
        322087 by: Stuart Dallas
        322088 by: Negin Nickparsa
        322089 by: Camilo Sperberg
        322090 by: Sebastian Krebs

No MIME-Type in "imap_fetch_overview()"
        322085 by: Domain nikha.org

Apache's PHP handlers
        322092 by: Arno Kuhl

Administrivia:

To subscribe to the digest, e-mail:
        php-general-digest-subscr...@lists.php.net

To unsubscribe from the digest, e-mail:
        php-general-digest-unsubscr...@lists.php.net

To post to the list, e-mail:
        php-gene...@lists.php.net


----------------------------------------------------------------------
--- Begin Message ---
On 18-9-2013 7:33, iccsi wrote:
I have the following HTML code to show my input text box and PHP to connect
to the server and select results from the database server.
I would like to know how I can use PHP to assign the value to my input
text.
Your help and information is greatly appreciated,

Regards,

Hi iccsi,

First, look at http://www.php.net/mysql_fetch_array - the example there should help you.

Once you have the value you're looking for in a variable, you simply insert it into the value attribute of your input element, i.e. you should have something like <input type="text" name="a" id="b" value="the variable containing your data">

Also please note that the mysql extension is deprecated; you are advised to switch to either PDO_MySQL or mysqli instead.
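
A minimal sketch of that idea using mysqli rather than the deprecated extension (host, credentials, table and column names below are placeholders, not taken from your question):

<?php
// Connect with mysqli; host, credentials and database name are placeholders.
$db = new mysqli('localhost', 'user', 'password', 'mydb');
if ($db->connect_errno) {
    die('Unable to connect to MySQL: ' . $db->connect_error);
}

// Fetch the value that should appear in the text box.
$res = $db->query("SELECT note FROM mytable LIMIT 1");
$row = $res ? $res->fetch_assoc() : array('note' => '');
?>
<input type="text" name="a" id="b"
       value="<?php echo htmlspecialchars($row['note']); ?>" />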

- Tul

--- End Message ---
--- Begin Message ---
<?php
$username = "root";
$password = "myPassword";
$hostname = "localhost";

//connection to the database
$dbhandle = mysql_connect($hostname, $username, $password)  or die("Unable
to connect to MySQL");
echo "Connected to MySQL<br>";

//select a database to work with
$selected = mysql_select_db("iccsimd", $dbhandle)  or die("Could not select
server");

//execute the SQL query and return records
$result = mysql_fetch_assoc(mysql_query("SELECT invid, invdate, note,
amount FROM invheader"));
?>

<INPUT type="text" name="Mytxt" id="MytextID" value="<?php echo
htmlspecialchars($result['note']); ?>" />

As Maciek mentioned, if this is a new project use PDO or MySQLi instead;
otherwise use a PDO wrapper for the MySQL functions.
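
For example, a rough PDO equivalent of the snippet above (a sketch only; the connection details mirror the placeholders used earlier, and error handling is kept minimal):

<?php
// PDO connection; credentials and database name are the same placeholders as above.
$pdo = new PDO('mysql:host=localhost;dbname=iccsimd', 'root', 'myPassword');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Run the query and fetch the first row as an associative array.
$stmt = $pdo->query('SELECT invid, invdate, note, amount FROM invheader');
$result = $stmt->fetch(PDO::FETCH_ASSOC);
?>

<INPUT type="text" name="Mytxt" id="MytextID" value="<?php echo
htmlspecialchars($result['note']); ?>" />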



--- End Message ---
--- Begin Message ---
Thank you Camilo

To be more detailed: suppose the website has 80,000 users, each page takes
200 ms to render, and you get a thousand hits a second, so we want to reduce
the rendering time. Is there any way to reduce it?

Another thing: suppose users upload files simultaneously, and the videos are
hosted on the website itself rather than on another server like YouTube, so
the streams really consume the bandwidth.

Also, it is troublesome to take backups: when backing up bulk data you run
into locking problems.



Sincerely
Negin Nickparsa


On Wed, Sep 18, 2013 at 12:50 PM, Camilo Sperberg <unrea...@gmail.com>wrote:

>
> On Sep 18, 2013, at 09:38, Negin Nickparsa <nickpa...@gmail.com> wrote:
>
> > Thank you Sebastian.. actually I will already have one if I qualify for
> > the job. Yes, and I may fail to handle it; that's why I asked for
> > guidance. I wanted some tidbits to start with. I have searched through
> > YSlow, HTTrack and others. I have searched through the PHP list in my
> > email too before asking this question. It is kind of beneficial for all
> > people and has not been asked directly.
> >
> >
> > Sincerely
> > Negin Nickparsa
> >
> >
> > On Wed, Sep 18, 2013 at 10:45 AM, Sebastian Krebs <krebs....@gmail.com
> >wrote:
> >
> >>
> >>
> >>
> >> 2013/9/18 Negin Nickparsa <nickpa...@gmail.com>
> >>
> >>> In general, what are the best ways to handle high traffic websites?
> >>>
> >>> VPS(clouds)?
> >>> web analyzers?
> >>> dedicated servers?
> >>> distributed memory cache?
> >>>
> >>
> >> Yes :)
> >>
> >> But seriously: that is a topic most of us spent much time getting into.
> >> You can explain it with a bunch of buzzwords. Additionally, how do you
> >> define "high traffic websites"? Do you already _have_ such a site? Or do
> >> you _want_ it? It's important, because I've seen it far too often that
> >> projects spent too much effort on their "high traffic infrastructure"
> >> and in the end it wasn't that high traffic ;) I won't say that you
> >> cannot be successful, but you should start with an effort you can handle.
> >>
> >> Regards,
> >> Sebastian
> >>
> >>
> >>>
> >>>
> >>> Sincerely
> >>> Negin Nickparsa
> >>>
> >>
> >>
> >>
> >> --
> >> github.com/KingCrunch
> >>
>
> Your question is way too vague to be answered properly... My best guess
> would be that it depends severely on the type of website you have and how's
> the current implementation being well... implemented.
>
> Simply said: what works for Facebook may/will not work for LinkedIn,
> Twitter or Google, mainly because the type of search differs A LOT:
> Facebook is about relations between people, Twitter is about small pieces
> of data not heavily interconnected with each other, while Google is all
> about links and all types of content: from little pieces of information
> to the whole of Wikipedia.
>
> You could start by studying how Varnish and redis/memcached work, and you
> could study how proxies work (nginx et al), CDNs and that kind of stuff,
> but if you want more specific answers, you had better ask a specific
> question.
>
> In the PHP area, an opcode cache does the job very well and can accelerate
> page load by several orders of magnitude; I recommend OPcache, which is
> already included in PHP 5.5.
>
> Greetings.
>
>

--- End Message ---
--- Begin Message ---
2013/9/18 Negin Nickparsa <nickpa...@gmail.com>

> Thank you Camilo
>
> To be more detailed: suppose the website has 80,000 users, each page takes
> 200 ms to render, and you get a thousand hits a second, so we want to
> reduce the rendering time. Is there any way to reduce it?
>

Read about frontend/proxy caching (Nginx, Varnish) and ESI/SSI includes
(also Nginx and Varnish ;)). The idea is simply: if something doesn't have
to be processed in the backend on every request, don't process it in the
backend on every request.

But maybe you mixed up some terms, because the rendering time is the time
consumed by the renderer within the browser (HTML and CSS). You can improve
that by improving your HTML/CSS :)


I am a little bit curious: Do you _really_ have 1000 requests/second, or do
you just throw some numbers in? ;)


>
> Another thing: suppose users upload files simultaneously, and the videos
> are hosted on the website itself rather than on another server like
> YouTube, so the streams really consume the bandwidth.
>

Well, if there are streams, there are streams. I cannot imagine a way
someone could stream a video without downloading it.


>
> Also, it is troublesome to take backups: when backing up bulk data you
> run into locking problems.
>

Even at times when there is not that much traffic? An automatic backup at
3:00 in the morning, for example?




-- 
github.com/KingCrunch

--- End Message ---
--- Begin Message ---
On 18 Sep 2013, at 12:50, Negin Nickparsa <nickpa...@gmail.com> wrote:

> To be more detailed: suppose the website has 80,000 users, each page takes
> 200 ms to render, and you get a thousand hits a second, so we want to
> reduce the rendering time. Is there any way to reduce it?
> 
> Another thing: suppose users upload files simultaneously, and the videos
> are hosted on the website itself rather than on another server like
> YouTube, so the streams really consume the bandwidth.
> 
> Also, it is troublesome to take backups: when backing up bulk data you run
> into locking problems.

Your question is impossible to answer efficiently without profiling. You need 
to know what PHP is doing in those 200ms before you can target your 
optimisations for maximum effect.

I use Xdebug to produce trace files. From there I can see exactly what is
taking the most time, and then I can look into how to make that thing
faster. When I'm certain there is no faster way to do what it's doing, I
move on to the next biggest thing.
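
For reference, a minimal sketch of tracing a single request this way (function names are from Xdebug 2.x, current at the time of writing; the trace path and the included script are placeholders):

<?php
// Start a function trace for this request; Xdebug appends ".xt" to the name.
xdebug_start_trace('/tmp/trace-' . date('Ymd-His'));

require 'index.php';   // the front controller / page being profiled

// Stop tracing. The trace file lists every call with its timing, so the
// slowest functions are easy to spot.
xdebug_stop_trace();

(Setting xdebug.auto_trace and xdebug.trace_output_dir in php.ini achieves the same without touching the code.)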

Of course there are generic things you should do such as adding an opcode cache 
and looking at your server setup, but targeted optimisation is far better than 
trying generic stuff.

-Stuart

-- 
Stuart Dallas
3ft9 Ltd
http://3ft9.com/

--- End Message ---
--- Begin Message ---
I am a little bit curious: Do you _really_ have 1000 requests/second, or do
you just throw some numbers in? ;)

Sebastian, supposedly_asking_to_get_some_pre_evaluation :)

Even at times when there is not that much traffic? An automatic backup at
3:00 in the morning, for example?

3:00 in the morning in one country is 9 AM in another country and 3 PM in
yet another.

By the way, thank you so much, guys; I wanted tidbits and you gave me more.

Stuart, I recall your replies in other situations, and you have always
helped me improve. The list is happy to have you.

Sincerely
Negin Nickparsa



--- End Message ---
--- Begin Message ---
On Sep 18, 2013, at 14:26, Haluk Karamete <halukkaram...@gmail.com> wrote:

>> I recommend OPCache, which is already included in PHP 5.5.
> 
> Camilo,
> I'm just curious about the disadvantageous aspects of OPcache. 
> 
> My logic says there must be some issues with it, otherwise it would have
> come already enabled.
> 
> Sent from iPhone 


The original RFC states: 

https://wiki.php.net/rfc/optimizerplus
The integration proposed for PHP 5.5.0 is mostly 'soft' integration. That means 
that there'll be no tight coupling between Optimizer+ and PHP; Those who wish 
to use another opcode cache will be able to do so, by not loading Optimizer+ 
and loading another opcode cache instead. As per the Suggested Roadmap above, 
we might want to review this decision in the future; There might be room for 
further performance or functionality gains from tighter integration; None are 
known at this point, and they're beyond the scope of this RFC.

So that's why OPCache isn't enabled by default in PHP 5.5
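
Enabling it is just a matter of php.ini settings; a minimal sketch (the extension path and the numbers are illustrative starting values, not tuned recommendations):

; php.ini - load and enable the OPcache extension
; (the file name/path of the extension varies per platform and build)
zend_extension=opcache.so

opcache.enable=1
opcache.memory_consumption=128      ; MB of shared memory for cached bytecode
opcache.max_accelerated_files=4000  ; upper bound on the number of cached scripts
opcache.revalidate_freq=60          ; seconds between checks for changed files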

Greetings.


--- End Message ---
--- Begin Message ---
2013/9/18 Camilo Sperberg <unrea...@gmail.com>

> So that's why OPCache isn't enabled by default in PHP 5.5
>


Also worth mentioning that it is the first release with an opcode cache
integrated. Giving the others some releases to get used to it sounds useful
:)




-- 
github.com/KingCrunch

--- End Message ---
--- Begin Message ---
Hello all, 

I'm posting this here because the bug report system of "php.net" is not the
right place for my problem. It's not a bug, but a wish - and I found no
"wishlist" option there at all.

I'm running my own webmail client, written in PHP. It is stable, fast and
pretty, showing the full power of PHP's IMAP functions.

Of course it presents paginated content lists for every mailbox the user may
open. These lists tell him some useful things about every mail listed:
sender, date, subject, size and (possibly) flags.

All these things are nicely delivered by the function "imap_fetch_overview()".
The same could be done by calling "imap_headerinfo()" for every single mail,
but "fetch_overview" seems to be faster, because it handles the whole batch
at once.

BUT NONE OF THEM returns any information about the MIME-Type of the mail!

Since the user of my webmail client has the intrinsic, natural-born and
general human right to KNOW whether some mail in his mailbox has attachments
or not, I'm forced to do very ugly things. For every (!) mail actually
listed, my script additionally calls "imap_fetchbody($connect, $msg_no, 0)" -
where "$connect" holds the result of "imap_open()".

That gives me the mail header; the script reads the line starting with
"Content-Type:" and returns its content. Evaluating this against "mixed" or
"alternative", we finally have what we want: this mail has attachments! Or it
is written in HTML, which is even more than we wanted!

Works fine, but it is ugly. First "fetch_overview" parses all the mail
headers, then they are fetched again to be parsed for the MIME type. I could
just omit "fetch_overview" and read the headers by my own means, which would
be faster, but then I lose the size information, which is NOT (and cannot be)
part of the mail header!

If I want to have both, size and MIME type - and I WANT to have both,
respecting the intrinsic, natural-born and general human rights of my user -
I must call both, "overview" and "fetchbody".

My question is this: Is there a better solution? Or does someone know someone
among the PHP developers who could suggest an improvement to the functions
"imap_fetch_overview()" and "imap_headerinfo()"? Please, please, add the
MIME type to your fantastic object collections! BTW: It's really easy - just
read the "Content-Type" line! Sorry...

Hope, somebody has an idea,
my regards,

Niklaus

--- End Message ---
--- Begin Message ---
For the past week I've been trying to get to the bottom of an exploit, but
googling hasn't been much help so far, nor has my service provider.
Basically a file was uploaded with the filename xxx.php.pgif which contained
nasty PHP code, and then the file was run directly from a browser. The
upload script used to upload this file checks that the upload filename
doesn't have a .php extension, which in this case it doesn't, so it let it
through. I was under the impression Apache would serve any file with an
extension not listed in its handlers directly back to the browser, but
instead it sent it to the PHP handler. Is this normal behaviour or is there
a problem with my service provider's Apache configuration? Trying this on my
localhost returns the file contents directly to the browser as expected and
doesn't run the PHP code.
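
One commonly cited cause (an assumption on my part, not confirmed for this particular server) is mod_mime's handling of multiple extensions: with a configuration like "AddHandler application/x-httpd-php .php", every dot-separated part of the filename counts as an extension, so xxx.php.pgif can still be routed to the PHP handler. A frequently suggested mitigation is to restrict the handler to names that end in .php, sketched here:

# mod_mime treats each dot-separated part of a filename as an extension, so with
#   AddHandler application/x-httpd-php .php
# a file named xxx.php.pgif may still be handed to the PHP module. Limiting the
# handler to names that end in ".php" avoids that:
<FilesMatch "\.php$">
    SetHandler application/x-httpd-php
</FilesMatch>

(The exact handler name depends on how PHP is hooked into Apache; check the provider's configuration before relying on this.)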

 

Cheers

Arno


--- End Message ---
