php-general Digest 19 Jan 2012 14:49:13 -0000 Issue 7656
Topics (messages 316330 through 316336):
Re: How to correctly validate url?
316330 by: Vikash Kumar
316331 by: Tanel Tammik
Re: sessions and expirations and isolations
316332 by: Stuart Dallas
316334 by: Ford, Mike
Re: SOAP
316333 by: Carlos Medina
Re: Bug 51860
316335 by: Matijn Woudt
Re: if http_referer is not reliable then how do we ...
316336 by: Alex Nikitin
Administrivia:
To subscribe to the digest, e-mail:
php-general-digest-subscr...@lists.php.net
To unsubscribe from the digest, e-mail:
php-general-digest-unsubscr...@lists.php.net
To post to the list, e-mail:
php-gene...@lists.php.net
----------------------------------------------------------------------
--- Begin Message ---
Best way is to use filter_var:
http://in2.php.net/manual/en/function.filter-var.php
filter_var('http://example.com', FILTER_VALIDATE_URL)
On 18 January 2012 16:58, Tanel Tammik <keevit...@gmail.com> wrote:
> Does anyone have a preg expression to validate a URL which includes
> special characters like ÜÕÄÖ?
>
> Br,
> Tanel
>
>
> 18.01.2012 12:21, Mokaddim Akm wrote:
>>
>> Sent from a handheld device
>>
>> On 18-Jan-2012, at 4:05 PM, Tanel Tammik<keevit...@gmail.com> wrote:
>>
>>> Hello,
>>>
>>> How do I correctly validate a URL? Right now special local characters
>>> like ÜÕÖÄ etc. are allowed as well...
>>>
>>>
>> The generic URI syntax mandates that new URI schemes that provide for
>> the representation of character data in a URI must, in effect,
>> represent characters from the unreserved set without translation, and
>> **should convert all other characters to bytes according to UTF-8, and
>> then percent-encode those values**. This requirement was introduced in
>> January 2005 with the publication of RFC 3986. URI schemes introduced
>> before this date are not affected.[1]
>>
>>
>> [1]
>> http://en.wikipedia.org/wiki/Percent-encoding
>>
>>> Br,
>>> Tanel
>>>
>>> --
>>> PHP General Mailing List (http://www.php.net/)
>>> To unsubscribe, visit: http://www.php.net/unsub.php
>>>
>>>
>
> --
> PHP General Mailing List (http://www.php.net/)
> To unsubscribe, visit: http://www.php.net/unsub.php
>
>
--- End Message ---
--- Begin Message ---
It doesn't work. Please see the results:
var_dump(filter_var('http://example.com', FILTER_VALIDATE_URL));
var_dump(filter_var('http://example', FILTER_VALIDATE_URL));
var_dump(filter_var('http://exämple.com', FILTER_VALIDATE_URL));
http://example should be false
http://exämple.com should be true -- please note the ä (a with dots)!
Br,
Tanel
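One workaround (filter_var() alone won't do this) is to normalize the URL
first: convert the host to punycode with idn_to_ascii() from the intl
extension, and percent-encode non-ASCII path characters per RFC 3986, then
validate the result. A sketch, assuming intl is available, with a naive
"host must contain a dot" check to reject http://example:

<?php
function validate_idn_url($url)
{
    $parts = parse_url($url);
    if ($parts === false || !isset($parts['scheme'], $parts['host'])) {
        return false;
    }
    // 'exämple.com' -> 'xn--exmple-cua.com' (requires the intl extension)
    $host = idn_to_ascii($parts['host']);
    if ($host === false || strpos($host, '.') === false) {
        return false; // naive: reject dotless hosts like "example"
    }
    // Percent-encode each path segment, keeping the '/' separators
    $path = isset($parts['path'])
        ? implode('/', array_map('rawurlencode', explode('/', $parts['path'])))
        : '';
    return filter_var($parts['scheme'] . '://' . $host . $path,
                      FILTER_VALIDATE_URL) !== false;
}

var_dump(validate_idn_url('http://exämple.com')); // bool(true)
var_dump(validate_idn_url('http://example'));     // bool(false)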
18.01.2012 13:36, Vikash Kumar wrote:
Best way is to use filter_var:
http://in2.php.net/manual/en/function.filter-var.php
filter_var('http://example.com', FILTER_VALIDATE_URL)
On 18 January 2012 16:58, Tanel Tammik<keevit...@gmail.com> wrote:
Does anyone have a preg expression to validate a URL which includes
special characters like ÜÕÄÖ?
Br,
Tanel
18.01.2012 12:21, Mokaddim Akm wrote:
Sent from a handheld device
On 18-Jan-2012, at 4:05 PM, Tanel Tammik<keevit...@gmail.com> wrote:
Hello,
How do I correctly validate a URL? Right now special local characters like
ÜÕÖÄ etc. are allowed as well...
The generic URI syntax mandates that new URI schemes that provide for
the representation of character data in a URI must, in effect,
represent characters from the unreserved set without translation, and
**should convert all other characters to bytes according to UTF-8, and
then percent-encode those values**. This requirement was introduced in
January 2005 with the publication of RFC 3986. URI schemes introduced
before this date are not affected.[1]
[1]
http://en.wikipedia.org/wiki/Percent-encoding
Br,
Tanel
--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php
--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php
--- End Message ---
--- Begin Message ---
On 17 Jan 2012, at 23:17, Haluk Karamete wrote:
> Back to this session expiration...
>
> that old quote said...
> <begin>
> The default behaviour for sessions is to keep a session open
> indefinitely and only to expire a session when the browser is closed.
> This behaviour can be changed in the php.ini file by altering the
> line:
>
> session.cookie_lifetime = 0
> If you wanted the session to finish in 5 minutes you would set this to:
> session.cookie_lifetime = 300.
> <end>
>
> Reflecting on this a little more, I got interested in the part that
> says "The default behaviour for sessions is to keep a session open
> indefinitely and only to expire a session when the browser is closed."
>
> How would the server know that a browser is closed? No browser
> sends such data to a server.
>
> If you re-open your browser, sure, you will get asked to re-login
> (because that session id cookie is gone), but that does not mean the old
> session data has been erased from the server. How could it? The only
> way for that to happen is to run session_destroy() programmatically, but
> for that your users have to click on a link. Certainly, closing a
> browser won't cause that!
>
> This brings us to the following question:
> WHEN DOES THE SERVER KNOW THAT A USER IS REALLY GONE OR HAS CLOSED THE BROWSER?
>
> I'm afraid session.cookie_lifetime = 0 keeps all session data (that
> is, past and present) in server memory until a server restart/stop
> takes place. Correct me if I'm wrong.
You are wrong. What you need to understand is that the cleanup of the data is
controlled by a completely separate system from the one that gives requests
access to it. The session.gc_maxlifetime setting controls how long it must
be since the session data was saved before it is considered for cleanup. The
description above is correct in that the default behaviour is for the session
cookie to die with the browser session, but that has absolutely no effect on
how long the data will be retained on the server.
If you want a full description of how the session cleanup logic works I'm happy
to provide it, but you should be able to work it out by looking at the
descriptions of the gc_probability, gc_divisor and gc_maxlifetime settings on
this page:
http://www.php.net/manual/en/session.configuration.php#ini.session.gc-probability
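In ini terms, the two systems look like this (the values below are the stock
defaults, set explicitly via ini_set() purely for illustration):

<?php
// Client side: how long the browser keeps the session cookie.
// 0 means "until the browser is closed" -- this is the part that
// dies with the browser session.
ini_set('session.cookie_lifetime', '0');

// Server side: data last saved more than gc_maxlifetime seconds ago
// becomes eligible for garbage collection.
ini_set('session.gc_maxlifetime', '1440'); // 24 minutes

// Each session_start() has a gc_probability/gc_divisor chance
// (1/100 = 1% here) of triggering a GC run over the stale data.
ini_set('session.gc_probability', '1');
ini_set('session.gc_divisor', '100');

session_start();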
-Stuart
--
Stuart Dallas
3ft9 Ltd
http://3ft9.com/
--- End Message ---
--- Begin Message ---
> -----Original Message-----
> From: Stuart Dallas [mailto:stu...@3ft9.com]
> Sent: 18 January 2012 12:02
>
> On 17 Jan 2012, at 23:17, Haluk Karamete wrote:
>
> > I'm afraid session.cookie_lifetime = 0 keeps all session data (that
> > is, past and present) in server memory until a server restart/stop
> > takes place. Correct me if I'm wrong.
>
> You are wrong. What you need to understand is that the cleanup of
> the data is controlled by a completely separate system from the one
> that gives requests access to it. The session.gc_maxlifetime
> setting controls how long it must be since the session data was
> saved before it is considered for cleanup. The description above is
> correct in that the default behaviour is for the session cookie to
> die with the browser session, but that has absolutely no effect on
> how long the data will be retained on the server.
And you are also possibly wrong that session information is kept in
system memory, as the default is for it to be serialized and saved in
a regular file on disk. There are other options (database, shared memory,
...), but disk files are the default.
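You can see this for yourself with the default "files" handler (the paths
below will vary by system):

<?php
// A sketch assuming the default "files" save handler.
session_start();
$_SESSION['example'] = 'hello';
session_write_close(); // serialize and flush to disk now

echo ini_get('session.save_handler'), "\n"; // "files" by default
echo session_save_path(), "\n";             // e.g. /var/lib/php/sessions

// The data itself sits in a plain file named sess_<session id>:
echo session_save_path() . '/sess_' . session_id(), "\n";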
Cheers!
Mike
--
Mike Ford,
Electronic Information Developer, Libraries and Learning Innovation,
Portland PD507, City Campus, Leeds Metropolitan University,
Portland Way, LEEDS, LS1 3HE, United Kingdom
E: m.f...@leedsmet.ac.uk T: +44 113 812 4730
To view the terms under which this email is distributed, please go to
http://disclaimer.leedsmet.ac.uk/email.htm
--- End Message ---
--- Begin Message ---
On 17.01.2012 11:55, DPRJ Sistemas (OK Cosméticos) wrote:
> Hello!
>
> I am looking for some help on a Web Services (SOAP) client.
>
> Is there anyone here who has already worked with such a client?
>
> Thank you
>
> Deleo
Yes, me.
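A minimal starting point with PHP's built-in SoapClient looks something like
this (the WSDL URL and method name are hypothetical placeholders for whatever
the service actually defines):

<?php
$client = new SoapClient('http://example.com/service?wsdl', array(
    'exceptions' => true, // throw SoapFault on transport/fault errors
    'trace'      => true, // keep last request/response for debugging
));

try {
    // Operations defined in the WSDL can be called as methods.
    $result = $client->GetQuote(array('symbol' => 'PHP'));
    var_dump($result);
} catch (SoapFault $e) {
    echo 'SOAP error: ', $e->getMessage(), "\n";
    echo $client->__getLastRequest(); // the raw XML that was sent
}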
Regards
Carlos Medina
--- End Message ---
--- Begin Message ---
On Wed, Jan 18, 2012 at 1:06 AM, Christian Grobmeier
<grobme...@gmail.com> wrote:
> Hello folks,
>
> any chance this one is ever fixed?
> https://bugs.php.net/bug.php?id=51860
>
> I am a customer of 1&1. They told me they will not upgrade until this
> one is fixed. Imagine that there are thousands of customers running
> PHP 5.2.17 just because of this issue. Unfortunately I am not able to
> fix this myself - my C-fu is not good enough.
>
> Because big hosters like 1&1 do not upgrade, we (the Apache log4php
> team) cannot make use of newer PHP versions either. We have decided to
> use the new features once we see the old hosters move on. The same is
> true for other projects of mine. If we upgraded, we would lose all the
> people on the big hosting companies.
>
> I think it is in the interest of the PHP dev team to see people moving
> on. Otherwise, someday software like phpBB, WordPress et al. will not
> work on many hosts.
>
> I'm not sure if there have been any plans for this issue - I am very
> much interested in it and would like to know if it is already on the
> schedule.
>
> Thanks,
> Christian
This list is for users having problems with PHP scripts etc. If you
want to contact the devs, you're on the wrong mailing list (there's a
special one for that).
Matijn
--- End Message ---
--- Begin Message ---
Captchas can't hold off any decently smart robots; anyone doing their
research can find at least three tools that will defeat various captchas.
PWNtcha is one example, and Dan Kaminsky did a talk at Black Hat and
DEF CON 16 on pwning audio captchas (and a lot of even good captchas will
offer audio as an option). Bottom line: captchas don't really hold off
determined robots.
As far as the referrer goes: yes, it can be easily spoofed; no, there is
no real built-in way to test it; and yes, the script can still be made
pretty secure.
But here are two ways I can think of to help prevent bots from taking
over your email script (ideally, use them together):
Tokenize your URL: build a token based on the HTTP referrer amongst
other things, just make sure you use something that identifies a
normal user consistently, and only allow each token, say, 5 emails a
day. When the referrer and token don't match, don't send an email. Use
a strong hash algorithm like SHA to generate the token, salt it, and
add something at every level. For example, use the referrer for the
user piece, plus some random string of 32 characters hard-coded into
your script, and, if you touch a DB, something you pull from your db
when you validate the email (not the email itself, but something
randomly generated when that email was added). This way, even with two
of the three pieces of information, you still can't reverse the hashes.
Note: do not use a random value at check time; you want a consistent
hash that you can re-check.
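A sketch of the token idea (APP_SECRET and $perUserSecretFromDb are
hypothetical placeholders; the latter stands for the random value generated
when the email address was added, pulled from your DB):

<?php
define('APP_SECRET', 'hard-coded-random-32-character-str');

function make_token($referer, $perUserSecret)
{
    // Consistent inputs give a consistent token, so it can be
    // re-generated and checked on every request.
    return hash_hmac('sha256', $referer . '|' . $perUserSecret, APP_SECRET);
}

$referer  = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
$expected = make_token($referer, $perUserSecretFromDb);

if (!isset($_GET['token']) || $_GET['token'] !== $expected) {
    die('Referrer and token do not match; not sending email.');
}
// ...also count sends per token here to enforce the daily cap.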
Set a timeout for your script: pause your server-side script for 10
seconds before sending an email, and pop back a confirmation that must
be answered before the email is actually sent (use a session to make
sure they are not bypassing that bit). This forces any script to
confirm its action, meaning it has to run for at least 10 seconds per
email, so it can send at most about six emails a minute, and for
anyone who wants to do mass spamming with your script, that's
unacceptable. By the way, don't implement this delay in JS; make an
ajax request that actually needs data pulled from the server to
continue (like a secret random password stored in the session). A
simple client-side time-out won't solve the issue.
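A sketch of that flow, keeping both the timestamp and the secret in the
session (the key names and the 10-second window are illustrative):

<?php
session_start();

if (!isset($_POST['confirm_key'])) {
    // Step 1: record when the send was requested and generate a
    // secret the client-side ajax call must fetch and echo back.
    $_SESSION['mail_requested_at'] = time();
    $_SESSION['mail_confirm_key']  = md5(uniqid(mt_rand(), true));
    echo $_SESSION['mail_confirm_key']; // served to the ajax request
    exit;
}

// Step 2: send only if 10+ seconds have passed AND the echoed secret
// matches -- a bare client-side timer can fake neither condition.
if (isset($_SESSION['mail_requested_at'], $_SESSION['mail_confirm_key'])
    && time() - $_SESSION['mail_requested_at'] >= 10
    && $_POST['confirm_key'] === $_SESSION['mail_confirm_key']) {
    unset($_SESSION['mail_requested_at'], $_SESSION['mail_confirm_key']);
    // mail($to, $subject, $body); // actually send here
    echo 'sent';
} else {
    echo 'not yet';
}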
Both used together should provide for a good way to stop any useful
spamming done with your script.
~ Alex
--
The trouble with programmers is that you can never tell what a
programmer is doing until it’s too late. ~Seymour Cray
--- End Message ---