Looks like I was wrong and there is no problem with pg8000.
On Saturday, 1 June 2013 09:09:54 UTC-5, Mariano Reingart wrote:
>
> I don't get errors nor any difference:
>
> db = DAL('postgres:pg8000://reingart:1234@localhost/pg8000',
>          pool_size=1, check_reserved=['all'])
>
>
>
> db.define_table('thing',Field('name'))
>
> def test1():
>     value = r"\'"
>     id = db.thing.insert(name=value)
>     value = db(db.thing.id==id).select().first().name
>     return dict(id=id, value=value, length=len(value),
>                 adapter=db._adapter.__version__)
>
> def test2():
>     id = db.thing.insert(name='%')
>     value = db(db.thing.id==id).select().first().name
>     return dict(id=id, value=value, length=len(value),
>                 adapter=db._adapter.__version__)
>
> def test3():
>     id = db.thing.insert(name='%%')
>     value = db(db.thing.id==id).select().first().name
>     return dict(id=id, value=value, length=len(value),
>                 adapter=db._adapter.__version__)
>
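> The round-trip these tests exercise can also be checked outside web2py
> with plain DB-API parameter binding. A minimal sketch, using the stdlib
> sqlite3 driver as a stand-in (an assumption: it uses the qmark paramstyle
> rather than pg8000's pyformat, but any driver that binds parameters
> should store these values verbatim):

```python
import sqlite3

# In-memory database standing in for postgres; the `thing` table mirrors
# the tests above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE thing (id INTEGER PRIMARY KEY, name TEXT)")

def roundtrip(value):
    # Parameter binding (the ? placeholder) leaves escaping to the driver,
    # so backslashes and percent signs are stored and read back unchanged.
    cur = conn.execute("INSERT INTO thing (name) VALUES (?)", (value,))
    row = conn.execute("SELECT name FROM thing WHERE id = ?",
                       (cur.lastrowid,)).fetchone()
    return row[0]

for v in (r"\'", "%", "%%"):
    stored = roundtrip(v)
    print(repr(stored), len(stored))  # each value comes back unchanged
```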
>
> Test1
>
> adapter:gluon.contrib.pg8000.dbapi 1.10
> id:14L
> length:2
> value:\'
>
> Test2
>
> adapter:gluon.contrib.pg8000.dbapi 1.10
> id:15L
> length:1
> value:%
>
> Test3
>
> adapter:gluon.contrib.pg8000.dbapi 1.10
> id:16L
> length:2
> value:%%
>
> Am I missing something?
>
> Regards
>
> Mariano Reingart
> http://www.sistemasagiles.com.ar
> http://reingart.blogspot.com
>
>
> On Sat, Jun 1, 2013 at 1:39 AM, Massimo Di Pierro
> <[email protected]> wrote:
> > Can you try this? With postgres and pg8000
> >
> > db.define_table('thing',Field('name'))
> > value = r"\'"
> > db.thing.insert(name=value)
> >
> > It should insert the thing but I suspect you will get an error
> >
> > You can also try:
> >
> > id = db.thing.insert(name='%')
> > print db.thing[id].name
> >
> > do you get '%' or '%%'?
> >
> > Massimo
> >
> >
> >
> >
> > On Thursday, 30 May 2013 17:05:30 UTC-5, Mariano Reingart wrote:
> >>
> >> Hi Massimo, do you have a link to the SQL injection issue?
> >>
> >> I couldn't reproduce it, nor the communication problem (there was an
> >> out-of-sync statement issue under high load, IIRC).
> >>
> >> BTW, I was given access to the pg8000 official repository (now it is
> >> being maintained again), so I'm planning to merge my version with the
> >> latest updates (including some performance enhancements).
> >>
> >> Joe: I attended the pypy tutorial at PyCon US 2012, seeking to speed
> >> up pg8000, without luck. Not only was there no improvement, I also got
> >> stuck on a pypy feature unsupported on Windows. Maybe pypy has better
> >> support now, and maybe the new enhancements in pg8000 suit its JIT
> >> compiler better.
> >>
> >> If you just have to upload a CSV file, look at the COPY statement; it
> >> is unbeatable.
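> >> A hedged sketch of such a COPY-based load through psycopg2's
> >> copy_expert (the table name, column list, and CSV options below are
> >> illustrative assumptions, not taken from this thread):

```python
def copy_sql(table, columns):
    # Build the COPY statement; FORMAT csv with HEADER skips the first row.
    return "COPY %s (%s) FROM STDIN WITH (FORMAT csv, HEADER true)" % (
        table, ", ".join(columns))

def bulk_load(conn, path, table, columns):
    # conn is assumed to be an open psycopg2 connection. COPY streams the
    # whole file to the server in one statement, far faster than
    # row-by-row INSERTs.
    with open(path) as f:
        cur = conn.cursor()
        cur.copy_expert(copy_sql(table, columns), f)
    conn.commit()

print(copy_sql("thing", ["name"]))
```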
> >>
> >> Best regards,
> >>
> >> Mariano Reingart
> >> http://www.sistemasagiles.com.ar
> >> http://reingart.blogspot.com
> >>
> >>
> >> On Thu, May 30, 2013 at 6:33 PM, Massimo Di Pierro
> >> <[email protected]> wrote:
> >> > Mind you, I have a security concern about pg8000: it is vulnerable
> >> > to SQL injection in web2py.
> >> >
> >> >
> >> > On Thursday, 30 May 2013 14:41:55 UTC-5, Joe Barnhart wrote:
> >> >>
> >> >> I have just tried both drivers -- but in an apples-and-oranges
> >> >> comparison. I used pg8000 with pypy and web2py because it is pure
> >> >> Python and can be used with pypy. I used psycopg2 with python 2.7
> >> >> on the same database and application.
> >> >>
> >> >> My application begins with a bulk-load of a CSV file. The file has
> >> >> about
> >> >> 450,000 records of about 10 fields each. Inserting the file using
> >> >> psycopg2
> >> >> and python 2.7 took about 4-5 minutes on a quad-core i7 iMac. The
> >> >> memory
> >> >> used was about 20M for postgres (largest thread) and about an equal
> >> >> amount
> >> >> for python. The task was handled by the web2py scheduler.
> >> >>
> >> >> The pypy-pg8000 version of the file load took almost an hour, but
> >> >> that is deceptive. The problem is that it overwhelmed the 12GB of
> >> >> memory in the computer. Both the pypy task and the postgres task
> >> >> ran amok with memory requirements. The postgres task took >8GB and
> >> >> forced the computer into swapping, killing the response time.
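> >> >> When COPY is not an option, one generic way to keep memory bounded
> >> >> during a load like this is to insert in fixed-size batches and
> >> >> commit per batch. A sketch under stated assumptions (the `thing`
> >> >> table, the %s placeholder, and the batch size are illustrative,
> >> >> not from my actual code):

```python
import csv
import itertools

def batches(rows, size):
    # Yield lists of at most `size` rows without materializing the input.
    it = iter(rows)
    while True:
        batch = list(itertools.islice(it, size))
        if not batch:
            return
        yield batch

def load_csv(conn, path, size=1000):
    # conn is any DB-API connection using the pyformat/format paramstyle
    # (e.g. psycopg2 or pg8000). Committing per batch keeps both the
    # client's working set and the server's transaction state small.
    with open(path) as f:
        for batch in batches(csv.reader(f), size):
            conn.cursor().executemany(
                "INSERT INTO thing (name) VALUES (%s)", batch)
            conn.commit()
```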
> >> >>
> >> >> Pypy is known for being somewhat of a memory hog (I was trying
> >> >> version 2.0.2). It worked darned well in web2py, with this being
> >> >> the only problem I encountered. Since my code relies heavily on
> >> >> modules, the speedup using pypy was noticeable. Some of my longer
> >> >> tasks include creating pdf files, and these took about 1/3 to 1/5
> >> >> the time under pypy compared to cpython 2.7.1.
> >> >>
> >> >> I know this is not an accurate comparison (because of the pypy
> >> >> component), but the runaway memory use of postgres under pg8000
> >> >> concerned me, so I thought I'd mention it.
> >> >>
> >> >> -- Joe B.
> >> >>
> >> >> On Wednesday, May 1, 2013 4:59:26 PM UTC-7, Marco Tulio wrote:
> >> >>>
> >> >>> Are there any advantages of one over the other, or are they
> >> >>> basically the same thing? I'm using psycopg2 atm.
> >> >>>
> >> >>> --
> >> >>> []'s
> >> >>> Marco Tulio
> >> >
> >> > --
> >> >
> >> > ---
> >> > You received this message because you are subscribed to the Google
> >> > Groups
> >> > "web2py-users" group.
> >> > To unsubscribe from this group and stop receiving emails from it,
> >> > send an email to [email protected].
> >> > For more options, visit https://groups.google.com/groups/opt_out.
> >> >
> >> >
> >
>