Also have a look at: https://github.com/jwass/mplleaflet
It seems to also make Matplotlib-style plots work well with Leaflet. I haven't
tried it yet, but it looks awesome.
On Monday, January 19, 2015 at 7:42:30 AM UTC+2, Carlos A. Armenta Castro
wrote:
I'm trying to work with maps for a vehicle
Note, however, that this potentially lets anyone download any other user's file.
If you do this, make sure you require signed URLs on the display controller
On Thursday, January 8, 2015 at 7:25:11 PM UTC-5, Rob Paire wrote:
Hi Dominic,
I found the simplest way to serve PDF files is to save them in
etc)
On Friday, November 28, 2014 8:02:15 PM UTC+1, nick name wrote:
Related, but not exactly the same question:
I'm submitting an immediate task in a regular (no sleep or anything --
though no special commit either); I can see it's queued for seconds and
sometimes minutes before getting assigned and running.
There are no other tasks waiting or running, there
I want to let the user do a query and paginate through the resulting table.
grid/smartgrid would be great, except that I need to supply the query
through a form and not leave it open -- the query has minimum and maximum
values for most fields, multiple selects for others (I build the search
Is there an example of using web2py together with OSM, similar to the
google maps examples?
Or some application I can look at and learn from?
(I'm not a Google Maps or OSM expert, just need to add a small map +
markers to an app)
Thanks
I remember PyPy was able to run web2py apps in the past. Is this still the
case?
Did anyone try to run web2py apps under Nuitka, Cython or Numba?
Was it successful? Was there an observable speed difference? Does any of
them eventually let you pack the app in a native executable that does not
On Thursday, November 6, 2014 4:12:01 PM UTC+2, Leonel Câmara wrote:
Does any of them eventually let you pack the app in a native executable
that does not include the source code of the app? (not even compiled byte
code)
Don't use python if you don't want to give out source code.
You
On Thursday, November 6, 2014 5:26:28 PM UTC+2, Leonel Câmara wrote:
Frankly, I have no interest in making my code harder to decompile or in not
delivering the code; it's protected by copyright, and usually I sell the
customer the projects they paid to develop, so I find it ethically wrong
not
On Friday, November 7, 2014 12:00:45 AM UTC+2, Manuele wrote:
Well, maybe the best option is to use the OpenLayers library... including
an OpenStreetMap layer is quite easy, but can you be more specific about
what you need?
Are these examples enough for your needs?
On Wednesday, October 22, 2014 6:21:43 PM UTC+3, Carolina Nogueira wrote:
I'm not quite sure whether I am doing something wrong or not... As far as
I understand, the overhead from the web service should be completely
located outside the script and not reflect during its execution. What am I
On Tuesday, October 14, 2014 9:01:19 PM UTC+3, Mandar Vaze wrote:
Is it possible to either :
not allow login from MachineB (show a message that you are currently logged
in from MachineA - continue to access the application from MachineA, or
log out from MachineA... or some such message.)
OR
A nice web2py tutorial:
https://impythonist.wordpress.com/2014/02/15/web2py-a-simpleclean-but-powerful-webframework-in-python/
Shamelessly copied to the DigitalOcean community board:
I don't have a Mac, and can't reproduce.
However, that's not how you're supposed to run it, I think - you should
activate the specific environment you set up for web2py and then let the
path select the right python.
On Friday, August 8, 2014 9:29:09 AM UTC+3, Massimo Di Pierro wrote:
By any chance, do any of you connect to a local vpn/proxy running on the
server that relays the connection, rather than directly? (through openvpn,
sshuttle, ssh tunnels, nginx, socks or anything of the sort?)
If you do, then it is possible that the connection arrives to the app from
Does anyone have experience with RapydScript (lightweight py-like to JS
translator) and RapydML (pythonic-template to html/xml/svg translator)?
Have just discovered them, and from a cursory examination they seem
extremely nice and useful. RapydScript seems to bridge the JS-Python gap
better
On Wednesday, February 12, 2014 5:11:29 PM UTC+2, Alex wrote:
IS_DATETIME validator doesn't change anything. I doubt that validators are
used by the DAL.
I guess I have to use native SQL to set milliseconds
(Better late than never ... I just saw this)
You don't have to go directly to
You can have web2py's internal cron do that for you with the @reboot
instruction:
http://web2py.com/books/default/chapter/29/04#Cron
Note that soft cron is turned off by default with wsgi.
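For reference, a crontab line along these lines should do it; the app name and script path below are made up, and the authoritative syntax is in the book chapter linked above (as I recall, the leading * on the task path tells web2py's cron to run the script inside the web2py environment):

```
# applications/myapp/cron/crontab  (illustrative names; see the book chapter above)
@reboot root *applications/myapp/cron/start_tasks.py
```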
Interesting post in
http://programmers.stackexchange.com/questions/168751/is-the-use-of-utf8-preferable-to-utf8-true
, referenced from http://news.ycombinator.com/item?id=4668373 - I didn't
know that!
tl;dr: It's helpful to add a hidden form variable “utf8=✓” to forms, to
force older IE to
On Friday, August 17, 2012 8:29:12 AM UTC-4, Mike Girard wrote:
The data will be coming from a large XML file, so my script will parse
that and make inserts into several different tables. It's fairly
straightforward.
So is it correct to say that -
1. There is no compelling reason to do
On Wednesday, July 11, 2012 6:26:00 PM UTC-4, Massimo Di Pierro wrote:
I am planning to improve this functionality but it would help to know if
it works for you as it is and what problems you encounter with it.
I originally used the export-to-csv, but a few months ago, I switched to
just
On Tuesday, June 26, 2012 3:43:16 PM UTC-4, Massimo Di Pierro wrote:
If you can reproduce it, could you try Wireshark to do a packet capture? I
would like to see what is going on.
Tim is a new dad and has not been responsive. I will attempt to fix
this myself (although I cannot
On Friday, June 22, 2012 11:37:10 AM UTC-4, Anthony wrote:
Maybe it would be possible to convert the results of executesql() to a
pseudo-Rows object (without all the processing of each individual field
value) so it could be used with the grid, etc.
The original ticket that prompted adding
On Thursday, June 28, 2012 10:47:15 AM UTC-4, Massimo Di Pierro wrote:
Why not simply:
db.commit()
db.close()
If db is an object attribute like self.db, you can do:
if self.db:
    self.db.commit()
    self.db.close()
    self.db = 0
You can also do:
On Wednesday, July 4, 2012 2:23:16 PM UTC-4, Massimo Di Pierro wrote:
web2py has changed since a year ago. Now you simply do:
db.define_table('mytable',fields...,auth.signature)
... define more table ...
auth.enable_record_versioning(db)
and mytable will have a
This might have been solved this week, but in case it wasn't:
You're tackling a general database problem, not a specific task queue or
web2py problem. So you need to solve it with the database: set up another
table that refers to the task table, such as:
On Sunday, June 10, 2012 6:18:26 AM UTC-4, wdtatenh wrote:
Unfortunately, the update didn't occur (it was not a schema change) - a simple
insert into the auth_membership table. I could see the change when using a
local cmd/python shell, but it wasn't visible when I checked using appadmin.
sqlite is fine in this
What works?
What doesn't work?
What are the pitfalls?
Do you need to write any if 0: import ... statements to make it recognize
the functions?
Thanks in advance.
On Thursday, May 24, 2012 2:22:50 PM UTC-4, Cliff Kachinske wrote:
If your development environment is Linux, you can set and read an
environment variable to take care of this.
As sudo or su, add the following line to /etc/environment:
W2PYENV=dev
Python only loads the environment once,
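A minimal way to read that variable back from Python, using the W2PYENV name suggested above (the helper name here is made up):

```python
import os

def is_dev_environment(environ=os.environ):
    """True when W2PYENV=dev was set, e.g. via /etc/environment."""
    return environ.get("W2PYENV") == "dev"

# in a model one might then do something like:
# if is_dev_environment(): db = DAL('sqlite://dev.sqlite')
```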
On Saturday, May 19, 2012 4:10:13 PM UTC-4, Massimo Di Pierro wrote:
Basically the same features we have in trunk now, just with lots of bug fixes
compared to the latest stable release.
We will also have full versioning, a geo API in the DAL, possibly better
support for MongoDB and Sybase, and a new welcome app
On Tuesday, May 1, 2012 3:11:19 AM UTC-4, Hassan Alnatour wrote:
How can I copy all the records and tables in my SQLite database to a MySQL
database?
web2py comes with a tool called cpdb (under scripts/) that does exactly
that, and more. Run it with --help for a complete explanation.
On Tuesday, April 24, 2012 9:24:34 AM UTC-4, Massimo Di Pierro wrote:
I think this belongs in the validator. If the validator has already
removed the subseconds, you are out of luck.
If it works for you, I do not see a problem. Anyway, remember that this
API is experimental. They will stay
Bump. Anyone know the answer?
On Wednesday, April 18, 2012 4:02:37 PM UTC-4, nick name wrote:
An issue I keep bugging about (and submitting patches for) is the
implementation of subsecond precision in the database. With the new code, I
believe it is as simple as doing:
def
On Sunday, April 22, 2012 10:47:04 PM UTC-4, weheh wrote:
This works fine with sqlite. But postgres and mysql don't work. The
problem is when I launch the queue on postgres or mysql, as per above (with
or without the -N argument), the db().select(...) doesn't see anything in
the
On Thursday, April 19, 2012 7:08:06 PM UTC-4, Ricardo Pedroso wrote:
I post a comment on this issue:
http://code.google.com/p/web2py/issues/detail?id=731#c4
I think this is not a bug but an incorrect use of the dal api.
Ricardo, thanks! That is indeed the problem. Whether or not it is a
I don't understand what you are trying to achieve, but whatever it is, you
are doing it wrong; your model should be:
db.define_table('A', Field('name'))
db.define_table('B', Field('name'), Field('id_from_table_a', 'reference A'))
# alternatively:
# db.define_table('B', Field('name'),
On Wednesday, April 18, 2012 10:26:32 PM UTC-4, Massimo Di Pierro wrote:
In order to isolate the problem, let's check if this is a sqlite:memory
issue. Can you reproduce the problem with sqlite://storage.db ?
Yes, same result exactly. Note that the 'storage.db' is empty, but I'm
seeing this
I was away for two weeks and finally caught up with the git updates and the
mailing list; what a pleasant surprise!
The before/after infrastructure is a joy. It is simple and elegant, much
faster and more effective than the monkeypatching I submitted, AND with
auditing implemented on top of
I can reproduce this problem on both Linux and Windows (have no access to a
Mac), and Massimo cannot reproduce this on his Mac. Perhaps something is
borked about all my python installations (some site-packages I use or
something). Can you help test? Just go into the web2py directory, and start
On Wednesday, April 11, 2012 5:56:22 AM UTC-4, Changju wrote:
I put a file in the static folder, let's say 'testApp/static'.
When I download the file from the static folder, the downloaded file differs
from the original file in size.
The original file size is 15,227,904 versus 15,096,832 for the downloaded
The database connection is initialized in models/db.py (assuming you used
the wizard to generate your application). Look for the line that says
db=DAL(...), and make it select the right database according to your
request, e.g. request.host, or however else you determine the configuration
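A sketch of that idea, assuming you key the connection string off request.host; the helper name, host names, and URIs below are hypothetical:

```python
def pick_database_uri(host, uris, default="sqlite://storage.sqlite"):
    """Map a request.host value to a DAL connection string (hypothetical helper)."""
    return uris.get(host, default)

# in models/db.py, something like:
# db = DAL(pick_database_uri(request.host,
#                            {"eu.example.com": "postgres://eu-db/app"}))
```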
Come say the good things you have to say about web2py on
http://news.ycombinator.com/item?id=3765610
On Wednesday, March 28, 2012 10:13:52 AM UTC-4, Richard Penman wrote:
the smartest web hackers I know universally regard web2py as a
fundamentally incorrect way to approach web development—but usually say so
in far more colorful terms.
I just went and re-read the original threads (from
On Tuesday, March 27, 2012 11:26:17 AM UTC-4, Marco Tulio wrote:
How do I get the data that was on my app (on the sqlite database).
Web2py comes with scripts/cpdb.py, which copies databases from one
connection string to another with lots of other goodies.
see
In one of my management scripts (which runs continuously, after setting up
a web2py environment), I copy a complete sqlite database directory from
another server (copied_db.sqlite, and *.table), and open them with
DAL('sqlite://copiedfile.sqlite', auto_import=True, path='/tmp/copy_path').
I copy
On Friday, March 23, 2012 10:21:07 AM UTC-4, Richard wrote:
Why are you doing that?
Richard
A legacy system, which has stand-alone appservers+databases in
geographically distributed locations, with intermittent and low bandwidth
connections (think mobile GPRS modems for the kind and
Probably a bug in DAL (at least for standalone use, but I suspect also when
used in web2py). A test case that easily reproduces this behavior can be
found in Issue 731 http://code.google.com/p/web2py/issues/detail?id=731:
Standalone
DAL is leaking memory+resources (don't know whether or not
On Friday, March 23, 2012 6:09:30 PM UTC-4, Anthony wrote:
There is also an .as_list() method, which converts to a list of
dictionaries rather than a dictionary of dictionaries. You can also just
store Rows objects directly in the session or cache -- the DAL defines a
reduction function
On Friday, March 23, 2012 10:09:07 PM UTC-4, Limedrop wrote:
Thanks for the information. Unfortunately, in my case I really do
need to store the raw query in the session...and then convert it back
to a Query so I can then add a few more filters before the final
select(). Has anyone
On Tuesday, March 20, 2012 2:30:08 PM UTC-4, Rick Ree wrote:
In case anyone else is using this, I found that 1.99.7 requires a change
at line 78:
#rows = self.select(query, ['id'], {})
rows = self.select(query, [table['id']], {})
Please note that there is a ticket tracking this
On Friday, March 9, 2012 at 11:48:30 PM UTC+1, rochacbruno wrote:
To do what Django does, we need to have some hooks for those 3 events
(dbset.insert, dbset.update and Row.update_record)
I can see that if you extend DAL it is possible to redefine via monkey
patching the .insert method.
But
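The monkey-patching idea can be sketched generically like this; Table here is a stand-in class for illustration, not the real DAL one:

```python
# A generic post-insert hook via monkey patching (stand-in Table class).
class Table:
    def __init__(self):
        self.rows = []

    def insert(self, **fields):
        self.rows.append(fields)
        return len(self.rows)  # pretend this is the new record id

def add_insert_hook(table, hook):
    """Wrap table.insert so that hook(new_id, fields) fires after each insert."""
    original = table.insert
    def patched(**fields):
        new_id = original(**fields)
        hook(new_id, fields)  # the "post-insert" event
        return new_id
    table.insert = patched

events = []
t = Table()
add_insert_hook(t, lambda new_id, fields: events.append((new_id, fields)))
t.insert(name="x")
print(events)  # [(1, {'name': 'x'})]
```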
is very small (~5 lines), backward compatible, and
useful for other stuff as well (e.g. saving memory and speeding up
Rows.as_list() as long as only one table is involved, which is a very
common use case).
nick name, can you please provide your table schema so we can test
performance on data sets
As of yesterday, hg has a few files that neither git repo (mdipierro and
web2py) does.
Surprisingly, the git push ran 9 minutes later, but it seems to lag
behind the hg repo.
Only in web2py.hg/applications/admin/controllers: webservices.py
Only in web2py.hg/applications/admin/views/debug:
On Friday, February 10, 2012 12:04:59 AM UTC-5, Massimo Di Pierro wrote:
open a ticket, this can be done. I like the idea of passing a
processor.
Opened in http://code.google.com/p/web2py/issues/detail?id=701 with
discussion and a much improved suggestion of how to handle this.
On Thursday, March 8, 2012 8:52:34 PM UTC-5, Edward Shave wrote:
Many thanks for reply, unfortunately it didn't work in this instance... I
wonder if it is because the table is referencing itself?
By coincidence, I noticed the same problem earlier myself and opened ticket
#700
I have opened 3 suggestion tickets on the tracker at
http://code.google.com/p/web2py/issues/list
Issue 701 http://code.google.com/p/web2py/issues/detail?id=701: Suggestion:
lightweight DAL select processing - x10 to x100 speedup with large query
results
Issue 702
On Friday, March 2, 2012 12:43:03 PM UTC-5, Rajesh Subramanian wrote:
Hello,
First of all, what a beautiful framework web2py is! Thank you!
I am having issues streaming a video file to ios devices.
The file is not the problem because the same file plays properly on
those devices when it
Whichever way you choose, make sure you verify permissions on the server as
well -- don't rely on not presenting the link to the client as a form of
security.
Is that going to be 2.0?
IMO, the rocket download problem which AFAIK has not yet been fixed is a
blocker for 2.0
Also, IIRC, Bruno is working on Bootstrap integration -- which is probably
worthy of delaying 2.0 for (Web2py 2: Now with bootstrap!)
There's a tree structure among the records, upon which the aggregation is
computed.
Some dbs (e.g. oracle) have extensions for tree-like structures (CONNECT BY
etc), but it is not standard, and I need to support both sqlite and
postgres in this app.
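One portable option, assuming a recursive CTE is acceptable: WITH RECURSIVE is supported by both PostgreSQL and SQLite (3.8.3+), so the tree aggregation can be pushed into raw SQL. A self-contained sketch with made-up table and column names:

```python
import sqlite3

# Toy tree: node 1 is the root, 2 and 3 are its children, 4 is a child of 2.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE node (id INTEGER PRIMARY KEY, parent_id INTEGER, amount INTEGER);
    INSERT INTO node VALUES (1, NULL, 10), (2, 1, 20), (3, 1, 5), (4, 2, 7);
""")

# Sum amounts over the whole subtree rooted at a given node.
total = conn.execute("""
    WITH RECURSIVE subtree(id) AS (
        SELECT id FROM node WHERE id = ?
        UNION ALL
        SELECT node.id FROM node JOIN subtree ON node.parent_id = subtree.id
    )
    SELECT SUM(amount) FROM node WHERE id IN (SELECT id FROM subtree)
""", (1,)).fetchone()[0]
print(total)  # 42 for this toy tree (10 + 20 + 5 + 7)
```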
This solution will lead to a race condition. Do not use it!
If you have multiple threads, they might update your commons at the same
time and you'll get request from one session, and session from another, and
db from a third.
current is a thread-local thing, guaranteed not to be touched by any
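The thread-local point can be demonstrated directly with the standard library; this is a generic sketch, not web2py's actual implementation:

```python
import threading

# A thread-local "current" avoids the race: each thread sees only its own value,
# unlike a shared module-level global that all threads would overwrite.
current = threading.local()

def handle_request(request_id, results):
    current.request = request_id       # private to this thread
    results[request_id] = current.request

results = {}
threads = [threading.Thread(target=handle_request, args=(i, results))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # {0: 0, 1: 1, 2: 2, 3: 3} - no cross-thread mixups
```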
I usually run it with -i 0.0.0.0 , which means it listens simultaneously on
127.x.y.z and on any other address the computer might have. The admin pages
are accessible only when coming through localhost (127.0.0.1) or ssl, no
need for different processes/ports.
One of my controllers needs to go through a lot of records to provide a
meaningful answer -- as in, 60k records.
Just loading them from the database takes about 100ms
(db.executesql("select * from table order by id;")); doing the same through
DAL takes over 6 seconds. I realize that the DAL does
Yes, that is the basis of what I am suggesting.
There is not currently such a thing; there is something called 'select_raw'
implemented in the GoogleDataStore adapter, but not in anything else, and
it isn't exactly what I am proposing.
To elaborate:
Assume the table is defined as follows:
I've run into table inheritance problems (documented in
http://code.google.com/p/web2py/issues/detail?id=649 and
http://code.google.com/p/web2py/issues/detail?id=648) and others have too
(http://code.google.com/p/web2py/issues/detail?id=353). Patches are
provided with specific solutions.
I tried to maintain backward compatibility in the best way possible - by
keeping the old code in place if you don't need higher resolution.
(Note, however, that there's a place in DAL where time() retains
microseconds, and datetime does not, which was probably a bug -- in that
case, I fixed
I've posted a patch for DAL that adds support for fractional seconds; the
existing dal just silently truncates fractional seconds both when storing
to the database and selecting from the database.
http://code.google.com/p/web2py/issues/detail?id=542
While I don't think it's ready for prime
I run my app with migrate=migrate_enabled=False, because when migrations
_are_ needed, they are nontrivial, and the default logic is never what I
want.
I would like to get some "migration needed" response from the database, so
that when a user of the app runs a new version of the app on an old
Working with unsanitized input like this might be dangerous.
http://localhost/content/../../../etc/passwd
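A minimal sketch of the check implied here, assuming POSIX paths; the helper name is made up:

```python
import os

def safe_join(base, user_path):
    """Resolve user_path under base, refusing anything that escapes base."""
    base = os.path.abspath(base)
    candidate = os.path.abspath(os.path.join(base, user_path))
    if candidate != base and not candidate.startswith(base + os.sep):
        raise ValueError("path escapes base directory")
    return candidate
```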
No, the thread started with IE8 being the suspect, but at least from my
experiments it is a problem in Rocket which can be triggered with any
browser, or even without a browser (e.g. wget/curl instead of a browser)
See e.g. https://github.com/explorigin/Rocket/issues/1#issuecomment-3734231
The
On Tuesday, January 31, 2012 9:37:54 AM UTC-5, Massimo Di Pierro wrote:
In trunk the socket timeout is 60, and this resulted in another problem:
Ctrl-C waits for 60 seconds before joining the worker processes.
Perhaps we should increase the socket timeout, catch Ctrl+C and then kill
the process
Ok, the culprit is definitely ignoring exceptions raised in sendall. In my
humble opinion this is serious enough to be on the 2.0 blocker list.
How to reproduce: you have to have a WSGI worker that produces output in
parts (that is, returns a list or yields parts as a generator), e.g. use
On Saturday, January 28, 2012 10:22:58 AM UTC-5, Phyo Arkar wrote:
it's 2.7, as all my servers are (distro default)
Sorry for the confusion. This is true for every version down to at least
2.3 and up to 2.7; At the time I posted, I wrote 2.6 because that was the
only one I verified and did not
Almost surely the same problem discussed in this thread:
https://groups.google.com/d/msg/web2py/1_b63bhBeQs/sYFbXNJL8D4J
I posted https://github.com/explorigin/Rocket/issues/1#issuecomment-3648126
- I suspect it is an interplay between timeouts and sendall(), though I
can't really prove it (and I can't reliably reproduce this either right
now). Also some characterization about when this happens to me (slow links,
I suspect not, I've filed bug 581 (
http://code.google.com/p/web2py/issues/detail?id=581 )
But I haven't seen any other reports of this. Am I doing something wrong?
Are you sure you got the right win32? e.g. it needs to match the Python
version number (2.6) and bit width (32? 64?) of your interpreter.
If you start Python independently, can you import win32con without error?
bulk insert is not really bulk except on GAE, although it might potentially
be in the future. The non GAE implementation at this point is:
def bulk_insert(self, table, items):
    return [self.insert(table, item) for item in items]
No database seems to override it.
I just pulled git and hg and there's a small difference (that was also
already there a few days ago):
Only in web2py.hg/applications/examples/static/js: modernizr-1.6.min.js
Only in web2py.hg/applications: __init__.py
Only in web2py.hg/applications/welcome/models: db.py.orig
Only in
I want to have a non-null foreign key reference, e.g.
owner = db.define_table('owner', Field('name'))
package = db.define_table('package', Field('owner_id', owner,
notnull=True), Field('name'))
SQLite, for example, has no problem with this:
sqlite> create table owner(id int primary key, name
Interesting. DAL has specific reset code for MS SQL and SQLite, but not for
postgresql (or any of the other databases, it seems)
A possibly related (and possibly unrelated) data point:
I've always been running from source. Occasionally, when I try to read
request.body I get a socket timeout, even though exactly zero seconds have
passed, and the timeout is set at 60 seconds.
Running the same request again (it's
Currently, there are two ways exceptions are handled in web2py:
If the controller/view does *NOT* use try: except:, and an exception is
raised, then it is very helpfully logged in the ticket system, and a
somewhat-useful message (that can be customized a little, see e.g.
I keep getting burned by how web2py handles datetimes; specifically, my
project requires millisecond precision of timestamps. The two databases I
care about support it, pgsql has microsecond resolution and sqlite is
agnostic (pysqlite3 specifically supports microseconds). MySQL and Oracle,
Does your cron program run continuously? What database are you using?
If you are using a database that supports mvcc (Oracle, Postgres, MySQL
with InnoDB tables, possibly others), your first select (of any kind, not
just this query) logically freezes the state of the database, and you will
not
Add requires=[] on the unique field to disable the validation, if you need
validate_and_update for any of the other fields.
The database itself should validate it for you (you'll get an sql error,
rather than a web2py error).
In your first case, just dropping the name='') from the update
I would classify this as a bug, or at least as warranting mention in the
documentation - it causes appadmin to break for the referencing tables,
which is very confusing.
This is specifically for the unique=True, whose validator is
IS_NOT_IN_DB(). Perhaps a better solution would be a UNIQUE()
Reminder of issue 354: http://code.google.com/p/web2py/issues/detail?id=354
I have never used a select trigger, so I only have vague ideas about how it
would be useful.
For inserts, a single line is sufficient. For deletes/updates, that is not
true.
You can look at (todor's fixes to) my
Note: tested on 1.97.1; I believe problem (or my misunderstanding) is still
on trunk. I tried to check with trunk, but I have some compatibility
problems to solve first - I will post an update when I have solved them.
I have an included table defined like this:
included = db.Table('included',
The book describes how to use the dal left join syntax.
Where it talks about inner join, it uses equality test (e.g.
db(table1.field1 == table2.field2).select())
However, inside dal.py there is an implementation for an inner join, used
like db().select(join=table1.field.on(table2.field))
Using request.now is _guaranteed_ to go backwards once a year in most of
the world (when going back from daylight saving time to standard time; the
date this happens differs between countries).
request.utcnow, which I mentioned in my original post (and appears in the
readme, but for some
I've just done an hg pull -u:
changeset:   2435:8cbfa1244549
tag:         tip
user:        mdipierro@massimo-di-pierros-macbook-2.local
date:        Wed Sep 21 00:17:23 2011 -0500
summary:     sys.exit(0), thanks Praneeth
The README file mentions request.utcnow, but the code doesn't. So either
As this issue keeps popping up every week, how about a mechanism to cope
with this kind of change in the future:
In the wizard generated code, add a first line that calls
UpdatedForWeb2PyVersion('1.97.1', true)
(Where the 1.97.1 is the version that the wizard was run in, and 'true'
means
I hate these kinds of hacks.
In this case, a good solution (because of bandwidth and everything) would
be: give me all the updates since 1 hour _before_ what I think is the most
up-to-date update), and that would cover clocks going backwards up to 1
hour. And the cost would be in bandwidth,
the key fact is not request.now, or request.utcnow but the fact that you
can define your table to update these fields automatically so you can forget
about them
Thank you for your thoughtful answer, but that wasn't what I was asking
(apologies for the misunderstanding), and if you actually
This is quite amazing. Thanks to everyone involved for the great work, and
especially Massimo for masterfully co-ordinating everything without breaking
backward compatibility
Small remark: (posted it in another thread, but repeating here in case it
was not visible enough)
request.utcnow is
time.time() or datetime.datetime.utcnow() both give you a value that is
independent of timezone (and datetime.utcnow() is supposed to be available
as request.utcnow, although that still isn't the case).
However, as I pointed out, there is still a chance that this can go
backwards in time on a
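For purely local interval measurement, the standard library has a clock that cannot go backwards; this doesn't solve cross-server synchronization, but it avoids the DST/clock-step issue for elapsed-time math:

```python
import time

t0 = time.monotonic()   # monotonic: never steps backwards, unlike wall-clock time
time.sleep(0.01)
elapsed = time.monotonic() - t0
assert elapsed >= 0     # guaranteed; a wall-clock difference could be negative
```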
Python does, in a way ... if you use a deprecated feature, you get a
deprecation warning when running the code.
And, the purpose of such a change is to optimize support/googlegroup traffic
-- this specific issue comes up every single week since it was introduced.
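The Python convention referred to above looks like this; old_api/new_api are made-up names:

```python
import warnings

def new_api():
    return 42

def old_api():
    # Callers of the deprecated name get a DeprecationWarning pointing at
    # their own call site (stacklevel=2), but the code keeps working.
    warnings.warn("old_api() is deprecated; use new_api()",
                  DeprecationWarning, stacklevel=2)
    return new_api()

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    value = old_api()
print(value)  # 42
```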
That actually makes a lot more sense than my dumb idea, and seems to be the
way to go.
Yes, I keep a update number field on each record, which is essential to
track which records changed since the last check (you could have a changed
bit instead, which is reset on sync -- but that makes