@Brian is right, the query is more complicated and it was just a
placeholder. When I get back in the office on Monday, I will run some more
speed tests. I am not the DBA, and the city office where I am working
doesn't have access to SQL Server Management Studio; all of that is kept
by the IT department.
On Wednesday, November 27, 2013 6:48:50 AM UTC+1, Brian M wrote:
Querying out 47K rows is taking 4.5-5.5 seconds on my laptop (and its
specs are nothing special). Not sure how the complexity of my query
compares to yours, but 20-150 seconds seems slow. Heck, I can get back over
800K rows in about 75 seconds.
I'm assuming @Josh isn't actually doing a straight 'select * from table'
but is rather using that here as a placeholder for his real query, which is
presumably doing more and therefore might benefit from some indexing. If it
really was just 'select * from table', why bother with executesql?
Querying out 47K rows is taking 4.5-5.5 seconds on my laptop (and its specs
are nothing special). Not sure how the complexity of my query compares to
yours, but 20-150 seconds seems slow. Heck, I can get back over 800K rows in
about 75 seconds. BTW, running the same queries in SQL Server
@Brian -
Yeah, I do need to pull in those rows at one time. I thought I had
provided a workaround by only pulling a few thousand at a time and
then combining later.
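A minimal sketch of that batched workaround. The table and column names here are hypothetical, and stdlib sqlite3 stands in for the MS-SQL backend from the thread (on SQL Server you would page with TOP or OFFSET ... FETCH rather than LIMIT/OFFSET), but the pattern of pulling a few rows at a time and combining them is the same:

```python
import sqlite3

# Hypothetical stand-in table; in the thread this would be the
# 50K-row MS-SQL table queried through db1.executesql().
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t (val) VALUES (?)",
                 [("row%d" % i,) for i in range(10)])

def fetch_in_batches(conn, batch_size):
    """Pull a few rows at a time and combine the batches afterwards.

    Uses LIMIT/OFFSET; on SQL Server the equivalent is
    OFFSET ... ROWS FETCH NEXT ... ROWS ONLY (2012+) or TOP.
    """
    combined = []
    offset = 0
    while True:
        batch = conn.execute(
            "SELECT id, val FROM t ORDER BY id LIMIT ? OFFSET ?",
            (batch_size, offset),
        ).fetchall()
        if not batch:
            break
        combined.extend(batch)
        offset += batch_size
    return combined

rows = fetch_in_batches(conn, batch_size=3)
```

Ordering by a stable key (the primary key here) matters, otherwise pages can overlap or skip rows between batches.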
I just got back in town from being gone for about a week. I come back and
now the problem I was having before is no longer occurring.
@Brian - The minute I sent you the email the problem returned.
I ran the following code:
test = db1.executesql('select * from table')
print test
print len(test)
print test yields None
Exception thrown from the line print len(test):
<type 'exceptions.TypeError'> object of type 'NoneType' has no len()
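One way to keep code from blowing up while the underlying cause is being tracked down: normalize the result before calling len(). This is just a defensive sketch, on the assumption that executesql is intermittently handing back None instead of a row list (db1 and the query are the thread's, not runnable here):

```python
def rows_or_empty(result):
    """Normalize a query result so len() and iteration are always
    safe, even when the call unexpectedly returns None."""
    return [] if result is None else result

# test = db1.executesql('select * from table')  # from the thread; returns None here
test = None
print(len(rows_or_empty(test)))  # 0 instead of a TypeError
```

This only masks the symptom, of course; a query that silently returns None still needs the profiling suggested later in the thread.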
Wow, you actually need to pull 50,000 rows at a time?! Are you sure there isn't
some sort of aggregation that could be done at the database level to cut
that down? While I work with large tables, I have not been retrieving anywhere
near that many rows at once. I will give it a shot and see
@Josh
OK, I tested against a table with 1.8M records, joined in 3 other tables,
and then queried out 10K all the way up to 1.2M records with 20 columns and
had no problems. Worked fine with both the as_dict=False and as_dict=True
arguments to executesql(). This was with the latest web2py trunk.
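For readers unfamiliar with the two modes being tested: web2py's executesql() returns plain tuples by default, and rows keyed by column name with as_dict=True. A rough stdlib analogue of the difference, using sqlite3's row_factory purely as an illustration (not the MS-SQL setup from the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT)")
conn.execute("INSERT INTO t VALUES (1, 'a')")

# as_dict=False analogue: a list of plain tuples
tuples = conn.execute("SELECT id, name FROM t").fetchall()

# as_dict=True analogue: rows addressable by column name
conn.row_factory = sqlite3.Row
dicts = [dict(r) for r in conn.execute("SELECT id, name FROM t")]
```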
@Tim - Thanks for the suggestion. I will upgrade to the newest version soon.
@Willoughby - Thanks for the reply. I doubt there is a row limit, but I
will look into it.
@Brian - The queries are more complicated than that, usually. I was just
giving an example. I have tried the exact same
I meant: use the SQL Server Activity Monitor to find out exactly what
query is being run and how long it is taking. But if you are using SQL Server
Express you won't have that. You can use something like Express Profiler 2.0,
which is free, though. Being able to see exactly what query is
@Brian -
That is a great suggestion and I didn't know about that. I will get it.
The table currently has about 50K rows and about 25 columns, and I wanted to
pull in all of them. It creates a list of tuples. In the future, the
table might have potentially millions of records, but I would
I've been running an old version of web2py with MS-SQL for several years -
no issues, and we use executesql. But we limit it to kicking off a SPROC
or two, that's about it.
I've never dealt with the Express version though - wonder if there's a row
limit or something holding you back.
I use executesql with large tables (multiple 100K rows) and have had no such
problems.
Are your queries really just select * from table, or are there more complicated
parameters involved? If it can be reliably reproduced, then why not use a
monitor to watch what SQL Server is actually doing when it
Could you try with a more recent version? I have not noticed problems with
executesql with mssql.
Just to make sure, I ran a select * on a table with 130K records several
times in a row (from python web2py.py -M -S an_app) and it worked fine.
DB is SQL Server Express 2005, Python 2.7.something.