[sqlalchemy] Many tables in eagerloading

2007-09-04 Thread Arun Kumar PG
Guys,

Was wondering: if we have 10 or so tables that are related to each other
and are all required during, say, report generation, and I specify eager
loading for all the attributes that map to those tables, then down the
line, as the records in the tables grow, the intermediate result produced
by the join (a left outer join in eager loading) will be massive before
the WHERE clause is applied. Should we worry about this, or is it fine as
long as the tables are joined on primary/foreign keys and the query plan
looks decent?

I am doing this for 7-8 tables, and in a couple of them the data is
growing continuously by a factor of 2xN every month. I am worried that
eager loading may become a problem, in the sense that it could bring the
DB server to its knees some day given the joins involved. FYI: the eager
loading is specified at the Query level so that I can enable it only
where I really need it.

But currently it's faster than executing individual queries. In my case,
if I go with individual queries due to lazy loading, it takes forever,
and in many cases the request times out in the web browser. So eager
loading is better; I am just worried about speed as the data grows. Are
there any good guidelines on how to use eager loading, best practices,
any limitations, etc.?
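
To be concrete, this is roughly how I apply it per query (a minimal
sketch with 0.4-style imports; the A class, its 'b'/'c.d' relations and
some_report_id are just placeholders for my real mappings):

from sqlalchemy.orm import create_session, eagerload, eagerload_all

session = create_session()

# mappings stay lazy by default; eager loading is switched on only for
# the relations this particular report needs
query = session.query(A).options(
    eagerload('b'),          # pulls A.b into the same LEFT OUTER JOIN
    eagerload_all('c.d'),    # pulls in A.c and, through it, A.c.d
).filter(A.id == some_report_id)

report_rows = query.all()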

-- 
Cheers,

- A




[sqlalchemy] Elixir performance

2007-09-04 Thread Acm

I am trying out Elixir 0.3.0 over SQLAlchemy 0.3.10 in a Python 2.5
environment.

Are there any known performance issues with Elixir for CRUD (Create,
Select, Update, Delete) commands?

Thanks.





[sqlalchemy] Re: Elixir performance

2007-09-04 Thread Gaetan de Menten

On 9/4/07, Acm [EMAIL PROTECTED] wrote:

> I am trying out Elixir 0.3.0 over SQLAlchemy 0.3.10 in a Python 2.5
> environment.
>
> Are there any known performance issues with Elixir for CRUD (Create,
> Select, Update, Delete) commands?

Not that I know of. There shouldn't be any overhead (over raw
SQLAlchemy) after the initial class initialization phase.
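
If you want to convince yourself, time a trivial entity doing each
operation; a rough sketch from memory of the 0.3 has_field/objectstore
API (the Customer entity and the throwaway database are just
placeholders, so adjust to your setup):

from elixir import Entity, has_field, metadata, objectstore
from sqlalchemy import Unicode

class Customer(Entity):
    has_field('name', Unicode(50))

metadata.connect('sqlite://')    # throwaway in-memory database
metadata.create_all()

c = Customer(name=u'Acme')       # create
objectstore.flush()              # issues a plain SQLAlchemy INSERT
print Customer.select()          # select goes through the regular Query machinery
c.name = u'Acme Ltd'             # update
objectstore.flush()
objectstore.delete(c)            # delete
objectstore.flush()

The SQL generated is the same as what an equivalent hand-written mapper
would produce, so any difference you measure should be noise.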

-- 
Gaƫtan de Menten
http://openhex.org




[sqlalchemy] Re: Many tables in eagerloading

2007-09-04 Thread Michael Bayer


On Sep 4, 2007, at 2:56 AM, Arun Kumar PG wrote:

> Guys,
>
> Was wondering: if we have 10 or so tables that are related to each
> other and are all required during, say, report generation, and I
> specify eager loading for all the attributes that map to those tables,
> then down the line, as the records in the tables grow, the
> intermediate result produced by the join (a left outer join in eager
> loading) will be massive before the WHERE clause is applied. Should we
> worry about this, or is it fine as long as the tables are joined on
> primary/foreign keys and the query plan looks decent?
>
> I am doing this for 7-8 tables, and in a couple of them the data is
> growing continuously by a factor of 2xN every month. I am worried that
> eager loading may become a problem, in the sense that it could bring
> the DB server to its knees some day given the joins involved. FYI: the
> eager loading is specified at the Query level so that I can enable it
> only where I really need it.
>
> But currently it's faster than executing individual queries. In my
> case, if I go with individual queries due to lazy loading, it takes
> forever, and in many cases the request times out in the web browser.
> So eager loading is better; I am just worried about speed as the data
> grows. Are there any good guidelines on how to use eager loading, best
> practices, any limitations, etc.?


you might want to look into creating views for each thing that you  
need to select, thus moving the complexity of fine tuning the joins  
over to the database side. 
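
a rough sketch of what that can look like (everything here is made up --
the view, its columns, the ReportRow class and the connection URL -- and
with 0.3 the orm names are imported from the top-level sqlalchemy
package instead):

from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String
from sqlalchemy.orm import mapper, create_session

engine = create_engine('mysql://user:passwd@localhost/mydb')
metadata = MetaData(bind=engine)

# the join tuning lives in the database, e.g.:
#   CREATE VIEW report_view AS
#     SELECT a.id AS id, a.name AS name, b.total AS total
#     FROM a LEFT OUTER JOIN b ON b.a_id = a.id
# describe the view to SQLAlchemy just like a table, declaring its
# columns explicitly; a view has no real primary key, so mark one column
# as such for the mapper's benefit
report_view = Table('report_view', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(100)),
    Column('total', Integer),
)

class ReportRow(object):
    pass

mapper(ReportRow, report_view)

session = create_session()
rows = session.query(ReportRow).filter(report_view.c.total > 0).all()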
  




[sqlalchemy] Re: FormAlchemy 0.1 released !

2007-09-04 Thread Alexandre Conrad

Hello,

>> html = fa.FieldRender(bind=client, column='email').render()
>
> Nicely done, Alexandre!  I see you are already planning to support
> input validation.  Are you thinking of supporting alternate layouts
> such as a table grid?

Yes, I'm working on validation right now, and I'm reorganizing the
project's source code as the project grows. I already have an
experimental version of table rendering, both for a single item and for
a collection of items.

 From the latest trunk, you'll be able to do something like this:

import sqlalchemy as sa
import formalchemy as fa
import mymodel
session = sa.create_session()
client = session.query(mymodel.Client).get_by(clientid='aclient')

# For generating fields in a HTML fieldset
print fa.FieldSet(bind=client).render()

# For generating a table from a single item.
print fa.TableItem(bind=client).render()

# For generating a table from a collection of items.
client_list = session.query(mymodel.Client).all()
print fa.TableCollection(bind=mymodel.Client, collection=client_list)

The collection can be an empty list and the bound model can be 
uninstantiated.

But we don't want to take over the SQLAlchemy list, so let's talk about
FormAlchemy on the FormAlchemy list instead:

   http://groups.google.com/group/formalchemy

:)

Regards,
-- 
Alexandre CONRAD





[sqlalchemy] Re: Many tables in eagerloading

2007-09-04 Thread svilen

On Tuesday 04 September 2007 17:12:26 Arun Kumar PG wrote:
> Good work svilen!
>
> A couple of questions on what you suggested:
>
>> skipping creation of objects - only using the data, if time of
>> creation gets critical.
>
> In my query, where eager loading is being done across 8 tables, if I
> manually run the join query generated by the SA ORM I get a result set
> of over 70,000+ records, because it contains duplicates. I guess the
> SA ORM traverses the returned result and creates the right number of
> objects, discarding the duplicate or unwanted rows.
Probably, but unless you discard 50% of the rows, it makes little
difference.

> How much time is generally spent by SA if, let's say, we have
> 1,000,000 rows returned and this result set forms the whole hierarchy,
> i.e. A (main entity), A.b, A.c [] (a list), A.c.d [] (again a list),
> etc.? Is that a considerable time?
Well, you just sum up how many constructors have to be executed per row
and multiply by a million... IMO anything in Python multiplied by a
million is considerable time. Maybe in 10 years it won't be. That's why
I want to keep my model separate from the denormalization scheme.
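
e.g. a rough back-of-the-envelope check of just the constructor cost,
nothing to do with SA itself (class A here is a made-up three-attribute
object):

import timeit

setup = """
class A(object):
    def __init__(self, a, b, c):
        self.a = a; self.b = b; self.c = c
"""
t = timeit.Timer("A(1, 2, 3)", setup)
# one million bare constructions, before any of the ORM's row processing
print "1M constructions: %.2f sec" % t.timeit(number=1000000)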

> What is the best optimization technique (recommended by the SA ORM)
> when it comes to aggregation of data from multiple tables with lots of
> rows in them (and continuously growing)?

SA 0.4 can do polymorphic queries via multiple per-subtype queries, but
I have no idea how that works or how much better it is than one huge
union/left-outer-join plus the type switching over it; also, you'll lose
the overall ordering.
Mike, can this mechanism be used somehow for my vertical loading, i.e.
loading some column from a related table that is not in the polymorphic
hierarchy?




[sqlalchemy] Re: adjacency list: filtering relations eagerly

2007-09-04 Thread stephen emslie

> im going to play with this a little bit, but my first instinct is
> that you might want to use contains_eager('children.children', ...)
> for your deeper aliases.  but im not sure if something might prevent
> that from working since i havent tested contains_eager in self-
> referential scenarios as of yet.

Thanks for taking a look. I did give contains_eager('children.children')
a try, as it seemed the most likely thing to work. Unfortunately it
seemed to override the previous contains_eager relation, so I ended up
with the root's 'children' relation coming up with subnode2 rather than
node2 (i.e. skipping the first relation). But it's nice to know I wasn't
completely off the mark :)
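
For reference, the single-level setup that does work for me boils down
to roughly this (0.4 syntax; the nodes table, the Node mapping and the
column names are from my little test case, so treat them as
placeholders):

from sqlalchemy import (MetaData, Table, Column, Integer, String,
                        ForeignKey, create_engine)
from sqlalchemy.orm import (mapper, relation, backref, create_session,
                            contains_eager)

engine = create_engine('sqlite://')
metadata = MetaData(bind=engine)

nodes_table = Table('nodes', metadata,
    Column('id', Integer, primary_key=True),
    Column('parent_id', Integer, ForeignKey('nodes.id')),
    Column('name', String(50)))
metadata.create_all()

class Node(object):
    pass

# standard adjacency-list mapping
mapper(Node, nodes_table, properties={
    'children': relation(Node,
        backref=backref('parent', remote_side=[nodes_table.c.id])),
})

session = create_session()

# eager-load 'children' from an explicit, hand-written outer join
children = nodes_table.alias('children')
query = session.query(Node).select_from(
    nodes_table.outerjoin(children,
                          children.c.parent_id == nodes_table.c.id)
).options(contains_eager('children', alias=children))
roots = query.filter(nodes_table.c.parent_id == None).all()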

Stephen Emslie




[sqlalchemy] Re: Many tables in eagerloading

2007-09-04 Thread Michael Bayer


On Sep 4, 2007, at 10:15 AM, Arun Kumar PG wrote:

> i thought so earlier but unfortunately i am on a lower version of
> mysql :(

upgrade.





[sqlalchemy] max() arg is an empty sequence while trying to reflect bugzilla table (detailed diagnostics)

2007-09-04 Thread Marcin Kasperski

Test made on 0.4.0beta5. I tried to reflect a table from a well-known
application (the Bugzilla bugs table).


>>> from sqlalchemy import *
>>> db = create_engine('mysql://%s:[EMAIL PROTECTED]/bugs' % (username, password))
>>> metadata = MetaData(bind=db)
>>> bugs = Table('bugs', metadata, autoload=True)
/usr/lib/python2.4/site-packages/SQLAlchemy-0.4.0beta5-py2.4.egg/sqlalchemy/databases/mysql.py:1971: RuntimeWarning: Incomplete reflection of column definition `resolution` enum('','FIXED','INVALID','WONTFIX','LATER','REMIND','DUPLICATE','WORKSFORME','MOVED') NOT NULL default '',
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "build/bdist.linux-i686/egg/sqlalchemy/schema.py", line 114, in __call__
  File "build/bdist.linux-i686/egg/sqlalchemy/engine/base.py", line 1172, in reflecttable
  File "build/bdist.linux-i686/egg/sqlalchemy/databases/mysql.py", line 1588, in reflecttable
  File "build/bdist.linux-i686/egg/sqlalchemy/databases/mysql.py", line 1926, in reflect
  File "build/bdist.linux-i686/egg/sqlalchemy/databases/mysql.py", line 2012, in _add_column
  File "build/bdist.linux-i686/egg/sqlalchemy/databases/mysql.py", line 1109, in __init__
ValueError: max() arg is an empty sequence


In case somebody does not have bugzilla handy ;-):

CREATE TABLE `bugs` (
  `bug_id` mediumint(9) NOT NULL auto_increment,
  `assigned_to` mediumint(9) NOT NULL default '0',
  `bug_severity` enum('blocker A','critical','major B','normal','minor 
C','trivial','enhancement high','enhancement medium','enhancement low') default 
NULL,
  `bug_status` 
enum('UNCONFIRMED','NEW','ASSIGNED','REOPENED','RESOLVED','VERIFIED','CLOSED','ANNOUNCEMENT')
 NOT NULL default 'UNCONFIRMED',
  `creation_ts` datetime NOT NULL default '0000-00-00 00:00:00',
  `delta_ts` timestamp NOT NULL default CURRENT_TIMESTAMP on update 
CURRENT_TIMESTAMP,
  `short_desc` mediumtext NOT NULL,
  `op_sys` enum('All','WinNT','Win2000','Windows 2003','DUNIX 4.0','Tru64 
5.1','OpenVMS 7.1','OpenVMS 7.2','OpenVMS 7.3','RedHat Ent. 2.1','RedHat Ent. 
3','RedHat Ent. 4','AIX 5.1','Solaris 2.6','Solaris 10') default NULL,
  `priority` enum('Very High','High','Normal','Low','Very Low') NOT NULL 
default 'Very High',
  `rep_platform` enum('All','Intel','Alpha') default NULL,
  `reporter` mediumint(9) NOT NULL default '0',
  `version` varchar(64) NOT NULL default '',
  `resolution` 
enum('','FIXED','INVALID','WONTFIX','LATER','REMIND','DUPLICATE','WORKSFORME','MOVED')
 NOT NULL default '',
  `target_milestone` varchar(20) NOT NULL default '---',
  `qa_contact` mediumint(9) NOT NULL default '0',
  `status_whiteboard` mediumtext NOT NULL,
  `votes` mediumint(9) NOT NULL default '0',
  `keywords` mediumtext NOT NULL,
  `lastdiffed` datetime NOT NULL default '0000-00-00 00:00:00',
  `everconfirmed` tinyint(4) NOT NULL default '0',
  `reporter_accessible` tinyint(4) NOT NULL default '1',
  `cclist_accessible` tinyint(4) NOT NULL default '1',
  `bug_file_loc` text,
  `estimated_time` decimal(5,2) NOT NULL default '0.00',
  `remaining_time` decimal(5,2) NOT NULL default '0.00',
  `alias` varchar(20) default NULL,
  `product_id` smallint(6) NOT NULL default '0',
  `component_id` smallint(6) NOT NULL default '0',
  PRIMARY KEY  (`bug_id`),
  UNIQUE KEY `alias` (`alias`),
  KEY `assigned_to` (`assigned_to`),
  KEY `creation_ts` (`creation_ts`),
  KEY `delta_ts` (`delta_ts`),
  KEY `bug_severity` (`bug_severity`),
  KEY `bug_status` (`bug_status`),
  KEY `op_sys` (`op_sys`),
  KEY `priority` (`priority`),
  KEY `reporter` (`reporter`),
  KEY `version` (`version`),
  KEY `resolution` (`resolution`),
  KEY `target_milestone` (`target_milestone`),
  KEY `qa_contact` (`qa_contact`),
  KEY `votes` (`votes`),
  KEY `product_id` (`product_id`),
  KEY `component_id` (`component_id`),
  FULLTEXT KEY `short_desc` (`short_desc`)
) ENGINE=MyISAM AUTO_INCREMENT=24917 DEFAULT CHARSET=latin1;

A look in the debugger shows that mysql.py's MSEnum.__init__ fails as
described when the enums parameter is empty (which is what happened
here). That looks like one bug in itself; I suspect one can legitimately
define an enum without any values. Something like

    if strip_enums:
        length = max([len(v) for v in strip_enums])
    else:
        length = 0

should probably do.

Another question is why this parameter is empty in the first place. The
error happens while the 'resolution' column is handled, and its set of
possible values is not empty. Even without digging into the code, it
seems likely that the empty string allowed among the values is what
confuses the parser (other enums are reflected correctly).

Indeed, a deeper look shows that parse_column does not return an arg
value for this column. Looking further, one can see that _re_column
fails to match the line

  `resolution` enum('','FIXED','INVALID','WONTFIX','LATER','REMIND','DUPLICATE','WORKSFORME','MOVED') NOT NULL default '',

and only _re_column_loose matches.

To summarize, there are two bugs in the mysql reflection code:

a) the MSEnum constructor fails if given an empty list of enum values,
   due to the unchecked use of max()

b) _re_column fails to match enum column definitions that include an
   empty string value (only _re_column_loose matches), so the enum
   values never reach the column type

[sqlalchemy] Re: creating a database through SQLAlchemy

2007-09-04 Thread Travis Kriplean

Thanks everyone!

On 3 Sep, 01:13, Marco Mariani [EMAIL PROTECTED] wrote:
> Travis Kriplean wrote:
>
>> However, this seems a bit ugly. Is there a way to obtain a non-
>> transactional connection from an engine in 0.3.10? If not, is it
>> possible in 0.4?
>
> I use this with SA 0.3.10 and Postgres:
>
> engine.raw_connection().set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
>
> engine.text("CREATE DATABASE %s ENCODING = 'utf8'" % dbname).execute()
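
For the archives, Marco's snippet spelled out with its imports would
look roughly like this (the connection URL and database name are
placeholders):

import psycopg2.extensions
from sqlalchemy import create_engine

engine = create_engine('postgres://user:passwd@localhost/template1')
dbname = 'newdb'

# CREATE DATABASE cannot run inside a transaction, so put the underlying
# DBAPI connection into autocommit mode first
engine.raw_connection().set_isolation_level(
    psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

engine.text("CREATE DATABASE %s ENCODING = 'utf8'" % dbname).execute()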





[sqlalchemy] Check it out:download free,stock information,knowledge base,hot videos,hot games and hot tickets...

2007-09-04 Thread iamhere

Check it out:download free,stock information,knowledge base,hot
videos,hot games and hot tickets...
http://groups.google.com/group/all-good-things/web/very-useful-websites





[sqlalchemy] Re: Many tables in eagerloading

2007-09-04 Thread Arun Kumar PG
Bit tough, as many apps run on that server; it won't be easy as of now.

On 9/4/07, Michael Bayer [EMAIL PROTECTED] wrote:
>
> On Sep 4, 2007, at 10:15 AM, Arun Kumar PG wrote:
>
>> i thought so earlier but unfortunately i am on a lower version of
>> mysql :(
>
> upgrade.



-- 
Cheers,

- A
