On Thursday, April 12, 2012 01:32:34 PM John Fabiani wrote:
> On Thursday, April 12, 2012 06:46:41 PM Cheryl wrote:
> > I am a brand new user to dabo.
> > First, I used the runtime installer yesterday, and then removed it and
> > did the svn install on Windows XP.
> > I have installed the latest svn version of dabo on Windows XP, along
> > with Python 2.6, PIL 1.1.7, psycopg2 2.4.5, reportlab 2.5, and
> > wx-2.8-msw-unicode.
> > I have created a new project using dabo.quickstart.
> > I have started to create a new form with the ClassDesigner and have
> > created a new connection to a very large Postgres database that has 376
> > schemas and a number of tables within each schema.
> > When I test the connection, it works fine.
> > However, when I select the saved connection and click Next, Windows
> > gets stuck and times out trying to get all of the table names from the
> > Postgres database (I'm guessing?).
> > Yesterday, when I was using the runtime version, it took over 15 minutes
> > for the tables to display for choosing.
> > Today, it is taking even longer.
> > Any suggestions for speeding this up? (I am stuck with having my
> > Postgres schema in this large database.)
> 
> I sort of doubt that a speedup is possible.  I believe the app is gathering
> all the available tables and views from every schema.  For example, dealing
> with only 10 tables per schema means gathering information from 3760
> tables.  Each of those tables is processed for field information, including
> name, data type, and whether the field is a primary key.  And all of
> that data is stored in RAM.

You do realize that you are in a very unusual situation with 376 schemas!
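
Just to put a number on the scale involved, here is a quick, untested sketch (using psycopg2, which you already have installed; the connection parameters are placeholders) that counts how many tables and views the wizard has to walk:

import psycopg2

# Placeholder connection parameters -- substitute your own.
conn = psycopg2.connect(host="localhost", dbname="yourdb",
                        user="you", password="secret")
cur = conn.cursor()
cur.execute("""
    SELECT count(*)
      FROM information_schema.tables
     WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
""")
print "tables/views the wizard will process:", cur.fetchone()[0]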

I decided I'd check the ClassDesigner code to see if I was right, and as I 
suggested, the issue is processing all of the table information.  Just getting 
the table names is relatively quick (Postgres is always quick).  But after 
getting the table/view names per schema, the code processes each of the 
tables.  From QuickLayoutWizard->makeConnection:
tbls = crs.getTables()
for tb in tbls:
    fldDict = {}
    # one getFields() round-trip to the database per table
    flds = crs.getFields(tb)
    for fld in flds:
        fldname = fld[0]
        fldInfo = fldDict[fldname] = {}
        fldInfo["type"] = fld[1]
        fldInfo["pk"] = fld[2]
    # every table's field dict is kept in memory for the wizard
    self._dataEnv[tb] = fldDict
return True
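
Note that the loop issues one getFields() round-trip per table, so with thousands of tables the latency alone adds up.  If someone wanted to experiment, the same information can be pulled in a single query against information_schema.  This is an untested sketch, not dabo code -- the pk lookup is my assumption about what getFields() returns:

import psycopg2

conn = psycopg2.connect(dbname="yourdb", user="you")   # placeholders
cur = conn.cursor()
cur.execute("""
    SELECT c.table_schema, c.table_name, c.column_name, c.data_type,
           EXISTS (SELECT 1
                     FROM information_schema.table_constraints tc
                     JOIN information_schema.key_column_usage kcu
                       ON tc.constraint_name = kcu.constraint_name
                      AND tc.table_schema = kcu.table_schema
                    WHERE tc.constraint_type = 'PRIMARY KEY'
                      AND tc.table_schema = c.table_schema
                      AND tc.table_name = c.table_name
                      AND kcu.column_name = c.column_name) AS is_pk
      FROM information_schema.columns c
     WHERE c.table_schema NOT IN ('pg_catalog', 'information_schema')
""")

dataEnv = {}
for schema, table, col, dtype, is_pk in cur.fetchall():
    # Mirror the fldDict structure the wizard builds, one dict per table.
    dataEnv.setdefault("%s.%s" % (schema, table), {})[col] = {
        "type": dtype, "pk": is_pk}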

This suggests that all of the table and field information is held in RAM.  
Retrieving it takes time, and the overhead to process it is high.  
And then, of course: how much RAM do you have?
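
For a rough sense of the footprint, sys.getsizeof (available since Python 2.6) gives the shallow size of one of those per-field dicts; multiply by the field count for a crude lower bound:

import sys

# One field's entry as the wizard builds it.  Sizes are shallow and
# platform-dependent, so treat this as a lower bound only.
fldInfo = {"type": "I", "pk": False}
print sys.getsizeof(fldInfo), "bytes per field entry (shallow)"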

So if you have 376 schemas, each with 10 tables, each with 5 fields, each 
field carrying 4 items of data, you get 376 x 10 x 5 x 4 = 75,200 items to 
process.  Still, 15 minutes seems a little long.  Maybe it would improve if 
you increased your RAM, although in my experience Windows has run fine even 
with a minimal amount of RAM.
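
One possible workaround, assuming dabo's getTables() respects the connection's search_path (I have not verified this), would be to narrow the path to the one schema you need before opening the wizard:

# Hypothetical: crs is the dabo cursor from makeConnection; my_schema
# is a placeholder for the schema you actually want to browse.
crs.execute("SET search_path TO my_schema")
tbls = crs.getTables()   # ideally now limited to one schema's tables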

Johnf

