On Sep 27, 2009, at 4:11 AM, nkhalasi wrote:
> My comments below.
>
> On Sep 25, 9:05 pm, Conor <[email protected]> wrote:
>> On Sep 25, 2:11 am, nkhalasi <[email protected]> wrote:
>>
>>> def get_next_id():
>>>     session = meta.Session()
>>>     query = session.query(PKGenerator).filter_by(table_name='principals')
>>>     nxpkgen = query.first()
>>>     if nxpkgen:
>>>         nxid = nxpkgen.next_id
>>>         if nxid > nxpkgen.end_id:
>>>             raise Exception('Primary Key range has been exhausted')
>>>         nxpkgen.next_id += 1
>>>         session.commit()
>>>         return nxid
>>>     else:
>>>         raise Exception('Next Primary Key could not be found')
>>
>> Leave off the () after get_next_id. You want to pass the function
>> itself as the default, not the result of calling it once.
>
> In the above function:
>
> 1) If I do a commit, it commits the main transaction, and subsequent
> commit()s fail complaining that the transaction is already closed.
>
> 2) If I do not commit, the next_id value is not updated, so the next
> run reuses IDs that were already used in the last run.
>
> 3) I tried session.begin_nested() as well, but that also did not
> commit my next_id and only released a savepoint. I guess this should
> be attributed to MySQL's behavior.

In theory, you want to say session.flush() so that nxpkgen.next_id is
persisted but the transaction is not committed. However, the approach
you have above won't work in any case: assuming the INSERT itself takes
place within Session.flush(), the reentrant call to flush should raise
an error immediately. Maybe you're on an older SQLAlchemy version that
doesn't raise this assertion, which is why it chokes later on the
double commits (which is, again, an assertion that something isn't
being called in the proper sequence).

You don't want to be using the ORM layer within a Column default
generator. The inefficiency alone of retrieving an entire object and
doing a whole flush for every row in an enclosing series of inserts
makes it a bad idea. Do this instead:

def get_next_id(context):
    r = context.connection.execute(
        select([pk_generator_table]).where(
            pk_generator_table.c.table_name == 'principals'))
    row = r.fetchone()
    if row:
        id = row['next_id']
        if id > row['end_id']:
            raise Exception(...)
        context.connection.execute(
            pk_generator_table.update().where(
                pk_generator_table.c.table_name == 'principals'
            ).values(next_id=pk_generator_table.c.next_id + 1))
        return id
    else:
        raise Exception(...)

The above will get you out the door, but it still does a lot of work if
you have a large volume of INSERTs. Ideally you'd use a MySQL stored
procedure or trigger to generate the new values; SQLAlchemy would just
fetch the new values after the fact.

> I am wondering how I would commit my ID range/next_id independently of
> my regular application object commits? This is important for me
> because I ultimately want to implement a variation of an ID generator
> like Sybase's, where autoincrement happens in memory for the duration
> of the key cache size and the in-memory counters are then reset to the
> next key cache block.
>
> Here is my table:
>
> pk_generator_table = Table('pk_generator', meta.metadata,
>     Column('table_name', types.CHAR(30), primary_key=True,
>            nullable=False),
>     Column('next_id', bigint, nullable=False),
>     Column('last_id', bigint, nullable=False),
>     Column('cache_size', types.Integer, nullable=False),
>     mysql_engine='InnoDB',
>     mysql_row_format='DYNAMIC',
>     mysql_charset='utf8'
> )
>
> Regards,
> Naresh
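To make the default wiring concrete, here is a minimal sketch of how
the get_next_id(context) version above gets attached to the target
table. The 'principals' schema is hypothetical (only its name appears
in the thread), and per Conor's point the function itself is passed,
with no parentheses:

from sqlalchemy import Table, Column, MetaData, types

metadata = MetaData()

# hypothetical target table: because get_next_id (the context version
# defined above) accepts a single argument, SQLAlchemy treats it as a
# context-sensitive default and passes an ExecutionContext at INSERT time
principals_table = Table('principals', metadata,
    Column('id', types.Integer,       # or your bigint helper
           primary_key=True,
           default=get_next_id),      # no (): pass the function itself
    Column('name', types.CHAR(30), nullable=False),
    mysql_engine='InnoDB',
)

With this in place, both a plain principals_table.insert() and an ORM
flush will invoke get_next_id once per row, on the same connection and
transaction as the enclosing INSERT.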
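To flesh out the trigger suggestion: a rough, untested sketch of a
MySQL trigger that moves the per-row work server-side might look like
the following (names and the engine URL are illustrative; MySQL allows
a trigger on principals to update a different table such as
pk_generator). One caveat: since the id is then not an AUTO_INCREMENT
column, the application has to read the generated value back itself
rather than relying on lastrowid.

from sqlalchemy import create_engine, text

engine = create_engine('mysql://user:pass@localhost/appdb')  # hypothetical

# one-time setup DDL, using the 0.x-era engine.execute(); the UPDATE
# runs first so InnoDB's row lock serializes concurrent inserts, then
# the SELECT reads back the value this transaction just reserved
engine.execute(text("""
CREATE TRIGGER principals_pk BEFORE INSERT ON principals
FOR EACH ROW
BEGIN
    UPDATE pk_generator SET next_id = next_id + 1
     WHERE table_name = 'principals';
    SET NEW.id = (SELECT next_id - 1 FROM pk_generator
                  WHERE table_name = 'principals');
END
"""))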

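As for committing the reserved range independently of the application's
object commits: one way to sketch the Sybase-style key cache described
above is to reserve a whole cache_size block with a single UPDATE
issued on a separate connection, then hand out ids from memory. This is
illustrative and untested; it uses the pk_generator_table quoted above
and assumes the 0.x-era API from the thread, where DML executed on a
fresh connection outside an explicit transaction is autocommitted.

from sqlalchemy import select

class CachedIdGenerator(object):
    """Hands out ids from an in-memory block, reserving cache_size
    ids at a time from pk_generator_table."""

    def __init__(self, table_name):
        self.table_name = table_name
        self.next_id = 0
        self.block_end = -1   # exhausted, so the first call reserves

    def __call__(self, context):
        # usable as a context-sensitive column default, e.g.
        # Column('id', bigint, primary_key=True,
        #        default=CachedIdGenerator('principals'))
        # (a real implementation would guard this with a lock
        # under multithreading)
        if self.next_id > self.block_end:
            self._reserve_block(context.connection.engine)
        id_ = self.next_id
        self.next_id += 1
        return id_

    def _reserve_block(self, engine):
        # a separate connection, so the counter UPDATE commits
        # independently of the caller's transaction
        conn = engine.connect()
        try:
            row = conn.execute(
                select([pk_generator_table]).where(
                    pk_generator_table.c.table_name == self.table_name)
            ).fetchone()
            if row is None:
                raise Exception('Next Primary Key could not be found')
            start, size = row['next_id'], row['cache_size']
            if start + size - 1 > row['last_id']:
                raise Exception('Primary Key range has been exhausted')
            # advance the persistent counter past the whole block at once
            conn.execute(
                pk_generator_table.update().where(
                    pk_generator_table.c.table_name == self.table_name
                ).values(next_id=start + size))
        finally:
            conn.close()
        self.next_id = start
        self.block_end = start + size - 1

Under concurrent writers, the read-then-update would need
SELECT ... FOR UPDATE (or a single atomic UPDATE) to keep two
processes from reserving the same block, and any unused remainder of
a block is lost on restart, which is the usual trade-off of this
scheme.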