[sqlalchemy] Re: Python logging with sqlalchemy

2010-07-18 Thread dusans
This is my working version. It's a modification of the DBHandler found
here: 
http://stackoverflow.com/questions/935930/creating-a-logging-handler-to-connect-to-oracle/

class SqlAlchemyHandler(logging.Handler):
    def __init__(self, session):
        logging.Handler.__init__(self)
        self.session = session

    def emit(self, record):
        try:
            # use default formatting
            self.format(record)
            self.session.add(ProcessLog(record.levelname,
                                        record.message))
        except Exception, inst:
            print inst

    def close(self):
        self.session.commit()
        logging.Handler.close(self)
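For anyone wiring this up: the sketch below shows how the handler attaches to a logger. A hypothetical FakeSession stands in for the real SQLAlchemy Session, and a plain tuple stands in for ProcessLog, so the example runs without a database.

```python
import logging

class FakeSession:
    """Hypothetical stand-in for a SQLAlchemy Session: just collects objects."""
    def __init__(self):
        self.pending = []
    def add(self, obj):
        self.pending.append(obj)
    def commit(self):
        pass

class SqlAlchemyHandler(logging.Handler):
    def __init__(self, session):
        logging.Handler.__init__(self)
        self.session = session
    def emit(self, record):
        # format() populates record.message from record.getMessage()
        self.format(record)
        self.session.add((record.levelname, record.message))
    def close(self):
        self.session.commit()
        logging.Handler.close(self)

session = FakeSession()
log = logging.getLogger("demo")
log.propagate = False          # keep the record away from the root handlers
log.addHandler(SqlAlchemyHandler(session))
log.warning("disk %s is full", "sda1")
print(session.pending)         # [('WARNING', 'disk sda1 is full')]
```

With the real thing, session.add(ProcessLog(...)) only queues a row per record; nothing reaches the database until the session is committed, which is why the original close() commits.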


On 17 jul., 23:07, dusans dusan.smit...@gmail.com wrote:
 Does anyone have a custom handler for the logging module that would
 use a sqlalchemy connection to save the log into the db.

 Tnx :)

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] lockmode

2010-07-18 Thread Michael Bayer

What does your SQL output say? Do you see FOR UPDATE in the log?
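For anyone following along: one way to see the generated SQL (and check for FOR UPDATE yourself) is to turn on the engine logger. The logger name below is the standard one; passing echo=True to create_engine is equivalent.

```python
import logging

# Echo SQLAlchemy's generated SQL to the application log, so clauses such as
# FOR UPDATE (or their absence) are visible without an external profiler.
logging.basicConfig()
logging.getLogger("sqlalchemy.engine").setLevel(logging.INFO)

engine_log_level = logging.getLogger("sqlalchemy.engine").level
print(engine_log_level)  # 20, i.e. logging.INFO
```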

Sent from my iPhone

On Jul 17, 2010, at 8:47 PM, Michael Mileusnich  
justmike2...@gmail.com wrote:



Hello,

I am running Python 2.6 SQL Alchemy 0.5.8 against MS SQL Server  
Express 2008 with pyODBC.  When I issue something like:


sess.query(job).with_lockmode('update')

It does not seem to be locking according to the query I am getting  
back from my profiler.  Is this the correct usage?


Thanks
Mike



[sqlalchemy] Re: delete attributes of last flush or commit in a MapperExtension already executed

2010-07-18 Thread Michael Bayer


On Jul 14, 4:56 am, sacabuche sacabu...@gmail.com wrote:
 In fact I was only querying, but when I add many items in the same
 flush, all items (of the same category) get the same value, and I don't
 know how to include the already-calculated sku number in the query.
 That's the reason I tried this approach of saving last_sku_value,
 but I saw that something was really wrong.

 I tried to execute the query in __init__() of my class, but I had to
 pass the session to my object or take the session from a global, so I
 thought it was a bad idea.

 And I could create my object, add it to my session, and afterwards call
 a class method new_sku(), but I dislike the idea of rewriting this for
 each item.

 This is my get_last_value() code; maybe it'll be clearer:

     def get_last_value(self, session, cls, instance):
         """Execute a query to find the last value."""
         try:
             return self.last_value[instance.dynamic.value]
         except KeyError:
             self.last_value_key = "%s-%s" % (instance.dynamic.static,
                                              instance.dynamic.value)

         last_sku_value = session.query(cls.value)\
                             .filter(cls.dynamic == instance.dynamic)\
                             .order_by(desc(cls.date_insert)).first()

         if not last_sku_value:
             return None
         try:
             return last_sku_value[0]
         except IndexError:
             return None

your options for doing a query in order to get some database-related
information when you make a new object are:

1. use a scoped_session, so that a global transaction is available,
and do it in __init__.  Usually any app I write of any importance will
be using this technique, and issues like these are not a problem.
2. after session attach, using SessionExtension.after_attach, using
the given Session
3. before an insert, using MapperExtension.before_insert, using the
Connection passed to that method
4. before flush, using SessionExtension.before_flush
5. as a default value on the column: via the default callable of
Column you can pass either a SQL expression that will execute (see
http://www.sqlalchemy.org/docs/metadata.html#sql-expressions ), or a
callable that accepts a context argument (this is a little more
exotic, let me know if you want to try that; a plain SQL expression
is probably all you need here though)
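To make option 5 concrete: the trick is to let the INSERT itself compute the next value with a scalar subquery, so nothing is cached across flushes. Sketched here with stdlib sqlite3 (the table and column names are invented); in SQLAlchemy the same SELECT expression could be passed as the Column's default.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sku (id INTEGER PRIMARY KEY, category TEXT, value INTEGER)")

def insert_with_next_value(conn, category):
    # The INSERT computes the next per-category value itself, so concurrent
    # flushes never reuse a stale value cached in application memory.
    conn.execute(
        "INSERT INTO sku (category, value) VALUES (?, "
        "(SELECT COALESCE(MAX(value), 0) + 1 FROM sku WHERE category = ?))",
        (category, category),
    )

insert_with_next_value(conn, "shoes")
insert_with_next_value(conn, "shoes")
insert_with_next_value(conn, "hats")
rows = conn.execute("SELECT category, value FROM sku ORDER BY id").fetchall()
print(rows)  # [('shoes', 1), ('shoes', 2), ('hats', 1)]
```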




 On 13 jul, 16:35, Michael Bayer mike...@zzzcomputing.com wrote:

  On Jul 13, 2010, at 10:21 AM, sacabuche wrote:

   I was trying to get the last value in before_insert of a new item but I
   didn't find out how, so I changed strategy and saved the last value from
   this method; the problem is that when I do other commits my old
   MapperExtension attribute (self.last_value) keeps the last value, and I
   dislike that, even when I close and open a new Session (discovered in my
   tests).

   This is my code:

   class _ValueExtension(MapperExtension):
       """Set new value for SkuDynamic and SkuUnique."""
       def __init__(self):
           self.last_value = {}  # --- THE PROBLEM
           self.first_value = FIRST_VALUE

       def before_insert(self, mapper, connection, instance):
           session = object_session(instance)
           cls = instance.__class__

           last_sku_value = self.get_last_value(session, cls, instance)

           if not last_sku_value:
               instance.value = self.first_value
           else:
               instance.value = next_value(last_sku_value)

           self.last_value[self.last_value_key] = instance.value
           self.set_values(instance)

       #def after_insert(self, mapper, connection, instance):
       #    self.__init__()  # --- IF I DO THIS I DON'T HAVE THE LAST
       #                     #     VALUE IN THE SAME FLUSH

       #def after_update(self, mapper, connection, instance):
       #    self.__init__()

       def get_last_value(self, session, cls, instance):
           """This has to return just one value.
           Setting self.last_value_key is needed to update elements in the
           same session.  Execute the query or check self.last_value.
           """
           raise NotImplementedError('implement this function')

       def set_values(self, instance):
           """Use if necessary."""
           pass

  A MapperExtension is persistent for the life of the application and is also 
  accessed concurrently, so it is not an appropriate place to store a count 
  for things that are local to a specific flush, and it's really not a 
  good place in general to do what you're trying to do (though it's not clear 
  above what you're trying to accomplish, i.e. whether you want a global last 
  sku or a sku specific to one flush).

  You'd be better off querying the database for the last sku value when you 
  need it, or just adding code to your class's __init__() method to keep 
  track of sku numbers in memory as needed.

   Thanks


[sqlalchemy] Re: checking for C extensions at runtime

2010-07-18 Thread Michael Bayer


On Jul 16, 11:56 am, David Gardner dgard...@creatureshop.com wrote:
 I'm actually not sure; I did a bit of googling and couldn't really find
 much.  PEP 386 talks about version comparison in distutils:
 http://www.python.org/dev/peps/pep-0386/#id10

 As for scratching my itch it wouldn't have to be in the __version__
 string, I was just wondering how
 I would check if the SQLAlchemy on a system had the C extensions enabled
 on it or not.

at the moment, I'd probably try importing cprocessors from
sqlalchemy, or just look at
sqlalchemy.processors.str_to_date.__file__.   We haven't put any
official method for this in yet and also we'd like to have the C
extensions turned on by default at some point.
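A small helper along those lines (the cprocessors module name is the one mentioned above, from the 0.6 series; this is a best-effort probe, not an official API):

```python
def has_c_extensions():
    """Report whether SQLAlchemy's optional C extensions were built.

    Best-effort probe as suggested above: the compiled module simply
    won't import if the extensions (or SQLAlchemy itself) are absent.
    """
    try:
        from sqlalchemy import cprocessors  # noqa: F401
        return True
    except ImportError:
        return False

print(has_c_extensions())
```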


 On 07/15/2010 01:40 PM, Michael Bayer wrote:

  I was wondering if there was a way to check at runtime if the C extensions 
  were built with SQLAlchemy?

  If not I was wondering if you would consider adding something to the 
  __version__ string?

  is there some guideline for the format on that?

 --
 David Gardner
 Pipeline Tools Programmer
 Jim Henson Creature Shop
 dgard...@creatureshop.com




Re: [sqlalchemy] problem with multi-table mapping - how to configure merge/cascade behaviour on mappers based on join/select?

2010-07-18 Thread Michael Bayer

On Jul 15, 2010, at 5:11 PM, Harry Percival wrote:

 thanks Michael. I really appreciate your help.
 
 How should I use .merge()?  I've tried both:
 
 another_new = MoviesAndDirectorsAndGenres() #init another_new as blank row
 another_new = session.merge(new) #attempt to merge with my 'new'
 object that has desired attributes
 
 and
 
 new = session.merge(another_new) #attempt to merge blank object to my
 new object with desired attributes
 
 the former fails trying to add a genre row that already exists, the
 latter fails trying to add an object with all null values.
 
 the thing is, I'm really not trying to merge anything.  I'm trying to
 create a new object, which happens to be made up partially of new
 entries in some tables, and existing entries in other tables.
 genres_genre_id refers to the primary key of the genres table, and a
 genre with genre_id=6 already exists.
 
 
 Am I going to have to map an ORM class to each of the composite
 tables, and use references to them via relationship(), instead of
 hoping SQLA can figure it out by itself from the tables that make up
 the join construct?
 
 I can supply a sqlite database and some source code if it helps.

It's typically appropriate to map tables individually and connect them via 
relationship(), if you want to be writing rows to them independently.   That's 
what the phrase "partially of new entries in some tables, and existing entries 
in other tables" implies.



 
 rgds,
 Harry
 
 On Thu, Jul 15, 2010 at 9:48 PM, Michael Bayer mike...@zzzcomputing.com 
 wrote:
 
 On Jul 15, 2010, at 2:52 PM, Harry Percival wrote:
 
 A new problem, which seems to occur in both IronPython and normal Python:
 
 I have a database with tables for movies, directors, genres (and a
 bridging table movie_directors)
 I have a class mapped to a join of all three of the above
 
 
 j = join(movies_table,md_table).join(directors_table).join(genres_table)
 mapper(MoviesAndDirectorsAndGenres,j.select(use_labels=True).alias('moviesdirectorsgenres'))
 
 this works fine. i can query the database ok using this mapper.
 
 The tables already have some values in.
 Now, I want to create a new composite object, that references some of
 the existing values, but not all - say it's a new movie by an existing
 director in an existing genre:
 
 
 new = MoviesAndDirectorsAndGenres()
 new.movies_movie_id=8 #does not already exist in the table
 new.directors_director_id=2 #already exists in the table
 new.genres_genre_id=6 #already exists too
 session.add(new)
 
 This fails with a constraint error / integrity error, because SQLA
 tries to re-insert genre #6 even though it already exists. It also
 happens if I use session.merge(new), and it also happens if I fully
 specify all of the attributes of each movie/director/genre correctly.
 
 Is this expected behaviour?  Is there any way to get SQLA to
 intelligently only do inserts when necessary?  Is it something to do
 with cascade configuration, and if so, how do I configure cascades on
 a mapper that's not based on relationship() but based on join()?
 
 you should be using merge(), and you should ensure that the objects being 
 merged have the correct primary key values.   If genre #6 refers to some 
 other column that isn't the primary key of that row, then you'd have to 
 ensure you put the correct primary key value on your object first before 
 merging it.
 
 you can always check what decision merge() made by asking "obj in 
 session.new" versus "obj in session.dirty".
 
 
 
 
 
 -- 
 --
 Harry J.W. Percival
 --
 Italy Mobile: +39 389 095 8959
 UK Mobile:  +44 (0) 78877 02511
 Skype: harry dot percival
 
 




[sqlalchemy] sqlite+zxjdbc

2010-07-18 Thread Celil
JDBC already has a SQLite backend; see these examples:

http://wiki.python.org/jython/DatabaseExamples

How difficult would it be to add support for that in SQLAlchemy under Jython, so
that one can do something like this:

from sqlalchemy import create_engine
engine = create_engine('sqlite+zxjdbc:///database.db')

Thanks,
Celil




Re: [sqlalchemy] sqlite+zxjdbc

2010-07-18 Thread Michael Bayer

On Jul 18, 2010, at 9:45 PM, Celil wrote:

 JDBC already has a SQLite backend; see these examples:
 
 http://wiki.python.org/jython/DatabaseExamples
 
 How difficult would it be to add support for that in SQLAlchemy under Jython, so 
 that one can do something like this:
 
 from sqlalchemy import create_engine
 engine = create_engine('sqlite+zxjdbc:///database.db')


it is likely very straightforward, depending on how well the DBAPI interface is 
supported. We hit a dead stop trying to get MSSQL's JDBC driver to work, 
since there's apparently no way we can get at the last inserted ID.   SQLite 
would have to give us some way of doing this, since I don't think JDBC has a 
.lastrowid equivalent.

The basic idea is to drop a zxjdbc.py file into sqlalchemy/dialects/sqlite/, 
fill in the appropriate classes and methods, and it's good to go.   You can see 
lib/sqlalchemy/dialects/postgresql/zxjdbc.py for a minimal example and 
lib/sqlalchemy/dialects/mysql/zxjdbc.py for something more elaborate.
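As a rough illustration of that file layout, a non-runnable skeleton (it needs Jython plus the zxJDBC connector, and every name below is an assumption patterned on the postgresql example rather than tested code; the driver class name is the one used by the common sqlite-jdbc project):

```python
# hypothetical sqlalchemy/dialects/sqlite/zxjdbc.py -- untested sketch
from sqlalchemy.connectors.zxJDBC import ZxJDBCConnector
from sqlalchemy.dialects.sqlite.base import SQLiteDialect

class SQLiteDialect_zxjdbc(ZxJDBCConnector, SQLiteDialect):
    jdbc_db_name = 'sqlite'
    jdbc_driver_name = 'org.sqlite.JDBC'  # assumed driver class

dialect = SQLiteDialect_zxjdbc
```

The open question from above remains: how to recover the last inserted ID through JDBC, since there is no .lastrowid equivalent.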





Re: [sqlalchemy] lockmode

2010-07-18 Thread Michael Mileusnich
It's just a basic SELECT statement; I do not see any other parameters.

Thanks
Michael Mileusnich

On Sun, Jul 18, 2010 at 5:05 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 What does your SQL output say? Do you see FOR UPDATE in the log?

 Sent from my iPhone


 On Jul 17, 2010, at 8:47 PM, Michael Mileusnich justmike2...@gmail.com
 wrote:

  Hello,

 I am running Python 2.6 SQL Alchemy 0.5.8 against MS SQL Server Express
 2008 with pyODBC.  When I issue something like:

 sess.query(job).with_lockmode('update')

 It does not seem to be locking according to the query I am getting back
 from my profiler.  Is this the correct usage?

 Thanks
 Mike


