Sounds good. I was able to write the following SQL statement, but I could not
figure out how to refer, in where(), to the 'value' column that
jsonb_array_elements_text creates.
sql_sentense = select([database.tables['jtable'],
That style of PG function use isn't directly supported yet; there is
a recipe to achieve this at
https://bitbucket.org/zzzeek/sqlalchemy/issues/3566/figure-out-how-to-support-all-of-pgs#comment-22842678
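For reference, current SQLAlchemy (1.4+) now supports this directly via `table_valued()`, which names the function's output column so it can be used in where() like any other column. A minimal sketch, assuming a hypothetical `jtable` layout with a JSONB column `vals` standing in for `database.tables['jtable']`:

```python
from sqlalchemy import Column, Integer, MetaData, Table, func, select
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import JSONB

metadata = MetaData()
# Hypothetical stand-in for database.tables['jtable'] from the post
jtable = Table(
    "jtable",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("vals", JSONB),  # assumed jsonb array column
)

# table_valued() (SQLAlchemy 1.4+) exposes the function's output
# column, so it is addressable as elems.c.value in where()
elems = func.jsonb_array_elements_text(jtable.c.vals).table_valued("value")

stmt = select(jtable).where(elems.c.value == "hello")

sql = str(stmt.compile(dialect=postgresql.dialect()))
print(sql)
```

Both `jtable` and the table-valued function end up in the FROM clause, giving the implicit-lateral form PostgreSQL expects for set-returning functions.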
On 01/21/2016 09:10 AM, Sami Pietilä wrote:
> Sounds good. I was able to write following
On 01/21/2016 06:23 PM, Nana Okyere wrote:
In my flask application, a view function has this piece of code:
sql_text = text(" \
SELECT CONNECT_BY_ROOT \
part_no as ROOT_PART_NO, \
bc.part_no, \
bc.cmpnt_part_no, \
bc.cmpnt_qty, \
LEVEL AS
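A triple-quoted string is usually easier to maintain than backslash line continuations inside `text()`. The sketch below is hypothetical past the point where the original post is truncated ("LEVEL AS"): the table name `bom`, the `:root_part` bind parameter, and the START WITH / CONNECT BY clauses are all placeholders, not the poster's actual query.

```python
from sqlalchemy import text

# Hedged sketch: everything after "LEVEL AS" in the original post is
# truncated, so the FROM / START WITH / CONNECT BY clauses below are
# hypothetical placeholders.
sql_text = text("""
    SELECT CONNECT_BY_ROOT part_no AS root_part_no,
           bc.part_no,
           bc.cmpnt_part_no,
           bc.cmpnt_qty,
           LEVEL AS lvl
      FROM bom bc                                  -- placeholder table name
     START WITH bc.part_no = :root_part            -- hypothetical bind param
   CONNECT BY PRIOR bc.cmpnt_part_no = bc.part_no
""")

print(str(sql_text))
```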
On 01/21/2016 08:43 PM, Maximilian Roos wrote:
> We're using celery, a job distribution package. On a single machine,
> there are 20+ celery workers running, each with their own Python
> process. We had some issues with the processes attempting to use the
> same SQLAlchemy connections (I think
Great, thanks for the reply Mike.
Given that the processes are started separately, are the possible
'cross-effects' between processes on the same machine the same as those
between threads within the same process, and therefore solved with
scoped_session?
On Thursday, January 21, 2016 at 11:01:41
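For the thread case, scoped_session does handle this: it is a thread-local registry, so each thread transparently gets its own Session. A minimal sketch, with an in-memory SQLite engine standing in for the real database:

```python
import threading

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

# In-memory SQLite stands in for the real database URL
engine = create_engine("sqlite://")
Session = scoped_session(sessionmaker(bind=engine))

s1 = Session()
s2 = Session()
assert s1 is s2  # same thread -> same underlying Session

other = []

def worker():
    # A different thread gets a different Session from the registry
    other.append(Session())
    Session.remove()

t = threading.Thread(target=worker)
t.start()
t.join()
assert other[0] is not s1

Session.remove()  # end-of-request cleanup in a web app
```

Note this only isolates threads within one process; it does nothing about connections shared across a fork.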
Great, thanks for the reply Mike.
I looked further - Celery *is* using fork / multiprocessing, but the
forking occurs before any import of our libraries / sqlalchemy /
create_engine. Is there a risk of reusing connections in that state?
Can I confirm that the 'cross-effects' between processes
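If the fork really happens before create_engine() is ever called, each worker builds its own engine and pool, so there are no inherited connections to collide on. The defensive pattern for the other case, where an Engine might predate the fork, is to call engine.dispose() in each child so it discards pooled connections inherited from the parent. A minimal sketch with SQLite in place of the real URL; `on_worker_start` is a hypothetical hook (with Celery it would typically be wired to the worker_process_init signal):

```python
from sqlalchemy import create_engine, text

# In-memory SQLite stands in for the real database URL
engine = create_engine("sqlite://")

def on_worker_start() -> None:
    # Hypothetical per-child hook: discard any pooled connections
    # inherited across the fork; fresh ones are opened on demand.
    engine.dispose()

on_worker_start()
with engine.connect() as conn:
    value = conn.execute(text("SELECT 1")).scalar()
print(value)
```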