Problem solved.  It turns out that we recently had a batch ingested
via SWORDv2, and a lot of the items had a huge number of authors -
one had over 5,000, and many had around 3,000. It seems the rendering
of the item page was slowed by this, which in turn caused SQL
connections to not be released. I removed the authors from these
items and added a single contributor.author metadata value that says
"More than 100 authors."  The "idle in transaction" count went down
within minutes.
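
For anyone watching for the same symptom, a quick way to track the
"idle in transaction" count is a query like this (generic PostgreSQL,
not DSpace-specific; a sketch, assuming you are connected to the
DSpace database):

```sql
-- Count connections by state for the current database;
-- "idle in transaction" is the state to watch.
SELECT state, count(*)
FROM pg_stat_activity
WHERE datname = current_database()
GROUP BY state
ORDER BY count(*) DESC;
```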

Just an FYI.

-Jose

On Thu, Aug 18, 2022 at 8:37 PM Jose Blanco <[email protected]> wrote:
>
> Still working on this problem.  I have reached my max connection
> count, which I set to 100, and it seems like I have 100 "idle in
> transaction" connections.  From the pg_stat_activity table I can see
> that they are all stuck on this SQL.  Can anyone point me to where
> this SQL originates?  That might help me determine why it is getting
> stuck.
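>
> In case the context is useful, the kind of pg_stat_activity query I
> mean is something like this (generic PostgreSQL, not
> DSpace-specific):
>
> ```sql
> -- List idle-in-transaction sessions, how long each transaction has
> -- been open, and the last statement each session executed.
> SELECT pid,
>        now() - xact_start AS xact_age,
>        state,
>        left(query, 80) AS last_query
> FROM pg_stat_activity
> WHERE state = 'idle in transaction'
> ORDER BY xact_age DESC;
> ```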
>
> 2022-08-18 18:36:35,030 DEBUG org.hibernate.SQL @ select
> this_.policy_id as policy_i1_33_6_, this_.action_id as
> action_i2_33_6_, this_.dspace_object as dspace_o9_33_6_,
> this_.end_date as end_date3_33_6_, this_.eperson_id as
> eperson10_33_6_, this_.epersongroup_id as eperson11_33_6_,
> this_.resource_type_id as resource4_33_6_, this_.rpdescription as
> rpdescri5_33_6_, this_.rpname as rpname6_33_6_, this_.rptype as
> rptype7_33_6_, this_.start_date as start_da8_33_6_, dspaceobje2_.uuid
> as uuid1_13_0_, dspaceobje2_1_.bitstream_format_id as bitstre10_2_0_,
> dspaceobje2_1_.checksum as checksum1_2_0_,
> dspaceobje2_1_.checksum_algorithm as checksum2_2_0_,
> dspaceobje2_1_.deleted as deleted3_2_0_, dspaceobje2_1_.internal_id as
> internal4_2_0_, dspaceobje2_1_.bitstream_id as bitstrea5_2_0_,
> dspaceobje2_1_.sequence_id as sequence6_2_0_,
> dspaceobje2_1_.size_bytes as size_byt7_2_0_,
> dspaceobje2_1_.store_number as store_nu8_2_0_,
> dspaceobje2_2_.bundle_id as bundle_i1_4_0_,
> dspaceobje2_2_.primary_bitstream_id as primary_3_4_0_,
> dspaceobje2_3_.admin as admin3_8_0_, dspaceobje2_3_.collection_id as
> collecti1_8_0_, dspaceobje2_3_.logo_bitstream_id as logo_bit4_8_0_,
> dspaceobje2_3_.submitter as submitte5_8_0_,
> dspaceobje2_3_.template_item_id as template6_8_0_,
> dspaceobje2_3_.workflow_step_1 as workflow7_8_0_,
> dspaceobje2_3_.workflow_step_2 as workflow8_8_0_,
> dspaceobje2_3_.workflow_step_3 as workflow9_8_0_, dspaceobje2_4_.admin
> as admin3_10_0_, dspaceobje2_4_.community_id as communit1_10_0_,
> dspaceobje2_4_.logo_bitstream_id as logo_bit4_10_0_,
> dspaceobje2_5_.discoverable as discover1_24_0_,
> dspaceobje2_5_.in_archive as in_archi2_24_0_,
> dspaceobje2_5_.last_modified as last_mod3_24_0_,
> dspaceobje2_5_.item_id as item_id4_24_0_,
> dspaceobje2_5_.owning_collection as owning_c7_24_0_,
> dspaceobje2_5_.submitter_id as submitte8_24_0_,
> dspaceobje2_5_.withdrawn as withdraw5_24_0_, dspaceobje2_7_.can_log_in
> as can_log_1_14_0_, dspaceobje2_7_.digest_algorithm as
> digest_a2_14_0_, dspaceobje2_7_.email as email3_14_0_,
> dspaceobje2_7_.last_active as last_act4_14_0_,
> dspaceobje2_7_.eperson_id as eperson_5_14_0_, dspaceobje2_7_.netid as
> netid6_14_0_, dspaceobje2_7_.password as password7_14_0_,
> dspaceobje2_7_.require_certificate as require_8_14_0_,
> dspaceobje2_7_.salt as salt9_14_0_, dspaceobje2_7_.self_registered as
> self_re10_14_0_, dspaceobje2_8_.eperson_group_id as eperson_1_15_0_,
> dspaceobje2_8_.name as name2_15_0_, dspaceobje2_8_.permanent as
> permanen3_15_0_, case when dspaceobje2_1_.uuid is not null then 1 when
> dspaceobje2_2_.uuid is not null then 2 when dspaceobje2_3_.uuid is not
> null then 3 when dspaceobje2_4_.uuid is not null then 4 when
> dspaceobje2_5_.uuid is not null then 5 when dspaceobje2_6_.uuid is not
> null then 6 when dspaceobje2_7_.uuid is not null then 7 when
> dspaceobje2_8_.uuid is not null then 8 when dspaceobje2_.uuid is not
> null then 0 end as clazz_0_, bitstreamf3_.bitstream_format_id as
> bitstrea1_3_1_, bitstreamf3_.description as descript2_3_1_,
> bitstreamf3_.internal as internal3_3_1_, bitstreamf3_.mimetype as
> mimetype4_3_1_, bitstreamf3_.short_description as short_de5_3_1_,
> bitstreamf3_.support_level as support_6_3_1_, bitstream4_.uuid as
> uuid1_13_2_, bitstream4_.bitstream_format_id as bitstre10_2_2_,
> bitstream4_.checksum as checksum1_2_2_, bitstream4_.checksum_algorithm
> as checksum2_2_2_, bitstream4_.deleted as deleted3_2_2_,
> bitstream4_.internal_id as internal4_2_2_, bitstream4_.bitstream_id as
> bitstrea5_2_2_, bitstream4_.sequence_id as sequence6_2_2_,
> bitstream4_.size_bytes as size_byt7_2_2_, bitstream4_.store_number as
> store_nu8_2_2_, group5_.uuid as uuid1_13_3_, group5_.eperson_group_id
> as eperson_1_15_3_, group5_.name as name2_15_3_, group5_.permanent as
> permanen3_15_3_, group6_.uuid as uuid1_13_4_, group6_.eperson_group_id
> as eperson_1_15_4_, group6_.name as name2_15_4_, group6_.permanent as
> permanen3_15_4_, bitstream7_.uuid as uuid1_13_5_,
> bitstream7_.bitstream_format_id as bitstre10_2_5_,
> bitstream7_.checksum as checksum1_2_5_, bitstream7_.checksum_algorithm
> as checksum2_2_5_, bitstream7_.deleted as deleted3_2_5_,
> bitstream7_.internal_id as internal4_2_5_, bitstream7_.bitstream_id as
> bitstrea5_2_5_, bitstream7_.sequence_id as sequence6_2_5_,
> bitstream7_.size_bytes as size_byt7_2_5_, bitstream7_.store_number as
> store_nu8_2_5_ from public.resourcepolicy this_ left outer join
> public.dspaceobject dspaceobje2_ on
> this_.dspace_object=dspaceobje2_.uuid left outer join public.bitstream
> dspaceobje2_1_ on dspaceobje2_.uuid=dspaceobje2_1_.uuid left outer
> join public.bundle dspaceobje2_2_ on
> dspaceobje2_.uuid=dspaceobje2_2_.uuid left outer join
> public.collection dspaceobje2_3_ on
> dspaceobje2_.uuid=dspaceobje2_3_.uuid left outer join public.community
> dspaceobje2_4_ on dspaceobje2_.uuid=dspaceobje2_4_.uuid left outer
> join public.item dspaceobje2_5_ on
> dspaceobje2_.uuid=dspaceobje2_5_.uuid left outer join public.site
> dspaceobje2_6_ on dspaceobje2_.uuid=dspaceobje2_6_.uuid left outer
> join public.eperson dspaceobje2_7_ on
> dspaceobje2_.uuid=dspaceobje2_7_.uuid left outer join
> public.epersongroup dspaceobje2_8_ on
> dspaceobje2_.uuid=dspaceobje2_8_.uuid left outer join
> public.bitstreamformatregistry bitstreamf3_ on
> dspaceobje2_1_.bitstream_format_id=bitstreamf3_.bitstream_format_id
> left outer join public.bitstream bitstream4_ on
> dspaceobje2_2_.primary_bitstream_id=bitstream4_.uuid left outer join
> public.dspaceobject bitstream4_1_ on
> bitstream4_.uuid=bitstream4_1_.uuid left outer join
> public.epersongroup group5_ on dspaceobje2_3_.submitter=group5_.uuid
> left outer join public.dspaceobject group5_1_ on
> group5_.uuid=group5_1_.uuid left outer join public.epersongroup
> group6_ on dspaceobje2_4_.admin=group6_.uuid left outer join
> public.dspaceobject group6_1_ on group6_.uuid=group6_1_.uuid left
> outer join public.bitstream bitstream7_ on
> dspaceobje2_4_.logo_bitstream_id=bitstream7_.uuid left outer join
> public.dspaceobject bitstream7_1_ on
> bitstream7_.uuid=bitstream7_1_.uuid where (this_.dspace_object=? and
> this_.action_id=?)
>
> Thank you!
> -Jose
>
>
> On Wed, Aug 17, 2022 at 3:12 PM Jose Blanco <[email protected]> wrote:
> >
> > Just out of nowhere I'm getting Hibernate timeout errors on my
> > DSpace 6 app.  Not sure what is causing it.  It seems to reach the
> > max connection setting quickly (the default is 30, which I had been
> > using for a long time, but I changed it to 50).  This is what I am
> > seeing in the database for one of the connections that is not
> > releasing:
> >
> > Any ideas?
> >
> > => SELECT relation, transactionid, pid, mode, granted, relname
> > FROM pg_locks
> > INNER JOIN pg_stat_user_tables
> > ON pg_locks.relation = pg_stat_user_tables.relid
> > WHERE pg_locks.pid='10269';
> >
> >  relation  | transactionid |  pid  |      mode       | granted |         relname
> > -----------+---------------+-------+-----------------+---------+-------------------------
> >   24125181 |               | 10269 | AccessShareLock | t       | collection
> >   24125274 |               | 10269 | AccessShareLock | t       | eperson
> >   24125123 |               | 10269 | AccessShareLock | t       | bitstream
> >   24125535 |               | 10269 | AccessShareLock | t       | workspaceitem
> >   24125131 |               | 10269 | AccessShareLock | t       | bitstreamformatregistry
> >   24125319 |               | 10269 | AccessShareLock | t       | handle
> >   24125187 |               | 10269 | AccessShareLock | t       | collection2item
> >   24125512 |               | 10269 | AccessShareLock | t       | versionhistory
> >   24125208 |               | 10269 | AccessShareLock | t       | community2collection
> >   24125242 |               | 10269 | AccessShareLock | t       | metadatafieldregistry
> >   24125374 |               | 10269 | AccessShareLock | t       | item2bundle
> >   24125154 |               | 10269 | AccessShareLock | t       | bundle2bitstream
> >   24125407 |               | 10269 | AccessShareLock | t       | metadataschemaregistry
> >   24125440 |               | 10269 | AccessShareLock | t       | resourcepolicy
> >   24125371 |               | 10269 | AccessShareLock | t       | item
> >   24125151 |               | 10269 | AccessShareLock | t       | bundle
> >   24125517 |               | 10269 | AccessShareLock | t       | versionitem
> >  913986038 |               | 10269 | AccessShareLock | t       | dspaceobject
> >   24125282 |               | 10269 | AccessShareLock | t       | epersongroup
> >   24125213 |               | 10269 | AccessShareLock | t       | community2community
> >   24125530 |               | 10269 | AccessShareLock | t       | workflowitem
> >   24125249 |               | 10269 | AccessShareLock | t       | metadatavalue
> >   24125202 |               | 10269 | AccessShareLock | t       | community
> >  913986043 |               | 10269 | AccessShareLock | t       | site
> >   24125314 |               | 10269 | AccessShareLock | t       | group2groupcache
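> >
> > (For reference, PostgreSQL 9.6+ also has an
> > idle_in_transaction_session_timeout setting that can forcibly end
> > sessions stuck like this; a sketch, assuming killing such sessions
> > is acceptable for the application:)
> >
> > ```sql
> > -- Terminate any session that sits idle in a transaction for more
> > -- than 5 minutes (requires PostgreSQL 9.6 or later).
> > ALTER SYSTEM SET idle_in_transaction_session_timeout = '5min';
> > SELECT pg_reload_conf();
> > ```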

-- 
All messages to this mailing list should adhere to the Code of Conduct: 
https://www.lyrasis.org/about/Pages/Code-of-Conduct.aspx
--- 
You received this message because you are subscribed to the Google Groups 
"DSpace Technical Support" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dspace-tech/CAK%3DKc-skhhQ%3DpbKWH4Ks2ezTutEq_vhGaz1QjFGsj7TMXOPEvQ%40mail.gmail.com.