Steve may have a valid point. You raised an issue with concurrent writes
before, if I recall correctly. This limitation may be due to the Hive
metastore: by default, Spark uses Apache Derby for metastore persistence.
*However, Derby only supports one Spark session at a time for metadata
storage.* That may be the cause here as well. Does this happen if the
underlying tables are created as PARQUET as opposed to ORC?
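If the embedded Derby single-session limit does turn out to be the culprit, a common workaround is to back the Hive metastore with a networked database rather than Derby. A minimal sketch of a hive-site.xml (placed in $SPARK_HOME/conf), assuming a PostgreSQL instance; the hostname, database name, and credentials below are placeholders, and the PostgreSQL JDBC driver jar would need to be on Spark's classpath:

```xml
<!-- hive-site.xml: point the Hive metastore at PostgreSQL instead of embedded Derby -->
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <!-- placeholder host/database; adjust to your environment -->
    <value>jdbc:postgresql://metastore-db:5432/metastore</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>org.postgresql.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
  </property>
</configuration>
```

With a shared metastore database like this, multiple Spark sessions (including the ThriftServer) can read and write metadata concurrently.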

HTH

Mich Talebzadeh,
Solutions Architect/Engineering Lead
London
United Kingdom


   view my Linkedin profile
<https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>


 https://en.everybodywiki.com/Mich_Talebzadeh



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Fri, 11 Aug 2023 at 01:33, Stephen Coy <s...@infomedia.com.au.invalid>
wrote:

> Hi Patrick,
>
> When this has happened to me in the past (admittedly via spark-submit) it
> has been because another job was still running and had already claimed some
> of the resources (cores and memory).
>
> I think this can also happen if your configuration tries to claim
> resources that will never be available.
>
> Cheers,
>
> SteveC
>
>
> On 11 Aug 2023, at 3:36 am, Patrick Tucci <patrick.tu...@gmail.com> wrote:
>
> Hello,
>
> I'm attempting to run a query on Spark 3.4.0 through the Spark
> ThriftServer. The cluster has 64 cores, 250GB RAM, and operates in
> standalone mode using HDFS for storage.
>
> The query is as follows:
>
> SELECT ME.*, MB.BenefitID
> FROM MemberEnrollment ME
> JOIN MemberBenefits MB
> ON ME.ID = MB.EnrollmentID
> WHERE MB.BenefitID = 5
> LIMIT 10
>
> The tables are defined as follows:
>
> -- Contains about 3M rows
> CREATE TABLE MemberEnrollment
> (
>     ID INT
>     , MemberID VARCHAR(50)
>     , StartDate DATE
>     , EndDate DATE
>     -- Other columns, but these are the most important
> ) STORED AS ORC;
>
> -- Contains about 25M rows
> CREATE TABLE MemberBenefits
> (
>     EnrollmentID INT
>     , BenefitID INT
> ) STORED AS ORC;
>
> When I execute the query, it runs a single broadcast exchange stage, which
> completes after a few seconds. Then everything just hangs. The JDBC/ODBC
> tab in the UI shows the query state as COMPILED, but no stages or tasks are
> executing or pending:
>
> <image.png>
>
> I've let the query run for as long as 30 minutes with no additional
> stages, progress, or errors. I'm not sure where to start troubleshooting.
>
> Thanks for your help,
>
> Patrick
>
>
> This email contains confidential information of and is the copyright of
> Infomedia. It must not be forwarded, amended or disclosed without consent
> of the sender. If you received this message by mistake, please advise the
> sender and delete all copies. Security of transmission on the internet
> cannot be guaranteed, could be infected, intercepted, or corrupted and you
> should ensure you have suitable antivirus protection in place. By sending
> us your or any third party personal details, you consent to (or confirm you
> have obtained consent from such third parties) to Infomedia’s privacy
> policy. http://www.infomedia.com.au/privacy-policy/
>