[
https://issues.apache.org/jira/browse/SPARK-22340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16971723#comment-16971723
]
Ruslan Dautkhanov commented on SPARK-22340:
-------------------------------------------
Glad to see this is solved.
A nice side effect should be somewhat better performance in some cases
involving heavy Python-Java communication on multi-NUMA / multi-socket
configurations. With static threads, the Linux kernel will actually have a
chance to schedule threads on processors/cores that are more local to the
data's NUMA placement.
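For anyone who wants to try the new behavior, here is a minimal sketch of how I'd expect it to be exercised, assuming the fix lands as the experimental pinned-thread mode toggled by the PYSPARK_PIN_THREAD environment variable (one dedicated JVM thread per Python thread); the variable name and opt-in mechanics are my assumption, not something this ticket spells out:
{code}
import os

# Assumption: pinned-thread mode is opt-in and must be enabled before the
# JVM gateway is launched.
os.environ["PYSPARK_PIN_THREAD"] = "true"

import threading
from pyspark import SparkContext

sc = SparkContext("local[2]", "job-group-demo")

def run_jobs():
    # With a JVM thread pinned to this Python thread, the group should stick
    # to exactly the jobs submitted below.
    sc.setJobGroup("hello", "hello jobs")
    return sc.range(100).sum(), sc.range(1000).sum()

worker = threading.Thread(target=run_jobs)
worker.start()
worker.join()

# A cancelJobGroup("hello") issued from another thread would now target only
# the jobs started inside run_jobs().
sc.stop()
{code}
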
> pyspark setJobGroup doesn't match java threads
> ----------------------------------------------
>
> Key: SPARK-22340
> URL: https://issues.apache.org/jira/browse/SPARK-22340
> Project: Spark
> Issue Type: Bug
> Components: PySpark
> Affects Versions: 2.0.2
> Reporter: Leif Mortenson
> Assignee: Hyukjin Kwon
> Priority: Major
> Fix For: 3.0.0
>
>
> With pyspark, {{sc.setJobGroup}}'s documentation says
> {quote}
> Assigns a group ID to all the jobs started by this thread until the group ID
> is set to a different value or cleared.
> {quote}
> However, the group ID doesn't appear to be associated with Python threads, only
> with Java threads. As such, a Python thread that calls this and then submits
> multiple jobs doesn't necessarily get its jobs associated with any particular
> Spark job group. For example:
> {code}
> def run_jobs():
>     sc.setJobGroup('hello', 'hello jobs')
>     x = sc.range(100).sum()
>     y = sc.range(1000).sum()
>     return x, y
>
> import concurrent.futures
>
> with concurrent.futures.ThreadPoolExecutor() as executor:
>     future = executor.submit(run_jobs)
>     sc.cancelJobGroup('hello')
>     future.result()
> {code}
> In this example, depending on how the action calls on the Python side are
> allocated to Java threads, the jobs for {{x}} and {{y}} won't necessarily be
> assigned the job group {{hello}}.
> First, we should clarify the docs if this truly is the case.
> Second, it would be really helpful if we could make the job group assignment
> reliable for a Python thread, though I'm not sure of the best way to do this.
> As it stands, job groups are pretty much useless from the PySpark side if we
> can't rely on this association.
> My only idea so far is to mimic the thread-local storage (TLS) behavior on the
> Python side and then patch every point where job submission may take place to
> pass that group in, but this feels pretty brittle. In my experience with py4j,
> controlling threading there is a challenge.
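> Something like the following is the shape I have in mind; the helper names
> are made up for illustration (not an existing PySpark API), and it only
> narrows the race rather than closing it:
> {code}
> import threading
>
> # Per-Python-thread record of the desired job group.
> _job_group = threading.local()
>
> def set_job_group(sc, group_id, description):
>     _job_group.value = (group_id, description)
>
> def run_action(sc, action):
>     # Re-assert the group right before the action in the hope that the same
>     # JVM thread serves both py4j calls. Another Python thread can still
>     # sneak in between them, which is why this feels brittle.
>     group = getattr(_job_group, "value", None)
>     if group is not None:
>         sc.setJobGroup(*group)
>     return action()
>
> # usage:
> # set_job_group(sc, 'hello', 'hello jobs')
> # x = run_action(sc, lambda: sc.range(100).sum())
> {code}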