[
https://issues.apache.org/jira/browse/SPARK-41449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Shay Elbaz updated SPARK-41449:
---
Description:
Since the total/max number of executors is constant throughout the application -
in
Shay Elbaz created SPARK-41449:
--
Summary: Stage level scheduling, allow to change number of
executors
Key: SPARK-41449
URL: https://issues.apache.org/jira/browse/SPARK-41449
Project: Spark
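The ticket above concerns stage-level scheduling, where a per-stage ResourceProfile can change executor cores and memory but (as reported) not the number of executors. A minimal sketch of the existing API, assuming an RDD named `rdd` already exists; the resource values are illustrative:

```scala
// Stage-level scheduling sketch: attach a ResourceProfile to an RDD so the
// stages computed from it request different executor resources.
import org.apache.spark.resource.{ExecutorResourceRequests, ResourceProfileBuilder, TaskResourceRequests}

val execReqs = new ExecutorResourceRequests().cores(4).memory("8g")
val taskReqs = new TaskResourceRequests().cpus(1)

val profile = new ResourceProfileBuilder()
  .require(execReqs)
  .require(taskReqs)
  .build()

// The profile changes per-executor resources for these stages; per the
// ticket, the total/max executor count itself cannot be changed this way.
val tuned = rdd.withResources(profile)
```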
[
https://issues.apache.org/jira/browse/SPARK-32578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17195948#comment-17195948
]
Shay Elbaz commented on SPARK-32578:
It turned out the problem was in my benchmark, sorry about
[
https://issues.apache.org/jira/browse/SPARK-32578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Shay Elbaz updated SPARK-32578:
---
Description:
The core sendMessage method is incorrect:
{code:java}
def sendMessage(edge:
Shay Elbaz created SPARK-32578:
--
Summary: PageRank not sending the correct values in Pregel
sendMessage
Key: SPARK-32578
URL: https://issues.apache.org/jira/browse/SPARK-32578
Project: Spark
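The `{code}` block in the ticket is truncated after the `sendMessage` signature. For context, a hedged sketch of what a Pregel-style PageRank `sendMessage` looks like in the GraphX API; the vertex attribute layout `(rank, delta)` and the tolerance constant are illustrative, not the patched Spark source:

```scala
// GraphX Pregel sendMessage sketch for PageRank.
// Vertex attribute: (rank, delta); edge attribute: out-degree weight.
import org.apache.spark.graphx._

def sendMessage(edge: EdgeTriplet[(Double, Double), Double])
    : Iterator[(VertexId, Double)] = {
  if (edge.srcAttr._2 > 0.001) {
    // Propagate the source's rank delta, scaled by the edge weight.
    Iterator((edge.dstId, edge.srcAttr._2 * edge.attr))
  } else {
    // Converged source: send nothing so the destination can go inactive.
    Iterator.empty
  }
}
```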
[
https://issues.apache.org/jira/browse/SPARK-27318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17162641#comment-17162641
]
Shay Elbaz edited comment on SPARK-27318 at 7/22/20, 11:54 AM:
---
Was
[
https://issues.apache.org/jira/browse/SPARK-27318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17162641#comment-17162641
]
Shay Elbaz edited comment on SPARK-27318 at 7/22/20, 10:14 AM:
---
Was
[
https://issues.apache.org/jira/browse/SPARK-27318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17162641#comment-17162641
]
Shay Elbaz edited comment on SPARK-27318 at 7/22/20, 9:37 AM:
--
Was able
[
https://issues.apache.org/jira/browse/SPARK-27318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17162641#comment-17162641
]
Shay Elbaz commented on SPARK-27318:
Was able to reproduce on 2.4.3.
Executed via spark-shell,
[
https://issues.apache.org/jira/browse/SPARK-30399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17127548#comment-17127548
]
Shay Elbaz commented on SPARK-30399:
Hi [~hyukjin.kwon], thanks for replying.
Perhaps the issue
[
https://issues.apache.org/jira/browse/SPARK-30399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Shay Elbaz updated SPARK-30399:
---
Description:
When using a Spark bucketed table, Spark would use as many partitions as the
number of
Shay Elbaz created SPARK-30399:
--
Summary: Bucketing is not compatible with partitioning in
practice
Key: SPARK-30399
URL: https://issues.apache.org/jira/browse/SPARK-30399
Project: Spark
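A hedged sketch of the setup the ticket describes: a table that is both partitioned and bucketed, where the scan's parallelism is tied to the bucket count rather than the data selected. Table, column names, and counts here are made up for illustration:

```scala
// Write a table that is both partitioned (by date) and bucketed (by user_id).
df.write
  .partitionBy("date")
  .bucketBy(128, "user_id")
  .sortBy("user_id")
  .saveAsTable("events_bucketed")

// Per the report, the scan uses as many partitions as the bucket count
// (128 here), regardless of how few date partitions the filter selects.
val oneDay = spark.table("events_bucketed").where($"date" === "2020-01-01")
```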
[
https://issues.apache.org/jira/browse/SPARK-30089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Shay Elbaz updated SPARK-30089:
---
Description:
Please consider the following data, where *event_id* has 5 non-unique values,
and
Shay Elbaz created SPARK-30089:
--
Summary: count over Window function with orderBy gives wrong
results
Key: SPARK-30089
URL: https://issues.apache.org/jira/browse/SPARK-30089
Project: Spark
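The behavior behind this report is worth spelling out: adding `orderBy` to a window definition changes the default frame to a running one (unbounded preceding to current row), so `count` becomes cumulative instead of covering the whole partition. A minimal sketch, assuming a dataframe `df` with columns `event_id` and `ts`:

```scala
// Default window frames: with vs. without orderBy.
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.count

val byKey        = Window.partitionBy($"event_id")
val byKeyOrdered = Window.partitionBy($"event_id").orderBy($"ts")

df.select(
  count("*").over(byKey).as("partition_count"),  // count of whole partition
  count("*").over(byKeyOrdered).as("running_count") // cumulative up to row
)
```

Whether this counts as "wrong results" or as documented frame semantics is exactly what the ticket raises.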
Shay Elbaz created SPARK-26438:
--
Summary: Driver waits for spark.sql.broadcastTimeout before
throwing OutOfMemoryError - is this by design?
Key: SPARK-26438
URL: https://issues.apache.org/jira/browse/SPARK-26438
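The configuration key in question, shown here as a fragment; 300 seconds is the Spark default. The ticket asks whether the driver should hold broadcast-build memory for this long before an OutOfMemoryError surfaces:

```scala
// spark.sql.broadcastTimeout: seconds the driver waits for a broadcast
// relation to be built before failing the query (default: 300).
spark.conf.set("spark.sql.broadcastTimeout", "300")
```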
[
https://issues.apache.org/jira/browse/SPARK-19256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16700067#comment-16700067
]
Shay Elbaz commented on SPARK-19256:
[~chengsu] this is great! If there is anything I can do to
[
https://issues.apache.org/jira/browse/SPARK-19256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643828#comment-16643828
]
Shay Elbaz commented on SPARK-19256:
+1
[~tejasp] is this still under progress?
> Hive bucketing
[
https://issues.apache.org/jira/browse/SPARK-24904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1678#comment-1678
]
Shay Elbaz commented on SPARK-24904:
[~mgaido] Technically you *can* do that, you just need an
[
https://issues.apache.org/jira/browse/SPARK-24904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Shay Elbaz updated SPARK-24904:
---
Issue Type: Improvement (was: Question)
> Join with broadcasted dataframe causes shuffle of
[
https://issues.apache.org/jira/browse/SPARK-24904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16555769#comment-16555769
]
Shay Elbaz commented on SPARK-24904:
[~mgaido] indeed this assumption is not always true. However
[
https://issues.apache.org/jira/browse/SPARK-24904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Shay Elbaz updated SPARK-24904:
---
Description:
When joining a "large" dataframe with a broadcasted small one, and the join type is
on the
Shay Elbaz created SPARK-24904:
--
Summary: Join with broadcasted dataframe causes shuffle of
redundant data
Key: SPARK-24904
URL: https://issues.apache.org/jira/browse/SPARK-24904
Project: Spark
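A hedged sketch of the join pattern the ticket describes; dataframe and column names are invented for illustration:

```scala
// Broadcast-joining a small dataframe into a large one.
import org.apache.spark.sql.functions.broadcast

val joined = largeDf.join(broadcast(smallDf), Seq("key"), "inner")

// The report's argument: rows of the large side whose keys cannot match
// any broadcast key may still flow into later shuffle stages, even though
// the broadcast relation could be used to filter them out earlier.
```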
[
https://issues.apache.org/jira/browse/SPARK-5377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16369112#comment-16369112
]
Shay Elbaz commented on SPARK-5377:
---
+1
This seems like a very useful improvement and will save us many