+1, too.
Thank you, Hyukjin!
Bests,
Dongjoon.
On Fri, May 17, 2019 at 9:07 AM Imran Rashid wrote:
> +1, thanks for taking this on
>
> On Wed, May 15, 2019 at 7:26 PM Hyukjin Kwon wrote:
>
>> oh, wait. 'Incomplete' can still make sense in this way then.
>> Yes, I am good with 'Incomplete'
actually, amp-jenkins-staging-worker-01 is seriously unhappy and just
crashed. we will investigate more on monday.
:(
shane
On Fri, May 17, 2019 at 3:19 PM shane knapp wrote:
all workers are now up, online and ready to build!
On Fri, May 17, 2019 at 2:55 PM shane knapp wrote:
amp-jenkins-staging-worker-02 and ubuntu-testing are back up.
-01 is being a little reluctant to boot and we're investigating.
On Fri, May 17, 2019 at 2:08 PM shane knapp wrote:
machines are down, gpus are about to go in. i expect these workers to back
up and building in ~30min.
On Fri, May 17, 2019 at 1:47 PM shane knapp wrote:
we're installing some new GPUs for builds to use for tests... the
following workers will be offline for the next couple of hours:
amp-jenkins-staging-worker-01
amp-jenkins-staging-worker-02
the ubuntu-testing worker will also be down, but that only impacts one
build.
the GPUs will be used for
A cached DataFrame isn't supposed to change, by definition.
You can re-read each time or consider setting up a streaming source on
the table which provides a result that updates as new data comes in.
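A sketch of the two suggestions above (the path, format, and column name are taken from the mail below; a running SparkSession and the Delta dependency on the classpath are assumed):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder.getOrCreate()

// Option 1: skip cache() and re-read from the source each time,
// so every action sees the current contents of the table.
val fresh = spark.read.format("delta").load("/data")
  .groupBy(col("event_hour")).count()

// Option 2: a streaming source over the same table; the running
// aggregate updates as new data arrives.
val live = spark.readStream.format("delta").load("/data")
  .groupBy(col("event_hour")).count()
val query = live.writeStream
  .outputMode("complete")   // aggregations require complete/update mode
  .format("memory")         // in-memory sink, for illustration only
  .queryName("event_counts")
  .start()
// spark.sql("SELECT * FROM event_counts") now reflects incoming data.
```

The memory sink here is only for demonstration; a production job would write to a durable sink instead.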
On Fri, May 17, 2019 at 1:44 PM Tomas Bartalos wrote:
Hello,
I have a cached dataframe:
spark.read.format("delta").load("/data").groupBy(col("event_hour")).count.cache
I would like to access the "live" data for this data frame without deleting
the cache (i.e. without calling unpersist()). Whatever I do, I always get the
cached data on subsequent queries. Even
+1, thanks for taking this on
On Wed, May 15, 2019 at 7:26 PM Hyukjin Kwon wrote:
> oh, wait. 'Incomplete' can still make sense in this way then.
> Yes, I am good with 'Incomplete' too.
>
> On Thu, May 16, 2019 at 11:24 AM, Hyukjin Kwon wrote:
>
>> I actually recently used 'Incomplete' a bit when the
Hi All,
I am getting Out Of Memory due to GC overhead while reading a table from
HIVE from spark like:
spark.sql("SELECT * FROM some.table where date='2019-05-14' LIMIT 10").show()
So when I run the above command in spark-shell, it starts processing *1780
tasks* and goes OOM at a
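For reference, a DataFrame-API sketch of the same query (table and column names come from the mail above; `spark` is an existing SparkSession, and whether this avoids the wide scan depends on how the table is partitioned):

```scala
import org.apache.spark.sql.functions.col

// Equivalent of: SELECT * FROM some.table where date='2019-05-14' LIMIT 10
val df = spark.table("some.table")
  .where(col("date") === "2019-05-14")
  .limit(10)
df.show()   // show() itself only displays up to 20 rows
```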