i'll be taking amp-jenkins-staging-worker-0{1,2} offline to upgrade
minikube to v0.28.0.
this is currently blocking: https://github.com/apache/spark/pull/21583
this should be a relatively short downtime, and i'll reply back here when
it's done.
shane
--
Shane Knapp
UC Berkeley EECS Research /
done, and the workers are back online.
$ pssh -h ubuntu_workers.txt -i "minikube version"
[1] 12:37:23 [SUCCESS] amp-jenkins-staging-worker-01.amp
minikube version: v0.28.0
[2] 12:37:24 [SUCCESS] amp-jenkins-staging-worker-02.amp
minikube version: v0.28.0
On Wed, Jul 11, 2018 at 7:34 PM, shane
Severity: Medium
Vendor: The Apache Software Foundation
Versions Affected:
Spark versions through 2.1.2
Spark 2.2.0 through 2.2.1
Spark 2.3.0
Description:
In Apache Spark up to and including 2.1.2, 2.2.0 to 2.2.1, and 2.3.0, it's
possible for a malicious user to construct a URL pointing to a Spa
Severity: High
Vendor: The Apache Software Foundation
Versions affected:
Spark versions through 2.1.2
Spark 2.2.0 to 2.2.1
Spark 2.3.0
Description:
In Apache Spark up to and including 2.1.2, 2.2.0 to 2.2.1, and 2.3.0, when
using PySpark or SparkR, it's possible for a different local user to
conn
i'm seeing some strange docker/minikube errors, so i'm currently rebooting
the boxes. when they're back up, i will retrigger any killed builds and
send an all-clear.
On Wed, Jul 11, 2018 at 7:40 PM, shane knapp wrote:
> done, and the workers are back online.
>
> $ pssh -h ubuntu_workers.txt -i
ok, things seem much happier now.
On Wed, Jul 11, 2018 at 8:57 PM, shane knapp wrote:
> i'm seeing some strange docker/minikube errors, so i'm currently rebooting
> the boxes. when they're back up, i will retrigger any killed builds and
> send an all-clear.
>
> On Wed, Jul 11, 2018 at 7:40 PM,
+1
On Tue, Jul 10, 2018 at 10:15 PM Saisai Shao wrote:
> https://issues.apache.org/jira/browse/SPARK-24530 is just merged, I will
> cancel this vote and prepare a new RC2 cut with doc fixed.
>
> Thanks
> Saisai
>
> Wenchen Fan wrote on Wed, Jul 11, 2018 at 12:25 PM:
>
>> +1
>>
>> On Wed, Jul 11, 2018 at 1:31
I guess my question is just whether the Python docs are usable or not in
this RC. They looked reasonable to me but I don't know enough to know what
the issue was. If the result is usable, then there's no problem here, even
if something could be fixed/improved later.
On Sun, Jul 8, 2018 at 7:25 PM
Hi Sean,
The doc for RC1 is not usable because of a Sphinx issue; it should be rebuilt
with Python 3 to avoid the issue. Also there's one more blocking issue in
SQL, so I will wait for that before cutting a new RC.
Sean Owen wrote on Thu, Jul 12, 2018 at 9:05 AM:
> I guess my question is just whether the Python docs are
Hi All,
I am trying to build a Spark application which will read data from
PostgreSQL (source) in one environment and write it to PostgreSQL, Aurora
(target) in a different environment (e.g. PROD to QA or QA to PROD)
using Spark JDBC.
When I am loading the dataframe back to targe
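For what it's worth, a cross-environment copy like this is usually just a JDBC read followed by a JDBC write. A minimal sketch, assuming placeholder hostnames, credentials, and table names (none of these are real endpoints), might look like:

```python
# Hedged sketch, not a tested production recipe: copy one table between two
# PostgreSQL-compatible databases using Spark's JDBC data source.
# All hosts, users, and passwords here are illustrative placeholders.

def jdbc_url(host, port, database):
    """Build a PostgreSQL JDBC URL for one environment's endpoint."""
    return "jdbc:postgresql://%s:%d/%s" % (host, port, database)

def copy_table(spark, source, target, table, mode="append"):
    """Read `table` from the source environment and write it to the target.

    `source` and `target` are dicts with "url", "user", and "password" keys.
    `spark` is an existing SparkSession, so this file has no pyspark import
    and the helpers stay usable where Spark is not installed.
    """
    # Read the whole table from the source environment ...
    df = (spark.read.format("jdbc")
          .option("url", source["url"])
          .option("dbtable", table)
          .option("user", source["user"])
          .option("password", source["password"])
          .load())
    # ... and write it to the target environment. The save mode controls
    # whether rows are appended or the target table is overwritten.
    (df.write.format("jdbc")
        .option("url", target["url"])
        .option("dbtable", table)
        .option("user", target["user"])
        .option("password", target["password"])
        .mode(mode)
        .save())
```

This would be submitted with the PostgreSQL JDBC driver on the classpath (e.g. via `--jars`), and called with something like `copy_table(spark, {"url": jdbc_url("qa-db.example.com", 5432, "appdb"), "user": ..., "password": ...}, target_opts, "public.events")`. Whether it helps with the specific write error depends on what that error actually is, which is cut off above.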
Hi Ryan,
Great job on this! Shall we call a vote for the plan standardization SPIP?
I think this is a good idea and we should do it.
Notes:
We definitely need new user-facing APIs to produce these new logical plans
like DeleteData. But we need a design doc for these new APIs after the SPIP
passes
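To make the idea concrete: the point is that a user-facing call would only *construct* a logical plan such as DeleteData, leaving execution to the data source. A purely hypothetical sketch (neither `DeleteData` as a class nor `delete_from` exist in Spark; these names are illustrative only):

```python
# Hypothetical illustration only: Spark has no such public API at the time of
# this thread. This sketches the separation between a user-facing call and the
# logical plan node it produces.
from dataclasses import dataclass

@dataclass
class DeleteData:
    """Illustrative logical plan node: delete rows matching a condition."""
    table: str
    condition: str

def delete_from(table, condition):
    # The user-facing API just builds the plan; a data source that supports
    # deletes would later decide how to execute it.
    return DeleteData(table=table, condition=condition)

plan = delete_from("db.events", "ts < DATE '2018-01-01'")
```

The design doc the thread asks for would pin down the real shape of such APIs; this only shows why the plan node and the user-facing entry point are separate concerns.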