Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21583
test this please
---
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21748
test this please
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21583
test this please
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21748
test this please
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21748
test this please
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21748
test this please
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21583
test this please
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21583
test this please
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21583
test this please
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21583
test this please
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21583
test this please, this time w/o a fetch timeout :(
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21583
test this please
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21583
test this please as im running out of funny real quick
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21583
test this please and work for real this time
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21748
test this please
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21583
test this please OR ELSE
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21583
for the love of all thats holy, test this please
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21583
test this please
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21748
test this please
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21748
i'm going to kill the ubuntu build and reboot the worker. i'll retrigger
when it's back.
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21583
test this please
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21583
test this please
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21583
@ifilonenko -- minikube is updated to v0.28.0:
```
$ pssh -h ubuntu_workers.txt -i "minikube version"
[1] 12:37:23 [SUCCESS] amp-jenkins-staging-worker-01.amp
miniku
```
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21583
test this please
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21583
done.
```
$ for x in 1 2; do ssh amp-jenkins-staging-worker-0${x} docker -v; done
Docker version 18.03.1-ce, build 9ee9f40
Docker version 18.03.1-ce, build 9ee9f40
```
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21583
@ifilonenko -- ok, got it.
the irritating news is that docker installs have changed significantly
since i last did our ansible, so it'll take me a couple of hours to refactor
and get
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21583
actually, i think we're running a supported version of docker:
```
sknapp@amp-jenkins-staging-worker-02:~$ docker -v
Docker version 17.05.0-ce, build 89658be
```
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21583
@ifilonenko and @foxish, i will get to the docker upgrade this coming week.
On Sat, Jul 7, 2018 at 12:48 AM, Ilan Filonenko
wrote:
> @foxish <https://github.com/
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21109
test this please
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/21511
On Fri, Jun 8, 2018 at 10:36 AM, Ilan Filonenko
wrote:
> *@ifilonenko* requested changes on this pull request.
>
> Thank you for this work. Would be nice to
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/20697
we tried the --vm-driver=none and it wasn't working for us... this was ~8
months ago, however, and i can't recall exactly what went wrong.
On Fri, Jun 1, 2018 at 10:49 AM, Sean Suchter
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/20697
> so... yeah. i'll have a much, much, much better idea of when we'll be
> getting more ubuntu workers on-line as well as ubuntu-friendly (or fully
> containerized) spark tes
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/20697
just wanted to chime in here:
i went to databricks a couple of weeks back and have confirmation that,
after the spark summit, i will officially have build system support
from
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/20879
other than that, this PR LGTM++
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/20879
i've had to punt on figuring out how to get spark to reliably build across
branches on the ubuntu workers... mostly because i haven't been able to
successfully get the proper versions
Github user shaneknapp commented on a diff in the pull request:
https://github.com/apache/spark/pull/20465#discussion_r165412455
--- Diff: python/pyspark/sql/tests.py ---
@@ -2819,13 +2802,6 @@ def test_to_pandas(self):
self.assertEquals(types[4], 'datetime64[ns
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/19528
ok to test
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/20222
confused:
https://user-images.githubusercontent.com/1606572/34838548-533a1790-f6b3-11e7-9fbc-a035a782ccfb.png
anyways, i will change the timeout. ag
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/20222
test this please
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/20222
@gatorsmile -- the timeout is currently at *six hours*. i am loath to
bump it any higher.
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/20222
argh. i'll bump the job timeout now.
On Wed, Jan 10, 2018 at 12:19 PM, UCB AMPLab <notificati...@github.com>
wrote:
> Test FAILed.
> Refer to this link for b
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/19290
ok sounds good -- we'll keep things 'old' for now.
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/19290
the driving force behind this is that the version of R (and all associated
packages) is really quite old on the centos workers (3.1.1), and i'd really
like us to get as up to date
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/19290
i filed a bug about this on the spark jira:
https://issues.apache.org/jira/browse/SPARK-22996
as we're about to move all the spark builds to new ubuntu machines, w/much
more up2date
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/19884
test this please
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/19884
alright https://github.com/apache/spark/pull/18754 should now be unblocked.
let me know if there's anything else that needs to happen
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/19884
we should be good to go:
```
$ pssh -h jenkins_workers.txt -t 0 "export
PATH=/home/anaconda/envs/py3k/bin:$PATH; pip install pyarrow==0.8.0"
[1] 05:55:00 [SUCCESS] amp-jenk
```
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/19884
@HyukjinKwon @wesm @BryanCutler
alright. here's my plan for right now:
* python 3.4.5 -- upgrade pyarrow --> 0.8.0 (confirmed working on my
staging environment)
what
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/19884
pypy is 2.5.1, no pandas or pyarrow (/usr/bin/pypy -- hand-rolled dist i
put together ~3 years ago)
python 3.4.5: pyarrow 0.4.1, pandas 0.19.2 (managed by anaconda)
python
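For keeping an inventory like the one above straight, a small helper can report what is actually importable in a given interpreter. This is just a sketch (the package list is illustrative, and it needs Python 3.8+ for `importlib.metadata`):

```python
# Sketch: report installed versions of the packages discussed above.
# Package names here are illustrative; pass whatever you care about.
from importlib.metadata import version, PackageNotFoundError

def report(packages):
    """Map each distribution name to its installed version, or None if absent."""
    out = {}
    for name in packages:
        try:
            out[name] = version(name)
        except PackageNotFoundError:
            out[name] = None
    return out

print(report(["pandas", "pyarrow"]))
```

Running this under each interpreter on a worker would show exactly the kind of per-python version skew listed above.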
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/19884
@BryanCutler @HyukjinKwon
pandas and pyarrow are most definitely installed on all of the jenkins
workers. the 'missing' packages happened after we had a power outage at the
colo
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/19528
python2.7: 0.17.0
python3: 0.18.1
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/19884
the install is done w/pip in a conda environment.
Github user shaneknapp commented on a diff in the pull request:
https://github.com/apache/spark/pull/19959#discussion_r156536774
--- Diff: dev/lint-r.R ---
@@ -27,10 +27,11 @@ if (! library(SparkR, lib.loc = LOCAL_LIB_LOC,
logical.return = TRUE)) {
# Installs lintr from
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/19884
yeah, i can do the upgrade next week. i'll be working remotely from the
east coast, but unavailable at all on monday due to travel.
On Mon, Dec 11, 2017 at 1:59 PM, Bryan Cutler
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/19631
That's an expected message from the build and has no impact on it. I'll
change the build config to suppress the message when I'm back from holiday
late next week.
On Nov 1, 2017 6
Github user shaneknapp commented on a diff in the pull request:
https://github.com/apache/spark/pull/19524#discussion_r145757705
--- Diff: dev/run-tests-jenkins ---
@@ -26,4 +26,11 @@ FWDIR="$( cd "$( dirname "$0" )/.." && pwd )"
cd
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/19524
yeah, now that i'm looking at this w/a fresh cup of coffee, i agree w/
@holdenk about shellutil.py being a weird place to exit for a python version
check.
run-tests.py and run-tests
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/19524
yeah, this makes sense. i don't think we've officially supported 2.6 in a
while, esp for tests and i'd be ok w/removing the backports. this makes for a
much clearer exit case.
@rxin
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/19513
argh, it looks like the PATH variable got dropped, so it's not using the
anaconda python install. i've marked worker-03 offline, and once the jobs
there are done building i'll disconnect
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/19513
ill also spot-check the other workers.
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/19290
@felixcheung -- yes, this is the system default lintr, meaning all calls
to lintr will be against this version.
as for other branches, i think it could possibly break them
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/19290
hey all, i'm here. was out sick for the past few days and trying to get
caught up. sorry about that!
so... what version of lintr do we need to put on the workers?
On Fri
Github user shaneknapp commented on a diff in the pull request:
https://github.com/apache/spark/pull/19325#discussion_r140848562
--- Diff: python/pyspark/sql/functions.py ---
@@ -2183,14 +2183,29 @@ def pandas_udf(f=None, returnType=StringType()):
:param f: python function
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/19290
@HyukjinKwon -- you will absolutely not have builds install packages on the
build system. this is a really bad idea.
is this absolutely required, or just to fix a warning in the build
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18984
nvm... i checked the last successful build and it took ~90 mins.
On Mon, Aug 21, 2017 at 5:08 PM, shane knapp <incompl...@gmail.com>
wrote:
> hmm. the bu
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18984
hmm. the build seems to be hanging:
INFO- Cleaning site directory
INFO- Building documentation to directory:
/home/jenkins/workspace/spark-master-docs/spark/sql/site
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18984
building now:
https://amplab.cs.berkeley.edu/jenkins/job/spark-master-docs/3594/console
On Mon, Aug 21, 2017 at 4:30 PM, Hyukjin Kwon <notificati...@github.com>
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18984
done:
```
TASK [jenkins-worker : Install anaconda python packages via pip]
***
ok: [amp-jenkins-staging
```
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18984
okie dokie. installing mkdocs now.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18984
how long has this been failing in this way? i'll take a closer look
tomorrow afternoon.
On Sun, Aug 20, 2017 at 4:31 AM, Hyukjin Kwon <notificati...@github.com>
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18907
thanks for the heads up @hvanhovell -- looks like there was some gunk in
the pipes and now we've got ~10 pull request builds running. :)
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18907
test this please
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18907
ok to test
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18907
sometimes jobs don't like to trigger and there's nothing in the logs as to
exactly why. since nothing was building, i decided to kick jenkins and then
retrigger this build.
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18664
nvm, i just force-installed pandas on the workers to be 0.19.2
let's see if this helps.
On Wed, Jul 26, 2017 at 6:11 PM, shane knapp <incompl...@gmail.com>
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18664
i'm now thoroughly confused as pandas is now showing 0.18.0 on workers 2-8,
and 0.19.2 on worker 1.
latest is 0.20.3... should i just bump everything to that?
also
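A quick way to see which workers drifted, sketched with made-up hostnames and the versions mentioned above:

```python
# Sketch: flag workers whose installed pandas differs from the pinned
# target version. Hostnames are illustrative, not the real fleet.
def skewed(installed, target):
    """Return the workers (sorted) whose installed version differs from target."""
    return sorted(host for host, ver in installed.items() if ver != target)

fleet = {
    "worker-1": "0.19.2",
    "worker-2": "0.18.0",
    "worker-3": "0.18.0",
}
print(skewed(fleet, "0.19.2"))  # the workers that need a forced reinstall
```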
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18459
ok, i feel confident that this PR should be g2g:
- i checked the workers that this PRs builds were on and they didn't
leave any stray lockfiles
- i checked ALL workers for stray
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18459
test this please
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18459
test this please
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18459
i'd kick off a couple #tbh :)
On Fri, Jul 7, 2017 at 5:02 PM, Bryan Cutler <notificati...@github.com>
wrote:
> Ok, no prob. I'll kick off another test, maybe that
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18459
hmm. i have a feeling w/o looking at the test code that we're creating
lots of envs, installing things, and then moving on to a new env... which is
leading to a race condition w/lockfiles
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18459
one quick comment... i see that these tests are using the default ivy
cache of `/home/jenkins/.ivy2/cache`, which is dangerous as other builds and
whatnot can pollute this w/jars and cause test
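One common way around the shared-cache problem described above is to give each build its own throwaway ivy cache. A sketch: the conf key `spark.jars.ivy` is real Spark configuration, but the helper and argument assembly here are illustrative, not how these tests were actually wired up:

```python
# Sketch: build spark-submit args pointing at a per-build ivy cache,
# so concurrent builds can't race on ~/.ivy2/cache lockfiles or
# pollute each other's jars.
import tempfile

def isolated_ivy_args():
    """Return extra spark-submit args plus the temp cache dir they point at."""
    ivy_dir = tempfile.mkdtemp(prefix="ivy-cache-")
    return ["--conf", f"spark.jars.ivy={ivy_dir}"], ivy_dir

args, cache_dir = isolated_ivy_args()
print(args)
```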
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18459
test this please
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18459
```
(py3k) [root@amp-jenkins-worker-01 ~]# pip show pandas numpy
Name: pandas
Version: 0.19.2
Summary: Powerful data structures for data analysis, time series, and
statistics
```
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18459
this has been done.
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18317
test this please
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18317
i'm going to kill the current build and restart it once i finish doing some
package upgrades.
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18459
ok, i've backed up the existing anaconda installation on all of the
workers, so i can do the system-wide upgrade and back out if necessary.
i'll post to the dev@ list about upgrading
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18459
alright, this is what a `conda install numpy` would update on the workers:
```
package|build
```
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18459
no, i will tomorrow though. real life[tm] took over my day today.
On Wed, Jul 5, 2017 at 4:54 PM, Bryan Cutler <notificati...@github.com>
wrote:
> Hi @shanekna
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18459
nah, i got it.
On Sat, Jul 1, 2017 at 7:19 PM, Holden Karau <notificati...@github.com>
wrote:
> @shaneknapp <https://github.com/shaneknapp> - I can do th
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18459
i won't have time to think about and do something until monday... but i
have some ideas.
On Fri, Jun 30, 2017 at 4:29 PM, Bryan Cutler <notificati...@github.com>
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18459
pandas is currently at 0.15.2.
and no, no plans to upgrade unless it's forced by dependency hell.
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18459
is there a specific version that you would like installed (current is
1.9.2)?
i'll do this as i need to update some setup scripts and whatnot.
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18443
@BryanCutler after looking at the code, it seems that it's definitely
required for the for() loop located after the declaration, and to create the
temporary python environment.
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18443
excellent, thanks.
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18443
are we planning on running the pyarrow tests in another script?
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/15821
roger copy... "latest" is 0.4.1, which is what's currently on the jenkins
workers.
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/15821
btw, do we want pyarrow-0.4.0 or -0.4.1? i'm assuming the latter based on
https://github.com/apache/spark/pull/15821#issuecomment-310905209
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18439
thanks for closing this... @holdenk and/or @bryancutler will be updating
the tests and removing the call to install pyarrow from run-pip-tests some
time this afternoon.
On Tue