ot app though?
>
> But posting "Jenkins test this please" on PRs doesn't seem to work, and I
> can't reach Jenkins:
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/
>
> On Thu, May 18, 2017 at 12:44 AM shane knapp <skn...@berkeley.edu>
this is done, and we're building again.
On Mon, May 22, 2017 at 1:45 PM, shane knapp <skn...@berkeley.edu> wrote:
> last night i accidentally upgraded a bunch of plugins, which ended up
> breaking alluxio's release pipeline.
>
> to that end, i've downgraded the artifactory plugin
last night i accidentally upgraded a bunch of plugins, which ended up
breaking alluxio's release pipeline.
to that end, i've downgraded the artifactory plugin and need to do an
emergency restart. this will be quick, and will happen in about 10
minutes.
sorry for all of the flakiness recently,
shane
On Thu, May 18, 2017 at 8:39 AM, shane knapp <skn...@berkeley.edu> wrote:
> yeah, i spoke too soon. jenkins is still misbehaving, but FINALLY i'm
> getting some error messages in the logs... looks like jenkins is
> thrashing on GC.
>
> now that i know what's up, i
...but just now i started getting alerts on system load, which was
rather high. i had to kick jenkins again, and will keep an eye on the
master and may need to reboot.
sorry about the interruption of service...
shane
On Tue, May 16, 2017 at 8:18 AM, shane knapp <skn...@berkeley.edu>
we've got a booth in the expo center, feel free to stop by, say hi and
get some stickers!
(complaining about jenkins is also welcome, and i will happily join in!)
:)
shane (formerly amplab, now riselab)
i'm able to VPN in, but not connect to the master or any slaves. it's
looking like i'll need to head down from my building to the colo and
see what's up.
shane
-
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
ok, we're back up... thankfully i didn't need to go to the colo!
shane (who strongly feels he just dodged a bullet)
On Tue, Jun 13, 2017 at 3:06 PM, shane knapp <skn...@berkeley.edu> wrote:
> i'm able to VPN in, but not connect to the master or any slaves. it's
> looking like i'll need to head down from my building to the colo and
> see what's up.
...and we're back!
On Tue, Jun 13, 2017 at 10:17 AM, shane knapp <skn...@berkeley.edu> wrote:
> i have been seeing continuously slow network speeds, apparently
> caused by the VPN... it's currently rebooting and should be back up
> in ~5 mins.
>
> sorry for the in
tom: i checked and your username is still on the list.
anyways, it's a pretty big list... i pulled a couple of names out of
it (including mine -- i can add myself back if needed), but i'd be
down to have someone (rxin, maybe?) audit the list and pare it down.
also, after checking out the code
also, do we have any recent PRs where i can see this happening? it
will make my log-diving a bit easier.
On Fri, May 5, 2017 at 12:48 PM, shane knapp <skn...@berkeley.edu> wrote:
> tom: i checked and your username is still on the list.
>
> anyways, it's a pretty big list... i
https://github.com/apache/spark/pull/17658
>
> My requests went through, Tom's seemed to be ignored.
>
>
> On Fri, May 5, 2017 at 1:06 PM, shane knapp <skn...@berkeley.edu> wrote:
>> also, do we have any recent PRs where i can see this happening? it
>> will make my log-diving a bit easier.
, which will let me get back on track w/rolling out
jenkins 2.0+ and a much more modern, and hopefully less flaky ghprb
plugin.
wish i had better news. :\
shane
On Fri, May 5, 2017 at 1:14 PM, shane knapp <skn...@berkeley.edu> wrote:
> ok cool, thanks. i'll check the jenkins logs f
working on it. we'll have intermittent downtime the next ~30 mins.
On Sun, May 21, 2017 at 12:01 PM, shane knapp <skn...@berkeley.edu> wrote:
> yeah. i noticed that and restarted it a few minutes ago. i'll have
> some time later this afternoon to take a closer look... :\
>
>
> When I tried to see console log (e.g.
> https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/77149/consoleFull),
> a server returns "proxy error."
>
> Regards,
> Kazuaki Ishizaki
>
>
>
> From: shane knapp <skn...@berkeley.edu>
>
i will detail how we control access to the jenkins infra tomorrow.
we're pretty well locked down, but there is absolutely room for
improvement.
this thread is also a good reminder that we (RMs + pwendell + ?)
should audit who still has, but no longer needs, direct (or special)
access to jenkins.
++joshrosen
On Mon, Oct 9, 2017 at 1:48 AM, Sean Owen wrote:
> I'm seeing jobs killed regularly, presumably because they time out (210
> minutes?)
>
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%
>
not a problem. :)
On Thu, Oct 5, 2017 at 9:26 AM, Felix Cheung <felixcheun...@hotmail.com>
wrote:
> Thanks Shane!
>
> --
> *From:* shane knapp <skn...@berkeley.edu>
> *Sent:* Thursday, October 5, 2017 9:14:54 AM
> *To:* Felix Cheung
>
...and we're green:
https://amplab.cs.berkeley.edu/jenkins/job/spark-master-maven-snapshots/2025/
On Thu, Oct 5, 2017 at 9:46 AM, shane knapp <skn...@berkeley.edu> wrote:
> not a problem. :)
>
> On Thu, Oct 5, 2017 at 9:26 AM, Felix Cheung <felixcheun...@hotmail.com>
>
yep, it was a corrupted jar on amp-jenkins-worker-01. i grabbed a new one
from maven.org and kicked off a fresh build.
On Thu, Oct 5, 2017 at 9:03 AM, shane knapp <skn...@berkeley.edu> wrote:
> yep, looking now.
>
> On Wed, Oct 4, 2017 at 10:04 PM, Felix Cheung <felixch
alright, we're back up!
On Tue, Aug 29, 2017 at 9:13 AM, shane knapp <skn...@berkeley.edu> wrote:
> ok, we were up for a little bit, but had to take the webserver down
> due to a failed disk in the RAID array.
>
> given that this was our only hardware casualty, i will happily
congrats, and welcome! :)
On Fri, Sep 29, 2017 at 12:58 PM, Matei Zaharia wrote:
> Hi all,
>
> The Spark PMC recently added Tejas Patil as a committer on the
> project. Tejas has been contributing across several areas of Spark for
> a while, focusing especially on
it's pretty quiet right now, so i'm going to kill the lone job that's
building and power down the cluster.
once we're back up tomorrow, i'll let everyone know.
shane
we are shutting down our servers (including our lab webserver) tonite
~1030pm PDT, which will effectively take jenkins off-line until the
electrical feed repairs to our building are finished.
the estimate from the contractors is that they should be done tomorrow
morning (8-29) by 7am PDT.
more
greetings, denizens of the aether!
as you are all probably aware, we've been having a lot of issues
w/power in the beautiful little city of berkeley over the summer.
next week, we have two sudden and separate maintenances on both power
to our building (soda hall), and to the datacenter (IST).
in to the morning. :)
i'll check in on the state of things when i wake up tomorrow
morning... this is actually quite a big and dangerous job. they need
to rip out some sidewalk and dig pretty deep to get to the main feed
to replace it.
shane
On Mon, Aug 28, 2017 at 5:35 PM, shane knapp <skn...@berkeley.
> to replace it.
>
> shane
>
> On Mon, Aug 28, 2017 at 5:35 PM, shane knapp <skn...@berkeley.edu> wrote:
>> we are shutting down our servers (including our lab webserver) tonite
>> ~1030pm PDT, which will effectively take jenkins off-line until the
>> electrical feed repairs to our building are finished.
hey all, i'm finally back from vacation this week and will be following up
once i whittle down my inbox.
in summation: jenkins worker upgrades will be happening. the biggest one
is the move to ubuntu... we need containerized builds for this, but i
don't have the cycles to really do all of this
more electrical repairs need to be done on the high voltage leads to our
building, and we will be losing power overnight.
this means the PRB builds will not be working as amplab.cs.berkeley.edu
will be down.
timer-based builds will still run normally.
i'll get everything back up and running
this maintenance was cancelled last night, and will take place some time in
2018. i'll be sure to update everyone when i get more information.
On Tue, Nov 28, 2017 at 11:53 AM, shane knapp <skn...@berkeley.edu> wrote:
> more electrical repairs need to be done on the high volt
hello from the canary islands! ;)
i just saw this thread, and another one about a quick power loss at the
colo where our machines are hosted. the master is on UPS but the workers
aren't... and when they come back, the PATH variable specified in the
workers' configs gets dropped and we see
> Caused by: sbt.ForkMain$ForkError: java.io.IOException: error=2, No such
> file or directory
> at java.lang.UNIXProcess.forkAndExec(Native Method)
> at java.lang.UNIXProcess.<init>(UNIXProcess.java:248)
> at java.lang.ProcessImpl.start(ProcessImpl.java:134)
> at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
> ... 17 more
>
>
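the quoted trace is the JVM failing to fork a child process whose executable can't be resolved once PATH is empty. a minimal sketch of the same failure mode, using only the python standard library (the command name below is deliberately bogus, for illustration only):

```python
import subprocess

# when the workers' PATH gets dropped, child processes launched by bare
# name can't be resolved, and the fork/exec fails with errno 2 (ENOENT)
# -- the same "error=2, No such file or directory" seen in the
# UNIXProcess.forkAndExec trace above.
try:
    subprocess.run(["no-such-command"])
except FileNotFoundError as e:
    print(e.errno)  # 2 (ENOENT)
```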
--
Shane Knapp
UC Berkeley EECS Research / RISELab Staff Technical Lead
https://rise.cs.berkeley.edu
ay.
shane
, 2018 at 11:11 AM, shane knapp wrote:
> hey everyone!
>
> if you ever wanted to meet the one-man operation that keeps things going,
> talk about future build system plans, complain about the fact that we're
> still on centos 6 (yes, i know), or just say hi, i'll be manning the
> RISELab booth at summit all three days!
>
> :)
>
> shane
hey everyone!
if you ever wanted to meet the one-man operation that keeps things going,
talk about future build system plans, complain about the fact that we're
still on centos 6 (yes, i know), or just say hi, i'll be manning the
RISELab booth at summit all three days!
:)
shane
we just noticed that we're unable to connect to jenkins, and have reached
out to our NOC support staff at our colo. until we hear back, there's
nothing we can do.
i'll update the list as soon as i hear something. sorry for the
inconvenience!
shane
and we're back! there was apparently a firewall migration yesterday that
went sideways.
shane
On Mon, Apr 30, 2018 at 8:27 PM, shane knapp <skn...@berkeley.edu> wrote:
> we just noticed that we're unable to connect to jenkins, and have reached
> out to our NOC support staff at our colo.
jenkins got itself in to a 'state' this morning, and required a restart.
it should be back up and building now.
sorry for the inconvenience!
shane
2018-01-15  8
2018-01-16  34
Total builds: 4112
Total timeouts: 171
Percentage of all builds timing out: 4.16%
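the percentage above is just the two totals divided out; a quick sanity check:

```python
# recompute the timeout rate from the totals in the report above
total_builds = 4112
total_timeouts = 171
pct = 100 * total_timeouts / total_builds
print(round(pct, 2))  # 4.16
```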
On Wed, Jan 10, 2018 at 9:54 AM, shane knapp <skn...@berkeley.edu> wrote:
> i just noticed we're starting to see the once-yearly rash of git timeouts
> w
this doesn't have anything to do w/the git timeouts... those will timeout
the build 10 mins after starting (and failing on the initial fetch call).
On Wed, Jan 17, 2018 at 9:51 PM, Sameer Agarwal wrote:
> FYI, I ended up bumping the build timeouts from 255 to 275 minutes.
all non-UPS machines (read: all jenkins workers) temporarily lost power a
few minutes ago, and i will need to reconnect them to the master.
this means no builds for ~20 mins.
i will also be installing a plugin for the spark-on-k8s builds (
ok, we're back up and ready to build. sorry for the inconvenience.
On Tue, Jan 16, 2018 at 9:59 AM, shane knapp <skn...@berkeley.edu> wrote:
> all non-UPS machines (read: all jenkins workers) temporarily lost power a
> few minutes ago, and i will need to reconnect them to the master.
our firewall was running a bit... slowly... and needed a reboot. this
means access to jenkins will be gone for ~10 mins.
i'll send out an all-clear when we're back up and running.
and we're back!
On Fri, Jan 26, 2018 at 2:32 PM, shane knapp <skn...@berkeley.edu> wrote:
> our firewall was running a bit... slowly... and needed a reboot. this
> means access to jenkins will be gone for ~10 mins.
>
> i'll send out an all-clear when we're back up and running.
>
the build system is up, but you can't reach it through normal channels (
amplab.cs.berkeley.edu/jenkins or rise.cs.berkeley.edu/jenkins) as the
machine hosting the reverse proxy is down due to a UPS fault during normal
maintenance...
machines are coming up now, and we should have network
...and we're back!
On Wed, Dec 20, 2017 at 7:51 PM, shane knapp <skn...@berkeley.edu> wrote:
> the build system is up, but you can't reach it through normal channels (
> amplab.cs.berkeley.edu/jenkins or rise.cs.berkeley.edu/jenkins) as the
> machine hosting the reverse proxy is down due to a UPS fault during
> normal maintenance...
i'll be patching the build system once the patches are released.
https://security.googleblog.com/2018/01/todays-cpu-vulnerability-what-you-need.html
https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html
whee!
https://github.com/apache/spark/pull/21584
2) stop needing 2 builds for pull requests (one for regular tests on
centos, one to test against minikube on ubuntu).
questions/comments/concerns?
shane
our building is finally replacing the broken UPS that keeps biting us...
...which means another bit of downtime. :(
it begins in 6 hours (11pm PDT) and will be finished tomorrow (august 1st)
by ~8am PDT.
shane
the UPS has been replaced, and you can now access the wonderful entity
known as jenkins via the internet superhighway!
shane (who only really showed up early to work and didn't actually help
replace said UPS)
On Tue, Jul 31, 2018 at 5:14 PM, shane knapp wrote:
> our building is fina
n order to get 2.4 out on time.
>>
>>
>
in to are here:
https://issues.apache.org/jira/browse/SPARK-24950
thanks in advance,
shane
On Fri, Jul 27, 2018 at 1:23 PM shane knapp wrote:
>
>> hey everyone!
>>
>> i'm making great progress on porting the spark builds to run under ubuntu
>> 16.04LTS, but have hit a show-stopper in my testing.
>>
>> i am not a scala person by any definition of
/agreemsg
On Fri, Aug 10, 2018 at 4:02 PM, Sean Owen wrote:
> Seems OK to proceed with shutting off lintr, as it was masking those.
>
> On Fri, Aug 10, 2018 at 6:01 PM shane knapp wrote:
>
>> ugh... R unit tests failed on both of these builds.
>> https://amplab.cs.b
for the 2.4 cut/code freeze, but i wanted to
get this done before it gets pushed down my queue and before we revisit the
ubuntu port.
thanks in advance,
shane
<shiva...@eecs.berkeley.edu> wrote:
> Sounds good to me as well. Thanks Shane.
>
> Shivaram
> On Fri, Aug 10, 2018 at 1:40 PM Reynold Xin wrote:
> >
> > SGTM
> >
> > On Fri, Aug 10, 2018 at 1:39 PM shane knapp wrote:
> >>
> >> https://i
extra work, so no objections from me to hold
> off on things for now.
>
> On Fri, Aug 10, 2018 at 9:48 AM, shane knapp wrote:
>
>> On Fri, Aug 10, 2018 at 9:47 AM, Wenchen Fan wrote:
>>
>>> It seems safer to skip the arrow 0.10.0 upgrade for Spark 2.4 and leav
On Fri, Aug 10, 2018 at 9:47 AM, Wenchen Fan wrote:
> It seems safer to skip the arrow 0.10.0 upgrade for Spark 2.4 and leave it
> to Spark 3.0, so that we have more time to test. Any objections?
>
none here.
python 3.5/pyarrow 0.10.0 build:
https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-sbt-hadoop-2.6-python-3.5-arrow-0.10.0-ubuntu-testing/
On Fri, Aug 10, 2018 at 10:44 AM, shane knapp wrote:
> see: https://github.com/apache/spark/pull/21939#issuecomm
at all
spark branches will happily pass against 3.5, it will not happen until
after the 2.4 cut. :)
however, from my (limited) testing, it does look like that's the case.
still not gonna pull the trigger on it until after the cut.
shane
e is no consensus about
>>> what is the right fix yet. Likely to miss it in Spark 2.4 because it's a
>>> long-standing issue, not a regression.
>>>
>>
>> This is a really serious data loss bug. Yes its very complex, but we
>> absolutely have to f
ntr since it is missing some tests.
>
> Also these seems like real test failures? Are these only happening in 2.1
> and 2.2?
>
>
> ------
> *From:* shane knapp
> *Sent:* Friday, August 10, 2018 4:04 PM
> *To:* Sean Owen
> *Cc:* Shivaram Venkatarama
On Mon, Aug 6, 2018 at 12:46 PM, shane knapp wrote:
> i'll get something set up quickly by hand today, and make a TODO to get
> the job config checked in to the jenkins job builder configs later this
> week.
>
> shane
>
> On Sun, Aug 5, 2018 at 7:10 AM, Sean Owen wrote:
>
>>
> profiles that are enabled.
>
> I can already see two test failures for the 2.12 build right now and will
> try to fix those, but this should help verify whether the failures are
> 'real' and detect them going forward.
>
>
>
ful of how auth tokens
are passed around in builds. there are masked 'password'-style env vars
for things like that, and are easily located in job configs.
we are not immune to exploits like this, so please be careful.
:)
shane
the centos workers in to quiet mode now, prepping the
upgrade and once the majority of existing builds are done, i'll kill any
outliers (and retrigger them later) and perform the upgrade.
expect this to take ~3 hours, max.
shane
be fully backwards-compatible w/3.4.
of course, this needs to be taken w/a grain of salt, as we're mostly
focused on actual python package requirements, rather than worrying about
core python functionality.
thoughts? comments?
thanks in advance,
shane
ge, i will make it happen.
shane (who wants everyone to remember that it's just little old me running
this... not a team of people) ;)
est" builds that don't require
> credentials such as GPG keys).
>
> awesome++
> Perhaps we should think about revamping these jobs instead of keeping
> them as is.
>
i fully support this. which is exactly why i punted on even trying to get
them ported over to the ubunt
a
couple of weeks after the 2.4 cut.
> Part of the intent here is to allow this to happen without Shane having to
> reorganize his complex upgrade schedule and make it even more complicated.
>
> this. exactly. :)
>
> According to the status, I think we should wait a few more days. Any
> objections?
>
> none here.
i'm also pretty certain that waiting until after the code freeze to start
testing the GHPRB on ubuntu is the wisest course of action for us.
shane
i hate doing this, because our tests and builds take WAY too long,
but this should help get PRs through before the code freeze.
also, i looked pretty closely @ the python3.5 release notes, and nothing
caught my eye as being a showstopper.
On Thu, Aug 9, 2018 at 10:41 AM, shane knapp wrote:
> please see: https://issues.apache.org/jira/browse/SPARK-25079
>
> this is holding back the arrow 0.10.0 upgrade.
>
o see the dependencies for each package.
yep, we're testing against python 3.4, and pyarrow 0.10.0 needs 3.5+
putting the workers back on-line until i figure out what to do next.
shane
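a minimal sketch of the version gate this implies -- the 3.5 floor comes from the pyarrow 0.10.0 requirement discussed here, and the helper name is made up for illustration:

```python
import sys

# pyarrow 0.10.0 needs python >= 3.5, but the workers were still testing
# against 3.4, so any upgrade script has to check the interpreter first.
# (hypothetical helper, not taken from the actual jenkins configs)
def can_install_pyarrow_0_10():
    return sys.version_info[:2] >= (3, 5)

print(can_install_pyarrow_0_10())  # True on python 3.5+
```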
On Wed, Aug 8, 2018 at 10:31 AM, shane knapp wrote:
> pyarrow 0.10.0 has been released, and this is importa
at 11:48 AM, shane knapp wrote:
> well... i've been running in to problems (aka dependency hell), and just
> hit a show-stopper:
>
> UnsatisfiableError: The following specifications were found to be in
> conflict:
> - pyarrow 0.10.* -> arrow-cpp 0.10.0.* -> python >
tion. please hold.
>
> On Wed, Aug 8, 2018 at 11:48 AM, shane knapp wrote:
>
>> well... i've been running in to problems (aka dependency hell), and just
>> hit a show-stopper:
>>
>> UnsatisfiableError: The following specifications were found to be in
>
ion in Jenkins and highest version via AppVeyor FWIW.
>> I don't have a strong preference opinion on this since we have been
>> having compatibility issues for each Python version.
>>
>>
>> On Tue, Aug 14, 2018 at 4:15 AM, shane knapp wrote:
>>
>>> hey everyon
versions (latest micros should
> be fine).
>
> On Mon, Aug 20, 2018 at 7:07 PM shane knapp wrote:
>
>> initially, i'd like to just choose one version to have the primary tests
>> against, but i'm also not opposed to supporting more of a matrix. the
>> biggest
.
shane
t of
>>>> commits from master and I got the following error:
>>>>
>>>> *continuous-integration/appveyor/pr *— AppVeyor build failed
>>>>
>>>> due to:
>>>>
>>>> *Build execution time has reached the maximum allowed time
> just FWIW, I talked about this here (https://github.com/apache/spark/pull/20146#issuecomment-406132543) too for possible solutions to handle this.
>
>
>
>
> On Wed, Jul 25, 2018 at 4:32 AM, shane knapp wrote:
>
>> revisiting this thread...
>>
>> i pushed a
ark but also about a lot of Scala libraries that
>>> stopped supporting Scala 2.11, if Spark 2.4 will not support Scala 2.12,
>>> then people will not be able to use them in their Zeppelin, Jupyter and
>>> other notebooks together with Spark.
>>>
>>>
>
>
>
>
>>>> instance, see https://github.com/apache/spark/pull/18447
>>>> I don't explicitly object this idea but at least can I ask who and why
>>>> this was started?
>>>> Is it for notification purpose or to save resource? Did I miss some
>>>> discussion about this?
>>>>
>>>>
mastering-kafka-streams
> >> > Follow me at https://twitter.com/jaceklaskowski
> >>
>
>
> --
> Marcelo
>
>
> On Thu, Sep 6, 2018 at 12:32 AM Wenchen Fan
>>>>> wrote:
>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> I've cut the branch-2.4 since all the major blockers are resolved. If
>>>>>> no objections I'll shortly followup with an RC to get the QA started in
>>>>>> parallel.
>>>>>>
>>>>>> Committers, please only merge PRs to branch-2.4 that are bug fixes,
>>>>>> performance regression fixes, document changes, or test suites changes.
>>>>>>
>>>>>> Thanks,
>>>>>> Wenchen
>>>>>>
>>>>>
you, Shane! :D
>
> Bests,
> Dongjoon.
>
> On Fri, Sep 7, 2018 at 9:51 AM shane knapp wrote:
>
>> i'll try and get to the 2.4 branch stuff today...
>>
>>
this is done.
On Mon, Jul 9, 2018 at 6:48 PM, shane knapp wrote:
> we need to update docker to something more modern (17.05.0-ce ->
> 18.03.1-ce), so i have taken the two ubuntu workers offline and once the
> current builds finish, i will perform the update.
>
> this shouldn't take more than an hour.
we need to update docker to something more modern (17.05.0-ce ->
18.03.1-ce), so i have taken the two ubuntu workers offline and once the
current builds finish, i will perform the update.
this shouldn't take more than an hour.
shane
i'll be taking amp-jenkins-staging-worker-0{1,2} offline to upgrade
minikube to v0.28.0.
this is currently blocking: https://github.com/apache/spark/pull/21583
this should be a relatively short downtime, and i'll reply back here when
it's done.
shane
PM, shane knapp wrote:
> i'll be taking amp-jenkins-staging-worker-0{1,2} offline to upgrade
> minikube to v0.28.0.
>
> this is currently blocking: https://github.com/apache/spark/pull/21583
>
> this should be a relatively short downtime, and i'll reply back here when
> it's done.
i'm seeing some strange docker/minikube errors, so i'm currently rebooting
the boxes. when they're back up, i will retrigger any killed builds and
send an all-clear.
On Wed, Jul 11, 2018 at 7:40 PM, shane knapp wrote:
> done, and the workers are back online.
>
> $ pssh -h ubuntu_worke
ok, things seem much happier now.
On Wed, Jul 11, 2018 at 8:57 PM, shane knapp wrote:
> i'm seeing some strange docker/minikube errors, so i'm currently rebooting
> the boxes. when they're back up, i will retrigger any killed builds and
> send an all-clear.
>
> On Wed, Jul 11,
after upgrading minikube to v0.28.0 and much wailing and gnashing of teeth,
it was discovered that v0.25.0 actually *works* as expected and the k8s
integration tests are now green!
side note, i've also opportunistically upgraded the minikube VM drivers
from kvm to kvm2.
shane
est-maven-hadoop-2.7
>
> Timeouts by day:
> 2018-01-09  4
> 2018-01-10  13
> 2018-01-11  27
> 2018-01-12  74
> 2018-01-13  9
> 2018-01-14  2
> 2018-01-15  8
> 2018-01-16  34
>
> Total builds: 4112
> Total timeouts: 171
> Percentage of all builds timing out: 4.16%
on.
shane
the problem was identified and fixed, and we should be good as of about an
hour ago.
sorry for any inconvenience!
On Mon, Apr 2, 2018 at 4:15 PM, shane knapp <skn...@berkeley.edu> wrote:
> hey all!
>
> we're having network issues on campus right now, and the jenkins workers
>
this apparently caused jenkins to get wedged overnight. i'll restart it
now.
On Mon, Apr 2, 2018 at 9:12 PM, shane knapp <skn...@berkeley.edu> wrote:
> the problem was identified and fixed, and we should be good as of about an
> hour ago.
>
> sorry for any inconvenience!
...and we're back!
On Tue, Apr 3, 2018 at 8:10 AM, shane knapp <skn...@berkeley.edu> wrote:
> this apparently caused jenkins to get wedged overnight. i'll restart
> it now.
>
> On Mon, Apr 2, 2018 at 9:12 PM, shane knapp <skn...@berkeley.edu> wrote:
>
>> the