+1
On Thu, Jan 22, 2015 at 2:42 PM, Max Michels (JIRA) j...@apache.org wrote:
[ https://issues.apache.org/jira/browse/FLINK-1410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14287423#comment-14287423 ]
Max Michels commented on FLINK-1410:
I've updated the corresponding jira ticket.
On Fri, Jan 30, 2015 at 5:46 PM, Till Rohrmann trohrm...@apache.org wrote:
I looked into it, and the problem is a deserialization issue on
the TaskManager side. Somehow the system is not capable of sending around
InputSplits whose classes
It looks to me that the TaskManager does not receive a
ConsumerNotificationResult after having sent the ScheduleOrUpdateConsumers
message. This can either mean that something went wrong in the
ExecutionGraph.scheduleOrUpdateConsumers method or the connection was
disassociated for some reason. The
Hi Andra,
have you tried increasing the number of network buffers in your cluster?
You can control it via the configuration value:
taskmanager.network.numberOfBuffers: #numberBuffers
Greets,
Till
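For reference, that setting lives in conf/flink-conf.yaml; the value below is only an illustrative starting point, not a tuned recommendation:

```yaml
# Total number of network buffers allocated per TaskManager (illustrative).
taskmanager.network.numberOfBuffers: 4096
```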
On Mon, Feb 9, 2015 at 9:56 AM, Andra Lungu lungu.an...@gmail.com wrote:
Hello everyone,
I am
Hi,
I found an issue with the yarn binaries.
In flink-0.8.0-bin-hadoop2-yarn.tgz the plan visualizer does not work. The
reason is that the resources folder with the javascript files is not copied
to flink-dist.
I'm a little bit undecided whether this is a blocker or not. It is
definitely a bad
Awesome :-)
On Wed, Feb 11, 2015 at 4:27 PM, Paris Carbone par...@kth.se wrote:
Congratulations! Very exciting!
Paris
On 11 Feb 2015, at 15:24, Ufuk Celebi u...@apache.org wrote:
Superb. :-)
On 11 Feb 2015, at 15:00, Kostas Tzoumas ktzou...@apache.org wrote:
Nice!!! Welcome
+1
Definitely very helpful for users and developers.
On Wed, Feb 18, 2015 at 5:21 PM, Stephan Ewen se...@apache.org wrote:
+1
The website should have the latest stable docs and the latest snapshot
docs. The snapshot docs need not be daily up to date for a start.
On Wed, Feb 18, 2015 at
Dependency conflicts were also the reason why we have to use a different
Akka version for the Hadoop 2.0.0-alpha build profile.
Thus, +1.
On Wed, Feb 18, 2015 at 3:48 PM, Robert Metzger rmetz...@apache.org wrote:
I'm also in favor of shading commonly used libraries to resolve this issue
for
+1
On Mon, Feb 16, 2015 at 3:38 PM, Aljoscha Krettek aljos...@apache.org
wrote:
+1
On Mon, Feb 16, 2015 at 3:18 PM, Fabian Hueske fhue...@gmail.com wrote:
+1
2015-02-15 17:47 GMT+01:00 Stephan Ewen se...@apache.org:
I thought about adding a wiki page for that.
On Sat, Feb 14,
+1
On Tue, Feb 17, 2015 at 1:34 PM, Kostas Tzoumas ktzou...@apache.org wrote:
+1
On Tue, Feb 17, 2015 at 12:14 PM, Márton Balassi mbala...@apache.org
wrote:
When it comes to the current use cases I'm for this separation.
@Ufuk: As Gyula has already pointed out with the current design of
I think that the machines have lost connection. That is most likely
connected to the heartbeat interval of the watch or transport failure
detector. The transport failure detector should actually be set to a
heartbeat interval of 1000 s and consequently it should not cause any
problems.
Which
Yes actually the timeouts should not really matter. However, an exception
in the InputSplitAssigner should happen in the actor thread and thus cause
the actor to stop. This should be logged by the supervisor.
I just checked and the method InputSplitAssigner.getNextInputSplit is not
supposed to
I like the idea of having a news list as well :-) +1
On Fri, Jan 9, 2015 at 7:36 PM, Ted Dunning ted.dunn...@gmail.com wrote:
Would the user list do for now?
On Fri, Jan 9, 2015 at 7:27 AM, Robert Metzger rmetz...@apache.org
wrote:
Our PMC Chair or an ASF member has to request a list:
Yeah I agree with that.
On Mon, Jan 12, 2015 at 11:30 AM, Ufuk Celebi u...@apache.org wrote:
On Mon, Jan 12, 2015 at 11:22 AM, Stephan Ewen se...@apache.org wrote:
It would be good to have the patch, but it is also a very tricky patch, so pushing it hastily may be problematic.
I
The kryo underflow should be fixed with the PR [1].
[1] https://github.com/apache/flink/pull/391
On Thu, Feb 12, 2015 at 4:10 PM, Nam-Luc Tran namluc.t...@euranova.eu
wrote:
Without the .returns(...) statement it yelled about type erasure.
Putting .returns(Centroid25.class) did the trick.
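The underlying type-erasure issue can be seen in plain Java, independent of Flink; this standalone sketch (names are ours) shows that a generic container's element type no longer exists at runtime, which is why a hint like .returns(...) is needed:

```java
import java.util.ArrayList;
import java.util.List;

public class TypeErasureDemo {
    // After compilation both lists are plain ArrayLists: the element type
    // parameter has been erased, so a framework cannot inspect it at runtime.
    public static boolean erased() {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();
        return strings.getClass() == ints.getClass();
    }

    public static void main(String[] args) {
        System.out.println(erased()); // prints "true"
    }
}
```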
, 2015 at 10:40 AM, Till Rohrmann trohrm...@apache.org
wrote:
It looks to me that the TaskManager does not receive a
ConsumerNotificationResult after having sent the
ScheduleOrUpdateConsumers
message. This can either mean that something went wrong in
ExecutionGraph.scheduleOrUpdateConsumers
Good catch Rui Zhu. Thanks a lot, I'll fix it.
On Wed, Mar 18, 2015 at 2:21 AM, Rui Zhu rui.tyler@gmail.com wrote:
Hello,
I just found a typo in the Cluster Setup documentation. In the HDFS Setup
section, the command for starting HDFS has a typo: when we
go into the
Do we already enforce the official Scala style guide strictly?
On Mon, Mar 16, 2015 at 4:57 PM, Aljoscha Krettek aljos...@apache.org
wrote:
I'm already sticking to the official Scala style guide, with the
exception of the 100-character line length.
On Mar 16, 2015 3:27 PM, Till Rohrmann trohrm
https://gist.github.com/viduranga/e7549ef818c6a2af73e9#file-flink-vidura-jobmanager-localhost-log
On Mar 11, 2015, at 11:32 PM, Till Rohrmann trohrm...@apache.org
wrote:
Hi Dulaj,
sorry for my late response. It looks as if the JobClient tries to
connect
to the JobManager
tried with it is shut down as well) and I also double-checked the hosts files.
I had Little Snitch installed, but I also tried uninstalling it.
Isn’t there a way around it without using DNS to resolve localhost?
On Mar 16, 2015, at 10:04 PM, Till Rohrmann trohrm...@apache.org
wrote:
It is really
Hi Zhou Yi,
welcome to the Flink community. Great to hear that you're gonna work on
Gelly. If you have any problems getting started, then let us know.
Cheers,
Till
On Tue, Mar 17, 2015 at 9:22 AM, Stephan Ewen se...@apache.org wrote:
Hi Zhou Yi!
Welcome to the Flink community. Gelly (and
Putting the Scala and Java API into the same module means that we'll have
more mixed Java/Scala projects, right? I just want to check that everyone is
aware of it, considering our latest experiences with this kind of module.
On Tue, Mar 17, 2015 at 2:21 PM, Ufuk Celebi u...@apache.org wrote:
+1
+1 for Scala :-)
On Sat, Mar 7, 2015 at 1:56 PM, Márton Balassi balassi.mar...@gmail.com
wrote:
I'm strongly for consistency and personally would prefer Scala as a default
- thus making the shorter page the default.
On Sat, Mar 7, 2015 at 1:47 PM, Stephan Ewen se...@apache.org wrote:
I
Hi Dulaj,
sorry for my late response. It looks as if the JobClient tries to connect
to the JobManager using its IPv6 address instead of IPv4. Akka is really picky
when it comes to remote addresses. If Akka binds to the FQDN, then other
ActorSystems which try to connect to it using its IP address won't be
Yes, this means that a task has finished its computation and can be removed
from the TaskManager.
On Fri, Mar 6, 2015 at 11:44 AM, Dulaj Viduranga vidura...@icloud.com
wrote:
Thank you all. IntelliJ shows “Unregister task with execution ID
(something)” a couple of times in the output. But I guess
Have you run the 20 builds with the new shading code? With the new shading,
the TaskManagerFailsITCase should no longer fail. If it still does, then we
have to look into it again.
On Thu, Mar 12, 2015 at 2:01 PM, Stephan Ewen se...@apache.org wrote:
I am also big time skeptical.
There are some
+1 for removal of old API
On Mar 10, 2015 5:41 PM, Fabian Hueske fhue...@gmail.com wrote:
And I'm +1 for removing the old API with the next release.
2015-03-10 17:38 GMT+01:00 Fabian Hueske fhue...@gmail.com:
Yeah, I spotted a good amount of optimizer tests that depend on the Record API.
What do the logs say? It looks as if there is some issue with the
TaskManager start-up, because the main thread is in the method
waitForTaskManagersToBeRegistered. This happens for the initial
ForkableFlinkMiniCluster start.
On Wed, Mar 25, 2015 at 1:50 PM, Ufuk Celebi u...@apache.org wrote:
I saw a
+Table
On Thu, Mar 26, 2015 at 9:32 AM, Márton Balassi balassi.mar...@gmail.com
wrote:
+DataTable
On Thu, Mar 26, 2015 at 9:29 AM, Markl, Volker, Prof. Dr.
volker.ma...@tu-berlin.de wrote:
+Table
I also agree with that line of argument (think SQL ;-) )
-Ursprüngliche
+1
On Sun, Mar 29, 2015 at 5:04 PM, Chiwan Park chiwanp...@icloud.com wrote:
+1
Good idea. Users can accept API changes of the “flink-staging” module with
the “Beta” badge.
Regards.
Chiwan Park (Sent with iPhone)
On Mar 29, 2015, at 11:38 PM, Robert Metzger rmetz...@apache.org
wrote:
Hi,
I also like the idea. +1
On Wed, Apr 1, 2015 at 12:20 PM, Robert Metzger rmetz...@apache.org wrote:
Cool. I would like to have the ability to search the docs, so +1 for this
idea!
On Wed, Apr 1, 2015 at 12:10 PM, Ufuk Celebi u...@apache.org wrote:
Hey all,
I think our documentation
+1 for Scala 2.11
On Mon, Mar 2, 2015 at 5:02 PM, Alexander Alexandrov
alexander.s.alexand...@gmail.com wrote:
Spark currently only provides pre-built packages for 2.10 and requires a custom build
for 2.11.
Not sure whether this is the best idea, but I can see the benefits from a
project management
, 2015 at 11:02 PM, Till Rohrmann trohrm...@apache.org
wrote:
Is this reproducible? If so, then a stack trace of the JVM would be
helpful. With the stack trace we would know which test case stalls.
On Wed, Mar 4, 2015 at 9:46 PM, Henry Saputra (JIRA) j...@apache.org
wrote:
Henry
Henry Saputra created FLINK-1651:
Catching the NullPointerException and throwing an IllegalArgumentException
with a meaningful message might clarify things.
Considering that it only affects the TestBaseUtils, it should not be a big
deal to change it.
On Fri, Feb 27, 2015 at 10:30 AM, Szabó Péter nemderogator...@gmail.com
wrote:
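A minimal sketch of the suggested change; the helper and parameter names here are hypothetical, not the actual TestBaseUtils code:

```java
public class Checks {
    // Instead of letting a bare NullPointerException escape, wrap it in an
    // IllegalArgumentException that names the offending parameter.
    public static String normalize(String value, String paramName) {
        try {
            return value.trim(); // stands in for code that would throw the NPE
        } catch (NullPointerException e) {
            throw new IllegalArgumentException(
                "Parameter '" + paramName + "' must not be null.", e);
        }
    }
}
```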
It depends on how you started Flink. If you started a local cluster, then
the TaskManager log is contained in the JobManager log; we just don't see
the respective log output in the snippet you posted. If you started a
TaskManager independently, either via taskmanager.sh or via start-cluster.sh,
then
, Till Rohrmann trohrm...@apache.org
wrote:
What does the JobManager log say? I think Stephan added some more logging
output which helps us to debug this problem.
On Thu, Mar 5, 2015 at 9:36 AM, Dulaj Viduranga vidura...@icloud.com
wrote:
Using start-locat.sh.
I’m using the original
If the streaming-examples module uses the classifier tag to add the
test-core dependency, then we should change it to the type tag, as
recommended by Maven [1]. Otherwise, build failures might occur if the
install lifecycle is not executed.
The dependency import should look like:
dependency
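The snippet is cut off above; presumably it continues along the lines of the sketch below. The artifact coordinates are illustrative; the point is the type and scope elements:

```xml
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-core</artifactId>
  <version>${project.version}</version>
  <type>test-jar</type>
  <scope>test</scope>
</dependency>
```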
Could you check the definition of the collect method in the DataSet.scala
file? Does it contain parentheses or not?
On Tue, Apr 14, 2015 at 3:48 PM, Felix Neutatz neut...@googlemail.com
wrote:
I use the latest maven snapshot:
<dependency>
<groupId>org.apache.flink</groupId>
+1
On Mon, Apr 20, 2015 at 2:50 PM, Timo Walther twal...@apache.org wrote:
+1
On 20.04.2015 14:49, Gyula Fóra wrote:
+1
On Mon, Apr 20, 2015 at 2:41 PM, Fabian Hueske fhue...@gmail.com wrote:
+1
2015-04-20 14:39 GMT+02:00 Maximilian Michels m...@apache.org:
+1 Let's merge it to
Concerning the failed builds in the hadoop2.0.0-alpha profile I see a lot
of
07:47:57,927 ERROR akka.actor.ActorSystemImpl
- Uncaught fatal error from thread
[flink-akka.remote.default-remote-dispatcher-7] shutting down ActorSystem
[flink]
java.lang.VerifyError: (class:
That's a good solution. In order to deal with ranges which overlap two
intervals you have to create multiple coarse-grained join keys. One key
for each interval contained in the range.
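The interval-key scheme can be sketched as follows (a hypothetical helper, assuming non-negative coordinates and a fixed interval width):

```java
import java.util.ArrayList;
import java.util.List;

public class IntervalKeys {
    // Emit one coarse-grained join key per interval of the given width that
    // the range [start, end] overlaps. A range spanning two intervals thus
    // produces two keys, as described above. Assumes non-negative coordinates.
    public static List<Long> keysFor(long start, long end, long width) {
        List<Long> keys = new ArrayList<>();
        for (long interval = start / width; interval <= end / width; interval++) {
            keys.add(interval);
        }
        return keys;
    }
}
```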
Cheers,
Till
On Apr 26, 2015 11:22 PM, Alexander Alexandrov
alexander.s.alexand...@gmail.com wrote:
I thought
Hi,
we would need a little bit more of background on the job you're running and
the cluster setup to help you. Could you please post this information on
the u...@flink.apache.org ML, where it belongs?
Cheers,
Till
On Tue, Apr 28, 2015 at 8:45 AM, 东方不败 dashujudechunt...@163.com wrote:
I am
I would also be in favour of making the distinction between the API and
common API layer more clear by using different names. This will ease the
understanding of the source code.
In the wake of a possible renaming we could also get rid of the legacy code
org.apache.flink.optimizer.dag.MatchNode
Why not do two separate joins, union the results, and do a distinct
operation on the combined key?
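In plain Java (the field names and the combined-key encoding are ours), the suggested plan looks like this:

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class TwoJoins {
    // Join `rows` with `keys` once on field 0 and once on field 1, union the
    // two results, and deduplicate on the combined key -- the plan suggested
    // above, sketched with collections instead of DataSets.
    public static Set<String> joinOnEitherField(List<String[]> rows, Set<String> keys) {
        Stream<String[]> joinOnFirst = rows.stream().filter(r -> keys.contains(r[0]));
        Stream<String[]> joinOnSecond = rows.stream().filter(r -> keys.contains(r[1]));
        return Stream.concat(joinOnFirst, joinOnSecond) // union of both joins
                .map(r -> r[0] + "," + r[1])            // combined key
                .collect(Collectors.toSet());           // distinct
    }
}
```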
On Fri, Apr 17, 2015 at 9:42 AM, Aljoscha Krettek aljos...@apache.org
wrote:
So, the first thing is a feature of the Java API that removes
duplicate fields in keys, so an equi-join on (0,0)
+1 I thoroughly tested Flink on Mac OS:
- I ran the start-cluster.sh, stop-cluster.sh and start-webclient.sh scripts
- Started 1 JM and 2 TM
- Checked the logs and out files for exceptions and error messages
- Ran all the examples using bin/flink
- Ran the wordcount and pagerank example
Thanks for your great work Ufuk :-)
On Apr 12, 2015 5:27 PM, Henry Saputra henry.sapu...@gmail.com wrote:
This is great news! Thanks for driving the release, Ufuk.
Sorry I missed verifying the release this time.
- Henry
On Sunday, April 12, 2015, Ufuk Celebi u...@apache.org wrote:
Hey all,
The error looks as if there is already another JobManager started with the
FAKE_JOB_MANAGER name. This might be caused by a JobManager which has not
yet completely shut down.
On Tue, Apr 7, 2015 at 9:52 AM, Márton Balassi (JIRA) j...@apache.org
wrote:
Márton Balassi created FLINK-1831:
Great to hear. This should no longer be a pain point once we support proper
cross validation.
On Tue, Jun 2, 2015 at 11:11 AM, Felix Neutatz neut...@googlemail.com
wrote:
Yes, grid search solved the problem :)
2015-06-02 11:07 GMT+02:00 Till Rohrmann till.rohrm...@gmail.com:
The SGD
...
2015-06-01 20:33 GMT+10:00 Till Rohrmann trohrm...@apache.org:
Since MLR uses stochastic gradient descent, you probably have to configure the step size right. SGD is very sensitive to the right step size choice.
If the step size is too high, then the SGD algorithm does not converge
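The sensitivity is easy to demonstrate on a toy quadratic (this is not MLR, just plain gradient descent on f(w) = w², whose gradient is 2w):

```java
public class StepSizeDemo {
    // Gradient descent on f(w) = w^2 starting from w = 1.0. With a small
    // step size |w| shrinks toward 0; with a step size above 1.0 each update
    // overshoots and |w| grows without bound.
    public static double finalDistance(double stepSize, int steps) {
        double w = 1.0;
        for (int i = 0; i < steps; i++) {
            w -= stepSize * 2 * w; // w <- w - eta * f'(w)
        }
        return Math.abs(w);
    }
}
```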
(HashPartition.java:310)
...
Best regards,
Felix
2015-06-04 10:19 GMT+02:00 Felix Neutatz neut...@googlemail.com:
Yes, I will try it again with the newest update :)
2015-06-04 10:17 GMT+02:00 Till Rohrmann till.rohrm...@gmail.com:
If the first error is not fixed by Chiwans
strategy here because one cannot get rid of the duplicate join keys.
On Mon, Jun 8, 2015 at 1:59 PM Till Rohrmann trohrm...@apache.org wrote:
Hi Felix, I tried to reproduce the problem with the *Hash join exceeded
maximum number of recursions, without reducing partitions enough to be
memory
I also encountered a failing TaskManagerFailsWithSlotSharingITCase using
Java8. I could, however, not reproduce the error a second time. The stack
trace is:
The JobManager should handle hard failing task manager with slot
Hi Pieter-Jan,
I'm not aware of an Eclipse or IntelliJ auto-format profile. I think that
all Flink contributors apply their style changes manually. The maven output
should tell you quite precisely what's wrong and in which file the
checkstyle errors occur. Moreover, applying an IDE auto format is
already been formatted, it will show up changed in IntelliJ but Git
will recognize that it is in fact unmodified. That way, we would no
longer touch files we've not actually modified.
Regards,
Pieter-Jan Van Aeken
Op Dinsdag, 09/06/2015 om 11:10 schreef Till Rohrmann:
Hi
can be found here:
https://github.com/FelixNeutatz/IMPRO-3.SS15/blob/8b679f1c2808a2c6d6900824409fbd47e8bed826/NullPointerException.txt
Best regards,
Felix
2015-06-04 19:41 GMT+02:00 Till Rohrmann till.rohrm...@gmail.com:
I think it is not a problem of join hints, but rather of too
better because it
is only in some cases necessary to return the id. The special predict
Operation would save this overhead.
Best regards,
Felix
Am 04.06.2015 7:56 nachm. schrieb Till Rohrmann till.rohrm...@gmail.com
:
I see your problem. One way to solve the problem is to implement
classes of the pipeline as well,
in
order to be able to pass the ID through the whole pipeline.
Best regards,
Felix
Am 06.06.2015 9:46 vorm. schrieb Till Rohrmann trohrm...@apache.org
:
Then you only have to provide an implicit PredictOperation[SVM, (T,
Int),
(LabeledVector, Int
AM, Till Rohrmann till.rohrm...@gmail.com
wrote:
You're right Felix. You need to provide the `FitOperation` and
`PredictOperation` for the `Predictor` you want to use and the
`FitOperation` and `TransformOperation` for all `Transformer`s you want
to
chain in front of the `Predictor
I agree with Theo. I think it’s a nice feature to have as part of the
standard API because only few users will be aware of something like
DataSetUtils. However, as a first version we can make it part of
DataSetUtils.
Cheers,
Till
On Wed, Jun 10, 2015 at 11:52 AM Theodore Vasiloudis
Btw: I noticed that all streaming modules depend on flink-core,
flink-runtime, flink-clients and flink-java. Is there a particular reason
why the streaming connectors depend on flink-clients and flink-java?
On Wed, Jun 10, 2015 at 3:41 PM Till Rohrmann trohrm...@apache.org wrote:
I see
that they have to tweak this parameter.
On Thu, Jun 4, 2015 at 2:54 PM, Ted Dunning ted.dunn...@gmail.com wrote:
On Thu, Jun 4, 2015 at 1:26 PM, Till Rohrmann trohrm...@apache.org
wrote:
Maybe also the default learning rate of 0.1 is set too high.
Could be.
But grid search on learning rate
+1 :-)
On Wed, Jun 3, 2015 at 4:53 PM, Vasiliki Kalavri vasilikikala...@gmail.com
wrote:
Hi Sachin,
great idea to keep a blog! Thanks a lot for sharing :))
-V.
On 3 June 2015 at 16:41, Sachin Goel sachingoel0...@gmail.com wrote:
Hi everyone
I'm maintaining a blog detailing my work
I'm also in favour of quickly fixing the failing test cases but I think
that blocking the master is a kind of drastic measure. IMO this creates a
culture of blaming someone whereas I would prefer a more proactive
approach. When you see a failing test case and know that someone recently
worked on
If the first error is not fixed by Chiwan's PR, then we should create a JIRA
for it to not forget it.
@Felix: Chiwan's PR is here [1]. Could you try to run ALS again with this
version?
Cheers,
Till
[1] https://github.com/apache/flink/pull/751
On Thu, Jun 4, 2015 at 10:10 AM, Chiwan Park
wrote:
We should probably look into this nevertheless. Requiring a full grid
search for a simple algorithm like MLR sounds like overkill.
Have you written down the math of your implementation somewhere?
-M
- Ursprüngliche Nachricht -
Von: Till Rohrmann till.rohrm
We should ping the Zeppelin guys to update their Flink dependency.
On Wed, Jun 24, 2015 at 2:34 PM, Maximilian Michels m...@apache.org wrote:
I'm so happy we have pushed it out :) It took a while but I think we can be
very pleased with the result.
I will post an announcement to the user/dev
+1
On Tue, Jun 23, 2015 at 3:16 PM, Robert Metzger rmetz...@apache.org wrote:
+1
On Tue, Jun 23, 2015 at 11:31 AM, Fabian Hueske fhue...@gmail.com wrote:
+1
2015-06-22 17:44 GMT+02:00 Stephan Ewen se...@apache.org:
+1
On Fri, Jun 19, 2015 at 10:48 AM, Matthias J. Sax
?
Cheers, Fabian
2015-06-19 15:08 GMT+02:00 Till Rohrmann trohrm...@apache.org:
What does “forever” mean? Usually it's the case that you see a steep
decline in performance once the system starts spilling data to disk,
because of the disk I/O bottleneck.
The system always starts spilling to disk
Hi Andra,
the problem seems to be that the deployment of some tasks takes longer than
100s. From the stack trace it looks as if you're not using the latest
master.
We had problems with a previous version where the deployment call waited for
the TM to completely download the user code jars. For
I might have found another release blocker. While running some cluster
tests I also tried to run the `ConnectedComponents` example. However,
sometimes the example couldn't be executed because the scheduler could not
schedule co-located tasks, `NoResourceAvailableException`, even though it
should
release blocker and we need to fix it.
On Mon, Jun 15, 2015 at 5:04 PM, Till Rohrmann
trohrm...@apache.org
wrote:
I might have found another release blocker. While running some cluster tests I also tried to run the `ConnectedComponents` example
.
Furthermore, this also applies to Gelly and FlinkML.
Cheers,
Till
On Fri, Jun 12, 2015 at 9:16 AM Till Rohrmann trohrm...@apache.org wrote:
I'm currently going through the license file and I discovered some
skeletons in our closet. This has to be merged as well. But I'm still
working on it (we
What about the shaded jars?
On Fri, Jun 12, 2015 at 11:32 AM Ufuk Celebi u...@apache.org wrote:
@Max: for the new RC. Can you make sure to set the variables correctly
with regard to stable/snapshot versions in the docs?
are
in. Plus, we need to include all Flink libraries in flink-dist. Are you
going to fix that as well, Till?
On Fri, Jun 12, 2015 at 9:53 AM, Ufuk Celebi u...@apache.org wrote:
On 12 Jun 2015, at 09:45, Till Rohrmann trohrm...@apache.org wrote:
Hi guys,
I just noticed
, 2015 at 10:29 AM Till Rohrmann trohrm...@apache.org wrote:
Well, I think the initial idea was to keep the dist jar as small as possible
and therefore we did not include the libraries. I'm not sure whether we can
decide this here ad-hoc. If the community says that we shall include these
libraries
I'm currently going through the license file and I discovered some
skeletons in our closet. This has to be merged as well. But I'm still
working on it (we have a lot of dependencies).
Cheers,
Till
On Fri, Jun 12, 2015 at 12:51 AM Ufuk Celebi u...@apache.org wrote:
On 12 Jun 2015, at 00:49,
it with the
LICENSE [either we find something before the LICENSE update or we only have
to review the LICENSE change]
Since this is not a vote yet, it doesn't really matter, but I'm leaning
towards b).
On Fri, Jun 12, 2015 at 11:43 AM, Till Rohrmann till.rohrm...@gmail.com
wrote:
What about
, Jun 12, 2015 at 9:44 AM, Till Rohrmann trohrm...@apache.org
wrote:
I've finished the legal check of the source and binary distribution. The PR with the LICENSE and NOTICE file updates can be found here [1].
What I haven't done yet is addressing the issue with the shaded
dependencies
Transformations page pointing to this...
What do you think?
On Wed, Jun 10, 2015 at 12:33 PM, Till Rohrmann till.rohrm...@gmail.com
wrote:
I agree with Theo. I think it’s a nice feature to have as part of the
standard API because only few users will be aware of something like
DataSetUtils
+1 for reverting.
On Thu, Jun 18, 2015 at 10:11 AM Aljoscha Krettek aljos...@apache.org
wrote:
+1 I also think it's the cleanest solution for now. The table API still
works, just without support for null values.
On Thu, 18 Jun 2015 at 10:08 Maximilian Michels m...@apache.org wrote:
I also
Hi guys,
I just updated our LICENSE of the binary distribution and noticed that we
also list dependencies which are licensed under Apache-2.0. As far as I
understand the ASF guidelines [1], this is not strictly necessary. Since it
is a lot of work to keep the list up to date, I was wondering
Yes since it is clearly a deadlock in the scheduler, the current version
shouldn't be released.
On Wed, Jun 10, 2015 at 5:48 PM Ufuk Celebi u...@apache.org wrote:
On 10 Jun 2015, at 16:18, Maximilian Michels m...@apache.org wrote:
I'm debugging the TaskManagerFailsWithSlotSharingITCase.
, Till Rohrmann till.rohrm...@gmail.com
wrote:
Hey Mikio,
yes you’re right. The SGD only needs to know the gradient of the loss
function and some means to update the weights in accordance with the
regularization scheme. Additionally, we also need to be able to compute
the
loss
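The separation described there can be sketched as a small interface pair; these names are ours, not FlinkML's actual API:

```java
public class Losses {
    // SGD only needs the loss gradient; the weight update and regularization
    // live elsewhere, so the loss is pluggable behind this interface.
    interface LossFunction {
        double loss(double prediction, double label);
        double gradient(double prediction, double label);
    }

    // Squared loss as one concrete instance: L = (p - y)^2 / 2, dL/dp = p - y.
    static class SquaredLoss implements LossFunction {
        public double loss(double p, double y) { return 0.5 * (p - y) * (p - y); }
        public double gradient(double p, double y) { return p - y; }
    }
}
```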
Hi Florian,
I just wrote a patch for this problem. I'll wait until all tests pass and
then merge the fix. Thus, it will be included in the current master in the
late afternoon.
If you don’t want to wait that long, then you can also solve the issue with
+1 for printOnTaskManager(prefix)
On Tue, Jun 2, 2015 at 12:08 PM, Fabian Hueske fhue...@gmail.com wrote:
+1 for writeToWorkerStdOut(prefix)
On Jun 2, 2015 11:42, Aljoscha Krettek aljos...@apache.org wrote:
+1 for printOnTaskManager(prefix)
On Tue, Jun 2, 2015 at 11:35 AM, Robert
: Till Rohrmann trohrm...@apache.org
Authored: Tue Jun 2 14:45:12 2015 +0200
Committer: Till Rohrmann trohrm...@apache.org
Committed: Tue Jun 2 15:34:54 2015 +0200
--
.../apache/flink/ml/classification/SVM.scala| 73
On Jun 29, 2015, at 4:43 PM, Till Rohrmann trohrm...@apache.org wrote:
Hi Chiwan,
when you use the single element predict operation, you always have to
implement the `getModel` method. There you have access to the resulting
parameters and even to the instance to which the `PredictOperation
one binding. I remember that Akka crashed
because of that before.
As a simple fix, can you try and exclude the SLF4J jar from your build
somehow? Or set it to provided in the Flink POM?
On Thu, May 21, 2015 at 11:49 AM, Till Rohrmann trohrm...@apache.org
wrote:
I'll try to reproduce
I'll try to reproduce this problem locally on my machine.
On Thu, May 21, 2015 at 11:25 AM, Fabian Hueske fhue...@gmail.com wrote:
Hi Flink folks,
the Flink interpreter PR for Apache Zeppelin is blocked by a failing test
case (see below).
Does anybody have an idea what is going on and can
Hi Christoph,
the thing with the current implementation of the SparseVector is that you
can only modify entries which are “non-zero”. All other entries are not
represented in the underlying data structures. This means that you have to
create a new SparseVector if you want to set a zero entry to
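A stripped-down sketch of that behaviour (our own minimal class, not Flink's SparseVector): only the non-zero entries are stored, so writing to a zero entry has to allocate a new vector with one extra slot:

```java
import java.util.Arrays;

public class SparseVec {
    // Sorted indices of the non-zero entries and their values. Entries not
    // listed are implicitly zero and have no slot to write into.
    final int[] indices;
    final double[] values;
    final int size;

    SparseVec(int size, int[] indices, double[] values) {
        this.size = size; this.indices = indices; this.values = values;
    }

    double get(int i) {
        int pos = Arrays.binarySearch(indices, i);
        return pos >= 0 ? values[pos] : 0.0;
    }

    // Returns a vector with entry i set to v. An existing non-zero slot is
    // just copied with the new value; a previously-zero entry forces a full
    // rebuild with one extra slot -- the cost described above.
    SparseVec set(int i, double v) {
        int pos = Arrays.binarySearch(indices, i);
        if (pos >= 0) {
            double[] nv = values.clone();
            nv[pos] = v;
            return new SparseVec(size, indices, nv);
        }
        int ins = -pos - 1; // insertion point reported by binarySearch
        int[] ni = new int[indices.length + 1];
        double[] nv = new double[values.length + 1];
        System.arraycopy(indices, 0, ni, 0, ins);
        System.arraycopy(values, 0, nv, 0, ins);
        ni[ins] = i; nv[ins] = v;
        System.arraycopy(indices, ins, ni, ins + 1, indices.length - ins);
        System.arraycopy(values, ins, nv, ins + 1, values.length - ins);
        return new SparseVec(size, ni, nv);
    }
}
```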
I think Sachin wants to provide something similar to the LossFunction but
for the convergence criterion. This would mean that the user can specify a
convergence calculator, for example to the optimization framework, which is
used from within an iterateWithTermination call.
I think this is a good
Good initiative Chiwan. +1 for a more unified code style.
On Tue, Aug 18, 2015 at 10:25 AM, Chiwan Park chiwanp...@apache.org wrote:
Okay, I’ll create a JIRA issue covering this topic.
Regards,
Chiwan Park
On Aug 17, 2015, at 1:17 AM, Stephan Ewen se...@apache.org wrote:
+1 for
+1, there is no point in arguing with Knuth.
On Mon, Aug 17, 2015 at 1:07 AM, Henry Saputra henry.sapu...@gmail.com
wrote:
+1 as well.
This is a great follow-up to my previous email about adding details
in JIRA, which is also being echoed by Fabian.
- Henry
On Sun, Aug 16, 2015 at 3:45
Congrats and welcome on board Chesnay :-)
On Thu, Aug 20, 2015 at 11:18 AM, Robert Metzger rmetz...@apache.org
wrote:
The Project Management Committee (PMC) for Apache Flink has asked Chesnay
Schepler to become a committer and we are pleased to announce that they
have accepted.
Chesnay has
I'm also in favor of JIRA, because I fear that nobody will keep the wiki
page in sync. Maybe we can assign a special label for test stability to
these JIRA issues. Then we can quickly find all currently unstable test
cases.
On Fri, Aug 21, 2015 at 11:02 AM, Robert Metzger rmetz...@apache.org
Hi MaGuoWei,
this is not a problem. If you look at the implementation of
SlotAllocationFuture.setFutureAction, you’ll see that the method is
synchronized on a lock which is also used to complete the future.
Furthermore, you’ll see that the slot variable is checked upon setting an
action and if
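The pattern being described can be sketched like this (a simplified stand-in, not the actual SlotAllocationFuture; assumes non-null completion values):

```java
import java.util.function.Consumer;

public class SlotFuture<T> {
    private final Object lock = new Object();
    private T value;            // guarded by lock; null means "not completed"
    private Consumer<T> action; // guarded by lock

    // Registering the action and completing the future synchronize on the
    // same lock, so an action registered after completion still runs exactly
    // once -- there is no window in which it could be lost.
    public void setFutureAction(Consumer<T> newAction) {
        T completed = null;
        synchronized (lock) {
            if (value != null) {
                completed = value;  // already completed: run the action here
            } else {
                action = newAction; // the completer will run it later
            }
        }
        if (completed != null) {
            newAction.accept(completed);
        }
    }

    public void complete(T result) {
        Consumer<T> toRun;
        synchronized (lock) {
            value = result;
            toRun = action;
        }
        if (toRun != null) {
            toRun.accept(result);
        }
    }
}
```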
Done
On Mon, Jun 29, 2015 at 9:33 AM, Chiwan Park chiwanp...@apache.org wrote:
We should assign FLINK-2066 to Nuno. :)
Regards,
Chiwan Park
On Jun 29, 2015, at 1:21 PM, Márton Balassi balassi.mar...@gmail.com
wrote:
Hey,
Thanks for picking up the issue. This value can be
This might be a Travis hiccup. Was it the first time this happened?
Cheers,
Till
On Mon, Jul 27, 2015 at 9:47 PM, Sachin Goel sachingoel0...@gmail.com
wrote:
A recent Travis build [Job 3] on my forked repo failed with the following
error:
Failed to execute goal