Hi Aida,
The installation has detected Maven version 3.0.3. Please update to 3.3.3 and
try again.
On 08/Mar/2016 14:06, "Aida" wrote:
> Hi all,
>
> Thanks everyone for your responses; really appreciate it.
>
> Eduardo - I tried your suggestions but ran into some issues,
Hi Aida
Run only "build/mvn -DskipTests clean package"
BR
Eduardo Costa Alfaia
Ph.D. Student in Telecommunications Engineering
Università degli Studi di Brescia
Tel: +39 3209333018
On 3/4/16, 16:18, "Aida" <aida1.tef...@gmail.com> wrote:
>Hi all,
Hi,
try http://OAhtvJ5MCA:8080
BR
On 2/19/16, 07:18, "vasbhat" wrote:
>OAhtvJ5MCA
Hi Gourav,
I did a test as you suggested and it is working for me. I am using Spark in local mode,
with master and worker on the same machine. I ran the example in spark-shell with
--packages com.databricks:spark-csv_2.10:1.3.0 without errors.
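For reference, a minimal sketch of reading a CSV once the shell is up this way, assuming Spark 1.x with spark-csv_2.10:1.3.0 on the classpath ("cars.csv" is just a placeholder path):

// read a CSV file through the spark-csv data source
val df = sqlContext.read
  .format("com.databricks.spark.csv")
  .option("header", "true")        // first line contains column names
  .option("inferSchema", "true")   // infer column types from the data
  .load("cars.csv")
df.show()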
BR
From: Gourav Sengupta
Date: Monday,
Hi Guys,
How can I unsubscribe the address e.costaalf...@studenti.unibs.it? It is an alias of my
email e.costaalf...@unibs.it and it is registered on the mailing list.
Thanks
Eduardo Costa Alfaia
PhD Student Telecommunication Engineering
Università degli Studi di Brescia-UNIBS
Thanks Ted.
On Feb 10, 2015, at 20:06, Ted Yu yuzhih...@gmail.com wrote:
Please take a look at:
examples/scala-2.10/src/main/java/org/apache/spark/examples/streaming/JavaDirectKafkaWordCount.java
which was checked in yesterday.
On Sat, Feb 7, 2015 at 10:53 AM, Eduardo Costa Alfaia
Hi Guys,
How could I write the Scala code below in Java?
val KafkaDStreams = (1 to numStreams) map { _ =>
  KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](ssc,
    kafkaParams, topicMap, storageLevel = StorageLevel.MEMORY_ONLY).map(_._2)
}
val unifiedStream =
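For reference, a minimal Scala sketch of the complete receiver-union pattern the snippet above is based on, assuming ssc, kafkaParams, topicMap and numStreams are already defined; the Java version follows the same shape with the JavaStreamingContext API.

import kafka.serializer.StringDecoder
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.kafka.KafkaUtils

// one receiver-based input stream per numStreams, keeping only the message value
val kafkaDStreams = (1 to numStreams).map { _ =>
  KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](
    ssc, kafkaParams, topicMap, StorageLevel.MEMORY_ONLY).map(_._2)
}
// merge the receivers into a single DStream before the word-count logic
val unifiedStream = ssc.union(kafkaDStreams)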
Hi Guys,
I’m getting this error in KafkaWordCount:
TaskSetManager: Lost task 0.0 in stage 4095.0 (TID 1281, 10.20.10.234):
java.lang.ClassCastException: [B cannot be cast to java.lang.String
I don’t think so Sean.
On Feb 5, 2015, at 16:57, Sean Owen so...@cloudera.com wrote:
Is SPARK-4905 / https://github.com/apache/spark/pull/4371/files the same
issue?
On Thu, Feb 5, 2015 at 7:03 AM, Eduardo Costa Alfaia
e.costaalf...@unibs.it wrote:
Hi Guys,
I’m getting this error.
`DefaultDecoder` returns Array[Byte], not String, so the class cast fails here.
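In other words, a minimal sketch under the same receiver-based createStream API (ssc, kafkaParams and topicMap assumed to be in scope): either keep DefaultDecoder and decode the bytes yourself, or ask for Strings up front with StringDecoder.

import kafka.serializer.{DefaultDecoder, StringDecoder}
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.kafka.KafkaUtils

// DefaultDecoder hands the message values back as Array[Byte] ...
val byteValues = KafkaUtils.createStream[String, Array[Byte], StringDecoder, DefaultDecoder](
  ssc, kafkaParams, topicMap, StorageLevel.MEMORY_ONLY).map(_._2)
val decoded = byteValues.map(bytes => new String(bytes, "UTF-8"))

// ... while StringDecoder yields Strings directly, with no cast involved
val stringValues = KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, topicMap, StorageLevel.MEMORY_ONLY).map(_._2)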
Thanks
Jerry
-Original Message-
From: Eduardo Costa Alfaia [mailto:e.costaalf...@unibs.it]
Sent: Friday, February 6, 2015 12:04 AM
To: Sean Owen
Cc: user@spark.apache.org
Subject: Re: Error
Hi Guys,
I would like to add the Kafka parameter val kafkaParams = Map("fetch.message.max.bytes" -> "400")
to the KafkaWordCount Scala code. I've set this variable like this:
val KafkaDStreams = (1 to numStreams) map { _ =>
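For reference, a minimal sketch of the idea (zkQuorum and group are placeholders); the map is then passed as the kafkaParams argument of KafkaUtils.createStream, exactly as in the snippet above.

// consumer properties handed to the Kafka receiver
val kafkaParams = Map(
  "zookeeper.connect" -> zkQuorum,
  "group.id" -> group,
  "fetch.message.max.bytes" -> "400")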
Hi Guys,
any idea how to solve this error?
[error]
/sata_disk/workspace/spark-1.1.1/examples/src/main/scala/org/apache/spark/examples/streaming/KafkaWordCount.scala:76:
missing parameter type for expanded function ((x$6, x$7) => x$6.$plus(x$7))
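A common cause of this message (a guess, not verified against the poster's file) is that the compiler cannot infer the element type of the placeholder function _ + _ on that line; spelling the parameter types out explicitly usually resolves it:

import org.apache.spark.streaming.{Minutes, Seconds}
import org.apache.spark.streaming.StreamingContext._

// windowed count with explicitly typed reduce and inverse-reduce functions
val wordCounts = words.map(x => (x, 1L)).reduceByKeyAndWindow(
  (a: Long, b: Long) => a + b,
  (a: Long, b: Long) => a - b,
  Minutes(10), Seconds(2), 2)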
Hi Guys,
I am doing some tests with JavaKafkaWordCount. My cluster is composed of 8
workers and 1 driver running spark-1.1.0. I am using Kafka too and I have some
questions about it.
1 - When I launch the command:
bin/spark-submit --class org.apache.spark.examples.streaming.JavaKafkaWordCount
Hi guys,
Have the Kafka examples in the master branch been removed?
Thanks
Hi Guys,
I am doing some tests with Spark Streaming and Kafka, but I have seen something
strange. I have modified JavaKafkaWordCount to use reduceByKeyAndWindow and
to print on the screen the accumulated counts of the words. In the beginning
Spark works very well; in each interaction the
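For anyone trying to reproduce this, a minimal sketch of that kind of modification in the Scala API (the Java version is analogous; the checkpoint path and durations are placeholders). Note that the inverse-reduce form of reduceByKeyAndWindow requires a checkpoint directory.

ssc.checkpoint("/tmp/spark-checkpoint")               // required by the inverse-reduce window form
val windowedCounts = words.map(w => (w, 1))
  .reduceByKeyAndWindow((a: Int, b: Int) => a + b,    // counts entering the window
                        (a: Int, b: Int) => a - b,    // counts leaving the window
                        Seconds(60), Seconds(2))
windowedCounts.print()                                // prints a sample of each batch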
On Thu, Nov 6, 2014 at 9:32 AM, Eduardo Costa Alfaia e.costaalf...@unibs.it
wrote:
Hi Guys,
I am doing some tests with Spark Streaming and Kafka, but I have seen
something strange. I have modified JavaKafkaWordCount to use
reduceByKeyAndWindow and to print on the screen
Hi Guys,
Could anyone explain to me how Kafka works with Spark? I am using
JavaKafkaWordCount.java as a test, and the command line is:
./run-example org.apache.spark.streaming.examples.JavaKafkaWordCount
spark://192.168.0.13:7077 computer49:2181 test-consumer-group unibs.it 3
and like a
Hi TD,
I have sent more information now, using 8 workers. The gap is now 27 seconds.
Have you seen it?
Thanks
BR
Ok Andrew,
Thanks
I sent the information from the test with 8 workers and the gap has grown.
On May 4, 2014, at 2:31, Andrew Ash and...@andrewash.com wrote:
From the logs, I see that the print() starts printing stuff 10 seconds
after the context is started. And that 10 seconds is taken by the
. And that does
not seem to be a persistent problem as after that 10 seconds, the data is
being received and processed.
TD
On Fri, May 2, 2014 at 2:14 PM, Eduardo Costa Alfaia e.costaalf...@unibs.it
wrote:
Hi TD,
I got more information today using Spark 1.0 RC3 and the situation
Hi TD,
In my tests with Spark Streaming, I'm using modified JavaNetworkWordCount code
and a program that I wrote that sends words to the Spark worker; I use TCP as
transport. I verified that after starting Spark, it connects to my source, which
then actually starts sending, but the first word count
no
room for processing the received data. It could be that after 30 seconds, the
server disconnects, the receiver terminates, releasing the single slot for
the processing to proceed.
TD
On Tue, Apr 29, 2014 at 2:28 PM, Eduardo Costa Alfaia
e.costaalf...@unibs.it wrote:
Hi TD
are facing?
TD
On Fri, Apr 4, 2014 at 8:03 AM, Eduardo Costa Alfaia
e.costaalf...@unibs.it wrote:
Hi guys,
I would like to know if this piece of code is right to use with a window.
JavaPairDStream<String, Integer> wordCounts = words.map(
new
Hi Guys,
I would like to understand why the driver's RAM goes down. Does the
processing occur only in the workers?
Thanks
# Start Tests
computer1(Worker/Source Stream)
23:57:18 up 12:03, 1 user, load average: 0.03, 0.31, 0.44
total used free shared
Hi all,
Could anyone explain the lines below to me?
computer1 - worker
computer8 - driver(master)
14/04/04 14:24:56 INFO BlockManagerMasterActor$BlockManagerInfo: Added
input-0-1396614314800 in memory on computer1.ant-net:60820 (size: 1262.5
KB, free: 540.3 MB)
14/04/04 14:24:56 INFO
Hi all,
I am doing some tests using JavaNetworkWordCount and I have some
questions about the machine's performance; my tests run for
approximately 2 min.
Why does the RAM memory decrease so significantly? I have done tests with 2 and 3
machines and I got the same behavior.
What should I
Hi Guys,
Could anyone help me understand this driver behavior when I start the
JavaNetworkWordCount?
computer8
16:24:07 up 121 days, 22:21, 12 users, load average: 0.66, 1.27, 1.55
total used free shared buffers
cached
Mem: 5897
Hi all,
I have put this line in my spark-env.sh:
-Dspark.default.parallelism=20
Is this parallelism level correct?
The machine's processor is a dual core.
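For comparison, a minimal sketch of setting the same property programmatically (the value is illustrative; the Spark tuning guide suggests roughly 2-3 tasks per CPU core as a starting point):

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("NetworkWordCount")
  .set("spark.default.parallelism", "4")   // about 2 tasks per core on a dual-core machine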
Thanks
Hi Guys,
Could anyone explain this behavior to me? After 2 min of tests:
computer1- worker
computer10 - worker
computer8 - driver(master)
computer1
18:24:31 up 73 days, 7:14, 1 user, load average: 3.93, 2.45, 1.14
total used free shared buffers
cached
, Eduardo Costa Alfaia
e.costaalf...@unibs.it wrote:
Hi all,
I have put this line in my spark-env.sh:
-Dspark.default.parallelism=20
Is this parallelism level correct?
The machine's processor is a dual core.
Thanks
problem you are facing?
TD
On Fri, Apr 4, 2014 at 8:03 AM, Eduardo Costa Alfaia
e.costaalf...@unibs.it wrote:
Hi guys,
I would like to know if this piece of code is right to use with a window.
JavaPairDStream<String, Integer> wordCounts = words.map
Hi Guys
I would like to print the content of lines in:
JavaDStream<String> lines = ssc.socketTextStream(args[1],
    Integer.parseInt(args[2]));
JavaDStream<String> words = lines.flatMap(new
    FlatMapFunction<String, String>() {
      @Override
      public Iterable<String> call(String x) {
Thank you very much Sourav
BR
On 3/26/14, 17:29, Sourav Chandra wrote:
def print() {
  def foreachFunc = (rdd: RDD[T], time: Time) => {
    val total = rdd.collect().toList
    println("---------------------------------------------")
    println("Time: " + time)
    println
Hi Guys,
I think I already asked this question, but I don't remember whether anyone
answered me. I would like to change, in the print() function, the number of
words and their frequencies that are sent to the driver's
screen. The default value is 10.
Could anyone help me with this?
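One way to do this without editing print() itself is to replace the print() call with foreachRDD and take whatever count you want; a minimal sketch, assuming wordCounts is the DStream being printed:

val n = 30
wordCounts.foreachRDD { (rdd, time) =>
  println("Time: " + time)
  rdd.take(n).foreach(println)   // print the first n elements of each batch on the driver
}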
Best
Hi Guys,
Could anyone help me understand this piece of log (in red)? Why did this
happen?
Thanks
14/03/10 16:55:20 INFO SparkContext: Starting job: first at
NetworkWordCount.scala:87
14/03/10 16:55:20 INFO JobScheduler: Finished job streaming job
1394466892000 ms.0 from job set of time
Yes TD,
I can use tcpdump to see whether the data are being accepted by the receiver,
or whether they are at least arriving in the IP packets.
Thanks
On 3/8/14, 4:19, Tathagata Das wrote:
I am not sure how to debug this without any more information about the
source. Can you monitor on the receiver side