You can also try https://github.com/zendesk/maxwell
Tamas
On 3 January 2017 at 12:25, Amrit Jangid wrote:
> You can try out *debezium*: https://github.com/debezium. It reads data
> from binlogs, adds structure, and streams it into Kafka.
>
> Now Kafka can be your new
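For context, wiring Debezium's MySQL connector into Kafka Connect is done with a registration payload roughly like the following (hostnames, credentials, and names below are illustrative placeholders, not from this thread):

```json
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql.example.com",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "changeme",
    "database.server.id": "184054",
    "database.server.name": "dbserver1",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.inventory"
  }
}
```

The connector then publishes one Kafka topic per table, keyed by primary key.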
Hello,
We've been using Spark on Mesos in fine-grained mode in
production for a while.
Since fine-grained mode is deprecated as of Spark 2.0, we'd like to shift to
dynamic allocation.
When I tried to set up dynamic allocation I ran into the following
problem:
So I set spark.shuffle.service.enabled =
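For reference, dynamic allocation needs roughly the following in spark-defaults.conf (values are illustrative; on Mesos the external shuffle service must also be running on every node, e.g. started via Spark's sbin/start-mesos-shuffle-service.sh):

```
spark.shuffle.service.enabled        true
spark.dynamicAllocation.enabled      true
spark.dynamicAllocation.minExecutors 1
spark.dynamicAllocation.maxExecutors 20
```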
Hello,
What Spark version do you use? I have the same issue with Spark 1.6.1 and
there is a ticket somewhere.
cheers,
Tamas Szuromi
Data Analyst
*Skype: *tromika
*E-mail: *tamas.szur...@odigeo.com <n...@odigeo.com>
ODIGEO Hungary Kft.
1066 Budapest
Weiner
>>
>> Can you try changing the value of paymentdata to the
>> format paymentdata='2015-01-01 23:59:59' with to_date(paymentdate) and see
>> if it helps.
>>
>>
>> On Thursday, March 24, 2016, Tamas Szuromi
>> <tamas.szur...@odigeo.com.invalid> wrote:
>
Hi Mich,
Take a look
https://spark.apache.org/docs/1.6.1/api/java/org/apache/spark/sql/functions.html#unix_timestamp(org.apache.spark.sql.Column,%20java.lang.String)
cheers,
Tamas
On 24 March 2016 at 14:29, Mich Talebzadeh
wrote:
>
> Hi,
>
> I am trying to convert
Hey,
We had the same issue with Spark 1.5.x; it disappeared after we upgraded to 1.6.
Tamas
On Saturday, 5 March 2016, SLiZn Liu wrote:
> Hi Spark Mailing List,
>
> I’m processing terabytes of text files with Spark on Mesos; the job runs
> fine until we decided to switch to
Hi,
Have a look at http://spark.apache.org/docs/latest/configuration.html to see
what ports need to be exposed. With Mesos we had a lot of problems with
container networking, but yes, --net=host is a shortcut.
Tamas
On 4 March 2016 at 22:37, yanlin wang wrote:
> We would like
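As a sketch, the relevant settings are the ones that pin otherwise-random ports to fixed values, e.g. in spark-defaults.conf (the port numbers below are arbitrary examples; pick ports reachable in your container network):

```
spark.driver.port       7001
spark.blockManager.port 7005
spark.port.maxRetries   16
```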
Hello Petr,
We're running Spark 1.5.2 and 1.6.0 on Mesos 0.25.0 without any problem. We
upgraded from 0.21.0 originally.
cheers,
Tamas
On 12 February 2016 at 09:31, Petr Novak wrote:
> Hi all,
> based on the documentation:
>
> "Spark 1.6.0 is designed for use with Mesos
Hello Tamas,
>
> 2015-11-20 17:23 GMT+01:00 Tamas Szuromi <tamas.szur...@odigeo.com>:
> >
> > Hello,
> >
> > I just tried to use sc._jsc.hadoopConfiguration().set('key','value')
> > in pyspark 1.5.2, but I got a "set method does not exist" error.
>
>
> For m
Hello,
I just tried to use sc._jsc.hadoopConfiguration().set('key','value') in
pyspark 1.5.2, but I got a "set method does not exist" error.
Does anyone know a workaround for setting HDFS-related properties
like dfs.blocksize?
Thanks in advance!
Tamas
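One workaround that may apply here: Spark copies any configuration key prefixed with spark.hadoop. into the underlying Hadoop Configuration, so an HDFS property can be set without touching the JavaSparkContext at all, e.g. in spark-defaults.conf or via --conf (the value below is an illustrative 128 MB block size):

```
spark.hadoop.dfs.blocksize 134217728
```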
Hi Zsolt,
How do you load the jar, and how do you prepend it to the classpath?
Tamas
On 19 November 2015 at 11:02, Zsolt Tóth wrote:
> Hi,
>
> I'm trying to throw an exception of my own exception class (MyException extends
> SparkException) on one of the executors. This works
Hello Sebastian,
Did you set the MESOS_NATIVE_JAVA_LIBRARY variable before you started
pyspark?
cheers,
Tamas
On 2 November 2015 at 15:24, Sebastian Kuepers <
sebastian.kuep...@publicispixelpark.de> wrote:
> Hey,
>
>
> I have a Mesos cluster with a single Master. If I run the following
>
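For pyspark specifically, the native library can also be pointed at from Python before the SparkContext (and its JVM) is created; the .so path below is an assumption, so adjust it to wherever libmesos is installed:

```python
import os

# Assumed install location of the Mesos native library; adjust as needed.
# This must be set before the JVM is launched, i.e. before SparkContext exists.
os.environ["MESOS_NATIVE_JAVA_LIBRARY"] = "/usr/local/lib/libmesos.so"

print(os.environ["MESOS_NATIVE_JAVA_LIBRARY"])  # → /usr/local/lib/libmesos.so
```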
Hello,
I'm looking for someone who is using Hortonworks Data Platform, especially
2.3, together with Spark 1.5.x.
I have the following issue with HDP, and I wanted to know whether it is a
general bug with HDP or just a local issue.
https://issues.apache.org/jira/browse/SPARK-10896
Thanks in advance!
*Tamas*