ted correctly: if you're joining, then overwrite; otherwise only
> append, as it removes dups.
>
> I think, in this scenario, just change it to write.mode('overwrite') because
> you're already reading the old data and your job would be done.
>
>
> On Sat 2 Jun, 2018, 10:27 PM Be:
> Benjamin,
>
> The append will add the "new" data to the existing data without removing
> the duplicates. You would need to overwrite the file every time if you need
> unique values.
>
> Thanks,
> Jayadeep
>
> On Fri, Jun 1, 2018 at 9:31 PM Benjamin Kim wrote:
I have a situation where I am trying to add only new rows to an existing data
set that lives in S3 as gzipped parquet files, looping and appending for each
hour of the day. First, I create a DF from the existing data, then I use a
query to create another DF with the data that is new. Here is the
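The code snippet above is cut off. A minimal sketch of the pattern being
described, assuming gzipped Parquet in S3 and a stand-in key column "id"
(paths and names are hypothetical):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("hourly-dedup").getOrCreate()

// Current data and the new hour's data; paths are made up.
val existing = spark.read.parquet("s3a://bucket/table/")
val incoming = spark.read.parquet("s3a://bucket/incoming/hour=01/")

// Keep one copy of each key, then rewrite the whole set. Writing to a
// temporary path and swapping avoids overwriting a path that is still
// being read.
val merged = existing.union(incoming).dropDuplicates("id")
merged.write.mode("overwrite").parquet("s3a://bucket/table_tmp/")

This matches the advice above: once the old and new data are combined and
deduplicated in one DataFrame, overwrite is the mode that keeps the result
unique; append would reintroduce the duplicates on the next pass.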
To add, we have a CDH 5.12 cluster with Spark 2.2 in our data center.
On Mon, Nov 13, 2017 at 3:15 PM Benjamin Kim <bbuil...@gmail.com> wrote:
> Does anyone know if there is a connector for AWS Kinesis that can be used
> as a source for Structured Streaming?
>
> Thanks.
>
>
I have a question about this. The documentation compares the concept to
BigQuery. Does this mean that we will no longer need to deal with instances
and just pay for execution duration and the amount of data processed? I’m just
curious about how this will be priced.
Also, when will it be ready
Does anyone know if there is a connector for AWS Kinesis that can be used
as a source for Structured Streaming?
Thanks.
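For the classic DStream API there is an official connector in the
spark-streaming-kinesis-asl artifact; a Kinesis source for Structured
Streaming was not part of open-source Spark at the time. A rough DStream
sketch, with the app, stream, and endpoint names made up:

import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.kinesis.KinesisUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

// sc is an existing SparkContext (e.g. from spark-shell).
val ssc = new StreamingContext(sc, Seconds(10))

val stream = KinesisUtils.createStream(
  ssc, "myKinesisApp", "myStream",
  "https://kinesis.us-east-1.amazonaws.com", "us-east-1",
  InitialPositionInStream.LATEST, Seconds(10),
  StorageLevel.MEMORY_AND_DISK_2)

stream.map(bytes => new String(bytes, "UTF-8")).print()
ssc.start()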
With AWS having Glue and GCE having Dataprep, is Databricks coming out with
an equivalent or better? I know that Serverless is a new offering, but will
it go farther with automatic data schema discovery, profiling, metadata
storage, change triggering, joining, transform suggestions, etc.?
Just
Has anyone seen AWS Glue? I was wondering if there is something similar going
to be built into Spark Structured Streaming? I like the Data Catalog idea to
store and track any data source/destination. It profiles the data to derive the
schema and data types. Also, it does some sort of automated
Hi Bo,
+1 for your project. I come from the world of data warehouses, ETL, and
reporting analytics. There are many individuals who do not know or want to do
any coding. They are content with ANSI SQL and stick to it. ETL workflows are
also done without any coding using a drag-and-drop user
I’m curious about if and when Spark SQL will ever remove its dependency on the
Hive Metastore. Now that Spark 2.1’s SparkSession has superseded the need for
HiveContext, are there plans for Spark to replace the Hive Metastore service
with a “SparkSchema” service backed by PostgreSQL, MySQL, etc.?
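Nothing like that has been announced as far as I know, but the catalog is
already selectable when the session is built. A small sketch of running
without Hive at all, which is the default when Hive support is not enabled:

import org.apache.spark.sql.SparkSession

// Without .enableHiveSupport(), Spark 2.x uses its own in-memory catalog
// instead of a Hive Metastore.
val spark = SparkSession.builder()
  .appName("no-hive-metastore")
  .config("spark.sql.catalogImplementation", "in-memory")
  .getOrCreate()

The trade-off is that the in-memory catalog is per-application; persistent
table definitions shared across sessions still require the Hive Metastore.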
e you do not want to be writing code which needs you to update it once
> again in 6 months because newer versions of Spark have deprecated it.
>
>
> Regards,
> Gourav Sengupta
>
>
>
> On Fri, Feb 24, 2017 at 7:18 AM, Benjamin Kim <bbuil...@gmail.com
>
.0. We are
> waiting for the move to Spark 2.0/2.1.
>
> And besides that, would you not want to work on a platform which is at least
> 10 times faster? What would that be?
>
> Regards,
> Gourav Sengupta
>
> On Thu, Feb 23, 2017 at 6:23 PM, Benjamin Kim <bbuil...@gmail.com
code and see if the issue is resolved; then it can be
> hidden and read from Input Params.
>
> Thanks,
> Aakash.
>
>
> On 23-Feb-2017 11:54 PM, "Benjamin Kim" <bbuil...@gmail.com
> <mailto:bbuil...@gmail.com>> wrote:
> We are trying to use Spark 1.
We are trying to use Spark 1.6 within CDH 5.7.1 to retrieve a 1.3GB Parquet
file from AWS S3. We can read the schema and show some data when the file is
loaded into a DataFrame, but when we try to do some operations, such as count,
we get this error below.
> wrote:
>
> Your vendor should use the parquet internal compression and not take a
> parquet file and gzip it.
>
>> On 13 Feb 2017, at 18:48, Benjamin Kim <bbuil...@gmail.com> wrote:
>>
>> We are receiving files from an outside vendor who creates a Parqu
We are receiving files from an outside vendor who creates a Parquet data file
and gzips it before delivery. Does anyone know how to gunzip the file in Spark
and load the Parquet data into a DataFrame? I thought using sc.textFile or
sc.wholeTextFiles would automatically gunzip the file, but
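Following that advice, the fix belongs on the vendor's side: let Parquet
apply its internal, per-column compression instead of gzipping the finished
file, which leaves it unreadable as Parquet. A sketch of what the vendor's
write could look like, assuming df is their DataFrame:

// Spark 2.x writer option; on 1.6, set spark.sql.parquet.compression.codec
// to "gzip" instead.
df.write
  .option("compression", "gzip")
  .parquet("s3a://bucket/delivery/")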
Has anyone got some advice on how to remove the reliance on HDFS for storing
persistent data? We have an on-premise Spark cluster. It seems like a waste of
resources to keep adding nodes because of a lack of storage space only. I would
rather add more powerful nodes due to the lack of
>
>
> On Thu, Feb 2, 2017 at 5:43 PM, Benjamin Kim <bbuil...@gmail.com
> <mailto:bbuil...@gmail.com>> wrote:
> Hi Asher,
>
> I modified the pom to be the same Spark (1.6.0), HBase (1.2.0), and Java
> (1.8) version as our installation. The Scala (2.10.5) vers
-Dverbose=true"? And did you see only Scala 2.10.5 being pulled in?
>
> On Fri, Feb 3, 2017 at 12:33 PM, Benjamin Kim <bbuil...@gmail.com
> <mailto:bbuil...@gmail.com>> wrote:
> Asher,
>
> It’s still the same. Do you have any other ideas?
>
> Cheers,
> Ben
o, if you're seeing this locally, you might want to
> check which version of the scala sdk your IDE is using
>
> Asher Krim
> Senior Software Engineer
>
> On Thu, Feb 2, 2017 at 5:43 PM, Benjamin Kim <bbuil...@gmail.com> wrote:
>
> Hi Asher,
>
> I modified th
Asher Krim <ak...@hubspot.com> wrote:
>
> Ben,
>
> That looks like a scala version mismatch. Have you checked your dep tree?
>
> Asher Krim
> Senior Software Engineer
>
>
> On Thu, Feb 2, 2017 at 1:28 PM, Benjamin Kim <bbuil...@gmail.com
> <mailto:
ltSource.createRelation(HBaseRelation.scala:51)
at
org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
If you can please help, I would be grateful.
Cheers,
Ben
>
Elek,
If I cannot use the HBase Spark module, then I’ll give it a try.
Thanks,
Ben
> On Jan 31, 2017, at 1:02 PM, Marton, Elek <h...@anzix.net> wrote:
>
>
> I tested this one with hbase 1.2.4:
>
> https://github.com/hortonworks-spark/shc
>
> Marton
>
>
Does anyone know how to backport the HBase Spark module to HBase 1.2.0? I tried
to build it from source, but I cannot get it to work.
Thanks,
Ben
-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org
https://github.com/kite-sdk/kite
This might be useful.
Thanks!
2016-12-23 7:01 GMT+09:00 Benjamin Kim <bbuil...@gmail.com>:
Has anyone tried to merge *.gz.parquet files before? I'm trying to merge
them into 1 file after they are output from Spark. Doing a coalesce(1) on
the Spark cluster will not work. It just d
https://issues.apache.org/jira/browse/PARQUET-460
>
> It seems parquet-tools allows merge small Parquet files into one.
>
>
> Also, I believe there are command-line tools in Kite -
> https://github.com/kite-sdk/kite <https://github.com/kite-sdk/kite>
>
> This might
Has anyone tried to merge *.gz.parquet files before? I'm trying to merge them
into 1 file after they are output from Spark. Doing a coalesce(1) on the Spark
cluster will not work. It just does not have the resources to do it. I'm trying
to do it using the commandline and not use Spark. I will
.@gmail.com> wrote:
>
> Agreed. But as it states, deeper integration with Scala is yet to be
> developed.
> Any thoughts on how to use TensorFlow with Scala? Need to write wrappers, I
> think.
>
>
> On Oct 19, 2016 7:56 AM, "Benjamin Kim" <bbuil...@gmail.com
Has anyone worked with AWS Kinesis and retrieved data from it using Spark
Streaming? I am having issues where it’s returning no data. I can connect to
the Kinesis stream and describe using Spark. Is there something I’m missing?
Are there specific IAM security settings needed? I just simply
On that note, here is an article that Databricks made regarding using
Tensorflow in conjunction with Spark.
https://databricks.com/blog/2016/01/25/deep-learning-with-apache-spark-and-tensorflow.html
Cheers,
Ben
> On Oct 19, 2016, at 3:09 AM, Gourav Sengupta
>
delta load data in Spark
> table cache and expose it through the thriftserver. But you have to implement
> the loading logic, it can be very simple to very complex depending on your
> needs.
>
>
> 2016-10-17 19:48 GMT+02:00 Benjamin Kim <bbuil...@gmail.com
> <mailto:bb
y.
>
> With respect to Tableau… their entire interface in to the big data world
> revolves around the JDBC/ODBC interface. So if you don’t have that piece as
> part of your solution, you’re DOA with respect to Tableau.
>
> Have you considered Drill as your JDBC connecti
> wrote:
>
> Is there only one process adding rows? because this seems a little risky if
> you have multiple threads doing that…
>
>> On Oct 8, 2016, at 1:43 PM, Benjamin Kim <bbuil...@gmail.com
>> <mailto:bbuil...@gmail.com>> wrote:
>>
>> Mich,
I have a table with data already in it that has primary keys generated by the
function monotonicallyIncreasingId. Now, I want to insert more data into it
with primary keys that will auto-increment from where the existing data left
off. How would I do this? There is no argument I can pass into
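There is no built-in auto-increment, so one common workaround is to offset
the new keys by the current maximum. A hedged sketch, where existing and
newData are stand-in DataFrames and "id" is the key column; note that
monotonicallyIncreasingId guarantees unique, increasing values, not
consecutive ones, so gaps will appear:

import org.apache.spark.sql.functions.{lit, max, monotonicallyIncreasingId}

val maxId = existing.agg(max("id")).first().getLong(0)

// New keys start above the old maximum but are not consecutive.
val keyed = newData.withColumn("id", monotonicallyIncreasingId() + lit(maxId + 1))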
Mich,
I know up until CDH 5.4 we had to add the HTrace jar to the classpath to make
it work using the command below. But after upgrading to CDH 5.7, it became
unnecessary.
echo "/opt/cloudera/parcels/CDH/jars/htrace-core-3.2.0-incubating.jar" >>
/etc/spark/conf/classpath.txt
Hope this helps.
.
Thanks,
Ben
> On Sep 16, 2016, at 3:29 PM, Nikolay Zhebet <phpap...@gmail.com> wrote:
>
> Hi! Can you split the init code from the current command? I think that is
> the main problem in your code.
>
> On Sep 16, 2016, 8:26 PM, "Benjamin Kim" <bbuil...@gm
Has anyone using Spark 1.6.2 encountered very slow responses from pulling data
from PostgreSQL using JDBC? I can get to the table and see the schema, but when
I do a show, it takes very long or keeps timing out.
The code is simple.
val jdbcDF = sqlContext.read.format("jdbc").options(
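The snippet above is cut off. One frequent cause of very slow JDBC reads is
that, without partitioning options, Spark pulls the whole table through a
single connection. A hedged sketch of a partitioned read, assuming a numeric
id column with known bounds (connection details are made up):

val jdbcDF = sqlContext.read.format("jdbc").options(Map(
  "url"             -> "jdbc:postgresql://dbhost:5432/mydb",
  "dbtable"         -> "public.mytable",
  "user"            -> "user",
  "password"        -> "pass",
  "partitionColumn" -> "id",      // numeric column to split on
  "lowerBound"      -> "1",
  "upperBound"      -> "10000000",
  "numPartitions"   -> "8"        // 8 concurrent range scans
)).load()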
exposing data, i.e. create Hive
> tables which "point to" any other DB. I know Oracle provides their own SerDe
> for Hive. Not sure about PG though.
>
> Once tables are created in hive, STS will automatically see it.
>
> On Wed, Sep 14, 2016 at 11:08 AM, Benjam
Has anyone created tables using Spark SQL that directly connect to a JDBC data
source such as PostgreSQL? I would like to use Spark SQL Thriftserver to access
and query remote PostgreSQL tables. In this way, we can centralize data access
to Spark SQL tables along with PostgreSQL making it very
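This works by registering a JDBC-backed table in the session the
Thriftserver serves. A sketch of the Spark 1.6-era DDL, with connection
details made up; because the table is temporary, the statement has to run in
the Thriftserver's own session (e.g. through beeline) to be visible to its
clients:

sqlContext.sql("""
  CREATE TEMPORARY TABLE pg_accounts
  USING org.apache.spark.sql.jdbc
  OPTIONS (
    url 'jdbc:postgresql://dbhost:5432/mydb',
    dbtable 'public.accounts',
    user 'user',
    password 'pass'
  )
""")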
Does anyone have any thoughts about using Spark SQL Thriftserver in Spark 1.6.2
instead of HiveServer2? We are considering abandoning HiveServer2 for it. Some
advice and gotchas would be nice to know.
Thanks,
Ben
We use Graphite/Grafana for custom metrics. We found Spark’s metrics not to be
customizable. So, we write directly using Graphite’s API, which was very easy
to do using Java’s socket library in Scala. It works great for us, and we are
going one step further using Sensu to alert us if there is
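For anyone curious, Graphite's plaintext protocol is just
"path value epoch-seconds\n" written to the Carbon listener (port 2003 by
default). A minimal sketch with a made-up host and metric name:

import java.io.PrintWriter
import java.net.Socket

def sendMetric(path: String, value: Double): Unit = {
  val socket = new Socket("graphite.internal", 2003)  // Carbon plaintext port
  try {
    val out = new PrintWriter(socket.getOutputStream, true)
    out.println(s"$path $value ${System.currentTimeMillis / 1000}")
  } finally socket.close()
}

sendMetric("spark.myjob.records_processed", 12345)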
>
>
> On 3 September 2016 at 20:31, Benjamin
On 2 September 2016 at 23:08, KhajaAsmath Mohammed <mdkhajaasm...@gmail.com
> <mailto:mdkhajaasm...@gmail.com>> wrote:
> Hi Kim,
>
> I am also looking for same information. Just got the same requirement today.
>
> Thanks,
> Asmath
>
> On Fri, Sep 2, 2016
I was wondering if anyone has tried to create Spark SQL tables on top of HBase
tables so that data in HBase can be accessed using Spark Thriftserver with SQL
statements? This is similar to what can be done using Hive.
Thanks,
Ben
I am trying to implement checkpointing in my streaming application but I am
getting a not serializable error. Has anyone encountered this? I am deploying
this job in YARN clustered mode.
Here is a snippet of the main parts of the code.
object S3EventIngestion {
//create and setup streaming
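The code above is cut off. A NotSerializableException with checkpointing
usually means the streaming graph captured something non-serializable from
the enclosing scope; the standard pattern is to build everything inside a
factory function passed to getOrCreate. A sketch with made-up paths:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val checkpointDir = "hdfs:///checkpoints/s3events"

def createContext(): StreamingContext = {
  val conf = new SparkConf().setAppName("S3EventIngestion")
  val ssc = new StreamingContext(conf, Seconds(60))
  // Define all DStreams and output operations here, not outside the factory.
  ssc.checkpoint(checkpointDir)
  ssc
}

// Recovers from the checkpoint if present, otherwise builds a fresh context.
val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
ssc.start()
ssc.awaitTermination()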
I would like to know if anyone has tried using the hbase-spark module? I tried
to follow the examples in conjunction with CDH 5.8.0. I cannot find the
HBaseTableCatalog class in the module or in any of the Spark jars. Can someone
help?
Thanks,
Ben
It is included in Cloudera’s CDH 5.8.
> On Jul 22, 2016, at 6:13 PM, Mail.com wrote:
>
> Hbase Spark module will be available with Hbase 2.0. Is that out yet?
>
>> On Jul 22, 2016, at 8:50 PM, Def_Os wrote:
>>
>> So it appears it should be possible
From what I read, there are no more Contexts.
"SparkContext, SQLContext, HiveContext merged into SparkSession"
I have not tested it, so I don’t know if it’s true.
Cheers,
Ben
> On Jul 18, 2016, at 8:37 AM, Koert Kuipers wrote:
>
> in my codebase i would like to
It takes me to the directories instead of the webpage.
> On Jul 13, 2016, at 11:45 AM, manish ranjan <cse1.man...@gmail.com> wrote:
>
> working for me. What do you mean 'as supposed to'?
>
> ~Manish
>
>
>
> On Wed, Jul 13, 2016 at 11:45 AM, Benjamin Kim <
Has anyone noticed that spark.apache.org is not working as it’s supposed to?
"options(key 'hashtag', frequencyCol 'retweets', timeSeriesColumn
> 'tweetTime' )"
> where 'tweetStreamTable' is created using the 'create stream table ...' SQL
> syntax.
>
>
> -
> Jags
> SnappyData blog <http://www.snappydata.io/blog>
> Download binary, s
re).
>
>
> -
> Jags
> SnappyData blog <http://www.snappydata.io/blog>
> Download binary, source <https://github.com/SnappyDataInc/snappydata>
>
>
> On Wed, Jul 6, 2016 at 12:49 AM, Benjamin Kim <bbuil...@gmail.com
> <mailto:bbuil...@gmail.com>
I recently got a sales email from SnappyData, and after reading the
documentation about what they offer, it sounds very similar to what Structured
Streaming will offer w/o the underlying in-memory, spill-to-disk, CRUD
compliant data storage in SnappyData. I was wondering if Structured Streaming
I was wondering if anyone, who is a Spark Scala developer, would be willing to
continue the work done for the Kudu connector?
https://github.com/apache/incubator-kudu/tree/master/java/kudu-spark/src/main/scala/org/kududb/spark/kudu
I have been testing and using Kudu for the past month and
Has anyone implemented a way to track the performance of a data model? We
currently have an algorithm to do record linkage and spit out statistics of
matches, non-matches, and/or partial matches with reason codes of why we didn’t
match accurately. In this way, we will know if something goes
Has anyone run into this requirement?
We have a need to track data integrity and model quality metrics of outcomes so
that we can both gauge if the data is healthy coming in and the models run
against them are still performing and not giving faulty results. A nice to have
would be to graph
org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe
> InputFormat:
> org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat
> OutputFormat:
> org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat
> Compressed: No
> Num Buckets:
>
> http://talebzadehmich.wordpress.com
>
>
> On 3 June 2016 at 17:04, Benjamin Kim <bbuil...@gmail.com
> <mailto:bbuil...@gmail.com>> wrote:
> The table already exists.
>
> CREATE EXTERNAL TABLE `amo
>
> http://talebzadehmich.wordpress.com
>
>
> On 3 June 2016 at 14:13, Benjamin Kim <bbuil...@gmail.com
> <mailto:bbuil...@gmail.com>> wrote:
> Does anyone know how to save data in a DataFrame to
Does anyone know how to save data in a DataFrame to a table partitioned using
an existing column reformatted into a derived column?
val partitionedDf = df.withColumn("dt",
  concat(substring($"timestamp", 1, 10), lit(" "),
         substring($"timestamp", 12, 2), lit(":00")))
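With the derived column in place, the write side might look like the sketch
below (path made up); partitionBy creates dt=... subdirectories keyed on the
new column. For a table that already exists in the metastore, insertInto
against a matching partition layout is the usual alternative.

partitionedDf.write
  .mode("append")
  .partitionBy("dt")
  .parquet("s3a://bucket/events/")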
Ben
could be wrong.
Thanks,
Ben
> On May 21, 2016, at 4:18 AM, Ted Yu <yuzhih...@gmail.com> wrote:
>
> Maybe more than one version of jets3t-xx.jar was on the classpath.
>
> FYI
>
> On Fri, May 20, 2016 at 8:31 PM, Benjamin Kim <bbuil...@gmail.com
> <mailto:bbuil
I am trying to stream files from an S3 bucket using CDH 5.7.0’s version of
Spark 1.6.0. It seems not to work. I keep getting this error.
Exception in thread "JobGenerator" java.lang.VerifyError: Bad type on operand
stack
Exception Details:
Location:
I have a curiosity question. These forever/unlimited DataFrames/DataSets will
persist and be queryable. I am still foggy about how this data will be stored.
As far as I know, memory is finite. Will the data be spilled to disk and be
retrievable if the query spans data not in memory? Is
>
>
> Mobile: +972-54-7801286 | Email:
> ofir.ma...@equalum.io <mailto:ofir.ma...@equalum.io>
> On Sun, May 15, 2016 at 11:58 PM, Benjamin Kim <bbuil...@gmail.com
> <mailto:bbuil...@gmail.com>> wrote:
> Hi Ofir,
>
Hi Ofir,
I just recently saw the webinar with Reynold Xin. He mentioned the Spark
Session unification efforts, but I don’t remember the DataSet for Structured
Streaming aka Continuous Applications as he put it. He did mention streaming or
unlimited DataFrames for Structured Streaming so one
in hbase-spark module.
>
> Cheers
>
> On Apr 27, 2016, at 10:31 PM, Benjamin Kim <bbuil...@gmail.com
> <mailto:bbuil...@gmail.com>> wrote:
>
>> Hi Ted,
>>
>> Do you know when the release will be? I also see some documentation for
>> usage of the hb
Next Thursday is Databricks' webinar on Spark 2.0. If you are attending, I bet
many are going to ask when the release will be. Last time they did this, Spark
1.6 came out not too long afterward.
> On Apr 28, 2016, at 5:21 AM, Sean Owen wrote:
>
> I don't know if anyone has
Can someone explain to me how the new Structured Streaming works in the
upcoming Spark 2.0+? I’m a little hazy about how data will be stored and
referenced if it can be queried and/or batch processed directly from streams,
and whether the data will be append-only or there will be some sort of upsert
?
Thanks,
Ben
> On Apr 21, 2016, at 6:56 AM, Ted Yu <yuzhih...@gmail.com> wrote:
>
> The hbase-spark module in Apache HBase (coming with hbase 2.0 release) can do
> this.
>
> On Thu, Apr 21, 2016 at 6:52 AM, Benjamin Kim <bbuil...@gmail.com
> <mailto:bbuil...@gmai
ty-group.com <mailto:daniel.ha...@veracity-group.com>>
> wrote:
> Hi Benjamin,
> Yes it should work.
>
> Let me know if you need further assistance I might be able to get the code
> I've used for that project.
>
> Thank you.
> Daniel
>
> On 24 Apr 2016, at 17:
I have data in a DataFrame loaded from a CSV file. I need to load this data
into HBase using an RDD formatted in a certain way.
val rdd = sc.parallelize(
Array(key1,
(ColumnFamily, ColumnName1, Value1),
(ColumnFamily, ColumnName2, Value2),
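The snippet above is cut off. A hedged sketch of one way to land such
(rowKey, family, qualifier, value) tuples with the plain HBase client API,
assuming the nesting has been flattened first; the table and column names
are stand-ins:

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.mapreduce.Job

val conf = HBaseConfiguration.create()
conf.set(TableOutputFormat.OUTPUT_TABLE, "my_table")
val job = Job.getInstance(conf)
job.setOutputFormatClass(classOf[TableOutputFormat[ImmutableBytesWritable]])

// flatRdd: RDD[(String, String, String, String)] = (rowKey, family, qualifier, value)
val puts = flatRdd.map { case (key, family, qualifier, value) =>
  val put = new Put(Bytes.toBytes(key))
  put.addColumn(Bytes.toBytes(family), Bytes.toBytes(qualifier), Bytes.toBytes(value))
  (new ImmutableBytesWritable, put)
}
puts.saveAsNewAPIHadoopDataset(job.getConfiguration)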
> I tried saving the DF to HBase using a Hive table with the HBase storage
> handler and hiveContext, but it failed due to a bug.
>
> I was able to persist the DF to HBase using Apache Phoenix, which was pretty
> simple.
>
> Thank you.
> Daniel
>
> On 21 Apr 2016, at 16:52, B
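A hedged sketch of the Phoenix route Daniel describes, via the phoenix-spark
DataSource; the table name and ZooKeeper quorum are stand-ins, and the
Phoenix table must already exist:

import org.apache.spark.sql.SaveMode

df.write
  .format("org.apache.phoenix.spark")
  .mode(SaveMode.Overwrite)   // phoenix-spark expects Overwrite; rows are upserted
  .option("table", "OUTPUT_TABLE")
  .option("zkUrl", "zkhost:2181")
  .save()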
release) can do
> this.
>
> On Thu, Apr 21, 2016 at 6:52 AM, Benjamin Kim <bbuil...@gmail.com
> <mailto:bbuil...@gmail.com>> wrote:
> Has anyone found an easy way to save a DataFrame into HBase?
>
> Thanks,
> Ben
>
>
>
Has anyone found an easy way to save a DataFrame into HBase?
Thanks,
Ben
I see that the new CDH 5.7 has been released with the HBase Spark module
built-in. I was wondering if I could just download it and use the hbase-spark
jar file for CDH 5.5. Has anyone tried this yet?
Thanks,
Ben
>> You could certainly use RDDs for that, you might also find using Dataset
>> selecting the fields you need to construct the URL to fetch and then using
>> the map function to be easier.
>>
>> On Thu, Apr 14, 2016 at 12:01 PM, Benjamin Kim <bbuil...@gmail
> Hi,
>
> Would you try the code below?
>
> val csvRDD = ...your processing for the csv rdd..
> val df = new CsvParser().csvRdd(sqlContext, csvRDD, useHeader = true)
>
> Thanks!
>
> On 16 Apr 2016 1:35 a.m., "Benjamin Kim" <bbuil...@gmail.com
> <ma
Holden Karau <hol...@pigscanfly.ca> wrote:
>
> You could certainly use RDDs for that, you might also find using Dataset
> selecting the fields you need to construct the URL to fetch and then using
> the map function to be easier.
>
> On Thu, Apr 14, 2016 at 12:01 PM, Be
Please check the csvRdd API here:
> https://github.com/databricks/spark-csv/blob/master/src/main/scala/com/databricks/spark/csv/CsvParser.scala#L150
>
>
> Thanks!
>
I was wondering what would be the best way to use JSON in Spark/Scala. I need
to look up values of fields in a collection of records to form a URL and
download the file at that location. I was thinking an RDD would be perfect for
this. I just want to hear from others who might have more experience
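A hedged sketch of the RDD approach, with field names and the URL scheme
invented for illustration; fine for small files, and with no error handling:

import scala.io.Source

val records = sqlContext.read.json("s3a://bucket/records/")

// Build a URL from each record's fields, then fetch the contents in parallel.
val fetched = records.select("host", "path").rdd.map { row =>
  val url = s"http://${row.getString(0)}/${row.getString(1)}"
  (url, Source.fromURL(url).mkString)
}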
.load("s3://" + bucket + "/" + key)
//save to hbase
})
ssc.checkpoint(checkpointDirectory) // set checkpoint directory
ssc
}
Thanks,
Ben
> On Apr 9, 2016, at 6:12 PM, Benjamin Kim <bbuil...@gmail.com> wrote:
>
> Ah, I spoke too soon.
>
>
, please let me know.
Thanks,
Ben
> On Apr 9, 2016, at 2:49 PM, Benjamin Kim <bbuil...@gmail.com> wrote:
>
> This was easy!
>
> I just created a notification on a source S3 bucket to kick off a Lambda
> function that would decompress the dropped file and save it to another
to be the endpoint of this
notification. This would then convey to a listening Spark Streaming job the
file information to download. I like this!
Cheers,
Ben
> On Apr 9, 2016, at 9:54 AM, Benjamin Kim <bbuil...@gmail.com> wrote:
>
> This is awesome! I have someplace to start from.
>
> Sent from my iPhone
>
> On Apr 9, 2016, at 9:55 AM, Benjamin Kim <bbuil...@gmail.com
> <mailto:bbuil...@gmail.com>> wrote:
>
>> Nezih,
>>
>> This looks like a good alternative to having the Spark Streaming job check
>> for new files
ext.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey",
> AWSSecretAccessKey)
>
> val inputS3Stream = ssc.textFileStream("s3://example_bucket/folder")
>
> This code will probe for new S3 files created in your every batch interval.
>
> Thanks,
>
w S3 files created in your every batch interval.
>
> Thanks,
> Natu
>
> On Fri, Apr 8, 2016 at 9:14 PM, Benjamin Kim <bbuil...@gmail.com
> <mailto:bbuil...@gmail.com>> wrote:
> Has anyone monitored an S3 bucket or directory us
Has anyone monitored an S3 bucket or directory using Spark Streaming and pulled
any new files to process? If so, can you provide basic Scala coding help on
this?
Thanks,
Ben
Does anyone know if this is possible? I have an RDD loaded with rows of CSV
data strings, each string representing the header row and multiple rows of
data along with delimiters. I would like to feed each through a CSV parser to
convert the data into a DataFrame and, ultimately, UPSERT a
y-MM-dd'))
> AS TransactionDate
> , TransactionType
> , Description
> , Value
> , Balance
> , AccountName
> , AccountNumber
> FROM tmp
> """
> sql(sqltext)
>
> println ("\nFinished at");
>
> HTH
>
> Dr Mich Talebzadeh
>
> LinkedIn
> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
> http://talebzadehmich.wordpress.com
I have a quick question. I have downloaded multiple zipped files from S3 and
unzipped each one of them into strings. The next step is to parse them using a
CSV parser. I want to know if there is an easy way to use the spark-csv package
for this.
Thanks,
Ben
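A hedged sketch using spark-csv's RDD entry point (the csvRdd method linked
in the replies above), assuming contents is an RDD[String] holding one
unzipped CSV document split into lines, all sharing a single header:

import com.databricks.spark.csv.CsvParser

val lines = contents.flatMap(_.split("\n"))
val df = new CsvParser()
  .withUseHeader(true)
  .csvRdd(sqlContext, lines)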
I need a little help. I am loading into Spark 1.6 zipped csv files stored in s3.
First of all, I am able to get the List of file keys that have a modified date
within a range of time by using the AWS SDK Objects (AmazonS3Client,
ObjectListing, S3ObjectSummary, ListObjectsRequest,
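For reference, that listing step with the AWS Java SDK v1 might look like
the sketch below; the bucket, prefix, and time window are stand-ins, and a
real job would also page through truncated listings:

import com.amazonaws.services.s3.AmazonS3Client
import com.amazonaws.services.s3.model.ListObjectsRequest
import java.util.Date
import scala.collection.JavaConverters._

val s3 = new AmazonS3Client()  // uses the default credential chain
val listing = s3.listObjects(
  new ListObjectsRequest().withBucketName("my-bucket").withPrefix("incoming/"))

val cutoff = new Date(System.currentTimeMillis - 60 * 60 * 1000)  // last hour
val recentKeys = listing.getObjectSummaries.asScala
  .filter(_.getLastModified.after(cutoff))
  .map(_.getKey)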
Hi Gil,
Currently, our company uses S3 heavily for data storage. Can you further
explain the benefits of this in relation to S3 when the pending patch does come
out? Also, I have heard of Swift from others. Can you explain to me the pros
and cons of Swift compared to HDFS? It can be just a
il.com> wrote:
>
> Could you wrap the ZipInputStream in a List, since a subtype of
> TraversableOnce[?] is required?
>
> case (name, content) => List(new ZipInputStream(content.open))
>
> Xinh
>
> On Wed, Mar 9, 2016 at 7:07 AM, Benjamin Kim <bbuil...@gmail.com
is not in branch-1.
>
> compressionByName() resides in class with @InterfaceAudience.Private which
> got moved in master branch.
>
> So looks like there is some work to be done for backporting to branch-1 :-)
>
> On Sun, Mar 13, 2016 at 1:35 PM, Benjamin Kim <bbuil...@gmail
th hbase 1.0
>
> Cheers
>
> On Sun, Mar 13, 2016 at 11:39 AM, Benjamin Kim <bbuil...@gmail.com
> <mailto:bbuil...@gmail.com>> wrote:
> Hi Ted,
>
> I see that you’re working on the hbase-spark module for hbase. I recently
> packaged the SparkOnHBase project and