Re: How to insert data for 100 partitions at a time using Spark SQL

2016-05-22 Thread Mich Talebzadeh
Whatever you do, the lion's share of the time is going to be taken by the
insert into the Hive table.

OK, check this. It loads CSV files into a Hive ORC table. This version uses
the Hive on Spark engine and is written in HiveQL, executed via beeline.

--1 Move .CSV data into HDFS (a minimal sketch follows below)
--2 Create an external table
--3 Create the ORC table
--4 Insert the data from the external table into the Hive ORC table
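
Step 1 is not shown in the script itself. A minimal sketch from the OS shell
(the CSV file name is an assumption; the HDFS path matches the LOCATION used
below):

hdfs dfs -mkdir -p /data/stg/accounts/nw/10124772
hdfs dfs -put nw_10124772.csv /data/stg/accounts/nw/10124772/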

select from_unixtime(unix_timestamp(), 'dd/MM/yyyy HH:mm:ss.ss') AS StartTime;
set hive.exec.reducers.max=256;
use accounts;
--set hive.execution.engine=mr;
--2)
DROP TABLE IF EXISTS stg_t2;
CREATE EXTERNAL TABLE stg_t2 (
 INVOICENUMBER string
,PAYMENTDATE string
,NET string
,VAT string
,TOTAL string
)
COMMENT 'from csv file from excel sheet nw_10124772'
ROW FORMAT serde 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
STORED AS TEXTFILE
LOCATION '/data/stg/accounts/nw/10124772'
TBLPROPERTIES ("skip.header.line.count"="1")
;
--3)
DROP TABLE IF EXISTS t2;
CREATE TABLE t2 (
 INVOICENUMBER  INT
,PAYMENTDATE    date
,NET            DECIMAL(20,2)
,VAT            DECIMAL(20,2)
,TOTAL          DECIMAL(20,2)
)
COMMENT 'from csv file from excel sheet nw_10124772'
CLUSTERED BY (INVOICENUMBER) INTO 256 BUCKETS
STORED AS ORC
TBLPROPERTIES ( "orc.compress"="ZLIB" )
;
--4) Put data in the target table. Do the conversion and ignore empty rows
INSERT INTO TABLE t2
SELECT
  INVOICENUMBER
, TO_DATE(FROM_UNIXTIME(UNIX_TIMESTAMP(paymentdate,'dd/MM/yyyy'),'yyyy-MM-dd')) AS paymentdate
--, CAST(REGEXP_REPLACE(SUBSTR(net,2,20),",","") AS DECIMAL(20,2))
, CAST(REGEXP_REPLACE(net,'[^\\d\\.]','') AS DECIMAL(20,2))
, CAST(REGEXP_REPLACE(vat,'[^\\d\\.]','') AS DECIMAL(20,2))
, CAST(REGEXP_REPLACE(total,'[^\\d\\.]','') AS DECIMAL(20,2))
FROM
stg_t2
WHERE
--INVOICENUMBER > 0 AND
CAST(REGEXP_REPLACE(total,'[^\\d\\.]','') AS DECIMAL(20,2)) > 0.0
-- Exclude empty rows
;
select from_unixtime(unix_timestamp(), 'dd/MM/yyyy HH:mm:ss.ss') AS EndTime;
!exit
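
The script above can be run via beeline along these lines (a sketch; the JDBC
URL and the script file name are assumptions):

beeline -u jdbc:hive2://localhost:10000/accounts -f csv_to_orc.hql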

And a similar version using the Spark shell and a temporary table:

import org.apache.spark.sql.functions._
import java.sql.{Date, Timestamp}
val HiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
// bring toDF and a bare sql() into scope for the rest of the session
import HiveContext.implicits._
import HiveContext.sql
println ("\nStarted at"); sqlContext.sql("SELECT from_unixtime(unix_timestamp(), 'dd/MM/yyyy HH:mm:ss.ss')").collect.foreach(println)
//
// Get a DF first based on the Databricks CSV library; ignore the column heading because of the column called "Type"
//
val df = sqlContext.read.format("com.databricks.spark.csv")
  .option("inferSchema", "true")
  .option("header", "true")
  .load("hdfs://rhes564:9000/data/stg/accounts/nw/10124772")
//
// [Date: string, Type: string, Description: string, Value: double, Balance: double, Account Name: string, Account Number: string]
//
case class Accounts(TransactionDate: String, TransactionType: String, Description: String,
                    Value: Double, Balance: Double, AccountName: String, AccountNumber: String)
//
// Map the columns to names
//
val a = df.filter(col("Date") > "").map(p =>
  Accounts(p(0).toString, p(1).toString, p(2).toString, p(3).toString.toDouble,
           p(4).toString.toDouble, p(5).toString, p(6).toString))
//
// Create a Spark temporary table
//
a.toDF.registerTempTable("tmp")
//
// Test it here
//
//sql("select TransactionDate, TransactionType, Description, Value, Balance, AccountName, AccountNumber from tmp").take(2)
//
// Need to create and populate the target ORC table nw_10124772 in database accounts in Hive
//
sql("use accounts")
//
// Drop and create table nw_10124772
//
sql("DROP TABLE IF EXISTS accounts.nw_10124772")
var sqltext : String = ""
sqltext = """
CREATE TABLE accounts.nw_10124772 (
 TransactionDate   DATE
,TransactionType   String
,Description       String
,Value             Double
,Balance           Double
,AccountName       String
,AccountNumber     Int
)
COMMENT 'from csv file from excel sheet'
STORED AS ORC
TBLPROPERTIES ( "orc.compress"="ZLIB" )
"""
sql(sqltext)
//
// Put data in the Hive table. Clean-up is already done
//
sqltext = """
INSERT INTO TABLE accounts.nw_10124772
SELECT
  TO_DATE(FROM_UNIXTIME(UNIX_TIMESTAMP(TransactionDate,'dd/MM/yyyy'),'yyyy-MM-dd')) AS TransactionDate
, TransactionType
, Description
, Value
, Balance
, AccountName
, AccountNumber
FROM tmp
"""
sql(sqltext)
//
// Test all went OK by looking at some old transactions
//
sql("SELECT TransactionDate, Value, Balance FROM nw_10124772 WHERE TransactionDate < '2011-05-30'").collect.foreach(println)
//
println ("\nFinished at"); sqlContext.sql("SELECT from_unixtime(unix_timestamp(), 'dd/MM/yyyy HH:mm:ss.ss')").collect.foreach(println)
sys.exit()
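
Since this thread is about partitioned inserts, the same load can also be
written with the DataFrame writer instead of SQL. A minimal sketch (Spark
1.4+; the partition column and output path are assumptions, not part of the
original run):

// hypothetical: write the cleaned DataFrame as partitioned ORC directly,
// letting Spark create one directory per TransactionType value
a.toDF.write
  .format("orc")
  .partitionBy("TransactionType")
  .mode("overwrite")
  .save("hdfs://rhes564:9000/data/orc/accounts/nw_10124772")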


Anyway, worth trying.

HTH



Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com



On 22 May 2016 at 

Re: How to insert data for 100 partitions at a time using Spark SQL

2016-05-22 Thread swetha kasireddy
I am currently doing option 1, using the following, and it takes a lot of time.
What's the advantage of doing option 2, and how do I do it?

sqlContext.sql("  CREATE EXTERNAL TABLE IF NOT EXISTS records (id STRING,
record STRING) PARTITIONED BY (datePartition STRING, idPartition STRING)
stored as ORC LOCATION '/user/users' ")
  sqlContext.sql("  orc.compress= SNAPPY")
  sqlContext.sql(
""" from recordsTemp ps   insert overwrite table users
partition(datePartition , idPartition )  select ps.id, ps.record ,
ps.datePartition, ps.idPartition  """.stripMargin)
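
On the "100 partitions at a time" part, one way is to batch the distinct
partition keys and issue one insert per batch. A minimal sketch reusing the
names above (the batch size, the SET statements and the loop are assumptions):

// collect the distinct date partitions, then insert them 100 at a time
sqlContext.sql("SET hive.exec.dynamic.partition=true")
sqlContext.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
val parts = sqlContext.sql("SELECT DISTINCT datePartition FROM recordsTemp")
  .collect().map(_.getString(0))
parts.grouped(100).foreach { batch =>
  val inList = batch.map(p => s"'$p'").mkString(",")
  sqlContext.sql(
    s"""INSERT OVERWRITE TABLE users PARTITION (datePartition, idPartition)
       |SELECT ps.id, ps.record, ps.datePartition, ps.idPartition
       |FROM recordsTemp ps
       |WHERE ps.datePartition IN ($inList)""".stripMargin)
}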

On Sun, May 22, 2016 at 12:47 PM, Mich Talebzadeh  wrote:

> two alternatives for this ETL or ELT
>
>
>    1. There is only one external ORC table and you do insert overwrite
>    into that external table through Spark sql
>    2. 14k files loaded into staging area/read directory and then insert
>    overwrite into an ORC table and th
>
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> On 22 May 2016 at 20:38, swetha kasireddy 
> wrote:
>
>> Around 14000 partitions need to be loaded every hour. Yes, I tested this
>> and its taking a lot of time to load. A partition would look something like
>> the following which is further partitioned by userId with all the
>> userRecords for that date inside it.
>>
>> 5 2016-05-20 16:03 /user/user/userRecords/dtPartitioner=2012-09-12
>>
>> On Sun, May 22, 2016 at 12:30 PM, Mich Talebzadeh <
>> mich.talebza...@gmail.com> wrote:
>>
>>> by partition do you mean 14000 files loaded in each batch session (say
>>> daily)?.
>>>
>>> Have you actually tested this?
>>>
>>> Dr Mich Talebzadeh
>>>
>>>
>>>
>>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>
>>>
>>>
>>> http://talebzadehmich.wordpress.com
>>>
>>>
>>>
>>> On 22 May 2016 at 20:24, swetha kasireddy 
>>> wrote:
>>>
 The data is not very big. Say 1MB-10 MB at the max per partition. What
 is the best way to insert this 14k partitions with decent performance?

 On Sun, May 22, 2016 at 12:18 PM, Mich Talebzadeh <
 mich.talebza...@gmail.com> wrote:

> the acid question is how many rows are you going to insert in a batch
> session? btw if this is purely an sql operation then you can do all that 
> in
> hive running on spark engine. It will be very fast as well.
>
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> On 22 May 2016 at 20:14, Jörn Franke  wrote:
>
>> 14000 partitions seem to be way too many to be performant (except for
>> large data sets). How much data does one partition contain?
>>
>> > On 22 May 2016, at 09:34, SRK  wrote:
>> >
>> > Hi,
>> >
>> > In my Spark SQL query to insert data, I have around 14,000
>> partitions of
>> > data which seems to be causing memory issues. How can I insert the
>> data for
>> > 100 partitions at a time to avoid any memory issues?
>> >
>> >
>> >
>> > --
>> > View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/How-to-insert-data-for-100-partitions-at-a-time-using-Spark-SQL-tp26997.html
>> > Sent from the Apache Spark User List mailing list archive at
>> Nabble.com.
>> >
>> >
>> -
>> > To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>> > For additional commands, e-mail: user-h...@spark.apache.org
>> >
>>
>> -
>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>> For additional commands, e-mail: user-h...@spark.apache.org
>>
>>
>

>>>
>>
>


Re: How to insert data for 100 partitions at a time using Spark SQL

2016-05-22 Thread Mich Talebzadeh
Two alternatives for this ETL or ELT:


   1. There is only one external ORC table and you do insert overwrite into
   that external table through Spark sql
   2. 14k files loaded into staging area/read directory and then insert
   overwrite into an ORC table and th
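
A minimal sketch of the second alternative (the staging table name, columns
and delimiter are assumptions): an external staging table over the landing
directory, then an insert overwrite from it into the partitioned ORC table:

sqlContext.sql(
  """CREATE EXTERNAL TABLE IF NOT EXISTS stg_records
    |(id STRING, record STRING, datePartition STRING, idPartition STRING)
    |ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    |STORED AS TEXTFILE LOCATION '/data/stg/records'""".stripMargin)
sqlContext.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
sqlContext.sql(
  """INSERT OVERWRITE TABLE records PARTITION (datePartition, idPartition)
    |SELECT id, record, datePartition, idPartition FROM stg_records""".stripMargin)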



Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com



On 22 May 2016 at 20:38, swetha kasireddy  wrote:

> Around 14000 partitions need to be loaded every hour. Yes, I tested this
> and its taking a lot of time to load. A partition would look something like
> the following which is further partitioned by userId with all the
> userRecords for that date inside it.
>
> 5 2016-05-20 16:03 /user/user/userRecords/dtPartitioner=2012-09-12
>
> On Sun, May 22, 2016 at 12:30 PM, Mich Talebzadeh <
> mich.talebza...@gmail.com> wrote:
>
>> by partition do you mean 14000 files loaded in each batch session (say
>> daily)?.
>>
>> Have you actually tested this?
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>>
>> On 22 May 2016 at 20:24, swetha kasireddy 
>> wrote:
>>
>>> The data is not very big. Say 1MB-10 MB at the max per partition. What
>>> is the best way to insert this 14k partitions with decent performance?
>>>
>>> On Sun, May 22, 2016 at 12:18 PM, Mich Talebzadeh <
>>> mich.talebza...@gmail.com> wrote:
>>>
 the acid question is how many rows are you going to insert in a batch
 session? btw if this is purely an sql operation then you can do all that in
 hive running on spark engine. It will be very fast as well.



 Dr Mich Talebzadeh



 LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



 http://talebzadehmich.wordpress.com



 On 22 May 2016 at 20:14, Jörn Franke  wrote:

> 14000 partitions seem to be way too many to be performant (except for
> large data sets). How much data does one partition contain?
>
> > On 22 May 2016, at 09:34, SRK  wrote:
> >
> > Hi,
> >
> > In my Spark SQL query to insert data, I have around 14,000
> partitions of
> > data which seems to be causing memory issues. How can I insert the
> data for
> > 100 partitions at a time to avoid any memory issues?
> >
> >
> >
> > --
> > View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/How-to-insert-data-for-100-partitions-at-a-time-using-Spark-SQL-tp26997.html
> > Sent from the Apache Spark User List mailing list archive at
> Nabble.com.
> >
> > -
> > To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> > For additional commands, e-mail: user-h...@spark.apache.org
> >
>
> -
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>

>>>
>>
>


Re: How to insert data for 100 partitions at a time using Spark SQL

2016-05-22 Thread swetha kasireddy
Around 14000 partitions need to be loaded every hour. Yes, I tested this,
and it's taking a lot of time to load. A partition would look something like
the following, which is further partitioned by userId with all the
userRecords for that date inside it.

5 2016-05-20 16:03 /user/user/userRecords/dtPartitioner=2012-09-12

On Sun, May 22, 2016 at 12:30 PM, Mich Talebzadeh  wrote:

> by partition do you mean 14000 files loaded in each batch session (say
> daily)?.
>
> Have you actually tested this?
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> On 22 May 2016 at 20:24, swetha kasireddy 
> wrote:
>
>> The data is not very big. Say 1MB-10 MB at the max per partition. What is
>> the best way to insert this 14k partitions with decent performance?
>>
>> On Sun, May 22, 2016 at 12:18 PM, Mich Talebzadeh <
>> mich.talebza...@gmail.com> wrote:
>>
>>> the acid question is how many rows are you going to insert in a batch
>>> session? btw if this is purely an sql operation then you can do all that in
>>> hive running on spark engine. It will be very fast as well.
>>>
>>>
>>>
>>> Dr Mich Talebzadeh
>>>
>>>
>>>
>>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>
>>>
>>>
>>> http://talebzadehmich.wordpress.com
>>>
>>>
>>>
>>> On 22 May 2016 at 20:14, Jörn Franke  wrote:
>>>
 14000 partitions seem to be way too many to be performant (except for
 large data sets). How much data does one partition contain?

 > On 22 May 2016, at 09:34, SRK  wrote:
 >
 > Hi,
 >
 > In my Spark SQL query to insert data, I have around 14,000 partitions
 of
 > data which seems to be causing memory issues. How can I insert the
 data for
 > 100 partitions at a time to avoid any memory issues?
 >
 >
 >
 > --
 > View this message in context:
 http://apache-spark-user-list.1001560.n3.nabble.com/How-to-insert-data-for-100-partitions-at-a-time-using-Spark-SQL-tp26997.html
 > Sent from the Apache Spark User List mailing list archive at
 Nabble.com.
 >
 > -
 > To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
 > For additional commands, e-mail: user-h...@spark.apache.org
 >

 -
 To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
 For additional commands, e-mail: user-h...@spark.apache.org


>>>
>>
>


Re: How to insert data for 100 partitions at a time using Spark SQL

2016-05-22 Thread Mich Talebzadeh
By partition, do you mean 14000 files loaded in each batch session (say
daily)?

Have you actually tested this?

Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com



On 22 May 2016 at 20:24, swetha kasireddy  wrote:

> The data is not very big. Say 1MB-10 MB at the max per partition. What is
> the best way to insert this 14k partitions with decent performance?
>
> On Sun, May 22, 2016 at 12:18 PM, Mich Talebzadeh <
> mich.talebza...@gmail.com> wrote:
>
>> the acid question is how many rows are you going to insert in a batch
>> session? btw if this is purely an sql operation then you can do all that in
>> hive running on spark engine. It will be very fast as well.
>>
>>
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>>
>> On 22 May 2016 at 20:14, Jörn Franke  wrote:
>>
>>> 14000 partitions seem to be way too many to be performant (except for
>>> large data sets). How much data does one partition contain?
>>>
>>> > On 22 May 2016, at 09:34, SRK  wrote:
>>> >
>>> > Hi,
>>> >
>>> > In my Spark SQL query to insert data, I have around 14,000 partitions
>>> of
>>> > data which seems to be causing memory issues. How can I insert the
>>> data for
>>> > 100 partitions at a time to avoid any memory issues?
>>> >
>>> >
>>> >
>>> > --
>>> > View this message in context:
>>> http://apache-spark-user-list.1001560.n3.nabble.com/How-to-insert-data-for-100-partitions-at-a-time-using-Spark-SQL-tp26997.html
>>> > Sent from the Apache Spark User List mailing list archive at
>>> Nabble.com.
>>> >
>>> > -
>>> > To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>>> > For additional commands, e-mail: user-h...@spark.apache.org
>>> >
>>>
>>> -
>>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>>> For additional commands, e-mail: user-h...@spark.apache.org
>>>
>>>
>>
>


Re: How to insert data for 100 partitions at a time using Spark SQL

2016-05-22 Thread swetha kasireddy
The data is not very big, say 1 MB-10 MB at the max per partition. What is
the best way to insert these 14k partitions with decent performance?
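
With partitions this small, much of the cost is per-file and per-task
overhead rather than data volume, so reducing the number of output files per
partition can help. A hedged sketch (Spark 1.6+; table and column names are
from earlier in the thread, the repartition key is an assumption):

// aim for one output file per partition value instead of one per task
import org.apache.spark.sql.functions.col
val compacted = sqlContext.table("recordsTemp").repartition(col("datePartition"))
compacted.registerTempTable("recordsTempCompacted")
// then run the same INSERT OVERWRITE ... SELECT against recordsTempCompacted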

On Sun, May 22, 2016 at 12:18 PM, Mich Talebzadeh  wrote:

> the acid question is how many rows are you going to insert in a batch
> session? btw if this is purely an sql operation then you can do all that in
> hive running on spark engine. It will be very fast as well.
>
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> On 22 May 2016 at 20:14, Jörn Franke  wrote:
>
>> 14000 partitions seem to be way too many to be performant (except for
>> large data sets). How much data does one partition contain?
>>
>> > On 22 May 2016, at 09:34, SRK  wrote:
>> >
>> > Hi,
>> >
>> > In my Spark SQL query to insert data, I have around 14,000 partitions of
>> > data which seems to be causing memory issues. How can I insert the data
>> for
>> > 100 partitions at a time to avoid any memory issues?
>> >
>> >
>> >
>> > --
>> > View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/How-to-insert-data-for-100-partitions-at-a-time-using-Spark-SQL-tp26997.html
>> > Sent from the Apache Spark User List mailing list archive at Nabble.com.
>> >
>> > -
>> > To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>> > For additional commands, e-mail: user-h...@spark.apache.org
>> >
>>
>> -
>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>> For additional commands, e-mail: user-h...@spark.apache.org
>>
>>
>


Re: How to insert data for 100 partitions at a time using Spark SQL

2016-05-22 Thread Mich Talebzadeh
The acid question is: how many rows are you going to insert in a batch
session? BTW, if this is purely a SQL operation then you can do all of that in
Hive running on the Spark engine. It will be very fast as well.
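
For reference, switching a Hive session to the Spark engine is one line (this
assumes Hive on Spark is already configured on the cluster):

set hive.execution.engine=spark;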



Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com



On 22 May 2016 at 20:14, Jörn Franke  wrote:

> 14000 partitions seem to be way too many to be performant (except for
> large data sets). How much data does one partition contain?
>
> > On 22 May 2016, at 09:34, SRK  wrote:
> >
> > Hi,
> >
> > In my Spark SQL query to insert data, I have around 14,000 partitions of
> > data which seems to be causing memory issues. How can I insert the data
> for
> > 100 partitions at a time to avoid any memory issues?
> >
> >
> >
> > --
> > View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/How-to-insert-data-for-100-partitions-at-a-time-using-Spark-SQL-tp26997.html
> > Sent from the Apache Spark User List mailing list archive at Nabble.com.
> >
> > -
> > To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> > For additional commands, e-mail: user-h...@spark.apache.org
> >
>
> -
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>


Re: How to insert data for 100 partitions at a time using Spark SQL

2016-05-22 Thread Jörn Franke
14000 partitions seem to be way too many to be performant (except for large 
data sets). How much data does one partition contain?

> On 22 May 2016, at 09:34, SRK  wrote:
> 
> Hi,
> 
> In my Spark SQL query to insert data, I have around 14,000 partitions of
> data which seems to be causing memory issues. How can I insert the data for
> 100 partitions at a time to avoid any memory issues? 
> 
> 
> 
> --
> View this message in context: 
> http://apache-spark-user-list.1001560.n3.nabble.com/How-to-insert-data-for-100-partitions-at-a-time-using-Spark-SQL-tp26997.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
> 
> -
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
> 

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: How to insert data for 100 partitions at a time using Spark SQL

2016-05-22 Thread Sabarish Sasidharan
Can't you just reduce the amount of data you insert by applying a filter so
that only a small set of idPartitions is selected? You could have multiple
such inserts to cover all idPartitions. Does that help?

Regards
Sab
On 22 May 2016 1:11 pm, "swetha kasireddy" 
wrote:

> I am looking at ORC. I insert the data using the following query.
>
> sqlContext.sql("  CREATE EXTERNAL TABLE IF NOT EXISTS records (id STRING,
> record STRING) PARTITIONED BY (datePartition STRING, idPartition STRING)
> stored as ORC LOCATION '/user/users' ")
>   sqlContext.sql("  orc.compress= SNAPPY")
>   sqlContext.sql(
> """ from recordsTemp ps   insert overwrite table users
> partition(datePartition , idPartition )  select ps.id, ps.record ,
> ps.datePartition, ps.idPartition  """.stripMargin)
>
> On Sun, May 22, 2016 at 12:37 AM, Mich Talebzadeh <
> mich.talebza...@gmail.com> wrote:
>
>> where is your base table and what format is it Parquet, ORC etc)
>>
>>
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>>
>> On 22 May 2016 at 08:34, SRK  wrote:
>>
>>> Hi,
>>>
>>> In my Spark SQL query to insert data, I have around 14,000 partitions of
>>> data which seems to be causing memory issues. How can I insert the data
>>> for
>>> 100 partitions at a time to avoid any memory issues?
>>>
>>>
>>>
>>> --
>>> View this message in context:
>>> http://apache-spark-user-list.1001560.n3.nabble.com/How-to-insert-data-for-100-partitions-at-a-time-using-Spark-SQL-tp26997.html
>>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>>
>>> -
>>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>>> For additional commands, e-mail: user-h...@spark.apache.org
>>>
>>>
>>
>


Re: How to insert data for 100 partitions at a time using Spark SQL

2016-05-22 Thread swetha kasireddy
So, if I put in 1000 records at a time and the next 1000 records have some
records with the same partition as the previous records, then the data will
be overwritten. How can I prevent overwriting valid data in this case?
Could you post the example that you are talking about?

What I am doing is, in the final insert into the ORC table, I
insert/overwrite the data. So, I need a way to insert all the data
related to one partition at a time so that it is not overwritten when I
insert the next set of records.
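
One point worth verifying on your version: with dynamic partitions, INSERT
OVERWRITE replaces only the partitions that actually receive rows from the
SELECT; partitions absent from the incoming data are left untouched. A sketch
restricted to a single partition (the date value is hypothetical):

// only datePartition='2016-05-22' is rewritten; other partitions keep their data
sqlContext.sql(
  """INSERT OVERWRITE TABLE users PARTITION (datePartition, idPartition)
    |SELECT ps.id, ps.record, ps.datePartition, ps.idPartition
    |FROM recordsTemp ps
    |WHERE ps.datePartition = '2016-05-22'""".stripMargin)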

On Sun, May 22, 2016 at 11:51 AM, Mich Talebzadeh  wrote:

> ok is the staging table used as staging only.
>
> you can create a staging *directory* where you put your data there (you
> can put 100s of files there) and do an insert/select that will take data
> from 100 files into your main ORC table.
>
> I have an example of 100's of CSV files insert/select from a staging
> external table into an ORC table.
>
> My point is you are more likely interested in doing analysis on ORC table
> (read internal) rather than using staging table.
>
> HTH
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> On 22 May 2016 at 19:43, swetha kasireddy 
> wrote:
>
>> But, how do I take 100 partitions at a time from staging table?
>>
>> On Sun, May 22, 2016 at 11:26 AM, Mich Talebzadeh <
>> mich.talebza...@gmail.com> wrote:
>>
>>> ok so you still keep data as ORC in Hive for further analysis
>>>
>>> what I have in mind is to have an external table as staging table and do
>>> insert into an orc internal table which is bucketed and partitioned.
>>>
>>>
>>>
>>> Dr Mich Talebzadeh
>>>
>>>
>>>
>>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>
>>>
>>>
>>> http://talebzadehmich.wordpress.com
>>>
>>>
>>>
>>> On 22 May 2016 at 19:11, swetha kasireddy 
>>> wrote:
>>>
 I am looking at ORC. I insert the data using the following query.

 sqlContext.sql("  CREATE EXTERNAL TABLE IF NOT EXISTS records (id
 STRING,
 record STRING) PARTITIONED BY (datePartition STRING, idPartition STRING)
 stored as ORC LOCATION '/user/users' ")
   sqlContext.sql("  orc.compress= SNAPPY")
   sqlContext.sql(
 """ from recordsTemp ps   insert overwrite table users
 partition(datePartition , idPartition )  select ps.id, ps.record ,
 ps.datePartition, ps.idPartition  """.stripMargin)

 On Sun, May 22, 2016 at 12:37 AM, Mich Talebzadeh <
 mich.talebza...@gmail.com> wrote:

> where is your base table and what format is it Parquet, ORC etc)
>
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> On 22 May 2016 at 08:34, SRK  wrote:
>
>> Hi,
>>
>> In my Spark SQL query to insert data, I have around 14,000 partitions
>> of
>> data which seems to be causing memory issues. How can I insert the
>> data for
>> 100 partitions at a time to avoid any memory issues?
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/How-to-insert-data-for-100-partitions-at-a-time-using-Spark-SQL-tp26997.html
>> Sent from the Apache Spark User List mailing list archive at
>> Nabble.com.
>>
>> -
>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>> For additional commands, e-mail: user-h...@spark.apache.org
>>
>>
>

>>>
>>
>


Re: How to insert data for 100 partitions at a time using Spark SQL

2016-05-22 Thread Mich Talebzadeh
OK, is the staging table used as staging only?

You can create a staging *directory* where you put your data (you can
put 100s of files there) and do an insert/select that will take data from
100 files into your main ORC table.

I have an example of 100s of CSV files insert/selected from a staging
external table into an ORC table.

My point is you are more likely interested in doing analysis on the ORC table
(read: internal) rather than on the staging table.
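
A compressed sketch of that flow, reusing the stg_t2/t2 tables from earlier
in the thread (the batch file names are assumptions):

-- land more files in the staging directory from the OS shell, e.g.
--   hdfs dfs -put invoice_batch_*.csv /data/stg/accounts/nw/10124772/
-- the external table sees them on the next query; then insert/select:
INSERT INTO TABLE t2
SELECT INVOICENUMBER
, TO_DATE(FROM_UNIXTIME(UNIX_TIMESTAMP(paymentdate,'dd/MM/yyyy'),'yyyy-MM-dd'))
, CAST(REGEXP_REPLACE(net,'[^\\d\\.]','') AS DECIMAL(20,2))
, CAST(REGEXP_REPLACE(vat,'[^\\d\\.]','') AS DECIMAL(20,2))
, CAST(REGEXP_REPLACE(total,'[^\\d\\.]','') AS DECIMAL(20,2))
FROM stg_t2
WHERE CAST(REGEXP_REPLACE(total,'[^\\d\\.]','') AS DECIMAL(20,2)) > 0.0;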

HTH

Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com



On 22 May 2016 at 19:43, swetha kasireddy  wrote:

> But, how do I take 100 partitions at a time from staging table?
>
> On Sun, May 22, 2016 at 11:26 AM, Mich Talebzadeh <
> mich.talebza...@gmail.com> wrote:
>
>> ok so you still keep data as ORC in Hive for further analysis
>>
>> what I have in mind is to have an external table as staging table and do
>> insert into an orc internal table which is bucketed and partitioned.
>>
>>
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>>
>> On 22 May 2016 at 19:11, swetha kasireddy 
>> wrote:
>>
>>> I am looking at ORC. I insert the data using the following query.
>>>
>>> sqlContext.sql("  CREATE EXTERNAL TABLE IF NOT EXISTS records (id STRING,
>>> record STRING) PARTITIONED BY (datePartition STRING, idPartition STRING)
>>> stored as ORC LOCATION '/user/users' ")
>>>   sqlContext.sql("  orc.compress= SNAPPY")
>>>   sqlContext.sql(
>>> """ from recordsTemp ps   insert overwrite table users
>>> partition(datePartition , idPartition )  select ps.id, ps.record ,
>>> ps.datePartition, ps.idPartition  """.stripMargin)
>>>
>>> On Sun, May 22, 2016 at 12:37 AM, Mich Talebzadeh <
>>> mich.talebza...@gmail.com> wrote:
>>>
 where is your base table and what format is it Parquet, ORC etc)



 Dr Mich Talebzadeh



 LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



 http://talebzadehmich.wordpress.com



 On 22 May 2016 at 08:34, SRK  wrote:

> Hi,
>
> In my Spark SQL query to insert data, I have around 14,000 partitions
> of
> data which seems to be causing memory issues. How can I insert the
> data for
> 100 partitions at a time to avoid any memory issues?
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/How-to-insert-data-for-100-partitions-at-a-time-using-Spark-SQL-tp26997.html
> Sent from the Apache Spark User List mailing list archive at
> Nabble.com.
>
> -
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>

>>>
>>
>


Re: How to insert data for 100 partitions at a time using Spark SQL

2016-05-22 Thread swetha kasireddy
But, how do I take 100 partitions at a time from the staging table?

On Sun, May 22, 2016 at 11:26 AM, Mich Talebzadeh  wrote:

> ok so you still keep data as ORC in Hive for further analysis
>
> what I have in mind is to have an external table as staging table and do
> insert into an orc internal table which is bucketed and partitioned.
>
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> On 22 May 2016 at 19:11, swetha kasireddy 
> wrote:
>
>> I am looking at ORC. I insert the data using the following query.
>>
>> sqlContext.sql("  CREATE EXTERNAL TABLE IF NOT EXISTS records (id STRING,
>> record STRING) PARTITIONED BY (datePartition STRING, idPartition STRING)
>> stored as ORC LOCATION '/user/users' ")
>>   sqlContext.sql("  orc.compress= SNAPPY")
>>   sqlContext.sql(
>> """ from recordsTemp ps   insert overwrite table users
>> partition(datePartition , idPartition )  select ps.id, ps.record ,
>> ps.datePartition, ps.idPartition  """.stripMargin)
>>
>> On Sun, May 22, 2016 at 12:37 AM, Mich Talebzadeh <
>> mich.talebza...@gmail.com> wrote:
>>
>>> where is your base table and what format is it Parquet, ORC etc)
>>>
>>>
>>>
>>> Dr Mich Talebzadeh
>>>
>>>
>>>
>>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>
>>>
>>>
>>> http://talebzadehmich.wordpress.com
>>>
>>>
>>>
>>> On 22 May 2016 at 08:34, SRK  wrote:
>>>
 Hi,

 In my Spark SQL query to insert data, I have around 14,000 partitions of
 data which seems to be causing memory issues. How can I insert the data
 for
 100 partitions at a time to avoid any memory issues?



 --
 View this message in context:
 http://apache-spark-user-list.1001560.n3.nabble.com/How-to-insert-data-for-100-partitions-at-a-time-using-Spark-SQL-tp26997.html
 Sent from the Apache Spark User List mailing list archive at Nabble.com.

 -
 To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
 For additional commands, e-mail: user-h...@spark.apache.org


>>>
>>
>


Re: How to insert data for 100 partitions at a time using Spark SQL

2016-05-22 Thread swetha kasireddy
I am looking at ORC. I insert the data using the following query.

sqlContext.sql("  CREATE EXTERNAL TABLE IF NOT EXISTS records (id STRING,
record STRING) PARTITIONED BY (datePartition STRING, idPartition STRING)
stored as ORC LOCATION '/user/users' ")
  sqlContext.sql("  orc.compress= SNAPPY")
  sqlContext.sql(
""" from recordsTemp ps   insert overwrite table users
partition(datePartition , idPartition )  select ps.id, ps.record ,
ps.datePartition, ps.idPartition  """.stripMargin)

On Sun, May 22, 2016 at 12:37 AM, Mich Talebzadeh  wrote:

> where is your base table and what format is it Parquet, ORC etc)
>
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> On 22 May 2016 at 08:34, SRK  wrote:
>
>> Hi,
>>
>> In my Spark SQL query to insert data, I have around 14,000 partitions of
>> data which seems to be causing memory issues. How can I insert the data
>> for
>> 100 partitions at a time to avoid any memory issues?
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/How-to-insert-data-for-100-partitions-at-a-time-using-Spark-SQL-tp26997.html
>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>
>> -
>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>> For additional commands, e-mail: user-h...@spark.apache.org
>>
>>
>


Re: How to insert data for 100 partitions at a time using Spark SQL

2016-05-22 Thread Mich Talebzadeh
Where is your base table, and what format is it (Parquet, ORC etc.)?



Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com



On 22 May 2016 at 08:34, SRK  wrote:

> Hi,
>
> In my Spark SQL query to insert data, I have around 14,000 partitions of
> data which seems to be causing memory issues. How can I insert the data for
> 100 partitions at a time to avoid any memory issues?
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/How-to-insert-data-for-100-partitions-at-a-time-using-Spark-SQL-tp26997.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> -
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>


How to insert data for 100 partitions at a time using Spark SQL

2016-05-22 Thread SRK
Hi,

In my Spark SQL query to insert data, I have around 14,000 partitions of
data which seems to be causing memory issues. How can I insert the data for
100 partitions at a time to avoid any memory issues? 



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/How-to-insert-data-for-100-partitions-at-a-time-using-Spark-SQL-tp26997.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org