Hi,

I would like to know whether the cqlsh COPY command can be run from a Spark
Scala program, and whether it would benefit from the parallelism Spark provides.
I am doing something like this:

val conf = new SparkConf(true)
  .setMaster("spark://Master-Host:7077")
  .setAppName("Load Cs Table using COPY TO")
lazy val sc = new SparkContext(conf)

import com.datastax.spark.connector.cql.CassandraConnector

CassandraConnector(conf).withSessionDo { session =>
  session.execute("truncate wfcdb.test_wfctotal;")
  session.execute("COPY wfcdb.test_wfctotal (wfctotalid, timesheetitemid, " +
    "employeeid, durationsecsqty, wageamt, moneyamt, applydtm, laboracctid, " +
    "paycodeid, startdtm, stimezoneid, adjstartdtm, adjapplydtm, enddtm, " +
    "homeaccountsw, notpaidsw, wfcjoborgid, unapprovedsw, durationdaysqty, " +
    "updatedtm, totaledversion, acctapprovalnum) " +
    "FROM '/home/analytics/Documents/wfctotal.dat' " +
    "WITH DELIMITER = '|' AND HEADER = true;")
}
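To frame the question, the alternative I understand would actually parallelize the load across Spark executors is to read the file with sc.textFile and write through the connector's saveToCassandra. A rough sketch, assuming a simplified three-column subset of the table (the WfcTotal case class, the parseLine helper, and the column order are my own illustration, not the real schema):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._

// Hypothetical simplified row type; the real table has 22 columns.
case class WfcTotal(wfctotalid: Long, employeeid: Long, wageamt: Double)

// Parse one '|'-delimited line; the column order here is an assumption.
def parseLine(line: String): WfcTotal = {
  val f = line.split('|')
  WfcTotal(f(0).toLong, f(1).toLong, f(2).toDouble)
}

def runLoad(): Unit = {
  val conf = new SparkConf(true)
    .setMaster("spark://Master-Host:7077")
    .setAppName("Load test_wfctotal in parallel")
  val sc = new SparkContext(conf)

  sc.textFile("/home/analytics/Documents/wfctotal.dat") // split into partitions
    .filter(!_.startsWith("wfctotalid"))                // drop the header row
    .map(parseLine)
    .saveToCassandra("wfcdb", "test_wfctotal",
      SomeColumns("wfctotalid", "employeeid", "wageamt"))
}
```

The idea being that each partition of the text file is parsed and written by a separate executor, rather than funnelling everything through one cqlsh-style COPY on the driver.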

Regards,
Tarun Tiwari | Workforce Analytics-ETL | Kronos India
M: +91 9540 28 27 77 | Tel: +91 120 4015200
Kronos | Time & Attendance * Scheduling * Absence Management * HR & Payroll * 
Hiring * Labor Analytics
Join Kronos on: kronos.com<http://www.kronos.com/> | 
Facebook<http://www.kronos.com/facebook> | 
Twitter<http://www.kronos.com/twitter> | 
LinkedIn<http://www.kronos.com/linkedin> | 
YouTube<http://www.kronos.com/youtube>
