But if Cassandra supports a JDBC interface, it should work, though it may be
suboptimal.
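
For what it's worth, here is a rough two-step sketch of that generic JDBC route.
It assumes a third-party Cassandra JDBC driver is on Sqoop's classpath; the
driver class name, the jdbc:cassandra:// URL format, and the /tmp/npa_nxx
staging directory below are illustrative assumptions, not something Sqoop or
DSE ships by default:

# Step 1: land the MySQL table in HDFS with the stock MySQL connector.
sqoop import \
  --connect jdbc:mysql://192.168.1.102/npa_nxx_demo \
  --username root -P \
  --table npa_nxx \
  --target-dir /tmp/npa_nxx

# Step 2: push the staged rows into Cassandra through Sqoop's generic JDBC path.
# (Hypothetical driver class and URL; replace with whatever the driver documents.)
sqoop export \
  --driver org.apache.cassandra.cql.jdbc.CassandraDriver \
  --connect jdbc:cassandra://192.168.0.201:9160/pdc_crawler \
  --table npa_nxx_cf \
  --export-dir /tmp/npa_nxx

The flags themselves (--driver, --connect, --table, --target-dir, --export-dir,
-P) are standard Sqoop 1.x options; how well the export performs depends
entirely on the driver, which is why this path may be suboptimal compared to a
native connector like the DSE one discussed below.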

On Wed, Aug 8, 2012 at 3:47 AM, [email protected] <
[email protected]> wrote:

> Hi Jarcec,
>
> Thanks a lot.
>
> Regards
> Rajesh Kumar
>
> On Wednesday 08 August 2012 04:05 PM, Jarek Jarcec Cecho wrote:
>
>> Hi Rajesh,
>> as far as I know, Apache Sqoop does not have built-in support for Cassandra
>> at the moment.
>>
>> Jarcec
>>
>> On Wed, Aug 08, 2012 at 03:19:07PM +0530, [email protected] wrote:
>>
>>> Hi Everyone,
>>> Can Sqoop version 1.4.1, or any other version, import data from a MySQL
>>> database into Cassandra (into the Cassandra File System rather than Hive
>>> or HDFS)?
>>> If so, what is the command for it?
>>>
>>> Thanks
>>> Rajesh Kumar
>>> On Wednesday 08 August 2012 01:13 AM, Jarek Jarcec Cecho wrote:
>>>
>>>> Hi Rajesh,
>>>> this seems to be a custom build provided by a third party. I'm afraid that
>>>> we can't help you, as the exception itself is not generated by Sqoop but by
>>>> some additional code that we're simply not familiar with. I would advise
>>>> contacting the original source of your build for more help.
>>>>
>>>> Jarcec
>>>>
>>>> On Tue, Aug 07, 2012 at 12:19:15PM +0530, [email protected] wrote:
>>>>
>>>>> Hi Jarcec,
>>>>>
>>>>> I am using the Sqoop utility provided by DataStax in their Enterprise
>>>>> version: Sqoop 1.4.1-dse-SNAPSHOT
>>>>>
>>>>> I am using the following command to import:
>>>>>
>>>>> ./dse sqoop import --connect jdbc:mysql://192.168.1.102/npa_nxx_demo
>>>>> --username root --password *** --table npa_nxx --cassandra-keyspace
>>>>> pdc_crawler --cassandra-column-family npa_nxx_cf --cassandra-row-key
>>>>> npa_nxx_key --cassandra-thrift-host 192.168.0.203,192.168.0.224
>>>>> --cassandra-create-schema --verbose
>>>>>
>>>>> This is the verbose output:
>>>>> ./dse sqoop import --connect jdbc:mysql://192.168.0.102/npa_nxx_demo
>>>>> --username root --password orka1 --table npa_nxx
>>>>> --cassandra-keyspace pdc_crawler --cassandra-column-family
>>>>> npa_nxx_cf --cassandra-row-key npa_nxx_key --cassandra-thrift-host
>>>>> 192.168.0.201,192.168.0.202 --cassandra-create-schema --verbose
>>>>> 12/08/07 12:12:26 WARN tool.BaseSqoopTool: Setting your password on
>>>>> the command-line is insecure. Consider using -P instead.
>>>>> 12/08/07 12:12:26 INFO manager.MySQLManager: Preparing to use a
>>>>> MySQL streaming resultset.
>>>>> 12/08/07 12:12:26 INFO tool.CodeGenTool: Beginning code generation
>>>>> 12/08/07 12:12:27 INFO manager.SqlManager: Executing SQL statement:
>>>>> SELECT t.* FROM `npa_nxx` AS t LIMIT 1
>>>>> 12/08/07 12:12:27 INFO orm.CompilationManager: HADOOP_HOME is
>>>>> /home/Downloads/dse-2.1/resources/hadoop/bin/..
>>>>> Note: /tmp/sqoop-root/compile/474195de4276c8abd8d19c14d4daeda5/npa_nxx.java uses or overrides a deprecated API.
>>>>> Note: Recompile with -Xlint:deprecation for details.
>>>>> 12/08/07 12:12:28 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/474195de4276c8abd8d19c14d4daeda5/npa_nxx.jar
>>>>> 12/08/07 12:12:28 WARN manager.MySQLManager: It looks like you are
>>>>> importing from mysql.
>>>>> 12/08/07 12:12:28 WARN manager.MySQLManager: This transfer can be
>>>>> faster! Use the --direct
>>>>> 12/08/07 12:12:28 WARN manager.MySQLManager: option to exercise a
>>>>> MySQL-specific fast path.
>>>>> 12/08/07 12:12:28 INFO manager.MySQLManager: Setting zero DATETIME
>>>>> behavior to convertToNull (mysql)
>>>>> 12/08/07 12:12:28 INFO mapreduce.ImportJobBase: Beginning import of
>>>>> npa_nxx
>>>>> 12/08/07 12:12:31 ERROR sqoop.Sqoop: Got exception running Sqoop:
>>>>> org.apache.cassandra.db.marshal.MarshalException: 97 is not
>>>>> recognized as a valid type
>>>>> org.apache.cassandra.db.marshal.MarshalException: 97 is not
>>>>> recognized as a valid type
>>>>>      at com.datastax.bdp.util.CompositeUtil.deserialize(CompositeUtil.java:93)
>>>>>      at com.datastax.bdp.hadoop.cfs.CassandraFileSystemThriftStore.retrieveINode(CassandraFileSystemThriftStore.java:585)
>>>>>      at com.datastax.bdp.hadoop.cfs.CassandraFileSystemThriftStore.retrieveINode(CassandraFileSystemThriftStore.java:563)
>>>>>      at com.datastax.bdp.hadoop.cfs.CassandraFileSystem.getFileStatus(CassandraFileSystem.java:520)
>>>>>      at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
>>>>>      at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:103)
>>>>>      at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:856)
>>>>>      at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
>>>>>      at java.security.AccessController.doPrivileged(Native Method)
>>>>>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
>>>>>      at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
>>>>>      at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
>>>>>      at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
>>>>>      at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:119)
>>>>>      at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:179)
>>>>>      at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:423)
>>>>>      at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:97)
>>>>>      at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:380)
>>>>>      at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:453)
>>>>>      at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
>>>>>      at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>>>>>      at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
>>>>>      at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
>>>>>      at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
>>>>>      at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
>>>>>      at com.cloudera.sqoop.Sqoop.main(Sqoop.java:57)
>>>>>
>>>>> Thanks a lot.
>>>>> Rajesh Kumar
>>>>>
>>>>> On Friday 03 August 2012 10:39 PM, Jarek Jarcec Cecho wrote:
>>>>>
>>>>>> Hi Rajesh,
>>>>>> could you please specify the Sqoop version you're using, the entire command
>>>>>> line you've used (please feel free to substitute sensitive information with
>>>>>> asterisks) and the entire log that Sqoop generated with the --verbose flag?
>>>>>>
>>>>>> Jarcec
>>>>>>
>>>>>> On Fri, Aug 03, 2012 at 10:07:50AM +0530, [email protected] wrote:
>>>>>>
>>>>>>> Hi Everyone,
>>>>>>> I am trying to import data from MySQL into a Cassandra column family
>>>>>>> and I am getting the following error.
>>>>>>>
>>>>>>> ERROR sqoop.Sqoop: Got exception running Sqoop:
>>>>>>> org.apache.cassandra.db.marshal.MarshalException: 97 is not
>>>>>>> recognized as a valid type
>>>>>>> org.apache.cassandra.db.marshal.MarshalException: 97 is not
>>>>>>> recognized as a valid type
>>>>>>>      at com.datastax.bdp.util.CompositeUtil.deserialize(CompositeUtil.java:93)
>>>>>>>      at com.datastax.bdp.hadoop.cfs.CassandraFileSystemThriftStore.retrieveINode(CassandraFileSystemThriftStore.java:585)
>>>>>>>      at com.datastax.bdp.hadoop.cfs.CassandraFileSystemThriftStore.retrieveINode(CassandraFileSystemThriftStore.java:563)
>>>>>>>      at com.datastax.bdp.hadoop.cfs.CassandraFileSystem.getFileStatus(CassandraFileSystem.java:520)
>>>>>>>      at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:768)
>>>>>>>      at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:103)
>>>>>>>      at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:856)
>>>>>>>      at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
>>>>>>>      at java.security.AccessController.doPrivileged(Native Method)
>>>>>>>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>>>>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
>>>>>>>      at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
>>>>>>>      at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
>>>>>>>      at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
>>>>>>>      at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:119)
>>>>>>>      at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:179)
>>>>>>>      at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:423)
>>>>>>>      at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:97)
>>>>>>>      at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:380)
>>>>>>>      at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:453)
>>>>>>>      at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
>>>>>>>      at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>>>>>>>      at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
>>>>>>>      at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
>>>>>>>      at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
>>>>>>>      at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
>>>>>>>      at com.cloudera.sqoop.Sqoop.main(Sqoop.java:57)
>>>>>>>
>>>>>>> Could anyone help me solve this issue?
>>>>>>>
>>>>>>> Thanks
>>>>>>> Rajesh Kumar
>>>>>>>
>>>>>>
>>>
>
>


-- 
Regards,
Venkatesh

Phone: (408) 658-8368
EMail: [email protected]

http://in.linkedin.com/in/seetharamvenkatesh
http://about.me/SeetharamVenkatesh

“Perfection (in design) is achieved not when there is nothing more to add,
but rather when there is nothing more to take away.”
- Antoine de Saint-Exupéry
