Re: [Dev] [DAS 3.0.0 Beta] No FileSystem for scheme: file

2015-08-14 Thread Maheshakya Wijewardena
Hi Anuruddha,

Were you able to fix the latter issue?


Re: [Dev] [DAS 3.0.0 Beta] No FileSystem for scheme: file

2015-08-14 Thread Pubudu Gunatilaka
Hi Maheshakya,

The problem was with the HBase cluster. If the HBase cluster is not properly
set up, you will get the above-mentioned HBase table issues. You can check the
status of the cluster from the HBase shell.

Thank you!


Re: [Dev] [DAS 3.0.0 Beta] No FileSystem for scheme: file

2015-07-27 Thread Anuruddha Liyanarachchi
Hi Gokul,

Thanks for the reply.

Yes, this runs on an HBase cluster. I have added the properties, and they fixed
the "No FileSystem for scheme: file" error. Now I am getting the following error.

org.wso2.carbon.analytics.datasource.commons.exception.AnalyticsException:
Error checking existence of table __SHARD_INDEX_UPDATE_RECORDS__1 for
tenant -1000
at
org.wso2.carbon.analytics.datasource.hbase.HBaseAnalyticsRecordStore.tableExists(HBaseAnalyticsRecordStore.java:126)
at
org.wso2.carbon.analytics.datasource.hbase.HBaseAnalyticsRecordStore.get(HBaseAnalyticsRecordStore.java:274)
at
org.wso2.carbon.analytics.dataservice.indexing.AnalyticsDataIndexer.loadIndexOperationRecords(AnalyticsDataIndexer.java:554)
at
org.wso2.carbon.analytics.dataservice.indexing.AnalyticsDataIndexer.loadIndexOperationUpdateRecords(AnalyticsDataIndexer.java:519)
at
org.wso2.carbon.analytics.dataservice.indexing.AnalyticsDataIndexer.loadIndexOperationUpdateRecords(AnalyticsDataIndexer.java:515)
at
org.wso2.carbon.analytics.dataservice.indexing.AnalyticsDataIndexer.processIndexUpdateOperations(AnalyticsDataIndexer.java:403)
at
org.wso2.carbon.analytics.dataservice.indexing.AnalyticsDataIndexer.processIndexOperations(AnalyticsDataIndexer.java:491)
at
org.wso2.carbon.analytics.dataservice.indexing.AnalyticsDataIndexer.access$200(AnalyticsDataIndexer.java:118)
at
org.wso2.carbon.analytics.dataservice.indexing.AnalyticsDataIndexer$IndexWorker.run(AnalyticsDataIndexer.java:1744)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't
get the locations
at
org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:305)
at
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:131)
at
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:56)
at
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:287)
at
org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:267)
at
org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:139)
at
org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:134)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:823)
at
org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:601)
at
org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:365)
at
org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:281)
at
org.wso2.carbon.analytics.datasource.hbase.HBaseAnalyticsRecordStore.tableExists(HBaseAnalyticsRecordStore.java:123)
... 11 more



Re: [Dev] [DAS 3.0.0 Beta] No FileSystem for scheme: file

2015-07-27 Thread Gokul Balakrishnan
Hi Anuruddha,

This seems to be an issue with the HBase cluster itself rather than an
integration issue. Could you check the HBase logs and see if anything is
reported there? In any case, please also set the ZooKeeper quorum property
(hbase.zookeeper.quorum), specifying the ZooKeeper nodes.

Thanks,
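The quorum setting above, rendered as it would appear alongside the other
datasource properties; a minimal sketch in which the ZooKeeper hostnames and
client port are placeholders, not values taken from this thread:

```xml
<!-- Hypothetical values: replace the hostnames (and the port, if
     non-default) with the actual ZooKeeper nodes of the cluster. -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
```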


Re: [Dev] [DAS 3.0.0 Beta] No FileSystem for scheme: file

2015-07-26 Thread Gokul Balakrishnan
Hi Anuruddha,

Are you running HBase on a cluster (i.e. on top of HDFS)? If yes, can you
ensure that you have the following in the analytics-datasource.xml for the
HBase datasource?

<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>
<property>
  <name>fs.file.impl</name>
  <value>org.apache.hadoop.fs.LocalFileSystem</value>
</property>

Thanks,


On 27 July 2015 at 10:43, Anuruddha Liyanarachchi anurudd...@wso2.com
wrote:

 Hi DAS team,

 I have created a DAS receiver and analytics cluster as in the diagram [1].
 In this setup, the DAS receivers are connected to MySQL (FS_DB) and HBase
 (EventStore).

 I am getting the following errors when I start the DAS receivers. What could
 be the reason for this error?
 I have attached the carbon log as well.

 TID: [-1] [] [2015-07-27 04:37:40,838]  WARN
 {org.apache.hadoop.hbase.util.DynamicClassLoader} -  Failed to identify the
 fs of dir /opt/wso2das-3.0.0-SNAPSHOT/tmp/hbase-root/hbase/lib, ignored
 {org.apache.hadoop.hbase.util.DynamicClassLoader}
 java.io.IOException: No FileSystem for scheme: file
 at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2584)
 at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
 at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
 at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
 at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
 at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
 at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169)
 at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:354)
 at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
 at
 org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:104)
 at
 org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:229)
 at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
 at
 org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75)
 at
 org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:86)
 at
 org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:833)
 at
 org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:623)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
 at
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
 at
 org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
 at
 org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
 at
 org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
 at
 org.wso2.carbon.datasource.reader.hadoop.HadoopDataSourceReaderUtil.getHBaseConnection(HadoopDataSourceReaderUtil.java:79)
 at
 org.wso2.carbon.datasource.reader.hadoop.HBaseDataSourceReader.createDataSource(HBaseDataSourceReader.java:35)
 at
 org.wso2.carbon.ndatasource.core.DataSourceRepository.createDataSourceObject(DataSourceRepository.java:202)
 at
 org.wso2.carbon.ndatasource.core.DataSourceRepository.registerDataSource(DataSourceRepository.java:359)
 at
 org.wso2.carbon.ndatasource.core.DataSourceRepository.addDataSource(DataSourceRepository.java:473)
 at
 org.wso2.carbon.ndatasource.core.DataSourceManager.initSystemDataSource(DataSourceManager.java:185)
 at
 org.wso2.carbon.ndatasource.core.DataSourceManager.initSystemDataSources(DataSourceManager.java:164)
 at
 org.wso2.carbon.ndatasource.core.internal.DataSourceServiceComponent.initSystemDataSources(DataSourceServiceComponent.java:192)
 at
 org.wso2.carbon.ndatasource.core.internal.DataSourceServiceComponent.setSecretCallbackHandlerService(DataSourceServiceComponent.java:178)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at
 org.eclipse.equinox.internal.ds.model.ComponentReference.bind(ComponentReference.java:376)
 at
 org.eclipse.equinox.internal.ds.model.ServiceComponentProp.bindReference(ServiceComponentProp.java:430)
 at
 org.eclipse.equinox.internal.ds.model.ServiceComponentProp.bind(ServiceComponentProp.java:218)
 at
 org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:343)
 at
 org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620)
 at
 org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197)
 at