[ https://issues.apache.org/jira/browse/PHOENIX-3532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15752394#comment-15752394 ]

Nico Pappagianis edited comment on PHOENIX-3532 at 12/15/16 8:15 PM:
---------------------------------------------------------------------

I'm getting the same exception. I'm on a MacBook Pro running OS X El Capitan.

{code}
Formatting using clusterid: testClusterID
1    [ScalaTest-3] ERROR org.apache.hadoop.hdfs.MiniDFSCluster  - IOE creating namenodes. Permissions dump:
path '/Users/nico.pappagianis/git/nico-pappagianis/phoenix/phoenix-spark/target/test-data/cbc82c32-ae51-49c9-81a8-07f5ff4c8e47/dfscluster_d42892a5-e09d-42b7-a3b7-43a4499310ad/dfs/data':
        absolute:/Users/nico.pappagianis/git/nico-pappagianis/phoenix/phoenix-spark/target/test-data/cbc82c32-ae51-49c9-81a8-07f5ff4c8e47/dfscluster_d42892a5-e09d-42b7-a3b7-43a4499310ad/dfs/data
        permissions: ----
path '/Users/nico.pappagianis/git/nico-pappagianis/phoenix/phoenix-spark/target/test-data/cbc82c32-ae51-49c9-81a8-07f5ff4c8e47/dfscluster_d42892a5-e09d-42b7-a3b7-43a4499310ad/dfs':
        absolute:/Users/nico.pappagianis/git/nico-pappagianis/phoenix/phoenix-spark/target/test-data/cbc82c32-ae51-49c9-81a8-07f5ff4c8e47/dfscluster_d42892a5-e09d-42b7-a3b7-43a4499310ad/dfs
        permissions: drwx
path '/Users/nico.pappagianis/git/nico-pappagianis/phoenix/phoenix-spark/target/test-data/cbc82c32-ae51-49c9-81a8-07f5ff4c8e47/dfscluster_d42892a5-e09d-42b7-a3b7-43a4499310ad':
        absolute:/Users/nico.pappagianis/git/nico-pappagianis/phoenix/phoenix-spark/target/test-data/cbc82c32-ae51-49c9-81a8-07f5ff4c8e47/dfscluster_d42892a5-e09d-42b7-a3b7-43a4499310ad
        permissions: drwx
path '/Users/nico.pappagianis/git/nico-pappagianis/phoenix/phoenix-spark/target/test-data/cbc82c32-ae51-49c9-81a8-07f5ff4c8e47':
        absolute:/Users/nico.pappagianis/git/nico-pappagianis/phoenix/phoenix-spark/target/test-data/cbc82c32-ae51-49c9-81a8-07f5ff4c8e47
        permissions: drwx
path '/Users/nico.pappagianis/git/nico-pappagianis/phoenix/phoenix-spark/target/test-data':
        absolute:/Users/nico.pappagianis/git/nico-pappagianis/phoenix/phoenix-spark/target/test-data
        permissions: drwx
path '/Users/nico.pappagianis/git/nico-pappagianis/phoenix/phoenix-spark/target':
        absolute:/Users/nico.pappagianis/git/nico-pappagianis/phoenix/phoenix-spark/target
        permissions: drwx
path '/Users/nico.pappagianis/git/nico-pappagianis/phoenix/phoenix-spark':
        absolute:/Users/nico.pappagianis/git/nico-pappagianis/phoenix/phoenix-spark
        permissions: drwx
path '/Users/nico.pappagianis/git/nico-pappagianis/phoenix':
        absolute:/Users/nico.pappagianis/git/nico-pappagianis/phoenix
        permissions: drwx
path '/Users/nico.pappagianis/git/nico-pappagianis':
        absolute:/Users/nico.pappagianis/git/nico-pappagianis
        permissions: drwx
path '/Users/nico.pappagianis/git':
        absolute:/Users/nico.pappagianis/git
        permissions: drwx
path '/Users/nico.pappagianis':
        absolute:/Users/nico.pappagianis
        permissions: drwx
path '/Users':
        absolute:/Users
        permissions: dr-x
path '/':
        absolute:/
        permissions: dr-x

java.io.IOException: Cannot create directory /Users/nico.pappagianis/git/nico-pappagianis/phoenix/phoenix-spark/target/test-data/cbc82c32-ae51-49c9-81a8-07f5ff4c8e47/dfscluster_d42892a5-e09d-42b7-a3b7-43a4499310ad/dfs/name1/current
        at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:161)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:991)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:342)
        at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:176)
        at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:973)
        at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:811)
        at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:742)
        at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:585)
        at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:982)
        at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:863)
        at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:845)
        at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:832)
        at org.apache.phoenix.query.BaseTest.initMiniCluster(BaseTest.java:588)
        at org.apache.phoenix.query.BaseTest.setUpTestCluster(BaseTest.java:509)
        at org.apache.phoenix.query.BaseTest.checkClusterInitialized(BaseTest.java:483)
        at org.apache.phoenix.query.BaseTest.setUpTestDriver(BaseTest.java:561)
        at org.apache.phoenix.query.BaseTest.setUpTestDriver(BaseTest.java:557)
        at org.apache.phoenix.end2end.BaseHBaseManagedTimeIT.doSetup(BaseHBaseManagedTimeIT.java:57)
        at org.apache.phoenix.spark.PhoenixSparkITHelper$.doSetup(AbstractPhoenixSparkIT.scala:33)
        at org.apache.phoenix.spark.AbstractPhoenixSparkIT.beforeAll(AbstractPhoenixSparkIT.scala:88)
        at org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
        at org.apache.phoenix.spark.AbstractPhoenixSparkIT.beforeAll(AbstractPhoenixSparkIT.scala:44)
        at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
        at org.apache.phoenix.spark.AbstractPhoenixSparkIT.run(AbstractPhoenixSparkIT.scala:44)
        at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:55)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Exception encountered when invoking run on a nested suite - java.io.IOException: Cannot create directory /Users/nico.pappagianis/git/nico-pappagianis/phoenix/phoenix-spark/target/test-data/cbc82c32-ae51-49c9-81a8-07f5ff4c8e47/dfscluster_d42892a5-e09d-42b7-a3b7-43a4499310ad/dfs/name1/current *** ABORTED ***
  java.lang.RuntimeException: java.io.IOException: Cannot create directory /Users/nico.pappagianis/git/nico-pappagianis/phoenix/phoenix-spark/target/test-data/cbc82c32-ae51-49c9-81a8-07f5ff4c8e47/dfscluster_d42892a5-e09d-42b7-a3b7-43a4499310ad/dfs/name1/current
  at org.apache.phoenix.query.BaseTest.initMiniCluster(BaseTest.java:591)
  at org.apache.phoenix.query.BaseTest.setUpTestCluster(BaseTest.java:509)
  at org.apache.phoenix.query.BaseTest.checkClusterInitialized(BaseTest.java:483)
  at org.apache.phoenix.query.BaseTest.setUpTestDriver(BaseTest.java:561)
  at org.apache.phoenix.query.BaseTest.setUpTestDriver(BaseTest.java:557)
  at org.apache.phoenix.end2end.BaseHBaseManagedTimeIT.doSetup(BaseHBaseManagedTimeIT.java:57)
  at org.apache.phoenix.spark.PhoenixSparkITHelper$.doSetup(AbstractPhoenixSparkIT.scala:33)
  at org.apache.phoenix.spark.AbstractPhoenixSparkIT.beforeAll(AbstractPhoenixSparkIT.scala:88)
  at org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
  at org.apache.phoenix.spark.AbstractPhoenixSparkIT.beforeAll(AbstractPhoenixSparkIT.scala:44)
  ...
  Cause: java.io.IOException: Cannot create directory /Users/nico.pappagianis/git/nico-pappagianis/phoenix/phoenix-spark/target/test-data/cbc82c32-ae51-49c9-81a8-07f5ff4c8e47/dfscluster_d42892a5-e09d-42b7-a3b7-43a4499310ad/dfs/name1/current
  at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)
  at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
  at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
  at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:161)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:991)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:342)
  at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:176)
  at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:973)
  at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:811)
  at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:742)
  ...
{code}
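
One thing that stands out in the permissions dump above: every parent directory is accessible, but the {{dfs/data}} directory under {{target/test-data}} shows {{permissions: ----}}. That looks like a stale directory left behind by an earlier aborted run, which MiniDFSCluster then cannot clear or recreate. Just a guess on my part, but wiping the leftover test data before re-running (e.g. {{rm -rf phoenix-spark/target/test-data}}, or a {{mvn clean}} in the module) may get past this.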


> Enable DataFrames and RDDs to read from a tenant-specific table
> ---------------------------------------------------------------
>
>                 Key: PHOENIX-3532
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3532
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: Nico Pappagianis
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Currently the methods phoenixTableAsDataFrame in SparkSqlContextFunctions
> and phoenixTableAsRDD in SparkContextFunctions do not pass the tenantId
> parameter along to the PhoenixRDD constructor. The tenantId parameter was
> added as part of PHOENIX-3427 but was not properly wired through (by me).
> This JIRA will fix that and add tests around reading tenant-specific tables
> as both DataFrames and RDDs.
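
A minimal sketch of the intended fix, going only by the description above. The class shapes and signatures here are assumptions for illustration, not the actual phoenix-spark source; PhoenixRDD is assumed to already accept an optional tenantId, as added in PHOENIX-3427:

{code:scala}
// Hypothetical sketch: thread tenantId from the implicit-function entry
// points through to PhoenixRDD instead of dropping it. Exact parameter
// lists are assumptions based on the issue description.
package org.apache.phoenix.spark

import org.apache.hadoop.conf.Configuration
import org.apache.spark.SparkContext
import org.apache.spark.sql.{DataFrame, SQLContext}

class SparkContextFunctions(@transient val sc: SparkContext) extends Serializable {
  def phoenixTableAsRDD(table: String,
                        columns: Seq[String],
                        predicate: Option[String] = None,
                        zkUrl: Option[String] = None,
                        tenantId: Option[String] = None,
                        conf: Configuration = new Configuration()): PhoenixRDD =
    // The bug being fixed: tenantId was accepted here but never forwarded.
    new PhoenixRDD(sc, table, columns, predicate, zkUrl, conf, tenantId = tenantId)
}

class SparkSqlContextFunctions(@transient val sqlContext: SQLContext) extends Serializable {
  def phoenixTableAsDataFrame(table: String,
                              columns: Seq[String],
                              predicate: Option[String] = None,
                              zkUrl: Option[String] = None,
                              tenantId: Option[String] = None,
                              conf: Configuration = new Configuration()): DataFrame =
    // Same plumbing for the DataFrame entry point.
    new PhoenixRDD(sqlContext.sparkContext, table, columns, predicate, zkUrl,
                   conf, tenantId = tenantId).toDataFrame(sqlContext)
}
{code}

A caller would then read a tenant-specific view with something like {{sc.phoenixTableAsRDD("TENANT_TABLE", Seq("ID"), tenantId = Some("tenant1"))}}; presumably PhoenixRDD translates the option into Phoenix's {{TenantId}} connection property when it opens its connection, which is the plumbing PHOENIX-3427 introduced.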


