Upon further investigation, it looks like I can't even follow the steps outlined in the 1.6.0 import/export documentation: http://accumulo.apache.org/1.6/examples/export.html
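For reference, the sequence from that example is roughly the following (table and path names are the placeholders from the docs, not from my cluster):

# In the Accumulo shell on the source instance; the table must be
# offline before it can be exported:
root@source> offline table1
root@source> exporttable -t table1 /tmp/table1_export

# exporttable writes /tmp/table1_export/distcp.txt, a list of the table's
# RFiles plus the export metadata; feed it to distcp to copy everything
# to the destination cluster:
$ hadoop distcp -f /tmp/table1_export/distcp.txt /tmp/table1_export_dest

# In the Accumulo shell on the destination instance:
root@dest> importtable table1_copy /tmp/table1_export_dest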
I get the same error outlined in my first post:

[shell.Shell] ERROR: java.lang.RuntimeException: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server <SERVER_NAME>:58444 <-- port chosen by Slider

Accumulo Recent Logs
----
Failed to open file hdfs://cluster/accumulo/tables/1/b-000005c/I000005d.rf
File does not exist: /accumulo/tables/1/b-000005c/I000005d.rf

I tried the same export/import procedure on my Accumulo 1.5.1 cluster and got the expected result (i.e. the table is imported and can be scanned without error). A quick HDFS check that makes the path mismatch concrete is below the quoted mail.

Tim

On Wed, Sep 17, 2014 at 3:57 PM, Tim Israel <[email protected]> wrote:
> Hi all,
>
> I posted something similar on the Slider mailing list and was directed
> here. After debugging further, it doesn't seem like this is a Slider
> issue.
>
> I have some tables that were exported from another cluster running
> Accumulo 1.5.1 on Hoya, and I'm trying to import them into Accumulo
> 1.6.0 on Slider 0.50.2. The target cluster is Kerberized, but Accumulo
> is running in simple authentication mode.
>
> The exported table was distcp'd to a cluster configured with Slider.
>
> The table was imported successfully via the accumulo shell. The files
> get moved to
> /user/accumulo/.slider/cluster/slideraccumulo/database/data/tables/1
>
> However, when I scan the imported table, Accumulo complains with the
> following exception:
> Failed to open file hdfs://cluster/accumulo/tables/1/b-000005c/I000005d.rf
> File does not exist: /accumulo/tables/1/b-000005c/I000005d.rf
>
> I can scan the table if I move the files from
> /user/accumulo/.slider/cluster/slideraccumulo/database/data/tables/1 to
> /accumulo/tables/1
>
> I pulled accumulo-site from the Slider publisher and saw that
> instance.volumes is set as follows:
> hdfs://cluster/user/accumulo/.slider/cluster/slideraccumulo/database/data
>
> Any suggestions would be greatly appreciated.
>
> Thanks,
>
> Tim
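The mismatch, as it shows up in HDFS (a sketch using the paths from the mails above; the b-000005c directory name comes from the exception):

# Where importtable actually placed the RFiles (under the Slider-managed
# volume named in instance.volumes):
$ hadoop fs -ls /user/accumulo/.slider/cluster/slideraccumulo/database/data/tables/1

# Where the tserver tries to open them, per the exception; this path
# doesn't exist on the target cluster:
$ hadoop fs -ls /accumulo/tables/1/b-000005c

# The workaround that makes scans succeed (assumes /accumulo/tables
# already exists):
$ hadoop fs -mv /user/accumulo/.slider/cluster/slideraccumulo/database/data/tables/1 /accumulo/tables/1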
