Josh, I've sent an email directly to you with the zip -- I'm not sure what the mailing list behavior is regarding attachments.
For the benefit of the mailing list, the files (and their contents) are as follows:

distcp.txt
------------
hdfs://cluster/user/accumulo/.slider/cluster/slideraccumulo/database/data/tables/2/default_tablet/F000009g.rf
hdfs://cluster/user/accumulo/.slider/cluster/slideraccumulo/database/data/tables/2/default_tablet/F000009n.rf
hdfs://cluster/tmp/table1_export/exportMetadata.zip

exportMetadata.zip/accumulo_export_info.txt
-------------------------------------------------------------
exportVersion:1
srcInstanceName:instancename
srcInstanceID:b458b1bb-f613-4c3c-a399-d3f275a634da
srcZookeepers:CensoredZK1,CensoredZK2,CensoredZK3
srcTableName:table1_exp
srcTableID:3
srcDataVersion:6
srcCodeVersion:1.6.0

exportMetadata.zip/table_config.txt
-----------------------------------------------
table.constraint.1=org.apache.accumulo.core.constraints.DefaultKeySizeConstraint
table.iterator.majc.vers=20,org.apache.accumulo.core.iterators.user.VersioningIterator
table.iterator.majc.vers.opt.maxVersions=1
table.iterator.minc.vers=20,org.apache.accumulo.core.iterators.user.VersioningIterator
table.iterator.minc.vers.opt.maxVersions=1
table.iterator.scan.vers=20,org.apache.accumulo.core.iterators.user.VersioningIterator
table.iterator.scan.vers.opt.maxVersions=1
table.split.threshold=100M

exportMetadata.zip/metadata.bin
-----------------------------------------------
<binary file containing more metadata>

Thanks,
Tim

On Wed, Sep 17, 2014 at 5:45 PM, Josh Elser <[email protected]> wrote:
> Hi Tim,
>
> Any possibility that you can provide the exportMetadata.zip and the
> distcp.txt?
>
> Fair warning - the data from that table won't be included, but some split
> points might be included in metadata.bin (inside exportMetadata.zip) which
> *might* contain something sensitive. Make sure you double check that.
>
> I'll see if I can reproduce what you saw. It definitely seems strange.
>
> - Josh
>
> On 9/17/14, 5:10 PM, Tim Israel wrote:
>
>> Upon further investigation, it looks like I can't even follow the steps
>> outlined in the import/export documentation for 1.6.0
>> (http://accumulo.apache.org/1.6/examples/export.html). I get the same
>> error outlined in my first post:
>>
>> [shell.Shell] ERROR: java.lang.RuntimeException:
>> org.apache.accumulo.core.client.impl.AccumuloServerException: Error on
>> server <SERVER_NAME>:58444 <-- port chosen by slider
>>
>> Accumulo Recent Logs
>> ----
>> Failed to open file
>> hdfs://cluster/accumulo/tables/1/b-000005c/I000005d.rf File does not
>> exist: /accumulo/tables/1/b-000005c/I000005d.rf
>>
>> I tried the export/import procedure on my Accumulo 1.5.1 cluster and got
>> the expected result (i.e. the table was imported and could be scanned
>> without error).
>>
>> Tim
>>
>> On Wed, Sep 17, 2014 at 3:57 PM, Tim Israel <[email protected]
>> <mailto:[email protected]>> wrote:
>>
>> Hi all,
>>
>> I posted something similar on the Slider mailing list and was
>> directed here. After debugging further, it doesn't seem like this
>> is a Slider issue.
>>
>> I have some tables that were exported from another cluster running
>> Accumulo 1.5.1 on Hoya, and I'm trying to import them into Accumulo
>> 1.6.0 on Slider 0.50.2. The target cluster is Kerberized, but
>> Accumulo is running in simple authentication mode.
>>
>> The exported table was distcp'd to a cluster configured with Slider.
>>
>> The table was imported via the accumulo shell successfully.
>> The files get moved to
>> /user/accumulo/.slider/cluster/slideraccumulo/database/data/tables/1.
>>
>> However, if I scan the imported table, Accumulo complains with the
>> following exception:
>>
>> Failed to open file
>> hdfs://cluster/accumulo/tables/1/b-000005c/I000005d.rf File does not
>> exist: /accumulo/tables/1/b-000005c/I000005d.rf
>>
>> I can scan the table if I move the files from
>> /user/accumulo/.slider/cluster/slideraccumulo/database/data/tables/1
>> to /accumulo/tables/1.
>>
>> I pulled accumulo-site from the Slider publisher and saw that
>> instance.volumes is set as follows:
>> hdfs://cluster/user/accumulo/.slider/cluster/slideraccumulo/database/data
>>
>> Any suggestions would be greatly appreciated.
>>
>> Thanks,
>>
>> Tim
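For list readers who want to retrace the steps: the procedure from the 1.6 export example doc linked in the thread boils down to the shell commands below. This is a rough sketch, not Tim's exact commands; the table name `table1`, the export directory `/tmp/table1_export`, and the cluster URIs are placeholders taken from the example doc, and the commands require live source and destination clusters.

```shell
# In the accumulo shell on the SOURCE cluster:
# the table must be offline before export so its file set is stable.
#   offline table1
#   exporttable -t table1 /tmp/table1_export

# From the command line, copy the exported files to the destination
# cluster using the distcp.txt file list that exporttable wrote:
hadoop distcp -f hdfs://srccluster/tmp/table1_export/distcp.txt \
    hdfs://destcluster/tmp/table1_export_dest

# In the accumulo shell on the DESTINATION cluster:
#   importtable table1_copy /tmp/table1_export_dest
```

In the 1.6.0/Slider case described above, this same sequence completes without error, but the subsequent scan looks for the imported files under /accumulo/tables rather than under the instance.volumes location where importtable placed them.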
