[ https://issues.apache.org/jira/browse/SOLR-9958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802128#comment-16802128 ]

Kevin Risden commented on SOLR-9958:
------------------------------------

SOLR-11335 is the same issue as this ticket. A non-HDFS FileSystem is being closed out from under the backup: Solr only special-cases HDFS filesystems and only forces the FileSystem cache to be disabled for them, so other implementations (such as the GCS filesystem in the trace below) are served from the shared cache and can be closed by another caller mid-backup.
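To illustrate the cache behaviour, here is a minimal sketch against plain hadoop-common (not Solr code); the file:// URI and the FsCacheSketch class name are just stand-ins for the gs:// backup location in the report below. FileSystem.get() hands every caller the same cached instance unless fs.<scheme>.impl.disable.cache is set, so one component's close() invalidates the instance for everyone else.

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class FsCacheSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // file:// stands in for the gs:// backup location from the report below
    URI uri = URI.create("file:///tmp");

    // Default behaviour: FileSystem.get() returns one shared, cached instance,
    // so a close() by whoever finishes first (here, the core backup) leaves the
    // instance unusable for the ZK-state/config download that follows.
    FileSystem shared1 = FileSystem.get(uri, conf);
    FileSystem shared2 = FileSystem.get(uri, conf);
    System.out.println("cached: same instance = " + (shared1 == shared2)); // true

    // Disabling the cache per scheme (fs.<scheme>.impl.disable.cache) gives each
    // caller its own instance; Solr only sets this for hdfs, not for other schemes.
    conf.setBoolean("fs.file.impl.disable.cache", true);
    FileSystem own1 = FileSystem.get(uri, conf);
    FileSystem own2 = FileSystem.get(uri, conf);
    System.out.println("uncached: same instance = " + (own1 == own2)); // false

    own1.close();
    own2.close();
  }
}
{code}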

> The FileSystem used by HdfsBackupRepository gets closed before the backup 
> completes.
> ------------------------------------------------------------------------------------
>
>                 Key: SOLR-9958
>                 URL: https://issues.apache.org/jira/browse/SOLR-9958
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public(Default Security Level. Issues are Public) 
>          Components: Hadoop Integration, hdfs
>    Affects Versions: 6.2.1
>            Reporter: Timothy Potter
>            Priority: Critical
>         Attachments: SOLR-9958.patch
>
>
> My shards get backed up correctly, but then it fails when backing up the 
> state from ZK. From the logs, it looks like the underlying FS gets closed 
> before the config stuff is written:
> {code}
> DEBUG - 2017-01-11 22:39:12.889; [   ] com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase; GHFS.close:=>
> INFO  - 2017-01-11 22:39:12.889; [   ] org.apache.solr.handler.SnapShooter; Done creating backup snapshot: shard1 at gs://master-sector-142100.appspot.com/backups2/tim5
> INFO  - 2017-01-11 22:39:12.889; [   ] org.apache.solr.servlet.HttpSolrCall; [admin] webapp=null path=/admin/cores params={core=gettingstarted_shard1_replica1&qt=/admin/cores&name=shard1&action=BACKUPCORE&location=gs://master-sector-142100.appspot.com/backups2/tim5&wt=javabin&version=2} status=0 QTime=24954
> INFO  - 2017-01-11 22:39:12.890; [   ] org.apache.solr.cloud.BackupCmd; Starting to backup ZK data for backupName=tim5
> INFO  - 2017-01-11 22:39:12.890; [   ] org.apache.solr.common.cloud.ZkStateReader; Load collection config from: [/collections/gettingstarted]
> INFO  - 2017-01-11 22:39:12.891; [   ] org.apache.solr.common.cloud.ZkStateReader; path=[/collections/gettingstarted] [configName]=[gettingstarted] specified config exists in ZooKeeper
> ERROR - 2017-01-11 22:39:12.892; [   ] org.apache.solr.common.SolrException; Collection: gettingstarted operation: backup failed:java.io.IOException: GoogleHadoopFileSystem has been closed or not initialized.
>     at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.checkOpen(GoogleHadoopFileSystemBase.java:1927)
>     at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.mkdirs(GoogleHadoopFileSystemBase.java:1367)
>     at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1877)
>     at org.apache.solr.core.backup.repository.HdfsBackupRepository.createDirectory(HdfsBackupRepository.java:153)
>     at org.apache.solr.core.backup.BackupManager.downloadConfigDir(BackupManager.java:186)
>     at org.apache.solr.cloud.BackupCmd.call(BackupCmd.java:111)
>     at org.apache.solr.cloud.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:222)
>     at org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:463)
>     at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> {code}


