[ 
https://issues.apache.org/jira/browse/SOLR-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17385807#comment-17385807
 ] 

Istvan Farkas commented on SOLR-14660:
--------------------------------------

Thanks [~dsmiley].
Updates:
- I removed the deprecated flags from the HDFS classes
- Ran some further tests and fixed 3 that still had missing files. Now all 
tests (even the slow ones) pass except one, and I verified that the number of 
tests run matches between the main branch and this one.
- The one that fails is org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest.test, 
but it also fails on the main branch with the same exception. 

{code}
Caused by: org.apache.lucene.index.CorruptIndexException: Problem reading index from NRTCachingDirectory(BlockDirectory(HdfsDirectory@hdfs://localhost:42723/solr_hdfs_home/recoverytest/core_node4/data/index lockFactory=org.apache.solr.store.hdfs.HdfsLockFactory@108e64d1); maxCacheMB=192.0 maxMergeSizeMB=16.0) (resource=NRTCachingDirectory(BlockDirectory(HdfsDirectory@hdfs://localhost:42723/solr_hdfs_home/recoverytest/core_node4/data/index lockFactory=org.apache.solr.store.hdfs.HdfsLockFactory@108e64d1); maxCacheMB=192.0 maxMergeSizeMB=16.0))
    at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:160)
    at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:89)
    at org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:179)
    at org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:786)
    at org.apache.lucene.index.IndexWriter.lambda$mergeMiddle$19(IndexWriter.java:4906)
    at org.apache.lucene.index.MergePolicy$OneMerge.initMergeReaders(MergePolicy.java:418)
    at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4902)
    at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4503)
    at org.apache.solr.update.SolrIndexWriter.merge(SolrIndexWriter.java:201)
    at org.apache.lucene.index.IndexWriter$IndexWriterMergeSource.merge(IndexWriter.java:6255)
    at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:636)
    at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:697)
Caused by: java.io.EOFException: position is negative
    at org.apache.hadoop.fs.FSInputStream.validatePositionedReadArgs(FSInputStream.java:103)
    at org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:118)
    at org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:111)
    at org.apache.solr.store.hdfs.HdfsDirectory$HdfsIndexInput.readInternal(HdfsDirectory.java:251)
    at org.apache.solr.store.blockcache.CustomBufferedIndexInput.refill(CustomBufferedIndexInput.java:191)
    at org.apache.solr.store.blockcache.CustomBufferedIndexInput.readByte(CustomBufferedIndexInput.java:45)
    at org.apache.lucene.store.DataInput.readVInt(DataInput.java:131)
    at org.apache.solr.store.blockcache.CustomBufferedIndexInput.readVInt(CustomBufferedIndexInput.java:160)
    at org.apache.lucene.codecs.blockterms.FixedGapTermsIndexReader.<init>(FixedGapTermsIndexReader.java:97)
    at org.apache.lucene.codecs.blockterms.LuceneFixedGap.fieldsProducer(LuceneFixedGap.java:93)
    at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:327)
    at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:389)
    at org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:117)
    ... 11 more
  2> NOTE: test params are: codec=Asserting(Lucene90): {rnd_b=PostingsFormat(name=LuceneFixedGap), a_t=PostingsFormat(name=Direct), _root_=Lucene90, a_i=PostingsFormat(name=LuceneFixedGap), id=PostingsFormat(name=LuceneFixedGap)}, docValues:{}, maxPointsInLeafNode=1458, maxMBSortInHeap=5.749791382491378, sim=Asserting(RandomSimilarity(queryNorm=false): {}), locale=ka, timezone=America/Adak
  2> NOTE: Linux 4.18.5-1.el7.elrepo.x86_64 amd64/AdoptOpenJDK 11.0.11 (64-bit)/cpus=2,threads=11,free=110098432,total=279969792
  2> NOTE: All tests run in this JVM: [HdfsRecoveryZkTest]
  2> NOTE: reproduce with: gradlew test --tests HdfsRecoveryZkTest -Dtests.seed=CB2CD1549B3A6AE1 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ka -Dtests.timezone=America/Adak -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
:solr:core:test (FAILURE): 2 test(s), 2 failure(s)
{code}
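For context on the root cause: the inner exception comes from Hadoop rejecting a negative read position before the HDFS read is even attempted. Below is a minimal sketch of the kind of guard that FSInputStream.validatePositionedReadArgs appears to perform, reconstructed from the "position is negative" message in the trace (the class name and method body here are my stand-ins, not the actual Hadoop source):

{code}
import java.io.EOFException;

public class PositionedReadCheck {

    // Hypothetical stand-in for the validation seen at FSInputStream.java:103
    // in the trace: a negative position is rejected with EOFException before
    // any bytes are read from HDFS.
    static void validatePositionedReadArgs(long position, byte[] buffer,
                                           int offset, int length) throws EOFException {
        if (position < 0) {
            throw new EOFException("position is negative");
        }
        // A real implementation would also bounds-check offset/length against the buffer.
        if (offset < 0 || length < 0 || length > buffer.length - offset) {
            throw new IndexOutOfBoundsException();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] buf = new byte[16];
        try {
            // A negative position, as apparently computed somewhere above
            // HdfsDirectory$HdfsIndexInput.readInternal in the failing test.
            validatePositionedReadArgs(-1L, buf, 0, buf.length);
        } catch (EOFException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
{code}

So the interesting question for the test failure is not the check itself but which caller computes a negative position during the merge.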

Since this failure also occurs on the main branch, is it okay if I defer this 
fix for now? Should I mark the test as a BadApple?

> Migrating HDFS into a package
> -----------------------------
>
>                 Key: SOLR-14660
>                 URL: https://issues.apache.org/jira/browse/SOLR-14660
>             Project: Solr
>          Issue Type: Improvement
>            Reporter: Ishan Chattopadhyaya
>            Priority: Major
>              Labels: package, packagemanager
>
> Following up on the deprecation of HDFS (SOLR-14021), we need to work on 
> isolating it away from Solr core and making a package for this. This issue is 
> to track the efforts for that.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
