Re: [jira] [Commented] (SOLR-5441) Expose transaction log files number and their size via JMX

2013-11-14 Thread Rafał Kuć
Hello!

Submitted a patch for this. Sorry :(

-- 
Regards,
Rafał Kuć

 [ https://issues.apache.org/jira/browse/SOLR-5441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13823079#comment-13823079 ]

 Tomás Fernández Löbbe commented on SOLR-5441:
 ----------------------------------------------

 I think Jenkins failures are related to this commit:
 http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/8218/

 Expose transaction log files number and their size via JMX
 ----------------------------------------------------------

 Key: SOLR-5441
 URL: https://issues.apache.org/jira/browse/SOLR-5441
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.5
Reporter: Rafał Kuć
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 4.6, 5.0

 Attachments: SOLR-5441-synchronized.patch, SOLR-5441.patch


 It may be useful to have the number of transaction log files and their 
 overall size exposed via JMX for UpdateHandler.
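
 A minimal sketch of the idea (assuming the numbers ride along on the
 update handler's existing SolrInfoMBean statistics, which Solr already
 publishes over JMX; the two UpdateLog getter names are illustrative, not
 necessarily what the attached patch uses):

   // Inside the update handler implementation:
   @Override
   public NamedList getStatistics() {
     NamedList lst = new SimpleOrderedMap();
     UpdateLog ulog = getUpdateLog();
     if (ulog != null) {  // tlogs exist only when the update log is enabled
       lst.add("transaction_logs_total_size", ulog.getTotalLogsSize());
       lst.add("transaction_logs_total_number", ulog.getTotalLogsNumber());
     }
     return lst;
   }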



 --
 This message was sent by Atlassian JIRA
 (v6.1#6144)


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: UUIDField uniqueKey with default=NEW

2012-08-31 Thread Rafał Kuć
Hello!

I think Solr won't throw the exception if you remove the
required="true" attribute. At least that's a workaround for now.
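
For reference, the schema from your mail with just that one change (a
sketch of the suggested workaround, nothing else altered):

  <fieldType name="uuid" class="solr.UUIDField" indexed="true" />

  <!-- same field, minus required="true" -->
  <field name="uniqueKey" type="uuid" indexed="true" stored="true"
         default="NEW" />

  <uniqueKey>uniqueKey</uniqueKey>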

-- 
Regards,
Rafał Kuć
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - ElasticSearch

 Hi all,

 I was following
 http://wiki.apache.org/solr/UniqueKey#UUID_techniques to set up uuid
 as my uniqueKey. (recent solr-trunk)

 <fieldType name="uuid" class="solr.UUIDField" indexed="true" />

 <field name="uniqueKey" type="uuid" indexed="true" stored="true"
        default="NEW" required="true" />

 <uniqueKey>uniqueKey</uniqueKey>

 I get the following exception.

 SEVERE: null:org.apache.solr.common.SolrException: uniqueKey field
 (null) can not be configured with a default value (NEW)
         at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:496)
         at org.apache.solr.schema.IndexSchema.<init>(IndexSchema.java:113)
         at org.apache.solr.core.CoreContainer.create(CoreContainer.java:851)
         at org.apache.solr.core.CoreContainer.load(CoreContainer.java:539)


 I made this work by adding some if checks to IndexSchema.java and
 UpdateCommand.java:

 getType().getClass().getName().equals(UUIDField.class.getName())

 But I am not sure if this is the preferred way. How can I use uuid as
 my uniqueKey without source code modification?

 Thanks,








-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: svn commit: r805774 - in /lucene/solr/trunk: ./ src/common/org/apache/solr/common/params/ src/java/org/apache/solr/handler/ src/java/org/apache/solr/update/ src/solrj/org/apache/solr/client/solr

2009-08-19 Thread Rafał Kuć
/solr/update/DirectUpdateHandlerTest.java
 URL:
 http://svn.apache.org/viewvc/lucene/solr/trunk/src/test/org/apache/solr/update/DirectUpdateHandlerTest.java?rev=805774&r1=805773&r2=805774&view=diff

 ==============================================================================
 --- lucene/solr/trunk/src/test/org/apache/solr/update/DirectUpdateHandlerTest.java (original)
 +++ lucene/solr/trunk/src/test/org/apache/solr/update/DirectUpdateHandlerTest.java Wed Aug 19 12:21:22 2009
 @@ -17,20 +17,32 @@
  package org.apache.solr.update;
 
 +import java.io.IOException;
 +import java.util.ArrayList;
  import java.util.HashMap;
 +import java.util.HashSet;
 +import java.util.List;
  import java.util.Map;
 +import java.util.Set;
 
  import org.apache.lucene.document.Document;
  import org.apache.lucene.document.Field;
  import org.apache.lucene.document.Field.Index;
  import org.apache.lucene.document.Field.Store;
 +import org.apache.lucene.index.IndexReader;
 +import org.apache.lucene.index.SegmentReader;
 +import org.apache.lucene.index.Term;
 +import org.apache.lucene.index.TermEnum;
  import org.apache.solr.common.SolrException;
  import org.apache.solr.common.params.CommonParams;
  import org.apache.solr.common.params.MapSolrParams;
  import org.apache.solr.core.SolrCore;
  import org.apache.solr.request.LocalSolrQueryRequest;
  import org.apache.solr.request.SolrQueryRequest;
 +import org.apache.solr.search.SolrIndexReader;
 +import org.apache.solr.search.SolrIndexSearcher;
  import org.apache.solr.util.AbstractSolrTestCase;
 +import org.apache.solr.util.RefCounted;
 
  /**
 @@ -247,6 +259,90 @@
            );
   }
 
 +  public void testExpungeDeletes() throws Exception {
 +    for (int x = 0; x < 3000; x++) {
 +      addSimpleDoc(x + "");
 +    }
 +    SolrCore core = h.getCore();
 +    UpdateHandler updater = core.getUpdateHandler();
 +    CommitUpdateCommand cmtCmd = new CommitUpdateCommand(false);
 +    cmtCmd.waitSearcher = true;
 +    updater.commit(cmtCmd);
 +
 +    List<String> todelete = new ArrayList<String>();
 +
 +    Set<String> segsdel = new HashSet<String>();
 +
 +    SegmentReader[] sirs = getSegmentReaders(core);
 +    assertTrue(sirs.length > 6);
 +    todelete.add(getNthIDTerm(2, sirs[0]));
 +    segsdel.add(sirs[0].getSegmentName());
 +
 +    todelete.add(getNthIDTerm(7, sirs[2]));
 +    segsdel.add(sirs[2].getSegmentName());
 +
 +    todelete.add(getNthIDTerm(4, sirs[5]));
 +    segsdel.add(sirs[5].getSegmentName());
 +
 +    for (String id : todelete) {
 +      deleteSimpleDoc(id);
 +    }
 +    // commit the deletes
 +    cmtCmd = new CommitUpdateCommand(false);
 +    cmtCmd.waitSearcher = true;
 +    updater.commit(cmtCmd);
 +
 +    // expunge deletes
 +    cmtCmd = new CommitUpdateCommand(false);
 +    cmtCmd.waitSearcher = true;
 +    cmtCmd.expungeDeletes = true;
 +    updater.commit(cmtCmd);
 +
 +    // we'll have fewer segments
 +    SegmentReader[] sirs2 = getSegmentReaders(core);
 +    assertTrue(sirs.length > sirs2.length);
 +    // check the actual segment names
 +    for (SegmentReader sr : sirs2) {
 +      assertTrue(!segsdel.contains(sr.getSegmentName()));
 +    }
 +  }
 +
 +  SegmentReader[] getSegmentReaders(SolrCore core) throws IOException {
 +    RefCounted<SolrIndexSearcher> ref = core.getSearcher(true, true, null);
 +    SolrIndexSearcher is = ref.get();
 +    SegmentReader[] segmentReaders = null;
 +    try {
 +      SolrIndexReader reader = is.getReader();
 +      IndexReader[] subreaders = reader.getSequentialSubReaders();
 +      segmentReaders = new SegmentReader[subreaders.length];
 +      for (int x = 0; x < subreaders.length; x++) {
 +        assert subreaders[x] instanceof SolrIndexReader;
 +        SolrIndexReader sir = (SolrIndexReader) subreaders[x];
 +        SegmentReader sr = (SegmentReader) sir.getWrappedReader();
 +        segmentReaders[x] = sr;
 +      }
 +    } finally {
 +      ref.decref();
 +    }
 +    return segmentReaders;
 +  }
 +
 +  private String getNthIDTerm(int n, IndexReader r) throws IOException {
 +    TermEnum te = r.terms(new Term("id", ""));
 +    try {
 +      int x = 0;
 +      do {
 +        if (x >= n) {
 +          return te.term().text();
 +        }
 +        x++;
 +      } while (te.next());
 +    } finally {
 +      te.close();
 +    }
 +    return null;
 +  }
 +
    private void addSimpleDoc(String id) throws Exception {
      SolrCore core = h.getCore();

-- 
Regards,
 Rafał Kuć



Re: Severe QTime issues to do with updates

2009-08-10 Thread Rafał Kuć
Hello!

   First of all, I would suggest the Solr user mailing list for a question
like this. Anyway, we experienced a similar problem with an older
version of Solr 1.4: in random situations, after an index update on the
slaves, we were getting response times of 1 to 20 seconds. After trying
everything from I/O analysis to GC logging, we updated Solr to the
newest available version, and that solved the response time issues.

   Hope that helps.

-- 
Regards,
 Rafał Kuć


 Hey,
 I've recently noticed that there is a very large spike in the QTime for
 nodes serving queries, immediately after snappulling and snapinstalling.
 The numbers I'm seeing point to some kind of
 lock-contention/concurrency issue, as I've monitored iostat/sar and it's not
 a disk I/O
 issue (all the index is still mostly in the OS caches, as is also
 noticeable in snappuller.log, which runs very fast: the incremental
 update to the
 lucene index is minimal).
 I'm using a strong machine (16GB, 2x quad-core, CentOS 5) for this, and from all
 monitoring the CPU/disk I/O seems minimal throughout the day.
 The update runs every 10 minutes and takes about 200 seconds to complete (with
 my current autoWarm settings), and the index size
 is about 20 million documents (~6GB). Queries/warmup involve mostly a
 pre-fixed set of filters and facets.
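
 For reference, the autoWarm knobs I mean are the cache settings in
 solrconfig.xml; a sketch with illustrative values, not my actual numbers:

   <!-- autowarmCount entries get regenerated against the new searcher
        after each snapinstall, which drives the post-commit warmup cost -->
   <filterCache class="solr.FastLRUCache"
                size="16384"
                initialSize="4096"
                autowarmCount="1024"/>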

 I'm using a nightly build of Solr from a few months back (nightly exported -
 yonik - 2009-01-11 08:05:52), which should already have
 the benefits of the read-only IndexReader, NIOFS, UnInvertedIndex and
 ConcurrentLRUCache (for filterCache).
 I've read a related thread a guy called oleg posted (
 http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200901.mbox/%3c339777.57289...@web50307.mail.re2.yahoo.com%3e)
 but the thread didn't reach any definite conclusion.

 Please advise!

 Best regards,
 -Chak