per field mm

2018-12-14 Thread Koji Sekiguchi
Hi, I have a use case where one of our customers wants to set a different mm parameter per field: some fields in qf produce unexpectedly many terms because they are N-gram fields, while other fields produce only a few terms because they are normal text fields. If it is reasonable,
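A possible workaround (a sketch, not something proposed in the thread): since mm applies to a whole DisMax query rather than to individual fields, per-field behaviour can be approximated by combining several edismax subqueries as nested queries, each with its own qf and mm. The field names and mm values below are made-up placeholders:

    q=_query_:"{!edismax qf=title_ngram mm=90% v=$uq}" OR _query_:"{!edismax qf=body_text mm=2 v=$uq}"
    &uq=user query text here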

Re: Error at import data from SQL Server 2017 (Solr 7.5.0)

2018-12-14 Thread Alexis Aravena Silva
Ok. My dih.xml is this: This is the response when I execute the data import in debug mode (it looks ok): { "responseHeader": { "status": 0, "QTime": 86 }, "initArgs": [ "defaults", [ "config", "dihconfig.xml" ] ], "command": "full-import",
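The dih.xml itself was stripped from the archived message. For context, a minimal DIH config for SQL Server is sketched below; the JDBC URL, credentials, table and field names are placeholders, not the poster's actual configuration:

    <dataConfig>
      <dataSource type="JdbcDataSource"
                  driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
                  url="jdbc:sqlserver://localhost:1433;databaseName=MyDb"
                  user="solr" password="secret"/>
      <document>
        <entity name="item" query="SELECT Id, Name FROM Items">
          <field column="Id" name="id"/>
          <field column="Name" name="name"/>
        </entity>
      </document>
    </dataConfig>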

Re: Error at import data from SQL Server 2017 (Solr 7.5.0)

2018-12-14 Thread Erick Erickson
Images and the like are aggressively stripped by the e-mail server, so there's no error information in your post. Exactly _how_ are you importing? Data Import Handler? If so, please show your config as well. Best, Erick On Fri, Dec 14, 2018 at 4:19 PM Alexis Aravena Silva wrote: > > Hello, > >

Error at import data from SQL Server 2017 (Solr 7.5.0)

2018-12-14 Thread Alexis Aravena Silva
Hello, I'm using Solr 7.5 and I get the following error when importing data from SQL Server: [cid:45b7e3fd-bb2c-4308-8f4d-16f1cd6a38de] The message doesn't say anything else, so I don't know what is wrong; could you help me with this, please? Note: I assigned write access for

Re: Increasing Fault Tolerance of SOLR Cloud and Zookeeper

2018-12-14 Thread Erick Erickson
The only substantive change to the _code_ was changing these lines: permission javax.security.auth.kerberos.ServicePermission "zookeeper/127.0.0.1@EXAMPLE.COM", "initiate"; permission javax.security.auth.kerberos.ServicePermission "zookeeper/127.0.0.1@EXAMPLE.COM", "accept"; to permission

Re: Reindex single shard on solr

2018-12-14 Thread Erick Erickson
Why do you need to create a collection? That's probably just there in the test code to have something to test against. WARNING: I haven't verified this, but it should be something like the following. What you need is the hash range for the shard (slice) you're trying to update, then send each doc

Re: Reindex single shard on solr

2018-12-14 Thread Mahmoud Almokadem
Thanks Erick, I got it from TestHashPartitioner.java https://github.com/apache/lucene-solr/blob/1d85cd783863f75cea133fb9c452302214165a4d/solr/core/src/test/org/apache/solr/cloud/TestHashPartitioner.java Here is some sample code: router = DocRouter.getDocRouter(CompositeIdRouter.NAME); int
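The code in the archived message is cut off. Below is a self-contained sketch of the idea using the same solrj classes that TestHashPartitioner exercises; the shard count of 12 comes from later in the thread, while the document id and the assumption that ranges map to shard1..shard12 in order are illustrative:

    import java.util.List;
    import org.apache.solr.common.cloud.CompositeIdRouter;
    import org.apache.solr.common.cloud.DocRouter;

    public class ShardCheck {
      public static void main(String[] args) {
        DocRouter router = DocRouter.getDocRouter(CompositeIdRouter.NAME);
        // Split the full 32-bit hash space the way a 12-shard collection with default ranges does.
        List<DocRouter.Range> ranges = router.partitionRange(12, router.fullRange());
        // Hash a document id and find which range (i.e. which shard) it falls into.
        int hash = ((CompositeIdRouter) router).sliceHash("doc-12345", null, null, null);
        for (int i = 0; i < ranges.size(); i++) {
          if (ranges.get(i).includes(hash)) {
            System.out.println("doc-12345 hashes into the range of shard" + (i + 1));
          }
        }
      }
    }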

Re: Soft commit and new replica types

2018-12-14 Thread Tomás Fernández Löbbe
Yes, that would be great. Thanks On Fri, Dec 14, 2018 at 5:38 PM Edward Ribeiro wrote: > Indeed! It clarified a lot, thank you. :) Now I know I messed with the > reload core config, but the other aspects were more or less what I have > been expecting. > > Do you think it's worth to submit a PR

[ANNOUNCE] Apache Solr 7.6.0 released

2018-12-14 Thread Nicholas Knize
14 December 2018, Apache Solr™ 7.6.0 available The Lucene PMC is pleased to announce the release of Apache Solr 7.6.0 Solr is the popular, blazing fast, open source NoSQL search platform from the Apache Lucene project. Its major features include powerful full-text search, hit highlighting,

Re: Increasing Fault Tolerance of SOLR Cloud and Zookeeper

2018-12-14 Thread Stephen Lewis Bianamara
Thanks Erick, you've been very helpful. One other question I have: is it reasonable to upgrade zookeeper in place under an existing SOLR install? I see that 12727 appears to be verified with SOLR 7 modulo some test issues. For SOLR 6.6, would upgrading zookeeper to this version be advisable, or would you say that it

Re: Soft commit and new replica types

2018-12-14 Thread Edward Ribeiro
Indeed! It clarified a lot, thank you. :) Now I know I messed with the reload core config, but the other aspects were more or less what I had been expecting. Do you think it's worth submitting a PR to the Reference Guide with those explanations? I can take a stab at it. Regards, Edward On Fri,

Re: Reindex single shard on solr

2018-12-14 Thread Mahmoud Almokadem
Thanks Erick, Do you know how to use this method, or do I need to dive into the code? I have document_id as a string uniqueKey and 12 shards. On Fri, Dec 14, 2018 at 5:58 PM Erick Erickson wrote: > Sure. Of course you have to make sure you use the exact same hashing > algorithm on the . > >

Re: no segments* file found

2018-12-14 Thread Mahmoud Almokadem
Thanks Erick, I already tried diving into Lucene but couldn't continue because I need to get my collection up ASAP. So, I started my reindexing process and I'll investigate this issue while indexing. Mahmoud On Fri, Dec 14, 2018 at 6:08 PM Erick Erickson wrote: > You'd have to dive into the Lucene code and

Re: no segments* file found

2018-12-14 Thread Erick Erickson
You'd have to dive into the Lucene code and figure out the format; offhand I don't know what it is. However, there's no guarantee here that it'll result in a consistent index. Consider merging two segments, seg1 and seg2. Here's the merge sequence: 1> merge the segments. At the end of this you

Re: Solr recovery issue in 7.5

2018-12-14 Thread Erick Erickson
Well, if your indexing is very light, then the commit interval isn't a big deal; a false start on my part. The key here is how long a replay of the tlog would take if it had to be replayed. Probably not an issue, though, if you have minimal update rates On Fri, Dec 14, 2018 at 12:48 AM shamik

Re: Reindex single shard on solr

2018-12-14 Thread Erick Erickson
Sure. Of course you have to make sure you use the exact same hashing algorithm on the . See CompositeIdRouter.sliceHash Best, Erick On Fri, Dec 14, 2018 at 3:36 AM Mahmoud Almokadem wrote: > > Hello, > > I've a corruption on some of the shards on my collection and I've a full > dataset on my

Re: which ZooKeeper version for Solr 6.6.5

2018-12-14 Thread Andy C
Bernd, I recently asked a similar question about Solr 7.3 and Zookeeper 3.4.11. This is the response I found most helpful: https://www.mail-archive.com/solr-user@lucene.apache.org/msg138910.html - Andy - On Fri, Dec 14, 2018 at 7:41 AM Bernd Fehling < bernd.fehl...@uni-bielefeld.de> wrote:

Re: Open file limit warning when starting solr

2018-12-14 Thread Daniel Carrasco
Hello, How did you install Solr? Have you followed these instructions? https://lucene.apache.org/solr/guide/7_0/taking-solr-to-production.html#taking-solr-to-production In those instructions you first extract a script file from inside the tar.gz, then run that script file with a few
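For reference, the steps on that page boil down to roughly the following (7.5.0 is just an example version, and the limits.conf values are the commonly recommended fix for the open file warning rather than something the installer does itself):

    # Extract only the installer script from the distribution archive
    tar xzf solr-7.5.0.tgz solr-7.5.0/bin/install_solr_service.sh --strip-components=2
    # Run it as root; it creates the solr user, /etc/init.d/solr and /etc/default/solr.in.sh
    sudo bash ./install_solr_service.sh solr-7.5.0.tgz
    # If the open file warning persists, raise the limits for the solr user,
    # e.g. in /etc/security/limits.conf:
    #   solr soft nofile 65000
    #   solr hard nofile 65000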

RE: Open file limit warning when starting solr

2018-12-14 Thread Armon, Rony
I don’t have a file named solr in etc/init.d and I followed the instructions in the link that you sent. Should I uninstall and re-install? -Original Message- From: Daniel Carrasco [mailto:d.carra...@i2tic.com] Sent: Wednesday, December 12, 2018 5:45 PM To: solr-user@lucene.apache.org

which ZooKeeper version for Solr 6.6.5

2018-12-14 Thread Bernd Fehling
This question sounds simple, but nevertheless it's spinning in my head. While using Solr 6.6.5 in Cloud mode, which has Apache ZooKeeper 3.4.10 in its list of "Major Components", is it possible to use Apache ZooKeeper 3.4.13 as a stand-alone ensemble together with SolrCloud 6.6.5, or do I have to

Reindex single shard on solr

2018-12-14 Thread Mahmoud Almokadem
Hello, I have corruption on some of the shards in my collection, I have the full dataset in my database, and I'm using CompositeId for routing documents. Can I traverse the whole dataset and do something like hashing the document_id to identify that this document belongs to a specific shard to

no segments* file found

2018-12-14 Thread Mahmoud Almokadem
Hello, I'm facing an issue where some shards of my SolrCloud collection are corrupted because they don't have a segments_N file, but I think all the segment files are still available. Can I create a segments_N file from the available files? This is the stack trace:

RE: terms not to match in a search query

2018-12-14 Thread Peter Lancaster
Hi Tanya, I think you can have a stop filter applied to the query for your field type. ... You should be able to use the length filter for the second part of your question. Cheers, Peter. -Original Message- From: Tanya Bompi [mailto:tanya.bo...@gmail.com] Sent: 13
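A schema sketch of what Peter describes; the field type name, stopword file and length bounds are placeholders rather than anything from the thread:

    <fieldType name="text_filtered" class="solr.TextField" positionIncrementGap="100">
      <analyzer type="query">
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <!-- drop terms listed in stopwords.txt -->
        <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
        <!-- drop tokens shorter than 2 or longer than 50 characters -->
        <filter class="solr.LengthFilterFactory" min="2" max="50"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>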

Re: Solr Index Data will be delete if state.json did not exists

2018-12-14 Thread Jan Høydahl
I would use the Backup/Restore API https://lucene.apache.org/solr/guide/7_5/making-and-restoring-backups.html Alternatively, you could create collection B using the same configset as A, stop Solr, copy the data folder and
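A minimal sketch of the two Collections API calls involved (host, backup name and location are placeholders; the location must be a path or repository visible to all nodes):

    # Back up collection A
    curl 'http://localhost:8983/solr/admin/collections?action=BACKUP&name=mybackup&collection=A&location=/mnt/backups'
    # Restore the backup into a new collection B
    curl 'http://localhost:8983/solr/admin/collections?action=RESTORE&name=mybackup&collection=B&location=/mnt/backups'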

Re: Solr recovery issue in 7.5

2018-12-14 Thread shamik
Thanks Erick. I guess I was not clear when I mentioned that I had stopped the indexing process. It was just a temporary step to make sure that we are not adding any new data when the nodes are in recovery mode. The 10 minute hard commit is carried over from our 6.5 configuration, which actually
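For reference, a 10-minute hard commit in solrconfig.xml usually looks like the sketch below; openSearcher=false is the commonly recommended pairing rather than something stated in the thread:

    <autoCommit>
      <maxTime>600000</maxTime> <!-- hard commit every 10 minutes -->
      <openSearcher>false</openSearcher>
    </autoCommit>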