Hi
I'm having trouble setting up authentication. My security.json
looks like this:
{
"authentication":{
"class":"solr.BasicAuthPlugin",
"blockUnknown": false,
"credentials":{
"admin":"IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0=
Hi
Does somebody have a recent Logstash config for parsing Solr logs? I'm
using version 6.3.0.
Thanks!
BR
Arkadi
node was being restarted? There was
an issue introduced due to IndexFingerprint comparison. Check
SOLR-9310. I am not sure if the fix made it to Solr 6.2.
On Nov 25, 2016 3:51 AM, "Arkadi Colson" <ark...@smartbit.be
<mailto:ark...@smartbit.be>> wrote:
I am using SolrCloud
because of the high number you have for
numVersionBuckets, but that's guessing.
If you are _not_ in SolrCloud, then maybe:
https://issues.apache.org/jira/browse/SOLR-9036 is relevant.
Best,
Erick
On Thu, Nov 24, 2016 at 3:10 AM, Arkadi Colson <ark...@smartbit.be> wrote:
This is the code fro
of the config you pasted
here, looks like it is from the slave node.
-----Original Message-----
From: Arkadi Colson [mailto:ark...@smartbit.be]
Sent: Thursday, 24 November 2016 11:56
To: solr-user@lucene.apache.org
Subject: Re: AW: Resync after restart
Hi Michael
Thanks for the quick
, Sternwald wrote:
Hi Arkadi,
you need to remove the line "startup" from your
ReplicationHandler-config in solrconfig.xml -> https://wiki.apache.org/solr/SolrReplication.
Greetings
Michael
-----Original Message-----
From: Arkadi Colson [mailto:ark...@smartbit.be]
Sent: Thursday,
Hi
Almost every time a Solr instance is restarted, the index is
replicated completely. Is there a way to avoid this somehow? The index
currently has a size of about 17 GB.
Some advice here would be great.
99% of the config is default:
${solr.ulog.dir:}
Is there a chance that suggestions will be generated at indexing time
and not afterwards based on indexed data? This will make it possible to
suggest on fields which are not "stored". Or is there another way to
make suggestion like behavior possible?
Thx!
Arkadi
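For reference, the stock SuggestComponent does build its own lookup structures at commit time from a dictionary field. A sketch of the usual solrconfig.xml wiring (field and type names here are placeholders; note that DocumentDictionaryFactory reads the field's stored value, so it does not by itself lift the "stored" requirement):

```xml
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">title</str>
    <str name="suggestAnalyzerFieldType">text_general</str>
    <str name="buildOnCommit">true</str>
  </lst>
</searchComponent>

<requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
  <lst name="defaults">
    <str name="suggest">true</str>
    <str name="suggest.dictionary">mySuggester</str>
    <str name="suggest.count">10</str>
  </lst>
  <arr name="components">
    <str>suggest</str>
  </arr>
</requestHandler>
```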
:40 AM, Shawn Heisey <apa...@elyograg.org> wrote:
On 10/27/2016 9:50 AM, Yonik Seeley wrote:
On Thu, Oct 27, 2016 at 9:56 AM, Arkadi Colson <ark...@smartbit.be>
wrote:
Thanks for the answer! Do you know if there is a way to trigger an
optimize for only 1 shard and not the whole collection
,
Erick
On Thu, Oct 27, 2016 at 6:56 AM, Arkadi Colson <ark...@smartbit.be
<mailto:ark...@smartbit.be>> wrote:
Thanks for the answer!
Do you know if there is a way to trigger an optimize for only 1
shard and not the whole collection at once?
On 27-10-16 15:30, Pushk
documents.
In the worst case you can 'optimize' your index, which should take care
of removing deleted documents.
On Oct 27, 2016 4:20 AM, "Arkadi Colson" <ark...@smartbit.be
<mailto:ark...@smartbit.be>> wrote:
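A common answer to the single-shard question above (a sketch; host and core names are placeholders): send the optimize to one core's URL directly and add distrib=false so the request is not distributed to the rest of the collection:

```
http://solr01:8983/solr/messages_shard1_replica1/update?optimize=true&distrib=false
```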
Hi
As you can see in the screenshot above in the old
Hi
As you can see in the screenshot above in the oldest segments there are
a lot of deletions. In total the shard has about 26% deletions. How can
I get rid of them so the index will be smaller again?
Can this only be done with an optimize or does it also depend on the
merge policy? If it
Hi
I could not find "Could not download file" in the logs. Should I
increase the log level somewhere? Just let me know... so I can provide
you more detailed logs...
Thx!
Arkadi
On 02-09-16 11:21, Arkadi Colson wrote:
Hi
I cannot find a string in the logs matching "Could no
n Thu, Sep 1, 2016 at 6:05 PM, Arkadi Colson <ark...@smartbit.be
<mailto:ark...@smartbit.be>> wrote:
ERROR - 2016-09-01 14:30:43.653; [c:intradesk s:shard1
r:core_node5 x:intradesk_shard1_replica1]
org.apache.solr.common.SolrExcepti
Hi
Replication seems to be in an endless loop. Anybody any idea?
See below for logs.
If you need more info, just let me know...
INFO - 2016-09-01 14:30:42.563; [c:lvs s:shard1 r:core_node10
x:lvs_shard1_replica1] org.apache.solr.core.SolrDeletionPolicy;
SolrDeletionPolicy.onCommit: commits:
Is it OK to just change the multiValued attribute to true and reindex the
message module data? There are also other modules indexed in the same
schema with multiValued = false. Will that become a problem?
BR,
Arkadi
On 05/27/2013 09:33 AM, Gora Mohanty wrote:
On 27 May 2013 12:58, Arkadi Colson
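The schema side of the change discussed above is a single attribute; something like the following, with the field name made up for illustration. Existing documents keep their old encoding, which is why the message data must be reindexed:

```xml
<field name="message_recipient" type="string" indexed="true" stored="true" multiValued="true"/>
```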
Hi
We would like to index our messages system. We should be able to search
for messages for specific recipients due to performance issues on our
databases. But the message is of course the same for all recipients, and
the message text should be saved only once! Is it possible to have some
Yes indeed... Thx!
On 05/27/2013 09:33 AM, Gora Mohanty wrote:
On 27 May 2013 12:58, Arkadi Colson ark...@smartbit.be wrote:
Hi
We would like to index our messages system. We should be able to search for
messages for specific recipients due to performance issues on our databases
Hi
When having a collection with 3 shards and 2 replicas for each shard,
I want to split shard1. Does it matter where in the cloud the splitshard
command is started, or should it be started on the master of that shard?
BR,
Arkadi
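For what it's worth, SPLITSHARD is a Collections API call, and Collections API requests can be sent to any node in the cluster; the overseer routes the actual work. A sketch with placeholder host and collection names:

```
http://solr01:8983/solr/admin/collections?action=SPLITSHARD&collection=messages&shard=shard1
```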
Any idea why I got a Broken pipe?
INFO - 2013-05-23 13:37:19.881; org.apache.solr.core.SolrCore;
[messages_shard3_replica1] webapp=/solr path=/select/
Hi
I tried to split a shard but it failed. If I try to do it again it does
not start again.
I see the two extra shards in /collections/messages/leader_elect/ and
/collections/messages/leaders/
How can I fix this?
root@solr07-dcg:/solr/messages_shard3_replica2# curl
clusterstate.json is now reporting shard3 as inactive. Any idea how to
change clusterstate.json manually from the command line?
On 05/22/2013 08:59 AM, Arkadi Colson wrote:
Hi
I tried to split a shard but it failed. If I try to do it again it
does not start again.
I see the two extra shards
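One way to inspect and edit cluster state from the command line in Solr 4.x is the bundled zkcli script (the script path and ZooKeeper address are placeholders; hand-editing clusterstate.json is a last resort and worth backing up first):

```
cloud-scripts/zkcli.sh -zkhost zk01:2181 -cmd getfile /clusterstate.json clusterstate.json
# edit the file locally, then write it back:
cloud-scripts/zkcli.sh -zkhost zk01:2181 -cmd putfile /clusterstate.json clusterstate.json
```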
Is there a limitation on the number of concurrent connections to a Solr
host? We have some scripts running simultaneously to fill Solr, and
when starting too many we get this error:
exception 'SolrClientException' with message 'Unsuccessful update
request. Response Code 0. (null)'
[java.lang.ref.SoftReference@759c8d]) but failed to remove it when the
web application was stopped. Threads are going to be renewed over time
to try and avoid a probable memory leak.
Kind regards
Arkadi Colson
Smartbit bvba • Hoogstraat 13 • 3670 Meeuwen
T +32 11 64 08 80 • F +32 11 64 08 81
/changes/Changes.html#4.3.0.upgrading_from_solr_4.2.0
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
On 6 May 2013 at 16:50, Arkadi Colson ark...@smartbit.be wrote:
Hi
After update to 4.3 I got this error:
May 06, 2013 2:30:08 PM org.apache.coyote.AbstractProtocol
Found it on http://wiki.apache.org/solr/SolrLogging!
Thx
On 05/07/2013 08:40 AM, Arkadi Colson wrote:
Any tips on what to do with the configuration files?
Where do I have to store them and what should they look like? Any
examples?
May 07, 2013 6:16:27 AM
startup in 37541 ms
Any idea?
:36 PM, Mark Miller wrote:
What version of Solr? That should work in Jetty in 4.2 and not before and in
Tomcat in 4.3 and not before.
- Mark
On Apr 29, 2013, at 10:19 AM, Arkadi Colson ark...@smartbit.be wrote:
When I first do a linkconfig the router:implicit seems to be gone! So recreating
Anyone an idea how to debug this?
Thx!
On 04/25/2013 09:18 AM, Arkadi Colson wrote:
Hi
It seems not to work in my case. We are using the solr php module for
talking to Solr. Currently we have 2 collections 'intradesk' and 'lvs'
for 10 solr hosts (shards: 5 - repl: 2). Because
:468)
at
org.apache.solr.servlet.SolrDispatchFilter.remoteQuery(SolrDispatchFilter.java:454)
... 17 more
On 04/30/2013 09:57 AM, Arkadi Colson wrote:
I'm using the latest solr 4.2.1 with apache-tomcat-7.0.33
What exactly is the purpose of the linkconfig command? When I run it
before I
}
status=400 QTime=1
The last post is from the host who does the proxying...
On 04/30/2013 01:40 PM, Arkadi Colson wrote:
I'm getting this error in tomcat log:
Apr 30, 2013 11:21:44 AM org.apache.solr.common.SolrException log
SEVERE: null:org.apache.solr.common.SolrException: Error trying
Never mind about these last 2 posts. debugQuery parameter must be false
or true instead of 0 or 1
On 04/30/2013 01:54 PM, Arkadi Colson wrote:
This is what I get on the solr host where the query is proxy-ed to:
Apr 30, 2013 11:36:28 AM org.apache.solr.core.SolrCore execute
INFO
.officemeeuwen.smartbit.be:8983/solr;,
leader:true,
router:compositeId}}
On 04/30/2013 01:54 PM, Arkadi Colson wrote:
This is what I get on the solr host where
On 04/25/2013 09:18 AM, Arkadi Colson wrote:
Hi
It seems not to work in my case. We are using the solr php module for
talking to Solr. Currently we have 2 collections 'intradesk' and 'lvs'
for 10
Is it correct that if I create a collection B with parameter
createNodeSet=hostB, and I query hostA for something in collection B,
it cannot be found?
BR,
Arkadi
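For context, createNodeSet on the Collections API CREATE call pins the new collection's cores to the listed nodes. A sketch (host names, port, and counts are placeholders):

```
http://solr01:8983/solr/admin/collections?action=CREATE&name=collectionB&numShards=1&replicationFactor=1&createNodeSet=hostB:8983_solr
```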
I found this in the zookeeper directory /collections/collectionX/
{
  "configName": "smsc",
  "router": "implicit"
}
Is router:implicit the cause of this? Is it possible to fix?
Thx!
On 04/29/2013 01:24 PM, Arkadi Colson wrote:
Is it correct that if I create a collection B with parameter
createNodeSet
23.7-b01, mixed mode)
On 04/29/2013 03:24 PM, Michael Della Bitta wrote:
That means that documents will be indexed and stored on the node
they're sent to. It shouldn't keep
When I first do a linkconfig the router:implicit seems to be gone! So
recreating the collection will solve this. The problem that I cannot
request a collection that does not exist on that host is still there.
Arkadi
On 04/29/2013 03:31 PM, Arkadi Colson wrote:
The strange thing is that I
Hi
It seems not to work in my case. We are using the solr php module for
talking to Solr. Currently we have 2 collections 'intradesk' and 'lvs'
for 10 solr hosts (shards: 5 - repl: 2). Because there is no more disc
space I created 6 new hosts for collection 'messages' (shards: 3 - repl: 2).
On Tue, Apr 23, 2013 at 10:45 AM, Arkadi Colson ark...@smartbit.be wrote:
Hi
Is it correct that when inserting or updating documents into Solr you have to
talk to a Solr host where at least one shard of that collection is stored?
For select you can talk to any host
Thx!
On 04/24/2013 04:46 PM, Shawn Heisey wrote:
On 4/24/2013 12:49 AM, Arkadi Colson wrote:
We are using tomcat so we'll just wait. Hopefully it's fixed in 4.3 but
we have a work around for now so...
What exactly is the difference between jetty and tomcat. We are using
tomcat because we've
Hi
Is it correct that when inserting or updating documents into Solr you
have to talk to a Solr host where at least one shard of that collection
is stored?
For select you can talk to any host within the collection.configName?
BR,
Arkadi
Hi
Recently Solr crashed. I've found this in the error log.
My commit settings look like this:
<autoCommit>
  <maxTime>1</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>2000</maxTime>
</autoSoftCommit>
The machine has 10GB
http://comments.gmane.org/gmane.comp.jakarta.lucene.solr.user/69168
Regards,
André
From: Arkadi Colson [ark...@smartbit.be]
Sent: Tuesday, 2 April 2013 10:24
To: solr-user@lucene.apache.org
Subject: java.lang.OutOfMemoryError: Map failed
Hi
Recently
From: Arkadi Colson [ark...@smartbit.be]
Sent: Tuesday, 2 April 2013 11:26
To: solr-user@lucene.apache.org
Cc: André Widhani
Subject: Re: AW: java.lang.OutOfMemoryError: Map failed
Hmmm I checked it and it seems to be ok:
root@solr01-dcg:~# ulimit -v
unlimited
Any other tips or do you
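"Map failed" OutOfMemoryErrors usually mean the OS refused a new mmap rather than the Java heap running out. Besides ulimit -v, the per-process map count limit is worth checking; this is a guess on my part, not something confirmed in the thread:

```shell
# kernel cap on the number of mmap regions a single process may hold;
# with MMapDirectory every Lucene segment file can consume several maps
cat /proc/sys/vm/max_map_count
```

If the value is low (the common default is 65530), raising it via sysctl is the usual remedy.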
The index folder is indeed gone but it seems to work. Maybe just a
structural change...
On 04/02/2013 04:08 PM, yayati wrote:
I moved solr 4.1 to solr 4.2 on one
I upgraded java to version 7 and everything seems to be stable now!
BR,
Arkadi
On 03/25/2013 09:54 PM, Shawn Heisey wrote:
On 3/25/2013 1:34 AM, Arkadi Colson wrote:
I changed my system memory to 12GB. Solr now gets -Xms2048m -Xmx8192m as
parameters. I also added -XX:+UseG1GC to the java
shards to reduce the disk space?
Is there any progress on the resharding option the developers are working on?
Thx!
solr01-gs kernel: [716098.114314] Out of memory: kill
process 29301 (java) score 37654 or a child
Mar 22 20:30:01 solr01-gs kernel: [716098.114401] Killed process 29301
(java)
Thanks!
On 03/14/2013 04:00 PM, Arkadi Colson wrote:
On 03/14/2013 03:11 PM, Toke Eskildsen wrote:
On Thu, 2013-03-14
Is somebody using the UseG1GC garbage collector with Solr and Tomcat 7?
Any extra options needed?
Thanks...
On 03/25/2013 08:34 AM, Arkadi Colson wrote:
I changed my system memory to 12GB. Solr now gets -Xms2048m -Xmx8192m
as parameters. I also added -XX:+UseG1GC to the java process. But now
. documents.
Regards,
Bernd
On 25.03.2013 11:55, Arkadi Colson wrote:
Is somebody using the UseG1GC garbage collector with Solr and Tomcat 7? Any
extra options needed?
Thanks...
On 03/25/2013 08:34 AM, Arkadi Colson wrote:
I changed my system memory to 12GB. Solr now gets -Xms2048m -Xmx8192m
/)
Only for short-term monitoring we also use jvisualvm, delivered with the Java SE JDK.
Regards
Bernd
On 25.03.2013 14:45, Arkadi Colson wrote:
Thanks for the info!
I just upgraded java from 6 to 7...
How exactly do you monitor the memory usage and the effect of the garbage
collector?
On 03/25/2013
Searcher@db4268b realtime
Mar 15, 2013 11:56:36 AM org.apache.solr.update.DirectUpdateHandler2 commit
INFO: end_commit_flush
to allocate stack
guard pages failed.
mmap failed for CEN and END part of zip file
When I shut down Tomcat, free -m and top keep showing the same values.
Almost no free memory...
Any idea?
On 03/14/2013 10:35 AM, Arkadi Colson wrote:
Hi
I'm getting this error after a few hours of filling solr with
documents. Tomcat is running with -Xms1024m -Xmx4096m.
Total memory
On 03/14/2013 03:11 PM, Toke Eskildsen wrote:
On Thu, 2013-03-14 at 13:10 +0100, Arkadi Colson wrote:
When I shutdown tomcat free -m and top keeps telling me the same values.
Almost no free memory...
Any idea?
Are you reading top free right? It is standard behaviour for most
modern
Based on what does Solr decide to replicate the whole shard again from
zero? From time to time after a restart of Tomcat, Solr copies over the
whole shard to the replica instead of only the changes.
BR,
Arkadi
Hi
I'm filling our Solr database with about 5 million docs. All docs are in
some kind of queue processed by 5 simultaneous workers. What is the
best way to do commits in such a situation? If I let every worker
do a commit after 100 docs, there will be 5 commits in a short period.
worker threads issuing overlapping commits. There's also commitWithin,
which achieves the same thing.
Upayavira
On Wed, Mar 13, 2013, at 08:02 AM, Arkadi Colson wrote:
Hi
I'm filling our solr database with about 5mil docs. All docs are in some
kind of queue which are processed by 5 simultaneous
between 1s and 15s, and hard commits maybe every 15s to
1min. Those seem to me to be reasonable values.
Upayavira
On Wed, Mar 13, 2013, at 09:19 AM, Arkadi Colson wrote:
What would be a good value for maxTime or maxDocs knowing that we insert
about 10 docs/sec? Will it be a problem that we only use
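Upayavira's ranges above translate into solrconfig.xml roughly as follows; the exact values are illustrative, not a recommendation from the thread:

```xml
<autoCommit>
  <maxTime>60000</maxTime>        <!-- hard commit: flush + new segment, every 1 min -->
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>15000</maxTime>        <!-- soft commit: search visibility, every 15 s -->
</autoSoftCommit>
```

With autoCommit in place the worker processes need not issue explicit commits at all.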
Due to server failure I have 2 down hosts for 1 shard: lvs_shard2
Is it possible to fix this? The index size is equal so I just need to be
able to let one replica become active.
{
lvs:{
shards:{
shard1:{
range:8000-,
replicas:{
)
at
org.apache.solr.handler.admin.CoreAdminHandler.getCoreStatus(CoreAdminHandler.java:1007)
at
org.apache.solr.handler.admin.CoreAdminHandler.handleStatusAction(CoreAdminHandler.java:701)
... 19 more
On 03/13/2013 03:51 PM, Arkadi Colson wrote:
Due to server failure I have 2 down hosts for 1 shard: lvs_shard2
I just fixed it by unloading one replica core, restarting Tomcat, and
then adding the replica core again...
On 03/13/2013 04:13 PM, Mark Miller wrote:
Anything else in your logs?
- Mark
On Mar 13, 2013, at 10:57 AM, Arkadi Colson ark...@smartbit.be wrote:
On one replica the logs
.
I thought the core admin UI had this field, but if not pls file a JIRA. Until
then, you may need to use the HTTP API.
Mark
Sent from my iPhone
On Mar 4, 2013, at 12:46 AM, Arkadi Colson ark...@smartbit.be wrote:
Hi
When creating the shards through the admin interface it's not possible to
specify
process. I don't intend to change this for now. So will there be a
problem in the future with this way of working, and do I have to use the
Collections API from the beginning?
Thx!
Anyone running Nagios monitoring without JMX on Solr 4.0 or 4.1?
Thx!
Does it mean that if you redo indexing after the upgrade to 4.1, shard
splitting will work in 4.2?
On 02/10/2013 05:21 PM, Michael Della Bitta wrote:
No. You can just
Hi
From time to time it takes quite some time to start Tomcat. The logging
reports the snippet below. Any idea? Sorry, I'm a beginner with Java.
Dec 17, 2012 8:01:38 AM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler [ajp-bio-8009]
Dec 17, 2012 8:01:38 AM
Hi
When abcdefg 123456 is in Solr I would like to have a match with
- abcd
- cdef
- abcdefg 123456
- abcdefg 123456
- defg 1234
The last one is actually not working.
What am I doing wrong?
My config looks like this.
<field name="smsc_description" type="text" indexed="true"
stored="false"
)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1699)
at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:455)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:276)
... 15 more
Thx!
Arkadi Colson
Thanks for the info!
Do you know if it's possible to use file uploads to Tika with this client?
On 12/03/2012 03:56 PM, Bill Au wrote:
https://bugs.php.net/bug.php?id=62332
There is a fork with patches applied.
On Mon, Dec 3, 2012 at 9:38 AM, Arkadi Colson ark...@smartbit.be wrote:
Hi
)
... 15 more
: Thu, 06 Dec 2012 09:02:14 +0100
From: Arkadi Colson ark...@smartbit.be
Reply-To: ark...@smartbit.be
Organization: Smartbit bvba
To: solr-user@lucene.apache.org solr-user@lucene.apache.org
Anybody an idea?
Dec 5, 2012 3:52:32 PM org.apache.solr.client.solrj.impl.HttpClientUtil
replicate the index to other nodes - in SolrCloud we need every node to be
capable of doing that. Each shard only has one leader, but every node in your
cluster will be a replication master.
- Mark
On Nov 30, 2012, at 10:32 AM, Arkadi Colson ark...@smartbit.be wrote:
This is my setup
Hi
Anyone tested the pecl Solr client in combination with SolrCloud? It
seems to be broken since 4.0.
Best regards
Arkadi
Hi
I have a question regarding the NGram filter and full-word search.
When I insert "arkadicolson" into Solr and search for "arkadic", Solr
will find a match.
When searching for "arkadicols", Solr will not find a match because
maxGramSize is set to 8.
However, when searching for the full word
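This behaviour follows from maxGramSize: no gram longer than 8 characters exists in the index, so a 10-character query term has nothing to match. One common workaround (a sketch; field and type names are made up) is to copyField the content into a second, un-grammed field and search both:

```xml
<!-- grammed field for substring matching -->
<field name="name_ngram" type="text_ngram" indexed="true" stored="false"/>
<!-- plain field so terms longer than maxGramSize still match -->
<field name="name_full" type="text" indexed="true" stored="false"/>
<copyField source="name" dest="name_ngram"/>
<copyField source="name" dest="name_full"/>
```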
It has now been hanging for 15 hours and nothing changes in the index
directory. Any tips for further debugging?
On 06/27/2012 03:50 PM, Arkadi Colson wrote:
I'm sending files to Solr with the PHP Solr library. I'm doing a
commit every 1000 documents:
<autoCommit>
  <maxDocs>1000</maxDocs>
Hi
I indexed the following strings:
abcdefg hijklmnop
When searching for abcdefg hijklmnop Solr returns the result but when
searching for abcdefg hijklmnop Solr returns nothing.
Any idea how to search for more than one word?
[params] = SolrObject Object
(
[0x0007,0x000734f85980,0x00075556)
PSPermGen total 54464K, used 54387K [0x0006fae0,
0x0006fe33, 0x0007)
object space 54464K, 99% used
[0x0006fae0,0x0006fe31cce8,0x0006fe33)
On 06/26/2012 02:36 PM, Arkadi Colson wrote:
Hi,
I'm indexing about 200.000
is working. Is your
field's value too long? You should also tell us the average load of the
system, the free memory, and the memory used by the JVM.
On 2012-6-27 7:51 PM, Arkadi Colson ark...@smartbit.be
mailto:ark...@smartbit.be wrote:
Anybody an idea?
The thread Dump looks like this:
Full thread dump Java
of data. So try seeing if you're getting a bunch of disk activity, you can get
a crude idea of what's going on if you just look at the index directory on
your Solr server while it's hung.
What version of Solr are you using? Details matter
Best
Erick
On Wed, Jun 27, 2012 at 7:51 AM, Arkadi Colson
Hi,
I'm indexing about 200.000 files (average size of 1 MB) with the Tika
processor. At some point Solr started hanging. The log only reports:
INFO: [] webapp=/solr path=/replication
params={command=indexversion&wt=javabin} status=0 QTime=0
Jun 26, 2012 2:34:00 PM
It is still not working after reindexing. Below you can find the output
of the field analysis. Any idea what can be wrong?
Index Analyzer
org.apache.solr.analysis.HTMLStripCharFilterFactory
{luceneMatchVersion=LUCENE_35}
text123 456
org.apache.solr.analysis.KeywordTokenizerFactory
Hi
I'm using Tika 0.10 for indexing my documents but I am not getting the
expected results when doing a search, even after deleting the index and
starting over again.
Some of the words in, for example, a PDF document can be found, but most
of them cannot. Is it related to some language setting
The text field in the schema configuration looks like this. I changed
catenateNumbers to 0 but it still doesn't work as expected.
<fieldType name="text" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- in this example, we will
Hi
I'm using the pecl PHP class to query Solr and was wondering how to
query for a part of a sentence exactly.
There are 2 data items indexed in Solr:
1327497476: 123 456 789
1327497521: 1234 5678 9011
However when running the query, both data items are returned as you can
see below. Any idea
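When both documents come back, the query was most likely parsed as independent terms rather than as a unit. Wrapping the fragment in double quotes turns it into a phrase query, which requires the terms to appear adjacent and in order (the field name is a placeholder):

```
q=content:"123 456 789"
```

With the pecl client that is just the query string passed to SolrQuery::setQuery, e.g. $query->setQuery('content:"123 456 789"').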