Code:
http://pastebin.com/tNjzDbmy
Solr 4.9.0
Tomcat 7
Java 7
I took Erik Hatcher's example for creating a PostFilter and have modified
it so it would work with Solr 4.x. Right now it works...the first time.
If I were to run this query it would work right:
case the creds param must be included in the
hashCode and equals logic.
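To illustrate the point above (a minimal sketch, not the actual PostFilter code; the class and field names here are hypothetical): any request-specific parameter such as creds has to participate in equals()/hashCode(), otherwise Solr's filter cache can hand back results that were computed for a different user's credentials.

```java
import java.util.Objects;

// Hypothetical cache-key-style class: if a PostFilter's behavior depends on a
// request parameter (here "creds"), that parameter must be part of the
// equals()/hashCode() identity, or cached filter results will be reused
// across requests with different credentials.
public class AccessFilterKey {
    private final String field;
    private final String creds; // request-specific: MUST be part of identity

    public AccessFilterKey(String field, String creds) {
        this.field = field;
        this.creds = creds;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof AccessFilterKey)) return false;
        AccessFilterKey other = (AccessFilterKey) o;
        return Objects.equals(field, other.field)
            && Objects.equals(creds, other.creds);
    }

    @Override
    public int hashCode() {
        return Objects.hash(field, creds);
    }
}
```

With creds in the identity, two requests with different credentials produce unequal keys and therefore separate cache entries.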
Joel Bernstein
Search Engineer at Heliosearch
On Wed, Oct 8, 2014 at 1:17 PM, Christopher Gross cogr...@gmail.com
wrote:
Solr 4.9.0
Java 1.7.0_49
I'm indexing an internal Wiki site. I was running on an older version of
Solr (4.1) and wasn't having any trouble indexing the content, but now I'm
getting errors:
SCHEMA:
<field name="content" type="string" indexed="false" stored="true" required="true"/>
LOGS:
Caused by:
On Mon, Sep 15, 2014 at 10:06 AM, Christopher Gross cogr...@gmail.com
wrote:
[sorry if this double posts -- I got an error on sending so I'm trying it
again..]
I'm storing the page content in a string in Solr -- for display later.
I'm indexing that content into a text field (text_en_splitting) for
full-text searching.
I'm getting an error on the string portion, but
Thanks Hoss -- adding in the LengthFilterFactory did the trick.
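For anyone hitting the same error: Lucene 4.x enforces a hard 32766-byte limit on a single indexed term, so indexing whole Wiki pages into an untokenized field can blow up. A hedged sketch of the kind of analyzer chain that worked here (the type name and max value are my own illustrative choices, not from the thread):

```xml
<fieldType name="text_en_splitting_safe" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- drop any token longer than the chosen limit so no single term
         can exceed Lucene's 32766-byte indexed-term cap -->
    <filter class="solr.LengthFilterFactory" min="1" max="10000"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

The large stored-only copy of the content can stay a string field with indexed="false".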
-- Chris
On Mon, Sep 15, 2014 at 1:57 PM, Bryan Bende bbe...@gmail.com wrote:
I ran into this problem as well when upgrading to Solr 4.8.1...
We had a somewhat large binary field that was indexed=false stored=true,
but because
I just got Solr 4.9.0 running as a 3 node cloud. I use the CloudSolrServer
class to connect and do queries, but it isn't working now using HTTPS. I
don't see any options for the CloudSolrServer to use https (no key/trust
store or anything).
What SolrJ classes should I be looking at to connect
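One direction worth trying (an assumption on my part, not confirmed in this thread): SolrJ 4.x builds on Apache HttpClient, and a commonly suggested way to supply a trust/key store is to set the standard JSSE system properties before the first CloudSolrServer is constructed, so the default SSL socket factory can validate the nodes' certificates. The paths below are hypothetical.

```java
// Sketch: configure JSSE trust-store properties up front, before any
// SolrJ/HttpClient object is created, so HTTPS connections to the
// SolrCloud nodes can be validated.
public class SolrSslSetup {
    public static void configure(String trustStorePath, String trustStorePassword) {
        System.setProperty("javax.net.ssl.trustStore", trustStorePath);
        System.setProperty("javax.net.ssl.trustStorePassword", trustStorePassword);
    }

    public static void main(String[] args) {
        configure("/etc/pki/solr-truststore.jks", "changeit"); // hypothetical path/password
        System.out.println(System.getProperty("javax.net.ssl.trustStore"));
    }
}
```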
is to wipe out the version-2 for all the zookeepers, restart them, then
reload my configs back in.
Thanks!
-- Chris
On Thu, Sep 4, 2014 at 4:12 AM, Shawn Heisey s...@elyograg.org wrote:
On 9/2/2014 11:44 AM, Christopher Gross wrote:
OK -- so I think my previous attempts were causing the problem
goes wrong. As I don't know,
I'm not sure about it but did you already try it and things didn't
work/clean up for you? If that's the case, was there an error that you
noticed?
On Thu, Sep 4, 2014 at 4:45 AM, Christopher Gross cogr...@gmail.com
wrote:
Shawn,
How do I remove a collection
(mulitValued instead of multiValued), which left me in a similar state.
-- Chris
On Thu, Sep 4, 2014 at 3:28 PM, Anshum Gupta ans...@anshumgupta.net wrote:
I'm just curious, do you know why the CREATE failed for you?
On Thu, Sep 4, 2014 at 12:21 PM, Christopher Gross cogr...@gmail.com
wrote
On Tue, Sep 2, 2014 at 2:30 PM, Christopher Gross cogr...@gmail.com wrote:
Is the solr.ssl.checkPeerName option available in 4.8.1? I have my
Tomcat starting up with that as a -D option, but I'm getting an exception
on validating the hostname w/ the cert...
-- Chris
On Tue, Sep 2, 2014 at 1
Solr 4.8.1
Java 1.7
Tomcat 7.0.50
Zookeeper 3.4.6
Trying to get a SolrCloud running with https only. I found this:
https://issues.apache.org/jira/browse/SOLR-3854
I don't have a clusterprops.json file, and running the zkCli command
doesn't add one either.
Command is along the lines of:
not getting a whole lot on searches for clusterprops.json -- any
advice would be appreciated.
-- Chris
On Tue, Sep 2, 2014 at 8:59 AM, Christopher Gross cogr...@gmail.com wrote:
Hi Hoss.
I did finally stumble onto that document (just after I posted my last
message, of course).
Using bash shell.
I've now tried those steps:
Tomcat is stopped.
First I run:
./zkcli.sh -zkhost localhost:2181 -cmd put /clusterprops.json
'{"urlScheme":"https"}'
I confirm via the
Side note -- I've also tried adding the clusterprops.json file via
zookeeper's shell client on the command line, and within that client, all
with no luck.
-- Chris
On Tue, Sep 2, 2014 at 12:19 PM, Christopher Gross cogr...@gmail.com
wrote:
OK -- so I think my previous attempts were causing the problem.
Since this is a dev environment (and is still empty), I just went ahead and
wiped out the version-2 directories for the zookeeper nodes, reloaded my
solr collections, then ran that command (zkcli.sh in the solr distro).
That did work.
Is the solr.ssl.checkPeerName option available in 4.8.1? I have my Tomcat
starting up with that as a -D option, but I'm getting an exception on
validating the hostname w/ the cert...
-- Chris
On Tue, Sep 2, 2014 at 1:44 PM, Christopher Gross cogr...@gmail.com wrote:
of times.
I'll see about getting a new version in place soon. If it still happens,
I'll definitely log something in JIRA for it.
Thanks!
-- Chris
On Thu, Aug 7, 2014 at 4:07 PM, Shawn Heisey s...@elyograg.org wrote:
On 8/7/2014 1:46 PM, Christopher Gross wrote:
Solr 4.1, in SolrCloud mode
Solr 4.1, in SolrCloud mode. 3 nodes configured, Running in Tomcat 7 w/
Java 7.
I have a few cores set up, let's just call them A, B, C and D. They have
some uniquely named xslt files, but they all have a rss.xsl file.
Sometimes, on just 1 of the nodes, if I do a query for something in A and
Solr 4.7.2 (and 4.6.1)
Tomcat 7.0.52
Java 1.7.0_45 (and _55)
I'm getting some really odd behavior with some XSLT documents. I've been
doing some upgrades to Java Solr and I'm trying to narrow down where the
problems are happening.
I have a few XSLT docs that I put into the conf/xslt directory
Checked that first -- it's a test site with a small sample size. The field
is set in all of the items. And refreshing the query a few times can yield
either result (with/without the error).
I'm reverting back to an old version of my stack (my code, plus tomcat
solr), I'll step through my
Are you satisfying both of those conditions? If so, it's probably ok
to just ignore the warning.
Regards,
Alex.
Personal website: http://www.outerthoughts.com/
Current project: http://www.solr-start.com/ - Accelerating your Solr
proficiency
On Fri, May 2, 2014 at 3:28 AM, Christopher Gross
I get this warning when Solr (4.7.2) Starts:
WARN org.apache.solr.util.xslt.TransformerProvider - The
TransformerProvider's simplistic XSLT caching mechanism is not appropriate
for high load scenarios, unless a single XSLT transform is used and
xsltCacheLifetimeSeconds is set to a sufficiently
("The TransformerProvider's simplistic XSLT caching mechanism is not appropriate"
+ " for high load scenarios, unless a single XSLT transform is used"
+ " and xsltCacheLifetimeSeconds is set to a sufficiently high value.");
}
On Thursday, May 1, 2014 11:29 PM, Christopher
Running Solr 4.6.1, Tomcat 7.0.29, Zookeeper 3.4.6, Java 6
I have 3 Tomcats running, each with their own Solr war, all on the same
box, along with 5 ZK nodes. It's a dev box.
I can get the SolrCloud up and running, then use the Collections API to get
everything going. It's all fine until I
These get added to the startup of Tomcat:
-DhostPort=8181 -Djetty.port=8181
-DzkHost=localhost:2181,localhost:2182,localhost:2183,localhost:2184,localhost:2185
-Dbootstrap_conf=true -Dport=8181 -DhostContext=solr
-DzkClientTimeout=2
-- Chris
On Thu, Apr 24, 2014 at 11:41 AM, Rafał Kuć
...@elyograg.org wrote:
On 4/24/2014 9:44 AM, Christopher Gross wrote:
I get both of these errors a few times in my tomcat (7.0.52) catalina.out
logfile:
2014-04-02 13:22:32,026 WARN org.apache.solr.schema.FieldTypePluginLoader
- TokenFilterFactory is using deprecated LUCENE_33 emulation. You should at
some point declare and reindex to at least 4.0, because 3.x
Running Apache Solr 4.5 on Tomcat 7.0.29, Java 1.6_30. 3 SolrCloud nodes
running. 5 ZK nodes (v 3.4.5), one on each SolrCloud server, and on 2
other servers.
I want to create a collection on all 3 nodes. I only need 1 shard. The
config is in Zookeeper (another collection is using it)
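The setup above boils down to one Collections API CREATE call. A hedged sketch of the URL (host, collection, and configset names are hypothetical; numShards/replicationFactor/collection.configName are the relevant parameters):

```java
// Sketch: build the Collections API CREATE URL for a 1-shard collection
// replicated to all 3 nodes, reusing a configset already in ZooKeeper.
public class CreateCollectionUrl {
    public static String build(String host, String name, String configName) {
        return "http://" + host + "/solr/admin/collections?action=CREATE"
             + "&name=" + name
             + "&numShards=1"
             + "&replicationFactor=3"
             + "&collection.configName=" + configName;
    }

    public static void main(String[] args) {
        System.out.println(build("index1:8080", "mycollection", "myconf"));
    }
}
```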
I have Solr 4.1 running in the SolrCloud mode. My largest collection has 2
index directories (and an index.properties replication.properties in that
directory). Is it safe to remove the older index not listed in
index.properties? I'm running low on disk space, otherwise I'd have just
left it
, if the index isn't the one specified in index.properties then `lsof`
won't show Solr as using it.
FWIW I'm pretty sure there's a bug in Jira about old indexes not getting
purged but I can't find it right now.
Thanks,
Greg
On Oct 31, 2013, at 7:32 AM, Christopher Gross cogr...@gmail.com wrote
In Solr 4.5, I'm trying to create a new collection on the fly. I have a
data dir with the index that should be in there, but the CREATE command
makes the directory be:
collection name_shard1_replicant#
I was hoping that making a collection named something would use a directory
with that name to
was with that.
Thanks Shawn -- I have a much better understanding of all this now.
-- Chris
On Thu, Oct 17, 2013 at 7:31 PM, Shawn Heisey s...@elyograg.org wrote:
On 10/17/2013 12:51 PM, Christopher Gross wrote:
OK, super confused now.
http://index1:8080/solr/admin/cores?action=CREATE&name
)?
To avoid the overhead, could you put Solr on a separate VLAN (with ACLs to
client servers)?
Cheers,
Tim
On 12 October 2013 17:30, Shawn Heisey s...@elyograg.org wrote:
On 10/11/2013 9:38 AM, Christopher Gross wrote:
On Fri, Oct 11, 2013 at 11:08 AM, Shawn Heisey s...@elyograg.org
wrote
the only way that I can add in a collection is to
load the configs into zookeeper, stop tomcat, add it to the solr.xml file,
and restart tomcat.
Is there a primer that I'm missing for how to do this?
Thanks.
-- Chris
On Wed, Oct 16, 2013 at 2:59 PM, Christopher Gross cogr...@gmail.com wrote
://index1:8080/solr/test1-alias/select?q=*:*
but that didn't work. How do I use an alias when it gets made?
-- Chris
On Thu, Oct 17, 2013 at 2:51 PM, Christopher Gross cogr...@gmail.com wrote:
OK, super confused now.
http://index1:8080/solr/admin/cores?action=CREATE&name=test2collection
the Zookeeper instance, and do a 'get /aliases.json'.
-Original Message-
From: Christopher Gross [mailto:cogr...@gmail.com]
Sent: Thursday, October 17, 2013 2:40 PM
To: solr-user
Subject: Re: Switching indexes
Also, when I make an alias:
http://index1:8080/solr/admin/collections?action
!
-- Chris
On Tue, Oct 15, 2013 at 7:30 PM, Shawn Heisey s...@elyograg.org wrote:
On 10/15/2013 2:17 PM, Christopher Gross wrote:
I have 3 Solr nodes (and 5 ZK nodes).
For #1, would I have to do that on all of them?
For #2, I'm not getting the auto-replication between node 1 and nodes 2
3
for my
core1 from the cloud. Or keep it around as a backup to which you
can restore simply by changing 'query' alias.
-Original Message-
From: Christopher Gross [mailto:cogr...@gmail.com]
Sent: Wednesday, October 16, 2013 7:05 AM
To: solr-user
Subject: Re: Switching indexes
Shawn
?
-- Chris
On Wed, Oct 16, 2013 at 12:40 PM, Shawn Heisey s...@elyograg.org wrote:
On 10/16/2013 9:44 AM, Christopher Gross wrote:
Garth,
I think I get what you're saying, but I want to make sure.
I have 3 servers (index1, index2, index3), with Solr living on port 8080.
Each
again!
-- Chris
On Wed, Oct 16, 2013 at 2:40 PM, Shawn Heisey s...@elyograg.org wrote:
On 10/16/2013 11:51 AM, Christopher Gross wrote:
Ok, so I think I was confusing the terminology (still in a 3.X mindset I
guess.)
From the Cloud-Tree, I do see that I have collections for what I
In Solr 3.x, whenever I'd reindex content, I'd fill up one instance, copy
the whole data directory over to the second (or third) instance and then
restart that Tomcat to get the indexes lined up.
With Solr 4.1, I'm guessing that I can't go and do that without taking down
all of my nodes and
, Shawn Heisey s...@elyograg.org wrote:
On 10/15/2013 12:36 PM, Christopher Gross wrote:
I have 3 SolrCloud nodes (call them idx1, idx2, idx3), and the boxes have
SSL certs configured on them to protect the Solr Indexes.
Right now, I can do queries on idx1 and it works fine.
If I try to query on idx3, I get:
org.apache.solr.common.SolrException:
On Fri, Oct 11, 2013 at 11:08 AM, Shawn Heisey s...@elyograg.org wrote:
On 10/11/2013 8:17 AM, Christopher Gross wrote:
Is there a spot in a Solr configuration that I can set this up to use
HTTPS?
From what I can tell, not yet.
https://issues.apache.org/jira/browse/SOLR-3854
https
In 3.x Solr (and earlier) I was able to create a new xslt doc in the
conf/xslt directory and immediately start using it.
In my 4.1 setup, I have:
<queryResponseWriter name="xslt" class="solr.XSLTResponseWriter">
  <int name="xsltCacheLifetimeSeconds">5</int>
</queryResponseWriter>
But after that small
I've been trying out Solr 4 -- I was able to get it working with 3
instances of Tomcat on the same box (different ports), and 5 Zookeeper
nodes on that box as well. I've started to get my production layout going,
but I can't seem to get the Solr to replicate among the nodes.
I can see that the
that and be explicit if
it's guessing wrong. If you have nodes on different machines, you don't
want it to be localhost.
Next, look at the logs. They should give a clue why the replicas can't
recover from the leader.
- Mark
On Feb 27, 2013, at 8:25 AM, Christopher Gross cogr...@gmail.com wrote
in the zk host string initially. That might make
it easier to track down why it won't connect. It's tough to diagnose
because the root exception is being swallowed - it's likely a connect to zk
failed exception though.
- Mark
On Jan 10, 2013, at 1:34 PM, Christopher Gross cogr...@gmail.com
that
you might have omit positions on the region field?
-- Jack Krupansky
-Original Message- From: Christopher Gross
Sent: Wednesday, November 07, 2012 7:15 AM
To: solr-user
Subject: Matching an exact phrase in a text field
I have this as my text field:
fieldType name=text class
I have a keyword field type that I made:
<fieldType name="keyword" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
I'm running Solr 3.4. The past 2 months I've been getting a lot of
write.lock errors. I switched to the simple lockType (and made it
clear the lock on restart), but my index is still locking up a few
times a week.
I can't seem to determine what is causing the locks -- does anyone out
there have
for the people field?
-- Jack Krupansky
-Original Message- From: Christopher Gross
Sent: Tuesday, June 12, 2012 11:05 AM
To: solr-user
Subject: Different sort for each facet
In Solr 3.4, is there a way I can sort two facets differently in the same
query?
If I have:
http
In Solr 3.4, is there a way I can sort two facets differently in the same query?
If I have:
http://mysolrsrvr/solr/select?q=*:*&facet=true&facet.field=people&facet.field=category
is there a way that I can sort people by the count and category by the
name all in one query? Or do I need to do that
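This can be done in one query with per-field facet parameter overrides of the form f.&lt;field&gt;.facet.sort, where "count" sorts by facet count and "index" sorts lexically. A sketch of the parameter string (server and field names taken from the question above):

```java
// Sketch: one query string that sorts the "people" facet by count and the
// "category" facet lexically, using per-field f.<field>.facet.sort overrides.
public class FacetSortParams {
    public static String build() {
        return "q=*:*&facet=true"
             + "&facet.field=people&f.people.facet.sort=count"
             + "&facet.field=category&f.category.facet.sort=index";
    }

    public static void main(String[] args) {
        System.out.println(build());
    }
}
```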
My index has a multi-valued String field called tag that is used to
store a category/keyword for the item the record is about. I made a
faceted query in order to find out all the different tags that are
stored in the index:
be tokenized. Maybe look at alternatives that don't tokenize
fields. Just a guess here though. Good luck.
On Fri, 13 Jan 2012 09:04:00 -0500, Christopher Gross cogr...@gmail.com
wrote:
My index has a multi-valued String field called tag that is used to
store a category/keyword for the item
I'm getting different results running these queries:
http://localhost:8080/solr/select?q=*:*&fq=source:wiki&fq=tag:car&sort=score+desc,dateSubmitted+asc&fl=title,score,dateSubmitted&rows=100
http://wiki.apache.org/nutch/Crawl
This script no longer works. See:
echo - Index (Step 5 of $steps) -
$NUTCH_HOME/bin/nutch index crawl/NEWindexes crawl/crawldb crawl/linkdb \
crawl/segments/*
The index call doesn't exist, so what does this line get replaced
with? Is there an
Ha, sorry Hoss. Thought i hit user@nutch, gmail did the replace and I
wasn't paying attention.
-- Chris
On Fri, Dec 16, 2011 at 2:46 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:
: http://wiki.apache.org/nutch/Crawl
:
: This script no longer works. See:
If you have a question
. It shouldn't be stored though (unless you just want to verify
for debugging).
-Yonik
http://www.lucidimagination.com
On Fri, Oct 28, 2011 at 9:35 AM, Christopher Gross cogr...@gmail.com wrote:
Hi Yonik.
I never made a dynamicField definition for _latLon ... I was following
the examples
I'm using the geohash field to store points for my data. When I do a
bounding box like:
localhost:8080/solr/select?q=point:[-45,-80%20TO%20-24,-39]
I get a data point that falls outside the box: (-73.03358 -50.46815)
The Spatial Search (http://wiki.apache.org/solr/SpatialSearch)
, Christopher Gross cogr...@gmail.com wrote:
Is there a reason
See:
http://wiki.apache.org/solr/SolrConfigXml
The example in the wiki is:
<autoCommit>
  <maxDocs>1</maxDocs> <!-- maximum uncommitted docs before autocommit is triggered -->
  <maxTime>86000</maxTime> <!-- maximum time (in ms) after adding a doc before an autocommit is triggered -->
Sorry, lack of sleep made me see an extra 0 in there.
I haven't had this issue -- but after every batch of items that I post
into Solr with SolrJ I run the commit() routine on my instance of the
CommonsHttpSolrServer, so they show up immediately. You could try
altering your code to do that, or
I'm using Solr 3.3, trying to run an XSLT translation on the results
of a query. The xsl file worked just fine for Solr 1.4.1, but I'm
having trouble with the newer version.
The root cause is:
javax.xml.transform.TransformerException: Extra illegal tokens:
'contains', '(', '$', 'posted', ',',
://www.w3.org/TR/xpath-functions/#func-not
So, not(contains(...)) rather than not contains(...) should presumably do
the trick.
-Original Message-
From: Christopher Gross [mailto:cogr...@gmail.com]
Sent: Thursday, August 18, 2011 7:44 AM
To: solr-user
Subject: XSLT Exception
I'm
records it was unable to send, and then pull them
out in order to try running them again later? Any insight that anyone has
would be greatly appreciated.
Thanks!
-- Christopher Gross
, Christopher Gross cogr...@gmail.comwrote:
Hi all.
I have designed a synchronizer that goes out to various databases,
extracts some data, does some processing, and then uses the
StreamingUpdateSolrServer to send the records to a Solr index. When
everything is up, it works just fine.
Now I'm
I'm trying to use Solr to store information from a few different sources in
one large index. I need to create a unique key for the Solr index that will
be unique per document. If I have 3 systems, and they all have a document
with id=1, then I need to create a uniqueId field in my schema that
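One simple approach (a sketch, not necessarily what was recommended later in the thread): prefix the source system's name onto its local id, so documents with id=1 from different systems can never collide.

```java
// Sketch: build a cross-system unique key by combining the source system
// name with the document's local id (names here are hypothetical).
public class UniqueId {
    public static String build(String source, String localId) {
        return source + "-" + localId;
    }

    public static void main(String[] args) {
        // systemA-1 and systemB-1 stay distinct even though both local ids are "1"
        System.out.println(build("systemA", "1"));
        System.out.println(build("systemB", "1"));
    }
}
```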
again!
-- Chris
On Tue, Nov 9, 2010 at 10:47 AM, Ken Stanley doh...@gmail.com wrote:
On Tue, Nov 9, 2010 at 10:39 AM, Christopher Gross cogr...@gmail.com
wrote:
Thanks Hoss, I'll look into that!
-- Chris
On Tue, Nov 9, 2010 at 1:43 PM, Chris Hostetter hossman_luc...@fucit.org wrote:
: one large index. I need to create a unique key for the Solr index that
will
: be unique per document. If I have 3 systems, and they all have a
document
: with
in!
Thanks!
-- Chris
On Thu, Sep 30, 2010 at 4:40 PM, Christopher Gross cogr...@gmail.com wrote:
I have also tried using SolrJ to hit my index, and I get this error:
2010-09-30 16:23:14,406 [pool-2-thread-1] DEBUG
org.apache.commons.httpclient.params.DefaultHttpParams - Set parameter
I'm writing some code that pushes data into a Solr instance. I have my
Tomcat (5.5.28) set up to use 2 indexes, I'm hitting the second one for
this.
I try to issue the basic command to clear out the index
(<delete><query>*:*</query></delete>), and I get the error posted below
back.
Does anyone have an
Where can I get SolrJ? The wiki makes reference to it, and says that it is
a part of the Solr builds that you download, but I can't find it in the jars
that come with it. Can anyone shed some light on this for me?
Thanks!
-- Chris
Now I feel dumb, it was right there. Thanks! :)
-- Chris
On Thu, Sep 30, 2010 at 3:04 PM, Allistair Crossley a...@roxxor.co.uk wrote:
it's in the dist folder with the name provided by the wiki page you refer
to
On Sep 30, 2010, at 3:01 PM, Christopher Gross wrote:
I have also tried using SolrJ to hit my index, and I get this error:
2010-09-30 16:23:14,406 [pool-2-thread-1] DEBUG
org.apache.commons.httpclient.params.DefaultHttpParams - Set parameter
http.useragent = Jakarta Commons-HttpClient/3.0
2010-09-30 16:23:14,406 [pool-2-thread-1] DEBUG
Hi Andy!
I configured this a few days ago, and found a good resource --
http://wiki.apache.org/solr/MultipleIndexes
That page has links that will give you the instructions for setting up
Tomcat, Jetty and Resin. I used the Tomcat ones the other day, and it gave
me everything that I needed to
, 2010 at 4:54 PM, Christopher Gross cogr...@gmail.com wrote:
Thanks Jak! That was just what I was looking for!
-- Chris
On Mon, Sep 20, 2010 at 4:25 PM, Jak Akdemir jakde...@gmail.com wrote:
It is quite easy to modify its default value. Solr is using default
logging values that started
I'm running an old version of Solr (1.2) on Apache Tomcat 5.5.25.
Right now the logs all go to the catalina.out file, which has been
growing rather large. I have to shut down the servers periodically to
clear out that logfile because it keeps getting large and giving disk
space warnings.
I've
can observe changes from http://localhost:8080/solr/admin/logging
or simply ~/admin/logging pages.
Details are here:
http://wiki.apache.org/tomcat/Logging_Tutorial
http://tomcat.apache.org/tomcat-6.0-doc/logging.html
Jak
On Mon, Sep 20, 2010 at 10:32 PM, Christopher Gross cogr
with a *:*, then the “numFound” attribute of the result
element should give you the rows to fetch by a 2nd request.
On Thu, Sep 16, 2010 at 4:49 PM, Christopher Gross cogr...@gmail.com
wrote:
That will still just return 10 rows for me. Is there something else in
the configuration of solr to have it return all
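The two-request pattern described above (a cheap first query to read numFound, then a second query asking for exactly that many rows) can be sketched like this; the sample XML response and URLs are made up for illustration.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class FetchAll {
    // Pull numFound out of the XML response of a cheap first query (rows=0).
    public static long numFound(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            Element result = (Element) doc.getElementsByTagName("result").item(0);
            return Long.parseLong(result.getAttribute("numFound"));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Made-up response from e.g. .../select?q=*:*&rows=0
        String sample = "<response><result name=\"response\" numFound=\"437\" start=\"0\"/></response>";
        long total = numFound(sample);
        // The second request asks for exactly numFound rows.
        System.out.println("http://localhost:8080/solr/select?q=*:*&rows=" + total);
    }
}
```

Deep result sets fetched this way can be expensive; paging in smaller chunks via start/rows is usually kinder to the server.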
:23 AM, Christopher Gross wrote:
@Markus Jelsma - the wiki confirms what I said before:
rows
This parameter is used to paginate results from a query. When
specified, it indicates the maximum number of documents from the
complete result set to return to the client for every request. (You
can
I have some queries that I'm running against a solr instance (older,
1.2 I believe), and I would like to get *all* the results back (and
not have to put an absurdly large number as a part of the rows
parameter).
Is there a way that I can do that? Any help would be appreciated.
-- Chris
That will still just return 10 rows for me. Is there something else in
the configuration of solr to have it return all the rows in the
results?
-- Chris
On Thu, Sep 16, 2010 at 4:43 PM, Shashi Kant sk...@sloan.mit.edu wrote:
q=*:*
On Thu, Sep 16, 2010 at 4:39 PM, Christopher Gross cogr
,
if you're storing these as text you may just be losing the negative sign
which would
lead to all sorts of interesting failures..
Best
Erick
On Tue, May 11, 2010 at 9:53 AM, Christopher Gross cogr...@gmail.com
wrote:
I've stored some geo data in SOLR, and some of the coordinates
I changed my schema to use the tdouble that the link above describes:
<fieldType name="tdouble" class="solr.TrieDoubleField" precisionStep="8" omitNorms="true" positionIncrementGap="0"/>
and I'm able to do the search correctly now.
-- Chris
On Tue, May 11, 2010 at 11:37 AM, Christopher Gross cogr