Is there any way to make Solr not throw an exception when unknown field
names are submitted as part of an otherwise valid document? I.e. I
have a field named 'foo' in my document but it's not in my schema. For
various bookkeeping / tracking purposes I need to annotate the documents
with extra
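(Not an answer from the thread, but for context: one common way to handle undeclared fields, assuming a schema.xml along the lines of the stock example, is a catch-all dynamic field backed by an "ignored" field type, so unknown fields are silently dropped instead of raising an exception.)

```xml
<!-- Sketch, assuming the stock example schema.xml conventions: -->
<fieldType name="ignored" class="solr.StrField"
           indexed="false" stored="false" multiValued="true"/>
<dynamicField name="*" type="ignored"/>
```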
does solr support date range searching? i've tried all the examples on
the lucene site as well as using the solr response format and a few
others that seemed nifty but so far I always get query parsing errors.
i know i can easily convert the dates to ints and do ranges that way but
all the
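(For context: Solr does support range queries on date fields, but it expects the full ISO-8601 UTC form, which is the usual cause of parse errors with the Lucene-site examples. A minimal sketch, with 'timestamp' as a hypothetical field name:)

```python
from urllib.parse import urlencode

# Solr date range queries need complete ISO-8601 UTC timestamps,
# e.g. 2007-01-01T00:00:00Z -- partial dates cause parse errors.
def date_range_query(field, start, end):
    return f"{field}:[{start} TO {end}]"

q = date_range_query("timestamp", "2007-01-01T00:00:00Z", "2007-12-31T23:59:59Z")
params = urlencode({"q": q})  # ready to append to /select?
```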
I'm trying to set up a system to have very low index latency (1-2
seconds) and one of the javadocs intrigued me:
DirectUpdateHandler2 implements an UpdateHandler where documents are
added directly to the main Lucene index as opposed to adding to a
separate smaller index
The plain
update handlers
On 5/10/07, Will Johnson [EMAIL PROTECTED] wrote:
I'm trying to setup a system to have very low index latency (1-2
seconds) and one of the javadocs intrigued me:
DirectUpdateHandler2 implements an UpdateHandler where documents are
added directly to the main Lucene index
: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Yonik
Seeley
Sent: Thursday, May 10, 2007 9:07 AM
To: solr-user@lucene.apache.org
Subject: Re: fast update handlers
On 5/10/07, Will Johnson [EMAIL PROTECTED] wrote:
I guess I was more concerned with doing the frequent commits and how
I've needed similar logged information recently and I looked at the code
and had a few questions:
Why does SolrCore.setResponseHeaderValues(...) set the QTime (and other
response header options) instead of having it as a function of
RequestHandlerBase? If things were tracked in the
Adding entries to RequestHandlerBase.getStatistics() sounds like it
might be a reasonable compromise; backwards compatibility is kept in
place but everything from now on gets the added advantages of more
tracking. So far I've added (because I need)
avgTimePerRequest
avgRequestsPerSecond
I agree
Possible solutions inline:
1) I use facets and want to return the facets for all documents as the
starting point of the user interface. In other words, I want to /select
the facet counts for about 10 facets (like states for example) for all
documents without having to do a search. Is this
3) Once the user saves the list, I want them to be able to do further
searches by doing a negative union with the set of ids they already
saved. So for example, if they already saved 1000 ids into one of
their
lists, they would need to be able to get results from Solr that match
their query
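(A minimal client-side sketch of the negative union, with hypothetical field names: negate the saved ids in a filter query so the main query stays cacheable. Note that very large id lists can hit Solr's maxBooleanClauses limit, 1024 by default.)

```python
# Exclude already-saved ids by appending a negated fq clause.
def exclude_saved(query, saved_ids):
    excluded = " OR ".join(str(i) for i in saved_ids)
    return {"q": query, "fq": f"-id:({excluded})"}

params = exclude_saved("laptop", [101, 102, 103])
```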
-
From: Martin Grotzke [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 22, 2007 10:43 AM
To: solr-user@lucene.apache.org
Subject: RE: Interesting Practical Solr Question
On Tue, 2007-05-22 at 10:07 -0400, Will Johnson wrote:
Possible solutions inline:
1) I use facets and want to return the facets
I think I had the same problem (the same error at least) and submitted a
patch. The patch adds a new config option to use the NIO locking
facilities instead of the default Lucene locking. In the ~week since, I
haven't seen the issue after applying the patch (YMMV).
Has anyone done anything interesting to preserve display values for
field names? I.e. my users would like to see
Download Speed (MB/sec): 5
As opposed to:
ds:5
There are options for doing fancy encoding of field names but those seem
less than ideal. What I'd really like to do is at
display values
Will Johnson wrote:
Has anyone done anything interesting to preserve display values for
field names? I.e. my users would like to see
Download Speed (MB/sec): 5
As opposed to:
ds:5
The general model has been to think of solr like SQL... it is only the
database - display
The code in http://solrstuff.org/svn/solrj/ is very stable, works with
almost all features for both searching and indexing, and will be moving
into the main distribution soon as the standard Java client library.
- will
-Original Message-
From: Martin Grotzke [mailto:[EMAIL PROTECTED]
tried using that client, but I didn't get any good results while
searching for words with special characters. I have also searched for
documentation for that client, but didn't find any.
Does anyone know where to find documentation concerning the java client?
On 14/06/07, Will Johnson [EMAIL
If you create a SolrServer and start adding documents you are updating
the index with new documents. If your ids happen to collide with
ones in the index then you will overwrite those documents, but that's the
only way you could end up with fewer than existing docs + docs added.
- will
is in the index prior to running the rest of the tests (namely nothing).
It's not at all required and probably not really recommended as a best
practice. :)
- will
-Original Message-
From: Will Johnson [mailto:[EMAIL PROTECTED]
Sent: Tuesday, June 26, 2007 4:16 PM
To: solr-user@lucene.apache.org
one thing to keep in mind: it's typically not a good idea to have the
constraint set of a facet change just because some other constraint was
added to the query -- individual constraints might disappear because
they no longer apply, but it can be very disconcerting to a user
when options change
There are many ways to do this but I drop the following into the
solr.war/WEB-INF/classes/logging.properties file:
handlers = org.apache.juli.FileHandler, java.util.logging.ConsoleHandler
# Handler specific properties.
# Describes
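(The snippet above is cut off; a fuller sketch of such a JULI-style logging.properties, with example handler levels and paths -- directory and prefix are assumptions, adjust to your container:)

```properties
handlers = org.apache.juli.FileHandler, java.util.logging.ConsoleHandler

# File handler: where and how verbosely to log (example values).
org.apache.juli.FileHandler.level = FINE
org.apache.juli.FileHandler.directory = ${catalina.base}/logs
org.apache.juli.FileHandler.prefix = solr.

# Console handler: keep stdout quiet.
java.util.logging.ConsoleHandler.level = WARNING
```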
I'm pretty sure that's a BOM: http://en.wikipedia.org/wiki/Byte_Order_Mark
The web.xml in src/webapp also seems to have it there so I would assume that
any/all builds from the solr kit would have the same problem no matter where
you downloaded it from.
It looks like the BOM got checked in
TestJettyLargeVolume.java
Description: Binary data
We were doing some performance testing for the updating aspects of Solr and ran into what seems to be a large problem. We're creating small documents with an id and one field of one term only, submitting them in batches of 200 with commits every
one other note. the errors pop up when running against the 1.3 trunk
but do not appear to happen when run against 1.2.
- will
On 9/19/07, Will Johnson [EMAIL PROTECTED] wrote:
we were doing some performance testing for the updating aspects of solr and
ran into what seems to be a large
With the new/improved value source functions it should be pretty easy to
develop a new best practice. You should be able to pull in the lat/lon
values from valuesource fields and then do your greater circle calculation.
- will
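(A sketch of the great-circle calculation alluded to above, using the haversine formula; inputs in decimal degrees, result in kilometers. This is the client-side math only, not the Solr value-source wiring.)

```python
import math

# Haversine great-circle distance between two lat/lon points.
def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + \
        math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))
```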
-Original Message-
From: Lance Norskog [mailto:[EMAIL
Check out:
https://issues.apache.org/jira/browse/SOLR-264
- will
-Original Message-
From: Jonathan Ariel [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 30, 2007 4:33 PM
To: solr-user@lucene.apache.org
Subject: Random return
Hi!
I need to query solr and then return a random document
key = sometimesUniqueField + "_" + otherTimesUniqueField;
If one of them is always guaranteed to be unique you should be fine.
- will
-Original Message-
From: Dave C. [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 13, 2007 2:07 PM
To: solr-user@lucene.apache.org
Subject: Multiple
If you want any letter and any possible substring you might be better off
breaking every word into single letters with special tokens between words:
ie:
the quick brown fox
Becomes
t h e ZZ q u i c k ZZ b r o w n ZZ f o x
then you can do all the single letter searches and multi letter searches
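(A sketch of the letter-splitting scheme described above: every word becomes single-letter tokens, with a sentinel token ("ZZ") marking word boundaries. Shown as a plain helper function rather than a Solr analyzer.)

```python
# Split each word into single letters, separating words with a sentinel
# so multi-letter and single-letter substring searches both work.
def letterize(text, sentinel="ZZ"):
    words = text.split()
    return f" {sentinel} ".join(" ".join(w) for w in words)
```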
Usually you do something like: (assuming this is in a rdbms)
SELECT sku.id as skuid, sku.name as skuname, item.name as itemname,
location.name as locationname
FROM sku, item, location
WHERE sku.item = item.id AND sku.location = location.id
Then you can search on any part of the 'flat' record and
://issues.apache.org/jira/browse/SOLR-549 SOLR-549 which is
cross-container but (alas) requires the code to change and introduces (yet
another) logging configuration convention.
Henri
Will Johnson-2 wrote:
Henri,
There are some bridges out there but none had a version number 0.1. I
Jan Hoydal / Otis,
First off, Thanks for mentioning us. We do use some utility functions from
SOLR but our index engine is built on top of Lucene only, there are no Solr
cores involved. We do have a JOIN operator that allows us to perform
relational searches while still acting like a search
Hi all,
Can anyone give me some advice on breaking a document up and indexing it
by access control lists. What we have are xml documents that are
transformed based on the user viewing it. Some users might see all of
the document, while other may see a few fields, and yet others see
nothing at
The XML parser is probably not threadsafe but is being reused concurrently by
multiple post threads resulting in these exceptions. The observed 'randomness'
of the errors would be due to the unpredictable nature of the race condition
between threads. The reason you don't see this with smaller
So I just ran into this bug:
https://issues.apache.org/jira/browse/SOLR-470
and read about this related one:
https://issues.apache.org/jira/browse/SOLR-544
Here is the relevant trace:
Apr 22, 2008 10:59:01 PM org.apache.solr.common.SolrException log
SEVERE: java.lang.RuntimeException:
to be something different than SOLR-470 and SOLR-544 since the
format seems to be accepted at indexing, and is consistent in the index, but is
still not accepted at query time.
Anyone have a suggestion?
Thanks,
Brian Johnson
- Original Message
From: Brian Johnson [EMAIL PROTECTED]
To: solr
: termfrequency
: 2008-04-24T09:03:53Z  11076
: 2008-04-24T09:03:55Z  10036
i'm assuming 11076 and 10036 are the term frequencies?
Yes - the cut-n-paste didn't do a good job separating the columns
: This now seems to be something different than SOLR-470 and SOLR-544
: since the format seems to be
Unfortunately that data set is long gone, but I can say that I am quite sure
the data was consistently sent to Solr with 3 digits of millis when I provided
the data in the documents. I confirmed this using luke and the data was
consistent, but the exception persisted. I looked into the
are asking about the language/syntax for field
specification. Could/should you not use the ${fifi} syntax? We already use
that in solrconfig.xml, for example.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: Brian Johnson [EMAIL PROTECTED]
To: solr
FWIW - I have no problem with the change.
Thanks,
Brian
- Original Message
From: Walter Underwood [EMAIL PROTECTED]
To: solr-user@lucene.apache.org
Sent: Friday, June 13, 2008 11:38:27 AM
Subject: Re: Seeking Feedback: non back compat for Java API of 3
FilterFactories in 1.3?
We use
Greetings,
I am new to Solr, but have gotten as far as successfully indexing
documents both by sending XML describing the document and by sending the
document itself using update/extract. What I want to do now is, in
effect, do both of these on each of my documents. I want to be able to
Hello,
I'm customizing my XML response with the XSLTResponseWriter using
wt=xslt&tr=transform.xsl. Because I have a few use-cases to support, I
wanted to break up the common bits and import/include them from multiple top-
level XSLT files, but it appears that the base directory of the
Hi all,
I use Lucene's NumericField to index the price field, and query with
solr.TrieDoubleField.
When I use price:[1 TO 5000] to search, it returns all results whose
price is between 1 and 5000,
but the price value returned is
ERROR:SCHEMA-INDEX-MISMATCH,stringValue=2000.0
anybody know why?
--
Yonik Seeley-2 wrote:
On Tue, Dec 22, 2009 at 11:41 PM, johnson hong
hong.jinch...@goodhope.net wrote:
I use Lucene's NumericField to index price field,And query with
solr.TrieDoubleField.
When i use price:[1 TO 5000] to search,it can return all results that
price is between 1 and 5000
Hi all,
I found a problem with distributed search.
When I use ?q=keyword&start=0&rows=20 to query across
the distributed search, it returns numFound=181; then when I
change the start param from 0 to 100, it returns numFound=131.
Why does it return a different numFound for the same query?
--
View
Yonik Seeley-2 wrote:
On Thu, Dec 31, 2009 at 2:29 AM, johnson hong
hong.jinch...@goodhope.net wrote:
Hi all,
I found a problem with distributed search.
When I use ?q=keyword&start=0&rows=20 to query across
the distributed search, it returns numFound=181; then when I
change the start
thanks, that's what I had thought. Wasn't sure if there was a benefit
either way.
On Fri, Dec 16, 2011 at 3:29 PM, Mark Miller markrmil...@gmail.com wrote:
On Fri, Dec 16, 2011 at 8:14 AM, Jamie Johnson jej2...@gmail.com wrote:
What is the most appropriate way to configure Solr when deploying
I've seen in the solr faceting overview that it is possible to sort
either by count or lexicographically, but is there a way to sort so
the lowest counts come back first?
Sekiguchi k...@r.email.ne.jp wrote:
(11/12/29 5:50), Jamie Johnson wrote:
I've seen in the solr faceting overview that it is possible to sort
either by count or lexicographically, but is there a way to sort so
the lowest counts come back first?
As far as I know, no. What is your use case?
koji
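(Since Solr's facet.sort only offered count-descending or index order at the time, one workaround is re-sorting on the client. A sketch, assuming the "flat" name/value facet_fields response shape; note this is only meaningful with facet.limit=-1, since otherwise the lowest counts have already been truncated server-side.)

```python
# Re-sort a facet_fields entry ascending by count.
# Input uses Solr's flat style: [term1, count1, term2, count2, ...]
def ascending_facets(facet_fields, field):
    flat = facet_fields[field]
    pairs = list(zip(flat[::2], flat[1::2]))
    return sorted(pairs, key=lambda tc: tc[1])

counts = ascending_facets({"state": ["CA", 9, "NY", 3, "TX", 5]}, "state")
```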
Thanks Hoss, I'll take a look at this and see if i can understand this.
On Wed, Dec 28, 2011 at 9:44 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:
: I've seen in the solr faceting overview that it is possible to sort
: either by count or lexicographically, but is there a way to sort so
I'm trying to figure out a way to execute a query which would allow me
to say there were x documents over this period of time with type a, y
documents over the same period of time with type b and z documents
over the same period of time with type c. I was thinking of doing
this using range
I'm not sure that will give me what I want, most likely because I wasn't
clear about what I'm looking for. I would like to take this information and
populate a chart so you could see how many documents were created over
time of a particular type. So basically I need the range queries so I
can populate
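(One way to get per-type counts over time is a range facet per type; field names and the 30-day window here are hypothetical. A sketch of the parameter set, one request per type:)

```python
# Build Solr range-facet params counting docs of one type per day.
def range_facet_params(doc_type):
    return {
        "q": f"type:{doc_type}",
        "rows": 0,                       # we only want the facet counts
        "facet": "true",
        "facet.range": "created_dt",
        "facet.range.start": "NOW/DAY-30DAYS",
        "facet.range.end": "NOW/DAY",
        "facet.range.gap": "+1DAY",
    }

p = range_facet_params("a")
```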
I'm trying to determine when it is appropriate to use the
solr.ReversedWildcardFilterFactory, specifically if I have a field
content of type text (from default schema) which I want to be able to
search with leading wildcards do I need to index this information into
both a text field and a text_rev
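(For reference, a sketch of a text_rev-style type along the lines of the stock schema: ReversedWildcardFilterFactory with withOriginal="true" indexes both the original and the reversed token, so a single field can serve both leading- and trailing-wildcard queries, at the cost of a larger index; the tuning attributes shown are the stock-schema defaults, adjust as needed.)

```xml
<fieldType name="text_rev" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.ReversedWildcardFilterFactory" withOriginal="true"
            maxPosAsterisk="3" maxPosQuestion="2" maxFractionAsterisk="0.33"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```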
SaaS for Solr -
http://sematext.com/spm/solr-performance-monitoring/index.html
From: Jamie Johnson jej2...@gmail.com
To: solr-user@lucene.apache.org
Sent: Wednesday, January 25, 2012 12:03 PM
Subject: Cluster Resizing question
Is this the JIRA that I should
be thrown away. Thus the reason
I'd like to understand what the plans are for this in Solr and
possibly start contributing to this development, assuming it meets my
timelines.
Any thoughts/comments are greatly appreciated.
On 1/25/12, Jamie Johnson jej2...@gmail.com wrote:
Thanks Otis. I have been
I just want to verify some of the features in regards to SolrCloud
that are now on Trunk
documents added to the cluster are automatically distributed amongst
the available shards (I had seen that Yonik had ported the Murmur
hash, but I didn't see that on trunk, what is being used and where can
I
, Yonik Seeley
yo...@lucidimagination.com wrote:
On Fri, Jan 27, 2012 at 11:46 PM, Jamie Johnson jej2...@gmail.com wrote:
I just want to verify some of the features in regards to SolrCloud
that are now on Trunk
documents added to the cluster are automatically distributed amongst
the available
.
On Sat, Jan 28, 2012 at 2:46 PM, Yonik Seeley
yo...@lucidimagination.com wrote:
On Sat, Jan 28, 2012 at 3:45 PM, Jamie Johnson jej2...@gmail.com wrote:
Second question, I know there are discussion about storing the shard
assignments in ZK (i.e. shard 1 is responsible for hashed values
between 0
I'd like to use both the ReversedWildcardFilterFactory and
PorterStemFilterFactory on a text field that I have, I'd like to avoid
stemming the reversed fields and would also like to avoid reversing
the stemmed fields. My original thought was to have the
ReversedWildcardFilterFactory higher in the
Looking closer I think I asked the wrong question, please disregard and I
will start a new chain with that question
On Friday, February 3, 2012, Jamie Johnson jej2...@gmail.com wrote:
Is it possible to have multiple index analysis chains on a single field?
I was recently walking through schema.xml and noticed triedate vs date
and a note that triedate should be considered instead. I believe I
understand the basic principle behind triedate but is there an
analysis that exists which shows how much bigger an index would be if
triedate were used vs
Mark,
is the recommendation now to have each solr instance be a separate core in
solr cloud? I had thought that the core name was by default the collection
name? Or are you saying that although they have the same name they are
separate because they are in different JVMs?
On Wednesday, February 8,
instances per machine
On Wednesday, February 8, 2012, Mark Miller markrmil...@gmail.com wrote:
On Feb 8, 2012, at 9:52 PM, Jamie Johnson wrote:
In solr cloud what is a better approach / use of resources having
multiple
cores on a single instance or multiple instances with a single core? What
So I think the change I made should still be done, but the issue was
on my end missing the '' surrounding the url. After changing that
things are moving again.
On Thu, Feb 9, 2012 at 9:56 AM, Jamie Johnson jej2...@gmail.com wrote:
This morning I pulled the latest code from trunk and was trying
I'm looking at the latest code on trunk and it seems as if the
zookeeper view does not work. When trying to access the information I
get the following in the log
2012-02-09 10:28:49.030:WARN::/solr/zookeeper.jsp
java.lang.NullPointerException
at
done
https://issues.apache.org/jira/browse/SOLR-3117
On Thu, Feb 9, 2012 at 11:02 AM, Mark Miller markrmil...@gmail.com wrote:
On Feb 9, 2012, at 10:14 AM, Jamie Johnson wrote:
So I think the change I made should still be done
If you create a JIRA issue, I'd be happy to pop it in.
- Mark
for this Solr
Core. Please try connecting to an alternate zookeeper address.}
Does the new interface support multiple cores? Should the old
interface require that defaultCoreName be set?
On Thu, Feb 9, 2012 at 10:29 AM, Jamie Johnson jej2...@gmail.com wrote:
I'm looking at the latest code on trunk
I just ran a test with a very modest cluster (exactly the same as
http://outerthought.org/blog/491-ot.html). I then indexed 10,000
documents into the cluster. From what I can tell everything worked
properly but I'm seeing the following errors in the logs. I'm
randomly choosing the solr instance
:
Is that the entire stack trace - no other exception logged?
On Feb 9, 2012, at 2:44 PM, Jamie Johnson wrote:
I just ran a test with a very modest cluster (exactly the same as
http://outerthought.org/blog/491-ot.html). I then indexed 10,000
documents into the cluster. From what I can tell
per SOLR-2765 we can add roles to specific cores such that it's
possible to give custom roles to solr instances, is it possible to
specify this when adding a core through curl
'http://host:port/solr/admin/cores...'?
https://issues.apache.org/jira/browse/SOLR-2765
:30 PM, Jamie Johnson jej2...@gmail.com wrote:
I have the need to take user input and index it in a unique fashion,
essentially the value is some string (say abcdefghijk) and needs to
be converted into a set of tokens (say 1 2 3 4). I currently have
implemented a custom TokenFilter to do
PM, Jamie Johnson jej2...@gmail.com wrote:
Thanks Robert, I'll take a look there. Does it sound like I'm on the
right the right track with what I'm implementing, in other words is a
TokenFilter appropriate or is there something else that would be a
better fit for what I've described?
I can't
field:1 OR field:2 OR field:3 OR field:4. Is
there something in the tokenizer that needs to be set for this to
generate this type of query or is it something in the query parser?
On Thu, Feb 9, 2012 at 9:02 PM, Robert Muir rcm...@gmail.com wrote:
On Thu, Feb 9, 2012 at 8:54 PM, Jamie Johnson jej2
Think I figured it out, the tokens just needed the same position attribute.
On Thu, Feb 9, 2012 at 10:38 PM, Jamie Johnson jej2...@gmail.com wrote:
Thanks Robert, worked perfect for the index side of the house. Now on
the query side I have a similar Tokenizer, but it's not operating
quite
I know that the latest Solr Cloud doesn't use standard replication but
I have a question about how it appears to be working. I currently
have the following cluster state
{"collection1": {
  "slice1": {
    "JamiesMac.local:8501_solr_slice1_shard1": {
      "shard_id": "slice1",
      "state": "active",
, 2012, at 9:33 AM, Jamie Johnson wrote:
I know that the latest Solr Cloud doesn't use standard replication but
I have a question about how it appears to be working. I currently
have the following cluster state
{collection1:{
slice1:{
JamiesMac.local:8501_solr_slice1_shard1
Was there a fix recently to address sorting issues for Dates in solr
cloud? On my cluster I have a date field which when I sort across the
cluster I get incorrect order executing the following query I get
solr/select?distrib=true&q=paul&sort=datetime_dt%20desc&fl=datetime_dt
result name=response
that the issue is on my end, will report shortly.
On Fri, Feb 10, 2012 at 12:16 PM, Yonik Seeley
yo...@lucidimagination.com wrote:
On Fri, Feb 10, 2012 at 11:44 AM, Jamie Johnson jej2...@gmail.com wrote:
Was there a fix recently to address sorting issues for Dates in solr
cloud? On my cluster I have a date
here
because for the life of me I can't figure out what is causing this.
I'm doing all of the filtering inside of the process method in my
custom SearchComponent.
On Fri, Feb 10, 2012 at 12:41 PM, Jamie Johnson jej2...@gmail.com wrote:
This is an snapshot of the solrcloud branch from somewhere
work fine, but this isn't what appears to be
happening. For queries that are not distributed I don't see this
issue, only for distributed queries.
On Fri, Feb 10, 2012 at 2:23 PM, Jamie Johnson jej2...@gmail.com wrote:
It looks like everything works fine without my custom component, which
, Feb 10, 2012 at 2:48 PM, Jamie Johnson jej2...@gmail.com wrote:
So looking at query component it appears to sort the entire doc list
at the end of process, my component is defined after this query so the
doclist that I get should be sorted, right? To me this should mean
that I can remove items
don't
understand why it worked, what exactly is this doing?
On Fri, Feb 10, 2012 at 3:12 PM, Jamie Johnson jej2...@gmail.com wrote:
I'd like to look at the pseudo fields you're talking about (don't
really understand it right now), but need to get something working in
the short term. How do I go
Sorry for pinging this again, is more information needed on this? I
can provide more details but am not sure what to provide.
On Fri, Feb 10, 2012 at 10:26 AM, Jamie Johnson jej2...@gmail.com wrote:
Sorry, I shut down the full solr instance.
On Fri, Feb 10, 2012 at 9:42 AM, Mark Miller
, or active
in zookeeper?
On Feb 10, 2012, at 4:48 PM, Jamie Johnson wrote:
Sorry for pinging this again, is more information needed on this? I
can provide more details but am not sure what to provide.
On Fri, Feb 10, 2012 at 10:26 AM, Jamie Johnson jej2...@gmail.com wrote:
Sorry, I shut down
:8501/solr
how'd you resolve this issue?
On Fri, Feb 10, 2012 at 8:49 PM, Mark Miller markrmil...@gmail.com wrote:
On Feb 10, 2012, at 9:33 AM, Jamie Johnson wrote:
jamiesmac
Another note:
Have no idea if this is involved, but when I do tests with my linux box and
mac I run
, 2012, at 9:40 PM, Jamie Johnson wrote:
how'd you resolve this issue?
I was basing my guess on seeing JamiesMac.local and jamiesmac in your
first cluster state dump - your latest doesn't seem to mismatch like that
though.
- Mark Miller
lucidimagination.com
I didn't see anything in the logs, would it be an error?
On Sat, Feb 11, 2012 at 3:58 PM, Mark Miller markrmil...@gmail.com wrote:
On Feb 11, 2012, at 3:08 PM, Jamie Johnson wrote:
I wiped the zk and started over (when I switch networks I get
different host names and honestly haven't dug
for the recover keyword and see if there
are any interesting bits around that.
Meanwhile, I have dug up a couple issues around recovery and committed fixes
to trunk - still playing around...
On Feb 11, 2012, at 8:44 PM, Jamie Johnson wrote:
I didn't see anything in the logs, would
-data
-slice1_shard2
-data
if it matters I'm running everything from localhost, zk and the solr shards
On Mon, Feb 13, 2012 at 8:42 AM, Sami Siren ssi...@gmail.com wrote:
Do you have unique dataDir for each instance?
13.2.2012 14.30 Jamie Johnson jej2...@gmail.com kirjoitti:
Has there been any success in replicating this? I'm wondering if it
could be something with my setup that is causing the issue...
On Mon, Feb 13, 2012 at 8:55 AM, Jamie Johnson jej2...@gmail.com wrote:
Yes, I have the following layout on the FS
./bootstrap.sh
./example (standard example
).
- Mark
On Feb 14, 2012, at 9:10 AM, Jamie Johnson wrote:
Has there been any success in replicating this? I'm wondering if it
could be something with my setup that is causing the issue...
On Mon, Feb 13, 2012 at 8:55 AM, Jamie Johnson jej2...@gmail.com wrote:
Yes, I have the following
to determine that.
I'm investigating more and working on a test + fix. I'll file a JIRA issue
soon as well.
- Mark
On Feb 14, 2012, at 11:39 AM, Jamie Johnson wrote:
Thanks Mark, not a huge rush, just me trying to get to use the latest
stuff on our project.
On Tue, Feb 14, 2012 at 10:53 AM
around the recovery.
I'll keep working on tests in the meantime.
- Mark
On Feb 14, 2012, at 3:15 PM, Jamie Johnson wrote:
Sounds good, if I pull the latest from trunk and rerun will that be
useful or were you able to duplicate my issue now?
On Tue, Feb 14, 2012 at 3:00 PM, Mark Miller
)
at
org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:120)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:208)
On Tue, Feb 14, 2012 at 10:18 PM, Jamie Johnson jej2...@gmail.com wrote:
Doing so now, will let you know if I continue to see the same issues
On Tue, Feb 14, 2012
way of doing this?
On Wed, Feb 15, 2012 at 9:16 AM, Yonik Seeley
yo...@lucidimagination.com wrote:
On Wed, Feb 15, 2012 at 8:58 AM, Jamie Johnson jej2...@gmail.com wrote:
I would like to be able to facet based on the time of
day items are purchased across a date span. I was hoping that I could
that
explicitly, though). If the unique ID is included you could never reach
that score.
Just my 2 cents...
Chantal
On Wed, 2012-02-15 at 07:27 +0100, Jamie Johnson wrote:
Is there anyway with MLT to say get similar based on all fields or is
it always a requirement to specify the fields?
impact.
On Wed, Feb 15, 2012 at 9:30 AM, Jamie Johnson jej2...@gmail.com wrote:
I think it would if I indexed the time information separately. Which
was my original thought, but I was hoping to store this in one field
instead of 2. So my idea was I'd store the time portion as a
number
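(A sketch of that idea: derive a seconds-since-midnight integer from the purchase timestamp at index time, so time-of-day can be range-queried or faceted independently of the date. Field and format assumptions are mine.)

```python
from datetime import datetime

# Convert an ISO-8601 UTC timestamp to seconds since midnight,
# suitable for indexing into a separate numeric field.
def seconds_since_midnight(ts):
    dt = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")
    return dt.hour * 3600 + dt.minute * 60 + dt.second

tod = seconds_since_midnight("2012-02-15T09:30:15Z")
```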
Ok, great. Just wanted to make sure someone was aware. Thanks for
looking into this.
On Thu, Feb 16, 2012 at 8:26 AM, Mark Miller markrmil...@gmail.com wrote:
On Feb 14, 2012, at 10:57 PM, Jamie Johnson wrote:
Not sure if this is
expected or not.
Nope - should be already resolved
I am attempting to execute a query with the following parameters
q=*:*
distrib=true
facet=true
facet.limit=10
facet.field=manu
f.manu.facet.mincount=1
f.manu.facet.limit=10
f.manu.facet.sort=index
rows=10
When doing this I get the following exception
null
please ignore this, it has nothing to do with the faceting component.
I was able to disable a custom component that I had and it worked
perfectly fine.
On Thu, Feb 16, 2012 at 12:42 PM, Jamie Johnson jej2...@gmail.com wrote:
I am attempting to execute a query with the following parameters
q
experience. :)
Regards,
Em
Am 16.02.2012 19:23, schrieb Jamie Johnson:
please ignore this, it has nothing to do with the faceting component.
I was able to disable a custom component that I had and it worked
perfectly fine.
On Thu, Feb 16, 2012 at 12:42 PM, Jamie Johnson jej2...@gmail.com wrote
at 3:05 PM, Jamie Johnson jej2...@gmail.com wrote:
still digging ;) Once I figure it out I'll be happy to share.
On Thu, Feb 16, 2012 at 1:32 PM, Em mailformailingli...@yahoo.de wrote:
Hi Jamie,
nice to hear.
Maybe you can share in what kind of bug you ran, so that other
developers
,
"node_name": "JamiesMac.local:8502_solr",
"base_url": "http://JamiesMac.local:8502/solr"
On Thu, Feb 16, 2012 at 10:24 PM, Mark Miller markrmil...@gmail.com wrote:
Yup - deletes are fine.
On Thu, Feb 16, 2012 at 8:56 PM, Jamie Johnson jej2...@gmail.com wrote:
With solr-2358 being committed to trunk do deletes