That is not 100% true. I would think RDBMS and XML would be the most common
importers but the real flexibility is with the TikaEntityProcessor [1] that
comes w/ DIH ...
http://wiki.apache.org/solr/TikaEntityProcessor
I'm pretty sure it would be able to handle any type of serde (in the case of
I was playing around w/ Sqoop the other day; it's a simple Cloudera tool for
imports (mysql -> hdfs) @ http://www.cloudera.com/developers/downloads/sqoop/
It seems to me it would be pretty efficient to dump to HDFS and have
something like Data Import Handler be able to read from hdfs://
Are you using Ubuntu by any chance?
It's a somewhat common problem ...
@http://stackoverflow.com/questions/2854356/java-classpath-problems-in-ubuntu
I'm unsure if this has been resolved but a similar thing happened to me on a
recent VMware image in a dev environment. It worked everywhere
You should already get this out of the box ... just tack on a wt=json to the
params ie ...
http://localhost:8983/solr/select/?q=*%3A*&version=2.2&start=0&rows=10&indent=on&qt=tvrh&tv=true&tv.tf=true&tv.df=true&tv.positions=true&tv.offsets=true&wt=json
If you look @ /apache-solr-1.4.0/contrib/velocity/src/main
to restructure our
schema.
-Tim
On Sat, May 15, 2010 at 7:12 AM, Sascha Szott sz...@zib.de wrote:
Hi,
I'm not sure if debugQuery=on is a feasible solution in a production
environment, as generating such extra information requires a nontrivial
amount of computation.
-Sascha
Jon Baer
Does the standard debug component (?debugQuery=on) give you what you need?
http://wiki.apache.org/solr/SolrRelevancyFAQ#Why_does_id:archangel_come_before_id:hawkgirl_when_querying_for_.22wings.22
- Jon
On May 14, 2010, at 4:03 PM, Tim Garton wrote:
All,
I've searched around for help with
IIRC, I think what we ended up doing in a project was to use the
VelocityResponseWriter to write the JSON and set the echoParams to read the
handler setup (and looping through the variables).
In the template you can grab it w/ something like
$request.params.get('facet_fields') ... I don't
Does a sort=field5+desc on the query param not work?
- Jon
On Apr 29, 2010, at 9:32 AM, Doddamani, Prakash wrote:
Hi,
I am using the boost factor as below
<str name="qf">
field1^20.0 field2^5 field3^2.5 field4^.5
</str>
Where it searches first in field1 then field2 and
All that stuff happens in the JDBC driver associated w/ the DataSource so
probably not unless there is something which can be set in the Oracle driver
itself.
One thing that might have helped in this case is if readFieldNames() in
the JDBCDataSource dumped its return to the debug log
Thanks, Im looking @ the atomic broadcast messaging protocol of Zookeeper and
think I have found what I was looking for ...
- Jon
On Apr 28, 2010, at 11:27 PM, Yonik Seeley wrote:
On Wed, Apr 28, 2010 at 2:23 PM, Jon Baer jonb...@gmail.com wrote:
From what I understand Cassandra uses
Good question, +1 on finding answer, my take ...
Depending on how large the log files you are talking about are, it might be
better to do this w/ HDFS / Hadoop (and a script language like Pig) (or Amazon EMR)
http://developer.amazonwebservices.com/connect/entry.jspa?externalID=873
Theoretically
To follow up ... it seems dumping to Solr is common ...
http://highscalability.com/how-rackspace-now-uses-mapreduce-and-hadoop-query-terabytes-data
- Jon
On Apr 29, 2010, at 1:58 PM, Jon Baer wrote:
Good question, +1 on finding answer, my take ...
Depending on how large of log files you
Correct me if I'm wrong, but I think the problem here is that while there is a
fetchindex command in replication the handler and the master/slave setup
pertain to the core config.
For example for this to work properly the solr.xml configuration would need to
setup some type of global replication
You should end up w/ a file like conf/dataimport.properties @ full import
time, might be that it did not get written out?
- Jon
On Apr 28, 2010, at 3:05 PM, safl wrote:
Hello,
I'm just new on the list.
I searched a lot on the list, but I didn't find an answer to my question.
I'm
I would not use this layout, you are putting important Solr config files
outside onto the docroot (presuming we are looking @ the webapps folder) ...
here is my current Tomcat project (if it helps):
[507][jonbaer.MBP: tomcat]$ pwd
/Users/jonbaer/WORKAREA/SVN_HOME/my-project/tomcat
I don't think there is anything low level in Lucene that will specifically
output anything like lastOptimized() to you, since it can be setup a few ways.
Your best bet is probably adding a postOptimize hook and dumping it to log /
file / monitor / etc, probably something like ...
listener
Uggg I just got bit hard by this on a Tomcat project ...
https://issues.apache.org/jira/browse/SOLR-1238
Is there any way to get access to that RequestEntity w/o patching? Also are
there security implications w/ using the repeatable payloads?
Thanks.
- Jon
Hi,
It looks like Im trying to do the same thing in this open JIRA here ...
https://issues.apache.org/jira/browse/SOLR-975
I noticed in index.jsp it has a reference to:
<%
// a quick hack to get rid of get-file.jsp -- note this still spits out
// invalid HTML
out.write(
How large is the index? There is probably a lot of work in getting Solr and
dependencies (for example Lucene / RMI from what I have read) ...
Interestingly enough there is a Jetty container for it ...
http://code.google.com/p/i-jetty/
I think Solr itself would be OK to port to Dalvik just the
There is the LuSQL tool which Ive used a few times.
http://lab.cisti-icist.nrc-cnrc.gc.ca/cistilabswiki/index.php/LuSql
http://www.slideshare.net/eby/lusql-quickly-and-easily-getting-your-data-from-your-dbms-into-lucene
- Jon
On Apr 7, 2010, at 11:26 PM, bbarani wrote:
Hi,
I am
You should maybe scan your db for bad data ...
This bit ...
at sun.nio.cs.UTF_8$Decoder.decodeLoop(UTF_8.java:324)
at java.nio.charset.CharsetDecoder.decode(CharsetDecoder.java:561)
Is probably happening on a specific record somewhere, in the query limit the id
range and try to narrow down
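That narrowing step can be automated with a binary search over the id range; a minimal sketch, assuming a hypothetical `imports_ok(lo, hi)` callback that runs the import query limited to that id slice and reports whether it succeeded:

```python
def find_bad_id(lo, hi, imports_ok):
    """Binary-search an id range for the one record that breaks the import.

    imports_ok(lo, hi) is a hypothetical callback that runs the import
    query limited to ids in [lo, hi] and returns True if it succeeds.
    Assumes exactly one bad record in the starting range.
    """
    while lo < hi:
        mid = (lo + hi) // 2
        if imports_ok(lo, mid):
            lo = mid + 1  # lower half is clean; bad record is above mid
        else:
            hi = mid      # the failure reproduced in [lo, mid]
    return lo

# Simulated check with a bad record planted at id 324:
bad = find_bad_id(1, 1000, lambda lo, hi: not (lo <= 324 <= hi))
```

Each iteration halves the range, so even a multi-million-row table needs only a couple dozen import attempts to isolate the offending record.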
Before digging through src ...
Docs say ... Every component can have an extra attribute enable which can be
set as true/false.
It doesn't seem that listeners are part of PluginInfo scheme though ... for
example is this possible?
<listener event="firstSearcher" class="solr.QuerySenderListener">
This is just something that seems to come up now and then ...
* - Id like to write a last-component which does something specific for a
particular declared handler /handler1 for example and there is no way to
determine which handler it came from @ the moment (or can it?)
* - It would be nice if
component.
What's the use case for controlling handlers enabled flag on the fly?
Erik
On Mar 29, 2010, at 3:02 PM, Jon Baer wrote:
This is just something that seems to come up now and then ...
* - Id like to write a last-component which does something specific for a
particular
Just throwing this out there ... I recently saw something I found pretty
interesting from CMU ...
http://csunplugged.org/activities
The search algorithm exercise was focused on a Battleship lookup I think.
- Jon
On Mar 24, 2010, at 10:40 AM, Erik Hatcher wrote:
I've got a couple of
into the main Solr example (it's there on trunk, basically) and more
examples are better.
On Mar 18, 2010, at 7:40 PM, Jon Baer wrote:
It's also possible to try and use the Velocity contrib response writer and
paging it w/ the sitemap elements.
BTW generating a sitemap was a big reason
It's also possible to try and use the Velocity contrib response writer and
paging it w/ the sitemap elements.
BTW generating a sitemap was a big reason of a switch we did from GSA to Solr
because (for some reason) the map took way too long to generate (even simple
requests).
If you page
I am interested in this as well ... I'm also having the issue of understanding
whether a result has been elevated by the QueryElevation component. It seems
like SolrJ would need to know about some type of metadata contained within the
docs, but I haven't seen SolrJ dealing w/ payloads specifically
Isn't this what Lucene/Solr payloads are theoretically for?
ie:
http://www.lucidimagination.com/blog/2009/08/05/getting-started-with-payloads/
- Jon
On Mar 8, 2010, at 11:15 PM, Lance Norskog wrote:
This is an interesting idea. There are other projects to make the
analyzer/filter chain more
Maybe some things to try:
* make sure your uniqueKey is string field type (ie if using int it will not
work)
* forceElevation to true (if sorting)
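For the string-uniqueKey point above, a minimal elevate.xml sketch; the query text and doc ids here are just taken from the SolrRelevancyFAQ link earlier in the thread, not from Ryan's actual setup:

```xml
<!-- elevate.xml: pins specific documents to the top for a given query.
     The doc ids must match the schema's uniqueKey, which is why the
     uniqueKey needs to be a string field type. -->
<elevate>
  <query text="wings">
    <doc id="archangel"/>
    <doc id="hawkgirl"/>
  </query>
</elevate>
```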
- Jon
On Mar 9, 2010, at 12:34 AM, Ryan Grange wrote:
Using Solr 1.4.
Was using the standard query handler, but needed the boost by field
For this list I usually end up @ http://solr.markmail.org (which I believe also
uses Lucene under the hood)
Google is such a black box ...
Pros:
+ 1 Open Source (enough said :-)
There also always seems to be the notion that crawling lends itself to
producing the best results, but that is rarely
Hi,
Im trying to figure out if there is an easy way to basically reset all of any
doc boosts which you have made (for analytical purposes) ... for example if I
run an index, gather report, doc boost on the report, and reset the boosts @
time of next index ...
It would seem to be from just
, there is no way to search by boost.
Cheers
Avlesh
On Fri, Nov 13, 2009 at 8:17 PM, Jon Baer jonb...@gmail.com wrote:
Hi,
Im trying to figure out if there is an easy way to basically reset all of
any doc boosts which you have made (for analytical purposes) ... for example
if I run an index
I think it could be as simple as if you have +1 entities in the param
that clean=false as well (because you are specifically interested in
just targeting that entity import) ...
- Jon
On Mar 15, 2009, at 3:07 AM, Shalin Shekhar Mangar wrote:
On Fri, Mar 13, 2009 at 9:56 PM, Jon Baer jonb
Bear in mind (and correct me if I'm wrong), but a full-import is still
a full-import no matter what entity you tack onto the param.
Thus I think clean=false should be appended (a friend starting off in
Solr was really confused by this + could not understand why it did a
delete on all
Id suggest what someone else mentioned to just do a full clean up of
the index. Sounds like you might have kill -9 or stopped the process
manually while indexing (would be only reason for a left over lock).
- Jon
On Mar 11, 2009, at 5:16 AM, Ashish P wrote:
I added
Are you using the replication feature by any chance?
- Jon
On Mar 10, 2009, at 2:28 PM, Matthew Runo wrote:
We're currently using 1.4 in production right now, using a recent
nightly. It's working fine for us.
Thanks for your time!
Matthew Runo
Software Engineer, Zappos.com
="hello${x.y}" logLevel="finer"/>
</entity>
On Mon, Mar 9, 2009 at 11:55 PM, Jon Baer jonb...@gmail.com wrote:
Hi,
Is there currently anything in DIH to allow for more verbose logging?
(something more than status) ... was there a way to hook in your
own for
debugging purposes? I can't seem to locate
Hi,
Is there currently anything in DIH to allow for more verbose logging?
(something more than status) ... was there a way to hook in your own
for debugging purposes? I can't seem to locate the options in the
Wiki or remember if it was available.
Thanks.
- Jon
you have tried out some function query stuff, but can you share what
you did there?
-Grant
On Feb 18, 2009, at 1:54 PM, Jon Baer wrote:
Ive spent a few months trying different techniques w/ regards to
searching just news articles w/ players and can't seem to find the
perfect setup
This part:
The part of Zoie that enables real-time searchability is the fact that
ZoieSystem contains three IndexDataLoader objects:
* a RAMLuceneIndexDataLoader, which is a simple wrapper around a
RAMDirectory,
* a DiskLuceneIndexDataLoader, which can index directly to the
Ive spent a few months trying different techniques w/ regards to
searching just news articles w/ players and can't seem to find the
perfect setup.
Normally I take into consideration date (frequency + recently
published), title (which boosts on relevancy) and general mm in body
text (and
I don't think general discussion forums really help ... it would be
great if every major page in the Solr wiki had a discuss link off to
somewhere though +1 for that ...
Ie:
http://wiki.apache.org/solr/SolrRequestHandler
http://wiki.apache.org/solr/SolrReplication
etc.
For me even panning
Hi,
Sorry I know this exists ...
If an API supports chunking (when the dataset is too large), multiple calls
need to be made to complete the process. XPathEntityProcessor supports this
with a transformer. If the transformer returns a row which contains a field
*$hasMore* with the value true, the
...@gmail.com wrote:
On Mon, Feb 2, 2009 at 9:20 PM, Jon Baer jonb...@gmail.com wrote:
Hi,
Sorry I know this exists ...
If an API supports chunking (when the dataset is too large) multiple
calls
need to be made to complete the process. XPathEntityProcessor supports
,
com.nhl.solr.EnumeratedEntityTransformer
I guess what Im looking for is that snippet which shows how it is setup (the
initial counter) ...
- Jon
On Mon, Feb 2, 2009 at 12:39 PM, Noble Paul നോബിള് नोब्ळ्
noble.p...@gmail.com wrote:
On Mon, Feb 2, 2009 at 11:01 PM, Jon Baer jonb...@gmail.com wrote:
Yes I think what Jared
Hi,
Ive just had a bump in the night where some feeds have disappeared, Im
wondering since Im running the base 1.3 copy would patching it w/
https://issues.apache.org/jira/browse/SOLR-842
Break anything? Has anyone done this yet?
Thanks.
- Jon
Could it be the framework you are using around it? I know some IOC
containers will auto pool objects underneath as a service without you really
knowing it is being done or has to be explicitly turned off. Just a
thought. I use a single server for all requests behind a Hivemind setup ...
umm not
I think DIH would have to support JNDI which it currently does not (I
think). Id also be interested in this (or where the credentials came
from the db itself).
- Jon
On Jan 18, 2009, at 11:37 AM, con wrote:
Hi all
Currently i am defining database parameters like the url, username and
Hi,
Anyone have a quick, clever way of dealing w/ paged XML for
DataImportHandler? I have metadata like this:
<paging>
  <pageNumber>1</pageNumber>
  <totalPages>3</totalPages>
  <count>15</count>
</paging>
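The pageNumber/totalPages metadata above drives exactly the loop that the $hasMore flag encodes; a minimal sketch, assuming a hypothetical `fetch_page(n)` callback that returns the parsed paging block for page n:

```python
def fetch_all(fetch_page):
    """Request pages until the feed's own paging metadata says stop.

    fetch_page(n) is a hypothetical callback returning a parsed page dict:
    {"pageNumber": n, "totalPages": 3, "items": [...]}
    """
    items, page = [], 1
    while True:
        data = fetch_page(page)
        items.extend(data["items"])
        # this check is what the $hasMore field communicates to DIH
        if data["pageNumber"] >= data["totalPages"]:
            break
        page += 1
    return items

# Three simulated pages of results:
pages = {1: ["a", "b"], 2: ["c"], 3: ["d", "e"]}
all_items = fetch_all(lambda n: {"pageNumber": n, "totalPages": 3,
                                 "items": pages[n]})
```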
I unfortunately can not get all the data in one shot
This sounds a little like my original problem of deltaQuery imports
per entity ...
https://issues.apache.org/jira/browse/SOLR-783
I wonder if those 2 hacks could be combined to fix the issue.
- Jon
On Dec 6, 2008, at 12:29 PM, Marc Sturlese wrote:
Hey there,
I am doing some hacks to some
to a VPS ...
http://www.sun.com/bigadmin/content/zones/
- Jon
On Dec 5, 2008, at 10:58 AM, Kashyap, Raghu wrote:
Jon,
What do you mean by off a Zone? Please clarify
-Raghu
-Original Message-
From: Jon Baer [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 04, 2008 9:56 PM
To: solr
Just curious, is this off a zone by any chance?
- Jon
On Dec 4, 2008, at 10:40 PM, Kashyap, Raghu wrote:
We are running solr on a solaris box with 4 CPU's(8 cores) and 3GB
Ram.
When we try to index sometimes the HTTP Connection just hangs and the
client which is posting documents to solr
Sorry missed that (and probably dumb question), does that -D flag work
for setting as a RAMDirectory as well?
- Jon
On Nov 30, 2008, at 8:42 PM, Yonik Seeley wrote:
OK, the development version of Solr should now be fixed (i.e. NIO
should be the default for non-Windows platforms). The next
HadoopEntityProcessor for the DIH?
Ive wondered about this as they make HadoopCluster LiveCDs and EC2
have images but best way to make use of them is always a challenge.
- Jon
On Nov 29, 2008, at 3:34 AM, Erik Hatcher wrote:
On Nov 28, 2008, at 8:38 PM, Yonik Seeley wrote:
Or, it would
This sounds exactly same issue I had when going from 1.3 to 1.4 ... it
sounds like DIH is trying to automagically figure out the columns :-\
- Jon
On Nov 25, 2008, at 6:37 AM, Joel Karlsson wrote:
Hello,
I get Unknown field error when I'm indexing an Oracle dB. I've
reduced the
number of
https://issues.apache.org/jira/secure/attachment/12394282/solr2_maho_impression.png
https://issues.apache.org/jira/secure/attachment/12394266/apache_solr_b_red.jpg
Maybe another template idea ... I just started playing around w/ this
plugin:
http://malsup.com/jquery/taconite/
Would be pretty neat to have that as a response (or @ least the
technique), not sure how well known it is or if there is something W3C-
based in the pipeline that is similar.
Hi,
I wanted to try the TermVectorComponent w/ current schema setup and I
did a build off trunk but it's giving me something like ...
org.apache.solr.common.SolrException: ERROR:unknown field 'DOCTYPE'
Even though it is declared in schema.xml (lowercase), before I grep
replace the entire
if that resolves the problem. Thanks.
- Jon
On Nov 19, 2008, at 6:44 PM, Ryan McKinley wrote:
schema fields should be case sensitive... so DOCTYPE != doctype
is the behavior different for you in 1.3 with the same file/schema?
On Nov 19, 2008, at 6:26 PM, Jon Baer wrote:
Hi,
I wanted
PM, Noble Paul നോബിള്
नोब्ळ् wrote:
Hi John,
it is probably not the expected behavior?
only 'explicit' fields must be case-sensitive.
Could you tell me the usecase or can you paste the data-config?
--Noble
On Thu, Nov 20, 2008 at 8:55 AM, Jon Baer [EMAIL PROTECTED] wrote:
Sorry I
. right?
<field column="DOCID" template="PLAYER-${players.PLAYERID}"/>
we did some refactoring to minimize the object creation for
case-insensitive comparisons.
I guess it should be rectified soon.
Thanks for bringing it to our notice.
--Noble
On Thu, Nov 20, 2008 at 10:05 AM, Jon Baer [EMAIL
I've also had the same issues here, but when trying to switch to
HTMLStripWhitespaceTokenizerFactory I found that it only removes the
tags; when it comes to all forms of javascript includes in a
document it keeps it all intact, so I ended up w/ scripts in the
document text. Is there any
, Noble Paul നോബിള്
नोब्ळ् wrote:
Hi Lance,
I guess I got your problem
So you wish to create docs for both entities (as suggested by Jon
Baer). So the best solution would be to create two root entities. The
first one should be the outer and write a transformer to store all the
urls into the db
On Nov 1, 2008, at 1:16 PM, Grant Ingersoll wrote:
How do you propose to distinguish those words from the other ones?
** They are field values from other documents
The problem you are addressing is often called keyword extraction.
In general, it 's a difficult problem, but you may have
Is that right? I find the wording of clean a little confusing. I
would have thought this is what I had needed earlier but the topic
came up regarding the fact that you can not deleteByQuery for an
entity you want to flush w/ delta-import.
I just noticed that the original JIRA request
Hi,
So Im looking to either use this or build a component which might do
what Im looking for. Id like to figure out if its possible use a
single doc to get tag generation based on the matches within that
document for example:
1 News Doc - contains 5 Players and 8 Teams (show them as
it customize the TV output...
Thanks,
Grant
On Oct 31, 2008, at 5:20 PM, Jon Baer wrote:
Hi,
So Im looking to either use this or build a component which might
do what Im looking for. Id like to figure out if its possible use
a single doc to get tag generation based on the matches within
If that is the case you should look @ the DataImportHandler examples
as they can already index RSS; I'm doing it now for ~ a dozen feeds on
an hourly basis. (This is also for any XML-based feed for XHTML, XML,
etc). I find Nutch more useful for plain vanilla HTML (something that
was built
Hi,
Im pretty intrigued by the Ocean search stuff and the Lucene patch, Im
wondering if it's something that a tweaked Solr w/ mod Lucene can run
now? Has anyone tried merging that patch and running w/ Solr? Im
sure there is more to it than just swapping out the libs but the real
time
Hi,
What is the proper behavior suppose to be between SolrJ and caching?
Im proxying through a framework and wondering if it is possible to
turn on / turn off caching programmatically depending on the type of
query (or if this will have no effect whatsoever) ... since SolrJ uses
Apache
What is your uniqueKey set to? Could it be you have duplicates in
your uniqueKey setup (thus producing only 10 rows in index)?
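The 25-rows-to-10-documents effect asked about here is exactly what duplicate uniqueKeys produce: Solr overwrites any document whose uniqueKey it has already seen, so only the distinct key values survive. A minimal illustration of the overwrite semantics:

```python
# Solr keeps exactly one document per uniqueKey value: a later add with a
# key it has already seen overwrites the earlier document. So 25 JDBC rows
# that share 10 distinct key values leave only 10 documents in the index.
rows = [{"id": i % 10, "body": "row %d" % i} for i in range(25)]

index = {}
for row in rows:
    index[row["id"]] = row  # overwrite on duplicate key, as Solr does
```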
- Jon
On Oct 12, 2008, at 1:30 PM, con wrote:
I wrote a jdbc program to implement the same query. But it is
returning all
the responses, 25 nos.
But the solr
we do not really
have any knowledge on how to delete specific rows.
how about passing a deleteQuery=type:x in the request params
or having a deleteByQuery on each top level entity which can be used
when that entity is doing a full-import
--Noble
On Fri, Oct 3, 2008 at 4:32 AM, Jon Baer [EMAIL
Just curious,
Currently a full-import call does a delete all even when appending an
entity param ... wouldn't it be possible to pick up the param and just
delete on that entity somehow? It would be nice if there was
something involved w/ having an entity field name that worked w/ DIH
to
If I understand your question right ... you would not need a
transformer, basically you nest entities under each other ... ie:
<?xml version="1.0"?>
<dataConfig>
  <dataSource name="db" type="JdbcDataSource"
    driver="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost/nhldb?
Why even do any of the work :-)
Im not sure any of the free analytic apps (ala Google) can but the
paid ones do, just drop the query into one of those and let them
analyze ...
http://www.google.com/analytics/
Then just parse the reports.
- Jon
On Sep 25, 2008, at 8:39 AM, Mark Miller
(key);
This would be a generic enough for users to get going
thoughts.
--Noble
On Sat, Sep 20, 2008 at 1:52 AM, Jon Baer [EMAIL PROTECTED] wrote:
Actually how does ${deltaimporter.last_index_time} know which
entity Im
specifically updating? I feel like Im missing something, can it
work
Question -
So if I issued a dataimport?command=delta-import&entity=one,two,three
Would this also hit items w/o a delta-import like four,five,six, etc?
Im trying to set something up and I ended up with 28k+ documents which
seems more like a full import, so do I need to do something like
Actually how does ${deltaimporter.last_index_time} know which entity
Im specifically updating? I feel like Im missing something, can it
work like that?
Thanks.
- Jon
On Sep 19, 2008, at 4:14 PM, Jon Baer wrote:
Question -
So if I issued a dataimport?command=delta-import&entity=one,two
Hi,
For some reason my XPath attribute keeps failing to get picked up here
(is that the proper format?):
<field column="thumbnail" xpath="/rss/channel/item/[EMAIL PROTECTED]"/>
- Jon
That was it, thanks Shalin.
On Sep 16, 2008, at 1:41 PM, Shalin Shekhar Mangar wrote:
On Tue, Sep 16, 2008 at 10:41 PM, Jon Baer [EMAIL PROTECTED] wrote:
For some reason my XPath attribute keeps failing to get picked up
here (is
that the proper format?):
<field column="thumbnail" xpath="/rss
Another +1 for Shalin and Noble for DIH ...
On Sep 16, 2008, at 9:50 PM, Erik Hatcher wrote:
+1 for Grant's efforts! He put a lot of sweat into making this
release a reality.
Erik
On Sep 16, 2008, at 9:29 PM, Grant Ingersoll wrote:
The Apache Solr team is happy to announce the
it into a JSON Array?
Thanks
** julio
-Original Message-
From: Jon Baer [mailto:[EMAIL PROTECTED]
Sent: Sunday, September 14, 2008 9:01 PM
To: solr-user@lucene.apache.org
Subject: Re: SolrJ and JSON in Solr -1.3
Hmm am I missing something but isn't the real point of SolrJ to be
able to
use
Hmm am I missing something but isn't the real point of SolrJ to be
able to use the binary (javabin) format to keep it small / tight /
compressed? I have had to proxy Solr recently and found just throwing
a SolrDocumentList as a JSONArray (via json.org libs) works pretty
well (YMMV). I
Hi,
Was wondering if there was an update on a push for a final 1.3?
Wanted to build a final .war but wondering status and if I should hold
off ... everything in trunk seems promising any major issues?
Thanks.
- Jon
Yeah I think the snapshot techniques that ZFS provides would be very
nice for handling indexes, although remains to be seen as I have not
seen too much info pertaining to it.
Im hoping to have a chance to put Solr on OpenSolaris soon and will
see what works / what doesn't. (BTW this combo
Hi,
Ive started putting together a small cluster and going through the
setup on some of the scripts, do they have any awareness of a
multicore setup? It seems like I can only snapshot a single master
directory, Im assuming these tools are compatible with that type of
setup but just want
Thanks ... on a somewhat related note, does having the index on ZFS
buy me anything, has anyone toyed w/ ZFS snapshots / send / recv to
automount? Does it work?
- Jon
On Aug 21, 2008, at 6:43 PM, Alexander Ramos Jardim wrote:
You need to setup one snapshooter for each index
2008/8/21 Jon
Hi,
(Im sure this was asked before but found nothing on markmail) ...
Wondering if Solr can handle this on its own or if something needs to
be written ... would like to handle recognizing date inputs to a
search box for news articles, items such as August 1,August 1st or
08/01/2008 ...
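Recognizing inputs like those can be prototyped with a few strptime patterns before wiring anything into Solr; a minimal sketch, where the pattern list and the ordinal-suffix stripping are assumptions, not anything Solr provides:

```python
from datetime import datetime

def parse_query_date(text, year=2008):
    """Try a few common news-date formats; return a datetime or None."""
    # Strip ordinal suffixes so "August 1st" becomes "August 1".
    text = (text.strip().replace("1st", "1").replace("2nd", "2")
                .replace("3rd", "3").replace("th", ""))
    for fmt in ("%m/%d/%Y", "%B %d, %Y", "%B %d"):
        try:
            dt = datetime.strptime(text, fmt)
            # Formats without a year parse as 1900; patch in a default year.
            return dt if dt.year != 1900 else dt.replace(year=year)
        except ValueError:
            continue
    return None
```

A matched date could then be rewritten into a range query on the article's date field; unmatched input just falls through to the normal text search.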
This is *exactly* my issue ... very nicely worded :-)
I would have thought facet.query=*:* would have been the solution but
it does not seem to work. Im interested in getting these *total*
counts for UI display.
- Jon
On Jul 22, 2008, at 6:05 AM, Stefan Oestreicher wrote:
Hi,
I have a
It seems that spellchecker works great except all the 7 words you
can't say on TV resolve to very important people, is there a way to
contain just certain words so they don't resolve?
Thanks.
- Jon
Hi,
I can't seem to locate any info on how to get SolrJ + Spellcheck
working together, Id like to query the spellchecker if 0 items were
matched, is SolrJ generic enough to pick apart added component
results from the bottom of a query?
Thanks.
- Jon
PM, Mike Klaas wrote:
On 17-Jul-08, at 6:27 AM, Jon Baer wrote:
Ive gone from a complex multicore setup back to a single solrconfig
setup and using a doctype field (since the index is pretty small),
however there are a few spots where items are laid out in tabs and
each tab has a count
Ive gone from a complex multicore setup back to a single solrconfig
setup and using a doctype field (since the index is pretty small),
however there are a few spots where items are laid out in tabs and
each tab has a count of docs associated, ie:
News (123) | Images (345) | Video (678) |
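Per-tab counts like these map onto a single facet request on the doctype field (facet=true&facet.field=doctype), so one query returns every tab's number at once. A minimal sketch, assuming the facet counts have already been parsed out of the response:

```python
# One request with facet=true&facet.field=doctype returns counts for every
# doctype at once; the tab bar is then just string formatting.
facet_counts = {"News": 123, "Images": 345, "Video": 678}

tabs = " | ".join("%s (%d)" % (name, count)
                  for name, count in facet_counts.items())
```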
Hi,
On the wiki it says that url attribute can be templatized but I'm not
sure how that happens. Do I need to create something read from a
database column in order to use that type of function? ie Id like to
run over some RSS feeds for multiple URLs (~ 30), do I need to copy 1
per
a conditional tag.
- Jon
On Jul 12, 2008, at 12:38 AM, Noble Paul നോബിള്
नोब्ळ् wrote:
On Fri, Jul 11, 2008 at 11:46 PM, Jon Baer [EMAIL PROTECTED] wrote:
Hi,
On the wiki it says that url attribute can be templatized but I'm not sure
how that happens; do I need to create something read
Hi,
Is there an easy way to use fq to filter down but retain the overall
facet query counts? I can't seem to find how to accomplish this but
seems like a common item needed for navigating though a result set. I
need to do this w/o holding a session and the counts always seem to
reflect
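A common workaround for keeping the overall counts while filtering is two requests: one carrying the fq for the visible results, and a rows=0 request without the fq purely for the global facet counts. A sketch of the two parameter sets; the query and field names are hypothetical:

```python
from urllib.parse import urlencode

base = {"q": "wings", "facet": "true", "facet.field": "doctype"}

# Request 1: filtered results for the current view.
results_params = urlencode({**base, "fq": "doctype:News", "rows": 10})

# Request 2: no fq, rows=0 -- cheap, returns only the unfiltered
# facet counts, no stored documents.
counts_params = urlencode({**base, "rows": 0})
```

The rows=0 request is inexpensive because Solr skips document retrieval entirely and only computes the facets, so issuing it alongside every filtered query is usually acceptable.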
Hi,
Is it currently possible to define a db-data-config.xml to include
both a HttpDataSource and a JDBCDataSource @ all? I can't tell if
this is possible or not (although it seems that dataConfig might only
take a single dataSource child element).
Thanks.
- Jon
Hi,
Im curious, is there a spot / patch for the latest on Nutch / Solr
integration, Ive found a few pages (a few outdated it seems), it would
be nice (?) if it worked as a DataSource type to DataImportHandler,
but not sure if that fits w/ how it works. Either way a nice contrib
patch
Hi,
For some reason even the simplest template is causing me NPE when
using (Solr trunk) ... ie:
How its being used:
<field column="link" template="http://www.site.com/path/?id=${news.id}"/>
-or-
<field column="link" template="http://www.site.com/path/?id=123456"/>
Throw ...
WARNING: transformer