hi,
The format over the wire is not of great significance, because it gets
unmarshalled into the corresponding language object as soon as it comes
off the wire. I would say XML/JSON should meet 99% of the requirements,
because all the platforms come with an unmarshaller for both of these.
of time as compared to a more compact
binary format.
I think it at least warrants profiling/testing.
-Grant
On Feb 21, 2008, at 12:07 PM, Noble Paul നോബിള്
नोब्ळ् wrote:
for
SolrJ to use a binary format by default. The other thing it should do
is to make sure, when sending/receiving XML, that the XML is as tight
as possible, i.e. minimal whitespace, etc.
Just thinking out loud,
Grant
On Feb 22, 2008, at 8:29 AM, Noble Paul നോബിള്
नोब्ळ् wrote
Good to hear that people are using DataImportHandler.
In a couple of days we will post another patch, cleared by our QA, with
better error handling, messaging and a lot of new features.
A committer will have to decide when it is good enough to be committed.
--Noble
On Sat, Mar 8, 2008
hi,
The tool is undergoing substantial testing in our QA department.
Because it is also an official internal project, the bugs are filed in
our bug tool, and we are fixing them as and when they are reported. It has
gone through some good iterations and it is going to power the backend
for 2 of our
hi,
If my appserver fails during an update, or if I do a planned shutdown
without wanting to commit my changes, does Solr not allow that?
It commits whatever unfinished changes there are.
Is this by design?
Can I change this behavior?
--Noble
Can I make an API call to remove the stale indexsearcher so that the
documents do not get committed?
Basically what I need is a 'rollback' feature
--Noble
On Wed, Mar 26, 2008 at 9:08 PM, Yonik Seeley [EMAIL PROTECTED] wrote:
On Wed, Mar 26, 2008 at 10:18 AM, Noble Paul നോബിള് नोब्ळ्
hi,
I am willing to work on this if you can give me some pointers as to
where to start?
each entity has an optional attribute called dataSource.
If you have multiple dataSources, give them a name and use that name as
the dataSource. So your solrconfig must look like

<requestHandler name="/dataimport"
    class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str
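For context, a complete data-config with two named dataSources might look like this (the driver, URLs, queries and entity names here are illustrative, not from the thread):

```xml
<dataConfig>
  <!-- two named data sources; each entity picks one by name -->
  <dataSource name="ds-db" type="JdbcDataSource"
              driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/mydb" user="u" password="p"/>
  <dataSource name="ds-web" type="HttpDataSource"/>
  <document>
    <entity name="items" dataSource="ds-db"
            query="select id, name from items"/>
    <entity name="feeds" dataSource="ds-web"
            url="http://example.com/feed.xml"
            processor="XPathEntityProcessor"
            forEach="/rss/channel/item"/>
  </document>
</dataConfig>
```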
It is not a problem with the BinaryResponseWriter itself. It is caused
by the bug https://issues.apache.org/jira/browse/SOLR-470
we need to fix it now.
--Noble
On Mon, Apr 21, 2008 at 9:16 AM, Eason. Lee [EMAIL PROTECTED] wrote:
Error comes from solr while parsing the datefield
It is ok with
We are planning to incorporate both your requests in the next patch.
The implementation is going to be as follows: mention the xsl file
location as follows

<entity processor="XPathEntityProcessor" xslt="file:/c:/my-own.xsl">
</entity>

So the processing will be done after the XSL transformation. If
~ David
It is caused by the new caching feature in Solr. The caching is done
at the browser level; Solr just sends the appropriate headers. We had
raised an issue to disable that.
BTW The command is not exactly
http://localhost:8983/solr/dataimport?command=status .
http://localhost:8983/solr/dataimport
Yes , We are waiting for the patch to get committed.
--Noble
On Fri, Apr 25, 2008 at 5:36 PM, Sean Timm [EMAIL PROTECTED] wrote:
Noble--
You should probably include SOLR-505 in your DataImportHandler patch.
-Sean
hi,
The current replication strategy in Solr involves shell scripts. The
following are the drawbacks:
* It does not work with Windows
* Replication works as a separate piece, not integrated with Solr
* Cannot control replication from Solr admin/JMX
* Each operation requires manual telnet to the
This is a servlet container feature:
http://java.sun.com/products/servlet/Filters.html
BTW, this may not be the right forum for this topic.
--Noble
On Tue, May 13, 2008 at 5:04 PM, Umar Shah [EMAIL PROTECTED] wrote:
On Tue, May 13, 2008 at 4:39 PM, Shalin Shekhar Mangar
[EMAIL PROTECTED] wrote:
On Wed, May 21, 2008 at 6:27 AM, Julio Castillo [EMAIL PROTECTED] wrote:
I wanted to learn how to index data that I have in my DB.
I followed the instructions on the wiki page for the Data Import Handler
(Full Import Example -example-solr-home.jar). I got an exception running it
as is (see
Julio,
This is to convert the 1:n and m:n relationships in a DB to
multivalued fields in Solr. A single SQL query ends up giving a 2D
matrix where each cell holds one value. It would be harder to
denormalize and extract the multivalued fields from a single result
set. Check the architecture to
It does not have any obvious problems. You should probably run it
through the query analyzer.
Using DIH for indexing should not make any difference.
--Noble
On Sat, May 24, 2008 at 11:11 AM, Julio Castillo
[EMAIL PROTECTED] wrote:
I have a simple test schema
That has users with the following
This would be useful for admin pages or an internal application. If
you expose Solr to outside users, they can simply bring down your Solr
instance.
--Noble
On Mon, May 26, 2008 at 8:17 AM, climbingrose [EMAIL PROTECTED] wrote:
Hi Matthias,
How would you prevent Solr server from being exposed
If a feature that is really big (say distributed search) is half
baked and not ready for primetime, we must hold the release till it
is completely fixed. That is not to say that every possible
enhancement to that feature must be incorporated before we can do a
release. If the new changes are
hi Grégoire,
I could not find an obvious problem. This is expected if the response
is not written by BinaryResponseWriter.
Could you apply the attached patch and see if you get the same error?
This patch is not a solution; it is just to diagnose the problem.
--Noble
On Thu, May 29, 2008 at
wasn't 'shards', and this produced the bug). All is working fine now.
Thanks for your quick answers,
Grégoire.
Consider constructing the id by concatenating an extra string for each
document. You can construct that field using the TemplateTransformer.
In the entity owners keep the id as

<field column="id" name="id" template="owners-${owners.id}"/>

and in vets

<field column="id" name="id" template="vets-${vets.id}"/>
or
Sorry I forgot to mention that.
http://wiki.apache.org/solr/DataImportHandler#head-a6916b30b5d7605a990fb03c4ff461b3736496a9
--Noble
On Fri, May 30, 2008 at 11:37 AM, Shalin Shekhar Mangar
[EMAIL PROTECTED] wrote:
You need to enable TemplateTransformer for your entity. For example:
entity
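A sketch of what the full entity might look like with the transformer enabled (the query and field names are illustrative, borrowed from the owners example earlier in the thread):

```xml
<entity name="owners" transformer="TemplateTransformer"
        query="SELECT id, name FROM owners">
  <!-- prefix the DB id so ids from different entities cannot collide -->
  <field column="id" name="id" template="owners-${owners.id}"/>
  <field column="name" name="name"/>
</entity>
```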
julio,
Looks like it is a bug.
We can give you a new TemplateTransformer.java, which we will
incorporate in the next patch.
--Noble
On Sat, May 31, 2008 at 12:24 AM, Julio Castillo
[EMAIL PROTECTED] wrote:
I'm sorry Shalin, but I still get the same Null Pointer exception. This is
my complete
You could have been more specific about the dataset size.
If your data volumes are growing, you can partition your index into
multiple shards.
http://wiki.apache.org/solr/DistributedSearch
--Noble
On Sat, May 31, 2008 at 9:02 PM, Ritesh Ambastha [EMAIL PROTECTED] wrote:
Dear Readers,
I am a
Hoss: You are right. It has a version byte written first. This can be
used for any changes that come later. So, when we introduce any
change to the format, we can rely on that. If/when we upgrade the
format we must ensure that it is backward compatible.
The format can be used by SolrJ clients as
hi Julio,
Please disregard my previous response. In your schema, 'id' is the
uniqueKey. Make 'comboid' the unique key, because that is the target
field name coming out of the entity 'owners'.
--Noble
On Tue, Jun 3, 2008 at 9:46 AM, Noble Paul നോബിള് नोब्ळ्
[EMAIL PROTECTED] wrote:
The field 'id
The field 'id' is repeated for pet; rename it to something else, say:

<entity name="pets" pk="id"
        query="SELECT id,name,birth_date,type_id FROM pets WHERE
               owner_id='${owners.id}'"
        parentDeltaQuery="SELECT id FROM owners WHERE
               id=${pets.owner_id}">
  <field column="id"
hi julio,
You must create an extra field for 'comboid' because you really need
the 'id' for your sub-entities. Your data-config must look as follows.
The pet also has a field called 'id'. That is not a good idea; call it
'petid' or something (both in the data-config and schema.xml). Please make
sure
=${pets.owner_id}">
  <field column="id" name="petid"/>
  <field column="name" name="name"/>
  <field column="birth_date" name="birthDate"/>
</entity>
</entity>
issue with partitioning? For eg: A query on 1GB
and 500MB indexed data will take same time to give the result? Or lesser the
index size, lesser the response time?
Regards,
Ritesh Ambastha
:
Thanks Noble,
That means, I can go ahead with single Index for long.
:)
Regards,
Ritesh Ambastha
Noble Paul നോബിള് नोब्ळ् wrote:
For the data size you are proposing, a single index should be fine. Just
give the machine enough RAM.
Distributed search involves multiple requests made
commonField=true can be added to any field when you are using an
XPathEntityProcessor. But you will never need to do so, because only XML
has such a requirement. If you wish to add a string literal, use a
TemplateTransformer and keep the field as

<field column="column" value="value" template="my-string-literal"/>
The attachment did not work; try this:
http://www.nabble.com/Re%3A-How-to-describe-2-entities-in-dataConfig-for-the-DataImporter--p17577610.html
--Noble
hi ,
use multi-core Solr. Each core can have its own schema.
If possible the DISCLAIMER can be dropped.
--Noble
On Thu, Jun 5, 2008 at 11:13 AM, Sachit P. Menon
[EMAIL PROTECTED] wrote:
Hi folks,
I have a scenario as follows:
I have a CMS where in I'm storing all the contents. I need to
* install TortoiseSVN
* check out the code
* download the patch
* use TortoiseSVN to apply the patch
--Noble
On Tue, Jun 10, 2008 at 6:03 PM, Jón Helgi Jónsson [EMAIL PROTECTED] wrote:
Thanks for that. The patch in question is this one:
http://issues.apache.org/jira/browse/SOLR-469
I found
For this specific one, the binaries are attached:
http://wiki.apache.org/solr/DataImportHandler#head-c24dc86472fa50f3e87f744d3c80ebd9c31b791c
--Noble
The configuration is fine except for one detail.
The documents are to be created for the entity 'oldsearchcontent', not
for the root entity. So add an attribute rootEntity="false" to the
entity 'oldsearchcontentlist' as follows:

<entity name="oldsearchcontentlist"
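Sketched out, the relevant part of the data-config would look something like this (all attributes other than rootEntity are placeholders):

```xml
<!-- rootEntity="false": documents are created per child entity,
     not per row of the outer list entity -->
<entity name="oldsearchcontentlist" rootEntity="false" query="...">
  <entity name="oldsearchcontent" query="...">
    <field column="..." name="..."/>
  </entity>
</entity>
```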
Regards,
Nicolas Pastorino
On Jun 10, 2008, at 18:05 , Noble Paul നോബിള് नोब्ळ् wrote:
It is a bug, nice catch.
There needs to be a null check there in the method.
Can you just try replacing the method with the following?

private Node getMatchingChild(XMLStreamReader parser) {
  if (childNodes == null) return null;
On Thu, Jun 12, 2008 at 11:01 AM, Neville Burnell
[EMAIL PROTECTED] wrote:
Hi,
I'm playing with the Solr Data Import Handler, and everything looks
great so far!
thanks
Hopefully we will be able to replace our homegrown ODBC indexing service
[using camping+ferret] with Solr!
The wiki page
The implementation may provide a form where the user can
type in a doc id to delete, or a Lucene query.
If it has to be a POST, so be it.
But let us have the functionality.
--Noble
On Thu, Jun 19, 2008 at 2:40 AM, Craig McClanahan [EMAIL PROTECTED] wrote:
On Wed, Jun 18, 2008 at 1:55 PM, JLIST [EMAIL
);
}
}
log.info("Reusing parent classloader");
return loader;
}
This seems to be me to be why my class is now found when I include my
utilities jar in solr-home/lib.
Thanks
Brendan
On Jun 18, 2008, at 11:49 PM, Noble Paul നോബിള് नोब्ळ् wrote:
hi,
DIH does not load class using
We plan to use SolrResourceLoader (in the next patch). That is the
best way to go.
But we still prefer the usage of DIH package classes without any prefix:
type="HttpDataSource"
instead of
type="solr.HttpDataSource"
But users must be able to load their classes using the solr.classname format.
builds ?
Thanks,
William.
This means you may need to write your own RequestHandler.
If you wish to push data, write it to a directory and use DIH with
FileDataSource
--Noble
On Thu, Jun 19, 2008 at 9:58 PM, segv [EMAIL PROTECTED] wrote:
I'm new to solr (using the 1.3 nightly at the moment) and trying to configure
it
I can take it up. But should we wait for the feature to 'stabilize'
before adding it to SolrJ? Till then the approach suggested by Yonik
(getResponse()) should be fine
--Noble
On Fri, Jun 20, 2008 at 2:06 AM, Matthew Runo [EMAIL PROTECTED] wrote:
Hmmm, good point. I had completely forgotten
hi,
You have not registered any dataSources. The second entity needs a dataSource.
Remove the dataSource="null" and add a name for the second entity
(good practice). No need for a baseDir attribute for the second entity.
See the modified xml added below
--Noble

<dataConfig>
  <dataSource
Just extend XPathEntityProcessor, override nextRow(), and after 100
rows return null. Use it as your processor.
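The row-capping idea can be sketched independently of the Solr classes. The wrapper below is a hypothetical stand-in for the XPathEntityProcessor subclass (the real one would override nextRow() and delegate to super), showing only the stop-after-N-rows logic:

```java
import java.util.Iterator;
import java.util.Map;

// Hypothetical stand-in mimicking an XPathEntityProcessor subclass
// whose nextRow() returns null after 'max' rows have been served.
class CappedRows {
    private final Iterator<Map<String, Object>> delegate;
    private final int max;
    private int served = 0;

    CappedRows(Iterator<Map<String, Object>> delegate, int max) {
        this.delegate = delegate;
        this.max = max;
    }

    Map<String, Object> nextRow() {
        // null signals "no more rows" to the caller, as in DIH
        if (served >= max || !delegate.hasNext()) return null;
        served++;
        return delegate.next();
    }
}
```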
--Noble
DIH streams rows one by one.
Set fetchSize=-1; this might help. It may make the indexing a bit
slower, but memory consumption would be low.
The memory is consumed by the JDBC driver. Try tuning the -Xmx value for the VM.
--Noble
On Wed, Jun 25, 2008 at 8:05 AM, Shalin Shekhar Mangar
[EMAIL
It is batchSize=-1, not fetchSize. Or keep it at a very small value.
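For reference, a MySQL dataSource element with streaming enabled might look like this (URL and credentials are placeholders):

```xml
<dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost/mydb" user="u" password="p"
            batchSize="-1"/>
```

As noted later in the thread, batchSize="-1" is translated into a fetchSize of Integer.MIN_VALUE, which is what tells the MySQL driver to stream rows instead of buffering the whole result set.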
--Noble
DIH does not modify SQL. This value is used as a connection property
--Noble
On Wed, Jun 25, 2008 at 4:40 PM, Grant Ingersoll [EMAIL PROTECTED] wrote:
I'm assuming, of course, that the DIH doesn't automatically modify the SQL
statement according to the batch size.
-Grant
On Jun 25, 2008, at
The latest patch sets fetchSize to Integer.MIN_VALUE if -1 is passed.
It was added specifically for the MySQL driver.
--Noble
On Wed, Jun 25, 2008 at 4:35 PM, Grant Ingersoll [EMAIL PROTECTED] wrote:
I think it's a bit different. I ran into this exact problem about two weeks
ago on a 13 million
We must document this information in the wiki. We never had a chance
to play with MS SQL Server.
--Noble
On Thu, Jun 26, 2008 at 12:38 AM, wojtekpia [EMAIL PROTECTED] wrote:
It looks like that was the problem. With responseBuffering=adaptive, I'm able
to load all my data using the sqljdbc
If you have a master/slave configuration, I guess it is a good idea to
remove the updateHandler altogether from slaves.
--Noble
On Sat, Jun 28, 2008 at 2:39 AM, Chris Hostetter
[EMAIL PROTECTED] wrote:
: A basic technique that can be used to mitigate the risk of a possible CSRF
: attack like
SOLR-607 is still open. Till it is committed, this solution may not be possible.
--Noble
SolrJ needs a minimum of Java 5.
--Noble
On Mon, Jun 30, 2008 at 8:00 PM, Todd Breiholz [EMAIL PROTECTED] wrote:
What is the minimum JDK that can be used for developing clients that use
SolrJ? I am stuck on JDK 1.4.2 at the moment and am wondering if SolrJ is an
option for me.
Thanks!
Todd
There is a section with this information
http://wiki.apache.org/solr/DataImportHandler#head-138482af9d5c5e9600e60b4135c3eb41d8b34098
--Noble
On Tue, Jul 1, 2008 at 8:08 PM, Shalin Shekhar Mangar
[EMAIL PROTECTED] wrote:
Hi Jon,
Yes it is possible. Define two dataSources in the data config file
Currently there is nothing, but there is a hackish way to achieve it.
DIH allows you to read values from request params and use them in the
templates, e.g.:
query="select * from atable where id > '${dataimporter.request.last_id}'"
So DIH must be invoked with the extra request param last_id, like this
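Put together, the entity and the invocation might look like this (the table and parameter names follow the example above; host and port follow other examples in the thread):

```xml
<entity name="atable"
        query="select * from atable where
               id &gt; '${dataimporter.request.last_id}'"/>
```

and then DIH is invoked with the extra parameter appended:
http://localhost:8983/solr/dataimport?command=full-import&last_id=<last-indexed-id>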
Solr uses the StAX API for parsing XML.
Use org.apache.solr.client.solrj.impl.XMLResponseParser.
--Noble
On Fri, Jul 4, 2008 at 11:13 AM, Ranjeet [EMAIL PROTECTED] wrote:
Hi,
Which parser is used to parse Response XML got from Solr search server. can
you pass sample code if you have.
Thanks
apache-solr-solrj-1.3-dev.jar
apache-solr-common-1.3-dev.jar
On Fri, Jul 4, 2008 at 12:19 PM, Ranjeet [EMAIL PROTECTED] wrote:
which jar file should we include?
ranjeet
- Original Message - From: Noble Paul നോബിള് नोब्ळ्
[EMAIL PROTECTED]
To: solr-user@lucene.apache.org
Sent:
You can delete either by a query or by an id, just as you would with any
database. If you can find a condition that identifies these docs, then
you can delete by a query.
--Noble
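As a sketch, a delete-by-query posted to the /update handler looks like this (the field and value are hypothetical):

```xml
<delete><query>source:old-batch</query></delete>
<commit/>
```

A delete by id is the same idea with <id>...</id> in place of the <query> element.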
On Fri, Jul 4, 2008 at 8:22 PM, Jonathan Ariel [EMAIL PROTECTED] wrote:
Hi,
Is there any good way to do a bulk
, Ranjeet Jha
[EMAIL PROTECTED]
wrote:
It would be great if you send the sample code to parse the Solr
responseXML.
ranjeet
Take a look at http://wiki.apache.org/solr/Solrj
On Sun, Jul 6, 2008 at 7:23 PM, Ranjeet Jha
[EMAIL PROTECTED]
wrote:
the following link but could not find any section on
setting the classpath for SolrJ. Please help me with how to resolve this problem.
Thanks
Ranjeet
- Original Message - From: Noble Paul നോബിള് नोब्ळ्
[EMAIL PROTECTED]
To: solr-user@lucene.apache.org
Sent: Monday, July 07, 2008 3:29 PM
The context 'solr' is not initialized. The most likely reason is that
you have not set solr.home correctly.
--Noble
On Wed, Jul 9, 2008 at 3:24 AM, sandeep kaur [EMAIL PROTECTED] wrote:
Hi,
As i am running tomcat after copying the solr files to appropriate tomcat
directories, i am
You can put it into a 'string' field directly
On Wed, Jul 9, 2008 at 7:41 PM, Alexander Ramos Jardim
[EMAIL PROTECTED] wrote:
I need to put big xml files on a string field in one of my projects. Does
Solr accept it automatically or should I put a !CDATA on my xml before
putting on the index?
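If the embedded markup is wrapped in CDATA, the add command would look something like this (document id and field names are hypothetical; the CDATA section keeps the inner markup from being parsed as part of the update message):

```xml
<add>
  <doc>
    <field name="id">doc-1</field>
    <field name="body"><![CDATA[<inner><xml attr="1"/></inner>]]></field>
  </doc>
</add>
```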
Yep, you can't search it. It is better to extract the data out and index
it if you want to search.
On Wed, Jul 9, 2008 at 8:37 PM, Norberto Meijome [EMAIL PROTECTED] wrote:
On Wed, 9 Jul 2008 19:51:45 +0530
Noble Paul നോബിള് नोब्ळ् [EMAIL PROTECTED]
wrote:
You can put
On Thu, Jul 10, 2008 at 7:53 AM, aris buinevicius [EMAIL PROTECTED] wrote:
We're trying to implement a large scale domain specific web email
application, and so far solr performance on the search side is really doing
well for us.
There are two limitations that I can't seem to get around
A commit does not automatically create snapshots. You must register
the listener to do so:
http://wiki.apache.org/solr/CollectionDistribution#head-532ab57f4a3a9cc3ce129a9fb698a01aceb6d0c2
--Noble
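The listener registration described on that wiki page goes into solrconfig.xml, roughly like this (the exe path is installation-specific):

```xml
<!-- run the snapshooter script after every commit -->
<listener event="postCommit" class="solr.RunExecutableListener">
  <str name="exe">snapshooter</str>
  <str name="dir">solr/bin</str>
  <bool name="wait">true</bool>
</listener>
```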
On Thu, Jul 10, 2008 at 11:56 AM, Jacob Singh [EMAIL PROTECTED] wrote:
Hi,
I'm trying to get
part of my
question? It says it is failing, but doesn't look like it, but then I
have nothing to go on.
Best,
Jacob
A searcher is started anyway. I do not think it is very expensive if the
searcher is not used.
On Thu, Jul 10, 2008 at 11:04 PM, climbingrose [EMAIL PROTECTED] wrote:
You do, I think. Have a look at DirectUpdateHandler2 class.
On Thu, Jul 10, 2008 at 9:16 PM, Gudata [EMAIL PROTECTED] wrote:
On Fri, Jul 11, 2008 at 11:46 PM, Jon Baer [EMAIL PROTECTED] wrote:
Hi,
On the wiki it says that the url attribute can be templatized, but I'm not sure
how that happens. Do I need to create something read from a database
column in order to use that type of function? I.e., I'd like to run over some
You may need to check your classpath and ensure that you are using the
correct versions of httpclient (the ones shipped with solr)
On Mon, Jul 14, 2008 at 1:00 PM, Ranjeet [EMAIL PROTECTED] wrote:
Hi,
I am new to solrJ, in following code I got exception. Plese help, we did not
get
You must do a check before adding documents
On Tue, Jul 15, 2008 at 1:15 PM, Sunil [EMAIL PROTECTED] wrote:
Hi All,
I want to change the duplicate content behavior in solr. What I want to
do is:
1) I don't want duplicate content.
2) I don't want to overwrite old content with new one.
Can we collect more information? It would be nice to know what the
threads are doing when it hangs.
If you are using *nix, issue kill -3 <pid>.
It would print out the stacktrace of all the threads in the VM. That
may tell us the state of each thread, which could help us
suggest something.
On
comments inline
On Thu, Jul 17, 2008 at 5:00 AM, wojtekpia [EMAIL PROTECTED] wrote:
I have two questions:
1. I am pulling data from 2 data sources using the DIH. I am using the
deltaQuery functionality. Since the data sources pull data sequentially, I
find that some data is getting
it is a bug . I'll post a new patch
On Sat, Jul 19, 2008 at 7:10 PM, chris sleeman [EMAIL PROTECTED] wrote:
Hi,
I have a multivalued Solr text field, called 'categories', which is mapped
to a String[] in my java bean. I am directly converting the search results
to this bean.
This works
meanwhile, you can manage by making the field
List<String> categories;
A patch is submitted in SOLR-536
The data is stored in Lucene format. Lucene is the place to look if
you want to know the exact format.
Lucene stores only stored fields. If you need to just index the
data, the actual amount of data may be less than that of the input.
--Noble
On Mon, Jul 21, 2008 at 6:00 PM, sanraj25 [EMAIL
Over time we have realized this is a common pattern of requirements.
Pre-import and post-import callback hooks are something I can think of.
On Mon, Jul 21, 2008 at 11:13 PM, Jonathan Lee [EMAIL PROTECTED] wrote:
Hello,
I have been using the DataImportHandler successfully to import documents
Did you take a look at DataImportHandler?
On Wed, Jul 23, 2008 at 1:57 AM, Ravish Bhagdev
[EMAIL PROTECTED] wrote:
Can't you write triggers for your database/tables you want to index?
That way you can keep track of all kinds of changes and updates and
not just addition of a new record.
Thanks a lot. This is what keeps us going :)
On Tue, Jul 29, 2008 at 8:42 PM, Jeremy Hinegardner
[EMAIL PROTECTED] wrote:
Thanks for the info. I don't really need it, I was just pondering some
options.
I noticed, anecdotally, in my logs that it didn't appear to be doing
database queries
is the server Solr1.2 or Solr1.3?
On Wed, Jul 30, 2008 at 9:57 PM, Ranjeet [EMAIL PROTECTED] wrote:
Hi Shalin,
I have written a client to index the document, but unfortunately I am
getting an exception. I have attached the source code of the client and
the exception; please guide me on where I am making a mistake.
If the annotation is applied to a setter method , it should take only one param
On Thu, Jul 31, 2008 at 8:20 PM, Ranjeet [EMAIL PROTECTED] wrote:
Hi,
I have attached the source code to index the document via SolrJ. I am trying
this with a POJO, referring to http://wiki.apache.org/solr/Solrj, to
Write a custom ServletFilter and apply it before the
SolrDispatchFilter for /update.
Do the validation in the filter.
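In web.xml the ordering is what matters: the custom filter must be mapped before SolrDispatchFilter so it sees /update requests first. A sketch (the filter and class names are hypothetical):

```xml
<filter>
  <filter-name>update-auth</filter-name>
  <filter-class>com.example.UpdateAuthFilter</filter-class>
</filter>
<!-- declared before SolrDispatchFilter's mapping:
     filters run in mapping order -->
<filter-mapping>
  <filter-name>update-auth</filter-name>
  <url-pattern>/update/*</url-pattern>
</filter-mapping>
```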
On Wed, Aug 13, 2008 at 5:05 PM, Sunil [EMAIL PROTECTED] wrote:
Hi,
I want to password protect the solr select/update/delete URLs. Any link
from where I can get some help
There is a feature (SOLR-561) being built for doing replication on
any platform. The patch works and it is tested. Do not expect it to
work with the current trunk, because a lot has changed in trunk since
the last patch. We will be updating it soon once the dust settles
down.
-
On Fri, Aug
Please post the question as a separate mail with a new subject.
On Mon, Aug 18, 2008 at 9:58 AM, finy finy [EMAIL PROTECTED] wrote:
I'm sorry, I am not familiar with the mailing list operation.
My question is:
I checked the Solr source code and found that it uses Lucene's QueryParser
to parse the user's
Keep a slave handy as the second master, and if the real master goes
down, let the second one take over.
On Mon, Aug 18, 2008 at 4:44 PM, dudes dudes [EMAIL PROTECTED] wrote:
Hello all,
I'm looking for a doc that covers the following situation:
How can two Solr servers be synchronised, with
Oh, the names are in CAPS, and in your data-config they are in lowercase.
It must be something like this:

<field column="MYGALLERY_ID" name="id"/>
On Thu, Aug 21, 2008 at 7:33 PM, Todd Breiholz [EMAIL PROTECTED] wrote:
On Wed, Aug 20, 2008 at 10:54 PM, Shalin Shekhar Mangar
[EMAIL PROTECTED] wrote:
Does
Which build of Solr are you using? The feature of loading libraries
from ${solr.home}/lib was added recently. I guess it must work with the
latest trunk; if it doesn't, please let us know.
--Noble
On Mon, Aug 25, 2008 at 8:41 PM, Walter Ferrara [EMAIL PROTECTED] wrote:
Launching a multicore solr
are you sure you committed after the 'delete' ?
On Wed, Sep 10, 2008 at 2:26 PM, Athok [EMAIL PROTECTED] wrote:
Hello, I am begining with solr and i have a problem with the delete by query.
If i do a query to solr, it give me the results that i hope but when the
query is sent by XML as a
I guess the post is not sending the correct 'wt' parameter. Try
setting wt=javabin explicitly.
wt=xml may not work because the parser is still binary.
Check this: http://wiki.apache.org/solr/Solrj#xmlparser
On Thu, Sep 18, 2008 at 11:49 AM, Otis Gospodnetic
[EMAIL PROTECTED] wrote:
A quick
On Thu, Sep 18, 2008 at 10:17 PM, syoung [EMAIL PROTECTED] wrote:
I tried setting the 'wt' parameter to both 'xml' and 'javabin'. Neither
worked. However, setting the parser on the server to XMLResponseParser did
fix the problem. Thanks for the help.
Susan