which version of Solr are you using?
On Wed, Apr 1, 2009 at 12:01 PM, Radha C. cra...@ceiindia.com wrote:
Hi All,
I am trying to index documents by using solrj client. I have written a
simple code below,
{
CommonsHttpSolrServer server = new
: Radha C. [mailto:cra...@ceiindia.com]
Sent: Wednesday, April 01, 2009 12:28 PM
To: solr-user@lucene.apache.org
Subject: RE: Runtime exception when adding documents using solrj
I am using Solr 1.3 version
_
From: Noble Paul നോബിള് नोब्ळ् [mailto:noble.p...@gmail.com]
Sent: Wednesday, April 01, 2009 12:16 PM
To: solr-user@lucene.apache.org; cra...@ceiindia.com
Subject: Re
I guess Solr itself is hogging more memory.
Maybe you can try reloading the core before each import.
On Wed, Apr 1, 2009 at 3:19 PM, Marc Sturlese marc.sturl...@gmail.com wrote:
Hey there,
I am doing performance tests with full-import command from
DataImportHandler. I have configured 20
I guess dateFormat does the job properly but the returned value is
changed according to timezone.
can you try this out: add an extra field which converts the date via toString()
<field column="original_air_date_d_str" template="${entityname.original_air_date_d}"/>
this would add an extra field as a string
why is the POJO extending FieldType?
it does not have to.
composite types are not supported, because Solr cannot support that.
But the field can be a List or array.
On Thu, Apr 2, 2009 at 5:00 PM, Praveen Kumar Jayaram
praveen198...@gmail.com wrote:
Could someone give suggestions for this
If you are looking at the QTime on the master it is likely to be
skewed by ReplicationHandler because the files are downloaded using a
request. On a slave it should not be a problem.
I guess we must not add the qtimes of ReplicationHandler
--Noble
On Thu, Apr 2, 2009 at 5:34 PM, sunnyfr
I wonder if I shouldn't put 7G as the JVM Xmx, I don't know,
but the slave also has a little problem during replication from the master.
Noble Paul നോബിള് नोब्ळ् wrote:
If you are looking at the QTime on the master it is likely to be
skewed by ReplicationHandler because the files are downloaded using
folders merged, so every time
it brings back 10G of data.
And during this time the response time of my requests is very slow.
What can I check?
Thanks Paul
2009/4/2 Noble Paul നോബിള് नोब्ळ् noble.p...@gmail.com
slave would not show increased request times because of replication.
If it does
put your native dll/.so file in the LD_LIBRARY_PATH and start Solr
with that. Or the best solution is to use a pure Java driver
On Thu, Apr 2, 2009 at 8:13 PM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
On Thu, Apr 2, 2009 at 6:57 PM, Radha C. cra...@ceiindia.com wrote:
Hello List,
This looks strange. Apparently the Transformer did not get applied. Is
it possible for you to debug ClobTransformer? (Adding System.out.println
into ClobTransformer may help.)
On Fri, Apr 3, 2009 at 6:04 AM, ashokc ash...@qualcomm.com wrote:
Correcting my earlier post. It lost some lines somehow.
scripts?) the source and
replace the jar for DIH, right? I can try - for the first time.
- ashok
Noble Paul നോബിള് नोब्ळ् wrote:
This looks strange. Apparently the Transformer did not get applied. Is
it possible for you to debug ClobTransformer? (Adding System.out.println
into ClobTransformer
is to write your field value as one String. Let the
FieldType in Solr parse and create appropriate data structure.
Noble Paul നോബിള് नोब्ळ् wrote:
why is the POJO extending FieldType?
it does not have to.
composite types are not supported, because Solr cannot support that.
But the field can
=second_date_s
xpath="/add/doc/field[@name='original_air_date_d']" /
When I do this, only the second_date_s will make it into the index. I know
the first_date_d instruction is valid, but it just disappears.
Any thoughts?
On 4/1/09 11:59 PM, Noble Paul നോബിള് नोब्ळ् noble.p...@gmail.com
wrote:
I guess
(with ant/maven scripts?) the source
and
replace the jar for DIH, right? I can try - for the first time.
- ashok
Noble Paul നോബിള് नोब्ळ् wrote:
This looks strange. Apparently the Transformer did not get applied. Is
it possible for you to debug ClobTransformer? (Adding System.out.println
<date name="timestamp">2009-04-03T11:47:32.635Z</date>
</doc>
Noble Paul നോബിള് नोब्ळ् wrote:
There is something else wrong with your setup.
can you just paste the whole data-config.xml
--Noble
On Fri, Apr 3, 2009 at 5:39 PM, ashokc ash...@qualcomm.com wrote:
Noble,
I put in a few
came with. Thanks Noble.
Noble Paul നോബിള് नोब्ळ् wrote:
and which version of Solr are you using?
On Fri, Apr 3, 2009 at 10:09 PM, ashokc ash...@qualcomm.com wrote:
Sure:
data-config XML
===
<dataConfig>
<dataSource driver="oracle.jdbc.driver.OracleDriver"
url
the column names are case-sensitive; try this:
<field column="PROJECT_AREA" name="projects" />
<field column="PROJECT_VERSION" name="projects" />
On Sat, Apr 4, 2009 at 3:58 AM, ashokc ash...@qualcomm.com wrote:
Hi,
I need to assign multiple values to a field, with each value coming from a
this transformer can be modified to be case-insensitive for
the column names. If you had written it perhaps it is a quick change for
you?
Noble Paul നോബിള് नोब्ळ् wrote:
I guess you can write a custom transformer which gets a String out of
the oracle.sql.CLOB. I am just clueless as to why this may happen
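A minimal sketch of such a transformer, under the assumption (per the DIH wiki) that DIH accepts via reflection any class with a `public Object transformRow(Map<String, Object> row)` method, so no Solr imports are needed; the class name is hypothetical:

```java
import java.io.IOException;
import java.io.Reader;
import java.sql.Clob;
import java.sql.SQLException;
import java.util.Map;

// Hypothetical custom DIH transformer: converts any java.sql.Clob
// (e.g. oracle.sql.CLOB) column value in the row into a plain String.
public class ClobToStringTransformer {
    public Object transformRow(Map<String, Object> row) {
        for (Map.Entry<String, Object> e : row.entrySet()) {
            if (e.getValue() instanceof Clob) {
                e.setValue(readClob((Clob) e.getValue()));
            }
        }
        return row;
    }

    // Drain the Clob's character stream into a String.
    static String readClob(Clob clob) {
        StringBuilder sb = new StringBuilder();
        char[] buf = new char[1024];
        try (Reader r = clob.getCharacterStream()) {
            int n;
            while ((n = r.read(buf)) != -1) {
                sb.append(buf, 0, n);
            }
        } catch (IOException | SQLException ex) {
            throw new RuntimeException("failed to read CLOB", ex);
        }
        return sb.toString();
    }
}
```

It would then be referenced as transformer="com.example.ClobToStringTransformer" on the entity (package name hypothetical).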
to Solr. So please don't mind if I ask
dumb questions.
Please give some sample example if possible.
Noble Paul നോബിള് नोब्ळ् wrote:
On Fri, Apr 3, 2009 at 11:28 AM, Praveen Kumar Jayaram
praveen198...@gmail.com wrote:
Thanks for the reply Noble Paul.
In my application I will be having
There is a debug mode
http://wiki.apache.org/solr/DataImportHandler#head-0b0ff832aa29f5ba39c22b99603996e8a2f2d801
On Mon, Apr 6, 2009 at 2:35 PM, Wesley Small wesley.sm...@mtvstaff.com wrote:
Good Morning,
Is there any way to specify or debug a specific DIH configuration via the
API/http
how are you indexing?
On Mon, Apr 6, 2009 at 2:54 PM, Veselin Kantsev
vese...@campbell-lange.net wrote:
Hello,
apologies for the basic question.
How can I avoid double indexing files?
In case all my files are in one folder which is scanned frequently, is
there a Solr feature of checking
hi sunnyfr,
I wish to clarify something.
you say that the performance is poor during the replication.
I suspect that the performance is poor soon after the replication. The
reason being, replication is a low-CPU activity. If you think
otherwise let me know how you found it out.
If the perf is
of the CPU like you can see on the graph, the first
part
http://www.nabble.com/file/p22925561/cpu_.jpg cpu_.jpg
and on this graph, the first part (blue part) is just
replication, no requests at all.
Normally I have 20 requests per second.
What would you reckon?
Noble Paul നോബിള്
are these the numbers for non-cached requests?
On Tue, Apr 7, 2009 at 11:46 AM, CIF Search cifsea...@gmail.com wrote:
Hi,
I have around 10 Solr servers running indexes of around 80-85 GB each,
with 16,000,000 docs each. When I use distrib for querying, I am not
getting a satisfactory
do you see the same problem when you use a single thread?
what is the version of SolrJ that you use?
On Wed, Apr 8, 2009 at 1:19 PM, vivek sar vivex...@gmail.com wrote:
Hi,
Any ideas on this issue? I ran into this again - once it starts
happening it keeps happening. One of the thread
So what I decipher from the numbers is w/o queries Solr replication is
not performing too badly. The queries are inherently slow and you wish
to optimize the query performance itself.
am I correct?
On Tue, Apr 7, 2009 at 7:50 PM, sunnyfr johanna...@gmail.com wrote:
Hi,
So I did two test on
I guess we must add functions for bitwise operations.
say
* bor
* bxor
* band
* bcompliment
These functionalities are trivial to add and may be useful for people
who are familiar with bit operations
--Noble
On Thu, Apr 9, 2009 at 1:10 AM, Erik Hatcher e...@ehatchersolutions.com wrote:
No,
Note, I'm using simple lock type. I'd tried single type before;
that once caused index corruption, so I switched to simple.
Thanks,
-vivek
2009/4/8 Noble Paul നോബിള് नोब्ळ् noble.p...@gmail.com:
do you see the same problem when you use a single thread?
what is the version of SolrJ that you use
:397)
at
org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
Note, I'm using simple lock type. I'd tried single type before;
that once caused index corruption, so I switched to simple.
Thanks,
-vivek
2009/4/8 Noble Paul നോബിള് नोब्ळ् noble.p...@gmail.com:
do
FileDataSource is of type Reader, meaning getData() returns
a java.io.Reader. That is not very suitable for you.
Your best bet is to write a simple DataSource which returns an
Iterator<Map<String,Object>> after reading the serialized objects.
This is what JdbcDataSource does. Then you can use it with
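A sketch of just the stream-reading part, using plain JDK serialization (hypothetical helper class; a real implementation would wrap this inside a subclass of DIH's DataSource and hand the iterator to the entity processor):

```java
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.io.ObjectInputStream;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// Hypothetical helper: reads serialized Map<String,Object> rows from a
// stream and exposes them as the Iterator shape DIH expects.
public class SerializedRowReader {
    public static Iterator<Map<String, Object>> read(InputStream in) throws IOException {
        ObjectInputStream ois = new ObjectInputStream(in);
        List<Map<String, Object>> rows = new ArrayList<>();
        try {
            while (true) {
                @SuppressWarnings("unchecked")
                Map<String, Object> row = (Map<String, Object>) ois.readObject();
                rows.add(row);
            }
        } catch (EOFException eof) {
            // clean end of stream: no more serialized rows
        } catch (ClassNotFoundException e) {
            throw new IOException(e);
        }
        ois.close();
        return rows.iterator();
    }
}
```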
jvm with Solr webapp?
Thanks,
-vivek
2009/4/9 Noble Paul നോബിള് नोब्ळ् noble.p...@gmail.com:
how many documents are you inserting?
Maybe you can create multiple instances of CommonsHttpSolrServer and
upload in parallel
On Thu, Apr 9, 2009 at 11:58 AM, vivek sar vivex...@gmail.com
On Thu, Apr 9, 2009 at 8:51 PM, sunnyfr johanna...@gmail.com wrote:
Hi Otis,
How did you manage that? I have an 8-core machine with 8GB of RAM and an 11GB index
for 14M docs, with 5 updates every 30mn, but my replication kills everything.
My segments are merged too often, so the full index replicates. What would you
do for frequent updates on a large database with a lot of
queries on it?
Do they turn off the slave during the warmup?
Noble Paul നോബിള് नोब्ळ् wrote:
On Thu, Apr 9, 2009 at 8:51 PM, sunnyfr johanna...@gmail.com wrote:
Hi Otis,
How did you manage that? I have an 8-core machine with 8GB of RAM
If you use StreamingUpdateSolrServer it POSTs all the docs in a single
request. 10 million docs may be a bit too much for a single request. I
guess you should batch it in multiple requests of smaller chunks,
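A sketch of that batching with a plain chunking helper (hypothetical class; each chunk would then go to one `server.add(...)` call on a SolrJ server instance):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: split a large document list into fixed-size chunks,
// so each chunk becomes one modest-sized add request instead of one huge POST.
public class BatchUploader {
    public static <T> List<List<T>> chunks(List<T> docs, int batchSize) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < docs.size(); i += batchSize) {
            // copy the view so each chunk is independent of the source list
            out.add(new ArrayList<>(docs.subList(i, Math.min(i + batchSize, docs.size()))));
        }
        return out;
    }
}
```

Each chunk would then be sent and, if desired, committed separately, keeping any single request small.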
It is likely that the CPU is really hot when the autowarming is happening.
getting a
it is likely that your query did not return any data. Just run the
query separately and see if it really works.
Or try it out in debug mode. it will tell you which query was run and
what got returned.
--Noble
2009/4/13 Vincent Pérès vincent.pe...@gmail.com:
Hello,
I'm trying to import a
On Tue, Apr 14, 2009 at 7:14 AM, vivek sar vivex...@gmail.com wrote:
Some more update. As I mentioned earlier we are using multi-core Solr
(up to 65 cores in one Solr instance with each core 10G). This was
opening around 3000 file descriptors (lsof). I removed some cores and
after some trial
DIH itself may not be consuming so much memory. It also includes the
memory used by Solr.
Do you have a hard limit of 400MB? Is it not possible to increase it?
On Tue, Apr 14, 2009 at 11:09 AM, Mani Kumar manikumarchau...@gmail.com wrote:
Hi ILAN:
Only one query is required to generate a
what is the content of your text file?
Solr does not directly index files
--Noble
On Tue, Apr 14, 2009 at 3:54 AM, Alex Vu alex.v...@gmail.com wrote:
Hi all,
Currently I wrote an xml file and schema.xml file. What is the next step to
index a txt file? Where should I put my txt file I want to
DIH streams 1 row at a time.
DIH is just a component in Solr. Solr indexing also takes a lot of memory
On Tue, Apr 14, 2009 at 12:02 PM, Mani Kumar manikumarchau...@gmail.com wrote:
Yes, it's throwing the same OOM error and from the same place...
Yes, I will try increasing the size... just curious:
I have too
many updates every 30mn, something like 2000 docs, and almost all segments are
modified.
What would you reckon? :( :)
Thanks a lot Noble
Noble Paul നോബിള് नोब्ळ् wrote:
So what I decipher from the numbers is w/o queries Solr replication is
not performing too badly. The queries
nope,
but it is possible to have multiple root entities within a document
and you can execute one at a time.
--Noble
On Tue, Apr 14, 2009 at 4:15 PM, gateway0 reiterwo...@yahoo.de wrote:
Hi,
is it possible to use more than one document tag within my data-config.xml
file?
Like:
use TemplateTransformer to create a key
On Tue, Apr 14, 2009 at 9:49 PM, ashokc ash...@qualcomm.com wrote:
Hi,
I have separate JDBC datasources (DS1 and DS2) that I want to index with DIH
in a single Solr instance. The unique record keys for the two sources are
different. Do I have to synthesize a
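A sketch of that with TemplateTransformer (entity, column, and datasource names are hypothetical): prefixing each source's id keeps the synthesized keys from colliding:

```xml
<entity name="ds1_records" dataSource="DS1" transformer="TemplateTransformer"
        query="select ID, NAME from records">
  <field column="id" template="ds1-${ds1_records.ID}"/>
</entity>
<entity name="ds2_records" dataSource="DS2" transformer="TemplateTransformer"
        query="select ID, NAME from records">
  <field column="id" template="ds2-${ds2_records.ID}"/>
</entity>
```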
On Wed, Apr 15, 2009 at 12:39 AM, Development Team dev.and...@gmail.com wrote:
Hi everybody,
I have a relatively large index (it will eventually contain ~4M
documents and be about 3G in size, I think) that indexes user data,
settings, and the like. The documents represent a community of
I guess SOLR-599 can be easily fixed if we do not implement
Multipart-support (which is non-essential)
--Noble
On Wed, Apr 15, 2009 at 1:12 AM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
On Wed, Apr 15, 2009 at 12:47 AM, Glen Newton glen.new...@gmail.com wrote:
I see. So this is a
On Thu, Apr 16, 2009 at 3:45 AM, Marc Sturlese marc.sturl...@gmail.com wrote:
Hey there,
I have been reading about StreamingUpdateSolrServer but can't catch exactly
how it works:
More efficient index construction over HTTP with SolrJ. If you're doing it,
this is a fantastic performance
affected
by this bug too.
Apr 15, 2009 1:21:58 PM org.apache.solr.handler.dataimport.JdbcDataSource
init
WARNING: Invalid batch size: null
-Bryan
On Apr 13, 2009, at 11:48 PM, Noble Paul നോബിള് नोब्ळ् wrote:
DIH streams 1 row at a time.
DIH is just a component in Solr. Solr
did you try the deletedPkQuery?
On Thu, Apr 16, 2009 at 7:49 PM, Ruben Chadien ruben.chad...@aspiro.com wrote:
Hi
I am new to Solr, but have been using Lucene for a while. I am trying to
rewrite
some old Lucene indexing code using the JDBC DataImport in Solr; my problem:
I have Entities
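For reference, deletedPkQuery is an attribute on the root entity in data-config.xml; a sketch with hypothetical table and column names:

```xml
<entity name="item" pk="ID"
        query="select * from item"
        deltaQuery="select ID from item where last_modified > '${dataimporter.last_index_time}'"
        deletedPkQuery="select ID from item where deleted = 1">
</entity>
```

During a delta-import, the ids returned by deletedPkQuery are removed from the index.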
On Thu, Apr 16, 2009 at 10:34 PM, Allahbaksh Asadullah
allahbaks...@gmail.com wrote:
Hi, I have followed the procedure given on this blog to set up Solr.
Below is my code. I am trying to index the data, but I am not able to connect
to the server and am getting an authentication error.
HttpClient
I guess strings are stored by Lucene in UTF-8 always. BTW, as you pass
the Object as a String, the encoding is lost.
On Thu, Apr 16, 2009 at 7:37 PM, AlexxelA alexandre.boudrea...@canoe.ca wrote:
I'm using the DataImportHandler and my database is in latin1. When I
retrieve documents that I have
It is fixed in the trunk
On Thu, Apr 16, 2009 at 10:47 PM, Allahbaksh Asadullah
allahbaks...@gmail.com wrote:
Thanks Noble. Regards,
Allahbaksh
2009/4/16 Noble Paul നോബിള് नोब्ळ् noble.p...@gmail.com
On Thu, Apr 16, 2009 at 10:34 PM, Allahbaksh Asadullah
allahbaks...@gmail.com wrote:
Hi
these are for the beginning and end of the whole indexing process
On Fri, Apr 17, 2009 at 7:38 PM, Marc Sturlese marc.sturl...@gmail.com wrote:
Hey there,
I have seen the new feature of EventListeners of DIH in trunk.
<dataConfig>
<document onImportStart="com.FooStart" onImportEnd="com.FooEnd">
httpClient.getHttpConnectionManager().closeIdleConnections();
--Noble
On Sat, Apr 18, 2009 at 1:31 AM, Rakesh Sinha rakesh.use...@gmail.com wrote:
When we instantiate a commonshttpsolrserver - we use the following method.
CommonsHttpSolrServer server = new
the snapshooter does not really copy any files. They are just hard links
(which do not consume disk space), so even a full copy is not very
expensive
On Sat, Apr 18, 2009 at 12:06 PM, Koushik Mitra
koushik_mi...@infosys.com wrote:
Hi,
We want to create snapshot incrementally.
What we want is every
-rw-r- 46 test test 333 Apr 17 23:26 _i.frq
-rw-r- 46 test test 135 Apr 17 23:26 _i.fnm
-rw-r- 46 test test 12 Apr 17 23:26 _i.fdx
-rw-r- 46 test test 1433 Apr 17 23:26 _i.fdt
Regards,
Koushik
On 18/04/09 12:17 PM, Noble Paul നോബിള് नोब्ळ्
There may be APIs which were introduced after the 1.3 release and are not
yet crystallized; they may change by the time the release happens.
As a rule of thumb, any feature in CHANGES.txt post-1.3 is possible
to change. But as long as you know what you are using you should be
OK.
It is not quite possible
On Mon, Apr 20, 2009 at 7:15 PM, ahammad ahmed.ham...@gmail.com wrote:
Hello,
I've never used Solr before, but I believe that it will suit my current
needs with indexing information from a database.
I downloaded and extracted Solr 1.3 to play around with it. I've been
looking at the
to me like a sort of filter. What if I don't want to
filter anything and just want to index all the rows?
Cheers
Noble Paul നോബിള് नोब्ळ् wrote:
On Mon, Apr 20, 2009 at 7:15 PM, ahammad ahmed.ham...@gmail.com wrote:
Hello,
I've never used Solr before, but I believe that it will suit
the fact that there is nothing in the data dir suggests that you are
looking at the wrong directory. Just fire a query for *:* and it will
tell you if there are indeed documents in the index. The statistics
admin page can tell you where the index is created
On Thu, Apr 23, 2009 at 12:25 AM,
Let me assume that you are using the inbuilt replication.
The replication tries to set the timestamp of all the files to be the same as
that of the files in the master. Just cross-check.
On Thu, Apr 23, 2009 at 6:57 AM, Jian Han Guo jian...@gmail.com wrote:
Hi,
I am using nightly build on 4/22/2009.
are in sync after replication. Don't know
how it does that.
I haven't checked if the two machines are in sync, but even if they are not,
the timestamp should not be Dec 31, 1969, I think.
Thanks,
Jianhan
2009/4/22 Noble Paul നോബിള് नोब्ळ् noble.p...@gmail.com
Let me assume that you
nope.
you must edit the web.xml and register the filter there
On Thu, Apr 23, 2009 at 3:45 PM, Giovanni De Stefano
giovanni.destef...@gmail.com wrote:
Hello Hoss,
thank you for your reply.
I have no problems subclassing the SolrDispatchFilter...but where shall I
configure it? :-)
I cannot
looks like a bug.
https://issues.apache.org/jira/browse/SOLR-1126
On Fri, Apr 24, 2009 at 3:26 AM, Jeff Newburn jnewb...@zappos.com wrote:
I have attached the output from our filelist below. The slaves are on the
same version using the replication internal to solr 1.4. All replicated
files
It is a very uncommon use case to slowly migrate from Lucene to Solr.
I somehow feel that the piecemeal migration is going to be more
expensive than the whole migration.
happy hacking...
--Noble
On Mon, Apr 27, 2009 at 10:05 AM, Paul Libbrecht p...@activemath.org wrote:
Hello list,
I am
I guess not.
On Mon, Apr 27, 2009 at 10:42 AM, Ashish P ashish.ping...@gmail.com wrote:
Right. But is there a way to track file updates and diffs?
Thanks,
Ashish
Noble Paul നോബിള് नोब्ळ् wrote:
If you can check it out into a directory using SVN command then you
may use DIH to index
Sub-entities can slow down indexing remarkably. What is that
datasource? A DB? Then try using CachedSqlEntityProcessor
On Tue, Jun 14, 2011 at 8:31 PM, Mark static.void@gmail.com wrote:
Hello all,
We are using DIH to index our data (~6M documents) and its taking an
extremely long time (~24
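A sketch of the cached sub-entity form (hypothetical table and column names, using the where= syntax from the DIH wiki); CachedSqlEntityProcessor runs the child query once and serves later lookups from an in-memory cache instead of one query per parent row:

```xml
<entity name="product" query="select id, name from product">
  <entity name="feature" processor="CachedSqlEntityProcessor"
          query="select product_id, description from feature"
          where="product_id=product.id"/>
</entity>
```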
Use TemplateTransformer
<dataConfig>
<dataSource
name="wld"
type="JdbcDataSource"
driver="com.mysql.jdbc.Driver"
url="jdbc:mysql://localhost/wld"
user="root"
password="pass"/>
<document name="variants">
it will be in the solr logs
On Tue, Jun 21, 2011 at 2:18 PM, Alucard alucard...@gmail.com wrote:
Hi all.
I follow the steps of creating a LogTransformer in DataImportHandler wiki:
<entity name="office_address" dataSource="jdbc"
pk="office_add_Key" transformer="LogTransformer" logLevel="debug"
On Thu, Jun 23, 2011 at 9:13 PM, simon mtnes...@gmail.com wrote:
The Wiki page describes a design for a scheduler, which has not been
committed to Solr yet (I checked). I did see a patch the other day
(see https://issues.apache.org/jira/browse/SOLR-2305) but it didn't
look well tested.
I
no
On Mon, Jul 4, 2011 at 3:34 PM, Kiwi de coder kiwio...@gmail.com wrote:
hi,
I'm wondering, does the solrj @Field annotation support embedded child objects? e.g.
class A {
  @Field
  String somefield;
  @Embedded
  B b;
}
regards,
kiwi
--
There is no spec documented anywhere. It is all in this single file
https://svn.apache.org/repos/asf/lucene/dev/trunk/solr/solrj/src/java/org/apache/solr/common/util/JavaBinCodec.java
On Wed, Jul 25, 2012 at 6:47 PM, Ahmet Arslan iori...@yahoo.com wrote:
Sorry, but I could not find any spec
there is an issue already to write to the index in a separate thread.
https://issues.apache.org/jira/browse/SOLR-1089
On Tue, Apr 28, 2009 at 4:15 AM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
On Tue, Apr 28, 2009 at 3:43 AM, Amit Nithian anith...@gmail.com wrote:
All,
I have a few
apparently you do not have the driver in the path. drop your driver
jar into ${solr.home}/lib
On Tue, Apr 28, 2009 at 4:42 AM, gateway0 reiterwo...@yahoo.de wrote:
Hi,
sure:
message Severe errors in solr configuration. Check your log files for more
detailed information on what may be
the Solr distro contains all the jar files. you can take either the
latest release (1.3) or a nightly
On Tue, Apr 28, 2009 at 11:34 AM, ahmed baseet ahmed.bas...@gmail.com wrote:
As far as I know, Maven is a build/mgmt tool for Java projects quite similar
to Ant, right? No, I'm not using this,
writing to a remote Solr through SolrJ is in the cards. I may even
take it up after the 1.4 release. For now your best bet is to extend the
class SolrWriter and override the corresponding methods for
add/delete.
On Wed, Apr 29, 2009 at 2:06 AM, Amit Nithian anith...@gmail.com wrote:
I do remember
Is the serialized data a UTF-8 string?
On Wed, Apr 29, 2009 at 6:42 AM, Matt Mitchell goodie...@gmail.com wrote:
Hi,
I'm attempting to serialize a simple ruby object into a solr.StrField - but
it seems that what I'm getting back is munged up a bit, in that I can't
de-serialize it. Is there
just put it in the debugger and you will know whether the query is indeed
fetching any rows
On Wed, Apr 29, 2009 at 2:59 AM, Ci-man ciber...@yahoo.com wrote:
Thanks for your question.
Yes all the fields are defined in the schema; I am using the default schema
and mapping between the DB fields and
forgot to mention DIH has a debug mode
2009/4/29 Noble Paul നോബിള് नोब्ळ् noble.p...@gmail.com:
just put it in the debugger and you will know whether the query is indeed
fetching any rows
On Wed, Apr 29, 2009 at 2:59 AM, Ci-man ciber...@yahoo.com wrote:
Thanks for your question.
Yes all
="org.apache.solr.handler.dataimport.DataImportHandler">
<lst name="defaults">
<str name="config">/Applications/solr/conf/data-config.xml</str>
</lst>
</requestHandler>
Noble Paul നോബിള് नोब्ळ् wrote:
apparently you do not have the driver in the path. drop your driver
jar into ${solr.home}/lib
On Tue, Apr 28
On Wed, Apr 29, 2009 at 3:24 PM, Wouter Samaey wouter.sam...@gmail.com wrote:
Hi there,
I'm currently in the process of learning more about Solr, and how I
can implement it into my project.
Since my database is very large and complex, I'm looking into the way
of keeping my documents current
Nope, details is the only command which can give you this info.
On Wed, Apr 29, 2009 at 7:10 PM, sunnyfr johanna...@gmail.com wrote:
Hi,
Just to know if there is a quick way to get the information without hitting
replication?command=details
like =isReplicating
Thanks,
Solr uses the java.util.concurrent package, which is not available in Java 1.4.
So it may be impossible to make Solr work on Java 1.4
On Thu, Apr 30, 2009 at 5:47 PM, Smiley, David W. dsmi...@mitre.org wrote:
Solr indeed requires Java 1.5.
I am not sure if anyone has tried this but you may be
did you try to POST the query?
On Fri, May 1, 2009 at 12:43 AM, ANKITBHATNAGAR abhatna...@vantage.com wrote:
Hi Guys,
I am using solr 1.3 for performing search.
I am using facet search and I am getting a connection reset error when the
query
The snapshoot feature is not yet tested. Could you please post the
full stacktrace from the server console? I shall open an issue
--Noble
On Fri, May 1, 2009 at 3:33 AM, Jian Han Guo jian...@gmail.com wrote:
Hi,
If there is no new document added since the last snapshot was created, the
The web interface returns the data in XML format because that is the
most readable format. SolrJ uses a compact binary format by default,
but you can make it use XML as well; there is no real use case for
using XML with SolrJ, though.
On Mon, May 4, 2009 at 7:06 PM, ahmed baseet ahmed.bas...@gmail.com
What component is trying to get the SolrCore? If it is implemented as
SolrCoreAware it gets a callback after the core is completely
initialized.
On Tue, May 5, 2009 at 4:29 AM, Amit Nithian anith...@gmail.com wrote:
How do you get access to the current SolrCore in code? More specifically, I
am
hi Eric,
there should be a getter for CoreContainer in EmbeddedSolrServer. Open an issue
--Noble
On Tue, May 5, 2009 at 12:17 AM, Eric Pugh
ep...@opensourceconnections.com wrote:
Hi all,
I notice that when I use EmbeddedSolrServer I have to use Control C to stop
the process. I think the way
hi Walter,
it needs synchronization. I shall open a bug.
On Mon, May 4, 2009 at 7:31 PM, Walter Ferrara walters...@gmail.com wrote:
I've got a ConcurrentModificationException during a cron-ed delta import of
DIH, I'm using multicore solr nightly from hudson 2009-04-02_08-06-47.
I don't know
If you change the conf files and reindex the documents, the changes
must be reflected. Are you sure you re-indexed?
On Tue, May 5, 2009 at 10:00 AM, Sagar Khetkade
sagar.khetk...@hotmail.com wrote:
Hi,
I came across a strange problem while reloading the core in multicore
scenario. In the
Make your class implement the interface SolrCoreAware and you will get
a callback with the core.
On Tue, May 5, 2009 at 11:11 AM, Amit Nithian anith...@gmail.com wrote:
I am trying to get at the configuration directory in an implementation of
the SolrEventListener.
2009/5/4 Noble Paul
There are two options.
1) pass on the user name and password as request parameters and use
the request parameters in the datasource
<dataSource user="x" password="${dataimporter.request.pwd}" />
where pwd is a request parameter passed
2) if you can create jndi datasources in the appserver use the
The elevate.xml is loaded from the conf dir when the core is reloaded. If
you post the new XML you will have to reload the core.
A simple solution would be to write a RequestHandler which extends
QueryElevationComponent, which can be a listener for commit and call
super.inform() on that event
On
yeah, the behavior you are observing is right. Now I have second
thoughts on how it should be. I guess, if the deltaImportQuery is
present in a child entity it should be used. You can open an issue
--Noble
On Thu, May 7, 2009 at 12:33 AM, Martin Davidsson
martin.davids...@gmail.com wrote:
wrote:
On May 6, 2009, at 15:17 , Noble Paul നോബിള് नोब्ळ् wrote:
Why would you want to write it to the data dir? why can't it be in the
same place (conf) ?
Well, fact is that the QueryElevationComponent loads the configuration file
(elevate.xml) either from the data dir, or from
it is wise to optimize the index once in a while (daily, maybe). But
it depends on how many commits you do in a day. Every commit causes
fragmentation of index files and your search can become slow if you do
not optimize it.
But always optimizing is not recommended because it is time-consuming
On Thu, May 7, 2009 at 6:15 AM, Yonik Seeley yo...@lucidimagination.com wrote:
On Wed, May 6, 2009 at 7:32 AM, Andrew Ingram a...@andrewingram.net wrote:
Basically, a product has two price values and a date, the product
effectively has one price before the date and the other one after.
This
you may need to change the MySQL connection parameters so that it does
not throw an error for null dates
jdbc:mysql://localhost/test?zeroDateTimeBehavior=convertToNull
On Thu, May 7, 2009 at 1:39 PM, gateway0 reiterwo...@yahoo.de wrote:
Hi,
when I do a full import I get the following error :
Engineer, Zappos.com
jnewb...@zappos.com - 702-943-7562
From: Noble Paul നോബിള് नोब्ळ् noble.p...@gmail.com
Reply-To: solr-user@lucene.apache.org
Date: Wed, 6 May 2009 10:05:49 +0530
To: solr-user@lucene.apache.org
Subject: Re: no subject aka Replication Stall
SOLR-1096
did you consider using an EmbeddedSolrServer?
On Thu, May 7, 2009 at 8:25 PM, arno13 arnaud.gaudi...@healthonnet.org wrote:
Do you know if it's possible to write Solr results directly to a hard disk
on the server side, and not use an HTTP connection to transfer the results?
While the query
makes sense. I'll open an issue
On Fri, May 8, 2009 at 1:53 AM, Grant Ingersoll gsing...@apache.org wrote:
On the page http://wiki.apache.org/solr/SolrReplication, it says the
following:
Force a snapshot on master. This is useful to take periodic backups. command: