Have you checked the solr log on the slave to see if there was any commit
done? It looks to me you are still using an older version of the commit
script that is not compatible with the newer Solr response format. If
that's the case, the commit was actually performed. It is just that the
script
It looks like things have changed a bit since this subject was last brought
up here. I see that there is support in Solr/Lucene for indexing payload
data (DelimitedPayloadTokenFilterFactory and DelimitedPayloadTokenFilter).
Overriding the Similarity class is straightforward. So the last piece
from the facet counts so I tried:
facet.field={!ex=cat:foo}cat
but that has no effect, as foo still shows up in the facet counts.
Still looking...
Bill
On Thu, Jul 23, 2009 at 11:53 AM, Bill Au bill.w...@gmail.com wrote:
That's actually what we have been doing. I was just wondering
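For the archives: the reason the attempt above has no effect is that {!ex=...} must reference a tag placed on an fq filter, not a raw query. A minimal Python sketch of the working parameter pairing (Solr 1.4 localParams; the field and term values here are invented for illustration):

```python
from urllib.parse import urlencode

# Pair a tagged filter (fq={!tag=...}) with a tag-excluding facet
# (facet.field={!ex=...}) so the filter is ignored when counting.
# The field/term values are made up for illustration.
def facet_params_excluding_filter(field, filter_query, tag="t"):
    return urlencode({
        "q": "*:*",
        "fq": "{!tag=%s}%s" % (tag, filter_query),    # tag the filter...
        "facet": "true",
        "facet.field": "{!ex=%s}%s" % (tag, field),   # ...and exclude it here
    })

params = facet_params_excluding_filter("cat", "cat:foo")
```

The resulting query string carries the tag on the fq and the matching ex on the facet field.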
I have a Solr text field and when I use Solr's field analysis tool, it shows
that wildcard queries are being stemmed. But query results indicate that they
are not. It looks like there is a bug in the tool.
Bill
On Mon, Aug 3, 2009 at 7:21 AM, gateway0 reiterwo...@yahoo.de wrote:
Ok thanks
Avlesh,
You lost me. How does boosting change the sort order? What about
sorting on a field that is not the mlt field?
Bill
On Mon, Aug 3, 2009 at 3:13 AM, Avlesh Singh avl...@gmail.com wrote:
You can boost the similarity field matches, if you want. Look for mlt.boost
at
I checked the javadoc of Lucene's MoreLikeThis and MoreLikeThisQuery and
don't see any method regarding sorting at all. So my guess is we can't sort
the MoreLikeThis result in Solr until this is supported in Lucene.
Bill
On Mon, Aug 3, 2009 at 5:54 PM, Bill Au bill.w...@gmail.com wrote:
Avlesh
Your heap may be just too small or you may have a memory leak. A stack
trace may not help you since the thread that encountered the OutOfMemoryError
may not be where the memory leak is. A heap dump will tell you what's using up
all the memory in your heap.
Bill
On Thu, Jul 30, 2009 at 3:54 PM,
Use the mlt handler and then add the facet parameters after that:
/solr/mlt?q=title:A&mlt.fl=author&facet=true&facet.field=topic
Bill
On Fri, Jul 31, 2009 at 11:11 AM, Jérôme Etévé jerome.et...@gmail.com wrote:
Hi all,
Is there a way to enable faceting when using a more like this handler?
The CSVLoader is very fast but it doesn't support document or field boosting
at index time. If you don't need that, you can also generate input data for
Solr into file(s) to be loaded by the CSVLoader. Just reload whenever you
change the schema. You will need to regenerate data if you add/remove
Thanks, Robert. That's exactly what my problem was. Things work fine after
I made sure that all my processing (index and query) is using UTF-8. FYI,
it took me a while to discover that SolrJ by default uses a GET request for
query, which uses ISO-8859-1. I had to explicitly use a POST to do
I am using SolrJ to index the word µTorrent. After a commit I was not able
to query for it. It turns out that the document in my Solr index contains
the word µTorrent instead of µTorrent. Any one has any idea what's going
on???
Bill
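For anyone searching the archives later: this is the classic symptom of UTF-8 bytes being decoded as ISO-8859-1 (the default for GET query strings in many containers). A short Python sketch reproduces it; the ÂµTorrent output is my assumption of what the garbled term looked like before the archive normalized it:

```python
# UTF-8 encodes µ (U+00B5) as two bytes, 0xC2 0xB5. If the server
# decodes those bytes as ISO-8859-1, each byte becomes its own
# character and the term grows a stray Â.
word = "µTorrent"
utf8_bytes = word.encode("utf-8")        # b'\xc2\xb5Torrent'
mangled = utf8_bytes.decode("latin-1")   # 'ÂµTorrent'
```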
1) not sure that you mean, can you give an example?
2) you can set facet.limit=0 to suppress the other values
3) add description:interface to the original query you used to obtain the
facet counts
Bill
On Fri, Jul 24, 2009 at 10:43 AM, Nishant Chandra nishant.chan...@gmail.com
wrote:
Hi,
I
I want to exclude a very small number of terms which will be different for
each query. So I think my best bet is to use localParam.
Bill
On Wed, Jul 22, 2009 at 4:16 PM, Chris Hostetter
hossman_luc...@fucit.orgwrote:
: I am faceting based on the indexed terms of a field by using facet.field.
/visibility on the client would be reasonable.
Erik
On Jul 23, 2009, at 11:43 AM, Bill Au wrote:
I want to exclude a very small number of terms which will be different for
each query. So I think my best bet is to use localParam.
Bill
On Wed, Jul 22, 2009 at 4:16 PM, Chris
You can also join tables in a SQL select to pick out the fields you want
from multiple tables. You may want to use temporary tables during
processing. Once you get the data the way you want it, you can use the CSV
request handler to load the output of the SQL select.
Bill
On Wed, Jul 22,
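As a sketch of that approach (writing SQL-select output as a CSV file for Solr's CSV request handler; the field names and rows below are invented):

```python
import csv

# Write SQL-select output as a CSV file that Solr's CSV request
# handler can load. Solr reads the field names from the header row.
rows = [
    {"id": "1", "name": "hotel", "city": "Boston"},
    {"id": "2", "name": "inn", "city": "Austin"},
]
with open("docs.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "name", "city"])
    writer.writeheader()
    writer.writerows(rows)
```

The resulting file can then be streamed to Solr's CSV update handler.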
Faceting on tags will give you all the related tags, including the original
tag (tagA in your case). You will have to filter out the original tag on
the client side if you don't want to show that. With Solr 1.4, you will be
able to use localParam to exclude the original tag in the results. If
I am faceting based on the indexed terms of a field by using facet.field.
Is there any way to exclude certain terms from the facet counts?
Bill
With a time stamp you can use a function query to boost the score of newer
documents:
http://wiki.apache.org/solr/SolrRelevancyFAQ#head-b1b1cdedcb9cd9bfd9c994709b4d7e540359b1fd
Bill
On Thu, Jul 9, 2009 at 5:58 PM, vivek sar vivex...@gmail.com wrote:
How do we sort by internal doc id (say on
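The formula behind the recip() function query mentioned in that FAQ entry, recip(ms(NOW,date),3.16e-11,1,1), is easy to sanity-check; a sketch, with the age values invented:

```python
# recip(x, m, a, b) = a / (m*x + b). Applied to document age in
# milliseconds with m ~ 1/(ms per year), a fresh document gets a
# boost of ~1.0 and a one-year-old document roughly half that.
def recip(x, m, a, b):
    return a / (m * x + b)

MS_PER_YEAR = 3.16e10
fresh = recip(0, 3.16e-11, 1, 1)               # brand-new document
year_old = recip(MS_PER_YEAR, 3.16e-11, 1, 1)  # ~1 year old
```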
You definitely need mlt=true if you are not using /solr/mlt.
Bill
On Wed, Jul 8, 2009 at 2:14 PM, Otis Gospodnetic otis_gospodne...@yahoo.com
wrote:
Sergey,
What about
http://localhost:8080/solr/select?q=id:10&mlt=true&mlt.fl=content_mlt&mlt.maxqt=
that a try and report back.
Bill
On Tue, Jul 7, 2009 at 4:45 AM, Marc Sturlese marc.sturl...@gmail.com wrote:
Using MoreLikeThisHandler you can use fq to filter your results. As far as I
know bq is not allowed.
Bill Au wrote:
I have been trying to restrict MoreLikeThis results without
You can use facet.prefix to match the beginning of a given word:
http://wiki.apache.org/solr/SimpleFacetParameters#head-579914ef3a14d775a5ac64d2c17a53f3364e3cf6
Bill
On Tue, Jul 7, 2009 at 11:02 AM, Pierre-Yves LANDRON pland...@hotmail.com wrote:
Hello,
Here is what I would like to achieve
/core_A/mlt?q=id:7468365&fq=price:[100 TO 200]
It will filter the MoreLikeThis results
Bill Au wrote:
I think fq only works on the main response, not the mlt matches. I found a
couple of related JIRA issues:
http
I have been trying to restrict MoreLikeThis results without any luck also.
In addition to restricting the results, I am also looking to influence the
scores similar to the way boost query (bq) works in the
DisMaxRequestHandler.
I think Solr's MoreLikeThis depends on Lucene's contrib queries
You can add the -V option to both your automatic and manual invocation of
snappuller and snapinstaller for both cores and compare the debug info.
Bill
On Thu, Jul 2, 2009 at 11:02 AM, Vicky_Dev vikrantv_shirbh...@yahoo.co.in wrote:
Yes. Permissions are same across cores
~Vikrant
Bill Au
You can use a facet query. Here is an example from the Solr Wiki:
http://wiki.apache.org/solr/SimpleFacetParameters#head-1da3ab3995bc4abcdce8e0f04be7355ba19e9b2c
Bill
On Wed, Jul 1, 2009 at 8:34 AM, Sumit Aggarwal sumit.kaggar...@gmail.com wrote:
Suppose i wanted to search for red dress
If you are running on Unix/Linux, you should be able to use the
scripts-based replication with some minor modifications. You will need to
change the scripts where they use HTTP to trigger a commit in Solr.
Bill
On Wed, May 27, 2009 at 5:36 AM, Ashish P ashish.ping...@gmail.com wrote:
Hi,
I agree. It is always a good idea to start with the example config/schema
in the version that you are upgrading to and work your specific settings back
into it. Newer versions of Solr will probably have new or changed
settings. Even though sometimes the config/schema is backward compatible, I
Are the user/group/permissions on the snapshot files the same for both cases
(manual vs postCommit/postOptimize events)?
Bill
On Tue, May 5, 2009 at 12:54 PM, tushar kapoor
tushar_kapoor...@rediffmail.com wrote:
Hi,
I am facing an issue while performing snapshot pulling thru Snappuller
I would also include the -XX:+HeapDumpOnOutOfMemoryError option to get
a heap dump when the JVM runs out of heap space.
On Thu, Apr 16, 2009 at 9:43 PM, Bryan Talbot btal...@aeriagames.com wrote:
If you're using java 5 or 6 jmap is a useful tool in tracking down memory
leaks.
Yes, restarting Tomcat will reset things back to default. But you should be
able to configure Tomcat to disable Solr logging since Solr uses JDK
logging.
Bill
On Wed, Apr 15, 2009 at 4:51 AM, Kraus, Ralf | pixelhouse GmbH
r...@pixelhouse.de wrote:
Bill Au schrieb:
Have you tried setting
Have you tried setting logging level to OFF from Solr's admin GUI:
http://wiki.apache.org/solr/SolrAdminGUI
Bill
On Tue, Apr 14, 2009 at 9:56 AM, Kraus, Ralf | pixelhouse GmbH
r...@pixelhouse.de wrote:
Hi,
is there a way to disable all logging output in SOLR ?
I mean the output text like :
That should be OK. I did a quick scan of all the scripts that use
$solr_hostname. It defaults to localhost if it is not set.
Bill
On Wed, Mar 25, 2009 at 7:24 PM, Garafola Timothy timgaraf...@gmail.com wrote:
I've a question. Is it safe to use 'localhost' as solr_hostname in
scripts.conf?
A snapshot is created every time snapshooter is invoked even if there is no
change in the index. However, since snapshots are created using hard
links, no additional space is used if there are no changes to the index. It
does use up one directory entry in the data directory.
Bill
On Mon, Feb
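A quick Python sketch of why hard-linked snapshots are cheap (file names are invented; os.link mirrors what snapshooter's cp -lr does on Unix):

```python
import os
import tempfile

# A snapshot made of hard links shares data blocks with the live
# index: both names point at the same inode, so no extra space is
# used until a file actually changes.
d = tempfile.mkdtemp()
index_file = os.path.join(d, "segment_1")
snapshot_file = os.path.join(d, "snapshot.segment_1")
with open(index_file, "w") as f:
    f.write("x" * 1024)
os.link(index_file, snapshot_file)   # hard link, not a copy
same_inode = os.stat(index_file).st_ino == os.stat(snapshot_file).st_ino
link_count = os.stat(index_file).st_nlink   # 2: live index + snapshot
```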
Snapshots are created using hard links. So even though it is as big as the
index, it is not taking up any more space on the disk. The size of the
snapshot will change as the size of the index changes.
Bill
On Mon, Feb 16, 2009 at 9:50 AM, sunnyfr johanna...@gmail.com wrote:
It changes a lot
:
Hi,
Ok but can I use it more often than every day, like every three hours,
because snapshots are quite big.
Thanks a lot,
Bill Au wrote:
The --delete option of the rsync command deletes extraneous files
from
the
destination directory. It does not delete Solr snapshots. To do
The --delete option of the rsync command deletes extraneous files from the
destination directory. It does not delete Solr snapshots. To do that you
can use the snapcleaner on the master and/or slave.
Bill
On Fri, Feb 13, 2009 at 10:15 AM, sunnyfr johanna...@gmail.com wrote:
root 26834
Solr does have a PHPResponseWriter:
http://wiki.apache.org/solr/SolPHP?highlight=(CategoryQueryResponseWriter)|((CategoryQueryResponseWriter))
http://lucene.apache.org/solr/api/org/apache/solr/request/PHPResponseWriter.html
Bill
On Fri, Jan 16, 2009 at 1:09 PM, Marc Sturlese
You do a commit in step 1 after the update, right? So if you configure Solr
on the indexer to invoke snapshooter after a commit and optimize, then you
would not need to invoke snapshooter explicitly using cron. snappuller
doesn't do anything unless there is a new snapshot on the indexer.
Bill
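For reference, a sketch of the postCommit listener wiring in solrconfig.xml (the paths here are illustrative; see the example solrconfig.xml shipped with Solr for the exact form in your version):

```xml
<listener event="postCommit" class="solr.RunExecutableListener">
  <str name="exe">solr/bin/snapshooter</str>
  <str name="dir">.</str>
  <bool name="wait">true</bool>
</listener>
```

A matching postOptimize listener can be configured the same way.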
add more swap space:
http://www.nabble.com/Not-enough-space-to11423199.html#a11424938
Bill
On Fri, Jan 2, 2009 at 10:52 AM, Brian Whitman br...@echonest.com wrote:
I have an indexing machine on a test server (a mid-level EC2 instance, 8GB
of RAM) and I run jetty like:
java -server -Xms5g
I noticed that you are using the same rsyncd port for both cores. Do you
have a scripts.conf for each core?
Bill
On Tue, Dec 9, 2008 at 11:40 PM, Kashyap, Raghu [EMAIL PROTECTED]wrote:
Hi,
We are seeing a strange behavior with snappuller
We have 2 cores Hotel Location
Here are
Take a thread dump of the JVM next time it is stuck. That will tell you
where and why the threads are stuck.
Bill
On Tue, Nov 25, 2008 at 4:15 PM, Alexander Ramos Jardim
[EMAIL PROTECTED] wrote:
Hello guys,
I am getting some stuck threads on my application when it connects to Solr.
The
Do keep in mind that compression is a CPU-intensive process, so it is a trade-off
between CPU utilization and network bandwidth. I have seen cases where
compressing the data before a network transfer ended up being slower than
without compression because of the cost of compression and decompression
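The trade-off is easy to measure; a sketch with a made-up, highly repetitive payload (real index files will compress far less):

```python
import zlib

# Compression shrinks the bytes sent over the wire, but both ends
# pay CPU: compress before sending, decompress after receiving.
data = b"Solr index data " * 1000
compressed = zlib.compress(data, 6)   # level 6: default speed/size balance
ratio = len(compressed) / len(data)   # tiny for repetitive data
restored = zlib.decompress(compressed)
```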
You probably have a permission problem. Check to make sure that the user id
running Solr has write permission in the directory /data/solr/video/data.
Bill
On Thu, Oct 2, 2008 at 8:11 AM, sunnyfr [EMAIL PROTECTED] wrote:
Oct 2 14:09:30 solr-test jsvc.exec[12890]: Oct 2, 2008 2:09:30 PM
,
Bill Au wrote:
If you use cron, you should use the new -c option of snapshooter which
only takes a snapshot when there have been changes. My personal
preference
is to use postCommit and postOptimize event listeners.
Bill
On Wed, Oct 1, 2008 at 4:28 AM, sunnyfr [EMAIL
Have you seen these two Wiki pages:
http://wiki.apache.org/solr/CollectionDistribution
http://wiki.apache.org/solr/SolrCollectionDistributionOperationsOutline
Solr comes with tools to let you sync the index directory.
Bill
On Thu, Oct 2, 2008 at 3:52 AM, dudes dudes [EMAIL PROTECTED] wrote:
If you use cron, you should use the new -c option of snapshooter which
only takes a snapshot when there have been changes. My personal preference
is to use postCommit and postOptimize event listeners.
Bill
On Wed, Oct 1, 2008 at 4:28 AM, sunnyfr [EMAIL PROTECTED] wrote:
Hi guys,
Do you
Backups or snapshots taken with hard links are safe to copy since once Lucene
creates an index file, it will not be modified.
From the 1.3.0 release notes:
The Porter snowball based stemmers in Lucene were updated (LUCENE-1142),
and are not guaranteed to be backward compatible at the index level
Yes, the status file is designed to go into a logs directory, and not the
snapshots directory.
Bill
2008/9/29 Jarek Zgoda [EMAIL PROTECTED]
I think I found the offending one by running snappuller -V:
++ ssh -o StrictHostKeyChecking=no e02 'ls
/home/jzgoda/solr-master-data|grep
The OS is checking that there is enough memory... add swap space:
http://www.nabble.com/Not-enough-space-to11423199.html#a11432978
On Thu, Sep 25, 2008 at 4:20 AM, sunnyfr [EMAIL PROTECTED] wrote:
Hi,
Any idea ?
Sep 25 06:50:41 solr-test jsvc.exec[23286]: Sep 25, 2008 6:50:41 AM
Try running snapshooter with the -V option. That will show debugging
info.
Bill
On Mon, Sep 22, 2008 at 6:49 AM, sunnyfr [EMAIL PROTECTED] wrote:
Yes but even if I run it I've no snapshot created. I don't get how I can
fix it.
Bill Au wrote:
You only need to run the rsync daemon
All the scripts dot in (.) the utility script scripts-util, which in turn
dots in scripts.conf.
Why are you running several instances, multiple ports, multiple webapps, or
multiple cores?
http://wiki.apache.org/solr/MultipleIndexes
Bill
On Wed, Sep 17, 2008 at 8:50 AM, sunnyfr [EMAIL
try the command line instead:
/solr/user/bin/rsyncd-enable
The scripts do not like to be bashed.
Bill
On Wed, Sep 17, 2008 at 9:24 AM, sunnyfr [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED]:/solr/user/bin# bash rsyncd-enable
rsyncd-enable: line 21: cd: rsyncd-enable/..: Not a directory
You only need to run the rsync daemon on the master.
Bill
On Wed, Sep 17, 2008 at 10:54 AM, sunnyfr [EMAIL PROTECTED] wrote:
Hi Raghu,
Thanks it's clear now;
Kashyap, Raghu wrote:
Hi,
Rsyncd is the rsync(http://samba.anu.edu.au/rsync/) daemon. You need
to make sure that
Try running snappuller with the -V option to show debug output.
Here's the closest thing to a step by step doc:
http://wiki.apache.org/solr/SolrCollectionDistributionOperationsOutline
Please note in the first bullet under "slave Solr server" that the user id
under which the scripts run must be able
On your slave,
solr_hostname should be localhost
and
master_host should be the hostname of your master server
Check out the following Wiki for a full description of the variables in
scripts.conf:
http://wiki.apache.org/solr/SolrCollectionDistributionScripts
Bill
On Thu, Sep 4, 2008 at 4:46
You can either use a dedicated rsync port for each instance or hack the
existing scripts to support multiple rsync modules. Both ways should work.
Bill
On Tue, Jul 1, 2008 at 3:49 AM, Jacob Singh [EMAIL PROTECTED] wrote:
Hi Bill and Others:
Bill Au wrote:
The rsyncd-start scripts gets
Isn't using Lucene over NFS *not* recommended?
Bill
On Mon, Jun 30, 2008 at 4:27 AM, Nico Heid [EMAIL PROTECTED] wrote:
Hey, I'm looking for some feedback on the following setup.
Due to the architect's decision I will be working with NFS, not Solr's own
distribution scripts.
A few Solr
The rsyncd-start script gets the data_dir path from the command line and
creates an rsyncd.conf on the fly, exporting the path as the rsync module named
solr. The slaves need the data_dir path on the master to look for the
latest snapshot. But the rsync command used by the slaves relies on the
I know people have made changes to the scripts in order for them to work
correctly on the UNIX variant that they are running on. Has anyone done
this for AIX?
Bill
On Fri, May 30, 2008 at 10:38 AM, tglawles [EMAIL PROTECTED] wrote:
I've found that the scripts provided with Solr do not run on
Welcome aboard, Koji.
Bill
On Tue, May 6, 2008 at 6:56 PM, Koji Sekiguchi [EMAIL PROTECTED] wrote:
Hi Erik and everyone!
I'm looking forward to working with you. :)
Cheers,
Koji
Erik Hatcher wrote:
A warm welcome to our newest Solr committer, Koji Sekiguchi! He's been
providing
snapinstaller calls commit to trigger Solr to use the new index. Do you see
the commit request in your Solr log? Anything in the snapinstaller log?
Bill
On Thu, May 1, 2008 at 8:35 PM, James Brady [EMAIL PROTECTED]
wrote:
Hi Ryan, thanks for that!
I have one outstanding question: when I take
What messages do you see in your log file?
Bill
On Wed, Mar 19, 2008 at 3:15 PM, [EMAIL PROTECTED] wrote:
Hi,
I'm a new Solr user. I figured my way around Solr just fine (I think) ...
I can index and search etc. And so far I have indexed over 300k documents.
What I can't figure out
The problem was with the default configuration of jetty which expands the
webapps in /tmp and some UNIX boxes are configured to purge old files from
/tmp. A simple fix is to create a $(jetty.home)/work directory for jetty to
use. See bug SOLR-118 for details:
Your first and second conclusions are right, but not the third one.
Solr does not distribute the queries from clients to the slaves. You need
to do that yourself (for example, use a load balancer in front of your pool
of Solr slaves).
I think that when Solr is optimizing the index all update
/SolrCollectionDistributionScripts
So you can invoke the scripts for each instance by setting the data
directory on the command line.
Regards,
Dilip
-Original Message-
From: Bill Au [mailto:[EMAIL PROTECTED]
Sent: Monday, January 14, 2008 9:40 PM
To: [EMAIL PROTECTED]
Subject: Re: Solr
Try setting data_dir to \opt\solr\data.
I noticed that in the error message it shows \ in the path to snapshooter
(ie \opt\solr\bin\snapshooter)
but / in the path to the data directory (ie /opt/solr/data).
Bill
On Dec 13, 2007 7:46 AM, Dilip.TS [EMAIL PROTECTED] wrote:
Hi,
When i changed the
On Dec 14, 2007 7:00 AM, Dilip.TS [EMAIL PROTECTED] wrote:
Hi,
I have the following requirement for SOLR Collection Distribution using
Embedded Solr with the Jetty server:
I have different data folders for multiple instances of SOLR within the
Same
application.
I'm using the same SOLR_HOME
Snapinstaller uses commit to notify Solr to open a new Searcher. Is there
anything in the commit log which shows why the commit failed?
Bill
On 10/29/07, Karen Loughran [EMAIL PROTECTED] wrote:
Thanks Yonik and Otis,
I noticed from the log (snapinstaller.log) the following when the slave
Lance, do start a new thread if you run into this problem again and please
include as much info as possible.
Once a snapshot has been taken, the files it contains should not change so I
am not sure why tar was telling you
a file had changed while it was being copied.
Bill
On 9/19/07, Chris
You are welcome.
Bill
On 9/21/07, Yu-Hui Jin [EMAIL PROTECTED] wrote:
Bill,
Thanks for the explanation. That helps my understanding on rsync and the
replication in general.
regards,
-Hui
On 9/20/07, Bill Au [EMAIL PROTECTED] wrote:
The solr that you are referring to in your third
to
this.
regards,
-Hui
On 9/20/07, Bill Au [EMAIL PROTECTED] wrote:
That would be my bad. I noticed the problem while fixing SOLR-282
which is not related. I fixed both problems instead of opening a
different bug for the response format issue. I will update the change
log
The solr that you are referring to in your third question is the
name of the rsync area, which is mapped to the solr data directory. This
is defined in the rsyncd configuration file which is generated on the
fly as Chris has pointed out. Take a look at rsyncd-start.
snappuller rsync the index from
That would be my bad. I noticed the problem while fixing SOLR-282
which is not related. I fixed both problems instead of opening a
different bug for the response format issue. I will update the change
log.
Bill
On 9/20/07, Chris Hostetter [EMAIL PROTECTED] wrote:
:
: It seems there's a
What files are there in your /data/pub/index directory?
Bill
On 9/19/07, Venkatraman S [EMAIL PROTECTED] wrote:
Hi ,
Product : Solr (Embedded)Version : 1.2
Problem Description :
While trying to add and search over the index, we are stumbling on this
error again and again.
Do note
Add/Update, Commit/Optimize, Delete, and Delete by query in Solr are done
using the url /update. So you should be able to protect that url at the
container level outside of Solr. If you want you can protect the query url
/select or the admin pages too. Container level authentication is
transparent
I meant outside of the Solr code. You are right that it is still in the
Solr war file since you will need to put the authentication configuration
into web.xml.
Bill
On 9/14/07, jenix [EMAIL PROTECTED] wrote:
When you say outside of Solr do you mean outside of solr.war? We finally
got
On Sep 7, 2007, at 2:21 PM, Bill Au wrote:
I just double checked distribution.jsp. The directory where it
looks for
status files is hard coded to logs/clients. So for now
master_status_dir in
your solr/conf/scripts.conf has to be set to that so the scripts
will put
As I had pointed out in my first reply to this thread, you had a directory
named temp-snapshot.20070816120113
in your data directory on the slave. Snapinstaller was mistakenly treating
that as the lastest snapshot and was installing that every time it was
called. Snapinstaller didn't trigger a
On Sep 6, 2007, at 6:09 AM, Bill Au wrote:
That is very strange. Even if there is something wrong with the
config or
code, the static HTML contained in distributiondump.jsp should show
up.
Are you using the latest version of the JSP? There has been a
recent fix:
http
++
| Matthew Runo
| Zappos Development
| [EMAIL PROTECTED]
| 702-943-7833
++
On Sep 7, 2007, at 7:43 AM, Bill Au wrote:
In that case, definitely take a look at SOLR-333:
http://issues.apache.org/jira/browse/SOLR-333
On Aug 31, 2007, at 6:11 PM, Bill Au wrote:
latest snapshot /opt/solr/data/temp-snapshot.20070816120113 already
installed
It looks like you have a directory named temp-snapshot.20070816120113
in your data directory. You should
On Aug 31, 2007, at 6:12 PM, Bill Au wrote:
Are there any error messages in your appserver log files?
Bill
On 8/31/07, Matthew Runo [EMAIL PROTECTED] wrote:
Hello!
/solr/admin/distributiondump.jsp
latest snapshot /opt/solr/data/temp-snapshot.20070816120113 already
installed
It looks like you have a directory named temp-snapshot.20070816120113
in your data directory. You should remove it. One of the other
script might have left that behind somehow.
I will update the snapinstaller
Are there any error messages in your appserver log files?
Bill
On 8/31/07, Matthew Runo [EMAIL PROTECTED] wrote:
Hello!
/solr/admin/distributiondump.jsp
This server is set up as a master server, and other servers use the
replication scripts to pull updates from it every few minutes. My
If snapinstaller fails to install the latest snapshot, then chances are
that it would not be able to install any earlier snapshots either. All it does
is some very simple filesystem operations and then invoke the Solr server to
do a commit. I agree with Chris that the best thing to do is to take
The element <str name="dir"> specifies the working directory for the call to
Runtime.exec(). You still need to set up your PATH if you don't use an
absolute path for the "exe" element. As Chris has pointed out, the best
thing to do is to use an absolute path.
Bill
On 8/14/07, Chris Hostetter [EMAIL
I just noticed that the Solr Tomcat step-by-step instructions (
http://wiki.apache.org/solr/SolrTomcat) are now outdated due to the fact
that the Tomcat 5.0.20 zip file has been moved from the public mirrors to
the archive site. We can either update the Wiki to use 5.0.23 (and then
update it
FYI, in addition to the -v option, the latest version of all the scripts
have a -V option which is equivalent to set -x.
Bill
On 7/20/07, Matthew Runo [EMAIL PROTECTED] wrote:
Just an FYI..
it seems that the generated scripts.conf file had DOS format
newlines. Once I did :set
That change doesn't have anything to do with where snappuller places the
snapshots.
Is the environment variable data_dir set up correctly in conf/scripts.conf?
That's where
snappuller puts the snapshots.
Bill
On 7/12/07, Kevin Lewandowski [EMAIL PROTECTED] wrote:
I've been running solr
is:
rsyncEndSec=`perl -e 'print time;'`
-Original Message-
From: Bill Au [mailto:[EMAIL PROTECTED]
Sent: Monday, July 09, 2007 8:40 AM
To: solr-user@lucene.apache.org
Subject: distribution scripts on Solaris
I am working on bug SOLR-282:
https://issues.apache.org/jira/browse/SOLR-282
and noticed
I would also check
ls -ld /foo/jetty-6.1.3
too.
Bill
On 6/27/07, Chris Hostetter [EMAIL PROTECTED] wrote:
: Here is a puzzling one. I can't get Solr to invoke snapshooter properly.
: Solr claims my snapshooter is not where I said it is:
:
: SEVERE: java.io.IOException: Cannot run program
In addition to snapshots, you can also make backup copies of your Solr
index using the backup script.
Backups are created the same way as snapshots, using hard links. Each one is
a viable full index.
Bill
On 5/3/07, Charlie Jackson [EMAIL PROTECTED] wrote:
I have a couple of questions
The collection distribution scripts rely on hard links and rsync. It
seems that both may be available on Windows
hard links:
http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/fsutil_hardlink.mspx?mfr=true
rsync:
http://samba.anu.edu.au/rsync/download.html
I say
version before 3.0.19). Even though you're using a later
version of Resin, I've had lots of issues with their XML parsing.
-- Ken
2007/4/26, Bill Au [EMAIL PROTECTED]:
Have you tried resin 3.0.x? 3.1 is a development branch so it is less
stable than 3.0.
Bill
On 4/19/07, James liu
What OS are you running on?
The solr on the rsync command line is just a label which is defined in
rsyncd.conf on the master. rsyncd.conf is created on the fly by the script
rsyncd-start:
cat <<EOF > ${solr_root}/conf/rsyncd.conf
rsyncd.conf file
uid = $(whoami)
gid = $(whoami)
use
Here's the Solr Wiki on collection distribution:
http://wiki.apache.org/solr/CollectionDistribution
It describes the incremental nature of the distribution:
A collection is a directory of many files. Collections are distributed
to the slaves as snapshots of these files. Each snapshot is made
You can also run the script with the -V option. It shows debugging
info but not as much as bash -x.
I tried snapshooter on OS X 10.4.9. I did get the cp: illegal option
-- l error.
But that's the only error I got.
Bill
On 4/23/07, Bertrand Delacretaz [EMAIL PROTECTED] wrote:
On 4/23/07,
Or simply create a new copy of the file using vi locally on the system
where you are running solr. Don't cut and paste since you may include
the non-printing characters that way too. Simply type in the content
of the file.
Bill
On 4/14/07, Venkatesh Seetharam [EMAIL PROTECTED] wrote:
you