Thanks James.
We have tried the following options *(individually)*, including the one you
suggested:
1. selectMethod=cursor
2. batchSize=-1
3. responseBuffering=adaptive
But the indexing process doesn't seem to be improving at all. When we try to
index a set of 500 rows, it works well and gets completed.
Does anyone have SolrJ indexing and searching sample code?
I could not find it on the internet.
Thanks.
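There's no single official sample, but a minimal SolrJ 4.x sketch looks roughly like this (it assumes solr-solrj on the classpath and a running core at http://localhost:8983/solr/collection1; the field names are illustrative, not from this thread):

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrInputDocument;

public class SolrJSample {

    // Build a document to index; the field names must exist in your schema.
    static SolrInputDocument buildDoc(String id, String title) {
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", id);
        doc.addField("title", title);
        return doc;
    }

    public static void main(String[] args) throws Exception {
        HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");

        // Index and commit
        server.add(buildDoc("1", "hello solrj"));
        server.commit();

        // Search
        QueryResponse rsp = server.query(new SolrQuery("title:solrj"));
        for (SolrDocument d : rsp.getResults()) {
            System.out.println(d.getFieldValue("id") + " : " + d.getFieldValue("title"));
        }
        server.shutdown();
    }
}
```

Adjust the URL and field names to your own core and schema before running.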
Why don't you index all ancestor classes with the document as a
multivalued field? Then you could get them in one hit. Am I missing
something?
Upayavira
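In schema.xml terms, that suggestion might look like this (the field name is hypothetical):

```xml
<!-- One value per ancestor class, so a single query hit returns them all -->
<field name="ancestors" type="string" indexed="true" stored="true" multiValued="true"/>
```

A query like q=ancestors:SomeClass then matches any document having that ancestor, with no extra round trips.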
On Thu, Mar 28, 2013, at 01:59 AM, Jack Park wrote:
Hi Otis,
That's essentially the answer I was looking for: each shard (are we
talking
First of all, check your catalina.out log; it gives the details
about what is wrong. Secondly, you can separate such JVM parameters
from solr.xml and put them into a setenv.sh file (which you will create
under the bin folder of Tomcat). Here is what you should do:
#!/bin/sh
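The script is cut off above; a setenv.sh along these lines is presumably what was meant (the heap sizes are placeholder values, not recommendations):

```sh
#!/bin/sh
# Tomcat sources $CATALINA_HOME/bin/setenv.sh on startup
JAVA_OPTS="$JAVA_OPTS -Xms512m -Xmx1024m"
export JAVA_OPTS
```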
Dear all,
I am investigating how to update synonyms.txt.
Some people say a CORE RELOAD will reload synonyms.txt.
But the Solr wiki says:
```
Starting with Solr4.0, the RELOAD command is implemented in a way that
results in a live reload of the SolrCore, reusing the existing various
objects such as the
Does that mean I can create multiple collections with different
configurations?
Can you please outline the basic steps to create multiple collections,
because I am not able to create them on Solr 4.0?
Exactly, you should usually design your schema to fit your queries, and
if you need to retrieve all ancestors then you should index all
ancestors so you can query for them easily.
If that doesn't work for you then either Solr is not the right tool for
the job, or you need to rethink your
You should be fine for synonym and other schema changes since they are
unrelated to the IndexWriter.
But... if you are using synonyms in your index analyzer, as opposed to in
your query analyzer, then you need to do a full reindex anyway, which is
best done by deleting the contents of the
https://issues.apache.org/jira/browse/SOLR-3587 (pointed to from SOLR-3592)
indicates it is resolved.
I just tried it on my local 4x branch checkout, using the analysis page
(text_general analyzing foo): I added a synonym, went to core admin, clicked
reload, and saw the synonym appear afterwards.
Hi,
I tested this config on Solr 4.2 this morning and it worked:
<fieldType name="long" class="solr.TrieLongField" precisionStep="0"
  docValuesFormat="Disk" positionIncrementGap="0"/>
<field name="MMDDhh" type="long" indexed="true" stored="true"
  required="true" docValues="true" multiValued="false" />
I also loaded data
Waiting for your assistance to get config entries for a 3-server SolrCloud setup.
Thanks in advance
Anuj
From: anuj vats <vats_a...@rediffmail.com> Sent: Fri, 22 Mar 2013
17:32:10 To: solr-user@lucene.apache.org <solr-user@lucene.apache.org> Cc:
Unable? In what way?
Did you look at the Solr example?
Did you look at solr.xml?
Did you see the core element? (Needs to be one per core/collection.)
Did you see the multicore directory in the example?
Did you look at the solr.xml file in multicore?
Did you see how there are separate
Hi
I need to do complex joins in a single core with multiple tables,
like inner, outer, left, right, and so on.
I am working with Solr 4.
Can I do any type of join with Solr 4?
Is there any way to do so? Please give your suggestion; it's very important.
Please help me.
Thanks in
Hi Ashim,
You are probably doing something the wrong way if you need such
complex joins.
Remember that Solr isn't a relational database.
You should probably revisit your schema and flatten your data structure.
Regards,
Karol
On 28.03.2013 13:45, ashimbose wrote:
Hi
I need to do
Could you give more details on what's not working? Have you followed the
instructions here: http://wiki.apache.org/solr/SolrCloud#Getting_Started
Are you using an embedded Zookeeper or an external server? How many of
them? Are you using numShards=1?2?
What do you see in the Solr UI, in the cloud
Here is the field type definition, same as what you posted yesterday, just a
different name.
<fieldType name="dvLong" class="solr.TrieLongField" precisionStep="0"
  docValuesFormat="Disk" positionIncrementGap="0"/>
And the field definition:
<field name="lcontNumOfDownloads" type="dvLong" indexed="true" stored="true"
OK, you'll need to re-index. Shutdown, delete the data, re-index.
On Thu, Mar 28, 2013 at 9:12 AM, adityab aditya_ba...@yahoo.com wrote:
Here is the field type definition, same as what you posted yesterday, just a
different name.
<fieldType name="dvLong" class="solr.TrieLongField" precisionStep="0"
On Mar 24, 2013, at 10:18 PM, Steve Rowe sar...@gmail.com wrote:
The wiki at http://wiki.apache.org/solr/ has come under attack by spammers
more frequently of late, so the PMC has decided to lock it down in an attempt
to reduce the work involved in tracking and removing spam.
From now
On Mar 28, 2013, at 9:25 AM, Andy Lester a...@petdance.com wrote:
On Mar 24, 2013, at 10:18 PM, Steve Rowe sar...@gmail.com wrote:
Please request either on the solr-user@lucene.apache.org or on
d...@lucene.apache.org to have your wiki username added to the
ContributorsGroup page - this is a
Is deltaQuery mandatory in data-config.xml?
I did it like this:
<entity name="residential" query="select * from tsunami.consumer_data_01 where
  state='MA' and rownum = 5000"
  deltaQuery="select LEMSMATCHCODE, STREETNAME from residential
  where last_modified
Thank you for this. I had thought about it but reasoned in a naive
way: who would do such a thing?
Doing so makes the query local: once the object has been retrieved, no
further HTTP queries are required. Implementation perhaps entails one
request to fetch the presumed parent in order to harvest
No, it's not mandatory. You can't do delta imports without delta queries
though; you'd need to do a full-import. Per your query, you'd only ever do
objects with rownum=5000.
-Original Message-
From: A. Lotfi [mailto:majidna...@yahoo.com]
Sent: Thursday, March 28, 2013 10:07 AM
To:
You may want to run your jdbc driver in trace mode just to see if it is picking
up these different options. I know from experience that the selectMethod
parameter can sometimes be important to prevent SQLServer drivers from caching
the entire resultset in memory.
But something seems very
You do not need deltaQuery unless you're doing delta (incremental) updates.
To configure a full import, try starting with this example:
http://wiki.apache.org/solr/DataImportHandler#A_shorter_data-config
James Dyer
Ingram Content Group
(615) 213-4311
-Original Message-
From: A. Lotfi
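For reference, a minimal full-import data-config in the spirit of that wiki example might look like this (the driver, URL, and table/column names are placeholders, not from this thread):

```xml
<dataConfig>
  <dataSource driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
              url="jdbc:sqlserver://dbhost;databaseName=mydb"
              user="user" password="pass"/>
  <document>
    <!-- Full import only: no deltaQuery needed -->
    <entity name="residential" query="select * from residential">
      <field column="LEMSMATCHCODE" name="id"/>
    </entity>
  </document>
</dataConfig>
```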
What version of Solr4 are you running? We are on 3.6.2 so I can't be confident
whether these settings still exist (they probably do...), but here is what we
do to speed up full-indexing:
In solrconfig.xml, increase your ramBufferSize to 128MB.
Increase mergeFactor to 20.
Make sure autoCommit is
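For anyone looking for where those first two settings live, they map to solrconfig.xml entries along these lines (whether they still apply to your Solr 4.x version should be verified):

```xml
<indexConfig>
  <ramBufferSizeMB>128</ramBufferSizeMB>
  <mergeFactor>20</mergeFactor>
</indexConfig>
```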
Yes. The only thing is, on the master the delta import runs every half hour, but as
there has been no data change in the last 24 hours, I think the index version still
remains the same. Another thing I notice is that after a full import, the index
generation is bumped directly higher than the slave's. Can that mean the master is not increasing
Still no luck. Steps performed:
1. Stop the application server (JBoss)
2. Delete everything under data
3. Start the server
4. Observe the exception in the log (I have uploaded the file)
On a side note: do I need any additional jar files in the solr home
lib folder? Currently it's empty.
Hello. My company is currently thinking of switching over to Solr 4.2,
coming off of SQL Server. However, what we need to do is a bit weird.
Right now, we have ~12 million segments and growing. Usually these are
sentences but can be other things. These segments are what will be stored
in Solr.
Hi Mike,
Interesting problem - here's some pointers on where to get started.
For finding similar segments, check out Solr's More Like This support -
it's built in to the query request processing so you just need to enable it
with query params.
There's nothing built in for doing batch queries
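Enabling More Like This through query params can look like the following (the core name, field, and document id are hypothetical):

```
http://localhost:8983/solr/collection1/select?q=id:1234&mlt=true&mlt.fl=text&mlt.count=10
```

Here mlt.fl names the field(s) to base similarity on, and mlt.count caps the number of similar documents returned per result.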
Please add OussamaJilal to the group.
Thank you.
2013/3/28 Steve Rowe sar...@gmail.com
On Mar 28, 2013, at 9:25 AM, Andy Lester a...@petdance.com wrote:
On Mar 24, 2013, at 10:18 PM, Steve Rowe sar...@gmail.com wrote:
Please request either on the solr-user@lucene.apache.org or on
Update ---
I was able to fix the exception by adding the following line in solrconfig.xml:
<codecFactory name="CodecFactory" class="solr.SchemaCodecFactory" />
Not sure if it's mentioned in any document that this needs to be declared in the
config file.
I am now re-indexing the data on the master and will perform test
On Mar 28, 2013, at 11:57 AM, Jilal Oussama jilal.ouss...@gmail.com wrote:
Please add OussamaJilal to the group.
Added to solr ContributorsGroup.
git clone https://github.com/kolstae/openpipe
cd openpipe
mvn install
regards
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-and-OpenPipe-tp484777p4052079.html
Sent from the Solr - User mailing list archive at Nabble.com.
I'm making fairly frequent changes to my data-config.xml files on some of my
cores in a SolrCloud setup. Is there any way to get these files active
and up to ZooKeeper without restarting the instance?
I've noticed that if I just launch another instance of solr with the
bootstrap_conf flag set
Apologies if you already do something similar, but perhaps of general
interest...
One (different approach) to your problem is to implement a local
fingerprint - if you want to find documents with overlapping segments, this
algorithm will dramatically reduce the number of segments you
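A common way to implement such a local fingerprint is word-level shingling plus Jaccard overlap; here is a self-contained sketch (this is an assumption about the general technique, not necessarily the specific algorithm the poster had in mind):

```java
import java.util.HashSet;
import java.util.Set;

public class Shingles {

    // Build the set of word-level shingles (n-grams) for a segment.
    static Set<String> shingles(String text, int n) {
        String[] words = text.toLowerCase().split("\\s+");
        Set<String> out = new HashSet<String>();
        for (int i = 0; i + n <= words.length; i++) {
            StringBuilder sb = new StringBuilder();
            for (int j = 0; j < n; j++) {
                if (j > 0) sb.append(' ');
                sb.append(words[i + j]);
            }
            out.add(sb.toString());
        }
        return out;
    }

    // Jaccard similarity between two shingle sets: |A ∩ B| / |A ∪ B|.
    static double jaccard(Set<String> a, Set<String> b) {
        if (a.isEmpty() && b.isEmpty()) return 1.0;
        Set<String> inter = new HashSet<String>(a);
        inter.retainAll(b);
        Set<String> union = new HashSet<String>(a);
        union.addAll(b);
        return (double) inter.size() / union.size();
    }

    public static void main(String[] args) {
        Set<String> s1 = shingles("the quick brown fox jumps", 2);
        Set<String> s2 = shingles("the quick brown dog jumps", 2);
        System.out.println(jaccard(s1, s2));
    }
}
```

Only segment pairs whose shingle sets overlap at all need a full similarity check, which is what dramatically cuts the candidate count.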
Nice! I see we're having fun
On 28 Mar 2013 17:11, Fabio Curti fabio.cu...@gmail.com wrote:
git clone https://github.com/kolstae/openpipe
cd openpipe
mvn install
regards
Thanks for your reply, Roman. Unfortunately, the business has been running
this way forever so I don't think it would be feasible to switch to a whole
document store versus segments store. Even then, if I understand you
correctly it would not work for our needs. I'm thinking because we don't
care
This might not be a good match for Solr, or for many other systems. It does
seem like a natural fit for MarkLogic. That natively searches and selects over
XML documents.
Disclaimer: I worked at MarkLogic for a couple of years.
wunder
On Mar 28, 2013, at 9:27 AM, Mike Haas wrote:
Thanks for
Thanks Timothy,
Regarding your mention of MoreLikeThis, do you know what kind of
algorithm it uses? My searching didn't reveal anything.
On Thu, Mar 28, 2013 at 10:51 AM, Timothy Potter thelabd...@gmail.comwrote:
Hi Mike,
Interesting problem - here's some pointers on where to get
On Thu, Mar 28, 2013 at 12:27 PM, Mike Haas mikehaas...@gmail.com wrote:
Thanks for your reply, Roman. Unfortunately, the business has been running
this way forever so I don't think it would be feasible to switch to a whole
sure, no arguing against that :)
document store versus segments
Can I use a single ZooKeeper ensemble for multiple SolrCloud clusters, or
would each SolrCloud cluster require its own ZooKeeper ensemble?
Bill
: Can I use a single ZooKeeper ensemble for multiple SolrCloud clusters or
: would each SolrCloud cluster requires its own ZooKeeper ensemble?
https://wiki.apache.org/solr/SolrCloud#Zookeeper_chroot
(I'm going to FAQ this)
-Hoss
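In short, the chroot lets one ensemble serve several clusters by giving each cluster its own ZooKeeper path, e.g. (host names are placeholders):

```
-DzkHost=zk1:2181,zk2:2181,zk3:2181/cluster1
-DzkHost=zk1:2181,zk2:2181,zk3:2181/cluster2
```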
I didn't have to do anything with the codecs to make it work. Checked my
solrconfig.xml and the codecFactory element is not present. I'm running
the out of the box jetty setup.
On Thu, Mar 28, 2013 at 11:58 AM, adityab aditya_ba...@yahoo.com wrote:
Update ---
I was able to fix the exception
I will definitely let you all know what we end up doing. I realized I
forgot to mention something that might make what we do more clear.
Right now we use sql server full text to get back fairly similar matches
for each segment. We do this with some funky sql stuff which I didn't write
and haven't
Thanks for the fast response. I am still just learning solr so please bear
with me.
This still sounds like the wrong products would appear at the top if they
have more inventory unless I am misunderstanding. High boost low boost
seems to make sense to me. That alone would return the more
Whoa, that's strange.
I tried toggling the codecFactory line in solrconfig.xml (attached in
this post): commenting it out gives me an error, whereas uncommenting it works.
Can you please take a look at the config and let me know if anything is wrong
there?
thanks
Aditya
solrconfig.xml
If you had a high boost on the title with a moderate boost on the inventory
it sounds like you'd get boots first ordered by inventory followed by jeans
ordered by inventory. Because the heavy title boost would move the boots to
the top. You can play with the boost factors to try and get the mix
Thanks.
Now I have to go back and re-read the entire SolrCloud Wiki to see what
other info I missed and/or forgot.
Bill
On Thu, Mar 28, 2013 at 12:48 PM, Chris Hostetter
hossman_luc...@fucit.orgwrote:
: Can I use a single ZooKeeper ensemble for multiple SolrCloud clusters or
: would each
Hi,
The Solr setup on Windows worked fine.
I tried to follow the instructions for installing Solr on Unix; when I started
Tomcat I got this exception:
SEVERE: Unable to create core: collection1
org.apache.solr.common.SolrException: Could not load config for solrconfig.xml
at
Couple notes though:
java -classpath example/solr-webapp/WEB-INF/lib/*
org.apache.solr.cloud.ZkCLI -cmd upconfig -zkhost 127.0.0.1:9983
-confdir example/solr/collection1/conf -confname conf1 -solrhome
example/solr
I don't think you want that -solrhome - if I remember right, that's for
On 29 March 2013 00:19, A. Lotfi majidna...@yahoo.com wrote:
Hi,
The Solr setup on Windows worked fine.
I tried to follow the instructions for installing Solr on Unix; when I started
Tomcat I got this exception:
[...]
Seems it cannot find solrconfig.xml. The relevant part from the logs is:
Caused by:
Hi John,
Mark is right. DocValues can be enabled in two ways: RAM resident (default)
or on-disk. You can read more here:
http://www.slideshare.net/LucidImagination/column-stride-fields-aka-docvalues
Regards.
On 22 March 2013 16:55, John Nielsen j...@mcb.dk wrote:
with the on disk option.
Otis brings up a good point. Possibly you could put logic in your function
query to account for this. But it may be that you can't achieve the mix
you're looking for without taking direct control.
That is the main reason that SOLR-4465 was put out there, for cases where
direct control is needed.
Thanks,
My path to solr home was missing something; it's working now, but there are no
results. The same Solr app with the same configuration files worked on Windows.
Abdel
From: Gora Mohanty g...@mimirtech.com
To: solr-user@lucene.apache.org; A. Lotfi majidna...@yahoo.com
Not sure that making changes to solrconfig.xml is the right path here.
There might be something else with your setup that's causing this issue.
I'm not sure what it would be though.
On Thu, Mar 28, 2013 at 1:38 PM, adityab aditya_ba...@yahoo.com wrote:
Wo. that's strange.
I
I do this frequently, but use the scripts provided in cloud-scripts, e.g.
export ZK_HOST=...
cloud-scripts/zkcli.sh -zkhost $ZK_HOST -cmd upconfig -confdir
$COLLECTION_INSTANCE_DIR/conf -confname $COLLECTION_NAME
Also, once you do this, you still have to reload the collection so that it
picks
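Putting those two steps together might look like this (the host, collection name, and paths are placeholders; the RELOAD action is the Collections API call referred to above):

```sh
export ZK_HOST=zk1:2181
cloud-scripts/zkcli.sh -zkhost $ZK_HOST -cmd upconfig \
  -confdir /path/to/collection1/conf -confname collection1
# Reload so the running nodes pick up the new config
curl 'http://localhost:8983/solr/admin/collections?action=RELOAD&name=collection1'
```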
On 29 March 2013 01:59, A. Lotfi majidna...@yahoo.com wrote:
Thanks,
my path to solr home was missing something; it's working now, but there are no
results. The same Solr app with the same configuration files worked on Windows.
What do you mean by no results? Have you indexed stuff, and
are not able to search
There are lots of small issues, though.
1. Is Solr tested with a mix of current and previous versions? Is it safe to
run a cluster that is a mix of 4.1 and 4.2, even for a little bit?
2. Can Solr 4.2 run with Solr 4.1 config files? This means all of conf/, not
just the main XML files.
3. We
Hi Walter,
I just did our upgrade from a nightly build of 4.1 (a few weeks before the
release) and 4.2 - thankfully it went off with 0 downtime and no issues ;-)
First and foremost, I had a staging environment that I upgraded first so I
already had a good feeling that things would be fine.
Comments hidden inline below. Overall - we need to focus on upgrades at some
point, but there is little that should stop the old distrib update process from
working (multi node clusters pre solrcloud).
However, we should have tests and stuff. If only the days were twice as long.
On Mar 28,
Steve, could you add me to the contrib group? TomasFernandezLobbe
Thanks!
Tomás
On Thu, Mar 28, 2013 at 1:04 PM, Steve Rowe sar...@gmail.com wrote:
On Mar 28, 2013, at 11:57 AM, Jilal Oussama jilal.ouss...@gmail.com
wrote:
Please add OussamaJilal to the group.
Added to solr
So, by using numShards at initialization time, using the sample
collection1 solr.xml, I'm able to create a sharded and distributed index.
Also, by removing any initial cores from the solr.xml file, I'm able to use
the Collections API via the web to create multiple collections with sharded
True - though I think for 4.2, numShards has never been respected in the core
defs for various reasons.
In 4.0 and 4.1, things should have still worked though - you didn't need to
give numShards and everything should work just based on configuring different
shard names for core or accepting
Currently, yes. Stop each web container in the normal fashion. That will do a
clean shutdown.
- Mark
On Mar 28, 2013, at 5:48 PM, Li, Qiang qiang...@msci.com wrote:
How to shut down the SolrCloud? Just kill all nodes?
Regards,
Ivan
Interesting, I've been doing battle with it while coming from a 4.0
environment. I only had a single collection then and just created the
solr.xml files for each server up front. They each supported a half dozen
cores for a single collection.
As for 4.1 and collections API, the only issue I've
On Mar 28, 2013, at 6:30 PM, Chris R corg...@gmail.com wrote:
I'll probably move up to 4.2 tomorrow.
4.2.1 should be ready as soon as I have time to publish it - we have a passing
vote and I think we are close to 72 hours after. I just have to stock up on
some beer first - Robert tells me
: Now, what happens is a user will upload say a word document to us. We then
: parse it and process it into segments. It very well could be 5000 segments
: or even more in that word document. Each one of those ~5000 segments needs
: to be searched for similar segments in solr. I’m not quite sure
That's my kind of release!
Sent from my Verizon Wireless Phone
- Reply message -
From: Mark Miller markrmil...@gmail.com
To: solr-user@lucene.apache.org
Subject: Solrcloud 4.1 Collection with multiple slices only use
Date: Thu, Mar 28, 2013 6:34 pm
On Mar 28, 2013, at 6:30 PM, Chris R
: But solr wiki says:
: ```
: Starting with Solr4.0, the RELOAD command is implemented in a way that
: results in a live reload of the SolrCore, reusing the existing various
: objects such as the SolrIndexWriter. As a result, some configuration
: options can not be changed and made active with a
On 3/28/2013 3:01 PM, Walter Underwood wrote:
There are lots of small issues, though.
1. Is Solr tested with a mix of current and previous versions? It is safe to
run a cluster that is a mix of 4.1 and 4.2, even for a little bit?
2. Can Solr 4.2 run with Solr 4.1 config files? This means all
On 3/28/2013 4:23 PM, Mark Miller wrote:
True - though I think for 4.2. numShards has never been respected in the cores
def's for various reasons.
In 4.0 and 4.1, things should have still worked though - you didn't need to
give numShards and everything should work just based on configuring
: I am trying to index data from SQL Server view to the SOLR using the DIH
Have you ruled out the view itself being the bottleneck?
Try running whatever command-line SQL Server client exists on your Solr
server to connect remotely to your existing SQL Server, and run select *
from view and
On Mar 28, 2013, at 7:30 PM, Shawn Heisey s...@elyograg.org wrote:
Can't you leave numShards out completely, then include a numShards parameter
on a collection api CREATE url, possibly giving a different numShards to each
collection?
Thanks,
Shawn
Yes - that's why I say the
On Mar 28, 2013, at 7:27 PM, Shawn Heisey s...@elyograg.org wrote:
I actually would like more detail on upconfig myself - what if you delete
files from the config directory on disk? Will they be deleted from
zookeeper? I use a solrconfig that has xinclude statements, and occasionally
Not sure, but if you put it in the data dir, I think it picks it up and
reloads on commit.
Upayavira
On Thu, Mar 28, 2013, at 09:11 AM, Kaneyama Genta wrote:
Dear all,
I am investigating how to update synonyms.txt.
Some people say a CORE RELOAD will reload synonyms.txt.
But solr wiki says:
But this is fixed in 4.2 - now the index writer is rebooted on core reload.
So that's just 4.0 and 4.1.
- Mark
On Mar 28, 2013, at 6:48 PM, Chris Hostetter hossman_luc...@fucit.org wrote:
: But solr wiki says:
: ```
: Starting with Solr4.0, the RELOAD command is implemented in a way that
Though I think *another* JIRA made data dir not changeable over core reload for
some reason I don't recall exactly. But the other stuff is back to being
changeable :)
- Mark
On Mar 28, 2013, at 8:04 PM, Mark Miller markrmil...@gmail.com wrote:
But this is fixed in 4.2 - now the index writer
Hey guys,
I've recently setup basic auth under Jetty 8 for all my Solr 4.x '/admin/*'
calls, in order to protect my Collections and Cores API.
Although the security constraint is working as expected ('/admin/*' calls
require Basic Auth or return 401), when I use the Collections API to create a
In windows when I hit Execute Query button I got this results :
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader">
  <int name="status">0</int>
  <int name="QTime">181</int>
  <lst name="params">
    <str name="indent">true</str>
    <str name="q">streetname:mdw</str>
    <str name="wt">xml</str>
  </lst>
</lst>
<result name="response"
On 29 March 2013 07:23, A. Lotfi majidna...@yahoo.com wrote:
In windows when I hit Execute Query button I got this results :
[...]
There seem to be no documents in your Solr index on the
UNIX system. As I mentioned in my previous message, you
either need to copy the index files from the Windows
In Unix, in data/index there is:
segments.gen 20 B 3/28/2013 rw-r--r--
segments_1 45 B 3/28/2013 rw-r--r--
I don't know how this was generated. Should I delete them from the directory,
or from somewhere else?
If so, how do I delete and reindex on the UNIX system?
Thanks a lot.