Hi,
This can be done at the file-system level, if you are on a *nix OS, with a
soft link (symlink).
Dmitry
On Mon, Apr 14, 2014 at 5:43 AM, manju16832003 manju16832...@gmail.com wrote:
Hi All,
I'm using latest version of Solr 4.7.1 and was wondering if there is a way
to share common schema.xml.
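A minimal sketch of the soft-link approach Dmitry describes, assuming a hypothetical two-core layout under /tmp/solr-demo (adjust the paths to your own solr home):

```shell
# Share one schema.xml across two cores via symlinks.
# All paths here are made-up examples.
mkdir -p /tmp/solr-demo/shared /tmp/solr-demo/core1/conf /tmp/solr-demo/core2/conf
echo '<schema name="shared"/>' > /tmp/solr-demo/shared/schema.xml
ln -sf /tmp/solr-demo/shared/schema.xml /tmp/solr-demo/core1/conf/schema.xml
ln -sf /tmp/solr-demo/shared/schema.xml /tmp/solr-demo/core2/conf/schema.xml
# Both cores now read the same file; editing the shared copy updates both.
cat /tmp/solr-demo/core1/conf/schema.xml
```

Note that after editing the shared file you still need to reload each core for the change to take effect.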
This does not happen in all the files. Maybe they're broken.
Thanks for participating. Unfortunately it's not that. I set stored=false for
all fields; nothing has changed.
Re: High CPU usage after import
Are you storing the data? That is, the raw binary of the MP3? B/c when
stored=true, Solr
Hi;
I mean you can divide the range (i.e. one week at each delete instead of
one month) and try to check whether you still get an OOM or not.
Thanks;
Furkan KAMACI
2014-04-14 7:09 GMT+03:00 Vinay Pothnis poth...@gmail.com:
Aman,
Yes - Will do!
Furkan,
How do you mean by 'bulk delete'?
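Deleting in smaller ranges, as Furkan suggests, would mean issuing several delete-by-query requests with narrower date windows. A hedged sketch (the field name "timestamp", the collection name, and the URL are assumptions):

```shell
# Delete one week of data at a time instead of a whole month,
# to keep the amount of work per delete request smaller.
start="2010-01-01T00:00:00Z"
end="2010-01-08T00:00:00Z"
payload="{\"delete\":{\"query\":\"timestamp:[$start TO $end]\"}}"
echo "$payload"
# curl 'http://localhost:8983/solr/collection1/update?commit=true' \
#      -H 'Content-Type: application/json' -d "$payload"
```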
Hello Experts,
I want to index my documents in a way that all documents for a day are
stored in a single shard.
I am planning to have shards for each day e.g. shard1_01_01_2010,
shard1_02_01_2010 ...
And while hashing the documents of 01/01/2010 should go to
shard1_01_01_2010.
This way I can
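The per-day routing scheme described above can be sketched as a small client-side helper that derives the target shard name from the document date (shard1_DD_MM_YYYY is the poster's own naming convention, not a Solr default):

```shell
# Map a DD/MM/YYYY document date to the poster's shard naming scheme.
shard_for() {
  # $1 is a date in DD/MM/YYYY form, e.g. 01/01/2010
  echo "shard1_$(echo "$1" | tr '/' '_')"
}
shard_for 01/01/2010   # -> shard1_01_01_2010
```

With SolrCloud's implicit router, a client could then address that shard directly when indexing.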
Thanks Dmitry,
Yes I'm on *nix OS. Yes *soft-link* was mentioned by one of my sys-admin
friend as well. :-).
I was just trying to find out from the community if there would be a way from
Solr itself.
Thanks for the suggestion. Probably soft-link is the way to go for now :-).
Would collection aliasing be a relevant feature here (a different
approach):
http://blog.cloudera.com/blog/2013/10/collection-aliasing-near-real-time-search-for-really-big-data/
Regards,
Alex.
Personal website: http://www.outerthoughts.com/
Current project: http://www.solr-start.com/ -
Hello,
I saw that 4.7.1 has morphline and hadoop contribution libraries, but
I can't figure out the degree to which they are useful to _Solr_
users. I found one Hadoop example in the readme that does some sort of
injection into Solr. Is that the only use case supported?
I thought that maybe there
Hi,
We have a setup of SolrCloud 4.6. The fields' stored value is true.
Now I want to delete a field from an indexed document. Is there any way we
can delete the field?
Field which we are trying to delete(extracted from schema.xml):
<field name="SField2" type="string" indexed="true"
Aliases are meant for read operations and can refer to one or more real
collections.
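With that approach you would create one collection per day and then alias them together for querying, via the Collections API CREATEALIAS action. A hedged sketch (the alias and collection names are assumptions):

```shell
# Build a CREATEALIAS request: the alias "logs" fans out reads
# to the listed per-day collections.
base="http://localhost:8983/solr/admin/collections"
url="$base?action=CREATEALIAS&name=logs&collections=logs_2010_01_01,logs_2010_01_02"
echo "$url"
# curl "$url"
```

Re-issuing CREATEALIAS with an updated collections list repoints the alias, so a nightly job can add the newest day's collection.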
So should I go with the approach of creating a collection for per day's
data and aliasing a collection with all these collection names?
So instead of trying to route the documents to a shard, should I send to a
Hi,
I have updated my solr instance from 4.5.1 to 4.7.1.
Now my solr query failing some tests.
Query: q=*:*&fq=(title:((TE)))&debug=true
Before the update:
<lst name="debug">
<str name="rawquerystring">*:*</str>
<str name="querystring">*:*</str>
<str name="parsedquery">MatchAllDocsQuery(*:*)</str>
Hi All,
I have implemented a sponsor search where I have to elevate a particular
document for a specific query text.
To achieve this I have made the following changes (solr version:4.7.1):
1) Changes in solrconfig.xml
<searchComponent name="elevator" class="solr.QueryElevationComponent">
<str
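For reference, a hedged sketch of what the elevation setup typically looks like; the parameter names follow the stock Solr example config, and the query text and document id below are placeholders:

```xml
<!-- solrconfig.xml: register the elevation component -->
<searchComponent name="elevator" class="solr.QueryElevationComponent">
  <str name="queryFieldType">string</str>
  <str name="config-file">elevate.xml</str>
</searchComponent>

<!-- elevate.xml: pin a specific document for a specific query text -->
<elevate>
  <query text="sponsor query">
    <doc id="DOC_ID_1"/>
  </query>
</elevate>
```

The component must also be added to the request handler's last-components list for it to take effect.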
Currently all Solr morphline use cases I’m aware of run in processes outside of
the Solr JVM, e.g. in Flume, in MapReduce, in HBase Lily Indexer, etc. These
ingestion processes generate Solr documents for Solr updates. Running in
external processes is done to improve scalability, reliability,
On 4/14/2014 5:50 AM, Gurfan wrote:
We have a setup of SolrCloud 4.6. The fields' stored value is true.
Now I want to delete a field from an indexed document. Is there any way we
can delete the field?
Field which we are trying to delete (extracted from schema.xml):
<field
Thanks Eric
It makes sense now, but I am a little surprised that Solr does not convert
the object into a JSON form; I have to use a Google library to do that.
On 4/14/2014 7:52 AM, sachin.jain wrote:
It makes sense now, but I am a little surprised that Solr does not convert
the object into a JSON form; I have to use a Google library to do that.
Currently Solr treats a Map as a request for an Atomic Update. The key
must be "add", "inc", or "set" ... if
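What Shawn describes is why a nested map is interpreted as an operation rather than serialized as data. A hedged sketch of the atomic-update shape (field and id values are placeholders):

```shell
# In an update body, the inner map's key ("set" here) names the
# atomic operation to apply to that field, it is not document data.
payload='[{"id":"doc1","SField2":{"set":"new value"}}]'
echo "$payload"
# curl 'http://localhost:8983/solr/update?commit=true' \
#      -H 'Content-Type: application/json' -d "$payload"
```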
On 4/14/2014 7:43 AM, Shawn Heisey wrote:
When results are being determined for the response, it sounds like the
schema is NOT consulted -- I think the code simply reads what's in the
Lucene index and applies the fl parameter to decide which fields are
returned. The index doesn't change when
Yes, that is our approach. We did try deleting a day's worth of data at a
time, and that resulted in OOM as well.
Thanks
Vinay
On 14 April 2014 00:27, Furkan KAMACI furkankam...@gmail.com wrote:
Hi;
I mean you can divide the range (i.e. one week at each delete instead of
one month) and try
I vastly prefer git, but last I checked (admittedly, some time ago) you
couldn't build the project from a git clone. Some of the build scripts
assumed some svn commands would work.
On 4/12/14, 3:56 PM, Furkan KAMACI furkankam...@gmail.com wrote:
Hi Amon;
There has been a conversation about
: we tried other commands to delete the document ID:
:
: 1 For Deletion:
:
: curl http://localhost:8983/solr/update -H 'Content-type:application/json' -d
: '
: [
Your use of square brackets here is triggering the syntax sugar that
lets you add documents as objects w/o needing the add
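The point about square brackets can be illustrated with a hedged sketch of a delete-by-id body (the id and URL are placeholders):

```shell
# A delete request is a top-level JSON object, not a square-bracketed
# array -- Solr reads a top-level array as a list of documents to add.
payload='{"delete":{"id":"document-id-1"}}'
echo "$payload"
# curl 'http://localhost:8983/solr/update?commit=true' \
#      -H 'Content-Type: application/json' -d "$payload"
```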
ant compile / ant -f solr dist / ant test certainly work, I use them with a
git working copy. You trying something else?
On 14 Apr 2014 19:36, Jeff Wartes jwar...@whitepages.com wrote:
I vastly prefer git, but last I checked, (admittedly, some time ago) you
couldn't build the project from the
Some update:
I removed the auto warm configurations for the various caches and reduced
the cache sizes. I then issued a call to delete a day's worth of data (800K
documents).
There was no out of memory this time - but some of the nodes went into
recovery mode. Was able to catch some logs this
We'd like to graph the approximate RAM size of our SolrCache instances. Our
first attempt at doing this was to use the Lucene RamUsageEstimator [1].
Unfortunately, this appears to give a bogus result. Every instance of
FastLRUCache was judged to have the same exact size, down to the byte. I
Hi;
It should work with a git clone. I've never faced an issue with it (I've
used a git clone for a long time). What kind of problem do you get?
Thanks;
Furkan KAMACI
On 14 Apr 2014 at 21:56, Ramkumar R. Aiyengar andyetitmo...@gmail.com
wrote:
ant compile / ant -f solr dist / ant test
On 4/14/2014 12:56 PM, Ramkumar R. Aiyengar wrote:
ant compile / ant -f solr dist / ant test certainly work, I use them with a
git working copy. You trying something else?
On 14 Apr 2014 19:36, Jeff Wartes jwar...@whitepages.com wrote:
I vastly prefer git, but last I checked, (admittedly,
I lost the original thread; sorry for the new / repeated topic, but I
thought I would follow up to let y'all know that I ended up implementing
Alex's idea of an UpdateRequestProcessor in order to apply
different analysis to different fields when doing something analogous to
copyFields.
Hi Mike,
Glad I was able to help. Good note about the PoolingReuseStrategy, I
did not think of that either.
Is there a blog post or a GitHub repository coming with more details
on that? Sounds like something others may benefit from as well.
Regards,
Alex.
P.s. If you don't have your own
So are you sending the MP3 files to Solr? That's actually generally a
bad practice, it places the load for analyzing all the files on Solr.
Yes, SolrCell makes this possible, and it's great for small data sets.
What I'd actually recommend is that you parse the files on a SolrJ
client using Tika
_which_ SolrCache objects? filterCache? queryResultCache? documentCache?
A queryResultCache entry is about the average size of a query plus (window
size * sizeof(int)).
A filterCache entry is about the average size of a filter query plus
maxDoc/8 bytes.
The documentCache is about the average size of the stored fields in bytes *
its size.
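A back-of-envelope sketch of those per-entry estimates; all input numbers below are made-up examples, not measurements:

```shell
# Rough per-entry size estimates for two of the caches.
maxdoc=1000000          # documents in the index
avg_fq_bytes=100        # average serialized filter-query size
window_size=50          # queryResultWindowSize
avg_query_bytes=200     # average serialized query size

# filterCache: one bitset of maxDoc bits = maxDoc/8 bytes per entry
filter_entry=$(( avg_fq_bytes + maxdoc / 8 ))
# queryResultCache: window of doc ids, sizeof(int) = 4 bytes each
result_entry=$(( avg_query_bytes + window_size * 4 ))

echo "filterCache entry ~ $filter_entry bytes"
echo "queryResultCache entry ~ $result_entry bytes"
```

Multiplying each per-entry estimate by the configured cache size gives a rough total, which may be a more practical bound than instrumenting the objects directly.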
The use case I keep thinking about is Flume/Morphline replacing
DataImportHandler. So, when I saw morphline shipped with Solr, I tried
to understand whether it is a step towards it.
As it is, I am still not sure I understand why those jars are shipped
with Solr, if it is not actually integrating
Hi,
The index is a fresh one (older index deleted) and there is no soft
commit on it.
There are other logs available, part of which follows:
Apr 11, 2014 9:34:49 AM org.apache.solr.common.SolrException log
SEVERE: null:org.apache.solr.common.SolrException: Unable to create core:
On 4/14/2014 10:25 PM, Modassar Ather wrote:
Caused by: org.apache.lucene.store.LockObtainFailedException: Lock obtain
timed out: NativeFSLock@path/data/index/write.lock:
java.io.FileNotFoundException: path/data/index/write.lock (Permission
denied)
This sounds like the user (the one that's
Thanks Shawn for your suggestion.
Earlier, the write permission was set only on the *data* directory. Per your
suggestion, after providing write access to the
*data/index* folder and all the files under it, the core gets loaded.
Regards,
Modassar
On Tue, Apr 15, 2014 at 10:12 AM, Shawn Heisey