Hi,
Here is a discussion we had recently with a fellow Solr user.
It seems reasonable to me, and I wanted to see whether this is an accepted theory.
The bit-vectors in the filterCache are as long as the maximum number of
documents in a core. If there are a billion docs per core, every bit vector
will have a billion bits.
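As a rough back-of-the-envelope check (assuming one bit per document, which
is how Lucene sizes its bit sets):

    1,000,000,000 docs / 8 bits per byte = 125,000,000 bytes, roughly 120 MB
    per filterCache entry

so a filterCache with, say, 256 entries could in the worst case pin tens of
gigabytes of heap.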
In this scenario, regular Solr sorting is insufficient as it's performed
post-search, and only collects the rows needed to satisfy the query. The
alternative for a naturally sorted index is to sort all the docs myself, and
I wish to avoid this. I use docValues extensively; it really is a great help.
Erick, I've tried using
Hi Sharma,
I guess you are looking for nested documents:
https://lucene.apache.org/solr/guide/6_6/uploading-data-with-index-handlers.html#UploadingDatawithIndexHandlers-NestedChildDocuments
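For anyone searching the archives, a minimal nested-document payload in the
XML update format looks like this (field names illustrative, following that
ref guide page):

    <add>
      <doc>
        <field name="id">parent-1</field>
        <field name="content_type">parentDocument</field>
        <doc>
          <field name="id">child-1</field>
          <field name="comment">a child document</field>
        </doc>
      </doc>
    </add>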
Hi everyone,
I have created a core and indexed data in Solr using the DataImportHandler.
The schema for the core looks like this:
This is my data in mysql database:
md5:"376463475574058bba96395bfb87"
rules:
{"fileRules":[{"file_id":1321241,"md5
Hi Alex,
just to explore your question a bit: why do you need that?
Do you need to reduce query time?
Have you tried enabling docValues for the fields of interest?
DocValues seem to me a pretty useful data structure when sorting is a
requirement.
I am curious to understand why that was not an option.
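For reference, enabling docValues is a single schema attribute; a sketch with
an illustrative field name (the type name is from the 6.x default configset):

    <field name="price" type="tfloat" indexed="true" stored="true" docValues="true"/>

Existing documents need to be re-indexed before the docValues take effect.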
Hello,
We've got a pretty big index (~1B small docs). I'm interested in managing
the index so that the search results would be naturally sorted by a certain
numeric field, without specifying the actual sort field in query time.
My first attempt was using SortingMergePolicyFactory. I've found
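For context, the solrconfig.xml wiring for that policy looks roughly like
this (an untested sketch based on the ref guide; the sort field name is
illustrative):

    <mergePolicyFactory class="org.apache.solr.index.SortingMergePolicyFactory">
      <str name="sort">my_numeric_field desc</str>
      <str name="wrapped.prefix">inner</str>
      <str name="inner.class">org.apache.solr.index.TieredMergePolicyFactory</str>
    </mergePolicyFactory>

Note that this sorts documents within each merged segment; by itself it does
not guarantee globally sorted results across segments.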
-----Original Message-----
From: Stefan Matheis [mailto:matheis.ste...@gmail.com]
Sent: Wednesday, September 27, 2017 12:32 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr 7.0.0 -- can it use a 6.5.0 data repository (index)
That sounds like https://issues.apache.org/jira/browse/SOLR-11406 if i'm
not mistaken?
-Stefan
On Sep 27, 2017 8:20 PM, "Wayne L. Johnson" <wjohn...@familysearch.org>
wrote:
I'm testing Solr 7.0.0. When I start with an empty index, Solr comes up just
fine, I can add documents and query documents. However when I start with an
already-populated set of documents (from 6.5.0), Solr will not start. The
relevant portion of the traceback seems to be:
Caused
Not really. Do note that atomic updates require:
1> all _original_ fields (i.e. fields that are _not_ destinations for
copyFields) have stored=true
2> no destination of a copyField has stored=true
3> Solr composes the original document from the stored fields and
re-indexes the doc behind the scenes.
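A minimal atomic-update request, for illustration (core and field names
assumed):

    curl -X POST -H 'Content-Type: application/json' \
      'http://localhost:8983/solr/mycore/update?commit=true' \
      --data-binary '[{"id":"doc1","price":{"set":9.99},"tags":{"add":"sale"}}]'

Solr fetches the stored fields for doc1, applies the set/add operations, and
re-indexes the whole document.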
Hello,
I have a single-node Solr 6.4 server with an index of 100 million documents.
The default "id" is the primary key of this index. Now, I would like to set up
an update process to insert new documents, and update existing documents based
on availability of value in another
hi Can Ezgi
> First of all, I want to use a spatial index for my data, which includes
> polygons and points. But Solr indexed only the first 18 rows; the other
> rows were not indexed.
Do all rows have a unique id field?
Are there errors in the logfile?
cheers -- Rick
Hi everyone,
First of all, I want to use a spatial index for my data, which includes
polygons and points. But Solr indexed only the first 18 rows; the other rows
were not indexed. I need sample data that includes polygons and points.
The other problem: I will write a spatial query against this data. This
spatial query includes
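For polygons and points, an RPT field is the usual route; a sketch (polygon
support additionally requires the JTS jar on Solr's classpath, and the
factory class name varies across Solr versions):

    <fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
        spatialContextFactory="org.locationtech.spatial4j.context.jts.JtsSpatialContextFactory"
        geo="true" distErrPct="0.025" maxDistErr="0.001" distanceUnits="kilometers"/>
    <field name="geo" type="location_rpt" indexed="true" stored="true"/>

Documents then carry WKT values such as POINT(4.9 52.37) or
POLYGON((30 10, 40 40, 20 40, 10 20, 30 10)).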
On Wed, 2017-09-13 at 11:56 -0700, fabigol wrote:
> my problem is that my index freezes several times and I don't know why.
> So I lose all the data of my index.
> I have 14 million documents from a PostgreSQL database. I have a
> single node with 31 GB for my JVM and my server has 64
Fabien,
What do you see in the logfile at the time of the freeze?
Cheers -- Rick
hi,
my problem is that my index freezes several times and I don't know why. So I
lose all the data of my index.
I have 14 million documents from a PostgreSQL database. I have a single node
with 31 GB for my JVM and my server has 64 GB. My index takes 6 GB on the HDD.
Is it a good configuration?
> ... implemented as a view or as SQL, but that is a useful mental
> model for people starting from a relational background.
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/ (my blog)
First, it's often best, by far, to denormalize the data in your solr index,
that's what I'd explore first.
If you can't do that, the join query parser might work for you.
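A sketch of the join query parser syntax, with illustrative field names:

    q={!join from=parent_id to=id}type:child AND color:red

This collects the parent_id values of documents matching the inner query and
returns the documents whose id matches one of them; add fromIndex=otherCore
if the "from" side lives in a different core on the same node.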
On Aug 30, 2017 4:49 AM, "Renuka Srishti" <renuka.srisht...@gmail.com>
wrote:
> Thanks Susheel for yo
Hello Guys,
I have installed solr in my local system and was able to connect to Teradata
successfully.
For a single table I am able to index the data and query it, but when I am
trying multiple tables in the same schema, indexing them one by one,
I can see the datasets getting replaced instead of being merged.
Hi there,
We are using solr-6.3.0 and have the need to replace the Solr index in
production with the Solr index from another environment on a periodical
basis. But the JVMs have to be recycled for the updated index to take
effect. Is there any way this can be achieved without restarting the JVMs?
a) What is it that you want to search on (name, desc, city etc.)?
b) What is it that you want to show as part of the search result (name,
city etc.)?
Based on the above two questions, you would know what data to pull in from
the relational database, create the Solr schema, and index the data.
You may first try to denormalize / flatten the structure so that you deal
with one collection/schema and query upon it.
HTH.
Thanks,
Susheel
On Mon, Aug 28, 2017 at 8:04 AM, Renuka
Hi,
What is the best way to index a relational database, and how does it impact
performance?
Thanks,
Renuka Srishti
write.lock is used whenever a core (replica) wants to, well, write to
the index. Each individual replica makes sure only one thread writes to the
index at a time. If two threads were to write to an index, there's a
very good chance the index would be corrupted, so it's a safeguard
against two or more writers.
... that you don't copy over the write.lock file, however, as you may not be
able to start replicas if it's there.

There's a relatively little-known third option. You can (ab)use the
replication API "fetchindex" command, see:
https://cwiki.apache.org/confluence/display/solr/Index+Replication
to pull the index from Cloud B to replicas on Cloud A. That has the
advantage of working even if you are actively indexing to Cloud B.
NOTE: currently you cannot _query_ Cloud A (the target) while the
fetchindex is going on, but I doubt you really care
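As a sketch, the fetchindex call looks like this (host and core names
illustrative):

    curl 'http://cloudA:8983/solr/coll_shard1_replica1/replication?command=fetchindex&masterUrl=http://cloudB:8983/solr/coll_shard1_replica1'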
... behind, we want to bulk copy the binary index
from B to A.
We have tried two approaches:
Approach 1.
For cloud A:
a. delete collection to wipe out everything
b. create new collection (data is empty now)
c. shut down solr server
d. copy binary index from cloud B
Hello,
ENV: solrcloud 6.3
3 * Dell servers
128G RAM, 12 cores, 4.3T per server
3 Solr nodes per server
20G per node (started with the -m 20G parameter)
10 billion documents in total

Problem:
When we start solrcloud, the cached index drives memory usage to 98% or
more. And if we continue to index documents (batch commits of 10,000
documents), one or more servers will refuse service: we cannot log in via
ssh, and even monitoring is refused.

So, how can I limit Solr's index-caching memory behavior?

Anyone, thanks!
I am looking for lessons learned or problems seen when building a Solr index
from AEM using a Solr cluster with content passing through an ELB.
Our configuration is AEM 6.1 indexing to a cluster of Solr servers running
version 4.7.1. When building an index with a smaller data set - 4 million
> ... be a lot easier than setting dataDir in core.properties for every core,
> especially in a cloud install.
Agreed. Nothing in what I said precludes this. If you don't specify dataDir,
then the index for a new replica goes in the default place, i.e. under
your install
directory usually. In your case under your
On 8/2/2017 9:17 AM, Erick Erickson wrote:
> Not entirely sure about AWS intricacies, but getting a new replica to
> use a particular index directory in the general case is just
> specifying dataDir=some_directory on the ADDREPLICA command. The index
> just needs an HTTP connection (
> ... : ("the Courts of Equity of the United States")
> 2017-08-02 02:16:36 : 54749/1000 secs : ("The American Cause")
> 2017-08-02 19:27:58 : 54561/1000 secs : ("register of the department of
> justice")
>
> which could all be annihilated with CG's, at the expense, according to HT,
> of a 40% increase in index size.

On Thu, Aug 3, 2017 at 11:21 AM, Erick Erickson <erickerick...@gmail.com>
wrote:
bq: will that search still return results from the earlier documents
as well as the new ones

In a word, "no". By definition, the analysis chain applied at index
time puts tokens in the index, and that's all you have to search
against for the doc unless and until you re-index the document.
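For reference, a sketch of wiring CommonGrams into an analysis chain (the
words file name is illustrative):

    <fieldType name="text_cg" class="solr.TextField" positionIncrementGap="100">
      <analyzer type="index">
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.CommonGramsFilterFactory" words="commonwords.txt" ignoreCase="true"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.CommonGramsQueryFilterFactory" words="commonwords.txt" ignoreCase="true"/>
      </analyzer>
    </fieldType>

Documents indexed before the change keep their old tokens until re-indexed,
which is exactly why old and new documents behave differently.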
Hey all, I have yet to run an experiment to test this but was wondering if
anyone knows the answer ahead of time.
If I have an index built with documents from before implementing the
CommonGrams filter, then enable it and start adding documents that have the
filter/tokenizer applied, will searches
CD1234
> ABCD5678
>
> *Expected Descending order*
>
> ABCD5678
> ABCD1234
> 5678ABCD
> 1234ABCD
> 1234#ABCD
> #2345DBCA
> #2345ACBD
> #2345ABCD
>
> Thanks & Regards,
> Paddy
Shawn:
Not entirely sure about AWS intricacies, but getting a new replica to
use a particular index directory in the general case is just
specifying dataDir=some_directory on the ADDREPLICA command. The index
just needs an HTTP connection (uses the old replication process) so
nothing huge
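A sketch of that call (names and paths illustrative; dataDir is a documented
ADDREPLICA parameter, if I'm reading the Collections API page right):

    curl 'http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=mycoll&shard=shard1&node=newhost:8983_solr&dataDir=/mnt/new-ebs/index'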
Thank you Matt for the reply; my apologies for the lack of clarity in the
problem statement.
The problem was with the source attribute value defined at the source
system.
The source system sends
heightSquareTube_string_mv: 90 - 100 mm
and the Solr index converts the XML or HTML entity code to its symbol
To add to this: not sure if SolrCloud uses it, but you're going to want to
delete the write.lock file as well.
> On Aug 1, 2017, at 9:31 PM, Shawn Heisey wrote:
On 8/1/2017 7:09 PM, Erick Erickson wrote:
> WARNING: what I currently understand about the limitations of AWS
> could fill volumes so I might be completely out to lunch.
>
> If you ADDREPLICA with the new replica's data residing on the new EBS
> volume, then wait for it to sync (which it'll do
> ... why should I run it although I copied the index and changed the path?
>
> And what do you mean with "Using multiple passes with rsync"?

The first time you copy the data, which you could do with cp if you
want, the time required will be limited by the size of the data and the
speed of
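The multi-pass idea, as a sketch (paths illustrative):

    # pass 1: copy while Solr is still running; takes as long as the data is big
    rsync -av /var/solr/data/index/ /mnt/new-ebs/index/
    # pass 2, after stopping Solr: only transfers what changed since pass 1
    rsync -av --delete /var/solr/data/index/ /mnt/new-ebs/index/

The second pass is fast because rsync only moves the differences, so the
actual downtime is just that delta plus the restart.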
Thanks Shawn,
I'm using Ubuntu and I'll try the rsync command. Unfortunately I'm using a
replication factor of one, but I think the downtime will be less than five
minutes after following your steps.
But how can I start Solr backup, or why should I run it although I copied
the index and changed the path?
Hello,
I've a SolrCloud of four instances on Amazon, and the EBS volumes that
contain the data on every node are going to be full; unfortunately Amazon
doesn't support expanding the EBS. So, I'll attach larger EBS volumes to
move the index to.
I can stop the updates on the index, but I'm afraid
Ronald:
Actually, people generally don't search on the master ;). The idea is that
the master is configured for heavy indexing and then people search on the
slaves, which are configured for heavy query loads (e.g. memory,
autowarming, whatever may be different). Which is its own problem,
since the time
Bingo! Right on both counts! openSearcher was false. When I changed it to
true, I could see that master(searching) and master(replicable) both
changed. And autoCommit.maxTime is causing a commit on the master.
Who uses master(replicable)? It seems for my simple master/slave
Another sanity check: with deletion, the only option would be to re-index
those documents. Could someone please let me know if I am missing anything
or if I am on track here. Thanks.
While trying to upgrade a 100G index from Solr 4 to 5, CheckIndex (actually
the index updater) indicated that the index was corrupted. Hence, I ran
CheckIndex to fix the index, which showed a broken-segment warning and then
deleted those documents. I then ran the index updater on the fixed index,
which upgraded fine
1> the commit on the master has openSearcher=false. This
closed all open segments (i.e. the segments with the new docs)
2> the slave replicated the closed segments and opened a new searcher
on the index, so it shows the new docs
3> the master still hasn't opened a new searcher so continues to not
be able to see the new docs
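For reference, this is the solrconfig.xml setting in play (values
illustrative):

    <autoCommit>
      <maxTime>60000</maxTime>
      <openSearcher>false</openSearcher>
    </autoCommit>

With openSearcher=false a hard commit makes segments durable, and thus
replicable, without opening a new searcher on the master.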
I'm testing replication on solr 5.5.0.
I set up one master and one slave.
The index versions match; that is, master(replicable), master(searching), and
slave(searching) are the same.
I make a change to the index on the master, but do not commit yet.
As expected, the version master(replicable
I think Thaer's answer clarifies how they do it.
So at the time they assemble the full Solr doc to index, there may be a new
field name not known in advance,
but to my understanding the RDF source contains information on the type
(else they could not do the mapping to dynamic fields either), and so
Hi,
I have personally written a Python script to parse RDF files into an in-memory
graph structure and then pull data from that structure to index to Solr.
I.e. you may perfectly well have RDF (nt, turtle, whatever) as source but index
sub structures in very specific ways.
Anyway, as Erick points out, that’s
> ... fields are known. We get the data
> from an RDF database (which changes continuously). To be more specific, we
> have a database and all changes on it are sent to a Kafka queue, and we
> have a consumer which listens to the queue and updates the Solr index.
>
> regards,
> Thaer
If you do not need the flexibility of dynamic fields, don’t use them.
Sounds to me that you really want a field “price” to be float and a field
“birthdate” to be of type date etc.
If so, simply create your schema (either manually, through Schema API or using
schemaless) up front and index each
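A sketch of the Schema API route (field and type names from the 6.x default
configset, for illustration):

    curl -X POST -H 'Content-type:application/json' \
      'http://localhost:8983/solr/mycollection/schema' \
      --data-binary '{
        "add-field":{"name":"price","type":"tfloat","stored":true},
        "add-field":{"name":"birthdate","type":"tdate","stored":true}
      }'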
Hi Thaer,
Do you use schemaless mode [1]?
Kind Regards,
Furkan KAMACI
[1] https://cwiki.apache.org/confluence/display/solr/Schemaless+Mode
On Wed, Jul 5, 2017 at 4:23 PM, Thaer Sammar <t.sam...@geophy.com> wrote:
Hi,
We are trying to index documents of different types. Documents have
different fields; the fields are known at indexing time. We run a query on a
database, and we index what comes back, using the query variables as field
names in Solr. Our current solution: we use dynamic fields with a prefix,
for example
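Presumably something like this, for illustration:

    <dynamicField name="f_*" type="tfloat" indexed="true" stored="true"/>
    <dynamicField name="s_*" type="string" indexed="true" stored="true"/>

so a query variable named price becomes the field f_price at index time.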
Sent: Tuesday, April 11, 2017 1:56 PM
To: solr-user@lucene.apache.org
Subject: RE: Solr 6.4. Can't index MS Visio vsdx files

Thanks for your responses.
Are there any possibilities to ignore parsing errors and continue indexing?
Because right now Solr/Tika stops parsing the whole document if it finds any
Sorry. Yes, you'll have to update commons-compress to 1.14.

-----Original Message-----
From: Gytis Mikuciunas [mailto:gyt...@gmail.com]
Sent: Monday, July 3, 2017 9:15 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr 6.4. Can't index MS Visio vsdx files
hi,
So I'm back from my long vacation :)
I'm trying to bring up a fresh Solr 6.6 standalone instance on Windows
2012R2 server.
Replaced:
poi-*3.15-beta1 ---> poi-*3.16
tika-*1.13 ---> tika-*1.15
Tried to index one txt file and got (with the poi and tika files that come
out of the box)
I need a way to index binary files from FTP servers, using UrlDataSource.
I'm doing this locally, but I need to do the same from remote sources (FTP
servers). I've read a lot and I can't find any example of indexing binary
files from FTP. Is it possible to achieve that? How can I use the Data
Import Handler for this?
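What I would try, as an untested sketch of a DIH data-config.xml: DIH ships
a BinURLDataSource that reads binary content from a URL, and
TikaEntityProcessor can parse it; whether it accepts ftp:// URLs depends on
Java's URL handling, so treat that as an assumption to verify (URL and field
names illustrative):

    <dataConfig>
      <dataSource name="bin" type="BinURLDataSource"/>
      <document>
        <entity name="doc" dataSource="bin" processor="TikaEntityProcessor"
                url="ftp://user:pass@host/path/file.pdf" format="text">
          <field column="text" name="content"/>
        </entity>
      </document>
    </dataConfig>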
I am trying to split my index data of size 22GB (1.7M documents) into
three shards.
The total time for splitting takes about 7 hours.
I used the same query that is mentioned in the Solr Collections API docs.
Is there any way to do that quicker?
Can I use the REBALANCE API? Is it secure?
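For reference, the call in question, run asynchronously so it can't time out
(collection name and request id illustrative):

    curl 'http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=mycoll&shard=shard1&async=split-1'
    curl 'http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=split-1'

Async mode doesn't make the split itself faster, but it avoids HTTP timeouts
on a 7-hour operation.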
... looks like something wrong / a bug in the code. Please suggest.

===
let(a=search(collection1,
      q=id:9,
      fl="id,business_email",
      sort="business_email asc"),
    get(a)
)

{
  "result-set": {
    "docs": [
      {
Hi Joel,
I am able to reproduce this in a simple way. Looks like LetStream is having
some issues. The complement function below works fine if I execute it
outside let and returns an EOF:true tuple, but if a tuple with EOF:true is
assigned to a let variable, it gets changed to EXCEPTION "Index 0, Size 0"
etc.

So the let stream is not able to handle a stream/result which has only the
EOF tuple, and it breaks the whole let expression block.

===Complement inside let
let(
Usually we index directly into the Prod Solr rather than copying from
local/lower environments. If that works in your scenario, I would suggest
indexing directly into Prod rather than copying/restoring from the local
Windows env to Linux.
On Thu, Jun 22, 2017 at 12:13 PM, Moritz Michael <moritz.mu...@gmail.com>
BTW, is there a better/recommended way to transfer an index to
another solr?
On Thu, Jun 22, 2017 at 6:09 PM +0200, "Moritz Michael"
<moritz.mu
From: Michael Kuhlmann <k...@solr.info>
Sent: Thursday, June 22, 2017 2:50 PM
Subject: Re: Error after moving index
To: <solr-user@lucene.apache.org>

Hi Moritz,
did you stop your local Solr server before? Copying data from