Does that mean that if the fl field is the result of some function, I only need
to index those fields? Or do I need to store them as well?
> given are simply ignored.
I'm doing a lot of filter queries in fq. My search is something like
'q=*:*&fq=..function on a few fields..'. Do I need to index only those
fields and use fl to get the other results, or do I need to index everything?
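For illustration, a request of the shape being described might look like this
(the function and the field names views/likes are hypothetical):

q=*:*&fq={!frange l=100}sum(views,likes)&fl=id,title&wt=json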
flagged as an ERROR rather than as INFO or WARN ?
-Original Message-
From: Oakley, Craig (NIH/NLM/NCBI) [C]
Sent: Monday, June 10, 2019 9:57 AM
To: solr-user@lucene.apache.org
Subject: RE: No files to download for index generation
Does anyone yet have any insight on interpreting the seve
Hi!
We have an index with about fifty million documents, with five child
documents on average.
The schema has been a _root_-only schema for some time, but we now tried
changing to _nest_path_/NestPathField.
When reindexing the data with the new schema it takes almost twice as long
to index all
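For reference, the _nest_path_ support in question is declared in the schema
with the stock definitions from the ref guide:

<fieldType name="_nest_path_" class="solr.NestPathField"/>
<field name="_nest_path_" type="_nest_path_"/>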
here is the query:
/solr/test/select?q={!parent which=doc_type:Parent score=max} {!boost b=100.0}color:Red {!dismax qf=title v='Regular' score=total}&fl=id,product_class_type,title,score,color&wt=json
I have a Solr core which has a mix of child-free and with-child documents;
sample XML:
<add>
  <doc>
    <field name="id">4</field>
    <field name="title">Regular Shirt</field>
    <field name="doc_type">Parent</field>
    <field name="color">Black</field>
  </doc>
  <doc>
    <field name="id">8</field>
    <field name="title">Solid Rug</field>
    <field name="doc_type">Parent</field>
    <field name="color">Solid</field>
  </doc>
  <doc>
    <field name="id">1</field>
    <field name="title">Regular color Shirts</field>
    <field name="doc_type">Parent</field>
    <doc>
      <field name="id">2</field>
      <field name="doc_type">Child</field>
      <field name="color">Red</field>
    </doc>
    <doc>
      <field name="id">3</field>
      <field name="doc_type">Child</field>
      <field name="color">Blue</field>
    </doc>
  </doc>
  <doc>
    <field name="id">5</field>
    <field name="title">Rugs</field>
    <field name="doc_type">Parent</field>
    <doc>
      <field name="id">6</field>
      <field name="doc_type">Child</field>
      <field name="color">Abstract</field>
    </doc>
    <doc>
      <field name="id">7</field>
      <field name="doc_type">Child</field>
      <field name="color">Printed</field>
    </doc>
  </doc>
</add>
Now I want to write a query
Hi there,
We are using AWS EMR as our big data processing cluster. We have about 3TB
of text files, where each line is a JSON record that I want to be indexed
into Solr.
I have tried this by batching the records and pushing them to the Solr index
using the SolrJ client, but that feels really slow.
My doubt
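For what it's worth, a common way to speed up bulk SolrJ indexing is
ConcurrentUpdateSolrClient with large batches; a minimal sketch (URL, queue
size, thread count, and the readNextBatch() helper are all illustrative):

import java.io.IOException;
import java.util.List;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient;
import org.apache.solr.common.SolrInputDocument;

void bulkIndex() throws SolrServerException, IOException {
  SolrClient client = new ConcurrentUpdateSolrClient.Builder(
          "http://localhost:8983/solr/mycollection")
      .withQueueSize(20000)  // docs buffered client-side before streaming
      .withThreadCount(8)    // parallel connections draining the queue
      .build();
  List<SolrInputDocument> batch;
  while ((batch = readNextBatch()) != null) {  // readNextBatch() is hypothetical
    client.add(batch);   // returns quickly; streaming happens in the background
  }
  client.commit();       // commit once at the end, not after every batch
  client.close();
}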
"QTime": 6
},
"Operation splitshard caused exception:":
"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
not enough free disk space to perform index split on node
sys-hadoop-1:9100_solr, required: 306.76734546013176, available:
16.77236
If you’re using Solr 8.2 or newer there’s a built-in index analysis tool that
gives you a better understanding of what kind of data in your index occupies
the most disk space, so that you can tweak your schema accordingly:
https://lucene.apache.org/solr/guide/8_2/collection-management.html
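For example (the collection name is a placeholder; if I remember the parameter
names right, the raw-size estimates were the part added in 8.2):

curl "http://localhost:8983/solr/admin/collections?action=COLSTATUS&collection=mycollection&rawSize=true&rawSizeSummary=true"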
What he said.
But if you must have a number, assume that the index will be as big as your
(text) data. It might be 2X bigger or 2X smaller. Or 3X or 4X, but that is a
starting point. Once you start updating, the index might get as much as 2X
bigger before merges.
Do NOT try to get
I’ve always had trouble with that advice, that RAM size should be JVM + index
size. I’ve seen 300G indexes (as measured by the size of the data/index
directory) run in 128G of memory.
Here’s the long form:
https://lucidworks.com/post/sizing-hardware-in-the-abstract-why-we-dont-have
Hello All,
I want to size the RAM for my Solr cloud instance. The rule of thumb is that
your total RAM size should be = (JVM size + index size).
Now I have a simple question: how do I know my index size? Is there a simple
method, perhaps from the Solr cloud admin UI or an API?
My assumption so far is the total
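One way, assuming a reasonably recent Solr, is the metrics API, which reports
the on-disk index size per core (host and port are placeholders):

curl "http://localhost:8983/solr/admin/metrics?group=core&prefix=INDEX.sizeInBytes"

Alternatively, du -sh on each core's data/index directory gives the same
information.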
Hi Amanda,
I did this:
https://github.com/freedev/solr-import-export-json
and it works with/without cursorMark, in case your index does not have a
unique key (primary key).
On Fri, Jan 31, 2020 at 1:18 PM Amanda Shuman
wrote:
the file. I basically did what Steve Ge suggested, the command
looks kind of like this for anyone else who needs it in the future:
curl "
http://servername.com:8983/solr/collection1/select?indent=on=*:*=5000=json;
> collection1_index.json
I just set the rows to the number in our index, which I
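For larger indexes, the cursorMark variant mentioned above would look roughly
like this (the sort must include the unique key):

curl "http://servername.com:8983/solr/collection1/select?q=*:*&rows=5000&sort=id+asc&cursorMark=*&wt=json"

Then pass each response's nextCursorMark as the next request's cursorMark
until the value stops changing.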
Hey Sameer,
I tried using the tool on hadoop master node (AWS EMR) like:
hadoop jar cloudera-search-1.0.0-cdh5.2.0-jar-with-dependencies.jar \
org.apache.solr.hadoop.MapReduceIndexerTool \
-D 'mapred.child.java.opts=-Xmx500m' \
--log4j ~/log4j.properties \
--morphline-file
Dear all:
I've been asked to produce a JSON file of our index so it can be combined
and indexed with other records. (We run solr 5.3.1 on this project; we're
not going to upgrade, in part because funding has ended.) The index has
several thousand rows, but nothing too drastic. Unfortunately
OK, I created the collection from scratch based on the config.
Unfortunately, it does not improve: the index is just growing and growing,
except when I stop Solr; then, during startup, the unnecessary index files are
purged. Even with the previous config this did not happen in older Solr
versions (for sure
It is, BTW, a Linux system, and autoSoftCommit is set to -1. However, indeed
openSearcher is set to false. A commit is set to true after doing all the
updates, but the index is not shrinking. The files are not disappearing
during shutdown, but they disappear after starting up again.
On Tue, Jan 21
to something other than
-1. Real Time Get requires access to all segments, and it takes a new searcher
being opened to release them. Actually, a very quick test would be to submit
“http://host:port/solr/collection/update?commit=true” and see if the index
shrinks as a result. You don’t need to change solrconfig.xml for that test.
If you are opening a new searcher, this is very concerning. There
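For reference, the solrconfig.xml settings under discussion look like this
(the times are illustrative, not recommendations):

<autoCommit>
  <maxTime>60000</maxTime>           <!-- hard commit: flush segments, roll tlog -->
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>300000</maxTime>          <!-- soft commit opens the new searcher -->
</autoSoftCommit>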
From what I see, it basically duplicates the index files but does not delete
the old ones.
It uses Caffeine cache.
What I observe is that there is an exception when shutting down for the
collection that is updated: timeout waiting for all directory ref counts to be
released - gave up waiting
Sorry, I missed a line: it is not the tlog that is growing but the /data/index
folder - until restart, when it seems to be purged.
Hi,
I have a test system here with Solr 8.4 (but this is also reproducible in older
Solr versions), which has an index that is growing and growing - until the
SolrCloud instance is restarted; then it is reduced to the expected normal
size.
The collection is configured to do auto commit
Hi All,
I am getting the error
No files to download for index generation: 1385219
continuously on my slave servers. I am using solr-5.5.5 with one master and
two slaves.
Please help me with this. Does this error also impact replication, or cause
performance issues on data queries?
Thanks
Hello
I am new to using Solr and I need your help.
I have data on HDFS that I need to index with Solr.
I) My data looks like this; it is saved on HDFS:
ID_METIER_PCS_ESE,CD_PCS_ESE_1,LB_PCS_ESE_1,CD_PCS_ESE_2,LB_PCS_ESE_2,CD_PCS_ESE_3,LB_PCS_ESE_3,DT_DEB,DT_FIN,TS_TEC_INSERT,TS_TEC_UPDATE
37,3
Update your schema to include the new field and reload your collection.
Then updating your field should work.
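For example, via the Schema API (field name and type are placeholders):

curl -X POST -H 'Content-type:application/json' \
  --data-binary '{"add-field":{"name":"new_field","type":"string","stored":true}}' \
  "http://localhost:8983/solr/mycollection/schema"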
Best,
Erick
How do I add a new field to an already existing index in Solr 6.6?
I tried to use 'set' for this, but it shows an error: undefined field.
However, I could create a new index with 'set'.
But how do I add a new field to already indexed data?
Is it possible?
Thank you!
Regards,
Sai
Hi,
I am exploring the possibility of doing case-insensitive filter/facet queries
in Solr. I would also need to preserve the cases in the index.
This means that the normal LowerCaseFilterFactory approach would not work, as
facet values will not preserve cases and will show in all lowercase.
One method was to use facet.contains along with
f.fieldname.facet.ignoreCase=
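For what it's worth, another common pattern (field and type names here are
hypothetical) is to facet on a case-preserving string field and filter on a
lowercased copy:

<field name="category" type="string" indexed="true" stored="true"/>
<field name="category_lc" type="string_lc" indexed="true" stored="false"/>
<copyField source="category" dest="category_lc"/>

<fieldType name="string_lc" class="solr.TextField" sortMissingLast="true">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

Facet on category (original case preserved) and filter with
fq=category_lc:red; the query-time analyzer lowercases the input, so any
casing matches.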
I was not clear in the last email.
I meant: "For me, it is impossible to back up or restore Solr's index by
taking a snapshot."
If I confused you, I am sorry about that.
Sincerely,
Kaya Ota
On Thu, Nov 21, 2019 at 19:50 Kayak28 :
do:
- create a snapshot: creates a binary file (snapshot_N, where N is identical
to segments_N) that contains the path of the index. The file is created
under the data/snapshot_metadata directory.
- list snapshots: returns JSON containing all snapshot data, which shows
segment generation and path
OK, you have two options:
1.1> do NOT construct IDs with the version. Have two separate fields: id (which
is the <uniqueKey> in your schema) and a _separate_ field called tracking
(note: there's already by default a _version_ field, with underscores, used for
optimistic locking - do not use that).
1.2>
Well, I still cannot completely relate to the solutions you guys proposed; I am
looking into how I could achieve that with my application. Thanks!
One thing that I want to know is how to avoid full re-indexing; that is, what
I need is that Solr does not index all the data every time some
Hi,
My name is Suman Pal. I am facing a problem when indexing from a zip file
(.gz) by using
at present I am able to index like the below code:
and my xml.xml is as follows:
file1
You can delete documents in SolrJ by using deleteByQuery. Using this you can
delete any number of documents from your index, or all of your documents,
depending on the query you specify as the parameter. How you use it is down to
your application.
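For reference, a minimal SolrJ sketch (URL, collection name, and the query are
illustrative):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

try (SolrClient client = new HttpSolrClient.Builder(
        "http://localhost:8983/solr/mycollection").build()) {
  client.deleteByQuery("id:report-v1");  // any query matching the docs to remove
  client.commit();
}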
You haven't said if your application performs
to carry on with the solution that you proposed.
Please guide !
-Original Message-
From: David Hastings [mailto:hastings.recurs...@gmail.com]
Sent: 04 November 2019 20:10
To: solr-user@lucene.apache.org
Subject: Re: Delete documents from the Solr index using SolrJ
delete them by query would
when you add a new document using the same "id" value as another, it just
overwrites it
On Mon, Nov 4, 2019 at 9:30 AM Khare, Kushal (MIND) <
kushal.kh...@mind-infotech.com> wrote:
> Could you please let me
Basically, what I need is to refresh the index. Suppose in a directory I have
4 docs that have been indexed, so my search works upon those 4.
Now, when I delete one of them, re-index, and search, that deleted document
from the directory is still being returned.
Hope I have made it a bit
Could you please let me know how to achieve that ?
I don’t understand why it is not possible.
However, why don’t you simply overwrite the existing document instead of doing
add+delete?
Hello mates!
I want to know how we can delete documents from the Solr index. Suppose that
for my system I have a document that has been indexed; now a newer version of
it is in use, so I want to use the latest one, and for that I want the
previous one to be deleted from the index.
Kindly help me
Did you solve this problem?
Thanks
traction.ExtractingRequestHandler" >
>
> true
> ignored_
> sr_mv_txt
>
>
>
>
> Thnx
>
>
> On Thu, Sep 19, 2019 at 11:02 PM PasLe Choix
> wrote:
>
>> I am on Solr 7.7, according to the official document:
>> https://lucene.apache.or
Which collection are you trying to index? Is it localDocs or books?
You can also try running through the steps of exercise 1 at the above link to
post data to techproducts; in general it should work end to end:
solr-7.7.0:$ bin/post -c techproducts example/exampledocs/*
Which documents do you
On Mon
I am on Solr 7.7. According to the official document:
https://lucene.apache.org/solr/guide/7_7/solr-tutorial.html
it is mentioned that the Post Tool can index a directory of files and can
handle HTML, PDF, and Office formats like Word; however, no working example
command is given.
./bin/post -c
post.jar was removed in Solr 5, I think. There are ways to index your files:
you can use the post tool
https://lucene.apache.org/solr/guide/8_1/post-tool.html, or you can try Tika
to extract text from documents, or you can use curl
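The curl route goes through the extracting handler; something like this (id,
collection, and file path are placeholders):

curl "http://localhost:8983/solr/mycollection/update/extract?literal.id=doc1&commit=true" \
  -F "myfile=@/path/to/document.pdf"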
ymore, it is just "post", not "post.jar".
Can you tell me the right command to index a folder? I am not able to find
that information in the documentation.
Thank you very much.
On Tue, Sep 17, 2019 at 9:42 AM Raymond Xie wrote:
> Thank you Paras for your reply, ye
Hi team,
Recently I created a cloud Solr 8.1.1 with ZooKeeper, similar to the cloud Solr
6.6.2 which is in use. All configurations and schema files are exactly
alike, but when I try to index the same documents Solr throws *cannot
change field "FIELD_NAME" from index options=DOCS_AND_FREQS_AND_POSITIONS
to inconsistent index options=DOCS* for a specific field which is of type
*string*. It is a required field
487 62144 6007
>> -/+ buffers/cache: 9308 6639
>> Swap:0 0 0
>>
>>
>> Thanks & Regards,
>> Akreeti Agarwal
>>
Thanks & Regards,
Akreeti Agarwal
-Original Message-
From: Akreeti Agarwal
Sent: Wednesday, August 28, 2019 2:45 PM
To: solr-user@lucene.apache.org
Subject: RE: Index fetch failed
Yes, I am using solr-5.5.5.
This error is intermittent. I don't think there is any issue with master
connection limits. This error is accompanied by this
generation: 1558637
java.nio.file.NoSuchFileException:
/solrm-efs/solr-m/server/solr/sitecore_web_index/data/index/_12i9p_1.liv
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
        at sun.nio.fs.UnixException.rethrowAsIOException
This looks like ample memory to get the index chunk.
Also, I looked at the IndexFetcher code. I remember you are using Solr 5.5.5,
and the only reason, in my view, this would happen is when the index chunk is
not downloaded, as can also be seen in the error (Downloaded 0!=123), which
clearly states
7.8G 0 7.8G 0% /dev/shm
Thanks & Regards,
Akreeti Agarwal
-Original Message-
From: Atita Arora
Sent: Wednesday, August 28, 2019 11:15 AM
To: solr-user@lucene.apache.org
Subject: Re: Index fetch failed
Hi,
Do you have enough memory free for the index chunk to be fetched/downloaded
on the slave node?
Hello Everyone,
I am getting this error continuously on Solr slave, can anyone tell me the
solution for this:
642141666 ERROR (indexFetcher-72-thread-1) [ x:sitecore_web_index]
o.a.s.h.ReplicationHandler Index fetch failed
:org.apache.solr.common.SolrException: Unable to download
it to work properly.
This is my installation that I made public provisionally without a password:
http://5.39.2.59:8987/solr/#/
(I changed port because the default one was busy)
I believe that the index is not created; should it be created
automatically, or did I do something wrong?
if I run
Do you have some more information on index and size?
Do you have to store everything in the index? Can you store some data (blobs
etc.) outside?
I think you are generally right with your solution, but also be aware that it
is sometimes cheaper to have several servers instead of keeping engineer
SOLR 8.1.1 index on pdate field included in search results
Hi Shawn,
>The DatePointField class defaults to docValues="true" and
>useDocValuesAsStored="true". Unless those parameters are changed, if the
>field is defined for a document, it will typically be in s
ppearing in the results; it's the index IDX_ExpirationDate that I don't want
in the results.
So you are saying that I should add docValues="false" or
useDocValuesAsStored="false" to the indexed-but-not-stored field?
I have other IDX_ fields defined that are not pdate
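For reference, the field definition being discussed would look something like
this (assuming the stock pdate field type):

<field name="IDX_ExpirationDate" type="pdate" indexed="true" stored="false"
       useDocValuesAsStored="false"/>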
On 8/5/2019 10:37 AM, Hodder, Rick wrote:
ExpirationDate is supposed to be there, but IDX_ExpirationDate should not. I
know that I can probably keep using date, but it is deprecated, and part of the
reason for upgrading to 8.1.1 is to use the latest non-deprecated stuff ;-)
The
I am migrating from SOLR 4.10.2 to 8.1.1. For some reason, in the 8.1.1 core, a
pdate index named IDX_ExpirationDate is appearing as a field in the search
results documents.
I have several other indexes that are defined and (correctly) do not appear in
the results. But the index I am having
field for routing. And at the time of indexing we check whether the item is
new and set the routing field to "new", or, if the item is older than some
time period, we set the value to "old". And we will have one category routed
alias, routedCollection, and there will be 2 collections, old and new.
If we index a new item, the router chooses the new collection and the item is
inserted into it. After some period we reindex the item and decide that it is
old, so we set the routing field to "old". The router decides to update
(insert) the item to
Thanks Shawn for the detailed information.
Hi,
I am using Solr 8.0.
In my .NET application I index into Solr using solrconnection.Post().
Initially, the Solr UI displays 32 GB of swap space; once it reaches 32 GB, it
grows up to 64 GB and then indexing stops with a System.OutOfMemory exception.
Can anybody help me with this?
My
OK, then let’s see the indexing code. Make sure that you:
1> don’t commit after every batch
2> never, never, never optimize.
BTW, you do not want to turn off commits entirely; there are some internal data
structures that grow between commits. So I might do something like specify
commitWithin on my
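For example, in SolrJ, commitWithin is an argument on add() (the 30 seconds
here is illustrative; `client` and `batch` are assumed to exist already):

client.add(batch, 30_000);  // ask Solr to commit within 30s; no client.commit() per batch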
I have tested the query separately; actually, executing the query is pretty
fast - it only took a few minutes to go through all the results, including
converting the Solr documents to Java objects. So I believe the slowness is on
the persistence end. BTW, I am using a Linux system.
On 6/30/2019 2:08 PM, derrick cui wrote:
Good point Erick, I will try it today, but I have already used cursorMark in
my query for deep pagination.
Also, I noticed that my CPU usage is pretty high: 8 cores, usage over 700%. I
am not sure whether an SSD disk would help
That depends on