data load without
running into this error?
I found that the 'batchSize=-1' parameter needs to be specified in the datasource
for MySQL; is there a way to specify it for other databases as well?
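For context, a sketch of how this looks in a DIH data-config (driver class names and URLs below are illustrative): in JdbcDataSource, batchSize is passed to JDBC's setFetchSize(); the special value -1 is translated to Integer.MIN_VALUE, which is the MySQL driver's signal to stream rows instead of buffering the whole result set. Most other drivers honor a positive fetch size directly:

```xml
<!-- MySQL: batchSize="-1" becomes fetchSize=Integer.MIN_VALUE (row streaming) -->
<dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://dbhost/mydb" batchSize="-1"/>

<!-- Oracle and most others: a positive batchSize is used as the JDBC fetch size -->
<dataSource type="JdbcDataSource" driver="oracle.jdbc.OracleDriver"
            url="jdbc:oracle:thin:@dbhost:1521:ORCL" batchSize="500"/>
```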
Thanks and Regards,
Srinivas Kashyap
DISCLAIMER: E-mails and attachments from Bamboo Rose, Inc
[
{
"TECHSPEC.REQUEST_NO": "HQ22 ",
"TECH_SPEC_ID": "HQ22 ",
"DOCUMENT_TYPE": "TECHSPEC",
"TECHSPEC.OWNER": "SHOP ",
The import process is running unaffected, i.e. data is being indexed into
Solr with the same datasource configuration in the context XML.
Thanks and Regards,
Srinivas Kashyap
Senior Software Engineer
"GURUDAS HERITAGE"
'Block A' , No 59/2, 2nd Floor, 100 Feet Ring Road,
Kadirenahalli, Padmanabhanagar
Hello,
After starting the Solr application and running full imports, we run into the
error below after a while:
null:org.apache.catalina.connector.ClientAbortException: java.io.IOException: Broken pipe
        at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:393)
Hello,
In one of our Tomcat-based Solr 5.2.1 deployment environments, we are
experiencing the error below, which keeps recurring.
When the app is restarted, the error doesn't show up again for some time. The max
allowed connections in the Tomcat context XML and the JVM memory for Solr are sufficient.
documents.
Below are some more config details in solrconfig.xml
20
200
false
2
Thanks and Regards,
Srinivas Kashyap
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: 17 May 2017 08:51 PM
To: solr-user <solr-user@lucene.apache.org>
Subject: Re: Perfo
me.
Thanks and Regards,
Srinivas Kashyap
Is there a way we can poll the import status of the index server in SolrJ, so that
we can refrain from sending another ad hoc import command while the import is still
running?
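For what it's worth, the DIH handler exposes a status command that can be polled over plain HTTP (or with a SolrJ GET to the same path); the core name below is a placeholder:

```text
GET http://localhost:8983/solr/<core>/dataimport?command=status

"status":"busy"  -> an import is still running; hold off on new import commands
"status":"idle"  -> the previous import has finished
```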
Thanks and Regards,
Srinivas Kashyap
Thanks Mikhail,
Can you please explain? How can it be done in SolrJ?
Thanks and Regards,
Srinivas Kashyap
..., 0,
"115", 0, "1000017", 0, "12", 0, "120", 0,
"122", 0, "123", 0, "124", 0, "125", 0,
"126", 0, "127",
Thanks and Regards,
Srinivas Kashyap
Hi,
I have a standalone Solr index server 5.2.1 and a core with 15 fields (all
indexed and stored).
Through DIH I'm indexing the data (around 65 million records). The index process
took 6 hours to complete. But after completion, when I checked through the Solr
admin query console (*:*),
native way I can find them?
Thanks and Regards,
Srinivas Kashyap
import was
completed("onImportEnd" eventlistener).
Thanks and Regards,
Srinivas Kashyap
-Original Message-
From: Emir Arnautović [mailto:emir.arnauto...@sematext.com]
Sent: 31 January 2018 04:14 PM
To: solr-user@lucene.apache.org
Subject: Re: OnImportEnd EventListener
Hi S
of server
resources.
Is there a way to reduce the number of logins and logouts and keep a
persistent DB connection from Solr?
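One hedged option: instead of letting DIH open its own connection on each run, point JdbcDataSource at a container-managed, pooled JNDI datasource (the JNDI name below is illustrative):

```xml
<!-- DIH borrows connections from the container's pool instead of logging in itself -->
<dataSource type="JdbcDataSource" jndiName="java:comp/env/jdbc/tss"/>
```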
Thanks and Regards,
Srinivas Kashyap
the JVM memory go out of memory.
Is there a way to specify, in the child entity query, to pull only the records
related to the parent entity in full-import mode?
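A common shape for this (a sketch; table and column names are placeholders) is to reference the parent row's key in the child entity query, so each child query pulls only the rows belonging to that parent:

```xml
<entity name="parent" query="SELECT ID, NAME FROM PARENT">
  <!-- one query per parent row; pulls only the children of that parent -->
  <entity name="child"
          query="SELECT DETAIL FROM CHILD WHERE PARENT_ID = '${parent.ID}'"/>
</entity>
```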
Thanks and Regards,
Srinivas Kashyap
Hello,
Is there any API to convert Solr query response to JDBC Rowset?
Thanks and Regards,
Srinivas Kashyap
-interface.html
Regards,
Alex
On Wed, Jul 18, 2018, 2:15 AM Srinivas Kashyap, <
srini...@tradestonesoftware.com> wrote:
> Hello,
>
> Is there any API to convert Solr query response to JDBC Rowset?
>
> Thanks and Regards,
> Srinivas Kashyap
>
>
is much appreciated.
Also, our application was running on Tomcat/WebSphere; will it be any different
in Jetty?
Thanks and Regards,
Srinivas Kashyap
Context initContext = new InitialContext();
DataSource ds = null;
Context webContext = (Context) initContext.lookup("java:/comp/env");
ds = (DataSource) webContext.lookup("jdbc/tssindex");
How do we fetch it in Jetty?
Thanks in advance,
Srinivas kashyap
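In Jetty, the equivalent is usually declared in a jetty-env.xml (or WEB-INF/jetty-env.xml) for the webapp, after which the same java:comp/env lookup code works unchanged. A sketch, with the datasource class, URL, and credentials as placeholders:

```xml
<Configure class="org.eclipse.jetty.webapp.WebAppContext">
  <New id="tssindex" class="org.eclipse.jetty.plus.jndi.Resource">
    <Arg/>                      <!-- scope argument; empty here for brevity -->
    <Arg>jdbc/tssindex</Arg>    <!-- bound under java:comp/env/jdbc/tssindex -->
    <Arg>
      <New class="oracle.jdbc.pool.OracleDataSource">
        <Set name="URL">jdbc:oracle:thin:@dbhost:1521:ORCL</Set>
        <Set name="user">user</Set>
        <Set name="password">secret</Set>
      </New>
    </Arg>
  </New>
</Configure>
```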
.
Thanks and Regards,
Srinivas Kashyap
/2018 2:31 AM, Srinivas Kashyap wrote:
> I have a solr core with some 20 fields in it.(all are stored and indexed).
> For an environment, the number of documents are around 0.29 million. When I
> run the full import through DIH, indexing is completing successfully. But, it
> is occupy
Is there a way I can go and check which document is consuming more memory? Put
another way, can I sort the index based on size?
Thanks and Regards,
Srinivas Kashyap
Kafka to pull data
from the DB and push it into Solr?
If there are any other alternatives, please suggest them.
Thanks and Regards,
Srinivas Kashyap
into memory. For Dev and QA environments, the above memory config is
sufficient, but when we move to production we have to increase it to
around 16 GB of RAM and a 12 GB JVM heap.
Are there any configurations to limit the memory usage?
Thanks and Regards,
Srinivas Kashyap
and Regards,
Srinivas Kashyap
-Original Message-
From: Shawn Heisey
Sent: 09 April 2019 01:27 PM
To: solr-user@lucene.apache.org
Subject: Re: Sql entity processor sortedmapbackedcache out of memory issue
On 4/8/2019 11:47 PM, Srinivas
the child entities which have some more child
entities in them until I'm done with all the children.
Thanks and Regards,
Srinivas Kashyap
Hi Alexandre,
Yes, the whole tree gets mapped to and returned as a single flat document. When
you search, it should return all the matching documents if the query matches that
nested field.
Thanks and Regards,
Srinivas Kashyap
-Original Message-
From: Alexandre Rafalovitch
Sent: 30 April
s more than 20.
How do I write a filter query for this?
P.S.: I should fetch only records whose Salt Composition percentage is more
than 20, and not other percentages.
Thanks and Regards,
Srinivas Kashyap
Hi Shawn, Mikhail
Any suggestions/pointers for using the zipper join algorithm? I'm facing the error below.
Thanks and Regards,
Srinivas Kashyap
**
From: Srinivas Kashyap
Sent: 12 April 2019 03:10 PM
To: solr-user
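For reference, a sketch of the zipper join in DIH (table and column names are placeholders): both the parent and child queries must return rows ordered by the join key, because zipper streams the two sorted result sets side by side instead of caching the child entity in memory:

```xml
<entity name="parent" query="SELECT ID, NAME FROM PARENT ORDER BY ID">
  <entity name="child" join="zipper"
          query="SELECT PARENT_ID, DETAIL FROM CHILD ORDER BY PARENT_ID"
          where="PARENT_ID=parent.ID"/>
</entity>
```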
hosting.
Am I missing some configuration? Please let me know.
Thanks and Regards,
Srinivas Kashyap
architecture given the complexity of
your needs
Can you please give pointers to look into? We are using DIH in production and
facing a few issues, and we need to start phasing it out.
Thanks and Regards,
Srinivas Kashyap
-Original Message-
From: Alexandre Rafalovitch
Sent: 31 July
is working fine with
AWS hosting. Really baffled.
Thanks and Regards,
Srinivas Kashyap
-Original Message-
From: Erick Erickson
Sent: 31 July 2019 08:00 PM
To: solr-user@lucene.apache.org
Subject: Re: Dataimport problem
This code is a little old, but should give you a place to start:
https
) [?:1.8.0_242]
Thanks and Regards,
Srinivas Kashyap
a thread dump my guess is
you’ll see the number of threads increase over time. If that’s the case, and if
you have no custom code running, we need to see the thread dump.
Best,
Erick
> On Mar 6, 2020, at 05:54, Srinivas Kashyap
> wrote:
>
> Hi All,
>
> I have recently upgraded solr
Hi all,
I have a date field in my schema, and I'm trying to facet on that field and
getting the error below.
I'm copying this field to a text field (copyField) as well.
Error:
Can't facet on a PointField without docValues
I tried adding it like below, and after the changes I did a full reindex.
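For context, that error means the field uses a point type (pdate here) without docValues, and faceting on point fields requires docValues. A sketch of the schema change (the field and type names are placeholders); a full reindex is required after adding docValues:

```xml
<field name="my_date" type="pdate" indexed="true" stored="true" docValues="true"/>
```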
,
Srinivas
Hello,
I'm trying to upgrade to Solr 8.4.1 and am facing the error below on startup;
my cores are not listed in the Solr admin screen. I need your help.
2020-02-03 12:12:35.622 ERROR (coreContainerWorkExecutor-2-thread-1) [ ]
o.a.s.c.CoreContainer Error waiting for SolrCore to be loaded on
February 2020 02:24
To: solr-user@lucene.apache.org
Subject: Re: Solr 8.4.1 error
On 2/3/2020 5:16 AM, Srinivas Kashyap wrote:
> I'm trying to upgrade to solr 8.4.1 and facing below error while start up and
> my cores are not being listed in solr admin screen. I need your help.
Sorry for the interruption. This error was due to a wrong context path mentioned
in solr-jetty-context.xml, while jetty.xml was referring to /solr, so the index was locked.
Thanks,
Srinivas
-Original Message-
From: Srinivas Kashyap
Sent: 04 February 2020 11:04
To: solr-user
if (client != null) {
    try {
        client.close();
    } catch (IOException e) {
        // nothing more to do if close fails; just log it
        e.printStackTrace();
    }
}
Thanks and Regards,
Srinivas Kashyap
From: Erick Erickson
(1106)
java.lang.Thread.sleep(Native Method)
org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
java.lang.Thread.run(Thread.java:748)
Thanks and Regards,
Srinivas Kashyap
-Original Message-
From: Erick Erickson
Sent: 06 March 2020 21:34
To: solr-user
Hi,
We were on Solr 5.2.1, using TikaEntityProcessor to index PDF documents through
DIH, and it was working fine. The jars were tika-core-1.4.jar and
tika-parsers-1.4.jar.
Below is my schema.xml (P.S. all field types have been defined), and my tika-data-config.xml:
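The schema and tika-data-config were elided from the archive; a typical shape for a DIH + TikaEntityProcessor setup looks roughly like this (paths and field names are placeholders, not the original config):

```xml
<dataConfig>
  <dataSource type="BinFileDataSource"/>
  <document>
    <entity name="files" processor="FileListEntityProcessor"
            baseDir="/data/pdfs" fileName=".*\.pdf" recursive="true"
            rootEntity="false" dataSource="null">
      <!-- each file found above is parsed by Tika into plain text -->
      <entity name="pdf" processor="TikaEntityProcessor"
              url="${files.fileAbsolutePath}" format="text">
        <field column="text" name="content"/>
      </entity>
    </entity>
  </document>
</dataConfig>
```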
Can you share details of what performance degraded?
Thanks,
srinivas
From: Natarajan, Rajeswari
Sent: 23 April 2020 12:41
To: solr-user@lucene.apache.org
Subject: Re: How upgrade to Solr 8 impact performance
With the same hardware and configuration we also saw performance degradation
entity processor sortedmapbackedcache out of memory issue
On 4/8/2019 11:47 PM, Srinivas Kashyap wrote:
> I'm using DIH to index the data and the structure of the DIH is like below
> for solr core:
>
>
> 16 child entities
>
>
> During indexing, since the number of request
String content = textHandler.toString();
up.addField("_text_",content);
UpdateRequest req = new UpdateRequest();
req.add(up);
req.setBasicAuthCredentials("solrAdmin", password);
UpdateResponse ur = req.process(solr,"pri
from PDF and pushes into solr?
Thanks,
Srinivas Kashyap
-Original Message-
From: Alexandre Rafalovitch
Sent: 24 August 2020 20:54
To: solr-user
Subject: Re: PDF extraction using Tika
The issue seems to be more with a specific file and at the level way below
Solr's or possibly even
319)
at
org.apache.pdfbox.text.PDFTextStripper.writeText(PDFTextStripper.java:266)
at
org.apache.tika.parser.pdf.PDF2XHTML.process(PDF2XHTML.java:117)
... 15 more
Can you please suggest how to extract PDFs from a Linux-based file system?
Thanks,
Srinivas Kashyap
leneck, but the above test
will tell you
where to go to try to speed things up.
Best,
Erick
> On May 22, 2020, at 12:39 PM, Srinivas Kashyap
> mailto:srini...@bamboorose.com.INVALID>>
> wrote:
>
> Hi All,
>
> We are runnnig solr 8.4.1. We have a database table which has mo
Hi All,
We are running Solr 8.4.1. We have a database table with more than 100
million records. Till now we have used DIH to do full imports on the tables,
but for this table a full import via DIH takes more than 3-4
days to complete and also consumes a fair bit of
Hello,
We are using HttpSolrClient(solr-solrj-8.4.1.jar) in our app along with
required jar(httpClient-4.5.6.jar). Before that we upgraded these jars from
(solr-solrj-5.2.1.jar) and (httpClient-4.4.1.jar).
After we upgraded, we are seeing a lot of the connection-evictor statements
below in the log
"PHY_KEY2":"HQ010399"}]
}},
{
"groupValue":"HQ010377",
"doclist":{"numFound":8,"start":0,"docs":[
{
"PHY_KEY2":"HQ010377"}]
}},
Hello,
We are on Solr 8.4.1, in standalone server mode. We have a core with
497,767,038 records indexed. It took around 32 hours to load the data through DIH.
The disk occupancy is shown below:
82G /var/solr/data//data/index
When I restarted the Solr instance and went to this core to query on
https://lucene.apache.org/solr/guide/8_4/pagination-of-results.html#using-cursors
Both work to sort large result sets without consuming the whole memory
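A hedged sketch of the cursor flow described on that page (core and field names are placeholders; the sort must include the uniqueKey field as a tiebreaker):

```text
GET /solr/<core>/select?q=*:*&sort=id+asc&rows=500&cursorMark=*
  -> response includes "nextCursorMark": "<token>"
GET /solr/<core>/select?q=*:*&sort=id+asc&rows=500&cursorMark=<token>
  -> repeat until nextCursorMark equals the cursorMark you sent
```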
> Am 05.06.2020 um 08:18 schrieb Srinivas Kashyap
> mailto:
many shards/replica
do we require for this core considering we allow it to grow as users transact.
The updates to this core are not through DIH delta import; rather, we are using
SolrJ to push the changes.
Thanks,
Srinivas
On 6/4/2020 9:51 PM, Srinivas Kashyap wrote:
&g
I want to retrieve. Will this
be faster?
Related to earlier question:
We are using version 8.4.1.
All the fields I'm sorting on are of string type (a modify-ts date) with
indexed=true, stored=true.
Thanks,
Srinivas
On 05-Jun-2020 9:50 pm, Shawn Heisey wrote:
On 6/5/2020 12:17 AM, Srini
Hello,
I have schema and field definition as shown below:
The TRACK_ID field contains numeric values.
When I sort on track_id (TRACK_ID desc), it is not working properly.
I have the below values in TRACK_ID:
Doc1: "84806"
Doc2: "124561"
Ideally, when I use the sort command, the query result should
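For context: on a string field type the sort is lexicographic, so "84806" sorts above "124561" because '8' > '1' at the first character. A numeric sort needs a numeric (point) field type; a sketch of the change (a full reindex is required afterwards):

```xml
<field name="TRACK_ID" type="plong" indexed="true" stored="true" docValues="true"/>
```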
ltivalued-fields-in-solr#37006655>)
Thanks,
Dwane
____
From: Srinivas Kashyap
mailto:srini...@bamboorose.com.INVALID>>
Sent: Thursday, 29 October 2020 3:49 PM
To: solr-user@lucene.apache.org<mailto:solr-user@lucene.apache.org>
mailto:solr-user@luc
Hello,
Say I have a schema field which is multivalued. Is there a way to maintain
distinct values for that field even though I keep adding duplicate values
through atomic updates via SolrJ?
Is there some property setting to keep only unique values in a multivalued
field?
Thanks,
Srinivas
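One hedged option on Solr 7+/8: the add-distinct atomic-update modifier, which appends a value only if it is not already present in the multivalued field. A JSON update sketch (the id and field name are placeholders); in SolrJ the same thing is expressed by setting the field value to a Map with an "add-distinct" key:

```json
{ "id": "doc1",
  "tags": { "add-distinct": ["red", "blue"] } }
```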
ost.
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/ (my blog)
>
> > On Oct 29, 2020, at 3:16 AM, Srinivas Kashyap
> > mailto:srini...@bamboorose.c
Hi Alessandro,
I'm trying to retrieve party ids 'abc', 'def', 'ghi' in the same order I pass
them to the filter query. Is this possible?
The field I want to sort on is not in the Solr schema for the party
core; it lives outside Solr. I want to be able to fetch the
QueryResponse (SolrJ)
Hello,
I have a scenario where I'm using filter query to fetch the results.
Example: Filter query(fq) - PARTY_ID:(abc OR def OR ghi)
Now I'm getting the query response through SolrJ in a different order. Is there
a way I can get the results in the same order as specified in the filter query?
Tried dismax
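One common hedged workaround: keep the filter query for matching, but weight each id in the main query with a per-term boost so relevance order mirrors the requested order (the boost values below are illustrative); alternatively, reorder the results client-side in SolrJ:

```text
q=PARTY_ID:abc^3 OR PARTY_ID:def^2 OR PARTY_ID:ghi^1
sort=score desc
```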
Hi,
Our datasource is an Oracle DB, and we are pulling data into Solr through JDBC (DIH).
I have below entry in jetty.xml
jdbc/tss
thin
:1521:ORCL
XXX
XXX
And we have