OK here are the scenarios I tried.
*Scenario 1:*
dih.xml (aka data-config.xml):
<entity dataSource="solr" name="listing" query="..."
        transformer="DateFormatTransformer">
  <field name="publish_date" column="publish_date"
         xpath="/RDF/item/date"/>
</entity>
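For what it's worth, DateFormatTransformer only acts on a field that also carries a dateTimeFormat attribute; a sketch of the full entity under that assumption (the format string is a guess, and query="..." is left elided as in the original):

```xml
<dataConfig>
  <document>
    <entity name="listing" dataSource="solr" query="..."
            transformer="DateFormatTransformer">
      <!-- dateTimeFormat tells the transformer how to parse the raw value -->
      <field name="publish_date" column="publish_date"
             xpath="/RDF/item/date"
             dateTimeFormat="yyyy-MM-dd'T'HH:mm:ss"/>
    </entity>
  </document>
</dataConfig>
```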
Hi, New Relic is good enough to monitor Solr.
Are you using the Solarium or SolrJ client to connect to Solr?
We have used Solarium and are able to monitor each call and gather most of
the info.
Thanks for your feedback. Upgrading SolrJ is going to be a bit difficult
(because it's tied in with a framework which is only partially owned by
us), so we'll just have to find another way to deal with it.
Karel
On Fri, Dec 13, 2013 at 7:28 PM, Shawn Heisey s...@elyograg.org wrote:
Hi All,
we have deployed a new Solr 4.3 instance (2 nodes with SolrCloud) on our
production environment, but this morning one node went into OutOfMemory
status and we have noticed that the JVM uses a lot of Old Gen space
during the normal lifecycle.
What are the items that improve this high
Yonik Seeley wrote:
On Wed, Dec 28, 2011 at 5:47 AM, ku3ia <demesg@> wrote:
So, based on p.2) and on my previous research, I conclude that the more
documents I want to retrieve, the slower the search is, and the main problem
is the cycle in the writeDocs method. Am I right? Can you advise
Hi,
we get an OutOfMemoryError in RamUsageEstimator and are a little bit
confused about the error.
We are using Solr 4.6 and are confused about the Lucene42DocValuesProducer.
We checked the current Solr code and found that Lucene42NormsFormat will be
returned as the norms format in Lucene46Codec, and so the
Hi,
How do I boost documents that contain all search terms in several of their fields?
Below you can find a simplified example:
The query with mm should match:
q=beautiful Christmas tree&mm=2&qf=title^12 description^2
There are two offers that match the query:
offer1 {title:Christmas tree,
How to index pdf, doc files from the browser?
This query is used for indexing:
curl
"http://localhost:8080/solr/document/update/extract?literal.id=12&commit=true"
-F "myfile=@C:\solr\document\src\test1\Coding.pdf"
but I need to index from the browser, as we do for delete:
Hi,
(13/12/16 19:46), Nutan wrote:
how to index pdf,doc files from browser?
I think you can index from browser.
If you said that
this query is used for indexing :
curl
"http://localhost:8080/solr/document/update/extract?literal.id=12&commit=true"
On 16 December 2013 16:30, Koji Sekiguchi k...@r.email.ne.jp wrote:
Hi,
(13/12/16 19:46), Nutan wrote:
how to index pdf,doc files from browser?
I think you can index from browser.
If you said that
this query is used for indexing :
curl
I tried to install some versions of Solr, from Solr 4 to Solr 4.6 (the latest
release) in my production environment. The configuration of the machine is:
- Debian 7 64-bit
- Java OpenJDK 7
- CPU 8 cores
- 16 GB RAM
but I notice that the machine load grows if I install version 4.4 or
above. Why?
ok thanks,
but is there any other way where -F is not used?
I am creating an API in VC++ and to connect to Solr I am using libcurl; for
this to work the string is the URL,
eg:
curl_easy_setopt(curl, CURLOPT_URL,
"http://localhost:8080/solr/document/select?q=*%3A*&wt=json&indent=true&fl=id");
so for
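What curl's -F flag actually sends is a multipart/form-data body, so any HTTP client that can POST raw bytes can replace it. A rough sketch of that wire format in Python (field name and file name taken from the thread; a real upload would read the PDF bytes from disk):

```python
import uuid

def build_multipart(field_name, filename, payload, boundary=None):
    """Return (content_type, body) for a single-file multipart/form-data upload."""
    boundary = boundary or uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field_name}"; filename="{filename}"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    return f"multipart/form-data; boundary={boundary}", head + payload + tail

content_type, body = build_multipart("myfile", "Coding.pdf", b"%PDF-1.4 ...")
# To actually send it (needs a running Solr):
#   import urllib.request
#   req = urllib.request.Request(
#       "http://localhost:8080/solr/document/update/extract?literal.id=12&commit=true",
#       data=body, headers={"Content-Type": content_type})
#   urllib.request.urlopen(req)
```

In libcurl, this same body is what curl_formadd()/CURLOPT_HTTPPOST builds for you, so hand-assembly is only needed when neither -F nor the form API is available.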
Thanks for the reply.
The error shows it is not able to execute the query.
In my case, if you see my config file, I am joining the entities between two
different data sources, i.e.:
Entity1 - Datasource1
--Subentity - DataSource2
My question is: can we join the entities in two different
hi,
i don't know if this can be done,
but to avoid this you can create a new table with the results and index
that new table :)
you can then delete the table as well after indexing ...
tc cheers
karan
On Mon, Dec 16, 2013 at 5:42 PM, Lokn nlokesh...@gmail.com wrote:
Thanks for the reply.
Hi Anca,
Can you try following URL?
q=beautiful Christmas tree&mm=2&qf=title^12
description^2&defType=dismax&bf=map(query($qq),0,0,0,100.0)&qq={!dismax
qf='title description' mm=100%}beautiful Christmas tree
Modified from Jan's solution. See his original post [1] to a similar discussion.
[1]
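The same request, assembled programmatically so the parameter boundaries survive plain-text mail (host and core name are placeholders; the parameter values are copied from the thread):

```python
from urllib.parse import urlencode

# mm=2 governs the main dismax query; the bf boost fires only when ALL
# terms match, because the $qq subquery demands mm=100%.
params = {
    "q": "beautiful Christmas tree",
    "defType": "dismax",
    "mm": "2",
    "qf": "title^12 description^2",
    "bf": "map(query($qq),0,0,0,100.0)",
    "qq": "{!dismax qf='title description' mm=100%}beautiful Christmas tree",
}
query_string = urlencode(params)
url = "http://localhost:8983/solr/collection1/select?" + query_string
```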
One of the fields used in unique key generation is in date format and the
others are in string format. So when we remove this date field, it works fine.
--
View this message in context:
http://lucene.472066.n3.nabble.com/uniquekey-generation-in-solr-tp4106766p4106919.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi,
Is there a way to find out whether the External File Fields mentioned in
schema.xml are being used, i.e. whether Solr reads the values of those
external fields?
I am not sure how to use an External File Field: can I request the value of an
External File Field in the field list, or can I use it in my
Hi,
You can request the value of that field via fl=*,field(adjlocality)
See more about it here:
https://cwiki.apache.org/confluence/display/solr/Working+with+External+Files+and+Processes
Actually you can search on it too with the frange query parser: {!frange l=0
u=0}field(adjlocality)
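Putting both tips together into one request (host and core are placeholders; adjlocality is the field from the thread):

```python
from urllib.parse import urlencode

params = {
    "q": "*:*",
    "fl": "*,field(adjlocality)",                 # return the external value
    "fq": "{!frange l=0 u=0}field(adjlocality)",  # filter on it
}
url = "http://localhost:8983/solr/collection1/select?" + urlencode(params)
```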
On
Hi Chris,
The easiest approach is to just create a new core on the new machine that
references the collection and shard you want to migrate. For example, say you
split shard1 of a collection named cloud, which results in having: shard1_0
and shard1_1. Now let's say you want to migrate shard
Tim,
Can you explain how the replication snapshot is done using the coreAdminAPI?
--
Yago Riveiro
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
On Monday, December 16, 2013 at 4:23 PM, Tim Potter wrote:
Hi Chris,
The easiest approach is to just create a new core on the new
On 16 December 2013 16:50, Nutan nutanshinde1...@gmail.com wrote:
ok thanks,
but is there any other way where -F is not used?
I am creating a api in vc++ and to link to solr i am using libcurl,for this
to work the string is the url,
eg:
curl_easy_setopt(curl,
On 12/16/2013 2:34 AM, Torben Greulich wrote:
we get a OutOfMemoryError in RamUsageEstimator and are a little bit
confused about the error.
We are using solr 4.6 and are confused about the Lucene42DocValuesProducer.
We checked current solr code and found that Lucene42NormsFormat will be
Any ideas?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Poor-performance-on-distributed-search-tp3590028p4106968.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi Yago,
When you create a new core (via API or Web UI), you specify the collection name
and shard id, in my example cloud and shard1_0. When the core initializes
in SolrCloud mode, it recognizes that the collection exists and adds itself as
a replica to the shard. Then the main replica
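The "create a new core that references the collection and shard" step maps to a CoreAdmin CREATE call; a sketch (host and the new core's name are placeholders; collection and shard follow the shard1_0 example):

```python
from urllib.parse import urlencode

params = {
    "action": "CREATE",
    "name": "cloud_shard1_0_replica2",  # placeholder name for the new core
    "collection": "cloud",
    "shard": "shard1_0",
}
url = "http://newhost:8983/solr/admin/cores?" + urlencode(params)
```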
Also, I'm aware that there are two typos in my schema.xml attached. I forgot
to remove the linebreak \ character from the two splitOnCaseChange sections.
This typo does not exist in the official schema.xml that Solr is using.
-Original Message-
From: Trevor Handley
Hi all,
i am playing with the PostingsSolrHighlighter. I'm running Solr 4.6.0
and my configuration is from here:
https://lucene.apache.org/solr/4_6_0/solr-core/org/apache/solr/highlight/PostingsSolrHighlighter.html
Search query and result (not working):
http://pastebin.com/13Uan0ZF
Schema
There have been some recent refactorings in this area of the code. The
following class name should work:
org.apache.solr.spelling.suggest.tst.TSTLookupFactory
Cheers,
Timothy Potter
Sr. Software Engineer, LucidWorks
www.lucidworks.com
From: Trevor
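For anyone landing here with the same error, the corrected class name goes into the suggester's lookupImpl in solrconfig.xml; a sketch with assumed component and field names:

```xml
<searchComponent name="suggest" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">suggest</str>
    <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
    <str name="lookupImpl">org.apache.solr.spelling.suggest.tst.TSTLookupFactory</str>
    <str name="field">suggest_field</str>
  </lst>
</searchComponent>
```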
Brilliant, thanks Timothy!
Changing the solrconfig.xml lookupImpl (not className) to the
org.apache.solr.spelling.suggest.tst.TSTLookupFactory fixed this issue for me.
Thanks, Trevor
-Original Message-
From: Tim Potter [mailto:tim.pot...@lucidworks.com]
Sent: Monday, December 16, 2013
Awesome ... I'll update the Wiki to reflect the new class names.
Timothy Potter
Sr. Software Engineer, LucidWorks
www.lucidworks.com
From: Trevor Handley hand...@civicplus.com
Sent: Monday, December 16, 2013 11:44 AM
To: solr-user@lucene.apache.org
Hi Shawn,
thanks for your reply. But we don't think that this is really an OOM error,
because we already increased the heap to 64 GB and the OOM occurs at a usage
of 30-40 GB. So Solr would have to allocate more than 20 GB at once; this
sounds a little bit too much.
Furthermore we found
On Fri, Nov 22, 2013 at 6:18 PM, Neil Ireson n.ire...@sheffield.ac.ukwrote:
If the "child of" query matches both parent and child docs, it returns the
child documents but a spurious numFound.
follow up https://issues.apache.org/jira/browse/SOLR-5553
--
Sincerely yours
Mikhail Khludnev
I have a multi-core master/slave configuration that's showing
unexpected replication behavior for one core; other cores are
replicating without problems.
The master is running Solr 4.1; one slave is running 4.1 under Tomcat,
and another (for testing) is running 4.6 under Jetty. These are
Hello.
We are planning to offer search as an embedded functionality into
mobile/low-power devices.
The main requirements are:
- ability to index and search documents available on the mobile device,
- no need of internet access,
- lightweight, low footprint and fast
We are looking into various
1. Which platform are you looking at? Android, iOS, other?
If you are on Android, you can directly use Lucene to build an embedded
solution for search. Depending upon your need, that can offer a small
enough footprint. We've done some work around embedding Lucene for a
specific application on
Hi,
Glassfish 3.1.2.2
Solr 4.5
Zookeeper 3.4.5
We have set up a SolrCloud with 4 Solr nodes and 3 zookeeper instances.
I have 5 cores with 1 shard/4 replica setup on each of them.
One of our cores is very small, and it takes less than one minute to index.
We run full import on it every
Hi,
The following warning message is filling our application logs very rapidly.
This statement is printed every time the application talks with ZooKeeper.
If your index is that small then use clean=true and let
DataImportHandler clear all documents at the start of the import.
Don't bother with delta tables for such a small index.
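The suggested import is a single request to the DIH handler; a sketch of the URL (host, core, and handler path are placeholders):

```python
from urllib.parse import urlencode

# clean=true deletes all existing documents before the import starts.
params = {"command": "full-import", "clean": "true", "commit": "true"}
url = "http://localhost:8983/solr/mycore/dataimport?" + urlencode(params)
```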
On Tue, Dec 17, 2013 at 6:10 AM, kaustubh147 kaustubh.j...@gmail.com wrote:
Hi,
Glassfish 3.1.2.2
Solr 4.5
I'm sorry. I thought you wanted to parse a date stored as a string into
a java.util.Date. Clearly, you are trying to go the other way round.
There's nothing in DIH which will convert a MySQL date to a string in a
specific format. You will need to write a custom transformer either in
JavaScript or in
Hi,
I get a weird problem.
I am trying to create a core within Solr 4.6.
First, on my Solr web server (Tomcat), a log comes out [1]:
Then lots of Overseer info logs come out as [2], and then the creation
fails.
I have also noticed that there are a lot of qn entries in the Overseer on the
ZooKeeper:
[zk:
Hi,
In my project, we are doing a full index on a dedicated machine and the index
is copied to other search-serving machines. For this, we are copying the
data folder from the indexing machine to the serving machine manually. Now, we
want to use Solr's SWAP configuration to do this job. Looks like the
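For reference, SWAP is a CoreAdmin action that atomically exchanges two cores, e.g. a serving core and a freshly copied one; a sketch with placeholder host and core names:

```python
from urllib.parse import urlencode

params = {"action": "SWAP", "core": "live", "other": "rebuild"}
url = "http://localhost:8983/solr/admin/cores?" + urlencode(params)
```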
To me, the obvious way of doing this would be to CAST the DATETIME to
CHAR(n), or (probably better) use DATE_FORMAT().
On Tue, Dec 17, 2013 at 5:21 AM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
I'm sorry. I thought you wanted to parse a date stored as string into
a java.util.Date.
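Concretely, the DATE_FORMAT() suggestion goes straight into the DIH entity's SQL; a sketch with assumed table and column names:

```xml
<entity name="listing"
        query="SELECT id, DATE_FORMAT(publish_date, '%Y-%m-%d') AS publish_date_s
               FROM listings">
  <field column="publish_date_s" name="publish_date_s"/>
</entity>
```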
Hi