Did we have any progress with that? I'd still love an offline package.
Regards,
Alex.
P.S. I found the 'package' command in the menu and got all excited. It let me
execute it, but the resulting zip is on the server and I cannot download
it. I hope I did not offend the Infra gods so much as to get
Solr version : 4.0 (running with 9GB of RAM)
MySQL : 5.5
JDBC : mysql-connector-java-5.1.22-bin.jar
I am trying to run the full import for my catalog data, which is roughly
13 million products. The DIH ran smoothly for 18 hours, and processed
roughly 10 million records. But all of a sudden it
OK, OK,
I will try it again with dynamic fields. Maybe the problem has been
something else. All statements sound reasonable.
Even Lisheng's thoughts about the impact of too many fields on memory
consumption should not be the problem for a JVM with 32 GB of RAM and almost
no GC.
Please give me
Hi,
Our requirement is to have a separate schema for every language, which differs
in the field type definition for language-based analysis. If I have a
standard schema which differs only in the language-analysis part, which can
be inserted by any of the 3 methods in the schema.xml as mentioned in
All,
Looking into finding a solution for hotel searches based on the criteria below:
1. City/Hotel
2. Date Range
3. Persons
We have created documents which contain all the basic needed information,
inclusive of per-day rates. The document looks like the below
Why the fck is the FW
-Original Message-
From: Harshvardhan Ojha [mailto:harshvardhan.o...@makemytrip.com]
Sent: 08 January 2013 16:51
To: solr-user@lucene.apache.org
Subject: FW: How can i multiply documents after DIH?
All,
Looking into a finding solution for Hotel searches based on
Apologies folks, it was a mistake.
-Original Message-
From: Shubham Srivastava [mailto:shubham.srivast...@makemytrip.com]
Sent: 08 January 2013 16:58
To: solr-user@lucene.apache.org
Subject: RE: How can i multiply documents after DIH?
Why the fck is the FW
-Original Message-
Hi All,
Looking into finding a solution for hotel searches based on the criteria below:
1. City/Hotel
2. Date Range
3. Persons
We have created documents which contain all the basic needed information,
inclusive of per-day rates. The document looks like the below
Hi all,
I am in the process of migrating my application from Solr 3.6 to Solr 4. A
query that used to work is giving an error with Solr 4.
The query looks like:
q=*:*&fl=E_abc@@xyz
The error displayed on the admin page is:
can not use FieldCache on multivalued field: E_abc
The field printed in
On 8 January 2013 17:10, Harshvardhan Ojha
harshvardhan.o...@makemytrip.com wrote:
Hi All,
Looking into finding a solution for hotel searches based on the below
criteria
[...]
Didn't you just post this on a separate thread,
complete with some nonsensical follow-up from
a colleague of
Sorry about that; we spoiled that thread, so I posted my question in a fresh
thread.
The problem is indeed very simple.
I have Solr documents which have all the required fields (from the DB).
Say DOC1, DOC2, DOC3 ... DOCn.
Every document has a one-night tariff, and I have tariffs for 180 nights.
So a person can
Solr 4.0 or a nightly build? There's been a lot of work since 4.0, I'd be
curious if you see the same problem in a nightly build.
Erick
On Mon, Jan 7, 2013 at 7:29 PM, davers dboych...@improvementdirect.com wrote:
It is new to me... I am using the collections API to delete and recreate
This problem persists; I've filed an issue to track it:
https://issues.apache.org/jira/browse/SOLR-4285
-Original message-
From:Markus Jelsma markus.jel...@openindex.io
Sent: Mon 17-Dec-2012 10:49
To: solr-user@lucene.apache.org
Subject: RE: SolrCloud breaks distributed query
Did you look at a conversation thread from 12 Dec 2012 on this list? Just
go to the archives and search for 'hotel'. Hopefully that will give you
something to work with.
If you have any questions after that, come back with more specifics.
Regards,
Alex.
Personal blog:
If you're doing periodic backups, I'm just not getting why you would care. I'm
still missing what stopping indexing would gain you.
- Mark
On Jan 8, 2013, at 1:36 AM, Otis Gospodnetic otis.gospodne...@gmail.com wrote:
Hi,
Right, you can continue indexing, but if you need to run
If you are using 4.0 you can't use the CloudSolrServer with the collections API
- you have to pick a server and use the HttpSolrServer impl. In 4.1 you can use
the CloudSolrServer with the collections API.
- Mark
On Jan 6, 2013, at 8:42 PM, Jay Parashar jparas...@itscape.com wrote:
The
What you describe sounds right to me and seems consistent with the error
stack trace. I would increase the MySQL wait_timeout to 3600 and,
depending on your server, you might want to also increase max_connections.
cheers,
Travis
On Tue, Jan 8, 2013 at 4:10 AM, vijeshnair vijeshkn...@gmail.com
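For reference, the two knobs mentioned above can be raised on the MySQL side; 3600 is the value suggested in the message, while 500 for max_connections is a made-up example to tune for your server:

```sql
-- At runtime (needs SUPER privilege); to persist, set the same keys
-- under [mysqld] in my.cnf.
SET GLOBAL wait_timeout = 3600;
SET GLOBAL max_connections = 500;  -- illustrative value only
```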
Hi,
What would be the best field type for a person's name? At the moment I'm
using text_general but, if I search for bob smith, some results I get back
might be rob thomas. In that case it's matched 'ob'.
But I only really want results that are either
'bob smith'
'bob, smith'
'smith, bob'
'smith bob'
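A minimal schema.xml sketch for this kind of name matching, assuming whole-token matching is what's wanted (the type name text_name is made up): complete lowercased tokens only, so 'bob' cannot match 'rob' on a shared substring.

```xml
<fieldType name="text_name" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

Requiring both terms at query time (or a phrase query) then narrows matches to the listed orderings.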
On 1/8/2013 2:10 AM, vijeshnair wrote:
Solr version : 4.0 (running with 9GB of RAM)
MySQL : 5.5
JDBC : mysql-connector-java-5.1.22-bin.jar
I am trying to run the full import for my catalog data, which is roughly
13 million products. The DIH ran smoothly for 18 hours, and processed
roughly
A Lucene 4.0 document now returns a string value for a Date field, instead of
a Date object.
<field name="ModuleImpl.versionAsDate" view="Datenstand" type="date"/>
Solr4.0 -- 2009-10-29T00:00:009Z
Solr3.6 -- Date instance
Can this be set somewhere in the config?
I prefer to receive a date instance
SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd'T'hh:mm:ss.S'Z'");
Date dateObj = df.parse("2009-10-29T00:00:009Z");
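A self-contained sketch of turning the string back into a java.util.Date on the client side; the pattern below assumes Solr's usual canonical form (whole seconds, UTC), which differs slightly from the string quoted above:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class SolrDateParse {
    // Solr's canonical dates are ISO-8601 in UTC, e.g. 2009-10-29T00:00:00Z.
    private static final SimpleDateFormat FMT =
            new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
    static {
        // Parse in UTC, matching how Solr renders dates.
        FMT.setTimeZone(TimeZone.getTimeZone("UTC"));
    }

    public static Date parse(String s) {
        try {
            return FMT.parse(s);
        } catch (java.text.ParseException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Date d = parse("2009-10-29T00:00:00Z");
        System.out.println(d.getTime()); // epoch millis
    }
}
```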
--- Original Message ---
On 1/8/2013 09:34 AM uwe72 wrote:
A Lucene 4.0 document returns for a Date field now a string value, instead of
a Date object.
I recently migrated to Solr Cloud (4.0.0 from 3.6.0) and my auto-suggest
feature does not seem to be working. It is a typical implementation with a
/suggest searchHandler defined in the config.
Are there any changes I need to incorporate?
Regards
Jay
A recent jira issue (LUCENE-4661) changed the maxThreadCount to 1 for
better performance, so I'm not sure if both of my changes above are
actually required or if just maxMergeCount will fix it. I commented on
the issue to find out.
Discussion on the issue has suggested that a maxThreadCount
Hi
I am performing wildcard faceting using the patch in SOLR-247 on Solr 4.0.
It works like a charm in a single instance...
But it does not work in distributed mode...
Am I missing something?
./zahoor
JIRA about the fix for 4.1: https://issues.apache.org/jira/browse/SOLR-4140
On 1/8/13 4:01 PM, Jay Parashar wrote:
Thanks Mark... I will use it with 4.1. For now, I used HttpClient to call the
Collections API directly (doing a GET on
http://127.0.0.1:8983/solr/admin/collections?action=CREATE etc.).
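A sketch of what that by-hand call amounts to; the collection name and shard count below are invented for illustration, and the resulting URL would be fetched with any HTTP client:

```java
public class CollectionsApiUrl {
    // Build the Collections API CREATE url that is then fetched over HTTP,
    // as described in the message above.
    public static String createUrl(String host, String collection, int numShards) {
        return "http://" + host + "/solr/admin/collections?action=CREATE"
                + "&name=" + collection
                + "&numShards=" + numShards;
    }

    public static void main(String[] args) {
        System.out.println(createUrl("127.0.0.1:8983", "mycollection", 2));
    }
}
```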
I'd guess that the patch simply doesn't implement it for distributed searches.
The code for distributed facets is quite a bit more complicated, and I don't
see it touched in this patch.
-Michael
-Original Message-
From: jmozah [mailto:jmo...@gmail.com]
Sent: Tuesday, January 08, 2013
I can try to bump it for distributed search...
Some pointers on where to start would be helpful...
Can SOLR-2894 be a good start to look at this?
./Zahoor
On 08-Jan-2013, at 9:27 PM, Michael Ryan mr...@moreover.com wrote:
I'd guess that the patch simply doesn't implement it for distributed
I think distrib with components has to be set up a little differently - you
might need to use shards.qt to point back to the same request handler for the
sub-searches. Just a guess - been a while since I've looked at spellcheck
distrib support and I'm not 100% positive the suggest stuff is all
Or if synonyms are involved, which they likely aren't in this case.
although for name matching I'd think one would want them, perhaps on
another copy of the name field to allow strict vs. nickname matching.
Otis
Solr ElasticSearch Support
http://sematext.com/
On Jan 8, 2013 9:35 AM, Shawn
I have encountered an issue where using DirectXmlRequest to index data on a
remote host results in eventually running out of temp disk space in the
java.io.tmpdir directory. This occurs when I process a sufficiently large
batch of files. About 30% of the temporary files end up permanent.
Thanks Mark!
-Original Message-
From: Mark Miller [mailto:markrmil...@gmail.com]
Sent: Tuesday, January 08, 2013 10:16 AM
To: solr-user@lucene.apache.org
Subject: Re: Sor Cloud Autosuggest not working
I think distrib with components has to be set up a little differently - you
might need
I'm currently running Solr 4.0 alpha with ManifoldCF v1.1 dev.
ManifoldCF is sending Solr the datetime as milliseconds elapsed since
1-1-1970.
I've tried setting several date.formats in the extraction handler but I
always get this error:
and the manifoldcf crawl aborts.
SolrCore
: ManifoldCF is sending Solr the datetime as milliseconds elapsed since
: 1-1-1970.
Hmm... are you certain there is no way to change ManifoldCF to send the
date in ISO-8601 canonical so Solr can handle it natively?
: I've tried setting several date.formats in the extraction handler but I
Are you
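If the crawler side can be changed, the conversion being suggested (epoch milliseconds to the ISO-8601 form Solr accepts natively) is small; the class and method names here are mine, purely illustrative:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class MillisToSolrDate {
    // Format epoch milliseconds as the ISO-8601 UTC string that Solr
    // date fields accept without a custom date.formats entry.
    public static String toSolrDate(long millis) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        return fmt.format(new Date(millis));
    }

    public static void main(String[] args) {
        System.out.println(toSolrDate(0L)); // 1970-01-01T00:00:00Z
    }
}
```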
Hmm. Fixed it.
Did a similar thing as SOLR-247 for distributed search.
Basically modified the FacetInfo method of FacetComponent.java to make it
work.. :-)
./zahoor
On 08-Jan-2013, at 9:35 PM, jmozah jmo...@gmail.com wrote:
I can try to bump it for distributed search...
Some pointer
I'll certainly ask the ManifoldCF folks if they can send the date in the correct format.
Meanwhile:
How would I create an updater to change the format of a date?
Are there any decent examples out there?
thanks,
--
View this message in context:
One quick and not so dirty way to do this is to use the
http://wiki.apache.org/solr/ScriptUpdateProcessor. Oops, sorry the wiki is a
bit sparse currently, but the feature is documented in the Solr 4.0 release
as collection1/conf/update-script.js in the Solr example. It'll probably
require a
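For context, wiring the script processor mentioned above into solrconfig.xml looks roughly like this; the chain name and script filename follow the Solr 4.0 example, but treat the details as a sketch:

```xml
<updateRequestProcessorChain name="script">
  <processor class="solr.StatelessScriptUpdateProcessorFactory">
    <str name="script">update-script.js</str>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```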
I'm confused about the behavior of clean=true using the DataImportHandler.
When I use clean=true on just one instance, it doesn't blow all the data out
until the import succeeds. In a cluster, however, it appears to blow all the
data out of the other nodes first, then starts adding new docs.
Am
Have you uploaded a patch to JIRA???
Upayavira
On Tue, Jan 8, 2013, at 07:57 PM, jmozah wrote:
Hmm. Fixed it.
Did a similar thing as SOLR-247 for distributed search.
Basically modified the FacetInfo method of FacetComponent.java to
make it work.. :-)
./zahoor
On 08-Jan-2013, at
I just found out I must upgrade to Solr 4.0 final (from 4.0 alpha)
I'm currently running Solr 4.0 alpha on Tomcat 7.
Is there an easy way to surgically replace files and upgrade?
Or should I completely start over with a fresh install?
Ideally, I'm looking for a set of steps...
Thanks,
--
View
Hi Solr Experts,
Could you please suggest how many cores we can have on a single Solr
instance? Suppose we have 8 slaves running in a load-balancing environment;
can we have around 800 cores on each slave instance?
I saw a pending request, SOLR-1293, which will support lots of cores, but that
is not
Hi Michael,
in our index of bibliographic metadata, we see the need for at least
three fields:
- name_facet: String as type, because the facet should represent
the original inverted format from our data.
- name: TextField for searching. This field is heavily analyzed to match
different
On 1/8/2013 2:27 PM, eShard wrote:
I just found out I must upgrade to Solr 4.0 final (from 4.0 alpha)
I'm currently running Solr 4.0 alpha on Tomcat 7.
Is there an easy way to surgically replace files and upgrade?
Or should I completely start over with a fresh install?
Ideally, I'm looking for a
On 1/8/2013 2:33 PM, Uomesh wrote:
Hi Solr Experts,
Could you please suggest how many cores we can have on a single Solr
instance? Suppose we have 8 slaves running in a load-balancing environment;
can we have around 800 cores on each slave instance?
I saw a pending request SOLR-1293 which will
Hello,
I'm trying to understand some Solr relevance issues using debugQuery=on,
but I don't see the coord factor listed anywhere in the explain output.
My understanding is that the coord factor is not included in either the
querynorm or the fieldnorm.
What am I missing?
Tom
I am not sure this applies to alpha and final but I do think upgrading from 4.0
to 4.1 will give you trouble regarding data in Zookeeper. At least
clusterstate.json has changed.
Check the appropriate Jira issues between alpha and final regarding Zookeeper
or test to make sure it works.
If it is a problem, you should be able to just stop your cluster and nuke that
file in Zookeeper, then start up with the new version.
- Mark
On Jan 8, 2013, at 5:09 PM, Markus Jelsma markus.jel...@openindex.io wrote:
I am not sure this applies to alpha and final but I do think upgrading from
I am using Solr for indexing documents. I create the index from a MySQL
database, from PHP running on a WAMP server, using the Solr PHP client.
When I create the index from the server on which Solr is deployed,
everything works fine. But when
Hi folks,
I'm using Solr 4.0.0 and trying to modify the example to build a search app.
So far it works fine.
However, I couldn't figure out how to clean up an old index, say an index created
20 days ago.
I noticed the DeletionPolicy and I activated it by modifying solrconfig.xml
by adding:
apparently, it fails also with @SuppressCodecs("Lucene3x")
roman
On Tue, Jan 8, 2013 at 6:15 PM, Roman Chyla roman.ch...@gmail.com wrote:
Hi,
I have a float field 'read_count' - and a unit test like:
assertQ(req("q", "read_count:1.0"),
"//doc/int[@name='recid'][.='9218920']",
: apparently, it fails also with @SuppressCodecs("Lucene3x")
what exactly is the test failure message?
When you run tests that use the lucene test framework, any failure should
include information about the random seed used to run the test -- that
random seed affects things like the codec used,
On 1/8/2013 3:38 PM, hyrax wrote:
Hi folks,
I'm using Solr 4.0.0 and trying to modify the example to build a search app.
So far it works fine.
However, I couldn't figure out how to clean up an old index, say an index created
20 days ago.
I noticed the DeletionPolicy and I activated it by modifying
Hi Alexandre,
CombiningFilter sounds close (no option to put spaces between original terms),
but hasn't yet been committed:
https://issues.apache.org/jira/browse/LUCENE-3413.
Steve
On Jan 8, 2013, at 4:55 PM, Alexandre Rafalovitch arafa...@gmail.com wrote:
Hello,
I want to take a
For facets, doesn't
http://localhost:8983/solr/select?wt=json&indent=true&fl=name,store&q=*:*&facet=on
&facet.query={!frange l=0 u=3}geodist(store,45.15,-93.85)
&facet.query={!frange l=3.001 u=4}geodist(store,45.15,-93.85)
&facet.query={!frange l=4.001 u=5}geodist(store,45.15,-93.85)
work (from
The test checks that we are properly getting/indexing data - we index a database
and fetch parts of the documents separately from MongoDB. You can look at
the file here:
You really have a field name with '@' symbols in it? If it worked in 3.6,
it was probably not intentional, classic undocumented behavior.
The first thing I'd try is replacing the @ with __ in my schema...
Best
Erick
On Tue, Jan 8, 2013 at 6:58 AM, samarth s samarth.s.seksa...@gmail.com wrote:
Hello Solr Users,
I am trying to convert a complex Lucene query to a SolrQuery to use it
in an EmbeddedSolrServer instance.
I have tried the regular toString method without success. Is there any
suggested method to do this?
Greatly appreciate the response.
Thanks,
--
Jagadish Nomula -
Hi Jagdish,
So when you use the Lucene parser through Solr you get a different query
than if you use Lucene's QP directly? Maybe you can share your raw/English
query?
Otis
Solr ElasticSearch Support
http://sematext.com/
On Jan 8, 2013 9:14 PM, Jagdish Nomula jagd...@simplyhired.com wrote:
Hi All,
I want to analyze the Solr log file... The thing I want to do is: put
all the queries coming to the server into a log file, on a daily or hourly
basis, and then run a tool to make analyses like most-used fields or
queries, the queries which have hits, and so on... Are there any tools
Deniz,
Look at Sematext Search Analytics service, it does that and a lot more.
It's free. URL below.
Otis
Solr ElasticSearch Support
http://sematext.com/
On Jan 8, 2013 9:23 PM, deniz denizdurmu...@gmail.com wrote:
Hi All,
I want to analyze the Solr log file... The thing I want to do is,
Thank you Otis.
I have used Sematext's trial version but it requires sending log files to
another URL (correct me if I am wrong :) ), but I need something which could
run locally, something that would be triggered by a cronjob, or something that
could be integrated (somehow) with the admin interface.
-
Hi,
Are you just trying to extract the personal name? I think Java Mail has the
ability to do that.
Otis
Solr ElasticSearch Support
http://sematext.com/
On Jan 8, 2013 4:56 PM, Alexandre Rafalovitch arafa...@gmail.com wrote:
Hello,
I want to take a composite email address such as John Doe
Hi Ryan,
I'm not sure what is creating those upload files: something in Solr? Or
Tomcat?
Why not specify a different temp dir via a system property command-line
parameter?
Otis
Solr ElasticSearch Support
http://sematext.com/
On Jan 8, 2013 12:17 PM, Ryan Josal rjo...@rim.com wrote:
I have
How complex? Does it use any of the more advanced query types or detailed
options that are not supported in the Solr query syntax?
What specific problems did you have?
-- Jack Krupansky
-Original Message-
From: Jagdish Nomula
Sent: Tuesday, January 08, 2013 9:13 PM
To:
Hi Alex,
Thanks for your reply.
I saw prices based on date range using multipoints. But this is not my
problem. Instead, the problem statement for me is pretty simple.
Say I have 100 documents each having tariff as field.
Doc1
<doc>
  <double name="tariff">2400.0</double>
</doc>
Doc2
<doc>
  <double
Hi Solr Guru,
I have two sets of documents in one SolrCore; each set has about 1M
documents with a different document type, say 'type1' and 'type2'.
Many documents in the first set are very similar to 1 or 2 documents in the
second set. What I want to get is: for each document in set 2, return the
On 8 January 2013 17:48, Harshvardhan Ojha
harshvardhan.o...@makemytrip.com wrote:
Sorry about that; we spoiled that thread, so I posted my question in a fresh
thread.
The problem is indeed very simple.
I have Solr documents which have all the required fields (from the DB).
Say