Yes, re-indexing is the first thing I do whenever a field changes.
And now it looks like the migration to Solr 4 is finished.
Thank you for your answers, David.
WBR Viacheslav.
On 28.11.2012, at 18:25, David Smiley (@MITRE.org) wrote:
Viacheslav,
Did you re-index? Clearly re-indexing is needed
Hi Solr community,
I'm currently doing some benchmarking of a real Solr 3.3 instance vs the
same ported to Solr 4.0.
Benchmarking is done using JMeter from localhost.
Test scenario is a constant stream of queries from a log file out of
production, at targeted 50 QPS.
After some time (marked
Every time I try to do something with the cores from the admin UI, Solr hangs
with no exceptions.
Anyone else experiencing this?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-hangs-after-core-reload-tp4023206.html
Sent from the Solr - User mailing list archive at
I tried downloading them with my browser and also with a c# WebRequest.
If I skip the first and last 4 bytes it seems to work fine.
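For anyone hitting the same thing, here is a minimal sketch of that workaround in Java (the 4-byte header/footer framing is taken from the observation above, not from any documented replication-handler format):

```java
import java.util.Arrays;

public class StripFileWrapper {
    /** Drops the first and last 4 bytes of a downloaded replication payload. */
    public static byte[] strip(byte[] raw) {
        if (raw.length < 8) {
            throw new IllegalArgumentException("payload shorter than 8 bytes");
        }
        return Arrays.copyOfRange(raw, 4, raw.length - 4);
    }
}
```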
On Thu, Nov 29, 2012 at 2:28 AM, Erick Erickson erickerick...@gmail.comwrote:
How are you downloading them? I suspect the issue is
with the download process rather
Found an article about the issue of multi-word synonyms:
http://nolanlawson.com/2012/10/31/better-synonym-handling-in-solr/ .
Not sure it's the solution I'm looking for, but it may be for someone else.
I have a problem with a query for a dimensional type.
In fact, in my schema.xml I added this field:
<field name="myObjects" type="myObject" indexed="true" stored="true" required="false" multiValued="true"/>
with type:
<fieldType name="myObject" class="solr.PointType" dimension="3" subFieldType="string"/>
There are also other solutions:
Multi-word synonym filter (synonym expansion)
https://issues.apache.org/jira/browse/LUCENE-4499
Since Solr 3.4 I have had my own solution, which might become obsolete once
LUCENE-4499 is in a released version.
Thanks Erick, I just found this ticket implying that it is able to be used by
the main query also:
https://issues.apache.org/jira/browse/SOLR-2429
As you stated, queryResultCache is cheap. I guess it would be nice to get a
definitive answer and example
Hi,
I think you should change/set the value of the multipartUploadLimitInKB
attribute of requestParsers in solrconfig.xml.
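For reference, a sketch of where that attribute lives (the limit value shown here is only an example; size it to your largest expected upload):

```xml
<requestDispatcher>
  <!-- multipartUploadLimitInKB caps the size of multipart POST bodies -->
  <requestParsers enableRemoteStreaming="true"
                  multipartUploadLimitInKB="2048000" />
</requestDispatcher>
```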
Regards.
On 29 November 2012 07:58, deniz denizdurmu...@gmail.com wrote:
hello,
during tests, I keep getting
SEVERE: null:java.lang.IllegalStateException: Form too
I tried your second case with Solr 3.5; it runs fine and the record could be
deleted when you only configure deletedPkQuery.
Could you consider upgrading your Solr to version 3.5?
Best Regards,
Illu Ying
-Original Message-
From: RPSolrUser [mailto:roopa.parek...@gmail.com]
Sent: November 27, 2012, 23:49
On 11/29/2012 3:15 AM, Daniel Exner wrote:
I'm currently doing some benchmarking of a real Solr 3.3 instance vs
the same ported to Solr 4.0.
Benchmarking is done using JMeter from localhost.
Test scenario is a constant stream of queries from a log file out of
production, at targeted 50 QPS.
Yes, it is sad but true that multi-word synonym processing does not work
right out of the box for all common interesting cases, although it does
reasonably well for index-time processing. Even there, matching synonyms of
varying lengths within larger phrases will sometimes work but sometimes
Using cache=false seems to not be caching the query result. I ran queries
against our master server that doesn't get web traffic with and without the
parameter and would only notice inserts when the parameter wasn't included.
On 11/29/2012 3:15 AM, Daniel Exner wrote:
I'm currently doing some benchmarking of a real Solr 3.3 instance vs
the same ported to Solr 4.0.
Another note specifically related to this part: Have you used the same
configuration and done the minimal changes required to make it run, or
have you
My jvm settings:
-Xmx8192M -Xms8192M -XX:+CMSScavengeBeforeRemark -XX:NewRatio=2
-XX:+CMSParallelRemarkEnabled -XX:+UseParNewGC -XX:+UseConcMarkSweepGC
-XX:+AggressiveOpts -XX:CMSInitiatingOccupancyFraction=70
-XX:+UseCMSInitiatingOccupancyOnly -XX:-CMSIncrementalPacing
I'll answer both your mails in one.
Shawn Heisey wrote:
On 11/29/2012 3:15 AM, Daniel Exner wrote:
I'm currently doing some benchmarking of a real Solr 3.3 instance vs
the same ported to Solr 4.0.
[..]
In the graph you can see high CPU load, all the time. This is even the
case if I reduce
On 11/29/2012 8:29 AM, Daniel Exner wrote:
I'll answer both your mails in one.
Shawn Heisey wrote:
On 11/29/2012 3:15 AM, Daniel Exner wrote:
I'm currently doing some benchmarking of a real Solr 3.3 instance vs
the same ported to Solr 4.0.
[..]
In the graph you can see high CPU load, all
Thanks for responding Shawn.
Annette is away until Monday so I am looking into this in the meantime.
Looking at the times of the Full GC entries at the end of the log, I think
they are collections we started manually through jconsole to try and reduce
the size of the old generation. This only
On 11/29/2012 10:44 AM, Andy Kershaw wrote:
Annette is away until Monday so I am looking into this in the meantime.
Looking at the times of the Full GC entries at the end of the log, I think
they are collections we started manually through jconsole to try and reduce
the size of the old
Hi Mark,
I get that use case: if the non-leader dies, when it comes back it has to
allow for recovery; that makes perfect sense.
I guess I was (naively!) assuming there was an optimized scenario where, if the
leader dies and is the first one to come back (and is therefore still leader),
there is no
Several suggestions.
1. Adjust the traffic load for about 75% CPU. When you hit 100%, you are
already in an overload state and the variance of the response times goes way
up. You'll have very noisy benchmark data.
2. Do not force manual GCs during a benchmark.
3. Do not force merge
On Nov 29, 2012, at 1:26 PM, Daniel Collins danwcoll...@gmail.com wrote:
Hi Mark,
I get that use case, if the non-leader dies, when it comes back it has to
allow for recovery, that makes perfect sense.
I guess I was (naively!) assuming there was an optimized scenario if the
leader dies,
Hello Floyd,
I suggest starting from
http://wiki.apache.org/solr/ExtendedDisMax#bq_.28Boost_Query.29 specifying
bq=id:(1 2 3 4 ... )
Good Luck
On Wed, Nov 28, 2012 at 11:34 PM, Floyd Wu floyd...@gmail.com wrote:
Sorry if this is a duplicated question; I have had no luck getting started.
I'm trying to create a SOLR query that groups/field collapses by date. I
have a field in yyyy-MM-dd'T'HH:mm:ss'Z' format, datetime, and I'm looking
to group by just per day. When grouping on this field using
group.field=datetime in the query, SOLR responds with a group for every
second. I'm
I was trying to port the surround parser from 4.0 to 3.5.
After getting the plugin to work I am not able to get the following results:
http://localhost:8983/solr/collection1/select?q=_query_:{!surround}features:(document3w
shiny)
this works on 4.0 but not on 3.5 with the plugin installed
3.5 query
Maybe these are text encoding markers?
- Original Message -
| From: Eva Lacy e...@lacy.ie
| To: solr-user@lucene.apache.org
| Sent: Thursday, November 29, 2012 3:53:07 AM
| Subject: Re: Downloading files from the solr replication Handler
|
| I tried downloading them with my browser and
Hi Robert,
SolrJ is sending data over a socket, so that might explain some of the lag.
Are your SolrJ app and the Solr server running on the same physical
machine?
I thought Mark M's idea sounded good.
One other idea:
When initializing SolrJ's connection for normal searching you probably use
Howdy,
I'm having rather a lot of difficulty getting Solr 4.0 running under Linux
(I got it up-and-running under Windows very quickly). My web server is
Glassfish 3.1.1. Additionally, my solr/home dir is /opt/solr/solr-4.0 and my
data dir is /opt/solr/data.
When I deploy the solr war file or
Sorry, yes, I had been using the BETA version. I have deleted all of that,
replaced the jars with the released versions (reduced my core count), and now I
have consistent results.
I guess I missed that JIRA ticket, sorry for the false alarm.
Dave
-Original Message-
From: Erick
Marcin Rzewucki wrote
I think you should change/set value for multipartUploadLimitInKB attribute
of requestParsers in solrconfig.xml
the value for multiPartUploadLimit is shown as 2048000 in the config, and in
the error logs I see 20, related to Jetty... I have changed some part
in the
On 27 November 2012 21:19, Paul Tester paulteste...@gmail.com wrote:
Hi all,
At our company we have an ASP.NET web application hosted in IIS 7.5. This
application has a search module which uses Solr. For communication
with the Solr instance we use a third-party plugin. For every search we
Why not create a new field that just contains the day component? Then you
can group by this field.
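A minimal sketch of deriving that day-only value client-side before indexing (this illustrates the idea rather than any documented Solr feature; the UTC zone choice is an assumption):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class DayField {
    /** Truncates a timestamp to its UTC day, e.g. "2012-11-29". */
    public static String dayOf(Date d) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        return fmt.format(d);
    }
}
```

Index the returned string into a plain string field and group on that field.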
On Thu, Nov 29, 2012 at 12:38 PM, sdanzig sdan...@gmail.com wrote:
I'm trying to create a SOLR query that groups/field collapses by date. I
have a field in yyyy-MM-dd'T'HH:mm:ss'Z' format,
Hello, I am having a weird problem with SolrCloud and sorting. I will open a
bug ticket about this too, but I'm wondering if anyone has had similar problems
to mine
Background: Basically, I have added a new feature to Solr after I got the
source code. Similar to the way we get the score in the result set, I am
Or group by a function query which is the date field converted to
milliseconds divided by the number of milliseconds in a day.
Such as:
q=*:*&group=true&group.func=rint(div(ms(date_dt),mul(24,mul(60,mul(60,1000)))))
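To make the arithmetic concrete, here is the same bucket computation in plain Java (the timestamps in the test are illustrative):

```java
public class DayBucket {
    static final long MS_PER_DAY = 24L * 60 * 60 * 1000; // 86,400,000

    /** Mirrors rint(div(ms(date_dt), mul(24, mul(60, mul(60, 1000))))). */
    public static double bucketFor(long epochMs) {
        return Math.rint((double) epochMs / MS_PER_DAY);
    }
}
```

Note that rint rounds to nearest, so the bucket boundary falls at noon UTC rather than midnight; using floor() instead would align buckets to calendar days.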
-- Jack Krupansky
-Original Message-
From: Amit Nithian
Sent:
Can anyone shed some light on this code in ZkController...
if (localHostContext.contains("/")) {
  throw new IllegalArgumentException("localHostContext ("
      + localHostContext + ") should not contain a /");
}
...I don't really understand this limitation. There's nothing in the
What's the performance impact of doing this?
On Thu, Nov 29, 2012 at 7:54 PM, Jack Krupansky j...@basetechnology.comwrote:
Or group by a function query which is the date field converted to
milliseconds divided by the number of milliseconds in a day.
Such as:
After playing with this more, I think I have some clue...
on the standalone Solr, when I give start=11 and rows=20, I can see
documents with positions ranging from 12 to 31, which is correct... on the
cloud, when I give the same parameters, again I get the same documents, but
this time position
Hello everyone
I use ManifoldCF (a file crawler) to crawl and index file contents into
Solr 3.6.
ManifoldCF uses ExtractingRequestHandler to extract contents from files.
Somehow an IOFileUploadException occurs, reporting that there are too many open
files.
Does Solr open temporary files under /var/tmp/ a