will eventually. The complaint is it
can't find ELspell, which I had defined in the old setup that I blew
away, so I'll have to redefine it at some point! For now, I'm just
gonna delight in having searching working again!
Mark
On 9/26/2015 11:05 PM, Erick Erickson wrote:
No need to re
. Default config puts tags around the search, but I'm not
using an HTML renderer and I don't want characters of any sort inserted
into the text returned in the result set. Rather, I just want the
start/end position. How do I configure that?
Mark
On 9/27/2015 12:49 PM, Alexandre Rafalovitch wrote:
Mark,
Thank you for your valuable feedback. The newbie's views are always appreciated.
The Admin UI command is designed for creating a collection based on
the configuration you already have. Obviously, it makes that point
somewhat less than
my own
environment. Can someone please point me to the document(s)/tutorial(s)
that I am missing?
Mark
RE lastmodtime > '${dataimporter.last_index_time}';">
Hope this helps!
Thanks,
Mark
On 9/24/2015 10:57 AM, Erick Erickson wrote:
Geraint:
Good Catch! I totally missed that. So all of our focus on schema.xml has
been... totally irrelevant. Now that you pointed that out, there's also the
correct results. Does this mean I have a problem in my
indexing process when I used /dataimport? Or does it mean I have
something wrong in my query?
Also, notice in the results that category, logtext, and username fields
are returned as arrays, even though I do not include multi
"querystring":"*deeper*",
"parsedquery":"logtext:*deeper*",
"parsedquery_toString":"logtext:*deeper*",
"explain":{
"2012-07-10 13:23:39.0":"\n1.0 = logtext:*deeper*, product of:\n
1.0 = boost\n 1.0 = que
get several results, but I
don't know what the output means.
thanks,
Mark
On 9/23/2015 12:30 PM, Erick Erickson wrote:
Then my next guess is you're not pointing at the index you think you are
when you 'rm -rf data'
Just ignore the ELall field for now, I should think, although get rid of it
if you don't think you need it.
DIH should be irrelevant here.
So let's back
t shows you're searching
on the logtext field, this line is the relevant one:
"parsedquery_toString":"logtext:deeper",
Should I just get rid of "ELall"? I only created it with the intent to
be able to search on "fenbers" and get hits if "fenbers" occurred in
either place, the logtext field or the username field.
thanks,
Mark
for
indexes (main, index-based and File-based)?? Can I make them subdirs of
the main index (in /localapps/dev/EventLog/index)? Or would that mess
up the main index?
Thanks for raising my awareness of these errors!
Mark
On 9/21/2015 5:07 PM, Mikhail Khludnev wrote:
Both of these guys below try
with a few files. So it seems "spellcheck.build" worked, but
I am still not getting any hits when I purposefully misspell a word.
But I'll post this problem with more details in a separate post.
Mark
On 9/21/2015 5:07 PM, Mikhail Khludnev wrote:
Both of these guys below try to w
ate" because it is a Timestamp object in Java and a "timestamp
without time zone" in PostgreSQL. But even with these changes, the
results are the same as before.
Do you have any more ideas why searching on any literal string finds
zero documents?
Thanks,
Mark
On 9/18/2015 10:30 PM, E
is the pertinent schema.xml snippet you asked for.
The logtext column in my table contains merely keyboarded text, with the
infrequent exception that I add a \uFFFC as a placeholder for images.
So, should I be using something besides text_en as the fieldType?
Thanks,
Mark
On 9/21/2015 12:12 PM
A snippet of my solrconfig.xml is attached. The snippet only contains
the Spell checking sections (for brevity) which should be sufficient for
you to see all the pertinent info you seek.
Thanks!
Mark
On 9/19/2015 3:29 AM, Mikhail Khludnev wrote:
Mark,
What's your solrconfig.xml?
On Sat
t
I could have done to break searching capabilities in the process.
Again, searching is not completely broken because it will return all the
documents with * as the token.
thanks,
Mark
showing zero results. The "logtext" field is what I search
on, and this field type is plain text, although I don't think I
specifically declare this anywhere.
Both attachments were run with debug on.
Thanks,
Mark
You should also check the "debugQuery" box on the Query tab,
e.toString() is what was being formatted into the JSON... Doink!!
Thanks for the nudge!
Mark
On 9/18/2015 6:15 PM, Upayavira wrote:
What URL are you posting to? Why do you want to use JSON or XML from
SolrJ, which is best using javabin anyway?
Get it right via a URL first, then try to port it over
in a post that increasing
writeLockTimeout would help. It did not help for me even increasing it
to 20,000 msec. If I don't build, then my resultset count is always 0,
i.e., empty results. What could be causing this?
Mark
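(A dictionary build is normally triggered by adding spellcheck.build=true to a request; using the handler name that appears later in this thread, and a hypothetical query, it would look roughly like:
http://localhost:8983/solr/EventLog/spellCheckCompRH?q=logtext:fenbers&spellcheck=true&spellcheck.build=true
Once the build succeeds, subsequent requests can omit spellcheck.build.)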
indent
debugQuery
dismax
edismax
hl
facet
spatial
spellcheck
spellc
ults on my text
that is chock full of misspelled words. Any ideas? Attached is my
solrconfig snippet:
Mark
text_general
index
logtext
solr.IndexBasedSpellChecker
.
true
0.5
2
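(The bare values above read like a solrconfig.xml spellcheck section whose XML tags were stripped by the mail archiver. A typical IndexBasedSpellChecker definition using those values would look roughly like the following, where the element names are a guess rather than the poster's exact file:

<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <str name="queryAnalyzerFieldType">text_general</str>
  <lst name="spellchecker">
    <str name="name">index</str>
    <str name="field">logtext</str>
    <str name="classname">solr.IndexBasedSpellChecker</str>
    <str name="spellcheckIndexDir">.</str>
    <str name="buildOnCommit">true</str>
    <float name="accuracy">0.5</float>
    <int name="maxEdits">2</int>
  </lst>
</searchComponent>)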
tter what I try. Can
you offer specific advice?
Mark
nd
schema.xml are posted in another thread having a subject "Moving on to
spelling" if that helps you help me.
Thanks,
Mark
HTTP ERROR 500
Problem accessing /solr/EventLog/spellCheckCompRH. Reason:
{msg=SolrCore 'EventLog' is not available due to init failure:
java
)? Does it matter which way I specify them?
thanks,
Mark
t and same solrconfig.xml files
because "internal" is not defined in AbstractLuceneSpellChecker.java!
Once I edited these two problems in my own solrconfig.xml, the
stacktrace errors went away!! Yay!
But I'm not out of the woods yet! I'll resume later, after our system
upgrade today.
Ah ha!! Exactly my point in the post I sent about the same time you did
(same Thread)!
Mark
On 9/16/2015 8:03 AM, Mikhail Khludnev wrote:
https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/spelling/AbstractLuceneSpellChecker.java#L97
this mean that
0.5
glish
dictionary file on the web and add Solr's FileBasedSpellChecker?? Or
does Solr already have what I need and it's a matter of me learning how
to configure that properly?? (If so, how?)
Mark
Have you used jconsole or visualvm to see what it is actually hanging on to
there? Perhaps it is lock files that are not cleaned up or something else?
You might try: find ~/.ivy2 -name "*.lck" -type f -exec rm {} \;
- Mark
On Wed, Sep 16, 2015 at 9:50 AM Susheel Kumar <susheel2
I mention the same thing in
https://issues.apache.org/jira/browse/LUCENE-6743
They claim to have addressed this with Java delete on close stuff, but it
still happens even with 2.4.0.
Locally, I now use the nio strategy and never hit it.
- Mark
On Wed, Sep 16, 2015 at 12:17 PM Shawn Heisey <
You should be able to easily see where the task is hanging in ivy code.
- Mark
On Wed, Sep 16, 2015 at 1:36 PM Susheel Kumar <susheel2...@gmail.com> wrote:
> Not really. There are no lock files & even after cleaning up lock files (to
> be sure) problem still persists. It works
hurdles I run into!
Thanks for the help!
Mark
On 9/15/2015 11:13 AM, Yonik Seeley wrote:
On Tue, Sep 15, 2015 at 11:08 AM, Mark Fenbers <mark.fenb...@noaa.gov> wrote:
I'm working with the spellcheck component of Solr for the first time. I'm
using SolrJ, and when I submit my que
e sentence, not a single word. Would that matter?
Mark
t the end-user typed. "eventlogtext.logtext" is
the table.column this text is stored in.
I'm using Solr 5.3.0. I'm cross-eyed! Any ti
by the non-zero
list of suggestions? Or is there a third option I haven't thought of
(like, spell-check as I type)??
I'm just trying to picture the behavior in my head so I know what
programming approach to take. Thanks for the help!
Mark
-around. Does
Solr have any baseline processors that will handle the URL-encoding?
Being new to Solr, I'm not sure I have the skill to write my own. Or,
is there another kind of encoding I can use that Solr doesn't adversely
react to??
Mark
On 9/11/2015 12:11 PM, Erick Erickson wrote:
Several
nk that would matter, though.
Another example... In one of the documents returned by the "Friday"
query results, I noticed in the text the name of a co-worker "Drzal".
So, I searched on "Drzal" and my results came up with 0 documents. (!?)
Any ideas where I went wrong??
Mark
d solr and "Voila!" All works as designed! It even indexed my
entire database on the first try of a full-import! Woohooo!
Thanks for your help. I would have abandoned this project without your
persistence.
Mark
Perhaps there is something preventing clean shutdown. Shutdown makes a best
effort attempt to publish DOWN for all the local cores.
Otherwise, yes, it's a little bit annoying, but full state is a combination
of the state entry and whether the live node for that replica exists or not.
- Mark
s around to try to get this
working, and my lack of knowledge of where all the places are that it
looks for jars), but they would be the same versions.
Unfortunately, the piece of Solr that is not working for me
(DataImportHandler) is the very piece I need for my project. :-((
Mark
s not seem to indicate that it has found a duplicate jar.
Though Solr is running, I am not able to create an index from my
database data.
What do you make of the information in the log file?
Mark
2015-09-06 11:02:30.674 INFO (main) [ ] o.e.j.u.log Logging
initialized @8
of the DataImportHandler error
originates?
Thanks!
Mark
url="jdbc:postgresql://dx1f/OHRFC" user="awips" />
deltaQuery="SELECT posttime FROM eventlogtext WHERE
lastmodtime > '${dataimporter.last_index_time}'">
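(For context, a delta-import entity of the kind quoted above generally has this overall shape; the table and connection details follow the thread, while the query bodies are a hedged sketch, not the poster's actual file:

<dataConfig>
  <dataSource driver="org.postgresql.Driver"
              url="jdbc:postgresql://dx1f/OHRFC" user="awips" />
  <document>
    <entity name="eventlog"
            query="SELECT * FROM eventlogtext"
            deltaQuery="SELECT posttime FROM eventlogtext WHERE
                        lastmodtime > '${dataimporter.last_index_time}'"
            deltaImportQuery="SELECT * FROM eventlogtext WHERE
                        posttime = '${dataimporter.delta.posttime}'"/>
  </document>
</dataConfig>)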
hostly configuration
settings?
Mark
2015-09-05 19:19:54.416 INFO (main) [ ] o.e.j.u.log Logging
initialized @902ms
2015-09-05 19:19:54.815 INFO (main) [ ] o.e.j.s.Server
jetty-9.2.11.v20150529
2015-09-05 19:19:54.851 WARN (main) [ ] o.e.j.s.h.RequestLogHandler
!RequestLog
2015-
r yet, but I
haven't exhausted exploring the Solr website yet. Hopefully, I'll find
some examples using Solr in Java code...
Mark
On 9/2/2015 9:51 PM, Chris Hostetter wrote:
: various $HOME/solr-5.3.0 subdirectories. The documents/tutorials say to edit
: the solrconfig.xml file for vari
It seems to me that this problem could be a classpath issue, but I
copied the appropriate jar file into the solr/lib directory to be sure.
This made the (slightly different) initial error go away, but now I
cannot make this one go away.
Any ideas?
Mark
the solrconfig.xml file for various
configuration details, but they never say which one of these dozen to
edit. Moreover, I cannot determine which version is being used once I
start solr, so that I would know which instance of this file to
edit/customize.
Can you help??
Thanks!
Mark
Do a search with the same criteria before and after?
On Wed, Sep 2, 2015 at 3:30 PM, Renee Sun wrote:
> I run this curl trying to delete some messages :
>
> curl
> 'http://localhost:8080/solr/mycore/update?commit=true=
> abacd'
> | xmllint --format -
>
> or
>
> curl
>
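(For reference, delete-by-query via the update handler usually takes this shape; the query value here is a placeholder:

curl "http://localhost:8080/solr/mycore/update?commit=true" \
  -H "Content-Type: text/xml" \
  --data-binary "<delete><query>id:abacd</query></delete>"

If you use the stream.body form instead, the XML payload must be URL-encoded.)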
-user@lucene.apache.org
Subject: Re: Same query fails in SolrJ, but works in the browser.
On Tue, Aug 4, 2015 at 4:24 PM, Mark Horninger
mhornin...@grayhairsoftware.com wrote:
Hi Solr Users,
I am running into a problem where a more complex dismax query I am
trying to run works
();
}
return result;
}
For whatever reason, this is not working. If anyone could shed some light on
the issue I would be extremely grateful.
Thanks,
-Mark
[GrayHair]
GHS Confidentiality Notice
This e-mail message, including any attachments, is for the sole use of the
intended recipient
light, let's see the schema file?
Maybe the results of adding debug=all to the query?
Best,
Erick
On Fri, Jun 26, 2015 at 8:05 AM, Mark Ehle marke...@gmail.com wrote:
In my schema from 4.6, the text was in the 'text' field, and the stored
attrib was set to true as it is in the 5.2 schema. I
=json&indent=true&hl=true&hl.fl=text&hl.simple.pre=%3Cem%3E&hl.simple.post=%3C%2Fem%3E
used to produce snippets of highlighted text in 4.6. In 5.2 it does not.
Thanks -
Mark Ehle
Computer Support Librarian
Willard Library
Battle Creek, MI
On Tue, Jun 30, 2015 at 10:50 AM, Alessandro Benedetti
so when hl.usePhraseHighlighter is enabled.
So you should be ok.
Keep us posted
2015-06-30 16:00 GMT+01:00 Mark Ehle marke...@gmail.com:
Alessandro -
Someone asked to see the schema, I posted it. Should I have just attached
it? Does this mailing list support that?
I am
request handler.
I don't _think_ that there's a default in the code, but I've been surprised
before.
Could you add
debug=true&echoParams=all
to the query and paste the results?
Best,
Erick
On Tue, Jun 30, 2015 at 12:32 PM, Mark Ehle marke...@gmail.com wrote:
Do you mean
configuration.
This suggestion may be obvious, but have you checked that the
stored attribute is set to true for the field you are interested in?
I am talking about the 5.2 schema.
Cheers
2015-06-26 12:24 GMT+01:00 Mark Ehle marke...@gmail.com:
Folks -
I am using SOLR 4.6 to run
highlighting so I need the text there.
What would be different about 5.2 that would account for this?
Thanks!
Mark Ehle
Computer Support Librarian
Willard Library
Battle Creek,MI
I think there is some better classpath isolation options in the works for
Hadoop. As it is, there is some harmonization that has to be done depending
on versions used, and it can get tricky.
- Mark
On Wed, Jun 17, 2015 at 9:52 AM Erick Erickson erickerick...@gmail.com
wrote:
For sure
I didn't really follow this issue - what was the motivation for the rewrite?
Is it entirely "new code should be quite a bit easier to work on for
programmer types", or are there other reasons as well?
- Mark
On Mon, Jun 15, 2015 at 10:40 AM Erick Erickson erickerick...@gmail.com
wrote
:) ).
- Mark
On Mon, Jun 15, 2015 at 11:35 AM Upayavira u...@odoko.co.uk wrote:
The current UI was written before tools like AngularJS were widespread,
and before decent separation of concerns was easy to achieve in
Javascript.
In a sense, your paraphrase of the justification was as you described
to a
specific deletion policy?
Thanks,
Aurélien
--
- Mark
about.me/markrmiller
File a JIRA issue please. That OOM Exception is getting wrapped in a
RuntimeException it looks. Bug.
- Mark
On Wed, Jun 3, 2015 at 2:20 AM Clemens Wyss DEV clemens...@mysign.ch
wrote:
Context: Lucene 5.1, Java 8 on debian. 24G of RAM whereof 16G available
for Solr.
I am seeing the following
We will have to find a way to deal with this long term. Browsing the code
I can see a variety of places where problematic exception handling has been
introduced since this all was fixed.
- Mark
On Wed, Jun 3, 2015 at 8:19 AM Mark Miller markrmil...@gmail.com wrote:
File a JIRA issue please
Hi Alexandre,
Solr + ASM is the exact problem I'm looking to hack about with, so I'm keen
to consider any code no matter how ugly or broken
Regards
Mark
On 9 May 2015 at 10:21, Alexandre Rafalovitch arafa...@gmail.com wrote:
If you only have classes/jars, use ASM. I have done this before
I'm looking to use Solr search over the byte code in Classes and Jars.
Does anyone know or have experience of Analyzers, Tokenizers, and Token
Filters for such a task?
Regards
Mark
(java.util.logging.LogRecord);
public static void main(java.lang.String[]) throws java.lang.Exception;
public static void go() throws java.lang.Exception;
static {};
}
On May 8, 2015, at 3:31 PM, Mark javam...@gmail.com wrote:
I'm looking to use Solr search over the byte code in Classes
Erik,
Thanks for the pretty much OOTB approach.
I think I'm going to just try a range of approaches, and see how far I get.
The IDE does this suggestion would be worth looking into as well.
On 8 May 2015 at 22:14, Mark javam...@gmail.com wrote:
https://searchcode.com/
looks really
https://searchcode.com/
looks really interesting; however, I want to extract as many searchable
aspects as possible out of jars sitting on a classpath or under a project structure...
Really early days so I'm open to any suggestions
On 8 May 2015 at 22:09, Mark javam...@gmail.com wrote:
To answer why
A bug fix version difference probably won't matter. It's best to use the
same version everyone else uses and the one our tests use, but it's very
likely 3.4.5 will work without a hitch.
- Mark
On Tue, May 5, 2015 at 9:09 AM shacky shack...@gmail.com wrote:
Hi.
I read on
https
If copies of the index are not eventually cleaned up, I'd file a JIRA to
address the issue. Those directories should be removed over time. At times
there will have to be a couple around at the same time and others may take
a while to clean up.
- Mark
On Tue, Apr 28, 2015 at 3:27 AM Ramkumar R
Hmm...can you file a JIRA issue with this info?
- Mark
On Fri, Mar 27, 2015 at 6:09 PM Joseph Obernberger j...@lovehorsepower.com
wrote:
I just started up a two shard cluster on two machines using HDFS. When I
started to index documents, the log shows errors like this. They repeat
when I
maintained on
confluence?
--
Mark E. Haase
202-815-0201
them to node Y anyway, why not copy them to the place they'll be
needed?
The scripts make an assumption that you're running on the same node
you're running the scripts for quick-start purposes.
Best,
Erick
On Thu, Mar 26, 2015 at 9:24 AM, Mark E. Haase meha...@gmail.com wrote:
I can't get
/solr/CoreAdmin+API#CoreAdminAPI-CREATE
And one of the few things [I thought] I knew about cloud vs non cloud
setups was the Collections API is for cloud and Cores API is for non cloud,
right? So why would the non-cloud API take a cloud-only argument?
On Thu, Mar 26, 2015 at 1:16 PM, Mark E. Haase
files first, does it?
--
Mark E. Haase
in the Admin UI.
Thanks!
-Original Message-
From: Mark Bramer
Sent: Thursday, March 26, 2015 7:42 PM
To: 'solr-user@lucene.apache.org'
Subject: RE: i'm a newb: questions about schema.xml
Hi Shawn,
Definitely helpful to know about the instance and files stuff in Admin. I'm
...@elyograg.org]
Sent: Thursday, March 26, 2015 7:28 PM
To: solr-user@lucene.apache.org
Subject: Re: i'm a newb: questions about schema.xml
On 3/26/2015 4:57 PM, Mark Bramer wrote:
I'm a Solr newb. I've been poking around for several days on my own test
instance, and also online at the info
, and
I've already recognized some differences. Is my ES background clouding my
grasp of Solr fundamentals?
Thanks for any help.
Mark Bramer | Technical Team Lead, DC Services
Esri | 8615 Westwood Center Dr | Vienna, VA 22182 | USA
T 703 506 9515 x8017 | mbra...@esri.commailto:mbra...@esri.com
Doesn't ConcurrentUpdateSolrServer take an HttpClient in one of its
constructors?
- Mark
On Sun, Mar 22, 2015 at 3:40 PM Ramkumar R. Aiyengar
andyetitmo...@gmail.com wrote:
Not a direct answer, but Anshum just created this..
https://issues.apache.org/jira/browse/SOLR-7275
On 20 Mar 2015
because the
old leader won't give up the throne. We don't try to force the new leader
because that may just hide bugs and cause data loss, so no leader is
elected.
I'd guess there are two JIRA issues to resolve here.
- Mark
On Sun, Mar 8, 2015 at 8:37 AM Markus Jelsma markus.jel...@openindex.io
wrote
If you google "replication can cause index corruption" there are two jira issues
that are the most likely cause of corruption in a solrcloud env.
- Mark
On Mar 5, 2015, at 2:20 PM, Garth Grimm garthgr...@averyranchconsulting.com
wrote:
For updates, the document will always get routed
I’ll be working on this at some point:
https://issues.apache.org/jira/browse/SOLR-6237
- Mark
http://about.me/markrmiller
On Feb 25, 2015, at 2:12 AM, longsan longsan...@sina.com wrote:
We used HDFS as our Solr index storage and we really have a heavy update
load. We had met much problems
you
are trying to specify the name, I'm guessing something about the command is
not working. You might try just shoving it in a browser url bar as well.
- Mark
On Wed Feb 18 2015 at 8:56:26 PM Hrishikesh Gadre gadre.s...@gmail.com
wrote:
Hi,
Can we please document which HTTP method
by the
way.
- Mark
http://about.me/markrmiller
On Feb 3, 2015, at 7:47 PM, Tim Smith secs...@gmail.com wrote:
Hi,
I have a SolrCloud (Solr 4.4, writing to HDFS on CDH-5.3) collection
configured to be populated by flume Morphlines sink. The flume agent reads
data from Kafka and writes
the SID from the existing doc, add any additional fields,
then add to solr.
On Wed, Jan 28, 2015 at 11:56 AM, Mark javam...@gmail.com wrote:
Is it possible to use curl to upload a document (for extract indexing)
and specify some fields on the fly?
sort of:
1) index this document
2
Second thoughts: SID is purely i/p, as its name suggests :)
I think a better approach would be
1) curl to upload/extract passing docID
2) curl to update additional fields for that docID
On 28 January 2015 at 17:30, Mark javam...@gmail.com wrote:
Create the SID from the existing doc implies
I'm looking to
1) upload a binary document using curl
2) add some additional facets
Specifically my question is can this be achieved in 1 curl operation or
does it need 2?
On 28 January 2015 at 17:43, Mark javam...@gmail.com wrote:
Second thoughts SID is purely i/p as its name suggests
Yes, after 45 seconds a replica should take over as leader. The logs of
the replica that should be taking over will likely explain why this is
not happening.
- Mark
On Wed Jan 28 2015 at 2:52:32 PM Joshi, Shital shital.jo...@gs.com wrote:
When leader reaches 99% physical memory on the
Use case is
use curl to upload/extract/index document passing in additional facets not
present in the document e.g. literal.source=old system
In this way some fields come from the uploaded extracted content and some
fields as specified in the curl URL
Hope that's clearer?
Regards
Mark
On 28
at 18:03, Mark javam...@gmail.com wrote:
Use case is
use curl to upload/extract/index document passing in additional facets not
present in the document e.g. literal.source=old system
In this way some fields come from the uploaded extracted content and some
fields as specified in the curl URL
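(Passing the extra fields as literal.* parameters on the extract handler does this in a single curl call; the field names below are illustrative only:

curl "http://localhost:8983/solr/update/extract?literal.id=doc1&literal.source_s=old+system&commit=true" \
  -F "myfile=@somedoc.msg"

Each literal.<field> is stored alongside whatever content Tika extracts from the file.)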
Is it possible to use curl to upload a document (for extract indexing)
and specify some fields on the fly?
sort of:
1) index this document
2) by the way here are some important facets whilst you're at it
Regards
Mark
=foo_txt&boost.foo_txt=3&literal.blah_s=Bah"
-F tutorial=@help.pdf
and therefore I learned that you can't update a field that isn't in the
original which is what I was trying to do before.
Regards
Mark
On 28 January 2015 at 18:38, Alexandre Rafalovitch arafa...@gmail.com
wrote:
Well, the schema does
, application/vnd.ms-outlook);
Regards
Mark
On 27 January 2015 at 18:39, Mark javam...@gmail.com wrote:
Hi Alex,
On an individual file basis that would work, since you could set the ID on
an individual basis.
However, recursing a folder doesn't work, and worse still the server
complains.
Thanks for everyone's suggestions.
Regards
Mark
On 27 January 2015 at 18:01, Alexandre Rafalovitch arafa...@gmail.com
wrote:
Your IDs seem to be the file names, which you are probably also getting
from your parsing the file. Can't you just set (or copyField) that as an ID
on the Solr side
a folder means that it requires an ID strategy - which I believe
is lacking.
Regards
Mark
On 27 January 2015 at 10:57, Erik Hatcher erik.hatc...@gmail.com wrote:
Try adding -Dauto=true and take away setting url. The type probably isn't
needed then either.
With the new Solr 5 bin/post it sets
Sorry, there is no great workaround. You might try raising the max idle time
for your container - perhaps that makes it less frequent.
- Mark
On Tue Jan 20 2015 at 1:56:54 PM Nishanth S nishanth.2...@gmail.com wrote:
Thank you Mike. Sure enough, we are running into the same issue you
I'm looking to index some outlook extracted messages *.msg
I notice by default msg isn't one of the defaults so I tried the following:
java -classpath dist/solr-core-4.10.3.jar -Dtype=application/vnd.ms-outlook
org.apache.solr.util.SimplePostTool C:/temp/samplemsg/*.msg
That didn't work
/response
However just calling the extract works
curl "http://localhost:8983/solr/update/extract?extractOnly=true" -F
myfile=@6252671B765A1748992DF1A6403BDF81A4A22C00.msg
Regards
Mark
On 26 January 2015 at 21:47, Alexandre Rafalovitch arafa...@gmail.com
wrote:
Seems like apples to oranges
are NOT posting to the same URL.
On 26 January 2015 at 17:00, Mark javam...@gmail.com wrote:
http://localhost:8983/solr/update
Sign up for my Solr resources newsletter at http://www.solr-start.com/
bq. Is this the correct approach ?
It works, but it might not be ideal. Recent versions of ZooKeeper have an
alternate config for this max limit though, and it is preferable to use
that.
See maxSessionTimeout in
http://zookeeper.apache.org/doc/r3.3.1/zookeeperAdmin.html
- Mark
On Mon Jan 26
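(maxSessionTimeout is a server-side ceiling set in each ZooKeeper server's zoo.cfg, e.g.:

maxSessionTimeout=90000

Client-requested session timeouts, such as Solr's zkClientTimeout, are then capped at that value.)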
I'd have to do some digging. Hossman might know offhand. You might just
want to use @SuppressSSL on the tests :)
- Mark
On Mon Jan 12 2015 at 8:45:11 AM Markus Jelsma markus.jel...@openindex.io
wrote:
Hi - in a small Maven project depending on Solr 4.10.3, running unit tests
that extend
bq. ClusterState says we are the leader, but locally we don't think so
Generally this is due to some bug. One bug that can lead to it was recently
fixed in 4.10.3 I think. What version are you on?
- Mark
On Mon Jan 12 2015 at 7:35:47 AM Thomas Lamy t.l...@cytainment.de wrote:
Hi,
I found
of the issues
are not known or documented and that updates may introduce new issues. You
face some risk like that no matter what though.
- Mark
On Fri Jan 02 2015 at 8:37:47 AM Shawn Heisey apa...@elyograg.org wrote:
On 1/1/2015 6:35 PM, William Bell wrote:
But tons of people on this mailing list do
. It is possible that the mirror you
are using may not have replicated the release yet. If that is the
case, please try another mirror. This also goes for Maven access.
Happy Holidays,
Mark Miller
http://www.about.me/markrmiller